mirror of
https://github.com/bigchaindb/bigchaindb.git
synced 2024-10-13 13:34:05 +00:00
Merge branch 'master' into remove-block-parameter
This commit is contained in:
commit
ec07065784
88
CHANGELOG.md
88
CHANGELOG.md
@ -18,6 +18,94 @@ For reference, the possible headings are:
|
||||
* **Known Issues**
|
||||
* **Notes**
|
||||
|
||||
## [2.0 Beta 5] - 2018-08-01
|
||||
|
||||
Tag name: v2.0.0b5
|
||||
|
||||
### Changed
|
||||
|
||||
* Supported version of Tendermint `0.22.3` -> `0.22.8`. [Pull request #2429](https://github.com/bigchaindb/bigchaindb/pull/2429).
|
||||
|
||||
### Fixed
|
||||
|
||||
* Stateful validation raises a DoubleSpend exception if there is any other transaction that spends the same output(s) even if it has the same transaction ID. [Pull request #2422](https://github.com/bigchaindb/bigchaindb/pull/2422).
|
||||
|
||||
## [2.0 Beta 4] - 2018-07-30
|
||||
|
||||
Tag name: v2.0.0b4
|
||||
|
||||
### Added
|
||||
|
||||
- Added scripts for creating a configuration to manage processes with Monit. [Pull request #2410](https://github.com/bigchaindb/bigchaindb/pull/2410).
|
||||
|
||||
### Fixed
|
||||
|
||||
- Redundant asset and metadata queries were removed. [Pull request #2409](https://github.com/bigchaindb/bigchaindb/pull/2409).
|
||||
- Signal handling was fixed for BigchainDB processes. [Pull request #2395](https://github.com/bigchaindb/bigchaindb/pull/2395).
|
||||
- Some of the abruptly closed sockets that used to stay in memory are being cleaned up now. [Pull request 2408](https://github.com/bigchaindb/bigchaindb/pull/2408).
|
||||
- Fixed the bug when WebSockets powering Events API became unresponsive. [Pull request #2413](https://github.com/bigchaindb/bigchaindb/pull/2413).
|
||||
|
||||
### Notes:
|
||||
|
||||
* The instructions on how to write a BEP were simplified. [Pull request #2347](https://github.com/bigchaindb/bigchaindb/pull/2347).
|
||||
* A section about troubleshooting was added to the network setup guide. [Pull request #2398](https://github.com/bigchaindb/bigchaindb/pull/2398).
|
||||
* Some of the core code was given a better package structure. [Pull request #2401](https://github.com/bigchaindb/bigchaindb/pull/2401).
|
||||
* Some of the previously disabled unit tests were re-enabled and updated. Pull requests [#2404](https://github.com/bigchaindb/bigchaindb/pull/2404) and [#2402](https://github.com/bigchaindb/bigchaindb/pull/2402).
|
||||
* Some building blocks for dynamically adding new validators were introduced. [Pull request #2392](https://github.com/bigchaindb/bigchaindb/pull/2392).
|
||||
|
||||
## [2.0 Beta 3] - 2018-07-18
|
||||
|
||||
Tag name: v2.0.0b3
|
||||
|
||||
### Fixed
|
||||
|
||||
Fixed a bug in transaction validation. For some more-complex situations, it would say that a valid transaction was invalid. This bug was actually fixed before; it was [issue #1271](https://github.com/bigchaindb/bigchaindb/issues/1271). The unit test for it was turned off while we integrated Tendermint. Then the query implementation code got changed, reintroducing the bug, but the unit test was off so the bug wasn't caught. When we turned the test back on, shortly after releasing Beta 2, it failed, unveiling the bug. [Pull request #2389](https://github.com/bigchaindb/bigchaindb/pull/2389)
|
||||
|
||||
## [2.0 Beta 2] - 2018-07-16
|
||||
|
||||
Tag name: v2.0.0b2
|
||||
|
||||
### Added
|
||||
|
||||
* Added new configuration settings `tendermint.host` and `tendermint.port`. [Pull request #2342](https://github.com/bigchaindb/bigchaindb/pull/2342)
|
||||
* Added tests to ensure that BigchainDB gracefully handles "nasty" strings in keys and values. [Pull request #2334](https://github.com/bigchaindb/bigchaindb/pull/2334)
|
||||
* Added a new logging handler to capture benchmark stats to a separate file. [Pull request #2349](https://github.com/bigchaindb/bigchaindb/pull/2349)
|
||||
|
||||
### Changed
|
||||
|
||||
* Changed the names of BigchainDB processes (Python processes) to include 'bigchaindb', so they are easier to spot and find. [Pull request #2354](https://github.com/bigchaindb/bigchaindb/pull/2354)
|
||||
* Updated all code to support the latest version of Tendermint. Note that the BigchainDB ABCI server now listens to port 26657 instead of 46657. Pull requests [#2375](https://github.com/bigchaindb/bigchaindb/pull/2375) and [#2380](https://github.com/bigchaindb/bigchaindb/pull/2380)
|
||||
|
||||
### Removed
|
||||
|
||||
Removed all support and code for the old backlog_reassign_delay setting. [Pull request #2332](https://github.com/bigchaindb/bigchaindb/pull/2332)
|
||||
|
||||
### Fixed
|
||||
|
||||
* Fixed a bug that sometimes arose when using Docker Compose. (Tendermint would freeze.) [Pull request #2341](https://github.com/bigchaindb/bigchaindb/pull/2341)
|
||||
* Fixed a bug in the code that creates a MongoDB index for the "id" in the transactions collection. It works now, and performance is improved. [Pull request #2378](https://github.com/bigchaindb/bigchaindb/pull/2378)
|
||||
* The logging server would keep runnning in some tear-down scenarios. It doesn't do that any more. [Pull request #2304](https://github.com/bigchaindb/bigchaindb/pull/2304)
|
||||
|
||||
### External Contributors
|
||||
|
||||
@hrntknr - [Pull request #2331](https://github.com/bigchaindb/bigchaindb/pull/2331)
|
||||
|
||||
### Known Issues
|
||||
|
||||
The `bigchaindb upsert-validator` subcommand is not working yet, but a solution ([BEP-21](https://github.com/bigchaindb/BEPs/tree/master/21)) has been finalized and will be implemented before we release the final BigchainDB 2.0.
|
||||
|
||||
### Notes
|
||||
|
||||
* A lot of old/dead code was deleted. Pull requests
|
||||
[#2319](https://github.com/bigchaindb/bigchaindb/pull/2319),
|
||||
[#2338](https://github.com/bigchaindb/bigchaindb/pull/2338),
|
||||
[#2357](https://github.com/bigchaindb/bigchaindb/pull/2357),
|
||||
[#2365](https://github.com/bigchaindb/bigchaindb/pull/2365),
|
||||
[#2366](https://github.com/bigchaindb/bigchaindb/pull/2366),
|
||||
[#2368](https://github.com/bigchaindb/bigchaindb/pull/2368) and
|
||||
[#2374](https://github.com/bigchaindb/bigchaindb/pull/2374)
|
||||
* Improved the documentation page "How to setup a BigchainDB Network". [Pull Request #2312](https://github.com/bigchaindb/bigchaindb/pull/2312)
|
||||
|
||||
## [2.0 Beta 1] - 2018-06-01
|
||||
|
||||
Tag name: v2.0.0b1
|
||||
|
||||
@ -7,7 +7,6 @@ RUN apt-get -qq update \
|
||||
&& apt-get -y upgrade \
|
||||
&& apt-get install -y jq \
|
||||
&& pip install --no-cache-dir --process-dependency-links . \
|
||||
&& pip install --no-cache-dir . \
|
||||
&& apt-get autoremove \
|
||||
&& apt-get clean
|
||||
|
||||
|
||||
51
Dockerfile-all-in-one
Normal file
51
Dockerfile-all-in-one
Normal file
@ -0,0 +1,51 @@
|
||||
FROM alpine:latest
|
||||
LABEL maintainer "dev@bigchaindb.com"
|
||||
|
||||
ARG TM_VERSION=0.22.8
|
||||
RUN mkdir -p /usr/src/app
|
||||
ENV HOME /root
|
||||
COPY . /usr/src/app/
|
||||
WORKDIR /usr/src/app
|
||||
|
||||
RUN apk --update add sudo bash \
|
||||
&& apk --update add python3 openssl ca-certificates git \
|
||||
&& apk --update add --virtual build-dependencies python3-dev \
|
||||
libffi-dev openssl-dev build-base jq \
|
||||
&& apk add --no-cache libstdc++ dpkg gnupg \
|
||||
&& pip3 install --upgrade pip cffi \
|
||||
&& pip install --no-cache-dir --process-dependency-links -e . \
|
||||
&& apk del build-dependencies \
|
||||
&& rm -f /var/cache/apk/*
|
||||
|
||||
# Install mongodb and monit
|
||||
RUN apk --update add mongodb monit
|
||||
|
||||
# Install Tendermint
|
||||
RUN wget https://github.com/tendermint/tendermint/releases/download/v${TM_VERSION}-autodraft/tendermint_${TM_VERSION}_linux_amd64.zip \
|
||||
&& unzip tendermint_${TM_VERSION}_linux_amd64.zip \
|
||||
&& mv tendermint /usr/local/bin/ \
|
||||
&& rm tendermint_${TM_VERSION}_linux_amd64.zip
|
||||
|
||||
ENV TMHOME=/tendermint
|
||||
|
||||
# Set permissions required for mongodb
|
||||
RUN mkdir -p /data/db /data/configdb \
|
||||
&& chown -R mongodb:mongodb /data/db /data/configdb
|
||||
|
||||
# BigchainDB enviroment variables
|
||||
ENV BIGCHAINDB_DATABASE_PORT 27017
|
||||
ENV BIGCHAINDB_DATABASE_BACKEND localmongodb
|
||||
ENV BIGCHAINDB_SERVER_BIND 0.0.0.0:9984
|
||||
ENV BIGCHAINDB_WSSERVER_HOST 0.0.0.0
|
||||
ENV BIGCHAINDB_WSSERVER_SCHEME ws
|
||||
|
||||
ENV BIGCHAINDB_WSSERVER_ADVERTISED_HOST 0.0.0.0
|
||||
ENV BIGCHAINDB_WSSERVER_ADVERTISED_SCHEME ws
|
||||
ENV BIGCHAINDB_TENDERMINT_PORT 26657
|
||||
|
||||
VOLUME /data/db /data/configdb /tendermint
|
||||
|
||||
EXPOSE 27017 28017 9984 9985 26656 26657 26658
|
||||
|
||||
WORKDIR $HOME
|
||||
ENTRYPOINT ["/usr/src/app/pkg/scripts/all-in-one.bash"]
|
||||
@ -15,7 +15,7 @@ For the licenses on all other BigchainDB-related code (i.e. in other repositorie
|
||||
|
||||
## Documentation Licenses
|
||||
|
||||
The official BigchainDB documentation, _except for the short code snippets embedded within it_, is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license, the full text of which can be found at [http://creativecommons.org/licenses/by-sa/4.0/legalcode](http://creativecommons.org/licenses/by-sa/4.0/legalcode).
|
||||
The official BigchainDB documentation, _except for the short code snippets embedded within it_, is licensed under a Creative Commons Attribution 4.0 International license, the full text of which can be found at [http://creativecommons.org/licenses/by/4.0/legalcode](http://creativecommons.org/licenses/by/4.0/legalcode).
|
||||
|
||||
## Exceptions
|
||||
|
||||
|
||||
@ -51,7 +51,7 @@ The following steps are what we do to release a new version of _BigchainDB Serve
|
||||
- **Title:** Same as tag version above, e.g `v0.9.1`
|
||||
- **Description:** The body of the changelog entry (Added, Changed, etc.)
|
||||
1. Click "Publish release" to publish the release on GitHub.
|
||||
1. On your local computer, make sure you're on the `master` branch and that it's up-to-date with the `master` branch in the bigchaindb/bigchaindb repository (e.g. `git fetch upstream` and `git merge upstream/master`). We're going to use that to push a new `bigchaindb` package to PyPI.
|
||||
1. On your local computer, make sure you're on the `master` branch and that it's up-to-date with the `master` branch in the bigchaindb/bigchaindb repository (e.g. `git pull upstream master`). We're going to use that to push a new `bigchaindb` package to PyPI.
|
||||
1. Make sure you have a `~/.pypirc` file containing credentials for PyPI.
|
||||
1. Do `make release` to build and publish the new `bigchaindb` package on PyPI.
|
||||
1. [Log in to readthedocs.org](https://readthedocs.org/accounts/login/) and go to the **BigchainDB Server** project, then:
|
||||
@ -64,7 +64,11 @@ The following steps are what we do to release a new version of _BigchainDB Serve
|
||||
1. Make sure that the new version's tag is "Active" and "Public"
|
||||
1. Make sure the **stable** branch is _not_ active.
|
||||
1. Scroll to the bottom of the page and click the "Submit" button.
|
||||
1. Go to [Docker Hub](https://hub.docker.com/), sign in, go to bigchaindb/bigchaindb, and go to Settings --> Build Settings.
|
||||
1. Go to [Docker Hub](https://hub.docker.com/) and sign in, then:
|
||||
- Click on "Organizations"
|
||||
- Click on "bigchaindb"
|
||||
- Click on "bigchaindb/bigchaindb"
|
||||
- Click on "Build Settings"
|
||||
- Find the row where "Docker Tag Name" equals `latest`
|
||||
and change the value of "Name" to the name (Git tag)
|
||||
of the new release, e.g. `v0.9.0`.
|
||||
@ -73,5 +77,11 @@ The following steps are what we do to release a new version of _BigchainDB Serve
|
||||
You can do that by clicking the green "+" (plus) icon.
|
||||
The contents of the new row should be similar to the existing rows
|
||||
of previous releases like that.
|
||||
- Click on "Tags"
|
||||
- Delete the "latest" tag (so we can rebuild it)
|
||||
- Click on "Build Settings" again
|
||||
- Click on the "Trigger" button for the "latest" tag and make sure it worked by clicking on "Tags" again
|
||||
- If the release is an Alpha, Beta or Release Candidate release,
|
||||
then click on the "Trigger" button for that tag as well.
|
||||
|
||||
Congratulations, you have released a new version of BigchainDB Server!
|
||||
|
||||
@ -4,7 +4,7 @@ A high-level description of the files and subdirectories of BigchainDB.
|
||||
|
||||
## Files
|
||||
|
||||
### [`tendermint/lib.py`](./tendermint/lib.py)
|
||||
### [`lib.py`](lib.py)
|
||||
|
||||
The `BigchainDB` class is defined here. Most node-level operations and database interactions are found in this file. This is the place to start if you are interested in implementing a server API, since many of these class methods concern BigchainDB interacting with the outside world.
|
||||
|
||||
|
||||
@ -2,6 +2,9 @@ import copy
|
||||
import logging
|
||||
|
||||
from bigchaindb.log import DEFAULT_LOGGING_CONFIG as log_config
|
||||
from bigchaindb.lib import BigchainDB # noqa
|
||||
from bigchaindb.version import __version__ # noqa
|
||||
from bigchaindb.core import App # noqa
|
||||
|
||||
# from functools import reduce
|
||||
# PORT_NUMBER = reduce(lambda x, y: x * y, map(ord, 'BigchainDB')) % 2**16
|
||||
@ -84,5 +87,12 @@ config = {
|
||||
# the user wants to reconfigure the node. Check ``bigchaindb.config_utils``
|
||||
# for more info.
|
||||
_config = copy.deepcopy(config)
|
||||
from bigchaindb.tendermint import BigchainDB # noqa
|
||||
from bigchaindb.version import __version__ # noqa
|
||||
from bigchaindb.common.transaction import Transaction # noqa
|
||||
from bigchaindb import models # noqa
|
||||
from bigchaindb.upsert_validator import ValidatorElection # noqa
|
||||
from bigchaindb.upsert_validator import ValidatorElectionVote # noqa
|
||||
|
||||
Transaction.register_type(Transaction.CREATE, models.Transaction)
|
||||
Transaction.register_type(Transaction.TRANSFER, models.Transaction)
|
||||
Transaction.register_type(ValidatorElection.VALIDATOR_ELECTION, ValidatorElection)
|
||||
Transaction.register_type(ValidatorElectionVote.VALIDATOR_ELECTION_VOTE, ValidatorElectionVote)
|
||||
|
||||
@ -8,7 +8,6 @@ from bigchaindb.common.exceptions import MultipleValidatorOperationError
|
||||
from bigchaindb.backend.utils import module_dispatch_registrar
|
||||
from bigchaindb.backend.localmongodb.connection import LocalMongoDBConnection
|
||||
from bigchaindb.common.transaction import Transaction
|
||||
from bigchaindb.backend.query import VALIDATOR_UPDATE_ID
|
||||
|
||||
register_query = module_dispatch_registrar(backend.query)
|
||||
|
||||
@ -99,11 +98,13 @@ def get_assets(conn, asset_ids):
|
||||
|
||||
@register_query(LocalMongoDBConnection)
|
||||
def get_spent(conn, transaction_id, output):
|
||||
query = {'inputs.fulfills': {
|
||||
'transaction_id': transaction_id,
|
||||
'output_index': output}}
|
||||
|
||||
return conn.run(
|
||||
conn.collection('transactions')
|
||||
.find({'inputs.fulfills.transaction_id': transaction_id,
|
||||
'inputs.fulfills.output_index': output},
|
||||
{'_id': 0}))
|
||||
.find(query, {'_id': 0}))
|
||||
|
||||
|
||||
@register_query(LocalMongoDBConnection)
|
||||
@ -277,7 +278,7 @@ def get_pre_commit_state(conn, commit_id):
|
||||
|
||||
|
||||
@register_query(LocalMongoDBConnection)
|
||||
def store_validator_update(conn, validator_update):
|
||||
def store_validator_set(conn, validator_update):
|
||||
try:
|
||||
return conn.run(
|
||||
conn.collection('validators')
|
||||
@ -287,15 +288,16 @@ def store_validator_update(conn, validator_update):
|
||||
|
||||
|
||||
@register_query(LocalMongoDBConnection)
|
||||
def get_validator_update(conn, update_id=VALIDATOR_UPDATE_ID):
|
||||
return conn.run(
|
||||
conn.collection('validators')
|
||||
.find_one({'update_id': update_id}, projection={'_id': False}))
|
||||
def get_validator_set(conn, height=None):
|
||||
query = {}
|
||||
if height is not None:
|
||||
query = {'height': {'$lte': height}}
|
||||
|
||||
|
||||
@register_query(LocalMongoDBConnection)
|
||||
def delete_validator_update(conn, update_id=VALIDATOR_UPDATE_ID):
|
||||
return conn.run(
|
||||
cursor = conn.run(
|
||||
conn.collection('validators')
|
||||
.delete_one({'update_id': update_id})
|
||||
.find(query, projection={'_id': False})
|
||||
.sort([('height', DESCENDING)])
|
||||
.limit(1)
|
||||
)
|
||||
|
||||
return list(cursor)[0]
|
||||
|
||||
@ -126,6 +126,6 @@ def create_pre_commit_secondary_index(conn, dbname):
|
||||
def create_validators_secondary_index(conn, dbname):
|
||||
logger.info('Create `validators` secondary index.')
|
||||
|
||||
conn.conn[dbname]['validators'].create_index('update_id',
|
||||
name='update_id',
|
||||
conn.conn[dbname]['validators'].create_index('height',
|
||||
name='height',
|
||||
unique=True,)
|
||||
|
||||
@ -340,13 +340,6 @@ def store_pre_commit_state(connection, commit_id, state):
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
@singledispatch
|
||||
def store_validator_update(conn, validator_update):
|
||||
"""Store a update for the validator set"""
|
||||
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
@singledispatch
|
||||
def get_pre_commit_state(connection, commit_id):
|
||||
"""Get pre-commit state where `id` is `commit_id`.
|
||||
@ -362,14 +355,15 @@ def get_pre_commit_state(connection, commit_id):
|
||||
|
||||
|
||||
@singledispatch
|
||||
def get_validator_update(conn):
|
||||
"""Get validator updates which are not synced"""
|
||||
def store_validator_set(conn, validator_update):
|
||||
"""Store updated validator set"""
|
||||
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
@singledispatch
|
||||
def delete_validator_update(conn, id):
|
||||
"""Set the sync status for validator update documents"""
|
||||
def get_validator_set(conn, height):
|
||||
"""Get validator set for a given `height`, if `height` is not specified
|
||||
then return the latest validator set"""
|
||||
|
||||
raise NotImplementedError
|
||||
|
||||
@ -21,7 +21,7 @@ from bigchaindb.commands import utils
|
||||
from bigchaindb.commands.utils import (configure_bigchaindb,
|
||||
input_on_stderr)
|
||||
from bigchaindb.log import setup_logging
|
||||
from bigchaindb.tendermint.utils import public_key_from_base64
|
||||
from bigchaindb.tendermint_utils import public_key_from_base64
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
@ -97,7 +97,7 @@ def run_configure(args):
|
||||
def run_upsert_validator(args):
|
||||
"""Store validators which should be synced with Tendermint"""
|
||||
|
||||
b = bigchaindb.tendermint.BigchainDB()
|
||||
b = bigchaindb.BigchainDB()
|
||||
public_key = public_key_from_base64(args.public_key)
|
||||
validator = {'pub_key': {'type': 'ed25519',
|
||||
'data': public_key},
|
||||
@ -113,7 +113,7 @@ def run_upsert_validator(args):
|
||||
|
||||
|
||||
def _run_init():
|
||||
bdb = bigchaindb.tendermint.BigchainDB()
|
||||
bdb = bigchaindb.BigchainDB()
|
||||
|
||||
schema.init_database(connection=bdb.connection)
|
||||
|
||||
@ -170,7 +170,7 @@ def run_start(args):
|
||||
setup_logging()
|
||||
|
||||
logger.info('BigchainDB Version %s', bigchaindb.__version__)
|
||||
run_recover(bigchaindb.tendermint.lib.BigchainDB())
|
||||
run_recover(bigchaindb.lib.BigchainDB())
|
||||
|
||||
try:
|
||||
if not args.skip_initialize_database:
|
||||
@ -180,7 +180,7 @@ def run_start(args):
|
||||
pass
|
||||
|
||||
logger.info('Starting BigchainDB main process.')
|
||||
from bigchaindb.tendermint.commands import start
|
||||
from bigchaindb.start import start
|
||||
start()
|
||||
|
||||
|
||||
|
||||
@ -30,3 +30,17 @@ def generate_key_pair():
|
||||
|
||||
PrivateKey = crypto.Ed25519SigningKey
|
||||
PublicKey = crypto.Ed25519VerifyingKey
|
||||
|
||||
|
||||
def key_pair_from_ed25519_key(hex_private_key):
|
||||
"""Generate base58 encode public-private key pair from a hex encoded private key"""
|
||||
priv_key = crypto.Ed25519SigningKey(bytes.fromhex(hex_private_key)[:32], encoding='bytes')
|
||||
public_key = priv_key.get_verifying_key()
|
||||
return CryptoKeypair(private_key=priv_key.encode(encoding='base58').decode('utf-8'),
|
||||
public_key=public_key.encode(encoding='base58').decode('utf-8'))
|
||||
|
||||
|
||||
def public_key_from_ed25519_key(hex_public_key):
|
||||
"""Generate base58 public key from hex encoded public key"""
|
||||
public_key = crypto.Ed25519VerifyingKey(bytes.fromhex(hex_public_key), encoding='bytes')
|
||||
return public_key.encode(encoding='base58').decode('utf-8')
|
||||
|
||||
@ -102,3 +102,19 @@ class GenesisBlockAlreadyExistsError(ValidationError):
|
||||
|
||||
class MultipleValidatorOperationError(ValidationError):
|
||||
"""Raised when a validator update pending but new request is submited"""
|
||||
|
||||
|
||||
class MultipleInputsError(ValidationError):
|
||||
"""Raised if there were multiple inputs when only one was expected"""
|
||||
|
||||
|
||||
class InvalidProposer(ValidationError):
|
||||
"""Raised if the public key is not a part of the validator set"""
|
||||
|
||||
|
||||
class UnequalValidatorSet(ValidationError):
|
||||
"""Raised if the validator sets differ"""
|
||||
|
||||
|
||||
class InvalidPowerChange(ValidationError):
|
||||
"""Raised if proposed power change in validator set is >=1/3 total power"""
|
||||
|
||||
@ -13,9 +13,9 @@ from bigchaindb.common.exceptions import SchemaValidationError
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _load_schema(name):
|
||||
def _load_schema(name, path=__file__):
|
||||
"""Load a schema from disk"""
|
||||
path = os.path.join(os.path.dirname(__file__), name + '.yaml')
|
||||
path = os.path.join(os.path.dirname(path), name + '.yaml')
|
||||
with open(path) as handle:
|
||||
schema = yaml.safe_load(handle)
|
||||
fast_schema = rapidjson_schema.loads(rapidjson.dumps(schema))
|
||||
@ -31,6 +31,12 @@ _, TX_SCHEMA_CREATE = _load_schema('transaction_create_' +
|
||||
_, TX_SCHEMA_TRANSFER = _load_schema('transaction_transfer_' +
|
||||
TX_SCHEMA_VERSION)
|
||||
|
||||
_, TX_SCHEMA_VALIDATOR_ELECTION = _load_schema('transaction_validator_election_' +
|
||||
TX_SCHEMA_VERSION)
|
||||
|
||||
_, TX_SCHEMA_VALIDATOR_ELECTION_VOTE = _load_schema('transaction_validator_election_vote_' +
|
||||
TX_SCHEMA_VERSION)
|
||||
|
||||
|
||||
def _validate_schema(schema, body):
|
||||
"""Validate data against a schema"""
|
||||
|
||||
@ -58,6 +58,8 @@ definitions:
|
||||
enum:
|
||||
- CREATE
|
||||
- TRANSFER
|
||||
- VALIDATOR_ELECTION
|
||||
- VALIDATOR_ELECTION_VOTE
|
||||
asset:
|
||||
type: object
|
||||
additionalProperties: false
|
||||
|
||||
@ -0,0 +1,48 @@
|
||||
---
|
||||
"$schema": "http://json-schema.org/draft-04/schema#"
|
||||
type: object
|
||||
title: Validator Election Schema - Propose a change to validator set
|
||||
required:
|
||||
- operation
|
||||
- asset
|
||||
- outputs
|
||||
properties:
|
||||
operation:
|
||||
type: string
|
||||
value: "VALIDATOR_ELECTION"
|
||||
asset:
|
||||
additionalProperties: false
|
||||
properties:
|
||||
data:
|
||||
additionalProperties: false
|
||||
properties:
|
||||
node_id:
|
||||
type: string
|
||||
public_key:
|
||||
type: string
|
||||
power:
|
||||
"$ref": "#/definitions/positiveInteger"
|
||||
required:
|
||||
- node_id
|
||||
- public_key
|
||||
- power
|
||||
required:
|
||||
- data
|
||||
outputs:
|
||||
type: array
|
||||
items:
|
||||
"$ref": "#/definitions/output"
|
||||
definitions:
|
||||
output:
|
||||
type: object
|
||||
properties:
|
||||
condition:
|
||||
type: object
|
||||
required:
|
||||
- uri
|
||||
properties:
|
||||
uri:
|
||||
type: string
|
||||
pattern: "^ni:///sha-256;([a-zA-Z0-9_-]{0,86})[?]\
|
||||
(fpt=ed25519-sha-256(&)?|cost=[0-9]+(&)?|\
|
||||
subtypes=ed25519-sha-256(&)?){2,3}$"
|
||||
@ -0,0 +1,27 @@
|
||||
---
|
||||
"$schema": "http://json-schema.org/draft-04/schema#"
|
||||
type: object
|
||||
title: Validator Election Vote Schema - Vote on a validator set change
|
||||
required:
|
||||
- operation
|
||||
- outputs
|
||||
properties:
|
||||
operation: "VALIDATOR_ELECTION_VOTE"
|
||||
outputs:
|
||||
type: array
|
||||
items:
|
||||
"$ref": "#/definitions/output"
|
||||
definitions:
|
||||
output:
|
||||
type: object
|
||||
properties:
|
||||
condition:
|
||||
type: object
|
||||
required:
|
||||
- uri
|
||||
properties:
|
||||
uri:
|
||||
type: string
|
||||
pattern: "^ni:///sha-256;([a-zA-Z0-9_-]{0,86})[?]\
|
||||
(fpt=ed25519-sha-256(&)?|cost=[0-9]+(&)?|\
|
||||
subtypes=ed25519-sha-256(&)?){2,3}$"
|
||||
@ -18,6 +18,7 @@ from sha3 import sha3_256
|
||||
|
||||
from bigchaindb.common.crypto import PrivateKey, hash_data
|
||||
from bigchaindb.common.exceptions import (KeypairMismatchException,
|
||||
InputDoesNotExist, DoubleSpend,
|
||||
InvalidHash, InvalidSignature,
|
||||
AmountError, AssetIdMismatch,
|
||||
ThresholdTooDeep)
|
||||
@ -515,7 +516,7 @@ class Transaction(object):
|
||||
version (string): Defines the version number of a Transaction.
|
||||
hash_id (string): Hash id of the transaction.
|
||||
"""
|
||||
if operation not in Transaction.ALLOWED_OPERATIONS:
|
||||
if operation not in self.ALLOWED_OPERATIONS:
|
||||
allowed_ops = ', '.join(self.__class__.ALLOWED_OPERATIONS)
|
||||
raise ValueError('`operation` must be one of {}'
|
||||
.format(allowed_ops))
|
||||
@ -523,11 +524,11 @@ class Transaction(object):
|
||||
# Asset payloads for 'CREATE' operations must be None or
|
||||
# dicts holding a `data` property. Asset payloads for 'TRANSFER'
|
||||
# operations must be dicts holding an `id` property.
|
||||
if (operation == Transaction.CREATE and
|
||||
if (operation == self.CREATE and
|
||||
asset is not None and not (isinstance(asset, dict) and 'data' in asset)):
|
||||
raise TypeError(('`asset` must be None or a dict holding a `data` '
|
||||
" property instance for '{}' Transactions".format(operation)))
|
||||
elif (operation == Transaction.TRANSFER and
|
||||
elif (operation == self.TRANSFER and
|
||||
not (isinstance(asset, dict) and 'id' in asset)):
|
||||
raise TypeError(('`asset` must be a dict holding an `id` property '
|
||||
"for 'TRANSFER' Transactions".format(operation)))
|
||||
@ -555,9 +556,9 @@ class Transaction(object):
|
||||
structure containing relevant information for storing them in
|
||||
a UTXO set, and performing validation.
|
||||
"""
|
||||
if self.operation == Transaction.CREATE:
|
||||
if self.operation == self.CREATE:
|
||||
self._asset_id = self._id
|
||||
elif self.operation == Transaction.TRANSFER:
|
||||
elif self.operation == self.TRANSFER:
|
||||
self._asset_id = self.asset['id']
|
||||
return (UnspentOutput(
|
||||
transaction_id=self._id,
|
||||
@ -585,6 +586,38 @@ class Transaction(object):
|
||||
def _hash(self):
|
||||
self._id = hash_data(self.serialized)
|
||||
|
||||
@classmethod
|
||||
def validate_create(cls, tx_signers, recipients, asset, metadata):
|
||||
if not isinstance(tx_signers, list):
|
||||
raise TypeError('`tx_signers` must be a list instance')
|
||||
if not isinstance(recipients, list):
|
||||
raise TypeError('`recipients` must be a list instance')
|
||||
if len(tx_signers) == 0:
|
||||
raise ValueError('`tx_signers` list cannot be empty')
|
||||
if len(recipients) == 0:
|
||||
raise ValueError('`recipients` list cannot be empty')
|
||||
if not (asset is None or isinstance(asset, dict)):
|
||||
raise TypeError('`asset` must be a dict or None')
|
||||
if not (metadata is None or isinstance(metadata, dict)):
|
||||
raise TypeError('`metadata` must be a dict or None')
|
||||
|
||||
inputs = []
|
||||
outputs = []
|
||||
|
||||
# generate_outputs
|
||||
for recipient in recipients:
|
||||
if not isinstance(recipient, tuple) or len(recipient) != 2:
|
||||
raise ValueError(('Each `recipient` in the list must be a'
|
||||
' tuple of `([<list of public keys>],'
|
||||
' <amount>)`'))
|
||||
pub_keys, amount = recipient
|
||||
outputs.append(Output.generate(pub_keys, amount))
|
||||
|
||||
# generate inputs
|
||||
inputs.append(Input.generate(tx_signers))
|
||||
|
||||
return (inputs, outputs)
|
||||
|
||||
@classmethod
|
||||
def create(cls, tx_signers, recipients, metadata=None, asset=None):
|
||||
"""A simple way to generate a `CREATE` transaction.
|
||||
@ -613,21 +646,22 @@ class Transaction(object):
|
||||
Returns:
|
||||
:class:`~bigchaindb.common.transaction.Transaction`
|
||||
"""
|
||||
if not isinstance(tx_signers, list):
|
||||
raise TypeError('`tx_signers` must be a list instance')
|
||||
|
||||
(inputs, outputs) = cls.validate_create(tx_signers, recipients, asset, metadata)
|
||||
return cls(cls.CREATE, {'data': asset}, inputs, outputs, metadata)
|
||||
|
||||
@classmethod
|
||||
def validate_transfer(cls, inputs, recipients, asset_id, metadata):
|
||||
if not isinstance(inputs, list):
|
||||
raise TypeError('`inputs` must be a list instance')
|
||||
if len(inputs) == 0:
|
||||
raise ValueError('`inputs` must contain at least one item')
|
||||
if not isinstance(recipients, list):
|
||||
raise TypeError('`recipients` must be a list instance')
|
||||
if len(tx_signers) == 0:
|
||||
raise ValueError('`tx_signers` list cannot be empty')
|
||||
if len(recipients) == 0:
|
||||
raise ValueError('`recipients` list cannot be empty')
|
||||
if not (asset is None or isinstance(asset, dict)):
|
||||
raise TypeError('`asset` must be a dict or None')
|
||||
|
||||
inputs = []
|
||||
outputs = []
|
||||
|
||||
# generate_outputs
|
||||
for recipient in recipients:
|
||||
if not isinstance(recipient, tuple) or len(recipient) != 2:
|
||||
raise ValueError(('Each `recipient` in the list must be a'
|
||||
@ -636,10 +670,10 @@ class Transaction(object):
|
||||
pub_keys, amount = recipient
|
||||
outputs.append(Output.generate(pub_keys, amount))
|
||||
|
||||
# generate inputs
|
||||
inputs.append(Input.generate(tx_signers))
|
||||
if not isinstance(asset_id, str):
|
||||
raise TypeError('`asset_id` must be a string')
|
||||
|
||||
return cls(cls.CREATE, {'data': asset}, inputs, outputs, metadata)
|
||||
return (deepcopy(inputs), outputs)
|
||||
|
||||
@classmethod
|
||||
def transfer(cls, inputs, recipients, asset_id, metadata=None):
|
||||
@ -680,28 +714,7 @@ class Transaction(object):
|
||||
Returns:
|
||||
:class:`~bigchaindb.common.transaction.Transaction`
|
||||
"""
|
||||
if not isinstance(inputs, list):
|
||||
raise TypeError('`inputs` must be a list instance')
|
||||
if len(inputs) == 0:
|
||||
raise ValueError('`inputs` must contain at least one item')
|
||||
if not isinstance(recipients, list):
|
||||
raise TypeError('`recipients` must be a list instance')
|
||||
if len(recipients) == 0:
|
||||
raise ValueError('`recipients` list cannot be empty')
|
||||
|
||||
outputs = []
|
||||
for recipient in recipients:
|
||||
if not isinstance(recipient, tuple) or len(recipient) != 2:
|
||||
raise ValueError(('Each `recipient` in the list must be a'
|
||||
' tuple of `([<list of public keys>],'
|
||||
' <amount>)`'))
|
||||
pub_keys, amount = recipient
|
||||
outputs.append(Output.generate(pub_keys, amount))
|
||||
|
||||
if not isinstance(asset_id, str):
|
||||
raise TypeError('`asset_id` must be a string')
|
||||
|
||||
inputs = deepcopy(inputs)
|
||||
(inputs, outputs) = cls.validate_transfer(inputs, recipients, asset_id, metadata)
|
||||
return cls(cls.TRANSFER, {'id': asset_id}, inputs, outputs, metadata)
|
||||
|
||||
def __eq__(self, other):
|
||||
@ -939,14 +952,14 @@ class Transaction(object):
|
||||
Returns:
|
||||
bool: If all Inputs are valid.
|
||||
"""
|
||||
if self.operation == Transaction.CREATE:
|
||||
if self.operation == self.CREATE:
|
||||
# NOTE: Since in the case of a `CREATE`-transaction we do not have
|
||||
# to check for outputs, we're just submitting dummy
|
||||
# values to the actual method. This simplifies it's logic
|
||||
# greatly, as we do not have to check against `None` values.
|
||||
return self._inputs_valid(['dummyvalue'
|
||||
for _ in self.inputs])
|
||||
elif self.operation == Transaction.TRANSFER:
|
||||
elif self.operation == self.TRANSFER:
|
||||
return self._inputs_valid([output.fulfillment.condition_uri
|
||||
for output in outputs])
|
||||
else:
|
||||
@ -986,8 +999,7 @@ class Transaction(object):
|
||||
return all(validate(i, cond)
|
||||
for i, cond in enumerate(output_condition_uris))
|
||||
|
||||
@staticmethod
|
||||
def _input_valid(input_, operation, message, output_condition_uri=None):
|
||||
def _input_valid(self, input_, operation, message, output_condition_uri=None):
|
||||
"""Validates a single Input against a single Output.
|
||||
|
||||
Note:
|
||||
@ -1012,7 +1024,7 @@ class Transaction(object):
|
||||
ParsingError, ASN1DecodeError, ASN1EncodeError):
|
||||
return False
|
||||
|
||||
if operation == Transaction.CREATE:
|
||||
if operation == self.CREATE:
|
||||
# NOTE: In the case of a `CREATE` transaction, the
|
||||
# output is always valid.
|
||||
output_valid = True
|
||||
@ -1091,8 +1103,8 @@ class Transaction(object):
|
||||
tx = Transaction._remove_signatures(self.to_dict())
|
||||
return Transaction._to_str(tx)
|
||||
|
||||
@staticmethod
|
||||
def get_asset_id(transactions):
|
||||
@classmethod
|
||||
def get_asset_id(cls, transactions):
|
||||
"""Get the asset id from a list of :class:`~.Transactions`.
|
||||
|
||||
This is useful when we want to check if the multiple inputs of a
|
||||
@ -1116,7 +1128,7 @@ class Transaction(object):
|
||||
transactions = [transactions]
|
||||
|
||||
# create a set of the transactions' asset ids
|
||||
asset_ids = {tx.id if tx.operation == Transaction.CREATE
|
||||
asset_ids = {tx.id if tx.operation == tx.CREATE
|
||||
else tx.asset['id']
|
||||
for tx in transactions}
|
||||
|
||||
@ -1151,7 +1163,7 @@ class Transaction(object):
|
||||
raise InvalidHash(err_msg.format(proposed_tx_id))
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, tx):
|
||||
def from_dict(cls, tx, skip_schema_validation=True):
|
||||
"""Transforms a Python dictionary to a Transaction object.
|
||||
|
||||
Args:
|
||||
@ -1160,7 +1172,131 @@ class Transaction(object):
|
||||
Returns:
|
||||
:class:`~bigchaindb.common.transaction.Transaction`
|
||||
"""
|
||||
operation = tx.get('operation', Transaction.CREATE) if isinstance(tx, dict) else Transaction.CREATE
|
||||
cls = Transaction.resolve_class(operation)
|
||||
if not skip_schema_validation:
|
||||
cls.validate_schema(tx)
|
||||
|
||||
inputs = [Input.from_dict(input_) for input_ in tx['inputs']]
|
||||
outputs = [Output.from_dict(output) for output in tx['outputs']]
|
||||
return cls(tx['operation'], tx['asset'], inputs, outputs,
|
||||
tx['metadata'], tx['version'], hash_id=tx['id'])
|
||||
|
||||
@classmethod
|
||||
def from_db(cls, bigchain, tx_dict_list):
|
||||
"""Helper method that reconstructs a transaction dict that was returned
|
||||
from the database. It checks what asset_id to retrieve, retrieves the
|
||||
asset from the asset table and reconstructs the transaction.
|
||||
|
||||
Args:
|
||||
bigchain (:class:`~bigchaindb.tendermint.BigchainDB`): An instance
|
||||
of BigchainDB used to perform database queries.
|
||||
tx_dict_list (:list:`dict` or :obj:`dict`): The transaction dict or
|
||||
list of transaction dict as returned from the database.
|
||||
|
||||
Returns:
|
||||
:class:`~Transaction`
|
||||
|
||||
"""
|
||||
return_list = True
|
||||
if isinstance(tx_dict_list, dict):
|
||||
tx_dict_list = [tx_dict_list]
|
||||
return_list = False
|
||||
|
||||
tx_map = {}
|
||||
tx_ids = []
|
||||
for tx in tx_dict_list:
|
||||
tx.update({'metadata': None})
|
||||
tx_map[tx['id']] = tx
|
||||
tx_ids.append(tx['id'])
|
||||
|
||||
assets = list(bigchain.get_assets(tx_ids))
|
||||
for asset in assets:
|
||||
if asset is not None:
|
||||
tx = tx_map[asset['id']]
|
||||
del asset['id']
|
||||
tx['asset'] = asset
|
||||
|
||||
tx_ids = list(tx_map.keys())
|
||||
metadata_list = list(bigchain.get_metadata(tx_ids))
|
||||
for metadata in metadata_list:
|
||||
tx = tx_map[metadata['id']]
|
||||
tx.update({'metadata': metadata.get('metadata')})
|
||||
|
||||
if return_list:
|
||||
tx_list = []
|
||||
for tx_id, tx in tx_map.items():
|
||||
tx_list.append(cls.from_dict(tx))
|
||||
return tx_list
|
||||
else:
|
||||
tx = list(tx_map.values())[0]
|
||||
return cls.from_dict(tx)
|
||||
|
||||
type_registry = {}
|
||||
|
||||
@staticmethod
|
||||
def register_type(tx_type, tx_class):
|
||||
Transaction.type_registry[tx_type] = tx_class
|
||||
|
||||
def resolve_class(operation):
|
||||
"""For the given `tx` based on the `operation` key return its implementation class"""
|
||||
|
||||
create_txn_class = Transaction.type_registry.get(Transaction.CREATE)
|
||||
return Transaction.type_registry.get(operation, create_txn_class)
|
||||
|
||||
@classmethod
|
||||
def validate_schema(cls, tx):
|
||||
pass
|
||||
|
||||
def validate_transfer_inputs(self, bigchain, current_transactions=[]):
|
||||
# store the inputs so that we can check if the asset ids match
|
||||
input_txs = []
|
||||
input_conditions = []
|
||||
for input_ in self.inputs:
|
||||
input_txid = input_.fulfills.txid
|
||||
input_tx = bigchain.get_transaction(input_txid)
|
||||
|
||||
if input_tx is None:
|
||||
for ctxn in current_transactions:
|
||||
if ctxn.id == input_txid:
|
||||
input_tx = ctxn
|
||||
|
||||
if input_tx is None:
|
||||
raise InputDoesNotExist("input `{}` doesn't exist"
|
||||
.format(input_txid))
|
||||
|
||||
spent = bigchain.get_spent(input_txid, input_.fulfills.output,
|
||||
current_transactions)
|
||||
if spent:
|
||||
raise DoubleSpend('input `{}` was already spent'
|
||||
.format(input_txid))
|
||||
|
||||
output = input_tx.outputs[input_.fulfills.output]
|
||||
input_conditions.append(output)
|
||||
input_txs.append(input_tx)
|
||||
|
||||
# Validate that all inputs are distinct
|
||||
links = [i.fulfills.to_uri() for i in self.inputs]
|
||||
if len(links) != len(set(links)):
|
||||
raise DoubleSpend('tx "{}" spends inputs twice'.format(self.id))
|
||||
|
||||
# validate asset id
|
||||
asset_id = self.get_asset_id(input_txs)
|
||||
if asset_id != self.asset['id']:
|
||||
raise AssetIdMismatch(('The asset id of the input does not'
|
||||
' match the asset id of the'
|
||||
' transaction'))
|
||||
|
||||
input_amount = sum([input_condition.amount for input_condition in input_conditions])
|
||||
output_amount = sum([output_condition.amount for output_condition in self.outputs])
|
||||
|
||||
if output_amount != input_amount:
|
||||
raise AmountError(('The amount used in the inputs `{}`'
|
||||
' needs to be same as the amount used'
|
||||
' in the outputs `{}`')
|
||||
.format(input_amount, output_amount))
|
||||
|
||||
if not self.inputs_valid(input_conditions):
|
||||
raise InvalidSignature('Transaction signature is invalid.')
|
||||
|
||||
return True
|
||||
|
||||
@ -108,7 +108,7 @@ def file_config(filename=None):
|
||||
'Failed to parse the JSON configuration from `{}`, {}'.format(filename, err)
|
||||
)
|
||||
|
||||
logger.info('Configuration loaded from `{}`'.format(filename))
|
||||
logger.info('Configuration loaded from `{}`'.format(filename))
|
||||
|
||||
return config
|
||||
|
||||
|
||||
@ -0,0 +1,188 @@
|
||||
"""This module contains all the goodness to integrate BigchainDB
|
||||
with Tendermint."""
|
||||
import logging
|
||||
import codecs
|
||||
|
||||
from abci.application import BaseApplication
|
||||
from abci.types_pb2 import (
|
||||
ResponseInitChain,
|
||||
ResponseInfo,
|
||||
ResponseCheckTx,
|
||||
ResponseBeginBlock,
|
||||
ResponseDeliverTx,
|
||||
ResponseEndBlock,
|
||||
ResponseCommit,
|
||||
Validator,
|
||||
PubKey
|
||||
)
|
||||
|
||||
from bigchaindb import BigchainDB
|
||||
from bigchaindb.tendermint_utils import (decode_transaction,
|
||||
calculate_hash)
|
||||
from bigchaindb.lib import Block, PreCommitState
|
||||
from bigchaindb.backend.query import PRE_COMMIT_ID
|
||||
|
||||
|
||||
CodeTypeOk = 0
|
||||
CodeTypeError = 1
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class App(BaseApplication):
|
||||
"""Bridge between BigchainDB and Tendermint.
|
||||
|
||||
The role of this class is to expose the BigchainDB
|
||||
transactional logic to the Tendermint Consensus
|
||||
State Machine."""
|
||||
|
||||
def __init__(self, bigchaindb=None):
|
||||
self.bigchaindb = bigchaindb or BigchainDB()
|
||||
self.block_txn_ids = []
|
||||
self.block_txn_hash = ''
|
||||
self.block_transactions = []
|
||||
self.validators = None
|
||||
self.new_height = None
|
||||
|
||||
def init_chain(self, genesis):
|
||||
"""Initialize chain with block of height 0"""
|
||||
|
||||
validator_set = [decode_validator(v) for v in genesis.validators]
|
||||
block = Block(app_hash='', height=0, transactions=[])
|
||||
self.bigchaindb.store_block(block._asdict())
|
||||
self.bigchaindb.store_validator_set(1, validator_set)
|
||||
return ResponseInitChain()
|
||||
|
||||
def info(self, request):
|
||||
"""Return height of the latest committed block."""
|
||||
r = ResponseInfo()
|
||||
block = self.bigchaindb.get_latest_block()
|
||||
if block:
|
||||
r.last_block_height = block['height']
|
||||
r.last_block_app_hash = block['app_hash'].encode('utf-8')
|
||||
else:
|
||||
r.last_block_height = 0
|
||||
r.last_block_app_hash = b''
|
||||
return r
|
||||
|
||||
def check_tx(self, raw_transaction):
|
||||
"""Validate the transaction before entry into
|
||||
the mempool.
|
||||
|
||||
Args:
|
||||
raw_tx: a raw string (in bytes) transaction."""
|
||||
|
||||
logger.benchmark('CHECK_TX_INIT')
|
||||
logger.debug('check_tx: %s', raw_transaction)
|
||||
transaction = decode_transaction(raw_transaction)
|
||||
if self.bigchaindb.is_valid_transaction(transaction):
|
||||
logger.debug('check_tx: VALID')
|
||||
logger.benchmark('CHECK_TX_END, tx_id:%s', transaction['id'])
|
||||
return ResponseCheckTx(code=CodeTypeOk)
|
||||
else:
|
||||
logger.debug('check_tx: INVALID')
|
||||
logger.benchmark('CHECK_TX_END, tx_id:%s', transaction['id'])
|
||||
return ResponseCheckTx(code=CodeTypeError)
|
||||
|
||||
def begin_block(self, req_begin_block):
|
||||
"""Initialize list of transaction.
|
||||
Args:
|
||||
req_begin_block: block object which contains block header
|
||||
and block hash.
|
||||
"""
|
||||
logger.benchmark('BEGIN BLOCK, height:%s, num_txs:%s',
|
||||
req_begin_block.header.height,
|
||||
req_begin_block.header.num_txs)
|
||||
|
||||
self.block_txn_ids = []
|
||||
self.block_transactions = []
|
||||
return ResponseBeginBlock()
|
||||
|
||||
def deliver_tx(self, raw_transaction):
|
||||
"""Validate the transaction before mutating the state.
|
||||
|
||||
Args:
|
||||
raw_tx: a raw string (in bytes) transaction."""
|
||||
logger.debug('deliver_tx: %s', raw_transaction)
|
||||
transaction = self.bigchaindb.is_valid_transaction(
|
||||
decode_transaction(raw_transaction), self.block_transactions)
|
||||
|
||||
if not transaction:
|
||||
logger.debug('deliver_tx: INVALID')
|
||||
return ResponseDeliverTx(code=CodeTypeError)
|
||||
else:
|
||||
logger.debug('storing tx')
|
||||
self.block_txn_ids.append(transaction.id)
|
||||
self.block_transactions.append(transaction)
|
||||
return ResponseDeliverTx(code=CodeTypeOk)
|
||||
|
||||
def end_block(self, request_end_block):
|
||||
"""Calculate block hash using transaction ids and previous block
|
||||
hash to be stored in the next block.
|
||||
|
||||
Args:
|
||||
height (int): new height of the chain."""
|
||||
|
||||
height = request_end_block.height
|
||||
self.new_height = height
|
||||
block_txn_hash = calculate_hash(self.block_txn_ids)
|
||||
block = self.bigchaindb.get_latest_block()
|
||||
|
||||
if self.block_txn_ids:
|
||||
self.block_txn_hash = calculate_hash([block['app_hash'], block_txn_hash])
|
||||
else:
|
||||
self.block_txn_hash = block['app_hash']
|
||||
|
||||
# TODO: calculate if an election has concluded
|
||||
# NOTE: ensure the local validator set is updated
|
||||
# validator_updates = self.bigchaindb.get_validator_update()
|
||||
# validator_updates = [encode_validator(v) for v in validator_updates]
|
||||
validator_updates = []
|
||||
|
||||
# Store pre-commit state to recover in case there is a crash
|
||||
# during `commit`
|
||||
pre_commit_state = PreCommitState(commit_id=PRE_COMMIT_ID,
|
||||
height=self.new_height,
|
||||
transactions=self.block_txn_ids)
|
||||
logger.debug('Updating PreCommitState: %s', self.new_height)
|
||||
self.bigchaindb.store_pre_commit_state(pre_commit_state._asdict())
|
||||
return ResponseEndBlock(validator_updates=validator_updates)
|
||||
|
||||
def commit(self):
|
||||
"""Store the new height and along with block hash."""
|
||||
|
||||
data = self.block_txn_hash.encode('utf-8')
|
||||
|
||||
# register a new block only when new transactions are received
|
||||
if self.block_txn_ids:
|
||||
self.bigchaindb.store_bulk_transactions(self.block_transactions)
|
||||
block = Block(app_hash=self.block_txn_hash,
|
||||
height=self.new_height,
|
||||
transactions=self.block_txn_ids)
|
||||
# NOTE: storing the block should be the last operation during commit
|
||||
# this effects crash recovery. Refer BEP#8 for details
|
||||
self.bigchaindb.store_block(block._asdict())
|
||||
|
||||
logger.debug('Commit-ing new block with hash: apphash=%s ,'
|
||||
'height=%s, txn ids=%s', data, self.new_height,
|
||||
self.block_txn_ids)
|
||||
logger.benchmark('COMMIT_BLOCK, height:%s', self.new_height)
|
||||
return ResponseCommit(data=data)
|
||||
|
||||
|
||||
def encode_validator(v):
|
||||
ed25519_public_key = v['pub_key']['data']
|
||||
# NOTE: tendermint expects public to be encoded in go-amino format
|
||||
|
||||
pub_key = PubKey(type='ed25519',
|
||||
data=bytes.fromhex(ed25519_public_key))
|
||||
|
||||
return Validator(pub_key=pub_key,
|
||||
address=b'',
|
||||
power=v['power'])
|
||||
|
||||
|
||||
def decode_validator(v):
|
||||
return {'address': codecs.encode(v.address, 'hex').decode().upper().rstrip('\n'),
|
||||
'pub_key': {'type': v.pub_key.type,
|
||||
'data': codecs.encode(v.pub_key.data, 'base64').decode().rstrip('\n')},
|
||||
'voting_power': v.power}
|
||||
@ -8,7 +8,7 @@ import aiohttp
|
||||
from bigchaindb import config
|
||||
from bigchaindb.common.utils import gen_timestamp
|
||||
from bigchaindb.events import EventTypes, Event
|
||||
from bigchaindb.tendermint.utils import decode_transaction_base64
|
||||
from bigchaindb.tendermint_utils import decode_transaction_base64
|
||||
|
||||
|
||||
HOST = config['tendermint']['host']
|
||||
@ -16,13 +16,12 @@ except ImportError:
|
||||
import requests
|
||||
|
||||
import bigchaindb
|
||||
from bigchaindb import backend, config_utils
|
||||
from bigchaindb import backend, config_utils, fastquery
|
||||
from bigchaindb.models import Transaction
|
||||
from bigchaindb.common.exceptions import (SchemaValidationError,
|
||||
ValidationError,
|
||||
DoubleSpend)
|
||||
from bigchaindb.tendermint.utils import encode_transaction, merkleroot
|
||||
from bigchaindb.tendermint import fastquery
|
||||
from bigchaindb.tendermint_utils import encode_transaction, merkleroot
|
||||
from bigchaindb import exceptions as core_exceptions
|
||||
from bigchaindb.consensus import BaseConsensusRules
|
||||
|
||||
@ -138,10 +137,10 @@ class BigchainDB(object):
|
||||
txns = []
|
||||
assets = []
|
||||
txn_metadatas = []
|
||||
for transaction in transactions:
|
||||
for transaction_obj in transactions:
|
||||
# self.update_utxoset(transaction)
|
||||
transaction = transaction.to_dict()
|
||||
if transaction['operation'] == 'CREATE':
|
||||
transaction = transaction_obj.to_dict()
|
||||
if transaction['operation'] == transaction_obj.CREATE:
|
||||
asset = transaction.pop('asset')
|
||||
asset['id'] = transaction['id']
|
||||
assets.append(asset)
|
||||
@ -242,10 +241,10 @@ class BigchainDB(object):
|
||||
|
||||
def get_transaction(self, transaction_id):
|
||||
transaction = backend.query.get_transaction(self.connection, transaction_id)
|
||||
asset = backend.query.get_asset(self.connection, transaction_id)
|
||||
metadata = backend.query.get_metadata(self.connection, [transaction_id])
|
||||
|
||||
if transaction:
|
||||
asset = backend.query.get_asset(self.connection, transaction_id)
|
||||
metadata = backend.query.get_metadata(self.connection, [transaction_id])
|
||||
if asset:
|
||||
transaction['asset'] = asset
|
||||
|
||||
@ -373,7 +372,7 @@ class BigchainDB(object):
|
||||
# CLEANUP: The conditional below checks for transaction in dict format.
|
||||
# It would be better to only have a single format for the transaction
|
||||
# throught the code base.
|
||||
if not isinstance(transaction, Transaction):
|
||||
if isinstance(transaction, dict):
|
||||
try:
|
||||
transaction = Transaction.from_dict(tx)
|
||||
except SchemaValidationError as e:
|
||||
@ -434,19 +433,13 @@ class BigchainDB(object):
|
||||
def fastquery(self):
|
||||
return fastquery.FastQuery(self.connection)
|
||||
|
||||
def get_validators(self):
|
||||
try:
|
||||
resp = requests.get('{}validators'.format(self.endpoint))
|
||||
validators = resp.json()['result']['validators']
|
||||
for v in validators:
|
||||
v.pop('accum')
|
||||
v.pop('address')
|
||||
def get_validators(self, height=None):
|
||||
result = backend.query.get_validator_set(self.connection, height)
|
||||
validators = result['validators']
|
||||
for v in validators:
|
||||
v.pop('address')
|
||||
|
||||
return validators
|
||||
|
||||
except requests.exceptions.RequestException as e:
|
||||
logger.error('Error while connecting to Tendermint HTTP API')
|
||||
raise e
|
||||
return validators
|
||||
|
||||
def get_validator_update(self):
|
||||
update = backend.query.get_validator_update(self.connection)
|
||||
@ -458,6 +451,14 @@ class BigchainDB(object):
|
||||
def store_pre_commit_state(self, state):
|
||||
return backend.query.store_pre_commit_state(self.connection, state)
|
||||
|
||||
def store_validator_set(self, height, validators):
|
||||
"""Store validator set at a given `height`.
|
||||
NOTE: If the validator set already exists at that `height` then an
|
||||
exception will be raised.
|
||||
"""
|
||||
return backend.query.store_validator_set(self.connection, {'height': height,
|
||||
'validators': validators})
|
||||
|
||||
|
||||
Block = namedtuple('Block', ('app_hash', 'height', 'transactions'))
|
||||
|
||||
@ -3,10 +3,10 @@ import logging
|
||||
|
||||
from bigchaindb.common.exceptions import ConfigurationError
|
||||
from logging.config import dictConfig as set_logging_config
|
||||
from os.path import expanduser, join
|
||||
import os
|
||||
|
||||
|
||||
DEFAULT_LOG_DIR = expanduser('~')
|
||||
DEFAULT_LOG_DIR = os.getcwd()
|
||||
BENCHMARK_LOG_LEVEL = 15
|
||||
|
||||
|
||||
@ -40,7 +40,7 @@ DEFAULT_LOGGING_CONFIG = {
|
||||
},
|
||||
'file': {
|
||||
'class': 'logging.handlers.RotatingFileHandler',
|
||||
'filename': join(DEFAULT_LOG_DIR, 'bigchaindb.log'),
|
||||
'filename': os.path.join(DEFAULT_LOG_DIR, 'bigchaindb.log'),
|
||||
'mode': 'w',
|
||||
'maxBytes': 209715200,
|
||||
'backupCount': 5,
|
||||
@ -49,7 +49,7 @@ DEFAULT_LOGGING_CONFIG = {
|
||||
},
|
||||
'errors': {
|
||||
'class': 'logging.handlers.RotatingFileHandler',
|
||||
'filename': join(DEFAULT_LOG_DIR, 'bigchaindb-errors.log'),
|
||||
'filename': os.path.join(DEFAULT_LOG_DIR, 'bigchaindb-errors.log'),
|
||||
'mode': 'w',
|
||||
'maxBytes': 209715200,
|
||||
'backupCount': 5,
|
||||
@ -58,7 +58,7 @@ DEFAULT_LOGGING_CONFIG = {
|
||||
},
|
||||
'benchmark': {
|
||||
'class': 'logging.handlers.RotatingFileHandler',
|
||||
'filename': 'bigchaindb-benchmark.log',
|
||||
'filename': os.path.join(DEFAULT_LOG_DIR, 'bigchaindb-benchmark.log'),
|
||||
'mode': 'w',
|
||||
'maxBytes': 209715200,
|
||||
'backupCount': 5,
|
||||
|
||||
@ -1,6 +1,5 @@
|
||||
from bigchaindb.common.exceptions import (InvalidSignature, DoubleSpend,
|
||||
InputDoesNotExist, AssetIdMismatch,
|
||||
AmountError, DuplicateTransaction)
|
||||
from bigchaindb.common.exceptions import (InvalidSignature,
|
||||
DuplicateTransaction)
|
||||
from bigchaindb.common.transaction import Transaction
|
||||
from bigchaindb.common.utils import (validate_txn_obj, validate_key)
|
||||
from bigchaindb.common.schema import validate_transaction_schema
|
||||
@ -8,17 +7,15 @@ from bigchaindb.backend.schema import validate_language_key
|
||||
|
||||
|
||||
class Transaction(Transaction):
|
||||
|
||||
def validate(self, bigchain, current_transactions=[]):
|
||||
"""Validate transaction spend
|
||||
|
||||
Args:
|
||||
bigchain (BigchainDB): an instantiated bigchaindb.tendermint.BigchainDB object.
|
||||
|
||||
bigchain (BigchainDB): an instantiated bigchaindb.BigchainDB object.
|
||||
Returns:
|
||||
The transaction (Transaction) if the transaction is valid else it
|
||||
raises an exception describing the reason why the transaction is
|
||||
invalid.
|
||||
|
||||
Raises:
|
||||
ValidationError: If the transaction is invalid
|
||||
"""
|
||||
@ -29,117 +26,26 @@ class Transaction(Transaction):
|
||||
if bigchain.get_transaction(self.to_dict()['id']) or duplicates:
|
||||
raise DuplicateTransaction('transaction `{}` already exists'
|
||||
.format(self.id))
|
||||
|
||||
if not self.inputs_valid(input_conditions):
|
||||
raise InvalidSignature('Transaction signature is invalid.')
|
||||
|
||||
elif self.operation == Transaction.TRANSFER:
|
||||
# store the inputs so that we can check if the asset ids match
|
||||
        input_txs = []

        for input_ in self.inputs:
            input_txid = input_.fulfills.txid
            input_tx = bigchain.\
                get_transaction(input_txid)

            if input_tx is None:
                for ctxn in current_transactions:
                    if ctxn.id == input_txid:
                        input_tx = ctxn

            if input_tx is None:
                raise InputDoesNotExist("input `{}` doesn't exist"
                                        .format(input_txid))

            spent = bigchain.get_spent(input_txid, input_.fulfills.output,
                                       current_transactions)
            if spent and spent.id != self.id:
                raise DoubleSpend('input `{}` was already spent'
                                  .format(input_txid))

            output = input_tx.outputs[input_.fulfills.output]
            input_conditions.append(output)
            input_txs.append(input_tx)

        # Validate that all inputs are distinct
        links = [i.fulfills.to_uri() for i in self.inputs]
        if len(links) != len(set(links)):
            raise DoubleSpend('tx "{}" spends inputs twice'.format(self.id))

        # validate asset id
        asset_id = Transaction.get_asset_id(input_txs)
        if asset_id != self.asset['id']:
            raise AssetIdMismatch(('The asset id of the input does not'
                                   ' match the asset id of the'
                                   ' transaction'))

        input_amount = sum([input_condition.amount for input_condition in input_conditions])
        output_amount = sum([output_condition.amount for output_condition in self.outputs])

        if output_amount != input_amount:
            raise AmountError(('The amount used in the inputs `{}`'
                               ' needs to be same as the amount used'
                               ' in the outputs `{}`')
                              .format(input_amount, output_amount))

        if not self.inputs_valid(input_conditions):
            raise InvalidSignature('Transaction signature is invalid.')
        self.validate_transfer_inputs(bigchain, current_transactions)

        return self

    @classmethod
    def from_dict(cls, tx_body):
        super().validate_id(tx_body)
        return super().from_dict(tx_body, False)

    @classmethod
    def validate_schema(cls, tx_body):
        cls.validate_id(tx_body)
        validate_transaction_schema(tx_body)
        validate_txn_obj('asset', tx_body['asset'], 'data', validate_key)
        validate_txn_obj('metadata', tx_body, 'metadata', validate_key)
        validate_language_key(tx_body['asset'], 'data')
        return super().from_dict(tx_body)

    @classmethod
    def from_db(cls, bigchain, tx_dict_list):
        """Helper method that reconstructs a transaction dict that was returned
        from the database. It checks what asset_id to retrieve, retrieves the
        asset from the asset table and reconstructs the transaction.

        Args:
            bigchain (:class:`~bigchaindb.tendermint.BigchainDB`): An instance
                of BigchainDB used to perform database queries.
            tx_dict_list (:list:`dict` or :obj:`dict`): The transaction dict or
                list of transaction dicts as returned from the database.

        Returns:
            :class:`~Transaction`

        """
        return_list = True
        if isinstance(tx_dict_list, dict):
            tx_dict_list = [tx_dict_list]
            return_list = False

        tx_map = {}
        tx_ids = []
        for tx in tx_dict_list:
            tx.update({'metadata': None})
            tx_map[tx['id']] = tx
            if tx['operation'] == Transaction.CREATE:
                tx_ids.append(tx['id'])

        assets = list(bigchain.get_assets(tx_ids))
        for asset in assets:
            tx = tx_map[asset['id']]
            del asset['id']
            tx.update({'asset': asset})

        tx_ids = list(tx_map.keys())
        metadata_list = list(bigchain.get_metadata(tx_ids))
        for metadata in metadata_list:
            tx = tx_map[metadata['id']]
            tx.update({'metadata': metadata.get('metadata')})

        if return_list:
            tx_list = []
            for tx_id, tx in tx_map.items():
                tx_list.append(cls.from_dict(tx))
            return tx_list
        else:
            tx = list(tx_map.values())[0]
            return cls.from_dict(tx)
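# Illustrative usage sketch (not part of the diff): given a BigchainDB instance
# and transaction dicts as returned by the backend, `from_db` accepts either a
# single dict or a list, re-attaching the stored asset and metadata before
# rebuilding the model object(s):
#
#     tx = Transaction.from_db(bigchain, tx_dict)          # -> Transaction
#     txs = Transaction.from_db(bigchain, [tx_a, tx_b])    # -> [Transaction, ...]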


class FastTransaction:
@@ -1,12 +1,11 @@
import logging

import setproctitle

import bigchaindb
from bigchaindb.tendermint.lib import BigchainDB
from bigchaindb.tendermint.core import App
from bigchaindb.lib import BigchainDB
from bigchaindb.core import App
from bigchaindb.web import server, websocket_server
from bigchaindb.tendermint import event_stream
from bigchaindb import event_stream
from bigchaindb.events import Exchange, EventTypes
from bigchaindb.utils import Process

@@ -34,14 +33,14 @@ BANNER = """

def start():
    # Exchange object for event stream api
    logger.info('Starting BigchainDB')
    exchange = Exchange()

    # start the web api
    app_server = server.create_server(
        settings=bigchaindb.config['server'],
        log_config=bigchaindb.config['log'],
        bigchaindb_factory=BigchainDB)
    p_webapi = Process(name='bigchaindb_webapi', target=app_server.run)
    p_webapi = Process(name='bigchaindb_webapi', target=app_server.run, daemon=True)
    p_webapi.start()

    # start message
@@ -50,16 +49,18 @@ def start():
    # start websocket server
    p_websocket_server = Process(name='bigchaindb_ws',
                                 target=websocket_server.start,
                                 daemon=True,
                                 args=(exchange.get_subscriber_queue(EventTypes.BLOCK_VALID),))
    p_websocket_server.start()

    # connect to tendermint event stream
    p_websocket_client = Process(name='bigchaindb_ws_to_tendermint',
                                 target=event_stream.start,
                                 daemon=True,
                                 args=(exchange.get_publisher_queue(),))
    p_websocket_client.start()

    p_exchange = Process(name='bigchaindb_exchange', target=exchange.run)
    p_exchange = Process(name='bigchaindb_exchange', target=exchange.run, daemon=True)
    p_exchange.start()

    # We need to import this after spawning the web server
@@ -69,6 +70,7 @@ def start():

    setproctitle.setproctitle('bigchaindb')

    # Start the ABCIServer
    app = ABCIServer(app=App())
    app.run()

@@ -1,7 +0,0 @@
"""Code necessary for integrating with Tendermint."""

# Order is important!
# If we import core first, core will try to load BigchainDB from
# __init__ itself, causing a loop.
from bigchaindb.tendermint.lib import BigchainDB  # noqa
from bigchaindb.tendermint.core import App  # noqa
@@ -1,178 +0,0 @@
"""This module contains all the goodness to integrate BigchainDB
with Tendermint."""
import logging

from abci.application import BaseApplication
from abci.types_pb2 import (
    ResponseInitChain,
    ResponseInfo,
    ResponseCheckTx,
    ResponseBeginBlock,
    ResponseDeliverTx,
    ResponseEndBlock,
    ResponseCommit,
    Validator,
    PubKey
)

from bigchaindb.tendermint import BigchainDB
from bigchaindb.tendermint.utils import (decode_transaction,
                                         calculate_hash)
from bigchaindb.tendermint.lib import Block, PreCommitState
from bigchaindb.backend.query import PRE_COMMIT_ID


CodeTypeOk = 0
CodeTypeError = 1
logger = logging.getLogger(__name__)


class App(BaseApplication):
    """Bridge between BigchainDB and Tendermint.

    The role of this class is to expose the BigchainDB
    transactional logic to the Tendermint Consensus
    State Machine."""

    def __init__(self, bigchaindb=None):
        self.bigchaindb = bigchaindb or BigchainDB()
        self.block_txn_ids = []
        self.block_txn_hash = ''
        self.block_transactions = []
        self.validators = None
        self.new_height = None

    def init_chain(self, validators):
        """Initialize chain with block of height 0"""

        block = Block(app_hash='', height=0, transactions=[])
        self.bigchaindb.store_block(block._asdict())
        return ResponseInitChain()

    def info(self, request):
        """Return height of the latest committed block."""
        r = ResponseInfo()
        block = self.bigchaindb.get_latest_block()
        if block:
            r.last_block_height = block['height']
            r.last_block_app_hash = block['app_hash'].encode('utf-8')
        else:
            r.last_block_height = 0
            r.last_block_app_hash = b''
        return r

    def check_tx(self, raw_transaction):
        """Validate the transaction before entry into
        the mempool.

        Args:
            raw_tx: a raw string (in bytes) transaction."""

        logger.benchmark('CHECK_TX_INIT')
        logger.debug('check_tx: %s', raw_transaction)
        transaction = decode_transaction(raw_transaction)
        if self.bigchaindb.is_valid_transaction(transaction):
            logger.debug('check_tx: VALID')
            logger.benchmark('CHECK_TX_END, tx_id:%s', transaction['id'])
            return ResponseCheckTx(code=CodeTypeOk)
        else:
            logger.debug('check_tx: INVALID')
            logger.benchmark('CHECK_TX_END, tx_id:%s', transaction['id'])
            return ResponseCheckTx(code=CodeTypeError)

    def begin_block(self, req_begin_block):
        """Initialize list of transactions.
        Args:
            req_begin_block: block object which contains block header
            and block hash.
        """
        logger.benchmark('BEGIN BLOCK, height:%s, num_txs:%s',
                         req_begin_block.header.height,
                         req_begin_block.header.num_txs)

        self.block_txn_ids = []
        self.block_transactions = []
        return ResponseBeginBlock()

    def deliver_tx(self, raw_transaction):
        """Validate the transaction before mutating the state.

        Args:
            raw_tx: a raw string (in bytes) transaction."""
        logger.debug('deliver_tx: %s', raw_transaction)
        transaction = self.bigchaindb.is_valid_transaction(
            decode_transaction(raw_transaction), self.block_transactions)

        if not transaction:
            logger.debug('deliver_tx: INVALID')
            return ResponseDeliverTx(code=CodeTypeError)
        else:
            logger.debug('storing tx')
            self.block_txn_ids.append(transaction.id)
            self.block_transactions.append(transaction)
            return ResponseDeliverTx(code=CodeTypeOk)

    def end_block(self, request_end_block):
        """Calculate block hash using transaction ids and previous block
        hash to be stored in the next block.

        Args:
            height (int): new height of the chain."""

        height = request_end_block.height
        self.new_height = height
        block_txn_hash = calculate_hash(self.block_txn_ids)
        block = self.bigchaindb.get_latest_block()

        if self.block_txn_ids:
            self.block_txn_hash = calculate_hash([block['app_hash'], block_txn_hash])
        else:
            self.block_txn_hash = block['app_hash']

        validator_updates = self.bigchaindb.get_validator_update()
        validator_updates = [encode_validator(v) for v in validator_updates]

        # set sync status to true
        self.bigchaindb.delete_validator_update()

        # Store pre-commit state to recover in case there is a crash
        # during `commit`
        pre_commit_state = PreCommitState(commit_id=PRE_COMMIT_ID,
                                          height=self.new_height,
                                          transactions=self.block_txn_ids)
        logger.debug('Updating PreCommitState: %s', self.new_height)
        self.bigchaindb.store_pre_commit_state(pre_commit_state._asdict())
        return ResponseEndBlock(validator_updates=validator_updates)

    def commit(self):
        """Store the new height along with the block hash."""

        data = self.block_txn_hash.encode('utf-8')

        # register a new block only when new transactions are received
        if self.block_txn_ids:
            self.bigchaindb.store_bulk_transactions(self.block_transactions)
            block = Block(app_hash=self.block_txn_hash,
                          height=self.new_height,
                          transactions=self.block_txn_ids)
            # NOTE: storing the block should be the last operation during commit
            # this affects crash recovery. Refer BEP#8 for details
            self.bigchaindb.store_block(block._asdict())

        logger.debug('Commit-ing new block with hash: apphash=%s ,'
                     'height=%s, txn ids=%s', data, self.new_height,
                     self.block_txn_ids)
        logger.benchmark('COMMIT_BLOCK, height:%s', self.new_height)
        return ResponseCommit(data=data)


def encode_validator(v):
    ed25519_public_key = v['pub_key']['data']
    # NOTE: tendermint expects the public key to be encoded in go-amino format

    pub_key = PubKey(type='ed25519',
                     data=bytes.fromhex(ed25519_public_key))

    return Validator(pub_key=pub_key,
                     address=b'',
                     power=v['power'])
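# Illustrative sketch (not part of the diff): `encode_validator` expects a dict
# shaped like the validator-update entries above, e.g.
#
#     v = {'pub_key': {'data': '1A2B...FF'}, 'power': 10}   # hypothetical hex key and power
#     update = encode_validator(v)   # -> Validator(pub_key=PubKey(type='ed25519', ...), power=10)
#
# which is what `end_block` hands back to Tendermint in `validator_updates`.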
@@ -75,12 +75,20 @@ def public_key64_to_address(base64_public_key):


def public_key_from_base64(base64_public_key):
    return base64.b64decode(base64_public_key).hex().upper()
    return key_from_base64(base64_public_key)


def key_from_base64(base64_key):
    return base64.b64decode(base64_key).hex().upper()


def public_key_to_base64(ed25519_public_key):
    ed25519_public_key = bytes.fromhex(ed25519_public_key)
    return base64.b64encode(ed25519_public_key).decode('utf-8')
    return key_to_base64(ed25519_public_key)


def key_to_base64(ed25519_key):
    ed25519_key = bytes.fromhex(ed25519_key)
    return base64.b64encode(ed25519_key).decode('utf-8')

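# Illustrative round trip (not part of the diff): Tendermint reports keys as
# base64 while BigchainDB handles them as upper-case hex, and the two helpers
# above are inverses of each other:
#
#     hex_key = key_from_base64('SGVsbG8ga2V5IGJ5dGVz')   # hypothetical base64 input
#     key_to_base64(hex_key) == 'SGVsbG8ga2V5IGJ5dGVz'    # round-trips back to base64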

def amino_encoded_public_key(ed25519_public_key):

bigchaindb/upsert_validator/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@

from bigchaindb.upsert_validator.validator_election import ValidatorElection  # noqa
from bigchaindb.upsert_validator.validator_election_vote import ValidatorElectionVote  # noqa

bigchaindb/upsert_validator/validator_election.py (new file, 143 lines)
@@ -0,0 +1,143 @@

from bigchaindb.common.exceptions import (InvalidSignature,
                                          MultipleInputsError,
                                          InvalidProposer,
                                          UnequalValidatorSet,
                                          InvalidPowerChange,
                                          DuplicateTransaction)
from bigchaindb.tendermint_utils import key_from_base64
from bigchaindb.common.crypto import (public_key_from_ed25519_key)
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.schema import (_validate_schema,
                                      TX_SCHEMA_VALIDATOR_ELECTION,
                                      TX_SCHEMA_COMMON,
                                      TX_SCHEMA_CREATE)


class ValidatorElection(Transaction):

    VALIDATOR_ELECTION = 'VALIDATOR_ELECTION'
    # NOTE: this transaction class extends CREATE, so the operation inheritance is achieved
    # by renaming CREATE to VALIDATOR_ELECTION
    CREATE = VALIDATOR_ELECTION
    ALLOWED_OPERATIONS = (VALIDATOR_ELECTION,)

    def __init__(self, operation, asset, inputs, outputs,
                 metadata=None, version=None, hash_id=None):
        # operation `CREATE` is being passed as argument as `VALIDATOR_ELECTION` is an extension
        # of `CREATE` and any validation on `CREATE` in the parent class should apply to it
        super().__init__(operation, asset, inputs, outputs, metadata, version, hash_id)

    @classmethod
    def current_validators(cls, bigchain):
        """Return a dictionary of validators with key as `public_key` and
        value as the `voting_power`
        """

        validators = {}
        for validator in bigchain.get_validators():
            # NOTE: we assume that Tendermint encodes public key in base64
            public_key = public_key_from_ed25519_key(key_from_base64(validator['pub_key']['value']))
            validators[public_key] = validator['voting_power']

        return validators

    @classmethod
    def recipients(cls, bigchain):
        """Convert validator dictionary to a recipient list for `Transaction`"""

        recipients = []
        for public_key, voting_power in cls.current_validators(bigchain).items():
            recipients.append(([public_key], voting_power))

        return recipients

    @classmethod
    def is_same_topology(cls, current_topology, election_topology):
        voters = {}
        for voter in election_topology:
            if len(voter.public_keys) > 1:
                return False

            [public_key] = voter.public_keys
            voting_power = voter.amount
            voters[public_key] = voting_power

        # Check whether the voters and their votes are the same as the
        # validators and their voting power in the network
        return (current_topology == voters)

    def validate(self, bigchain, current_transactions=[]):
        """Validate election transaction
        For more details refer BEP-21: https://github.com/bigchaindb/BEPs/tree/master/21

        NOTE:
        * A valid election is initiated by an existing validator.

        * A valid election is one where voters are validators and votes are
          allocated according to the voting power of each validator node.

        Args:
            bigchain (BigchainDB): an instantiated bigchaindb.lib.BigchainDB object.

        Returns:
            `True` if the election is valid

        Raises:
            ValidationError: If the election is invalid
        """
        input_conditions = []

        duplicates = any(txn for txn in current_transactions if txn.id == self.id)
        if bigchain.get_transaction(self.id) or duplicates:
            raise DuplicateTransaction('transaction `{}` already exists'
                                       .format(self.id))

        if not self.inputs_valid(input_conditions):
            raise InvalidSignature('Transaction signature is invalid.')

        current_validators = self.current_validators(bigchain)

        # NOTE: Proposer should be a single node
        if len(self.inputs) != 1 or len(self.inputs[0].owners_before) != 1:
            raise MultipleInputsError('`tx_signers` must be a list instance of length one')

        # NOTE: changing more than 1/3 of the current power is not allowed
        if self.asset['data']['power'] >= (1/3)*sum(current_validators.values()):
            raise InvalidPowerChange('`power` change must be less than 1/3 of total power')

        # NOTE: Check if the proposer is a validator.
        [election_initiator_node_pub_key] = self.inputs[0].owners_before
        if election_initiator_node_pub_key not in current_validators.keys():
            raise InvalidProposer('Public key is not a part of the validator set')

        # NOTE: Check if all validators have been assigned votes equal to their voting power
        if not self.is_same_topology(current_validators, self.outputs):
            raise UnequalValidatorSet('Validator set must be exactly the same as the outputs of the election')

        return True
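    # Worked example (sketch, not part of the diff): with three validators of
    # power 10 each (total power 30), a proposed `power` of 10 or more is
    # rejected because 10 >= (1/3) * 30; a proposal with power 9 would pass
    # this particular check.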

    @classmethod
    def generate(cls, initiator, voters, election_data, metadata=None):
        (inputs, outputs) = cls.validate_create(initiator, voters, election_data, metadata)
        election = cls(cls.VALIDATOR_ELECTION, {'data': election_data}, inputs, outputs, metadata)
        cls.validate_schema(election.to_dict(), skip_id=True)
        return election
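    # Illustrative flow (sketch, not part of the diff; names and the exact shape
    # of `new_validator_data` are hypothetical): an existing validator proposes a
    # change, and the current validators become the voters, one output per
    # validator, weighted by voting power:
    #
    #     voters = ValidatorElection.recipients(bigchain)
    #     election = ValidatorElection.generate([initiator_pub_key],
    #                                           voters,
    #                                           new_validator_data,
    #                                           None).sign([initiator_priv_key])
    #     election.validate(bigchain)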

    @classmethod
    def validate_schema(cls, tx, skip_id=False):
        """Validate the validator election transaction. Since `VALIDATOR_ELECTION` extends `CREATE`
        transaction, all the validations for `CREATE` transaction should be inherited
        """
        if not skip_id:
            cls.validate_id(tx)
        _validate_schema(TX_SCHEMA_COMMON, tx)
        _validate_schema(TX_SCHEMA_CREATE, tx)
        _validate_schema(TX_SCHEMA_VALIDATOR_ELECTION, tx)

    @classmethod
    def create(cls, tx_signers, recipients, metadata=None, asset=None):
        raise NotImplementedError

    @classmethod
    def transfer(cls, tx_signers, recipients, metadata=None, asset=None):
        raise NotImplementedError

bigchaindb/upsert_validator/validator_election_vote.py (new file, 65 lines)
@@ -0,0 +1,65 @@

import base58

from bigchaindb.common.transaction import Transaction
from bigchaindb.common.schema import (_validate_schema,
                                      TX_SCHEMA_COMMON,
                                      TX_SCHEMA_TRANSFER,
                                      TX_SCHEMA_VALIDATOR_ELECTION_VOTE)


class ValidatorElectionVote(Transaction):

    VALIDATOR_ELECTION_VOTE = 'VALIDATOR_ELECTION_VOTE'
    # NOTE: This class inherits the TRANSFER txn type. The `TRANSFER` property is
    # overridden to re-use methods from the parent class
    TRANSFER = VALIDATOR_ELECTION_VOTE
    ALLOWED_OPERATIONS = (VALIDATOR_ELECTION_VOTE,)

    def validate(self, bigchain, current_transactions=[]):
        """Validate election vote transaction
        NOTE: There are no additional validity conditions on casting votes i.e.
        a vote is just a valid TRANSFER transaction

        For more details refer BEP-21: https://github.com/bigchaindb/BEPs/tree/master/21

        Args:
            bigchain (BigchainDB): an instantiated bigchaindb.lib.BigchainDB object.

        Returns:
            `True` if the election vote is valid

        Raises:
            ValidationError: If the election vote is invalid
        """
        self.validate_transfer_inputs(bigchain, current_transactions)
        return self

    @classmethod
    def to_public_key(cls, election_id):
        return base58.b58encode(bytes.fromhex(election_id))

    @classmethod
    def generate(cls, inputs, recipients, election_id, metadata=None):
        (inputs, outputs) = cls.validate_transfer(inputs, recipients, election_id, metadata)
        election_vote = cls(cls.VALIDATOR_ELECTION_VOTE, {'id': election_id}, inputs, outputs, metadata)
        cls.validate_schema(election_vote.to_dict(), skip_id=True)
        return election_vote
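    # Illustrative flow (sketch, not part of the diff; per BEP-21, assuming a vote
    # spends the voter's output on the election and pays it to a public key
    # derived from the election id, so votes can be tallied like ordinary
    # transfers):
    #
    #     election_pub_key = ValidatorElectionVote.to_public_key(election.id)
    #     vote = ValidatorElectionVote.generate(my_election_inputs,
    #                                           [([election_pub_key], my_voting_power)],
    #                                           election.id).sign([my_private_key])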

    @classmethod
    def validate_schema(cls, tx, skip_id=False):
        """Validate the validator election vote transaction. Since `VALIDATOR_ELECTION_VOTE` extends `TRANSFER`
        transaction, all the validations for `TRANSFER` transaction should be inherited
        """
        if not skip_id:
            cls.validate_id(tx)
        _validate_schema(TX_SCHEMA_COMMON, tx)
        _validate_schema(TX_SCHEMA_TRANSFER, tx)
        _validate_schema(TX_SCHEMA_VALIDATOR_ELECTION_VOTE, tx)

    @classmethod
    def create(cls, tx_signers, recipients, metadata=None, asset=None):
        raise NotImplementedError

    @classmethod
    def transfer(cls, tx_signers, recipients, metadata=None, asset=None):
        raise NotImplementedError
@@ -1,2 +1,2 @@
__version__ = '2.0.0b1'
__short_version__ = '2.0b1'
__version__ = '2.0.0b5'
__short_version__ = '2.0b5'

@@ -11,7 +11,7 @@ from flask_cors import CORS
import gunicorn.app.base

from bigchaindb import utils
from bigchaindb.tendermint import BigchainDB
from bigchaindb import BigchainDB
from bigchaindb.web.routes import add_routes
from bigchaindb.web.strip_content_type_middleware import StripContentTypeMiddleware


@@ -8,9 +8,10 @@ from flask import current_app, request, jsonify
from flask_restful import Resource, reqparse

from bigchaindb.common.exceptions import SchemaValidationError, ValidationError
from bigchaindb.models import Transaction
from bigchaindb.web.views.base import make_error
from bigchaindb.web.views import parameters
from bigchaindb.models import Transaction


logger = logging.getLogger(__name__)


@@ -16,6 +16,7 @@ import asyncio
import logging
import threading
from uuid import uuid4
from concurrent.futures import CancelledError

import aiohttp
from aiohttp import web
@@ -105,7 +106,7 @@ class Dispatcher:

        for _, websocket in self.subscribers.items():
            for str_item in str_buffer:
                websocket.send_str(str_item)
                yield from websocket.send_str(str_item)
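                # NOTE (sketch, not part of the diff): assuming aiohttp 3.x, where
                # `send_str()` is a coroutine, it must be driven with `yield from`
                # inside this generator-based coroutine (or `await` in async-def
                # code); calling it bare, as before, never actually sends the message.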


@asyncio.coroutine
@@ -125,6 +126,9 @@ def websocket_handler(request):
        except RuntimeError as e:
            logger.debug('Websocket exception: %s', str(e))
            break
        except CancelledError as e:
            logger.debug('Websocket closed')
            break
        if msg.type == aiohttp.WSMsgType.CLOSED:
            logger.debug('Websocket closed')
            break

@@ -44,7 +44,7 @@ services:
      retries: 3
    command: '.ci/entrypoint.sh'
  tendermint:
    image: tendermint/tendermint:0.22.3
    image: tendermint/tendermint:0.22.8
    # volumes:
    #   - ./tmdata:/tendermint
    entrypoint: ''

@@ -32,7 +32,7 @@ $ curl -fOL https://raw.githubusercontent.com/bigchaindb/bigchaindb/${GIT_BRANCH

## Quick Start
If you run `stack.sh` out of the box i.e. without any configuration changes, you will be able to deploy a 4 node
BigchainDB network with Docker containers, created from `master` branch of `bigchaindb/bigchaindb` repo and Tendermint version `0.22.3`.
BigchainDB network with Docker containers, created from `master` branch of `bigchaindb/bigchaindb` repo and Tendermint version `0.22.8`.

**Note**: Run `stack.sh` with either root or non-root user with sudo enabled.

@@ -90,7 +90,7 @@ $ bash stack.sh -h
    variable. (default: master)

  ENV[TM_VERSION]
    (Optional) Tendermint version to use for the setup. (default: 0.22.3)
    (Optional) Tendermint version to use for the setup. (default: 0.22.8)

  ENV[MONGO_VERSION]
    (Optional) MongoDB version to use with the setup. (default: 3.6)
@@ -171,8 +171,8 @@ $ export STACK_REPO=bigchaindb/bigchaindb
# Default: master
$ export STACK_BRANCH=master

#Optional, since 0.22.3 is the default tendermint version.
$ export TM_VERSION=0.22.3
#Optional, since 0.22.8 is the default tendermint version.
$ export TM_VERSION=0.22.8

#Optional, since 3.6 is the default MongoDB version.
$ export MONGO_VERSION=3.6
@@ -222,8 +222,8 @@ $ export STACK_REPO=bigchaindb/bigchaindb
# Default: master
$ export STACK_BRANCH=master

#Optional, since 0.22.3 is the default tendermint version
$ export TM_VERSION=0.22.3
#Optional, since 0.22.8 is the default tendermint version
$ export TM_VERSION=0.22.8

#Optional, since 3.6 is the default MongoDB version.
$ export MONGO_VERSION=3.6

@@ -19,13 +19,13 @@ After the installation of MongoDB is complete, run MongoDB using `sudo mongod`

### Installing a Tendermint Executable

Find [the version number of the latest Tendermint release](https://github.com/tendermint/tendermint/releases) and install it using the following, where 0.22.3 should be replaced by the latest released version number:
Find [the version number of the latest Tendermint release](https://github.com/tendermint/tendermint/releases) and install it using the following, where 0.22.8 should be replaced by the latest released version number:

```bash
$ sudo apt install -y unzip
$ wget https://github.com/tendermint/tendermint/releases/download/v0.22.3/tendermint_0.22.3_linux_amd64.zip
$ unzip tendermint_0.22.3_linux_amd64.zip
$ rm tendermint_0.22.3_linux_amd64.zip
$ wget https://github.com/tendermint/tendermint/releases/download/v0.22.8-autodraft/tendermint_0.22.8_linux_amd64.zip
$ unzip tendermint_0.22.8_linux_amd64.zip
$ rm tendermint_0.22.8_linux_amd64.zip
$ sudo mv tendermint /usr/local/bin
```


@@ -1,7 +1,14 @@
# Write a BigchaindB Enhancement Proposal (BEP)
# Write a BigchainDB Enhancement Proposal (BEP)

- Review [1/C4](https://github.com/bigchaindb/BEPs/tree/master/1), the process we use to accept any new code or PR of any kind, including one that adds a BEP to `bigchaindb/BEPs`.
- Review [2/COSS](https://github.com/bigchaindb/BEPs/tree/master/2). Maybe print it for reference. It outlines what can go in a BEP.
- Don't spend weeks on your BEP. Version 1 should take up to a few hours to write. You can add to it in the future. The process is iterative. If you need more than a few hours, then consider writing multiple BEPs.
- Do _not_ start writing code before you think about it. You should always write a BEP first. Once you do that, you can start implementing it. To do that, make a pull request and say it implements your BEP.
- Do _not_ write your BEP as an issue (i.e. a GitHub issue).
If you have an idea for a new feature or enhancement, and you want some feedback before you write a full BigchainDB Enhancement Proposal (BEP), then feel free to:
- ask in the [bigchaindb/bigchaindb Gitter chat room](https://gitter.im/bigchaindb/bigchaindb) or
- [open a new issue in the bigchaindb/BEPs repo](https://github.com/bigchaindb/BEPs/issues/new) and give it the label **BEP idea**.

If you want to discuss an existing BEP, then [open a new issue in the bigchaindb/BEPs repo](https://github.com/bigchaindb/BEPs/issues/new) and give it the label **discuss existing BEP**.

## Steps to Write a New BEP

1. Look at the structure of existing BEPs in the [bigchaindb/BEPs repo](https://github.com/bigchaindb/BEPs). Note the section headings. [BEP-2](https://github.com/bigchaindb/BEPs/tree/master/2) (our variant of the consensus-oriented specification system [COSS]) says more about the expected structure and process.
1. Write a first draft of your BEP. It doesn't have to be long or perfect.
1. Push your BEP draft to the [bigchaindb/BEPs repo](https://github.com/bigchaindb/BEPs) and make a pull request. [BEP-1](https://github.com/bigchaindb/BEPs/tree/master/1) (our variant of C4) outlines the process we use to handle all pull requests. In particular, we try to merge all pull requests quickly.
1. Your BEP can be revised by pushing more pull requests.

@@ -91,4 +91,5 @@ More About BigchainDB
   transaction-concepts
   store-files
   permissions
   private-data
   Data Models <https://docs.bigchaindb.com/projects/server/en/latest/data-models/index.html>

@@ -53,20 +53,7 @@ You could do more elaborate things too. As one example, each time someone writes
Read Permissions
================

All the data stored in a BigchainDB network can be read by anyone with access to that network. One *can* store encrypted data, but if the decryption key ever leaks out, then the encrypted data can be read, decrypted, and leak out too. (Deleting the encrypted data is :doc:`not an option <immutable>`.)

The permission to read some specific information (e.g. a music file) can be thought of as an *asset*. (In many countries, that permission or "right" is a kind of intellectual property.)
BigchainDB can be used to register that asset and transfer it from owner to owner.
Today, BigchainDB does not have a way to restrict read access of data stored in a BigchainDB network, but many third-party services do offer that (e.g. Google Docs, Dropbox).
In principle, a third party service could ask a BigchainDB network to determine if a particular user has permission to read some particular data. Indeed they could use BigchainDB to keep track of *all* the rights a user has for some data (not just the right to read it).
That third party could also use BigchainDB to store audit logs, i.e. records of every read, write or other operation on stored data.

BigchainDB can be used in other ways to help parties exchange private data:

- It can be used to publicly disclose the *availability* of some private data (stored elsewhere). For example, there might be a description of the data and a price.
- It can be used to record the TLS handshakes which two parties sent to each other to establish an encrypted and authenticated TLS connection, which they could use to exchange private data with each other. (The stored handshake information wouldn't be enough, by itself, to decrypt the data.) It would be a "proof of TLS handshake."
- See the BigchainDB `Privacy Protocols repository <https://github.com/bigchaindb/privacy-protocols>`_ for more techniques.

See the page titled, :doc:`BigchainDB, Privacy and Private Data <private-data>`.

Role-Based Access Control (RBAC)
================================

docs/root/source/private-data.rst (new file, 100 lines)
@@ -0,0 +1,100 @@
BigchainDB, Privacy and Private Data
------------------------------------

Basic Facts
===========

#. One can store arbitrary data (including encrypted data) in a BigchainDB network, within limits: there’s a maximum transaction size. Every transaction has a ``metadata`` section which can store almost any Unicode string (up to some maximum length). Similarly, every CREATE transaction has an ``asset.data`` section which can store almost any Unicode string.
#. The data stored in certain BigchainDB transaction fields must not be encrypted, e.g. public keys and amounts. BigchainDB doesn’t offer private transactions akin to Zcoin.
#. Once data has been stored in a BigchainDB network, it’s best to assume it can’t be changed or deleted.
#. Every node in a BigchainDB network has a full copy of all the stored data.
#. Every node in a BigchainDB network can read all the stored data.
#. Everyone with full access to a BigchainDB node (e.g. the sysadmin of a node) can read all the data stored on that node.
#. Everyone given access to a node via the BigchainDB HTTP API can find and read all the data stored by BigchainDB. The list of people with access might be quite short.
#. If the connection between an external user and a BigchainDB node isn’t encrypted (using HTTPS, for example), then a wiretapper can read all HTTP requests and responses in transit.
#. If someone gets access to plaintext (regardless of where they got it), then they can (in principle) share it with the whole world. One can make it difficult for them to do that, e.g. if it is a lot of data and they only get access inside a secure room where they are searched as they leave the room.

Storing Private Data Off-Chain
==============================

A system could store data off-chain, e.g. in a third-party database, document store, or content management system (CMS) and it could use BigchainDB to:

- Keep track of who has read permissions (or other permissions) in a third-party system. An example of how this could be done is described below.
- Keep a permanent record of all requests made to the third-party system.
- Store hashes of documents-stored-elsewhere, so that a change in any document can be detected.
- Record all handshake-establishing requests and responses between two off-chain parties (e.g. a Diffie-Hellman key exchange), so as to prove that they established an encrypted tunnel (without giving readers access to that tunnel). There are more details about this idea in `the BigchainDB Privacy Protocols repository <https://github.com/bigchaindb/privacy-protocols>`_.

A simple way to record who has read permission on a particular document would be for the third-party system (“DocPile”) to store a CREATE transaction in a BigchainDB network for every document+user pair, to indicate that that user has read permissions for that document. The transaction could be signed by DocPile (or maybe by a document owner, as a variation). The asset data field would contain 1) the unique ID of the user and 2) the unique ID of the document. The one output on the CREATE transaction would only be transferable/spendable by DocPile (or, again, a document owner).

To revoke the read permission, DocPile could create a TRANSFER transaction, to spend the one output on the original CREATE transaction, with a metadata field to say that the user in question no longer has read permission on that document.

This can be carried on indefinitely, i.e. another TRANSFER transaction could be created by DocPile to indicate that the user now has read permissions again.

DocPile can figure out if a given user has read permissions on a given document by reading the last transaction in the CREATE → TRANSFER → TRANSFER → etc. chain for that user+document pair.

There are other ways to accomplish the same thing. The above is just one example.
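
A minimal sketch of the grant step with the Python driver (the node URL, IDs and keypair below are hypothetical; revocation would spend the CREATE output with an analogous TRANSFER):

.. code-block:: python

    # Sketch only: grant "read" on document 'doc-1' to user 'user-42'
    from bigchaindb_driver import BigchainDB
    from bigchaindb_driver.crypto import generate_keypair

    bdb = BigchainDB('https://example.com:9984')   # a BigchainDB node's HTTP API root
    docpile = generate_keypair()                   # DocPile's signing keypair

    prepared = bdb.transactions.prepare(
        operation='CREATE',
        signers=docpile.public_key,
        asset={'data': {'user_id': 'user-42', 'document_id': 'doc-1'}},
        metadata={'permission': 'read', 'granted': True})
    grant = bdb.transactions.fulfill(prepared, private_keys=docpile.private_key)
    bdb.transactions.send_commit(grant)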

You might have noticed that the above example didn’t treat the “read permission” as an asset owned (controlled) by a user because if the permission asset is given to (transferred to or created by) the user then it cannot be controlled any further (by DocPile) until the user transfers it back to DocPile. Moreover, the user could transfer the asset to someone else, which might be problematic.

Storing Private Data On-Chain, Encrypted
========================================

There are many ways to store private data on-chain, encrypted. Every use case has its own objectives and constraints, and the best solution depends on the use case. `The BigchainDB consulting team <https://www.bigchaindb.com/services/>`_, along with our partners, can help you design the best solution for your use case.

Below we describe some example system setups, using various crypto primitives, to give a sense of what’s possible.

Please note:

- Ed25519 keypairs are designed for signing and verifying cryptographic signatures, `not for encrypting and decrypting messages <https://crypto.stackexchange.com/questions/27866/why-curve25519-for-encryption-but-ed25519-for-signatures>`_. For encryption, you should use keypairs designed for encryption, such as X25519.
- If someone (or some group) publishes how to decrypt some encrypted data on-chain, then anyone with access to that encrypted data will be able to get the plaintext. The data can’t be deleted.
- Encrypted data can’t be indexed or searched by MongoDB. (It can index and search the ciphertext, but that’s not very useful.) One might use homomorphic encryption to index and search encrypted data, but MongoDB doesn’t have any plans to support that any time soon. If there is indexing or keyword search needed, then some fields of the ``asset.data`` or ``metadata`` objects can be left as plain text and the sensitive information can be stored in an encrypted child-object.

System Example 1
~~~~~~~~~~~~~~~~

Encrypt the data with a symmetric key and store the ciphertext on-chain (in ``metadata`` or ``asset.data``). To communicate the key to a third party, use their public key to encrypt the symmetric key and send them that. They can decrypt the symmetric key with their private key, and then use that symmetric key to decrypt the on-chain ciphertext.

The reason for using a symmetric key along with public/private keypairs is so the ciphertext only has to be stored once.
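
For example, with `PyNaCl <https://pynacl.readthedocs.io/>`_ (one possible choice of library; the pattern, not the library, is the point):

.. code-block:: python

    import nacl.public
    import nacl.secret
    import nacl.utils

    # 1. A symmetric key encrypts the payload; the ciphertext is what goes on-chain.
    sym_key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    ciphertext = nacl.secret.SecretBox(sym_key).encrypt(b'secret payload')

    # 2. Wrap the symmetric key for one recipient using their X25519 public key.
    recipient_sk = nacl.public.PrivateKey.generate()      # held by the recipient
    wrapped_key = nacl.public.SealedBox(recipient_sk.public_key).encrypt(sym_key)

    # 3. The recipient unwraps the key and decrypts the on-chain ciphertext.
    sym_key_again = nacl.public.SealedBox(recipient_sk).decrypt(wrapped_key)
    plaintext = nacl.secret.SecretBox(sym_key_again).decrypt(ciphertext)
    assert plaintext == b'secret payload'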

System Example 2
~~~~~~~~~~~~~~~~

This example uses `proxy re-encryption <https://en.wikipedia.org/wiki/Proxy_re-encryption>`_:

#. MegaCorp encrypts some data using its own public key, then stores that encrypted data (ciphertext 1) in a BigchainDB network.
#. MegaCorp wants to let others read that encrypted data, but without ever sharing their private key and without having to re-encrypt themselves for every new recipient. Instead, they find a “proxy” named Moxie, to provide proxy re-encryption services.
#. Zorban contacts MegaCorp and asks for permission to read the data.
#. MegaCorp asks Zorban for his public key.
#. MegaCorp generates a “re-encryption key” and sends it to their proxy, Moxie.
#. Moxie (the proxy) uses the re-encryption key to encrypt ciphertext 1, creating ciphertext 2.
#. Moxie sends ciphertext 2 to Zorban (or to MegaCorp who forwards it to Zorban).
#. Zorban uses his private key to decrypt ciphertext 2, getting the original un-encrypted data.

Note:

- The proxy only ever sees ciphertext. They never see any un-encrypted data.
- Zorban never got the ability to decrypt ciphertext 1, i.e. the on-chain data.
- There are variations on the above flow.

System Example 3
~~~~~~~~~~~~~~~~

This example uses `erasure coding <https://en.wikipedia.org/wiki/Erasure_code>`_:

#. Erasure-code the data into n pieces.
#. Encrypt each of the n pieces with a different encryption key.
#. Store the n encrypted pieces on-chain, e.g. in n separate transactions.
#. Share each of the n decryption keys with a different party.

If k < n of the key-holders get and decrypt k of the pieces, they can reconstruct the original plaintext. Fewer than k would not be enough.

System Example 4
~~~~~~~~~~~~~~~~

This setup could be used in an enterprise blockchain scenario where a special node should be able to see parts of the data, but the others should not.

- The special node generates an X25519 keypair (or similar asymmetric *encryption* keypair).
- A BigchainDB end user finds out the X25519 public key (encryption key) of the special node.
- The end user creates a valid BigchainDB transaction, with either the asset.data or the metadata (or both) encrypted using the above-mentioned public key.
- This is only done for transactions where the contents of asset.data or metadata don't matter for validation, so all node operators can validate the transaction.
- The special node is able to decrypt the encrypted data, but the other node operators can't, and nor can any other end user.
@@ -3,7 +3,7 @@ BigchainDB and Smart Contracts

One can store the source code of any smart contract (i.e. a computer program) in BigchainDB, but BigchainDB won't run arbitrary smart contracts.

BigchainDB will run the subset of smart contracts expressible using `Crypto-Conditions <https://tools.ietf.org/html/draft-thomas-crypto-conditions-03>`_. Crypto-conditions are part of the `Interledger Protocol <https://interledger.org/>`_.
BigchainDB will run the subset of smart contracts expressible using `Crypto-Conditions <https://tools.ietf.org/html/draft-thomas-crypto-conditions-03>`_.

The owners of an asset can impose conditions on it that must be met for the asset to be transferred to new owners. Examples of possible conditions (crypto-conditions) include:


@@ -27,9 +27,8 @@ and the other output might have 15 oak trees for another set of owners.

Each output also has an associated condition: the condition that must be met
(by a TRANSFER transaction) to transfer/spend the output.
BigchainDB supports a variety of conditions,
a subset of the [Interledger Protocol (ILP)](https://interledger.org/)
crypto-conditions. For details, see
BigchainDB supports a variety of conditions.
For details, see
the section titled **Transaction Components: Conditions**
in the relevant
[BigchainDB Transactions Spec](https://github.com/bigchaindb/BEPs/tree/master/tx-specs/).

@@ -5,8 +5,7 @@ import os
import os.path

from bigchaindb.common.transaction import Transaction, Input, TransactionLink
from bigchaindb.tendermint import BigchainDB
from bigchaindb.tendermint import lib
from bigchaindb import lib
from bigchaindb.web import server


docs/server/source/appendices/all-in-one-bigchaindb.md (new file, 85 lines)
@@ -0,0 +1,85 @@
# Run BigchainDB with all-in-one Docker

For those who like using Docker and wish to experiment with BigchainDB in
non-production environments, we currently maintain a BigchainDB all-in-one
Docker image and a
`Dockerfile-all-in-one` that can be used to build an image for `bigchaindb`.

This image contains all the services required for a BigchainDB node i.e.

- BigchainDB Server
- MongoDB
- Tendermint

**Note:** **NOT for Production Use:** *This is a single-node, opinionated image not well suited for a network deployment.*
*This image is to help quick deployment for early adopters; for a more standard approach please refer to one of our deployment guides:*

- [BigchainDB developer setup guides](https://docs.bigchaindb.com/projects/contributing/en/latest/dev-setup-coding-and-contribution-process/index.html).
- [BigchainDB with Kubernetes](http://docs.bigchaindb.com/projects/server/en/latest/k8s-deployment-template/index.html).

## Prerequisite(s)
- [Docker](https://docs.docker.com/engine/installation/)

## Pull and Run the Image from Docker Hub

With Docker installed, you can proceed as follows.

In a terminal shell, pull the latest version of the BigchainDB all-in-one Docker image using:
```text
$ docker pull bigchaindb/bigchaindb:all-in-one

$ docker run \
  --detach \
  --name bigchaindb \
  --publish 9984:9984 \
  --publish 9985:9985 \
  --publish 27017:27017 \
  --publish 26657:26657 \
  --volume $HOME/bigchaindb_docker/mongodb/data/db:/data/db \
  --volume $HOME/bigchaindb_docker/mongodb/data/configdb:/data/configdb \
  --volume $HOME/bigchaindb_docker/tendermint:/tendermint \
  bigchaindb/bigchaindb:all-in-one
```

Let's analyze that command:

* `docker run` tells Docker to run some image
* `--detach` run the container in the background
* `--publish 9984:9984` map the host port `9984` to the container port `9984`
(the BigchainDB API server)
  * `9985` BigchainDB Websocket server
  * `27017` Default port for MongoDB
  * `26657` Tendermint RPC server
* `--volume "$HOME/bigchaindb_docker/mongodb:/data"` map the host directory
`$HOME/bigchaindb_docker/mongodb` to the container directory `/data`;
this allows us to have the data persisted on the host machine,
you can read more in the [official Docker
documentation](https://docs.docker.com/engine/tutorials/dockervolumes)
* `$HOME/bigchaindb_docker/tendermint:/tendermint` to persist Tendermint data.
* `bigchaindb/bigchaindb:all-in-one` the image to use. All the options after the container name are passed on to the entrypoint inside the container.

## Verify

```text
$ docker ps | grep bigchaindb
```

Send your first transaction using [BigchainDB drivers](../drivers-clients/index.html).


## Building Your Own Image

Assuming you have Docker installed, you would proceed as follows.

In a terminal shell:
```text
git clone git@github.com:bigchaindb/bigchaindb.git
cd bigchaindb/
```

Build the Docker image:
```text
docker build --file Dockerfile-all-in-one --tag <tag/name:latest> .
```

Now you can use your own image to run a BigchainDB all-in-one container.
@@ -4,15 +4,14 @@ Appendices
.. toctree::
   :maxdepth: 1

   install-os-level-deps
   json-serialization
   cryptography
   the-bigchaindb-class
   backend
   commands
   tendermint-integration
   aws-setup
   generate-key-pair-for-ssh
   firewall-notes
   ntp-notes
   licenses
   all-in-one-bigchaindb

@@ -1,17 +0,0 @@
# How to Install OS-Level Dependencies

BigchainDB Server has some OS-level dependencies that must be installed.

On Ubuntu 16.04, we found that the following was enough:
```text
sudo apt-get update
sudo apt-get install libffi-dev libssl-dev
```

On Fedora 23–25, we found that the following was enough:
```text
sudo dnf update
sudo dnf install gcc-c++ redhat-rpm-config python3-devel libffi-devel
```

(If you're using a version of Fedora before version 22, you may have to use `yum` instead of `dnf`.)
@@ -1,26 +0,0 @@
######################
Tendermint Integration
######################


.. automodule:: bigchaindb.tendermint
   :special-members: __init__

.. automodule:: bigchaindb.tendermint.lib
   :special-members: __init__
   :noindex:

.. automodule:: bigchaindb.tendermint.core
   :special-members: __init__

.. automodule:: bigchaindb.tendermint.event_stream
   :special-members: __init__

.. automodule:: bigchaindb.tendermint.fastquery
   :special-members: __init__

.. automodule:: bigchaindb.tendermint.commands
   :special-members: __init__

.. automodule:: bigchaindb.tendermint.utils
   :special-members: __init__
@@ -2,4 +2,4 @@
The BigchainDB Class
####################

.. autoclass:: bigchaindb.tendermint.BigchainDB
.. autoclass:: bigchaindb.BigchainDB

@@ -2,7 +2,6 @@

A **BigchainDB Cluster** is a set of connected **BigchainDB Nodes**, managed by a **BigchainDB Consortium** (i.e. an organization). Those terms are defined in the [BigchainDB Terminology page](https://docs.bigchaindb.com/en/latest/terminology.html).


## Consortium Structure & Governance

The consortium might be a company, a foundation, a cooperative, or [some other form of organization](https://en.wikipedia.org/wiki/Organizational_structure).
@@ -13,13 +12,6 @@ This documentation doesn't explain how to create a consortium, nor does it outli
It's worth noting that the decentralization of a BigchainDB cluster depends,
to some extent, on the decentralization of the associated consortium. See the pages about [decentralization](https://docs.bigchaindb.com/en/latest/decentralized.html) and [node diversity](https://docs.bigchaindb.com/en/latest/diversity.html).


## Relevant Technical Documentation

Anyone building or managing a BigchainDB cluster may be interested
in [our production deployment template](production-deployment-template/index.html).


## Cluster DNS Records and SSL Certificates

We now describe how *we* set up the external (public-facing) DNS records for a BigchainDB cluster. Your consortium may opt to do it differently.
@@ -30,14 +22,12 @@ There were several goals:
* There should be no sharing of SSL certificates among BigchainDB node operators.
* Optional: Allow clients to connect to a "random" BigchainDB node in the cluster at one particular domain (or subdomain).


### Node Operator Responsibilities

1. Register a domain (or use one that you already have) for your BigchainDB node. You can use a subdomain if you like. For example, you might opt to use `abc-org73.net`, `api.dynabob8.io` or `figmentdb3.ninja`.
2. Get an SSL certificate for your domain or subdomain, and properly install it in your node (e.g. in your NGINX instance).
3. Create a DNS A Record mapping your domain or subdomain to the public IP address of your node (i.e. the one that serves the BigchainDB HTTP API).


### Consortium Responsibilities

Optional: The consortium managing the BigchainDB cluster could register a domain name and set up CNAME records mapping that domain name (or one of its subdomains) to each of the nodes in the cluster. For example, if the consortium registered `bdbcluster.io`, they could set up CNAME records like the following:

@@ -6,6 +6,7 @@ Libraries and Tools Maintained by the BigchainDB Team

* `Python Driver <https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_
* `JavaScript / Node.js Driver <https://github.com/bigchaindb/js-bigchaindb-driver>`_
* `Java driver <https://github.com/bigchaindb/java-bigchaindb-driver>`_

Community-Driven Libraries and Tools
------------------------------------
@@ -17,6 +18,5 @@ Community-Driven Libraries and Tools

* `Haskell transaction builder <https://github.com/bigchaindb/bigchaindb-hs>`_
* `Go driver <https://github.com/zbo14/envoke/blob/master/bigchain/bigchain.go>`_
* `Java driver <https://github.com/authenteq/java-bigchaindb-driver>`_
* `Ruby driver <https://github.com/LicenseRocks/bigchaindb_ruby>`_
* `Ruby library for preparing/signing transactions and submitting them or querying a BigchainDB node (MIT licensed) <https://rubygems.org/gems/bigchaindb>`_

@@ -10,13 +10,13 @@ BigchainDB Server Documentation
   simple-network-setup
   production-nodes/index
   clusters
   production-deployment-template/index
   dev-and-test/index
   server-reference/index
   http-client-server-api
   events/index
   drivers-clients/index
   data-models/index
   k8s-deployment-template/index
   release-notes
   glossary
   appendices/index

@ -1,13 +1,25 @@
|
||||
Architecture of a BigchainDB Node
|
||||
==================================
|
||||
Architecture of a BigchainDB Node Running in a Kubernetes Cluster
|
||||
=================================================================
|
||||
|
||||
A BigchainDB Production deployment is hosted on a Kubernetes cluster and includes:
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
If you deploy a BigchainDB node into a Kubernetes cluster
|
||||
as described in these docs, it will include:
|
||||
|
||||
* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint
|
||||
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
|
||||
* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent.
|
||||
* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent
|
||||
`Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
|
||||
* MongoDB and Tendermint `Kubernetes StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
|
||||
* MongoDB and Tendermint `Kubernetes StatefulSets <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
|
||||
* Third party services like `3scale <https://3scale.net>`_,
|
||||
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
|
||||
`Azure Operations Management Suite
|
||||
@ -3,6 +3,17 @@
|
||||
Kubernetes Template: Deploying a BigchainDB network
|
||||
===================================================
|
||||
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
This page describes how to deploy a static BigchainDB + Tendermint network.
|
||||
|
||||
If you want to deploy a stand-alone BigchainDB node in a BigchainDB cluster,
|
||||
@ -41,7 +41,7 @@ Configure MongoDB Cloud Manager for Monitoring
|
||||
|
||||
* If you have authentication enabled, select the option to enable
|
||||
authentication and specify the authentication mechanism as per your
|
||||
deployment. The default BigchainDB production deployment currently
|
||||
deployment. The default BigchainDB Kubernetes deployment template currently
|
||||
supports ``X.509 Client Certificate`` as the authentication mechanism.
|
||||
|
||||
* If you have TLS enabled, select the option to enable TLS/SSL for MongoDB
|
||||
40
docs/server/source/k8s-deployment-template/index.rst
Normal file
40
docs/server/source/k8s-deployment-template/index.rst
Normal file
@ -0,0 +1,40 @@
|
||||
Kubernetes Deployment Template
|
||||
==============================
|
||||
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
This section outlines a way to deploy a BigchainDB node (or BigchainDB cluster)
|
||||
on Microsoft Azure using Kubernetes.
|
||||
You may choose to use it as a template or reference for your own deployment,
|
||||
but *we make no claim that it is suitable for your purposes*.
|
||||
Feel free change things to suit your needs or preferences.
|
||||
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
workflow
|
||||
ca-installation
|
||||
server-tls-certificate
|
||||
client-tls-certificate
|
||||
revoke-tls-certificate
|
||||
template-kubernetes-azure
|
||||
node-on-kubernetes
|
||||
node-config-map-and-secrets
|
||||
log-analytics
|
||||
cloud-manager
|
||||
easy-rsa
|
||||
upgrade-on-kubernetes
|
||||
bigchaindb-network-on-kubernetes
|
||||
tectonic-azure
|
||||
troubleshoot
|
||||
architecture
|
||||
@ -3,6 +3,17 @@
|
||||
How to Configure a BigchainDB Node
|
||||
==================================
|
||||
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
This page outlines the steps to set a bunch of configuration settings
|
||||
in your BigchainDB node.
|
||||
They are pushed to the Kubernetes cluster in two files,
|
||||
@ -3,7 +3,18 @@
|
||||
Kubernetes Template: Deploy a Single BigchainDB Node
|
||||
====================================================
|
||||
|
||||
This page describes how to deploy a BigchainDB + Tendermint node
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
This page describes how to deploy a BigchainDB node
|
||||
using `Kubernetes <https://kubernetes.io/>`_.
|
||||
It assumes you already have a running Kubernetes cluster.
|
||||
|
||||
@ -29,7 +40,7 @@ If you don't have that file, then you need to get it.
|
||||
|
||||
**Azure.** If you deployed your Kubernetes cluster on Azure
|
||||
using the Azure CLI 2.0 (as per :doc:`our template
|
||||
<../production-deployment-template/template-kubernetes-azure>`),
|
||||
<../k8s-deployment-template/template-kubernetes-azure>`),
|
||||
then you can get the ``~/.kube/config`` file using:
|
||||
|
||||
.. code:: bash
|
||||
@ -277,7 +288,7 @@ The first thing to do is create the Kubernetes storage classes.
|
||||
First, you need an Azure storage account.
|
||||
If you deployed your Kubernetes cluster on Azure
|
||||
using the Azure CLI 2.0
|
||||
(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`),
|
||||
(as per :doc:`our template <../k8s-deployment-template/template-kubernetes-azure>`),
|
||||
then the `az acs create` command already created a
|
||||
storage account in the same location and resource group
|
||||
as your Kubernetes cluster.
|
||||
@ -289,7 +300,7 @@ in the same data center.
|
||||
Premium storage is higher-cost and higher-performance.
|
||||
It uses solid state drives (SSD).
|
||||
|
||||
We recommend using Premium storage for our production template.
|
||||
We recommend using Premium storage with our Kubernetes deployment template.
|
||||
Create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
|
||||
for Premium storage and associate it with your Azure resource group.
|
||||
For future reference, the command to create a storage account is
|
||||
@ -372,7 +383,7 @@ but it should become "Bound" fairly quickly.
|
||||
$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
|
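A quick way to watch the claim's status while you wait is sketched below (not part of the original steps; it just polls the PersistentVolumeClaims):

.. code:: bash

   $ kubectl get pvc --watch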
||||
|
||||
For notes on recreating a persistent volume from a released Azure disk resource, consult
|
||||
:doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.
|
||||
:doc:`the page about cluster troubleshooting <../k8s-deployment-template/troubleshoot>`.
|
||||
|
||||
.. _start-kubernetes-stateful-set-mongodb:
|
||||
|
||||
@ -569,7 +580,7 @@ Step 19(Optional): Configure the MongoDB Cloud Manager
|
||||
------------------------------------------------------
|
||||
|
||||
Refer to the
|
||||
:doc:`documentation <../production-deployment-template/cloud-manager>`
|
||||
:doc:`documentation <../k8s-deployment-template/cloud-manager>`
|
||||
for details on how to configure the MongoDB Cloud Manager to enable
|
||||
monitoring and backup.
|
||||
|
||||
@ -749,4 +760,4 @@ verify that your node or cluster works as expected.
|
||||
|
||||
Next, you can set up log analytics and monitoring, by following our templates:
|
||||
|
||||
* :doc:`../production-deployment-template/log-analytics`.
|
||||
* :doc:`../k8s-deployment-template/log-analytics`.
|
||||
@ -1,6 +1,17 @@
|
||||
Walkthrough: Deploy a Kubernetes Cluster on Azure using Tectonic by CoreOS
|
||||
==========================================================================
|
||||
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
A BigchainDB node can be run inside a `Kubernetes <https://kubernetes.io/>`_
|
||||
cluster.
|
||||
This page describes one way to deploy a Kubernetes cluster on Azure using Tectonic.
|
||||
@ -1,6 +1,17 @@
|
||||
Template: Deploy a Kubernetes Cluster on Azure
|
||||
==============================================
|
||||
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
A BigchainDB node can be run inside a `Kubernetes <https://kubernetes.io/>`_
|
||||
cluster.
|
||||
This page describes one way to deploy a Kubernetes cluster on Azure.
|
||||
@ -1,6 +1,17 @@
|
||||
Kubernetes Template: Upgrade all Software in a BigchainDB Node
|
||||
==============================================================
|
||||
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
This page outlines how to upgrade all the software associated
|
||||
with a BigchainDB node running on Kubernetes,
|
||||
including host operating systems, Docker, Kubernetes,
|
||||
@ -3,9 +3,19 @@
|
||||
Overview
|
||||
========
|
||||
|
||||
This page summarizes the steps *we* go through
|
||||
to set up a production BigchainDB cluster.
|
||||
We are constantly improving them.
|
||||
.. note::
|
||||
|
||||
A highly-available Kubernetes cluster requires at least five virtual machines
|
||||
(three for the master and two for your app's containers).
|
||||
Therefore we don't recommend using Kubernetes to run a BigchainDB node
|
||||
if that's the only thing the Kubernetes cluster will be running.
|
||||
Instead, see **How to Set Up a BigchainDB Network**.
|
||||
If your organization already *has* a big Kubernetes cluster running many containers,
|
||||
and your organization has people who know Kubernetes,
|
||||
then this Kubernetes deployment template might be helpful.
|
||||
|
||||
This page summarizes some steps to go through
|
||||
to set up a BigchainDB cluster.
|
||||
You can modify them to suit your needs.
|
||||
|
||||
.. _generate-the-blockchain-id-and-genesis-time:
|
||||
@ -44,7 +54,7 @@ you can do this:
|
||||
.. code::
|
||||
|
||||
$ mkdir $(pwd)/tmdata
|
||||
$ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:0.22.3 init
|
||||
$ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:0.22.8 init
|
||||
$ cat $(pwd)/tmdata/genesis.json
|
||||
|
||||
You should see something that looks like:
|
||||
@ -113,13 +123,13 @@ and set it equal to your secret token, e.g.
|
||||
|
||||
|
||||
3. Deploy a Kubernetes cluster for your BigchainDB node. We have some instructions for how to
|
||||
:doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`.
|
||||
:doc:`Deploy a Kubernetes cluster on Azure <../k8s-deployment-template/template-kubernetes-azure>`.
|
||||
|
||||
.. warning::
|
||||
|
||||
In theory, you can deploy your BigchainDB node to any Kubernetes cluster, but there can be differences
|
||||
between different Kubernetes clusters, especially if they are running different versions of Kubernetes.
|
||||
We tested this Production Deployment Template on Azure ACS in February 2018 and at that time
|
||||
We tested this Kubernetes Deployment Template on Azure ACS in February 2018 and at that time
|
||||
ACS was deploying a **Kubernetes 1.7.7** cluster. If you can force your cluster to have that version of Kubernetes,
|
||||
then you'll increase the likelihood that everything will work in your cluster.
|
||||
|
||||
@ -1,31 +0,0 @@
|
||||
Production Deployment Template
|
||||
==============================
|
||||
|
||||
This section outlines how *we* deploy production BigchainDB,
|
||||
integrated with Tendermint(backend for BFT consensus),
|
||||
clusters on Microsoft Azure using
|
||||
Kubernetes. We improve it constantly.
|
||||
You may choose to use it as a template or reference for your own deployment,
|
||||
but *we make no claim that it is suitable for your purposes*.
|
||||
Feel free change things to suit your needs or preferences.
|
||||
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
workflow
|
||||
ca-installation
|
||||
server-tls-certificate
|
||||
client-tls-certificate
|
||||
revoke-tls-certificate
|
||||
template-kubernetes-azure
|
||||
node-on-kubernetes
|
||||
node-config-map-and-secrets
|
||||
log-analytics
|
||||
cloud-manager
|
||||
easy-rsa
|
||||
upgrade-on-kubernetes
|
||||
bigchaindb-network-on-kubernetes
|
||||
tectonic-azure
|
||||
troubleshoot
|
||||
architecture
|
||||
@ -4,7 +4,8 @@ Production Nodes
|
||||
.. toctree::
|
||||
:maxdepth: 1
|
||||
|
||||
node-requirements
|
||||
node-assumptions
|
||||
node-components
|
||||
node-requirements
|
||||
node-security-and-privacy
|
||||
reverse-proxy-notes
|
||||
|
||||
@ -10,5 +10,3 @@ We make some assumptions about production nodes:
|
||||
1. Production nodes use MongoDB (not RethinkDB, PostgreSQL, Couchbase or whatever).
|
||||
1. Each production node is set up and managed by an experienced professional system administrator or a team of them.
|
||||
1. Each production node in a cluster is managed by a different person or team.
|
||||
|
||||
We don't provide a detailed cookbook explaining how to secure a server, or other things that a sysadmin should know. We do provide some templates, but those are just starting points.
|
||||
|
||||
@ -0,0 +1,11 @@
|
||||
# Production Node Security & Privacy
|
||||
|
||||
Here are some references about how to secure an Ubuntu 18.04 server:
|
||||
|
||||
- [Ubuntu 18.04 - Ubuntu Server Guide - Security](https://help.ubuntu.com/lts/serverguide/security.html.en)
|
||||
- [Ubuntu Blog: National Cyber Security Centre publish Ubuntu 18.04 LTS Security Guide](https://blog.ubuntu.com/2018/07/30/national-cyber-security-centre-publish-ubuntu-18-04-lts-security-guide)
|
||||
|
||||
Also, here are some recommendations a node operator can follow to enhance the privacy of the data coming to, stored on, and leaving their node:
|
||||
|
||||
- Ensure that all data stored on a node is encrypted at rest, e.g. using full disk encryption. This can be provided as a service by the operating system, transparently to BigchainDB, MongoDB and Tendermint.
|
||||
- Ensure that all data is encrypted in transit, i.e. enforce using HTTPS for the HTTP API and the Websocket API. This can be done using NGINX or similar, as we do with the BigchainDB Testnet.
|
||||
@ -16,7 +16,9 @@ A Network will stop working if more than one third of the Nodes are down or faul
|
||||
|
||||
## Before We Start
|
||||
|
||||
This tutorial assumes you have basic knowledge on how to manage a GNU/Linux machine. The commands are tailored for an up-to-date *Debian-like* distribution. (We use an **Ubuntu 18.04 LTS** Virtual Machine on Microsoft Azure.) If you are on a different Linux distribution then you might need to adapt the names of the packages installed.
|
||||
This tutorial assumes you have basic knowledge of how to manage a GNU/Linux machine.
|
||||
|
||||
**Please note: The commands on this page work on Ubuntu 18.04. Similar commands will work on other versions of Ubuntu, and other recent Debian-like Linux distros, but you may have to change the names of the packages, or install more packages.**
|
||||
|
||||
We don't make any assumptions about **where** you run the Node.
|
||||
You can run BigchainDB Server on a Virtual Machine on the cloud, on a machine in your data center, or even on a Raspberry Pi. Just make sure that your Node is reachable by the other Nodes. Here's a **non-exhaustive list of examples**:
|
||||
@ -49,13 +51,16 @@ sudo apt full-upgrade
|
||||
BigchainDB Server requires **Python 3.6+**, so make sure your system has it. Install the required packages:
|
||||
|
||||
```
|
||||
# For Ubuntu 18.04:
|
||||
sudo apt install -y python3-pip libssl-dev
|
||||
# Ubuntu 16.04, and other Linux distros, may require other packages or more packages
|
||||
```
|
||||
|
||||
Now install the latest version of BigchainDB. Check the [project page on PyPI][bdb:pypi] for the last version (which was `2.0.0a6` at the time of writing) and install it:
|
||||
Now install the latest version of BigchainDB. You can find the latest version by going to the [BigchainDB project release history page on PyPI][bdb:pypi]. For example, to install version 2.0.0b3, you would do:
|
||||
|
||||
```
|
||||
sudo pip3 install bigchaindb==2.0.0a6
|
||||
# Change 2.0.0b3 to the latest version as explained above:
|
||||
sudo pip3 install bigchaindb==2.0.0b3
|
||||
```
|
||||
|
||||
Check that you installed the correct version of BigchainDB Server using `bigchaindb --version`.
|
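If `bigchaindb --version` doesn't report the version you expected, one option (a sketch, not part of the original instructions) is to upgrade in place using pip:

```bash
bigchaindb --version
# Upgrade to the newest release published on PyPI:
sudo pip3 install --upgrade bigchaindb
```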
||||
@ -74,13 +79,13 @@ Note: The `mongodb` package is _not_ the official MongoDB package from MongoDB t
|
||||
|
||||
#### Install Tendermint
|
||||
|
||||
Install a [recent version of Tendermint][tendermint:releases]. BigchainDB Server requires version 0.22.3 or newer.
|
||||
Install a [recent version of Tendermint][tendermint:releases]. BigchainDB Server requires version 0.22.8 or newer.
|
||||
|
||||
```
|
||||
sudo apt install -y unzip
|
||||
wget https://github.com/tendermint/tendermint/releases/download/v0.22.3/tendermint_0.22.3_linux_amd64.zip
|
||||
unzip tendermint_0.22.3_linux_amd64.zip
|
||||
rm tendermint_0.22.3_linux_amd64.zip
|
||||
wget https://github.com/tendermint/tendermint/releases/download/v0.22.8/tendermint_0.22.8_linux_amd64.zip
|
||||
unzip tendermint_0.22.8_linux_amd64.zip
|
||||
rm tendermint_0.22.8_linux_amd64.zip
|
||||
sudo mv tendermint /usr/local/bin
|
||||
```
|
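To confirm that the install worked, you can ask Tendermint for its version (a quick sanity check, not part of the original instructions):

```bash
tendermint version
# Expected output: 0.22.8 (or newer)
```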
||||
|
||||
@ -124,17 +129,17 @@ The `public_key` is stored in the file `.tendermint/config/priv_validator.json`,
|
||||
|
||||
```json
|
||||
{
|
||||
"address": "5943A9EF6285791A504918E1D117BC7F6A615C98",
|
||||
"address": "E22D4340E5A92E4A9AD7C62DA62888929B3921E9",
|
||||
"pub_key": {
|
||||
"type": "AC26791624DE60",
|
||||
"value": "W3tqeMCU3d4SHDKqrwQWTahTW/wpieIAKZQxUxLv6rI="
|
||||
"type": "tendermint/PubKeyEd25519",
|
||||
"value": "P+aweH73Hii8RyCmNWbwPsa9o4inq3I+0fSfprVkZa0="
|
||||
},
|
||||
"last_height": 0,
|
||||
"last_round": 0,
|
||||
"last_height": "0",
|
||||
"last_round": "0",
|
||||
"last_step": 0,
|
||||
"priv_key": {
|
||||
"type": "954568A3288910",
|
||||
"value": "3sv9aExgME6MMjx0JoKVy7KtED8PBiPcyAgsYmVizslbe2p4wJTd3hIcMqqvBBZNqFNb/CmJ4gAplDFTEu/qsg=="
|
||||
"type": "tendermint/PrivKeyEd25519",
|
||||
"value": "AHBiZXdZhkVZoPUAiMzClxhl0VvUp7Xl3YT6GvCc93A/5rB4fvceKLxHIKY1ZvA+xr2jiKercj7R9J+mtWRlrQ=="
|
||||
}
|
||||
}
|
||||
```
|
||||
@ -158,42 +163,64 @@ Share the `node_id`, `pub_key.value` and hostname of your Node with all other Me
|
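One way a Member might collect those three values is sketched below; it assumes `jq` is installed, that your Tendermint version provides the `show_node_id` command, and that Tendermint uses its default home directory `$HOME/.tendermint`:

```bash
tendermint show_node_id
jq -r .pub_key.value $HOME/.tendermint/config/priv_validator.json
hostname -f
```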
||||
At this point the Coordinator should have received the data from all the Members, and should combine them in the `.tendermint/config/genesis.json` file:
|
||||
|
||||
```json
|
||||
{
|
||||
"genesis_time": "0001-01-01T00:00:00Z",
|
||||
"chain_id": "test-chain-la6HSr",
|
||||
"validators": [
|
||||
{
|
||||
"pub_key": {
|
||||
"type": "AC26791624DE60",
|
||||
"value": "<Member 1 public key>"
|
||||
{
|
||||
"genesis_time":"0001-01-01T00:00:00Z",
|
||||
"chain_id":"test-chain-la6HSr",
|
||||
"consensus_params":{
|
||||
"block_size_params":{
|
||||
"max_bytes":"22020096",
|
||||
"max_txs":"10000",
|
||||
"max_gas":"-1"
|
||||
},
|
||||
"power": 10,
|
||||
"name": "<Member 1 name>"
|
||||
},
|
||||
{
|
||||
"pub_key": {
|
||||
"type": "AC26791624DE60",
|
||||
"value": "<Member 2 public key>"
|
||||
"tx_size_params":{
|
||||
"max_bytes":"10240",
|
||||
"max_gas":"-1"
|
||||
},
|
||||
"power": 10,
|
||||
"name": "<Member 2 name>"
|
||||
},
|
||||
{
|
||||
"...": { },
|
||||
},
|
||||
{
|
||||
"pub_key": {
|
||||
"type": "AC26791624DE60",
|
||||
"value": "<Member N public key>"
|
||||
},
|
||||
"power": 10,
|
||||
"name": "<Member N name>"
|
||||
}
|
||||
],
|
||||
"app_hash": ""
|
||||
"block_gossip_params":{
|
||||
"block_part_size_bytes":"65536"
|
||||
},
|
||||
"evidence_params":{
|
||||
"max_age":"100000"
|
||||
}
|
||||
},
|
||||
"validators":[
|
||||
{
|
||||
"pub_key":{
|
||||
"type":"AC26791624DE60",
|
||||
"value":"<Member 1 public key>"
|
||||
},
|
||||
"power":10,
|
||||
"name":"<Member 1 name>"
|
||||
},
|
||||
{
|
||||
"pub_key":{
|
||||
"type":"AC26791624DE60",
|
||||
"value":"<Member 2 public key>"
|
||||
},
|
||||
"power":10,
|
||||
"name":"<Member 2 name>"
|
||||
},
|
||||
{
|
||||
"...":{
|
||||
|
||||
},
|
||||
|
||||
},
|
||||
{
|
||||
"pub_key":{
|
||||
"type":"AC26791624DE60",
|
||||
"value":"<Member N public key>"
|
||||
},
|
||||
"power":10,
|
||||
"name":"<Member N name>"
|
||||
}
|
||||
],
|
||||
"app_hash":""
|
||||
}
|
||||
```
|
||||
|
||||
**Note:** `consensus_params` in the `genesis.json` are default values for Tendermint consensus.
|
||||
|
||||
The new `genesis.json` file contains the data that describes the Network. The key `name` is the Member's moniker; it can be any valid string, but put something human-readable like `"Alice's Node Shop"`.
|
||||
|
||||
At this point, the Coordinator must share the new `genesis.json` file with all Members.
|
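Once everyone has a copy, a simple way (not from the original guide) for the Members to confirm they all received an identical `genesis.json` is to compare checksums:

```bash
sha256sum $HOME/.tendermint/config/genesis.json
# Every Member should see the same hash value.
```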
||||
@ -219,7 +246,7 @@ persistent_peers = "<Member 1 node id>@<Member 1 hostname>:26656,\
|
||||
<Member N node id>@<Member N hostname>:26656,"
|
||||
```
|
||||
|
||||
## Member: Start MongoDB, BigchainDB and Tendermint
|
||||
## Member: Start MongoDB
|
||||
|
||||
If you installed MongoDB using `sudo apt install mongodb`, then MongoDB should already be running in the background. You can check using `systemctl status mongodb`.
|
||||
|
||||
@ -227,6 +254,10 @@ If MongoDB isn't running, then you can start it using the command `mongod`, but
|
||||
|
||||
If you installed MongoDB using `sudo apt install mongodb`, then a MongoDB startup script should already be installed (so MongoDB will start automatically when the machine is restarted). Otherwise, you should install a startup script for MongoDB.
|
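If your machine uses systemd (as Ubuntu 18.04 does), the commands below are one way to check on MongoDB and make sure it starts on boot; they are a sketch, not part of the original guide:

```bash
sudo systemctl status mongodb    # is it running?
sudo systemctl start mongodb     # start it now if it isn't
sudo systemctl enable mongodb    # start it automatically at boot
```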
||||
|
||||
## Member: Start BigchainDB and Tendermint
|
||||
|
||||
If you want to use a process manager, jump to the [next section](#member-start-bigchaindb-and-tendermint-using-monit).
|
||||
|
||||
To start BigchainDB, use the command `bigchaindb start`, but note that it will run in the foreground. If you want to run it in the background (so it will continue running after you log out), you can use `nohup`, `tmux`, or `screen`. For example, `nohup bigchaindb start > bigchaindb.log 2>&1 &`
|
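Tendermint can be started in the background in the same way; the flag shown below is the same one used later by the Monit helper script (a sketch, adapt it to your setup):

```bash
nohup tendermint node --consensus.create_empty_blocks=false > tendermint.log 2>&1 &
```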
||||
|
||||
The _recommended_ approach is to create a startup script for BigchainDB, so it will start right after the boot of the operating system. (As mentioned earlier, MongoDB should already have a startup script.)
|
||||
@ -239,10 +270,62 @@ Note: We'll share some example startup scripts in the future. This document is a
|
||||
|
||||
If you followed the recommended approach and created startup scripts for BigchainDB and Tendermint, then you can reboot the machine now. MongoDB, BigchainDB and Tendermint should all start.
|
||||
|
||||
|
||||
### Member: Start BigchainDB and Tendermint using Monit
|
||||
|
||||
This section describes how to manage the BigchainDB and Tendermint processes using [Monit][monit] - a small open-source utility for managing and monitoring Unix processes.
|
||||
|
||||
This section assumes that you followed the guide down to the [start MongoDB section](#member-start-mongodb) inclusive.
|
||||
|
||||
Install Monit:
|
||||
|
||||
```
|
||||
sudo apt install monit
|
||||
```
|
||||
|
||||
If you installed the `bigchaindb` Python package, you should have the `bigchaindb-monit-config` script in your `PATH` now.
|
||||
|
||||
Run the script:
|
||||
|
||||
```
|
||||
bigchaindb-monit-config
|
||||
```
|
||||
|
||||
The script builds a configuration file for Monit.
|
||||
|
||||
Run Monit as a daemon, instructing it to wake up every second to check on processes:
|
||||
|
||||
```
|
||||
monit -d 1
|
||||
```
|
||||
|
||||
It will run the processes and restart them when they crash. If the root `bigchaindb_` process crashes, Monit will also restart the Tendermint process.
|
||||
|
||||
Check the status by running `monit status` or `monit summary`.
|
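Monit can also act on individual processes, using the names defined in its configuration (`bigchaindb` and `tendermint` here). For example:

```bash
monit summary
monit restart tendermint   # restart one managed process
monit stop all             # stop everything Monit manages
```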
||||
|
||||
By default, it will collect program logs into the `~/.bigchaindb-monit/logs` folder.
|
||||
|
||||
Consult `monit -h` or [the Monit documentation][monit-manual] to learn more about what Monit can do.
|
||||
|
||||
Run `bigchaindb-monit-config -h` if you want to use a different folder for the logs or for Monit's internal artifacts.
|
||||
|
||||
## How Others Can Access Your Node
|
||||
|
||||
If you followed the above instructions, then your node should be publicly-accessible with BigchainDB Root URL `http://hostname:9984` (where hostname is something like `bdb7.canada.vmsareus.net` or `17.122.200.76`). That is, anyone can interact with your node using the [BigchainDB HTTP API](http-client-server-api.html) exposed at that address. The most common way to do that is to use one of the [BigchainDB Drivers](./drivers-clients/index.html).
|
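For example, a quick way to check that the HTTP API is reachable (replace the hostname with your node's; `jq` is optional and only pretty-prints the JSON response):

```bash
curl -s http://bdb7.canada.vmsareus.net:9984/ | jq .
```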
||||
|
||||
## Troubleshooting
|
||||
|
||||
To check which nodes your node is connected to (via Tendermint protocols), do:
|
||||
|
||||
```text
|
||||
# if you don't have jq installed, then install it
|
||||
sudo apt install jq
|
||||
# then do
|
||||
curl -s localhost:26657/net_info | jq ".result.peers[].node_info | {id, listen_addr, moniker}"
|
||||
```
|
||||
|
||||
Tendermint has other endpoints besides `/net_info`: see [the Tendermint RPC docs](https://tendermint.github.io/slate/?shell#introduction).
|
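For example, assuming the default RPC port, the `/status` endpoint reports the node's sync state and latest block height:

```bash
curl -s localhost:26657/status | jq .result
```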
||||
|
||||
## Refreshing Your Node
|
||||
|
||||
If you want to refresh your node back to a fresh empty state, then your best bet is to terminate it and deploy a new virtual machine, but if that's not an option, then you can:
|
||||
@ -251,6 +334,48 @@ If you want to refresh your node back to a fresh empty state, then your best bet
|
||||
- reset Tendermint using `tendermint unsafe_reset_all`
|
||||
- delete the directory `$HOME/.tendermint`
|
||||
|
||||
## Shutting down BigchainDB
|
||||
|
||||
If you want to stop or kill BigchainDB, you can do so by sending `SIGINT`, `SIGQUIT` or `SIGTERM` to the running BigchainDB
|
||||
process(es). The exact steps depend on whether you started BigchainDB in the foreground or in the background. For example, if you started BigchainDB in the background as described above in the guide:
|
||||
|
||||
```bash
|
||||
$ nohup bigchaindb start > bigchaindb.log 2>&1 &
|
||||
|
||||
$ # Check the PID of the main BigchainDB process
|
||||
$ ps -ef | grep bigchaindb
|
||||
<user> *<pid> <ppid> <C> <STIME> <tty> <time> bigchaindb
|
||||
<user> <pid> <ppid>* <C> <STIME> <tty> <time> gunicorn: master [bigchaindb_gunicorn]
|
||||
<user> <pid> <ppid>* <C> <STIME> <tty> <time> bigchaindb_ws
|
||||
<user> <pid> <ppid>* <C> <STIME> <tty> <time> bigchaindb_ws_to_tendermint
|
||||
<user> <pid> <ppid>* <C> <STIME> <tty> <time> bigchaindb_exchange
|
||||
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
|
||||
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
|
||||
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
|
||||
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
|
||||
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
|
||||
...
|
||||
|
||||
$ # Send any of the above-mentioned signals to the parent/root process (marked with `*` for clarity)
|
||||
# Sending SIGINT
|
||||
$ kill -2 <bigchaindb_parent_pid>
|
||||
|
||||
$ # OR
|
||||
|
||||
# Sending SIGTERM
|
||||
$ kill -15 <bigchaindb_parent_pid>
|
||||
|
||||
$ # OR
|
||||
|
||||
# Sending SIGQUIT
|
||||
$ kill -3 <bigchaindb_parent_pid>
|
||||
|
||||
# If you want to kill all the processes by name yourself
|
||||
$ pgrep bigchaindb | xargs kill -9
|
||||
```
|
||||
|
||||
If you started BigchainDB in the foreground, a `Ctrl + C` or `Ctrl + Z` would shut down BigchainDB.
|
||||
|
||||
## Member: Dynamically Add a New Member to the Network
|
||||
|
||||
TBD.
|
||||
@ -259,3 +384,5 @@ TBD.
|
||||
[bdb:software]: https://github.com/bigchaindb/bigchaindb/
|
||||
[bdb:pypi]: https://pypi.org/project/BigchainDB/#history
|
||||
[tendermint:releases]: https://github.com/tendermint/tendermint/releases
|
||||
[monit]: https://www.mmonit.com/monit
|
||||
[monit-manual]: https://mmonit.com/monit/documentation/monit.html
|
||||
|
||||
@ -154,7 +154,7 @@ spec:
|
||||
timeoutSeconds: 15
|
||||
# BigchainDB container
|
||||
- name: bigchaindb
|
||||
image: bigchaindb/bigchaindb:2.0.0-beta1
|
||||
image: bigchaindb/bigchaindb:2.0.0-beta5
|
||||
imagePullPolicy: Always
|
||||
args:
|
||||
- start
|
||||
|
||||
@ -1,4 +1,4 @@
|
||||
FROM tendermint/tendermint:0.22.3
|
||||
FROM tendermint/tendermint:0.22.8
|
||||
LABEL maintainer "dev@bigchaindb.com"
|
||||
WORKDIR /
|
||||
USER root
|
||||
|
||||
@ -34,7 +34,7 @@ spec:
|
||||
terminationGracePeriodSeconds: 10
|
||||
containers:
|
||||
- name: bigchaindb
|
||||
image: bigchaindb/bigchaindb:2.0.0-beta1
|
||||
image: bigchaindb/bigchaindb:2.0.0-beta5
|
||||
imagePullPolicy: Always
|
||||
args:
|
||||
- start
|
||||
|
||||
@ -15,13 +15,11 @@ The derived files (`nginx.conf.template` and `nginx.lua.template`), along with
|
||||
the other files in this directory, are _also_ licensed under an MIT License,
|
||||
the text of which can be found below.
|
||||
|
||||
## Documentation Licenses
|
||||
|
||||
# Documentation Licenses
|
||||
|
||||
The documentation in this directory is licensed under a Creative Commons Attribution-ShareAlike
|
||||
The documentation in this directory is licensed under a Creative Commons Attribution
|
||||
4.0 International license, the full text of which can be found at
|
||||
[http://creativecommons.org/licenses/by-sa/4.0/legalcode](http://creativecommons.org/licenses/by-sa/4.0/legalcode).
|
||||
|
||||
[http://creativecommons.org/licenses/by/4.0/legalcode](http://creativecommons.org/licenses/by/4.0/legalcode).
|
||||
|
||||
<hr>
|
||||
|
||||
@ -47,7 +45,6 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
|
||||
|
||||
<hr>
|
||||
|
||||
The MIT License
|
||||
@ -71,4 +68,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
|
||||
|
||||
@ -13,6 +13,7 @@
|
||||
- name: Get Running BigchainDB Process(es)
|
||||
shell: "ps aux | grep \"[b]igchaindb\" | awk '{print $2}'"
|
||||
register: bdb_ps
|
||||
ignore_errors: yes
|
||||
when: stack_type|lower == "local"
|
||||
tags: [bigchaindb]
|
||||
|
||||
@ -30,4 +31,4 @@
|
||||
- gunicorn
|
||||
ignore_errors: yes
|
||||
when: stack_type|lower == "local"
|
||||
tags: [bigchaindb]
|
||||
tags: [bigchaindb]
|
||||
|
||||
@ -8,7 +8,8 @@ distribution_major: "{{ ansible_distribution_major_version }}"
|
||||
server_arch: "amd64,arm64"
|
||||
|
||||
# MongoDB Repos
|
||||
mongodb_apt_repo: "deb [arch={{ server_arch }}] http://repo.mongodb.org/apt/{{ distribution_name }} {{ distribution_codename }}/{{ mongodb_package }}/{{ mongo_version }} {{'main' if ansible_distribution == 'debian' else 'multiverse'}}"
|
||||
mongodb_apt_repo: "deb [arch={{ server_arch }}] http://repo.mongodb.org/apt/{{ distribution_name }} {{ distribution_codename }}/{{ mongodb_package }}/{{ mongo_version }} multiverse"
|
||||
mongodb_deb_repo: "deb http://repo.mongodb.org/apt/{{ distribution_name }} {{ distribution_codename }}/{{ mongodb_package }}/{{ mongo_version }} main"
|
||||
mongodb_yum_base_url: "https://repo.mongodb.org/yum/{{ ansible_os_family|lower }}/$releasever/{{ mongodb_package }}/{{ mongo_version }}/{{ ansible_architecture }}"
|
||||
mongodb_dnf_base_url: "https://repo.mongodb.org/yum/{{ ansible_os_family|lower }}/7/{{ mongodb_package }}/{{ mongo_version }}/{{ ansible_architecture }}"
|
||||
|
||||
|
||||
@ -22,6 +22,15 @@
|
||||
repo: "{{ mongodb_apt_repo }}"
|
||||
state: present
|
||||
update_cache: no
|
||||
when: distribution_name == "ubuntu"
|
||||
tags: [mongodb]
|
||||
|
||||
- name: Add MongoDB repo and update cache | deb
|
||||
apt_repository:
|
||||
repo: "{{ mongodb_deb_repo }}"
|
||||
state: present
|
||||
update_cache: no
|
||||
when: distribution_name == "debian"
|
||||
tags: [mongodb]
|
||||
|
||||
- name: Install MongoDB | apt
|
||||
@ -31,4 +40,4 @@
|
||||
update_cache: yes
|
||||
with_items:
|
||||
- "{{ mongodb_package }}"
|
||||
tags: [mongodb]
|
||||
tags: [mongodb]
|
||||
|
||||
@ -1,4 +1,4 @@
|
||||
ARG tm_version=0.22.3
|
||||
ARG tm_version=0.22.8
|
||||
FROM tendermint/tendermint:${tm_version}
|
||||
LABEL maintainer "dev@bigchaindb.com"
|
||||
WORKDIR /
|
||||
|
||||
14 pkg/scripts/all-in-one.bash (Executable file)
@ -0,0 +1,14 @@
|
||||
#!/bin/bash
|
||||
|
||||
# MongoDB configuration
|
||||
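# Make sure the MongoDB data directory is owned by the mongodb user before mongod starts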
[ "$(stat -c %U /data/db)" = mongodb ] || chown -R mongodb /data/db
|
||||
|
||||
# BigchainDB configuration
|
||||
bigchaindb-monit-config
|
||||
|
||||
nohup mongod > "$HOME/.bigchaindb-monit/logs/mongodb_log_$(date +%Y%m%d_%H%M%S)" 2>&1 &
|
||||
|
||||
# Tendermint configuration
|
||||
tendermint init
|
||||
|
||||
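# Run Monit in the foreground (-I), in batch mode (-B), polling the managed processes every 5 seconds (-d 5)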
monit -d 5 -I -B
|
||||
173 pkg/scripts/bigchaindb-monit-config (Normal file)
@ -0,0 +1,173 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -o nounset
|
||||
|
||||
# Check if the base directory for BigchainDB's Monit artifacts exists
|
||||
if [ ! -d "$HOME/.bigchaindb-monit" ]; then
|
||||
mkdir -p "$HOME/.bigchaindb-monit"
|
||||
fi
|
||||
|
||||
monit_pid_path=${MONIT_PID_PATH:=$HOME/.bigchaindb-monit/monit_processes}
|
||||
monit_script_path=${MONIT_SCRIPT_PATH:=$HOME/.bigchaindb-monit/monit_script}
|
||||
monit_log_path=${MONIT_LOG_PATH:=$HOME/.bigchaindb-monit/logs}
|
||||
monitrc_path=${MONITRC_PATH:=$HOME/.monitrc}
|
||||
|
||||
function usage() {
|
||||
cat <<EOM
|
||||
|
||||
Usage: ${0##*/} [-h]
|
||||
|
||||
Configure Monit for BigchainDB and Tendermint process management.
|
||||
|
||||
ENV[MONIT_PID_PATH] || --monit-pid-path PATH
|
||||
|
||||
Absolute path to the directory where the program's pid-file will reside.
|
||||
The pid-file contains the ID(s) of the process(es). (default: ${monit_pid_path})
|
||||
|
||||
ENV[MONIT_SCRIPT_PATH] || --monit-script-path PATH
|
||||
|
||||
Absolute path where the helper script that Monit uses to start and
|
||||
stop the processes will be written. (default: ${monit_script_path})
|
||||
|
||||
ENV[MONIT_LOG_PATH] || --monit-log-path PATH
|
||||
|
||||
Absolute path to the directory where all the logs for processes
|
||||
monitored by Monit are stored. (default: ${monit_log_path})
|
||||
|
||||
ENV[MONITRC_PATH] || --monitrc-path PATH
|
||||
|
||||
Absolute path to the Monit control file (monitrc). (default: ${monitrc_path})
|
||||
|
||||
-h|--help
|
||||
Show this help and exit.
|
||||
|
||||
EOM
|
||||
}
|
||||
|
||||
while [[ $# -gt 0 ]]; do
|
||||
arg="$1"
|
||||
case $arg in
|
||||
--monit-pid-path)
|
||||
monit_pid_path="$2"
|
||||
shift
|
||||
;;
|
||||
--monit-script-path)
|
||||
monit_script_path="$2"
|
||||
shift
|
||||
;;
|
||||
--monit-log-path)
|
||||
monit_log_path="$2"
|
||||
shift
|
||||
;;
|
||||
--monitrc-path)
|
||||
monitrc_path="$2"
|
||||
shift
|
||||
;;
|
||||
-h|--help)
|
||||
usage
|
||||
exit
|
||||
;;
|
||||
*)
|
||||
echo "Unknown option: $1"
|
||||
usage
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
shift
|
||||
done
|
||||
|
||||
# Check if directory for monit logs exists
|
||||
if [ ! -d "$monit_log_path" ]; then
|
||||
mkdir -p "$monit_log_path"
|
||||
fi
|
||||
|
||||
# Check if directory for monit pid files exists
|
||||
if [ ! -d "$monit_pid_path" ]; then
|
||||
mkdir -p "$monit_pid_path"
|
||||
fi
|
||||
|
||||
cat >${monit_script_path} <<EOF
|
||||
#!/bin/bash
|
||||
case \$1 in
|
||||
|
||||
start_bigchaindb)
|
||||
|
||||
pushd \$4
|
||||
nohup bigchaindb -l DEBUG start >> \$3/bigchaindb.out.log 2>> \$3/bigchaindb.err.log &
|
||||
|
||||
echo \$! > \$2
|
||||
popd
|
||||
|
||||
;;
|
||||
|
||||
stop_bigchaindb)
|
||||
|
||||
kill -2 \`cat \$2\`
|
||||
rm -f \$2
|
||||
|
||||
;;
|
||||
|
||||
start_tendermint)
|
||||
|
||||
pushd \$4
|
||||
nohup tendermint node --consensus.create_empty_blocks=false >> \$3/tendermint.out.log 2>> \$3/tendermint.err.log &
|
||||
|
||||
echo \$! > \$2
|
||||
popd
|
||||
|
||||
;;
|
||||
|
||||
stop_tendermint)
|
||||
|
||||
kill -2 \`cat \$2\`
|
||||
rm -f \$2
|
||||
|
||||
;;
|
||||
|
||||
esac
|
||||
exit 0
|
||||
EOF
|
||||
chmod +x ${monit_script_path}
|
||||
|
||||
# Handling overwriting of control file interactively
|
||||
if [ -f "$monitrc_path" ]; then
|
||||
echo "$monitrc_path already exists."
|
||||
read -p "Overwrite[Y]? " -n 1 -r
|
||||
echo
|
||||
if [[ $REPLY =~ ^[Yy]$ ]]; then
|
||||
echo "Overriding $monitrc_path/.monitrc"
|
||||
else
|
||||
read -p "Enter absolute path to store Monit control file: " monitrc_path
|
||||
eval monitrc_path="$monitrc_path"
|
||||
if [ ! -d "$(dirname $monitrc_path)" ]; then
|
||||
echo "Failed to save monit control file '$monitrc_path': No such file or directory."
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# configure monitrc
|
||||
cat >${monitrc_path} <<EOF
|
||||
set httpd
|
||||
port 2812
|
||||
allow localhost
|
||||
|
||||
check process bigchaindb
|
||||
with pidfile ${monit_pid_path}/bigchaindb.pid
|
||||
start program "${monit_script_path} start_bigchaindb $monit_pid_path/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
|
||||
restart program "${monit_script_path} start_bigchaindb $monit_pid_path/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
|
||||
stop program "${monit_script_path} stop_bigchaindb $monit_pid_path/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
|
||||
|
||||
check process tendermint
|
||||
with pidfile ${monit_pid_path}/tendermint.pid
|
||||
start program "${monit_script_path} start_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
|
||||
restart program "${monit_script_path} start_bigchaindb ${monit_pid_path}/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
|
||||
stop program "${monit_script_path} stop_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
|
||||
depends on bigchaindb
|
||||
EOF
|
||||
|
||||
# Setting permissions for control file
|
||||
chmod 0700 ${monitrc_path}
|
||||
|
||||
echo -e "BigchainDB process manager configured!"
|
||||
set -o errexit
|
||||
@ -11,7 +11,7 @@ stack_repo=${STACK_REPO:="bigchaindb/bigchaindb"}
|
||||
stack_size=${STACK_SIZE:=4}
|
||||
stack_type=${STACK_TYPE:="docker"}
|
||||
stack_type_provider=${STACK_TYPE_PROVIDER:=""}
|
||||
tm_version=${TM_VERSION:="0.22.3"}
|
||||
tm_version=${TM_VERSION:="0.22.8"}
|
||||
mongo_version=${MONGO_VERSION:="3.6"}
|
||||
stack_vm_memory=${STACK_VM_MEMORY:=2048}
|
||||
stack_vm_cpus=${STACK_VM_CPUS:=2}
|
||||
|
||||
@ -11,7 +11,7 @@ stack_repo=${STACK_REPO:="bigchaindb/bigchaindb"}
|
||||
stack_size=${STACK_SIZE:=4}
|
||||
stack_type=${STACK_TYPE:="docker"}
|
||||
stack_type_provider=${STACK_TYPE_PROVIDER:=""}
|
||||
tm_version=${TM_VERSION:="0.22.3"}
|
||||
tm_version=${TM_VERSION:="0.22.8"}
|
||||
mongo_version=${MONGO_VERSION:="3.6"}
|
||||
stack_vm_memory=${STACK_VM_MEMORY:=2048}
|
||||
stack_vm_cpus=${STACK_VM_CPUS:=2}
|
||||
|
||||
6 setup.py
@ -56,7 +56,7 @@ tests_require = [
|
||||
'flake8-quotes==0.8.1',
|
||||
'hypothesis~=3.18.5',
|
||||
'hypothesis-regex',
|
||||
'pylint',
|
||||
# Removed pylint because its GPL license isn't Apache2-compatible
|
||||
'pytest>=3.0.0',
|
||||
'pytest-cov>=2.2.1',
|
||||
'pytest-mock',
|
||||
@ -85,7 +85,7 @@ install_requires = [
|
||||
'gunicorn~=19.0',
|
||||
'jsonschema~=2.5.1',
|
||||
'pyyaml~=3.12',
|
||||
'aiohttp~=2.3',
|
||||
'aiohttp~=3.0',
|
||||
'python-rapidjson-schema==0.1.1',
|
||||
'bigchaindb-abci==0.5.1',
|
||||
'setproctitle~=1.1.0',
|
||||
@ -128,6 +128,8 @@ setup(
|
||||
|
||||
packages=find_packages(exclude=['tests*']),
|
||||
|
||||
scripts = ['pkg/scripts/bigchaindb-monit-config'],
|
||||
|
||||
entry_points={
|
||||
'console_scripts': [
|
||||
'bigchaindb=bigchaindb.commands.bigchaindb:main'
|
||||
|
||||
@ -12,7 +12,7 @@ def test_asset_transfer(b, signed_create_tx, user_pk, user_sk):
|
||||
signed_create_tx.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([signed_create_tx, tx_transfer])
|
||||
b.store_bulk_transactions([signed_create_tx])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert tx_transfer_signed.asset['id'] == signed_create_tx.id
|
||||
@ -27,7 +27,7 @@ def test_validate_transfer_asset_id_mismatch(b, signed_create_tx, user_pk, user_
|
||||
tx_transfer.asset['id'] = 'a' * 64
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([signed_create_tx, tx_transfer_signed])
|
||||
b.store_bulk_transactions([signed_create_tx])
|
||||
|
||||
with pytest.raises(AssetIdMismatch):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
@ -1,6 +1,8 @@
|
||||
import pytest
|
||||
import random
|
||||
|
||||
from bigchaindb.common.exceptions import DoubleSpend
|
||||
|
||||
|
||||
pytestmark = pytest.mark.tendermint
|
||||
|
||||
@ -127,7 +129,7 @@ def test_single_in_single_own_single_out_single_own_transfer(alice, b, user_pk,
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b)
|
||||
assert len(tx_transfer_signed.outputs) == 1
|
||||
@ -154,7 +156,7 @@ def test_single_in_single_own_multiple_out_single_own_transfer(alice, b, user_pk
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 2
|
||||
@ -182,7 +184,7 @@ def test_single_in_single_own_single_out_multiple_own_transfer(alice, b, user_pk
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 1
|
||||
@ -194,6 +196,10 @@ def test_single_in_single_own_single_out_multiple_own_transfer(alice, b, user_pk
|
||||
|
||||
assert len(tx_transfer_signed.inputs) == 1
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
# TRANSFER divisible asset
|
||||
# Single input
|
||||
@ -215,7 +221,7 @@ def test_single_in_single_own_multiple_out_mix_own_transfer(alice, b, user_pk,
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 2
|
||||
@ -228,6 +234,10 @@ def test_single_in_single_own_multiple_out_mix_own_transfer(alice, b, user_pk,
|
||||
|
||||
assert len(tx_transfer_signed.inputs) == 1
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
# TRANSFER divisible asset
|
||||
# Single input
|
||||
@ -249,7 +259,7 @@ def test_single_in_multiple_own_single_out_single_own_transfer(alice, b, user_pk
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 1
|
||||
@ -260,6 +270,10 @@ def test_single_in_multiple_own_single_out_single_own_transfer(alice, b, user_pk
|
||||
assert 'subconditions' in ffill
|
||||
assert len(ffill['subconditions']) == 2
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
# TRANSFER divisible asset
|
||||
# Multiple inputs
|
||||
@ -280,13 +294,17 @@ def test_multiple_in_single_own_single_out_single_own_transfer(alice, b, user_pk
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b)
|
||||
assert len(tx_transfer_signed.outputs) == 1
|
||||
assert tx_transfer_signed.outputs[0].amount == 100
|
||||
assert len(tx_transfer_signed.inputs) == 2
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
# TRANSFER divisible asset
|
||||
# Multiple inputs
|
||||
@ -309,9 +327,9 @@ def test_multiple_in_multiple_own_single_out_single_own_transfer(alice, b, user_
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b)
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 1
|
||||
assert tx_transfer_signed.outputs[0].amount == 100
|
||||
assert len(tx_transfer_signed.inputs) == 2
|
||||
@ -323,6 +341,10 @@ def test_multiple_in_multiple_own_single_out_single_own_transfer(alice, b, user_
|
||||
assert len(ffill_fid0['subconditions']) == 2
|
||||
assert len(ffill_fid1['subconditions']) == 2
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
# TRANSFER divisible asset
|
||||
# Multiple inputs
|
||||
@ -345,7 +367,7 @@ def test_muiltiple_in_mix_own_multiple_out_single_own_transfer(alice, b, user_pk
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 1
|
||||
@ -358,6 +380,10 @@ def test_muiltiple_in_mix_own_multiple_out_single_own_transfer(alice, b, user_pk
|
||||
assert 'subconditions' in ffill_fid1
|
||||
assert len(ffill_fid1['subconditions']) == 2
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
# TRANSFER divisible asset
|
||||
# Multiple inputs
|
||||
@ -382,7 +408,7 @@ def test_muiltiple_in_mix_own_multiple_out_mix_own_transfer(alice, b, user_pk,
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 2
|
||||
@ -402,6 +428,10 @@ def test_muiltiple_in_mix_own_multiple_out_mix_own_transfer(alice, b, user_pk,
|
||||
assert 'subconditions' in ffill_fid1
|
||||
assert len(ffill_fid1['subconditions']) == 2
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
# TRANSFER divisible asset
|
||||
# Multiple inputs from different transactions
|
||||
@ -436,7 +466,7 @@ def test_multiple_in_different_transactions(alice, b, user_pk, user_sk):
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer2_signed = tx_transfer2.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer1_signed, tx_transfer2_signed])
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer1_signed])
|
||||
|
||||
assert tx_transfer2_signed.validate(b) == tx_transfer2_signed
|
||||
assert len(tx_transfer2_signed.outputs) == 1
|
||||
@ -501,10 +531,14 @@ def test_threshold_same_public_key(alice, b, user_pk, user_sk):
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk, user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
def test_sum_amount(alice, b, user_pk, user_sk):
|
||||
from bigchaindb.models import Transaction
|
||||
@ -520,12 +554,16 @@ def test_sum_amount(alice, b, user_pk, user_sk):
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 1
|
||||
assert tx_transfer_signed.outputs[0].amount == 3
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
|
||||
def test_divide(alice, b, user_pk, user_sk):
|
||||
from bigchaindb.models import Transaction
|
||||
@ -541,9 +579,13 @@ def test_divide(alice, b, user_pk, user_sk):
|
||||
asset_id=tx_create.id)
|
||||
tx_transfer_signed = tx_transfer.sign([user_sk])
|
||||
|
||||
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
|
||||
b.store_bulk_transactions([tx_create_signed])
|
||||
|
||||
assert tx_transfer_signed.validate(b) == tx_transfer_signed
|
||||
assert len(tx_transfer_signed.outputs) == 3
|
||||
for output in tx_transfer_signed.outputs:
|
||||
assert output.amount == 1
|
||||
|
||||
b.store_bulk_transactions([tx_transfer_signed])
|
||||
with pytest.raises(DoubleSpend):
|
||||
tx_transfer_signed.validate(b)
|
||||
|
||||
@ -229,7 +229,7 @@ def test_get_spending_transactions(user_pk, user_sk):
|
||||
|
||||
def test_store_block():
|
||||
from bigchaindb.backend import connect, query
|
||||
from bigchaindb.tendermint.lib import Block
|
||||
from bigchaindb.lib import Block
|
||||
conn = connect()
|
||||
|
||||
block = Block(app_hash='random_utxo',
|
||||
@ -242,7 +242,7 @@ def test_store_block():
|
||||
|
||||
def test_get_block():
|
||||
from bigchaindb.backend import connect, query
|
||||
from bigchaindb.tendermint.lib import Block
|
||||
from bigchaindb.lib import Block
|
||||
conn = connect()
|
||||
|
||||
block = Block(app_hash='random_utxo',
|
||||
@ -345,7 +345,7 @@ def test_get_unspent_outputs(db_context, utxoset):
|
||||
|
||||
def test_store_pre_commit_state(db_context):
|
||||
from bigchaindb.backend import query
|
||||
from bigchaindb.tendermint.lib import PreCommitState
|
||||
from bigchaindb.lib import PreCommitState
|
||||
|
||||
state = PreCommitState(commit_id='test',
|
||||
height=3,
|
||||
@ -359,7 +359,7 @@ def test_store_pre_commit_state(db_context):
|
||||
|
||||
def test_get_pre_commit_state(db_context):
|
||||
from bigchaindb.backend import query
|
||||
from bigchaindb.tendermint.lib import PreCommitState
|
||||
from bigchaindb.lib import PreCommitState
|
||||
|
||||
state = PreCommitState(commit_id='test2',
|
||||
height=3,
|
||||
@ -370,22 +370,23 @@ def test_get_pre_commit_state(db_context):
|
||||
assert resp == state._asdict()
|
||||
|
||||
|
||||
def test_store_validator_update():
|
||||
def test_validator_update():
|
||||
from bigchaindb.backend import connect, query
|
||||
from bigchaindb.backend.query import VALIDATOR_UPDATE_ID
|
||||
from bigchaindb.common.exceptions import MultipleValidatorOperationError
|
||||
|
||||
conn = connect()
|
||||
|
||||
validator_update = {'validator': {'key': 'value'},
|
||||
'update_id': VALIDATOR_UPDATE_ID}
|
||||
query.store_validator_update(conn, deepcopy(validator_update))
|
||||
def gen_validator_update(height):
|
||||
return {'data': 'somedata', 'height': height}
|
||||
|
||||
with pytest.raises(MultipleValidatorOperationError):
|
||||
query.store_validator_update(conn, deepcopy(validator_update))
|
||||
for i in range(1, 100, 10):
|
||||
value = gen_validator_update(i)
|
||||
query.store_validator_set(conn, value)
|
||||
|
||||
resp = query.get_validator_update(conn, VALIDATOR_UPDATE_ID)
|
||||
v1 = query.get_validator_set(conn, 8)
|
||||
assert v1['height'] == 1
|
||||
|
||||
assert resp == validator_update
|
||||
assert query.delete_validator_update(conn, VALIDATOR_UPDATE_ID)
|
||||
assert not query.get_validator_update(conn, VALIDATOR_UPDATE_ID)
|
||||
v41 = query.get_validator_set(conn, 50)
|
||||
assert v41['height'] == 41
|
||||
|
||||
v91 = query.get_validator_set(conn)
|
||||
assert v91['height'] == 91
|
||||
|
||||
@ -40,7 +40,7 @@ def test_init_creates_db_tables_and_indexes():
|
||||
assert set(indexes) == {'_id_', 'pre_commit_id'}
|
||||
|
||||
indexes = conn.conn[dbname]['validators'].index_information().keys()
|
||||
assert set(indexes) == {'_id_', 'update_id'}
|
||||
assert set(indexes) == {'_id_', 'height'}
|
||||
|
||||
|
||||
def test_init_database_fails_if_db_exists():
|
||||
|
||||
@ -90,7 +90,7 @@ def test_bigchain_run_init_when_db_exists(mocker, capsys):
|
||||
def test__run_init(mocker):
|
||||
from bigchaindb.commands.bigchaindb import _run_init
|
||||
bigchain_mock = mocker.patch(
|
||||
'bigchaindb.commands.bigchaindb.bigchaindb.tendermint.BigchainDB')
|
||||
'bigchaindb.commands.bigchaindb.bigchaindb.BigchainDB')
|
||||
init_db_mock = mocker.patch(
|
||||
'bigchaindb.commands.bigchaindb.schema.init_database',
|
||||
autospec=True,
|
||||
@ -274,7 +274,7 @@ def test_calling_main(start_mock, base_parser_mock, parse_args_mock,
|
||||
|
||||
@patch('bigchaindb.config_utils.autoconfigure')
|
||||
@patch('bigchaindb.commands.bigchaindb.run_recover')
|
||||
@patch('bigchaindb.tendermint.commands.start')
|
||||
@patch('bigchaindb.start.start')
|
||||
def test_recover_db_on_start(mock_autoconfigure,
|
||||
mock_run_recover,
|
||||
mock_start,
|
||||
@ -293,7 +293,7 @@ def test_recover_db_on_start(mock_autoconfigure,
|
||||
def test_run_recover(b, alice, bob):
|
||||
from bigchaindb.commands.bigchaindb import run_recover
|
||||
from bigchaindb.models import Transaction
|
||||
from bigchaindb.tendermint.lib import Block, PreCommitState
|
||||
from bigchaindb.lib import Block, PreCommitState
|
||||
from bigchaindb.backend.query import PRE_COMMIT_ID
|
||||
from bigchaindb.backend import query
|
||||
|
||||
@ -341,6 +341,7 @@ class MockResponse():
|
||||
return {'result': {'latest_block_height': self.height}}
|
||||
|
||||
|
||||
@pytest.mark.skip
|
||||
@patch('bigchaindb.config_utils.autoconfigure')
|
||||
@patch('bigchaindb.backend.query.store_validator_update')
|
||||
@pytest.mark.tendermint
|
||||
|
||||
@ -11,7 +11,6 @@ USER2_PUBLIC_KEY = 'GDxwMFbwdATkQELZbMfW8bd9hbNYMZLyVXA3nur2aNbE'
|
||||
USER3_PRIVATE_KEY = '4rNQFzWQbVwuTiDVxwuFMvLG5zd8AhrQKCtVovBvcYsB'
|
||||
USER3_PUBLIC_KEY = 'Gbrg7JtxdjedQRmr81ZZbh1BozS7fBW88ZyxNDy7WLNC'
|
||||
|
||||
|
||||
CC_FULFILLMENT_URI = (
|
||||
'pGSAINdamAGCsQq31Uv-08lkBzoO4XLz2qYjJa8CGmj3B1EagUDlVkMAw2CscpCG4syAboKKh'
|
||||
'Id_Hrjl2XTYc-BlIkkBVV-4ghWQozusxh45cBz5tGvSW_XwWVu-JGVRQUOOehAL'
|
||||
@ -26,10 +25,6 @@ ASSET_DEFINITION = {
|
||||
}
|
||||
}
|
||||
|
||||
ASSET_LINK = {
|
||||
'id': 'a' * 64
|
||||
}
|
||||
|
||||
DATA = {
|
||||
'msg': 'Hello BigchainDB!'
|
||||
}
|
||||
@ -104,12 +99,6 @@ def user_input(user_Ed25519, user_pub):
|
||||
return Input(user_Ed25519, [user_pub])
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def user2_input(user2_Ed25519, user2_pub):
|
||||
from bigchaindb.common.transaction import Input
|
||||
return Input(user2_Ed25519, [user2_pub])
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def user_user2_threshold_output(user_user2_threshold, user_pub, user2_pub):
|
||||
from bigchaindb.common.transaction import Output
|
||||
@ -139,11 +128,6 @@ def asset_definition():
|
||||
return ASSET_DEFINITION
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def asset_link():
|
||||
return ASSET_LINK
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def data():
|
||||
return DATA
|
||||
@ -200,7 +184,7 @@ def dummy_transaction():
|
||||
},
|
||||
'public_keys': [58 * 'b']
|
||||
}],
|
||||
'version': '1.0'
|
||||
'version': '2.0'
|
||||
}
|
||||
|
||||
|
||||
@ -271,38 +255,6 @@ def fulfilled_transaction():
|
||||
}
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def fulfilled_and_hashed_transaction():
|
||||
return {
|
||||
'asset': {
|
||||
'data': {
|
||||
'msg': 'Hello BigchainDB!',
|
||||
}
|
||||
},
|
||||
'id': '7a7c827cf4ef7985f08f4e9d16f5ffc58ca4e82271921dfbed32e70cb462485f',
|
||||
'inputs': [{
|
||||
'fulfillment': ('pGSAIP_2P1Juh-94sD3uno1lxMPd9EkIalRo7QB014pT6dD9g'
|
||||
'UANRNxasDy1Dfg9C2Fk4UgHdYFsJzItVYi5JJ_vWc6rKltn0k'
|
||||
'jagynI0xfyR6X9NhzccTt5oiNH9mThEb4QmagN'),
|
||||
'fulfills': None,
|
||||
'owners_before': ['JEAkEJqLbbgDRAtMm8YAjGp759Aq2qTn9eaEHUj2XePE']
|
||||
}],
|
||||
'metadata': None,
|
||||
'operation': 'CREATE',
|
||||
'outputs': [{
|
||||
'amount': '1',
|
||||
'condition': {
|
||||
'details': {
|
||||
'public_key': 'JEAkEJqLbbgDRAtMm8YAjGp759Aq2qTn9eaEHUj2XePE',
|
||||
'type': 'ed25519-sha-256'
|
||||
},
|
||||
'uri': 'ni:///sha-256;49C5UWNODwtcINxLgLc90bMCFqCymFYONGEmV4a0sG4?fpt=ed25519-sha-256&cost=131072'},
|
||||
'public_keys': ['JEAkEJqLbbgDRAtMm8YAjGp759Aq2qTn9eaEHUj2XePE']
|
||||
}],
|
||||
'version': '1.0'
|
||||
}
|
||||
|
||||
|
||||
# TODO For reviewers: Pick which approach you like best: parametrized or not?
|
||||
@pytest.fixture(params=(
|
||||
{'id': None,
|
||||
|
||||
@@ -4,6 +4,8 @@ properties related to validation.

+from unittest.mock import patch

import pytest

from hypothesis import given
from hypothesis_regex import regex
from pytest import raises
@@ -19,9 +21,13 @@ UNSUPPORTED_CRYPTOCONDITION_TYPES = (
    'preimage-sha-256', 'prefix-sha-256', 'rsa-sha-256')


+pytestmark = pytest.mark.tendermint


################################################################################
# Test of schema utils


def _test_additionalproperties(node, path=''):
    """Validate that each object node has additionalProperties set, so that
    objects with junk keys do not pass as valid.

@@ -17,7 +17,11 @@ from pymongo import MongoClient

from bigchaindb.common import crypto
from bigchaindb.log import setup_logging
-from bigchaindb.tendermint.lib import Block
+from bigchaindb.tendermint_utils import key_from_base64
+from bigchaindb.common.crypto import (key_pair_from_ed25519_key,
+                                      public_key_from_ed25519_key)
+from bigchaindb.lib import Block


TEST_DB_NAME = 'bigchain_test'

@@ -269,13 +273,13 @@ def merlin_pubkey(merlin):

@pytest.fixture
def b():
-    from bigchaindb.tendermint import BigchainDB
+    from bigchaindb import BigchainDB
    return BigchainDB()


@pytest.fixture
def tb():
-    from bigchaindb.tendermint import BigchainDB
+    from bigchaindb import BigchainDB
    return BigchainDB()


@@ -514,7 +518,7 @@ def event_loop(request):
@pytest.fixture(scope='session')
def abci_server():
    from abci import ABCIServer
-    from bigchaindb.tendermint.core import App
+    from bigchaindb.core import App
    from bigchaindb.utils import Process

    app = ABCIServer(app=App())
@@ -615,3 +619,52 @@ def utxoset(dummy_unspent_outputs, utxo_collection):
    assert res.acknowledged
    assert len(res.inserted_ids) == 3
    return dummy_unspent_outputs, utxo_collection


+@pytest.fixture
+def network_validators(node_keys):
+    validator_pub_power = {}
+    voting_power = [8, 10, 7, 9]
+    for pub, priv in node_keys.items():
+        validator_pub_power[pub] = voting_power.pop()
+
+    return validator_pub_power


+@pytest.fixture
+def network_validators58(network_validators):
+    network_validators_base58 = {}
+    for p, v in network_validators.items():
+        p = public_key_from_ed25519_key(key_from_base64(p))
+        network_validators_base58[p] = v
+
+    return network_validators_base58


+@pytest.fixture
+def node_key(node_keys):
+    (pub, priv) = list(node_keys.items())[0]
+    return key_pair_from_ed25519_key(key_from_base64(priv))


+@pytest.fixture
+def ed25519_node_keys(node_keys):
+    (pub, priv) = list(node_keys.items())[0]
+    node_keys_dict = {}
+    for pub, priv in node_keys.items():
+        key = key_pair_from_ed25519_key(key_from_base64(priv))
+        node_keys_dict[key.public_key] = key
+
+    return node_keys_dict


+@pytest.fixture(scope='session')
+def node_keys():
+    return {'zL/DasvKulXZzhSNFwx4cLRXKkSM9GPK7Y0nZ4FEylM=':
+            'cM5oW4J0zmUSZ/+QRoRlincvgCwR0pEjFoY//ZnnjD3Mv8Nqy8q6VdnOFI0XDHhwtFcqRIz0Y8rtjSdngUTKUw==',
+            'GIijU7GBcVyiVUcB0GwWZbxCxdk2xV6pxdvL24s/AqM=':
+            'mdz7IjP6mGXs6+ebgGJkn7kTXByUeeGhV+9aVthLuEAYiKNTsYFxXKJVRwHQbBZlvELF2TbFXqnF28vbiz8Cow==',
+            'JbfwrLvCVIwOPm8tj8936ki7IYbmGHjPiKb6nAZegRA=':
+            '83VINXdj2ynOHuhvSZz5tGuOE5oYzIi0mEximkX1KYMlt/Csu8JUjA4+by2Pz3fqSLshhuYYeM+IpvqcBl6BEA==',
+            'PecJ58SaNRsWJZodDmqjpCWqG6btdwXFHLyE40RYlYM=':
+            'uz8bYgoL4rHErWT1gjjrnA+W7bgD/uDQWSRKDmC8otc95wnnxJo1GxYlmh0OaqOkJaobpu13BcUcvITjRFiVgw=='}

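Note for reviewers (not part of the diff): the new fixtures above all share the same base64-to-base58 key conversion, built from the helpers imported in the first hunk of this file. A minimal sketch of that conversion, reusing the first key pair from the node_keys fixture; every call below appears verbatim in the fixtures, only the standalone layout and the print are new:

# Illustrative sketch only -- mirrors the node_key / network_validators58 fixtures.
from bigchaindb.tendermint_utils import key_from_base64
from bigchaindb.common.crypto import (key_pair_from_ed25519_key,
                                      public_key_from_ed25519_key)

# Sample Tendermint-style keys taken from the node_keys fixture above.
pub_b64 = 'zL/DasvKulXZzhSNFwx4cLRXKkSM9GPK7Y0nZ4FEylM='
priv_b64 = ('cM5oW4J0zmUSZ/+QRoRlincvgCwR0pEjFoY//ZnnjD3Mv8Nqy8q6'
            'VdnOFI0XDHhwtFcqRIz0Y8rtjSdngUTKUw==')

# Decode the base64 private key and derive a base58 key pair, as node_key() does.
keypair = key_pair_from_ed25519_key(key_from_base64(priv_b64))

# Decode the base64 public key into base58, as network_validators58() does.
pub_b58 = public_key_from_ed25519_key(key_from_base64(pub_b64))

print(keypair.public_key, pub_b58)
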
@@ -407,7 +407,7 @@ class TestBigchainApi(object):
        from bigchaindb.common.exceptions import InputDoesNotExist
        from bigchaindb.common.transaction import Input, TransactionLink
        from bigchaindb.models import Transaction
-        from bigchaindb.tendermint import BigchainDB
+        from bigchaindb import BigchainDB

        # Create an input for a non existing transaction
        input = Input(Ed25519Sha256(public_key=b58decode(user_pk)),
@@ -938,8 +938,8 @@ class TestMultipleInputs(object):


def test_get_owned_ids_calls_get_outputs_filtered():
-    from bigchaindb.tendermint import BigchainDB
-    with patch('bigchaindb.tendermint.BigchainDB.get_outputs_filtered') as gof:
+    from bigchaindb import BigchainDB
+    with patch('bigchaindb.BigchainDB.get_outputs_filtered') as gof:
        b = BigchainDB()
        res = b.get_owned_ids('abc')
    gof.assert_called_once_with('abc', spent=False)
@@ -949,13 +949,13 @@ def test_get_owned_ids_calls_get_outputs_filtered():
@pytest.mark.tendermint
def test_get_outputs_filtered_only_unspent():
    from bigchaindb.common.transaction import TransactionLink
-    from bigchaindb.tendermint.lib import BigchainDB
+    from bigchaindb.lib import BigchainDB

-    go = 'bigchaindb.tendermint.fastquery.FastQuery.get_outputs_by_public_key'
+    go = 'bigchaindb.fastquery.FastQuery.get_outputs_by_public_key'
    with patch(go) as get_outputs:
        get_outputs.return_value = [TransactionLink('a', 1),
                                    TransactionLink('b', 2)]
-        fs = 'bigchaindb.tendermint.fastquery.FastQuery.filter_spent_outputs'
+        fs = 'bigchaindb.fastquery.FastQuery.filter_spent_outputs'
        with patch(fs) as filter_spent:
            filter_spent.return_value = [TransactionLink('b', 2)]
            out = BigchainDB().get_outputs_filtered('abc', spent=False)
@@ -966,12 +966,12 @@ def test_get_outputs_filtered_only_unspent():
@pytest.mark.tendermint
def test_get_outputs_filtered_only_spent():
    from bigchaindb.common.transaction import TransactionLink
-    from bigchaindb.tendermint.lib import BigchainDB
-    go = 'bigchaindb.tendermint.fastquery.FastQuery.get_outputs_by_public_key'
+    from bigchaindb.lib import BigchainDB
+    go = 'bigchaindb.fastquery.FastQuery.get_outputs_by_public_key'
    with patch(go) as get_outputs:
        get_outputs.return_value = [TransactionLink('a', 1),
                                    TransactionLink('b', 2)]
-        fs = 'bigchaindb.tendermint.fastquery.FastQuery.filter_unspent_outputs'
+        fs = 'bigchaindb.fastquery.FastQuery.filter_unspent_outputs'
        with patch(fs) as filter_spent:
            filter_spent.return_value = [TransactionLink('b', 2)]
            out = BigchainDB().get_outputs_filtered('abc', spent=True)
@@ -980,13 +980,13 @@ def test_get_outputs_filtered_only_spent():


@pytest.mark.tendermint
-@patch('bigchaindb.tendermint.fastquery.FastQuery.filter_unspent_outputs')
-@patch('bigchaindb.tendermint.fastquery.FastQuery.filter_spent_outputs')
+@patch('bigchaindb.fastquery.FastQuery.filter_unspent_outputs')
+@patch('bigchaindb.fastquery.FastQuery.filter_spent_outputs')
def test_get_outputs_filtered(filter_spent, filter_unspent):
    from bigchaindb.common.transaction import TransactionLink
-    from bigchaindb.tendermint.lib import BigchainDB
+    from bigchaindb.lib import BigchainDB

-    go = 'bigchaindb.tendermint.fastquery.FastQuery.get_outputs_by_public_key'
+    go = 'bigchaindb.fastquery.FastQuery.get_outputs_by_public_key'
    with patch(go) as get_outputs:
        get_outputs.return_value = [TransactionLink('a', 1),
                                    TransactionLink('b', 2)]

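Note for reviewers (not part of the diff): the only substantive change in the three tests above is the patch target, which drops the tendermint segment from the module path. A small sketch of the new-style patching, assuming only names that appear in these hunks:

# Illustrative sketch only; mirrors the updated patch targets above.
from unittest.mock import patch

from bigchaindb.common.transaction import TransactionLink
from bigchaindb.lib import BigchainDB

go = 'bigchaindb.fastquery.FastQuery.get_outputs_by_public_key'
fs = 'bigchaindb.fastquery.FastQuery.filter_spent_outputs'
with patch(go) as get_outputs, patch(fs) as filter_spent:
    get_outputs.return_value = [TransactionLink('a', 1),
                                TransactionLink('b', 2)]
    filter_spent.return_value = [TransactionLink('b', 2)]
    # Same call the first test above makes; with both FastQuery methods
    # patched, the result comes entirely from the mocks.
    out = BigchainDB().get_outputs_filtered('abc', spent=False)
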
@@ -1,12 +1,25 @@
import pytest
+import codecs

+import abci.types_pb2 as types


@pytest.fixture
def b():
-    from bigchaindb.tendermint import BigchainDB
+    from bigchaindb import BigchainDB
    return BigchainDB()


+@pytest.fixture
+def validator_pub_key():
+    return 'B0E42D2589A455EAD339A035D6CE1C8C3E25863F268120AA0162AD7D003A4014'


+@pytest.fixture
+def init_chain_request():
+    addr = codecs.decode(b'9FD479C869C7D7E7605BF99293457AA5D80C3033', 'hex')
+    pk = codecs.decode(b'VAgFZtYw8bNR5TMZHFOBDWk9cAmEu3/c6JgRBmddbbI=', 'base64')
+    val_a = types.Validator(address=addr, power=10,
+                            pub_key=types.PubKey(type='ed25519', data=pk))
+
+    return types.RequestInitChain(validators=[val_a])

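Note for reviewers (not part of the diff): the new init_chain_request fixture is what lets the tests in the next file drop the old app.init_chain(['ignore']) placeholder. A hypothetical smoke test sketching how it is consumed; the test name is invented, every call is taken from the hunks below:

import pytest
from abci.types_pb2 import RequestBeginBlock


@pytest.mark.bdb
def test_init_chain_request_sketch(b, init_chain_request):
    # Hypothetical example: initialise the ABCI app from the fixture,
    # then open a block, exactly as the updated tests below do.
    from bigchaindb import App

    app = App(b)
    app.init_chain(init_chain_request)   # replaces app.init_chain(['ignore'])
    app.begin_block(RequestBeginBlock())
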
@@ -6,7 +6,7 @@ from abci.types_pb2 import (
    RequestEndBlock
)

-from bigchaindb.tendermint.core import CodeTypeOk, CodeTypeError
+from bigchaindb.core import CodeTypeOk, CodeTypeError


pytestmark = [pytest.mark.tendermint, pytest.mark.bdb]
@@ -17,7 +17,7 @@ def encode_tx_to_bytes(transaction):


def test_check_tx__signed_create_is_ok(b):
-    from bigchaindb.tendermint import App
+    from bigchaindb import App
    from bigchaindb.models import Transaction
    from bigchaindb.common.crypto import generate_key_pair

@@ -34,7 +34,7 @@ def test_check_tx__signed_create_is_ok(b):


def test_check_tx__unsigned_create_is_error(b):
-    from bigchaindb.tendermint import App
+    from bigchaindb import App
    from bigchaindb.models import Transaction
    from bigchaindb.common.crypto import generate_key_pair

@@ -50,8 +50,8 @@ def test_check_tx__unsigned_create_is_error(b):


@pytest.mark.bdb
-def test_deliver_tx__valid_create_updates_db(b):
-    from bigchaindb.tendermint import App
+def test_deliver_tx__valid_create_updates_db(b, init_chain_request):
+    from bigchaindb import App
    from bigchaindb.models import Transaction
    from bigchaindb.common.crypto import generate_key_pair

@@ -64,8 +64,9 @@ def test_deliver_tx__valid_create_updates_db(b):

    app = App(b)

+    app.init_chain(init_chain_request)

    begin_block = RequestBeginBlock()
-    app.init_chain(['ignore'])
    app.begin_block(begin_block)

    result = app.deliver_tx(encode_tx_to_bytes(tx))
@@ -83,8 +84,8 @@ def test_deliver_tx__valid_create_updates_db(b):
    # next(unspent_outputs)


-def test_deliver_tx__double_spend_fails(b):
-    from bigchaindb.tendermint import App
+def test_deliver_tx__double_spend_fails(b, init_chain_request):
+    from bigchaindb import App
    from bigchaindb.models import Transaction
    from bigchaindb.common.crypto import generate_key_pair

@@ -96,7 +97,7 @@ def test_deliver_tx__double_spend_fails(b):
        .sign([alice.private_key])

    app = App(b)
-    app.init_chain(['ignore'])
+    app.init_chain(init_chain_request)

    begin_block = RequestBeginBlock()
    app.begin_block(begin_block)
@@ -112,13 +113,13 @@ def test_deliver_tx__double_spend_fails(b):
    assert result.code == CodeTypeError


-def test_deliver_transfer_tx__double_spend_fails(b):
-    from bigchaindb.tendermint import App
+def test_deliver_transfer_tx__double_spend_fails(b, init_chain_request):
+    from bigchaindb import App
    from bigchaindb.models import Transaction
    from bigchaindb.common.crypto import generate_key_pair

    app = App(b)
-    app.init_chain(['ignore'])
+    app.init_chain(init_chain_request)

    begin_block = RequestBeginBlock()
    app.begin_block(begin_block)
@@ -156,14 +157,16 @@ def test_deliver_transfer_tx__double_spend_fails(b):
    assert result.code == CodeTypeError


-def test_end_block_return_validator_updates(b):
-    from bigchaindb.tendermint import App
+# The test below has to re-written one election conclusion logic has been implemented
+@pytest.mark.skip
+def test_end_block_return_validator_updates(b, init_chain_request):
+    from bigchaindb import App
    from bigchaindb.backend import query
-    from bigchaindb.tendermint.core import encode_validator
+    from bigchaindb.core import encode_validator
    from bigchaindb.backend.query import VALIDATOR_UPDATE_ID

    app = App(b)
-    app.init_chain(['ignore'])
+    app.init_chain(init_chain_request)

    begin_block = RequestBeginBlock()
    app.begin_block(begin_block)
@@ -182,8 +185,8 @@ def test_end_block_return_validator_updates(b):
    assert updates == []


-def test_store_pre_commit_state_in_end_block(b, alice):
-    from bigchaindb.tendermint import App
+def test_store_pre_commit_state_in_end_block(b, alice, init_chain_request):
+    from bigchaindb import App
    from bigchaindb.backend import query
    from bigchaindb.models import Transaction
    from bigchaindb.backend.query import PRE_COMMIT_ID
@@ -194,7 +197,7 @@ def test_store_pre_commit_state_in_end_block(b, alice):
        .sign([alice.private_key])

    app = App(b)
-    app.init_chain(['ignore'])
+    app.init_chain(init_chain_request)

    begin_block = RequestBeginBlock()
    app.begin_block(begin_block)

Some files were not shown because too many files have changed in this diff.