Merge branch 'master' into reenable-test_bigchain_api

Commit dacdf58c79 by z-bowen, 2018-08-09 17:23:12 +02:00
101 changed files with 1910 additions and 843 deletions

View File

@ -18,6 +18,41 @@ For reference, the possible headings are:
* **Known Issues**
* **Notes**
## [2.0 Beta 5] - 2018-08-01
Tag name: v2.0.0b5
### Changed
* Supported version of Tendermint `0.22.3` -> `0.22.8`. [Pull request #2429](https://github.com/bigchaindb/bigchaindb/pull/2429).
### Fixed
* Stateful validation raises a DoubleSpend exception if there is any other transaction that spends the same output(s) even if it has the same transaction ID. [Pull request #2422](https://github.com/bigchaindb/bigchaindb/pull/2422).
## [2.0 Beta 4] - 2018-07-30
Tag name: v2.0.0b4
### Added
- Added scripts for creating a configuration to manage processes with Monit. [Pull request #2410](https://github.com/bigchaindb/bigchaindb/pull/2410).
### Fixed
- Redundant asset and metadata queries were removed. [Pull request #2409](https://github.com/bigchaindb/bigchaindb/pull/2409).
- Signal handling was fixed for BigchainDB processes. [Pull request #2395](https://github.com/bigchaindb/bigchaindb/pull/2395).
- Some of the abruptly closed sockets that used to stay in memory are being cleaned up now. [Pull request #2408](https://github.com/bigchaindb/bigchaindb/pull/2408).
- Fixed the bug when WebSockets powering Events API became unresponsive. [Pull request #2413](https://github.com/bigchaindb/bigchaindb/pull/2413).
### Notes
* The instructions on how to write a BEP were simplified. [Pull request #2347](https://github.com/bigchaindb/bigchaindb/pull/2347).
* A section about troubleshooting was added to the network setup guide. [Pull request #2398](https://github.com/bigchaindb/bigchaindb/pull/2398).
* Some of the core code was given a better package structure. [Pull request #2401](https://github.com/bigchaindb/bigchaindb/pull/2401).
* Some of the previously disabled unit tests were re-enabled and updated. Pull requests [#2404](https://github.com/bigchaindb/bigchaindb/pull/2404) and [#2402](https://github.com/bigchaindb/bigchaindb/pull/2402).
* Some building blocks for dynamically adding new validators were introduced. [Pull request #2392](https://github.com/bigchaindb/bigchaindb/pull/2392).
## [2.0 Beta 3] - 2018-07-18
Tag name: v2.0.0b3

View File

@ -7,7 +7,6 @@ RUN apt-get -qq update \
&& apt-get -y upgrade \
&& apt-get install -y jq \
&& pip install --no-cache-dir --process-dependency-links . \
&& pip install --no-cache-dir . \
&& apt-get autoremove \
&& apt-get clean

Dockerfile-all-in-one (new file, 51 lines)
View File

@ -0,0 +1,51 @@
FROM alpine:latest
LABEL maintainer "dev@bigchaindb.com"
ARG TM_VERSION=0.22.8
RUN mkdir -p /usr/src/app
ENV HOME /root
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN apk --update add sudo bash \
&& apk --update add python3 openssl ca-certificates git \
&& apk --update add --virtual build-dependencies python3-dev \
libffi-dev openssl-dev build-base jq \
&& apk add --no-cache libstdc++ dpkg gnupg \
&& pip3 install --upgrade pip cffi \
&& pip install --no-cache-dir --process-dependency-links -e . \
&& apk del build-dependencies \
&& rm -f /var/cache/apk/*
# Install mongodb and monit
RUN apk --update add mongodb monit
# Install Tendermint
RUN wget https://github.com/tendermint/tendermint/releases/download/v${TM_VERSION}/tendermint_${TM_VERSION}_linux_amd64.zip \
&& unzip tendermint_${TM_VERSION}_linux_amd64.zip \
&& mv tendermint /usr/local/bin/ \
&& rm tendermint_${TM_VERSION}_linux_amd64.zip
ENV TMHOME=/tendermint
# Set permissions required for mongodb
RUN mkdir -p /data/db /data/configdb \
&& chown -R mongodb:mongodb /data/db /data/configdb
# BigchainDB environment variables
ENV BIGCHAINDB_DATABASE_PORT 27017
ENV BIGCHAINDB_DATABASE_BACKEND localmongodb
ENV BIGCHAINDB_SERVER_BIND 0.0.0.0:9984
ENV BIGCHAINDB_WSSERVER_HOST 0.0.0.0
ENV BIGCHAINDB_WSSERVER_SCHEME ws
ENV BIGCHAINDB_WSSERVER_ADVERTISED_HOST 0.0.0.0
ENV BIGCHAINDB_WSSERVER_ADVERTISED_SCHEME ws
ENV BIGCHAINDB_TENDERMINT_PORT 26657
VOLUME /data/db /data/configdb /tendermint
EXPOSE 27017 28017 9984 9985 26656 26657 26658
WORKDIR $HOME
ENTRYPOINT ["/usr/src/app/pkg/scripts/all-in-one.bash"]

View File

@ -1,80 +1,3 @@
# How to Handle Pull Requests
# How to Handle External Pull Requests
This document is for whoever has the ability to merge pull requests in the Git repositories associated with BigchainDB.
If the pull request is from an employee of BigchainDB GmbH, then you can ignore this document.
If the pull request is from someone who is _not_ an employee of BigchainDB, then:
A. Have they agreed to the Individual Contributor Agreement in the past? There's a list of them in [a Google Spreadsheet that's accessible to all bigchaindb.com accounts](https://docs.google.com/spreadsheets/d/1VhekO6lgk1ZPx8dSjriucy4UinaU9pIdPQ5JXKcbD_Y/edit?usp=sharing). If yes, then you can merge the PR and ignore the rest of this document.
B. Do they belong to a company or organization which agreed to the Entity Contributor Agreement in the past, and will they be contributing on behalf of that company or organization? (See the Google Spreadsheet link in A.) If yes, then you can merge the PR and ignore the rest of this document.
C. Did they make a pull request to one of the bigchaindb repositories on GitHub (e.g. bigchaindb/bigchaindb)? If you're not sure, or you can't find one, then respond with an email of the form:
Dear [NAME OF PERSON WHO AGREED TO THE CLA]
According to the email copied below, you agreed to the BigchainDB Contributor License Agreement (CLA).
Did you intend to do that? If no, then feel free to ignore this email and we'll pretend it never happened.
If you did intend to do that, then do you intend to make a pull request in a BigchainDB repository? Maybe you already did? If so, can you please point me to the pull request in question?
Sincerely,
[INSERT YOUR NAME HERE]
D. Otherwise, go to the pull request in question and post a comment using this template:
Hi @nameofuser
Before we can merge this pull request, we need you or your organization to agree to one of our contributor agreements. One of the big concerns for people using and developing open source software is that someone who contributed to the code might claim the code infringes on their copyright or patent. To guard against this, we ask all our contributors to sign a Contributor License Agreement. This gives us the right to use the code contributed and any patents the contribution relies on. It also gives us and our users comfort that they won't be sued for using open source software. We know it's a hassle, but it makes the project more reliable in the long run. Thank you for your understanding and your contribution!
If you are contributing on behalf of yourself (and not on behalf of your employer or another organization you are part of) then you should:
1. Go to: https://www.bigchaindb.com/cla/
2. Read the Individual Contributor Agreement
3. Fill in the form "For Individuals"
4. Check the box to agree
5. Click the SEND button
If you're contributing as an employee, and/or you want all employees of your employing organization to be covered by our contributor agreement, then someone in your organization with the authority to enter agreements on behalf of all employees must do the following:
1. Go to: https://www.bigchaindb.com/cla/
2. Read the Entity Contributor Agreement
3. Fill in the form "For Organizations"
4. Check the box to agree
5. Click the SEND button
We will email you (or your employer) with further instructions.
(END OF COMMENT)
Once they click SEND, we (BigchainDB) will get an email with the information in the form. (Troy gets those emails for sure, I'm not sure who else.) The next step is to send an email to the email address submitted in the form, saying something like (where the stuff in [square brackets] should be replaced):
Hi [NAME],
The next step is for you to copy the following block of text into the comments of Pull Request #[NN] on GitHub:
BEGIN BLOCK
This is to confirm that I agreed to and accepted the BigchainDB [Entity/Individual] Contributor Agreement at https://www.bigchaindb.com/cla/ and to represent and warrant that I have authority to do so.
[Insert long random string here. One good source of those is https://www.grc.com/passwords.htm ]
END BLOCK
(END OF EMAIL)
The next step is to wait for them to copy that comment into the comments of the indicated pull request. Once they do so, it's safe to merge the pull request.
## How to Handle CLA Agreement Emails with No Associated Pull Request
Reply with an email like this:
Hi [First Name],
Today I got an email (copied below) to tell me that you agreed to the BigchainDB Contributor License Agreement. Did you intend to do that?
If no, then you can ignore this email.
If yes, then there's another step to connect your email address with your GitHub account. To do that, you must first create a pull request in one of the BigchainDB repositories on GitHub. Once you've done that, please reply to this email with a link to the pull request. Then I'll send you a special block of text to paste into the comments on that pull request.
See [BEP-16](https://github.com/bigchaindb/BEPs/tree/master/16).

View File

@ -15,7 +15,7 @@ For the licenses on all other BigchainDB-related code (i.e. in other repositorie
## Documentation Licenses
The official BigchainDB documentation, _except for the short code snippets embedded within it_, is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license, the full text of which can be found at [http://creativecommons.org/licenses/by-sa/4.0/legalcode](http://creativecommons.org/licenses/by-sa/4.0/legalcode).
The official BigchainDB documentation, _except for the short code snippets embedded within it_, is licensed under a Creative Commons Attribution 4.0 International license, the full text of which can be found at [http://creativecommons.org/licenses/by/4.0/legalcode](http://creativecommons.org/licenses/by/4.0/legalcode).
## Exceptions

View File

@ -2,9 +2,8 @@
(including pre-release versions) from PyPI,
so show the latest GitHub release instead.
--->
<!--- Codecov isn't working for us lately, so comment it out for now:
[![Codecov branch](https://img.shields.io/codecov/c/github/bigchaindb/bigchaindb/master.svg)](https://codecov.io/github/bigchaindb/bigchaindb?branch=master)
--->
[![Latest release](https://img.shields.io/github/release/bigchaindb/bigchaindb/all.svg)](https://github.com/bigchaindb/bigchaindb/releases)
[![Status on PyPI](https://img.shields.io/pypi/status/bigchaindb.svg)](https://pypi.org/project/BigchainDB/)
[![Travis branch](https://img.shields.io/travis/bigchaindb/bigchaindb/master.svg)](https://travis-ci.org/bigchaindb/bigchaindb)

View File

@ -51,7 +51,7 @@ The following steps are what we do to release a new version of _BigchainDB Serve
- **Title:** Same as tag version above, e.g. `v0.9.1`
- **Description:** The body of the changelog entry (Added, Changed, etc.)
1. Click "Publish release" to publish the release on GitHub.
1. On your local computer, make sure you're on the `master` branch and that it's up-to-date with the `master` branch in the bigchaindb/bigchaindb repository (e.g. `git fetch upstream` and `git merge upstream/master`). We're going to use that to push a new `bigchaindb` package to PyPI.
1. On your local computer, make sure you're on the `master` branch and that it's up-to-date with the `master` branch in the bigchaindb/bigchaindb repository (e.g. `git pull upstream master`). We're going to use that to push a new `bigchaindb` package to PyPI.
1. Make sure you have a `~/.pypirc` file containing credentials for PyPI.
1. Do `make release` to build and publish the new `bigchaindb` package on PyPI.
1. [Log in to readthedocs.org](https://readthedocs.org/accounts/login/) and go to the **BigchainDB Server** project, then:
@ -64,7 +64,11 @@ The following steps are what we do to release a new version of _BigchainDB Serve
1. Make sure that the new version's tag is "Active" and "Public"
1. Make sure the **stable** branch is _not_ active.
1. Scroll to the bottom of the page and click the "Submit" button.
1. Go to [Docker Hub](https://hub.docker.com/), sign in, go to bigchaindb/bigchaindb, and go to Settings --> Build Settings.
1. Go to [Docker Hub](https://hub.docker.com/) and sign in, then:
- Click on "Organizations"
- Click on "bigchaindb"
- Click on "bigchaindb/bigchaindb"
- Click on "Build Settings"
- Find the row where "Docker Tag Name" equals `latest`
and change the value of "Name" to the name (Git tag)
of the new release, e.g. `v0.9.0`.
@ -73,5 +77,11 @@ The following steps are what we do to release a new version of _BigchainDB Serve
You can do that by clicking the green "+" (plus) icon.
The contents of the new row should be similar to the existing rows
of previous releases like that.
- Click on "Tags"
- Delete the "latest" tag (so we can rebuild it)
- Click on "Build Settings" again
- Click on the "Trigger" button for the "latest" tag and make sure it worked by clicking on "Tags" again
- If the release is an Alpha, Beta or Release Candidate release,
then click on the "Trigger" button for that tag as well.
Congratulations, you have released a new version of BigchainDB Server!

View File

@ -87,3 +87,12 @@ config = {
# the user wants to reconfigure the node. Check ``bigchaindb.config_utils``
# for more info.
_config = copy.deepcopy(config)
from bigchaindb.common.transaction import Transaction # noqa
from bigchaindb import models # noqa
from bigchaindb.upsert_validator import ValidatorElection # noqa
from bigchaindb.upsert_validator import ValidatorElectionVote # noqa
Transaction.register_type(Transaction.CREATE, models.Transaction)
Transaction.register_type(Transaction.TRANSFER, models.Transaction)
Transaction.register_type(ValidatorElection.VALIDATOR_ELECTION, ValidatorElection)
Transaction.register_type(ValidatorElectionVote.VALIDATOR_ELECTION_VOTE, ValidatorElectionVote)
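
These registrations feed `Transaction.resolve_class`, which dispatches on the `operation` field when deserializing (see the `transaction.py` changes later in this diff). A minimal sketch of how the registry behaves, assuming `bigchaindb` has been imported so the registrations above have run:

```python
from bigchaindb.common.transaction import Transaction

# resolve_class() returns the class registered for the given operation and
# falls back to the class registered for CREATE for unknown operations.
election_cls = Transaction.resolve_class('VALIDATOR_ELECTION')
fallback_cls = Transaction.resolve_class('SOME_UNKNOWN_OPERATION')
print(election_cls.__name__)   # ValidatorElection
print(fallback_cls.__name__)   # Transaction (the registered CREATE class)
```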

View File

@ -8,7 +8,6 @@ from bigchaindb.common.exceptions import MultipleValidatorOperationError
from bigchaindb.backend.utils import module_dispatch_registrar
from bigchaindb.backend.localmongodb.connection import LocalMongoDBConnection
from bigchaindb.common.transaction import Transaction
from bigchaindb.backend.query import VALIDATOR_UPDATE_ID
register_query = module_dispatch_registrar(backend.query)
@ -279,7 +278,7 @@ def get_pre_commit_state(conn, commit_id):
@register_query(LocalMongoDBConnection)
def store_validator_update(conn, validator_update):
def store_validator_set(conn, validator_update):
try:
return conn.run(
conn.collection('validators')
@ -289,15 +288,16 @@ def store_validator_update(conn, validator_update):
@register_query(LocalMongoDBConnection)
def get_validator_update(conn, update_id=VALIDATOR_UPDATE_ID):
return conn.run(
conn.collection('validators')
.find_one({'update_id': update_id}, projection={'_id': False}))
def get_validator_set(conn, height=None):
query = {}
if height is not None:
query = {'height': {'$lte': height}}
@register_query(LocalMongoDBConnection)
def delete_validator_update(conn, update_id=VALIDATOR_UPDATE_ID):
return conn.run(
cursor = conn.run(
conn.collection('validators')
.delete_one({'update_id': update_id})
.find(query, projection={'_id': False})
.sort([('height', DESCENDING)])
.limit(1)
)
return list(cursor)[0]

View File

@ -126,6 +126,6 @@ def create_pre_commit_secondary_index(conn, dbname):
def create_validators_secondary_index(conn, dbname):
logger.info('Create `validators` secondary index.')
conn.conn[dbname]['validators'].create_index('update_id',
name='update_id',
conn.conn[dbname]['validators'].create_index('height',
name='height',
unique=True,)

View File

@ -340,13 +340,6 @@ def store_pre_commit_state(connection, commit_id, state):
raise NotImplementedError
@singledispatch
def store_validator_update(conn, validator_update):
"""Store a update for the validator set"""
raise NotImplementedError
@singledispatch
def get_pre_commit_state(connection, commit_id):
"""Get pre-commit state where `id` is `commit_id`.
@ -362,14 +355,15 @@ def get_pre_commit_state(connection, commit_id):
@singledispatch
def get_validator_update(conn):
"""Get validator updates which are not synced"""
def store_validator_set(conn, validator_update):
"""Store updated validator set"""
raise NotImplementedError
@singledispatch
def delete_validator_update(conn, id):
"""Set the sync status for validator update documents"""
def get_validator_set(conn, height):
"""Get validator set for a given `height`, if `height` is not specified
then return the latest validator set"""
raise NotImplementedError

View File

@ -30,3 +30,17 @@ def generate_key_pair():
PrivateKey = crypto.Ed25519SigningKey
PublicKey = crypto.Ed25519VerifyingKey
def key_pair_from_ed25519_key(hex_private_key):
"""Generate base58 encode public-private key pair from a hex encoded private key"""
priv_key = crypto.Ed25519SigningKey(bytes.fromhex(hex_private_key)[:32], encoding='bytes')
public_key = priv_key.get_verifying_key()
return CryptoKeypair(private_key=priv_key.encode(encoding='base58').decode('utf-8'),
public_key=public_key.encode(encoding='base58').decode('utf-8'))
def public_key_from_ed25519_key(hex_public_key):
"""Generate base58 public key from hex encoded public key"""
public_key = crypto.Ed25519VerifyingKey(bytes.fromhex(hex_public_key), encoding='bytes')
return public_key.encode(encoding='base58').decode('utf-8')
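
A hedged usage sketch for the new helper; the hex string below is only a placeholder for a real 32-byte Ed25519 private key (for example, one taken from a Tendermint key file):

```python
from bigchaindb.common.crypto import key_pair_from_ed25519_key

hex_private_key = 'aa' * 32   # placeholder 32-byte private key, hex-encoded

keypair = key_pair_from_ed25519_key(hex_private_key)
print(keypair.private_key)    # base58-encoded signing key
print(keypair.public_key)     # base58-encoded verifying key
```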

View File

@ -98,3 +98,19 @@ class ThresholdTooDeep(ValidationError):
class MultipleValidatorOperationError(ValidationError):
"""Raised when a validator update pending but new request is submited"""
class MultipleInputsError(ValidationError):
"""Raised if there were multiple inputs when only one was expected"""
class InvalidProposer(ValidationError):
"""Raised if the public key is not a part of the validator set"""
class UnequalValidatorSet(ValidationError):
"""Raised if the validator sets differ"""
class InvalidPowerChange(ValidationError):
"""Raised if proposed power change in validator set is >=1/3 total power"""

View File

@ -13,9 +13,9 @@ from bigchaindb.common.exceptions import SchemaValidationError
logger = logging.getLogger(__name__)
def _load_schema(name):
def _load_schema(name, path=__file__):
"""Load a schema from disk"""
path = os.path.join(os.path.dirname(__file__), name + '.yaml')
path = os.path.join(os.path.dirname(path), name + '.yaml')
with open(path) as handle:
schema = yaml.safe_load(handle)
fast_schema = rapidjson_schema.loads(rapidjson.dumps(schema))
@ -31,6 +31,12 @@ _, TX_SCHEMA_CREATE = _load_schema('transaction_create_' +
_, TX_SCHEMA_TRANSFER = _load_schema('transaction_transfer_' +
TX_SCHEMA_VERSION)
_, TX_SCHEMA_VALIDATOR_ELECTION = _load_schema('transaction_validator_election_' +
TX_SCHEMA_VERSION)
_, TX_SCHEMA_VALIDATOR_ELECTION_VOTE = _load_schema('transaction_validator_election_vote_' +
TX_SCHEMA_VERSION)
def _validate_schema(schema, body):
"""Validate data against a schema"""

View File

@ -58,6 +58,8 @@ definitions:
enum:
- CREATE
- TRANSFER
- VALIDATOR_ELECTION
- VALIDATOR_ELECTION_VOTE
asset:
type: object
additionalProperties: false

View File

@ -0,0 +1,48 @@
---
"$schema": "http://json-schema.org/draft-04/schema#"
type: object
title: Validator Election Schema - Propose a change to validator set
required:
- operation
- asset
- outputs
properties:
operation:
type: string
value: "VALIDATOR_ELECTION"
asset:
additionalProperties: false
properties:
data:
additionalProperties: false
properties:
node_id:
type: string
public_key:
type: string
power:
"$ref": "#/definitions/positiveInteger"
required:
- node_id
- public_key
- power
required:
- data
outputs:
type: array
items:
"$ref": "#/definitions/output"
definitions:
output:
type: object
properties:
condition:
type: object
required:
- uri
properties:
uri:
type: string
pattern: "^ni:///sha-256;([a-zA-Z0-9_-]{0,86})[?]\
(fpt=ed25519-sha-256(&)?|cost=[0-9]+(&)?|\
subtypes=ed25519-sha-256(&)?){2,3}$"
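
For orientation, an `asset` payload shaped to satisfy this schema would look like the following sketch; all values are placeholders:

```python
# Hypothetical asset for a VALIDATOR_ELECTION transaction; `power` must be a
# positiveInteger and `node_id`/`public_key` are plain strings per the schema.
election_asset = {
    'data': {
        'node_id': 'fake_node_id',
        'public_key': 'placeholder_hex_encoded_ed25519_public_key',
        'power': 10,
    }
}
```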

View File

@ -0,0 +1,27 @@
---
"$schema": "http://json-schema.org/draft-04/schema#"
type: object
title: Validator Election Vote Schema - Vote on a validator set change
required:
- operation
- outputs
properties:
operation: "VALIDATOR_ELECTION_VOTE"
outputs:
type: array
items:
"$ref": "#/definitions/output"
definitions:
output:
type: object
properties:
condition:
type: object
required:
- uri
properties:
uri:
type: string
pattern: "^ni:///sha-256;([a-zA-Z0-9_-]{0,86})[?]\
(fpt=ed25519-sha-256(&)?|cost=[0-9]+(&)?|\
subtypes=ed25519-sha-256(&)?){2,3}$"

View File

@ -18,6 +18,7 @@ from sha3 import sha3_256
from bigchaindb.common.crypto import PrivateKey, hash_data
from bigchaindb.common.exceptions import (KeypairMismatchException,
InputDoesNotExist, DoubleSpend,
InvalidHash, InvalidSignature,
AmountError, AssetIdMismatch,
ThresholdTooDeep)
@ -515,7 +516,7 @@ class Transaction(object):
version (string): Defines the version number of a Transaction.
hash_id (string): Hash id of the transaction.
"""
if operation not in Transaction.ALLOWED_OPERATIONS:
if operation not in self.ALLOWED_OPERATIONS:
allowed_ops = ', '.join(self.__class__.ALLOWED_OPERATIONS)
raise ValueError('`operation` must be one of {}'
.format(allowed_ops))
@ -523,11 +524,11 @@ class Transaction(object):
# Asset payloads for 'CREATE' operations must be None or
# dicts holding a `data` property. Asset payloads for 'TRANSFER'
# operations must be dicts holding an `id` property.
if (operation == Transaction.CREATE and
if (operation == self.CREATE and
asset is not None and not (isinstance(asset, dict) and 'data' in asset)):
raise TypeError(('`asset` must be None or a dict holding a `data` '
" property instance for '{}' Transactions".format(operation)))
elif (operation == Transaction.TRANSFER and
elif (operation == self.TRANSFER and
not (isinstance(asset, dict) and 'id' in asset)):
raise TypeError(('`asset` must be a dict holding an `id` property '
"for 'TRANSFER' Transactions".format(operation)))
@ -555,9 +556,9 @@ class Transaction(object):
structure containing relevant information for storing them in
a UTXO set, and performing validation.
"""
if self.operation == Transaction.CREATE:
if self.operation == self.CREATE:
self._asset_id = self._id
elif self.operation == Transaction.TRANSFER:
elif self.operation == self.TRANSFER:
self._asset_id = self.asset['id']
return (UnspentOutput(
transaction_id=self._id,
@ -585,6 +586,38 @@ class Transaction(object):
def _hash(self):
self._id = hash_data(self.serialized)
@classmethod
def validate_create(cls, tx_signers, recipients, asset, metadata):
if not isinstance(tx_signers, list):
raise TypeError('`tx_signers` must be a list instance')
if not isinstance(recipients, list):
raise TypeError('`recipients` must be a list instance')
if len(tx_signers) == 0:
raise ValueError('`tx_signers` list cannot be empty')
if len(recipients) == 0:
raise ValueError('`recipients` list cannot be empty')
if not (asset is None or isinstance(asset, dict)):
raise TypeError('`asset` must be a dict or None')
if not (metadata is None or isinstance(metadata, dict)):
raise TypeError('`metadata` must be a dict or None')
inputs = []
outputs = []
# generate_outputs
for recipient in recipients:
if not isinstance(recipient, tuple) or len(recipient) != 2:
raise ValueError(('Each `recipient` in the list must be a'
' tuple of `([<list of public keys>],'
' <amount>)`'))
pub_keys, amount = recipient
outputs.append(Output.generate(pub_keys, amount))
# generate inputs
inputs.append(Input.generate(tx_signers))
return (inputs, outputs)
@classmethod
def create(cls, tx_signers, recipients, metadata=None, asset=None):
"""A simple way to generate a `CREATE` transaction.
@ -613,21 +646,22 @@ class Transaction(object):
Returns:
:class:`~bigchaindb.common.transaction.Transaction`
"""
if not isinstance(tx_signers, list):
raise TypeError('`tx_signers` must be a list instance')
(inputs, outputs) = cls.validate_create(tx_signers, recipients, asset, metadata)
return cls(cls.CREATE, {'data': asset}, inputs, outputs, metadata)
@classmethod
def validate_transfer(cls, inputs, recipients, asset_id, metadata):
if not isinstance(inputs, list):
raise TypeError('`inputs` must be a list instance')
if len(inputs) == 0:
raise ValueError('`inputs` must contain at least one item')
if not isinstance(recipients, list):
raise TypeError('`recipients` must be a list instance')
if len(tx_signers) == 0:
raise ValueError('`tx_signers` list cannot be empty')
if len(recipients) == 0:
raise ValueError('`recipients` list cannot be empty')
if not (asset is None or isinstance(asset, dict)):
raise TypeError('`asset` must be a dict or None')
inputs = []
outputs = []
# generate_outputs
for recipient in recipients:
if not isinstance(recipient, tuple) or len(recipient) != 2:
raise ValueError(('Each `recipient` in the list must be a'
@ -636,10 +670,10 @@ class Transaction(object):
pub_keys, amount = recipient
outputs.append(Output.generate(pub_keys, amount))
# generate inputs
inputs.append(Input.generate(tx_signers))
if not isinstance(asset_id, str):
raise TypeError('`asset_id` must be a string')
return cls(cls.CREATE, {'data': asset}, inputs, outputs, metadata)
return (deepcopy(inputs), outputs)
@classmethod
def transfer(cls, inputs, recipients, asset_id, metadata=None):
@ -680,28 +714,7 @@ class Transaction(object):
Returns:
:class:`~bigchaindb.common.transaction.Transaction`
"""
if not isinstance(inputs, list):
raise TypeError('`inputs` must be a list instance')
if len(inputs) == 0:
raise ValueError('`inputs` must contain at least one item')
if not isinstance(recipients, list):
raise TypeError('`recipients` must be a list instance')
if len(recipients) == 0:
raise ValueError('`recipients` list cannot be empty')
outputs = []
for recipient in recipients:
if not isinstance(recipient, tuple) or len(recipient) != 2:
raise ValueError(('Each `recipient` in the list must be a'
' tuple of `([<list of public keys>],'
' <amount>)`'))
pub_keys, amount = recipient
outputs.append(Output.generate(pub_keys, amount))
if not isinstance(asset_id, str):
raise TypeError('`asset_id` must be a string')
inputs = deepcopy(inputs)
(inputs, outputs) = cls.validate_transfer(inputs, recipients, asset_id, metadata)
return cls(cls.TRANSFER, {'id': asset_id}, inputs, outputs, metadata)
def __eq__(self, other):
@ -939,14 +952,14 @@ class Transaction(object):
Returns:
bool: If all Inputs are valid.
"""
if self.operation == Transaction.CREATE:
if self.operation == self.CREATE:
# NOTE: Since in the case of a `CREATE`-transaction we do not have
# to check for outputs, we're just submitting dummy
# values to the actual method. This simplifies its logic
# greatly, as we do not have to check against `None` values.
return self._inputs_valid(['dummyvalue'
for _ in self.inputs])
elif self.operation == Transaction.TRANSFER:
elif self.operation == self.TRANSFER:
return self._inputs_valid([output.fulfillment.condition_uri
for output in outputs])
else:
@ -986,8 +999,7 @@ class Transaction(object):
return all(validate(i, cond)
for i, cond in enumerate(output_condition_uris))
@staticmethod
def _input_valid(input_, operation, message, output_condition_uri=None):
def _input_valid(self, input_, operation, message, output_condition_uri=None):
"""Validates a single Input against a single Output.
Note:
@ -1012,7 +1024,7 @@ class Transaction(object):
ParsingError, ASN1DecodeError, ASN1EncodeError):
return False
if operation == Transaction.CREATE:
if operation == self.CREATE:
# NOTE: In the case of a `CREATE` transaction, the
# output is always valid.
output_valid = True
@ -1091,8 +1103,8 @@ class Transaction(object):
tx = Transaction._remove_signatures(self.to_dict())
return Transaction._to_str(tx)
@staticmethod
def get_asset_id(transactions):
@classmethod
def get_asset_id(cls, transactions):
"""Get the asset id from a list of :class:`~.Transactions`.
This is useful when we want to check if the multiple inputs of a
@ -1116,7 +1128,7 @@ class Transaction(object):
transactions = [transactions]
# create a set of the transactions' asset ids
asset_ids = {tx.id if tx.operation == Transaction.CREATE
asset_ids = {tx.id if tx.operation == tx.CREATE
else tx.asset['id']
for tx in transactions}
@ -1151,7 +1163,7 @@ class Transaction(object):
raise InvalidHash(err_msg.format(proposed_tx_id))
@classmethod
def from_dict(cls, tx):
def from_dict(cls, tx, skip_schema_validation=True):
"""Transforms a Python dictionary to a Transaction object.
Args:
@ -1160,7 +1172,131 @@ class Transaction(object):
Returns:
:class:`~bigchaindb.common.transaction.Transaction`
"""
operation = tx.get('operation', Transaction.CREATE) if isinstance(tx, dict) else Transaction.CREATE
cls = Transaction.resolve_class(operation)
if not skip_schema_validation:
cls.validate_schema(tx)
inputs = [Input.from_dict(input_) for input_ in tx['inputs']]
outputs = [Output.from_dict(output) for output in tx['outputs']]
return cls(tx['operation'], tx['asset'], inputs, outputs,
tx['metadata'], tx['version'], hash_id=tx['id'])
@classmethod
def from_db(cls, bigchain, tx_dict_list):
"""Helper method that reconstructs a transaction dict that was returned
from the database. It checks what asset_id to retrieve, retrieves the
asset from the asset table and reconstructs the transaction.
Args:
bigchain (:class:`~bigchaindb.tendermint.BigchainDB`): An instance
of BigchainDB used to perform database queries.
tx_dict_list (:list:`dict` or :obj:`dict`): The transaction dict or
list of transaction dicts as returned from the database.
Returns:
:class:`~Transaction`
"""
return_list = True
if isinstance(tx_dict_list, dict):
tx_dict_list = [tx_dict_list]
return_list = False
tx_map = {}
tx_ids = []
for tx in tx_dict_list:
tx.update({'metadata': None})
tx_map[tx['id']] = tx
tx_ids.append(tx['id'])
assets = list(bigchain.get_assets(tx_ids))
for asset in assets:
if asset is not None:
tx = tx_map[asset['id']]
del asset['id']
tx['asset'] = asset
tx_ids = list(tx_map.keys())
metadata_list = list(bigchain.get_metadata(tx_ids))
for metadata in metadata_list:
tx = tx_map[metadata['id']]
tx.update({'metadata': metadata.get('metadata')})
if return_list:
tx_list = []
for tx_id, tx in tx_map.items():
tx_list.append(cls.from_dict(tx))
return tx_list
else:
tx = list(tx_map.values())[0]
return cls.from_dict(tx)
type_registry = {}
@staticmethod
def register_type(tx_type, tx_class):
Transaction.type_registry[tx_type] = tx_class
def resolve_class(operation):
"""For the given `tx` based on the `operation` key return its implementation class"""
create_txn_class = Transaction.type_registry.get(Transaction.CREATE)
return Transaction.type_registry.get(operation, create_txn_class)
@classmethod
def validate_schema(cls, tx):
pass
def validate_transfer_inputs(self, bigchain, current_transactions=[]):
# store the inputs so that we can check if the asset ids match
input_txs = []
input_conditions = []
for input_ in self.inputs:
input_txid = input_.fulfills.txid
input_tx = bigchain.get_transaction(input_txid)
if input_tx is None:
for ctxn in current_transactions:
if ctxn.id == input_txid:
input_tx = ctxn
if input_tx is None:
raise InputDoesNotExist("input `{}` doesn't exist"
.format(input_txid))
spent = bigchain.get_spent(input_txid, input_.fulfills.output,
current_transactions)
if spent:
raise DoubleSpend('input `{}` was already spent'
.format(input_txid))
output = input_tx.outputs[input_.fulfills.output]
input_conditions.append(output)
input_txs.append(input_tx)
# Validate that all inputs are distinct
links = [i.fulfills.to_uri() for i in self.inputs]
if len(links) != len(set(links)):
raise DoubleSpend('tx "{}" spends inputs twice'.format(self.id))
# validate asset id
asset_id = self.get_asset_id(input_txs)
if asset_id != self.asset['id']:
raise AssetIdMismatch(('The asset id of the input does not'
' match the asset id of the'
' transaction'))
input_amount = sum([input_condition.amount for input_condition in input_conditions])
output_amount = sum([output_condition.amount for output_condition in self.outputs])
if output_amount != input_amount:
raise AmountError(('The amount used in the inputs `{}`'
' needs to be same as the amount used'
' in the outputs `{}`')
.format(input_amount, output_amount))
if not self.inputs_valid(input_conditions):
raise InvalidSignature('Transaction signature is invalid.')
return True
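
The refactor above moves argument checking into `validate_create()` / `validate_transfer()` without changing the public API. A minimal, hedged sketch of the `CREATE` path that now goes through `validate_create()` (keys and payloads are placeholders):

```python
from bigchaindb.common.crypto import generate_key_pair
from bigchaindb.models import Transaction

alice = generate_key_pair()

# recipients are ([public_keys], amount) tuples; `asset` is the data payload
# that create() wraps into {'data': ...}.
tx = Transaction.create([alice.public_key],
                        [([alice.public_key], 1)],
                        asset={'example': 'placeholder asset data'},
                        metadata={'note': 'placeholder'})
signed_tx = tx.sign([alice.private_key])
print(signed_tx.id)
```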

View File

@ -1,6 +1,7 @@
"""This module contains all the goodness to integrate BigchainDB
with Tendermint."""
import logging
import codecs
from abci.application import BaseApplication
from abci.types_pb2 import (
@ -42,11 +43,13 @@ class App(BaseApplication):
self.validators = None
self.new_height = None
def init_chain(self, validators):
def init_chain(self, genesis):
"""Initialize chain with block of height 0"""
validator_set = [decode_validator(v) for v in genesis.validators]
block = Block(app_hash='', height=0, transactions=[])
self.bigchaindb.store_block(block._asdict())
self.bigchaindb.store_validator_set(1, validator_set)
return ResponseInitChain()
def info(self, request):
@ -129,11 +132,11 @@ class App(BaseApplication):
else:
self.block_txn_hash = block['app_hash']
validator_updates = self.bigchaindb.get_validator_update()
validator_updates = [encode_validator(v) for v in validator_updates]
# set sync status to true
self.bigchaindb.delete_validator_update()
# TODO: calculate if an election has concluded
# NOTE: ensure the local validator set is updated
# validator_updates = self.bigchaindb.get_validator_update()
# validator_updates = [encode_validator(v) for v in validator_updates]
validator_updates = []
# Store pre-commit state to recover in case there is a crash
# during `commit`
@ -176,3 +179,10 @@ def encode_validator(v):
return Validator(pub_key=pub_key,
address=b'',
power=v['power'])
def decode_validator(v):
return {'address': codecs.encode(v.address, 'hex').decode().upper().rstrip('\n'),
'pub_key': {'type': v.pub_key.type,
'data': codecs.encode(v.pub_key.data, 'base64').decode().rstrip('\n')},
'voting_power': v.power}

View File

@ -67,7 +67,7 @@ class Exchange:
"""
try:
self.started_queue.get_nowait()
self.started_queue.get(timeout=1)
raise RuntimeError('Cannot create a new subscriber queue while Exchange is running.')
except Empty:
pass

View File

@ -34,18 +34,6 @@ class BigchainDB(object):
Create, read, sign, write transactions to the database
"""
BLOCK_INVALID = 'invalid'
"""return if a block is invalid"""
BLOCK_VALID = TX_VALID = 'valid'
"""return if a block is valid, or tx is in valid block"""
BLOCK_UNDECIDED = TX_UNDECIDED = 'undecided'
"""return if block is undecided, or tx is in undecided block"""
TX_IN_BACKLOG = 'backlog'
"""return if transaction is in backlog"""
def __init__(self, connection=None):
"""Initialize the Bigchain instance
@ -149,10 +137,10 @@ class BigchainDB(object):
txns = []
assets = []
txn_metadatas = []
for transaction in transactions:
for transaction_obj in transactions:
# self.update_utxoset(transaction)
transaction = transaction.to_dict()
if transaction['operation'] == 'CREATE':
transaction = transaction_obj.to_dict()
if transaction['operation'] == transaction_obj.CREATE:
asset = transaction.pop('asset')
asset['id'] = transaction['id']
assets.append(asset)
@ -251,7 +239,7 @@ class BigchainDB(object):
return backend.query.delete_unspent_outputs(
self.connection, *unspent_outputs)
def get_transaction(self, transaction_id, include_status=False):
def get_transaction(self, transaction_id):
transaction = backend.query.get_transaction(self.connection, transaction_id)
if transaction:
@ -269,9 +257,6 @@ class BigchainDB(object):
transaction = Transaction.from_dict(transaction)
if include_status:
return transaction, self.TX_VALID if transaction else None
else:
return transaction
def get_transactions_filtered(self, asset_id, operation=None):
@ -280,9 +265,7 @@ class BigchainDB(object):
txids = backend.query.get_txids_filtered(self.connection, asset_id,
operation)
for txid in txids:
tx, status = self.get_transaction(txid, True)
if status == self.TX_VALID:
yield tx
yield self.get_transaction(txid)
def get_outputs_filtered(self, owner, spent=None):
"""Get a list of output links filtered on some criteria
@ -391,7 +374,7 @@ class BigchainDB(object):
# CLEANUP: The conditional below checks for transaction in dict format.
# It would be better to only have a single format for the transaction
# throughout the code base.
if not isinstance(transaction, Transaction):
if isinstance(transaction, dict):
try:
transaction = Transaction.from_dict(tx)
except SchemaValidationError as e:
@ -421,17 +404,9 @@ class BigchainDB(object):
Returns:
iter: An iterator of assets that match the text search.
"""
objects = backend.query.text_search(self.connection, search, limit=limit,
return backend.query.text_search(self.connection, search, limit=limit,
table=table)
# TODO: This is not efficient. There may be a more efficient way to
# query by storing block ids with the assets and using fastquery.
# See https://github.com/bigchaindb/bigchaindb/issues/1496
for obj in objects:
tx, status = self.get_transaction(obj['id'], True)
if status == self.TX_VALID:
yield obj
def get_assets(self, asset_ids):
"""Return a list of assets that match the asset_ids
@ -460,20 +435,14 @@ class BigchainDB(object):
def fastquery(self):
return fastquery.FastQuery(self.connection)
def get_validators(self):
try:
resp = requests.get('{}validators'.format(self.endpoint))
validators = resp.json()['result']['validators']
def get_validators(self, height=None):
result = backend.query.get_validator_set(self.connection, height)
validators = result['validators']
for v in validators:
v.pop('accum')
v.pop('address')
return validators
except requests.exceptions.RequestException as e:
logger.error('Error while connecting to Tendermint HTTP API')
raise e
def get_validator_update(self):
update = backend.query.get_validator_update(self.connection)
return [update['validator']] if update else []
@ -484,6 +453,14 @@ class BigchainDB(object):
def store_pre_commit_state(self, state):
return backend.query.store_pre_commit_state(self.connection, state)
def store_validator_set(self, height, validators):
"""Store validator set at a given `height`.
NOTE: If the validator set already exists at that `height` then an
exception will be raised.
"""
return backend.query.store_validator_set(self.connection, {'height': height,
'validators': validators})
Block = namedtuple('Block', ('app_hash', 'height', 'transactions'))
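
A hedged sketch of the new validator-set storage API; it assumes a reachable `localmongodb` backend configured the usual way, and the validator entry mirrors the structure produced by `decode_validator()`:

```python
from bigchaindb.lib import BigchainDB

b = BigchainDB()

# Raises if a validator set is already stored at this height (unique index).
b.store_validator_set(1, [{'address': 'PLACEHOLDER_ADDRESS',
                           'pub_key': {'type': 'ed25519',
                                       'data': 'placeholder_base64_key'},
                           'voting_power': 10}])

# get_validators() strips the address and returns the set stored at or below
# the given height; with height=None it returns the latest set.
print(b.get_validators(height=1))
```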

View File

@ -3,10 +3,10 @@ import logging
from bigchaindb.common.exceptions import ConfigurationError
from logging.config import dictConfig as set_logging_config
from os.path import expanduser, join
import os
DEFAULT_LOG_DIR = expanduser('~')
DEFAULT_LOG_DIR = os.getcwd()
BENCHMARK_LOG_LEVEL = 15
@ -40,7 +40,7 @@ DEFAULT_LOGGING_CONFIG = {
},
'file': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': join(DEFAULT_LOG_DIR, 'bigchaindb.log'),
'filename': os.path.join(DEFAULT_LOG_DIR, 'bigchaindb.log'),
'mode': 'w',
'maxBytes': 209715200,
'backupCount': 5,
@ -49,7 +49,7 @@ DEFAULT_LOGGING_CONFIG = {
},
'errors': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': join(DEFAULT_LOG_DIR, 'bigchaindb-errors.log'),
'filename': os.path.join(DEFAULT_LOG_DIR, 'bigchaindb-errors.log'),
'mode': 'w',
'maxBytes': 209715200,
'backupCount': 5,
@ -58,7 +58,7 @@ DEFAULT_LOGGING_CONFIG = {
},
'benchmark': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': 'bigchaindb-benchmark.log',
'filename': os.path.join(DEFAULT_LOG_DIR, 'bigchaindb-benchmark.log'),
'mode': 'w',
'maxBytes': 209715200,
'backupCount': 5,

View File

@ -1,7 +1,4 @@
from bigchaindb.common.exceptions import (InvalidSignature, DoubleSpend,
InputDoesNotExist,
TransactionNotInValidBlock,
AssetIdMismatch, AmountError,
from bigchaindb.common.exceptions import (InvalidSignature,
DuplicateTransaction)
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.utils import (validate_txn_obj, validate_key)
@ -10,17 +7,15 @@ from bigchaindb.backend.schema import validate_language_key
class Transaction(Transaction):
def validate(self, bigchain, current_transactions=[]):
"""Validate transaction spend
Args:
bigchain (BigchainDB): an instantiated bigchaindb.BigchainDB object.
Returns:
The transaction (Transaction) if the transaction is valid else it
raises an exception describing the reason why the transaction is
invalid.
Raises:
ValidationError: If the transaction is invalid
"""
@ -31,125 +26,26 @@ class Transaction(Transaction):
if bigchain.get_transaction(self.to_dict()['id']) or duplicates:
raise DuplicateTransaction('transaction `{}` already exists'
.format(self.id))
elif self.operation == Transaction.TRANSFER:
# store the inputs so that we can check if the asset ids match
input_txs = []
for input_ in self.inputs:
input_txid = input_.fulfills.txid
input_tx, status = bigchain.\
get_transaction(input_txid, include_status=True)
if input_tx is None:
for ctxn in current_transactions:
# assume that the status as valid for previously validated
# transactions in current round
if ctxn.id == input_txid:
input_tx = ctxn
status = bigchain.TX_VALID
if input_tx is None:
raise InputDoesNotExist("input `{}` doesn't exist"
.format(input_txid))
if status != bigchain.TX_VALID:
raise TransactionNotInValidBlock(
'input `{}` does not exist in a valid block'.format(
input_txid))
spent = bigchain.get_spent(input_txid, input_.fulfills.output,
current_transactions)
if spent and spent.id != self.id:
raise DoubleSpend('input `{}` was already spent'
.format(input_txid))
output = input_tx.outputs[input_.fulfills.output]
input_conditions.append(output)
input_txs.append(input_tx)
# Validate that all inputs are distinct
links = [i.fulfills.to_uri() for i in self.inputs]
if len(links) != len(set(links)):
raise DoubleSpend('tx "{}" spends inputs twice'.format(self.id))
# validate asset id
asset_id = Transaction.get_asset_id(input_txs)
if asset_id != self.asset['id']:
raise AssetIdMismatch(('The asset id of the input does not'
' match the asset id of the'
' transaction'))
input_amount = sum([input_condition.amount for input_condition in input_conditions])
output_amount = sum([output_condition.amount for output_condition in self.outputs])
if output_amount != input_amount:
raise AmountError(('The amount used in the inputs `{}`'
' needs to be same as the amount used'
' in the outputs `{}`')
.format(input_amount, output_amount))
if not self.inputs_valid(input_conditions):
raise InvalidSignature('Transaction signature is invalid.')
elif self.operation == Transaction.TRANSFER:
self.validate_transfer_inputs(bigchain, current_transactions)
return self
@classmethod
def from_dict(cls, tx_body):
super().validate_id(tx_body)
return super().from_dict(tx_body, False)
@classmethod
def validate_schema(cls, tx_body):
cls.validate_id(tx_body)
validate_transaction_schema(tx_body)
validate_txn_obj('asset', tx_body['asset'], 'data', validate_key)
validate_txn_obj('metadata', tx_body, 'metadata', validate_key)
validate_language_key(tx_body['asset'], 'data')
return super().from_dict(tx_body)
@classmethod
def from_db(cls, bigchain, tx_dict_list):
"""Helper method that reconstructs a transaction dict that was returned
from the database. It checks what asset_id to retrieve, retrieves the
asset from the asset table and reconstructs the transaction.
Args:
bigchain (:class:`~bigchaindb.BigchainDB`): An instance
of BigchainDB used to perform database queries.
tx_dict_list (:list:`dict` or :obj:`dict`): The transaction dict or
list of transaction dict as returned from the database.
Returns:
:class:`~Transaction`
"""
return_list = True
if isinstance(tx_dict_list, dict):
tx_dict_list = [tx_dict_list]
return_list = False
tx_map = {}
tx_ids = []
for tx in tx_dict_list:
tx.update({'metadata': None})
tx_map[tx['id']] = tx
if tx['operation'] == Transaction.CREATE:
tx_ids.append(tx['id'])
assets = list(bigchain.get_assets(tx_ids))
for asset in assets:
tx = tx_map[asset['id']]
del asset['id']
tx.update({'asset': asset})
tx_ids = list(tx_map.keys())
metadata_list = list(bigchain.get_metadata(tx_ids))
for metadata in metadata_list:
tx = tx_map[metadata['id']]
tx.update({'metadata': metadata.get('metadata')})
if return_list:
tx_list = []
for tx_id, tx in tx_map.items():
tx_list.append(cls.from_dict(tx))
return tx_list
else:
tx = list(tx_map.values())[0]
return cls.from_dict(tx)
class FastTransaction:

View File

@ -1,5 +1,4 @@
import logging
import setproctitle
import bigchaindb
@ -34,14 +33,14 @@ BANNER = """
def start():
# Exchange object for event stream api
logger.info('Starting BigchainDB')
exchange = Exchange()
# start the web api
app_server = server.create_server(
settings=bigchaindb.config['server'],
log_config=bigchaindb.config['log'],
bigchaindb_factory=BigchainDB)
p_webapi = Process(name='bigchaindb_webapi', target=app_server.run)
p_webapi = Process(name='bigchaindb_webapi', target=app_server.run, daemon=True)
p_webapi.start()
# start message
@ -50,16 +49,18 @@ def start():
# start websocket server
p_websocket_server = Process(name='bigchaindb_ws',
target=websocket_server.start,
daemon=True,
args=(exchange.get_subscriber_queue(EventTypes.BLOCK_VALID),))
p_websocket_server.start()
# connect to tendermint event stream
p_websocket_client = Process(name='bigchaindb_ws_to_tendermint',
target=event_stream.start,
daemon=True,
args=(exchange.get_publisher_queue(),))
p_websocket_client.start()
p_exchange = Process(name='bigchaindb_exchange', target=exchange.run)
p_exchange = Process(name='bigchaindb_exchange', target=exchange.run, daemon=True)
p_exchange.start()
# We need to import this after spawning the web server
@ -69,6 +70,7 @@ def start():
setproctitle.setproctitle('bigchaindb')
# Start the ABCIServer
app = ABCIServer(app=App())
app.run()

View File

@ -75,12 +75,20 @@ def public_key64_to_address(base64_public_key):
def public_key_from_base64(base64_public_key):
return base64.b64decode(base64_public_key).hex().upper()
return key_from_base64(base64_public_key)
def key_from_base64(base64_key):
return base64.b64decode(base64_key).hex().upper()
def public_key_to_base64(ed25519_public_key):
ed25519_public_key = bytes.fromhex(ed25519_public_key)
return base64.b64encode(ed25519_public_key).decode('utf-8')
return key_to_base64(ed25519_public_key)
def key_to_base64(ed25519_key):
ed25519_key = bytes.fromhex(ed25519_key)
return base64.b64encode(ed25519_key).decode('utf-8')
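
A round-trip sketch for the generalized helpers above; Tendermint serves keys as base64 while BigchainDB handles them as upper-case hex. The key bytes are placeholders:

```python
from bigchaindb.tendermint_utils import key_from_base64, key_to_base64

hex_key = 'AA' * 32                           # 32 placeholder bytes, hex-encoded
b64_key = key_to_base64(hex_key)              # hex -> base64
assert key_from_base64(b64_key) == hex_key    # base64 -> hex round-trips
```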
def amino_encoded_public_key(ed25519_public_key):

View File

@ -0,0 +1,3 @@
from bigchaindb.upsert_validator.validator_election import ValidatorElection # noqa
from bigchaindb.upsert_validator.validator_election_vote import ValidatorElectionVote # noqa

View File

@ -0,0 +1,143 @@
from bigchaindb.common.exceptions import (InvalidSignature,
MultipleInputsError,
InvalidProposer,
UnequalValidatorSet,
InvalidPowerChange,
DuplicateTransaction)
from bigchaindb.tendermint_utils import key_from_base64
from bigchaindb.common.crypto import (public_key_from_ed25519_key)
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.schema import (_validate_schema,
TX_SCHEMA_VALIDATOR_ELECTION,
TX_SCHEMA_COMMON,
TX_SCHEMA_CREATE)
class ValidatorElection(Transaction):
VALIDATOR_ELECTION = 'VALIDATOR_ELECTION'
# NOTE: this transaction class extends CREATE, so the operation inheritance is achieved
# by renaming CREATE to VALIDATOR_ELECTION
CREATE = VALIDATOR_ELECTION
ALLOWED_OPERATIONS = (VALIDATOR_ELECTION,)
def __init__(self, operation, asset, inputs, outputs,
metadata=None, version=None, hash_id=None):
# operation `CREATE` is being passed as argument as `VALIDATOR_ELECTION` is an extension
# of `CREATE` and any validation on `CREATE` in the parent class should apply to it
super().__init__(operation, asset, inputs, outputs, metadata, version, hash_id)
@classmethod
def current_validators(cls, bigchain):
"""Return a dictionary of validators with key as `public_key` and
value as the `voting_power`
"""
validators = {}
for validator in bigchain.get_validators():
# NOTE: we assume that Tendermint encodes public key in base64
public_key = public_key_from_ed25519_key(key_from_base64(validator['pub_key']['value']))
validators[public_key] = validator['voting_power']
return validators
@classmethod
def recipients(cls, bigchain):
"""Convert validator dictionary to a recipient list for `Transaction`"""
recipients = []
for public_key, voting_power in cls.current_validators(bigchain).items():
recipients.append(([public_key], voting_power))
return recipients
@classmethod
def is_same_topology(cls, current_topology, election_topology):
voters = {}
for voter in election_topology:
if len(voter.public_keys) > 1:
return False
[public_key] = voter.public_keys
voting_power = voter.amount
voters[public_key] = voting_power
# Check whether the voters and their votes match the validators and
# their voting power in the network
return (current_topology == voters)
def validate(self, bigchain, current_transactions=[]):
"""Validate election transaction
For more details, refer to BEP-21: https://github.com/bigchaindb/BEPs/tree/master/21
NOTE:
* A valid election is initiated by an existing validator.
* A valid election is one where voters are validators and votes are
allocated according to the voting power of each validator node.
Args:
bigchain (BigchainDB): an instantiated bigchaindb.lib.BigchainDB object.
Returns:
`True` if the election is valid
Raises:
ValidationError: If the election is invalid
"""
input_conditions = []
duplicates = any(txn for txn in current_transactions if txn.id == self.id)
if bigchain.get_transaction(self.id) or duplicates:
raise DuplicateTransaction('transaction `{}` already exists'
.format(self.id))
if not self.inputs_valid(input_conditions):
raise InvalidSignature('Transaction signature is invalid.')
current_validators = self.current_validators(bigchain)
# NOTE: Proposer should be a single node
if len(self.inputs) != 1 or len(self.inputs[0].owners_before) != 1:
raise MultipleInputsError('`tx_signers` must be a list instance of length one')
# NOTE: changing more than 1/3 of the current power is not allowed
if self.asset['data']['power'] >= (1/3)*sum(current_validators.values()):
raise InvalidPowerChange('`power` change must be less than 1/3 of total power')
# NOTE: Check if the proposer is a validator.
[election_initiator_node_pub_key] = self.inputs[0].owners_before
if election_initiator_node_pub_key not in current_validators.keys():
raise InvalidProposer('Public key is not a part of the validator set')
# NOTE: Check if all validators have been assigned votes equal to their voting power
if not self.is_same_topology(current_validators, self.outputs):
raise UnequalValidatorSet('Validator set must be exactly the same as the outputs of the election')
return True
@classmethod
def generate(cls, initiator, voters, election_data, metadata=None):
(inputs, outputs) = cls.validate_create(initiator, voters, election_data, metadata)
election = cls(cls.VALIDATOR_ELECTION, {'data': election_data}, inputs, outputs, metadata)
cls.validate_schema(election.to_dict(), skip_id=True)
return election
@classmethod
def validate_schema(cls, tx, skip_id=False):
"""Validate the validator election transaction. Since `VALIDATOR_ELECTION` extends `CREATE`
transaction, all the validations for `CREATE` transaction should be inherited
"""
if not skip_id:
cls.validate_id(tx)
_validate_schema(TX_SCHEMA_COMMON, tx)
_validate_schema(TX_SCHEMA_CREATE, tx)
_validate_schema(TX_SCHEMA_VALIDATOR_ELECTION, tx)
@classmethod
def create(cls, tx_signers, recipients, metadata=None, asset=None):
raise NotImplementedError
@classmethod
def transfer(cls, tx_signers, recipients, metadata=None, asset=None):
raise NotImplementedError
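
A hedged sketch of proposing an election with the class above. In a real network the proposer key belongs to an existing validator and `voters` comes from `ValidatorElection.recipients(bigchain)`; every value below is a placeholder:

```python
from bigchaindb.common.crypto import generate_key_pair
from bigchaindb.upsert_validator import ValidatorElection

proposer = generate_key_pair()
voters = [([proposer.public_key], 10)]        # ([public_keys], voting_power)
new_validator = {'node_id': 'fake_node_id',
                 'public_key': 'placeholder_hex_encoded_public_key',
                 'power': 1}

election = ValidatorElection.generate([proposer.public_key], voters,
                                      new_validator, None)
signed_election = election.sign([proposer.private_key])
```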

View File

@ -0,0 +1,65 @@
import base58
from bigchaindb.common.transaction import Transaction
from bigchaindb.common.schema import (_validate_schema,
TX_SCHEMA_COMMON,
TX_SCHEMA_TRANSFER,
TX_SCHEMA_VALIDATOR_ELECTION_VOTE)
class ValidatorElectionVote(Transaction):
VALIDATOR_ELECTION_VOTE = 'VALIDATOR_ELECTION_VOTE'
# NOTE: This class inherits TRANSFER txn type. The `TRANSFER` property is
# overridden to re-use methods from the parent class
TRANSFER = VALIDATOR_ELECTION_VOTE
ALLOWED_OPERATIONS = (VALIDATOR_ELECTION_VOTE,)
def validate(self, bigchain, current_transactions=[]):
"""Validate election vote transaction
NOTE: There are no additional validity conditions on casting votes, i.e.
a vote is just a valid TRANSFER transaction
For more details, refer to BEP-21: https://github.com/bigchaindb/BEPs/tree/master/21
Args:
bigchain (BigchainDB): an instantiated bigchaindb.lib.BigchainDB object.
Returns:
`True` if the election vote is valid
Raises:
ValidationError: If the election vote is invalid
"""
self.validate_transfer_inputs(bigchain, current_transactions)
return self
@classmethod
def to_public_key(cls, election_id):
return base58.b58encode(bytes.fromhex(election_id))
@classmethod
def generate(cls, inputs, recipients, election_id, metadata=None):
(inputs, outputs) = cls.validate_transfer(inputs, recipients, election_id, metadata)
election_vote = cls(cls.VALIDATOR_ELECTION_VOTE, {'id': election_id}, inputs, outputs, metadata)
cls.validate_schema(election_vote.to_dict(), skip_id=True)
return election_vote
@classmethod
def validate_schema(cls, tx, skip_id=False):
"""Validate the validator election vote transaction. Since `VALIDATOR_ELECTION_VOTE` extends `TRANFER`
transaction, all the validations for `CREATE` transaction should be inherited
"""
if not skip_id:
cls.validate_id(tx)
_validate_schema(TX_SCHEMA_COMMON, tx)
_validate_schema(TX_SCHEMA_TRANSFER, tx)
_validate_schema(TX_SCHEMA_VALIDATOR_ELECTION_VOTE, tx)
@classmethod
def create(cls, tx_signers, recipients, metadata=None, asset=None):
raise NotImplementedError
@classmethod
def transfer(cls, tx_signers, recipients, metadata=None, asset=None):
raise NotImplementedError
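
A small sketch of `to_public_key()`, which turns an election id (a 64-character hex digest) into the base58 value that vote outputs are locked to; the id below is a placeholder:

```python
from bigchaindb.upsert_validator import ValidatorElectionVote

election_id = 'ab' * 32   # placeholder election (transaction) id
print(ValidatorElectionVote.to_public_key(election_id))
```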

View File

@ -1,2 +1,2 @@
__version__ = '2.0.0b3'
__short_version__ = '2.0b3'
__version__ = '2.0.0b5'
__short_version__ = '2.0b5'

View File

@ -8,9 +8,10 @@ from flask import current_app, request, jsonify
from flask_restful import Resource, reqparse
from bigchaindb.common.exceptions import SchemaValidationError, ValidationError
from bigchaindb.models import Transaction
from bigchaindb.web.views.base import make_error
from bigchaindb.web.views import parameters
from bigchaindb.models import Transaction
logger = logging.getLogger(__name__)
@ -28,9 +29,9 @@ class TransactionApi(Resource):
pool = current_app.config['bigchain_pool']
with pool() as bigchain:
tx, status = bigchain.get_transaction(tx_id, include_status=True)
tx = bigchain.get_transaction(tx_id)
if not tx or status is not bigchain.TX_VALID:
if not tx:
return make_error(404)
return tx.to_dict()

View File

@ -16,6 +16,7 @@ import asyncio
import logging
import threading
from uuid import uuid4
from concurrent.futures import CancelledError
import aiohttp
from aiohttp import web
@ -105,7 +106,7 @@ class Dispatcher:
for _, websocket in self.subscribers.items():
for str_item in str_buffer:
websocket.send_str(str_item)
yield from websocket.send_str(str_item)
@asyncio.coroutine
@ -125,6 +126,9 @@ def websocket_handler(request):
except RuntimeError as e:
logger.debug('Websocket exception: %s', str(e))
break
except CancelledError as e:
logger.debug('Websocket closed')
break
if msg.type == aiohttp.WSMsgType.CLOSED:
logger.debug('Websocket closed')
break

View File

@ -44,7 +44,7 @@ services:
retries: 3
command: '.ci/entrypoint.sh'
tendermint:
image: tendermint/tendermint:0.22.3
image: tendermint/tendermint:0.22.8
# volumes:
# - ./tmdata:/tendermint
entrypoint: ''

View File

@ -32,7 +32,7 @@ $ curl -fOL https://raw.githubusercontent.com/bigchaindb/bigchaindb/${GIT_BRANCH
## Quick Start
If you run `stack.sh` out of the box i.e. without any configuration changes, you will be able to deploy a 4 node
BigchainDB network with Docker containers, created from `master` branch of `bigchaindb/bigchaindb` repo and Tendermint version `0.22.3`.
BigchainDB network with Docker containers, created from `master` branch of `bigchaindb/bigchaindb` repo and Tendermint version `0.22.8`.
**Note**: Run `stack.sh` as root, or as a non-root user with sudo enabled.
@ -90,7 +90,7 @@ $ bash stack.sh -h
variable. (default: master)
ENV[TM_VERSION]
(Optional) Tendermint version to use for the setup. (default: 0.22.3)
(Optional) Tendermint version to use for the setup. (default: 0.22.8)
ENV[MONGO_VERSION]
(Optional) MongoDB version to use with the setup. (default: 3.6)
@ -171,8 +171,8 @@ $ export STACK_REPO=bigchaindb/bigchaindb
# Default: master
$ export STACK_BRANCH=master
#Optional, since 0.22.3 is the default tendermint version.
$ export TM_VERSION=0.22.3
#Optional, since 0.22.8 is the default tendermint version.
$ export TM_VERSION=0.22.8
#Optional, since 3.6 is the default MongoDB version.
$ export MONGO_VERSION=3.6
@ -222,8 +222,8 @@ $ export STACK_REPO=bigchaindb/bigchaindb
# Default: master
$ export STACK_BRANCH=master
#Optional, since 0.22.3 is the default tendermint version
$ export TM_VERSION=0.22.3
#Optional, since 0.22.8 is the default tendermint version
$ export TM_VERSION=0.22.8
#Optional, since 3.6 is the default MongoDB version.
$ export MONGO_VERSION=3.6

View File

@ -19,13 +19,13 @@ After the installation of MongoDB is complete, run MongoDB using `sudo mongod`
### Installing a Tendermint Executable
Find [the version number of the latest Tendermint release](https://github.com/tendermint/tendermint/releases) and install it using the following, where 0.22.3 should be replaced by the latest released version number:
Find [the version number of the latest Tendermint release](https://github.com/tendermint/tendermint/releases) and install it using the following, where 0.22.8 should be replaced by the latest released version number:
```bash
$ sudo apt install -y unzip
$ wget https://github.com/tendermint/tendermint/releases/download/v0.22.3/tendermint_0.22.3_linux_amd64.zip
$ unzip tendermint_0.22.3_linux_amd64.zip
$ rm tendermint_0.22.3_linux_amd64.zip
$ wget https://github.com/tendermint/tendermint/releases/download/v0.22.8/tendermint_0.22.8_linux_amd64.zip
$ unzip tendermint_0.22.8_linux_amd64.zip
$ rm tendermint_0.22.8_linux_amd64.zip
$ sudo mv tendermint /usr/local/bin
```

View File

@ -2,13 +2,14 @@
The word _immutable_ means "unchanging over time or unable to be changed." For example, the decimal digits of π are immutable (3.14159…).
The blockchain community often describes blockchains as “immutable.” If we interpret that word literally, it means that blockchain data is unchangeable or permanent, which is absurd. The data _can_ be changed. For example, a plague might drive humanity extinct; the data would then get corrupted over time due to water damage, thermal noise, and the general increase of entropy. In the case of Bitcoin, nothing so drastic is required: a 51% attack will suffice.
The blockchain community often describes blockchains as “immutable.” If we interpret that word literally, it means that blockchain data is unchangeable or permanent, which is absurd. The data _can_ be changed. For example, a plague might drive humanity extinct; the data would then get corrupted over time due to water damage, thermal noise, and the general increase of entropy.
It's true that blockchain data is more difficult to change (or delete) than usual. It's more than just "tamper-resistant" (which implies intent): blockchain data also resists random changes that can happen without any intent, such as data corruption on a hard drive. Therefore, in the context of blockchains, we interpret the word “immutable” to mean *practically* immutable, for all intents and purposes. (Linguists would say that the word “immutable” is a _term of art_ in the blockchain community.)
Blockchain data can achieve immutability in several ways:
Blockchain data can be made immutable in several ways:
1. **Replication.** All data is replicated (copied) to several different places. The replication factor can be set by the consortium. The higher the replication factor, the more difficult it becomes to change or delete all replicas.
1. **No APIs for changing or deleting data.** Blockchain software usually doesn't expose any APIs for changing or deleting the data stored in the blockchain. BigchainDB has no such APIs. This doesn't prevent changes or deletions from happening in _other_ ways; it's just one line of defense.
1. **Replication.** All data is replicated (copied) to several different places. The higher the replication factor, the more difficult it becomes to change or delete all replicas.
1. **Internal watchdogs.** All nodes monitor all changes and if some unallowed change happens, then appropriate action can be taken.
1. **External watchdogs.** A consortium may opt to have trusted third-parties to monitor and audit their data, looking for irregularities. For a consortium with publicly-readable data, the public can act as an auditor.
1. **Economic incentives.** Some blockchain systems make it very expensive to change old stored data. Examples include proof-of-work and proof-of-stake systems. BigchainDB doesn't use explicit incentives like those.
@ -17,5 +18,3 @@ Blockchain data can achieve immutability in several ways:
1. **Full or partial backups** may be recorded from time to time, possibly on magnetic tape storage, other blockchains, printouts, etc.
1. **Strong security.** Node owners can adopt and enforce strong security policies.
1. **Node diversity.** Diversity makes it so that no one thing (e.g. natural disaster or operating system bug) can compromise enough of the nodes. See [the section on the kinds of node diversity](diversity.html).
Some of these things come "for free" as part of the BigchainDB software, and others require some extra effort from the consortium and node owners.

View File

@ -91,4 +91,5 @@ More About BigchainDB
transaction-concepts
store-files
permissions
private-data
Data Models <https://docs.bigchaindb.com/projects/server/en/latest/data-models/index.html>

View File

@ -53,20 +53,7 @@ You could do more elaborate things too. As one example, each time someone writes
Read Permissions
================
All the data stored in a BigchainDB network can be read by anyone with access to that network. One *can* store encrypted data, but if the decryption key ever leaks out, then the encrypted data can be read, decrypted, and leak out too. (Deleting the encrypted data is :doc:`not an option <immutable>`.)
The permission to read some specific information (e.g. a music file) can be thought of as an *asset*. (In many countries, that permission or "right" is a kind of intellectual property.)
BigchainDB can be used to register that asset and transfer it from owner to owner.
Today, BigchainDB does not have a way to restrict read access of data stored in a BigchainDB network, but many third-party services do offer that (e.g. Google Docs, Dropbox).
In principle, a third party service could ask a BigchainDB network to determine if a particular user has permission to read some particular data. Indeed they could use BigchainDB to keep track of *all* the rights a user has for some data (not just the right to read it).
That third party could also use BigchainDB to store audit logs, i.e. records of every read, write or other operation on stored data.
BigchainDB can be used in other ways to help parties exchange private data:
- It can be used to publicly disclose the *availability* of some private data (stored elsewhere). For example, there might be a description of the data and a price.
- It can be used to record the TLS handshakes which two parties sent to each other to establish an encrypted and authenticated TLS connection, which they could use to exchange private data with each other. (The stored handshake information wouldn't be enough, by itself, to decrypt the data.) It would be a "proof of TLS handshake."
- See the BigchainDB `Privacy Protocols repository <https://github.com/bigchaindb/privacy-protocols>`_ for more techniques.
See the page titled, :doc:`BigchainDB, Privacy and Private Data <private-data>`.
Role-Based Access Control (RBAC)
================================

View File

@ -0,0 +1,100 @@
BigchainDB, Privacy and Private Data
------------------------------------
Basic Facts
===========
#. One can store arbitrary data (including encrypted data) in a BigchainDB network, within limits: there's a maximum transaction size. Every transaction has a ``metadata`` section which can store almost any Unicode string (up to some maximum length). Similarly, every CREATE transaction has an ``asset.data`` section which can store almost any Unicode string.
#. The data stored in certain BigchainDB transaction fields must not be encrypted, e.g. public keys and amounts. BigchainDB doesn't offer private transactions akin to Zcoin.
#. Once data has been stored in a BigchainDB network, it's best to assume it can't be changed or deleted.
#. Every node in a BigchainDB network has a full copy of all the stored data.
#. Every node in a BigchainDB network can read all the stored data.
#. Everyone with full access to a BigchainDB node (e.g. the sysadmin of a node) can read all the data stored on that node.
#. Everyone given access to a node via the BigchainDB HTTP API can find and read all the data stored by BigchainDB. The list of people with access might be quite short.
#. If the connection between an external user and a BigchainDB node isn't encrypted (using HTTPS, for example), then a wiretapper can read all HTTP requests and responses in transit.
#. If someone gets access to plaintext (regardless of where they got it), then they can (in principle) share it with the whole world. One can make it difficult for them to do that, e.g. if it is a lot of data and they only get access inside a secure room where they are searched as they leave the room.
Storing Private Data Off-Chain
==============================
A system could store data off-chain, e.g. in a third-party database, document store, or content management system (CMS) and it could use BigchainDB to:
- Keep track of who has read permissions (or other permissions) in a third-party system. An example of how this could be done is described below.
- Keep a permanent record of all requests made to the third-party system.
- Store hashes of documents-stored-elsewhere, so that a change in any document can be detected (a minimal hashing sketch follows this list).
- Record all handshake-establishing requests and responses between two off-chain parties (e.g. a Diffie-Hellman key exchange), so as to prove that they established an encrypted tunnel (without giving readers access to that tunnel). There are more details about this idea in `the BigchainDB Privacy Protocols repository <https://github.com/bigchaindb/privacy-protocols>`_.
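For the hash-anchoring idea above, here is a minimal sketch using only the Python standard library; the file name and the shape of the ``asset`` payload are illustrative assumptions, not a required schema.

.. code:: python

    # Hash an off-chain document and keep only the digest on-chain.
    import hashlib

    with open('contract-2018-08.pdf', 'rb') as f:   # the document itself stays off-chain
        digest = hashlib.sha3_256(f.read()).hexdigest()

    # This dict could become the asset.data of a CREATE transaction.
    asset = {'data': {'document_name': 'contract-2018-08.pdf', 'sha3_256': digest}}

    # Later, anyone can re-hash the off-chain file and compare; a mismatch means the
    # document was modified after the digest was anchored.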
A simple way to record who has read permission on a particular document would be for the third-party system (“DocPile”) to store a CREATE transaction in a BigchainDB network for every document+user pair, to indicate that that user has read permissions for that document. The transaction could be signed by DocPile (or maybe by a document owner, as a variation). The asset data field would contain 1) the unique ID of the user and 2) the unique ID of the document. The one output on the CREATE transaction would only be transferable/spendable by DocPile (or, again, a document owner).
To revoke the read permission, DocPile could create a TRANSFER transaction, to spend the one output on the original CREATE transaction, with a metadata field to say that the user in question no longer has read permission on that document.
This can be carried on indefinitely, i.e. another TRANSFER transaction could be created by DocPile to indicate that the user now has read permissions again.
DocPile can figure out if a given user has read permissions on a given document by reading the last transaction in the CREATE → TRANSFER → TRANSFER → etc. chain for that user+document pair.
There are other ways to accomplish the same thing. The above is just one example.
You might have noticed that the above example didn't treat the “read permission” as an asset owned (controlled) by a user, because if the permission asset is given to (transferred to or created by) the user, then it cannot be controlled any further (by DocPile) until the user transfers it back to DocPile. Moreover, the user could transfer the asset to someone else, which might be problematic.
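To make the DocPile pattern above concrete, here is a minimal sketch. It assumes the ``bigchaindb_driver`` Python package (a version exposing ``prepare``/``fulfill``/``send_commit`` and ``transactions.get``), a node at ``http://localhost:9984``, and made-up user and document IDs; none of this is a prescribed interface, just one way the flow could look.

.. code:: python

    # Hedged sketch of the DocPile pattern: grant read access with a CREATE transaction,
    # revoke it with a TRANSFER transaction spending the CREATE output back to DocPile.
    from bigchaindb_driver import BigchainDB
    from bigchaindb_driver.crypto import generate_keypair

    bdb = BigchainDB('http://localhost:9984')
    docpile = generate_keypair()

    # Grant: one CREATE transaction per (user, document) pair, spendable only by DocPile.
    grant = bdb.transactions.prepare(
        operation='CREATE',
        signers=docpile.public_key,
        asset={'data': {'user_id': 'user-17', 'doc_id': 'doc-42'}},
        metadata={'action': 'grant-read'},
    )
    grant = bdb.transactions.fulfill(grant, private_keys=docpile.private_key)
    grant = bdb.transactions.send_commit(grant)

    # Revoke: spend the CREATE output back to DocPile, noting the revocation in metadata.
    output = grant['outputs'][0]
    revoke = bdb.transactions.prepare(
        operation='TRANSFER',
        asset={'id': grant['id']},
        inputs={
            'fulfillment': output['condition']['details'],
            'fulfills': {'output_index': 0, 'transaction_id': grant['id']},
            'owners_before': output['public_keys'],
        },
        recipients=docpile.public_key,
        metadata={'action': 'revoke-read'},
    )
    revoke = bdb.transactions.fulfill(revoke, private_keys=docpile.private_key)
    bdb.transactions.send_commit(revoke)

    # Check: fetch the CREATE -> TRANSFER -> ... chain for this asset; the newest
    # transaction's metadata tells DocPile whether the user currently has read access.
    history = bdb.transactions.get(asset_id=grant['id'])
    print([tx['metadata']['action'] for tx in history])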
Storing Private Data On-Chain, Encrypted
========================================
There are many ways to store private data on-chain, encrypted. Every use case has its own objectives and constraints, and the best solution depends on the use case. `The BigchainDB consulting team <https://www.bigchaindb.com/services/>`_, along with our partners, can help you design the best solution for your use case.
Below we describe some example system setups, using various crypto primitives, to give a sense of what's possible.
Please note:
- Ed25519 keypairs are designed for signing and verifying cryptographic signatures, `not for encrypting and decrypting messages <https://crypto.stackexchange.com/questions/27866/why-curve25519-for-encryption-but-ed25519-for-signatures>`_. For encryption, you should use keypairs designed for encryption, such as X25519.
- If someone (or some group) publishes how to decrypt some encrypted data on-chain, then anyone with access to that encrypted data will be able to get the plaintext. The data can't be deleted.
- Encrypted data can't be indexed or searched by MongoDB. (It can index and search the ciphertext, but that's not very useful.) One might use homomorphic encryption to index and search encrypted data, but MongoDB doesn't have any plans to support that any time soon. If indexing or keyword search is needed, then some fields of the ``asset.data`` or ``metadata`` objects can be left as plain text and the sensitive information can be stored in an encrypted child-object.
System Example 1
~~~~~~~~~~~~~~~~
Encrypt the data with a symmetric key and store the ciphertext on-chain (in ``metadata`` or ``asset.data``). To communicate the key to a third party, use their public key to encrypt the symmetric key and send them that. They can decrypt the symmetric key with their private key, and then use that symmetric key to decrypt the on-chain ciphertext.
The reason for using a symmetric key along with public/private keypairs is so the ciphertext only has to be stored once.
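Here is a minimal sketch of this setup, assuming the PyNaCl library (``pip install pynacl``) for both the symmetric encryption and the X25519 key wrapping; the sample plaintext is illustrative only.

.. code:: python

    # Sketch of System Example 1: encrypt once with a symmetric key, wrap the key per recipient.
    import nacl.public
    import nacl.secret
    import nacl.utils

    # 1. Encrypt the data once with a random symmetric key; the ciphertext goes on-chain.
    sym_key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
    ciphertext = nacl.secret.SecretBox(sym_key).encrypt(b'{"serial_number": "BDB-42"}')

    # 2. Wrap the symmetric key for one recipient, using their X25519 public key.
    recipient_sk = nacl.public.PrivateKey.generate()      # held only by the recipient
    wrapped_key = nacl.public.SealedBox(recipient_sk.public_key).encrypt(sym_key)

    # 3. The recipient unwraps the symmetric key and decrypts the on-chain ciphertext.
    recovered_key = nacl.public.SealedBox(recipient_sk).decrypt(wrapped_key)
    plaintext = nacl.secret.SecretBox(recovered_key).decrypt(ciphertext)
    assert plaintext == b'{"serial_number": "BDB-42"}'

Only the small wrapped key has to be produced per additional recipient; the on-chain ciphertext is stored once.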
System Example 2
~~~~~~~~~~~~~~~~
This example uses `proxy re-encryption <https://en.wikipedia.org/wiki/Proxy_re-encryption>`_:
#. MegaCorp encrypts some data using its own public key, then stores that encrypted data (ciphertext 1) in a BigchainDB network.
#. MegaCorp wants to let others read that encrypted data, but without ever sharing their private key and without having to re-encrypt themselves for every new recipient. Instead, they find a “proxy” named Moxie, to provide proxy re-encryption services.
#. Zorban contacts MegaCorp and asks for permission to read the data.
#. MegaCorp asks Zorban for his public key.
#. MegaCorp generates a “re-encryption key” and sends it to their proxy, Moxie.
#. Moxie (the proxy) uses the re-encryption key to encrypt ciphertext 1, creating ciphertext 2.
#. Moxie sends ciphertext 2 to Zorban (or to MegaCorp who forwards it to Zorban).
#. Zorban uses his private key to decrypt ciphertext 2, getting the original un-encrypted data.
Note:
- The proxy only ever sees ciphertext. They never see any un-encrypted data.
- Zorban never got the ability to decrypt ciphertext 1, i.e. the on-chain data.
- There are variations on the above flow.
System Example 3
~~~~~~~~~~~~~~~~
This example uses `erasure coding <https://en.wikipedia.org/wiki/Erasure_code>`_:
#. Erasure-code the data into n pieces.
#. Encrypt each of the n pieces with a different encryption key.
#. Store the n encrypted pieces on-chain, e.g. in n separate transactions.
#. Share each of the n decryption keys with a different party.
If k of the n key-holders (where k < n, with k determined by the erasure-coding scheme) get together and decrypt their pieces, they can reconstruct the original plaintext. Fewer than k pieces would not be enough.
System Example 4
~~~~~~~~~~~~~~~~
This setup could be used in an enterprise blockchain scenario where a special node should be able to see parts of the data, but the others should not.
- The special node generates an X25519 keypair (or similar asymmetric *encryption* keypair).
- A BigchainDB end user finds out the X25519 public key (encryption key) of the special node.
- The end user creates a valid BigchainDB transaction, with either the asset.data or the metadata (or both) encrypted using the above-mentioned public key.
- This is only done for transactions where the contents of asset.data or metadata don't matter for validation, so all node operators can validate the transaction.
- The special node is able to decrypt the encrypted data, but the other node operators can't, nor can any other end user.
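A minimal sketch of this setup follows, assuming PyNaCl for the X25519 keypair and base64 for embedding the ciphertext in a Unicode-friendly field; the field names and sample values are illustrative only.

.. code:: python

    # Sketch of System Example 4: only the special node can read the encrypted part.
    import base64
    import nacl.public

    special_node_sk = nacl.public.PrivateKey.generate()   # kept secret by the special node
    special_node_pk = special_node_sk.public_key           # published to end users

    # The end user encrypts the sensitive part of the metadata for the special node.
    secret = b'internal pricing: 123.45 EUR'
    blob = nacl.public.SealedBox(special_node_pk).encrypt(secret)
    metadata = {
        'public_note': 'shipment 7 left the warehouse',           # plain text, searchable
        'private_blob': base64.b64encode(blob).decode('ascii'),   # opaque to everyone else
    }

    # Only the special node can recover the plaintext from the stored transaction.
    recovered = nacl.public.SealedBox(special_node_sk).decrypt(
        base64.b64decode(metadata['private_blob']))
    assert recovered == secret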

View File

@ -0,0 +1,85 @@
# Run BigchainDB with all-in-one Docker
For those who like using Docker and wish to experiment with BigchainDB in
non-production environments, we currently maintain a BigchainDB all-in-one
Docker image and a
`Dockerfile-all-in-one` that can be used to build an image for `bigchaindb`.
This image contains all the services required for a BigchainDB node, i.e.
- BigchainDB Server
- MongoDB
- Tendermint
**Note:** **NOT for Production Use:** *This is a single-node, opinionated image that is not well suited for a network deployment.*
*This image is meant to help early adopters deploy quickly; for a more standard approach, please refer to one of our deployment guides:*
- [BigchainDB developer setup guides](https://docs.bigchaindb.com/projects/contributing/en/latest/dev-setup-coding-and-contribution-process/index.html).
- [BigchainDB with Kubernetes](http://docs.bigchaindb.com/projects/server/en/latest/k8s-deployment-template/index.html).
## Prerequisite(s)
- [Docker](https://docs.docker.com/engine/installation/)
## Pull and Run the Image from Docker Hub
With Docker installed, you can proceed as follows.
In a terminal shell, pull the latest version of the BigchainDB all-in-one Docker image using:
```text
$ docker pull bigchaindb/bigchaindb:all-in-one
$ docker run \
--detach \
--name bigchaindb \
--publish 9984:9984 \
--publish 9985:9985 \
--publish 27017:27017 \
--publish 26657:26657 \
--volume $HOME/bigchaindb_docker/mongodb/data/db:/data/db \
--volume $HOME/bigchaindb_docker/mongodb/data/configdb:/data/configdb \
--volume $HOME/bigchaindb_docker/tendermint:/tendermint \
bigchaindb/bigchaindb:all-in-one
```
Let's analyze that command:
* `docker run` tells Docker to run some image
* `--detach` run the container in the background
* `--publish 9984:9984` map the host port `9984` to the container port `9984`
(the BigchainDB API server)
* `9985` BigchainDB Websocket server
* `27017` Default port for MongoDB
* `26657` Tendermint RPC server
* `--volume "$HOME/bigchaindb_docker/mongodb:/data"` map the host directory
`$HOME/bigchaindb_docker/mongodb` to the container directory `/data`;
this allows us to have the data persisted on the host machine,
you can read more in the [official Docker
documentation](https://docs.docker.com/engine/tutorials/dockervolumes)
* `$HOME/bigchaindb_docker/tendermint:/tendermint` to persist Tendermint data.
* `bigchaindb/bigchaindb:all-in-one` the image to use. All the options after the container name are passed on to the entrypoint inside the container.
## Verify
```text
$ docker ps | grep bigchaindb
```
Send your first transaction using [BigchainDB drivers](../drivers-clients/index.html).
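For example, a first CREATE transaction could look like the following minimal sketch, assuming the `bigchaindb-driver` Python package (a version exposing `send_commit`) is installed on the host and the container's port `9984` is published as shown above:
```python
# Minimal smoke test against the all-in-one container (driver package is an assumption).
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair

bdb = BigchainDB('http://localhost:9984')
alice = generate_keypair()

prepared = bdb.transactions.prepare(
    operation='CREATE',
    signers=alice.public_key,
    asset={'data': {'hello': 'all-in-one'}},
)
fulfilled = bdb.transactions.fulfill(prepared, private_keys=alice.private_key)
committed = bdb.transactions.send_commit(fulfilled)  # waits until the tx is committed
print(committed['id'])
```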
## Building Your Own Image
Assuming you have Docker installed, you would proceed as follows.
In a terminal shell:
```text
git clone git@github.com:bigchaindb/bigchaindb.git
cd bigchaindb/
```
Build the Docker image:
```text
docker build --file Dockerfile-all-in-one --tag <tag/name:latest> .
```
Now you can use your own image to run BigchainDB all-in-one container.

View File

@ -4,7 +4,6 @@ Appendices
.. toctree::
:maxdepth: 1
install-os-level-deps
json-serialization
cryptography
the-bigchaindb-class
@ -15,3 +14,4 @@ Appendices
firewall-notes
ntp-notes
licenses
all-in-one-bigchaindb

View File

@ -1,17 +0,0 @@
# How to Install OS-Level Dependencies
BigchainDB Server has some OS-level dependencies that must be installed.
On Ubuntu 16.04, we found that the following was enough:
```text
sudo apt-get update
sudo apt-get install libffi-dev libssl-dev
```
On Fedora 2325, we found that the following was enough:
```text
sudo dnf update
sudo dnf install gcc-c++ redhat-rpm-config python3-devel libffi-devel
```
(If you're using a version of Fedora before version 22, you may have to use `yum` instead of `dnf`.)

View File

@ -2,7 +2,6 @@
A **BigchainDB Cluster** is a set of connected **BigchainDB Nodes**, managed by a **BigchainDB Consortium** (i.e. an organization). Those terms are defined in the [BigchainDB Terminology page](https://docs.bigchaindb.com/en/latest/terminology.html).
## Consortium Structure & Governance
The consortium might be a company, a foundation, a cooperative, or [some other form of organization](https://en.wikipedia.org/wiki/Organizational_structure).
@ -13,13 +12,6 @@ This documentation doesn't explain how to create a consortium, nor does it outli
It's worth noting that the decentralization of a BigchainDB cluster depends,
to some extent, on the decentralization of the associated consortium. See the pages about [decentralization](https://docs.bigchaindb.com/en/latest/decentralized.html) and [node diversity](https://docs.bigchaindb.com/en/latest/diversity.html).
## Relevant Technical Documentation
Anyone building or managing a BigchainDB cluster may be interested
in [our production deployment template](production-deployment-template/index.html).
## Cluster DNS Records and SSL Certificates
We now describe how *we* set up the external (public-facing) DNS records for a BigchainDB cluster. Your consortium may opt to do it differently.
@ -30,14 +22,12 @@ There were several goals:
* There should be no sharing of SSL certificates among BigchainDB node operators.
* Optional: Allow clients to connect to a "random" BigchainDB node in the cluster at one particular domain (or subdomain).
### Node Operator Responsibilities
1. Register a domain (or use one that you already have) for your BigchainDB node. You can use a subdomain if you like. For example, you might opt to use `abc-org73.net`, `api.dynabob8.io` or `figmentdb3.ninja`.
2. Get an SSL certificate for your domain or subdomain, and properly install it in your node (e.g. in your NGINX instance).
3. Create a DNS A Record mapping your domain or subdomain to the public IP address of your node (i.e. the one that serves the BigchainDB HTTP API).
### Consortium Responsibilities
Optional: The consortium managing the BigchainDB cluster could register a domain name and set up CNAME records mapping that domain name (or one of its subdomains) to each of the nodes in the cluster. For example, if the consortium registered `bdbcluster.io`, they could set up CNAME records like the following:

View File

@ -6,6 +6,7 @@ Libraries and Tools Maintained by the BigchainDB Team
* `Python Driver <https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_
* `JavaScript / Node.js Driver <https://github.com/bigchaindb/js-bigchaindb-driver>`_
* `Java driver <https://github.com/bigchaindb/java-bigchaindb-driver>`_
Community-Driven Libraries and Tools
------------------------------------
@ -17,6 +18,5 @@ Community-Driven Libraries and Tools
* `Haskell transaction builder <https://github.com/bigchaindb/bigchaindb-hs>`_
* `Go driver <https://github.com/zbo14/envoke/blob/master/bigchain/bigchain.go>`_
* `Java driver <https://github.com/authenteq/java-bigchaindb-driver>`_
* `Ruby driver <https://github.com/LicenseRocks/bigchaindb_ruby>`_
* `Ruby library for preparing/signing transactions and submitting them or querying a BigchainDB node (MIT licensed) <https://rubygems.org/gems/bigchaindb>`_

View File

@ -1,19 +0,0 @@
Glossary
========
.. glossary::
:sorted:
associative array
A collection of key/value (or name/value) pairs
such that each possible key appears at most once
in the collection.
In JavaScript (and JSON), all objects behave as associative arrays
with string-valued keys.
In Python and .NET, associative arrays are called *dictionaries*.
In Java and Go, they are called *maps*.
In Ruby, they are called *hashes*.
See also: Wikipedia's articles for
`Associative array <https://en.wikipedia.org/wiki/Associative_array>`_
and
`Comparison of programming languages (associative array) <https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(associative_array)>`_

View File

@ -301,7 +301,7 @@ Assets
Currently this endpoint is only supported if using MongoDB.
.. http:get:: /api/v1/assets?search={search}
.. http:get:: /api/v1/assets/?search={search}
Return all assets that match a given text search.
@ -310,6 +310,10 @@ Assets
The ``id`` of the asset
is the same ``id`` of the CREATE transaction that created the asset.
.. note::
You can use ``assets/?search`` or ``assets?search``.
If no assets match the text search it returns an empty list.
If the text string is empty or the server does not support text search,
@ -425,6 +429,10 @@ Transaction Metadata
The ``id`` of the metadata
is the same ``id`` of the transaction where it was defined.
.. note::
You can use ``metadata/?search`` or ``metadata?search``.
If no metadata objects match the text search it returns an empty list.
If the text string is empty or the server does not support text search,

View File

@ -10,13 +10,12 @@ BigchainDB Server Documentation
simple-network-setup
production-nodes/index
clusters
production-deployment-template/index
dev-and-test/index
server-reference/index
http-client-server-api
events/index
drivers-clients/index
data-models/index
k8s-deployment-template/index
release-notes
glossary
appendices/index

View File

@ -1,13 +1,25 @@
Architecture of a BigchainDB Node
==================================
Architecture of a BigchainDB Node Running in a Kubernetes Cluster
=================================================================
A BigchainDB Production deployment is hosted on a Kubernetes cluster and includes:
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
If you deploy a BigchainDB node into a Kubernetes cluster
as described in these docs, it will include:
* NGINX, OpenResty, BigchainDB, MongoDB and Tendermint
`Kubernetes Services <https://kubernetes.io/docs/concepts/services-networking/service/>`_.
* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent.
* NGINX, OpenResty, BigchainDB and MongoDB Monitoring Agent
`Kubernetes Deployments <https://kubernetes.io/docs/concepts/workloads/controllers/deployment/>`_.
* MongoDB and Tendermint `Kubernetes StatefulSet <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
* MongoDB and Tendermint `Kubernetes StatefulSets <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>`_.
* Third party services like `3scale <https://3scale.net>`_,
`MongoDB Cloud Manager <https://cloud.mongodb.com>`_ and the
`Azure Operations Management Suite

View File

@ -3,6 +3,17 @@
Kubernetes Template: Deploying a BigchainDB network
===================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page describes how to deploy a static BigchainDB + Tendermint network.
If you want to deploy a stand-alone BigchainDB node in a BigchainDB cluster,

View File

@ -41,7 +41,7 @@ Configure MongoDB Cloud Manager for Monitoring
* If you have authentication enabled, select the option to enable
authentication and specify the authentication mechanism as per your
deployment. The default BigchainDB production deployment currently
deployment. The default BigchainDB Kubernetes deployment template currently
supports ``X.509 Client Certificate`` as the authentication mechanism.
* If you have TLS enabled, select the option to enable TLS/SSL for MongoDB

View File

@ -0,0 +1,40 @@
Kubernetes Deployment Template
==============================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This section outlines a way to deploy a BigchainDB node (or BigchainDB cluster)
on Microsoft Azure using Kubernetes.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free to change things to suit your needs or preferences.
.. toctree::
:maxdepth: 1
workflow
ca-installation
server-tls-certificate
client-tls-certificate
revoke-tls-certificate
template-kubernetes-azure
node-on-kubernetes
node-config-map-and-secrets
log-analytics
cloud-manager
easy-rsa
upgrade-on-kubernetes
bigchaindb-network-on-kubernetes
tectonic-azure
troubleshoot
architecture

View File

@ -3,6 +3,17 @@
How to Configure a BigchainDB Node
==================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page outlines the steps to set a bunch of configuration settings
in your BigchainDB node.
They are pushed to the Kubernetes cluster in two files,

View File

@ -3,7 +3,18 @@
Kubernetes Template: Deploy a Single BigchainDB Node
====================================================
This page describes how to deploy a BigchainDB + Tendermint node
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page describes how to deploy a BigchainDB node
using `Kubernetes <https://kubernetes.io/>`_.
It assumes you already have a running Kubernetes cluster.
@ -29,7 +40,7 @@ If you don't have that file, then you need to get it.
**Azure.** If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0 (as per :doc:`our template
<../production-deployment-template/template-kubernetes-azure>`),
<../k8s-deployment-template/template-kubernetes-azure>`),
then you can get the ``~/.kube/config`` file using:
.. code:: bash
@ -277,7 +288,7 @@ The first thing to do is create the Kubernetes storage classes.
First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per :doc:`our template <../production-deployment-template/template-kubernetes-azure>`),
(as per :doc:`our template <../k8s-deployment-template/template-kubernetes-azure>`),
then the `az acs create` command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
@ -289,7 +300,7 @@ in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
We recommend using Premium storage for our production template.
We recommend using Premium storage with our Kubernetes deployment template.
Create a `storage account <https://docs.microsoft.com/en-us/azure/storage/common/storage-create-storage-account>`_
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
@ -372,7 +383,7 @@ but it should become "Bound" fairly quickly.
$ kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
For notes on recreating a private volume form a released Azure disk resource consult
:doc:`the page about cluster troubleshooting <../production-deployment-template/troubleshoot>`.
:doc:`the page about cluster troubleshooting <../k8s-deployment-template/troubleshoot>`.
.. _start-kubernetes-stateful-set-mongodb:
@ -569,7 +580,7 @@ Step 19(Optional): Configure the MongoDB Cloud Manager
------------------------------------------------------
Refer to the
:doc:`documentation <../production-deployment-template/cloud-manager>`
:doc:`documentation <../k8s-deployment-template/cloud-manager>`
for details on how to configure the MongoDB Cloud Manager to enable
monitoring and backup.
@ -749,4 +760,4 @@ verify that your node or cluster works as expected.
Next, you can set up log analytics and monitoring, by following our templates:
* :doc:`../production-deployment-template/log-analytics`.
* :doc:`../k8s-deployment-template/log-analytics`.

View File

@ -1,6 +1,17 @@
Walkthrough: Deploy a Kubernetes Cluster on Azure using Tectonic by CoreOS
==========================================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
A BigchainDB node can be run inside a `Kubernetes <https://kubernetes.io/>`_
cluster.
This page describes one way to deploy a Kubernetes cluster on Azure using Tectonic.

View File

@ -1,6 +1,17 @@
Template: Deploy a Kubernetes Cluster on Azure
==============================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
A BigchainDB node can be run inside a `Kubernetes <https://kubernetes.io/>`_
cluster.
This page describes one way to deploy a Kubernetes cluster on Azure.

View File

@ -1,6 +1,17 @@
Kubernetes Template: Upgrade all Software in a BigchainDB Node
==============================================================
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page outlines how to upgrade all the software associated
with a BigchainDB node running on Kubernetes,
including host operating systems, Docker, Kubernetes,

View File

@ -3,9 +3,19 @@
Overview
========
This page summarizes the steps *we* go through
to set up a production BigchainDB cluster.
We are constantly improving them.
.. note::
A highly-available Kubernetes cluster requires at least five virtual machines
(three for the master and two for your app's containers).
Therefore we don't recommend using Kubernetes to run a BigchainDB node
if that's the only thing the Kubernetes cluster will be running.
Instead, see **How to Set Up a BigchainDB Network**.
If your organization already *has* a big Kubernetes cluster running many containers,
and your organization has people who know Kubernetes,
then this Kubernetes deployment template might be helpful.
This page summarizes some steps to go through
to set up a BigchainDB cluster.
You can modify them to suit your needs.
.. _generate-the-blockchain-id-and-genesis-time:
@ -44,7 +54,7 @@ you can do this:
.. code::
$ mkdir $(pwd)/tmdata
$ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:0.22.3 init
$ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:0.22.8 init
$ cat $(pwd)/tmdata/genesis.json
You should see something that looks like:
@ -113,13 +123,13 @@ and set it equal to your secret token, e.g.
3. Deploy a Kubernetes cluster for your BigchainDB node. We have some instructions for how to
:doc:`Deploy a Kubernetes cluster on Azure <../production-deployment-template/template-kubernetes-azure>`.
:doc:`Deploy a Kubernetes cluster on Azure <../k8s-deployment-template/template-kubernetes-azure>`.
.. warning::
In theory, you can deploy your BigchainDB node to any Kubernetes cluster, but there can be differences
between different Kubernetes clusters, especially if they are running different versions of Kubernetes.
We tested this Production Deployment Template on Azure ACS in February 2018 and at that time
We tested this Kubernetes Deployment Template on Azure ACS in February 2018 and at that time
ACS was deploying a **Kubernetes 1.7.7** cluster. If you can force your cluster to have that version of Kubernetes,
then you'll increase the likelihood that everything will work in your cluster.

View File

@ -1,31 +0,0 @@
Production Deployment Template
==============================
This section outlines how *we* deploy production BigchainDB,
integrated with Tendermint(backend for BFT consensus),
clusters on Microsoft Azure using
Kubernetes. We improve it constantly.
You may choose to use it as a template or reference for your own deployment,
but *we make no claim that it is suitable for your purposes*.
Feel free change things to suit your needs or preferences.
.. toctree::
:maxdepth: 1
workflow
ca-installation
server-tls-certificate
client-tls-certificate
revoke-tls-certificate
template-kubernetes-azure
node-on-kubernetes
node-config-map-and-secrets
log-analytics
cloud-manager
easy-rsa
upgrade-on-kubernetes
bigchaindb-network-on-kubernetes
tectonic-azure
troubleshoot
architecture

View File

@ -4,7 +4,8 @@ Production Nodes
.. toctree::
:maxdepth: 1
node-requirements
node-assumptions
node-components
node-requirements
node-security-and-privacy
reverse-proxy-notes

View File

@ -10,5 +10,3 @@ We make some assumptions about production nodes:
1. Production nodes use MongoDB (not RethinkDB, PostgreSQL, Couchbase or whatever).
1. Each production node is set up and managed by an experienced professional system administrator or a team of them.
1. Each production node in a cluster is managed by a different person or team.
We don't provide a detailed cookbook explaining how to secure a server, or other things that a sysadmin should know. We do provide some templates, but those are just starting points.

View File

@ -0,0 +1,11 @@
# Production Node Security & Privacy
Here are some references about how to secure an Ubuntu 18.04 server:
- [Ubuntu 18.04 - Ubuntu Server Guide - Security](https://help.ubuntu.com/lts/serverguide/security.html.en)
- [Ubuntu Blog: National Cyber Security Centre publish Ubuntu 18.04 LTS Security Guide](https://blog.ubuntu.com/2018/07/30/national-cyber-security-centre-publish-ubuntu-18-04-lts-security-guide)
Also, here are some recommendations a node operator can follow to enhance the privacy of the data coming to, stored on, and leaving their node:
- Ensure that all data stored on a node is encrypted at rest, e.g. using full disk encryption. This can be provided as a service by the operating system, transparently to BigchainDB, MongoDB and Tendermint.
- Ensure that all data is encrypted in transit, i.e. enforce using HTTPS for the HTTP API and the Websocket API. This can be done using NGINX or similar, as we do with the BigchainDB Testnet.

View File

@ -94,7 +94,7 @@ Example usage,
$ bigchaindb upsert-validator B0E42D2589A455EAD339A035D6CE1C8C3E25863F268120AA0162AD7D003A4014 10
```
If the command is returns without any error then a request to update the validator set has been successfully submitted. So, even if the command has been successfully executed it doesn't imply that the validator set has been updated. In order to check whether the change has been applied, the node operator can execute `curl http://node_ip:9985/api/v1/validators` which will list the current validators set. Refer to the [validators](/http-client-server-api.html#validators) section of the HTTP API docs for more detail.
If the command returns without any error, then a request to update the validator set has been successfully submitted. However, even if the command executed successfully, that doesn't imply that the validator set has been updated. To check whether the change has been applied, the node operator can execute `curl http://node_ip:9984/api/v1/validators`, which will list the current validator set. Refer to the [validators](/http-client-server-api.html#validators) section of the HTTP API docs for more detail.
Note:
- When `POWER`is set to `0` then the validator will be removed from the validator set.

View File

@ -16,7 +16,9 @@ A Network will stop working if more than one third of the Nodes are down or faul
## Before We Start
This tutorial assumes you have basic knowledge on how to manage a GNU/Linux machine. The commands are tailored for an up-to-date *Debian-like* distribution. (We use an **Ubuntu 18.04 LTS** Virtual Machine on Microsoft Azure.) If you are on a different Linux distribution then you might need to adapt the names of the packages installed.
This tutorial assumes you have basic knowledge on how to manage a GNU/Linux machine.
**Please note: The commands on this page work on Ubuntu 18.04. Similar commands will work on other versions of Ubuntu, and other recent Debian-like Linux distros, but you may have to change the names of the packages, or install more packages.**
We don't make any assumptions about **where** you run the Node.
You can run BigchainDB Server on a Virtual Machine on the cloud, on a machine in your data center, or even on a Raspberry Pi. Just make sure that your Node is reachable by the other Nodes. Here's a **non-exhaustive list of examples**:
@ -49,7 +51,9 @@ sudo apt full-upgrade
BigchainDB Server requires **Python 3.6+**, so make sure your system has it. Install the required packages:
```
# For Ubuntu 18.04:
sudo apt install -y python3-pip libssl-dev
# Ubuntu 16.04, and other Linux distros, may require other packages or more packages
```
Now install the latest version of BigchainDB. You can find the latest version by going to the [BigchainDB project release history page on PyPI][bdb:pypi]. For example, to install version 2.0.0b3, you would do:
@ -75,13 +79,13 @@ Note: The `mongodb` package is _not_ the official MongoDB package from MongoDB t
#### Install Tendermint
Install a [recent version of Tendermint][tendermint:releases]. BigchainDB Server requires version 0.22.3 or newer.
Install a [recent version of Tendermint][tendermint:releases]. BigchainDB Server requires version 0.22.8 or newer.
```
sudo apt install -y unzip
wget https://github.com/tendermint/tendermint/releases/download/v0.22.3/tendermint_0.22.3_linux_amd64.zip
unzip tendermint_0.22.3_linux_amd64.zip
rm tendermint_0.22.3_linux_amd64.zip
wget https://github.com/tendermint/tendermint/releases/download/v0.22.8/tendermint_0.22.8_linux_amd64.zip
unzip tendermint_0.22.8_linux_amd64.zip
rm tendermint_0.22.8_linux_amd64.zip
sudo mv tendermint /usr/local/bin
```
@ -125,17 +129,17 @@ The `public_key` is stored in the file `.tendermint/config/priv_validator.json`,
```json
{
"address": "5943A9EF6285791A504918E1D117BC7F6A615C98",
"address": "E22D4340E5A92E4A9AD7C62DA62888929B3921E9",
"pub_key": {
"type": "AC26791624DE60",
"value": "W3tqeMCU3d4SHDKqrwQWTahTW/wpieIAKZQxUxLv6rI="
"type": "tendermint/PubKeyEd25519",
"value": "P+aweH73Hii8RyCmNWbwPsa9o4inq3I+0fSfprVkZa0="
},
"last_height": 0,
"last_round": 0,
"last_height": "0",
"last_round": "0",
"last_step": 0,
"priv_key": {
"type": "954568A3288910",
"value": "3sv9aExgME6MMjx0JoKVy7KtED8PBiPcyAgsYmVizslbe2p4wJTd3hIcMqqvBBZNqFNb/CmJ4gAplDFTEu/qsg=="
"type": "tendermint/PrivKeyEd25519",
"value": "AHBiZXdZhkVZoPUAiMzClxhl0VvUp7Xl3YT6GvCc93A/5rB4fvceKLxHIKY1ZvA+xr2jiKercj7R9J+mtWRlrQ=="
}
}
```
@ -162,6 +166,23 @@ At this point the Coordinator should have received the data from all the Members
{
"genesis_time":"0001-01-01T00:00:00Z",
"chain_id":"test-chain-la6HSr",
"consensus_params":{
"block_size_params":{
"max_bytes":"22020096",
"max_txs":"10000",
"max_gas":"-1"
},
"tx_size_params":{
"max_bytes":"10240",
"max_gas":"-1"
},
"block_gossip_params":{
"block_part_size_bytes":"65536"
},
"evidence_params":{
"max_age":"100000"
}
},
"validators":[
{
"pub_key":{
@ -180,7 +201,10 @@ At this point the Coordinator should have received the data from all the Members
"name":"<Member 2 name>"
},
{
"...": { },
"...":{
},
},
{
"pub_key":{
@ -195,6 +219,8 @@ At this point the Coordinator should have received the data from all the Members
}
```
**Note:** `consensus_params` in the `genesis.json` are default values for Tendermint consensus.
The new `genesis.json` file contains the data that describes the Network. The key `name` is the Member's moniker; it can be any valid string, but put something human-readable like `"Alice's Node Shop"`.
At this point, the Coordinator must share the new `genesis.json` file with all Members.
@ -220,7 +246,7 @@ persistent_peers = "<Member 1 node id>@<Member 1 hostname>:26656,\
<Member N node id>@<Member N hostname>:26656,"
```
## Member: Start MongoDB, BigchainDB and Tendermint
## Member: Start MongoDB
If you installed MongoDB using `sudo apt install mongodb`, then MongoDB should already be running in the background. You can check using `systemctl status mongodb`.
@ -228,6 +254,10 @@ If MongoDB isn't running, then you can start it using the command `mongod`, but
If you installed MongoDB using `sudo apt install mongodb`, then a MongoDB startup script should already be installed (so MongoDB will start automatically when the machine is restarted). Otherwise, you should install a startup script for MongoDB.
## Member: Start BigchainDB and Tendermint
If you want to use a process manager, jump to the [next section](member-start-bigchaindb-and-tendermint-using-monit).
To start BigchainDB, one uses the command `bigchaindb start` but that will run it in the foreground. If you want to run it in the background (so it will continue running after you logout), you can use `nohup`, `tmux`, or `screen`. For example, `nohup bigchaindb start 2>&1 > bigchaindb.log &`
The _recommended_ approach is to create a startup script for BigchainDB, so it will start right after the boot of the operating system. (As mentioned earlier, MongoDB should already have a startup script.)
@ -240,6 +270,45 @@ Note: We'll share some example startup scripts in the future. This document is a
If you followed the recommended approach and created startup scripts for BigchainDB and Tendermint, then you can reboot the machine now. MongoDB, BigchainDB and Tendermint should all start.
### Member: Start BigchainDB and Tendermint using Monit
This section describes how to manage the BigchainDB and Tendermint processes using [Monit][monit] - a small open-source utility for managing and monitoring Unix processes.
This section assumes that you followed the guide down to the [start MongoDB section](#member-start-mongodb) inclusive.
Install Monit:
```
sudo apt install monit
```
If you installed the `bigchaindb` Python package, you should have the `bigchaindb-monit-config` script in your `PATH` now.
Run the script:
```
bigchaindb-monit-config
```
The script builds a configuration file for Monit.
Run Monit as a daemon, instructing it to wake up every second to check on processes:
```
monit -d 1
```
It will run the processes and restart them when they crash. If the root `bigchaindb_` process crashes, Monit will also restart the Tendermint process.
Check the status by running `monit status` or `monit summary`.
By default, it will collect program logs into the `~/.bigchaindb-monit/logs` folder.
Consult `monit -h` or [the Monit documentation][monit-manual] to learn more about the capabilities you've just gotten a taste of.
Check `bigchaindb-monit-config -h` if you want to arrange a different folder for logs or some of the Monit internal artifacts.
## How Others Can Access Your Node
If you followed the above instructions, then your node should be publicly-accessible with BigchainDB Root URL `http://hostname:9984` (where hostname is something like `bdb7.canada.vmsareus.net` or `17.122.200.76`). That is, anyone can interact with your node using the [BigchainDB HTTP API](http-client-server-api.html) exposed at that address. The most common way to do that is to use one of the [BigchainDB Drivers](./drivers-clients/index.html).
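For a quick reachability check from another machine, something like the following minimal sketch can be used; it assumes Python 3 with the `requests` package installed, and `bdb7.canada.vmsareus.net` stands in for your node's hostname.
```python
# Check that a BigchainDB node's HTTP API is reachable (hostname is a placeholder).
import requests

root = requests.get('http://bdb7.canada.vmsareus.net:9984/')
root.raise_for_status()
print(root.json())   # a small JSON document describing the node and its API versions

api = requests.get('http://bdb7.canada.vmsareus.net:9984/api/v1/')
print(api.json())    # the list of API endpoints (transactions, outputs, assets, ...)
```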
@ -265,6 +334,48 @@ If you want to refresh your node back to a fresh empty state, then your best bet
- reset Tendermint using `tendermint unsafe_reset_all`
- delete the directory `$HOME/.tendermint`
## Shutting down BigchainDB
If you want to stop/kill BigchainDB, you can do so by sending `SIGINT`, `SIGQUIT` or `SIGTERM` to the running BigchainDB
process(es), depending on how you started BigchainDB, i.e. in the foreground or background. For example, if you started BigchainDB in the background as mentioned above in the guide:
```bash
$ nohup bigchaindb start 2>&1 > bigchaindb.log &
$ # Check the PID of the main BigchainDB process
$ ps -ef | grep bigchaindb
<user> *<pid> <ppid> <C> <STIME> <tty> <time> bigchaindb
<user> <pid> <ppid>* <C> <STIME> <tty> <time> gunicorn: master [bigchaindb_gunicorn]
<user> <pid> <ppid>* <C> <STIME> <tty> <time> bigchaindb_ws
<user> <pid> <ppid>* <C> <STIME> <tty> <time> bigchaindb_ws_to_tendermint
<user> <pid> <ppid>* <C> <STIME> <tty> <time> bigchaindb_exchange
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
<user> <pid> <ppid> <C> <STIME> <tty> <time> gunicorn: worker [bigchaindb_gunicorn]
...
$ # Send any of the above mentioned signals to the parent/root process(marked with `*` for clarity)
# Sending SIGINT
$ kill -2 <bigchaindb_parent_pid>
$ # OR
# Sending SIGTERM
$ kill -15 <bigchaindb_parent_pid>
$ # OR
# Sending SIGQUIT
$ kill -3 <bigchaindb_parent_pid>
# If you want to kill all the processes by name yourself
$ pgrep bigchaindb | xargs kill -9
```
If you started BigchainDB in the foreground, a `Ctrl + C` or `Ctrl + Z` would shut down BigchainDB.
## Member: Dynamically Add a New Member to the Network
TBD.
@ -273,3 +384,5 @@ TBD.
[bdb:software]: https://github.com/bigchaindb/bigchaindb/
[bdb:pypi]: https://pypi.org/project/BigchainDB/#history
[tendermint:releases]: https://github.com/tendermint/tendermint/releases
[monit]: https://www.mmonit.com/monit
[monit-manual]: https://mmonit.com/monit/documentation/monit.html

View File

@ -154,7 +154,7 @@ spec:
timeoutSeconds: 15
# BigchainDB container
- name: bigchaindb
image: bigchaindb/bigchaindb:2.0.0-beta3
image: bigchaindb/bigchaindb:2.0.0-beta5
imagePullPolicy: Always
args:
- start

View File

@ -1,4 +1,4 @@
FROM tendermint/tendermint:0.22.3
FROM tendermint/tendermint:0.22.8
LABEL maintainer "dev@bigchaindb.com"
WORKDIR /
USER root

View File

@ -34,7 +34,7 @@ spec:
terminationGracePeriodSeconds: 10
containers:
- name: bigchaindb
image: bigchaindb/bigchaindb:2.0.0-beta3
image: bigchaindb/bigchaindb:2.0.0-beta5
imagePullPolicy: Always
args:
- start

View File

@ -15,13 +15,11 @@ The derived files (`nginx.conf.template` and `nginx.lua.template`), along with
the other files in this directory, are _also_ licensed under an MIT License,
the text of which can be found below.
## Documentation Licenses
# Documentation Licenses
The documentation in this directory is licensed under a Creative Commons Attribution-ShareAlike
The documentation in this directory is licensed under a Creative Commons Attribution
4.0 International license, the full text of which can be found at
[http://creativecommons.org/licenses/by-sa/4.0/legalcode](http://creativecommons.org/licenses/by-sa/4.0/legalcode).
[http://creativecommons.org/licenses/by/4.0/legalcode](http://creativecommons.org/licenses/by/4.0/legalcode).
<hr>
@ -47,7 +45,6 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
<hr>
The MIT License
@ -71,4 +68,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

View File

@ -1,4 +1,4 @@
ARG tm_version=0.22.3
ARG tm_version=0.22.8
FROM tendermint/tendermint:${tm_version}
LABEL maintainer "dev@bigchaindb.com"
WORKDIR /

14
pkg/scripts/all-in-one.bash Executable file
View File

@ -0,0 +1,14 @@
#!/bin/bash
# MongoDB configuration
[ "$(stat -c %U /data/db)" = mongodb ] || chown -R mongodb /data/db
# BigchainDB configuration
bigchaindb-monit-config
nohup mongod > "$HOME/.bigchaindb-monit/logs/mongodb_log_$(date +%Y%m%d_%H%M%S)" 2>&1 &
# Tendermint configuration
tendermint init
monit -d 5 -I -B

View File

@ -0,0 +1,173 @@
#!/bin/bash
set -o nounset
# Check if directory for monit logs exists
if [ ! -d "$HOME/.bigchaindb-monit" ]; then
mkdir -p "$HOME/.bigchaindb-monit"
fi
monit_pid_path=${MONIT_PID_PATH:=$HOME/.bigchaindb-monit/monit_processes}
monit_script_path=${MONIT_SCRIPT_PATH:=$HOME/.bigchaindb-monit/monit_script}
monit_log_path=${MONIT_LOG_PATH:=$HOME/.bigchaindb-monit/logs}
monitrc_path=${MONITRC_PATH:=$HOME/.monitrc}
function usage() {
cat <<EOM
Usage: ${0##*/} [-h]
Configure Monit for BigchainDB and Tendermint process management.
ENV[MONIT_PID_PATH] || --monit-pid-path PATH
Absolute path to the directory where the program's pid-file will reside.
The pid-file contains the ID(s) of the process(es). (default: ${monit_pid_path})
ENV[MONIT_SCRIPT_PATH] || --monit-script-path PATH
Absolute path to the directory where the executable program or
script is present. (default: ${monit_script_path})
ENV[MONIT_LOG_PATH] || --monit-log-path PATH
Absolute path to the directory where all the logs for processes
monitored by Monit are stored. (default: ${monit_log_path})
ENV[MONITRC_PATH] || --monitrc-path PATH
Absolute path to the monit control file (monitrc). (default: ${monitrc_path})
-h|--help
Show this help and exit.
EOM
}
while [[ $# -gt 0 ]]; do
arg="$1"
case $arg in
--monit-pid-path)
monit_pid_path="$2"
shift
;;
--monit-script-path)
monit_script_path="$2"
shift
;;
--monit-log-path)
monit_log_path="$2"
shift
;;
--monitrc-path)
monitrc_path="$2"
shift
;;
-h|--help)
usage
exit
;;
*)
echo "Unknown option: $1"
usage
exit 1
;;
esac
shift
done
# Check if directory for monit logs exists
if [ ! -d "$monit_log_path" ]; then
mkdir -p "$monit_log_path"
fi
# Check if directory for monit pid files exists
if [ ! -d "$monit_pid_path" ]; then
mkdir -p "$monit_pid_path"
fi
cat >${monit_script_path} <<EOF
#!/bin/bash
case \$1 in
start_bigchaindb)
pushd \$4
nohup bigchaindb -l DEBUG start >> \$3/bigchaindb.out.log 2>> \$3/bigchaindb.err.log &
echo \$! > \$2
popd
;;
stop_bigchaindb)
kill -2 \`cat \$2\`
rm -f \$2
;;
start_tendermint)
pushd \$4
nohup tendermint node --consensus.create_empty_blocks=false >> \$3/tendermint.out.log 2>> \$3/tendermint.err.log &
echo \$! > \$2
popd
;;
stop_tendermint)
kill -2 \`cat \$2\`
rm -f \$2
;;
esac
exit 0
EOF
chmod +x ${monit_script_path}
# Handling overwriting of control file interactively
if [ -f "$monitrc_path" ]; then
echo "$monitrc_path already exists."
read -p "Overwrite[Y]? " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
echo "Overriding $monitrc_path/.monitrc"
else
read -p "Enter absolute path to store Monit control file: " monitrc_path
eval monitrc_path="$monitrc_path"
if [ ! -d "$(dirname $monitrc_path)" ]; then
echo "Failed to save monit control file '$monitrc_path': No such file or directory."
exit 1
fi
fi
fi
# configure monitrc
cat >${monitrc_path} <<EOF
set httpd
port 2812
allow localhost
check process bigchaindb
with pidfile ${monit_pid_path}/bigchaindb.pid
start program "${monit_script_path} start_bigchaindb $monit_pid_path/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
restart program "${monit_script_path} start_bigchaindb $monit_pid_path/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
stop program "${monit_script_path} stop_bigchaindb $monit_pid_path/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
check process tendermint
with pidfile ${monit_pid_path}/tendermint.pid
start program "${monit_script_path} start_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
restart program "${monit_script_path} start_bigchaindb ${monit_pid_path}/bigchaindb.pid ${monit_log_path} ${monit_log_path}"
stop program "${monit_script_path} stop_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
depends on bigchaindb
EOF
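# The control file above has Monit track the pid-files written by the helper
# script; 'depends on bigchaindb' ties the Tendermint process to BigchainDB so
# the two are stopped and restarted together.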
# Setting permissions for control file
chmod 0700 ${monitrc_path}
echo -e "BigchainDB process manager configured!"
set -o errexit

View File

@ -11,7 +11,7 @@ stack_repo=${STACK_REPO:="bigchaindb/bigchaindb"}
stack_size=${STACK_SIZE:=4}
stack_type=${STACK_TYPE:="docker"}
stack_type_provider=${STACK_TYPE_PROVIDER:=""}
tm_version=${TM_VERSION:="0.22.3"}
tm_version=${TM_VERSION:="0.22.8"}
mongo_version=${MONGO_VERSION:="3.6"}
stack_vm_memory=${STACK_VM_MEMORY:=2048}
stack_vm_cpus=${STACK_VM_CPUS:=2}

View File

@ -11,7 +11,7 @@ stack_repo=${STACK_REPO:="bigchaindb/bigchaindb"}
stack_size=${STACK_SIZE:=4}
stack_type=${STACK_TYPE:="docker"}
stack_type_provider=${STACK_TYPE_PROVIDER:=""}
tm_version=${TM_VERSION:="0.22.3"}
tm_version=${TM_VERSION:="0.22.8"}
mongo_version=${MONGO_VERSION:="3.6"}
stack_vm_memory=${STACK_VM_MEMORY:=2048}
stack_vm_cpus=${STACK_VM_CPUS:=2}

View File

@ -56,7 +56,7 @@ tests_require = [
'flake8-quotes==0.8.1',
'hypothesis~=3.18.5',
'hypothesis-regex',
'pylint',
# Removed pylint because its GPL license isn't Apache2-compatible
'pytest>=3.0.0',
'pytest-cov>=2.2.1',
'pytest-mock',
@ -85,7 +85,7 @@ install_requires = [
'gunicorn~=19.0',
'jsonschema~=2.5.1',
'pyyaml~=3.12',
'aiohttp~=2.3',
'aiohttp~=3.0',
'python-rapidjson-schema==0.1.1',
'bigchaindb-abci==0.5.1',
'setproctitle~=1.1.0',
@ -128,6 +128,8 @@ setup(
packages=find_packages(exclude=['tests*']),
scripts=['pkg/scripts/bigchaindb-monit-config'],
entry_points={
'console_scripts': [
'bigchaindb=bigchaindb.commands.bigchaindb:main'

View File

@ -12,7 +12,7 @@ def test_asset_transfer(b, signed_create_tx, user_pk, user_sk):
signed_create_tx.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([signed_create_tx, tx_transfer])
b.store_bulk_transactions([signed_create_tx])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert tx_transfer_signed.asset['id'] == signed_create_tx.id
@ -27,7 +27,7 @@ def test_validate_transfer_asset_id_mismatch(b, signed_create_tx, user_pk, user_
tx_transfer.asset['id'] = 'a' * 64
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([signed_create_tx, tx_transfer_signed])
b.store_bulk_transactions([signed_create_tx])
with pytest.raises(AssetIdMismatch):
tx_transfer_signed.validate(b)

View File

@ -1,6 +1,8 @@
import pytest
import random
from bigchaindb.common.exceptions import DoubleSpend
pytestmark = pytest.mark.tendermint
@ -127,7 +129,7 @@ def test_single_in_single_own_single_out_single_own_transfer(alice, b, user_pk,
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b)
assert len(tx_transfer_signed.outputs) == 1
@ -154,7 +156,7 @@ def test_single_in_single_own_multiple_out_single_own_transfer(alice, b, user_pk
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 2
@ -182,7 +184,7 @@ def test_single_in_single_own_single_out_multiple_own_transfer(alice, b, user_pk
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 1
@ -194,6 +196,10 @@ def test_single_in_single_own_single_out_multiple_own_transfer(alice, b, user_pk
assert len(tx_transfer_signed.inputs) == 1
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
# TRANSFER divisible asset
# Single input
@ -215,7 +221,7 @@ def test_single_in_single_own_multiple_out_mix_own_transfer(alice, b, user_pk,
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 2
@ -228,6 +234,10 @@ def test_single_in_single_own_multiple_out_mix_own_transfer(alice, b, user_pk,
assert len(tx_transfer_signed.inputs) == 1
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
# TRANSFER divisible asset
# Single input
@ -249,7 +259,7 @@ def test_single_in_multiple_own_single_out_single_own_transfer(alice, b, user_pk
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 1
@ -260,6 +270,10 @@ def test_single_in_multiple_own_single_out_single_own_transfer(alice, b, user_pk
assert 'subconditions' in ffill
assert len(ffill['subconditions']) == 2
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
# TRANSFER divisible asset
# Multiple inputs
@ -280,13 +294,17 @@ def test_multiple_in_single_own_single_out_single_own_transfer(alice, b, user_pk
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b)
assert len(tx_transfer_signed.outputs) == 1
assert tx_transfer_signed.outputs[0].amount == 100
assert len(tx_transfer_signed.inputs) == 2
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
# TRANSFER divisible asset
# Multiple inputs
@ -309,9 +327,9 @@ def test_multiple_in_multiple_own_single_out_single_own_transfer(alice, b, user_
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b)
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 1
assert tx_transfer_signed.outputs[0].amount == 100
assert len(tx_transfer_signed.inputs) == 2
@ -323,6 +341,10 @@ def test_multiple_in_multiple_own_single_out_single_own_transfer(alice, b, user_
assert len(ffill_fid0['subconditions']) == 2
assert len(ffill_fid1['subconditions']) == 2
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
# TRANSFER divisible asset
# Multiple inputs
@ -345,7 +367,7 @@ def test_muiltiple_in_mix_own_multiple_out_single_own_transfer(alice, b, user_pk
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 1
@ -358,6 +380,10 @@ def test_muiltiple_in_mix_own_multiple_out_single_own_transfer(alice, b, user_pk
assert 'subconditions' in ffill_fid1
assert len(ffill_fid1['subconditions']) == 2
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
# TRANSFER divisible asset
# Multiple inputs
@ -382,7 +408,7 @@ def test_muiltiple_in_mix_own_multiple_out_mix_own_transfer(alice, b, user_pk,
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([alice.private_key, user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 2
@ -402,6 +428,10 @@ def test_muiltiple_in_mix_own_multiple_out_mix_own_transfer(alice, b, user_pk,
assert 'subconditions' in ffill_fid1
assert len(ffill_fid1['subconditions']) == 2
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
# TRANSFER divisible asset
# Multiple inputs from different transactions
@ -436,7 +466,7 @@ def test_multiple_in_different_transactions(alice, b, user_pk, user_sk):
asset_id=tx_create.id)
tx_transfer2_signed = tx_transfer2.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer1_signed, tx_transfer2_signed])
b.store_bulk_transactions([tx_create_signed, tx_transfer1_signed])
assert tx_transfer2_signed.validate(b) == tx_transfer2_signed
assert len(tx_transfer2_signed.outputs) == 1
@ -501,10 +531,14 @@ def test_threshold_same_public_key(alice, b, user_pk, user_sk):
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk, user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
def test_sum_amount(alice, b, user_pk, user_sk):
from bigchaindb.models import Transaction
@ -520,12 +554,16 @@ def test_sum_amount(alice, b, user_pk, user_sk):
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 1
assert tx_transfer_signed.outputs[0].amount == 3
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)
def test_divide(alice, b, user_pk, user_sk):
from bigchaindb.models import Transaction
@ -541,9 +579,13 @@ def test_divide(alice, b, user_pk, user_sk):
asset_id=tx_create.id)
tx_transfer_signed = tx_transfer.sign([user_sk])
b.store_bulk_transactions([tx_create_signed, tx_transfer_signed])
b.store_bulk_transactions([tx_create_signed])
assert tx_transfer_signed.validate(b) == tx_transfer_signed
assert len(tx_transfer_signed.outputs) == 3
for output in tx_transfer_signed.outputs:
assert output.amount == 1
b.store_bulk_transactions([tx_transfer_signed])
with pytest.raises(DoubleSpend):
tx_transfer_signed.validate(b)

View File

@ -370,22 +370,23 @@ def test_get_pre_commit_state(db_context):
assert resp == state._asdict()
def test_store_validator_update():
def test_validator_update():
from bigchaindb.backend import connect, query
from bigchaindb.backend.query import VALIDATOR_UPDATE_ID
from bigchaindb.common.exceptions import MultipleValidatorOperationError
conn = connect()
validator_update = {'validator': {'key': 'value'},
'update_id': VALIDATOR_UPDATE_ID}
query.store_validator_update(conn, deepcopy(validator_update))
def gen_validator_update(height):
return {'data': 'somedata', 'height': height}
with pytest.raises(MultipleValidatorOperationError):
query.store_validator_update(conn, deepcopy(validator_update))
for i in range(1, 100, 10):
value = gen_validator_update(i)
query.store_validator_set(conn, value)
resp = query.get_validator_update(conn, VALIDATOR_UPDATE_ID)
v1 = query.get_validator_set(conn, 8)
assert v1['height'] == 1
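# get_validator_set(conn, h) returns the most recently stored set with height <= h
# (the latest stored set when no height is given)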
assert resp == validator_update
assert query.delete_validator_update(conn, VALIDATOR_UPDATE_ID)
assert not query.get_validator_update(conn, VALIDATOR_UPDATE_ID)
v41 = query.get_validator_set(conn, 50)
assert v41['height'] == 41
v91 = query.get_validator_set(conn)
assert v91['height'] == 91

View File

@ -40,7 +40,7 @@ def test_init_creates_db_tables_and_indexes():
assert set(indexes) == {'_id_', 'pre_commit_id'}
indexes = conn.conn[dbname]['validators'].index_information().keys()
assert set(indexes) == {'_id_', 'update_id'}
assert set(indexes) == {'_id_', 'height'}
def test_init_database_fails_if_db_exists():

View File

@ -341,6 +341,7 @@ class MockResponse():
return {'result': {'latest_block_height': self.height}}
@pytest.mark.skip
@patch('bigchaindb.config_utils.autoconfigure')
@patch('bigchaindb.backend.query.store_validator_update')
@pytest.mark.tendermint

View File

@ -11,7 +11,6 @@ USER2_PUBLIC_KEY = 'GDxwMFbwdATkQELZbMfW8bd9hbNYMZLyVXA3nur2aNbE'
USER3_PRIVATE_KEY = '4rNQFzWQbVwuTiDVxwuFMvLG5zd8AhrQKCtVovBvcYsB'
USER3_PUBLIC_KEY = 'Gbrg7JtxdjedQRmr81ZZbh1BozS7fBW88ZyxNDy7WLNC'
CC_FULFILLMENT_URI = (
'pGSAINdamAGCsQq31Uv-08lkBzoO4XLz2qYjJa8CGmj3B1EagUDlVkMAw2CscpCG4syAboKKh'
'Id_Hrjl2XTYc-BlIkkBVV-4ghWQozusxh45cBz5tGvSW_XwWVu-JGVRQUOOehAL'
@ -26,10 +25,6 @@ ASSET_DEFINITION = {
}
}
ASSET_LINK = {
'id': 'a' * 64
}
DATA = {
'msg': 'Hello BigchainDB!'
}
@ -104,12 +99,6 @@ def user_input(user_Ed25519, user_pub):
return Input(user_Ed25519, [user_pub])
@pytest.fixture
def user2_input(user2_Ed25519, user2_pub):
from bigchaindb.common.transaction import Input
return Input(user2_Ed25519, [user2_pub])
@pytest.fixture
def user_user2_threshold_output(user_user2_threshold, user_pub, user2_pub):
from bigchaindb.common.transaction import Output
@ -139,11 +128,6 @@ def asset_definition():
return ASSET_DEFINITION
@pytest.fixture
def asset_link():
return ASSET_LINK
@pytest.fixture
def data():
return DATA
@ -200,7 +184,7 @@ def dummy_transaction():
},
'public_keys': [58 * 'b']
}],
'version': '1.0'
'version': '2.0'
}
@ -271,38 +255,6 @@ def fulfilled_transaction():
}
@pytest.fixture
def fulfilled_and_hashed_transaction():
return {
'asset': {
'data': {
'msg': 'Hello BigchainDB!',
}
},
'id': '7a7c827cf4ef7985f08f4e9d16f5ffc58ca4e82271921dfbed32e70cb462485f',
'inputs': [{
'fulfillment': ('pGSAIP_2P1Juh-94sD3uno1lxMPd9EkIalRo7QB014pT6dD9g'
'UANRNxasDy1Dfg9C2Fk4UgHdYFsJzItVYi5JJ_vWc6rKltn0k'
'jagynI0xfyR6X9NhzccTt5oiNH9mThEb4QmagN'),
'fulfills': None,
'owners_before': ['JEAkEJqLbbgDRAtMm8YAjGp759Aq2qTn9eaEHUj2XePE']
}],
'metadata': None,
'operation': 'CREATE',
'outputs': [{
'amount': '1',
'condition': {
'details': {
'public_key': 'JEAkEJqLbbgDRAtMm8YAjGp759Aq2qTn9eaEHUj2XePE',
'type': 'ed25519-sha-256'
},
'uri': 'ni:///sha-256;49C5UWNODwtcINxLgLc90bMCFqCymFYONGEmV4a0sG4?fpt=ed25519-sha-256&cost=131072'},
'public_keys': ['JEAkEJqLbbgDRAtMm8YAjGp759Aq2qTn9eaEHUj2XePE']
}],
'version': '1.0'
}
# TODO For reviewers: Pick which approach you like best: parametrized or not?
@pytest.fixture(params=(
{'id': None,

View File

@ -4,6 +4,8 @@ properties related to validation.
from unittest.mock import patch
import pytest
from hypothesis import given
from hypothesis_regex import regex
from pytest import raises
@ -19,9 +21,13 @@ UNSUPPORTED_CRYPTOCONDITION_TYPES = (
'preimage-sha-256', 'prefix-sha-256', 'rsa-sha-256')
pytestmark = pytest.mark.tendermint
################################################################################
# Test of schema utils
def _test_additionalproperties(node, path=''):
"""Validate that each object node has additionalProperties set, so that
objects with junk keys do not pass as valid.

View File

@ -17,8 +17,12 @@ from pymongo import MongoClient
from bigchaindb.common import crypto
from bigchaindb.log import setup_logging
from bigchaindb.tendermint_utils import key_from_base64
from bigchaindb.common.crypto import (key_pair_from_ed25519_key,
public_key_from_ed25519_key)
from bigchaindb.lib import Block
TEST_DB_NAME = 'bigchain_test'
USER2_SK, USER2_PK = crypto.generate_key_pair()
@ -617,3 +621,52 @@ def utxoset(dummy_unspent_outputs, utxo_collection):
assert res.acknowledged
assert len(res.inserted_ids) == 3
return dummy_unspent_outputs, utxo_collection
@pytest.fixture
def network_validators(node_keys):
validator_pub_power = {}
voting_power = [8, 10, 7, 9]
for pub, priv in node_keys.items():
validator_pub_power[pub] = voting_power.pop()
return validator_pub_power
@pytest.fixture
def network_validators58(network_validators):
network_validators_base58 = {}
for p, v in network_validators.items():
p = public_key_from_ed25519_key(key_from_base64(p))
network_validators_base58[p] = v
return network_validators_base58
@pytest.fixture
def node_key(node_keys):
(pub, priv) = list(node_keys.items())[0]
return key_pair_from_ed25519_key(key_from_base64(priv))
@pytest.fixture
def ed25519_node_keys(node_keys):
node_keys_dict = {}
for pub, priv in node_keys.items():
key = key_pair_from_ed25519_key(key_from_base64(priv))
node_keys_dict[key.public_key] = key
return node_keys_dict
@pytest.fixture(scope='session')
def node_keys():
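# Static base64-encoded Ed25519 key pairs (public key -> private key) used as
# the validator identities for the whole test session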
return {'zL/DasvKulXZzhSNFwx4cLRXKkSM9GPK7Y0nZ4FEylM=':
'cM5oW4J0zmUSZ/+QRoRlincvgCwR0pEjFoY//ZnnjD3Mv8Nqy8q6VdnOFI0XDHhwtFcqRIz0Y8rtjSdngUTKUw==',
'GIijU7GBcVyiVUcB0GwWZbxCxdk2xV6pxdvL24s/AqM=':
'mdz7IjP6mGXs6+ebgGJkn7kTXByUeeGhV+9aVthLuEAYiKNTsYFxXKJVRwHQbBZlvELF2TbFXqnF28vbiz8Cow==',
'JbfwrLvCVIwOPm8tj8936ki7IYbmGHjPiKb6nAZegRA=':
'83VINXdj2ynOHuhvSZz5tGuOE5oYzIi0mEximkX1KYMlt/Csu8JUjA4+by2Pz3fqSLshhuYYeM+IpvqcBl6BEA==',
'PecJ58SaNRsWJZodDmqjpCWqG6btdwXFHLyE40RYlYM=':
'uz8bYgoL4rHErWT1gjjrnA+W7bgD/uDQWSRKDmC8otc95wnnxJo1GxYlmh0OaqOkJaobpu13BcUcvITjRFiVgw=='}

View File

@ -1,4 +1,7 @@
import pytest
import codecs
import abci.types_pb2 as types
@pytest.fixture
@ -10,3 +13,13 @@ def b():
@pytest.fixture
def validator_pub_key():
return 'B0E42D2589A455EAD339A035D6CE1C8C3E25863F268120AA0162AD7D003A4014'
@pytest.fixture
def init_chain_request():
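# Build a single-validator RequestInitChain: the address is hex-decoded and the
# Ed25519 public key base64-decoded before being wrapped in the abci protobuf types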
addr = codecs.decode(b'9FD479C869C7D7E7605BF99293457AA5D80C3033', 'hex')
pk = codecs.decode(b'VAgFZtYw8bNR5TMZHFOBDWk9cAmEu3/c6JgRBmddbbI=', 'base64')
val_a = types.Validator(address=addr, power=10,
pub_key=types.PubKey(type='ed25519', data=pk))
return types.RequestInitChain(validators=[val_a])

View File

@ -50,7 +50,7 @@ def test_check_tx__unsigned_create_is_error(b):
@pytest.mark.bdb
def test_deliver_tx__valid_create_updates_db(b):
def test_deliver_tx__valid_create_updates_db(b, init_chain_request):
from bigchaindb import App
from bigchaindb.models import Transaction
from bigchaindb.common.crypto import generate_key_pair
@ -64,8 +64,9 @@ def test_deliver_tx__valid_create_updates_db(b):
app = App(b)
app.init_chain(init_chain_request)
begin_block = RequestBeginBlock()
app.init_chain(['ignore'])
app.begin_block(begin_block)
result = app.deliver_tx(encode_tx_to_bytes(tx))
@ -83,7 +84,7 @@ def test_deliver_tx__valid_create_updates_db(b):
# next(unspent_outputs)
def test_deliver_tx__double_spend_fails(b):
def test_deliver_tx__double_spend_fails(b, init_chain_request):
from bigchaindb import App
from bigchaindb.models import Transaction
from bigchaindb.common.crypto import generate_key_pair
@ -96,7 +97,7 @@ def test_deliver_tx__double_spend_fails(b):
.sign([alice.private_key])
app = App(b)
app.init_chain(['ignore'])
app.init_chain(init_chain_request)
begin_block = RequestBeginBlock()
app.begin_block(begin_block)
@ -112,13 +113,13 @@ def test_deliver_tx__double_spend_fails(b):
assert result.code == CodeTypeError
def test_deliver_transfer_tx__double_spend_fails(b):
def test_deliver_transfer_tx__double_spend_fails(b, init_chain_request):
from bigchaindb import App
from bigchaindb.models import Transaction
from bigchaindb.common.crypto import generate_key_pair
app = App(b)
app.init_chain(['ignore'])
app.init_chain(init_chain_request)
begin_block = RequestBeginBlock()
app.begin_block(begin_block)
@ -156,14 +157,16 @@ def test_deliver_transfer_tx__double_spend_fails(b):
assert result.code == CodeTypeError
def test_end_block_return_validator_updates(b):
# The test below has to be re-written once election conclusion logic has been implemented
@pytest.mark.skip
def test_end_block_return_validator_updates(b, init_chain_request):
from bigchaindb import App
from bigchaindb.backend import query
from bigchaindb.core import encode_validator
from bigchaindb.backend.query import VALIDATOR_UPDATE_ID
app = App(b)
app.init_chain(['ignore'])
app.init_chain(init_chain_request)
begin_block = RequestBeginBlock()
app.begin_block(begin_block)
@ -182,7 +185,7 @@ def test_end_block_return_validator_updates(b):
assert updates == []
def test_store_pre_commit_state_in_end_block(b, alice):
def test_store_pre_commit_state_in_end_block(b, alice, init_chain_request):
from bigchaindb import App
from bigchaindb.backend import query
from bigchaindb.models import Transaction
@ -194,7 +197,7 @@ def test_store_pre_commit_state_in_end_block(b, alice):
.sign([alice.private_key])
app = App(b)
app.init_chain(['ignore'])
app.init_chain(init_chain_request)
begin_block = RequestBeginBlock()
app.begin_block(begin_block)

View File

@ -1,3 +1,5 @@
import codecs
import abci.types_pb2 as types
import json
import pytest
@ -11,7 +13,7 @@ from io import BytesIO
@pytest.mark.tendermint
@pytest.mark.bdb
def test_app(tb):
def test_app(tb, init_chain_request):
from bigchaindb import App
from bigchaindb.tendermint_utils import calculate_hash
from bigchaindb.common.crypto import generate_key_pair
@ -28,12 +30,17 @@ def test_app(tb):
assert res.info.last_block_height == 0
assert not b.get_latest_block()
p.process('init_chain', types.Request(init_chain=types.RequestInitChain()))
p.process('init_chain', types.Request(init_chain=init_chain_request))
block0 = b.get_latest_block()
assert block0
assert block0['height'] == 0
assert block0['app_hash'] == ''
pk = codecs.encode(init_chain_request.validators[0].pub_key.data, 'base64').decode().strip('\n')
[validator] = b.get_validators(height=1)
assert validator['pub_key']['data'] == pk
assert validator['voting_power'] == 10
alice = generate_key_pair()
bob = generate_key_pair()
tx = Transaction.create([alice.public_key],
@ -98,6 +105,7 @@ def test_app(tb):
assert block0['app_hash'] == new_block_hash
@pytest.mark.skip
@pytest.mark.abci
def test_upsert_validator(b, alice):
from bigchaindb.backend.query import VALIDATOR_UPDATE_ID

View File

@ -139,6 +139,7 @@ def test_post_transaction_invalid_mode(b):
b.write_transaction(tx, 'nope')
@pytest.mark.skip
@pytest.mark.bdb
def test_validator_updates(b, validator_pub_key):
from bigchaindb.backend import query
@ -382,8 +383,16 @@ def test_get_spent_transaction_critical_double_spend(b, alice, bob, carol):
asset_id=tx.id)\
.sign([alice.private_key])
same_input_double_spend = Transaction.transfer(tx.to_inputs() + tx.to_inputs(),
[([bob.public_key], 1)],
asset_id=tx.id)\
.sign([alice.private_key])
b.store_bulk_transactions([tx])
with pytest.raises(DoubleSpend):
same_input_double_spend.validate(b)
assert b.get_spent(tx.id, tx_transfer.inputs[0].fulfills.output, [tx_transfer])
with pytest.raises(DoubleSpend):

View File

@ -1,5 +1,4 @@
import copy
import logging
from unittest.mock import mock_open, patch
import pytest
@ -10,13 +9,13 @@ import bigchaindb
ORIGINAL_CONFIG = copy.deepcopy(bigchaindb._config)
pytestmark = pytest.mark.tendermint
@pytest.fixture(scope='function', autouse=True)
def clean_config(monkeypatch, request):
import bigchaindb
original_config = copy.deepcopy(ORIGINAL_CONFIG)
backend = request.config.getoption('--database-backend')
if backend == 'mongodb-ssl':
backend = 'mongodb'
original_config['database'] = bigchaindb._database_map[backend]
monkeypatch.setattr('bigchaindb.config', original_config)
@ -31,21 +30,6 @@ def test_bigchain_instance_is_initialized_when_conf_provided(request):
assert bigchaindb.config['CONFIGURED'] is True
def test_bigchain_instance_raises_when_not_configured(request, monkeypatch):
import bigchaindb
from bigchaindb import config_utils
from bigchaindb.common import exceptions
from bigchaindb import BigchainDB
assert 'CONFIGURED' not in bigchaindb.config
# We need to disable ``bigchaindb.config_utils.autoconfigure`` to avoid reading
# from existing configurations
monkeypatch.setattr(config_utils, 'autoconfigure', lambda: 0)
with pytest.raises(exceptions.ConfigurationError):
BigchainDB()
def test_load_consensus_plugin_loads_default_rules_without_name():
from bigchaindb import config_utils
from bigchaindb.consensus import BaseConsensusRules
@ -146,7 +130,7 @@ def test_env_config(monkeypatch):
assert result == expected
def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request, ssl_context):
def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request):
# constants
DATABASE_HOST = 'test-host'
DATABASE_NAME = 'test-dbname'
@ -159,7 +143,6 @@ def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request, ssl_con
WSSERVER_ADVERTISED_SCHEME = 'wss'
WSSERVER_ADVERTISED_HOST = 'a.b.c.d'
WSSERVER_ADVERTISED_PORT = 89
KEYRING = 'pubkey_0:pubkey_1:pubkey_2'
LOG_FILE = '/somewhere/something.log'
file_config = {
@ -171,28 +154,11 @@ def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request, ssl_con
},
}
monkeypatch.setattr('bigchaindb.config_utils.file_config', lambda *args, **kwargs: file_config)
monkeypatch.setattr('bigchaindb.config_utils.file_config',
lambda *args, **kwargs: file_config)
if DATABASE_BACKEND == 'mongodb-ssl':
monkeypatch.setattr('os.environ', {'BIGCHAINDB_DATABASE_NAME': DATABASE_NAME,
'BIGCHAINDB_DATABASE_PORT': str(DATABASE_PORT),
'BIGCHAINDB_DATABASE_BACKEND': 'mongodb',
'BIGCHAINDB_SERVER_BIND': SERVER_BIND,
'BIGCHAINDB_WSSERVER_SCHEME': WSSERVER_SCHEME,
'BIGCHAINDB_WSSERVER_HOST': WSSERVER_HOST,
'BIGCHAINDB_WSSERVER_PORT': WSSERVER_PORT,
'BIGCHAINDB_WSSERVER_ADVERTISED_SCHEME': WSSERVER_ADVERTISED_SCHEME,
'BIGCHAINDB_WSSERVER_ADVERTISED_HOST': WSSERVER_ADVERTISED_HOST,
'BIGCHAINDB_WSSERVER_ADVERTISED_PORT': WSSERVER_ADVERTISED_PORT,
'BIGCHAINDB_KEYRING': KEYRING,
'BIGCHAINDB_LOG_FILE': LOG_FILE,
'BIGCHAINDB_DATABASE_CA_CERT': ssl_context.ca,
'BIGCHAINDB_DATABASE_CRLFILE': ssl_context.crl,
'BIGCHAINDB_DATABASE_CERTFILE': ssl_context.cert,
'BIGCHAINDB_DATABASE_KEYFILE': ssl_context.key,
'BIGCHAINDB_DATABASE_KEYFILE_PASSPHRASE': None})
else:
monkeypatch.setattr('os.environ', {'BIGCHAINDB_DATABASE_NAME': DATABASE_NAME,
monkeypatch.setattr('os.environ', {
'BIGCHAINDB_DATABASE_NAME': DATABASE_NAME,
'BIGCHAINDB_DATABASE_PORT': str(DATABASE_PORT),
'BIGCHAINDB_DATABASE_BACKEND': DATABASE_BACKEND,
'BIGCHAINDB_SERVER_BIND': SERVER_BIND,
@ -202,62 +168,43 @@ def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request, ssl_con
'BIGCHAINDB_WSSERVER_ADVERTISED_SCHEME': WSSERVER_ADVERTISED_SCHEME,
'BIGCHAINDB_WSSERVER_ADVERTISED_HOST': WSSERVER_ADVERTISED_HOST,
'BIGCHAINDB_WSSERVER_ADVERTISED_PORT': WSSERVER_ADVERTISED_PORT,
'BIGCHAINDB_KEYRING': KEYRING,
'BIGCHAINDB_LOG_FILE': LOG_FILE})
'BIGCHAINDB_LOG_FILE': LOG_FILE,
'BIGCHAINDB_DATABASE_CA_CERT': 'ca_cert',
'BIGCHAINDB_DATABASE_CRLFILE': 'crlfile',
'BIGCHAINDB_DATABASE_CERTFILE': 'certfile',
'BIGCHAINDB_DATABASE_KEYFILE': 'keyfile',
'BIGCHAINDB_DATABASE_KEYFILE_PASSPHRASE': 'passphrase',
})
import bigchaindb
from bigchaindb import config_utils
from bigchaindb.log.configs import SUBSCRIBER_LOGGING_CONFIG as log_config
from bigchaindb.log import DEFAULT_LOGGING_CONFIG as log_config
config_utils.autoconfigure()
database_mongodb = {
'backend': 'mongodb',
'backend': 'localmongodb',
'host': DATABASE_HOST,
'port': DATABASE_PORT,
'name': DATABASE_NAME,
'connection_timeout': 5000,
'max_tries': 3,
'replicaset': 'bigchain-rs',
'replicaset': None,
'ssl': False,
'login': None,
'password': None,
'ca_cert': None,
'certfile': None,
'keyfile': None,
'keyfile_passphrase': None,
'crlfile': None
'ca_cert': 'ca_cert',
'certfile': 'certfile',
'keyfile': 'keyfile',
'keyfile_passphrase': 'passphrase',
'crlfile': 'crlfile',
}
database_mongodb_ssl = {
'backend': 'mongodb',
'host': DATABASE_HOST,
'port': DATABASE_PORT,
'name': DATABASE_NAME,
'connection_timeout': 5000,
'max_tries': 3,
'replicaset': 'bigchain-rs',
'ssl': True,
'login': None,
'password': None,
'ca_cert': ssl_context.ca,
'crlfile': ssl_context.crl,
'certfile': ssl_context.cert,
'keyfile': ssl_context.key,
'keyfile_passphrase': None
}
database = {}
if DATABASE_BACKEND == 'mongodb':
database = database_mongodb
elif DATABASE_BACKEND == 'mongodb-ssl':
database = database_mongodb_ssl
assert bigchaindb.config == {
'CONFIGURED': True,
'server': {
'bind': SERVER_BIND,
'loglevel': logging.getLevelName(
log_config['handlers']['console']['level']).lower(),
'loglevel': 'info',
'workers': None,
},
'wsserver': {
@ -268,23 +215,22 @@ def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request, ssl_con
'advertised_host': WSSERVER_ADVERTISED_HOST,
'advertised_port': WSSERVER_ADVERTISED_PORT,
},
'database': database,
'database': database_mongodb,
'tendermint': {
'host': None,
'port': None,
'host': 'localhost',
'port': 26657,
},
'log': {
'file': LOG_FILE,
'level_console': 'debug',
'error_file': log_config['handlers']['errors']['filename'],
'level_console': 'debug',
'level_logfile': logging.getLevelName(
log_config['handlers']['file']['level']).lower(),
'level_logfile': 'info',
'datefmt_console': log_config['formatters']['console']['datefmt'],
'datefmt_logfile': log_config['formatters']['file']['datefmt'],
'fmt_console': log_config['formatters']['console']['format'],
'fmt_logfile': log_config['formatters']['file']['format'],
'granular_levels': {},
'port': 9020
},
}
@ -381,18 +327,3 @@ def test_database_envs(env_name, env_value, config_key, monkeypatch):
expected_config['database'][config_key] = env_value
assert bigchaindb.config == expected_config
def test_database_envs_replicaset(monkeypatch):
# the replica set env is only used if the backend is mongodb
import bigchaindb
monkeypatch.setattr('os.environ', {'BIGCHAINDB_DATABASE_REPLICASET':
'test-replicaset'})
bigchaindb.config['database'] = bigchaindb._database_mongodb
bigchaindb.config_utils.autoconfigure()
expected_config = copy.deepcopy(bigchaindb.config)
expected_config['database']['replicaset'] = 'test-replicaset'
assert bigchaindb.config == expected_config

View File

@ -1,6 +1,5 @@
import pytest
pytestmark = pytest.mark.tendermint
@ -21,6 +20,10 @@ def config(request, monkeypatch):
'connection_timeout': 5000,
'max_tries': 3
},
'tendermint': {
'host': 'localhost',
'port': 26657,
},
'CONFIGURED': True,
}
@ -29,7 +32,6 @@ def config(request, monkeypatch):
return config
@pytest.mark.skipif(reason='will be fixed in another PR')
def test_bigchain_class_default_initialization(config):
from bigchaindb import BigchainDB
from bigchaindb.consensus import BaseConsensusRules
@ -42,8 +44,7 @@ def test_bigchain_class_default_initialization(config):
assert bigchain.consensus == BaseConsensusRules
@pytest.mark.skipif(reason='will be fixed in another PR')
def test_bigchain_class_initialization_with_parameters(config):
def test_bigchain_class_initialization_with_parameters():
from bigchaindb import BigchainDB
from bigchaindb.backend import connect
from bigchaindb.consensus import BaseConsensusRules
@ -54,7 +55,7 @@ def test_bigchain_class_initialization_with_parameters(config):
'name': 'this_is_the_db_name',
}
connection = connect(**init_db_kwargs)
bigchain = BigchainDB(connection=connection, **init_db_kwargs)
bigchain = BigchainDB(connection=connection)
assert bigchain.connection == connection
assert bigchain.connection.host == init_db_kwargs['host']
assert bigchain.connection.port == init_db_kwargs['port']
@ -62,20 +63,6 @@ def test_bigchain_class_initialization_with_parameters(config):
assert bigchain.consensus == BaseConsensusRules
@pytest.mark.skipif(reason='will be fixed in another PR')
def test_get_blocks_status_containing_tx(monkeypatch):
from bigchaindb.backend import query as backend_query
from bigchaindb import BigchainDB
blocks = [
{'id': 1}, {'id': 2}
]
monkeypatch.setattr(backend_query, 'get_blocks_status_from_transaction', lambda x: blocks)
monkeypatch.setattr(BigchainDB, 'block_election_status', lambda x, y, z: BigchainDB.BLOCK_VALID)
bigchain = BigchainDB(public_key='pubkey', private_key='privkey')
with pytest.raises(Exception):
bigchain.get_blocks_status_containing_tx('txid')
@pytest.mark.genesis
def test_get_spent_issue_1271(b, alice, bob, carol):
from bigchaindb.models import Transaction

View File

@ -4,46 +4,27 @@ This test module defines its own fixture which is used by all the tests.
"""
import pytest
pytestmark = pytest.mark.tendermint
@pytest.fixture
def txlist(b, user_pk, user2_pk, user_sk, user2_sk, genesis_block):
def txlist(b, user_pk, user2_pk, user_sk, user2_sk):
from bigchaindb.models import Transaction
prev_block_id = genesis_block.id
# Create first block with CREATE transactions
# Create two CREATE transactions
create1 = Transaction.create([user_pk], [([user2_pk], 6)]) \
.sign([user_sk])
create2 = Transaction.create([user2_pk],
[([user2_pk], 5), ([user_pk], 5)]) \
.sign([user2_sk])
block1 = b.create_block([create1, create2])
b.write_block(block1)
# Create second block with TRANSFER transactions
# Create a TRANSFER transaction
transfer1 = Transaction.transfer(create1.to_inputs(),
[([user_pk], 8)],
create1.id).sign([user2_sk])
block2 = b.create_block([transfer1])
b.write_block(block2)
# Create block with double spend
tx_doublespend = Transaction.transfer(create1.to_inputs(), [([user_pk], 9)],
create1.id).sign([user2_sk])
block_doublespend = b.create_block([tx_doublespend])
b.write_block(block_doublespend)
# Vote on all the blocks
prev_block_id = genesis_block.id
for bid in [block1.id, block2.id]:
vote = b.vote(bid, prev_block_id, True)
prev_block_id = bid
b.write_vote(vote)
# Create undecided block
untx = Transaction.create([user_pk], [([user2_pk], 7)]) \
.sign([user_sk])
block_undecided = b.create_block([untx])
b.write_block(block_undecided)
b.store_bulk_transactions([create1, create2, transfer1])
return type('', (), {
'create1': create1,
@ -54,8 +35,8 @@ def txlist(b, user_pk, user2_pk, user_sk, user2_sk, genesis_block):
@pytest.mark.bdb
def test_get_txlist_by_asset(b, txlist):
res = b.get_transactions_filtered(txlist.create1.id)
assert set(tx.id for tx in res) == set([txlist.transfer1.id,
txlist.create1.id])
assert sorted(set(tx.id for tx in res)) == sorted(
set([txlist.transfer1.id, txlist.create1.id]))
@pytest.mark.bdb

View File

View File

@ -0,0 +1,42 @@
import pytest
from bigchaindb.upsert_validator import ValidatorElection
@pytest.fixture
def b_mock(b, network_validators):
b.get_validators = mock_get_validators(network_validators)
return b
@pytest.fixture
def new_validator():
public_key = '1718D2DBFF00158A0852A17A01C78F4DCF3BA8E4FB7B8586807FAC182A535034'
power = 1
node_id = 'fake_node_id'
return {'public_key': public_key,
'power': power,
'node_id': node_id}
def mock_get_validators(network_validators):
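# Returns a closure that mimics BigchainDB.get_validators() by building a
# Tendermint-style validator list from the public-key -> voting-power mapping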
def validator_set():
validators = []
for public_key, power in network_validators.items():
validators.append({
'pub_key': {'type': 'AC26791624DE60', 'value': public_key},
'voting_power': power
})
return validators
return validator_set
@pytest.fixture
def valid_election(b_mock, node_key, new_validator):
voters = ValidatorElection.recipients(b_mock)
return ValidatorElection.generate([node_key.public_key],
voters,
new_validator, None).sign([node_key.private_key])

View File

@ -0,0 +1,93 @@
import pytest
from bigchaindb.upsert_validator import ValidatorElection
from bigchaindb.common.exceptions import (DuplicateTransaction,
UnequalValidatorSet,
InvalidProposer,
MultipleInputsError,
InvalidPowerChange)
pytestmark = [pytest.mark.tendermint, pytest.mark.bdb]
def test_upsert_validator_valid_election(b_mock, new_validator, node_key):
voters = ValidatorElection.recipients(b_mock)
election = ValidatorElection.generate([node_key.public_key],
voters,
new_validator, None).sign([node_key.private_key])
assert election.validate(b_mock)
def test_upsert_validator_invalid_power_election(b_mock, new_validator, node_key):
voters = ValidatorElection.recipients(b_mock)
new_validator['power'] = 30
election = ValidatorElection.generate([node_key.public_key],
voters,
new_validator, None).sign([node_key.private_key])
with pytest.raises(InvalidPowerChange):
election.validate(b_mock)
def test_upsert_validator_invalid_proposed_election(b_mock, new_validator, node_key):
from bigchaindb.common.crypto import generate_key_pair
alice = generate_key_pair()
voters = ValidatorElection.recipients(b_mock)
election = ValidatorElection.generate([alice.public_key],
voters,
new_validator, None).sign([alice.private_key])
with pytest.raises(InvalidProposer):
election.validate(b_mock)
def test_upsert_validator_invalid_inputs_election(b_mock, new_validator, node_key):
from bigchaindb.common.crypto import generate_key_pair
alice = generate_key_pair()
voters = ValidatorElection.recipients(b_mock)
election = ValidatorElection.generate([node_key.public_key, alice.public_key],
voters,
new_validator, None).sign([node_key.private_key, alice.private_key])
with pytest.raises(MultipleInputsError):
election.validate(b_mock)
def test_upsert_validator_invalid_election(b_mock, new_validator, node_key):
voters = ValidatorElection.recipients(b_mock)
valid_election = ValidatorElection.generate([node_key.public_key],
voters,
new_validator, None).sign([node_key.private_key])
duplicate_election = ValidatorElection.generate([node_key.public_key],
voters,
new_validator, None).sign([node_key.private_key])
with pytest.raises(DuplicateTransaction):
valid_election.validate(b_mock, [duplicate_election])
b_mock.store_bulk_transactions([valid_election])
with pytest.raises(DuplicateTransaction):
duplicate_election.validate(b_mock)
# Try creating an election with incomplete voter set
invalid_election = ValidatorElection.generate([node_key.public_key],
voters[1:],
new_validator, None).sign([node_key.private_key])
with pytest.raises(UnequalValidatorSet):
invalid_election.validate(b_mock)
recipients = ValidatorElection.recipients(b_mock)
altered_recipients = []
for r in recipients:
([r_public_key], voting_power) = r
altered_recipients.append(([r_public_key], voting_power - 1))
# Create a transaction which doesn't enforce the network power
tx_election = ValidatorElection.generate([node_key.public_key],
altered_recipients,
new_validator, None).sign([node_key.private_key])
with pytest.raises(UnequalValidatorSet):
tx_election.validate(b_mock)

View File

@ -0,0 +1,80 @@
import pytest
from bigchaindb.upsert_validator import ValidatorElectionVote
from bigchaindb.common.exceptions import AmountError
pytestmark = [pytest.mark.tendermint, pytest.mark.bdb]
def test_upsert_validator_valid_election_vote(b_mock, valid_election, ed25519_node_keys):
b_mock.store_bulk_transactions([valid_election])
input0 = valid_election.to_inputs()[0]
votes = valid_election.outputs[0].amount
public_key0 = input0.owners_before[0]
key0 = ed25519_node_keys[public_key0]
election_pub_key = ValidatorElectionVote.to_public_key(valid_election.id)
vote = ValidatorElectionVote.generate([input0],
[([election_pub_key], votes)],
election_id=valid_election.id)\
.sign([key0.private_key])
assert vote.validate(b_mock)
def test_upsert_validator_delegate_election_vote(b_mock, valid_election, ed25519_node_keys):
from bigchaindb.common.crypto import generate_key_pair
alice = generate_key_pair()
b_mock.store_bulk_transactions([valid_election])
input0 = valid_election.to_inputs()[0]
votes = valid_election.outputs[0].amount
public_key0 = input0.owners_before[0]
key0 = ed25519_node_keys[public_key0]
delegate_vote = ValidatorElectionVote.generate([input0],
[([alice.public_key], 3), ([key0.public_key], votes-3)],
election_id=valid_election.id)\
.sign([key0.private_key])
assert delegate_vote.validate(b_mock)
b_mock.store_bulk_transactions([delegate_vote])
election_pub_key = ValidatorElectionVote.to_public_key(valid_election.id)
alice_votes = delegate_vote.to_inputs()[0]
alice_casted_vote = ValidatorElectionVote.generate([alice_votes],
[([election_pub_key], 3)],
election_id=valid_election.id)\
.sign([alice.private_key])
assert alice_casted_vote.validate(b_mock)
key0_votes = delegate_vote.to_inputs()[1]
key0_casted_vote = ValidatorElectionVote.generate([key0_votes],
[([election_pub_key], votes-3)],
election_id=valid_election.id)\
.sign([key0.private_key])
assert key0_casted_vote.validate(b_mock)
def test_upsert_validator_invalid_election_vote(b_mock, valid_election, ed25519_node_keys):
b_mock.store_bulk_transactions([valid_election])
input0 = valid_election.to_inputs()[0]
votes = valid_election.outputs[0].amount
public_key0 = input0.owners_before[0]
key0 = ed25519_node_keys[public_key0]
election_pub_key = ValidatorElectionVote.to_public_key(valid_election.id)
vote = ValidatorElectionVote.generate([input0],
[([election_pub_key], votes+1)],
election_id=valid_election.id)\
.sign([key0.private_key])
with pytest.raises(AmountError):
assert vote.validate(b_mock)

View File

@ -402,30 +402,6 @@ def test_transactions_get_list_bad(client):
assert client.get(url).status_code == 400
@pytest.mark.tendermint
def test_return_only_valid_transaction(client):
from bigchaindb import BigchainDB
def get_transaction_patched(status):
def inner(self, tx_id, include_status):
return {}, status
return inner
# NOTE: `get_transaction` only returns a transaction if it's included in an
# UNDECIDED or VALID block, as well as transactions from the backlog.
# As the endpoint uses `get_transaction`, we don't have to test
# against invalid transactions here.
with patch('bigchaindb.BigchainDB.get_transaction',
get_transaction_patched(BigchainDB.TX_UNDECIDED)):
url = '{}{}'.format(TX_ENDPOINT, '123')
assert client.get(url).status_code == 404
with patch('bigchaindb.BigchainDB.get_transaction',
get_transaction_patched(BigchainDB.TX_IN_BACKLOG)):
url = '{}{}'.format(TX_ENDPOINT, '123')
assert client.get(url).status_code == 404
@pytest.mark.tendermint
@patch('requests.post')
@pytest.mark.parametrize('mode', [

View File

@ -1,49 +1,22 @@
import pytest
from requests.exceptions import RequestException
pytestmark = pytest.mark.tendermint
VALIDATORS_ENDPOINT = '/api/v1/validators/'
def test_get_validators_endpoint(b, client, monkeypatch):
def mock_get(uri):
return MockResponse()
monkeypatch.setattr('requests.get', mock_get)
validator_set = [{'address': 'F5426F0980E36E03044F74DD414248D29ABCBDB2',
'pub_key': {'data': '4E2685D9016126864733225BE00F005515200727FBAB1312FC78C8B76831255A',
'type': 'ed25519'},
'voting_power': 10}]
b.store_validator_set(23, validator_set)
res = client.get(VALIDATORS_ENDPOINT)
assert is_validator(res.json[0])
assert res.status_code == 200
def test_get_validators_500_endpoint(b, client, monkeypatch):
def mock_get(uri):
raise RequestException
monkeypatch.setattr('requests.get', mock_get)
with pytest.raises(RequestException):
client.get(VALIDATORS_ENDPOINT)
# Helper
def is_validator(v):
return ('pub_key' in v) and ('voting_power' in v)
class MockResponse():
def json(self):
return {'id': '',
'jsonrpc': '2.0',
'result':
{'block_height': 5,
'validators': [
{'accum': 0,
'address': 'F5426F0980E36E03044F74DD414248D29ABCBDB2',
'pub_key': {'data': '4E2685D9016126864733225BE00F005515200727FBAB1312FC78C8B76831255A',
'type': 'ed25519'},
'voting_power': 10}]}}