Merge branch 'master' into remove-unused-db-reconnect

vrde 2016-11-02 17:01:35 +01:00
commit a12882b51e
No known key found for this signature in database
GPG Key ID: 6581C7C39B3D397D
108 changed files with 2079 additions and 2169 deletions


@ -16,7 +16,9 @@ install:
- pip install -e .[test]
- pip install codecov
before_script: rethinkdb --daemon
before_script:
- flake8 --max-line-length 119 bigchaindb/
- rethinkdb --daemon
script: py.test -n auto -s -v --cov=bigchaindb


@ -15,11 +15,75 @@ For reference, the possible headings are:
* **Notes**
## [0.6.0] - 2016-09-01
Tag name: v0.6.0
## [0.7.0] - 2016-10-28
Tag name: v0.7.0
= commit:
committed:
### Added
- Stale transactions in the `backlog` table now get reassigned to another node (for inclusion in a new block): [Pull Request #359](https://github.com/bigchaindb/bigchaindb/pull/359)
- Many improvements to make the database connection more robust: [Pull Request #623](https://github.com/bigchaindb/bigchaindb/pull/623)
- The new `--dev-allow-temp-keypair` option on `bigchaindb start` will generate a temporary keypair if no keypair is found. The `Dockerfile` was updated to use this. [Pull Request #635](https://github.com/bigchaindb/bigchaindb/pull/635)
- The AWS deployment scripts now allow you to:
- specify the AWS security group as a configuration parameter: [Pull Request #620](https://github.com/bigchaindb/bigchaindb/pull/620)
- tell RethinkDB to bind HTTP to localhost (a more secure setup; now the default in the example config file): [Pull Request #666](https://github.com/bigchaindb/bigchaindb/pull/666)
### Changed
- Integrated the new `Transaction` model. This was a **big** change; 49 files changed. [Pull Request #641](https://github.com/bigchaindb/bigchaindb/pull/641)
- Merged "common" code (used by BigchainDB Server and the Python driver), which used to be in its own repository (`bigchaindb/bigchaindb-common`), into the main `bigchaindb/bigchaindb` repository (this one): [Pull Request #742](https://github.com/bigchaindb/bigchaindb/pull/742)
- Integrated the new digital asset model. This changed the data structure of a transaction and will make it easier to support divisible assets in the future. [Pull Request #680](https://github.com/bigchaindb/bigchaindb/pull/680)
- Transactions are now deleted from the `backlog` table _after_ a block containing them is written to the `bigchain` table: [Pull Request #609](https://github.com/bigchaindb/bigchaindb/pull/609)
- Changed the example AWS deployment configuration file: [Pull Request #665](https://github.com/bigchaindb/bigchaindb/pull/665)
- Support for version 0.5.0 of the `cryptoconditions` Python package. Note that this means you must now install `ffi.h` (e.g. `sudo apt-get install libffi-dev` on Ubuntu). See Pull Requests [#685](https://github.com/bigchaindb/bigchaindb/pull/685) and [#698](https://github.com/bigchaindb/bigchaindb/pull/698)
- Updated some database access code: Pull Requests [#676](https://github.com/bigchaindb/bigchaindb/pull/676) and [#701](https://github.com/bigchaindb/bigchaindb/pull/701)
### Fixed
- Internally, when a transaction is in the `backlog` table, it carries some extra book-keeping fields:
1. an `assignment_timestamp` (i.e. the time when it was assigned to a node), which is used to determine if it has gone stale.
2. an `assignee`: the public key of the node it was assigned to.
- The `assignment_timestamp` wasn't removed before writing the transaction to a block. That was fixed in [Pull Request #627](https://github.com/bigchaindb/bigchaindb/pull/627)
- The `assignment_timestamp` and `assignee` weren't removed in the response to an HTTP API request sent to the `/api/v1/transactions/<txid>` endpoint. That was fixed in [Pull Request #646](https://github.com/bigchaindb/bigchaindb/pull/646)
- When validating a TRANSFER transaction, if any fulfillment refers to a transaction that's _not_ in a valid block, then the transaction isn't valid. This wasn't checked before but it is now. [Pull Request #629](https://github.com/bigchaindb/bigchaindb/pull/629)
### External Contributors
- @MinchinWeb - [Pull Request #696](https://github.com/bigchaindb/bigchaindb/pull/696)
### Notes
- We made a small change to how we do version labeling. Going forward, we will have the version label set to 0.X.Y.dev in the master branch as we work on what will eventually be released as version 0.X.Y. The version number will only be changed to 0.X.Y just before the release. This version labeling scheme began with [Pull Request #752](https://github.com/bigchaindb/bigchaindb/pull/752)
- Several additions and changes to the documentation, e.g. Pull Requests
[#618](https://github.com/bigchaindb/bigchaindb/pull/618),
[#621](https://github.com/bigchaindb/bigchaindb/pull/621),
[#625](https://github.com/bigchaindb/bigchaindb/pull/625),
[#645](https://github.com/bigchaindb/bigchaindb/pull/645),
[#647](https://github.com/bigchaindb/bigchaindb/pull/647),
[#648](https://github.com/bigchaindb/bigchaindb/pull/648),
[#650](https://github.com/bigchaindb/bigchaindb/pull/650),
[#651](https://github.com/bigchaindb/bigchaindb/pull/651),
[#653](https://github.com/bigchaindb/bigchaindb/pull/653),
[#655](https://github.com/bigchaindb/bigchaindb/pull/655),
[#656](https://github.com/bigchaindb/bigchaindb/pull/656),
[#657](https://github.com/bigchaindb/bigchaindb/pull/657),
[#667](https://github.com/bigchaindb/bigchaindb/pull/667),
[#668](https://github.com/bigchaindb/bigchaindb/pull/668),
[#669](https://github.com/bigchaindb/bigchaindb/pull/669),
[#673](https://github.com/bigchaindb/bigchaindb/pull/673),
[#678](https://github.com/bigchaindb/bigchaindb/pull/678),
[#684](https://github.com/bigchaindb/bigchaindb/pull/684),
[#688](https://github.com/bigchaindb/bigchaindb/pull/688),
[#699](https://github.com/bigchaindb/bigchaindb/pull/699),
[#705](https://github.com/bigchaindb/bigchaindb/pull/705),
[#737](https://github.com/bigchaindb/bigchaindb/pull/737),
[#748](https://github.com/bigchaindb/bigchaindb/pull/748),
[#753](https://github.com/bigchaindb/bigchaindb/pull/753),
[#757](https://github.com/bigchaindb/bigchaindb/pull/757),
[#759](https://github.com/bigchaindb/bigchaindb/pull/759), and more
## [0.6.0] - 2016-09-01
Tag name: v0.6.0
= commit: bfc86e0295c7d1ef0acd3c275c125798bd5b0dfd
committed: Sep 1, 2016, 2:15 PM GMT+2
### Added
- Support for multiple operations in the ChangeFeed class: [Pull Request #569](https://github.com/bigchaindb/bigchaindb/pull/569)
- Instructions, templates and code for deploying a starter node on AWS using Terraform and Ansible: Pull Requests


@ -1,8 +1,8 @@
# Overview
A high-level description of the files and subdirectories of BigchainDB. Heavily used external dependencies include [`multipipes`](https://github.com/bigchaindb/multipipes) and [`bigchaindb-common`](https://github.com/bigchaindb/bigchaindb-common).
A high-level description of the files and subdirectories of BigchainDB.
There are three database tables which underpin BigchainDB: `backlog`, where incoming transactions are held temporarily until they can be consumed by; `bigchain`, where blocks of transactions are written permanently; and `votes`, where votes are written permanently. It is the votes in the `votes` table which must be queried to determine block validity and order. For more in-depth explanation, see [the whitepaper](https://www.bigchaindb.com/whitepaper/).
There are three database tables which underpin BigchainDB: `backlog`, where incoming transactions are held temporarily until they can be consumed; `bigchain`, where blocks of transactions are written permanently; and `votes`, where votes are written permanently. It is the votes in the `votes` table which must be queried to determine block validity and order. For more in-depth explanation, see [the whitepaper](https://www.bigchaindb.com/whitepaper/).
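For readers who want to poke at those three tables directly, here is a minimal standalone sketch (not part of this commit); it assumes a local RethinkDB instance with the `bigchain` database already set up and uses the `rethinkdb` Python driver on its own:

```python
# Minimal sketch, assuming RethinkDB is running locally and the `bigchain`
# database with the three tables described above already exists.
import rethinkdb as r

conn = r.connect('localhost', 28015, db='bigchain')
print(r.table('backlog').count().run(conn))   # transactions waiting for a block
print(r.table('bigchain').count().run(conn))  # blocks, written permanently
print(r.table('votes').count().run(conn))     # votes that decide block validity
```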
## Files
@ -16,7 +16,7 @@ The `Bigchain` class is defined here. Most operations outlined in the [whitepap
### [`consensus.py`](./consensus.py)
Base class for consensus methods (verification of votes, blocks, and transactions). The actual logic is mostly found in `transaction` and `block` models, defined in [`models.py`](https://github.com/bigchaindb/bigchaindb/blob/master/bigchaindb/models.py).
Base class for consensus methods (verification of votes, blocks, and transactions). The actual logic is mostly found in `transaction` and `block` models, defined in [`models.py`](./models.py).
### [`processes.py`](./processes.py)
@ -34,7 +34,7 @@ Code for monitoring speed of various processes in BigchainDB via `statsd` and Gr
### [`pipelines`](./pipelines)
Structure and implementation of various subprocesses started in [`processes.py`](https://github.com/bigchaindb/bigchaindb/blob/master/bigchaindb/processes.py).
Structure and implementation of various subprocesses started in [`processes.py`](./processes.py).
### [`commands`](./commands)


@ -11,8 +11,8 @@ config = {
# Note: this section supports all the Gunicorn settings:
# - http://docs.gunicorn.org/en/stable/settings.html
'bind': os.environ.get('BIGCHAINDB_SERVER_BIND') or 'localhost:9984',
'workers': None, # if none, the value will be cpu_count * 2 + 1
'threads': None, # if none, the value will be cpu_count * 2 + 1
'workers': None, # if none, the value will be cpu_count * 2 + 1
'threads': None, # if none, the value will be cpu_count * 2 + 1
},
'database': {
'host': os.environ.get('BIGCHAINDB_DATABASE_HOST', 'localhost'),
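One small wrinkle worth noting about the fallback pattern in this config block (a standalone sketch, not BigchainDB code): `os.environ.get(KEY, default)` only falls back when the variable is unset, while `os.environ.get(KEY) or default` also falls back when the variable is set but empty.

```python
import os

# Sketch only: the two fallback styles used in the config above.
os.environ['BIGCHAINDB_SERVER_BIND'] = ''  # set, but empty
print(os.environ.get('BIGCHAINDB_SERVER_BIND') or 'localhost:9984')  # -> localhost:9984
print(os.environ.get('BIGCHAINDB_DATABASE_HOST', 'localhost'))       # -> localhost (unset)
```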


@ -82,7 +82,6 @@ def run_configure(args, skip_if_exists=False):
conf,
bigchaindb.config_utils.env_config(bigchaindb.config))
print('Generating keypair', file=sys.stderr)
conf['keypair']['private'], conf['keypair']['public'] = \
crypto.generate_key_pair()
@ -162,7 +161,7 @@ def run_start(args):
if args.allow_temp_keypair:
if not (bigchaindb.config['keypair']['private'] or
bigchaindb.config['keypair']['public']):
bigchaindb.config['keypair']['public']):
private_key, public_key = crypto.generate_key_pair()
bigchaindb.config['keypair']['private'] = private_key
@ -170,7 +169,6 @@ def run_start(args):
else:
logger.warning('Keypair found, no need to create one on the fly.')
if args.start_rethinkdb:
try:
proc = utils.start_rethinkdb()
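For reference, the keypair generation that `--dev-allow-temp-keypair` relies on above looks roughly like this sketch; the import path is an assumption based on the modules touched in this commit, while the (private, public) return order matches the diff:

```python
# Hedged sketch: generate a temporary ED25519 keypair as run_start() does above.
# Import path assumed from this commit's layout (common code merged in #742).
from bigchaindb.common import crypto

private_key, public_key = crypto.generate_key_pair()
print('private:', private_key)
print('public: ', public_key)
```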


@ -14,19 +14,13 @@ from bigchaindb import db
from bigchaindb.version import __version__
def start_rethinkdb():
"""Start RethinkDB as a child process and wait for it to be
available.
Args:
wait_for_db (bool): wait for the database to be ready
extra_opts (list): a list of extra options to be used when
starting the db
Raises:
``bigchaindb.common.exceptions.StartupError`` if RethinkDB cannot
be started.
:class:`~bigchaindb.common.exceptions.StartupError` if
RethinkDB cannot be started.
"""
proc = subprocess.Popen(['rethinkdb', '--bind', 'all'],



@ -6,7 +6,7 @@ determined according to the following rules:
* If it's set by an environment variable, then use that value
* Otherwise, if it's set in a local config file, then use that
value
* Otherwise, use the default value (contained in
* Otherwise, use the default value (contained in
``bigchaindb.__init__``)
"""


@ -6,13 +6,13 @@ from time import time
from itertools import compress
from bigchaindb.common import crypto, exceptions
from bigchaindb.common.util import gen_timestamp, serialize
from bigchaindb.common.transaction import TransactionLink, Metadata
from bigchaindb.common.transaction import TransactionLink
import rethinkdb as r
import bigchaindb
from bigchaindb.db.utils import Connection
from bigchaindb.db.utils import Connection, get_backend
from bigchaindb import config_utils, util
from bigchaindb.consensus import BaseConsensusRules
from bigchaindb.models import Block, Transaction
@ -33,7 +33,7 @@ class Bigchain(object):
# return if transaction is in backlog
TX_IN_BACKLOG = 'backlog'
def __init__(self, host=None, port=None, dbname=None,
def __init__(self, host=None, port=None, dbname=None, backend=None,
public_key=None, private_key=None, keyring=[],
backlog_reassign_delay=None):
"""Initialize the Bigchain instance
@ -51,6 +51,8 @@ class Bigchain(object):
host (str): hostname where RethinkDB is running.
port (int): port in which RethinkDB is running (usually 28015).
dbname (str): the name of the database to connect to (usually bigchain).
backend (:class:`~bigchaindb.db.backends.rethinkdb.RethinkDBBackend`):
the database backend to use.
public_key (str): the base58 encoded public key for the ED25519 curve.
private_key (str): the base58 encoded private key for the ED25519 curve.
keyring (list[str]): list of base58 encoded public keys of the federation nodes.
@ -60,6 +62,7 @@ class Bigchain(object):
self.host = host or bigchaindb.config['database']['host']
self.port = port or bigchaindb.config['database']['port']
self.dbname = dbname or bigchaindb.config['database']['name']
self.backend = backend or get_backend(host, port, dbname)
self.me = public_key or bigchaindb.config['keypair']['public']
self.me_private = private_key or bigchaindb.config['keypair']['private']
self.nodes_except_me = keyring or bigchaindb.config['keyring']
@ -99,12 +102,9 @@ class Bigchain(object):
signed_transaction.update({'assignment_timestamp': time()})
# write to the backlog
response = self.connection.run(
r.table('backlog')
.insert(signed_transaction, durability=durability))
return response
return self.backend.write_transaction(signed_transaction)
def reassign_transaction(self, transaction, durability='hard'):
def reassign_transaction(self, transaction):
"""Assign a transaction to a new node
Args:
@ -128,23 +128,30 @@ class Bigchain(object):
# There is no other node to assign to
new_assignee = self.me
response = self.connection.run(
r.table('backlog')
.get(transaction['id'])
.update({'assignee': new_assignee, 'assignment_timestamp': time()},
durability=durability))
return response
return self.backend.update_transaction(
transaction['id'],
{'assignee': new_assignee, 'assignment_timestamp': time()})
def delete_transaction(self, *transaction_id):
"""Delete a transaction from the backlog.
Args:
*transaction_id (str): the transaction(s) to delete
Returns:
The database response.
"""
return self.backend.delete_transaction(*transaction_id)
def get_stale_transactions(self):
"""Get a RethinkDB cursor of stale transactions
"""Get a cursor of stale transactions.
Transactions are considered stale if they have been assigned a node, but are still in the
backlog after some amount of time specified in the configuration
"""
return self.connection.run(
r.table('backlog')
.filter(lambda tx: time() - tx['assignment_timestamp'] > self.backlog_reassign_delay))
return self.backend.get_stale_transactions(self.backlog_reassign_delay)
def validate_transaction(self, transaction):
"""Validate a transaction.
@ -195,10 +202,10 @@ class Bigchain(object):
the return value is then a tuple: (tx, status)
Returns:
A dict with the transaction details if the transaction was found.
Will add the transaction status to payload ('valid', 'undecided',
or 'backlog'). If no transaction with that `txid` was found it
returns `None`
A :class:`~.models.Transaction` instance if the transaction
was found, otherwise ``None``.
If :attr:`include_status` is ``True``, also returns the
transaction's status if the transaction was found.
"""
response, tx_status = None, None
@ -208,7 +215,7 @@ class Bigchain(object):
if validity:
# Disregard invalid blocks, and return if there are no valid or undecided blocks
validity = {_id: status for _id, status in validity.items()
if status != Bigchain.BLOCK_INVALID}
if status != Bigchain.BLOCK_INVALID}
if validity:
tx_status = self.TX_UNDECIDED
@ -221,19 +228,12 @@ class Bigchain(object):
break
# Query the transaction in the target block and return
response = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get(target_block_id)
.get_field('block')
.get_field('transactions')
.filter(lambda tx: tx['id'] == txid))[0]
response = self.backend.get_transaction_from_block(txid, target_block_id)
else:
# Otherwise, check the backlog
response = self.connection.run(r.table('backlog')
.get(txid)
.without('assignee', 'assignment_timestamp')
.default(None))
response = self.backend.get_transaction_from_backlog(txid)
if response:
tx_status = self.TX_IN_BACKLOG
@ -259,24 +259,6 @@ class Bigchain(object):
_, status = self.get_transaction(txid, include_status=True)
return status
def search_block_election_on_index(self, value, index):
"""Retrieve block election information given a secondary index and value
Args:
value: a value to search (e.g. transaction id string, payload hash string)
index (str): name of a secondary index, e.g. 'transaction_id'
Returns:
:obj:`list` of :obj:`dict`: A list of blocks with with only election information
"""
# First, get information on all blocks which contain this transaction
response = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get_all(value, index=index)
.pluck('votes', 'id', {'block': ['voters']}))
return list(response)
def get_blocks_status_containing_tx(self, txid):
"""Retrieve block ids and statuses related to a transaction
@ -291,7 +273,7 @@ class Bigchain(object):
"""
# First, get information on all blocks which contain this transaction
blocks = self.search_block_election_on_index(txid, 'transaction_id')
blocks = self.backend.get_blocks_status_from_transaction(txid)
if blocks:
# Determine the election status of each block
validity = {
@ -302,7 +284,7 @@ class Bigchain(object):
}
# NOTE: If there are multiple valid blocks with this transaction,
# something has gone wrong
# something has gone wrong
if list(validity.values()).count(Bigchain.BLOCK_VALID) > 1:
block_ids = str([block for block in validity
if validity[block] == Bigchain.BLOCK_VALID])
@ -333,14 +315,8 @@ class Bigchain(object):
A list of transactions containing that metadata. If no transaction exists with that metadata it
returns an empty list `[]`
"""
cursor = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get_all(metadata_id, index='metadata_id')
.concat_map(lambda block: block['block']['transactions'])
.filter(lambda transaction: transaction['transaction']['metadata']['id'] == metadata_id))
transactions = list(cursor)
return [Transaction.from_dict(tx) for tx in transactions]
cursor = self.backend.get_transactions_by_metadata_id(metadata_id)
return [Transaction.from_dict(tx) for tx in cursor]
def get_txs_by_asset_id(self, asset_id):
"""Retrieves transactions related to a particular asset.
@ -355,12 +331,8 @@ class Bigchain(object):
A list of transactions related to the asset. If no transaction exists for that asset it
returns an empty list `[]`
"""
cursor = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get_all(asset_id, index='asset_id')
.concat_map(lambda block: block['block']['transactions'])
.filter(lambda transaction: transaction['transaction']['asset']['id'] == asset_id))
cursor = self.backend.get_transactions_by_asset_id(asset_id)
return [Transaction.from_dict(tx) for tx in cursor]
def get_spent(self, txid, cid):
@ -379,13 +351,7 @@ class Bigchain(object):
"""
# checks if an input was already spent
# checks if the bigchain has any transaction with input {'txid': ..., 'cid': ...}
response = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda transaction: transaction['transaction']['fulfillments']
.contains(lambda fulfillment: fulfillment['input'] == {'txid': txid, 'cid': cid})))
transactions = list(response)
transactions = list(self.backend.get_spent(txid, cid))
# a transaction_id should have been spent at most one time
if transactions:
@ -397,8 +363,9 @@ class Bigchain(object):
if self.get_transaction(transaction['id']):
num_valid_transactions += 1
if num_valid_transactions > 1:
raise exceptions.DoubleSpend('`{}` was spent more then once. There is a problem with the chain'.format(
txid))
raise exceptions.DoubleSpend(
'`{}` was spent more then once. There is a problem with the chain'.format(
txid))
if num_valid_transactions:
return Transaction.from_dict(transactions[0])
@ -409,23 +376,18 @@ class Bigchain(object):
return None
def get_owned_ids(self, owner):
"""Retrieve a list of `txids` that can we used has inputs.
"""Retrieve a list of `txid`s that can be used as inputs.
Args:
owner (str): base58 encoded public key.
Returns:
list (TransactionLink): list of `txid`s and `cid`s pointing to
another transaction's condition
:obj:`list` of TransactionLink: list of `txid`s and `cid`s
pointing to another transaction's condition
"""
# get all transactions in which owner is in the `owners_after` list
response = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda tx: tx['transaction']['conditions']
.contains(lambda c: c['owners_after']
.contains(owner))))
response = self.backend.get_owned_ids(owner)
owned = []
for tx in response:
@ -436,7 +398,7 @@ class Bigchain(object):
continue
# NOTE: It's OK to not serialize the transaction here, as we do not
# use it after the execution of this function.
# use it after the execution of this function.
# a transaction can contain multiple outputs (conditions) so we need to iterate over all of them
# to get a list of outputs available to spend
for index, cond in enumerate(tx['transaction']['conditions']):
@ -510,9 +472,7 @@ class Bigchain(object):
but the vote is invalid.
"""
votes = list(self.connection.run(
r.table('votes', read_mode=self.read_mode)
.get_all([block_id, self.me], index='block_and_voter')))
votes = list(self.backend.get_votes_by_block_id_and_voter(block_id, self.me))
if len(votes) > 1:
raise exceptions.MultipleVotesError('Block {block_id} has {n_votes} votes from public key {me}'
@ -534,15 +494,10 @@ class Bigchain(object):
block (Block): block to write to bigchain.
"""
self.connection.run(
r.table('bigchain')
.insert(r.json(block.to_str()), durability=durability))
return self.backend.write_block(block.to_str(), durability=durability)
def transaction_exists(self, transaction_id):
response = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)\
.get_all(transaction_id, index='transaction_id'))
return len(response.items) > 0
return self.backend.has_transaction(transaction_id)
def prepare_genesis_block(self):
"""Prepare a genesis block."""
@ -571,9 +526,7 @@ class Bigchain(object):
# 2. create the block with one transaction
# 3. write the block to the bigchain
blocks_count = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.count())
blocks_count = self.backend.count_blocks()
if blocks_count:
raise exceptions.GenesisBlockAlreadyExistsError('Cannot create the Genesis block')
@ -584,11 +537,11 @@ class Bigchain(object):
return block
def vote(self, block_id, previous_block_id, decision, invalid_reason=None):
"""Cast your vote on the block given the previous_block_hash and the decision (valid/invalid)
return the block to the updated in the database.
"""Create a signed vote for a block given the
:attr:`previous_block_id` and the :attr:`decision` (valid/invalid).
Args:
block_id (str): The id of the block to vote.
block_id (str): The id of the block to vote on.
previous_block_id (str): The id of the previous block.
decision (bool): Whether the block is valid or invalid.
invalid_reason (Optional[str]): Reason the block is invalid
@ -618,69 +571,12 @@ class Bigchain(object):
def write_vote(self, vote):
"""Write the vote to the database."""
self.connection.run(
r.table('votes')
.insert(vote))
return self.backend.write_vote(vote)
def get_last_voted_block(self):
"""Returns the last block that this node voted on."""
try:
# get the latest value for the vote timestamp (over all votes)
max_timestamp = self.connection.run(
r.table('votes', read_mode=self.read_mode)
.filter(r.row['node_pubkey'] == self.me)
.max(r.row['vote']['timestamp']))['vote']['timestamp']
last_voted = list(self.connection.run(
r.table('votes', read_mode=self.read_mode)
.filter(r.row['vote']['timestamp'] == max_timestamp)
.filter(r.row['node_pubkey'] == self.me)))
except r.ReqlNonExistenceError:
# return last vote if last vote exists else return Genesis block
res = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.filter(util.is_genesis_block))
block = list(res)[0]
return Block.from_dict(block)
# Now the fun starts. Since the resolution of timestamp is a second,
# we might have more than one vote per timestamp. If this is the case
# then we need to rebuild the chain for the blocks that have been retrieved
# to get the last one.
# Given a block_id, mapping returns the id of the block pointing at it.
mapping = {v['vote']['previous_block']: v['vote']['voting_for_block']
for v in last_voted}
# Since we follow the chain backwards, we can start from a random
# point of the chain and "move up" from it.
last_block_id = list(mapping.values())[0]
# We must be sure to break the infinite loop. This happens when:
# - the block we are currenty iterating is the one we are looking for.
# This will trigger a KeyError, breaking the loop
# - we are visiting again a node we already explored, hence there is
# a loop. This might happen if a vote points both `previous_block`
# and `voting_for_block` to the same `block_id`
explored = set()
while True:
try:
if last_block_id in explored:
raise exceptions.CyclicBlockchainError()
explored.add(last_block_id)
last_block_id = mapping[last_block_id]
except KeyError:
break
res = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get(last_block_id))
return Block.from_dict(res)
return Block.from_dict(self.backend.get_last_voted_block(self.me))
def get_unvoted_blocks(self):
"""Return all the blocks that have not been voted on by this node.
@ -689,37 +585,26 @@ class Bigchain(object):
:obj:`list` of :obj:`dict`: a list of unvoted blocks
"""
unvoted = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.filter(lambda block: r.table('votes', read_mode=self.read_mode)
.get_all([block['id'], self.me], index='block_and_voter')
.is_empty())
.order_by(r.asc(r.row['block']['timestamp'])))
# FIXME: I (@vrde) don't like this solution. Filtering should be done at a
# database level. Solving issue #444 can help untangling the situation
unvoted_blocks = filter(lambda block: not util.is_genesis_block(block), unvoted)
return unvoted_blocks
# XXX: should this return instances of Block?
return self.backend.get_unvoted_blocks(self.me)
def block_election_status(self, block_id, voters):
"""Tally the votes on a block, and return the status: valid, invalid, or undecided."""
votes = self.connection.run(r.table('votes', read_mode=self.read_mode)
.between([block_id, r.minval], [block_id, r.maxval], index='block_and_voter'))
votes = list(votes)
votes = list(self.backend.get_votes_by_block_id(block_id))
n_voters = len(voters)
voter_counts = collections.Counter([vote['node_pubkey'] for vote in votes])
for node in voter_counts:
if voter_counts[node] > 1:
raise exceptions.MultipleVotesError('Block {block_id} has multiple votes ({n_votes}) from voting node {node_id}'
.format(block_id=block_id, n_votes=str(voter_counts[node]), node_id=node))
raise exceptions.MultipleVotesError(
'Block {block_id} has multiple votes ({n_votes}) from voting node {node_id}'
.format(block_id=block_id, n_votes=str(voter_counts[node]), node_id=node))
if len(votes) > n_voters:
raise exceptions.MultipleVotesError('Block {block_id} has {n_votes} votes cast, but only {n_voters} voters'
.format(block_id=block_id, n_votes=str(len(votes)), n_voters=str(n_voters)))
.format(block_id=block_id, n_votes=str(len(votes)),
n_voters=str(n_voters)))
# vote_cast is the list of votes e.g. [True, True, False]
vote_cast = [vote['vote']['is_block_valid'] for vote in votes]
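Taken together, the changes above route every RethinkDB query in `Bigchain` through a backend object. A hedged usage sketch (names taken from this diff; not an official example):

```python
# Hedged sketch of the new delegation path introduced above.
# Assumptions: `Bigchain` is importable from the top-level package and a
# keypair is already configured (e.g. via `bigchaindb configure`).
from bigchaindb import Bigchain
from bigchaindb.db.utils import get_backend

# get_backend() falls back to bigchaindb.config when called without arguments.
backend = get_backend(host='localhost', port=28015, db='bigchain')
b = Bigchain(backend=backend)

# These methods now call into `b.backend` instead of running raw ReQL
# queries inside the Bigchain class:
#   b.get_stale_transactions()
#   b.delete_transaction('some-txid')          # txid is a made-up placeholder
#   b.block_election_status(block_id, voters)  # block_id/voters not defined here
```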


@ -1,2 +1,2 @@
# TODO can we use explicit imports?
from bigchaindb.db.utils import *
from bigchaindb.db.utils import * # noqa: F401,F403



@ -0,0 +1,381 @@
"""Backend implementation for RethinkDB.
This module contains all the methods to store and retrieve data from RethinkDB.
"""
from time import time
import rethinkdb as r
from bigchaindb import util
from bigchaindb.db.utils import Connection
from bigchaindb.common import exceptions
class RethinkDBBackend:
def __init__(self, host=None, port=None, db=None):
"""Initialize a new RethinkDB Backend instance.
Args:
host (str): the host to connect to.
port (int): the port to connect to.
db (str): the name of the database to use.
"""
self.read_mode = 'majority'
self.durability = 'soft'
self.connection = Connection(host=host, port=port, db=db)
def write_transaction(self, signed_transaction):
"""Write a transaction to the backlog table.
Args:
signed_transaction (dict): a signed transaction.
Returns:
The result of the operation.
"""
return self.connection.run(
r.table('backlog')
.insert(signed_transaction, durability=self.durability))
def update_transaction(self, transaction_id, doc):
"""Update a transaction in the backlog table.
Args:
transaction_id (str): the id of the transaction.
doc (dict): the values to update.
Returns:
The result of the operation.
"""
return self.connection.run(
r.table('backlog')
.get(transaction_id)
.update(doc))
def delete_transaction(self, *transaction_id):
"""Delete a transaction from the backlog.
Args:
*transaction_id (str): the transaction(s) to delete
Returns:
The database response.
"""
return self.connection.run(
r.table('backlog')
.get_all(*transaction_id)
.delete(durability='hard'))
def get_stale_transactions(self, reassign_delay):
"""Get a cursor of stale transactions.
Transactions are considered stale if they have been assigned a node,
but are still in the backlog after some amount of time specified in the
configuration.
Args:
reassign_delay (int): threshold (in seconds) to mark a transaction stale.
Returns:
A cursor of transactions.
"""
return self.connection.run(
r.table('backlog')
.filter(lambda tx: time() - tx['assignment_timestamp'] > reassign_delay))
def get_transaction_from_block(self, transaction_id, block_id):
"""Get a transaction from a specific block.
Args:
transaction_id (str): the id of the transaction.
block_id (str): the id of the block.
Returns:
The matching transaction.
"""
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get(block_id)
.get_field('block')
.get_field('transactions')
.filter(lambda tx: tx['id'] == transaction_id))[0]
def get_transaction_from_backlog(self, transaction_id):
"""Get a transaction from backlog.
Args:
transaction_id (str): the id of the transaction.
Returns:
The matching transaction.
"""
return self.connection.run(
r.table('backlog')
.get(transaction_id)
.without('assignee', 'assignment_timestamp')
.default(None))
def get_blocks_status_from_transaction(self, transaction_id):
"""Retrieve block election information given a secondary index and value
Args:
value: a value to search (e.g. transaction id string, payload hash string)
index (str): name of a secondary index, e.g. 'transaction_id'
Returns:
:obj:`list` of :obj:`dict`: A list of blocks with with only election information
"""
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get_all(transaction_id, index='transaction_id')
.pluck('votes', 'id', {'block': ['voters']}))
def get_transactions_by_metadata_id(self, metadata_id):
"""Retrieves transactions related to a metadata.
When creating a transaction one of the optional arguments is the `metadata`. The metadata is a generic
dict that contains extra information that can be appended to the transaction.
To make it easy to query the bigchain for that particular metadata we create a UUID for the metadata and
store it with the transaction.
Args:
metadata_id (str): the id for this particular metadata.
Returns:
A list of transactions containing that metadata. If no transaction exists with that metadata it
returns an empty list `[]`
"""
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get_all(metadata_id, index='metadata_id')
.concat_map(lambda block: block['block']['transactions'])
.filter(lambda transaction: transaction['transaction']['metadata']['id'] == metadata_id))
def get_transactions_by_asset_id(self, asset_id):
"""Retrieves transactions related to a particular asset.
A digital asset in bigchaindb is identified by a UUID. This allows us to query all the transactions
related to a particular digital asset, knowing the id.
Args:
asset_id (str): the id for this particular asset.
Returns:
A list of transactions related to the asset. If no transaction exists for that asset it
returns an empty list `[]`
"""
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get_all(asset_id, index='asset_id')
.concat_map(lambda block: block['block']['transactions'])
.filter(lambda transaction: transaction['transaction']['asset']['id'] == asset_id))
def get_spent(self, transaction_id, condition_id):
"""Check if a `txid` was already used as an input.
A transaction can be used as an input for another transaction. Bigchain needs to make sure that a
given `txid` is only used once.
Args:
transaction_id (str): The id of the transaction.
condition_id (int): The index of the condition in the respective transaction.
Returns:
The transaction that used the `txid` as an input else `None`
"""
# TODO: use index!
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda transaction: transaction['transaction']['fulfillments'].contains(
lambda fulfillment: fulfillment['input'] == {'txid': transaction_id, 'cid': condition_id})))
def get_owned_ids(self, owner):
"""Retrieve a list of `txids` that can we used has inputs.
Args:
owner (str): base58 encoded public key.
Returns:
A cursor for the matching transactions.
"""
# TODO: use index!
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.concat_map(lambda doc: doc['block']['transactions'])
.filter(lambda tx: tx['transaction']['conditions'].contains(
lambda c: c['owners_after'].contains(owner))))
def get_votes_by_block_id(self, block_id):
"""Get all the votes casted for a specific block.
Args:
block_id (str): the block id to use.
Returns:
A cursor for the matching votes.
"""
return self.connection.run(
r.table('votes', read_mode=self.read_mode)
.between([block_id, r.minval], [block_id, r.maxval], index='block_and_voter'))
def get_votes_by_block_id_and_voter(self, block_id, node_pubkey):
"""Get all the votes casted for a specific block by a specific voter.
Args:
block_id (str): the block id to use.
node_pubkey (str): base58 encoded public key
Returns:
A cursor for the matching votes.
"""
return self.connection.run(
r.table('votes', read_mode=self.read_mode)
.get_all([block_id, node_pubkey], index='block_and_voter'))
def write_block(self, block, durability='soft'):
"""Write a block to the bigchain table.
Args:
block (dict): the block to write.
Returns:
The database response.
"""
return self.connection.run(
r.table('bigchain')
.insert(r.json(block), durability=durability))
def has_transaction(self, transaction_id):
"""Check if a transaction exists in the bigchain table.
Args:
transaction_id (str): the id of the transaction to check.
Returns:
``True`` if the transaction exists, ``False`` otherwise.
"""
return bool(self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get_all(transaction_id, index='transaction_id').count()))
def count_blocks(self):
"""Count the number of blocks in the bigchain table.
Returns:
The number of blocks.
"""
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.count())
def write_vote(self, vote):
"""Write a vote to the votes table.
Args:
vote (dict): the vote to write.
Returns:
The database response.
"""
return self.connection.run(
r.table('votes')
.insert(vote))
def get_last_voted_block(self, node_pubkey):
"""Get the last voted block for a specific node.
Args:
node_pubkey (str): base58 encoded public key.
Returns:
The last block the node has voted on. If the node didn't cast
any vote then the genesis block is returned.
"""
try:
# get the latest value for the vote timestamp (over all votes)
max_timestamp = self.connection.run(
r.table('votes', read_mode=self.read_mode)
.filter(r.row['node_pubkey'] == node_pubkey)
.max(r.row['vote']['timestamp']))['vote']['timestamp']
last_voted = list(self.connection.run(
r.table('votes', read_mode=self.read_mode)
.filter(r.row['vote']['timestamp'] == max_timestamp)
.filter(r.row['node_pubkey'] == node_pubkey)))
except r.ReqlNonExistenceError:
# return last vote if last vote exists else return Genesis block
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.filter(util.is_genesis_block)
.nth(0))
# Now the fun starts. Since the resolution of timestamp is a second,
# we might have more than one vote per timestamp. If this is the case
# then we need to rebuild the chain for the blocks that have been retrieved
# to get the last one.
# Given a block_id, mapping returns the id of the block pointing at it.
mapping = {v['vote']['previous_block']: v['vote']['voting_for_block']
for v in last_voted}
# Since we follow the chain backwards, we can start from a random
# point of the chain and "move up" from it.
last_block_id = list(mapping.values())[0]
# We must be sure to break the infinite loop. This happens when:
# - the block we are currently iterating is the one we are looking for.
# This will trigger a KeyError, breaking the loop
# - we are visiting again a node we already explored, hence there is
# a loop. This might happen if a vote points both `previous_block`
# and `voting_for_block` to the same `block_id`
explored = set()
while True:
try:
if last_block_id in explored:
raise exceptions.CyclicBlockchainError()
explored.add(last_block_id)
last_block_id = mapping[last_block_id]
except KeyError:
break
return self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.get(last_block_id))
def get_unvoted_blocks(self, node_pubkey):
"""Return all the blocks that have not been voted by the specified node.
Args:
node_pubkey (str): base58 encoded public key
Returns:
:obj:`list` of :obj:`dict`: a list of unvoted blocks
"""
unvoted = self.connection.run(
r.table('bigchain', read_mode=self.read_mode)
.filter(lambda block: r.table('votes', read_mode=self.read_mode)
.get_all([block['id'], node_pubkey], index='block_and_voter')
.is_empty())
.order_by(r.asc(r.row['block']['timestamp'])))
# FIXME: I (@vrde) don't like this solution. Filtering should be done at a
# database level. Solving issue #444 can help untangling the situation
unvoted_blocks = filter(lambda block: not util.is_genesis_block(block), unvoted)
return unvoted_blocks
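For completeness, here is a short, hedged sketch of exercising this new backend class directly; it assumes a local RethinkDB with the standard tables, and the transaction id is made up:

```python
# Sketch only: talk to the new backend without going through Bigchain.
from bigchaindb.db.backends.rethinkdb import RethinkDBBackend

backend = RethinkDBBackend(host='localhost', port=28015, db='bigchain')
print(backend.count_blocks())               # number of blocks in `bigchain`
print(backend.has_transaction('deadbeef'))  # 'deadbeef' is a made-up txid
```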


@ -67,6 +67,18 @@ class Connection:
time.sleep(2**i)
def get_backend(host=None, port=None, db=None):
'''Get a backend instance.'''
from bigchaindb.db.backends import rethinkdb
# NOTE: this function will be re-implemented when we have real
# multiple backends to support. Right now it returns the RethinkDB one.
return rethinkdb.RethinkDBBackend(host=host or bigchaindb.config['database']['host'],
port=port or bigchaindb.config['database']['port'],
db=db or bigchaindb.config['database']['name'])
def get_conn():
'''Get the connection to the database.'''


@ -190,7 +190,7 @@ class Block(object):
def is_signature_valid(self):
block = self.to_dict()['block']
# cc only accepts bytesting messages
# cc only accepts bytestring messages
block_serialized = serialize(block).encode()
verifying_key = VerifyingKey(block['node_pubkey'])
try:


@ -69,10 +69,7 @@ class BlockPipeline:
# if the tx is already in a valid or undecided block,
# then it no longer should be in the backlog, or added
# to a new block. We can delete and drop it.
self.bigchain.connection.run(
r.table('backlog')
.get(tx.id)
.delete(durability='hard'))
self.bigchain.delete_transaction(tx.id)
return None
tx_validated = self.bigchain.is_valid_transaction(tx)
@ -81,10 +78,7 @@ class BlockPipeline:
else:
# if the transaction is not valid, remove it from the
# backlog
self.bigchain.connection.run(
r.table('backlog')
.get(tx.id)
.delete(durability='hard'))
self.bigchain.delete_transaction(tx.id)
return None
def create(self, tx, timeout=False):
@ -136,10 +130,7 @@ class BlockPipeline:
Returns:
:class:`~bigchaindb.models.Block`: The block.
"""
self.bigchain.connection.run(
r.table('backlog')
.get_all(*[tx.id for tx in block.transactions])
.delete(durability='hard'))
self.bigchain.delete_transaction(*[tx.id for tx in block.transactions])
return block


@ -26,8 +26,8 @@ class StaleTransactionMonitor:
Args:
timeout: how often to check for stale tx (in sec)
backlog_reassign_delay: How stale a transaction should
be before reassignment (in sec). If supplied, overrides the
Bigchain default value.
be before reassignment (in sec). If supplied, overrides
the Bigchain default value.
"""
self.bigchain = Bigchain(backlog_reassign_delay=backlog_reassign_delay)
self.timeout = timeout


@ -13,16 +13,16 @@ logger = logging.getLogger(__name__)
class ChangeFeed(Node):
"""This class wraps a RethinkDB changefeed adding a `prefeed`.
"""This class wraps a RethinkDB changefeed adding a ``prefeed``.
It extends the ``multipipes::Node`` class to make it pluggable in
other Pipelines instances, and it makes usage of ``self.outqueue``
to output the data.
It extends :class:`multipipes.Node` to make it pluggable in other
Pipelines instances, and makes usage of ``self.outqueue`` to output
the data.
A changefeed is a real time feed on inserts, updates, and deletes, and
it's volatile. This class is a helper to create changefeeds. Moreover
it provides a way to specify a `prefeed`, that is a set of data (iterable)
to output before the actual changefeed.
is volatile. This class is a helper to create changefeeds. Moreover,
it provides a way to specify a ``prefeed`` of iterable data to output
before the actual changefeed.
"""
INSERT = 1
@ -35,8 +35,8 @@ class ChangeFeed(Node):
Args:
table (str): name of the table to listen to for changes.
operation (int): can be ChangeFeed.INSERT, ChangeFeed.DELETE, or
ChangeFeed.UPDATE. Combining multiple operation is possible using
the bitwise ``|`` operator
ChangeFeed.UPDATE. Combining multiple operations is possible
with the bitwise ``|`` operator
(e.g. ``ChangeFeed.INSERT | ChangeFeed.UPDATE``)
prefeed (iterable): whatever set of data you want to be published
first.
@ -73,4 +73,3 @@ class ChangeFeed(Node):
self.outqueue.put(change['old_val'])
elif is_update and (self.operation & ChangeFeed.UPDATE):
self.outqueue.put(change['new_val'])
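Based on the constructor documented above, wiring up a changefeed that reacts to more than one operation would look something like the following sketch; the import path and table name are assumptions based on this commit's layout:

```python
# Hedged sketch: a changefeed on the `backlog` table that emits inserts and
# updates, preceded by a small prefeed of seed items.
from bigchaindb.pipelines.utils import ChangeFeed

changefeed = ChangeFeed('backlog',
                        ChangeFeed.INSERT | ChangeFeed.UPDATE,
                        prefeed=[{'id': 'seed-1'}, {'id': 'seed-2'}])
```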


@ -119,7 +119,7 @@ def condition_details_has_owner(condition_details, owner):
def verify_vote_signature(voters, signed_vote):
"""Verify the signature of a vote
A valid vote should have been signed `owner_before` corresponding private key.
A valid vote should have been signed by a voter's private key.
Args:
voters (list): voters of the block that is under election


@ -1,2 +1,2 @@
__version__ = '0.6.0'
__short_version__ = '0.6'
__version__ = '0.8.0.dev'
__short_version__ = '0.8.dev'


@ -12,4 +12,3 @@ def make_error(status_code, message=None):
})
response.status_code = status_code
return response


@ -44,9 +44,8 @@ USE_KEYPAIRS_FILE=False
# and you can search for one that meets your needs at:
# https://cloud-images.ubuntu.com/locator/ec2/
# Example:
# "ami-72c33e1d"
# (eu-central-1 Ubuntu 14.04 LTS amd64 hvm:ebs-ssd 20160919)
IMAGE_ID="ami-72c33e1d"
# (eu-central-1 Ubuntu 14.04 LTS amd64 hvm:ebs-ssd 20161020)
IMAGE_ID="ami-9c09f0f3"
# INSTANCE_TYPE is the type of AWS instance to launch
# i.e. How many CPUs do you want? How much storage? etc.


@ -10,12 +10,14 @@
# How to Generate the HTML Version of the Long-Form Documentation
If you want to generate the HTML version of the long-form documentation on your local machine, you need to have Sphinx and some Sphinx-contrib packages installed. To do that, go to the BigchainDB `docs` directory (i.e. this directory) and do:
If you want to generate the HTML version of the long-form documentation on your local machine, you need to have Sphinx and some Sphinx-contrib packages installed. To do that, go to a subdirectory of `docs` (e.g. `docs/server`) and do:
```bash
pip install -r requirements.txt
```
You can then generate the HTML documentation by doing:
You can then generate the HTML documentation _in that subdirectory_ by doing:
```bash
make html
```
The generated HTML documentation will be in the `docs/build/html` directory. You can view it by opening `docs/build/html/index.html` in your web browser.
It should tell you where the generated documentation (HTML files) can be found. You can view it in your web browser.

docs/root/Makefile (new file, 225 lines)

@ -0,0 +1,225 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " epub3 to make an epub3"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " dummy to check syntax errors of document sources"
.PHONY: clean
clean:
rm -rf $(BUILDDIR)/*
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/BigchainDB.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/BigchainDB.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/BigchainDB"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/BigchainDB"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: epub3
epub3:
$(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
@echo
@echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
.PHONY: dummy
dummy:
$(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
@echo
@echo "Build finished. Dummy builder generates no files."

View File

@ -0,0 +1,24 @@
How BigchainDB is Good for Asset Registrations & Transfers
==========================================================
BigchainDB can store data of any kind (within reason), but it's designed to be particularly good for storing asset registrations and transfers:
* The fundamental thing that one submits to a BigchainDB federation to be checked and stored (if valid) is a *transaction*, and there are two kinds: creation transactions and transfer transactions.
* A creation transaction can be used to register any kind of indivisible asset, along with arbitrary metadata.
* An asset can have zero, one, or several owners.
* The owners of an asset can specify (crypto-)conditions which must be satisfied by anyone wishing to transfer the asset to new owners. For example, a condition might be that at least 3 of the 5 current owners must cryptographically sign a transfer transaction; see the sketch after this list.
* BigchainDB verifies that the conditions have been satisfied as part of checking the validity of transfer transactions. (Moreover, anyone can check that they were satisfied.)
* BigchainDB prevents double-spending of an asset.
* Validated transactions are strongly tamper-resistant; see [the section about immutability / tamper-resistance](immutable.html).
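The 3-of-5 example above boils down to a simple threshold check. The sketch below is purely illustrative (plain Python, not BigchainDB or cryptoconditions code) and ignores the cryptographic signing entirely:

```python
# Illustration only: an m-of-n "threshold condition" reduced to set arithmetic.
REQUIRED = 3
owners = {'alice', 'bob', 'carol', 'dan', 'erin'}   # hypothetical owners

def condition_met(signers):
    """True if at least REQUIRED of the current owners have signed."""
    return len(owners & set(signers)) >= REQUIRED

print(condition_met({'alice', 'bob', 'carol'}))  # True
print(condition_met({'alice', 'mallory'}))       # False
```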
BigchainDB Integration with Other Blockchains
---------------------------------------------
BigchainDB works with the `Interledger protocol <https://interledger.org/>`_, enabling the transfer of assets between BigchainDB and other blockchains, ledgers, and payment systems.
We're actively exploring ways that BigchainDB can be used with other blockchains and platforms.
.. note::
We used the word "owners" somewhat loosely above. A more accurate word might be fulfillers, signers, controllers, or transfer-enablers. See BigchainDB Server `issue #626 <https://github.com/bigchaindb/bigchaindb/issues/626>`_.

docs/root/source/bft.md (new file, 19 lines)

@ -0,0 +1,19 @@
# BigchainDB and Byzantine Fault Tolerance
We have Byzantine fault tolerance (BFT) in our roadmap, as a switch that people can turn on. We anticipate that turning it on will cause a severe dropoff in performance (to gain some extra security). See [Issue #293](https://github.com/bigchaindb/bigchaindb/issues/293).
Among the big, industry-used distributed databases in production today (e.g. DynamoDB, Bigtable, MongoDB, Cassandra, Elasticsearch), none of them are BFT. Indeed, almost all wide-area distributed systems in production are not BFT, including military, banking, healthcare, and other security-sensitive systems.
There are many more practical things that nodes can do to increase security (e.g. firewalls, key management, access controls).
From a [recent essay by Ken Birman](http://sigops.org/sosp/sosp15/history/05-birman.pdf) (of Cornell):
> Oh, and with respect to the BFT point: Jim [Gray] felt that real systems fail by crashing [54]. Others have since done studies reinforcing this view, or finding that even crash-failure solutions can sometimes defend against application corruption. One interesting study, reported during a SOSP WIPS session by Ben Reed (one of the co-developers of Zookeeper), found that at Yahoo, Zookeeper itself had never experienced Byzantine faults in a one-year period that they studied closely.
> [54] Jim Gray. Why Do Computers Stop and What Can Be Done About It? SOSP, 1985.
Ben Reed never published those results, but Birman wrote more about them in the book *Guide to Reliable Distributed Systems: Building High-Assurance Applications*. From page 358 of that book:
> But the cloud community, led by Ben Reed and Flavio Junqueira at Yahoo, sees things differently (these are the two inventors [sic] of Yahoo's ZooKeeper service). **They have described informal studies of how applications and machines at Yahoo failed, concluding that the frequency of Byzantine failures was extremely small relative to the frequency of crash failures** [emphasis added]. Sometimes they did see data corruption, but then they often saw it occur in a correlated way that impacted many replicas all at once. And very often they saw failures occur in the client layer, then propagate into the service. BFT techniques tend to be used only within a service, not in the client layer that talks to that service, hence offer no protection against malfunctioning clients. **All of this, Reed and Junqueira conclude, lead to the realization that BFT just does not match the real needs of a cloud computing company like Yahoo, even if the data being managed by a service really is of very high importance** [emphasis added]. Unfortunately, they have not published this study; it was reported at an “outrageous opinions” session at the ACM Symposium on Operating Systems Principles, in 2009.
> The practical use of the Byzantine protocol raises another concern: The timing assumptions built into the model [i.e. synchronous or partially-synchronous nodes] are not realizable in most computing environments…

347
docs/root/source/conf.py Normal file
View File

@ -0,0 +1,347 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# BigchainDB documentation build configuration file, created by
# sphinx-quickstart on Thu Sep 29 11:13:27 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
from recommonmark.parser import CommonMarkParser
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
import sphinx_rtd_theme
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
source_parsers = {
'.md': CommonMarkParser,
}
# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
#
# source_suffix = ['.rst', '.md']
source_suffix = ['.rst', '.md']
# The encoding of source files.
#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'BigchainDB'
copyright = '2016, BigchainDB Contributors'
author = 'BigchainDB Contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = ''
# The full version, including alpha/beta/rc tags.
release = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#
# today = ''
#
# Else, today_fmt is used as the format for a strftime call.
#
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#
# html_title = 'BigchainDB v0.1'
# A shorter title for the navigation bar. Default is the same as html_title.
#
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#
# html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#
# html_extra_path = []
# If not None, a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# The empty string is equivalent to '%b %d, %Y'.
#
# html_last_updated_fmt = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#
# html_additional_pages = {}
# If false, no module index is generated.
#
# html_domain_indices = True
# If false, no index is generated.
#
# html_use_index = True
# If true, the index is split into individual pages for each letter.
#
# html_split_index = False
# If true, links to the reST sources are added to the pages.
#
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
#
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' user can custom change `jieba` dictionary path.
#
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'BigchainDBdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'BigchainDB.tex', 'BigchainDB Documentation',
'BigchainDB Contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False
# If true, show page references after internal links.
#
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
#
# latex_appendices = []
# If false, will not define \strong, \code, \titleref, \crossref ... but only
# \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added
# packages.
#
# latex_keep_old_macro_names = True
# If false, no module index is generated.
#
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'bigchaindb', 'BigchainDB Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'BigchainDB', 'BigchainDB Documentation',
author, 'BigchainDB', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#
# texinfo_appendices = []
# If false, no module index is generated.
#
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#
# texinfo_no_detailmenu = False

View File

@ -0,0 +1,32 @@
# The Digital Asset Model
To avoid redundant data in transactions, the digital asset model is different for `CREATE` and `TRANSFER` transactions.
A digital asset's properties are defined in a `CREATE` transaction with the following model:
```json
{
"id": "<uuid>",
"divisible": "<true | false>",
"updatable": "<true | false>",
"refillable": "<true | false>",
"data": "<json document>"
}
```
For `TRANSFER` transactions we only keep the asset id.
```json
{
"id": "<uuid>"
}
```
- `id`: UUID version 4 (random) converted to a string of hex digits in standard form. Added server side.
- `divisible`: Whether the asset is divisible or not. Defaults to false.
- `updatable`: Whether the data in the asset can be updated in the future or not. Defaults to false.
- `refillable`: Whether the amount of the asset can change after its creation. Defaults to false.
- `data`: A user supplied JSON document with custom information about the asset. Defaults to null.
- _amount_: The amount of "shares". Only relevant if the asset is marked as divisible. Defaults to 1. The amount is not specified in the asset, but in the conditions (see next section).
At the time of this writing, divisible, updatable, and refillable assets are not yet implemented.
See [Issue #487 on GitHub](https://github.com/bigchaindb/bigchaindb/issues/487).
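For illustration only (this is not the BigchainDB server code, and the helper names are hypothetical), a `CREATE` asset and its `TRANSFER` counterpart could be assembled in Python roughly like this:
```python
import uuid

def make_create_asset(data=None, divisible=False, updatable=False, refillable=False):
    """Sketch only: build an asset dict for a CREATE transaction."""
    return {
        "id": str(uuid.uuid4()),   # UUID version 4, added server side
        "divisible": divisible,
        "updatable": updatable,
        "refillable": refillable,
        "data": data,              # any JSON-serializable document, or None
    }

def make_transfer_asset(asset_id):
    """Sketch only: for TRANSFER transactions only the asset id is kept."""
    return {"id": asset_id}
```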

View File

@ -0,0 +1,36 @@
The Block Model
===============
A block has the following structure:
.. code-block:: json
{
"id": "<hash of block>",
"block": {
"timestamp": "<block-creation timestamp>",
"transactions": ["<list of transactions>"],
"node_pubkey": "<public/verifying key of the node creating the block>",
"voters": ["<list of federation nodes verifying keys>"]
},
"signature": "<signature of block>"
}
- ``id``: The hash of the serialized ``block`` (i.e. the ``timestamp``, ``transactions``, ``node_pubkey``, and ``voters``). This is also a database primary key; that's how we ensure that all blocks are unique.
- ``block``:
- ``timestamp``: The Unix time when the block was created. It's provided by the node that created the block. See `the page about timestamps <https://docs.bigchaindb.com/en/latest/timestamps.html>`_.
- ``transactions``: A list of the transactions included in the block.
- ``node_pubkey``: The public/verifying key of the node that created the block.
- ``voters``: A list of the verifying keys of federation nodes at the time the block was created.
It's the list of federation nodes which can cast a vote on this block.
This list can change from block to block, as nodes join and leave the federation.
- ``signature``: Cryptographic signature of the block by the node that created the block. (To create the signature, the node serializes the block contents and signs that with its signing key.)
Working with Blocks
-------------------
There's a **Block** class for creating and working with Block objects; look in `/bigchaindb/models.py <https://github.com/bigchaindb/bigchaindb/blob/master/bigchaindb/models.py>`_. (The link is to the latest version on the master branch on GitHub.)
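For illustration only, here is a rough Python sketch of how ``id`` and ``signature`` relate to the ``block`` body. It assumes sorted-key JSON serialization and SHA3-256 hashing, and uses a placeholder ``sign`` function instead of the real signing code; none of these names are part of the BigchainDB API.

.. code-block:: python

    import json
    import hashlib  # hashlib.sha3_256 needs Python 3.6+; older setups can use the pysha3 package

    def serialize(block_body):
        # Deterministic serialization: sorted keys, compact separators
        return json.dumps(block_body, sort_keys=True, separators=(',', ':'))

    def make_block(block_body, signing_key, sign):
        """Sketch only: `sign` stands in for an Ed25519 signing function."""
        serialized = serialize(block_body)
        return {
            'id': hashlib.sha3_256(serialized.encode()).hexdigest(),
            'block': block_body,
            'signature': sign(serialized, signing_key),
        }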

View File

@ -0,0 +1,133 @@
# Crypto-Conditions and Fulfillments
To create a transaction that transfers an asset to new owners, one must fulfill the asset's current conditions (crypto-conditions). The most basic kinds of conditions are:
* **A hashlock condition:** One can fulfill a hashlock condition by providing the correct “preimage” (similar to a password or secret phrase)
* **A simple signature condition:** One can fulfill a simple signature condition by providing a valid cryptographic signature (i.e. corresponding to the public key of an owner, usually)
* **A timeout condition:** Anyone can fulfill a timeout condition before the condition's expiry time. After the expiry time, nobody can fulfill the condition. Another way to say this is that a timeout condition's fulfillment is valid (TRUE) before the expiry time and invalid (FALSE) after the expiry time. Note: at the time of writing, timeout conditions are BigchainDB-specific (i.e. not part of the Interledger specs).
A more complex condition can be composed by using n of the above conditions as inputs to an m-of-n threshold condition (a logic gate which outputs TRUE iff m or more inputs are TRUE). If there are n inputs to a threshold condition:
* 1-of-n is the same as a logical OR of all the inputs
* n-of-n is the same as a logical AND of all the inputs
For example, one could create a condition requiring that m (of n) owners provide signatures before their asset can be transferred to new owners.
One can also put different weights on the inputs to a threshold condition, along with a threshold that the weighted-sum-of-inputs must pass for the output to be TRUE. Weights could be used, for example, to express the number of shares that someone owns in an asset.
The (single) output of a threshold condition can be used as one of the inputs of other threshold conditions. This means that one can combine threshold conditions to build complex logical expressions, e.g. (x OR y) AND (u OR v).
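To make the threshold logic concrete, here is a minimal sketch in plain Python (not using the `cryptoconditions` library; the function name is made up) that evaluates a weighted m-of-n gate and shows how gates can be composed:
```python
def threshold_gate(inputs, threshold):
    """inputs: list of (value, weight) pairs, where value is True or False.
    Returns True iff the weighted sum of TRUE inputs reaches the threshold."""
    return sum(weight for value, weight in inputs if value) >= threshold

# 2-of-3 with equal weights: any two TRUE inputs (e.g. signatures) suffice
print(threshold_gate([(True, 1), (False, 1), (True, 1)], threshold=2))  # True

# Gate outputs can feed other gates, e.g. (x OR y) AND (u OR v)
x, y, u, v = True, False, False, True
print(threshold_gate(
    [(threshold_gate([(x, 1), (y, 1)], threshold=1), 1),
     (threshold_gate([(u, 1), (v, 1)], threshold=1), 1)],
    threshold=2))  # True
```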
Aside: In BigchainDB, the output of an m-of-n threshold condition can be inverted on the way out, so an output that would have been TRUE would get changed to FALSE (and vice versa). This enables the creation of NOT, NOR and NAND gates. At the time of writing, this “inverted threshold condition” is BigchainDB-specific (i.e. not part of the Interledger specs). It should only be used in combination with a timeout condition.
When one creates a condition, one can calculate its fulfillment length (e.g. 96). The more complex the condition, the larger its fulfillment length will be. A BigchainDB federation can put an upper limit on the allowed fulfillment length, as a way of capping the complexity of conditions (and the computing time required to validate them).
If someone tries to make a condition where the output of a threshold condition feeds into the input of another “earlier” threshold condition (i.e. in a closed logical circuit), then their computer will take forever to calculate the (infinite) “condition URI”, at least in theory. In practice, their computer will run out of memory or their client software will timeout after a while.
Aside: In what follows, the list of `owners_after` (in a condition) is always who owned the asset at the time the transaction completed, but before the next transaction started. The list of `owners_before` (in a fulfillment) is always equal to the list of `owners_after` in that asset's previous transaction.
## (Crypto-) Conditions
### One New Owner
If there is only one _new owner_, the condition will be a simple signature condition (i.e. only one signature is required).
```json
{
"cid": "<condition index>",
"condition": {
"details": {
"bitmask": "<base16 int>",
"public_key": "<new owner public key>",
"signature": null,
"type": "fulfillment",
"type_id": "<base16 int>"
},
"uri": "<string>"
},
"owners_after": ["<new owner public key>"],
"amount": "<int>"
}
```
- **Condition header**:
- `cid`: Condition index so that we can reference this output as an input to another transaction. It also matches
the input `fid`, making this the condition to fulfill in order to spend the asset used as input with `fid`.
- `owners_after`: A list containing one item: the public key of the new owner.
- `amount`: The amount of shares for a divisible asset to send to the new owners.
- **Condition body**:
- `bitmask`: A set of bits representing the features required by the condition type.
- `public_key`: The new owner's public key.
- `type_id`: The fulfillment type ID; see the [ILP spec](https://interledger.org/five-bells-condition/spec.html).
- `uri`: Binary representation of the condition using only URL-safe characters.
### Multiple New Owners
If there are multiple _new owners_, they can create a ThresholdCondition requiring a signature from each of them in order
to spend the asset. For example:
```json
{
"cid": "<condition index>",
"condition": {
"details": {
"bitmask": 41,
"subfulfillments": [
{
"bitmask": 32,
"public_key": "<new owner 1 public key>",
"signature": null,
"type": "fulfillment",
"type_id": 4,
"weight": 1
},
{
"bitmask": 32,
"public_key": "<new owner 2 public key>",
"signature": null,
"type": "fulfillment",
"type_id": 4,
"weight": 1
}
],
"threshold": 2,
"type": "fulfillment",
"type_id": 2
},
"uri": "cc:2:29:ytNK3X6-bZsbF-nCGDTuopUIMi1HCyCkyPewm6oLI3o:206"},
"owners_after": [
"owner 1 public key>",
"owner 2 public key>"
]
}
```
- `subfulfillments`: a list of fulfillments
- `weight`: integer weight for each subfulfillment's contribution to the threshold
- `threshold`: the (weighted) number of valid subfulfillments required for the overall fulfillment to be valid
The `weight`s and `threshold` could be adjusted. For example, if the `threshold` was changed to 1 above, then only one of the new owners would have to provide a signature to spend the asset.
## Fulfillments
### One Current Owner
If there is only one _current owner_, the fulfillment will be a simple signature fulfillment (i.e. containing just one signature).
```json
{
"owners_before": ["<public key of the owner before the transaction happened>"],
"fid": 0,
"fulfillment": "cf:4:RxFzIE679tFBk8zwEgizhmTuciAylvTUwy6EL6ehddHFJOhK5F4IjwQ1xLu2oQK9iyRCZJdfWAefZVjTt3DeG5j2exqxpGliOPYseNkRAWEakqJ_UrCwgnj92dnFRAEE",
"input": {
"cid": 0,
"txid": "11b3e7d893cc5fdfcf1a1706809c7def290a3b10b0bef6525d10b024649c42d3"
}
}
```
- `fid`: Fulfillment index. It matches a `cid` in the conditions with a new _crypto-condition_ that the new owner
needs to fulfill to spend this asset.
- `owners_before`: A list of public keys of the owners before the transaction; in this case it has just one public key.
- `fulfillment`: A crypto-conditions URI that encodes cryptographic fulfillments such as signatures; see the [crypto-conditions spec](https://interledger.org/five-bells-condition/spec.html).
- `input`: Pointer to the asset and condition of a previous transaction
- `cid`: Condition index
- `txid`: Transaction id

View File

@ -0,0 +1,19 @@
Data Models
===========
BigchainDB stores all data in the underlying database as JSON documents (conceptually, at least). There are three main kinds:
1. Transactions, which contain digital assets, conditions, fulfillments and other things
2. Blocks
3. Votes
This section unpacks each one in turn.
.. toctree::
:maxdepth: 1
transaction-model
asset-model
crypto-conditions
block-model
vote-model

View File

@ -0,0 +1,45 @@
# The Transaction Model
A transaction has the following structure:
```json
{
"id": "<hash of transaction, excluding signatures (see explanation)>",
"version": "<version number of the transaction model>",
"transaction": {
"fulfillments": ["<list of fulfillments>"],
"conditions": ["<list of conditions>"],
"operation": "<string>",
"timestamp": "<timestamp from client>",
"asset": "<digital asset description (explained in the next section)>",
"metadata": {
"id": "<uuid>",
"data": "<any JSON document>"
}
}
}
```
Here's some explanation of the contents of a transaction:
- `id`: The hash of everything inside the serialized `transaction` body (i.e. `fulfillments`, `conditions`, `operation`, `timestamp` and `data`; see below), with one wrinkle: for each fulfillment in `fulfillments`, `fulfillment` is set to `null`. The `id` is also the database primary key.
- `version`: Version number of the transaction model, so that software can support different transaction models.
- `transaction`:
- `fulfillments`: List of fulfillments. Each _fulfillment_ contains a pointer to an unspent asset
and a _crypto fulfillment_ that satisfies a spending condition set on the unspent asset. A _fulfillment_
is usually a signature proving the ownership of the asset.
See the page about [Crypto-Conditions and Fulfillments](crypto-conditions.html).
- `conditions`: List of conditions. Each _condition_ is a _crypto-condition_ that needs to be fulfilled by a transfer transaction in order to transfer ownership to new owners.
See the page about [Crypto-Conditions and Fulfillments](crypto-conditions.html).
- `operation`: String representation of the operation being performed (currently either "CREATE", "TRANSFER" or "GENESIS"). It determines how the transaction should be validated.
- `timestamp`: The Unix time when the transaction was created. It's provided by the client. See the page about [timestamps in BigchainDB](../timestamps.html).
- `asset`: Definition of the digital asset. See next section.
- `metadata`:
- `id`: UUID version 4 (random) converted to a string of hex digits in standard form.
- `data`: Can be any JSON document. It may be empty in the case of a transfer transaction.
Later, when we get to the models for the block and the vote, we'll see that both include a signature (from the node which created it). You may wonder why transactions don't have signatures... The answer is that they do! They're just hidden inside the `fulfillment` string of each fulfillment. A creation transaction is signed by the node that created it. A transfer transaction is signed by whoever currently controls or owns it.
What gets signed? For each fulfillment in the transaction, the "fulfillment message" that gets signed includes the `operation`, `timestamp`, `data`, `version`, `id`, corresponding `condition`, and the fulfillment itself, except with its fulfillment string set to `null`. The computed signature goes into creating the `fulfillment` string of the fulfillment.
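As a rough sketch of the `id` computation described above (this is not the actual server code; sorted-key JSON serialization and SHA3-256 hashing are assumptions), note how each `fulfillment` string is set to `null` before hashing:
```python
import copy
import hashlib  # hashlib.sha3_256 needs Python 3.6+; older setups can use the pysha3 package
import json

def transaction_id(tx):
    """Sketch only: hash the transaction body with fulfillment strings nulled."""
    body = copy.deepcopy(tx['transaction'])
    for fulfillment in body['fulfillments']:
        fulfillment['fulfillment'] = None  # serialized as JSON null
    serialized = json.dumps(body, sort_keys=True, separators=(',', ':'))
    return hashlib.sha3_256(serialized.encode()).hexdigest()
```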
One other note: Currently, transactions contain only the public keys of asset-owners (i.e. who own an asset or who owned an asset in the past), inside the conditions and fulfillments. A transaction does _not_ contain the public key of the client (computer) which generated and sent it to a BigchainDB node. In fact, there's no need for a client to _have_ a public/private keypair. In the future, each client may also have a keypair, and it may have to sign each sent transaction (using its private key); see [Issue #347 on GitHub](https://github.com/bigchaindb/bigchaindb/issues/347). In practice, a person might think of their keypair as being both their "ownership-keypair" and their "client-keypair," but there is a difference, just like there's a difference between Joe and Joe's computer.

View File

@ -0,0 +1,20 @@
# The Vote Model
A vote has the following structure:
```json
{
"id": "<RethinkDB-generated ID for the vote>",
"node_pubkey": "<the public key of the voting node>",
"vote": {
"voting_for_block": "<id of the block the node is voting for>",
"previous_block": "<id of the block previous to this one>",
"is_block_valid": "<true|false>",
"invalid_reason": "<None|DOUBLE_SPEND|TRANSACTIONS_HASH_MISMATCH|NODES_PUBKEYS_MISMATCH",
"timestamp": "<Unix time when the vote was generated, provided by the voting node>"
},
"signature": "<signature of vote>",
}
```
Note: The `invalid_reason` was not being used and may be dropped in a future version of BigchainDB. See [Issue #217](https://github.com/bigchaindb/bigchaindb/issues/217) on GitHub.

View File

@ -0,0 +1,21 @@
# How BigchainDB is Decentralized
Decentralization means that no one owns or controls everything, and there is no single point of failure.
Ideally, each node in a BigchainDB cluster is owned and controlled by a different person or organization. Even if the cluster lives within one organization, it's still preferable to have each node controlled by a different person or subdivision.
We use the phrase "BigchainDB federation" (or just "federation") to refer to the set of people and/or organizations who run the nodes of a BigchainDB cluster. A federation requires some form of governance to make decisions such as membership and policies. The exact details of the governance process are determined by each federation, but it can be very decentralized (e.g. purely vote-based, where each node gets a vote, and there are no special leadership roles).
The actual data is decentralized in that it doesn't all get stored in one place. Each federation node stores the primary of one shard and replicas of some other shards. (A shard is a subset of the total set of documents.) Sharding and replication are handled by RethinkDB.
Every node has its own locally-stored list of the public keys of other federation members: the so-called keyring. There's no centrally-stored or centrally-shared keyring.
A federation can increase its decentralization (and its resilience) by increasing its jurisdictional diversity, geographic diversity, and other kinds of diversity. This idea is expanded upon in [the section on node diversity](diversity.html).
There's no node that has a long-term special position in the federation. All nodes run the same software and perform the same duties.
RethinkDB has an “admin” user which can't be deleted and which can make big changes to the database, such as dropping a table. Right now, that's a big security vulnerability, but we have plans to mitigate it by:
1. Locking down the admin user as much as possible.
2. Having all nodes inspect RethinkDB admin-type requests before acting on them. Requests can be checked against an evolving whitelist of allowed actions (voted on by federation nodes).
It's worth noting that the RethinkDB admin user can't transfer assets, even today. The only way to create a valid transfer transaction is to fulfill the current (crypto) conditions on the asset, and the admin user can't do that because the admin user doesn't have the necessary private keys (or preimages, in the case of hashlock conditions). They're not stored in the database.

View File

@ -0,0 +1,11 @@
# Kinds of Node Diversity
Steps should be taken to make it difficult for any one actor or event to control or damage “enough” of the nodes. (“Enough” is usually a quorum.) There are many kinds of diversity to consider, listed below. It may be quite difficult to have high diversity of all kinds.
1. **Jurisdictional diversity.** The nodes should be controlled by entities within multiple legal jurisdictions, so that it becomes difficult to use legal means to compel enough of them to do something.
2. **Geographic diversity.** The servers should be physically located at multiple geographic locations, so that it becomes difficult for a natural disaster (such as a flood or earthquake) to damage enough of them to cause problems.
3. **Hosting diversity.** The servers should be hosted by multiple hosting providers (e.g. Amazon Web Services, Microsoft Azure, Digital Ocean, Rackspace), so that it becomes difficult for one hosting provider to influence enough of the nodes.
4. **Operating system diversity.** The servers should use a variety of operating systems, so that a security bug in one OS can't be used to exploit enough of the nodes.
5. **Diversity in general.** In general, membership diversity (of all kinds) confers many advantages on a federation. For example, it provides the federation with a source of various ideas for addressing challenges.
Note: If all the nodes are running the same code, i.e. the same implementation of BigchainDB, then a bug in that code could be used to compromise all of the nodes. Ideally, there would be several different, well-maintained implementations of BigchainDB Server (e.g. one in Python, one in Go, etc.), so that a federation could also have a diversity of server implementations.

View File

@ -0,0 +1,19 @@
# How BigchainDB is Immutable / Tamper-Resistant
The word _immutable_ means "unchanging over time or unable to be changed." For example, the decimal digits of π are immutable (3.14159…).
The blockchain community often describes blockchains as “immutable.” If we interpret that word literally, it means that blockchain data is unchangeable or permanent, which is absurd. The data _can_ be changed. For example, a plague might drive humanity extinct; the data would then get corrupted over time due to water damage, thermal noise, and the general increase of entropy. In the case of Bitcoin, nothing so drastic is required: a 51% attack will suffice.
It's true that blockchain data is more difficult to change than usual: it's more tamper-resistant than a typical file system or database. Therefore, in the context of blockchains, we interpret the word “immutable” to mean tamper-resistant. (Linguists would say that the word “immutable” is a _term of art_ in the blockchain community.)
BigchainDB achieves strong tamper-resistance in the following ways:
1. **Replication.** All data is sharded and shards are replicated in several (different) places. The replication factor can be set by the federation. The higher the replication factor, the more difficult it becomes to change or delete all replicas.
2. **Internal watchdogs.** All nodes monitor all changes and if some unallowed change happens, then appropriate action is taken. For example, if a valid block is deleted, then it is put back.
3. **External watchdogs.** Federations may opt to have trusted third-parties to monitor and audit their data, looking for irregularities. For federations with publicly-readable data, the public can act as an auditor.
4. **Cryptographic signatures** are used throughout BigchainDB as a way to check if messages (transactions, blocks and votes) have been tampered with en route, and as a way to verify who signed the messages. Each block is signed by the node that created it. Each vote is signed by the node that cast it. A creation transaction is signed by the node that created it, although there are plans to improve that by adding signatures from the sending client and multiple nodes; see [Issue #347](https://github.com/bigchaindb/bigchaindb/issues/347). Transfer transactions can contain multiple fulfillments (one per asset transferred). Each fulfillment will typically contain one or more signatures from the owners (i.e. the owners before the transfer). Hashlock fulfillments are an exception; there's an open issue ([#339](https://github.com/bigchaindb/bigchaindb/issues/339)) to address that.
5. **Full or partial backups** of the database may be recorded from time to time, possibly on magnetic tape storage, other blockchains, printouts, etc.
6. **Strong security.** Node owners can adopt and enforce strong security policies.
7. **Node diversity.** Diversity makes it so that no one thing (e.g. natural disaster or operating system bug) can compromise enough of the nodes. See [the section on the kinds of node diversity](diversity.html).
Some of these things come "for free" as part of the BigchainDB software, and others require some extra effort from the federation and node owners.

View File

@ -0,0 +1,86 @@
BigchainDB Documentation
========================
`BigchainDB <https://www.bigchaindb.com/>`_ is a scalable blockchain database.
That is, it's a "big data" database with some blockchain characteristics added, including `decentralization <decentralized.html>`_,
`immutability <immutable.html>`_
and
`native support for assets <assets.html>`_.
You can read about the motivations, goals and high-level architecture in the `BigchainDB whitepaper <https://www.bigchaindb.com/whitepaper/>`_.
At a high level, one can communicate with a BigchainDB cluster (set of nodes) using the BigchainDB Client-Server HTTP API, or a wrapper for that API, such as the BigchainDB Python Driver. Each BigchainDB node runs BigchainDB Server and various other software. The `terminology page <terminology.html>`_ explains some of those terms in more detail.
.. raw:: html
<style media="screen" type="text/css">
.button {
border-top: 1px solid #96d1f8;
background: #65a9d7;
background: -webkit-gradient(linear, left top, left bottom, from(#3e779d), to(#65a9d7));
background: -webkit-linear-gradient(top, #3e779d, #65a9d7);
background: -moz-linear-gradient(top, #3e779d, #65a9d7);
background: -ms-linear-gradient(top, #3e779d, #65a9d7);
background: -o-linear-gradient(top, #3e779d, #65a9d7);
padding: 8.5px 17px;
-webkit-border-radius: 3px;
-moz-border-radius: 3px;
border-radius: 3px;
-webkit-box-shadow: rgba(0,0,0,1) 0 1px 0;
-moz-box-shadow: rgba(0,0,0,1) 0 1px 0;
box-shadow: rgba(0,0,0,1) 0 1px 0;
text-shadow: rgba(0,0,0,.4) 0 1px 0;
color: white;
font-size: 16px;
font-family: Arial, Sans-Serif;
text-decoration: none;
vertical-align: middle;
}
.button:hover {
border-top-color: #28597a;
background: #28597a;
color: #ccc;
}
.button:active {
border-top-color: #1b435e;
background: #1b435e;
}
a.button:visited {
color: white
}
.buttondiv {
margin-bottom: 1.5em;
}
</style>
<div class="buttondiv">
<a class="button" href="http://docs.bigchaindb.com/projects/server/en/latest/drivers-clients/http-client-server-api.html">HTTP API Docs</a>
</div>
<div class="buttondiv">
<a class="button" href="http://docs.bigchaindb.com/projects/py-driver/en/latest/index.html">Python Driver Docs</a>
</div>
<div class="buttondiv">
<a class="button" href="http://docs.bigchaindb.com/projects/server/en/latest/index.html">Server Docs</a>
</div>
<div class="buttondiv">
<a class="button" href="http://docs.bigchaindb.com/projects/server/en/latest/quickstart.html">Server Quickstart</a>
</div>
More About BigchainDB
---------------------
.. toctree::
:maxdepth: 1
BigchainDB Docs Home <self>
production-ready
terminology
decentralized
diversity
immutable
bft
assets
smart-contracts
transaction-concepts
timestamps
data-models/index

View File

@ -0,0 +1,7 @@
# Production-Ready?
BigchainDB is not production-ready. You can use it to build a prototype or proof-of-concept (POC); many people are already doing that.
BigchainDB Server is currently in version 0.X. ([The Releases page on GitHub](https://github.com/bigchaindb/bigchaindb/releases) has the exact version number.) Once we believe that BigchainDB Server is production-ready, we'll release version 1.0.
[The BigchainDB Roadmap](https://github.com/bigchaindb/org/blob/master/ROADMAP.md) will give you a sense of the things we intend to do with BigchainDB in the near term and the long term.

View File

@ -0,0 +1,22 @@
BigchainDB and Smart Contracts
==============================
One can store the source code of any smart contract (i.e. a computer program) in BigchainDB, but BigchainDB won't run arbitrary smart contracts.
BigchainDB will run the subset of smart contracts expressible using "crypto-conditions," a subset we like to call "simple contracts." Crypto-conditions are part of the `Interledger Protocol <https://interledger.org/>`_.
The owners of an asset can impose conditions on it that must be met for the asset to be transferred to new owners. Examples of possible conditions (crypto-conditions) include:
- The current owner must sign the transfer transaction (one which transfers ownership to new owners)
- Three out of five current owners must sign the transfer transaction
- (Shannon and Kelly) or Morgan must sign the transfer transaction
- Anyone who provides the secret password (technically, the preimage of a known hash) can create a valid transfer transaction
Crypto-conditions can be quite complex if-this-then-that type conditions, where the “this” can be a long boolean expression. Crypto-conditions can't include loops or recursion, so they will always run/check in finite time.
BigchainDB also supports a timeout condition which enables it to support a form of escrow.
.. note::
We used the word "owners" somewhat loosely above. A more accurate word might be fulfillers, signers, controllers, or transfer-enablers. See BigchainDB Server `issue #626 <https://github.com/bigchaindb/bigchaindb/issues/626>`_.

View File

@ -0,0 +1,22 @@
# Terminology
There is some specialized terminology associated with BigchainDB. To get started, you should at least know what we mean by a BigchainDB *node*, *cluster* and *federation*.
## Node
A **BigchainDB node** is a machine or set of closely-linked machines running RethinkDB Server, BigchainDB Server, and related software. (A "machine" might be a bare-metal server, a virtual machine or a container.) Each node is controlled by one person or organization.
## Cluster
A set of BigchainDB nodes can connect to each other to form a **cluster**. Each node in the cluster runs the same software. A cluster contains one logical RethinkDB datastore. A cluster may have additional machines to do things such as cluster monitoring.
## Federation
The people and organizations that run the nodes in a cluster belong to a **federation** (i.e. an organization in its own right). A federation must have some sort of governance structure to make decisions. If a cluster is run by a single company, then the federation is just that company.
**What's the Difference Between a Cluster and a Federation?**
A cluster is just a bunch of connected nodes. A federation is an organization which has a cluster, and where each node in the cluster has a different operator. Confusingly, we sometimes call a federation's cluster its "federation." You can probably tell what we mean from context.

View File

@ -0,0 +1,95 @@
# Timestamps in BigchainDB
Each transaction, block and vote has an associated timestamp. Interpreting those timestamps is tricky, hence the need for this section.
## Timestamp Sources & Accuracy
A transaction's timestamp is provided by the client which created and submitted the transaction to a BigchainDB node. A block's timestamp is provided by the BigchainDB node which created the block. A vote's timestamp is provided by the BigchainDB node which created the vote.
When a BigchainDB client or node needs a timestamp, it calls a BigchainDB utility function named `timestamp()`. There's a detailed explanation of how that function works below, but the short version is that it gets the [Unix time](https://en.wikipedia.org/wiki/Unix_time) from its system clock, rounded to the nearest second.
We can't say anything about the accuracy of the system clock on clients. Timestamps from clients are still potentially useful, however, in a statistical sense. We say more about that below.
We advise BigchainDB nodes to run special software (an "NTP daemon") to keep their system clock in sync with standard time servers. (NTP stands for [Network Time Protocol](https://en.wikipedia.org/wiki/Network_Time_Protocol).)
## Converting Timestamps to UTC
To convert a BigchainDB timestamp (a Unix time) to UTC, you need to know how the node providing the timestamp was set up. That's because different setups will report a different "Unix time" value around leap seconds! There's [a nice Red Hat Developer Blog post about the various setup options](http://developers.redhat.com/blog/2015/06/01/five-different-ways-handle-leap-seconds-ntp/). If you want more details, see [David Mills' pages about leap seconds, NTP, etc.](https://www.eecis.udel.edu/~mills/leap.html) (David Mills designed NTP.)
We advise BigchainDB nodes to run an NTP daemon with particular settings so that their timestamps are consistent.
If a timestamp comes from a node that's set up as we advise, it can be converted to UTC as follows:
1. Use a standard "Unix time to UTC" converter to get a UTC timestamp.
2. Is the UTC timestamp a leap second, or the second before/after a leap second? There's [a list of all the leap seconds on Wikipedia](https://en.wikipedia.org/wiki/Leap_second).
3. If no, then you are done.
4. If yes, then it might not be possible to convert it to a single UTC timestamp. Even if it can't be converted to a single UTC timestamp, it _can_ be converted to a list of two possible UTC timestamps.
Showing how to do that is beyond the scope of this documentation.
In all likelihood, you will never have to worry about leap seconds because they are very rare.
(There were only 26 between 1972 and the end of 2015.)
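Away from leap seconds, step 1 is straightforward. For example, in Python (a sketch that ignores the leap-second edge cases discussed above), using the example timestamp `1477578978`:
```python
from datetime import datetime, timezone

unix_timestamp = "1477578978"  # a BigchainDB timestamp: Unix time as a string
utc = datetime.fromtimestamp(int(unix_timestamp), tz=timezone.utc)
print(utc.isoformat())  # 2016-10-27T14:36:18+00:00
```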
## Calculating Elapsed Time Between Two Timestamps
There's another gotcha with (Unix time) timestamps: you can't calculate the real-world elapsed time between two timestamps (correctly) by subtracting the smaller timestamp from the larger one. The result won't include any of the leap seconds that occurred between the two timestamps. You could look up how many leap seconds happened between the two timestamps and add that to the result. There are many library functions for working with timestamps; those are beyond the scope of this documentation.
## Avoid Doing Transactions Around Leap Seconds
Because of the ambiguity and confusion that arises with Unix time around leap seconds, we advise users to avoid creating transactions around leap seconds.
## Interpreting Sets of Timestamps
You can look at many timestamps to get a statistical sense of when something happened. For example, a transaction in a decided-valid block has many associated timestamps:
* its own timestamp
* the timestamps of the other transactions in the block; there could be as many as 999 of those
* the timestamp of the block
* the timestamps of all the votes on the block
Those timestamps come from many sources, so you can look at all of them to get some statistical sense of when the transaction "actually happened." The timestamp of the block should always be after the timestamp of the transaction, and the timestamp of the votes should always be after the timestamp of the block.
## How BigchainDB Uses Timestamps
BigchainDB _doesn't_ use timestamps to determine the order of transactions or blocks. In particular, the order of blocks is determined by RethinkDB's changefeed on the bigchain table.
BigchainDB does use timestamps for some things. It uses them to determine if a transaction has been waiting in the backlog for too long (i.e. because the node assigned to it hasn't handled it yet). It also uses timestamps to determine the status of timeout conditions (used by escrow).
## Including Trusted Timestamps
If you want to create a transaction payload with a trusted timestamp, you can.
One way to do that would be to send a payload to a trusted timestamping service. They will send back a timestamp, a signature, and their public key. They should also explain how you can verify the signature. You can then include the original payload, the timestamp, the signature, and the service's public key in your transaction. That way, anyone with the verification instructions can verify that the original payload was signed by the trusted timestamping service.
## How the timestamp() Function Works
BigchainDB has a utility function named `timestamp()` which amounts to:
```python
import time

def timestamp():
    return str(round(time.time()))
```
In other words, it calls the `time()` function in Python's `time` module, [rounds](https://docs.python.org/3/library/functions.html#round) that to the nearest integer, and converts the result to a string.
It rounds the output of `time.time()` to the nearest second because, according to [the Python documentation for `time.time()`](https://docs.python.org/3.4/library/time.html#time.time), "...not all systems provide time with a better precision than 1 second."
How does `time.time()` work? If you look in the C source code, it calls `floattime()` and `floattime()` calls [clock_gettime()](https://www.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html), if it's available.
```text
ret = clock_gettime(CLOCK_REALTIME, &tp);
```
With `CLOCK_REALTIME` as the first argument, it returns the "Unix time." ("Unix time" is in quotes because its value around leap seconds depends on how the system is set up; see above.)
## Why Not Use UTC, TAI or Some Other Time that Has Unambiguous Timestamps for Leap Seconds?
It would be nice to use UTC or TAI timestamps, but unfortunately there's no commonly-available, standard way to get always-accurate UTC or TAI timestamps from the operating system on typical computers today (i.e. accurate around leap seconds).
There _are_ commonly-available, standard ways to get the "Unix time," such as the clock_gettime() function available in C. That's what we use (indirectly via Python). ("Unix time" is in quotes because its value around leap seconds depends on how the system is set up; see above.)
The Unix-time-based timestamps we use are only ambiguous circa leap seconds, and those are very rare. Even for those timestamps, the extra uncertainty is only one second, and that's not bad considering that we only report timestamps to a precision of one second in the first place. All other timestamps can be converted to UTC with no ambiguity.

View File

@ -0,0 +1,29 @@
# Transaction Concepts
In BigchainDB, _Transactions_ are used to register, issue, create or transfer things (e.g. assets).
Transactions are the most basic kind of record stored by BigchainDB. There are two kinds: creation transactions and transfer transactions.
A _creation transaction_ can be used to register, issue, create or otherwise initiate the history of a single thing (or asset) in BigchainDB. For example, one might register an identity or a creative work. The things are often called "assets" but they might not be literal assets.
Currently, BigchainDB only supports indivisible assets. You can't split an asset apart into multiple assets, nor can you combine several assets together into one. [Issue #129](https://github.com/bigchaindb/bigchaindb/issues/129) is an enhancement proposal to support divisible assets.
A creation transaction also establishes the conditions that must be met to transfer the asset. For example, there may be a condition that any transfer must be signed (cryptographically) by the signing/private key associated with a given verifying/public key. More sophisticated conditions are possible. BigchainDB's conditions are based on the crypto-conditions of the [Interledger Protocol (ILP)](https://interledger.org/).
A _transfer transaction_ can transfer an asset by fulfilling the current conditions on the asset. It can also specify new transfer conditions.
Today, every transaction contains one fulfillment-condition pair. The fulfillment in a transfer transaction must fulfill a condition in a previous transaction.
When a node is asked to check if a transaction is valid, it checks several things. Some things it checks are:
* Are all the fulfillments valid? (Do they correctly satisfy the conditions they claim to satisfy?)
* If it's a creation transaction, is the asset valid?
* If it's a transfer transaction:
* Is it trying to fulfill a condition in a nonexistent transaction?
* Is it trying to fulfill a condition that's not in a valid transaction? (It's okay if the condition is in a transaction in an invalid block; those transactions are ignored. Transactions in the backlog or undecided blocks are not ignored.)
* Is it trying to fulfill a condition that has already been fulfilled, or that some other pending transaction (in the backlog or an undecided block) also aims to fulfill?
* Is the asset ID in the transaction the same as the asset ID in all transactions whose conditions are being fulfilled?
If you're curious about the details of transaction validation, the code is in the `validate` method of the `Transaction` class, in `bigchaindb/models.py` (at the time of writing).
Note: The check to see if the transaction ID is equal to the hash of the transaction body is actually done whenever the transaction is converted from a Python dict to a Transaction object, which must be done before the `validate` method can be called (since it's called on a Transaction object).

View File

@ -0,0 +1,5 @@
Sphinx>=1.3.5
recommonmark>=0.4.0
sphinx-rtd-theme>=0.1.9
sphinxcontrib-napoleon>=0.4.4
sphinxcontrib-httpdomain>=1.5.0

View File

@ -126,7 +126,7 @@ BRANCH="master"
WHAT_TO_DEPLOY="servers"
SSH_KEY_NAME="not-set-yet"
USE_KEYPAIRS_FILE=False
IMAGE_ID="ami-72c33e1d"
IMAGE_ID="ami-9c09f0f3"
INSTANCE_TYPE="t2.medium"
SECURITY_GROUP="bigchaindb"
USING_EBS=True

View File

@ -1,7 +1,7 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# BigchainDB documentation build configuration file, created by
# BigchainDB Server documentation build configuration file, created by
# sphinx-quickstart on Tue Jan 19 14:42:58 2016.
#
# This file is execfile()d with the current directory set to its
@ -32,7 +32,7 @@ import sphinx_rtd_theme
# get version
_version = {}
with open('../../bigchaindb/version.py') as fp:
with open('../../../bigchaindb/version.py') as fp:
exec(fp.read(), _version)
@ -41,7 +41,7 @@ extensions = [
'sphinx.ext.intersphinx',
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'sphinxcontrib.napoleon',
'sphinx.ext.napoleon',
'sphinxcontrib.httpdomain',
]

View File

@ -0,0 +1,263 @@
The HTTP Client-Server API
==========================
.. note::
The HTTP client-server API is currently quite rudimentary. For example, there is no ability to do complex queries using the HTTP API. We plan to add querying capabilities in the future.
When you start BigchainDB using `bigchaindb start`, an HTTP API is exposed at the address stored in the BigchainDB node configuration settings. The default is:
`http://localhost:9984/api/v1/ <http://localhost:9984/api/v1/>`_
but that address can be changed by changing the "API endpoint" configuration setting (e.g. in a local config file). There's more information about setting the API endpoint in :doc:`the section about BigchainDB Configuration Settings <../server-reference/configuration>`.
There are other configuration settings related to the web server (serving the HTTP API). In particular, the default is for the web server socket to bind to ``localhost:9984`` but that can be changed (e.g. to ``0.0.0.0:9984``). For more details, see the "server" settings ("bind", "workers" and "threads") in :doc:`the section about BigchainDB Configuration Settings <../server-reference/configuration>`.
API Root
--------
If you send an HTTP GET request to e.g. ``http://localhost:9984`` (with no ``/api/v1/`` on the end), then you should get an HTTP response with something like the following in the body:
.. code-block:: json
{
"api_endpoint": "http://localhost:9984/api/v1",
"keyring": [
"6qHyZew94NMmUTYyHnkZsB8cxJYuRNEiEpXHe1ih9QX3",
"AdDuyrTyjrDt935YnFu4VBCVDhHtY2Y6rcy7x2TFeiRi"
],
"public_key": "AiygKSRhZWTxxYT4AfgKoTG4TZAoPsWoEt6C6bLq4jJR",
"software": "BigchainDB",
"version": "0.6.0"
}
POST /transactions/
-------------------
.. http:post:: /transactions/
Push a new transaction.
Note: The posted transaction should be a valid `transaction <https://bigchaindb.readthedocs.io/en/latest/data-models/transaction-model.html>`_. The steps to build a valid transaction are beyond the scope of this page. One would normally use a driver such as the `BigchainDB Python Driver <https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_ to build a valid transaction.
**Example request**:
.. sourcecode:: http
POST /transactions/ HTTP/1.1
Host: example.com
Content-Type: application/json
{
"transaction": {
"conditions": [
{
"cid": 0,
"condition": {
"uri": "cc:4:20:GG-pi3CeIlySZhQoJVBh9O23PzrOuhnYI7OHqIbHjkk:96",
"details": {
"signature": null,
"type": "fulfillment",
"type_id": 4,
"bitmask": 32,
"public_key": "2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
}
},
"amount": 1,
"owners_after": [
"2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
]
}
],
"operation": "CREATE",
"asset": {
"divisible": false,
"updatable": false,
"data": null,
"id": "aebeab22-e672-4d3b-a187-bde5fda6533d",
"refillable": false
},
"metadata": null,
"timestamp": "1477578978",
"fulfillments": [
{
"fid": 0,
"input": null,
"fulfillment": "cf:4:GG-pi3CeIlySZhQoJVBh9O23PzrOuhnYI7OHqIbHjkn2VnQaEWvecO1x82Qr2Va_JjFywLKIOEV1Ob9Ofkeln2K89ny2mB-s7RLNvYAVzWNiQnp18_nQEUsvwACEXTYJ",
"owners_before": [
"2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
]
}
]
},
"id": "2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e",
"version": 1
}
**Example response**:
.. sourcecode:: http
HTTP/1.1 201 Created
Content-Type: application/json
{
"id": "2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e",
"version": 1,
"transaction": {
"conditions": [
{
"amount": 1,
"condition": {
"uri": "cc:4:20:GG-pi3CeIlySZhQoJVBh9O23PzrOuhnYI7OHqIbHjkk:96",
"details": {
"signature": null,
"type_id": 4,
"type": "fulfillment",
"bitmask": 32,
"public_key": "2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
}
},
"owners_after": [
"2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
],
"cid": 0
}
],
"fulfillments": [
{
"input": null,
"fulfillment": "cf:4:GG-pi3CeIlySZhQoJVBh9O23PzrOuhnYI7OHqIbHjkn2VnQaEWvecO1x82Qr2Va_JjFywLKIOEV1Ob9Ofkeln2K89ny2mB-s7RLNvYAVzWNiQnp18_nQEUsvwACEXTYJ",
"fid": 0,
"owners_before": [
"2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
]
}
],
"operation": "CREATE",
"timestamp": "1477578978",
"asset": {
"updatable": false,
"refillable": false,
"divisible": false,
"data": null,
"id": "aebeab22-e672-4d3b-a187-bde5fda6533d"
},
"metadata": null
}
}
:statuscode 201: A new transaction was created.
:statuscode 400: The transaction was invalid and not created.
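For illustration, a fully prepared and signed transaction (built e.g. with the BigchainDB Python Driver) could be pushed from Python using the ``requests`` package; the package choice and the base URL are assumptions, not part of the API.

.. code-block:: python

    import requests

    BASE_URL = 'http://localhost:9984/api/v1'  # adjust to your node's API endpoint

    tx = {}  # replace with a fully prepared and signed transaction (see the example request above)
    response = requests.post(BASE_URL + '/transactions/', json=tx)
    if response.status_code == 201:
        print('transaction created:', response.json()['id'])
    else:
        print('transaction rejected with status code', response.status_code)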
GET /transactions/{tx_id}/status
--------------------------------
.. http:get:: /transactions/{tx_id}/status
Get the status of the transaction with the ID ``tx_id``, if a transaction with that ``tx_id`` exists.
The possible status values are ``backlog``, ``undecided``, ``valid`` or ``invalid``.
:param tx_id: transaction ID
:type tx_id: hex string
**Example request**:
.. sourcecode:: http
GET /transactions/7ad5a4b83bc8c70c4fd7420ff3c60693ab8e6d0e3124378ca69ed5acd2578792/status HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{
"status": "valid"
}
:statuscode 200: A transaction with that ID was found and the status is returned.
:statuscode 404: A transaction with that ID was not found.
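A client might poll this endpoint until the transaction is decided. A minimal sketch, again assuming the ``requests`` package and the default local endpoint:

.. code-block:: python

    import time
    import requests

    BASE_URL = 'http://localhost:9984/api/v1'  # adjust to your node's API endpoint
    tx_id = '7ad5a4b83bc8c70c4fd7420ff3c60693ab8e6d0e3124378ca69ed5acd2578792'

    while True:
        response = requests.get('{}/transactions/{}/status'.format(BASE_URL, tx_id))
        if response.status_code == 404:
            print('transaction not found')
            break
        status = response.json()['status']
        print('status:', status)
        if status in ('valid', 'invalid'):
            break
        time.sleep(1)  # still in the backlog or an undecided block; check again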
GET /transactions/{tx_id}
-------------------------
.. http:get:: /transactions/{tx_id}
Get the transaction with the ID ``tx_id``.
This endpoint only returns a transaction if it is in a ``VALID`` or ``UNDECIDED`` block on ``bigchain``.
:param tx_id: transaction ID
:type tx_id: hex string
**Example request**:
.. sourcecode:: http
GET /transactions/2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{
"transaction": {
"conditions": [
{
"cid": 0,
"condition": {
"uri": "cc:4:20:GG-pi3CeIlySZhQoJVBh9O23PzrOuhnYI7OHqIbHjkk:96",
"details": {
"signature": null,
"type": "fulfillment",
"type_id": 4,
"bitmask": 32,
"public_key": "2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
}
},
"amount": 1,
"owners_after": [
"2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
]
}
],
"operation": "CREATE",
"asset": {
"divisible": false,
"updatable": false,
"data": null,
"id": "aebeab22-e672-4d3b-a187-bde5fda6533d",
"refillable": false
},
"metadata": null,
"timestamp": "1477578978",
"fulfillments": [
{
"fid": 0,
"input": null,
"fulfillment": "cf:4:GG-pi3CeIlySZhQoJVBh9O23PzrOuhnYI7OHqIbHjkn2VnQaEWvecO1x82Qr2Va_JjFywLKIOEV1Ob9Ofkeln2K89ny2mB-s7RLNvYAVzWNiQnp18_nQEUsvwACEXTYJ",
"owners_before": [
"2ePYHfV3yS3xTxF9EE3Xjo8zPwq2RmLPFAJGQqQKc3j6"
]
}
]
},
"id": "2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e",
"version": 1
}
:statuscode 200: A transaction with that ID was found.
:statuscode 404: A transaction with that ID was not found.

View File

@ -4,7 +4,6 @@ Drivers & Clients
.. toctree::
:maxdepth: 1
python-server-api-examples
http-client-server-api
python-driver
example-apps

View File

@ -34,4 +34,4 @@ bigchaindb start
That's it!
For now, you can get a good sense of how to work with BigchainDB Server by going through [the examples in the section on the Python Server API](drivers-clients/python-server-api-examples.html).
Next Steps: You could build valid transactions and push them to your running BigchainDB Server using the [BigchainDB Python Driver](https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html).

Some files were not shown because too many files have changed in this diff.