Merge pull request #1 from Sangatdas/main

Initial Migration
This commit is contained in:
Jürgen Eckel 2022-01-19 09:30:55 +01:00 committed by GitHub
commit c7086a1982
454 changed files with 42542 additions and 723 deletions

14
.ci/entrypoint.sh Executable file

@@ -0,0 +1,14 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
if [[ ${PLANETMINT_CI_ABCI} == 'enable' ]]; then
sleep 3600
else
planetmint -l DEBUG start
fi
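A minimal usage sketch for this entrypoint (assumptions: it is invoked from the repository root and, outside the ABCI case, a planetmint installation is on the PATH; in CI it runs inside the containers built further below):

    # ABCI mode: CI only needs the container to stay alive, so the script sleeps for an hour
    PLANETMINT_CI_ABCI=enable ./.ci/entrypoint.sh
    # default mode: start the node with DEBUG logging
    ./.ci/entrypoint.sh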

12
.ci/travis-after-success.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
if [[ -z ${TOXENV} ]] && [[ ${PLANETMINT_CI_ABCI} != 'enable' ]] && [[ ${PLANETMINT_ACCEPTANCE_TEST} != 'enable' ]]; then
codecov -v -f htmlcov/coverage.xml
fi

16
.ci/travis-before-install.sh Executable file

@@ -0,0 +1,16 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
if [[ -z ${TOXENV} ]]; then
sudo apt-get update
sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
sudo rm /usr/local/bin/docker-compose
curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
chmod +x docker-compose
sudo mv docker-compose /usr/local/bin
fi
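A quick sanity check after this step (illustrative only; the pinned version comes from DOCKER_COMPOSE_VERSION in .travis.yml further below):

    docker-compose --version   # should report the pinned release, 1.29.2 in this commit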

18
.ci/travis-before-script.sh Executable file

@@ -0,0 +1,18 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
if [[ -z ${TOXENV} ]]; then
if [[ ${PLANETMINT_CI_ABCI} == 'enable' ]]; then
docker-compose up -d planetmint
else
docker-compose up -d bdb
fi
fi

21
.ci/travis-install.sh Executable file

@@ -0,0 +1,21 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
pip install --upgrade pip
if [[ -n ${TOXENV} ]]; then
pip install --upgrade tox
elif [[ ${PLANETMINT_CI_ABCI} == 'enable' ]]; then
docker-compose build --no-cache --build-arg abci_status=enable planetmint
elif [[ $PLANETMINT_INTEGRATION_TEST == 'enable' ]]; then
docker-compose build planetmint python-driver
else
docker-compose build --no-cache planetmint
pip install --upgrade codecov
fi

18
.ci/travis_script.sh Executable file

@@ -0,0 +1,18 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
if [[ -n ${TOXENV} ]]; then
tox -e ${TOXENV}
elif [[ ${PLANETMINT_CI_ABCI} == 'enable' ]]; then
docker-compose exec planetmint pytest -v -m abci
elif [[ ${PLANETMINT_ACCEPTANCE_TEST} == 'enable' ]]; then
./run-acceptance-test.sh
else
docker-compose exec planetmint pytest -v --cov=planetmint --cov-report xml:htmlcov/coverage.xml
fi
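Taken together, the .ci scripts replay one entry of the Travis matrix. A sketch of running the acceptance-test job locally (assumes docker-compose and the repository's compose file are available, which is what the scripts above rely on):

    export PLANETMINT_ACCEPTANCE_TEST=enable
    ./.ci/travis-install.sh        # else-branch: docker-compose build --no-cache planetmint, plus codecov
    ./.ci/travis-before-script.sh  # brings up the bdb service in the background
    ./.ci/travis_script.sh         # dispatches to ./run-acceptance-test.sh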

9
.dockerignore Normal file

@@ -0,0 +1,9 @@
.cache/
.coverage
.eggs/
.git/
.gitignore
.ropeproject/
.travis.yml
Planetmint.egg-info/
dist/

14
.github/CONTRIBUTING.md vendored Normal file

@@ -0,0 +1,14 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# How to Contribute to the Planetmint Project
There are many ways you can contribute to the Planetmint project, some very easy and others more involved.
All of that is documented elsewhere: go to the ["Contributing to Planetmint" docs on ReadTheDocs](https://docs.bigchaindb.com/projects/contributing/en/latest/index.html).
Note: GitHub automatically links to this file (`.github/CONTRIBUTING.md`) when a contributor creates a new issue or pull request, so you shouldn't delete it. Just use it to point people to full and proper help elsewhere.

20
.github/ISSUE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,20 @@
# Do you want to:
- make a bug report? Then read below about what should go in a bug report.
- make a feature request or proposal? Then read [the page about how to make a feature request or proposal](https://docs.bigchaindb.com/projects/contributing/en/latest/ways-can-contribute/make-a-feature-request-or-proposal.html).
- ask a question about Planetmint? Then [go to Gitter](https://gitter.im/bigchaindb/bigchaindb) (our chat room) and ask it there.
- share your neat idea or realization? Then [go to Gitter](https://gitter.im/bigchaindb/bigchaindb) (our chat room) and share it there.
# What Should Go in a Bug Report
- What computer are you on (hardware)?
- What operating system are you using, including the version? e.g. Ubuntu 14.04? Fedora 23?
- What version of Planetmint software were you using? Is that the latest version?
- What, exactly, did you do to get to the point where you got stuck? Describe all the steps so we can get there too. Show screenshots or copy-and-paste text to GitHub.
- Show what actually happened.
- Say what you tried to do to resolve the problem.
- Provide details to convince us that it matters to you. Is it for a school project, a job, a contract with a deadline, a child who needs it for Christmas?
We will do our best but please understand that we don't have time to help everyone, especially people who don't care to help us help them. "It doesn't work." is not going to get any reaction from us. We need _details_.
Tip: Use GitHub code block formatting to make code render nicely on GitHub. To do that, put three backticks followed by a string to set the type of code (e.g. `Python`), then the code, and then end with three backticks. There's more information about [inserting code blocks](https://help.github.com/articles/creating-and-highlighting-code-blocks/) in the GitHub help pages.
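For example, a hypothetical issue snippet using that formatting (the command is the one from .ci/entrypoint.sh above; the "bash" tag tells GitHub which syntax highlighter to use):

    ```bash
    planetmint -l DEBUG start
    ```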

30
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,30 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Logs or terminal output**
If applicable, add textual content to help explain your problem.
**Desktop (please complete the following information):**
- Distribution: [e.g. Ubuntu 18.04]
- Bigchaindb version:
- Tendermint version:
- Mongodb version:
- Python full version: [e.g. Python 3.6.6]
**Additional context**
Add any other context about the problem here.

18
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,18 @@
Make sure the title of this pull request has the form:
**Problem: A short statement of the problem.**
## Solution
A short statement about how this PR solves the **Problem**.
## Issues Resolved
What issues does this PR resolve, if any? Please include lines like the following (i.e. "Resolves #NNNN"), so that when this PR gets merged, GitHub will automatically close those issues.
Resolves #NNNN
Resolves #MMMM
## BEPs Implemented
What [BEPs](https://github.com/bigchaindb/beps) does this pull request implement, if any?

92
.gitignore vendored

@@ -6,8 +6,24 @@ __pycache__/
# C extensions
*.so
# Swap -- copypasta from https://github.com/github/gitignore/blob/master/Global/Vim.gitignore
[._]*.s[a-v][a-z]
[._]*.sw[a-p]
[._]s[a-v][a-z]
[._]sw[a-p]
# Session
Session.vim
# Temporary
.netrwhist
*~
# Auto-generated tag files
tags
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
@@ -19,13 +35,9 @@ lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
@@ -40,16 +52,14 @@ pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
.pytest_cache/
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
@@ -58,72 +68,32 @@ coverage.xml
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
docs/build/
# PyBuilder
target/
# Jupyter Notebook
# Ipython Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# Just in time documentation
docs/server/source/http-samples
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Terraform state files
# See https://stackoverflow.com/a/41482391
terraform.tfstate
terraform.tfstate.backup
# Celery stuff
celerybeat-schedule
celerybeat.pid
# tendermint data
tmdata/data
network/*/data
# SageMath parsed files
*.sage.py
# Docs that are fetched at build time
docs/contributing/source/cross-project-policies/*.md
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
.DS_Store

25
.pre-commit-config.yaml Normal file

@@ -0,0 +1,25 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
repos:
  - repo: git://github.com/pre-commit/pre-commit-hooks
    sha: v1.1.1
    hooks:
      - id: trailing-whitespace
        args: ['--no-markdown-linebreak-ext']
      - id: check-merge-conflict
      - id: debug-statements
      - id: check-added-large-files
      - id: flake8
  - repo: git://github.com/chewse/pre-commit-mirrors-pydocstyle
    sha: v2.1.1
    hooks:
      - id: pydocstyle
        # list of error codes to check, see: http://www.pydocstyle.org/en/latest/error_codes.html
        args: ['--select=D204,D201,D209,D210,D212,D300,D403']
        # negate the exclude to only apply the hooks to the 'bigchaindb', 'tests' and 'acceptance' folders
        exclude: '^(?!bigchaindb/)(?!tests/)(?!acceptance/)'
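A usage sketch for this configuration (assumes the pre-commit tool is installed separately, e.g. via pip; nothing in this commit installs it):

    pip install pre-commit       # local setup, illustrative only
    pre-commit install           # registers the git hook in .git/hooks
    pre-commit run --all-files   # runs the hooks declared above against the whole tree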

11
.readthedocs.yml Normal file

@@ -0,0 +1,11 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
build:
  image: latest

python:
  version: 3.6
  pip_install: true

77
.travis.yml Normal file

@@ -0,0 +1,77 @@
# Copyright © 2020, 2021 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
sudo: required
dist: focal

services:
  - docker

language: python
cache: pip

python:
  - 3.6
  - 3.7
  - 3.8

env:
  global:
    - DOCKER_COMPOSE_VERSION=1.29.2
  matrix:
    - TOXENV=flake8
    - TOXENV=docsroot

matrix:
  fast_finish: true
  include:
    - python: 3.6
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
    - python: 3.6
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
        - PLANETMINT_CI_ABCI=enable
    - python: 3.6
      env:
        - PLANETMINT_ACCEPTANCE_TEST=enable
    - python: 3.7
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
    - python: 3.7
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
        - PLANETMINT_CI_ABCI=enable
    - python: 3.7
      env:
        - PLANETMINT_ACCEPTANCE_TEST=enable
    - python: 3.8
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
    - python: 3.8
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
        - PLANETMINT_CI_ABCI=enable
    - python: 3.8
      env:
        - PLANETMINT_ACCEPTANCE_TEST=enable

before_install: sudo .ci/travis-before-install.sh
install: .ci/travis-install.sh
before_script: .ci/travis-before-script.sh
script: .ci/travis_script.sh
after_success: .ci/travis-after-success.sh

1162
CHANGELOG.md Normal file

File diff suppressed because it is too large

57
CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,57 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of
fostering an open and welcoming community, we pledge to respect all people who
contribute to the project.
We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, nationality, or species--no picking on Wrigley for being a buffalo!
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing other's private information, such as physical or electronic
addresses, without explicit permission
* Deliberate intimidation
* Other unethical or unprofessional conduct
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
By adopting this Code of Conduct, project maintainers commit themselves to
fairly and consistently applying these principles to every aspect of managing
this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior directed at yourself or another community member may be
reported by contacting a project maintainer at [contact@bigchaindb.com](mailto:contact@bigchaindb.com). All
complaints will be reviewed and investigated and will result in a response that
is appropriate to the circumstances. Maintainers are
obligated to maintain confidentiality with regard to the reporter of an
incident.
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 1.3.0, available at
[http://contributor-covenant.org/version/1/3/0/][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/3/0/

24
Dockerfile Normal file

@@ -0,0 +1,24 @@
FROM python:3.6
LABEL maintainer "contact@ipdb.global"
RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN apt-get -qq update \
&& apt-get -y upgrade \
&& apt-get install -y jq \
&& pip install . \
&& apt-get autoremove \
&& apt-get clean
VOLUME ["/data", "/certs"]
ENV PYTHONUNBUFFERED 0
ENV PLANETMINT_CONFIG_PATH /data/.bigchaindb
ENV PLANETMINT_SERVER_BIND 0.0.0.0:9984
ENV PLANETMINT_WSSERVER_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_SCHEME ws
ENV PLANETMINT_WSSERVER_ADVERTISED_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_ADVERTISED_SCHEME ws
ENV PLANETMINT_WSSERVER_ADVERTISED_PORT 9985
ENTRYPOINT ["bigchaindb"]
CMD ["start"]
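A sketch of building and running this image locally (the image tag and host port mapping are assumptions; the container port comes from PLANETMINT_SERVER_BIND above):

    docker build -t planetmint:latest .
    docker run -d -p 9984:9984 planetmint:latest   # ENTRYPOINT/CMD resolve to "bigchaindb start"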

51
Dockerfile-all-in-one Normal file

@@ -0,0 +1,51 @@
FROM alpine:3.9
LABEL maintainer "contact@ipdb.global"
ARG TM_VERSION=v0.31.5
RUN mkdir -p /usr/src/app
ENV HOME /root
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN apk --update add sudo bash \
&& apk --update add python3 openssl ca-certificates git \
&& apk --update add --virtual build-dependencies python3-dev \
libffi-dev openssl-dev build-base jq \
&& apk add --no-cache libstdc++ dpkg gnupg \
&& pip3 install --upgrade pip cffi \
&& pip install -e . \
&& apk del build-dependencies \
&& rm -f /var/cache/apk/*
# Install mongodb and monit
RUN apk --update add mongodb monit
# Install Tendermint
RUN wget https://github.com/tendermint/tendermint/releases/download/${TM_VERSION}/tendermint_${TM_VERSION}_linux_amd64.zip \
&& unzip tendermint_${TM_VERSION}_linux_amd64.zip \
&& mv tendermint /usr/local/bin/ \
&& rm tendermint_${TM_VERSION}_linux_amd64.zip
ENV TMHOME=/tendermint
# Set permissions required for mongodb
RUN mkdir -p /data/db /data/configdb \
&& chown -R mongodb:mongodb /data/db /data/configdb
# Planetmint environment variables
ENV PLANETMINT_DATABASE_PORT 27017
ENV PLANETMINT_DATABASE_BACKEND localmongodb
ENV PLANETMINT_SERVER_BIND 0.0.0.0:9984
ENV PLANETMINT_WSSERVER_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_SCHEME ws
ENV PLANETMINT_WSSERVER_ADVERTISED_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_ADVERTISED_SCHEME ws
ENV PLANETMINT_TENDERMINT_PORT 26657
VOLUME /data/db /data/configdb /tendermint
EXPOSE 27017 28017 9984 9985 26656 26657 26658
WORKDIR $HOME
ENTRYPOINT ["/usr/src/app/pkg/scripts/all-in-one.bash"]
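A local run sketch for the all-in-one image (the tag and host-port choices are assumptions; the container ports are the ones EXPOSEd above):

    docker build -f Dockerfile-all-in-one -t planetmint-aio .
    docker run -d -p 9984:9984 -p 9985:9985 -p 26657:26657 -p 27017:27017 planetmint-aio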

30
Dockerfile-alpine Normal file

@@ -0,0 +1,30 @@
FROM alpine:latest
LABEL maintainer "contact@ipdb.global"
RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN apk --update add sudo \
&& apk --update add python3 py-pip openssl ca-certificates git \
&& apk --update add --virtual build-dependencies python3-dev \
libffi-dev openssl-dev build-base \
&& apk add --no-cache libstdc++ \
&& pip3 install --upgrade pip cffi \
&& pip install -e . \
&& apk del build-dependencies \
&& rm -f /var/cache/apk/*
# When developing with Python in a docker container, we are using PYTHONUNBUFFERED
# to force stdin, stdout and stderr to be totally unbuffered and to capture logs/outputs
ENV PYTHONUNBUFFERED 0
ENV PLANETMINT_DATABASE_PORT 27017
ENV PLANETMINT_DATABASE_BACKEND $backend
ENV PLANETMINT_SERVER_BIND 0.0.0.0:9984
ENV PLANETMINT_WSSERVER_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_SCHEME ws
ENV PLANETMINT_WSSERVER_ADVERTISED_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_ADVERTISED_SCHEME ws
ENV PLANETMINT_TENDERMINT_PORT 26657
ARG backend
RUN bigchaindb -y configure "$backend"
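Because this Dockerfile declares ARG backend and configures the node with it at build time, the build argument has to be supplied. A sketch (the tag is an assumption; the backend value is the one used throughout the Travis matrix):

    docker build -f Dockerfile-alpine --build-arg backend=localmongodb -t planetmint:alpine .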

35
Dockerfile-dev Normal file

@@ -0,0 +1,35 @@
ARG python_version=3.6
FROM python:${python_version}
LABEL maintainer "contact@ipdb.global"
RUN apt-get update \
&& apt-get install -y git \
&& pip install -U pip \
&& apt-get autoremove \
&& apt-get clean
ARG backend
ARG abci_status
# When developing with Python in a docker container, we are using PYTHONUNBUFFERED
# to force stdin, stdout and stderr to be totally unbuffered and to capture logs/outputs
ENV PYTHONUNBUFFERED 0
ENV PLANETMINT_DATABASE_PORT 27017
ENV PLANETMINT_DATABASE_BACKEND $backend
ENV PLANETMINT_SERVER_BIND 0.0.0.0:9984
ENV PLANETMINT_WSSERVER_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_SCHEME ws
ENV PLANETMINT_WSSERVER_ADVERTISED_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_ADVERTISED_SCHEME ws
ENV PLANETMINT_TENDERMINT_PORT 26657
ENV PLANETMINT_CI_ABCI ${abci_status}
RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN pip install -e .[dev]
RUN planetmint -y configure
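A build sketch for this development image (the tag and argument values are assumptions drawn from the Travis matrix and from .ci/travis-install.sh, which passes abci_status the same way via docker-compose):

    docker build -f Dockerfile-dev \
        --build-arg python_version=3.8 \
        --build-arg abci_status=enable \
        -t planetmint:dev .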

862
LICENSE

@@ -1,661 +1,201 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

24
LICENSES.md Normal file
View File

@ -0,0 +1,24 @@
# Copyrights and Licenses
## Copyrights
For all the code and documentation in this repository, the copyright is owned by one or more of the following:
- Planetmint GmbH
- A Planetmint contributor who agreed to a Planetmint Contributor License Agreement (CLA) with Planetmint GmbH. (See [BEP-16](https://github.com/bigchaindb/BEPs/tree/master/16).)
- A Planetmint contributor who signed off on the Developer Certificate of Origin (DCO) for all their contributions. (See [BEP-24](https://github.com/bigchaindb/BEPs/tree/master/24).)
- (Rarely, see the **Exceptions Section** below) A third party who licensed the code in question under an open source license.
## Code Licenses
All code in this repository, including short code snippets in the documentation, but not including the **Exceptions** noted below, is licensed under the Apache License, Version 2.0, the full text of which can be found at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0).
For the licenses on all other Planetmint-related code (i.e. in other repositories), see the LICENSE file in the associated repository.
## Documentation Licenses
The official Planetmint documentation, _except for the short code snippets embedded within it_, is licensed under a Creative Commons Attribution 4.0 International license, the full text of which can be found at [http://creativecommons.org/licenses/by/4.0/legalcode](http://creativecommons.org/licenses/by/4.0/legalcode).
## Exceptions
The contents of the `k8s/nginx-openresty/` directory are licensed as described in the `LICENSE.md` file in that directory.

144
Makefile Normal file
View File

@ -0,0 +1,144 @@
.PHONY: help run start stop logs test test-unit test-unit-watch test-acceptance cov doc doc-acceptance clean reset release dist check-deps clean-build clean-pyc clean-test
.DEFAULT_GOAL := help
#############################
# Open a URL in the browser #
#############################
define BROWSER_PYSCRIPT
import os, webbrowser, sys
try:
from urllib import pathname2url
except:
from urllib.request import pathname2url
webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1])))
endef
export BROWSER_PYSCRIPT
##################################
# Display help for this makefile #
##################################
define PRINT_HELP_PYSCRIPT
import re, sys
print("Planetmint 2.0 developer toolbox")
print("--------------------------------")
print("Usage: make COMMAND")
print("")
print("Commands:")
for line in sys.stdin:
match = re.match(r'^([a-zA-Z_-]+):.*?## (.*)$$', line)
if match:
target, help = match.groups()
print(" %-16s %s" % (target, help))
endef
export PRINT_HELP_PYSCRIPT
##################
# Basic commands #
##################
DOCKER := docker
DC := docker-compose
BROWSER := python -c "$$BROWSER_PYSCRIPT"
HELP := python -c "$$PRINT_HELP_PYSCRIPT"
ECHO := /usr/bin/env echo
IS_DOCKER_COMPOSE_INSTALLED := $(shell command -v docker-compose 2> /dev/null)
################
# Main targets #
################
help: ## Show this help
@$(HELP) < $(MAKEFILE_LIST)
run: check-deps ## Run Planetmint from source (stop it with ctrl+c)
# Although planetmint lists tendermint and mongodb in depends_on,
# launch them explicitly first; otherwise tendermint can get stuck writing log output
# due to a docker-compose issue. This does not happen when the containers run as daemons.
@$(DC) up --no-deps mongodb tendermint planetmint
start: check-deps ## Run Planetmint from source and daemonize it (stop with `make stop`)
@$(DC) up -d planetmint
stop: check-deps ## Stop Planetmint
@$(DC) stop
logs: check-deps ## Attach to the logs
@$(DC) logs -f planetmint
test: check-deps test-unit test-acceptance ## Run unit and acceptance tests
test-unit: check-deps ## Run all tests once
@$(DC) up -d bdb
@$(DC) exec planetmint pytest
test-unit-watch: check-deps ## Run all tests and wait. Every time you change code, tests will be run again
@$(DC) run --rm --no-deps planetmint pytest -f
test-acceptance: check-deps ## Run all acceptance tests
@./run-acceptance-test.sh
cov: check-deps ## Check code coverage and open the result in the browser
@$(DC) run --rm planetmint pytest -v --cov=planetmint --cov-report html
$(BROWSER) htmlcov/index.html
doc: check-deps ## Generate HTML documentation and open it in the browser
@$(DC) run --rm --no-deps bdocs make -C docs/root html
@$(DC) run --rm --no-deps bdocs make -C docs/server html
@$(DC) run --rm --no-deps bdocs make -C docs/contributing html
$(BROWSER) docs/root/build/html/index.html
doc-acceptance: check-deps ## Create documentation for acceptance tests
@$(DC) run --rm python-acceptance pycco -i -s /src -d /docs
$(BROWSER) acceptance/python/docs/index.html
clean: clean-build clean-pyc clean-test ## Remove all build, test, coverage and Python artifacts
@$(ECHO) "Cleaning was successful."
reset: check-deps ## Stop and REMOVE all containers. WARNING: you will LOSE all data stored in Planetmint.
@$(DC) down
release: dist ## Package and upload a release
twine upload dist/*
dist: clean ## Build a source distribution (wheel packaging is disabled for now)
python setup.py sdist
# python setup.py bdist_wheel
ls -l dist
###############
# Sub targets #
###############
check-deps:
ifndef IS_DOCKER_COMPOSE_INSTALLED
@$(ECHO) "Error: docker-compose is not installed"
@$(ECHO)
@$(ECHO) "You need docker-compose to run this command. Check out the official docs on how to install it in your system:"
@$(ECHO) "- https://docs.docker.com/compose/install/"
@$(ECHO)
@$(DC) # docker-compose is not installed, so we call it to generate an error and exit
endif
clean-build: # Remove build artifacts
@rm -fr build/
@rm -fr dist/
@rm -fr .eggs/
@find . -name '*.egg-info' -exec rm -fr {} +
@find . -name '*.egg' -exec rm -f {} +
clean-pyc: # Remove Python file artifacts
@find . -name '*.pyc' -exec rm -f {} +
@find . -name '*.pyo' -exec rm -f {} +
@find . -name '*~' -exec rm -f {} +
@find . -name '__pycache__' -exec rm -fr {} +
clean-test: # Remove test and coverage artifacts
@find . -name '.pytest_cache' -exec rm -fr {} +
@rm -fr .tox/
@rm -f .coverage
@rm -fr htmlcov/

97
PYTHON_STYLE_GUIDE.md Normal file
View File

@ -0,0 +1,97 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Python Style Guide
This guide starts out with our general Python coding style guidelines and ends with a section on how we write & run (Python) tests.
## General Python Coding Style Guidelines
Our starting point is [PEP8](https://www.python.org/dev/peps/pep-0008/), the standard "Style Guide for Python Code." Many Python IDEs will check your code against PEP8. (Note that PEP8 isn't frozen; it actually changes over time, but slowly.)
Planetmint uses Python 3.5+, so you can ignore all PEP8 guidelines specific to Python 2.
We use [pre-commit](http://pre-commit.com/) to check some of the rules below before every commit, but not all of them are enforced yet.
The hooks we use can be found in the [.pre-commit-config.yaml](https://github.com/bigchaindb/bigchaindb/blob/master/.pre-commit-config.yaml) file.
### Python Docstrings
PEP8 says some things about docstrings, but not what to put in them or how to structure them. [PEP257](https://www.python.org/dev/peps/pep-0257/) was one proposal for docstring conventions, but we prefer [Google-style docstrings](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments) instead: they're easier to read and the [napoleon extension](http://www.sphinx-doc.org/en/stable/ext/napoleon.html) for Sphinx lets us turn them into nice-looking documentation. Here are some references on Google-style docstrings:
* [Google's docs on Google-style docstrings](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments)
* [napoleon's docs include an overview of Google-style docstrings](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/index.html)
* [Example Google-style docstrings](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/example_google.html) (from napoleon's docs)
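For illustration, here is a minimal sketch of a Google-style docstring on a made-up helper function (the name, signature, and behaviour are invented for this example and are not part of the Planetmint codebase):
```python
def describe_asset(serial_number, owner='alice'):
    """Return a short, human-readable description of a bike asset.

    Args:
        serial_number (int): The serial number stored in the asset's
            ``data`` payload.
        owner (str): Name of the current owner. Defaults to ``'alice'``.

    Returns:
        str: A one-line description suitable for log messages.
    """
    return 'bike #{} (owner: {})'.format(serial_number, owner)
```
The napoleon extension turns the `Args` and `Returns` sections into structured documentation when the Sphinx docs are built.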
### Maximum Line Length
PEP8 has some [maximum line length guidelines](https://www.python.org/dev/peps/pep-0008/#id17), starting with "Limit all lines to a maximum of 79 characters" but "for flowing long blocks of text with fewer structural restrictions (docstrings or comments), the line length should be limited to 72 characters."
We discussed this at length, and it seems that the consensus is: _try_ to keep line lengths less than 79/72 characters, unless you have a special situation where longer lines would improve readability. (The basic reason is that 79/72 works for everyone, and Planetmint is an open source project.) As a hard limit, keep all lines less than 119 characters (which is the width of GitHub code review).
### Single or Double Quotes?
Python lets you use single or double quotes. PEP8 says you can use either, as long as you're consistent. We try to stick to using single quotes, except in cases where using double quotes is more readable. For example:
```python
print('This doesn\'t look so nice.')
print("Doesn't this look nicer?")
```
### Breaking Strings Across Multiple Lines
Should we use parentheses or backslashes (`\`) to break strings across multiple lines? For example:
```python
my_string = ('This is a very long string, so long that it will not fit into just one line '
'so it must be split across multiple lines.')
# or
my_string = 'This is a very long string, so long that it will not fit into just one line ' \
'so it must be split across multiple lines.'
```
It seems the preference is for backslashes, but using parentheses is okay too. (There are good arguments either way. Arguing about it seems like a waste of time.)
### How to Format Long import Statements
If you need to `import` lots of names from a module or package, and they won't all fit in one line (without making the line too long), then use parentheses to spread the names across multiple lines, like so:
```python
from Tkinter import (
Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END,
)
# Or
from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END)
```
For the rationale, see [PEP 328](https://www.python.org/dev/peps/pep-0328/#rationale-for-parentheses).
### Using the % operator or `format()` to Format Strings
Given the choice:
```python
x = 'name: %s; score: %d' % (name, n)
# or
x = 'name: {}; score: {}'.format(name, n)
```
we use the `format()` version. The [official Python documentation says](https://docs.python.org/2/library/stdtypes.html#str.format), "This method of string formatting is the new standard in Python 3, and should be preferred to the % formatting described in String Formatting Operations in new code."
## Running the Flake8 Style Checker
We use [Flake8](http://flake8.pycqa.org/en/latest/index.html) to check our Python code style. Once you have it installed, you can run it using:
```text
flake8 --max-line-length 119 bigchaindb/
```
## Writing and Running (Python) Tests
The content of this section was moved to [`bigchaindb/tests/README.md`](https://github.com/bigchaindb/bigchaindb/blob/master/tests/README.md).
Note: We automatically run all tests on all pull requests (using Travis CI), so you should definitely run all tests locally before you submit a pull request. See the above-linked README file for instructions.

View File

@ -1 +1,77 @@
# planetmint
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
<!--- There is no shield to get the latest version
(including pre-release versions) from PyPI,
so show the latest GitHub release instead.
--->
[![Codecov branch](https://img.shields.io/codecov/c/github/bigchaindb/bigchaindb/master.svg)](https://codecov.io/github/bigchaindb/bigchaindb?branch=master)
[![Latest release](https://img.shields.io/github/release/bigchaindb/bigchaindb/all.svg)](https://github.com/bigchaindb/bigchaindb/releases)
[![Status on PyPI](https://img.shields.io/pypi/status/bigchaindb.svg)](https://pypi.org/project/Planetmint/)
[![Travis branch](https://img.shields.io/travis/bigchaindb/bigchaindb/master.svg)](https://travis-ci.com/bigchaindb/bigchaindb)
[![Documentation Status](https://readthedocs.org/projects/bigchaindb-server/badge/?version=latest)](https://docs.bigchaindb.com/projects/server/en/latest/)
[![Join the chat at https://gitter.im/bigchaindb/bigchaindb](https://badges.gitter.im/bigchaindb/bigchaindb.svg)](https://gitter.im/bigchaindb/bigchaindb?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
# Planetmint Server
Planetmint is the blockchain database. This repository is for _Planetmint Server_.
## The Basics
* [Try the Quickstart](https://docs.bigchaindb.com/projects/server/en/latest/quickstart.html)
* [Read the Planetmint 2.0 whitepaper](https://www.bigchaindb.com/whitepaper/)
* [Check out the _Hitchhiker's Guide to BigchainDB_](https://www.bigchaindb.com/developers/guide/)
## Run and Test Planetmint Server from the `master` Branch
Running and testing the latest version of Planetmint Server is easy. Make sure you have a recent version of [Docker Compose](https://docs.docker.com/compose/install/) installed. When you are ready, fire up a terminal and run:
```text
git clone https://github.com/bigchaindb/bigchaindb.git
cd bigchaindb
make run
```
Planetmint should be reachable now on `http://localhost:9984/`.
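If you prefer to check the node from Python rather than a browser, a minimal sketch such as the following should work; it assumes the `requests` package is installed (it is not shipped with this repository) and relies on the root endpoint returning basic node information as JSON:
```python
import requests

# Query the HTTP API root of the local node started with `make run`.
response = requests.get('http://localhost:9984/')
response.raise_for_status()
print(response.json())
```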
There are also other commands you can execute:
* `make start`: Run Planetmint from source and daemonize it (stop it with `make stop`).
* `make stop`: Stop Planetmint.
* `make logs`: Attach to the logs.
* `make test`: Run all unit and acceptance tests.
* `make test-unit-watch`: Run all tests and wait. Every time you change code, tests will be run again.
* `make cov`: Check code coverage and open the result in the browser.
* `make doc`: Generate HTML documentation and open it in the browser.
* `make clean`: Remove all build, test, coverage and Python artifacts.
* `make reset`: Stop and REMOVE all containers. WARNING: you will LOSE all data stored in Planetmint.
To view all commands available, run `make`.
## Links for Everyone
* [Planetmint.com](https://www.bigchaindb.com/) - the main Planetmint website, including newsletter signup
* [Roadmap](https://github.com/bigchaindb/org/blob/master/ROADMAP.md)
* [Blog](https://medium.com/the-bigchaindb-blog)
* [Twitter](https://twitter.com/Planetmint)
## Links for Developers
* [All Planetmint Documentation](https://docs.bigchaindb.com/en/latest/)
* [Planetmint Server Documentation](https://docs.bigchaindb.com/projects/server/en/latest/index.html)
* [CONTRIBUTING.md](.github/CONTRIBUTING.md) - how to contribute
* [Community guidelines](CODE_OF_CONDUCT.md)
* [Open issues](https://github.com/bigchaindb/bigchaindb/issues)
* [Open pull requests](https://github.com/bigchaindb/bigchaindb/pulls)
* [Gitter chatroom](https://gitter.im/bigchaindb/bigchaindb)
## Legal
* [Licenses](LICENSES.md) - open source & open content
* [Imprint](https://www.bigchaindb.com/imprint/)
* [Contact Us](https://www.bigchaindb.com/contact/)

77
README_cn.md Normal file
View File

@ -0,0 +1,77 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
<!--- There is no shield to get the latest version
(including pre-release versions) from PyPI,
so show the latest GitHub release instead.
--->
[![Codecov branch](https://img.shields.io/codecov/c/github/bigchaindb/bigchaindb/master.svg)](https://codecov.io/github/bigchaindb/bigchaindb?branch=master)
[![Latest release](https://img.shields.io/github/release/bigchaindb/bigchaindb/all.svg)](https://github.com/bigchaindb/bigchaindb/releases)
[![Status on PyPI](https://img.shields.io/pypi/status/bigchaindb.svg)](https://pypi.org/project/Planetmint/)
[![Travis branch](https://img.shields.io/travis/bigchaindb/bigchaindb/master.svg)](https://travis-ci.com/bigchaindb/bigchaindb)
[![Documentation Status](https://readthedocs.org/projects/bigchaindb-server/badge/?version=latest)](https://docs.bigchaindb.com/projects/server/en/latest/)
[![Join the chat at https://gitter.im/bigchaindb/bigchaindb](https://badges.gitter.im/bigchaindb/bigchaindb.svg)](https://gitter.im/bigchaindb/bigchaindb?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
# Planetmint Server
Planetmint is the blockchain database. This repository is for _Planetmint Server_.
## The Basics
* [Try the Quickstart](https://docs.bigchaindb.com/projects/server/en/latest/quickstart.html)
* [Read the Planetmint 2.0 whitepaper](https://www.bigchaindb.com/whitepaper/)
* [Check out the _Hitchhiker's Guide to BigchainDB_](https://www.bigchaindb.com/developers/guide/)
## Run and Test Planetmint Server from the `master` Branch
Running and testing the latest version of Planetmint Server is easy. Make sure you have a recent version of [Docker Compose](https://docs.docker.com/compose/install/) installed. When you are ready, fire up a terminal and run:
```text
git clone https://github.com/bigchaindb/bigchaindb.git
cd bigchaindb
make run
```
Planetmint should be reachable now on `http://localhost:9984/`.
There are also other commands you can execute:
* `make start`: Run Planetmint from source and daemonize it (stop it with `make stop`).
* `make stop`: Stop Planetmint.
* `make logs`: Attach to the logs.
* `make test`: Run all unit and acceptance tests.
* `make test-unit-watch`: Run all tests and wait. Every time you change code, tests will be run again.
* `make cov`: Check code coverage and open the result in the browser.
* `make doc`: Generate HTML documentation and open it in the browser.
* `make clean`: Remove all build, test, coverage and Python artifacts.
* `make reset`: Stop and REMOVE all containers. WARNING: you will LOSE all data stored in Planetmint.
To view all commands available, run `make`.
## Links for Everyone
* [Planetmint.com](https://www.bigchaindb.com/) - the main Planetmint website, including newsletter signup
* [Roadmap](https://github.com/bigchaindb/org/blob/master/ROADMAP.md)
* [Blog](https://medium.com/the-bigchaindb-blog)
* [Twitter](https://twitter.com/Planetmint)
## Links for Developers
* [All Planetmint Documentation](https://docs.bigchaindb.com/en/latest/)
* [Planetmint Server Documentation](https://docs.bigchaindb.com/projects/server/en/latest/index.html)
* [CONTRIBUTING.md](.github/CONTRIBUTING.md) - how to contribute
* [Community guidelines](CODE_OF_CONDUCT.md)
* [Open issues](https://github.com/bigchaindb/bigchaindb/issues)
* [Open pull requests](https://github.com/bigchaindb/bigchaindb/pulls)
* [Gitter chatroom](https://gitter.im/bigchaindb/bigchaindb)
## Legal
* [Licenses](LICENSES.md) - open source & open content
* [Imprint](https://www.bigchaindb.com/imprint/)
* [Contact Us](https://www.bigchaindb.com/contact/)

65
README_kor.md Normal file
View File

@ -0,0 +1,65 @@
[![Codecov branch](https://img.shields.io/codecov/c/github/bigchaindb/bigchaindb/master.svg)](https://codecov.io/github/bigchaindb/bigchaindb?branch=master)
[![Latest release](https://img.shields.io/github/release/bigchaindb/bigchaindb/all.svg)](https://github.com/bigchaindb/bigchaindb/releases)
[![Status on PyPI](https://img.shields.io/pypi/status/bigchaindb.svg)](https://pypi.org/project/Planetmint/)
[![Travis branch](https://img.shields.io/travis/bigchaindb/bigchaindb/master.svg)](https://travis-ci.org/bigchaindb/bigchaindb)
[![Documentation Status](https://readthedocs.org/projects/bigchaindb-server/badge/?version=latest)](https://docs.bigchaindb.com/projects/server/en/latest/)
[![Join the chat at https://gitter.im/bigchaindb/bigchaindb](https://badges.gitter.im/bigchaindb/bigchaindb.svg)](https://gitter.im/bigchaindb/bigchaindb?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
# Planetmint Server
Planetmint is the blockchain database. This repository is for _Planetmint Server_.
### The Basics
* [Try the Quickstart](https://docs.bigchaindb.com/projects/server/en/latest/quickstart.html)
* [Read the Planetmint 2.0 whitepaper](https://www.bigchaindb.com/whitepaper/)
* [Check out the _Hitchhiker's Guide to BigchainDB_](https://www.bigchaindb.com/developers/guide/)
### Run and Test Planetmint Server from the `master` Branch
Running and testing the latest version of Planetmint Server is easy. Make sure you have a recent version of [Docker Compose](https://docs.docker.com/compose/install/) installed. When you are ready, fire up a terminal and run:
```text
git clone https://github.com/bigchaindb/bigchaindb.git
cd bigchaindb
make run
```
Planetmint should be reachable now on `http://localhost:9984/`.
There are also other commands you can execute:
* `make start`: Run Planetmint from source and daemonize it (stop it with `make stop`).
* `make stop`: Stop Planetmint.
* `make logs`: Attach to the logs.
* `make test`: Run all unit and acceptance tests.
* `make test-unit-watch`: Run all tests and wait. Every time you change code, tests will be run again.
* `make cov`: Check code coverage and open the result in the browser.
* `make doc`: Generate HTML documentation and open it in the browser.
* `make clean`: Remove all build, test, coverage and Python artifacts.
* `make reset`: Stop and REMOVE all containers. WARNING: you will LOSE all data stored in Planetmint.
To view all commands available, run `make`.
### Links for Everyone
* [Planetmint.com](https://www.bigchaindb.com/) - the main Planetmint website, including newsletter signup
* [Roadmap](https://github.com/bigchaindb/org/blob/master/ROADMAP.md)
* [Blog](https://medium.com/the-bigchaindb-blog)
* [Twitter](https://twitter.com/Planetmint)
### Links for Developers
* [All Planetmint Documentation](https://docs.bigchaindb.com/en/latest/)
* [Planetmint Server Documentation](https://docs.bigchaindb.com/projects/server/en/latest/index.html)
* [CONTRIBUTING.md](https://github.com/bigchaindb/bigchaindb/blob/master/.github/CONTRIBUTING.md) - how to contribute
* [Community guidelines](https://github.com/bigchaindb/bigchaindb/blob/master/CODE_OF_CONDUCT.md)
* [Open issues](https://github.com/bigchaindb/bigchaindb/issues)
* [Open pull requests](https://github.com/bigchaindb/bigchaindb/pulls)
* [Gitter chatroom](https://gitter.im/bigchaindb/bigchaindb)
### Legal
* [Licenses](https://github.com/bigchaindb/bigchaindb/blob/master/LICENSES.md) - open source & open content
* [Imprint](https://www.bigchaindb.com/imprint/)
* [Contact Us](https://www.bigchaindb.com/contact/)

101
RELEASE_PROCESS.md Normal file
View File

@ -0,0 +1,101 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Our Release Process
## Notes
Planetmint follows
[the Python form of Semantic Versioning](https://packaging.python.org/tutorials/distributing-packages/#choosing-a-versioning-scheme)
(i.e. MAJOR.MINOR.PATCH),
which is almost identical
to [regular semantic versioning](http://semver.org/), but there's no hyphen, e.g.
- `0.9.0` for a typical final release
- `4.5.2a1` not `4.5.2-a1` for the first Alpha release
- `3.4.5rc2` not `3.4.5-rc2` for Release Candidate 2
**Note 1:** For Git tags (which are used to identify releases on GitHub), we append a `v` in front. For example, the Git tag for version `2.0.0a1` was `v2.0.0a1`.
**Note 2:** For Docker image tags (e.g. on Docker Hub), we use longer version names for Alpha, Beta and Release Candidate releases. For example, the Docker image tag for version `2.0.0a2` was `2.0.0-alpha2`.
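As a quick, non-normative sketch of the two notes above, the helpers below derive the Git tag and the Docker image tag from a version string. The function names are invented for this example, and the `beta`/`rc` Docker spellings are assumptions, since only the alpha case is documented here:
```python
import re


def git_tag(version):
    # Note 1: the Git tag is the version string with a leading 'v'.
    return 'v' + version


def docker_tag(version):
    # Note 2: Docker image tags spell out the pre-release segment,
    # e.g. '2.0.0a2' -> '2.0.0-alpha2'; final releases are unchanged.
    match = re.match(r'^(\d+\.\d+\.\d+)(a|b|rc)(\d+)$', version)
    if not match:
        return version
    release, kind, number = match.groups()
    # 'alpha' is documented above; 'beta' and 'rc' are assumed spellings.
    names = {'a': 'alpha', 'b': 'beta', 'rc': 'rc'}
    return '{}-{}{}'.format(release, names[kind], number)


assert git_tag('2.0.0a1') == 'v2.0.0a1'
assert docker_tag('2.0.0a2') == '2.0.0-alpha2'
assert docker_tag('0.9.0') == '0.9.0'
```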
We use `0.9` and `0.9.0` as example version and short-version values below. You should replace those with the correct values for your new version.
We follow [BEP-1](https://github.com/bigchaindb/BEPs/tree/master/1), which is our variant of C4, the Collective Code Construction Contract, so a release is just a [tagged commit](https://git-scm.com/book/en/v2/Git-Basics-Tagging) on the `master` branch, i.e. a label for a particular Git commit.
The following steps are what we do to release a new version of _Planetmint Server_. The steps to release the Python Driver are similar but not the same.
## Steps
1. Create a pull request where you make the following changes:
- Update `CHANGELOG.md`
- Update all Docker image tags in all Kubernetes YAML files (in the `k8s/` directory).
For example, in the files:
- `k8s/bigchaindb/bigchaindb-ss.yaml` and
- `k8s/dev-setup/bigchaindb.yaml`
find the line of the form `image: bigchaindb/bigchaindb:0.8.1` and change the version number to the new version number, e.g. `0.9.0`. (This is the Docker image that Kubernetes should pull from Docker Hub.)
Keep in mind that this is a _Docker image tag_ so our naming convention is
a bit different; see Note 2 in the **Notes** section above.
- In `bigchaindb/version.py`:
- update `__version__` to e.g. `0.9.0` (with no `.dev` on the end)
- update `__short_version__` to e.g. `0.9` (with no `.dev` on the end)
- In the docs about installing Planetmint (and Tendermint), and in the associated scripts, recommend/install a version of Tendermint that _actually works_ with the soon-to-be-released version of Planetmint. You can find all such references by doing a search for the previously-recommended version number, such as `0.31.5`.
- In `setup.py`, _maybe_ update the development status item in the `classifiers` list. For example, one allowed value is `"Development Status :: 5 - Production/Stable"`. The [allowed values are listed at pypi.python.org](https://pypi.python.org/pypi?%3Aaction=list_classifiers).
2. **Wait for all the tests to pass!**
3. Merge the pull request into the `master` branch.
4. Go to the [bigchaindb/bigchaindb Releases page on GitHub](https://github.com/bigchaindb/bigchaindb/releases)
and click the "Draft a new release" button.
5. Fill in the details:
- **Tag version:** version number preceded by `v`, e.g. `v0.9.1`
- **Target:** the last commit that was just merged. In other words, that commit will get a Git tag with the value given for tag version above.
- **Title:** Same as tag version above, e.g. `v0.9.1`
- **Description:** The body of the changelog entry (Added, Changed, etc.)
6. Click "Publish release" to publish the release on GitHub.
7. On your local computer, make sure you're on the `master` branch and that it's up-to-date with the `master` branch in the bigchaindb/bigchaindb repository (e.g. `git pull upstream master`). We're going to use that to push a new `bigchaindb` package to PyPI.
8. Make sure you have a `~/.pypirc` file containing credentials for PyPI.
9. Do `make release` to build and publish the new `bigchaindb` package on PyPI. For this step you need to have `twine` installed. If you get an error like `Makefile:135: recipe for target 'clean-pyc' failed` then try doing
```text
sudo chown -R $(whoami):$(whoami) .
```
10. [Log in to readthedocs.org](https://readthedocs.org/accounts/login/) and go to the **Planetmint Server** project, then:
- Click on "Builds", select "latest" from the drop-down menu, then click the "Build Version:" button.
- Wait for the build of "latest" to finish. This can take a few minutes.
- Go to Admin --> Advanced Settings
and make sure that "Default branch:" (i.e. what "latest" points to)
is set to the new release's tag, e.g. `v0.9.1`.
(It won't be an option if you didn't wait for the build of "latest" to finish.)
Then scroll to the bottom and click "Save".
- Go to Admin --> Versions
and under **Choose Active Versions**, do these things:
1. Make sure that the new version's tag is "Active" and "Public"
2. Make sure the **stable** branch is _not_ active.
3. Scroll to the bottom of the page and click "Save".
11. Go to [Docker Hub](https://hub.docker.com/) and sign in, then:
- Click on "Organizations"
- Click on "bigchaindb"
- Click on "bigchaindb/bigchaindb"
- Click on "Build Settings"
- Find the row where "Docker Tag Name" equals `latest`
and change the value of "Name" to the name (Git tag)
of the new release, e.g. `v0.9.0`.
- If the release is an Alpha, Beta or Release Candidate release,
then a new row must be added.
You can do that by clicking the green "+" (plus) icon.
The contents of the new row should be similar to the existing rows
for previous releases of the same kind.
- Click on "Tags"
- Delete the "latest" tag (so we can rebuild it)
- Click on "Build Settings" again
- Click on the "Trigger" button for the "latest" tag and make sure it worked by clicking on "Tags" again
- If the release is an Alpha, Beta or Release Candidate release,
then click on the "Trigger" button for that tag as well.
Congratulations, you have released a new version of Planetmint Server!

14
ROADMAP.md Normal file
View File

@ -0,0 +1,14 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Planetmint Roadmap
We moved the Planetmint Roadmap to the bigchaindb/org repository; see:
[https://github.com/bigchaindb/org/blob/master/ROADMAP.md](https://github.com/bigchaindb/org/blob/master/ROADMAP.md)
(We kept this file here to avoid breaking some links.)

27
acceptance/README.md Normal file
View File

@ -0,0 +1,27 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Acceptance test suite
This directory contains the acceptance test suite for Planetmint.
The suite uses Docker Compose to set up a single Planetmint node, run all tests, and finally stop the node. In the future we will add support for a four node network setup.
## Running the tests
It should be as easy as `make test-acceptance`.
Note that `make test-acceptance` will take some time to start the node and to shut it down. If you are developing a test, or you wish to run a specific test in the acceptance test suite, first start the node with `make start`. After the node is running, you can run `pytest` inside the `python-acceptance` container with:
```bash
docker-compose run --rm python-acceptance pytest <use whatever option you need>
```
## Writing and documenting the tests
Tests are sometimes difficult to read. For acceptance tests, we try to be really explicit about what the test is doing, so please write code that is *simple* and easy to understand. We decided to use literate-programming documentation. To generate the documentation run:
```bash
make doc-acceptance
```

1
acceptance/python/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
docs

View File

@ -0,0 +1,9 @@
FROM python:3.6.3
RUN mkdir -p /src
RUN pip install --upgrade \
pycco \
websocket-client~=0.47.0 \
pytest~=3.0 \
bigchaindb-driver~=0.6.2 \
blns

View File

@ -0,0 +1,125 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Basic Acceptance Test
# Here we check that the primitives of the system behave as expected.
# As you will see, this script tests basic stuff like:
#
# - create a transaction
# - check if the transaction is stored
# - check for the outputs of a given public key
# - transfer the transaction to another key
#
# We run a series of checks for each step, that is, retrieving the transaction from
# the remote system, and also checking the `outputs` of a given public key.
#
# This acceptance test is a rip-off of our
# [tutorial](https://docs.bigchaindb.com/projects/py-driver/en/latest/usage.html).
# ## Imports
# We need some utils from the `os` package, we will interact with
# env variables.
import os
# For this test case we import and use the Python Driver.
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair
def test_basic():
# ## Set up a connection to Planetmint
# To use Planetmint we need a connection. Here we create one. By default we
# connect to localhost, but you can override this value using the env variable
# called `PLANETMINT_ENDPOINT`; a valid value must include the scheme:
# `https://example.com:9984`
bdb = BigchainDB(os.environ.get('PLANETMINT_ENDPOINT'))
# ## Create keypairs
# This test requires the interaction between two actors with their own keypair.
# The two keypairs will be called—drum roll—Alice and Bob.
alice, bob = generate_keypair(), generate_keypair()
# ## Alice registers her bike in Planetmint
# Alice has a nice bike, and here she creates the "digital twin"
# of her bike.
bike = {'data': {'bicycle': {'serial_number': 420420}}}
# She prepares a `CREATE` transaction...
prepared_creation_tx = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset=bike)
# ... and she fulfills it with her private key.
fulfilled_creation_tx = bdb.transactions.fulfill(
prepared_creation_tx,
private_keys=alice.private_key)
# We will use the `id` of this transaction several times, so we store it in
# a variable with a short and easy name
bike_id = fulfilled_creation_tx['id']
# Now she is ready to send it to the Planetmint Network.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_creation_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(bike_id), 'Cannot find transaction {}'.format(bike_id)
# Alice is now the proud owner of one unspent asset.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 1
assert bdb.outputs.get(alice.public_key)[0]['transaction_id'] == bike_id
# ## Alice transfers her bike to Bob
# After registering her bike, Alice is ready to transfer it to Bob.
# She needs to create a new `TRANSFER` transaction.
# A `TRANSFER` transaction contains a pointer to the original asset. The original asset
# is identified by the `id` of the `CREATE` transaction that defined it.
transfer_asset = {'id': bike_id}
# Alice wants to spend the one and only output available, the one with index `0`.
output_index = 0
output = fulfilled_creation_tx['outputs'][output_index]
# Here, she defines the `input` of the `TRANSFER` transaction. The `input` contains
# several keys:
#
# - `fulfillment`, taken from the previous `CREATE` transaction.
# - `fulfills`, that specifies which condition she is fulfilling.
# - `owners_before`.
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_creation_tx['id']},
'owners_before': output['public_keys']}
# Now that all the elements are set, she creates the actual transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=bob.public_key)
# ... and signs it with her private key.
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=alice.private_key)
# She finally sends the transaction to a Planetmint node.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(fulfilled_transfer_tx['id']) == sent_transfer_tx
# Now Alice has zero unspent transactions.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 0
# While Bob has one.
assert len(bdb.outputs.get(bob.public_key, spent=False)) == 1
# Bob double checks what he got was the actual bike.
bob_tx_id = bdb.outputs.get(bob.public_key, spent=False)[0]['transaction_id']
assert bdb.transactions.retrieve(bob_tx_id) == sent_transfer_tx

View File

@ -0,0 +1,181 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Divisible assets integration testing
# This test checks if we can successfully divide assets.
# The script tests various things like:
#
# - create a transaction with a divisible asset and issue the tokens to someone
# - check if the transaction is stored and has the right amount of tokens
# - spend some tokens
# - try to spend more tokens than available
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the `amount`
# of a given transaction.
#
# This integration test is a rip-off of our
# [tutorial](https://docs.bigchaindb.com/projects/py-driver/en/latest/usage.html).
# ## Imports
# We need some utils from the `os` package, we will interact with
# env variables.
# We need the `pytest` package to catch the `BadRequest` exception properly.
# And of course, we also need the `BadRequest`.
import os
import pytest
from bigchaindb_driver.exceptions import BadRequest
# For this test case we import and use the Python Driver.
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair
def test_divisible_assets():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = BigchainDB(os.environ.get('PLANETMINT_ENDPOINT'))
# Oh look, it is Alice again and she brought her friend Bob along.
alice, bob = generate_keypair(), generate_keypair()
# ## Alice creates a time sharing token
# Alice wants to go on vacation, while Bob's bike just broke down.
# Alice decides to rent her bike to Bob while she is gone.
# So she prepares a `CREATE` transaction to issue 10 tokens.
# First, she prepares an asset for a time sharing token. As you can see in
# the description, Bob and Alice agree that each token can be used to ride
# the bike for one hour.
bike_token = {
'data': {
'token_for': {
'bike': {
'serial_number': 420420
}
},
'description': 'Time share token. Each token equals one hour of riding.',
},
}
# She prepares a `CREATE` transaction and issues 10 tokens.
# Here, Alice defines in a tuple that she wants to assign
# these 10 tokens to Bob.
prepared_token_tx = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
recipients=[([bob.public_key], 10)],
asset=bike_token)
# She fulfills and sends the transaction.
fulfilled_token_tx = bdb.transactions.fulfill(
prepared_token_tx,
private_keys=alice.private_key)
bdb.transactions.send_commit(fulfilled_token_tx)
# We store the `id` of the transaction to use it later on.
bike_token_id = fulfilled_token_tx['id']
# Let's check if the transaction was successful.
assert bdb.transactions.retrieve(bike_token_id), \
'Cannot find transaction {}'.format(bike_token_id)
# Bob owns 10 tokens now.
assert bdb.transactions.retrieve(bike_token_id)['outputs'][0][
'amount'] == '10'
# ## Bob wants to use the bike
# Now that Bob got the tokens and the sun is shining, he wants to get out
# with the bike for three hours.
# To use the bike he has to send the tokens back to Alice.
# To learn about the details of transferring a transaction check out
# [test_basic.py](./test_basic.html)
transfer_asset = {'id': bike_token_id}
output_index = 0
output = fulfilled_token_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_token_tx[
'id']},
'owners_before': output['public_keys']}
# To use the bike Bob has to send the amount he wants to use (3 tokens)
# to Alice and reassign the remaining 7 tokens to himself.
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=[([alice.public_key], 3), ([bob.public_key], 7)])
# He signs and sends the transaction.
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=bob.private_key)
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# First, Bob checks if the transaction was successful.
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id']) == sent_transfer_tx
# There are two outputs in the transaction now.
# The first output shows that Alice got back 3 tokens...
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id'])['outputs'][0]['amount'] == '3'
# ... while Bob still has 7 left.
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id'])['outputs'][1]['amount'] == '7'
# ## Bob wants to ride the bike again
# It's been a week and Bob wants to ride the bike again.
# Now he wants to ride for 8 hours, that's a lot Bob!
# He prepares the transaction again.
transfer_asset = {'id': bike_token_id}
# This time we need an `output_index` of 1, since we have two outputs
# in the `fulfilled_transfer_tx` we created before. The first output with
# index 0 is for Alice and the second output is for Bob.
# Since Bob wants to spend more of his tokens he has to provide the
# correct output with the correct amount of tokens.
output_index = 1
output = fulfilled_transfer_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_transfer_tx['id']},
'owners_before': output['public_keys']}
# This time Bob only provides Alice in the `recipients` because he wants
# to spend all his tokens
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=[([alice.public_key], 8)])
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=bob.private_key)
# Oh Bob, what have you done?! You tried to spend more tokens than you had.
# Remember Bob, last time you spent 3 tokens already,
# so you only have 7 left.
with pytest.raises(BadRequest) as error:
bdb.transactions.send_commit(fulfilled_transfer_tx)
# Now Bob gets an error saying that the amount he wanted to spend is
# higher than the amount of tokens he has left.
assert error.value.args[0] == 400
message = 'Invalid transaction (AmountError): The amount used in the ' \
'inputs `7` needs to be same as the amount used in the ' \
'outputs `8`'
assert error.value.args[2]['message'] == message
# We have to stop this test now, I am sorry, but Bob is pretty upset
# about his mistake. See you next time :)

View File

@ -0,0 +1,48 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Double Spend testing
# This test challenges the system with double spends.
import os
from uuid import uuid4
from threading import Thread
import queue
import bigchaindb_driver.exceptions
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair
def test_double_create():
bdb = BigchainDB(os.environ.get('PLANETMINT_ENDPOINT'))
alice = generate_keypair()
results = queue.Queue()
tx = bdb.transactions.fulfill(
bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset={'data': {'uuid': str(uuid4())}}),
private_keys=alice.private_key)
def send_and_queue(tx):
try:
bdb.transactions.send_commit(tx)
results.put('OK')
except bigchaindb_driver.exceptions.TransportError:
results.put('FAIL')
t1 = Thread(target=send_and_queue, args=(tx, ))
t2 = Thread(target=send_and_queue, args=(tx, ))
t1.start()
t2.start()
results = [results.get(timeout=2), results.get(timeout=2)]
assert results.count('OK') == 1
assert results.count('FAIL') == 1

View File

@ -0,0 +1,126 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Multiple owners integration testing
# This test checks if we can successfully create and transfer a transaction
# with multiple owners.
# The script tests various things like:
#
# - create a transaction with multiple owners
# - check if the transaction is stored and has the right amount of public keys
# - transfer the transaction to a third person
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the public keys
# of a given transaction.
#
# This integration test is a rip-off of our
# [tutorial](https://docs.bigchaindb.com/projects/py-driver/en/latest/usage.html).
# ## Imports
# We need some utils from the `os` package, we will interact with
# env variables.
import os
# For this test case we import and use the Python Driver.
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair
def test_multiple_owners():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = BigchainDB(os.environ.get('PLANETMINT_ENDPOINT'))
# Hey Alice and Bob, nice to see you again!
alice, bob = generate_keypair(), generate_keypair()
# ## Alice and Bob create a transaction
# Alice and Bob just moved into a shared flat, no one can afford these
# high rents anymore. Bob suggests to get a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = {
'data': {
'dish washer': {
'serial_number': 1337
}
}
}
# They prepare a `CREATE` transaction. To have multiple owners, both
# Bob and Alice need to be the recipients.
prepared_dw_tx = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
recipients=(alice.public_key, bob.public_key),
asset=dw_asset)
# Now they both sign the transaction by providing their private keys.
# And send it afterwards.
fulfilled_dw_tx = bdb.transactions.fulfill(
prepared_dw_tx,
private_keys=[alice.private_key, bob.private_key])
bdb.transactions.send_commit(fulfilled_dw_tx)
# We store the `id` of the transaction to use it later on.
dw_id = fulfilled_dw_tx['id']
# Let's check if the transaction was successful.
assert bdb.transactions.retrieve(dw_id), \
'Cannot find transaction {}'.format(dw_id)
# The transaction should have two public keys in the outputs.
assert len(
bdb.transactions.retrieve(dw_id)['outputs'][0]['public_keys']) == 2
# ## Alice and Bob transfer a transaction to Carol.
# Alice and Bob save a lot of money living together. They often go out
# for dinner and don't cook at home. But now they don't have any dishes to
# wash, so they decide to sell the dish washer to their friend Carol.
# Hey Carol, nice to meet you!
carol = generate_keypair()
# Alice and Bob prepare the transaction to transfer the dish washer to
# Carol.
transfer_asset = {'id': dw_id}
output_index = 0
output = fulfilled_dw_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_dw_tx[
'id']},
'owners_before': output['public_keys']}
# Now they create the transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=carol.public_key)
# ... and sign it with their private keys, then send it.
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=[alice.private_key, bob.private_key])
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# They check if the transaction was successful.
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id']) == sent_transfer_tx
# The owners before should include both Alice and Bob.
assert len(
bdb.transactions.retrieve(fulfilled_transfer_tx['id'])['inputs'][0][
'owners_before']) == 2
# While the new owner is Carol.
assert bdb.transactions.retrieve(fulfilled_transfer_tx['id'])[
'outputs'][0]['public_keys'][0] == carol.public_key

View File

@ -0,0 +1,101 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# ## Testing potentially hazardous strings
# This test uses a library of `naughty` strings (code injections, weird unicode chars., etc.) as both keys and values.
# We look for either a successful tx or, when a naughty string used as a key violates some key
# constraints, a well-formatted error message.
# ## Imports
# We need some utils from the `os` package, we will interact with
# env variables.
import os
# Since the naughty strings get encoded and decoded in odd ways,
# we'll use a regex to sweep those details under the rug.
import re
# We'll use a nice library of naughty strings...
from blns import blns
# And parameterize our test so each one is treated as a separate test case
import pytest
# For this test case we import and use the Python Driver.
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair
from bigchaindb_driver.exceptions import BadRequest
naughty_strings = blns.all()
# This is our base test case, but we'll reuse it to send naughty strings as both keys and values.
def send_naughty_tx(asset, metadata):
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = BigchainDB(os.environ.get('PLANETMINT_ENDPOINT'))
# Here's Alice.
alice = generate_keypair()
# Alice is in a naughty mood today, so she creates a tx with some naughty strings
prepared_transaction = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset=asset,
metadata=metadata)
# She fulfills the transaction
fulfilled_transaction = bdb.transactions.fulfill(
prepared_transaction,
private_keys=alice.private_key)
# The fulfilled tx gets sent to the BDB network
try:
sent_transaction = bdb.transactions.send_commit(fulfilled_transaction)
except BadRequest as e:
sent_transaction = e
# If her key contained a '.', began with a '$', or contained a NUL character
regex = r'.*\..*|\$.*|.*\x00.*'
key = next(iter(metadata))
if re.match(regex, key):
# Then she expects a nicely formatted error code
status_code = sent_transaction.status_code
error = sent_transaction.error
regex = (
r'\{\s*\n*'
r'\s*"message":\s*"Invalid transaction \(ValidationError\):\s*'
r'Invalid key name.*The key name cannot contain characters.*\n*'
r'\s*"status":\s*400\n*'
r'\s*\}\n*')
assert status_code == 400
assert re.fullmatch(regex, error), sent_transaction
# Otherwise, she expects to see her transaction in the database
elif 'id' in sent_transaction.keys():
tx_id = sent_transaction['id']
assert bdb.transactions.retrieve(tx_id)
# If neither condition was true, then something weird happened...
else:
raise TypeError(sent_transaction)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_keys(naughty_string):
asset = {'data': {naughty_string: 'nice_value'}}
metadata = {naughty_string: 'nice_value'}
send_naughty_tx(asset, metadata)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_values(naughty_string):
asset = {'data': {'nice_key': naughty_string}}
metadata = {'nice_key': naughty_string}
send_naughty_tx(asset, metadata)

View File

@ -0,0 +1,132 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Stream Acceptance Test
# This test checks if the event stream works correctly. The basic idea of this
# test is to generate some random **valid** transactions, send them to a
# Planetmint node, and expect those transactions to be returned by the valid
# transactions Stream API. During this test, two threads work together,
# sharing a queue to exchange events.
#
# - The *main thread* first creates and sends the transactions to Planetmint;
# then it runs through all events in the shared queue to check if all
# transactions sent have been validated by Planetmint.
# - The *listen thread* listens to the events coming from Planetmint and puts
# them in a queue shared with the main thread.
import os
import queue
import json
from threading import Thread, Event
from uuid import uuid4
# For this script, we need to set up a websocket connection; that's why
# we import the
# [websocket](https://github.com/websocket-client/websocket-client) module
from websocket import create_connection
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair
def test_stream():
# ## Set up the test
# We use the env variable `PLANETMINT_ENDPOINT` to know where to connect.
# Check [test_basic.py](./test_basic.html) for more information.
BDB_ENDPOINT = os.environ.get('PLANETMINT_ENDPOINT')
# *That's pretty hacky, but let's do it like this for now.*
WS_ENDPOINT = 'ws://{}:9985/api/v1/streams/valid_transactions'.format(BDB_ENDPOINT.rsplit(':')[0])
bdb = BigchainDB(BDB_ENDPOINT)
# Hello to Alice again; she is pretty active in these tests. Good job,
# Alice!
alice = generate_keypair()
# We need a few variables to keep track of the state: `sent` holds all the
# transactions Alice sent to Planetmint, while `received` collects the
# transactions Planetmint validated and sent back to her.
sent = []
received = queue.Queue()
# In this test we use a websocket. The websocket must be started **before**
# sending transactions to Planetmint, otherwise we might lose some
# transactions. The `ws_ready` event is used to synchronize the main thread
# with the listen thread.
ws_ready = Event()
# ## Listening to events
# This is the function run by the complementary thread.
def listen():
# First we connect to the remote endpoint using the WebSocket protocol.
ws = create_connection(WS_ENDPOINT)
# After the connection has been set up, we can signal the main thread
# to proceed (continue reading, it should make sense in a second.)
ws_ready.set()
# It's time to consume all events coming from the Planetmint stream API.
# Every time a new event is received, it is put in the queue shared
# with the main thread.
while True:
result = ws.recv()
received.put(result)
# Put `listen` in a thread, and start it. Note that `listen` is a local
# function and it can access all variables in the enclosing function.
t = Thread(target=listen, daemon=True)
t.start()
# ## Pushing the transactions to Planetmint
# After starting the listen thread, we wait for it to connect, and then we
# proceed.
ws_ready.wait()
# Here we prepare, sign, and send ten different `CREATE` transactions. To
# make sure each transaction is different from the others, we generate a
# random `uuid`.
for _ in range(10):
tx = bdb.transactions.fulfill(
bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset={'data': {'uuid': str(uuid4())}}),
private_keys=alice.private_key)
# We don't want to wait for each transaction to be in a block. By using
# `async` mode, we make sure that the driver returns as soon as the
# transaction is pushed to the Planetmint API. Remember: we expect all
# transactions to end up in the shared queue: this is a two-phase test;
# first we send a bunch of transactions, then we check that they are
# valid (and, in this case, they should be).
bdb.transactions.send_async(tx)
# The `id` of every sent transaction is then stored in a list.
sent.append(tx['id'])
# ## Check the valid transactions coming from Planetmint
# Now we are ready to check if Planetmint did its job. A simple way to
# check if all sent transactions have been processed is to **remove** from
# `sent` the transactions we get from the *listen thread*. At some point,
# `sent` should be empty, and we exit the test.
while sent:
# To avoid waiting forever, we have an arbitrary timeout of 5
# seconds: it should be enough time for Planetmint to create
# blocks; in fact, a new block is created every second. If we hit
# the timeout, then game over ¯\\\_(ツ)\_/¯
try:
event = received.get(timeout=5)
txid = json.loads(event)['transaction_id']
except queue.Empty:
assert False, 'Did not receive all expected transactions'
# The last thing to do is to remove the `txid` from the list of sent
# transactions. If this test is running in parallel with others, we
# might get a transaction id from another test, and `remove` can fail.
# It's OK if this happens.
try:
sent.remove(txid)
except ValueError:
pass

35
codecov.yml Normal file
View File

@ -0,0 +1,35 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
codecov:
branch: master # the branch to show by default
coverage:
precision: 2
round: down
range: "70...100"
status:
project:
default:
target: auto
if_no_uploads: error
patch:
default:
target: "80%"
if_no_uploads: error
ignore: # files and folders that will be removed during processing
- "docs/*"
- "tests/*"
- "bigchaindb/version.py"
- "k8s/*"
comment:
# @stevepeak (from codecov.io) suggested we change 'suggestions' to 'uncovered'
# in the following line. Thanks Steve!
layout: "header, diff, changes, sunburst, uncovered"
behavior: default

105
docker-compose.yml Normal file
View File

@ -0,0 +1,105 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
version: '2.2'
services:
# Build: docker-compose build planetmint
# Run: docker-compose run -d bdb
mongodb:
image: mongo:3.6
ports:
- "27017:27017"
command: mongod
restart: always
planetmint:
depends_on:
- mongodb
- tendermint
build:
context: .
dockerfile: Dockerfile-dev
volumes:
- ./planetmint:/usr/src/app/planetmint
- ./tests:/usr/src/app/tests
- ./docs:/usr/src/app/docs
- ./htmlcov:/usr/src/app/htmlcov
- ./setup.py:/usr/src/app/setup.py
- ./setup.cfg:/usr/src/app/setup.cfg
- ./pytest.ini:/usr/src/app/pytest.ini
- ./tox.ini:/usr/src/app/tox.ini
environment:
PLANETMINT_DATABASE_BACKEND: localmongodb
PLANETMINT_DATABASE_HOST: mongodb
PLANETMINT_DATABASE_PORT: 27017
PLANETMINT_SERVER_BIND: 0.0.0.0:9984
PLANETMINT_WSSERVER_HOST: 0.0.0.0
PLANETMINT_WSSERVER_ADVERTISED_HOST: planetmint
PLANETMINT_TENDERMINT_HOST: tendermint
PLANETMINT_TENDERMINT_PORT: 26657
ports:
- "9984:9984"
- "9985:9985"
- "26658"
healthcheck:
test: ["CMD", "bash", "-c", "curl http://planetmint:9984 && curl http://tendermint:26657/abci_query"]
interval: 3s
timeout: 5s
retries: 3
command: '.ci/entrypoint.sh'
restart: always
tendermint:
image: tendermint/tendermint:v0.31.5
# volumes:
# - ./tmdata:/tendermint
entrypoint: ''
ports:
- "26656:26656"
- "26657:26657"
command: sh -c "tendermint init && tendermint node --consensus.create_empty_blocks=false --proxy_app=tcp://planetmint:26658"
restart: always
bdb:
image: busybox
depends_on:
planetmint:
condition: service_healthy
# curl client to check the health of development env
curl-client:
image: appropriate/curl
command: /bin/sh -c "curl -s http://planetmint:9984/ > /dev/null && curl -s http://tendermint:26657/ > /dev/null"
# Planetmint setup to do acceptance testing with Python
python-acceptance:
build:
context: .
dockerfile: ./acceptance/python/Dockerfile
volumes:
- ./acceptance/python/docs:/docs
- ./acceptance/python/src:/src
environment:
- PLANETMINT_ENDPOINT=planetmint
# Build docs only
# docker-compose build bdocs
# docker-compose up -d bdocs
bdocs:
depends_on:
- vdocs
build:
context: .
dockerfile: Dockerfile-dev
args:
backend: localmongodb
volumes:
- .:/usr/src/app/
command: make -C docs/root html
vdocs:
image: nginx
ports:
- '33333:80'
volumes:
- ./docs/root/build/html:/usr/share/nginx/html

51
docs/README.md Normal file
View File

@ -0,0 +1,51 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
- [Documentation on ReadTheDocs](http://bigchaindb.readthedocs.org/)
- [Planetmint Upgrade Guides](upgrade-guides/)
# The Planetmint Documentation Strategy
* Include explanatory comments and docstrings in your code. Write [Google style docstrings](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments) with a maximum line width of 119 characters.
* For quick overview and help documents, feel free to create `README.md` or other `X.md` files, written using [GitHub-flavored Markdown](https://help.github.com/categories/writing-on-github/). Markdown files render nicely on GitHub. We might auto-convert some .md files into a format that can be included in the long-form documentation.
* We use [Sphinx](http://www.sphinx-doc.org/en/stable/) to generate the long-form documentation in various formats (e.g. HTML, PDF).
* We also use [Sphinx](http://www.sphinx-doc.org/en/stable/) to generate Python code documentation (from docstrings and possibly other sources).
* We also use Sphinx to document all REST APIs, with the help of [the `httpdomain` extension](https://pythonhosted.org/sphinxcontrib-httpdomain/).
# How to Generate the HTML Version of the Long-Form Documentation
If you want to generate the HTML version of the long-form documentation on your local machine, you need to have Sphinx and some Sphinx-contrib packages installed. To do that, go to a subdirectory of `docs` (e.g. `docs/server`) and do:
```bash
pip install -r requirements.txt
```
If you're building the *Server* docs (in `docs/server`) then you must also do:
```bash
pip install -e ../../
```
Note: Don't put `-e ../../` in the `requirements.txt` file. That will work locally
but not on ReadTheDocs.
You can then generate the HTML documentation _in that subdirectory_ by doing:
```bash
make html
```
It should tell you where the generated documentation (HTML files) can be found. You can view it in your web browser.
# Building Docs with Docker Compose
You can also use [Docker Compose](https://docs.docker.com/compose/) to build and host docs.
```text
$ docker-compose up -d bdocs
```
The docs will be hosted on port **33333** and can be accessed via [localhost](http://localhost:33333), [127.0.0.1](http://127.0.0.1:33333),
or http://HOST_IP:33333.

225
docs/root/Makefile Normal file
View File

@ -0,0 +1,225 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS = -W
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " epub3 to make an epub3"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
@echo " dummy to check syntax errors of document sources"
.PHONY: clean
clean:
rm -rf $(BUILDDIR)/*
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Planetmint.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Planetmint.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/Planetmint"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Planetmint"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: epub3
epub3:
$(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
@echo
@echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
.PHONY: dummy
dummy:
$(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
@echo
@echo "Build finished. Dummy builder generates no files."

View File

@ -0,0 +1,207 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
""" Script to build http examples for http server api docs """
import json
import os
import os.path
from planetmint.common.transaction import Transaction, Input, TransactionLink
from planetmint import lib
from planetmint.web import server
TPLS = {}
TPLS['index-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(index)s
"""
TPLS['api-index-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(api_index)s
"""
TPLS['get-tx-id-request'] = """\
GET /api/v1/transactions/%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-id-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(tx)s
"""
TPLS['get-tx-by-asset-request'] = """\
GET /api/v1/transactions?operation=TRANSFER&asset_id=%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-by-asset-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
[%(tx_transfer)s,
%(tx_transfer_last)s]
"""
TPLS['post-tx-request'] = """\
POST /api/v1/transactions?mode=async HTTP/1.1
Host: example.com
Content-Type: application/json
%(tx)s
"""
TPLS['post-tx-response'] = """\
HTTP/1.1 202 Accepted
Content-Type: application/json
%(tx)s
"""
TPLS['get-block-request'] = """\
GET /api/v1/blocks/%(blockid)s HTTP/1.1
Host: example.com
"""
TPLS['get-block-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(block)s
"""
TPLS['get-block-txid-request'] = """\
GET /api/v1/blocks?transaction_id=%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-block-txid-response'] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(block_list)s
"""
def main():
""" Main function """
ctx = {}
def pretty_json(data):
return json.dumps(data, indent=2, sort_keys=True)
client = server.create_app().test_client()
host = 'example.com:9984'
# HTTP Index
res = client.get('/', environ_overrides={'HTTP_HOST': host})
res_data = json.loads(res.data.decode())
ctx['index'] = pretty_json(res_data)
# API index
res = client.get('/api/v1/', environ_overrides={'HTTP_HOST': host})
ctx['api_index'] = pretty_json(json.loads(res.data.decode()))
# tx create
privkey = 'CfdqtD7sS7FgkMoGPXw55MVGGFwQLAoHYTcBhZDtF99Z'
pubkey = '4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD'
asset = {'msg': 'Hello Planetmint!'}
tx = Transaction.create([pubkey], [([pubkey], 1)], asset=asset, metadata={'sequence': 0})
tx = tx.sign([privkey])
ctx['tx'] = pretty_json(tx.to_dict())
ctx['public_keys'] = tx.outputs[0].public_keys[0]
ctx['txid'] = tx.id
# tx transfer
privkey_transfer = '3AeWpPdhEZzWLYfkfYHBfMFC2r1f8HEaGS9NtbbKssya'
pubkey_transfer = '3yfQPHeWAa1MxTX9Zf9176QqcpcnWcanVZZbaHb8B3h9'
cid = 0
input_ = Input(fulfillment=tx.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx.id, output=cid),
owners_before=tx.outputs[cid].public_keys)
tx_transfer = Transaction.transfer([input_], [([pubkey_transfer], 1)], asset_id=tx.id, metadata={'sequence': 1})
tx_transfer = tx_transfer.sign([privkey])
ctx['tx_transfer'] = pretty_json(tx_transfer.to_dict())
ctx['public_keys_transfer'] = tx_transfer.outputs[0].public_keys[0]
ctx['tx_transfer_id'] = tx_transfer.id
# privkey_transfer_last = 'sG3jWDtdTXUidBJK53ucSTrosktG616U3tQHBk81eQe'
pubkey_transfer_last = '3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm'
cid = 0
input_ = Input(fulfillment=tx_transfer.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx_transfer.id, output=cid),
owners_before=tx_transfer.outputs[cid].public_keys)
tx_transfer_last = Transaction.transfer([input_], [([pubkey_transfer_last], 1)],
asset_id=tx.id, metadata={'sequence': 2})
tx_transfer_last = tx_transfer_last.sign([privkey_transfer])
ctx['tx_transfer_last'] = pretty_json(tx_transfer_last.to_dict())
ctx['tx_transfer_last_id'] = tx_transfer_last.id
ctx['public_keys_transfer_last'] = tx_transfer_last.outputs[0].public_keys[0]
# block
node_private = "5G2kE1zJAgTajkVSbPAQWo4c2izvtwqaNHYsaNpbbvxX"
node_public = "DngBurxfeNVKZWCEcDnLj1eMPAS7focUZTE5FndFGuHT"
signature = "53wxrEQDYk1dXzmvNSytbCfmNVnPqPkDQaTnAe8Jf43s6ssejPxezkCvUnGTnduNUmaLjhaan1iRLi3peu6s5DzA"
app_hash = 'f6e0c49c6d94d6924351f25bb334cf2a99af4206339bf784e741d1a5ab599056'
block = lib.Block(height=1, transactions=[tx.to_dict()], app_hash=app_hash)
block_dict = block._asdict()
block_dict.pop('app_hash')
ctx['block'] = pretty_json(block_dict)
ctx['blockid'] = block.height
# block status
block_list = [
block.height
]
ctx['block_list'] = pretty_json(block_list)
base_path = os.path.join(os.path.dirname(__file__),
'source/installation/api/http-samples')
if not os.path.exists(base_path):
os.makedirs(base_path)
for name, tpl in TPLS.items():
path = os.path.join(base_path, name + '.http')
code = tpl % ctx
with open(path, 'w') as handle:
handle.write(code)
def setup(*_):
""" Fool sphinx into think it's an extension muahaha """
main()
if __name__ == '__main__':
main()

263
docs/root/make.bat Normal file
View File

@ -0,0 +1,263 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% source
set I18NSPHINXOPTS=%SPHINXOPTS% source
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
echo. coverage to run coverage check of the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
REM Check if sphinx-build is available and fallback to Python version if any
%SPHINXBUILD% 1>NUL 2>NUL
if errorlevel 9009 goto sphinx_python
goto sphinx_ok
:sphinx_python
set SPHINXBUILD=python -m sphinx.__init__
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
:sphinx_ok
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\bigchaindb.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\bigchaindb.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %~dp0
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "coverage" (
%SPHINXBUILD% -b coverage %ALLSPHINXOPTS% %BUILDDIR%/coverage
if errorlevel 1 exit /b 1
echo.
echo.Testing of coverage in the sources finished, look at the ^
results in %BUILDDIR%/coverage/python.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end

View File

@ -0,0 +1,9 @@
Sphinx~=1.0
recommonmark>=0.4.0
sphinx-rtd-theme>=0.1.9
sphinxcontrib-napoleon>=0.4.4
sphinxcontrib-httpdomain>=1.5.0
pyyaml>=4.2b1
aafigure>=0.6
packaging~=18.0
wget

Binary file not shown (new image, 4.1 KiB)

Binary file not shown (new image, 1.8 KiB)

Binary file not shown (new image, 166 KiB)

View File

@ -0,0 +1,130 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
About Planetmint
----------------
Basic Facts
===========
#. One can store arbitrary data (including encrypted data) in a Planetmint network, within limits: there's a maximum transaction size. Every transaction has a ``metadata`` section which can store almost any Unicode string (up to some maximum length). Similarly, every CREATE transaction has an ``asset.data`` section which can store almost any Unicode string.
#. The data stored in certain Planetmint transaction fields must not be encrypted, e.g. public keys and amounts. Planetmint doesn't offer private transactions akin to Zcoin.
#. Once data has been stored in a Planetmint network, it's best to assume it can't be changed or deleted.
#. Every node in a Planetmint network has a full copy of all the stored data.
#. Every node in a Planetmint network can read all the stored data.
#. Everyone with full access to a Planetmint node (e.g. the sysadmin of a node) can read all the data stored on that node.
#. Everyone given access to a node via the Planetmint HTTP API can find and read all the data stored by Planetmint. The list of people with access might be quite short.
#. If the connection between an external user and a Planetmint node isn't encrypted (using HTTPS, for example), then a wiretapper can read all HTTP requests and responses in transit.
#. If someone gets access to plaintext (regardless of where they got it), then they can (in principle) share it with the whole world. One can make it difficult for them to do that, e.g. if it is a lot of data and they only get access inside a secure room where they are searched as they leave the room.
Planetmint for Asset Registrations & Transfers
==============================================
Planetmint can store data of any kind, but it's designed to be particularly good for storing asset registrations and transfers:
* The fundamental thing that one sends to a Planetmint network, to be checked and stored (if valid), is a *transaction*, and there are two kinds: CREATE transactions and TRANSFER transactions.
* A CREATE transaction can be used to register any kind of asset (divisible or indivisible), along with arbitrary metadata.
* An asset can have zero, one, or several owners.
* The owners of an asset can specify (crypto-)conditions which must be satisfied by anyone wishing to transfer the asset to new owners. For example, a condition might be that at least 3 of the 5 current owners must cryptographically sign a TRANSFER transaction.
* Planetmint verifies that the conditions have been satisfied as part of checking the validity of TRANSFER transactions. (Moreover, anyone can check that they were satisfied.)
* Planetmint prevents double-spending of an asset.
* Validated transactions are immutable.
.. note::
We used the word "owners" somewhat loosely above. A more accurate word might be fulfillers, signers, controllers, or transfer-enablers. See the section titled **A Note about Owners** in the relevant `Planetmint Transactions Spec <https://github.com/bigchaindb/BEPs/tree/master/tx-specs/>`_.
Production-Ready?
=================
Depending on your use case, Planetmint may or may not be production-ready. You should ask your service provider.
If you want to go live (into production) with Planetmint, please consult with your service provider.
Note: Planetmint has an open source license with a "no warranty" section that is typical of open source licenses. This is standard in the software industry. For example, the Linux kernel is used in production by billions of machines even though its license includes a "no warranty" section. Warranties are usually provided above the level of the software license, by service providers.
Storing Private Data Off-Chain
==============================
A system could store data off-chain, e.g. in a third-party database, document store, or content management system (CMS) and it could use Planetmint to:
- Keep track of who has read permissions (or other permissions) in a third-party system. An example of how this could be done is described below.
- Keep a permanent record of all requests made to the third-party system.
- Store hashes of documents-stored-elsewhere, so that a change in any document can be detected.
- Record all handshake-establishing requests and responses between two off-chain parties (e.g. a Diffie-Hellman key exchange), so as to prove that they established an encrypted tunnel (without giving readers access to that tunnel). There are more details about this idea in `the Privacy Protocols repository <https://github.com/bigchaindb/privacy-protocols>`_.
A simple way to record who has read permission on a particular document would be for the third-party system (“DocPile”) to store a CREATE transaction in a Planetmint network for every document+user pair, to indicate that that user has read permissions for that document. The transaction could be signed by DocPile (or maybe by a document owner, as a variation). The asset data field would contain 1) the unique ID of the user and 2) the unique ID of the document. The one output on the CREATE transaction would only be transferable/spendable by DocPile (or, again, a document owner).
To revoke the read permission, DocPile could create a TRANSFER transaction, to spend the one output on the original CREATE transaction, with a metadata field to say that the user in question no longer has read permission on that document.
This can be carried on indefinitely, i.e. another TRANSFER transaction could be created by DocPile to indicate that the user now has read permissions again.
DocPile can figure out if a given user has read permissions on a given document by reading the last transaction in the CREATE → TRANSFER → TRANSFER → etc. chain for that user+document pair.
There are other ways to accomplish the same thing. The above is just one example.
You might have noticed that the above example didn't treat the “read permission” as an asset owned (controlled) by a user, because if the permission asset is given to (transferred to or created by) the user, then it cannot be controlled any further (by DocPile) until the user transfers it back to DocPile. Moreover, the user could transfer the asset to someone else, which might be problematic.
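The sketch below shows one way DocPile could record such a permission grant with the Python driver (``bigchaindb_driver``, the same driver used by the acceptance tests in this repository). It is only an illustration: the endpoint URL, the user and document IDs, and the metadata fields are placeholders, not part of any Planetmint API.

.. code-block:: python

    from bigchaindb_driver import BigchainDB
    from bigchaindb_driver.crypto import generate_keypair

    bdb = BigchainDB('https://example.com:9984')   # placeholder endpoint
    docpile = generate_keypair()                   # DocPile's signing keypair

    # One CREATE transaction per user+document pair.
    asset = {'data': {'user_id': 'user-123',       # placeholder IDs
                      'document_id': 'doc-456',
                      'permission': 'read'}}

    prepared = bdb.transactions.prepare(
        operation='CREATE',
        signers=docpile.public_key,   # the single output stays spendable by DocPile
        asset=asset,
        metadata={'granted': True})
    fulfilled = bdb.transactions.fulfill(prepared,
                                         private_keys=docpile.private_key)
    bdb.transactions.send_commit(fulfilled)

    # To revoke the permission later, DocPile would prepare a TRANSFER
    # transaction that spends this transaction's single output, with
    # metadata such as {'granted': False}.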
Storing Private Data On-Chain, Encrypted
========================================
There are many ways to store private data on-chain, encrypted. Every use case has its own objectives and constraints, and the best solution depends on the use case. `The IPDB consulting team <contact@ipdb.global>`_ can help you design the best solution for your use case.
Below we describe some example system setups, using various crypto primitives, to give a sense of what's possible.
Please note:
- Ed25519 keypairs are designed for signing and verifying cryptographic signatures, `not for encrypting and decrypting messages <https://crypto.stackexchange.com/questions/27866/why-curve25519-for-encryption-but-ed25519-for-signatures>`_. For encryption, you should use keypairs designed for encryption, such as X25519.
- If someone (or some group) publishes how to decrypt some encrypted data on-chain, then anyone with access to that encrypted data will be able to get the plaintext. The data can't be deleted.
- Encrypted data can't be indexed or searched by MongoDB. (It can index and search the ciphertext, but that's not very useful.) One might use homomorphic encryption to index and search encrypted data, but MongoDB doesn't have any plans to support that any time soon. If indexing or keyword search is needed, then some fields of the ``asset.data`` or ``metadata`` objects can be left as plain text and the sensitive information can be stored in an encrypted child-object.
System Example 1
~~~~~~~~~~~~~~~~
Encrypt the data with a symmetric key and store the ciphertext on-chain (in ``metadata`` or ``asset.data``). To communicate the key to a third party, use their public key to encrypt the symmetric key and send them that. They can decrypt the symmetric key with their private key, and then use that symmetric key to decrypt the on-chain ciphertext.
The reason for using a symmetric key along with public/private keypairs is so the ciphertext only has to be stored once.
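Here is a minimal sketch of that flow, assuming the PyNaCl library (an assumption; Planetmint does not prescribe a particular crypto library). In practice the ciphertext would be base64-encoded (or similar) before being stored in ``metadata`` or ``asset.data``.

.. code-block:: python

    import nacl.utils
    from nacl.secret import SecretBox
    from nacl.public import PrivateKey, SealedBox

    # Encrypt the payload once with a random symmetric key; the resulting
    # ciphertext is what would be stored on-chain (after base64-encoding).
    sym_key = nacl.utils.random(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(sym_key).encrypt(b'secret payload')

    # Wrap the symmetric key with the recipient's X25519 public key and
    # send the wrapped key to them off-chain.
    recipient = PrivateKey.generate()   # recipient's keypair (illustration only)
    wrapped_key = SealedBox(recipient.public_key).encrypt(sym_key)

    # The recipient unwraps the key and decrypts the on-chain ciphertext.
    recovered_key = SealedBox(recipient).decrypt(wrapped_key)
    assert SecretBox(recovered_key).decrypt(ciphertext) == b'secret payload'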
System Example 2
~~~~~~~~~~~~~~~~
This example uses `proxy re-encryption <https://en.wikipedia.org/wiki/Proxy_re-encryption>`_:
#. MegaCorp encrypts some data using its own public key, then stores that encrypted data (ciphertext 1) in a Planetmint network.
#. MegaCorp wants to let others read that encrypted data, but without ever sharing their private key and without having to re-encrypt the data themselves for every new recipient. Instead, they find a “proxy” named Moxie to provide proxy re-encryption services.
#. Zorban contacts MegaCorp and asks for permission to read the data.
#. MegaCorp asks Zorban for his public key.
#. MegaCorp generates a “re-encryption key” and sends it to their proxy, Moxie.
#. Moxie (the proxy) uses the re-encryption key to encrypt ciphertext 1, creating ciphertext 2.
#. Moxie sends ciphertext 2 to Zorban (or to MegaCorp who forwards it to Zorban).
#. Zorban uses his private key to decrypt ciphertext 2, getting the original un-encrypted data.
Note:
- The proxy only ever sees ciphertext. They never see any un-encrypted data.
- Zorban never got the ability to decrypt ciphertext 1, i.e. the on-chain data.
- There are variations on the above flow.
System Example 3
~~~~~~~~~~~~~~~~
This example uses `erasure coding <https://en.wikipedia.org/wiki/Erasure_code>`_:
#. Erasure-code the data into n pieces.
#. Encrypt each of the n pieces with a different encryption key.
#. Store the n encrypted pieces on-chain, e.g. in n separate transactions.
#. Share each of the n decryption keys with a different party.
If k of the n key-holders (where k < n is the reconstruction threshold) get together and decrypt their k pieces, they can reconstruct the original plaintext. Fewer than k pieces would not be enough.
System Example 4
~~~~~~~~~~~~~~~~
This setup could be used in an enterprise blockchain scenario where a special node should be able to see parts of the data, but the others should not.
- The special node generates an X25519 keypair (or similar asymmetric *encryption* keypair).
- A Planetmint end user finds out the X25519 public key (encryption key) of the special node.
- The end user creates a valid Planetmint transaction, with either the asset.data or the metadata (or both) encrypted using the above-mentioned public key.
- This is only done for transactions where the contents of asset.data or metadata don't matter for validation, so all node operators can validate the transaction.
- The special node is able to decrypt the encrypted data, but the other node operators can't, nor can any other end user (a sketch follows below).
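A minimal sketch of this setup, again assuming PyNaCl for the X25519 encryption and base64 for embedding the ciphertext in JSON; the endpoint, keypair names, and field names are placeholders.

.. code-block:: python

    import base64
    import json
    from nacl.public import PrivateKey, SealedBox
    from bigchaindb_driver import BigchainDB
    from bigchaindb_driver.crypto import generate_keypair

    special_node_key = PrivateKey.generate()        # held by the special node
    published_pubkey = special_node_key.public_key  # known to end users

    # The end user encrypts the sensitive part of the metadata.
    secret = json.dumps({'price': 42}).encode()
    sealed = SealedBox(published_pubkey).encrypt(secret)
    metadata = {'encrypted': base64.b64encode(sealed).decode()}

    alice = generate_keypair()
    bdb = BigchainDB('https://example.com:9984')    # placeholder endpoint
    tx = bdb.transactions.fulfill(
        bdb.transactions.prepare(
            operation='CREATE',
            signers=alice.public_key,
            asset={'data': {'note': 'payload encrypted for the special node'}},
            metadata=metadata),
        private_keys=alice.private_key)
    bdb.transactions.send_commit(tx)

    # Only the special node can recover the plaintext; validation never
    # needs to look inside the encrypted field.
    plaintext = SealedBox(special_node_key).decrypt(
        base64.b64decode(tx['metadata']['encrypted']))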

View File

@ -0,0 +1,131 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Basic usage
## Transactions in Planetmint
In Planetmint, _transactions_ are used to register, issue, create or transfer
things (e.g. assets).
Transactions are the most basic kind of record stored by Planetmint. There are
two kinds: CREATE transactions and TRANSFER transactions.
You can view the transaction specifications on GitHub, which describe transaction components and the conditions they have to fulfill in order to be valid.
[Planetmint Transactions Specs](https://github.com/bigchaindb/BEPs/tree/master/13/)
### CREATE Transactions
A CREATE transaction can be used to register, issue, create or otherwise
initiate the history of a single thing (or asset) in Planetmint. For example,
one might register an identity or a creative work. The things are often called
"assets" but they might not be literal assets.
Planetmint supports divisible assets as of Planetmint Server v0.8.0.
That means you can create/register an asset with an initial number of "shares."
For example, a CREATE transaction could register a truckload of 50 oak trees.
Each share of a divisible asset must be interchangeable with each other share;
the shares must be fungible.
A CREATE transaction can have one or more outputs.
Each output has an associated amount: the number of shares tied to that output.
For example, if the asset consists of 50 oak trees,
one output might have 35 oak trees for one set of owners,
and the other output might have 15 oak trees for another set of owners.
Each output also has an associated condition: the condition that must be met
(by a TRANSFER transaction) to transfer/spend the output.
Planetmint supports a variety of conditions.
For details, see
the section titled **Transaction Components: Conditions**
in the relevant
[Planetmint Transactions Spec](https://github.com/bigchaindb/BEPs/tree/master/13/).
![Example Planetmint CREATE transaction](./_static/CREATE_example.png)
Above we see a diagram of an example Planetmint CREATE transaction.
It has one output: Pam owns/controls three shares of the asset
and there are no other shares (because there are no other outputs).
Each output also has a list of all the public keys associated
with the conditions on that output.
Loosely speaking, that list might be interpreted as the list of "owners."
A more accurate word might be fulfillers, signers, controllers,
or transfer-enablers.
See the section titled **A Note about Owners**
in the relevant [Planetmint Transactions Spec](https://github.com/bigchaindb/BEPs/tree/master/13/).
A CREATE transaction must be signed by all the owners.
(If you're looking for that signature,
it's in the one "fulfillment" of the one input, albeit encoded.)
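For illustration, here is a minimal sketch of registering such a divisible asset (50 oak-tree shares split 35/15 across two outputs) with the Python driver, the same driver used by the acceptance tests in this repository. The endpoint and the key names (Alice, Bob) are placeholders, and the `recipients` format follows the driver's divisible-asset convention.

```python
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair

bdb = BigchainDB('https://example.com:9984')   # placeholder endpoint
alice, bob = generate_keypair(), generate_keypair()

prepared = bdb.transactions.prepare(
    operation='CREATE',
    signers=alice.public_key,
    # Two outputs: 35 shares for Alice, 15 shares for Bob.
    recipients=[([alice.public_key], 35), ([bob.public_key], 15)],
    asset={'data': {'asset_type': 'oak trees', 'shares': 50}})
create_tx = bdb.transactions.fulfill(prepared, private_keys=alice.private_key)
bdb.transactions.send_commit(create_tx)
```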
### TRANSFER Transactions
A TRANSFER transaction can transfer/spend one or more outputs
on other transactions (CREATE transactions or other TRANSFER transactions).
Those outputs must all be associated with the same asset;
a TRANSFER transaction can only transfer shares of one asset at a time.
Each input on a TRANSFER transaction connects to one output
on another transaction.
Each input must satisfy the condition on the output it's trying
to transfer/spend.
A TRANSFER transaction can have one or more outputs,
just like a CREATE transaction (described above).
The total number of shares coming in on the inputs must equal
the total number of shares going out on the outputs.
![Example Planetmint transactions](./_static/CREATE_and_TRANSFER_example.png)
Above we see a diagram of two example Planetmint transactions,
a CREATE transaction and a TRANSFER transaction.
The CREATE transaction is the same as in the earlier diagram.
The TRANSFER transaction spends Pam's output,
so the input on that TRANSFER transaction must contain a valid signature
from Pam (i.e. a valid fulfillment).
The TRANSFER transaction has two outputs:
Jim gets one share, and Pam gets the remaining two shares.
Terminology: The "Pam, 3" output is called a "spent transaction output"
and the "Jim, 1" and "Pam, 2" outputs are called "unspent transaction outputs"
(UTXOs).
**Example 1:** Suppose a red car is owned and controlled by Joe.
Suppose the current transfer condition on the car says
that any valid transfer must be signed by Joe.
Joe could build a TRANSFER transaction containing
an input with Joe's signature (to fulfill the current output condition)
plus a new output condition saying that any valid transfer
must be signed by Rae.
**Example 2:** Someone might construct a TRANSFER transaction
that fulfills the output conditions on four
previously-untransferred assets of the same asset type
e.g. paperclips. The amounts might be 20, 10, 45 and 25, say,
for a total of 100 paperclips.
The TRANSFER transaction would also set up new transfer conditions.
For example, maybe a set of 60 paperclips can only be transferred
if Gertrude signs, and a separate set of 40 paperclips can only be
transferred if both Jack and Kelly sign.
Note how the sum of the incoming paperclips must equal the sum
of the outgoing paperclips (100).
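Continuing the sketch from the CREATE section above, a TRANSFER transaction that spends Bob's 15-share output could be built like this (the input dictionary follows the Python driver's documented structure; Carol is a placeholder recipient):

```python
carol = generate_keypair()

# Spend the second output of the CREATE transaction (Bob's 15 shares).
output_index = 1
output = create_tx['outputs'][output_index]
transfer_input = {
    'fulfillment': output['condition']['details'],
    'fulfills': {'output_index': output_index,
                 'transaction_id': create_tx['id']},
    'owners_before': output['public_keys'],
}

prepared_transfer = bdb.transactions.prepare(
    operation='TRANSFER',
    asset={'id': create_tx['id']},          # shares of a single asset only
    inputs=transfer_input,
    recipients=[([carol.public_key], 15)])  # outputs must sum to the inputs
transfer_tx = bdb.transactions.fulfill(
    prepared_transfer,
    private_keys=bob.private_key)           # Bob must satisfy his output's condition
bdb.transactions.send_commit(transfer_tx)
```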
### Transaction Validity
When a node is asked to check if a transaction is valid, it checks several
things. We documented those things in a post on *The Planetmint Blog*:
["What is a Valid Transaction in Planetmint?"](https://blog.bigchaindb.com/what-is-a-valid-transaction-in-bigchaindb-9a1a075a9598)
(Note: That post was about Planetmint Server v1.0.0.)
### Example Transactions
There are example Planetmint transactions in
[the HTTP API documentation](./installation/api/http-client-server-api)
and
[the Python Driver documentation](./drivers/index).

409
docs/root/source/conf.py Normal file
View File

@ -0,0 +1,409 @@
#!/usr/bin/env python3
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Planetmint documentation build configuration file, created by
# sphinx-quickstart on Thu Sep 29 11:13:27 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import wget
import datetime
import os
import sys
import inspect
from os import rename, remove
from recommonmark.parser import CommonMarkParser
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# get version
_version = {}
with open('../../../planetmint/version.py') as fp:
exec(fp.read(), _version)
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
#sys.path.insert(0, "/home/myname/pythonfiles")
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
import sphinx_rtd_theme
extensions = [
'sphinx.ext.autosectionlabel',
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'sphinx.ext.todo',
'sphinx.ext.napoleon',
'sphinxcontrib.httpdomain',
'aafigure.sphinxext',
# Below are actually build steps made to look like sphinx extensions.
# It was the easiest way to get it running with ReadTheDocs.
'generate_http_server_api_documentation',
]
try:
remove('contributing/cross-project-policies/code-of-conduct.md')
remove('contributing/cross-project-policies/release-process.md')
remove('contributing/cross-project-policies/python-style-guide.md')
except:
print('done')
def get_old_new(url, old, new):
filename = wget.download(url)
rename(old, new)
get_old_new('https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/CODE_OF_CONDUCT.md',
'CODE_OF_CONDUCT.md', 'contributing/cross-project-policies/code-of-conduct.md')
get_old_new('https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/RELEASE_PROCESS.md',
'RELEASE_PROCESS.md', 'contributing/cross-project-policies/release-process.md')
get_old_new('https://raw.githubusercontent.com/bigchaindb/bigchaindb/master/PYTHON_STYLE_GUIDE.md',
'PYTHON_STYLE_GUIDE.md', 'contributing/cross-project-policies/python-style-guide.md')
suppress_warnings = ['misc.highlighting_failure']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# autodoc settings
autodoc_member_order = 'bysource'
autodoc_default_options = {
'members': None,
}
source_parsers = {
'.md': CommonMarkParser,
}
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = ['.rst', '.md']
# The encoding of source files.
#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'Planetmint'
now = datetime.datetime.now()
copyright = str(now.year) + ', Planetmint Contributors'
author = 'Planetmint Contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = _version['__short_version__']
# The full version, including alpha/beta/rc tags.
release = _version['__version__']
# The full version, including alpha/beta/rc tags.
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#
# today = ''
#
# Else, today_fmt is used as the format for a strftime call.
#
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#
# html_title = 'Planetmint v0.1'
# A shorter title for the navigation bar. Default is the same as html_title.
#
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#
# html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#
# html_extra_path = []
# If not None, a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# The empty string is equivalent to '%b %d, %Y'.
#
# html_last_updated_fmt = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#
# html_additional_pages = {}
# If false, no module index is generated.
#
# html_domain_indices = True
# If false, no index is generated.
#
# html_use_index = True
# If true, the index is split into individual pages for each letter.
#
# html_split_index = False
# If true, links to the reST sources are added to the pages.
#
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
#
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' user can custom change `jieba` dictionary path.
#
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'BigchainDBdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'Planetmint.tex', 'Planetmint Documentation',
'Planetmint Contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False
# If true, show page references after internal links.
#
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
#
# latex_appendices = []
# If false, will not define \strong, \code, \titleref, \crossref ... but only
# \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added
# packages.
#
# latex_keep_old_macro_names = True
# If false, no module index is generated.
#
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'bigchaindb', 'Planetmint Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'Planetmint', 'Planetmint Documentation',
author, 'Planetmint', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#
# texinfo_appendices = []
# If false, no module index is generated.
#
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#
# texinfo_no_detailmenu = False

View File

@ -0,0 +1,57 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
BigchainDB and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of
fostering an open and welcoming community, we pledge to respect all people who
contribute to the project.
We are committed to making participation in this project a harassment-free
experience for everyone, regardless of level of experience, gender, gender
identity and expression, sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, nationality, or species--no picking on Wrigley for being a buffalo!
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic
addresses, without explicit permission
* Deliberate intimidation
* Other unethical or unprofessional conduct
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
By adopting this Code of Conduct, project maintainers commit themselves to
fairly and consistently applying these principles to every aspect of managing
this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior directed at yourself or another community member may be
reported by contacting a project maintainer at [contact@bigchaindb.com](mailto:contact@bigchaindb.com). All
complaints will be reviewed and investigated and will result in a response that
is appropriate to the circumstances. Maintainers are
obligated to maintain confidentiality with regard to the reporter of an
incident.
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 1.3.0, available at
[http://contributor-covenant.org/version/1/3/0/][version]
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/3/0/

View File

@ -0,0 +1,18 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Policies
========
.. toctree::
:maxdepth: 1
code-of-conduct
python-style-guide
JavaScript Style Guide <https://github.com/ascribe/javascript>
release-process

View File

@ -0,0 +1,97 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
BigchainDB and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Python Style Guide
This guide starts out with our general Python coding style guidelines and ends with a section on how we write & run (Python) tests.
## General Python Coding Style Guidelines
Our starting point is [PEP8](https://www.python.org/dev/peps/pep-0008/), the standard "Style Guide for Python Code." Many Python IDEs will check your code against PEP8. (Note that PEP8 isn't frozen; it actually changes over time, but slowly.)
BigchainDB uses Python 3.5+, so you can ignore all PEP8 guidelines specific to Python 2.
We use [pre-commit](http://pre-commit.com/) to check some of the rules below before every commit, but not all of them are covered by hooks yet.
The hooks we use can be found in the [.pre-commit-config.yaml](https://github.com/bigchaindb/bigchaindb/blob/master/.pre-commit-config.yaml) file.
### Python Docstrings
PEP8 says some things about docstrings, but not what to put in them or how to structure them. [PEP257](https://www.python.org/dev/peps/pep-0257/) was one proposal for docstring conventions, but we prefer [Google-style docstrings](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments) instead: they're easier to read and the [napoleon extension](http://www.sphinx-doc.org/en/stable/ext/napoleon.html) for Sphinx lets us turn them into nice-looking documentation. Here are some references on Google-style docstrings:
* [Google's docs on Google-style docstrings](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments)
* [napoleon's docs include an overview of Google-style docstrings](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/index.html)
* [Example Google-style docstrings](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/example_google.html) (from napoleon's docs)
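For example, a function documented with a Google-style docstring might look like this (an illustrative sketch, not code from the codebase):
```python
def count_long_lines(path, max_length=79):
    """Count the lines in a file that exceed a maximum length.

    Args:
        path (str): Path of the file to inspect.
        max_length (int): Longest allowed line length. Defaults to 79.

    Returns:
        int: The number of lines longer than ``max_length``.

    Raises:
        OSError: If the file cannot be opened.
    """
    with open(path) as f:
        return sum(1 for line in f if len(line.rstrip('\n')) > max_length)
```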
### Maximum Line Length
PEP8 has some [maximum line length guidelines](https://www.python.org/dev/peps/pep-0008/#id17), starting with "Limit all lines to a maximum of 79 characters" but "for flowing long blocks of text with fewer structural restrictions (docstrings or comments), the line length should be limited to 72 characters."
We discussed this at length, and it seems that the consensus is: _try_ to keep line lengths less than 79/72 characters, unless you have a special situation where longer lines would improve readability. (The basic reason is that 79/72 works for everyone, and BigchainDB is an open source project.) As a hard limit, keep all lines less than 119 characters (which is the width of GitHub code review).
### Single or Double Quotes?
Python lets you use single or double quotes. PEP8 says you can use either, as long as you're consistent. We try to stick to using single quotes, except in cases where using double quotes is more readable. For example:
```python
print('This doesn\'t look so nice.')
print("Doesn't this look nicer?")
```
### Breaking Strings Across Multiple Lines
Should we use parentheses or slashes (`\`) to break strings across multiple lines, i.e.
```python
my_string = ('This is a very long string, so long that it will not fit into just one line '
'so it must be split across multiple lines.')
# or
my_string = 'This is a very long string, so long that it will not fit into just one line ' \
'so it must be split across multiple lines.'
```
It seems the preference is for slashes, but using parentheses is okay too. (There are good arguments either way. Arguing about it seems like a waste of time.)
### How to Format Long import Statements
If you need to `import` lots of names from a module or package, and they won't all fit in one line (without making the line too long), then use parentheses to spread the names across multiple lines, like so:
```python
from Tkinter import (
Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END,
)
# Or
from Tkinter import (Tk, Frame, Button, Entry, Canvas, Text,
LEFT, DISABLED, NORMAL, RIDGE, END)
```
For the rationale, see [PEP 328](https://www.python.org/dev/peps/pep-0328/#rationale-for-parentheses).
### Using the % operator or `format()` to Format Strings
Given the choice:
```python
x = 'name: %s; score: %d' % (name, n)
# or
x = 'name: {}; score: {}'.format(name, n)
```
we use the `format()` version. The [official Python documentation says](https://docs.python.org/2/library/stdtypes.html#str.format), "This method of string formatting is the new standard in Python 3, and should be preferred to the % formatting described in String Formatting Operations in new code."
## Running the Flake8 Style Checker
We use [Flake8](http://flake8.pycqa.org/en/latest/index.html) to check our Python code style. Once you have it installed, you can run it using:
```text
flake8 --max-line-length 119 bigchaindb/
```
## Writing and Running (Python) Tests
The content of this section was moved to [`bigchaindb/tests/README.md`](https://github.com/bigchaindb/bigchaindb/blob/master/tests/README.md).
Note: We automatically run all tests on all pull requests (using Travis CI), so you should definitely run all tests locally before you submit a pull request. See the above-linked README file for instructions.

View File

@ -0,0 +1,101 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
BigchainDB and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Our Release Process
## Notes
BigchainDB follows
[the Python form of Semantic Versioning](https://packaging.python.org/tutorials/distributing-packages/#choosing-a-versioning-scheme)
(i.e. MAJOR.MINOR.PATCH),
which is almost identical
to [regular semantic versioning](http://semver.org/), but there's no hyphen, e.g.
- `0.9.0` for a typical final release
- `4.5.2a1` not `4.5.2-a1` for the first Alpha release
- `3.4.5rc2` not `3.4.5-rc2` for Release Candidate 2
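As a quick illustration of that format, here is a short Python sketch (it assumes the third-party `packaging` library, which is not itself part of the release process):
```python
# Illustrative only: check how the hyphen-less version strings are interpreted.
from packaging.version import Version

assert not Version('0.9.0').is_prerelease      # a typical final release
assert Version('4.5.2a1').is_prerelease        # Alpha 1 -- note: no hyphen
assert Version('3.4.5rc2').pre == ('rc', 2)    # Release Candidate 2
assert Version('4.5.2a1') < Version('4.5.2')   # pre-releases sort before finals
```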
**Note 1:** For Git tags (which are used to identify releases on GitHub), we append a `v` in front. For example, the Git tag for version `2.0.0a1` was `v2.0.0a1`.
**Note 2:** For Docker image tags (e.g. on Docker Hub), we use longer version names for Alpha, Beta and Release Candidate releases. For example, the Docker image tag for version `2.0.0a2` was `2.0.0-alpha2`.
We use `0.9` and `0.9.0` as example version and short-version values below. You should replace those with the correct values for your new version.
We follow [BEP-1](https://github.com/bigchaindb/BEPs/tree/master/1), which is our variant of C4, the Collective Code Construction Contract, so a release is just a [tagged commit](https://git-scm.com/book/en/v2/Git-Basics-Tagging) on the `master` branch, i.e. a label for a particular Git commit.
The following steps are what we do to release a new version of _BigchainDB Server_. The steps to release the Python Driver are similar but not the same.
## Steps
1. Create a pull request where you make the following changes:
- Update `CHANGELOG.md`
- Update all Docker image tags in all Kubernetes YAML files (in the `k8s/` directory).
For example, in the files:
- `k8s/bigchaindb/bigchaindb-ss.yaml` and
- `k8s/dev-setup/bigchaindb.yaml`
find the line of the form `image: bigchaindb/bigchaindb:0.8.1` and change the version number to the new version number, e.g. `0.9.0`. (This is the Docker image that Kubernetes should pull from Docker Hub.)
Keep in mind that this is a _Docker image tag_ so our naming convention is
a bit different; see Note 2 in the **Notes** section above.
- In `bigchaindb/version.py`:
- update `__version__` to e.g. `0.9.0` (with no `.dev` on the end)
- update `__short_version__` to e.g. `0.9` (with no `.dev` on the end)
- In the docs about installing BigchainDB (and Tendermint), and in the associated scripts, recommend/install a version of Tendermint that _actually works_ with the soon-to-be-released version of BigchainDB. You can find all such references by doing a search for the previously-recommended version number, such as `0.31.5`.
- In `setup.py`, _maybe_ update the development status item in the `classifiers` list. For example, one allowed value is `"Development Status :: 5 - Production/Stable"`. The [allowed values are listed at pypi.python.org](https://pypi.python.org/pypi?%3Aaction=list_classifiers).
2. **Wait for all the tests to pass!**
3. Merge the pull request into the `master` branch.
4. Go to the [bigchaindb/bigchaindb Releases page on GitHub](https://github.com/bigchaindb/bigchaindb/releases)
and click the "Draft a new release" button.
5. Fill in the details:
- **Tag version:** version number preceded by `v`, e.g. `v0.9.1`
- **Target:** the last commit that was just merged. In other words, that commit will get a Git tag with the value given for tag version above.
- **Title:** Same as tag version above, e.g `v0.9.1`
- **Description:** The body of the changelog entry (Added, Changed, etc.)
6. Click "Publish release" to publish the release on GitHub.
7. On your local computer, make sure you're on the `master` branch and that it's up-to-date with the `master` branch in the bigchaindb/bigchaindb repository (e.g. `git pull upstream master`). We're going to use that to push a new `bigchaindb` package to PyPI.
8. Make sure you have a `~/.pypirc` file containing credentials for PyPI.
9. Do `make release` to build and publish the new `bigchaindb` package on PyPI. For this step you need to have `twine` installed. If you get an error like `Makefile:135: recipe for target 'clean-pyc' failed` then try doing
```text
sudo chown -R $(whoami):$(whoami) .
```
10. [Log in to readthedocs.org](https://readthedocs.org/accounts/login/) and go to the **BigchainDB Server** project, then:
- Click on "Builds", select "latest" from the drop-down menu, then click the "Build Version:" button.
- Wait for the build of "latest" to finish. This can take a few minutes.
- Go to Admin --> Advanced Settings
and make sure that "Default branch:" (i.e. what "latest" points to)
is set to the new release's tag, e.g. `v0.9.1`.
(It won't be an option if you didn't wait for the build of "latest" to finish.)
Then scroll to the bottom and click "Save".
- Go to Admin --> Versions
and under **Choose Active Versions**, do these things:
1. Make sure that the new version's tag is "Active" and "Public"
2. Make sure the **stable** branch is _not_ active.
3. Scroll to the bottom of the page and click "Save".
11. Go to [Docker Hub](https://hub.docker.com/) and sign in, then:
- Click on "Organizations"
- Click on "bigchaindb"
- Click on "bigchaindb/bigchaindb"
- Click on "Build Settings"
- Find the row where "Docker Tag Name" equals `latest`
and change the value of "Name" to the name (Git tag)
of the new release, e.g. `v0.9.0`.
- If the release is an Alpha, Beta or Release Candidate release,
then a new row must be added.
You can do that by clicking the green "+" (plus) icon.
The contents of the new row should be similar to the existing rows
of previous releases like that.
- Click on "Tags"
- Delete the "latest" tag (so we can rebuild it)
- Click on "Build Settings" again
- Click on the "Trigger" button for the "latest" tag and make sure it worked by clicking on "Tags" again
- If the release is an Alpha, Beta or Release Candidate release,
then click on the "Trigger" button for that tag as well.
Congratulations, you have released a new version of BigchainDB Server!

View File

@ -0,0 +1,17 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Developer Setup, Coding & Contribution Process
==============================================
.. toctree::
:maxdepth: 2
write-code
run-node-with-docker-compose
run-node-as-processes
run-dev-network-stack
run-dev-network-ansible

View File

@ -0,0 +1,170 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Run a Planetmint network with Ansible
**NOT for Production Use**
You can use the following instructions to deploy a single or multi node
Planetmint network for dev/test using Ansible. Ansible will configure the Planetmint node(s).
Currently, this workflow is only supported for the following distributions:
- Ubuntu >= 16.04
- CentOS >= 7
- Fedora >= 24
- MacOSX
## Minimum Requirements
Minimum resource requirements for a single node Planetmint dev setup. **The more the better**:
- Memory >= 512MB
- VCPUs >= 1
## Clone the Planetmint repository
```text
$ git clone https://github.com/bigchaindb/bigchaindb.git
```
## Install dependencies
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html)
You can also install `ansible` and other dependencies, if any, using the `bootstrap.sh` script
inside the Planetmint repository.
Navigate to `bigchaindb/pkg/scripts` and run the `bootstrap.sh` script to install the dependencies
for your OS. The script also checks if the OS you are running is compatible with the
supported versions.
**Note**: `bootstrap.sh` only supports Ubuntu >= 16.04, CentOS >= 7, Fedora >= 24 and MacOSX.
```text
$ cd bigchaindb/pkg/scripts/
$ bash bootstrap.sh --operation install
```
### Planetmint Setup Configuration(s)
#### Local Setup
You can run the Ansible playbook `bigchaindb-start.yml` on your local dev machine to set up a Planetmint node where
Planetmint runs as a process or inside Docker container(s), depending on your configuration.
Before running the playbook locally, you need to update the `hosts` and `stack-config.yml` configuration files, which tells Ansible to run the play locally.
##### Update Hosts
Navigate to `bigchaindb/pkg/configuration/hosts` inside the Planetmint repository.
```text
$ cd bigchaindb/pkg/configuration/hosts
```
Edit `all` configuration file:
```text
# Delete any existing configuration in this file and insert
# Hostname of dev machine
<HOSTNAME> ansible_connection=local
```
##### Update Configuration
Navigate to `bigchaindb/pkg/configuration/vars` inside the Planetmint repository.
```text
$ cd bigchaindb/pkg/configuration/vars/
```
Edit the `stack-config.yml` configuration file as per your requirements. Sample configuration(s):
```text
---
stack_type: "docker"
stack_size: "4"
OR
---
stack_type: "local"
stack_type: "1"
```
### Planetmint Setup
Now you can safely run the `bigchaindb-start.yml` playbook and everything will be taken care of by Ansible. To run the playbook, navigate to the `bigchaindb/pkg/configuration` directory inside the Planetmint repository and run the `bigchaindb-start.yml` playbook:
```text
$ cd bigchaindb/pkg/configuration/
$ ansible-playbook bigchaindb-start.yml -i hosts/all --extra-vars "operation=start home_path=$(pwd)"
```
After successful execution of the playbook, you can verify that the Planetmint Docker container(s)/process(es) are running.
Verify Planetmint process(es):
```text
$ ps -ef | grep bigchaindb
```
OR
Verify Planetmint Docker(s):
```text
$ docker ps | grep bigchaindb
```
You can now send transactions and verify the functionality of your Planetmint node.
See the [Planetmint Python Driver documentation](https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html)
for details on how to use it.
**Note**: The `bdb_root_url` can be one of the following:
```text
# Planetmint is running as a process
bdb_root_url = http://<HOST-IP>:9984
OR
# Planetmint is running inside a docker container
bdb_root_url = http://<HOST-IP>:<DOCKER-PUBLISHED-PORT>
```
**Note**: Planetmint has [other drivers as well](http://docs.bigchaindb.com/projects/server/en/latest/drivers-clients/index.html).
### Experimental: Running Ansible on a Remote Dev Host
#### Remote Setup
You can also run the Ansible playbook `bigchaindb-start.yml` on remote machine(s) to set up a Planetmint node where
Planetmint runs as a process or inside Docker container(s), depending on your configuration.
Before running the playbook on a remote host, you need to update the `hosts` and `stack-config.yml` configuration files, which tells Ansible to run the play on the remote host(s).
##### Update Remote Hosts
Navigate to `bigchaindb/pkg/configuration/hosts` inside the Planetmint repository.
```text
$ cd bigchaindb/pkg/configuration/hosts
```
Edit `all` configuration file:
```text
# Delete any existing configuration in this file and insert
<Remote_Host_IP/Hostname> ansible_ssh_user=<USERNAME> ansible_sudo_pass=<PASSWORD>
```
**Note**: You can add multiple hosts to the `all` configuration file. A non-root user with a sudo password is needed because Ansible runs some tasks that require elevated permissions.
**Note**: You can also use methods other than password-based SSH to access the remote machines. For other methods,
please consult the [Ansible Documentation](http://docs.ansible.com/ansible/latest/intro_getting_started.html).
##### Update Remote Configuration
Navigate to `bigchaindb/pkg/configuration/vars` inside the Planetmint repository.
```text
$ cd bigchaindb/pkg/configuration/vars/
```
Edit the `stack-config.yml` configuration file as per your requirements. Sample configuration(s):
```text
---
stack_type: "docker"
stack_size: "4"
OR
---
stack_type: "local"
stack_type: "1"
```
After configuring the remote hosts, [run the Ansible playbook and verify your deployment](#bigchaindb-setup-ansible).

View File

@ -0,0 +1,324 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Run a Planetmint network
**NOT for Production Use**
You can use the following instructions to deploy a single or multi node
Planetmint network for dev/test using the extensible `stack` script(s).
Currently, this workflow is only supported for the following Operating systems:
- Ubuntu >= 16.04
- CentOS >= 7
- Fedora >= 24
- MacOSX
## Machine Minimum Requirements
Minimum resource requirements for a single node Planetmint dev setup. **The more the better**:
- Memory >= 512MB
- VCPUs >= 1
## Download the scripts
> **Note**: If you're working on Planetmint Server code, on a branch based on
> recent code, then you already have local recent versions of *stack.sh* and
> *unstack.sh* in your bigchaindb/pkg/scripts/ directory. Otherwise you can
> get them using:
```text
$ export GIT_BRANCH=master
$ curl -fOL https://raw.githubusercontent.com/bigchaindb/bigchaindb/${GIT_BRANCH}/pkg/scripts/stack.sh
# Optional
$ curl -fOL https://raw.githubusercontent.com/bigchaindb/bigchaindb/${GIT_BRANCH}/pkg/scripts/unstack.sh
```
## Quick Start
If you run `stack.sh` out of the box, i.e. without any configuration changes, you will deploy a 4-node
Planetmint network using Docker containers, built from the `master` branch of the `bigchaindb/bigchaindb` repo with Tendermint version `0.22.8`.
**Note**: Run `stack.sh` as root or as a non-root user with sudo enabled.
```text
$ bash stack.sh
...Logs..
.........
.........
Finished stacking!
```
## Configure the Planetmint network
The `stack.sh` script supports multiple deployment methods and parameters, which can be explored using `bash stack.sh -h`:
```text
$ bash stack.sh -h
Usage: $ bash stack.sh [-h]
Deploys the Planetmint network.
ENV[STACK_SIZE]
Set STACK_SIZE environment variable to the size of the network you desire.
Network mimics a production network environment with single or multiple BDB
nodes. (default: 4).
ENV[STACK_TYPE]
Set STACK_TYPE environment variable to the type of deployment you desire.
You can set it one of the following: ["docker", "local", "cloud"].
(default: docker)
ENV[STACK_TYPE_PROVIDER]
Set only when STACK_TYPE="cloud". Only "azure" is supported.
(default: )
ENV[STACK_VM_MEMORY]
(Optional) Set only when STACK_TYPE="local". This sets the memory
of the instance(s) spawned. (default: 2048)
ENV[STACK_VM_CPUS]
(Optional) Set only when STACK_TYPE="local". This sets the number of VCPUs
of the instance(s) spawned. (default: 2)
ENV[STACK_BOX_NAME]
(Optional) Set only when STACK_TYPE="local". This sets the Vagrant box name
of the instance(s) spawned. (default: ubuntu/xenial64)
ENV[STACK_REPO]
(Optional) To configure bigchaindb repo to use, set STACK_REPO environment
variable. (default: bigchaindb/bigchaindb)
ENV[STACK_BRANCH]
(Optional) To configure bigchaindb repo branch to use set STACK_BRANCH environment
variable. (default: master)
ENV[TM_VERSION]
(Optional) Tendermint version to use for the setup. (default: 0.22.8)
ENV[MONGO_VERSION]
(Optional) MongoDB version to use with the setup. (default: 3.6)
ENV[AZURE_CLIENT_ID]
Only required when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure". Steps to generate:
https://github.com/Azure/vagrant-azure#create-an-azure-active-directory-aad-application
ENV[AZURE_TENANT_ID]
Only required when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure". Steps to generate:
https://github.com/Azure/vagrant-azure#create-an-azure-active-directory-aad-application
ENV[AZURE_SUBSCRIPTION_ID]
Only required when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure". Steps to generate:
https://github.com/Azure/vagrant-azure#create-an-azure-active-directory-aad-application
ENV[AZURE_CLIENT_SECRET]
Only required when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure". Steps to generate:
https://github.com/Azure/vagrant-azure#create-an-azure-active-directory-aad-application
ENV[AZURE_REGION]
(Optional) Only applicable, when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure".
Azure region for the Planetmint instance. Get list of regions using Azure CLI.
e.g. az account list-locations. (default: westeurope)
ENV[AZURE_IMAGE_URN]
(Optional) Only applicable, when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure".
Azure image to use. Get list of available images using Azure CLI.
e.g. az vm image list --output table. (default: Canonical:UbuntuServer:16.04-LTS:latest)
ENV[AZURE_RESOURCE_GROUP]
(Optional) Only applicable, when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure".
Name of Azure resource group for the instance.
(default: bdb-vagrant-rg-2018-05-30)
ENV[AZURE_DNS_PREFIX]
(Optional) Only applicable, when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure".
DNS Prefix of the instance. (default: bdb-instance-2018-05-30)
ENV[AZURE_ADMIN_USERNAME]
(Optional) Only applicable, when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure".
Admin username of the instance. (default: vagrant)
ENV[AZURE_VM_SIZE]
(Optional) Only applicable, when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure".
Azure VM size. (default: Standard_D2_v2)
ENV[SSH_PRIVATE_KEY_PATH]
Only required when STACK_TYPE="cloud" and STACK_TYPE_PROVIDER="azure". Absolute path of
SSH keypair required to log into the Azure instance.
-h
Show this help and exit.
```
The parameter that differentiates between deployment types is `STACK_TYPE`, which currently supports
opinionated deployments of Planetmint on `docker`, `local` and `cloud`.
### STACK_TYPE: docker
This configuration deploys a docker based Planetmint network on the dev/test machine that you are running `stack.sh` on. This is also the default `STACK_TYPE` config for `stack.sh`.
#### Example: docker
Deploy a 4 node docker based Planetmint network on your host.
```text
#Optional, since 4 is the default size.
$ export STACK_SIZE=4
#Optional, since docker is the default type.
$ export STACK_TYPE=docker
#Optional, repo to use for the network deployment
# Default: bigchaindb/bigchaindb
$ export STACK_REPO=bigchaindb/bigchaindb
#Optional, codebase to use for the network deployment
# Default: master
$ export STACK_BRANCH=master
#Optional, since 0.22.8 is the default tendermint version.
$ export TM_VERSION=0.22.8
#Optional, since 3.6 is the default MongoDB version.
$ export MONGO_VERSION=3.6
$ bash stack.sh
```
**Note**: For MacOSX users, the script will not install `Docker for Mac`; it only detects whether `docker` is installed on the system. If it is not, make sure to install [Docker for Mac](https://docs.docker.com/docker-for-mac/). Also make sure the Docker API version is > 1.25. To check the Docker API version:
`docker version --format '{{.Server.APIVersion}}'`
### STACK_TYPE: local
This configuration deploys a VM-based Planetmint network on your host/dev machine. All the services run as processes on the VMs. For `local` deployments, the following dependencies must be installed:
- Vagrant
- Vagrant plugins.
- vagrant-cachier
- vagrant-vbguest
- vagrant-hosts
- vagrant-azure
- `vagrant plugin install vagrant-cachier vagrant-vbguest vagrant-hosts vagrant-azure`
- [Virtualbox](https://www.virtualbox.org/wiki/Downloads)
#### Example: VM
Deploy a 4 node VM based Planetmint network.
```text
$ export STACK_TYPE=local
# Optional, since 4 is the default size.
$ export STACK_SIZE=4
# Optional, default is 2048
$ export STACK_VM_MEMORY=2048
#Optional, default is 1
$ export STACK_VM_CPUS=1
#Optional, default is ubuntu/xenial64. Supported/tested images: bento/centos-7, fedora/25-cloud-base
$ export STACK_BOX_NAME=ubuntu/xenial64
#Optional, repo to use for the network deployment
# Default: bigchaindb/bigchaindb
$ export STACK_REPO=bigchaindb/bigchaindb
#Optional, codebase to use for the network deployment
# Default: master
$ export STACK_BRANCH=master
#Optional, since 0.22.8 is the default tendermint version
$ export TM_VERSION=0.22.8
#Optional, since 3.6 is the default MongoDB version.
$ export MONGO_VERSION=3.6
$ bash stack.sh
```
### STACK_TYPE: cloud
This configuration deploys a docker based Planetmint network on a cloud instance. Currently, only Azure is supported.
For `cloud` deployments, the following dependencies must be installed:
- Vagrant
- Vagrant plugins.
- vagrant-cachier
- vagrant-vbguest
- vagrant-hosts
- vagrant-azure
- `vagrant plugin install vagrant-cachier vagrant-vbguest vagrant-hosts vagrant-azure`
#### Example: stack
Deploy a 4 node docker based Planetmint network on an Azure instance.
- [Create an Azure Active Directory(AAD) Application](https://github.com/Azure/vagrant-azure#create-an-azure-active-directory-aad-application)
- Generate or specify an SSH keypair to login to the Azure instance.
- **Example:**
```text
$ ssh-keygen -t rsa -C "<name>" -f /path/to/key/<name>
```
- Configure parameters for `stack.sh`
```text
# After creating the AAD application with access to Azure Resource
# Group Manager for your subscription, it will return a JSON object
$ export AZURE_CLIENT_ID=<value from azure.appId>
$ export AZURE_TENANT_ID=<value from azure.tenant>
# Can be retrieved via
# az account list --query "[?isDefault].id" -o tsv
$ export AZURE_SUBSCRIPTION_ID=<your Azure subscription ID>
$ export AZURE_CLIENT_SECRET=<value from azure.password>
$ export STACK_TYPE=cloud
# Currently only azure is supported
$ export STACK_TYPE_PROVIDER=azure
$ export SSH_PRIVATE_KEY_PATH=</path/to/private/key>
# Optional, Azure region of the instance. Default: westeurope
$ export AZURE_REGION=westeurope
# Optional, Azure image urn of the instance. Default: Canonical:UbuntuServer:16.04-LTS:latest
$ export AZURE_IMAGE_URN=Canonical:UbuntuServer:16.04-LTS:latest
# Optional, Azure resource group. Default: bdb-vagrant-rg-yyyy-mm-dd(current date)
$ export AZURE_RESOURCE_GROUP=bdb-vagrant-rg-2018-01-01
# Optional, DNS prefix of the Azure instance. Default: bdb-instance-yyyy-mm-dd(current date)
$ export AZURE_DNS_PREFIX=bdb-instance-2018-01-01
# Optional, Admin username of the Azure instance. Default: vagrant
$ export AZURE_ADMIN_USERNAME=vagrant
# Optional, Azure instance size. Default: Standard_D2_v2
$ export AZURE_VM_SIZE=Standard_D2_v2
$ bash stack.sh
```
## Delete/Unstack a Planetmint network
Export the same variables you exported for the corresponding `stack.sh` run, then
run `unstack.sh` to delete/remove/unstack the Planetmint network/stack.
```text
$ bash unstack.sh
OR
# -s implies soft unstack. i.e. Only applicable for local and cloud based
# networks. Only deletes/stops the docker(s)/process(es) and does not
# delete the instances created via Vagrant or on Cloud. Default: hard
$ bash unstack.sh -s
```

View File

@ -0,0 +1,138 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Notes on Running a Local Dev Node as Processes
The following doc describes how to run a local node for developing the Tendermint-based version of Planetmint.
There are two crucial dependencies required to start a local node:
- MongoDB
- Tendermint
and of course you also need to install Planetmint Server from the local code you just developed.
## Install and Run MongoDB
MongoDB can be easily installed; just refer to the [installation documentation](https://docs.mongodb.com/manual/installation/) for your distro.
We know MongoDB 3.4 and 3.6 work with Planetmint.
After the installation of MongoDB is complete, run it using `sudo mongod`.
## Install and Run Tendermint
### Installing a Tendermint Executable
The version of Planetmint Server described in these docs only works well with Tendermint 0.31.5 (not a higher version number). Install that:
```bash
$ sudo apt install -y unzip
$ wget https://github.com/tendermint/tendermint/releases/download/v0.31.5/tendermint_v0.31.5_linux_amd64.zip
$ unzip tendermint_v0.31.5_linux_amd64.zip
$ rm tendermint_v0.31.5_linux_amd64.zip
$ sudo mv tendermint /usr/local/bin
```
### Installing Tendermint Using Docker
Tendermint can be run directly using the docker image. Refer [here](https://hub.docker.com/r/tendermint/tendermint/) for more details.
### Installing Tendermint from Source
Before installing Tendermint from source, ensure that Go is installed on the system and that `$GOPATH` is set in your `.bashrc` or `.zshrc`. An example setup is shown below:
```bash
$ echo $GOPATH
/home/user/Documents/go
$ go -h
Go is a tool for managing Go source code.
Usage:
go command [arguments]
The commands are:
build compile packages and dependencies
clean remove object files
...
```
- Add `$GOPATH/bin` to `PATH` so that installed Go packages are directly available in the shell. To do that, add the following to your `.bashrc`:
```bash
export PATH=${PATH}:${GOPATH}/bin
```
Follow [the Tendermint docs](https://tendermint.com/docs/introduction/install.html#from-source) to install Tendermint from source.
If the installation is successful, Tendermint is installed at `$GOPATH/bin`. To verify the installation, execute the following command:
```bash
$ tendermint -h
Tendermint Core (BFT Consensus) in Go
Usage:
tendermint [command]
Available Commands:
gen_validator Generate new validator keypair
help Help about any command
init Initialize Tendermint
...
```
### Running Tendermint
- We can initialize and run Tendermint as follows:
```bash
$ tendermint init
...
$ tendermint node --consensus.create_empty_blocks=false
```
The argument `--consensus.create_empty_blocks=false` specifies that Tendermint should not commit empty blocks.
- To reset all the data stored in Tendermint, execute the following command:
```bash
$ tendermint unsafe_reset_all
```
## Install Planetmint
To install Planetmint from source (for dev), clone the repo and execute the following commands (ideally inside a virtualenv):
```bash
$ git clone https://github.com/bigchaindb/bigchaindb.git
$ cd bigchaindb
$ pip install -e .[dev] # or pip install -e '.[dev]' # for zsh
```
## Running All Tests
To execute tests when developing a feature or fixing a bug, use the following command:
```bash
$ pytest -v
```
NOTE: MongoDB and Tendermint should be running as discussed above.
You can run a specific group of marked tests by appending `-m my_mark` to the above command.
The above should prove sufficient in most cases, but if tests are failing on Travis CI, the following command can be used to replicate the failure locally:
```bash
$ docker-compose run --rm --no-deps bdb pytest -v --cov=bigchaindb
```
NOTE: before executing the above command, make sure to reset Tendermint by executing the `tendermint unsafe_reset_all` command in the Tendermint container.

View File

@ -0,0 +1,108 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Notes on Running a Local Dev Node with Docker Compose
## Setting up a single node development environment with ``docker-compose``
### Using the Planetmint 2.0 developer toolbox
We grouped all useful commands under a simple `Makefile`.
Run a Planetmint node in the foreground:
```bash
$ make run
```
There are also other commands you can execute:
- `make start`: Run Planetmint from source and daemonize it (stop it with `make stop`).
- `make stop`: Stop Planetmint.
- `make logs`: Attach to the logs.
- `make test`: Run all unit and acceptance tests.
- `make test-unit-watch`: Run all tests and wait. Every time you change code, tests will be run again.
- `make cov`: Check code coverage and open the result in the browser.
- `make doc`: Generate HTML documentation and open it in the browser.
- `make clean`: Remove all build, test, coverage and Python artifacts.
- `make reset`: Stop and REMOVE all containers. WARNING: you will LOSE all data stored in Planetmint.
### Using `docker-compose` directly
The Planetmint `Makefile` is a wrapper around some `docker-compose` commands we use frequently. If you need finer-grained control over the containers, you can still use `docker-compose` directly. This part of the documentation explains how to do that.
```bash
$ docker-compose build bigchaindb
$ docker-compose up -d bdb
```
The above command will launch all 3 main required services/processes:
* ``mongodb``
* ``tendermint``
* ``bigchaindb``
To follow the logs of the ``tendermint`` service:
```bash
$ docker-compose logs -f tendermint
```
To follow the logs of the ``bigchaindb`` service:
```bash
$ docker-compose logs -f bigchaindb
```
To follow the logs of the ``mongodb`` service:
```bash
$ docker-compose logs -f mdb
```
Simple health check:
```bash
$ docker-compose up curl-client
```
Post and retrieve a transaction -- copy/paste a driver basic example of a
``CREATE`` transaction:
```bash
$ docker-compose -f docker-compose.yml run --rm bdb-driver ipython
```
**TODO**: A Python script to post and retrieve transaction(s).
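Until that script exists, the following is a minimal sketch of what it could look like. It assumes the `bigchaindb_driver` package is installed (e.g. inside the `bdb-driver` container) and that a node is reachable at the URL below; adjust it to your setup.
```python
# Sketch: post a CREATE transaction and fetch it back.
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair

bdb = BigchainDB('http://bigchaindb:9984')   # or 'http://localhost:9984' from the host
alice = generate_keypair()

prepared = bdb.transactions.prepare(
    operation='CREATE',
    signers=alice.public_key,
    asset={'data': {'message': 'Hello Planetmint!'}},
)
signed = bdb.transactions.fulfill(prepared, private_keys=alice.private_key)
sent = bdb.transactions.send_commit(signed)      # post the transaction

fetched = bdb.transactions.retrieve(sent['id'])  # retrieve it again
print(fetched['id'] == sent['id'])               # True if everything worked
```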
### Running Tests
Run all the tests using:
```bash
$ docker-compose run --rm --no-deps bigchaindb pytest -v
```
Run tests from a file:
```bash
$ docker-compose run --rm --no-deps bigchaindb pytest /path/to/file -v
```
Run specific tests:
```bash
$ docker-compose run --rm --no-deps bigchaindb pytest /path/to/file -k "<test_name>" -v
```
### Building Docs
You can also develop and build the Planetmint docs using ``docker-compose``:
```bash
$ docker-compose build bdocs
$ docker-compose up -d bdocs
```
The docs will be hosted on port **33333** and can be accessed over [localhost](http://localhost:33333), [127.0.0.1](http://127.0.0.1:33333)
OR http://HOST_IP:33333.

View File

@ -0,0 +1,157 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Write Code
==========
Know What You Want to Write Code to Do
--------------------------------------
Do you want to write code to resolve an open issue (bug)? Which one?
Do you want to implement a Planetmint Enhancement Proposal (PEP)? Which one?
You should know why you want to write code before you go any farther.
Refresh Yourself about the C4 Process
-------------------------------------
C4 is the Collective Code Construction Contract. It's quite short:
`re-reading it will only take a few minutes <https://github.com/bigchaindb/BEPs/tree/master/1>`_.
Set Up Your Local Machine. Here's How.
--------------------------------------
- Make sure you have Git installed.
- Get a text editor. Internally, we like:
- Vim
- PyCharm
- Visual Studio Code
- Atom
- GNU Emacs (Trent is crazy)
- GNU nano (Troy has lost his mind)
- If you plan to do JavaScript coding, get the latest JavaScript stuff (npm etc.).
- If you plan to do Python coding, get `the latest Python <https://www.python.org/downloads/>`_, and
get the latest ``pip``.
.. warning::
Don't use apt or apt-get to get the latest ``pip``. It won't work properly. Use ``get-pip.py``
from `the pip website <https://pip.pypa.io/en/stable/installing/>`_.
- Use the latest ``pip`` to get the latest ``virtualenv``:
.. code::
$ pip install virtualenv
- Create a Python virtual environment (virtualenv) for doing Planetmint Server development. There are many ways to do that. Google around and pick one.
An old-fashioned but fine way is:
.. code::
$ virtualenv -p $(which python3.6) NEW_ENV_NAME
$ . NEW_ENV_NAME/bin/activate
Be sure to use Python 3.6.x as the Python version for your virtualenv. The virtualenv creation process will actually get the
latest ``pip``, ``wheel`` and ``setuptools`` and put them inside the new virtualenv.
Before You Start Writing Code
-----------------------------
Read `BEP-24 <https://github.com/bigchaindb/BEPs/tree/master/24>`_
so you know what to do to ensure that your changes (i.e. your future pull request) can be merged.
It's easy and will save you some hassle later on.
Start Writing Code
------------------
Use the Git `Fork and Pull Request Workflow <https://github.com/susam/gitpr>`_. Tip: You could print that page for reference.
Your Python code should follow `our Python Style Guide <https://github.com/bigchaindb/bigchaindb/blob/master/PYTHON_STYLE_GUIDE.md>`_.
Similarly for JavaScript.
Make sure `pre-commit <https://pre-commit.com/>`_ actually checks commits. Do:
.. code::
$ pip install pre-commit # might not do anything if it is already installed, which is okay
$ pre-commit install
That will load the pre-commit settings in the file ``.pre-commit-config.yaml``. Now every time you do ``git commit <stuff>``, pre-commit
will run all those checks.
To install Planetmint Server from the local code, and to keep it up to date with the latest code in there, use:
.. code::
$ pip install -e .[dev]
The ``-e`` tells it to use the latest code. The ``.`` means use the current directory, which should be the one containing ``setup.py``.
The ``[dev]`` tells it to install some extra Python packages. Which ones? Open ``setup.py`` and look for ``dev`` in the ``extras_require`` section.
Remember to Write Tests
-----------------------
We like to test everything, if possible. Unit tests and also integration tests. We use the `pytest <https://docs.pytest.org/en/latest/>`_
framework to write Python tests. Read all about it.
Most tests are in the ``tests/`` folder. Take a look around.
Running a Local Node/Network for Dev and Test
---------------------------------------------
This is tricky and personal. Different people do it different ways. We documented some of those on separate pages:
- `Dev node setup and running all tests with Docker Compose <run-node-with-docker-compose.html>`_
- `Dev node setup and running all tests as processes <run-node-as-processes.html>`_
- `Dev network setup with stack.sh <run-dev-network-stack.html>`_
- `Dev network setup with Ansible <run-dev-network-ansible.html>`_
- More to come?
Create the PR on GitHub
-----------------------
Git push your branch to GitHub so as to create a pull request against the branch where the code you want to change *lives*.
Travis and Codecov will run and might complain. Look into the complaints and fix them before merging.
Travis gets its configuration and setup from the files:
- Some environment variables, if they are used. See https://docs.travis-ci.com/user/environment-variables/
- ``.travis.yml``
- ``tox.ini`` - What is tox? See `tox.readthedocs.io <https://tox.readthedocs.io/en/latest/>`_
- ``.ci/`` (as in Travis CI = Continuous Integration)
- ``travis-after-success.sh``
- ``travis-before-install.sh``
- ``travis-before-script.sh``
- ``travis-install.sh``
- ``travis_script.sh``
Read about the `Travis CI build lifecycle <https://docs.travis-ci.com/user/customizing-the-build/>`_ to understand when those scripts run and what they do.
You can have even more scripts!
Codecov gets its configuration from the file `codecov.yaml <https://github.com/codecov/support/wiki/Codecov-Yaml>`_, which is also documented at
`docs.codecov.io <https://docs.codecov.io/v4.3.6/docs/codecov-yaml>`_. Codecov might also use ``setup.cfg``.
Merge!
------
Ideally, we like your PR and merge it right away. We don't want to keep you waiting.
If we want to make changes, we'll do them in a follow-up PR.

View File

@ -0,0 +1,29 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Contributing to Planetmint
==========================
There are many ways you can contribute to Planetmint.
It includes several sub-projects.
- `Planetmint Server <https://github.com/planetmint/planetmint>`_
- `Planetmint Python Driver <https://github.com/planetmint/planetmint-driver>`_
- `Planetmint JavaScript Driver <https://github.com/planetmint/js-planetmint-driver>`_
- `Planetmint Java Driver <https://github.com/planetmint/java-planetmint-driver>`_
- `cryptoconditions <https://github.com/planetmint/cryptoconditions>`_ (a Python package by us)
- `py-abci <https://github.com/davebryson/py-abci>`_ (a Python package we use)
- `Planetmint Enhancement Proposals (PEPs) <https://github.com/planetmint/PEPs>`_
Contents
--------
.. toctree::
:maxdepth: 2
ways-to-contribute/index
dev-setup-coding-and-contribution-process/index
cross-project-policies/index

View File

@ -0,0 +1,14 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Ways to Contribute
==================
.. toctree::
:maxdepth: 1
report-a-bug
write-docs

View File

@ -0,0 +1,44 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
How to contribute?
==================
# Report a Bug
To report a bug, go to the relevant GitHub repository, click on the **Issues** tab, click on the **New issue** button, and read the instructions.
# Write an Issue
To write an issue, go to the relevant GitHub repository, click on the **Issues** tab, click on the **New issue** button, and read the instructions.
# Answer Questions
People ask questions about Planetmint in the following places:
- Gitter
- [bigchaindb/bigchaindb](https://gitter.im/planetmint/community)
- [bigchaindb/js-bigchaindb-driver](https://gitter.im/planetmint/community)
- [Twitter](https://twitter.com/planetmint)
- [Stack Overflow "bigchaindb"](https://stackoverflow.com/search?q=planetmint)
Feel free to hang out and answer some questions. People will be thankful.
# Write a Planetmint Enhancement Proposal (PEP)
If you have an idea for a new feature or enhancement, and you want some feedback before you write a full Planetmint Enhancement Proposal (PEP), then feel free to:
- ask in the [planetmint/community Gitter chat room](https://gitter.im/planetmint/planetmint) or
- [open a new issue in the planetmint/PEPs repo](https://github.com/planetmint/PEPs/issues/new) and give it the label **PEP idea**.
If you want to discuss an existing PEP, then [open a new issue in the planetmint/PEPs repo](https://github.com/planetmint/BEPs/issues/new) and give it the label **discuss existing PEP**.
## Steps to Write a New PEP
1. Look at the structure of existing PEPs in the [planetmint/PEPs repo](https://github.com/planetmint/PEPs). Note the section headings. [PEP-2](https://github.com/planetmint/PEPs/tree/master/2) (our variant of the consensus-oriented specification system [COSS]) says more about the expected structure and process.
1. Write a first draft of your PEP. It doesn't have to be long or perfect.
1. Push your PEP draft to the [planetmint/PEPs repo](https://github.com/planetmint/PEPs) and make a pull request. [PEP-1](https://github.com/planetmint/PEPs/tree/master/1) (our variant of C4) outlines the process we use to handle all pull requests. In particular, we try to merge all pull requests quickly.
1. Your PEP can be revised by pushing more pull requests.

View File

@ -0,0 +1,30 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Write Docs
If you're writing code, you should also update any related docs. However, you might want to write docs only, such as:
- General explainers
- Tutorials
- Courses
- Code explanations
- How Planetmint relates to other blockchain things
- News from recent events
You can certainly do that!
- The docs for Planetmint Server live under ``planetmint/docs/`` in the ``planetmint/planetmint`` repo.
- There are docs for the Python driver under ``planetmint-driver/docs/`` in the ``planetmint/planetmint-driver`` repo.
- There are docs for the JavaScript driver under ``planetmint/js-planetmint-driver`` in the ``planetmint/js-planetmint-driver`` repo.
- The source code for the Planetmint website is in a private repo, but we can give you access if you ask.
The [Planetmint Transactions Specs](https://github.com/planetmint/PEPs/tree/master/tx-specs/) (one for each spec version) are in the ``planetmint/PEPs`` repo.
You can write the docs using Markdown (MD) or reStructuredText (RST). Sphinx can understand both. RST is more powerful.
ReadTheDocs will automatically rebuild the docs whenever a commit happens on the ``master`` branch, or on one of the other branches that it is monitoring.

View File

@ -0,0 +1,31 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Drivers
=======
Connectors to Planetmint are referred to as drivers within the community. A driver is used to create valid transactions, to generate key pairs, to sign transactions and to post the transaction to the Planetmint API.
These drivers were originally created by the original Planetmint team:
* `Python Driver <https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_
* `JavaScript / Node.js Driver <https://github.com/bigchaindb/js-bigchaindb-driver>`_
* `Java Driver <https://github.com/bigchaindb/java-bigchaindb-driver>`_
These drivers and tools were created by the Planetmint community:
.. warning::
Some of these projects are a work in progress,
but may still be useful.
Others might not work with the latest version of Planetmint.
* `ANSI C driver <https://github.com/RiddleAndCode/bigchaindb-c-driver>`_, should also work with C++ (working as of June 2019)
* `C# driver <https://github.com/Omnibasis/bigchaindb-csharp-driver>`_ (working as of May 2019)
* `Haskell transaction builder <https://github.com/bigchaindb/bigchaindb-hs>`_
* `Go driver <https://github.com/zbo14/envoke/blob/master/bigchain/bigchain.go>`_
* `Ruby driver <https://github.com/LicenseRocks/bigchaindb_ruby>`_
* `Ruby library for preparing/signing transactions and submitting them or querying a Planetmint node (MIT licensed) <https://rubygems.org/gems/bigchaindb>`_

View File

@ -0,0 +1,33 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Planetmint Documentation
========================
Meet Planetmint. The blockchain database.
It has some database characteristics and some blockchain `properties <properties.html>`_,
including decentralization, immutability and native support for assets.
At a high level, one can communicate with a Planetmint network (set of nodes) using the Planetmint HTTP API, or a wrapper for that API, such as the Planetmint Python Driver. Each Planetmint node runs Planetmint Server and various other software. The `terminology page <terminology.html>`_ explains some of those terms in more detail.
More About Planetmint
---------------------
.. toctree::
:maxdepth: 1
Planetmint Docs Home <self>
about-bigchaindb
terminology
properties
basic-usage
installation/index
drivers/index
query
contributing/index
korean/index

Binary file not shown.

After

Width:  |  Height:  |  Size: 5.6 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 52 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 53 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 12 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 27 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 60 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 27 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 12 KiB

View File

@ -0,0 +1,737 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _the-http-client-server-api:
The HTTP Client-Server API
==========================
This page assumes you already know an API Root URL
for a Planetmint node or reverse proxy.
It should be something like ``https://example.com:9984``
or ``https://12.34.56.78:9984``.
If you set up a Planetmint node or reverse proxy yourself,
and you're not sure what the API Root URL is,
then see the last section of this page for help.
.. _bigchaindb-root-url:
Planetmint Root URL
-------------------
If you send an HTTP GET request to the Planetmint Root URL
e.g. ``http://localhost:9984``
or ``https://example.com:9984``
(with no ``/api/v1/`` on the end),
then you should get an HTTP response
with something like the following in the body:
.. literalinclude:: http-samples/index-response.http
:language: http
.. _api-root-endpoint:
API Root Endpoint
-----------------
If you send an HTTP GET request to the API Root Endpoint
e.g. ``http://localhost:9984/api/v1/``
or ``https://example.com:9984/api/v1/``,
then you should get an HTTP response
that allows you to discover the Planetmint API endpoints:
.. literalinclude:: http-samples/api-index-response.http
:language: http
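If you prefer to explore these endpoints from Python, the following is a minimal sketch using the third-party ``requests`` library (not one of the official examples); it assumes a node is reachable at ``http://localhost:9984``:

.. code-block:: python

    import requests

    ROOT = 'http://localhost:9984'  # adjust to your node's API Root URL

    root_info = requests.get(ROOT).json()               # Planetmint Root URL
    api_index = requests.get(ROOT + '/api/v1/').json()  # API Root Endpoint
    print(sorted(api_index.keys()))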
Transactions Endpoint
---------------------
.. note::
If you want to do more sophisticated queries
than those provided by the Planetmint HTTP API,
then one option is to connect to MongoDB directly (if possible)
and do whatever queries MongoDB allows.
For more about that option, see
`the page about querying Planetmint <https://docs.bigchaindb.com/en/latest/query.html>`_.
.. http:get:: /api/v1/transactions/{transaction_id}
Get the transaction with the ID ``transaction_id``.
If a transaction with ID ``transaction_id`` has been included
in a committed block, then this endpoint returns that transaction,
otherwise the response will be ``404 Not Found``.
:param transaction_id: transaction ID
:type transaction_id: hex string
**Example request**:
.. literalinclude:: http-samples/get-tx-id-request.http
:language: http
**Example response**:
.. literalinclude:: http-samples/get-tx-id-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A transaction with that ID was found.
:statuscode 404: A transaction with that ID was not found.
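For illustration, the same lookup can be done programmatically. This is a minimal sketch using the Python ``requests`` library; the node URL is an assumption and the transaction ID is the one from the sample above.

.. sourcecode:: python

   import requests

   root = "http://localhost:9984"
   tx_id = "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2"

   response = requests.get(f"{root}/api/v1/transactions/{tx_id}")
   if response.status_code == 200:
       tx = response.json()   # the transaction, as included in a committed block
   else:
       tx = None              # 404: not (yet) included in a committed block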
.. http:get:: /api/v1/transactions
Requests to the ``/api/v1/transactions`` endpoint
without any query parameters will get a response status code ``400 Bad Request``.
.. http:get:: /api/v1/transactions?asset_id={asset_id}&operation={CREATE|TRANSFER}&last_tx={true|false}
Get a list of transactions that use an asset with the ID ``asset_id``.
If ``operation`` is ``CREATE``, then the CREATE transaction which created
the asset with ID ``asset_id`` will be returned.
If ``operation`` is ``TRANSFER``, then every TRANSFER transaction involving
the asset with ID ``asset_id`` will be returned.
This allows users to query the entire history or
provenance of an asset.
If ``operation`` is not included, then *every* transaction involving
the asset with ID ``asset_id`` will be returned.
If ``last_tx`` is set to ``true``, only the last transaction is returned
instead of all transactions with the given ``asset_id``.
This endpoint returns transactions only if they are in committed blocks.
:query string operation: (Optional) ``CREATE`` or ``TRANSFER``.
:query string asset_id: asset ID.
:query string last_tx: (Optional) ``true`` or ``false``.
**Example request**:
.. literalinclude:: http-samples/get-tx-by-asset-request.http
:language: http
**Example response**:
.. literalinclude:: http-samples/get-tx-by-asset-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A list of transactions containing an asset with ID ``asset_id`` was found and returned.
:statuscode 400: The request wasn't understood by the server, e.g. the ``asset_id`` querystring was not included in the request.
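As a sketch of how these query parameters combine in practice (Python ``requests``; placeholder node URL, asset ID taken from the samples):

.. sourcecode:: python

   import requests

   root = "http://localhost:9984"
   asset_id = "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2"

   # Full transfer history (provenance) of the asset.
   transfers = requests.get(
       f"{root}/api/v1/transactions",
       params={"asset_id": asset_id, "operation": "TRANSFER"},
   ).json()

   # Only the most recent transaction involving the asset.
   last_tx = requests.get(
       f"{root}/api/v1/transactions",
       params={"asset_id": asset_id, "last_tx": "true"},
   ).json()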
.. http:post:: /api/v1/transactions?mode={mode}
This endpoint is used to send a transaction to a Planetmint network.
The transaction is put in the body of the request.
:query string mode: (Optional) One of the three supported modes to send a transaction: ``async``, ``sync``, ``commit``. The default is ``async``.
Once the posted transaction arrives at a Planetmint node,
that node will check to see if the transaction is valid.
If it's invalid, the node will return an HTTP 400 (error).
Otherwise, the node will send the transaction to Tendermint (in the same node) using the
`Tendermint broadcast API
<https://tendermint.com/docs/tendermint-core/using-tendermint.html#broadcast-api>`_.
The meaning of the ``mode`` query parameter is inherited from the mode parameter in
`Tendermint's broadcast API
<https://tendermint.com/docs/tendermint-core/using-tendermint.html#broadcast-api>`_.
``mode=async`` means the HTTP response will come back immediately,
before Tendermint asks Planetmint Server to check the validity of the transaction (a second time).
``mode=sync`` means the HTTP response will come back
after Tendermint gets a response from Planetmint Server
regarding the validity of the transaction.
``mode=commit`` means the HTTP response will come back once the transaction
is in a committed block.
.. note::
In the async and sync modes, after a successful HTTP response is returned, the transaction may still be rejected later on. All the transactions are recorded internally by Tendermint in WAL (Write-Ahead Log) before the HTTP response is returned. Nevertheless, the following should be noted:
- Transactions in WAL including the failed ones are not exposed in any of the Planetmint or Tendermint APIs.
- Transactions are never fetched from WAL. WAL is never replayed.
- A critical failure (e.g. the system is out of disk space) may occur preventing transactions from being stored in WAL, even when the HTTP response indicates a success.
- If a transaction fails validation because it conflicts with another transaction in the same block, Tendermint still includes it in its block, but Planetmint does not store such transactions and does not offer any information about them in its APIs.
.. note::
The posted transaction should be valid.
The relevant
`Planetmint Transactions Spec <https://github.com/bigchaindb/BEPs/tree/master/tx-specs/>`_
explains how to build a valid transaction
and how to check if a transaction is valid.
One would normally use a driver such as the `Planetmint Python Driver
<https://docs.bigchaindb.com/projects/py-driver/en/latest/index.html>`_
to build a valid transaction.
.. note::
A client can subscribe to the
WebSocket Event Stream API
to listen for committed transactions.
**Example request**:
.. literalinclude:: http-samples/post-tx-request.http
:language: http
**Example response**:
.. literalinclude:: http-samples/post-tx-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 202: The meaning of this response depends on the value
of the ``mode`` parameter. See above.
:statuscode 400: The posted transaction was invalid.
.. http:post:: /api/v1/transactions
This endpoint (without any parameters) will push a new transaction.
Since no ``mode`` parameter is included, the default mode is assumed: ``async``.
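To illustrate how the ``mode`` parameter is used, here is a minimal sketch that posts an already-built and signed transaction in ``commit`` mode with the Python ``requests`` library. The node URL is an assumption, and ``signed_tx`` stands for a valid transaction payload produced elsewhere (e.g. with the Planetmint Python Driver).

.. sourcecode:: python

   import requests

   def post_transaction(signed_tx: dict, root: str = "http://localhost:9984") -> dict:
       """Submit a signed transaction and wait until it is in a committed block."""
       response = requests.post(
           f"{root}/api/v1/transactions",
           params={"mode": "commit"},   # block until the transaction is committed
           json=signed_tx,
       )
       if response.status_code != 202:
           raise RuntimeError(f"transaction rejected: {response.text}")
       return response.json()

Using ``mode=async`` instead would return immediately; the trade-off is lower latency versus certainty that the transaction actually ended up in a committed block.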
Transaction Outputs
-------------------
The ``/api/v1/outputs`` endpoint returns transactions outputs filtered by a
given public key, and optionally filtered to only include either spent or
unspent outputs.
.. note::
If you want to do more sophisticated queries
than those provided by the Planetmint HTTP API,
then one option is to connect to MongoDB directly (if possible)
and do whatever queries MongoDB allows.
For more about that option, see
`the page about querying Planetmint <https://docs.bigchaindb.com/en/latest/query.html>`_.
.. http:get:: /api/v1/outputs
Get transaction outputs by public key. The ``public_key`` parameter must be
a base58 encoded ed25519 public key associated with transaction output
ownership.
Returns a list of transaction outputs.
:param public_key: Base58 encoded public key associated with output
ownership. This parameter is mandatory and without it
the endpoint will return a ``400`` response code.
:param spent: (Optional) Boolean value (``true`` or ``false``)
indicating if the result set
should include only spent or only unspent outputs. If not
specified, the result includes all the outputs (both spent
and unspent) associated with the ``public_key``.
.. http:get:: /api/v1/outputs?public_key={public_key}
Return all outputs, both spent and unspent, for the ``public_key``.
**Example request**:
.. sourcecode:: http
GET /api/v1/outputs?public_key=1AAAbbb...ccc HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"output_index": 0,
"transaction_id": "2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e"
},
{
"output_index": 1,
"transaction_id": "2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e"
}
]
:statuscode 200: A list of outputs was found and returned in the body of the response.
:statuscode 400: The request wasn't understood by the server, e.g. the ``public_key`` querystring was not included in the request.
.. http:get:: /api/v1/outputs?public_key={public_key}&spent=true
Return all **spent** outputs for ``public_key``.
**Example request**:
.. sourcecode:: http
GET /api/v1/outputs?public_key=1AAAbbb...ccc&spent=true HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"output_index": 0,
"transaction_id": "2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e"
}
]
:statuscode 200: A list of outputs was found and returned in the body of the response.
:statuscode 400: The request wasn't understood by the server, e.g. the ``public_key`` querystring was not included in the request.
.. http:get:: /api/v1/outputs?public_key={public_key}&spent=false
Return all **unspent** outputs for ``public_key``.
**Example request**:
.. sourcecode:: http
GET /api/v1/outputs?public_key=1AAAbbb...ccc&spent=false HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"output_index": 1,
"transaction_id": "2d431073e1477f3073a4693ac7ff9be5634751de1b8abaa1f4e19548ef0b4b0e"
}
]
:statuscode 200: A list of outputs was found and returned in the body of the response.
:statuscode 400: The request wasn't understood by the server, e.g. the ``public_key`` querystring was not included in the request.
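A minimal sketch of listing the unspent outputs controlled by a public key (Python ``requests``; the node URL and the key are placeholders):

.. sourcecode:: python

   import requests

   root = "http://localhost:9984"
   public_key = "1AAAbbb...ccc"   # base58-encoded ed25519 public key (placeholder)

   unspent = requests.get(
       f"{root}/api/v1/outputs",
       params={"public_key": public_key, "spent": "false"},
   ).json()

   # Each element is of the form {"transaction_id": ..., "output_index": ...}.
   for output in unspent:
       print(output["transaction_id"], output["output_index"])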
Assets
------
.. note::
If you want to do more sophisticated queries
than those provided by the Planetmint HTTP API,
then one option is to connect to MongoDB directly (if possible)
and do whatever queries MongoDB allows.
For more about that option, see
`the page about querying Planetmint <https://docs.bigchaindb.com/en/latest/query.html>`_.
.. http:get:: /api/v1/assets
Return all the assets that match a given text search.
:query string search: Text search string to query.
:query int limit: (Optional) Limit the number of returned assets. Defaults
to ``0`` meaning return all matching assets.
.. http:get:: /api/v1/assets/?search={search}
Return all assets that match a given text search.
.. note::
The ``id`` of the asset
is the same ``id`` of the CREATE transaction that created the asset.
.. note::
You can use ``assets/?search`` or ``assets?search``.
If no assets match the text search it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400 Bad Request`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_.
**Example request**:
.. sourcecode:: http
GET /api/v1/assets/?search=bigchaindb HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"data": {"msg": "Hello Planetmint 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"data": {"msg": "Hello Planetmint 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
},
{
"data": {"msg": "Hello Planetmint 3!"},
"id": "fa6bcb6a8fdea3dc2a860fcdc0e0c63c9cf5b25da8b02a4db4fb6a2d36d27791"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
.. http:get:: /api/v1/assets?search={search}&limit={n_documents}
Return at most ``n_documents`` assets that match a given text search.
If no assets match the text search it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400 Bad Request`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_.
**Example request**:
.. sourcecode:: http
GET /api/v1/assets?search=bigchaindb&limit=2 HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"data": {"msg": "Hello Planetmint 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"data": {"msg": "Hello Planetmint 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
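A minimal sketch of the same text search done programmatically (Python ``requests``; node URL and search string are placeholders):

.. sourcecode:: python

   import requests

   root = "http://localhost:9984"
   matches = requests.get(
       f"{root}/api/v1/assets",
       params={"search": "bigchaindb", "limit": 2},   # limit=0 (the default) returns all matches
   ).json()

   for asset in matches:
       print(asset["id"], asset["data"])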
Transaction Metadata
--------------------
.. note::
If you want to do more sophisticated queries
than those provided by the Planetmint HTTP API,
then one option is to connect to MongoDB directly (if possible)
and do whatever queries MongoDB allows.
For more about that option, see
`the page about querying Planetmint <https://docs.bigchaindb.com/en/latest/query.html>`_.
.. http:get:: /api/v1/metadata
Return all the metadata objects that match a given text search.
:query string search: Text search string to query.
:query int limit: (Optional) Limit the number of returned metadata objects. Defaults
to ``0`` meaning return all matching objects.
.. http:get:: /api/v1/metadata/?search={search}
Return all metadata objects that match a given text search.
.. note::
The ``id`` of the metadata
is the same ``id`` of the transaction where it was defined.
.. note::
You can use ``metadata/?search`` or ``metadata?search``.
If no metadata objects match the text search it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400 Bad Request`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_.
**Example request**:
.. sourcecode:: http
GET /api/v1/metadata/?search=bigchaindb HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"metadata": {"metakey1": "Hello Planetmint 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"metadata": {"metakey2": "Hello Planetmint 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
},
{
"metadata": {"metakey3": "Hello Planetmint 3!"},
"id": "fa6bcb6a8fdea3dc2a860fcdc0e0c63c9cf5b25da8b02a4db4fb6a2d36d27791"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
.. http:get:: /api/v1/metadata/?search={search}&limit={n_documents}
Return at most ``n_documents`` metadata objects that match a given text search.
If no metadata objects match the text search it returns an empty list.
If the text string is empty or the server does not support text search,
a ``400 Bad Request`` is returned.
The results are sorted by text score.
For more information about the behavior of text search, see `MongoDB text
search behavior <https://docs.mongodb.com/manual/reference/operator/query/text/#behavior>`_.
**Example request**:
.. sourcecode:: http
GET /api/v1/metadata?search=bigchaindb&limit=2 HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"metadata": {"msg": "Hello Planetmint 1!"},
"id": "51ce82a14ca274d43e4992bbce41f6fdeb755f846e48e710a3bbb3b0cf8e4204"
},
{
"metadata": {"msg": "Hello Planetmint 2!"},
"id": "b4e9005fa494d20e503d916fa87b74fe61c079afccd6e084260674159795ee31"
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully.
:statuscode 400: The query was not executed successfully. Returned if the
text string is empty or the server does not support
text search.
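The metadata search works the same way as the asset search; a minimal sketch with the Python ``requests`` library (placeholder node URL and search string):

.. sourcecode:: python

   import requests

   root = "http://localhost:9984"
   matches = requests.get(
       f"{root}/api/v1/metadata",
       params={"search": "bigchaindb", "limit": 2},
   ).json()

   # Each element pairs a metadata object with the ID of the transaction that defined it.
   for item in matches:
       print(item["id"], item["metadata"])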
Validators
--------------------
.. http:get:: /api/v1/validators
Return the local validator set of a given node.
**Example request**:
.. sourcecode:: http
GET /api/v1/validators HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-type: application/json
[
{
"pub_key": {
"data":"4E2685D9016126864733225BE00F005515200727FBAB1312FC78C8B76831255A",
"type":"ed25519"
},
"power": 10
},
{
"pub_key": {
"data":"608D839D7100466D6BA6BE79C320F8B81DE93CFAA58CF9768CF921C6371F2553",
"type":"ed25519"
},
"power": 5
}
]
:resheader Content-Type: ``application/json``
:statuscode 200: The query was executed successfully and the validator set was returned.
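A minimal sketch of reading the local validator set programmatically (Python ``requests``; placeholder node URL):

.. sourcecode:: python

   import requests

   root = "http://localhost:9984"
   validators = requests.get(f"{root}/api/v1/validators").json()

   for validator in validators:
       print(validator["pub_key"]["data"], validator["power"])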
Blocks
------
.. http:get:: /api/v1/blocks/{block_height}
Get the block with the height ``block_height``.
:param block_height: block height
:type block_height: integer
**Example request**:
.. literalinclude:: http-samples/get-block-request.http
:language: http
**Example response**:
.. literalinclude:: http-samples/get-block-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: A block with that block height was found.
:statuscode 400: The request wasn't understood by the server, e.g. just requesting ``/blocks`` without the ``block_height``.
:statuscode 404: A block with that block height was not found.
.. http:get:: /api/v1/blocks
The unfiltered ``/blocks`` endpoint without any query parameters
returns a ``400 Bad Request`` status code.
**Example request**:
.. sourcecode:: http
GET /api/v1/blocks HTTP/1.1
Host: example.com
**Example response**:
.. sourcecode:: http
HTTP/1.1 400 Bad Request
:statuscode 400: The request wasn't understood by the server, e.g. just requesting ``/blocks`` without the ``block_height``.
.. http:get:: /api/v1/blocks?transaction_id={transaction_id}
Retrieve a list of block IDs (block heights), such that the blocks with those IDs contain a transaction with the ID ``transaction_id``. A correct response may consist of an empty list or a list with one block ID.
.. note::
In case no block was found, an empty list and an HTTP status code
``200 OK`` is returned, as the request was still successful.
:query string transaction_id: (Required) transaction ID
**Example request**:
.. literalinclude:: http-samples/get-block-txid-request.http
:language: http
**Example response**:
.. literalinclude:: http-samples/get-block-txid-response.http
:language: http
:resheader Content-Type: ``application/json``
:statuscode 200: The request was properly formed and zero or more blocks were found containing the specified ``transaction_id``.
:statuscode 400: The request wasn't understood by the server, e.g. just requesting ``/blocks``, without defining ``transaction_id``.
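As an illustration of combining the two block endpoints, the following sketch (Python ``requests``; placeholder node URL, transaction ID from the samples) looks up the block that contains a given transaction:

.. sourcecode:: python

   import requests

   root = "http://localhost:9984"
   tx_id = "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2"

   # Returns a list of block heights; it is either empty or has one element.
   heights = requests.get(
       f"{root}/api/v1/blocks",
       params={"transaction_id": tx_id},
   ).json()

   if heights:
       block = requests.get(f"{root}/api/v1/blocks/{heights[0]}").json()
       print(block["height"], len(block["transactions"]))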
.. _determining-the-api-root-url:
Determining the API Root URL
----------------------------
When you start Planetmint Server using ``planetmint start``,
an HTTP API is exposed at some address. The default is:
``http://localhost:9984/api/v1/``
It's bound to ``localhost``,
so you can access it from the same machine,
but it won't be directly accessible from the outside world.
(The outside world could connect via a SOCKS proxy or whatnot.)
The documentation about Planetmint Server :doc:`Configuration Settings <../../installation/node-setup/configuration>`
has a section about how to set ``server.bind`` so as to make
the HTTP API publicly accessible.
If the API endpoint is publicly accessible,
then the public API Root URL is determined as follows:
- The public IP address (like 12.34.56.78)
is the public IP address of the machine exposing
the HTTP API to the public internet (e.g. either the machine hosting
Gunicorn or the machine running the reverse proxy such as NGINX).
It's determined by AWS, Azure, Rackspace, or whoever is hosting the machine.
- The DNS hostname (like example.com) is determined by DNS records,
such as an "A Record" associating example.com with 12.34.56.78.
- The port (like 9984) is determined by the ``server.bind`` setting
if Gunicorn is exposed directly to the public Internet.
If a reverse proxy (like NGINX) is exposed directly to the public Internet
instead, then it could expose the HTTP API on whatever port it wants to.
(It should expose the HTTP API on port 9984, but it's not bound to do
that by anything other than convention.)

View File

@ -0,0 +1,13 @@
HTTP/1.1 200 OK
Content-Type: application/json
{
"assets": "/assets/",
"blocks": "/blocks/",
"docs": "https://docs.bigchaindb.com/projects/server/en/v0.9.0/http-client-server-api.html",
"metadata": "/metadata/",
"outputs": "/outputs/",
"streams": "ws://localhost:9985/api/v1/streams/valid_transactions",
"transactions": "/transactions/",
"validators": "/validators"
}

View File

@ -0,0 +1,3 @@
GET /api/v1/blocks/1 HTTP/1.1
Host: example.com

View File

@ -0,0 +1,45 @@
HTTP/1.1 200 OK
Content-Type: application/json
{
"height": 1,
"transactions": [
{
"asset": {
"data": {
"msg": "Hello Planetmint!"
}
},
"id": "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2",
"inputs": [
{
"fulfillment": "pGSAIDE5i63cn4X8T8N1sZ2mGkJD5lNRnBM4PZgI_zvzbr-cgUBcdRj6EnG3MPl47m3XPd1HNfe7q3WL4T6MewNNnat33UvzVnPHo_vossv57M7L064VwrYMLGp097H7IeHpDngK",
"fulfills": null,
"owners_before": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"metadata": {
"sequence": 0
},
"operation": "CREATE",
"outputs": [
{
"amount": "1",
"condition": {
"details": {
"public_key": "4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD",
"type": "ed25519-sha-256"
},
"uri": "ni:///sha-256;PNYwdxaRaNw60N6LDFzOWO97b8tJeragczakL8PrAPc?fpt=ed25519-sha-256&cost=131072"
},
"public_keys": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"version": "2.0"
}
]
}

View File

@ -0,0 +1,3 @@
GET /api/v1/blocks?transaction_id=6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2 HTTP/1.1
Host: example.com

View File

@ -0,0 +1,6 @@
HTTP/1.1 200 OK
Content-Type: application/json
[
1
]

View File

@ -0,0 +1,3 @@
GET /api/v1/transactions?operation=TRANSFER&asset_id=6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2 HTTP/1.1
Host: example.com

View File

@ -0,0 +1,79 @@
HTTP/1.1 200 OK
Content-Type: application/json
[{
"asset": {
"id": "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2"
},
"id": "2bf8ae1e3bc889a26df129dd6bbe6ccab30d1ae8e9e434fae1c4446042a68931",
"inputs": [
{
"fulfillment": "pGSAIDE5i63cn4X8T8N1sZ2mGkJD5lNRnBM4PZgI_zvzbr-cgUA90mMa9AWnbI70CUSVgzV9kRFf3tQ20RUIczNFqmwg9xrpOk_5uNoJB4bWRIojZmUhEyxSueCHLpqPXCEuyisE",
"fulfills": {
"output_index": 0,
"transaction_id": "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2"
},
"owners_before": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"metadata": {
"sequence": 1
},
"operation": "TRANSFER",
"outputs": [
{
"amount": "1",
"condition": {
"details": {
"public_key": "3yfQPHeWAa1MxTX9Zf9176QqcpcnWcanVZZbaHb8B3h9",
"type": "ed25519-sha-256"
},
"uri": "ni:///sha-256;lu6ov4AKkee6KWGnyjOVLBeyuP0bz4-O6_dPi15eYUc?fpt=ed25519-sha-256&cost=131072"
},
"public_keys": [
"3yfQPHeWAa1MxTX9Zf9176QqcpcnWcanVZZbaHb8B3h9"
]
}
],
"version": "2.0"
},
{
"asset": {
"id": "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2"
},
"id": "b9b614175eaed7cc93d9b80ebc2a91d35ae31928a7c218ae982272bb1785ef16",
"inputs": [
{
"fulfillment": "pGSAICw7Ul-c2lG6NFbHp3FbKRC7fivQcNGO7GS4wV3A-1QggUDi2bAVKJgEyE3LzMrAnAu1PnNs9DbDNkABaY6j3OCNEVwNVNg3V3qELOFNnH8vGUevREr4E-8Vb1Kzk4VR71MO",
"fulfills": {
"output_index": 0,
"transaction_id": "2bf8ae1e3bc889a26df129dd6bbe6ccab30d1ae8e9e434fae1c4446042a68931"
},
"owners_before": [
"3yfQPHeWAa1MxTX9Zf9176QqcpcnWcanVZZbaHb8B3h9"
]
}
],
"metadata": {
"sequence": 2
},
"operation": "TRANSFER",
"outputs": [
{
"amount": "1",
"condition": {
"details": {
"public_key": "3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm",
"type": "ed25519-sha-256"
},
"uri": "ni:///sha-256;Ll1r0LzgHUvWB87yIrNFYo731MMUEypqvrbPATTbuD4?fpt=ed25519-sha-256&cost=131072"
},
"public_keys": [
"3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm"
]
}
],
"version": "2.0"
}]

View File

@ -0,0 +1,3 @@
GET /api/v1/transactions/6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2 HTTP/1.1
Host: example.com

View File

@ -0,0 +1,40 @@
HTTP/1.1 200 OK
Content-Type: application/json
{
"asset": {
"data": {
"msg": "Hello Planetmint!"
}
},
"id": "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2",
"inputs": [
{
"fulfillment": "pGSAIDE5i63cn4X8T8N1sZ2mGkJD5lNRnBM4PZgI_zvzbr-cgUBcdRj6EnG3MPl47m3XPd1HNfe7q3WL4T6MewNNnat33UvzVnPHo_vossv57M7L064VwrYMLGp097H7IeHpDngK",
"fulfills": null,
"owners_before": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"metadata": {
"sequence": 0
},
"operation": "CREATE",
"outputs": [
{
"amount": "1",
"condition": {
"details": {
"public_key": "4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD",
"type": "ed25519-sha-256"
},
"uri": "ni:///sha-256;PNYwdxaRaNw60N6LDFzOWO97b8tJeragczakL8PrAPc?fpt=ed25519-sha-256&cost=131072"
},
"public_keys": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"version": "2.0"
}

View File

@ -0,0 +1,20 @@
HTTP/1.1 200 OK
Content-Type: application/json
{
"api": {
"v1": {
"assets": "/api/v1/assets/",
"blocks": "/api/v1/blocks/",
"docs": "https://docs.bigchaindb.com/projects/server/en/v0.9.0/http-client-server-api.html",
"metadata": "/api/v1/metadata/",
"outputs": "/api/v1/outputs/",
"streams": "ws://localhost:9985/api/v1/streams/valid_transactions",
"transactions": "/api/v1/transactions/",
"validators": "/api/v1/validators"
}
},
"docs": "https://docs.bigchaindb.com/projects/server/en/v0.9.0/",
"software": "Planetmint",
"version": "0.9.0"
}

View File

@ -0,0 +1,41 @@
POST /api/v1/transactions?mode=async HTTP/1.1
Host: example.com
Content-Type: application/json
{
"asset": {
"data": {
"msg": "Hello Planetmint!"
}
},
"id": "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2",
"inputs": [
{
"fulfillment": "pGSAIDE5i63cn4X8T8N1sZ2mGkJD5lNRnBM4PZgI_zvzbr-cgUBcdRj6EnG3MPl47m3XPd1HNfe7q3WL4T6MewNNnat33UvzVnPHo_vossv57M7L064VwrYMLGp097H7IeHpDngK",
"fulfills": null,
"owners_before": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"metadata": {
"sequence": 0
},
"operation": "CREATE",
"outputs": [
{
"amount": "1",
"condition": {
"details": {
"public_key": "4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD",
"type": "ed25519-sha-256"
},
"uri": "ni:///sha-256;PNYwdxaRaNw60N6LDFzOWO97b8tJeragczakL8PrAPc?fpt=ed25519-sha-256&cost=131072"
},
"public_keys": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"version": "2.0"
}

View File

@ -0,0 +1,40 @@
HTTP/1.1 202 Accepted
Content-Type: application/json
{
"asset": {
"data": {
"msg": "Hello Planetmint!"
}
},
"id": "6438c733f53dff60bdeded80e8c95126dd3a7ce8c1ee5d5591d030e61cde35d2",
"inputs": [
{
"fulfillment": "pGSAIDE5i63cn4X8T8N1sZ2mGkJD5lNRnBM4PZgI_zvzbr-cgUBcdRj6EnG3MPl47m3XPd1HNfe7q3WL4T6MewNNnat33UvzVnPHo_vossv57M7L064VwrYMLGp097H7IeHpDngK",
"fulfills": null,
"owners_before": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"metadata": {
"sequence": 0
},
"operation": "CREATE",
"outputs": [
{
"amount": "1",
"condition": {
"details": {
"public_key": "4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD",
"type": "ed25519-sha-256"
},
"uri": "ni:///sha-256;PNYwdxaRaNw60N6LDFzOWO97b8tJeragczakL8PrAPc?fpt=ed25519-sha-256&cost=131072"
},
"public_keys": [
"4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
]
}
],
"version": "2.0"
}

View File

@ -0,0 +1,16 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
API
===
.. toctree::
:maxdepth: 1
http-client-server-api
websocket-event-stream-api

View File

@ -0,0 +1,110 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
.. _the-websocket-event-stream-api:
The WebSocket Event Stream API
==============================
.. important::
The WebSocket Event Stream runs on a different port than the Web API. The
default port for the Web API is `9984`, while the one for the Event Stream
is `9985`.
Planetmint provides real-time event streams over the WebSocket protocol with
the Event Stream API.
Connecting to an event stream from your application enables a Planetmint node
to notify you as events occur, such as new `valid transactions <#valid-transactions>`_.
Demoing the API
---------------
You may be interested in demoing the Event Stream API with the `WebSocket echo test <http://websocket.org/echo.html>`_
to familiarize yourself before attempting an integration.
Determining Support for the Event Stream API
--------------------------------------------
It's a good idea to make sure that the node you're connecting with
has advertised support for the Event Stream API. To do so, send an HTTP GET
request to the node's :ref:`api-root-endpoint`
(e.g. ``http://localhost:9984/api/v1/``) and check that the
response contains a ``streams`` property:
.. code:: JSON
{
...,
"streams": "ws://example.com:9985/api/v1/streams/valid_transactions",
...
}
Connection Keep-Alive
---------------------
The Event Stream API supports Ping/Pong frames as described in
`RFC 6455 <https://tools.ietf.org/html/rfc6455#section-5.5.2>`_.
.. note::
It might not be possible to send PING/PONG frames from web browsers, because
browsers do not expose a JavaScript API for sending them.
Streams
-------
Each stream is meant as a unidirectional communication channel, where the
Planetmint node is the only party sending messages. Any messages sent to the
Planetmint node will be ignored.
Streams will always be under the WebSocket protocol (so ``ws://`` or
``wss://``) and accessible as extensions to the ``/api/v<version>/streams/``
API root URL (for example, valid transactions
would be accessible under ``/api/v1/streams/valid_transactions``). If you're
running your own Planetmint instance and need help determining its root URL,
then see the page titled :ref:`determining-the-api-root-url`.
All messages sent in a stream are in the JSON format.
.. note::
For simplicity, Planetmint initially only provides a stream for all
committed transactions. In the future, we may provide streams for other
information. We may
also provide the ability to filter the stream for specific qualities, such
as a specific ``output``'s ``public_key``.
If you have specific use cases that you think would fit as part of this
API, consider creating a new `BEP <https://github.com/bigchaindb/BEPs>`_.
Valid Transactions
~~~~~~~~~~~~~~~~~~
``/valid_transactions``
Streams an event for any newly valid transactions committed to a block. Message
bodies contain the transaction's ID, associated asset ID, and containing
block's height.
Example message:
.. code:: JSON
{
"transaction_id": "<sha3-256 hash>",
"asset_id": "<sha3-256 hash>",
"height": <int>
}
.. note::
Transactions in Planetmint are committed in batches ("blocks") and will,
therefore, be streamed in batches.
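As an illustrative sketch (not part of the official drivers), the stream can be consumed with the third-party ``websockets`` package for Python. The URL below is a placeholder; in practice it would be taken from the ``streams`` field of the API Root Endpoint.

.. code:: python

   import asyncio
   import json

   import websockets   # third-party package: pip install websockets

   STREAM_URL = "ws://localhost:9985/api/v1/streams/valid_transactions"

   async def listen():
       async with websockets.connect(STREAM_URL) as ws:
           while True:
               event = json.loads(await ws.recv())
               # Each event carries the transaction ID, asset ID and block height.
               print(event["height"], event["transaction_id"], event["asset_id"])

   asyncio.run(listen())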

View File

@ -0,0 +1,14 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Cryptography
============
Use the following link to find the Planetmint Transactions Spec (or Specs) that are relevant to you:
`Planetmint Transactions Specs <https://github.com/bigchaindb/BEPs/tree/master/tx-specs/>`_
Then see the sections titled **Cryptographic Hashes** and **Cryptographic Keys and Signatures**.

View File

@ -0,0 +1,79 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Notes for Firewall Setup
This is a page of notes on the ports potentially used by Planetmint nodes and the traffic they should expect, to help with firewall setup (or security group setup on cloud providers). This page is _not_ a firewall tutorial or step-by-step guide.
## Expected Unsolicited Inbound Traffic
The following ports should expect unsolicited inbound traffic:
1. **Port 22** can expect inbound SSH (TCP) traffic from the node administrator (i.e. a small set of IP addresses).
1. **Port 9984** can expect inbound HTTP (TCP) traffic from Planetmint clients sending transactions to the Planetmint HTTP API.
1. **Port 9985** can expect inbound WebSocket traffic from Planetmint clients.
1. **Port 26656** can expect inbound Tendermint P2P traffic from other Tendermint peers.
1. **Port 9986** can expect inbound HTTP (TCP) traffic from clients accessing the Public Key of a Tendermint instance.
All other ports should only get inbound traffic in response to specific requests from inside the node.
## Port 22
Port 22 is the default SSH port (TCP) so you'll at least want to make it possible to SSH in from your remote machine(s).
## Port 53
Port 53 is the default DNS port (UDP). It may be used, for example, by some package managers when looking up the IP address associated with certain package sources.
## Port 80
Port 80 is the default HTTP port (TCP). It's used by some package managers to get packages. It's _not_ the default port for the Planetmint client-server HTTP API.
## Port 123
Port 123 is the default NTP port (UDP). You should be running an NTP daemon on production Planetmint nodes. NTP daemons must be able to send requests to external NTP servers and accept the responses.
## Port 161
Port 161 is the default SNMP port (usually UDP, sometimes TCP). SNMP is used, for example, by some server monitoring systems.
## Port 443
Port 443 is the default HTTPS port (TCP). Package managers might also get some packages using HTTPS.
## Port 9984
Port 9984 is the default port for the Planetmint client-server HTTP API (TCP), which is served by the Gunicorn HTTP server. It's _possible_ to allow port 9984 to accept inbound traffic from anyone, but we recommend against doing that. Instead, set up a reverse proxy server (e.g. using Nginx) and only allow traffic from there. Information about how to do that can be found [in the Gunicorn documentation](http://docs.gunicorn.org/en/stable/deploy.html). (They call it a proxy.)
If Gunicorn and the reverse proxy are running on the same server, then you'll have to tell Gunicorn to listen on some port other than 9984 (so that the reverse proxy can listen on port 9984). You can do that by setting `server.bind` to 'localhost:PORT' in the [Planetmint Configuration Settings](../../installation/node-setup/configuration), where PORT is whatever port you chose (e.g. 9983).
You may want to have Gunicorn and the reverse proxy running on different servers, so that both can listen on port 9984. That would also help isolate the effects of a denial-of-service attack.
## Port 9985
Port 9985 is the default port for the Planetmint WebSocket Event Stream API.
## Port 9986
Port 9986 is the default port for accessing the Public Key of a Tendermint instance. It is used by an NGINX instance that runs alongside the Tendermint instance (Pod) and hosts only the Public Key.
## Port 26656
Port 26656 is the default port used by Tendermint Core to communicate with other instances of Tendermint Core (peers).
## Port 26657
Port 26657 is the default port used by Tendermint Core for RPC traffic. Planetmint nodes use that internally; they don't expect incoming traffic from the outside world on port 26657.
## Port 26658
Port 26658 is the default port used by Tendermint Core for ABCI traffic. Planetmint nodes use that internally; they don't expect incoming traffic from the outside world on port 26658.
## Other Ports
On Linux, you can use commands such as `netstat -tunlp` or `lsof -i` to get a sense of currently open/listening ports and connections, and the associated processes.

View File

@ -0,0 +1,41 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Generate a Key Pair for SSH
This page describes how to use `ssh-keygen`
to generate a public/private RSA key pair
that can be used with SSH.
(Note: `ssh-keygen` is found on most Linux and Unix-like
operating systems; if you're using Windows,
then you'll have to use another tool,
such as PuTTYgen.)
By convention, SSH key pairs get stored in the `~/.ssh/` directory.
Check what keys you already have there:
```text
ls -1 ~/.ssh/
```
Next, make up a new key pair name (called `<name>` below).
Here are some ideas:
* `aws-bdb-2`
* `tim-bdb-azure`
* `chris-bcdb-key`
Next, generate a public/private RSA key pair with that name:
```text
ssh-keygen -t rsa -C "<name>" -f ~/.ssh/<name>
```
It will ask you for a passphrase.
You can use whatever passphrase you like, but don't lose it.
Two keys (files) will be created in `~/.ssh/`:
1. `~/.ssh/<name>.pub` is the public key
2. `~/.ssh/<name>` is the private key

View File

@ -0,0 +1,19 @@
.. Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
Appendices
==========
.. toctree::
:maxdepth: 1
generate-key-pair-for-ssh
cryptography
firewall-notes
ntp-notes
log-rotation
licenses

View File

@ -0,0 +1,10 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Licenses
Information about how the Planetmint Server code and documentation are licensed can be found in [the LICENSES.md file](https://github.com/bigchaindb/bigchaindb/blob/master/LICENSES.md) of the bigchaindb/bigchaindb repository on GitHub.

Some files were not shown because too many files have changed in this diff.