GitHub Actions (#234)

* creating first github action

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fix syntax error

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* renamed action, using black stable

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated checkout action on workflow black

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* formatted code with black

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced lint with black service

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed black service, added black check to makefile

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced flake8 with black

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added pull_request to black actions trigger

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced flake8 with black style checker (#212)

* updated version number to 1.0.0

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* creating first github action

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fix syntax error

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* renamed action, using black stable

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated checkout action on workflow black

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* formatted code with black

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* version bump

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed some comments and unused import

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced lint with black service

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed black service, added black check to makefile

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced flake8 with black

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added pull_request to black actions trigger

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* started on unit test workflow

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed run step

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed typo

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* testing docker-compose

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* check docker-compose

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* try running pytest

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* check out -f

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed path

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* increased health check retries, added job dependency

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added path to docker-compose.yml to test action

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* moved container startup to test step

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added checkout step to test job

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* different kind of execution

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* checking build step

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed missing keyword

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added checkout to build step

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* storing artifacts

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added needs

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed Dockerfile-dev to python-slim

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added job matrix back in

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added abci to build job matrix

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated test job steps

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed typo

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced docker exec with docker-compose exec for abci test

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added first version of acceptance and integration test action

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added runs-on

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed syntax error

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* reverted to docker exec

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added copyright notice and env to start container step

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* separated abci from non abci test job

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* renamed pytest workflow to unit-test

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added codecov workflow

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added pytest install to codecov step

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added pip install

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* moved codecov to unit-test

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* show files

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed paths

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed debug job steps

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* renamed black to lint, added audit workflow

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* checking if dc down is necessary

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed dc down step from acceptance and integration

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed lint error

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added tox documentation to github actions (#226)

* added documentation job

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added docs dependency install to docs workflow

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* add more dependencies

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* install rapidjson manually

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added python-rapidjson to docs requirements text

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed gh config on tox.ini

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added base58 to docs require

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed docs require to dev

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* reversed changes to docs require

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed gh to gh-actions

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* increased verbosity for debugging

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added -e docsroot manually

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed verbosity

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed travis ci files

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed audit step to trigger on schedule

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
Co-authored-by: enesturk <enes.m.turk@gmail.com>
Lorenz Herzberger, 2022-08-18 09:45:51 +02:00, committed by GitHub
parent e88bb41c70
commit 8abbef00fe
151 changed files with 4721 additions and 5201 deletions


@ -1,12 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
if [[ -z ${TOXENV} ]] && [[ ${PLANETMINT_CI_ABCI} != 'enable' ]] && [[ ${PLANETMINT_ACCEPTANCE_TEST} != 'enable' ]]; then
    codecov -v -f htmlcov/coverage.xml
fi


@ -1,20 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
if [[ -n ${TOXENV} ]]; then
    sudo apt-get update
    sudo apt-get install zsh
fi
if [[ -z ${TOXENV} ]]; then
    sudo apt-get update
    sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
    sudo rm /usr/local/bin/docker-compose
    curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
    chmod +x docker-compose
    sudo mv docker-compose /usr/local/bin
fi


@ -1,18 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
if [[ -z ${TOXENV} ]]; then
    if [[ ${PLANETMINT_CI_ABCI} == 'enable' ]]; then
        docker-compose up -d planetmint
    else
        docker-compose up -d bdb
    fi
fi


@ -1,19 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
pip install --upgrade pip
if [[ -n ${TOXENV} ]]; then
    pip install --upgrade tox
elif [[ ${PLANETMINT_CI_ABCI} == 'enable' ]]; then
    docker-compose build --no-cache --build-arg abci_status=enable planetmint
else
    docker-compose build --no-cache planetmint
    pip install --upgrade codecov
fi


@ -1,21 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -e -x
if [[ -n ${TOXENV} ]]; then
    tox -e ${TOXENV}
elif [[ ${PLANETMINT_CI_ABCI} == 'enable' ]]; then
    docker-compose exec planetmint pytest -v -m abci
elif [[ ${PLANETMINT_ACCEPTANCE_TEST} == 'enable' ]]; then
    ./scripts/run-acceptance-test.sh
elif [[ ${PLANETMINT_INTEGRATION_TEST} == 'enable' ]]; then
    docker-compose down # TODO: remove after ci optimization
    ./scripts/run-integration-test.sh
else
    docker-compose exec planetmint pytest -v --cov=planetmint --cov-report xml:htmlcov/coverage.xml
fi

.github/workflows/acceptance-test.yml (new file, 21 lines)

@ -0,0 +1,21 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Acceptance tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Start container
        run: docker-compose up -d planetmint
      - name: Run test
        run: docker-compose -f docker-compose.yml run --rm python-acceptance pytest /src
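
The same two steps can be reproduced locally for debugging; a rough sketch, assuming the planetmint and python-acceptance services defined in the repository's docker-compose.yml:

```bash
# Start a node, run the acceptance suite against it, then clean up.
# The workflow itself skips the teardown because the GitHub runner is ephemeral.
docker-compose up -d planetmint
docker-compose -f docker-compose.yml run --rm python-acceptance pytest /src
docker-compose down
```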

.github/workflows/audit.yml (new file, 36 lines)

@ -0,0 +1,36 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Audit
on:
  schedule:
    - cron: '0 2 * * *'
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Install pip-audit
        run: pip install --upgrade pip pip-audit
      - name: Install dependencies
        run: pip install .
      - name: Create requirements.txt
        run: pip freeze > requirements.txt
      - name: Audit dependencies
        run: pip-audit
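
Run by hand, the nightly audit boils down to four commands; a minimal sketch, assuming a clean virtualenv (note that pip-audit with no arguments audits the installed environment, so the frozen requirements.txt mainly serves as a record of what was scanned):

```bash
pip install --upgrade pip pip-audit   # install the auditing tool
pip install .                         # install the project and its dependencies
pip freeze > requirements.txt         # snapshot the resolved dependency set
pip-audit                             # check installed packages against known vulnerabilities
```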

.github/workflows/documenation.yml (new file, 35 lines)

@ -0,0 +1,35 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Documentation
on: [push, pull_request]
jobs:
  documentation:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Install tox
        run: python -m pip install --upgrade tox tox-gh-actions
      - name: Install dependencies
        run: pip install .'[dev]'
      - name: Run tox
        run: tox -e docsroot
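
The docs build is reproducible locally with the same three steps; a sketch, assuming Python 3.9 on the path:

```bash
python -m pip install --upgrade tox tox-gh-actions   # tox drives the docs build
pip install .'[dev]'                                 # docs dependencies ship in the dev extra
tox -e docsroot                                      # build the root documentation tree
```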

.github/workflows/integration-test.yml (new file, 18 lines)

@ -0,0 +1,18 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Integration tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Start test run
        run: docker-compose -f docker-compose.integration.yml up test
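
Locally the equivalent run needs an explicit teardown, which the workflow omits because the runner is discarded afterwards (see the "checking if dc down is necessary" commits above); a sketch:

```bash
docker-compose -f docker-compose.integration.yml up test    # run the integration suite
docker-compose -f docker-compose.integration.yml down       # local-only cleanup
```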

.github/workflows/lint.yml (new file, 17 lines)

@ -0,0 +1,17 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Lint
on: [push, pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: psf/black@stable
        with:
          options: "--check -l 119"
          src: "."

.github/workflows/unit-test.yml (new file, 109 lines)

@ -0,0 +1,109 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Unit tests
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - abci_enabled: "ABCI enabled"
            abci: "enabled"
          - abci_disabled: "ABCI disabled"
            abci: "disabled"
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Build container
        run: |
          if [[ "${{ matrix.abci }}" == "enabled" ]]; then
            docker-compose -f docker-compose.yml build --no-cache --build-arg abci_status=enable planetmint
          fi
          if [[ "${{ matrix.abci }}" == "disabled" ]]; then
            docker-compose -f docker-compose.yml build --no-cache planetmint
          fi
      - name: Save image
        run: docker save -o planetmint.tar planetmint_planetmint
      - name: Upload image
        uses: actions/upload-artifact@v3
        with:
          name: planetmint-abci-${{matrix.abci}}
          path: planetmint.tar
          retention-days: 5
  test-with-abci:
    runs-on: ubuntu-latest
    needs: build
    strategy:
      matrix:
        include:
          - db: "MongoDB with ABCI"
            host: "mongodb"
            port: 27017
            abci: "enabled"
          - db: "Tarantool with ABCI"
            host: "tarantool"
            port: 3303
            abci: "enabled"
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Download planetmint
        uses: actions/download-artifact@v3
        with:
          name: planetmint-abci-enabled
      - name: Load planetmint
        run: docker load -i planetmint.tar
      - name: Start containers
        run: docker-compose -f docker-compose.yml up -d planetmint
      - name: Run tests
        run: docker exec planetmint_planetmint_1 pytest -v -m abci
  test-without-abci:
    runs-on: ubuntu-latest
    needs: build
    strategy:
      matrix:
        include:
          - db: "MongoDB without ABCI"
            host: "mongodb"
            port: 27017
          - db: "Tarantool without ABCI"
            host: "tarantool"
            port: 3303
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Download planetmint
        uses: actions/download-artifact@v3
        with:
          name: planetmint-abci-disabled
      - name: Load planetmint
        run: docker load -i planetmint.tar
      - name: Start containers
        run: docker-compose -f docker-compose.yml up -d bdb
      - name: Run tests
        run: docker exec planetmint_planetmint_1 pytest -v --cov=planetmint --cov-report xml:htmlcov/coverage.xml
      - name: Upload Coverage to Codecov
        uses: codecov/codecov-action@v3
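
The three jobs hand the freshly built image around as an artifact instead of rebuilding it once per matrix entry. A condensed sketch of that flow, assuming the compose project names shown above:

```bash
# build job: build once, serialize the image, publish it as an artifact
docker-compose -f docker-compose.yml build --no-cache planetmint
docker save -o planetmint.tar planetmint_planetmint

# test job: re-import the image, start the stack, run the suite
docker load -i planetmint.tar
docker-compose -f docker-compose.yml up -d bdb
docker exec planetmint_planetmint_1 pytest -v --cov=planetmint --cov-report xml:htmlcov/coverage.xml
```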


@ -1,64 +0,0 @@
# Copyright © 2020, 2021 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
sudo: required
dist: focal
services:
  - docker
language: python
cache: pip
python:
  - 3.9
env:
  global:
    - DOCKER_COMPOSE_VERSION=1.29.2
  matrix:
    - TOXENV=flake8
    - TOXENV=docsroot
matrix:
  fast_finish: true
  include:
    - python: 3.9
      env:
        - PLANETMINT_DATABASE_BACKEND=tarantool_db
        - PLANETMINT_DATABASE_SSL=
    - python: 3.9
      env:
        - PLANETMINT_DATABASE_BACKEND=tarantool_db
        - PLANETMINT_DATABASE_SSL=
        - PLANETMINT_CI_ABCI=enable
    - python: 3.9
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
    - python: 3.9
      env:
        - PLANETMINT_DATABASE_BACKEND=localmongodb
        - PLANETMINT_DATABASE_SSL=
        - PLANETMINT_CI_ABCI=enable
    - python: 3.9
      env:
        - PLANETMINT_ACCEPTANCE_TEST=enable
    - python: 3.9
      env:
        - PLANETMINT_INTEGRATION_TEST=enable
before_install: sudo .ci/travis-before-install.sh
install: .ci/travis-install.sh
before_script: .ci/travis-before-script.sh
script: .ci/travis_script.sh
after_success: .ci/travis-after-success.sh


@ -1,9 +1,9 @@
ARG python_version=3.9
FROM python:${python_version}
FROM python:${python_version}-slim
LABEL maintainer "contact@ipdb.global"
RUN apt-get update \
&& apt-get install -y git zsh\
&& apt-get install -y git zsh curl\
&& apt-get install -y tarantool-common\
&& apt-get install -y vim build-essential cmake\
&& pip install -U pip \


@ -47,6 +47,7 @@ HELP := python -c "$$PRINT_HELP_PYSCRIPT"
ECHO := /usr/bin/env echo
IS_DOCKER_COMPOSE_INSTALLED := $(shell command -v docker-compose 2> /dev/null)
IS_BLACK_INSTALLED := $(shell command -v black 2> /dev/null)
################
# Main targets #
@ -70,8 +71,11 @@ stop: check-deps ## Stop Planetmint
logs: check-deps ## Attach to the logs
@$(DC) logs -f planetmint
lint: check-deps ## Lint the project
@$(DC) up lint
lint: check-py-deps ## Lint the project
black --check -l 119 .
format: check-py-deps ## Format the project
black -l 119 .
test: check-deps test-unit test-acceptance ## Run unit and acceptance tests
@ -132,3 +136,11 @@ ifndef IS_DOCKER_COMPOSE_INSTALLED
@$(ECHO)
@$(DC) # docker-compose is not installed, so we call it to generate an error and exit
endif
check-py-deps:
ifndef IS_BLACK_INSTALLED
@$(ECHO) "Error: black is not installed"
@$(ECHO)
@$(ECHO) "You need to activate your virtual environment and install the test dependencies"
black # black is not installed, so we call it to generate an error and exit
endif
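
With these targets in place, day-to-day usage looks like this (assuming black is installed in the active virtualenv, which check-py-deps verifies):

```bash
make lint     # runs: black --check -l 119 .
make format   # runs: black -l 119 .
```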


@ -82,11 +82,11 @@ x = 'name: {}; score: {}'.format(name, n)
we use the `format()` version. The [official Python documentation says](https://docs.python.org/2/library/stdtypes.html#str.format), "This method of string formatting is the new standard in Python 3, and should be preferred to the % formatting described in String Formatting Operations in new code."
## Running the Flake8 Style Checker
## Running the Black Style Checker
We use [Flake8](http://flake8.pycqa.org/en/latest/index.html) to check our Python code style. Once you have it installed, you can run it using:
We use [Black](https://black.readthedocs.io/en/stable/) to check our Python code style. Once you have it installed, you can run it using:
```text
flake8 --max-line-length 119 planetmint/
black --check -l 119 .
```


@ -31,7 +31,7 @@ def test_basic():
# connect to localhost, but you can override this value using the env variable
# called `PLANETMINT_ENDPOINT`, a valid value must include the schema:
# `https://example.com:9984`
bdb = Planetmint(os.environ.get('PLANETMINT_ENDPOINT'))
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# ## Create keypairs
# This test requires the interaction between two actors with their own keypair.
@ -41,33 +41,28 @@ def test_basic():
# ## Alice registers her bike in Planetmint
# Alice has a nice bike, and here she creates the "digital twin"
# of her bike.
bike = {'data': {'bicycle': {'serial_number': 420420}}}
bike = {"data": {"bicycle": {"serial_number": 420420}}}
# She prepares a `CREATE` transaction...
prepared_creation_tx = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset=bike)
prepared_creation_tx = bdb.transactions.prepare(operation="CREATE", signers=alice.public_key, asset=bike)
# ... and she fulfills it with her private key.
fulfilled_creation_tx = bdb.transactions.fulfill(
prepared_creation_tx,
private_keys=alice.private_key)
fulfilled_creation_tx = bdb.transactions.fulfill(prepared_creation_tx, private_keys=alice.private_key)
# We will use the `id` of this transaction several time, so we store it in
# a variable with a short and easy name
bike_id = fulfilled_creation_tx['id']
bike_id = fulfilled_creation_tx["id"]
# Now she is ready to send it to the Planetmint Network.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_creation_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(bike_id), 'Cannot find transaction {}'.format(bike_id)
assert bdb.transactions.retrieve(bike_id), "Cannot find transaction {}".format(bike_id)
# Alice is now the proud owner of one unspent asset.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 1
assert bdb.outputs.get(alice.public_key)[0]['transaction_id'] == bike_id
assert bdb.outputs.get(alice.public_key)[0]["transaction_id"] == bike_id
# ## Alice transfers her bike to Bob
# After registering her bike, Alice is ready to transfer it to Bob.
@ -75,11 +70,11 @@ def test_basic():
# A `TRANSFER` transaction contains a pointer to the original asset. The original asset
# is identified by the `id` of the `CREATE` transaction that defined it.
transfer_asset = {'id': bike_id}
transfer_asset = {"id": bike_id}
# Alice wants to spend the one and only output available, the one with index `0`.
output_index = 0
output = fulfilled_creation_tx['outputs'][output_index]
output = fulfilled_creation_tx["outputs"][output_index]
# Here, she defines the `input` of the `TRANSFER` transaction. The `input` contains
# several keys:
@ -87,29 +82,26 @@ def test_basic():
# - `fulfillment`, taken from the previous `CREATE` transaction.
# - `fulfills`, that specifies which condition she is fulfilling.
# - `owners_before`.
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_creation_tx['id']},
'owners_before': output['public_keys']}
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_creation_tx["id"]},
"owners_before": output["public_keys"],
}
# Now that all the elements are set, she creates the actual transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=bob.public_key)
operation="TRANSFER", asset=transfer_asset, inputs=transfer_input, recipients=bob.public_key
)
# ... and signs it with her private key.
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=alice.private_key)
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=alice.private_key)
# She finally sends the transaction to a Planetmint node.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(fulfilled_transfer_tx['id']) == sent_transfer_tx
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# Now Alice has zero unspent transactions.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 0
@ -118,5 +110,5 @@ def test_basic():
assert len(bdb.outputs.get(bob.public_key, spent=False)) == 1
# Bob double checks what he got was the actual bike.
bob_tx_id = bdb.outputs.get(bob.public_key, spent=False)[0]['transaction_id']
bob_tx_id = bdb.outputs.get(bob.public_key, spent=False)[0]["transaction_id"]
assert bdb.transactions.retrieve(bob_tx_id) == sent_transfer_tx


@ -34,7 +34,7 @@ def test_divisible_assets():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = Planetmint(os.environ.get('PLANETMINT_ENDPOINT'))
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# Oh look, it is Alice again and she brought her friend Bob along.
alice, bob = generate_keypair(), generate_keypair()
@ -48,13 +48,9 @@ def test_divisible_assets():
# the bike for one hour.
bike_token = {
'data': {
'token_for': {
'bike': {
'serial_number': 420420
}
},
'description': 'Time share token. Each token equals one hour of riding.',
"data": {
"token_for": {"bike": {"serial_number": 420420}},
"description": "Time share token. Each token equals one hour of riding.",
},
}
@ -62,28 +58,22 @@ def test_divisible_assets():
# Here, Alice defines in a tuple that she wants to assign
# these 10 tokens to Bob.
prepared_token_tx = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
recipients=[([bob.public_key], 10)],
asset=bike_token)
operation="CREATE", signers=alice.public_key, recipients=[([bob.public_key], 10)], asset=bike_token
)
# She fulfills and sends the transaction.
fulfilled_token_tx = bdb.transactions.fulfill(
prepared_token_tx,
private_keys=alice.private_key)
fulfilled_token_tx = bdb.transactions.fulfill(prepared_token_tx, private_keys=alice.private_key)
bdb.transactions.send_commit(fulfilled_token_tx)
# We store the `id` of the transaction to use it later on.
bike_token_id = fulfilled_token_tx['id']
bike_token_id = fulfilled_token_tx["id"]
# Let's check if the transaction was successful.
assert bdb.transactions.retrieve(bike_token_id), \
'Cannot find transaction {}'.format(bike_token_id)
assert bdb.transactions.retrieve(bike_token_id), "Cannot find transaction {}".format(bike_token_id)
# Bob owns 10 tokens now.
assert bdb.transactions.retrieve(bike_token_id)['outputs'][0][
'amount'] == '10'
assert bdb.transactions.retrieve(bike_token_id)["outputs"][0]["amount"] == "10"
# ## Bob wants to use the bike
# Now that Bob got the tokens and the sun is shining, he wants to get out
@ -91,49 +81,45 @@ def test_divisible_assets():
# To use the bike he has to send the tokens back to Alice.
# To learn about the details of transferring a transaction check out
# [test_basic.py](./test_basic.html)
transfer_asset = {'id': bike_token_id}
transfer_asset = {"id": bike_token_id}
output_index = 0
output = fulfilled_token_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_token_tx[
'id']},
'owners_before': output['public_keys']}
output = fulfilled_token_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_token_tx["id"]},
"owners_before": output["public_keys"],
}
# To use the tokens Bob has to reassign 7 tokens to himself and the
# amount he wants to use to Alice.
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
operation="TRANSFER",
asset=transfer_asset,
inputs=transfer_input,
recipients=[([alice.public_key], 3), ([bob.public_key], 7)])
recipients=[([alice.public_key], 3), ([bob.public_key], 7)],
)
# He signs and sends the transaction.
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=bob.private_key)
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# First, Bob checks if the transaction was successful.
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id']) == sent_transfer_tx
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# There are two outputs in the transaction now.
# The first output shows that Alice got back 3 tokens...
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id'])['outputs'][0]['amount'] == '3'
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["amount"] == "3"
# ... while Bob still has 7 left.
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id'])['outputs'][1]['amount'] == '7'
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][1]["amount"] == "7"
# ## Bob wants to ride the bike again
# It's been a week and Bob wants to ride the bike again.
# Now he wants to ride for 8 hours, that's a lot Bob!
# He prepares the transaction again.
transfer_asset = {'id': bike_token_id}
transfer_asset = {"id": bike_token_id}
# This time we need an `output_index` of 1, since we have two outputs
# in the `fulfilled_transfer_tx` we created before. The first output with
# index 0 is for Alice and the second output is for Bob.
@ -141,24 +127,21 @@ def test_divisible_assets():
# correct output with the correct amount of tokens.
output_index = 1
output = fulfilled_transfer_tx['outputs'][output_index]
output = fulfilled_transfer_tx["outputs"][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_transfer_tx['id']},
'owners_before': output['public_keys']}
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_transfer_tx["id"]},
"owners_before": output["public_keys"],
}
# This time Bob only provides Alice in the `recipients` because he wants
# to spend all his tokens
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=[([alice.public_key], 8)])
operation="TRANSFER", asset=transfer_asset, inputs=transfer_input, recipients=[([alice.public_key], 8)]
)
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=bob.private_key)
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
# Oh Bob, what have you done?! You tried to spend more tokens than you had.
# Remember Bob, last time you spent 3 tokens already,
@ -169,10 +152,12 @@ def test_divisible_assets():
# Now Bob gets an error saying that the amount he wanted to spent is
# higher than the amount of tokens he has left.
assert error.value.args[0] == 400
message = 'Invalid transaction (AmountError): The amount used in the ' \
'inputs `7` needs to be same as the amount used in the ' \
'outputs `8`'
assert error.value.args[2]['message'] == message
message = (
"Invalid transaction (AmountError): The amount used in the "
"inputs `7` needs to be same as the amount used in the "
"outputs `8`"
)
assert error.value.args[2]["message"] == message
# We have to stop this test now, I am sorry, but Bob is pretty upset
# about his mistake. See you next time :)


@ -17,32 +17,30 @@ from planetmint_driver.crypto import generate_keypair
def test_double_create():
bdb = Planetmint(os.environ.get('PLANETMINT_ENDPOINT'))
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
alice = generate_keypair()
results = queue.Queue()
tx = bdb.transactions.fulfill(
bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset={'data': {'uuid': str(uuid4())}}),
private_keys=alice.private_key)
bdb.transactions.prepare(operation="CREATE", signers=alice.public_key, asset={"data": {"uuid": str(uuid4())}}),
private_keys=alice.private_key,
)
def send_and_queue(tx):
try:
bdb.transactions.send_commit(tx)
results.put('OK')
results.put("OK")
except planetmint_driver.exceptions.TransportError as e:
results.put('FAIL')
results.put("FAIL")
t1 = Thread(target=send_and_queue, args=(tx, ))
t2 = Thread(target=send_and_queue, args=(tx, ))
t1 = Thread(target=send_and_queue, args=(tx,))
t2 = Thread(target=send_and_queue, args=(tx,))
t1.start()
t2.start()
results = [results.get(timeout=2), results.get(timeout=2)]
assert results.count('OK') == 1
assert results.count('FAIL') == 1
assert results.count("OK") == 1
assert results.count("FAIL") == 1


@ -31,7 +31,7 @@ def test_multiple_owners():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = Planetmint(os.environ.get('PLANETMINT_ENDPOINT'))
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# Hey Alice and Bob, nice to see you again!
alice, bob = generate_keypair(), generate_keypair()
@ -41,40 +41,28 @@ def test_multiple_owners():
# high rents anymore. Bob suggests to get a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = {
'data': {
'dish washer': {
'serial_number': 1337
}
}
}
dw_asset = {"data": {"dish washer": {"serial_number": 1337}}}
# They prepare a `CREATE` transaction. To have multiple owners, both
# Bob and Alice need to be the recipients.
prepared_dw_tx = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
recipients=(alice.public_key, bob.public_key),
asset=dw_asset)
operation="CREATE", signers=alice.public_key, recipients=(alice.public_key, bob.public_key), asset=dw_asset
)
# Now they both sign the transaction by providing their private keys.
# And send it afterwards.
fulfilled_dw_tx = bdb.transactions.fulfill(
prepared_dw_tx,
private_keys=[alice.private_key, bob.private_key])
fulfilled_dw_tx = bdb.transactions.fulfill(prepared_dw_tx, private_keys=[alice.private_key, bob.private_key])
bdb.transactions.send_commit(fulfilled_dw_tx)
# We store the `id` of the transaction to use it later on.
dw_id = fulfilled_dw_tx['id']
dw_id = fulfilled_dw_tx["id"]
# Let's check if the transaction was successful.
assert bdb.transactions.retrieve(dw_id), \
'Cannot find transaction {}'.format(dw_id)
assert bdb.transactions.retrieve(dw_id), "Cannot find transaction {}".format(dw_id)
# The transaction should have two public keys in the outputs.
assert len(
bdb.transactions.retrieve(dw_id)['outputs'][0]['public_keys']) == 2
assert len(bdb.transactions.retrieve(dw_id)["outputs"][0]["public_keys"]) == 2
# ## Alice and Bob transfer a transaction to Carol.
# Alice and Bob save a lot of money living together. They often go out
@ -86,39 +74,33 @@ def test_multiple_owners():
# Alice and Bob prepare the transaction to transfer the dish washer to
# Carol.
transfer_asset = {'id': dw_id}
transfer_asset = {"id": dw_id}
output_index = 0
output = fulfilled_dw_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_dw_tx[
'id']},
'owners_before': output['public_keys']}
output = fulfilled_dw_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_dw_tx["id"]},
"owners_before": output["public_keys"],
}
# Now they create the transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=carol.public_key)
operation="TRANSFER", asset=transfer_asset, inputs=transfer_input, recipients=carol.public_key
)
# ... and sign it with their private keys, then send it.
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx,
private_keys=[alice.private_key, bob.private_key])
prepared_transfer_tx, private_keys=[alice.private_key, bob.private_key]
)
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# They check if the transaction was successful.
assert bdb.transactions.retrieve(
fulfilled_transfer_tx['id']) == sent_transfer_tx
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# The owners before should include both Alice and Bob.
assert len(
bdb.transactions.retrieve(fulfilled_transfer_tx['id'])['inputs'][0][
'owners_before']) == 2
assert len(bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["inputs"][0]["owners_before"]) == 2
# While the new owner is Carol.
assert bdb.transactions.retrieve(fulfilled_transfer_tx['id'])[
'outputs'][0]['public_keys'][0] == carol.public_key
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["public_keys"][0] == carol.public_key


@ -32,15 +32,36 @@ from planetmint_driver.exceptions import BadRequest
naughty_strings = blns.all()
skipped_naughty_strings = [
'1.00', '$1.00', '-1.00', '-$1.00', '0.00', '0..0', '.', '0.0.0',
'-.', ",./;'[]\\-=", 'ثم نفس سقطت وبالتحديد،, جزيرتي باستخدام أن دنو. إذ هنا؟ الستار وتنصيب كان. أهّل ايطاليا، بريطانيا-فرنسا قد أخذ. سليمان، إتفاقية بين ما, يذكر الحدود أي بعد, معاملة بولندا، الإطلاق عل إيو.',
'test\x00', 'Ṱ̺̺̕o͞ ̷i̲̬͇̪͙n̝̗͕v̟̜̘̦͟o̶̙̰̠kè͚̮̺̪̹̱̤ ̖t̝͕̳̣̻̪͞h̼͓̲̦̳̘̲e͇̣̰̦̬͎ ̢̼̻̱̘h͚͎͙̜̣̲ͅi̦̲̣̰̤v̻͍e̺̭̳̪̰-m̢iͅn̖̺̞̲̯̰d̵̼̟͙̩̼̘̳ ̞̥̱̳̭r̛̗̘e͙p͠r̼̞̻̭̗e̺̠̣͟s̘͇̳͍̝͉e͉̥̯̞̲͚̬͜ǹ̬͎͎̟̖͇̤t͍̬̤͓̼̭͘ͅi̪̱n͠g̴͉ ͏͉ͅc̬̟h͡a̫̻̯͘o̫̟̖͍̙̝͉s̗̦̲.̨̹͈̣', '̡͓̞ͅI̗̘̦͝n͇͇͙v̮̫ok̲̫̙͈i̖͙̭̹̠̞n̡̻̮̣̺g̲͈͙̭͙̬͎ ̰t͔̦h̞̲e̢̤ ͍̬̲͖f̴̘͕̣è͖ẹ̥̩l͖͔͚i͓͚̦͠n͖͍̗͓̳̮g͍ ̨o͚̪͡f̘̣̬ ̖̘͖̟͙̮c҉͔̫͖͓͇͖ͅh̵̤̣͚͔á̗̼͕ͅo̼̣̥s̱͈̺̖̦̻͢.̛̖̞̠̫̰', '̗̺͖̹̯͓Ṯ̤͍̥͇͈h̲́e͏͓̼̗̙̼̣͔ ͇̜̱̠͓͍ͅN͕͠e̗̱z̘̝̜̺͙p̤̺̹͍̯͚e̠̻̠͜r̨̤͍̺̖͔̖̖d̠̟̭̬̝͟i̦͖̩͓͔̤a̠̗̬͉̙n͚͜ ̻̞̰͚ͅh̵͉i̳̞v̢͇ḙ͎͟-҉̭̩̼͔m̤̭̫i͕͇̝̦n̗͙ḍ̟ ̯̲͕͞ǫ̟̯̰̲͙̻̝f ̪̰̰̗̖̭̘͘c̦͍̲̞͍̩̙ḥ͚a̮͎̟̙͜ơ̩̹͎s̤.̝̝ ҉Z̡̖̜͖̰̣͉̜a͖̰͙̬͡l̲̫̳͍̩g̡̟̼̱͚̞̬ͅo̗͜.̟',
'̦H̬̤̗̤͝e͜ ̜̥̝̻͍̟́w̕h̖̯͓o̝͙̖͎̱̮ ҉̺̙̞̟͈W̷̼̭a̺̪͍į͈͕̭͙̯̜t̶̼̮s̘͙͖̕ ̠̫̠B̻͍͙͉̳ͅe̵h̵̬͇̫͙i̹͓̳̳̮͎̫̕n͟d̴̪̜̖ ̰͉̩͇͙̲͞ͅT͖̼͓̪͢h͏͓̮̻e̬̝̟ͅ ̤̹̝W͙̞̝͔͇͝ͅa͏͓͔̹̼̣l̴͔̰̤̟͔ḽ̫.͕', '"><script>alert(document.title)</script>', "'><script>alert(document.title)</script>",
'><script>alert(document.title)</script>', '</script><script>alert(document.title)</script>', '< / script >< script >alert(document.title)< / script >',
' onfocus=alert(document.title) autofocus ','" onfocus=alert(document.title) autofocus ', "' onfocus=alert(document.title) autofocus ",
'scriptalert(document.title)/script', '/dev/null; touch /tmp/blns.fail ; echo', '../../../../../../../../../../../etc/passwd%00',
'../../../../../../../../../../../etc/hosts', '() { 0; }; touch /tmp/blns.shellshock1.fail;',
'() { _; } >_[$($())] { touch /tmp/blns.shellshock2.fail; }'
"1.00",
"$1.00",
"-1.00",
"-$1.00",
"0.00",
"0..0",
".",
"0.0.0",
"-.",
",./;'[]\\-=",
"ثم نفس سقطت وبالتحديد،, جزيرتي باستخدام أن دنو. إذ هنا؟ الستار وتنصيب كان. أهّل ايطاليا، بريطانيا-فرنسا قد أخذ. سليمان، إتفاقية بين ما, يذكر الحدود أي بعد, معاملة بولندا، الإطلاق عل إيو.",
"test\x00",
"Ṱ̺̺̕o͞ ̷i̲̬͇̪͙n̝̗͕v̟̜̘̦͟o̶̙̰̠kè͚̮̺̪̹̱̤ ̖t̝͕̳̣̻̪͞h̼͓̲̦̳̘̲e͇̣̰̦̬͎ ̢̼̻̱̘h͚͎͙̜̣̲ͅi̦̲̣̰̤v̻͍e̺̭̳̪̰-m̢iͅn̖̺̞̲̯̰d̵̼̟͙̩̼̘̳ ̞̥̱̳̭r̛̗̘e͙p͠r̼̞̻̭̗e̺̠̣͟s̘͇̳͍̝͉e͉̥̯̞̲͚̬͜ǹ̬͎͎̟̖͇̤t͍̬̤͓̼̭͘ͅi̪̱n͠g̴͉ ͏͉ͅc̬̟h͡a̫̻̯͘o̫̟̖͍̙̝͉s̗̦̲.̨̹͈̣",
"̡͓̞ͅI̗̘̦͝n͇͇͙v̮̫ok̲̫̙͈i̖͙̭̹̠̞n̡̻̮̣̺g̲͈͙̭͙̬͎ ̰t͔̦h̞̲e̢̤ ͍̬̲͖f̴̘͕̣è͖ẹ̥̩l͖͔͚i͓͚̦͠n͖͍̗͓̳̮g͍ ̨o͚̪͡f̘̣̬ ̖̘͖̟͙̮c҉͔̫͖͓͇͖ͅh̵̤̣͚͔á̗̼͕ͅo̼̣̥s̱͈̺̖̦̻͢.̛̖̞̠̫̰",
"̗̺͖̹̯͓Ṯ̤͍̥͇͈h̲́e͏͓̼̗̙̼̣͔ ͇̜̱̠͓͍ͅN͕͠e̗̱z̘̝̜̺͙p̤̺̹͍̯͚e̠̻̠͜r̨̤͍̺̖͔̖̖d̠̟̭̬̝͟i̦͖̩͓͔̤a̠̗̬͉̙n͚͜ ̻̞̰͚ͅh̵͉i̳̞v̢͇ḙ͎͟-҉̭̩̼͔m̤̭̫i͕͇̝̦n̗͙ḍ̟ ̯̲͕͞ǫ̟̯̰̲͙̻̝f ̪̰̰̗̖̭̘͘c̦͍̲̞͍̩̙ḥ͚a̮͎̟̙͜ơ̩̹͎s̤.̝̝ ҉Z̡̖̜͖̰̣͉̜a͖̰͙̬͡l̲̫̳͍̩g̡̟̼̱͚̞̬ͅo̗͜.̟",
"̦H̬̤̗̤͝e͜ ̜̥̝̻͍̟́w̕h̖̯͓o̝͙̖͎̱̮ ҉̺̙̞̟͈W̷̼̭a̺̪͍į͈͕̭͙̯̜t̶̼̮s̘͙͖̕ ̠̫̠B̻͍͙͉̳ͅe̵h̵̬͇̫͙i̹͓̳̳̮͎̫̕n͟d̴̪̜̖ ̰͉̩͇͙̲͞ͅT͖̼͓̪͢h͏͓̮̻e̬̝̟ͅ ̤̹̝W͙̞̝͔͇͝ͅa͏͓͔̹̼̣l̴͔̰̤̟͔ḽ̫.͕",
'"><script>alert(document.title)</script>',
"'><script>alert(document.title)</script>",
"><script>alert(document.title)</script>",
"</script><script>alert(document.title)</script>",
"< / script >< script >alert(document.title)< / script >",
" onfocus=alert(document.title) autofocus ",
'" onfocus=alert(document.title) autofocus ',
"' onfocus=alert(document.title) autofocus ",
"scriptalert(document.title)/script",
"/dev/null; touch /tmp/blns.fail ; echo",
"../../../../../../../../../../../etc/passwd%00",
"../../../../../../../../../../../etc/hosts",
"() { 0; }; touch /tmp/blns.shellshock1.fail;",
"() { _; } >_[$($())] { touch /tmp/blns.shellshock2.fail; }",
]
naughty_strings = [naughty for naughty in naughty_strings if naughty not in skipped_naughty_strings]
@ -50,22 +71,18 @@ def send_naughty_tx(asset, metadata):
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = Planetmint(os.environ.get('PLANETMINT_ENDPOINT'))
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# Here's Alice.
alice = generate_keypair()
# Alice is in a naughty mood today, so she creates a tx with some naughty strings
prepared_transaction = bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset=asset,
metadata=metadata)
operation="CREATE", signers=alice.public_key, asset=asset, metadata=metadata
)
# She fulfills the transaction
fulfilled_transaction = bdb.transactions.fulfill(
prepared_transaction,
private_keys=alice.private_key)
fulfilled_transaction = bdb.transactions.fulfill(prepared_transaction, private_keys=alice.private_key)
# The fulfilled tx gets sent to the BDB network
try:
@ -74,23 +91,24 @@ def send_naughty_tx(asset, metadata):
sent_transaction = e
# If her key contained a '.', began with a '$', or contained a NUL character
regex = '.*\..*|\$.*|.*\x00.*'
regex = ".*\..*|\$.*|.*\x00.*"
key = next(iter(metadata))
if re.match(regex, key):
# Then she expects a nicely formatted error code
status_code = sent_transaction.status_code
error = sent_transaction.error
regex = (
r'\{\s*\n*'
r"\{\s*\n*"
r'\s*"message":\s*"Invalid transaction \(ValidationError\):\s*'
r'Invalid key name.*The key name cannot contain characters.*\n*'
r"Invalid key name.*The key name cannot contain characters.*\n*"
r'\s*"status":\s*400\n*'
r'\s*\}\n*')
r"\s*\}\n*"
)
assert status_code == 400
assert re.fullmatch(regex, error), sent_transaction
# Otherwise, she expects to see her transaction in the database
elif 'id' in sent_transaction.keys():
tx_id = sent_transaction['id']
elif "id" in sent_transaction.keys():
tx_id = sent_transaction["id"]
assert bdb.transactions.retrieve(tx_id)
# If neither condition was true, then something weird happened...
else:
@ -100,8 +118,8 @@ def send_naughty_tx(asset, metadata):
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_keys(naughty_string):
asset = {'data': {naughty_string: 'nice_value'}}
metadata = {naughty_string: 'nice_value'}
asset = {"data": {naughty_string: "nice_value"}}
metadata = {naughty_string: "nice_value"}
send_naughty_tx(asset, metadata)
@ -109,7 +127,7 @@ def test_naughty_keys(naughty_string):
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_values(naughty_string):
asset = {'data': {'nice_key': naughty_string}}
metadata = {'nice_key': naughty_string}
asset = {"data": {"nice_key": naughty_string}}
metadata = {"nice_key": naughty_string}
send_naughty_tx(asset, metadata)


@ -35,10 +35,10 @@ def test_stream():
# ## Set up the test
# We use the env variable `PLANETMINT_ENDPOINT` to know where to connect.
# Check [test_basic.py](./test_basic.html) for more information.
BDB_ENDPOINT = os.environ.get('PLANETMINT_ENDPOINT')
BDB_ENDPOINT = os.environ.get("PLANETMINT_ENDPOINT")
# *That's pretty bad, but let's do like this for now.*
WS_ENDPOINT = 'ws://{}:9985/api/v1/streams/valid_transactions'.format(BDB_ENDPOINT.rsplit(':')[0])
WS_ENDPOINT = "ws://{}:9985/api/v1/streams/valid_transactions".format(BDB_ENDPOINT.rsplit(":")[0])
bdb = Planetmint(BDB_ENDPOINT)
@ -90,11 +90,11 @@ def test_stream():
# random `uuid`.
for _ in range(10):
tx = bdb.transactions.fulfill(
bdb.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset={'data': {'uuid': str(uuid4())}}),
private_keys=alice.private_key)
bdb.transactions.prepare(
operation="CREATE", signers=alice.public_key, asset={"data": {"uuid": str(uuid4())}}
),
private_keys=alice.private_key,
)
# We don't want to wait for each transaction to be in a block. By using
# `async` mode, we make sure that the driver returns as soon as the
# transaction is pushed to the Planetmint API. Remember: we expect all
@ -104,7 +104,7 @@ def test_stream():
bdb.transactions.send_async(tx)
# The `id` of every sent transaction is then stored in a list.
sent.append(tx['id'])
sent.append(tx["id"])
# ## Check the valid transactions coming from Planetmint
# Now we are ready to check if Planetmint did its job. A simple way to
@ -118,9 +118,9 @@ def test_stream():
# the timeout, then game over ¯\\\_(ツ)\_/¯
try:
event = received.get(timeout=5)
txid = json.loads(event)['transaction_id']
txid = json.loads(event)["transaction_id"]
except queue.Empty:
assert False, 'Did not receive all expected transactions'
assert False, "Did not receive all expected transactions"
# Last thing is to try to remove the `txid` from the set of sent
# transactions. If this test is running in parallel with others, we


@ -9,106 +9,105 @@ from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
def test_zenroom_signing(gen_key_zencode, secret_key_to_private_key_zencode,
fulfill_script_zencode, zenroom_data, zenroom_house_assets,
condition_script_zencode):
def test_zenroom_signing(
gen_key_zencode,
secret_key_to_private_key_zencode,
fulfill_script_zencode,
zenroom_data,
zenroom_house_assets,
condition_script_zencode,
):
biolabs = generate_keypair()
version = '2.0'
alice = json.loads(zencode_exec(gen_key_zencode).output)['keyring']
bob = json.loads(zencode_exec(gen_key_zencode).output)['keyring']
zen_public_keys = json.loads(zencode_exec(secret_key_to_private_key_zencode.format('Alice'),
keys=json.dumps({'keyring': alice})).output)
zen_public_keys.update(json.loads(zencode_exec(secret_key_to_private_key_zencode.format('Bob'),
keys=json.dumps({'keyring': bob})).output))
version = "2.0"
alice = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
bob = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
zen_public_keys = json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Alice"), keys=json.dumps({"keyring": alice})).output
)
zen_public_keys.update(
json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Bob"), keys=json.dumps({"keyring": bob})).output
)
)
zenroomscpt = ZenroomSha256(script=fulfill_script_zencode, data=zenroom_data, keys=zen_public_keys)
print(F'zenroom is: {zenroomscpt.script}')
print(f"zenroom is: {zenroomscpt.script}")
# CRYPTO-CONDITIONS: generate the condition uri
condition_uri_zen = zenroomscpt.condition.serialize_uri()
print(F'\nzenroom condition URI: {condition_uri_zen}')
condition_uri_zen = zenroomscpt.condition.serialize_uri()
print(f"\nzenroom condition URI: {condition_uri_zen}")
# CRYPTO-CONDITIONS: construct an unsigned fulfillment dictionary
unsigned_fulfillment_dict_zen = {
'type': zenroomscpt.TYPE_NAME,
'public_key': base58.b58encode(biolabs.public_key).decode(),
"type": zenroomscpt.TYPE_NAME,
"public_key": base58.b58encode(biolabs.public_key).decode(),
}
output = {
'amount': '10',
'condition': {
'details': unsigned_fulfillment_dict_zen,
'uri': condition_uri_zen,
"amount": "10",
"condition": {
"details": unsigned_fulfillment_dict_zen,
"uri": condition_uri_zen,
},
'public_keys': [biolabs.public_key,],
"public_keys": [
biolabs.public_key,
],
}
input_ = {
'fulfillment': None,
'fulfills': None,
'owners_before': [biolabs.public_key,]
"fulfillment": None,
"fulfills": None,
"owners_before": [
biolabs.public_key,
],
}
metadata = {
"result": {
"output": ["ok"]
}
}
metadata = {"result": {"output": ["ok"]}}
token_creation_tx = {
'operation': 'CREATE',
'asset': zenroom_house_assets,
'metadata': metadata,
'outputs': [output,],
'inputs': [input_,],
'version': version,
'id': None,
"operation": "CREATE",
"asset": zenroom_house_assets,
"metadata": metadata,
"outputs": [
output,
],
"inputs": [
input_,
],
"version": version,
"id": None,
}
# JSON: serialize the transaction-without-id to a json formatted string
message = json.dumps(
token_creation_tx,
sort_keys=True,
separators=(',', ':'),
separators=(",", ":"),
ensure_ascii=False,
)
# major workflow:
# we store the fulfill script in the transaction/message (zenroom-sha)
# the condition script is used to fulfill the transaction and create the signature
#
#
# the server should pick the fulfill script and recreate the zenroom-sha and verify the signature
message = zenroomscpt.sign(message, condition_script_zencode, alice)
assert(zenroomscpt.validate(message=message))
assert zenroomscpt.validate(message=message)
message = json.loads(message)
fulfillment_uri_zen = zenroomscpt.serialize_uri()
message['inputs'][0]['fulfillment'] = fulfillment_uri_zen
message["inputs"][0]["fulfillment"] = fulfillment_uri_zen
tx = message
tx['id'] = None
json_str_tx = json.dumps(
tx,
sort_keys=True,
skipkeys=False,
separators=(',', ':')
)
tx["id"] = None
json_str_tx = json.dumps(tx, sort_keys=True, skipkeys=False, separators=(",", ":"))
# SHA3: hash the serialized id-less transaction to generate the id
shared_creation_txid = sha3_256(json_str_tx.encode()).hexdigest()
message['id'] = shared_creation_txid
message["id"] = shared_creation_txid
# `https://example.com:9984`
plntmnt = Planetmint(os.environ.get('PLANETMINT_ENDPOINT'))
plntmnt = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
sent_transfer_tx = plntmnt.transactions.send_commit(message)
print( f"\n\nstatus and result : + {sent_transfer_tx}")
print(f"\n\nstatus and result : + {sent_transfer_tx}")


@ -60,8 +60,8 @@ services:
test: ["CMD", "bash", "-c", "curl http://planetmint:9984 && curl http://tendermint:26657/abci_query"]
interval: 3s
timeout: 5s
retries: 3
command: '.ci/entrypoint.sh'
retries: 5
command: 'scripts/entrypoint.sh'
restart: always
tendermint:
@ -119,16 +119,6 @@ services:
volumes:
- ./docs/root/build/html:/usr/share/nginx/html
# Lints project according to PEP8
lint:
image: alpine/flake8
command: --max-line-length 119 /planetmint /acceptance /integration /tests
volumes:
- ./planetmint:/planetmint
- ./acceptance:/acceptance
- ./integration:/integration
- ./tests:/tests
# Remove all build, test, coverage and Python artifacts
clean:
image: alpine


@ -20,28 +20,36 @@ from planetmint.web import server
TPLS = {}
TPLS['index-response'] = """\
TPLS[
"index-response"
] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(index)s
"""
TPLS['api-index-response'] = """\
TPLS[
"api-index-response"
] = """\
HTTP/1.1 200 OK
Content-Type: application/json
%(api_index)s
"""
TPLS['get-tx-id-request'] = """\
TPLS[
"get-tx-id-request"
] = """\
GET /api/v1/transactions/%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-id-response'] = """\
TPLS[
"get-tx-id-response"
] = """\
HTTP/1.1 200 OK
Content-Type: application/json
@ -49,14 +57,18 @@ Content-Type: application/json
"""
TPLS['get-tx-by-asset-request'] = """\
TPLS[
"get-tx-by-asset-request"
] = """\
GET /api/v1/transactions?operation=TRANSFER&asset_id=%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-tx-by-asset-response'] = """\
TPLS[
"get-tx-by-asset-response"
] = """\
HTTP/1.1 200 OK
Content-Type: application/json
@ -64,7 +76,9 @@ Content-Type: application/json
%(tx_transfer_last)s]
"""
TPLS['post-tx-request'] = """\
TPLS[
"post-tx-request"
] = """\
POST /api/v1/transactions?mode=async HTTP/1.1
Host: example.com
Content-Type: application/json
@ -73,7 +87,9 @@ Content-Type: application/json
"""
TPLS['post-tx-response'] = """\
TPLS[
"post-tx-response"
] = """\
HTTP/1.1 202 Accepted
Content-Type: application/json
@ -81,14 +97,18 @@ Content-Type: application/json
"""
TPLS['get-block-request'] = """\
TPLS[
"get-block-request"
] = """\
GET /api/v1/blocks/%(blockid)s HTTP/1.1
Host: example.com
"""
TPLS['get-block-response'] = """\
TPLS[
"get-block-response"
] = """\
HTTP/1.1 200 OK
Content-Type: application/json
@ -96,14 +116,18 @@ Content-Type: application/json
"""
TPLS['get-block-txid-request'] = """\
TPLS[
"get-block-txid-request"
] = """\
GET /api/v1/blocks?transaction_id=%(txid)s HTTP/1.1
Host: example.com
"""
TPLS['get-block-txid-response'] = """\
TPLS[
"get-block-txid-response"
] = """\
HTTP/1.1 200 OK
Content-Type: application/json
@ -112,7 +136,7 @@ Content-Type: application/json
def main():
""" Main function """
"""Main function"""
ctx = {}
@ -121,90 +145,91 @@ def main():
client = server.create_app().test_client()
host = 'example.com:9984'
host = "example.com:9984"
# HTTP Index
res = client.get('/', environ_overrides={'HTTP_HOST': host})
res = client.get("/", environ_overrides={"HTTP_HOST": host})
res_data = json.loads(res.data.decode())
ctx['index'] = pretty_json(res_data)
ctx["index"] = pretty_json(res_data)
# API index
res = client.get('/api/v1/', environ_overrides={'HTTP_HOST': host})
ctx['api_index'] = pretty_json(json.loads(res.data.decode()))
res = client.get("/api/v1/", environ_overrides={"HTTP_HOST": host})
ctx["api_index"] = pretty_json(json.loads(res.data.decode()))
# tx create
privkey = 'CfdqtD7sS7FgkMoGPXw55MVGGFwQLAoHYTcBhZDtF99Z'
pubkey = '4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD'
asset = {'msg': 'Hello Planetmint!'}
tx = Create.generate([pubkey], [([pubkey], 1)], asset=asset, metadata={'sequence': 0})
privkey = "CfdqtD7sS7FgkMoGPXw55MVGGFwQLAoHYTcBhZDtF99Z"
pubkey = "4K9sWUMFwTgaDGPfdynrbxWqWS6sWmKbZoTjxLtVUibD"
asset = {"msg": "Hello Planetmint!"}
tx = Create.generate([pubkey], [([pubkey], 1)], asset=asset, metadata={"sequence": 0})
tx = tx.sign([privkey])
ctx['tx'] = pretty_json(tx.to_dict())
ctx['public_keys'] = tx.outputs[0].public_keys[0]
ctx['txid'] = tx.id
ctx["tx"] = pretty_json(tx.to_dict())
ctx["public_keys"] = tx.outputs[0].public_keys[0]
ctx["txid"] = tx.id
# tx transfer
privkey_transfer = '3AeWpPdhEZzWLYfkfYHBfMFC2r1f8HEaGS9NtbbKssya'
pubkey_transfer = '3yfQPHeWAa1MxTX9Zf9176QqcpcnWcanVZZbaHb8B3h9'
privkey_transfer = "3AeWpPdhEZzWLYfkfYHBfMFC2r1f8HEaGS9NtbbKssya"
pubkey_transfer = "3yfQPHeWAa1MxTX9Zf9176QqcpcnWcanVZZbaHb8B3h9"
cid = 0
input_ = Input(fulfillment=tx.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx.id, output=cid),
owners_before=tx.outputs[cid].public_keys)
tx_transfer = Transfer.generate([input_], [([pubkey_transfer], 1)], asset_id=tx.id, metadata={'sequence': 1})
input_ = Input(
fulfillment=tx.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx.id, output=cid),
owners_before=tx.outputs[cid].public_keys,
)
tx_transfer = Transfer.generate([input_], [([pubkey_transfer], 1)], asset_id=tx.id, metadata={"sequence": 1})
tx_transfer = tx_transfer.sign([privkey])
ctx['tx_transfer'] = pretty_json(tx_transfer.to_dict())
ctx['public_keys_transfer'] = tx_transfer.outputs[0].public_keys[0]
ctx['tx_transfer_id'] = tx_transfer.id
ctx["tx_transfer"] = pretty_json(tx_transfer.to_dict())
ctx["public_keys_transfer"] = tx_transfer.outputs[0].public_keys[0]
ctx["tx_transfer_id"] = tx_transfer.id
# privkey_transfer_last = 'sG3jWDtdTXUidBJK53ucSTrosktG616U3tQHBk81eQe'
pubkey_transfer_last = '3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm'
pubkey_transfer_last = "3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm"
cid = 0
input_ = Input(fulfillment=tx_transfer.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx_transfer.id, output=cid),
owners_before=tx_transfer.outputs[cid].public_keys)
tx_transfer_last = Transfer.generate([input_], [([pubkey_transfer_last], 1)],
asset_id=tx.id, metadata={'sequence': 2})
input_ = Input(
fulfillment=tx_transfer.outputs[cid].fulfillment,
fulfills=TransactionLink(txid=tx_transfer.id, output=cid),
owners_before=tx_transfer.outputs[cid].public_keys,
)
tx_transfer_last = Transfer.generate(
[input_], [([pubkey_transfer_last], 1)], asset_id=tx.id, metadata={"sequence": 2}
)
tx_transfer_last = tx_transfer_last.sign([privkey_transfer])
ctx['tx_transfer_last'] = pretty_json(tx_transfer_last.to_dict())
ctx['tx_transfer_last_id'] = tx_transfer_last.id
ctx['public_keys_transfer_last'] = tx_transfer_last.outputs[0].public_keys[0]
ctx["tx_transfer_last"] = pretty_json(tx_transfer_last.to_dict())
ctx["tx_transfer_last_id"] = tx_transfer_last.id
ctx["public_keys_transfer_last"] = tx_transfer_last.outputs[0].public_keys[0]
# block
node_private = "5G2kE1zJAgTajkVSbPAQWo4c2izvtwqaNHYsaNpbbvxX"
node_public = "DngBurxfeNVKZWCEcDnLj1eMPAS7focUZTE5FndFGuHT"
signature = "53wxrEQDYk1dXzmvNSytbCfmNVnPqPkDQaTnAe8Jf43s6ssejPxezkCvUnGTnduNUmaLjhaan1iRLi3peu6s5DzA"
app_hash = 'f6e0c49c6d94d6924351f25bb334cf2a99af4206339bf784e741d1a5ab599056'
app_hash = "f6e0c49c6d94d6924351f25bb334cf2a99af4206339bf784e741d1a5ab599056"
block = lib.Block(height=1, transactions=[tx.to_dict()], app_hash=app_hash)
block_dict = block._asdict()
block_dict.pop('app_hash')
ctx['block'] = pretty_json(block_dict)
ctx['blockid'] = block.height
block_dict.pop("app_hash")
ctx["block"] = pretty_json(block_dict)
ctx["blockid"] = block.height
# block status
block_list = [
block.height
]
ctx['block_list'] = pretty_json(block_list)
block_list = [block.height]
ctx["block_list"] = pretty_json(block_list)
base_path = os.path.join(os.path.dirname(__file__),
'source/connecting/http-samples')
base_path = os.path.join(os.path.dirname(__file__), "source/connecting/http-samples")
if not os.path.exists(base_path):
os.makedirs(base_path)
for name, tpl in TPLS.items():
path = os.path.join(base_path, name + '.http')
path = os.path.join(base_path, name + ".http")
code = tpl % ctx
with open(path, 'w') as handle:
with open(path, "w") as handle:
handle.write(code)
def setup(*_):
""" Fool sphinx into think it's an extension muahaha """
"""Fool sphinx into think it's an extension muahaha"""
main()
if __name__ == '__main__':
if __name__ == "__main__":
main()

View File

@ -82,11 +82,11 @@ x = 'name: {}; score: {}'.format(name, n)
we use the `format()` version. The [official Python documentation says](https://docs.python.org/2/library/stdtypes.html#str.format), "This method of string formatting is the new standard in Python 3, and should be preferred to the % formatting described in String Formatting Operations in new code."
## Running the Flake8 Style Checker
## Running the Black Style Checker
We use [Flake8](http://flake8.pycqa.org/en/latest/index.html) to check our Python code style. Once you have it installed, you can run it using:
We use [Black](https://black.readthedocs.io/en/stable/) to check our Python code style. Once you have it installed, you can run it using:
```text
flake8 --max-line-length 119 planetmint/
black --check -l 119 .
```
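The same check can be wired into a script or CI step; a minimal sketch, assuming Black is installed and reusing the `-l 119` line length carried over from the old flake8 setting (the path `.` is illustrative):

```python
# Run Black in check-only mode and propagate its exit code.
import subprocess
import sys

result = subprocess.run(["black", "--check", "-l", "119", "."])
sys.exit(result.returncode)  # non-zero means some files would be reformatted
```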

View File

@ -32,5 +32,4 @@ class Hosts:
def assert_transaction(self, tx_id) -> None:
txs = self.get_transactions(tx_id)
for tx in txs:
assert txs[0] == tx, \
'Cannot find transaction {}'.format(tx_id)
assert txs[0] == tx, "Cannot find transaction {}".format(tx_id)
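The same all-nodes-agree idea as a self-contained sketch, written against a plain list of driver connections; `connections` and `tx_id` are hypothetical names, not part of the helper above:

```python
# Fetch the transaction from every node and require identical documents;
# any mismatch (or missing tx) means the nodes disagree.
def assert_transaction_on_all_nodes(connections, tx_id):
    txs = [conn.transactions.retrieve(tx_id) for conn in connections]
    for tx in txs:
        assert txs[0] == tx, "Cannot find transaction {}".format(tx_id)
```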

View File

@ -14,7 +14,7 @@ import time
def test_basic():
# Set up a connection to Planetmint integration test nodes
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
# generate a keypair
@ -22,62 +22,64 @@ def test_basic():
# create a digital asset for Alice
game_boy_token = {
'data': {
'hash': '0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
'storageID': '0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
"data": {
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
}
# prepare the transaction with the digital asset and issue 10 tokens to bob
prepared_creation_tx = pm_alpha.transactions.prepare(
operation='CREATE',
operation="CREATE",
metadata={
'hash': '0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
'storageID': '0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF', },
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
signers=alice.public_key,
recipients=[([alice.public_key], 10)],
asset=game_boy_token)
asset=game_boy_token,
)
# fulfill and send the transaction
fulfilled_creation_tx = pm_alpha.transactions.fulfill(
prepared_creation_tx,
private_keys=alice.private_key)
fulfilled_creation_tx = pm_alpha.transactions.fulfill(prepared_creation_tx, private_keys=alice.private_key)
pm_alpha.transactions.send_commit(fulfilled_creation_tx)
time.sleep(1)
creation_tx_id = fulfilled_creation_tx['id']
creation_tx_id = fulfilled_creation_tx["id"]
# Assert that transaction is stored on all planetmint nodes
hosts.assert_transaction(creation_tx_id)
# Transfer
# create the output and input for the transaction
transfer_asset = {'id': creation_tx_id}
transfer_asset = {"id": creation_tx_id}
output_index = 0
output = fulfilled_creation_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': transfer_asset['id']},
'owners_before': output['public_keys']}
output = fulfilled_creation_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": transfer_asset["id"]},
"owners_before": output["public_keys"],
}
# prepare the transaction and use 3 tokens
prepared_transfer_tx = pm_alpha.transactions.prepare(
operation='TRANSFER',
operation="TRANSFER",
asset=transfer_asset,
inputs=transfer_input,
metadata={'hash': '0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',
'storageID': '0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF', },
recipients=[([alice.public_key], 10)])
metadata={
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
recipients=[([alice.public_key], 10)],
)
# fulfill and send the transaction
fulfilled_transfer_tx = pm_alpha.transactions.fulfill(
prepared_transfer_tx,
private_keys=alice.private_key)
fulfilled_transfer_tx = pm_alpha.transactions.fulfill(prepared_transfer_tx, private_keys=alice.private_key)
sent_transfer_tx = pm_alpha.transactions.send_commit(fulfilled_transfer_tx)
time.sleep(1)
transfer_tx_id = sent_transfer_tx['id']
transfer_tx_id = sent_transfer_tx["id"]
# Assert that transaction is stored on all planetmint nodes
hosts.assert_transaction(transfer_tx_id)
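The input-building step in this test follows a reusable pattern; a minimal sketch, assuming only a fulfilled transaction dict as returned by the driver:

```python
# Build the input that spends one output of an existing transaction.
# The three keys mirror the structure used in the test above.
def build_transfer_input(fulfilled_tx: dict, output_index: int = 0) -> dict:
    output = fulfilled_tx["outputs"][output_index]
    return {
        "fulfillment": output["condition"]["details"],
        "fulfills": {"output_index": output_index, "transaction_id": fulfilled_tx["id"]},
        "owners_before": output["public_keys"],
    }
```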

View File

@ -33,7 +33,7 @@ def test_divisible_assets():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Oh look, it is Alice again and she brought her friend Bob along.
@ -48,13 +48,9 @@ def test_divisible_assets():
# the bike for one hour.
bike_token = {
'data': {
'token_for': {
'bike': {
'serial_number': 420420
}
},
'description': 'Time share token. Each token equals one hour of riding.',
"data": {
"token_for": {"bike": {"serial_number": 420420}},
"description": "Time share token. Each token equals one hour of riding.",
},
}
@ -62,28 +58,22 @@ def test_divisible_assets():
# Here, Alice defines in a tuple that she wants to assign
# these 10 tokens to Bob.
prepared_token_tx = pm.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
recipients=[([bob.public_key], 10)],
asset=bike_token)
operation="CREATE", signers=alice.public_key, recipients=[([bob.public_key], 10)], asset=bike_token
)
# She fulfills and sends the transaction.
fulfilled_token_tx = pm.transactions.fulfill(
prepared_token_tx,
private_keys=alice.private_key)
fulfilled_token_tx = pm.transactions.fulfill(prepared_token_tx, private_keys=alice.private_key)
pm.transactions.send_commit(fulfilled_token_tx)
# We store the `id` of the transaction to use it later on.
bike_token_id = fulfilled_token_tx['id']
bike_token_id = fulfilled_token_tx["id"]
# Let's check if the transaction was successful.
assert pm.transactions.retrieve(bike_token_id), \
'Cannot find transaction {}'.format(bike_token_id)
assert pm.transactions.retrieve(bike_token_id), "Cannot find transaction {}".format(bike_token_id)
# Bob owns 10 tokens now.
assert pm.transactions.retrieve(bike_token_id)['outputs'][0][
'amount'] == '10'
assert pm.transactions.retrieve(bike_token_id)["outputs"][0]["amount"] == "10"
# ## Bob wants to use the bike
# Now that Bob got the tokens and the sun is shining, he wants to get out
@ -91,51 +81,47 @@ def test_divisible_assets():
# To use the bike he has to send the tokens back to Alice.
# To learn about the details of transferring a transaction check out
# [test_basic.py](./test_basic.html)
transfer_asset = {'id': bike_token_id}
transfer_asset = {"id": bike_token_id}
output_index = 0
output = fulfilled_token_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_token_tx[
'id']},
'owners_before': output['public_keys']}
output = fulfilled_token_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_token_tx["id"]},
"owners_before": output["public_keys"],
}
# To use the tokens, Bob has to reassign 7 tokens to himself and send the
# amount he wants to use back to Alice.
prepared_transfer_tx = pm.transactions.prepare(
operation='TRANSFER',
operation="TRANSFER",
asset=transfer_asset,
inputs=transfer_input,
recipients=[([alice.public_key], 3), ([bob.public_key], 7)])
recipients=[([alice.public_key], 3), ([bob.public_key], 7)],
)
# He signs and sends the transaction.
fulfilled_transfer_tx = pm.transactions.fulfill(
prepared_transfer_tx,
private_keys=bob.private_key)
fulfilled_transfer_tx = pm.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
sent_transfer_tx = pm.transactions.send_commit(fulfilled_transfer_tx)
# First, Bob checks if the transaction was successful.
assert pm.transactions.retrieve(
fulfilled_transfer_tx['id']) == sent_transfer_tx
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
hosts.assert_transaction(fulfilled_transfer_tx['id'])
hosts.assert_transaction(fulfilled_transfer_tx["id"])
# There are two outputs in the transaction now.
# The first output shows that Alice got back 3 tokens...
assert pm.transactions.retrieve(
fulfilled_transfer_tx['id'])['outputs'][0]['amount'] == '3'
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["amount"] == "3"
# ... while Bob still has 7 left.
assert pm.transactions.retrieve(
fulfilled_transfer_tx['id'])['outputs'][1]['amount'] == '7'
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][1]["amount"] == "7"
# ## Bob wants to ride the bike again
# It's been a week and Bob wants to ride the bike again.
# Now he wants to ride for 8 hours, that's a lot Bob!
# He prepares the transaction again.
transfer_asset = {'id': bike_token_id}
transfer_asset = {"id": bike_token_id}
# This time we need an `output_index` of 1, since we have two outputs
# in the `fulfilled_transfer_tx` we created before. The first output with
# index 0 is for Alice and the second output is for Bob.
@ -143,24 +129,21 @@ def test_divisible_assets():
# correct output with the correct amount of tokens.
output_index = 1
output = fulfilled_transfer_tx['outputs'][output_index]
output = fulfilled_transfer_tx["outputs"][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_transfer_tx['id']},
'owners_before': output['public_keys']}
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_transfer_tx["id"]},
"owners_before": output["public_keys"],
}
# This time Bob only provides Alice in the `recipients` because he wants
# to spend all his tokens
prepared_transfer_tx = pm.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=[([alice.public_key], 8)])
operation="TRANSFER", asset=transfer_asset, inputs=transfer_input, recipients=[([alice.public_key], 8)]
)
fulfilled_transfer_tx = pm.transactions.fulfill(
prepared_transfer_tx,
private_keys=bob.private_key)
fulfilled_transfer_tx = pm.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
# Oh Bob, what have you done?! You tried to spend more tokens than you had.
# Remember Bob, last time you spent 3 tokens already,
@ -171,10 +154,12 @@ def test_divisible_assets():
# Now Bob gets an error saying that the amount he wanted to spend is
# higher than the amount of tokens he has left.
assert error.value.args[0] == 400
message = 'Invalid transaction (AmountError): The amount used in the ' \
'inputs `7` needs to be same as the amount used in the ' \
'outputs `8`'
assert error.value.args[2]['message'] == message
message = (
"Invalid transaction (AmountError): The amount used in the "
"inputs `7` needs to be same as the amount used in the "
"outputs `8`"
)
assert error.value.args[2]["message"] == message
# We have to stop this test now, I am sorry, but Bob is pretty upset
# about his mistake. See you next time :)
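The AmountError above encodes a simple invariant: in a TRANSFER, the amount consumed by the inputs must equal the amount created by the outputs. A worked sketch of just that rule:

```python
# Divisible-asset invariant: sum(inputs) == sum(outputs) in a TRANSFER.
def amounts_balance(input_amounts, output_amounts):
    return sum(input_amounts) == sum(output_amounts)

assert amounts_balance([10], [3, 7])   # Bob's first transfer is valid
assert not amounts_balance([7], [8])   # his second attempt: 7 in, 8 out
```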

View File

@ -16,33 +16,31 @@ from .helper.hosts import Hosts
def test_double_create():
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
alice = generate_keypair()
results = queue.Queue()
tx = pm.transactions.fulfill(
pm.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset={'data': {'uuid': str(uuid4())}}),
private_keys=alice.private_key)
pm.transactions.prepare(operation="CREATE", signers=alice.public_key, asset={"data": {"uuid": str(uuid4())}}),
private_keys=alice.private_key,
)
def send_and_queue(tx):
try:
pm.transactions.send_commit(tx)
results.put('OK')
results.put("OK")
except planetmint_driver.exceptions.TransportError:
results.put('FAIL')
results.put("FAIL")
t1 = Thread(target=send_and_queue, args=(tx, ))
t2 = Thread(target=send_and_queue, args=(tx, ))
t1 = Thread(target=send_and_queue, args=(tx,))
t2 = Thread(target=send_and_queue, args=(tx,))
t1.start()
t2.start()
results = [results.get(timeout=2), results.get(timeout=2)]
assert results.count('OK') == 1
assert results.count('FAIL') == 1
assert results.count("OK") == 1
assert results.count("FAIL") == 1

View File

@ -28,7 +28,7 @@ from .helper.hosts import Hosts
def test_multiple_owners():
# Set up a connection to Planetmint integration test nodes
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
# Generate Keypairs for Alice and Bob!
@ -39,32 +39,22 @@ def test_multiple_owners():
# high rents anymore. Bob suggests to get a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = {
'data': {
'dish washer': {
'serial_number': 1337
}
}
}
dw_asset = {"data": {"dish washer": {"serial_number": 1337}}}
# They prepare a `CREATE` transaction. To have multiple owners, both
# Bob and Alice need to be the recipients.
prepared_dw_tx = pm_alpha.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
recipients=(alice.public_key, bob.public_key),
asset=dw_asset)
operation="CREATE", signers=alice.public_key, recipients=(alice.public_key, bob.public_key), asset=dw_asset
)
# Now they both sign the transaction by providing their private keys.
# And send it afterwards.
fulfilled_dw_tx = pm_alpha.transactions.fulfill(
prepared_dw_tx,
private_keys=[alice.private_key, bob.private_key])
fulfilled_dw_tx = pm_alpha.transactions.fulfill(prepared_dw_tx, private_keys=[alice.private_key, bob.private_key])
pm_alpha.transactions.send_commit(fulfilled_dw_tx)
# We store the `id` of the transaction to use it later on.
dw_id = fulfilled_dw_tx['id']
dw_id = fulfilled_dw_tx["id"]
time.sleep(1)
@ -72,12 +62,10 @@ def test_multiple_owners():
hosts.assert_transaction(dw_id)
# Let's check if the transaction was successful.
assert pm_alpha.transactions.retrieve(dw_id), \
'Cannot find transaction {}'.format(dw_id)
assert pm_alpha.transactions.retrieve(dw_id), "Cannot find transaction {}".format(dw_id)
# The transaction should have two public keys in the outputs.
assert len(
pm_alpha.transactions.retrieve(dw_id)['outputs'][0]['public_keys']) == 2
assert len(pm_alpha.transactions.retrieve(dw_id)["outputs"][0]["public_keys"]) == 2
# ## Alice and Bob transfer a transaction to Carol.
# Alice and Bob save a lot of money living together. They often go out
@ -89,43 +77,39 @@ def test_multiple_owners():
# Alice and Bob prepare the transaction to transfer the dish washer to
# Carol.
transfer_asset = {'id': dw_id}
transfer_asset = {"id": dw_id}
output_index = 0
output = fulfilled_dw_tx['outputs'][output_index]
transfer_input = {'fulfillment': output['condition']['details'],
'fulfills': {'output_index': output_index,
'transaction_id': fulfilled_dw_tx[
'id']},
'owners_before': output['public_keys']}
output = fulfilled_dw_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_dw_tx["id"]},
"owners_before": output["public_keys"],
}
# Now they create the transaction...
prepared_transfer_tx = pm_alpha.transactions.prepare(
operation='TRANSFER',
asset=transfer_asset,
inputs=transfer_input,
recipients=carol.public_key)
operation="TRANSFER", asset=transfer_asset, inputs=transfer_input, recipients=carol.public_key
)
# ... and sign it with their private keys, then send it.
fulfilled_transfer_tx = pm_alpha.transactions.fulfill(
prepared_transfer_tx,
private_keys=[alice.private_key, bob.private_key])
prepared_transfer_tx, private_keys=[alice.private_key, bob.private_key]
)
sent_transfer_tx = pm_alpha.transactions.send_commit(fulfilled_transfer_tx)
time.sleep(1)
# Now compare if both nodes returned the same transaction
hosts.assert_transaction(fulfilled_transfer_tx['id'])
hosts.assert_transaction(fulfilled_transfer_tx["id"])
# They check if the transaction was successful.
assert pm_alpha.transactions.retrieve(
fulfilled_transfer_tx['id']) == sent_transfer_tx
assert pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# The owners before should include both Alice and Bob.
assert len(
pm_alpha.transactions.retrieve(fulfilled_transfer_tx['id'])['inputs'][0][
'owners_before']) == 2
assert len(pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"])["inputs"][0]["owners_before"]) == 2
# While the new owner is Carol.
assert pm_alpha.transactions.retrieve(fulfilled_transfer_tx['id'])[
'outputs'][0]['public_keys'][0] == carol.public_key
assert (
pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["public_keys"][0] == carol.public_key
)

View File

@ -27,6 +27,40 @@ from planetmint_driver.exceptions import BadRequest
from .helper.hosts import Hosts
naughty_strings = blns.all()
skipped_naughty_strings = [
"1.00",
"$1.00",
"-1.00",
"-$1.00",
"0.00",
"0..0",
".",
"0.0.0",
"-.",
",./;'[]\\-=",
"ثم نفس سقطت وبالتحديد،, جزيرتي باستخدام أن دنو. إذ هنا؟ الستار وتنصيب كان. أهّل ايطاليا، بريطانيا-فرنسا قد أخذ. سليمان، إتفاقية بين ما, يذكر الحدود أي بعد, معاملة بولندا، الإطلاق عل إيو.",
"test\x00",
"Ṱ̺̺̕o͞ ̷i̲̬͇̪͙n̝̗͕v̟̜̘̦͟o̶̙̰̠kè͚̮̺̪̹̱̤ ̖t̝͕̳̣̻̪͞h̼͓̲̦̳̘̲e͇̣̰̦̬͎ ̢̼̻̱̘h͚͎͙̜̣̲ͅi̦̲̣̰̤v̻͍e̺̭̳̪̰-m̢iͅn̖̺̞̲̯̰d̵̼̟͙̩̼̘̳ ̞̥̱̳̭r̛̗̘e͙p͠r̼̞̻̭̗e̺̠̣͟s̘͇̳͍̝͉e͉̥̯̞̲͚̬͜ǹ̬͎͎̟̖͇̤t͍̬̤͓̼̭͘ͅi̪̱n͠g̴͉ ͏͉ͅc̬̟h͡a̫̻̯͘o̫̟̖͍̙̝͉s̗̦̲.̨̹͈̣",
"̡͓̞ͅI̗̘̦͝n͇͇͙v̮̫ok̲̫̙͈i̖͙̭̹̠̞n̡̻̮̣̺g̲͈͙̭͙̬͎ ̰t͔̦h̞̲e̢̤ ͍̬̲͖f̴̘͕̣è͖ẹ̥̩l͖͔͚i͓͚̦͠n͖͍̗͓̳̮g͍ ̨o͚̪͡f̘̣̬ ̖̘͖̟͙̮c҉͔̫͖͓͇͖ͅh̵̤̣͚͔á̗̼͕ͅo̼̣̥s̱͈̺̖̦̻͢.̛̖̞̠̫̰",
"̗̺͖̹̯͓Ṯ̤͍̥͇͈h̲́e͏͓̼̗̙̼̣͔ ͇̜̱̠͓͍ͅN͕͠e̗̱z̘̝̜̺͙p̤̺̹͍̯͚e̠̻̠͜r̨̤͍̺̖͔̖̖d̠̟̭̬̝͟i̦͖̩͓͔̤a̠̗̬͉̙n͚͜ ̻̞̰͚ͅh̵͉i̳̞v̢͇ḙ͎͟-҉̭̩̼͔m̤̭̫i͕͇̝̦n̗͙ḍ̟ ̯̲͕͞ǫ̟̯̰̲͙̻̝f ̪̰̰̗̖̭̘͘c̦͍̲̞͍̩̙ḥ͚a̮͎̟̙͜ơ̩̹͎s̤.̝̝ ҉Z̡̖̜͖̰̣͉̜a͖̰͙̬͡l̲̫̳͍̩g̡̟̼̱͚̞̬ͅo̗͜.̟",
"̦H̬̤̗̤͝e͜ ̜̥̝̻͍̟́w̕h̖̯͓o̝͙̖͎̱̮ ҉̺̙̞̟͈W̷̼̭a̺̪͍į͈͕̭͙̯̜t̶̼̮s̘͙͖̕ ̠̫̠B̻͍͙͉̳ͅe̵h̵̬͇̫͙i̹͓̳̳̮͎̫̕n͟d̴̪̜̖ ̰͉̩͇͙̲͞ͅT͖̼͓̪͢h͏͓̮̻e̬̝̟ͅ ̤̹̝W͙̞̝͔͇͝ͅa͏͓͔̹̼̣l̴͔̰̤̟͔ḽ̫.͕",
'"><script>alert(document.title)</script>',
"'><script>alert(document.title)</script>",
"><script>alert(document.title)</script>",
"</script><script>alert(document.title)</script>",
"< / script >< script >alert(document.title)< / script >",
" onfocus=alert(document.title) autofocus ",
'" onfocus=alert(document.title) autofocus ',
"' onfocus=alert(document.title) autofocus ",
"scriptalert(document.title)/script",
"/dev/null; touch /tmp/blns.fail ; echo",
"../../../../../../../../../../../etc/passwd%00",
"../../../../../../../../../../../etc/hosts",
"() { 0; }; touch /tmp/blns.shellshock1.fail;",
"() { _; } >_[$($())] { touch /tmp/blns.shellshock2.fail; }",
]
naughty_strings = [naughty for naughty in naughty_strings if naughty not in skipped_naughty_strings]
# This is our base test case, but we'll reuse it to send naughty strings as both keys and values.
@ -34,7 +68,7 @@ def send_naughty_tx(asset, metadata):
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Here's Alice.
@ -42,15 +76,11 @@ def send_naughty_tx(asset, metadata):
# Alice is in a naughty mood today, so she creates a tx with some naughty strings
prepared_transaction = pm.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset=asset,
metadata=metadata)
operation="CREATE", signers=alice.public_key, asset=asset, metadata=metadata
)
# She fulfills the transaction
fulfilled_transaction = pm.transactions.fulfill(
prepared_transaction,
private_keys=alice.private_key)
fulfilled_transaction = pm.transactions.fulfill(prepared_transaction, private_keys=alice.private_key)
# The fulfilled tx gets sent to the pm network
try:
@ -59,23 +89,24 @@ def send_naughty_tx(asset, metadata):
sent_transaction = e
# If her key contained a '.', began with a '$', or contained a NUL character
regex = r'.*\..*|\$.*|.*\x00.*'
regex = r".*\..*|\$.*|.*\x00.*"
key = next(iter(metadata))
if re.match(regex, key):
# Then she expects a nicely formatted error code
status_code = sent_transaction.status_code
error = sent_transaction.error
regex = (
r'\{\s*\n*'
r"\{\s*\n*"
r'\s*"message":\s*"Invalid transaction \(ValidationError\):\s*'
r'Invalid key name.*The key name cannot contain characters.*\n*'
r"Invalid key name.*The key name cannot contain characters.*\n*"
r'\s*"status":\s*400\n*'
r'\s*\}\n*')
r"\s*\}\n*"
)
assert status_code == 400
assert re.fullmatch(regex, error), sent_transaction
# Otherwise, she expects to see her transaction in the database
elif 'id' in sent_transaction.keys():
tx_id = sent_transaction['id']
elif "id" in sent_transaction.keys():
tx_id = sent_transaction["id"]
assert pm.transactions.retrieve(tx_id)
# If neither condition was true, then something weird happened...
else:
@ -85,8 +116,8 @@ def send_naughty_tx(asset, metadata):
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_keys(naughty_string):
asset = {'data': {naughty_string: 'nice_value'}}
metadata = {naughty_string: 'nice_value'}
asset = {"data": {naughty_string: "nice_value"}}
metadata = {naughty_string: "nice_value"}
send_naughty_tx(asset, metadata)
@ -94,7 +125,7 @@ def test_naughty_keys(naughty_string):
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_values(naughty_string):
asset = {'data': {'nice_key': naughty_string}}
metadata = {'nice_key': naughty_string}
asset = {"data": {"nice_key": naughty_string}}
metadata = {"nice_key": naughty_string}
send_naughty_tx(asset, metadata)
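The invalid-key rule these tests rely on can be checked locally with the same regex used above; presumably it exists because `.`, a leading `$`, and NUL are reserved or forbidden in MongoDB document keys:

```python
# Keys containing '.', beginning with '$', or containing NUL are rejected.
import re

INVALID_KEY = re.compile(r".*\..*|\$.*|.*\x00.*")

assert INVALID_KEY.match("$1.00")
assert INVALID_KEY.match("0.0.0")
assert INVALID_KEY.match("test\x00")
assert INVALID_KEY.match("nice_key") is None
```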

View File

@ -35,11 +35,11 @@ def test_stream():
# ## Set up the test
# We use the env variable `BICHAINDB_ENDPOINT` to know where to connect.
# Check [test_basic.py](./test_basic.html) for more information.
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# *That's pretty bad, but let's do it like this for now.*
WS_ENDPOINT = 'ws://{}:9985/api/v1/streams/valid_transactions'.format(hosts.hostnames[0])
WS_ENDPOINT = "ws://{}:9985/api/v1/streams/valid_transactions".format(hosts.hostnames[0])
# Hello to Alice again, she is pretty active in those tests, good job
# Alice!
@ -89,11 +89,11 @@ def test_stream():
# random `uuid`.
for _ in range(10):
tx = pm.transactions.fulfill(
pm.transactions.prepare(
operation='CREATE',
signers=alice.public_key,
asset={'data': {'uuid': str(uuid4())}}),
private_keys=alice.private_key)
pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, asset={"data": {"uuid": str(uuid4())}}
),
private_keys=alice.private_key,
)
# We don't want to wait for each transaction to be in a block. By using
# `async` mode, we make sure that the driver returns as soon as the
# transaction is pushed to the Planetmint API. Remember: we expect all
@ -103,7 +103,7 @@ def test_stream():
pm.transactions.send_async(tx)
# The `id` of every sent transaction is then stored in a list.
sent.append(tx['id'])
sent.append(tx["id"])
# ## Check the valid transactions coming from Planetmint
# Now we are ready to check if Planetmint did its job. A simple way to
@ -117,9 +117,9 @@ def test_stream():
# the timeout, then game over ¯\\\_(ツ)\_/¯
try:
event = received.get(timeout=5)
txid = json.loads(event)['transaction_id']
txid = json.loads(event)["transaction_id"]
except queue.Empty:
assert False, 'Did not receive all expected transactions'
assert False, "Did not receive all expected transactions"
# Last thing is to try to remove the `txid` from the set of sent
# transactions. If this test is running in parallel with others, we
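For reference, a minimal sketch of consuming that event stream outside the test harness, using the `websocket-client` package (an assumed dependency; `localhost` stands in for a real node):

```python
# Connect to the valid_transactions stream and read a single event.
import json
import websocket  # pip install websocket-client

ws = websocket.create_connection("ws://localhost:9985/api/v1/streams/valid_transactions")
event = json.loads(ws.recv())
print(event["transaction_id"])
ws.close()
```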

View File

@ -18,27 +18,22 @@ from .helper.hosts import Hosts
def prepare_condition_details(condition: ThresholdSha256):
condition_details = {
'subconditions': [],
'threshold': condition.threshold,
'type': condition.TYPE_NAME
}
condition_details = {"subconditions": [], "threshold": condition.threshold, "type": condition.TYPE_NAME}
for s in condition.subconditions:
if (s['type'] == 'fulfillment' and s['body'].TYPE_NAME == 'ed25519-sha-256'):
condition_details['subconditions'].append({
'type': s['body'].TYPE_NAME,
'public_key': base58.b58encode(s['body'].public_key).decode()
})
if s["type"] == "fulfillment" and s["body"].TYPE_NAME == "ed25519-sha-256":
condition_details["subconditions"].append(
{"type": s["body"].TYPE_NAME, "public_key": base58.b58encode(s["body"].public_key).decode()}
)
else:
condition_details['subconditions'].append(prepare_condition_details(s['body']))
condition_details["subconditions"].append(prepare_condition_details(s["body"]))
return condition_details
def test_threshold():
# Set up a connection to the test nodes
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Generate Keypairs for Alice, Bob and Carol!
@ -49,13 +44,7 @@ def test_threshold():
# high rents anymore. Bob suggests to get a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = {
'data': {
'dish washer': {
'serial_number': 1337
}
}
}
dw_asset = {"data": {"dish washer": {"serial_number": 1337}}}
# Create subfulfillments
alice_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
@ -74,37 +63,37 @@ def test_threshold():
# Assemble output and input for the handcrafted tx
output = {
'amount': '1',
'condition': {
'details': condition_details,
'uri': condition_uri,
"amount": "1",
"condition": {
"details": condition_details,
"uri": condition_uri,
},
'public_keys': (alice.public_key, bob.public_key, carol.public_key),
"public_keys": (alice.public_key, bob.public_key, carol.public_key),
}
# The yet to be fulfilled input:
input_ = {
'fulfillment': None,
'fulfills': None,
'owners_before': (alice.public_key, bob.public_key),
"fulfillment": None,
"fulfills": None,
"owners_before": (alice.public_key, bob.public_key),
}
# Assemble the handcrafted transaction
handcrafted_dw_tx = {
'operation': 'CREATE',
'asset': dw_asset,
'metadata': None,
'outputs': (output,),
'inputs': (input_,),
'version': '2.0',
'id': None,
"operation": "CREATE",
"asset": dw_asset,
"metadata": None,
"outputs": (output,),
"inputs": (input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_dw_tx,
sort_keys=True,
separators=(',', ':'),
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
@ -121,19 +110,19 @@ def test_threshold():
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_dw_tx['inputs'][0]['fulfillment'] = fulfillment_uri
handcrafted_dw_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_dw_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_dw_tx,
sort_keys=True,
separators=(',', ':'),
separators=(",", ":"),
ensure_ascii=False,
)
dw_creation_txid = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_dw_tx['id'] = dw_creation_txid
handcrafted_dw_tx["id"] = dw_creation_txid
pm.transactions.send_commit(handcrafted_dw_tx)
@ -144,18 +133,12 @@ def test_threshold():
def test_weighted_threshold():
hosts = Hosts('/shared/hostnames')
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
alice, bob, carol = generate_keypair(), generate_keypair(), generate_keypair()
asset = {
'data': {
'trashcan': {
'animals': ['racoon_1', 'racoon_2']
}
}
}
asset = {"data": {"trashcan": {"animals": ["racoon_1", "racoon_2"]}}}
alice_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
bob_ed25519 = Ed25519Sha256(public_key=base58.b58decode(bob.public_key))
@ -175,37 +158,37 @@ def test_weighted_threshold():
# Assemble output and input for the handcrafted tx
output = {
'amount': '1',
'condition': {
'details': condition_details,
'uri': condition_uri,
"amount": "1",
"condition": {
"details": condition_details,
"uri": condition_uri,
},
'public_keys': (alice.public_key, bob.public_key, carol.public_key),
"public_keys": (alice.public_key, bob.public_key, carol.public_key),
}
# The yet to be fulfilled input:
input_ = {
'fulfillment': None,
'fulfills': None,
'owners_before': (alice.public_key, bob.public_key),
"fulfillment": None,
"fulfills": None,
"owners_before": (alice.public_key, bob.public_key),
}
# Assemble the handcrafted transaction
handcrafted_tx = {
'operation': 'CREATE',
'asset': asset,
'metadata': None,
'outputs': (output,),
'inputs': (input_,),
'version': '2.0',
'id': None,
"operation": "CREATE",
"asset": asset,
"metadata": None,
"outputs": (output,),
"inputs": (input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_tx,
sort_keys=True,
separators=(',', ':'),
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
@ -224,19 +207,19 @@ def test_weighted_threshold():
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_tx['inputs'][0]['fulfillment'] = fulfillment_uri
handcrafted_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_dw_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_tx,
sort_keys=True,
separators=(',', ':'),
separators=(",", ":"),
ensure_ascii=False,
)
creation_tx_id = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_tx['id'] = creation_tx_id
handcrafted_tx["id"] = creation_tx_id
pm.transactions.send_commit(handcrafted_tx)
@ -254,50 +237,50 @@ def test_weighted_threshold():
# Assemble output and input for the handcrafted tx
transfer_output = {
'amount': '1',
'condition': {
'details': {
'type': alice_transfer_ed25519.TYPE_NAME,
'public_key': base58.b58encode(alice_transfer_ed25519.public_key).decode()
"amount": "1",
"condition": {
"details": {
"type": alice_transfer_ed25519.TYPE_NAME,
"public_key": base58.b58encode(alice_transfer_ed25519.public_key).decode(),
},
'uri': transfer_condition_uri,
"uri": transfer_condition_uri,
},
'public_keys': (alice.public_key,),
"public_keys": (alice.public_key,),
}
# The yet to be fulfilled input:
transfer_input_ = {
'fulfillment': None,
'fulfills': {
'transaction_id': creation_tx_id,
'output_index': 0
},
'owners_before': (alice.public_key, bob.public_key, carol.public_key),
"fulfillment": None,
"fulfills": {"transaction_id": creation_tx_id, "output_index": 0},
"owners_before": (alice.public_key, bob.public_key, carol.public_key),
}
# Assemble the handcrafted transaction
handcrafted_transfer_tx = {
'operation': 'TRANSFER',
'asset': {'id': creation_tx_id},
'metadata': None,
'outputs': (transfer_output,),
'inputs': (transfer_input_,),
'version': '2.0',
'id': None,
"operation": "TRANSFER",
"asset": {"id": creation_tx_id},
"metadata": None,
"outputs": (transfer_output,),
"inputs": (transfer_input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_transfer_tx,
sort_keys=True,
separators=(',', ':'),
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
message.update('{}{}'.format(
handcrafted_transfer_tx['inputs'][0]['fulfills']['transaction_id'],
handcrafted_transfer_tx['inputs'][0]['fulfills']['output_index']).encode())
message.update(
"{}{}".format(
handcrafted_transfer_tx["inputs"][0]["fulfills"]["transaction_id"],
handcrafted_transfer_tx["inputs"][0]["fulfills"]["output_index"],
).encode()
)
# Sign message with Alice's und Bob's private key
bob_transfer_ed25519.sign(message.digest(), base58.b58decode(bob.private_key))
@ -314,19 +297,19 @@ def test_weighted_threshold():
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_transfer_tx['inputs'][0]['fulfillment'] = fulfillment_uri
handcrafted_transfer_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_dw_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_transfer_tx,
sort_keys=True,
separators=(',', ':'),
separators=(",", ":"),
ensure_ascii=False,
)
transfer_tx_id = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_transfer_tx['id'] = transfer_tx_id
handcrafted_transfer_tx["id"] = transfer_tx_id
pm.transactions.send_commit(handcrafted_transfer_tx)
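Both handcrafted flows compute the transaction id the same way; a condensed sketch of that step, with the same `json.dumps` arguments and `pysha3` usage as above:

```python
# The tx id is the hex-encoded sha3-256 of the canonically serialized
# transaction while its "id" field is still None.
import json
import sha3  # pysha3

def compute_tx_id(tx: dict) -> str:
    tx = dict(tx, id=None)
    json_str = json.dumps(tx, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
    return sha3.sha3_256(json_str.encode()).hexdigest()
```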

View File

@ -38,9 +38,7 @@ def test_zenroom_signing(
)
)
zenroomscpt = ZenroomSha256(
script=fulfill_script_zencode, data=zenroom_data, keys=zen_public_keys
)
zenroomscpt = ZenroomSha256(script=fulfill_script_zencode, data=zenroom_data, keys=zen_public_keys)
print(f"zenroom is: {zenroomscpt.script}")
# CRYPTO-CONDITIONS: generate the condition uri

View File

@ -15,19 +15,19 @@ def edit_genesis() -> None:
for file_name in file_names:
file = open(file_name)
genesis = json.load(file)
validators.extend(genesis['validators'])
validators.extend(genesis["validators"])
file.close()
genesis_file = open(file_names[0])
genesis_json = json.load(genesis_file)
genesis_json['validators'] = validators
genesis_json["validators"] = validators
genesis_file.close()
with open('/shared/genesis.json', 'w') as f:
with open("/shared/genesis.json", "w") as f:
json.dump(genesis_json, f, indent=True)
return None
if __name__ == '__main__':
if __name__ == "__main__":
edit_genesis()

View File

@ -31,25 +31,27 @@ import re
from dateutil.parser import parse
lineformat = re.compile(r'(?P<ipaddress>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) - - '
r'\[(?P<dateandtime>\d{2}\/[a-z]{3}\/\d{4}:\d{2}:\d{2}:\d{2} '
r'(\+|\-)\d{4})\] ((\"(GET|POST) )(?P<url>.+)(http\/1\.1")) '
r'(?P<statuscode>\d{3}) '
r'(?P<bytessent>\d+) '
r'(["](?P<refferer>(\-)|(.+))["]) '
r'(["](?P<useragent>.+)["])',
re.IGNORECASE)
lineformat = re.compile(
r"(?P<ipaddress>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}) - - "
r"\[(?P<dateandtime>\d{2}\/[a-z]{3}\/\d{4}:\d{2}:\d{2}:\d{2} "
r'(\+|\-)\d{4})\] ((\"(GET|POST) )(?P<url>.+)(http\/1\.1")) '
r"(?P<statuscode>\d{3}) "
r"(?P<bytessent>\d+) "
r'(["](?P<refferer>(\-)|(.+))["]) '
r'(["](?P<useragent>.+)["])',
re.IGNORECASE,
)
filepath = sys.argv[1]
logline_list = []
with open(filepath) as csvfile:
csvreader = csv.reader(csvfile, delimiter=',')
csvreader = csv.reader(csvfile, delimiter=",")
for row in csvreader:
if row and (row[8] != 'LogEntry'):
if row and (row[8] != "LogEntry"):
# because the first line is just the column headers, such as 'LogEntry'
logline = row[8]
print(logline + '\n')
print(logline + "\n")
logline_data = re.search(lineformat, logline)
if logline_data:
logline_dict = logline_data.groupdict()
@ -63,20 +65,19 @@ total_bytes_sent = 0
tstamp_list = []
for lldict in logline_list:
total_bytes_sent += int(lldict['bytessent'])
dt = lldict['dateandtime']
total_bytes_sent += int(lldict["bytessent"])
dt = lldict["dateandtime"]
# https://tinyurl.com/lqjnhot
dtime = parse(dt[:11] + " " + dt[12:])
tstamp_list.append(dtime.timestamp())
print('Number of log lines seen: {}'.format(len(logline_list)))
print("Number of log lines seen: {}".format(len(logline_list)))
# Time range
trange_sec = max(tstamp_list) - min(tstamp_list)
trange_days = trange_sec / 60.0 / 60.0 / 24.0
print('Time range seen (days): {}'.format(trange_days))
print("Time range seen (days): {}".format(trange_days))
print('Total bytes sent: {}'.format(total_bytes_sent))
print("Total bytes sent: {}".format(total_bytes_sent))
print('Average bytes sent per day (out via GET): {}'.
format(total_bytes_sent / trange_days))
print("Average bytes sent per day (out via GET): {}".format(total_bytes_sent / trange_days))

View File

@ -6,7 +6,7 @@
from planetmint.transactions.common.transaction import Transaction # noqa
from planetmint import models # noqa
from planetmint.upsert_validator import ValidatorElection # noqa
from planetmint.transactions.types.elections.vote import Vote # noqa
from planetmint.transactions.types.elections.vote import Vote # noqa
from planetmint.transactions.types.elections.chain_migration_election import ChainMigrationElection
from planetmint.lib import Planetmint
from planetmint.core import App

View File

@ -14,15 +14,16 @@ from planetmint.backend.exceptions import ConnectionError
from planetmint.transactions.common.exceptions import ConfigurationError
BACKENDS = {
'tarantool_db': 'planetmint.backend.tarantool.connection.TarantoolDBConnection',
'localmongodb': 'planetmint.backend.localmongodb.connection.LocalMongoDBConnection'
"tarantool_db": "planetmint.backend.tarantool.connection.TarantoolDBConnection",
"localmongodb": "planetmint.backend.localmongodb.connection.LocalMongoDBConnection",
}
logger = logging.getLogger(__name__)
def connect(host: str = None, port: int = None, login: str = None, password: str = None, backend: str = None,
**kwargs):
def connect(
host: str = None, port: int = None, login: str = None, password: str = None, backend: str = None, **kwargs
):
try:
backend = backend
if not backend and kwargs and kwargs.get("backend"):
@ -37,40 +38,57 @@ def connect(host: str = None, port: int = None, login: str = None, password: str
raise ConfigurationError
host = host or Config().get()["database"]["host"] if not kwargs.get("host") else kwargs["host"]
port = port or Config().get()['database']['port'] if not kwargs.get("port") else kwargs["port"]
port = port or Config().get()["database"]["port"] if not kwargs.get("port") else kwargs["port"]
login = login or Config().get()["database"]["login"] if not kwargs.get("login") else kwargs["login"]
password = password or Config().get()["database"]["password"]
try:
if backend == "tarantool_db":
modulepath, _, class_name = BACKENDS[backend].rpartition('.')
modulepath, _, class_name = BACKENDS[backend].rpartition(".")
Class = getattr(import_module(modulepath), class_name)
return Class(host=host, port=port, user=login, password=password, kwargs=kwargs)
elif backend == "localmongodb":
modulepath, _, class_name = BACKENDS[backend].rpartition('.')
modulepath, _, class_name = BACKENDS[backend].rpartition(".")
Class = getattr(import_module(modulepath), class_name)
dbname = _kwargs_parser(key="name", kwargs=kwargs) or Config().get()['database']['name']
replicaset = _kwargs_parser(key="replicaset", kwargs=kwargs) or Config().get()['database']['replicaset']
ssl = _kwargs_parser(key="ssl", kwargs=kwargs) or Config().get()['database']['ssl']
login = login or Config().get()['database']['login'] if _kwargs_parser(key="login",
kwargs=kwargs) is None else _kwargs_parser( # noqa: E501
key="login", kwargs=kwargs)
password = password or Config().get()['database']['password'] if _kwargs_parser(key="password",
kwargs=kwargs) is None else _kwargs_parser( # noqa: E501
key="password", kwargs=kwargs)
ca_cert = _kwargs_parser(key="ca_cert", kwargs=kwargs) or Config().get()['database']['ca_cert']
certfile = _kwargs_parser(key="certfile", kwargs=kwargs) or Config().get()['database']['certfile']
keyfile = _kwargs_parser(key="keyfile", kwargs=kwargs) or Config().get()['database']['keyfile']
keyfile_passphrase = _kwargs_parser(key="keyfile_passphrase", kwargs=kwargs) or Config().get()['database'][
'keyfile_passphrase']
crlfile = _kwargs_parser(key="crlfile", kwargs=kwargs) or Config().get()['database']['crlfile']
dbname = _kwargs_parser(key="name", kwargs=kwargs) or Config().get()["database"]["name"]
replicaset = _kwargs_parser(key="replicaset", kwargs=kwargs) or Config().get()["database"]["replicaset"]
ssl = _kwargs_parser(key="ssl", kwargs=kwargs) or Config().get()["database"]["ssl"]
login = (
login or Config().get()["database"]["login"]
if _kwargs_parser(key="login", kwargs=kwargs) is None
else _kwargs_parser(key="login", kwargs=kwargs) # noqa: E501
)
password = (
password or Config().get()["database"]["password"]
if _kwargs_parser(key="password", kwargs=kwargs) is None
else _kwargs_parser(key="password", kwargs=kwargs) # noqa: E501
)
ca_cert = _kwargs_parser(key="ca_cert", kwargs=kwargs) or Config().get()["database"]["ca_cert"]
certfile = _kwargs_parser(key="certfile", kwargs=kwargs) or Config().get()["database"]["certfile"]
keyfile = _kwargs_parser(key="keyfile", kwargs=kwargs) or Config().get()["database"]["keyfile"]
keyfile_passphrase = (
_kwargs_parser(key="keyfile_passphrase", kwargs=kwargs)
or Config().get()["database"]["keyfile_passphrase"]
)
crlfile = _kwargs_parser(key="crlfile", kwargs=kwargs) or Config().get()["database"]["crlfile"]
max_tries = _kwargs_parser(key="max_tries", kwargs=kwargs)
connection_timeout = _kwargs_parser(key="connection_timeout", kwargs=kwargs)
return Class(host=host, port=port, dbname=dbname,
max_tries=max_tries, connection_timeout=connection_timeout,
replicaset=replicaset, ssl=ssl, login=login, password=password,
ca_cert=ca_cert, certfile=certfile, keyfile=keyfile,
keyfile_passphrase=keyfile_passphrase, crlfile=crlfile)
return Class(
host=host,
port=port,
dbname=dbname,
max_tries=max_tries,
connection_timeout=connection_timeout,
replicaset=replicaset,
ssl=ssl,
login=login,
password=password,
ca_cert=ca_cert,
certfile=certfile,
keyfile=keyfile,
keyfile_passphrase=keyfile_passphrase,
crlfile=crlfile,
)
except tarantool.error.NetworkError as network_err:
print(f"Host {host}:{port} can't be reached.\n{network_err}")
raise network_err
@ -81,15 +99,14 @@ def _kwargs_parser(key, kwargs):
return kwargs[key]
return None
class Connection:
"""Connection class interface.
All backend implementations should provide a connection class that inherits
from and implements this class.
"""
def __init__(self, host=None, port=None, dbname=None,
connection_timeout=None, max_tries=None,
**kwargs):
def __init__(self, host=None, port=None, dbname=None, connection_timeout=None, max_tries=None, **kwargs):
"""Create a new :class:`~.Connection` instance.
Args:
host (str): the host to connect to.
@ -104,14 +121,15 @@ class Connection:
configuration's ``database`` settings
"""
dbconf = Config().get()['database']
dbconf = Config().get()["database"]
self.host = host or dbconf['host']
self.port = port or dbconf['port']
self.dbname = dbname or dbconf['name']
self.connection_timeout = connection_timeout if connection_timeout is not None \
else dbconf['connection_timeout']
self.max_tries = max_tries if max_tries is not None else dbconf['max_tries']
self.host = host or dbconf["host"]
self.port = port or dbconf["port"]
self.dbname = dbname or dbconf["name"]
self.connection_timeout = (
connection_timeout if connection_timeout is not None else dbconf["connection_timeout"]
)
self.max_tries = max_tries if max_tries is not None else dbconf["max_tries"]
self.max_tries_counter = range(self.max_tries) if self.max_tries != 0 else repeat(0)
self._conn = None
@ -149,11 +167,16 @@ class Connection:
try:
self._conn = self._connect()
except ConnectionError as exc:
logger.warning('Attempt %s/%s. Connection to %s:%s failed after %sms.',
attempt, self.max_tries if self.max_tries != 0 else '',
self.host, self.port, self.connection_timeout)
logger.warning(
"Attempt %s/%s. Connection to %s:%s failed after %sms.",
attempt,
self.max_tries if self.max_tries != 0 else "",
self.host,
self.port,
self.connection_timeout,
)
if attempt == self.max_tries:
logger.critical('Cannot connect to the Database. Giving up.')
logger.critical("Cannot connect to the Database. Giving up.")
raise ConnectionError() from exc
else:
break
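The backend selection in `connect()` boils down to a registry of dotted paths plus a dynamic import; a minimal sketch of just that mechanism:

```python
# Resolve a backend name to its connection class via the BACKENDS map.
from importlib import import_module

BACKENDS = {
    "tarantool_db": "planetmint.backend.tarantool.connection.TarantoolDBConnection",
    "localmongodb": "planetmint.backend.localmongodb.connection.LocalMongoDBConnection",
}

def load_backend_class(backend_name: str):
    modulepath, _, class_name = BACKENDS[backend_name].rpartition(".")
    return getattr(import_module(modulepath), class_name)
```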

View File

@ -22,7 +22,7 @@ generic backend interfaces to the implementations in this module.
"""
# Register the single dispatched modules on import.
from planetmint.backend.localmongodb import schema, query, convert # noqa
from planetmint.backend.localmongodb import schema, query, convert # noqa
# MongoDBConnection should always be accessed via
# ``planetmint.backend.connect()``.

View File

@ -8,20 +8,28 @@ from ssl import CERT_REQUIRED
import pymongo
from planetmint.config import Config
from planetmint.backend.exceptions import (DuplicateKeyError,
OperationError,
ConnectionError)
from planetmint.backend.exceptions import DuplicateKeyError, OperationError, ConnectionError
from planetmint.transactions.common.exceptions import ConfigurationError
from planetmint.utils import Lazy
from planetmint.backend.connection import Connection
logger = logging.getLogger(__name__)
class LocalMongoDBConnection(Connection):
def __init__(self, replicaset=None, ssl=None, login=None, password=None,
ca_cert=None, certfile=None, keyfile=None,
keyfile_passphrase=None, crlfile=None, **kwargs):
class LocalMongoDBConnection(Connection):
def __init__(
self,
replicaset=None,
ssl=None,
login=None,
password=None,
ca_cert=None,
certfile=None,
keyfile=None,
keyfile_passphrase=None,
crlfile=None,
**kwargs,
):
"""Create a new Connection instance.
Args:
@ -32,15 +40,15 @@ class LocalMongoDBConnection(Connection):
"""
super().__init__(**kwargs)
self.replicaset = replicaset or Config().get()['database']['replicaset']
self.ssl = ssl if ssl is not None else Config().get()['database']['ssl']
self.login = login or Config().get()['database']['login']
self.password = password or Config().get()['database']['password']
self.ca_cert = ca_cert or Config().get()['database']['ca_cert']
self.certfile = certfile or Config().get()['database']['certfile']
self.keyfile = keyfile or Config().get()['database']['keyfile']
self.keyfile_passphrase = keyfile_passphrase or Config().get()['database']['keyfile_passphrase']
self.crlfile = crlfile or Config().get()['database']['crlfile']
self.replicaset = replicaset or Config().get()["database"]["replicaset"]
self.ssl = ssl if ssl is not None else Config().get()["database"]["ssl"]
self.login = login or Config().get()["database"]["login"]
self.password = password or Config().get()["database"]["password"]
self.ca_cert = ca_cert or Config().get()["database"]["ca_cert"]
self.certfile = certfile or Config().get()["database"]["certfile"]
self.keyfile = keyfile or Config().get()["database"]["keyfile"]
self.keyfile_passphrase = keyfile_passphrase or Config().get()["database"]["keyfile_passphrase"]
self.crlfile = crlfile or Config().get()["database"]["crlfile"]
if not self.ssl:
self.ssl = False
if not self.keyfile_passphrase:
@ -66,15 +74,14 @@ class LocalMongoDBConnection(Connection):
try:
return query.run(self.conn)
except pymongo.errors.AutoReconnect:
logger.warning('Lost connection to the database, '
'retrying query.')
logger.warning("Lost connection to the database, " "retrying query.")
return query.run(self.conn)
except pymongo.errors.AutoReconnect as exc:
raise ConnectionError from exc
except pymongo.errors.DuplicateKeyError as exc:
raise DuplicateKeyError from exc
except pymongo.errors.OperationFailure as exc:
print(f'DETAILS: {exc.details}')
print(f"DETAILS: {exc.details}")
raise OperationError from exc
def _connect(self):
@ -95,44 +102,45 @@ class LocalMongoDBConnection(Connection):
# `ConnectionFailure`.
# The presence of ca_cert, certfile, keyfile, crlfile implies the
# use of certificates for TLS connectivity.
if self.ca_cert is None or self.certfile is None or \
self.keyfile is None or self.crlfile is None:
client = pymongo.MongoClient(self.host,
self.port,
replicaset=self.replicaset,
serverselectiontimeoutms=self.connection_timeout,
ssl=self.ssl,
**MONGO_OPTS)
if self.ca_cert is None or self.certfile is None or self.keyfile is None or self.crlfile is None:
client = pymongo.MongoClient(
self.host,
self.port,
replicaset=self.replicaset,
serverselectiontimeoutms=self.connection_timeout,
ssl=self.ssl,
**MONGO_OPTS,
)
if self.login is not None and self.password is not None:
client[self.dbname].authenticate(self.login, self.password)
else:
logger.info('Connecting to MongoDB over TLS/SSL...')
client = pymongo.MongoClient(self.host,
self.port,
replicaset=self.replicaset,
serverselectiontimeoutms=self.connection_timeout,
ssl=self.ssl,
ssl_ca_certs=self.ca_cert,
ssl_certfile=self.certfile,
ssl_keyfile=self.keyfile,
ssl_pem_passphrase=self.keyfile_passphrase,
ssl_crlfile=self.crlfile,
ssl_cert_reqs=CERT_REQUIRED,
**MONGO_OPTS)
logger.info("Connecting to MongoDB over TLS/SSL...")
client = pymongo.MongoClient(
self.host,
self.port,
replicaset=self.replicaset,
serverselectiontimeoutms=self.connection_timeout,
ssl=self.ssl,
ssl_ca_certs=self.ca_cert,
ssl_certfile=self.certfile,
ssl_keyfile=self.keyfile,
ssl_pem_passphrase=self.keyfile_passphrase,
ssl_crlfile=self.crlfile,
ssl_cert_reqs=CERT_REQUIRED,
**MONGO_OPTS,
)
if self.login is not None:
client[self.dbname].authenticate(self.login,
mechanism='MONGODB-X509')
client[self.dbname].authenticate(self.login, mechanism="MONGODB-X509")
return client
except (pymongo.errors.ConnectionFailure,
pymongo.errors.OperationFailure) as exc:
logger.info('Exception in _connect(): {}'.format(exc))
except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc:
logger.info("Exception in _connect(): {}".format(exc))
raise ConnectionError(str(exc)) from exc
except pymongo.errors.ConfigurationError as exc:
raise ConfigurationError from exc
MONGO_OPTS = {
'socketTimeoutMS': 20000,
"socketTimeoutMS": 20000,
}

View File

@ -15,11 +15,10 @@ register_query = module_dispatch_registrar(convert)
@register_query(LocalMongoDBConnection)
def prepare_asset(connection, transaction_type, transaction_id, filter_operation, asset):
if transaction_type == filter_operation:
asset['id'] = transaction_id
asset["id"] = transaction_id
return asset
@register_query(LocalMongoDBConnection)
def prepare_metadata(connection, transaction_id, metadata):
return {'id': transaction_id,
'metadata': metadata}
return {"id": transaction_id, "metadata": metadata}

View File

@ -1,4 +1,5 @@
from functools import singledispatch
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
@ -19,104 +20,80 @@ register_query = module_dispatch_registrar(backend.query)
@register_query(LocalMongoDBConnection)
def store_transactions(conn, signed_transactions):
return conn.run(conn.collection('transactions')
.insert_many(signed_transactions))
return conn.run(conn.collection("transactions").insert_many(signed_transactions))
@register_query(LocalMongoDBConnection)
def get_transaction(conn, transaction_id):
return conn.run(
conn.collection('transactions')
.find_one({'id': transaction_id}, {'_id': 0}))
return conn.run(conn.collection("transactions").find_one({"id": transaction_id}, {"_id": 0}))
@register_query(LocalMongoDBConnection)
def get_transactions(conn, transaction_ids):
try:
return conn.run(
conn.collection('transactions')
.find({'id': {'$in': transaction_ids}},
projection={'_id': False}))
conn.collection("transactions").find({"id": {"$in": transaction_ids}}, projection={"_id": False})
)
except IndexError:
pass
@register_query(LocalMongoDBConnection)
def store_metadatas(conn, metadata):
return conn.run(
conn.collection('metadata')
.insert_many(metadata, ordered=False))
return conn.run(conn.collection("metadata").insert_many(metadata, ordered=False))
@register_query(LocalMongoDBConnection)
def get_metadata(conn, transaction_ids):
return conn.run(
conn.collection('metadata')
.find({'id': {'$in': transaction_ids}},
projection={'_id': False}))
return conn.run(conn.collection("metadata").find({"id": {"$in": transaction_ids}}, projection={"_id": False}))
@register_query(LocalMongoDBConnection)
def store_asset(conn, asset):
try:
return conn.run(
conn.collection('assets')
.insert_one(asset))
return conn.run(conn.collection("assets").insert_one(asset))
except DuplicateKeyError:
pass
@register_query(LocalMongoDBConnection)
def store_assets(conn, assets):
return conn.run(
conn.collection('assets')
.insert_many(assets, ordered=False))
return conn.run(conn.collection("assets").insert_many(assets, ordered=False))
@register_query(LocalMongoDBConnection)
def get_asset(conn, asset_id):
try:
return conn.run(
conn.collection('assets')
.find_one({'id': asset_id}, {'_id': 0, 'id': 0}))
return conn.run(conn.collection("assets").find_one({"id": asset_id}, {"_id": 0, "id": 0}))
except IndexError:
pass
@register_query(LocalMongoDBConnection)
def get_assets(conn, asset_ids):
return conn.run(
conn.collection('assets')
.find({'id': {'$in': asset_ids}},
projection={'_id': False}))
return conn.run(conn.collection("assets").find({"id": {"$in": asset_ids}}, projection={"_id": False}))
@register_query(LocalMongoDBConnection)
def get_spent(conn, transaction_id, output):
query = {'inputs':
{'$elemMatch':
{'$and': [{'fulfills.transaction_id': transaction_id},
{'fulfills.output_index': output}]}}}
query = {
"inputs": {
"$elemMatch": {"$and": [{"fulfills.transaction_id": transaction_id}, {"fulfills.output_index": output}]}
}
}
return conn.run(
conn.collection('transactions')
.find(query, {'_id': 0}))
return conn.run(conn.collection("transactions").find(query, {"_id": 0}))
@register_query(LocalMongoDBConnection)
def get_latest_block(conn):
return conn.run(
conn.collection('blocks')
.find_one(projection={'_id': False},
sort=[('height', DESCENDING)]))
return conn.run(conn.collection("blocks").find_one(projection={"_id": False}, sort=[("height", DESCENDING)]))
@register_query(LocalMongoDBConnection)
def store_block(conn, block):
try:
return conn.run(
conn.collection('blocks')
.insert_one(block))
return conn.run(conn.collection("blocks").insert_one(block))
except DuplicateKeyError:
pass
@ -125,32 +102,47 @@ def store_block(conn, block):
def get_txids_filtered(conn, asset_id, operation=None, last_tx=None):
match = {
Transaction.CREATE: {'operation': 'CREATE', 'id': asset_id},
Transaction.TRANSFER: {'operation': 'TRANSFER', 'asset.id': asset_id},
None: {'$or': [{'asset.id': asset_id}, {'id': asset_id}]},
Transaction.CREATE: {"operation": "CREATE", "id": asset_id},
Transaction.TRANSFER: {"operation": "TRANSFER", "asset.id": asset_id},
None: {"$or": [{"asset.id": asset_id}, {"id": asset_id}]},
}[operation]
cursor = conn.run(conn.collection('transactions').find(match))
cursor = conn.run(conn.collection("transactions").find(match))
if last_tx:
cursor = cursor.sort([('$natural', DESCENDING)]).limit(1)
cursor = cursor.sort([("$natural", DESCENDING)]).limit(1)
return (elem['id'] for elem in cursor)
return (elem["id"] for elem in cursor)
@register_query(LocalMongoDBConnection)
def text_search(conn, search, *, language='english', case_sensitive=False,
diacritic_sensitive=False, text_score=False, limit=0, table='assets'):
def text_search(
conn,
search,
*,
language="english",
case_sensitive=False,
diacritic_sensitive=False,
text_score=False,
limit=0,
table="assets"
):
cursor = conn.run(
conn.collection(table)
.find({'$text': {
'$search': search,
'$language': language,
'$caseSensitive': case_sensitive,
'$diacriticSensitive': diacritic_sensitive}},
{'score': {'$meta': 'textScore'}, '_id': False})
.sort([('score', {'$meta': 'textScore'})])
.limit(limit))
.find(
{
"$text": {
"$search": search,
"$language": language,
"$caseSensitive": case_sensitive,
"$diacriticSensitive": diacritic_sensitive,
}
},
{"score": {"$meta": "textScore"}, "_id": False},
)
.sort([("score", {"$meta": "textScore"})])
.limit(limit)
)
if text_score:
return cursor
@ -159,58 +151,54 @@ def text_search(conn, search, *, language='english', case_sensitive=False,
def _remove_text_score(asset):
asset.pop('score', None)
asset.pop("score", None)
return asset
@register_query(LocalMongoDBConnection)
def get_owned_ids(conn, owner):
cursor = conn.run(
conn.collection('transactions').aggregate([
{'$match': {'outputs.public_keys': owner}},
{'$project': {'_id': False}}
]))
conn.collection("transactions").aggregate(
[{"$match": {"outputs.public_keys": owner}}, {"$project": {"_id": False}}]
)
)
return cursor
@register_query(LocalMongoDBConnection)
def get_spending_transactions(conn, inputs):
transaction_ids = [i['transaction_id'] for i in inputs]
output_indexes = [i['output_index'] for i in inputs]
query = {'inputs':
{'$elemMatch':
{'$and':
[
{'fulfills.transaction_id': {'$in': transaction_ids}},
{'fulfills.output_index': {'$in': output_indexes}}
]}}}
transaction_ids = [i["transaction_id"] for i in inputs]
output_indexes = [i["output_index"] for i in inputs]
query = {
"inputs": {
"$elemMatch": {
"$and": [
{"fulfills.transaction_id": {"$in": transaction_ids}},
{"fulfills.output_index": {"$in": output_indexes}},
]
}
}
}
cursor = conn.run(
conn.collection('transactions').find(query, {'_id': False}))
cursor = conn.run(conn.collection("transactions").find(query, {"_id": False}))
return cursor
@register_query(LocalMongoDBConnection)
def get_block(conn, block_id):
return conn.run(
conn.collection('blocks')
.find_one({'height': block_id},
projection={'_id': False}))
return conn.run(conn.collection("blocks").find_one({"height": block_id}, projection={"_id": False}))
@register_query(LocalMongoDBConnection)
def get_block_with_transaction(conn, txid):
return conn.run(conn.collection("blocks").find({"transactions": txid}, projection={"_id": False, "height": True}))
@register_query(LocalMongoDBConnection)
def delete_transactions(conn, txn_ids):
conn.run(conn.collection("assets").delete_many({"id": {"$in": txn_ids}}))
conn.run(conn.collection("metadata").delete_many({"id": {"$in": txn_ids}}))
conn.run(conn.collection("transactions").delete_many({"id": {"$in": txn_ids}}))
@register_query(LocalMongoDBConnection)
@ -218,7 +206,7 @@ def store_unspent_outputs(conn, *unspent_outputs):
if unspent_outputs:
try:
return conn.run(
conn.collection("utxos").insert_many(
unspent_outputs,
ordered=False,
)
@ -232,14 +220,19 @@ def store_unspent_outputs(conn, *unspent_outputs):
def delete_unspent_outputs(conn, *unspent_outputs):
if unspent_outputs:
return conn.run(
conn.collection("utxos").delete_many(
{
"$or": [
{
"$and": [
{"transaction_id": unspent_output["transaction_id"]},
{"output_index": unspent_output["output_index"]},
],
}
for unspent_output in unspent_outputs
]
}
)
)
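The filter built above pairs each UTXO's transaction_id with its own output_index, so only exact (id, index) combinations are deleted; a plain $in on both fields would also match unrelated cross-pairs. Shape of the query for two hypothetical UTXOs:

    unspent_outputs = [
        {"transaction_id": "tx1", "output_index": 0},
        {"transaction_id": "tx2", "output_index": 1},
    ]
    query = {
        "$or": [
            {"$and": [{"transaction_id": u["transaction_id"]},
                      {"output_index": u["output_index"]}]}
            for u in unspent_outputs
        ]
    }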
@ -247,51 +240,36 @@ def delete_unspent_outputs(conn, *unspent_outputs):
def get_unspent_outputs(conn, *, query=None):
if query is None:
query = {}
return conn.run(conn.collection("utxos").find(query, projection={"_id": False}))
@register_query(LocalMongoDBConnection)
def store_pre_commit_state(conn, state):
return conn.run(conn.collection("pre_commit").replace_one({}, state, upsert=True))
@register_query(LocalMongoDBConnection)
def get_pre_commit_state(connection):
return connection.run(connection.collection("pre_commit").find_one())
@register_query(LocalMongoDBConnection)
def store_validator_set(conn, validators_update):
height = validators_update["height"]
return conn.run(conn.collection("validators").replace_one({"height": height}, validators_update, upsert=True))
@register_query(LocalMongoDBConnection)
def delete_validator_set(conn, height):
return conn.run(conn.collection("validators").delete_many({"height": height}))
@register_query(LocalMongoDBConnection)
def store_election(conn, election_id, height, is_concluded):
return conn.run(
conn.collection("elections").replace_one(
{"election_id": election_id, "height": height},
{"election_id": election_id, "height": height, "is_concluded": is_concluded},
upsert=True,
)
)
@ -299,29 +277,22 @@ def store_election(conn, election_id, height, is_concluded):
@register_query(LocalMongoDBConnection)
def store_elections(conn, elections):
return conn.run(conn.collection("elections").insert_many(elections))
@register_query(LocalMongoDBConnection)
def delete_elections(conn, height):
return conn.run(conn.collection("elections").delete_many({"height": height}))
@register_query(LocalMongoDBConnection)
def get_validator_set(conn, height=None):
query = {}
if height is not None:
query = {"height": {"$lte": height}}
cursor = conn.run(
conn.collection("validators").find(query, projection={"_id": False}).sort([("height", DESCENDING)]).limit(1)
)
return next(cursor, None)
@ -329,35 +300,27 @@ def get_validator_set(conn, height=None):
@register_query(LocalMongoDBConnection)
def get_election(conn, election_id):
query = {"election_id": election_id}
return conn.run(
conn.collection("elections").find_one(query, projection={"_id": False}, sort=[("height", DESCENDING)])
)
@register_query(LocalMongoDBConnection)
def get_asset_tokens_for_public_key(conn, asset_id, public_key):
query = {"outputs.public_keys": [public_key], "asset.id": asset_id}
cursor = conn.run(conn.collection("transactions").aggregate([{"$match": query}, {"$project": {"_id": False}}]))
return cursor
@register_query(LocalMongoDBConnection)
def store_abci_chain(conn, height, chain_id, is_synced=True):
return conn.run(
conn.collection("abci_chains").replace_one(
{"height": height},
{"height": height, "chain_id": chain_id, "is_synced": is_synced},
upsert=True,
)
)
@ -365,14 +328,9 @@ def store_abci_chain(conn, height, chain_id, is_synced=True):
@register_query(LocalMongoDBConnection)
def delete_abci_chain(conn, height):
return conn.run(conn.collection("abci_chains").delete_many({"height": height}))
@register_query(LocalMongoDBConnection)
def get_latest_abci_chain(conn):
return conn.run(conn.collection("abci_chains").find_one(projection={"_id": False}, sort=[("height", DESCENDING)]))

View File

@ -20,48 +20,48 @@ register_schema = module_dispatch_registrar(backend.schema)
INDEXES = {
"transactions": [
("id", dict(unique=True, name="transaction_id")),
("asset.id", dict(name="asset_id")),
("outputs.public_keys", dict(name="outputs")),
(
[("inputs.fulfills.transaction_id", ASCENDING), ("inputs.fulfills.output_index", ASCENDING)],
dict(name="inputs"),
),
],
"assets": [
("id", dict(name="asset_id", unique=True)),
([("$**", TEXT)], dict(name="text")),
],
"blocks": [
([("height", DESCENDING)], dict(name="height", unique=True)),
],
"metadata": [
("id", dict(name="transaction_id", unique=True)),
([("$**", TEXT)], dict(name="text")),
],
"utxos": [
([("transaction_id", ASCENDING), ("output_index", ASCENDING)], dict(name="utxo", unique=True)),
],
"pre_commit": [
("height", dict(name="height", unique=True)),
],
"elections": [
([("height", DESCENDING), ("election_id", ASCENDING)], dict(name="election_id_height", unique=True)),
],
"validators": [
("height", dict(name="height", unique=True)),
],
"abci_chains": [
("height", dict(name="height", unique=True)),
("chain_id", dict(name="chain_id", unique=True)),
],
}
@register_schema(LocalMongoDBConnection)
def create_database(conn, dbname):
logger.info("Create database `%s`.", dbname)
# TODO: read and write concerns can be declared here
conn.conn.get_database(dbname)
@ -72,15 +72,15 @@ def create_tables(conn, dbname):
# create the table
# TODO: read and write concerns can be declared here
try:
logger.info(f"Create `{table_name}` table.")
conn.conn[dbname].create_collection(table_name)
except CollectionInvalid:
logger.info(f"Collection {table_name} already exists.")
create_indexes(conn, dbname, table_name, INDEXES[table_name])
def create_indexes(conn, dbname, collection, indexes):
logger.info(f"Ensure secondary indexes for `{collection}`.")
for fields, kwargs in indexes:
conn.conn[dbname][collection].create_index(fields, **kwargs)
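Each INDEXES entry is a (fields, kwargs) pair handed straight to pymongo's create_index. A hedged sketch of how the compound transactions index above expands, with an illustrative database name:

    from pymongo import MongoClient, ASCENDING

    db = MongoClient("localhost", 27017)["bigchain"]  # db name is an assumption
    fields, kwargs = (
        [("inputs.fulfills.transaction_id", ASCENDING),
         ("inputs.fulfills.output_index", ASCENDING)],
        dict(name="inputs"),
    )
    db["transactions"].create_index(fields, **kwargs)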

View File

@ -27,12 +27,12 @@ def store_asset(asset: dict, connection):
@singledispatch
def store_assets(assets: list, connection):
"""Write a list of assets to the assets table.
backend
Args:
assets (list): a list of assets to write.
Returns:
The database response.
"""
raise NotImplementedError
@ -215,8 +215,17 @@ def get_txids_filtered(connection, asset_id, operation=None):
@singledispatch
def text_search(
conn,
search,
*,
language="english",
case_sensitive=False,
diacritic_sensitive=False,
text_score=False,
limit=0,
table=None
):
"""Return all the assets that match the text search.
The results are sorted by text score.
@ -243,8 +252,7 @@ def text_search(conn, search, *, language='english', case_sensitive=False,
OperationError: If the backend does not support text search
"""
raise OperationError("This query is only supported when running " "Planetmint with MongoDB as the backend.")
@singledispatch
@ -384,8 +392,7 @@ def get_validator_set(conn, height):
@singledispatch
def get_election(conn, election_id):
"""Return the election record
"""
"""Return the election record"""
raise NotImplementedError
@ -432,6 +439,5 @@ def get_latest_abci_chain(conn):
@singledispatch
def _group_transaction_by_ids(txids: list, connection):
"""Returns the transactions object (JSON TYPE), from list of ids.
"""
"""Returns the transactions object (JSON TYPE), from list of ids."""
raise NotImplementedError

View File

@ -12,23 +12,74 @@ from planetmint.config import Config
from planetmint.backend.connection import connect
from planetmint.transactions.common.exceptions import ValidationError
from planetmint.transactions.common.utils import (
validate_all_values_for_key_in_obj,
validate_all_values_for_key_in_list,
)
logger = logging.getLogger(__name__)
# Tables/collections that every backend database must create
TABLES = (
"transactions",
"blocks",
"assets",
"metadata",
"validators",
"elections",
"pre_commit",
"utxos",
"abci_chains",
)
SPACE_NAMES = ("abci_chains", "assets", "blocks", "blocks_tx",
"elections", "meta_data", "pre_commits", "validators",
"transactions", "inputs", "outputs", "keys", "utxos")
SPACE_NAMES = (
"abci_chains",
"assets",
"blocks",
"blocks_tx",
"elections",
"meta_data",
"pre_commits",
"validators",
"transactions",
"inputs",
"outputs",
"keys",
"utxos",
)
VALID_LANGUAGES = (
"danish",
"dutch",
"english",
"finnish",
"french",
"german",
"hungarian",
"italian",
"norwegian",
"portuguese",
"romanian",
"russian",
"spanish",
"swedish",
"turkish",
"none",
"da",
"nl",
"en",
"fi",
"fr",
"de",
"hu",
"it",
"nb",
"pt",
"ro",
"ru",
"es",
"sv",
"tr",
)
@singledispatch
@ -84,7 +135,7 @@ def init_database(connection=None, dbname=None):
"""
connection = connection or connect()
dbname = dbname or Config().get()["database"]["name"]
create_database(connection, dbname)
create_tables(connection, dbname)
@ -93,41 +144,43 @@ def init_database(connection=None, dbname=None):
def validate_language_key(obj, key):
"""Validate all nested "language" key in `obj`.
Args:
obj (dict): dictionary whose "language" key is to be validated.
Returns:
None: validation successful
Raises:
ValidationError: will raise exception in case language is not valid.
"""
backend = Config().get()["database"]["backend"]
if backend == "localmongodb":
data = obj.get(key, {})
if isinstance(data, dict):
validate_all_values_for_key_in_obj(data, "language", validate_language)
elif isinstance(data, list):
validate_all_values_for_key_in_list(data, "language", validate_language)
def validate_language(value):
"""Check if `value` is a valid language.
https://docs.mongodb.com/manual/reference/text-search-languages/
Args:
value (str): language to be validated
Returns:
None: validation successful
Raises:
ValidationError: will raise exception in case language is not valid.
"""
if value not in VALID_LANGUAGES:
error_str = (
"MongoDB does not support text search for the "
'language "{}". If you do not understand this error '
'message then please rename key/field "language" to '
'something else like "lang".'
).format(value)
raise ValidationError(error_str)
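Illustrative payloads for the two validators above; the keys are hypothetical, the rule is the documented one that every nested "language" value must appear in VALID_LANGUAGES:

    ok = {"metadata": {"text": "hello", "language": "en"}}
    bad = {"metadata": {"text": "hallo", "language": "klingon"}}
    # validate_language_key(ok, "metadata")   -> returns None
    # validate_language_key(bad, "metadata")  -> raises ValidationError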

View File

@ -1,5 +1,5 @@
# Register the single dispatched modules on import.
from planetmint.backend.tarantool import query, connection, schema, convert # noqa
# MongoDBConnection should always be accessed via
# ``planetmint.backend.connect()``.

View File

@ -16,11 +16,10 @@ register_query = module_dispatch_registrar(convert)
def prepare_asset(connection, transaction_type, transaction_id, filter_operation, asset):
asset_id = transaction_id
if transaction_type != filter_operation:
asset_id = asset["id"]
return tuple([asset, transaction_id, asset_id])
@register_query(TarantoolDBConnection)
def prepare_metadata(connection, transaction_id, metadata):
return {"id": transaction_id, "metadata": metadata}

View File

@ -57,40 +57,22 @@ def store_transactions(connection, signed_transactions: list):
txprepare = TransactionDecompose(transaction)
txtuples = txprepare.convert_to_tuple()
try:
connection.run(connection.space("transactions").insert(txtuples["transactions"]), only_data=False)
except: # This is used for omitting duplicate error in database for test -> test_bigchain_api::test_double_inclusion # noqa: E501, E722
continue
for _in in txtuples["inputs"]:
connection.run(connection.space("inputs").insert(_in), only_data=False)
for _out in txtuples["outputs"]:
connection.run(connection.space("outputs").insert(_out), only_data=False)
for _key in txtuples["keys"]:
connection.run(connection.space("keys").insert(_key), only_data=False)
if txtuples["metadata"] is not None:
connection.run(connection.space("meta_data").insert(txtuples["metadata"]), only_data=False)
if txtuples["asset"] is not None:
connection.run(connection.space("assets").insert(txtuples["asset"]), only_data=False)
@register_query(TarantoolDBConnection)
@ -110,7 +92,8 @@ def store_metadatas(connection, metadata: list):
for meta in metadata:
connection.run(
connection.space("meta_data").insert(
(meta["id"], json.dumps(meta["data"] if not "metadata" in meta else meta["metadata"]))) # noqa: E713
(meta["id"], json.dumps(meta["data"] if not "metadata" in meta else meta["metadata"]))
) # noqa: E713
)
@ -118,9 +101,7 @@ def store_metadatas(connection, metadata: list):
def get_metadata(connection, transaction_ids: list):
_returned_data = []
for _id in transaction_ids:
metadata = connection.run(connection.space("meta_data").select(_id, index="id_search"))
if metadata is not None:
if len(metadata) > 0:
metadata[0] = list(metadata[0])
@ -139,14 +120,13 @@ def store_asset(connection, asset):
return tuple(obj)
else:
return (json.dumps(obj), obj["id"], obj["id"])
try:
return connection.run(connection.space("assets").insert(convert(asset)), only_data=False)
except DatabaseError:
pass
@register_query(TarantoolDBConnection)
def store_assets(connection, assets: list):
for asset in assets:
@ -155,9 +135,7 @@ def store_assets(connection, assets: list):
@register_query(TarantoolDBConnection)
def get_asset(connection, asset_id: str):
_data = connection.run(connection.space("assets").select(asset_id, index="txid_search"))
return json.loads(_data[0][0]) if len(_data) > 0 else []
@ -166,9 +144,7 @@ def get_asset(connection, asset_id: str):
def get_assets(connection, assets_ids: list) -> list:
_returned_data = []
for _id in list(set(assets_ids)):
res = connection.run(connection.space("assets").select(_id, index="txid_search"))
_returned_data.append(res[0])
sorted_assets = sorted(_returned_data, key=lambda k: k[1], reverse=False)
@ -186,17 +162,13 @@ def get_spent(connection, fullfil_transaction_id: str, fullfil_output_index: str
@register_query(TarantoolDBConnection)
def get_latest_block(connection): # TODO Here is used DESCENDING OPERATOR
_all_blocks = connection.run(connection.space("blocks").select())
block = {"app_hash": "", "height": 0, "transactions": []}
if _all_blocks is not None:
if len(_all_blocks) > 0:
_block = sorted(_all_blocks, key=itemgetter(1), reverse=True)[0]
_txids = connection.run(connection.space("blocks_tx").select(_block[2], index="block_search"))
block["app_hash"] = _block[0]
block["height"] = _block[1]
block["transactions"] = [tx[0] for tx in _txids]
@ -209,27 +181,22 @@ def get_latest_block(connection): # TODO Here is used DESCENDING OPERATOR
def store_block(connection, block: dict):
block_unique_id = token_hex(8)
connection.run(
connection.space("blocks").insert((block["app_hash"],
block["height"],
block_unique_id)),
only_data=False
connection.space("blocks").insert((block["app_hash"], block["height"], block_unique_id)), only_data=False
)
for txid in block["transactions"]:
connection.run(connection.space("blocks_tx").insert((txid, block_unique_id)), only_data=False)
@register_query(TarantoolDBConnection)
def get_txids_filtered(
connection, asset_id: str, operation: str = None, last_tx: any = None
): # TODO here is used 'OR' operator
actions = {
"CREATE": {"sets": ["CREATE", asset_id], "index": "transaction_search"},
# 1 - operation, 2 - id (only in transactions) +
"TRANSFER": {"sets": ["TRANSFER", asset_id], "index": "transaction_search"},
# 1 - operation, 2 - asset.id (linked mode) + OPERATOR OR
None: {"sets": [asset_id, asset_id]}
None: {"sets": [asset_id, asset_id]},
}[operation]
_transactions = []
if actions["sets"][0] == "CREATE": # +
@ -237,9 +204,7 @@ def get_txids_filtered(connection, asset_id: str, operation: str = None,
connection.space("transactions").select([operation, asset_id], index=actions["index"])
)
elif actions["sets"][0] == "TRANSFER": # +
_assets = connection.run(connection.space("assets").select([asset_id], index="only_asset_search"))
for asset in _assets:
_txid = asset[1]
_transactions = connection.run(
@ -248,12 +213,8 @@ def get_txids_filtered(connection, asset_id: str, operation: str = None,
if len(_transactions) != 0:
break
else:
_tx_ids = connection.run(connection.space("transactions").select([asset_id], index="id_search"))
_assets_ids = connection.run(connection.space("assets").select([asset_id], index="only_asset_search"))
return tuple(set([sublist[1] for sublist in _assets_ids] + [sublist[0] for sublist in _tx_ids]))
if last_tx:
@ -261,43 +222,34 @@ def get_txids_filtered(connection, asset_id: str, operation: str = None,
return tuple([elem[0] for elem in _transactions])
@register_query(TarantoolDBConnection)
def text_search(conn, search, table="assets", limit=0):
pattern = ".{}.".format(search)
field_no = 1 if table == "assets" else 2 # 2 for meta_data
res = conn.run(conn.space(table).call("indexed_pattern_search", (table, field_no, pattern)))
to_return = []
if len(res[0]): # NEEDS BEAUTIFICATION
if table == "assets":
for result in res[0]:
to_return.append({"data": json.loads(result[0])["data"], "id": result[1]})
else:
for result in res[0]:
to_return.append({"metadata": json.loads(result[1]), "id": result[0]})
return to_return if limit == 0 else to_return[:limit]
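The Lua procedure returns raw tuples; the branches above only reshape them into dicts. A standalone sketch of the assets branch with a fabricated result row:

    import json

    res = [[(json.dumps({"data": {"name": "planet"}}), "tx1")]]  # fake response
    to_return = [{"data": json.loads(row[0])["data"], "id": row[1]} for row in res[0]]
    print(to_return)  # [{'data': {'name': 'planet'}, 'id': 'tx1'}]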
def _remove_text_score(asset):
asset.pop("score", None)
return asset
@register_query(TarantoolDBConnection)
def get_owned_ids(connection, owner: str):
_keys = connection.run(connection.space("keys").select(owner, index="keys_search"))
if _keys is None or len(_keys) == 0:
return []
_transactionids = list(set([key[1] for key in _keys]))
@ -310,9 +262,11 @@ def get_spending_transactions(connection, inputs):
_transactions = []
for inp in inputs:
_trans_list = get_spent(
fullfil_transaction_id=inp["transaction_id"],
fullfil_output_index=inp["output_index"],
connection=connection,
)
_transactions.extend(_trans_list)
return _transactions
@ -320,28 +274,20 @@ def get_spending_transactions(connection, inputs):
@register_query(TarantoolDBConnection)
def get_block(connection, block_id=[]):
_block = connection.run(connection.space("blocks").select(block_id, index="block_search", limit=1))
if _block is None or len(_block) == 0:
return []
_block = _block[0]
_txblock = connection.run(connection.space("blocks_tx").select(_block[2], index="block_search"))
return {"app_hash": _block[0], "height": _block[1], "transactions": [_tx[0] for _tx in _txblock]}
@register_query(TarantoolDBConnection)
def get_block_with_transaction(connection, txid: str):
_all_blocks_tx = connection.run(connection.space("blocks_tx").select(txid, index="id_search"))
if _all_blocks_tx is None or len(_all_blocks_tx) == 0:
return []
_block = connection.run(connection.space("blocks").select(_all_blocks_tx[0][1], index="block_id_search"))
return [{"height": _height[1]} for _height in _block]
@ -373,7 +319,7 @@ def store_unspent_outputs(connection, *unspent_outputs: list):
if unspent_outputs:
for utxo in unspent_outputs:
output = connection.run(
connection.space("utxos").insert((utxo['transaction_id'], utxo['output_index'], dumps(utxo)))
connection.space("utxos").insert((utxo["transaction_id"], utxo["output_index"], dumps(utxo)))
)
result.append(output)
return result
@ -384,42 +330,36 @@ def delete_unspent_outputs(connection, *unspent_outputs: list):
result = []
if unspent_outputs:
for utxo in unspent_outputs:
output = connection.run(connection.space("utxos").delete((utxo["transaction_id"], utxo["output_index"])))
result.append(output)
return result
@register_query(TarantoolDBConnection)
def get_unspent_outputs(connection, query=None): # for now we don't have implementation for 'query'.
_utxos = connection.run(connection.space("utxos").select([]))
return [loads(utx[2]) for utx in _utxos]
@register_query(TarantoolDBConnection)
def store_pre_commit_state(connection, state: dict):
_precommit = connection.run(connection.space("pre_commits").select([], limit=1))
_precommitTuple = (
(token_hex(8), state["height"], state["transactions"])
if _precommit is None or len(_precommit) == 0
else _precommit[0]
)
connection.run(
connection.space("pre_commits").upsert(_precommitTuple,
op_list=[('=', 1, state["height"]),
('=', 2, state["transactions"])],
limit=1),
only_data=False
connection.space("pre_commits").upsert(
_precommitTuple, op_list=[("=", 1, state["height"]), ("=", 2, state["transactions"])], limit=1
),
only_data=False,
)
@register_query(TarantoolDBConnection)
def get_pre_commit_state(connection):
_commit = connection.run(connection.space("pre_commits").select([], index="id_search"))
if _commit is None or len(_commit) == 0:
return None
_commit = sorted(_commit, key=itemgetter(1), reverse=False)[0]
@ -428,39 +368,32 @@ def get_pre_commit_state(connection):
@register_query(TarantoolDBConnection)
def store_validator_set(conn, validators_update: dict):
_validator = conn.run(conn.space("validators").select(validators_update["height"], index="height_search", limit=1))
unique_id = token_hex(8) if _validator is None or len(_validator) == 0 else _validator[0][0]
conn.run(
conn.space("validators").upsert((unique_id, validators_update["height"], validators_update["validators"]),
op_list=[('=', 1, validators_update["height"]),
('=', 2, validators_update["validators"])],
limit=1),
only_data=False
conn.space("validators").upsert(
(unique_id, validators_update["height"], validators_update["validators"]),
op_list=[("=", 1, validators_update["height"]), ("=", 2, validators_update["validators"])],
limit=1,
),
only_data=False,
)
@register_query(TarantoolDBConnection)
def delete_validator_set(connection, height: int):
_validators = connection.run(connection.space("validators").select(height, index="height_search"))
for _valid in _validators:
connection.run(connection.space("validators").delete(_valid[0]), only_data=False)
@register_query(TarantoolDBConnection)
def store_election(connection, election_id: str, height: int, is_concluded: bool):
connection.run(
connection.space("elections").upsert((election_id, height, is_concluded),
op_list=[('=', 1, height),
('=', 2, is_concluded)],
limit=1),
only_data=False
connection.space("elections").upsert(
(election_id, height, is_concluded), op_list=[("=", 1, height), ("=", 2, is_concluded)], limit=1
),
only_data=False,
)
@ -468,33 +401,27 @@ def store_election(connection, election_id: str, height: int, is_concluded: bool
def store_elections(connection, elections: list):
for election in elections:
_election = connection.run( # noqa: F841
connection.space("elections").insert((election["election_id"],
election["height"],
election["is_concluded"])),
only_data=False
connection.space("elections").insert(
(election["election_id"], election["height"], election["is_concluded"])
),
only_data=False,
)
@register_query(TarantoolDBConnection)
def delete_elections(connection, height: int):
_elections = connection.run(connection.space("elections").select(height, index="height_search"))
for _elec in _elections:
connection.run(connection.space("elections").delete(_elec[0]), only_data=False)
@register_query(TarantoolDBConnection)
def get_validator_set(connection, height: int = None):
_validators = connection.run(connection.space("validators").select())
if height is not None and _validators is not None:
_validators = [{"height": validator[1], "validators": validator[2]} for validator in _validators if
validator[1] <= height]
_validators = [
{"height": validator[1], "validators": validator[2]} for validator in _validators if validator[1] <= height
]
return next(iter(sorted(_validators, key=lambda k: k["height"], reverse=True)), None)
elif _validators is not None:
_validators = [{"height": validator[1], "validators": validator[2]} for validator in _validators]
@ -504,9 +431,7 @@ def get_validator_set(connection, height: int = None):
@register_query(TarantoolDBConnection)
def get_election(connection, election_id: str):
_elections = connection.run(connection.space("elections").select(election_id, index="id_search"))
if _elections is None or len(_elections) == 0:
return None
_election = sorted(_elections, key=itemgetter(0), reverse=True)[0]
@ -514,13 +439,12 @@ def get_election(connection, election_id: str):
@register_query(TarantoolDBConnection)
def get_asset_tokens_for_public_key(
connection, asset_id: str, public_key: str
): # FIXME Something can be wrong with this function ! (public_key) is not used # noqa: E501
# space = connection.space("keys")
# _keys = space.select([public_key], index="keys_search")
_transactions = connection.run(connection.space("assets").select([asset_id], index="assetid_search"))
# _transactions = _transactions
# _keys = _keys.data
_grouped_transactions = _group_transaction_by_ids(connection=connection, txids=[_tx[1] for _tx in _transactions])
@ -531,30 +455,23 @@ def get_asset_tokens_for_public_key(connection, asset_id: str,
def store_abci_chain(connection, height: int, chain_id: str, is_synced: bool = True):
hash_id_primarykey = sha256(dumps(obj={"height": height}).encode()).hexdigest()
connection.run(
connection.space("abci_chains").upsert((height, is_synced, chain_id, hash_id_primarykey),
op_list=[
('=', 0, height),
('=', 1, is_synced),
('=', 2, chain_id)
]),
only_data=False
connection.space("abci_chains").upsert(
(height, is_synced, chain_id, hash_id_primarykey),
op_list=[("=", 0, height), ("=", 1, is_synced), ("=", 2, chain_id)],
),
only_data=False,
)
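The upsert above inserts the tuple when the primary key is new and otherwise applies the ('=', field_index, value) operations. A plain-Python sketch of that semantics, with a dict standing in for the space (the connector's exact field numbering is left aside):

    def upsert(store, key, tup, op_list):
        row = store.get(key)
        if row is None:
            store[key] = list(tup)
        else:
            for op, idx, val in op_list:
                assert op == "="
                row[idx] = val

    store = {}
    upsert(store, "h", (7, True, "chain-X", "h"), [("=", 0, 7)])
    upsert(store, "h", (8, False, "chain-X", "h"), [("=", 0, 8), ("=", 1, False)])
    print(store["h"])  # [8, False, 'chain-X', 'h']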
@register_query(TarantoolDBConnection)
def delete_abci_chain(connection, height: int):
hash_id_primarykey = sha256(dumps(obj={"height": height}).encode()).hexdigest()
connection.run(connection.space("abci_chains").delete(hash_id_primarykey), only_data=False)
@register_query(TarantoolDBConnection)
def get_latest_abci_chain(connection):
_all_chains = connection.run(connection.space("abci_chains").select())
if _all_chains is None or len(_all_chains) == 0:
return None
_chain = sorted(_all_chains, key=itemgetter(0), reverse=True)[0]

View File

@ -9,9 +9,21 @@ from planetmint.backend.tarantool.connection import TarantoolDBConnection
logger = logging.getLogger(__name__)
register_schema = module_dispatch_registrar(backend.schema)
SPACE_NAMES = ("abci_chains", "assets", "blocks", "blocks_tx",
"elections", "meta_data", "pre_commits", "validators",
"transactions", "inputs", "outputs", "keys", "utxos")
SPACE_NAMES = (
"abci_chains",
"assets",
"blocks",
"blocks_tx",
"elections",
"meta_data",
"pre_commits",
"validators",
"transactions",
"inputs",
"outputs",
"keys",
"utxos",
)
SPACE_COMMANDS = {
"abci_chains": "abci_chains = box.schema.space.create('abci_chains', {engine='memtx', is_sync = false})",
@ -26,110 +38,86 @@ SPACE_COMMANDS = {
"inputs": "inputs = box.schema.space.create('inputs')",
"outputs": "outputs = box.schema.space.create('outputs')",
"keys": "keys = box.schema.space.create('keys')",
"utxos": "utxos = box.schema.space.create('utxos', {engine = 'memtx' , is_sync = false})"
"utxos": "utxos = box.schema.space.create('utxos', {engine = 'memtx' , is_sync = false})",
}
INDEX_COMMANDS = {
"abci_chains":
{
"id_search": "abci_chains:create_index('id_search' ,{type='hash', parts={'id'}})",
"height_search": "abci_chains:create_index('height_search' ,{type='tree', unique=false, parts={'height'}})"
},
"assets":
{
"txid_search": "assets:create_index('txid_search', {type='hash', parts={'tx_id'}})",
"assetid_search": "assets:create_index('assetid_search', {type='tree',unique=false, parts={'asset_id', 'tx_id'}})", # noqa: E501
"only_asset_search": "assets:create_index('only_asset_search', {type='tree', unique=false, parts={'asset_id'}})", # noqa: E501
"text_search": "assets:create_index('secondary', {unique=false,parts={1,'string'}})"
},
"blocks":
{
"id_search": "blocks:create_index('id_search' , {type='hash' , parts={'block_id'}})",
"block_search": "blocks:create_index('block_search' , {type='tree', unique = false, parts={'height'}})",
"block_id_search": "blocks:create_index('block_id_search', {type = 'hash', parts ={'block_id'}})"
},
"blocks_tx":
{
"id_search": "blocks_tx:create_index('id_search',{ type = 'hash', parts={'transaction_id'}})",
"block_search": "blocks_tx:create_index('block_search', {type = 'tree',unique=false, parts={'block_id'}})"
},
"elections":
{
"id_search": "elections:create_index('id_search' , {type='hash', parts={'election_id'}})",
"height_search": "elections:create_index('height_search' , {type='tree',unique=false, parts={'height'}})",
"update_search": "elections:create_index('update_search', {type='tree', unique=false, parts={'election_id', 'height'}})" # noqa: E501
},
"meta_data":
{
"id_search": "meta_datas:create_index('id_search', { type='hash' , parts={'transaction_id'}})",
"text_search": "meta_datas:create_index('secondary', {unique=false,parts={2,'string'}})"
},
"pre_commits":
{
"id_search": "pre_commits:create_index('id_search', {type ='hash' , parts={'commit_id'}})",
"height_search": "pre_commits:create_index('height_search', {type ='tree',unique=true, parts={'height'}})"
},
"validators":
{
"id_search": "validators:create_index('id_search' , {type='hash' , parts={'validator_id'}})",
"height_search": "validators:create_index('height_search' , {type='tree', unique=true, parts={'height'}})"
},
"transactions":
{
"id_search": "transactions:create_index('id_search' , {type = 'hash' , parts={'transaction_id'}})",
"transaction_search": "transactions:create_index('transaction_search' , {type = 'tree',unique=false, parts={'operation', 'transaction_id'}})" # noqa: E501
},
"inputs":
{
"delete_search": "inputs:create_index('delete_search' , {type = 'hash', parts={'input_id'}})",
"spent_search": "inputs:create_index('spent_search' , {type = 'tree', unique=false, parts={'fulfills_transaction_id', 'fulfills_output_index'}})", # noqa: E501
"id_search": "inputs:create_index('id_search', {type = 'tree', unique=false, parts = {'transaction_id'}})"
},
"outputs":
{
"unique_search": "outputs:create_index('unique_search' ,{type='hash', parts={'output_id'}})",
"id_search": "outputs:create_index('id_search' ,{type='tree', unique=false, parts={'transaction_id'}})"
},
"keys":
{
"id_search": "keys:create_index('id_search', {type = 'hash', parts={'id'}})",
"keys_search": "keys:create_index('keys_search', {type = 'tree', unique=false, parts={'public_key'}})",
"txid_search": "keys:create_index('txid_search', {type = 'tree', unique=false, parts={'transaction_id'}})",
"output_search": "keys:create_index('output_search', {type = 'tree', unique=false, parts={'output_id'}})"
},
"utxos":
{
"id_search": "utxos:create_index('id_search', {type='hash' , parts={'transaction_id', 'output_index'}})",
"transaction_search": "utxos:create_index('transaction_search', {type='tree', unique=false, parts={'transaction_id'}})", # noqa: E501
"index_Search": "utxos:create_index('index_search', {type='tree', unique=false, parts={'output_index'}})"
}
"abci_chains": {
"id_search": "abci_chains:create_index('id_search' ,{type='hash', parts={'id'}})",
"height_search": "abci_chains:create_index('height_search' ,{type='tree', unique=false, parts={'height'}})",
},
"assets": {
"txid_search": "assets:create_index('txid_search', {type='hash', parts={'tx_id'}})",
"assetid_search": "assets:create_index('assetid_search', {type='tree',unique=false, parts={'asset_id', 'tx_id'}})", # noqa: E501
"only_asset_search": "assets:create_index('only_asset_search', {type='tree', unique=false, parts={'asset_id'}})", # noqa: E501
"text_search": "assets:create_index('secondary', {unique=false,parts={1,'string'}})",
},
"blocks": {
"id_search": "blocks:create_index('id_search' , {type='hash' , parts={'block_id'}})",
"block_search": "blocks:create_index('block_search' , {type='tree', unique = false, parts={'height'}})",
"block_id_search": "blocks:create_index('block_id_search', {type = 'hash', parts ={'block_id'}})",
},
"blocks_tx": {
"id_search": "blocks_tx:create_index('id_search',{ type = 'hash', parts={'transaction_id'}})",
"block_search": "blocks_tx:create_index('block_search', {type = 'tree',unique=false, parts={'block_id'}})",
},
"elections": {
"id_search": "elections:create_index('id_search' , {type='hash', parts={'election_id'}})",
"height_search": "elections:create_index('height_search' , {type='tree',unique=false, parts={'height'}})",
"update_search": "elections:create_index('update_search', {type='tree', unique=false, parts={'election_id', 'height'}})", # noqa: E501
},
"meta_data": {
"id_search": "meta_datas:create_index('id_search', { type='hash' , parts={'transaction_id'}})",
"text_search": "meta_datas:create_index('secondary', {unique=false,parts={2,'string'}})",
},
"pre_commits": {
"id_search": "pre_commits:create_index('id_search', {type ='hash' , parts={'commit_id'}})",
"height_search": "pre_commits:create_index('height_search', {type ='tree',unique=true, parts={'height'}})",
},
"validators": {
"id_search": "validators:create_index('id_search' , {type='hash' , parts={'validator_id'}})",
"height_search": "validators:create_index('height_search' , {type='tree', unique=true, parts={'height'}})",
},
"transactions": {
"id_search": "transactions:create_index('id_search' , {type = 'hash' , parts={'transaction_id'}})",
"transaction_search": "transactions:create_index('transaction_search' , {type = 'tree',unique=false, parts={'operation', 'transaction_id'}})", # noqa: E501
},
"inputs": {
"delete_search": "inputs:create_index('delete_search' , {type = 'hash', parts={'input_id'}})",
"spent_search": "inputs:create_index('spent_search' , {type = 'tree', unique=false, parts={'fulfills_transaction_id', 'fulfills_output_index'}})", # noqa: E501
"id_search": "inputs:create_index('id_search', {type = 'tree', unique=false, parts = {'transaction_id'}})",
},
"outputs": {
"unique_search": "outputs:create_index('unique_search' ,{type='hash', parts={'output_id'}})",
"id_search": "outputs:create_index('id_search' ,{type='tree', unique=false, parts={'transaction_id'}})",
},
"keys": {
"id_search": "keys:create_index('id_search', {type = 'hash', parts={'id'}})",
"keys_search": "keys:create_index('keys_search', {type = 'tree', unique=false, parts={'public_key'}})",
"txid_search": "keys:create_index('txid_search', {type = 'tree', unique=false, parts={'transaction_id'}})",
"output_search": "keys:create_index('output_search', {type = 'tree', unique=false, parts={'output_id'}})",
},
"utxos": {
"id_search": "utxos:create_index('id_search', {type='hash' , parts={'transaction_id', 'output_index'}})",
"transaction_search": "utxos:create_index('transaction_search', {type='tree', unique=false, parts={'transaction_id'}})", # noqa: E501
"index_Search": "utxos:create_index('index_search', {type='tree', unique=false, parts={'output_index'}})",
},
}
SCHEMA_COMMANDS = {
"abci_chains":
"abci_chains:format({{name='height' , type='integer'},{name='is_synched' , type='boolean'},{name='chain_id',type='string'}, {name='id', type='string'}})", # noqa: E501
"assets":
"assets:format({{name='data' , type='string'}, {name='tx_id', type='string'}, {name='asset_id', type='string'}})", # noqa: E501
"blocks":
"blocks:format{{name='app_hash',type='string'},{name='height' , type='integer'},{name='block_id' , type='string'}}", # noqa: E501
"abci_chains": "abci_chains:format({{name='height' , type='integer'},{name='is_synched' , type='boolean'},{name='chain_id',type='string'}, {name='id', type='string'}})", # noqa: E501
"assets": "assets:format({{name='data' , type='string'}, {name='tx_id', type='string'}, {name='asset_id', type='string'}})", # noqa: E501
"blocks": "blocks:format{{name='app_hash',type='string'},{name='height' , type='integer'},{name='block_id' , type='string'}}", # noqa: E501
"blocks_tx": "blocks_tx:format{{name='transaction_id', type = 'string'}, {name = 'block_id', type = 'string'}}",
"elections":
"elections:format({{name='election_id' , type='string'},{name='height' , type='integer'}, {name='is_concluded' , type='boolean'}})", # noqa: E501
"elections": "elections:format({{name='election_id' , type='string'},{name='height' , type='integer'}, {name='is_concluded' , type='boolean'}})", # noqa: E501
"meta_data": "meta_datas:format({{name='transaction_id' , type='string'}, {name='meta_data' , type='string'}})", # noqa: E501
"pre_commits":
"pre_commits:format({{name='commit_id', type='string'}, {name='height',type='integer'}, {name='transactions',type=any}})", # noqa: E501
"validators":
"validators:format({{name='validator_id' , type='string'},{name='height',type='integer'},{name='validators' , type='any'}})", # noqa: E501
"transactions":
"transactions:format({{name='transaction_id' , type='string'}, {name='operation' , type='string'}, {name='version' ,type='string'}, {name='dict_map', type='any'}})", # noqa: E501
"inputs":
"inputs:format({{name='transaction_id' , type='string'}, {name='fulfillment' , type='any'}, {name='owners_before' , type='array'}, {name='fulfills_transaction_id', type = 'string'}, {name='fulfills_output_index', type = 'string'}, {name='input_id', type='string'}, {name='input_index', type='number'}})", # noqa: E501
"outputs":
"outputs:format({{name='transaction_id' , type='string'}, {name='amount' , type='string'}, {name='uri', type='string'}, {name='details_type', type='string'}, {name='details_public_key', type='any'}, {name = 'output_id', type = 'string'}, {name='treshold', type='any'}, {name='subconditions', type='any'}, {name='output_index', type='number'}})", # noqa: E501
"keys":
"keys:format({{name = 'id', type='string'}, {name = 'transaction_id', type = 'string'} ,{name = 'output_id', type = 'string'}, {name = 'public_key', type = 'string'}, {name = 'key_index', type = 'integer'}})", # noqa: E501
"utxos":
"utxos:format({{name='transaction_id' , type='string'}, {name='output_index' , type='integer'}, {name='utxo_dict', type='string'}})" # noqa: E501
"pre_commits": "pre_commits:format({{name='commit_id', type='string'}, {name='height',type='integer'}, {name='transactions',type=any}})", # noqa: E501
"validators": "validators:format({{name='validator_id' , type='string'},{name='height',type='integer'},{name='validators' , type='any'}})", # noqa: E501
"transactions": "transactions:format({{name='transaction_id' , type='string'}, {name='operation' , type='string'}, {name='version' ,type='string'}, {name='dict_map', type='any'}})", # noqa: E501
"inputs": "inputs:format({{name='transaction_id' , type='string'}, {name='fulfillment' , type='any'}, {name='owners_before' , type='array'}, {name='fulfills_transaction_id', type = 'string'}, {name='fulfills_output_index', type = 'string'}, {name='input_id', type='string'}, {name='input_index', type='number'}})", # noqa: E501
"outputs": "outputs:format({{name='transaction_id' , type='string'}, {name='amount' , type='string'}, {name='uri', type='string'}, {name='details_type', type='string'}, {name='details_public_key', type='any'}, {name = 'output_id', type = 'string'}, {name='treshold', type='any'}, {name='subconditions', type='any'}, {name='output_index', type='number'}})", # noqa: E501
"keys": "keys:format({{name = 'id', type='string'}, {name = 'transaction_id', type = 'string'} ,{name = 'output_id', type = 'string'}, {name = 'public_key', type = 'string'}, {name = 'key_index', type = 'integer'}})", # noqa: E501
"utxos": "utxos:format({{name='transaction_id' , type='string'}, {name='output_index' , type='integer'}, {name='utxo_dict', type='string'}})", # noqa: E501
}
SCHEMA_DROP_COMMANDS = {
@ -145,7 +133,7 @@ SCHEMA_DROP_COMMANDS = {
"inputs": "box.space.inputs:drop()",
"outputs": "box.space.outputs:drop()",
"keys": "box.space.keys:drop()",
"utxos": "box.space.utxos:drop()"
"utxos": "box.space.utxos:drop()",
}
@ -159,24 +147,24 @@ def drop_database(connection, not_used=None):
except Exception:
print(f"Unexpected error while trying to drop space '{_space}'")
@register_schema(TarantoolDBConnection)
def create_database(connection, dbname):
"""
For the Tarantool implementation, this function runs
create_tables to initialize spaces, schemas, and indexes.
"""
logger.info("Create database `%s`.", dbname)
create_tables(connection, dbname)
def run_command_with_output(command):
from subprocess import run
host_port = "%s:%s" % (Config().get()["database"]["host"], Config().get()["database"]["port"])
output = run(["tarantoolctl", "connect", host_port],
input=command,
capture_output=True).stderr
output = run(["tarantoolctl", "connect", host_port], input=command, capture_output=True).stderr
output = output.decode()
return output

View File

@ -41,13 +41,16 @@ class TransactionDecompose:
"outputs": [],
"keys": [],
"metadata": None,
"asset": None
"asset": None,
}
def get_map(self, dictionary: dict = None):
return (
_save_keys_order(dictionary=dictionary)
if dictionary is not None
else _save_keys_order(dictionary=self._transaction)
)
def __create_hash(self, n: int):
return token_hex(n)
@ -71,13 +74,17 @@ class TransactionDecompose:
input_index = 0
for _input in self._transaction["inputs"]:
_inputs.append((self._transaction["id"],
_input["fulfillment"],
_input["owners_before"],
_input["fulfills"]["transaction_id"] if _input["fulfills"] is not None else "",
str(_input["fulfills"]["output_index"]) if _input["fulfills"] is not None else "",
self.__create_hash(7),
input_index))
_inputs.append(
(
self._transaction["id"],
_input["fulfillment"],
_input["owners_before"],
_input["fulfills"]["transaction_id"] if _input["fulfills"] is not None else "",
str(_input["fulfills"]["output_index"]) if _input["fulfills"] is not None else "",
self.__create_hash(7),
input_index,
)
)
input_index = input_index + 1
return _inputs
@ -88,27 +95,29 @@ class TransactionDecompose:
for _output in self._transaction["outputs"]:
output_id = self.__create_hash(7)
if _output["condition"]["details"].get("subconditions") is None:
tmp_output = (self._transaction["id"],
_output["amount"],
_output["condition"]["uri"],
_output["condition"]["details"]["type"],
_output["condition"]["details"]["public_key"],
output_id,
None,
None,
output_index
)
tmp_output = (
self._transaction["id"],
_output["amount"],
_output["condition"]["uri"],
_output["condition"]["details"]["type"],
_output["condition"]["details"]["public_key"],
output_id,
None,
None,
output_index,
)
else:
tmp_output = (self._transaction["id"],
_output["amount"],
_output["condition"]["uri"],
_output["condition"]["details"]["type"],
None,
output_id,
_output["condition"]["details"]["threshold"],
_output["condition"]["details"]["subconditions"],
output_index
)
tmp_output = (
self._transaction["id"],
_output["amount"],
_output["condition"]["uri"],
_output["condition"]["details"]["type"],
None,
output_id,
_output["condition"]["details"]["threshold"],
_output["condition"]["details"]["subconditions"],
output_index,
)
_outputs.append(tmp_output)
output_index = output_index + 1
@ -121,10 +130,7 @@ class TransactionDecompose:
def __prepare_transaction(self):
_map = self.get_map()
return (self._transaction["id"],
self._transaction["operation"],
self._transaction["version"],
_map)
return (self._transaction["id"], self._transaction["operation"], self._transaction["version"], _map)
def convert_to_tuple(self):
self._metadata_check()
@ -138,7 +144,6 @@ class TransactionDecompose:
class TransactionCompose:
def __init__(self, db_results):
self.db_results = db_results
self._map = self.db_results["transaction"][3]

View File

@ -1,11 +1,13 @@
import subprocess
def run_cmd(commands: list, config: dict):
ret = subprocess.Popen(
["%s %s:%s < %s" % ("tarantoolctl connect", "localhost", "3303", "planetmint/backend/tarantool/init.lua")],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
universal_newlines=True,
bufsize=0,
shell=True,
)
return ret.wait() == 0  # the original compared the Popen object itself to an int; wait() returns the exit code

View File

@ -19,10 +19,12 @@ def module_dispatch_registrar(module):
return dispatch_registrar.register(obj_type)(func)
except AttributeError as ex:
raise ModuleDispatchRegistrationError(
(
"`{module}` does not contain a single-dispatchable "
"function named `{func}`. The module being registered "
"was not implemented correctly!"
).format(func=func_name, module=module.__name__)
) from ex
return wrapper
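A minimal sketch of the dispatch pattern this registrar wires up: a @singledispatch base function in a backend module, specialized per connection class (the class here is a stand-in, not a real planetmint type):

    from functools import singledispatch

    @singledispatch
    def get_block(connection, block_id):
        raise NotImplementedError

    class FakeConnection:  # stand-in for e.g. TarantoolDBConnection
        pass

    @get_block.register(FakeConnection)
    def _(connection, block_id):
        return {"height": block_id}

    print(get_block(FakeConnection(), 42))  # {'height': 42}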

View File

@ -1,31 +1,28 @@
elections = {
"upsert-validator": {
"help": "Propose a change to the validator set",
"args": {
"public_key": {"help": "Public key of the validator to be added/updated/removed."},
"power": {
"type": int,
"help": "The proposed power for the validator. Setting to 0 will remove the validator.",
},
"node_id": {"help": "The node_id of the validator."},
"--private-key": {
"dest": "sk",
"required": True,
"help": "Path to the private key of the election initiator.",
},
},
},
"chain-migration": {
"help": "Call for a halt to block production to allow for a version change across breaking changes.",
"args": {
"--private-key": {
"dest": "sk",
"required": True,
"help": "Path to the private key of the election initiator.",
}
},
},
}
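A hedged sketch of how such a declarative command table can be expanded into argparse subparsers; the loop is illustrative, not the actual planetmint CLI wiring, and uses an abbreviated copy of the table:

    import argparse

    elections = {
        "chain-migration": {
            "help": "Call for a halt to block production.",
            "args": {"--private-key": {"dest": "sk", "required": True, "help": "Key path."}},
        },
    }
    parser = argparse.ArgumentParser(prog="planetmint election new")
    sub = parser.add_subparsers(dest="election_type")
    for name, spec in elections.items():
        p = sub.add_parser(name, help=spec["help"])
        for flag, kwargs in spec["args"].items():
            p.add_argument(flag, **kwargs)
    print(parser.parse_args(["chain-migration", "--private-key", "/tmp/key"]))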

View File

@ -18,18 +18,15 @@ from planetmint.backend.tarantool.connection import TarantoolDBConnection
from planetmint.core import rollback
from planetmint.utils import load_node_key
from planetmint.transactions.common.transaction_mode_types import BROADCAST_TX_COMMIT
from planetmint.transactions.common.exceptions import DatabaseDoesNotExist, ValidationError
from planetmint.transactions.types.elections.vote import Vote
from planetmint.transactions.types.elections.chain_migration_election import ChainMigrationElection
import planetmint
from planetmint import backend, ValidatorElection, Planetmint
from planetmint.backend import schema
from planetmint.backend import tarantool
from planetmint.commands import utils
from planetmint.commands.utils import configure_planetmint, input_on_stderr
from planetmint.log import setup_logging
from planetmint.tendermint_utils import public_key_from_base64
from planetmint.commands.election_types import elections
@ -53,7 +50,7 @@ def run_show_config(args):
# the system needs to be configured, then display information on how to
# configure the system.
_config = Config().get()
del _config["CONFIGURED"]
print(json.dumps(_config, indent=4, sort_keys=True))
@ -64,47 +61,47 @@ def run_configure(args):
config_file_exists = False
# if the config path is `-` then it's stdout
if config_path != "-":
config_file_exists = os.path.exists(config_path)
if config_file_exists and not args.yes:
want = input_on_stderr(
"Config file `{}` exists, do you want to " "override it? (cannot be undone) [y/N]: ".format(config_path)
)
if want != "y":
return
Config().init_config(args.backend)
conf = Config().get()
# select the correct config defaults based on the backend
print('Generating default configuration for backend {}'
.format(args.backend), file=sys.stderr)
print("Generating default configuration for backend {}".format(args.backend), file=sys.stderr)
database_keys = Config().get_db_key_map(args.backend)
if not args.yes:
for key in ('bind',):
val = conf['server'][key]
conf['server'][key] = input_on_stderr('API Server {}? (default `{}`): '.format(key, val), val)
for key in ("bind",):
val = conf["server"][key]
conf["server"][key] = input_on_stderr("API Server {}? (default `{}`): ".format(key, val), val)
for key in ('scheme', 'host', 'port'):
val = conf['wsserver'][key]
conf['wsserver'][key] = input_on_stderr('WebSocket Server {}? (default `{}`): '.format(key, val), val)
for key in ("scheme", "host", "port"):
val = conf["wsserver"][key]
conf["wsserver"][key] = input_on_stderr("WebSocket Server {}? (default `{}`): ".format(key, val), val)
for key in database_keys:
val = conf['database'][key]
conf['database'][key] = input_on_stderr('Database {}? (default `{}`): '.format(key, val), val)
val = conf["database"][key]
conf["database"][key] = input_on_stderr("Database {}? (default `{}`): ".format(key, val), val)
for key in ('host', 'port'):
val = conf['tendermint'][key]
conf['tendermint'][key] = input_on_stderr('Tendermint {}? (default `{}`)'.format(key, val), val)
for key in ("host", "port"):
val = conf["tendermint"][key]
conf["tendermint"][key] = input_on_stderr("Tendermint {}? (default `{}`)".format(key, val), val)
if config_path != '-':
if config_path != "-":
planetmint.config_utils.write_config(conf, config_path)
else:
print(json.dumps(conf, indent=4, sort_keys=True))
Config().set(conf)
print('Configuration written to {}'.format(config_path), file=sys.stderr)
print('Ready to go!', file=sys.stderr)
print("Configuration written to {}".format(config_path), file=sys.stderr)
print("Ready to go!", file=sys.stderr)
@configure_planetmint
@ -114,21 +111,19 @@ def run_election(args):
b = Planetmint()
# Call the function specified by args.action, as defined above
globals()[f'run_election_{args.action}'](args, b)
globals()[f"run_election_{args.action}"](args, b)
def run_election_new(args, planet):
election_type = args.election_type.replace('-', '_')
globals()[f'run_election_new_{election_type}'](args, planet)
election_type = args.election_type.replace("-", "_")
globals()[f"run_election_new_{election_type}"](args, planet)
def create_new_election(sk, planet, election_class, data):
try:
key = load_node_key(sk)
voters = election_class.recipients(planet)
election = election_class.generate([key.public_key],
voters,
data, None).sign([key.private_key])
election = election_class.generate([key.public_key], voters, data, None).sign([key.private_key])
election.validate(planet)
except ValidationError as e:
logger.error(e)
@ -138,11 +133,11 @@ def create_new_election(sk, planet, election_class, data):
return False
resp = planet.write_transaction(election, BROADCAST_TX_COMMIT)
if resp == (202, ''):
logger.info('[SUCCESS] Submitted proposal with id: {}'.format(election.id))
if resp == (202, ""):
logger.info("[SUCCESS] Submitted proposal with id: {}".format(election.id))
return election.id
else:
logger.error('Failed to commit election proposal')
logger.error("Failed to commit election proposal")
return False
@ -161,10 +156,9 @@ def run_election_new_upsert_validator(args, planet):
"""
new_validator = {
'public_key': {'value': public_key_from_base64(args.public_key),
'type': 'ed25519-base16'},
'power': args.power,
'node_id': args.node_id
"public_key": {"value": public_key_from_base64(args.public_key), "type": "ed25519-base16"},
"power": args.power,
"node_id": args.node_id,
}
return create_new_election(args.sk, planet, ValidatorElection, new_validator)
@ -202,23 +196,21 @@ def run_election_approve(args, planet):
if len(voting_powers) > 0:
voting_power = voting_powers[0]
else:
logger.error('The key you provided does not match any of the eligible voters in this election.')
logger.error("The key you provided does not match any of the eligible voters in this election.")
return False
inputs = [i for i in tx.to_inputs() if key.public_key in i.owners_before]
election_pub_key = ValidatorElection.to_public_key(tx.id)
approval = Vote.generate(inputs,
[([election_pub_key], voting_power)],
tx.id).sign([key.private_key])
approval = Vote.generate(inputs, [([election_pub_key], voting_power)], tx.id).sign([key.private_key])
approval.validate(planet)
resp = planet.write_transaction(approval, BROADCAST_TX_COMMIT)
if resp == (202, ''):
logger.info('[SUCCESS] Your vote has been submitted')
if resp == (202, ""):
logger.info("[SUCCESS] Your vote has been submitted")
return approval.id
else:
logger.error('Failed to commit vote')
logger.error("Failed to commit vote")
return False
@ -234,7 +226,7 @@ def run_election_show(args, planet):
election = planet.get_transaction(args.election_id)
if not election:
logger.error(f'No election found with election_id {args.election_id}')
logger.error(f"No election found with election_id {args.election_id}")
return
response = election.show_election(planet)
@ -260,11 +252,12 @@ def run_drop(args):
"""Drop the database"""
if not args.yes:
response = input_on_stderr('Do you want to drop `{}` database? [y/n]: ')
if response != 'y':
response = input_on_stderr("Do you want to drop `{}` database? [y/n]: ")
if response != "y":
return
from planetmint.backend.connection import connect
conn = connect()
try:
schema.drop_database(conn)
@ -284,115 +277,103 @@ def run_start(args):
setup_logging()
if not args.skip_initialize_database:
logger.info('Initializing database')
logger.info("Initializing database")
_run_init()
logger.info('Planetmint Version %s', planetmint.version.__version__)
logger.info("Planetmint Version %s", planetmint.version.__version__)
run_recover(planetmint.lib.Planetmint())
logger.info('Starting Planetmint main process.')
logger.info("Starting Planetmint main process.")
from planetmint.start import start
start(args)
def run_tendermint_version(args):
"""Show the supported Tendermint version(s)"""
supported_tm_ver = {
'description': 'Planetmint supports the following Tendermint version(s)',
'tendermint': __tm_supported_versions__,
"description": "Planetmint supports the following Tendermint version(s)",
"tendermint": __tm_supported_versions__,
}
print(json.dumps(supported_tm_ver, indent=4, sort_keys=True))
def create_parser():
parser = argparse.ArgumentParser(
description='Control your Planetmint node.',
parents=[utils.base_parser])
parser = argparse.ArgumentParser(description="Control your Planetmint node.", parents=[utils.base_parser])
# all the commands are contained in the subparsers object,
# the command selected by the user will be stored in `args.command`
# that is used by the `main` function to select which other
# function to call.
subparsers = parser.add_subparsers(title='Commands',
dest='command')
subparsers = parser.add_subparsers(title="Commands", dest="command")
# parser for writing a config file
config_parser = subparsers.add_parser('configure',
help='Prepare the config file.')
config_parser = subparsers.add_parser("configure", help="Prepare the config file.")
config_parser.add_argument('backend',
choices=['tarantool_db', 'localmongodb'],
default='tarantool_db',
const='tarantool_db',
nargs='?',
help='The backend to use. It can only be '
'"tarantool_db", currently.')
config_parser.add_argument(
"backend",
choices=["tarantool_db", "localmongodb"],
default="tarantool_db",
const="tarantool_db",
nargs="?",
help="The backend to use. It can only be " '"tarantool_db", currently.',
)
# parser for managing elections
election_parser = subparsers.add_parser('election',
help='Manage elections.')
election_parser = subparsers.add_parser("election", help="Manage elections.")
election_subparser = election_parser.add_subparsers(title='Action',
dest='action')
election_subparser = election_parser.add_subparsers(title="Action", dest="action")
new_election_parser = election_subparser.add_parser('new',
help='Calls a new election.')
new_election_parser = election_subparser.add_parser("new", help="Calls a new election.")
new_election_subparser = new_election_parser.add_subparsers(title='Election_Type',
dest='election_type')
new_election_subparser = new_election_parser.add_subparsers(title="Election_Type", dest="election_type")
# Parser factory for each type of new election, so we get a bunch of commands that look like this:
# election new <some_election_type> <args>...
for name, data in elections.items():
args = data['args']
generic_parser = new_election_subparser.add_parser(name, help=data['help'])
args = data["args"]
generic_parser = new_election_subparser.add_parser(name, help=data["help"])
for arg, kwargs in args.items():
generic_parser.add_argument(arg, **kwargs)
approve_election_parser = election_subparser.add_parser('approve',
help='Approve the election.')
approve_election_parser.add_argument('election_id',
help='The election_id of the election.')
approve_election_parser.add_argument('--private-key',
dest='sk',
required=True,
help='Path to the private key of the election initiator.')
approve_election_parser = election_subparser.add_parser("approve", help="Approve the election.")
approve_election_parser.add_argument("election_id", help="The election_id of the election.")
approve_election_parser.add_argument(
"--private-key", dest="sk", required=True, help="Path to the private key of the election initiator."
)
show_election_parser = election_subparser.add_parser('show',
help='Provides information about an election.')
show_election_parser = election_subparser.add_parser("show", help="Provides information about an election.")
show_election_parser.add_argument('election_id',
help='The transaction id of the election you wish to query.')
show_election_parser.add_argument("election_id", help="The transaction id of the election you wish to query.")
# parsers for showing/exporting config values
subparsers.add_parser('show-config',
help='Show the current configuration')
subparsers.add_parser("show-config", help="Show the current configuration")
# parser for database-level commands
subparsers.add_parser('init',
help='Init the database')
subparsers.add_parser("init", help="Init the database")
subparsers.add_parser('drop',
help='Drop the database')
subparsers.add_parser("drop", help="Drop the database")
# parser for starting Planetmint
start_parser = subparsers.add_parser('start',
help='Start Planetmint')
start_parser = subparsers.add_parser("start", help="Start Planetmint")
start_parser.add_argument('--no-init',
dest='skip_initialize_database',
default=False,
action='store_true',
help='Skip database initialization')
start_parser.add_argument(
"--no-init",
dest="skip_initialize_database",
default=False,
action="store_true",
help="Skip database initialization",
)
subparsers.add_parser('tendermint-version',
help='Show the Tendermint supported versions')
subparsers.add_parser("tendermint-version", help="Show the Tendermint supported versions")
start_parser.add_argument('--experimental-parallel-validation',
dest='experimental_parallel_validation',
default=False,
action='store_true',
help='💀 EXPERIMENTAL: parallelize validation for better throughput 💀')
start_parser.add_argument(
"--experimental-parallel-validation",
dest="experimental_parallel_validation",
default=False,
action="store_true",
help="💀 EXPERIMENTAL: parallelize validation for better throughput 💀",
)
return parser
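The parser built above is typically handed to utils.start() (see the next file) together with the module globals, so each subcommand resolves to its run_<command> function; a hypothetical entry-point sketch:

def main():
    utils.start(create_parser(), sys.argv[1:], globals())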

View File

@ -30,22 +30,22 @@ def configure_planetmint(command):
The command wrapper function.
"""
@functools.wraps(command)
def configure(args):
config_from_cmdline = None
try:
if args.log_level is not None:
config_from_cmdline = {
'log': {
'level_console': args.log_level,
'level_logfile': args.log_level,
"log": {
"level_console": args.log_level,
"level_logfile": args.log_level,
},
'server': {'loglevel': args.log_level},
"server": {"loglevel": args.log_level},
}
except AttributeError:
pass
planetmint.config_utils.autoconfigure(
filename=args.config, config=config_from_cmdline, force=True)
planetmint.config_utils.autoconfigure(filename=args.config, config=config_from_cmdline, force=True)
command(args)
return configure
@ -53,13 +53,13 @@ def configure_planetmint(command):
def _convert(value, default=None, convert=None):
def convert_bool(value):
if value.lower() in ('true', 't', 'yes', 'y'):
if value.lower() in ("true", "t", "yes", "y"):
return True
if value.lower() in ('false', 'f', 'no', 'n'):
if value.lower() in ("false", "f", "no", "n"):
return False
raise ValueError('{} cannot be converted to bool'.format(value))
raise ValueError("{} cannot be converted to bool".format(value))
if value == '':
if value == "":
value = None
if convert is None:
@ -80,7 +80,7 @@ def _convert(value, default=None, convert=None):
# We need this because `input` always prints on stdout, while it should print
# to stderr. It's a very old bug, check it out here:
# - https://bugs.python.org/issue1927
def input_on_stderr(prompt='', default=None, convert=None):
def input_on_stderr(prompt="", default=None, convert=None):
"""Output a string to stderr and wait for input.
Args:
@ -92,7 +92,7 @@ def input_on_stderr(prompt='', default=None, convert=None):
``default`` will be used.
"""
print(prompt, end='', file=sys.stderr)
print(prompt, end="", file=sys.stderr)
value = builtins.input()
return _convert(value, default, convert)
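A usage sketch for input_on_stderr() with a converter; the prompt text and values here are illustrative:

max_tries = input_on_stderr("Max tries? (default `3`): ", default=3, convert=int)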
@ -121,14 +121,13 @@ def start(parser, argv, scope):
# look up in the current scope for a function called 'run_<command>'
# replacing all the dashes '-' with the lowercase character '_'
func = scope.get('run_' + args.command.replace('-', '_'))
func = scope.get("run_" + args.command.replace("-", "_"))
# if no command has been found, raise a `NotImplementedError`
if not func:
raise NotImplementedError('Command `{}` not yet implemented'.
format(args.command))
raise NotImplementedError("Command `{}` not yet implemented".format(args.command))
args.multiprocess = getattr(args, 'multiprocess', False)
args.multiprocess = getattr(args, "multiprocess", False)
if args.multiprocess is False:
args.multiprocess = 1
@ -138,24 +137,28 @@ def start(parser, argv, scope):
return func(args)
base_parser = argparse.ArgumentParser(add_help=False, prog='planetmint')
base_parser = argparse.ArgumentParser(add_help=False, prog="planetmint")
base_parser.add_argument('-c', '--config',
help='Specify the location of the configuration file '
'(use "-" for stdout)')
base_parser.add_argument(
"-c", "--config", help="Specify the location of the configuration file " '(use "-" for stdout)'
)
# NOTE: this flag should not have any default value because that will override
# the environment variables provided to configure the logger.
base_parser.add_argument('-l', '--log-level',
type=str.upper, # convert to uppercase for comparison to choices
choices=['DEBUG', 'BENCHMARK', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'],
help='Log level')
base_parser.add_argument(
"-l",
"--log-level",
type=str.upper, # convert to uppercase for comparison to choices
choices=["DEBUG", "BENCHMARK", "INFO", "WARNING", "ERROR", "CRITICAL"],
help="Log level",
)
base_parser.add_argument('-y', '--yes', '--yes-please',
action='store_true',
help='Assume "yes" as answer to all prompts and run '
'non-interactively')
base_parser.add_argument(
"-y",
"--yes",
"--yes-please",
action="store_true",
help='Assume "yes" as answer to all prompts and run ' "non-interactively",
)
base_parser.add_argument('-v', '--version',
action='version',
version='%(prog)s {}'.format(__version__))
base_parser.add_argument("-v", "--version", action="version", version="%(prog)s {}".format(__version__))

View File

@ -1,6 +1,7 @@
import copy
import logging
import os
# from planetmint.log import DEFAULT_LOGGING_CONFIG as log_config
from planetmint.version import __version__ # noqa
@ -15,7 +16,6 @@ class Singleton(type):
class Config(metaclass=Singleton):
def __init__(self):
# from functools import reduce
# PORT_NUMBER = reduce(lambda x, y: x * y, map(ord, 'Planetmint')) % 2**16
@ -26,27 +26,27 @@ class Config(metaclass=Singleton):
# _base_database_localmongodb.keys() because dicts are unordered.
# I tried to configure
self.log_config = DEFAULT_LOGGING_CONFIG
db = 'tarantool_db'
db = "tarantool_db"
self.__private_database_keys_map = { # TODO Check if it is working after removing 'name' field
'tarantool_db': ('host', 'port'),
'localmongodb': ('host', 'port', 'name')
"tarantool_db": ("host", "port"),
"localmongodb": ("host", "port", "name"),
}
self.__private_database_localmongodb = {
'backend': 'localmongodb',
'host': 'localhost',
'port': 27017,
'name': 'bigchain',
'replicaset': None,
'login': None,
'password': None,
'connection_timeout': 5000,
'max_tries': 3,
'ssl': False,
'ca_cert': None,
'certfile': None,
'keyfile': None,
'keyfile_passphrase': None,
'crlfile': None
"backend": "localmongodb",
"host": "localhost",
"port": 27017,
"name": "bigchain",
"replicaset": None,
"login": None,
"password": None,
"connection_timeout": 5000,
"max_tries": 3,
"ssl": False,
"ca_cert": None,
"certfile": None,
"keyfile": None,
"keyfile_passphrase": None,
"crlfile": None,
}
self.__private_init_config = {
"absolute_path": os.path.dirname(os.path.abspath(__file__)) + "/backend/tarantool/init.lua"
@ -56,71 +56,68 @@ class Config(metaclass=Singleton):
"absolute_path": os.path.dirname(os.path.abspath(__file__)) + "/backend/tarantool/drop.lua"
}
self.__private_database_tarantool = {
'backend': 'tarantool_db',
'connection_timeout': 5000,
'max_tries': 3,
'name': 'universe',
"backend": "tarantool_db",
"connection_timeout": 5000,
"max_tries": 3,
"name": "universe",
"reconnect_delay": 0.5,
'host': 'localhost',
'port': 3303,
"host": "localhost",
"port": 3303,
"connect_now": True,
"encoding": "utf-8",
"login": "guest",
'password': "",
"password": "",
"service": "tarantoolctl connect",
"init_config": self.__private_init_config,
"drop_config": self.__private_drop_config,
}
self.__private_database_map = {
'tarantool_db': self.__private_database_tarantool,
'localmongodb': self.__private_database_localmongodb
"tarantool_db": self.__private_database_tarantool,
"localmongodb": self.__private_database_localmongodb,
}
self.__private_config = {
'server': {
"server": {
# Note: this section supports all the Gunicorn settings:
# - http://docs.gunicorn.org/en/stable/settings.html
'bind': 'localhost:9984',
'loglevel': logging.getLevelName(
self.log_config['handlers']['console']['level']).lower(),
'workers': None, # if None, the value will be cpu_count * 2 + 1
"bind": "localhost:9984",
"loglevel": logging.getLevelName(self.log_config["handlers"]["console"]["level"]).lower(),
"workers": None, # if None, the value will be cpu_count * 2 + 1
},
'wsserver': {
'scheme': 'ws',
'host': 'localhost',
'port': 9985,
'advertised_scheme': 'ws',
'advertised_host': 'localhost',
'advertised_port': 9985,
"wsserver": {
"scheme": "ws",
"host": "localhost",
"port": 9985,
"advertised_scheme": "ws",
"advertised_host": "localhost",
"advertised_port": 9985,
},
'tendermint': {
'host': 'localhost',
'port': 26657,
'version': 'v0.31.5', # look for __tm_supported_versions__
"tendermint": {
"host": "localhost",
"port": 26657,
"version": "v0.31.5", # look for __tm_supported_versions__
},
'database': self.__private_database_map,
'log': {
'file': self.log_config['handlers']['file']['filename'],
'error_file': self.log_config['handlers']['errors']['filename'],
'level_console': logging.getLevelName(
self.log_config['handlers']['console']['level']).lower(),
'level_logfile': logging.getLevelName(
self.log_config['handlers']['file']['level']).lower(),
'datefmt_console': self.log_config['formatters']['console']['datefmt'],
'datefmt_logfile': self.log_config['formatters']['file']['datefmt'],
'fmt_console': self.log_config['formatters']['console']['format'],
'fmt_logfile': self.log_config['formatters']['file']['format'],
'granular_levels': {},
"database": self.__private_database_map,
"log": {
"file": self.log_config["handlers"]["file"]["filename"],
"error_file": self.log_config["handlers"]["errors"]["filename"],
"level_console": logging.getLevelName(self.log_config["handlers"]["console"]["level"]).lower(),
"level_logfile": logging.getLevelName(self.log_config["handlers"]["file"]["level"]).lower(),
"datefmt_console": self.log_config["formatters"]["console"]["datefmt"],
"datefmt_logfile": self.log_config["formatters"]["file"]["datefmt"],
"fmt_console": self.log_config["formatters"]["console"]["format"],
"fmt_logfile": self.log_config["formatters"]["file"]["format"],
"granular_levels": {},
},
}
self._private_real_config = copy.deepcopy(self.__private_config)
# select the correct config defaults based on the backend
self._private_real_config['database'] = self.__private_database_map[db]
self._private_real_config["database"] = self.__private_database_map[db]
def init_config(self, db):
self._private_real_config = copy.deepcopy(self.__private_config)
# select the correct config defaults based on the backend
self._private_real_config['database'] = self.__private_database_map[db]
self._private_real_config["database"] = self.__private_database_map[db]
return self._private_real_config
def get(self):
@ -135,52 +132,55 @@ class Config(metaclass=Singleton):
def get_db_map(self, db):
return self.__private_database_map[db]
DEFAULT_LOG_DIR = os.getcwd()
DEFAULT_LOGGING_CONFIG = {
'version': 1,
'disable_existing_loggers': False,
'formatters': {
'console': {
'class': 'logging.Formatter',
'format': ('[%(asctime)s] [%(levelname)s] (%(name)s) '
'%(message)s (%(processName)-10s - pid: %(process)d)'),
'datefmt': '%Y-%m-%d %H:%M:%S',
"version": 1,
"disable_existing_loggers": False,
"formatters": {
"console": {
"class": "logging.Formatter",
"format": (
"[%(asctime)s] [%(levelname)s] (%(name)s) " "%(message)s (%(processName)-10s - pid: %(process)d)"
),
"datefmt": "%Y-%m-%d %H:%M:%S",
},
"file": {
"class": "logging.Formatter",
"format": (
"[%(asctime)s] [%(levelname)s] (%(name)s) " "%(message)s (%(processName)-10s - pid: %(process)d)"
),
"datefmt": "%Y-%m-%d %H:%M:%S",
},
'file': {
'class': 'logging.Formatter',
'format': ('[%(asctime)s] [%(levelname)s] (%(name)s) '
'%(message)s (%(processName)-10s - pid: %(process)d)'),
'datefmt': '%Y-%m-%d %H:%M:%S',
}
},
'handlers': {
'console': {
'class': 'logging.StreamHandler',
'formatter': 'console',
'level': logging.INFO,
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "console",
"level": logging.INFO,
},
'file': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': os.path.join(DEFAULT_LOG_DIR, 'planetmint.log'),
'mode': 'w',
'maxBytes': 209715200,
'backupCount': 5,
'formatter': 'file',
'level': logging.INFO,
"file": {
"class": "logging.handlers.RotatingFileHandler",
"filename": os.path.join(DEFAULT_LOG_DIR, "planetmint.log"),
"mode": "w",
"maxBytes": 209715200,
"backupCount": 5,
"formatter": "file",
"level": logging.INFO,
},
"errors": {
"class": "logging.handlers.RotatingFileHandler",
"filename": os.path.join(DEFAULT_LOG_DIR, "planetmint-errors.log"),
"mode": "w",
"maxBytes": 209715200,
"backupCount": 5,
"formatter": "file",
"level": logging.ERROR,
},
'errors': {
'class': 'logging.handlers.RotatingFileHandler',
'filename': os.path.join(DEFAULT_LOG_DIR, 'planetmint-errors.log'),
'mode': 'w',
'maxBytes': 209715200,
'backupCount': 5,
'formatter': 'file',
'level': logging.ERROR,
}
},
'loggers': {},
'root': {
'level': logging.DEBUG,
'handlers': ['console', 'file', 'errors'],
"loggers": {},
"root": {
"level": logging.DEBUG,
"handlers": ["console", "file", "errors"],
},
}
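Because Config uses the Singleton metaclass, every caller shares one configuration object; a read sketch against the defaults above:

conf = Config().get()
conf["server"]["bind"]    # "localhost:9984"
conf["database"]["port"]  # 3303 with the tarantool_db defaults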

View File

@ -29,16 +29,16 @@ from planetmint.transactions.common import exceptions
from planetmint.validation import BaseValidationRules
# TODO: move this to a proper configuration file for logging
logging.getLogger('requests').setLevel(logging.WARNING)
logging.getLogger("requests").setLevel(logging.WARNING)
logger = logging.getLogger(__name__)
CONFIG_DEFAULT_PATH = os.environ.setdefault(
'PLANETMINT_CONFIG_PATH',
os.path.join(os.path.expanduser('~'), '.planetmint'),
"PLANETMINT_CONFIG_PATH",
os.path.join(os.path.expanduser("~"), ".planetmint"),
)
CONFIG_PREFIX = 'PLANETMINT'
CONFIG_SEP = '_'
CONFIG_PREFIX = "PLANETMINT"
CONFIG_SEP = "_"
def map_leafs(func, mapping):
@ -96,21 +96,21 @@ def file_config(filename=None):
dict: The config values in the specified config file (or the
file at CONFIG_DEFAULT_PATH, if filename == None)
"""
logger.debug('On entry into file_config(), filename = {}'.format(filename))
logger.debug("On entry into file_config(), filename = {}".format(filename))
if filename is None:
filename = CONFIG_DEFAULT_PATH
logger.debug('file_config() will try to open `{}`'.format(filename))
logger.debug("file_config() will try to open `{}`".format(filename))
with open(filename) as f:
try:
config = json.load(f)
except ValueError as err:
raise exceptions.ConfigurationError(
'Failed to parse the JSON configuration from `{}`, {}'.format(filename, err)
"Failed to parse the JSON configuration from `{}`, {}".format(filename, err)
)
logger.info('Configuration loaded from `{}`'.format(filename))
logger.info("Configuration loaded from `{}`".format(filename))
return config
@ -136,7 +136,7 @@ def env_config(config):
return map_leafs(load_from_env, config)
def update_types(config, reference, list_sep=':'):
def update_types(config, reference, list_sep=":"):
"""Return a new configuration where all the values types
are aligned with the ones in the default configuration
"""
@ -192,7 +192,7 @@ def set_config(config):
_config = Config().get()
# Update the default config with whatever is in the passed config
update(_config, update_types(config, _config))
_config['CONFIGURED'] = True
_config["CONFIGURED"] = True
Config().set(_config)
@ -208,7 +208,7 @@ def update_config(config):
_config = Config().get()
# Update the default config with whatever is in the passed config
update(_config, update_types(config, _config))
_config['CONFIGURED'] = True
_config["CONFIGURED"] = True
Config().set(_config)
@ -223,12 +223,12 @@ def write_config(config, filename=None):
if not filename:
filename = CONFIG_DEFAULT_PATH
with open(filename, 'w') as f:
with open(filename, "w") as f:
json.dump(config, f, indent=4)
def is_configured():
return bool(Config().get().get('CONFIGURED'))
return bool(Config().get().get("CONFIGURED"))
def autoconfigure(filename=None, config=None, force=False):
@ -236,7 +236,7 @@ def autoconfigure(filename=None, config=None, force=False):
been initialized.
"""
if not force and is_configured():
logger.debug('System already configured, skipping autoconfiguration')
logger.debug("System already configured, skipping autoconfiguration")
return
# start with the current configuration
@ -249,7 +249,7 @@ def autoconfigure(filename=None, config=None, force=False):
if filename:
raise
else:
logger.info('Cannot find config file `%s`.' % e.filename)
logger.info("Cannot find config file `%s`." % e.filename)
# override configuration with env variables
newconfig = env_config(newconfig)
@ -277,20 +277,20 @@ def load_validation_plugin(name=None):
# We should probably support Requirements specs in the config, e.g.
# validation_plugin: 'my-plugin-package==0.0.1;default'
plugin = None
for entry_point in iter_entry_points('planetmint.validation', name):
for entry_point in iter_entry_points("planetmint.validation", name):
plugin = entry_point.load()
# No matching entry_point found
if not plugin:
raise ResolutionError(
'No plugin found in group `planetmint.validation` with name `{}`'.
format(name))
raise ResolutionError("No plugin found in group `planetmint.validation` with name `{}`".format(name))
# Is this strictness desirable?
# It will probably reduce developer headaches in the wild.
if not issubclass(plugin, (BaseValidationRules,)):
raise TypeError('object of type "{}" does not implement `planetmint.'
'validation.BaseValidationRules`'.format(type(plugin)))
raise TypeError(
'object of type "{}" does not implement `planetmint.'
"validation.BaseValidationRules`".format(type(plugin))
)
return plugin
@ -302,7 +302,7 @@ def load_events_plugins(names=None):
return plugins
for name in names:
for entry_point in iter_entry_points('planetmint.events', name):
for entry_point in iter_entry_points("planetmint.events", name):
plugins.append((name, entry_point.load()))
return plugins
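env_config() overlays environment variables onto the configuration using CONFIG_PREFIX and CONFIG_SEP; a sketch assuming the conventional PLANETMINT_<SECTION>_<KEY> naming (the exact key-join logic lives in load_from_env, which is not shown here):

os.environ["PLANETMINT_DATABASE_HOST"] = "db.example.com"  # assumed variable name
autoconfigure(force=True)  # the env value now overrides the file/default host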

View File

@ -18,12 +18,11 @@ from tendermint.abci.types_pb2 import (
ResponseDeliverTx,
ResponseBeginBlock,
ResponseEndBlock,
ResponseCommit
ResponseCommit,
)
from planetmint import Planetmint
from planetmint.transactions.types.elections.election import Election
from planetmint.tendermint_utils import (decode_transaction,
calculate_hash)
from planetmint.tendermint_utils import decode_transaction, calculate_hash
from planetmint.lib import Block
import planetmint.upsert_validator.validator_utils as vutils
from planetmint.events import EventTypes, Event
@ -42,40 +41,41 @@ class App(BaseApplication):
def __init__(self, planetmint_node=None, events_queue=None):
# super().__init__(abci)
logger.debug('Checking values of types')
logger.debug("Checking values of types")
logger.debug(dir(types_pb2))
self.events_queue = events_queue
self.planetmint_node = planetmint_node or Planetmint()
self.block_txn_ids = []
self.block_txn_hash = ''
self.block_txn_hash = ""
self.block_transactions = []
self.validators = None
self.new_height = None
self.chain = self.planetmint_node.get_latest_abci_chain()
def log_abci_migration_error(self, chain_id, validators):
logger.error('An ABCI chain migration is in process. '
'Download the new ABCI client and configure it with '
f'chain_id={chain_id} and validators={validators}.')
logger.error(
"An ABCI chain migration is in process. "
"Download theself.planetmint_node.get_latest_abci_chain new ABCI client and configure it with "
f"chain_id={chain_id} and validators={validators}."
)
def abort_if_abci_chain_is_not_synced(self):
if self.chain is None or self.chain['is_synced']:
if self.chain is None or self.chain["is_synced"]:
return
validators = self.planetmint_node.get_validators()
self.log_abci_migration_error(self.chain['chain_id'], validators)
self.log_abci_migration_error(self.chain["chain_id"], validators)
sys.exit(1)
def init_chain(self, genesis):
"""Initialize chain upon genesis or a migration"""
app_hash = ''
app_hash = ""
height = 0
known_chain = self.planetmint_node.get_latest_abci_chain()
if known_chain is not None:
chain_id = known_chain['chain_id']
chain_id = known_chain["chain_id"]
if known_chain['is_synced']:
msg = (f'Got invalid InitChain ABCI request ({genesis}) - '
f'the chain {chain_id} is already synced.')
if known_chain["is_synced"]:
msg = f"Got invalid InitChain ABCI request ({genesis}) - " f"the chain {chain_id} is already synced."
logger.error(msg)
sys.exit(1)
if chain_id != genesis.chain_id:
@ -84,22 +84,19 @@ class App(BaseApplication):
sys.exit(1)
# set migration values for app hash and height
block = self.planetmint_node.get_latest_block()
app_hash = '' if block is None else block['app_hash']
height = 0 if block is None else block['height'] + 1
app_hash = "" if block is None else block["app_hash"]
height = 0 if block is None else block["height"] + 1
known_validators = self.planetmint_node.get_validators()
validator_set = [vutils.decode_validator(v)
for v in genesis.validators]
validator_set = [vutils.decode_validator(v) for v in genesis.validators]
if known_validators and known_validators != validator_set:
self.log_abci_migration_error(known_chain['chain_id'],
known_validators)
self.log_abci_migration_error(known_chain["chain_id"], known_validators)
sys.exit(1)
block = Block(app_hash=app_hash, height=height, transactions=[])
self.planetmint_node.store_block(block._asdict())
self.planetmint_node.store_validator_set(height + 1, validator_set)
abci_chain_height = 0 if known_chain is None else known_chain['height']
abci_chain_height = 0 if known_chain is None else known_chain["height"]
self.planetmint_node.store_abci_chain(abci_chain_height, genesis.chain_id, True)
self.chain = {'height': abci_chain_height, 'is_synced': True,
'chain_id': genesis.chain_id}
self.chain = {"height": abci_chain_height, "is_synced": True, "chain_id": genesis.chain_id}
return ResponseInitChain()
def info(self, request):
@ -118,12 +115,12 @@ class App(BaseApplication):
r = ResponseInfo()
block = self.planetmint_node.get_latest_block()
if block:
chain_shift = 0 if self.chain is None else self.chain['height']
r.last_block_height = block['height'] - chain_shift
r.last_block_app_hash = block['app_hash'].encode('utf-8')
chain_shift = 0 if self.chain is None else self.chain["height"]
r.last_block_height = block["height"] - chain_shift
r.last_block_app_hash = block["app_hash"].encode("utf-8")
else:
r.last_block_height = 0
r.last_block_app_hash = b''
r.last_block_app_hash = b""
return r
def check_tx(self, raw_transaction):
@ -136,13 +133,13 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
logger.debug('check_tx: %s', raw_transaction)
logger.debug("check_tx: %s", raw_transaction)
transaction = decode_transaction(raw_transaction)
if self.planetmint_node.is_valid_transaction(transaction):
logger.debug('check_tx: VALID')
logger.debug("check_tx: VALID")
return ResponseCheckTx(code=OkCode)
else:
logger.debug('check_tx: INVALID')
logger.debug("check_tx: INVALID")
return ResponseCheckTx(code=CodeTypeError)
def begin_block(self, req_begin_block):
@ -153,10 +150,9 @@ class App(BaseApplication):
"""
self.abort_if_abci_chain_is_not_synced()
chain_shift = 0 if self.chain is None else self.chain['height']
chain_shift = 0 if self.chain is None else self.chain["height"]
# req_begin_block.header.num_txs not found, so removing it.
logger.debug('BEGIN BLOCK, height:%s',
req_begin_block.header.height + chain_shift)
logger.debug("BEGIN BLOCK, height:%s", req_begin_block.header.height + chain_shift)
self.block_txn_ids = []
self.block_transactions = []
@ -171,15 +167,16 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
logger.debug('deliver_tx: %s', raw_transaction)
logger.debug("deliver_tx: %s", raw_transaction)
transaction = self.planetmint_node.is_valid_transaction(
decode_transaction(raw_transaction), self.block_transactions)
decode_transaction(raw_transaction), self.block_transactions
)
if not transaction:
logger.debug('deliver_tx: INVALID')
logger.debug("deliver_tx: INVALID")
return ResponseDeliverTx(code=CodeTypeError)
else:
logger.debug('storing tx')
logger.debug("storing tx")
self.block_txn_ids.append(transaction.id)
self.block_transactions.append(transaction)
return ResponseDeliverTx(code=OkCode)
@ -194,28 +191,25 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
chain_shift = 0 if self.chain is None else self.chain['height']
chain_shift = 0 if self.chain is None else self.chain["height"]
height = request_end_block.height + chain_shift
self.new_height = height
# store pre-commit state to recover in case there is a crash during
# `end_block` or `commit`
logger.debug(f'Updating pre-commit state: {self.new_height}')
pre_commit_state = dict(height=self.new_height,
transactions=self.block_txn_ids)
logger.debug(f"Updating pre-commit state: {self.new_height}")
pre_commit_state = dict(height=self.new_height, transactions=self.block_txn_ids)
self.planetmint_node.store_pre_commit_state(pre_commit_state)
block_txn_hash = calculate_hash(self.block_txn_ids)
block = self.planetmint_node.get_latest_block()
if self.block_txn_ids:
self.block_txn_hash = calculate_hash([block['app_hash'], block_txn_hash])
self.block_txn_hash = calculate_hash([block["app_hash"], block_txn_hash])
else:
self.block_txn_hash = block['app_hash']
self.block_txn_hash = block["app_hash"]
validator_update = Election.process_block(self.planetmint_node,
self.new_height,
self.block_transactions)
validator_update = Election.process_block(self.planetmint_node, self.new_height, self.block_transactions)
return ResponseEndBlock(validator_updates=validator_update)
@ -224,29 +218,29 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
data = self.block_txn_hash.encode('utf-8')
data = self.block_txn_hash.encode("utf-8")
# register a new block only when new transactions are received
if self.block_txn_ids:
self.planetmint_node.store_bulk_transactions(self.block_transactions)
block = Block(app_hash=self.block_txn_hash,
height=self.new_height,
transactions=self.block_txn_ids)
block = Block(app_hash=self.block_txn_hash, height=self.new_height, transactions=self.block_txn_ids)
# NOTE: storing the block should be the last operation during commit
# this affects crash recovery. Refer to BEP#8 for details
self.planetmint_node.store_block(block._asdict())
logger.debug('Committing new block with hash: apphash=%s, '
'height=%s, txn ids=%s', data, self.new_height,
self.block_txn_ids)
logger.debug(
"Commit-ing new block with hash: apphash=%s ," "height=%s, txn ids=%s",
data,
self.new_height,
self.block_txn_ids,
)
if self.events_queue:
event = Event(EventTypes.BLOCK_VALID, {
'height': self.new_height,
'hash': self.block_txn_hash,
'transactions': self.block_transactions
})
event = Event(
EventTypes.BLOCK_VALID,
{"height": self.new_height, "hash": self.block_txn_hash, "transactions": self.block_transactions},
)
self.events_queue.put(event)
return ResponseCommit(data=data)
@ -266,10 +260,10 @@ def rollback(b):
latest_block = b.get_latest_block()
if latest_block is None:
logger.error('Found precommit state but no blocks!')
logger.error("Found precommit state but no blocks!")
sys.exit(1)
# NOTE: the pre-commit state is always at most 1 block ahead of the committed state
if latest_block['height'] < pre_commit['height']:
Election.rollback(b, pre_commit['height'], pre_commit['transactions'])
b.delete_transactions(pre_commit['transactions'])
if latest_block["height"] < pre_commit["height"]:
Election.rollback(b, pre_commit["height"], pre_commit["transactions"])
b.delete_transactions(pre_commit["transactions"])

View File

@ -8,7 +8,7 @@ from collections import defaultdict
from multiprocessing import Queue
POISON_PILL = 'POISON_PILL'
POISON_PILL = "POISON_PILL"
class EventTypes:
@ -73,7 +73,7 @@ class Exchange:
try:
self.started_queue.get(timeout=1)
raise RuntimeError('Cannot create a new subscriber queue while Exchange is running.')
raise RuntimeError("Cannot create a new subscriber queue while Exchange is running.")
except Empty:
pass
@ -99,7 +99,7 @@ class Exchange:
def run(self):
"""Start the exchange"""
self.started_queue.put('STARTED')
self.started_queue.put("STARTED")
while True:
event = self.publisher_queue.get()
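The guard above presumably lives in a get_subscriber_queue() helper; a hypothetical wiring of the Exchange, assuming that helper and the EventTypes shown earlier:

exchange = Exchange()
queue = exchange.get_subscriber_queue(EventTypes.BLOCK_VALID)  # must happen before run()
# e.g. multiprocessing.Process(target=exchange.run).start(), then consume from `queue`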

View File

@ -8,7 +8,7 @@ from planetmint.backend import query
from planetmint.transactions.common.transaction import TransactionLink
class FastQuery():
class FastQuery:
"""Database queries that join on block results from a single node."""
def __init__(self, connection):
@ -17,11 +17,12 @@ class FastQuery():
def get_outputs_by_public_key(self, public_key):
"""Get outputs for a public key"""
txs = list(query.get_owned_ids(self.connection, public_key))
return [TransactionLink(tx['id'], index)
for tx in txs
for index, output in enumerate(tx['outputs'])
if condition_details_has_owner(output['condition']['details'],
public_key)]
return [
TransactionLink(tx["id"], index)
for tx in txs
for index, output in enumerate(tx["outputs"])
if condition_details_has_owner(output["condition"]["details"], public_key)
]
def filter_spent_outputs(self, outputs):
"""Remove outputs that have been spent
@ -31,9 +32,7 @@ class FastQuery():
"""
links = [o.to_dict() for o in outputs]
txs = list(query.get_spending_transactions(self.connection, links))
spends = {TransactionLink.from_dict(input_['fulfills'])
for tx in txs
for input_ in tx['inputs']}
spends = {TransactionLink.from_dict(input_["fulfills"]) for tx in txs for input_ in tx["inputs"]}
return [ff for ff in outputs if ff not in spends]
def filter_unspent_outputs(self, outputs):
@ -44,7 +43,5 @@ class FastQuery():
"""
links = [o.to_dict() for o in outputs]
txs = list(query.get_spending_transactions(self.connection, links))
spends = {TransactionLink.from_dict(input_['fulfills'])
for tx in txs
for input_ in tx['inputs']}
spends = {TransactionLink.from_dict(input_["fulfills"]) for tx in txs for input_ in tx["inputs"]}
return [ff for ff in outputs if ff in spends]
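A usage sketch combining the three helpers above (connection and public_key are assumed to exist):

fq = FastQuery(connection)
outputs = fq.get_outputs_by_public_key(public_key)
unspent = fq.filter_spent_outputs(outputs)   # keeps outputs with no spending transaction
spent = fq.filter_unspent_outputs(outputs)   # keeps only outputs that have been spent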

View File

@ -25,10 +25,12 @@ import planetmint
from planetmint.config import Config
from planetmint import backend, config_utils, fastquery
from planetmint.models import Transaction
from planetmint.transactions.common.exceptions import (
SchemaValidationError, ValidationError, DoubleSpend)
from planetmint.transactions.common.exceptions import SchemaValidationError, ValidationError, DoubleSpend
from planetmint.transactions.common.transaction_mode_types import (
BROADCAST_TX_COMMIT, BROADCAST_TX_ASYNC, BROADCAST_TX_SYNC)
BROADCAST_TX_COMMIT,
BROADCAST_TX_ASYNC,
BROADCAST_TX_SYNC,
)
from planetmint.tendermint_utils import encode_transaction, merkleroot
from planetmint import exceptions as core_exceptions
from planetmint.validation import BaseValidationRules
@ -60,14 +62,12 @@ class Planetmint(object):
"""
config_utils.autoconfigure()
self.mode_commit = BROADCAST_TX_COMMIT
self.mode_list = (BROADCAST_TX_ASYNC,
BROADCAST_TX_SYNC,
self.mode_commit)
self.tendermint_host = Config().get()['tendermint']['host']
self.tendermint_port = Config().get()['tendermint']['port']
self.endpoint = 'http://{}:{}/'.format(self.tendermint_host, self.tendermint_port)
self.mode_list = (BROADCAST_TX_ASYNC, BROADCAST_TX_SYNC, self.mode_commit)
self.tendermint_host = Config().get()["tendermint"]["host"]
self.tendermint_port = Config().get()["tendermint"]["port"]
self.endpoint = "http://{}:{}/".format(self.tendermint_host, self.tendermint_port)
validationPlugin = Config().get().get('validation_plugin')
validationPlugin = Config().get().get("validation_plugin")
if validationPlugin:
self.validation = config_utils.load_validation_plugin(validationPlugin)
@ -78,16 +78,10 @@ class Planetmint(object):
def post_transaction(self, transaction, mode):
"""Submit a valid transaction to the mempool."""
if not mode or mode not in self.mode_list:
raise ValidationError('Mode must be one of the following {}.'
.format(', '.join(self.mode_list)))
raise ValidationError("Mode must be one of the following {}.".format(", ".join(self.mode_list)))
tx_dict = transaction.tx_dict if transaction.tx_dict else transaction.to_dict()
payload = {
'method': mode,
'jsonrpc': '2.0',
'params': [encode_transaction(tx_dict)],
'id': str(uuid4())
}
payload = {"method": mode, "jsonrpc": "2.0", "params": [encode_transaction(tx_dict)], "id": str(uuid4())}
# TODO: handle connection errors!
return requests.post(self.endpoint, json=payload)
@ -100,29 +94,29 @@ class Planetmint(object):
def _process_post_response(self, response, mode):
logger.debug(response)
error = response.get('error')
error = response.get("error")
if error:
status_code = 500
message = error.get('message', 'Internal Error')
data = error.get('data', '')
message = error.get("message", "Internal Error")
data = error.get("data", "")
if 'Tx already exists in cache' in data:
if "Tx already exists in cache" in data:
status_code = 400
return (status_code, message + ' - ' + data)
return (status_code, message + " - " + data)
result = response['result']
result = response["result"]
if mode == self.mode_commit:
check_tx_code = result.get('check_tx', {}).get('code', 0)
deliver_tx_code = result.get('deliver_tx', {}).get('code', 0)
check_tx_code = result.get("check_tx", {}).get("code", 0)
deliver_tx_code = result.get("deliver_tx", {}).get("code", 0)
error_code = check_tx_code or deliver_tx_code
else:
error_code = result.get('code', 0)
error_code = result.get("code", 0)
if error_code:
return (500, 'Transaction validation failed')
return (500, "Transaction validation failed")
return (202, '')
return (202, "")
def store_bulk_transactions(self, transactions):
txns = []
@ -132,18 +126,20 @@ class Planetmint(object):
for t in transactions:
transaction = t.tx_dict if t.tx_dict else rapidjson.loads(rapidjson.dumps(t.to_dict()))
asset = transaction.pop('asset')
metadata = transaction.pop('metadata')
asset = transaction.pop("asset")
metadata = transaction.pop("metadata")
asset = backend.convert.prepare_asset(self.connection,
transaction_type=transaction["operation"],
transaction_id=transaction["id"],
filter_operation=t.CREATE,
asset=asset)
asset = backend.convert.prepare_asset(
self.connection,
transaction_type=transaction["operation"],
transaction_id=transaction["id"],
filter_operation=t.CREATE,
asset=asset,
)
metadata = backend.convert.prepare_metadata(self.connection,
transaction_id=transaction["id"],
metadata=metadata)
metadata = backend.convert.prepare_metadata(
self.connection, transaction_id=transaction["id"], metadata=metadata
)
txn_metadatas.append(metadata)
assets.append(asset)
@ -167,14 +163,10 @@ class Planetmint(object):
transaction incoming into the system for which the UTXO
set needs to be updated.
"""
spent_outputs = [
spent_output for spent_output in transaction.spent_outputs
]
spent_outputs = [spent_output for spent_output in transaction.spent_outputs]
if spent_outputs:
self.delete_unspent_outputs(*spent_outputs)
self.store_unspent_outputs(
*[utxo._asdict() for utxo in transaction.unspent_outputs]
)
self.store_unspent_outputs(*[utxo._asdict() for utxo in transaction.unspent_outputs])
def store_unspent_outputs(self, *unspent_outputs):
"""Store the given ``unspent_outputs`` (utxos).
@ -184,8 +176,7 @@ class Planetmint(object):
length tuple or list of unspent outputs.
"""
if unspent_outputs:
return backend.query.store_unspent_outputs(
self.connection, *unspent_outputs)
return backend.query.store_unspent_outputs(self.connection, *unspent_outputs)
def get_utxoset_merkle_root(self):
"""Returns the merkle root of the utxoset. This implies that
@ -214,9 +205,7 @@ class Planetmint(object):
# TODO Once ready, use the already pre-computed utxo_hash field.
# See common/transactions.py for details.
hashes = [
sha3_256(
'{}{}'.format(utxo['transaction_id'], utxo['output_index']).encode()
).digest() for utxo in utxoset
sha3_256("{}{}".format(utxo["transaction_id"], utxo["output_index"]).encode()).digest() for utxo in utxoset
]
# TODO Notice the sorted call!
return merkleroot(sorted(hashes))
@ -238,8 +227,7 @@ class Planetmint(object):
length tuple or list of unspent outputs.
"""
if unspent_outputs:
return backend.query.delete_unspent_outputs(
self.connection, *unspent_outputs)
return backend.query.delete_unspent_outputs(self.connection, *unspent_outputs)
def is_committed(self, transaction_id):
transaction = backend.query.get_transaction(self.connection, transaction_id)
@ -251,14 +239,14 @@ class Planetmint(object):
asset = backend.query.get_asset(self.connection, transaction_id)
metadata = backend.query.get_metadata(self.connection, [transaction_id])
if asset:
transaction['asset'] = asset
transaction["asset"] = asset
if 'metadata' not in transaction:
if "metadata" not in transaction:
metadata = metadata[0] if metadata else None
if metadata:
metadata = metadata.get('metadata')
metadata = metadata.get("metadata")
transaction.update({'metadata': metadata})
transaction.update({"metadata": metadata})
transaction = Transaction.from_dict(transaction)
@ -268,10 +256,8 @@ class Planetmint(object):
return backend.query.get_transactions(self.connection, txn_ids)
def get_transactions_filtered(self, asset_id, operation=None, last_tx=None):
"""Get a list of transactions filtered on some criteria
"""
txids = backend.query.get_txids_filtered(self.connection, asset_id,
operation, last_tx)
"""Get a list of transactions filtered on some criteria"""
txids = backend.query.get_txids_filtered(self.connection, asset_id, operation, last_tx)
for txid in txids:
yield self.get_transaction(txid)
@ -297,27 +283,24 @@ class Planetmint(object):
return self.fastquery.filter_spent_outputs(outputs)
def get_spent(self, txid, output, current_transactions=[]):
transactions = backend.query.get_spent(self.connection, txid,
output)
transactions = backend.query.get_spent(self.connection, txid, output)
transactions = list(transactions) if transactions else []
if len(transactions) > 1:
raise core_exceptions.CriticalDoubleSpend(
'`{}` was spent more than once. There is a problem'
' with the chain'.format(txid))
"`{}` was spent more than once. There is a problem" " with the chain".format(txid)
)
current_spent_transactions = []
for ctxn in current_transactions:
for ctxn_input in ctxn.inputs:
if ctxn_input.fulfills and \
ctxn_input.fulfills.txid == txid and \
ctxn_input.fulfills.output == output:
if ctxn_input.fulfills and ctxn_input.fulfills.txid == txid and ctxn_input.fulfills.output == output:
current_spent_transactions.append(ctxn)
transaction = None
if len(transactions) + len(current_spent_transactions) > 1:
raise DoubleSpend('tx "{}" spends inputs twice'.format(txid))
elif transactions:
transaction = backend.query.get_transactions(self.connection, [transactions[0]['id']])
transaction = backend.query.get_transactions(self.connection, [transactions[0]["id"]])
transaction = Transaction.from_dict(transaction[0])
elif current_spent_transactions:
transaction = current_spent_transactions[0]
@ -346,17 +329,16 @@ class Planetmint(object):
block = backend.query.get_block(self.connection, block_id)
latest_block = self.get_latest_block()
latest_block_height = latest_block['height'] if latest_block else 0
latest_block_height = latest_block["height"] if latest_block else 0
if not block and block_id > latest_block_height:
return
result = {'height': block_id,
'transactions': []}
result = {"height": block_id, "transactions": []}
if block:
transactions = backend.query.get_transactions(self.connection, block['transactions'])
result['transactions'] = [t.to_dict() for t in Transaction.from_db(self, transactions)]
transactions = backend.query.get_transactions(self.connection, block["transactions"])
result["transactions"] = [t.to_dict() for t in Transaction.from_db(self, transactions)]
return result
@ -372,9 +354,9 @@ class Planetmint(object):
"""
blocks = list(backend.query.get_block_with_transaction(self.connection, txid))
if len(blocks) > 1:
logger.critical('Transaction id %s exists in multiple blocks', txid)
logger.critical("Transaction id %s exists in multiple blocks", txid)
return [block['height'] for block in blocks]
return [block["height"] for block in blocks]
def validate_transaction(self, tx, current_transactions=[]):
"""Validate a transaction against the current status of the database."""
@ -388,10 +370,10 @@ class Planetmint(object):
try:
transaction = Transaction.from_dict(tx)
except SchemaValidationError as e:
logger.warning('Invalid transaction schema: %s', e.__cause__.message)
logger.warning("Invalid transaction schema: %s", e.__cause__.message)
return False
except ValidationError as e:
logger.warning('Invalid transaction (%s): %s', type(e).__name__, e)
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
return transaction.validate(self, current_transactions)
@ -401,10 +383,10 @@ class Planetmint(object):
try:
return self.validate_transaction(tx, current_transactions)
except ValidationError as e:
logger.warning('Invalid transaction (%s): %s', type(e).__name__, e)
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
def text_search(self, search, *, limit=0, table='assets'):
def text_search(self, search, *, limit=0, table="assets"):
"""Return an iterator of assets that match the text search
Args:
@ -414,8 +396,7 @@ class Planetmint(object):
Returns:
iter: An iterator of assets that match the text search.
"""
return backend.query.text_search(self.connection, search, limit=limit,
table=table)
return backend.query.text_search(self.connection, search, limit=limit, table=table)
def get_assets(self, asset_ids):
"""Return a list of assets that match the asset_ids
@ -450,7 +431,7 @@ class Planetmint(object):
def get_validators(self, height=None):
result = self.get_validator_change(height)
return [] if result is None else result['validators']
return [] if result is None else result["validators"]
def get_election(self, election_id):
return backend.query.get_election(self.connection, election_id)
@ -463,18 +444,16 @@ class Planetmint(object):
def store_validator_set(self, height, validators):
"""Store validator set at a given `height`.
NOTE: If the validator set already exists at that `height` then an
exception will be raised.
NOTE: If the validator set already exists at that `height` then an
exception will be raised.
"""
return backend.query.store_validator_set(self.connection, {'height': height,
'validators': validators})
return backend.query.store_validator_set(self.connection, {"height": height, "validators": validators})
def delete_validator_set(self, height):
return backend.query.delete_validator_set(self.connection, height)
def store_abci_chain(self, height, chain_id, is_synced=True):
return backend.query.store_abci_chain(self.connection, height,
chain_id, is_synced)
return backend.query.store_abci_chain(self.connection, height, chain_id, is_synced)
def delete_abci_chain(self, height):
return backend.query.delete_abci_chain(self.connection, height)
@ -499,16 +478,15 @@ class Planetmint(object):
block = self.get_latest_block()
suffix = '-migrated-at-height-'
chain_id = latest_chain['chain_id']
block_height_str = str(block['height'])
suffix = "-migrated-at-height-"
chain_id = latest_chain["chain_id"]
block_height_str = str(block["height"])
new_chain_id = chain_id.split(suffix)[0] + suffix + block_height_str
self.store_abci_chain(block['height'] + 1, new_chain_id, False)
self.store_abci_chain(block["height"] + 1, new_chain_id, False)
def store_election(self, election_id, height, is_concluded):
return backend.query.store_election(self.connection, election_id,
height, is_concluded)
return backend.query.store_election(self.connection, election_id, height, is_concluded)
def store_elections(self, elections):
return backend.query.store_elections(self.connection, elections)
@ -517,4 +495,4 @@ class Planetmint(object):
return backend.query.delete_elections(self.connection, height)
Block = namedtuple('Block', ('app_hash', 'height', 'transactions'))
Block = namedtuple("Block", ("app_hash", "height", "transactions"))

View File

@ -11,11 +11,12 @@ from logging.config import dictConfig as set_logging_config
from planetmint.config import Config, DEFAULT_LOGGING_CONFIG
import os
def _normalize_log_level(level):
try:
return level.upper()
except AttributeError as exc:
raise ConfigurationError('Log level must be a string!') from exc
raise ConfigurationError("Log level must be a string!") from exc
def setup_logging():
@ -32,47 +33,47 @@ def setup_logging():
"""
logging_configs = DEFAULT_LOGGING_CONFIG
-new_logging_configs = Config().get()['log']
+new_logging_configs = Config().get()["log"]
-if 'file' in new_logging_configs:
-filename = new_logging_configs['file']
-logging_configs['handlers']['file']['filename'] = filename
+if "file" in new_logging_configs:
+filename = new_logging_configs["file"]
+logging_configs["handlers"]["file"]["filename"] = filename
-if 'error_file' in new_logging_configs:
-error_filename = new_logging_configs['error_file']
-logging_configs['handlers']['errors']['filename'] = error_filename
+if "error_file" in new_logging_configs:
+error_filename = new_logging_configs["error_file"]
+logging_configs["handlers"]["errors"]["filename"] = error_filename
-if 'level_console' in new_logging_configs:
-level = _normalize_log_level(new_logging_configs['level_console'])
-logging_configs['handlers']['console']['level'] = level
+if "level_console" in new_logging_configs:
+level = _normalize_log_level(new_logging_configs["level_console"])
+logging_configs["handlers"]["console"]["level"] = level
-if 'level_logfile' in new_logging_configs:
-level = _normalize_log_level(new_logging_configs['level_logfile'])
-logging_configs['handlers']['file']['level'] = level
+if "level_logfile" in new_logging_configs:
+level = _normalize_log_level(new_logging_configs["level_logfile"])
+logging_configs["handlers"]["file"]["level"] = level
-if 'fmt_console' in new_logging_configs:
-fmt = new_logging_configs['fmt_console']
-logging_configs['formatters']['console']['format'] = fmt
+if "fmt_console" in new_logging_configs:
+fmt = new_logging_configs["fmt_console"]
+logging_configs["formatters"]["console"]["format"] = fmt
-if 'fmt_logfile' in new_logging_configs:
-fmt = new_logging_configs['fmt_logfile']
-logging_configs['formatters']['file']['format'] = fmt
+if "fmt_logfile" in new_logging_configs:
+fmt = new_logging_configs["fmt_logfile"]
+logging_configs["formatters"]["file"]["format"] = fmt
-if 'datefmt_console' in new_logging_configs:
-fmt = new_logging_configs['datefmt_console']
-logging_configs['formatters']['console']['datefmt'] = fmt
+if "datefmt_console" in new_logging_configs:
+fmt = new_logging_configs["datefmt_console"]
+logging_configs["formatters"]["console"]["datefmt"] = fmt
-if 'datefmt_logfile' in new_logging_configs:
-fmt = new_logging_configs['datefmt_logfile']
-logging_configs['formatters']['file']['datefmt'] = fmt
+if "datefmt_logfile" in new_logging_configs:
+fmt = new_logging_configs["datefmt_logfile"]
+logging_configs["formatters"]["file"]["datefmt"] = fmt
-log_levels = new_logging_configs.get('granular_levels', {})
+log_levels = new_logging_configs.get("granular_levels", {})
for logger_name, level in log_levels.items():
level = _normalize_log_level(level)
try:
-logging_configs['loggers'][logger_name]['level'] = level
+logging_configs["loggers"][logger_name]["level"] = level
except KeyError:
-logging_configs['loggers'][logger_name] = {'level': level}
+logging_configs["loggers"][logger_name] = {"level": level}
set_logging_config(logging_configs)
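To illustrate the `granular_levels` loop above, a minimal sketch against a plain dictConfig-style skeleton (illustrative names, not the real DEFAULT_LOGGING_CONFIG):

logging_configs = {"version": 1, "loggers": {"planetmint.core": {"level": "INFO"}}}
log_levels = {"planetmint.core": "debug", "planetmint.web": "warning"}

for logger_name, level in log_levels.items():
    level = level.upper()  # what _normalize_log_level does; non-strings raise ConfigurationError
    try:
        logging_configs["loggers"][logger_name]["level"] = level
    except KeyError:
        # Loggers missing from the defaults get a fresh entry instead of a crash.
        logging_configs["loggers"][logger_name] = {"level": level}

assert logging_configs["loggers"]["planetmint.web"] == {"level": "WARNING"}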


@@ -4,16 +4,16 @@
# Code is Apache-2.0 and docs are CC-BY-4.0
from planetmint.backend.schema import validate_language_key
-from planetmint.transactions.common.exceptions import (InvalidSignature, DuplicateTransaction)
+from planetmint.transactions.common.exceptions import InvalidSignature, DuplicateTransaction
from planetmint.transactions.common.schema import validate_transaction_schema
from planetmint.transactions.common.transaction import Transaction
-from planetmint.transactions.common.utils import (validate_txn_obj, validate_key)
+from planetmint.transactions.common.utils import validate_txn_obj, validate_key
class Transaction(Transaction):
-ASSET = 'asset'
-METADATA = 'metadata'
-DATA = 'data'
+ASSET = "asset"
+METADATA = "metadata"
+DATA = "data"
def validate(self, planet, current_transactions=[]):
"""Validate transaction spend
@@ -31,11 +31,10 @@ class Transaction(Transaction):
if self.operation == Transaction.CREATE:
duplicates = any(txn for txn in current_transactions if txn.id == self.id)
if planet.is_committed(self.id) or duplicates:
-raise DuplicateTransaction('transaction `{}` already exists'
-.format(self.id))
+raise DuplicateTransaction("transaction `{}` already exists".format(self.id))
if not self.inputs_valid(input_conditions):
-raise InvalidSignature('Transaction signature is invalid.')
+raise InvalidSignature("Transaction signature is invalid.")
elif self.operation == Transaction.TRANSFER:
self.validate_transfer_inputs(planet, current_transactions)
@@ -68,7 +67,7 @@ class FastTransaction:
@property
def id(self):
-return self.data['id']
+return self.data["id"]
def to_dict(self):
return self.data


@@ -39,8 +39,8 @@ class ParallelValidationApp(App):
return super().end_block(request_end_block)
-RESET = 'reset'
-EXIT = 'exit'
+RESET = "reset"
+EXIT = "exit"
class ParallelValidator:
@@ -64,7 +64,7 @@ class ParallelValidator:
def validate(self, raw_transaction):
dict_transaction = decode_transaction(raw_transaction)
-index = int(dict_transaction['id'], 16) % self.number_of_workers
+index = int(dict_transaction["id"], 16) % self.number_of_workers
self.routing_queues[index].put((self.transaction_index, dict_transaction))
self.transaction_index += 1
@@ -105,13 +105,11 @@ class ValidationWorker:
def validate(self, dict_transaction):
try:
-asset_id = dict_transaction['asset']['id']
+asset_id = dict_transaction["asset"]["id"]
except KeyError:
-asset_id = dict_transaction['id']
+asset_id = dict_transaction["id"]
-transaction = self.planetmint.is_valid_transaction(
-dict_transaction,
-self.validated_transactions[asset_id])
+transaction = self.planetmint.is_valid_transaction(dict_transaction, self.validated_transactions[asset_id])
if transaction:
self.validated_transactions[asset_id].append(transaction)
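The routing rule above is deterministic: a transaction's hex id, read as an integer modulo the worker count, always selects the same queue. A worked sketch with illustrative ids:

number_of_workers = 4
for tx_id in ("00ff", "a3", "1b2c"):
    index = int(tx_id, 16) % number_of_workers
    print(f"tx {tx_id} -> worker {index}")  # 255 % 4 == 3, 163 % 4 == 3, 6956 % 4 == 0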


@@ -40,13 +40,12 @@ def start(args):
exchange = Exchange()
# start the web api
app_server = server.create_server(
-settings=Config().get()['server'],
-log_config=Config().get()['log'],
-planetmint_factory=Planetmint)
-p_webapi = Process(name='planetmint_webapi', target=app_server.run, daemon=True)
+settings=Config().get()["server"], log_config=Config().get()["log"], planetmint_factory=Planetmint
+)
+p_webapi = Process(name="planetmint_webapi", target=app_server.run, daemon=True)
p_webapi.start()
-logger.info(BANNER.format(Config().get()['server']['bind']))
+logger.info(BANNER.format(Config().get()["server"]["bind"]))
# start websocket server
p_websocket_server = Process(


@@ -17,28 +17,28 @@ except ImportError:
def encode_transaction(value):
"""Encode a transaction (dict) to Base64."""
-return base64.b64encode(json.dumps(value).encode('utf8')).decode('utf8')
+return base64.b64encode(json.dumps(value).encode("utf8")).decode("utf8")
def decode_transaction(raw):
"""Decode a transaction from bytes to a dict."""
-return json.loads(raw.decode('utf8'))
+return json.loads(raw.decode("utf8"))
def decode_transaction_base64(value):
"""Decode a transaction from Base64."""
-return json.loads(base64.b64decode(value.encode('utf8')).decode('utf8'))
+return json.loads(base64.b64decode(value.encode("utf8")).decode("utf8"))
def calculate_hash(key_list):
if not key_list:
-return ''
+return ""
full_hash = sha3_256()
for key in key_list:
-full_hash.update(key.encode('utf8'))
+full_hash.update(key.encode("utf8"))
return full_hash.hexdigest()
@@ -59,16 +59,13 @@ def merkleroot(hashes):
# i.e. an empty list, then the hash of the empty string is returned.
# This seems too easy but maybe that is good enough? TO REVIEW!
if not hashes:
-return sha3_256(b'').hexdigest()
+return sha3_256(b"").hexdigest()
# XXX END TEMPORARY -- MUST REVIEW ...
if len(hashes) == 1:
return hexlify(hashes[0]).decode()
if len(hashes) % 2 == 1:
hashes.append(hashes[-1])
-parent_hashes = [
-sha3_256(hashes[i] + hashes[i + 1]).digest()
-for i in range(0, len(hashes) - 1, 2)
-]
+parent_hashes = [sha3_256(hashes[i] + hashes[i + 1]).digest() for i in range(0, len(hashes) - 1, 2)]
return merkleroot(parent_hashes)
@@ -76,7 +73,7 @@ def public_key64_to_address(base64_public_key):
"""Note this only compatible with Tendermint 0.19.x"""
ed25519_public_key = public_key_from_base64(base64_public_key)
encoded_public_key = amino_encoded_public_key(ed25519_public_key)
-return hashlib.new('ripemd160', encoded_public_key).hexdigest().upper()
+return hashlib.new("ripemd160", encoded_public_key).hexdigest().upper()
def public_key_from_base64(base64_public_key):
@@ -93,8 +90,8 @@ def public_key_to_base64(ed25519_public_key):
def key_to_base64(ed25519_key):
ed25519_key = bytes.fromhex(ed25519_key)
-return base64.b64encode(ed25519_key).decode('utf-8')
+return base64.b64encode(ed25519_key).decode("utf-8")
def amino_encoded_public_key(ed25519_public_key):
-return bytes.fromhex('1624DE6220{}'.format(ed25519_public_key))
+return bytes.fromhex("1624DE6220{}".format(ed25519_public_key))


@@ -14,7 +14,7 @@ except ImportError:
from cryptoconditions import crypto
-CryptoKeypair = namedtuple('CryptoKeypair', ('private_key', 'public_key'))
+CryptoKeypair = namedtuple("CryptoKeypair", ("private_key", "public_key"))
def hash_data(data):
@@ -33,8 +33,7 @@ def generate_key_pair():
"""
# TODO FOR CC: Adjust interface so that this function becomes unnecessary
-return CryptoKeypair(
-*(k.decode() for k in crypto.ed25519_generate_key_pair()))
+return CryptoKeypair(*(k.decode() for k in crypto.ed25519_generate_key_pair()))
PrivateKey = crypto.Ed25519SigningKey
@@ -43,13 +42,15 @@ PublicKey = crypto.Ed25519VerifyingKey
def key_pair_from_ed25519_key(hex_private_key):
"""Generate base58 encode public-private key pair from a hex encoded private key"""
-priv_key = crypto.Ed25519SigningKey(bytes.fromhex(hex_private_key)[:32], encoding='bytes')
+priv_key = crypto.Ed25519SigningKey(bytes.fromhex(hex_private_key)[:32], encoding="bytes")
public_key = priv_key.get_verifying_key()
-return CryptoKeypair(private_key=priv_key.encode(encoding='base58').decode('utf-8'),
-public_key=public_key.encode(encoding='base58').decode('utf-8'))
+return CryptoKeypair(
+private_key=priv_key.encode(encoding="base58").decode("utf-8"),
+public_key=public_key.encode(encoding="base58").decode("utf-8"),
+)
def public_key_from_ed25519_key(hex_public_key):
"""Generate base58 public key from hex encoded public key"""
-public_key = crypto.Ed25519VerifyingKey(bytes.fromhex(hex_public_key), encoding='bytes')
-return public_key.encode(encoding='base58').decode('utf-8')
+public_key = crypto.Ed25519VerifyingKey(bytes.fromhex(hex_public_key), encoding="bytes")
+return public_key.encode(encoding="base58").decode("utf-8")


@@ -30,19 +30,19 @@ class Input(object):
def __init__(self, fulfillment, owners_before, fulfills=None):
"""Create an instance of an :class:`~.Input`.
-Args:
-fulfillment (:class:`cryptoconditions.Fulfillment`): A
-Fulfillment to be signed with a private key.
-owners_before (:obj:`list` of :obj:`str`): A list of owners
-after a Transaction was confirmed.
-fulfills (:class:`~planetmint.transactions.common.transaction.
-TransactionLink`, optional): A link representing the input
-of a `TRANSFER` Transaction.
+Args:
+fulfillment (:class:`cryptoconditions.Fulfillment`): A
+Fulfillment to be signed with a private key.
+owners_before (:obj:`list` of :obj:`str`): A list of owners
+after a Transaction was confirmed.
+fulfills (:class:`~planetmint.transactions.common.transaction.
+TransactionLink`, optional): A link representing the input
+of a `TRANSFER` Transaction.
"""
if fulfills is not None and not isinstance(fulfills, TransactionLink):
-raise TypeError('`fulfills` must be a TransactionLink instance')
+raise TypeError("`fulfills` must be a TransactionLink instance")
if not isinstance(owners_before, list):
-raise TypeError('`owners_before` must be a list instance')
+raise TypeError("`owners_before` must be a list instance")
self.fulfillment = fulfillment
self.fulfills = fulfills
@@ -60,12 +60,12 @@ class Input(object):
def to_dict(self):
"""Transforms the object to a Python dictionary.
-Note:
-If an Input hasn't been signed yet, this method returns a
-dictionary representation.
+Note:
+If an Input hasn't been signed yet, this method returns a
+dictionary representation.
-Returns:
-dict: The Input as an alternative serialization format.
+Returns:
+dict: The Input as an alternative serialization format.
"""
try:
fulfillment = self.fulfillment.serialize_uri()
@@ -79,9 +79,9 @@ class Input(object):
fulfills = None
input_ = {
-'owners_before': self.owners_before,
-'fulfills': fulfills,
-'fulfillment': fulfillment,
+"owners_before": self.owners_before,
+"fulfills": fulfills,
+"fulfillment": fulfillment,
}
return input_
@@ -97,23 +97,23 @@ class Input(object):
def from_dict(cls, data):
"""Transforms a Python dictionary to an Input object.
-Note:
-Optionally, this method can also serialize a Cryptoconditions-
-Fulfillment that is not yet signed.
+Note:
+Optionally, this method can also serialize a Cryptoconditions-
+Fulfillment that is not yet signed.
-Args:
-data (dict): The Input to be transformed.
+Args:
+data (dict): The Input to be transformed.
-Returns:
-:class:`~planetmint.transactions.common.transaction.Input`
+Returns:
+:class:`~planetmint.transactions.common.transaction.Input`
-Raises:
-InvalidSignature: If an Input's URI couldn't be parsed.
+Raises:
+InvalidSignature: If an Input's URI couldn't be parsed.
"""
-fulfillment = data['fulfillment']
+fulfillment = data["fulfillment"]
if not isinstance(fulfillment, (Fulfillment, type(None))):
try:
-fulfillment = Fulfillment.from_uri(data['fulfillment'])
+fulfillment = Fulfillment.from_uri(data["fulfillment"])
except ASN1DecodeError:
# TODO Remove as it is legacy code, and simply fall back on
# ASN1DecodeError
@@ -121,6 +121,6 @@ class Input(object):
except TypeError:
# NOTE: See comment about this special case in
# `Input.to_dict`
-fulfillment = _fulfillment_from_details(data['fulfillment'])
-fulfills = TransactionLink.from_dict(data['fulfills'])
-return cls(fulfillment, data['owners_before'], fulfills)
+fulfillment = _fulfillment_from_details(data["fulfillment"])
+fulfills = TransactionLink.from_dict(data["fulfills"])
+return cls(fulfillment, data["owners_before"], fulfills)


@@ -5,7 +5,7 @@ from functools import lru_cache
class HDict(dict):
def __hash__(self):
-return hash(codecs.decode(self['id'], 'hex'))
+return hash(codecs.decode(self["id"], "hex"))
@lru_cache(maxsize=16384)
@@ -14,12 +14,11 @@ def from_dict(func, *args, **kwargs):
def memoize_from_dict(func):
@functools.wraps(func)
def memoized_func(*args, **kwargs):
if args[1] is None:
return None
-elif args[1].get('id', None):
+elif args[1].get("id", None):
args = list(args)
args[1] = HDict(args[1])
new_args = tuple(args)
@@ -30,7 +29,7 @@ def memoize_from_dict(func):
return memoized_func
-class ToDictWrapper():
+class ToDictWrapper:
def __init__(self, tx):
self.tx = tx
@@ -47,7 +46,6 @@ def to_dict(func, tx_wrapped):
def memoize_to_dict(func):
@functools.wraps(func)
def memoized_func(*args, **kwargs):


@@ -19,7 +19,7 @@ logger = logging.getLogger(__name__)
def _load_schema(name, version, path=__file__):
"""Load a schema from disk"""
-path = os.path.join(os.path.dirname(path), version, name + '.yaml')
+path = os.path.join(os.path.dirname(path), version, name + ".yaml")
with open(path) as handle:
schema = yaml.safe_load(handle)
fast_schema = rapidjson.Validator(rapidjson.dumps(schema))
@@ -27,22 +27,17 @@ def _load_schema(name, version, path=__file__):
# TODO: make this an env var from a config file
-TX_SCHEMA_VERSION = 'v2.0'
+TX_SCHEMA_VERSION = "v2.0"
-TX_SCHEMA_PATH, TX_SCHEMA_COMMON = _load_schema('transaction',
-TX_SCHEMA_VERSION)
-_, TX_SCHEMA_CREATE = _load_schema('transaction_create',
-TX_SCHEMA_VERSION)
-_, TX_SCHEMA_TRANSFER = _load_schema('transaction_transfer',
-TX_SCHEMA_VERSION)
+TX_SCHEMA_PATH, TX_SCHEMA_COMMON = _load_schema("transaction", TX_SCHEMA_VERSION)
+_, TX_SCHEMA_CREATE = _load_schema("transaction_create", TX_SCHEMA_VERSION)
+_, TX_SCHEMA_TRANSFER = _load_schema("transaction_transfer", TX_SCHEMA_VERSION)
-_, TX_SCHEMA_VALIDATOR_ELECTION = _load_schema('transaction_validator_election',
-TX_SCHEMA_VERSION)
+_, TX_SCHEMA_VALIDATOR_ELECTION = _load_schema("transaction_validator_election", TX_SCHEMA_VERSION)
-_, TX_SCHEMA_CHAIN_MIGRATION_ELECTION = _load_schema('transaction_chain_migration_election',
-TX_SCHEMA_VERSION)
+_, TX_SCHEMA_CHAIN_MIGRATION_ELECTION = _load_schema("transaction_chain_migration_election", TX_SCHEMA_VERSION)
-_, TX_SCHEMA_VOTE = _load_schema('transaction_vote', TX_SCHEMA_VERSION)
+_, TX_SCHEMA_VOTE = _load_schema("transaction_vote", TX_SCHEMA_VERSION)
def _validate_schema(schema, body):
@@ -66,7 +61,7 @@ def _validate_schema(schema, body):
jsonschema.validate(body, schema[0])
except jsonschema.ValidationError as exc2:
raise SchemaValidationError(str(exc2)) from exc2
-logger.warning('code problem: jsonschema did not raise an exception, wheras rapidjson raised %s', exc)
+logger.warning("code problem: jsonschema did not raise an exception, wheras rapidjson raised %s", exc)
raise SchemaValidationError(str(exc)) from exc
@@ -77,7 +72,7 @@ def validate_transaction_schema(tx):
transaction. TX_SCHEMA_[TRANSFER|CREATE] add additional constraints on top.
"""
_validate_schema(TX_SCHEMA_COMMON, tx)
-if tx['operation'] == 'TRANSFER':
+if tx["operation"] == "TRANSFER":
_validate_schema(TX_SCHEMA_TRANSFER, tx)
else:
_validate_schema(TX_SCHEMA_CREATE, tx)
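A sketch of the two-stage check performed by _validate_schema: rapidjson validates on the fast path, and only on failure does jsonschema re-validate the same body to produce a readable error message (schema and body here are illustrative):

import jsonschema
import rapidjson

schema = {"type": "object", "properties": {"operation": {"type": "string"}}, "required": ["operation"]}
fast_validator = rapidjson.Validator(rapidjson.dumps(schema))
body = {"operation": 42}  # wrong type on purpose

try:
    fast_validator(rapidjson.dumps(body))
except ValueError:
    try:
        jsonschema.validate(body, schema)
    except jsonschema.ValidationError as exc:
        print("readable error:", exc.message)  # surfaced as SchemaValidationError upstream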


@@ -120,26 +120,15 @@ class Transaction(object):
# Asset payloads for 'CREATE' operations must be None or
# dicts holding a `data` property. Asset payloads for 'TRANSFER'
# operations must be dicts holding an `id` property.
-if (
-operation == self.CREATE
-and asset is not None
-and not (isinstance(asset, dict) and "data" in asset)
-):
+if operation == self.CREATE and asset is not None and not (isinstance(asset, dict) and "data" in asset):
raise TypeError(
(
"`asset` must be None or a dict holding a `data` "
" property instance for '{}' Transactions".format(operation)
)
)
-elif operation == self.TRANSFER and not (
-isinstance(asset, dict) and "id" in asset
-):
-raise TypeError(
-(
-"`asset` must be a dict holding an `id` property "
-"for 'TRANSFER' Transactions"
-)
-)
+elif operation == self.TRANSFER and not (isinstance(asset, dict) and "id" in asset):
+raise TypeError(("`asset` must be a dict holding an `id` property " "for 'TRANSFER' Transactions"))
if outputs and not isinstance(outputs, list):
raise TypeError("`outputs` must be a list instance or None")
@@ -298,10 +287,7 @@ class Transaction(object):
# to decode to convert the bytestring into a python str
return public_key.decode()
-key_pairs = {
-gen_public_key(PrivateKey(private_key)): PrivateKey(private_key)
-for private_key in private_keys
-}
+key_pairs = {gen_public_key(PrivateKey(private_key)): PrivateKey(private_key) for private_key in private_keys}
tx_dict = self.to_dict()
tx_dict = Transaction._remove_signatures(tx_dict)
@@ -336,10 +322,7 @@ class Transaction(object):
elif isinstance(input_.fulfillment, ZenroomSha256):
return cls._sign_threshold_signature_fulfillment(input_, message, key_pairs)
else:
-raise ValueError(
-"Fulfillment couldn't be matched to "
-"Cryptocondition fulfillment type."
-)
+raise ValueError("Fulfillment couldn't be matched to " "Cryptocondition fulfillment type.")
@classmethod
def _sign_zenroom_fulfillment(cls, input_, message, key_pairs):
@@ -359,20 +342,15 @@ class Transaction(object):
public_key = input_.owners_before[0]
message = sha3_256(message.encode())
if input_.fulfills:
-message.update(
-"{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode()
-)
+message.update("{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode())
try:
# cryptoconditions makes no assumptions of the encoding of the
# message to sign or verify. It only accepts bytestrings
-input_.fulfillment.sign(
-message.digest(), base58.b58decode(key_pairs[public_key].encode())
-)
+input_.fulfillment.sign(message.digest(), base58.b58decode(key_pairs[public_key].encode()))
except KeyError:
raise KeypairMismatchException(
"Public key {} is not a pair to "
"any of the private keys".format(public_key)
"Public key {} is not a pair to " "any of the private keys".format(public_key)
)
return input_
@@ -394,20 +372,15 @@ class Transaction(object):
public_key = input_.owners_before[0]
message = sha3_256(message.encode())
if input_.fulfills:
-message.update(
-"{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode()
-)
+message.update("{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode())
try:
# cryptoconditions makes no assumptions of the encoding of the
# message to sign or verify. It only accepts bytestrings
-input_.fulfillment.sign(
-message.digest(), base58.b58decode(key_pairs[public_key].encode())
-)
+input_.fulfillment.sign(message.digest(), base58.b58decode(key_pairs[public_key].encode()))
except KeyError:
raise KeypairMismatchException(
"Public key {} is not a pair to "
"any of the private keys".format(public_key)
"Public key {} is not a pair to " "any of the private keys".format(public_key)
)
return input_
@@ -424,9 +397,7 @@ class Transaction(object):
input_ = deepcopy(input_)
message = sha3_256(message.encode())
if input_.fulfills:
-message.update(
-"{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode()
-)
+message.update("{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode())
for owner_before in set(input_.owners_before):
# TODO: CC should throw a KeypairMismatchException, instead of
@@ -442,15 +413,13 @@ class Transaction(object):
subffills = ccffill.get_subcondition_from_vk(base58.b58decode(owner_before))
if not subffills:
raise KeypairMismatchException(
"Public key {} cannot be found "
"in the fulfillment".format(owner_before)
"Public key {} cannot be found " "in the fulfillment".format(owner_before)
)
try:
private_key = key_pairs[owner_before]
except KeyError:
raise KeypairMismatchException(
"Public key {} is not a pair "
"to any of the private keys".format(owner_before)
"Public key {} is not a pair " "to any of the private keys".format(owner_before)
)
# cryptoconditions makes no assumptions of the encoding of the
@@ -483,9 +452,7 @@ class Transaction(object):
# greatly, as we do not have to check against `None` values.
return self._inputs_valid(["dummyvalue" for _ in self.inputs])
elif self.operation == self.TRANSFER:
-return self._inputs_valid(
-[output.fulfillment.condition_uri for output in outputs]
-)
+return self._inputs_valid([output.fulfillment.condition_uri for output in outputs])
else:
allowed_ops = ", ".join(self.__class__.ALLOWED_OPERATIONS)
raise TypeError("`operation` must be one of {}".format(allowed_ops))
@@ -506,9 +473,7 @@ class Transaction(object):
"""
if len(self.inputs) != len(output_condition_uris):
-raise ValueError(
-"Inputs and " "output_condition_uris must have the same count"
-)
+raise ValueError("Inputs and " "output_condition_uris must have the same count")
tx_dict = self.tx_dict if self.tx_dict else self.to_dict()
tx_dict = Transaction._remove_signatures(tx_dict)
@@ -517,9 +482,7 @@ class Transaction(object):
def validate(i, output_condition_uri=None):
"""Validate input against output condition URI"""
-return self._input_valid(
-self.inputs[i], self.operation, tx_serialized, output_condition_uri
-)
+return self._input_valid(self.inputs[i], self.operation, tx_serialized, output_condition_uri)
return all(validate(i, cond) for i, cond in enumerate(output_condition_uris))
@@ -574,9 +537,7 @@ class Transaction(object):
else:
message = sha3_256(message.encode())
if input_.fulfills:
-message.update(
-"{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode()
-)
+message.update("{}{}".format(input_.fulfills.txid, input_.fulfills.output).encode())
# NOTE: We pass a timestamp to `.validate`, as in case of a timeout
# condition we'll have to validate against it
@@ -676,19 +637,11 @@ class Transaction(object):
transactions = [transactions]
# create a set of the transactions' asset ids
-asset_ids = {
-tx.id if tx.operation == tx.CREATE else tx.asset["id"]
-for tx in transactions
-}
+asset_ids = {tx.id if tx.operation == tx.CREATE else tx.asset["id"] for tx in transactions}
# check that all the transasctions have the same asset id
if len(asset_ids) > 1:
-raise AssetIdMismatch(
-(
-"All inputs of all transactions passed"
-" need to have the same asset id"
-)
-)
+raise AssetIdMismatch(("All inputs of all transactions passed" " need to have the same asset id"))
return asset_ids.pop()
@staticmethod
@@ -712,10 +665,7 @@ class Transaction(object):
tx_body_serialized = Transaction._to_str(tx_body)
valid_tx_id = Transaction._to_hash(tx_body_serialized)
if proposed_tx_id != valid_tx_id:
-err_msg = (
-"The transaction's id '{}' isn't equal to "
-"the hash of its body, i.e. it's not valid."
-)
+err_msg = "The transaction's id '{}' isn't equal to " "the hash of its body, i.e. it's not valid."
raise InvalidHash(err_msg.format(proposed_tx_id))
@classmethod
@@ -729,27 +679,25 @@ class Transaction(object):
Returns:
:class:`~planetmint.transactions.common.transaction.Transaction`
"""
-operation = (
-tx.get("operation", Transaction.CREATE)
-if isinstance(tx, dict)
-else Transaction.CREATE
-)
+operation = tx.get("operation", Transaction.CREATE) if isinstance(tx, dict) else Transaction.CREATE
cls = Transaction.resolve_class(operation)
id = None
try:
-id = tx['id']
+id = tx["id"]
except KeyError:
id = None
# tx['asset'] = tx['asset'][0] if isinstance( tx['asset'], list) or isinstance( tx['asset'], tuple) else tx['asset'], # noqa: E501
local_dict = {
-'inputs': tx['inputs'],
-'outputs': tx['outputs'],
-'operation': operation,
-'metadata': tx['metadata'],
-'asset': tx['asset'], # [0] if isinstance( tx['asset'], list) or isinstance( tx['asset'], tuple) else tx['asset'], # noqa: E501
-'version': tx['version'],
-'id': id
+"inputs": tx["inputs"],
+"outputs": tx["outputs"],
+"operation": operation,
+"metadata": tx["metadata"],
+"asset": tx[
+"asset"
+], # [0] if isinstance( tx['asset'], list) or isinstance( tx['asset'], tuple) else tx['asset'], # noqa: E501
+"version": tx["version"],
+"id": id,
}
if not skip_schema_validation:
@@ -802,14 +750,14 @@ class Transaction(object):
if asset is not None:
# This is tarantool specific behaviour needs to be addressed
tx = tx_map[asset[1]]
-tx['asset'] = asset[0]
+tx["asset"] = asset[0]
tx_ids = list(tx_map.keys())
metadata_list = list(planet.get_metadata(tx_ids))
for metadata in metadata_list:
-if 'id' in metadata:
-tx = tx_map[metadata['id']]
-tx.update({'metadata': metadata.get('metadata')})
+if "id" in metadata:
+tx = tx_map[metadata["id"]]
+tx.update({"metadata": metadata.get("metadata")})
if return_list:
tx_list = []
@@ -851,9 +799,7 @@ class Transaction(object):
if input_tx is None:
raise InputDoesNotExist("input `{}` doesn't exist".format(input_txid))
-spent = planet.get_spent(
-input_txid, input_.fulfills.output, current_transactions
-)
+spent = planet.get_spent(input_txid, input_.fulfills.output, current_transactions)
if spent:
raise DoubleSpend("input `{}` was already spent".format(input_txid))
@@ -869,27 +815,15 @@ class Transaction(object):
# validate asset id
asset_id = self.get_asset_id(input_txs)
if asset_id != self.asset["id"]:
-raise AssetIdMismatch(
-(
-"The asset id of the input does not"
-" match the asset id of the"
-" transaction"
-)
-)
+raise AssetIdMismatch(("The asset id of the input does not" " match the asset id of the" " transaction"))
-input_amount = sum(
-[input_condition.amount for input_condition in input_conditions]
-)
-output_amount = sum(
-[output_condition.amount for output_condition in self.outputs]
-)
+input_amount = sum([input_condition.amount for input_condition in input_conditions])
+output_amount = sum([output_condition.amount for output_condition in self.outputs])
if output_amount != input_amount:
raise AmountError(
-(
-"The amount used in the inputs `{}`"
-" needs to be same as the amount used"
-" in the outputs `{}`"
+"The amount used in the inputs `{}`" " needs to be same as the amount used" " in the outputs `{}`"
).format(input_amount, output_amount)
)
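Worked numbers for the TRANSFER checks above: the inputs being spent and the outputs being created must move the same total amount of the same asset (values are illustrative):

input_amounts = [3, 7]   # amounts on the outputs consumed by this transaction
output_amounts = [5, 5]  # amounts on the outputs it creates

assert sum(input_amounts) == sum(output_amounts) == 10  # any mismatch raises AmountError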


@@ -3,29 +3,30 @@
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
class TransactionLink(object):
"""An object for unidirectional linking to a Transaction's Output.
-Attributes:
-txid (str, optional): A Transaction to link to.
-output (int, optional): An output's index in a Transaction with id
-`txid`.
+Attributes:
+txid (str, optional): A Transaction to link to.
+output (int, optional): An output's index in a Transaction with id
+`txid`.
"""
def __init__(self, txid=None, output=None):
"""Create an instance of a :class:`~.TransactionLink`.
-Note:
-In an IPLD implementation, this class is not necessary anymore,
-as an IPLD link can simply point to an object, as well as an
-objects properties. So instead of having a (de)serializable
-class, we can have a simple IPLD link of the form:
-`/<tx_id>/transaction/outputs/<output>/`.
+Note:
+In an IPLD implementation, this class is not necessary anymore,
+as an IPLD link can simply point to an object, as well as an
+objects properties. So instead of having a (de)serializable
+class, we can have a simple IPLD link of the form:
+`/<tx_id>/transaction/outputs/<output>/`.
-Args:
-txid (str, optional): A Transaction to link to.
-output (int, optional): An Outputs's index in a Transaction with
-id `txid`.
+Args:
+txid (str, optional): A Transaction to link to.
+output (int, optional): An Outputs's index in a Transaction with
+id `txid`.
"""
self.txid = txid
self.output = output
@@ -44,33 +45,32 @@ class TransactionLink(object):
def from_dict(cls, link):
"""Transforms a Python dictionary to a TransactionLink object.
-Args:
-link (dict): The link to be transformed.
+Args:
+link (dict): The link to be transformed.
-Returns:
-:class:`~planetmint.transactions.common.transaction.TransactionLink`
+Returns:
+:class:`~planetmint.transactions.common.transaction.TransactionLink`
"""
try:
-return cls(link['transaction_id'], link['output_index'])
+return cls(link["transaction_id"], link["output_index"])
except TypeError:
return cls()
def to_dict(self):
"""Transforms the object to a Python dictionary.
-Returns:
-(dict|None): The link as an alternative serialization format.
+Returns:
+(dict|None): The link as an alternative serialization format.
"""
if self.txid is None and self.output is None:
return None
else:
return {
-'transaction_id': self.txid,
-'output_index': self.output,
+"transaction_id": self.txid,
+"output_index": self.output,
}
-def to_uri(self, path=''):
+def to_uri(self, path=""):
if self.txid is None and self.output is None:
return None
-return '{}/transactions/{}/outputs/{}'.format(path, self.txid,
-self.output)
+return "{}/transactions/{}/outputs/{}".format(path, self.txid, self.output)


@@ -3,6 +3,6 @@
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
-BROADCAST_TX_COMMIT = 'broadcast_tx_commit'
-BROADCAST_TX_ASYNC = 'broadcast_tx_async'
-BROADCAST_TX_SYNC = 'broadcast_tx_sync'
+BROADCAST_TX_COMMIT = "broadcast_tx_commit"
+BROADCAST_TX_ASYNC = "broadcast_tx_async"
+BROADCAST_TX_SYNC = "broadcast_tx_sync"


@@ -75,7 +75,7 @@ def validate_txn_obj(obj_name, obj, key, validation_fun):
Raises:
ValidationError: `validation_fun` will raise exception on failure
"""
-backend = Config().get()['database']['backend']
+backend = Config().get()["database"]["backend"]
if backend == "localmongodb":
data = obj.get(key, {})
@@ -184,9 +184,7 @@ def _fulfillment_to_details(fulfillment):
}
if fulfillment.type_name == "threshold-sha-256":
-subconditions = [
-_fulfillment_to_details(cond["body"]) for cond in fulfillment.subconditions
-]
+subconditions = [_fulfillment_to_details(cond["body"]) for cond in fulfillment.subconditions]
return {
"type": "threshold-sha-256",
"threshold": fulfillment.threshold,


@@ -10,23 +10,23 @@ from planetmint.transactions.common.output import Output
class Create(Transaction):
-OPERATION = 'CREATE'
+OPERATION = "CREATE"
ALLOWED_OPERATIONS = (OPERATION,)
@classmethod
def validate_create(self, tx_signers, recipients, asset, metadata):
if not isinstance(tx_signers, list):
-raise TypeError('`tx_signers` must be a list instance')
+raise TypeError("`tx_signers` must be a list instance")
if not isinstance(recipients, list):
-raise TypeError('`recipients` must be a list instance')
+raise TypeError("`recipients` must be a list instance")
if len(tx_signers) == 0:
-raise ValueError('`tx_signers` list cannot be empty')
+raise ValueError("`tx_signers` list cannot be empty")
if len(recipients) == 0:
-raise ValueError('`recipients` list cannot be empty')
+raise ValueError("`recipients` list cannot be empty")
if not (asset is None or isinstance(asset, dict)):
-raise TypeError('`asset` must be a dict or None')
+raise TypeError("`asset` must be a dict or None")
if not (metadata is None or isinstance(metadata, dict)):
-raise TypeError('`metadata` must be a dict or None')
+raise TypeError("`metadata` must be a dict or None")
inputs = []
outputs = []
@@ -34,9 +34,9 @@ class Create(Transaction):
# generate_outputs
for recipient in recipients:
if not isinstance(recipient, tuple) or len(recipient) != 2:
-raise ValueError(('Each `recipient` in the list must be a'
-' tuple of `([<list of public keys>],'
-' <amount>)`'))
+raise ValueError(
+("Each `recipient` in the list must be a" " tuple of `([<list of public keys>]," " <amount>)`")
+)
pub_keys, amount = recipient
outputs.append(Output.generate(pub_keys, amount))
@@ -49,30 +49,30 @@ class Create(Transaction):
def generate(cls, tx_signers, recipients, metadata=None, asset=None):
"""A simple way to generate a `CREATE` transaction.
-Note:
-This method currently supports the following Cryptoconditions
-use cases:
-- Ed25519
-- ThresholdSha256
+Note:
+This method currently supports the following Cryptoconditions
+use cases:
+- Ed25519
+- ThresholdSha256
-Additionally, it provides support for the following Planetmint
-use cases:
-- Multiple inputs and outputs.
+Additionally, it provides support for the following Planetmint
+use cases:
+- Multiple inputs and outputs.
-Args:
-tx_signers (:obj:`list` of :obj:`str`): A list of keys that
-represent the signers of the CREATE Transaction.
-recipients (:obj:`list` of :obj:`tuple`): A list of
-([keys],amount) that represent the recipients of this
-Transaction.
-metadata (dict): The metadata to be stored along with the
-Transaction.
-asset (dict): The metadata associated with the asset that will
-be created in this Transaction.
+Args:
+tx_signers (:obj:`list` of :obj:`str`): A list of keys that
+represent the signers of the CREATE Transaction.
+recipients (:obj:`list` of :obj:`tuple`): A list of
+([keys],amount) that represent the recipients of this
+Transaction.
+metadata (dict): The metadata to be stored along with the
+Transaction.
+asset (dict): The metadata associated with the asset that will
+be created in this Transaction.
-Returns:
-:class:`~planetmint.common.transaction.Transaction`
+Returns:
+:class:`~planetmint.common.transaction.Transaction`
"""
(inputs, outputs) = cls.validate_create(tx_signers, recipients, asset, metadata)
-return cls(cls.OPERATION, {'data': asset}, inputs, outputs, metadata)
+return cls(cls.OPERATION, {"data": asset}, inputs, outputs, metadata)


@@ -10,31 +10,31 @@ from copy import deepcopy
class Transfer(Transaction):
-OPERATION = 'TRANSFER'
+OPERATION = "TRANSFER"
ALLOWED_OPERATIONS = (OPERATION,)
@classmethod
def validate_transfer(cls, inputs, recipients, asset_id, metadata):
if not isinstance(inputs, list):
-raise TypeError('`inputs` must be a list instance')
+raise TypeError("`inputs` must be a list instance")
if len(inputs) == 0:
-raise ValueError('`inputs` must contain at least one item')
+raise ValueError("`inputs` must contain at least one item")
if not isinstance(recipients, list):
-raise TypeError('`recipients` must be a list instance')
+raise TypeError("`recipients` must be a list instance")
if len(recipients) == 0:
-raise ValueError('`recipients` list cannot be empty')
+raise ValueError("`recipients` list cannot be empty")
outputs = []
for recipient in recipients:
if not isinstance(recipient, tuple) or len(recipient) != 2:
-raise ValueError(('Each `recipient` in the list must be a'
-' tuple of `([<list of public keys>],'
-' <amount>)`'))
+raise ValueError(
+("Each `recipient` in the list must be a" " tuple of `([<list of public keys>]," " <amount>)`")
+)
pub_keys, amount = recipient
outputs.append(Output.generate(pub_keys, amount))
if not isinstance(asset_id, str):
-raise TypeError('`asset_id` must be a string')
+raise TypeError("`asset_id` must be a string")
return (deepcopy(inputs), outputs)
@@ -42,40 +42,40 @@ class Transfer(Transaction):
def generate(cls, inputs, recipients, asset_id, metadata=None):
"""A simple way to generate a `TRANSFER` transaction.
-Note:
-Different cases for threshold conditions:
+Note:
+Different cases for threshold conditions:
-Combining multiple `inputs` with an arbitrary number of
-`recipients` can yield interesting cases for the creation of
-threshold conditions we'd like to support. The following
-notation is proposed:
+Combining multiple `inputs` with an arbitrary number of
+`recipients` can yield interesting cases for the creation of
+threshold conditions we'd like to support. The following
+notation is proposed:
-1. The index of a `recipient` corresponds to the index of
-an input:
-e.g. `transfer([input1], [a])`, means `input1` would now be
-owned by user `a`.
+1. The index of a `recipient` corresponds to the index of
+an input:
+e.g. `transfer([input1], [a])`, means `input1` would now be
+owned by user `a`.
-2. `recipients` can (almost) get arbitrary deeply nested,
-creating various complex threshold conditions:
-e.g. `transfer([inp1, inp2], [[a, [b, c]], d])`, means
-`a`'s signature would have a 50% weight on `inp1`
-compared to `b` and `c` that share 25% of the leftover
-weight respectively. `inp2` is owned completely by `d`.
+2. `recipients` can (almost) get arbitrary deeply nested,
+creating various complex threshold conditions:
+e.g. `transfer([inp1, inp2], [[a, [b, c]], d])`, means
+`a`'s signature would have a 50% weight on `inp1`
+compared to `b` and `c` that share 25% of the leftover
+weight respectively. `inp2` is owned completely by `d`.
-Args:
-inputs (:obj:`list` of :class:`~planetmint.common.transaction.
-Input`): Converted `Output`s, intended to
-be used as inputs in the transfer to generate.
-recipients (:obj:`list` of :obj:`tuple`): A list of
-([keys],amount) that represent the recipients of this
-Transaction.
-asset_id (str): The asset ID of the asset to be transferred in
-this Transaction.
-metadata (dict): Python dictionary to be stored along with the
-Transaction.
+Args:
+inputs (:obj:`list` of :class:`~planetmint.common.transaction.
+Input`): Converted `Output`s, intended to
+be used as inputs in the transfer to generate.
+recipients (:obj:`list` of :obj:`tuple`): A list of
+([keys],amount) that represent the recipients of this
+Transaction.
+asset_id (str): The asset ID of the asset to be transferred in
+this Transaction.
+metadata (dict): Python dictionary to be stored along with the
+Transaction.
-Returns:
-:class:`~planetmint.common.transaction.Transaction`
+Returns:
+:class:`~planetmint.common.transaction.Transaction`
"""
(inputs, outputs) = cls.validate_transfer(inputs, recipients, asset_id, metadata)
-return cls(cls.OPERATION, {'id': asset_id}, inputs, outputs, metadata)
+return cls(cls.OPERATION, {"id": asset_id}, inputs, outputs, metadata)


@@ -6,14 +6,14 @@ from planetmint.transactions.types.elections.election import Election
class ChainMigrationElection(Election):
-OPERATION = 'CHAIN_MIGRATION_ELECTION'
+OPERATION = "CHAIN_MIGRATION_ELECTION"
CREATE = OPERATION
ALLOWED_OPERATIONS = (OPERATION,)
TX_SCHEMA_CUSTOM = TX_SCHEMA_CHAIN_MIGRATION_ELECTION
def has_concluded(self, planetmint, *args, **kwargs):
chain = planetmint.get_latest_abci_chain()
-if chain is not None and not chain['is_synced']:
+if chain is not None and not chain["is_synced"]:
# do not conclude the migration election if
# there is another migration in progress
return False
@@ -26,7 +26,7 @@ class ChainMigrationElection(Election):
def show_election(self, planet):
output = super().show_election(planet)
chain = planet.get_latest_abci_chain()
-if chain is None or chain['is_synced']:
+if chain is None or chain["is_synced"]:
return output
output += f'\nchain_id={chain["chain_id"]}'
@@ -34,14 +34,15 @@ class ChainMigrationElection(Election):
output += f'\napp_hash={block["app_hash"]}'
validators = [
{
-'pub_key': {
-'type': 'tendermint/PubKeyEd25519',
-'value': k,
+"pub_key": {
+"type": "tendermint/PubKeyEd25519",
+"value": k,
},
-'power': v,
-} for k, v in self.get_validators(planet).items()
+"power": v,
+}
+for k, v in self.get_validators(planet).items()
]
-output += f'\nvalidators={json.dumps(validators, indent=4)}'
+output += f"\nvalidators={json.dumps(validators, indent=4)}"
return output
def on_rollback(self, planet, new_height):


@@ -12,30 +12,33 @@ from planetmint.transactions.types.assets.create import Create
from planetmint.transactions.types.assets.transfer import Transfer
from planetmint.transactions.types.elections.vote import Vote
from planetmint.transactions.common.exceptions import (
-InvalidSignature, MultipleInputsError, InvalidProposer,
-UnequalValidatorSet, DuplicateTransaction)
+InvalidSignature,
+MultipleInputsError,
+InvalidProposer,
+UnequalValidatorSet,
+DuplicateTransaction,
+)
from planetmint.tendermint_utils import key_from_base64, public_key_to_base64
-from planetmint.transactions.common.crypto import (public_key_from_ed25519_key)
+from planetmint.transactions.common.crypto import public_key_from_ed25519_key
from planetmint.transactions.common.transaction import Transaction
-from planetmint.transactions.common.schema import (
-_validate_schema, TX_SCHEMA_COMMON, TX_SCHEMA_CREATE)
+from planetmint.transactions.common.schema import _validate_schema, TX_SCHEMA_COMMON, TX_SCHEMA_CREATE
class Election(Transaction):
"""Represents election transactions.
-To implement a custom election, create a class deriving from this one
-with OPERATION set to the election operation, ALLOWED_OPERATIONS
-set to (OPERATION,), CREATE set to OPERATION.
+To implement a custom election, create a class deriving from this one
+with OPERATION set to the election operation, ALLOWED_OPERATIONS
+set to (OPERATION,), CREATE set to OPERATION.
"""
OPERATION = None
# Custom validation schema
TX_SCHEMA_CUSTOM = None
# Election Statuses:
-ONGOING = 'ongoing'
-CONCLUDED = 'concluded'
-INCONCLUSIVE = 'inconclusive'
+ONGOING = "ongoing"
+CONCLUDED = "concluded"
+INCONCLUSIVE = "inconclusive"
# Vote ratio to approve an election
ELECTION_THRESHOLD = 2 / 3
@@ -51,18 +54,18 @@ class Election(Transaction):
latest_block = planet.get_latest_block()
if latest_block is None:
return None
-return planet.get_validator_change(latest_block['height'])
+return planet.get_validator_change(latest_block["height"])
@classmethod
def get_validators(cls, planet, height=None):
"""Return a dictionary of validators with key as `public_key` and
-value as the `voting_power`
+value as the `voting_power`
"""
validators = {}
for validator in planet.get_validators(height):
# NOTE: we assume that Tendermint encodes public key in base64
-public_key = public_key_from_ed25519_key(key_from_base64(validator['public_key']['value']))
-validators[public_key] = validator['voting_power']
+public_key = public_key_from_ed25519_key(key_from_base64(validator["public_key"]["value"]))
+validators[public_key] = validator["voting_power"]
return validators
@@ -114,26 +117,25 @@ class Election(Transaction):
duplicates = any(txn for txn in current_transactions if txn.id == self.id)
if planet.is_committed(self.id) or duplicates:
-raise DuplicateTransaction('transaction `{}` already exists'
-.format(self.id))
+raise DuplicateTransaction("transaction `{}` already exists".format(self.id))
if not self.inputs_valid(input_conditions):
-raise InvalidSignature('Transaction signature is invalid.')
+raise InvalidSignature("Transaction signature is invalid.")
current_validators = self.get_validators(planet)
# NOTE: Proposer should be a single node
if len(self.inputs) != 1 or len(self.inputs[0].owners_before) != 1:
-raise MultipleInputsError('`tx_signers` must be a list instance of length one')
+raise MultipleInputsError("`tx_signers` must be a list instance of length one")
# NOTE: Check if the proposer is a validator.
[election_initiator_node_pub_key] = self.inputs[0].owners_before
if election_initiator_node_pub_key not in current_validators.keys():
-raise InvalidProposer('Public key is not a part of the validator set')
+raise InvalidProposer("Public key is not a part of the validator set")
# NOTE: Check if all validators have been assigned votes equal to their voting power
if not self.is_same_topology(current_validators, self.outputs):
-raise UnequalValidatorSet('Validator set much be exactly same to the outputs of election')
+raise UnequalValidatorSet("Validator set much be exactly same to the outputs of election")
return self
@@ -141,10 +143,10 @@
def generate(cls, initiator, voters, election_data, metadata=None):
# Break symmetry in case we need to call an election with the same properties twice
uuid = uuid4()
-election_data['seed'] = str(uuid)
+election_data["seed"] = str(uuid)
(inputs, outputs) = Create.validate_create(initiator, voters, election_data, metadata)
-election = cls(cls.OPERATION, {'data': election_data}, inputs, outputs, metadata)
+election = cls(cls.OPERATION, {"data": election_data}, inputs, outputs, metadata)
cls.validate_schema(election.to_dict())
return election
@@ -174,21 +176,19 @@
def count_votes(cls, election_pk, transactions, getter=getattr):
votes = 0
for txn in transactions:
-if getter(txn, 'operation') == Vote.OPERATION:
-for output in getter(txn, 'outputs'):
+if getter(txn, "operation") == Vote.OPERATION:
+for output in getter(txn, "outputs"):
# NOTE: We enforce that a valid vote to election id will have only
# election_pk in the output public keys, including any other public key
# along with election_pk will lead to vote being not considered valid.
-if len(getter(output, 'public_keys')) == 1 and [election_pk] == getter(output, 'public_keys'):
-votes = votes + int(getter(output, 'amount'))
+if len(getter(output, "public_keys")) == 1 and [election_pk] == getter(output, "public_keys"):
+votes = votes + int(getter(output, "amount"))
return votes
def get_commited_votes(self, planet, election_pk=None):
if election_pk is None:
election_pk = self.to_public_key(self.id)
-txns = list(backend.query.get_asset_tokens_for_public_key(planet.connection,
-self.id,
-election_pk))
+txns = list(backend.query.get_asset_tokens_for_public_key(planet.connection, self.id, election_pk))
return self.count_votes(election_pk, txns, dict.get)
def has_concluded(self, planet, current_votes=[]):
@@ -208,15 +208,14 @@
votes_current = self.count_votes(election_pk, current_votes)
total_votes = sum(output.amount for output in self.outputs)
-if (votes_committed < (2 / 3) * total_votes) and \
-(votes_committed + votes_current >= (2 / 3) * total_votes):
+if (votes_committed < (2 / 3) * total_votes) and (votes_committed + votes_current >= (2 / 3) * total_votes):
return True
return False
def get_status(self, planet):
election = self.get_election(self.id, planet)
-if election and election['is_concluded']:
+if election and election["is_concluded"]:
return self.CONCLUDED
return self.INCONCLUSIVE if self.has_validator_set_changed(planet) else self.ONGOING
@@ -226,11 +225,11 @@
if latest_change is None:
return False
-latest_change_height = latest_change['height']
+latest_change_height = latest_change["height"]
election = self.get_election(self.id, planet)
-return latest_change_height > election['height']
+return latest_change_height > election["height"]
def get_election(self, election_id, planet):
return planet.get_election(election_id)
@@ -239,14 +238,14 @@
planet.store_election(self.id, height, is_concluded)
def show_election(self, planet):
-data = self.asset['data']
-if 'public_key' in data.keys():
-data['public_key'] = public_key_to_base64(data['public_key']['value'])
-response = ''
+data = self.asset["data"]
+if "public_key" in data.keys():
+data["public_key"] = public_key_to_base64(data["public_key"]["value"])
+response = ""
for k, v in data.items():
-if k != 'seed':
-response += f'{k}={v}\n'
-response += f'status={self.get_status(planet)}'
+if k != "seed":
+response += f"{k}={v}\n"
+response += f"status={self.get_status(planet)}"
return response
@@ -257,8 +256,7 @@
if not isinstance(tx, Election):
continue
-elections.append({'election_id': tx.id, 'height': height,
-'is_concluded': False})
+elections.append({"election_id": tx.id, "height": height, "is_concluded": False})
return elections
@classmethod
@@ -268,7 +266,7 @@
if not isinstance(tx, Vote):
continue
-election_id = tx.asset['id']
+election_id = tx.asset["id"]
if election_id not in elections:
elections[election_id] = []
elections[election_id].append(tx)
@@ -277,26 +275,26 @@
@classmethod
def process_block(cls, planet, new_height, txns):
"""Looks for election and vote transactions inside the block, records
-and processes elections.
+and processes elections.
-Every election is recorded in the database.
+Every election is recorded in the database.
-Every vote has a chance to conclude the corresponding election. When
-an election is concluded, the corresponding database record is
-marked as such.
+Every vote has a chance to conclude the corresponding election. When
+an election is concluded, the corresponding database record is
+marked as such.
-Elections and votes are processed in the order in which they
-appear in the block. Elections are concluded in the order of
-appearance of their first votes in the block.
+Elections and votes are processed in the order in which they
+appear in the block. Elections are concluded in the order of
+appearance of their first votes in the block.
-For every election concluded in the block, calls its `on_approval`
-method. The returned value of the last `on_approval`, if any,
-is a validator set update to be applied in one of the following blocks.
+For every election concluded in the block, calls its `on_approval`
+method. The returned value of the last `on_approval`, if any,
+is a validator set update to be applied in one of the following blocks.
-`on_approval` methods are implemented by elections of particular type.
-The method may contain side effects but should be idempotent. To account
-for other concluded elections, if it requires so, the method should
-rely on the database state.
+`on_approval` methods are implemented by elections of particular type.
+The method may contain side effects but should be idempotent. To account
+for other concluded elections, if it requires so, the method should
+rely on the database state.
"""
# elections initiated in this block
initiated_elections = cls._get_initiated_elections(new_height, txns)
@@ -324,9 +322,9 @@
@classmethod
def rollback(cls, planet, new_height, txn_ids):
"""Looks for election and vote transactions inside the block and
-cleans up the database artifacts possibly created in `process_blocks`.
+cleans up the database artifacts possibly created in `process_blocks`.
-Part of the `end_block`/`commit` crash recovery.
+Part of the `end_block`/`commit` crash recovery.
"""
# delete election records for elections initiated at this height and
@@ -342,13 +340,13 @@
def on_approval(self, planet, new_height):
"""Override to update the database state according to the
-election rules. Consider the current database state to account for
-other concluded elections, if required.
+election rules. Consider the current database state to account for
+other concluded elections, if required.
"""
raise NotImplementedError
def on_rollback(self, planet, new_height):
"""Override to clean up the database artifacts possibly created
-in `on_approval`. Part of the `end_block`/`commit` crash recovery.
+in `on_approval`. Part of the `end_block`/`commit` crash recovery.
"""
raise NotImplementedError
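A sketch of the vote arithmetic earlier in this file, using plain dicts and the dict.get getter exactly as get_commited_votes does; only outputs whose sole public key is the election public key are counted, and an election concludes once votes reach 2/3 of the total voting power (values are illustrative):

election_pk = "ELECTION_PK"
txns = [
    {"operation": "VOTE", "outputs": [{"public_keys": [election_pk], "amount": 5}]},
    {"operation": "VOTE", "outputs": [{"public_keys": [election_pk, "other"], "amount": 4}]},
]
votes = 0
for txn in txns:
    if txn.get("operation") == "VOTE":
        for output in txn.get("outputs"):
            if len(output.get("public_keys")) == 1 and [election_pk] == output.get("public_keys"):
                votes += int(output.get("amount"))

assert votes == 5  # the second output is ignored: it carries an extra public key
total_votes = 6
assert votes >= (2 / 3) * total_votes  # 5 >= 4.0, so this election could conclude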


@@ -6,12 +6,16 @@
from planetmint.transactions.types.assets.create import Create
from planetmint.transactions.types.assets.transfer import Transfer
from planetmint.transactions.common.schema import (
-_validate_schema, TX_SCHEMA_COMMON, TX_SCHEMA_TRANSFER, TX_SCHEMA_VOTE)
+_validate_schema,
+TX_SCHEMA_COMMON,
+TX_SCHEMA_TRANSFER,
+TX_SCHEMA_VOTE,
+)
class Vote(Transfer):
-OPERATION = 'VOTE'
+OPERATION = "VOTE"
# NOTE: This class inherits TRANSFER txn type. The `TRANSFER` property is
# overriden to re-use methods from parent class
TRANSFER = OPERATION
@@ -41,14 +45,14 @@
@classmethod
def generate(cls, inputs, recipients, election_id, metadata=None):
(inputs, outputs) = cls.validate_transfer(inputs, recipients, election_id, metadata)
-election_vote = cls(cls.OPERATION, {'id': election_id}, inputs, outputs, metadata)
+election_vote = cls(cls.OPERATION, {"id": election_id}, inputs, outputs, metadata)
cls.validate_schema(election_vote.to_dict())
return election_vote
@classmethod
def validate_schema(cls, tx):
"""Validate the validator election vote transaction. Since `VOTE` extends `TRANSFER`
-transaction, all the validations for `CREATE` transaction should be inherited
+transaction, all the validations for `CREATE` transaction should be inherited
"""
_validate_schema(TX_SCHEMA_COMMON, tx)
_validate_schema(TX_SCHEMA_TRANSFER, tx)


@@ -4,4 +4,4 @@
# Code is Apache-2.0 and docs are CC-BY-4.0
-from planetmint.upsert_validator.validator_election import ValidatorElection # noqa
+from planetmint.upsert_validator.validator_election import ValidatorElection  # noqa


@@ -6,12 +6,12 @@
from planetmint.transactions.common.exceptions import InvalidPowerChange
from planetmint.transactions.types.elections.election import Election
from planetmint.transactions.common.schema import TX_SCHEMA_VALIDATOR_ELECTION
-from .validator_utils import (new_validator_set, encode_validator, validate_asset_public_key)
+from .validator_utils import new_validator_set, encode_validator, validate_asset_public_key
class ValidatorElection(Election):
-OPERATION = 'VALIDATOR_ELECTION'
+OPERATION = "VALIDATOR_ELECTION"
# NOTE: this transaction class extends create so the operation inheritence is achieved
# by renaming CREATE to VALIDATOR_ELECTION
CREATE = OPERATION
@@ -19,29 +19,28 @@
TX_SCHEMA_CUSTOM = TX_SCHEMA_VALIDATOR_ELECTION
def validate(self, planet, current_transactions=[]):
"""For more details refer BEP-21: https://github.com/planetmint/BEPs/tree/master/21
"""
"""For more details refer BEP-21: https://github.com/planetmint/BEPs/tree/master/21"""
current_validators = self.get_validators(planet)
super(ValidatorElection, self).validate(planet, current_transactions=current_transactions)
# NOTE: change more than 1/3 of the current power is not allowed
-if self.asset['data']['power'] >= (1 / 3) * sum(current_validators.values()):
-raise InvalidPowerChange('`power` change must be less than 1/3 of total power')
+if self.asset["data"]["power"] >= (1 / 3) * sum(current_validators.values()):
+raise InvalidPowerChange("`power` change must be less than 1/3 of total power")
return self
@classmethod
def validate_schema(cls, tx):
super(ValidatorElection, cls).validate_schema(tx)
-validate_asset_public_key(tx['asset']['data']['public_key'])
+validate_asset_public_key(tx["asset"]["data"]["public_key"])
def has_concluded(self, planet, *args, **kwargs):
latest_block = planet.get_latest_block()
if latest_block is not None:
-latest_block_height = latest_block['height']
-latest_validator_change = planet.get_validator_change()['height']
+latest_block_height = latest_block["height"]
+latest_validator_change = planet.get_validator_change()["height"]
# TODO change to `latest_block_height + 3` when upgrading to Tendermint 0.24.0.
if latest_validator_change == latest_block_height + 2:
@@ -51,17 +50,15 @@
return super().has_concluded(planet, *args, **kwargs)
def on_approval(self, planet, new_height):
-validator_updates = [self.asset['data']]
+validator_updates = [self.asset["data"]]
curr_validator_set = planet.get_validators(new_height)
-updated_validator_set = new_validator_set(curr_validator_set,
-validator_updates)
+updated_validator_set = new_validator_set(curr_validator_set, validator_updates)
-updated_validator_set = [v for v in updated_validator_set
-if v['voting_power'] > 0]
+updated_validator_set = [v for v in updated_validator_set if v["voting_power"] > 0]
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
planet.store_validator_set(new_height + 1, updated_validator_set)
-return encode_validator(self.asset['data'])
+return encode_validator(self.asset["data"])
def on_rollback(self, planetmint, new_height):
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
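Worked numbers for the power check in validate above: a proposed validator power is rejected once it reaches 1/3 of the current total (values are illustrative):

current_powers = {"v1": 4, "v2": 4, "v3": 4}  # total voting power: 12
assert 4 >= (1 / 3) * 12  # proposed power 4 -> InvalidPowerChange is raised
assert 3 < (1 / 3) * 12   # proposed power 3 -> allowed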


@@ -8,67 +8,72 @@ from planetmint.transactions.common.exceptions import InvalidPublicKey
def encode_validator(v):
-ed25519_public_key = v['public_key']['value']
+ed25519_public_key = v["public_key"]["value"]
pub_key = keys_pb2.PublicKey(ed25519=bytes.fromhex(ed25519_public_key))
-return types_pb2.ValidatorUpdate(pub_key=pub_key, power=v['power'])
+return types_pb2.ValidatorUpdate(pub_key=pub_key, power=v["power"])
def decode_validator(v):
-return {'public_key': {'type': 'ed25519-base64',
-'value': codecs.encode(v.pub_key.ed25519, 'base64').decode().rstrip('\n')},
-'voting_power': v.power}
+return {
+"public_key": {
+"type": "ed25519-base64",
+"value": codecs.encode(v.pub_key.ed25519, "base64").decode().rstrip("\n"),
+},
+"voting_power": v.power,
+}
def new_validator_set(validators, updates):
validators_dict = {}
for v in validators:
-validators_dict[v['public_key']['value']] = v
+validators_dict[v["public_key"]["value"]] = v
updates_dict = {}
for u in updates:
-decoder = get_public_key_decoder(u['public_key'])
-public_key64 = base64.b64encode(decoder(u['public_key']['value'])).decode('utf-8')
-updates_dict[public_key64] = {'public_key': {'type': 'ed25519-base64',
-'value': public_key64},
-'voting_power': u['power']}
+decoder = get_public_key_decoder(u["public_key"])
+public_key64 = base64.b64encode(decoder(u["public_key"]["value"])).decode("utf-8")
+updates_dict[public_key64] = {
+"public_key": {"type": "ed25519-base64", "value": public_key64},
+"voting_power": u["power"],
+}
new_validators_dict = {**validators_dict, **updates_dict}
return list(new_validators_dict.values())
def encode_pk_to_base16(validator):
-pk = validator['public_key']
+pk = validator["public_key"]
decoder = get_public_key_decoder(pk)
-public_key16 = base64.b16encode(decoder(pk['value'])).decode('utf-8')
+public_key16 = base64.b16encode(decoder(pk["value"])).decode("utf-8")
-validator['public_key']['value'] = public_key16
+validator["public_key"]["value"] = public_key16
return validator
def validate_asset_public_key(pk):
-pk_binary = pk['value'].encode('utf-8')
+pk_binary = pk["value"].encode("utf-8")
decoder = get_public_key_decoder(pk)
try:
pk_decoded = decoder(pk_binary)
if len(pk_decoded) != 32:
-raise InvalidPublicKey('Public key should be of size 32 bytes')
+raise InvalidPublicKey("Public key should be of size 32 bytes")
except binascii.Error:
-raise InvalidPublicKey('Invalid `type` specified for public key `value`')
+raise InvalidPublicKey("Invalid `type` specified for public key `value`")
def get_public_key_decoder(pk):
encoding = pk['type']
encoding = pk["type"]
decoder = base64.b64decode
if encoding == 'ed25519-base16':
if encoding == "ed25519-base16":
decoder = base64.b16decode
elif encoding == 'ed25519-base32':
elif encoding == "ed25519-base32":
decoder = base64.b32decode
elif encoding == 'ed25519-base64':
elif encoding == "ed25519-base64":
decoder = base64.b64decode
else:
raise InvalidPublicKey('Invalid `type` specified for public key `value`')
raise InvalidPublicKey("Invalid `type` specified for public key `value`")
return decoder
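
A worked example of the merge semantics above (a sketch, assuming the helpers shown in this file are in scope): both the current set and the updates are keyed on the base64-encoded public key, so an update simply overwrites the matching entry, and an entry with power 0 survives the merge only to be filtered out later by `on_approval`.

    import base64

    pk = base64.b64encode(bytes(32)).decode()  # placeholder 32-byte key, not a real validator
    current = [{"public_key": {"type": "ed25519-base64", "value": pk}, "voting_power": 10}]
    updates = [{"public_key": {"type": "ed25519-base64", "value": pk}, "power": 0}]

    assert new_validator_set(current, updates) == [
        {"public_key": {"type": "ed25519-base64", "value": pk}, "voting_power": 0}
    ]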


@@ -17,9 +17,7 @@ from planetmint.transactions.common.crypto import key_pair_from_ed25519_key

 class ProcessGroup(object):
-    def __init__(self, concurrency=None, group=None, target=None, name=None,
-                 args=None, kwargs=None, daemon=None):
+    def __init__(self, concurrency=None, group=None, target=None, name=None, args=None, kwargs=None, daemon=None):
         self.concurrency = concurrency or mp.cpu_count()
         self.group = group
         self.target = target
@@ -31,9 +29,14 @@ class ProcessGroup(object):

     def start(self):
         for i in range(self.concurrency):
-            proc = mp.Process(group=self.group, target=self.target,
-                              name=self.name, args=self.args,
-                              kwargs=self.kwargs, daemon=self.daemon)
+            proc = mp.Process(
+                group=self.group,
+                target=self.target,
+                name=self.name,
+                args=self.args,
+                kwargs=self.kwargs,
+                daemon=self.daemon,
+            )
             proc.start()
             self.processes.append(proc)
@@ -117,8 +120,8 @@ def condition_details_has_owner(condition_details, owner):
         bool: True if the public key is found in the condition details, False otherwise

    """
-    if 'subconditions' in condition_details:
-        result = condition_details_has_owner(condition_details['subconditions'], owner)
+    if "subconditions" in condition_details:
+        result = condition_details_has_owner(condition_details["subconditions"], owner)
         if result:
             return True
@@ -128,8 +131,7 @@ def condition_details_has_owner(condition_details, owner):
             if result:
                 return True
     else:
-        if 'public_key' in condition_details \
-                and owner == condition_details['public_key']:
+        if "public_key" in condition_details and owner == condition_details["public_key"]:
             return True
     return False
@@ -157,7 +159,7 @@ class Lazy:
         return self

     def __getitem__(self, key):
-        self.stack.append('__getitem__')
+        self.stack.append("__getitem__")
         self.stack.append(([key], {}))
         return self
@@ -184,7 +186,7 @@ class Lazy:

 def load_node_key(path):
     with open(path) as json_data:
         priv_validator = json.load(json_data)
-        priv_key = priv_validator['priv_key']['value']
+        priv_key = priv_validator["priv_key"]["value"]
         hex_private_key = key_from_base64(priv_key)
         return key_pair_from_ed25519_key(hex_private_key)
@@ -200,7 +202,7 @@ def tendermint_version_is_compatible(running_tm_ver):
     """
     # Splitting because version can look like this e.g. 0.22.8-40d6dc2e
-    tm_ver = running_tm_ver.split('-')
+    tm_ver = running_tm_ver.split("-")
     if not tm_ver:
         return False
     for ver in __tm_supported_versions__:
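
A quick illustration of the recursion in `condition_details_has_owner` (a sketch, assuming the branch elided between these hunks iterates over subcondition lists, which is what the recursive call with `condition_details["subconditions"]` implies):

    details = {
        "type": "threshold-sha-256",
        "subconditions": [
            {"type": "ed25519-sha-256", "public_key": "5fLb..."},  # placeholder key
        ],
    }
    assert condition_details_has_owner(details, "5fLb...")
    assert not condition_details_has_owner(details, "someone-else")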


@@ -4,7 +4,7 @@
 # Code is Apache-2.0 and docs are CC-BY-4.0

-class BaseValidationRules():
+class BaseValidationRules:
     """Base validation rules for Planetmint.

     A validation plugin must expose a class inheriting from this one via an entry_point.


@@ -21,7 +21,7 @@ def add_routes(app):
     for (prefix, routes) in API_SECTIONS:
         api = Api(app, prefix=prefix)
         for ((pattern, resource, *args), kwargs) in routes:
-            kwargs.setdefault('strict_slashes', False)
+            kwargs.setdefault("strict_slashes", False)
             api.add_resource(resource, pattern, *args, **kwargs)
@@ -30,20 +30,20 @@ def r(*args, **kwargs):

 ROUTES_API_V1 = [
-    r('/', info.ApiV1Index),
-    r('assets/', assets.AssetListApi),
-    r('metadata/', metadata.MetadataApi),
-    r('blocks/<int:block_id>', blocks.BlockApi),
-    r('blocks/latest', blocks.LatestBlock),
-    r('blocks/', blocks.BlockListApi),
-    r('transactions/<string:tx_id>', tx.TransactionApi),
-    r('transactions', tx.TransactionListApi),
-    r('outputs/', outputs.OutputListApi),
-    r('validators/', validators.ValidatorsApi),
+    r("/", info.ApiV1Index),
+    r("assets/", assets.AssetListApi),
+    r("metadata/", metadata.MetadataApi),
+    r("blocks/<int:block_id>", blocks.BlockApi),
+    r("blocks/latest", blocks.LatestBlock),
+    r("blocks/", blocks.BlockListApi),
+    r("transactions/<string:tx_id>", tx.TransactionApi),
+    r("transactions", tx.TransactionListApi),
+    r("outputs/", outputs.OutputListApi),
+    r("validators/", validators.ValidatorsApi),
 ]

 API_SECTIONS = [
-    (None, [r('/', info.RootIndex)]),
-    ('/api/v1/', ROUTES_API_V1),
+    (None, [r("/", info.RootIndex)]),
+    ("/api/v1/", ROUTES_API_V1),
 ]
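
For context, the `r` helper named in the hunk header appears to just pack its arguments into the `((pattern, resource, *args), kwargs)` tuples that `add_routes` unpacks; a sketch of that convention:

    def r(*args, **kwargs):
        return (args, kwargs)

    (pattern, resource, *extra), kwargs = r("blocks/<int:block_id>", object())
    kwargs.setdefault("strict_slashes", False)  # what add_routes does before registering
    assert pattern == "blocks/<int:block_id>" and kwargs == {"strict_slashes": False}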


@@ -44,13 +44,14 @@ class StandaloneApplication(gunicorn.app.base.BaseApplication):
     def load_config(self):
         # find a better way to pass this such that
         # the custom logger class can access it.
-        custom_log_config = self.options.get('custom_log_config')
-        self.cfg.env_orig['custom_log_config'] = custom_log_config
+        custom_log_config = self.options.get("custom_log_config")
+        self.cfg.env_orig["custom_log_config"] = custom_log_config

-        config = dict((key, value) for key, value in self.options.items()
-                      if key in self.cfg.settings and value is not None)
+        config = dict(
+            (key, value) for key, value in self.options.items() if key in self.cfg.settings and value is not None
+        )

-        config['default_proc_name'] = 'planetmint_gunicorn'
+        config["default_proc_name"] = "planetmint_gunicorn"
         for key, value in config.items():
             # not sure if we need the `key.lower` here, will just
             # keep it for now.
@@ -81,7 +82,7 @@ def create_app(*, debug=False, threads=1, planetmint_factory=None):
     app.debug = debug

-    app.config['bigchain_pool'] = utils.pool(planetmint_factory, size=threads)
+    app.config["bigchain_pool"] = utils.pool(planetmint_factory, size=threads)

     add_routes(app)
@@ -101,18 +102,18 @@ def create_server(settings, log_config=None, planetmint_factory=None):
     settings = copy.deepcopy(settings)

-    if not settings.get('workers'):
-        settings['workers'] = (multiprocessing.cpu_count() * 2) + 1
+    if not settings.get("workers"):
+        settings["workers"] = (multiprocessing.cpu_count() * 2) + 1

-    if not settings.get('threads'):
+    if not settings.get("threads"):
         # Note: Threading is not recommended currently, as the frontend workload
         # is largely CPU bound and parallelisation across Python threads makes it
         # slower.
-        settings['threads'] = 1
+        settings["threads"] = 1

-    settings['custom_log_config'] = log_config
-    app = create_app(debug=settings.get('debug', False),
-                     threads=settings['threads'],
-                     planetmint_factory=planetmint_factory)
+    settings["custom_log_config"] = log_config
+    app = create_app(
+        debug=settings.get("debug", False), threads=settings["threads"], planetmint_factory=planetmint_factory
+    )
     standalone = StandaloneApplication(app, options=settings)
     return standalone
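
For reference, `create_server` only builds the gunicorn wrapper; the caller still has to run it. A minimal sketch (the import path is an assumption; a near-empty settings dict falls back to the defaults above, i.e. the common gunicorn heuristic of (2 * CPUs) + 1 workers and a single thread):

    from planetmint.web.server import create_server  # assumed module path

    server = create_server({"debug": False})  # real callers pass a planetmint_factory as well
    server.run()  # gunicorn.app.base.BaseApplication.run()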


@@ -22,9 +22,9 @@ class StripContentTypeMiddleware:
     def __call__(self, environ, start_response):
         """Run the middleware and then call the original WSGI application."""

-        if environ['REQUEST_METHOD'] == 'GET':
+        if environ["REQUEST_METHOD"] == "GET":
             try:
-                del environ['CONTENT_TYPE']
+                del environ["CONTENT_TYPE"]
             except KeyError:
                 pass
             else:
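
The point of the branch above: some clients attach a Content-Type header to bodyless GET requests, which can trip body parsing downstream, so the middleware drops it before the request reaches the app. Wiring follows the usual WSGI pattern (a sketch, assuming the constructor simply stores the wrapped app):

    def app(environ, start_response):  # stand-in for the real Flask app
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]

    wrapped = StripContentTypeMiddleware(app)  # GETs now reach `app` without CONTENT_TYPE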


@@ -30,17 +30,17 @@ class AssetListApi(Resource):
             A list of assets that match the query.
         """
         parser = reqparse.RequestParser()
-        parser.add_argument('search', type=str, required=True)
-        parser.add_argument('limit', type=int)
+        parser.add_argument("search", type=str, required=True)
+        parser.add_argument("limit", type=int)
         args = parser.parse_args()

-        if not args['search']:
-            return make_error(400, 'text_search cannot be empty')
-        if not args['limit']:
+        if not args["search"]:
+            return make_error(400, "text_search cannot be empty")
+        if not args["limit"]:
             # if the limit is not specified do not pass None to `text_search`
-            del args['limit']
+            del args["limit"]

-        pool = current_app.config['bigchain_pool']
+        pool = current_app.config["bigchain_pool"]
         with pool() as planet:
             assets = planet.text_search(**args)
@@ -49,7 +49,4 @@ class AssetListApi(Resource):
             # This only works with MongoDB as the backend
             return list(assets)
         except OperationError as e:
-            return make_error(
-                400,
-                '({}): {}'.format(type(e).__name__, e)
-            )
+            return make_error(400, "({}): {}".format(type(e).__name__, e))
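
Put together with the route table earlier in this diff, a request against this endpoint looks roughly like the following (a sketch; localhost:9984 is an assumed node address, not something this diff establishes):

    import requests

    resp = requests.get(
        "http://localhost:9984/api/v1/assets/",
        params={"search": "planet", "limit": 10},  # `search` is required, `limit` optional
    )
    print(resp.json())  # list of matching assets (MongoDB text search under the hood)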


@@ -17,13 +17,13 @@ logger = logging.getLogger(__name__)

 def make_error(status_code, message=None):
     if status_code == 404 and message is None:
-        message = 'Not found'
+        message = "Not found"

-    response_content = {'status': status_code, 'message': message}
-    request_info = {'method': request.method, 'path': request.path}
+    response_content = {"status": status_code, "message": message}
+    request_info = {"method": request.method, "path": request.path}
     request_info.update(response_content)

-    logger.error('HTTP API error: %(status)s - %(method)s:%(path)s - %(message)s', request_info)
+    logger.error("HTTP API error: %(status)s - %(method)s:%(path)s - %(message)s", request_info)

     response = jsonify(response_content)
     response.status_code = status_code
@@ -37,10 +37,10 @@ def base_ws_uri():
     customized (typically when running behind NAT, firewall, etc.)
     """

-    config_wsserver = Config().get()['wsserver']
+    config_wsserver = Config().get()["wsserver"]

-    scheme = config_wsserver['advertised_scheme']
-    host = config_wsserver['advertised_host']
-    port = config_wsserver['advertised_port']
+    scheme = config_wsserver["advertised_scheme"]
+    host = config_wsserver["advertised_host"]
+    port = config_wsserver["advertised_port"]

-    return '{}://{}:{}'.format(scheme, host, port)
+    return "{}://{}:{}".format(scheme, host, port)


@@ -21,7 +21,7 @@ class LatestBlock(Resource):
             A JSON string containing the data about the block.
         """

-        pool = current_app.config['bigchain_pool']
+        pool = current_app.config["bigchain_pool"]

         with pool() as planet:
             block = planet.get_latest_block()
@@ -43,7 +43,7 @@ class BlockApi(Resource):
             A JSON string containing the data about the block.
         """

-        pool = current_app.config['bigchain_pool']
+        pool = current_app.config["bigchain_pool"]

         with pool() as planet:
             block = planet.get_block(block_id=block_id)
@@ -64,12 +64,12 @@ class BlockListApi(Resource):
                 "valid", "invalid", "undecided".
         """
         parser = reqparse.RequestParser()
-        parser.add_argument('transaction_id', type=str, required=True)
+        parser.add_argument("transaction_id", type=str, required=True)
         args = parser.parse_args(strict=True)

-        tx_id = args['transaction_id']
+        tx_id = args["transaction_id"]

-        pool = current_app.config['bigchain_pool']
+        pool = current_app.config["bigchain_pool"]

         with pool() as planet:
             blocks = planet.get_block_containing_tx(tx_id)


@@ -15,23 +15,20 @@ from planetmint.web.websocket_server import EVENTS_ENDPOINT, EVENTS_ENDPOINT_BLOCKS

 class RootIndex(Resource):
     def get(self):
-        docs_url = [
-            'https://docs.planetmint.io/projects/server/en/v',
-            version.__version__ + '/'
-        ]
-        return flask.jsonify({
-            'api': {
-                'v1': get_api_v1_info('/api/v1/')
-            },
-            'docs': ''.join(docs_url),
-            'software': 'Planetmint',
-            'version': version.__version__,
-        })
+        docs_url = ["https://docs.planetmint.io/projects/server/en/v", version.__version__ + "/"]
+        return flask.jsonify(
+            {
+                "api": {"v1": get_api_v1_info("/api/v1/")},
+                "docs": "".join(docs_url),
+                "software": "Planetmint",
+                "version": version.__version__,
+            }
+        )

 class ApiV1Index(Resource):
     def get(self):
-        return flask.jsonify(get_api_v1_info('/'))
+        return flask.jsonify(get_api_v1_info("/"))

 def get_api_v1_info(api_prefix):
@@ -41,19 +38,19 @@ def get_api_v1_info(api_prefix):
     websocket_root_tx = base_ws_uri() + EVENTS_ENDPOINT
     websocket_root_block = base_ws_uri() + EVENTS_ENDPOINT_BLOCKS
     docs_url = [
-        'https://docs.planetmint.io/projects/server/en/v',
+        "https://docs.planetmint.io/projects/server/en/v",
         version.__version__,
-        '/http-client-server-api.html',
+        "/http-client-server-api.html",
     ]

     return {
-        'docs': ''.join(docs_url),
-        'transactions': '{}transactions/'.format(api_prefix),
-        'blocks': '{}blocks/'.format(api_prefix),
-        'assets': '{}assets/'.format(api_prefix),
-        'outputs': '{}outputs/'.format(api_prefix),
-        'streams': websocket_root_tx,
-        'streamedblocks': websocket_root_block,
-        'metadata': '{}metadata/'.format(api_prefix),
-        'validators': '{}validators'.format(api_prefix),
+        "docs": "".join(docs_url),
+        "transactions": "{}transactions/".format(api_prefix),
+        "blocks": "{}blocks/".format(api_prefix),
+        "assets": "{}assets/".format(api_prefix),
+        "outputs": "{}outputs/".format(api_prefix),
+        "streams": websocket_root_tx,
+        "streamedblocks": websocket_root_block,
+        "metadata": "{}metadata/".format(api_prefix),
+        "validators": "{}validators".format(api_prefix),
     }


@@ -30,25 +30,22 @@ class MetadataApi(Resource):
             A list of metadata that match the query.
         """
         parser = reqparse.RequestParser()
-        parser.add_argument('search', type=str, required=True)
-        parser.add_argument('limit', type=int)
+        parser.add_argument("search", type=str, required=True)
+        parser.add_argument("limit", type=int)
         args = parser.parse_args()

-        if not args['search']:
-            return make_error(400, 'text_search cannot be empty')
-        if not args['limit']:
-            del args['limit']
+        if not args["search"]:
+            return make_error(400, "text_search cannot be empty")
+        if not args["limit"]:
+            del args["limit"]

-        pool = current_app.config['bigchain_pool']
+        pool = current_app.config["bigchain_pool"]

         with pool() as planet:
-            args['table'] = 'meta_data'
+            args["table"] = "meta_data"
             metadata = planet.text_search(**args)

         try:
             return list(metadata)
         except OperationError as e:
-            return make_error(
-                400,
-                '({}): {}'.format(type(e).__name__, e)
-            )
+            return make_error(400, "({}): {}".format(type(e).__name__, e))


@@ -18,14 +18,11 @@ class OutputListApi(Resource):
             A :obj:`list` of :cls:`str` of links to outputs.
         """
         parser = reqparse.RequestParser()
-        parser.add_argument('public_key', type=parameters.valid_ed25519,
-                            required=True)
-        parser.add_argument('spent', type=parameters.valid_bool)
+        parser.add_argument("public_key", type=parameters.valid_ed25519, required=True)
+        parser.add_argument("spent", type=parameters.valid_bool)
         args = parser.parse_args(strict=True)

-        pool = current_app.config['bigchain_pool']
+        pool = current_app.config["bigchain_pool"]
         with pool() as planet:
-            outputs = planet.get_outputs_filtered(args['public_key'],
-                                                  args['spent'])
-        return [{'transaction_id': output.txid, 'output_index': output.output}
-                for output in outputs]
+            outputs = planet.get_outputs_filtered(args["public_key"], args["spent"])
+        return [{"transaction_id": output.txid, "output_index": output.output} for output in outputs]


@@ -6,45 +6,47 @@
 import re

 from planetmint.transactions.common.transaction_mode_types import (
-    BROADCAST_TX_COMMIT, BROADCAST_TX_ASYNC, BROADCAST_TX_SYNC)
+    BROADCAST_TX_COMMIT,
+    BROADCAST_TX_ASYNC,
+    BROADCAST_TX_SYNC,
+)

 def valid_txid(txid):
-    if re.match('^[a-fA-F0-9]{64}$', txid):
+    if re.match("^[a-fA-F0-9]{64}$", txid):
         return txid.lower()
-    raise ValueError('Invalid hash')
+    raise ValueError("Invalid hash")

 def valid_bool(val):
     val = val.lower()
-    if val == 'true':
+    if val == "true":
         return True
-    if val == 'false':
+    if val == "false":
         return False
     raise ValueError('Boolean value must be "true" or "false" (lowercase)')

 def valid_ed25519(key):
-    if (re.match('^[1-9a-zA-Z]{43,44}$', key) and not
-            re.match('.*[Il0O]', key)):
+    if re.match("^[1-9a-zA-Z]{43,44}$", key) and not re.match(".*[Il0O]", key):
         return key
-    raise ValueError('Invalid base58 ed25519 key')
+    raise ValueError("Invalid base58 ed25519 key")

 def valid_operation(op):
     op = op.upper()
-    if op == 'CREATE':
-        return 'CREATE'
-    if op == 'TRANSFER':
-        return 'TRANSFER'
+    if op == "CREATE":
+        return "CREATE"
+    if op == "TRANSFER":
+        return "TRANSFER"
     raise ValueError('Operation must be "CREATE" or "TRANSFER"')

 def valid_mode(mode):
-    if mode == 'async':
+    if mode == "async":
         return BROADCAST_TX_ASYNC
-    if mode == 'sync':
+    if mode == "sync":
         return BROADCAST_TX_SYNC
-    if mode == 'commit':
+    if mode == "commit":
         return BROADCAST_TX_COMMIT
     raise ValueError('Mode must be "async", "sync" or "commit"')
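
Quick sanity checks of the validators above (a sketch, assuming they are importable from this module):

    assert valid_bool("true") is True
    assert valid_operation("transfer") == "TRANSFER"
    assert valid_txid("ab" * 32) == "ab" * 32
    try:
        valid_txid("not-a-hash")
    except ValueError as e:
        print(e)  # Invalid hash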


@@ -65,9 +65,7 @@ class TransactionListApi(Resource):
             A ``dict`` containing the data about the transaction.
         """
         parser = reqparse.RequestParser()
-        parser.add_argument(
-            "mode", type=parameters.valid_mode, default=BROADCAST_TX_ASYNC
-        )
+        parser.add_argument("mode", type=parameters.valid_mode, default=BROADCAST_TX_ASYNC)
         args = parser.parse_args()
         mode = str(args["mode"])
@@ -85,21 +83,15 @@ class TransactionListApi(Resource):
                 message="Invalid transaction schema: {}".format(e.__cause__.message),
             )
         except KeyError as e:
-            return make_error(
-                400, "Invalid transaction ({}): {}".format(type(e).__name__, e)
-            )
+            return make_error(400, "Invalid transaction ({}): {}".format(type(e).__name__, e))
         except ValidationError as e:
-            return make_error(
-                400, "Invalid transaction ({}): {}".format(type(e).__name__, e)
-            )
+            return make_error(400, "Invalid transaction ({}): {}".format(type(e).__name__, e))

         with pool() as planet:
             try:
                 planet.validate_transaction(tx_obj)
             except ValidationError as e:
-                return make_error(
-                    400, "Invalid transaction ({}): {}".format(type(e).__name__, e)
-                )
+                return make_error(400, "Invalid transaction ({}): {}".format(type(e).__name__, e))
             else:
                 status_code, message = planet.write_transaction(tx_obj, mode)


@@ -15,7 +15,7 @@ class ValidatorsApi(Resource):
             A JSON string containing the validator set of the current node.
         """

-        pool = current_app.config['bigchain_pool']
+        pool = current_app.config["bigchain_pool"]

         with pool() as planet:
             validators = planet.get_validators()


@@ -15,7 +15,7 @@ class Dispatcher:
     This class implements a simple publish/subscribe pattern.
     """

-    def __init__(self, event_source, type='tx'):
+    def __init__(self, event_source, type="tx"):
         """Create a new instance.

         Args:
@@ -49,20 +49,18 @@ class Dispatcher:
     @staticmethod
     def simplified_block(block):
         txids = []
-        for tx in block['transactions']:
+        for tx in block["transactions"]:
             txids.append(tx.id)

-        return {'height': block['height'], 'hash': block['hash'], 'transaction_ids': txids}
+        return {"height": block["height"], "hash": block["hash"], "transaction_ids": txids}

     @staticmethod
     def eventify_block(block):
-        for tx in block['transactions']:
+        for tx in block["transactions"]:
             if tx.asset:
-                asset_id = tx.asset.get('id', tx.id)
+                asset_id = tx.asset.get("id", tx.id)
             else:
                 asset_id = tx.id
-            yield {'height': block['height'],
-                   'asset_id': asset_id,
-                   'transaction_id': tx.id}
+            yield {"height": block["height"], "asset_id": asset_id, "transaction_id": tx.id}

     async def publish(self):
         """Publish new events to the subscribers."""
@@ -77,9 +75,9 @@ class Dispatcher:
             if isinstance(event, str):
                 str_buffer.append(event)
             elif event.type == EventTypes.BLOCK_VALID:
-                if self.type == 'tx':
+                if self.type == "tx":
                     str_buffer = map(json.dumps, self.eventify_block(event.data))
-                elif self.type == 'blk':
+                elif self.type == "blk":
                     str_buffer = [json.dumps(self.simplified_block(event.data))]
                 else:
                     return
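
The two block event shapes above, demonstrated with a stub transaction (a sketch; real blocks carry transaction objects exposing `id` and `asset`, which is all these static methods touch):

    from types import SimpleNamespace

    tx = SimpleNamespace(id="abc123", asset={"id": "abc123"})
    block = {"height": 7, "hash": "deadbeef", "transactions": [tx]}

    print(Dispatcher.simplified_block(block))
    # {'height': 7, 'hash': 'deadbeef', 'transaction_ids': ['abc123']}
    print(list(Dispatcher.eventify_block(block)))
    # [{'height': 7, 'asset_id': 'abc123', 'transaction_id': 'abc123'}]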


@@ -29,8 +29,8 @@ from planetmint.web.websocket_dispatcher import Dispatcher

 logger = logging.getLogger(__name__)

-EVENTS_ENDPOINT = '/api/v1/streams/valid_transactions'
-EVENTS_ENDPOINT_BLOCKS = '/api/v1/streams/valid_blocks'
+EVENTS_ENDPOINT = "/api/v1/streams/valid_transactions"
+EVENTS_ENDPOINT_BLOCKS = "/api/v1/streams/valid_blocks"

 def _multiprocessing_to_asyncio(in_queue, out_queue1, out_queue2, loop):
@@ -51,60 +51,60 @@ def _multiprocessing_to_asyncio(in_queue, out_queue1, out_queue2, loop):

 async def websocket_tx_handler(request):
     """Handle a new socket connection."""

-    logger.debug('New TX websocket connection.')
+    logger.debug("New TX websocket connection.")
     websocket = aiohttp.web.WebSocketResponse()
     await websocket.prepare(request)
     uuid = uuid4()
-    request.app['tx_dispatcher'].subscribe(uuid, websocket)
+    request.app["tx_dispatcher"].subscribe(uuid, websocket)

     while True:
         # Consume input buffer
         try:
             msg = await websocket.receive()
         except RuntimeError as e:
-            logger.debug('Websocket exception: %s', str(e))
+            logger.debug("Websocket exception: %s", str(e))
             break
         except CancelledError:
-            logger.debug('Websocket closed')
+            logger.debug("Websocket closed")
             break
         if msg.type == aiohttp.WSMsgType.CLOSED:
-            logger.debug('Websocket closed')
+            logger.debug("Websocket closed")
             break
         elif msg.type == aiohttp.WSMsgType.ERROR:
-            logger.debug('Websocket exception: %s', websocket.exception())
+            logger.debug("Websocket exception: %s", websocket.exception())
             break

-    request.app['tx_dispatcher'].unsubscribe(uuid)
+    request.app["tx_dispatcher"].unsubscribe(uuid)
     return websocket

 async def websocket_blk_handler(request):
     """Handle a new socket connection."""

-    logger.debug('New BLK websocket connection.')
+    logger.debug("New BLK websocket connection.")
     websocket = aiohttp.web.WebSocketResponse()
     await websocket.prepare(request)
     uuid = uuid4()
-    request.app['blk_dispatcher'].subscribe(uuid, websocket)
+    request.app["blk_dispatcher"].subscribe(uuid, websocket)

     while True:
         # Consume input buffer
         try:
             msg = await websocket.receive()
         except RuntimeError as e:
-            logger.debug('Websocket exception: %s', str(e))
+            logger.debug("Websocket exception: %s", str(e))
             break
         except CancelledError:
-            logger.debug('Websocket closed')
+            logger.debug("Websocket closed")
             break
         if msg.type == aiohttp.WSMsgType.CLOSED:
-            logger.debug('Websocket closed')
+            logger.debug("Websocket closed")
             break
         elif msg.type == aiohttp.WSMsgType.ERROR:
-            logger.debug('Websocket exception: %s', websocket.exception())
+            logger.debug("Websocket exception: %s", websocket.exception())
             break

-    request.app['blk_dispatcher'].unsubscribe(uuid)
+    request.app["blk_dispatcher"].unsubscribe(uuid)
     return websocket
@@ -115,16 +115,16 @@ def init_app(tx_source, blk_source, *, loop=None):
         An aiohttp application.
     """

-    blk_dispatcher = Dispatcher(blk_source, 'blk')
-    tx_dispatcher = Dispatcher(tx_source, 'tx')
+    blk_dispatcher = Dispatcher(blk_source, "blk")
+    tx_dispatcher = Dispatcher(tx_source, "tx")

     # Schedule the dispatcher
-    loop.create_task(blk_dispatcher.publish(), name='blk')
-    loop.create_task(tx_dispatcher.publish(), name='tx')
+    loop.create_task(blk_dispatcher.publish(), name="blk")
+    loop.create_task(tx_dispatcher.publish(), name="tx")

     app = aiohttp.web.Application(loop=loop)
-    app['tx_dispatcher'] = tx_dispatcher
-    app['blk_dispatcher'] = blk_dispatcher
+    app["tx_dispatcher"] = tx_dispatcher
+    app["blk_dispatcher"] = blk_dispatcher
     app.router.add_get(EVENTS_ENDPOINT, websocket_tx_handler)
     app.router.add_get(EVENTS_ENDPOINT_BLOCKS, websocket_blk_handler)
     return app
@@ -139,13 +139,12 @@ def start(sync_event_source, loop=None):
     tx_source = asyncio.Queue(loop=loop)
     blk_source = asyncio.Queue(loop=loop)

-    bridge = threading.Thread(target=_multiprocessing_to_asyncio,
-                              args=(sync_event_source, tx_source, blk_source, loop),
-                              daemon=True)
+    bridge = threading.Thread(
+        target=_multiprocessing_to_asyncio, args=(sync_event_source, tx_source, blk_source, loop), daemon=True
+    )
     bridge.start()

     app = init_app(tx_source, blk_source, loop=loop)
-    aiohttp.web.run_app(app,
-                        host=Config().get()['wsserver']['host'],
-                        port=Config().get()['wsserver']['port'],
-                        loop=loop)
+    aiohttp.web.run_app(
+        app, host=Config().get()["wsserver"]["host"], port=Config().get()["wsserver"]["port"], loop=loop
+    )
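
A minimal client for the stream endpoints defined at the top of this file (a sketch; port 9985 is an assumed default for the `wsserver` config above):

    import asyncio
    import aiohttp

    async def listen():
        async with aiohttp.ClientSession() as session:
            async with session.ws_connect("ws://localhost:9985/api/v1/streams/valid_transactions") as ws:
                async for msg in ws:
                    print(msg.data)  # one JSON event per valid transaction

    asyncio.run(listen())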


@@ -4,6 +4,3 @@ test=pytest

 [coverage:run]
 source = .
 omit = *test*
-
-[flake8]
-max_line_length = 119

Some files were not shown because too many files have changed in this diff.