Compare commits


57 Commits
v2.2.0 ... main

Author SHA1 Message Date
Jürgen Eckel
975921183c
fixed audit (#412)
* fixed audit
* fixed tarantool installation


Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2025-02-13 22:34:42 +01:00
Jürgen Eckel
a848324e1d
version bump 2025-02-13 17:14:24 +01:00
Jürgen Eckel
58131d445a
package changes (#411)
* package changes

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2025-02-13 17:11:34 +01:00
annonymmous
f3077ee8e3 Update poetry.lock 2025-02-13 12:20:07 +01:00
Julian Strobl
ef00a7fdde
[sonar] Remove obsolete project
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-11-09 10:19:58 +01:00
Julian Strobl
ce1649f7db
Disable scheduled workflow run 2023-09-11 08:20:31 +02:00
Julian Strobl
472d4cfbd9
Merge pull request #403 from planetmint/dependabot/pip/cryptography-41.0.2
Bump cryptography from 41.0.1 to 41.0.2
2023-07-20 08:06:30 +02:00
dependabot[bot]
9279dd680b
Bump cryptography from 41.0.1 to 41.0.2
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.1 to 41.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.1...41.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-15 01:31:08 +00:00
Jürgen Eckel
1571211a24
bumped version to 2.5.1
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-22 09:28:45 +02:00
Jürgen Eckel
67abb7102d
fixed all-in-one container tarantool issue
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-22 09:21:51 +02:00
Jürgen Eckel
3ac0ca2c69
Tm 0.34.24 (#401)
* upgrade to Tendermint v0.34.24
* upgraded all the old tendermint versions to the new version


Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-21 11:59:44 +02:00
Jürgen Eckel
4bf1af6f06
fix dependencies (locked) and the audit (#400)
* fix dependencies (locked) and the audit

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added pip-audit to poetry to avoid inconsistent environments

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-14 09:30:03 +02:00
Lorenz Herzberger
0d947a4083
updated poetry workflow (#399)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-06-13 09:49:54 +02:00
Jürgen Eckel
34e5492420
Fixed broken tx api (#398)
* enforced using a newer planetmint-transactions package and adjusted to a renaming of the variable
* bumped version & added changelog info

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-24 21:48:50 +02:00
Jürgen Eckel
4c55f576b9
392 abci rpc is not defined for election proposals (#397)
* fixed missing abci_rpc initialization
* bumped versions and added changelog
* sq fixes

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-24 09:44:50 +02:00
Jürgen Eckel
b2bca169ec
fixing potential type error in cases of new block heights (#396)
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-23 15:22:21 +02:00
dependabot[bot]
3e223f04cd
Bump requests from 2.25.1 to 2.31.0 (#395)
* Bump requests from 2.25.1 to 2.31.0

Bumps [requests](https://github.com/psf/requests) from 2.25.1 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.25.1...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* fixed vulnerability analysis (excluded new/different vulns)

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* disabled another vuln

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* adjust the right pipeline

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed proper pipeline

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-23 14:06:02 +02:00
Julian Strobl
95001fc262
[ci] Add nightly run
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-28 14:19:16 +02:00
Julian Strobl
923f14d669 [ci] Add SonarQube Quality Gate action
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-28 11:23:33 +02:00
Jürgen Eckel
74d3c732b1
bumped version and added missing changelog (#390)
* bumped version, added missing changelog

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-21 11:05:33 +02:00
Jürgen Eckel
5c4923dbd6
373 integration of the dataaccessor singleton (#389)
* initial singleton usage

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* passing all tests

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* aggregated code into helper functions

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-21 10:48:40 +02:00
Jürgen Eckel
884c3cc32b
385 cli cmd not properly implemented planetmint migrate up (#386)
* fixed cmd line to function mapping issue
* bumped version
* fixed init.lua script issue
* fixed indexing issue on tarantool migrate script

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-19 14:01:34 +02:00
Lorenz Herzberger
4feeed5862
fixed path to init.lua (#384)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-18 11:41:20 +02:00
Lorenz Herzberger
461fae27d1
adjusted tarantool scripts for use in service (#383)
* adjusted tarantool scripts for use in service

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed schema migrate call

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed version number in changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-18 09:36:07 +02:00
Jürgen Eckel
033235fb16
fixed the migration to a different output object (#382)
* fixed the migration to a different output object
* fixed test cases (magic mocks)

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-17 15:19:46 +02:00
Lorenz Herzberger
11cf86464f
Add utxo migration (#379)
* added migration script for utxo space
* added migration commands
* changelog and version bump
* added db call to command

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-14 11:34:36 +02:00
Lorenz Herzberger
9f4cc292bc
fixed sonarqube issues (#377)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-11 21:38:00 +02:00
Lorenz Herzberger
6a3c655e3b
Refactor utxo (#375)
* adjusted utxo space to resemble outputs

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added update_utxoset, removed deprecated test utils

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed test_update_utxoset

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed deprecated query and test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed delete_unspent_outputs tests

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* moved get_merkget_utxoset_merkle_root to dataaccessor and fixed test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed delete_transactions query

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed deprecated fixtures

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* blackified

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added get_outputs_by_owner query and adjusted dataaccessor

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed fastquery class

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed api test case

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed TestMultipleInputs

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed get_outputs_filtered test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed get_spent naming issue

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* blackified

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated changelog and version bump

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-11 15:18:44 +02:00
Lorenz Herzberger
dbf4e9085c
remove zenroom signing (#368)
* added zenroom validation to validator.py and adjusted zenroom test case
* updated transactions dependency
* updated poetry.lock

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-05 15:48:00 +02:00
Julian Strobl
4f9c7127b6
[sonar] Exclude k8s/ directory
Error: Can not add the same measure twice on k8s/dev-setup/nginx-http.yaml

Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-04 11:06:41 +02:00
Julian Strobl
3b4dcac388
[sonar] Add initial Sonar Scan setup
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-04 11:03:17 +02:00
Jürgen Eckel
e69742808f
asyncio - removed deprecation (#372)
* improved connection error and termination handling
* removed keyboard termination: exception
* fixed test cases
* added python >= 3.10 compatibility

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-02 23:36:37 +02:00
Julian Strobl
08ce10ab1f
[ci] Fix docker tag for planetmint-aio (#356)
Otherwise a tag "latest-aio" instead of "latest" is created.

Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-03-14 09:37:23 +01:00
Jürgen Eckel
a3468cf991
added changelog, bumped version
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-13 13:19:30 +01:00
Julian Strobl
6403e02277
Merge pull request #354 from planetmint/jmastr/switch-to-planetmint-aio-docker-image
[ci] Switch to planetmint-aio Docker image
2023-03-13 13:16:16 +01:00
Julian Strobl
aa1310bede
Merge pull request #355 from planetmint/fixed_dockerfile_all_in_one
fixed usability of the planetmint-aio dockerfile/image
2023-03-13 13:08:47 +01:00
Jürgen Eckel
90759697ee
fixed usability of the planetmint-aio dockerfile/image
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-13 12:58:27 +01:00
Julian Strobl
eae8ec4c1e
[ci] Switch to planetmint-aio Docker image
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-03-13 10:55:17 +01:00
Julian Strobl
26e0a21e39
[ci] Add Docker All-In-One build (#352)
* [ci] Add Docker All-In-One build
* added changelog and version bump


Signed-off-by: Julian Strobl <jmastr@mailbox.org>
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-10 08:43:41 +01:00
Jürgen Eckel
8942ebe4af
fixed object differentiation issue in eventify_block (#350)
* fixed object differentiation issue in eventify_block

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-06 15:11:53 +01:00
Jürgen Eckel
59f25687da
Hot fix 2.3.1 (#340)
* fixed issues after 2.3.0: one caused by refactoring, the other pre-existing
* version bump and changelog

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-02 08:50:28 +01:00
Jürgen Eckel
83dfbed8b2
332 integrate tarantool driver abstraction (#339)
* renamed bigchain_pool -> validator_obj
* renamed the flask connection pool (class name)
* prepared AsyncIO separation
* renamed abci/core.py and class names, merged utils files
* removed obsolete file
* tidy up of ABCI application logic interface
* updated to newest driver tarantool 0.12.1
* Added new start options: --abci-only and --web-api-only to enable separate execution of the services
* Added exception handling to the ABCI app
* removed space() object usage and thereby halved the amount of DB lookups
* removed async_io handling in the connection object but left some basics of the potential initialization
* tidied up the import structure/order
* tidied up imports
* set version number and changelog

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-01 17:42:18 +01:00
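
The split start options introduced in #339 imply the following usage. A minimal sketch, assuming the flags attach to the `planetmint start` command; only the flag names come from the commit message, the rest is an assumption:

```bash
# Hypothetical invocation of the service-split flags from #339.
# Flag names are taken from the commit message; subcommand and exact
# behavior are assumptions, not verified against the code.
planetmint start --abci-only       # run only the ABCI (consensus) service
planetmint start --web-api-only    # run only the HTTP/WebSocket API service
```
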
dependabot[bot]
2c0b0f03c9
Bump markdown-it-py from 2.1.0 to 2.2.0 (#336)
Bumps [markdown-it-py](https://github.com/executablebooks/markdown-it-py) from 2.1.0 to 2.2.0.
- [Release notes](https://github.com/executablebooks/markdown-it-py/releases)
- [Changelog](https://github.com/executablebooks/markdown-it-py/blob/master/CHANGELOG.md)
- [Commits](https://github.com/executablebooks/markdown-it-py/compare/v2.1.0...v2.2.0)

---
updated-dependencies:
- dependency-name: markdown-it-py
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 16:56:26 +01:00
Jürgen Eckel
0b0c954d34
331 refactor a certain module gets a specific driver type flask sync driver abci server async driver first we stick to the current tarantool driver (#337)
* created ABCI_RPC class to separate RPC interaction from the other ABCI interactions
* renamed validation.py to validator.py
* simplified planetmint/__init__.py
* moved methods used by testing to tests/utils.py
* making planetmint/__init__.py lean
* moved ProcessGroup object to tests as it is only used there
* reintegrated disabled tests


Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-27 16:48:31 +01:00
Jürgen Eckel
77ab922eed
removed integration tests from the repo (#329)
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-22 14:07:20 +01:00
Lorenz Herzberger
1fc306e09d
fixed subcondition instantiation recursively (#328)
* fixed subcondition instantiation recursively
* blackified
* updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-22 10:39:04 +01:00
Jürgen Eckel
89b5427e47
fixed bug: rollback caused a from_dict call on a None object (#327)
* fixed bug: rollback caused a from_dict call on a None object

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* simplified fix

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-21 10:57:37 +01:00
dependabot[bot]
979af5e453
Bump cryptography from 3.4.7 to 39.0.1 (#324)
Bumps [cryptography](https://github.com/pyca/cryptography) from 3.4.7 to 39.0.1.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/3.4.7...39.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 16:00:43 +01:00
Lorenz Herzberger
63b386a9cf
Migrate docs to use poetry (#326)
* migrated docs to use poetry, removed python browser script from makefile

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* bumped version in version.py

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-16 16:00:00 +01:00
dependabot[bot]
15a8a82191
Bump ipython from 8.9.0 to 8.10.0 (#323)
Bumps [ipython](https://github.com/ipython/ipython) from 8.9.0 to 8.10.0.
- [Release notes](https://github.com/ipython/ipython/releases)
- [Commits](https://github.com/ipython/ipython/compare/8.9.0...8.10.0)

---
updated-dependencies:
- dependency-name: ipython
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 15:22:41 +01:00
Lorenz Herzberger
c69272f6a2
removed unused code for deprecated text search (#322)
* removed unused code for deprecated text search

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-15 16:11:36 +01:00
Lorenz Herzberger
384b091d74
Migrate to poetry (#321)
* added pyproject.toml and poetry.lock

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added scripts and classifiers to pyproject.toml

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated planetmint-transactions, updated dockerfile to use poetry, updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated CI and Makefile

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated CI audit step to use poetry

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated version number on pyproject.toml

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated version number

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-15 15:56:01 +01:00
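
Taken together, the poetry migration commits replace the old setuptools/twine flow with poetry equivalents. A sketch assembled from the CI and Dockerfile diffs further down; `PYPI_TOKEN` stands in for the repository secret:

```bash
pip install poetry                  # replaces the setuptools/twine toolchain
poetry install --with dev           # replaces `pip install -e '.[dev]'`
poetry run pytest -m "not abci"     # run the test suite inside the poetry env
poetry build                        # replaces `python setup.py bdist_wheel sdist`
poetry publish -u __token__ -p "$PYPI_TOKEN"   # replaces `twine upload dist/*`
```
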
Jürgen Eckel
811f89e5a6
79 hotfix election validation bckwrd compat (#319)
* fixed election and voting backward compatibility issues
* bumped version!
* fixed changed testcases and a bug
* blackified
* blackified with newest version to satisfy CI
* fix dependency management issue

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-15 09:19:17 +01:00
Jürgen Eckel
2bb0539b78
catching Tarantool exceptions in case of concurrency (implicitly issu… (#312)
* catching Tarantool exceptions in case of concurrency (implicitly issued by the planetmint-driver-ts tests)
* fixed black version
* blackified (new version)

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-01 13:43:39 +01:00
Lorenz Herzberger
87506ff4a1
removed deprecated or unused code (#311)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-01-31 16:39:09 +01:00
Jürgen Eckel
3cb9424a35
added content: write permissions to the CI.yml
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-01-31 16:34:19 +01:00
Jürgen Eckel
6115a73f66
fixed workflow & bumped version
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-01-31 14:09:24 +01:00
160 changed files with 7730 additions and 6382 deletions

View File

@@ -17,6 +17,7 @@ on:
 permissions:
   packages: write
+  contents: write

 jobs:
   lint:
@@ -40,17 +41,44 @@ jobs:
       with:
         python-version: 3.9
-      - name: Install pip-audit
-        run: pip install --upgrade pip pip-audit
+      - name: Setup poetry
+        uses: Gr1N/setup-poetry@v8
       - name: Install dependencies
-        run: pip install .
+        run: poetry install
       - name: Create requirements.txt
-        run: pip freeze > requirements.txt
+        run: poetry run pip freeze > requirements.txt
       - name: Audit dependencies
-        run: pip-audit --ignore-vuln PYSEC-2022-42969 --ignore-vuln PYSEC-2022-203 --ignore-vuln GHSA-r9hx-vwmv-q579
+        run: |
+          poetry run pip-audit \
+            --ignore-vuln GHSA-8495-4g3g-x7pr \
+            --ignore-vuln PYSEC-2024-230 \
+            --ignore-vuln PYSEC-2024-225 \
+            --ignore-vuln GHSA-3ww4-gg4f-jr7f \
+            --ignore-vuln GHSA-9v9h-cgj8-h64p \
+            --ignore-vuln GHSA-h4gh-qq45-vh27 \
+            --ignore-vuln PYSEC-2023-62 \
+            --ignore-vuln PYSEC-2024-71 \
+            --ignore-vuln GHSA-84pr-m4jr-85g5 \
+            --ignore-vuln GHSA-w3h3-4rj7-4ph4 \
+            --ignore-vuln PYSEC-2024-60 \
+            --ignore-vuln GHSA-h5c8-rqwp-cp95 \
+            --ignore-vuln GHSA-h75v-3vvj-5mfj \
+            --ignore-vuln GHSA-q2x7-8rv6-6q7h \
+            --ignore-vuln GHSA-gmj6-6f8f-6699 \
+            --ignore-vuln PYSEC-2023-117 \
+            --ignore-vuln GHSA-m87m-mmvp-v9qm \
+            --ignore-vuln GHSA-9wx4-h78v-vm56 \
+            --ignore-vuln GHSA-34jh-p97f-mpxf \
+            --ignore-vuln PYSEC-2022-203 \
+            --ignore-vuln PYSEC-2023-58 \
+            --ignore-vuln PYSEC-2023-57 \
+            --ignore-vuln PYSEC-2023-221 \
+            --ignore-vuln GHSA-2g68-c3qc-8985 \
+            --ignore-vuln GHSA-f9vj-2wh5-fj8j \
+            --ignore-vuln GHSA-q34m-jh98-gwm2

   test:
     needs: lint
@@ -78,11 +106,13 @@
         run: sudo apt-get update && sudo apt-get install -y git zsh curl tarantool-common vim build-essential cmake
       - name: Get Tendermint
-        run: wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_0.34.15_linux_amd64.tar.gz && tar zxf tendermint_0.34.15_linux_amd64.tar.gz
+        run: wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_0.34.24_linux_amd64.tar.gz && tar zxf tendermint_0.34.24_linux_amd64.tar.gz
+      - name: Setup poetry
+        uses: Gr1N/setup-poetry@v8
       - name: Install Planetmint
-        run: pip install -e '.[dev]'
+        run: poetry install --with dev
       - name: Execute Tests
         run: make test
@@ -102,18 +132,15 @@
         python-version: 3.9
       - name: Setup poetry
-        uses: Gr1N/setup-poetry@v7
+        uses: Gr1N/setup-poetry@v8
       - name: Install dependencies
-        run: pip install -e '.[dev]' && pip install wheel && python setup.py bdist_wheel sdist
+        run: poetry install --with dev
       - name: Upload to PyPI
         run: |
-          twine check dist/*
-          twine upload dist/*
-        env:
-          TWINE_USERNAME: __token__
-          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
+          poetry build
+          poetry publish -u __token__ -p ${{ secrets.PYPI_TOKEN }}
       - name: Upload to GitHub
         uses: softprops/action-gh-release@v1
@@ -121,7 +148,7 @@
         files: dist/*

   publish-docker:
-    needs: release
+    needs: test
     if: startsWith(github.ref, 'refs/tags/')
     runs-on: ubuntu-latest
     steps:
@@ -168,3 +195,30 @@
           labels: ${{ steps.semver.outputs.labels }}
         env:
           CRYPTOGRAPHY_DONT_BUILD_RUST: 1
+      - name: Docker meta AIO
+        id: semver-aio # you'll use this in the next step
+        uses: docker/metadata-action@v3
+        with:
+          # list of Docker images to use as base name for tags
+          images: |
+            ghcr.io/planetmint/planetmint-aio
+          # Docker tags based on the following events/attributes
+          tags: |
+            type=schedule
+            type=ref,event=branch
+            type=ref,event=pr
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=sha
+      - name: Build and push AIO
+        uses: docker/build-push-action@v2
+        with:
+          context: .
+          file: Dockerfile-all-in-one
+          platforms: linux/amd64,linux/arm64
+          push: ${{ github.event_name != 'pull_request' }}
+          tags: ${{ steps.semver-aio.outputs.tags }}
+          labels: ${{ steps.semver-aio.outputs.labels }}
+        env:
+          CRYPTOGRAPHY_DONT_BUILD_RUST: 1
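
The audit step above can be reproduced locally. A minimal sketch, assuming a checkout with poetry on the PATH; the ignore list is abbreviated here, the full set of suppressed advisories is in the workflow step above:

```bash
# Reproduce the CI audit step locally (abbreviated ignore list).
poetry install
poetry run pip freeze > requirements.txt
poetry run pip-audit \
  --ignore-vuln PYSEC-2022-203 \
  --ignore-vuln GHSA-q34m-jh98-gwm2
```
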

View File

@@ -21,14 +21,44 @@
       with:
         python-version: 3.9
-      - name: Install pip-audit
-        run: pip install --upgrade pip pip-audit
+      - name: Setup poetry
+        uses: Gr1N/setup-poetry@v8
       - name: Install dependencies
-        run: pip install .
+        run: poetry install
       - name: Create requirements.txt
-        run: pip freeze > requirements.txt
+        run: poetry run pip freeze > requirements.txt
       - name: Audit dependencies
-        run: pip-audit --ignore-vuln PYSEC-2022-42969 --ignore-vuln PYSEC-2022-203 --ignore-vuln GHSA-r9hx-vwmv-q579
+        run: |
+          poetry run pip-audit \
+            --ignore-vuln PYSEC-2022-203 \
+            --ignore-vuln PYSEC-2023-58 \
+            --ignore-vuln PYSEC-2023-57 \
+            --ignore-vuln PYSEC-2023-62 \
+            --ignore-vuln GHSA-8495-4g3g-x7pr \
+            --ignore-vuln PYSEC-2023-135 \
+            --ignore-vuln PYSEC-2024-230 \
+            --ignore-vuln PYSEC-2024-225 \
+            --ignore-vuln GHSA-3ww4-gg4f-jr7f \
+            --ignore-vuln GHSA-9v9h-cgj8-h64p \
+            --ignore-vuln GHSA-h4gh-qq45-vh27 \
+            --ignore-vuln PYSEC-2024-71 \
+            --ignore-vuln GHSA-84pr-m4jr-85g5 \
+            --ignore-vuln GHSA-w3h3-4rj7-4ph4 \
+            --ignore-vuln PYSEC-2024-60 \
+            --ignore-vuln GHSA-h5c8-rqwp-cp95 \
+            --ignore-vuln GHSA-h75v-3vvj-5mfj \
+            --ignore-vuln GHSA-q2x7-8rv6-6q7h \
+            --ignore-vuln GHSA-gmj6-6f8f-6699 \
+            --ignore-vuln PYSEC-2023-117 \
+            --ignore-vuln GHSA-m87m-mmvp-v9qm \
+            --ignore-vuln GHSA-9wx4-h78v-vm56 \
+            --ignore-vuln PYSEC-2023-192 \
+            --ignore-vuln PYSEC-2023-212 \
+            --ignore-vuln GHSA-34jh-p97f-mpxf \
+            --ignore-vuln PYSEC-2023-221 \
+            --ignore-vuln GHSA-2g68-c3qc-8985 \
+            --ignore-vuln GHSA-f9vj-2wh5-fj8j \
+            --ignore-vuln GHSA-q34m-jh98-gwm2

View File

@ -1,19 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Integration tests
on: [push, pull_request]
jobs:
test:
if: ${{ false }}
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Start test run
run: docker-compose -f docker-compose.integration.yml up test

View File

@@ -25,6 +25,71 @@ For reference, the possible headings are:
 * **Known Issues**
 * **Notes**

+## [2.5.1] - 2023-22-06
+* **Fixed** docker image incompatibility with tarantool installer, switched to ubuntu-container for AIO image
+
+## [2.5.0] - 2023-21-06
+* **Changed** Upgraded ABCI compatibility to Tendermint v0.34.24 and CometBFT v0.34.29
+
+## [2.4.7] - 2023-24-05
+* **Fixed** wrong referencing of planetmint-transactions object and variable
+
+## [2.4.6] - 2023-24-05
+* **Fixed** Missing ABCI_RPC object initialization for CLI voting commands.
+* **Fixed** TypeError in EndBlock procedure that occurred rarely within the network.
+* **Security** moved to a more secure requests version
+
+## [2.4.5] - 2023-21-04
+* **Fixed** Integration of DataAccessor Singleton class to reduce potentially multiple DB driver initializations.
+
+## [2.4.4] - 2023-19-04
+* **Fixed** tarantool migration script issues (modularity, script failures, cli cmd to function mapping)
+
+## [2.4.3] - 2023-17-04
+* **Fixed** fixed migration behaviour for non docker service
+
+## [2.4.2] - 2023-13-04
+* **Added** planetmint migration commands
+
+## [2.4.1] - 2023-11-04
+* **Removed** Fastquery class
+* **Changed** UTXO space updated to resemble outputs
+* **Changed** updated UTXO querying
+
+## [2.4.0] - 2023-29-03
+* **Added** Zenroom script validation
+* **Changed** adjusted zenroom testing for new transaction script structure
+
+## [2.3.3] - 2023-10-03
+* **Fixed** CI issues with the docker images
+* **Added** Tendermint, tarantool, and planetmint initialization to the all-in-one docker image
+
+## [2.3.2] - 2023-10-03
+* **Fixed** websocket service issue with block/asset object access of different object/tx versions
+* **Added** CI pipeline to build and package the all-in-one docker images
+
+## [2.3.1] - 2023-02-03
+* **Fixed** backend.models.assets class content type issue (verification if objects are of type dict)
+* **Fixed** Type definitions of Exceptions in the backend.query exception catching decorator
+
+## [2.3.0] - 2023-01-03
+* **Fixed** double usage of the tarantool driver in one instance that led to crashes
+* **Changed** refactored a lot of classes and the structure
+* **Changed** upgraded to tarantool driver 0.12.1
+
+## [2.2.4] - 2023-15-02
+* **Fixed** subcondition instantiation now works recursively
+* **Changed** migrated dependency management to poetry
+* **Removed** removed unused text_search related code
+* **Changed** docs are now built using poetry
+
+## [2.2.3] - 2023-14-02
+* **Fixed** fixed voting/election backward compatibility issue (using planetmint-transactions >= 0.7.0) on the 2.2 main branch
+* **Changed** migrated dependency management to poetry
+
+## [2.2.2] - 2023-31-01
+* **Fixed** catching tarantool exceptions in case tarantool drivers throw exceptions due to concurrency issues. This issue got identified during the testing of the planetmint-driver-ts.
+
 ## [2.2.0] - 2023-31-01
 * **Changed** standardized blocks API
@@ -36,6 +101,9 @@
 * **Removed** removed text_search routes
 * **Added** metadata / asset cid route for fetching transactions

+## [1.4.2] - 2023-14-02
+* **fixed** fixed voting/election backward compatibility issue (using planetmint-transactions >= 0.7.0)
+
 ## [1.4.1] - 2022-21-12
 * **fixed** inconsistent cryptocondition keyring tag handling. Using cryptoconditions > 1.1.0 from now on.

View File

@@ -32,5 +32,5 @@ ENV PLANETMINT_CI_ABCI ${abci_status}
 RUN mkdir -p /usr/src/app
 COPY . /usr/src/app/
 WORKDIR /usr/src/app
-RUN pip install -e .[dev]
-RUN pip install flask-cors
+RUN pip install poetry
+RUN poetry install --with dev

View File

@@ -1,7 +1,7 @@
-FROM python:3.9-slim
+FROM ubuntu:22.04
 LABEL maintainer "contact@ipdb.global"

-ARG TM_VERSION=0.34.15
+ARG TM_VERSION=0.34.24
 RUN mkdir -p /usr/src/app
 ENV HOME /root
 COPY . /usr/src/app/
@@ -11,15 +11,17 @@ RUN apt-get update \
     && apt-get install -y openssl ca-certificates git \
     && apt-get install -y vim build-essential cmake jq zsh wget \
     && apt-get install -y libstdc++6 \
-    && apt-get install -y openssh-client openssh-server \
-    && pip install --upgrade pip cffi \
+    && apt-get install -y openssh-client openssh-server
+RUN apt-get install -y python3 python3-pip cython3
+RUN pip install --upgrade pip cffi \
     && pip install -e . \
     && apt-get autoremove

 # Install tarantool and monit
 RUN apt-get install -y dirmngr gnupg apt-transport-https software-properties-common ca-certificates curl
+RUN ln -fs /usr/share/zoneinfo/Etc/UTC /etc/localtime
 RUN apt-get update
-RUN curl -L https://tarantool.io/wrATeGF/release/2/installer.sh | bash
+RUN curl -L https://tarantool.io/release/2/installer.sh | bash
 RUN apt-get install -y tarantool monit

 # Install Tendermint
@@ -42,8 +44,14 @@ ENV PLANETMINT_WSSERVER_ADVERTISED_HOST 0.0.0.0
 ENV PLANETMINT_WSSERVER_ADVERTISED_SCHEME ws
 ENV PLANETMINT_TENDERMINT_PORT 26657

+COPY planetmint/backend/tarantool/opt/init.lua /etc/tarantool/instances.enabled
+
 VOLUME /data/db /data/configdb /tendermint
 EXPOSE 27017 28017 9984 9985 26656 26657 26658

 WORKDIR $HOME
+RUN tendermint init
+RUN planetmint -y configure
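
The resulting all-in-one image can be built and run directly. A sketch: the image name matches the one referenced in the integration compose file below, and the published ports are a subset of the EXPOSE line above; the port selection is illustrative:

```bash
# Build the all-in-one image and run it with the HTTP API (9984)
# and Tendermint RPC (26657) ports published.
docker build -f Dockerfile-all-in-one -t planetmint/planetmint-aio:latest .
docker run -d -p 9984:9984 -p 26657:26657 planetmint/planetmint-aio:latest
```
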

View File

@@ -32,5 +32,5 @@ ENV PLANETMINT_CI_ABCI ${abci_status}
 RUN mkdir -p /usr/src/app
 COPY . /usr/src/app/
 WORKDIR /usr/src/app
-RUN pip install -e .[dev]
-RUN pip install flask-cors
+RUN pip install poetry
+RUN poetry install --with dev

View File

@@ -1,23 +1,8 @@
-.PHONY: help run start stop logs lint test test-unit test-unit-watch test-integration cov docs clean reset release dist check-deps clean-build clean-pyc clean-test
+.PHONY: help run start stop logs lint test test-unit test-unit-watch cov docs clean reset release dist check-deps clean-build clean-pyc clean-test
 .DEFAULT_GOAL := help

-#############################
-# Open a URL in the browser #
-#############################
-define BROWSER_PYSCRIPT
-import os, webbrowser, sys
-try:
-	from urllib import pathname2url
-except:
-	from urllib.request import pathname2url
-webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1])))
-endef
-export BROWSER_PYSCRIPT
-
 ##################################
 # Display help for this makefile #
 ##################################
@@ -41,8 +26,7 @@ export PRINT_HELP_PYSCRIPT
 ##################
 # Basic commands #
 ##################
 DOCKER := docker
-DC := docker-compose
-BROWSER := python -c "$$BROWSER_PYSCRIPT"
+DC := docker compose
 HELP := python -c "$$PRINT_HELP_PYSCRIPT"
 ECHO := /usr/bin/env echo
@@ -81,34 +65,21 @@ test: check-deps test-unit ## Run unit
 test-unit: check-deps ## Run all tests once or specify a file/test with TEST=tests/file.py::Class::test
 	@$(DC) up -d tarantool
-	#wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_0.34.15_linux_amd64.tar.gz
-	#tar zxf tendermint_0.34.15_linux_amd64.tar.gz
-	pytest -m "not abci"
+	#wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_0.34.24_linux_amd64.tar.gz
+	#tar zxf tendermint_0.34.24_linux_amd64.tar.gz
+	poetry run pytest -m "not abci"
 	rm -rf ~/.tendermint && ./tendermint init && ./tendermint node --consensus.create_empty_blocks=false --rpc.laddr=tcp://0.0.0.0:26657 --proxy_app=tcp://localhost:26658&
-	pytest -m abci
+	poetry run pytest -m abci
 	@$(DC) down

 test-unit-watch: check-deps ## Run all tests and wait. Every time you change code, tests will be run again
 	@$(DC) run --rm --no-deps planetmint pytest -f

-test-integration: check-deps ## Run all integration tests
-	@./scripts/run-integration-test.sh
-
 cov: check-deps ## Check code coverage and open the result in the browser
 	@$(DC) run --rm planetmint pytest -v --cov=planetmint --cov-report html
-	$(BROWSER) htmlcov/index.html

 docs: check-deps ## Generate HTML documentation and open it in the browser
 	@$(DC) run --rm --no-deps bdocs make -C docs/root html
-	$(BROWSER) docs/root/build/html/index.html
-
-docs-integration: check-deps ## Create documentation for integration tests
-	@$(DC) run --rm python-integration pycco -i -s /src -d /docs
-	$(BROWSER) integration/python/docs/index.html

 clean: check-deps ## Remove all build, test, coverage and Python artifacts
 	@$(DC) up clean
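
The `test-unit` help text documents single-test selection via `TEST=`. A usage sketch; the test path is illustrative, and whether the recipe still forwards `TEST` after the poetry migration is not visible in this hunk:

```bash
make test-unit                                    # full unit suite
make test-unit TEST=tests/test_config_utils.py    # one file, per the help text
```
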

View File

@@ -1,3 +1,4 @@
+---
 # Copyright © 2020 Interplanetary Database Association e.V.,
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@@ -30,7 +30,7 @@ def test_bigchain_instance_is_initialized_when_conf_provided():
 def test_load_validation_plugin_loads_default_rules_without_name():
     from planetmint import config_utils
-    from planetmint.validation import BaseValidationRules
+    from planetmint.application.basevalidationrules import BaseValidationRules

     assert config_utils.load_validation_plugin() == BaseValidationRules
@@ -120,11 +120,8 @@ def test_env_config(monkeypatch):
     assert result == expected

-@pytest.mark.skip
-def test_autoconfigure_read_both_from_file_and_env(
-    monkeypatch, request
-):  # TODO Disabled until we create a better config format
-    return
+@pytest.mark.skip(reason="Disabled until we create a better config format")
+def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request):
     # constants
     DATABASE_HOST = "test-host"
     DATABASE_NAME = "test-dbname"
@@ -210,7 +207,7 @@ def test_autoconfigure_read_both_from_file_and_env(
             "advertised_port": WSSERVER_ADVERTISED_PORT,
         },
         "database": database_mongodb,
-        "tendermint": {"host": "localhost", "port": 26657, "version": "v0.34.15"},
+        "tendermint": {"host": "localhost", "port": 26657, "version": "v0.34.24"},
         "log": {
             "file": LOG_FILE,
             "level_console": "debug",
@@ -315,11 +312,10 @@ def test_write_config():
     ),
 )
 def test_database_envs(env_name, env_value, config_key, monkeypatch):
     monkeypatch.setattr("os.environ", {env_name: env_value})
     planetmint.config_utils.autoconfigure()
     expected_config = Config().get()
     expected_config["database"][config_key] = env_value
-    assert planetmint.config == expected_config
+    assert planetmint.config.Config().get() == expected_config

View File

@@ -1,3 +1,4 @@
+---
 # Copyright © 2020 Interplanetary Database Association e.V.,
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
@@ -6,19 +7,8 @@
 version: '2.2'

 services:
-  clean-shared:
-    image: alpine
-    command: ["/scripts/clean-shared.sh"]
-    volumes:
-      - ./integration/scripts/clean-shared.sh:/scripts/clean-shared.sh
-      - shared:/shared
-
   planetmint-all-in-one:
-    build:
-      context: .
-      dockerfile: Dockerfile-all-in-one
-    depends_on:
-      - clean-shared
+    image: planetmint/planetmint-aio:latest
     expose:
       - "22"
       - "9984"
@@ -27,8 +17,6 @@ services:
       - "26657"
       - "26658"
     command: ["/usr/src/app/scripts/pre-config-planetmint.sh", "/usr/src/app/scripts/all-in-one.bash"]
-    environment:
-      SCALE: ${SCALE:-4}
     volumes:
       - ./integration/scripts:/usr/src/app/scripts
       - shared:/shared
@@ -48,6 +36,3 @@ services:
       - ./integration/scripts:/scripts
       - ./integration/cli:/tests
       - shared:/shared
-
-volumes:
-  shared:

View File

@@ -1,3 +1,4 @@
+---
 # Copyright © 2020 Interplanetary Database Association e.V.,
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
@@ -22,8 +23,8 @@ services:
       - "3303:3303"
       - "8081:8081"
     volumes:
-      - ./planetmint/backend/tarantool/init.lua:/opt/tarantool/init.lua
-    command: tarantool /opt/tarantool/init.lua
+      - ./planetmint/backend/tarantool/opt/init.lua:/opt/tarantool/init.lua
+    entrypoint: tarantool /opt/tarantool/init.lua
     restart: always
   planetmint:
     depends_on:
@@ -64,7 +65,7 @@ services:
     restart: always
   tendermint:
-    image: tendermint/tendermint:v0.34.15
+    image: tendermint/tendermint:v0.34.24
     # volumes:
     #   - ./tmdata:/tendermint
     entrypoint: ''
@@ -96,7 +97,7 @@ services:
       context: .
       dockerfile: Dockerfile
       args:
-        backend: tarantool
+        backend: tarantool_db
     volumes:
       - .:/usr/src/app/
     command: make -C docs/root html
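
With the compose changes above, the core services can be brought up using the docker compose plugin; the `docker compose` spelling matches the Makefile's new `DC` definition, and the service names are the ones visible in this hunk:

```bash
docker compose up -d tarantool planetmint tendermint   # start backing services
docker compose logs -f planetmint                      # follow the node's logs
```
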

View File

@@ -3,7 +3,7 @@
 # You can set these variables from the command line.
 SPHINXOPTS =
-SPHINXBUILD = sphinx-build
+SPHINXBUILD = poetry run sphinx-build
 PAPER = a4
 BUILDDIR = build

View File

@@ -11,7 +11,9 @@ import os.path
 from transactions.common.input import Input
 from transactions.common.transaction_link import TransactionLink
-from planetmint import lib
+import planetmint.abci.block
 from transactions.types.assets.create import Create
 from transactions.types.assets.transfer import Transfer
 from planetmint.web import server
@@ -187,7 +189,6 @@ def main():
     ctx["public_keys_transfer"] = tx_transfer.outputs[0].public_keys[0]
     ctx["tx_transfer_id"] = tx_transfer.id

-    # privkey_transfer_last = 'sG3jWDtdTXUidBJK53ucSTrosktG616U3tQHBk81eQe'
     pubkey_transfer_last = "3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm"

     cid = 0
@@ -210,7 +211,7 @@ def main():
     signature = "53wxrEQDYk1dXzmvNSytbCfmNVnPqPkDQaTnAe8Jf43s6ssejPxezkCvUnGTnduNUmaLjhaan1iRLi3peu6s5DzA"
     app_hash = "f6e0c49c6d94d6924351f25bb334cf2a99af4206339bf784e741d1a5ab599056"
-    block = lib.Block(height=1, transactions=[tx.to_dict()], app_hash=app_hash)
+    block = planetmint.abci.block.Block(height=1, transactions=[tx.to_dict()], app_hash=app_hash)
     block_dict = block._asdict()
     block_dict.pop("app_hash")
     ctx["block"] = pretty_json(block_dict)

View File

@ -1,46 +0,0 @@
aafigure==0.6
alabaster==0.7.12
Babel==2.10.1
certifi==2022.12.7
charset-normalizer==2.0.12
commonmark==0.9.1
docutils==0.17.1
idna
imagesize==1.3.0
importlib-metadata==4.11.3
Jinja2==3.0.0
markdown-it-py==2.1.0
MarkupSafe==2.1.1
mdit-py-plugins==0.3.0
mdurl==0.1.1
myst-parser==0.17.2
packaging==21.3
pockets==0.9.1
Pygments==2.12.0
pyparsing==3.0.8
pytz==2022.1
PyYAML>=5.4.0
requests>=2.25.1
six==1.16.0
snowballstemmer==2.2.0
Sphinx==4.5.0
sphinx-rtd-theme==1.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-httpdomain==1.8.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-napoleon==0.7
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
urllib3==1.26.9
wget==3.2
zipp==3.8.0
nest-asyncio==1.5.5
sphinx-press-theme==0.8.0
sphinx-documatt-theme
base58>=2.1.1
pynacl==1.4.0
zenroom==2.1.0.dev1655293214
pyasn1==0.4.8
cryptography==3.4.7

View File

@@ -198,7 +198,6 @@ todo_include_todos = False
 # a list of builtin themes.
 #
 html_theme = "press"
-# html_theme = 'sphinx_documatt_theme'

 # Theme options are theme-specific and customize the look and feel of a theme
 # further. For a list of options available for each theme, see the

View File

@@ -4,7 +4,7 @@ Content-Type: application/json
 {
   "assets": "/assets/",
   "blocks": "/blocks/",
-  "docs": "https://docs.planetmint.io/projects/server/en/v1.4.1/http-client-server-api.html",
+  "docs": "https://docs.planetmint.io/projects/server/en/v2.2.4/http-client-server-api.html",
   "metadata": "/metadata/",
   "outputs": "/outputs/",
   "streamedblocks": "ws://localhost:9985/api/v1/streams/valid_blocks",

View File

@@ -6,7 +6,7 @@ Content-Type: application/json
   "v1": {
     "assets": "/api/v1/assets/",
     "blocks": "/api/v1/blocks/",
-    "docs": "https://docs.planetmint.io/projects/server/en/v1.4.1/http-client-server-api.html",
+    "docs": "https://docs.planetmint.io/projects/server/en/v2.2.4/http-client-server-api.html",
     "metadata": "/api/v1/metadata/",
     "outputs": "/api/v1/outputs/",
     "streamedblocks": "ws://localhost:9985/api/v1/streams/valid_blocks",
@@ -15,7 +15,7 @@ Content-Type: application/json
     "validators": "/api/v1/validators"
   }
 },
-"docs": "https://docs.planetmint.io/projects/server/en/v1.4.1/",
+"docs": "https://docs.planetmint.io/projects/server/en/v2.2.4/",
 "software": "Planetmint",
-"version": "1.4.1"
+"version": "2.2.4"
 }

View File

@@ -30,9 +30,9 @@ The version of Planetmint Server described in these docs only works well with Te
 ```bash
 $ sudo apt install -y unzip
-$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_v0.34.15_linux_amd64.zip
-$ unzip tendermint_v0.34.15_linux_amd64.zip
-$ rm tendermint_v0.34.15_linux_amd64.zip
+$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_v0.34.24_linux_amd64.zip
+$ unzip tendermint_v0.34.24_linux_amd64.zip
+$ rm tendermint_v0.34.24_linux_amd64.zip
 $ sudo mv tendermint /usr/local/bin
 ```

View File

@@ -59,8 +59,8 @@ $ sudo apt install mongodb
 ```
 Tendermint can be installed and started as follows
 ```
-$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_0.34.15_linux_amd64.tar.gz
-$ tar zxf tendermint_0.34.15_linux_amd64.tar.gz
+$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_0.34.24_linux_amd64.tar.gz
+$ tar zxf tendermint_0.34.24_linux_amd64.tar.gz
 $ ./tendermint init
 $ ./tendermint node --proxy_app=tcp://localhost:26658
``` ```

View File

@@ -60,7 +60,7 @@ you can do this:
 .. code::

     $ mkdir $(pwd)/tmdata
-    $ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:v0.34.15 init
+    $ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:v0.34.24 init
     $ cat $(pwd)/tmdata/genesis.json

 You should see something that looks like:

View File

@ -1,23 +0,0 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Integration test suite
This directory contains the integration test suite for Planetmint.
The suite uses Docker Compose to spin up multiple Planetmint nodes, run tests with `pytest` as well as cli tests and teardown.
## Running the tests
Run `make test-integration` in the project root directory.
By default the integration test suite spins up four planetmint nodes. If you desire to run a different configuration you can pass `SCALE=<number of nodes>` as an environmental variable.
## Writing and documenting the tests
Tests are sometimes difficult to read. For integration tests, we try to be really explicit on what the test is doing, so please write code that is *simple* and easy to understand. We decided to use literate-programming documentation. To generate the documentation for python tests run:
```bash
make docs-integration
```

View File

@ -1,47 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Add chain migration test
check_status () {
status=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@$1 'bash -s' < scripts/election.sh show_election $2 | tail -n 1)
status=${status#*=}
if [ $status != $3 ]; then
exit 1
fi
}
# Read host names from shared
readarray -t HOSTNAMES < /shared/hostnames
# Split into proposer and approvers
PROPOSER=${HOSTNAMES[0]}
APPROVERS=${HOSTNAMES[@]:1}
# Propose chain migration
result=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@${PROPOSER} 'bash -s' < scripts/election.sh migrate)
# Check if election is ongoing and approve chain migration
for APPROVER in ${APPROVERS[@]}; do
# Check if election is still ongoing
check_status ${APPROVER} $result ongoing
ssh -o "StrictHostKeyChecking=no" -i ~/.ssh/id_rsa root@${APPROVER} 'bash -s' < scripts/election.sh approve $result
done
# Status of election should be concluded
status=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@${PROPOSER} 'bash -s' < scripts/election.sh show_election $result)
status=${status#*INFO:planetmint.commands.planetmint:}
status=("$status[@]")
# TODO: Get status, chain_id, app_hash and validators to restore planetmint on all nodes
# References:
# https://github.com/bigchaindb/BEPs/tree/master/42
# http://docs.bigchaindb.com/en/latest/installation/node-setup/bigchaindb-cli.html
for word in $status; do
echo $word
done
echo ${status#*validators=}

View File

@ -1,33 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
check_status () {
status=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@$1 'bash -s' < scripts/election.sh show_election $2 | tail -n 1)
status=${status#*=}
if [ $status != $3 ]; then
exit 1
fi
}
# Read host names from shared
readarray -t HOSTNAMES < /shared/hostnames
# Split into proposer and approvers
PROPOSER=${HOSTNAMES[0]}
APPROVERS=${HOSTNAMES[@]:1}
# Propose validator upsert
result=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@${PROPOSER} 'bash -s' < scripts/election.sh elect 2)
# Check if election is ongoing and approve validator upsert
for APPROVER in ${APPROVERS[@]}; do
# Check if election is still ongoing
check_status ${APPROVER} $result ongoing
ssh -o "StrictHostKeyChecking=no" -i ~/.ssh/id_rsa root@${APPROVER} 'bash -s' < scripts/election.sh approve $result
done
# Status of election should be concluded
check_status ${PROPOSER} $result concluded

View File

@ -1 +0,0 @@
docs

View File

@ -1,19 +0,0 @@
FROM python:3.9
RUN apt-get update \
&& pip install -U pip \
&& apt-get autoremove \
&& apt-get clean
RUN apt-get install -y vim
RUN apt-get update
RUN apt-get install -y build-essential cmake openssh-client openssh-server git
RUN apt-get install -y zsh
RUN mkdir -p /src
RUN pip install --upgrade meson ninja
RUN pip install --upgrade \
pytest~=6.2.5 \
pycco \
websocket-client~=0.47.0 \
planetmint-driver>=9.2.0 \
blns

View File

@ -1,86 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import pytest
CONDITION_SCRIPT = """Scenario 'ecdh': create the signature of an object
Given I have the 'keyring'
Given that I have a 'string dictionary' named 'houses'
When I create the signature of 'houses'
Then print the 'signature'"""
FULFILL_SCRIPT = """Scenario 'ecdh': Bob verifies the signature from Alice
Given I have a 'ecdh public key' from 'Alice'
Given that I have a 'string dictionary' named 'houses'
Given I have a 'signature' named 'signature'
When I verify the 'houses' has a signature in 'signature' by 'Alice'
Then print the string 'ok'"""
SK_TO_PK = """Scenario 'ecdh': Create the keypair
Given that I am known as '{}'
Given I have the 'keyring'
When I create the ecdh public key
When I create the bitcoin address
Then print my 'ecdh public key'
Then print my 'bitcoin address'"""
GENERATE_KEYPAIR = """Scenario 'ecdh': Create the keypair
Given that I am known as 'Pippo'
When I create the ecdh key
When I create the bitcoin key
Then print data"""
INITIAL_STATE = {"also": "more data"}
SCRIPT_INPUT = {
"houses": [
{
"name": "Harry",
"team": "Gryffindor",
},
{
"name": "Draco",
"team": "Slytherin",
},
],
}
metadata = {"units": 300, "type": "KG"}
ZENROOM_DATA = {"that": "is my data"}
@pytest.fixture
def gen_key_zencode():
return GENERATE_KEYPAIR
@pytest.fixture
def secret_key_to_private_key_zencode():
return SK_TO_PK
@pytest.fixture
def fulfill_script_zencode():
return FULFILL_SCRIPT
@pytest.fixture
def condition_script_zencode():
return CONDITION_SCRIPT
@pytest.fixture
def zenroom_house_assets():
return SCRIPT_INPUT
@pytest.fixture
def zenroom_script_input():
return SCRIPT_INPUT
@pytest.fixture
def zenroom_data():
return ZENROOM_DATA

View File

@ -1,35 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from typing import List
from planetmint_driver import Planetmint
class Hosts:
hostnames = []
connections = []
def __init__(self, filepath):
self.set_hostnames(filepath=filepath)
self.set_connections()
def set_hostnames(self, filepath) -> None:
with open(filepath) as f:
self.hostnames = f.readlines()
def set_connections(self) -> None:
self.connections = list(map(lambda h: Planetmint(h), self.hostnames))
def get_connection(self, index=0) -> Planetmint:
return self.connections[index]
def get_transactions(self, tx_id) -> List:
return list(map(lambda connection: connection.transactions.retrieve(tx_id), self.connections))
def assert_transaction(self, tx_id) -> None:
txs = self.get_transactions(tx_id)
for tx in txs:
assert txs[0] == tx, "Cannot find transaction {}".format(tx_id)

View File

@ -1,87 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# import Planetmint and create object
from planetmint_driver.crypto import generate_keypair
# import helper to manage multiple nodes
from .helper.hosts import Hosts
import time
def test_basic():
# Setup up connection to Planetmint integration test nodes
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
# generate a keypair
alice = generate_keypair()
# create a digital asset for Alice
game_boy_token = [
{
"data": {
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
}
]
# prepare the transaction with the digital asset and issue 10 tokens to bob
prepared_creation_tx = pm_alpha.transactions.prepare(
operation="CREATE",
metadata={
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
signers=alice.public_key,
recipients=[([alice.public_key], 10)],
assets=game_boy_token,
)
# fulfill and send the transaction
fulfilled_creation_tx = pm_alpha.transactions.fulfill(prepared_creation_tx, private_keys=alice.private_key)
pm_alpha.transactions.send_commit(fulfilled_creation_tx)
time.sleep(1)
creation_tx_id = fulfilled_creation_tx["id"]
# Assert that transaction is stored on all planetmint nodes
hosts.assert_transaction(creation_tx_id)
# Transfer
# create the output and input for the transaction
transfer_assets = [{"id": creation_tx_id}]
output_index = 0
output = fulfilled_creation_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": transfer_assets[0]["id"]},
"owners_before": output["public_keys"],
}
# prepare the transaction and use 3 tokens
prepared_transfer_tx = pm_alpha.transactions.prepare(
operation="TRANSFER",
asset=transfer_assets,
inputs=transfer_input,
metadata={
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
recipients=[([alice.public_key], 10)],
)
# fulfill and send the transaction
fulfilled_transfer_tx = pm_alpha.transactions.fulfill(prepared_transfer_tx, private_keys=alice.private_key)
sent_transfer_tx = pm_alpha.transactions.send_commit(fulfilled_transfer_tx)
time.sleep(1)
transfer_tx_id = sent_transfer_tx["id"]
# Assert that the transaction is stored on all planetmint nodes
hosts.assert_transaction(transfer_tx_id)

View File

@@ -1,167 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Divisible assets integration testing
# This test checks if we can successfully divide assets.
# The script tests various things like:
#
# - create a transaction with a divisible asset and issue them to someone
# - check if the transaction is stored and has the right amount of tokens
# - spend some tokens
# - try to spend more tokens than available
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the `amount`
# of a given transaction.
# ## Imports
# We need the `pytest` package to catch the `BadRequest` exception properly.
# And of course, we also need the `BadRequest`.
import pytest
from planetmint_driver.exceptions import BadRequest
# Import generate_keypair to create actors
from planetmint_driver.crypto import generate_keypair
# import helper to manage multiple nodes
from .helper.hosts import Hosts
def test_divisible_assets():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Oh look, it is Alice again and she brought her friend Bob along.
alice, bob = generate_keypair(), generate_keypair()
# ## Alice creates a time sharing token
# Alice wants to go on vacation, while Bob's bike just broke down.
# Alice decides to rent her bike to Bob while she is gone.
# So she prepares a `CREATE` transaction to issue 10 tokens.
# First, she prepares an asset for a time sharing token. As you can see in
# the description, Bob and Alice agree that each token can be used to ride
# the bike for one hour.
bike_token = [
{
"data": {
"token_for": {"bike": {"serial_number": 420420}},
"description": "Time share token. Each token equals one hour of riding.",
},
}
]
# She prepares a `CREATE` transaction and issues 10 tokens.
# Here, Alice defines in a tuple that she wants to assign
# these 10 tokens to Bob.
prepared_token_tx = pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, recipients=[([bob.public_key], 10)], assets=bike_token
)
# She fulfills and sends the transaction.
fulfilled_token_tx = pm.transactions.fulfill(prepared_token_tx, private_keys=alice.private_key)
pm.transactions.send_commit(fulfilled_token_tx)
# We store the `id` of the transaction to use it later on.
bike_token_id = fulfilled_token_tx["id"]
# Let's check if the transaction was successful.
assert pm.transactions.retrieve(bike_token_id), "Cannot find transaction {}".format(bike_token_id)
# Bob owns 10 tokens now.
assert pm.transactions.retrieve(bike_token_id)["outputs"][0]["amount"] == "10"
# ## Bob wants to use the bike
# Now that Bob got the tokens and the sun is shining, he wants to get out
# with the bike for three hours.
# To use the bike he has to send the tokens back to Alice.
# To learn about the details of transferring a transaction check out
# [test_basic.py](./test_basic.html)
transfer_assets = [{"id": bike_token_id}]
output_index = 0
output = fulfilled_token_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_token_tx["id"]},
"owners_before": output["public_keys"],
}
# To use the tokens, Bob sends the amount he wants to use (3) to Alice
# and reassigns the remaining 7 tokens to himself.
prepared_transfer_tx = pm.transactions.prepare(
operation="TRANSFER",
asset=transfer_assets,
inputs=transfer_input,
recipients=[([alice.public_key], 3), ([bob.public_key], 7)],
)
# He signs and sends the transaction.
fulfilled_transfer_tx = pm.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
sent_transfer_tx = pm.transactions.send_commit(fulfilled_transfer_tx)
# First, Bob checks if the transaction was successful.
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
hosts.assert_transaction(fulfilled_transfer_tx["id"])
# There are two outputs in the transaction now.
# The first output shows that Alice got back 3 tokens...
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["amount"] == "3"
# ... while Bob still has 7 left.
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][1]["amount"] == "7"
# ## Bob wants to ride the bike again
# It's been a week and Bob wants to ride the bike again.
# Now he wants to ride for 8 hours, that's a lot Bob!
# He prepares the transaction again.
transfer_assets = [{"id": bike_token_id}]
# This time we need an `output_index` of 1, since we have two outputs
# in the `fulfilled_transfer_tx` we created before. The first output with
# index 0 is for Alice and the second output is for Bob.
# Since Bob wants to spend more of his tokens he has to provide the
# correct output with the correct amount of tokens.
output_index = 1
output = fulfilled_transfer_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_transfer_tx["id"]},
"owners_before": output["public_keys"],
}
# This time Bob only provides Alice in the `recipients` because he wants
# to spend all his tokens
prepared_transfer_tx = pm.transactions.prepare(
operation="TRANSFER", assets=transfer_assets, inputs=transfer_input, recipients=[([alice.public_key], 8)]
)
fulfilled_transfer_tx = pm.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
# Oh Bob, what have you done?! You tried to spend more tokens than you had.
# Remember Bob, last time you spent 3 tokens already,
# so you only have 7 left.
with pytest.raises(BadRequest) as error:
pm.transactions.send_commit(fulfilled_transfer_tx)
# Now Bob gets an error saying that the amount he wanted to spend is
# higher than the amount of tokens he has left.
assert error.value.args[0] == 400
message = (
"Invalid transaction (AmountError): The amount used in the "
"inputs `7` needs to be same as the amount used in the "
"outputs `8`"
)
assert error.value.args[2]["message"] == message
# We have to stop this test now, I am sorry, but Bob is pretty upset
# about his mistake. See you next time :)

View File

@@ -1,48 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Double Spend testing
# This test challenges the system with double spends.
from uuid import uuid4
from threading import Thread
import queue
import planetmint_driver.exceptions
from planetmint_driver.crypto import generate_keypair
from .helper.hosts import Hosts
def test_double_create():
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
alice = generate_keypair()
results = queue.Queue()
tx = pm.transactions.fulfill(
pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=[{"data": {"uuid": str(uuid4())}}]
),
private_keys=alice.private_key,
)
def send_and_queue(tx):
try:
pm.transactions.send_commit(tx)
results.put("OK")
except planetmint_driver.exceptions.TransportError:
results.put("FAIL")
t1 = Thread(target=send_and_queue, args=(tx,))
t2 = Thread(target=send_and_queue, args=(tx,))
t1.start()
t2.start()
results = [results.get(timeout=2), results.get(timeout=2)]
assert results.count("OK") == 1
assert results.count("FAIL") == 1

View File

@@ -1,115 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Multisignature integration testing
# This test checks if we can successfully create and transfer a transaction
# with multiple owners.
# The script tests various things like:
#
# - create a transaction with multiple owners
# - check if the transaction is stored and has the right amount of public keys
# - transfer the transaction to a third person
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the public keys
# of a given transaction.
# # Imports
import time
# For this test case we need to import and use the Python driver
from planetmint_driver.crypto import generate_keypair
# Import helper to deal with multiple nodes
from .helper.hosts import Hosts
def test_multiple_owners():
# Set up a connection to the Planetmint integration test nodes
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
# Generate Keypairs for Alice and Bob!
alice, bob = generate_keypair(), generate_keypair()
# ## Alice and Bob create a transaction
# Alice and Bob just moved into a shared flat; no one can afford these
# high rents anymore. Bob suggests getting a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = [{"data": {"dish washer": {"serial_number": 1337}}}]
# They prepare a `CREATE` transaction. To have multiple owners, both
# Bob and Alice need to be the recipients.
prepared_dw_tx = pm_alpha.transactions.prepare(
operation="CREATE", signers=alice.public_key, recipients=(alice.public_key, bob.public_key), assets=dw_asset
)
# Now they both sign the transaction by providing their private keys.
# And send it afterwards.
fulfilled_dw_tx = pm_alpha.transactions.fulfill(prepared_dw_tx, private_keys=[alice.private_key, bob.private_key])
pm_alpha.transactions.send_commit(fulfilled_dw_tx)
# We store the `id` of the transaction to use it later on.
dw_id = fulfilled_dw_tx["id"]
time.sleep(1)
# Use hosts to assert that the transaction is properly propagated to every node
hosts.assert_transaction(dw_id)
# Let's check if the transaction was successful.
assert pm_alpha.transactions.retrieve(dw_id), "Cannot find transaction {}".format(dw_id)
# The transaction should have two public keys in the outputs.
assert len(pm_alpha.transactions.retrieve(dw_id)["outputs"][0]["public_keys"]) == 2
# ## Alice and Bob transfer a transaction to Carol.
# Alice and Bob save a lot of money living together. They often go out
# for dinner and don't cook at home. But now they don't have any dishes to
# wash, so they decide to sell the dish washer to their friend Carol.
# Hey Carol, nice to meet you!
carol = generate_keypair()
# Alice and Bob prepare the transaction to transfer the dish washer to
# Carol.
transfer_assets = [{"id": dw_id}]
output_index = 0
output = fulfilled_dw_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_dw_tx["id"]},
"owners_before": output["public_keys"],
}
# Now they create the transaction...
prepared_transfer_tx = pm_alpha.transactions.prepare(
operation="TRANSFER", assets=transfer_assets, inputs=transfer_input, recipients=carol.public_key
)
# ... and sign it with their private keys, then send it.
fulfilled_transfer_tx = pm_alpha.transactions.fulfill(
prepared_transfer_tx, private_keys=[alice.private_key, bob.private_key]
)
sent_transfer_tx = pm_alpha.transactions.send_commit(fulfilled_transfer_tx)
time.sleep(1)
# Now check that all nodes returned the same transaction
hosts.assert_transaction(fulfilled_transfer_tx["id"])
# They check if the transaction was successful.
assert pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# The owners before should include both Alice and Bob.
assert len(pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"])["inputs"][0]["owners_before"]) == 2
# While the new owner is Carol.
assert (
pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["public_keys"][0] == carol.public_key
)

View File

@@ -1,131 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# ## Testing potentially hazardous strings
# This test uses a library of `naughty` strings (code injections, weird unicode chars., etc.) as both keys and values.
# We expect either a successful tx or, when a naughty string used as a key
# violates some key constraint, a well-formatted error message.
# ## Imports
# Since the naughty strings get encoded and decoded in odd ways,
# we'll use a regex to sweep those details under the rug.
import re
# We'll use a nice library of naughty strings...
from blns import blns
# And parameterize our test so each one is treated as a separate test case
import pytest
# For this test case we import and use the Python Driver.
from planetmint_driver.crypto import generate_keypair
from planetmint_driver.exceptions import BadRequest
# import helper to manage multiple nodes
from .helper.hosts import Hosts
naughty_strings = blns.all()
skipped_naughty_strings = [
"1.00",
"$1.00",
"-1.00",
"-$1.00",
"0.00",
"0..0",
".",
"0.0.0",
"-.",
",./;'[]\\-=",
"ثم نفس سقطت وبالتحديد،, جزيرتي باستخدام أن دنو. إذ هنا؟ الستار وتنصيب كان. أهّل ايطاليا، بريطانيا-فرنسا قد أخذ. سليمان، إتفاقية بين ما, يذكر الحدود أي بعد, معاملة بولندا، الإطلاق عل إيو.",
"test\x00",
"Ṱ̺̺̕o͞ ̷i̲̬͇̪͙n̝̗͕v̟̜̘̦͟o̶̙̰̠kè͚̮̺̪̹̱̤ ̖t̝͕̳̣̻̪͞h̼͓̲̦̳̘̲e͇̣̰̦̬͎ ̢̼̻̱̘h͚͎͙̜̣̲ͅi̦̲̣̰̤v̻͍e̺̭̳̪̰-m̢iͅn̖̺̞̲̯̰d̵̼̟͙̩̼̘̳ ̞̥̱̳̭r̛̗̘e͙p͠r̼̞̻̭̗e̺̠̣͟s̘͇̳͍̝͉e͉̥̯̞̲͚̬͜ǹ̬͎͎̟̖͇̤t͍̬̤͓̼̭͘ͅi̪̱n͠g̴͉ ͏͉ͅc̬̟h͡a̫̻̯͘o̫̟̖͍̙̝͉s̗̦̲.̨̹͈̣",
"̡͓̞ͅI̗̘̦͝n͇͇͙v̮̫ok̲̫̙͈i̖͙̭̹̠̞n̡̻̮̣̺g̲͈͙̭͙̬͎ ̰t͔̦h̞̲e̢̤ ͍̬̲͖f̴̘͕̣è͖ẹ̥̩l͖͔͚i͓͚̦͠n͖͍̗͓̳̮g͍ ̨o͚̪͡f̘̣̬ ̖̘͖̟͙̮c҉͔̫͖͓͇͖ͅh̵̤̣͚͔á̗̼͕ͅo̼̣̥s̱͈̺̖̦̻͢.̛̖̞̠̫̰",
"̗̺͖̹̯͓Ṯ̤͍̥͇͈h̲́e͏͓̼̗̙̼̣͔ ͇̜̱̠͓͍ͅN͕͠e̗̱z̘̝̜̺͙p̤̺̹͍̯͚e̠̻̠͜r̨̤͍̺̖͔̖̖d̠̟̭̬̝͟i̦͖̩͓͔̤a̠̗̬͉̙n͚͜ ̻̞̰͚ͅh̵͉i̳̞v̢͇ḙ͎͟-҉̭̩̼͔m̤̭̫i͕͇̝̦n̗͙ḍ̟ ̯̲͕͞ǫ̟̯̰̲͙̻̝f ̪̰̰̗̖̭̘͘c̦͍̲̞͍̩̙ḥ͚a̮͎̟̙͜ơ̩̹͎s̤.̝̝ ҉Z̡̖̜͖̰̣͉̜a͖̰͙̬͡l̲̫̳͍̩g̡̟̼̱͚̞̬ͅo̗͜.̟",
"̦H̬̤̗̤͝e͜ ̜̥̝̻͍̟́w̕h̖̯͓o̝͙̖͎̱̮ ҉̺̙̞̟͈W̷̼̭a̺̪͍į͈͕̭͙̯̜t̶̼̮s̘͙͖̕ ̠̫̠B̻͍͙͉̳ͅe̵h̵̬͇̫͙i̹͓̳̳̮͎̫̕n͟d̴̪̜̖ ̰͉̩͇͙̲͞ͅT͖̼͓̪͢h͏͓̮̻e̬̝̟ͅ ̤̹̝W͙̞̝͔͇͝ͅa͏͓͔̹̼̣l̴͔̰̤̟͔ḽ̫.͕",
'"><script>alert(document.title)</script>',
"'><script>alert(document.title)</script>",
"><script>alert(document.title)</script>",
"</script><script>alert(document.title)</script>",
"< / script >< script >alert(document.title)< / script >",
" onfocus=alert(document.title) autofocus ",
'" onfocus=alert(document.title) autofocus ',
"' onfocus=alert(document.title) autofocus ",
"scriptalert(document.title)/script",
"/dev/null; touch /tmp/blns.fail ; echo",
"../../../../../../../../../../../etc/passwd%00",
"../../../../../../../../../../../etc/hosts",
"() { 0; }; touch /tmp/blns.shellshock1.fail;",
"() { _; } >_[$($())] { touch /tmp/blns.shellshock2.fail; }",
]
naughty_strings = [naughty for naughty in naughty_strings if naughty not in skipped_naughty_strings]
# This is our base test case, but we'll reuse it to send naughty strings as both keys and values.
def send_naughty_tx(assets, metadata):
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Here's Alice.
alice = generate_keypair()
# Alice is in a naughty mood today, so she creates a tx with some naughty strings
prepared_transaction = pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=assets, metadata=metadata
)
# She fulfills the transaction
fulfilled_transaction = pm.transactions.fulfill(prepared_transaction, private_keys=alice.private_key)
# The fulfilled tx gets sent to the pm network
try:
sent_transaction = pm.transactions.send_commit(fulfilled_transaction)
except BadRequest as e:
sent_transaction = e
# If her key contained a '.', began with a '$', or contained a NUL character
regex = r".*\..*|\$.*|.*\x00.*"
key = next(iter(metadata))
if re.match(regex, key):
# Then she expects a nicely formatted error code
status_code = sent_transaction.status_code
error = sent_transaction.error
regex = (
r"\{\s*\n*"
r'\s*"message":\s*"Invalid transaction \(ValidationError\):\s*'
r"Invalid key name.*The key name cannot contain characters.*\n*"
r'\s*"status":\s*400\n*'
r"\s*\}\n*"
)
assert status_code == 400
assert re.fullmatch(regex, error), sent_transaction
# Otherwise, she expects to see her transaction in the database
elif "id" in sent_transaction.keys():
tx_id = sent_transaction["id"]
assert pm.transactions.retrieve(tx_id)
# If neither condition was true, then something weird happened...
else:
raise TypeError(sent_transaction)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_keys(naughty_string):
assets = [{"data": {naughty_string: "nice_value"}}]
metadata = {naughty_string: "nice_value"}
send_naughty_tx(assets, metadata)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_values(naughty_string):
assets = [{"data": {"nice_key": naughty_string}}]
metadata = {"nice_key": naughty_string}
send_naughty_tx(assets, metadata)

View File

@@ -1,131 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Stream Acceptance Test
# This test checks if the event stream works correctly. The basic idea of this
# test is to generate some random **valid** transactions, send them to a
# Planetmint node, and expect those transactions to be returned by the valid
# transactions Stream API. During this test, two threads work together,
# sharing a queue to exchange events.
#
# - The *main thread* first creates and sends the transactions to Planetmint;
# then it runs through all events in the shared queue to check if all
# transactions sent have been validated by Planetmint.
# - The *listen thread* listens to the events coming from Planetmint and puts
# them in a queue shared with the main thread.
import queue
import json
from threading import Thread, Event
from uuid import uuid4
# For this script, we need to set up a websocket connection, that's the reason
# we import the
# [websocket](https://github.com/websocket-client/websocket-client) module
from websocket import create_connection
from planetmint_driver.crypto import generate_keypair
# import helper to manage multiple nodes
from .helper.hosts import Hosts
def test_stream():
# ## Set up the test
# We read the node hostnames from `/shared/hostnames` to know where to connect.
# Check [test_basic.py](./test_basic.html) for more information.
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# *That's pretty bad, but let's do it like this for now.*
WS_ENDPOINT = "ws://{}:9985/api/v1/streams/valid_transactions".format(hosts.hostnames[0])
# Hello to Alice again, she is pretty active in those tests, good job
# Alice!
alice = generate_keypair()
# We need few variables to keep the state, specifically we need `sent` to
# keep track of all transactions Alice sent to Planetmint, while `received`
# are the transactions Planetmint validated and sent back to her.
sent = []
received = queue.Queue()
# In this test we use a websocket. The websocket must be started **before**
# sending transactions to Planetmint, otherwise we might lose some
# transactions. The `ws_ready` event is used to synchronize the main thread
# with the listen thread.
ws_ready = Event()
# ## Listening to events
# This is the function run by the complementary thread.
def listen():
# First we connect to the remote endpoint using the WebSocket protocol.
ws = create_connection(WS_ENDPOINT)
# After the connection has been set up, we can signal the main thread
# to proceed (continue reading, it should make sense in a second.)
ws_ready.set()
# It's time to consume all events coming from the Planetmint stream API.
# Every time a new event is received, it is put in the queue shared
# with the main thread.
while True:
result = ws.recv()
received.put(result)
# Put `listen` in a thread, and start it. Note that `listen` is a local
# function and it can access all variables in the enclosing function.
t = Thread(target=listen, daemon=True)
t.start()
# ## Pushing the transactions to Planetmint
# After starting the listen thread, we wait for it to connect, and then we
# proceed.
ws_ready.wait()
# Here we prepare, sign, and send ten different `CREATE` transactions. To
# make sure each transaction is different from the other, we generate a
# random `uuid`.
for _ in range(10):
tx = pm.transactions.fulfill(
pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=[{"data": {"uuid": str(uuid4())}}]
),
private_keys=alice.private_key,
)
# We don't want to wait for each transaction to be in a block. By using
# `async` mode, we make sure that the driver returns as soon as the
# transaction is pushed to the Planetmint API. Remember: we expect all
# transactions to be in the shared queue: this is a two phase test,
# first we send a bunch of transactions, then we check if they are
# valid (and, in this case, they should).
pm.transactions.send_async(tx)
# The `id` of every sent transaction is then stored in a list.
sent.append(tx["id"])
# ## Check the valid transactions coming from Planetmint
# Now we are ready to check if Planetmint did its job. A simple way to
# check if all sent transactions have been processed is to **remove** from
# `sent` the transactions we get from the *listen thread*. At one point in
# time, `sent` should be empty, and we exit the test.
while sent:
# To avoid waiting forever, we have an arbitrary timeout of 5
# seconds: it should be enough time for Planetmint to create
# blocks, in fact a new block is created every second. If we hit
# the timeout, then game over ¯\\\_(ツ)\_/¯
try:
event = received.get(timeout=5)
txid = json.loads(event)["transaction_id"]
except queue.Empty:
assert False, "Did not receive all expected transactions"
# Last thing is to try to remove the `txid` from the set of sent
# transactions. If this test is running in parallel with others, we
# might get a transaction id of another test, and `remove` can fail.
# It's OK if this happens.
try:
sent.remove(txid)
except ValueError:
pass

View File

@@ -1,319 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# ## Imports
import time
import json
# For this test case we need the planetmint_driver.crypto package
import base58
import sha3
from planetmint_cryptoconditions import Ed25519Sha256, ThresholdSha256
from planetmint_driver.crypto import generate_keypair
# Import helper to deal with multiple nodes
from .helper.hosts import Hosts
def prepare_condition_details(condition: ThresholdSha256):
condition_details = {"subconditions": [], "threshold": condition.threshold, "type": condition.TYPE_NAME}
for s in condition.subconditions:
if s["type"] == "fulfillment" and s["body"].TYPE_NAME == "ed25519-sha-256":
condition_details["subconditions"].append(
{"type": s["body"].TYPE_NAME, "public_key": base58.b58encode(s["body"].public_key).decode()}
)
else:
condition_details["subconditions"].append(prepare_condition_details(s["body"]))
return condition_details
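# For the 2-of-3 threshold assembled below, this helper yields a nested dict
# roughly of this shape (public keys abbreviated, a sketch):
#
#     {"type": "threshold-sha-256", "threshold": 2, "subconditions": [
#         {"type": "ed25519-sha-256", "public_key": "<alice>"},
#         {"type": "ed25519-sha-256", "public_key": "<bob>"},
#         {"type": "ed25519-sha-256", "public_key": "<carol>"}]}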
def test_threshold():
# Set up a connection to the test nodes
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Generate keypairs for Alice, Bob, and Carol!
alice, bob, carol = generate_keypair(), generate_keypair(), generate_keypair()
# ## Alice and Bob create a transaction
# Alice and Bob just moved into a shared flat; no one can afford these
# high rents anymore. Bob suggests getting a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = [{"data": {"dish washer": {"serial_number": 1337}}}]
# Create subfulfillments
alice_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
bob_ed25519 = Ed25519Sha256(public_key=base58.b58decode(bob.public_key))
carol_ed25519 = Ed25519Sha256(public_key=base58.b58decode(carol.public_key))
# Create threshold condition (2/3) and add subfulfillments
threshold_sha256 = ThresholdSha256(2)
threshold_sha256.add_subfulfillment(alice_ed25519)
threshold_sha256.add_subfulfillment(bob_ed25519)
threshold_sha256.add_subfulfillment(carol_ed25519)
# Create a condition uri and details for the output object
condition_uri = threshold_sha256.condition.serialize_uri()
condition_details = prepare_condition_details(threshold_sha256)
# Assemble output and input for the handcrafted tx
output = {
"amount": "1",
"condition": {
"details": condition_details,
"uri": condition_uri,
},
"public_keys": (alice.public_key, bob.public_key, carol.public_key),
}
# The yet to be fulfilled input:
input_ = {
"fulfillment": None,
"fulfills": None,
"owners_before": (alice.public_key, bob.public_key),
}
# Assemble the handcrafted transaction
handcrafted_dw_tx = {
"operation": "CREATE",
"asset": dw_asset,
"metadata": None,
"outputs": (output,),
"inputs": (input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_dw_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
# Sign the message with Alice's and Bob's private keys
alice_ed25519.sign(message.digest(), base58.b58decode(alice.private_key))
bob_ed25519.sign(message.digest(), base58.b58decode(bob.private_key))
# Create fulfillment and add uri to inputs
fulfillment_threshold = ThresholdSha256(2)
fulfillment_threshold.add_subfulfillment(alice_ed25519)
fulfillment_threshold.add_subfulfillment(bob_ed25519)
fulfillment_threshold.add_subcondition(carol_ed25519.condition)
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_dw_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_dw_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_dw_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
dw_creation_txid = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_dw_tx["id"] = dw_creation_txid
pm.transactions.send_commit(handcrafted_dw_tx)
time.sleep(1)
# Assert that the tx is propagated to all nodes
hosts.assert_transaction(dw_creation_txid)
def test_weighted_threshold():
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
alice, bob, carol = generate_keypair(), generate_keypair(), generate_keypair()
assets = [{"data": {"trashcan": {"animals": ["racoon_1", "racoon_2"]}}}]
alice_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
bob_ed25519 = Ed25519Sha256(public_key=base58.b58decode(bob.public_key))
carol_ed25519 = Ed25519Sha256(public_key=base58.b58decode(carol.public_key))
threshold = ThresholdSha256(1)
threshold.add_subfulfillment(alice_ed25519)
sub_threshold = ThresholdSha256(2)
sub_threshold.add_subfulfillment(bob_ed25519)
sub_threshold.add_subfulfillment(carol_ed25519)
threshold.add_subfulfillment(sub_threshold)
condition_uri = threshold.condition.serialize_uri()
condition_details = prepare_condition_details(threshold)
# Assemble output and input for the handcrafted tx
output = {
"amount": "1",
"condition": {
"details": condition_details,
"uri": condition_uri,
},
"public_keys": (alice.public_key, bob.public_key, carol.public_key),
}
# The yet to be fulfilled input:
input_ = {
"fulfillment": None,
"fulfills": None,
"owners_before": (alice.public_key, bob.public_key),
}
# Assemble the handcrafted transaction
handcrafted_tx = {
"operation": "CREATE",
"asset": assets,
"metadata": None,
"outputs": (output,),
"inputs": (input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
# Sign the message with Alice's private key
alice_ed25519.sign(message.digest(), base58.b58decode(alice.private_key))
# Create fulfillment and add uri to inputs
sub_fulfillment_threshold = ThresholdSha256(2)
sub_fulfillment_threshold.add_subcondition(bob_ed25519.condition)
sub_fulfillment_threshold.add_subcondition(carol_ed25519.condition)
fulfillment_threshold = ThresholdSha256(1)
fulfillment_threshold.add_subfulfillment(alice_ed25519)
fulfillment_threshold.add_subfulfillment(sub_fulfillment_threshold)
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
creation_tx_id = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_tx["id"] = creation_tx_id
pm.transactions.send_commit(handcrafted_tx)
time.sleep(1)
# Assert that the tx is propagated to all nodes
hosts.assert_transaction(creation_tx_id)
# Now transfer created asset
alice_transfer_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
bob_transfer_ed25519 = Ed25519Sha256(public_key=base58.b58decode(bob.public_key))
carol_transfer_ed25519 = Ed25519Sha256(public_key=base58.b58decode(carol.public_key))
transfer_condition_uri = alice_transfer_ed25519.condition.serialize_uri()
# Assemble output and input for the handcrafted tx
transfer_output = {
"amount": "1",
"condition": {
"details": {
"type": alice_transfer_ed25519.TYPE_NAME,
"public_key": base58.b58encode(alice_transfer_ed25519.public_key).decode(),
},
"uri": transfer_condition_uri,
},
"public_keys": (alice.public_key,),
}
# The yet to be fulfilled input:
transfer_input_ = {
"fulfillment": None,
"fulfills": {"transaction_id": creation_tx_id, "output_index": 0},
"owners_before": (alice.public_key, bob.public_key, carol.public_key),
}
# Assemble the handcrafted transaction
handcrafted_transfer_tx = {
"operation": "TRANSFER",
"assets": [{"id": creation_tx_id}],
"metadata": None,
"outputs": (transfer_output,),
"inputs": (transfer_input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_transfer_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
message.update(
"{}{}".format(
handcrafted_transfer_tx["inputs"][0]["fulfills"]["transaction_id"],
handcrafted_transfer_tx["inputs"][0]["fulfills"]["output_index"],
).encode()
)
# Sign the message with Bob's and Carol's private keys
bob_transfer_ed25519.sign(message.digest(), base58.b58decode(bob.private_key))
carol_transfer_ed25519.sign(message.digest(), base58.b58decode(carol.private_key))
sub_fulfillment_threshold = ThresholdSha256(2)
sub_fulfillment_threshold.add_subfulfillment(bob_transfer_ed25519)
sub_fulfillment_threshold.add_subfulfillment(carol_transfer_ed25519)
# Create fulfillment and add uri to inputs
fulfillment_threshold = ThresholdSha256(1)
fulfillment_threshold.add_subcondition(alice_transfer_ed25519.condition)
fulfillment_threshold.add_subfulfillment(sub_fulfillment_threshold)
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_transfer_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_transfer_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_transfer_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
transfer_tx_id = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_transfer_tx["id"] = transfer_tx_id
pm.transactions.send_commit(handcrafted_transfer_tx)
time.sleep(1)
# Assert that the tx is propagated to all nodes
hosts.assert_transaction(transfer_tx_id)

View File

@@ -1,131 +0,0 @@
import json
import base58
from hashlib import sha3_256
from planetmint_cryptoconditions.types.zenroom import ZenroomSha256
from planetmint_driver.crypto import generate_keypair
from .helper.hosts import Hosts
from zenroom import zencode_exec
import time
def test_zenroom_signing(
gen_key_zencode,
secret_key_to_private_key_zencode,
fulfill_script_zencode,
zenroom_data,
zenroom_house_assets,
zenroom_script_input,
condition_script_zencode,
):
biolabs = generate_keypair()
version = "2.0"
alice = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
bob = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
zen_public_keys = json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Alice"), keys=json.dumps({"keyring": alice})).output
)
zen_public_keys.update(
json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Bob"), keys=json.dumps({"keyring": bob})).output
)
)
zenroomscpt = ZenroomSha256(script=fulfill_script_zencode, data=zenroom_data, keys=zen_public_keys)
print(f"zenroom is: {zenroomscpt.script}")
# CRYPTO-CONDITIONS: generate the condition uri
condition_uri_zen = zenroomscpt.condition.serialize_uri()
print(f"\nzenroom condition URI: {condition_uri_zen}")
# CRYPTO-CONDITIONS: construct an unsigned fulfillment dictionary
unsigned_fulfillment_dict_zen = {
"type": zenroomscpt.TYPE_NAME,
"public_key": base58.b58encode(biolabs.public_key).decode(),
}
output = {
"amount": "10",
"condition": {
"details": unsigned_fulfillment_dict_zen,
"uri": condition_uri_zen,
},
"public_keys": [
biolabs.public_key,
],
}
input_ = {
"fulfillment": None,
"fulfills": None,
"owners_before": [
biolabs.public_key,
],
}
metadata = {"result": {"output": ["ok"]}}
script_ = {
"code": {"type": "zenroom", "raw": "test_string", "parameters": [{"obj": "1"}, {"obj": "2"}]},
"state": "dd8bbd234f9869cab4cc0b84aa660e9b5ef0664559b8375804ee8dce75b10576",
"input": zenroom_script_input,
"output": ["ok"],
"policies": {},
}
metadata = {"result": {"output": ["ok"]}}
token_creation_tx = {
"operation": "CREATE",
"asset": {"data": {"test": "my asset"}},
"script": script_,
"metadata": metadata,
"outputs": [
output,
],
"inputs": [
input_,
],
"version": version,
"id": None,
}
# JSON: serialize the transaction-without-id to a json formatted string
tx = json.dumps(
token_creation_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
script_ = json.dumps(script_)
# major workflow:
# we store the fulfill script in the transaction/message (zenroom-sha)
# the condition script is used to fulfill the transaction and create the signature
#
# the server should pick the fulfill script, recreate the zenroom-sha, and verify the signature
signed_input = zenroomscpt.sign(script_, condition_script_zencode, alice)
input_signed = json.loads(signed_input)
input_signed["input"]["signature"] = input_signed["output"]["signature"]
del input_signed["output"]["signature"]
del input_signed["output"]["logs"]
input_signed["output"] = ["ok"] # define expected output that is to be compared
input_msg = json.dumps(input_signed)
assert zenroomscpt.validate(message=input_msg)
tx = json.loads(tx)
fulfillment_uri_zen = zenroomscpt.serialize_uri()
tx["inputs"][0]["fulfillment"] = fulfillment_uri_zen
tx["script"] = input_signed
tx["id"] = None
json_str_tx = json.dumps(tx, sort_keys=True, skipkeys=False, separators=(",", ":"))
# SHA3: hash the serialized id-less transaction to generate the id
shared_creation_txid = sha3_256(json_str_tx.encode()).hexdigest()
tx["id"] = shared_creation_txid
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
sent_transfer_tx = pm_alpha.transactions.send_commit(tx)
time.sleep(1)
# Assert that the transaction is stored on all planetmint nodes
hosts.assert_transaction(shared_creation_txid)
print(f"\n\nstatus and result : + {sent_transfer_tx}")

View File

@@ -1,14 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Planetmint configuration
/usr/src/app/scripts/planetmint-monit-config
# Tarantool startup and configuration
tarantool /usr/src/app/scripts/init.lua
# Start services
monit -d 5 -I -B

View File

@@ -1,11 +0,0 @@
#!/bin/sh
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
rm /shared/hostnames
rm /shared/lock
rm /shared/*node_id
rm /shared/*.json
rm /shared/id_rsa.pub

View File

@@ -1,81 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Show tendermint node id
show_id () {
tendermint --home=/tendermint show_node_id | tail -n 1
}
# Show validator public key
show_validator () {
tendermint --home=/tendermint show_validator | tail -n 1
}
# Elect new voting power for node
elect_validator () {
planetmint election new upsert-validator $1 $2 $3 --private-key /tendermint/config/priv_validator_key.json 2>&1
}
# Propose new chain migration
propose_migration () {
planetmint election new chain-migration --private-key /tendermint/config/priv_validator_key.json 2>&1
}
# Show election state
show_election () {
planetmint election show $1 2>&1
}
# Approve election
approve_validator () {
planetmint election approve $1 --private-key /tendermint/config/priv_validator_key.json
}
# Fetch tendermint id and pubkey and create upsert proposal
elect () {
node_id=$(show_id)
validator_pubkey=$(show_validator | jq -r .value)
proposal=$(elect_validator $validator_pubkey $1 $node_id | grep SUCCESS)
echo ${proposal##* }
}
# Create chain migration proposal and return election id
migrate () {
proposal=$(propose_migration | grep SUCCESS)
echo ${proposal##* }
}
usage () {
echo "usage: $0 {show_id|show_validator|elect <power>|migrate|show_election <election_id>|approve <election_id>}"
}
while [ "$1" != "" ]; do
case $1 in
show_id ) show_id
;;
show_validator ) show_validator
;;
elect ) shift
elect $1
;;
migrate ) shift
migrate
;;
show_election ) shift
show_election $1
;;
approve ) shift
approve_validator $1
;;
* ) usage
exit 1
esac
shift
done
exitcode=$?
exit $exitcode

View File

@@ -1,33 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import json
import sys
def edit_genesis() -> None:
file_names = sys.argv[1:]
validators = []
for file_name in file_names:
file = open(file_name)
genesis = json.load(file)
validators.extend(genesis["validators"])
file.close()
genesis_file = open(file_names[0])
genesis_json = json.load(genesis_file)
genesis_json["validators"] = validators
genesis_file.close()
with open("/shared/genesis.json", "w") as f:
json.dump(genesis_json, f, indent=True)
return None
if __name__ == "__main__":
edit_genesis()
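# Usage sketch (file names are illustrative): merge the validator sets of all
# per-node genesis files into /shared/genesis.json:
#
#     ./genesis.py /shared/node1_genesis.json /shared/node2_genesis.json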

View File

@@ -1,86 +0,0 @@
#!/usr/bin/env tarantool
box.cfg {
listen = 3303,
background = true,
log = '.planetmint-monit/logs/tarantool.log',
pid_file = '.planetmint-monit/monit_processes/tarantool.pid'
}
box.schema.user.grant('guest','read,write,execute,create,drop','universe')
function indexed_pattern_search(space_name, field_no, pattern)
if (box.space[space_name] == nil) then
print("Error: Failed to find the specified space")
return nil
end
local index_no = -1
for i=0,box.schema.INDEX_MAX,1 do
if (box.space[space_name].index[i] == nil) then break end
if (box.space[space_name].index[i].type == "TREE"
and box.space[space_name].index[i].parts[1].fieldno == field_no
and (box.space[space_name].index[i].parts[1].type == "scalar"
or box.space[space_name].index[i].parts[1].type == "string")) then
index_no = i
break
end
end
if (index_no == -1) then
print("Error: Failed to find an appropriate index")
return nil
end
local index_search_key = ""
local index_search_key_length = 0
local last_character = ""
local c = ""
local c2 = ""
for i=1,string.len(pattern),1 do
c = string.sub(pattern, i, i)
if (last_character ~= "%") then
if (c == '^' or c == "$" or c == "(" or c == ")" or c == "."
or c == "[" or c == "]" or c == "*" or c == "+"
or c == "-" or c == "?") then
break
end
if (c == "%") then
c2 = string.sub(pattern, i + 1, i + 1)
if (string.match(c2, "%p") == nil) then break end
index_search_key = index_search_key .. c2
else
index_search_key = index_search_key .. c
end
end
last_character = c
end
index_search_key_length = string.len(index_search_key)
local result_set = {}
local number_of_tuples_in_result_set = 0
local previous_tuple_field = ""
while true do
local number_of_tuples_since_last_yield = 0
local is_time_for_a_yield = false
for _,tuple in box.space[space_name].index[index_no]:
pairs(index_search_key,{iterator = box.index.GE}) do
if (string.sub(tuple[field_no], 1, index_search_key_length)
> index_search_key) then
break
end
number_of_tuples_since_last_yield = number_of_tuples_since_last_yield + 1
if (number_of_tuples_since_last_yield >= 10
and tuple[field_no] ~= previous_tuple_field) then
index_search_key = tuple[field_no]
is_time_for_a_yield = true
break
end
previous_tuple_field = tuple[field_no]
if (string.match(tuple[field_no], pattern) ~= nil) then
number_of_tuples_in_result_set = number_of_tuples_in_result_set + 1
result_set[number_of_tuples_in_result_set] = tuple
end
end
if (is_time_for_a_yield ~= true) then
break
end
require('fiber').yield()
end
return result_set
end
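-- Usage sketch (space name, field number, and pattern are illustrative):
--
--     indexed_pattern_search("assets", 2, "XY%d+")
--
-- returns every tuple in space "assets" whose field 2 matches the Lua pattern.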

View File

@@ -1,208 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -o nounset
# Check if directory for monit logs exists
if [ ! -d "$HOME/.planetmint-monit" ]; then
mkdir -p "$HOME/.planetmint-monit"
fi
monit_pid_path=${MONIT_PID_PATH:=$HOME/.planetmint-monit/monit_processes}
monit_script_path=${MONIT_SCRIPT_PATH:=$HOME/.planetmint-monit/monit_script}
monit_log_path=${MONIT_LOG_PATH:=$HOME/.planetmint-monit/logs}
monitrc_path=${MONITRC_PATH:=$HOME/.monitrc}
function usage() {
cat <<EOM
Usage: ${0##*/} [-h]
Configure Monit for Planetmint and Tendermint process management.
ENV[MONIT_PID_PATH] || --monit-pid-path PATH
Absolute path to the directory where the program's pid-file will reside.
The pid-file contains the ID(s) of the process(es). (default: ${monit_pid_path})
ENV[MONIT_SCRIPT_PATH] || --monit-script-path PATH
Absolute path to the directory where the executable program or
script is present. (default: ${monit_script_path})
ENV[MONIT_LOG_PATH] || --monit-log-path PATH
Absolute path to the directory where all the logs for processes
monitored by Monit are stored. (default: ${monit_log_path})
ENV[MONITRC_PATH] || --monitrc-path PATH
Absolute path to the monit control file (monitrc). (default: ${monitrc_path})
-h|--help
Show this help and exit.
EOM
}
while [[ $# -gt 0 ]]; do
arg="$1"
case $arg in
--monit-pid-path)
monit_pid_path="$2"
shift
;;
--monit-script-path)
monit_script_path="$2"
shift
;;
--monit-log-path)
monit_log_path="$2"
shift
;;
--monitrc-path)
monitrc_path="$2"
shift
;;
-h | --help)
usage
exit
;;
*)
echo "Unknown option: $1"
usage
exit 1
;;
esac
shift
done
# Check if directory for monit logs exists
if [ ! -d "$monit_log_path" ]; then
mkdir -p "$monit_log_path"
fi
# Check if directory for monit pid files exists
if [ ! -d "$monit_pid_path" ]; then
mkdir -p "$monit_pid_path"
fi
cat >${monit_script_path} <<EOF
#!/bin/bash
case \$1 in
start_planetmint)
pushd \$4
nohup planetmint start > /dev/null 2>&1 &
echo \$! > \$2
popd
;;
stop_planetmint)
kill -2 \`cat \$2\`
rm -f \$2
;;
start_tendermint)
pushd \$4
nohup tendermint node \
--p2p.laddr "tcp://0.0.0.0:26656" \
--rpc.laddr "tcp://0.0.0.0:26657" \
--proxy_app="tcp://0.0.0.0:26658" \
--consensus.create_empty_blocks=false \
--p2p.pex=false >> \$3/tendermint.out.log 2>> \$3/tendermint.err.log &
echo \$! > \$2
popd
;;
stop_tendermint)
kill -2 \`cat \$2\`
rm -f \$2
;;
esac
exit 0
EOF
chmod +x ${monit_script_path}
cat >${monit_script_path}_logrotate <<EOF
#!/bin/bash
case \$1 in
rotate_tendermint_logs)
/bin/cp \$2 \$2.\$(date +%y-%m-%d)
/bin/tar -cvf \$2.\$(date +%Y%m%d_%H%M%S).tar.gz \$2.\$(date +%y-%m-%d)
/bin/rm \$2.\$(date +%y-%m-%d)
/bin/cp /dev/null \$2
;;
esac
exit 0
EOF
chmod +x ${monit_script_path}_logrotate
# Handling overwriting of control file interactively
if [ -f "$monitrc_path" ]; then
echo "$monitrc_path already exists."
read -p "Overwrite[Y]? " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
echo "Overriding $monitrc_path"
else
read -p "Enter absolute path to store Monit control file: " monitrc_path
eval monitrc_path="$monitrc_path"
if [ ! -d "$(dirname $monitrc_path)" ]; then
echo "Failed to save monit control file '$monitrc_path': No such file or directory."
exit 1
fi
fi
fi
# configure monitrc
cat >${monitrc_path} <<EOF
set httpd
port 2812
allow localhost
check process planetmint
with pidfile ${monit_pid_path}/planetmint.pid
start program "${monit_script_path} start_planetmint $monit_pid_path/planetmint.pid ${monit_log_path} ${monit_log_path}"
restart program "${monit_script_path} start_planetmint $monit_pid_path/planetmint.pid ${monit_log_path} ${monit_log_path}"
stop program "${monit_script_path} stop_planetmint $monit_pid_path/planetmint.pid ${monit_log_path} ${monit_log_path}"
check process tendermint
with pidfile ${monit_pid_path}/tendermint.pid
start program "${monit_script_path} start_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
restart program "${monit_script_path} start_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
stop program "${monit_script_path} stop_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
depends on planetmint
check file tendermint.out.log with path ${monit_log_path}/tendermint.out.log
if size > 200 MB then
exec "${monit_script_path}_logrotate rotate_tendermint_logs ${monit_log_path}/tendermint.out.log $monit_pid_path/tendermint.pid"
check file tendermint.err.log with path ${monit_log_path}/tendermint.err.log
if size > 200 MB then
exec "${monit_script_path}_logrotate rotate_tendermint_logs ${monit_log_path}/tendermint.err.log $monit_pid_path/tendermint.pid"
EOF
# Setting permissions for control file
chmod 0700 ${monitrc_path}
echo -e "Planetmint process manager configured!"
set -o errexit

View File

@@ -1,83 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Write hostname to list
echo $(hostname) >> /shared/hostnames
# Create ssh folder
mkdir ~/.ssh
# Wait for test container pubkey
while [ ! -f /shared/id_rsa.pub ]; do
echo "WAIT FOR PUBKEY"
sleep 1
done
# Add pubkey to authorized keys
cat /shared/id_rsa.pub > ~/.ssh/authorized_keys
# Allow root user login
sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/" /etc/ssh/sshd_config
# Restart ssh service
service ssh restart
# Tendermint configuration
tendermint init
# Write node id to shared folder
HOSTNAME=$(hostname)
NODE_ID=$(tendermint show_node_id | tail -n 1)
echo $NODE_ID > /shared/${HOSTNAME}_node_id
# Wait for other node ids
FILES=()
while [ ! ${#FILES[@]} == $SCALE ]; do
echo "WAIT FOR NODE IDS"
sleep 1
FILES=(/shared/*node_id)
done
# Write node ids to persistent peers
PEERS="persistent_peers = \""
for f in ${FILES[@]}; do
ID=$(cat $f)
HOST=$(echo $f | cut -c 9-20)
if [ ! $HOST == $HOSTNAME ]; then
PEERS+="${ID}@${HOST}:26656, "
fi
done
PEERS=$(echo $PEERS | rev | cut -c 2- | rev)
PEERS+="\""
sed -i "/persistent_peers = \"\"/c\\${PEERS}" /tendermint/config/config.toml
# Copy genesis.json to shared folder
cp /tendermint/config/genesis.json /shared/${HOSTNAME}_genesis.json
# Wait for the genesis file of every node to be present
FILES=()
while [ ! ${#FILES[@]} == $SCALE ]; do
echo "WAIT FOR GENESIS FILES"
sleep 1
FILES=(/shared/*_genesis.json)
done
# Create genesis.json for nodes
if [ ! -f /shared/lock ]; then
echo LOCKING
touch /shared/lock
/usr/src/app/scripts/genesis.py ${FILES[@]}
fi
while [ ! -f /shared/genesis.json ]; do
echo "WAIT FOR GENESIS"
sleep 1
done
# Copy genesis.json to tendermint config
cp /shared/genesis.json /tendermint/config/genesis.json
exec "$@"

View File

@@ -1,16 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Create ssh folder
mkdir ~/.ssh
# Create ssh keys
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa
# Publish pubkey to shared folder
cp ~/.ssh/id_rsa.pub /shared
exec "$@"

View File

@@ -1,24 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Start CLI Tests
# Test upsert new validator
/tests/upsert-new-validator.sh
# Test chain migration
# TODO: implementation not finished
#/tests/chain-migration.sh
# TODO: Implement test for voting edge cases or implicit in chain migration and upsert validator?
exitcode=$?
if [ $exitcode -ne 0 ]; then
exit $exitcode
fi
exec "$@"

View File

@@ -1,29 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Only continue if all services are ready
HOSTNAMES=()
while [ ! ${#HOSTNAMES[@]} == $SCALE ]; do
echo "WAIT FOR HOSTNAMES"
sleep 1
readarray -t HOSTNAMES < /shared/hostnames
done
for host in ${HOSTNAMES[@]}; do
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' $host:9984)" != "200" ]]; do
echo "WAIT FOR PLANETMINT $host"
sleep 1
done
done
for host in ${HOSTNAMES[@]}; do
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' $host:26657)" != "200" ]]; do
echo "WAIT FOR TENDERMINT $host"
sleep 1
done
done
exec "$@"

View File

@@ -1,4 +1,4 @@
-FROM tendermint/tendermint:v0.34.15
+FROM tendermint/tendermint:v0.34.24
 LABEL maintainer "contact@ipdb.global"
 WORKDIR /
 USER root

View File

@@ -1,3 +1,4 @@
+---
 # Copyright © 2020 Interplanetary Database Association e.V.,
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@@ -1,3 +1,4 @@
+---
 # Copyright © 2020 Interplanetary Database Association e.V.,
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@@ -1,3 +1,4 @@
+---
 # Copyright © 2020 Interplanetary Database Association e.V.,
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@@ -1,4 +1,4 @@
-ARG tm_version=v0.31.5
+ARG tm_version=v0.34.24
 FROM tendermint/tendermint:${tm_version}
 LABEL maintainer "contact@ipdb.global"
 WORKDIR /

View File

@@ -1,3 +1,4 @@
+---
 # Copyright © 2020 Interplanetary Database Association e.V.,
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@@ -17,7 +17,7 @@ stack_size=${STACK_SIZE:=4}
 stack_type=${STACK_TYPE:="docker"}
 stack_type_provider=${STACK_TYPE_PROVIDER:=""}
 # NOTE versions prior v0.28.0 have different priv_validator format!
-tm_version=${TM_VERSION:="v0.34.15"}
+tm_version=${TM_VERSION:="v0.34.24"}
 mongo_version=${MONGO_VERSION:="3.6"}
 stack_vm_memory=${STACK_VM_MEMORY:=2048}
 stack_vm_cpus=${STACK_VM_CPUS:=2}

View File

@ -16,7 +16,7 @@ stack_repo=${STACK_REPO:="planetmint/planetmint"}
stack_size=${STACK_SIZE:=4} stack_size=${STACK_SIZE:=4}
stack_type=${STACK_TYPE:="docker"} stack_type=${STACK_TYPE:="docker"}
stack_type_provider=${STACK_TYPE_PROVIDER:=""} stack_type_provider=${STACK_TYPE_PROVIDER:=""}
tm_version=${TM_VERSION:="0.31.5"} tm_version=${TM_VERSION:="0.34.24"}
mongo_version=${MONGO_VERSION:="3.6"} mongo_version=${MONGO_VERSION:="3.6"}
stack_vm_memory=${STACK_VM_MEMORY:=2048} stack_vm_memory=${STACK_VM_MEMORY:=2048}
stack_vm_cpus=${STACK_VM_CPUS:=2} stack_vm_cpus=${STACK_VM_CPUS:=2}

View File

@@ -19,7 +19,7 @@ The `Planetmint` class is defined here. Most node-level operations and database
 `Block`, `Transaction`, and `Asset` classes are defined here. The classes mirror the block and transaction structure from the documentation, but also include methods for validation and signing.
-### [`validation.py`](./validation.py)
+### [`validation.py`](application/basevalidationrules.py)
 Base class for validation methods (verification of votes, blocks, and transactions). The actual logic is mostly found in `transaction` and `block` models, defined in [`models.py`](./models.py).
@@ -27,7 +27,7 @@ Base class for validation methods (verification of votes, blocks, and transactions).
 Entry point for the Planetmint process, after initialization. All subprocesses are started here: processes to handle new blocks, votes, etc.
-### [`config_utils.py`](./config_utils.py)
+### [`config_utils.py`](config_utils.py)
 Methods for managing the configuration, including loading configuration files, automatically generating the configuration, and keeping the configuration consistent across Planetmint instances.

View File

@@ -2,17 +2,3 @@
 # Planetmint and IPDB software contributors.
 # SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
 # Code is Apache-2.0 and docs are CC-BY-4.0
-from transactions.common.transaction import Transaction  # noqa
-from transactions.types.elections.validator_election import ValidatorElection  # noqa
-from transactions.types.elections.vote import Vote  # noqa
-from transactions.types.elections.chain_migration_election import ChainMigrationElection
-from planetmint.lib import Planetmint
-from planetmint.core import App
-
-Transaction.register_type(Transaction.CREATE, Transaction)
-Transaction.register_type(Transaction.TRANSFER, Transaction)
-Transaction.register_type(ValidatorElection.OPERATION, ValidatorElection)
-Transaction.register_type(ChainMigrationElection.OPERATION, ChainMigrationElection)
-Transaction.register_type(Vote.OPERATION, Vote)

View File

@@ -9,9 +9,10 @@ with Tendermint.
 import logging
 import sys

-from tendermint.abci import types_pb2
 from abci.application import BaseApplication
 from abci.application import OkCode
+from tendermint.abci import types_pb2
 from tendermint.abci.types_pb2 import (
     ResponseInfo,
     ResponseInitChain,
@ -21,35 +22,40 @@ from tendermint.abci.types_pb2 import (
ResponseEndBlock, ResponseEndBlock,
ResponseCommit, ResponseCommit,
) )
from planetmint import Planetmint
from planetmint.tendermint_utils import decode_transaction, calculate_hash, decode_validator
from planetmint.lib import Block
from planetmint.events import EventTypes, Event
from planetmint.application.validator import Validator
from planetmint.abci.utils import decode_validator, decode_transaction, calculate_hash
from planetmint.abci.block import Block
from planetmint.ipc.events import EventTypes, Event
from planetmint.backend.exceptions import DBConcurrencyError
CodeTypeError = 1
logger = logging.getLogger(__name__)
-class App(BaseApplication):
+class ApplicationLogic(BaseApplication):
"""Bridge between Planetmint and Tendermint.
The role of this class is to expose the Planetmint
transaction logic to Tendermint Core.
"""
-def __init__(self, planetmint_node=None, events_queue=None):
+def __init__(
+self,
+validator: Validator = None,
+events_queue=None,
+):
# super().__init__(abci)
logger.debug("Checking values of types")
logger.debug(dir(types_pb2))
self.events_queue = events_queue
-self.planetmint_node = planetmint_node or Planetmint()
+self.validator = validator if validator else Validator()
self.block_txn_ids = []
self.block_txn_hash = ""
self.block_transactions = []
self.validators = None
self.new_height = None
-self.chain = self.planetmint_node.get_latest_abci_chain()
+self.chain = self.validator.models.get_latest_abci_chain()
def log_abci_migration_error(self, chain_id, validators):
logger.error(
@ -61,7 +67,7 @@ class App(BaseApplication):
def abort_if_abci_chain_is_not_synced(self):
if self.chain is None or self.chain["is_synced"]:
return
-validators = self.planetmint_node.get_validators()
+validators = self.validator.models.get_validators()
self.log_abci_migration_error(self.chain["chain_id"], validators)
sys.exit(1)
@ -69,33 +75,42 @@ class App(BaseApplication):
"""Initialize chain upon genesis or a migration""" """Initialize chain upon genesis or a migration"""
app_hash = "" app_hash = ""
height = 0 height = 0
known_chain = self.planetmint_node.get_latest_abci_chain() try:
known_chain = self.validator.models.get_latest_abci_chain()
if known_chain is not None: if known_chain is not None:
chain_id = known_chain["chain_id"] chain_id = known_chain["chain_id"]
if known_chain["is_synced"]: if known_chain["is_synced"]:
msg = f"Got invalid InitChain ABCI request ({genesis}) - " f"the chain {chain_id} is already synced." msg = f"Got invalid InitChain ABCI request ({genesis}) - the chain {chain_id} is already synced."
logger.error(msg) logger.error(msg)
sys.exit(1) sys.exit(1)
if chain_id != genesis.chain_id: if chain_id != genesis.chain_id:
validators = self.planetmint_node.get_validators() validators = self.validator.models.get_validators()
self.log_abci_migration_error(chain_id, validators) self.log_abci_migration_error(chain_id, validators)
sys.exit(1) sys.exit(1)
# set migration values for app hash and height # set migration values for app hash and height
block = self.planetmint_node.get_latest_block() block = self.validator.models.get_latest_block()
app_hash = "" if block is None else block["app_hash"] app_hash = "" if block is None else block["app_hash"]
height = 0 if block is None else block["height"] + 1 height = 0 if block is None else block["height"] + 1
known_validators = self.planetmint_node.get_validators() known_validators = self.validator.models.get_validators()
validator_set = [decode_validator(v) for v in genesis.validators] validator_set = [decode_validator(v) for v in genesis.validators]
if known_validators and known_validators != validator_set: if known_validators and known_validators != validator_set:
self.log_abci_migration_error(known_chain["chain_id"], known_validators) self.log_abci_migration_error(known_chain["chain_id"], known_validators)
sys.exit(1) sys.exit(1)
block = Block(app_hash=app_hash, height=height, transactions=[]) block = Block(app_hash=app_hash, height=height, transactions=[])
self.planetmint_node.store_block(block._asdict()) self.validator.models.store_block(block._asdict())
self.planetmint_node.store_validator_set(height + 1, validator_set) self.validator.models.store_validator_set(height + 1, validator_set)
abci_chain_height = 0 if known_chain is None else known_chain["height"] abci_chain_height = 0 if known_chain is None else known_chain["height"]
self.planetmint_node.store_abci_chain(abci_chain_height, genesis.chain_id, True) self.validator.models.store_abci_chain(abci_chain_height, genesis.chain_id, True)
self.chain = {"height": abci_chain_height, "is_synced": True, "chain_id": genesis.chain_id} self.chain = {
"height": abci_chain_height,
"is_synced": True,
"chain_id": genesis.chain_id,
}
except DBConcurrencyError:
sys.exit(1)
except ValueError:
sys.exit(1)
return ResponseInitChain() return ResponseInitChain()
def info(self, request): def info(self, request):
@ -112,7 +127,13 @@ class App(BaseApplication):
# logger.info(f"Tendermint version: {request.version}")
r = ResponseInfo()
-block = self.planetmint_node.get_latest_block()
+block = None
+try:
+block = self.validator.models.get_latest_block()
+except DBConcurrencyError:
+block = None
+except ValueError:
+block = None
if block:
chain_shift = 0 if self.chain is None else self.chain["height"]
r.last_block_height = block["height"] - chain_shift
@ -134,12 +155,17 @@ class App(BaseApplication):
logger.debug("check_tx: %s", raw_transaction) logger.debug("check_tx: %s", raw_transaction)
transaction = decode_transaction(raw_transaction) transaction = decode_transaction(raw_transaction)
if self.planetmint_node.is_valid_transaction(transaction): try:
if self.validator.is_valid_transaction(transaction):
logger.debug("check_tx: VALID") logger.debug("check_tx: VALID")
return ResponseCheckTx(code=OkCode) return ResponseCheckTx(code=OkCode)
else: else:
logger.debug("check_tx: INVALID") logger.debug("check_tx: INVALID")
return ResponseCheckTx(code=CodeTypeError) return ResponseCheckTx(code=CodeTypeError)
except DBConcurrencyError:
sys.exit(1)
except ValueError:
sys.exit(1)
def begin_block(self, req_begin_block):
"""Initialize list of transactions.
@ -167,9 +193,15 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
logger.debug("deliver_tx: %s", raw_transaction)
-transaction = self.planetmint_node.is_valid_transaction(
+transaction = None
+try:
+transaction = self.validator.is_valid_transaction(
decode_transaction(raw_transaction), self.block_transactions
)
+except DBConcurrencyError:
+sys.exit(1)
+except ValueError:
+sys.exit(1)
if not transaction:
logger.debug("deliver_tx: INVALID")
@ -198,19 +230,25 @@ class App(BaseApplication):
# `end_block` or `commit`
logger.debug(f"Updating pre-commit state: {self.new_height}")
pre_commit_state = dict(height=self.new_height, transactions=self.block_txn_ids)
-self.planetmint_node.store_pre_commit_state(pre_commit_state)
+try:
+self.validator.models.store_pre_commit_state(pre_commit_state)
block_txn_hash = calculate_hash(self.block_txn_ids)
-block = self.planetmint_node.get_latest_block()
-logger.debug("BLOCK: ", block)
+block = self.validator.models.get_latest_block()
+logger.debug(f"BLOCK: {block}")
if self.block_txn_ids:
self.block_txn_hash = calculate_hash([block["app_hash"], block_txn_hash])
else:
self.block_txn_hash = block["app_hash"]
-validator_update = self.planetmint_node.process_block(self.new_height, self.block_transactions)
+validator_update = self.validator.process_block(self.new_height, self.block_transactions)
+except DBConcurrencyError:
+sys.exit(1)
+except ValueError:
+sys.exit(1)
+except TypeError:
+sys.exit(1)
return ResponseEndBlock(validator_updates=validator_update)
@ -220,18 +258,26 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
data = self.block_txn_hash.encode("utf-8")
+try:
# register a new block only when new transactions are received
if self.block_txn_ids:
-self.planetmint_node.store_bulk_transactions(self.block_transactions)
-block = Block(app_hash=self.block_txn_hash, height=self.new_height, transactions=self.block_txn_ids)
+self.validator.models.store_bulk_transactions(self.block_transactions)
+block = Block(
+app_hash=self.block_txn_hash,
+height=self.new_height,
+transactions=self.block_txn_ids,
+)
# NOTE: storing the block should be the last operation during commit
# this affects crash recovery. Refer to BEP#8 for details
-self.planetmint_node.store_block(block._asdict())
+self.validator.models.store_block(block._asdict())
+except DBConcurrencyError:
+sys.exit(1)
+except ValueError:
+sys.exit(1)
logger.debug(
-"Commit-ing new block with hash: apphash=%s ," "height=%s, txn ids=%s",
+"Commit-ing new block with hash: apphash=%s, height=%s, txn ids=%s",
data,
self.new_height,
self.block_txn_ids,
@ -240,31 +286,12 @@ class App(BaseApplication):
if self.events_queue:
event = Event(
EventTypes.BLOCK_VALID,
-{"height": self.new_height, "hash": self.block_txn_hash, "transactions": self.block_transactions},
+{
+"height": self.new_height,
+"hash": self.block_txn_hash,
+"transactions": self.block_transactions,
+},
)
self.events_queue.put(event)
return ResponseCommit(data=data)
-def rollback(planetmint):
-pre_commit = None
-try:
-pre_commit = planetmint.get_pre_commit_state()
-except Exception as e:
-logger.exception("Unexpected error occurred while executing get_pre_commit_state()", e)
-if pre_commit is None or len(pre_commit) == 0:
-# the pre_commit record is first stored in the first `end_block`
-return
-latest_block = planetmint.get_latest_block()
-if latest_block is None:
-logger.error("Found precommit state but no blocks!")
-sys.exit(1)
-# NOTE: the pre-commit state is always at most 1 block ahead of the commited state
-if latest_block["height"] < pre_commit["height"]:
-planetmint.rollback_election(pre_commit["height"], pre_commit["transactions"])
-planetmint.delete_transactions(pre_commit["transactions"])

planetmint/abci/block.py Normal file
View File

@ -0,0 +1,3 @@
from collections import namedtuple
Block = namedtuple("Block", ("app_hash", "height", "transactions"))
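The whole module is just this namedtuple; the ABCI handlers above rely on its generated `_asdict()` when persisting blocks (`store_block(block._asdict())`). A quick illustration with made-up values:

from collections import namedtuple

Block = namedtuple("Block", ("app_hash", "height", "transactions"))

# Build a block and serialize it the way the commit handler does.
block = Block(app_hash="a1b2c3", height=7, transactions=["tx1", "tx2"])
assert block._asdict() == {"app_hash": "a1b2c3", "height": 7, "transactions": ["tx1", "tx2"]}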

View File

@ -6,9 +6,9 @@
import multiprocessing
from collections import defaultdict
-from planetmint import App
-from planetmint.lib import Planetmint
-from planetmint.tendermint_utils import decode_transaction
+from planetmint.abci.application_logic import ApplicationLogic
+from planetmint.application.validator import Validator
+from planetmint.abci.utils import decode_transaction
from abci.application import OkCode
from tendermint.abci.types_pb2 import (
ResponseCheckTx,
@ -16,7 +16,7 @@ from tendermint.abci.types_pb2 import (
)
-class ParallelValidationApp(App):
+class ParallelValidationApp(ApplicationLogic):
def __init__(self, planetmint=None, events_queue=None):
super().__init__(planetmint, events_queue)
self.parallel_validator = ParallelValidator()
@ -93,7 +93,7 @@ class ValidationWorker:
def __init__(self, in_queue, results_queue):
self.in_queue = in_queue
self.results_queue = results_queue
-self.planetmint = Planetmint()
+self.validator = Validator()
self.reset()
def reset(self):
@ -112,7 +112,7 @@ class ValidationWorker:
except TypeError:
asset_id = dict_transaction["id"]
-transaction = self.planetmint.is_valid_transaction(dict_transaction, self.validated_transactions[asset_id])
+transaction = self.validator.is_valid_transaction(dict_transaction, self.validated_transactions[asset_id])
if transaction:
self.validated_transactions[asset_id].append(transaction)

planetmint/abci/rpc.py Normal file
View File

@ -0,0 +1,80 @@
import requests
from uuid import uuid4
from transactions.common.exceptions import ValidationError
from transactions.common.transaction_mode_types import (
BROADCAST_TX_COMMIT,
BROADCAST_TX_ASYNC,
BROADCAST_TX_SYNC,
)
from planetmint.abci.utils import encode_transaction
from planetmint.application.validator import logger
from planetmint.config_utils import autoconfigure
from planetmint.config import Config
MODE_COMMIT = BROADCAST_TX_COMMIT
MODE_LIST = (BROADCAST_TX_ASYNC, BROADCAST_TX_SYNC, MODE_COMMIT)
class ABCI_RPC:
def __init__(self):
autoconfigure()
self.tendermint_host = Config().get()["tendermint"]["host"]
self.tendermint_port = Config().get()["tendermint"]["port"]
self.tendermint_rpc_endpoint = "http://{}:{}/".format(self.tendermint_host, self.tendermint_port)
@staticmethod
def _process_post_response(mode_commit, response, mode):
logger.debug(response)
error = response.get("error")
if error:
status_code = 500
message = error.get("message", "Internal Error")
data = error.get("data", "")
if "Tx already exists in cache" in data:
status_code = 400
return (status_code, message + " - " + data)
result = response["result"]
if mode == mode_commit:
check_tx_code = result.get("check_tx", {}).get("code", 0)
deliver_tx_code = result.get("deliver_tx", {}).get("code", 0)
error_code = check_tx_code or deliver_tx_code
else:
error_code = result.get("code", 0)
if error_code:
return (500, "Transaction validation failed")
return (202, "")
def write_transaction(self, mode_list, endpoint, mode_commit, transaction, mode):
# This method offers backward compatibility with the Web API.
"""Submit a valid transaction to the mempool."""
response = self.post_transaction(mode_list, endpoint, transaction, mode)
return ABCI_RPC._process_post_response(mode_commit, response.json(), mode)
def post_transaction(self, mode_list, endpoint, transaction, mode):
"""Submit a valid transaction to the mempool."""
if not mode or mode not in mode_list:
raise ValidationError("Mode must be one of the following {}.".format(", ".join(mode_list)))
tx_dict = transaction.tx_dict if transaction.tx_dict else transaction.to_dict()
payload = {
"method": mode,
"jsonrpc": "2.0",
"params": [encode_transaction(tx_dict)],
"id": str(uuid4()),
}
try:
response = requests.post(endpoint, json=payload)
except requests.exceptions.ConnectionError as e:
logger.error(f"Tendermint RCP Connection issue: {e}")
raise e
except Exception as e:
logger.error(f"Tendermint RCP Connection issue: {e}")
raise e
return response
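For orientation, a hypothetical call sequence for the class above; the stub transaction and the presence of a reachable Tendermint RPC endpoint are assumptions, not part of this file:

class StubTx:
    # Stand-in for a signed transaction exposing the attributes used above.
    tx_dict = None
    def to_dict(self):
        return {"id": "abc123", "inputs": [], "outputs": []}

rpc = ABCI_RPC()
# Returns a (status_code, message) tuple from _process_post_response().
status, message = rpc.write_transaction(
    MODE_LIST, rpc.tendermint_rpc_endpoint, MODE_COMMIT, StubTx(), BROADCAST_TX_SYNC
)
if status != 202:
    print(f"submission rejected: {status} {message}")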

View File

@ -1,19 +1,46 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import base64
+import codecs
import hashlib
import json
-import codecs
from binascii import hexlify
+from hashlib import sha3_256
+from packaging import version
from tendermint.abci import types_pb2
from tendermint.crypto import keys_pb2
-from hashlib import sha3_256
+from transactions.common.crypto import key_pair_from_ed25519_key
from transactions.common.exceptions import InvalidPublicKey
+from planetmint.version import __tm_supported_versions__
def load_node_key(path):
with open(path) as json_data:
priv_validator = json.load(json_data)
priv_key = priv_validator["priv_key"]["value"]
hex_private_key = key_from_base64(priv_key)
return key_pair_from_ed25519_key(hex_private_key)
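A hypothetical invocation of the helper above; the path is an assumption, following Tendermint's default config layout:

# Yields a planetmint key pair derived from the node's Ed25519 private key.
key_pair = load_node_key("/tendermint/config/priv_validator_key.json")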
def tendermint_version_is_compatible(running_tm_ver):
"""
Check Tendermint compatibility with the Planetmint server
:param running_tm_ver: Version number of the connected Tendermint instance
:type running_tm_ver: str
:return: True/False depending on compatibility with the Planetmint server
:rtype: bool
"""
# Splitting because version can look like this e.g. 0.22.8-40d6dc2e
tm_ver = running_tm_ver.split("-")
if not tm_ver:
return False
for ver in __tm_supported_versions__:
if version.parse(ver) == version.parse(tm_ver[0]):
return True
return False
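Illustrative behaviour, assuming "0.34.24" appears in `__tm_supported_versions__` (in line with the Tendermint v0.34.24 upgrade in this release) and "0.19.9" does not:

# The pre-release/commit suffix after "-" is discarded by the split above.
assert tendermint_version_is_compatible("0.34.24-deadbeef") is True
assert tendermint_version_is_compatible("0.19.9") is False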
def encode_validator(v): def encode_validator(v):
ed25519_public_key = v["public_key"]["value"] ed25519_public_key = v["public_key"]["value"]
@ -52,7 +79,6 @@ def new_validator_set(validators, updates):
def get_public_key_decoder(pk):
encoding = pk["type"]
-decoder = base64.b64decode
if encoding == "ed25519-base16":
decoder = base64.b16decode
@ -121,7 +147,6 @@ def merkleroot(hashes):
return merkleroot(parent_hashes)
-# ripemd160 is only available below python 3.9.13
@DeprecationWarning
def public_key64_to_address(base64_public_key):
"""Note: this is only compatible with Tendermint 0.19.x"""

View File

@ -0,0 +1,2 @@
from .validator import Validator
from .basevalidationrules import BaseValidationRules

View File

@ -0,0 +1,557 @@
import logging
import json
import sys
from collections import OrderedDict
from transactions import Transaction, Vote
from transactions.common.exceptions import (
DoubleSpend,
AssetIdMismatch,
InvalidSignature,
AmountError,
SchemaValidationError,
ValidationError,
MultipleInputsError,
DuplicateTransaction,
InvalidProposer,
UnequalValidatorSet,
InvalidPowerChange,
)
from transactions.common.crypto import public_key_from_ed25519_key
from transactions.common.output import Output as TransactionOutput
from transactions.common.transaction import VALIDATOR_ELECTION, CHAIN_MIGRATION_ELECTION
from transactions.types.elections.election import Election
from transactions.types.elections.validator_utils import election_id_to_public_key
from planetmint.abci.utils import encode_validator, new_validator_set, key_from_base64, public_key_to_base64
from planetmint.application.basevalidationrules import BaseValidationRules
from planetmint.backend.models.output import Output
from planetmint.model.dataaccessor import DataAccessor
from planetmint.config import Config
from planetmint.config_utils import load_validation_plugin
from planetmint.utils.singleton import Singleton
logger = logging.getLogger(__name__)
class Validator:
def __init__(self):
self.models = DataAccessor()
self.validation = Validator._get_validation_method()
@staticmethod
def _get_validation_method():
validationPlugin = Config().get().get("validation_plugin")
if validationPlugin:
validation_method = load_validation_plugin(validationPlugin)
else:
validation_method = BaseValidationRules
return validation_method
@staticmethod
def validate_inputs_distinct(tx: Transaction):
# Validate that all inputs are distinct
links = [i.fulfills.to_uri() for i in tx.inputs]
if len(links) != len(set(links)):
raise DoubleSpend('tx "{}" spends inputs twice'.format(tx.id))
@staticmethod
def validate_asset_id(tx: Transaction, input_txs: list):
# validate asset
if tx.operation != Transaction.COMPOSE:
asset_id = tx.get_asset_id(input_txs)
if asset_id != Transaction.read_out_asset_id(tx):
raise AssetIdMismatch(("The asset id of the input does not match the asset id of the transaction"))
else:
asset_ids = Transaction.get_asset_ids(input_txs)
if Transaction.read_out_asset_id(tx) in asset_ids:
raise AssetIdMismatch(("The asset ID of the compose must be different to all of its input asset IDs"))
@staticmethod
def validate_input_conditions(tx: Transaction, input_conditions: list[Output]):
# convert planetmint.Output objects to transactions.common.Output objects
input_conditions_dict = Output.list_to_dict(input_conditions)
input_conditions_converted = []
for input_cond in input_conditions_dict:
input_conditions_converted.append(TransactionOutput.from_dict(input_cond))
if not tx.inputs_valid(input_conditions_converted):
raise InvalidSignature("Transaction signature is invalid.")
def validate_compose_inputs(self, tx, current_transactions=[]) -> bool:
input_txs, input_conditions = self.models.get_input_txs_and_conditions(tx.inputs, current_transactions)
Validator.validate_input_conditions(tx, input_conditions)
Validator.validate_asset_id(tx, input_txs)
Validator.validate_inputs_distinct(tx)
return True
def validate_transfer_inputs(self, tx, current_transactions=[]) -> bool:
input_txs, input_conditions = self.models.get_input_txs_and_conditions(tx.inputs, current_transactions)
Validator.validate_input_conditions(tx, input_conditions)
Validator.validate_asset_id(tx, input_txs)
Validator.validate_inputs_distinct(tx)
input_amount = sum([input_condition.amount for input_condition in input_conditions])
output_amount = sum([output_condition.amount for output_condition in tx.outputs])
if output_amount != input_amount:
raise AmountError(
"The amount used in the inputs `{}` needs to be same as the amount used in the outputs `{}`".format(
input_amount, output_amount
)
)
return True
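In other words, TRANSFER transactions must conserve amounts. A toy check mirroring the rule above:

input_amounts = [3, 7]   # amounts on the outputs being spent
output_amounts = [5, 5]  # amounts on the newly created outputs
assert sum(input_amounts) == sum(output_amounts)  # 10 == 10, so no AmountError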
def validate_create_inputs(self, tx, current_transactions=[]) -> bool:
duplicates = any(txn for txn in current_transactions if txn.id == tx.id)
if self.models.is_committed(tx.id) or duplicates:
raise DuplicateTransaction("transaction `{}` already exists".format(tx.id))
fulfilling_inputs = [i for i in tx.inputs if i.fulfills is not None and i.fulfills.txid is not None]
if len(fulfilling_inputs) > 0:
input_txs, input_conditions = self.models.get_input_txs_and_conditions(
fulfilling_inputs, current_transactions
)
create_asset = tx.assets[0]
input_asset = input_txs[0].assets[tx.inputs[0].fulfills.output]["data"]
if create_asset != input_asset:
raise ValidationError("CREATE must have matching asset description with input transaction")
if input_txs[0].operation != Transaction.DECOMPOSE:
raise SchemaValidationError("CREATE can only consume DECOMPOSE outputs")
return True
def validate_transaction(self, transaction, current_transactions=[]):
"""Validate a transaction against the current status of the database."""
# CLEANUP: The conditional below checks for transaction in dict format.
# It would be better to only have a single format for the transaction
# throughout the code base.
if isinstance(transaction, dict):
try:
transaction = Transaction.from_dict(transaction, False)
except SchemaValidationError as e:
logger.warning("Invalid transaction schema: %s", e.__cause__.message)
return False
except ValidationError as e:
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
if self.validate_script(transaction) == False:
logger.warning("Invalid transaction script")
return False
if transaction.operation == Transaction.CREATE:
self.validate_create_inputs(transaction, current_transactions)
elif transaction.operation in [Transaction.TRANSFER, Transaction.VOTE]:
self.validate_transfer_inputs(transaction, current_transactions)
elif transaction.operation in [Transaction.COMPOSE]:
self.validate_compose_inputs(transaction, current_transactions)
return transaction
def validate_script(self, transaction: Transaction) -> bool:
if transaction.script:
return transaction.script.validate()
return True
def validate_election(self, transaction, current_transactions=[]): # TODO: move somewhere else
"""Validate election transaction
NOTE:
* A valid election is initiated by an existing validator.
* A valid election is one where voters are validators and votes are
allocated according to the voting power of each validator node.
Args:
:param planet: (Planetmint) an instantiated planetmint.lib.Planetmint object.
:param current_transactions: (list) A list of transactions to be validated along with the election
Returns:
Election: an Election object or an object of a derived Election subclass.
Raises:
ValidationError: If the election is invalid
"""
duplicates = any(txn for txn in current_transactions if txn.id == transaction.id)
if self.models.is_committed(transaction.id) or duplicates:
raise DuplicateTransaction("transaction `{}` already exists".format(transaction.id))
current_validators = self.models.get_validators_dict()
# NOTE: Proposer should be a single node
if len(transaction.inputs) != 1 or len(transaction.inputs[0].owners_before) != 1:
raise MultipleInputsError("`tx_signers` must be a list instance of length one")
# NOTE: Check if the proposer is a validator.
[election_initiator_node_pub_key] = transaction.inputs[0].owners_before
if election_initiator_node_pub_key not in current_validators.keys():
raise InvalidProposer("Public key is not a part of the validator set")
# NOTE: Check if all validators have been assigned votes equal to their voting power
if not Validator.is_same_topology(current_validators, transaction.outputs):
raise UnequalValidatorSet("Validator set must be exactly the same as the outputs of the election")
if transaction.operation == VALIDATOR_ELECTION:
self.validate_validator_election(transaction)
return transaction
@staticmethod
def is_same_topology(current_topology, election_topology):
voters = {}
for voter in election_topology:
if len(voter.public_keys) > 1:
return False
[public_key] = voter.public_keys
voting_power = voter.amount
voters[public_key] = voting_power
# Check whether the voters and their votes are the same as the
# validators and their voting power in the network
return current_topology == voters
def validate_validator_election(self, transaction): # TODO: move somewhere else
"""For more details refer BEP-21: https://github.com/planetmint/BEPs/tree/master/21"""
current_validators = self.models.get_validators_dict()
# NOTE: change more than 1/3 of the current power is not allowed
if transaction.get_assets()[0]["data"]["power"] >= (1 / 3) * sum(current_validators.values()):
raise InvalidPowerChange("`power` change must be less than 1/3 of total power")
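With toy numbers: for validator powers {A: 10, B: 10, C: 10} the total is 30, so a proposed `power` of 10 or more trips the guard, while 9 would pass:

current_powers = {"A": 10, "B": 10, "C": 10}
proposed_power = 10
# 10 >= (1/3) * 30, so validate_validator_election() raises InvalidPowerChange.
assert proposed_power >= (1 / 3) * sum(current_powers.values())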
def get_election_status(self, transaction):
election = self.models.get_election(transaction.id)
if election and election["is_concluded"]:
return Election.CONCLUDED
return Election.INCONCLUSIVE if self.has_validator_set_changed(transaction) else Election.ONGOING
def has_validator_set_changed(self, transaction):
latest_change = self.get_validator_change()
if latest_change is None:
return False
latest_change_height = latest_change["height"]
election = self.models.get_election(transaction.id)
return latest_change_height > election["height"]
def get_validator_change(self):
"""Return the validator set from the most recent approved block
:return: {
'height': <block_height>,
'validators': <validator_set>
}
"""
latest_block = self.models.get_latest_block()
if latest_block is None:
return None
return self.models.get_validator_set(latest_block["height"])
def get_validator_dict(self, height=None):
"""Return a dictionary of validators with key as `public_key` and
value as the `voting_power`
"""
validators = {}
for validator in self.models.get_validators(height=height):
# NOTE: we assume that Tendermint encodes public key in base64
public_key = public_key_from_ed25519_key(key_from_base64(validator["public_key"]["value"]))
validators[public_key] = validator["voting_power"]
return validators
# TODO to be moved to planetmint.commands.planetmint
def show_election_status(self, transaction):
data = transaction.assets[0]
data = data.to_dict()["data"]
if "public_key" in data.keys():
data["public_key"] = public_key_to_base64(data["public_key"]["value"])
response = ""
for k, v in data.items():
if k != "seed":
response += f"{k}={v}\n"
response += f"status={self.get_election_status(transaction)}"
if transaction.operation == CHAIN_MIGRATION_ELECTION:
response = self.append_chain_migration_status(response)
return response
# TODO to be moved to planetmint.commands.planetmint
def append_chain_migration_status(self, status):
chain = self.models.get_latest_abci_chain()
if chain is None or chain["is_synced"]:
return status
status += f'\nchain_id={chain["chain_id"]}'
block = self.models.get_latest_block()
status += f'\napp_hash={block["app_hash"]}'
validators = [
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": k,
},
"power": v,
}
for k, v in self.get_validator_dict().items()
]
status += f"\nvalidators={json.dumps(validators, indent=4)}"
return status
def get_recipients_list(self):
"""Convert validator dictionary to a recipient list for `Transaction`"""
recipients = []
for public_key, voting_power in self.get_validator_dict().items():
recipients.append(([public_key], voting_power))
return recipients
def count_votes(self, election_pk, transactions):
votes = 0
for txn in transactions:
if txn.operation == Vote.OPERATION:
for output in txn.outputs:
# NOTE: We enforce that a valid vote to election id will have only
# election_pk in the output public keys, including any other public key
# along with election_pk will lead to vote being not considered valid.
if len(output.public_keys) == 1 and [election_pk] == output.public_keys:
votes = votes + output.amount
return votes
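Only outputs whose public-key list is exactly `[election_pk]` are tallied; with toy data:

election_pk = "election-pk"
outputs = [
    {"public_keys": ["election-pk"], "amount": 4},           # counted
    {"public_keys": ["election-pk", "other"], "amount": 3},  # ignored
]
assert sum(o["amount"] for o in outputs if o["public_keys"] == [election_pk]) == 4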
def get_commited_votes(self, transaction, election_pk=None): # TODO: move somewhere else
if election_pk is None:
election_pk = election_id_to_public_key(transaction.id)
txns = self.models.get_asset_tokens_for_public_key(transaction.id, election_pk)
return self.count_votes(election_pk, txns)
def _get_initiated_elections(self, height, txns): # TODO: move somewhere else
elections = []
for tx in txns:
if not isinstance(tx, Election):
continue
elections.append({"election_id": tx.id, "height": height, "is_concluded": False})
return elections
def _get_votes(self, txns): # TODO: move somewhere else
elections = OrderedDict()
for tx in txns:
if not isinstance(tx, Vote):
continue
election_id = Transaction.read_out_asset_id(tx)
if election_id not in elections:
elections[election_id] = []
elections[election_id].append(tx)
return elections
def process_block(self, new_height, txns): # TODO: move somewhere else
"""Looks for election and vote transactions inside the block, records
and processes elections.
Every election is recorded in the database.
Every vote has a chance to conclude the corresponding election. When
an election is concluded, the corresponding database record is
marked as such.
Elections and votes are processed in the order in which they
appear in the block. Elections are concluded in the order of
appearance of their first votes in the block.
For every election concluded in the block, calls its `on_approval`
method. The returned value of the last `on_approval`, if any,
is a validator set update to be applied in one of the following blocks.
`on_approval` methods are implemented by elections of particular type.
The method may contain side effects but should be idempotent. To account
for other concluded elections, if it requires so, the method should
rely on the database state.
"""
# elections initiated in this block
initiated_elections = self._get_initiated_elections(new_height, txns)
if initiated_elections:
self.models.store_elections(initiated_elections)
# elections voted for in this block and their votes
elections = self._get_votes(txns)
validator_update = None
for election_id, votes in elections.items():
election = self.models.get_transaction(election_id)
if election is None:
continue
if not self.has_election_concluded(election, votes):
continue
validator_update = self.approve_election(election, new_height)
self.models.store_election(election.id, new_height, is_concluded=True)
return [validator_update] if validator_update else []
def has_election_concluded(self, transaction, current_votes=[]): # TODO: move somewhere else
"""Check if the election can be concluded or not.
* Elections can only be concluded if the validator set has not changed
since the election was initiated.
* Elections can be concluded only if the current votes form a supermajority.
Custom elections may override this function and introduce additional checks.
"""
if self.has_validator_set_changed(transaction):
return False
if transaction.operation == VALIDATOR_ELECTION:
if not self.has_validator_election_concluded():
return False
if transaction.operation == CHAIN_MIGRATION_ELECTION:
if not self.has_chain_migration_concluded():
return False
election_pk = election_id_to_public_key(transaction.id)
votes_committed = self.get_commited_votes(transaction, election_pk)
votes_current = self.count_votes(election_pk, current_votes)
total_votes = sum(int(output.amount) for output in transaction.outputs)
if (votes_committed < (2 / 3) * total_votes) and (votes_committed + votes_current >= (2 / 3) * total_votes):
return True
return False
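Worked example of the conclusion test above: with 30 total votes the supermajority threshold is 20; an election with 15 committed votes and 6 arriving in this block concludes exactly now (15 < 20, but 15 + 6 >= 20):

total_votes = 30
threshold = (2 / 3) * total_votes  # 20.0
votes_committed, votes_current = 15, 6
assert votes_committed < threshold and votes_committed + votes_current >= threshold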
def has_validator_election_concluded(self): # TODO: move somewhere else
latest_block = self.models.get_latest_block()
if latest_block is not None:
latest_block_height = latest_block["height"]
latest_validator_change = self.models.get_validator_set()["height"]
# TODO change to `latest_block_height + 3` when upgrading to Tendermint 0.24.0.
if latest_validator_change == latest_block_height + 2:
# do not conclude the election if there is a change assigned already
return False
return True
def has_chain_migration_concluded(self): # TODO: move somewhere else
chain = self.models.get_latest_abci_chain()
if chain is not None and not chain["is_synced"]:
# do not conclude the migration election if
# there is another migration in progress
return False
return True
def rollback_election(self, new_height, txn_ids): # TODO: move somewhere else
"""Looks for election and vote transactions inside the block and
cleans up the database artifacts possibly created in `process_blocks`.
Part of the `end_block`/`commit` crash recovery.
"""
# delete election records for elections initiated at this height and
# elections concluded at this height
self.models.delete_elections(new_height)
txns = [self.models.get_transaction(tx_id) for tx_id in txn_ids]
txns = [Transaction.from_dict(tx.to_dict()) for tx in txns if tx]
elections = self._get_votes(txns)
for election_id in elections:
election = self.models.get_transaction(election_id)
if election.operation == VALIDATOR_ELECTION:
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
self.models.delete_validator_set(new_height + 1)
if election.operation == CHAIN_MIGRATION_ELECTION:
self.models.delete_abci_chain(new_height)
def approve_election(self, election, new_height):
"""Override to update the database state according to the
election rules. Consider the current database state to account for
other concluded elections, if required.
"""
if election.operation == CHAIN_MIGRATION_ELECTION:
self.migrate_abci_chain()
if election.operation == VALIDATOR_ELECTION:
validator_updates = [election.assets[0].data]
curr_validator_set = self.models.get_validators(height=new_height)
updated_validator_set = new_validator_set(curr_validator_set, validator_updates)
updated_validator_set = [v for v in updated_validator_set if v["voting_power"] > 0]
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
self.models.store_validator_set(new_height + 1, updated_validator_set)
return encode_validator(election.assets[0].data)
def is_valid_transaction(self, tx, current_transactions=[]):
# NOTE: the function returns the Transaction object in case
# the transaction is valid
try:
return self.validate_transaction(tx, current_transactions)
except ValidationError as e:
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
def migrate_abci_chain(self):
"""Generate and record a new ABCI chain ID. New blocks are not
accepted until we receive an InitChain ABCI request with
the matching chain ID and validator set.
Chain ID is generated based on the current chain and height.
`chain-X` => `chain-X-migrated-at-height-5`.
`chain-X-migrated-at-height-5` => `chain-X-migrated-at-height-21`.
If there is no known chain (we are at genesis), the function returns.
"""
latest_chain = self.models.get_latest_abci_chain()
if latest_chain is None:
return
block = self.models.get_latest_block()
suffix = "-migrated-at-height-"
chain_id = latest_chain["chain_id"]
block_height_str = str(block["height"])
new_chain_id = chain_id.split(suffix)[0] + suffix + block_height_str
self.models.store_abci_chain(block["height"] + 1, new_chain_id, False)
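The chain-ID derivation described in the docstring can be reproduced directly from the code above:

suffix = "-migrated-at-height-"

def migrated_chain_id(chain_id, height):
    # Strip any previous migration suffix, then append the new height.
    return chain_id.split(suffix)[0] + suffix + str(height)

assert migrated_chain_id("chain-X", 5) == "chain-X-migrated-at-height-5"
assert migrated_chain_id("chain-X-migrated-at-height-5", 21) == "chain-X-migrated-at-height-21"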
def rollback(self):
pre_commit = None
try:
pre_commit = self.models.get_pre_commit_state()
except Exception as e:
logger.exception("Unexpected error occurred while executing get_pre_commit_state()", e)
if pre_commit is None or len(pre_commit) == 0:
# the pre_commit record is first stored in the first `end_block`
return
latest_block = self.models.get_latest_block()
if latest_block is None:
logger.error("Found precommit state but no blocks!")
sys.exit(1)
# NOTE: the pre-commit state is always at most 1 block ahead of the commited state
if latest_block["height"] < pre_commit["height"]:
self.rollback_election(pre_commit["height"], pre_commit["transactions"])
self.models.delete_transactions(pre_commit["transactions"])

View File

@ -11,7 +11,7 @@ from transactions.common.exceptions import ConfigurationError
from planetmint.config import Config
BACKENDS = {
-"tarantool_db": "planetmint.backend.tarantool.connection.TarantoolDBConnection",
+"tarantool_db": "planetmint.backend.tarantool.sync_io.connection.TarantoolDBConnection",
"localmongodb": "planetmint.backend.localmongodb.connection.LocalMongoDBConnection",
}

View File

@ -22,5 +22,9 @@ class OperationDataInsertionError(BackendError):
"""Exception raised when a Database operation fails.""" """Exception raised when a Database operation fails."""
class DBConcurrencyError(BackendError):
"""Exception raised when a Database operation fails."""
class DuplicateKeyError(OperationError): class DuplicateKeyError(OperationError):
"""Exception raised when an insert fails because the key is not unique""" """Exception raised when an insert fails because the key is not unique"""

View File

@ -1,56 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from dataclasses import dataclass
# NOTE: only here temporarily
from planetmint.backend.models import Asset, MetaData, Input
from planetmint.backend.models import Output
@dataclass
class Block:
id: str = None
app_hash: str = None
height: int = None
@dataclass
class Script:
id: str = None
script = None
@dataclass
class UTXO:
id: str = None
output_index: int = None
utxo: dict = None
@dataclass
class Transaction:
id: str = None
assets: list[Asset] = None
metadata: MetaData = None
version: str = None # TODO: make enum
operation: str = None # TODO: make enum
inputs: list[Input] = None
outputs: list[Output] = None
script: str = None
@dataclass
class Validator:
id: str = None
height: int = None
validators = None
@dataclass
class ABCIChain:
height: str = None
is_synced: bool = None
chain_id: str = None

View File

@ -10,7 +10,7 @@ import pymongo
from planetmint.config import Config
from planetmint.backend.exceptions import DuplicateKeyError, OperationError, ConnectionError
from transactions.common.exceptions import ConfigurationError
-from planetmint.utils import Lazy
+from planetmint.utils.lazy import Lazy
from planetmint.backend.connection import DBConnection, _kwargs_parser
logger = logging.getLogger(__name__)
@ -73,7 +73,7 @@ class LocalMongoDBConnection(DBConnection):
try:
return query.run(self.connect())
except pymongo.errors.AutoReconnect:
-logger.warning("Lost connection to the database, " "retrying query.")
+logger.warning("Lost connection to the database, retrying query.")
return query.run(self.connect())
except pymongo.errors.AutoReconnect as exc:
raise ConnectionError from exc

View File

@ -77,7 +77,7 @@ def get_assets(conn, asset_ids):
@register_query(LocalMongoDBConnection)
-def get_spent(conn, transaction_id, output):
+def get_spending_transaction(conn, transaction_id, output):
query = {
"inputs": {
"$elemMatch": {"$and": [{"fulfills.transaction_id": transaction_id}, {"fulfills.output_index": output}]}
@ -102,7 +102,6 @@ def store_block(conn, block):
@register_query(LocalMongoDBConnection)
def get_txids_filtered(conn, asset_ids, operation=None, last_tx=None):
match = {
Transaction.CREATE: {"operation": "CREATE", "id": {"$in": asset_ids}},
Transaction.TRANSFER: {"operation": "TRANSFER", "asset.id": {"$in": asset_ids}},
@ -117,41 +116,6 @@ def get_txids_filtered(conn, asset_ids, operation=None, last_tx=None):
return (elem["id"] for elem in cursor)
@register_query(LocalMongoDBConnection)
def text_search(
conn,
search,
*,
language="english",
case_sensitive=False,
diacritic_sensitive=False,
text_score=False,
limit=0,
table="assets"
):
cursor = conn.run(
conn.collection(table)
.find(
{
"$text": {
"$search": search,
"$language": language,
"$caseSensitive": case_sensitive,
"$diacriticSensitive": diacritic_sensitive,
}
},
{"score": {"$meta": "textScore"}, "_id": False},
)
.sort([("score", {"$meta": "textScore"})])
.limit(limit)
)
if text_score:
return cursor
return (_remove_text_score(obj) for obj in cursor)
def _remove_text_score(asset):
asset.pop("score", None)
return asset
@ -203,21 +167,6 @@ def delete_transactions(conn, txn_ids):
conn.run(conn.collection("transactions").delete_many({"id": {"$in": txn_ids}}))
@register_query(LocalMongoDBConnection)
def store_unspent_outputs(conn, *unspent_outputs):
if unspent_outputs:
try:
return conn.run(
conn.collection("utxos").insert_many(
unspent_outputs,
ordered=False,
)
)
except DuplicateKeyError:
# TODO log warning at least
pass
@register_query(LocalMongoDBConnection)
def delete_unspent_outputs(conn, *unspent_outputs):
if unspent_outputs:

View File

@ -23,7 +23,7 @@ class Asset:
@staticmethod
def from_list_dict(asset_dict_list: list[dict]) -> list[Asset]:
-return [Asset.from_dict(asset_dict) for asset_dict in asset_dict_list]
+return [Asset.from_dict(asset_dict) for asset_dict in asset_dict_list if isinstance(asset_dict, dict)]
@staticmethod
def list_to_dict(asset_list: list[Asset]) -> list[dict]:

View File

@ -46,7 +46,7 @@ class DbTransaction:
)
@staticmethod
-def remove_generated_fields(tx_dict: dict):
+def remove_generated_fields(tx_dict: dict) -> dict:
tx_dict["outputs"] = [
DbTransaction.remove_generated_or_none_output_keys(output) for output in tx_dict["outputs"]
]
@ -55,13 +55,19 @@ class DbTransaction:
return tx_dict
@staticmethod
-def remove_generated_or_none_output_keys(output):
+def remove_generated_or_none_output_keys(output: dict) -> dict:
output["condition"]["details"] = {k: v for k, v in output["condition"]["details"].items() if v is not None}
if "id" in output:
output.pop("id")
return output
def to_dict(self) -> dict:
+"""
+Returns
+-------
+object
+"""
assets = Asset.list_to_dict(self.assets)
tx = {
"inputs": Input.list_to_dict(self.inputs),

View File

@ -8,57 +8,32 @@ from dataclasses import dataclass, field
from typing import List
-@dataclass
-class SubCondition:
-type: str
-public_key: str
-def to_tuple(self) -> tuple:
-return self.type, self.public_key
-def to_dict(self) -> dict:
-return {"type": self.type, "public_key": self.public_key}
-@staticmethod
-def from_dict(subcondition_dict: dict) -> SubCondition:
-return SubCondition(subcondition_dict["type"], subcondition_dict["public_key"])
-@staticmethod
-def list_to_dict(subconditions: List[SubCondition]) -> List[dict] | None:
-if subconditions is None:
-return None
-return [subcondition.to_dict() for subcondition in subconditions]
@dataclass
class ConditionDetails:
type: str = ""
public_key: str = ""
threshold: int = None
-sub_conditions: list[SubCondition] = None
+sub_conditions: List[ConditionDetails] = field(default_factory=list)
def to_dict(self) -> dict:
if self.sub_conditions is None:
-return {
-"type": self.type,
-"public_key": self.public_key,
-}
+return {"type": self.type, "public_key": self.public_key}
else:
return {
"type": self.type,
"threshold": self.threshold,
-"subconditions": [subcondition.to_dict() for subcondition in self.sub_conditions],
+"subconditions": [sub_condition.to_dict() for sub_condition in self.sub_conditions],
}
@staticmethod
-def from_dict(data: dict) -> ConditionDetails:
+def from_dict(details: dict) -> ConditionDetails:
sub_conditions = None
-if "subconditions" in data:
-sub_conditions = [SubCondition.from_dict(sub_condition) for sub_condition in data["subconditions"]]
+if "subconditions" in details:
+sub_conditions = [ConditionDetails.from_dict(sub_condition) for sub_condition in details["subconditions"]]
return ConditionDetails(
-type=data.get("type"),
-public_key=data.get("public_key"),
-threshold=data.get("threshold"),
+type=details.get("type"),
+public_key=details.get("public_key"),
+threshold=details.get("threshold"),
sub_conditions=sub_conditions,
)
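A round trip through the rewritten helpers, assuming the ConditionDetails dataclass exactly as shown:

details = ConditionDetails.from_dict(
    {
        "type": "threshold-sha-256",
        "threshold": 2,
        "subconditions": [
            {"type": "ed25519-sha-256", "public_key": "pk-1"},
            {"type": "ed25519-sha-256", "public_key": "pk-2"},
        ],
    }
)
# Subconditions are now nested ConditionDetails instances rather than SubCondition.
assert [sc.public_key for sc in details.sub_conditions] == ["pk-1", "pk-2"]
assert details.to_dict()["threshold"] == 2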
@ -93,12 +68,14 @@ class Output:
@staticmethod
def outputs_dict(output: dict, transaction_id: str = "") -> Output:
-out_dict: Output
-if output["condition"]["details"].get("subconditions") is None:
-out_dict = output_with_public_key(output, transaction_id)
-else:
-out_dict = output_with_sub_conditions(output, transaction_id)
-return out_dict
+return Output(
+transaction_id=transaction_id,
+public_keys=output["public_keys"],
+amount=output["amount"],
+condition=Condition(
+uri=output["condition"]["uri"], details=ConditionDetails.from_dict(output["condition"]["details"])
+),
+)
@staticmethod
def from_tuple(output: tuple) -> Output:
@ -126,57 +103,12 @@ class Output:
def to_dict(self) -> dict:
return {
-"id": self.id,
+# "id": self.id,
"public_keys": self.public_keys,
-"condition": {
-"details": {
-"type": self.condition.details.type,
-"public_key": self.condition.details.public_key,
-"threshold": self.condition.details.threshold,
-"subconditions": SubCondition.list_to_dict(self.condition.details.sub_conditions),
-},
-"uri": self.condition.uri,
-},
+"condition": self.condition.to_dict(),
"amount": str(self.amount),
}
@staticmethod
def list_to_dict(output_list: list[Output]) -> list[dict]:
return [output.to_dict() for output in output_list or []]
def output_with_public_key(output, transaction_id) -> Output:
return Output(
transaction_id=transaction_id,
public_keys=output["public_keys"],
amount=output["amount"],
condition=Condition(
uri=output["condition"]["uri"],
details=ConditionDetails(
type=output["condition"]["details"]["type"],
public_key=output["condition"]["details"]["public_key"],
),
),
)
def output_with_sub_conditions(output, transaction_id) -> Output:
return Output(
transaction_id=transaction_id,
public_keys=output["public_keys"],
amount=output["amount"],
condition=Condition(
uri=output["condition"]["uri"],
details=ConditionDetails(
type=output["condition"]["details"]["type"],
threshold=output["condition"]["details"]["threshold"],
sub_conditions=[
SubCondition(
type=sub_condition["type"],
public_key=sub_condition["public_key"],
)
for sub_condition in output["condition"]["details"]["subconditions"]
],
),
),
)

View File

@ -7,10 +7,9 @@
from functools import singledispatch
-from planetmint.backend.models import Asset, MetaData, Output, Input, Script
+from planetmint.backend.models import Asset, Block, MetaData, Output, Input, Script
from planetmint.backend.exceptions import OperationError
-from planetmint.backend.interfaces import Block
from planetmint.backend.models.dbtransaction import DbTransaction
@ -134,7 +133,7 @@ def get_asset(connection, asset_id) -> Asset:
@singledispatch
-def get_spent(connection, transaction_id, condition_id):
+def get_spending_transaction(connection, transaction_id, condition_id):
"""Check if a `txid` was already used as an input.
A transaction can be used as an input for another transaction. Bigchain
@ -209,7 +208,7 @@ def get_block_with_transaction(connection, txid):
@singledispatch
-def store_transaction_outputs(connection, output: Output, index: int):
+def store_transaction_outputs(connection, output: Output, index: int, table: str):
"""Store the transaction outputs.
Args:
@ -244,47 +243,6 @@ def get_txids_filtered(connection, asset_id, operation=None):
raise NotImplementedError
@singledispatch
def text_search(
conn,
search,
*,
language="english",
case_sensitive=False,
diacritic_sensitive=False,
text_score=False,
limit=0,
table=None
):
"""Return all the assets that match the text search.
The results are sorted by text score.
For more information about the behavior of text search on MongoDB see
https://docs.mongodb.com/manual/reference/operator/query/text/#behavior
Args:
search (str): Text search string to query the text index
language (str, optional): The language for the search and the rules for
stemmer and tokenizer. If the language is ``None`` text search uses
simple tokenization and no stemming.
case_sensitive (bool, optional): Enable or disable case sensitive
search.
diacritic_sensitive (bool, optional): Enable or disable case sensitive
diacritic search.
text_score (bool, optional): If ``True`` returns the text score with
each document.
limit (int, optional): Limit the number of returned documents.
Returns:
:obj:`list` of :obj:`dict`: a list of assets
Raises:
OperationError: If the backend does not support text search
"""
raise OperationError("This query is only supported when running " "Planetmint with MongoDB as the backend.")
@singledispatch
def get_latest_block(conn):
"""Get the latest committed block i.e. block with largest height"""
@ -306,13 +264,6 @@ def store_block(conn, block):
raise NotImplementedError
@singledispatch
def store_unspent_outputs(connection, unspent_outputs):
"""Store unspent outputs in ``utxo_set`` table."""
raise NotImplementedError
@singledispatch
def delete_unspent_outputs(connection, unspent_outputs):
"""Delete unspent outputs in ``utxo_set`` table."""
@ -497,6 +448,12 @@ def get_outputs_by_tx_id(connection, tx_id: str) -> list[Output]:
raise NotImplementedError
@singledispatch
def get_outputs_by_owner(connection, public_key: str, table: str) -> list[Output]:
"""Retrieve an owners outputs by public key"""
raise NotImplementedError
@singledispatch
def get_metadata(conn, transaction_ids):
"""Retrieve metadata for a list of transactions by their ids"""

View File

@ -137,6 +137,18 @@ def init_database(connection, dbname):
raise NotImplementedError
@singledispatch
def migrate(connection):
"""Migrate database
Args:
connection (:class:`~planetmint.backend.connection.Connection`): an
existing connection to use to migrate the database.
Creates one if not given.
"""
raise NotImplementedError
def validate_language_key(obj, key):
"""Validate all nested "language" key in `obj`.

View File

@ -1,5 +1,2 @@
# Register the single dispatched modules on import.
-from planetmint.backend.tarantool import query, connection, schema  # noqa
+from planetmint.backend.tarantool.sync_io import connection, query, schema
-# MongoDBConnection should always be accessed via
-# ``planetmint.backend.connect()``.

View File

@ -1,5 +1,12 @@
+local fiber = require('fiber')
box.cfg{listen = 3303}
+box.once("bootstrap", function()
+box.schema.user.grant('guest','read,write,execute,create,drop','universe')
+end)
function init()
-- ABCI chains
abci_chains = box.schema.create_space('abci_chains', { if_not_exists = true })
@ -166,9 +173,11 @@ function init()
utxos = box.schema.create_space('utxos', { if_not_exists = true }) utxos = box.schema.create_space('utxos', { if_not_exists = true })
utxos:format({ utxos:format({
{ name = 'id', type = 'string' }, { name = 'id', type = 'string' },
{ name = 'transaction_id', type = 'string' }, { name = 'amount' , type = 'unsigned' },
{ name = 'output_index', type = 'unsigned' }, { name = 'public_keys', type = 'array' },
{ name = 'utxo', type = 'map' } { name = 'condition', type = 'map' },
{ name = 'output_index', type = 'number' },
{ name = 'transaction_id' , type = 'string' }
}) })
utxos:create_index('id', { utxos:create_index('id', {
if_not_exists = true, if_not_exists = true,
@ -184,7 +193,13 @@ function init()
parts = { parts = {
{ field = 'transaction_id', type = 'string' }, { field = 'transaction_id', type = 'string' },
{ field = 'output_index', type = 'unsigned' } { field = 'output_index', type = 'unsigned' }
}}) }
})
    utxos:create_index('public_keys', {
        if_not_exists = true,
        unique = false,
        parts = {{ field = 'public_keys[*]', type = 'string' }}
    })
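The 'public_keys[*]' part spec declares a Tarantool multikey index: every element of the public_keys array is indexed individually, so a single owner key is enough to locate a tuple. This is what the Python query layer relies on further down in this diff; a hedged sketch (connection and owner_public_key are illustrative, the table and index names come from this file):

# each returned row is (id, amount, public_keys, condition, output_index, transaction_id)
utxo_rows = connection.connect().select("utxos", owner_public_key, index="public_keys").data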
    -- Elections

@@ -318,3 +333,65 @@ end

function delete_output( id )
    box.space.outputs:delete(id)
end
function atomic(batch_size, iter, fn)
    box.atomic(function()
        local i = 0
        for _, x in iter:unwrap() do
            fn(x)
            i = i + 1
            if i % batch_size == 0 then
                box.commit()
                fiber.yield() -- for read-only operations when `commit` doesn't yield
                box.begin()
            end
        end
    end)
end

function migrate()
    -- migration code from 2.4.0 to 2.4.3
    box.once("planetmint:v2.4.3", function()
        box.space.utxos:drop()
        utxos = box.schema.create_space('utxos', { if_not_exists = true })
        utxos:format({
            { name = 'id', type = 'string' },
            { name = 'amount' , type = 'unsigned' },
            { name = 'public_keys', type = 'array' },
            { name = 'condition', type = 'map' },
            { name = 'output_index', type = 'number' },
            { name = 'transaction_id' , type = 'string' }
        })
        utxos:create_index('id', {
            if_not_exists = true,
            parts = {{ field = 'id', type = 'string' }}
        })
        utxos:create_index('utxos_by_transaction_id', {
            if_not_exists = true,
            unique = false,
            parts = {{ field = 'transaction_id', type = 'string' }}
        })
        utxos:create_index('utxo_by_transaction_id_and_output_index', {
            if_not_exists = true,
            parts = {
                { field = 'transaction_id', type = 'string' },
                { field = 'output_index', type = 'unsigned' }
            }
        })
        utxos:create_index('public_keys', {
            if_not_exists = true,
            unique = false,
            parts = {{ field = 'public_keys[*]', type = 'string' }}
        })
        atomic(1000, box.space.outputs:pairs(), function(output)
            utxos:insert{output[1], output[2], output[3], output[4], output[5], output[6]}
        end)
        atomic(1000, utxos:pairs(), function(utxo)
            spending_transaction = box.space.transactions.index.spending_transaction_by_id_and_output_index:select{utxo[6], utxo[5]}
            if table.getn(spending_transaction) > 0 then
                utxos:delete(utxo[1])
            end
        end)
    end)
end
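For orientation, a hedged sketch of how this stored procedure is reached from Python: the schema-level migrate dispatch added later in this diff calls it over the binary protocol, and the new `planetmint migrate` subcommand wires that to the CLI. This mirrors run_migrate further down and assumes a configured Tarantool backend:

from planetmint.application.validator import Validator
from planetmint.backend import schema

validator = Validator()
schema.migrate(validator.models.connection)  # ends in connection.connect().call("migrate")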

View File

@@ -6,9 +6,10 @@

import logging
import tarantool

from planetmint.config import Config
from transactions.common.exceptions import ConfigurationError
-from planetmint.utils import Lazy
+from planetmint.utils.lazy import Lazy
from planetmint.backend.connection import DBConnection
from planetmint.backend.exceptions import ConnectionError
@@ -55,11 +56,15 @@ class TarantoolDBConnection(DBConnection):

        with open(path, "r") as f:
            execute = f.readlines()
            f.close()
-        return "".join(execute).encode()
+        return "".join(execute).encode(encoding="utf-8")
    def connect(self):
        if not self.__conn:
-            self.__conn = tarantool.connect(host=self.host, port=self.port)
+            self.__conn = tarantool.Connection(
+                host=self.host, port=self.port, encoding="utf-8", connect_now=True, reconnect_delay=0.1
+            )
+        elif self.__conn.connected == False:
+            self.__conn.connect()
        return self.__conn
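A small usage sketch of the new lazy/reconnecting behaviour; host and port are the defaults from init.lua above, and the constructor signature is assumed from DBConnection:

conn = TarantoolDBConnection(host="localhost", port=3303)
client = conn.connect()        # first call builds tarantool.Connection(connect_now=True, reconnect_delay=0.1)
client_again = conn.connect()  # later calls reuse the cached connection, re-connecting it if it dropped
assert client is client_again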
    def close(self):

@@ -74,65 +79,8 @@ class TarantoolDBConnection(DBConnection):

    def get_space(self, space_name: str):
        return self.connect().space(space_name)
    def space(self, space_name: str):
        return self.query().space(space_name)

    def exec(self, query, only_data=True):
        try:
            conn = self.connect()
            conn.execute(query) if only_data else conn.execute(query)
        except tarantool.error.OperationalError as op_error:
            raise op_error
        except tarantool.error.NetworkError as net_error:
            raise net_error

    def run(self, query, only_data=True):
        try:
            conn = self.connect()
            return query.run(conn).data if only_data else query.run(conn)
        except tarantool.error.OperationalError as op_error:
            raise op_error
        except tarantool.error.NetworkError as net_error:
            raise net_error
    def drop_database(self):
        self.connect().call("drop")

    def init_database(self):
        self.connect().call("init")
    def run_command(self, command: str, config: dict):
        from subprocess import run

        try:
            self.close()
        except ConnectionError:
            pass

        print(f" commands: {command}")
        host_port = "%s:%s" % (self.host, self.port)
        execute_cmd = self._file_content_to_bytes(path=command)
        output = run(
            ["tarantoolctl", "connect", host_port],
            input=execute_cmd,
            capture_output=True,
        ).stderr
        output = output.decode()
        return output

    def run_command_with_output(self, command: str):
        from subprocess import run

        try:
            self.close()
        except ConnectionError:
            pass

        host_port = "%s:%s" % (
            Config().get()["database"]["host"],
            Config().get()["database"]["port"],
        )
        output = run(["tarantoolctl", "connect", host_port], input=command, capture_output=True)
        if output.returncode != 0:
            raise Exception(f"Error while trying to execute cmd {command} on host:port {host_port}: {output.stderr}")
        return output.stdout

View File

@@ -4,7 +4,6 @@

# Code is Apache-2.0 and docs are CC-BY-4.0
"""Query implementation for Tarantool"""

-import json
import logging

from uuid import uuid4
from operator import itemgetter

@@ -15,9 +14,8 @@ from planetmint.backend import query

from planetmint.backend.models.dbtransaction import DbTransaction
from planetmint.backend.exceptions import OperationDataInsertionError
from planetmint.exceptions import CriticalDoubleSpend
+from planetmint.backend.exceptions import DBConcurrencyError
from planetmint.backend.tarantool.const import (
-    TARANT_TABLE_META_DATA,
-    TARANT_TABLE_ASSETS,
    TARANT_TABLE_TRANSACTION,
    TARANT_TABLE_OUTPUT,
    TARANT_TABLE_SCRIPT,

@@ -35,13 +33,43 @@ from planetmint.backend.tarantool.const import (

)
from planetmint.backend.utils import module_dispatch_registrar
from planetmint.backend.models import Asset, Block, Output
-from planetmint.backend.tarantool.connection import TarantoolDBConnection
+from planetmint.backend.tarantool.sync_io.connection import TarantoolDBConnection
from transactions.common.transaction import Transaction

logger = logging.getLogger(__name__)
register_query = module_dispatch_registrar(query)
from tarantool.error import OperationalError, NetworkError, SchemaError
from functools import wraps


def catch_db_exception(function_to_decorate):
    @wraps(function_to_decorate)
    def wrapper(*args, **kw):
        try:
            output = function_to_decorate(*args, **kw)
        except OperationalError as op_error:
            raise op_error
        except SchemaError as schema_error:
            raise schema_error
        except NetworkError as net_error:
            raise net_error
        except ValueError as e:
            logger.info(f"ValueError in Query/DB instruction: {e}: raising DBConcurrencyError")
            raise DBConcurrencyError
        except AttributeError as e:
            logger.info(f"AttributeError in Query/DB instruction: {e}: raising DBConcurrencyError")
            raise DBConcurrencyError
        except Exception as e:
            logger.info(f"Could not insert transactions: {e}")
            if e.args[0] == 3 and e.args[1].startswith("Duplicate key exists in"):
                raise CriticalDoubleSpend()
            else:
                raise OperationDataInsertionError()
        return output

    return wrapper
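A minimal demonstration of what the decorator buys the query layer: ValueError/AttributeError from a stale read surface as DBConcurrencyError, a Tarantool duplicate-key error (args like (3, "Duplicate key exists in ...")) becomes CriticalDoubleSpend, and anything else becomes OperationDataInsertionError. A sketch outside the repo, reusing the names defined above:

@catch_db_exception
def flaky_query(connection):
    raise ValueError("tuple layout changed mid-read")

try:
    flaky_query(None)
except DBConcurrencyError:
    print("ValueError was translated to DBConcurrencyError")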
@register_query(TarantoolDBConnection)
def get_complete_transactions_by_ids(connection, txids: list) -> list[DbTransaction]:

@@ -59,8 +87,9 @@ def get_complete_transactions_by_ids(connection, txids: list) -> list[DbTransact

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_outputs_by_tx_id(connection, tx_id: str) -> list[Output]:
-    _outputs = connection.run(connection.space(TARANT_TABLE_OUTPUT).select(tx_id, index=TARANT_TX_ID_SEARCH))
+    _outputs = connection.connect().select(TARANT_TABLE_OUTPUT, tx_id, index=TARANT_TX_ID_SEARCH).data
    _sorted_outputs = sorted(_outputs, key=itemgetter(4))
    return [Output.from_tuple(output) for output in _sorted_outputs]

@@ -75,28 +104,35 @@ def get_transaction(connection, tx_id: str) -> Union[DbTransaction, None]:

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_transactions_by_asset(connection, asset: str, limit: int = 1000) -> list[DbTransaction]:
-    txs = connection.run(
-        connection.space(TARANT_TABLE_TRANSACTION).select(asset, limit=limit, index="transactions_by_asset_cid")
-    )
+    txs = (
+        connection.connect()
+        .select(TARANT_TABLE_TRANSACTION, asset, limit=limit, index="transactions_by_asset_cid")
+        .data
+    )
    tx_ids = [tx[0] for tx in txs]
    return get_complete_transactions_by_ids(connection, tx_ids)

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_transactions_by_metadata(connection, metadata: str, limit: int = 1000) -> list[DbTransaction]:
-    txs = connection.run(
-        connection.space(TARANT_TABLE_TRANSACTION).select(metadata, limit=limit, index="transactions_by_metadata_cid")
-    )
+    txs = (
+        connection.connect()
+        .select(TARANT_TABLE_TRANSACTION, metadata, limit=limit, index="transactions_by_metadata_cid")
+        .data
+    )
    tx_ids = [tx[0] for tx in txs]
    return get_complete_transactions_by_ids(connection, tx_ids)

-def store_transaction_outputs(connection, output: Output, index: int) -> str:
+@register_query(TarantoolDBConnection)
+@catch_db_exception
+def store_transaction_outputs(connection, output: Output, index: int, table=TARANT_TABLE_OUTPUT) -> str:
    output_id = uuid4().hex
-    try:
-        connection.run(
-            connection.space(TARANT_TABLE_OUTPUT).insert(
+    connection.connect().insert(
+        table,
        (
            output_id,
            int(output.amount),

@@ -104,13 +140,9 @@ def store_transaction_outputs(connection, output: Output, index: int) -> str:

            output.condition.to_dict(),
            index,
            output.transaction_id,
-            )
-        )
-    )
+        ),
+    ).data
    return output_id
-    except Exception as e:
-        logger.info(f"Could not insert Output: {e}")
-        raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)

@@ -124,6 +156,7 @@ def store_transactions(connection, signed_transactions: list, table=TARANT_TABLE

@register_query(TarantoolDBConnection)
+@catch_db_exception
def store_transaction(connection, transaction, table=TARANT_TABLE_TRANSACTION):
    scripts = None
    if TARANT_TABLE_SCRIPT in transaction:

@@ -142,19 +175,13 @@ def store_transaction(connection, transaction, table=TARANT_TABLE_TRANSACTION):

        transaction["inputs"],
        scripts,
    )
-    try:
-        connection.run(connection.space(table).insert(tx), only_data=False)
-    except Exception as e:
-        logger.info(f"Could not insert transactions: {e}")
-        if e.args[0] == 3 and e.args[1].startswith("Duplicate key exists in"):
-            raise CriticalDoubleSpend()
-        else:
-            raise OperationDataInsertionError()
+    connection.connect().insert(table, tx)

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_transaction_by_id(connection, transaction_id, table=TARANT_TABLE_TRANSACTION):
-    txs = connection.run(connection.space(table).select(transaction_id, index=TARANT_ID_SEARCH), only_data=False)
+    txs = connection.connect().select(table, transaction_id, index=TARANT_ID_SEARCH)
    if len(txs) == 0:
        return None
    return DbTransaction.from_tuple(txs[0])
@@ -172,18 +199,18 @@ def get_transactions(connection, transactions_ids: list) -> list[DbTransaction]:

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_asset(connection, asset_id: str) -> Asset:
-    _data = connection.run(
-        connection.space(TARANT_TABLE_TRANSACTION).select(asset_id, index=TARANT_INDEX_TX_BY_ASSET_ID)
-    )
+    _data = connection.connect().select(TARANT_TABLE_TRANSACTION, asset_id, index=TARANT_INDEX_TX_BY_ASSET_ID).data
    return Asset.from_dict(_data[0])

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_assets(connection, assets_ids: list) -> list[Asset]:
    _returned_data = []
    for _id in list(set(assets_ids)):
-        res = connection.run(connection.space(TARANT_TABLE_TRANSACTION).select(_id, index=TARANT_INDEX_TX_BY_ASSET_ID))
+        res = connection.connect().select(TARANT_TABLE_TRANSACTION, _id, index=TARANT_INDEX_TX_BY_ASSET_ID).data
        if len(res) == 0:
            continue
        _returned_data.append(res[0])
@@ -193,18 +220,26 @@ def get_assets(connection, assets_ids: list) -> list[Asset]:

@register_query(TarantoolDBConnection)
+@catch_db_exception
-def get_spent(connection, fullfil_transaction_id: str, fullfil_output_index: str) -> list[DbTransaction]:
-    _inputs = connection.run(
-        connection.space(TARANT_TABLE_TRANSACTION).select(
-            [fullfil_transaction_id, fullfil_output_index], index=TARANT_INDEX_SPENDING_BY_ID_AND_OUTPUT_INDEX
-        )
-    )
+def get_spending_transaction(
+    connection, fullfil_transaction_id: str, fullfil_output_index: str
+) -> list[DbTransaction]:
+    _inputs = (
+        connection.connect()
+        .select(
+            TARANT_TABLE_TRANSACTION,
+            [fullfil_transaction_id, fullfil_output_index],
+            index=TARANT_INDEX_SPENDING_BY_ID_AND_OUTPUT_INDEX,
+        )
+        .data
+    )
    return get_complete_transactions_by_ids(txids=[inp[0] for inp in _inputs], connection=connection)

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_latest_block(connection) -> Union[dict, None]:
-    blocks = connection.run(connection.space(TARANT_TABLE_BLOCKS).select())
+    blocks = connection.connect().select(TARANT_TABLE_BLOCKS).data
    if not blocks:
        return None
@@ -214,37 +249,32 @@ def get_latest_block(connection) -> Union[dict, None]:

@register_query(TarantoolDBConnection)
+@catch_db_exception
def store_block(connection, block: dict):
    block_unique_id = uuid4().hex
-    try:
-        connection.run(
-            connection.space(TARANT_TABLE_BLOCKS).insert(
-                (block_unique_id, block["app_hash"], block["height"], block[TARANT_TABLE_TRANSACTION])
-            ),
-            only_data=False,
-        )
-    except Exception as e:
-        logger.info(f"Could not insert block: {e}")
-        raise OperationDataInsertionError()
+    connection.connect().insert(
+        TARANT_TABLE_BLOCKS, (block_unique_id, block["app_hash"], block["height"], block[TARANT_TABLE_TRANSACTION])
+    )

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_txids_filtered(connection, asset_ids: list[str], operation: str = "", last_tx: bool = False) -> list[str]:
    transactions = []
    if operation == "CREATE":
-        transactions = connection.run(
-            connection.space(TARANT_TABLE_TRANSACTION).select(
-                [asset_ids[0], operation], index="transactions_by_id_and_operation"
-            )
-        )
+        transactions = (
+            connection.connect()
+            .select(TARANT_TABLE_TRANSACTION, [asset_ids[0], operation], index="transactions_by_id_and_operation")
+            .data
+        )
    elif operation == "TRANSFER":
-        transactions = connection.run(
-            connection.space(TARANT_TABLE_TRANSACTION).select(asset_ids, index=TARANT_INDEX_TX_BY_ASSET_ID)
-        )
+        transactions = (
+            connection.connect().select(TARANT_TABLE_TRANSACTION, asset_ids, index=TARANT_INDEX_TX_BY_ASSET_ID).data
+        )
    else:
-        txs = connection.run(connection.space(TARANT_TABLE_TRANSACTION).select(asset_ids, index=TARANT_ID_SEARCH))
-        asset_txs = connection.run(
-            connection.space(TARANT_TABLE_TRANSACTION).select(asset_ids, index=TARANT_INDEX_TX_BY_ASSET_ID)
-        )
+        txs = connection.connect().select(TARANT_TABLE_TRANSACTION, asset_ids, index=TARANT_ID_SEARCH).data
+        asset_txs = (
+            connection.connect().select(TARANT_TABLE_TRANSACTION, asset_ids, index=TARANT_INDEX_TX_BY_ASSET_ID).data
+        )
        transactions = txs + asset_txs
@@ -258,27 +288,9 @@ def get_txids_filtered(connection, asset_ids: list[str], operation: str = "", la

@register_query(TarantoolDBConnection)
-def text_search(conn, search, table=TARANT_TABLE_ASSETS, limit=0):
-    pattern = ".{}.".format(search)
-    field_no = 1 if table == TARANT_TABLE_ASSETS else 2  # 2 for meta_data
-    res = conn.run(conn.space(table).call("indexed_pattern_search", (table, field_no, pattern)))
-    to_return = []
-    if len(res[0]):  # NEEDS BEAUTIFICATION
-        if table == TARANT_TABLE_ASSETS:
-            for result in res[0]:
-                to_return.append({"data": json.loads(result[0])["data"], "id": result[1]})
-        else:
-            for result in res[0]:
-                to_return.append({TARANT_TABLE_META_DATA: json.loads(result[1]), "id": result[0]})
-    return to_return if limit == 0 else to_return[:limit]
-
-@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_owned_ids(connection, owner: str) -> list[DbTransaction]:
-    outputs = connection.run(connection.space(TARANT_TABLE_OUTPUT).select(owner, index="public_keys"))
+    outputs = connection.connect().select(TARANT_TABLE_OUTPUT, owner, index="public_keys").data
    if len(outputs) == 0:
        return []
    txids = [output[5] for output in outputs]

@@ -291,7 +303,7 @@ def get_spending_transactions(connection, inputs):

    _transactions = []
    for inp in inputs:
-        _trans_list = get_spent(
+        _trans_list = get_spending_transaction(
            fullfil_transaction_id=inp["transaction_id"],
            fullfil_output_index=inp["output_index"],
            connection=connection,
@@ -302,8 +314,9 @@ def get_spending_transactions(connection, inputs):

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_block(connection, block_id=None) -> Union[dict, None]:
-    _block = connection.run(connection.space(TARANT_TABLE_BLOCKS).select(block_id, index="height", limit=1))
+    _block = connection.connect().select(TARANT_TABLE_BLOCKS, block_id, index="height", limit=1).data
    if len(_block) == 0:
        return
    _block = Block.from_tuple(_block[0])

@@ -311,8 +324,9 @@ def get_block(connection, block_id=None) -> Union[dict, None]:

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_block_with_transaction(connection, txid: str) -> Union[dict, None]:
-    _block = connection.run(connection.space(TARANT_TABLE_BLOCKS).select(txid, index="block_by_transaction_id"))
+    _block = connection.connect().select(TARANT_TABLE_BLOCKS, txid, index="block_by_transaction_id").data
    if len(_block) == 0:
        return
    _block = Block.from_tuple(_block[0])

@@ -320,83 +334,66 @@ def get_block_with_transaction(connection, txid: str) -> Union[dict, None]:

@register_query(TarantoolDBConnection)
+@catch_db_exception
def delete_transactions(connection, txn_ids: list):
-    try:
    for _id in txn_ids:
        _outputs = get_outputs_by_tx_id(connection, _id)
        for x in range(len(_outputs)):
            connection.connect().call("delete_output", (_outputs[x].id))
+            connection.connect().delete(
+                TARANT_TABLE_UTXOS, (_id, _outputs[x].index), index="utxo_by_transaction_id_and_output_index"
+            )
    for _id in txn_ids:
-        connection.run(connection.space(TARANT_TABLE_TRANSACTION).delete(_id), only_data=False)
-        connection.run(connection.space(TARANT_TABLE_GOVERNANCE).delete(_id), only_data=False)
-    except Exception as e:
-        logger.info(f"Could not insert unspent output: {e}")
-        raise OperationDataInsertionError()
+        connection.connect().delete(TARANT_TABLE_TRANSACTION, _id)
+        connection.connect().delete(TARANT_TABLE_GOVERNANCE, _id)
@register_query(TarantoolDBConnection)
-def store_unspent_outputs(connection, *unspent_outputs: list):
-    result = []
-    if unspent_outputs:
-        for utxo in unspent_outputs:
-            try:
-                output = connection.run(
-                    connection.space(TARANT_TABLE_UTXOS).insert(
-                        (uuid4().hex, utxo["transaction_id"], utxo["output_index"], utxo)
-                    )
-                )
-                result.append(output)
-            except Exception as e:
-                logger.info(f"Could not insert unspent output: {e}")
-                raise OperationDataInsertionError()
-    return result
-
-@register_query(TarantoolDBConnection)
-def delete_unspent_outputs(connection, *unspent_outputs: list):
+@catch_db_exception
+def delete_unspent_outputs(connection, unspent_outputs: list):
    result = []
    if unspent_outputs:
        for utxo in unspent_outputs:
-            output = connection.run(
-                connection.space(TARANT_TABLE_UTXOS).delete(
-                    (utxo["transaction_id"], utxo["output_index"]), index="utxo_by_transaction_id_and_output_index"
-                )
-            )
+            output = (
+                connection.connect()
+                .delete(
+                    TARANT_TABLE_UTXOS,
+                    (utxo["transaction_id"], utxo["output_index"]),
+                    index="utxo_by_transaction_id_and_output_index",
+                )
+                .data
+            )
            result.append(output)
    return result

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_unspent_outputs(connection, query=None):  # for now we don't have implementation for 'query'.
-    _utxos = connection.run(connection.space(TARANT_TABLE_UTXOS).select([]))
-    return [utx[3] for utx in _utxos]
+    utxos = connection.connect().select(TARANT_TABLE_UTXOS, []).data
+    return [{"transaction_id": utxo[5], "output_index": utxo[4]} for utxo in utxos]
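Note how the new UTXO queries lean on the flat tuple layout defined in init.lua above, (id, amount, public_keys, condition, output_index, transaction_id): utxo[4] is the output index and utxo[5] the transaction id. An illustrative tuple (the values are made up):

utxo = ("a1b2", 7, ["ED25519-PUBKEY"], {"details": {}}, 0, "some-tx-id")
assert {"transaction_id": utxo[5], "output_index": utxo[4]} == {"transaction_id": "some-tx-id", "output_index": 0}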
@register_query(TarantoolDBConnection)
+@catch_db_exception
def store_pre_commit_state(connection, state: dict):
-    _precommit = connection.run(connection.space(TARANT_TABLE_PRE_COMMITS).select([], limit=1))
+    _precommit = connection.connect().select(TARANT_TABLE_PRE_COMMITS, [], limit=1).data
    _precommitTuple = (
        (uuid4().hex, state["height"], state[TARANT_TABLE_TRANSACTION])
        if _precommit is None or len(_precommit) == 0
        else _precommit[0]
    )
-    try:
-        connection.run(
-            connection.space(TARANT_TABLE_PRE_COMMITS).upsert(
-                _precommitTuple,
-                op_list=[("=", 1, state["height"]), ("=", 2, state[TARANT_TABLE_TRANSACTION])],
-                limit=1,
-            ),
-            only_data=False,
-        )
-    except Exception as e:
-        logger.info(f"Could not insert pre commit state: {e}")
-        raise OperationDataInsertionError()
+    connection.connect().upsert(
+        TARANT_TABLE_PRE_COMMITS,
+        _precommitTuple,
+        op_list=[("=", 1, state["height"]), ("=", 2, state[TARANT_TABLE_TRANSACTION])],
+    )

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_pre_commit_state(connection) -> dict:
-    _commit = connection.run(connection.space(TARANT_TABLE_PRE_COMMITS).select([], index=TARANT_ID_SEARCH))
+    _commit = connection.connect().select(TARANT_TABLE_PRE_COMMITS, [], index=TARANT_ID_SEARCH).data
    if _commit is None or len(_commit) == 0:
        return None
    _commit = sorted(_commit, key=itemgetter(1), reverse=False)[0]
@@ -404,71 +401,57 @@ def get_pre_commit_state(connection) -> dict:

@register_query(TarantoolDBConnection)
+@catch_db_exception
def store_validator_set(conn, validators_update: dict):
-    _validator = conn.run(
-        conn.space(TARANT_TABLE_VALIDATOR_SETS).select(validators_update["height"], index="height", limit=1)
-    )
+    _validator = (
+        conn.connect().select(TARANT_TABLE_VALIDATOR_SETS, validators_update["height"], index="height", limit=1).data
+    )
    unique_id = uuid4().hex if _validator is None or len(_validator) == 0 else _validator[0][0]
-    try:
-        conn.run(
-            conn.space(TARANT_TABLE_VALIDATOR_SETS).upsert(
-                (unique_id, validators_update["height"], validators_update["validators"]),
-                op_list=[("=", 1, validators_update["height"]), ("=", 2, validators_update["validators"])],
-                limit=1,
-            ),
-            only_data=False,
-        )
-    except Exception as e:
-        logger.info(f"Could not insert validator set: {e}")
-        raise OperationDataInsertionError()
+    result = conn.connect().upsert(
+        TARANT_TABLE_VALIDATOR_SETS,
+        (unique_id, validators_update["height"], validators_update["validators"]),
+        op_list=[("=", 1, validators_update["height"]), ("=", 2, validators_update["validators"])],
+    )
+    return result

@register_query(TarantoolDBConnection)
+@catch_db_exception
def delete_validator_set(connection, height: int):
-    _validators = connection.run(connection.space(TARANT_TABLE_VALIDATOR_SETS).select(height, index="height"))
+    _validators = connection.connect().select(TARANT_TABLE_VALIDATOR_SETS, height, index="height").data
    for _valid in _validators:
-        connection.run(connection.space(TARANT_TABLE_VALIDATOR_SETS).delete(_valid[0]), only_data=False)
+        connection.connect().delete(TARANT_TABLE_VALIDATOR_SETS, _valid[0])

@register_query(TarantoolDBConnection)
+@catch_db_exception
def store_election(connection, election_id: str, height: int, is_concluded: bool):
-    try:
-        connection.run(
-            connection.space(TARANT_TABLE_ELECTIONS).upsert(
-                (election_id, height, is_concluded), op_list=[("=", 1, height), ("=", 2, is_concluded)], limit=1
-            ),
-            only_data=False,
-        )
-    except Exception as e:
-        logger.info(f"Could not insert election: {e}")
-        raise OperationDataInsertionError()
+    connection.connect().upsert(
+        TARANT_TABLE_ELECTIONS, (election_id, height, is_concluded), op_list=[("=", 1, height), ("=", 2, is_concluded)]
+    )
@register_query(TarantoolDBConnection)
+@catch_db_exception
def store_elections(connection, elections: list):
-    try:
    for election in elections:
-        _election = connection.run(  # noqa: F841
-            connection.space(TARANT_TABLE_ELECTIONS).insert(
-                (election["election_id"], election["height"], election["is_concluded"])
-            ),
-            only_data=False,
-        )
-    except Exception as e:
-        logger.info(f"Could not insert elections: {e}")
-        raise OperationDataInsertionError()
+        _election = connection.connect().insert(
+            TARANT_TABLE_ELECTIONS, (election["election_id"], election["height"], election["is_concluded"])
+        )

@register_query(TarantoolDBConnection)
+@catch_db_exception
def delete_elections(connection, height: int):
-    _elections = connection.run(connection.space(TARANT_TABLE_ELECTIONS).select(height, index="height"))
+    _elections = connection.connect().select(TARANT_TABLE_ELECTIONS, height, index="height").data
    for _elec in _elections:
-        connection.run(connection.space(TARANT_TABLE_ELECTIONS).delete(_elec[0]), only_data=False)
+        connection.connect().delete(TARANT_TABLE_ELECTIONS, _elec[0])

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_validator_set(connection, height: int = None):
-    _validators = connection.run(connection.space(TARANT_TABLE_VALIDATOR_SETS).select())
+    _validators = connection.connect().select(TARANT_TABLE_VALIDATOR_SETS).data
    if height is not None and _validators is not None:
        _validators = [
            {"height": validator[1], "validators": validator[2]} for validator in _validators if validator[1] <= height
@@ -481,8 +464,9 @@ def get_validator_set(connection, height: int = None):

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_election(connection, election_id: str) -> dict:
-    _elections = connection.run(connection.space(TARANT_TABLE_ELECTIONS).select(election_id, index=TARANT_ID_SEARCH))
+    _elections = connection.connect().select(TARANT_TABLE_ELECTIONS, election_id, index=TARANT_ID_SEARCH).data
    if _elections is None or len(_elections) == 0:
        return None
    _election = sorted(_elections, key=itemgetter(0), reverse=True)[0]

@@ -490,40 +474,46 @@ def get_election(connection, election_id: str) -> dict:

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_asset_tokens_for_public_key(connection, asset_id: str, public_key: str) -> list[DbTransaction]:
-    id_transactions = connection.run(connection.space(TARANT_TABLE_GOVERNANCE).select([asset_id]))
-    asset_id_transactions = connection.run(
-        connection.space(TARANT_TABLE_GOVERNANCE).select([asset_id], index="governance_by_asset_id")
-    )
+    id_transactions = connection.connect().select(TARANT_TABLE_GOVERNANCE, [asset_id]).data
+    asset_id_transactions = (
+        connection.connect().select(TARANT_TABLE_GOVERNANCE, [asset_id], index="governance_by_asset_id").data
+    )
    transactions = id_transactions + asset_id_transactions
    return get_complete_transactions_by_ids(connection, [_tx[0] for _tx in transactions])

@register_query(TarantoolDBConnection)
+@catch_db_exception
def store_abci_chain(connection, height: int, chain_id: str, is_synced: bool = True):
-    try:
-        connection.run(
-            connection.space(TARANT_TABLE_ABCI_CHAINS).upsert(
-                (chain_id, height, is_synced),
-                op_list=[("=", 0, chain_id), ("=", 1, height), ("=", 2, is_synced)],
-            ),
-            only_data=False,
-        )
-    except Exception as e:
-        logger.info(f"Could not insert abci-chain: {e}")
-        raise OperationDataInsertionError()
+    connection.connect().upsert(
+        TARANT_TABLE_ABCI_CHAINS,
+        (chain_id, height, is_synced),
+        op_list=[("=", 0, chain_id), ("=", 1, height), ("=", 2, is_synced)],
+    )

@register_query(TarantoolDBConnection)
+@catch_db_exception
def delete_abci_chain(connection, height: int):
-    chains = connection.run(connection.space(TARANT_TABLE_ABCI_CHAINS).select(height, index="height"), only_data=False)
-    connection.run(connection.space(TARANT_TABLE_ABCI_CHAINS).delete(chains[0][0], index="id"), only_data=False)
+    chains = connection.connect().select(TARANT_TABLE_ABCI_CHAINS, height, index="height")
+    connection.connect().delete(TARANT_TABLE_ABCI_CHAINS, chains[0][0], index="id")

@register_query(TarantoolDBConnection)
+@catch_db_exception
def get_latest_abci_chain(connection) -> Union[dict, None]:
-    _all_chains = connection.run(connection.space(TARANT_TABLE_ABCI_CHAINS).select())
+    _all_chains = connection.connect().select(TARANT_TABLE_ABCI_CHAINS).data
    if _all_chains is None or len(_all_chains) == 0:
        return None
    _chain = sorted(_all_chains, key=itemgetter(1), reverse=True)[0]
    return {"chain_id": _chain[0], "height": _chain[1], "is_synced": _chain[2]}
@register_query(TarantoolDBConnection)
@catch_db_exception
def get_outputs_by_owner(connection, public_key: str, table=TARANT_TABLE_OUTPUT) -> list[Output]:
    outputs = connection.connect().select(table, public_key, index="public_keys")
    return [Output.from_tuple(output) for output in outputs]

View File

@@ -3,7 +3,7 @@ import logging

from planetmint.config import Config
from planetmint.backend.utils import module_dispatch_registrar
from planetmint import backend
-from planetmint.backend.tarantool.connection import TarantoolDBConnection
+from planetmint.backend.tarantool.sync_io.connection import TarantoolDBConnection

logger = logging.getLogger(__name__)
register_schema = module_dispatch_registrar(backend.schema)

@@ -32,19 +32,11 @@ def create_database(connection, dbname):

    logger.info("Create database `%s`.", dbname)
def run_command_with_output(command):
    from subprocess import run

    host_port = "%s:%s" % (
        Config().get()["database"]["host"],
        Config().get()["database"]["port"],
    )
    output = run(["tarantoolctl", "connect", host_port], input=command, capture_output=True)
    if output.returncode != 0:
        raise Exception(f"Error while trying to execute cmd {command} on host:port {host_port}: {output.stderr}")
    return output.stdout
@register_schema(TarantoolDBConnection)
def create_tables(connection, dbname):
    connection.connect().call("init")

+@register_schema(TarantoolDBConnection)
+def migrate(connection):
+    connection.connect().call("migrate")

View File

@@ -1 +0,0 @@
from planetmint.backend.tarantool.transaction import tools

View File

@@ -1,89 +0,0 @@
from transactions.common.memoize import HDict
from planetmint.backend.tarantool.const import (
TARANT_TABLE_META_DATA,
TARANT_TABLE_ASSETS,
TARANT_TABLE_KEYS,
TARANT_TABLE_TRANSACTION,
TARANT_TABLE_INPUT,
TARANT_TABLE_OUTPUT,
TARANT_TABLE_SCRIPT,
)
def get_items(_list):
for item in _list:
if type(item) is dict:
yield item
def _save_keys_order(dictionary):
filter_keys = ["asset", TARANT_TABLE_META_DATA]
if type(dictionary) is dict or type(dictionary) is HDict:
keys = list(dictionary.keys())
_map = {}
for key in keys:
_map[key] = _save_keys_order(dictionary=dictionary[key]) if key not in filter_keys else None
return _map
elif type(dictionary) is list:
_maps = []
for _item in get_items(_list=dictionary):
_map = {}
keys = list(_item.keys())
for key in keys:
_map[key] = _save_keys_order(dictionary=_item[key]) if key not in filter_keys else None
_maps.append(_map)
return _maps
return None
class TransactionDecompose:
def __init__(self, _transaction):
self._transaction = _transaction
self._tuple_transaction = {
TARANT_TABLE_TRANSACTION: (),
TARANT_TABLE_INPUT: [],
TARANT_TABLE_OUTPUT: [],
TARANT_TABLE_KEYS: [],
TARANT_TABLE_SCRIPT: None,
TARANT_TABLE_META_DATA: None,
TARANT_TABLE_ASSETS: None,
}
def get_map(self, dictionary: dict = None):
return (
_save_keys_order(dictionary=dictionary)
if dictionary is not None
else _save_keys_order(dictionary=self._transaction)
)
def __prepare_transaction(self):
_map = self.get_map()
return (self._transaction["id"], self._transaction["operation"], self._transaction["version"], _map)
def convert_to_tuple(self):
self._tuple_transaction[TARANT_TABLE_TRANSACTION] = self.__prepare_transaction()
return self._tuple_transaction
class TransactionCompose:
def __init__(self, db_results):
self.db_results = db_results
self._map = self.db_results[TARANT_TABLE_TRANSACTION][3]
def _get_transaction_operation(self):
return self.db_results[TARANT_TABLE_TRANSACTION][1]
def _get_transaction_version(self):
return self.db_results[TARANT_TABLE_TRANSACTION][2]
def _get_transaction_id(self):
return self.db_results[TARANT_TABLE_TRANSACTION][0]
def convert_to_dict(self):
transaction = {k: None for k in list(self._map.keys())}
transaction["id"] = self._get_transaction_id()
transaction["version"] = self._get_transaction_version()
transaction["operation"] = self._get_transaction_operation()
return transaction

View File

@@ -1,13 +0,0 @@
import subprocess
def run_cmd(commands: list, config: dict):
ret = subprocess.Popen(
["%s %s:%s < %s" % ("tarantoolctl connect", "localhost", "3303", "planetmint/backend/tarantool/init.lua")],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
universal_newlines=True,
bufsize=0,
shell=True,
)
return True if ret >= 0 else False

View File

@@ -14,26 +14,27 @@ import json

import sys
import planetmint

-from planetmint.core import rollback
-from planetmint.utils import load_node_key
from transactions.common.transaction_mode_types import BROADCAST_TX_COMMIT
from transactions.common.exceptions import DatabaseDoesNotExist, ValidationError
from transactions.types.elections.vote import Vote
from transactions.types.elections.chain_migration_election import ChainMigrationElection
from transactions.types.elections.validator_utils import election_id_to_public_key
+from transactions.types.elections.validator_election import ValidatorElection
from transactions.common.transaction import Transaction
-from planetmint import ValidatorElection, Planetmint
+from planetmint.application.validator import Validator
from planetmint.backend import schema
from planetmint.commands import utils
from planetmint.commands.utils import configure_planetmint, input_on_stderr
-from planetmint.log import setup_logging
-from planetmint.tendermint_utils import public_key_from_base64
+from planetmint.config_utils import setup_logging
+from planetmint.abci.rpc import ABCI_RPC, MODE_COMMIT, MODE_LIST
+from planetmint.abci.utils import load_node_key, public_key_from_base64
from planetmint.commands.election_types import elections
from planetmint.version import __tm_supported_versions__
from planetmint.config import Config
+from planetmint.backend.tarantool.const import TARANT_TABLE_GOVERNANCE

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
@@ -109,18 +110,22 @@ def run_configure(args):

def run_election(args):
    """Initiate and manage elections"""

-    b = Planetmint()
+    b = Validator()
+    abci_rpc = ABCI_RPC()

+    if args.action == "show":
+        run_election_show(args, b)
+    else:
        # Call the function specified by args.action, as defined above
-        globals()[f"run_election_{args.action}"](args, b)
+        globals()[f"run_election_{args.action}"](args, b, abci_rpc)

-def run_election_new(args, planet):
+def run_election_new(args, planet, abci_rpc):
    election_type = args.election_type.replace("-", "_")
-    globals()[f"run_election_new_{election_type}"](args, planet)
+    globals()[f"run_election_new_{election_type}"](args, planet, abci_rpc)

-def create_new_election(sk, planet, election_class, data):
+def create_new_election(sk, planet, election_class, data, abci_rpc):
    try:
        key = load_node_key(sk)
        voters = planet.get_recipients_list()
@@ -133,7 +138,9 @@ def create_new_election(sk, planet, election_class, data):

        logger.error(fd_404)
        return False

-    resp = planet.write_transaction(election, BROADCAST_TX_COMMIT)
+    resp = abci_rpc.write_transaction(
+        MODE_LIST, abci_rpc.tendermint_rpc_endpoint, MODE_COMMIT, election, BROADCAST_TX_COMMIT
+    )
    if resp == (202, ""):
        logger.info("[SUCCESS] Submitted proposal with id: {}".format(election.id))
        return election.id
@@ -142,7 +149,7 @@ def create_new_election(sk, planet, election_class, data):

    return False

-def run_election_new_upsert_validator(args, planet):
+def run_election_new_upsert_validator(args, planet, abci_rpc):
    """Initiates an election to add/update/remove a validator to an existing Planetmint network

    :param args: dict

@@ -166,10 +173,10 @@ def run_election_new_upsert_validator(args, planet):

        }
    ]

-    return create_new_election(args.sk, planet, ValidatorElection, new_validator)
+    return create_new_election(args.sk, planet, ValidatorElection, new_validator, abci_rpc)

-def run_election_new_chain_migration(args, planet):
+def run_election_new_chain_migration(args, planet, abci_rpc):
    """Initiates an election to halt block production

    :param args: dict

@@ -180,10 +187,10 @@ def run_election_new_chain_migration(args, planet):

    :return: election_id or `False` in case of failure
    """

-    return create_new_election(args.sk, planet, ChainMigrationElection, [{"data": {}}])
+    return create_new_election(args.sk, planet, ChainMigrationElection, [{"data": {}}], abci_rpc)

-def run_election_approve(args, planet):
+def run_election_approve(args, validator: Validator, abci_rpc: ABCI_RPC):
    """Approve an election

    :param args: dict
@@ -196,7 +203,7 @@ def run_election_approve(args, planet):

    """
    key = load_node_key(args.sk)
-    tx = planet.get_transaction(args.election_id)
+    tx = validator.models.get_transaction(args.election_id)
    voting_powers = [v.amount for v in tx.outputs if key.public_key in v.public_keys]
    if len(voting_powers) > 0:
        voting_power = voting_powers[0]

@@ -208,9 +215,11 @@ def run_election_approve(args, planet):

    inputs = [i for i in tx_converted.to_inputs() if key.public_key in i.owners_before]
    election_pub_key = election_id_to_public_key(tx.id)
    approval = Vote.generate(inputs, [([election_pub_key], voting_power)], [tx.id]).sign([key.private_key])
-    planet.validate_transaction(approval)
-    resp = planet.write_transaction(approval, BROADCAST_TX_COMMIT)
+    validator.validate_transaction(approval)
+    resp = abci_rpc.write_transaction(
+        MODE_LIST, abci_rpc.tendermint_rpc_endpoint, MODE_COMMIT, approval, BROADCAST_TX_COMMIT
+    )

    if resp == (202, ""):
        logger.info("[SUCCESS] Your vote has been submitted")
@@ -220,7 +229,7 @@ def run_election_approve(args, planet):

    return False

-def run_election_show(args, planet):
+def run_election_show(args, validator: Validator):
    """Retrieves information about an election

    :param args: dict

@@ -230,12 +239,12 @@ def run_election_show(args, planet):

    :param planet: an instance of Planetmint
    """

-    election = planet.get_transaction(args.election_id)
+    election = validator.models.get_transaction(args.election_id)
    if not election:
        logger.error(f"No election found with election_id {args.election_id}")
        return

-    response = planet.show_election_status(election)
+    response = validator.show_election_status(election)
    logger.info(response)
@@ -243,8 +252,8 @@ def run_election_show(args, planet):

def _run_init():
-    bdb = planetmint.Planetmint()
-    schema.init_database(bdb.connection)
+    validator = Validator()
+    schema.init_database(validator.models.connection)

@configure_planetmint

@@ -253,6 +262,12 @@ def run_init(args):

    _run_init()

+@configure_planetmint
+def run_migrate(args):
+    validator = Validator()
+    schema.migrate(validator.models.connection)

@configure_planetmint
def run_drop(args):
    """Drop the database"""
@@ -271,13 +286,10 @@ def run_drop(args):

        print("Drop was executed, but spaces doesn't exist.", file=sys.stderr)

-def run_recover(b):
-    rollback(b)

@configure_planetmint
def run_start(args):
    """Start the processes to run the node"""

-    logger.info("Planetmint Version %s", planetmint.version.__version__)
    # Configure Logging
    setup_logging()

@@ -286,8 +298,9 @@ def run_start(args):

    logger.info("Initializing database")
    _run_init()

-    logger.info("Planetmint Version %s", planetmint.version.__version__)
-    run_recover(planetmint.lib.Planetmint())
+    validator = Validator()
+    validator.rollback()
+    del validator

    logger.info("Starting Planetmint main process.")
    from planetmint.start import start
@@ -360,6 +373,8 @@ def create_parser():

    subparsers.add_parser("drop", help="Drop the database")

+    subparsers.add_parser("migrate", help="Migrate up")

    # parser for starting Planetmint
    start_parser = subparsers.add_parser("start", help="Start Planetmint")

@@ -381,6 +396,21 @@ def create_parser():

        help="💀 EXPERIMENTAL: parallelize validation for better throughput 💀",
    )

+    start_parser.add_argument(
+        "--web-api-only",
+        dest="web_api_only",
+        default=False,
+        action="store_true",
+        help="💀 EXPERIMENTAL: separate web API from ABCI server 💀",
+    )
+
+    start_parser.add_argument(
+        "--abci-only",
+        dest="abci_only",
+        default=False,
+        action="store_true",
+        help="💀 EXPERIMENTAL: separate web API from ABCI server 💀",
+    )

    return parser
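A quick sanity sketch of the extended parser, assuming create_parser is imported from this module:

parser = create_parser()
args = parser.parse_args(["start", "--web-api-only"])
assert args.web_api_only is True and args.abci_only is False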

View File

@@ -1,19 +1,9 @@

import copy
import logging
import os

-# from planetmint.log import DEFAULT_LOGGING_CONFIG as log_config
-from planetmint.version import __version__  # noqa
from decouple import config
+from planetmint.utils.singleton import Singleton

-class Singleton(type):
-    _instances = {}
-
-    def __call__(cls, *args, **kwargs):
-        if cls not in cls._instances:
-            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
-        return cls._instances[cls]
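The metaclass, now imported from planetmint.utils.singleton rather than defined inline, keeps one instance per class; a minimal check:

class Settings(metaclass=Singleton):
    pass

assert Settings() is Settings()  # same object on every instantiation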
class Config(metaclass=Singleton):
@@ -96,7 +86,7 @@ class Config(metaclass=Singleton):

            "tendermint": {
                "host": "localhost",
                "port": 26657,
-                "version": "v0.34.15",  # look for __tm_supported_versions__
+                "version": "v0.34.24",  # look for __tm_supported_versions__
            },
            "database": self.__private_database_map,
            "log": {

@@ -127,8 +117,8 @@ class Config(metaclass=Singleton):

    def set(self, config):
        self._private_real_config = config

-    def get_db_key_map(sefl, db):
-        return sefl.__private_database_keys_map[db]
+    def get_db_key_map(self, db):
+        return self.__private_database_keys_map[db]

    def get_db_map(sefl, db):
        return sefl.__private_database_map[db]

@@ -141,16 +131,12 @@ DEFAULT_LOGGING_CONFIG = {

    "formatters": {
        "console": {
            "class": "logging.Formatter",
-            "format": (
-                "[%(asctime)s] [%(levelname)s] (%(name)s) " "%(message)s (%(processName)-10s - pid: %(process)d)"
-            ),
+            "format": ("[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)"),
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
        "file": {
            "class": "logging.Formatter",
-            "format": (
-                "[%(asctime)s] [%(levelname)s] (%(name)s) " "%(message)s (%(processName)-10s - pid: %(process)d)"
-            ),
+            "format": ("[%(asctime)s] [%(levelname)s] (%(name)s) %(message)s (%(processName)-10s - pid: %(process)d)"),
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
    },

View File

@@ -23,10 +23,14 @@ import logging

import collections.abc
from functools import lru_cache
+from logging.config import dictConfig as set_logging_config
from pkg_resources import iter_entry_points, ResolutionError

-from planetmint.config import Config
+from transactions.common.exceptions import ConfigurationError
+from planetmint.config import Config, DEFAULT_LOGGING_CONFIG
+from planetmint.application.basevalidationrules import BaseValidationRules
from transactions.common import exceptions
-from planetmint.validation import BaseValidationRules

# TODO: move this to a proper configuration file for logging
logging.getLogger("requests").setLevel(logging.WARNING)

@@ -306,3 +310,69 @@ def load_events_plugins(names=None):

        plugins.append((name, entry_point.load()))

    return plugins
def _normalize_log_level(level):
    try:
        return level.upper()
    except AttributeError as exc:
        raise ConfigurationError("Log level must be a string!") from exc


def setup_logging():
    """Function to configure log handlers.

    .. important::

        Configuration, if needed, should be applied before invoking this
        decorator, as starting the subscriber process for logging will
        configure the root logger for the child process based on the
        state of :obj:`planetmint.config` at the moment this decorator
        is invoked.
    """
    logging_configs = DEFAULT_LOGGING_CONFIG
    new_logging_configs = Config().get()["log"]

    if "file" in new_logging_configs:
        filename = new_logging_configs["file"]
        logging_configs["handlers"]["file"]["filename"] = filename

    if "error_file" in new_logging_configs:
        error_filename = new_logging_configs["error_file"]
        logging_configs["handlers"]["errors"]["filename"] = error_filename

    if "level_console" in new_logging_configs:
        level = _normalize_log_level(new_logging_configs["level_console"])
        logging_configs["handlers"]["console"]["level"] = level

    if "level_logfile" in new_logging_configs:
        level = _normalize_log_level(new_logging_configs["level_logfile"])
        logging_configs["handlers"]["file"]["level"] = level

    if "fmt_console" in new_logging_configs:
        fmt = new_logging_configs["fmt_console"]
        logging_configs["formatters"]["console"]["format"] = fmt

    if "fmt_logfile" in new_logging_configs:
        fmt = new_logging_configs["fmt_logfile"]
        logging_configs["formatters"]["file"]["format"] = fmt

    if "datefmt_console" in new_logging_configs:
        fmt = new_logging_configs["datefmt_console"]
        logging_configs["formatters"]["console"]["datefmt"] = fmt

    if "datefmt_logfile" in new_logging_configs:
        fmt = new_logging_configs["datefmt_logfile"]
        logging_configs["formatters"]["file"]["datefmt"] = fmt

    log_levels = new_logging_configs.get("granular_levels", {})

    for logger_name, level in log_levels.items():
        level = _normalize_log_level(level)
        try:
            logging_configs["loggers"][logger_name]["level"] = level
        except KeyError:
            logging_configs["loggers"][logger_name] = {"level": level}

    set_logging_config(logging_configs)
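A hedged sketch of how a node's config feeds this function; the key names come from the code above, the values are illustrative:

from planetmint.config import Config

Config().get()["log"].update(
    {
        "level_console": "debug",  # console handler -> DEBUG
        "granular_levels": {"planetmint.core": "warning"},  # per-logger override
    }
)
setup_logging()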

View File

@@ -1,47 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from planetmint.utils import condition_details_has_owner
from planetmint.backend import query
from transactions.common.transaction import TransactionLink
class FastQuery:
"""Database queries that join on block results from a single node."""
def __init__(self, connection):
self.connection = connection
def get_outputs_by_public_key(self, public_key):
"""Get outputs for a public key"""
txs = query.get_owned_ids(self.connection, public_key)
return [
TransactionLink(tx.id, index)
for tx in txs
for index, output in enumerate(tx.outputs)
if condition_details_has_owner(output.condition.details, public_key)
]
def filter_spent_outputs(self, outputs):
"""Remove outputs that have been spent
Args:
outputs: list of TransactionLink
"""
links = [o.to_dict() for o in outputs]
txs = query.get_spending_transactions(self.connection, links)
spends = {TransactionLink.from_dict(input.fulfills.to_dict()) for tx in txs for input in tx.inputs}
return [ff for ff in outputs if ff not in spends]
def filter_unspent_outputs(self, outputs):
"""Remove outputs that have not been spent
Args:
outputs: list of TransactionLink
"""
links = [o.to_dict() for o in outputs]
txs = query.get_spending_transactions(self.connection, links)
spends = {TransactionLink.from_dict(input.fulfills.to_dict()) for tx in txs for input in tx.inputs}
return [ff for ff in outputs if ff in spends]

View File

37 planetmint/ipc/events.py Normal file
View File

@ -0,0 +1,37 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
POISON_PILL = "POISON_PILL"
class EventTypes:
"""Container class that holds all the possible
events Planetmint manages.
"""
# If you add a new Event Type, make sure to add it
# to the docs in docs/server/source/event-plugin-api.rst
ALL = ~0
BLOCK_VALID = 1
BLOCK_INVALID = 2
# NEW_EVENT = 4
# NEW_EVENT = 8
# NEW_EVENT = 16...
class Event:
"""An Event."""
def __init__(self, event_type, event_data):
"""Creates a new event.
Args:
event_type (int): the type of the event, see
:class:`~planetmint.events.EventTypes`
event_data (obj): the data of the event.
"""
self.type = event_type
self.data = event_data
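
Because the event types are distinct powers of two and ALL is ~0 (every bit set), a subscriber's event mask can be tested with a bitwise AND; a minimal sketch:

    # Demo of the bitmask scheme used by EventTypes (same values as above).
    ALL = ~0
    BLOCK_VALID = 1
    BLOCK_INVALID = 2

    subscribed = BLOCK_VALID                  # listen to valid blocks only
    assert subscribed & BLOCK_VALID           # matched
    assert not subscribed & BLOCK_INVALID     # filtered out
    assert ALL & BLOCK_VALID and ALL & BLOCK_INVALID   # ALL matches everything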

View File

@ -1,45 +1,11 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from queue import Empty
from collections import defaultdict
import multiprocessing
import logging

from planetmint.ipc.events import EventTypes, POISON_PILL

logger = logging.getLogger(__name__)

(The POISON_PILL constant and the EventTypes and Event classes formerly defined here were removed; they now live in planetmint/ipc/events.py, shown above.)

class Exchange:
@ -100,10 +66,14 @@ class Exchange:
    def run(self):
        """Start the exchange"""
        self.started_queue.put("STARTED")
        try:
            while True:
                event = self.publisher_queue.get()
                if event == POISON_PILL:
                    return
                else:
                    self.dispatch(event)
        except KeyboardInterrupt:
            return
        except Exception as e:
            logger.debug(f"Exchange Exception: {e}")
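
A usage sketch of the exchange's publish/subscribe cycle; get_subscriber_queue() is not shown in this hunk and is assumed here from the rest of the codebase:

    # Hypothetical wiring; subscriber registration must happen before run().
    from planetmint.ipc.events import Event, EventTypes, POISON_PILL
    from planetmint.ipc.exchange import Exchange
    from planetmint.utils.processes import Process

    exchange = Exchange()
    subscriber = exchange.get_subscriber_queue(EventTypes.BLOCK_VALID)  # assumed API
    publisher = exchange.get_publisher_queue()

    Process(name="exchange", target=exchange.run, daemon=True).start()

    publisher.put(Event(EventTypes.BLOCK_VALID, {"height": 1}))
    print(subscriber.get().data)   # {'height': 1}
    publisher.put(POISON_PILL)     # terminates the run() loop above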

View File

@ -1,968 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
"""Module containing main contact points with Tendermint and
MongoDB.
"""
import logging
from planetmint.backend.connection import Connection
import json
import rapidjson
import requests
from itertools import chain
from collections import namedtuple, OrderedDict
from uuid import uuid4
from hashlib import sha3_256
from transactions import Transaction, Vote
from transactions.common.crypto import public_key_from_ed25519_key
from transactions.common.exceptions import (
SchemaValidationError,
ValidationError,
DuplicateTransaction,
InvalidSignature,
DoubleSpend,
InputDoesNotExist,
AssetIdMismatch,
AmountError,
MultipleInputsError,
InvalidProposer,
UnequalValidatorSet,
InvalidPowerChange,
)
from transactions.common.transaction import VALIDATOR_ELECTION, CHAIN_MIGRATION_ELECTION
from transactions.common.transaction_mode_types import (
BROADCAST_TX_COMMIT,
BROADCAST_TX_ASYNC,
BROADCAST_TX_SYNC,
)
from transactions.common.output import Output as TransactionOutput
from transactions.types.elections.election import Election
from transactions.types.elections.validator_utils import election_id_to_public_key
from planetmint.backend.models import Output, DbTransaction
from planetmint.backend.tarantool.const import (
TARANT_TABLE_GOVERNANCE,
TARANT_TABLE_TRANSACTION,
)
from planetmint.config import Config
from planetmint import backend, config_utils, fastquery
from planetmint.tendermint_utils import (
encode_transaction,
merkleroot,
key_from_base64,
public_key_to_base64,
encode_validator,
new_validator_set,
)
from planetmint import exceptions as core_exceptions
from planetmint.validation import BaseValidationRules
from planetmint.backend.interfaces import Asset, MetaData
from planetmint.const import GOVERNANCE_TRANSACTION_TYPES
logger = logging.getLogger(__name__)
class Planetmint(object):
"""Planetmint API
Create, read, sign, write transactions to the database
"""
def __init__(self, connection=None):
"""Initialize the Planetmint instance
A Planetmint instance has several configuration parameters (e.g. host).
If a parameter value is passed as an argument to the Planetmint
__init__ method, then that is the value it will have.
Otherwise, the parameter value will come from an environment variable.
If that environment variable isn't set, then the value
will come from the local configuration file. And if that variable
isn't in the local configuration file, then the parameter will have
its default value (defined in planetmint.__init__).
Args:
connection (:class:`~planetmint.backend.connection.Connection`):
A connection to the database.
"""
config_utils.autoconfigure()
self.mode_commit = BROADCAST_TX_COMMIT
self.mode_list = (BROADCAST_TX_ASYNC, BROADCAST_TX_SYNC, self.mode_commit)
self.tendermint_host = Config().get()["tendermint"]["host"]
self.tendermint_port = Config().get()["tendermint"]["port"]
self.endpoint = "http://{}:{}/".format(self.tendermint_host, self.tendermint_port)
validationPlugin = Config().get().get("validation_plugin")
if validationPlugin:
self.validation = config_utils.load_validation_plugin(validationPlugin)
else:
self.validation = BaseValidationRules
self.connection = connection if connection is not None else Connection()
def post_transaction(self, transaction, mode):
"""Submit a valid transaction to the mempool."""
if not mode or mode not in self.mode_list:
raise ValidationError("Mode must be one of the following {}.".format(", ".join(self.mode_list)))
tx_dict = transaction.tx_dict if transaction.tx_dict else transaction.to_dict()
payload = {
"method": mode,
"jsonrpc": "2.0",
"params": [encode_transaction(tx_dict)],
"id": str(uuid4()),
}
# TODO: handle connection errors!
return requests.post(self.endpoint, json=payload)
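
The POST above is a plain Tendermint JSON-RPC call; a standalone sketch of the equivalent request (host, port and the base64 transaction bytes are illustrative):

    import requests
    from uuid import uuid4

    # encode_transaction() base64-encodes the JSON-serialized transaction;
    # the literal below merely stands in for such a payload.
    payload = {
        "method": "broadcast_tx_sync",   # one of the three broadcast modes
        "jsonrpc": "2.0",
        "params": ["eyJpZCI6ICIuLi4ifQ=="],
        "id": str(uuid4()),
    }
    response = requests.post("http://localhost:26657/", json=payload)
    print(response.json())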
def write_transaction(self, transaction, mode):
# This method offers backward compatibility with the Web API.
"""Submit a valid transaction to the mempool."""
response = self.post_transaction(transaction, mode)
return self._process_post_response(response.json(), mode)
def _process_post_response(self, response, mode):
logger.debug(response)
error = response.get("error")
if error:
status_code = 500
message = error.get("message", "Internal Error")
data = error.get("data", "")
if "Tx already exists in cache" in data:
status_code = 400
return (status_code, message + " - " + data)
result = response["result"]
if mode == self.mode_commit:
check_tx_code = result.get("check_tx", {}).get("code", 0)
deliver_tx_code = result.get("deliver_tx", {}).get("code", 0)
error_code = check_tx_code or deliver_tx_code
else:
error_code = result.get("code", 0)
if error_code:
return (500, "Transaction validation failed")
return (202, "")
def store_bulk_transactions(self, transactions):
txns = []
gov_txns = []
for t in transactions:
transaction = t.tx_dict if t.tx_dict else rapidjson.loads(rapidjson.dumps(t.to_dict()))
if transaction["operation"] in GOVERNANCE_TRANSACTION_TYPES:
gov_txns.append(transaction)
else:
txns.append(transaction)
backend.query.store_transactions(self.connection, txns, TARANT_TABLE_TRANSACTION)
backend.query.store_transactions(self.connection, gov_txns, TARANT_TABLE_GOVERNANCE)
def delete_transactions(self, txs):
return backend.query.delete_transactions(self.connection, txs)
def update_utxoset(self, transaction):
self.updated__ = """Update the UTXO set given ``transaction``. That is, remove
the outputs that the given ``transaction`` spends, and add the
outputs that the given ``transaction`` creates.
Args:
transaction (:obj:`~planetmint.models.Transaction`): A new
transaction incoming into the system for which the UTXO
set needs to be updated.
"""
spent_outputs = [spent_output for spent_output in transaction.spent_outputs]
if spent_outputs:
self.delete_unspent_outputs(*spent_outputs)
self.store_unspent_outputs(*[utxo._asdict() for utxo in transaction.unspent_outputs])
def store_unspent_outputs(self, *unspent_outputs):
"""Store the given ``unspent_outputs`` (utxos).
Args:
*unspent_outputs (:obj:`tuple` of :obj:`dict`): Variable
length tuple or list of unspent outputs.
"""
if unspent_outputs:
return backend.query.store_unspent_outputs(self.connection, *unspent_outputs)
def get_utxoset_merkle_root(self):
"""Returns the merkle root of the utxoset. This implies that
the utxoset is first put into a merkle tree.
For now, the merkle tree and its root will be computed each
time. This obviously is not efficient and a better approach
that limits the repetition of the same computation when
unnecessary should be sought. For instance, future optimizations
could simply re-compute the branches of the tree that were
affected by a change.
The transaction hash (id) and output index should be sufficient
to uniquely identify a utxo, and consequently only that
information from a utxo record is needed to compute the merkle
root. Hence, each node of the merkle tree should contain the
tuple (txid, output_index).
.. important:: The leaves of the tree will need to be sorted in
some kind of lexicographical order.
Returns:
str: Merkle root in hexadecimal form.
"""
utxoset = backend.query.get_unspent_outputs(self.connection)
# TODO Once ready, use the already pre-computed utxo_hash field.
# See common/transactions.py for details.
hashes = [
sha3_256("{}{}".format(utxo["transaction_id"], utxo["output_index"]).encode()).digest() for utxo in utxoset
]
# TODO Notice the sorted call!
return merkleroot(sorted(hashes))
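
A self-contained sketch of the computation the docstring describes; the pairwise-SHA3 reduction below stands in for the merkleroot() imported from planetmint.tendermint_utils, whose exact tree construction may differ:

    from hashlib import sha3_256

    def merkleroot_sketch(hashes):   # illustrative stand-in, not the real merkleroot()
        if not hashes:
            return sha3_256(b"").hexdigest()
        while len(hashes) > 1:
            if len(hashes) % 2:
                hashes.append(hashes[-1])   # duplicate the last leaf on odd levels
            hashes = [sha3_256(a + b).digest() for a, b in zip(hashes[::2], hashes[1::2])]
        return hashes[0].hex()

    utxoset = [
        {"transaction_id": "a" * 64, "output_index": 0},
        {"transaction_id": "b" * 64, "output_index": 1},
    ]
    leaves = [
        sha3_256("{}{}".format(u["transaction_id"], u["output_index"]).encode()).digest()
        for u in utxoset
    ]
    print(merkleroot_sketch(sorted(leaves)))   # leaves sorted, as required above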
def get_unspent_outputs(self):
"""Get the utxoset.
Returns:
generator of unspent_outputs.
"""
cursor = backend.query.get_unspent_outputs(self.connection)
return (record for record in cursor)
def delete_unspent_outputs(self, *unspent_outputs):
"""Deletes the given ``unspent_outputs`` (utxos).
Args:
*unspent_outputs (:obj:`tuple` of :obj:`dict`): Variable
length tuple or list of unspent outputs.
"""
if unspent_outputs:
return backend.query.delete_unspent_outputs(self.connection, *unspent_outputs)
def is_committed(self, transaction_id):
transaction = backend.query.get_transaction_single(self.connection, transaction_id)
return bool(transaction)
def get_transaction(self, transaction_id):
return backend.query.get_transaction_single(self.connection, transaction_id)
def get_transactions(self, txn_ids):
return backend.query.get_transactions(self.connection, txn_ids)
def get_transactions_filtered(self, asset_ids, operation=None, last_tx=False):
"""Get a list of transactions filtered on some criteria"""
txids = backend.query.get_txids_filtered(self.connection, asset_ids, operation, last_tx)
for txid in txids:
yield self.get_transaction(txid)
def get_outputs_by_tx_id(self, txid):
return backend.query.get_outputs_by_tx_id(self.connection, txid)
def get_outputs_filtered(self, owner, spent=None):
"""Get a list of output links filtered on some criteria
Args:
owner (str): base58 encoded public_key.
spent (bool): If ``True`` return only the spent outputs. If
``False`` return only unspent outputs. If spent is
not specified (``None``) return all outputs.
Returns:
:obj:`list` of TransactionLink: list of ``txid`` s and ``output`` s
pointing to another transaction's condition
"""
outputs = self.fastquery.get_outputs_by_public_key(owner)
if spent is None:
return outputs
elif spent is True:
return self.fastquery.filter_unspent_outputs(outputs)
elif spent is False:
return self.fastquery.filter_spent_outputs(outputs)
def get_spent(self, txid, output, current_transactions=[]):
transactions = backend.query.get_spent(self.connection, txid, output)
current_spent_transactions = []
for ctxn in current_transactions:
for ctxn_input in ctxn.inputs:
if ctxn_input.fulfills and ctxn_input.fulfills.txid == txid and ctxn_input.fulfills.output == output:
current_spent_transactions.append(ctxn)
transaction = None
if len(transactions) + len(current_spent_transactions) > 1:
raise DoubleSpend('tx "{}" spends inputs twice'.format(txid))
elif transactions:
tx_id = transactions[0].id
tx = backend.query.get_transaction_single(self.connection, tx_id)
transaction = tx.to_dict()
elif current_spent_transactions:
transaction = current_spent_transactions[0]
return transaction
def store_block(self, block):
"""Create a new block."""
return backend.query.store_block(self.connection, block)
def get_latest_block(self) -> dict:
"""Get the block with largest height."""
return backend.query.get_latest_block(self.connection)
def get_block(self, block_id) -> dict:
"""Get the block with the specified `block_id`.
Returns the block corresponding to `block_id` or None if no match is
found.
Args:
block_id (int): block id of the block to get.
"""
block = backend.query.get_block(self.connection, block_id)
latest_block = self.get_latest_block()
latest_block_height = latest_block["height"] if latest_block else 0
if not block and block_id > latest_block_height:
return
return block
def get_block_containing_tx(self, txid):
"""Retrieve the list of blocks (block ids) containing a
transaction with transaction id `txid`
Args:
txid (str): transaction id of the transaction to query
Returns:
Block id list (list(int))
"""
block = backend.query.get_block_with_transaction(self.connection, txid)
return block
def validate_transaction(self, transaction, current_transactions=[]):
"""Validate a transaction against the current status of the database."""
# CLEANUP: The conditional below checks for transaction in dict format.
# It would be better to only have a single format for the transaction
# throughout the code base.
if isinstance(transaction, dict):
try:
transaction = Transaction.from_dict(transaction, False)
except SchemaValidationError as e:
logger.warning("Invalid transaction schema: %s", e.__cause__.message)
return False
except ValidationError as e:
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
if transaction.operation == Transaction.CREATE:
self.validate_create_inputs(transaction, current_transactions)
elif transaction.operation in [Transaction.TRANSFER, Transaction.VOTE]:
self.validate_transfer_inputs(transaction, current_transactions)
elif transaction.operation in [Transaction.COMPOSE]:
self.validate_compose_inputs(transaction, current_transactions)
return transaction
def validate_create_inputs(self, tx, current_transactions=[]) -> bool:
duplicates = any(txn for txn in current_transactions if txn.id == tx.id)
if self.is_committed(tx.id) or duplicates:
raise DuplicateTransaction("transaction `{}` already exists".format(tx.id))
fulfilling_inputs = [i for i in tx.inputs if i.fulfills is not None and i.fulfills.txid is not None]
if len(fulfilling_inputs) > 0:
input_txs, input_conditions = self.get_input_txs_and_conditions(fulfilling_inputs, current_transactions)
create_asset = tx.assets[0]
input_asset = input_txs[0].assets[tx.inputs[0].fulfills.output]["data"]
if create_asset != input_asset:
raise ValidationError("CREATE must have matching asset description with input transaction")
if input_txs[0].operation != Transaction.DECOMPOSE:
raise SchemaValidationError("CREATE can only consume DECOMPOSE outputs")
return True
def validate_transfer_inputs(self, tx, current_transactions=[]) -> bool:
input_txs, input_conditions = self.get_input_txs_and_conditions(tx.inputs, current_transactions)
self.validate_input_conditions(tx, input_conditions)
self.validate_asset_id(tx, input_txs)
self.validate_inputs_distinct(tx)
input_amount = sum([input_condition.amount for input_condition in input_conditions])
output_amount = sum([output_condition.amount for output_condition in tx.outputs])
if output_amount != input_amount:
raise AmountError(
(
"The amount used in the inputs `{}`" " needs to be same as the amount used" " in the outputs `{}`"
).format(input_amount, output_amount)
)
return True
def validate_compose_inputs(self, tx, current_transactions=[]) -> bool:
input_txs, input_conditions = self.get_input_txs_and_conditions(tx.inputs, current_transactions)
self.validate_input_conditions(tx, input_conditions)
self.validate_asset_id(tx, input_txs)
self.validate_inputs_distinct(tx)
return True
def get_input_txs_and_conditions(self, inputs, current_transactions=[]):
# store the inputs so that we can check if the asset ids match
input_txs = []
input_conditions = []
for input_ in inputs:
input_txid = input_.fulfills.txid
input_tx = self.get_transaction(input_txid)
_output = self.get_outputs_by_tx_id(input_txid)
if input_tx is None:
for ctxn in current_transactions:
if ctxn.id == input_txid:
ctxn_dict = ctxn.to_dict()
input_tx = DbTransaction.from_dict(ctxn_dict)
_output = [
Output.from_dict(output, index, ctxn.id)
for index, output in enumerate(ctxn_dict["outputs"])
]
if input_tx is None:
raise InputDoesNotExist("input `{}` doesn't exist".format(input_txid))
spent = self.get_spent(input_txid, input_.fulfills.output, current_transactions)
if spent:
raise DoubleSpend("input `{}` was already spent".format(input_txid))
output = _output[input_.fulfills.output]
input_conditions.append(output)
tx_dict = input_tx.to_dict()
tx_dict["outputs"] = Output.list_to_dict(_output)
tx_dict = DbTransaction.remove_generated_fields(tx_dict)
pm_transaction = Transaction.from_dict(tx_dict, False)
input_txs.append(pm_transaction)
return (input_txs, input_conditions)
def validate_input_conditions(self, tx, input_conditions):
# convert planetmint.Output objects to transactions.common.Output objects
input_conditions_dict = Output.list_to_dict(input_conditions)
input_conditions_converted = []
for input_cond in input_conditions_dict:
input_conditions_converted.append(TransactionOutput.from_dict(input_cond))
if not tx.inputs_valid(input_conditions_converted):
raise InvalidSignature("Transaction signature is invalid.")
def validate_asset_id(self, tx: Transaction, input_txs: list):
# validate asset
if tx.operation != Transaction.COMPOSE:
asset_id = tx.get_asset_id(input_txs)
if asset_id != Transaction.read_out_asset_id(tx):
raise AssetIdMismatch(
("The asset id of the input does not" " match the asset id of the" " transaction")
)
else:
asset_ids = Transaction.get_asset_ids(input_txs)
if Transaction.read_out_asset_id(tx) in asset_ids:
raise AssetIdMismatch(("The asset ID of the compose must be different to all of its input asset IDs"))
def validate_inputs_distinct(self, tx):
# Validate that all inputs are distinct
links = [i.fulfills.to_uri() for i in tx.inputs]
if len(links) != len(set(links)):
raise DoubleSpend('tx "{}" spends inputs twice'.format(tx.id))
def is_valid_transaction(self, tx, current_transactions=[]):
# NOTE: the function returns the Transaction object in case
# the transaction is valid
try:
return self.validate_transaction(tx, current_transactions)
except ValidationError as e:
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
def text_search(self, search, *, limit=0, table="assets"):
"""Return an iterator of assets that match the text search
Args:
search (str): Text search string to query the text index
limit (int, optional): Limit the number of returned documents.
Returns:
iter: An iterator of assets that match the text search.
"""
return backend.query.text_search(self.connection, search, limit=limit, table=table)
def get_assets(self, asset_ids) -> list[Asset]:
"""Return a list of assets that match the asset_ids
Args:
asset_ids (:obj:`list` of :obj:`str`): A list of asset_ids to
retrieve from the database.
Returns:
list: The list of assets returned from the database.
"""
return backend.query.get_assets(self.connection, asset_ids)
def get_assets_by_cid(self, asset_cid, **kwargs) -> list[dict]:
asset_txs = backend.query.get_transactions_by_asset(self.connection, asset_cid, **kwargs)
# flatten and return all found assets
return list(chain.from_iterable([Asset.list_to_dict(tx.assets) for tx in asset_txs]))
def get_metadata(self, txn_ids) -> list[MetaData]:
"""Return a list of metadata that match the transaction ids (txn_ids)
Args:
txn_ids (:obj:`list` of :obj:`str`): A list of txn_ids to
retrieve from the database.
Returns:
list: The list of metadata returned from the database.
"""
return backend.query.get_metadata(self.connection, txn_ids)
def get_metadata_by_cid(self, metadata_cid, **kwargs) -> list[str]:
metadata_txs = backend.query.get_transactions_by_metadata(self.connection, metadata_cid, **kwargs)
return [tx.metadata.metadata for tx in metadata_txs]
@property
def fastquery(self):
return fastquery.FastQuery(self.connection)
def get_validator_set(self, height=None):
return backend.query.get_validator_set(self.connection, height)
def get_validators(self, height=None):
result = self.get_validator_set(height)
return [] if result is None else result["validators"]
def get_election(self, election_id):
return backend.query.get_election(self.connection, election_id)
def get_pre_commit_state(self):
return backend.query.get_pre_commit_state(self.connection)
def store_pre_commit_state(self, state):
return backend.query.store_pre_commit_state(self.connection, state)
def store_validator_set(self, height, validators):
"""Store validator set at a given `height`.
NOTE: If the validator set already exists at that `height` then an
exception will be raised.
"""
return backend.query.store_validator_set(self.connection, {"height": height, "validators": validators})
def delete_validator_set(self, height):
return backend.query.delete_validator_set(self.connection, height)
def store_abci_chain(self, height, chain_id, is_synced=True):
return backend.query.store_abci_chain(self.connection, height, chain_id, is_synced)
def delete_abci_chain(self, height):
return backend.query.delete_abci_chain(self.connection, height)
def get_latest_abci_chain(self):
return backend.query.get_latest_abci_chain(self.connection)
def migrate_abci_chain(self):
"""Generate and record a new ABCI chain ID. New blocks are not
accepted until we receive an InitChain ABCI request with
the matching chain ID and validator set.
Chain ID is generated based on the current chain and height.
`chain-X` => `chain-X-migrated-at-height-5`.
`chain-X-migrated-at-height-5` => `chain-X-migrated-at-height-21`.
If there is no known chain (we are at genesis), the function returns.
"""
latest_chain = self.get_latest_abci_chain()
if latest_chain is None:
return
block = self.get_latest_block()
suffix = "-migrated-at-height-"
chain_id = latest_chain["chain_id"]
block_height_str = str(block["height"])
new_chain_id = chain_id.split(suffix)[0] + suffix + block_height_str
self.store_abci_chain(block["height"] + 1, new_chain_id, False)
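
The chain-ID derivation is plain string manipulation; for example:

    suffix = "-migrated-at-height-"

    def next_chain_id(chain_id, height):
        # strip any previous migration suffix, then append the new height
        return chain_id.split(suffix)[0] + suffix + str(height)

    print(next_chain_id("chain-X", 5))                        # chain-X-migrated-at-height-5
    print(next_chain_id("chain-X-migrated-at-height-5", 21))  # chain-X-migrated-at-height-21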
def store_election(self, election_id, height, is_concluded):
return backend.query.store_election(self.connection, election_id, height, is_concluded)
def store_elections(self, elections):
return backend.query.store_elections(self.connection, elections)
def delete_elections(self, height):
return backend.query.delete_elections(self.connection, height)
# NOTE: moved here from Election needs to be placed somewhere else
def get_validators_dict(self, height=None):
"""Return a dictionary of validators with key as `public_key` and
value as the `voting_power`
"""
validators = {}
for validator in self.get_validators(height):
# NOTE: we assume that Tendermint encodes public key in base64
public_key = public_key_from_ed25519_key(key_from_base64(validator["public_key"]["value"]))
validators[public_key] = validator["voting_power"]
return validators
def validate_election(self, transaction, current_transactions=[]): # TODO: move somewhere else
"""Validate election transaction
NOTE:
* A valid election is initiated by an existing validator.
* A valid election is one where voters are validators and votes are
allocated according to the voting power of each validator node.
Args:
:param planet: (Planetmint) an instantiated planetmint.lib.Planetmint object.
:param current_transactions: (list) A list of transactions to be validated along with the election
Returns:
Election: a Election object or an object of the derived Election subclass.
Raises:
ValidationError: If the election is invalid
"""
duplicates = any(txn for txn in current_transactions if txn.id == transaction.id)
if self.is_committed(transaction.id) or duplicates:
raise DuplicateTransaction("transaction `{}` already exists".format(transaction.id))
current_validators = self.get_validators_dict()
# NOTE: Proposer should be a single node
if len(transaction.inputs) != 1 or len(transaction.inputs[0].owners_before) != 1:
raise MultipleInputsError("`tx_signers` must be a list instance of length one")
# NOTE: Check if the proposer is a validator.
[election_initiator_node_pub_key] = transaction.inputs[0].owners_before
if election_initiator_node_pub_key not in current_validators.keys():
raise InvalidProposer("Public key is not a part of the validator set")
# NOTE: Check if all validators have been assigned votes equal to their voting power
if not self.is_same_topology(current_validators, transaction.outputs):
raise UnequalValidatorSet("Validator set must be exactly the same as the outputs of the election")
if transaction.operation == VALIDATOR_ELECTION:
self.validate_validator_election(transaction)
return transaction
def validate_validator_election(self, transaction): # TODO: move somewhere else
"""For more details refer BEP-21: https://github.com/planetmint/BEPs/tree/master/21"""
current_validators = self.get_validators_dict()
# NOTE: change more than 1/3 of the current power is not allowed
if transaction.assets[0]["data"]["power"] >= (1 / 3) * sum(current_validators.values()):
raise InvalidPowerChange("`power` change must be less than 1/3 of total power")
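
Concretely, with three validators of power 10 each (total 30) the threshold is (1/3) * 30 = 10, and since the comparison is ">=", a proposed power of 10 is already rejected:

    current_validators = {"pk_a": 10, "pk_b": 10, "pk_c": 10}
    proposed_power = 10
    threshold = (1 / 3) * sum(current_validators.values())   # 10.0
    assert proposed_power >= threshold   # validate_validator_election would raise InvalidPowerChange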
def get_election_status(self, transaction):
election = self.get_election(transaction.id)
if election and election["is_concluded"]:
return Election.CONCLUDED
return Election.INCONCLUSIVE if self.has_validator_set_changed(transaction) else Election.ONGOING
def has_validator_set_changed(self, transaction): # TODO: move somewhere else
latest_change = self.get_validator_change()
if latest_change is None:
return False
latest_change_height = latest_change["height"]
election = self.get_election(transaction.id)
return latest_change_height > election["height"]
def get_validator_change(self): # TODO: move somewhere else
"""Return the validator set from the most recent approved block
:return: {
'height': <block_height>,
'validators': <validator_set>
}
"""
latest_block = self.get_latest_block()
if latest_block is None:
return None
return self.get_validator_set(latest_block["height"])
def get_validator_dict(self, height=None):
"""Return a dictionary of validators with key as `public_key` and
value as the `voting_power`
"""
validators = {}
for validator in self.get_validators(height):
# NOTE: we assume that Tendermint encodes public key in base64
public_key = public_key_from_ed25519_key(key_from_base64(validator["public_key"]["value"]))
validators[public_key] = validator["voting_power"]
return validators
def get_recipients_list(self):
"""Convert validator dictionary to a recipient list for `Transaction`"""
recipients = []
for public_key, voting_power in self.get_validator_dict().items():
recipients.append(([public_key], voting_power))
return recipients
def show_election_status(self, transaction):
data = transaction.assets[0]
data = data.to_dict()["data"]
if "public_key" in data.keys():
data["public_key"] = public_key_to_base64(data["public_key"]["value"])
response = ""
for k, v in data.items():
if k != "seed":
response += f"{k}={v}\n"
response += f"status={self.get_election_status(transaction)}"
if transaction.operation == CHAIN_MIGRATION_ELECTION:
response = self.append_chain_migration_status(response)
return response
def append_chain_migration_status(self, status):
chain = self.get_latest_abci_chain()
if chain is None or chain["is_synced"]:
return status
status += f'\nchain_id={chain["chain_id"]}'
block = self.get_latest_block()
status += f'\napp_hash={block["app_hash"]}'
validators = [
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": k,
},
"power": v,
}
for k, v in self.get_validator_dict().items()
]
status += f"\nvalidators={json.dumps(validators, indent=4)}"
return status
def is_same_topology(cls, current_topology, election_topology):
voters = {}
for voter in election_topology:
if len(voter.public_keys) > 1:
return False
[public_key] = voter.public_keys
voting_power = voter.amount
voters[public_key] = voting_power
# Check whether the voters and their votes is same to that of the
# validators and their voting power in the network
return current_topology == voters
def count_votes(self, election_pk, transactions):
votes = 0
for txn in transactions:
if txn.operation == Vote.OPERATION:
for output in txn.outputs:
# NOTE: We enforce that a valid vote to election id will have only
# election_pk in the output public keys, including any other public key
# along with election_pk will lead to vote being not considered valid.
if len(output.public_keys) == 1 and [election_pk] == output.public_keys:
votes = votes + output.amount
return votes
def get_commited_votes(self, transaction, election_pk=None): # TODO: move somewhere else
if election_pk is None:
election_pk = election_id_to_public_key(transaction.id)
txns = backend.query.get_asset_tokens_for_public_key(self.connection, transaction.id, election_pk)
return self.count_votes(election_pk, txns)
def _get_initiated_elections(self, height, txns): # TODO: move somewhere else
elections = []
for tx in txns:
if not isinstance(tx, Election):
continue
elections.append({"election_id": tx.id, "height": height, "is_concluded": False})
return elections
def _get_votes(self, txns): # TODO: move somewhere else
elections = OrderedDict()
for tx in txns:
if not isinstance(tx, Vote):
continue
election_id = tx.assets[0]["id"]
if election_id not in elections:
elections[election_id] = []
elections[election_id].append(tx)
return elections
def process_block(self, new_height, txns): # TODO: move somewhere else
"""Looks for election and vote transactions inside the block, records
and processes elections.
Every election is recorded in the database.
Every vote has a chance to conclude the corresponding election. When
an election is concluded, the corresponding database record is
marked as such.
Elections and votes are processed in the order in which they
appear in the block. Elections are concluded in the order of
appearance of their first votes in the block.
For every election concluded in the block, calls its `on_approval`
method. The returned value of the last `on_approval`, if any,
is a validator set update to be applied in one of the following blocks.
`on_approval` methods are implemented by elections of particular type.
The method may contain side effects but should be idempotent. To account
for other concluded elections, if it requires so, the method should
rely on the database state.
"""
# elections initiated in this block
initiated_elections = self._get_initiated_elections(new_height, txns)
if initiated_elections:
self.store_elections(initiated_elections)
# elections voted for in this block and their votes
elections = self._get_votes(txns)
validator_update = None
for election_id, votes in elections.items():
election = self.get_transaction(election_id)
if election is None:
continue
if not self.has_election_concluded(election, votes):
continue
validator_update = self.approve_election(election, new_height)
self.store_election(election.id, new_height, is_concluded=True)
return [validator_update] if validator_update else []
def has_election_concluded(self, transaction, current_votes=[]): # TODO: move somewhere else
"""Check if the election can be concluded or not.
* Elections can only be concluded if the validator set has not changed
since the election was initiated.
* Elections can be concluded only if the current votes form a supermajority.
Custom elections may override this function and introduce additional checks.
"""
if self.has_validator_set_changed(transaction):
return False
if transaction.operation == VALIDATOR_ELECTION:
if not self.has_validator_election_concluded():
return False
if transaction.operation == CHAIN_MIGRATION_ELECTION:
if not self.has_chain_migration_concluded():
return False
election_pk = election_id_to_public_key(transaction.id)
votes_committed = self.get_commited_votes(transaction, election_pk)
votes_current = self.count_votes(election_pk, current_votes)
total_votes = sum(int(output.amount) for output in transaction.outputs)
if (votes_committed < (2 / 3) * total_votes) and (votes_committed + votes_current >= (2 / 3) * total_votes):
return True
return False
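
A worked example of the conclusion condition: the election concludes exactly in the block whose votes push the committed total across the 2/3 supermajority:

    total_votes = 100                     # sum of the election's output amounts
    votes_committed = 60                  # votes from earlier blocks
    votes_current = 7                     # votes arriving in this block
    threshold = (2 / 3) * total_votes     # 66.67
    concluded = votes_committed < threshold and votes_committed + votes_current >= threshold
    print(concluded)   # True: this block tips the election over the supermajority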
def has_validator_election_concluded(self): # TODO: move somewhere else
latest_block = self.get_latest_block()
if latest_block is not None:
latest_block_height = latest_block["height"]
latest_validator_change = self.get_validator_set()["height"]
# TODO change to `latest_block_height + 3` when upgrading to Tendermint 0.24.0.
if latest_validator_change == latest_block_height + 2:
# do not conclude the election if there is a change assigned already
return False
return True
def has_chain_migration_concluded(self): # TODO: move somewhere else
chain = self.get_latest_abci_chain()
if chain is not None and not chain["is_synced"]:
# do not conclude the migration election if
# there is another migration in progress
return False
return True
def rollback_election(self, new_height, txn_ids): # TODO: move somewhere else
"""Looks for election and vote transactions inside the block and
cleans up the database artifacts possibly created in `process_blocks`.
Part of the `end_block`/`commit` crash recovery.
"""
# delete election records for elections initiated at this height and
# elections concluded at this height
self.delete_elections(new_height)
txns = [self.get_transaction(tx_id) for tx_id in txn_ids]
txns = [Transaction.from_dict(tx.to_dict()) for tx in txns]
elections = self._get_votes(txns)
for election_id in elections:
election = self.get_transaction(election_id)
if election.operation == VALIDATOR_ELECTION:
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
self.delete_validator_set(new_height + 1)
if election.operation == CHAIN_MIGRATION_ELECTION:
self.delete_abci_chain(new_height)
def approve_election(self, election, new_height):
"""Override to update the database state according to the
election rules. Consider the current database state to account for
other concluded elections, if required.
"""
if election.operation == CHAIN_MIGRATION_ELECTION:
self.migrate_abci_chain()
if election.operation == VALIDATOR_ELECTION:
validator_updates = [election.assets[0].data]
curr_validator_set = self.get_validators(new_height)
updated_validator_set = new_validator_set(curr_validator_set, validator_updates)
updated_validator_set = [v for v in updated_validator_set if v["voting_power"] > 0]
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
self.store_validator_set(new_height + 1, updated_validator_set)
return encode_validator(election.assets[0].data)
Block = namedtuple("Block", ("app_hash", "height", "transactions"))

View File

@ -1,75 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from transactions.common.exceptions import ConfigurationError
from logging.config import dictConfig as set_logging_config
from planetmint.config import Config, DEFAULT_LOGGING_CONFIG
def _normalize_log_level(level):
try:
return level.upper()
except AttributeError as exc:
raise ConfigurationError("Log level must be a string!") from exc
def setup_logging():
"""Function to configure log hadlers.
.. important::
Configuration, if needed, should be applied before invoking this
decorator, as starting the subscriber process for logging will
configure the root logger for the child process based on the
state of :obj:`planetmint.config` at the moment this decorator
is invoked.
"""
logging_configs = DEFAULT_LOGGING_CONFIG
new_logging_configs = Config().get()["log"]
if "file" in new_logging_configs:
filename = new_logging_configs["file"]
logging_configs["handlers"]["file"]["filename"] = filename
if "error_file" in new_logging_configs:
error_filename = new_logging_configs["error_file"]
logging_configs["handlers"]["errors"]["filename"] = error_filename
if "level_console" in new_logging_configs:
level = _normalize_log_level(new_logging_configs["level_console"])
logging_configs["handlers"]["console"]["level"] = level
if "level_logfile" in new_logging_configs:
level = _normalize_log_level(new_logging_configs["level_logfile"])
logging_configs["handlers"]["file"]["level"] = level
if "fmt_console" in new_logging_configs:
fmt = new_logging_configs["fmt_console"]
logging_configs["formatters"]["console"]["format"] = fmt
if "fmt_logfile" in new_logging_configs:
fmt = new_logging_configs["fmt_logfile"]
logging_configs["formatters"]["file"]["format"] = fmt
if "datefmt_console" in new_logging_configs:
fmt = new_logging_configs["datefmt_console"]
logging_configs["formatters"]["console"]["datefmt"] = fmt
if "datefmt_logfile" in new_logging_configs:
fmt = new_logging_configs["datefmt_logfile"]
logging_configs["formatters"]["file"]["datefmt"] = fmt
log_levels = new_logging_configs.get("granular_levels", {})
for logger_name, level in log_levels.items():
level = _normalize_log_level(level)
try:
logging_configs["loggers"][logger_name]["level"] = level
except KeyError:
logging_configs["loggers"][logger_name] = {"level": level}
set_logging_config(logging_configs)

View File

@ -0,0 +1,348 @@
import rapidjson
from itertools import chain
from hashlib import sha3_256
from transactions import Transaction
from transactions.common.exceptions import DoubleSpend
from transactions.common.crypto import public_key_from_ed25519_key
from transactions.common.exceptions import InputDoesNotExist
from planetmint import config_utils, backend
from planetmint.const import GOVERNANCE_TRANSACTION_TYPES
from planetmint.abci.utils import key_from_base64, merkleroot
from planetmint.backend.connection import Connection
from planetmint.backend.tarantool.const import (
TARANT_TABLE_TRANSACTION,
TARANT_TABLE_GOVERNANCE,
TARANT_TABLE_UTXOS,
TARANT_TABLE_OUTPUT,
)
from planetmint.backend.models.block import Block
from planetmint.backend.models.output import Output
from planetmint.backend.models.asset import Asset
from planetmint.backend.models.metadata import MetaData
from planetmint.backend.models.dbtransaction import DbTransaction
from planetmint.utils.singleton import Singleton
class DataAccessor(metaclass=Singleton):
def __init__(self, database_connection=None):
config_utils.autoconfigure()
self.connection = database_connection if database_connection is not None else Connection()
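
The Singleton metaclass is imported from planetmint.utils.singleton and not shown in this diff; a minimal sketch of the conventional pattern it presumably implements:

    class Singleton(type):
        """Metaclass returning the same instance for every instantiation."""
        _instances = {}

        def __call__(cls, *args, **kwargs):
            if cls not in cls._instances:
                cls._instances[cls] = super().__call__(*args, **kwargs)
            return cls._instances[cls]

    class Accessor(metaclass=Singleton):
        pass

    assert Accessor() is Accessor()   # every call yields the same object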
def close_connection(self):
self.connection.close()
def connect(self):
self.connection.connect()
def store_bulk_transactions(self, transactions):
txns = []
gov_txns = []
for t in transactions:
transaction = t.tx_dict if t.tx_dict else rapidjson.loads(rapidjson.dumps(t.to_dict()))
if transaction["operation"] in GOVERNANCE_TRANSACTION_TYPES:
gov_txns.append(transaction)
else:
txns.append(transaction)
backend.query.store_transactions(self.connection, txns, TARANT_TABLE_TRANSACTION)
backend.query.store_transactions(self.connection, gov_txns, TARANT_TABLE_GOVERNANCE)
[self.update_utxoset(t) for t in txns + gov_txns]
def delete_transactions(self, txs):
return backend.query.delete_transactions(self.connection, txs)
def is_committed(self, transaction_id):
transaction = backend.query.get_transaction_single(self.connection, transaction_id)
return bool(transaction)
def get_transaction(self, transaction_id):
return backend.query.get_transaction_single(self.connection, transaction_id)
def get_transactions(self, txn_ids):
return backend.query.get_transactions(self.connection, txn_ids)
def get_transactions_filtered(self, asset_ids, operation=None, last_tx=False):
"""Get a list of transactions filtered on some criteria"""
txids = backend.query.get_txids_filtered(self.connection, asset_ids, operation, last_tx)
for txid in txids:
yield self.get_transaction(txid)
def get_outputs_by_tx_id(self, txid):
return backend.query.get_outputs_by_tx_id(self.connection, txid)
def get_outputs_filtered(self, owner, spent=None) -> list[Output]:
"""Get a list of output links filtered on some criteria
Args:
owner (str): base58 encoded public_key.
spent (bool): If ``True`` return only the spent outputs. If
``False`` return only unspent outputs. If spent is
not specified (``None``) return all outputs.
Returns:
:obj:`list` of Output: list of ``txid`` s and ``output`` s
pointing to another transaction's condition
"""
outputs = backend.query.get_outputs_by_owner(self.connection, owner)
unspent_outputs = backend.query.get_outputs_by_owner(self.connection, owner, TARANT_TABLE_UTXOS)
if spent is True:
spent_outputs = []
for output in outputs:
if not any(
utxo.transaction_id == output.transaction_id and utxo.index == output.index
for utxo in unspent_outputs
):
spent_outputs.append(output)
return spent_outputs
elif spent is False:
return unspent_outputs
return outputs
def store_block(self, block):
"""Create a new block."""
return backend.query.store_block(self.connection, block)
def get_latest_block(self) -> dict:
"""Get the block with largest height."""
return backend.query.get_latest_block(self.connection)
def get_block(self, block_id) -> dict:
"""Get the block with the specified `block_id`.
Returns the block corresponding to `block_id` or None if no match is
found.
Args:
block_id (int): block id of the block to get.
"""
block = backend.query.get_block(self.connection, block_id)
latest_block = self.get_latest_block()
latest_block_height = latest_block["height"] if latest_block else 0
if not block and block_id > latest_block_height:
return
return block
def delete_abci_chain(self, height):
return backend.query.delete_abci_chain(self.connection, height)
def get_latest_abci_chain(self):
return backend.query.get_latest_abci_chain(self.connection)
def store_election(self, election_id, height, is_concluded):
return backend.query.store_election(self.connection, election_id, height, is_concluded)
def store_elections(self, elections):
return backend.query.store_elections(self.connection, elections)
def delete_elections(self, height):
return backend.query.delete_elections(self.connection, height)
# NOTE: moved here from Election needs to be placed somewhere else
def get_validators_dict(self, height=None):
"""Return a dictionary of validators with key as `public_key` and
value as the `voting_power`
"""
validators = {}
for validator in self.get_validators(height=height):
# NOTE: we assume that Tendermint encodes public key in base64
public_key = public_key_from_ed25519_key(key_from_base64(validator["public_key"]["value"]))
validators[public_key] = validator["voting_power"]
return validators
def get_spending_transaction(self, txid, output, current_transactions=[]) -> DbTransaction:
transactions = backend.query.get_spending_transaction(self.connection, txid, output)
current_spent_transactions = []
for ctxn in current_transactions:
for ctxn_input in ctxn.inputs:
if ctxn_input.fulfills and ctxn_input.fulfills.txid == txid and ctxn_input.fulfills.output == output:
current_spent_transactions.append(ctxn)
transaction = None
if len(transactions) + len(current_spent_transactions) > 1:
raise DoubleSpend('tx "{}" spends inputs twice'.format(txid))
elif transactions:
tx_id = transactions[0].id
tx = backend.query.get_transaction_single(self.connection, tx_id)
transaction = tx.to_dict()
elif current_spent_transactions:
transaction = current_spent_transactions[0]
return transaction
def get_block_containing_tx(self, txid) -> Block:
"""
Retrieve the list of blocks (block ids) containing a
transaction with transaction id `txid`
Args:
txid (str): transaction id of the transaction to query
Returns:
Block id list (list(int))
"""
block = backend.query.get_block_with_transaction(self.connection, txid)
return block
def get_input_txs_and_conditions(self, inputs, current_transactions=[]):
# store the inputs so that we can check if the asset ids match
input_txs = []
input_conditions = []
for input_ in inputs:
input_txid = input_.fulfills.txid
input_tx = self.get_transaction(input_txid)
_output = self.get_outputs_by_tx_id(input_txid)
if input_tx is None:
for ctxn in current_transactions:
if ctxn.id == input_txid:
ctxn_dict = ctxn.to_dict()
input_tx = DbTransaction.from_dict(ctxn_dict)
_output = [
Output.from_dict(output, index, ctxn.id)
for index, output in enumerate(ctxn_dict["outputs"])
]
if input_tx is None:
raise InputDoesNotExist("input `{}` doesn't exist".format(input_txid))
spent = self.get_spending_transaction(input_txid, input_.fulfills.output, current_transactions)
if spent:
raise DoubleSpend("input `{}` was already spent".format(input_txid))
output = _output[input_.fulfills.output]
input_conditions.append(output)
tx_dict = input_tx.to_dict()
tx_dict["outputs"] = Output.list_to_dict(_output)
tx_dict = DbTransaction.remove_generated_fields(tx_dict)
pm_transaction = Transaction.from_dict(tx_dict, False)
input_txs.append(pm_transaction)
return input_txs, input_conditions
def get_assets(self, asset_ids) -> list[Asset]:
"""Return a list of assets that match the asset_ids
Args:
asset_ids (:obj:`list` of :obj:`str`): A list of asset_ids to
retrieve from the database.
Returns:
list: The list of assets returned from the database.
"""
return backend.query.get_assets(self.connection, asset_ids)
def get_assets_by_cid(self, asset_cid, **kwargs) -> list[dict]:
asset_txs = backend.query.get_transactions_by_asset(self.connection, asset_cid, **kwargs)
# flatten and return all found assets
return list(chain.from_iterable([Asset.list_to_dict(tx.assets) for tx in asset_txs]))
def get_metadata(self, txn_ids) -> list[MetaData]:
"""Return a list of metadata that match the transaction ids (txn_ids)
Args:
txn_ids (:obj:`list` of :obj:`str`): A list of txn_ids to
retrieve from the database.
Returns:
list: The list of metadata returned from the database.
"""
return backend.query.get_metadata(self.connection, txn_ids)
def get_metadata_by_cid(self, metadata_cid, **kwargs) -> list[str]:
metadata_txs = backend.query.get_transactions_by_metadata(self.connection, metadata_cid, **kwargs)
return [tx.metadata.metadata for tx in metadata_txs]
def get_validator_set(self, height=None):
return backend.query.get_validator_set(self.connection, height)
def get_validators(self, height=None):
result = self.get_validator_set(height)
return [] if result is None else result["validators"]
def get_election(self, election_id):
return backend.query.get_election(self.connection, election_id)
def get_pre_commit_state(self):
return backend.query.get_pre_commit_state(self.connection)
def store_pre_commit_state(self, state):
return backend.query.store_pre_commit_state(self.connection, state)
def store_validator_set(self, height, validators):
"""
Store validator set at a given `height`.
NOTE: If the validator set already exists at that `height` then an
exception will be raised.
"""
return backend.query.store_validator_set(self.connection, {"height": height, "validators": validators})
def delete_validator_set(self, height):
return backend.query.delete_validator_set(self.connection, height)
def store_abci_chain(self, height, chain_id, is_synced=True):
return backend.query.store_abci_chain(self.connection, height, chain_id, is_synced)
def get_asset_tokens_for_public_key(self, transaction_id, election_pk):
txns = backend.query.get_asset_tokens_for_public_key(self.connection, transaction_id, election_pk)
return txns
def update_utxoset(self, transaction):
spent_outputs = [
{"output_index": input["fulfills"]["output_index"], "transaction_id": input["fulfills"]["transaction_id"]}
for input in transaction["inputs"]
if input["fulfills"] != None
]
if spent_outputs:
backend.query.delete_unspent_outputs(self.connection, spent_outputs)
[
backend.query.store_transaction_outputs(
self.connection, Output.outputs_dict(output, transaction["id"]), index, TARANT_TABLE_UTXOS
)
for index, output in enumerate(transaction["outputs"])
]
def get_utxoset_merkle_root(self):
"""Returns the merkle root of the utxoset. This implies that
the utxoset is first put into a merkle tree.
For now, the merkle tree and its root will be computed each
time. This obviously is not efficient and a better approach
that limits the repetition of the same computation when
unnecessary should be sought. For instance, future optimizations
could simply re-compute the branches of the tree that were
affected by a change.
The transaction hash (id) and output index should be sufficient
to uniquely identify a utxo, and consequently only that
information from a utxo record is needed to compute the merkle
root. Hence, each node of the merkle tree should contain the
tuple (txid, output_index).
.. important:: The leaves of the tree will need to be sorted in
some kind of lexicographical order.
Returns:
str: Merkle root in hexadecimal form.
"""
utxoset = backend.query.get_unspent_outputs(self.connection)
# See common/transactions.py for details.
hashes = [
sha3_256("{}{}".format(utxo["transaction_id"], utxo["output_index"]).encode()).digest() for utxo in utxoset
]
print(sorted(hashes))
return merkleroot(sorted(hashes))

View File

@ -1,23 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
class FastTransaction:
"""A minimal wrapper around a transaction dictionary. This is useful for
when validation is not required but a routine expects something that looks
like a transaction, for example during block creation.
Note: immutability could also be provided
"""
def __init__(self, tx_dict):
self.data = tx_dict
@property
def id(self):
return self.data["id"]
def to_dict(self):
return self.data

View File

@ -3,16 +3,18 @@
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import sys
import logging
import setproctitle

from planetmint.config import Config
from planetmint.application.validator import Validator
from planetmint.abci.application_logic import ApplicationLogic
from planetmint.abci.parallel_validation import ParallelValidationApp
from planetmint.web import server, websocket_server
from planetmint.ipc.events import EventTypes
from planetmint.ipc.exchange import Exchange
from planetmint.utils.processes import Process
from planetmint.version import __version__

logger = logging.getLogger(__name__)
@ -34,18 +36,20 @@ BANNER = """
"""

def start_web_api(args):
    app_server = server.create_server(
        settings=Config().get()["server"], log_config=Config().get()["log"], planetmint_factory=Validator
    )
    if args.web_api_only:
        app_server.run()
    else:
        p_webapi = Process(name="planetmint_webapi", target=app_server.run, daemon=True)
        p_webapi.start()

def start_abci_server(args):
    logger.info(BANNER.format(__version__, Config().get()["server"]["bind"]))
    exchange = Exchange()
    # start websocket server
    p_websocket_server = Process(
@ -66,21 +70,29 @@ def start(args):
    setproctitle.setproctitle("planetmint")

    abci_server_app = None
    publisher_queue = exchange.get_publisher_queue()
    if args.experimental_parallel_validation:
        abci_server_app = ParallelValidationApp(events_queue=publisher_queue)
    else:
        abci_server_app = ApplicationLogic(events_queue=publisher_queue)

    app = ABCIServer(abci_server_app)
    app.run()

def start(args):
    logger.info("Starting Planetmint")
    if args.web_api_only:
        start_web_api(args)
    elif args.abci_only:
        start_abci_server(args)
    else:
        start_web_api(args)
        start_abci_server(args)

if __name__ == "__main__":
    start(sys.argv)

(The previous monolithic start() was removed, along with the old import paths planetmint.lib.Planetmint, planetmint.core.App, planetmint.events.Exchange and planetmint.utils.Process.)

View File

@ -1,211 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import contextlib
import threading
import queue
import multiprocessing
import json
import setproctitle
from packaging import version
from planetmint.version import __tm_supported_versions__
from planetmint.tendermint_utils import key_from_base64
from planetmint.backend.models.output import ConditionDetails
from transactions.common.crypto import key_pair_from_ed25519_key
class ProcessGroup(object):
def __init__(self, concurrency=None, group=None, target=None, name=None, args=None, kwargs=None, daemon=None):
self.concurrency = concurrency or multiprocessing.cpu_count()
self.group = group
self.target = target
self.name = name
self.args = args or ()
self.kwargs = kwargs or {}
self.daemon = daemon
self.processes = []
def start(self):
for i in range(self.concurrency):
proc = multiprocessing.Process(
group=self.group,
target=self.target,
name=self.name,
args=self.args,
kwargs=self.kwargs,
daemon=self.daemon,
)
proc.start()
self.processes.append(proc)
class Process(multiprocessing.Process):
"""Wrapper around multiprocessing.Process that uses
setproctitle to set the name of the process when running
the target task.
"""
def run(self):
setproctitle.setproctitle(self.name)
super().run()
# Inspired by:
# - http://stackoverflow.com/a/24741694/597097
def pool(builder, size, timeout=None):
"""Create a pool that imposes a limit on the number of stored
instances.
Args:
builder: a function to build an instance.
size: the size of the pool.
timeout(Optional[float]): the seconds to wait before raising
a ``queue.Empty`` exception if no instances are available
within that time.
Raises:
If ``timeout`` is defined but the request is taking longer
than the specified time, the context manager will raise
a ``queue.Empty`` exception.
Returns:
A context manager that can be used with the ``with``
statement.
"""
lock = threading.Lock()
local_pool = queue.Queue()
current_size = 0
@contextlib.contextmanager
def pooled():
nonlocal current_size
instance = None
# If we still have free slots, then we have room to create new
# instances.
if current_size < size:
with lock:
# We need to check again if we have slots available, since
# the situation might be different after acquiring the lock
if current_size < size:
current_size += 1
instance = builder()
# Watchout: current_size can be equal to size if the previous part of
# the function has been executed, that's why we need to check if the
# instance is None.
if instance is None:
instance = local_pool.get(timeout=timeout)
yield instance
local_pool.put(instance)
return pooled
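
A usage sketch of the returned context manager; the builder below is an illustrative stand-in:

    pooled = pool(lambda: object(), size=2)   # at most two instances are ever built

    with pooled() as conn_a:
        with pooled() as conn_b:
            assert conn_a is not conn_b       # two slots, two distinct instances
    with pooled() as conn_c:
        pass                                  # reuses a previously built instance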
# TODO: Rename this function, it's handling fulfillments not conditions
def condition_details_has_owner(condition_details, owner):
"""Check if the public_key of owner is in the condition details
as an Ed25519Fulfillment.public_key
Args:
condition_details (dict): dict with condition details
owner (str): base58 public key of owner
Returns:
bool: True if the public key is found in the condition details, False otherwise
"""
if isinstance(condition_details, ConditionDetails) and condition_details.sub_conditions is not None:
result = condition_details_has_owner(condition_details.sub_conditions, owner)
if result:
return True
elif isinstance(condition_details, list):
for subcondition in condition_details:
result = condition_details_has_owner(subcondition, owner)
if result:
return True
else:
if condition_details.public_key is not None and owner == condition_details.public_key:
return True
return False
class Lazy:
"""Lazy objects are useful to create chains of methods to
execute later.
A lazy object records the methods that have been called, and
replay them when the :py:meth:`run` method is called. Note that
:py:meth:`run` needs an object `instance` to replay all the
methods that have been recorded.
"""
def __init__(self):
"""Instantiate a new Lazy object."""
self.stack = []
def __getattr__(self, name):
self.stack.append(name)
return self
def __call__(self, *args, **kwargs):
self.stack.append((args, kwargs))
return self
def __getitem__(self, key):
self.stack.append("__getitem__")
self.stack.append(([key], {}))
return self
def run(self, instance):
"""Run the recorded chain of methods on `instance`.
Args:
instance: an object.
"""
last = instance
for item in self.stack:
if isinstance(item, str):
last = getattr(last, item)
else:
last = last(*item[0], **item[1])
self.stack = []
return last
# Load Tendermint's public and private key from the file path
def load_node_key(path):
with open(path) as json_data:
priv_validator = json.load(json_data)
priv_key = priv_validator["priv_key"]["value"]
hex_private_key = key_from_base64(priv_key)
return key_pair_from_ed25519_key(hex_private_key)
def tendermint_version_is_compatible(running_tm_ver):
"""
Check Tendermint compatibility with Planetmint server
:param running_tm_ver: Version number of the connected Tendermint instance
:type running_tm_ver: str
:return: True/False depending on the compatibility with Planetmint server
:rtype: bool
"""
# Splitting because version can look like this e.g. 0.22.8-40d6dc2e
tm_ver = running_tm_ver.split("-")
if not tm_ver:
return False
for ver in __tm_supported_versions__:
if version.parse(ver) == version.parse(tm_ver[0]):
return True
return False
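
A usage sketch, assuming "0.34.24" is listed in __tm_supported_versions__ (as the Tendermint upgrade elsewhere in this changeset suggests):

    assert tendermint_version_is_compatible("0.34.24")
    assert tendermint_version_is_compatible("0.34.24-40d6dc2e")   # build suffix is stripped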

View File

44 planetmint/utils/lazy.py Normal file
View File

@ -0,0 +1,44 @@
class Lazy:
"""Lazy objects are useful to create chains of methods to
execute later.
A lazy object records the methods that have been called, and
replay them when the :py:meth:`run` method is called. Note that
:py:meth:`run` needs an object `instance` to replay all the
methods that have been recorded.
"""
def __init__(self):
"""Instantiate a new Lazy object."""
self.stack = []
def __getattr__(self, name):
self.stack.append(name)
return self
def __call__(self, *args, **kwargs):
self.stack.append((args, kwargs))
return self
def __getitem__(self, key):
self.stack.append("__getitem__")
self.stack.append(([key], {}))
return self
def run(self, instance):
"""Run the recorded chain of methods on `instance`.
Args:
instance: an object.
"""
last = instance
for item in self.stack:
if isinstance(item, str):
last = getattr(last, item)
else:
last = last(*item[0], **item[1])
self.stack = []
return last
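
A usage sketch of the record-and-replay behavior:

    lazy = Lazy()
    chained = lazy.upper()[0]      # records: "upper", a call, "__getitem__", a call with [0]
    print(chained.run("hello"))    # replays the chain on "hello" -> "H"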

Some files were not shown because too many files have changed in this diff.