Compare commits


64 Commits
v1.4.2 ... main

Author SHA1 Message Date
Jürgen Eckel
975921183c
fixed audit (#412)
* fixed audit
* fixed tarantool installation


Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2025-02-13 22:34:42 +01:00
Jürgen Eckel
a848324e1d
version bump 2025-02-13 17:14:24 +01:00
Jürgen Eckel
58131d445a
package changes (#411)
* package changes

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2025-02-13 17:11:34 +01:00
annonymmous
f3077ee8e3 Update poetry.lock 2025-02-13 12:20:07 +01:00
Julian Strobl
ef00a7fdde
[sonar] Remove obsolete project
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-11-09 10:19:58 +01:00
Julian Strobl
ce1649f7db
Disable scheduled workflow run 2023-09-11 08:20:31 +02:00
Julian Strobl
472d4cfbd9
Merge pull request #403 from planetmint/dependabot/pip/cryptography-41.0.2
Bump cryptography from 41.0.1 to 41.0.2
2023-07-20 08:06:30 +02:00
dependabot[bot]
9279dd680b
Bump cryptography from 41.0.1 to 41.0.2
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.1 to 41.0.2.
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/41.0.1...41.0.2)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-15 01:31:08 +00:00
Jürgen Eckel
1571211a24
bumped version to 2.5.1
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-22 09:28:45 +02:00
Jürgen Eckel
67abb7102d
fixed all-in-one container tarantool issue
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-22 09:21:51 +02:00
Jürgen Eckel
3ac0ca2c69
Tm 0.34.24 (#401)
* upgrade to Tendermint v0.34.24
* upgraded all the old tendermint versions to the new version


Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-21 11:59:44 +02:00
Jürgen Eckel
4bf1af6f06
fix dependencies (locked) and the audit (#400)
* fix dependencies (locked) and the audit

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added pip-audit to poetry to avoid inconsistent environments

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-06-14 09:30:03 +02:00
Lorenz Herzberger
0d947a4083
updated poetry workflow (#399)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-06-13 09:49:54 +02:00
Jürgen Eckel
34e5492420
Fixed broken tx api (#398)
* enforced using a newer planetmint-transactions package and adjusted to a renaming of the variable
* bumped version & added changelog info

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-24 21:48:50 +02:00
Jürgen Eckel
4c55f576b9
392 abci rpc is not defined for election proposals (#397)
* fixed missing abci_rpc initialization
* bumped versions and added changelog
* sq fixes

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-24 09:44:50 +02:00
Jürgen Eckel
b2bca169ec
fixing potential type error in cases of new block heights (#396)
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-23 15:22:21 +02:00
dependabot[bot]
3e223f04cd
Bump requests from 2.25.1 to 2.31.0 (#395)
* Bump requests from 2.25.1 to 2.31.0

Bumps [requests](https://github.com/psf/requests) from 2.25.1 to 2.31.0.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.25.1...v2.31.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* fixed vulnerability analysis (excluded new/different vulns)

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* disabled another vuln

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* adjust the right pipeline

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed proper pipeline

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-05-23 14:06:02 +02:00
Julian Strobl
95001fc262
[ci] Add nightly run
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-28 14:19:16 +02:00
Julian Strobl
923f14d669 [ci] Add SonarQube Quality Gate action
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-28 11:23:33 +02:00
Jürgen Eckel
74d3c732b1
bumped version and added missing changelog (#390)
* bumped version added missing changelog

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-21 11:05:33 +02:00
Jürgen Eckel
5c4923dbd6
373 integration of the dataaccessor singleton (#389)
* initial singleton usage

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* passing all tests

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* aggregated code into helper functions

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-21 10:48:40 +02:00
Jürgen Eckel
884c3cc32b
385 cli cmd not properly implemented planetmint migrate up (#386)
* fixed cmd line to function mapping issue
* bumped version
* fixed init.lua script issue
* fixed indexing issue on tarantool migrate script

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-19 14:01:34 +02:00
Lorenz Herzberger
4feeed5862
fixed path to init.lua (#384)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-18 11:41:20 +02:00
Lorenz Herzberger
461fae27d1
adjusted tarantool scripts for use in service (#383)
* adjusted tarantool scripts for use in service

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed schema migrate call

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed version number in changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-18 09:36:07 +02:00
Jürgen Eckel
033235fb16
fixed the migration to a different output object (#382)
* fixed the migration to a different output object
* fixed test cases (magic mocks)

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-17 15:19:46 +02:00
Lorenz Herzberger
11cf86464f
Add utxo migration (#379)
* added migration script for utxo space
* added migration commands
* changelog and version bump
* added db call to command

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-14 11:34:36 +02:00
Lorenz Herzberger
9f4cc292bc
fixed sonarqube issues (#377)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-11 21:38:00 +02:00
Lorenz Herzberger
6a3c655e3b
Refactor utxo (#375)
* adjusted utxo space to resemble outputs

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added update_utxoset, removed deprecated test utils

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed test_update_utxoset

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed deprecated query and test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed delete_unspent_outputs tests

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* moved get_merkget_utxoset_merkle_root to dataaccessor and fixed test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed delete_transactions query

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed deprecated fixtures

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* blackified

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added get_outputs_by_owner query and adjusted dataaccessor

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed fastquery class

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed api test case

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed TestMultipleInputs

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed get_outputs_filtered test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed get_spent naming issue

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* blackified

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated changelog and version bump

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-11 15:18:44 +02:00
Lorenz Herzberger
dbf4e9085c
remove zenroom signing (#368)
* added zenroom validation to validator.py and adjusted zenroom test case
* updated transactions dependency
* updated poetry.lock

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-04-05 15:48:00 +02:00
Julian Strobl
4f9c7127b6
[sonar] Exclude k8s/ directory
Error: Can not add the same measure twice on k8s/dev-setup/nginx-http.yaml

Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-04 11:06:41 +02:00
Julian Strobl
3b4dcac388
[sonar] Add initial Sonar Scan setup
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-04-04 11:03:17 +02:00
Jürgen Eckel
e69742808f
asyncio - removed deprecation (#372)
* improved connection error and termination handling
* removed keyboard termination: exception
* fixed test cases
* added python >= 3.10 compatibility

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-04-02 23:36:37 +02:00
Julian Strobl
08ce10ab1f
[ci] Fix docker tag for planetmint-aio (#356)
Otherwise a tag "latest-aio" instead of "latest" is created.

Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-03-14 09:37:23 +01:00
Jürgen Eckel
a3468cf991
added changelog , bumped version
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-13 13:19:30 +01:00
Julian Strobl
6403e02277
Merge pull request #354 from planetmint/jmastr/switch-to-planetmint-aio-docker-image
[ci] Switch to planetmint-aio Docker image
2023-03-13 13:16:16 +01:00
Julian Strobl
aa1310bede
Merge pull request #355 from planetmint/fixed_dockerfile_all_in_one
fixed usability of the planetmint-aio dockerfile/image
2023-03-13 13:08:47 +01:00
Jürgen Eckel
90759697ee
fixed usability of the planetmint-aio dockerfile/image
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-13 12:58:27 +01:00
Julian Strobl
eae8ec4c1e
[ci] Switch to planetmint-aio Docker image
Signed-off-by: Julian Strobl <jmastr@mailbox.org>
2023-03-13 10:55:17 +01:00
Julian Strobl
26e0a21e39
[ci] Add Docker All-In-One build (#352)
* [ci] Add Docker All-In-One build
* added changelog and version bump


Signed-off-by: Julian Strobl <jmastr@mailbox.org>
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-10 08:43:41 +01:00
Jürgen Eckel
8942ebe4af
fixed object differentiation issue in eventify_block (#350)
* fixed object differentiation issue in eventify_block

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-06 15:11:53 +01:00
Jürgen Eckel
59f25687da
Hot fix 2.3.1 (#340)
* fixed issues after 2.3.0: one caused by refactoring, the other pre-existing
* version bump and changelog

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-02 08:50:28 +01:00
Jürgen Eckel
83dfbed8b2
332 integrate tarantool driver abstraction (#339)
* renamed bigchain_pool -> validator_obj
* renamed the flask connection pool (class name)
* prepared AsyncIO separation
* renamed abci/core.py and class names, merged utils files
* removed obsolete file
* tidy up of ABCI application logic interface
* updated to newest driver tarantool 0.12.1
* Added new start options: --abci-only and --web-api-only to enable separate execution of the services
* Added exception handling to the ABCI app
* removed space() object usage and thereby halved the amount of DB lookups
* removed async_io handling in the connection object but left some basics of the potential initialization
* tidied up the import structure/order
* tidied up imports
* set version number and changelog

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-03-01 17:42:18 +01:00
dependabot[bot]
2c0b0f03c9
Bump markdown-it-py from 2.1.0 to 2.2.0 (#336)
Bumps [markdown-it-py](https://github.com/executablebooks/markdown-it-py) from 2.1.0 to 2.2.0.
- [Release notes](https://github.com/executablebooks/markdown-it-py/releases)
- [Changelog](https://github.com/executablebooks/markdown-it-py/blob/master/CHANGELOG.md)
- [Commits](https://github.com/executablebooks/markdown-it-py/compare/v2.1.0...v2.2.0)

---
updated-dependencies:
- dependency-name: markdown-it-py
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-27 16:56:26 +01:00
Jürgen Eckel
0b0c954d34
331 refactor a certain module gets a specific driver type flask sync driver abci server async driver first we stick to the current tarantool driver (#337)
* created ABCI_RPC class to separate RPC interaction from the other ABCI interactions
* renamed validation.py to validator.py
* simplified planetmint/__init__.py
* moved methods used by testing to tests/utils.py
* making planetmint/__init__.py lean
* moved ProcessGroup object to tests as it is only used there
* reintegrated disabled tests


Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-27 16:48:31 +01:00
Jürgen Eckel
77ab922eed
removed integration tests from the repo (#329)
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-22 14:07:20 +01:00
Lorenz Herzberger
1fc306e09d
fixed subcondition instantiation recursively (#328)
* fixed subcondition instantiation recursively
* blackified
* updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-22 10:39:04 +01:00
Jürgen Eckel
89b5427e47
fixed bug: rollback caused from_dict call on a None object (#327)
* fixed bug: rollback caused from_dict call on a None object

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* simplified fix

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-21 10:57:37 +01:00
dependabot[bot]
979af5e453
Bump cryptography from 3.4.7 to 39.0.1 (#324)
Bumps [cryptography](https://github.com/pyca/cryptography) from 3.4.7 to 39.0.1.
- [Release notes](https://github.com/pyca/cryptography/releases)
- [Changelog](https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/cryptography/compare/3.4.7...39.0.1)

---
updated-dependencies:
- dependency-name: cryptography
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 16:00:43 +01:00
Lorenz Herzberger
63b386a9cf
Migrate docs to use poetry (#326)
* migrated docs to use poetry, removed python browser script from makefile

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* bumped version in version.py

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-16 16:00:00 +01:00
dependabot[bot]
15a8a82191
Bump ipython from 8.9.0 to 8.10.0 (#323)
Bumps [ipython](https://github.com/ipython/ipython) from 8.9.0 to 8.10.0.
- [Release notes](https://github.com/ipython/ipython/releases)
- [Commits](https://github.com/ipython/ipython/compare/8.9.0...8.10.0)

---
updated-dependencies:
- dependency-name: ipython
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-16 15:22:41 +01:00
Lorenz Herzberger
c69272f6a2
removed unused code for deprecated text search (#322)
* removed unused code for deprecated text search

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-15 16:11:36 +01:00
Lorenz Herzberger
384b091d74
Migrate to poetry (#321)
* added pyproject.toml and poetry.lock

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added scripts and classifiers to pyproject.toml

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated planetmint-transactions, updated dockerfile to use poetry, updated changelog

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated CI and Makefile

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated CI audit step to use poetry

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated version number on pyproject.toml

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* updated version number

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

---------

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-02-15 15:56:01 +01:00
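For readers unfamiliar with the layout the Migrate to poetry change (#321) introduces, a pyproject.toml of roughly this shape would be expected. This is an illustrative sketch only: the version, description, classifiers, and the CLI entry-point path are assumptions, not the repository's actual metadata.

```toml
# Illustrative sketch -- field values and the entry-point path are assumed.
[tool.poetry]
name = "planetmint"
version = "0.0.0"
description = "Planetmint node software (placeholder description)"
authors = ["Planetmint contributors"]
classifiers = [
    "Programming Language :: Python :: 3.9",
    "License :: OSI Approved :: Apache Software License",
]

[tool.poetry.dependencies]
python = "^3.9"

[tool.poetry.scripts]
# assumed module path for the CLI entry point
planetmint = "planetmint.commands.planetmint:main"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

With a file like this in place, `poetry install`, `poetry build`, and `poetry publish` cover the install/release steps that the CI workflow commits below rely on.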
Jürgen Eckel
811f89e5a6
79 hotfix election validation backward compat (#319)
* fixed election and voting backward compatibility issues
* bumped version!
* fixed changed testcases and a bug
* blackified
* blackified with newest version to satisfy CI
* fix dependency management issue

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-15 09:19:17 +01:00
Jürgen Eckel
2bb0539b78
catching Tarantool exceptions in case of concurrency (implicitly issu… (#312)
* catching Tarantool exceptions in case of concurrency (implicitly issued by the planetmint-driver-ts tests)
* fixed black version
* blackified (new version)

---------

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-02-01 13:43:39 +01:00
Lorenz Herzberger
87506ff4a1
removed deprecated or unused code (#311)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-01-31 16:39:09 +01:00
Jürgen Eckel
3cb9424a35
added content: write permissions to the CI.yml
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-01-31 16:34:19 +01:00
Jürgen Eckel
6115a73f66
fixed workflow & bumped version
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-01-31 14:09:24 +01:00
Lorenz Herzberger
3f65a13c46
standardize blocks api (#306)
* adjusted block API return format
* blackified and updated version and changelog
* added to error message if no block with id was found

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-01-31 09:59:34 +01:00
Jürgen Eckel
9a74a9c987
286 pull access denied attempting to download planetmint docker image (#307)
* simplified CI workflows
* added docker image publishing on gh
* added arm buildx
* added CI changes
* adjusted CI workflow
* fixed some vulnerability by upgrading dependencies


* changed Dockerfile-dev to be the default

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-01-31 09:55:19 +01:00
Jürgen Eckel
599f64f68c
Compose & Decompose Support (#304)
Added the newest transaction package and support for Compose and Decompose as specified in PRP-5 https://github.com/planetmint/PRPs/tree/main/5
2023-01-26 17:41:45 +01:00
Jürgen Eckel
cfa3b6dcd4
Fixing issues (#300)
* making multiprocessing usage explicit and easily identifiable
* fixed error messaging and made API not found reports debug information
* removed acceptance tests
* removed obsolete gh workflow file
* fixed test case issue with patching
* changed test cases to not check for error logs, as we moved those to debug logs
checks/asserts can be re-integrated as soon as we are able to set the debug level for the single use cases

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-01-26 13:53:04 +01:00
Jürgen Eckel
84ae2ccf2b
upgraded tarantool to support M1 (#302)
* upgraded tarantool to support M1

* increased number to a stable version

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
2023-01-26 13:51:11 +01:00
Lorenz Herzberger
8abaaf79f4
removed faulty pip statement (#301)
Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
2023-01-26 10:17:44 +01:00
Lorenz Herzberger
4472a1a3ee
refactor tarantool backend (#292)
* added initial interfaces for backend, refactored Asset and MetaData logic

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* adjusted input dataclass, added queries, removed convert

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* created backend models folder, replaced token_hex with uuid

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Add cleanup and add constants

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* added to and from static methods to asset, input model and removed logic from tools

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* simplified store_bulk_transaction and corresponding query, adjusted test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed script queries

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Add Output model

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Adapt Output class

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Further fixes

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Further fixes

* Get rid of decompose

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* refactored init.lua

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* refactored drop.lua

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Add transaction data class

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* refactored init.lua

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fix tests

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Fix more tests

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Format file

* Fix recursion error

* More fixes

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Further fixes

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* using init.lua for db setup

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed flush_db for new tarantool implementation

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed unique constraints

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* used new indexes on block related db operations

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Adapt models

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Check if blocks is empty

* adjusted get_txids_filtered for new indexes

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Adaptions due to schema change

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* fixed get block test case

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Fix subcondition serialization

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Remove unnecessary method

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* More fixes

* renamed group_txs and used data models in fastquery

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* adjusted query test cases, removed unused code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced asset search with get_asset_by_cid

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added limit to asset queries

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced metadata search with cid lookup

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed most of the test_lib test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed election test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed some more test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed 'is' vs '==' issue

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* - blackified & fixed recovery / delete transactions issues because of data model transitions
- reintegrated get_transaction() call in query -> delegating this to get_complete_transactions_by_ids

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* show election status uses the governance table from now on
show election status maps the asset["data"] object properly

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed input object differences between old / new version and lookup of transaction in the governance pool

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed TX lookup issues due to different pools

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed wrong index name issue:  transaction_by_asset vs transaction_by_asset_id

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed asset class key mixup

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* moved field removal methods to DbTransaction
redefined structure of DbTransaction.to_dict() to be equal to the one of Transactions.to_dict()

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added proper input conversion of the test cases and a proper input validation and object conversion

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* simplified imports
fixed transfer input issues of the tests

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed comparison issue: dict vs. object

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed schema validation errors

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added verification of ConditionDetails to the owner verification to avoid mixup between ConditionDetails and SubCondition
fixed object comparison issues due to object changes

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed object handling issue and complicated stuff

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added missing import

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added proper corner case handling in case a requested block is not found

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed object comparison issue

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed output handling for validate_transfer_inputs

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed wrong search pool usage

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed zenroom testcase

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed last abci issues and blackified the code

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added tarantool exception catching and raising as well as logging

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed obj comparison issue in test_get_spent_issue_1271

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added raising CriticalDoubleSpend exception for governance and transactions
fixed search space issue with election / voting commit lookup

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* * made returned outputs unique (get_owned_ids)
* added delete_output method to init.lua
* fixed output deletion issue by relaying the deletion to lua instead of the python code

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed rollback after crash

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* adjusted assets=None to assets=[{"data":None}] to avoid exceptions in the background service

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed unused code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed unused code, reverted transaction fetching, added return types to queries

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed duplicate code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed deprecated code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* store transactions of various versions (backward compatibility)
added _bdb variable to init/drop DBs for the single use cases (started failing as TXs are looked up in DB - compared to before)

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added support for v2.0 transaction to DB writing/reading

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed merge errors (arguments ... )

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* Simplified unit tests (#294)

* adjusted make test
* first improvements to ease testing
* simplified gh actions
* adjusted gh action file
* removed deps
* added sudo to apt calls
* removed predefined pytest module definitions
* added installing planetmint into the unit test container
* give time to the db container
* added environment variables to unit-test.yml
* removed acceptance tests from test executions

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed unused code, updated version number

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
Signed-off-by: cybnon <stefan.weber93@googlemail.com>
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: cybnon <stefan.weber93@googlemail.com>
Co-authored-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: Jürgen Eckel <eckelj@users.noreply.github.com>
2023-01-16 15:21:56 +01:00
201 changed files with 9479 additions and 9713 deletions
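The figures above (commit count, files changed, additions/deletions) are plain git queries; a self-contained sketch using a throwaway repository, so the numbers are small rather than the 64/201 of this comparison (ref name mirrors the header above):

```shell
# Build a tiny throwaway repo to demonstrate the commands a
# "compare v1.4.2...main" view is based on.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "base"
git tag v1.4.2                     # the older ref of the comparison
echo "change" > file.txt
git add file.txt
git -c user.email=ci@example.com -c user.name=ci commit -q -m "feature work"
# "N Commits" figure: commits reachable from HEAD but not from the tag
git rev-list --count v1.4.2..HEAD
# "changed files / additions / deletions" figure
git diff --shortstat v1.4.2..HEAD
```

The same two commands against the real repository with refs `v1.4.2` and `main` reproduce the header numbers of this page.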

224
.github/workflows/CI.yml vendored Normal file

@ -0,0 +1,224 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
---
name: CI
on:
  push:
    branches:
      - "*"
    tags:
      - "v*.*.*"
  pull_request:
    branches:
      - "main"
permissions:
  packages: write
  contents: write
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: psf/black@stable
        with:
          options: "--check -l 119"
          src: "."
  audit:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Setup poetry
        uses: Gr1N/setup-poetry@v8
      - name: Install dependencies
        run: poetry install
      - name: Create requirements.txt
        run: poetry run pip freeze > requirements.txt
      - name: Audit dependencies
        run: |
          poetry run pip-audit \
            --ignore-vuln GHSA-8495-4g3g-x7pr \
            --ignore-vuln PYSEC-2024-230 \
            --ignore-vuln PYSEC-2024-225 \
            --ignore-vuln GHSA-3ww4-gg4f-jr7f \
            --ignore-vuln GHSA-9v9h-cgj8-h64p \
            --ignore-vuln GHSA-h4gh-qq45-vh27 \
            --ignore-vuln PYSEC-2023-62 \
            --ignore-vuln PYSEC-2024-71 \
            --ignore-vuln GHSA-84pr-m4jr-85g5 \
            --ignore-vuln GHSA-w3h3-4rj7-4ph4 \
            --ignore-vuln PYSEC-2024-60 \
            --ignore-vuln GHSA-h5c8-rqwp-cp95 \
            --ignore-vuln GHSA-h75v-3vvj-5mfj \
            --ignore-vuln GHSA-q2x7-8rv6-6q7h \
            --ignore-vuln GHSA-gmj6-6f8f-6699 \
            --ignore-vuln PYSEC-2023-117 \
            --ignore-vuln GHSA-m87m-mmvp-v9qm \
            --ignore-vuln GHSA-9wx4-h78v-vm56 \
            --ignore-vuln GHSA-34jh-p97f-mpxf \
            --ignore-vuln PYSEC-2022-203 \
            --ignore-vuln PYSEC-2023-58 \
            --ignore-vuln PYSEC-2023-57 \
            --ignore-vuln PYSEC-2023-221 \
            --ignore-vuln GHSA-2g68-c3qc-8985 \
            --ignore-vuln GHSA-f9vj-2wh5-fj8j \
            --ignore-vuln GHSA-q34m-jh98-gwm2
  test:
    needs: lint
    runs-on: ubuntu-latest
    env:
      PLANETMINT_DATABASE_BACKEND: tarantool_db
      PLANETMINT_DATABASE_HOST: localhost
      PLANETMINT_DATABASE_PORT: 3303
      PLANETMINT_SERVER_BIND: 0.0.0.0:9984
      PLANETMINT_WSSERVER_HOST: 0.0.0.0
      PLANETMINT_WSSERVER_ADVERTISED_HOST: localhost
      PLANETMINT_TENDERMINT_HOST: localhost
      PLANETMINT_TENDERMINT_PORT: 26657
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Prepare OS
        run: sudo apt-get update && sudo apt-get install -y git zsh curl tarantool-common vim build-essential cmake
      - name: Get Tendermint
        run: wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_0.34.24_linux_amd64.tar.gz && tar zxf tendermint_0.34.24_linux_amd64.tar.gz
      - name: Setup poetry
        uses: Gr1N/setup-poetry@v8
      - name: Install Planetmint
        run: poetry install --with dev
      - name: Execute Tests
        run: make test
  release:
    needs: test
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository code
        uses: actions/checkout@v3
      - name: Setup python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
      - name: Setup poetry
        uses: Gr1N/setup-poetry@v8
      - name: Install dependencies
        run: poetry install --with dev
      - name: Upload to PyPI
        run: |
          poetry build
          poetry publish -u __token__ -p ${{ secrets.PYPI_TOKEN }}
      - name: Upload to GitHub
        uses: softprops/action-gh-release@v1
        with:
          files: dist/*
  publish-docker:
    needs: test
    if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      # Get the repository's code
      - name: Checkout
        uses: actions/checkout@v2
      # https://github.com/docker/setup-qemu-action
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      # https://github.com/docker/setup-buildx-action
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to GHCR
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GH_PACKAGE_DEPLOYMENT }}
      - name: Docker meta
        id: semver # you'll use this in the next step
        uses: docker/metadata-action@v3
        with:
          # list of Docker images to use as base name for tags
images: |
ghcr.io/planetmint/planetmint
# Docker tags based on the following events/attributes
tags: |
type=schedule
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.semver.outputs.tags }}
labels: ${{ steps.semver.outputs.labels }}
env:
CRYPTOGRAPHY_DONT_BUILD_RUST: 1
- name: Docker meta AIO
id: semver-aio # you'll use this in the next step
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
ghcr.io/planetmint/planetmint-aio
# Docker tags based on the following events/attributes
tags: |
type=schedule
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
- name: Build and push AIO
uses: docker/build-push-action@v2
with:
context: .
file: Dockerfile-all-in-one
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.semver-aio.outputs.tags }}
labels: ${{ steps.semver-aio.outputs.labels }}
env:
CRYPTOGRAPHY_DONT_BUILD_RUST: 1
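
The three `type=semver` patterns in the metadata steps above expand one release tag into several image tags. A minimal sketch of that expansion, assuming a tag like `v2.5.1` (this is an illustration, not `docker/metadata-action`'s actual code):

```python
# Rough sketch of how the semver tag patterns in the workflow expand a
# release tag into image tags. Illustrative only; the real expansion is
# done by docker/metadata-action.
def expand_semver_tags(tag: str) -> list[str]:
    version = tag.lstrip("v")                # "v2.5.1" -> "2.5.1"
    major, minor, _patch = version.split(".")
    return [
        version,                             # type=semver,pattern={{version}}
        f"{major}.{minor}",                  # type=semver,pattern={{major}}.{{minor}}
        major,                               # type=semver,pattern={{major}}
    ]

print(expand_semver_tags("v2.5.1"))          # ['2.5.1', '2.5', '2']
```

Each returned string is then prefixed with the image name (e.g. `ghcr.io/planetmint/planetmint:2.5.1`), so a single tag push publishes the exact version plus the floating minor and major tags.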

View File

@ -1,22 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Acceptance tests
on: [push, pull_request]
jobs:
test:
if: ${{ false }}
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Start container
run: docker-compose up -d planetmint
- name: Run test
run: docker-compose -f docker-compose.yml run --rm python-acceptance pytest /src

View File

@ -21,16 +21,44 @@ jobs:
with:
python-version: 3.9
- name: Install pip-audit
run: pip install --upgrade pip pip-audit
- name: Setup poetry
uses: Gr1N/setup-poetry@v8
- name: Install dependencies
run: pip install .
run: poetry install
- name: Create requirements.txt
run: pip freeze > requirements.txt
run: poetry run pip freeze > requirements.txt
- name: Audit dependencies
run: pip-audit
run: |
poetry run pip-audit \
--ignore-vuln PYSEC-2022-203 \
--ignore-vuln PYSEC-2023-58 \
--ignore-vuln PYSEC-2023-57 \
--ignore-vuln PYSEC-2023-62 \
--ignore-vuln GHSA-8495-4g3g-x7pr \
--ignore-vuln PYSEC-2023-135 \
--ignore-vuln PYSEC-2024-230 \
--ignore-vuln PYSEC-2024-225 \
--ignore-vuln GHSA-3ww4-gg4f-jr7f \
--ignore-vuln GHSA-9v9h-cgj8-h64p \
--ignore-vuln GHSA-h4gh-qq45-vh27 \
--ignore-vuln PYSEC-2024-71 \
--ignore-vuln GHSA-84pr-m4jr-85g5 \
--ignore-vuln GHSA-w3h3-4rj7-4ph4 \
--ignore-vuln PYSEC-2024-60 \
--ignore-vuln GHSA-h5c8-rqwp-cp95 \
--ignore-vuln GHSA-h75v-3vvj-5mfj \
--ignore-vuln GHSA-q2x7-8rv6-6q7h \
--ignore-vuln GHSA-gmj6-6f8f-6699 \
--ignore-vuln PYSEC-2023-117 \
--ignore-vuln GHSA-m87m-mmvp-v9qm \
--ignore-vuln GHSA-9wx4-h78v-vm56 \
--ignore-vuln PYSEC-2023-192 \
--ignore-vuln PYSEC-2023-212 \
--ignore-vuln GHSA-34jh-p97f-mpxf \
--ignore-vuln PYSEC-2023-221 \
--ignore-vuln GHSA-2g68-c3qc-8985 \
--ignore-vuln GHSA-f9vj-2wh5-fj8j \
--ignore-vuln GHSA-q34m-jh98-gwm2
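
Long hand-maintained `--ignore-vuln` lists like the one above are easy to mistype. One possible way to keep the list in a single place and render the flags is sketched below; the helper and the shortened ID list are illustrative, not part of this repository:

```python
# Sketch: keep ignored vulnerability IDs in one list and render the
# --ignore-vuln flags for a pip-audit invocation. The IDs shown are a
# subset copied from the workflow above; the helper is hypothetical.
IGNORED_VULNS = [
    "GHSA-8495-4g3g-x7pr",
    "PYSEC-2024-230",
    "PYSEC-2024-225",
]

def ignore_flags(vuln_ids: list[str]) -> str:
    # Join flags with shell line continuations for readability.
    return " \\\n".join(f"--ignore-vuln {vid}" for vid in vuln_ids)

print("poetry run pip-audit \\\n" + ignore_flags(IGNORED_VULNS))
```

The rendered command matches the multi-line form used in the workflow, so the ID list could be reviewed and deduplicated separately from the YAML.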

View File

@ -1,19 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Integration tests
on: [push, pull_request]
jobs:
test:
if: ${{ false }}
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Start test run
run: docker-compose -f docker-compose.integration.yml up test

View File

@ -1,17 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Lint
on: [push, pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: psf/black@stable
with:
options: "--check -l 119"
src: "."

View File

@ -1,30 +0,0 @@
name: Deploy packages
on:
push:
tags:
- '*'
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Setup python
uses: actions/setup-python@v4
with:
python-version: 3.9
- name: Install dependencies
run: pip install -e '.[dev]' && pip install wheel && python setup.py bdist_wheel sdist
- name: Upload to TestPyPI
run: |
twine check dist/*
twine upload dist/*
env:
TWINE_USERNAME: __token__
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}

View File

@ -1,59 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Unit tests - with direct ABCI
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Build container
run: |
docker-compose -f docker-compose.yml build --no-cache --build-arg abci_status=enable planetmint
- name: Save image
run: docker save -o planetmint.tar planetmint_planetmint
- name: Upload image
uses: actions/upload-artifact@v3
with:
name: planetmint-abci
path: planetmint.tar
retention-days: 5
test-with-abci:
runs-on: ubuntu-latest
needs: build
strategy:
matrix:
include:
- db: "Tarantool with ABCI"
host: "tarantool"
port: 3303
abci: "enabled"
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Download planetmint
uses: actions/download-artifact@v3
with:
name: planetmint-abci
- name: Load planetmint
run: docker load -i planetmint.tar
- name: Start containers
run: docker-compose -f docker-compose.yml up -d planetmint
- name: Run tests
run: docker exec planetmint_planetmint_1 pytest -v -m abci

View File

@ -1,60 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Unit tests - with Planetmint
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Build container
run: |
docker-compose -f docker-compose.yml build --no-cache planetmint
- name: Save image
run: docker save -o planetmint.tar planetmint_planetmint
- name: Upload image
uses: actions/upload-artifact@v3
with:
name: planetmint
path: planetmint.tar
retention-days: 5
test-without-abci:
runs-on: ubuntu-latest
needs: build
strategy:
matrix:
include:
- db: "Tarantool without ABCI"
host: "tarantool"
port: 3303
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Download planetmint
uses: actions/download-artifact@v3
with:
name: planetmint
- name: Load planetmint
run: docker load -i planetmint.tar
- name: Start containers
run: docker-compose -f docker-compose.yml up -d bdb
- name: Run tests
run: docker exec planetmint_planetmint_1 pytest -v --cov=planetmint --cov-report xml:htmlcov/coverage.xml
- name: Upload Coverage to Codecov
uses: codecov/codecov-action@v3

View File

@ -25,6 +25,85 @@ For reference, the possible headings are:
* **Known Issues**
* **Notes**
## [2.5.1] - 2023-22-06
* **Fixed** docker image incompatibility with tarantool installer, switched to ubuntu-container for AIO image
## [2.5.0] - 2023-21-06
* **Changed** Upgraded ABCI compatibility to Tendermint v0.34.24 and CometBFT v0.34.29
## [2.4.7] - 2023-24-05
* **Fixed** wrong referencing of planetmint-transactions object and variable
## [2.4.6] - 2023-24-05
* **Fixed** Missing ABCI_RPC object initialization for CLI voting commands.
* **Fixed** TypeError in EndBlock procedure that occurred rarely within the network.
* **Security** moved to a more secure requests version
## [2.4.5] - 2023-21-04
* **Fixed** Integration of DataAccessor Singleton class to reduce potentially multiple DB driver initializations.
## [2.4.4] - 2023-19-04
* **Fixed** tarantool migration script issues (modularity, script failures, cli cmd to function mapping)
## [2.4.3] - 2023-17-04
* **Fixed** migration behaviour for non-docker services
## [2.4.2] - 2023-13-04
* **Added** planetmint migration commands
## [2.4.1] - 2023-11-04
* **Removed** Fastquery class
* **Changed** UTXO space updated to resemble outputs
* **Changed** updated UTXO querying
## [2.4.0] - 2023-29-03
* **Added** Zenroom script validation
* **Changed** adjusted zenroom testing for new transaction script structure
## [2.3.3] - 2023-10-03
* **Fixed** CI issues with the docker images
* **Added** Tendermint, tarantool, and planetmint initialization to the all-in-one docker image
## [2.3.2] - 2023-10-03
* **Fixed** websocket service issue with block/asset object access of different object/tx versions
* **Added** CI pipeline to build and package the all-in-one docker images
## [2.3.1] - 2023-02-03
* **Fixed** backend.models.assets class content type issue (verifying that objects are of type dict)
* **Fixed** Type definitions of Exceptions in the backend.query exception catching decorator
## [2.3.0] - 2023-01-03
* **Fixed** double usage of the tarantool driver in one instance that led to crashes
* **Changed** refactored a lot of classes and the structure
* **Changed** upgraded to tarantool driver 0.12.1
## [2.2.4] - 2023-15-02
* **Fixed** subcondition instantiation now works recursively
* **Changed** migrated dependency management to poetry
* **Removed** unused text_search related code
* **Changed** docs are now built using poetry
## [2.2.3] - 2023-14-02
* **Fixed** voting/election backward compatibility issue (using planetmint-transactions >= 0.7.0) on the 2.2 main branch
* **Changed** migrated dependency management to poetry
## [2.2.2] - 2023-31-01
* **Fixed** catching tarantool exceptions in case tarantool drivers throw exceptions due to concurrency issues. This issue got identified during the testing of the planetmint-driver-ts.
## [2.2.0] - 2023-31-01
* **Changed** standardized blocks API
## [2.1.0] - 2023-26-01
* **Added** validation for compose and decompose transaction types
## [2.0.0] - 2023-12-01
* **Changed** tarantool db schema
* **Removed** removed text_search routes
* **Added** metadata / asset cid route for fetching transactions
## [1.4.2] - 2023-14-02
* **Fixed** voting/election backward compatibility issue (using planetmint-transactions >= 0.7.0)
## [1.4.1] - 2022-21-12
* **Fixed** inconsistent cryptocondition keyring tag handling. Using cryptoconditions > 1.1.0 from now on.

View File

@ -1,24 +1,36 @@
FROM python:3.9
ARG python_version=3.9
FROM python:${python_version}-slim
LABEL maintainer "contact@ipdb.global"
RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN apt-get -qq update \
&& apt-get -y upgrade \
&& apt-get install -y jq vim zsh build-essential cmake\
&& pip install . \
RUN apt-get update \
&& apt-get install -y git zsh curl\
&& apt-get install -y tarantool-common\
&& apt-get install -y vim build-essential cmake\
&& pip install -U pip \
&& apt-get autoremove \
&& apt-get clean
ARG backend
ARG abci_status
VOLUME ["/data", "/certs"]
# When developing with Python in a docker container, we are using PYTHONUNBUFFERED
# to force stdin, stdout and stderr to be totally unbuffered and to capture logs/outputs
ENV PYTHONUNBUFFERED 0
ENV PLANETMINT_CONFIG_PATH /data/.planetmint
ENV PLANETMINT_DATABASE_PORT 3303
ENV PLANETMINT_DATABASE_BACKEND $backend
ENV PLANETMINT_SERVER_BIND 0.0.0.0:9984
ENV PLANETMINT_WSSERVER_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_SCHEME ws
ENV PLANETMINT_WSSERVER_ADVERTISED_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_ADVERTISED_SCHEME ws
ENV PLANETMINT_WSSERVER_ADVERTISED_PORT 9985
ENTRYPOINT ["planetmint"]
CMD ["start"]
ENV PLANETMINT_TENDERMINT_PORT 26657
ENV PLANETMINT_CI_ABCI ${abci_status}
RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN pip install poetry
RUN poetry install --with dev

View File

@ -1,7 +1,7 @@
FROM python:3.9-slim
FROM ubuntu:22.04
LABEL maintainer "contact@ipdb.global"
ARG TM_VERSION=0.34.15
ARG TM_VERSION=0.34.24
RUN mkdir -p /usr/src/app
ENV HOME /root
COPY . /usr/src/app/
@ -11,15 +11,17 @@ RUN apt-get update \
&& apt-get install -y openssl ca-certificates git \
&& apt-get install -y vim build-essential cmake jq zsh wget \
&& apt-get install -y libstdc++6 \
&& apt-get install -y openssh-client openssh-server \
&& pip install --upgrade pip cffi \
&& apt-get install -y openssh-client openssh-server
RUN apt-get install -y python3 python3-pip cython3
RUN pip install --upgrade pip cffi \
&& pip install -e . \
&& apt-get autoremove
# Install tarantool and monit
RUN apt-get install -y dirmngr gnupg apt-transport-https software-properties-common ca-certificates curl
RUN ln -fs /usr/share/zoneinfo/Etc/UTC /etc/localtime
RUN apt-get update
RUN curl -L https://tarantool.io/wrATeGF/release/2/installer.sh | bash
RUN curl -L https://tarantool.io/release/2/installer.sh | bash
RUN apt-get install -y tarantool monit
# Install Tendermint
@ -42,8 +44,14 @@ ENV PLANETMINT_WSSERVER_ADVERTISED_HOST 0.0.0.0
ENV PLANETMINT_WSSERVER_ADVERTISED_SCHEME ws
ENV PLANETMINT_TENDERMINT_PORT 26657
COPY planetmint/backend/tarantool/opt/init.lua /etc/tarantool/instances.enabled
VOLUME /data/db /data/configdb /tendermint
EXPOSE 27017 28017 9984 9985 26656 26657 26658
WORKDIR $HOME
RUN tendermint init
RUN planetmint -y configure

View File

@ -32,5 +32,5 @@ ENV PLANETMINT_CI_ABCI ${abci_status}
RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN pip install -e .[dev]
RUN pip install flask-cors
RUN pip install poetry
RUN poetry install --with dev

View File

@ -1,23 +1,8 @@
.PHONY: help run start stop logs lint test test-unit test-unit-watch test-acceptance test-integration cov docs docs-acceptance clean reset release dist check-deps clean-build clean-pyc clean-test
.PHONY: help run start stop logs lint test test-unit test-unit-watch cov docs clean reset release dist check-deps clean-build clean-pyc clean-test
.DEFAULT_GOAL := help
#############################
# Open a URL in the browser #
#############################
define BROWSER_PYSCRIPT
import os, webbrowser, sys
try:
from urllib import pathname2url
except:
from urllib.request import pathname2url
webbrowser.open("file://" + pathname2url(os.path.abspath(sys.argv[1])))
endef
export BROWSER_PYSCRIPT
##################################
# Display help for this makefile #
##################################
@ -41,8 +26,7 @@ export PRINT_HELP_PYSCRIPT
# Basic commands #
##################
DOCKER := docker
DC := docker-compose
BROWSER := python -c "$$BROWSER_PYSCRIPT"
DC := docker compose
HELP := python -c "$$PRINT_HELP_PYSCRIPT"
ECHO := /usr/bin/env echo
@ -77,36 +61,25 @@ lint: check-py-deps ## Lint the project
format: check-py-deps ## Format the project
black -l 119 .
test: check-deps test-unit test-acceptance ## Run unit and acceptance tests
test: check-deps test-unit ## Run unit
test-unit: check-deps ## Run all tests once or specify a file/test with TEST=tests/file.py::Class::test
@$(DC) up -d bdb
@$(DC) exec planetmint pytest ${TEST}
@$(DC) up -d tarantool
#wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_0.34.24_linux_amd64.tar.gz
#tar zxf tendermint_0.34.24_linux_amd64.tar.gz
poetry run pytest -m "not abci"
rm -rf ~/.tendermint && ./tendermint init && ./tendermint node --consensus.create_empty_blocks=false --rpc.laddr=tcp://0.0.0.0:26657 --proxy_app=tcp://localhost:26658&
poetry run pytest -m abci
@$(DC) down
test-unit-watch: check-deps ## Run all tests and wait. Every time you change code, tests will be run again
@$(DC) run --rm --no-deps planetmint pytest -f
test-acceptance: check-deps ## Run all acceptance tests
@./scripts/run-acceptance-test.sh
test-integration: check-deps ## Run all integration tests
@./scripts/run-integration-test.sh
cov: check-deps ## Check code coverage and open the result in the browser
@$(DC) run --rm planetmint pytest -v --cov=planetmint --cov-report html
$(BROWSER) htmlcov/index.html
docs: check-deps ## Generate HTML documentation and open it in the browser
@$(DC) run --rm --no-deps bdocs make -C docs/root html
$(BROWSER) docs/root/build/html/index.html
docs-acceptance: check-deps ## Create documentation for acceptance tests
@$(DC) run --rm python-acceptance pycco -i -s /src -d /docs
$(BROWSER) acceptance/python/docs/index.html
docs-integration: check-deps ## Create documentation for integration tests
@$(DC) run --rm python-integration pycco -i -s /src -d /docs
$(BROWSER) integration/python/docs/index.html
clean: check-deps ## Remove all build, test, coverage and Python artifacts
@$(DC) up clean

View File

@ -1,27 +0,0 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Acceptance test suite
This directory contains the acceptance test suite for Planetmint.
The suite uses Docker Compose to set up a single Planetmint node, run all tests, and finally stop the node. In the future we will add support for a four node network setup.
## Running the tests
It should be as easy as `make test-acceptance`.
Note that `make test-acceptance` will take some time to start the node and shut it down. If you are developing a test, or you wish to run a specific test in the acceptance test suite, first start the node with `make start`. After the node is running, you can run `pytest` inside the `python-acceptance` container with:
```bash
docker-compose run --rm python-acceptance pytest <use whatever option you need>
```
## Writing and documenting the tests
Tests are sometimes difficult to read. For acceptance tests, we try to be really explicit on what the test is doing, so please write code that is *simple* and easy to understand. We decided to use literate-programming documentation. To generate the documentation run:
```bash
make docs-acceptance
```

View File

@ -1 +0,0 @@
docs

View File

@ -1,18 +0,0 @@
FROM python:3.9
RUN apt-get update \
&& pip install -U pip \
&& apt-get autoremove \
&& apt-get clean
RUN apt-get install -y vim zsh build-essential cmake git
RUN mkdir -p /src
RUN /usr/local/bin/python -m pip install --upgrade pip
RUN pip install --upgrade meson ninja
RUN pip install --upgrade \
pycco \
websocket-client~=0.47.0 \
pytest~=3.0 \
planetmint-driver>=0.9.2 \
blns
RUN pip install planetmint-ipld>=0.0.3

View File

@ -1,86 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import pytest
CONDITION_SCRIPT = """Scenario 'ecdh': create the signature of an object
Given I have the 'keyring'
Given that I have a 'string dictionary' named 'houses'
When I create the signature of 'houses'
Then print the 'signature'"""
FULFILL_SCRIPT = """Scenario 'ecdh': Bob verifies the signature from Alice
Given I have a 'ecdh public key' from 'Alice'
Given that I have a 'string dictionary' named 'houses'
Given I have a 'signature' named 'signature'
When I verify the 'houses' has a signature in 'signature' by 'Alice'
Then print the string 'ok'"""
SK_TO_PK = """Scenario 'ecdh': Create the keypair
Given that I am known as '{}'
Given I have the 'keyring'
When I create the ecdh public key
When I create the bitcoin address
Then print my 'ecdh public key'
Then print my 'bitcoin address'"""
GENERATE_KEYPAIR = """Scenario 'ecdh': Create the keypair
Given that I am known as 'Pippo'
When I create the ecdh key
When I create the bitcoin key
Then print data"""
INITIAL_STATE = {"also": "more data"}
SCRIPT_INPUT = {
"houses": [
{
"name": "Harry",
"team": "Gryffindor",
},
{
"name": "Draco",
"team": "Slytherin",
},
],
}
metadata = {"units": 300, "type": "KG"}
ZENROOM_DATA = {"that": "is my data"}
@pytest.fixture
def gen_key_zencode():
return GENERATE_KEYPAIR
@pytest.fixture
def secret_key_to_private_key_zencode():
return SK_TO_PK
@pytest.fixture
def fulfill_script_zencode():
return FULFILL_SCRIPT
@pytest.fixture
def condition_script_zencode():
return CONDITION_SCRIPT
@pytest.fixture
def zenroom_house_assets():
return SCRIPT_INPUT
@pytest.fixture
def zenroom_script_input():
return SCRIPT_INPUT
@pytest.fixture
def zenroom_data():
return ZENROOM_DATA

View File

@ -1,174 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Basic Acceptance Test
# Here we check that the primitives of the system behave as expected.
# As you will see, this script tests basic stuff like:
#
# - create a transaction
# - check if the transaction is stored
# - check for the outputs of a given public key
# - transfer the transaction to another key
#
# We run a series of checks for each step, that is, retrieving the transaction from
# the remote system, and also checking the `outputs` of a given public key.
# ## Imports
# We need some utils from the `os` package, as we will interact with
# env variables.
import os
# For this test case we import and use the Python Driver.
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
from ipld import multihash, marshal
def test_get_tests():
# ## Set up a connection to Planetmint
# To use Planetmint we need a connection. Here we create one. By default we
# connect to localhost, but you can override this value using the env variable
# called `PLANETMINT_ENDPOINT`; a valid value must include the scheme:
# `https://example.com:9984`
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# ## Create keypairs
# This test requires the interaction between two actors with their own keypair.
# The two keypairs will be called—drum roll—Alice and Bob.
alice, bob = generate_keypair(), generate_keypair()
# ## Alice registers her bike in Planetmint
# Alice has a nice bike, and here she creates the "digital twin"
# of her bike.
bike = {"data": multihash(marshal({"bicycle": {"serial_number": 420420}}))}
# She prepares a `CREATE` transaction...
prepared_creation_tx = bdb.transactions.prepare(operation="CREATE", signers=alice.public_key, asset=bike)
# ... and she fulfills it with her private key.
fulfilled_creation_tx = bdb.transactions.fulfill(prepared_creation_tx, private_keys=alice.private_key)
# We will use the `id` of this transaction several times, so we store it in
# a variable with a short and easy name
bike_id = fulfilled_creation_tx["id"]
# Now she is ready to send it to the Planetmint Network.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_creation_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(bike_id), "Cannot find transaction {}".format(bike_id)
# Alice is now the proud owner of one unspent asset.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 1
assert bdb.outputs.get(alice.public_key)[0]["transaction_id"] == bike_id
# ## Alice transfers her bike to Bob
# After registering her bike, Alice is ready to transfer it to Bob.
# She needs to create a new `TRANSFER` transaction.
# A `TRANSFER` transaction contains a pointer to the original asset. The original asset
# is identified by the `id` of the `CREATE` transaction that defined it.
transfer_asset = {"id": bike_id}
# Alice wants to spend the one and only output available, the one with index `0`.
output_index = 0
output = fulfilled_creation_tx["outputs"][output_index]
# Here, she defines the `input` of the `TRANSFER` transaction. The `input` contains
# several keys:
#
# - `fulfillment`, taken from the previous `CREATE` transaction.
# - `fulfills`, that specifies which condition she is fulfilling.
# - `owners_before`.
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_creation_tx["id"]},
"owners_before": output["public_keys"],
}
# Now that all the elements are set, she creates the actual transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation="TRANSFER", asset=transfer_asset, inputs=transfer_input, recipients=bob.public_key
)
# ... and signs it with her private key.
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=alice.private_key)
# She finally sends the transaction to a Planetmint node.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# Now Alice has zero unspent transactions.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 0
# While Bob has one.
assert len(bdb.outputs.get(bob.public_key, spent=False)) == 1
# Bob double checks what he got was the actual bike.
bob_tx_id = bdb.outputs.get(bob.public_key, spent=False)[0]["transaction_id"]
assert bdb.transactions.retrieve(bob_tx_id) == sent_transfer_tx
transfer_asset = {"id": bike_id}
# Alice wants to spend the one and only output available, the one with index `0`.
output_index = 0
output = fulfilled_transfer_tx["outputs"][output_index]
# Here, she defines the `input` of the `TRANSFER` transaction. The `input` contains
# several keys:
#
# - `fulfillment`, taken from the previous `CREATE` transaction.
# - `fulfills`, that specifies which condition she is fulfilling.
# - `owners_before`.
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_transfer_tx["id"]},
"owners_before": output["public_keys"],
}
# Now that all the elements are set, she creates the actual transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation="TRANSFER", asset=transfer_asset, inputs=transfer_input, recipients=bob.public_key
)
# ... and signs it with her private key.
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
# She finally sends the transaction to a Planetmint node.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# from urllib3 import request
import urllib3
import json
http = urllib3.PoolManager()
# verify that 3 transactions contain the asset_id
asset_id = bike_id
url = "http://planetmint:9984/api/v1/transactions?asset_id=" + asset_id
response = http.request("GET", url)
tmp_json = json.loads(response.data.decode("utf-8"))
assert len(tmp_json) == 3
# verify that one transaction is the create TX
url = "http://planetmint:9984/api/v1/transactions?asset_id=" + asset_id + "&operation=CREATE"
response = http.request("GET", url)
tmp_json = json.loads(response.data.decode("utf-8"))
assert len(tmp_json) == 1
# verify that 2 transactions are of type transfer
url = "http://planetmint:9984/api/v1/transactions?asset_id=" + asset_id + "&operation=transfer"
response = http.request("GET", url)
tmp_json = json.loads(response.data.decode("utf-8"))
assert len(tmp_json) == 2

View File

@ -1,115 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Basic Acceptance Test
# Here we check that the primitives of the system behave as expected.
# As you will see, this script tests basic stuff like:
#
# - create a transaction
# - check if the transaction is stored
# - check for the outputs of a given public key
# - transfer the transaction to another key
#
# We run a series of checks for each step, that is, retrieving the transaction from
# the remote system, and also checking the `outputs` of a given public key.
# ## Imports
# We need some utils from the `os` package, as we will interact with
# env variables.
import os
# For this test case we import and use the Python Driver.
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
from ipld import multihash, marshal
def test_basic():
# ## Set up a connection to Planetmint
# To use Planetmint we need a connection. Here we create one. By default we
# connect to localhost, but you can override this value using the env variable
# called `PLANETMINT_ENDPOINT`; a valid value must include the scheme:
# `https://example.com:9984`
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# ## Create keypairs
# This test requires the interaction between two actors with their own keypair.
# The two keypairs will be called—drum roll—Alice and Bob.
alice, bob = generate_keypair(), generate_keypair()
# ## Alice registers her bike in Planetmint
# Alice has a nice bike, and here she creates the "digital twin"
# of her bike.
bike = [{"data": multihash(marshal({"bicycle": {"serial_number": 420420}}))}]
# She prepares a `CREATE` transaction...
prepared_creation_tx = bdb.transactions.prepare(operation="CREATE", signers=alice.public_key, assets=bike)
# ... and she fulfills it with her private key.
fulfilled_creation_tx = bdb.transactions.fulfill(prepared_creation_tx, private_keys=alice.private_key)
# We will use the `id` of this transaction several times, so we store it in
# a variable with a short and easy name
bike_id = fulfilled_creation_tx["id"]
# Now she is ready to send it to the Planetmint Network.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_creation_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(bike_id), "Cannot find transaction {}".format(bike_id)
# Alice is now the proud owner of one unspent asset.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 1
assert bdb.outputs.get(alice.public_key)[0]["transaction_id"] == bike_id
# ## Alice transfers her bike to Bob
# After registering her bike, Alice is ready to transfer it to Bob.
# She needs to create a new `TRANSFER` transaction.
# A `TRANSFER` transaction contains a pointer to the original asset. The original asset
# is identified by the `id` of the `CREATE` transaction that defined it.
transfer_assets = [{"id": bike_id}]
# Alice wants to spend the one and only output available, the one with index `0`.
output_index = 0
output = fulfilled_creation_tx["outputs"][output_index]
# Here, she defines the `input` of the `TRANSFER` transaction. The `input` contains
# several keys:
#
# - `fulfillment`, taken from the previous `CREATE` transaction.
# - `fulfills`, which specifies the condition she is fulfilling.
# - `owners_before`.
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_creation_tx["id"]},
"owners_before": output["public_keys"],
}
# Now that all the elements are set, she creates the actual transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation="TRANSFER", assets=transfer_assets, inputs=transfer_input, recipients=bob.public_key
)
# ... and signs it with her private key.
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=alice.private_key)
# She finally sends the transaction to a Planetmint node.
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# And just to be 100% sure, she also checks if she can retrieve
# it from the Planetmint node.
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# Now Alice has zero unspent transactions.
assert len(bdb.outputs.get(alice.public_key, spent=False)) == 0
# While Bob has one.
assert len(bdb.outputs.get(bob.public_key, spent=False)) == 1
# Bob double checks what he got was the actual bike.
bob_tx_id = bdb.outputs.get(bob.public_key, spent=False)[0]["transaction_id"]
assert bdb.transactions.retrieve(bob_tx_id) == sent_transfer_tx
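# The `transfer_input` dictionary built above reappears in several of these
# tests; a hypothetical helper (our own sketch, not part of the driver API)
# could factor the pattern out:

```python
def make_transfer_input(tx, output_index=0):
    # Build a TRANSFER input dict from one output of a previous transaction.
    # `tx` is a transaction dict as returned by the Planetmint driver.
    output = tx["outputs"][output_index]
    return {
        "fulfillment": output["condition"]["details"],
        "fulfills": {"output_index": output_index, "transaction_id": tx["id"]},
        "owners_before": output["public_keys"],
    }
```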

View File

@ -1,170 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Divisible assets integration testing
# This test checks if we can successfully divide assets.
# The script tests various things like:
#
# - create a transaction with a divisible asset and issue them to someone
# - check if the transaction is stored and has the right amount of tokens
# - spend some tokens
# - try to spend more tokens than available
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the `amount`
# of a given transaction.
# ## Imports
# We need some utils from the `os` package, since we will interact with
# env variables.
# We need the `pytest` package to catch the `BadRequest` exception properly.
# And of course, we also need the `BadRequest`.
import os
import pytest
from planetmint_driver.exceptions import BadRequest
# For this test case we import and use the Python Driver.
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
from ipld import multihash, marshal
def test_divisible_assets():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# Oh look, it is Alice again and she brought her friend Bob along.
alice, bob = generate_keypair(), generate_keypair()
# ## Alice creates a time sharing token
# Alice wants to go on vacation, while Bob's bike just broke down.
# Alice decides to rent her bike to Bob while she is gone.
# So she prepares a `CREATE` transaction to issue 10 tokens.
# First, she prepares an asset for a time sharing token. As you can see in
# the description, Bob and Alice agree that each token can be used to ride
# the bike for one hour.
bike_token = [
{
"data": multihash(
marshal(
{
"token_for": {"bike": {"serial_number": 420420}},
"description": "Time share token. Each token equals one hour of riding.",
}
)
),
}
]
# She prepares a `CREATE` transaction and issues 10 tokens.
# Here, Alice defines in a tuple that she wants to assign
# these 10 tokens to Bob.
prepared_token_tx = bdb.transactions.prepare(
operation="CREATE", signers=alice.public_key, recipients=[([bob.public_key], 10)], assets=bike_token
)
# She fulfills and sends the transaction.
fulfilled_token_tx = bdb.transactions.fulfill(prepared_token_tx, private_keys=alice.private_key)
bdb.transactions.send_commit(fulfilled_token_tx)
# We store the `id` of the transaction to use it later on.
bike_token_id = fulfilled_token_tx["id"]
# Let's check if the transaction was successful.
assert bdb.transactions.retrieve(bike_token_id), "Cannot find transaction {}".format(bike_token_id)
# Bob owns 10 tokens now.
assert bdb.transactions.retrieve(bike_token_id)["outputs"][0]["amount"] == "10"
# ## Bob wants to use the bike
# Now that Bob got the tokens and the sun is shining, he wants to get out
# with the bike for three hours.
# To use the bike he has to send the tokens back to Alice.
# To learn about the details of transferring a transaction check out
# [test_basic.py](./test_basic.html)
transfer_assets = [{"id": bike_token_id}]
output_index = 0
output = fulfilled_token_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_token_tx["id"]},
"owners_before": output["public_keys"],
}
# To use the tokens Bob has to reassign 7 tokens to himself and transfer
# the amount he wants to use back to Alice.
prepared_transfer_tx = bdb.transactions.prepare(
operation="TRANSFER",
assets=transfer_assets,
inputs=transfer_input,
recipients=[([alice.public_key], 3), ([bob.public_key], 7)],
)
# He signs and sends the transaction.
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# First, Bob checks if the transaction was successful.
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# There are two outputs in the transaction now.
# The first output shows that Alice got back 3 tokens...
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["amount"] == "3"
# ... while Bob still has 7 left.
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][1]["amount"] == "7"
# ## Bob wants to ride the bike again
# It's been a week and Bob wants to ride the bike again.
# Now he wants to ride for 8 hours, that's a lot Bob!
# He prepares the transaction again.
transfer_assets = [{"id": bike_token_id}]
# This time we need an `output_index` of 1, since we have two outputs
# in the `fulfilled_transfer_tx` we created before. The first output with
# index 0 is for Alice and the second output is for Bob.
# Since Bob wants to spend more of his tokens he has to provide the
# correct output with the correct amount of tokens.
output_index = 1
output = fulfilled_transfer_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_transfer_tx["id"]},
"owners_before": output["public_keys"],
}
# This time Bob only provides Alice in the `recipients` because he wants
# to spend all his tokens.
prepared_transfer_tx = bdb.transactions.prepare(
operation="TRANSFER", assets=transfer_assets, inputs=transfer_input, recipients=[([alice.public_key], 8)]
)
fulfilled_transfer_tx = bdb.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
# Oh Bob, what have you done?! You tried to spend more tokens than you had.
# Remember Bob, last time you spent 3 tokens already,
# so you only have 7 left.
with pytest.raises(BadRequest) as error:
bdb.transactions.send_commit(fulfilled_transfer_tx)
# Now Bob gets an error saying that the amount he wanted to spend is
# higher than the amount of tokens he has left.
assert error.value.args[0] == 400
message = (
"Invalid transaction (AmountError): The amount used in the "
"inputs `7` needs to be same as the amount used in the "
"outputs `8`"
)
assert error.value.args[2]["message"] == message
# We have to stop this test now, I am sorry, but Bob is pretty upset
# about his mistake. See you next time :)
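# The `AmountError` above enforces a conservation rule: the token amount
# consumed by the inputs must equal the amount issued by the outputs. A
# minimal sketch of that check (amounts are strings in the transaction
# format; the helper name is ours, not Planetmint's):

```python
def amounts_balance(input_amounts, output_amounts):
    # A TRANSFER is invalid when the amounts consumed by its inputs differ
    # from the amounts created by its outputs.
    return sum(int(a) for a in input_amounts) == sum(int(a) for a in output_amounts)
```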

View File

@ -1,49 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Double Spend testing
# This test challenges the system with double spends.
import os
from uuid import uuid4
from threading import Thread
import queue
import planetmint_driver.exceptions
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
from ipld import multihash, marshal
def test_double_create():
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
alice = generate_keypair()
results = queue.Queue()
tx = bdb.transactions.fulfill(
bdb.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=[{"data": multihash(marshal({"uuid": str(uuid4())}))}]
),
private_keys=alice.private_key,
)
def send_and_queue(tx):
try:
bdb.transactions.send_commit(tx)
results.put("OK")
except planetmint_driver.exceptions.TransportError:
results.put("FAIL")
t1 = Thread(target=send_and_queue, args=(tx,))
t2 = Thread(target=send_and_queue, args=(tx,))
t1.start()
t2.start()
results = [results.get(timeout=2), results.get(timeout=2)]
assert results.count("OK") == 1
assert results.count("FAIL") == 1
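# Conceptually, the node resolves the race by committing each transaction id
# at most once; a toy model of that first-wins behavior (our own sketch, not
# Planetmint internals):

```python
def commit(tx_id, committed):
    # Accept a transaction id the first time it is seen, reject duplicates.
    if tx_id in committed:
        return "FAIL"
    committed.add(tx_id)
    return "OK"
```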

View File

@ -1,107 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Multiple owners integration testing
# This test checks if we can successfully create and transfer a transaction
# with multiple owners.
# The script tests various things like:
#
# - create a transaction with multiple owners
# - check if the transaction is stored and has the right amount of public keys
# - transfer the transaction to a third person
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the public keys
# of a given transaction.
# ## Imports
# We need some utils from the `os` package, since we will interact with
# env variables.
import os
# For this test case we import and use the Python Driver.
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
from ipld import multihash, marshal
def test_multiple_owners():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# Hey Alice and Bob, nice to see you again!
alice, bob = generate_keypair(), generate_keypair()
# ## Alice and Bob create a transaction
# Alice and Bob just moved into a shared flat; no one can afford these
# high rents anymore. Bob suggests getting a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = {"data": multihash(marshal({"dish washer": {"serial_number": 1337}}))}
# They prepare a `CREATE` transaction. To have multiple owners, both
# Bob and Alice need to be the recipients.
prepared_dw_tx = bdb.transactions.prepare(
operation="CREATE", signers=alice.public_key, recipients=(alice.public_key, bob.public_key), assets=[dw_asset]
)
# Now they both sign the transaction by providing their private keys.
# And send it afterwards.
fulfilled_dw_tx = bdb.transactions.fulfill(prepared_dw_tx, private_keys=[alice.private_key, bob.private_key])
bdb.transactions.send_commit(fulfilled_dw_tx)
# We store the `id` of the transaction to use it later on.
dw_id = fulfilled_dw_tx["id"]
# Let's check if the transaction was successful.
assert bdb.transactions.retrieve(dw_id), "Cannot find transaction {}".format(dw_id)
# The transaction should have two public keys in the outputs.
assert len(bdb.transactions.retrieve(dw_id)["outputs"][0]["public_keys"]) == 2
# ## Alice and Bob transfer a transaction to Carol.
# Alice and Bob save a lot of money living together. They often go out
# for dinner and don't cook at home. But now they don't have any dishes to
# wash, so they decide to sell the dish washer to their friend Carol.
# Hey Carol, nice to meet you!
carol = generate_keypair()
# Alice and Bob prepare the transaction to transfer the dish washer to
# Carol.
transfer_assets = [{"id": dw_id}]
output_index = 0
output = fulfilled_dw_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_dw_tx["id"]},
"owners_before": output["public_keys"],
}
# Now they create the transaction...
prepared_transfer_tx = bdb.transactions.prepare(
operation="TRANSFER", assets=transfer_assets, inputs=transfer_input, recipients=carol.public_key
)
# ... and sign it with their private keys, then send it.
fulfilled_transfer_tx = bdb.transactions.fulfill(
prepared_transfer_tx, private_keys=[alice.private_key, bob.private_key]
)
sent_transfer_tx = bdb.transactions.send_commit(fulfilled_transfer_tx)
# They check if the transaction was successful.
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# The owners before should include both Alice and Bob.
assert len(bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["inputs"][0]["owners_before"]) == 2
# While the new owner is Carol.
assert bdb.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["public_keys"][0] == carol.public_key
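# The reason both private keys were needed above: an output held by multiple
# owners can only be spent when every listed owner signs. A simplified model
# (real outputs use threshold crypto-conditions; this helper is illustrative
# only):

```python
def can_spend(output_public_keys, signing_keys):
    # Every owner listed on the output must be among the signers.
    return set(output_public_keys) <= set(signing_keys)
```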

View File

@ -1,134 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# ## Testing potentially hazardous strings
# This test uses a library of `naughty` strings (code injections, weird Unicode characters, etc.) as both keys and values.
# We expect either a successful tx or, when a naughty string used as a key
# violates some key constraint, a well-formatted error message.
# ## Imports
# We need some utils from the `os` package, since we will interact with
# env variables.
import os
# Since the naughty strings get encoded and decoded in odd ways,
# we'll use a regex to sweep those details under the rug.
import re
# We'll use a nice library of naughty strings...
from blns import blns
# And parameterize our test so each one is treated as a separate test case
import pytest
# For this test case we import and use the Python Driver.
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
from planetmint_driver.exceptions import BadRequest
from ipld import multihash, marshal
naughty_strings = blns.all()
skipped_naughty_strings = [
"1.00",
"$1.00",
"-1.00",
"-$1.00",
"0.00",
"0..0",
".",
"0.0.0",
"-.",
",./;'[]\\-=",
"ثم نفس سقطت وبالتحديد،, جزيرتي باستخدام أن دنو. إذ هنا؟ الستار وتنصيب كان. أهّل ايطاليا، بريطانيا-فرنسا قد أخذ. سليمان، إتفاقية بين ما, يذكر الحدود أي بعد, معاملة بولندا، الإطلاق عل إيو.",
"test\x00",
"Ṱ̺̺̕o͞ ̷i̲̬͇̪͙n̝̗͕v̟̜̘̦͟o̶̙̰̠kè͚̮̺̪̹̱̤ ̖t̝͕̳̣̻̪͞h̼͓̲̦̳̘̲e͇̣̰̦̬͎ ̢̼̻̱̘h͚͎͙̜̣̲ͅi̦̲̣̰̤v̻͍e̺̭̳̪̰-m̢iͅn̖̺̞̲̯̰d̵̼̟͙̩̼̘̳ ̞̥̱̳̭r̛̗̘e͙p͠r̼̞̻̭̗e̺̠̣͟s̘͇̳͍̝͉e͉̥̯̞̲͚̬͜ǹ̬͎͎̟̖͇̤t͍̬̤͓̼̭͘ͅi̪̱n͠g̴͉ ͏͉ͅc̬̟h͡a̫̻̯͘o̫̟̖͍̙̝͉s̗̦̲.̨̹͈̣",
"̡͓̞ͅI̗̘̦͝n͇͇͙v̮̫ok̲̫̙͈i̖͙̭̹̠̞n̡̻̮̣̺g̲͈͙̭͙̬͎ ̰t͔̦h̞̲e̢̤ ͍̬̲͖f̴̘͕̣è͖ẹ̥̩l͖͔͚i͓͚̦͠n͖͍̗͓̳̮g͍ ̨o͚̪͡f̘̣̬ ̖̘͖̟͙̮c҉͔̫͖͓͇͖ͅh̵̤̣͚͔á̗̼͕ͅo̼̣̥s̱͈̺̖̦̻͢.̛̖̞̠̫̰",
"̗̺͖̹̯͓Ṯ̤͍̥͇͈h̲́e͏͓̼̗̙̼̣͔ ͇̜̱̠͓͍ͅN͕͠e̗̱z̘̝̜̺͙p̤̺̹͍̯͚e̠̻̠͜r̨̤͍̺̖͔̖̖d̠̟̭̬̝͟i̦͖̩͓͔̤a̠̗̬͉̙n͚͜ ̻̞̰͚ͅh̵͉i̳̞v̢͇ḙ͎͟-҉̭̩̼͔m̤̭̫i͕͇̝̦n̗͙ḍ̟ ̯̲͕͞ǫ̟̯̰̲͙̻̝f ̪̰̰̗̖̭̘͘c̦͍̲̞͍̩̙ḥ͚a̮͎̟̙͜ơ̩̹͎s̤.̝̝ ҉Z̡̖̜͖̰̣͉̜a͖̰͙̬͡l̲̫̳͍̩g̡̟̼̱͚̞̬ͅo̗͜.̟",
"̦H̬̤̗̤͝e͜ ̜̥̝̻͍̟́w̕h̖̯͓o̝͙̖͎̱̮ ҉̺̙̞̟͈W̷̼̭a̺̪͍į͈͕̭͙̯̜t̶̼̮s̘͙͖̕ ̠̫̠B̻͍͙͉̳ͅe̵h̵̬͇̫͙i̹͓̳̳̮͎̫̕n͟d̴̪̜̖ ̰͉̩͇͙̲͞ͅT͖̼͓̪͢h͏͓̮̻e̬̝̟ͅ ̤̹̝W͙̞̝͔͇͝ͅa͏͓͔̹̼̣l̴͔̰̤̟͔ḽ̫.͕",
'"><script>alert(document.title)</script>',
"'><script>alert(document.title)</script>",
"><script>alert(document.title)</script>",
"</script><script>alert(document.title)</script>",
"< / script >< script >alert(document.title)< / script >",
" onfocus=alert(document.title) autofocus ",
'" onfocus=alert(document.title) autofocus ',
"' onfocus=alert(document.title) autofocus ",
"scriptalert(document.title)/script",
"/dev/null; touch /tmp/blns.fail ; echo",
"../../../../../../../../../../../etc/passwd%00",
"../../../../../../../../../../../etc/hosts",
"() { 0; }; touch /tmp/blns.shellshock1.fail;",
"() { _; } >_[$($())] { touch /tmp/blns.shellshock2.fail; }",
]
naughty_strings = [naughty for naughty in naughty_strings if naughty not in skipped_naughty_strings]
# This is our base test case, but we'll reuse it to send naughty strings as both keys and values.
def send_naughty_tx(assets, metadata):
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
bdb = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
# Here's Alice.
alice = generate_keypair()
# Alice is in a naughty mood today, so she creates a tx with some naughty strings
prepared_transaction = bdb.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=assets, metadata=metadata
)
# She fulfills the transaction
fulfilled_transaction = bdb.transactions.fulfill(prepared_transaction, private_keys=alice.private_key)
# The fulfilled tx gets sent to the BDB network
try:
sent_transaction = bdb.transactions.send_commit(fulfilled_transaction)
except BadRequest as e:
sent_transaction = e
# If her key contained a '.', began with a '$', or contained a NUL character
regex = r".*\..*|\$.*|.*\x00.*"
key = next(iter(metadata))
if re.match(regex, key):
# Then she expects a nicely formatted error code
status_code = sent_transaction.status_code
error = sent_transaction.error
regex = (
r"\{\s*\n*"
r'\s*"message":\s*"Invalid transaction \(ValidationError\):\s*'
r"Invalid key name.*The key name cannot contain characters.*\n*"
r'\s*"status":\s*400\n*'
r"\s*\}\n*"
)
assert status_code == 400
assert re.fullmatch(regex, error), sent_transaction
# Otherwise, she expects to see her transaction in the database
elif "id" in sent_transaction.keys():
tx_id = sent_transaction["id"]
assert bdb.transactions.retrieve(tx_id)
# If neither condition was true, then something weird happened...
else:
raise TypeError(sent_transaction)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_keys(naughty_string):
assets = [{"data": multihash(marshal({naughty_string: "nice_value"}))}]
metadata = multihash(marshal({naughty_string: "nice_value"}))
send_naughty_tx(assets, metadata)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_values(naughty_string):
assets = [{"data": multihash(marshal({"nice_key": naughty_string}))}]
metadata = multihash(marshal({"nice_key": naughty_string}))
send_naughty_tx(assets, metadata)
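# The branch taken in `send_naughty_tx` hinges on the key regex used above;
# isolated, the check behaves like this (helper name ours):

```python
import re

def key_is_invalid(key):
    # Keys containing a '.', starting with '$', or containing a NUL byte
    # are rejected by the server with a ValidationError.
    return re.match(r".*\..*|\$.*|.*\x00.*", key) is not None
```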

View File

@ -1,135 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Stream Acceptance Test
# This test checks if the event stream works correctly. The basic idea of this
# test is to generate some random **valid** transactions, send them to a
# Planetmint node, and expect those transactions to be returned by the valid
# transactions Stream API. During this test, two threads work together,
# sharing a queue to exchange events.
#
# - The *main thread* first creates and sends the transactions to Planetmint;
# then it runs through all events in the shared queue to check if all
# transactions sent have been validated by Planetmint.
# - The *listen thread* listens to the events coming from Planetmint and puts
# them in a queue shared with the main thread.
import os
import queue
import json
from threading import Thread, Event
from uuid import uuid4
from ipld import multihash, marshal
# For this script, we need to set up a websocket connection, that's the reason
# we import the
# [websocket](https://github.com/websocket-client/websocket-client) module
from websocket import create_connection
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
def test_stream():
# ## Set up the test
# We use the env variable `PLANETMINT_ENDPOINT` to know where to connect.
# Check [test_basic.py](./test_basic.html) for more information.
BDB_ENDPOINT = os.environ.get("PLANETMINT_ENDPOINT")
# *That's pretty fragile, but let's do it like this for now.*
WS_ENDPOINT = "ws://{}:9985/api/v1/streams/valid_transactions".format(BDB_ENDPOINT.rsplit(":")[0])
bdb = Planetmint(BDB_ENDPOINT)
# Hello to Alice again, she is pretty active in those tests, good job
# Alice!
alice = generate_keypair()
# We need a few variables to keep the state: `sent` keeps track of all
# transactions Alice sent to Planetmint, while `received` holds the
# transactions Planetmint validated and sent back to her.
sent = []
received = queue.Queue()
# In this test we use a websocket. The websocket must be started **before**
# sending transactions to Planetmint, otherwise we might lose some
# transactions. The `ws_ready` event is used to synchronize the main thread
# with the listen thread.
ws_ready = Event()
# ## Listening to events
# This is the function run by the complementary thread.
def listen():
# First we connect to the remote endpoint using the WebSocket protocol.
ws = create_connection(WS_ENDPOINT)
# After the connection has been set up, we can signal the main thread
# to proceed (continue reading, it should make sense in a second.)
ws_ready.set()
# It's time to consume all events coming from the Planetmint stream API.
# Every time a new event is received, it is put in the queue shared
# with the main thread.
while True:
result = ws.recv()
received.put(result)
# Put `listen` in a thread, and start it. Note that `listen` is a local
# function and it can access all variables in the enclosing function.
t = Thread(target=listen, daemon=True)
t.start()
# ## Pushing the transactions to Planetmint
# After starting the listen thread, we wait for it to connect, and then we
# proceed.
ws_ready.wait()
# Here we prepare, sign, and send ten different `CREATE` transactions. To
# make sure each transaction is different from the other, we generate a
# random `uuid`.
for _ in range(10):
tx = bdb.transactions.fulfill(
bdb.transactions.prepare(
operation="CREATE",
signers=alice.public_key,
assets=[{"data": multihash(marshal({"uuid": str(uuid4())}))}],
),
private_keys=alice.private_key,
)
# We don't want to wait for each transaction to be in a block. By using
# `async` mode, we make sure that the driver returns as soon as the
# transaction is pushed to the Planetmint API. Remember: we expect all
# transactions to be in the shared queue: this is a two phase test,
# first we send a bunch of transactions, then we check if they are
# valid (and, in this case, they should).
bdb.transactions.send_async(tx)
# The `id` of every sent transaction is then stored in a list.
sent.append(tx["id"])
# ## Check the valid transactions coming from Planetmint
# Now we are ready to check if Planetmint did its job. A simple way to
# check if all sent transactions have been processed is to **remove** from
# `sent` the transactions we get from the *listen thread*. At one point in
# time, `sent` should be empty, and we exit the test.
while sent:
# To avoid waiting forever, we have an arbitrary timeout of 5
# seconds: it should be enough time for Planetmint to create
# blocks, in fact a new block is created every second. If we hit
# the timeout, then game over ¯\\\_(ツ)\_/¯
try:
event = received.get(timeout=5)
txid = json.loads(event)["transaction_id"]
except queue.Empty:
assert False, "Did not receive all expected transactions"
# Last thing is to try to remove the `txid` from the set of sent
# transactions. If this test is running in parallel with others, we
# might get a transaction id of another test, and `remove` can fail.
# It's OK if this happens.
try:
sent.remove(txid)
except ValueError:
pass
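# Stripped of the websocket, the thread handshake in this test reduces to an
# `Event` plus a shared `Queue`; a self-contained sketch of the same pattern:

```python
import queue
from threading import Thread, Event

def run_pipeline(items):
    # A listener thread signals readiness, then forwards items (standing in
    # for ws.recv() events) to the main thread through a shared queue.
    received = queue.Queue()
    ready = Event()

    def listen():
        ready.set()  # let the main thread proceed, as ws_ready.set() does above
        for item in items:
            received.put(item)

    t = Thread(target=listen, daemon=True)
    t.start()
    ready.wait()
    out = [received.get(timeout=2) for _ in items]
    t.join()
    return out
```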

View File

@ -1,162 +0,0 @@
import os
import json
import base58
from hashlib import sha3_256
from planetmint_cryptoconditions.types.ed25519 import Ed25519Sha256
from planetmint_cryptoconditions.types.zenroom import ZenroomSha256
from zenroom import zencode_exec
from planetmint_driver import Planetmint
from planetmint_driver.crypto import generate_keypair
from ipld import multihash, marshal
def test_zenroom_signing(
gen_key_zencode,
secret_key_to_private_key_zencode,
fulfill_script_zencode,
zenroom_data,
zenroom_house_assets,
zenroom_script_input,
condition_script_zencode,
):
biolabs = generate_keypair()
version = "2.0"
alice = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
bob = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
zen_public_keys = json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Alice"), keys=json.dumps({"keyring": alice})).output
)
zen_public_keys.update(
json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Bob"), keys=json.dumps({"keyring": bob})).output
)
)
zenroomscpt = ZenroomSha256(script=fulfill_script_zencode, data=zenroom_data, keys=zen_public_keys)
print(f"zenroom is: {zenroomscpt.script}")
# CRYPTO-CONDITIONS: generate the condition uri
condition_uri_zen = zenroomscpt.condition.serialize_uri()
print(f"\nzenroom condition URI: {condition_uri_zen}")
# CRYPTO-CONDITIONS: construct an unsigned fulfillment dictionary
unsigned_fulfillment_dict_zen = {
"type": zenroomscpt.TYPE_NAME,
"public_key": base58.b58encode(biolabs.public_key).decode(),
}
output = {
"amount": "10",
"condition": {
"details": unsigned_fulfillment_dict_zen,
"uri": condition_uri_zen,
},
"public_keys": [
biolabs.public_key,
],
}
input_ = {
"fulfillment": None,
"fulfills": None,
"owners_before": [
biolabs.public_key,
],
}
script_ = {
"code": {"type": "zenroom", "raw": "test_string", "parameters": [{"obj": "1"}, {"obj": "2"}]}, # obsolete
"state": "dd8bbd234f9869cab4cc0b84aa660e9b5ef0664559b8375804ee8dce75b10576", #
"input": zenroom_script_input,
"output": ["ok"],
"policies": {},
}
metadata = {"result": {"output": ["ok"]}}
token_creation_tx = {
"operation": "CREATE",
"assets": [{"data": multihash(marshal({"test": "my asset"}))}],
"metadata": multihash(marshal(metadata)),
"script": script_,
"outputs": [
output,
],
"inputs": [
input_,
],
"version": version,
"id": None,
}
# JSON: serialize the transaction-without-id to a json formatted string
tx = json.dumps(
token_creation_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
script_ = json.dumps(script_)
# major workflow:
# we store the fulfill script in the transaction/message (zenroom-sha)
# the condition script is used to fulfill the transaction and create the signature
#
# the server should pick the fulfill script, recreate the zenroom-sha, and verify the signature
signed_input = zenroomscpt.sign(script_, condition_script_zencode, alice)
input_signed = json.loads(signed_input)
input_signed["input"]["signature"] = input_signed["output"]["signature"]
del input_signed["output"]["signature"]
del input_signed["output"]["logs"]
input_signed["output"] = ["ok"] # define expected output that is to be compared
input_msg = json.dumps(input_signed)
assert zenroomscpt.validate(message=input_msg)
tx = json.loads(tx)
fulfillment_uri_zen = zenroomscpt.serialize_uri()
tx["inputs"][0]["fulfillment"] = fulfillment_uri_zen
tx["script"] = input_signed
tx["id"] = None
json_str_tx = json.dumps(tx, sort_keys=True, skipkeys=False, separators=(",", ":"))
# SHA3: hash the serialized id-less transaction to generate the id
shared_creation_txid = sha3_256(json_str_tx.encode()).hexdigest()
tx["id"] = shared_creation_txid
# tx = json.dumps(tx)
print(f"TX \n{tx}")
plntmnt = Planetmint(os.environ.get("PLANETMINT_ENDPOINT"))
sent_transfer_tx = plntmnt.transactions.send_commit(tx)
print(f"\n\nstatus and result : + {sent_transfer_tx}")
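# The id derivation used above (hash the deterministically serialized,
# id-less transaction) can be isolated into a small function; a sketch under
# the assumption that this serialization matches the test's `json.dumps` call:

```python
import json
from hashlib import sha3_256

def compute_tx_id(tx):
    # The transaction id is the SHA3-256 hex digest of the transaction
    # serialized with id=None, sorted keys, and compact separators.
    id_less = dict(tx, id=None)
    serialized = json.dumps(id_less, sort_keys=True, separators=(",", ":"))
    return sha3_256(serialized.encode()).hexdigest()
```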

View File

@ -1,3 +1,4 @@
---
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@ -30,7 +30,7 @@ def test_bigchain_instance_is_initialized_when_conf_provided():
def test_load_validation_plugin_loads_default_rules_without_name():
from planetmint import config_utils
from planetmint.validation import BaseValidationRules
from planetmint.application.basevalidationrules import BaseValidationRules
assert config_utils.load_validation_plugin() == BaseValidationRules
@ -120,11 +120,8 @@ def test_env_config(monkeypatch):
assert result == expected
@pytest.mark.skip
def test_autoconfigure_read_both_from_file_and_env(
monkeypatch, request
): # TODO Disabled until we create a better config format
return
@pytest.mark.skip(reason="Disabled until we create a better config format")
def test_autoconfigure_read_both_from_file_and_env(monkeypatch, request):
# constants
DATABASE_HOST = "test-host"
DATABASE_NAME = "test-dbname"
@ -210,7 +207,7 @@ def test_autoconfigure_read_both_from_file_and_env(
"advertised_port": WSSERVER_ADVERTISED_PORT,
},
"database": database_mongodb,
"tendermint": {"host": "localhost", "port": 26657, "version": "v0.34.15"},
"tendermint": {"host": "localhost", "port": 26657, "version": "v0.34.24"},
"log": {
"file": LOG_FILE,
"level_console": "debug",
@ -315,11 +312,10 @@ def test_write_config():
),
)
def test_database_envs(env_name, env_value, config_key, monkeypatch):
monkeypatch.setattr("os.environ", {env_name: env_value})
planetmint.config_utils.autoconfigure()
expected_config = Config().get()
expected_config["database"][config_key] = env_value
assert planetmint.config == expected_config
assert planetmint.config.Config().get() == expected_config

View File

@ -1,3 +1,4 @@
---
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
@ -6,19 +7,8 @@
version: '2.2'
services:
clean-shared:
image: alpine
command: ["/scripts/clean-shared.sh"]
volumes:
- ./integration/scripts/clean-shared.sh:/scripts/clean-shared.sh
- shared:/shared
planetmint-all-in-one:
build:
context: .
dockerfile: Dockerfile-all-in-one
depends_on:
- clean-shared
image: planetmint/planetmint-aio:latest
expose:
- "22"
- "9984"
@ -27,8 +17,6 @@ services:
- "26657"
- "26658"
command: ["/usr/src/app/scripts/pre-config-planetmint.sh", "/usr/src/app/scripts/all-in-one.bash"]
environment:
SCALE: ${SCALE:-4}
volumes:
- ./integration/scripts:/usr/src/app/scripts
- shared:/shared
@ -48,6 +36,3 @@ services:
- ./integration/scripts:/scripts
- ./integration/cli:/tests
- shared:/shared
volumes:
shared:


@@ -1,3 +1,4 @@
---
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
@@ -15,15 +16,15 @@ services:
command: mongod
restart: always
tarantool:
image: tarantool/tarantool:2.8.3
image: tarantool/tarantool:2.10.4
ports:
- "5200:5200"
- "3301:3301"
- "3303:3303"
- "8081:8081"
volumes:
- ./planetmint/backend/tarantool/basic.lua:/opt/tarantool/basic.lua
command: tarantool /opt/tarantool/basic.lua
- ./planetmint/backend/tarantool/opt/init.lua:/opt/tarantool/init.lua
entrypoint: tarantool /opt/tarantool/init.lua
restart: always
planetmint:
depends_on:
@@ -32,7 +33,7 @@ services:
- tarantool
build:
context: .
dockerfile: Dockerfile-dev
dockerfile: Dockerfile
volumes:
- ./planetmint:/usr/src/app/planetmint
- ./tests:/usr/src/app/tests
@@ -60,11 +61,11 @@ services:
interval: 3s
timeout: 5s
retries: 5
command: 'scripts/entrypoint.sh'
command: 'planetmint -l DEBUG start'
restart: always
tendermint:
image: tendermint/tendermint:v0.34.15
image: tendermint/tendermint:v0.34.24
# volumes:
# - ./tmdata:/tendermint
entrypoint: ''
@@ -86,17 +87,6 @@ services:
image: appropriate/curl
command: /bin/sh -c "curl -s http://planetmint:9984/ > /dev/null && curl -s http://tendermint:26657/ > /dev/null"
# Planetmint setup to do acceptance testing with Python
python-acceptance:
build:
context: .
dockerfile: ./acceptance/python/Dockerfile
volumes:
- ./acceptance/python/docs:/docs
- ./acceptance/python/src:/src
environment:
- PLANETMINT_ENDPOINT=planetmint
# Build docs only
# docker-compose build bdocs
# docker-compose up -d bdocs
@@ -105,9 +95,9 @@ services:
- vdocs
build:
context: .
dockerfile: Dockerfile-dev
dockerfile: Dockerfile
args:
backend: tarantool
backend: tarantool_db
volumes:
- .:/usr/src/app/
command: make -C docs/root html


@@ -3,7 +3,7 @@
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXBUILD = poetry run sphinx-build
PAPER = a4
BUILDDIR = build


@@ -11,7 +11,9 @@ import os.path
from transactions.common.input import Input
from transactions.common.transaction_link import TransactionLink
from planetmint import lib
import planetmint.abci.block
from transactions.types.assets.create import Create
from transactions.types.assets.transfer import Transfer
from planetmint.web import server
@@ -187,7 +189,6 @@ def main():
ctx["public_keys_transfer"] = tx_transfer.outputs[0].public_keys[0]
ctx["tx_transfer_id"] = tx_transfer.id
# privkey_transfer_last = 'sG3jWDtdTXUidBJK53ucSTrosktG616U3tQHBk81eQe'
pubkey_transfer_last = "3Af3fhhjU6d9WecEM9Uw5hfom9kNEwE7YuDWdqAUssqm"
cid = 0
@@ -210,7 +211,7 @@ def main():
signature = "53wxrEQDYk1dXzmvNSytbCfmNVnPqPkDQaTnAe8Jf43s6ssejPxezkCvUnGTnduNUmaLjhaan1iRLi3peu6s5DzA"
app_hash = "f6e0c49c6d94d6924351f25bb334cf2a99af4206339bf784e741d1a5ab599056"
block = lib.Block(height=1, transactions=[tx.to_dict()], app_hash=app_hash)
block = planetmint.abci.block.Block(height=1, transactions=[tx.to_dict()], app_hash=app_hash)
block_dict = block._asdict()
block_dict.pop("app_hash")
ctx["block"] = pretty_json(block_dict)


@@ -1,46 +0,0 @@
aafigure==0.6
alabaster==0.7.12
Babel==2.10.1
certifi==2022.12.7
charset-normalizer==2.0.12
commonmark==0.9.1
docutils==0.17.1
idna
imagesize==1.3.0
importlib-metadata==4.11.3
Jinja2==3.0.0
markdown-it-py==2.1.0
MarkupSafe==2.1.1
mdit-py-plugins==0.3.0
mdurl==0.1.1
myst-parser==0.17.2
packaging==21.3
pockets==0.9.1
Pygments==2.12.0
pyparsing==3.0.8
pytz==2022.1
PyYAML>=5.4.0
requests>=2.25.1
six==1.16.0
snowballstemmer==2.2.0
Sphinx==4.5.0
sphinx-rtd-theme==1.0.0
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-httpdomain==1.8.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-napoleon==0.7
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
urllib3==1.26.9
wget==3.2
zipp==3.8.0
nest-asyncio==1.5.5
sphinx-press-theme==0.8.0
sphinx-documatt-theme
base58>=2.1.1
pynacl==1.4.0
zenroom==2.1.0.dev1655293214
pyasn1==0.4.8
cryptography==3.4.7


@@ -198,7 +198,6 @@ todo_include_todos = False
# a list of builtin themes.
#
html_theme = "press"
# html_theme = 'sphinx_documatt_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the


@@ -4,7 +4,7 @@ Content-Type: application/json
{
"assets": "/assets/",
"blocks": "/blocks/",
"docs": "https://docs.planetmint.io/projects/server/en/v1.4.1/http-client-server-api.html",
"docs": "https://docs.planetmint.io/projects/server/en/v2.2.4/http-client-server-api.html",
"metadata": "/metadata/",
"outputs": "/outputs/",
"streamedblocks": "ws://localhost:9985/api/v1/streams/valid_blocks",


@@ -6,7 +6,7 @@ Content-Type: application/json
"v1": {
"assets": "/api/v1/assets/",
"blocks": "/api/v1/blocks/",
"docs": "https://docs.planetmint.io/projects/server/en/v1.4.1/http-client-server-api.html",
"docs": "https://docs.planetmint.io/projects/server/en/v2.2.4/http-client-server-api.html",
"metadata": "/api/v1/metadata/",
"outputs": "/api/v1/outputs/",
"streamedblocks": "ws://localhost:9985/api/v1/streams/valid_blocks",
@@ -15,7 +15,7 @@ Content-Type: application/json
"validators": "/api/v1/validators"
}
},
"docs": "https://docs.planetmint.io/projects/server/en/v1.4.1/",
"docs": "https://docs.planetmint.io/projects/server/en/v2.2.4/",
"software": "Planetmint",
"version": "1.4.1"
"version": "2.2.4"
}


@@ -30,9 +30,9 @@ The version of Planetmint Server described in these docs only works well with Te
```bash
$ sudo apt install -y unzip
$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_v0.34.15_linux_amd64.zip
$ unzip tendermint_v0.34.15_linux_amd64.zip
$ rm tendermint_v0.34.15_linux_amd64.zip
$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_v0.34.24_linux_amd64.zip
$ unzip tendermint_v0.34.24_linux_amd64.zip
$ rm tendermint_v0.34.24_linux_amd64.zip
$ sudo mv tendermint /usr/local/bin
```


@@ -59,8 +59,8 @@ $ sudo apt install mongodb
```
Tendermint can be installed and started as follows
```
$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_0.34.15_linux_amd64.tar.gz
$ tar zxf tendermint_0.34.15_linux_amd64.tar.gz
$ wget https://github.com/tendermint/tendermint/releases/download/v0.34.24/tendermint_0.34.24_linux_amd64.tar.gz
$ tar zxf tendermint_0.34.24_linux_amd64.tar.gz
$ ./tendermint init
$ ./tendermint node --proxy_app=tcp://localhost:26658
```


@@ -60,7 +60,7 @@ you can do this:
.. code::
$ mkdir $(pwd)/tmdata
$ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:v0.34.15 init
$ docker run --rm -v $(pwd)/tmdata:/tendermint/config tendermint/tendermint:v0.34.24 init
$ cat $(pwd)/tmdata/genesis.json
You should see something that looks like:


@@ -1,23 +0,0 @@
<!---
Copyright © 2020 Interplanetary Database Association e.V.,
Planetmint and IPDB software contributors.
SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
Code is Apache-2.0 and docs are CC-BY-4.0
--->
# Integration test suite
This directory contains the integration test suite for Planetmint.
The suite uses Docker Compose to spin up multiple Planetmint nodes, run `pytest`-based tests as well as CLI tests, and tear down.
## Running the tests
Run `make test-integration` in the project root directory.
By default the integration test suite spins up four Planetmint nodes. If you want to run a different configuration, you can pass `SCALE=<number of nodes>` as an environment variable.
## Writing and documenting the tests
Tests are sometimes difficult to read. For integration tests, we try to be really explicit on what the test is doing, so please write code that is *simple* and easy to understand. We decided to use literate-programming documentation. To generate the documentation for python tests run:
```bash
make docs-integration
```


@@ -1,47 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Add chain migration test
check_status () {
status=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@$1 'bash -s' < scripts/election.sh show_election $2 | tail -n 1)
status=${status#*=}
if [ $status != $3 ]; then
exit 1
fi
}
# Read host names from shared
readarray -t HOSTNAMES < /shared/hostnames
# Split into proposer and approvers
PROPOSER=${HOSTNAMES[0]}
APPROVERS=${HOSTNAMES[@]:1}
# Propose chain migration
result=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@${PROPOSER} 'bash -s' < scripts/election.sh migrate)
# Check if election is ongoing and approve chain migration
for APPROVER in ${APPROVERS[@]}; do
# Check if election is still ongoing
check_status ${APPROVER} $result ongoing
ssh -o "StrictHostKeyChecking=no" -i ~/.ssh/id_rsa root@${APPROVER} 'bash -s' < scripts/election.sh approve $result
done
# Status of election should be concluded
status=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@${PROPOSER} 'bash -s' < scripts/election.sh show_election $result)
status=${status#*INFO:planetmint.commands.planetmint:}
status=("$status[@]")
# TODO: Get status, chain_id, app_hash and validators to restore planetmint on all nodes
# References:
# https://github.com/bigchaindb/BEPs/tree/master/42
# http://docs.bigchaindb.com/en/latest/installation/node-setup/bigchaindb-cli.html
for word in $status; do
echo $word
done
echo ${status#*validators=}


@@ -1,33 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
check_status () {
status=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@$1 'bash -s' < scripts/election.sh show_election $2 | tail -n 1)
status=${status#*=}
if [ $status != $3 ]; then
exit 1
fi
}
# Read host names from shared
readarray -t HOSTNAMES < /shared/hostnames
# Split into proposer and approvers
PROPOSER=${HOSTNAMES[0]}
APPROVERS=${HOSTNAMES[@]:1}
# Propose validator upsert
result=$(ssh -o "StrictHostKeyChecking=no" -i \~/.ssh/id_rsa root@${PROPOSER} 'bash -s' < scripts/election.sh elect 2)
# Check if election is ongoing and approve validator upsert
for APPROVER in ${APPROVERS[@]}; do
# Check if election is still ongoing
check_status ${APPROVER} $result ongoing
ssh -o "StrictHostKeyChecking=no" -i ~/.ssh/id_rsa root@${APPROVER} 'bash -s' < scripts/election.sh approve $result
done
# Status of election should be concluded
check_status ${PROPOSER} $result concluded


@@ -1 +0,0 @@
docs


@@ -1,19 +0,0 @@
FROM python:3.9
RUN apt-get update \
&& pip install -U pip \
&& apt-get autoremove \
&& apt-get clean
RUN apt-get install -y vim
RUN apt-get update
RUN apt-get install -y build-essential cmake openssh-client openssh-server git
RUN apt-get install -y zsh
RUN mkdir -p /src
RUN pip install --upgrade meson ninja
RUN pip install --upgrade \
pytest~=6.2.5 \
pycco \
websocket-client~=0.47.0 \
planetmint-driver>=9.2.0 \
blns


@@ -1,86 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import pytest
CONDITION_SCRIPT = """Scenario 'ecdh': create the signature of an object
Given I have the 'keyring'
Given that I have a 'string dictionary' named 'houses'
When I create the signature of 'houses'
Then print the 'signature'"""
FULFILL_SCRIPT = """Scenario 'ecdh': Bob verifies the signature from Alice
Given I have a 'ecdh public key' from 'Alice'
Given that I have a 'string dictionary' named 'houses'
Given I have a 'signature' named 'signature'
When I verify the 'houses' has a signature in 'signature' by 'Alice'
Then print the string 'ok'"""
SK_TO_PK = """Scenario 'ecdh': Create the keypair
Given that I am known as '{}'
Given I have the 'keyring'
When I create the ecdh public key
When I create the bitcoin address
Then print my 'ecdh public key'
Then print my 'bitcoin address'"""
GENERATE_KEYPAIR = """Scenario 'ecdh': Create the keypair
Given that I am known as 'Pippo'
When I create the ecdh key
When I create the bitcoin key
Then print data"""
INITIAL_STATE = {"also": "more data"}
SCRIPT_INPUT = {
"houses": [
{
"name": "Harry",
"team": "Gryffindor",
},
{
"name": "Draco",
"team": "Slytherin",
},
],
}
metadata = {"units": 300, "type": "KG"}
ZENROOM_DATA = {"that": "is my data"}
@pytest.fixture
def gen_key_zencode():
return GENERATE_KEYPAIR
@pytest.fixture
def secret_key_to_private_key_zencode():
return SK_TO_PK
@pytest.fixture
def fulfill_script_zencode():
return FULFILL_SCRIPT
@pytest.fixture
def condition_script_zencode():
return CONDITION_SCRIPT
@pytest.fixture
def zenroom_house_assets():
return SCRIPT_INPUT
@pytest.fixture
def zenroom_script_input():
return SCRIPT_INPUT
@pytest.fixture
def zenroom_data():
return ZENROOM_DATA


@@ -1,35 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from typing import List
from planetmint_driver import Planetmint
class Hosts:
hostnames = []
connections = []
def __init__(self, filepath):
self.set_hostnames(filepath=filepath)
self.set_connections()
def set_hostnames(self, filepath) -> None:
with open(filepath) as f:
self.hostnames = f.readlines()
def set_connections(self) -> None:
self.connections = list(map(lambda h: Planetmint(h), self.hostnames))
def get_connection(self, index=0) -> Planetmint:
return self.connections[index]
def get_transactions(self, tx_id) -> List:
return list(map(lambda connection: connection.transactions.retrieve(tx_id), self.connections))
def assert_transaction(self, tx_id) -> None:
txs = self.get_transactions(tx_id)
for tx in txs:
assert txs[0] == tx, "Cannot find transaction {}".format(tx_id)


@@ -1,87 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# import Planetmint and create object
from planetmint_driver.crypto import generate_keypair
# import helper to manage multiple nodes
from .helper.hosts import Hosts
import time
def test_basic():
    # Set up a connection to the Planetmint integration test nodes
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
    # generate a keypair
alice = generate_keypair()
# create a digital asset for Alice
game_boy_token = [
{
"data": {
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
}
]
# prepare the transaction with the digital asset and issue 10 tokens to bob
prepared_creation_tx = pm_alpha.transactions.prepare(
operation="CREATE",
metadata={
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
signers=alice.public_key,
recipients=[([alice.public_key], 10)],
assets=game_boy_token,
)
# fulfill and send the transaction
fulfilled_creation_tx = pm_alpha.transactions.fulfill(prepared_creation_tx, private_keys=alice.private_key)
pm_alpha.transactions.send_commit(fulfilled_creation_tx)
time.sleep(1)
creation_tx_id = fulfilled_creation_tx["id"]
# Assert that transaction is stored on all planetmint nodes
hosts.assert_transaction(creation_tx_id)
# Transfer
    # create the output and input for the transaction
transfer_assets = [{"id": creation_tx_id}]
output_index = 0
output = fulfilled_creation_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": transfer_assets[0]["id"]},
"owners_before": output["public_keys"],
}
# prepare the transaction and use 3 tokens
prepared_transfer_tx = pm_alpha.transactions.prepare(
operation="TRANSFER",
asset=transfer_assets,
inputs=transfer_input,
metadata={
"hash": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
"storageID": "0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
},
recipients=[([alice.public_key], 10)],
)
# fulfill and send the transaction
fulfilled_transfer_tx = pm_alpha.transactions.fulfill(prepared_transfer_tx, private_keys=alice.private_key)
sent_transfer_tx = pm_alpha.transactions.send_commit(fulfilled_transfer_tx)
time.sleep(1)
transfer_tx_id = sent_transfer_tx["id"]
# Assert that transaction is stored on both planetmint nodes
hosts.assert_transaction(transfer_tx_id)


@@ -1,167 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Divisible assets integration testing
# This test checks if we can successfully divide assets.
# The script tests various things like:
#
# - create a transaction with a divisible asset and issue them to someone
# - check if the transaction is stored and has the right amount of tokens
# - spend some tokens
# - try to spend more tokens than available
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the `amount`
# of a given transaction.
# ## Imports
# We need the `pytest` package to catch the `BadRequest` exception properly.
# And of course, we also need the `BadRequest`.
import pytest
from planetmint_driver.exceptions import BadRequest
# Import generate_keypair to create actors
from planetmint_driver.crypto import generate_keypair
# import helper to manage multiple nodes
from .helper.hosts import Hosts
def test_divisible_assets():
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Oh look, it is Alice again and she brought her friend Bob along.
alice, bob = generate_keypair(), generate_keypair()
# ## Alice creates a time sharing token
    # Alice wants to go on vacation, while Bob's bike just broke down.
    # Alice decides to rent her bike to Bob while she is gone.
    # So she prepares a `CREATE` transaction to issue 10 tokens.
# First, she prepares an asset for a time sharing token. As you can see in
# the description, Bob and Alice agree that each token can be used to ride
# the bike for one hour.
bike_token = [
{
"data": {
"token_for": {"bike": {"serial_number": 420420}},
"description": "Time share token. Each token equals one hour of riding.",
},
}
]
# She prepares a `CREATE` transaction and issues 10 tokens.
# Here, Alice defines in a tuple that she wants to assign
# these 10 tokens to Bob.
prepared_token_tx = pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, recipients=[([bob.public_key], 10)], assets=bike_token
)
# She fulfills and sends the transaction.
fulfilled_token_tx = pm.transactions.fulfill(prepared_token_tx, private_keys=alice.private_key)
pm.transactions.send_commit(fulfilled_token_tx)
# We store the `id` of the transaction to use it later on.
bike_token_id = fulfilled_token_tx["id"]
# Let's check if the transaction was successful.
assert pm.transactions.retrieve(bike_token_id), "Cannot find transaction {}".format(bike_token_id)
# Bob owns 10 tokens now.
assert pm.transactions.retrieve(bike_token_id)["outputs"][0]["amount"] == "10"
# ## Bob wants to use the bike
# Now that Bob got the tokens and the sun is shining, he wants to get out
# with the bike for three hours.
# To use the bike he has to send the tokens back to Alice.
# To learn about the details of transferring a transaction check out
# [test_basic.py](./test_basic.html)
transfer_assets = [{"id": bike_token_id}]
output_index = 0
output = fulfilled_token_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_token_tx["id"]},
"owners_before": output["public_keys"],
}
# To use the tokens Bob has to reassign 7 tokens to himself and the
# amount he wants to use to Alice.
prepared_transfer_tx = pm.transactions.prepare(
operation="TRANSFER",
asset=transfer_assets,
inputs=transfer_input,
recipients=[([alice.public_key], 3), ([bob.public_key], 7)],
)
# He signs and sends the transaction.
fulfilled_transfer_tx = pm.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
sent_transfer_tx = pm.transactions.send_commit(fulfilled_transfer_tx)
# First, Bob checks if the transaction was successful.
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
hosts.assert_transaction(fulfilled_transfer_tx["id"])
# There are two outputs in the transaction now.
# The first output shows that Alice got back 3 tokens...
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["amount"] == "3"
# ... while Bob still has 7 left.
assert pm.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][1]["amount"] == "7"
# ## Bob wants to ride the bike again
    # It's been a week and Bob wants to ride the bike again.
# Now he wants to ride for 8 hours, that's a lot Bob!
# He prepares the transaction again.
transfer_assets = [{"id": bike_token_id}]
# This time we need an `output_index` of 1, since we have two outputs
# in the `fulfilled_transfer_tx` we created before. The first output with
# index 0 is for Alice and the second output is for Bob.
# Since Bob wants to spend more of his tokens he has to provide the
# correct output with the correct amount of tokens.
output_index = 1
output = fulfilled_transfer_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_transfer_tx["id"]},
"owners_before": output["public_keys"],
}
# This time Bob only provides Alice in the `recipients` because he wants
# to spend all his tokens
prepared_transfer_tx = pm.transactions.prepare(
operation="TRANSFER", assets=transfer_assets, inputs=transfer_input, recipients=[([alice.public_key], 8)]
)
fulfilled_transfer_tx = pm.transactions.fulfill(prepared_transfer_tx, private_keys=bob.private_key)
# Oh Bob, what have you done?! You tried to spend more tokens than you had.
# Remember Bob, last time you spent 3 tokens already,
# so you only have 7 left.
with pytest.raises(BadRequest) as error:
pm.transactions.send_commit(fulfilled_transfer_tx)
    # Now Bob gets an error saying that the amount he wanted to spend is
    # higher than the amount of tokens he has left.
assert error.value.args[0] == 400
message = (
"Invalid transaction (AmountError): The amount used in the "
"inputs `7` needs to be same as the amount used in the "
"outputs `8`"
)
assert error.value.args[2]["message"] == message
# We have to stop this test now, I am sorry, but Bob is pretty upset
# about his mistake. See you next time :)


@@ -1,48 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Double Spend testing
# This test challenges the system with double spends.
from uuid import uuid4
from threading import Thread
import queue
import planetmint_driver.exceptions
from planetmint_driver.crypto import generate_keypair
from .helper.hosts import Hosts
def test_double_create():
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
alice = generate_keypair()
results = queue.Queue()
tx = pm.transactions.fulfill(
pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=[{"data": {"uuid": str(uuid4())}}]
),
private_keys=alice.private_key,
)
def send_and_queue(tx):
try:
pm.transactions.send_commit(tx)
results.put("OK")
except planetmint_driver.exceptions.TransportError:
results.put("FAIL")
t1 = Thread(target=send_and_queue, args=(tx,))
t2 = Thread(target=send_and_queue, args=(tx,))
t1.start()
t2.start()
results = [results.get(timeout=2), results.get(timeout=2)]
assert results.count("OK") == 1
assert results.count("FAIL") == 1


@@ -1,115 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Multisignature integration testing
# This test checks if we can successfully create and transfer a transaction
# with multiple owners.
# The script tests various things like:
#
# - create a transaction with multiple owners
# - check if the transaction is stored and has the right amount of public keys
# - transfer the transaction to a third person
#
# We run a series of checks for each step, that is retrieving
# the transaction from the remote system, and also checking the public keys
# of a given transaction.
# # Imports
import time
# For this test case we need import and use the Python driver
from planetmint_driver.crypto import generate_keypair
# Import helper to deal with multiple nodes
from .helper.hosts import Hosts
def test_multiple_owners():
    # Set up a connection to the Planetmint integration test nodes
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
# Generate Keypairs for Alice and Bob!
alice, bob = generate_keypair(), generate_keypair()
# ## Alice and Bob create a transaction
# Alice and Bob just moved into a shared flat, no one can afford these
# high rents anymore. Bob suggests to get a dish washer for the
# kitchen. Alice agrees and here they go, creating the asset for their
# dish washer.
dw_asset = [{"data": {"dish washer": {"serial_number": 1337}}}]
# They prepare a `CREATE` transaction. To have multiple owners, both
# Bob and Alice need to be the recipients.
prepared_dw_tx = pm_alpha.transactions.prepare(
operation="CREATE", signers=alice.public_key, recipients=(alice.public_key, bob.public_key), assets=dw_asset
)
# Now they both sign the transaction by providing their private keys.
# And send it afterwards.
fulfilled_dw_tx = pm_alpha.transactions.fulfill(prepared_dw_tx, private_keys=[alice.private_key, bob.private_key])
pm_alpha.transactions.send_commit(fulfilled_dw_tx)
# We store the `id` of the transaction to use it later on.
dw_id = fulfilled_dw_tx["id"]
time.sleep(1)
# Use hosts to assert that the transaction is properly propagated to every node
hosts.assert_transaction(dw_id)
# Let's check if the transaction was successful.
assert pm_alpha.transactions.retrieve(dw_id), "Cannot find transaction {}".format(dw_id)
# The transaction should have two public keys in the outputs.
assert len(pm_alpha.transactions.retrieve(dw_id)["outputs"][0]["public_keys"]) == 2
# ## Alice and Bob transfer a transaction to Carol.
# Alice and Bob save a lot of money living together. They often go out
# for dinner and don't cook at home. But now they don't have any dishes to
# wash, so they decide to sell the dish washer to their friend Carol.
# Hey Carol, nice to meet you!
carol = generate_keypair()
# Alice and Bob prepare the transaction to transfer the dish washer to
# Carol.
transfer_assets = [{"id": dw_id}]
output_index = 0
output = fulfilled_dw_tx["outputs"][output_index]
transfer_input = {
"fulfillment": output["condition"]["details"],
"fulfills": {"output_index": output_index, "transaction_id": fulfilled_dw_tx["id"]},
"owners_before": output["public_keys"],
}
# Now they create the transaction...
prepared_transfer_tx = pm_alpha.transactions.prepare(
operation="TRANSFER", assets=transfer_assets, inputs=transfer_input, recipients=carol.public_key
)
# ... and sign it with their private keys, then send it.
fulfilled_transfer_tx = pm_alpha.transactions.fulfill(
prepared_transfer_tx, private_keys=[alice.private_key, bob.private_key]
)
sent_transfer_tx = pm_alpha.transactions.send_commit(fulfilled_transfer_tx)
time.sleep(1)
# Now compare if both nodes returned the same transaction
hosts.assert_transaction(fulfilled_transfer_tx["id"])
# They check if the transaction was successful.
assert pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"]) == sent_transfer_tx
# The owners before should include both Alice and Bob.
assert len(pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"])["inputs"][0]["owners_before"]) == 2
# While the new owner is Carol.
assert (
pm_alpha.transactions.retrieve(fulfilled_transfer_tx["id"])["outputs"][0]["public_keys"][0] == carol.public_key
)


@@ -1,131 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# ## Testing potentially hazardous strings
# This test uses a library of `naughty` strings (code injections, weird unicode chars., etc.) as both keys and values.
# We expect either a successful tx or, in the case that a naughty string used as a key
# violates some key constraints, a well-formatted error message.
# ## Imports
# Since the naughty strings get encoded and decoded in odd ways,
# we'll use a regex to sweep those details under the rug.
import re
# We'll use a nice library of naughty strings...
from blns import blns
# And parameterize our test so each one is treated as a separate test case
import pytest
# For this test case we import and use the Python Driver.
from planetmint_driver.crypto import generate_keypair
from planetmint_driver.exceptions import BadRequest
# import helper to manage multiple nodes
from .helper.hosts import Hosts
naughty_strings = blns.all()
skipped_naughty_strings = [
"1.00",
"$1.00",
"-1.00",
"-$1.00",
"0.00",
"0..0",
".",
"0.0.0",
"-.",
",./;'[]\\-=",
"ثم نفس سقطت وبالتحديد،, جزيرتي باستخدام أن دنو. إذ هنا؟ الستار وتنصيب كان. أهّل ايطاليا، بريطانيا-فرنسا قد أخذ. سليمان، إتفاقية بين ما, يذكر الحدود أي بعد, معاملة بولندا، الإطلاق عل إيو.",
"test\x00",
"Ṱ̺̺̕o͞ ̷i̲̬͇̪͙n̝̗͕v̟̜̘̦͟o̶̙̰̠kè͚̮̺̪̹̱̤ ̖t̝͕̳̣̻̪͞h̼͓̲̦̳̘̲e͇̣̰̦̬͎ ̢̼̻̱̘h͚͎͙̜̣̲ͅi̦̲̣̰̤v̻͍e̺̭̳̪̰-m̢iͅn̖̺̞̲̯̰d̵̼̟͙̩̼̘̳ ̞̥̱̳̭r̛̗̘e͙p͠r̼̞̻̭̗e̺̠̣͟s̘͇̳͍̝͉e͉̥̯̞̲͚̬͜ǹ̬͎͎̟̖͇̤t͍̬̤͓̼̭͘ͅi̪̱n͠g̴͉ ͏͉ͅc̬̟h͡a̫̻̯͘o̫̟̖͍̙̝͉s̗̦̲.̨̹͈̣",
"̡͓̞ͅI̗̘̦͝n͇͇͙v̮̫ok̲̫̙͈i̖͙̭̹̠̞n̡̻̮̣̺g̲͈͙̭͙̬͎ ̰t͔̦h̞̲e̢̤ ͍̬̲͖f̴̘͕̣è͖ẹ̥̩l͖͔͚i͓͚̦͠n͖͍̗͓̳̮g͍ ̨o͚̪͡f̘̣̬ ̖̘͖̟͙̮c҉͔̫͖͓͇͖ͅh̵̤̣͚͔á̗̼͕ͅo̼̣̥s̱͈̺̖̦̻͢.̛̖̞̠̫̰",
"̗̺͖̹̯͓Ṯ̤͍̥͇͈h̲́e͏͓̼̗̙̼̣͔ ͇̜̱̠͓͍ͅN͕͠e̗̱z̘̝̜̺͙p̤̺̹͍̯͚e̠̻̠͜r̨̤͍̺̖͔̖̖d̠̟̭̬̝͟i̦͖̩͓͔̤a̠̗̬͉̙n͚͜ ̻̞̰͚ͅh̵͉i̳̞v̢͇ḙ͎͟-҉̭̩̼͔m̤̭̫i͕͇̝̦n̗͙ḍ̟ ̯̲͕͞ǫ̟̯̰̲͙̻̝f ̪̰̰̗̖̭̘͘c̦͍̲̞͍̩̙ḥ͚a̮͎̟̙͜ơ̩̹͎s̤.̝̝ ҉Z̡̖̜͖̰̣͉̜a͖̰͙̬͡l̲̫̳͍̩g̡̟̼̱͚̞̬ͅo̗͜.̟",
"̦H̬̤̗̤͝e͜ ̜̥̝̻͍̟́w̕h̖̯͓o̝͙̖͎̱̮ ҉̺̙̞̟͈W̷̼̭a̺̪͍į͈͕̭͙̯̜t̶̼̮s̘͙͖̕ ̠̫̠B̻͍͙͉̳ͅe̵h̵̬͇̫͙i̹͓̳̳̮͎̫̕n͟d̴̪̜̖ ̰͉̩͇͙̲͞ͅT͖̼͓̪͢h͏͓̮̻e̬̝̟ͅ ̤̹̝W͙̞̝͔͇͝ͅa͏͓͔̹̼̣l̴͔̰̤̟͔ḽ̫.͕",
'"><script>alert(document.title)</script>',
"'><script>alert(document.title)</script>",
"><script>alert(document.title)</script>",
"</script><script>alert(document.title)</script>",
"< / script >< script >alert(document.title)< / script >",
" onfocus=alert(document.title) autofocus ",
'" onfocus=alert(document.title) autofocus ',
"' onfocus=alert(document.title) autofocus ",
"scriptalert(document.title)/script",
"/dev/null; touch /tmp/blns.fail ; echo",
"../../../../../../../../../../../etc/passwd%00",
"../../../../../../../../../../../etc/hosts",
"() { 0; }; touch /tmp/blns.shellshock1.fail;",
"() { _; } >_[$($())] { touch /tmp/blns.shellshock2.fail; }",
]
naughty_strings = [naughty for naughty in naughty_strings if naughty not in skipped_naughty_strings]
# This is our base test case, but we'll reuse it to send naughty strings as both keys and values.
def send_naughty_tx(assets, metadata):
# ## Set up a connection to Planetmint
# Check [test_basic.py](./test_basic.html) to get some more details
# about the endpoint.
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Here's Alice.
alice = generate_keypair()
# Alice is in a naughty mood today, so she creates a tx with some naughty strings
prepared_transaction = pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=assets, metadata=metadata
)
# She fulfills the transaction
fulfilled_transaction = pm.transactions.fulfill(prepared_transaction, private_keys=alice.private_key)
# The fulfilled tx gets sent to the pm network
try:
sent_transaction = pm.transactions.send_commit(fulfilled_transaction)
except BadRequest as e:
sent_transaction = e
# If her key contained a '.', began with a '$', or contained a NUL character
regex = r".*\..*|\$.*|.*\x00.*"
key = next(iter(metadata))
if re.match(regex, key):
# Then she expects a nicely formatted error code
status_code = sent_transaction.status_code
error = sent_transaction.error
regex = (
r"\{\s*\n*"
r'\s*"message":\s*"Invalid transaction \(ValidationError\):\s*'
r"Invalid key name.*The key name cannot contain characters.*\n*"
r'\s*"status":\s*400\n*'
r"\s*\}\n*"
)
assert status_code == 400
assert re.fullmatch(regex, error), sent_transaction
# Otherwise, she expects to see her transaction in the database
elif "id" in sent_transaction.keys():
tx_id = sent_transaction["id"]
assert pm.transactions.retrieve(tx_id)
# If neither condition was true, then something weird happened...
else:
raise TypeError(sent_transaction)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_keys(naughty_string):
assets = [{"data": {naughty_string: "nice_value"}}]
metadata = {naughty_string: "nice_value"}
send_naughty_tx(assets, metadata)
@pytest.mark.parametrize("naughty_string", naughty_strings, ids=naughty_strings)
def test_naughty_values(naughty_string):
assets = [{"data": {"nice_key": naughty_string}}]
metadata = {"nice_key": naughty_string}
send_naughty_tx(assets, metadata)
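The key-validation rule that `send_naughty_tx` anticipates can be checked in isolation with the same regex the test uses, a minimal sketch:

```python
import re

# The pattern send_naughty_tx() uses to predict a validation error:
# keys containing '.', starting with '$', or containing a NUL byte.
regex = r".*\..*|\$.*|.*\x00.*"

assert re.match(regex, "a.b")        # contains a dot
assert re.match(regex, "$set")       # starts with a dollar sign
assert re.match(regex, "te\x00st")   # contains a NUL character
assert not re.match(regex, "nice_key")  # a harmless key passes
```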


@@ -1,131 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# # Stream Acceptance Test
# This test checks if the event stream works correctly. The basic idea of this
# test is to generate some random **valid** transactions, send them to a
# Planetmint node, and expect those transactions to be returned by the valid
# transactions Stream API. During this test, two threads work together,
# sharing a queue to exchange events.
#
# - The *main thread* first creates and sends the transactions to Planetmint;
# then it runs through all events in the shared queue to check if all
# transactions sent have been validated by Planetmint.
# - The *listen thread* listens to the events coming from Planetmint and puts
# them in a queue shared with the main thread.
import queue
import json
from threading import Thread, Event
from uuid import uuid4
# This script needs a websocket connection, which is why we import the
# [websocket](https://github.com/websocket-client/websocket-client) module
from websocket import create_connection
from planetmint_driver.crypto import generate_keypair
# import helper to manage multiple nodes
from .helper.hosts import Hosts
def test_stream():
# ## Set up the test
# We read the shared hostnames file to know where to connect.
# Check [test_basic.py](./test_basic.html) for more information.
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# *That's pretty bad, but let's do it like this for now.*
WS_ENDPOINT = "ws://{}:9985/api/v1/streams/valid_transactions".format(hosts.hostnames[0])
# Say hello to Alice again; she is pretty active in these tests. Good
# job, Alice!
alice = generate_keypair()
# We need a few variables to keep state: `sent` keeps track of all
# transactions Alice sent to Planetmint, while `received` holds the
# transactions Planetmint validated and sent back to her.
sent = []
received = queue.Queue()
# In this test we use a websocket. The websocket must be started **before**
# sending transactions to Planetmint, otherwise we might lose some
# transactions. The `ws_ready` event is used to synchronize the main thread
# with the listen thread.
ws_ready = Event()
# ## Listening to events
# This is the function run by the complementary thread.
def listen():
# First we connect to the remote endpoint using the WebSocket protocol.
ws = create_connection(WS_ENDPOINT)
# After the connection has been set up, we can signal the main thread
# to proceed (continue reading, it should make sense in a second.)
ws_ready.set()
# It's time to consume all events coming from the Planetmint stream API.
# Every time a new event is received, it is put in the queue shared
# with the main thread.
while True:
result = ws.recv()
received.put(result)
# Put `listen` in a thread, and start it. Note that `listen` is a local
# function and it can access all variables in the enclosing function.
t = Thread(target=listen, daemon=True)
t.start()
# ## Pushing the transactions to Planetmint
# After starting the listen thread, we wait for it to connect, and then we
# proceed.
ws_ready.wait()
# Here we prepare, sign, and send ten different `CREATE` transactions. To
# make sure each transaction is different from the other, we generate a
# random `uuid`.
for _ in range(10):
tx = pm.transactions.fulfill(
pm.transactions.prepare(
operation="CREATE", signers=alice.public_key, assets=[{"data": {"uuid": str(uuid4())}}]
),
private_keys=alice.private_key,
)
# We don't want to wait for each transaction to be in a block. By using
# `async` mode, we make sure that the driver returns as soon as the
# transaction is pushed to the Planetmint API. Remember: we expect all
# transactions to be in the shared queue: this is a two phase test,
# first we send a bunch of transactions, then we check if they are
# valid (and, in this case, they should).
pm.transactions.send_async(tx)
# The `id` of every sent transaction is then stored in a list.
sent.append(tx["id"])
# ## Check the valid transactions coming from Planetmint
# Now we are ready to check if Planetmint did its job. A simple way to
# check if all sent transactions have been processed is to **remove** from
# `sent` the transactions we get from the *listen thread*. At one point in
# time, `sent` should be empty, and we exit the test.
while sent:
# To avoid waiting forever, we have an arbitrary timeout of 5
# seconds: it should be enough time for Planetmint to create
# blocks, in fact a new block is created every second. If we hit
# the timeout, then game over ¯\\\_(ツ)\_/¯
try:
event = received.get(timeout=5)
txid = json.loads(event)["transaction_id"]
except queue.Empty:
assert False, "Did not receive all expected transactions"
# Last thing is to try to remove the `txid` from the set of sent
# transactions. If this test is running in parallel with others, we
# might get a transaction id of another test, and `remove` can fail.
# It's OK if this happens.
try:
sent.remove(txid)
except ValueError:
pass
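The two-thread handshake used above (start the listener, wait for `ws_ready`, then produce) can be reduced to a self-contained sketch, with the websocket replaced by a hypothetical in-process producer standing in for the stream API:

```python
import json
import queue
from threading import Thread, Event

received = queue.Queue()
ws_ready = Event()

def listen():
    # Stand-in for the websocket consumer: signal readiness, then
    # push three fake "valid transaction" events into the shared queue.
    ws_ready.set()
    for i in range(3):
        received.put(json.dumps({"transaction_id": str(i)}))

t = Thread(target=listen, daemon=True)
t.start()
ws_ready.wait()  # never "send" before the listener is up

sent = ["0", "1", "2"]
while sent:
    event = received.get(timeout=5)
    txid = json.loads(event)["transaction_id"]
    try:
        sent.remove(txid)
    except ValueError:
        pass  # an id from another producer would be ignored, as in the test
assert sent == []
```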


@@ -1,319 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# ## Imports
import time
import json
# For this test case we need the planetmint_driver.crypto package
import base58
import sha3
from planetmint_cryptoconditions import Ed25519Sha256, ThresholdSha256
from planetmint_driver.crypto import generate_keypair
# Import helper to deal with multiple nodes
from .helper.hosts import Hosts
def prepare_condition_details(condition: ThresholdSha256):
condition_details = {"subconditions": [], "threshold": condition.threshold, "type": condition.TYPE_NAME}
for s in condition.subconditions:
if s["type"] == "fulfillment" and s["body"].TYPE_NAME == "ed25519-sha-256":
condition_details["subconditions"].append(
{"type": s["body"].TYPE_NAME, "public_key": base58.b58encode(s["body"].public_key).decode()}
)
else:
condition_details["subconditions"].append(prepare_condition_details(s["body"]))
return condition_details
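For illustration, the dict this helper builds for the 2-of-3 threshold used below has roughly the following shape (public keys abbreviated; this is an assumed sketch, not captured driver output):

```python
# Hypothetical result of prepare_condition_details() for a 2-of-3
# threshold over Alice, Bob and Carol.
details = {
    "type": "threshold-sha-256",
    "threshold": 2,
    "subconditions": [
        {"type": "ed25519-sha-256", "public_key": "<alice_pk>"},
        {"type": "ed25519-sha-256", "public_key": "<bob_pk>"},
        {"type": "ed25519-sha-256", "public_key": "<carol_pk>"},
    ],
}

assert details["threshold"] == 2
assert len(details["subconditions"]) == 3
```

Nested thresholds (as in `test_weighted_threshold`) simply appear as another dict of this shape inside `subconditions`.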
def test_threshold():
# Setup connection to test nodes
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
# Generate keypairs for Alice, Bob and Carol!
alice, bob, carol = generate_keypair(), generate_keypair(), generate_keypair()
# ## Alice and Bob create a transaction
# Alice and Bob just moved into a shared flat; no one can afford these
# high rents alone anymore. Bob suggests getting a dishwasher for the
# kitchen. Alice agrees, and here they go, creating the asset for their
# dishwasher.
dw_asset = [{"data": {"dish washer": {"serial_number": 1337}}}]
# Create subfulfillments
alice_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
bob_ed25519 = Ed25519Sha256(public_key=base58.b58decode(bob.public_key))
carol_ed25519 = Ed25519Sha256(public_key=base58.b58decode(carol.public_key))
# Create threshold condition (2/3) and add subfulfillments
threshold_sha256 = ThresholdSha256(2)
threshold_sha256.add_subfulfillment(alice_ed25519)
threshold_sha256.add_subfulfillment(bob_ed25519)
threshold_sha256.add_subfulfillment(carol_ed25519)
# Create a condition uri and details for the output object
condition_uri = threshold_sha256.condition.serialize_uri()
condition_details = prepare_condition_details(threshold_sha256)
# Assemble output and input for the handcrafted tx
output = {
"amount": "1",
"condition": {
"details": condition_details,
"uri": condition_uri,
},
"public_keys": (alice.public_key, bob.public_key, carol.public_key),
}
# The yet to be fulfilled input:
input_ = {
"fulfillment": None,
"fulfills": None,
"owners_before": (alice.public_key, bob.public_key),
}
# Assemble the handcrafted transaction
handcrafted_dw_tx = {
"operation": "CREATE",
"asset": dw_asset,
"metadata": None,
"outputs": (output,),
"inputs": (input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_dw_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
# Sign the message with Alice's and Bob's private keys
alice_ed25519.sign(message.digest(), base58.b58decode(alice.private_key))
bob_ed25519.sign(message.digest(), base58.b58decode(bob.private_key))
# Create fulfillment and add uri to inputs
fulfillment_threshold = ThresholdSha256(2)
fulfillment_threshold.add_subfulfillment(alice_ed25519)
fulfillment_threshold.add_subfulfillment(bob_ed25519)
fulfillment_threshold.add_subcondition(carol_ed25519.condition)
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_dw_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_dw_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_dw_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
dw_creation_txid = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_dw_tx["id"] = dw_creation_txid
pm.transactions.send_commit(handcrafted_dw_tx)
time.sleep(1)
# Assert that the tx is propagated to all nodes
hosts.assert_transaction(dw_creation_txid)
def test_weighted_threshold():
hosts = Hosts("/shared/hostnames")
pm = hosts.get_connection()
alice, bob, carol = generate_keypair(), generate_keypair(), generate_keypair()
assets = [{"data": {"trashcan": {"animals": ["racoon_1", "racoon_2"]}}}]
alice_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
bob_ed25519 = Ed25519Sha256(public_key=base58.b58decode(bob.public_key))
carol_ed25519 = Ed25519Sha256(public_key=base58.b58decode(carol.public_key))
threshold = ThresholdSha256(1)
threshold.add_subfulfillment(alice_ed25519)
sub_threshold = ThresholdSha256(2)
sub_threshold.add_subfulfillment(bob_ed25519)
sub_threshold.add_subfulfillment(carol_ed25519)
threshold.add_subfulfillment(sub_threshold)
condition_uri = threshold.condition.serialize_uri()
condition_details = prepare_condition_details(threshold)
# Assemble output and input for the handcrafted tx
output = {
"amount": "1",
"condition": {
"details": condition_details,
"uri": condition_uri,
},
"public_keys": (alice.public_key, bob.public_key, carol.public_key),
}
# The yet to be fulfilled input:
input_ = {
"fulfillment": None,
"fulfills": None,
"owners_before": (alice.public_key, bob.public_key),
}
# Assemble the handcrafted transaction
handcrafted_tx = {
"operation": "CREATE",
"asset": assets,
"metadata": None,
"outputs": (output,),
"inputs": (input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
# Sign the message with Alice's private key
alice_ed25519.sign(message.digest(), base58.b58decode(alice.private_key))
# Create fulfillment and add uri to inputs
sub_fulfillment_threshold = ThresholdSha256(2)
sub_fulfillment_threshold.add_subcondition(bob_ed25519.condition)
sub_fulfillment_threshold.add_subcondition(carol_ed25519.condition)
fulfillment_threshold = ThresholdSha256(1)
fulfillment_threshold.add_subfulfillment(alice_ed25519)
fulfillment_threshold.add_subfulfillment(sub_fulfillment_threshold)
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
creation_tx_id = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_tx["id"] = creation_tx_id
pm.transactions.send_commit(handcrafted_tx)
time.sleep(1)
# Assert that the tx is propagated to all nodes
hosts.assert_transaction(creation_tx_id)
# Now transfer created asset
alice_transfer_ed25519 = Ed25519Sha256(public_key=base58.b58decode(alice.public_key))
bob_transfer_ed25519 = Ed25519Sha256(public_key=base58.b58decode(bob.public_key))
carol_transfer_ed25519 = Ed25519Sha256(public_key=base58.b58decode(carol.public_key))
transfer_condition_uri = alice_transfer_ed25519.condition.serialize_uri()
# Assemble output and input for the handcrafted tx
transfer_output = {
"amount": "1",
"condition": {
"details": {
"type": alice_transfer_ed25519.TYPE_NAME,
"public_key": base58.b58encode(alice_transfer_ed25519.public_key).decode(),
},
"uri": transfer_condition_uri,
},
"public_keys": (alice.public_key,),
}
# The yet to be fulfilled input:
transfer_input_ = {
"fulfillment": None,
"fulfills": {"transaction_id": creation_tx_id, "output_index": 0},
"owners_before": (alice.public_key, bob.public_key, carol.public_key),
}
# Assemble the handcrafted transaction
handcrafted_transfer_tx = {
"operation": "TRANSFER",
"assets": [{"id": creation_tx_id}],
"metadata": None,
"outputs": (transfer_output,),
"inputs": (transfer_input_,),
"version": "2.0",
"id": None,
}
# Create sha3-256 of message to sign
message = json.dumps(
handcrafted_transfer_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
message = sha3.sha3_256(message.encode())
message.update(
"{}{}".format(
handcrafted_transfer_tx["inputs"][0]["fulfills"]["transaction_id"],
handcrafted_transfer_tx["inputs"][0]["fulfills"]["output_index"],
).encode()
)
# Sign the message with Bob's and Carol's private keys
bob_transfer_ed25519.sign(message.digest(), base58.b58decode(bob.private_key))
carol_transfer_ed25519.sign(message.digest(), base58.b58decode(carol.private_key))
sub_fulfillment_threshold = ThresholdSha256(2)
sub_fulfillment_threshold.add_subfulfillment(bob_transfer_ed25519)
sub_fulfillment_threshold.add_subfulfillment(carol_transfer_ed25519)
# Create fulfillment and add uri to inputs
fulfillment_threshold = ThresholdSha256(1)
fulfillment_threshold.add_subcondition(alice_transfer_ed25519.condition)
fulfillment_threshold.add_subfulfillment(sub_fulfillment_threshold)
fulfillment_uri = fulfillment_threshold.serialize_uri()
handcrafted_transfer_tx["inputs"][0]["fulfillment"] = fulfillment_uri
# Create tx_id for handcrafted_transfer_tx and send tx commit
json_str_tx = json.dumps(
handcrafted_transfer_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
transfer_tx_id = sha3.sha3_256(json_str_tx.encode()).hexdigest()
handcrafted_transfer_tx["id"] = transfer_tx_id
pm.transactions.send_commit(handcrafted_transfer_tx)
time.sleep(1)
# Assert that the tx is propagated to all nodes
hosts.assert_transaction(transfer_tx_id)
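Both tests derive the transaction id the same way: canonical JSON of the id-less transaction, then sha3-256. A minimal sketch of that pattern using the stdlib `hashlib` (which provides `sha3_256`, equivalent to the `sha3` package used above):

```python
import hashlib
import json

tx = {
    "operation": "CREATE",
    "asset": [{"data": {"example": True}}],
    "metadata": None,
    "outputs": (),
    "inputs": (),
    "version": "2.0",
    "id": None,
}
# Canonical serialization: sorted keys, compact separators, raw unicode.
canonical = json.dumps(tx, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
tx["id"] = hashlib.sha3_256(canonical.encode()).hexdigest()

assert len(tx["id"]) == 64  # hex digest of a 256-bit hash
```

Because the serialization is canonical, every node that repeats these steps over the same transaction body arrives at the same id.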


@@ -1,131 +0,0 @@
import json
import base58
from hashlib import sha3_256
from planetmint_cryptoconditions.types.zenroom import ZenroomSha256
from planetmint_driver.crypto import generate_keypair
from .helper.hosts import Hosts
from zenroom import zencode_exec
import time
def test_zenroom_signing(
gen_key_zencode,
secret_key_to_private_key_zencode,
fulfill_script_zencode,
zenroom_data,
zenroom_house_assets,
zenroom_script_input,
condition_script_zencode,
):
biolabs = generate_keypair()
version = "2.0"
alice = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
bob = json.loads(zencode_exec(gen_key_zencode).output)["keyring"]
zen_public_keys = json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Alice"), keys=json.dumps({"keyring": alice})).output
)
zen_public_keys.update(
json.loads(
zencode_exec(secret_key_to_private_key_zencode.format("Bob"), keys=json.dumps({"keyring": bob})).output
)
)
zenroomscpt = ZenroomSha256(script=fulfill_script_zencode, data=zenroom_data, keys=zen_public_keys)
print(f"zenroom is: {zenroomscpt.script}")
# CRYPTO-CONDITIONS: generate the condition uri
condition_uri_zen = zenroomscpt.condition.serialize_uri()
print(f"\nzenroom condition URI: {condition_uri_zen}")
# CRYPTO-CONDITIONS: construct an unsigned fulfillment dictionary
unsigned_fulfillment_dict_zen = {
"type": zenroomscpt.TYPE_NAME,
"public_key": base58.b58encode(biolabs.public_key).decode(),
}
output = {
"amount": "10",
"condition": {
"details": unsigned_fulfillment_dict_zen,
"uri": condition_uri_zen,
},
"public_keys": [
biolabs.public_key,
],
}
input_ = {
"fulfillment": None,
"fulfills": None,
"owners_before": [
biolabs.public_key,
],
}
metadata = {"result": {"output": ["ok"]}}
script_ = {
"code": {"type": "zenroom", "raw": "test_string", "parameters": [{"obj": "1"}, {"obj": "2"}]},
"state": "dd8bbd234f9869cab4cc0b84aa660e9b5ef0664559b8375804ee8dce75b10576",
"input": zenroom_script_input,
"output": ["ok"],
"policies": {},
}
metadata = {"result": {"output": ["ok"]}}
token_creation_tx = {
"operation": "CREATE",
"asset": {"data": {"test": "my asset"}},
"script": script_,
"metadata": metadata,
"outputs": [
output,
],
"inputs": [
input_,
],
"version": version,
"id": None,
}
# JSON: serialize the transaction-without-id to a json formatted string
tx = json.dumps(
token_creation_tx,
sort_keys=True,
separators=(",", ":"),
ensure_ascii=False,
)
script_ = json.dumps(script_)
# major workflow:
# we store the fulfill script in the transaction/message (zenroom-sha)
# the condition script is used to fulfill the transaction and create the signature
#
# the server should pick the fulfill script and recreate the zenroom-sha and verify the signature
signed_input = zenroomscpt.sign(script_, condition_script_zencode, alice)
input_signed = json.loads(signed_input)
input_signed["input"]["signature"] = input_signed["output"]["signature"]
del input_signed["output"]["signature"]
del input_signed["output"]["logs"]
input_signed["output"] = ["ok"] # define expected output that is to be compared
input_msg = json.dumps(input_signed)
assert zenroomscpt.validate(message=input_msg)
tx = json.loads(tx)
fulfillment_uri_zen = zenroomscpt.serialize_uri()
tx["inputs"][0]["fulfillment"] = fulfillment_uri_zen
tx["script"] = input_signed
tx["id"] = None
json_str_tx = json.dumps(tx, sort_keys=True, skipkeys=False, separators=(",", ":"))
# SHA3: hash the serialized id-less transaction to generate the id
shared_creation_txid = sha3_256(json_str_tx.encode()).hexdigest()
tx["id"] = shared_creation_txid
hosts = Hosts("/shared/hostnames")
pm_alpha = hosts.get_connection()
sent_transfer_tx = pm_alpha.transactions.send_commit(tx)
time.sleep(1)
# Assert that transaction is stored on both planetmint nodes
hosts.assert_transaction(shared_creation_txid)
print(f"\n\nstatus and result: {sent_transfer_tx}")


@@ -1,14 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Planetmint configuration
/usr/src/app/scripts/planetmint-monit-config
# Tarantool startup and configuration
tarantool /usr/src/app/scripts/init.lua
# Start services
monit -d 5 -I -B


@@ -1,81 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Show tendermint node id
show_id () {
tendermint --home=/tendermint show_node_id | tail -n 1
}
# Show validator public key
show_validator () {
tendermint --home=/tendermint show_validator | tail -n 1
}
# Elect new voting power for node
elect_validator () {
planetmint election new upsert-validator $1 $2 $3 --private-key /tendermint/config/priv_validator_key.json 2>&1
}
# Propose new chain migration
propose_migration () {
planetmint election new chain-migration --private-key /tendermint/config/priv_validator_key.json 2>&1
}
# Show election state
show_election () {
planetmint election show $1 2>&1
}
# Approve election
approve_validator () {
planetmint election approve $1 --private-key /tendermint/config/priv_validator_key.json
}
# Fetch tendermint id and pubkey and create upsert proposal
elect () {
node_id=$(show_id)
validator_pubkey=$(show_validator | jq -r .value)
proposal=$(elect_validator $validator_pubkey $1 $node_id | grep SUCCESS)
echo ${proposal##* }
}
# Create chain migration proposal and return election id
migrate () {
proposal=$(propose_migration | grep SUCCESS)
echo ${proposal##* }
}
usage () {
echo "usage: TODO"
}
while [ "$1" != "" ]; do
case $1 in
show_id ) show_id
;;
show_validator ) show_validator
;;
elect ) shift
elect $1
;;
migrate ) shift
migrate
;;
show_election ) shift
show_election $1
;;
approve ) shift
approve_validator $1
;;
* ) usage
exit 1
esac
shift
done
exitcode=$?
exit $exitcode


@@ -1,33 +0,0 @@
#!/usr/bin/env python3
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import json
import sys
def edit_genesis() -> None:
file_names = sys.argv[1:]
validators = []
for file_name in file_names:
file = open(file_name)
genesis = json.load(file)
validators.extend(genesis["validators"])
file.close()
genesis_file = open(file_names[0])
genesis_json = json.load(genesis_file)
genesis_json["validators"] = validators
genesis_file.close()
with open("/shared/genesis.json", "w") as f:
json.dump(genesis_json, f, indent=True)
return None
if __name__ == "__main__":
edit_genesis()
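The merge `edit_genesis()` performs reduces to concatenating the `validators` arrays and reusing the first file's remaining fields; a file-free sketch with hypothetical node names:

```python
# Two per-node genesis documents, as produced by `tendermint init`.
genesis_1 = {"chain_id": "test-chain", "validators": [{"name": "node1"}]}
genesis_2 = {"chain_id": "test-chain", "validators": [{"name": "node2"}]}

merged = dict(genesis_1)  # keep every other field from the first file
merged["validators"] = genesis_1["validators"] + genesis_2["validators"]

assert [v["name"] for v in merged["validators"]] == ["node1", "node2"]
```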


@@ -1,86 +0,0 @@
#!/usr/bin/env tarantool
box.cfg {
listen = 3303,
background = true,
log = '.planetmint-monit/logs/tarantool.log',
pid_file = '.planetmint-monit/monit_processes/tarantool.pid'
}
box.schema.user.grant('guest','read,write,execute,create,drop','universe')
function indexed_pattern_search(space_name, field_no, pattern)
if (box.space[space_name] == nil) then
print("Error: Failed to find the specified space")
return nil
end
local index_no = -1
for i=0,box.schema.INDEX_MAX,1 do
if (box.space[space_name].index[i] == nil) then break end
if (box.space[space_name].index[i].type == "TREE"
and box.space[space_name].index[i].parts[1].fieldno == field_no
and (box.space[space_name].index[i].parts[1].type == "scalar"
or box.space[space_name].index[i].parts[1].type == "string")) then
index_no = i
break
end
end
if (index_no == -1) then
print("Error: Failed to find an appropriate index")
return nil
end
local index_search_key = ""
local index_search_key_length = 0
local last_character = ""
local c = ""
local c2 = ""
for i=1,string.len(pattern),1 do
c = string.sub(pattern, i, i)
if (last_character ~= "%") then
if (c == '^' or c == "$" or c == "(" or c == ")" or c == "."
or c == "[" or c == "]" or c == "*" or c == "+"
or c == "-" or c == "?") then
break
end
if (c == "%") then
c2 = string.sub(pattern, i + 1, i + 1)
if (string.match(c2, "%p") == nil) then break end
index_search_key = index_search_key .. c2
else
index_search_key = index_search_key .. c
end
end
last_character = c
end
index_search_key_length = string.len(index_search_key)
local result_set = {}
local number_of_tuples_in_result_set = 0
local previous_tuple_field = ""
while true do
local number_of_tuples_since_last_yield = 0
local is_time_for_a_yield = false
for _,tuple in box.space[space_name].index[index_no]:
pairs(index_search_key,{iterator = box.index.GE}) do
if (string.sub(tuple[field_no], 1, index_search_key_length)
> index_search_key) then
break
end
number_of_tuples_since_last_yield = number_of_tuples_since_last_yield + 1
if (number_of_tuples_since_last_yield >= 10
and tuple[field_no] ~= previous_tuple_field) then
index_search_key = tuple[field_no]
is_time_for_a_yield = true
break
end
previous_tuple_field = tuple[field_no]
if (string.match(tuple[field_no], pattern) ~= nil) then
number_of_tuples_in_result_set = number_of_tuples_in_result_set + 1
result_set[number_of_tuples_in_result_set] = tuple
end
end
if (is_time_for_a_yield ~= true) then
break
end
require('fiber').yield()
end
return result_set
end
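The prefix-extraction loop above (stop at the first unescaped Lua magic character, keep `%`-escaped punctuation) can be ported to Python for illustration; `literal_prefix` is a hypothetical name, not part of the script:

```python
import string

def literal_prefix(pattern: str) -> str:
    # Mirror of the Lua loop: collect the literal prefix of a Lua
    # pattern, usable as an index search key.
    specials = set('^$().[]*+-?')
    prefix = ""
    last = ""
    for i, c in enumerate(pattern):
        if last != "%":
            if c in specials:
                break  # unescaped magic character ends the literal prefix
            if c == "%":
                c2 = pattern[i + 1] if i + 1 < len(pattern) else ""
                if not c2 or c2 not in string.punctuation:
                    break  # %-escape must be followed by punctuation
                prefix += c2
            else:
                prefix += c
        last = c
    return prefix

assert literal_prefix("abc%.def") == "abc.def"
assert literal_prefix("foo.*") == "foo"
assert literal_prefix("plain") == "plain"
```

The longer the recovered prefix, the narrower the index range the Lua function has to scan before falling back to full pattern matching.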


@@ -1,208 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
set -o nounset
# Check if directory for monit logs exists
if [ ! -d "$HOME/.planetmint-monit" ]; then
mkdir -p "$HOME/.planetmint-monit"
fi
monit_pid_path=${MONIT_PID_PATH:=$HOME/.planetmint-monit/monit_processes}
monit_script_path=${MONIT_SCRIPT_PATH:=$HOME/.planetmint-monit/monit_script}
monit_log_path=${MONIT_LOG_PATH:=$HOME/.planetmint-monit/logs}
monitrc_path=${MONITRC_PATH:=$HOME/.monitrc}
function usage() {
cat <<EOM
Usage: ${0##*/} [-h]
Configure Monit for Planetmint and Tendermint process management.
ENV[MONIT_PID_PATH] || --monit-pid-path PATH
Absolute path to directory where the program's pid-file will reside.
The pid-file contains the ID(s) of the process(es). (default: ${monit_pid_path})
ENV[MONIT_SCRIPT_PATH] || --monit-script-path PATH
Absolute path to the directory where the executable program or
script is present. (default: ${monit_script_path})
ENV[MONIT_LOG_PATH] || --monit-log-path PATH
Absolute path to the directory where all the logs for processes
monitored by Monit are stored. (default: ${monit_log_path})
ENV[MONITRC_PATH] || --monitrc-path PATH
Absolute path to the monit control file (monitrc). (default: ${monitrc_path})
-h|--help
Show this help and exit.
EOM
}
while [[ $# -gt 0 ]]; do
arg="$1"
case $arg in
--monit-pid-path)
monit_pid_path="$2"
shift
;;
--monit-script-path)
monit_script_path="$2"
shift
;;
--monit-log-path)
monit_log_path="$2"
shift
;;
--monitrc-path)
monitrc_path="$2"
shift
;;
-h | --help)
usage
exit
;;
*)
echo "Unknown option: $1"
usage
exit 1
;;
esac
shift
done
# Check if directory for monit logs exists
if [ ! -d "$monit_log_path" ]; then
mkdir -p "$monit_log_path"
fi
# Check if directory for monit pid files exists
if [ ! -d "$monit_pid_path" ]; then
mkdir -p "$monit_pid_path"
fi
cat >${monit_script_path} <<EOF
#!/bin/bash
case \$1 in
start_planetmint)
pushd \$4
nohup planetmint start > /dev/null 2>&1 &
echo \$! > \$2
popd
;;
stop_planetmint)
kill -2 \`cat \$2\`
rm -f \$2
;;
start_tendermint)
pushd \$4
nohup tendermint node \
--p2p.laddr "tcp://0.0.0.0:26656" \
--rpc.laddr "tcp://0.0.0.0:26657" \
--proxy_app="tcp://0.0.0.0:26658" \
--consensus.create_empty_blocks=false \
--p2p.pex=false >> \$3/tendermint.out.log 2>> \$3/tendermint.err.log &
echo \$! > \$2
popd
;;
stop_tendermint)
kill -2 \`cat \$2\`
rm -f \$2
;;
esac
exit 0
EOF
chmod +x ${monit_script_path}
cat >${monit_script_path}_logrotate <<EOF
#!/bin/bash
case \$1 in
rotate_tendermint_logs)
/bin/cp \$2 \$2.\$(date +%y-%m-%d)
/bin/tar -cvf \$2.\$(date +%Y%m%d_%H%M%S).tar.gz \$2.\$(date +%y-%m-%d)
/bin/rm \$2.\$(date +%y-%m-%d)
/bin/cp /dev/null \$2
;;
esac
exit 0
EOF
chmod +x ${monit_script_path}_logrotate
# Handling overwriting of control file interactively
if [ -f "$monitrc_path" ]; then
echo "$monitrc_path already exists."
read -p "Overwrite[Y]? " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
echo "Overwriting $monitrc_path"
else
read -p "Enter absolute path to store Monit control file: " monitrc_path
eval monitrc_path="$monitrc_path"
if [ ! -d "$(dirname $monitrc_path)" ]; then
echo "Failed to save monit control file '$monitrc_path': No such file or directory."
exit 1
fi
fi
fi
# configure monitrc
cat >${monitrc_path} <<EOF
set httpd
port 2812
allow localhost
check process planetmint
with pidfile ${monit_pid_path}/planetmint.pid
start program "${monit_script_path} start_planetmint $monit_pid_path/planetmint.pid ${monit_log_path} ${monit_log_path}"
restart program "${monit_script_path} start_planetmint $monit_pid_path/planetmint.pid ${monit_log_path} ${monit_log_path}"
stop program "${monit_script_path} stop_planetmint $monit_pid_path/planetmint.pid ${monit_log_path} ${monit_log_path}"
check process tendermint
with pidfile ${monit_pid_path}/tendermint.pid
start program "${monit_script_path} start_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
restart program "${monit_script_path} start_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
stop program "${monit_script_path} stop_tendermint ${monit_pid_path}/tendermint.pid ${monit_log_path} ${monit_log_path}"
depends on planetmint
check file tendermint.out.log with path ${monit_log_path}/tendermint.out.log
if size > 200 MB then
exec "${monit_script_path}_logrotate rotate_tendermint_logs ${monit_log_path}/tendermint.out.log $monit_pid_path/tendermint.pid"
check file tendermint.err.log with path ${monit_log_path}/tendermint.err.log
if size > 200 MB then
exec "${monit_script_path}_logrotate rotate_tendermint_logs ${monit_log_path}/tendermint.err.log $monit_pid_path/tendermint.pid"
EOF
# Setting permissions for control file
chmod 0700 ${monitrc_path}
echo -e "Planetmint process manager configured!"
set -o errexit


@@ -1,83 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Write hostname to list
echo $(hostname) >> /shared/hostnames
# Create ssh folder
mkdir ~/.ssh
# Wait for test container pubkey
while [ ! -f /shared/id_rsa.pub ]; do
echo "WAIT FOR PUBKEY"
sleep 1
done
# Add pubkey to authorized keys
cat /shared/id_rsa.pub > ~/.ssh/authorized_keys
# Allow root user login
sed -i "s/#PermitRootLogin prohibit-password/PermitRootLogin yes/" /etc/ssh/sshd_config
# Restart ssh service
service ssh restart
# Tendermint configuration
tendermint init
# Write node id to shared folder
HOSTNAME=$(hostname)
NODE_ID=$(tendermint show_node_id | tail -n 1)
echo $NODE_ID > /shared/${HOSTNAME}_node_id
# Wait for other node ids
FILES=()
while [ ! ${#FILES[@]} == $SCALE ]; do
echo "WAIT FOR NODE IDS"
sleep 1
FILES=(/shared/*node_id)
done
# Write node ids to persistent peers
PEERS="persistent_peers = \""
for f in ${FILES[@]}; do
ID=$(cat $f)
HOST=$(echo $f | cut -c 9-20)
if [ ! $HOST == $HOSTNAME ]; then
PEERS+="${ID}@${HOST}:26656, "
fi
done
PEERS=$(echo $PEERS | rev | cut -c 2- | rev)
PEERS+="\""
sed -i "/persistent_peers = \"\"/c\\${PEERS}" /tendermint/config/config.toml
# Copy genesis.json to shared folder
cp /tendermint/config/genesis.json /shared/${HOSTNAME}_genesis.json
# Await config file of all services to be present
FILES=()
while [ ! ${#FILES[@]} == $SCALE ]; do
echo "WAIT FOR GENESIS FILES"
sleep 1
FILES=(/shared/*_genesis.json)
done
# Create genesis.json for nodes
if [ ! -f /shared/lock ]; then
echo LOCKING
touch /shared/lock
/usr/src/app/scripts/genesis.py ${FILES[@]}
fi
while [ ! -f /shared/genesis.json ]; do
echo "WAIT FOR GENESIS"
sleep 1
done
# Copy genesis.json to tendermint config
cp /shared/genesis.json /tendermint/config/genesis.json
exec "$@"
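The `echo | rev | cut -c 2- | rev` pipeline in the entrypoint above is how the trailing `", "` delimiter gets trimmed from the peer list: the unquoted `echo` drops the trailing space, and `cut -c 2-` on the reversed string drops the final comma. A self-contained sketch with made-up peer IDs:

```shell
# Build a peers string with a trailing ", " exactly as the loop above does
PEERS='persistent_peers = "id1@host1:26656, id2@host2:26656, '
# Trim the trailing delimiter: echo collapses the trailing space,
# rev|cut -c 2-|rev strips the now-final comma
PEERS=$(echo $PEERS | rev | cut -c 2- | rev)
# Close the quoted TOML value
PEERS="$PEERS\""
echo "$PEERS"
```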

View File

@ -1,24 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Start CLI Tests
# Test upsert new validator
/tests/upsert-new-validator.sh
# Test chain migration
# TODO: implementation not finished
#/tests/chain-migration.sh
# TODO: Implement test for voting edge cases or implicit in chain migration and upsert validator?
exitcode=$?
if [ $exitcode -ne 0 ]; then
exit $exitcode
fi
exec "$@"

View File

@ -1,29 +0,0 @@
#!/bin/bash
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
# Only continue if all services are ready
HOSTNAMES=()
while [ ! ${#HOSTNAMES[@]} == $SCALE ]; do
echo "WAIT FOR HOSTNAMES"
sleep 1
readarray -t HOSTNAMES < /shared/hostnames
done
for host in ${HOSTNAMES[@]}; do
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' $host:9984)" != "200" ]]; do
echo "WAIT FOR PLANETMINT $host"
sleep 1
done
done
for host in ${HOSTNAMES[@]}; do
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' $host:26657)" != "200" ]]; do
echo "WAIT FOR TENDERMINT $host"
sleep 1
done
done
exec "$@"

View File

@ -1,4 +1,4 @@
FROM tendermint/tendermint:v0.34.15
FROM tendermint/tendermint:v0.34.24
LABEL maintainer "contact@ipdb.global"
WORKDIR /
USER root

View File

@ -1,3 +1,4 @@
---
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@ -1,3 +1,4 @@
---
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@ -1,3 +1,4 @@
---
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@ -1,4 +1,4 @@
ARG tm_version=v0.31.5
ARG tm_version=v0.34.24
FROM tendermint/tendermint:${tm_version}
LABEL maintainer "contact@ipdb.global"
WORKDIR /

View File

@ -1,3 +1,4 @@
---
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)

View File

@ -17,7 +17,7 @@ stack_size=${STACK_SIZE:=4}
stack_type=${STACK_TYPE:="docker"}
stack_type_provider=${STACK_TYPE_PROVIDER:=""}
# NOTE versions prior v0.28.0 have different priv_validator format!
tm_version=${TM_VERSION:="v0.34.15"}
tm_version=${TM_VERSION:="v0.34.24"}
mongo_version=${MONGO_VERSION:="3.6"}
stack_vm_memory=${STACK_VM_MEMORY:=2048}
stack_vm_cpus=${STACK_VM_CPUS:=2}

View File

@ -16,7 +16,7 @@ stack_repo=${STACK_REPO:="planetmint/planetmint"}
stack_size=${STACK_SIZE:=4}
stack_type=${STACK_TYPE:="docker"}
stack_type_provider=${STACK_TYPE_PROVIDER:=""}
tm_version=${TM_VERSION:="0.31.5"}
tm_version=${TM_VERSION:="0.34.24"}
mongo_version=${MONGO_VERSION:="3.6"}
stack_vm_memory=${STACK_VM_MEMORY:=2048}
stack_vm_cpus=${STACK_VM_CPUS:=2}

View File

@ -19,7 +19,7 @@ The `Planetmint` class is defined here. Most node-level operations and database
`Block`, `Transaction`, and `Asset` classes are defined here. The classes mirror the block and transaction structure from the documentation, but also include methods for validation and signing.
### [`validation.py`](./validation.py)
### [`validation.py`](application/basevalidationrules.py)
Base class for validation methods (verification of votes, blocks, and transactions). The actual logic is mostly found in `transaction` and `block` models, defined in [`models.py`](./models.py).
@ -27,7 +27,7 @@ Base class for validation methods (verification of votes, blocks, and transactio
Entry point for the Planetmint process, after initialization. All subprocesses are started here: processes to handle new blocks, votes, etc.
### [`config_utils.py`](./config_utils.py)
### [`config_utils.py`](config_utils.py)
Methods for managing the configuration, including loading configuration files, automatically generating the configuration, and keeping the configuration consistent across Planetmint instances.

View File

@ -2,17 +2,3 @@
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from transactions.common.transaction import Transaction # noqa
from transactions.types.elections.validator_election import ValidatorElection # noqa
from transactions.types.elections.vote import Vote # noqa
from transactions.types.elections.chain_migration_election import ChainMigrationElection
from planetmint.lib import Planetmint
from planetmint.core import App
Transaction.register_type(Transaction.CREATE, Transaction)
Transaction.register_type(Transaction.TRANSFER, Transaction)
Transaction.register_type(ValidatorElection.OPERATION, ValidatorElection)
Transaction.register_type(ChainMigrationElection.OPERATION, ChainMigrationElection)
Transaction.register_type(Vote.OPERATION, Vote)

View File

@ -9,9 +9,10 @@ with Tendermint.
import logging
import sys
from tendermint.abci import types_pb2
from abci.application import BaseApplication
from abci.application import OkCode
from tendermint.abci import types_pb2
from tendermint.abci.types_pb2 import (
ResponseInfo,
ResponseInitChain,
@ -21,35 +22,40 @@ from tendermint.abci.types_pb2 import (
ResponseEndBlock,
ResponseCommit,
)
from planetmint import Planetmint
from planetmint.tendermint_utils import decode_transaction, calculate_hash, decode_validator
from planetmint.lib import Block
from planetmint.events import EventTypes, Event
from planetmint.application.validator import Validator
from planetmint.abci.utils import decode_validator, decode_transaction, calculate_hash
from planetmint.abci.block import Block
from planetmint.ipc.events import EventTypes, Event
from planetmint.backend.exceptions import DBConcurrencyError
CodeTypeError = 1
logger = logging.getLogger(__name__)
class App(BaseApplication):
class ApplicationLogic(BaseApplication):
"""Bridge between Planetmint and Tendermint.
The role of this class is to expose the Planetmint
transaction logic to Tendermint Core.
"""
def __init__(self, planetmint_node=None, events_queue=None):
def __init__(
self,
validator: Validator = None,
events_queue=None,
):
# super().__init__(abci)
logger.debug("Checking values of types")
logger.debug(dir(types_pb2))
self.events_queue = events_queue
self.planetmint_node = planetmint_node or Planetmint()
self.validator = validator if validator else Validator()
self.block_txn_ids = []
self.block_txn_hash = ""
self.block_transactions = []
self.validators = None
self.new_height = None
self.chain = self.planetmint_node.get_latest_abci_chain()
self.chain = self.validator.models.get_latest_abci_chain()
def log_abci_migration_error(self, chain_id, validators):
logger.error(
@ -61,7 +67,7 @@ class App(BaseApplication):
def abort_if_abci_chain_is_not_synced(self):
if self.chain is None or self.chain["is_synced"]:
return
validators = self.planetmint_node.get_validators()
validators = self.validator.models.get_validators()
self.log_abci_migration_error(self.chain["chain_id"], validators)
sys.exit(1)
@ -69,33 +75,42 @@ class App(BaseApplication):
"""Initialize chain upon genesis or a migration"""
app_hash = ""
height = 0
known_chain = self.planetmint_node.get_latest_abci_chain()
try:
known_chain = self.validator.models.get_latest_abci_chain()
if known_chain is not None:
chain_id = known_chain["chain_id"]
if known_chain["is_synced"]:
msg = f"Got invalid InitChain ABCI request ({genesis}) - " f"the chain {chain_id} is already synced."
msg = f"Got invalid InitChain ABCI request ({genesis}) - the chain {chain_id} is already synced."
logger.error(msg)
sys.exit(1)
if chain_id != genesis.chain_id:
validators = self.planetmint_node.get_validators()
validators = self.validator.models.get_validators()
self.log_abci_migration_error(chain_id, validators)
sys.exit(1)
# set migration values for app hash and height
block = self.planetmint_node.get_latest_block()
block = self.validator.models.get_latest_block()
app_hash = "" if block is None else block["app_hash"]
height = 0 if block is None else block["height"] + 1
known_validators = self.planetmint_node.get_validators()
known_validators = self.validator.models.get_validators()
validator_set = [decode_validator(v) for v in genesis.validators]
if known_validators and known_validators != validator_set:
self.log_abci_migration_error(known_chain["chain_id"], known_validators)
sys.exit(1)
block = Block(app_hash=app_hash, height=height, transactions=[])
self.planetmint_node.store_block(block._asdict())
self.planetmint_node.store_validator_set(height + 1, validator_set)
self.validator.models.store_block(block._asdict())
self.validator.models.store_validator_set(height + 1, validator_set)
abci_chain_height = 0 if known_chain is None else known_chain["height"]
self.planetmint_node.store_abci_chain(abci_chain_height, genesis.chain_id, True)
self.chain = {"height": abci_chain_height, "is_synced": True, "chain_id": genesis.chain_id}
self.validator.models.store_abci_chain(abci_chain_height, genesis.chain_id, True)
self.chain = {
"height": abci_chain_height,
"is_synced": True,
"chain_id": genesis.chain_id,
}
except DBConcurrencyError:
sys.exit(1)
except ValueError:
sys.exit(1)
return ResponseInitChain()
def info(self, request):
@ -112,7 +127,13 @@ class App(BaseApplication):
# logger.info(f"Tendermint version: {request.version}")
r = ResponseInfo()
block = self.planetmint_node.get_latest_block()
block = None
try:
block = self.validator.models.get_latest_block()
except DBConcurrencyError:
block = None
except ValueError:
block = None
if block:
chain_shift = 0 if self.chain is None else self.chain["height"]
r.last_block_height = block["height"] - chain_shift
@ -134,12 +155,17 @@ class App(BaseApplication):
logger.debug("check_tx: %s", raw_transaction)
transaction = decode_transaction(raw_transaction)
if self.planetmint_node.is_valid_transaction(transaction):
try:
if self.validator.is_valid_transaction(transaction):
logger.debug("check_tx: VALID")
return ResponseCheckTx(code=OkCode)
else:
logger.debug("check_tx: INVALID")
return ResponseCheckTx(code=CodeTypeError)
except DBConcurrencyError:
sys.exit(1)
except ValueError:
sys.exit(1)
def begin_block(self, req_begin_block):
"""Initialize list of transaction.
@ -167,9 +193,15 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
logger.debug("deliver_tx: %s", raw_transaction)
transaction = self.planetmint_node.is_valid_transaction(
transaction = None
try:
transaction = self.validator.is_valid_transaction(
decode_transaction(raw_transaction), self.block_transactions
)
except DBConcurrencyError:
sys.exit(1)
except ValueError:
sys.exit(1)
if not transaction:
logger.debug("deliver_tx: INVALID")
@ -198,17 +230,25 @@ class App(BaseApplication):
# `end_block` or `commit`
logger.debug(f"Updating pre-commit state: {self.new_height}")
pre_commit_state = dict(height=self.new_height, transactions=self.block_txn_ids)
self.planetmint_node.store_pre_commit_state(pre_commit_state)
try:
self.validator.models.store_pre_commit_state(pre_commit_state)
block_txn_hash = calculate_hash(self.block_txn_ids)
block = self.planetmint_node.get_latest_block()
block = self.validator.models.get_latest_block()
logger.debug(f"BLOCK: {block}")
if self.block_txn_ids:
self.block_txn_hash = calculate_hash([block["app_hash"], block_txn_hash])
else:
self.block_txn_hash = block["app_hash"]
validator_update = self.planetmint_node.process_block(self.new_height, self.block_transactions)
validator_update = self.validator.process_block(self.new_height, self.block_transactions)
except DBConcurrencyError:
sys.exit(1)
except ValueError:
sys.exit(1)
except TypeError:
sys.exit(1)
return ResponseEndBlock(validator_updates=validator_update)
@ -218,18 +258,26 @@ class App(BaseApplication):
self.abort_if_abci_chain_is_not_synced()
data = self.block_txn_hash.encode("utf-8")
try:
# register a new block only when new transactions are received
if self.block_txn_ids:
self.planetmint_node.store_bulk_transactions(self.block_transactions)
self.validator.models.store_bulk_transactions(self.block_transactions)
block = Block(app_hash=self.block_txn_hash, height=self.new_height, transactions=self.block_txn_ids)
block = Block(
app_hash=self.block_txn_hash,
height=self.new_height,
transactions=self.block_txn_ids,
)
# NOTE: storing the block should be the last operation during commit
# this affects crash recovery. Refer to BEP#8 for details
self.planetmint_node.store_block(block._asdict())
self.validator.models.store_block(block._asdict())
except DBConcurrencyError:
sys.exit(1)
except ValueError:
sys.exit(1)
logger.debug(
"Commit-ing new block with hash: apphash=%s ," "height=%s, txn ids=%s",
"Commit-ing new block with hash: apphash=%s, height=%s, txn ids=%s",
data,
self.new_height,
self.block_txn_ids,
@ -238,31 +286,12 @@ class App(BaseApplication):
if self.events_queue:
event = Event(
EventTypes.BLOCK_VALID,
{"height": self.new_height, "hash": self.block_txn_hash, "transactions": self.block_transactions},
{
"height": self.new_height,
"hash": self.block_txn_hash,
"transactions": self.block_transactions,
},
)
self.events_queue.put(event)
return ResponseCommit(data=data)
def rollback(planetmint):
pre_commit = None
try:
pre_commit = planetmint.get_pre_commit_state()
except Exception as e:
logger.exception("Unexpected error occurred while executing get_pre_commit_state()", e)
if pre_commit is None or len(pre_commit) == 0:
# the pre_commit record is first stored in the first `end_block`
return
latest_block = planetmint.get_latest_block()
if latest_block is None:
logger.error("Found precommit state but no blocks!")
sys.exit(1)
# NOTE: the pre-commit state is always at most 1 block ahead of the committed state
if latest_block["height"] < pre_commit["height"]:
planetmint.rollback_election(pre_commit["height"], pre_commit["transactions"])
planetmint.delete_transactions(pre_commit["transactions"])

3
planetmint/abci/block.py Normal file
View File

@ -0,0 +1,3 @@
from collections import namedtuple
Block = namedtuple("Block", ("app_hash", "height", "transactions"))
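The new `planetmint/abci/block.py` reduces `Block` to a plain namedtuple; a quick usage sketch matching how the diff persists blocks via `_asdict()` (field values are illustrative):

```python
from collections import namedtuple

Block = namedtuple("Block", ("app_hash", "height", "transactions"))

# Construct a block and convert it to the dict form that store_block() receives
block = Block(app_hash="abc123", height=5, transactions=["tx1"])
record = block._asdict()
```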

View File

@ -3,12 +3,12 @@
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import multiprocessing as mp
import multiprocessing
from collections import defaultdict
from planetmint import App
from planetmint.lib import Planetmint
from planetmint.tendermint_utils import decode_transaction
from planetmint.abci.application_logic import ApplicationLogic
from planetmint.application.validator import Validator
from planetmint.abci.utils import decode_transaction
from abci.application import OkCode
from tendermint.abci.types_pb2 import (
ResponseCheckTx,
@ -16,7 +16,7 @@ from tendermint.abci.types_pb2 import (
)
class ParallelValidationApp(App):
class ParallelValidationApp(ApplicationLogic):
def __init__(self, planetmint=None, events_queue=None):
super().__init__(planetmint, events_queue)
self.parallel_validator = ParallelValidator()
@ -44,17 +44,17 @@ EXIT = "exit"
class ParallelValidator:
def __init__(self, number_of_workers=mp.cpu_count()):
def __init__(self, number_of_workers=multiprocessing.cpu_count()):
self.number_of_workers = number_of_workers
self.transaction_index = 0
self.routing_queues = [mp.Queue() for _ in range(self.number_of_workers)]
self.routing_queues = [multiprocessing.Queue() for _ in range(self.number_of_workers)]
self.workers = []
self.results_queue = mp.Queue()
self.results_queue = multiprocessing.Queue()
def start(self):
for routing_queue in self.routing_queues:
worker = ValidationWorker(routing_queue, self.results_queue)
process = mp.Process(target=worker.run)
process = multiprocessing.Process(target=worker.run)
process.start()
self.workers.append(process)
@ -93,7 +93,7 @@ class ValidationWorker:
def __init__(self, in_queue, results_queue):
self.in_queue = in_queue
self.results_queue = results_queue
self.planetmint = Planetmint()
self.validator = Validator()
self.reset()
def reset(self):
@ -112,7 +112,7 @@ class ValidationWorker:
except TypeError:
asset_id = dict_transaction["id"]
transaction = self.planetmint.is_valid_transaction(dict_transaction, self.validated_transactions[asset_id])
transaction = self.validator.is_valid_transaction(dict_transaction, self.validated_transactions[asset_id])
if transaction:
self.validated_transactions[asset_id].append(transaction)

80
planetmint/abci/rpc.py Normal file
View File

@ -0,0 +1,80 @@
import requests
from uuid import uuid4
from transactions.common.exceptions import ValidationError
from transactions.common.transaction_mode_types import (
BROADCAST_TX_COMMIT,
BROADCAST_TX_ASYNC,
BROADCAST_TX_SYNC,
)
from planetmint.abci.utils import encode_transaction
from planetmint.application.validator import logger
from planetmint.config_utils import autoconfigure
from planetmint.config import Config
MODE_COMMIT = BROADCAST_TX_COMMIT
MODE_LIST = (BROADCAST_TX_ASYNC, BROADCAST_TX_SYNC, MODE_COMMIT)
class ABCI_RPC:
def __init__(self):
autoconfigure()
self.tendermint_host = Config().get()["tendermint"]["host"]
self.tendermint_port = Config().get()["tendermint"]["port"]
self.tendermint_rpc_endpoint = "http://{}:{}/".format(self.tendermint_host, self.tendermint_port)
@staticmethod
def _process_post_response(mode_commit, response, mode):
logger.debug(response)
error = response.get("error")
if error:
status_code = 500
message = error.get("message", "Internal Error")
data = error.get("data", "")
if "Tx already exists in cache" in data:
status_code = 400
return (status_code, message + " - " + data)
result = response["result"]
if mode == mode_commit:
check_tx_code = result.get("check_tx", {}).get("code", 0)
deliver_tx_code = result.get("deliver_tx", {}).get("code", 0)
error_code = check_tx_code or deliver_tx_code
else:
error_code = result.get("code", 0)
if error_code:
return (500, "Transaction validation failed")
return (202, "")
def write_transaction(self, mode_list, endpoint, mode_commit, transaction, mode):
# This method offers backward compatibility with the Web API.
"""Submit a valid transaction to the mempool."""
response = self.post_transaction(mode_list, endpoint, transaction, mode)
return ABCI_RPC._process_post_response(mode_commit, response.json(), mode)
def post_transaction(self, mode_list, endpoint, transaction, mode):
"""Submit a valid transaction to the mempool."""
if not mode or mode not in mode_list:
raise ValidationError("Mode must be one of the following {}.".format(", ".join(mode_list)))
tx_dict = transaction.tx_dict if transaction.tx_dict else transaction.to_dict()
payload = {
"method": mode,
"jsonrpc": "2.0",
"params": [encode_transaction(tx_dict)],
"id": str(uuid4()),
}
try:
response = requests.post(endpoint, json=payload)
except requests.exceptions.ConnectionError as e:
logger.error(f"Tendermint RPC connection issue: {e}")
raise e
except Exception as e:
logger.error(f"Tendermint RPC connection issue: {e}")
raise e
return response
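The response mapping in `_process_post_response` above is a pure function, so it is easy to exercise in isolation. A standalone sketch mirroring its logic (the response dicts are illustrative, not captured from a live Tendermint node):

```python
def process_post_response(response, mode, mode_commit="commit"):
    """Standalone sketch mirroring ABCI_RPC._process_post_response above."""
    error = response.get("error")
    if error:
        status_code = 500
        message = error.get("message", "Internal Error")
        data = error.get("data", "")
        # A duplicate transaction is a client error, not a server error
        if "Tx already exists in cache" in data:
            status_code = 400
        return (status_code, message + " - " + data)
    result = response["result"]
    if mode == mode_commit:
        # In commit mode either phase may report a non-zero code
        error_code = result.get("check_tx", {}).get("code", 0) or result.get(
            "deliver_tx", {}
        ).get("code", 0)
    else:
        error_code = result.get("code", 0)
    return (500, "Transaction validation failed") if error_code else (202, "")

accepted = process_post_response({"result": {"code": 0}}, "async")
duplicate = process_post_response(
    {"error": {"message": "tx error", "data": "Tx already exists in cache"}}, "async"
)
```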

View File

@ -1,19 +1,46 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import base64
import codecs
import hashlib
import json
import codecs
from binascii import hexlify
from hashlib import sha3_256
from packaging import version
from tendermint.abci import types_pb2
from tendermint.crypto import keys_pb2
from hashlib import sha3_256
from transactions.common.crypto import key_pair_from_ed25519_key
from transactions.common.exceptions import InvalidPublicKey
from planetmint.version import __tm_supported_versions__
def load_node_key(path):
with open(path) as json_data:
priv_validator = json.load(json_data)
priv_key = priv_validator["priv_key"]["value"]
hex_private_key = key_from_base64(priv_key)
return key_pair_from_ed25519_key(hex_private_key)
def tendermint_version_is_compatible(running_tm_ver):
"""
Check Tendermint compatibility with the Planetmint server
:param running_tm_ver: Version number of the connected Tendermint instance
:type running_tm_ver: str
:return: True/False depending on compatibility with the Planetmint server
:rtype: bool
"""
# Splitting because version can look like this e.g. 0.22.8-40d6dc2e
tm_ver = running_tm_ver.split("-")
if not tm_ver:
return False
for ver in __tm_supported_versions__:
if version.parse(ver) == version.parse(tm_ver[0]):
return True
return False
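The check above compares only the release part of the version string, since dev builds append a commit hash. A sketch with an inlined supported-versions list standing in for `__tm_supported_versions__`:

```python
from packaging import version

SUPPORTED = ["0.34.24"]  # illustrative; the real list lives in planetmint.version


def tendermint_version_is_compatible(running_tm_ver):
    # "0.34.24-40d6dc2e" -> "0.34.24": drop the build-hash suffix
    base = running_tm_ver.split("-")[0]
    return any(version.parse(v) == version.parse(base) for v in SUPPORTED)


compatible = tendermint_version_is_compatible("0.34.24-40d6dc2e")
incompatible = tendermint_version_is_compatible("0.31.5")
```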
def encode_validator(v):
ed25519_public_key = v["public_key"]["value"]
@ -52,7 +79,6 @@ def new_validator_set(validators, updates):
def get_public_key_decoder(pk):
encoding = pk["type"]
decoder = base64.b64decode
if encoding == "ed25519-base16":
decoder = base64.b16decode
@ -121,7 +147,6 @@ def merkleroot(hashes):
return merkleroot(parent_hashes)
# ripemd160 is only available below python 3.9.13
@DeprecationWarning
def public_key64_to_address(base64_public_key):
"""Note this only compatible with Tendermint 0.19.x"""

View File

@ -0,0 +1,2 @@
from .validator import Validator
from .basevalidationrules import BaseValidationRules

View File

@ -0,0 +1,557 @@
import logging
import json
from collections import OrderedDict
from transactions import Transaction, Vote
from transactions.common.exceptions import (
DoubleSpend,
AssetIdMismatch,
InvalidSignature,
AmountError,
SchemaValidationError,
ValidationError,
MultipleInputsError,
DuplicateTransaction,
InvalidProposer,
UnequalValidatorSet,
InvalidPowerChange,
)
from transactions.common.crypto import public_key_from_ed25519_key
from transactions.common.output import Output as TransactionOutput
from transactions.common.transaction import VALIDATOR_ELECTION, CHAIN_MIGRATION_ELECTION
from transactions.types.elections.election import Election
from transactions.types.elections.validator_utils import election_id_to_public_key
from planetmint.abci.utils import encode_validator, new_validator_set, key_from_base64, public_key_to_base64
from planetmint.application.basevalidationrules import BaseValidationRules
from planetmint.backend.models.output import Output
from planetmint.model.dataaccessor import DataAccessor
from planetmint.config import Config
from planetmint.config_utils import load_validation_plugin
from planetmint.utils.singleton import Singleton
logger = logging.getLogger(__name__)
class Validator:
def __init__(self):
self.models = DataAccessor()
self.validation = Validator._get_validation_method()
@staticmethod
def _get_validation_method():
validationPlugin = Config().get().get("validation_plugin")
if validationPlugin:
validation_method = load_validation_plugin(validationPlugin)
else:
validation_method = BaseValidationRules
return validation_method
@staticmethod
def validate_inputs_distinct(tx: Transaction):
# Validate that all inputs are distinct
links = [i.fulfills.to_uri() for i in tx.inputs]
if len(links) != len(set(links)):
raise DoubleSpend('tx "{}" spends inputs twice'.format(tx.id))
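The distinct-inputs guard above relies on set collapse: a spending link that appears twice shrinks when deduplicated, so the lengths differ. A minimal sketch with illustrative `fulfills` URIs:

```python
# Two inputs spending the same output link -> lengths differ after set()
links = ["tx_a:0", "tx_b:1", "tx_a:0"]
double_spend = len(links) != len(set(links))
```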
@staticmethod
def validate_asset_id(tx: Transaction, input_txs: list):
# validate asset
if tx.operation != Transaction.COMPOSE:
asset_id = tx.get_asset_id(input_txs)
if asset_id != Transaction.read_out_asset_id(tx):
raise AssetIdMismatch(("The asset id of the input does not match the asset id of the transaction"))
else:
asset_ids = Transaction.get_asset_ids(input_txs)
if Transaction.read_out_asset_id(tx) in asset_ids:
raise AssetIdMismatch(("The asset ID of the compose must be different from all of its input asset IDs"))
@staticmethod
def validate_input_conditions(tx: Transaction, input_conditions: list[Output]):
# convert planetmint.Output objects to transactions.common.Output objects
input_conditions_dict = Output.list_to_dict(input_conditions)
input_conditions_converted = []
for input_cond in input_conditions_dict:
input_conditions_converted.append(TransactionOutput.from_dict(input_cond))
if not tx.inputs_valid(input_conditions_converted):
raise InvalidSignature("Transaction signature is invalid.")
def validate_compose_inputs(self, tx, current_transactions=[]) -> bool:
input_txs, input_conditions = self.models.get_input_txs_and_conditions(tx.inputs, current_transactions)
Validator.validate_input_conditions(tx, input_conditions)
Validator.validate_asset_id(tx, input_txs)
Validator.validate_inputs_distinct(tx)
return True
def validate_transfer_inputs(self, tx, current_transactions=[]) -> bool:
input_txs, input_conditions = self.models.get_input_txs_and_conditions(tx.inputs, current_transactions)
Validator.validate_input_conditions(tx, input_conditions)
Validator.validate_asset_id(tx, input_txs)
Validator.validate_inputs_distinct(tx)
input_amount = sum([input_condition.amount for input_condition in input_conditions])
output_amount = sum([output_condition.amount for output_condition in tx.outputs])
if output_amount != input_amount:
raise AmountError(
"The amount used in the inputs `{}` needs to be same as the amount used in the outputs `{}`".format(
input_amount, output_amount
)
)
return True
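The transfer rule enforced above is amount conservation: the amounts consumed from inputs and the amounts created in outputs must sum to the same total. A minimal sketch (amounts are illustrative):

```python
# A 10-token transfer split differently across inputs and outputs
input_amounts = [3, 7]
output_amounts = [5, 5]
conserved = sum(input_amounts) == sum(output_amounts)
```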
def validate_create_inputs(self, tx, current_transactions=[]) -> bool:
duplicates = any(txn for txn in current_transactions if txn.id == tx.id)
if self.models.is_committed(tx.id) or duplicates:
raise DuplicateTransaction("transaction `{}` already exists".format(tx.id))
fulfilling_inputs = [i for i in tx.inputs if i.fulfills is not None and i.fulfills.txid is not None]
if len(fulfilling_inputs) > 0:
input_txs, input_conditions = self.models.get_input_txs_and_conditions(
fulfilling_inputs, current_transactions
)
create_asset = tx.assets[0]
input_asset = input_txs[0].assets[tx.inputs[0].fulfills.output]["data"]
if create_asset != input_asset:
raise ValidationError("CREATE must have matching asset description with input transaction")
if input_txs[0].operation != Transaction.DECOMPOSE:
raise SchemaValidationError("CREATE can only consume DECOMPOSE outputs")
return True
def validate_transaction(self, transaction, current_transactions=[]):
"""Validate a transaction against the current status of the database."""
# CLEANUP: The conditional below checks for transaction in dict format.
# It would be better to only have a single format for the transaction
# throughout the code base.
if isinstance(transaction, dict):
try:
transaction = Transaction.from_dict(transaction, False)
except SchemaValidationError as e:
logger.warning("Invalid transaction schema: %s", e.__cause__.message)
return False
except ValidationError as e:
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
if self.validate_script(transaction) == False:
logger.warning("Invalid transaction script")
return False
if transaction.operation == Transaction.CREATE:
self.validate_create_inputs(transaction, current_transactions)
elif transaction.operation in [Transaction.TRANSFER, Transaction.VOTE]:
self.validate_transfer_inputs(transaction, current_transactions)
elif transaction.operation in [Transaction.COMPOSE]:
self.validate_compose_inputs(transaction, current_transactions)
return transaction
def validate_script(self, transaction: Transaction) -> bool:
if transaction.script:
return transaction.script.validate()
return True
def validate_election(self, transaction, current_transactions=[]): # TODO: move somewhere else
"""Validate election transaction
NOTE:
* A valid election is initiated by an existing validator.
* A valid election is one where voters are validators and votes are
allocated according to the voting power of each validator node.
Args:
:param planet: (Planetmint) an instantiated planetmint.lib.Planetmint object.
:param current_transactions: (list) A list of transactions to be validated along with the election
Returns:
Election: a Election object or an object of the derived Election subclass.
Raises:
ValidationError: If the election is invalid
"""
duplicates = any(txn for txn in current_transactions if txn.id == transaction.id)
if self.models.is_committed(transaction.id) or duplicates:
raise DuplicateTransaction("transaction `{}` already exists".format(transaction.id))
current_validators = self.models.get_validators_dict()
# NOTE: Proposer should be a single node
if len(transaction.inputs) != 1 or len(transaction.inputs[0].owners_before) != 1:
raise MultipleInputsError("`tx_signers` must be a list instance of length one")
# NOTE: Check if the proposer is a validator.
[election_initiator_node_pub_key] = transaction.inputs[0].owners_before
if election_initiator_node_pub_key not in current_validators.keys():
raise InvalidProposer("Public key is not a part of the validator set")
# NOTE: Check if all validators have been assigned votes equal to their voting power
if not Validator.is_same_topology(current_validators, transaction.outputs):
raise UnequalValidatorSet("Validator set must be exactly the same as the outputs of the election")
if transaction.operation == VALIDATOR_ELECTION:
self.validate_validator_election(transaction)
return transaction
@staticmethod
def is_same_topology(current_topology, election_topology):
voters = {}
for voter in election_topology:
if len(voter.public_keys) > 1:
return False
[public_key] = voter.public_keys
voting_power = voter.amount
voters[public_key] = voting_power
# Check whether the voters and their votes are the same as the
# validators and their voting power in the network
return current_topology == voters
def validate_validator_election(self, transaction): # TODO: move somewhere else
"""For more details refer BEP-21: https://github.com/planetmint/BEPs/tree/master/21"""
current_validators = self.models.get_validators_dict()
# NOTE: change more than 1/3 of the current power is not allowed
if transaction.get_assets()[0]["data"]["power"] >= (1 / 3) * sum(current_validators.values()):
raise InvalidPowerChange("`power` change must be less than 1/3 of total power")
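The BEP-21 guard above rejects any election that would grant a validator one third or more of the current total voting power. A worked sketch of the arithmetic (numbers are illustrative):

```python
# Three validators with 10 voting power each -> total of 30
current_validators = {"pk_a": 10, "pk_b": 10, "pk_c": 10}
total_power = sum(current_validators.values())

# A proposed power must stay strictly below one third of the total
allowed = 9 < (1 / 3) * total_power
rejected = not (10 < (1 / 3) * total_power)  # exactly one third is rejected
```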
def get_election_status(self, transaction):
election = self.models.get_election(transaction.id)
if election and election["is_concluded"]:
return Election.CONCLUDED
return Election.INCONCLUSIVE if self.has_validator_set_changed(transaction) else Election.ONGOING
def has_validator_set_changed(self, transaction):
latest_change = self.get_validator_change()
if latest_change is None:
return False
latest_change_height = latest_change["height"]
election = self.models.get_election(transaction.id)
return latest_change_height > election["height"]
def get_validator_change(self):
"""Return the validator set from the most recent approved block
:return: {
'height': <block_height>,
'validators': <validator_set>
}
"""
latest_block = self.models.get_latest_block()
if latest_block is None:
return None
return self.models.get_validator_set(latest_block["height"])
def get_validator_dict(self, height=None):
"""Return a dictionary of validators with key as `public_key` and
value as the `voting_power`
"""
validators = {}
for validator in self.models.get_validators(height=height):
# NOTE: we assume that Tendermint encodes public key in base64
public_key = public_key_from_ed25519_key(key_from_base64(validator["public_key"]["value"]))
validators[public_key] = validator["voting_power"]
return validators
# TODO to be moved to planetmint.commands.planetmint
def show_election_status(self, transaction):
data = transaction.assets[0]
data = data.to_dict()["data"]
if "public_key" in data.keys():
data["public_key"] = public_key_to_base64(data["public_key"]["value"])
response = ""
for k, v in data.items():
if k != "seed":
response += f"{k}={v}\n"
response += f"status={self.get_election_status(transaction)}"
if transaction.operation == CHAIN_MIGRATION_ELECTION:
response = self.append_chain_migration_status(response)
return response
# TODO to be moved to planetmint.commands.planetmint
def append_chain_migration_status(self, status):
chain = self.models.get_latest_abci_chain()
if chain is None or chain["is_synced"]:
return status
status += f'\nchain_id={chain["chain_id"]}'
block = self.models.get_latest_block()
status += f'\napp_hash={block["app_hash"]}'
validators = [
{
"pub_key": {
"type": "tendermint/PubKeyEd25519",
"value": k,
},
"power": v,
}
for k, v in self.get_validator_dict().items()
]
status += f"\nvalidators={json.dumps(validators, indent=4)}"
return status
def get_recipients_list(self):
"""Convert validator dictionary to a recipient list for `Transaction`"""
recipients = []
for public_key, voting_power in self.get_validator_dict().items():
recipients.append(([public_key], voting_power))
return recipients
def count_votes(self, election_pk, transactions):
votes = 0
for txn in transactions:
if txn.operation == Vote.OPERATION:
for output in txn.outputs:
# NOTE: We enforce that a valid vote for an election id has only
# election_pk in its output public keys; any additional public key
# alongside election_pk renders the vote invalid.
if len(output.public_keys) == 1 and [election_pk] == output.public_keys:
votes = votes + output.amount
return votes
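The single-key rule enforced above can be sketched with stand-in objects (the `FakeOutput`/`FakeVote` names and `count_votes_sketch` helper are hypothetical, for illustration only — the real code operates on `Vote` transactions):

```python
from dataclasses import dataclass, field

@dataclass
class FakeOutput:
    public_keys: list
    amount: int

@dataclass
class FakeVote:
    operation: str = "VOTE"
    outputs: list = field(default_factory=list)

def count_votes_sketch(election_pk, transactions):
    # mirror of count_votes: only outputs addressed solely to election_pk count
    votes = 0
    for txn in transactions:
        if txn.operation == "VOTE":
            for output in txn.outputs:
                if len(output.public_keys) == 1 and [election_pk] == output.public_keys:
                    votes += output.amount
    return votes

valid = FakeVote(outputs=[FakeOutput(["ELECTION_PK"], 3)])
mixed = FakeVote(outputs=[FakeOutput(["ELECTION_PK", "other"], 5)])  # ignored: extra key
assert count_votes_sketch("ELECTION_PK", [valid, mixed]) == 3
```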
def get_commited_votes(self, transaction, election_pk=None): # TODO: move somewhere else
if election_pk is None:
election_pk = election_id_to_public_key(transaction.id)
txns = self.models.get_asset_tokens_for_public_key(transaction.id, election_pk)
return self.count_votes(election_pk, txns)
def _get_initiated_elections(self, height, txns): # TODO: move somewhere else
elections = []
for tx in txns:
if not isinstance(tx, Election):
continue
elections.append({"election_id": tx.id, "height": height, "is_concluded": False})
return elections
def _get_votes(self, txns): # TODO: move somewhere else
elections = OrderedDict()
for tx in txns:
if not isinstance(tx, Vote):
continue
election_id = Transaction.read_out_asset_id(tx)
if election_id not in elections:
elections[election_id] = []
elections[election_id].append(tx)
return elections
def process_block(self, new_height, txns): # TODO: move somewhere else
"""Looks for election and vote transactions inside the block, records
and processes elections.
Every election is recorded in the database.
Every vote has a chance to conclude the corresponding election. When
an election is concluded, the corresponding database record is
marked as such.
Elections and votes are processed in the order in which they
appear in the block. Elections are concluded in the order of
appearance of their first votes in the block.
For every election concluded in the block, calls its `on_approval`
method. The returned value of the last `on_approval`, if any,
is a validator set update to be applied in one of the following blocks.
`on_approval` methods are implemented by elections of particular type.
The method may contain side effects but should be idempotent. To account
for other concluded elections, if it requires so, the method should
rely on the database state.
"""
# elections initiated in this block
initiated_elections = self._get_initiated_elections(new_height, txns)
if initiated_elections:
self.models.store_elections(initiated_elections)
# elections voted for in this block and their votes
elections = self._get_votes(txns)
validator_update = None
for election_id, votes in elections.items():
election = self.models.get_transaction(election_id)
if election is None:
continue
if not self.has_election_concluded(election, votes):
continue
validator_update = self.approve_election(election, new_height)
self.models.store_election(election.id, new_height, is_concluded=True)
return [validator_update] if validator_update else []
def has_election_concluded(self, transaction, current_votes=[]): # TODO: move somewhere else
"""Check if the election can be concluded or not.
* Elections can only be concluded if the validator set has not changed
since the election was initiated.
* Elections can be concluded only if the current votes form a supermajority.
Custom elections may override this function and introduce additional checks.
"""
if self.has_validator_set_changed(transaction):
return False
if transaction.operation == VALIDATOR_ELECTION:
if not self.has_validator_election_concluded():
return False
if transaction.operation == CHAIN_MIGRATION_ELECTION:
if not self.has_chain_migration_concluded():
return False
election_pk = election_id_to_public_key(transaction.id)
votes_committed = self.get_commited_votes(transaction, election_pk)
votes_current = self.count_votes(election_pk, current_votes)
total_votes = sum(int(output.amount) for output in transaction.outputs)
if (votes_committed < (2 / 3) * total_votes) and (votes_committed + votes_current >= (2 / 3) * total_votes):
return True
return False
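The supermajority test above concludes an election exactly once: only when the votes in the current block push the committed tally across the 2/3 threshold. A minimal sketch of that arithmetic (the `concludes` helper is illustrative, not part of the codebase):

```python
def concludes(votes_committed, votes_current, total_votes):
    # the election concludes exactly when this block's votes push the
    # committed tally across the 2/3 supermajority threshold
    return (votes_committed < (2 / 3) * total_votes) and (
        votes_committed + votes_current >= (2 / 3) * total_votes
    )

# total power 9 -> threshold is 6
assert concludes(5, 1, 9) is True    # 5 < 6, and 5 + 1 >= 6
assert concludes(6, 1, 9) is False   # already concluded in an earlier block
assert concludes(4, 1, 9) is False   # 4 + 1 = 5, not enough yet
```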
def has_validator_election_concluded(self): # TODO: move somewhere else
latest_block = self.models.get_latest_block()
if latest_block is not None:
latest_block_height = latest_block["height"]
latest_validator_change = self.models.get_validator_set()["height"]
# TODO change to `latest_block_height + 3` when upgrading to Tendermint 0.24.0.
if latest_validator_change == latest_block_height + 2:
# do not conclude the election if there is a change assigned already
return False
return True
def has_chain_migration_concluded(self): # TODO: move somewhere else
chain = self.models.get_latest_abci_chain()
if chain is not None and not chain["is_synced"]:
# do not conclude the migration election if
# there is another migration in progress
return False
return True
def rollback_election(self, new_height, txn_ids): # TODO: move somewhere else
"""Looks for election and vote transactions inside the block and
cleans up the database artifacts possibly created in `process_block`.
Part of the `end_block`/`commit` crash recovery.
"""
# delete election records for elections initiated at this height and
# elections concluded at this height
self.models.delete_elections(new_height)
txns = [self.models.get_transaction(tx_id) for tx_id in txn_ids]
txns = [Transaction.from_dict(tx.to_dict()) for tx in txns if tx]
elections = self._get_votes(txns)
for election_id in elections:
election = self.models.get_transaction(election_id)
if election.operation == VALIDATOR_ELECTION:
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
self.models.delete_validator_set(new_height + 1)
if election.operation == CHAIN_MIGRATION_ELECTION:
self.models.delete_abci_chain(new_height)
def approve_election(self, election, new_height):
"""Override to update the database state according to the
election rules. Consider the current database state to account for
other concluded elections, if required.
"""
if election.operation == CHAIN_MIGRATION_ELECTION:
self.migrate_abci_chain()
if election.operation == VALIDATOR_ELECTION:
validator_updates = [election.assets[0].data]
curr_validator_set = self.models.get_validators(height=new_height)
updated_validator_set = new_validator_set(curr_validator_set, validator_updates)
updated_validator_set = [v for v in updated_validator_set if v["voting_power"] > 0]
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
self.models.store_validator_set(new_height + 1, updated_validator_set)
return encode_validator(election.assets[0].data)
def is_valid_transaction(self, tx, current_transactions=[]):
# NOTE: the function returns the Transaction object in case
# the transaction is valid
try:
return self.validate_transaction(tx, current_transactions)
except ValidationError as e:
logger.warning("Invalid transaction (%s): %s", type(e).__name__, e)
return False
def migrate_abci_chain(self):
"""Generate and record a new ABCI chain ID. New blocks are not
accepted until we receive an InitChain ABCI request with
the matching chain ID and validator set.
Chain ID is generated based on the current chain and height.
`chain-X` => `chain-X-migrated-at-height-5`.
`chain-X-migrated-at-height-5` => `chain-X-migrated-at-height-21`.
If there is no known chain (we are at genesis), the function returns.
"""
latest_chain = self.models.get_latest_abci_chain()
if latest_chain is None:
return
block = self.models.get_latest_block()
suffix = "-migrated-at-height-"
chain_id = latest_chain["chain_id"]
block_height_str = str(block["height"])
new_chain_id = chain_id.split(suffix)[0] + suffix + block_height_str
self.models.store_abci_chain(block["height"] + 1, new_chain_id, False)
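The chain-ID derivation above is plain string manipulation; splitting on the suffix first is what keeps repeated migrations from stacking suffixes. A self-contained sketch (the `next_chain_id` helper is illustrative only):

```python
def next_chain_id(chain_id, height, suffix="-migrated-at-height-"):
    # splitting on the suffix strips any previous migration marker,
    # so repeated migrations never stack suffixes
    return chain_id.split(suffix)[0] + suffix + str(height)

assert next_chain_id("chain-X", 5) == "chain-X-migrated-at-height-5"
assert next_chain_id("chain-X-migrated-at-height-5", 21) == "chain-X-migrated-at-height-21"
```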
def rollback(self):
pre_commit = None
try:
pre_commit = self.models.get_pre_commit_state()
except Exception as e:
logger.exception("Unexpected error occurred while executing get_pre_commit_state(): %s", e)
if pre_commit is None or len(pre_commit) == 0:
# the pre_commit record is first stored in the first `end_block`
return
latest_block = self.models.get_latest_block()
if latest_block is None:
logger.error("Found precommit state but no blocks!")
sys.exit(1)
# NOTE: the pre-commit state is always at most 1 block ahead of the committed state
if latest_block["height"] < pre_commit["height"]:
self.rollback_election(pre_commit["height"], pre_commit["transactions"])
self.models.delete_transactions(pre_commit["transactions"])

View File

@ -12,5 +12,5 @@ configuration or the ``PLANETMINT_DATABASE_BACKEND`` environment variable.
"""
# Include the backend interfaces
from planetmint.backend import schema, query, convert # noqa
from planetmint.backend import schema, query # noqa
from planetmint.backend.connection import Connection

View File

@ -11,7 +11,7 @@ from transactions.common.exceptions import ConfigurationError
from planetmint.config import Config
BACKENDS = {
"tarantool_db": "planetmint.backend.tarantool.connection.TarantoolDBConnection",
"tarantool_db": "planetmint.backend.tarantool.sync_io.connection.TarantoolDBConnection",
"localmongodb": "planetmint.backend.localmongodb.connection.LocalMongoDBConnection",
}

View File

@ -1,26 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
"""Convert interfaces for backends."""
from functools import singledispatch
@singledispatch
def prepare_asset(connection, transaction_type, transaction_id, filter_operation, asset):
"""
This function is used for preparing assets,
before storing them to database.
"""
raise NotImplementedError
@singledispatch
def prepare_metadata(connection, transaction_id, metadata):
"""
This function is used for preparing metadata,
before storing them to database.
"""
raise NotImplementedError

View File

@ -18,5 +18,13 @@ class OperationError(BackendError):
"""Exception raised when a backend operation fails."""
class OperationDataInsertionError(BackendError):
"""Exception raised when a Database operation fails."""
class DBConcurrencyError(BackendError):
"""Exception raised when a Database concurrency error occurs."""
class DuplicateKeyError(OperationError):
"""Exception raised when an insert fails because the key is not unique"""

View File

@ -22,7 +22,7 @@ generic backend interfaces to the implementations in this module.
"""
# Register the single dispatched modules on import.
from planetmint.backend.localmongodb import schema, query, convert # noqa
from planetmint.backend.localmongodb import schema, query # noqa
# MongoDBConnection should always be accessed via
# ``planetmint.backend.connect()``.

View File

@ -10,7 +10,7 @@ import pymongo
from planetmint.config import Config
from planetmint.backend.exceptions import DuplicateKeyError, OperationError, ConnectionError
from transactions.common.exceptions import ConfigurationError
from planetmint.utils import Lazy
from planetmint.utils.lazy import Lazy
from planetmint.backend.connection import DBConnection, _kwargs_parser
logger = logging.getLogger(__name__)
@ -73,7 +73,7 @@ class LocalMongoDBConnection(DBConnection):
try:
return query.run(self.connect())
except pymongo.errors.AutoReconnect:
logger.warning("Lost connection to the database, " "retrying query.")
logger.warning("Lost connection to the database, retrying query.")
return query.run(self.connect())
except pymongo.errors.AutoReconnect as exc:
raise ConnectionError from exc

View File

@ -1,24 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
"""Convert implementation for MongoDb"""
from planetmint.backend.utils import module_dispatch_registrar
from planetmint.backend import convert
from planetmint.backend.localmongodb.connection import LocalMongoDBConnection
register_query = module_dispatch_registrar(convert)
@register_query(LocalMongoDBConnection)
def prepare_asset(connection, transaction_type, transaction_id, filter_operation, asset):
if transaction_type not in filter_operation:
asset["id"] = transaction_id
return asset
@register_query(LocalMongoDBConnection)
def prepare_metadata(connection, transaction_id, metadata):
return {"id": transaction_id, "metadata": metadata}

View File

@ -77,7 +77,7 @@ def get_assets(conn, asset_ids):
@register_query(LocalMongoDBConnection)
def get_spent(conn, transaction_id, output):
def get_spending_transaction(conn, transaction_id, output):
query = {
"inputs": {
"$elemMatch": {"$and": [{"fulfills.transaction_id": transaction_id}, {"fulfills.output_index": output}]}
@ -102,7 +102,6 @@ def store_block(conn, block):
@register_query(LocalMongoDBConnection)
def get_txids_filtered(conn, asset_ids, operation=None, last_tx=None):
match = {
Transaction.CREATE: {"operation": "CREATE", "id": {"$in": asset_ids}},
Transaction.TRANSFER: {"operation": "TRANSFER", "asset.id": {"$in": asset_ids}},
@ -117,41 +116,6 @@ def get_txids_filtered(conn, asset_ids, operation=None, last_tx=None):
return (elem["id"] for elem in cursor)
@register_query(LocalMongoDBConnection)
def text_search(
conn,
search,
*,
language="english",
case_sensitive=False,
diacritic_sensitive=False,
text_score=False,
limit=0,
table="assets"
):
cursor = conn.run(
conn.collection(table)
.find(
{
"$text": {
"$search": search,
"$language": language,
"$caseSensitive": case_sensitive,
"$diacriticSensitive": diacritic_sensitive,
}
},
{"score": {"$meta": "textScore"}, "_id": False},
)
.sort([("score", {"$meta": "textScore"})])
.limit(limit)
)
if text_score:
return cursor
return (_remove_text_score(obj) for obj in cursor)
def _remove_text_score(asset):
asset.pop("score", None)
return asset
@ -203,21 +167,6 @@ def delete_transactions(conn, txn_ids):
conn.run(conn.collection("transactions").delete_many({"id": {"$in": txn_ids}}))
@register_query(LocalMongoDBConnection)
def store_unspent_outputs(conn, *unspent_outputs):
if unspent_outputs:
try:
return conn.run(
conn.collection("utxos").insert_many(
unspent_outputs,
ordered=False,
)
)
except DuplicateKeyError:
# TODO log warning at least
pass
@register_query(LocalMongoDBConnection)
def delete_unspent_outputs(conn, *unspent_outputs):
if unspent_outputs:

View File

@ -0,0 +1,13 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from .asset import Asset
from .fulfills import Fulfills
from .input import Input
from .metadata import MetaData
from .script import Script
from .output import Output
from .dbtransaction import DbTransaction
from .block import Block

View File

@ -0,0 +1,30 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass
@dataclass
class Asset:
key: str = ""
data: str = ""
@staticmethod
def from_dict(asset_dict: dict) -> Asset:
key = "data" if "data" in asset_dict.keys() else "id"
data = asset_dict[key]
return Asset(key, data)
def to_dict(self) -> dict:
return {self.key: self.data}
@staticmethod
def from_list_dict(asset_dict_list: list[dict]) -> list[Asset]:
return [Asset.from_dict(asset_dict) for asset_dict in asset_dict_list if isinstance(asset_dict, dict)]
@staticmethod
def list_to_dict(asset_list: list[Asset]) -> list[dict]:
return [asset.to_dict() for asset in asset_list or []]
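The `Asset` dataclass above normalizes the two asset shapes found in transactions: CREATE-style assets carry a `data` key, TRANSFER-style assets carry an `id` key. A self-contained round-trip sketch (renamed `AssetSketch` here to avoid clashing with the real class):

```python
from dataclasses import dataclass

@dataclass
class AssetSketch:
    key: str = ""
    data: str = ""

    @staticmethod
    def from_dict(asset_dict: dict) -> "AssetSketch":
        # CREATE-style assets carry "data"; TRANSFER-style assets carry "id"
        key = "data" if "data" in asset_dict else "id"
        return AssetSketch(key, asset_dict[key])

    def to_dict(self) -> dict:
        return {self.key: self.data}

assert AssetSketch.from_dict({"data": "cid123"}).to_dict() == {"data": "cid123"}
assert AssetSketch.from_dict({"id": "tx456"}).key == "id"
```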

View File

@ -0,0 +1,23 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
import json
from dataclasses import dataclass, field
@dataclass
class Block:
id: str = ""
app_hash: str = ""
height: int = 0
transactions: list[str] = field(default_factory=list)
@staticmethod
def from_tuple(block_tuple: tuple) -> Block:
return Block(block_tuple[0], block_tuple[1], block_tuple[2], block_tuple[3])
def to_dict(self) -> dict:
return {"app_hash": self.app_hash, "height": self.height, "transaction_ids": self.transactions}
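Note that `Block.to_dict` renames the `transactions` field to `transaction_ids` on the way out. A self-contained round-trip sketch (renamed `BlockSketch` for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class BlockSketch:
    id: str = ""
    app_hash: str = ""
    height: int = 0
    transactions: list = field(default_factory=list)

    @staticmethod
    def from_tuple(t: tuple) -> "BlockSketch":
        return BlockSketch(t[0], t[1], t[2], t[3])

    def to_dict(self) -> dict:
        # note the key rename: `transactions` is exposed as `transaction_ids`
        return {"app_hash": self.app_hash, "height": self.height, "transaction_ids": self.transactions}

b = BlockSketch.from_tuple(("block-1", "abc123", 7, ["tx-1", "tx-2"]))
assert b.to_dict() == {"app_hash": "abc123", "height": 7, "transaction_ids": ["tx-1", "tx-2"]}
```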

View File

@ -0,0 +1,83 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass, field
from planetmint.backend.models import Asset, MetaData, Input, Script, Output
@dataclass
class DbTransaction:
id: str = ""
operation: str = ""
version: str = ""
metadata: MetaData = None
assets: list[Asset] = field(default_factory=list)
inputs: list[Input] = field(default_factory=list)
outputs: list[Output] = field(default_factory=list)
script: Script = None
@staticmethod
def from_dict(transaction: dict) -> DbTransaction:
return DbTransaction(
id=transaction["id"],
operation=transaction["operation"],
version=transaction["version"],
inputs=Input.from_list_dict(transaction["inputs"]),
assets=Asset.from_list_dict(transaction["assets"]),
metadata=MetaData.from_dict(transaction["metadata"]),
script=Script.from_dict(transaction["script"]) if "script" in transaction else None,
)
@staticmethod
def from_tuple(transaction: tuple) -> DbTransaction:
assets = Asset.from_list_dict(transaction[4])
return DbTransaction(
id=transaction[0],
operation=transaction[1],
version=transaction[2],
metadata=MetaData.from_dict(transaction[3]),
assets=assets if transaction[2] != "2.0" else [assets[0]],
inputs=Input.from_list_dict(transaction[5]),
script=Script.from_dict(transaction[6]),
)
@staticmethod
def remove_generated_fields(tx_dict: dict) -> dict:
tx_dict["outputs"] = [
DbTransaction.remove_generated_or_none_output_keys(output) for output in tx_dict["outputs"]
]
if "script" in tx_dict and tx_dict["script"] is None:
tx_dict.pop("script")
return tx_dict
@staticmethod
def remove_generated_or_none_output_keys(output: dict) -> dict:
output["condition"]["details"] = {k: v for k, v in output["condition"]["details"].items() if v is not None}
if "id" in output:
output.pop("id")
return output
def to_dict(self) -> dict:
"""Return the transaction as a dict."""
assets = Asset.list_to_dict(self.assets)
tx = {
"inputs": Input.list_to_dict(self.inputs),
"outputs": Output.list_to_dict(self.outputs),
"operation": self.operation,
"metadata": self.metadata.to_dict() if self.metadata is not None else None,
"assets": assets if self.version != "2.0" else assets[0],
"version": self.version,
"id": self.id,
"script": self.script.to_dict() if self.script is not None else None,
}
tx = DbTransaction.remove_generated_fields(tx)
return tx

View File

@ -0,0 +1,15 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from dataclasses import dataclass
@dataclass
class Fulfills:
transaction_id: str = ""
output_index: int = 0
def to_dict(self) -> dict:
return {"transaction_id": self.transaction_id, "output_index": self.output_index}

View File

@ -0,0 +1,58 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional
from .fulfills import Fulfills
@dataclass
class Input:
tx_id: str = ""
fulfills: Optional[Fulfills] = None
owners_before: list[str] = field(default_factory=list)
fulfillment: str = ""
@staticmethod
def from_dict(input_dict: dict, tx_id: str = "") -> Input:
fulfills = None
if input_dict["fulfills"]:
fulfills = Fulfills(input_dict["fulfills"]["transaction_id"], input_dict["fulfills"]["output_index"])
return Input(tx_id, fulfills, input_dict["owners_before"], input_dict["fulfillment"])
@staticmethod
def from_tuple(input_tuple: tuple) -> Input:
tx_id = input_tuple[0]
fulfillment = input_tuple[1]
owners_before = input_tuple[2]
fulfills = None
fulfills_tx_id = input_tuple[3]
if fulfills_tx_id:
# TODO: the output_index should be an unsigned int
fulfills = Fulfills(fulfills_tx_id, int(input_tuple[4]))
return Input(tx_id, fulfills, owners_before, fulfillment)
def to_dict(self) -> dict:
fulfills = (
{"transaction_id": self.fulfills.transaction_id, "output_index": self.fulfills.output_index}
if self.fulfills
else None
)
return {"owners_before": self.owners_before, "fulfills": fulfills, "fulfillment": self.fulfillment}
@staticmethod
def from_list_dict(input_tuple_list: list[dict]) -> list[Input]:
return [Input.from_dict(input_tuple) for input_tuple in input_tuple_list]
@staticmethod
def list_to_dict(input_list: list[Input]) -> list[dict]:
return [input.to_dict() for input in input_list or []]

View File

@ -0,0 +1,23 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
import json
from dataclasses import dataclass
from typing import Optional
@dataclass
class MetaData:
metadata: Optional[str] = None
@staticmethod
def from_dict(meta_data: dict) -> MetaData | None:
if meta_data is None:
return None
return MetaData(meta_data)
def to_dict(self) -> dict:
return self.metadata

View File

@ -0,0 +1,114 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List
@dataclass
class ConditionDetails:
type: str = ""
public_key: str = ""
threshold: int = None
sub_conditions: List[ConditionDetails] = field(default_factory=list)
def to_dict(self) -> dict:
if self.sub_conditions is None:
return {"type": self.type, "public_key": self.public_key}
else:
return {
"type": self.type,
"threshold": self.threshold,
"subconditions": [sub_condition.to_dict() for sub_condition in self.sub_conditions],
}
@staticmethod
def from_dict(details: dict) -> ConditionDetails:
sub_conditions = None
if "subconditions" in details:
sub_conditions = [ConditionDetails.from_dict(sub_condition) for sub_condition in details["subconditions"]]
return ConditionDetails(
type=details.get("type"),
public_key=details.get("public_key"),
threshold=details.get("threshold"),
sub_conditions=sub_conditions,
)
@dataclass
class Condition:
uri: str = ""
details: ConditionDetails = field(default_factory=ConditionDetails)
@staticmethod
def from_dict(data: dict) -> Condition:
return Condition(
uri=data.get("uri"),
details=ConditionDetails.from_dict(data.get("details")),
)
def to_dict(self) -> dict:
return {
"uri": self.uri,
"details": self.details.to_dict(),
}
@dataclass
class Output:
id: str = ""
amount: int = 0
transaction_id: str = ""
public_keys: List[str] = field(default_factory=list)
index: int = 0
condition: Condition = field(default_factory=Condition)
@staticmethod
def outputs_dict(output: dict, transaction_id: str = "") -> Output:
return Output(
transaction_id=transaction_id,
public_keys=output["public_keys"],
amount=output["amount"],
condition=Condition(
uri=output["condition"]["uri"], details=ConditionDetails.from_dict(output["condition"]["details"])
),
)
@staticmethod
def from_tuple(output: tuple) -> Output:
return Output(
id=output[0],
amount=output[1],
public_keys=output[2],
condition=Condition.from_dict(
output[3],
),
index=output[4],
transaction_id=output[5],
)
@staticmethod
def from_dict(output_dict: dict, index: int, transaction_id: str) -> Output:
return Output(
id=output_dict["id"] if "id" in output_dict else "placeholder",
amount=int(output_dict["amount"]),
public_keys=output_dict["public_keys"],
condition=Condition.from_dict(output_dict["condition"]),
index=index,
transaction_id=transaction_id,
)
def to_dict(self) -> dict:
return {
# "id": self.id,
"public_keys": self.public_keys,
"condition": self.condition.to_dict(),
"amount": str(self.amount),
}
@staticmethod
def list_to_dict(output_list: list[Output]) -> list[dict]:
return [output.to_dict() for output in output_list or []]
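One subtlety in `Output.to_dict` above: the integer `amount` is serialized back to a string, matching the transaction wire format (`from_dict` converts it back with `int(...)`). A minimal sketch of that shape (the `output_to_dict_sketch` helper is illustrative only):

```python
def output_to_dict_sketch(public_keys, amount, condition):
    # mirrors Output.to_dict: amount is serialized back to a string,
    # matching the transaction wire format
    return {"public_keys": public_keys, "condition": condition, "amount": str(amount)}

assert output_to_dict_sketch(["pk1"], 10, {"uri": "ni:///x"})["amount"] == "10"
```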

View File

@ -0,0 +1,22 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional
@dataclass
class Script:
script: dict = None
@staticmethod
def from_dict(script_dict: dict) -> Script | None:
if script_dict is None:
return None
return Script(script_dict["script"])
def to_dict(self) -> dict:
return {"script": self.script}

View File

@ -6,12 +6,15 @@
"""Query interfaces for backends."""
from functools import singledispatch
from planetmint.backend.models import Asset, Block, MetaData, Output, Input, Script
from planetmint.backend.exceptions import OperationError
from planetmint.backend.models.dbtransaction import DbTransaction
# FIXME ADD HERE HINT FOR RETURNING TYPE
@singledispatch
def store_asset(asset: dict, connection):
def store_asset(connection, asset: dict) -> Asset:
"""Write an asset to the asset table.
Args:
@ -25,7 +28,7 @@ def store_asset(asset: dict, connection):
@singledispatch
def store_assets(assets: list, connection):
def store_assets(connection, assets: list) -> list[Asset]:
"""Write a list of assets to the assets table.
Args:
@ -39,7 +42,7 @@ def store_assets(assets: list, connection):
@singledispatch
def store_metadatas(connection, metadata):
def store_metadatas(connection, metadata) -> MetaData:
"""Write a list of metadata to metadata table.
Args:
@ -59,8 +62,50 @@ def store_transactions(connection, signed_transactions):
raise NotImplementedError
@singledispatch
def store_transaction(connection, transaction):
"""Store a single transaction."""
raise NotImplementedError
@singledispatch
def get_transaction_by_id(connection, transaction_id):
"""Get the transaction by transaction id."""
raise NotImplementedError
@singledispatch
def get_transaction_single(connection, transaction_id) -> DbTransaction:
"""Get a single transaction by id."""
raise NotImplementedError
@singledispatch
def get_transaction(connection, transaction_id):
"""Get a transaction by id."""
raise NotImplementedError
@singledispatch
def get_transactions_by_asset(connection, asset):
"""Get transactions by asset."""
raise NotImplementedError
@singledispatch
def get_transactions_by_metadata(connection, metadata: str, limit: int = 1000) -> list[DbTransaction]:
"""Get a transaction by its metadata cid."""
raise NotImplementedError
@singledispatch
def get_transactions(connection, transactions_ids) -> list[DbTransaction]:
"""Get a transaction from the transactions table.
Args:
@ -74,21 +119,7 @@ def get_transaction(connection, transaction_id):
@singledispatch
def get_transactions(connection, transaction_ids):
"""Get transactions from the transactions table.
Args:
transaction_ids (list): list of transaction ids to fetch
Returns:
The result of the operation.
"""
raise NotImplementedError
@singledispatch
def get_asset(connection, asset_id):
def get_asset(connection, asset_id) -> Asset:
"""Get an asset from the assets table.
Args:
@ -102,7 +133,7 @@ def get_asset(connection, asset_id):
@singledispatch
def get_spent(connection, transaction_id, condition_id):
def get_spending_transaction(connection, transaction_id, condition_id):
"""Check if a `txid` was already used as an input.
A transaction can be used as an input for another transaction. Bigchain
@ -149,7 +180,7 @@ def get_owned_ids(connection, owner):
@singledispatch
def get_block(connection, block_id):
def get_block(connection, block_id) -> Block:
"""Get a block from the planet table.
Args:
@ -177,21 +208,18 @@ def get_block_with_transaction(connection, txid):
@singledispatch
def get_metadata(connection, transaction_ids):
"""Get a list of metadata from the metadata table.
def store_transaction_outputs(connection, output: Output, index: int, table: str):
"""Store the transaction outputs.
Args:
transaction_ids (list): a list of ids for the metadata to be retrieved from
the database.
Returns:
metadata (list): the list of returned metadata.
output (Output): the output to store.
index (int): the index of the output in the transaction.
"""
raise NotImplementedError
@singledispatch
def get_assets(connection, asset_ids) -> list:
def get_assets(connection, asset_ids) -> list[Asset]:
"""Get a list of assets from the assets table.
Args:
@ -215,47 +243,6 @@ def get_txids_filtered(connection, asset_id, operation=None):
raise NotImplementedError
@singledispatch
def text_search(
conn,
search,
*,
language="english",
case_sensitive=False,
diacritic_sensitive=False,
text_score=False,
limit=0,
table=None
):
"""Return all the assets that match the text search.
The results are sorted by text score.
For more information about the behavior of text search on MongoDB see
https://docs.mongodb.com/manual/reference/operator/query/text/#behavior
Args:
search (str): Text search string to query the text index
language (str, optional): The language for the search and the rules for
stemmer and tokenizer. If the language is ``None`` text search uses
simple tokenization and no stemming.
case_sensitive (bool, optional): Enable or disable case sensitive
search.
diacritic_sensitive (bool, optional): Enable or disable case sensitive
diacritic search.
text_score (bool, optional): If ``True`` returns the text score with
each document.
limit (int, optional): Limit the number of returned documents.
Returns:
:obj:`list` of :obj:`dict`: a list of assets
Raises:
OperationError: If the backend does not support text search
"""
raise OperationError("This query is only supported when running " "Planetmint with MongoDB as the backend.")
@singledispatch
def get_latest_block(conn):
"""Get the latest commited block i.e. block with largest height"""
@ -277,13 +264,6 @@ def store_block(conn, block):
raise NotImplementedError
@singledispatch
def store_unspent_outputs(connection, unspent_outputs):
"""Store unspent outputs in ``utxo_set`` table."""
raise NotImplementedError
@singledispatch
def delete_unspent_outputs(connection, unspent_outputs):
"""Delete unspent outputs in ``utxo_set`` table."""
@ -439,6 +419,42 @@ def get_latest_abci_chain(conn):
@singledispatch
def _group_transaction_by_ids(txids: list, connection):
def get_inputs_by_tx_id(connection, tx_id) -> list[Input]:
"""Retrieve inputs for a transaction by its id"""
raise NotImplementedError
@singledispatch
def store_transaction_inputs(connection, inputs: list[Input]):
"""Store inputs for a transaction"""
raise NotImplementedError
@singledispatch
def get_complete_transactions_by_ids(txids: list, connection):
"""Returns the transactions object (JSON TYPE), from list of ids."""
raise NotImplementedError
@singledispatch
def get_script_by_tx_id(connection, tx_id: str) -> Script:
"""Retrieve script for a transaction by its id"""
raise NotImplementedError
@singledispatch
def get_outputs_by_tx_id(connection, tx_id: str) -> list[Output]:
"""Retrieve outputs for a transaction by its id"""
raise NotImplementedError
@singledispatch
def get_outputs_by_owner(connection, public_key: str, table: str) -> list[Output]:
"""Retrieve an owner's outputs by public key"""
raise NotImplementedError
@singledispatch
def get_metadata(conn, transaction_ids):
"""Retrieve metadata for a list of transactions by their ids"""
raise NotImplementedError
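Every stub above follows the same pattern: a `functools.singledispatch` generic that raises `NotImplementedError` until a backend registers a concrete implementation for its connection class. A minimal sketch, using a made-up `FakeTarantoolConnection` class in place of the real connection type:

```python
from functools import singledispatch

# Generic entry point, as in the module above: dispatches on the type
# of the first argument (the connection) and fails for unknown backends.
@singledispatch
def get_metadata(conn, transaction_ids):
    raise NotImplementedError

class FakeTarantoolConnection:
    """Stand-in for a real backend connection class."""

# Registering an implementation for a specific connection type.
@get_metadata.register(FakeTarantoolConnection)
def _(conn, transaction_ids):
    # A real backend would query its metadata table here.
    return [{"id": tx_id, "metadata": None} for tx_id in transaction_ids]

result = get_metadata(FakeTarantoolConnection(), ["abc"])
```

Calling `get_metadata` with any unregistered connection type falls through to the generic and raises `NotImplementedError`, which is exactly the behavior of the stubs above.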


@@ -9,7 +9,6 @@ import logging
from functools import singledispatch
from planetmint.config import Config
from planetmint.backend.connection import Connection
from transactions.common.exceptions import ValidationError
from transactions.common.utils import (
validate_all_values_for_key_in_obj,
@@ -119,7 +118,8 @@ def drop_database(connection, dbname):
raise NotImplementedError
def init_database(connection=None, dbname=None):
@singledispatch
def init_database(connection, dbname):
"""Initialize the configured backend for use with Planetmint.
Creates a database with :attr:`dbname` with any required tables
@@ -134,11 +134,19 @@ def init_database(connection=None, dbname=None):
configuration.
"""
connection = connection or Connection()
dbname = dbname or Config().get()["database"]["name"]
raise NotImplementedError
create_database(connection, dbname)
create_tables(connection, dbname)
@singledispatch
def migrate(connection):
"""Migrate database
Args:
connection (:class:`~planetmint.backend.connection.Connection`): an
existing connection to use to migrate the database.
Creates one if not given.
"""
raise NotImplementedError
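Before this refactor, `init_database` had a concrete default body (visible in the removed lines above): fall back to a fresh `Connection` and the configured database name, then call `create_database` and `create_tables`. A self-contained sketch of that orchestration, with recorder stubs and a `DEFAULT_DBNAME` constant standing in for the real `Connection` and `Config` machinery:

```python
# Illustrative stand-ins for planetmint's Connection() and
# Config().get()["database"]["name"]; the calls list just records
# what the orchestration did.
DEFAULT_DBNAME = "planetmint"
calls = []

def create_database(connection, dbname):
    calls.append(("create_database", dbname))

def create_tables(connection, dbname):
    calls.append(("create_tables", dbname))

def init_database(connection=None, dbname=None):
    connection = connection or object()  # stand-in for Connection()
    dbname = dbname or DEFAULT_DBNAME    # stand-in for the Config lookup
    create_database(connection, dbname)
    create_tables(connection, dbname)

init_database()
```

After the refactor this convenience wrapper becomes a `@singledispatch` stub like the others, so each backend supplies its own initialization.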
def validate_language_key(obj, key):


@@ -1,5 +1,2 @@
# Register the single dispatched modules on import.
from planetmint.backend.tarantool import query, connection, schema, convert # noqa
# MongoDBConnection should always be accessed via
# ``planetmint.backend.connect()``.
from planetmint.backend.tarantool.sync_io import connection, query, schema
