refactor tarantool backend (#292)

* added initial interfaces for backend, refactored Asset and MetaData logic

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* adjusted input dataclass, added queries, removed convert

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* created backend models folder, replaced token_hex with uuid

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Add cleanup and add constants

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* added to/from static methods to the asset and input models and removed logic from tools

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* simplified store_bulk_transaction and corresponding query, adjusted test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed script queries

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Add Output model

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Adapt Output class

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Further fixes

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Further fixes

* Get rid of decompose

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* refactored init.lua

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* refactored drop.lua

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Add transaction data class

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* refactored init.lua

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fix tests

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Fix more tests

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Format file

* Fix recursion error

* More fixes

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Further fixes

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* using init.lua for db setup

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed flush_db for new tarantool implementation

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* changed unique constraints

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* used new indexes on block related db operations

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Adapt models

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Check if blocks is empty

* adjusted get_txids_filtered for new indexes

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Adaptations due to schema change

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* fixed get block test case

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* Fix subcondition serialization

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* Remove unnecessary method

Signed-off-by: cybnon <stefan.weber93@googlemail.com>

* More fixes

* renamed group_txs and used data models in fastquery

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* adjusted query test cases, removed unused code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced asset search with get_asset_by_cid

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added limit to asset queries

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* replaced metadata search with cid lookup

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed most of the test_lib test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed election test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed some more test cases

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed 'is' vs '==' issue

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* - blackified & fixed recovery / delete transactions issues because of data model transitions
- reintegrated get_transaction() call in query -> delegating this to get_complete_transactions_by_ids

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* show election status uses the governance table from now on
show election status maps the asset["data"] object properly

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed input object differences between old / new version and lookup of transaction in the governance pool

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed TX lookup issues due to different pools

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed wrong index name issue: transaction_by_asset vs transaction_by_asset_id

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed asset class key mixup

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* moved field removal methods to DbTransaction
redefined structure of DbTransaction.to_dict() to be equal to that of Transaction.to_dict()

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added proper input conversion to the test cases as well as proper input validation and object conversion

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* simplified imports
fixed transfer input issues of the tests

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed comparison issue: dict vs. object

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed schema validation errors

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* added verification of ConditionDetails to the owner verification to avoid a mixup between ConditionDetails and SubCondition
fixed object comparison issues due to object changes

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed object handling issue and complicated stuff

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added missing import

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added proper corner case handling in case a requested block is not found

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed object comparison issue

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed output handling for validate_transfer_inputs

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* fixed wrong search pool usage

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed zenroom testcase

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed last abci issues and blackified the code

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added tarantool exception catching and raising as well as logging

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed object comparison issue in test_get_spent_issue_1271

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added raising of the CriticalDoubleSpend exception for governance and transactions
fixed search space issue with election / voting commit lookup

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* * made returned outputs unique (get_owned_ids)
* added delete_output method to init.lua
* fixed output deletion issue by relaying the deletion to Lua instead of the Python code

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed rollback after crash

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* adjusted assets=None to assets=[{"data":None}] to avoid exceptions in the background service

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed unused code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed unused code, reverted transaction fetching, added return types to queries

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed duplicate code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* removed deprecated code

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

* store transactions of various versions (backward compatibility)
added _bdb variable to init/drop DBs for the single use cases (these started failing as TXs are now looked up in the DB, compared to before)

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* added support for v2.0 transactions to DB writing/reading

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* fixed merge errors (arguments ... )

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* blackified

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* Simplified unit tests (#294)

* adjusted make test
* first improvements to ease testing
* simplified gh actions
* adjusted gh action file
* removed deps
* added sudo to apt calls
* removed predefined pytest module definitions
* added installing planetmint into the unit test container
* give time to the db container
* added environment variables to unit-test.yml
* removed acceptance tests from test executions

Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>

* removed unused code, updated version number

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>

Signed-off-by: Lorenz Herzberger <lorenzherzberger@gmail.com>
Signed-off-by: cybnon <stefan.weber93@googlemail.com>
Signed-off-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: cybnon <stefan.weber93@googlemail.com>
Co-authored-by: Jürgen Eckel <juergen@riddleandcode.com>
Co-authored-by: Jürgen Eckel <eckelj@users.noreply.github.com>
Lorenz Herzberger 2023-01-16 15:21:56 +01:00 committed by GitHub
parent 8730d516a3
commit 4472a1a3ee
64 changed files with 1633 additions and 2019 deletions

@@ -1,59 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Unit tests - with direct ABCI
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Build container
run: |
docker-compose -f docker-compose.yml build --no-cache --build-arg abci_status=enable planetmint
- name: Save image
run: docker save -o planetmint.tar planetmint_planetmint
- name: Upload image
uses: actions/upload-artifact@v3
with:
name: planetmint-abci
path: planetmint.tar
retention-days: 5
test-with-abci:
runs-on: ubuntu-latest
needs: build
strategy:
matrix:
include:
- db: "Tarantool with ABCI"
host: "tarantool"
port: 3303
abci: "enabled"
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Download planetmint
uses: actions/download-artifact@v3
with:
name: planetmint-abci
- name: Load planetmint
run: docker load -i planetmint.tar
- name: Start containers
run: docker-compose -f docker-compose.yml up -d planetmint
- name: Run tests
run: docker exec planetmint_planetmint_1 pytest -v -m abci

@@ -1,60 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Unit tests - with Planemint
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Build container
run: |
docker-compose -f docker-compose.yml build --no-cache planetmint
- name: Save image
run: docker save -o planetmint.tar planetmint_planetmint
- name: Upload image
uses: actions/upload-artifact@v3
with:
name: planetmint
path: planetmint.tar
retention-days: 5
test-without-abci:
runs-on: ubuntu-latest
needs: build
strategy:
matrix:
include:
- db: "Tarantool without ABCI"
host: "tarantool"
port: 3303
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Download planetmint
uses: actions/download-artifact@v3
with:
name: planetmint
- name: Load planetmint
run: docker load -i planetmint.tar
- name: Start containers
run: docker-compose -f docker-compose.yml up -d bdb
- name: Run tests
run: docker exec planetmint_planetmint_1 pytest -v --cov=planetmint --cov-report xml:htmlcov/coverage.xml
- name: Upload Coverage to Codecov
uses: codecov/codecov-action@v3

.github/workflows/unit-tests.yml
@@ -0,0 +1,42 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
name: Unit tests
on: [push, pull_request]
jobs:
unified-unit-tests:
runs-on: ubuntu-latest
env:
PLANETMINT_DATABASE_BACKEND: tarantool_db
PLANETMINT_DATABASE_HOST: localhost
PLANETMINT_DATABASE_PORT: 3303
PLANETMINT_SERVER_BIND: 0.0.0.0:9984
PLANETMINT_WSSERVER_HOST: 0.0.0.0
PLANETMINT_WSSERVER_ADVERTISED_HOST: localhost
PLANETMINT_TENDERMINT_HOST: localhost
PLANETMINT_TENDERMINT_PORT: 26657
steps:
- name: Check out repository code
uses: actions/checkout@v3
- name: Setup python
uses: actions/setup-python@v4
with:
python-version: 3.9
- name: Prepare OS
run: sudo apt-get update && sudo apt-get install -y git zsh curl tarantool-common vim build-essential cmake
- name: Get Tendermint
run: wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_0.34.15_linux_amd64.tar.gz && tar zxf tendermint_0.34.15_linux_amd64.tar.gz
- name: Install Planetmint
run: pip install -e '.[dev]'
- name: Execute Tests
run: make test

@@ -25,6 +25,11 @@ For reference, the possible headings are:
* **Known Issues**
* **Notes**
## [2.0.0] - 2023-12-01
* **Changed** changed tarantool db schema
* **Removed** removed text_search routes
* **Added** metadata / asset cid route for fetching transactions
## [1.4.1] - 2022-21-12
* **fixed** inconsistent cryptocondition keyring tag handling. Using cryptoconditions > 1.1.0 from now on.

@@ -33,4 +33,5 @@ RUN mkdir -p /usr/src/app
COPY . /usr/src/app/
WORKDIR /usr/src/app
RUN pip install -e .[dev]
RUN pip install flask-cors
RUN pip install flask-cors
RUN pip install planetmint-transactions@git+https://git@github.com/planetmint/transactions.git@cybnon/adapt-class-access-to-new-schema

@@ -77,11 +77,18 @@ lint: check-py-deps ## Lint the project
format: check-py-deps ## Format the project
black -l 119 .
test: check-deps test-unit test-acceptance ## Run unit and acceptance tests
test: check-deps test-unit #test-acceptance ## Run unit and acceptance tests
test-unit: check-deps ## Run all tests once or specify a file/test with TEST=tests/file.py::Class::test
@$(DC) up -d bdb
@$(DC) exec planetmint pytest ${TEST}
@$(DC) up -d tarantool
#wget https://github.com/tendermint/tendermint/releases/download/v0.34.15/tendermint_0.34.15_linux_amd64.tar.gz
#tar zxf tendermint_0.34.15_linux_amd64.tar.gz
pytest -m "not abci"
rm -rf ~/.tendermint && ./tendermint init && ./tendermint node --consensus.create_empty_blocks=false --rpc.laddr=tcp://0.0.0.0:26657 --proxy_app=tcp://localhost:26658&
pytest -m abci
@$(DC) down
test-unit-watch: check-deps ## Run all tests and wait. Every time you change code, tests will be run again
@$(DC) run --rm --no-deps planetmint pytest -f

@@ -22,8 +22,8 @@ services:
- "3303:3303"
- "8081:8081"
volumes:
- ./planetmint/backend/tarantool/basic.lua:/opt/tarantool/basic.lua
command: tarantool /opt/tarantool/basic.lua
- ./planetmint/backend/tarantool/init.lua:/opt/tarantool/init.lua
command: tarantool /opt/tarantool/init.lua
restart: always
planetmint:
depends_on:

@@ -12,5 +12,5 @@ configuration or the ``PLANETMINT_DATABASE_BACKEND`` environment variable.
"""
# Include the backend interfaces
from planetmint.backend import schema, query, convert # noqa
from planetmint.backend import schema, query # noqa
from planetmint.backend.connection import Connection

@@ -1,26 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
"""Convert interfaces for backends."""
from functools import singledispatch
@singledispatch
def prepare_asset(connection, transaction_type, transaction_id, filter_operation, asset):
"""
This function is used for preparing assets,
before storing them to database.
"""
raise NotImplementedError
@singledispatch
def prepare_metadata(connection, transaction_id, metadata):
"""
This function is used for preparing metadata,
before storing them to database.
"""
raise NotImplementedError

@@ -18,5 +18,9 @@ class OperationError(BackendError):
"""Exception raised when a backend operation fails."""
class OperationDataInsertionError(BackendError):
"""Exception raised when a Database operation fails."""
class DuplicateKeyError(OperationError):
"""Exception raised when an insert fails because the key is not unique"""

@@ -0,0 +1,56 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from dataclasses import dataclass
# NOTE: only here temporarily
from planetmint.backend.models import Asset, MetaData, Input
from planetmint.backend.models import Output
@dataclass
class Block:
id: str = None
app_hash: str = None
height: int = None
@dataclass
class Script:
id: str = None
script = None
@dataclass
class UTXO:
id: str = None
output_index: int = None
utxo: dict = None
@dataclass
class Transaction:
id: str = None
assets: list[Asset] = None
metadata: MetaData = None
version: str = None # TODO: make enum
operation: str = None # TODO: make enum
inputs: list[Input] = None
outputs: list[Output] = None
script: str = None
@dataclass
class Validator:
id: str = None
height: int = None
validators = None
@dataclass
class ABCIChain:
height: str = None
is_synced: bool = None
chain_id: str = None

@@ -22,7 +22,7 @@ generic backend interfaces to the implementations in this module.
"""
# Register the single dispatched modules on import.
from planetmint.backend.localmongodb import schema, query, convert # noqa
from planetmint.backend.localmongodb import schema, query # noqa
# MongoDBConnection should always be accessed via
# ``planetmint.backend.connect()``.

@@ -1,24 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
"""Convert implementation for MongoDb"""
from planetmint.backend.utils import module_dispatch_registrar
from planetmint.backend import convert
from planetmint.backend.localmongodb.connection import LocalMongoDBConnection
register_query = module_dispatch_registrar(convert)
@register_query(LocalMongoDBConnection)
def prepare_asset(connection, transaction_type, transaction_id, filter_operation, asset):
if transaction_type not in filter_operation:
asset["id"] = transaction_id
return asset
@register_query(LocalMongoDBConnection)
def prepare_metadata(connection, transaction_id, metadata):
return {"id": transaction_id, "metadata": metadata}

@@ -0,0 +1,13 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from .asset import Asset
from .fulfills import Fulfills
from .input import Input
from .metadata import MetaData
from .script import Script
from .output import Output
from .dbtransaction import DbTransaction
from .block import Block

@@ -0,0 +1,30 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass
@dataclass
class Asset:
key: str = ""
data: str = ""
@staticmethod
def from_dict(asset_dict: dict) -> Asset:
key = "data" if "data" in asset_dict.keys() else "id"
data = asset_dict[key]
return Asset(key, data)
def to_dict(self) -> dict:
return {self.key: self.data}
@staticmethod
def from_list_dict(asset_dict_list: list[dict]) -> list[Asset]:
return [Asset.from_dict(asset_dict) for asset_dict in asset_dict_list]
@staticmethod
def list_to_dict(asset_list: list[Asset]) -> list[dict]:
return [asset.to_dict() for asset in asset_list or []]
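
As a usage sketch of the Asset round-trip above (the CID and transaction id below are placeholder values):

from planetmint.backend.models import Asset

create_asset = Asset.from_dict({"data": "bafybeihash"})  # CREATE assets carry a "data" key
transfer_asset = Asset.from_dict({"id": "b2c0a3f4"})     # TRANSFER assets reference an asset "id"
assert create_asset.to_dict() == {"data": "bafybeihash"}
assert Asset.list_to_dict([create_asset, transfer_asset]) == [{"data": "bafybeihash"}, {"id": "b2c0a3f4"}]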

@@ -0,0 +1,23 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
import json
from dataclasses import dataclass, field
@dataclass
class Block:
id: str = ""
app_hash: str = ""
height: int = 0
transactions: list[str] = field(default_factory=list)
@staticmethod
def from_tuple(block_tuple: tuple) -> Block:
return Block(block_tuple[0], block_tuple[1], block_tuple[2], block_tuple[3])
def to_dict(self) -> dict:
return {"app_hash": self.app_hash, "height": self.height, "transactions": self.transactions}

@@ -0,0 +1,77 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass, field
from planetmint.backend.models import Asset, MetaData, Input, Script, Output
@dataclass
class DbTransaction:
id: str = ""
operation: str = ""
version: str = ""
metadata: MetaData = None
assets: list[Asset] = field(default_factory=list)
inputs: list[Input] = field(default_factory=list)
outputs: list[Output] = field(default_factory=list)
script: Script = None
@staticmethod
def from_dict(transaction: dict) -> DbTransaction:
return DbTransaction(
id=transaction["id"],
operation=transaction["operation"],
version=transaction["version"],
inputs=Input.from_list_dict(transaction["inputs"]),
assets=Asset.from_list_dict(transaction["assets"]),
metadata=MetaData.from_dict(transaction["metadata"]),
script=Script.from_dict(transaction["script"]) if "script" in transaction else None,
)
@staticmethod
def from_tuple(transaction: tuple) -> DbTransaction:
assets = Asset.from_list_dict(transaction[4])
return DbTransaction(
id=transaction[0],
operation=transaction[1],
version=transaction[2],
metadata=MetaData.from_dict(transaction[3]),
assets=assets if transaction[2] != "2.0" else [assets[0]],
inputs=Input.from_list_dict(transaction[5]),
script=Script.from_dict(transaction[6]),
)
@staticmethod
def remove_generated_fields(tx_dict: dict):
tx_dict["outputs"] = [
DbTransaction.remove_generated_or_none_output_keys(output) for output in tx_dict["outputs"]
]
if "script" in tx_dict and tx_dict["script"] is None:
tx_dict.pop("script")
return tx_dict
@staticmethod
def remove_generated_or_none_output_keys(output):
output["condition"]["details"] = {k: v for k, v in output["condition"]["details"].items() if v is not None}
if "id" in output:
output.pop("id")
return output
def to_dict(self) -> dict:
assets = Asset.list_to_dict(self.assets)
tx = {
"inputs": Input.list_to_dict(self.inputs),
"outputs": Output.list_to_dict(self.outputs),
"operation": self.operation,
"metadata": self.metadata.to_dict() if self.metadata is not None else None,
"assets": assets if self.version != "2.0" else assets[0],
"version": self.version,
"id": self.id,
"script": self.script.to_dict() if self.script is not None else None,
}
tx = DbTransaction.remove_generated_fields(tx)
return tx
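
A hedged round-trip sketch for DbTransaction (all field values are placeholders):

from planetmint.backend.models import DbTransaction

tx = DbTransaction.from_dict(
    {
        "id": "tx-id",
        "operation": "CREATE",
        "version": "3.0",
        "inputs": [],
        "assets": [{"data": "bafybeihash"}],
        "metadata": None,
    }
)
# For any version other than "2.0" the assets stay a list; for "2.0" a single
# asset dict is emitted, matching the shape handled in to_dict() above.
assert tx.to_dict()["assets"] == [{"data": "bafybeihash"}]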

@@ -0,0 +1,15 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from dataclasses import dataclass
@dataclass
class Fulfills:
transaction_id: str = ""
output_index: int = 0
def to_dict(self) -> dict:
return {"transaction_id": self.transaction_id, "output_index": self.output_index}

@@ -0,0 +1,58 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional
from .fulfills import Fulfills
@dataclass
class Input:
tx_id: str = ""
fulfills: Optional[Fulfills] = None
owners_before: list[str] = field(default_factory=list)
fulfillment: str = ""
@staticmethod
def from_dict(input_dict: dict, tx_id: str = "") -> Input:
fulfills = None
if input_dict["fulfills"]:
fulfills = Fulfills(input_dict["fulfills"]["transaction_id"], input_dict["fulfills"]["output_index"])
return Input(tx_id, fulfills, input_dict["owners_before"], input_dict["fulfillment"])
@staticmethod
def from_tuple(input_tuple: tuple) -> Input:
tx_id = input_tuple[0]
fulfillment = input_tuple[1]
owners_before = input_tuple[2]
fulfills = None
fulfills_tx_id = input_tuple[3]
if fulfills_tx_id:
# TODO: the output_index should be an unsigned int
fulfills = Fulfills(fulfills_tx_id, int(input_tuple[4]))
return Input(tx_id, fulfills, owners_before, fulfillment)
def to_dict(self) -> dict:
fulfills = (
{"transaction_id": self.fulfills.transaction_id, "output_index": self.fulfills.output_index}
if self.fulfills
else None
)
return {"owners_before": self.owners_before, "fulfills": fulfills, "fulfillment": self.fulfillment}
@staticmethod
def from_list_dict(input_tuple_list: list[dict]) -> list[Input]:
return [Input.from_dict(input_tuple) for input_tuple in input_tuple_list]
@staticmethod
def list_to_dict(input_list: list[Input]) -> list[dict]:
return [input.to_dict() for input in input_list or []]
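
A short sketch of the Input dict round-trip (ids, keys, and the fulfillment string are placeholders):

from planetmint.backend.models import Input

input_dict = {
    "owners_before": ["placeholder-public-key"],
    "fulfills": {"transaction_id": "prev-tx-id", "output_index": 0},
    "fulfillment": "placeholder-fulfillment-uri",
}
tx_input = Input.from_dict(input_dict, tx_id="tx-id")
assert tx_input.to_dict() == input_dict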

@@ -0,0 +1,23 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
import json
from dataclasses import dataclass
from typing import Optional
@dataclass
class MetaData:
metadata: Optional[str] = None
@staticmethod
def from_dict(meta_data: dict) -> MetaData | None:
if meta_data is None:
return None
return MetaData(meta_data)
def to_dict(self) -> dict:
return self.metadata

@@ -0,0 +1,182 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass, field
from typing import List
@dataclass
class SubCondition:
type: str
public_key: str
def to_tuple(self) -> tuple:
return self.type, self.public_key
def to_dict(self) -> dict:
return {"type": self.type, "public_key": self.public_key}
@staticmethod
def from_dict(subcondition_dict: dict) -> SubCondition:
return SubCondition(subcondition_dict["type"], subcondition_dict["public_key"])
@staticmethod
def list_to_dict(subconditions: List[SubCondition]) -> List[dict] | None:
if subconditions is None:
return None
return [subcondition.to_dict() for subcondition in subconditions]
@dataclass
class ConditionDetails:
type: str = ""
public_key: str = ""
threshold: int = None
sub_conditions: list[SubCondition] = None
def to_dict(self) -> dict:
if self.sub_conditions is None:
return {
"type": self.type,
"public_key": self.public_key,
}
else:
return {
"type": self.type,
"threshold": self.threshold,
"subconditions": [subcondition.to_dict() for subcondition in self.sub_conditions],
}
@staticmethod
def from_dict(data: dict) -> ConditionDetails:
sub_conditions = None
if "subconditions" in data:
sub_conditions = [SubCondition.from_dict(sub_condition) for sub_condition in data["subconditions"]]
return ConditionDetails(
type=data.get("type"),
public_key=data.get("public_key"),
threshold=data.get("threshold"),
sub_conditions=sub_conditions,
)
@dataclass
class Condition:
uri: str = ""
details: ConditionDetails = field(default_factory=ConditionDetails)
@staticmethod
def from_dict(data: dict) -> Condition:
return Condition(
uri=data.get("uri"),
details=ConditionDetails.from_dict(data.get("details")),
)
def to_dict(self) -> dict:
return {
"uri": self.uri,
"details": self.details.to_dict(),
}
@dataclass
class Output:
id: str = ""
amount: int = 0
transaction_id: str = ""
public_keys: List[str] = field(default_factory=list)
index: int = 0
condition: Condition = field(default_factory=Condition)
@staticmethod
def outputs_dict(output: dict, transaction_id: str = "") -> Output:
out_dict: Output
if output["condition"]["details"].get("subconditions") is None:
out_dict = output_with_public_key(output, transaction_id)
else:
out_dict = output_with_sub_conditions(output, transaction_id)
return out_dict
@staticmethod
def from_tuple(output: tuple) -> Output:
return Output(
id=output[0],
amount=output[1],
public_keys=output[2],
condition=Condition.from_dict(
output[3],
),
index=output[4],
transaction_id=output[5],
)
@staticmethod
def from_dict(output_dict: dict, index: int, transaction_id: str) -> Output:
return Output(
id=output_dict["id"] if "id" in output_dict else "placeholder",
amount=int(output_dict["amount"]),
public_keys=output_dict["public_keys"],
condition=Condition.from_dict(output_dict["condition"]),
index=index,
transaction_id=transaction_id,
)
def to_dict(self) -> dict:
return {
"id": self.id,
"public_keys": self.public_keys,
"condition": {
"details": {
"type": self.condition.details.type,
"public_key": self.condition.details.public_key,
"threshold": self.condition.details.threshold,
"subconditions": SubCondition.list_to_dict(self.condition.details.sub_conditions),
},
"uri": self.condition.uri,
},
"amount": str(self.amount),
}
@staticmethod
def list_to_dict(output_list: list[Output]) -> list[dict]:
return [output.to_dict() for output in output_list or []]
def output_with_public_key(output, transaction_id) -> Output:
return Output(
transaction_id=transaction_id,
public_keys=output["public_keys"],
amount=output["amount"],
condition=Condition(
uri=output["condition"]["uri"],
details=ConditionDetails(
type=output["condition"]["details"]["type"],
public_key=output["condition"]["details"]["public_key"],
),
),
)
def output_with_sub_conditions(output, transaction_id) -> Output:
return Output(
transaction_id=transaction_id,
public_keys=output["public_keys"],
amount=output["amount"],
condition=Condition(
uri=output["condition"]["uri"],
details=ConditionDetails(
type=output["condition"]["details"]["type"],
threshold=output["condition"]["details"]["threshold"],
sub_conditions=[
SubCondition(
type=sub_condition["type"],
public_key=sub_condition["public_key"],
)
for sub_condition in output["condition"]["details"]["subconditions"]
],
),
),
)
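
To illustrate the two construction paths, a sketch for a single-signature output (all values are placeholders): outputs_dict() dispatches on the presence of subconditions, so this dict takes the output_with_public_key branch.

from planetmint.backend.models import Output

output_dict = {
    "public_keys": ["placeholder-public-key"],
    "amount": "1",
    "condition": {
        "details": {"type": "ed25519-sha-256", "public_key": "placeholder-public-key"},
        "uri": "ni:///sha-256;placeholder?fpt=ed25519-sha-256&cost=131072",
    },
}
output = Output.outputs_dict(output_dict, transaction_id="tx-id")
# to_dict() always emits the full details map; the absent threshold/subconditions
# come back as None and are stripped later by remove_generated_or_none_output_keys().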

@@ -0,0 +1,22 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional
@dataclass
class Script:
script: dict = None
@staticmethod
def from_dict(script_dict: dict) -> Script | None:
if script_dict is None:
return None
return Script(script_dict["script"])
def to_dict(self) -> dict:
return {"script": self.script}

@@ -6,12 +6,16 @@
"""Query interfaces for backends."""
from functools import singledispatch
from planetmint.backend.models import Asset, MetaData, Output, Input, Script
from planetmint.backend.exceptions import OperationError
from planetmint.backend.interfaces import Block
from planetmint.backend.models.dbtransaction import DbTransaction
# FIXME ADD HERE HINT FOR RETURNING TYPE
@singledispatch
def store_asset(asset: dict, connection):
def store_asset(connection, asset: dict) -> Asset:
"""Write an asset to the asset table.
Args:
@@ -25,7 +29,7 @@ def store_assets(assets: list, connection):
@singledispatch
def store_assets(assets: list, connection):
def store_assets(connection, assets: list) -> list[Asset]:
"""Write a list of assets to the assets table.
Args:
@@ -39,7 +43,7 @@ def store_metadatas(connection, metadata):
@singledispatch
def store_metadatas(connection, metadata):
def store_metadatas(connection, metadata) -> MetaData:
"""Write a list of metadata to metadata table.
Args:
@@ -59,8 +63,50 @@ def store_transactions(connection, signed_transactions):
raise NotImplementedError
@singledispatch
def store_transaction(connection, transaction):
"""Store a single transaction."""
raise NotImplementedError
@singledispatch
def get_transaction_by_id(connection, transaction_id):
"""Get the transaction by transaction id."""
raise NotImplementedError
@singledispatch
def get_transaction_single(connection, transaction_id) -> DbTransaction:
"""Get a single transaction by id."""
raise NotImplementedError
@singledispatch
def get_transaction(connection, transaction_id):
"""Get a transaction by id."""
raise NotImplementedError
@singledispatch
def get_transactions_by_asset(connection, asset):
"""Get a transaction by id."""
raise NotImplementedError
@singledispatch
def get_transactions_by_metadata(connection, metadata: str, limit: int = 1000) -> list[DbTransaction]:
"""Get a transaction by its metadata cid."""
raise NotImplementedError
@singledispatch
def get_transactions(connection, transactions_ids) -> list[DbTransaction]:
"""Get a transaction from the transactions table.
Args:
@@ -74,21 +120,7 @@ def get_transaction(connection, transaction_id):
@singledispatch
def get_transactions(connection, transaction_ids):
"""Get transactions from the transactions table.
Args:
transaction_ids (list): list of transaction ids to fetch
Returns:
The result of the operation.
"""
raise NotImplementedError
@singledispatch
def get_asset(connection, asset_id):
def get_asset(connection, asset_id) -> Asset:
"""Get an asset from the assets table.
Args:
@@ -149,7 +181,7 @@ def get_owned_ids(connection, owner):
@singledispatch
def get_block(connection, block_id):
def get_block(connection, block_id) -> Block:
"""Get a block from the planet table.
Args:
@@ -177,21 +209,18 @@ def get_block_with_transaction(connection, txid):
@singledispatch
def get_metadata(connection, transaction_ids):
"""Get a list of metadata from the metadata table.
def store_transaction_outputs(connection, output: Output, index: int):
"""Store the transaction outputs.
Args:
transaction_ids (list): a list of ids for the metadata to be retrieved from
the database.
Returns:
metadata (list): the list of returned metadata.
output (Output): the output to store.
index (int): the index of the output in the transaction.
"""
raise NotImplementedError
@singledispatch
def get_assets(connection, asset_ids) -> list:
def get_assets(connection, asset_ids) -> list[Asset]:
"""Get a list of assets from the assets table.
Args:
@@ -439,6 +468,36 @@ def get_latest_abci_chain(conn):
@singledispatch
def _group_transaction_by_ids(txids: list, connection):
def get_inputs_by_tx_id(connection, tx_id) -> list[Input]:
"""Retrieve inputs for a transaction by its id"""
raise NotImplementedError
@singledispatch
def store_transaction_inputs(connection, inputs: list[Input]):
"""Store inputs for a transaction"""
raise NotImplementedError
@singledispatch
def get_complete_transactions_by_ids(txids: list, connection):
"""Returns the transactions object (JSON TYPE), from list of ids."""
raise NotImplementedError
@singledispatch
def get_script_by_tx_id(connection, tx_id: str) -> Script:
"""Retrieve script for a transaction by its id"""
raise NotImplementedError
@singledispatch
def get_outputs_by_tx_id(connection, tx_id: str) -> list[Output]:
"""Retrieve outputs for a transaction by its id"""
raise NotImplementedError
@singledispatch
def get_metadata(conn, transaction_ids):
"""Retrieve metadata for a list of transactions by their ids"""
raise NotImplementedError
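
These singledispatch stubs get bound per backend through module_dispatch_registrar; a minimal sketch of the registration pattern (the Tarantool query module below does this for its real implementations):

from planetmint.backend.utils import module_dispatch_registrar
from planetmint.backend import query
from planetmint.backend.tarantool.connection import TarantoolDBConnection

register_query = module_dispatch_registrar(query)

@register_query(TarantoolDBConnection)
def get_asset(connection, asset_id):
    # dispatched whenever the generic query.get_asset() is called
    # with a TarantoolDBConnection as its first argument
    ...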

@@ -9,7 +9,6 @@ import logging
from functools import singledispatch
from planetmint.config import Config
from planetmint.backend.connection import Connection
from transactions.common.exceptions import ValidationError
from transactions.common.utils import (
validate_all_values_for_key_in_obj,
@@ -119,7 +118,8 @@ def drop_database(connection, dbname):
raise NotImplementedError
def init_database(connection=None, dbname=None):
@singledispatch
def init_database(connection, dbname):
"""Initialize the configured backend for use with Planetmint.
Creates a database with :attr:`dbname` with any required tables
@@ -134,11 +134,7 @@ def init_database(connection=None, dbname=None):
configuration.
"""
connection = connection or Connection()
dbname = dbname or Config().get()["database"]["name"]
create_database(connection, dbname)
create_tables(connection, dbname)
raise NotImplementedError
def validate_language_key(obj, key):

@@ -1,5 +1,5 @@
# Register the single dispatched modules on import.
from planetmint.backend.tarantool import query, connection, schema, convert # noqa
from planetmint.backend.tarantool import query, connection, schema # noqa
# MongoDBConnection should always be accessed via
# ``planetmint.backend.connect()``.

@@ -1,78 +0,0 @@
box.cfg{listen = 3303}
function indexed_pattern_search(space_name, field_no, pattern)
if (box.space[space_name] == nil) then
print("Error: Failed to find the specified space")
return nil
end
local index_no = -1
for i=0,box.schema.INDEX_MAX,1 do
if (box.space[space_name].index[i] == nil) then break end
if (box.space[space_name].index[i].type == "TREE"
and box.space[space_name].index[i].parts[1].fieldno == field_no
and (box.space[space_name].index[i].parts[1].type == "scalar"
or box.space[space_name].index[i].parts[1].type == "string")) then
index_no = i
break
end
end
if (index_no == -1) then
print("Error: Failed to find an appropriate index")
return nil
end
local index_search_key = ""
local index_search_key_length = 0
local last_character = ""
local c = ""
local c2 = ""
for i=1,string.len(pattern),1 do
c = string.sub(pattern, i, i)
if (last_character ~= "%") then
if (c == '^' or c == "$" or c == "(" or c == ")" or c == "."
or c == "[" or c == "]" or c == "*" or c == "+"
or c == "-" or c == "?") then
break
end
if (c == "%") then
c2 = string.sub(pattern, i + 1, i + 1)
if (string.match(c2, "%p") == nil) then break end
index_search_key = index_search_key .. c2
else
index_search_key = index_search_key .. c
end
end
last_character = c
end
index_search_key_length = string.len(index_search_key)
local result_set = {}
local number_of_tuples_in_result_set = 0
local previous_tuple_field = ""
while true do
local number_of_tuples_since_last_yield = 0
local is_time_for_a_yield = false
for _,tuple in box.space[space_name].index[index_no]:
pairs(index_search_key,{iterator = box.index.GE}) do
if (string.sub(tuple[field_no], 1, index_search_key_length)
> index_search_key) then
break
end
number_of_tuples_since_last_yield = number_of_tuples_since_last_yield + 1
if (number_of_tuples_since_last_yield >= 10
and tuple[field_no] ~= previous_tuple_field) then
index_search_key = tuple[field_no]
is_time_for_a_yield = true
break
end
previous_tuple_field = tuple[field_no]
if (string.match(tuple[field_no], pattern) ~= nil) then
number_of_tuples_in_result_set = number_of_tuples_in_result_set + 1
result_set[number_of_tuples_in_result_set] = tuple
end
end
if (is_time_for_a_yield ~= true) then
break
end
require('fiber').yield()
end
return result_set
end

@@ -34,18 +34,12 @@ class TarantoolDBConnection(DBConnection):
self.connect()
self.SPACE_NAMES = [
"abci_chains",
"assets",
"blocks",
"blocks_tx",
"elections",
"meta_data",
"pre_commits",
"validators",
"validator_sets",
"transactions",
"inputs",
"outputs",
"keys",
"scripts",
]
except tarantool.error.NetworkError as network_err:
logger.info("Host cant be reached")
@@ -102,12 +96,10 @@
raise net_error
def drop_database(self):
db_config = Config().get()["database"]
cmd_resp = self.run_command(command=self.drop_path, config=db_config) # noqa: F841
self.connect().call("drop")
def init_database(self):
db_config = Config().get()["database"]
cmd_resp = self.run_command(command=self.init_path, config=db_config) # noqa: F841
self.connect().call("init")
def run_command(self, command: str, config: dict):
from subprocess import run
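
With drop_database() and init_database() now delegating to the Lua stored procedures, schema setup reduces to two calls over the connector; a sketch assuming the default local host/port:

import tarantool

conn = tarantool.connect("localhost", 3303)
conn.call("init")  # creates all spaces and indexes; idempotent via if_not_exists
conn.call("drop")  # tears them down again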

@@ -0,0 +1,25 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
TARANT_TABLE_META_DATA = "meta_data"
TARANT_TABLE_ASSETS = "assets"
TARANT_TABLE_KEYS = "keys"
TARANT_TABLE_TRANSACTION = "transactions"
TARANT_TABLE_GOVERNANCE = "governance"
TARANT_TABLE_INPUT = "inputs"
TARANT_TABLE_OUTPUT = "outputs"
TARANT_TABLE_SCRIPT = "scripts"
TARANT_TABLE_BLOCKS = "blocks"
TARANT_TABLE_VALIDATOR_SETS = "validator_sets"
TARANT_TABLE_ABCI_CHAINS = "abci_chains"
TARANT_TABLE_UTXOS = "utxos"
TARANT_TABLE_PRE_COMMITS = "pre_commits"
TARANT_TABLE_ELECTIONS = "elections"
TARANT_TX_ID_SEARCH = "transaction_id"
TARANT_ID_SEARCH = "id"
TARANT_INDEX_TX_BY_ASSET_ID = "transactions_by_asset_id"
TARANT_INDEX_SPENDING_BY_ID_AND_OUTPUT_INDEX = "spending_transaction_by_id_and_output_index"
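
A hedged sketch of how these constants replace hard-coded space and index names, assuming the connection.space(...).select(...) helper seen in the query module (the transaction id is a placeholder):

from planetmint.backend.tarantool.const import TARANT_TABLE_TRANSACTION, TARANT_ID_SEARCH

def fetch_tx_tuple(connection, tx_id):
    # select from the "transactions" space via its "id" index
    return connection.run(connection.space(TARANT_TABLE_TRANSACTION).select(tx_id, index=TARANT_ID_SEARCH))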

@@ -1,26 +0,0 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
"""Convert implementation for Tarantool"""
from planetmint.backend.utils import module_dispatch_registrar
from planetmint.backend import convert
from planetmint.backend.tarantool.connection import TarantoolDBConnection
from transactions import Transaction
register_query = module_dispatch_registrar(convert)
@register_query(TarantoolDBConnection)
def prepare_asset(connection, transaction: Transaction, filter_operation, assets):
asset_id = transaction.id
if transaction.operation not in filter_operation:
asset_id = Transaction.read_out_asset_id(transaction)
return tuple([assets, transaction.id, asset_id])
@register_query(TarantoolDBConnection)
def prepare_metadata(connection, transaction: Transaction, metadata):
return {"id": transaction.id, "metadata": metadata}

@@ -1,14 +0,0 @@
box.space.abci_chains:drop()
box.space.assets:drop()
box.space.blocks:drop()
box.space.blocks_tx:drop()
box.space.elections:drop()
box.space.meta_data:drop()
box.space.pre_commits:drop()
box.space.utxos:drop()
box.space.validators:drop()
box.space.transactions:drop()
box.space.inputs:drop()
box.space.outputs:drop()
box.space.keys:drop()
box.space.scripts:drop()

@@ -1,74 +1,320 @@
abci_chains = box.schema.space.create('abci_chains', {engine='memtx', is_sync = false})
abci_chains:format({{name='height' , type='integer'},{name='is_synched' , type='boolean'},{name='chain_id',type='string'}})
abci_chains:create_index('id_search' ,{type='hash', parts={'chain_id'}})
abci_chains:create_index('height_search' ,{type='tree',unique=false, parts={'height'}})
box.cfg{listen = 3303}
assets = box.schema.space.create('assets' , {engine='memtx' , is_sync=false})
assets:format({{name='data' , type='any'}, {name='tx_id', type='string'}, {name='asset_id', type='string'}})
assets:create_index('txid_search', {type='hash', parts={'tx_id'}})
assets:create_index('assetid_search', {type='tree',unique=false, parts={'asset_id', 'tx_id'}})
assets:create_index('only_asset_search', {type='tree', unique=false, parts={'asset_id'}})
function init()
-- ABCI chains
abci_chains = box.schema.create_space('abci_chains', { if_not_exists = true })
abci_chains:format({
{ name = 'id', type = 'string' },
{ name = 'height', type = 'unsigned' },
{ name = 'is_synced', type = 'boolean' }
})
abci_chains:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
abci_chains:create_index('height', {
if_not_exists = true,
unique = false,
parts = {{ field = 'height', type = 'unsigned' }}
})
blocks = box.schema.space.create('blocks' , {engine='memtx' , is_sync=false})
blocks:format{{name='app_hash',type='string'},{name='height' , type='integer'},{name='block_id' , type='string'}}
blocks:create_index('id_search' , {type='hash' , parts={'block_id'}})
blocks:create_index('block_search' , {type='tree', unique = false, parts={'height'}})
blocks:create_index('block_id_search', {type = 'hash', parts ={'block_id'}})
blocks_tx = box.schema.space.create('blocks_tx')
blocks_tx:format{{name='transaction_id', type = 'string'}, {name = 'block_id', type = 'string'}}
blocks_tx:create_index('id_search',{ type = 'hash', parts={'transaction_id'}})
blocks_tx:create_index('block_search', {type = 'tree',unique=false, parts={'block_id'}})
-- Transactions
transactions = box.schema.create_space('transactions', { if_not_exists = true })
transactions:format({
{ name = 'id', type = 'string' },
{ name = 'operation', type = 'string' },
{ name = 'version', type = 'string' },
{ name = 'metadata', type = 'string', is_nullable = true },
{ name = 'assets', type = 'array' },
{ name = 'inputs', type = 'array' },
{ name = 'scripts', type = 'map', is_nullable = true }
})
transactions:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
transactions:create_index('transactions_by_asset_id', {
if_not_exists = true,
unique = false,
parts = {
{ field = 'assets[*].id', type = 'string', is_nullable = true }
}
})
transactions:create_index('transactions_by_asset_cid', {
if_not_exists = true,
unique = false,
parts = {
{ field = 'assets[*].data', type = 'string', is_nullable = true }
}
})
transactions:create_index('transactions_by_metadata_cid', {
if_not_exists = true,
unique = false,
parts = {{ field = 'metadata', type = 'string' }}
})
transactions:create_index('spending_transaction_by_id_and_output_index', {
if_not_exists = true,
parts = {
{ field = 'inputs[*].fulfills["transaction_id"]', type = 'string', is_nullable = true },
{ field = 'inputs[*].fulfills["output_index"]', type = 'unsigned', is_nullable = true }
}})
transactions:create_index('transactions_by_id_and_operation', {
if_not_exists = true,
parts = {
{ field = 'id', type = 'string' },
{ field = 'operation', type = 'string' },
{ field = 'assets[*].id', type = 'string', is_nullable = true }
}
})
elections = box.schema.space.create('elections',{engine = 'memtx' , is_sync = false})
elections:format({{name='election_id' , type='string'},{name='height' , type='integer'}, {name='is_concluded' , type='boolean'}})
elections:create_index('id_search' , {type='hash', parts={'election_id'}})
elections:create_index('height_search' , {type='tree',unique=false, parts={'height'}})
elections:create_index('update_search', {type='tree', unique=false, parts={'election_id', 'height'}})
-- Governance
governance = box.schema.create_space('governance', { if_not_exists = true })
governance:format({
{ name = 'id', type = 'string' },
{ name = 'operation', type = 'string' },
{ name = 'version', type = 'string' },
{ name = 'metadata', type = 'string', is_nullable = true },
{ name = 'assets', type = 'array' },
{ name = 'inputs', type = 'array' },
{ name = 'scripts', type = 'map', is_nullable = true }
})
governance:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
governance:create_index('governance_by_asset_id', {
if_not_exists = true,
unique = false,
parts = {
{ field = 'assets[*].id', type = 'string', is_nullable = true }
}
})
governance:create_index('spending_governance_by_id_and_output_index', {
if_not_exists = true,
parts = {
{ field = 'inputs[*].fulfills["transaction_id"]', type = 'string', is_nullable = true },
{ field = 'inputs[*].fulfills["output_index"]', type = 'unsigned', is_nullable = true }
}})
meta_datas = box.schema.space.create('meta_data',{engine = 'memtx' , is_sync = false})
meta_datas:format({{name='transaction_id' , type='string'}, {name='meta_data' , type='any'}})
meta_datas:create_index('id_search', { type='hash' , parts={'transaction_id'}})
-- Outputs
outputs = box.schema.create_space('outputs', { if_not_exists = true })
outputs:format({
{ name = 'id', type = 'string' },
{ name = 'amount' , type = 'unsigned' },
{ name = 'public_keys', type = 'array' },
{ name = 'condition', type = 'map' },
{ name = 'output_index', type = 'number' },
{ name = 'transaction_id' , type = 'string' }
})
outputs:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
outputs:create_index('transaction_id', {
if_not_exists = true,
unique = false,
parts = {{ field = 'transaction_id', type = 'string' }}
})
outputs:create_index('public_keys', {
if_not_exists = true,
unique = false,
parts = {{field = 'public_keys[*]', type = 'string' }}
})
pre_commits = box.schema.space.create('pre_commits' , {engine='memtx' , is_sync=false})
pre_commits:format({{name='commit_id', type='string'}, {name='height',type='integer'}, {name='transactions',type=any}})
pre_commits:create_index('id_search', {type ='hash' , parts={'commit_id'}})
pre_commits:create_index('height_search', {type ='tree',unique=true, parts={'height'}})
validators = box.schema.space.create('validators' , {engine = 'memtx' , is_sync = false})
validators:format({{name='validator_id' , type='string'},{name='height',type='integer'},{name='validators' , type='any'}})
validators:create_index('id_search' , {type='hash' , parts={'validator_id'}})
validators:create_index('height_search' , {type='tree', unique=true, parts={'height'}})
-- Precommits
pre_commits = box.schema.create_space('pre_commits', { if_not_exists = true })
pre_commits:format({
{ name = 'id', type = 'string' },
{ name = 'height', type = 'unsigned' },
{ name = 'transaction_ids', type = 'array'}
})
pre_commits:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
pre_commits:create_index('height', {
if_not_exists = true,
parts = {{ field = 'height', type = 'unsigned' }}
})
transactions = box.schema.space.create('transactions',{engine='memtx' , is_sync=false})
transactions:format({{name='transaction_id' , type='string'}, {name='operation' , type='string'}, {name='version' ,type='string'}, {name='dict_map', type='any'}})
transactions:create_index('id_search' , {type = 'hash' , parts={'transaction_id'}})
transactions:create_index('transaction_search' , {type = 'tree',unique=false, parts={'operation', 'transaction_id'}})
inputs = box.schema.space.create('inputs')
inputs:format({{name='transaction_id' , type='string'}, {name='fulfillment' , type='any'}, {name='owners_before' , type='array'}, {name='fulfills_transaction_id', type = 'string'}, {name='fulfills_output_index', type = 'string'}, {name='input_id', type='string'}, {name='input_index', type='number'}})
inputs:create_index('delete_search' , {type = 'hash', parts={'input_id'}})
inputs:create_index('spent_search' , {type = 'tree', unique=false, parts={'fulfills_transaction_id', 'fulfills_output_index'}})
inputs:create_index('id_search', {type = 'tree', unique=false, parts = {'transaction_id'}})
-- Blocks
blocks = box.schema.create_space('blocks', { if_not_exists = true })
blocks:format({
{ name = 'id', type = 'string' },
{ name = 'app_hash', type = 'string' },
{ name = 'height', type = 'unsigned' },
{ name = 'transaction_ids', type = 'array' }
})
blocks:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
blocks:create_index('height', {
if_not_exists = true,
parts = {{ field = 'height', type = 'unsigned' }}
})
blocks:create_index('block_by_transaction_id', {
if_not_exists = true,
parts = {{ field = 'transaction_ids[*]', type = 'string' }}
})
outputs = box.schema.space.create('outputs')
outputs:format({{name='transaction_id' , type='string'}, {name='amount' , type='string'}, {name='uri', type='string'}, {name='details_type', type='string'}, {name='details_public_key', type='any'}, {name = 'output_id', type = 'string'}, {name='treshold', type='any'}, {name='subconditions', type='any'}, {name='output_index', type='number'}})
outputs:create_index('unique_search' ,{type='hash', parts={'output_id'}})
outputs:create_index('id_search' ,{type='tree', unique=false, parts={'transaction_id'}})
keys = box.schema.space.create('keys')
keys:format({{name = 'id', type='string'}, {name = 'transaction_id', type = 'string'} ,{name = 'output_id', type = 'string'}, {name = 'public_key', type = 'string'}, {name = 'key_index', type = 'integer'}})
keys:create_index('id_search', {type = 'hash', parts={'id'}})
keys:create_index('keys_search', {type = 'tree', unique=false, parts={'public_key'}})
keys:create_index('txid_search', {type = 'tree', unique=false, parts={'transaction_id'}})
keys:create_index('output_search', {type = 'tree', unique=false, parts={'output_id'}})
-- UTXO
utxos = box.schema.create_space('utxos', { if_not_exists = true })
utxos:format({
{ name = 'id', type = 'string' },
{ name = 'transaction_id', type = 'string' },
{ name = 'output_index', type = 'unsigned' },
{ name = 'utxo', type = 'map' }
})
utxos:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
utxos:create_index('utxos_by_transaction_id', {
if_not_exists = true,
unique = false,
parts = {{ field = 'transaction_id', type = 'string' }}
})
utxos:create_index('utxo_by_transaction_id_and_output_index', {
if_not_exists = true,
parts = {
{ field = 'transaction_id', type = 'string' },
{ field = 'output_index', type = 'unsigned' }
}})
utxos = box.schema.space.create('utxos', {engine = 'memtx' , is_sync = false})
utxos:format({{name='transaction_id' , type='string'}, {name='output_index' , type='integer'}, {name='utxo_dict', type='string'}})
utxos:create_index('id_search', {type='hash' , parts={'transaction_id', 'output_index'}})
utxos:create_index('transaction_search', {type='tree', unique=false, parts={'transaction_id'}})
utxos:create_index('index_search', {type='tree', unique=false, parts={'output_index'}})
scripts = box.schema.space.create('scripts' , {engine='memtx' , is_sync=false})
scripts:format({{name='transaction_id', type='string'},{name='script' , type='any'}})
scripts:create_index('txid_search', {type='hash', parts={'transaction_id'}})
-- Elections
elections = box.schema.create_space('elections', { if_not_exists = true })
elections:format({
{ name = 'id', type = 'string' },
{ name = 'height', type = 'unsigned' },
{ name = 'is_concluded', type = 'boolean' }
})
elections:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
elections:create_index('height', {
if_not_exists = true,
unique = false,
parts = {{ field = 'height', type = 'unsigned' }}
})
-- Validators
validator_sets = box.schema.create_space('validator_sets', { if_not_exists = true })
validator_sets:format({
{ name = 'id', type = 'string' },
{ name = 'height', type = 'unsigned' },
{ name = 'set', type = 'array' }
})
validator_sets:create_index('id', {
if_not_exists = true,
parts = {{ field = 'id', type = 'string' }}
})
validator_sets:create_index('height', {
if_not_exists = true,
parts = {{ field = 'height', type = 'unsigned' }}
})
end
function drop()
if pcall(function()
box.space.abci_chains:drop()
box.space.blocks:drop()
box.space.elections:drop()
box.space.pre_commits:drop()
box.space.utxos:drop()
box.space.validator_sets:drop()
box.space.transactions:drop()
box.space.outputs:drop()
box.space.governance:drop()
end) then
print("Error: specified space not found")
end
end
function indexed_pattern_search(space_name, field_no, pattern)
if (box.space[space_name] == nil) then
print("Error: Failed to find the specified space")
return nil
end
local index_no = -1
for i=0,box.schema.INDEX_MAX,1 do
if (box.space[space_name].index[i] == nil) then break end
if (box.space[space_name].index[i].type == "TREE"
and box.space[space_name].index[i].parts[1].fieldno == field_no
and (box.space[space_name].index[i].parts[1].type == "scalar"
or box.space[space_name].index[i].parts[1].type == "string")) then
index_no = i
break
end
end
if (index_no == -1) then
print("Error: Failed to find an appropriate index")
return nil
end
local index_search_key = ""
local index_search_key_length = 0
local last_character = ""
local c = ""
local c2 = ""
for i=1,string.len(pattern),1 do
c = string.sub(pattern, i, i)
if (last_character ~= "%") then
if (c == '^' or c == "$" or c == "(" or c == ")" or c == "."
or c == "[" or c == "]" or c == "*" or c == "+"
or c == "-" or c == "?") then
break
end
if (c == "%") then
c2 = string.sub(pattern, i + 1, i + 1)
if (string.match(c2, "%p") == nil) then break end
index_search_key = index_search_key .. c2
else
index_search_key = index_search_key .. c
end
end
last_character = c
end
index_search_key_length = string.len(index_search_key)
local result_set = {}
local number_of_tuples_in_result_set = 0
local previous_tuple_field = ""
while true do
local number_of_tuples_since_last_yield = 0
local is_time_for_a_yield = false
for _,tuple in box.space[space_name].index[index_no]:
pairs(index_search_key,{iterator = box.index.GE}) do
if (string.sub(tuple[field_no], 1, index_search_key_length)
> index_search_key) then
break
end
number_of_tuples_since_last_yield = number_of_tuples_since_last_yield + 1
if (number_of_tuples_since_last_yield >= 10
and tuple[field_no] ~= previous_tuple_field) then
index_search_key = tuple[field_no]
is_time_for_a_yield = true
break
end
previous_tuple_field = tuple[field_no]
if (string.match(tuple[field_no], pattern) ~= nil) then
number_of_tuples_in_result_set = number_of_tuples_in_result_set + 1
result_set[number_of_tuples_in_result_set] = tuple
end
end
if (is_time_for_a_yield ~= true) then
break
end
require('fiber').yield()
end
return result_set
end
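Note: indexed_pattern_search follows the recipe from the Tarantool manual: it picks the first TREE index whose leading part is the requested field, derives the longest literal prefix of the Lua pattern to bound a GE index scan, and yields the fiber after every 10 visited tuples so long scans cannot monopolize the transaction thread. text_search() further below wraps it; a hedged sketch of a direct call (space name, 1-based field number, Lua pattern), assuming `conn` is an open TarantoolDBConnection and using text_search's default space name:

    # The pattern is a Lua pattern, not a regex; "planet" is its literal prefix, used to seed the scan.
    matches = conn.connect().call("indexed_pattern_search", ("assets", 1, "planet.*"))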
function delete_output( id )
box.space.outputs:delete(id)
end


@@ -5,260 +5,285 @@
"""Query implementation for Tarantool"""
import json
from secrets import token_hex
from hashlib import sha256
import logging
from uuid import uuid4
from operator import itemgetter
from tarantool.error import DatabaseError
from typing import Union
from planetmint.backend import query
from planetmint.backend.models.dbtransaction import DbTransaction
from planetmint.backend.exceptions import OperationDataInsertionError
from planetmint.exceptions import CriticalDoubleSpend
from planetmint.backend.tarantool.const import (
TARANT_TABLE_META_DATA,
TARANT_TABLE_ASSETS,
TARANT_TABLE_TRANSACTION,
TARANT_TABLE_OUTPUT,
TARANT_TABLE_SCRIPT,
TARANT_TX_ID_SEARCH,
TARANT_ID_SEARCH,
TARANT_INDEX_TX_BY_ASSET_ID,
TARANT_INDEX_SPENDING_BY_ID_AND_OUTPUT_INDEX,
TARANT_TABLE_GOVERNANCE,
TARANT_TABLE_ABCI_CHAINS,
TARANT_TABLE_BLOCKS,
TARANT_TABLE_VALIDATOR_SETS,
TARANT_TABLE_UTXOS,
TARANT_TABLE_PRE_COMMITS,
TARANT_TABLE_ELECTIONS,
)
from planetmint.backend.utils import module_dispatch_registrar
from planetmint.backend.models import Asset, Block, Output
from planetmint.backend.tarantool.connection import TarantoolDBConnection
from planetmint.backend.tarantool.transaction.tools import TransactionCompose, TransactionDecompose
from transactions.common.transaction import Transaction
logger = logging.getLogger(__name__)
register_query = module_dispatch_registrar(query)
@register_query(TarantoolDBConnection)
def _group_transaction_by_ids(connection, txids: list):
def get_complete_transactions_by_ids(connection, txids: list) -> list[DbTransaction]:
_transactions = []
for txid in txids:
_txobject = connection.run(connection.space("transactions").select(txid, index="id_search"))
if len(_txobject) == 0:
tx = get_transaction_by_id(connection, txid, TARANT_TABLE_TRANSACTION)
if tx is None:
tx = get_transaction_by_id(connection, txid, TARANT_TABLE_GOVERNANCE)
if tx is None:
continue
_txobject = _txobject[0]
_txinputs = connection.run(connection.space("inputs").select(txid, index="id_search"))
_txoutputs = connection.run(connection.space("outputs").select(txid, index="id_search"))
_txkeys = connection.run(connection.space("keys").select(txid, index="txid_search"))
_txassets = connection.run(connection.space("assets").select(txid, index="txid_search"))
_txmeta = connection.run(connection.space("meta_data").select(txid, index="id_search"))
_txscript = connection.run(connection.space("scripts").select(txid, index="txid_search"))
_txinputs = sorted(_txinputs, key=itemgetter(6), reverse=False)
_txoutputs = sorted(_txoutputs, key=itemgetter(8), reverse=False)
result_map = {
"transaction": _txobject,
"inputs": _txinputs,
"outputs": _txoutputs,
"keys": _txkeys,
"assets": _txassets,
"metadata": _txmeta,
"script": _txscript,
}
tx_compose = TransactionCompose(db_results=result_map)
_transaction = tx_compose.convert_to_dict()
_transactions.append(_transaction)
outputs = get_outputs_by_tx_id(connection, txid)
tx.outputs = outputs
_transactions.append(tx)
return _transactions
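Note: reassembly is now a two-step lookup: the transaction row comes from the transactions space (with a fallback to governance), and its outputs are re-attached from the outputs space. A usage sketch, assuming an open `connection`; the id is a placeholder:

    txs = get_complete_transactions_by_ids(connection, ["placeholder-tx-id"])
    for tx in txs:
        print(tx.id, len(tx.outputs))  # DbTransaction objects with their Output models attached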
@register_query(TarantoolDBConnection)
def store_transactions(connection, signed_transactions: list):
for transaction in signed_transactions:
txprepare = TransactionDecompose(transaction)
txtuples = txprepare.convert_to_tuple()
try:
connection.run(connection.space("transactions").insert(txtuples["transactions"]), only_data=False)
except: # This is used for omitting duplicate error in database for test -> test_bigchain_api::test_double_inclusion # noqa: E501, E722
continue
for _in in txtuples["inputs"]:
connection.run(connection.space("inputs").insert(_in), only_data=False)
for _out in txtuples["outputs"]:
connection.run(connection.space("outputs").insert(_out), only_data=False)
for _key in txtuples["keys"]:
connection.run(connection.space("keys").insert(_key), only_data=False)
if txtuples["metadata"] is not None:
connection.run(connection.space("meta_data").insert(txtuples["metadata"]), only_data=False)
if txtuples["assets"] is not None:
connection.run(connection.space("assets").insert(txtuples["assets"]), only_data=False)
if txtuples["script"] is not None:
connection.run(connection.space("scripts").insert(txtuples["script"]), only_data=False)
def get_outputs_by_tx_id(connection, tx_id: str) -> list[Output]:
_outputs = connection.run(connection.space(TARANT_TABLE_OUTPUT).select(tx_id, index=TARANT_TX_ID_SEARCH))
_sorted_outputs = sorted(_outputs, key=itemgetter(4))
return [Output.from_tuple(output) for output in _sorted_outputs]
@register_query(TarantoolDBConnection)
def get_transaction(connection, transaction_id: str):
_transactions = _group_transaction_by_ids(txids=[transaction_id], connection=connection)
return next(iter(_transactions), None)
def get_transaction(connection, tx_id: str) -> Union[DbTransaction, None]:
transactions = get_complete_transactions_by_ids(connection, [tx_id])
if len(transactions) != 1:
return None
return transactions[0]
@register_query(TarantoolDBConnection)
def get_transactions(connection, transactions_ids: list):
_transactions = _group_transaction_by_ids(txids=transactions_ids, connection=connection)
return _transactions
def get_transactions_by_asset(connection, asset: str, limit: int = 1000) -> list[DbTransaction]:
txs = connection.run(
connection.space(TARANT_TABLE_TRANSACTION).select(asset, limit=limit, index="transactions_by_asset_cid")
)
tx_ids = [tx[0] for tx in txs]
return get_complete_transactions_by_ids(connection, tx_ids)
@register_query(TarantoolDBConnection)
def store_metadatas(connection, metadata: list):
for meta in metadata:
connection.run(
connection.space("meta_data").insert(
(meta["id"], json.dumps(meta["data"] if not "metadata" in meta else meta["metadata"]))
) # noqa: E713
)
def get_transactions_by_metadata(connection, metadata: str, limit: int = 1000) -> list[DbTransaction]:
txs = connection.run(
connection.space(TARANT_TABLE_TRANSACTION).select(metadata, limit=limit, index="transactions_by_metadata_cid")
)
tx_ids = [tx[0] for tx in txs]
return get_complete_transactions_by_ids(connection, tx_ids)
@register_query(TarantoolDBConnection)
def get_metadata(connection, transaction_ids: list):
_returned_data = []
for _id in transaction_ids:
metadata = connection.run(connection.space("meta_data").select(_id, index="id_search"))
if metadata is not None:
if len(metadata) > 0:
metadata[0] = list(metadata[0])
metadata[0][1] = json.loads(metadata[0][1])
metadata[0] = tuple(metadata[0])
_returned_data.append(metadata)
return _returned_data
@register_query(TarantoolDBConnection)
def store_asset(connection, asset):
def convert(obj):
if isinstance(obj, tuple):
obj = list(obj)
obj[0] = json.dumps(obj[0])
return tuple(obj)
else:
return (json.dumps(obj), obj["id"], obj["id"])
def store_transaction_outputs(connection, output: Output, index: int) -> str:
output_id = uuid4().hex
try:
return connection.run(connection.space("assets").insert(convert(asset)), only_data=False)
except DatabaseError:
pass
connection.run(
connection.space(TARANT_TABLE_OUTPUT).insert(
(
output_id,
int(output.amount),
output.public_keys,
output.condition.to_dict(),
index,
output.transaction_id,
)
)
)
return output_id
except Exception as e:
logger.info(f"Could not insert Output: {e}")
raise OperationDataInsertionError()
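Note: the tuple written here fixes the output layout the reads above depend on: (output_id, amount, public_keys, condition_dict, output_index, transaction_id). output_index sits at position 4, which is what the itemgetter(4) sort in get_outputs_by_tx_id keys on. A sketch, assuming `output` is an Output model whose transaction_id is already set:

    new_output_id = store_transaction_outputs(connection, output, index=0)  # returns the generated uuid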
@register_query(TarantoolDBConnection)
def store_assets(connection, assets: list):
for asset in assets:
store_asset(connection, asset)
def store_transactions(connection, signed_transactions: list, table=TARANT_TABLE_TRANSACTION):
for transaction in signed_transactions:
store_transaction(connection, transaction, table)
[
store_transaction_outputs(connection, Output.outputs_dict(output, transaction["id"]), index)
for index, output in enumerate(transaction[TARANT_TABLE_OUTPUT])
]
@register_query(TarantoolDBConnection)
def get_asset(connection, asset_id: str):
_data = connection.run(connection.space("assets").select(asset_id, index="txid_search"))
return json.loads(_data[0][0]) if len(_data) > 0 else []
def store_transaction(connection, transaction, table=TARANT_TABLE_TRANSACTION):
scripts = None
if TARANT_TABLE_SCRIPT in transaction:
scripts = transaction[TARANT_TABLE_SCRIPT]
asset_obj = Transaction.get_assets_tag(transaction["version"])
if transaction["version"] == "2.0":
asset_array = [transaction[asset_obj]]
else:
asset_array = transaction[asset_obj]
tx = (
transaction["id"],
transaction["operation"],
transaction["version"],
transaction["metadata"],
asset_array,
transaction["inputs"],
scripts,
)
try:
connection.run(connection.space(table).insert(tx), only_data=False)
except Exception as e:
logger.info(f"Could not insert transactions: {e}")
if e.args[0] == 3 and e.args[1].startswith("Duplicate key exists in"):
raise CriticalDoubleSpend()
else:
raise OperationDataInsertionError()
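Note: Tarantool reports a primary-key collision as error code 3 with a message beginning "Duplicate key exists in"; the handler above maps exactly that case onto CriticalDoubleSpend and every other insert failure onto OperationDataInsertionError. A behavioural sketch, with `tx_dict` standing in for a signed transaction dict:

    from planetmint.exceptions import CriticalDoubleSpend

    store_transaction(connection, tx_dict)
    try:
        store_transaction(connection, tx_dict)  # same id again
    except CriticalDoubleSpend:
        pass  # the duplicate primary key surfaced as a double spend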
@register_query(TarantoolDBConnection)
def get_assets(connection, assets_ids: list) -> list:
def get_transaction_by_id(connection, transaction_id, table=TARANT_TABLE_TRANSACTION):
txs = connection.run(connection.space(table).select(transaction_id, index=TARANT_ID_SEARCH), only_data=False)
if len(txs) == 0:
return None
return DbTransaction.from_tuple(txs[0])
@register_query(TarantoolDBConnection)
def get_transaction_single(connection, transaction_id) -> Union[DbTransaction, None]:
txs = get_complete_transactions_by_ids(txids=[transaction_id], connection=connection)
return txs[0] if len(txs) == 1 else None
@register_query(TarantoolDBConnection)
def get_transactions(connection, transactions_ids: list) -> list[DbTransaction]:
return get_complete_transactions_by_ids(txids=transactions_ids, connection=connection)
@register_query(TarantoolDBConnection)
def get_asset(connection, asset_id: str) -> Asset:
_data = connection.run(
connection.space(TARANT_TABLE_TRANSACTION).select(asset_id, index=TARANT_INDEX_TX_BY_ASSET_ID)
)
return Asset.from_dict(_data[0])
@register_query(TarantoolDBConnection)
def get_assets(connection, assets_ids: list) -> list[Asset]:
_returned_data = []
for _id in list(set(assets_ids)):
res = connection.run(connection.space("assets").select(_id, index="txid_search"))
res = connection.run(connection.space(TARANT_TABLE_TRANSACTION).select(_id, index=TARANT_INDEX_TX_BY_ASSET_ID))
if len(res) == 0:
continue
_returned_data.append(res[0])
sorted_assets = sorted(_returned_data, key=lambda k: k[1], reverse=False)
return [(json.loads(asset[0]), asset[1]) for asset in sorted_assets]
return [Asset.from_dict(asset) for asset in sorted_assets]
@register_query(TarantoolDBConnection)
def get_spent(connection, fullfil_transaction_id: str, fullfil_output_index: str):
def get_spent(connection, fullfil_transaction_id: str, fullfil_output_index: str) -> list[DbTransaction]:
_inputs = connection.run(
connection.space("inputs").select([fullfil_transaction_id, str(fullfil_output_index)], index="spent_search")
connection.space(TARANT_TABLE_TRANSACTION).select(
[fullfil_transaction_id, fullfil_output_index], index=TARANT_INDEX_SPENDING_BY_ID_AND_OUTPUT_INDEX
)
)
_transactions = _group_transaction_by_ids(txids=[inp[0] for inp in _inputs], connection=connection)
return _transactions
return get_complete_transactions_by_ids(txids=[inp[0] for inp in _inputs], connection=connection)
@register_query(TarantoolDBConnection)
def get_latest_block(connection): # TODO Here is used DESCENDING OPERATOR
_all_blocks = connection.run(connection.space("blocks").select())
block = {"app_hash": "", "height": 0, "transactions": []}
def get_latest_block(connection) -> Union[dict, None]:
blocks = connection.run(connection.space(TARANT_TABLE_BLOCKS).select())
if not blocks:
return None
if _all_blocks is not None:
if len(_all_blocks) > 0:
_block = sorted(_all_blocks, key=itemgetter(1), reverse=True)[0]
_txids = connection.run(connection.space("blocks_tx").select(_block[2], index="block_search"))
block["app_hash"] = _block[0]
block["height"] = _block[1]
block["transactions"] = [tx[0] for tx in _txids]
else:
block = None
return block
blocks = sorted(blocks, key=itemgetter(2), reverse=True)
latest_block = Block.from_tuple(blocks[0])
return latest_block.to_dict()
@register_query(TarantoolDBConnection)
def store_block(connection, block: dict):
block_unique_id = token_hex(8)
connection.run(
connection.space("blocks").insert((block["app_hash"], block["height"], block_unique_id)), only_data=False
)
for txid in block["transactions"]:
connection.run(connection.space("blocks_tx").insert((txid, block_unique_id)), only_data=False)
@register_query(TarantoolDBConnection)
def get_txids_filtered(
connection, asset_ids: list[str], operation: str = None, last_tx: any = None
): # TODO here is used 'OR' operator
actions = {
"CREATE": {"sets": ["CREATE", asset_ids], "index": "transaction_search"},
# 1 - operation, 2 - id (only in transactions) +
"TRANSFER": {"sets": ["TRANSFER", asset_ids], "index": "transaction_search"},
# 1 - operation, 2 - asset.id (linked mode) + OPERATOR OR
None: {"sets": [asset_ids, asset_ids]},
}[operation]
_transactions = []
if actions["sets"][0] == "CREATE": # +
_transactions = connection.run(
connection.space("transactions").select([operation, asset_ids[0]], index=actions["index"])
block_unique_id = uuid4().hex
try:
connection.run(
connection.space(TARANT_TABLE_BLOCKS).insert(
(block_unique_id, block["app_hash"], block["height"], block[TARANT_TABLE_TRANSACTION])
),
only_data=False,
)
elif actions["sets"][0] == "TRANSFER": # +
_assets = connection.run(connection.space("assets").select(asset_ids, index="only_asset_search"))
for asset in _assets:
_txid = asset[1]
_tmp_transactions = connection.run(
connection.space("transactions").select([operation, _txid], index=actions["index"])
)
if len(_tmp_transactions) != 0:
_transactions.extend(_tmp_transactions)
else:
_tx_ids = connection.run(connection.space("transactions").select(asset_ids, index="id_search"))
_assets_ids = connection.run(connection.space("assets").select(asset_ids, index="only_asset_search"))
return tuple(set([sublist[1] for sublist in _assets_ids] + [sublist[0] for sublist in _tx_ids]))
if last_tx:
return tuple(next(iter(_transactions)))
return tuple([elem[0] for elem in _transactions])
except Exception as e:
logger.info(f"Could not insert block: {e}")
raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)
def text_search(conn, search, table="assets", limit=0):
def get_txids_filtered(connection, asset_ids: list[str], operation: str = "", last_tx: bool = False) -> list[str]:
transactions = []
if operation == "CREATE":
transactions = connection.run(
connection.space(TARANT_TABLE_TRANSACTION).select(
[asset_ids[0], operation], index="transactions_by_id_and_operation"
)
)
elif operation == "TRANSFER":
transactions = connection.run(
connection.space(TARANT_TABLE_TRANSACTION).select(asset_ids, index=TARANT_INDEX_TX_BY_ASSET_ID)
)
else:
txs = connection.run(connection.space(TARANT_TABLE_TRANSACTION).select(asset_ids, index=TARANT_ID_SEARCH))
asset_txs = connection.run(
connection.space(TARANT_TABLE_TRANSACTION).select(asset_ids, index=TARANT_INDEX_TX_BY_ASSET_ID)
)
transactions = txs + asset_txs
ids = tuple([tx[0] for tx in transactions])
# NOTE: check when and where this is used and remove if not
if last_tx:
return ids[0]
return ids
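Note: the three branches map onto three index lookups: CREATE queries the composite (id, operation) index, TRANSFER the asset-id index, and no operation unions plain id matches with asset-id matches. Sketch with a placeholder asset id, assuming an open `connection`:

    asset_id = "placeholder-asset-id"
    create_ids = get_txids_filtered(connection, [asset_id], operation="CREATE")
    transfer_ids = get_txids_filtered(connection, [asset_id], operation="TRANSFER")
    all_ids = get_txids_filtered(connection, [asset_id])  # union of id and asset-id matches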
@register_query(TarantoolDBConnection)
def text_search(conn, search, table=TARANT_TABLE_ASSETS, limit=0):
pattern = ".{}.".format(search)
field_no = 1 if table == "assets" else 2 # 2 for meta_data
field_no = 1 if table == TARANT_TABLE_ASSETS else 2 # 2 for meta_data
res = conn.run(conn.space(table).call("indexed_pattern_search", (table, field_no, pattern)))
to_return = []
if len(res[0]): # NEEDS BEAUTIFICATION
if table == "assets":
if table == TARANT_TABLE_ASSETS:
for result in res[0]:
to_return.append({"data": json.loads(result[0])[0]["data"], "id": result[1]})
to_return.append({"data": json.loads(result[0])["data"], "id": result[1]})
else:
for result in res[0]:
to_return.append({"metadata": json.loads(result[1]), "id": result[0]})
to_return.append({TARANT_TABLE_META_DATA: json.loads(result[1]), "id": result[0]})
return to_return if limit == 0 else to_return[:limit]
def _remove_text_score(asset):
asset.pop("score", None)
return asset
@register_query(TarantoolDBConnection)
def get_owned_ids(connection, owner: str):
_keys = connection.run(connection.space("keys").select(owner, index="keys_search"))
if _keys is None or len(_keys) == 0:
def get_owned_ids(connection, owner: str) -> list[DbTransaction]:
outputs = connection.run(connection.space(TARANT_TABLE_OUTPUT).select(owner, index="public_keys"))
if len(outputs) == 0:
return []
_transactionids = list(set([key[1] for key in _keys]))
_transactions = _group_transaction_by_ids(txids=_transactionids, connection=connection)
return _transactions
txids = [output[5] for output in outputs]
unique_set_txids = set(txids)
return get_complete_transactions_by_ids(connection, unique_set_txids)
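Note: ownership is now resolved through the outputs space's public_keys index; field 5 of each output tuple is the transaction id (matching the layout written by store_transaction_outputs), and the ids are deduplicated before the full transactions are reassembled. Sketch with a placeholder key:

    owned = get_owned_ids(connection, "placeholder-public-key")
    owned_ids = {tx.id for tx in owned}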
@register_query(TarantoolDBConnection)
@@ -277,44 +302,33 @@ def get_spending_transactions(connection, inputs):
@register_query(TarantoolDBConnection)
def get_block(connection, block_id=[]):
_block = connection.run(connection.space("blocks").select(block_id, index="block_search", limit=1))
if _block is None or len(_block) == 0:
return []
_block = _block[0]
_txblock = connection.run(connection.space("blocks_tx").select(_block[2], index="block_search"))
return {"app_hash": _block[0], "height": _block[1], "transactions": [_tx[0] for _tx in _txblock]}
def get_block(connection, block_id=None) -> Union[dict, None]:
_block = connection.run(connection.space(TARANT_TABLE_BLOCKS).select(block_id, index="height", limit=1))
if len(_block) == 0:
return
_block = Block.from_tuple(_block[0])
return _block.to_dict()
@register_query(TarantoolDBConnection)
def get_block_with_transaction(connection, txid: str):
_all_blocks_tx = connection.run(connection.space("blocks_tx").select(txid, index="id_search"))
if _all_blocks_tx is None or len(_all_blocks_tx) == 0:
return []
_block = connection.run(connection.space("blocks").select(_all_blocks_tx[0][1], index="block_id_search"))
return [{"height": _height[1]} for _height in _block]
def get_block_with_transaction(connection, txid: str) -> list[Block]:
_block = connection.run(connection.space(TARANT_TABLE_BLOCKS).select(txid, index="block_by_transaction_id"))
return _block if len(_block) > 0 else []
@register_query(TarantoolDBConnection)
def delete_transactions(connection, txn_ids: list):
for _id in txn_ids:
connection.run(connection.space("transactions").delete(_id), only_data=False)
for _id in txn_ids:
_inputs = connection.run(connection.space("inputs").select(_id, index="id_search"), only_data=False)
_outputs = connection.run(connection.space("outputs").select(_id, index="id_search"), only_data=False)
_keys = connection.run(connection.space("keys").select(_id, index="txid_search"), only_data=False)
for _kID in _keys:
connection.run(connection.space("keys").delete(_kID[0], index="id_search"), only_data=False)
for _inpID in _inputs:
connection.run(connection.space("inputs").delete(_inpID[5], index="delete_search"), only_data=False)
for _outpID in _outputs:
connection.run(connection.space("outputs").delete(_outpID[5], index="unique_search"), only_data=False)
for _id in txn_ids:
connection.run(connection.space("meta_data").delete(_id, index="id_search"), only_data=False)
for _id in txn_ids:
connection.run(connection.space("assets").delete(_id, index="txid_search"), only_data=False)
try:
for _id in txn_ids:
_outputs = get_outputs_by_tx_id(connection, _id)
for x in range(len(_outputs)):
connection.connect().call("delete_output", (_outputs[x].id))
for _id in txn_ids:
connection.run(connection.space(TARANT_TABLE_TRANSACTION).delete(_id), only_data=False)
connection.run(connection.space(TARANT_TABLE_GOVERNANCE).delete(_id), only_data=False)
except Exception as e:
logger.info(f"Could not insert unspent output: {e}")
raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)
@@ -322,10 +336,16 @@ def store_unspent_outputs(connection, *unspent_outputs: list):
result = []
if unspent_outputs:
for utxo in unspent_outputs:
output = connection.run(
connection.space("utxos").insert((utxo["transaction_id"], utxo["output_index"], json.dumps(utxo)))
)
result.append(output)
try:
output = connection.run(
connection.space(TARANT_TABLE_UTXOS).insert(
(uuid4().hex, utxo["transaction_id"], utxo["output_index"], utxo)
)
)
result.append(output)
except Exception as e:
logger.info(f"Could not insert unspent output: {e}")
raise OperationDataInsertionError()
return result
@@ -334,94 +354,118 @@ def delete_unspent_outputs(connection, *unspent_outputs: list):
result = []
if unspent_outputs:
for utxo in unspent_outputs:
output = connection.run(connection.space("utxos").delete((utxo["transaction_id"], utxo["output_index"])))
output = connection.run(
connection.space(TARANT_TABLE_UTXOS).delete(
(utxo["transaction_id"], utxo["output_index"]), index="utxo_by_transaction_id_and_output_index"
)
)
result.append(output)
return result
@register_query(TarantoolDBConnection)
def get_unspent_outputs(connection, query=None): # for now we don't have implementation for 'query'.
_utxos = connection.run(connection.space("utxos").select([]))
return [json.loads(utx[2]) for utx in _utxos]
_utxos = connection.run(connection.space(TARANT_TABLE_UTXOS).select([]))
return [utx[3] for utx in _utxos]
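Note: UTXO documents are now stored as native maps (tuple field 3) under a synthetic uuid primary key, so reads no longer need json.loads. Round-trip sketch with placeholder values, assuming an open `connection`:

    utxo = {"transaction_id": "placeholder-tx-id", "output_index": 0}
    store_unspent_outputs(connection, utxo)
    assert utxo in get_unspent_outputs(connection)
    delete_unspent_outputs(connection, utxo)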
@register_query(TarantoolDBConnection)
def store_pre_commit_state(connection, state: dict):
_precommit = connection.run(connection.space("pre_commits").select([], limit=1))
_precommit = connection.run(connection.space(TARANT_TABLE_PRE_COMMITS).select([], limit=1))
_precommitTuple = (
(token_hex(8), state["height"], state["transactions"])
(uuid4().hex, state["height"], state[TARANT_TABLE_TRANSACTION])
if _precommit is None or len(_precommit) == 0
else _precommit[0]
)
connection.run(
connection.space("pre_commits").upsert(
_precommitTuple, op_list=[("=", 1, state["height"]), ("=", 2, state["transactions"])], limit=1
),
only_data=False,
)
try:
connection.run(
connection.space(TARANT_TABLE_PRE_COMMITS).upsert(
_precommitTuple,
op_list=[("=", 1, state["height"]), ("=", 2, state[TARANT_TABLE_TRANSACTION])],
limit=1,
),
only_data=False,
)
except Exception as e:
logger.info(f"Could not insert pre commit state: {e}")
raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)
def get_pre_commit_state(connection):
_commit = connection.run(connection.space("pre_commits").select([], index="id_search"))
def get_pre_commit_state(connection) -> dict:
_commit = connection.run(connection.space(TARANT_TABLE_PRE_COMMITS).select([], index=TARANT_ID_SEARCH))
if _commit is None or len(_commit) == 0:
return None
_commit = sorted(_commit, key=itemgetter(1), reverse=False)[0]
return {"height": _commit[1], "transactions": _commit[2]}
return {"height": _commit[1], TARANT_TABLE_TRANSACTION: _commit[2]}
@register_query(TarantoolDBConnection)
def store_validator_set(conn, validators_update: dict):
_validator = conn.run(conn.space("validators").select(validators_update["height"], index="height_search", limit=1))
unique_id = token_hex(8) if _validator is None or len(_validator) == 0 else _validator[0][0]
conn.run(
conn.space("validators").upsert(
(unique_id, validators_update["height"], validators_update["validators"]),
op_list=[("=", 1, validators_update["height"]), ("=", 2, validators_update["validators"])],
limit=1,
),
only_data=False,
_validator = conn.run(
conn.space(TARANT_TABLE_VALIDATOR_SETS).select(validators_update["height"], index="height", limit=1)
)
unique_id = uuid4().hex if _validator is None or len(_validator) == 0 else _validator[0][0]
try:
conn.run(
conn.space(TARANT_TABLE_VALIDATOR_SETS).upsert(
(unique_id, validators_update["height"], validators_update["validators"]),
op_list=[("=", 1, validators_update["height"]), ("=", 2, validators_update["validators"])],
limit=1,
),
only_data=False,
)
except Exception as e:
logger.info(f"Could not insert validator set: {e}")
raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)
def delete_validator_set(connection, height: int):
_validators = connection.run(connection.space("validators").select(height, index="height_search"))
_validators = connection.run(connection.space(TARANT_TABLE_VALIDATOR_SETS).select(height, index="height"))
for _valid in _validators:
connection.run(connection.space("validators").delete(_valid[0]), only_data=False)
connection.run(connection.space(TARANT_TABLE_VALIDATOR_SETS).delete(_valid[0]), only_data=False)
@register_query(TarantoolDBConnection)
def store_election(connection, election_id: str, height: int, is_concluded: bool):
connection.run(
connection.space("elections").upsert(
(election_id, height, is_concluded), op_list=[("=", 1, height), ("=", 2, is_concluded)], limit=1
),
only_data=False,
)
try:
connection.run(
connection.space(TARANT_TABLE_ELECTIONS).upsert(
(election_id, height, is_concluded), op_list=[("=", 1, height), ("=", 2, is_concluded)], limit=1
),
only_data=False,
)
except Exception as e:
logger.info(f"Could not insert election: {e}")
raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)
def store_elections(connection, elections: list):
for election in elections:
_election = connection.run( # noqa: F841
connection.space("elections").insert(
(election["election_id"], election["height"], election["is_concluded"])
),
only_data=False,
)
try:
for election in elections:
_election = connection.run( # noqa: F841
connection.space(TARANT_TABLE_ELECTIONS).insert(
(election["election_id"], election["height"], election["is_concluded"])
),
only_data=False,
)
except Exception as e:
logger.info(f"Could not insert elections: {e}")
raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)
def delete_elections(connection, height: int):
_elections = connection.run(connection.space("elections").select(height, index="height_search"))
_elections = connection.run(connection.space(TARANT_TABLE_ELECTIONS).select(height, index="height"))
for _elec in _elections:
connection.run(connection.space("elections").delete(_elec[0]), only_data=False)
connection.run(connection.space(TARANT_TABLE_ELECTIONS).delete(_elec[0]), only_data=False)
@register_query(TarantoolDBConnection)
def get_validator_set(connection, height: int = None):
_validators = connection.run(connection.space("validators").select())
_validators = connection.run(connection.space(TARANT_TABLE_VALIDATOR_SETS).select())
if height is not None and _validators is not None:
_validators = [
{"height": validator[1], "validators": validator[2]} for validator in _validators if validator[1] <= height
@@ -434,8 +478,8 @@ def get_validator_set(connection, height: int = None):
@register_query(TarantoolDBConnection)
def get_election(connection, election_id: str):
_elections = connection.run(connection.space("elections").select(election_id, index="id_search"))
def get_election(connection, election_id: str) -> dict:
_elections = connection.run(connection.space(TARANT_TABLE_ELECTIONS).select(election_id, index=TARANT_ID_SEARCH))
if _elections is None or len(_elections) == 0:
return None
_election = sorted(_elections, key=itemgetter(0), reverse=True)[0]
@@ -443,40 +487,40 @@ def get_election(connection, election_id: str):
@register_query(TarantoolDBConnection)
def get_asset_tokens_for_public_key(
connection, asset_id: str, public_key: str
): # FIXME Something can be wrong with this function ! (public_key) is not used # noqa: E501
# space = connection.space("keys")
# _keys = space.select([public_key], index="keys_search")
_transactions = connection.run(connection.space("assets").select([asset_id], index="assetid_search"))
# _transactions = _transactions
# _keys = _keys.data
_grouped_transactions = _group_transaction_by_ids(connection=connection, txids=[_tx[1] for _tx in _transactions])
return _grouped_transactions
def get_asset_tokens_for_public_key(connection, asset_id: str, public_key: str) -> list[DbTransaction]:
id_transactions = connection.run(connection.space(TARANT_TABLE_GOVERNANCE).select([asset_id]))
asset_id_transactions = connection.run(
connection.space(TARANT_TABLE_GOVERNANCE).select([asset_id], index="governance_by_asset_id")
)
transactions = id_transactions + asset_id_transactions
return get_complete_transactions_by_ids(connection, [_tx[0] for _tx in transactions])
@register_query(TarantoolDBConnection)
def store_abci_chain(connection, height: int, chain_id: str, is_synced: bool = True):
hash_id_primarykey = sha256(json.dumps(obj={"height": height}).encode()).hexdigest()
connection.run(
connection.space("abci_chains").upsert(
(height, is_synced, chain_id, hash_id_primarykey),
op_list=[("=", 0, height), ("=", 1, is_synced), ("=", 2, chain_id)],
),
only_data=False,
)
try:
connection.run(
connection.space(TARANT_TABLE_ABCI_CHAINS).upsert(
(chain_id, height, is_synced),
op_list=[("=", 0, chain_id), ("=", 1, height), ("=", 2, is_synced)],
),
only_data=False,
)
except Exception as e:
logger.info(f"Could not insert abci-chain: {e}")
raise OperationDataInsertionError()
@register_query(TarantoolDBConnection)
def delete_abci_chain(connection, height: int):
hash_id_primarykey = sha256(json.dumps(obj={"height": height}).encode()).hexdigest()
connection.run(connection.space("abci_chains").delete(hash_id_primarykey), only_data=False)
chains = connection.run(connection.space(TARANT_TABLE_ABCI_CHAINS).select(height, index="height"), only_data=False)
connection.run(connection.space(TARANT_TABLE_ABCI_CHAINS).delete(chains[0][0], index="id"), only_data=False)
@register_query(TarantoolDBConnection)
def get_latest_abci_chain(connection):
_all_chains = connection.run(connection.space("abci_chains").select())
def get_latest_abci_chain(connection) -> Union[dict, None]:
_all_chains = connection.run(connection.space(TARANT_TABLE_ABCI_CHAINS).select())
if _all_chains is None or len(_all_chains) == 0:
return None
_chain = sorted(_all_chains, key=itemgetter(0), reverse=True)[0]
return {"height": _chain[0], "is_synced": _chain[1], "chain_id": _chain[2]}
_chain = sorted(_all_chains, key=itemgetter(1), reverse=True)[0]
return {"chain_id": _chain[0], "height": _chain[1], "is_synced": _chain[2]}


@@ -8,152 +8,17 @@ from planetmint.backend.tarantool.connection import TarantoolDBConnection
logger = logging.getLogger(__name__)
register_schema = module_dispatch_registrar(backend.schema)
SPACE_NAMES = (
"abci_chains",
"assets",
"blocks",
"blocks_tx",
"elections",
"meta_data",
"pre_commits",
"validators",
"transactions",
"inputs",
"outputs",
"keys",
"utxos",
"scripts",
)
SPACE_COMMANDS = {
"abci_chains": "abci_chains = box.schema.space.create('abci_chains', {engine='memtx', is_sync = false})",
"assets": "assets = box.schema.space.create('assets' , {engine='memtx' , is_sync=false})",
"blocks": "blocks = box.schema.space.create('blocks' , {engine='memtx' , is_sync=false})",
"blocks_tx": "blocks_tx = box.schema.space.create('blocks_tx')",
"elections": "elections = box.schema.space.create('elections',{engine = 'memtx' , is_sync = false})",
"meta_data": "meta_datas = box.schema.space.create('meta_data',{engine = 'memtx' , is_sync = false})",
"pre_commits": "pre_commits = box.schema.space.create('pre_commits' , {engine='memtx' , is_sync=false})",
"validators": "validators = box.schema.space.create('validators' , {engine = 'memtx' , is_sync = false})",
"transactions": "transactions = box.schema.space.create('transactions',{engine='memtx' , is_sync=false})",
"inputs": "inputs = box.schema.space.create('inputs')",
"outputs": "outputs = box.schema.space.create('outputs')",
"keys": "keys = box.schema.space.create('keys')",
"utxos": "utxos = box.schema.space.create('utxos', {engine = 'memtx' , is_sync = false})",
"scripts": "scripts = box.schema.space.create('scripts', {engine = 'memtx' , is_sync = false})",
}
INDEX_COMMANDS = {
"abci_chains": {
"id_search": "abci_chains:create_index('id_search' ,{type='tree', parts={'id'}})",
"height_search": "abci_chains:create_index('height_search' ,{type='tree', unique=false, parts={'height'}})",
},
"assets": {
"txid_search": "assets:create_index('txid_search', {type='tree', parts={'tx_id'}})",
"assetid_search": "assets:create_index('assetid_search', {type='tree',unique=false, parts={'asset_id', 'tx_id'}})", # noqa: E501
"only_asset_search": "assets:create_index('only_asset_search', {type='tree', unique=false, parts={'asset_id'}})", # noqa: E501
"text_search": "assets:create_index('secondary', {unique=false,parts={1,'string'}})",
},
"blocks": {
"id_search": "blocks:create_index('id_search' , {type='tree' , parts={'block_id'}})",
"block_search": "blocks:create_index('block_search' , {type='tree', unique = false, parts={'height'}})",
"block_id_search": "blocks:create_index('block_id_search', {type = 'hash', parts ={'block_id'}})",
},
"blocks_tx": {
"id_search": "blocks_tx:create_index('id_search',{ type = 'tree', parts={'transaction_id'}})",
"block_search": "blocks_tx:create_index('block_search', {type = 'tree',unique=false, parts={'block_id'}})",
},
"elections": {
"id_search": "elections:create_index('id_search' , {type='tree', parts={'election_id'}})",
"height_search": "elections:create_index('height_search' , {type='tree',unique=false, parts={'height'}})",
"update_search": "elections:create_index('update_search', {type='tree', unique=false, parts={'election_id', 'height'}})", # noqa: E501
},
"meta_data": {
"id_search": "meta_datas:create_index('id_search', { type='tree' , parts={'transaction_id'}})",
"text_search": "meta_datas:create_index('secondary', {unique=false,parts={2,'string'}})",
},
"pre_commits": {
"id_search": "pre_commits:create_index('id_search', {type ='tree' , parts={'commit_id'}})",
"height_search": "pre_commits:create_index('height_search', {type ='tree',unique=true, parts={'height'}})",
},
"validators": {
"id_search": "validators:create_index('id_search' , {type='tree' , parts={'validator_id'}})",
"height_search": "validators:create_index('height_search' , {type='tree', unique=true, parts={'height'}})",
},
"transactions": {
"id_search": "transactions:create_index('id_search' , {type = 'tree' , parts={'transaction_id'}})",
"transaction_search": "transactions:create_index('transaction_search' , {type = 'tree',unique=false, parts={'operation', 'transaction_id'}})", # noqa: E501
},
"inputs": {
"delete_search": "inputs:create_index('delete_search' , {type = 'tree', parts={'input_id'}})",
"spent_search": "inputs:create_index('spent_search' , {type = 'tree', unique=false, parts={'fulfills_transaction_id', 'fulfills_output_index'}})", # noqa: E501
"id_search": "inputs:create_index('id_search', {type = 'tree', unique=false, parts = {'transaction_id'}})",
},
"outputs": {
"unique_search": "outputs:create_index('unique_search' ,{type='tree', parts={'output_id'}})",
"id_search": "outputs:create_index('id_search' ,{type='tree', unique=false, parts={'transaction_id'}})",
},
"keys": {
"id_search": "keys:create_index('id_search', {type = 'tree', parts={'id'}})",
"keys_search": "keys:create_index('keys_search', {type = 'tree', unique=false, parts={'public_key'}})",
"txid_search": "keys:create_index('txid_search', {type = 'tree', unique=false, parts={'transaction_id'}})",
"output_search": "keys:create_index('output_search', {type = 'tree', unique=false, parts={'output_id'}})",
},
"utxos": {
"id_search": "utxos:create_index('id_search', {type='tree' , parts={'transaction_id', 'output_index'}})",
"transaction_search": "utxos:create_index('transaction_search', {type='tree', unique=false, parts={'transaction_id'}})", # noqa: E501
"index_Search": "utxos:create_index('index_search', {type='tree', unique=false, parts={'output_index'}})",
},
"scripts": {
"txid_search": "scripts:create_index('txid_search', {type='tree', parts={'transaction_id'}})",
},
}
SCHEMA_COMMANDS = {
"abci_chains": "abci_chains:format({{name='height' , type='integer'},{name='is_synched' , type='boolean'},{name='chain_id',type='string'}, {name='id', type='string'}})", # noqa: E501
"assets": "assets:format({{name='data' , type='string'}, {name='tx_id', type='string'}, {name='asset_id', type='string'}})", # noqa: E501
"blocks": "blocks:format{{name='app_hash',type='string'},{name='height' , type='integer'},{name='block_id' , type='string'}}", # noqa: E501
"blocks_tx": "blocks_tx:format{{name='transaction_id', type = 'string'}, {name = 'block_id', type = 'string'}}",
"elections": "elections:format({{name='election_id' , type='string'},{name='height' , type='integer'}, {name='is_concluded' , type='boolean'}})", # noqa: E501
"meta_data": "meta_datas:format({{name='transaction_id' , type='string'}, {name='meta_data' , type='string'}})", # noqa: E501
"pre_commits": "pre_commits:format({{name='commit_id', type='string'}, {name='height',type='integer'}, {name='transactions',type=any}})", # noqa: E501
"validators": "validators:format({{name='validator_id' , type='string'},{name='height',type='integer'},{name='validators' , type='any'}})", # noqa: E501
"transactions": "transactions:format({{name='transaction_id' , type='string'}, {name='operation' , type='string'}, {name='version' ,type='string'}, {name='dict_map', type='any'}})", # noqa: E501
"inputs": "inputs:format({{name='transaction_id' , type='string'}, {name='fulfillment' , type='any'}, {name='owners_before' , type='array'}, {name='fulfills_transaction_id', type = 'string'}, {name='fulfills_output_index', type = 'string'}, {name='input_id', type='string'}, {name='input_index', type='number'}})", # noqa: E501
"outputs": "outputs:format({{name='transaction_id' , type='string'}, {name='amount' , type='string'}, {name='uri', type='string'}, {name='details_type', type='string'}, {name='details_public_key', type='any'}, {name = 'output_id', type = 'string'}, {name='treshold', type='any'}, {name='subconditions', type='any'}, {name='output_index', type='number'}})", # noqa: E501
"keys": "keys:format({{name = 'id', type='string'}, {name = 'transaction_id', type = 'string'} ,{name = 'output_id', type = 'string'}, {name = 'public_key', type = 'string'}, {name = 'key_index', type = 'integer'}})", # noqa: E501
"utxos": "utxos:format({{name='transaction_id' , type='string'}, {name='output_index' , type='integer'}, {name='utxo_dict', type='string'}})", # noqa: E501
"scripts": "scripts:format({{name='transaction_id', type='string'},{name='script' , type='any'}})", # noqa: E501
}
SCHEMA_DROP_COMMANDS = {
"abci_chains": "box.space.abci_chains:drop()",
"assets": "box.space.assets:drop()",
"blocks": "box.space.blocks:drop()",
"blocks_tx": "box.space.blocks_tx:drop()",
"elections": "box.space.elections:drop()",
"meta_data": "box.space.meta_data:drop()",
"pre_commits": "box.space.pre_commits:drop()",
"validators": "box.space.validators:drop()",
"transactions": "box.space.transactions:drop()",
"inputs": "box.space.inputs:drop()",
"outputs": "box.space.outputs:drop()",
"keys": "box.space.keys:drop()",
"utxos": "box.space.utxos:drop()",
"scripts": "box.space.scripts:drop()",
}
@register_schema(TarantoolDBConnection)
def init_database(connection, db_name=None):
print("init database tarantool schema")
connection.connect().call("init")
@register_schema(TarantoolDBConnection)
def drop_database(connection, not_used=None):
for _space in SPACE_NAMES:
try:
cmd = SCHEMA_DROP_COMMANDS[_space].encode()
run_command_with_output(command=cmd)
print(f"Space '{_space}' was dropped succesfuly.")
except Exception:
print(f"Unexpected error while trying to drop space '{_space}'")
def drop_database(connection, db_name=None):
print("drop database tarantool schema")
connection.connect().call("drop")
@register_schema(TarantoolDBConnection)
@@ -182,31 +47,4 @@ def run_command_with_output(command):
@register_schema(TarantoolDBConnection)
def create_tables(connection, dbname):
for _space in SPACE_NAMES:
try:
cmd = SPACE_COMMANDS[_space].encode()
run_command_with_output(command=cmd)
print(f"Space '{_space}' created.")
except Exception as err:
print(f"Unexpected error while trying to create '{_space}': {err}")
create_schema(space_name=_space)
create_indexes(space_name=_space)
def create_indexes(space_name):
indexes = INDEX_COMMANDS[space_name]
for index_name, index_cmd in indexes.items():
try:
run_command_with_output(command=index_cmd.encode())
print(f"Index '{index_name}' created succesfully.")
except Exception as err:
print(f"Unexpected error while trying to create index '{index_name}': '{err}'")
def create_schema(space_name):
try:
cmd = SCHEMA_COMMANDS[space_name].encode()
run_command_with_output(command=cmd)
print(f"Schema created for {space_name} succesfully.")
except Exception as unexpected_error:
print(f"Got unexpected error when creating index for '{space_name}' Space.\n {unexpected_error}")
connection.connect().call("init")


@@ -1,9 +1,15 @@
import copy
import json
from secrets import token_hex
from transactions.common.memoize import HDict
from planetmint.backend.tarantool.const import (
TARANT_TABLE_META_DATA,
TARANT_TABLE_ASSETS,
TARANT_TABLE_KEYS,
TARANT_TABLE_TRANSACTION,
TARANT_TABLE_INPUT,
TARANT_TABLE_OUTPUT,
TARANT_TABLE_SCRIPT,
)
def get_items(_list):
for item in _list:
@@ -12,13 +18,12 @@ def get_items(_list):
def _save_keys_order(dictionary):
filter_keys = ["asset", "metadata"]
filter_keys = ["asset", TARANT_TABLE_META_DATA]
if type(dictionary) is dict or type(dictionary) is HDict:
keys = list(dictionary.keys())
_map = {}
for key in keys:
_map[key] = _save_keys_order(dictionary=dictionary[key]) if key not in filter_keys else None
return _map
elif type(dictionary) is list:
_maps = []
@@ -29,21 +34,20 @@ def _save_keys_order(dictionary):
_map[key] = _save_keys_order(dictionary=_item[key]) if key not in filter_keys else None
_maps.append(_map)
return _maps
else:
return None
return None
class TransactionDecompose:
def __init__(self, _transaction):
self._transaction = _transaction
self._tuple_transaction = {
"transactions": (),
"inputs": [],
"outputs": [],
"keys": [],
"script": None,
"metadata": None,
"assets": None,
TARANT_TABLE_TRANSACTION: (),
TARANT_TABLE_INPUT: [],
TARANT_TABLE_OUTPUT: [],
TARANT_TABLE_KEYS: [],
TARANT_TABLE_SCRIPT: None,
TARANT_TABLE_META_DATA: None,
TARANT_TABLE_ASSETS: None,
}
def get_map(self, dictionary: dict = None):
@@ -54,173 +58,32 @@ class TransactionDecompose:
else _save_keys_order(dictionary=self._transaction)
)
def __create_hash(self, n: int):
return token_hex(n)
def _metadata_check(self):
metadata = self._transaction.get("metadata")
if metadata is None:
return
self._tuple_transaction["metadata"] = (self._transaction["id"], json.dumps(metadata))
def __asset_check(self):
_asset = self._transaction.get("assets")
if _asset is None:
return
asset_id = _asset[0]["id"] if _asset[0].get("id") is not None else self._transaction["id"]
self._tuple_transaction["assets"] = (json.dumps(_asset), self._transaction["id"], asset_id)
def __prepare_inputs(self):
_inputs = []
input_index = 0
for _input in self._transaction["inputs"]:
_inputs.append(
(
self._transaction["id"],
_input["fulfillment"],
_input["owners_before"],
_input["fulfills"]["transaction_id"] if _input["fulfills"] is not None else "",
str(_input["fulfills"]["output_index"]) if _input["fulfills"] is not None else "",
self.__create_hash(7),
input_index,
)
)
input_index = input_index + 1
return _inputs
def __prepare_outputs(self):
_outputs = []
_keys = []
output_index = 0
for _output in self._transaction["outputs"]:
output_id = self.__create_hash(7)
if _output["condition"]["details"].get("subconditions") is None:
tmp_output = (
self._transaction["id"],
_output["amount"],
_output["condition"]["uri"],
_output["condition"]["details"]["type"],
_output["condition"]["details"]["public_key"],
output_id,
None,
None,
output_index,
)
else:
tmp_output = (
self._transaction["id"],
_output["amount"],
_output["condition"]["uri"],
_output["condition"]["details"]["type"],
None,
output_id,
_output["condition"]["details"]["threshold"],
_output["condition"]["details"]["subconditions"],
output_index,
)
_outputs.append(tmp_output)
output_index = output_index + 1
key_index = 0
for _key in _output["public_keys"]:
key_id = self.__create_hash(7)
_keys.append((key_id, self._transaction["id"], output_id, _key, key_index))
key_index = key_index + 1
return _keys, _outputs
def __prepare_transaction(self):
_map = self.get_map()
return (self._transaction["id"], self._transaction["operation"], self._transaction["version"], _map)
def __prepare_script(self):
try:
return (self._transaction["id"], self._transaction["script"])
except KeyError:
return None
def convert_to_tuple(self):
self._metadata_check()
self.__asset_check()
self._tuple_transaction["transactions"] = self.__prepare_transaction()
self._tuple_transaction["inputs"] = self.__prepare_inputs()
keys, outputs = self.__prepare_outputs()
self._tuple_transaction["outputs"] = outputs
self._tuple_transaction["keys"] = keys
self._tuple_transaction["script"] = self.__prepare_script()
self._tuple_transaction[TARANT_TABLE_TRANSACTION] = self.__prepare_transaction()
return self._tuple_transaction
class TransactionCompose:
def __init__(self, db_results):
self.db_results = db_results
self._map = self.db_results["transaction"][3]
self._map = self.db_results[TARANT_TABLE_TRANSACTION][3]
def _get_transaction_operation(self):
return self.db_results["transaction"][1]
return self.db_results[TARANT_TABLE_TRANSACTION][1]
def _get_transaction_version(self):
return self.db_results["transaction"][2]
return self.db_results[TARANT_TABLE_TRANSACTION][2]
def _get_transaction_id(self):
return self.db_results["transaction"][0]
def _get_asset(self):
_asset = iter(self.db_results["assets"])
_res_asset = next(iter(next(_asset, iter([]))), None)
return json.loads(_res_asset)
def _get_metadata(self):
return json.loads(self.db_results["metadata"][0][1]) if len(self.db_results["metadata"]) == 1 else None
def _get_inputs(self):
_inputs = []
for _input in self.db_results["inputs"]:
_in = copy.deepcopy(self._map["inputs"][_input[-1]])
_in["fulfillment"] = _input[1]
if _in["fulfills"] is not None:
_in["fulfills"]["transaction_id"] = _input[3]
_in["fulfills"]["output_index"] = int(_input[4])
_in["owners_before"] = _input[2]
_inputs.append(_in)
return _inputs
def _get_outputs(self):
_outputs = []
for _output in self.db_results["outputs"]:
_out = copy.deepcopy(self._map["outputs"][_output[-1]])
_out["amount"] = _output[1]
_tmp_keys = [(_key[3], _key[4]) for _key in self.db_results["keys"] if _key[2] == _output[5]]
_sorted_keys = sorted(_tmp_keys, key=lambda tup: (tup[1]))
_out["public_keys"] = [_key[0] for _key in _sorted_keys]
_out["condition"]["uri"] = _output[2]
if _output[7] is None:
_out["condition"]["details"]["type"] = _output[3]
_out["condition"]["details"]["public_key"] = _output[4]
else:
_out["condition"]["details"]["subconditions"] = _output[7]
_out["condition"]["details"]["type"] = _output[3]
_out["condition"]["details"]["threshold"] = _output[6]
_outputs.append(_out)
return _outputs
def _get_script(self):
if self.db_results["script"]:
return self.db_results["script"][0][1]
else:
return None
return self.db_results[TARANT_TABLE_TRANSACTION][0]
def convert_to_dict(self):
transaction = {k: None for k in list(self._map.keys())}
transaction["id"] = self._get_transaction_id()
transaction["assets"] = self._get_asset()
transaction["metadata"] = self._get_metadata()
transaction["version"] = self._get_transaction_version()
transaction["operation"] = self._get_transaction_operation()
transaction["inputs"] = self._get_inputs()
transaction["outputs"] = self._get_outputs()
if self._get_script():
transaction["script"] = self._get_script()
return transaction


@@ -21,6 +21,7 @@ from transactions.common.exceptions import DatabaseDoesNotExist, ValidationError
from transactions.types.elections.vote import Vote
from transactions.types.elections.chain_migration_election import ChainMigrationElection
from transactions.types.elections.validator_utils import election_id_to_public_key
from transactions.common.transaction import Transaction
from planetmint import ValidatorElection, Planetmint
from planetmint.backend import schema
from planetmint.commands import utils
@@ -31,6 +32,8 @@ from planetmint.commands.election_types import elections
from planetmint.version import __tm_supported_versions__
from planetmint.config import Config
from planetmint.backend.tarantool.const import TARANT_TABLE_GOVERNANCE
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
@@ -201,7 +204,8 @@ def run_election_approve(args, planet):
logger.error("The key you provided does not match any of the eligible voters in this election.")
return False
inputs = [i for i in tx.to_inputs() if key.public_key in i.owners_before]
tx_converted = Transaction.from_dict(tx.to_dict(), True)
inputs = [i for i in tx_converted.to_inputs() if key.public_key in i.owners_before]
election_pub_key = election_id_to_public_key(tx.id)
approval = Vote.generate(inputs, [([election_pub_key], voting_power)], [tx.id]).sign([key.private_key])
planet.validate_transaction(approval)
@@ -240,7 +244,7 @@ def run_election_show(args, planet):
def _run_init():
bdb = planetmint.Planetmint()
schema.init_database(connection=bdb.connection)
schema.init_database(bdb.connection)
@configure_planetmint

planetmint/const.py (new file)

@@ -0,0 +1,10 @@
# Copyright © 2020 Interplanetary Database Association e.V.,
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
CHAIN_MIGRATION_ELECTION = "CHAIN_MIGRATION_ELECTION"
VALIDATOR_ELECTION = "VALIDATOR_ELECTION"
VOTE = "VOTE"
GOVERNANCE_TRANSACTION_TYPES = [CHAIN_MIGRATION_ELECTION, VALIDATOR_ELECTION, VOTE]
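Note: these constants back the routing in lib.py's store_bulk_transactions further below, which sends governance transactions to their own space. A minimal sketch of that routing; the literal table names stand in for the TARANT_TABLE_* constants:

    from planetmint.const import GOVERNANCE_TRANSACTION_TYPES

    def pick_table(tx_dict: dict) -> str:
        # elections, votes and chain migrations live in the governance space
        return "governance" if tx_dict["operation"] in GOVERNANCE_TRANSACTION_TYPES else "transactions"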


@@ -203,6 +203,8 @@ class App(BaseApplication):
block_txn_hash = calculate_hash(self.block_txn_ids)
block = self.planetmint_node.get_latest_block()
logger.debug("BLOCK: ", block)
if self.block_txn_ids:
self.block_txn_hash = calculate_hash([block["app_hash"], block_txn_hash])
else:


@@ -16,12 +16,12 @@ class FastQuery:
def get_outputs_by_public_key(self, public_key):
"""Get outputs for a public key"""
txs = list(query.get_owned_ids(self.connection, public_key))
txs = query.get_owned_ids(self.connection, public_key)
return [
TransactionLink(tx["id"], index)
TransactionLink(tx.id, index)
for tx in txs
for index, output in enumerate(tx["outputs"])
if condition_details_has_owner(output["condition"]["details"], public_key)
for index, output in enumerate(tx.outputs)
if condition_details_has_owner(output.condition.details, public_key)
]
def filter_spent_outputs(self, outputs):
@@ -31,8 +31,8 @@
outputs: list of TransactionLink
"""
links = [o.to_dict() for o in outputs]
txs = list(query.get_spending_transactions(self.connection, links))
spends = {TransactionLink.from_dict(input_["fulfills"]) for tx in txs for input_ in tx["inputs"]}
txs = query.get_spending_transactions(self.connection, links)
spends = {TransactionLink.from_dict(input.fulfills.to_dict()) for tx in txs for input in tx.inputs}
return [ff for ff in outputs if ff not in spends]
def filter_unspent_outputs(self, outputs):
@@ -42,6 +42,6 @@
outputs: list of TransactionLink
"""
links = [o.to_dict() for o in outputs]
txs = list(query.get_spending_transactions(self.connection, links))
spends = {TransactionLink.from_dict(input_["fulfills"]) for tx in txs for input_ in tx["inputs"]}
txs = query.get_spending_transactions(self.connection, links)
spends = {TransactionLink.from_dict(input.fulfills.to_dict()) for tx in txs for input in tx.inputs}
return [ff for ff in outputs if ff in spends]


@@ -8,17 +8,13 @@ MongoDB.
"""
import logging
from collections import namedtuple
from uuid import uuid4
from planetmint.backend.connection import Connection
import rapidjson
from hashlib import sha3_256
import json
import rapidjson
import requests
import planetmint
from itertools import chain
from collections import namedtuple, OrderedDict
from uuid import uuid4
from hashlib import sha3_256
@@ -39,9 +35,20 @@ from transactions.common.exceptions import (
InvalidPowerChange,
)
from transactions.common.transaction import VALIDATOR_ELECTION, CHAIN_MIGRATION_ELECTION
from transactions.common.transaction_mode_types import BROADCAST_TX_COMMIT, BROADCAST_TX_ASYNC, BROADCAST_TX_SYNC
from transactions.common.transaction_mode_types import (
BROADCAST_TX_COMMIT,
BROADCAST_TX_ASYNC,
BROADCAST_TX_SYNC,
)
from transactions.common.output import Output as TransactionOutput
from transactions.types.elections.election import Election
from transactions.types.elections.validator_utils import election_id_to_public_key
from planetmint.backend.models import Output, DbTransaction
from planetmint.backend.tarantool.const import (
TARANT_TABLE_GOVERNANCE,
TARANT_TABLE_TRANSACTION,
)
from planetmint.config import Config
from planetmint import backend, config_utils, fastquery
from planetmint.tendermint_utils import (
@@ -54,6 +61,8 @@ from planetmint.tendermint_utils import (
)
from planetmint import exceptions as core_exceptions
from planetmint.validation import BaseValidationRules
from planetmint.backend.interfaces import Asset, MetaData
from planetmint.const import GOVERNANCE_TRANSACTION_TYPES
logger = logging.getLogger(__name__)
@@ -101,7 +110,12 @@ class Planetmint(object):
raise ValidationError("Mode must be one of the following {}.".format(", ".join(self.mode_list)))
tx_dict = transaction.tx_dict if transaction.tx_dict else transaction.to_dict()
payload = {"method": mode, "jsonrpc": "2.0", "params": [encode_transaction(tx_dict)], "id": str(uuid4())}
payload = {
"method": mode,
"jsonrpc": "2.0",
"params": [encode_transaction(tx_dict)],
"id": str(uuid4()),
}
# TODO: handle connection errors!
return requests.post(self.endpoint, json=payload)
@@ -140,36 +154,17 @@ class Planetmint(object):
def store_bulk_transactions(self, transactions):
txns = []
assets = []
txn_metadatas = []
gov_txns = []
for tx in transactions:
transaction = tx.tx_dict if tx.tx_dict else rapidjson.loads(rapidjson.dumps(tx.to_dict()))
for t in transactions:
transaction = t.tx_dict if t.tx_dict else rapidjson.loads(rapidjson.dumps(t.to_dict()))
if transaction["operation"] in GOVERNANCE_TRANSACTION_TYPES:
gov_txns.append(transaction)
else:
txns.append(transaction)
tx_assets = transaction.pop(Transaction.get_assets_tag(tx.version))
metadata = transaction.pop("metadata")
tx_assets = backend.convert.prepare_asset(
self.connection,
tx,
filter_operation=[
Transaction.CREATE,
Transaction.VALIDATOR_ELECTION,
Transaction.CHAIN_MIGRATION_ELECTION,
],
assets=tx_assets,
)
metadata = backend.convert.prepare_metadata(self.connection, tx, metadata=metadata)
txn_metadatas.append(metadata)
assets.append(tx_assets)
txns.append(transaction)
backend.query.store_metadatas(self.connection, txn_metadatas)
if assets:
backend.query.store_assets(self.connection, assets)
return backend.query.store_transactions(self.connection, txns)
backend.query.store_transactions(self.connection, txns, TARANT_TABLE_TRANSACTION)
backend.query.store_transactions(self.connection, gov_txns, TARANT_TABLE_GOVERNANCE)
def delete_transactions(self, txs):
return backend.query.delete_transactions(self.connection, txs)
@@ -251,39 +246,24 @@ class Planetmint(object):
return backend.query.delete_unspent_outputs(self.connection, *unspent_outputs)
def is_committed(self, transaction_id):
transaction = backend.query.get_transaction(self.connection, transaction_id)
transaction = backend.query.get_transaction_single(self.connection, transaction_id)
return bool(transaction)
def get_transaction(self, transaction_id):
transaction = backend.query.get_transaction(self.connection, transaction_id)
if transaction:
assets = backend.query.get_assets(self.connection, [transaction_id])
metadata = backend.query.get_metadata(self.connection, [transaction_id])
# NOTE: assets must not be replaced for transfer transactions
# NOTE: assets should be appended for all txs that define new assets otherwise the ids are already stored in tx
if transaction["operation"] != "TRANSFER" and transaction["operation"] != "VOTE" and assets:
transaction["assets"] = assets[0][0]
if "metadata" not in transaction:
metadata = metadata[0] if metadata else None
if metadata:
metadata = metadata.get("metadata")
transaction.update({"metadata": metadata})
transaction = Transaction.from_dict(transaction, False)
return transaction
return backend.query.get_transaction_single(self.connection, transaction_id)
def get_transactions(self, txn_ids):
return backend.query.get_transactions(self.connection, txn_ids)
def get_transactions_filtered(self, asset_ids, operation=None, last_tx=None):
def get_transactions_filtered(self, asset_ids, operation=None, last_tx=False):
"""Get a list of transactions filtered on some criteria"""
txids = backend.query.get_txids_filtered(self.connection, asset_ids, operation, last_tx)
for txid in txids:
yield self.get_transaction(txid)
def get_outputs_by_tx_id(self, txid):
return backend.query.get_outputs_by_tx_id(self.connection, txid)
def get_outputs_filtered(self, owner, spent=None):
"""Get a list of output links filtered on some criteria
@ -307,11 +287,6 @@ class Planetmint(object):
def get_spent(self, txid, output, current_transactions=[]):
transactions = backend.query.get_spent(self.connection, txid, output)
transactions = list(transactions) if transactions else []
if len(transactions) > 1:
raise core_exceptions.CriticalDoubleSpend(
"`{}` was spent more than once. There is a problem" " with the chain".format(txid)
)
current_spent_transactions = []
for ctxn in current_transactions:
@ -323,8 +298,9 @@ class Planetmint(object):
if len(transactions) + len(current_spent_transactions) > 1:
raise DoubleSpend('tx "{}" spends inputs twice'.format(txid))
elif transactions:
transaction = backend.query.get_transaction(self.connection, transactions[0]["id"])
transaction = Transaction.from_dict(transaction, False)
tx_id = transactions[0].id
tx = backend.query.get_transaction_single(self.connection, tx_id)
transaction = tx.to_dict()
elif current_spent_transactions:
transaction = current_spent_transactions[0]
@ -361,7 +337,7 @@ class Planetmint(object):
if block:
transactions = backend.query.get_transactions(self.connection, block["transactions"])
result["transactions"] = [t.to_dict() for t in self.tx_from_db(transactions)]
result["transactions"] = [Transaction.from_dict(t.to_dict()).to_dict() for t in transactions]
return result
@ -379,19 +355,17 @@ class Planetmint(object):
if len(blocks) > 1:
logger.critical("Transaction id %s exists in multiple blocks", txid)
return [block["height"] for block in blocks]
return blocks
def validate_transaction(self, tx, current_transactions=[]):
def validate_transaction(self, transaction, current_transactions=[]):
"""Validate a transaction against the current status of the database."""
transaction = tx
# CLEANUP: The conditional below checks for transaction in dict format.
# It would be better to only have a single format for the transaction
# throughout the code base.
if isinstance(transaction, dict):
try:
transaction = Transaction.from_dict(tx, False)
transaction = Transaction.from_dict(transaction, False)
except SchemaValidationError as e:
logger.warning("Invalid transaction schema: %s", e.__cause__.message)
return False
@ -412,13 +386,20 @@ class Planetmint(object):
# store the inputs so that we can check if the asset ids match
input_txs = []
input_conditions = []
for input_ in tx.inputs:
input_txid = input_.fulfills.txid
input_tx = self.get_transaction(input_txid)
_output = self.get_outputs_by_tx_id(input_txid)
if input_tx is None:
for ctxn in current_transactions:
if ctxn.id == input_txid:
input_tx = ctxn
ctxn_dict = ctxn.to_dict()
input_tx = DbTransaction.from_dict(ctxn_dict)
_output = [
Output.from_dict(output, index, ctxn.id)
for index, output in enumerate(ctxn_dict["outputs"])
]
if input_tx is None:
raise InputDoesNotExist("input `{}` doesn't exist".format(input_txid))
@ -427,9 +408,13 @@ class Planetmint(object):
if spent:
raise DoubleSpend("input `{}` was already spent".format(input_txid))
output = input_tx.outputs[input_.fulfills.output]
output = _output[input_.fulfills.output]
input_conditions.append(output)
input_txs.append(input_tx)
tx_dict = input_tx.to_dict()
tx_dict["outputs"] = Output.list_to_dict(_output)
tx_dict = DbTransaction.remove_generated_fields(tx_dict)
pm_transaction = Transaction.from_dict(tx_dict, False)
input_txs.append(pm_transaction)
# Validate that all inputs are distinct
links = [i.fulfills.to_uri() for i in tx.inputs]
@ -441,7 +426,13 @@ class Planetmint(object):
if asset_id != Transaction.read_out_asset_id(tx):
raise AssetIdMismatch(("The asset id of the input does not" " match the asset id of the" " transaction"))
if not tx.inputs_valid(input_conditions):
# convert planetmint.Output objects to transactions.common.Output objects
input_conditions_dict = Output.list_to_dict(input_conditions)
input_conditions_converted = []
for input_cond in input_conditions_dict:
input_conditions_converted.append(TransactionOutput.from_dict(input_cond))
if not tx.inputs_valid(input_conditions_converted):
raise InvalidSignature("Transaction signature is invalid.")
input_amount = sum([input_condition.amount for input_condition in input_conditions])
@ -477,7 +468,7 @@ class Planetmint(object):
"""
return backend.query.text_search(self.connection, search, limit=limit, table=table)
def get_assets(self, asset_ids):
def get_assets(self, asset_ids) -> list[Asset]:
"""Return a list of assets that match the asset_ids
Args:
@ -489,7 +480,12 @@ class Planetmint(object):
"""
return backend.query.get_assets(self.connection, asset_ids)
def get_metadata(self, txn_ids):
def get_assets_by_cid(self, asset_cid, **kwargs) -> list[dict]:
asset_txs = backend.query.get_transactions_by_asset(self.connection, asset_cid, **kwargs)
# flatten and return all found assets
return list(chain.from_iterable([Asset.list_to_dict(tx.assets) for tx in asset_txs]))
def get_metadata(self, txn_ids) -> list[MetaData]:
"""Return a list of metadata that match the transaction ids (txn_ids)
Args:
@ -501,6 +497,10 @@ class Planetmint(object):
"""
return backend.query.get_metadata(self.connection, txn_ids)
def get_metadata_by_cid(self, metadata_cid, **kwargs) -> list[str]:
metadata_txs = backend.query.get_transactions_by_metadata(self.connection, metadata_cid, **kwargs)
return [tx.metadata.metadata for tx in metadata_txs]
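A hedged usage sketch of the two new CID lookups; b stands for an existing Planetmint instance, the CID value is taken from a test elsewhere in this diff, and limit is the keyword the web views forward:
# b is assumed to be a connected Planetmint instance.
cid = "QmaozNR7DZHQK1ZcU9p7QdrshMvXqWK6gpu5rmrkPdT3L4"
assets = b.get_assets_by_cid(cid, limit=10)      # -> list[dict]
metadata = b.get_metadata_by_cid(cid, limit=10)  # -> list[str]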
@property
def fastquery(self):
return fastquery.FastQuery(self.connection)
@ -573,54 +573,6 @@ class Planetmint(object):
def delete_elections(self, height):
return backend.query.delete_elections(self.connection, height)
def tx_from_db(self, tx_dict_list):
"""Helper method that reconstructs a transaction dict that was returned
from the database. It checks what asset_id to retrieve, retrieves the
asset from the asset table and reconstructs the transaction.
Args:
tx_dict_list (:obj:`list` of :dict: or :obj:`dict`): The transaction dict or
list of transaction dict as returned from the database.
Returns:
:class:`~Transaction`
"""
return_list = True
if isinstance(tx_dict_list, dict):
tx_dict_list = [tx_dict_list]
return_list = False
tx_map = {}
tx_ids = []
for tx in tx_dict_list:
tx.update({"metadata": None})
tx_map[tx["id"]] = tx
tx_ids.append(tx["id"])
assets = list(self.get_assets(tx_ids))
for asset in assets:
if asset is not None:
# This is tarantool-specific behaviour that needs to be addressed
tx = tx_map[asset[1]]
tx["asset"] = asset[0]
tx_ids = list(tx_map.keys())
metadata_list = list(self.get_metadata(tx_ids))
for metadata in metadata_list:
if "id" in metadata:
tx = tx_map[metadata["id"]]
tx.update({"metadata": metadata.get("metadata")})
if return_list:
tx_list = []
for tx_id, tx in tx_map.items():
tx_list.append(Transaction.from_dict(tx))
return tx_list
else:
tx = list(tx_map.values())[0]
return Transaction.from_dict(tx)
# NOTE: moved here from Election needs to be placed somewhere else
def get_validators_dict(self, height=None):
"""Return a dictionary of validators with key as `public_key` and
@ -740,7 +692,9 @@ class Planetmint(object):
return recipients
def show_election_status(self, transaction):
data = transaction.assets[0]["data"]
data = transaction.assets[0]
data = data.to_dict()["data"]
if "public_key" in data.keys():
data["public_key"] = public_key_to_base64(data["public_key"]["value"])
response = ""
@ -789,23 +743,23 @@ class Planetmint(object):
# validators and their voting power in the network
return current_topology == voters
def count_votes(self, election_pk, transactions, getter=getattr):
def count_votes(self, election_pk, transactions):
votes = 0
for txn in transactions:
if getter(txn, "operation") == Vote.OPERATION:
for output in getter(txn, "outputs"):
if txn.operation == Vote.OPERATION:
for output in txn.outputs:
# NOTE: We enforce that a valid vote to election id will have only
# election_pk in the output public keys, including any other public key
# along with election_pk will lead to vote being not considered valid.
if len(getter(output, "public_keys")) == 1 and [election_pk] == getter(output, "public_keys"):
votes = votes + int(getter(output, "amount"))
if len(output.public_keys) == 1 and [election_pk] == output.public_keys:
votes = votes + output.amount
return votes
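To make the rule in the NOTE concrete, here is a self-contained sketch with stand-in objects, assuming Vote.OPERATION is the string "VOTE" as used elsewhere in this diff: an output only counts when its public_keys list is exactly [election_pk].
# Stand-in stubs, not the real transaction models.
from collections import namedtuple
TxStub = namedtuple("TxStub", ("operation", "outputs"))
OutStub = namedtuple("OutStub", ("public_keys", "amount"))
election_pk = "ELECTION_PK"  # hypothetical election public key
txns = [
    TxStub("VOTE", [OutStub([election_pk], 3)]),           # counts: 3 votes
    TxStub("VOTE", [OutStub([election_pk, "other"], 5)]),  # ignored: extra key
    TxStub("CREATE", [OutStub([election_pk], 7)]),         # ignored: not a vote
]
votes = sum(
    int(out.amount)
    for tx in txns if tx.operation == "VOTE"
    for out in tx.outputs
    if len(out.public_keys) == 1 and out.public_keys == [election_pk]
)
assert votes == 3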
def get_commited_votes(self, transaction, election_pk=None): # TODO: move somewhere else
if election_pk is None:
election_pk = election_id_to_public_key(transaction.id)
txns = list(backend.query.get_asset_tokens_for_public_key(self.connection, transaction.id, election_pk))
return self.count_votes(election_pk, txns, dict.get)
txns = backend.query.get_asset_tokens_for_public_key(self.connection, transaction.id, election_pk)
return self.count_votes(election_pk, txns)
def _get_initiated_elections(self, height, txns): # TODO: move somewhere else
elections = []
@ -898,7 +852,7 @@ class Planetmint(object):
votes_committed = self.get_commited_votes(transaction, election_pk)
votes_current = self.count_votes(election_pk, current_votes)
total_votes = sum(output.amount for output in transaction.outputs)
total_votes = sum(int(output.amount) for output in transaction.outputs)
if (votes_committed < (2 / 3) * total_votes) and (votes_committed + votes_current >= (2 / 3) * total_votes):
return True
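A quick numeric check of the supermajority condition: with 9 election tokens in total, 5 previously committed votes, and 2 votes in the current block, the 2/3 threshold of 6 is crossed now but was not crossed before, so the election concludes in this block.
total_votes = 9
votes_committed, votes_current = 5, 2
threshold = (2 / 3) * total_votes  # 6.0
# True: the threshold is reached in this block and not earlier.
assert votes_committed < threshold <= votes_committed + votes_current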
@ -939,6 +893,8 @@ class Planetmint(object):
txns = [self.get_transaction(tx_id) for tx_id in txn_ids]
txns = [Transaction.from_dict(tx.to_dict()) for tx in txns]
elections = self._get_votes(txns)
for election_id in elections:
election = self.get_transaction(election_id)
@ -956,7 +912,7 @@ class Planetmint(object):
if election.operation == CHAIN_MIGRATION_ELECTION:
self.migrate_abci_chain()
if election.operation == VALIDATOR_ELECTION:
validator_updates = [election.assets[0]["data"]]
validator_updates = [election.assets[0].data]
curr_validator_set = self.get_validators(new_height)
updated_validator_set = new_validator_set(curr_validator_set, validator_updates)
@ -964,7 +920,7 @@ class Planetmint(object):
# TODO change to `new_height + 2` when upgrading to Tendermint 0.24.0.
self.store_validator_set(new_height + 1, updated_validator_set)
return encode_validator(election.assets[0]["data"])
return encode_validator(election.assets[0].data)
Block = namedtuple("Block", ("app_hash", "height", "transactions"))

View File

@ -13,6 +13,7 @@ import setproctitle
from packaging import version
from planetmint.version import __tm_supported_versions__
from planetmint.tendermint_utils import key_from_base64
from planetmint.backend.models.output import ConditionDetails
from transactions.common.crypto import key_pair_from_ed25519_key
@ -120,18 +121,17 @@ def condition_details_has_owner(condition_details, owner):
bool: True if the public key is found in the condition details, False otherwise
"""
if "subconditions" in condition_details:
result = condition_details_has_owner(condition_details["subconditions"], owner)
if isinstance(condition_details, ConditionDetails) and condition_details.sub_conditions is not None:
result = condition_details_has_owner(condition_details.sub_conditions, owner)
if result:
return True
elif isinstance(condition_details, list):
for subcondition in condition_details:
result = condition_details_has_owner(subcondition, owner)
if result:
return True
else:
if "public_key" in condition_details and owner == condition_details["public_key"]:
if condition_details.public_key is not None and owner == condition_details.public_key:
return True
return False
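The recursion above alternates between a details object and a list of subconditions. A self-contained sketch of the same shape, using a hypothetical stand-in dataclass instead of the real ConditionDetails model:
# "Details" is a stand-in for planetmint.backend.models.output.ConditionDetails.
from dataclasses import dataclass
from typing import List, Optional
@dataclass
class Details:
    public_key: Optional[str] = None
    sub_conditions: Optional[List["Details"]] = None
def has_owner(details, owner):
    if isinstance(details, Details) and details.sub_conditions is not None:
        return any(has_owner(sub, owner) for sub in details.sub_conditions)
    if isinstance(details, list):
        return any(has_owner(sub, owner) for sub in details)
    return details.public_key is not None and details.public_key == owner
threshold = Details(sub_conditions=[Details(public_key="alice"), Details(public_key="bob")])
assert has_owner(threshold, "bob") and not has_owner(threshold, "carol")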

View File

@ -3,8 +3,8 @@
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
__version__ = "1.4.1"
__short_version__ = "1.4"
__version__ = "2.0.0"
__short_version__ = "2.0"
# Supported Tendermint versions
__tm_supported_versions__ = ["0.34.15"]

View File

@ -31,8 +31,8 @@ def r(*args, **kwargs):
ROUTES_API_V1 = [
r("/", info.ApiV1Index),
r("assets/", assets.AssetListApi),
r("metadata/", metadata.MetadataApi),
r("assets/<string:cid>", assets.AssetListApi),
r("metadata/<string:cid>", metadata.MetadataApi),
r("blocks/<int:block_id>", blocks.BlockApi),
r("blocks/latest", blocks.LatestBlock),
r("blocks/", blocks.BlockListApi),

View File

@ -9,7 +9,7 @@ For more information please refer to the documentation: http://planetmint.io/htt
"""
import logging
from flask_restful import reqparse, Resource
from flask_restful import Resource, reqparse
from flask import current_app
from planetmint.backend.exceptions import OperationError
from planetmint.web.views.base import make_error
@ -18,34 +18,21 @@ logger = logging.getLogger(__name__)
class AssetListApi(Resource):
def get(self):
"""API endpoint to perform a text search on the assets.
Args:
search (str): Text search string to query the text index
limit (int, optional): Limit the number of returned documents.
Return:
A list of assets that match the query.
"""
def get(self, cid: str):
parser = reqparse.RequestParser()
parser.add_argument("search", type=str, required=True)
parser.add_argument("limit", type=int)
args = parser.parse_args()
if not args["search"]:
return make_error(400, "text_search cannot be empty")
if not args["limit"]:
# if the limit is not specified do not pass None to `text_search`
del args["limit"]
pool = current_app.config["bigchain_pool"]
with pool() as planet:
assets = planet.text_search(**args)
assets = planet.get_assets_by_cid(cid, **args)
try:
# This only works with MongoDB as the backend
return list(assets)
return assets
except OperationError as e:
return make_error(400, "({}): {}".format(type(e).__name__, e))

View File

@ -18,33 +18,28 @@ logger = logging.getLogger(__name__)
class MetadataApi(Resource):
def get(self):
def get(self, cid):
"""API endpoint to perform a text search on transaction metadata.
Args:
search (str): Text search string to query the text index
limit (int, optional): Limit the number of returned documents.
Return:
A list of metadata that match the query.
"""
parser = reqparse.RequestParser()
parser.add_argument("search", type=str, required=True)
parser.add_argument("limit", type=int)
args = parser.parse_args()
if not args["search"]:
return make_error(400, "text_search cannot be empty")
if not args["limit"]:
del args["limit"]
pool = current_app.config["bigchain_pool"]
with pool() as planet:
args["table"] = "meta_data"
metadata = planet.text_search(**args)
metadata = planet.get_metadata_by_cid(cid, **args)
try:
return list(metadata)
return metadata
except OperationError as e:
return make_error(400, "({}): {}".format(type(e).__name__, e))

View File

@ -1,7 +1,7 @@
[pytest]
testpaths = tests/
norecursedirs = .* *.egg *.egg-info env* devenv* docs
addopts = -m "not abci"
#addopts = -m "not abci"
looponfailroots = planetmint tests
asyncio_mode = strict
markers =

View File

@ -9,7 +9,7 @@ from transactions.types.assets.create import Create
from transactions.types.assets.transfer import Transfer
def test_asset_transfer(b, signed_create_tx, user_pk, user_sk):
def test_asset_transfer(b, signed_create_tx, user_pk, user_sk, _bdb):
tx_transfer = Transfer.generate(signed_create_tx.to_inputs(), [([user_pk], 1)], [signed_create_tx.id])
tx_transfer_signed = tx_transfer.sign([user_sk])
@ -24,7 +24,7 @@ def test_asset_transfer(b, signed_create_tx, user_pk, user_sk):
# from planetmint.transactions.common.exceptions import AssetIdMismatch
def test_validate_transfer_asset_id_mismatch(b, signed_create_tx, user_pk, user_sk):
def test_validate_transfer_asset_id_mismatch(b, signed_create_tx, user_pk, user_sk, _bdb):
from transactions.common.exceptions import AssetIdMismatch
tx_transfer = Transfer.generate(signed_create_tx.to_inputs(), [([user_pk], 1)], [signed_create_tx.id])
@ -69,22 +69,22 @@ def test_asset_id_mismatch(alice, user_pk):
Transaction.get_asset_id([tx1, tx2])
def test_create_valid_divisible_asset(b, user_pk, user_sk):
def test_create_valid_divisible_asset(b, user_pk, user_sk, _bdb):
tx = Create.generate([user_pk], [([user_pk], 2)])
tx_signed = tx.sign([user_sk])
assert b.validate_transaction(tx_signed) == tx_signed
def test_v_2_0_validation_create(b, signed_2_0_create_tx):
def test_v_2_0_validation_create(b, signed_2_0_create_tx, _bdb):
validated = b.validate_transaction(signed_2_0_create_tx)
assert validated.to_dict() == signed_2_0_create_tx
def test_v_2_0_validation_create_invalid(b, signed_2_0_create_tx_assets):
def test_v_2_0_validation_create_invalid(b, signed_2_0_create_tx_assets, _bdb):
assert b.validate_transaction(signed_2_0_create_tx_assets)
def test_v_2_0_validation_transfer(b, signed_2_0_create_tx, signed_2_0_transfer_tx):
def test_v_2_0_validation_transfer(b, signed_2_0_create_tx, signed_2_0_transfer_tx, _bdb):
validated = b.validate_transaction(signed_2_0_create_tx)
b.store_bulk_transactions([validated])
assert validated.to_dict() == signed_2_0_create_tx

View File

@ -1,12 +0,0 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[packages]
pytest = "*"
[dev-packages]
[requires]
python_version = "3.8"

View File

@ -1,78 +0,0 @@
{
"_meta": {
"hash": {
"sha256": "97a0be44f6d5351e166a90d91c789c8100486c7cc30d922ef7f7e3541838acae"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.8"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
}
]
},
"default": {
"attrs": {
"hashes": [
"sha256:2d27e3784d7a565d36ab851fe94887c5eccd6a463168875832a1be79c82828b4",
"sha256:626ba8234211db98e869df76230a137c4c40a12d72445c45d5f5b716f076e2fd"
],
"version": "==21.4.0"
},
"iniconfig": {
"hashes": [
"sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3",
"sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"
],
"version": "==1.1.1"
},
"packaging": {
"hashes": [
"sha256:dd47c42927d89ab911e606518907cc2d3a1f38bbd026385970643f9c5b8ecfeb",
"sha256:ef103e05f519cdc783ae24ea4e2e0f508a9c99b2d4969652eed6a2e1ea5bd522"
],
"version": "==21.3"
},
"pluggy": {
"hashes": [
"sha256:4224373bacce55f955a878bf9cfa763c1e360858e330072059e10bad68531159",
"sha256:74134bbf457f031a36d68416e1509f34bd5ccc019f0bcc952c7b909d06b37bd3"
],
"version": "==1.0.0"
},
"py": {
"hashes": [
"sha256:51c75c4126074b472f746a24399ad32f6053d1b34b68d2fa41e558e6f4a98719",
"sha256:607c53218732647dff4acdfcd50cb62615cedf612e72d1724fb1a0cc6405b378"
],
"version": "==1.11.0"
},
"pyparsing": {
"hashes": [
"sha256:18ee9022775d270c55187733956460083db60b37d0d0fb357445f3094eed3eea",
"sha256:a6c06a88f252e6c322f65faf8f418b16213b51bdfaece0524c1c1bc30c63c484"
],
"version": "==3.0.7"
},
"pytest": {
"hashes": [
"sha256:9ce3ff477af913ecf6321fe337b93a2c0dcf2a0a1439c43f5452112c1e4280db",
"sha256:e30905a0c131d3d94b89624a1cc5afec3e0ba2fbdb151867d8e0ebd49850f171"
],
"index": "pypi",
"version": "==7.0.1"
},
"tomli": {
"hashes": [
"sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc",
"sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"
],
"version": "==2.0.1"
}
},
"develop": {}
}

View File

@ -1,31 +0,0 @@
import pytest
from planetmint.backend.connection import Connection
#
#
#
# @pytest.fixture
# def dummy_db(request):
# from planetmint.backend import Connection
#
# conn = Connection()
# dbname = request.fixturename
# xdist_suffix = getattr(request.config, 'slaveinput', {}).get('slaveid')
# if xdist_suffix:
# dbname = '{}_{}'.format(dbname, xdist_suffix)
#
# conn.drop_database()
# #_drop_db(conn, dbname) # make sure we start with a clean DB
# #schema.init_database(conn, dbname)
# conn.init_database()
# yield dbname
#
# conn.drop_database()
# #_drop_db(conn, dbname)
@pytest.fixture
def db_conn():
conn = Connection()
return conn

View File

@ -10,6 +10,8 @@ import json
from transactions.common.transaction import Transaction
from transactions.types.assets.create import Create
from transactions.types.assets.transfer import Transfer
from planetmint.backend.interfaces import Asset, MetaData
from planetmint.backend.models import DbTransaction
pytestmark = pytest.mark.bdb
@ -40,227 +42,16 @@ def test_get_txids_filtered(signed_create_tx, signed_transfer_tx, db_conn):
assert txids == {signed_transfer_tx.id}
def test_write_assets(db_conn):
from planetmint.backend.tarantool import query
assets = [
{"id": "1", "data": "1"},
{"id": "2", "data": "2"},
{"id": "3", "data": "3"},
# Duplicated id. Should not be written to the database
{"id": "1", "data": "1"},
]
# write the assets
for asset in assets:
query.store_asset(connection=db_conn, asset=asset)
# check that 3 assets were written to the database
documents = query.get_assets(assets_ids=[asset["id"] for asset in assets], connection=db_conn)
assert len(documents) == 3
assert list(documents)[0][0] == assets[:-1][0]
def test_get_assets(db_conn):
from planetmint.backend.tarantool import query
assets = [
("1", "1", "1"),
("2", "2", "2"),
("3", "3", "3"),
]
query.store_assets(assets=assets, connection=db_conn)
for asset in assets:
assert query.get_asset(asset_id=asset[2], connection=db_conn)
@pytest.mark.parametrize("table", ["assets", "metadata"])
def test_text_search(table):
assert "PASS FOR NOW"
# # Example data and tests cases taken from the mongodb documentation
# # https://docs.mongodb.com/manual/reference/operator/query/text/
# objects = [
# {'id': 1, 'subject': 'coffee', 'author': 'xyz', 'views': 50},
# {'id': 2, 'subject': 'Coffee Shopping', 'author': 'efg', 'views': 5},
# {'id': 3, 'subject': 'Baking a cake', 'author': 'abc', 'views': 90},
# {'id': 4, 'subject': 'baking', 'author': 'xyz', 'views': 100},
# {'id': 5, 'subject': 'Café Con Leche', 'author': 'abc', 'views': 200},
# {'id': 6, 'subject': 'Сырники', 'author': 'jkl', 'views': 80},
# {'id': 7, 'subject': 'coffee and cream', 'author': 'efg', 'views': 10},
# {'id': 8, 'subject': 'Cafe con Leche', 'author': 'xyz', 'views': 10}
# ]
#
# # insert the assets
# conn.db[table].insert_many(deepcopy(objects), ordered=False)
#
# # test search single word
# assert list(query.text_search(conn, 'coffee', table=table)) == [
# {'id': 1, 'subject': 'coffee', 'author': 'xyz', 'views': 50},
# {'id': 2, 'subject': 'Coffee Shopping', 'author': 'efg', 'views': 5},
# {'id': 7, 'subject': 'coffee and cream', 'author': 'efg', 'views': 10},
# ]
#
# # match any of the search terms
# assert list(query.text_search(conn, 'bake coffee cake', table=table)) == [
# {'author': 'abc', 'id': 3, 'subject': 'Baking a cake', 'views': 90},
# {'author': 'xyz', 'id': 1, 'subject': 'coffee', 'views': 50},
# {'author': 'xyz', 'id': 4, 'subject': 'baking', 'views': 100},
# {'author': 'efg', 'id': 2, 'subject': 'Coffee Shopping', 'views': 5},
# {'author': 'efg', 'id': 7, 'subject': 'coffee and cream', 'views': 10}
# ]
#
# # search for a phrase
# assert list(query.text_search(conn, '\"coffee shop\"', table=table)) == [
# {'id': 2, 'subject': 'Coffee Shopping', 'author': 'efg', 'views': 5},
# ]
#
# # exclude documents that contain a term
# assert list(query.text_search(conn, 'coffee -shop', table=table)) == [
# {'id': 1, 'subject': 'coffee', 'author': 'xyz', 'views': 50},
# {'id': 7, 'subject': 'coffee and cream', 'author': 'efg', 'views': 10},
# ]
#
# # search different language
# assert list(query.text_search(conn, 'leche', language='es', table=table)) == [
# {'id': 5, 'subject': 'Café Con Leche', 'author': 'abc', 'views': 200},
# {'id': 8, 'subject': 'Cafe con Leche', 'author': 'xyz', 'views': 10}
# ]
#
# # case and diacritic insensitive search
# assert list(query.text_search(conn, 'сы́рники CAFÉS', table=table)) == [
# {'id': 6, 'subject': 'Сырники', 'author': 'jkl', 'views': 80},
# {'id': 5, 'subject': 'Café Con Leche', 'author': 'abc', 'views': 200},
# {'id': 8, 'subject': 'Cafe con Leche', 'author': 'xyz', 'views': 10}
# ]
#
# # case sensitive search
# assert list(query.text_search(conn, 'Coffee', case_sensitive=True, table=table)) == [
# {'id': 2, 'subject': 'Coffee Shopping', 'author': 'efg', 'views': 5},
# ]
#
# # diacritic sensitive search
# assert list(query.text_search(conn, 'CAFÉ', diacritic_sensitive=True, table=table)) == [
# {'id': 5, 'subject': 'Café Con Leche', 'author': 'abc', 'views': 200},
# ]
#
# # return text score
# assert list(query.text_search(conn, 'coffee', text_score=True, table=table)) == [
# {'id': 1, 'subject': 'coffee', 'author': 'xyz', 'views': 50, 'score': 1.0},
# {'id': 2, 'subject': 'Coffee Shopping', 'author': 'efg', 'views': 5, 'score': 0.75},
# {'id': 7, 'subject': 'coffee and cream', 'author': 'efg', 'views': 10, 'score': 0.75},
# ]
#
# # limit search result
# assert list(query.text_search(conn, 'coffee', limit=2, table=table)) == [
# {'id': 1, 'subject': 'coffee', 'author': 'xyz', 'views': 50},
# {'id': 2, 'subject': 'Coffee Shopping', 'author': 'efg', 'views': 5},
# ]
def test_write_metadata(db_conn):
from planetmint.backend.tarantool import query
metadata = [{"id": "1", "data": "1"}, {"id": "2", "data": "2"}, {"id": "3", "data": "3"}]
# write the assets
query.store_metadatas(connection=db_conn, metadata=metadata)
# check that 3 assets were written to the database
metadatas = []
for meta in metadata:
_data = db_conn.run(db_conn.space("meta_data").select(meta["id"]))[0]
metadatas.append({"id": _data[0], "data": json.loads(_data[1])})
metadatas = sorted(metadatas, key=lambda k: k["id"])
assert len(metadatas) == 3
assert list(metadatas) == metadata
def test_get_metadata(db_conn):
from planetmint.backend.tarantool import query
metadata = [
{"id": "dd86682db39e4b424df0eec1413cfad65488fd48712097c5d865ca8e8e059b64", "metadata": None},
{"id": "55a2303e3bcd653e4b5bd7118d39c0e2d48ee2f18e22fbcf64e906439bdeb45d", "metadata": {"key": "value"}},
]
# conn.db.metadata.insert_many(deepcopy(metadata), ordered=False)
query.store_metadatas(connection=db_conn, metadata=metadata)
for meta in metadata:
_m = query.get_metadata(connection=db_conn, transaction_ids=[meta["id"]])
assert _m
def test_get_owned_ids(signed_create_tx, user_pk, db_conn):
from planetmint.backend.tarantool import query
# insert a transaction
query.store_transactions(connection=db_conn, signed_transactions=[signed_create_tx.to_dict()])
txns = list(query.get_owned_ids(connection=db_conn, owner=user_pk))
txns = query.get_owned_ids(connection=db_conn, owner=user_pk)
tx_dict = signed_create_tx.to_dict()
found = [tx for tx in txns if tx["id"] == tx_dict["id"]]
assert found[0] == tx_dict
def test_get_spending_transactions(user_pk, user_sk, db_conn):
from planetmint.backend.tarantool import query
out = [([user_pk], 1)]
tx1 = Create.generate([user_pk], out * 3)
tx1.sign([user_sk])
inputs = tx1.to_inputs()
tx2 = Transfer.generate([inputs[0]], out, [tx1.id]).sign([user_sk])
tx3 = Transfer.generate([inputs[1]], out, [tx1.id]).sign([user_sk])
tx4 = Transfer.generate([inputs[2]], out, [tx1.id]).sign([user_sk])
txns = [deepcopy(tx.to_dict()) for tx in [tx1, tx2, tx3, tx4]]
query.store_transactions(signed_transactions=txns, connection=db_conn)
links = [inputs[0].fulfills.to_dict(), inputs[2].fulfills.to_dict()]
txns = list(query.get_spending_transactions(connection=db_conn, inputs=links))
# tx3 is not included because input 1 was not queried
assert txns == [tx2.to_dict(), tx4.to_dict()]
def test_get_spending_transactions_multiple_inputs(db_conn):
from transactions.common.crypto import generate_key_pair
from planetmint.backend.tarantool import query
(alice_sk, alice_pk) = generate_key_pair()
(bob_sk, bob_pk) = generate_key_pair()
(carol_sk, carol_pk) = generate_key_pair()
out = [([alice_pk], 9)]
tx1 = Create.generate([alice_pk], out).sign([alice_sk])
inputs1 = tx1.to_inputs()
tx2 = Transfer.generate([inputs1[0]], [([alice_pk], 6), ([bob_pk], 3)], [tx1.id]).sign([alice_sk])
inputs2 = tx2.to_inputs()
tx3 = Transfer.generate([inputs2[0]], [([bob_pk], 3), ([carol_pk], 3)], [tx1.id]).sign([alice_sk])
inputs3 = tx3.to_inputs()
tx4 = Transfer.generate([inputs2[1], inputs3[0]], [([carol_pk], 6)], [tx1.id]).sign([bob_sk])
txns = [deepcopy(tx.to_dict()) for tx in [tx1, tx2, tx3, tx4]]
query.store_transactions(signed_transactions=txns, connection=db_conn)
links = [
({"transaction_id": tx2.id, "output_index": 0}, 1, [tx3.id]),
({"transaction_id": tx2.id, "output_index": 1}, 1, [tx4.id]),
({"transaction_id": tx3.id, "output_index": 0}, 1, [tx4.id]),
({"transaction_id": tx3.id, "output_index": 1}, 0, None),
]
for li, num, match in links:
txns = list(query.get_spending_transactions(connection=db_conn, inputs=[li]))
assert len(txns) == num
if len(txns):
assert [tx["id"] for tx in txns] == match
owned_tx = txns[0].to_dict()
assert owned_tx == tx_dict
def test_store_block(db_conn):
@ -286,104 +77,6 @@ def test_get_block(db_conn):
assert block["height"] == 3
# def test_delete_zero_unspent_outputs(db_context, utxoset):
# from planetmint.backend.tarantool import query
# return
#
# unspent_outputs, utxo_collection = utxoset
#
# delete_res = query.delete_unspent_outputs(db_context.conn)
#
# assert delete_res is None
# assert utxo_collection.count_documents({}) == 3
# assert utxo_collection.count_documents(
# {'$or': [
# {'transaction_id': 'a', 'output_index': 0},
# {'transaction_id': 'b', 'output_index': 0},
# {'transaction_id': 'a', 'output_index': 1},
# ]}
# ) == 3
#
#
# def test_delete_one_unspent_outputs(db_context, utxoset):
# return
# from planetmint.backend import query
# unspent_outputs, utxo_collection = utxoset
# delete_res = query.delete_unspent_outputs(db_context.conn,
# unspent_outputs[0])
# assert delete_res.raw_result['n'] == 1
# assert utxo_collection.count_documents(
# {'$or': [
# {'transaction_id': 'a', 'output_index': 1},
# {'transaction_id': 'b', 'output_index': 0},
# ]}
# ) == 2
# assert utxo_collection.count_documents(
# {'transaction_id': 'a', 'output_index': 0}) == 0
#
#
# def test_delete_many_unspent_outputs(db_context, utxoset):
# return
# from planetmint.backend import query
# unspent_outputs, utxo_collection = utxoset
# delete_res = query.delete_unspent_outputs(db_context.conn,
# *unspent_outputs[::2])
# assert delete_res.raw_result['n'] == 2
# assert utxo_collection.count_documents(
# {'$or': [
# {'transaction_id': 'a', 'output_index': 0},
# {'transaction_id': 'b', 'output_index': 0},
# ]}
# ) == 0
# assert utxo_collection.count_documents(
# {'transaction_id': 'a', 'output_index': 1}) == 1
#
#
# def test_store_zero_unspent_output(db_context, utxo_collection):
# return
# from planetmint.backend import query
# res = query.store_unspent_outputs(db_context.conn)
# assert res is None
# assert utxo_collection.count_documents({}) == 0
#
#
# def test_store_one_unspent_output(db_context,
# unspent_output_1, utxo_collection):
# return
# from planetmint.backend import query
# res = query.store_unspent_outputs(db_context.conn, unspent_output_1)
# assert res.acknowledged
# assert len(res.inserted_ids) == 1
# assert utxo_collection.count_documents(
# {'transaction_id': unspent_output_1['transaction_id'],
# 'output_index': unspent_output_1['output_index']}
# ) == 1
#
#
# def test_store_many_unspent_outputs(db_context,
# unspent_outputs, utxo_collection):
# return
# from planetmint.backend import query
# res = query.store_unspent_outputs(db_context.conn, *unspent_outputs)
# assert res.acknowledged
# assert len(res.inserted_ids) == 3
# assert utxo_collection.count_documents(
# {'transaction_id': unspent_outputs[0]['transaction_id']}
# ) == 3
#
#
# def test_get_unspent_outputs(db_context, utxoset):
# return
# from planetmint.backend import query
# cursor = query.get_unspent_outputs(db_context.conn)
# assert cursor.collection.count_documents({}) == 3
# retrieved_utxoset = list(cursor)
# unspent_outputs, utxo_collection = utxoset
# assert retrieved_utxoset == list(
# utxo_collection.find(projection={'_id': False}))
# assert retrieved_utxoset == unspent_outputs
def test_store_pre_commit_state(db_conn):
from planetmint.backend.tarantool import query

View File

@ -3,8 +3,6 @@
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
from planetmint.backend.tarantool.connection import TarantoolDBConnection
def _check_spaces_by_list(conn, space_names):
_exists = []
@ -25,5 +23,6 @@ def test_create_tables(db_conn):
def test_drop(db_conn): # remove dummy_db as argument
db_conn.drop_database()
db_conn.close()
actual_spaces = _check_spaces_by_list(conn=db_conn, space_names=db_conn.SPACE_NAMES)
assert [] == actual_spaces

View File

@ -12,9 +12,11 @@ from argparse import Namespace
from planetmint.config import Config
from planetmint import ValidatorElection
from planetmint.commands.planetmint import run_election_show
from planetmint.commands.planetmint import run_election_new_chain_migration
from planetmint.backend.connection import Connection
from planetmint.lib import Block
from transactions.types.elections.chain_migration_election import ChainMigrationElection
from tests.utils import generate_election, generate_validators
@ -352,8 +354,6 @@ def test_election_new_upsert_validator_without_tendermint(caplog, b, priv_valida
@pytest.mark.abci
def test_election_new_chain_migration_with_tendermint(b, priv_validator_path, user_sk, validators):
from planetmint.commands.planetmint import run_election_new_chain_migration
new_args = Namespace(action="new", election_type="migration", sk=priv_validator_path, config={})
election_id = run_election_new_chain_migration(new_args, b)
@ -363,8 +363,6 @@ def test_election_new_chain_migration_without_tendermint(caplog, b, priv_validator_path, user_sk):
@pytest.mark.bdb
def test_election_new_chain_migration_without_tendermint(caplog, b, priv_validator_path, user_sk):
from planetmint.commands.planetmint import run_election_new_chain_migration
def mock_write(tx, mode):
b.store_bulk_transactions([tx])
return (202, "")

View File

@ -27,7 +27,6 @@ from transactions.common.transaction_mode_types import BROADCAST_TX_COMMIT
from planetmint.tendermint_utils import key_from_base64
from planetmint.backend import schema, query
from transactions.common.crypto import key_pair_from_ed25519_key, public_key_from_ed25519_key
from transactions.common.exceptions import DatabaseDoesNotExist
from planetmint.lib import Block
from tests.utils import gen_vote
from planetmint.config import Config
@ -120,21 +119,20 @@ def _setup_database(_configure_planetmint): # TODO Here is located setup databa
dbname = Config().get()["database"]["name"]
conn = Connection()
_drop_db(conn, dbname)
schema.drop_database(conn, dbname)
schema.init_database(conn, dbname)
print("Finishing init database")
yield
print("Deleting `{}` database".format(dbname))
conn = Connection()
_drop_db(conn, dbname)
schema.drop_database(conn, dbname)
print("Finished deleting `{}`".format(dbname))
@pytest.fixture
def _bdb(_setup_database, _configure_planetmint):
def _bdb(_setup_database):
from transactions.common.memoize import to_dict, from_dict
from transactions.common.transaction import Transaction
from .utils import flush_db
@ -339,32 +337,6 @@ def inputs(user_pk, b, alice):
b.store_bulk_transactions(transactions)
# @pytest.fixture
# def dummy_db(request):
# from planetmint.backend import Connection
#
# conn = Connection()
# dbname = request.fixturename
# xdist_suffix = getattr(request.config, 'slaveinput', {}).get('slaveid')
# if xdist_suffix:
# dbname = '{}_{}'.format(dbname, xdist_suffix)
#
#
# _drop_db(conn, dbname) # make sure we start with a clean DB
# schema.init_database(conn, dbname)
# yield dbname
#
# _drop_db(conn, dbname)
def _drop_db(conn, dbname):
print(f"CONNECTION FOR DROPPING {conn}")
try:
schema.drop_database(conn, dbname)
except DatabaseDoesNotExist:
pass
@pytest.fixture
def db_config():
return Config().get()["database"]
@ -538,13 +510,6 @@ def tarantool_client(db_context): # TODO Here add TarantoolConnectionClass
return TarantoolDBConnection(host=db_context.host, port=db_context.port)
# @pytest.fixture
# def mongo_client(db_context): # TODO Here add TarantoolConnectionClass
# return None # MongoClient(host=db_context.host, port=db_context.port)
#
#
@pytest.fixture
def utxo_collection(tarantool_client, _setup_database):
return tarantool_client.get_space("utxos")
@ -561,11 +526,11 @@ def dummy_unspent_outputs():
@pytest.fixture
def utxoset(dummy_unspent_outputs, utxo_collection):
from json import dumps
from uuid import uuid4
num_rows_before_operation = utxo_collection.select().rowcount
for utxo in dummy_unspent_outputs:
res = utxo_collection.insert((utxo["transaction_id"], utxo["output_index"], dumps(utxo)))
res = utxo_collection.insert((uuid4().hex, utxo["transaction_id"], utxo["output_index"], utxo))
assert res
num_rows_after_operation = utxo_collection.select().rowcount
assert num_rows_after_operation == num_rows_before_operation + 3
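The fixture reflects the new utxos space layout: each row now carries a synthetic uuid primary key before the transaction id and output index, with the utxo dict stored as the last field. A minimal sketch of the row shape:
from uuid import uuid4
utxo = {"transaction_id": "a", "output_index": 0}  # minimal example utxo
row = (uuid4().hex, utxo["transaction_id"], utxo["output_index"], utxo)
# utxo_collection.insert(row) stores it in the "utxos" space.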

View File

@ -2,23 +2,31 @@
# Planetmint and IPDB software contributors.
# SPDX-License-Identifier: (Apache-2.0 AND CC-BY-4.0)
# Code is Apache-2.0 and docs are CC-BY-4.0
import warnings
import random
import pytest
import warnings
from unittest.mock import patch
import pytest
from base58 import b58decode
from ipld import marshal, multihash
from transactions.common import crypto
from transactions.common.output import Output as TransactionOutput
from transactions.common.transaction import TransactionLink
from transactions.common.transaction import Transaction
from transactions.types.assets.create import Create
from transactions.types.assets.transfer import Transfer
from ipld import marshal, multihash
from base58 import b58decode
from planetmint.backend.models import Output
from planetmint.exceptions import CriticalDoubleSpend
pytestmark = pytest.mark.bdb
class TestBigchainApi(object):
def test_get_spent_with_double_spend_detected(self, b, alice):
from transactions.common.exceptions import DoubleSpend
from planetmint.exceptions import CriticalDoubleSpend
tx = Create.generate([alice.public_key], [([alice.public_key], 1)])
@ -39,14 +47,13 @@ class TestBigchainApi(object):
with pytest.raises(DoubleSpend):
b.validate_transaction(transfer_tx2)
b.store_bulk_transactions([transfer_tx2])
with pytest.raises(CriticalDoubleSpend):
b.get_spent(tx.id, 0)
b.store_bulk_transactions([transfer_tx2])
def test_double_inclusion(self, b, alice):
from planetmint.backend.exceptions import OperationError
from tarantool.error import DatabaseError
from planetmint.backend.exceptions import OperationError
from planetmint.backend.tarantool.connection import TarantoolDBConnection
tx = Create.generate([alice.public_key], [([alice.public_key], 1)])
@ -54,7 +61,7 @@ class TestBigchainApi(object):
b.store_bulk_transactions([tx])
if isinstance(b.connection, TarantoolDBConnection):
with pytest.raises(DatabaseError):
with pytest.raises(CriticalDoubleSpend):
b.store_bulk_transactions([tx])
else:
with pytest.raises(OperationError):
@ -110,9 +117,9 @@ class TestBigchainApi(object):
before = tx.to_dict()
after = tx_from_db.to_dict()
assert before["assets"][0]["data"] == after["assets"][0]["data"]
before.pop("asset", None)
after.pop("asset", None)
assert before["assets"][0] == after["assets"][0]
before.pop("assets", None)
after.pop("assets", None)
assert before == after
@ -153,14 +160,12 @@ class TestTransactionValidation(object):
class TestMultipleInputs(object):
def test_transfer_single_owner_single_input(self, b, inputs, user_pk, user_sk):
from transactions.common import crypto
user2_sk, user2_pk = crypto.generate_key_pair()
tx_link = b.fastquery.get_outputs_by_public_key(user_pk).pop()
input_tx = b.get_transaction(tx_link.txid)
inputs = input_tx.to_inputs()
tx = Transfer.generate(inputs, [([user2_pk], 1)], asset_ids=[input_tx.id])
tx_converted = Transaction.from_dict(input_tx.to_dict(), True)
tx = Transfer.generate(tx_converted.to_inputs(), [([user2_pk], 1)], asset_ids=[input_tx.id])
tx = tx.sign([user_sk])
# validate transaction
@ -169,14 +174,14 @@ class TestMultipleInputs(object):
assert len(tx.outputs) == 1
def test_single_owner_before_multiple_owners_after_single_input(self, b, user_sk, user_pk, inputs):
from transactions.common import crypto
user2_sk, user2_pk = crypto.generate_key_pair()
user3_sk, user3_pk = crypto.generate_key_pair()
tx_link = b.fastquery.get_outputs_by_public_key(user_pk).pop()
input_tx = b.get_transaction(tx_link.txid)
tx = Transfer.generate(input_tx.to_inputs(), [([user2_pk, user3_pk], 1)], asset_ids=[input_tx.id])
tx_converted = Transaction.from_dict(input_tx.to_dict(), True)
tx = Transfer.generate(tx_converted.to_inputs(), [([user2_pk, user3_pk], 1)], asset_ids=[input_tx.id])
tx = tx.sign([user_sk])
b.validate_transaction(tx)
@ -185,8 +190,6 @@ class TestMultipleInputs(object):
@pytest.mark.usefixtures("inputs")
def test_multiple_owners_before_single_owner_after_single_input(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
user2_sk, user2_pk = crypto.generate_key_pair()
user3_sk, user3_pk = crypto.generate_key_pair()
@ -196,9 +199,9 @@ class TestMultipleInputs(object):
owned_input = b.fastquery.get_outputs_by_public_key(user_pk).pop()
input_tx = b.get_transaction(owned_input.txid)
inputs = input_tx.to_inputs()
input_tx_converted = Transaction.from_dict(input_tx.to_dict(), True)
transfer_tx = Transfer.generate(inputs, [([user3_pk], 1)], asset_ids=[input_tx.id])
transfer_tx = Transfer.generate(input_tx_converted.to_inputs(), [([user3_pk], 1)], asset_ids=[input_tx.id])
transfer_tx = transfer_tx.sign([user_sk, user2_sk])
# validate transaction
@ -208,8 +211,6 @@ class TestMultipleInputs(object):
@pytest.mark.usefixtures("inputs")
def test_multiple_owners_before_multiple_owners_after_single_input(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
user2_sk, user2_pk = crypto.generate_key_pair()
user3_sk, user3_pk = crypto.generate_key_pair()
user4_sk, user4_pk = crypto.generate_key_pair()
@ -221,8 +222,9 @@ class TestMultipleInputs(object):
# get input
tx_link = b.fastquery.get_outputs_by_public_key(user_pk).pop()
tx_input = b.get_transaction(tx_link.txid)
input_tx_converted = Transaction.from_dict(tx_input.to_dict(), True)
tx = Transfer.generate(tx_input.to_inputs(), [([user3_pk, user4_pk], 1)], asset_ids=[tx_input.id])
tx = Transfer.generate(input_tx_converted.to_inputs(), [([user3_pk, user4_pk], 1)], asset_ids=[tx_input.id])
tx = tx.sign([user_sk, user2_sk])
b.validate_transaction(tx)
@ -230,9 +232,6 @@ class TestMultipleInputs(object):
assert len(tx.outputs) == 1
def test_get_owned_ids_single_tx_single_output(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
from transactions.common.transaction import TransactionLink
user2_sk, user2_pk = crypto.generate_key_pair()
tx = Create.generate([alice.public_key], [([user_pk], 1)])
@ -255,9 +254,6 @@ class TestMultipleInputs(object):
assert owned_inputs_user2 == [TransactionLink(tx_transfer.id, 0)]
def test_get_owned_ids_single_tx_multiple_outputs(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
from transactions.common.transaction import TransactionLink
user2_sk, user2_pk = crypto.generate_key_pair()
# create divisible asset
@ -286,9 +282,6 @@ class TestMultipleInputs(object):
assert owned_inputs_user2 == [TransactionLink(tx_transfer.id, 0), TransactionLink(tx_transfer.id, 1)]
def test_get_owned_ids_multiple_owners(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
from transactions.common.transaction import TransactionLink
user2_sk, user2_pk = crypto.generate_key_pair()
user3_sk, user3_pk = crypto.generate_key_pair()
@ -316,8 +309,6 @@ class TestMultipleInputs(object):
assert not spent_user1
def test_get_spent_single_tx_single_output(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
user2_sk, user2_pk = crypto.generate_key_pair()
tx = Create.generate([alice.public_key], [([user_pk], 1)])
@ -337,11 +328,9 @@ class TestMultipleInputs(object):
b.store_bulk_transactions([tx])
spent_inputs_user1 = b.get_spent(input_txid, 0)
assert spent_inputs_user1 == tx
assert spent_inputs_user1 == tx.to_dict()
def test_get_spent_single_tx_multiple_outputs(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
# create a new users
user2_sk, user2_pk = crypto.generate_key_pair()
@ -366,15 +355,13 @@ class TestMultipleInputs(object):
# check that used inputs are marked as spent
for ffill in tx_create.to_inputs()[:2]:
spent_tx = b.get_spent(ffill.fulfills.txid, ffill.fulfills.output)
assert spent_tx == tx_transfer_signed
assert spent_tx == tx_transfer_signed.to_dict()
# check if remaining transaction that was unspent is also perceived
# spendable by Planetmint
assert b.get_spent(tx_create.to_inputs()[2].fulfills.txid, 2) is None
def test_get_spent_multiple_owners(self, b, user_sk, user_pk, alice):
from transactions.common import crypto
user2_sk, user2_pk = crypto.generate_key_pair()
user3_sk, user3_pk = crypto.generate_key_pair()
@ -398,7 +385,7 @@ class TestMultipleInputs(object):
b.store_bulk_transactions([tx])
# check that used inputs are marked as spent
assert b.get_spent(transactions[0].id, 0) == tx
assert b.get_spent(transactions[0].id, 0) == tx.to_dict()
# check that the other remain marked as unspent
for unspent in transactions[1:]:
assert b.get_spent(unspent.id, 0) is None
@ -406,6 +393,7 @@ class TestMultipleInputs(object):
def test_get_outputs_filtered_only_unspent():
from transactions.common.transaction import TransactionLink
from planetmint.lib import Planetmint
go = "planetmint.fastquery.FastQuery.get_outputs_by_public_key"
@ -421,6 +409,7 @@ def test_get_outputs_filtered_only_unspent():
def test_get_outputs_filtered_only_spent():
from transactions.common.transaction import TransactionLink
from planetmint.lib import Planetmint
go = "planetmint.fastquery.FastQuery.get_outputs_by_public_key"
@ -438,6 +427,7 @@ def test_get_outputs_filtered_only_spent():
@patch("planetmint.fastquery.FastQuery.filter_spent_outputs")
def test_get_outputs_filtered(filter_spent, filter_unspent):
from transactions.common.transaction import TransactionLink
from planetmint.lib import Planetmint
go = "planetmint.fastquery.FastQuery.get_outputs_by_public_key"
@ -472,6 +462,7 @@ def test_cant_spend_same_input_twice_in_tx(b, alice):
def test_transaction_unicode(b, alice):
import copy
from transactions.common.utils import serialize
# http://www.fileformat.info/info/unicode/char/1f37a/index.htm

View File

@ -22,6 +22,7 @@ from planetmint.lib import Block
from planetmint.tendermint_utils import new_validator_set
from planetmint.tendermint_utils import public_key_to_base64
from planetmint.version import __tm_supported_versions__
from planetmint.backend.tarantool.const import TARANT_TABLE_GOVERNANCE
from tests.utils import generate_election, generate_validators
pytestmark = pytest.mark.bdb

View File

@ -112,7 +112,11 @@ def test_post_transaction_responses(tendermint_ws_url, b):
alice = generate_key_pair()
bob = generate_key_pair()
tx = Create.generate([alice.public_key], [([alice.public_key], 1)], assets=None).sign([alice.private_key])
tx = Create.generate(
[alice.public_key],
[([alice.public_key], 1)],
assets=[{"data": "QmaozNR7DZHQK1ZcU9p7QdrshMvXqWK6gpu5rmrkPdT3L4"}],
).sign([alice.private_key])
code, message = b.write_transaction(tx, BROADCAST_TX_COMMIT)
assert code == 202

View File

@ -21,6 +21,7 @@ from transactions.common.transaction_mode_types import (
)
from planetmint.lib import Block
from ipld import marshal, multihash
from uuid import uuid4
@pytest.mark.bdb
@ -66,8 +67,8 @@ def test_asset_is_separated_from_transaciton(b):
tx_dict = copy.deepcopy(tx.to_dict())
b.store_bulk_transactions([tx])
assert "asset" not in backend.query.get_transaction(b.connection, tx.id)
assert backend.query.get_asset(b.connection, tx.id)["data"] == assets[0]
assert "asset" not in backend.query.get_transaction_single(b.connection, tx.id)
assert backend.query.get_asset(b.connection, tx.id).data == assets[0]
assert b.get_transaction(tx.id).to_dict() == tx_dict
@ -159,126 +160,31 @@ def test_update_utxoset(b, signed_create_tx, signed_transfer_tx, db_conn):
utxoset = db_conn.get_space("utxos")
assert utxoset.select().rowcount == 1
utxo = utxoset.select().data
assert utxo[0][0] == signed_create_tx.id
assert utxo[0][1] == 0
assert utxo[0][1] == signed_create_tx.id
assert utxo[0][2] == 0
b.update_utxoset(signed_transfer_tx)
assert utxoset.select().rowcount == 1
utxo = utxoset.select().data
assert utxo[0][0] == signed_transfer_tx.id
assert utxo[0][1] == 0
assert utxo[0][1] == signed_transfer_tx.id
assert utxo[0][2] == 0
@pytest.mark.bdb
def test_store_transaction(mocker, b, signed_create_tx, signed_transfer_tx, db_context):
from planetmint.backend.tarantool.connection import TarantoolDBConnection
mocked_store_asset = mocker.patch("planetmint.backend.query.store_assets")
mocked_store_metadata = mocker.patch("planetmint.backend.query.store_metadatas")
def test_store_transaction(mocker, b, signed_create_tx, signed_transfer_tx):
mocked_store_transaction = mocker.patch("planetmint.backend.query.store_transactions")
b.store_bulk_transactions([signed_create_tx])
if not isinstance(b.connection, TarantoolDBConnection):
mongo_client = MongoClient(host=db_context.host, port=db_context.port)
utxoset = mongo_client[db_context.name]["utxos"]
assert utxoset.count_documents({}) == 1
utxo = utxoset.find_one()
assert utxo["transaction_id"] == signed_create_tx.id
assert utxo["output_index"] == 0
mocked_store_asset.assert_called_once_with(
b.connection,
[
{
"data": signed_create_tx.assets[0]["data"],
"tx_id": signed_create_tx.id,
"asset_ids": [signed_create_tx.id],
}
],
)
else:
mocked_store_asset.assert_called_once_with(
b.connection, [(signed_create_tx.assets, signed_create_tx.id, signed_create_tx.id)]
)
mocked_store_metadata.assert_called_once_with(
b.connection,
[{"id": signed_create_tx.id, "metadata": signed_create_tx.metadata}],
)
mocked_store_transaction.assert_called_once_with(
b.connection,
[{k: v for k, v in signed_create_tx.to_dict().items() if k not in ("assets", "metadata")}],
)
mocked_store_asset.reset_mock()
mocked_store_metadata.reset_mock()
mocked_store_transaction.assert_any_call(b.connection, [signed_create_tx.to_dict()], "transactions")
mocked_store_transaction.reset_mock()
b.store_bulk_transactions([signed_transfer_tx])
if not isinstance(b.connection, TarantoolDBConnection):
assert utxoset.count_documents({}) == 1
utxo = utxoset.find_one()
assert utxo["transaction_id"] == signed_transfer_tx.id
assert utxo["output_index"] == 0
assert not mocked_store_asset.called
mocked_store_metadata.assert_called_once_with(
b.connection,
[{"id": signed_transfer_tx.id, "metadata": signed_transfer_tx.metadata}],
)
if not isinstance(b.connection, TarantoolDBConnection):
mocked_store_transaction.assert_called_once_with(
b.connection,
[{k: v for k, v in signed_transfer_tx.to_dict().items() if k != "metadata"}],
)
@pytest.mark.bdb
def test_store_bulk_transaction(mocker, b, signed_create_tx, signed_transfer_tx, db_context):
from planetmint.backend.tarantool.connection import TarantoolDBConnection
mocked_store_assets = mocker.patch("planetmint.backend.query.store_assets")
mocked_store_metadata = mocker.patch("planetmint.backend.query.store_metadatas")
def test_store_bulk_transaction(mocker, b, signed_create_tx, signed_transfer_tx):
mocked_store_transactions = mocker.patch("planetmint.backend.query.store_transactions")
b.store_bulk_transactions((signed_create_tx,))
if not isinstance(b.connection, TarantoolDBConnection):
mongo_client = MongoClient(host=db_context.host, port=db_context.port)
utxoset = mongo_client[db_context.name]["utxos"]
assert utxoset.count_documents({}) == 1
utxo = utxoset.find_one()
assert utxo["transaction_id"] == signed_create_tx.id
assert utxo["output_index"] == 0
if isinstance(b.connection, TarantoolDBConnection):
mocked_store_assets.assert_called_once_with(
b.connection,  # previously: signed_create_tx.asset['data']
[(signed_create_tx.assets, signed_create_tx.id, signed_create_tx.id)],
)
else:
mocked_store_assets.assert_called_once_with(
b.connection,  # previously: signed_create_tx.asset['data']
[(signed_create_tx.assets[0]["data"], signed_create_tx.id, signed_create_tx.id)],
)
mocked_store_metadata.assert_called_once_with(
b.connection,
[{"id": signed_create_tx.id, "metadata": signed_create_tx.metadata}],
)
mocked_store_transactions.assert_called_once_with(
b.connection,
[{k: v for k, v in signed_create_tx.to_dict().items() if k not in ("assets", "metadata")}],
)
mocked_store_assets.reset_mock()
mocked_store_metadata.reset_mock()
mocked_store_transactions.assert_any_call(b.connection, [signed_create_tx.to_dict()], "transactions")
mocked_store_transactions.reset_mock()
b.store_bulk_transactions((signed_transfer_tx,))
if not isinstance(b.connection, TarantoolDBConnection):
assert utxoset.count_documents({}) == 1
utxo = utxoset.find_one()
assert utxo["transaction_id"] == signed_transfer_tx.id
assert utxo["output_index"] == 0
assert not mocked_store_assets.called
mocked_store_metadata.assert_called_once_with(
b.connection,
[{"id": signed_transfer_tx.id, "metadata": signed_transfer_tx.metadata}],
)
if not isinstance(b.connection, TarantoolDBConnection):
mocked_store_transactions.assert_called_once_with(
b.connection,
[{k: v for k, v in signed_transfer_tx.to_dict().items() if k != "metadata"}],
)
@pytest.mark.bdb
@ -289,78 +195,44 @@ def test_delete_zero_unspent_outputs(b, utxoset):
num_rows_after_operation = utxo_collection.select().rowcount
# assert delete_res is None
assert num_rows_before_operation == num_rows_after_operation
# assert utxo_collection.count_documents(
# {'$or': [
# {'transaction_id': 'a', 'output_index': 0},
# {'transaction_id': 'b', 'output_index': 0},
# {'transaction_id': 'a', 'output_index': 1},
# ]}
# ) == 3
@pytest.mark.bdb
def test_delete_one_unspent_outputs(b, utxoset):
from planetmint.backend.tarantool.connection import TarantoolDBConnection
def test_delete_one_unspent_outputs(b, dummy_unspent_outputs):
utxo_space = b.connection.get_space("utxos")
for utxo in dummy_unspent_outputs:
res = utxo_space.insert((uuid4().hex, utxo["transaction_id"], utxo["output_index"], utxo))
assert res
unspent_outputs, utxo_collection = utxoset
delete_res = b.delete_unspent_outputs(unspent_outputs[0])
if not isinstance(b.connection, TarantoolDBConnection):
assert len(list(delete_res)) == 1
assert (
utxo_collection.count_documents(
{
"$or": [
{"transaction_id": "a", "output_index": 1},
{"transaction_id": "b", "output_index": 0},
]
}
)
== 2
)
assert utxo_collection.count_documents({"transaction_id": "a", "output_index": 0}) == 0
else:
utx_space = b.connection.get_space("utxos")
res1 = utx_space.select(["a", 1], index="id_search").data
res2 = utx_space.select(["b", 0], index="id_search").data
assert len(res1) + len(res2) == 2
res3 = utx_space.select(["a", 0], index="id_search").data
assert len(res3) == 0
b.delete_unspent_outputs(dummy_unspent_outputs[0])
res1 = utxo_space.select(["a", 1], index="utxo_by_transaction_id_and_output_index").data
res2 = utxo_space.select(["b", 0], index="utxo_by_transaction_id_and_output_index").data
assert len(res1) + len(res2) == 2
res3 = utxo_space.select(["a", 0], index="utxo_by_transaction_id_and_output_index").data
assert len(res3) == 0
@pytest.mark.bdb
def test_delete_many_unspent_outputs(b, utxoset):
from planetmint.backend.tarantool.connection import TarantoolDBConnection
def test_delete_many_unspent_outputs(b, dummy_unspent_outputs):
utxo_space = b.connection.get_space("utxos")
for utxo in dummy_unspent_outputs:
res = utxo_space.insert((uuid4().hex, utxo["transaction_id"], utxo["output_index"], utxo))
assert res
unspent_outputs, utxo_collection = utxoset
delete_res = b.delete_unspent_outputs(*unspent_outputs[::2])
if not isinstance(b.connection, TarantoolDBConnection):
assert len(list(delete_res)) == 2
assert (
utxo_collection.count_documents(
{
"$or": [
{"transaction_id": "a", "output_index": 0},
{"transaction_id": "b", "output_index": 0},
]
}
)
== 0
)
assert utxo_collection.count_documents({"transaction_id": "a", "output_index": 1}) == 1
else:  # TODO: this is ugly because query.get_unspent_outputs has not implemented a query parameter yet.
utx_space = b.connection.get_space("utxos")
res1 = utx_space.select(["a", 0], index="id_search").data
res2 = utx_space.select(["b", 0], index="id_search").data
assert len(res1) + len(res2) == 0
res3 = utx_space.select([], index="id_search").data
assert len(res3) == 1
b.delete_unspent_outputs(*dummy_unspent_outputs[::2])
res1 = utxo_space.select(["a", 0], index="utxo_by_transaction_id_and_output_index").data
res2 = utxo_space.select(["b", 0], index="utxo_by_transaction_id_and_output_index").data
assert len(res1) + len(res2) == 0
res3 = utxo_space.select([], index="utxo_by_transaction_id_and_output_index").data
assert len(res3) == 1
@pytest.mark.bdb
def test_store_zero_unspent_output(b, utxo_collection):
num_rows_before_operation = utxo_collection.select().rowcount
def test_store_zero_unspent_output(b):
utxos = b.connection.get_space("utxos")
num_rows_before_operation = utxos.select().rowcount
res = b.store_unspent_outputs()
num_rows_after_operation = utxo_collection.select().rowcount
num_rows_after_operation = utxos.select().rowcount
assert res is None
assert num_rows_before_operation == num_rows_after_operation
@@ -385,24 +257,18 @@ def test_store_one_unspent_output(b, unspent_output_1, utxo_collection):
else:
utx_space = b.connection.get_space("utxos")
res = utx_space.select(
[unspent_output_1["transaction_id"], unspent_output_1["output_index"]], index="id_search"
[unspent_output_1["transaction_id"], unspent_output_1["output_index"]],
index="utxo_by_transaction_id_and_output_index",
)
assert len(res.data) == 1
@pytest.mark.bdb
def test_store_many_unspent_outputs(b, unspent_outputs, utxo_collection):
from planetmint.backend.tarantool.connection import TarantoolDBConnection
res = b.store_unspent_outputs(*unspent_outputs)
if not isinstance(b.connection, TarantoolDBConnection):
assert res.acknowledged
assert len(list(res)) == 3
assert utxo_collection.count_documents({"transaction_id": unspent_outputs[0]["transaction_id"]}) == 3
else:
utxo_space = b.connection.get_space("utxos") # .select([], index="transaction_search").data
res = utxo_space.select([unspent_outputs[0]["transaction_id"]], index="transaction_search")
assert len(res.data) == 3
def test_store_many_unspent_outputs(b, unspent_outputs):
b.store_unspent_outputs(*unspent_outputs)
utxo_space = b.connection.get_space("utxos")
res = utxo_space.select([unspent_outputs[0]["transaction_id"]], index="utxos_by_transaction_id")
assert len(res.data) == 3
def test_get_utxoset_merkle_root_when_no_utxo(b):
@@ -410,16 +276,19 @@ def test_get_utxoset_merkle_root_when_no_utxo(b):
@pytest.mark.bdb
@pytest.mark.usefixture("utxoset")
def test_get_utxoset_merkle_root(b, utxoset):
def test_get_utxoset_merkle_root(b, dummy_unspent_outputs):
utxo_space = b.connection.get_space("utxos")
for utxo in dummy_unspent_outputs:
res = utxo_space.insert((uuid4().hex, utxo["transaction_id"], utxo["output_index"], utxo))
assert res
expected_merkle_root = "86d311c03115bf4d287f8449ca5828505432d69b82762d47077b1c00fe426eac"
merkle_root = b.get_utxoset_merkle_root()
assert merkle_root == expected_merkle_root
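The expected root above is a fixed value over those three seeded outputs. For context, a generic sketch of how a UTXO-set merkle root can be derived — hash each deterministically serialized output, sort the digests, fold pairwise. The serialization and tree rules here are assumptions for illustration, not necessarily planetmint's exact scheme:

import hashlib
import json

def utxoset_merkle_root(utxos):
    # Deterministic per-UTXO digest: stable JSON (sorted keys), then SHA3-256.
    hashes = sorted(
        hashlib.sha3_256(json.dumps(u, sort_keys=True).encode()).digest() for u in utxos
    )
    if not hashes:
        return hashlib.sha3_256(b"").hexdigest()  # assumed convention for an empty set
    # Fold pairwise until one root remains; duplicate the last digest on odd levels.
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])
        hashes = [
            hashlib.sha3_256(hashes[i] + hashes[i + 1]).digest()
            for i in range(0, len(hashes), 2)
        ]
    return hashes[0].hex()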
@pytest.mark.bdb
def test_get_spent_transaction_critical_double_spend(b, alice, bob, carol):
from planetmint.exceptions import CriticalDoubleSpend
def test_get_spent_transaction_double_spend(b, alice, bob, carol):
from transactions.common.exceptions import DoubleSpend
assets = [{"data": multihash(marshal({"test": "asset"}))}]
@@ -453,11 +322,6 @@ def test_get_spent_transaction_critical_double_spend(b, alice, bob, carol):
with pytest.raises(DoubleSpend):
b.get_spent(tx.id, tx_transfer.inputs[0].fulfills.output, [double_spend])
b.store_bulk_transactions([double_spend])
with pytest.raises(CriticalDoubleSpend):
b.get_spent(tx.id, tx_transfer.inputs[0].fulfills.output)
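What the renamed test checks: get_spent(txid, output, current_transactions) must raise DoubleSpend as soon as more than one spender of the same output is visible across committed state and the passed-in buffer of pending transactions. A simplified sketch of that rule — the committed-state lookup is stubbed out, and the internals are assumptions rather than the refactored implementation:

from transactions.common.exceptions import DoubleSpend

def spenders_in_buffer(txid, output_index, current_transactions):
    # Every buffered transaction whose inputs spend (txid, output_index).
    return [
        tx
        for tx in current_transactions
        for inp in tx.inputs
        if inp.fulfills
        and inp.fulfills.txid == txid
        and inp.fulfills.output == output_index
    ]

def get_spent_sketch(txid, output_index, current_transactions, committed_spender=None):
    spenders = spenders_in_buffer(txid, output_index, current_transactions)
    if committed_spender is not None:  # spender already stored in the backend
        spenders.append(committed_spender)
    if len(spenders) > 1:
        raise DoubleSpend(f"output {output_index} of {txid} is spent more than once")
    return spenders[0] if spenders else None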
def test_validation_with_transaction_buffer(b):
from transactions.common.crypto import generate_key_pair
@@ -513,28 +377,20 @@ def test_migrate_abci_chain_generates_new_chains(b, chain, block_height, expecte
@pytest.mark.bdb
def test_get_spent_key_order(b, user_pk, user_sk, user2_pk, user2_sk):
from planetmint import backend
from transactions.common.crypto import generate_key_pair
from transactions.common.exceptions import DoubleSpend
alice = generate_key_pair()
bob = generate_key_pair()
tx1 = Create.generate([user_pk], [([alice.public_key], 3), ([user_pk], 2)], assets=None).sign([user_sk])
tx1 = Create.generate([user_pk], [([alice.public_key], 3), ([user_pk], 2)]).sign([user_sk])
b.store_bulk_transactions([tx1])
inputs = tx1.to_inputs()
tx2 = Transfer.generate([inputs[1]], [([user2_pk], 2)], [tx1.id]).sign([user_sk])
assert b.validate_transaction(tx2)
tx2_dict = tx2.to_dict()
fulfills = tx2_dict["inputs"][0]["fulfills"]
tx2_dict["inputs"][0]["fulfills"] = {
"output_index": fulfills["output_index"],
"transaction_id": fulfills["transaction_id"],
}
backend.query.store_transactions(b.connection, [tx2_dict])
b.store_bulk_transactions([tx2])
tx3 = Transfer.generate([inputs[1]], [([bob.public_key], 2)], [tx1.id]).sign([user_sk])


@@ -8,9 +8,6 @@ import pytest
from planetmint.version import __tm_supported_versions__
from transactions.types.assets.create import Create
from transactions.types.assets.transfer import Transfer
from transactions.common.exceptions import ConfigurationError
from planetmint.backend.connection import Connection
from planetmint.backend.exceptions import ConnectionError
@pytest.fixture
@@ -22,7 +19,7 @@ def config(request, monkeypatch):
config = {
"database": {
"backend": backend,
"host": "tarantool",
"host": "localhost",
"port": 3303,
"name": "bigchain",
"replicaset": "bigchain-rs",
@@ -53,7 +50,6 @@ def test_bigchain_class_default_initialization(config):
@pytest.mark.bdb
def test_get_spent_issue_1271(b, alice, bob, carol):
b.connection.close()
tx_1 = Create.generate(
[carol.public_key],
[([carol.public_key], 8)],
@@ -93,7 +89,7 @@ def test_get_spent_issue_1271(b, alice, bob, carol):
assert b.validate_transaction(tx_5)
b.store_bulk_transactions([tx_5])
assert b.get_spent(tx_2.id, 0) == tx_5
assert b.get_spent(tx_2.id, 0) == tx_5.to_dict()
assert not b.get_spent(tx_5.id, 0)
assert b.get_outputs_filtered(alice.public_key)
assert b.get_outputs_filtered(alice.public_key, spent=False)
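Worth noting in the hunk above: after the refactor, get_spent returns the spending transaction as a plain dict rather than a Transaction object, which is why the assertion now compares against to_dict(). A minimal usage sketch in the same test context:

spender = b.get_spent(tx_2.id, 0)
# The refactored backend hands back a serialized transaction:
assert spender == tx_5.to_dict()
assert spender["id"] == tx_5.id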


@@ -7,6 +7,8 @@ import pytest
import codecs
from planetmint.tendermint_utils import public_key_to_base64
from planetmint.backend.tarantool.const import TARANT_TABLE_GOVERNANCE
from transactions.types.elections.validator_election import ValidatorElection
from transactions.common.exceptions import AmountError
from transactions.common.crypto import generate_key_pair
@@ -206,7 +208,7 @@ def test_valid_election_conclude(b_mock, valid_upsert_validator_election, ed2551
# so any invocation of `.has_concluded` for that election should return False
assert not b_mock.has_election_concluded(valid_upsert_validator_election)
# Vote is still valid, but the election cannot be concluded, as it is assumed
# that it has already been concluded before
assert b_mock.validate_transaction(tx_vote3)
assert not b_mock.has_election_concluded(valid_upsert_validator_election, [tx_vote3])


@ -10,7 +10,7 @@ import random
from functools import singledispatch
from planetmint.backend.localmongodb.connection import LocalMongoDBConnection
from planetmint.backend.tarantool.connection import TarantoolDBConnection
from planetmint.backend.schema import TABLES, SPACE_NAMES
from planetmint.backend.schema import TABLES
from transactions.common import crypto
from transactions.common.transaction_mode_types import BROADCAST_TX_COMMIT
from transactions.types.assets.create import Create
@@ -32,32 +32,17 @@ def flush_localmongo_db(connection, dbname):
@flush_db.register(TarantoolDBConnection)
def flush_tarantool_db(connection, dbname):
for s in SPACE_NAMES:
_all_data = connection.run(connection.space(s).select([]))
if _all_data is None:
continue
for _id in _all_data:
if "assets" == s:
connection.run(connection.space(s).delete(_id[1]), only_data=False)
elif s == "blocks":
connection.run(connection.space(s).delete(_id[2]), only_data=False)
elif s == "inputs":
connection.run(connection.space(s).delete(_id[-2]), only_data=False)
elif s == "outputs":
connection.run(connection.space(s).delete(_id[-4]), only_data=False)
elif s == "utxos":
connection.run(connection.space(s).delete([_id[0], _id[1]]), only_data=False)
elif s == "abci_chains":
connection.run(connection.space(s).delete(_id[-1]), only_data=False)
else:
connection.run(connection.space(s).delete(_id[0]), only_data=False)
connection.connect().call("drop")
connection.connect().call("init")
def generate_block(planet):
from transactions.common.crypto import generate_key_pair
alice = generate_key_pair()
tx = Create.generate([alice.public_key], [([alice.public_key], 1)], assets=None).sign([alice.private_key])
tx = Create.generate([alice.public_key], [([alice.public_key], 1)], assets=[{"data": None}]).sign(
[alice.private_key]
)
code, message = planet.write_transaction(tx, BROADCAST_TX_COMMIT)
assert code == 202


@@ -11,56 +11,30 @@ from ipld import marshal, multihash
ASSETS_ENDPOINT = "/api/v1/assets/"
def test_get_assets_with_empty_text_search(client):
res = client.get(ASSETS_ENDPOINT + "?search=")
assert res.json == {"status": 400, "message": "text_search cannot be empty"}
assert res.status_code == 400
def test_get_assets_with_missing_text_search(client):
res = client.get(ASSETS_ENDPOINT)
assert res.status_code == 400
@pytest.mark.bdb
def test_get_assets_tendermint(client, b, alice):
# test returns empty list when no assets are found
res = client.get(ASSETS_ENDPOINT + "?search=abc")
assert res.json == []
assert res.status_code == 200
# create asset
assets = [{"data": multihash(marshal({"msg": "abc"}))}]
tx = Create.generate([alice.public_key], [([alice.public_key], 1)], assets=assets).sign([alice.private_key])
b.store_bulk_transactions([tx])
# test that asset is returned
res = client.get(ASSETS_ENDPOINT + "?search=" + assets[0]["data"])
res = client.get(ASSETS_ENDPOINT + assets[0]["data"])
assert res.status_code == 200
assert len(res.json) == 1
assert res.json[0] == {"data": assets[0]["data"], "id": tx.id}
assert res.json[0] == {"data": assets[0]["data"]}
@pytest.mark.bdb
def test_get_assets_limit_tendermint(client, b, alice):
def test_get_assets_tendermint_limit(client, b, alice, bob):
# create assets
assets = [{"data": multihash(marshal({"msg": "abc"}))}]
tx_1 = Create.generate([alice.public_key], [([alice.public_key], 1)], assets=assets).sign([alice.private_key])
tx_2 = Create.generate([bob.public_key], [([bob.public_key], 1)], assets=assets).sign([bob.private_key])
# create two assets
assets1 = [{"data": multihash(marshal({"msg": "abc 1"}))}]
assets2 = [{"data": multihash(marshal({"msg": "abc 2"}))}]
tx1 = Create.generate([alice.public_key], [([alice.public_key], 1)], assets=assets1).sign([alice.private_key])
tx2 = Create.generate([alice.public_key], [([alice.public_key], 1)], assets=assets2).sign([alice.private_key])
b.store_bulk_transactions([tx_1, tx_2])
b.store_bulk_transactions([tx1])
b.store_bulk_transactions([tx2])
# test that both assets are returned without limit
res = client.get(ASSETS_ENDPOINT + "?search=" + assets1[0]["data"])
assert res.status_code == 200
assert len(res.json) == 1
# test that only one asset is returned when using limit=1
res = client.get(ASSETS_ENDPOINT + "?search=" + assets1[0]["data"] + "&limit=1")
res = client.get(ASSETS_ENDPOINT + assets[0]["data"] + "?limit=1")
assert res.status_code == 200
assert len(res.json) == 1
assert res.json[0] == {"data": assets[0]["data"]}


@@ -59,8 +59,8 @@ def test_get_block_containing_transaction(b, client, alice):
block = Block(app_hash="random_utxo", height=13, transactions=[tx.id])
b.store_block(block._asdict())
res = client.get("{}?transaction_id={}".format(BLOCKS_ENDPOINT, tx.id))
expected_response = [block.height]
assert res.json == expected_response
expected_height = block.height
assert res.json[0][2] == expected_height
assert res.status_code == 200
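The blocks endpoint used to answer with a bare list of heights; after the refactor each element is a full block row with the height at index 2. A small sketch of reading the response in the same test context — the complete row layout is an assumption inferred from the assertion above:

res = client.get("{}?transaction_id={}".format(BLOCKS_ENDPOINT, tx.id))
row = res.json[0]   # one row per matching block
height = row[2]     # height sits at index 2, per the assertion above
assert height == block.height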


@@ -11,22 +11,11 @@ from ipld import marshal, multihash
METADATA_ENDPOINT = "/api/v1/metadata/"
def test_get_metadata_with_empty_text_search(client):
res = client.get(METADATA_ENDPOINT + "?search=")
assert res.json == {"status": 400, "message": "text_search cannot be empty"}
assert res.status_code == 400
def test_get_metadata_with_missing_text_search(client):
res = client.get(METADATA_ENDPOINT)
assert res.status_code == 400
@pytest.mark.bdb
def test_get_metadata_tendermint(client, b, alice):
assets = [{"data": multihash(marshal({"msg": "abc"}))}]
# test returns empty list when no metadata is found
res = client.get(METADATA_ENDPOINT + "?search=" + assets[0]["data"])
res = client.get(METADATA_ENDPOINT + assets[0]["data"])
assert res.json == []
assert res.status_code == 200
@@ -40,10 +29,10 @@ def test_get_metadata_tendermint(client, b, alice):
b.store_bulk_transactions([tx])
# test that metadata is returned
res = client.get(METADATA_ENDPOINT + "?search=" + metadata)
res = client.get(METADATA_ENDPOINT + metadata)
assert res.status_code == 200
assert len(res.json) == 1
assert res.json[0] == {"metadata": metadata, "id": tx.id}
assert res.json[0] == metadata
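Metadata mirrors the asset change: the search query string becomes a CID path segment, and each response row is now the bare metadata CID instead of a {metadata, id} document. A short sketch with the same test client, assuming a transaction carrying this metadata was stored beforehand:

meta = multihash(marshal({"key": "abc"}))  # hypothetical CID for illustration
res = client.get(METADATA_ENDPOINT + meta)
assert res.status_code == 200
assert all(row == meta for row in res.json)  # rows are bare CIDs

res = client.get(METADATA_ENDPOINT + meta + "?limit=1")
assert len(res.json) <= 1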
@pytest.mark.bdb
@@ -51,25 +40,24 @@ def test_get_metadata_limit_tendermint(client, b, alice):
# create two assets
assets1 = [{"data": multihash(marshal({"msg": "abc 1"}))}]
meta1 = multihash(marshal({"key": "meta 1"}))
tx1 = Create.generate([alice.public_key], [([alice.public_key], 1)], metadata=meta1, assets=assets1).sign(
meta = multihash(marshal({"key": "meta 1"}))
tx1 = Create.generate([alice.public_key], [([alice.public_key], 1)], metadata=meta, assets=assets1).sign(
[alice.private_key]
)
b.store_bulk_transactions([tx1])
assets2 = [{"data": multihash(marshal({"msg": "abc 2"}))}]
meta2 = multihash(marshal({"key": "meta 2"}))
tx2 = Create.generate([alice.public_key], [([alice.public_key], 1)], metadata=meta2, assets=assets2).sign(
tx2 = Create.generate([alice.public_key], [([alice.public_key], 1)], metadata=meta, assets=assets2).sign(
[alice.private_key]
)
b.store_bulk_transactions([tx2])
# test that both assets are returned without limit
res = client.get(METADATA_ENDPOINT + "?search=" + meta1)
res = client.get(METADATA_ENDPOINT + meta)
assert res.status_code == 200
assert len(res.json) == 1
assert len(res.json) == 2
# test that only one asset is returned when using limit=1
res = client.get(METADATA_ENDPOINT + "?search=" + meta2 + "&limit=1")
res = client.get(METADATA_ENDPOINT + meta + "?limit=1")
assert res.status_code == 200
assert len(res.json) == 1


@ -12,6 +12,9 @@ import pytest
# from unittest.mock import patch
from transactions.types.assets.create import Create
from transactions.types.assets.transfer import Transfer
from transactions.common import crypto
from planetmint import events
from planetmint.web.websocket_server import init_app, EVENTS_ENDPOINT, EVENTS_ENDPOINT_BLOCKS
from ipld import multihash, marshal
@@ -135,9 +138,6 @@ async def test_bridge_sync_async_queue(event_loop):
@pytest.mark.asyncio
async def test_websocket_block_event(aiohttp_client, event_loop):
from planetmint import events
from planetmint.web.websocket_server import init_app, EVENTS_ENDPOINT_BLOCKS
from transactions.common import crypto
user_priv, user_pub = crypto.generate_key_pair()
tx = Create.generate([user_pub], [([user_pub], 1)])
@@ -169,9 +169,6 @@ async def test_websocket_block_event(aiohttp_client, event_loop):
@pytest.mark.asyncio
async def test_websocket_transaction_event(aiohttp_client, event_loop):
from planetmint import events
from planetmint.web.websocket_server import init_app, EVENTS_ENDPOINT
from transactions.common import crypto
user_priv, user_pub = crypto.generate_key_pair()
tx = Create.generate([user_pub], [([user_pub], 1)])
@@ -240,8 +237,6 @@ def test_integration_from_webapi_to_websocket(monkeypatch, client, loop):
import random
import aiohttp
from transactions.common import crypto
# TODO processes does not exist anymore, when reactivating this test it
# will fail because of this
from planetmint import processes