Compare commits

...

27 Commits

Author SHA1 Message Date
tal
16a83f9bac Initialize the selected parent on GHOSTDAG. 2021-04-08 14:27:22 +03:00
tal
7d20ee6b58 Adds a comment to exported function (GHOSTDAG). 2021-04-08 14:10:03 +03:00
tal
1ab05c3fbc Redesign the code of GHOSTDAG function. 2021-04-08 14:04:28 +03:00
Svarog
347dd8fc4b Support resolveBlockStatus without separate stagingAreas for usage of testConsensus (#1666) 2021-04-08 11:26:17 +03:00
Ori Newman
d2cccd2829 Add ECDSA support to the wallet (#1664)
* Add ECDSA support to the wallet

* Fix genkeypair

* Fix typo and rename var
2021-04-06 17:25:09 +03:00
Ori Newman
7186f83095 Add OpCheckMultiSigECDSA (#1663)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-04-06 16:29:16 +03:00
Ori Newman
5c394c2951 Add PubKeyECDSATy (#1662) 2021-04-06 15:56:31 +03:00
Ori Newman
a786cdc15e Add ECDSA support (#1657)
* Add ECDSA support

* Add domain separation to ECDSA sighash

* Use InfallibleWrite instead of Write

* Rename funcs

* Fix wrong use if vm.sigCache

* Add TestCalculateSignatureHashECDSA

* Add consts

* Fix comment and test name

* Move consts to the top

* Fix comment
2021-04-06 14:27:18 +03:00
Ori Newman
6dd3d4a9e7 Add dump unencrypted data sub command to the wallet (#1661) 2021-04-06 12:29:13 +03:00
stasatdaglabs
73b36f12f0 Implement importing private keys into the wallet (#1655)
* Implement importing private keys into the wallet.

* Fix bad --import default.

* Fix typo in --import annotation.

* Make go lint happy.

* Make go lint happier.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-04-05 18:10:33 +03:00
stasatdaglabs
a795a9e619 Add a size limit to the address manager (#1652)
* Remove a random address from the address manager if it's full.

* Implement TestOverfillAddressManager.

* Add connectionFailedCount to addresses.

* Mark connection failures.

* Mark connection successes.

* Implement removing by most connection failures.

* Expand TestOverfillAddressManager.

* Add comments.

* Use a better method for finding the address with the greatest connectionFailedCount.

* Fix a comment.

* Compare addresses by IP in TestOverfillAddressManager.

* Add a comment for updateNotBanned.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-04-05 17:56:13 +03:00
Ori Newman
0be1bba408 Fix TestAddresses (#1656) 2021-04-05 16:24:22 +03:00
Ori Newman
6afc06ce58 Replace p2pkh with p2pk (#1650)
* Replace p2pkh with p2pk

* Fix tests

* Fix comments and variable names

* Add README.md for genkeypair

* Rename pubkey->publicKey

* Rename p2pkh to p2pk

* Use util.PublicKeySize where needed

* Remove redundant pointer

* Fix comment

* Rename pubKey->publicKey
2021-04-05 14:35:34 +03:00
stasatdaglabs
d01a213f3d Add a show-address subcommand to kaspawallet (#1653)
* Add a show-address subcommand to kaspawallet.

* Update the description of the key-file command line parameter.
2021-04-05 14:22:03 +03:00
stasatdaglabs
7ad8ce521c Implement reconnection logic within the RPC client (#1643)
* Add a reconnect mechanism to RPCClient.

* Fix Reconnect().

* Connect the internal reconnection logic to the miner reconnection logic.

* Rename shouldReconnect to isClosed.

* Move safe reconnection logic from the miner to rpcclient.

* Remove sleep from HandleSubmitBlock.

* Properly handle client errors and only disconnect if we're already connected.

* Make go lint happy.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-04-05 13:57:28 +03:00
Ori Newman
86ba80a091 Improve wallet functionality (#1636)
* Add basic wallet library

* Add CLI

* Add multisig support

* Add persistence to wallet

* Add tests

* go mod tidy

* Fix lint errors

* Fix wallet send command

* Always use the password as byte slice

* Remove redundant empty string

* Use different salt per private key

* Don't sign a signed transaction

* Add comment

* Remove old wallet

* Change directory permissions

* Use NormalizeRPCServerAddress

* Fix compilation errors
2021-03-31 15:58:22 +03:00
Ori Newman
088e2114c2 Disconnect from RPC client after finishing the simple sync test (#1641) 2021-03-31 13:44:42 +03:00
stasatdaglabs
2854d91688 Add missing call to broadcastTransactionsAfterBlockAdded (#1639)
* Add missing call to broadcastTransactionsAfterBlockAdded.

* Fix a comment.

* Fix a comment some more.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-31 10:28:02 +03:00
Ori Newman
af10b59181 Use go-secp256k1 v0.0.5 (#1640) 2021-03-30 18:01:56 +03:00
stasatdaglabs
c5b0394bbc In RPC, use RPCTransactions and RPCBlocks instead of TransactionMessages and BlockMessages (#1609)
* Replace BlockMessage with RpcBlock in rpc.proto.

* Convert everything in kaspad to use RPCBlocks and fix tests.

* Fix compilation errors in stability tests and the miner.

* Update TransactionVerboseData in rpc.proto.

* Update TransactionVerboseData in the rest of kaspad.

* Make golint happy.

* Include RpcTransactionVerboseData in RpcTransaction instead of the other way around.

* Regenerate rpc.pb.go after merge.

* Update appmessage types.

* Update appmessage request and response types.

* Reimplement conversion functions between appmessage.RPCTransaction and protowire.RpcTransaction.

* Extract RpcBlockHeader toAppMessage/fromAppMessage out of RpcBlock.

* Fix compilation errors in getBlock, getBlocks, and submitBlock.

* Fix compilation errors in getMempoolEntry.

* Fix compilation errors in notifyBlockAdded.

* Update verbosedata.go.

* Fix compilation errors in getBlock and getBlocks.

* Fix compilation errors in getBlocks tests.

* Fix conversions between getBlocks message types.

* Fix integration tests.

* Fix a comment.

* Add selectedParent to the verbose block response.
2021-03-30 17:43:02 +03:00
Ori Newman
9266d179a9 Add a test with two signed inputs (#1628)
* Add TestSigningTwoInputs

* Rename fundingBlockHash to block1Hash

* Fix error message

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-03-30 16:54:54 +03:00
Ori Newman
321792778e Add mass limit to mempool (#1627)
* Add mass limit to mempool

* Pass only params instead of multiple configuration options

* Remove acceptNonStd from mempool constructor

* Remove acceptNonStd from mempool constructor

* Fix test compilation
2021-03-30 15:37:56 +03:00
talelbaz
70f3fa9893 Update miningManager test (#1593)
* [NOD-1429] add mining manager unit tests

* [NOD-1429] Add additional test

* Found a bug, so stopped working on this test until the bug is fixed.

* Update miningmanager_test.go test.

* Delete payloadHash field - not used anymore in the current version.

* Change the condition for comparing slices instead of pointers.

* Fix due to review notes - change names, use testutils.CreateTransaction function and adds comments.

* Changes after fetch&merge to v0.10.0-dev

* Create a new function createChildTxWhenParentTxWasAddedByConsensus and add a comment

* Add an argument to create_transaction function and fix review notes

* Optimization

* Change to blockID(instead of the all transaction) in the error messages and fix review notes

* Change to blockID(instead of the all transaction) in the error messages and fix review notes

* Change format of error messages.

* Change name of a variable

* Use go:embed to embed sample-kaspad.conf (only on go1.16)

* Revert "Use go:embed to embed sample-kaspad.conf (only on go1.16)"

This reverts commit bd28052b92.

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-30 13:52:40 +03:00
Svarog
4e18031483 Resolve each block status in its own staging area (#1634) 2021-03-30 11:04:43 +03:00
Svarog
2abc284e3b Make sure the ghostdagDataStore cache is at least DifficultyAdjustmentBlockWindow sized (#1635) 2021-03-29 14:38:19 +03:00
Svarog
f1451406f7 Add support for multiple staging areas (#1633)
* Add StagingArea struct

* Implemented staging areas in blockStore

* Move blockStagingShard to separate folder

* Apply staging shard to acceptanceDataStore

* Update blockHeaderStore with StagingArea

* Add StagingArea to BlockRelationStore

* Add StagingArea to blockStatusStore

* Add StagingArea to consensusStateStore

* Add StagingArea to daaBlocksStore

* Add StagingArea to finalityStore

* Add StagingArea to ghostdagDataStore

* Add StagingArea to headersSelectedChainStore and headersSelectedTipStore

* Add StagingArea to multisetStore

* Add StagingArea to pruningStore

* Add StagingArea to reachabilityDataStore

* Add StagingArea to utxoDiffStore

* Fix forgotten compilation error

* Update reachability manager and some more things with StagingArea

* Add StagingArea to dagTopologyManager, and some more

* Add StagingArea to GHOSTDAGManager, and some more

* Add StagingArea to difficultyManager, and some more

* Add StagingArea to dagTraversalManager, and some more

* Add StagingArea to headerTipsManager, and some more

* Add StagingArea to consensusStateManager, pastMedianTimeManager

* Add StagingArea to transactionValidator

* Add StagingArea to finalityManager

* Add StagingArea to mergeDepthManager

* Add StagingArea to pruningManager

* Add StagingArea to rest of ValidateAndInsertBlock

* Add StagingArea to blockValidator

* Add StagingArea to coinbaseManager

* Add StagingArea to syncManager

* Add StagingArea to blockBuilder

* Update consensus with StagingArea

* Add StagingArea to ghostdag2

* Fix remaining compilation errors

* Update names of stagingShards

* Fix forgotten stagingArea passing

* Mark stagingShard.isCommited = true once commited

* Move isStaged to stagingShard, so that it's available without going through store

* Make blockHeaderStore count be available from stagingShard

* Fix remaining forgotten stagingArea passing

* commitAllChanges should call dbTx.Commit in the end

* Fix all tests tests in blockValidator

* Fix all tests in consensusStateManager and some more

* Fix all tests in pruningManager

* Add many missing stagingAreas in tests

* Fix many tests

* Fix most of all other tests

* Fix ghostdag_test.go

* Add comment to StagingArea

* Make list of StagingShards an array

* Add comment to StagingShardID

* Make sure all staging shards are pointer-receiver

* Undo bucket rename in block_store

* Typo: isCommited -> isCommitted

* Add comment explaining why stagingArea.shards is an array
2021-03-29 10:34:11 +03:00
talelbaz
c12e180873 Use go:embed to embed sample-kaspad.conf (only on go1.16) (#1631)
* Use go:embed to embed sample-kaspad.conf (only on go1.16)

* Add a comment to justify the blank import.

* Change a variable name to sampleKaspad (instead configurationSampleKaspadString)

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-25 15:56:01 +02:00
280 changed files with 12095 additions and 7353 deletions

View File

@@ -3,6 +3,7 @@ package appmessage
import (
"encoding/hex"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
@@ -294,3 +295,70 @@ func DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(
}
return domainOutpointAndUTXOEntryPairs
}
// DomainBlockToRPCBlock converts DomainBlocks to RPCBlocks
func DomainBlockToRPCBlock(block *externalapi.DomainBlock) *RPCBlock {
header := &RPCBlockHeader{
Version: uint32(block.Header.Version()),
ParentHashes: hashes.ToStrings(block.Header.ParentHashes()),
HashMerkleRoot: block.Header.HashMerkleRoot().String(),
AcceptedIDMerkleRoot: block.Header.AcceptedIDMerkleRoot().String(),
UTXOCommitment: block.Header.UTXOCommitment().String(),
Timestamp: block.Header.TimeInMilliseconds(),
Bits: block.Header.Bits(),
Nonce: block.Header.Nonce(),
}
transactions := make([]*RPCTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
transactions[i] = DomainTransactionToRPCTransaction(transaction)
}
return &RPCBlock{
Header: header,
Transactions: transactions,
}
}
// RPCBlockToDomainBlock converts `block` into a DomainBlock
func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
parentHashes := make([]*externalapi.DomainHash, len(block.Header.ParentHashes))
for i, parentHash := range block.Header.ParentHashes {
domainParentHashes, err := externalapi.NewDomainHashFromString(parentHash)
if err != nil {
return nil, err
}
parentHashes[i] = domainParentHashes
}
hashMerkleRoot, err := externalapi.NewDomainHashFromString(block.Header.HashMerkleRoot)
if err != nil {
return nil, err
}
acceptedIDMerkleRoot, err := externalapi.NewDomainHashFromString(block.Header.AcceptedIDMerkleRoot)
if err != nil {
return nil, err
}
utxoCommitment, err := externalapi.NewDomainHashFromString(block.Header.UTXOCommitment)
if err != nil {
return nil, err
}
header := blockheader.NewImmutableBlockHeader(
uint16(block.Header.Version),
parentHashes,
hashMerkleRoot,
acceptedIDMerkleRoot,
utxoCommitment,
block.Header.Timestamp,
block.Header.Bits,
block.Header.Nonce)
transactions := make([]*externalapi.DomainTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
domainTransaction, err := RPCTransactionToDomainTransaction(transaction)
if err != nil {
return nil, err
}
transactions[i] = domainTransaction
}
return &externalapi.DomainBlock{
Header: header,
Transactions: transactions,
}, nil
}

View File

@@ -25,7 +25,7 @@ func NewGetBlockRequestMessage(hash string, includeTransactionVerboseData bool)
// its respective RPC message
type GetBlockResponseMessage struct {
baseMessage
BlockVerboseData *BlockVerboseData
Block *RPCBlock
Error *RPCError
}
@@ -39,70 +39,3 @@ func (msg *GetBlockResponseMessage) Command() MessageCommand {
func NewGetBlockResponseMessage() *GetBlockResponseMessage {
return &GetBlockResponseMessage{}
}
// BlockVerboseData holds verbose data about a block
type BlockVerboseData struct {
Hash string
Version uint16
VersionHex string
HashMerkleRoot string
AcceptedIDMerkleRoot string
UTXOCommitment string
TxIDs []string
TransactionVerboseData []*TransactionVerboseData
Time int64
Nonce uint64
Bits string
Difficulty float64
ParentHashes []string
ChildrenHashes []string
SelectedParentHash string
BlueScore uint64
IsHeaderOnly bool
}
// TransactionVerboseData holds verbose data about a transaction
type TransactionVerboseData struct {
TxID string
Hash string
Size uint64
Version uint16
LockTime uint64
SubnetworkID string
Gas uint64
Payload string
TransactionVerboseInputs []*TransactionVerboseInput
TransactionVerboseOutputs []*TransactionVerboseOutput
BlockHash string
Time uint64
BlockTime uint64
}
// TransactionVerboseInput holds data about a transaction input
type TransactionVerboseInput struct {
TxID string
OutputIndex uint32
ScriptSig *ScriptSig
Sequence uint64
}
// ScriptSig holds data about a script signature
type ScriptSig struct {
Asm string
Hex string
}
// TransactionVerboseOutput holds data about a transaction output
type TransactionVerboseOutput struct {
Value uint64
Index uint32
ScriptPubKey *ScriptPubKeyResult
}
// ScriptPubKeyResult holds data about a script public key
type ScriptPubKeyResult struct {
Hex string
Type string
Address string
Version uint16
}

View File

@@ -23,7 +23,7 @@ func NewGetBlockTemplateRequestMessage(payAddress string) *GetBlockTemplateReque
// its respective RPC message
type GetBlockTemplateResponseMessage struct {
baseMessage
MsgBlock *MsgBlock
Block *RPCBlock
IsSynced bool
Error *RPCError
@@ -35,9 +35,9 @@ func (msg *GetBlockTemplateResponseMessage) Command() MessageCommand {
}
// NewGetBlockTemplateResponseMessage returns a instance of the message
func NewGetBlockTemplateResponseMessage(msgBlock *MsgBlock, isSynced bool) *GetBlockTemplateResponseMessage {
func NewGetBlockTemplateResponseMessage(block *RPCBlock, isSynced bool) *GetBlockTemplateResponseMessage {
return &GetBlockTemplateResponseMessage{
MsgBlock: msgBlock,
Block: block,
IsSynced: isSynced,
}
}

View File

@@ -5,7 +5,7 @@ package appmessage
type GetBlocksRequestMessage struct {
baseMessage
LowHash string
IncludeBlockVerboseData bool
IncludeBlocks bool
IncludeTransactionVerboseData bool
}
@@ -15,11 +15,11 @@ func (msg *GetBlocksRequestMessage) Command() MessageCommand {
}
// NewGetBlocksRequestMessage returns a instance of the message
func NewGetBlocksRequestMessage(lowHash string, includeBlockVerboseData bool,
func NewGetBlocksRequestMessage(lowHash string, includeBlocks bool,
includeTransactionVerboseData bool) *GetBlocksRequestMessage {
return &GetBlocksRequestMessage{
LowHash: lowHash,
IncludeBlockVerboseData: includeBlockVerboseData,
IncludeBlocks: includeBlocks,
IncludeTransactionVerboseData: includeTransactionVerboseData,
}
}
@@ -28,8 +28,8 @@ func NewGetBlocksRequestMessage(lowHash string, includeBlockVerboseData bool,
// its respective RPC message
type GetBlocksResponseMessage struct {
baseMessage
BlockHashes []string
BlockVerboseData []*BlockVerboseData
BlockHashes []string
Blocks []*RPCBlock
Error *RPCError
}
@@ -40,11 +40,6 @@ func (msg *GetBlocksResponseMessage) Command() MessageCommand {
}
// NewGetBlocksResponseMessage returns a instance of the message
func NewGetBlocksResponseMessage(blockHashes []string, blockHexes []string,
blockVerboseData []*BlockVerboseData) *GetBlocksResponseMessage {
return &GetBlocksResponseMessage{
BlockHashes: blockHashes,
BlockVerboseData: blockVerboseData,
}
func NewGetBlocksResponseMessage() *GetBlocksResponseMessage {
return &GetBlocksResponseMessage{}
}

View File

@@ -28,8 +28,8 @@ type GetMempoolEntryResponseMessage struct {
// MempoolEntry represents a transaction in the mempool.
type MempoolEntry struct {
Fee uint64
TransactionVerboseData *TransactionVerboseData
Fee uint64
Transaction *RPCTransaction
}
// Command returns the protocol command string for the message
@@ -38,11 +38,11 @@ func (msg *GetMempoolEntryResponseMessage) Command() MessageCommand {
}
// NewGetMempoolEntryResponseMessage returns a instance of the message
func NewGetMempoolEntryResponseMessage(fee uint64, transactionVerboseData *TransactionVerboseData) *GetMempoolEntryResponseMessage {
func NewGetMempoolEntryResponseMessage(fee uint64, transaction *RPCTransaction) *GetMempoolEntryResponseMessage {
return &GetMempoolEntryResponseMessage{
Entry: &MempoolEntry{
Fee: fee,
TransactionVerboseData: transactionVerboseData,
Fee: fee,
Transaction: transaction,
},
}
}

View File

@@ -37,8 +37,7 @@ func NewNotifyBlockAddedResponseMessage() *NotifyBlockAddedResponseMessage {
// its respective RPC message
type BlockAddedNotificationMessage struct {
baseMessage
Block *MsgBlock
BlockVerboseData *BlockVerboseData
Block *RPCBlock
}
// Command returns the protocol command string for the message
@@ -47,9 +46,8 @@ func (msg *BlockAddedNotificationMessage) Command() MessageCommand {
}
// NewBlockAddedNotificationMessage returns a instance of the message
func NewBlockAddedNotificationMessage(block *MsgBlock, blockVerboseData *BlockVerboseData) *BlockAddedNotificationMessage {
func NewBlockAddedNotificationMessage(block *RPCBlock) *BlockAddedNotificationMessage {
return &BlockAddedNotificationMessage{
Block: block,
BlockVerboseData: blockVerboseData,
Block: block,
}
}

View File

@@ -4,7 +4,7 @@ package appmessage
// its respective RPC message
type SubmitBlockRequestMessage struct {
baseMessage
Block *MsgBlock
Block *RPCBlock
}
// Command returns the protocol command string for the message
@@ -13,7 +13,7 @@ func (msg *SubmitBlockRequestMessage) Command() MessageCommand {
}
// NewSubmitBlockRequestMessage returns a instance of the message
func NewSubmitBlockRequestMessage(block *MsgBlock) *SubmitBlockRequestMessage {
func NewSubmitBlockRequestMessage(block *RPCBlock) *SubmitBlockRequestMessage {
return &SubmitBlockRequestMessage{
Block: block,
}
@@ -57,3 +57,35 @@ func (msg *SubmitBlockResponseMessage) Command() MessageCommand {
func NewSubmitBlockResponseMessage() *SubmitBlockResponseMessage {
return &SubmitBlockResponseMessage{}
}
// RPCBlock is a kaspad block representation meant to be
// used over RPC
type RPCBlock struct {
Header *RPCBlockHeader
Transactions []*RPCTransaction
VerboseData *RPCBlockVerboseData
}
// RPCBlockHeader is a kaspad block header representation meant to be
// used over RPC
type RPCBlockHeader struct {
Version uint32
ParentHashes []string
HashMerkleRoot string
AcceptedIDMerkleRoot string
UTXOCommitment string
Timestamp int64
Bits uint32
Nonce uint64
}
// RPCBlockVerboseData holds verbose data about a block
type RPCBlockVerboseData struct {
Hash string
Difficulty float64
SelectedParentHash string
TransactionIDs []string
IsHeaderOnly bool
BlueScore uint64
ChildrenHashes []string
}

View File

@@ -50,6 +50,7 @@ type RPCTransaction struct {
SubnetworkID string
Gas uint64
Payload string
VerboseData *RPCTransactionVerboseData
}
// RPCTransactionInput is a kaspad transaction input representation
@@ -58,6 +59,7 @@ type RPCTransactionInput struct {
PreviousOutpoint *RPCOutpoint
SignatureScript string
Sequence uint64
VerboseData *RPCTransactionInputVerboseData
}
// RPCScriptPublicKey is a kaspad ScriptPublicKey representation
@@ -71,6 +73,7 @@ type RPCScriptPublicKey struct {
type RPCTransactionOutput struct {
Amount uint64
ScriptPublicKey *RPCScriptPublicKey
VerboseData *RPCTransactionOutputVerboseData
}
// RPCOutpoint is a kaspad outpoint representation meant to be used
@@ -88,3 +91,22 @@ type RPCUTXOEntry struct {
BlockDAAScore uint64
IsCoinbase bool
}
// RPCTransactionVerboseData holds verbose data about a transaction
type RPCTransactionVerboseData struct {
TransactionID string
Hash string
Size uint64
BlockHash string
BlockTime uint64
}
// RPCTransactionInputVerboseData holds data about a transaction input
type RPCTransactionInputVerboseData struct {
}
// RPCTransactionOutputVerboseData holds data about a transaction output
type RPCTransactionOutputVerboseData struct {
ScriptPublicKeyType string
ScriptPublicKeyAddress string
}

View File

@@ -37,12 +37,14 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
newBlockInsertionResults = append(newBlockInsertionResults, unorphaningResult.blockInsertionResult)
}
allAcceptedTransactions := make([]*externalapi.DomainTransaction, 0)
for i, newBlock := range newBlocks {
log.Debugf("OnNewBlock: passing block %s transactions to mining manager", hash)
_, err = f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
acceptedTransactions, err := f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
if err != nil {
return err
}
allAcceptedTransactions = append(allAcceptedTransactions, acceptedTransactions...)
if f.onBlockAddedToDAGHandler != nil {
log.Debugf("OnNewBlock: calling f.onBlockAddedToDAGHandler for block %s", hash)
@@ -54,7 +56,7 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
}
}
return nil
return f.broadcastTransactionsAfterBlockAdded(newBlocks, allAcceptedTransactions)
}
// OnPruningPointUTXOSetOverride calls the handler function whenever the UTXO set
@@ -67,9 +69,9 @@ func (f *FlowContext) OnPruningPointUTXOSetOverride() error {
}
func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
block *externalapi.DomainBlock, transactionsAcceptedToMempool []*externalapi.DomainTransaction) error {
addedBlocks []*externalapi.DomainBlock, transactionsAcceptedToMempool []*externalapi.DomainTransaction) error {
f.updateTransactionsToRebroadcast(block)
f.updateTransactionsToRebroadcast(addedBlocks)
// Don't relay transactions when in IBD.
if f.IsIBDRunning() {

View File

@@ -25,14 +25,17 @@ func (f *FlowContext) AddTransaction(tx *externalapi.DomainTransaction) error {
return f.Broadcast(inv)
}
func (f *FlowContext) updateTransactionsToRebroadcast(block *externalapi.DomainBlock) {
func (f *FlowContext) updateTransactionsToRebroadcast(addedBlocks []*externalapi.DomainBlock) {
f.transactionsToRebroadcastLock.Lock()
defer f.transactionsToRebroadcastLock.Unlock()
// Note: if the block is red, its transactions won't be rebroadcasted
// anymore, although they are not included in the UTXO set.
// This is probably ok, since red blocks are quite rare.
for _, tx := range block.Transactions {
delete(f.transactionsToRebroadcast, *consensushashing.TransactionID(tx))
for _, block := range addedBlocks {
// Note: if a transaction is included in the DAG but not accepted,
// it won't be rebroadcast anymore, although it is not included in
// the UTXO set
for _, tx := range block.Transactions {
delete(f.transactionsToRebroadcast, *consensushashing.TransactionID(tx))
}
}
}

View File

@@ -129,7 +129,7 @@ type fakeRelayInvsContext struct {
rwLock sync.RWMutex
}
func (f *fakeRelayInvsContext) GetBlockChildren(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
func (f *fakeRelayInvsContext) GetBlockRelations(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, *externalapi.DomainHash, []*externalapi.DomainHash, error) {
panic(errors.Errorf("called unimplemented function from test '%s'", f.testName))
}

View File

@@ -69,12 +69,12 @@ func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockIns
return err
}
msgBlock := appmessage.DomainBlockToMsgBlock(block)
blockVerboseData, err := m.context.BuildBlockVerboseData(block.Header, block, false)
rpcBlock := appmessage.DomainBlockToRPCBlock(block)
err = m.context.PopulateBlockWithVerboseData(rpcBlock, block.Header, block, false)
if err != nil {
return err
}
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(msgBlock, blockVerboseData)
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(rpcBlock)
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
}

View File

@@ -2,15 +2,10 @@ package rpccontext
import (
"encoding/hex"
"fmt"
difficultyPackage "github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
"math"
"math/big"
"strconv"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
@@ -26,79 +21,6 @@ import (
// ErrBuildBlockVerboseDataInvalidBlock indicates that a block that was given to BuildBlockVerboseData is invalid.
var ErrBuildBlockVerboseDataInvalidBlock = errors.New("ErrBuildBlockVerboseDataInvalidBlock")
// BuildBlockVerboseData builds a BlockVerboseData from the given blockHeader.
// A block may optionally also be given if it's available in the calling context.
func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, block *externalapi.DomainBlock,
includeTransactionVerboseData bool) (*appmessage.BlockVerboseData, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "BuildBlockVerboseData")
defer onEnd()
hash := consensushashing.HeaderHash(blockHeader)
blockInfo, err := ctx.Domain.Consensus().GetBlockInfo(hash)
if err != nil {
return nil, err
}
if blockInfo.BlockStatus == externalapi.StatusInvalid {
return nil, errors.Wrap(ErrBuildBlockVerboseDataInvalidBlock, "cannot build verbose data for "+
"invalid block")
}
childrenHashes, err := ctx.Domain.Consensus().GetBlockChildren(hash)
if err != nil {
return nil, err
}
result := &appmessage.BlockVerboseData{
Hash: hash.String(),
Version: blockHeader.Version(),
VersionHex: fmt.Sprintf("%08x", blockHeader.Version()),
HashMerkleRoot: blockHeader.HashMerkleRoot().String(),
AcceptedIDMerkleRoot: blockHeader.AcceptedIDMerkleRoot().String(),
UTXOCommitment: blockHeader.UTXOCommitment().String(),
ParentHashes: hashes.ToStrings(blockHeader.ParentHashes()),
ChildrenHashes: hashes.ToStrings(childrenHashes),
Nonce: blockHeader.Nonce(),
Time: blockHeader.TimeInMilliseconds(),
Bits: strconv.FormatInt(int64(blockHeader.Bits()), 16),
Difficulty: ctx.GetDifficultyRatio(blockHeader.Bits(), ctx.Config.ActiveNetParams),
BlueScore: blockInfo.BlueScore,
IsHeaderOnly: blockInfo.BlockStatus == externalapi.StatusHeaderOnly,
}
if blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
if block == nil {
block, err = ctx.Domain.Consensus().GetBlock(hash)
if err != nil {
return nil, err
}
}
txIDs := make([]string, len(block.Transactions))
for i, tx := range block.Transactions {
txIDs[i] = consensushashing.TransactionID(tx).String()
}
result.TxIDs = txIDs
if includeTransactionVerboseData {
transactionVerboseData := make([]*appmessage.TransactionVerboseData, len(block.Transactions))
for i, tx := range block.Transactions {
txID := consensushashing.TransactionID(tx).String()
data, err := ctx.BuildTransactionVerboseData(tx, txID, blockHeader, hash.String())
if err != nil {
return nil, err
}
transactionVerboseData[i] = data
}
result.TransactionVerboseData = transactionVerboseData
}
}
return result, nil
}
// GetDifficultyRatio returns the proof-of-work difficulty as a multiple of the
// minimum difficulty using the passed bits field from the header of a block.
func (ctx *Context) GetDifficultyRatio(bits uint32, params *dagconfig.Params) float64 {
@@ -106,7 +28,7 @@ func (ctx *Context) GetDifficultyRatio(bits uint32, params *dagconfig.Params) fl
// converted back to a number. Note this is not the same as the proof of
// work limit directly because the block difficulty is encoded in a block
// with the compact form which loses precision.
target := difficulty.CompactToBig(bits)
target := difficultyPackage.CompactToBig(bits)
difficulty := new(big.Rat).SetFrac(params.PowMax, target)
diff, _ := difficulty.Float64()
@@ -117,100 +39,125 @@ func (ctx *Context) GetDifficultyRatio(bits uint32, params *dagconfig.Params) fl
return diff
}
// BuildTransactionVerboseData builds a TransactionVerboseData from
// the given parameters
func (ctx *Context) BuildTransactionVerboseData(tx *externalapi.DomainTransaction, txID string,
blockHeader externalapi.BlockHeader, blockHash string) (
*appmessage.TransactionVerboseData, error) {
// PopulateBlockWithVerboseData populates the given `block` with verbose
// data from `domainBlockHeader` and optionally from `domainBlock`
func (ctx *Context) PopulateBlockWithVerboseData(block *appmessage.RPCBlock, domainBlockHeader externalapi.BlockHeader,
	domainBlock *externalapi.DomainBlock, includeTransactionVerboseData bool) error {

	blockHash := consensushashing.HeaderHash(domainBlockHeader)

	blockInfo, err := ctx.Domain.Consensus().GetBlockInfo(blockHash)
	if err != nil {
		return err
	}

	if blockInfo.BlockStatus == externalapi.StatusInvalid {
		return errors.Wrap(ErrBuildBlockVerboseDataInvalidBlock, "cannot build verbose data for "+
			"invalid block")
	}

	_, selectedParentHash, childrenHashes, err := ctx.Domain.Consensus().GetBlockRelations(blockHash)
	if err != nil {
		return err
	}

	block.VerboseData = &appmessage.RPCBlockVerboseData{
		Hash:               blockHash.String(),
		Difficulty:         ctx.GetDifficultyRatio(domainBlockHeader.Bits(), ctx.Config.ActiveNetParams),
		ChildrenHashes:     hashes.ToStrings(childrenHashes),
		SelectedParentHash: selectedParentHash.String(),
		IsHeaderOnly:       blockInfo.BlockStatus == externalapi.StatusHeaderOnly,
		BlueScore:          blockInfo.BlueScore,
	}

	if blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
		return nil
	}

	// Get the block if we didn't receive it previously
	if domainBlock == nil {
		domainBlock, err = ctx.Domain.Consensus().GetBlock(blockHash)
		if err != nil {
			return err
		}
	}

	transactionIDs := make([]string, len(domainBlock.Transactions))
	for i, transaction := range domainBlock.Transactions {
		transactionIDs[i] = consensushashing.TransactionID(transaction).String()
	}
	block.VerboseData.TransactionIDs = transactionIDs

	if includeTransactionVerboseData {
		for _, transaction := range block.Transactions {
			err := ctx.PopulateTransactionWithVerboseData(transaction, domainBlockHeader)
			if err != nil {
				return err
			}
		}
	}

	return nil
}

func (ctx *Context) BuildTransactionVerboseData(tx *externalapi.DomainTransaction, txID string,
	blockHeader externalapi.BlockHeader, blockHash string) (*appmessage.TransactionVerboseData, error) {

	onEnd := logger.LogAndMeasureExecutionTime(log, "BuildTransactionVerboseData")
	defer onEnd()

	txReply := &appmessage.TransactionVerboseData{
		TxID:                      txID,
		Hash:                      consensushashing.TransactionHash(tx).String(),
		Size:                      estimatedsize.TransactionEstimatedSerializedSize(tx),
		TransactionVerboseInputs:  ctx.buildTransactionVerboseInputs(tx),
		TransactionVerboseOutputs: ctx.buildTransactionVerboseOutputs(tx, nil),
		Version:                   tx.Version,
		LockTime:                  tx.LockTime,
		SubnetworkID:              tx.SubnetworkID.String(),
		Gas:                       tx.Gas,
		Payload:                   hex.EncodeToString(tx.Payload),
	}

	if blockHeader != nil {
		txReply.Time = uint64(blockHeader.TimeInMilliseconds())
		txReply.BlockTime = uint64(blockHeader.TimeInMilliseconds())
		txReply.BlockHash = blockHash
	}

	return txReply, nil
}

func (ctx *Context) buildTransactionVerboseInputs(tx *externalapi.DomainTransaction) []*appmessage.TransactionVerboseInput {
	inputs := make([]*appmessage.TransactionVerboseInput, len(tx.Inputs))
	for i, transactionInput := range tx.Inputs {
		// The disassembled string will contain [error] inline
		// if the script doesn't fully parse, so ignore the
		// error here.
		disbuf, _ := txscript.DisasmString(constants.MaxScriptPublicKeyVersion, transactionInput.SignatureScript)

		input := &appmessage.TransactionVerboseInput{}
		input.TxID = transactionInput.PreviousOutpoint.TransactionID.String()
		input.OutputIndex = transactionInput.PreviousOutpoint.Index
		input.Sequence = transactionInput.Sequence
		input.ScriptSig = &appmessage.ScriptSig{
			Asm: disbuf,
			Hex: hex.EncodeToString(transactionInput.SignatureScript),
		}
		inputs[i] = input
	}
	return inputs
}

// buildTransactionVerboseOutputs returns a slice of JSON objects for the outputs of the passed
// transaction.
func (ctx *Context) buildTransactionVerboseOutputs(tx *externalapi.DomainTransaction, filterAddrMap map[string]struct{}) []*appmessage.TransactionVerboseOutput {
	outputs := make([]*appmessage.TransactionVerboseOutput, len(tx.Outputs))
	for i, transactionOutput := range tx.Outputs {
		// Ignore the error here since an error means the script
		// couldn't parse and there is no additional information about
		// it anyways.
		scriptClass, addr, _ := txscript.ExtractScriptPubKeyAddress(
			transactionOutput.ScriptPublicKey, ctx.Config.ActiveNetParams)

		// Encode the addresses while checking if the address passes the
		// filter when needed.
		passesFilter := len(filterAddrMap) == 0
		var encodedAddr string
		if addr != nil {
			encodedAddr = addr.EncodeAddress()

			// If the filter doesn't already pass, make it pass if
			// the address exists in the filter.
			if _, exists := filterAddrMap[encodedAddr]; exists {
				passesFilter = true
			}
		}

		if !passesFilter {
			continue
		}

		output := &appmessage.TransactionVerboseOutput{}
		output.Index = uint32(i)
		output.Value = transactionOutput.Value
		output.ScriptPubKey = &appmessage.ScriptPubKeyResult{
			Version: transactionOutput.ScriptPublicKey.Version,
			Address: encodedAddr,
			Hex:     hex.EncodeToString(transactionOutput.ScriptPublicKey.Script),
			Type:    scriptClass.String(),
		}
		outputs[i] = output
	}
	return outputs
}
// PopulateTransactionWithVerboseData populates the given `transaction` with
// verbose data from `domainTransaction`
func (ctx *Context) PopulateTransactionWithVerboseData(
transaction *appmessage.RPCTransaction, domainBlockHeader externalapi.BlockHeader) error {
domainTransaction, err := appmessage.RPCTransactionToDomainTransaction(transaction)
if err != nil {
return err
}
transaction.VerboseData = &appmessage.RPCTransactionVerboseData{
TransactionID: consensushashing.TransactionID(domainTransaction).String(),
Hash: consensushashing.TransactionHash(domainTransaction).String(),
Size: estimatedsize.TransactionEstimatedSerializedSize(domainTransaction),
}
if domainBlockHeader != nil {
transaction.VerboseData.BlockHash = consensushashing.HeaderHash(domainBlockHeader).String()
transaction.VerboseData.BlockTime = uint64(domainBlockHeader.TimeInMilliseconds())
}
for _, input := range transaction.Inputs {
ctx.populateTransactionInputWithVerboseData(input)
}
for _, output := range transaction.Outputs {
err := ctx.populateTransactionOutputWithVerboseData(output)
if err != nil {
return err
}
}
return nil
}
func (ctx *Context) populateTransactionInputWithVerboseData(transactionInput *appmessage.RPCTransactionInput) {
transactionInput.VerboseData = &appmessage.RPCTransactionInputVerboseData{}
}
func (ctx *Context) populateTransactionOutputWithVerboseData(transactionOutput *appmessage.RPCTransactionOutput) error {
scriptPublicKey, err := hex.DecodeString(transactionOutput.ScriptPublicKey.Script)
if err != nil {
return err
}
domainScriptPublicKey := &externalapi.ScriptPublicKey{
Script: scriptPublicKey,
Version: transactionOutput.ScriptPublicKey.Version,
}
// Ignore the error here since an error means the script
// couldn't be parsed and there's no additional information about
// it anyways
scriptPublicKeyType, scriptPublicKeyAddress, _ := txscript.ExtractScriptPubKeyAddress(
domainScriptPublicKey, ctx.Config.ActiveNetParams)
var encodedScriptPublicKeyAddress string
if scriptPublicKeyAddress != nil {
encodedScriptPublicKeyAddress = scriptPublicKeyAddress.EncodeAddress()
}
transactionOutput.VerboseData = &appmessage.RPCTransactionOutputVerboseData{
ScriptPublicKeyType: scriptPublicKeyType.String(),
ScriptPublicKeyAddress: encodedScriptPublicKeyAddress,
}
return nil
}
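The buildTransactionVerboseOutputs helper above applies an address filter to each output: an empty filter passes everything, and otherwise an output passes only if its encoded address is a member of the filter map. That rule can be isolated as a small standalone sketch (`passesFilter` here is an illustrative function, not part of kaspad's API):

```go
package main

import "fmt"

// passesFilter mirrors the filter rule in buildTransactionVerboseOutputs:
// an empty (or nil) filter passes every output; a non-empty filter passes
// only outputs whose address is in the map.
func passesFilter(filterAddrMap map[string]struct{}, encodedAddr string) bool {
	if len(filterAddrMap) == 0 {
		return true
	}
	_, exists := filterAddrMap[encodedAddr]
	return exists
}

func main() {
	filter := map[string]struct{}{"kaspa:qq0example": {}}
	fmt.Println(passesFilter(nil, "kaspa:qq0example")) // empty filter: passes
	fmt.Println(passesFilter(filter, "kaspa:other"))   // not a member: filtered out
}
```

Note that when the address fails to parse (`addr == nil` in the original), `encodedAddr` stays empty, so the output is filtered out whenever a non-empty filter is in effect.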


@@ -26,10 +26,12 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
errorMessage.Error = appmessage.RPCErrorf("Block %s not found", hash)
return errorMessage, nil
}
block := &externalapi.DomainBlock{Header: header}
response := appmessage.NewGetBlockResponseMessage()
response.Block = appmessage.DomainBlockToRPCBlock(block)
blockVerboseData, err := context.BuildBlockVerboseData(header, nil, getBlockRequest.IncludeTransactionVerboseData)
err = context.PopulateBlockWithVerboseData(response.Block, header, nil, getBlockRequest.IncludeTransactionVerboseData)
if err != nil {
if errors.Is(err, rpccontext.ErrBuildBlockVerboseDataInvalidBlock) {
errorMessage := &appmessage.GetBlockResponseMessage{}
@@ -39,7 +41,5 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
return nil, err
}
response.BlockVerboseData = blockVerboseData
return response, nil
}


@@ -31,12 +31,12 @@ func HandleGetBlockTemplate(context *rpccontext.Context, _ *router.Router, reque
if err != nil {
return nil, err
}
msgBlock := appmessage.DomainBlockToMsgBlock(templateBlock)
rpcBlock := appmessage.DomainBlockToRPCBlock(templateBlock)
isSynced, err := context.ProtocolManager.ShouldMine()
if err != nil {
return nil, err
}
return appmessage.NewGetBlockTemplateResponseMessage(msgBlock, isSynced), nil
return appmessage.NewGetBlockTemplateResponseMessage(rpcBlock, isSynced), nil
}


@@ -18,8 +18,8 @@ const (
func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
getBlocksRequest := request.(*appmessage.GetBlocksRequestMessage)
// Validate that user didn't set IncludeTransactionVerboseData without setting IncludeBlockVerboseData
if !getBlocksRequest.IncludeBlockVerboseData && getBlocksRequest.IncludeTransactionVerboseData {
// Validate that user didn't set IncludeTransactionVerboseData without setting IncludeBlocks
if !getBlocksRequest.IncludeBlocks && getBlocksRequest.IncludeTransactionVerboseData {
return &appmessage.GetBlocksResponseMessage{
Error: appmessage.RPCErrorf(
"If includeTransactionVerboseData is set, then includeBlockVerboseData must be set as well"),
@@ -55,8 +55,7 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
if err != nil {
return nil, err
}
blockHashes, highHash, err := context.Domain.Consensus().GetHashesBetween(
lowHash, virtualSelectedParent, maxBlocksInGetBlocksResponse)
blockHashes, highHash, err := context.Domain.Consensus().GetHashesBetween(lowHash, virtualSelectedParent, maxBlocksInGetBlocksResponse)
if err != nil {
return nil, err
}
@@ -82,26 +81,23 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
}
// Prepare the response
response := &appmessage.GetBlocksResponseMessage{
BlockHashes: hashes.ToStrings(blockHashes),
}
// Retrieve all block data in case BlockVerboseData was requested
if getBlocksRequest.IncludeBlockVerboseData {
response.BlockVerboseData = make([]*appmessage.BlockVerboseData, len(blockHashes))
response := appmessage.NewGetBlocksResponseMessage()
response.BlockHashes = hashes.ToStrings(blockHashes)
if getBlocksRequest.IncludeBlocks {
blocks := make([]*appmessage.RPCBlock, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := context.Domain.Consensus().GetBlockHeader(blockHash)
if err != nil {
return nil, err
}
blockVerboseData, err := context.BuildBlockVerboseData(blockHeader, nil,
getBlocksRequest.IncludeTransactionVerboseData)
block := &externalapi.DomainBlock{Header: blockHeader}
blocks[i] = appmessage.DomainBlockToRPCBlock(block)
err = context.PopulateBlockWithVerboseData(blocks[i], blockHeader, nil, getBlocksRequest.IncludeTransactionVerboseData)
if err != nil {
return nil, err
}
response.BlockVerboseData[i] = blockVerboseData
}
}
return response, nil
}


@@ -5,6 +5,8 @@ import (
"sort"
"testing"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/app/rpc/rpchandlers"
@@ -27,6 +29,8 @@ func (d fakeDomain) MiningManager() miningmanager.MiningManager { return nil }
func TestHandleGetBlocks(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
stagingArea := model.NewStagingArea()
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(params, false, "TestHandleGetBlocks")
if err != nil {
@@ -55,7 +59,7 @@ func TestHandleGetBlocks(t *testing.T) {
antipast := make([]*externalapi.DomainHash, 0, len(slice))
for _, blockHash := range slice {
isInPastOfPovBlock, err := tc.DAGTopologyManager().IsAncestorOf(blockHash, povBlock)
isInPastOfPovBlock, err := tc.DAGTopologyManager().IsAncestorOf(stagingArea, blockHash, povBlock)
if err != nil {
t.Fatalf("Failed doing reachability check: '%v'", err)
}
@@ -87,7 +91,7 @@ func TestHandleGetBlocks(t *testing.T) {
}
splitBlocks = append(splitBlocks, blockHash)
}
sort.Sort(sort.Reverse(testutils.NewTestGhostDAGSorter(splitBlocks, tc, t)))
sort.Sort(sort.Reverse(testutils.NewTestGhostDAGSorter(stagingArea, splitBlocks, tc, t)))
restOfSplitBlocks, selectedParent := splitBlocks[:len(splitBlocks)-1], splitBlocks[len(splitBlocks)-1]
expectedOrder = append(expectedOrder, selectedParent)
expectedOrder = append(expectedOrder, restOfSplitBlocks...)


@@ -3,25 +3,22 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleGetMempoolEntries handles the respectively named RPC command
func HandleGetMempoolEntries(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
transactions := context.Domain.MiningManager().AllTransactions()
entries := make([]*appmessage.MempoolEntry, 0, len(transactions))
for _, tx := range transactions {
transactionVerboseData, err := context.BuildTransactionVerboseData(
tx, consensushashing.TransactionID(tx).String(), nil, "")
for _, transaction := range transactions {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
entries = append(entries, &appmessage.MempoolEntry{
Fee: tx.Fee,
TransactionVerboseData: transactionVerboseData,
Fee: transaction.Fee,
Transaction: rpcTransaction,
})
}


@@ -24,12 +24,11 @@ func HandleGetMempoolEntry(context *rpccontext.Context, _ *router.Router, reques
errorMessage.Error = appmessage.RPCErrorf("Transaction %s was not found", transactionID)
return errorMessage, nil
}
transactionVerboseData, err := context.BuildTransactionVerboseData(
transaction, getMempoolEntryRequest.TxID, nil, "")
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err = context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
return appmessage.NewGetMempoolEntryResponseMessage(transaction.Fee, transactionVerboseData), nil
return appmessage.NewGetMempoolEntryResponseMessage(transaction.Fee, rpcTransaction), nil
}


@@ -14,9 +14,6 @@ import (
func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
submitBlockRequest := request.(*appmessage.SubmitBlockRequestMessage)
msgBlock := submitBlockRequest.Block
domainBlock := appmessage.MsgBlockToDomainBlock(msgBlock)
if context.ProtocolManager.IsIBDRunning() {
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Block not submitted - IBD is running"),
@@ -24,7 +21,15 @@ func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request ap
}, nil
}
err := context.ProtocolManager.AddBlock(domainBlock)
domainBlock, err := appmessage.RPCBlockToDomainBlock(submitBlockRequest.Block)
if err != nil {
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Could not parse block: %s", err),
RejectReason: appmessage.RejectReasonBlockInvalid,
}, nil
}
err = context.ProtocolManager.AddBlock(domainBlock)
if err != nil {
isProtocolOrRuleError := errors.As(err, &ruleerrors.RuleError{}) || errors.As(err, &protocolerrors.ProtocolError{})
if !isProtocolOrRuleError {

cmd/genkeypair/README.md Normal file

@@ -0,0 +1,9 @@
genkeypair
==========

A tool for generating private-key/address pairs.

Note: this tool prints unencrypted private keys, so it is not recommended for
day-to-day use; it is intended mainly for tests.

To manage your funds, it is recommended to use [kaspawallet](../kaspawallet) instead.

cmd/genkeypair/config.go Normal file

@@ -0,0 +1,26 @@
package main
import (
"github.com/jessevdk/go-flags"
"github.com/kaspanet/kaspad/infrastructure/config"
)
type configFlags struct {
config.NetworkFlags
}
func parseConfig() (*configFlags, error) {
cfg := &configFlags{}
parser := flags.NewParser(cfg, flags.PrintErrors|flags.HelpFlag)
_, err := parser.Parse()
if err != nil {
return nil, err
}
err = cfg.ResolveNetwork(parser)
if err != nil {
return nil, err
}
return cfg, nil
}

cmd/genkeypair/main.go Normal file

@@ -0,0 +1,27 @@
package main
import (
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/util"
)
func main() {
cfg, err := parseConfig()
if err != nil {
panic(err)
}
privateKey, publicKey, err := libkaspawallet.CreateKeyPair(false)
if err != nil {
panic(err)
}
addr, err := util.NewAddressPublicKey(publicKey, cfg.NetParams().Prefix)
if err != nil {
panic(err)
}
fmt.Printf("Private key: %x\n", privateKey)
fmt.Printf("Address: %s\n", addr)
}


@@ -5,72 +5,32 @@ import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"github.com/pkg/errors"
"sync"
"sync/atomic"
"time"
)
const minerTimeout = 10 * time.Second
type minerClient struct {
isReconnecting uint32
clientLock sync.RWMutex
rpcClient *rpcclient.RPCClient
*rpcclient.RPCClient
cfg *configFlags
blockAddedNotificationChan chan struct{}
}
func (mc *minerClient) safeRPCClient() *rpcclient.RPCClient {
mc.clientLock.RLock()
defer mc.clientLock.RUnlock()
return mc.rpcClient
}
func (mc *minerClient) reconnect() {
swapped := atomic.CompareAndSwapUint32(&mc.isReconnecting, 0, 1)
if !swapped {
return
}
defer atomic.StoreUint32(&mc.isReconnecting, 0)
mc.clientLock.Lock()
defer mc.clientLock.Unlock()
retryDuration := time.Second
const maxRetryDuration = time.Minute
log.Infof("Reconnecting RPC connection")
for {
err := mc.connect()
if err == nil {
return
}
if retryDuration < time.Minute {
retryDuration *= 2
} else {
retryDuration = maxRetryDuration
}
log.Errorf("Got error '%s' while reconnecting. Trying again in %s", err, retryDuration)
time.Sleep(retryDuration)
}
}
func (mc *minerClient) connect() error {
rpcAddress, err := mc.cfg.NetParams().NormalizeRPCServerAddress(mc.cfg.RPCServer)
if err != nil {
return err
}
mc.rpcClient, err = rpcclient.NewRPCClient(rpcAddress)
rpcClient, err := rpcclient.NewRPCClient(rpcAddress)
if err != nil {
return err
}
mc.rpcClient.SetTimeout(minerTimeout)
mc.rpcClient.SetLogger(backendLog, logger.LevelTrace)
mc.RPCClient = rpcClient
mc.SetTimeout(minerTimeout)
mc.SetLogger(backendLog, logger.LevelTrace)
err = mc.rpcClient.RegisterForBlockAddedNotifications(func(_ *appmessage.BlockAddedNotificationMessage) {
err = mc.RegisterForBlockAddedNotifications(func(_ *appmessage.BlockAddedNotificationMessage) {
select {
case mc.blockAddedNotificationChan <- struct{}{}:
default:


@@ -40,7 +40,7 @@ func main() {
if err != nil {
panic(errors.Wrap(err, "error connecting to the RPC server"))
}
defer client.safeRPCClient().Disconnect()
defer client.Disconnect()
miningAddr, err := util.DecodeAddress(cfg.MiningAddr, cfg.ActiveNetParams.Prefix)
if err != nil {


@@ -114,13 +114,17 @@ func logHashRate() {
func handleFoundBlock(client *minerClient, block *externalapi.DomainBlock) error {
blockHash := consensushashing.BlockHash(block)
log.Infof("Submitting block %s to %s", blockHash, client.safeRPCClient().Address())
log.Infof("Submitting block %s to %s", blockHash, client.Address())
rejectReason, err := client.safeRPCClient().SubmitBlock(block)
rejectReason, err := client.SubmitBlock(block)
if err != nil {
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while submitting block %s to %s: %s", blockHash, client.safeRPCClient().Address(), err)
client.reconnect()
log.Warnf("Got timeout while submitting block %s to %s: %s", blockHash, client.Address(), err)
return client.Reconnect()
}
if nativeerrors.Is(err, router.ErrRouteClosed) {
log.Debugf("Got route is closed while requesting block template from %s. "+
"The client is most likely reconnecting", client.Address())
return nil
}
if rejectReason == appmessage.RejectReasonIsInIBD {
@@ -129,7 +133,7 @@ func handleFoundBlock(client *minerClient, block *externalapi.DomainBlock) error
time.Sleep(waitTime)
return nil
}
return errors.Wrapf(err, "Error submitting block %s to %s", blockHash, client.safeRPCClient().Address())
return errors.Wrapf(err, "Error submitting block %s to %s", blockHash, client.Address())
}
return nil
}
@@ -188,17 +192,29 @@ func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
func templatesLoop(client *minerClient, miningAddr util.Address, errChan chan error) {
getBlockTemplate := func() {
template, err := client.safeRPCClient().GetBlockTemplate(miningAddr.String())
template, err := client.GetBlockTemplate(miningAddr.String())
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while requesting block template from %s: %s", client.safeRPCClient().Address(), err)
client.reconnect()
log.Warnf("Got timeout while requesting block template from %s: %s", client.Address(), err)
reconnectErr := client.Reconnect()
if reconnectErr != nil {
errChan <- reconnectErr
}
return
}
if nativeerrors.Is(err, router.ErrRouteClosed) {
log.Debugf("Got route is closed while requesting block template from %s. "+
"The client is most likely reconnecting", client.Address())
return
}
if err != nil {
errChan <- errors.Wrapf(err, "Error getting block template from %s", client.safeRPCClient().Address())
errChan <- errors.Wrapf(err, "Error getting block template from %s", client.Address())
return
}
err = templatemanager.Set(template)
if err != nil {
errChan <- errors.Wrapf(err, "Error setting block template from %s", client.Address())
return
}
templatemanager.Set(template)
}
getBlockTemplate()


@@ -23,10 +23,14 @@ func Get() (*externalapi.DomainBlock, bool) {
}
// Set sets the current template to work on
func Set(template *appmessage.GetBlockTemplateResponseMessage) {
block := appmessage.MsgBlockToDomainBlock(template.MsgBlock)
func Set(template *appmessage.GetBlockTemplateResponseMessage) error {
block, err := appmessage.RPCBlockToDomainBlock(template.Block)
if err != nil {
return err
}
lock.Lock()
defer lock.Unlock()
currentTemplate = block
isSynced = template.IsSynced
return nil
}
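Set and Get in templatemanager guard the shared template with a single lock, so a miner thread never observes a half-updated template. A minimal self-contained sketch of the same pattern (`templateHolder` and its string payload are illustrative stand-ins for kaspad's types):

```go
package main

import (
	"fmt"
	"sync"
)

// templateHolder guards the current template and sync status with one
// mutex, so Set replaces both fields atomically with respect to Get.
type templateHolder struct {
	lock            sync.Mutex
	currentTemplate string
	isSynced        bool
}

func (t *templateHolder) Set(template string, synced bool) {
	t.lock.Lock()
	defer t.lock.Unlock()
	t.currentTemplate = template
	t.isSynced = synced
}

func (t *templateHolder) Get() (string, bool) {
	t.lock.Lock()
	defer t.lock.Unlock()
	return t.currentTemplate, t.isSynced
}

func main() {
	h := &templateHolder{}
	h.Set("block-template", true)
	template, synced := h.Get()
	fmt.Println(template, synced)
}
```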


@@ -2,17 +2,28 @@ package main
import (
"fmt"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/util"
)
func balance(conf *balanceConfig) error {
client, err := rpcclient.NewRPCClient(conf.RPCServer)
client, err := connectToRPC(conf.NetParams(), conf.RPCServer)
if err != nil {
return err
}
getUTXOsByAddressesResponse, err := client.GetUTXOsByAddresses([]string{conf.Address})
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
addr, err := libkaspawallet.Address(conf.NetParams(), keysFile.PublicKeys, keysFile.MinimumSignatures, keysFile.ECDSA)
if err != nil {
return err
}
getUTXOsByAddressesResponse, err := client.GetUTXOsByAddresses([]string{addr.String()})
if err != nil {
return err
}


@@ -0,0 +1,34 @@
package main
import (
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
)
func broadcast(conf *broadcastConfig) error {
client, err := connectToRPC(conf.NetParams(), conf.RPCServer)
if err != nil {
return err
}
psTxBytes, err := hex.DecodeString(conf.Transaction)
if err != nil {
return err
}
tx, err := libkaspawallet.ExtractTransaction(psTxBytes)
if err != nil {
return err
}
transactionID, err := sendTransaction(client, tx)
if err != nil {
return err
}
fmt.Println("Transaction was sent successfully")
fmt.Printf("Transaction ID: \t%s\n", transactionID)
return nil
}


@@ -3,6 +3,8 @@ package main
import (
"fmt"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"os"
)
@@ -19,3 +21,12 @@ func printErrorAndExit(err error) {
fmt.Fprintf(os.Stderr, "%s\n", err)
os.Exit(1)
}
func connectToRPC(params *dagconfig.Params, rpcServer string) (*rpcclient.RPCClient, error) {
rpcAddress, err := params.NormalizeRPCServerAddress(rpcServer)
if err != nil {
return nil, err
}
return rpcclient.NewRPCClient(rpcAddress)
}

cmd/kaspawallet/config.go Normal file

@@ -0,0 +1,198 @@
package main
import (
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/pkg/errors"
"os"
"github.com/jessevdk/go-flags"
)
const (
createSubCmd = "create"
balanceSubCmd = "balance"
sendSubCmd = "send"
createUnsignedTransactionSubCmd = "create-unsigned-transaction"
signSubCmd = "sign"
broadcastSubCmd = "broadcast"
showAddressSubCmd = "show-address"
dumpUnencryptedDataSubCmd = "dump-unencrypted-data"
)
type configFlags struct {
config.NetworkFlags
}
type createConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
MinimumSignatures uint32 `long:"min-signatures" short:"m" description:"Minimum required signatures" default:"1"`
NumPrivateKeys uint32 `long:"num-private-keys" short:"k" description:"Number of private keys" default:"1"`
NumPublicKeys uint32 `long:"num-public-keys" short:"n" description:"Total number of keys" default:"1"`
ECDSA bool `long:"ecdsa" description:"Create an ECDSA wallet"`
Import bool `long:"import" short:"i" description:"Import private keys (as opposed to generating them)"`
config.NetworkFlags
}
type balanceConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
config.NetworkFlags
}
type sendConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
config.NetworkFlags
}
type createUnsignedTransactionConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
config.NetworkFlags
}
type signConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Transaction string `long:"transaction" short:"t" description:"The unsigned transaction to sign on (encoded in hex)" required:"true"`
config.NetworkFlags
}
type broadcastConfig struct {
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
Transaction string `long:"transaction" short:"t" description:"The signed transaction to broadcast (encoded in hex)" required:"true"`
config.NetworkFlags
}
type showAddressConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
config.NetworkFlags
}
type dumpUnencryptedDataConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
config.NetworkFlags
}
func parseCommandLine() (subCommand string, config interface{}) {
cfg := &configFlags{}
parser := flags.NewParser(cfg, flags.PrintErrors|flags.HelpFlag)
createConf := &createConfig{}
parser.AddCommand(createSubCmd, "Creates a new wallet",
"Creates a private key and 3 public addresses, one for each of MainNet, TestNet and DevNet", createConf)
balanceConf := &balanceConfig{}
parser.AddCommand(balanceSubCmd, "Shows the balance of a public address",
"Shows the balance for a public address in Kaspa", balanceConf)
sendConf := &sendConfig{}
parser.AddCommand(sendSubCmd, "Sends a Kaspa transaction to a public address",
"Sends a Kaspa transaction to a public address", sendConf)
createUnsignedTransactionConf := &createUnsignedTransactionConfig{}
parser.AddCommand(createUnsignedTransactionSubCmd, "Create an unsigned Kaspa transaction",
"Create an unsigned Kaspa transaction", createUnsignedTransactionConf)
signConf := &signConfig{}
parser.AddCommand(signSubCmd, "Sign the given partially signed transaction",
"Sign the given partially signed transaction", signConf)
broadcastConf := &broadcastConfig{}
parser.AddCommand(broadcastSubCmd, "Broadcast the given transaction",
"Broadcast the given transaction", broadcastConf)
showAddressConf := &showAddressConfig{}
parser.AddCommand(showAddressSubCmd, "Shows the public address of the current wallet",
"Shows the public address of the current wallet", showAddressConf)
dumpUnencryptedDataConf := &dumpUnencryptedDataConfig{}
parser.AddCommand(dumpUnencryptedDataSubCmd, "Prints the unencrypted wallet data",
"Prints the unencrypted wallet data, including its private keys. Anyone who sees it can access "+
"the funds. Use only in a safe environment.", dumpUnencryptedDataConf)
_, err := parser.Parse()
if err != nil {
var flagsErr *flags.Error
if ok := errors.As(err, &flagsErr); ok && flagsErr.Type == flags.ErrHelp {
os.Exit(0)
} else {
os.Exit(1)
}
return "", nil
}
switch parser.Command.Active.Name {
case createSubCmd:
combineNetworkFlags(&createConf.NetworkFlags, &cfg.NetworkFlags)
err := createConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = createConf
case balanceSubCmd:
combineNetworkFlags(&balanceConf.NetworkFlags, &cfg.NetworkFlags)
err := balanceConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = balanceConf
case sendSubCmd:
combineNetworkFlags(&sendConf.NetworkFlags, &cfg.NetworkFlags)
err := sendConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = sendConf
case createUnsignedTransactionSubCmd:
combineNetworkFlags(&createUnsignedTransactionConf.NetworkFlags, &cfg.NetworkFlags)
err := createUnsignedTransactionConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = createUnsignedTransactionConf
case signSubCmd:
combineNetworkFlags(&signConf.NetworkFlags, &cfg.NetworkFlags)
err := signConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = signConf
case broadcastSubCmd:
combineNetworkFlags(&broadcastConf.NetworkFlags, &cfg.NetworkFlags)
err := broadcastConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = broadcastConf
case showAddressSubCmd:
combineNetworkFlags(&showAddressConf.NetworkFlags, &cfg.NetworkFlags)
err := showAddressConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = showAddressConf
case dumpUnencryptedDataSubCmd:
combineNetworkFlags(&dumpUnencryptedDataConf.NetworkFlags, &cfg.NetworkFlags)
err := dumpUnencryptedDataConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = dumpUnencryptedDataConf
}
return parser.Command.Active.Name, config
}
func combineNetworkFlags(dst, src *config.NetworkFlags) {
dst.Testnet = dst.Testnet || src.Testnet
dst.Simnet = dst.Simnet || src.Simnet
dst.Devnet = dst.Devnet || src.Devnet
if dst.OverrideDAGParamsFile == "" {
dst.OverrideDAGParamsFile = src.OverrideDAGParamsFile
}
}

cmd/kaspawallet/create.go Normal file

@@ -0,0 +1,69 @@
package main
import (
"bufio"
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/pkg/errors"
"os"
)
func create(conf *createConfig) error {
var encryptedPrivateKeys []*keys.EncryptedPrivateKey
var publicKeys [][]byte
var err error
if !conf.Import {
encryptedPrivateKeys, publicKeys, err = keys.CreateKeyPairs(conf.NumPrivateKeys, conf.ECDSA)
} else {
encryptedPrivateKeys, publicKeys, err = keys.ImportKeyPairs(conf.NumPrivateKeys)
}
if err != nil {
return err
}
for i, publicKey := range publicKeys {
fmt.Printf("Public key of private key #%d:\n%x\n\n", i+1, publicKey)
}
reader := bufio.NewReader(os.Stdin)
for i := conf.NumPrivateKeys; i < conf.NumPublicKeys; i++ {
fmt.Printf("Enter public key #%d here:\n", i+1)
line, isPrefix, err := reader.ReadLine()
if err != nil {
return err
}
fmt.Println()
if isPrefix {
return errors.Errorf("Public key is too long")
}
publicKey, err := hex.DecodeString(string(line))
if err != nil {
return err
}
publicKeys = append(publicKeys, publicKey)
}
err = keys.WriteKeysFile(conf.KeysFile, encryptedPrivateKeys, publicKeys, conf.MinimumSignatures, conf.ECDSA)
if err != nil {
return err
}
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
addr, err := libkaspawallet.Address(conf.NetParams(), keysFile.PublicKeys, keysFile.MinimumSignatures, keysFile.ECDSA)
if err != nil {
return err
}
fmt.Printf("The wallet address is:\n%s\n", addr)
return nil
}


@@ -0,0 +1,61 @@
package main
import (
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/util"
)
func createUnsignedTransaction(conf *createUnsignedTransactionConfig) error {
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
toAddress, err := util.DecodeAddress(conf.ToAddress, conf.NetParams().Prefix)
if err != nil {
return err
}
fromAddress, err := libkaspawallet.Address(conf.NetParams(), keysFile.PublicKeys, keysFile.MinimumSignatures, keysFile.ECDSA)
if err != nil {
return err
}
client, err := connectToRPC(conf.NetParams(), conf.RPCServer)
if err != nil {
return err
}
utxos, err := fetchSpendableUTXOs(conf.NetParams(), client, fromAddress.String())
if err != nil {
return err
}
sendAmountSompi := uint64(conf.SendAmount * util.SompiPerKaspa)
const feePerInput = 1000
selectedUTXOs, changeSompi, err := selectUTXOs(utxos, sendAmountSompi, feePerInput)
if err != nil {
return err
}
psTx, err := libkaspawallet.CreateUnsignedTransaction(keysFile.PublicKeys,
keysFile.MinimumSignatures,
keysFile.ECDSA,
[]*libkaspawallet.Payment{{
Address: toAddress,
Amount: sendAmountSompi,
}, {
Address: fromAddress,
Amount: changeSompi,
}}, selectedUTXOs)
if err != nil {
return err
}
fmt.Println("Created unsigned transaction")
fmt.Println(hex.EncodeToString(psTx))
return nil
}
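`selectUTXOs` itself is not shown in this diff, but the arithmetic it must satisfy follows from the call site above: the selected inputs cover the send amount plus a flat `feePerInput` per input, and the remainder becomes the change output paid back to `fromAddress`. A greedy sketch under those assumptions (`selectSketch` and the greedy strategy are illustrative, not the wallet's actual implementation):

```go
package main

import "fmt"

const sompiPerKaspa = 100_000_000 // matches util.SompiPerKaspa
const feePerInput = 1000          // sompi, as in the snippet above

// selectSketch greedily takes UTXO values until they cover the send
// amount plus the accumulated per-input fee, returning the change.
func selectSketch(utxos []uint64, send uint64) (selected []uint64, change uint64, err error) {
	var total, fee uint64
	for _, v := range utxos {
		selected = append(selected, v)
		total += v
		fee += feePerInput
		if total >= send+fee {
			return selected, total - send - fee, nil
		}
	}
	return nil, 0, fmt.Errorf("insufficient funds: have %d, need %d+fees", total, send)
}

func main() {
	utxos := []uint64{2 * sompiPerKaspa, 1 * sompiPerKaspa}
	sel, change, _ := selectSketch(utxos, 150_000_000) // send 1.5 KAS
	fmt.Println(len(sel), change)                      // 1 input, 49999000 sompi change
}
```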


@@ -0,0 +1,69 @@
package main
import (
"bufio"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/pkg/errors"
"os"
)
func dumpUnencryptedData(conf *dumpUnencryptedDataConfig) error {
err := confirmDump()
if err != nil {
return err
}
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
privateKeys, err := keysFile.DecryptPrivateKeys()
if err != nil {
return err
}
privateKeysPublicKeys := make(map[string]struct{})
for i, privateKey := range privateKeys {
fmt.Printf("Private key #%d:\n%x\n\n", i+1, privateKey)
publicKey, err := libkaspawallet.PublicKeyFromPrivateKey(privateKey)
if err != nil {
return err
}
privateKeysPublicKeys[string(publicKey)] = struct{}{}
}
i := 1
for _, publicKey := range keysFile.PublicKeys {
if _, exists := privateKeysPublicKeys[string(publicKey)]; exists {
continue
}
fmt.Printf("Public key #%d:\n%x\n\n", i, publicKey)
i++
}
fmt.Printf("Minimum number of signatures: %d\n", keysFile.MinimumSignatures)
return nil
}
func confirmDump() error {
reader := bufio.NewReader(os.Stdin)
fmt.Printf("This operation will print your unencrypted keys on the screen. Anyone that sees this information " +
"will be able to steal your funds. Are you sure you want to proceed (y/N)? ")
line, isPrefix, err := reader.ReadLine()
if err != nil {
return err
}
fmt.Println()
if isPrefix || string(line) != "y" {
return errors.Errorf("Dump aborted by user")
}
return nil
}


@@ -0,0 +1,110 @@
package keys
import (
"bufio"
"crypto/rand"
"crypto/subtle"
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/pkg/errors"
"os"
)
// CreateKeyPairs generates `numKeys` key pairs.
func CreateKeyPairs(numKeys uint32, ecdsa bool) (encryptedPrivateKeys []*EncryptedPrivateKey, publicKeys [][]byte, err error) {
return createKeyPairsFromFunction(numKeys, func(_ uint32) ([]byte, []byte, error) {
return libkaspawallet.CreateKeyPair(ecdsa)
})
}
// ImportKeyPairs reads `numKeys` private keys from stdin and derives key pairs from them.
func ImportKeyPairs(numKeys uint32) (encryptedPrivateKeys []*EncryptedPrivateKey, publicKeys [][]byte, err error) {
return createKeyPairsFromFunction(numKeys, func(keyIndex uint32) ([]byte, []byte, error) {
fmt.Printf("Enter private key #%d here:\n", keyIndex+1)
reader := bufio.NewReader(os.Stdin)
line, isPrefix, err := reader.ReadLine()
if err != nil {
return nil, nil, err
}
if isPrefix {
return nil, nil, errors.Errorf("Private key is too long")
}
privateKey, err := hex.DecodeString(string(line))
if err != nil {
return nil, nil, err
}
publicKey, err := libkaspawallet.PublicKeyFromPrivateKey(privateKey)
if err != nil {
return nil, nil, err
}
return privateKey, publicKey, nil
})
}
func createKeyPairsFromFunction(numKeys uint32, keyPairFunction func(keyIndex uint32) ([]byte, []byte, error)) (
encryptedPrivateKeys []*EncryptedPrivateKey, publicKeys [][]byte, err error) {
password := getPassword("Enter password for the key file:")
confirmPassword := getPassword("Confirm password:")
if subtle.ConstantTimeCompare(password, confirmPassword) != 1 {
return nil, nil, errors.New("Passwords are not identical")
}
encryptedPrivateKeys = make([]*EncryptedPrivateKey, 0, numKeys)
for i := uint32(0); i < numKeys; i++ {
privateKey, publicKey, err := keyPairFunction(i)
if err != nil {
return nil, nil, err
}
publicKeys = append(publicKeys, publicKey)
encryptedPrivateKey, err := encryptPrivateKey(privateKey, password)
if err != nil {
return nil, nil, err
}
encryptedPrivateKeys = append(encryptedPrivateKeys, encryptedPrivateKey)
}
return encryptedPrivateKeys, publicKeys, nil
}
func generateSalt() ([]byte, error) {
salt := make([]byte, 16)
_, err := rand.Read(salt)
if err != nil {
return nil, err
}
return salt, nil
}
func encryptPrivateKey(privateKey []byte, password []byte) (*EncryptedPrivateKey, error) {
salt, err := generateSalt()
if err != nil {
return nil, err
}
aead, err := getAEAD(password, salt)
if err != nil {
return nil, err
}
// Select a random nonce, and leave capacity for the ciphertext.
nonce := make([]byte, aead.NonceSize(), aead.NonceSize()+len(privateKey)+aead.Overhead())
if _, err := rand.Read(nonce); err != nil {
return nil, err
}
// Encrypt the message and append the ciphertext to the nonce.
cipher := aead.Seal(nonce, nonce, privateKey, nil)
return &EncryptedPrivateKey{
cipher: cipher,
salt: salt,
}, nil
}


@@ -0,0 +1,41 @@
package keys
import (
"fmt"
"golang.org/x/term"
"os"
"os/signal"
"syscall"
)
// getPassword was adapted from https://gist.github.com/jlinoff/e8e26b4ffa38d379c7f1891fd174a6d0#file-getpassword2-go
func getPassword(prompt string) []byte {
// Get the initial state of the terminal.
initialTermState, e1 := term.GetState(syscall.Stdin)
if e1 != nil {
panic(e1)
}
// Restore it in the event of an interrupt.
// CITATION: Konstantin Shaposhnikov - https://groups.google.com/forum/#!topic/golang-nuts/kTVAbtee9UA
c := make(chan os.Signal, 1) // buffered, as signal.Notify requires
signal.Notify(c, os.Interrupt)
go func() {
<-c
_ = term.Restore(syscall.Stdin, initialTermState)
os.Exit(1)
}()
// Now get the password.
fmt.Print(prompt)
p, err := term.ReadPassword(syscall.Stdin)
fmt.Println()
if err != nil {
panic(err)
}
// Stop looking for ^C on the channel.
signal.Stop(c)
return p
}


@@ -0,0 +1,254 @@
package keys
import (
"bufio"
"crypto/cipher"
"encoding/hex"
"encoding/json"
"fmt"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
"golang.org/x/crypto/argon2"
"golang.org/x/crypto/chacha20poly1305"
"os"
"path/filepath"
"runtime"
)
var (
defaultAppDir = util.AppDir("kaspawallet", false)
defaultKeysFile = filepath.Join(defaultAppDir, "keys.json")
)
type encryptedPrivateKeyJSON struct {
Cipher string `json:"cipher"`
Salt string `json:"salt"`
}
type keysFileJSON struct {
EncryptedPrivateKeys []*encryptedPrivateKeyJSON `json:"encryptedPrivateKeys"`
PublicKeys []string `json:"publicKeys"`
MinimumSignatures uint32 `json:"minimumSignatures"`
ECDSA bool `json:"ecdsa"`
}
// EncryptedPrivateKey represents an encrypted private key
type EncryptedPrivateKey struct {
cipher []byte
salt []byte
}
// Data holds all the data related to the wallet keys
type Data struct {
encryptedPrivateKeys []*EncryptedPrivateKey
PublicKeys [][]byte
MinimumSignatures uint32
ECDSA bool
}
func (d *Data) toJSON() *keysFileJSON {
encryptedPrivateKeysJSON := make([]*encryptedPrivateKeyJSON, len(d.encryptedPrivateKeys))
for i, encryptedPrivateKey := range d.encryptedPrivateKeys {
encryptedPrivateKeysJSON[i] = &encryptedPrivateKeyJSON{
Cipher: hex.EncodeToString(encryptedPrivateKey.cipher),
Salt: hex.EncodeToString(encryptedPrivateKey.salt),
}
}
publicKeysHex := make([]string, len(d.PublicKeys))
for i, publicKey := range d.PublicKeys {
publicKeysHex[i] = hex.EncodeToString(publicKey)
}
return &keysFileJSON{
EncryptedPrivateKeys: encryptedPrivateKeysJSON,
PublicKeys: publicKeysHex,
MinimumSignatures: d.MinimumSignatures,
ECDSA: d.ECDSA,
}
}
func (d *Data) fromJSON(fileJSON *keysFileJSON) error {
d.MinimumSignatures = fileJSON.MinimumSignatures
d.ECDSA = fileJSON.ECDSA
d.encryptedPrivateKeys = make([]*EncryptedPrivateKey, len(fileJSON.EncryptedPrivateKeys))
for i, encryptedPrivateKeyJSON := range fileJSON.EncryptedPrivateKeys {
cipher, err := hex.DecodeString(encryptedPrivateKeyJSON.Cipher)
if err != nil {
return err
}
salt, err := hex.DecodeString(encryptedPrivateKeyJSON.Salt)
if err != nil {
return err
}
d.encryptedPrivateKeys[i] = &EncryptedPrivateKey{
cipher: cipher,
salt: salt,
}
}
d.PublicKeys = make([][]byte, len(fileJSON.PublicKeys))
for i, publicKey := range fileJSON.PublicKeys {
var err error
d.PublicKeys[i], err = hex.DecodeString(publicKey)
if err != nil {
return err
}
}
return nil
}
// DecryptPrivateKeys asks the user to enter the password for the private keys and
// returns the decrypted private keys.
func (d *Data) DecryptPrivateKeys() ([][]byte, error) {
password := getPassword("Password:")
privateKeys := make([][]byte, len(d.encryptedPrivateKeys))
for i, encryptedPrivateKey := range d.encryptedPrivateKeys {
var err error
privateKeys[i], err = decryptPrivateKey(encryptedPrivateKey, password)
if err != nil {
return nil, err
}
}
return privateKeys, nil
}
// ReadKeysFile returns the data related to the keys file
func ReadKeysFile(path string) (*Data, error) {
if path == "" {
path = defaultKeysFile
}
file, err := os.Open(path)
if err != nil {
return nil, err
}
defer file.Close()
decoder := json.NewDecoder(file)
decoder.DisallowUnknownFields()
decodedFile := &keysFileJSON{}
err = decoder.Decode(&decodedFile)
if err != nil {
return nil, err
}
keysFile := &Data{}
err = keysFile.fromJSON(decodedFile)
if err != nil {
return nil, err
}
return keysFile, nil
}
func createFileDirectoryIfDoesntExist(path string) error {
dir := filepath.Dir(path)
exists, err := pathExists(dir)
if err != nil {
return err
}
if exists {
return nil
}
return os.MkdirAll(dir, 0700) // directories need the execute bit to be traversable
}
func pathExists(path string) (bool, error) {
_, err := os.Stat(path)
if err == nil {
return true, nil
}
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
// WriteKeysFile writes a keys file with the given data
func WriteKeysFile(path string,
encryptedPrivateKeys []*EncryptedPrivateKey,
publicKeys [][]byte,
minimumSignatures uint32,
ecdsa bool) error {
if path == "" {
path = defaultKeysFile
}
exists, err := pathExists(path)
if err != nil {
return err
}
if exists {
reader := bufio.NewReader(os.Stdin)
fmt.Printf("The file %s already exists. Are you sure you want to override it (type 'y' to approve)? ", path)
line, _, err := reader.ReadLine()
if err != nil {
return err
}
if string(line) != "y" {
return errors.Errorf("aborted keys file creation")
}
}
err = createFileDirectoryIfDoesntExist(path)
if err != nil {
return err
}
file, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
return err
}
defer file.Close()
keysFile := &Data{
encryptedPrivateKeys: encryptedPrivateKeys,
PublicKeys: publicKeys,
MinimumSignatures: minimumSignatures,
ECDSA: ecdsa,
}
encoder := json.NewEncoder(file)
err = encoder.Encode(keysFile.toJSON())
if err != nil {
return err
}
fmt.Printf("Wrote the keys into %s\n", path)
return nil
}
func getAEAD(password, salt []byte) (cipher.AEAD, error) {
key := argon2.IDKey(password, salt, 1, 64*1024, uint8(runtime.NumCPU()), 32)
return chacha20poly1305.NewX(key)
}
func decryptPrivateKey(encryptedPrivateKey *EncryptedPrivateKey, password []byte) ([]byte, error) {
aead, err := getAEAD(password, encryptedPrivateKey.salt)
if err != nil {
return nil, err
}
if len(encryptedPrivateKey.cipher) < aead.NonceSize() {
return nil, errors.New("ciphertext too short")
}
// Split nonce and ciphertext.
nonce, ciphertext := encryptedPrivateKey.cipher[:aead.NonceSize()], encryptedPrivateKey.cipher[aead.NonceSize():]
// Decrypt the message and check it wasn't tampered with.
return aead.Open(nil, nonce, ciphertext, nil)
}
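The on-disk `keys.json` layout follows directly from the `keysFileJSON` struct tags above: hex-encoded cipher/salt pairs, hex public keys, the signature threshold, and the ECDSA flag. A stdlib round-trip sketch (field values are illustrative, not real key material):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the keysFileJSON layout defined above.
type keysFileJSON struct {
	EncryptedPrivateKeys []struct {
		Cipher string `json:"cipher"`
		Salt   string `json:"salt"`
	} `json:"encryptedPrivateKeys"`
	PublicKeys        []string `json:"publicKeys"`
	MinimumSignatures uint32   `json:"minimumSignatures"`
	ECDSA             bool     `json:"ecdsa"`
}

// parseKeysFile decodes a keys file body; the real ReadKeysFile also
// calls DisallowUnknownFields on its decoder.
func parseKeysFile(data []byte) (*keysFileJSON, error) {
	f := &keysFileJSON{}
	if err := json.Unmarshal(data, f); err != nil {
		return nil, err
	}
	return f, nil
}

func main() {
	data := []byte(`{"encryptedPrivateKeys":[{"cipher":"deadbeef","salt":"0102"}],` +
		`"publicKeys":["ab"],"minimumSignatures":1,"ecdsa":false}`)
	f, err := parseKeysFile(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(f.MinimumSignatures, len(f.PublicKeys)) // 1 1
}
```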


@@ -0,0 +1,93 @@
package libkaspawallet
import (
"github.com/kaspanet/go-secp256k1"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
)
// CreateKeyPair generates a private-public key pair
func CreateKeyPair(ecdsa bool) ([]byte, []byte, error) {
if ecdsa {
return createKeyPairECDSA()
}
return createKeyPair()
}
func createKeyPair() ([]byte, []byte, error) {
keyPair, err := secp256k1.GenerateSchnorrKeyPair()
if err != nil {
return nil, nil, errors.Wrap(err, "Failed to generate private key")
}
publicKey, err := keyPair.SchnorrPublicKey()
if err != nil {
return nil, nil, errors.Wrap(err, "Failed to generate public key")
}
publicKeySerialized, err := publicKey.Serialize()
if err != nil {
return nil, nil, errors.Wrap(err, "Failed to serialize public key")
}
return keyPair.SerializePrivateKey()[:], publicKeySerialized[:], nil
}
func createKeyPairECDSA() ([]byte, []byte, error) {
keyPair, err := secp256k1.GenerateECDSAPrivateKey()
if err != nil {
return nil, nil, errors.Wrap(err, "Failed to generate private key")
}
publicKey, err := keyPair.ECDSAPublicKey()
if err != nil {
return nil, nil, errors.Wrap(err, "Failed to generate public key")
}
publicKeySerialized, err := publicKey.Serialize()
if err != nil {
return nil, nil, errors.Wrap(err, "Failed to serialize public key")
}
return keyPair.Serialize()[:], publicKeySerialized[:], nil
}
// PublicKeyFromPrivateKey returns the public key associated with a private key
func PublicKeyFromPrivateKey(privateKeyBytes []byte) ([]byte, error) {
keyPair, err := secp256k1.DeserializeSchnorrPrivateKeyFromSlice(privateKeyBytes)
if err != nil {
return nil, errors.Wrap(err, "Failed to deserialize private key")
}
publicKey, err := keyPair.SchnorrPublicKey()
if err != nil {
return nil, errors.Wrap(err, "Failed to generate public key")
}
publicKeySerialized, err := publicKey.Serialize()
if err != nil {
return nil, errors.Wrap(err, "Failed to serialize public key")
}
return publicKeySerialized[:], nil
}
// Address returns the address associated with the given public keys and minimum signatures parameters.
func Address(params *dagconfig.Params, pubKeys [][]byte, minimumSignatures uint32, ecdsa bool) (util.Address, error) {
sortPublicKeys(pubKeys)
if uint32(len(pubKeys)) < minimumSignatures {
return nil, errors.Errorf("The minimum amount of signatures (%d) is greater than the amount of "+
"provided public keys (%d)", minimumSignatures, len(pubKeys))
}
if len(pubKeys) == 1 {
if ecdsa {
return util.NewAddressPublicKeyECDSA(pubKeys[0][:], params.Prefix)
}
return util.NewAddressPublicKey(pubKeys[0][:], params.Prefix)
}
redeemScript, err := multiSigRedeemScript(pubKeys, minimumSignatures, ecdsa)
if err != nil {
return nil, err
}
return util.NewAddressScriptHash(redeemScript, params.Prefix)
}
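`Address` branches on the key count: a single public key yields a pay-to-public-key address (Schnorr or ECDSA flavored), while multiple keys are folded into a multisig redeem script and addressed as pay-to-script-hash, after validating that the threshold does not exceed the key count. The branch logic can be sketched without the kaspad address types (the `addressKind` helper and its string labels are illustrative only):

```go
package main

import (
	"errors"
	"fmt"
)

// addressKind mirrors the decision tree of Address above.
func addressKind(numKeys, minimumSignatures uint32, ecdsa bool) (string, error) {
	if numKeys < minimumSignatures {
		return "", errors.New("minimum signatures exceeds provided public keys")
	}
	if numKeys == 1 {
		if ecdsa {
			return "pubkey-ecdsa", nil // util.NewAddressPublicKeyECDSA
		}
		return "pubkey-schnorr", nil // util.NewAddressPublicKey
	}
	return "scripthash-multisig", nil // util.NewAddressScriptHash over the redeem script
}

func main() {
	kind, _ := addressKind(3, 2, false) // 2-of-3 multisig
	fmt.Println(kind)                   // scripthash-multisig
}
```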


@@ -0,0 +1,3 @@
//go:generate protoc --go_out=. --go-grpc_out=. --go_opt=paths=source_relative --go-grpc_opt=paths=source_relative wallet.proto
package protoserialization


@@ -0,0 +1,915 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.25.0
// protoc v3.12.3
// source: wallet.proto
package protoserialization
import (
proto "github.com/golang/protobuf/proto"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
// This is a compile-time assertion that a sufficiently up-to-date version
// of the legacy proto package is being used.
const _ = proto.ProtoPackageIsVersion4
type PartiallySignedTransaction struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Tx *TransactionMessage `protobuf:"bytes,1,opt,name=tx,proto3" json:"tx,omitempty"`
PartiallySignedInputs []*PartiallySignedInput `protobuf:"bytes,2,rep,name=partiallySignedInputs,proto3" json:"partiallySignedInputs,omitempty"`
}
func (x *PartiallySignedTransaction) Reset() {
*x = PartiallySignedTransaction{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PartiallySignedTransaction) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PartiallySignedTransaction) ProtoMessage() {}
func (x *PartiallySignedTransaction) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PartiallySignedTransaction.ProtoReflect.Descriptor instead.
func (*PartiallySignedTransaction) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{0}
}
func (x *PartiallySignedTransaction) GetTx() *TransactionMessage {
if x != nil {
return x.Tx
}
return nil
}
func (x *PartiallySignedTransaction) GetPartiallySignedInputs() []*PartiallySignedInput {
if x != nil {
return x.PartiallySignedInputs
}
return nil
}
type PartiallySignedInput struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
RedeemScript []byte `protobuf:"bytes,1,opt,name=redeemScript,proto3" json:"redeemScript,omitempty"`
PrevOutput *TransactionOutput `protobuf:"bytes,2,opt,name=prevOutput,proto3" json:"prevOutput,omitempty"`
MinimumSignatures uint32 `protobuf:"varint,3,opt,name=minimumSignatures,proto3" json:"minimumSignatures,omitempty"`
PubKeySignaturePairs []*PubKeySignaturePair `protobuf:"bytes,4,rep,name=pubKeySignaturePairs,proto3" json:"pubKeySignaturePairs,omitempty"`
}
func (x *PartiallySignedInput) Reset() {
*x = PartiallySignedInput{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PartiallySignedInput) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PartiallySignedInput) ProtoMessage() {}
func (x *PartiallySignedInput) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PartiallySignedInput.ProtoReflect.Descriptor instead.
func (*PartiallySignedInput) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{1}
}
func (x *PartiallySignedInput) GetRedeemScript() []byte {
if x != nil {
return x.RedeemScript
}
return nil
}
func (x *PartiallySignedInput) GetPrevOutput() *TransactionOutput {
if x != nil {
return x.PrevOutput
}
return nil
}
func (x *PartiallySignedInput) GetMinimumSignatures() uint32 {
if x != nil {
return x.MinimumSignatures
}
return 0
}
func (x *PartiallySignedInput) GetPubKeySignaturePairs() []*PubKeySignaturePair {
if x != nil {
return x.PubKeySignaturePairs
}
return nil
}
type PubKeySignaturePair struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
PubKey []byte `protobuf:"bytes,1,opt,name=pubKey,proto3" json:"pubKey,omitempty"`
Signature []byte `protobuf:"bytes,2,opt,name=signature,proto3" json:"signature,omitempty"`
}
func (x *PubKeySignaturePair) Reset() {
*x = PubKeySignaturePair{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *PubKeySignaturePair) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*PubKeySignaturePair) ProtoMessage() {}
func (x *PubKeySignaturePair) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use PubKeySignaturePair.ProtoReflect.Descriptor instead.
func (*PubKeySignaturePair) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{2}
}
func (x *PubKeySignaturePair) GetPubKey() []byte {
if x != nil {
return x.PubKey
}
return nil
}
func (x *PubKeySignaturePair) GetSignature() []byte {
if x != nil {
return x.Signature
}
return nil
}
type SubnetworkId struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Bytes []byte `protobuf:"bytes,1,opt,name=bytes,proto3" json:"bytes,omitempty"`
}
func (x *SubnetworkId) Reset() {
*x = SubnetworkId{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *SubnetworkId) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*SubnetworkId) ProtoMessage() {}
func (x *SubnetworkId) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use SubnetworkId.ProtoReflect.Descriptor instead.
func (*SubnetworkId) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{3}
}
func (x *SubnetworkId) GetBytes() []byte {
if x != nil {
return x.Bytes
}
return nil
}
type TransactionMessage struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Version uint32 `protobuf:"varint,1,opt,name=version,proto3" json:"version,omitempty"`
Inputs []*TransactionInput `protobuf:"bytes,2,rep,name=inputs,proto3" json:"inputs,omitempty"`
Outputs []*TransactionOutput `protobuf:"bytes,3,rep,name=outputs,proto3" json:"outputs,omitempty"`
LockTime uint64 `protobuf:"varint,4,opt,name=lockTime,proto3" json:"lockTime,omitempty"`
SubnetworkId *SubnetworkId `protobuf:"bytes,5,opt,name=subnetworkId,proto3" json:"subnetworkId,omitempty"`
Gas uint64 `protobuf:"varint,6,opt,name=gas,proto3" json:"gas,omitempty"`
Payload []byte `protobuf:"bytes,8,opt,name=payload,proto3" json:"payload,omitempty"`
}
func (x *TransactionMessage) Reset() {
*x = TransactionMessage{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *TransactionMessage) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*TransactionMessage) ProtoMessage() {}
func (x *TransactionMessage) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use TransactionMessage.ProtoReflect.Descriptor instead.
func (*TransactionMessage) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{4}
}
func (x *TransactionMessage) GetVersion() uint32 {
if x != nil {
return x.Version
}
return 0
}
func (x *TransactionMessage) GetInputs() []*TransactionInput {
if x != nil {
return x.Inputs
}
return nil
}
func (x *TransactionMessage) GetOutputs() []*TransactionOutput {
if x != nil {
return x.Outputs
}
return nil
}
func (x *TransactionMessage) GetLockTime() uint64 {
if x != nil {
return x.LockTime
}
return 0
}
func (x *TransactionMessage) GetSubnetworkId() *SubnetworkId {
if x != nil {
return x.SubnetworkId
}
return nil
}
func (x *TransactionMessage) GetGas() uint64 {
if x != nil {
return x.Gas
}
return 0
}
func (x *TransactionMessage) GetPayload() []byte {
if x != nil {
return x.Payload
}
return nil
}
type TransactionInput struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
PreviousOutpoint *Outpoint `protobuf:"bytes,1,opt,name=previousOutpoint,proto3" json:"previousOutpoint,omitempty"`
SignatureScript []byte `protobuf:"bytes,2,opt,name=signatureScript,proto3" json:"signatureScript,omitempty"`
Sequence uint64 `protobuf:"varint,3,opt,name=sequence,proto3" json:"sequence,omitempty"`
}
func (x *TransactionInput) Reset() {
*x = TransactionInput{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *TransactionInput) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*TransactionInput) ProtoMessage() {}
func (x *TransactionInput) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use TransactionInput.ProtoReflect.Descriptor instead.
func (*TransactionInput) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{5}
}
func (x *TransactionInput) GetPreviousOutpoint() *Outpoint {
if x != nil {
return x.PreviousOutpoint
}
return nil
}
func (x *TransactionInput) GetSignatureScript() []byte {
if x != nil {
return x.SignatureScript
}
return nil
}
func (x *TransactionInput) GetSequence() uint64 {
if x != nil {
return x.Sequence
}
return 0
}
type Outpoint struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
TransactionId *TransactionId `protobuf:"bytes,1,opt,name=transactionId,proto3" json:"transactionId,omitempty"`
Index uint32 `protobuf:"varint,2,opt,name=index,proto3" json:"index,omitempty"`
}
func (x *Outpoint) Reset() {
*x = Outpoint{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *Outpoint) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*Outpoint) ProtoMessage() {}
func (x *Outpoint) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use Outpoint.ProtoReflect.Descriptor instead.
func (*Outpoint) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{6}
}
func (x *Outpoint) GetTransactionId() *TransactionId {
if x != nil {
return x.TransactionId
}
return nil
}
func (x *Outpoint) GetIndex() uint32 {
if x != nil {
return x.Index
}
return 0
}
type TransactionId struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Bytes []byte `protobuf:"bytes,1,opt,name=bytes,proto3" json:"bytes,omitempty"`
}
func (x *TransactionId) Reset() {
*x = TransactionId{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[7]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *TransactionId) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*TransactionId) ProtoMessage() {}
func (x *TransactionId) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[7]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use TransactionId.ProtoReflect.Descriptor instead.
func (*TransactionId) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{7}
}
func (x *TransactionId) GetBytes() []byte {
if x != nil {
return x.Bytes
}
return nil
}
type ScriptPublicKey struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Script []byte `protobuf:"bytes,1,opt,name=script,proto3" json:"script,omitempty"`
Version uint32 `protobuf:"varint,2,opt,name=version,proto3" json:"version,omitempty"`
}
func (x *ScriptPublicKey) Reset() {
*x = ScriptPublicKey{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[8]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ScriptPublicKey) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ScriptPublicKey) ProtoMessage() {}
func (x *ScriptPublicKey) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[8]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ScriptPublicKey.ProtoReflect.Descriptor instead.
func (*ScriptPublicKey) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{8}
}
func (x *ScriptPublicKey) GetScript() []byte {
if x != nil {
return x.Script
}
return nil
}
func (x *ScriptPublicKey) GetVersion() uint32 {
if x != nil {
return x.Version
}
return 0
}
type TransactionOutput struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Value uint64 `protobuf:"varint,1,opt,name=value,proto3" json:"value,omitempty"`
ScriptPublicKey *ScriptPublicKey `protobuf:"bytes,2,opt,name=scriptPublicKey,proto3" json:"scriptPublicKey,omitempty"`
}
func (x *TransactionOutput) Reset() {
*x = TransactionOutput{}
if protoimpl.UnsafeEnabled {
mi := &file_wallet_proto_msgTypes[9]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *TransactionOutput) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*TransactionOutput) ProtoMessage() {}
func (x *TransactionOutput) ProtoReflect() protoreflect.Message {
mi := &file_wallet_proto_msgTypes[9]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use TransactionOutput.ProtoReflect.Descriptor instead.
func (*TransactionOutput) Descriptor() ([]byte, []int) {
return file_wallet_proto_rawDescGZIP(), []int{9}
}
func (x *TransactionOutput) GetValue() uint64 {
if x != nil {
return x.Value
}
return 0
}
func (x *TransactionOutput) GetScriptPublicKey() *ScriptPublicKey {
if x != nil {
return x.ScriptPublicKey
}
return nil
}
var File_wallet_proto protoreflect.FileDescriptor
var file_wallet_proto_rawDesc = []byte{
0x0a, 0x0c, 0x77, 0x61, 0x6c, 0x6c, 0x65, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x12,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x22, 0xb4, 0x01, 0x0a, 0x1a, 0x50, 0x61, 0x72, 0x74, 0x69, 0x61, 0x6c, 0x6c, 0x79,
0x53, 0x69, 0x67, 0x6e, 0x65, 0x64, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f,
0x6e, 0x12, 0x36, 0x0a, 0x02, 0x74, 0x78, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x26, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x2e, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65,
0x73, 0x73, 0x61, 0x67, 0x65, 0x52, 0x02, 0x74, 0x78, 0x12, 0x5e, 0x0a, 0x15, 0x70, 0x61, 0x72,
0x74, 0x69, 0x61, 0x6c, 0x6c, 0x79, 0x53, 0x69, 0x67, 0x6e, 0x65, 0x64, 0x49, 0x6e, 0x70, 0x75,
0x74, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x28, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x50, 0x61,
0x72, 0x74, 0x69, 0x61, 0x6c, 0x6c, 0x79, 0x53, 0x69, 0x67, 0x6e, 0x65, 0x64, 0x49, 0x6e, 0x70,
0x75, 0x74, 0x52, 0x15, 0x70, 0x61, 0x72, 0x74, 0x69, 0x61, 0x6c, 0x6c, 0x79, 0x53, 0x69, 0x67,
0x6e, 0x65, 0x64, 0x49, 0x6e, 0x70, 0x75, 0x74, 0x73, 0x22, 0x8c, 0x02, 0x0a, 0x14, 0x50, 0x61,
0x72, 0x74, 0x69, 0x61, 0x6c, 0x6c, 0x79, 0x53, 0x69, 0x67, 0x6e, 0x65, 0x64, 0x49, 0x6e, 0x70,
0x75, 0x74, 0x12, 0x22, 0x0a, 0x0c, 0x72, 0x65, 0x64, 0x65, 0x65, 0x6d, 0x53, 0x63, 0x72, 0x69,
0x70, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x0c, 0x72, 0x65, 0x64, 0x65, 0x65, 0x6d,
0x53, 0x63, 0x72, 0x69, 0x70, 0x74, 0x12, 0x45, 0x0a, 0x0a, 0x70, 0x72, 0x65, 0x76, 0x4f, 0x75,
0x74, 0x70, 0x75, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x70, 0x72, 0x6f,
0x74, 0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e,
0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x4f, 0x75, 0x74, 0x70, 0x75,
0x74, 0x52, 0x0a, 0x70, 0x72, 0x65, 0x76, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x12, 0x2c, 0x0a,
0x11, 0x6d, 0x69, 0x6e, 0x69, 0x6d, 0x75, 0x6d, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72,
0x65, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x11, 0x6d, 0x69, 0x6e, 0x69, 0x6d, 0x75,
0x6d, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x73, 0x12, 0x5b, 0x0a, 0x14, 0x70,
0x75, 0x62, 0x4b, 0x65, 0x79, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x50, 0x61,
0x69, 0x72, 0x73, 0x18, 0x04, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x27, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x50,
0x75, 0x62, 0x4b, 0x65, 0x79, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x50, 0x61,
0x69, 0x72, 0x52, 0x14, 0x70, 0x75, 0x62, 0x4b, 0x65, 0x79, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74,
0x75, 0x72, 0x65, 0x50, 0x61, 0x69, 0x72, 0x73, 0x22, 0x4b, 0x0a, 0x13, 0x50, 0x75, 0x62, 0x4b,
0x65, 0x79, 0x53, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x50, 0x61, 0x69, 0x72, 0x12,
0x16, 0x0a, 0x06, 0x70, 0x75, 0x62, 0x4b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52,
0x06, 0x70, 0x75, 0x62, 0x4b, 0x65, 0x79, 0x12, 0x1c, 0x0a, 0x09, 0x73, 0x69, 0x67, 0x6e, 0x61,
0x74, 0x75, 0x72, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x09, 0x73, 0x69, 0x67, 0x6e,
0x61, 0x74, 0x75, 0x72, 0x65, 0x22, 0x24, 0x0a, 0x0c, 0x53, 0x75, 0x62, 0x6e, 0x65, 0x74, 0x77,
0x6f, 0x72, 0x6b, 0x49, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x62, 0x79, 0x74, 0x65, 0x73, 0x18, 0x01,
0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x62, 0x79, 0x74, 0x65, 0x73, 0x22, 0xbb, 0x02, 0x0a, 0x12,
0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x4d, 0x65, 0x73, 0x73, 0x61,
0x67, 0x65, 0x12, 0x18, 0x0a, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0d, 0x52, 0x07, 0x76, 0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x12, 0x3c, 0x0a, 0x06,
0x69, 0x6e, 0x70, 0x75, 0x74, 0x73, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x24, 0x2e, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f,
0x6e, 0x2e, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x6e, 0x70,
0x75, 0x74, 0x52, 0x06, 0x69, 0x6e, 0x70, 0x75, 0x74, 0x73, 0x12, 0x3f, 0x0a, 0x07, 0x6f, 0x75,
0x74, 0x70, 0x75, 0x74, 0x73, 0x18, 0x03, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x25, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x2e, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x4f, 0x75, 0x74, 0x70,
0x75, 0x74, 0x52, 0x07, 0x6f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x73, 0x12, 0x1a, 0x0a, 0x08, 0x6c,
0x6f, 0x63, 0x6b, 0x54, 0x69, 0x6d, 0x65, 0x18, 0x04, 0x20, 0x01, 0x28, 0x04, 0x52, 0x08, 0x6c,
0x6f, 0x63, 0x6b, 0x54, 0x69, 0x6d, 0x65, 0x12, 0x44, 0x0a, 0x0c, 0x73, 0x75, 0x62, 0x6e, 0x65,
0x74, 0x77, 0x6f, 0x72, 0x6b, 0x49, 0x64, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x20, 0x2e,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x2e, 0x53, 0x75, 0x62, 0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72, 0x6b, 0x49, 0x64, 0x52,
0x0c, 0x73, 0x75, 0x62, 0x6e, 0x65, 0x74, 0x77, 0x6f, 0x72, 0x6b, 0x49, 0x64, 0x12, 0x10, 0x0a,
0x03, 0x67, 0x61, 0x73, 0x18, 0x06, 0x20, 0x01, 0x28, 0x04, 0x52, 0x03, 0x67, 0x61, 0x73, 0x12,
0x18, 0x0a, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x18, 0x08, 0x20, 0x01, 0x28, 0x0c,
0x52, 0x07, 0x70, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x22, 0xa2, 0x01, 0x0a, 0x10, 0x54, 0x72,
0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x6e, 0x70, 0x75, 0x74, 0x12, 0x48,
0x0a, 0x10, 0x70, 0x72, 0x65, 0x76, 0x69, 0x6f, 0x75, 0x73, 0x4f, 0x75, 0x74, 0x70, 0x6f, 0x69,
0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1c, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x4f, 0x75,
0x74, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x52, 0x10, 0x70, 0x72, 0x65, 0x76, 0x69, 0x6f, 0x75, 0x73,
0x4f, 0x75, 0x74, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x12, 0x28, 0x0a, 0x0f, 0x73, 0x69, 0x67, 0x6e,
0x61, 0x74, 0x75, 0x72, 0x65, 0x53, 0x63, 0x72, 0x69, 0x70, 0x74, 0x18, 0x02, 0x20, 0x01, 0x28,
0x0c, 0x52, 0x0f, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x53, 0x63, 0x72, 0x69,
0x70, 0x74, 0x12, 0x1a, 0x0a, 0x08, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63, 0x65, 0x18, 0x03,
0x20, 0x01, 0x28, 0x04, 0x52, 0x08, 0x73, 0x65, 0x71, 0x75, 0x65, 0x6e, 0x63, 0x65, 0x22, 0x69,
0x0a, 0x08, 0x4f, 0x75, 0x74, 0x70, 0x6f, 0x69, 0x6e, 0x74, 0x12, 0x47, 0x0a, 0x0d, 0x74, 0x72,
0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x21, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69,
0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69,
0x6f, 0x6e, 0x49, 0x64, 0x52, 0x0d, 0x74, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f,
0x6e, 0x49, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x18, 0x02, 0x20, 0x01,
0x28, 0x0d, 0x52, 0x05, 0x69, 0x6e, 0x64, 0x65, 0x78, 0x22, 0x25, 0x0a, 0x0d, 0x54, 0x72, 0x61,
0x6e, 0x73, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x64, 0x12, 0x14, 0x0a, 0x05, 0x62, 0x79,
0x74, 0x65, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x62, 0x79, 0x74, 0x65, 0x73,
0x22, 0x43, 0x0a, 0x0f, 0x53, 0x63, 0x72, 0x69, 0x70, 0x74, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63,
0x4b, 0x65, 0x79, 0x12, 0x16, 0x0a, 0x06, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x18, 0x01, 0x20,
0x01, 0x28, 0x0c, 0x52, 0x06, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x12, 0x18, 0x0a, 0x07, 0x76,
0x65, 0x72, 0x73, 0x69, 0x6f, 0x6e, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0d, 0x52, 0x07, 0x76, 0x65,
0x72, 0x73, 0x69, 0x6f, 0x6e, 0x22, 0x78, 0x0a, 0x11, 0x54, 0x72, 0x61, 0x6e, 0x73, 0x61, 0x63,
0x74, 0x69, 0x6f, 0x6e, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x76, 0x61,
0x6c, 0x75, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65,
0x12, 0x4d, 0x0a, 0x0f, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63,
0x4b, 0x65, 0x79, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x53,
0x63, 0x72, 0x69, 0x70, 0x74, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x52, 0x0f,
0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x50, 0x75, 0x62, 0x6c, 0x69, 0x63, 0x4b, 0x65, 0x79, 0x42,
0x5c, 0x5a, 0x5a, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6b, 0x61,
0x73, 0x70, 0x61, 0x6e, 0x65, 0x74, 0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x64, 0x2f, 0x63, 0x6d,
0x64, 0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x77, 0x61, 0x6c, 0x6c, 0x65, 0x74, 0x2f, 0x6c, 0x69,
0x62, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x77, 0x61, 0x6c, 0x6c, 0x65, 0x74, 0x2f, 0x73, 0x65, 0x72,
0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x62, 0x06, 0x70,
0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_wallet_proto_rawDescOnce sync.Once
file_wallet_proto_rawDescData = file_wallet_proto_rawDesc
)
func file_wallet_proto_rawDescGZIP() []byte {
file_wallet_proto_rawDescOnce.Do(func() {
file_wallet_proto_rawDescData = protoimpl.X.CompressGZIP(file_wallet_proto_rawDescData)
})
return file_wallet_proto_rawDescData
}
var file_wallet_proto_msgTypes = make([]protoimpl.MessageInfo, 10)
var file_wallet_proto_goTypes = []interface{}{
(*PartiallySignedTransaction)(nil), // 0: protoserialization.PartiallySignedTransaction
(*PartiallySignedInput)(nil), // 1: protoserialization.PartiallySignedInput
(*PubKeySignaturePair)(nil), // 2: protoserialization.PubKeySignaturePair
(*SubnetworkId)(nil), // 3: protoserialization.SubnetworkId
(*TransactionMessage)(nil), // 4: protoserialization.TransactionMessage
(*TransactionInput)(nil), // 5: protoserialization.TransactionInput
(*Outpoint)(nil), // 6: protoserialization.Outpoint
(*TransactionId)(nil), // 7: protoserialization.TransactionId
(*ScriptPublicKey)(nil), // 8: protoserialization.ScriptPublicKey
(*TransactionOutput)(nil), // 9: protoserialization.TransactionOutput
}
var file_wallet_proto_depIdxs = []int32{
4, // 0: protoserialization.PartiallySignedTransaction.tx:type_name -> protoserialization.TransactionMessage
1, // 1: protoserialization.PartiallySignedTransaction.partiallySignedInputs:type_name -> protoserialization.PartiallySignedInput
9, // 2: protoserialization.PartiallySignedInput.prevOutput:type_name -> protoserialization.TransactionOutput
2, // 3: protoserialization.PartiallySignedInput.pubKeySignaturePairs:type_name -> protoserialization.PubKeySignaturePair
5, // 4: protoserialization.TransactionMessage.inputs:type_name -> protoserialization.TransactionInput
9, // 5: protoserialization.TransactionMessage.outputs:type_name -> protoserialization.TransactionOutput
3, // 6: protoserialization.TransactionMessage.subnetworkId:type_name -> protoserialization.SubnetworkId
6, // 7: protoserialization.TransactionInput.previousOutpoint:type_name -> protoserialization.Outpoint
7, // 8: protoserialization.Outpoint.transactionId:type_name -> protoserialization.TransactionId
8, // 9: protoserialization.TransactionOutput.scriptPublicKey:type_name -> protoserialization.ScriptPublicKey
10, // [10:10] is the sub-list for method output_type
10, // [10:10] is the sub-list for method input_type
10, // [10:10] is the sub-list for extension type_name
10, // [10:10] is the sub-list for extension extendee
0, // [0:10] is the sub-list for field type_name
}
func init() { file_wallet_proto_init() }
func file_wallet_proto_init() {
if File_wallet_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_wallet_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PartiallySignedTransaction); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PartiallySignedInput); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*PubKeySignaturePair); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*SubnetworkId); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*TransactionMessage); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*TransactionInput); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*Outpoint); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*TransactionId); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ScriptPublicKey); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_wallet_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*TransactionOutput); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_wallet_proto_rawDesc,
NumEnums: 0,
NumMessages: 10,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_wallet_proto_goTypes,
DependencyIndexes: file_wallet_proto_depIdxs,
MessageInfos: file_wallet_proto_msgTypes,
}.Build()
File_wallet_proto = out.File
file_wallet_proto_rawDesc = nil
file_wallet_proto_goTypes = nil
file_wallet_proto_depIdxs = nil
}


@@ -0,0 +1,59 @@
syntax = "proto3";
package protoserialization;
option go_package = "github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet/serialization/protoserialization";
message PartiallySignedTransaction{
TransactionMessage tx = 1;
repeated PartiallySignedInput partiallySignedInputs = 2;
}
message PartiallySignedInput{
bytes redeemScript = 1;
TransactionOutput prevOutput = 2;
uint32 minimumSignatures = 3;
repeated PubKeySignaturePair pubKeySignaturePairs = 4;
}
message PubKeySignaturePair{
bytes pubKey = 1;
bytes signature = 2;
}
message SubnetworkId{
bytes bytes = 1;
}
message TransactionMessage{
uint32 version = 1;
repeated TransactionInput inputs = 2;
repeated TransactionOutput outputs = 3;
uint64 lockTime = 4;
SubnetworkId subnetworkId = 5;
uint64 gas = 6;
bytes payload = 8;
}
message TransactionInput{
Outpoint previousOutpoint = 1;
bytes signatureScript = 2;
uint64 sequence = 3;
}
message Outpoint{
TransactionId transactionId = 1;
uint32 index = 2;
}
message TransactionId{
bytes bytes = 1;
}
message ScriptPublicKey {
bytes script = 1;
uint32 version = 2;
}
message TransactionOutput{
uint64 value = 1;
ScriptPublicKey scriptPublicKey = 2;
}


@@ -0,0 +1,273 @@
package serialization
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet/serialization/protoserialization"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
"github.com/pkg/errors"
"math"
)
// PartiallySignedTransaction is a type that is intended
// to be transferred between multiple parties, so that each
// party can sign the transaction before it's fully signed.
type PartiallySignedTransaction struct {
Tx *externalapi.DomainTransaction
PartiallySignedInputs []*PartiallySignedInput
}
// PartiallySignedInput represents an input signed
// only by some of the relevant parties.
type PartiallySignedInput struct {
RedeeemScript []byte
PrevOutput *externalapi.DomainTransactionOutput
MinimumSignatures uint32
PubKeySignaturePairs []*PubKeySignaturePair
}
// PubKeySignaturePair is a pair of a public key and (potentially) its associated signature
type PubKeySignaturePair struct {
PubKey []byte
Signature []byte
}
// DeserializePartiallySignedTransaction deserializes a byte slice into PartiallySignedTransaction.
func DeserializePartiallySignedTransaction(serializedPartiallySignedTransaction []byte) (*PartiallySignedTransaction, error) {
protoPartiallySignedTransaction := &protoserialization.PartiallySignedTransaction{}
err := proto.Unmarshal(serializedPartiallySignedTransaction, protoPartiallySignedTransaction)
if err != nil {
return nil, err
}
return partiallySignedTransactionFromProto(protoPartiallySignedTransaction)
}
// SerializePartiallySignedTransaction serializes a PartiallySignedTransaction.
func SerializePartiallySignedTransaction(partiallySignedTransaction *PartiallySignedTransaction) ([]byte, error) {
return proto.Marshal(partiallySignedTransactionToProto(partiallySignedTransaction))
}
func partiallySignedTransactionFromProto(protoPartiallySignedTransaction *protoserialization.PartiallySignedTransaction) (*PartiallySignedTransaction, error) {
tx, err := transactionFromProto(protoPartiallySignedTransaction.Tx)
if err != nil {
return nil, err
}
inputs := make([]*PartiallySignedInput, len(protoPartiallySignedTransaction.PartiallySignedInputs))
for i, protoInput := range protoPartiallySignedTransaction.PartiallySignedInputs {
inputs[i], err = partiallySignedInputFromProto(protoInput)
if err != nil {
return nil, err
}
}
return &PartiallySignedTransaction{
Tx: tx,
PartiallySignedInputs: inputs,
}, nil
}
func partiallySignedTransactionToProto(partiallySignedTransaction *PartiallySignedTransaction) *protoserialization.PartiallySignedTransaction {
protoInputs := make([]*protoserialization.PartiallySignedInput, len(partiallySignedTransaction.PartiallySignedInputs))
for i, input := range partiallySignedTransaction.PartiallySignedInputs {
protoInputs[i] = partiallySignedInputToProto(input)
}
return &protoserialization.PartiallySignedTransaction{
Tx: transactionToProto(partiallySignedTransaction.Tx),
PartiallySignedInputs: protoInputs,
}
}
func partiallySignedInputFromProto(protoPartiallySignedInput *protoserialization.PartiallySignedInput) (*PartiallySignedInput, error) {
output, err := transactionOutputFromProto(protoPartiallySignedInput.PrevOutput)
if err != nil {
return nil, err
}
pubKeySignaturePairs := make([]*PubKeySignaturePair, len(protoPartiallySignedInput.PubKeySignaturePairs))
for i, protoPair := range protoPartiallySignedInput.PubKeySignaturePairs {
pubKeySignaturePairs[i] = pubKeySignaturePairFromProto(protoPair)
}
return &PartiallySignedInput{
RedeeemScript: protoPartiallySignedInput.RedeemScript,
PrevOutput: output,
MinimumSignatures: protoPartiallySignedInput.MinimumSignatures,
PubKeySignaturePairs: pubKeySignaturePairs,
}, nil
}
func partiallySignedInputToProto(partiallySignedInput *PartiallySignedInput) *protoserialization.PartiallySignedInput {
protoPairs := make([]*protoserialization.PubKeySignaturePair, len(partiallySignedInput.PubKeySignaturePairs))
for i, pair := range partiallySignedInput.PubKeySignaturePairs {
protoPairs[i] = pubKeySignaturePairToProto(pair)
}
return &protoserialization.PartiallySignedInput{
RedeemScript: partiallySignedInput.RedeeemScript,
PrevOutput: transactionOutputToProto(partiallySignedInput.PrevOutput),
MinimumSignatures: partiallySignedInput.MinimumSignatures,
PubKeySignaturePairs: protoPairs,
}
}
func pubKeySignaturePairFromProto(protoPubKeySignaturePair *protoserialization.PubKeySignaturePair) *PubKeySignaturePair {
return &PubKeySignaturePair{
PubKey: protoPubKeySignaturePair.PubKey,
Signature: protoPubKeySignaturePair.Signature,
}
}
func pubKeySignaturePairToProto(pubKeySignaturePair *PubKeySignaturePair) *protoserialization.PubKeySignaturePair {
return &protoserialization.PubKeySignaturePair{
PubKey: pubKeySignaturePair.PubKey,
Signature: pubKeySignaturePair.Signature,
}
}
func transactionFromProto(protoTransaction *protoserialization.TransactionMessage) (*externalapi.DomainTransaction, error) {
if protoTransaction.Version > math.MaxUint16 {
return nil, errors.Errorf("protoTransaction.Version is %d and is too big to be a uint16", protoTransaction.Version)
}
inputs := make([]*externalapi.DomainTransactionInput, len(protoTransaction.Inputs))
for i, protoInput := range protoTransaction.Inputs {
var err error
inputs[i], err = transactionInputFromProto(protoInput)
if err != nil {
return nil, err
}
}
outputs := make([]*externalapi.DomainTransactionOutput, len(protoTransaction.Outputs))
for i, protoOutput := range protoTransaction.Outputs {
var err error
outputs[i], err = transactionOutputFromProto(protoOutput)
if err != nil {
return nil, err
}
}
subnetworkID, err := subnetworks.FromBytes(protoTransaction.SubnetworkId.Bytes)
if err != nil {
return nil, err
}
return &externalapi.DomainTransaction{
Version: uint16(protoTransaction.Version),
Inputs: inputs,
Outputs: outputs,
LockTime: protoTransaction.LockTime,
SubnetworkID: *subnetworkID,
Gas: protoTransaction.Gas,
Payload: protoTransaction.Payload,
}, nil
}
func transactionToProto(tx *externalapi.DomainTransaction) *protoserialization.TransactionMessage {
protoInputs := make([]*protoserialization.TransactionInput, len(tx.Inputs))
for i, input := range tx.Inputs {
protoInputs[i] = transactionInputToProto(input)
}
protoOutputs := make([]*protoserialization.TransactionOutput, len(tx.Outputs))
for i, output := range tx.Outputs {
protoOutputs[i] = transactionOutputToProto(output)
}
return &protoserialization.TransactionMessage{
Version: uint32(tx.Version),
Inputs: protoInputs,
Outputs: protoOutputs,
LockTime: tx.LockTime,
SubnetworkId: &protoserialization.SubnetworkId{Bytes: tx.SubnetworkID[:]},
Gas: tx.Gas,
Payload: tx.Payload,
}
}
func transactionInputFromProto(protoInput *protoserialization.TransactionInput) (*externalapi.DomainTransactionInput, error) {
outpoint, err := outpointFromProto(protoInput.PreviousOutpoint)
if err != nil {
return nil, err
}
return &externalapi.DomainTransactionInput{
PreviousOutpoint: *outpoint,
SignatureScript: protoInput.SignatureScript,
Sequence: protoInput.Sequence,
}, nil
}
func transactionInputToProto(input *externalapi.DomainTransactionInput) *protoserialization.TransactionInput {
return &protoserialization.TransactionInput{
PreviousOutpoint: outpointToProto(&input.PreviousOutpoint),
SignatureScript: input.SignatureScript,
Sequence: input.Sequence,
}
}
func outpointFromProto(protoOutpoint *protoserialization.Outpoint) (*externalapi.DomainOutpoint, error) {
txID, err := transactionIDFromProto(protoOutpoint.TransactionId)
if err != nil {
return nil, err
}
return &externalapi.DomainOutpoint{
TransactionID: *txID,
Index: protoOutpoint.Index,
}, nil
}
func outpointToProto(outpoint *externalapi.DomainOutpoint) *protoserialization.Outpoint {
return &protoserialization.Outpoint{
TransactionId: &protoserialization.TransactionId{Bytes: outpoint.TransactionID.ByteSlice()},
Index: outpoint.Index,
}
}
func transactionIDFromProto(protoTxID *protoserialization.TransactionId) (*externalapi.DomainTransactionID, error) {
if protoTxID == nil {
return nil, errors.Errorf("protoTxID is nil")
}
return externalapi.NewDomainTransactionIDFromByteSlice(protoTxID.Bytes)
}
func transactionOutputFromProto(protoOutput *protoserialization.TransactionOutput) (*externalapi.DomainTransactionOutput, error) {
scriptPublicKey, err := scriptPublicKeyFromProto(protoOutput.ScriptPublicKey)
if err != nil {
return nil, err
}
return &externalapi.DomainTransactionOutput{
Value: protoOutput.Value,
ScriptPublicKey: scriptPublicKey,
}, nil
}
func transactionOutputToProto(output *externalapi.DomainTransactionOutput) *protoserialization.TransactionOutput {
return &protoserialization.TransactionOutput{
Value: output.Value,
ScriptPublicKey: scriptPublicKeyToProto(output.ScriptPublicKey),
}
}
func scriptPublicKeyFromProto(protoScriptPublicKey *protoserialization.ScriptPublicKey) (*externalapi.ScriptPublicKey, error) {
if protoScriptPublicKey.Version > math.MaxUint16 {
return nil, errors.Errorf("protoOutput.ScriptPublicKey.Version is %d and is too big to be a uint16", protoScriptPublicKey.Version)
}
return &externalapi.ScriptPublicKey{
Script: protoScriptPublicKey.Script,
Version: uint16(protoScriptPublicKey.Version),
}, nil
}
func scriptPublicKeyToProto(scriptPublicKey *externalapi.ScriptPublicKey) *protoserialization.ScriptPublicKey {
return &protoserialization.ScriptPublicKey{
Script: scriptPublicKey.Script,
Version: uint32(scriptPublicKey.Version),
}
}


@@ -0,0 +1,146 @@
package libkaspawallet
import (
"bytes"
"github.com/kaspanet/go-secp256k1"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
"github.com/pkg/errors"
)
type signer interface {
rawTxInSignature(tx *externalapi.DomainTransaction, idx int, hashType consensushashing.SigHashType,
sighashReusedValues *consensushashing.SighashReusedValues) ([]byte, error)
serializedPublicKey() ([]byte, error)
}
type schnorrSigner secp256k1.SchnorrKeyPair
func (s *schnorrSigner) rawTxInSignature(tx *externalapi.DomainTransaction, idx int, hashType consensushashing.SigHashType,
sighashReusedValues *consensushashing.SighashReusedValues) ([]byte, error) {
return txscript.RawTxInSignature(tx, idx, hashType, (*secp256k1.SchnorrKeyPair)(s), sighashReusedValues)
}
func (s *schnorrSigner) serializedPublicKey() ([]byte, error) {
publicKey, err := (*secp256k1.SchnorrKeyPair)(s).SchnorrPublicKey()
if err != nil {
return nil, err
}
serializedPublicKey, err := publicKey.Serialize()
if err != nil {
return nil, err
}
return serializedPublicKey[:], nil
}
type ecdsaSigner secp256k1.ECDSAPrivateKey
func (e *ecdsaSigner) rawTxInSignature(tx *externalapi.DomainTransaction, idx int, hashType consensushashing.SigHashType,
sighashReusedValues *consensushashing.SighashReusedValues) ([]byte, error) {
return txscript.RawTxInSignatureECDSA(tx, idx, hashType, (*secp256k1.ECDSAPrivateKey)(e), sighashReusedValues)
}
func (e *ecdsaSigner) serializedPublicKey() ([]byte, error) {
publicKey, err := (*secp256k1.ECDSAPrivateKey)(e).ECDSAPublicKey()
if err != nil {
return nil, err
}
serializedPublicKey, err := publicKey.Serialize()
if err != nil {
return nil, err
}
return serializedPublicKey[:], nil
}
func deserializeECDSAPrivateKey(privateKey []byte, ecdsa bool) (signer, error) {
if ecdsa {
keyPair, err := secp256k1.DeserializeECDSAPrivateKeyFromSlice(privateKey)
if err != nil {
return nil, errors.Wrap(err, "Error deserializing private key")
}
return (*ecdsaSigner)(keyPair), nil
}
keyPair, err := secp256k1.DeserializeSchnorrPrivateKeyFromSlice(privateKey)
if err != nil {
return nil, errors.Wrap(err, "Error deserializing private key")
}
return (*schnorrSigner)(keyPair), nil
}
// Sign signs the transaction with the given private keys
func Sign(privateKeys [][]byte, serializedPSTx []byte, ecdsa bool) ([]byte, error) {
keyPairs := make([]signer, len(privateKeys))
for i, privateKey := range privateKeys {
var err error
keyPairs[i], err = deserializeECDSAPrivateKey(privateKey, ecdsa)
if err != nil {
return nil, errors.Wrap(err, "Error deserializing private key")
}
}
partiallySignedTransaction, err := serialization.DeserializePartiallySignedTransaction(serializedPSTx)
if err != nil {
return nil, err
}
for _, keyPair := range keyPairs {
err = sign(keyPair, partiallySignedTransaction)
if err != nil {
return nil, err
}
}
return serialization.SerializePartiallySignedTransaction(partiallySignedTransaction)
}
func sign(keyPair signer, psTx *serialization.PartiallySignedTransaction) error {
if isTransactionFullySigned(psTx) {
return nil
}
serializedPublicKey, err := keyPair.serializedPublicKey()
if err != nil {
return err
}
sighashReusedValues := &consensushashing.SighashReusedValues{}
for i, partiallySignedInput := range psTx.PartiallySignedInputs {
prevOut := partiallySignedInput.PrevOutput
psTx.Tx.Inputs[i].UTXOEntry = utxo.NewUTXOEntry(
prevOut.Value,
prevOut.ScriptPublicKey,
false, // This is a fake value, because it's irrelevant for the signature
0, // This is a fake value, because it's irrelevant for the signature
)
}
signed := false
for i, partiallySignedInput := range psTx.PartiallySignedInputs {
for _, pair := range partiallySignedInput.PubKeySignaturePairs {
if bytes.Equal(pair.PubKey, serializedPublicKey[:]) {
pair.Signature, err = keyPair.rawTxInSignature(psTx.Tx, i, consensushashing.SigHashAll, sighashReusedValues)
if err != nil {
return err
}
signed = true
}
}
}
if !signed {
return errors.Errorf("Public key doesn't match any of the transaction public keys")
}
return nil
}


@@ -0,0 +1,206 @@
package libkaspawallet
import (
"bytes"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
"sort"
)
// Payment contains a single recipient's payment details
type Payment struct {
Address util.Address
Amount uint64
}
func sortPublicKeys(publicKeys [][]byte) {
sort.Slice(publicKeys, func(i, j int) bool {
return bytes.Compare(publicKeys[i], publicKeys[j]) < 0
})
}
// CreateUnsignedTransaction creates an unsigned transaction
func CreateUnsignedTransaction(
pubKeys [][]byte,
minimumSignatures uint32,
ecdsa bool,
payments []*Payment,
selectedUTXOs []*externalapi.OutpointAndUTXOEntryPair) ([]byte, error) {
sortPublicKeys(pubKeys)
unsignedTransaction, err := createUnsignedTransaction(pubKeys, minimumSignatures, ecdsa, payments, selectedUTXOs)
if err != nil {
return nil, err
}
return serialization.SerializePartiallySignedTransaction(unsignedTransaction)
}
func multiSigRedeemScript(pubKeys [][]byte, minimumSignatures uint32, ecdsa bool) ([]byte, error) {
scriptBuilder := txscript.NewScriptBuilder()
scriptBuilder.AddInt64(int64(minimumSignatures))
for _, key := range pubKeys {
scriptBuilder.AddData(key)
}
scriptBuilder.AddInt64(int64(len(pubKeys)))
if ecdsa {
scriptBuilder.AddOp(txscript.OpCheckMultiSigECDSA)
} else {
scriptBuilder.AddOp(txscript.OpCheckMultiSig)
}
return scriptBuilder.Script()
}
func createUnsignedTransaction(
pubKeys [][]byte,
minimumSignatures uint32,
ecdsa bool,
payments []*Payment,
selectedUTXOs []*externalapi.OutpointAndUTXOEntryPair) (*serialization.PartiallySignedTransaction, error) {
var redeemScript []byte
if len(pubKeys) > 1 {
var err error
redeemScript, err = multiSigRedeemScript(pubKeys, minimumSignatures, ecdsa)
if err != nil {
return nil, err
}
}
inputs := make([]*externalapi.DomainTransactionInput, len(selectedUTXOs))
partiallySignedInputs := make([]*serialization.PartiallySignedInput, len(selectedUTXOs))
for i, utxo := range selectedUTXOs {
emptyPubKeySignaturePairs := make([]*serialization.PubKeySignaturePair, len(pubKeys))
for i, pubKey := range pubKeys {
emptyPubKeySignaturePairs[i] = &serialization.PubKeySignaturePair{
PubKey: pubKey,
}
}
inputs[i] = &externalapi.DomainTransactionInput{PreviousOutpoint: *utxo.Outpoint}
partiallySignedInputs[i] = &serialization.PartiallySignedInput{
RedeeemScript: redeemScript,
PrevOutput: &externalapi.DomainTransactionOutput{
Value: utxo.UTXOEntry.Amount(),
ScriptPublicKey: utxo.UTXOEntry.ScriptPublicKey(),
},
MinimumSignatures: minimumSignatures,
PubKeySignaturePairs: emptyPubKeySignaturePairs,
}
}
outputs := make([]*externalapi.DomainTransactionOutput, len(payments))
for i, payment := range payments {
scriptPublicKey, err := txscript.PayToAddrScript(payment.Address)
if err != nil {
return nil, err
}
outputs[i] = &externalapi.DomainTransactionOutput{
Value: payment.Amount,
ScriptPublicKey: scriptPublicKey,
}
}
domainTransaction := &externalapi.DomainTransaction{
Version: constants.MaxTransactionVersion,
Inputs: inputs,
Outputs: outputs,
LockTime: 0,
SubnetworkID: subnetworks.SubnetworkIDNative,
Gas: 0,
Payload: nil,
}
return &serialization.PartiallySignedTransaction{
Tx: domainTransaction,
PartiallySignedInputs: partiallySignedInputs,
}, nil
}
// IsTransactionFullySigned returns whether the transaction is fully signed and ready to broadcast.
func IsTransactionFullySigned(psTxBytes []byte) (bool, error) {
partiallySignedTransaction, err := serialization.DeserializePartiallySignedTransaction(psTxBytes)
if err != nil {
return false, err
}
return isTransactionFullySigned(partiallySignedTransaction), nil
}
func isTransactionFullySigned(psTx *serialization.PartiallySignedTransaction) bool {
for _, input := range psTx.PartiallySignedInputs {
numSignatures := 0
for _, pair := range input.PubKeySignaturePairs {
if pair.Signature != nil {
numSignatures++
}
}
if uint32(numSignatures) < input.MinimumSignatures {
return false
}
}
return true
}
// ExtractTransaction extracts a domain transaction from a partially signed transaction after all of the
// relevant parties have signed it.
func ExtractTransaction(psTxBytes []byte) (*externalapi.DomainTransaction, error) {
partiallySignedTransaction, err := serialization.DeserializePartiallySignedTransaction(psTxBytes)
if err != nil {
return nil, err
}
return extractTransaction(partiallySignedTransaction)
}
func extractTransaction(psTx *serialization.PartiallySignedTransaction) (*externalapi.DomainTransaction, error) {
for i, input := range psTx.PartiallySignedInputs {
isMultisig := input.RedeeemScript != nil
scriptBuilder := txscript.NewScriptBuilder()
if isMultisig {
signatureCount := 0
for _, pair := range input.PubKeySignaturePairs {
if pair.Signature != nil {
scriptBuilder.AddData(pair.Signature)
signatureCount++
}
}
if uint32(signatureCount) < input.MinimumSignatures {
return nil, errors.Errorf("missing %d signatures", input.MinimumSignatures-uint32(signatureCount))
}
scriptBuilder.AddData(input.RedeeemScript)
sigScript, err := scriptBuilder.Script()
if err != nil {
return nil, err
}
psTx.Tx.Inputs[i].SignatureScript = sigScript
} else {
if len(input.PubKeySignaturePairs) > 1 {
return nil, errors.Errorf("cannot extract a P2PK input with more than one PubKeySignaturePair")
}
if input.PubKeySignaturePairs[0].Signature == nil {
return nil, errors.Errorf("missing signature")
}
sigScript, err := txscript.NewScriptBuilder().
AddData(input.PubKeySignaturePairs[0].Signature).
Script()
if err != nil {
return nil, err
}
psTx.Tx.Inputs[i].SignatureScript = sigScript
}
}
return psTx.Tx, nil
}


@@ -0,0 +1,291 @@
package libkaspawallet_test
import (
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/util"
"strings"
"testing"
)
func forSchnorrAndECDSA(t *testing.T, testFunc func(t *testing.T, ecdsa bool)) {
t.Run("schnorr", func(t *testing.T) {
testFunc(t, false)
})
t.Run("ecdsa", func(t *testing.T) {
testFunc(t, true)
})
}
func TestMultisig(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
forSchnorrAndECDSA(t, func(t *testing.T, ecdsa bool) {
params.BlockCoinbaseMaturity = 0
tc, teardown, err := consensus.NewFactory().NewTestConsensus(params, false, "TestMultisig")
if err != nil {
t.Fatalf("Error setting up tc: %+v", err)
}
defer teardown(false)
const numKeys = 3
privateKeys := make([][]byte, numKeys)
publicKeys := make([][]byte, numKeys)
for i := 0; i < numKeys; i++ {
privateKeys[i], publicKeys[i], err = libkaspawallet.CreateKeyPair(ecdsa)
if err != nil {
t.Fatalf("CreateKeyPair: %+v", err)
}
}
const minimumSignatures = 2
address, err := libkaspawallet.Address(params, publicKeys, minimumSignatures, ecdsa)
if err != nil {
t.Fatalf("Address: %+v", err)
}
if _, ok := address.(*util.AddressScriptHash); !ok {
t.Fatalf("The address is of unexpected type")
}
scriptPublicKey, err := txscript.PayToAddrScript(address)
if err != nil {
t.Fatalf("PayToAddrScript: %+v", err)
}
coinbaseData := &externalapi.DomainCoinbaseData{
ScriptPublicKey: scriptPublicKey,
ExtraData: nil,
}
fundingBlockHash, _, err := tc.AddBlock([]*externalapi.DomainHash{params.GenesisHash}, coinbaseData, nil)
if err != nil {
t.Fatalf("AddBlock: %+v", err)
}
block1Hash, _, err := tc.AddBlock([]*externalapi.DomainHash{fundingBlockHash}, nil, nil)
if err != nil {
t.Fatalf("AddBlock: %+v", err)
}
block1, err := tc.GetBlock(block1Hash)
if err != nil {
t.Fatalf("GetBlock: %+v", err)
}
block1Tx := block1.Transactions[0]
block1TxOut := block1Tx.Outputs[0]
selectedUTXOs := []*externalapi.OutpointAndUTXOEntryPair{{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: *consensushashing.TransactionID(block1.Transactions[0]),
Index: 0,
},
UTXOEntry: utxo.NewUTXOEntry(block1TxOut.Value, block1TxOut.ScriptPublicKey, true, 0),
}}
unsignedTransaction, err := libkaspawallet.CreateUnsignedTransaction(publicKeys, minimumSignatures, ecdsa,
[]*libkaspawallet.Payment{{
Address: address,
Amount: 10,
}}, selectedUTXOs)
if err != nil {
t.Fatalf("CreateUnsignedTransaction: %+v", err)
}
isFullySigned, err := libkaspawallet.IsTransactionFullySigned(unsignedTransaction)
if err != nil {
t.Fatalf("IsTransactionFullySigned: %+v", err)
}
if isFullySigned {
t.Fatalf("Transaction is not expected to be signed")
}
_, err = libkaspawallet.ExtractTransaction(unsignedTransaction)
if err == nil || !strings.Contains(err.Error(), fmt.Sprintf("missing %d signatures", minimumSignatures)) {
t.Fatal("Unexpectedly succeeded in extracting a valid transaction out of an unsigned transaction")
}
signedTxStep1, err := libkaspawallet.Sign(privateKeys[:1], unsignedTransaction, ecdsa)
if err != nil {
t.Fatalf("Sign: %+v", err)
}
isFullySigned, err = libkaspawallet.IsTransactionFullySigned(signedTxStep1)
if err != nil {
t.Fatalf("IsTransactionFullySigned: %+v", err)
}
if isFullySigned {
t.Fatalf("Transaction is not expected to be fully signed")
}
signedTxStep2, err := libkaspawallet.Sign(privateKeys[1:2], signedTxStep1, ecdsa)
if err != nil {
t.Fatalf("Sign: %+v", err)
}
extractedSignedTxStep2, err := libkaspawallet.ExtractTransaction(signedTxStep2)
if err != nil {
t.Fatalf("ExtractTransaction: %+v", err)
}
signedTxOneStep, err := libkaspawallet.Sign(privateKeys[:2], unsignedTransaction, ecdsa)
if err != nil {
t.Fatalf("Sign: %+v", err)
}
extractedSignedTxOneStep, err := libkaspawallet.ExtractTransaction(signedTxOneStep)
if err != nil {
t.Fatalf("ExtractTransaction: %+v", err)
}
// We check IDs instead of comparing the actual transactions because the actual transactions have different
// signature scripts due to the non-deterministic signature scheme.
if !consensushashing.TransactionID(extractedSignedTxStep2).Equal(consensushashing.TransactionID(extractedSignedTxOneStep)) {
t.Fatalf("Expected extractedSignedTxOneStep and extractedSignedTxStep2 IDs to be equal")
}
_, insertionResult, err := tc.AddBlock([]*externalapi.DomainHash{block1Hash}, nil, []*externalapi.DomainTransaction{extractedSignedTxStep2})
if err != nil {
t.Fatalf("AddBlock: %+v", err)
}
addedUTXO := &externalapi.DomainOutpoint{
TransactionID: *consensushashing.TransactionID(extractedSignedTxStep2),
Index: 0,
}
if !insertionResult.VirtualUTXODiff.ToAdd().Contains(addedUTXO) {
t.Fatalf("Transaction wasn't accepted in the DAG")
}
})
})
}
func TestP2PK(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
forSchnorrAndECDSA(t, func(t *testing.T, ecdsa bool) {
params.BlockCoinbaseMaturity = 0
tc, teardown, err := consensus.NewFactory().NewTestConsensus(params, false, "TestP2PK")
if err != nil {
t.Fatalf("Error setting up tc: %+v", err)
}
defer teardown(false)
const numKeys = 1
privateKeys := make([][]byte, numKeys)
publicKeys := make([][]byte, numKeys)
for i := 0; i < numKeys; i++ {
privateKeys[i], publicKeys[i], err = libkaspawallet.CreateKeyPair(ecdsa)
if err != nil {
t.Fatalf("CreateKeyPair: %+v", err)
}
}
const minimumSignatures = 1
address, err := libkaspawallet.Address(params, publicKeys, minimumSignatures, ecdsa)
if err != nil {
t.Fatalf("Address: %+v", err)
}
if ecdsa {
if _, ok := address.(*util.AddressPublicKeyECDSA); !ok {
t.Fatalf("The address is of unexpected type")
}
} else {
if _, ok := address.(*util.AddressPublicKey); !ok {
t.Fatalf("The address is of unexpected type")
}
}
scriptPublicKey, err := txscript.PayToAddrScript(address)
if err != nil {
t.Fatalf("PayToAddrScript: %+v", err)
}
coinbaseData := &externalapi.DomainCoinbaseData{
ScriptPublicKey: scriptPublicKey,
ExtraData: nil,
}
fundingBlockHash, _, err := tc.AddBlock([]*externalapi.DomainHash{params.GenesisHash}, coinbaseData, nil)
if err != nil {
t.Fatalf("AddBlock: %+v", err)
}
block1Hash, _, err := tc.AddBlock([]*externalapi.DomainHash{fundingBlockHash}, nil, nil)
if err != nil {
t.Fatalf("AddBlock: %+v", err)
}
block1, err := tc.GetBlock(block1Hash)
if err != nil {
t.Fatalf("GetBlock: %+v", err)
}
block1Tx := block1.Transactions[0]
block1TxOut := block1Tx.Outputs[0]
selectedUTXOs := []*externalapi.OutpointAndUTXOEntryPair{{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: *consensushashing.TransactionID(block1.Transactions[0]),
Index: 0,
},
UTXOEntry: utxo.NewUTXOEntry(block1TxOut.Value, block1TxOut.ScriptPublicKey, true, 0),
}}
unsignedTransaction, err := libkaspawallet.CreateUnsignedTransaction(publicKeys, minimumSignatures,
ecdsa,
[]*libkaspawallet.Payment{{
Address: address,
Amount: 10,
}}, selectedUTXOs)
if err != nil {
t.Fatalf("CreateUnsignedTransaction: %+v", err)
}
isFullySigned, err := libkaspawallet.IsTransactionFullySigned(unsignedTransaction)
if err != nil {
t.Fatalf("IsTransactionFullySigned: %+v", err)
}
if isFullySigned {
t.Fatalf("Transaction is not expected to be signed")
}
_, err = libkaspawallet.ExtractTransaction(unsignedTransaction)
if err == nil || !strings.Contains(err.Error(), "missing signature") {
t.Fatal("Unexpectedly succeeded in extracting a valid transaction out of an unsigned transaction")
}
signedTx, err := libkaspawallet.Sign(privateKeys, unsignedTransaction, ecdsa)
if err != nil {
t.Fatalf("Sign: %+v", err)
}
tx, err := libkaspawallet.ExtractTransaction(signedTx)
if err != nil {
t.Fatalf("ExtractTransaction: %+v", err)
}
_, insertionResult, err := tc.AddBlock([]*externalapi.DomainHash{block1Hash}, nil, []*externalapi.DomainTransaction{tx})
if err != nil {
t.Fatalf("AddBlock: %+v", err)
}
addedUTXO := &externalapi.DomainOutpoint{
TransactionID: *consensushashing.TransactionID(tx),
Index: 0,
}
if !insertionResult.VirtualUTXODiff.ToAdd().Contains(addedUTXO) {
t.Fatalf("Transaction wasn't accepted in the DAG")
}
})
})
}

cmd/kaspawallet/main.go (new file)

@@ -0,0 +1,33 @@
package main
import "github.com/pkg/errors"
func main() {
subCmd, config := parseCommandLine()
var err error
switch subCmd {
case createSubCmd:
err = create(config.(*createConfig))
case balanceSubCmd:
err = balance(config.(*balanceConfig))
case sendSubCmd:
err = send(config.(*sendConfig))
case createUnsignedTransactionSubCmd:
err = createUnsignedTransaction(config.(*createUnsignedTransactionConfig))
case signSubCmd:
err = sign(config.(*signConfig))
case broadcastSubCmd:
err = broadcast(config.(*broadcastConfig))
case showAddressSubCmd:
err = showAddress(config.(*showAddressConfig))
case dumpUnencryptedDataSubCmd:
err = dumpUnencryptedData(config.(*dumpUnencryptedDataConfig))
default:
err = errors.Errorf("Unknown sub-command '%s'\n", subCmd)
}
if err != nil {
printErrorAndExit(err)
}
}

cmd/kaspawallet/send.go (new file)

@@ -0,0 +1,170 @@
package main
import (
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
utxopkg "github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionid"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
)
func send(conf *sendConfig) error {
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
toAddress, err := util.DecodeAddress(conf.ToAddress, conf.ActiveNetParams.Prefix)
if err != nil {
return err
}
fromAddress, err := libkaspawallet.Address(conf.NetParams(), keysFile.PublicKeys, keysFile.MinimumSignatures, keysFile.ECDSA)
if err != nil {
return err
}
client, err := connectToRPC(conf.NetParams(), conf.RPCServer)
if err != nil {
return err
}
utxos, err := fetchSpendableUTXOs(conf.NetParams(), client, fromAddress.String())
if err != nil {
return err
}
sendAmountSompi := uint64(conf.SendAmount * util.SompiPerKaspa)
const feePerInput = 1000
selectedUTXOs, changeSompi, err := selectUTXOs(utxos, sendAmountSompi, feePerInput)
if err != nil {
return err
}
psTx, err := libkaspawallet.CreateUnsignedTransaction(keysFile.PublicKeys,
keysFile.MinimumSignatures,
keysFile.ECDSA,
[]*libkaspawallet.Payment{{
Address: toAddress,
Amount: sendAmountSompi,
}, {
Address: fromAddress,
Amount: changeSompi,
}},
selectedUTXOs)
if err != nil {
return err
}
privateKeys, err := keysFile.DecryptPrivateKeys()
if err != nil {
return err
}
updatedPSTx, err := libkaspawallet.Sign(privateKeys, psTx, keysFile.ECDSA)
if err != nil {
return err
}
tx, err := libkaspawallet.ExtractTransaction(updatedPSTx)
if err != nil {
return err
}
transactionID, err := sendTransaction(client, tx)
if err != nil {
return err
}
fmt.Println("Transaction was sent successfully")
fmt.Printf("Transaction ID: \t%s\n", transactionID)
return nil
}
func fetchSpendableUTXOs(params *dagconfig.Params, client *rpcclient.RPCClient, address string) ([]*appmessage.UTXOsByAddressesEntry, error) {
getUTXOsByAddressesResponse, err := client.GetUTXOsByAddresses([]string{address})
if err != nil {
return nil, err
}
virtualSelectedParentBlueScoreResponse, err := client.GetVirtualSelectedParentBlueScore()
if err != nil {
return nil, err
}
virtualSelectedParentBlueScore := virtualSelectedParentBlueScoreResponse.BlueScore
spendableUTXOs := make([]*appmessage.UTXOsByAddressesEntry, 0)
for _, entry := range getUTXOsByAddressesResponse.Entries {
if !isUTXOSpendable(entry, virtualSelectedParentBlueScore, params.BlockCoinbaseMaturity) {
continue
}
spendableUTXOs = append(spendableUTXOs, entry)
}
return spendableUTXOs, nil
}
func selectUTXOs(utxos []*appmessage.UTXOsByAddressesEntry, spendAmount uint64, feePerInput uint64) (
selectedUTXOs []*externalapi.OutpointAndUTXOEntryPair, changeSompi uint64, err error) {
selectedUTXOs = []*externalapi.OutpointAndUTXOEntryPair{}
totalValue := uint64(0)
for _, utxo := range utxos {
txID, err := transactionid.FromString(utxo.Outpoint.TransactionID)
if err != nil {
return nil, 0, err
}
rpcUTXOEntry := utxo.UTXOEntry
scriptPublicKeyScript, err := hex.DecodeString(rpcUTXOEntry.ScriptPublicKey.Script)
if err != nil {
return nil, 0, err
}
scriptPublicKey := &externalapi.ScriptPublicKey{
Script: scriptPublicKeyScript,
Version: rpcUTXOEntry.ScriptPublicKey.Version,
}
utxoEntry := utxopkg.NewUTXOEntry(rpcUTXOEntry.Amount, scriptPublicKey, rpcUTXOEntry.IsCoinbase, rpcUTXOEntry.BlockDAAScore)
selectedUTXOs = append(selectedUTXOs, &externalapi.OutpointAndUTXOEntryPair{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: *txID,
Index: utxo.Outpoint.Index,
},
UTXOEntry: utxoEntry,
})
totalValue += utxo.UTXOEntry.Amount
fee := feePerInput * uint64(len(selectedUTXOs))
totalSpend := spendAmount + fee
if totalValue >= totalSpend {
break
}
}
fee := feePerInput * uint64(len(selectedUTXOs))
totalSpend := spendAmount + fee
if totalValue < totalSpend {
return nil, 0, errors.Errorf("Insufficient funds for send: %f required, while only %f available",
float64(totalSpend)/util.SompiPerKaspa, float64(totalValue)/util.SompiPerKaspa)
}
return selectedUTXOs, totalValue - totalSpend, nil
}
func sendTransaction(client *rpcclient.RPCClient, tx *externalapi.DomainTransaction) (string, error) {
submitTransactionResponse, err := client.SubmitTransaction(appmessage.DomainTransactionToRPCTransaction(tx))
if err != nil {
return "", errors.Wrapf(err, "error submitting transaction")
}
return submitTransactionResponse.TransactionID, nil
}


@@ -0,0 +1,22 @@
package main
import (
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
)
func showAddress(conf *showAddressConfig) error {
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
address, err := libkaspawallet.Address(conf.NetParams(), keysFile.PublicKeys, keysFile.MinimumSignatures, keysFile.ECDSA)
if err != nil {
return err
}
fmt.Printf("The wallet address is:\n%s\n", address)
return nil
}

cmd/kaspawallet/sign.go (new file)

@@ -0,0 +1,44 @@
package main
import (
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
)
func sign(conf *signConfig) error {
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
psTxBytes, err := hex.DecodeString(conf.Transaction)
if err != nil {
return err
}
privateKeys, err := keysFile.DecryptPrivateKeys()
if err != nil {
return err
}
updatedPSTxBytes, err := libkaspawallet.Sign(privateKeys, psTxBytes, keysFile.ECDSA)
if err != nil {
return err
}
isFullySigned, err := libkaspawallet.IsTransactionFullySigned(updatedPSTxBytes)
if err != nil {
return err
}
if isFullySigned {
fmt.Println("The transaction is signed and ready to broadcast")
} else {
fmt.Println("Successfully signed transaction")
}
fmt.Printf("Transaction: %x\n", updatedPSTxBytes)
return nil
}


@@ -1,48 +0,0 @@
WALLET
======
## IMPORTANT:
### This software is for TESTING ONLY. Do NOT use it for handling real money.
`wallet` is a simple, no-frills wallet software operated via the command line.\
It is capable of generating wallet key-pairs, printing a wallet's current balance, and sending simple transactions.
## Requirements
Go 1.16 or later.
## Installation
#### Build from Source
- Install Go according to the installation instructions here:
http://golang.org/doc/install
- Ensure Go was installed properly and is a supported version:
```bash
$ go version
```
- Run the following commands to obtain and install kaspad including all dependencies:
```bash
$ git clone https://github.com/kaspanet/kaspad
$ cd kaspad/cmd/wallet
$ go install .
```
- Wallet should now be installed in `$(go env GOPATH)/bin`. If you did
not already add the bin directory to your system path during Go installation,
you are encouraged to do so now.
Usage
-----
* Create a new wallet key-pair: `wallet create --testnet`
* Print a wallet's current balance:
`wallet balance --testnet --address=kaspatest:000000000000000000000000000000000000000000`
* Send funds to another wallet:
`wallet send --testnet --private-key=0000000000000000000000000000000000000000000000000000000000000000 --send-amount=50 --to-address=kaspatest:000000000000000000000000000000000000000000`


@@ -1,85 +0,0 @@
package main
import (
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/pkg/errors"
"os"
"github.com/jessevdk/go-flags"
)
const (
createSubCmd = "create"
balanceSubCmd = "balance"
sendSubCmd = "send"
)
type createConfig struct {
config.NetworkFlags
}
type balanceConfig struct {
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
Address string `long:"address" short:"d" description:"The public address to check the balance of" required:"true"`
config.NetworkFlags
}
type sendConfig struct {
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
PrivateKey string `long:"private-key" short:"k" description:"The private key of the sender (encoded in hex)" required:"true"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
config.NetworkFlags
}
func parseCommandLine() (subCommand string, config interface{}) {
cfg := &struct{}{}
parser := flags.NewParser(cfg, flags.PrintErrors|flags.HelpFlag)
createConf := &createConfig{}
parser.AddCommand(createSubCmd, "Creates a new wallet",
"Creates a private key and 3 public addresses, one for each of MainNet, TestNet and DevNet", createConf)
balanceConf := &balanceConfig{}
parser.AddCommand(balanceSubCmd, "Shows the balance of a public address",
"Shows the balance for a public address in Kaspa", balanceConf)
sendConf := &sendConfig{}
parser.AddCommand(sendSubCmd, "Sends a Kaspa transaction to a public address",
"Sends a Kaspa transaction to a public address", sendConf)
_, err := parser.Parse()
if err != nil {
var flagsErr *flags.Error
if ok := errors.As(err, &flagsErr); ok && flagsErr.Type == flags.ErrHelp {
os.Exit(0)
} else {
os.Exit(1)
}
return "", nil
}
switch parser.Command.Active.Name {
case createSubCmd:
err := createConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = createConf
case balanceSubCmd:
err := balanceConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = balanceConf
case sendSubCmd:
err := sendConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = sendConf
}
return parser.Command.Active.Name, config
}


@@ -1,37 +0,0 @@
package main
import (
"fmt"
"github.com/kaspanet/go-secp256k1"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
)
func create(conf *createConfig) error {
privateKey, err := secp256k1.GeneratePrivateKey()
if err != nil {
return errors.Wrap(err, "Failed to generate private key")
}
fmt.Println("This is your private key, granting access to all wallet funds. Keep it safe. Use it only when sending Kaspa.")
fmt.Printf("Private key (hex):\t%s\n\n", privateKey.SerializePrivateKey())
fmt.Println("This is your public address, where money is to be sent.")
publicKey, err := privateKey.SchnorrPublicKey()
if err != nil {
return errors.Wrap(err, "Failed to generate public key")
}
publicKeySerialized, err := publicKey.Serialize()
if err != nil {
return errors.Wrap(err, "Failed to serialize public key")
}
addr, err := util.NewAddressPubKeyHashFromPublicKey(publicKeySerialized[:], conf.ActiveNetParams.Prefix)
if err != nil {
return errors.Wrap(err, "Failed to generate p2pkh address")
}
fmt.Printf("Address (%s):\t%s\n", conf.ActiveNetParams.Name, addr)
return nil
}


@@ -1,25 +0,0 @@
package main
import (
"github.com/pkg/errors"
)
func main() {
subCmd, config := parseCommandLine()
var err error
switch subCmd {
case createSubCmd:
err = create(config.(*createConfig))
case balanceSubCmd:
err = balance(config.(*balanceConfig))
case sendSubCmd:
err = send(config.(*sendConfig))
default:
err = errors.Errorf("Unknown sub-command '%s'\n", subCmd)
}
if err != nil {
printErrorAndExit(err)
}
}


@@ -1,204 +0,0 @@
package main
import (
"encoding/hex"
"fmt"
"github.com/kaspanet/go-secp256k1"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionid"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
)
const feeSompis uint64 = 1000
func send(conf *sendConfig) error {
toAddress, err := util.DecodeAddress(conf.ToAddress, conf.ActiveNetParams.Prefix)
if err != nil {
return err
}
keyPair, publicKey, err := parsePrivateKey(conf.PrivateKey)
if err != nil {
return err
}
serializedPublicKey, err := publicKey.Serialize()
if err != nil {
return err
}
fromAddress, err := util.NewAddressPubKeyHashFromPublicKey(serializedPublicKey[:], conf.ActiveNetParams.Prefix)
if err != nil {
return err
}
client, err := rpcclient.NewRPCClient(conf.RPCServer)
if err != nil {
return err
}
utxos, err := fetchSpendableUTXOs(conf, client, fromAddress.String())
if err != nil {
return err
}
sendAmountSompi := uint64(conf.SendAmount * util.SompiPerKaspa)
totalToSend := sendAmountSompi + feeSompis
selectedUTXOs, changeSompi, err := selectUTXOs(utxos, totalToSend)
if err != nil {
return err
}
rpcTransaction, err := generateTransaction(keyPair, selectedUTXOs, sendAmountSompi, changeSompi, toAddress, fromAddress)
if err != nil {
return err
}
transactionID, err := sendTransaction(client, rpcTransaction)
if err != nil {
return err
}
fmt.Println("Transaction was sent successfully")
fmt.Printf("Transaction ID: \t%s\n", transactionID)
return nil
}
func parsePrivateKey(privateKeyHex string) (*secp256k1.SchnorrKeyPair, *secp256k1.SchnorrPublicKey, error) {
privateKeyBytes, err := hex.DecodeString(privateKeyHex)
if err != nil {
return nil, nil, errors.Wrap(err, "Error parsing private key hex")
}
keyPair, err := secp256k1.DeserializePrivateKeyFromSlice(privateKeyBytes)
if err != nil {
return nil, nil, errors.Wrap(err, "Error deserializing private key")
}
publicKey, err := keyPair.SchnorrPublicKey()
if err != nil {
return nil, nil, errors.Wrap(err, "Error generating public key")
}
return keyPair, publicKey, nil
}
func fetchSpendableUTXOs(conf *sendConfig, client *rpcclient.RPCClient, address string) ([]*appmessage.UTXOsByAddressesEntry, error) {
getUTXOsByAddressesResponse, err := client.GetUTXOsByAddresses([]string{address})
if err != nil {
return nil, err
}
virtualSelectedParentBlueScoreResponse, err := client.GetVirtualSelectedParentBlueScore()
if err != nil {
return nil, err
}
virtualSelectedParentBlueScore := virtualSelectedParentBlueScoreResponse.BlueScore
spendableUTXOs := make([]*appmessage.UTXOsByAddressesEntry, 0)
for _, entry := range getUTXOsByAddressesResponse.Entries {
if !isUTXOSpendable(entry, virtualSelectedParentBlueScore, conf.ActiveNetParams.BlockCoinbaseMaturity) {
continue
}
spendableUTXOs = append(spendableUTXOs, entry)
}
return spendableUTXOs, nil
}
func selectUTXOs(utxos []*appmessage.UTXOsByAddressesEntry, totalToSpend uint64) (
selectedUTXOs []*appmessage.UTXOsByAddressesEntry, changeSompi uint64, err error) {
selectedUTXOs = []*appmessage.UTXOsByAddressesEntry{}
totalValue := uint64(0)
for _, utxo := range utxos {
selectedUTXOs = append(selectedUTXOs, utxo)
totalValue += utxo.UTXOEntry.Amount
if totalValue >= totalToSpend {
break
}
}
if totalValue < totalToSpend {
return nil, 0, errors.Errorf("Insufficient funds for send: %f required, while only %f available",
float64(totalToSpend)/util.SompiPerKaspa, float64(totalValue)/util.SompiPerKaspa)
}
return selectedUTXOs, totalValue - totalToSpend, nil
}
func generateTransaction(keyPair *secp256k1.SchnorrKeyPair, selectedUTXOs []*appmessage.UTXOsByAddressesEntry,
sompisToSend uint64, change uint64, toAddress util.Address,
fromAddress util.Address) (*appmessage.RPCTransaction, error) {
inputs := make([]*externalapi.DomainTransactionInput, len(selectedUTXOs))
for i, utxo := range selectedUTXOs {
outpointTransactionIDBytes, err := hex.DecodeString(utxo.Outpoint.TransactionID)
if err != nil {
return nil, err
}
outpointTransactionID, err := transactionid.FromBytes(outpointTransactionIDBytes)
if err != nil {
return nil, err
}
outpoint := externalapi.DomainOutpoint{
TransactionID: *outpointTransactionID,
Index: utxo.Outpoint.Index,
}
inputs[i] = &externalapi.DomainTransactionInput{PreviousOutpoint: outpoint}
}
toScript, err := txscript.PayToAddrScript(toAddress)
if err != nil {
return nil, err
}
mainOutput := &externalapi.DomainTransactionOutput{
Value: sompisToSend,
ScriptPublicKey: toScript,
}
fromScript, err := txscript.PayToAddrScript(fromAddress)
if err != nil {
return nil, err
}
changeOutput := &externalapi.DomainTransactionOutput{
Value: change,
ScriptPublicKey: fromScript,
}
outputs := []*externalapi.DomainTransactionOutput{mainOutput, changeOutput}
domainTransaction := &externalapi.DomainTransaction{
Version: constants.MaxTransactionVersion,
Inputs: inputs,
Outputs: outputs,
LockTime: 0,
SubnetworkID: subnetworks.SubnetworkIDNative,
Gas: 0,
Payload: nil,
}
sighashReusedValues := &consensushashing.SighashReusedValues{}
for i, input := range domainTransaction.Inputs {
signatureScript, err := txscript.SignatureScript(
domainTransaction, i, consensushashing.SigHashAll, keyPair, sighashReusedValues)
if err != nil {
return nil, err
}
input.SignatureScript = signatureScript
}
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(domainTransaction)
return rpcTransaction, nil
}
func sendTransaction(client *rpcclient.RPCClient, rpcTransaction *appmessage.RPCTransaction) (string, error) {
submitTransactionResponse, err := client.SubmitTransaction(rpcTransaction)
if err != nil {
return "", errors.Wrapf(err, "error submitting transaction")
}
return submitTransactionResponse.TransactionID, nil
}


@@ -75,30 +75,34 @@ func (s *consensus) ValidateTransactionAndPopulateWithConsensusData(transaction
s.lock.Lock()
defer s.lock.Unlock()
stagingArea := model.NewStagingArea()
err := s.transactionValidator.ValidateTransactionInIsolation(transaction)
if err != nil {
return err
}
err = s.consensusStateManager.PopulateTransactionWithUTXOEntries(transaction)
err = s.consensusStateManager.PopulateTransactionWithUTXOEntries(stagingArea, transaction)
if err != nil {
return err
}
virtualSelectedParentMedianTime, err := s.pastMedianTimeManager.PastMedianTime(model.VirtualBlockHash)
virtualSelectedParentMedianTime, err := s.pastMedianTimeManager.PastMedianTime(stagingArea, model.VirtualBlockHash)
if err != nil {
return err
}
return s.transactionValidator.ValidateTransactionInContextAndPopulateMassAndFee(transaction,
model.VirtualBlockHash, virtualSelectedParentMedianTime)
return s.transactionValidator.ValidateTransactionInContextAndPopulateMassAndFee(
stagingArea, transaction, model.VirtualBlockHash, virtualSelectedParentMedianTime)
}
func (s *consensus) GetBlock(blockHash *externalapi.DomainHash) (*externalapi.DomainBlock, error) {
s.lock.Lock()
defer s.lock.Unlock()
block, err := s.blockStore.Block(s.databaseContext, blockHash)
stagingArea := model.NewStagingArea()
block, err := s.blockStore.Block(s.databaseContext, stagingArea, blockHash)
if err != nil {
if errors.Is(err, database.ErrNotFound) {
return nil, errors.Wrapf(err, "block %s does not exist", blockHash)
@@ -112,7 +116,9 @@ func (s *consensus) GetBlockHeader(blockHash *externalapi.DomainHash) (externala
s.lock.Lock()
defer s.lock.Unlock()
blockHeader, err := s.blockHeaderStore.BlockHeader(s.databaseContext, blockHash)
stagingArea := model.NewStagingArea()
blockHeader, err := s.blockHeaderStore.BlockHeader(s.databaseContext, stagingArea, blockHash)
if err != nil {
if errors.Is(err, database.ErrNotFound) {
return nil, errors.Wrapf(err, "block header %s does not exist", blockHash)
@@ -126,9 +132,11 @@ func (s *consensus) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalap
s.lock.Lock()
defer s.lock.Unlock()
stagingArea := model.NewStagingArea()
blockInfo := &externalapi.BlockInfo{}
exists, err := s.blockStatusStore.Exists(s.databaseContext, blockHash)
exists, err := s.blockStatusStore.Exists(s.databaseContext, stagingArea, blockHash)
if err != nil {
return nil, err
}
@@ -137,7 +145,7 @@ func (s *consensus) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalap
return blockInfo, nil
}
blockStatus, err := s.blockStatusStore.Get(s.databaseContext, blockHash)
blockStatus, err := s.blockStatusStore.Get(s.databaseContext, stagingArea, blockHash)
if err != nil {
return nil, err
}
@@ -148,7 +156,7 @@ func (s *consensus) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalap
return blockInfo, nil
}
ghostdagData, err := s.ghostdagDataStore.Get(s.databaseContext, blockHash)
ghostdagData, err := s.ghostdagDataStore.Get(s.databaseContext, stagingArea, blockHash)
if err != nil {
return nil, err
}
@@ -158,57 +166,74 @@ func (s *consensus) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalap
return blockInfo, nil
}
func (s *consensus) GetBlockChildren(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
func (s *consensus) GetBlockRelations(blockHash *externalapi.DomainHash) (
parents []*externalapi.DomainHash, selectedParent *externalapi.DomainHash,
children []*externalapi.DomainHash, err error) {
s.lock.Lock()
defer s.lock.Unlock()
blockRelation, err := s.blockRelationStore.BlockRelation(s.databaseContext, blockHash)
stagingArea := model.NewStagingArea()
blockRelation, err := s.blockRelationStore.BlockRelation(s.databaseContext, stagingArea, blockHash)
if err != nil {
return nil, err
return nil, nil, nil, err
}
return blockRelation.Children, nil
blockGHOSTDAGData, err := s.ghostdagDataStore.Get(s.databaseContext, stagingArea, blockHash)
if err != nil {
return nil, nil, nil, err
}
return blockRelation.Parents, blockGHOSTDAGData.SelectedParent(), blockRelation.Children, nil
}
func (s *consensus) GetBlockAcceptanceData(blockHash *externalapi.DomainHash) (externalapi.AcceptanceData, error) {
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(blockHash)
stagingArea := model.NewStagingArea()
err := s.validateBlockHashExists(stagingArea, blockHash)
if err != nil {
return nil, err
}
return s.acceptanceDataStore.Get(s.databaseContext, blockHash)
return s.acceptanceDataStore.Get(s.databaseContext, stagingArea, blockHash)
}
func (s *consensus) GetHashesBetween(lowHash, highHash *externalapi.DomainHash,
maxBlueScoreDifference uint64) (hashes []*externalapi.DomainHash, actualHighHash *externalapi.DomainHash, err error) {
func (s *consensus) GetHashesBetween(lowHash, highHash *externalapi.DomainHash, maxBlueScoreDifference uint64) (
hashes []*externalapi.DomainHash, actualHighHash *externalapi.DomainHash, err error) {
s.lock.Lock()
defer s.lock.Unlock()
err = s.validateBlockHashExists(lowHash)
stagingArea := model.NewStagingArea()
err = s.validateBlockHashExists(stagingArea, lowHash)
if err != nil {
return nil, nil, err
}
err = s.validateBlockHashExists(highHash)
err = s.validateBlockHashExists(stagingArea, highHash)
if err != nil {
return nil, nil, err
}
return s.syncManager.GetHashesBetween(lowHash, highHash, maxBlueScoreDifference)
return s.syncManager.GetHashesBetween(stagingArea, lowHash, highHash, maxBlueScoreDifference)
}
func (s *consensus) GetMissingBlockBodyHashes(highHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(highHash)
stagingArea := model.NewStagingArea()
err := s.validateBlockHashExists(stagingArea, highHash)
if err != nil {
return nil, err
}
return s.syncManager.GetMissingBlockBodyHashes(highHash)
return s.syncManager.GetMissingBlockBodyHashes(stagingArea, highHash)
}
func (s *consensus) GetPruningPointUTXOs(expectedPruningPointHash *externalapi.DomainHash,
@@ -217,7 +242,9 @@ func (s *consensus) GetPruningPointUTXOs(expectedPruningPointHash *externalapi.D
s.lock.Lock()
defer s.lock.Unlock()
pruningPointHash, err := s.pruningStore.PruningPoint(s.databaseContext)
stagingArea := model.NewStagingArea()
pruningPointHash, err := s.pruningStore.PruningPoint(s.databaseContext, stagingArea)
if err != nil {
return nil, err
}
@@ -241,7 +268,9 @@ func (s *consensus) GetVirtualUTXOs(expectedVirtualParents []*externalapi.Domain
s.lock.Lock()
defer s.lock.Unlock()
virtualParents, err := s.dagTopologyManager.Parents(model.VirtualBlockHash)
stagingArea := model.NewStagingArea()
virtualParents, err := s.dagTopologyManager.Parents(stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
@@ -263,7 +292,9 @@ func (s *consensus) PruningPoint() (*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()
return s.pruningStore.PruningPoint(s.databaseContext)
stagingArea := model.NewStagingArea()
return s.pruningStore.PruningPoint(s.databaseContext, stagingArea)
}
func (s *consensus) ClearImportedPruningPointData() error {
@@ -291,7 +322,9 @@ func (s *consensus) GetVirtualSelectedParent() (*externalapi.DomainHash, error)
s.lock.Lock()
defer s.lock.Unlock()
virtualGHOSTDAGData, err := s.ghostdagDataStore.Get(s.databaseContext, model.VirtualBlockHash)
stagingArea := model.NewStagingArea()
virtualGHOSTDAGData, err := s.ghostdagDataStore.Get(s.databaseContext, stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
@@ -302,26 +335,30 @@ func (s *consensus) Tips() ([]*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()
return s.consensusStateStore.Tips(s.databaseContext)
stagingArea := model.NewStagingArea()
return s.consensusStateStore.Tips(stagingArea, s.databaseContext)
}
func (s *consensus) GetVirtualInfo() (*externalapi.VirtualInfo, error) {
s.lock.Lock()
defer s.lock.Unlock()
blockRelations, err := s.blockRelationStore.BlockRelation(s.databaseContext, model.VirtualBlockHash)
stagingArea := model.NewStagingArea()
blockRelations, err := s.blockRelationStore.BlockRelation(s.databaseContext, stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
bits, err := s.difficultyManager.RequiredDifficulty(model.VirtualBlockHash)
bits, err := s.difficultyManager.RequiredDifficulty(stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
pastMedianTime, err := s.pastMedianTimeManager.PastMedianTime(model.VirtualBlockHash)
pastMedianTime, err := s.pastMedianTimeManager.PastMedianTime(stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
virtualGHOSTDAGData, err := s.ghostdagDataStore.Get(s.databaseContext, model.VirtualBlockHash)
virtualGHOSTDAGData, err := s.ghostdagDataStore.Get(s.databaseContext, stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
@@ -338,76 +375,87 @@ func (s *consensus) CreateBlockLocator(lowHash, highHash *externalapi.DomainHash
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(lowHash)
stagingArea := model.NewStagingArea()
err := s.validateBlockHashExists(stagingArea, lowHash)
if err != nil {
return nil, err
}
err = s.validateBlockHashExists(highHash)
err = s.validateBlockHashExists(stagingArea, highHash)
if err != nil {
return nil, err
}
return s.syncManager.CreateBlockLocator(lowHash, highHash, limit)
return s.syncManager.CreateBlockLocator(stagingArea, lowHash, highHash, limit)
}
func (s *consensus) CreateFullHeadersSelectedChainBlockLocator() (externalapi.BlockLocator, error) {
s.lock.Lock()
defer s.lock.Unlock()
lowHash, err := s.pruningStore.PruningPoint(s.databaseContext)
stagingArea := model.NewStagingArea()
lowHash, err := s.pruningStore.PruningPoint(s.databaseContext, stagingArea)
if err != nil {
return nil, err
}
highHash, err := s.headersSelectedTipStore.HeadersSelectedTip(s.databaseContext)
highHash, err := s.headersSelectedTipStore.HeadersSelectedTip(s.databaseContext, stagingArea)
if err != nil {
return nil, err
}
return s.syncManager.CreateHeadersSelectedChainBlockLocator(lowHash, highHash)
return s.syncManager.CreateHeadersSelectedChainBlockLocator(stagingArea, lowHash, highHash)
}
func (s *consensus) CreateHeadersSelectedChainBlockLocator(lowHash,
highHash *externalapi.DomainHash) (externalapi.BlockLocator, error) {
func (s *consensus) CreateHeadersSelectedChainBlockLocator(lowHash, highHash *externalapi.DomainHash) (externalapi.BlockLocator, error) {
s.lock.Lock()
defer s.lock.Unlock()
return s.syncManager.CreateHeadersSelectedChainBlockLocator(lowHash, highHash)
stagingArea := model.NewStagingArea()
return s.syncManager.CreateHeadersSelectedChainBlockLocator(stagingArea, lowHash, highHash)
}
func (s *consensus) GetSyncInfo() (*externalapi.SyncInfo, error) {
s.lock.Lock()
defer s.lock.Unlock()
return s.syncManager.GetSyncInfo()
stagingArea := model.NewStagingArea()
return s.syncManager.GetSyncInfo(stagingArea)
}
func (s *consensus) IsValidPruningPoint(blockHash *externalapi.DomainHash) (bool, error) {
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(blockHash)
stagingArea := model.NewStagingArea()
err := s.validateBlockHashExists(stagingArea, blockHash)
if err != nil {
return false, err
}
return s.pruningManager.IsValidPruningPoint(blockHash)
return s.pruningManager.IsValidPruningPoint(stagingArea, blockHash)
}
func (s *consensus) GetVirtualSelectedParentChainFromBlock(blockHash *externalapi.DomainHash) (*externalapi.SelectedChainPath, error) {
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(blockHash)
stagingArea := model.NewStagingArea()
err := s.validateBlockHashExists(stagingArea, blockHash)
if err != nil {
return nil, err
}
return s.consensusStateManager.GetVirtualSelectedParentChainFromBlock(blockHash)
return s.consensusStateManager.GetVirtualSelectedParentChainFromBlock(stagingArea, blockHash)
}
func (s *consensus) validateBlockHashExists(blockHash *externalapi.DomainHash) error {
exists, err := s.blockStatusStore.Exists(s.databaseContext, blockHash)
func (s *consensus) validateBlockHashExists(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) error {
exists, err := s.blockStatusStore.Exists(s.databaseContext, stagingArea, blockHash)
if err != nil {
return err
}
@@ -421,33 +469,39 @@ func (s *consensus) IsInSelectedParentChainOf(blockHashA *externalapi.DomainHash
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(blockHashA)
stagingArea := model.NewStagingArea()
err := s.validateBlockHashExists(stagingArea, blockHashA)
if err != nil {
return false, err
}
err = s.validateBlockHashExists(blockHashB)
err = s.validateBlockHashExists(stagingArea, blockHashB)
if err != nil {
return false, err
}
return s.dagTopologyManager.IsInSelectedParentChainOf(blockHashA, blockHashB)
return s.dagTopologyManager.IsInSelectedParentChainOf(stagingArea, blockHashA, blockHashB)
}
func (s *consensus) GetHeadersSelectedTip() (*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()
return s.headersSelectedTipStore.HeadersSelectedTip(s.databaseContext)
stagingArea := model.NewStagingArea()
return s.headersSelectedTipStore.HeadersSelectedTip(s.databaseContext, stagingArea)
}
func (s *consensus) Anticone(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(blockHash)
stagingArea := model.NewStagingArea()
err := s.validateBlockHashExists(stagingArea, blockHash)
if err != nil {
return nil, err
}
return s.dagTraversalManager.Anticone(blockHash)
return s.dagTraversalManager.Anticone(stagingArea, blockHash)
}
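The recurring change in the hunks above is that every read-only `consensus` method now allocates a fresh `model.NewStagingArea()` and threads it through store and manager calls. The point of a per-call area is isolation: reads never observe writes staged by another in-flight operation. A minimal self-contained sketch of that idea, using simplified stand-in types rather than kaspad's actual `model` package:

```go
package main

import "fmt"

// StagingArea is a simplified stand-in for kaspad's model.StagingArea:
// a per-call container of pending writes, so a read-only method can pass
// a fresh, empty one and never observe another call's staged state.
type StagingArea struct {
	shards map[string]map[string]string // shardID -> staged key/value pairs
}

func NewStagingArea() *StagingArea {
	return &StagingArea{shards: make(map[string]map[string]string)}
}

func (sa *StagingArea) shard(id string) map[string]string {
	if _, ok := sa.shards[id]; !ok {
		sa.shards[id] = make(map[string]string)
	}
	return sa.shards[id]
}

// store consults the staging area first, then falls back to the database,
// mirroring the toAdd-then-DB lookup order in the stores above.
type store struct {
	db map[string]string
}

func (s *store) Stage(sa *StagingArea, key, value string) {
	sa.shard("store")[key] = value
}

func (s *store) Get(sa *StagingArea, key string) (string, bool) {
	if v, ok := sa.shard("store")[key]; ok {
		return v, true
	}
	v, ok := s.db[key]
	return v, ok
}

func main() {
	s := &store{db: map[string]string{"genesis": "header"}}

	// A write path stages into its own area...
	writeArea := NewStagingArea()
	s.Stage(writeArea, "tip", "new-header")

	// ...while a read path with a fresh area sees only committed DB state.
	readArea := NewStagingArea()
	_, ok := s.Get(readArea, "tip")
	fmt.Println(ok) // false: staged data is isolated per staging area

	v, _ := s.Get(writeArea, "tip")
	fmt.Println(v) // new-header
}
```

The names (`StagingArea.shard`, the `"store"` shard ID) are illustrative only; kaspad's real staging area dispatches to typed shards via `GetOrCreateShard`, as the shard files below show.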

View File

@@ -0,0 +1,50 @@
package acceptancedatastore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type acceptanceDataStagingShard struct {
store *acceptanceDataStore
toAdd map[externalapi.DomainHash]externalapi.AcceptanceData
toDelete map[externalapi.DomainHash]struct{}
}
func (ads *acceptanceDataStore) stagingShard(stagingArea *model.StagingArea) *acceptanceDataStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDAcceptanceData, func() model.StagingShard {
return &acceptanceDataStagingShard{
store: ads,
toAdd: make(map[externalapi.DomainHash]externalapi.AcceptanceData),
toDelete: make(map[externalapi.DomainHash]struct{}),
}
}).(*acceptanceDataStagingShard)
}
func (adss *acceptanceDataStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, acceptanceData := range adss.toAdd {
acceptanceDataBytes, err := adss.store.serializeAcceptanceData(acceptanceData)
if err != nil {
return err
}
err = dbTx.Put(adss.store.hashAsKey(&hash), acceptanceDataBytes)
if err != nil {
return err
}
adss.store.cache.Add(&hash, acceptanceData)
}
for hash := range adss.toDelete {
err := dbTx.Delete(adss.store.hashAsKey(&hash))
if err != nil {
return err
}
adss.store.cache.Remove(&hash)
}
return nil
}
func (adss *acceptanceDataStagingShard) isStaged() bool {
return len(adss.toAdd) != 0 || len(adss.toDelete) != 0
}
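Each store now contributes a shard like the one above, and `Commit` flushes that shard's pending additions and deletions to the database in one pass while keeping the store's cache coherent. A hedged sketch of the commit flow with stand-in types (a plain map instead of kaspad's `model.DBTransaction`):

```go
package main

import "fmt"

// shard mirrors the acceptanceDataStagingShard commit flow above: pending
// additions and deletions are applied to the transaction, and the store's
// cache is updated alongside each write. Types here are hypothetical
// stand-ins, not kaspad's model package.
type dbTx map[string][]byte

type shard struct {
	toAdd    map[string][]byte
	toDelete map[string]struct{}
	cache    map[string][]byte // stands in for the store's LRU cache
}

func (s *shard) Commit(tx dbTx) error {
	for key, value := range s.toAdd {
		tx[key] = value      // dbTx.Put
		s.cache[key] = value // keep cache coherent with the committed write
	}
	for key := range s.toDelete {
		delete(tx, key) // dbTx.Delete
		delete(s.cache, key)
	}
	return nil
}

func (s *shard) isStaged() bool {
	return len(s.toAdd) != 0 || len(s.toDelete) != 0
}

func main() {
	tx := dbTx{"old": []byte("x")}
	s := &shard{
		toAdd:    map[string][]byte{"new": []byte("y")},
		toDelete: map[string]struct{}{"old": {}},
		cache:    map[string][]byte{"old": []byte("x")},
	}
	fmt.Println(s.isStaged()) // true
	if err := s.Commit(tx); err != nil {
		panic(err)
	}
	fmt.Println(string(tx["new"]), len(tx)) // y 1
}
```

Note that, unlike the old store-level `Commit`, a shard needs no `Discard`: the whole staging area is simply dropped after commit, since it lives only for the duration of one operation.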

View File

@@ -13,62 +13,31 @@ var bucket = database.MakeBucket([]byte("acceptance-data"))
// acceptanceDataStore represents a store of AcceptanceData
type acceptanceDataStore struct {
staging map[externalapi.DomainHash]externalapi.AcceptanceData
toDelete map[externalapi.DomainHash]struct{}
cache *lrucache.LRUCache
cache *lrucache.LRUCache
}
// New instantiates a new AcceptanceDataStore
func New(cacheSize int, preallocate bool) model.AcceptanceDataStore {
return &acceptanceDataStore{
staging: make(map[externalapi.DomainHash]externalapi.AcceptanceData),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
}
// Stage stages the given acceptanceData for the given blockHash
func (ads *acceptanceDataStore) Stage(blockHash *externalapi.DomainHash, acceptanceData externalapi.AcceptanceData) {
ads.staging[*blockHash] = acceptanceData.Clone()
func (ads *acceptanceDataStore) Stage(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, acceptanceData externalapi.AcceptanceData) {
stagingShard := ads.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = acceptanceData.Clone()
}
func (ads *acceptanceDataStore) IsStaged() bool {
return len(ads.staging) != 0 || len(ads.toDelete) != 0
}
func (ads *acceptanceDataStore) Discard() {
ads.staging = make(map[externalapi.DomainHash]externalapi.AcceptanceData)
ads.toDelete = make(map[externalapi.DomainHash]struct{})
}
func (ads *acceptanceDataStore) Commit(dbTx model.DBTransaction) error {
for hash, acceptanceData := range ads.staging {
acceptanceDataBytes, err := ads.serializeAcceptanceData(acceptanceData)
if err != nil {
return err
}
err = dbTx.Put(ads.hashAsKey(&hash), acceptanceDataBytes)
if err != nil {
return err
}
ads.cache.Add(&hash, acceptanceData)
}
for hash := range ads.toDelete {
err := dbTx.Delete(ads.hashAsKey(&hash))
if err != nil {
return err
}
ads.cache.Remove(&hash)
}
ads.Discard()
return nil
func (ads *acceptanceDataStore) IsStaged(stagingArea *model.StagingArea) bool {
return ads.stagingShard(stagingArea).isStaged()
}
// Get gets the acceptanceData associated with the given blockHash
func (ads *acceptanceDataStore) Get(dbContext model.DBReader, blockHash *externalapi.DomainHash) (externalapi.AcceptanceData, error) {
if acceptanceData, ok := ads.staging[*blockHash]; ok {
func (ads *acceptanceDataStore) Get(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (externalapi.AcceptanceData, error) {
stagingShard := ads.stagingShard(stagingArea)
if acceptanceData, ok := stagingShard.toAdd[*blockHash]; ok {
return acceptanceData.Clone(), nil
}
@@ -90,12 +59,14 @@ func (ads *acceptanceDataStore) Get(dbContext model.DBReader, blockHash *externa
}
// Delete deletes the acceptanceData associated with the given blockHash
func (ads *acceptanceDataStore) Delete(blockHash *externalapi.DomainHash) {
if _, ok := ads.staging[*blockHash]; ok {
delete(ads.staging, *blockHash)
func (ads *acceptanceDataStore) Delete(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) {
stagingShard := ads.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
delete(stagingShard.toAdd, *blockHash)
return
}
ads.toDelete[*blockHash] = struct{}{}
stagingShard.toDelete[*blockHash] = struct{}{}
}
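The `Delete` logic just above has two distinct cases: a hash that was only ever staged is dropped from `toAdd` (nothing to undo in the database), while anything else is recorded in `toDelete` so `Commit` will issue a real database delete. A minimal sketch with stand-in types:

```go
package main

import "fmt"

// stagingShard is a simplified stand-in for the staging shards above.
type stagingShard struct {
	toAdd    map[string]string
	toDelete map[string]struct{}
}

func (s *stagingShard) Delete(hash string) {
	if _, ok := s.toAdd[hash]; ok {
		delete(s.toAdd, hash) // staged but never committed: just un-stage it
		return
	}
	s.toDelete[hash] = struct{}{} // otherwise schedule a real DB delete
}

func main() {
	s := &stagingShard{
		toAdd:    map[string]string{"staged": "data"},
		toDelete: map[string]struct{}{},
	}
	s.Delete("staged")    // only un-stages; no DB delete will be issued
	s.Delete("committed") // scheduled for deletion at commit time
	fmt.Println(len(s.toAdd), len(s.toDelete)) // 0 1
}
```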
func (ads *acceptanceDataStore) serializeAcceptanceData(acceptanceData externalapi.AcceptanceData) ([]byte, error) {

View File

@@ -0,0 +1,69 @@
package blockheaderstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type blockHeaderStagingShard struct {
store *blockHeaderStore
toAdd map[externalapi.DomainHash]externalapi.BlockHeader
toDelete map[externalapi.DomainHash]struct{}
}
func (bhs *blockHeaderStore) stagingShard(stagingArea *model.StagingArea) *blockHeaderStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDBlockHeader, func() model.StagingShard {
return &blockHeaderStagingShard{
store: bhs,
toAdd: make(map[externalapi.DomainHash]externalapi.BlockHeader),
toDelete: make(map[externalapi.DomainHash]struct{}),
}
}).(*blockHeaderStagingShard)
}
func (bhss *blockHeaderStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, header := range bhss.toAdd {
headerBytes, err := bhss.store.serializeHeader(header)
if err != nil {
return err
}
err = dbTx.Put(bhss.store.hashAsKey(&hash), headerBytes)
if err != nil {
return err
}
bhss.store.cache.Add(&hash, header)
}
for hash := range bhss.toDelete {
err := dbTx.Delete(bhss.store.hashAsKey(&hash))
if err != nil {
return err
}
bhss.store.cache.Remove(&hash)
}
err := bhss.commitCount(dbTx)
if err != nil {
return err
}
return nil
}
func (bhss *blockHeaderStagingShard) commitCount(dbTx model.DBTransaction) error {
count := bhss.store.count(bhss)
countBytes, err := bhss.store.serializeHeaderCount(count)
if err != nil {
return err
}
err = dbTx.Put(countKey, countBytes)
if err != nil {
return err
}
bhss.store.countCached = count
return nil
}
func (bhss *blockHeaderStagingShard) isStaged() bool {
return len(bhss.toAdd) != 0 || len(bhss.toDelete) != 0
}
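The header-count bookkeeping in this shard follows one invariant: the effective count is the cached committed base plus the staging delta, `countCached + len(toAdd) - len(toDelete)`, and `commitCount` persists that value and makes it the new base. A sketch of just that arithmetic, with simplified stand-in types and a hypothetical `persist` callback in place of `dbTx.Put(countKey, ...)`:

```go
package main

import "fmt"

type headerStore struct {
	countCached uint64 // last committed header count
}

type headerShard struct {
	store    *headerStore
	toAdd    map[string]struct{}
	toDelete map[string]struct{}
}

// count is the committed base adjusted by this shard's pending delta.
func (s *headerShard) count() uint64 {
	return s.store.countCached + uint64(len(s.toAdd)) - uint64(len(s.toDelete))
}

// commitCount persists the adjusted count and promotes it to the new base.
func (s *headerShard) commitCount(persist func(uint64)) {
	c := s.count()
	persist(c)              // stands in for dbTx.Put(countKey, serialized count)
	s.store.countCached = c // the committed value becomes the cached base
}

func main() {
	store := &headerStore{countCached: 10}
	shard := &headerShard{
		store:    store,
		toAdd:    map[string]struct{}{"h1": {}, "h2": {}},
		toDelete: map[string]struct{}{"h0": {}},
	}
	fmt.Println(shard.count()) // 11
	shard.commitCount(func(uint64) {})
	fmt.Println(store.countCached) // 11
}
```

As in the real code, the subtraction is unsigned, so the invariant relies on never staging more deletions than existing headers.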

View File

@@ -14,18 +14,14 @@ var countKey = database.MakeBucket(nil).Key([]byte("block-headers-count"))
// blockHeaderStore represents a store of block headers
type blockHeaderStore struct {
staging map[externalapi.DomainHash]externalapi.BlockHeader
toDelete map[externalapi.DomainHash]struct{}
cache *lrucache.LRUCache
count uint64
cache *lrucache.LRUCache
countCached uint64
}
// New instantiates a new BlockHeaderStore
func New(dbContext model.DBReader, cacheSize int, preallocate bool) (model.BlockHeaderStore, error) {
blockHeaderStore := &blockHeaderStore{
staging: make(map[externalapi.DomainHash]externalapi.BlockHeader),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
err := blockHeaderStore.initializeCount(dbContext)
@@ -52,57 +48,33 @@ func (bhs *blockHeaderStore) initializeCount(dbContext model.DBReader) error {
return err
}
}
bhs.count = count
bhs.countCached = count
return nil
}
// Stage stages the given block header for the given blockHash
func (bhs *blockHeaderStore) Stage(blockHash *externalapi.DomainHash, blockHeader externalapi.BlockHeader) {
bhs.staging[*blockHash] = blockHeader
func (bhs *blockHeaderStore) Stage(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, blockHeader externalapi.BlockHeader) {
stagingShard := bhs.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = blockHeader
}
func (bhs *blockHeaderStore) IsStaged() bool {
return len(bhs.staging) != 0 || len(bhs.toDelete) != 0
}
func (bhs *blockHeaderStore) Discard() {
bhs.staging = make(map[externalapi.DomainHash]externalapi.BlockHeader)
bhs.toDelete = make(map[externalapi.DomainHash]struct{})
}
func (bhs *blockHeaderStore) Commit(dbTx model.DBTransaction) error {
for hash, header := range bhs.staging {
headerBytes, err := bhs.serializeHeader(header)
if err != nil {
return err
}
err = dbTx.Put(bhs.hashAsKey(&hash), headerBytes)
if err != nil {
return err
}
bhs.cache.Add(&hash, header)
}
for hash := range bhs.toDelete {
err := dbTx.Delete(bhs.hashAsKey(&hash))
if err != nil {
return err
}
bhs.cache.Remove(&hash)
}
err := bhs.commitCount(dbTx)
if err != nil {
return err
}
bhs.Discard()
return nil
func (bhs *blockHeaderStore) IsStaged(stagingArea *model.StagingArea) bool {
return bhs.stagingShard(stagingArea).isStaged()
}
// BlockHeader gets the block header associated with the given blockHash
func (bhs *blockHeaderStore) BlockHeader(dbContext model.DBReader, blockHash *externalapi.DomainHash) (externalapi.BlockHeader, error) {
if header, ok := bhs.staging[*blockHash]; ok {
func (bhs *blockHeaderStore) BlockHeader(dbContext model.DBReader, stagingArea *model.StagingArea,
blockHash *externalapi.DomainHash) (externalapi.BlockHeader, error) {
stagingShard := bhs.stagingShard(stagingArea)
return bhs.blockHeader(dbContext, stagingShard, blockHash)
}
func (bhs *blockHeaderStore) blockHeader(dbContext model.DBReader, stagingShard *blockHeaderStagingShard,
blockHash *externalapi.DomainHash) (externalapi.BlockHeader, error) {
if header, ok := stagingShard.toAdd[*blockHash]; ok {
return header, nil
}
@@ -124,8 +96,10 @@ func (bhs *blockHeaderStore) BlockHeader(dbContext model.DBReader, blockHash *ex
}
// HasBlockHeader returns whether a block header with a given hash exists in the store.
func (bhs *blockHeaderStore) HasBlockHeader(dbContext model.DBReader, blockHash *externalapi.DomainHash) (bool, error) {
if _, ok := bhs.staging[*blockHash]; ok {
func (bhs *blockHeaderStore) HasBlockHeader(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (bool, error) {
stagingShard := bhs.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
return true, nil
}
@@ -142,11 +116,15 @@ func (bhs *blockHeaderStore) HasBlockHeader(dbContext model.DBReader, blockHash
}
// BlockHeaders gets the block headers associated with the given blockHashes
func (bhs *blockHeaderStore) BlockHeaders(dbContext model.DBReader, blockHashes []*externalapi.DomainHash) ([]externalapi.BlockHeader, error) {
func (bhs *blockHeaderStore) BlockHeaders(dbContext model.DBReader, stagingArea *model.StagingArea,
blockHashes []*externalapi.DomainHash) ([]externalapi.BlockHeader, error) {
stagingShard := bhs.stagingShard(stagingArea)
headers := make([]externalapi.BlockHeader, len(blockHashes))
for i, hash := range blockHashes {
var err error
headers[i], err = bhs.BlockHeader(dbContext, hash)
headers[i], err = bhs.blockHeader(dbContext, stagingShard, hash)
if err != nil {
return nil, err
}
@@ -155,12 +133,14 @@ func (bhs *blockHeaderStore) BlockHeaders(dbContext model.DBReader, blockHashes
}
// Delete deletes the block header associated with the given blockHash
func (bhs *blockHeaderStore) Delete(blockHash *externalapi.DomainHash) {
if _, ok := bhs.staging[*blockHash]; ok {
delete(bhs.staging, *blockHash)
func (bhs *blockHeaderStore) Delete(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) {
stagingShard := bhs.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
delete(stagingShard.toAdd, *blockHash)
return
}
bhs.toDelete[*blockHash] = struct{}{}
stagingShard.toDelete[*blockHash] = struct{}{}
}
func (bhs *blockHeaderStore) hashAsKey(hash *externalapi.DomainHash) model.DBKey {
@@ -181,8 +161,14 @@ func (bhs *blockHeaderStore) deserializeHeader(headerBytes []byte) (externalapi.
return serialization.DbBlockHeaderToDomainBlockHeader(dbBlockHeader)
}
func (bhs *blockHeaderStore) Count() uint64 {
return bhs.count + uint64(len(bhs.staging)) - uint64(len(bhs.toDelete))
func (bhs *blockHeaderStore) Count(stagingArea *model.StagingArea) uint64 {
stagingShard := bhs.stagingShard(stagingArea)
return bhs.count(stagingShard)
}
func (bhs *blockHeaderStore) count(stagingShard *blockHeaderStagingShard) uint64 {
return bhs.countCached + uint64(len(stagingShard.toAdd)) - uint64(len(stagingShard.toDelete))
}
func (bhs *blockHeaderStore) deserializeHeaderCount(countBytes []byte) (uint64, error) {
@@ -194,20 +180,6 @@ func (bhs *blockHeaderStore) deserializeHeaderCount(countBytes []byte) (uint64,
return dbBlockHeaderCount.Count, nil
}
func (bhs *blockHeaderStore) commitCount(dbTx model.DBTransaction) error {
count := bhs.Count()
countBytes, err := bhs.serializeHeaderCount(count)
if err != nil {
return err
}
err = dbTx.Put(countKey, countBytes)
if err != nil {
return err
}
bhs.count = count
return nil
}
func (bhs *blockHeaderStore) serializeHeaderCount(count uint64) ([]byte, error) {
dbBlockHeaderCount := &serialization.DbBlockHeaderCount{Count: count}
return proto.Marshal(dbBlockHeaderCount)

View File

@@ -0,0 +1,40 @@
package blockrelationstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type blockRelationStagingShard struct {
store *blockRelationStore
toAdd map[externalapi.DomainHash]*model.BlockRelations
}
func (brs *blockRelationStore) stagingShard(stagingArea *model.StagingArea) *blockRelationStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDBlockRelation, func() model.StagingShard {
return &blockRelationStagingShard{
store: brs,
toAdd: make(map[externalapi.DomainHash]*model.BlockRelations),
}
}).(*blockRelationStagingShard)
}
func (brss *blockRelationStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, blockRelations := range brss.toAdd {
blockRelationBytes, err := brss.store.serializeBlockRelations(blockRelations)
if err != nil {
return err
}
err = dbTx.Put(brss.store.hashAsKey(&hash), blockRelationBytes)
if err != nil {
return err
}
brss.store.cache.Add(&hash, blockRelations)
}
return nil
}
func (brss *blockRelationStagingShard) isStaged() bool {
return len(brss.toAdd) != 0
}

View File

@@ -13,49 +13,30 @@ var bucket = database.MakeBucket([]byte("block-relations"))
// blockRelationStore represents a store of BlockRelations
type blockRelationStore struct {
staging map[externalapi.DomainHash]*model.BlockRelations
cache *lrucache.LRUCache
cache *lrucache.LRUCache
}
// New instantiates a new BlockRelationStore
func New(cacheSize int, preallocate bool) model.BlockRelationStore {
return &blockRelationStore{
staging: make(map[externalapi.DomainHash]*model.BlockRelations),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
}
func (brs *blockRelationStore) StageBlockRelation(blockHash *externalapi.DomainHash, blockRelations *model.BlockRelations) {
brs.staging[*blockHash] = blockRelations.Clone()
func (brs *blockRelationStore) StageBlockRelation(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, blockRelations *model.BlockRelations) {
stagingShard := brs.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = blockRelations.Clone()
}
func (brs *blockRelationStore) IsStaged() bool {
return len(brs.staging) != 0
func (brs *blockRelationStore) IsStaged(stagingArea *model.StagingArea) bool {
return brs.stagingShard(stagingArea).isStaged()
}
func (brs *blockRelationStore) Discard() {
brs.staging = make(map[externalapi.DomainHash]*model.BlockRelations)
}
func (brs *blockRelationStore) BlockRelation(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (*model.BlockRelations, error) {
stagingShard := brs.stagingShard(stagingArea)
func (brs *blockRelationStore) Commit(dbTx model.DBTransaction) error {
for hash, blockRelations := range brs.staging {
blockRelationBytes, err := brs.serializeBlockRelations(blockRelations)
if err != nil {
return err
}
err = dbTx.Put(brs.hashAsKey(&hash), blockRelationBytes)
if err != nil {
return err
}
brs.cache.Add(&hash, blockRelations)
}
brs.Discard()
return nil
}
func (brs *blockRelationStore) BlockRelation(dbContext model.DBReader, blockHash *externalapi.DomainHash) (*model.BlockRelations, error) {
if blockRelations, ok := brs.staging[*blockHash]; ok {
if blockRelations, ok := stagingShard.toAdd[*blockHash]; ok {
return blockRelations.Clone(), nil
}
@@ -76,8 +57,10 @@ func (brs *blockRelationStore) BlockRelation(dbContext model.DBReader, blockHash
return blockRelations.Clone(), nil
}
func (brs *blockRelationStore) Has(dbContext model.DBReader, blockHash *externalapi.DomainHash) (bool, error) {
if _, ok := brs.staging[*blockHash]; ok {
func (brs *blockRelationStore) Has(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (bool, error) {
stagingShard := brs.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
return true, nil
}

View File

@@ -0,0 +1,40 @@
package blockstatusstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type blockStatusStagingShard struct {
store *blockStatusStore
toAdd map[externalapi.DomainHash]externalapi.BlockStatus
}
func (bss *blockStatusStore) stagingShard(stagingArea *model.StagingArea) *blockStatusStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDBlockStatus, func() model.StagingShard {
return &blockStatusStagingShard{
store: bss,
toAdd: make(map[externalapi.DomainHash]externalapi.BlockStatus),
}
}).(*blockStatusStagingShard)
}
func (bsss *blockStatusStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, status := range bsss.toAdd {
blockStatusBytes, err := bsss.store.serializeBlockStatus(status)
if err != nil {
return err
}
err = dbTx.Put(bsss.store.hashAsKey(&hash), blockStatusBytes)
if err != nil {
return err
}
bsss.store.cache.Add(&hash, status)
}
return nil
}
func (bsss *blockStatusStagingShard) isStaged() bool {
return len(bsss.toAdd) != 0
}

View File

@@ -13,51 +13,31 @@ var bucket = database.MakeBucket([]byte("block-statuses"))
// blockStatusStore represents a store of BlockStatuses
type blockStatusStore struct {
staging map[externalapi.DomainHash]externalapi.BlockStatus
cache *lrucache.LRUCache
cache *lrucache.LRUCache
}
// New instantiates a new BlockStatusStore
func New(cacheSize int, preallocate bool) model.BlockStatusStore {
return &blockStatusStore{
staging: make(map[externalapi.DomainHash]externalapi.BlockStatus),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
}
// Stage stages the given blockStatus for the given blockHash
func (bss *blockStatusStore) Stage(blockHash *externalapi.DomainHash, blockStatus externalapi.BlockStatus) {
bss.staging[*blockHash] = blockStatus.Clone()
func (bss *blockStatusStore) Stage(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, blockStatus externalapi.BlockStatus) {
stagingShard := bss.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = blockStatus.Clone()
}
func (bss *blockStatusStore) IsStaged() bool {
return len(bss.staging) != 0
}
func (bss *blockStatusStore) Discard() {
bss.staging = make(map[externalapi.DomainHash]externalapi.BlockStatus)
}
func (bss *blockStatusStore) Commit(dbTx model.DBTransaction) error {
for hash, status := range bss.staging {
blockStatusBytes, err := bss.serializeBlockStatus(status)
if err != nil {
return err
}
err = dbTx.Put(bss.hashAsKey(&hash), blockStatusBytes)
if err != nil {
return err
}
bss.cache.Add(&hash, status)
}
bss.Discard()
return nil
func (bss *blockStatusStore) IsStaged(stagingArea *model.StagingArea) bool {
return bss.stagingShard(stagingArea).isStaged()
}
// Get gets the blockStatus associated with the given blockHash
func (bss *blockStatusStore) Get(dbContext model.DBReader, blockHash *externalapi.DomainHash) (externalapi.BlockStatus, error) {
if status, ok := bss.staging[*blockHash]; ok {
func (bss *blockStatusStore) Get(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (externalapi.BlockStatus, error) {
stagingShard := bss.stagingShard(stagingArea)
if status, ok := stagingShard.toAdd[*blockHash]; ok {
return status, nil
}
@@ -79,8 +59,10 @@ func (bss *blockStatusStore) Get(dbContext model.DBReader, blockHash *externalap
}
// Exists returns true if the blockStatus for the given blockHash exists
func (bss *blockStatusStore) Exists(dbContext model.DBReader, blockHash *externalapi.DomainHash) (bool, error) {
if _, ok := bss.staging[*blockHash]; ok {
func (bss *blockStatusStore) Exists(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (bool, error) {
stagingShard := bss.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
return true, nil
}

View File

@@ -0,0 +1,69 @@
package blockstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type blockStagingShard struct {
store *blockStore
toAdd map[externalapi.DomainHash]*externalapi.DomainBlock
toDelete map[externalapi.DomainHash]struct{}
}
func (bs *blockStore) stagingShard(stagingArea *model.StagingArea) *blockStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDBlock, func() model.StagingShard {
return &blockStagingShard{
store: bs,
toAdd: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
toDelete: make(map[externalapi.DomainHash]struct{}),
}
}).(*blockStagingShard)
}
func (bss *blockStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, block := range bss.toAdd {
blockBytes, err := bss.store.serializeBlock(block)
if err != nil {
return err
}
err = dbTx.Put(bss.store.hashAsKey(&hash), blockBytes)
if err != nil {
return err
}
bss.store.cache.Add(&hash, block)
}
for hash := range bss.toDelete {
err := dbTx.Delete(bss.store.hashAsKey(&hash))
if err != nil {
return err
}
bss.store.cache.Remove(&hash)
}
err := bss.commitCount(dbTx)
if err != nil {
return err
}
return nil
}
func (bss *blockStagingShard) commitCount(dbTx model.DBTransaction) error {
count := bss.store.count(bss)
countBytes, err := bss.store.serializeBlockCount(count)
if err != nil {
return err
}
err = dbTx.Put(countKey, countBytes)
if err != nil {
return err
}
bss.store.countCached = count
return nil
}
func (bss *blockStagingShard) isStaged() bool {
return len(bss.toAdd) != 0 || len(bss.toDelete) != 0
}


@@ -15,18 +15,14 @@ var countKey = database.MakeBucket(nil).Key([]byte("blocks-count"))
// blockStore represents a store of blocks
type blockStore struct {
staging map[externalapi.DomainHash]*externalapi.DomainBlock
toDelete map[externalapi.DomainHash]struct{}
cache *lrucache.LRUCache
count uint64
cache *lrucache.LRUCache
countCached uint64
}
// New instantiates a new BlockStore
func New(dbContext model.DBReader, cacheSize int, preallocate bool) (model.BlockStore, error) {
blockStore := &blockStore{
staging: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
err := blockStore.initializeCount(dbContext)
@@ -53,57 +49,29 @@ func (bs *blockStore) initializeCount(dbContext model.DBReader) error {
return err
}
}
bs.count = count
bs.countCached = count
return nil
}
// Stage stages the given block for the given blockHash
func (bs *blockStore) Stage(blockHash *externalapi.DomainHash, block *externalapi.DomainBlock) {
bs.staging[*blockHash] = block.Clone()
func (bs *blockStore) Stage(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, block *externalapi.DomainBlock) {
stagingShard := bs.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = block.Clone()
}
func (bs *blockStore) IsStaged() bool {
return len(bs.staging) != 0 || len(bs.toDelete) != 0
}
func (bs *blockStore) Discard() {
bs.staging = make(map[externalapi.DomainHash]*externalapi.DomainBlock)
bs.toDelete = make(map[externalapi.DomainHash]struct{})
}
func (bs *blockStore) Commit(dbTx model.DBTransaction) error {
for hash, block := range bs.staging {
blockBytes, err := bs.serializeBlock(block)
if err != nil {
return err
}
err = dbTx.Put(bs.hashAsKey(&hash), blockBytes)
if err != nil {
return err
}
bs.cache.Add(&hash, block)
}
for hash := range bs.toDelete {
err := dbTx.Delete(bs.hashAsKey(&hash))
if err != nil {
return err
}
bs.cache.Remove(&hash)
}
err := bs.commitCount(dbTx)
if err != nil {
return err
}
bs.Discard()
return nil
func (bs *blockStore) IsStaged(stagingArea *model.StagingArea) bool {
return bs.stagingShard(stagingArea).isStaged()
}
// Block gets the block associated with the given blockHash
func (bs *blockStore) Block(dbContext model.DBReader, blockHash *externalapi.DomainHash) (*externalapi.DomainBlock, error) {
if block, ok := bs.staging[*blockHash]; ok {
func (bs *blockStore) Block(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (*externalapi.DomainBlock, error) {
stagingShard := bs.stagingShard(stagingArea)
return bs.block(dbContext, stagingShard, blockHash)
}
func (bs *blockStore) block(dbContext model.DBReader, stagingShard *blockStagingShard, blockHash *externalapi.DomainHash) (*externalapi.DomainBlock, error) {
if block, ok := stagingShard.toAdd[*blockHash]; ok {
return block.Clone(), nil
}
@@ -125,8 +93,10 @@ func (bs *blockStore) Block(dbContext model.DBReader, blockHash *externalapi.Dom
}
// HasBlock returns whether a block with a given hash exists in the store.
func (bs *blockStore) HasBlock(dbContext model.DBReader, blockHash *externalapi.DomainHash) (bool, error) {
if _, ok := bs.staging[*blockHash]; ok {
func (bs *blockStore) HasBlock(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (bool, error) {
stagingShard := bs.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
return true, nil
}
@@ -143,11 +113,13 @@ func (bs *blockStore) HasBlock(dbContext model.DBReader, blockHash *externalapi.
}
// Blocks gets the blocks associated with the given blockHashes
func (bs *blockStore) Blocks(dbContext model.DBReader, blockHashes []*externalapi.DomainHash) ([]*externalapi.DomainBlock, error) {
func (bs *blockStore) Blocks(dbContext model.DBReader, stagingArea *model.StagingArea, blockHashes []*externalapi.DomainHash) ([]*externalapi.DomainBlock, error) {
stagingShard := bs.stagingShard(stagingArea)
blocks := make([]*externalapi.DomainBlock, len(blockHashes))
for i, hash := range blockHashes {
var err error
blocks[i], err = bs.Block(dbContext, hash)
blocks[i], err = bs.block(dbContext, stagingShard, hash)
if err != nil {
return nil, err
}
@@ -156,12 +128,14 @@ func (bs *blockStore) Blocks(dbContext model.DBReader, blockHashes []*externalap
}
// Delete deletes the block associated with the given blockHash
func (bs *blockStore) Delete(blockHash *externalapi.DomainHash) {
if _, ok := bs.staging[*blockHash]; ok {
delete(bs.staging, *blockHash)
func (bs *blockStore) Delete(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) {
stagingShard := bs.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
delete(stagingShard.toAdd, *blockHash)
return
}
bs.toDelete[*blockHash] = struct{}{}
stagingShard.toDelete[*blockHash] = struct{}{}
}
func (bs *blockStore) serializeBlock(block *externalapi.DomainBlock) ([]byte, error) {
@@ -182,8 +156,13 @@ func (bs *blockStore) hashAsKey(hash *externalapi.DomainHash) model.DBKey {
return bucket.Key(hash.ByteSlice())
}
func (bs *blockStore) Count() uint64 {
return bs.count + uint64(len(bs.staging)) - uint64(len(bs.toDelete))
func (bs *blockStore) Count(stagingArea *model.StagingArea) uint64 {
stagingShard := bs.stagingShard(stagingArea)
return bs.count(stagingShard)
}
func (bs *blockStore) count(stagingShard *blockStagingShard) uint64 {
return bs.countCached + uint64(len(stagingShard.toAdd)) - uint64(len(stagingShard.toDelete))
}
func (bs *blockStore) deserializeBlockCount(countBytes []byte) (uint64, error) {
@@ -195,20 +174,6 @@ func (bs *blockStore) deserializeBlockCount(countBytes []byte) (uint64, error) {
return dbBlockCount.Count, nil
}
func (bs *blockStore) commitCount(dbTx model.DBTransaction) error {
count := bs.Count()
countBytes, err := bs.serializeBlockCount(count)
if err != nil {
return err
}
err = dbTx.Put(countKey, countBytes)
if err != nil {
return err
}
bs.count = count
return nil
}
func (bs *blockStore) serializeBlockCount(count uint64) ([]byte, error) {
dbBlockCount := &serialization.DbBlockCount{Count: count}
return proto.Marshal(dbBlockCount)


@@ -0,0 +1,40 @@
package consensusstatestore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type consensusStateStagingShard struct {
store *consensusStateStore
tipsStaging []*externalapi.DomainHash
virtualUTXODiffStaging externalapi.UTXODiff
}
func (bs *consensusStateStore) stagingShard(stagingArea *model.StagingArea) *consensusStateStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDConsensusState, func() model.StagingShard {
return &consensusStateStagingShard{
store: bs,
tipsStaging: nil,
virtualUTXODiffStaging: nil,
}
}).(*consensusStateStagingShard)
}
func (csss *consensusStateStagingShard) Commit(dbTx model.DBTransaction) error {
err := csss.commitTips(dbTx)
if err != nil {
return err
}
err = csss.commitVirtualUTXODiff(dbTx)
if err != nil {
return err
}
return nil
}
func (csss *consensusStateStagingShard) isStaged() bool {
return csss.tipsStaging != nil || csss.virtualUTXODiffStaging != nil
}


@@ -8,9 +8,6 @@ import (
// consensusStateStore represents a store for the current consensus state
type consensusStateStore struct {
tipsStaging []*externalapi.DomainHash
virtualUTXODiffStaging externalapi.UTXODiff
virtualUTXOSetCache *utxolrucache.LRUCache
tipsCache []*externalapi.DomainHash
@@ -23,28 +20,6 @@ func New(utxoSetCacheSize int, preallocate bool) model.ConsensusStateStore {
}
}
func (css *consensusStateStore) Discard() {
css.tipsStaging = nil
css.virtualUTXODiffStaging = nil
}
func (css *consensusStateStore) Commit(dbTx model.DBTransaction) error {
err := css.commitTips(dbTx)
if err != nil {
return err
}
err = css.commitVirtualUTXODiff(dbTx)
if err != nil {
return err
}
css.Discard()
return nil
}
func (css *consensusStateStore) IsStaged() bool {
return css.tipsStaging != nil ||
css.virtualUTXODiffStaging != nil
func (css *consensusStateStore) IsStaged(stagingArea *model.StagingArea) bool {
return css.stagingShard(stagingArea).isStaged()
}


@@ -10,9 +10,11 @@ import (
var tipsKey = database.MakeBucket(nil).Key([]byte("tips"))
func (css *consensusStateStore) Tips(dbContext model.DBReader) ([]*externalapi.DomainHash, error) {
if css.tipsStaging != nil {
return externalapi.CloneHashes(css.tipsStaging), nil
func (css *consensusStateStore) Tips(stagingArea *model.StagingArea, dbContext model.DBReader) ([]*externalapi.DomainHash, error) {
stagingShard := css.stagingShard(stagingArea)
if stagingShard.tipsStaging != nil {
return externalapi.CloneHashes(stagingShard.tipsStaging), nil
}
if css.tipsCache != nil {
@@ -32,28 +34,10 @@ func (css *consensusStateStore) Tips(dbContext model.DBReader) ([]*externalapi.D
return externalapi.CloneHashes(tips), nil
}
func (css *consensusStateStore) StageTips(tipHashes []*externalapi.DomainHash) {
css.tipsStaging = externalapi.CloneHashes(tipHashes)
}
func (css *consensusStateStore) StageTips(stagingArea *model.StagingArea, tipHashes []*externalapi.DomainHash) {
stagingShard := css.stagingShard(stagingArea)
stagingShard.tipsStaging = externalapi.CloneHashes(tipHashes)
}
func (css *consensusStateStore) commitTips(dbTx model.DBTransaction) error {
if css.tipsStaging == nil {
return nil
}
tipsBytes, err := css.serializeTips(css.tipsStaging)
if err != nil {
return err
}
err = dbTx.Put(tipsKey, tipsBytes)
if err != nil {
return err
}
css.tipsCache = css.tipsStaging
// Note: we don't discard the staging here since that's
// being done at the end of Commit()
return nil
}
func (css *consensusStateStore) serializeTips(tips []*externalapi.DomainHash) ([]byte, error) {
@@ -72,3 +56,21 @@ func (css *consensusStateStore) deserializeTips(tipsBytes []byte) ([]*externalap
return serialization.DBTipsToTips(dbTips)
}
func (csss *consensusStateStagingShard) commitTips(dbTx model.DBTransaction) error {
if csss.tipsStaging == nil {
return nil
}
tipsBytes, err := csss.store.serializeTips(csss.tipsStaging)
if err != nil {
return err
}
err = dbTx.Put(tipsKey, tipsBytes)
if err != nil {
return err
}
csss.store.tipsCache = csss.tipsStaging
return nil
}


@@ -19,12 +19,14 @@ func utxoKey(outpoint *externalapi.DomainOutpoint) (model.DBKey, error) {
return utxoSetBucket.Key(serializedOutpoint), nil
}
func (css *consensusStateStore) StageVirtualUTXODiff(virtualUTXODiff externalapi.UTXODiff) {
css.virtualUTXODiffStaging = virtualUTXODiff
func (css *consensusStateStore) StageVirtualUTXODiff(stagingArea *model.StagingArea, virtualUTXODiff externalapi.UTXODiff) {
stagingShard := css.stagingShard(stagingArea)
stagingShard.virtualUTXODiffStaging = virtualUTXODiff
}
func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction) error {
hadStartedImportingPruningPointUTXOSet, err := css.HadStartedImportingPruningPointUTXOSet(dbTx)
func (csss *consensusStateStagingShard) commitVirtualUTXODiff(dbTx model.DBTransaction) error {
hadStartedImportingPruningPointUTXOSet, err := csss.store.HadStartedImportingPruningPointUTXOSet(dbTx)
if err != nil {
return err
}
@@ -32,11 +34,11 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
return errors.New("cannot commit virtual UTXO diff after starting to import the pruning point UTXO set")
}
if css.virtualUTXODiffStaging == nil {
if csss.virtualUTXODiffStaging == nil {
return nil
}
toRemoveIterator := css.virtualUTXODiffStaging.ToRemove().Iterator()
toRemoveIterator := csss.virtualUTXODiffStaging.ToRemove().Iterator()
defer toRemoveIterator.Close()
for ok := toRemoveIterator.First(); ok; ok = toRemoveIterator.Next() {
toRemoveOutpoint, _, err := toRemoveIterator.Get()
@@ -44,7 +46,7 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
return err
}
css.virtualUTXOSetCache.Remove(toRemoveOutpoint)
csss.store.virtualUTXOSetCache.Remove(toRemoveOutpoint)
dbKey, err := utxoKey(toRemoveOutpoint)
if err != nil {
@@ -56,7 +58,7 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
}
}
toAddIterator := css.virtualUTXODiffStaging.ToAdd().Iterator()
toAddIterator := csss.virtualUTXODiffStaging.ToAdd().Iterator()
defer toAddIterator.Close()
for ok := toAddIterator.First(); ok; ok = toAddIterator.Next() {
toAddOutpoint, toAddEntry, err := toAddIterator.Get()
@@ -64,7 +66,7 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
return err
}
css.virtualUTXOSetCache.Add(toAddOutpoint, toAddEntry)
csss.store.virtualUTXOSetCache.Add(toAddOutpoint, toAddEntry)
dbKey, err := utxoKey(toAddOutpoint)
if err != nil {
@@ -85,21 +87,22 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
return nil
}
func (css *consensusStateStore) UTXOByOutpoint(dbContext model.DBReader, outpoint *externalapi.DomainOutpoint) (
externalapi.UTXOEntry, error) {
func (css *consensusStateStore) UTXOByOutpoint(dbContext model.DBReader, stagingArea *model.StagingArea,
outpoint *externalapi.DomainOutpoint) (externalapi.UTXOEntry, error) {
return css.utxoByOutpointFromStagedVirtualUTXODiff(dbContext, outpoint)
stagingShard := css.stagingShard(stagingArea)
return css.utxoByOutpointFromStagedVirtualUTXODiff(dbContext, stagingShard, outpoint)
}
func (css *consensusStateStore) utxoByOutpointFromStagedVirtualUTXODiff(dbContext model.DBReader,
outpoint *externalapi.DomainOutpoint) (
externalapi.UTXOEntry, error) {
stagingShard *consensusStateStagingShard, outpoint *externalapi.DomainOutpoint) (externalapi.UTXOEntry, error) {
if css.virtualUTXODiffStaging != nil {
if css.virtualUTXODiffStaging.ToRemove().Contains(outpoint) {
if stagingShard.virtualUTXODiffStaging != nil {
if stagingShard.virtualUTXODiffStaging.ToRemove().Contains(outpoint) {
return nil, errors.Errorf("outpoint was not found")
}
if utxoEntry, ok := css.virtualUTXODiffStaging.ToAdd().Get(outpoint); ok {
if utxoEntry, ok := stagingShard.virtualUTXODiffStaging.ToAdd().Get(outpoint); ok {
return utxoEntry, nil
}
}
@@ -127,18 +130,22 @@ func (css *consensusStateStore) utxoByOutpointFromStagedVirtualUTXODiff(dbContex
return entry, nil
}
func (css *consensusStateStore) HasUTXOByOutpoint(dbContext model.DBReader, outpoint *externalapi.DomainOutpoint) (bool, error) {
return css.hasUTXOByOutpointFromStagedVirtualUTXODiff(dbContext, outpoint)
func (css *consensusStateStore) HasUTXOByOutpoint(dbContext model.DBReader, stagingArea *model.StagingArea,
outpoint *externalapi.DomainOutpoint) (bool, error) {
stagingShard := css.stagingShard(stagingArea)
return css.hasUTXOByOutpointFromStagedVirtualUTXODiff(dbContext, stagingShard, outpoint)
}
func (css *consensusStateStore) hasUTXOByOutpointFromStagedVirtualUTXODiff(dbContext model.DBReader,
outpoint *externalapi.DomainOutpoint) (bool, error) {
stagingShard *consensusStateStagingShard, outpoint *externalapi.DomainOutpoint) (bool, error) {
if css.virtualUTXODiffStaging != nil {
if css.virtualUTXODiffStaging.ToRemove().Contains(outpoint) {
if stagingShard.virtualUTXODiffStaging != nil {
if stagingShard.virtualUTXODiffStaging.ToRemove().Contains(outpoint) {
return false, nil
}
if _, ok := css.virtualUTXODiffStaging.ToAdd().Get(outpoint); ok {
if _, ok := stagingShard.virtualUTXODiffStaging.ToAdd().Get(outpoint); ok {
return true, nil
}
}
@@ -151,8 +158,8 @@ func (css *consensusStateStore) hasUTXOByOutpointFromStagedVirtualUTXODiff(dbCon
return dbContext.Has(key)
}
func (css *consensusStateStore) VirtualUTXOs(dbContext model.DBReader,
fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {
func (css *consensusStateStore) VirtualUTXOs(dbContext model.DBReader, fromOutpoint *externalapi.DomainOutpoint, limit int) (
[]*externalapi.OutpointAndUTXOEntryPair, error) {
cursor, err := dbContext.Cursor(utxoSetBucket)
if err != nil {
@@ -189,15 +196,19 @@ func (css *consensusStateStore) VirtualUTXOs(dbContext model.DBReader,
return outpointAndUTXOEntryPairs, nil
}
func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader) (externalapi.ReadOnlyUTXOSetIterator, error) {
func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader, stagingArea *model.StagingArea) (
externalapi.ReadOnlyUTXOSetIterator, error) {
stagingShard := css.stagingShard(stagingArea)
cursor, err := dbContext.Cursor(utxoSetBucket)
if err != nil {
return nil, err
}
mainIterator := newCursorUTXOSetIterator(cursor)
if css.virtualUTXODiffStaging != nil {
return utxo.IteratorWithDiff(mainIterator, css.virtualUTXODiffStaging)
if stagingShard.virtualUTXODiffStaging != nil {
return utxo.IteratorWithDiff(mainIterator, stagingShard.virtualUTXODiffStaging)
}
return mainIterator, nil


@@ -24,10 +24,6 @@ func (css *consensusStateStore) FinishImportingPruningPointUTXOSet(dbContext mod
func (css *consensusStateStore) ImportPruningPointUTXOSetIntoVirtualUTXOSet(dbContext model.DBWriter,
pruningPointUTXOSetIterator externalapi.ReadOnlyUTXOSetIterator) error {
if css.virtualUTXODiffStaging != nil {
return errors.New("cannot import virtual UTXO set while virtual UTXO diff is staged")
}
hadStartedImportingPruningPointUTXOSet, err := css.HadStartedImportingPruningPointUTXOSet(dbContext)
if err != nil {
return err


@@ -0,0 +1,72 @@
package daablocksstore
import (
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type daaBlocksStagingShard struct {
store *daaBlocksStore
daaScoreToAdd map[externalapi.DomainHash]uint64
daaAddedBlocksToAdd map[externalapi.DomainHash][]*externalapi.DomainHash
daaScoreToDelete map[externalapi.DomainHash]struct{}
daaAddedBlocksToDelete map[externalapi.DomainHash]struct{}
}
func (daas *daaBlocksStore) stagingShard(stagingArea *model.StagingArea) *daaBlocksStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDDAABlocks, func() model.StagingShard {
return &daaBlocksStagingShard{
store: daas,
daaScoreToAdd: make(map[externalapi.DomainHash]uint64),
daaAddedBlocksToAdd: make(map[externalapi.DomainHash][]*externalapi.DomainHash),
daaScoreToDelete: make(map[externalapi.DomainHash]struct{}),
daaAddedBlocksToDelete: make(map[externalapi.DomainHash]struct{}),
}
}).(*daaBlocksStagingShard)
}
func (daass *daaBlocksStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, daaScore := range daass.daaScoreToAdd {
daaScoreBytes := binaryserialization.SerializeUint64(daaScore)
err := dbTx.Put(daass.store.daaScoreHashAsKey(&hash), daaScoreBytes)
if err != nil {
return err
}
daass.store.daaScoreLRUCache.Add(&hash, daaScore)
}
for hash, addedBlocks := range daass.daaAddedBlocksToAdd {
addedBlocksBytes := binaryserialization.SerializeHashes(addedBlocks)
err := dbTx.Put(daass.store.daaAddedBlocksHashAsKey(&hash), addedBlocksBytes)
if err != nil {
return err
}
daass.store.daaAddedBlocksLRUCache.Add(&hash, addedBlocks)
}
for hash := range daass.daaScoreToDelete {
err := dbTx.Delete(daass.store.daaScoreHashAsKey(&hash))
if err != nil {
return err
}
daass.store.daaScoreLRUCache.Remove(&hash)
}
for hash := range daass.daaAddedBlocksToDelete {
err := dbTx.Delete(daass.store.daaAddedBlocksHashAsKey(&hash))
if err != nil {
return err
}
daass.store.daaAddedBlocksLRUCache.Remove(&hash)
}
return nil
}
func (daass *daaBlocksStagingShard) isStaged() bool {
return len(daass.daaScoreToAdd) != 0 ||
len(daass.daaAddedBlocksToAdd) != 0 ||
len(daass.daaScoreToDelete) != 0 ||
len(daass.daaAddedBlocksToDelete) != 0
}


@@ -0,0 +1,114 @@
package daablocksstore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
)
var daaScoreBucket = database.MakeBucket([]byte("daa-score"))
var daaAddedBlocksBucket = database.MakeBucket([]byte("daa-added-blocks"))
// daaBlocksStore represents a store of per-block DAA scores and DAA-added blocks
type daaBlocksStore struct {
daaScoreLRUCache *lrucache.LRUCache
daaAddedBlocksLRUCache *lrucache.LRUCache
}
// New instantiates a new DAABlocksStore
func New(daaScoreCacheSize int, daaAddedBlocksCacheSize int, preallocate bool) model.DAABlocksStore {
return &daaBlocksStore{
daaScoreLRUCache: lrucache.New(daaScoreCacheSize, preallocate),
daaAddedBlocksLRUCache: lrucache.New(daaAddedBlocksCacheSize, preallocate),
}
}
func (daas *daaBlocksStore) StageDAAScore(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, daaScore uint64) {
stagingShard := daas.stagingShard(stagingArea)
stagingShard.daaScoreToAdd[*blockHash] = daaScore
}
func (daas *daaBlocksStore) StageBlockDAAAddedBlocks(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, addedBlocks []*externalapi.DomainHash) {
stagingShard := daas.stagingShard(stagingArea)
stagingShard.daaAddedBlocksToAdd[*blockHash] = externalapi.CloneHashes(addedBlocks)
}
func (daas *daaBlocksStore) IsStaged(stagingArea *model.StagingArea) bool {
return daas.stagingShard(stagingArea).isStaged()
}
func (daas *daaBlocksStore) DAAScore(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (uint64, error) {
stagingShard := daas.stagingShard(stagingArea)
if daaScore, ok := stagingShard.daaScoreToAdd[*blockHash]; ok {
return daaScore, nil
}
if daaScore, ok := daas.daaScoreLRUCache.Get(blockHash); ok {
return daaScore.(uint64), nil
}
daaScoreBytes, err := dbContext.Get(daas.daaScoreHashAsKey(blockHash))
if err != nil {
return 0, err
}
daaScore, err := binaryserialization.DeserializeUint64(daaScoreBytes)
if err != nil {
return 0, err
}
daas.daaScoreLRUCache.Add(blockHash, daaScore)
return daaScore, nil
}
func (daas *daaBlocksStore) DAAAddedBlocks(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
stagingShard := daas.stagingShard(stagingArea)
if addedBlocks, ok := stagingShard.daaAddedBlocksToAdd[*blockHash]; ok {
return externalapi.CloneHashes(addedBlocks), nil
}
if addedBlocks, ok := daas.daaAddedBlocksLRUCache.Get(blockHash); ok {
return externalapi.CloneHashes(addedBlocks.([]*externalapi.DomainHash)), nil
}
addedBlocksBytes, err := dbContext.Get(daas.daaAddedBlocksHashAsKey(blockHash))
if err != nil {
return nil, err
}
addedBlocks, err := binaryserialization.DeserializeHashes(addedBlocksBytes)
if err != nil {
return nil, err
}
daas.daaAddedBlocksLRUCache.Add(blockHash, addedBlocks)
return externalapi.CloneHashes(addedBlocks), nil
}
func (daas *daaBlocksStore) daaScoreHashAsKey(hash *externalapi.DomainHash) model.DBKey {
return daaScoreBucket.Key(hash.ByteSlice())
}
func (daas *daaBlocksStore) daaAddedBlocksHashAsKey(hash *externalapi.DomainHash) model.DBKey {
return daaAddedBlocksBucket.Key(hash.ByteSlice())
}
func (daas *daaBlocksStore) Delete(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) {
stagingShard := daas.stagingShard(stagingArea)
if _, ok := stagingShard.daaScoreToAdd[*blockHash]; ok {
delete(stagingShard.daaScoreToAdd, *blockHash)
} else {
stagingShard.daaScoreToDelete[*blockHash] = struct{}{}
}
if _, ok := stagingShard.daaAddedBlocksToAdd[*blockHash]; ok {
delete(stagingShard.daaAddedBlocksToAdd, *blockHash)
} else {
stagingShard.daaAddedBlocksToDelete[*blockHash] = struct{}{}
}
}


@@ -1,160 +0,0 @@
package daablocksstore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
)
var daaScoreBucket = database.MakeBucket([]byte("daa-score"))
var daaAddedBlocksBucket = database.MakeBucket([]byte("daa-added-blocks"))
// daaBlocksStore represents a store of DAABlocksStore
type daaBlocksStore struct {
daaScoreStaging map[externalapi.DomainHash]uint64
daaAddedBlocksStaging map[externalapi.DomainHash][]*externalapi.DomainHash
daaScoreToDelete map[externalapi.DomainHash]struct{}
daaAddedBlocksToDelete map[externalapi.DomainHash]struct{}
daaScoreLRUCache *lrucache.LRUCache
daaAddedBlocksLRUCache *lrucache.LRUCache
}
// New instantiates a new DAABlocksStore
func New(daaScoreCacheSize int, daaAddedBlocksCacheSize int, preallocate bool) model.DAABlocksStore {
return &daaBlocksStore{
daaScoreStaging: make(map[externalapi.DomainHash]uint64),
daaAddedBlocksStaging: make(map[externalapi.DomainHash][]*externalapi.DomainHash),
daaScoreLRUCache: lrucache.New(daaScoreCacheSize, preallocate),
daaAddedBlocksLRUCache: lrucache.New(daaAddedBlocksCacheSize, preallocate),
}
}
func (daas *daaBlocksStore) StageDAAScore(blockHash *externalapi.DomainHash, daaScore uint64) {
daas.daaScoreStaging[*blockHash] = daaScore
}
func (daas *daaBlocksStore) StageBlockDAAAddedBlocks(blockHash *externalapi.DomainHash,
addedBlocks []*externalapi.DomainHash) {
daas.daaAddedBlocksStaging[*blockHash] = externalapi.CloneHashes(addedBlocks)
}
func (daas *daaBlocksStore) IsAnythingStaged() bool {
return len(daas.daaScoreStaging) != 0 ||
len(daas.daaAddedBlocksStaging) != 0 ||
len(daas.daaScoreToDelete) != 0 ||
len(daas.daaAddedBlocksToDelete) != 0
}
func (daas *daaBlocksStore) Discard() {
daas.daaScoreStaging = make(map[externalapi.DomainHash]uint64)
daas.daaAddedBlocksStaging = make(map[externalapi.DomainHash][]*externalapi.DomainHash)
daas.daaScoreToDelete = make(map[externalapi.DomainHash]struct{})
daas.daaAddedBlocksToDelete = make(map[externalapi.DomainHash]struct{})
}
func (daas *daaBlocksStore) Commit(dbTx model.DBTransaction) error {
for hash, daaScore := range daas.daaScoreStaging {
daaScoreBytes := binaryserialization.SerializeUint64(daaScore)
err := dbTx.Put(daas.daaScoreHashAsKey(&hash), daaScoreBytes)
if err != nil {
return err
}
daas.daaScoreLRUCache.Add(&hash, daaScore)
}
for hash, addedBlocks := range daas.daaAddedBlocksStaging {
addedBlocksBytes := binaryserialization.SerializeHashes(addedBlocks)
err := dbTx.Put(daas.daaAddedBlocksHashAsKey(&hash), addedBlocksBytes)
if err != nil {
return err
}
daas.daaAddedBlocksLRUCache.Add(&hash, addedBlocks)
}
for hash := range daas.daaScoreToDelete {
err := dbTx.Delete(daas.daaScoreHashAsKey(&hash))
if err != nil {
return err
}
daas.daaScoreLRUCache.Remove(&hash)
}
for hash := range daas.daaAddedBlocksToDelete {
err := dbTx.Delete(daas.daaAddedBlocksHashAsKey(&hash))
if err != nil {
return err
}
daas.daaAddedBlocksLRUCache.Remove(&hash)
}
daas.Discard()
return nil
}
func (daas *daaBlocksStore) DAAScore(dbContext model.DBReader, blockHash *externalapi.DomainHash) (uint64, error) {
if daaScore, ok := daas.daaScoreStaging[*blockHash]; ok {
return daaScore, nil
}
if daaScore, ok := daas.daaScoreLRUCache.Get(blockHash); ok {
return daaScore.(uint64), nil
}
daaScoreBytes, err := dbContext.Get(daas.daaScoreHashAsKey(blockHash))
if err != nil {
return 0, err
}
daaScore, err := binaryserialization.DeserializeUint64(daaScoreBytes)
if err != nil {
return 0, err
}
daas.daaScoreLRUCache.Add(blockHash, daaScore)
return daaScore, nil
}
func (daas *daaBlocksStore) DAAAddedBlocks(dbContext model.DBReader, blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
if addedBlocks, ok := daas.daaAddedBlocksStaging[*blockHash]; ok {
return externalapi.CloneHashes(addedBlocks), nil
}
if addedBlocks, ok := daas.daaAddedBlocksLRUCache.Get(blockHash); ok {
return externalapi.CloneHashes(addedBlocks.([]*externalapi.DomainHash)), nil
}
addedBlocksBytes, err := dbContext.Get(daas.daaAddedBlocksHashAsKey(blockHash))
if err != nil {
return nil, err
}
addedBlocks, err := binaryserialization.DeserializeHashes(addedBlocksBytes)
if err != nil {
return nil, err
}
daas.daaAddedBlocksLRUCache.Add(blockHash, addedBlocks)
return externalapi.CloneHashes(addedBlocks), nil
}
func (daas *daaBlocksStore) daaScoreHashAsKey(hash *externalapi.DomainHash) model.DBKey {
return daaScoreBucket.Key(hash.ByteSlice())
}
func (daas *daaBlocksStore) daaAddedBlocksHashAsKey(hash *externalapi.DomainHash) model.DBKey {
return daaAddedBlocksBucket.Key(hash.ByteSlice())
}
func (daas *daaBlocksStore) Delete(blockHash *externalapi.DomainHash) {
if _, ok := daas.daaScoreStaging[*blockHash]; ok {
delete(daas.daaScoreStaging, *blockHash)
} else {
daas.daaAddedBlocksToDelete[*blockHash] = struct{}{}
}
if _, ok := daas.daaAddedBlocksStaging[*blockHash]; ok {
delete(daas.daaAddedBlocksStaging, *blockHash)
} else {
daas.daaAddedBlocksToDelete[*blockHash] = struct{}{}
}
}


@@ -0,0 +1,36 @@
package finalitystore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type finalityStagingShard struct {
store *finalityStore
toAdd map[externalapi.DomainHash]*externalapi.DomainHash
}
func (fs *finalityStore) stagingShard(stagingArea *model.StagingArea) *finalityStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDFinality, func() model.StagingShard {
return &finalityStagingShard{
store: fs,
toAdd: make(map[externalapi.DomainHash]*externalapi.DomainHash),
}
}).(*finalityStagingShard)
}
func (fss *finalityStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, finalityPointHash := range fss.toAdd {
err := dbTx.Put(fss.store.hashAsKey(&hash), finalityPointHash.ByteSlice())
if err != nil {
return err
}
fss.store.cache.Add(&hash, finalityPointHash)
}
return nil
}
func (fss *finalityStagingShard) isStaged() bool {
return len(fss.toAdd) != 0
}


@@ -10,27 +10,26 @@ import (
var bucket = database.MakeBucket([]byte("finality-points"))
type finalityStore struct {
staging map[externalapi.DomainHash]*externalapi.DomainHash
toDelete map[externalapi.DomainHash]struct{}
cache *lrucache.LRUCache
cache *lrucache.LRUCache
}
// New instantiates a new FinalityStore
func New(cacheSize int, preallocate bool) model.FinalityStore {
return &finalityStore{
staging: make(map[externalapi.DomainHash]*externalapi.DomainHash),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
}
func (fs *finalityStore) StageFinalityPoint(blockHash *externalapi.DomainHash, finalityPointHash *externalapi.DomainHash) {
fs.staging[*blockHash] = finalityPointHash
func (fs *finalityStore) StageFinalityPoint(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, finalityPointHash *externalapi.DomainHash) {
stagingShard := fs.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = finalityPointHash
}
func (fs *finalityStore) FinalityPoint(
dbContext model.DBReader, blockHash *externalapi.DomainHash) (*externalapi.DomainHash, error) {
if finalityPointHash, ok := fs.staging[*blockHash]; ok {
func (fs *finalityStore) FinalityPoint(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (*externalapi.DomainHash, error) {
stagingShard := fs.stagingShard(stagingArea)
if finalityPointHash, ok := stagingShard.toAdd[*blockHash]; ok {
return finalityPointHash, nil
}
@@ -51,25 +50,8 @@ func (fs *finalityStore) FinalityPoint(
return finalityPointHash, nil
}
func (fs *finalityStore) Discard() {
fs.staging = make(map[externalapi.DomainHash]*externalapi.DomainHash)
}
func (fs *finalityStore) Commit(dbTx model.DBTransaction) error {
for hash, finalityPointHash := range fs.staging {
err := dbTx.Put(fs.hashAsKey(&hash), finalityPointHash.ByteSlice())
if err != nil {
return err
}
fs.cache.Add(&hash, finalityPointHash)
}
fs.Discard()
return nil
}
func (fs *finalityStore) IsStaged() bool {
return len(fs.staging) == 0
func (fs *finalityStore) IsStaged(stagingArea *model.StagingArea) bool {
return fs.stagingShard(stagingArea).isStaged()
}
func (fs *finalityStore) hashAsKey(hash *externalapi.DomainHash) model.DBKey {

View File

@@ -0,0 +1,40 @@
package ghostdagdatastore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type ghostdagDataStagingShard struct {
store *ghostdagDataStore
toAdd map[externalapi.DomainHash]*model.BlockGHOSTDAGData
}
func (gds *ghostdagDataStore) stagingShard(stagingArea *model.StagingArea) *ghostdagDataStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDGHOSTDAG, func() model.StagingShard {
return &ghostdagDataStagingShard{
store: gds,
toAdd: make(map[externalapi.DomainHash]*model.BlockGHOSTDAGData),
}
}).(*ghostdagDataStagingShard)
}
func (gdss *ghostdagDataStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, blockGHOSTDAGData := range gdss.toAdd {
blockGhostdagDataBytes, err := gdss.store.serializeBlockGHOSTDAGData(blockGHOSTDAGData)
if err != nil {
return err
}
err = dbTx.Put(gdss.store.hashAsKey(&hash), blockGhostdagDataBytes)
if err != nil {
return err
}
gdss.store.cache.Add(&hash, blockGHOSTDAGData)
}
return nil
}
func (gdss *ghostdagDataStagingShard) isStaged() bool {
return len(gdss.toAdd) != 0
}

View File

@@ -13,51 +13,32 @@ var bucket = database.MakeBucket([]byte("block-ghostdag-data"))
// ghostdagDataStore represents a store of BlockGHOSTDAGData
type ghostdagDataStore struct {
staging map[externalapi.DomainHash]*model.BlockGHOSTDAGData
cache *lrucache.LRUCache
cache *lrucache.LRUCache
}
// New instantiates a new GHOSTDAGDataStore
func New(cacheSize int, preallocate bool) model.GHOSTDAGDataStore {
return &ghostdagDataStore{
staging: make(map[externalapi.DomainHash]*model.BlockGHOSTDAGData),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
}
// Stage stages the given blockGHOSTDAGData for the given blockHash
func (gds *ghostdagDataStore) Stage(blockHash *externalapi.DomainHash, blockGHOSTDAGData *model.BlockGHOSTDAGData) {
gds.staging[*blockHash] = blockGHOSTDAGData
func (gds *ghostdagDataStore) Stage(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, blockGHOSTDAGData *model.BlockGHOSTDAGData) {
stagingShard := gds.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = blockGHOSTDAGData
}
func (gds *ghostdagDataStore) IsStaged() bool {
return len(gds.staging) != 0
}
func (gds *ghostdagDataStore) Discard() {
gds.staging = make(map[externalapi.DomainHash]*model.BlockGHOSTDAGData)
}
func (gds *ghostdagDataStore) Commit(dbTx model.DBTransaction) error {
for hash, blockGHOSTDAGData := range gds.staging {
blockGhostdagDataBytes, err := gds.serializeBlockGHOSTDAGData(blockGHOSTDAGData)
if err != nil {
return err
}
err = dbTx.Put(gds.hashAsKey(&hash), blockGhostdagDataBytes)
if err != nil {
return err
}
gds.cache.Add(&hash, blockGHOSTDAGData)
}
gds.Discard()
return nil
func (gds *ghostdagDataStore) IsStaged(stagingArea *model.StagingArea) bool {
return gds.stagingShard(stagingArea).isStaged()
}
// Get gets the blockGHOSTDAGData associated with the given blockHash
func (gds *ghostdagDataStore) Get(dbContext model.DBReader, blockHash *externalapi.DomainHash) (*model.BlockGHOSTDAGData, error) {
if blockGHOSTDAGData, ok := gds.staging[*blockHash]; ok {
func (gds *ghostdagDataStore) Get(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (*model.BlockGHOSTDAGData, error) {
stagingShard := gds.stagingShard(stagingArea)
if blockGHOSTDAGData, ok := stagingShard.toAdd[*blockHash]; ok {
return blockGHOSTDAGData, nil
}

View File

@@ -0,0 +1,87 @@
package headersselectedchainstore
import (
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type headersSelectedChainStagingShard struct {
store *headersSelectedChainStore
addedByHash map[externalapi.DomainHash]uint64
removedByHash map[externalapi.DomainHash]struct{}
addedByIndex map[uint64]*externalapi.DomainHash
removedByIndex map[uint64]struct{}
}
func (hscs *headersSelectedChainStore) stagingShard(stagingArea *model.StagingArea) *headersSelectedChainStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDHeadersSelectedChain, func() model.StagingShard {
return &headersSelectedChainStagingShard{
store: hscs,
addedByHash: make(map[externalapi.DomainHash]uint64),
removedByHash: make(map[externalapi.DomainHash]struct{}),
addedByIndex: make(map[uint64]*externalapi.DomainHash),
removedByIndex: make(map[uint64]struct{}),
}
}).(*headersSelectedChainStagingShard)
}
func (hscss *headersSelectedChainStagingShard) Commit(dbTx model.DBTransaction) error {
if !hscss.isStaged() {
return nil
}
for hash := range hscss.removedByHash {
hashCopy := hash
err := dbTx.Delete(hscss.store.hashAsKey(&hashCopy))
if err != nil {
return err
}
hscss.store.cacheByHash.Remove(&hashCopy)
}
for index := range hscss.removedByIndex {
err := dbTx.Delete(hscss.store.indexAsKey(index))
if err != nil {
return err
}
hscss.store.cacheByIndex.Remove(index)
}
highestIndex := uint64(0)
for hash, index := range hscss.addedByHash {
hashCopy := hash
err := dbTx.Put(hscss.store.hashAsKey(&hashCopy), hscss.store.serializeIndex(index))
if err != nil {
return err
}
err = dbTx.Put(hscss.store.indexAsKey(index), binaryserialization.SerializeHash(&hashCopy))
if err != nil {
return err
}
hscss.store.cacheByHash.Add(&hashCopy, index)
hscss.store.cacheByIndex.Add(index, &hashCopy)
if index > highestIndex {
highestIndex = index
}
}
err := dbTx.Put(highestChainBlockIndexKey, hscss.store.serializeIndex(highestIndex))
if err != nil {
return err
}
hscss.store.cacheHighestChainBlockIndex = highestIndex
return nil
}
func (hscss *headersSelectedChainStagingShard) isStaged() bool {
return len(hscss.addedByHash) != 0 ||
len(hscss.removedByHash) != 0 ||
len(hscss.addedByIndex) != 0 ||
len(hscss.addedByIndex) != 0
}

View File

@@ -2,6 +2,7 @@ package headersselectedchainstore
import (
"encoding/binary"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
@@ -16,10 +17,6 @@ var bucketChainBlockIndexByHash = database.MakeBucket([]byte("chain-block-index-
var highestChainBlockIndexKey = database.MakeBucket(nil).Key([]byte("highest-chain-block-index"))
type headersSelectedChainStore struct {
stagingAddedByHash map[externalapi.DomainHash]uint64
stagingRemovedByHash map[externalapi.DomainHash]struct{}
stagingAddedByIndex map[uint64]*externalapi.DomainHash
stagingRemovedByIndex map[uint64]struct{}
cacheByIndex *lrucacheuint64tohash.LRUCache
cacheByHash *lrucache.LRUCache
cacheHighestChainBlockIndex uint64
@@ -28,31 +25,27 @@ type headersSelectedChainStore struct {
// New instantiates a new HeadersSelectedChainStore
func New(cacheSize int, preallocate bool) model.HeadersSelectedChainStore {
return &headersSelectedChainStore{
stagingAddedByHash: make(map[externalapi.DomainHash]uint64),
stagingRemovedByHash: make(map[externalapi.DomainHash]struct{}),
stagingAddedByIndex: make(map[uint64]*externalapi.DomainHash),
stagingRemovedByIndex: make(map[uint64]struct{}),
cacheByIndex: lrucacheuint64tohash.New(cacheSize, preallocate),
cacheByHash: lrucache.New(cacheSize, preallocate),
cacheByIndex: lrucacheuint64tohash.New(cacheSize, preallocate),
cacheByHash: lrucache.New(cacheSize, preallocate),
}
}
// Stage stages the given chain changes
func (hscs *headersSelectedChainStore) Stage(dbContext model.DBReader,
chainChanges *externalapi.SelectedChainPath) error {
func (hscs *headersSelectedChainStore) Stage(dbContext model.DBReader, stagingArea *model.StagingArea, chainChanges *externalapi.SelectedChainPath) error {
stagingShard := hscs.stagingShard(stagingArea)
if hscs.IsStaged() {
if hscs.IsStaged(stagingArea) {
return errors.Errorf("can't stage when there's already staged data")
}
for _, blockHash := range chainChanges.Removed {
index, err := hscs.GetIndexByHash(dbContext, blockHash)
index, err := hscs.GetIndexByHash(dbContext, stagingArea, blockHash)
if err != nil {
return err
}
hscs.stagingRemovedByIndex[index] = struct{}{}
hscs.stagingRemovedByHash[*blockHash] = struct{}{}
stagingShard.removedByIndex[index] = struct{}{}
stagingShard.removedByHash[*blockHash] = struct{}{}
}
currentIndex := uint64(0)
@@ -66,89 +59,27 @@ func (hscs *headersSelectedChainStore) Stage(dbContext model.DBReader,
}
for _, blockHash := range chainChanges.Added {
hscs.stagingAddedByIndex[currentIndex] = blockHash
hscs.stagingAddedByHash[*blockHash] = currentIndex
stagingShard.addedByIndex[currentIndex] = blockHash
stagingShard.addedByHash[*blockHash] = currentIndex
currentIndex++
}
return nil
}
func (hscs *headersSelectedChainStore) IsStaged() bool {
return len(hscs.stagingAddedByHash) != 0 ||
len(hscs.stagingRemovedByHash) != 0 ||
len(hscs.stagingAddedByIndex) != 0 ||
len(hscs.stagingAddedByIndex) != 0
}
func (hscs *headersSelectedChainStore) Discard() {
hscs.stagingAddedByHash = make(map[externalapi.DomainHash]uint64)
hscs.stagingRemovedByHash = make(map[externalapi.DomainHash]struct{})
hscs.stagingAddedByIndex = make(map[uint64]*externalapi.DomainHash)
hscs.stagingRemovedByIndex = make(map[uint64]struct{})
}
func (hscs *headersSelectedChainStore) Commit(dbTx model.DBTransaction) error {
if !hscs.IsStaged() {
return nil
}
for hash := range hscs.stagingRemovedByHash {
hashCopy := hash
err := dbTx.Delete(hscs.hashAsKey(&hashCopy))
if err != nil {
return err
}
hscs.cacheByHash.Remove(&hashCopy)
}
for index := range hscs.stagingRemovedByIndex {
err := dbTx.Delete(hscs.indexAsKey(index))
if err != nil {
return err
}
hscs.cacheByIndex.Remove(index)
}
highestIndex := uint64(0)
for hash, index := range hscs.stagingAddedByHash {
hashCopy := hash
err := dbTx.Put(hscs.hashAsKey(&hashCopy), hscs.serializeIndex(index))
if err != nil {
return err
}
err = dbTx.Put(hscs.indexAsKey(index), binaryserialization.SerializeHash(&hashCopy))
if err != nil {
return err
}
hscs.cacheByHash.Add(&hashCopy, index)
hscs.cacheByIndex.Add(index, &hashCopy)
if index > highestIndex {
highestIndex = index
}
}
err := dbTx.Put(highestChainBlockIndexKey, hscs.serializeIndex(highestIndex))
if err != nil {
return err
}
hscs.cacheHighestChainBlockIndex = highestIndex
hscs.Discard()
return nil
func (hscs *headersSelectedChainStore) IsStaged(stagingArea *model.StagingArea) bool {
return hscs.stagingShard(stagingArea).isStaged()
}
// Get gets the chain block index for the given blockHash
func (hscs *headersSelectedChainStore) GetIndexByHash(dbContext model.DBReader, blockHash *externalapi.DomainHash) (uint64, error) {
if index, ok := hscs.stagingAddedByHash[*blockHash]; ok {
func (hscs *headersSelectedChainStore) GetIndexByHash(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (uint64, error) {
stagingShard := hscs.stagingShard(stagingArea)
if index, ok := stagingShard.addedByHash[*blockHash]; ok {
return index, nil
}
if _, ok := hscs.stagingRemovedByHash[*blockHash]; ok {
if _, ok := stagingShard.removedByHash[*blockHash]; ok {
return 0, errors.Wrapf(database.ErrNotFound, "couldn't find block %s", blockHash)
}
@@ -170,12 +101,14 @@ func (hscs *headersSelectedChainStore) GetIndexByHash(dbContext model.DBReader,
return index, nil
}
func (hscs *headersSelectedChainStore) GetHashByIndex(dbContext model.DBReader, index uint64) (*externalapi.DomainHash, error) {
if blockHash, ok := hscs.stagingAddedByIndex[index]; ok {
func (hscs *headersSelectedChainStore) GetHashByIndex(dbContext model.DBReader, stagingArea *model.StagingArea, index uint64) (*externalapi.DomainHash, error) {
stagingShard := hscs.stagingShard(stagingArea)
if blockHash, ok := stagingShard.addedByIndex[index]; ok {
return blockHash, nil
}
if _, ok := hscs.stagingRemovedByIndex[index]; ok {
if _, ok := stagingShard.removedByIndex[index]; ok {
return nil, errors.Wrapf(database.ErrNotFound, "couldn't find chain block with index %d", index)
}

View File

@@ -0,0 +1,42 @@
package headersselectedtipstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type headersSelectedTipStagingShard struct {
store *headerSelectedTipStore
newSelectedTip *externalapi.DomainHash
}
func (hsts *headerSelectedTipStore) stagingShard(stagingArea *model.StagingArea) *headersSelectedTipStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDHeadersSelectedTip, func() model.StagingShard {
return &headersSelectedTipStagingShard{
store: hsts,
newSelectedTip: nil,
}
}).(*headersSelectedTipStagingShard)
}
func (hstss *headersSelectedTipStagingShard) Commit(dbTx model.DBTransaction) error {
if hstss.newSelectedTip == nil {
return nil
}
selectedTipBytes, err := hstss.store.serializeHeadersSelectedTip(hstss.newSelectedTip)
if err != nil {
return err
}
err = dbTx.Put(headerSelectedTipKey, selectedTipBytes)
if err != nil {
return err
}
hstss.store.cache = hstss.newSelectedTip
return nil
}
func (hstss *headersSelectedTipStagingShard) isStaged() bool {
return hstss.newSelectedTip != nil
}

View File

@@ -0,0 +1,83 @@
package headersselectedtipstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
var headerSelectedTipKey = database.MakeBucket(nil).Key([]byte("headers-selected-tip"))
type headerSelectedTipStore struct {
cache *externalapi.DomainHash
}
// New instantiates a new HeaderSelectedTipStore
func New() model.HeaderSelectedTipStore {
return &headerSelectedTipStore{}
}
func (hsts *headerSelectedTipStore) Has(dbContext model.DBReader, stagingArea *model.StagingArea) (bool, error) {
stagingShard := hsts.stagingShard(stagingArea)
if stagingShard.newSelectedTip != nil {
return true, nil
}
if hsts.cache != nil {
return true, nil
}
return dbContext.Has(headerSelectedTipKey)
}
func (hsts *headerSelectedTipStore) Stage(stagingArea *model.StagingArea, selectedTip *externalapi.DomainHash) {
stagingShard := hsts.stagingShard(stagingArea)
stagingShard.newSelectedTip = selectedTip
}
func (hsts *headerSelectedTipStore) IsStaged(stagingArea *model.StagingArea) bool {
return hsts.stagingShard(stagingArea).isStaged()
}
func (hsts *headerSelectedTipStore) HeadersSelectedTip(dbContext model.DBReader, stagingArea *model.StagingArea) (
*externalapi.DomainHash, error) {
stagingShard := hsts.stagingShard(stagingArea)
if stagingShard.newSelectedTip != nil {
return stagingShard.newSelectedTip, nil
}
if hsts.cache != nil {
return hsts.cache, nil
}
selectedTipBytes, err := dbContext.Get(headerSelectedTipKey)
if err != nil {
return nil, err
}
selectedTip, err := hsts.deserializeHeadersSelectedTip(selectedTipBytes)
if err != nil {
return nil, err
}
hsts.cache = selectedTip
return hsts.cache, nil
}
func (hsts *headerSelectedTipStore) serializeHeadersSelectedTip(selectedTip *externalapi.DomainHash) ([]byte, error) {
return proto.Marshal(serialization.DomainHashToDbHash(selectedTip))
}
func (hsts *headerSelectedTipStore) deserializeHeadersSelectedTip(selectedTipBytes []byte) (*externalapi.DomainHash, error) {
dbHash := &serialization.DbHash{}
err := proto.Unmarshal(selectedTipBytes, dbHash)
if err != nil {
return nil, err
}
return serialization.DbHashToDomainHash(dbHash)
}

View File

@@ -1,100 +0,0 @@
package headersselectedtipstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
var headerSelectedTipKey = database.MakeBucket(nil).Key([]byte("headers-selected-tip"))
type headerSelectedTipStore struct {
staging *externalapi.DomainHash
cache *externalapi.DomainHash
}
// New instantiates a new HeaderSelectedTipStore
func New() model.HeaderSelectedTipStore {
return &headerSelectedTipStore{}
}
func (hts *headerSelectedTipStore) Has(dbContext model.DBReader) (bool, error) {
if hts.staging != nil {
return true, nil
}
if hts.cache != nil {
return true, nil
}
return dbContext.Has(headerSelectedTipKey)
}
func (hts *headerSelectedTipStore) Discard() {
hts.staging = nil
}
func (hts *headerSelectedTipStore) Commit(dbTx model.DBTransaction) error {
if hts.staging == nil {
return nil
}
selectedTipBytes, err := hts.serializeHeadersSelectedTip(hts.staging)
if err != nil {
return err
}
err = dbTx.Put(headerSelectedTipKey, selectedTipBytes)
if err != nil {
return err
}
hts.cache = hts.staging
hts.Discard()
return nil
}
func (hts *headerSelectedTipStore) Stage(selectedTip *externalapi.DomainHash) {
hts.staging = selectedTip
}
func (hts *headerSelectedTipStore) IsStaged() bool {
return hts.staging != nil
}
func (hts *headerSelectedTipStore) HeadersSelectedTip(dbContext model.DBReader) (*externalapi.DomainHash, error) {
if hts.staging != nil {
return hts.staging, nil
}
if hts.cache != nil {
return hts.cache, nil
}
selectedTipBytes, err := dbContext.Get(headerSelectedTipKey)
if err != nil {
return nil, err
}
selectedTip, err := hts.deserializeHeadersSelectedTip(selectedTipBytes)
if err != nil {
return nil, err
}
hts.cache = selectedTip
return hts.cache, nil
}
func (hts *headerSelectedTipStore) serializeHeadersSelectedTip(selectedTip *externalapi.DomainHash) ([]byte, error) {
return proto.Marshal(serialization.DomainHashToDbHash(selectedTip))
}
func (hts *headerSelectedTipStore) deserializeHeadersSelectedTip(selectedTipBytes []byte) (*externalapi.DomainHash, error) {
dbHash := &serialization.DbHash{}
err := proto.Unmarshal(selectedTipBytes, dbHash)
if err != nil {
return nil, err
}
return serialization.DbHashToDomainHash(dbHash)
}

View File

@@ -0,0 +1,50 @@
package multisetstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type multisetStagingShard struct {
store *multisetStore
toAdd map[externalapi.DomainHash]model.Multiset
toDelete map[externalapi.DomainHash]struct{}
}
func (ms *multisetStore) stagingShard(stagingArea *model.StagingArea) *multisetStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDMultiset, func() model.StagingShard {
return &multisetStagingShard{
store: ms,
toAdd: make(map[externalapi.DomainHash]model.Multiset),
toDelete: make(map[externalapi.DomainHash]struct{}),
}
}).(*multisetStagingShard)
}
func (mss *multisetStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, multiset := range mss.toAdd {
multisetBytes, err := mss.store.serializeMultiset(multiset)
if err != nil {
return err
}
err = dbTx.Put(mss.store.hashAsKey(&hash), multisetBytes)
if err != nil {
return err
}
mss.store.cache.Add(&hash, multiset)
}
for hash := range mss.toDelete {
err := dbTx.Delete(mss.store.hashAsKey(&hash))
if err != nil {
return err
}
mss.store.cache.Remove(&hash)
}
return nil
}
func (mss *multisetStagingShard) isStaged() bool {
return len(mss.toAdd) != 0 || len(mss.toDelete) != 0
}

View File

@@ -13,62 +13,32 @@ var bucket = database.MakeBucket([]byte("multisets"))
// multisetStore represents a store of Multisets
type multisetStore struct {
staging map[externalapi.DomainHash]model.Multiset
toDelete map[externalapi.DomainHash]struct{}
cache *lrucache.LRUCache
cache *lrucache.LRUCache
}
// New instantiates a new MultisetStore
func New(cacheSize int, preallocate bool) model.MultisetStore {
return &multisetStore{
staging: make(map[externalapi.DomainHash]model.Multiset),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize, preallocate),
cache: lrucache.New(cacheSize, preallocate),
}
}
// Stage stages the given multiset for the given blockHash
func (ms *multisetStore) Stage(blockHash *externalapi.DomainHash, multiset model.Multiset) {
ms.staging[*blockHash] = multiset.Clone()
func (ms *multisetStore) Stage(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, multiset model.Multiset) {
stagingShard := ms.stagingShard(stagingArea)
stagingShard.toAdd[*blockHash] = multiset.Clone()
}
func (ms *multisetStore) IsStaged() bool {
return len(ms.staging) != 0 || len(ms.toDelete) != 0
}
func (ms *multisetStore) Discard() {
ms.staging = make(map[externalapi.DomainHash]model.Multiset)
ms.toDelete = make(map[externalapi.DomainHash]struct{})
}
func (ms *multisetStore) Commit(dbTx model.DBTransaction) error {
for hash, multiset := range ms.staging {
multisetBytes, err := ms.serializeMultiset(multiset)
if err != nil {
return err
}
err = dbTx.Put(ms.hashAsKey(&hash), multisetBytes)
if err != nil {
return err
}
ms.cache.Add(&hash, multiset)
}
for hash := range ms.toDelete {
err := dbTx.Delete(ms.hashAsKey(&hash))
if err != nil {
return err
}
ms.cache.Remove(&hash)
}
ms.Discard()
return nil
func (ms *multisetStore) IsStaged(stagingArea *model.StagingArea) bool {
return ms.stagingShard(stagingArea).isStaged()
}
// Get gets the multiset associated with the given blockHash
func (ms *multisetStore) Get(dbContext model.DBReader, blockHash *externalapi.DomainHash) (model.Multiset, error) {
if multiset, ok := ms.staging[*blockHash]; ok {
func (ms *multisetStore) Get(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (model.Multiset, error) {
stagingShard := ms.stagingShard(stagingArea)
if multiset, ok := stagingShard.toAdd[*blockHash]; ok {
return multiset.Clone(), nil
}
@@ -90,12 +60,14 @@ func (ms *multisetStore) Get(dbContext model.DBReader, blockHash *externalapi.Do
}
// Delete deletes the multiset associated with the given blockHash
func (ms *multisetStore) Delete(blockHash *externalapi.DomainHash) {
if _, ok := ms.staging[*blockHash]; ok {
delete(ms.staging, *blockHash)
func (ms *multisetStore) Delete(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) {
stagingShard := ms.stagingShard(stagingArea)
if _, ok := stagingShard.toAdd[*blockHash]; ok {
delete(stagingShard.toAdd, *blockHash)
return
}
ms.toDelete[*blockHash] = struct{}{}
stagingShard.toDelete[*blockHash] = struct{}{}
}
func (ms *multisetStore) hashAsKey(hash *externalapi.DomainHash) model.DBKey {

View File

@@ -0,0 +1,64 @@
package pruningstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type pruningStagingShard struct {
store *pruningStore
newPruningPoint *externalapi.DomainHash
newPruningPointCandidate *externalapi.DomainHash
startUpdatingPruningPointUTXOSet bool
}
func (ps *pruningStore) stagingShard(stagingArea *model.StagingArea) *pruningStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDPruning, func() model.StagingShard {
return &pruningStagingShard{
store: ps,
newPruningPoint: nil,
newPruningPointCandidate: nil,
startUpdatingPruningPointUTXOSet: false,
}
}).(*pruningStagingShard)
}
func (mss *pruningStagingShard) Commit(dbTx model.DBTransaction) error {
if mss.newPruningPoint != nil {
pruningPointBytes, err := mss.store.serializeHash(mss.newPruningPoint)
if err != nil {
return err
}
err = dbTx.Put(pruningBlockHashKey, pruningPointBytes)
if err != nil {
return err
}
mss.store.pruningPointCache = mss.newPruningPoint
}
if mss.newPruningPointCandidate != nil {
candidateBytes, err := mss.store.serializeHash(mss.newPruningPointCandidate)
if err != nil {
return err
}
err = dbTx.Put(candidatePruningPointHashKey, candidateBytes)
if err != nil {
return err
}
mss.store.pruningPointCandidateCache = mss.newPruningPointCandidate
}
if mss.startUpdatingPruningPointUTXOSet {
err := dbTx.Put(updatingPruningPointUTXOSetKey, []byte{0})
if err != nil {
return err
}
}
return nil
}
func (mss *pruningStagingShard) isStaged() bool {
return mss.newPruningPoint != nil || mss.startUpdatingPruningPointUTXOSet
}

View File

@@ -15,12 +15,8 @@ var updatingPruningPointUTXOSetKey = database.MakeBucket(nil).Key([]byte("updati
// pruningStore represents a store for the current pruning state
type pruningStore struct {
pruningPointStaging *externalapi.DomainHash
pruningPointCache *externalapi.DomainHash
pruningPointCandidateStaging *externalapi.DomainHash
pruningPointCandidateCache *externalapi.DomainHash
startUpdatingPruningPointUTXOSetStaging bool
pruningPointCache *externalapi.DomainHash
pruningPointCandidateCache *externalapi.DomainHash
}
// New instantiates a new PruningStore
@@ -28,13 +24,17 @@ func New() model.PruningStore {
return &pruningStore{}
}
func (ps *pruningStore) StagePruningPointCandidate(candidate *externalapi.DomainHash) {
ps.pruningPointCandidateStaging = candidate
func (ps *pruningStore) StagePruningPointCandidate(stagingArea *model.StagingArea, candidate *externalapi.DomainHash) {
stagingShard := ps.stagingShard(stagingArea)
stagingShard.newPruningPointCandidate = candidate
}
func (ps *pruningStore) PruningPointCandidate(dbContext model.DBReader) (*externalapi.DomainHash, error) {
if ps.pruningPointCandidateStaging != nil {
return ps.pruningPointCandidateStaging, nil
func (ps *pruningStore) PruningPointCandidate(dbContext model.DBReader, stagingArea *model.StagingArea) (*externalapi.DomainHash, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.newPruningPointCandidate != nil {
return stagingShard.newPruningPointCandidate, nil
}
if ps.pruningPointCandidateCache != nil {
@@ -54,8 +54,10 @@ func (ps *pruningStore) PruningPointCandidate(dbContext model.DBReader) (*extern
return candidate, nil
}
func (ps *pruningStore) HasPruningPointCandidate(dbContext model.DBReader) (bool, error) {
if ps.pruningPointCandidateStaging != nil {
func (ps *pruningStore) HasPruningPointCandidate(dbContext model.DBReader, stagingArea *model.StagingArea) (bool, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.newPruningPointCandidate != nil {
return true, nil
}
@@ -67,53 +69,14 @@ func (ps *pruningStore) HasPruningPointCandidate(dbContext model.DBReader) (bool
}
// Stage stages the pruning state
func (ps *pruningStore) StagePruningPoint(pruningPointBlockHash *externalapi.DomainHash) {
ps.pruningPointStaging = pruningPointBlockHash
func (ps *pruningStore) StagePruningPoint(stagingArea *model.StagingArea, pruningPointBlockHash *externalapi.DomainHash) {
stagingShard := ps.stagingShard(stagingArea)
stagingShard.newPruningPoint = pruningPointBlockHash
}
func (ps *pruningStore) IsStaged() bool {
return ps.pruningPointStaging != nil || ps.startUpdatingPruningPointUTXOSetStaging
}
func (ps *pruningStore) Discard() {
ps.pruningPointStaging = nil
ps.startUpdatingPruningPointUTXOSetStaging = false
}
func (ps *pruningStore) Commit(dbTx model.DBTransaction) error {
if ps.pruningPointStaging != nil {
pruningPointBytes, err := ps.serializeHash(ps.pruningPointStaging)
if err != nil {
return err
}
err = dbTx.Put(pruningBlockHashKey, pruningPointBytes)
if err != nil {
return err
}
ps.pruningPointCache = ps.pruningPointStaging
}
if ps.pruningPointCandidateStaging != nil {
candidateBytes, err := ps.serializeHash(ps.pruningPointCandidateStaging)
if err != nil {
return err
}
err = dbTx.Put(candidatePruningPointHashKey, candidateBytes)
if err != nil {
return err
}
ps.pruningPointCandidateCache = ps.pruningPointCandidateStaging
}
if ps.startUpdatingPruningPointUTXOSetStaging {
err := dbTx.Put(updatingPruningPointUTXOSetKey, []byte{0})
if err != nil {
return err
}
}
ps.Discard()
return nil
func (ps *pruningStore) IsStaged(stagingArea *model.StagingArea) bool {
return ps.stagingShard(stagingArea).isStaged()
}
func (ps *pruningStore) UpdatePruningPointUTXOSet(dbContext model.DBWriter,
@@ -161,9 +124,11 @@ func (ps *pruningStore) UpdatePruningPointUTXOSet(dbContext model.DBWriter,
}
// PruningPoint gets the current pruning point
func (ps *pruningStore) PruningPoint(dbContext model.DBReader) (*externalapi.DomainHash, error) {
if ps.pruningPointStaging != nil {
return ps.pruningPointStaging, nil
func (ps *pruningStore) PruningPoint(dbContext model.DBReader, stagingArea *model.StagingArea) (*externalapi.DomainHash, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.newPruningPoint != nil {
return stagingShard.newPruningPoint, nil
}
if ps.pruningPointCache != nil {
@@ -197,8 +162,10 @@ func (ps *pruningStore) deserializePruningPoint(pruningPointBytes []byte) (*exte
return serialization.DbHashToDomainHash(dbHash)
}
func (ps *pruningStore) HasPruningPoint(dbContext model.DBReader) (bool, error) {
if ps.pruningPointStaging != nil {
func (ps *pruningStore) HasPruningPoint(dbContext model.DBReader, stagingArea *model.StagingArea) (bool, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.newPruningPoint != nil {
return true, nil
}
@@ -247,8 +214,10 @@ func (ps *pruningStore) PruningPointUTXOs(dbContext model.DBReader,
return outpointAndUTXOEntryPairs, nil
}
func (ps *pruningStore) StageStartUpdatingPruningPointUTXOSet() {
ps.startUpdatingPruningPointUTXOSetStaging = true
func (ps *pruningStore) StageStartUpdatingPruningPointUTXOSet(stagingArea *model.StagingArea) {
stagingShard := ps.stagingShard(stagingArea)
stagingShard.startUpdatingPruningPointUTXOSet = true
}
func (ps *pruningStore) HadStartedUpdatingPruningPointUTXOSet(dbContext model.DBWriter) (bool, error) {

View File

@@ -0,0 +1,53 @@
package reachabilitydatastore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type reachabilityDataStagingShard struct {
store *reachabilityDataStore
reachabilityData map[externalapi.DomainHash]model.ReachabilityData
reachabilityReindexRoot *externalapi.DomainHash
}
func (rds *reachabilityDataStore) stagingShard(stagingArea *model.StagingArea) *reachabilityDataStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDReachabilityData, func() model.StagingShard {
return &reachabilityDataStagingShard{
store: rds,
reachabilityData: make(map[externalapi.DomainHash]model.ReachabilityData),
reachabilityReindexRoot: nil,
}
}).(*reachabilityDataStagingShard)
}
func (rdss *reachabilityDataStagingShard) Commit(dbTx model.DBTransaction) error {
if rdss.reachabilityReindexRoot != nil {
reachabilityReindexRootBytes, err := rdss.store.serializeReachabilityReindexRoot(rdss.reachabilityReindexRoot)
if err != nil {
return err
}
err = dbTx.Put(reachabilityReindexRootKey, reachabilityReindexRootBytes)
if err != nil {
return err
}
rdss.store.reachabilityReindexRootCache = rdss.reachabilityReindexRoot
}
for hash, reachabilityData := range rdss.reachabilityData {
reachabilityDataBytes, err := rdss.store.serializeReachabilityData(reachabilityData)
if err != nil {
return err
}
err = dbTx.Put(rdss.store.reachabilityDataBlockHashAsKey(&hash), reachabilityDataBytes)
if err != nil {
return err
}
rdss.store.reachabilityDataCache.Add(&hash, reachabilityData)
}
return nil
}
func (rdss *reachabilityDataStagingShard) isStaged() bool {
return len(rdss.reachabilityData) != 0 || rdss.reachabilityReindexRoot != nil
}

View File

@@ -14,74 +14,40 @@ var reachabilityReindexRootKey = database.MakeBucket(nil).Key([]byte("reachabili
// reachabilityDataStore represents a store of ReachabilityData
type reachabilityDataStore struct {
reachabilityDataStaging map[externalapi.DomainHash]model.ReachabilityData
reachabilityReindexRootStaging *externalapi.DomainHash
reachabilityDataCache *lrucache.LRUCache
reachabilityReindexRootCache *externalapi.DomainHash
reachabilityDataCache *lrucache.LRUCache
reachabilityReindexRootCache *externalapi.DomainHash
}
// New instantiates a new ReachabilityDataStore
func New(cacheSize int, preallocate bool) model.ReachabilityDataStore {
return &reachabilityDataStore{
reachabilityDataStaging: make(map[externalapi.DomainHash]model.ReachabilityData),
reachabilityDataCache: lrucache.New(cacheSize, preallocate),
reachabilityDataCache: lrucache.New(cacheSize, preallocate),
}
}
// StageReachabilityData stages the given reachabilityData for the given blockHash
func (rds *reachabilityDataStore) StageReachabilityData(blockHash *externalapi.DomainHash,
reachabilityData model.ReachabilityData) {
func (rds *reachabilityDataStore) StageReachabilityData(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, reachabilityData model.ReachabilityData) {
stagingShard := rds.stagingShard(stagingArea)
rds.reachabilityDataStaging[*blockHash] = reachabilityData
stagingShard.reachabilityData[*blockHash] = reachabilityData
}
// StageReachabilityReindexRoot stages the given reachabilityReindexRoot
func (rds *reachabilityDataStore) StageReachabilityReindexRoot(reachabilityReindexRoot *externalapi.DomainHash) {
rds.reachabilityReindexRootStaging = reachabilityReindexRoot
func (rds *reachabilityDataStore) StageReachabilityReindexRoot(stagingArea *model.StagingArea, reachabilityReindexRoot *externalapi.DomainHash) {
stagingShard := rds.stagingShard(stagingArea)
stagingShard.reachabilityReindexRoot = reachabilityReindexRoot
}
func (rds *reachabilityDataStore) IsAnythingStaged() bool {
return len(rds.reachabilityDataStaging) != 0 || rds.reachabilityReindexRootStaging != nil
}
func (rds *reachabilityDataStore) Discard() {
rds.reachabilityDataStaging = make(map[externalapi.DomainHash]model.ReachabilityData)
rds.reachabilityReindexRootStaging = nil
}
func (rds *reachabilityDataStore) Commit(dbTx model.DBTransaction) error {
if rds.reachabilityReindexRootStaging != nil {
reachabilityReindexRootBytes, err := rds.serializeReachabilityReindexRoot(rds.reachabilityReindexRootStaging)
if err != nil {
return err
}
err = dbTx.Put(reachabilityReindexRootKey, reachabilityReindexRootBytes)
if err != nil {
return err
}
rds.reachabilityReindexRootCache = rds.reachabilityReindexRootStaging
}
for hash, reachabilityData := range rds.reachabilityDataStaging {
reachabilityDataBytes, err := rds.serializeReachabilityData(reachabilityData)
if err != nil {
return err
}
err = dbTx.Put(rds.reachabilityDataBlockHashAsKey(&hash), reachabilityDataBytes)
if err != nil {
return err
}
rds.reachabilityDataCache.Add(&hash, reachabilityData)
}
rds.Discard()
return nil
func (rds *reachabilityDataStore) IsStaged(stagingArea *model.StagingArea) bool {
return rds.stagingShard(stagingArea).isStaged()
}
// ReachabilityData returns the reachabilityData associated with the given blockHash
func (rds *reachabilityDataStore) ReachabilityData(dbContext model.DBReader,
blockHash *externalapi.DomainHash) (model.ReachabilityData, error) {
func (rds *reachabilityDataStore) ReachabilityData(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (model.ReachabilityData, error) {
stagingShard := rds.stagingShard(stagingArea)
if reachabilityData, ok := rds.reachabilityDataStaging[*blockHash]; ok {
if reachabilityData, ok := stagingShard.reachabilityData[*blockHash]; ok {
return reachabilityData, nil
}
@@ -102,8 +68,10 @@ func (rds *reachabilityDataStore) ReachabilityData(dbContext model.DBReader,
return reachabilityData, nil
}
func (rds *reachabilityDataStore) HasReachabilityData(dbContext model.DBReader, blockHash *externalapi.DomainHash) (bool, error) {
if _, ok := rds.reachabilityDataStaging[*blockHash]; ok {
func (rds *reachabilityDataStore) HasReachabilityData(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (bool, error) {
stagingShard := rds.stagingShard(stagingArea)
if _, ok := stagingShard.reachabilityData[*blockHash]; ok {
return true, nil
}
@@ -115,9 +83,11 @@ func (rds *reachabilityDataStore) HasReachabilityData(dbContext model.DBReader,
}
// ReachabilityReindexRoot returns the current reachability reindex root
func (rds *reachabilityDataStore) ReachabilityReindexRoot(dbContext model.DBReader) (*externalapi.DomainHash, error) {
if rds.reachabilityReindexRootStaging != nil {
return rds.reachabilityReindexRootStaging, nil
func (rds *reachabilityDataStore) ReachabilityReindexRoot(dbContext model.DBReader, stagingArea *model.StagingArea) (*externalapi.DomainHash, error) {
stagingShard := rds.stagingShard(stagingArea)
if stagingShard.reachabilityReindexRoot != nil {
return stagingShard.reachabilityReindexRoot, nil
}
if rds.reachabilityReindexRootCache != nil {


@@ -0,0 +1,74 @@
package utxodiffstore
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type utxoDiffStagingShard struct {
store *utxoDiffStore
utxoDiffToAdd map[externalapi.DomainHash]externalapi.UTXODiff
utxoDiffChildToAdd map[externalapi.DomainHash]*externalapi.DomainHash
toDelete map[externalapi.DomainHash]struct{}
}
func (uds *utxoDiffStore) stagingShard(stagingArea *model.StagingArea) *utxoDiffStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDUTXODiff, func() model.StagingShard {
return &utxoDiffStagingShard{
store: uds,
utxoDiffToAdd: make(map[externalapi.DomainHash]externalapi.UTXODiff),
utxoDiffChildToAdd: make(map[externalapi.DomainHash]*externalapi.DomainHash),
toDelete: make(map[externalapi.DomainHash]struct{}),
}
}).(*utxoDiffStagingShard)
}
func (udss *utxoDiffStagingShard) Commit(dbTx model.DBTransaction) error {
for hash, utxoDiff := range udss.utxoDiffToAdd {
utxoDiffBytes, err := udss.store.serializeUTXODiff(utxoDiff)
if err != nil {
return err
}
err = dbTx.Put(udss.store.utxoDiffHashAsKey(&hash), utxoDiffBytes)
if err != nil {
return err
}
udss.store.utxoDiffCache.Add(&hash, utxoDiff)
}
for hash, utxoDiffChild := range udss.utxoDiffChildToAdd {
if utxoDiffChild == nil {
continue
}
utxoDiffChildBytes, err := udss.store.serializeUTXODiffChild(utxoDiffChild)
if err != nil {
return err
}
err = dbTx.Put(udss.store.utxoDiffChildHashAsKey(&hash), utxoDiffChildBytes)
if err != nil {
return err
}
udss.store.utxoDiffChildCache.Add(&hash, utxoDiffChild)
}
for hash := range udss.toDelete {
err := dbTx.Delete(udss.store.utxoDiffHashAsKey(&hash))
if err != nil {
return err
}
udss.store.utxoDiffCache.Remove(&hash)
err = dbTx.Delete(udss.store.utxoDiffChildHashAsKey(&hash))
if err != nil {
return err
}
udss.store.utxoDiffChildCache.Remove(&hash)
}
return nil
}
func (udss *utxoDiffStagingShard) isStaged() bool {
return len(udss.utxoDiffToAdd) != 0 || len(udss.utxoDiffChildToAdd) != 0 || len(udss.toDelete) != 0
}
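The `toDelete` set consumed by `Commit` above is fed by the store's `Delete` method (shown further below): a hash that is still only staged is simply unstaged, while a hash assumed to be already persisted is queued so `Commit` issues a database delete. A toy model of that interplay (illustrative types, not the kaspad API):

```go
package main

import "fmt"

type hash string

// shard is a stripped-down stand-in for utxoDiffStagingShard.
type shard struct {
	toAdd    map[hash]string
	toDelete map[hash]struct{}
}

func newShard() *shard {
	return &shard{
		toAdd:    make(map[hash]string),
		toDelete: make(map[hash]struct{}),
	}
}

func (s *shard) stage(h hash, v string) { s.toAdd[h] = v }

// delete mirrors utxoDiffStore.Delete: unstage pending adds, otherwise
// queue the hash so Commit can remove it from the database and caches.
func (s *shard) delete(h hash) {
	if _, ok := s.toAdd[h]; ok {
		delete(s.toAdd, h) // never reached the database; just unstage it
		return
	}
	s.toDelete[h] = struct{}{} // persisted earlier; Commit issues the delete
}

func main() {
	s := newShard()
	s.stage("a", "diff-a")
	s.delete("a") // unstaged, not queued for DB deletion
	s.delete("b") // assumed persisted, queued for DB deletion
	fmt.Println(len(s.toAdd), len(s.toDelete)) // 0 1
}
```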


@@ -15,100 +15,46 @@ var utxoDiffChildBucket = database.MakeBucket([]byte("utxo-diff-children"))
// utxoDiffStore represents a store of UTXODiffs
type utxoDiffStore struct {
utxoDiffStaging map[externalapi.DomainHash]externalapi.UTXODiff
utxoDiffChildStaging map[externalapi.DomainHash]*externalapi.DomainHash
toDelete map[externalapi.DomainHash]struct{}
utxoDiffCache *lrucache.LRUCache
utxoDiffChildCache *lrucache.LRUCache
utxoDiffCache *lrucache.LRUCache
utxoDiffChildCache *lrucache.LRUCache
}
// New instantiates a new UTXODiffStore
func New(cacheSize int, preallocate bool) model.UTXODiffStore {
return &utxoDiffStore{
utxoDiffStaging: make(map[externalapi.DomainHash]externalapi.UTXODiff),
utxoDiffChildStaging: make(map[externalapi.DomainHash]*externalapi.DomainHash),
toDelete: make(map[externalapi.DomainHash]struct{}),
utxoDiffCache: lrucache.New(cacheSize, preallocate),
utxoDiffChildCache: lrucache.New(cacheSize, preallocate),
utxoDiffCache: lrucache.New(cacheSize, preallocate),
utxoDiffChildCache: lrucache.New(cacheSize, preallocate),
}
}
// Stage stages the given utxoDiff for the given blockHash
func (uds *utxoDiffStore) Stage(blockHash *externalapi.DomainHash, utxoDiff externalapi.UTXODiff, utxoDiffChild *externalapi.DomainHash) {
uds.utxoDiffStaging[*blockHash] = utxoDiff
func (uds *utxoDiffStore) Stage(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, utxoDiff externalapi.UTXODiff, utxoDiffChild *externalapi.DomainHash) {
stagingShard := uds.stagingShard(stagingArea)
stagingShard.utxoDiffToAdd[*blockHash] = utxoDiff
if utxoDiffChild != nil {
uds.utxoDiffChildStaging[*blockHash] = utxoDiffChild
stagingShard.utxoDiffChildToAdd[*blockHash] = utxoDiffChild
}
}
func (uds *utxoDiffStore) IsStaged() bool {
return len(uds.utxoDiffStaging) != 0 || len(uds.utxoDiffChildStaging) != 0 || len(uds.toDelete) != 0
func (uds *utxoDiffStore) IsStaged(stagingArea *model.StagingArea) bool {
return uds.stagingShard(stagingArea).isStaged()
}
func (uds *utxoDiffStore) IsBlockHashStaged(blockHash *externalapi.DomainHash) bool {
if _, ok := uds.utxoDiffStaging[*blockHash]; ok {
func (uds *utxoDiffStore) isBlockHashStaged(stagingShard *utxoDiffStagingShard, blockHash *externalapi.DomainHash) bool {
if _, ok := stagingShard.utxoDiffToAdd[*blockHash]; ok {
return true
}
_, ok := uds.utxoDiffChildStaging[*blockHash]
_, ok := stagingShard.utxoDiffChildToAdd[*blockHash]
return ok
}
func (uds *utxoDiffStore) Discard() {
uds.utxoDiffStaging = make(map[externalapi.DomainHash]externalapi.UTXODiff)
uds.utxoDiffChildStaging = make(map[externalapi.DomainHash]*externalapi.DomainHash)
uds.toDelete = make(map[externalapi.DomainHash]struct{})
}
func (uds *utxoDiffStore) Commit(dbTx model.DBTransaction) error {
for hash, utxoDiff := range uds.utxoDiffStaging {
utxoDiffBytes, err := uds.serializeUTXODiff(utxoDiff)
if err != nil {
return err
}
err = dbTx.Put(uds.utxoDiffHashAsKey(&hash), utxoDiffBytes)
if err != nil {
return err
}
uds.utxoDiffCache.Add(&hash, utxoDiff)
}
for hash, utxoDiffChild := range uds.utxoDiffChildStaging {
if utxoDiffChild == nil {
continue
}
utxoDiffChildBytes, err := uds.serializeUTXODiffChild(utxoDiffChild)
if err != nil {
return err
}
err = dbTx.Put(uds.utxoDiffChildHashAsKey(&hash), utxoDiffChildBytes)
if err != nil {
return err
}
uds.utxoDiffChildCache.Add(&hash, utxoDiffChild)
}
for hash := range uds.toDelete {
err := dbTx.Delete(uds.utxoDiffHashAsKey(&hash))
if err != nil {
return err
}
uds.utxoDiffCache.Remove(&hash)
err = dbTx.Delete(uds.utxoDiffChildHashAsKey(&hash))
if err != nil {
return err
}
uds.utxoDiffChildCache.Remove(&hash)
}
uds.Discard()
return nil
}
// UTXODiff gets the utxoDiff associated with the given blockHash
func (uds *utxoDiffStore) UTXODiff(dbContext model.DBReader, blockHash *externalapi.DomainHash) (externalapi.UTXODiff, error) {
if utxoDiff, ok := uds.utxoDiffStaging[*blockHash]; ok {
func (uds *utxoDiffStore) UTXODiff(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (externalapi.UTXODiff, error) {
stagingShard := uds.stagingShard(stagingArea)
if utxoDiff, ok := stagingShard.utxoDiffToAdd[*blockHash]; ok {
return utxoDiff, nil
}
@@ -130,8 +76,10 @@ func (uds *utxoDiffStore) UTXODiff(dbContext model.DBReader, blockHash *external
}
// UTXODiffChild gets the utxoDiff child associated with the given blockHash
func (uds *utxoDiffStore) UTXODiffChild(dbContext model.DBReader, blockHash *externalapi.DomainHash) (*externalapi.DomainHash, error) {
if utxoDiffChild, ok := uds.utxoDiffChildStaging[*blockHash]; ok {
func (uds *utxoDiffStore) UTXODiffChild(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (*externalapi.DomainHash, error) {
stagingShard := uds.stagingShard(stagingArea)
if utxoDiffChild, ok := stagingShard.utxoDiffChildToAdd[*blockHash]; ok {
return utxoDiffChild, nil
}
@@ -153,8 +101,10 @@ func (uds *utxoDiffStore) UTXODiffChild(dbContext model.DBReader, blockHash *ext
}
// HasUTXODiffChild returns true if the given blockHash has a UTXODiffChild
func (uds *utxoDiffStore) HasUTXODiffChild(dbContext model.DBReader, blockHash *externalapi.DomainHash) (bool, error) {
if _, ok := uds.utxoDiffChildStaging[*blockHash]; ok {
func (uds *utxoDiffStore) HasUTXODiffChild(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (bool, error) {
stagingShard := uds.stagingShard(stagingArea)
if _, ok := stagingShard.utxoDiffChildToAdd[*blockHash]; ok {
return true, nil
}
@@ -166,17 +116,19 @@ func (uds *utxoDiffStore) HasUTXODiffChild(dbContext model.DBReader, blockHash *
}
// Delete deletes the utxoDiff associated with the given blockHash
func (uds *utxoDiffStore) Delete(blockHash *externalapi.DomainHash) {
if uds.IsBlockHashStaged(blockHash) {
if _, ok := uds.utxoDiffStaging[*blockHash]; ok {
delete(uds.utxoDiffStaging, *blockHash)
func (uds *utxoDiffStore) Delete(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) {
stagingShard := uds.stagingShard(stagingArea)
if uds.isBlockHashStaged(stagingShard, blockHash) {
if _, ok := stagingShard.utxoDiffToAdd[*blockHash]; ok {
delete(stagingShard.utxoDiffToAdd, *blockHash)
}
if _, ok := uds.utxoDiffChildStaging[*blockHash]; ok {
delete(uds.utxoDiffChildStaging, *blockHash)
if _, ok := stagingShard.utxoDiffChildToAdd[*blockHash]; ok {
delete(stagingShard.utxoDiffChildToAdd, *blockHash)
}
return
}
uds.toDelete[*blockHash] = struct{}{}
stagingShard.toDelete[*blockHash] = struct{}{}
}
func (uds *utxoDiffStore) utxoDiffHashAsKey(hash *externalapi.DomainHash) model.DBKey {


@@ -1,11 +1,12 @@
package consensus
import (
daablocksstore "github.com/kaspanet/kaspad/domain/consensus/datastructures/daablocksstore"
"io/ioutil"
"os"
"sync"
daablocksstore "github.com/kaspanet/kaspad/domain/consensus/datastructures/daablocksstore"
"github.com/kaspanet/kaspad/domain/consensus/datastructures/headersselectedchainstore"
"github.com/kaspanet/kaspad/domain/consensus/processes/dagtraversalmanager"
@@ -123,7 +124,16 @@ func (f *factory) NewConsensus(dagParams *dagconfig.Params, db infrastructuredat
reachabilityDataStore := reachabilitydatastore.New(pruningWindowSizePlusFinalityDepthForCache, preallocateCaches)
utxoDiffStore := utxodiffstore.New(200, preallocateCaches)
consensusStateStore := consensusstatestore.New(10_000, preallocateCaches)
ghostdagDataStore := ghostdagdatastore.New(pruningWindowSizeForCaches, preallocateCaches)
// Some tests artificially decrease the pruningWindowSize, thus making the GhostDagStore cache too small for
// a single DifficultyAdjustmentWindow. To alleviate this problem we make sure that the cache size is at least
// dagParams.DifficultyAdjustmentWindowSize
ghostdagDataCacheSize := pruningWindowSizeForCaches
if ghostdagDataCacheSize < dagParams.DifficultyAdjustmentWindowSize {
ghostdagDataCacheSize = dagParams.DifficultyAdjustmentWindowSize
}
ghostdagDataStore := ghostdagdatastore.New(ghostdagDataCacheSize, preallocateCaches)
headersSelectedTipStore := headersselectedtipstore.New()
finalityStore := finalitystore.New(200, preallocateCaches)
headersSelectedChainStore := headersselectedchainstore.New(pruningWindowSizeForCaches, preallocateCaches)
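The cache-size clamp added above can be read as a simple floor function. A hedged sketch (names and the window value are illustrative, not the kaspad API):

```go
package main

import "fmt"

// ghostdagCacheSize mirrors the clamp above: the GHOSTDAG data cache must
// hold at least one full difficulty-adjustment window, even when a test
// artificially shrinks the pruning window.
func ghostdagCacheSize(pruningWindowSizeForCaches, difficultyAdjustmentWindowSize int) int {
	if pruningWindowSizeForCaches < difficultyAdjustmentWindowSize {
		return difficultyAdjustmentWindowSize
	}
	return pruningWindowSizeForCaches
}

func main() {
	fmt.Println(ghostdagCacheSize(100, 2641))    // test-sized pruning window: clamped up
	fmt.Println(ghostdagCacheSize(100000, 2641)) // production-sized window: unchanged
}
```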


@@ -83,6 +83,8 @@ func TestFinality(t *testing.T) {
}
}
stagingArea := model.NewStagingArea()
// Add two more blocks in the side-chain until it becomes the selected chain
for i := uint64(0); i < 2; i++ {
sideChainTip, err = buildAndInsertBlock([]*externalapi.DomainHash{sideChainTipHash})
@@ -120,7 +122,7 @@ func TestFinality(t *testing.T) {
mainChainTipHash = consensushashing.BlockHash(mainChainTip)
}
virtualFinality, err := consensus.FinalityManager().VirtualFinalityPoint()
virtualFinality, err := consensus.FinalityManager().VirtualFinalityPoint(stagingArea)
if err != nil {
t.Fatalf("TestFinality: Failed getting the virtual's finality point: %v", err)
}
@@ -145,12 +147,14 @@ func TestFinality(t *testing.T) {
if err != nil {
t.Fatalf("TestFinality: Failed getting virtual selectedParent: %v", err)
}
selectedTipGhostDagData, err := consensus.GHOSTDAGDataStore().Get(consensus.DatabaseContext(), selectedTip)
selectedTipGhostDagData, err :=
consensus.GHOSTDAGDataStore().Get(consensus.DatabaseContext(), stagingArea, selectedTip)
if err != nil {
t.Fatalf("TestFinality: Failed getting the ghost dag data of the selected tip: %v", err)
}
sideChainTipGhostDagData, err := consensus.GHOSTDAGDataStore().Get(consensus.DatabaseContext(), sideChainTipHash)
sideChainTipGhostDagData, err :=
consensus.GHOSTDAGDataStore().Get(consensus.DatabaseContext(), stagingArea, sideChainTipHash)
if err != nil {
t.Fatalf("TestFinality: Failed getting the ghost dag data of the sidechain tip: %v", err)
}
@@ -302,7 +306,9 @@ func TestBoundedMergeDepth(t *testing.T) {
t.Fatalf("TestBoundedMergeDepth: Expected blueKosherizingBlock to not violate merge depth")
}
virtualGhotDagData, err := consensusReal.GHOSTDAGDataStore().Get(consensusReal.DatabaseContext(), model.VirtualBlockHash)
stagingArea := model.NewStagingArea()
virtualGhotDagData, err := consensusReal.GHOSTDAGDataStore().Get(consensusReal.DatabaseContext(),
stagingArea, model.VirtualBlockHash)
if err != nil {
t.Fatalf("TestBoundedMergeDepth: Failed getting the ghostdag data of the virtual: %v", err)
}
@@ -350,7 +356,8 @@ func TestBoundedMergeDepth(t *testing.T) {
t.Fatalf("TestBoundedMergeDepth: Expected %s to be the selectedTip but found %s instead", tip, virtualSelectedParent)
}
virtualGhotDagData, err = consensusReal.GHOSTDAGDataStore().Get(consensusReal.DatabaseContext(), model.VirtualBlockHash)
virtualGhotDagData, err = consensusReal.GHOSTDAGDataStore().Get(
consensusReal.DatabaseContext(), stagingArea, model.VirtualBlockHash)
if err != nil {
t.Fatalf("TestBoundedMergeDepth: Failed getting the ghostdag data of the virtual: %v", err)
}
@@ -372,7 +379,8 @@ func TestBoundedMergeDepth(t *testing.T) {
}
// Now `pointAtBlueKosherizing` itself is actually still blue, so we can still point at that even though we can't point at kosherizing directly anymore
transitiveBlueKosherizing, isViolatingMergeDepth := checkViolatingMergeDepth(consensusReal, []*externalapi.DomainHash{consensushashing.BlockHash(pointAtBlueKosherizing), tip})
transitiveBlueKosherizing, isViolatingMergeDepth :=
checkViolatingMergeDepth(consensusReal, []*externalapi.DomainHash{consensushashing.BlockHash(pointAtBlueKosherizing), tip})
if isViolatingMergeDepth {
t.Fatalf("TestBoundedMergeDepth: Expected transitiveBlueKosherizing to not violate merge depth")
}


@@ -9,7 +9,7 @@ type Consensus interface {
GetBlock(blockHash *DomainHash) (*DomainBlock, error)
GetBlockHeader(blockHash *DomainHash) (BlockHeader, error)
GetBlockInfo(blockHash *DomainHash) (*BlockInfo, error)
GetBlockChildren(blockHash *DomainHash) ([]*DomainHash, error)
GetBlockRelations(blockHash *DomainHash) (parents []*DomainHash, selectedParent *DomainHash, children []*DomainHash, err error)
GetBlockAcceptanceData(blockHash *DomainHash) (AcceptanceData, error)
GetHashesBetween(lowHash, highHash *DomainHash, maxBlueScoreDifference uint64) (hashes []*DomainHash, actualHighHash *DomainHash, err error)


@@ -5,8 +5,8 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// AcceptanceDataStore represents a store of AcceptanceData
type AcceptanceDataStore interface {
Store
Stage(blockHash *externalapi.DomainHash, acceptanceData externalapi.AcceptanceData)
IsStaged() bool
Get(dbContext DBReader, blockHash *externalapi.DomainHash) (externalapi.AcceptanceData, error)
Delete(blockHash *externalapi.DomainHash)
Stage(stagingArea *StagingArea, blockHash *externalapi.DomainHash, acceptanceData externalapi.AcceptanceData)
IsStaged(stagingArea *StagingArea) bool
Get(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (externalapi.AcceptanceData, error)
Delete(stagingArea *StagingArea, blockHash *externalapi.DomainHash)
}


@@ -5,11 +5,11 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// BlockHeaderStore represents a store of block headers
type BlockHeaderStore interface {
Store
Stage(blockHash *externalapi.DomainHash, blockHeader externalapi.BlockHeader)
IsStaged() bool
BlockHeader(dbContext DBReader, blockHash *externalapi.DomainHash) (externalapi.BlockHeader, error)
HasBlockHeader(dbContext DBReader, blockHash *externalapi.DomainHash) (bool, error)
BlockHeaders(dbContext DBReader, blockHashes []*externalapi.DomainHash) ([]externalapi.BlockHeader, error)
Delete(blockHash *externalapi.DomainHash)
Count() uint64
Stage(stagingArea *StagingArea, blockHash *externalapi.DomainHash, blockHeader externalapi.BlockHeader)
IsStaged(stagingArea *StagingArea) bool
BlockHeader(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (externalapi.BlockHeader, error)
HasBlockHeader(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (bool, error)
BlockHeaders(dbContext DBReader, stagingArea *StagingArea, blockHashes []*externalapi.DomainHash) ([]externalapi.BlockHeader, error)
Delete(stagingArea *StagingArea, blockHash *externalapi.DomainHash)
Count(stagingArea *StagingArea) uint64
}


@@ -5,8 +5,8 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// BlockRelationStore represents a store of BlockRelations
type BlockRelationStore interface {
Store
StageBlockRelation(blockHash *externalapi.DomainHash, blockRelations *BlockRelations)
IsStaged() bool
BlockRelation(dbContext DBReader, blockHash *externalapi.DomainHash) (*BlockRelations, error)
Has(dbContext DBReader, blockHash *externalapi.DomainHash) (bool, error)
StageBlockRelation(stagingArea *StagingArea, blockHash *externalapi.DomainHash, blockRelations *BlockRelations)
IsStaged(stagingArea *StagingArea) bool
BlockRelation(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (*BlockRelations, error)
Has(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (bool, error)
}


@@ -5,8 +5,8 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// BlockStatusStore represents a store of BlockStatuses
type BlockStatusStore interface {
Store
Stage(blockHash *externalapi.DomainHash, blockStatus externalapi.BlockStatus)
IsStaged() bool
Get(dbContext DBReader, blockHash *externalapi.DomainHash) (externalapi.BlockStatus, error)
Exists(dbContext DBReader, blockHash *externalapi.DomainHash) (bool, error)
Stage(stagingArea *StagingArea, blockHash *externalapi.DomainHash, blockStatus externalapi.BlockStatus)
IsStaged(stagingArea *StagingArea) bool
Get(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (externalapi.BlockStatus, error)
Exists(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (bool, error)
}


@@ -5,12 +5,12 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// BlockStore represents a store of blocks
type BlockStore interface {
Store
Stage(blockHash *externalapi.DomainHash, block *externalapi.DomainBlock)
IsStaged() bool
Block(dbContext DBReader, blockHash *externalapi.DomainHash) (*externalapi.DomainBlock, error)
HasBlock(dbContext DBReader, blockHash *externalapi.DomainHash) (bool, error)
Blocks(dbContext DBReader, blockHashes []*externalapi.DomainHash) ([]*externalapi.DomainBlock, error)
Delete(blockHash *externalapi.DomainHash)
Count() uint64
Stage(stagingArea *StagingArea, blockHash *externalapi.DomainHash, block *externalapi.DomainBlock)
IsStaged(stagingArea *StagingArea) bool
Block(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (*externalapi.DomainBlock, error)
HasBlock(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (bool, error)
Blocks(dbContext DBReader, stagingArea *StagingArea, blockHashes []*externalapi.DomainHash) ([]*externalapi.DomainBlock, error)
Delete(stagingArea *StagingArea, blockHash *externalapi.DomainHash)
Count(stagingArea *StagingArea) uint64
AllBlockHashesIterator(dbContext DBReader) (BlockIterator, error)
}


@@ -5,17 +5,16 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// ConsensusStateStore represents a store for the current consensus state
type ConsensusStateStore interface {
Store
IsStaged() bool
IsStaged(stagingArea *StagingArea) bool
StageVirtualUTXODiff(virtualUTXODiff externalapi.UTXODiff)
UTXOByOutpoint(dbContext DBReader, outpoint *externalapi.DomainOutpoint) (externalapi.UTXOEntry, error)
HasUTXOByOutpoint(dbContext DBReader, outpoint *externalapi.DomainOutpoint) (bool, error)
VirtualUTXOSetIterator(dbContext DBReader) (externalapi.ReadOnlyUTXOSetIterator, error)
VirtualUTXOs(dbContext DBReader,
fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error)
StageVirtualUTXODiff(stagingArea *StagingArea, virtualUTXODiff externalapi.UTXODiff)
UTXOByOutpoint(dbContext DBReader, stagingArea *StagingArea, outpoint *externalapi.DomainOutpoint) (externalapi.UTXOEntry, error)
HasUTXOByOutpoint(dbContext DBReader, stagingArea *StagingArea, outpoint *externalapi.DomainOutpoint) (bool, error)
VirtualUTXOSetIterator(dbContext DBReader, stagingArea *StagingArea) (externalapi.ReadOnlyUTXOSetIterator, error)
VirtualUTXOs(dbContext DBReader, fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error)
StageTips(tipHashes []*externalapi.DomainHash)
Tips(dbContext DBReader) ([]*externalapi.DomainHash, error)
StageTips(stagingArea *StagingArea, tipHashes []*externalapi.DomainHash)
Tips(stagingArea *StagingArea, dbContext DBReader) ([]*externalapi.DomainHash, error)
StartImportingPruningPointUTXOSet(dbContext DBWriter) error
HadStartedImportingPruningPointUTXOSet(dbContext DBWriter) (bool, error)


@@ -5,10 +5,10 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// DAABlocksStore represents a store of DAA scores and DAA-added blocks
type DAABlocksStore interface {
Store
StageDAAScore(blockHash *externalapi.DomainHash, daaScore uint64)
StageBlockDAAAddedBlocks(blockHash *externalapi.DomainHash, addedBlocks []*externalapi.DomainHash)
IsAnythingStaged() bool
DAAAddedBlocks(dbContext DBReader, blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error)
DAAScore(dbContext DBReader, blockHash *externalapi.DomainHash) (uint64, error)
Delete(blockHash *externalapi.DomainHash)
StageDAAScore(stagingArea *StagingArea, blockHash *externalapi.DomainHash, daaScore uint64)
StageBlockDAAAddedBlocks(stagingArea *StagingArea, blockHash *externalapi.DomainHash, addedBlocks []*externalapi.DomainHash)
IsStaged(stagingArea *StagingArea) bool
DAAAddedBlocks(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error)
DAAScore(dbContext DBReader, stagingArea *StagingArea, blockHash *externalapi.DomainHash) (uint64, error)
Delete(stagingArea *StagingArea, blockHash *externalapi.DomainHash)
}

Some files were not shown because too many files have changed in this diff.