Compare commits

...

38 Commits

Author SHA1 Message Date
Michael Sutton
b8d36a1772 Modify DefaultTimeout to 120 seconds
A temporary workaround for nodes having trouble syncing (currently the download of pruning-point-related data during IBD takes more than 30 seconds)
2021-11-19 12:28:39 +02:00
Ori Newman
5c1ba9170e Don't set blocks from the pruning point anticone as the header selected tip (#1850)
* Decrease the dial timeout to 1 second

* Don't set blocks from the pruning point anticone as the header selected tip.

Co-authored-by: Kaspa Profiler <>
2021-11-13 18:59:20 +02:00
Elichai Turkel
9d8c555bdf Fix a bug in the matrix ranking algorithm (#1849)
* Fix a bug in the matrix ranking algorithm

* Add tests and benchmarks for matrix generation and ranking

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-11-13 17:51:11 +02:00
Ori Newman
a2f574eab8 Update to version 0.11.3 2021-11-13 17:18:40 +02:00
Kaspa Profiler
7bed86dc1b Update changelog.txt 2021-11-11 09:30:12 +02:00
Ori Newman
9b81f5145e Increase p2p msg size and reset utxo after ibd (#1847)
* Don't build unnecessary binaries

* Reset UTXO index after IBD

* Enlarge max msg size to 1gb

* Fix UTXO chunks logic

* Revert UTXO set override change

* Fix sendPruningPointUTXOSet

* Increase tests timeout to 20 minutes

Co-authored-by: Kaspa Profiler <>
2021-11-11 09:27:07 +02:00
Kaspa Profiler
cd8341ef57 Update to version 0.11.2 2021-11-10 23:19:25 +02:00
Ori Newman
ad8bdbed21 Update changelog (#1845)
Co-authored-by: Kaspa Profiler <>
2021-11-09 00:32:56 +02:00
Elichai Turkel
7cdceb6df0 Cache the miner state (#1844)
* Implement a MinerState to cache the matrix and friends

* Modify the miner and related code to use the new MinerCache

* Change MinerState to State

* Make go lint happy

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Kaspa Profiler <>
2021-11-09 00:12:30 +02:00
stasatdaglabs
cc5248106e Update to version 0.11.1 2021-11-08 09:01:52 +02:00
Elichai Turkel
e3463b7268 Replace Keccak256 in oPoW with CSHAKE256 with domain separation (#1842)
* Replace keccak with CSHAKE256 in oPoW

* Add benchmarks to hash writers to compare blake2b to the CSHAKE

* Update genesis blocks

* Update tests

* Define genesis's block level to be the maximal one

* Add message to genesis coinbase

* Add comments to genesis coinbase

* Fix tests

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-11-07 18:36:30 +02:00
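
To make the domain-separation idea above concrete, here is a minimal Go sketch using golang.org/x/crypto/sha3's cSHAKE256: hashing the same payload under two customization strings yields unrelated digests, so the PoW hash can never collide with hashes computed for other purposes. The domain strings below are illustrative placeholders, not necessarily kaspad's actual constants.

package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// hashWithDomain hashes data under a cSHAKE256 customization string.
// Distinct domains produce unrelated digests for identical inputs.
func hashWithDomain(domain string, data []byte) [32]byte {
	// The second argument is cSHAKE's customization string S.
	h := sha3.NewCShake256(nil, []byte(domain))
	h.Write(data)
	var out [32]byte
	h.Read(out[:])
	return out
}

func main() {
	// Illustrative domain names, not kaspad's constants.
	a := hashWithDomain("ProofOfWorkHash", []byte("payload"))
	b := hashWithDomain("HeavyHash", []byte("payload"))
	fmt.Printf("%x\n%x\n", a, b) // same input, different digests
}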
Ori Newman
a2173ef80a Switch PoW to a keccak heavyhash variant (#1841)
* Add another hash domain for HeavyHash

* Add a xoShiRo256PlusPlus implementation

* Add a HeavyHash implementation

* Replace our current PoW algorithm with oPoW

* Change the PoW hash to keccak256

* Fix genesis

* Fix tests

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-11-07 11:17:15 +02:00
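
For reference, a self-contained sketch of the xoShiRo256++ generator mentioned above, following Blackman and Vigna's public reference algorithm; the seeding in main is illustrative only (the real code derives the generator state from block data to sample the heavyhash matrix).

package main

import "fmt"

// xoshiro256pp is the xoshiro256++ PRNG (reference algorithm).
type xoshiro256pp struct{ s [4]uint64 }

func rotl64(x uint64, k uint) uint64 { return (x << k) | (x >> (64 - k)) }

func (x *xoshiro256pp) next() uint64 {
	result := rotl64(x.s[0]+x.s[3], 23) + x.s[0]
	t := x.s[1] << 17
	x.s[2] ^= x.s[0]
	x.s[3] ^= x.s[1]
	x.s[1] ^= x.s[2]
	x.s[0] ^= x.s[3]
	x.s[2] ^= t
	x.s[3] = rotl64(x.s[3], 45)
	return result
}

func main() {
	// Illustrative seed; the real implementation seeds from a header
	// hash so every block gets its own heavyhash matrix.
	rng := xoshiro256pp{s: [4]uint64{1, 2, 3, 4}}
	for i := 0; i < 4; i++ {
		fmt.Println(rng.next())
	}
}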
stasatdaglabs
aeb4500b61 Add the daglabs-dev mainnet dnsseeder. (#1840) 2021-11-07 10:39:54 +02:00
Ori Newman
0a1daae319 Allow mainnet flag and raise wallet fee (#1838)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-11-07 10:04:27 +02:00
stasatdaglabs
131cd3357e Rename FixedSubsidySwitchHashRateDifference to FixedSubsidySwitchHashRateThreshold and set its value to 150GH/s. (#1837) 2021-11-07 09:33:39 +02:00
Ori Newman
ff72568d6b Fix pruning point anticone order (#1836)
* Send pruning point anticone in topological order
Fix a UTXO pagination bug
Lengthen the stabilization time for the last DAA test

* Extend "sudden hash rate drop" test length to 45 minutes

Co-authored-by: Kaspa Profiler <>
2021-11-07 08:21:34 +02:00
stasatdaglabs
2dddb650b9 Switch to a fixed block subsidy after a certain work threshold (#1831)
* Implement isBlockRewardFixed.

* Fix factory.go.

* Call isBlockRewardFixed from calcBlockSubsidy.

* Fix bad call to ghostdagDataStore.Get.

* Extract blue score and blue work from the header instead of from the ghostdagDataStore.

* Fix coinbasemanager constructor arguments order

* Format consensus_defaults.go

* Check the mainnet switch from the block's point of view rather than the virtual's.

* Don't call newBlockPruningPoint twice in buildBlock.

* Properly handle new pruning point blocks in isBlockRewardFixed.

* Use the correct variable.

* Add a comment explaining what we do when the pruning point is not found in isBlockRewardFixed.

* Implement TestBlockRewardSwitch.

* Add missing error handling.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-31 15:04:51 +02:00
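
A rough sketch of the switch rule these commits describe, under the assumption that the network hash rate is estimated as blue work gained per unit of time and compared against the 150 GH/s threshold; the function name and measurement window are illustrative, not kaspad's actual implementation.

package main

import (
	"fmt"
	"math/big"
)

// isBlockRewardFixed is an illustrative check: if the blue work added
// over a measurement window implies a network hash rate at or above
// the switch threshold, the subsidy becomes fixed from then on.
func isBlockRewardFixed(blueWorkStart, blueWorkEnd *big.Int,
	elapsedMilliseconds int64, thresholdHashesPerSecond *big.Int) bool {

	workDelta := new(big.Int).Sub(blueWorkEnd, blueWorkStart)
	// hashes/second ~= workDelta / elapsedSeconds
	hashRate := new(big.Int).Div(
		new(big.Int).Mul(workDelta, big.NewInt(1000)),
		big.NewInt(elapsedMilliseconds),
	)
	return hashRate.Cmp(thresholdHashesPerSecond) >= 0
}

func main() {
	threshold := new(big.Int).Mul(big.NewInt(150), big.NewInt(1e9)) // 150 GH/s
	start := big.NewInt(0)
	end := new(big.Int).Mul(big.NewInt(200), big.NewInt(1e9))
	fmt.Println(isBlockRewardFixed(start, end, 1000, threshold)) // true
}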
Ori Newman
99aaacd649 Check blue score before requesting a pruning proof (#1835)
* Check blue score before requesting a pruning proof

* BuildPruningPointProof should return empty proof if the pruning point is genesis

* Don't fail many-tips if kaspad exits ungracefully
2021-10-31 12:48:18 +02:00
stasatdaglabs
77a344cc29 In IBD, validate the timestamps of the headers of the pruning point and selected tip (#1829)
* Implement validatePruningPointFutureHeaderTimestamps.

* Fix TestIBDWithPruning.

* Fix wrong logic.

* Add a comment.

* Fix a comment.

* Fix a variable name.

* Add a comment

* Fix TestIBDWithPruning.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-30 20:32:49 +03:00
stasatdaglabs
3dbc42b4f7 Implement the new block subsidy function (#1830)
* Replace the old blockSubsidy parameters with the new ones.

* Return subsidyGenesisReward if blockHash is the genesis hash.

* Traverse a block's past for the subsidy calculation.

* Partially implement SubsidyStore.

* Refer to SubsidyStore from CoinbaseManager.

* Wrap calcBlockSubsidy in getBlockSubsidy, which first checks the database.

* Fix finalityStore not calling GenerateShardingID.

* Implement calculateAveragePastSubsidy.

* Implement calculateMergeSetSubsidySum.

* Implement calculateSubsidyRandomVariable.

* Implement calcBlockSubsidy.

* Add a TODO about floats.

* Update the calcBlockSubsidy TODO.

* Use binary.LittleEndian in calculateSubsidyRandomVariable.

* Fix bad range in calculateSubsidyRandomVariable.

* Replace float64 with big.Rat everywhere except for subsidyRandomVariable.

* Fix a nil dereference.

* Use a random walk to approximate the normal distribution.

* In order to avoid unsupported fractional results from powInt64, flip the numerator and the denominator manually.

* Set standardDeviation to 0.25, MaxSompi to 10_000_000_000 * SompiPerKaspa and defaultSubsidyGenesisReward to 1_000.

* Set the standard deviation to 0.2.

* Use a binomial distribution instead of trying to estimate the normal distribution.

* Change some values around.

* Clamp the block subsidy.

* Remove the fake duplicate constants in the util package.

* Reduce MaxSompi to only 100m Kaspa to avoid hitting the uint64 ceiling.

* Lower MaxSompi further to avoid new and exciting ways for the uint64 ceiling to be hit.

* Remove debug logs.

* Fix a couple of failing tests.

* Fix TestBlockWindow.

* Fix limitTransactionCount sometimes crashing on index-out-of-bounds.

* In TrustedDataDataDAABlock, replace BlockHeader with DomainBlock

* In calculateAveragePastSubsidy, use blockWindow instead of doing a BFS manually.

* Remove the reference to DAGTopologyManager in coinbaseManager.

* Add subsidy to the coinbase payload.

* Get rid of the subsidy store and extract subsidies out of coinbase transactions.

* Keep a blockWindow amount of blocks under the virtual for IBD purposes.

* Manually remove the virtual genesis from the merge set.

* Fix simnet genesis.

* Fix TestPruning.

* Fix TestCheckBlockIsNotPruned.

* Fix TestBlockWindow.

* Fix TestCalculateSignatureHashSchnorr.

* Fix TestCalculateSignatureHashECDSA.

* Fix serializing the wrong value into the coinbase payload.

* Rename coinbaseOutputForBlueBlock to coinbaseOutputAndSubsidyForBlueBlock.

* Add a TODO about optimizing trusted data DAA window blocks.

* Expand on a comment in TestCheckBlockIsNotPruned.

* In calcBlockSubsidy, divide the big.Int numerator by the big.Int denominator instead of converting to float64.

* Clarify a comment.

* Rename SubsidyMinGenesisReward to MinSubsidy.

* Properly handle trusted data blocks in calculateMergeSetSubsidySum.

* Use the first two bytes of the selected parent's hash for randomness instead of math/rand.

* Restore maxSompi to what it used to be.

* Fix TestPruning.

* Fix TestAmountCreation.

* Fix TestBlockWindow.

* Fix TestAmountUnitConversions.

* Increase the timeout in many-tips to 30 minutes.

* Check coinbase subsidy for every block

* Re-rename functions

* Use shift instead of powInt64 to determine subsidyRandom

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-30 10:16:47 +03:00
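
To illustrate the binomial-distribution bullet above: counting the set bits in the first two bytes of the selected parent's hash gives a deterministic binomial(16, 1/2) draw per block. The mapping from that draw to a subsidy below is a placeholder sketch, not the actual formula.

package main

import (
	"fmt"
	"math/bits"
)

// subsidyRandomVariable counts set bits in the first two bytes of the
// selected parent's hash: a cheap, deterministic binomial(16, 1/2) draw.
func subsidyRandomVariable(selectedParentHash [32]byte) int {
	return bits.OnesCount8(selectedParentHash[0]) +
		bits.OnesCount8(selectedParentHash[1])
}

func clamp(v, min, max uint64) uint64 {
	if v < min {
		return min
	}
	if v > max {
		return max
	}
	return v
}

func main() {
	var hash [32]byte
	hash[0], hash[1] = 0xF0, 0x0F // 8 set bits -> the distribution's mean
	r := subsidyRandomVariable(hash)
	// Placeholder mapping: shift a base subsidy by the draw, then clamp.
	base := uint64(1000)
	subsidy := clamp(base<<uint(r)>>8, 100, 100_000)
	fmt.Println(r, subsidy)
}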
Ori Newman
1b9be28613 Improve ExpectedHeaderPruningPoint performance (#1833)
* Improve ExpectedHeaderPruningPoint perf

* Add suggestedLowHash to nextPruningPointAndCandidateByBlockHash
2021-10-26 11:01:26 +03:00
Ori Newman
5dbb1da84b Implement pruning point proof (#1832)
* Calculate GHOSTDAG, reachability etc for each level

* Don't preallocate cache for dag stores except level 0 and reduce the number of connections in the integration test to 32

* Reduce the number of connections in the integration test to 16

* Increase page file

* BuildPruningPointProof

* BuildPruningPointProof

* Add PruningProofManager

* Implement ApplyPruningPointProof

* Add prefix and fix blockAtDepth and fill headersByLevel

* Some bug fixes

* Include all relevant blocks for each level in the proof

* Fix syncAndValidatePruningPointProof to return the right block hash

* Fix block window

* Fix isAncestorOfPruningPoint

* Ban for rule errors on pruning proof

* Find common ancestor for blockAtDepthMAtNextLevel

* Use pruning proof in TestValidateAndInsertImportedPruningPoint

* stage status and finality point for proof blocks

* Uncomment golint

* Change test timeouts

* Calculate merge set for ApplyPruningPointProof

* Increase test timeout

* Add better caching for daa window store

* Return to default timeout

* Add ErrPruningProofMissesBlocksBelowPruningPoint

* Add errDAAWindowBlockNotFound

* Force connection loop next iteration on connection manager stop

* Revert to Test64IncomingConnections

* Remove BlockAtDepth from DAGTraversalManager

* numBullies->16

* Set page file size to 8gb

* Increase p2p max message size

* Test64IncomingConnections->Test16IncomingConnections

* Add comment for PruningProofM

* Add comment in `func (c *ConnectionManager) Stop()`

* Rename isAncestorOfPruningPoint->isAncestorOfSelectedTip

* Revert page file to 16gb

* Improve ExpectedHeaderPruningPoint perf

* Fix comment

* Revert "Improve ExpectedHeaderPruningPoint perf"

This reverts commit bca1080e71.

* Don't test windows
2021-10-26 09:48:27 +03:00
Ori Newman
afaac28da1 Validate each level parents (#1827)
* Create BlockParentBuilder.

* Implement BuildParents.

* Explicitly set level 0 blocks to be the same as direct parents.

* Add checkIndirectParents to validateBlockHeaderInContext.

* Fix test_block_builder.go and BlockLevelParents::Equal.

* Don't check indirect parents for blocks with trusted data.

* Handle pruned blocks when building block level parents.

* Fix bad deletions from unprocessedXxxParents.

* Fix merge errors.

* Fix bad pruning point parent replaces.

* Fix duplicates in newBlockLevelParents.

* Skip checkIndirectParents

* Get rid of staging constant IDs

* Fix BuildParents

* Fix tests

* Add comments

* Change order of directParentHashes

* Get rid of maybeAddDirectParentParents

* Add comments

* Add blockToReferences type

* Use ParentsAtLevel

Co-authored-by: stasatdaglabs <stas@daglabs.com>
2021-09-13 14:22:00 +03:00
stasatdaglabs
0053ee788d Use the BlueWork declared in an orphan block's header instead of requesting it explicitly from the peer that sent us the orphan (#1828) 2021-09-13 13:13:03 +03:00
stasatdaglabs
af7e7de247 Add PruningPointProof to the P2P protocol (#1825)
* Add PruningPointProof to externalapi.

* Add BuildPruningPointProof and ValidatePruningPointProof to Consensus.

* Add the pruning point proof to the protocol.

* Add the pruning point proof to the wire package.

* Add PruningPointBlueWork.

* Make go vet happy.

* Properly initialize PruningPointProof in consensus.go.

* Validate pruning point blue work.

* Populate PruningPointBlueWork with the actual blue work of the pruning point.

* Revert "Populate PruningPointBlueWork with the actual blue work of the pruning point."

This reverts commit f2a9829998.

* Revert "Validate pruning point blue work."

This reverts commit c6a90c5d2c.

* Revert "Properly initialize PruningPointProof in consensus.go."

This reverts commit 9391574bbf.

* Revert "Add PruningPointBlueWork."

This reverts commit 48182f652a.

* Fix PruningPointProof and MsgPruningPointProof to be two-dimensional.

* Fix wire PruningPointProof to be two-dimensional.
2021-09-05 17:20:15 +03:00
Ori Newman
02a08902a7 Fix current pruning point index cache (#1824)
* Fix ps.currentPruningPointIndexCache

* Remove redundant dependency from block builder

* Fix typo
2021-09-05 07:37:25 +03:00
Ori Newman
d9bc94a2a8 Replace header finality point with pruning point and enforce finality rules on IBD with headers proof (#1823)
* Replace header finality point with pruning point

* Fix TestTransactionAcceptance

* Fix pruning candidate

* Store all past pruning points

* Pass pruning points on IBD

* Add blue score to block header

* Simplify ArePruningPointsInValidChain

* Fix static check errors

* Fix genesis

* Renames and text fixes

* Use ExpectedHeaderPruningPoint in block builder

* Fix TestCheckPruningPointViolation
2021-08-31 08:01:48 +03:00
stasatdaglabs
837dac68b5 Update block headers to include multiple levels of parent blocks (#1822)
* Replace the old parents in the block header with BlockLevelParents.

* Begin fixing compilation errors.

* Implement database serialization for block level parents.

* Implement p2p serialization for block level parents.

* Implement rpc serialization for block level parents.

* Add DirectParents() to the block header interface.

* Use DirectParents() instead of Parents() in some places.

* Revert test_block_builder.go.

* Add block level parents to hash serialization.

* Use the zero hash for genesis finality points.

* Fix failing tests.

* Fix a variable name.

* Update headerEstimatedSerializedSize.

* Add comments in blocklevelparents.go.

* Fix the rpc-stability stability test.

* Change the field number for `parents` fields in p2p.proto and rpc.proto.

* Remove MsgBlockHeader::NumParentBlocks.
2021-08-24 12:06:39 +03:00
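
The shape of the new header field, as a minimal sketch consistent with the diffs further down (one hash list per block level, with level 0 holding the direct parents); DomainHash here is a stand-in for externalapi.DomainHash.

package main

import "fmt"

// DomainHash stands in for externalapi.DomainHash (a 32-byte block hash).
type DomainHash [32]byte

// BlockLevelParents lists a block's parent hashes at one block level.
type BlockLevelParents []*DomainHash

// A header now carries one list per level; index 0 holds the direct parents.
type header struct {
	Parents []BlockLevelParents
}

// DirectParents mirrors the DirectParents() accessor the commit adds:
// the level-0 list is defined to be the direct parents.
func (h *header) DirectParents() BlockLevelParents {
	if len(h.Parents) == 0 {
		return nil // genesis carries no parents at any level
	}
	return h.Parents[0]
}

func main() {
	p := &DomainHash{1}
	h := header{Parents: []BlockLevelParents{{p}}}
	fmt.Println(len(h.DirectParents())) // 1
}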
Ori Newman
ba5880fab1 Fix pruning candidate (#1821) 2021-08-23 07:26:43 +03:00
stasatdaglabs
7b5720a155 Implement GHOST (#1819)
* Implement GHOST.

* Implement TestGHOST.

* Make GHOST() take arbitrary subDAGs.

* Hold RootHashes in SubDAG rather than one GenesisHash.

* Select which root the GHOST chain starts with instead of passing a lowHash.

* If two child hashes have the same future size, decide which one is larger using the block hash.

* Extract blockHashWithLargestFutureSize to a separate function.

* Calculate future size for each block individually.

* Make TestGHOST deterministic.

* Increase the timeout for connecting 128 connections in TestRPCMaxInboundConnections.

* Implement BenchmarkGHOST.

* Fix an infinite loop.

* Use much larger benchmark data.

* Optimize `futureSizes` using reverse merge sets.

* Temporarily make the benchmark data smaller while GHOST is being optimized.

* Fix a bug in futureSizes.

* Fix a bug in populateReverseMergeSet.

* Choose a selectedChild at random instead of the one with the largest reverse merge set size.

* Rename populateReverseMergeSet to calculateReverseMergeSet.

* Use reachability to resolve isDescendantOf.

* Extract heightMaps to a separate object.

* Iterate using height maps in futureSizes.

* Don't store reverse merge sets in memory.

* Change calculateReverseMergeSet to calculateReverseMergeSetSize.

* Fix bad initial reverseMergeSetSize.

* Optimize calculateReverseMergeSetSize.

* Enlarge the benchmark data to 86k blocks.
2021-08-19 13:59:43 +03:00
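
A sketch of the chain-step rule described in the bullets above: at each block, walk to the child with the largest future size, breaking ties by block hash. The types and the futureSize callback are illustrative.

package main

import (
	"bytes"
	"fmt"
)

type hash [32]byte

// nextGHOSTBlock picks the child with the largest future size,
// breaking ties in favor of the larger hash, as the commit describes.
func nextGHOSTBlock(children []hash, futureSize func(hash) uint64) hash {
	selected := children[0]
	for _, child := range children[1:] {
		cs, ss := futureSize(child), futureSize(selected)
		if cs > ss || (cs == ss && bytes.Compare(child[:], selected[:]) > 0) {
			selected = child
		}
	}
	return selected
}

func main() {
	a, b := hash{1}, hash{2}
	sizes := map[hash]uint64{a: 10, b: 10}
	next := nextGHOSTBlock([]hash{a, b}, func(h hash) uint64 { return sizes[h] })
	fmt.Println(next == b) // equal future sizes, so the larger hash wins
}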
stasatdaglabs
65b5a080e4 Fix the RPCClient leaking connections (#1820)
* Fix the RPCClient leaking connections.

* Wrap the error return from GetInfo.
2021-08-16 15:59:35 +03:00
stasatdaglabs
ce17348175 Limit the amount of inbound RPC connections (#1818)
* Limit the amount of inbound RPC connections.

* Increment/decrement the right variable.

* Implement TestRPCMaxInboundConnections.

* Make go vet happy.

* Increase RPCMaxInboundConnections to 128.

* Set NUM_CLIENTS=128 in the rpc-idle-clients stability test.

* Explain why the P2P server has unlimited inbound connections.
2021-08-12 14:40:49 +03:00
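
A minimal sketch of the inbound-connection cap, assuming a mutex-guarded counter acquired on accept and released on disconnect; kaspad's actual bookkeeping lives in its RPC manager, and the P2P side stays unlimited because peers are needed for consensus.

package main

import (
	"errors"
	"fmt"
	"sync"
)

// connLimiter rejects inbound RPC connections beyond a fixed cap.
type connLimiter struct {
	mu      sync.Mutex
	current int
	max     int
}

var errTooManyConnections = errors.New("too many inbound RPC connections")

func (l *connLimiter) acquire() error {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.current >= l.max {
		return errTooManyConnections
	}
	l.current++ // increment the right variable, per the fix above
	return nil
}

func (l *connLimiter) release() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.current--
}

func main() {
	l := &connLimiter{max: 128}
	fmt.Println(l.acquire()) // <nil>
	l.release()
}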
stasatdaglabs
d922ee1be2 Add header commitments for DAA score, blue work, and finality points (#1817)
* Add DAAScore, BlueWork, and FinalityPoint to externalapi.BlockHeader.

* Add DAAScore, BlueWork, and FinalityPoint to NewImmutableBlockHeader and fix compilation errors.

* Add DAAScore, BlueWork, and FinalityPoint to protowire header types and fix failing tests.

* Check for header DAA score in validateDifficulty.

* Add DAA score to buildBlock.

* Fix failing tests.

* Add a blue work check in validateDifficultyDAAAndBlueWork.

* Add blue work to buildBlock and fix failing tests.

* Add finality point validation to ValidateHeaderInContext.

* Fix genesis blocks' finality points.

* Add finalityPoint to blockBuilder.

* Fix tests that failed due to missing reachability data.

* Make blockBuilder use VirtualFinalityPoint instead of directly calling FinalityPoint with the virtual hash.

* Don't validate the finality point for blocks with trusted data.

* Add debug logs.

* Skip finality point validation for block whose finality points are the virtual genesis.

* Revert "Add debug logs."

This reverts commit 3c18f519cc.

* Move checkDAAScore and checkBlueWork to validateBlockHeaderInContext.

* Add checkCoinbaseBlueScore to validateBodyInContext.

* Fix failing tests.

* Add DAAScore, blueWork, and finalityPoint to blocks' hashes.

* Generate new genesis blocks.

* Fix failing tests.

* In BuildUTXOInvalidBlock, get the bits from StageDAADataAndReturnRequiredDifficulty instead of calling RequiredDifficulty separately.
2021-08-12 13:25:00 +03:00
stasatdaglabs
4132891ac9 In calculateDiffBetweenPreviousAndCurrentPruningPoints, collect diffChild hashes instead of UTXODiffs to give the GC a chance to clean up UTXODiffs. (#1815)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-08-08 12:46:21 +03:00
stasatdaglabs
2094f4facf Decrease the size of the small chains in many-small-chains-and-one-big-chain.json, since the merge set size limit was reduced to k*10. (#1816) 2021-08-08 11:22:27 +03:00
stasatdaglabs
2de68f43f0 Use blockStatusStore instead of blockStore in missingBlockBodyHashes. (#1814) 2021-08-05 09:45:09 +03:00
Ori Newman
d748089a14 Update the virtual after overriding the virtual UTXO set (#1811)
* Update the virtual after overriding the virtual utxo set

* Put the updateVirtual inside importVirtualUTXOSetAndPruningPointUTXOSet

* Add pruningPoint to importVirtualUTXOSetAndPruningPointUTXOSet

* Remove sanity check
2021-08-02 17:02:15 +03:00
stasatdaglabs
7d1071a9b1 Update testnet version to testnet-6 (#1808)
* Update testnet version to testnet-6.

* Fix failing test.
2021-07-29 10:12:23 +03:00
196 changed files with 9045 additions and 4921 deletions


@@ -1,7 +1,7 @@
name: Build and upload assets
on:
release:
types: [published]
types: [ published ]
jobs:
build:
@@ -36,7 +36,7 @@ jobs:
# `-tags netgo,osusergo` means use pure go replacements for "os/user" and "net"
# `-s -w` strips the binary to produce smaller size binaries
run: |
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ ./...
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-linux.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-linux.zip"
zip -r "${archive}" ./bin/*
@@ -47,7 +47,7 @@ jobs:
if: runner.os == 'Windows'
shell: bash
run: |
go build -v -ldflags="-s -w" -o bin/ ./...
go build -v -ldflags="-s -w" -o bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-win64.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-win64.zip"
powershell "Compress-Archive bin/* \"${archive}\""
@@ -57,7 +57,7 @@ jobs:
- name: Build on MacOS
if: runner.os == 'macOS'
run: |
go build -v -ldflags="-s -w" -o ./bin/ ./...
go build -v -ldflags="-s -w" -o ./bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-osx.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-osx.zip"
zip -r "${archive}" ./bin/*


@@ -14,7 +14,7 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [ ubuntu-latest, macos-latest, windows-latest ]
os: [ ubuntu-latest, macos-latest ]
name: Tests, ${{ matrix.os }}
steps:


@@ -2,6 +2,8 @@ package appmessage
import (
"encoding/hex"
"github.com/pkg/errors"
"math/big"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
@@ -29,13 +31,17 @@ func DomainBlockToMsgBlock(domainBlock *externalapi.DomainBlock) *MsgBlock {
func DomainBlockHeaderToBlockHeader(domainBlockHeader externalapi.BlockHeader) *MsgBlockHeader {
return &MsgBlockHeader{
Version: domainBlockHeader.Version(),
ParentHashes: domainBlockHeader.ParentHashes(),
Parents: domainBlockHeader.Parents(),
HashMerkleRoot: domainBlockHeader.HashMerkleRoot(),
AcceptedIDMerkleRoot: domainBlockHeader.AcceptedIDMerkleRoot(),
UTXOCommitment: domainBlockHeader.UTXOCommitment(),
Timestamp: mstime.UnixMilliseconds(domainBlockHeader.TimeInMilliseconds()),
Bits: domainBlockHeader.Bits(),
Nonce: domainBlockHeader.Nonce(),
BlueScore: domainBlockHeader.BlueScore(),
DAAScore: domainBlockHeader.DAAScore(),
BlueWork: domainBlockHeader.BlueWork(),
PruningPoint: domainBlockHeader.PruningPoint(),
}
}
@@ -56,13 +62,17 @@ func MsgBlockToDomainBlock(msgBlock *MsgBlock) *externalapi.DomainBlock {
func BlockHeaderToDomainBlockHeader(blockHeader *MsgBlockHeader) externalapi.BlockHeader {
return blockheader.NewImmutableBlockHeader(
blockHeader.Version,
blockHeader.ParentHashes,
blockHeader.Parents,
blockHeader.HashMerkleRoot,
blockHeader.AcceptedIDMerkleRoot,
blockHeader.UTXOCommitment,
blockHeader.Timestamp.UnixMilliseconds(),
blockHeader.Bits,
blockHeader.Nonce,
blockHeader.DAAScore,
blockHeader.BlueScore,
blockHeader.BlueWork,
blockHeader.PruningPoint,
)
}
@@ -334,15 +344,25 @@ func DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(
// DomainBlockToRPCBlock converts DomainBlocks to RPCBlocks
func DomainBlockToRPCBlock(block *externalapi.DomainBlock) *RPCBlock {
parents := make([]*RPCBlockLevelParents, len(block.Header.Parents()))
for i, blockLevelParents := range block.Header.Parents() {
parents[i] = &RPCBlockLevelParents{
ParentHashes: hashes.ToStrings(blockLevelParents),
}
}
header := &RPCBlockHeader{
Version: uint32(block.Header.Version()),
ParentHashes: hashes.ToStrings(block.Header.ParentHashes()),
Parents: parents,
HashMerkleRoot: block.Header.HashMerkleRoot().String(),
AcceptedIDMerkleRoot: block.Header.AcceptedIDMerkleRoot().String(),
UTXOCommitment: block.Header.UTXOCommitment().String(),
Timestamp: block.Header.TimeInMilliseconds(),
Bits: block.Header.Bits(),
Nonce: block.Header.Nonce(),
DAAScore: block.Header.DAAScore(),
BlueScore: block.Header.BlueScore(),
BlueWork: block.Header.BlueWork().Text(16),
PruningPoint: block.Header.PruningPoint().String(),
}
transactions := make([]*RPCTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
@@ -356,13 +376,16 @@ func DomainBlockToRPCBlock(block *externalapi.DomainBlock) *RPCBlock {
// RPCBlockToDomainBlock converts `block` into a DomainBlock
func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
parentHashes := make([]*externalapi.DomainHash, len(block.Header.ParentHashes))
for i, parentHash := range block.Header.ParentHashes {
domainParentHashes, err := externalapi.NewDomainHashFromString(parentHash)
if err != nil {
return nil, err
parents := make([]externalapi.BlockLevelParents, len(block.Header.Parents))
for i, blockLevelParents := range block.Header.Parents {
parents[i] = make(externalapi.BlockLevelParents, len(blockLevelParents.ParentHashes))
for j, parentHash := range blockLevelParents.ParentHashes {
var err error
parents[i][j], err = externalapi.NewDomainHashFromString(parentHash)
if err != nil {
return nil, err
}
}
parentHashes[i] = domainParentHashes
}
hashMerkleRoot, err := externalapi.NewDomainHashFromString(block.Header.HashMerkleRoot)
if err != nil {
@@ -376,15 +399,27 @@ func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
if err != nil {
return nil, err
}
blueWork, success := new(big.Int).SetString(block.Header.BlueWork, 16)
if !success {
return nil, errors.Errorf("failed to parse blue work: %s", block.Header.BlueWork)
}
pruningPoint, err := externalapi.NewDomainHashFromString(block.Header.PruningPoint)
if err != nil {
return nil, err
}
header := blockheader.NewImmutableBlockHeader(
uint16(block.Header.Version),
parentHashes,
parents,
hashMerkleRoot,
acceptedIDMerkleRoot,
utxoCommitment,
block.Header.Timestamp,
block.Header.Bits,
block.Header.Nonce)
block.Header.Nonce,
block.Header.DAAScore,
block.Header.BlueScore,
blueWork,
pruningPoint)
transactions := make([]*externalapi.DomainTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
domainTransaction, err := RPCTransactionToDomainTransaction(transaction)
@@ -404,7 +439,7 @@ func BlockWithTrustedDataToDomainBlockWithTrustedData(block *MsgBlockWithTrusted
daaWindow := make([]*externalapi.TrustedDataDataDAABlock, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &externalapi.TrustedDataDataDAABlock{
Header: BlockHeaderToDomainBlockHeader(daaBlock.Header),
Block: MsgBlockToDomainBlock(daaBlock.Block),
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
@@ -465,7 +500,7 @@ func DomainBlockWithTrustedDataToBlockWithTrustedData(block *externalapi.BlockWi
daaWindow := make([]*TrustedDataDataDAABlock, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &TrustedDataDataDAABlock{
Header: DomainBlockHeaderToBlockHeader(daaBlock.Header),
Block: DomainBlockToMsgBlock(daaBlock.Block),
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
@@ -485,3 +520,31 @@ func DomainBlockWithTrustedDataToBlockWithTrustedData(block *externalapi.BlockWi
GHOSTDAGData: ghostdagData,
}
}
// MsgPruningPointProofToDomainPruningPointProof converts *MsgPruningPointProof to *externalapi.PruningPointProof
func MsgPruningPointProofToDomainPruningPointProof(pruningPointProofMessage *MsgPruningPointProof) *externalapi.PruningPointProof {
headers := make([][]externalapi.BlockHeader, len(pruningPointProofMessage.Headers))
for blockLevel, blockLevelParents := range pruningPointProofMessage.Headers {
headers[blockLevel] = make([]externalapi.BlockHeader, len(blockLevelParents))
for i, header := range blockLevelParents {
headers[blockLevel][i] = BlockHeaderToDomainBlockHeader(header)
}
}
return &externalapi.PruningPointProof{
Headers: headers,
}
}
// DomainPruningPointProofToMsgPruningPointProof converts *externalapi.PruningPointProof to *MsgPruningPointProof
func DomainPruningPointProofToMsgPruningPointProof(pruningPointProof *externalapi.PruningPointProof) *MsgPruningPointProof {
headers := make([][]*MsgBlockHeader, len(pruningPointProof.Headers))
for blockLevel, blockLevelParents := range pruningPointProof.Headers {
headers[blockLevel] = make([]*MsgBlockHeader, len(blockLevelParents))
for i, header := range blockLevelParents {
headers[blockLevel][i] = DomainBlockHeaderToBlockHeader(header)
}
}
return &MsgPruningPointProof{
Headers: headers,
}
}


@@ -58,13 +58,14 @@ const (
CmdBlockHeaders
CmdRequestNextPruningPointUTXOSetChunk
CmdDonePruningPointUTXOSetChunks
CmdBlockBlueWork
CmdBlockWithTrustedData
CmdDoneBlocksWithTrustedData
CmdRequestPruningPointAndItsAnticone
CmdRequestBlockBlueWork
CmdIBDBlock
CmdRequestIBDBlocks
CmdPruningPoints
CmdRequestPruningPointProof
CmdPruningPointProof
// rpc
CmdGetCurrentNetworkRequestMessage
@@ -176,13 +177,14 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdBlockHeaders: "BlockHeaders",
CmdRequestNextPruningPointUTXOSetChunk: "RequestNextPruningPointUTXOSetChunk",
CmdDonePruningPointUTXOSetChunks: "DonePruningPointUTXOSetChunks",
CmdBlockBlueWork: "BlockBlueWork",
CmdBlockWithTrustedData: "BlockWithTrustedData",
CmdDoneBlocksWithTrustedData: "DoneBlocksWithTrustedData",
CmdRequestPruningPointAndItsAnticone: "RequestPruningPointAndItsAnticoneHeaders",
CmdRequestBlockBlueWork: "RequestBlockBlueWork",
CmdIBDBlock: "IBDBlock",
CmdRequestIBDBlocks: "RequestIBDBlocks",
CmdPruningPoints: "PruningPoints",
CmdRequestPruningPointProof: "RequestPruningPointProof",
CmdPruningPointProof: "PruningPointProof",
}
// RPCMessageCommandToString maps all MessageCommands to their string representation


@@ -21,13 +21,18 @@ func TestBlock(t *testing.T) {
pver := ProtocolVersion
// Block 1 header.
parentHashes := blockOne.Header.ParentHashes
parents := blockOne.Header.Parents
hashMerkleRoot := blockOne.Header.HashMerkleRoot
acceptedIDMerkleRoot := blockOne.Header.AcceptedIDMerkleRoot
utxoCommitment := blockOne.Header.UTXOCommitment
bits := blockOne.Header.Bits
nonce := blockOne.Header.Nonce
bh := NewBlockHeader(1, parentHashes, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce)
daaScore := blockOne.Header.DAAScore
blueScore := blockOne.Header.BlueScore
blueWork := blockOne.Header.BlueWork
pruningPoint := blockOne.Header.PruningPoint
bh := NewBlockHeader(1, parents, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce,
daaScore, blueScore, blueWork, pruningPoint)
// Ensure the command is expected value.
wantCmd := MessageCommand(5)
@@ -131,7 +136,7 @@ func TestConvertToPartial(t *testing.T) {
var blockOne = MsgBlock{
Header: MsgBlockHeader{
Version: 0,
ParentHashes: []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash},
Parents: []externalapi.BlockLevelParents{[]*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}},
HashMerkleRoot: mainnetGenesisMerkleRoot,
AcceptedIDMerkleRoot: exampleAcceptedIDMerkleRoot,
UTXOCommitment: exampleUTXOCommitment,


@@ -1,23 +0,0 @@
package appmessage
import (
"math/big"
)
// MsgBlockBlueWork represents a kaspa BlockBlueWork message
type MsgBlockBlueWork struct {
baseMessage
BlueWork *big.Int
}
// Command returns the protocol command string for the message
func (msg *MsgBlockBlueWork) Command() MessageCommand {
return CmdBlockBlueWork
}
// NewBlockBlueWork returns a new kaspa BlockBlueWork message
func NewBlockBlueWork(blueWork *big.Int) *MsgBlockBlueWork {
return &MsgBlockBlueWork{
BlueWork: blueWork,
}
}


@@ -5,13 +5,12 @@
package appmessage
import (
"math"
"math/big"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/pkg/errors"
)
// BaseBlockHeaderPayload is the base number of bytes a block header can be,
@@ -39,8 +38,8 @@ type MsgBlockHeader struct {
// Version of the block. This is not the same as the protocol version.
Version uint16
// Hashes of the parent block headers in the blockDAG.
ParentHashes []*externalapi.DomainHash
// Parents are the parent block hashes of the block in the DAG per superblock level.
Parents []externalapi.BlockLevelParents
// HashMerkleRoot is the merkle tree reference to hash of all transactions for the block.
HashMerkleRoot *externalapi.DomainHash
@@ -60,15 +59,16 @@ type MsgBlockHeader struct {
// Nonce used to generate the block.
Nonce uint64
}
// NumParentBlocks return the number of entries in ParentHashes
func (h *MsgBlockHeader) NumParentBlocks() byte {
numParents := len(h.ParentHashes)
if numParents > math.MaxUint8 {
panic(errors.Errorf("number of parents is %d, which is more than one byte can fit", numParents))
}
return byte(numParents)
// DAAScore is the DAA score of the block.
DAAScore uint64
BlueScore uint64
// BlueWork is the blue work of the block.
BlueWork *big.Int
PruningPoint *externalapi.DomainHash
}
// BlockHash computes the block identifier hash for the given block header.
@@ -76,27 +76,27 @@ func (h *MsgBlockHeader) BlockHash() *externalapi.DomainHash {
return consensushashing.HeaderHash(BlockHeaderToDomainBlockHeader(h))
}
// IsGenesis returns true iff this block is a genesis block
func (h *MsgBlockHeader) IsGenesis() bool {
return h.NumParentBlocks() == 0
}
// NewBlockHeader returns a new MsgBlockHeader using the provided version, previous
// block hash, hash merkle root, accepted ID merkle root, difficulty bits, and nonce used to generate the
// block with defaults or calculated values for the remaining fields.
func NewBlockHeader(version uint16, parentHashes []*externalapi.DomainHash, hashMerkleRoot *externalapi.DomainHash,
acceptedIDMerkleRoot *externalapi.DomainHash, utxoCommitment *externalapi.DomainHash, bits uint32, nonce uint64) *MsgBlockHeader {
func NewBlockHeader(version uint16, parents []externalapi.BlockLevelParents, hashMerkleRoot *externalapi.DomainHash,
acceptedIDMerkleRoot *externalapi.DomainHash, utxoCommitment *externalapi.DomainHash, bits uint32, nonce,
daaScore, blueScore uint64, blueWork *big.Int, pruningPoint *externalapi.DomainHash) *MsgBlockHeader {
// Limit the timestamp to one millisecond precision since the protocol
// doesn't support better.
return &MsgBlockHeader{
Version: version,
ParentHashes: parentHashes,
Parents: parents,
HashMerkleRoot: hashMerkleRoot,
AcceptedIDMerkleRoot: acceptedIDMerkleRoot,
UTXOCommitment: utxoCommitment,
Timestamp: mstime.Now(),
Bits: bits,
Nonce: nonce,
DAAScore: daaScore,
BlueScore: blueScore,
BlueWork: blueWork,
PruningPoint: pruningPoint,
}
}


@@ -5,29 +5,34 @@
package appmessage
import (
"math/big"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
)
// TestBlockHeader tests the MsgBlockHeader API.
func TestBlockHeader(t *testing.T) {
nonce := uint64(0xba4d87a69924a93d)
hashes := []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}
parents := []externalapi.BlockLevelParents{[]*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}}
merkleHash := mainnetGenesisMerkleRoot
acceptedIDMerkleRoot := exampleAcceptedIDMerkleRoot
bits := uint32(0x1d00ffff)
bh := NewBlockHeader(1, hashes, merkleHash, acceptedIDMerkleRoot, exampleUTXOCommitment, bits, nonce)
daaScore := uint64(123)
blueScore := uint64(456)
blueWork := big.NewInt(789)
pruningPoint := simnetGenesisHash
bh := NewBlockHeader(1, parents, merkleHash, acceptedIDMerkleRoot, exampleUTXOCommitment, bits, nonce,
daaScore, blueScore, blueWork, pruningPoint)
// Ensure we get the same data back out.
if !reflect.DeepEqual(bh.ParentHashes, hashes) {
t.Errorf("NewBlockHeader: wrong prev hashes - got %v, want %v",
spew.Sprint(bh.ParentHashes), spew.Sprint(hashes))
if !reflect.DeepEqual(bh.Parents, parents) {
t.Errorf("NewBlockHeader: wrong parents - got %v, want %v",
spew.Sprint(bh.Parents), spew.Sprint(parents))
}
if bh.HashMerkleRoot != merkleHash {
t.Errorf("NewBlockHeader: wrong merkle root - got %v, want %v",
@@ -41,44 +46,20 @@ func TestBlockHeader(t *testing.T) {
t.Errorf("NewBlockHeader: wrong nonce - got %v, want %v",
bh.Nonce, nonce)
}
}
func TestIsGenesis(t *testing.T) {
nonce := uint64(123123) // 0x1e0f3
bits := uint32(0x1d00ffff)
timestamp := mstime.UnixMilliseconds(0x495fab29000)
baseBlockHdr := &MsgBlockHeader{
Version: 1,
ParentHashes: []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash},
HashMerkleRoot: mainnetGenesisMerkleRoot,
Timestamp: timestamp,
Bits: bits,
Nonce: nonce,
if bh.DAAScore != daaScore {
t.Errorf("NewBlockHeader: wrong daaScore - got %v, want %v",
bh.DAAScore, daaScore)
}
genesisBlockHdr := &MsgBlockHeader{
Version: 1,
ParentHashes: []*externalapi.DomainHash{},
HashMerkleRoot: mainnetGenesisMerkleRoot,
Timestamp: timestamp,
Bits: bits,
Nonce: nonce,
if bh.BlueScore != blueScore {
t.Errorf("NewBlockHeader: wrong blueScore - got %v, want %v",
bh.BlueScore, blueScore)
}
tests := []struct {
in *MsgBlockHeader // Block header to encode
isGenesis bool // Expected result for call of .IsGenesis
}{
{genesisBlockHdr, true},
{baseBlockHdr, false},
if bh.BlueWork != blueWork {
t.Errorf("NewBlockHeader: wrong blueWork - got %v, want %v",
bh.BlueWork, blueWork)
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
isGenesis := test.in.IsGenesis()
if isGenesis != test.isGenesis {
t.Errorf("MsgBlockHeader.IsGenesis: #%d got: %t, want: %t",
i, isGenesis, test.isGenesis)
}
if !bh.PruningPoint.Equal(pruningPoint) {
t.Errorf("NewBlockHeader: wrong pruningPoint - got %v, want %v",
bh.PruningPoint, pruningPoint)
}
}


@@ -27,7 +27,7 @@ func NewMsgBlockWithTrustedData() *MsgBlockWithTrustedData {
// TrustedDataDataDAABlock is an appmessage representation of externalapi.TrustedDataDataDAABlock
type TrustedDataDataDAABlock struct {
Header *MsgBlockHeader
Block *MsgBlock
GHOSTDAGData *BlockGHOSTDAGData
}


@@ -0,0 +1,20 @@
package appmessage
// MsgPruningPointProof represents a kaspa PruningPointProof message
type MsgPruningPointProof struct {
baseMessage
Headers [][]*MsgBlockHeader
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPointProof) Command() MessageCommand {
return CmdPruningPointProof
}
// NewMsgPruningPointProof returns a new MsgPruningPointProof.
func NewMsgPruningPointProof(headers [][]*MsgBlockHeader) *MsgPruningPointProof {
return &MsgPruningPointProof{
Headers: headers,
}
}


@@ -0,0 +1,20 @@
package appmessage
// MsgPruningPoints represents a kaspa PruningPoints message
type MsgPruningPoints struct {
baseMessage
Headers []*MsgBlockHeader
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPoints) Command() MessageCommand {
return CmdPruningPoints
}
// NewMsgPruningPoints returns a new MsgPruningPoints.
func NewMsgPruningPoints(headers []*MsgBlockHeader) *MsgPruningPoints {
return &MsgPruningPoints{
Headers: headers,
}
}


@@ -1,23 +0,0 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestBlockBlueWork represents a kaspa RequestBlockBlueWork message
type MsgRequestBlockBlueWork struct {
baseMessage
Hash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgRequestBlockBlueWork) Command() MessageCommand {
return CmdRequestBlockBlueWork
}
// NewRequestBlockBlueWork returns a new kaspa RequestBlockBlueWork message
func NewRequestBlockBlueWork(hash *externalapi.DomainHash) *MsgRequestBlockBlueWork {
return &MsgRequestBlockBlueWork{
Hash: hash,
}
}


@@ -0,0 +1,16 @@
package appmessage
// MsgRequestPruningPointProof represents a kaspa RequestPruningPointProof message
type MsgRequestPruningPointProof struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointProof) Command() MessageCommand {
return CmdRequestPruningPointProof
}
// NewMsgRequestPruningPointProof returns a new MsgRequestPruningPointProof.
func NewMsgRequestPruningPointProof() *MsgRequestPruningPointProof {
return &MsgRequestPruningPointProof{}
}


@@ -53,7 +53,7 @@ func (msg *SubmitBlockResponseMessage) Command() MessageCommand {
return CmdSubmitBlockResponseMessage
}
// NewSubmitBlockResponseMessage returns a instance of the message
// NewSubmitBlockResponseMessage returns an instance of the message
func NewSubmitBlockResponseMessage() *SubmitBlockResponseMessage {
return &SubmitBlockResponseMessage{}
}
@@ -70,13 +70,22 @@ type RPCBlock struct {
// used over RPC
type RPCBlockHeader struct {
Version uint32
ParentHashes []string
Parents []*RPCBlockLevelParents
HashMerkleRoot string
AcceptedIDMerkleRoot string
UTXOCommitment string
Timestamp int64
Bits uint32
Nonce uint64
DAAScore uint64
BlueScore uint64
BlueWork string
PruningPoint string
}
// RPCBlockLevelParents holds parent hashes for one block level
type RPCBlockLevelParents struct {
ParentHashes []string
}
// RPCBlockVerboseData holds verbose data about a block


@@ -8,7 +8,7 @@ import (
// DefaultTimeout is the default duration to wait for enqueuing/dequeuing
// to/from routes.
const DefaultTimeout = 30 * time.Second
const DefaultTimeout = 120 * time.Second
// ErrPeerWithSameIDExists signifies that a peer with the same ID already exists.
var ErrPeerWithSameIDExists = errors.New("ready peer with the same ID already exists")


@@ -73,10 +73,10 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
orphanBlock := f.orphans[orphanHash]
log.Debugf("Considering to unorphan block %s with parents %s",
orphanHash, orphanBlock.Header.ParentHashes())
orphanHash, orphanBlock.Header.DirectParents())
canBeUnorphaned := true
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
for _, orphanBlockParentHash := range orphanBlock.Header.DirectParents() {
orphanBlockParentInfo, err := f.domain.Consensus().GetBlockInfo(orphanBlockParentHash)
if err != nil {
return nil, err
@@ -133,7 +133,7 @@ func (f *FlowContext) addChildOrphansToProcessQueue(blockHash *externalapi.Domai
func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash) []externalapi.DomainHash {
var childOrphans []externalapi.DomainHash
for orphanHash, orphanBlock := range f.orphans {
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
for _, orphanBlockParentHash := range orphanBlock.Header.DirectParents() {
if orphanBlockParentHash.Equal(blockHash) {
childOrphans = append(childOrphans, orphanHash)
break
@@ -201,7 +201,7 @@ func (f *FlowContext) GetOrphanRoots(orphan *externalapi.DomainHash) ([]*externa
continue
}
for _, parent := range block.Header.ParentHashes() {
for _, parent := range block.Header.DirectParents() {
if !addedToQueueSet.Contains(parent) {
queue = append(queue, parent)
addedToQueueSet.Add(parent)


@@ -1,42 +0,0 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// BlockBlueWorkRequestsContext is the interface for the context needed for the HandleBlockBlueWorkRequests flow.
type BlockBlueWorkRequestsContext interface {
Domain() domain.Domain
}
// HandleBlockBlueWorkRequests listens to appmessage.MsgRequestBlockBlueWork messages and sends
// their corresponding blue work to the requesting peer.
func HandleBlockBlueWorkRequests(context BlockBlueWorkRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
message, err := incomingRoute.Dequeue()
if err != nil {
return err
}
msgRequestBlockBlueWork := message.(*appmessage.MsgRequestBlockBlueWork)
log.Debugf("Got request for block %s blue work from %s", msgRequestBlockBlueWork.Hash, peer)
blockInfo, err := context.Domain().Consensus().GetBlockInfo(msgRequestBlockBlueWork.Hash)
if err != nil {
return err
}
if !blockInfo.Exists {
return protocolerrors.Errorf(true, "block %s not found", msgRequestBlockBlueWork.Hash)
}
err = outgoingRoute.Enqueue(appmessage.NewBlockBlueWork(blockInfo.BlueWork))
if err != nil {
return err
}
log.Debugf("Sent blue work for block %s to %s", msgRequestBlockBlueWork.Hash, peer)
}
}


@@ -24,6 +24,22 @@ func HandlePruningPointAndItsAnticoneRequests(context PruningPointAndItsAnticone
}
log.Debugf("Got request for pruning point and its anticone from %s", peer)
pruningPointHeaders, err := context.Domain().Consensus().PruningPointHeaders()
if err != nil {
return err
}
msgPruningPointHeaders := make([]*appmessage.MsgBlockHeader, len(pruningPointHeaders))
for i, header := range pruningPointHeaders {
msgPruningPointHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(header)
}
err = outgoingRoute.Enqueue(appmessage.NewMsgPruningPoints(msgPruningPointHeaders))
if err != nil {
return err
}
blocks, err := context.Domain().Consensus().PruningPointAndItsAnticoneWithTrustedData()
if err != nil {
return err


@@ -0,0 +1,40 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// PruningPointProofRequestsContext is the interface for the context needed for the HandlePruningPointProofRequests flow.
type PruningPointProofRequestsContext interface {
Domain() domain.Domain
}
// HandlePruningPointProofRequests listens to appmessage.MsgRequestPruningPointProof messages and sends
// the pruning point proof to the requesting peer.
func HandlePruningPointProofRequests(context PruningPointProofRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for pruning point proof from %s", peer)
pruningPointProof, err := context.Domain().Consensus().BuildPruningPointProof()
if err != nil {
return err
}
pruningPointProofMessage := appmessage.DomainPruningPointProofToMsgPruningPointProof(pruningPointProof)
err = outgoingRoute.Enqueue(pruningPointProofMessage)
if err != nil {
return err
}
log.Debugf("Sent pruning point proof to %s", peer)
}
}


@@ -275,7 +275,7 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock) e
// Start IBD unless we already are in IBD
log.Debugf("Block %s is out of orphan resolution range. "+
"Attempting to start IBD against it.", blockHash)
return flow.runIBDIfNotRunning(blockHash)
return flow.runIBDIfNotRunning(block)
}
// isBlockInOrphanResolutionRange finds out whether the given blockHash should be


@@ -102,14 +102,17 @@ func (flow *handleRequestPruningPointUTXOSetFlow) sendPruningPointUTXOSet(
return err
}
if len(pruningPointUTXOs) < step {
finished := len(pruningPointUTXOs) < step
if finished && chunksSent%ibdBatchSize != 0 {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSet.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
fromOutpoint = pruningPointUTXOs[len(pruningPointUTXOs)-1].Outpoint
if len(pruningPointUTXOs) > 0 {
fromOutpoint = pruningPointUTXOs[len(pruningPointUTXOs)-1].Outpoint
}
chunksSent++
// Wait for the peer to request more chunks every `ibdBatchSize` chunks
@@ -123,6 +126,13 @@ func (flow *handleRequestPruningPointUTXOSetFlow) sendPruningPointUTXOSet(
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointUTXOSetChunk, message.Command())
}
if finished {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSet.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
}
}
}
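
The hunk above reorders the pagination so the done message is sent only at a batch boundary, after the peer's next-chunk request, and guards against indexing an empty slice. A standalone sketch of that corrected control flow, with illustrative types in place of the UTXO messages:

package main

import "fmt"

// sendChunks pages items in fixed steps, waiting for acknowledgment
// every batchSize chunks and signalling "done" only at a batch
// boundary, mirroring the fixed sendPruningPointUTXOSet control flow.
func sendChunks(items []int, step, batchSize int) (chunks int) {
	for offset := 0; ; {
		end := offset + step
		if end > len(items) {
			end = len(items)
		}
		chunk := items[offset:end]
		fmt.Println("send chunk:", chunk)
		finished := len(chunk) < step
		if finished && chunks%batchSize != 0 {
			fmt.Println("send done")
			return chunks
		}
		if len(chunk) > 0 {
			offset = end // only advance when the chunk was non-empty
		}
		chunks++
		if chunks%batchSize == 0 {
			fmt.Println("wait for next-chunk request")
			if finished {
				fmt.Println("send done")
				return chunks
			}
		}
	}
}

func main() {
	sendChunks([]int{1, 2, 3, 4, 5}, 2, 2)
}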


@@ -16,7 +16,7 @@ import (
"github.com/pkg/errors"
)
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.DomainHash) error {
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) error {
wasIBDNotRunning := flow.TrySetIBDRunning(flow.peer)
if !wasIBDNotRunning {
log.Debugf("IBD is already running")
@@ -29,6 +29,7 @@ func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.Domain
flow.logIBDFinished(isFinishedSuccessfully)
}()
highHash := consensushashing.BlockHash(block)
log.Debugf("IBD started with peer %s and highHash %s", flow.peer, highHash)
log.Debugf("Syncing blocks up to %s", highHash)
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
@@ -38,7 +39,7 @@ func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.Domain
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(highHash, highestSharedBlockFound)
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(block, highestSharedBlockFound)
if err != nil {
return err
}
@@ -247,8 +248,8 @@ func (flow *handleRelayInvsFlow) syncPruningPointFutureHeaders(consensus externa
}
return nil
}
for _, block := range ibdBlocksMessage.BlockHeaders {
err = flow.processHeader(consensus, block)
for _, header := range ibdBlocksMessage.BlockHeaders {
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
@@ -319,6 +320,40 @@ func (flow *handleRelayInvsFlow) processHeader(consensus externalapi.Consensus,
return nil
}
func (flow *handleRelayInvsFlow) validatePruningPointFutureHeaderTimestamps() error {
headerSelectedTipHash, err := flow.Domain().StagingConsensus().GetHeadersSelectedTip()
if err != nil {
return err
}
headerSelectedTipHeader, err := flow.Domain().StagingConsensus().GetBlockHeader(headerSelectedTipHash)
if err != nil {
return err
}
headerSelectedTipTimestamp := headerSelectedTipHeader.TimeInMilliseconds()
currentSelectedTipHash, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return err
}
currentSelectedTipHeader, err := flow.Domain().Consensus().GetBlockHeader(currentSelectedTipHash)
if err != nil {
return err
}
currentSelectedTipTimestamp := currentSelectedTipHeader.TimeInMilliseconds()
if headerSelectedTipTimestamp < currentSelectedTipTimestamp {
return protocolerrors.Errorf(false, "the timestamp of the candidate selected "+
"tip is smaller than the current selected tip")
}
minTimestampDifferenceInMilliseconds := (10 * time.Minute).Milliseconds()
if headerSelectedTipTimestamp-currentSelectedTipTimestamp < minTimestampDifferenceInMilliseconds {
return protocolerrors.Errorf(false, "difference between the timestamps of "+
"the current pruning point and the candidate pruning point is too small. Aborting IBD...")
}
return nil
}
func (flow *handleRelayInvsFlow) receiveAndInsertPruningPointUTXOSet(
consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (bool, error) {


@@ -7,6 +7,7 @@ import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
)
@@ -16,7 +17,7 @@ func (flow *handleRelayInvsFlow) ibdWithHeadersProof(highHash *externalapi.Domai
return err
}
err = flow.downloadHeadersAndPruningUTXOSet(flow.Domain().StagingConsensus(), highHash)
err = flow.downloadHeadersAndPruningUTXOSet(highHash)
if err != nil {
if !flow.IsRecoverableError(err) {
return err
@@ -38,16 +39,16 @@ func (flow *handleRelayInvsFlow) ibdWithHeadersProof(highHash *externalapi.Domai
return nil
}
func (flow *handleRelayInvsFlow) shouldSyncAndShouldDownloadHeadersProof(highHash *externalapi.DomainHash,
func (flow *handleRelayInvsFlow) shouldSyncAndShouldDownloadHeadersProof(highBlock *externalapi.DomainBlock,
highestSharedBlockFound bool) (shouldDownload, shouldSync bool, err error) {
if !highestSharedBlockFound {
hasMoreBlueWorkThanSelectedTip, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTip(highHash)
hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock)
if err != nil {
return false, false, err
}
if hasMoreBlueWorkThanSelectedTip {
if hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore {
return true, true, nil
}
@@ -57,24 +58,7 @@ func (flow *handleRelayInvsFlow) shouldSyncAndShouldDownloadHeadersProof(highHas
return false, true, nil
}
func (flow *handleRelayInvsFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTip(highHash *externalapi.DomainHash) (bool, error) {
err := flow.outgoingRoute.Enqueue(appmessage.NewRequestBlockBlueWork(highHash))
if err != nil {
return false, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return false, err
}
msgBlockBlueWork, ok := message.(*appmessage.MsgBlockBlueWork)
if !ok {
return false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdBlockBlueWork, message.Command())
}
func (flow *handleRelayInvsFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock *externalapi.DomainBlock) (bool, error) {
headersSelectedTip, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return false, err
@@ -85,40 +69,85 @@ func (flow *handleRelayInvsFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTip(h
return false, err
}
return msgBlockBlueWork.BlueWork.Cmp(headersSelectedTipInfo.BlueWork) > 0, nil
if highBlock.Header.BlueScore() < headersSelectedTipInfo.BlueScore+flow.Config().NetParams().PruningDepth() {
return false, nil
}
return highBlock.Header.BlueWork().Cmp(headersSelectedTipInfo.BlueWork) > 0, nil
}
func (flow *handleRelayInvsFlow) downloadHeadersProof() error {
// TODO: Implement headers proof mechanism
return nil
func (flow *handleRelayInvsFlow) syncAndValidatePruningPointProof() (*externalapi.DomainHash, error) {
log.Infof("Downloading the pruning point proof from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointProof())
if err != nil {
return nil, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
pruningPointProofMessage, ok := message.(*appmessage.MsgPruningPointProof)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPointProof, message.Command())
}
pruningPointProof := appmessage.MsgPruningPointProofToDomainPruningPointProof(pruningPointProofMessage)
err = flow.Domain().Consensus().ValidatePruningPointProof(pruningPointProof)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
return nil, protocolerrors.Wrapf(true, err, "pruning point proof validation failed")
}
return nil, err
}
err = flow.Domain().StagingConsensus().ApplyPruningPointProof(pruningPointProof)
if err != nil {
return nil, err
}
return consensushashing.HeaderHash(pruningPointProof.Headers[0][len(pruningPointProof.Headers[0])-1]), nil
}
func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(consensus externalapi.Consensus, highHash *externalapi.DomainHash) error {
err := flow.downloadHeadersProof()
func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(highHash *externalapi.DomainHash) error {
proofPruningPoint, err := flow.syncAndValidatePruningPointProof()
if err != nil {
return err
}
pruningPoint, err := flow.syncPruningPointAndItsAnticone(consensus)
err = flow.syncPruningPointsAndPruningPointAnticone(proofPruningPoint)
if err != nil {
return err
}
// TODO: Remove this condition once there's more proper way to check finality violation
// in the headers proof.
if pruningPoint.Equal(flow.Config().NetParams().GenesisHash) {
if proofPruningPoint.Equal(flow.Config().NetParams().GenesisHash) {
return protocolerrors.Errorf(true, "the genesis pruning point violates finality")
}
err = flow.syncPruningPointFutureHeaders(consensus, pruningPoint, highHash)
err = flow.syncPruningPointFutureHeaders(flow.Domain().StagingConsensus(), proofPruningPoint, highHash)
if err != nil {
return err
}
log.Debugf("Blocks downloaded from peer %s", flow.peer)
log.Debugf("Headers downloaded from peer %s", flow.peer)
highHashInfo, err := flow.Domain().StagingConsensus().GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashInfo.Exists {
return protocolerrors.Errorf(true, "the triggering IBD block was not sent")
}
err = flow.validatePruningPointFutureHeaderTimestamps()
if err != nil {
return err
}
log.Debugf("Syncing the current pruning point UTXO set")
syncedPruningPointUTXOSetSuccessfully, err := flow.syncPruningPointUTXOSet(consensus, pruningPoint)
syncedPruningPointUTXOSetSuccessfully, err := flow.syncPruningPointUTXOSet(flow.Domain().StagingConsensus(), proofPruningPoint)
if err != nil {
return err
}
@@ -130,45 +159,54 @@ func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(consensus exte
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointsAndPruningPointAnticone(proofPruningPoint *externalapi.DomainHash) error {
log.Infof("Downloading the past pruning points and the pruning point anticone from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointAndItsAnticone())
if err != nil {
return err
}
err = flow.validateAndInsertPruningPoints(proofPruningPoint)
if err != nil {
return err
}
pruningPointWithMetaData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
}
if done {
return protocolerrors.Errorf(true, "got `done` message before receiving the pruning point")
}
if !pruningPointWithMetaData.Block.Header.BlockHash().Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "first block with trusted data is not the pruning point")
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), pruningPointWithMetaData)
if err != nil {
return err
}
for {
blockWithTrustedData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
}
if done {
break
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), blockWithTrustedData)
if err != nil {
return err
}
}
log.Infof("Finished downloading pruning point and its anticone from %s", flow.peer)
return nil
}
func (flow *handleRelayInvsFlow) processBlockWithTrustedData(
@@ -199,7 +237,69 @@ func (flow *handleRelayInvsFlow) receiveBlockWithTrustedData() (*appmessage.MsgB
}
}
func (flow *handleRelayInvsFlow) receivePruningPoints() (*appmessage.MsgPruningPoints, error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
msgPruningPoints, ok := message.(*appmessage.MsgPruningPoints)
if !ok {
return nil,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPoints, message.Command())
}
return msgPruningPoints, nil
}
func (flow *handleRelayInvsFlow) validateAndInsertPruningPoints(proofPruningPoint *externalapi.DomainHash) error {
currentPruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return err
}
if currentPruningPoint.Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "the proposed pruning point is the same as the current pruning point")
}
pruningPoints, err := flow.receivePruningPoints()
if err != nil {
return err
}
headers := make([]externalapi.BlockHeader, len(pruningPoints.Headers))
for i, header := range pruningPoints.Headers {
headers[i] = appmessage.BlockHeaderToDomainBlockHeader(header)
}
arePruningPointsViolatingFinality, err := flow.Domain().Consensus().ArePruningPointsViolatingFinality(headers)
if err != nil {
return err
}
if arePruningPointsViolatingFinality {
// TODO: Find a better way to deal with finality conflicts.
return protocolerrors.Errorf(false, "pruning points are violating finality")
}
lastPruningPoint := consensushashing.HeaderHash(headers[len(headers)-1])
if !lastPruningPoint.Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "the proof pruning point is not equal to the last pruning "+
"point in the list")
}
err = flow.Domain().StagingConsensus().ImportPruningPoints(headers)
if err != nil {
return err
}
return nil
}
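
The insertion path above therefore enforces three invariants before the staging consensus is touched: the proposal must differ from the current pruning point, the received header chain must not violate finality, and the proof's pruning point must hash-match the last received header. That last check can be isolated like this (Header, Hash, and headerHash are hypothetical stand-ins for the kaspad types and consensushashing.HeaderHash):

package sketch

// Header and Hash are placeholder types for this sketch.
type Header struct{}
type Hash [32]byte

// lastMatchesProof mirrors the final check above: the proof pruning point
// must equal the hash of the last pruning-point header in the list.
func lastMatchesProof(headers []Header, proofPruningPoint Hash, headerHash func(Header) Hash) bool {
	if len(headers) == 0 {
		return false
	}
	return headerHash(headers[len(headers)-1]) == proofPruningPoint
}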
func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet(consensus externalapi.Consensus,
pruningPoint *externalapi.DomainHash) (bool, error) {
log.Infof("Checking if the suggested pruning point %s is compatible to the node DAG", pruningPoint)
isValid, err := flow.Domain().StagingConsensus().IsValidPruningPoint(pruningPoint)
if err != nil {

View File

@@ -168,11 +168,12 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
}),
m.registerFlow("HandleRelayInvs", router, []appmessage.MessageCommand{
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator,
appmessage.CmdDoneHeaders, appmessage.CmdUnexpectedPruningPoint, appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdBlockHeaders, appmessage.CmdIBDBlockLocatorHighestHash, appmessage.CmdBlockWithTrustedData,
appmessage.CmdDoneBlocksWithTrustedData, appmessage.CmdIBDBlockLocatorHighestHashNotFound,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdIBDBlock, appmessage.CmdPruningPoints,
appmessage.CmdPruningPointProof,
},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayInvs(m.context, incomingRoute,
@@ -215,13 +216,6 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
},
),
m.registerFlow("HandleBlockBlueWorkRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestBlockBlueWork}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleBlockBlueWorkRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandlePruningPointAndItsAnticoneRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointAndItsAnticone}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
@@ -235,6 +229,13 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
return blockrelay.HandleIBDBlockLocator(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandlePruningPointProofRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointProof}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointProofRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
}
}

View File

@@ -21,7 +21,7 @@ go build $FLAGS -o kaspad .
if [ -n "${NO_PARALLEL}" ]
then
go test -timeout 20m -parallel=1 $FLAGS ./...
else
go test -timeout 20m $FLAGS ./...
fi

View File

@@ -1,3 +1,15 @@
Kaspad v0.11.2 - 2021-11-11
===========================
Bug fixes:
* Enlarge p2p max message size to 1gb
* Fix UTXO chunks logic
* Increase tests timeout to 20 minutes
Kaspad v0.11.1 - 2021-11-09
===========================
Non-breaking changes:
* Cache the miner state
Kaspad v0.10.2 - 2021-05-18
===========================
Non-breaking changes:

View File

@@ -13,7 +13,6 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/utils/pow"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
)
@@ -143,20 +142,20 @@ func mineNextBlock(mineWhenNotSynced bool) *externalapi.DomainBlock {
// In the rare case where the nonce space is exhausted for a specific
// block, it'll keep looping the nonce until a new block template
// is discovered.
block, state := getBlockForMining(mineWhenNotSynced)
state.Nonce = nonce
atomic.AddUint64(&hashesTried, 1)
if state.CheckProofOfWork() {
mutHeader := block.Header.ToMutable()
mutHeader.SetNonce(nonce)
block.Header = mutHeader.ToImmutable()
log.Infof("Found block %s with parents %s", consensushashing.BlockHash(block), block.Header.DirectParents())
return block
}
}
}
func getBlockForMining(mineWhenNotSynced bool) (*externalapi.DomainBlock, *pow.State) {
tryCount := 0
const sleepTime = 500 * time.Millisecond
@@ -166,7 +165,7 @@ func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
tryCount++
shouldLog := (tryCount-1)%10 == 0
template, state, isSynced := templatemanager.Get()
if template == nil {
if shouldLog {
log.Info("Waiting for the initial template")
@@ -182,7 +181,7 @@ func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
continue
}
return template, state
}
}
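
The change above moves target and matrix preparation out of the hot loop: getBlockForMining now returns a cached pow.State alongside the template, and each attempt only swaps the nonce and re-checks. A simplified sketch of that loop structure (minerState and checkPoW are stand-ins for pow.State and its CheckProofOfWork method, not the real kaspad API):

package sketch

// minerState stands in for pow.State: everything derived from the header
// (target, heavyhash matrix) is cached; only Nonce varies per attempt.
type minerState struct {
	Nonce uint64
}

// grind mutates only the nonce between attempts, mirroring the loop above.
func grind(state *minerState, checkPoW func(*minerState) bool) uint64 {
	for nonce := uint64(1); ; nonce++ {
		state.Nonce = nonce
		if checkPoW(state) {
			return nonce // found a nonce satisfying the target
		}
	}
}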

View File

@@ -3,23 +3,26 @@ package templatemanager
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/pow"
"sync"
)
var currentTemplate *externalapi.DomainBlock
var currentState *pow.State
var isSynced bool
var lock = &sync.Mutex{}
// Get returns the template to work on
func Get() (*externalapi.DomainBlock, *pow.State, bool) {
lock.Lock()
defer lock.Unlock()
// Shallow copy the block so when the user replaces the header it won't affect the template here.
if currentTemplate == nil {
return nil, nil, false
}
block := *currentTemplate
state := *currentState
return &block, &state, isSynced
}
// Set sets the current template to work on
@@ -31,6 +34,7 @@ func Set(template *appmessage.GetBlockTemplateResponseMessage) error {
lock.Lock()
defer lock.Unlock()
currentTemplate = block
currentState = pow.NewState(block.Header.ToMutable())
isSynced = template.IsSynced
return nil
}
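
Because Get hands back shallow copies of both the template and the pow state under the lock, the mining loop can mutate its copy's nonce freely while the RPC updater races ahead with Set. A sketch of that copy-on-read handoff (the types here are simplified stand-ins, not the kaspad ones):

package sketch

import "sync"

type block struct{}
type state struct{ Nonce uint64 }

var (
	lock            sync.Mutex
	currentTemplate *block
	currentState    *state
)

// get mirrors templatemanager.Get: copies are made under the lock, so a
// caller can mutate its state copy without racing a concurrent set.
func get() (*block, *state) {
	lock.Lock()
	defer lock.Unlock()
	if currentTemplate == nil {
		return nil, nil
	}
	blockCopy := *currentTemplate
	stateCopy := *currentState
	return &blockCopy, &stateCopy
}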

View File

@@ -5,7 +5,7 @@ import (
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/client"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
)
func balance(conf *balanceConfig) error {
@@ -22,9 +22,9 @@ func balance(conf *balanceConfig) error {
return err
}
fmt.Printf("Balance:\t\tKAS %f\n", float64(response.Available)/constants.SompiPerKaspa)
if response.Pending > 0 {
fmt.Printf("Pending balance:\tKAS %f\n", float64(response.Pending)/constants.SompiPerKaspa)
}
return nil

View File

@@ -6,7 +6,7 @@ import (
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/client"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
)
func createUnsignedTransaction(conf *createUnsignedTransactionConfig) error {
@@ -19,7 +19,7 @@ func createUnsignedTransaction(conf *createUnsignedTransactionConfig) error {
ctx, cancel := context.WithTimeout(context.Background(), daemonTimeout)
defer cancel()
sendAmountSompi := uint64(conf.SendAmount * constants.SompiPerKaspa)
response, err := daemonClient.CreateUnsignedTransaction(ctx, &pb.CreateUnsignedTransactionRequest{
Address: conf.ToAddress,
Amount: sendAmountSompi,

View File

@@ -19,7 +19,7 @@ func Connect(address string) (pb.KaspawalletdClient, func(), error) {
conn, err := grpc.DialContext(ctx, address, grpc.WithInsecure(), grpc.WithBlock())
if err != nil {
if errors.Is(err, context.DeadlineExceeded) {
return nil, nil, errors.New("kaspawallet daemon is not running, start it with `kaspawallet start-daemon`")
}
return nil, nil, err
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
)
@@ -27,7 +28,7 @@ func (s *server) CreateUnsignedTransaction(_ context.Context, request *pb.Create
}
// TODO: Implement a better fee estimation mechanism
const feePerInput = 10000
selectedUTXOs, changeSompi, err := s.selectUTXOs(request.Amount, feePerInput)
if err != nil {
return nil, err
@@ -88,7 +89,7 @@ func (s *server) selectUTXOs(spendAmount uint64, feePerInput uint64) (
totalSpend := spendAmount + fee
if totalValue < totalSpend {
return nil, 0, errors.Errorf("Insufficient funds for send: %f required, while only %f available",
float64(totalSpend)/constants.SompiPerKaspa, float64(totalValue)/constants.SompiPerKaspa)
}
return selectedUTXOs, totalValue - totalSpend, nil

View File

@@ -7,7 +7,7 @@ import (
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/pkg/errors"
)
@@ -30,7 +30,7 @@ func send(conf *sendConfig) error {
ctx, cancel := context.WithTimeout(context.Background(), daemonTimeout)
defer cancel()
sendAmountSompi := uint64(conf.SendAmount * constants.SompiPerKaspa)
createUnsignedTransactionResponse, err := daemonClient.CreateUnsignedTransaction(ctx, &pb.CreateUnsignedTransactionRequest{
Address: conf.ToAddress,
Amount: sendAmountSompi,

View File

@@ -28,27 +28,28 @@ type consensus struct {
pastMedianTimeManager model.PastMedianTimeManager
blockValidator model.BlockValidator
coinbaseManager model.CoinbaseManager
dagTopologyManagers []model.DAGTopologyManager
dagTraversalManager model.DAGTraversalManager
difficultyManager model.DifficultyManager
ghostdagManagers []model.GHOSTDAGManager
headerTipsManager model.HeadersSelectedTipManager
mergeDepthManager model.MergeDepthManager
pruningManager model.PruningManager
reachabilityManagers []model.ReachabilityManager
finalityManager model.FinalityManager
pruningProofManager model.PruningProofManager
acceptanceDataStore model.AcceptanceDataStore
blockStore model.BlockStore
blockHeaderStore model.BlockHeaderStore
pruningStore model.PruningStore
ghostdagDataStores []model.GHOSTDAGDataStore
blockRelationStores []model.BlockRelationStore
blockStatusStore model.BlockStatusStore
consensusStateStore model.ConsensusStateStore
headersSelectedTipStore model.HeaderSelectedTipStore
multisetStore model.MultisetStore
reachabilityDataStores []model.ReachabilityDataStore
utxoDiffStore model.UTXODiffStore
finalityStore model.FinalityStore
headersSelectedChainStore model.HeadersSelectedChainStore
@@ -81,25 +82,31 @@ func (s *consensus) Init(skipAddingGenesis bool) error {
// on a node with pruned header all blocks without known parents points to it.
if !exists {
s.blockStatusStore.Stage(stagingArea, model.VirtualGenesisBlockHash, externalapi.StatusUTXOValid)
for _, reachabilityManager := range s.reachabilityManagers {
err = reachabilityManager.Init(stagingArea)
if err != nil {
return err
}
}
for _, dagTopologyManager := range s.dagTopologyManagers {
err = dagTopologyManager.SetParents(stagingArea, model.VirtualGenesisBlockHash, nil)
if err != nil {
return err
}
}
s.consensusStateStore.StageTips(stagingArea, []*externalapi.DomainHash{model.VirtualGenesisBlockHash})
for _, ghostdagDataStore := range s.ghostdagDataStores {
ghostdagDataStore.Stage(stagingArea, model.VirtualGenesisBlockHash, externalapi.NewBlockGHOSTDAGData(
0,
big.NewInt(0),
nil,
nil,
nil,
nil,
), false)
}
err = staging.CommitAllChanges(s.databaseContext, stagingArea)
if err != nil {
@@ -267,7 +274,7 @@ func (s *consensus) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalap
return blockInfo, nil
}
ghostdagData, err := s.ghostdagDataStores[0].Get(s.databaseContext, stagingArea, blockHash, false)
if err != nil {
return nil, err
}
@@ -287,12 +294,12 @@ func (s *consensus) GetBlockRelations(blockHash *externalapi.DomainHash) (
stagingArea := model.NewStagingArea()
blockRelation, err := s.blockRelationStores[0].BlockRelation(s.databaseContext, stagingArea, blockHash)
if err != nil {
return nil, nil, nil, err
}
blockGHOSTDAGData, err := s.ghostdagDataStores[0].Get(s.databaseContext, stagingArea, blockHash, false)
if err != nil {
return nil, nil, nil, err
}
@@ -382,7 +389,7 @@ func (s *consensus) GetVirtualUTXOs(expectedVirtualParents []*externalapi.Domain
stagingArea := model.NewStagingArea()
virtualParents, err := s.dagTopologyManagers[0].Parents(stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
@@ -409,6 +416,35 @@ func (s *consensus) PruningPoint() (*externalapi.DomainHash, error) {
return s.pruningStore.PruningPoint(s.databaseContext, stagingArea)
}
func (s *consensus) PruningPointHeaders() ([]externalapi.BlockHeader, error) {
s.lock.Lock()
defer s.lock.Unlock()
stagingArea := model.NewStagingArea()
lastPruningPointIndex, err := s.pruningStore.CurrentPruningPointIndex(s.databaseContext, stagingArea)
if err != nil {
return nil, err
}
headers := make([]externalapi.BlockHeader, 0, lastPruningPointIndex)
for i := uint64(0); i <= lastPruningPointIndex; i++ {
pruningPoint, err := s.pruningStore.PruningPointByIndex(s.databaseContext, stagingArea, i)
if err != nil {
return nil, err
}
header, err := s.blockHeaderStore.BlockHeader(s.databaseContext, stagingArea, pruningPoint)
if err != nil {
return nil, err
}
headers = append(headers, header)
}
return headers, nil
}
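
Pruning-point history is keyed by a dense index with CurrentPruningPointIndex as the high-water mark, so reading it back is the zero-to-current walk above. The same access pattern over a plain map, as a sketch (byIndex models PruningPointByIndex; strings stand in for headers):

package sketch

// historyUpTo sketches the index walk above: read every pruning point from
// index 0 through the current index; a gap would be an error in the real store.
func historyUpTo(currentIndex uint64, byIndex map[uint64]string) ([]string, bool) {
	headers := make([]string, 0, currentIndex+1)
	for i := uint64(0); i <= currentIndex; i++ {
		header, ok := byIndex[i]
		if !ok {
			return nil, false
		}
		headers = append(headers, header)
	}
	return headers, true
}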
func (s *consensus) ClearImportedPruningPointData() error {
s.lock.Lock()
defer s.lock.Unlock()
@@ -436,7 +472,7 @@ func (s *consensus) GetVirtualSelectedParent() (*externalapi.DomainHash, error)
stagingArea := model.NewStagingArea()
virtualGHOSTDAGData, err := s.ghostdagDataStores[0].Get(s.databaseContext, stagingArea, model.VirtualBlockHash, false)
if err != nil {
return nil, err
}
@@ -458,7 +494,7 @@ func (s *consensus) GetVirtualInfo() (*externalapi.VirtualInfo, error) {
stagingArea := model.NewStagingArea()
blockRelations, err := s.blockRelationStores[0].BlockRelation(s.databaseContext, stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
@@ -470,7 +506,7 @@ func (s *consensus) GetVirtualInfo() (*externalapi.VirtualInfo, error) {
if err != nil {
return nil, err
}
virtualGHOSTDAGData, err := s.ghostdagDataStores[0].Get(s.databaseContext, stagingArea, model.VirtualBlockHash, false)
if err != nil {
return nil, err
}
@@ -568,6 +604,33 @@ func (s *consensus) IsValidPruningPoint(blockHash *externalapi.DomainHash) (bool
return s.pruningManager.IsValidPruningPoint(stagingArea, blockHash)
}
func (s *consensus) ArePruningPointsViolatingFinality(pruningPoints []externalapi.BlockHeader) (bool, error) {
s.lock.Lock()
defer s.lock.Unlock()
stagingArea := model.NewStagingArea()
return s.pruningManager.ArePruningPointsViolatingFinality(stagingArea, pruningPoints)
}
func (s *consensus) ImportPruningPoints(pruningPoints []externalapi.BlockHeader) error {
s.lock.Lock()
defer s.lock.Unlock()
stagingArea := model.NewStagingArea()
err := s.consensusStateManager.ImportPruningPoints(stagingArea, pruningPoints)
if err != nil {
return err
}
err = staging.CommitAllChanges(s.databaseContext, stagingArea)
if err != nil {
return err
}
return nil
}
func (s *consensus) GetVirtualSelectedParentChainFromBlock(blockHash *externalapi.DomainHash) (*externalapi.SelectedChainPath, error) {
s.lock.Lock()
defer s.lock.Unlock()
@@ -608,7 +671,7 @@ func (s *consensus) IsInSelectedParentChainOf(blockHashA *externalapi.DomainHash
return false, err
}
return s.dagTopologyManagers[0].IsInSelectedParentChainOf(stagingArea, blockHashA, blockHashB)
}
func (s *consensus) GetHeadersSelectedTip() (*externalapi.DomainHash, error) {
@@ -631,7 +694,12 @@ func (s *consensus) Anticone(blockHash *externalapi.DomainHash) ([]*externalapi.
return nil, err
}
tips, err := s.consensusStateStore.Tips(stagingArea, s.databaseContext)
if err != nil {
return nil, err
}
return s.dagTraversalManager.AnticoneFromBlocks(stagingArea, tips, blockHash)
}
func (s *consensus) EstimateNetworkHashesPerSecond(startHash *externalapi.DomainHash, windowSize int) (uint64, error) {
@@ -666,3 +734,32 @@ func (s *consensus) ResolveVirtual() error {
}
}
}
func (s *consensus) BuildPruningPointProof() (*externalapi.PruningPointProof, error) {
s.lock.Lock()
defer s.lock.Unlock()
return s.pruningProofManager.BuildPruningPointProof(model.NewStagingArea())
}
func (s *consensus) ValidatePruningPointProof(pruningPointProof *externalapi.PruningPointProof) error {
s.lock.Lock()
defer s.lock.Unlock()
return s.pruningProofManager.ValidatePruningPointProof(pruningPointProof)
}
func (s *consensus) ApplyPruningPointProof(pruningPointProof *externalapi.PruningPointProof) error {
stagingArea := model.NewStagingArea()
err := s.pruningProofManager.ApplyPruningPointProof(stagingArea, pruningPointProof)
if err != nil {
return err
}
err = staging.CommitAllChanges(s.databaseContext, stagingArea)
if err != nil {
return err
}
return nil
}
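
ImportPruningPoints and ApplyPruningPointProof both follow the package's stage-then-commit discipline: writes accumulate in a fresh StagingArea and reach the database only through one CommitAllChanges call. A schematic sketch of that discipline (the types here are simplified stand-ins, not the kaspad model types):

package sketch

// stagingArea accumulates deferred writes; commitAll flushes them in order.
// In kaspad the flush runs inside a single database transaction, which is
// what makes the whole batch atomic.
type stagingArea struct {
	pending []func() error
}

func (sa *stagingArea) stage(write func() error) {
	sa.pending = append(sa.pending, write)
}

func (sa *stagingArea) commitAll() error {
	for _, write := range sa.pending {
		if err := write(); err != nil {
			return err
		}
	}
	return nil
}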

View File

@@ -5,25 +5,30 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/pkg/errors"
"math"
"math/big"
)
// DomainBlockHeaderToDbBlockHeader converts BlockHeader to DbBlockHeader
func DomainBlockHeaderToDbBlockHeader(domainBlockHeader externalapi.BlockHeader) *DbBlockHeader {
return &DbBlockHeader{
Version: uint32(domainBlockHeader.Version()),
Parents: DomainParentsToDbParents(domainBlockHeader.Parents()),
HashMerkleRoot: DomainHashToDbHash(domainBlockHeader.HashMerkleRoot()),
AcceptedIDMerkleRoot: DomainHashToDbHash(domainBlockHeader.AcceptedIDMerkleRoot()),
UtxoCommitment: DomainHashToDbHash(domainBlockHeader.UTXOCommitment()),
TimeInMilliseconds: domainBlockHeader.TimeInMilliseconds(),
Bits: domainBlockHeader.Bits(),
Nonce: domainBlockHeader.Nonce(),
DaaScore: domainBlockHeader.DAAScore(),
BlueScore: domainBlockHeader.BlueScore(),
BlueWork: domainBlockHeader.BlueWork().Bytes(),
PruningPoint: DomainHashToDbHash(domainBlockHeader.PruningPoint()),
}
}
// DbBlockHeaderToDomainBlockHeader converts DbBlockHeader to BlockHeader
func DbBlockHeaderToDomainBlockHeader(dbBlockHeader *DbBlockHeader) (externalapi.BlockHeader, error) {
parents, err := DbParentsToDomainParents(dbBlockHeader.Parents)
if err != nil {
return nil, err
}
@@ -43,14 +48,23 @@ func DbBlockHeaderToDomainBlockHeader(dbBlockHeader *DbBlockHeader) (externalapi
return nil, errors.Errorf("Invalid header version - bigger than uint16")
}
pruningPoint, err := DbHashToDomainHash(dbBlockHeader.PruningPoint)
if err != nil {
return nil, err
}
return blockheader.NewImmutableBlockHeader(
uint16(dbBlockHeader.Version),
parents,
hashMerkleRoot,
acceptedIDMerkleRoot,
utxoCommitment,
dbBlockHeader.TimeInMilliseconds,
dbBlockHeader.Bits,
dbBlockHeader.Nonce,
dbBlockHeader.DaaScore,
dbBlockHeader.BlueScore,
new(big.Int).SetBytes(dbBlockHeader.BlueWork),
pruningPoint,
), nil
}

View File

@@ -0,0 +1,49 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// DbBlockLevelParentsToDomainBlockLevelParents converts a DbBlockLevelParents to a BlockLevelParents
func DbBlockLevelParentsToDomainBlockLevelParents(dbBlockLevelParents *DbBlockLevelParents) (externalapi.BlockLevelParents, error) {
domainBlockLevelParents := make(externalapi.BlockLevelParents, len(dbBlockLevelParents.ParentHashes))
for i, parentHash := range dbBlockLevelParents.ParentHashes {
var err error
domainBlockLevelParents[i], err = externalapi.NewDomainHashFromByteSlice(parentHash.Hash)
if err != nil {
return nil, err
}
}
return domainBlockLevelParents, nil
}
// DomainBlockLevelParentsToDbBlockLevelParents converts a BlockLevelParents to a DbBlockLevelParents
func DomainBlockLevelParentsToDbBlockLevelParents(domainBlockLevelParents externalapi.BlockLevelParents) *DbBlockLevelParents {
parentHashes := make([]*DbHash, len(domainBlockLevelParents))
for i, parentHash := range domainBlockLevelParents {
parentHashes[i] = &DbHash{Hash: parentHash.ByteSlice()}
}
return &DbBlockLevelParents{ParentHashes: parentHashes}
}
// DomainParentsToDbParents converts a slice of BlockLevelParents to a slice of DbBlockLevelParents
func DomainParentsToDbParents(domainParents []externalapi.BlockLevelParents) []*DbBlockLevelParents {
dbParents := make([]*DbBlockLevelParents, len(domainParents))
for i, domainBlockLevelParents := range domainParents {
dbParents[i] = DomainBlockLevelParentsToDbBlockLevelParents(domainBlockLevelParents)
}
return dbParents
}
// DbParentsToDomainParents converts a slice of DbBlockLevelParents to a slice of BlockLevelParents
func DbParentsToDomainParents(dbParents []*DbBlockLevelParents) ([]externalapi.BlockLevelParents, error) {
domainParents := make([]externalapi.BlockLevelParents, len(dbParents))
for i, domainBlockLevelParents := range dbParents {
var err error
domainParents[i], err = DbBlockLevelParentsToDomainBlockLevelParents(domainBlockLevelParents)
if err != nil {
return nil, err
}
}
return domainParents, nil
}
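
The two converters are inverses, which makes a round-trip check the natural sanity test. A hedged, test-style sketch (it assumes externalapi.HashesEqual as the slice-comparison helper; in the repo this would sit in a _test.go file next to the converters):

// roundTripsCleanly is a sketch of a round-trip test for the converters above.
func roundTripsCleanly(parents []externalapi.BlockLevelParents) (bool, error) {
	back, err := DbParentsToDomainParents(DomainParentsToDbParents(parents))
	if err != nil {
		return false, err
	}
	if len(back) != len(parents) {
		return false, nil
	}
	for i := range parents {
		// BlockLevelParents has []*DomainHash as its underlying type,
		// so the HashesEqual helper applies directly.
		if !externalapi.HashesEqual(parents[i], back[i]) {
			return false, nil
		}
	}
	return true, nil
}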

File diff suppressed because it is too large

View File

@@ -10,13 +10,21 @@ message DbBlock {
message DbBlockHeader {
uint32 version = 1;
repeated DbBlockLevelParents parents = 2;
DbHash hashMerkleRoot = 3;
DbHash acceptedIDMerkleRoot = 4;
DbHash utxoCommitment = 5;
int64 timeInMilliseconds = 6;
uint32 bits = 7;
uint64 nonce = 8;
uint64 daaScore = 9;
bytes blueWork = 10;
DbHash pruningPoint = 12;
uint64 blueScore = 13;
}
message DbBlockLevelParents {
repeated DbHash parentHashes = 1;
}
message DbHash {

View File

@@ -12,7 +12,7 @@ type acceptanceDataStagingShard struct {
}
func (ads *acceptanceDataStore) stagingShard(stagingArea *model.StagingArea) *acceptanceDataStagingShard {
return stagingArea.GetOrCreateShard(ads.shardID, func() model.StagingShard {
return &acceptanceDataStagingShard{
store: ads,
toAdd: make(map[externalapi.DomainHash]externalapi.AcceptanceData),

View File

@@ -1,12 +1,11 @@
package acceptancedatastore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
"google.golang.org/protobuf/proto"
)
@@ -14,15 +13,17 @@ var bucketName = []byte("acceptance-data")
// acceptanceDataStore represents a store of AcceptanceData
type acceptanceDataStore struct {
shardID model.StagingShardID
cache *lrucache.LRUCache
bucket model.DBBucket
}
// New instantiates a new AcceptanceDataStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.AcceptanceDataStore {
return &acceptanceDataStore{
shardID: staging.GenerateShardingID(),
cache: lrucache.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
}
}
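
The same mechanical change repeats through every store below: the package-level model.StagingShardID constant becomes a per-instance ID drawn at construction, so the active consensus and the staging consensus can share one StagingArea without their shards colliding. The generator is essentially an atomic counter; a sketch of the mechanism (generateShardingID mimics staging.GenerateShardingID, and exampleStore is hypothetical):

package sketch

import "sync/atomic"

var lastShardID uint64

// generateShardingID mimics staging.GenerateShardingID: every store instance
// draws a unique ID, so stores belonging to two consensus instances (active
// and staging) occupy distinct shards in a shared staging area.
func generateShardingID() uint64 {
	return atomic.AddUint64(&lastShardID, 1)
}

type exampleStore struct {
	shardID uint64
}

func newExampleStore() *exampleStore {
	return &exampleStore{shardID: generateShardingID()}
}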

View File

@@ -12,7 +12,7 @@ type blockHeaderStagingShard struct {
}
func (bhs *blockHeaderStore) stagingShard(stagingArea *model.StagingArea) *blockHeaderStagingShard {
return stagingArea.GetOrCreateShard(bhs.shardID, func() model.StagingShard {
return &blockHeaderStagingShard{
store: bhs,
toAdd: make(map[externalapi.DomainHash]externalapi.BlockHeader),

View File

@@ -2,12 +2,11 @@ package blockheaderstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var bucketName = []byte("block-headers")
@@ -15,6 +14,7 @@ var countKeyName = []byte("block-headers-count")
// blockHeaderStore represents a store of blocks
type blockHeaderStore struct {
shardID model.StagingShardID
cache *lrucache.LRUCache
countCached uint64
bucket model.DBBucket
@@ -22,11 +22,12 @@ type blockHeaderStore struct {
}
// New instantiates a new BlockHeaderStore
func New(dbContext model.DBReader, prefixBucket model.DBBucket, cacheSize int, preallocate bool) (model.BlockHeaderStore, error) {
blockHeaderStore := &blockHeaderStore{
shardID: staging.GenerateShardingID(),
cache: lrucache.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
countKey: prefixBucket.Key(countKeyName),
}
err := blockHeaderStore.initializeCount(dbContext)

View File

@@ -11,7 +11,7 @@ type blockRelationStagingShard struct {
}
func (brs *blockRelationStore) stagingShard(stagingArea *model.StagingArea) *blockRelationStagingShard {
return stagingArea.GetOrCreateShard(brs.shardID, func() model.StagingShard {
return &blockRelationStagingShard{
store: brs,
toAdd: make(map[externalapi.DomainHash]*model.BlockRelations),

View File

@@ -2,27 +2,28 @@ package blockrelationstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var bucketName = []byte("block-relations")
// blockRelationStore represents a store of BlockRelations
type blockRelationStore struct {
shardID model.StagingShardID
cache *lrucache.LRUCache
bucket model.DBBucket
}
// New instantiates a new BlockRelationStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.BlockRelationStore {
return &blockRelationStore{
shardID: staging.GenerateShardingID(),
cache: lrucache.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
}
}

View File

@@ -11,7 +11,7 @@ type blockStatusStagingShard struct {
}
func (bss *blockStatusStore) stagingShard(stagingArea *model.StagingArea) *blockStatusStagingShard {
return stagingArea.GetOrCreateShard(bss.shardID, func() model.StagingShard {
return &blockStatusStagingShard{
store: bss,
toAdd: make(map[externalapi.DomainHash]externalapi.BlockStatus),

View File

@@ -2,27 +2,28 @@ package blockstatusstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var bucketName = []byte("block-statuses")
// blockStatusStore represents a store of BlockStatuses
type blockStatusStore struct {
shardID model.StagingShardID
cache *lrucache.LRUCache
bucket model.DBBucket
}
// New instantiates a new BlockStatusStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.BlockStatusStore {
return &blockStatusStore{
shardID: staging.GenerateShardingID(),
cache: lrucache.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
}
}

View File

@@ -12,7 +12,7 @@ type blockStagingShard struct {
}
func (bs *blockStore) stagingShard(stagingArea *model.StagingArea) *blockStagingShard {
return stagingArea.GetOrCreateShard(bs.shardID, func() model.StagingShard {
return &blockStagingShard{
store: bs,
toAdd: make(map[externalapi.DomainHash]*externalapi.DomainBlock),

View File

@@ -2,12 +2,11 @@ package blockstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
"github.com/pkg/errors"
)
@@ -15,6 +14,7 @@ var bucketName = []byte("blocks")
// blockStore represents a store of blocks
type blockStore struct {
shardID model.StagingShardID
cache *lrucache.LRUCache
countCached uint64
bucket model.DBBucket
@@ -22,11 +22,12 @@ type blockStore struct {
}
// New instantiates a new BlockStore
func New(dbContext model.DBReader, prefixBucket model.DBBucket, cacheSize int, preallocate bool) (model.BlockStore, error) {
blockStore := &blockStore{
shardID: staging.GenerateShardingID(),
cache: lrucache.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
countKey: prefixBucket.Key([]byte("blocks-count")),
}
err := blockStore.initializeCount(dbContext)

View File

@@ -12,7 +12,7 @@ type consensusStateStagingShard struct {
}
func (bs *consensusStateStore) stagingShard(stagingArea *model.StagingArea) *consensusStateStagingShard {
return stagingArea.GetOrCreateShard(bs.shardID, func() model.StagingShard {
return &consensusStateStagingShard{
store: bs,
tipsStaging: nil,

View File

@@ -1,17 +1,17 @@
package consensusstatestore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxolrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var importingPruningPointUTXOSetKeyName = []byte("importing-pruning-point-utxo-set")
// consensusStateStore represents a store for the current consensus state
type consensusStateStore struct {
shardID model.StagingShardID
virtualUTXOSetCache *utxolrucache.LRUCache
tipsCache []*externalapi.DomainHash
tipsKey model.DBKey
@@ -20,12 +20,13 @@ type consensusStateStore struct {
}
// New instantiates a new ConsensusStateStore
func New(prefixBucket model.DBBucket, utxoSetCacheSize int, preallocate bool) model.ConsensusStateStore {
return &consensusStateStore{
shardID: staging.GenerateShardingID(),
virtualUTXOSetCache: utxolrucache.New(utxoSetCacheSize, preallocate),
tipsKey: prefixBucket.Key(tipsKeyName),
importingPruningPointUTXOSetKey: prefixBucket.Key(importingPruningPointUTXOSetKeyName),
utxoSetBucket: prefixBucket.Bucket(utxoSetBucketName),
}
}

View File

@@ -25,14 +25,6 @@ func (css *consensusStateStore) StageVirtualUTXODiff(stagingArea *model.StagingA
}
func (csss *consensusStateStagingShard) commitVirtualUTXODiff(dbTx model.DBTransaction) error {
hadStartedImportingPruningPointUTXOSet, err := csss.store.HadStartedImportingPruningPointUTXOSet(dbTx)
if err != nil {
return err
}
if hadStartedImportingPruningPointUTXOSet {
return errors.New("cannot commit virtual UTXO diff after starting to import the pruning point UTXO set")
}
if csss.virtualUTXODiffStaging == nil {
return nil
}

View File

@@ -15,7 +15,7 @@ type daaBlocksStagingShard struct {
}
func (daas *daaBlocksStore) stagingShard(stagingArea *model.StagingArea) *daaBlocksStagingShard {
return stagingArea.GetOrCreateShard(daas.shardID, func() model.StagingShard {
return &daaBlocksStagingShard{
store: daas,
daaScoreToAdd: make(map[externalapi.DomainHash]uint64),

View File

@@ -1,12 +1,11 @@
package daablocksstore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var daaScoreBucketName = []byte("daa-score")
@@ -14,6 +13,7 @@ var daaAddedBlocksBucketName = []byte("daa-added-blocks")
// daaBlocksStore represents a store of DAABlocksStore
type daaBlocksStore struct {
shardID model.StagingShardID
daaScoreLRUCache *lrucache.LRUCache
daaAddedBlocksLRUCache *lrucache.LRUCache
daaScoreBucket model.DBBucket
@@ -21,12 +21,13 @@ type daaBlocksStore struct {
}
// New instantiates a new DAABlocksStore
func New(prefixBucket model.DBBucket, daaScoreCacheSize int, daaAddedBlocksCacheSize int, preallocate bool) model.DAABlocksStore {
return &daaBlocksStore{
shardID: staging.GenerateShardingID(),
daaScoreLRUCache: lrucache.New(daaScoreCacheSize, preallocate),
daaAddedBlocksLRUCache: lrucache.New(daaAddedBlocksCacheSize, preallocate),
daaScoreBucket: prefixBucket.Bucket(daaScoreBucketName),
daaAddedBlocksBucket: prefixBucket.Bucket(daaAddedBlocksBucketName),
}
}

View File

@@ -25,7 +25,7 @@ type daaWindowStagingShard struct {
}
func (daaws *daaWindowStore) stagingShard(stagingArea *model.StagingArea) *daaWindowStagingShard {
return stagingArea.GetOrCreateShard(daaws.shardID, func() model.StagingShard {
return &daaWindowStagingShard{
store: daaws,
toAdd: make(map[dbKey]*externalapi.BlockGHOSTDAGDataHashPair),

View File

@@ -3,26 +3,29 @@ package daawindowstore
import (
"encoding/binary"
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucachehashpairtoblockghostdagdatahashpair"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/util/staging"
"github.com/pkg/errors"
)
var bucketName = []byte("daa-window")
type daaWindowStore struct {
shardID model.StagingShardID
cache *lrucachehashpairtoblockghostdagdatahashpair.LRUCache
bucket model.DBBucket
}
// New instantiates a new BlocksWithTrustedDataDAAWindowStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.BlocksWithTrustedDataDAAWindowStore {
return &daaWindowStore{
shardID: staging.GenerateShardingID(),
cache: lrucachehashpairtoblockghostdagdatahashpair.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
}
}
@@ -36,6 +39,8 @@ func (daaws *daaWindowStore) Stage(stagingArea *model.StagingArea, blockHash *ex
}
var errDAAWindowBlockNotFound = errors.Wrap(database.ErrNotFound, "DAA window block not found")
func (daaws *daaWindowStore) DAAWindowBlock(dbContext model.DBReader, stagingArea *model.StagingArea, blockHash *externalapi.DomainHash, index uint64) (*externalapi.BlockGHOSTDAGDataHashPair, error) {
stagingShard := daaws.stagingShard(stagingArea)
@@ -45,10 +50,17 @@ func (daaws *daaWindowStore) DAAWindowBlock(dbContext model.DBReader, stagingAre
}
if pair, ok := daaws.cache.Get(blockHash, index); ok {
if pair == nil {
return nil, errDAAWindowBlockNotFound
}
return pair, nil
}
pairBytes, err := dbContext.Get(daaws.key(dbKey))
if database.IsNotFoundError(err) {
daaws.cache.Add(blockHash, index, nil)
}
if err != nil {
return nil, err
}

View File

@@ -11,7 +11,7 @@ type finalityStagingShard struct {
}
func (fs *finalityStore) stagingShard(stagingArea *model.StagingArea) *finalityStagingShard {
return stagingArea.GetOrCreateShard(fs.shardID, func() model.StagingShard {
return &finalityStagingShard{
store: fs,
toAdd: make(map[externalapi.DomainHash]*externalapi.DomainHash),

View File

@@ -1,25 +1,26 @@
package finalitystore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var bucketName = []byte("finality-points")
type finalityStore struct {
shardID model.StagingShardID
cache *lrucache.LRUCache
bucket model.DBBucket
}
// New instantiates a new FinalityStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.FinalityStore {
return &finalityStore{
shardID: staging.GenerateShardingID(),
cache: lrucache.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
}
}

View File

@@ -23,7 +23,7 @@ type ghostdagDataStagingShard struct {
}
func (gds *ghostdagDataStore) stagingShard(stagingArea *model.StagingArea) *ghostdagDataStagingShard {
return stagingArea.GetOrCreateShard(gds.shardID, func() model.StagingShard {
return &ghostdagDataStagingShard{
store: gds,
toAdd: make(map[key]*externalapi.BlockGHOSTDAGData),

View File

@@ -2,12 +2,11 @@ package ghostdagdatastore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucacheghostdagdata"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var ghostdagDataBucketName = []byte("block-ghostdag-data")
@@ -15,17 +14,19 @@ var trustedDataBucketName = []byte("block-with-trusted-data-ghostdag-data")
// ghostdagDataStore represents a store of BlockGHOSTDAGData
type ghostdagDataStore struct {
shardID model.StagingShardID
cache *lrucacheghostdagdata.LRUCache
ghostdagDataBucket model.DBBucket
trustedDataBucket model.DBBucket
}
// New instantiates a new GHOSTDAGDataStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.GHOSTDAGDataStore {
return &ghostdagDataStore{
shardID: staging.GenerateShardingID(),
cache: lrucacheghostdagdata.New(cacheSize, preallocate),
ghostdagDataBucket: prefixBucket.Bucket(ghostdagDataBucketName),
trustedDataBucket: prefixBucket.Bucket(trustedDataBucketName),
}
}

View File

@@ -15,7 +15,7 @@ type headersSelectedChainStagingShard struct {
}
func (hscs *headersSelectedChainStore) stagingShard(stagingArea *model.StagingArea) *headersSelectedChainStagingShard {
return stagingArea.GetOrCreateShard(hscs.shardID, func() model.StagingShard {
return &headersSelectedChainStagingShard{
store: hscs,
addedByHash: make(map[externalapi.DomainHash]uint64),

View File

@@ -2,7 +2,7 @@ package headersselectedchainstore
import (
"encoding/binary"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
@@ -18,6 +18,7 @@ var bucketChainBlockIndexByHashName = []byte("chain-block-index-by-hash")
var highestChainBlockIndexKeyName = []byte("highest-chain-block-index")
type headersSelectedChainStore struct {
shardID model.StagingShardID
cacheByIndex *lrucacheuint64tohash.LRUCache
cacheByHash *lrucache.LRUCache
cacheHighestChainBlockIndex uint64
@@ -27,13 +28,14 @@ type headersSelectedChainStore struct {
}
// New instantiates a new HeadersSelectedChainStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.HeadersSelectedChainStore {
return &headersSelectedChainStore{
shardID: staging.GenerateShardingID(),
cacheByIndex: lrucacheuint64tohash.New(cacheSize, preallocate),
cacheByHash: lrucache.New(cacheSize, preallocate),
bucketChainBlockHashByIndex: prefixBucket.Bucket(bucketChainBlockHashByIndexName),
bucketChainBlockIndexByHash: prefixBucket.Bucket(bucketChainBlockIndexByHashName),
highestChainBlockIndexKey: prefixBucket.Key(highestChainBlockIndexKeyName),
}
}

View File

@@ -11,7 +11,7 @@ type headersSelectedTipStagingShard struct {
}
func (hsts *headerSelectedTipStore) stagingShard(stagingArea *model.StagingArea) *headersSelectedTipStagingShard {
return stagingArea.GetOrCreateShard(hsts.shardID, func() model.StagingShard {
return &headersSelectedTipStagingShard{
store: hsts,
newSelectedTip: nil,

View File

@@ -2,24 +2,25 @@ package headersselectedtipstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var keyName = []byte("headers-selected-tip")
type headerSelectedTipStore struct {
shardID model.StagingShardID
cache *externalapi.DomainHash
key model.DBKey
}
// New instantiates a new HeaderSelectedTipStore
func New(prefixBucket model.DBBucket) model.HeaderSelectedTipStore {
return &headerSelectedTipStore{
shardID: staging.GenerateShardingID(),
key: prefixBucket.Key(keyName),
}
}

View File

@@ -12,7 +12,7 @@ type multisetStagingShard struct {
}
func (ms *multisetStore) stagingShard(stagingArea *model.StagingArea) *multisetStagingShard {
return stagingArea.GetOrCreateShard(ms.shardID, func() model.StagingShard {
return &multisetStagingShard{
store: ms,
toAdd: make(map[externalapi.DomainHash]model.Multiset),

View File

@@ -2,27 +2,28 @@ package multisetstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var bucketName = []byte("multisets")
// multisetStore represents a store of Multisets
type multisetStore struct {
shardID model.StagingShardID
cache *lrucache.LRUCache
bucket model.DBBucket
}
// New instantiates a new MultisetStore
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.MultisetStore {
return &multisetStore{
shardID: staging.GenerateShardingID(),
cache: lrucache.New(cacheSize, preallocate),
bucket: prefixBucket.Bucket(bucketName),
}
}

View File

@@ -8,18 +8,17 @@ import (
type pruningStagingShard struct {
store *pruningStore
pruningPointByIndex map[uint64]*externalapi.DomainHash
currentPruningPointIndex *uint64
newPruningPointCandidate *externalapi.DomainHash
startUpdatingPruningPointUTXOSet bool
}
func (ps *pruningStore) stagingShard(stagingArea *model.StagingArea) *pruningStagingShard {
return stagingArea.GetOrCreateShard(ps.shardID, func() model.StagingShard {
return &pruningStagingShard{
store: ps,
pruningPointByIndex: map[uint64]*externalapi.DomainHash{},
newPruningPointCandidate: nil,
startUpdatingPruningPointUTXOSet: false,
}
@@ -27,28 +26,32 @@ func (ps *pruningStore) stagingShard(stagingArea *model.StagingArea) *pruningSta
}
func (mss *pruningStagingShard) Commit(dbTx model.DBTransaction) error {
for index, hash := range mss.pruningPointByIndex {
hashCopy := hash
hashBytes, err := mss.store.serializeHash(hash)
if err != nil {
return err
}
err = dbTx.Put(mss.store.indexAsKey(index), hashBytes)
if err != nil {
return err
}
mss.store.pruningPointByIndexCache.Add(index, hashCopy)
}
if mss.currentPruningPointIndex != nil {
indexBytes := mss.store.serializeIndex(*mss.currentPruningPointIndex)
err := dbTx.Put(mss.store.currentPruningPointIndexKey, indexBytes)
if err != nil {
return err
}
if mss.store.currentPruningPointIndexCache == nil {
var zero uint64
mss.store.currentPruningPointIndexCache = &zero
}
*mss.store.currentPruningPointIndexCache = *mss.currentPruningPointIndex
}
if mss.newPruningPointCandidate != nil {
@@ -74,5 +77,5 @@ func (mss *pruningStagingShard) Commit(dbTx model.DBTransaction) error {
}
func (mss *pruningStagingShard) isStaged() bool {
return len(mss.pruningPointByIndex) > 0 || mss.newPruningPointCandidate != nil || mss.startUpdatingPruningPointUTXOSet
}

View File

@@ -1,45 +1,51 @@
package pruningstore
import (
"encoding/binary"
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/binaryserialization"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucacheuint64tohash"
"github.com/kaspanet/kaspad/util/staging"
)
var currentPruningPointIndexKeyName = []byte("pruning-block-index")
var candidatePruningPointHashKeyName = []byte("candidate-pruning-point-hash")
var pruningPointUTXOSetBucketName = []byte("pruning-point-utxo-set")
var updatingPruningPointUTXOSetKeyName = []byte("updating-pruning-point-utxo-set")
var pruningPointByIndexBucketName = []byte("pruning-point-by-index")
// pruningStore represents a store for the current pruning state
type pruningStore struct {
shardID model.StagingShardID
pruningPointByIndexCache *lrucacheuint64tohash.LRUCache
currentPruningPointIndexCache *uint64
pruningPointCandidateCache *externalapi.DomainHash
currentPruningPointIndexKey model.DBKey
candidatePruningPointHashKey model.DBKey
pruningPointUTXOSetBucket model.DBBucket
updatingPruningPointUTXOSetKey model.DBKey
importedPruningPointUTXOsBucket model.DBBucket
importedPruningPointMultisetKey model.DBKey
pruningPointByIndexBucket model.DBBucket
}
// New instantiates a new PruningStore
func New(prefix *prefix.Prefix) model.PruningStore {
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.PruningStore {
return &pruningStore{
pruningBlockHashKey: database.MakeBucket(prefix.Serialize()).Key(pruningBlockHashKeyName),
previousPruningBlockHashKey: database.MakeBucket(prefix.Serialize()).Key(previousPruningBlockHashKeyName),
candidatePruningPointHashKey: database.MakeBucket(prefix.Serialize()).Key(candidatePruningPointHashKeyName),
pruningPointUTXOSetBucket: database.MakeBucket(prefix.Serialize()).Bucket(pruningPointUTXOSetBucketName),
importedPruningPointUTXOsBucket: database.MakeBucket(prefix.Serialize()).Bucket(importedPruningPointUTXOsBucketName),
updatingPruningPointUTXOSetKey: database.MakeBucket(prefix.Serialize()).Key(updatingPruningPointUTXOSetKeyName),
importedPruningPointMultisetKey: database.MakeBucket(prefix.Serialize()).Key(importedPruningPointMultisetKeyName),
shardID: staging.GenerateShardingID(),
pruningPointByIndexCache: lrucacheuint64tohash.New(cacheSize, preallocate),
currentPruningPointIndexKey: prefixBucket.Key(currentPruningPointIndexKeyName),
candidatePruningPointHashKey: prefixBucket.Key(candidatePruningPointHashKeyName),
pruningPointUTXOSetBucket: prefixBucket.Bucket(pruningPointUTXOSetBucketName),
importedPruningPointUTXOsBucket: prefixBucket.Bucket(importedPruningPointUTXOsBucketName),
updatingPruningPointUTXOSetKey: prefixBucket.Key(updatingPruningPointUTXOSetKeyName),
importedPruningPointMultisetKey: prefixBucket.Key(importedPruningPointMultisetKeyName),
pruningPointByIndexBucket: prefixBucket.Bucket(pruningPointByIndexBucketName),
}
}
@@ -60,7 +66,7 @@ func (ps *pruningStore) PruningPointCandidate(dbContext model.DBReader, stagingA
return ps.pruningPointCandidateCache, nil
}
candidateBytes, err := dbContext.Get(ps.pruningBlockHashKey)
candidateBytes, err := dbContext.Get(ps.candidatePruningPointHashKey)
if err != nil {
return nil, err
}
@@ -87,16 +93,24 @@ func (ps *pruningStore) HasPruningPointCandidate(dbContext model.DBReader, stagi
return dbContext.Has(ps.candidatePruningPointHashKey)
}
// Stage stages the pruning state
func (ps *pruningStore) StagePruningPoint(stagingArea *model.StagingArea, pruningPointBlockHash *externalapi.DomainHash) {
stagingShard := ps.stagingShard(stagingArea)
// StagePruningPoint stages the pruning state
func (ps *pruningStore) StagePruningPoint(dbContext model.DBWriter, stagingArea *model.StagingArea, pruningPointBlockHash *externalapi.DomainHash) error {
newPruningPointIndex := uint64(0)
pruningPointIndex, err := ps.CurrentPruningPointIndex(dbContext, stagingArea)
if database.IsNotFoundError(err) {
newPruningPointIndex = 0
} else if err != nil {
return err
} else {
newPruningPointIndex = pruningPointIndex + 1
}
stagingShard.currentPruningPoint = pruningPointBlockHash
}
err = ps.StagePruningPointByIndex(dbContext, stagingArea, pruningPointBlockHash, newPruningPointIndex)
if err != nil {
return err
}
func (ps *pruningStore) StagePreviousPruningPoint(stagingArea *model.StagingArea, oldPruningPointBlockHash *externalapi.DomainHash) {
stagingShard := ps.stagingShard(stagingArea)
stagingShard.previousPruningPoint = oldPruningPointBlockHash
return nil
}
func (ps *pruningStore) IsStaged(stagingArea *model.StagingArea) bool {
@@ -146,17 +160,26 @@ func (ps *pruningStore) UpdatePruningPointUTXOSet(dbContext model.DBWriter, diff
// PruningPoint gets the current pruning point
func (ps *pruningStore) PruningPoint(dbContext model.DBReader, stagingArea *model.StagingArea) (*externalapi.DomainHash, error) {
pruningPointIndex, err := ps.CurrentPruningPointIndex(dbContext, stagingArea)
if err != nil {
return nil, err
}
return ps.PruningPointByIndex(dbContext, stagingArea, pruningPointIndex)
}
func (ps *pruningStore) PruningPointByIndex(dbContext model.DBReader, stagingArea *model.StagingArea, index uint64) (*externalapi.DomainHash, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.currentPruningPoint != nil {
return stagingShard.currentPruningPoint, nil
if hash, exists := stagingShard.pruningPointByIndex[index]; exists {
return hash, nil
}
if ps.pruningPointCache != nil {
return ps.pruningPointCache, nil
if hash, exists := ps.pruningPointByIndexCache.Get(index); exists {
return hash, nil
}
pruningPointBytes, err := dbContext.Get(ps.pruningBlockHashKey)
pruningPointBytes, err := dbContext.Get(ps.indexAsKey(index))
if err != nil {
return nil, err
}
@@ -165,34 +188,10 @@ func (ps *pruningStore) PruningPoint(dbContext model.DBReader, stagingArea *mode
if err != nil {
return nil, err
}
ps.pruningPointCache = pruningPoint
ps.pruningPointByIndexCache.Add(index, pruningPoint)
return pruningPoint, nil
}
// OldPruningPoint returns the pruning point *before* the current one
func (ps *pruningStore) PreviousPruningPoint(dbContext model.DBReader, stagingArea *model.StagingArea) (*externalapi.DomainHash, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.previousPruningPoint != nil {
return stagingShard.previousPruningPoint, nil
}
if ps.oldPruningPointCache != nil {
return ps.oldPruningPointCache, nil
}
oldPruningPointBytes, err := dbContext.Get(ps.previousPruningBlockHashKey)
if err != nil {
return nil, err
}
oldPruningPoint, err := ps.deserializePruningPoint(oldPruningPointBytes)
if err != nil {
return nil, err
}
ps.oldPruningPointCache = oldPruningPoint
return oldPruningPoint, nil
}
func (ps *pruningStore) serializeHash(hash *externalapi.DomainHash) ([]byte, error) {
return proto.Marshal(serialization.DomainHashToDbHash(hash))
}
@@ -207,18 +206,26 @@ func (ps *pruningStore) deserializePruningPoint(pruningPointBytes []byte) (*exte
return serialization.DbHashToDomainHash(dbHash)
}
func (ps *pruningStore) deserializeIndex(indexBytes []byte) (uint64, error) {
return binaryserialization.DeserializeUint64(indexBytes)
}
func (ps *pruningStore) serializeIndex(index uint64) []byte {
return binaryserialization.SerializeUint64(index)
}
func (ps *pruningStore) HasPruningPoint(dbContext model.DBReader, stagingArea *model.StagingArea) (bool, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.currentPruningPoint != nil {
if stagingShard.currentPruningPointIndex != nil {
return true, nil
}
if ps.pruningPointCache != nil {
if ps.currentPruningPointIndexCache != nil {
return true, nil
}
return dbContext.Has(ps.pruningBlockHashKey)
return dbContext.Has(ps.currentPruningPointIndexKey)
}
func (ps *pruningStore) PruningPointUTXOIterator(dbContext model.DBReader) (externalapi.ReadOnlyUTXOSetIterator, error) {
@@ -280,3 +287,63 @@ func (ps *pruningStore) HadStartedUpdatingPruningPointUTXOSet(dbContext model.DB
func (ps *pruningStore) FinishUpdatingPruningPointUTXOSet(dbContext model.DBWriter) error {
return dbContext.Delete(ps.updatingPruningPointUTXOSetKey)
}
func (ps *pruningStore) indexAsKey(index uint64) model.DBKey {
var keyBytes [8]byte
binary.BigEndian.PutUint64(keyBytes[:], index)
return ps.pruningPointByIndexBucket.Key(keyBytes[:])
}
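indexAsKey encodes the index in big-endian, which makes byte-wise key ordering in the underlying store agree with numeric index ordering. A self-contained illustration (not kaspad code):
package main
import (
	"bytes"
	"encoding/binary"
	"fmt"
)
// indexToKeyBytes mirrors the encoding step of indexAsKey above.
func indexToKeyBytes(index uint64) []byte {
	var key [8]byte
	binary.BigEndian.PutUint64(key[:], index)
	return key[:]
}
func main() {
	// Big-endian keys sort like their numeric values:
	fmt.Println(bytes.Compare(indexToKeyBytes(255), indexToKeyBytes(256)) < 0) // true
	// Little-endian encoding would break this: 255 would encode to
	// ff 00 ... and 256 to 00 01 ..., making 256 sort before 255.
}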
func (ps *pruningStore) StagePruningPointByIndex(dbContext model.DBReader, stagingArea *model.StagingArea,
pruningPointBlockHash *externalapi.DomainHash, index uint64) error {
stagingShard := ps.stagingShard(stagingArea)
stagingShard.pruningPointByIndex[index] = pruningPointBlockHash
pruningPointIndex, err := ps.CurrentPruningPointIndex(dbContext, stagingArea)
isNotFoundError := database.IsNotFoundError(err)
if !isNotFoundError && err != nil {
return err
}
if stagingShard.currentPruningPointIndex == nil {
var zero uint64
stagingShard.currentPruningPointIndex = &zero
}
if isNotFoundError || index > pruningPointIndex {
*stagingShard.currentPruningPointIndex = index
}
return nil
}
func (ps *pruningStore) CurrentPruningPointIndex(dbContext model.DBReader, stagingArea *model.StagingArea) (uint64, error) {
stagingShard := ps.stagingShard(stagingArea)
if stagingShard.currentPruningPointIndex != nil {
return *stagingShard.currentPruningPointIndex, nil
}
if ps.currentPruningPointIndexCache != nil {
return *ps.currentPruningPointIndexCache, nil
}
pruningPointIndexBytes, err := dbContext.Get(ps.currentPruningPointIndexKey)
if err != nil {
return 0, err
}
index, err := ps.deserializeIndex(pruningPointIndexBytes)
if err != nil {
return 0, err
}
if ps.currentPruningPointIndexCache == nil {
var zero uint64
ps.currentPruningPointIndexCache = &zero
}
*ps.currentPruningPointIndexCache = index
return index, nil
}
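CurrentPruningPointIndex relies on a *uint64 cache so the store can distinguish "nothing cached yet" (nil) from a legitimately cached zero. The same idiom in isolation, as a sketch:
// indexCache is a stripped-down version of the caching pattern above.
type indexCache struct {
	cached *uint64
}
func (c *indexCache) get(load func() (uint64, error)) (uint64, error) {
	if c.cached != nil {
		return *c.cached, nil // cache hit, even when the value is 0
	}
	index, err := load() // e.g. the database read
	if err != nil {
		return 0, err
	}
	c.cached = &index
	return index, nil
}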

View File

@@ -12,7 +12,7 @@ type reachabilityDataStagingShard struct {
}
func (rds *reachabilityDataStore) stagingShard(stagingArea *model.StagingArea) *reachabilityDataStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDReachabilityData, func() model.StagingShard {
return stagingArea.GetOrCreateShard(rds.shardID, func() model.StagingShard {
return &reachabilityDataStagingShard{
store: rds,
reachabilityData: make(map[externalapi.DomainHash]model.ReachabilityData),

View File

@@ -2,12 +2,11 @@ package reachabilitydatastore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
)
var reachabilityDataBucketName = []byte("reachability-data")
@@ -15,6 +14,7 @@ var reachabilityReindexRootKeyName = []byte("reachability-reindex-root")
// reachabilityDataStore represents a store of ReachabilityData
type reachabilityDataStore struct {
shardID model.StagingShardID
reachabilityDataCache *lrucache.LRUCache
reachabilityReindexRootCache *externalapi.DomainHash
@@ -23,11 +23,12 @@ type reachabilityDataStore struct {
}
// New instantiates a new ReachabilityDataStore
func New(prefix *prefix.Prefix, cacheSize int, preallocate bool) model.ReachabilityDataStore {
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.ReachabilityDataStore {
return &reachabilityDataStore{
shardID: staging.GenerateShardingID(),
reachabilityDataCache: lrucache.New(cacheSize, preallocate),
reachabilityDataBucket: database.MakeBucket(prefix.Serialize()).Bucket(reachabilityDataBucketName),
reachabilityReindexRootKey: database.MakeBucket(prefix.Serialize()).Key(reachabilityReindexRootKeyName),
reachabilityDataBucket: prefixBucket.Bucket(reachabilityDataBucketName),
reachabilityReindexRootKey: prefixBucket.Key(reachabilityReindexRootKeyName),
}
}

View File

@@ -13,7 +13,7 @@ type utxoDiffStagingShard struct {
}
func (uds *utxoDiffStore) stagingShard(stagingArea *model.StagingArea) *utxoDiffStagingShard {
return stagingArea.GetOrCreateShard(model.StagingShardIDUTXODiff, func() model.StagingShard {
return stagingArea.GetOrCreateShard(uds.shardID, func() model.StagingShard {
return &utxoDiffStagingShard{
store: uds,
utxoDiffToAdd: make(map[externalapi.DomainHash]externalapi.UTXODiff),

View File

@@ -2,12 +2,11 @@ package utxodiffstore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/kaspanet/kaspad/domain/prefixmanager/prefix"
"github.com/kaspanet/kaspad/util/staging"
"github.com/pkg/errors"
)
@@ -16,6 +15,7 @@ var utxoDiffChildBucketName = []byte("utxo-diff-children")
// utxoDiffStore represents a store of UTXODiffs
type utxoDiffStore struct {
shardID model.StagingShardID
utxoDiffCache *lrucache.LRUCache
utxoDiffChildCache *lrucache.LRUCache
utxoDiffBucket model.DBBucket
@@ -23,12 +23,13 @@ type utxoDiffStore struct {
}
// New instantiates a new UTXODiffStore
func New(prefix *prefix.Prefix, cacheSize int, preallocate bool) model.UTXODiffStore {
func New(prefixBucket model.DBBucket, cacheSize int, preallocate bool) model.UTXODiffStore {
return &utxoDiffStore{
shardID: staging.GenerateShardingID(),
utxoDiffCache: lrucache.New(cacheSize, preallocate),
utxoDiffChildCache: lrucache.New(cacheSize, preallocate),
utxoDiffBucket: database.MakeBucket(prefix.Serialize()).Bucket(utxoDiffBucketName),
utxoDiffChildBucket: database.MakeBucket(prefix.Serialize()).Bucket(utxoDiffChildBucketName),
utxoDiffBucket: prefixBucket.Bucket(utxoDiffBucketName),
utxoDiffChildBucket: prefixBucket.Bucket(utxoDiffChildBucketName),
}
}

View File

@@ -2,6 +2,10 @@ package consensus
import (
"github.com/kaspanet/kaspad/domain/consensus/datastructures/daawindowstore"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/processes/blockparentbuilder"
"github.com/kaspanet/kaspad/domain/consensus/processes/pruningproofmanager"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"io/ioutil"
"os"
"sync"
@@ -15,7 +19,7 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/datastructures/blockstatusstore"
"github.com/kaspanet/kaspad/domain/consensus/datastructures/blockstore"
"github.com/kaspanet/kaspad/domain/consensus/datastructures/consensusstatestore"
daablocksstore "github.com/kaspanet/kaspad/domain/consensus/datastructures/daablocksstore"
"github.com/kaspanet/kaspad/domain/consensus/datastructures/daablocksstore"
"github.com/kaspanet/kaspad/domain/consensus/datastructures/finalitystore"
"github.com/kaspanet/kaspad/domain/consensus/datastructures/ghostdagdatastore"
"github.com/kaspanet/kaspad/domain/consensus/datastructures/headersselectedchainstore"
@@ -103,6 +107,7 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
externalapi.Consensus, error) {
dbManager := consensusdatabase.New(db)
prefixBucket := consensusdatabase.MakeBucket(dbPrefix.Serialize())
pruningWindowSizeForCaches := int(config.PruningDepth())
@@ -118,65 +123,48 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
pruningWindowSizePlusFinalityDepthForCache := int(config.PruningDepth() + config.FinalityDepth())
// Data Structures
daaWindowStore := daawindowstore.New(dbPrefix, 10_000, preallocateCaches)
acceptanceDataStore := acceptancedatastore.New(dbPrefix, 200, preallocateCaches)
blockStore, err := blockstore.New(dbManager, dbPrefix, 200, preallocateCaches)
daaWindowStore := daawindowstore.New(prefixBucket, 10_000, preallocateCaches)
acceptanceDataStore := acceptancedatastore.New(prefixBucket, 200, preallocateCaches)
blockStore, err := blockstore.New(dbManager, prefixBucket, 200, preallocateCaches)
if err != nil {
return nil, err
}
blockHeaderStore, err := blockheaderstore.New(dbManager, dbPrefix, 10_000, preallocateCaches)
blockHeaderStore, err := blockheaderstore.New(dbManager, prefixBucket, 10_000, preallocateCaches)
if err != nil {
return nil, err
}
blockRelationStore := blockrelationstore.New(dbPrefix, pruningWindowSizePlusFinalityDepthForCache, preallocateCaches)
blockStatusStore := blockstatusstore.New(dbPrefix, pruningWindowSizePlusFinalityDepthForCache, preallocateCaches)
multisetStore := multisetstore.New(dbPrefix, 200, preallocateCaches)
pruningStore := pruningstore.New(dbPrefix)
reachabilityDataStore := reachabilitydatastore.New(dbPrefix, pruningWindowSizePlusFinalityDepthForCache, preallocateCaches)
utxoDiffStore := utxodiffstore.New(dbPrefix, 200, preallocateCaches)
consensusStateStore := consensusstatestore.New(dbPrefix, 10_000, preallocateCaches)
blockStatusStore := blockstatusstore.New(prefixBucket, pruningWindowSizePlusFinalityDepthForCache, preallocateCaches)
multisetStore := multisetstore.New(prefixBucket, 200, preallocateCaches)
pruningStore := pruningstore.New(prefixBucket, 2, preallocateCaches)
utxoDiffStore := utxodiffstore.New(prefixBucket, 200, preallocateCaches)
consensusStateStore := consensusstatestore.New(prefixBucket, 10_000, preallocateCaches)
// Some tests artificially decrease the pruningWindowSize, thus making the GhostDagStore cache too small for
// a single DifficultyAdjustmentWindow. To alleviate this problem we make sure that the cache size is at least
// dagParams.DifficultyAdjustmentWindowSize
ghostdagDataCacheSize := pruningWindowSizeForCaches
if ghostdagDataCacheSize < config.DifficultyAdjustmentWindowSize {
ghostdagDataCacheSize = config.DifficultyAdjustmentWindowSize
}
ghostdagDataStore := ghostdagdatastore.New(dbPrefix, ghostdagDataCacheSize, preallocateCaches)
headersSelectedTipStore := headersselectedtipstore.New(prefixBucket)
finalityStore := finalitystore.New(prefixBucket, 200, preallocateCaches)
headersSelectedChainStore := headersselectedchainstore.New(prefixBucket, pruningWindowSizeForCaches, preallocateCaches)
daaBlocksStore := daablocksstore.New(prefixBucket, pruningWindowSizeForCaches, int(config.FinalityDepth()), preallocateCaches)
headersSelectedTipStore := headersselectedtipstore.New(dbPrefix)
finalityStore := finalitystore.New(dbPrefix, 200, preallocateCaches)
headersSelectedChainStore := headersselectedchainstore.New(dbPrefix, pruningWindowSizeForCaches, preallocateCaches)
daaBlocksStore := daablocksstore.New(dbPrefix, pruningWindowSizeForCaches, int(config.FinalityDepth()), preallocateCaches)
blockRelationStores, reachabilityDataStores, ghostdagDataStores := dagStores(config, prefixBucket, pruningWindowSizePlusFinalityDepthForCache, pruningWindowSizeForCaches, preallocateCaches)
reachabilityManagers, dagTopologyManagers, ghostdagManagers, dagTraversalManagers := f.dagProcesses(config, dbManager, blockHeaderStore, daaWindowStore, blockRelationStores, reachabilityDataStores, ghostdagDataStores)
blockRelationStore := blockRelationStores[0]
reachabilityDataStore := reachabilityDataStores[0]
ghostdagDataStore := ghostdagDataStores[0]
reachabilityManager := reachabilityManagers[0]
dagTopologyManager := dagTopologyManagers[0]
ghostdagManager := ghostdagManagers[0]
dagTraversalManager := dagTraversalManagers[0]
// Processes
reachabilityManager := reachabilitymanager.New(
blockParentBuilder := blockparentbuilder.New(
dbManager,
ghostdagDataStore,
reachabilityDataStore)
dagTopologyManager := dagtopologymanager.New(
dbManager,
reachabilityManager,
blockRelationStore,
ghostdagDataStore)
ghostdagManager := f.ghostdagConstructor(
dbManager,
dagTopologyManager,
ghostdagDataStore,
blockHeaderStore,
config.K,
config.GenesisHash)
dagTraversalManager := dagtraversalmanager.New(
dbManager,
dagTopologyManager,
ghostdagDataStore,
reachabilityDataStore,
ghostdagManager,
consensusStateStore,
daaWindowStore,
config.GenesisHash)
pruningStore,
)
pastMedianTimeManager := f.pastMedianTimeConsructor(
config.TimestampDeviationTolerance,
dbManager,
@@ -210,12 +198,22 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
config.GenesisBlock.Header.Bits())
coinbaseManager := coinbasemanager.New(
dbManager,
config.SubsidyReductionInterval,
config.BaseSubsidy,
config.SubsidyGenesisReward,
config.MinSubsidy,
config.MaxSubsidy,
config.SubsidyPastRewardMultiplier,
config.SubsidyMergeSetRewardMultiplier,
config.CoinbasePayloadScriptPublicKeyMaxLength,
config.GenesisHash,
config.FixedSubsidySwitchPruningPointInterval,
config.FixedSubsidySwitchHashRateThreshold,
dagTraversalManager,
ghostdagDataStore,
acceptanceDataStore,
daaBlocksStore)
daaBlocksStore,
blockStore,
pruningStore,
blockHeaderStore)
headerTipsManager := headersselectedtipmanager.New(dbManager, dagTopologyManager, dagTraversalManager,
ghostdagManager, headersSelectedTipStore, headersSelectedChainStore)
genesisHash := config.GenesisHash
@@ -232,39 +230,8 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
dagTraversalManager,
finalityManager,
ghostdagDataStore)
blockValidator := blockvalidator.New(
config.PowMax,
config.SkipProofOfWork,
genesisHash,
config.EnableNonNativeSubnetworks,
config.MaxBlockMass,
config.MergeSetSizeLimit,
config.MaxBlockParents,
config.TimestampDeviationTolerance,
config.TargetTimePerBlock,
dbManager,
difficultyManager,
pastMedianTimeManager,
transactionValidator,
ghostdagManager,
dagTopologyManager,
dagTraversalManager,
coinbaseManager,
mergeDepthManager,
reachabilityManager,
pruningStore,
blockStore,
ghostdagDataStore,
blockHeaderStore,
blockStatusStore,
reachabilityDataStore,
consensusStateStore,
)
consensusStateManager, err := consensusstatemanager.New(
dbManager,
config.PruningDepth(),
config.MaxBlockParents,
config.MergeSetSizeLimit,
genesisHash,
@@ -274,8 +241,6 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
dagTraversalManager,
pastMedianTimeManager,
transactionValidator,
blockValidator,
reachabilityManager,
coinbaseManager,
mergeDepthManager,
finalityManager,
@@ -302,6 +267,8 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
dagTraversalManager,
dagTopologyManager,
consensusStateManager,
finalityManager,
consensusStateStore,
ghostdagDataStore,
pruningStore,
@@ -325,6 +292,41 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
config.DifficultyAdjustmentWindowSize,
)
blockValidator := blockvalidator.New(
config.PowMax,
config.SkipProofOfWork,
genesisHash,
config.EnableNonNativeSubnetworks,
config.MaxBlockMass,
config.MergeSetSizeLimit,
config.MaxBlockParents,
config.TimestampDeviationTolerance,
config.TargetTimePerBlock,
dbManager,
difficultyManager,
pastMedianTimeManager,
transactionValidator,
ghostdagManagers,
dagTopologyManagers,
dagTraversalManager,
coinbaseManager,
mergeDepthManager,
reachabilityManagers,
finalityManager,
blockParentBuilder,
pruningManager,
pruningStore,
blockStore,
ghostdagDataStores,
blockHeaderStore,
blockStatusStore,
reachabilityDataStore,
consensusStateStore,
daaBlocksStore,
)
syncManager := syncmanager.New(
dbManager,
genesisHash,
@@ -343,17 +345,23 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
blockBuilder := blockbuilder.New(
dbManager,
genesisHash,
difficultyManager,
pastMedianTimeManager,
coinbaseManager,
consensusStateManager,
ghostdagManager,
transactionValidator,
finalityManager,
blockParentBuilder,
pruningManager,
acceptanceDataStore,
blockRelationStore,
multisetStore,
ghostdagDataStore,
daaBlocksStore,
)
blockProcessor := blockprocessor.New(
@@ -389,6 +397,25 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
daaBlocksStore,
daaWindowStore)
pruningProofManager := pruningproofmanager.New(
dbManager,
dagTopologyManagers,
ghostdagManagers,
reachabilityManagers,
dagTraversalManagers,
ghostdagDataStores,
pruningStore,
blockHeaderStore,
blockStatusStore,
finalityStore,
consensusStateStore,
genesisHash,
config.K,
config.PruningProofM,
)
c := &consensus{
lock: &sync.Mutex{},
databaseContext: dbManager,
@@ -404,27 +431,28 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
pastMedianTimeManager: pastMedianTimeManager,
blockValidator: blockValidator,
coinbaseManager: coinbaseManager,
dagTopologyManager: dagTopologyManager,
dagTopologyManagers: dagTopologyManagers,
dagTraversalManager: dagTraversalManager,
difficultyManager: difficultyManager,
ghostdagManager: ghostdagManager,
ghostdagManagers: ghostdagManagers,
headerTipsManager: headerTipsManager,
mergeDepthManager: mergeDepthManager,
pruningManager: pruningManager,
reachabilityManager: reachabilityManager,
reachabilityManagers: reachabilityManagers,
finalityManager: finalityManager,
pruningProofManager: pruningProofManager,
acceptanceDataStore: acceptanceDataStore,
blockStore: blockStore,
blockHeaderStore: blockHeaderStore,
pruningStore: pruningStore,
ghostdagDataStore: ghostdagDataStore,
ghostdagDataStores: ghostdagDataStores,
blockStatusStore: blockStatusStore,
blockRelationStore: blockRelationStore,
blockRelationStores: blockRelationStores,
consensusStateStore: consensusStateStore,
headersSelectedTipStore: headersSelectedTipStore,
multisetStore: multisetStore,
reachabilityDataStore: reachabilityDataStore,
reachabilityDataStores: reachabilityDataStores,
utxoDiffStore: utxoDiffStore,
finalityStore: finalityStore,
headersSelectedChainStore: headersSelectedChainStore,
@@ -491,7 +519,7 @@ func (f *factory) NewTestConsensus(config *Config, testName string) (
database: db,
testConsensusStateManager: testConsensusStateManager,
testReachabilityManager: reachabilitymanager.NewTestReachabilityManager(consensusAsImplementation.
reachabilityManager),
reachabilityManagers[0]),
testTransactionValidator: testTransactionValidator,
}
tstConsensus.testBlockBuilder = blockbuilder.NewTestBlockBuilder(consensusAsImplementation.blockBuilder, tstConsensus)
@@ -530,3 +558,84 @@ func (f *factory) SetTestLevelDBCacheSize(cacheSizeMiB int) {
func (f *factory) SetTestPreAllocateCache(preallocateCaches bool) {
f.preallocateCaches = &preallocateCaches
}
func dagStores(config *Config,
prefixBucket model.DBBucket,
pruningWindowSizePlusFinalityDepthForCache, pruningWindowSizeForCaches int,
preallocateCaches bool) ([]model.BlockRelationStore, []model.ReachabilityDataStore, []model.GHOSTDAGDataStore) {
blockRelationStores := make([]model.BlockRelationStore, constants.MaxBlockLevel+1)
reachabilityDataStores := make([]model.ReachabilityDataStore, constants.MaxBlockLevel+1)
ghostdagDataStores := make([]model.GHOSTDAGDataStore, constants.MaxBlockLevel+1)
ghostdagDataCacheSize := pruningWindowSizeForCaches * 2
if ghostdagDataCacheSize < config.DifficultyAdjustmentWindowSize {
ghostdagDataCacheSize = config.DifficultyAdjustmentWindowSize
}
for i := 0; i <= constants.MaxBlockLevel; i++ {
prefixBucket := prefixBucket.Bucket([]byte{byte(i)})
if i == 0 {
blockRelationStores[i] = blockrelationstore.New(prefixBucket, pruningWindowSizePlusFinalityDepthForCache, preallocateCaches)
reachabilityDataStores[i] = reachabilitydatastore.New(prefixBucket, pruningWindowSizePlusFinalityDepthForCache*2, preallocateCaches)
ghostdagDataStores[i] = ghostdagdatastore.New(prefixBucket, ghostdagDataCacheSize, preallocateCaches)
} else {
blockRelationStores[i] = blockrelationstore.New(prefixBucket, 200, false)
reachabilityDataStores[i] = reachabilitydatastore.New(prefixBucket, 200, false)
ghostdagDataStores[i] = ghostdagdatastore.New(prefixBucket, 200, false)
}
}
return blockRelationStores, reachabilityDataStores, ghostdagDataStores
}
func (f *factory) dagProcesses(config *Config,
dbManager model.DBManager,
blockHeaderStore model.BlockHeaderStore,
daaWindowStore model.BlocksWithTrustedDataDAAWindowStore,
blockRelationStores []model.BlockRelationStore,
reachabilityDataStores []model.ReachabilityDataStore,
ghostdagDataStores []model.GHOSTDAGDataStore) (
[]model.ReachabilityManager,
[]model.DAGTopologyManager,
[]model.GHOSTDAGManager,
[]model.DAGTraversalManager,
) {
reachabilityManagers := make([]model.ReachabilityManager, constants.MaxBlockLevel+1)
dagTopologyManagers := make([]model.DAGTopologyManager, constants.MaxBlockLevel+1)
ghostdagManagers := make([]model.GHOSTDAGManager, constants.MaxBlockLevel+1)
dagTraversalManagers := make([]model.DAGTraversalManager, constants.MaxBlockLevel+1)
for i := 0; i <= constants.MaxBlockLevel; i++ {
reachabilityManagers[i] = reachabilitymanager.New(
dbManager,
ghostdagDataStores[i],
reachabilityDataStores[i])
dagTopologyManagers[i] = dagtopologymanager.New(
dbManager,
reachabilityManagers[i],
blockRelationStores[i],
ghostdagDataStores[i])
ghostdagManagers[i] = f.ghostdagConstructor(
dbManager,
dagTopologyManagers[i],
ghostdagDataStores[i],
blockHeaderStore,
config.K,
config.GenesisHash)
dagTraversalManagers[i] = dagtraversalmanager.New(
dbManager,
dagTopologyManagers[i],
ghostdagDataStores[i],
reachabilityDataStores[i],
ghostdagManagers[i],
daaWindowStore,
config.GenesisHash)
}
return reachabilityManagers, dagTopologyManagers, ghostdagManagers, dagTraversalManagers
}

View File

@@ -1,5 +1,7 @@
package externalapi
import "math/big"
// DomainBlock represents a Kaspa block
type DomainBlock struct {
Header BlockHeader
@@ -55,13 +57,19 @@ type BlockHeader interface {
// BaseBlockHeader represents the header part of a Kaspa block
type BaseBlockHeader interface {
Version() uint16
ParentHashes() []*DomainHash
Parents() []BlockLevelParents
ParentsAtLevel(level int) BlockLevelParents
DirectParents() BlockLevelParents
HashMerkleRoot() *DomainHash
AcceptedIDMerkleRoot() *DomainHash
UTXOCommitment() *DomainHash
TimeInMilliseconds() int64
Bits() uint32
Nonce() uint64
DAAScore() uint64
BlueScore() uint64
BlueWork() *big.Int
PruningPoint() *DomainHash
Equal(other BaseBlockHeader) bool
}

View File

@@ -3,6 +3,7 @@ package externalapi_test
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"math/big"
"reflect"
"testing"
)
@@ -96,32 +97,37 @@ func initTestTwoTransactions() []*externalapi.DomainTransaction {
}
func initTestBlockStructsForClone() []*externalapi.DomainBlock {
tests := []*externalapi.DomainBlock{
{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{0})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{0})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
4,
5,
6,
7,
8,
big.NewInt(9),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{10}),
),
initTestBaseTransactions(),
}, {
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
4,
5,
6,
7,
8,
big.NewInt(9),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{10}),
),
initTestBaseTransactions(),
},
@@ -143,13 +149,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{0})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{0})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
4,
5,
6,
7,
8,
big.NewInt(9),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{10}),
),
initTestBaseTransactions()},
expectedResult: false,
@@ -159,13 +169,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
baseBlock: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -178,13 +192,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestAnotherTransactions(),
},
@@ -193,13 +211,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -208,16 +230,20 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
},
}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -226,13 +252,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{100})}, // Changed
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{100})}}, // Changed
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestTwoTransactions(),
},
@@ -241,13 +271,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{100}), // Changed
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -256,13 +290,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{100}), // Changed
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -271,13 +309,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{100}), // Changed
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -286,13 +328,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
100, // Changed
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -301,13 +347,17 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
100, // Changed
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
@@ -316,13 +366,93 @@ func initTestBlockStructsForEqual() *[]TestBlockStruct {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
100, // Changed
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
expectedResult: false,
}, {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
100, // Changed
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
expectedResult: false,
}, {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
100, // Changed
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
expectedResult: false,
}, {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(100), // Changed
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{11}),
),
initTestBaseTransactions(),
},
expectedResult: false,
}, {
block: &externalapi.DomainBlock{
blockheader.NewImmutableBlockHeader(
0,
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{3}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{4}),
5,
6,
7,
8,
9,
big.NewInt(10),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{100}), // Changed
),
initTestBaseTransactions(),
},

View File

@@ -12,8 +12,11 @@ type BlockWithTrustedData struct {
}
// TrustedDataDataDAABlock is a block that belongs to BlockWithTrustedData.DAAWindow
// TODO: Currently each trusted data block contains the entire set of blocks in its
// DAA window. There's a lot of duplication between the DAA windows of trusted blocks.
// This duplication should be optimized out.
type TrustedDataDataDAABlock struct {
Header BlockHeader
Block *DomainBlock
GHOSTDAGData *BlockGHOSTDAGData
}

View File

@@ -0,0 +1,63 @@
package externalapi
// BlockLevelParents represents the parents within a single super-block level
// See https://github.com/kaspanet/research/issues/3 for further details
type BlockLevelParents []*DomainHash
// Equal returns true if this BlockLevelParents is equal to `other`
func (sl BlockLevelParents) Equal(other BlockLevelParents) bool {
if len(sl) != len(other) {
return false
}
for _, thisHash := range sl {
found := false
for _, otherHash := range other {
if thisHash.Equal(otherHash) {
found = true
break
}
}
if !found {
return false
}
}
return true
}
// Clone creates a clone of this BlockLevelParents
func (sl BlockLevelParents) Clone() BlockLevelParents {
return CloneHashes(sl)
}
// Contains returns true if this BlockLevelParents contains the given blockHash
func (sl BlockLevelParents) Contains(blockHash *DomainHash) bool {
for _, blockLevelParent := range sl {
if blockLevelParent.Equal(blockHash) {
return true
}
}
return false
}
// ParentsEqual returns true if all the BlockLevelParents in `a` and `b` are
// equal pairwise
func ParentsEqual(a, b []BlockLevelParents) bool {
if len(a) != len(b) {
return false
}
for i, blockLevelParents := range a {
if !blockLevelParents.Equal(b[i]) {
return false
}
}
return true
}
// CloneParents creates a clone of the given BlockLevelParents slice
func CloneParents(parents []BlockLevelParents) []BlockLevelParents {
clone := make([]BlockLevelParents, len(parents))
for i, blockLevelParents := range parents {
clone[i] = blockLevelParents.Clone()
}
return clone
}
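Note that Equal is order-insensitive: it only checks equal length plus mutual membership. A short usage sketch with synthetic hashes:
a := externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})
b := externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{2})
level := externalapi.BlockLevelParents{a, b}
reordered := externalapi.BlockLevelParents{b, a}
fmt.Println(level.Equal(reordered)) // true: ordering is ignored
fmt.Println(level.Contains(a))      // true
fmt.Println(externalapi.ParentsEqual(
	[]externalapi.BlockLevelParents{level},
	[]externalapi.BlockLevelParents{reordered})) // true, compared level by level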

View File

@@ -7,6 +7,10 @@ type Consensus interface {
ValidateAndInsertBlock(block *DomainBlock, shouldValidateAgainstUTXO bool) (*BlockInsertionResult, error)
ValidateAndInsertBlockWithTrustedData(block *BlockWithTrustedData, validateUTXO bool) (*BlockInsertionResult, error)
ValidateTransactionAndPopulateWithConsensusData(transaction *DomainTransaction) error
ImportPruningPoints(pruningPoints []BlockHeader) error
BuildPruningPointProof() (*PruningPointProof, error)
ValidatePruningPointProof(pruningPointProof *PruningPointProof) error
ApplyPruningPointProof(pruningPointProof *PruningPointProof) error
GetBlock(blockHash *DomainHash) (*DomainBlock, error)
GetBlockEvenIfHeaderOnly(blockHash *DomainHash) (*DomainBlock, error)
@@ -20,6 +24,7 @@ type Consensus interface {
GetPruningPointUTXOs(expectedPruningPointHash *DomainHash, fromOutpoint *DomainOutpoint, limit int) ([]*OutpointAndUTXOEntryPair, error)
GetVirtualUTXOs(expectedVirtualParents []*DomainHash, fromOutpoint *DomainOutpoint, limit int) ([]*OutpointAndUTXOEntryPair, error)
PruningPoint() (*DomainHash, error)
PruningPointHeaders() ([]BlockHeader, error)
PruningPointAndItsAnticoneWithTrustedData() ([]*BlockWithTrustedData, error)
ClearImportedPruningPointData() error
AppendImportedPruningPointUTXOs(outpointAndUTXOEntryPairs []*OutpointAndUTXOEntryPair) error
@@ -33,6 +38,7 @@ type Consensus interface {
GetVirtualInfo() (*VirtualInfo, error)
GetVirtualDAAScore() (uint64, error)
IsValidPruningPoint(blockHash *DomainHash) (bool, error)
ArePruningPointsViolatingFinality(pruningPoints []BlockHeader) (bool, error)
GetVirtualSelectedParentChainFromBlock(blockHash *DomainHash) (*SelectedChainPath, error)
IsInSelectedParentChainOf(blockHashA *DomainHash, blockHashB *DomainHash) (bool, error)
GetHeadersSelectedTip() (*DomainHash, error)
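The four new proof methods suggest the following sync-time flow. This is inferred from the signatures alone rather than taken from the protocol code, and the syncer/syncee split and variable names are illustrative:
// Syncer side: assemble a proof for the current pruning point.
proof, err := syncerConsensus.BuildPruningPointProof()
if err != nil {
	return err
}
// Syncee side: check the proof before adopting anything from the peer...
if err := synceeConsensus.ValidatePruningPointProof(proof); err != nil {
	return err
}
// ...then apply it and import the pruning point headers it vouches for
// (pruningPointHeaders would arrive from the syncer as well).
if err := synceeConsensus.ApplyPruningPointProof(proof); err != nil {
	return err
}
if err := synceeConsensus.ImportPruningPoints(pruningPointHeaders); err != nil {
	return err
}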

View File

@@ -0,0 +1,6 @@
package externalapi
// PruningPointProof is the data structure holding the pruning point proof
type PruningPointProof struct {
Headers [][]BlockHeader
}
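Headers is a slice of slices, presumably indexed by super-block level first, matching the per-level stores elsewhere in this change. Under that assumption:
// proof.Headers[level][i] would be the i-th proof header at that level.
for level, headersAtLevel := range proof.Headers {
	fmt.Printf("level %d: %d headers\n", level, len(headersAtLevel))
}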

View File

@@ -5,15 +5,17 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// PruningStore represents a store for the current pruning state
type PruningStore interface {
Store
StagePruningPoint(stagingArea *StagingArea, pruningPointBlockHash *externalapi.DomainHash)
StagePreviousPruningPoint(stagingArea *StagingArea, oldPruningPointBlockHash *externalapi.DomainHash)
StagePruningPoint(dbContext DBWriter, stagingArea *StagingArea, pruningPointBlockHash *externalapi.DomainHash) error
StagePruningPointByIndex(dbContext DBReader, stagingArea *StagingArea,
pruningPointBlockHash *externalapi.DomainHash, index uint64) error
StagePruningPointCandidate(stagingArea *StagingArea, candidate *externalapi.DomainHash)
IsStaged(stagingArea *StagingArea) bool
PruningPointCandidate(dbContext DBReader, stagingArea *StagingArea) (*externalapi.DomainHash, error)
HasPruningPointCandidate(dbContext DBReader, stagingArea *StagingArea) (bool, error)
PreviousPruningPoint(dbContext DBReader, stagingArea *StagingArea) (*externalapi.DomainHash, error)
PruningPoint(dbContext DBReader, stagingArea *StagingArea) (*externalapi.DomainHash, error)
HasPruningPoint(dbContext DBReader, stagingArea *StagingArea) (bool, error)
CurrentPruningPointIndex(dbContext DBReader, stagingArea *StagingArea) (uint64, error)
PruningPointByIndex(dbContext DBReader, stagingArea *StagingArea, index uint64) (*externalapi.DomainHash, error)
StageStartUpdatingPruningPointUTXOSet(stagingArea *StagingArea)
HadStartedUpdatingPruningPointUTXOSet(dbContext DBWriter) (bool, error)

View File

@@ -0,0 +1,9 @@
package model
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// BlockParentBuilder exposes a method to build super-block parents for
// a given set of direct parents
type BlockParentBuilder interface {
BuildParents(stagingArea *StagingArea, directParentHashes []*externalapi.DomainHash) ([]externalapi.BlockLevelParents, error)
}
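A hedged usage sketch; the level-0 remark is an inference from DirectParents() on BaseBlockHeader, not something this interface guarantees:
parents, err := blockParentBuilder.BuildParents(stagingArea, directParentHashes)
if err != nil {
	return err
}
// parents[0] is expected to hold the direct (level-0) parents, while
// parents[i] for i > 0 holds the block's parents at super-block level i.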

View File

@@ -6,6 +6,7 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// coinbase transactions
type CoinbaseManager interface {
ExpectedCoinbaseTransaction(stagingArea *StagingArea, blockHash *externalapi.DomainHash,
coinbaseData *externalapi.DomainCoinbaseData) (*externalapi.DomainTransaction, error)
ExtractCoinbaseDataAndBlueScore(coinbaseTx *externalapi.DomainTransaction) (blueScore uint64, coinbaseData *externalapi.DomainCoinbaseData, err error)
coinbaseData *externalapi.DomainCoinbaseData, blockPruningPoint *externalapi.DomainHash) (*externalapi.DomainTransaction, error)
CalcBlockSubsidy(stagingArea *StagingArea, blockHash *externalapi.DomainHash, blockPruningPoint *externalapi.DomainHash) (uint64, error)
ExtractCoinbaseDataBlueScoreAndSubsidy(coinbaseTx *externalapi.DomainTransaction) (blueScore uint64, coinbaseData *externalapi.DomainCoinbaseData, subsidy uint64, err error)
}

View File

@@ -6,7 +6,8 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
type ConsensusStateManager interface {
AddBlock(stagingArea *StagingArea, blockHash *externalapi.DomainHash, updateVirtual bool) (*externalapi.SelectedChainPath, externalapi.UTXODiff, *UTXODiffReversalData, error)
PopulateTransactionWithUTXOEntries(stagingArea *StagingArea, transaction *externalapi.DomainTransaction) error
ImportPruningPoint(stagingArea *StagingArea, newPruningPoint *externalapi.DomainHash) error
ImportPruningPointUTXOSet(stagingArea *StagingArea, newPruningPoint *externalapi.DomainHash) error
ImportPruningPoints(stagingArea *StagingArea, pruningPoints []externalapi.BlockHeader) error
RestorePastUTXOSetIterator(stagingArea *StagingArea, blockHash *externalapi.DomainHash) (externalapi.ReadOnlyUTXOSetIterator, error)
CalculatePastUTXOAndAcceptanceData(stagingArea *StagingArea, blockHash *externalapi.DomainHash) (externalapi.UTXODiff, externalapi.AcceptanceData, Multiset, error)
GetVirtualSelectedParentChainFromBlock(stagingArea *StagingArea, blockHash *externalapi.DomainHash) (*externalapi.SelectedChainPath, error)

View File

@@ -5,13 +5,12 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// DAGTraversalManager exposes methods for traversing blocks
// in the DAG
type DAGTraversalManager interface {
BlockAtDepth(stagingArea *StagingArea, highHash *externalapi.DomainHash, depth uint64) (*externalapi.DomainHash, error)
LowestChainBlockAboveOrEqualToBlueScore(stagingArea *StagingArea, highHash *externalapi.DomainHash, blueScore uint64) (*externalapi.DomainHash, error)
// SelectedChildIterator should return a BlockIterator that iterates
// from lowHash (exclusive) to highHash (inclusive) over highHash's selected parent chain
SelectedChildIterator(stagingArea *StagingArea, highHash, lowHash *externalapi.DomainHash) (BlockIterator, error)
SelectedChild(stagingArea *StagingArea, highHash, lowHash *externalapi.DomainHash) (*externalapi.DomainHash, error)
Anticone(stagingArea *StagingArea, blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error)
AnticoneFromBlocks(stagingArea *StagingArea, tips []*externalapi.DomainHash, blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error)
AnticoneFromVirtualPOV(stagingArea *StagingArea, blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error)
BlockWindow(stagingArea *StagingArea, highHash *externalapi.DomainHash, windowSize int) ([]*externalapi.DomainHash, error)
BlockWindowWithGHOSTDAGData(stagingArea *StagingArea, highHash *externalapi.DomainHash, windowSize int) ([]*externalapi.BlockGHOSTDAGDataHashPair, error)

View File

@@ -6,9 +6,12 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
type PruningManager interface {
UpdatePruningPointByVirtual(stagingArea *StagingArea) error
IsValidPruningPoint(stagingArea *StagingArea, blockHash *externalapi.DomainHash) (bool, error)
ArePruningPointsViolatingFinality(stagingArea *StagingArea, pruningPoints []externalapi.BlockHeader) (bool, error)
ArePruningPointsInValidChain(stagingArea *StagingArea) (bool, error)
ClearImportedPruningPointData() error
AppendImportedPruningPointUTXOs(outpointAndUTXOEntryPairs []*externalapi.OutpointAndUTXOEntryPair) error
UpdatePruningPointIfRequired() error
PruneAllBlocksBelow(stagingArea *StagingArea, pruningPointHash *externalapi.DomainHash) error
PruningPointAndItsAnticoneWithTrustedData() ([]*externalapi.BlockWithTrustedData, error)
ExpectedHeaderPruningPoint(stagingArea *StagingArea, blockHash *externalapi.DomainHash) (*externalapi.DomainHash, error)
}

View File

@@ -0,0 +1,10 @@
package model
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// PruningProofManager builds, validates and applies pruning proofs.
type PruningProofManager interface {
BuildPruningPointProof(stagingArea *StagingArea) (*externalapi.PruningPointProof, error)
ValidatePruningPointProof(pruningPointProof *externalapi.PruningPointProof) error
ApplyPruningPointProof(stagingArea *StagingArea, pruningPointProof *externalapi.PruningPointProof) error
}

View File

@@ -9,29 +9,7 @@ type StagingShard interface {
}
// StagingShardID is used to identify each of the store's staging shards
type StagingShardID byte
// StagingShardID constants
const (
StagingShardIDAcceptanceData StagingShardID = iota
StagingShardIDBlockHeader
StagingShardIDBlockRelation
StagingShardIDBlockStatus
StagingShardIDBlock
StagingShardIDConsensusState
StagingShardIDDAABlocks
StagingShardIDFinality
StagingShardIDGHOSTDAG
StagingShardIDHeadersSelectedChain
StagingShardIDHeadersSelectedTip
StagingShardIDMultiset
StagingShardIDPruning
StagingShardIDReachabilityData
StagingShardIDUTXODiff
StagingShardIDDAAWindow
// Always leave StagingShardIDLen as the last constant
StagingShardIDLen
)
type StagingShardID uint64
// StagingArea is a single changeset inside the consensus database, similar to a transaction in a classic database.
// Each StagingArea consists of multiple StagingShards, one for each dataStore that has any changes within it.
@@ -41,16 +19,14 @@ const (
// When the StagingArea is being committed, it goes over all its shards and commits them one-by-one.
// Since Commit happens in a DatabaseTransaction, a StagingArea is atomic.
type StagingArea struct {
// shards is deliberately an array and not a map, as an optimization - since it's read very often, and
// reads from maps are relatively slow.
shards [StagingShardIDLen]StagingShard
shards []StagingShard
isCommitted bool
}
// NewStagingArea creates a new, empty staging area.
func NewStagingArea() *StagingArea {
return &StagingArea{
shards: [StagingShardIDLen]StagingShard{},
shards: []StagingShard{},
isCommitted: false,
}
}
@@ -58,6 +34,9 @@ func NewStagingArea() *StagingArea {
// GetOrCreateShard attempts to retrieve a shard with the given name.
// If it does not exist - a new shard is created using `createFunc`.
func (sa *StagingArea) GetOrCreateShard(shardID StagingShardID, createFunc func() StagingShard) StagingShard {
for uint64(len(sa.shards)) <= uint64(shardID) {
sa.shards = append(sa.shards, nil)
}
if sa.shards[shardID] == nil {
sa.shards[shardID] = createFunc()
}
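With the fixed enum gone, shard IDs come from util/staging.GenerateShardingID, and GetOrCreateShard grows the slice on demand to fit whatever ID it receives. A sketch of a generator compatible with that contract (the real implementation may differ, but an atomic counter suffices):
package staging
import (
	"sync/atomic"
	"github.com/kaspanet/kaspad/domain/consensus/model"
)
var lastShardID uint64
// GenerateShardingID hands out process-wide unique StagingShardIDs,
// giving each store instance its own slot in StagingArea.shards.
func GenerateShardingID() model.StagingShardID {
	return model.StagingShardID(atomic.AddUint64(&lastShardID, 1))
}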

View File

@@ -0,0 +1,17 @@
package model
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// SubDAG is a context-free representation of a partial DAG
type SubDAG struct {
RootHashes []*externalapi.DomainHash
TipHashes []*externalapi.DomainHash
Blocks map[externalapi.DomainHash]*SubDAGBlock
}
// SubDAGBlock represents a block in a SubDAG
type SubDAGBlock struct {
BlockHash *externalapi.DomainHash
ParentHashes []*externalapi.DomainHash
ChildHashes []*externalapi.DomainHash
}
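As a quick illustration of how the structure is meant to be consumed (not kaspad code), a breadth-first walk from the roots toward the tips:
// walk visits every reachable block in the SubDAG exactly once,
// starting from RootHashes and following ChildHashes.
func walk(subDAG *model.SubDAG, visit func(*model.SubDAGBlock)) {
	queue := append([]*externalapi.DomainHash{}, subDAG.RootHashes...)
	visited := make(map[externalapi.DomainHash]bool)
	for len(queue) > 0 {
		hash := queue[0]
		queue = queue[1:]
		if visited[*hash] {
			continue
		}
		visited[*hash] = true
		block := subDAG.Blocks[*hash]
		visit(block)
		queue = append(queue, block.ChildHashes...)
	}
}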

View File

@@ -59,6 +59,7 @@ type TestConsensus interface {
BlockStore() model.BlockStore
ConsensusStateStore() model.ConsensusStateStore
GHOSTDAGDataStore() model.GHOSTDAGDataStore
GHOSTDAGDataStores() []model.GHOSTDAGDataStore
HeaderTipsStore() model.HeaderSelectedTipStore
MultisetStore() model.MultisetStore
PruningStore() model.PruningStore

View File

@@ -1,6 +1,7 @@
package blockbuilder
import (
"math/big"
"sort"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
@@ -18,6 +19,7 @@ import (
type blockBuilder struct {
databaseContext model.DBManager
genesisHash *externalapi.DomainHash
difficultyManager model.DifficultyManager
pastMedianTimeManager model.PastMedianTimeManager
@@ -25,16 +27,21 @@ type blockBuilder struct {
consensusStateManager model.ConsensusStateManager
ghostdagManager model.GHOSTDAGManager
transactionValidator model.TransactionValidator
finalityManager model.FinalityManager
pruningManager model.PruningManager
blockParentBuilder model.BlockParentBuilder
acceptanceDataStore model.AcceptanceDataStore
blockRelationStore model.BlockRelationStore
multisetStore model.MultisetStore
ghostdagDataStore model.GHOSTDAGDataStore
daaBlocksStore model.DAABlocksStore
}
// New creates a new instance of a BlockBuilder
func New(
databaseContext model.DBManager,
genesisHash *externalapi.DomainHash,
difficultyManager model.DifficultyManager,
pastMedianTimeManager model.PastMedianTimeManager,
@@ -42,26 +49,36 @@ func New(
consensusStateManager model.ConsensusStateManager,
ghostdagManager model.GHOSTDAGManager,
transactionValidator model.TransactionValidator,
finalityManager model.FinalityManager,
blockParentBuilder model.BlockParentBuilder,
pruningManager model.PruningManager,
acceptanceDataStore model.AcceptanceDataStore,
blockRelationStore model.BlockRelationStore,
multisetStore model.MultisetStore,
ghostdagDataStore model.GHOSTDAGDataStore,
daaBlocksStore model.DAABlocksStore,
) model.BlockBuilder {
return &blockBuilder{
databaseContext: databaseContext,
databaseContext: databaseContext,
genesisHash: genesisHash,
difficultyManager: difficultyManager,
pastMedianTimeManager: pastMedianTimeManager,
coinbaseManager: coinbaseManager,
consensusStateManager: consensusStateManager,
ghostdagManager: ghostdagManager,
transactionValidator: transactionValidator,
finalityManager: finalityManager,
blockParentBuilder: blockParentBuilder,
pruningManager: pruningManager,
acceptanceDataStore: acceptanceDataStore,
blockRelationStore: blockRelationStore,
multisetStore: multisetStore,
ghostdagDataStore: ghostdagDataStore,
daaBlocksStore: daaBlocksStore,
}
}
@@ -86,13 +103,17 @@ func (bb *blockBuilder) buildBlock(stagingArea *model.StagingArea, coinbaseData
return nil, err
}
coinbase, err := bb.newBlockCoinbaseTransaction(stagingArea, coinbaseData)
newBlockPruningPoint, err := bb.newBlockPruningPoint(stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
coinbase, err := bb.newBlockCoinbaseTransaction(stagingArea, coinbaseData, newBlockPruningPoint)
if err != nil {
return nil, err
}
transactionsWithCoinbase := append([]*externalapi.DomainTransaction{coinbase}, transactions...)
header, err := bb.buildHeader(stagingArea, transactionsWithCoinbase)
header, err := bb.buildHeader(stagingArea, transactionsWithCoinbase, newBlockPruningPoint)
if err != nil {
return nil, err
}
@@ -154,15 +175,15 @@ func (bb *blockBuilder) validateTransaction(
}
func (bb *blockBuilder) newBlockCoinbaseTransaction(stagingArea *model.StagingArea,
coinbaseData *externalapi.DomainCoinbaseData) (*externalapi.DomainTransaction, error) {
coinbaseData *externalapi.DomainCoinbaseData, blockPruningPoint *externalapi.DomainHash) (*externalapi.DomainTransaction, error) {
return bb.coinbaseManager.ExpectedCoinbaseTransaction(stagingArea, model.VirtualBlockHash, coinbaseData)
return bb.coinbaseManager.ExpectedCoinbaseTransaction(stagingArea, model.VirtualBlockHash, coinbaseData, blockPruningPoint)
}
func (bb *blockBuilder) buildHeader(stagingArea *model.StagingArea, transactions []*externalapi.DomainTransaction) (
externalapi.BlockHeader, error) {
func (bb *blockBuilder) buildHeader(stagingArea *model.StagingArea, transactions []*externalapi.DomainTransaction,
newBlockPruningPoint *externalapi.DomainHash) (externalapi.BlockHeader, error) {
parentHashes, err := bb.newBlockParentHashes(stagingArea)
parents, err := bb.newBlockParents(stagingArea)
if err != nil {
return nil, err
}
@@ -183,26 +204,41 @@ func (bb *blockBuilder) buildHeader(stagingArea *model.StagingArea, transactions
if err != nil {
return nil, err
}
daaScore, err := bb.newBlockDAAScore(stagingArea)
if err != nil {
return nil, err
}
blueWork, err := bb.newBlockBlueWork(stagingArea)
if err != nil {
return nil, err
}
blueScore, err := bb.newBlockBlueScore(stagingArea)
if err != nil {
return nil, err
}
return blockheader.NewImmutableBlockHeader(
constants.MaxBlockVersion,
parentHashes,
parents,
hashMerkleRoot,
acceptedIDMerkleRoot,
utxoCommitment,
timeInMilliseconds,
bits,
0,
daaScore,
blueScore,
blueWork,
newBlockPruningPoint,
), nil
}
func (bb *blockBuilder) newBlockParentHashes(stagingArea *model.StagingArea) ([]*externalapi.DomainHash, error) {
func (bb *blockBuilder) newBlockParents(stagingArea *model.StagingArea) ([]externalapi.BlockLevelParents, error) {
virtualBlockRelations, err := bb.blockRelationStore.BlockRelation(bb.databaseContext, stagingArea, model.VirtualBlockHash)
if err != nil {
return nil, err
}
return virtualBlockRelations.Parents, nil
return bb.blockParentBuilder.BuildParents(stagingArea, virtualBlockRelations.Parents)
}
func (bb *blockBuilder) newBlockTime(stagingArea *model.StagingArea) (int64, error) {
@@ -276,3 +312,27 @@ func (bb *blockBuilder) newBlockUTXOCommitment(stagingArea *model.StagingArea) (
newBlockUTXOCommitment := newBlockMultiset.Hash()
return newBlockUTXOCommitment, nil
}
func (bb *blockBuilder) newBlockDAAScore(stagingArea *model.StagingArea) (uint64, error) {
return bb.daaBlocksStore.DAAScore(bb.databaseContext, stagingArea, model.VirtualBlockHash)
}
func (bb *blockBuilder) newBlockBlueWork(stagingArea *model.StagingArea) (*big.Int, error) {
virtualGHOSTDAGData, err := bb.ghostdagDataStore.Get(bb.databaseContext, stagingArea, model.VirtualBlockHash, false)
if err != nil {
return nil, err
}
return virtualGHOSTDAGData.BlueWork(), nil
}
func (bb *blockBuilder) newBlockBlueScore(stagingArea *model.StagingArea) (uint64, error) {
virtualGHOSTDAGData, err := bb.ghostdagDataStore.Get(bb.databaseContext, stagingArea, model.VirtualBlockHash, false)
if err != nil {
return 0, err
}
return virtualGHOSTDAGData.BlueScore(), nil
}
func (bb *blockBuilder) newBlockPruningPoint(stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) (*externalapi.DomainHash, error) {
return bb.pruningManager.ExpectedHeaderPruningPoint(stagingArea, blockHash)
}
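Taken together, the builder now threads four extra values into every header. A hedged recap of the extended constructor call, with the new trailing arguments annotated in the order used by buildHeader above (imports as in the surrounding file):

// newStyleHeader restates the extended NewImmutableBlockHeader signature as
// called above; the four trailing arguments are the fields added here.
func newStyleHeader(parents []externalapi.BlockLevelParents,
	hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment *externalapi.DomainHash,
	timeInMilliseconds int64, bits uint32, nonce, daaScore, blueScore uint64,
	blueWork *big.Int, pruningPoint *externalapi.DomainHash) externalapi.BlockHeader {

	return blockheader.NewImmutableBlockHeader(
		constants.MaxBlockVersion,
		parents, // per-level parents built by BlockParentBuilder
		hashMerkleRoot,
		acceptedIDMerkleRoot,
		utxoCommitment,
		timeInMilliseconds,
		bits,
		nonce,
		daaScore,     // new: from the DAA blocks store
		blueScore,    // new: from virtual GHOSTDAG data
		blueWork,     // new: from virtual GHOSTDAG data
		pruningPoint, // new: the expected header pruning point
	)
}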

View File

@@ -1,14 +1,18 @@
package blockbuilder
import (
"encoding/binary"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/model/testapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionhelper"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/pkg/errors"
"math/big"
"sort"
)
type testBlockBuilder struct {
@@ -63,8 +67,9 @@ func (bb *testBlockBuilder) BuildBlockWithParents(parentHashes []*externalapi.Do
return block, diff, nil
}
func (bb *testBlockBuilder) buildUTXOInvalidHeader(stagingArea *model.StagingArea, parentHashes []*externalapi.DomainHash,
bits uint32, transactions []*externalapi.DomainTransaction) (externalapi.BlockHeader, error) {
func (bb *testBlockBuilder) buildUTXOInvalidHeader(stagingArea *model.StagingArea,
parentHashes []*externalapi.DomainHash, bits uint32, daaScore, blueScore uint64, blueWork *big.Int,
transactions []*externalapi.DomainTransaction) (externalapi.BlockHeader, error) {
timeInMilliseconds, err := bb.minBlockTime(stagingArea, tempBlockHash)
if err != nil {
@@ -73,24 +78,44 @@ func (bb *testBlockBuilder) buildUTXOInvalidHeader(stagingArea *model.StagingAre
hashMerkleRoot := bb.newBlockHashMerkleRoot(transactions)
pruningPoint, err := bb.newBlockPruningPoint(stagingArea, tempBlockHash)
if err != nil {
return nil, err
}
parents, err := bb.blockParentBuilder.BuildParents(stagingArea, parentHashes)
if err != nil {
return nil, err
}
for _, blockLevelParents := range parents {
sort.Slice(blockLevelParents, func(i, j int) bool {
return blockLevelParents[i].Less(blockLevelParents[j])
})
}
bb.nonceCounter++
return blockheader.NewImmutableBlockHeader(
constants.MaxBlockVersion,
parentHashes,
parents,
hashMerkleRoot,
&externalapi.DomainHash{},
&externalapi.DomainHash{},
timeInMilliseconds,
bits,
bb.nonceCounter,
daaScore,
blueScore,
blueWork,
pruningPoint,
), nil
}
func (bb *testBlockBuilder) buildHeaderWithParents(stagingArea *model.StagingArea, parentHashes []*externalapi.DomainHash,
bits uint32, transactions []*externalapi.DomainTransaction, acceptanceData externalapi.AcceptanceData, multiset model.Multiset) (
externalapi.BlockHeader, error) {
func (bb *testBlockBuilder) buildHeaderWithParents(stagingArea *model.StagingArea,
parentHashes []*externalapi.DomainHash, bits uint32, transactions []*externalapi.DomainTransaction,
acceptanceData externalapi.AcceptanceData, multiset model.Multiset, daaScore, blueScore uint64, blueWork *big.Int) (externalapi.BlockHeader, error) {
header, err := bb.buildUTXOInvalidHeader(stagingArea, parentHashes, bits, transactions)
header, err := bb.buildUTXOInvalidHeader(stagingArea, parentHashes, bits, daaScore, blueScore, blueWork, transactions)
if err != nil {
return nil, err
}
@@ -104,13 +129,17 @@ func (bb *testBlockBuilder) buildHeaderWithParents(stagingArea *model.StagingAre
return blockheader.NewImmutableBlockHeader(
header.Version(),
header.ParentHashes(),
header.Parents(),
hashMerkleRoot,
acceptedIDMerkleRoot,
utxoCommitment,
header.TimeInMilliseconds(),
header.Bits(),
header.Nonce(),
header.DAAScore(),
header.BlueScore(),
header.BlueWork(),
header.PruningPoint(),
), nil
}
@@ -141,11 +170,17 @@ func (bb *testBlockBuilder) buildBlockWithParents(stagingArea *model.StagingArea
if err != nil {
return nil, nil, err
}
daaScore, err := bb.daaBlocksStore.DAAScore(bb.databaseContext, stagingArea, tempBlockHash)
if err != nil {
return nil, nil, err
}
ghostdagData, err := bb.ghostdagDataStore.Get(bb.databaseContext, stagingArea, tempBlockHash, false)
if err != nil {
return nil, nil, err
}
blueWork := ghostdagData.BlueWork()
blueScore := ghostdagData.BlueScore()
selectedParentStatus, err := bb.testConsensus.ConsensusStateManager().ResolveBlockStatus(
stagingArea, ghostdagData.SelectedParent(), false)
@@ -165,14 +200,23 @@ func (bb *testBlockBuilder) buildBlockWithParents(stagingArea *model.StagingArea
bb.acceptanceDataStore.Stage(stagingArea, tempBlockHash, acceptanceData)
coinbase, err := bb.coinbaseManager.ExpectedCoinbaseTransaction(stagingArea, tempBlockHash, coinbaseData)
pruningPoint, err := bb.newBlockPruningPoint(stagingArea, tempBlockHash)
if err != nil {
return nil, nil, err
}
coinbase, err := bb.coinbaseManager.ExpectedCoinbaseTransaction(stagingArea, tempBlockHash, coinbaseData, pruningPoint)
if err != nil {
return nil, nil, err
}
transactionsWithCoinbase := append([]*externalapi.DomainTransaction{coinbase}, transactions...)
err = bb.testConsensus.ReachabilityManager().AddBlock(stagingArea, tempBlockHash)
if err != nil {
return nil, nil, err
}
header, err := bb.buildHeaderWithParents(
stagingArea, parentHashes, bits, transactionsWithCoinbase, acceptanceData, multiset)
stagingArea, parentHashes, bits, transactionsWithCoinbase, acceptanceData, multiset, daaScore, blueScore, blueWork)
if err != nil {
return nil, nil, err
}
@@ -206,21 +250,40 @@ func (bb *testBlockBuilder) BuildUTXOInvalidBlock(parentHashes []*externalapi.Do
return nil, err
}
// We use genesis transactions so we'll have something to build merkle root and coinbase with
genesisTransactions := bb.testConsensus.DAGParams().GenesisBlock.Transactions
bits, err := bb.difficultyManager.RequiredDifficulty(stagingArea, tempBlockHash)
bits, err := bb.difficultyManager.StageDAADataAndReturnRequiredDifficulty(stagingArea, tempBlockHash, false)
if err != nil {
return nil, err
}
daaScore, err := bb.daaBlocksStore.DAAScore(bb.databaseContext, stagingArea, tempBlockHash)
if err != nil {
return nil, err
}
header, err := bb.buildUTXOInvalidHeader(stagingArea, parentHashes, bits, genesisTransactions)
ghostdagData, err := bb.ghostdagDataStore.Get(bb.databaseContext, stagingArea, tempBlockHash, false)
if err != nil {
return nil, err
}
blueWork := ghostdagData.BlueWork()
blueScore := ghostdagData.BlueScore()
// We use the genesis coinbase so that we'll have something to build merkle root and a new coinbase with
genesisTransactions := bb.testConsensus.DAGParams().GenesisBlock.Transactions
genesisCoinbase := genesisTransactions[transactionhelper.CoinbaseTransactionIndex].Clone()
binary.LittleEndian.PutUint64(genesisCoinbase.Payload[:8], ghostdagData.BlueScore())
transactions := []*externalapi.DomainTransaction{genesisCoinbase}
err = bb.testConsensus.ReachabilityManager().AddBlock(stagingArea, tempBlockHash)
if err != nil {
return nil, err
}
header, err := bb.buildUTXOInvalidHeader(stagingArea, parentHashes, bits, daaScore, blueScore, blueWork, transactions)
if err != nil {
return nil, err
}
return &externalapi.DomainBlock{
Header: header,
Transactions: genesisTransactions,
Transactions: transactions,
}, nil
}

View File

@@ -0,0 +1,216 @@
package blockparentbuilder
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashset"
"github.com/kaspanet/kaspad/domain/consensus/utils/pow"
"github.com/pkg/errors"
)
type blockParentBuilder struct {
databaseContext model.DBManager
blockHeaderStore model.BlockHeaderStore
dagTopologyManager model.DAGTopologyManager
reachabilityDataStore model.ReachabilityDataStore
pruningStore model.PruningStore
}
// New creates a new instance of a BlockParentBuilder
func New(
databaseContext model.DBManager,
blockHeaderStore model.BlockHeaderStore,
dagTopologyManager model.DAGTopologyManager,
reachabilityDataStore model.ReachabilityDataStore,
pruningStore model.PruningStore,
) model.BlockParentBuilder {
return &blockParentBuilder{
databaseContext: databaseContext,
blockHeaderStore: blockHeaderStore,
dagTopologyManager: dagTopologyManager,
reachabilityDataStore: reachabilityDataStore,
pruningStore: pruningStore,
}
}
func (bpb *blockParentBuilder) BuildParents(stagingArea *model.StagingArea,
directParentHashes []*externalapi.DomainHash) ([]externalapi.BlockLevelParents, error) {
// Later on we'll mutate the direct parent hashes, so we first clone them.
directParentHashesCopy := make([]*externalapi.DomainHash, len(directParentHashes))
copy(directParentHashesCopy, directParentHashes)
pruningPoint, err := bpb.pruningStore.PruningPoint(bpb.databaseContext, stagingArea)
if err != nil {
return nil, err
}
// The first candidates to be added should be from a parent in the future of the pruning
// point, so later on we'll know that every block that doesn't have reachability data
// (i.e. is pruned) is necessarily in the past of the current candidates and cannot be
// considered a valid candidate.
// This is why we sort the direct parent headers so that the first one is in the future
// of the pruning point.
directParentHeaders := make([]externalapi.BlockHeader, len(directParentHashesCopy))
firstParentInFutureOfPruningPointIndex := 0
foundFirstParentInFutureOfPruningPoint := false
for i, directParentHash := range directParentHashesCopy {
isInFutureOfPruningPoint, err := bpb.dagTopologyManager.IsAncestorOf(stagingArea, pruningPoint, directParentHash)
if err != nil {
return nil, err
}
if !isInFutureOfPruningPoint {
continue
}
firstParentInFutureOfPruningPointIndex = i
foundFirstParentInFutureOfPruningPoint = true
break
}
if !foundFirstParentInFutureOfPruningPoint {
return nil, errors.New("BuildParents should get at least one parent in the future of the pruning point")
}
oldFirstDirectParent := directParentHashesCopy[0]
directParentHashesCopy[0] = directParentHashesCopy[firstParentInFutureOfPruningPointIndex]
directParentHashesCopy[firstParentInFutureOfPruningPointIndex] = oldFirstDirectParent
for i, directParentHash := range directParentHashesCopy {
directParentHeader, err := bpb.blockHeaderStore.BlockHeader(bpb.databaseContext, stagingArea, directParentHash)
if err != nil {
return nil, err
}
directParentHeaders[i] = directParentHeader
}
type blockToReferences map[externalapi.DomainHash][]*externalapi.DomainHash
candidatesByLevelToReferenceBlocksMap := make(map[int]blockToReferences)
// Direct parents are guaranteed to be in one another's anticone, so add them all to
// all the block levels they occupy
for _, directParentHeader := range directParentHeaders {
directParentHash := consensushashing.HeaderHash(directParentHeader)
blockLevel := pow.BlockLevel(directParentHeader)
for i := 0; i <= blockLevel; i++ {
if _, exists := candidatesByLevelToReferenceBlocksMap[i]; !exists {
candidatesByLevelToReferenceBlocksMap[i] = make(map[externalapi.DomainHash][]*externalapi.DomainHash)
}
candidatesByLevelToReferenceBlocksMap[i][*directParentHash] = []*externalapi.DomainHash{directParentHash}
}
}
virtualGenesisChildren, err := bpb.dagTopologyManager.Children(stagingArea, model.VirtualGenesisBlockHash)
if err != nil {
return nil, err
}
virtualGenesisChildrenHeaders := make(map[externalapi.DomainHash]externalapi.BlockHeader, len(virtualGenesisChildren))
for _, child := range virtualGenesisChildren {
virtualGenesisChildrenHeaders[*child], err = bpb.blockHeaderStore.BlockHeader(bpb.databaseContext, stagingArea, child)
if err != nil {
return nil, err
}
}
for _, directParentHeader := range directParentHeaders {
for blockLevel, blockLevelParentsInHeader := range directParentHeader.Parents() {
isEmptyLevel := false
if _, exists := candidatesByLevelToReferenceBlocksMap[blockLevel]; !exists {
candidatesByLevelToReferenceBlocksMap[blockLevel] = make(map[externalapi.DomainHash][]*externalapi.DomainHash)
isEmptyLevel = true
}
for _, parent := range blockLevelParentsInHeader {
hasReachabilityData, err := bpb.reachabilityDataStore.HasReachabilityData(bpb.databaseContext, stagingArea, parent)
if err != nil {
return nil, err
}
// Reference blocks are the blocks that are used in reachability queries to check if
// a candidate is in the future of another candidate. In most cases this is just the
// block itself, but in the case where a block doesn't have reachability data we need
// to use some blocks in its future as reference instead.
// If we make sure to add a parent in the future of the pruning point first, we can
// know that any pruned candidate that is in the past of some blocks in the pruning
// point anticone must be a parent (at the relevant level) of one of the virtual
// genesis children in the pruning point anticone. So we can check which virtual
// genesis children have this block as a parent and use those blocks as
// reference blocks.
var referenceBlocks []*externalapi.DomainHash
if hasReachabilityData {
referenceBlocks = []*externalapi.DomainHash{parent}
} else {
for childHash, childHeader := range virtualGenesisChildrenHeaders {
childHash := childHash // Assign to a new pointer to avoid `range` pointer reuse
if childHeader.ParentsAtLevel(blockLevel).Contains(parent) {
referenceBlocks = append(referenceBlocks, &childHash)
}
}
}
if isEmptyLevel {
candidatesByLevelToReferenceBlocksMap[blockLevel][*parent] = referenceBlocks
continue
}
if !hasReachabilityData {
continue
}
toRemove := hashset.New()
isAncestorOfAnyCandidate := false
for candidate, candidateReferences := range candidatesByLevelToReferenceBlocksMap[blockLevel] {
candidate := candidate // Assign to a new pointer to avoid `range` pointer reuse
isInFutureOfCurrentCandidate, err := bpb.dagTopologyManager.IsAnyAncestorOf(stagingArea, candidateReferences, parent)
if err != nil {
return nil, err
}
if isInFutureOfCurrentCandidate {
toRemove.Add(&candidate)
continue
}
if isAncestorOfAnyCandidate {
continue
}
isAncestorOfCurrentCandidate, err := bpb.dagTopologyManager.IsAncestorOfAny(stagingArea, parent, candidateReferences)
if err != nil {
return nil, err
}
if isAncestorOfCurrentCandidate {
isAncestorOfAnyCandidate = true
}
}
if toRemove.Length() > 0 {
for hash := range toRemove {
delete(candidatesByLevelToReferenceBlocksMap[blockLevel], hash)
}
}
// We should add the block as a candidate if it's in the future of another candidate
// or in the anticone of all candidates.
if !isAncestorOfAnyCandidate || toRemove.Length() > 0 {
candidatesByLevelToReferenceBlocksMap[blockLevel][*parent] = referenceBlocks
}
}
}
}
parents := make([]externalapi.BlockLevelParents, len(candidatesByLevelToReferenceBlocksMap))
for blockLevel := 0; blockLevel < len(candidatesByLevelToReferenceBlocksMap); blockLevel++ {
levelBlocks := make(externalapi.BlockLevelParents, 0, len(candidatesByLevelToReferenceBlocksMap[blockLevel]))
for block := range candidatesByLevelToReferenceBlocksMap[blockLevel] {
block := block // Assign to a new pointer to avoid `range` pointer reuse
levelBlocks = append(levelBlocks, &block)
}
parents[blockLevel] = levelBlocks
}
return parents, nil
}
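Stripped of staging areas and reference blocks, the per-level filtering above reduces to two rules: a level parent evicts every candidate in its past, and it is itself kept only if it evicted someone or lies in the anticone of all remaining candidates. A simplified hedged sketch over a hypothetical isAncestor predicate ("a is in the past of b"):

// filterCandidates mirrors the loop above without the reachability plumbing.
func filterCandidates(candidates map[externalapi.DomainHash]bool,
	parent externalapi.DomainHash, isAncestor func(a, b externalapi.DomainHash) bool) {

	evictedAny := false
	isAncestorOfAnyCandidate := false
	for candidate := range candidates {
		if isAncestor(candidate, parent) {
			// parent is in the future of this candidate, superseding it.
			delete(candidates, candidate)
			evictedAny = true
		} else if isAncestor(parent, candidate) {
			isAncestorOfAnyCandidate = true
		}
	}
	// Keep the parent if it superseded a candidate or is in the anticone of all.
	if !isAncestorOfAnyCandidate || evictedAny {
		candidates[parent] = true
	}
}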

View File

@@ -102,9 +102,30 @@ func (bp *blockProcessor) validateAndInsertBlock(stagingArea *model.StagingArea,
}
}
err = bp.headerTipsManager.AddHeaderTip(stagingArea, blockHash)
if err != nil {
return nil, err
shouldAddHeaderSelectedTip := false
if !hasHeaderSelectedTip {
shouldAddHeaderSelectedTip = true
} else {
pruningPoint, err := bp.pruningStore.PruningPoint(bp.databaseContext, stagingArea)
if err != nil {
return nil, err
}
isInSelectedChainOfPruningPoint, err := bp.dagTopologyManager.IsInSelectedParentChainOf(stagingArea, pruningPoint, blockHash)
if err != nil {
return nil, err
}
// Don't set blocks in the anticone of the pruning point as header selected tip.
shouldAddHeaderSelectedTip = isInSelectedChainOfPruningPoint
}
if shouldAddHeaderSelectedTip {
err = bp.headerTipsManager.AddHeaderTip(stagingArea, blockHash)
if err != nil {
return nil, err
}
}
var selectedParentChainChanges *externalapi.SelectedChainPath
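The decision above can be restated as a pure function; a minimal hedged paraphrase of the new rule (the helper name is hypothetical):

// headerMayBecomeSelectedTip: the very first header tip is always accepted;
// afterwards a header qualifies only if the pruning point lies on its selected
// parent chain, i.e. the header is not in the pruning point's anticone.
func headerMayBecomeSelectedTip(hasHeaderSelectedTip, isInSelectedChainOfPruningPoint bool) bool {
	if !hasHeaderSelectedTip {
		return true
	}
	return isInSelectedChainOfPruningPoint
}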

View File

@@ -66,13 +66,17 @@ func TestBlockStatus(t *testing.T) {
}
disqualifiedBlock.Header = blockheader.NewImmutableBlockHeader(
disqualifiedBlock.Header.Version(),
disqualifiedBlock.Header.ParentHashes(),
disqualifiedBlock.Header.Parents(),
disqualifiedBlock.Header.HashMerkleRoot(),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{}), // This should disqualify the block
disqualifiedBlock.Header.UTXOCommitment(),
disqualifiedBlock.Header.TimeInMilliseconds(),
disqualifiedBlock.Header.Bits(),
disqualifiedBlock.Header.Nonce(),
disqualifiedBlock.Header.DAAScore(),
disqualifiedBlock.Header.BlueScore(),
disqualifiedBlock.Header.BlueWork(),
disqualifiedBlock.Header.PruningPoint(),
)
_, err = tc.ValidateAndInsertBlock(disqualifiedBlock, true)
@@ -89,13 +93,17 @@ func TestBlockStatus(t *testing.T) {
invalidBlock.Transactions[0].Version = constants.MaxTransactionVersion + 1 // This should invalidate the block
invalidBlock.Header = blockheader.NewImmutableBlockHeader(
disqualifiedBlock.Header.Version(),
disqualifiedBlock.Header.ParentHashes(),
disqualifiedBlock.Header.Parents(),
merkle.CalculateHashMerkleRoot(invalidBlock.Transactions),
disqualifiedBlock.Header.AcceptedIDMerkleRoot(),
disqualifiedBlock.Header.UTXOCommitment(),
disqualifiedBlock.Header.TimeInMilliseconds(),
disqualifiedBlock.Header.Bits(),
disqualifiedBlock.Header.Nonce(),
disqualifiedBlock.Header.DAAScore(),
disqualifiedBlock.Header.BlueScore(),
disqualifiedBlock.Header.BlueWork(),
disqualifiedBlock.Header.PruningPoint(),
)
_, err = tc.ValidateAndInsertBlock(invalidBlock, true)

View File

@@ -12,12 +12,13 @@ func (bp *blockProcessor) validateAndInsertBlockWithTrustedData(stagingArea *mod
blockHash := consensushashing.BlockHash(block.Block)
for i, daaBlock := range block.DAAWindow {
hash := consensushashing.HeaderHash(daaBlock.Header)
hash := consensushashing.BlockHash(daaBlock.Block)
bp.blocksWithTrustedDataDAAWindowStore.Stage(stagingArea, blockHash, uint64(i), &externalapi.BlockGHOSTDAGDataHashPair{
Hash: hash,
GHOSTDAGData: daaBlock.GHOSTDAGData,
})
bp.blockHeaderStore.Stage(stagingArea, hash, daaBlock.Header)
bp.blockStore.Stage(stagingArea, hash, daaBlock.Block)
bp.blockHeaderStore.Stage(stagingArea, hash, daaBlock.Block.Header)
}
blockReplacedGHOSTDAGData, err := bp.ghostdagDataWithoutPrunedBlocks(stagingArea, block.GHOSTDAGData[0].GHOSTDAGData)

View File

@@ -22,8 +22,18 @@ func (bp *blockProcessor) validateAndInsertImportedPruningPoint(
newPruningPointHash)
}
arePruningPointsInValidChain, err := bp.pruningManager.ArePruningPointsInValidChain(stagingArea)
if err != nil {
return err
}
if !arePruningPointsInValidChain {
return errors.Wrapf(ruleerrors.ErrInvalidPruningPointsChain, "pruning points do not compose a valid "+
"chain to genesis")
}
log.Infof("Updating consensus state manager according to the new pruning point %s", newPruningPointHash)
err = bp.consensusStateManager.ImportPruningPoint(stagingArea, newPruningPointHash)
err = bp.consensusStateManager.ImportPruningPointUTXOSet(stagingArea, newPruningPointHash)
if err != nil {
return err
}

View File

@@ -26,7 +26,6 @@ func addBlock(tc testapi.TestConsensus, parentHashes []*externalapi.DomainHash,
}
blockHash := consensushashing.BlockHash(block)
_, err = tc.ValidateAndInsertBlock(block, true)
if err != nil {
t.Fatalf("ValidateAndInsertBlock: %+v", err)
@@ -37,14 +36,64 @@ func addBlock(tc testapi.TestConsensus, parentHashes []*externalapi.DomainHash,
func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
syncConsensuses := func(tcSyncer, tcSyncee testapi.TestConsensus) {
pointAndItsAnticoneWithTrustedData, err := tcSyncer.PruningPointAndItsAnticoneWithTrustedData()
factory := consensus.NewFactory()
// This is done to reduce the pruning depth to 6 blocks
finalityDepth := 5
consensusConfig.FinalityDuration = time.Duration(finalityDepth) * consensusConfig.TargetTimePerBlock
consensusConfig.K = 0
consensusConfig.PruningProofM = 1
syncConsensuses := func(tcSyncerRef, tcSynceeRef *testapi.TestConsensus) {
tcSyncer, tcSyncee := *tcSyncerRef, *tcSynceeRef
pruningPointProof, err := tcSyncer.BuildPruningPointProof()
if err != nil {
t.Fatalf("BuildPruningPointProof: %+v", err)
}
err = tcSyncee.ValidatePruningPointProof(pruningPointProof)
if err != nil {
t.Fatalf("ValidatePruningPointProof: %+v", err)
}
stagingConfig := *consensusConfig
stagingConfig.SkipAddingGenesis = true
synceeStaging, _, err := factory.NewTestConsensus(&stagingConfig, "TestValidateAndInsertPruningPointSyncerStaging")
if err != nil {
t.Fatalf("Error setting up synceeStaging: %+v", err)
}
err = synceeStaging.ApplyPruningPointProof(pruningPointProof)
if err != nil {
t.Fatalf("ApplyPruningPointProof: %+v", err)
}
pruningPointHeaders, err := tcSyncer.PruningPointHeaders()
if err != nil {
t.Fatalf("PruningPointHeaders: %+v", err)
}
arePruningPointsViolatingFinality, err := tcSyncee.ArePruningPointsViolatingFinality(pruningPointHeaders)
if err != nil {
t.Fatalf("ArePruningPointsViolatingFinality: %+v", err)
}
if arePruningPointsViolatingFinality {
t.Fatalf("unexpected finality violation")
}
err = synceeStaging.ImportPruningPoints(pruningPointHeaders)
if err != nil {
t.Fatalf("PruningPointHeaders: %+v", err)
}
pruningPointAndItsAnticoneWithTrustedData, err := tcSyncer.PruningPointAndItsAnticoneWithTrustedData()
if err != nil {
t.Fatalf("PruningPointAndItsAnticoneWithTrustedData: %+v", err)
}
for _, blockWithTrustedData := range pointAndItsAnticoneWithTrustedData {
_, err := tcSyncee.ValidateAndInsertBlockWithTrustedData(blockWithTrustedData, false)
for _, blockWithTrustedData := range pruningPointAndItsAnticoneWithTrustedData {
_, err := synceeStaging.ValidateAndInsertBlockWithTrustedData(blockWithTrustedData, false)
if err != nil {
t.Fatalf("ValidateAndInsertBlockWithTrustedData: %+v", err)
}
@@ -65,8 +114,8 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
t.Fatalf("GetHashesBetween: %+v", err)
}
for _, blocksHash := range missingHeaderHashes {
blockInfo, err := tcSyncee.GetBlockInfo(blocksHash)
for i, blocksHash := range missingHeaderHashes {
blockInfo, err := synceeStaging.GetBlockInfo(blocksHash)
if err != nil {
t.Fatalf("GetBlockInfo: %+v", err)
}
@@ -80,9 +129,9 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
t.Fatalf("GetBlockHeader: %+v", err)
}
_, err = tcSyncee.ValidateAndInsertBlock(&externalapi.DomainBlock{Header: header}, false)
_, err = synceeStaging.ValidateAndInsertBlock(&externalapi.DomainBlock{Header: header}, false)
if err != nil {
t.Fatalf("ValidateAndInsertBlock: %+v", err)
t.Fatalf("ValidateAndInsertBlock %d: %+v", i, err)
}
}
@@ -90,7 +139,7 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
if err != nil {
t.Fatalf("GetPruningPointUTXOs: %+v", err)
}
err = tcSyncee.AppendImportedPruningPointUTXOs(pruningPointUTXOs)
err = synceeStaging.AppendImportedPruningPointUTXOs(pruningPointUTXOs)
if err != nil {
t.Fatalf("AppendImportedPruningPointUTXOs: %+v", err)
}
@@ -101,37 +150,37 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
}
// Check that ValidateAndInsertImportedPruningPoint fails for invalid pruning point
err = tcSyncee.ValidateAndInsertImportedPruningPoint(virtualSelectedParent)
err = synceeStaging.ValidateAndInsertImportedPruningPoint(virtualSelectedParent)
if !errors.Is(err, ruleerrors.ErrUnexpectedPruningPoint) {
t.Fatalf("Unexpected error: %+v", err)
}
err = tcSyncee.ClearImportedPruningPointData()
err = synceeStaging.ClearImportedPruningPointData()
if err != nil {
t.Fatalf("ClearImportedPruningPointData: %+v", err)
}
err = tcSyncee.AppendImportedPruningPointUTXOs(makeFakeUTXOs())
err = synceeStaging.AppendImportedPruningPointUTXOs(makeFakeUTXOs())
if err != nil {
t.Fatalf("AppendImportedPruningPointUTXOs: %+v", err)
}
// Check that ValidateAndInsertImportedPruningPoint fails if the UTXO commitment doesn't fit the provided UTXO set.
err = tcSyncee.ValidateAndInsertImportedPruningPoint(pruningPoint)
err = synceeStaging.ValidateAndInsertImportedPruningPoint(pruningPoint)
if !errors.Is(err, ruleerrors.ErrBadPruningPointUTXOSet) {
t.Fatalf("Unexpected error: %+v", err)
}
err = tcSyncee.ClearImportedPruningPointData()
err = synceeStaging.ClearImportedPruningPointData()
if err != nil {
t.Fatalf("ClearImportedPruningPointData: %+v", err)
}
err = tcSyncee.AppendImportedPruningPointUTXOs(pruningPointUTXOs)
err = synceeStaging.AppendImportedPruningPointUTXOs(pruningPointUTXOs)
if err != nil {
t.Fatalf("AppendImportedPruningPointUTXOs: %+v", err)
}
// Check that ValidateAndInsertImportedPruningPoint works given the right arguments.
err = tcSyncee.ValidateAndInsertImportedPruningPoint(pruningPoint)
err = synceeStaging.ValidateAndInsertImportedPruningPoint(pruningPoint)
if err != nil {
t.Fatalf("ValidateAndInsertImportedPruningPoint: %+v", err)
}
@@ -144,18 +193,18 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
}
// Check that we can build a block just after importing the pruning point.
_, err = tcSyncee.BuildBlock(emptyCoinbase, nil)
_, err = synceeStaging.BuildBlock(emptyCoinbase, nil)
if err != nil {
t.Fatalf("BuildBlock: %+v", err)
}
// Sync block bodies
headersSelectedTip, err := tcSyncee.GetHeadersSelectedTip()
headersSelectedTip, err := synceeStaging.GetHeadersSelectedTip()
if err != nil {
t.Fatalf("GetHeadersSelectedTip: %+v", err)
}
missingBlockHashes, err := tcSyncee.GetMissingBlockBodyHashes(headersSelectedTip)
missingBlockHashes, err := synceeStaging.GetMissingBlockBodyHashes(headersSelectedTip)
if err != nil {
t.Fatalf("GetMissingBlockBodyHashes: %+v", err)
}
@@ -166,13 +215,13 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
t.Fatalf("GetBlock: %+v", err)
}
_, err = tcSyncee.ValidateAndInsertBlock(block, true)
_, err = synceeStaging.ValidateAndInsertBlock(block, true)
if err != nil {
t.Fatalf("ValidateAndInsertBlock: %+v", err)
}
}
synceeTips, err := tcSyncee.Tips()
synceeTips, err := synceeStaging.Tips()
if err != nil {
t.Fatalf("Tips: %+v", err)
}
@@ -192,12 +241,12 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
t.Fatalf("GetBlock: %+v", err)
}
_, err = tcSyncee.ValidateAndInsertBlock(tip, true)
_, err = synceeStaging.ValidateAndInsertBlock(tip, true)
if err != nil {
t.Fatalf("ValidateAndInsertBlock: %+v", err)
}
blockInfo, err := tcSyncee.GetBlockInfo(tipHash)
blockInfo, err := synceeStaging.GetBlockInfo(tipHash)
if err != nil {
t.Fatalf("GetBlockInfo: %+v", err)
}
@@ -206,7 +255,7 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
t.Fatalf("Tip didn't pass UTXO verification")
}
synceePruningPoint, err := tcSyncee.PruningPoint()
synceePruningPoint, err := synceeStaging.PruningPoint()
if err != nil {
t.Fatalf("PruningPoint: %+v", err)
}
@@ -214,274 +263,88 @@ func TestValidateAndInsertImportedPruningPoint(t *testing.T) {
if !synceePruningPoint.Equal(pruningPoint) {
t.Fatalf("The syncee pruning point has not changed as exepcted")
}
*tcSynceeRef = synceeStaging
}
// This is done to reduce the pruning depth to 6 blocks
finalityDepth := 3
consensusConfig.FinalityDuration = time.Duration(finalityDepth) * consensusConfig.TargetTimePerBlock
consensusConfig.K = 0
synceeConfig := *consensusConfig
synceeConfig.SkipAddingGenesis = true
factory := consensus.NewFactory()
tcSyncer, teardownSyncer, err := factory.NewTestConsensus(consensusConfig, "TestValidateAndInsertPruningPointSyncer")
if err != nil {
t.Fatalf("Error setting up tcSyncer: %+v", err)
}
defer teardownSyncer(false)
tcSyncee1, teardownSyncee1, err := factory.NewTestConsensus(&synceeConfig, "TestValidateAndInsertPruningPointSyncee1")
tcSyncee1, teardownSyncee1, err := factory.NewTestConsensus(consensusConfig, "TestValidateAndInsertPruningPointSyncee1")
if err != nil {
t.Fatalf("Error setting up tcSyncee1: %+v", err)
}
defer teardownSyncee1(false)
const numSharedBlocks = 2
tipHash := consensusConfig.GenesisHash
for i := 0; i < finalityDepth-2; i++ {
for i := 0; i < numSharedBlocks; i++ {
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
}
// Add a block in the anticone of the pruning point to test such a situation
pruningPointAnticoneBlock := addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
nextPruningPoint := addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{pruningPointAnticoneBlock, nextPruningPoint}, t)
// Add blocks until the pruning point changes
for {
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
pruningPoint, err := tcSyncer.PruningPoint()
if err != nil {
t.Fatalf("PruningPoint: %+v", err)
}
if !pruningPoint.Equal(consensusConfig.GenesisHash) {
break
}
}
pruningPoint, err := tcSyncer.PruningPoint()
if err != nil {
t.Fatalf("PruningPoint: %+v", err)
}
if !pruningPoint.Equal(nextPruningPoint) {
t.Fatalf("Unexpected pruning point %s", pruningPoint)
}
syncConsensuses(tcSyncer, tcSyncee1)
// Test a situation where a consensus with pruned headers syncs another fresh consensus.
tcSyncee2, teardownSyncee2, err := factory.NewTestConsensus(&synceeConfig, "TestValidateAndInsertPruningPointSyncee2")
if err != nil {
t.Fatalf("Error setting up tcSyncee2: %+v", err)
}
defer teardownSyncee2(false)
syncConsensuses(tcSyncee1, tcSyncee2)
})
}
// TestValidateAndInsertPruningPointWithSideBlocks makes sure that when a node applies a UTXO set downloaded during
// IBD while it already has a non-empty UTXO set originating from blocks mined on top of genesis, the resulting
// UTXO set is correct
func TestValidateAndInsertPruningPointWithSideBlocks(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
// This is done to reduce the pruning depth to 6 blocks
finalityDepth := 3
consensusConfig.FinalityDuration = time.Duration(finalityDepth) * consensusConfig.TargetTimePerBlock
consensusConfig.K = 0
synceeConfig := *consensusConfig
synceeConfig.SkipAddingGenesis = true
factory := consensus.NewFactory()
tcSyncer, teardownSyncer, err := factory.NewTestConsensus(consensusConfig, "TestValidateAndInsertPruningPointSyncer")
if err != nil {
t.Fatalf("Error setting up tcSyncer: %+v", err)
}
defer teardownSyncer(false)
tcSyncee, teardownSyncee, err := factory.NewTestConsensus(&synceeConfig, "TestValidateAndInsertPruningPointSyncee")
if err != nil {
t.Fatalf("Error setting up tcSyncee: %+v", err)
}
defer teardownSyncee(false)
// Mine two blocks on syncee on top of genesis
synceeOnlyBlock := addBlock(tcSyncer, []*externalapi.DomainHash{consensusConfig.GenesisHash}, t)
addBlock(tcSyncer, []*externalapi.DomainHash{synceeOnlyBlock}, t)
tipHash := consensusConfig.GenesisHash
for i := 0; i < finalityDepth-2; i++ {
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
}
// Add a block in the anticone of the pruning point to test such a situation
pruningPointAnticoneBlock := addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
nextPruningPoint := addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{pruningPointAnticoneBlock, nextPruningPoint}, t)
// Add blocks until the pruning point changes
for {
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
pruningPoint, err := tcSyncer.PruningPoint()
if err != nil {
t.Fatalf("PruningPoint: %+v", err)
}
if !pruningPoint.Equal(consensusConfig.GenesisHash) {
break
}
}
pruningPoint, err := tcSyncer.PruningPoint()
if err != nil {
t.Fatalf("PruningPoint: %+v", err)
}
if !pruningPoint.Equal(nextPruningPoint) {
t.Fatalf("Unexpected pruning point %s", pruningPoint)
}
pointAndItsAnticoneWithTrustedData, err := tcSyncer.PruningPointAndItsAnticoneWithTrustedData()
if err != nil {
t.Fatalf("PruningPointAndItsAnticoneWithTrustedData: %+v", err)
}
for _, blockWithTrustedData := range pointAndItsAnticoneWithTrustedData {
_, err := tcSyncee.ValidateAndInsertBlockWithTrustedData(blockWithTrustedData, false)
if err != nil {
t.Fatalf("ValidateAndInsertBlockWithTrustedData: %+v", err)
}
}
syncerVirtualSelectedParent, err := tcSyncer.GetVirtualSelectedParent()
if err != nil {
t.Fatalf("GetVirtualSelectedParent: %+v", err)
}
missingBlocksHashes, _, err := tcSyncer.GetHashesBetween(pruningPoint, syncerVirtualSelectedParent, math.MaxUint64)
if err != nil {
t.Fatalf("GetHashesBetween: %+v", err)
}
for _, blocksHash := range missingBlocksHashes {
blockInfo, err := tcSyncee.GetBlockInfo(blocksHash)
if err != nil {
t.Fatalf("GetBlockInfo: %+v", err)
}
if blockInfo.Exists {
continue
}
block, err := tcSyncer.GetBlock(blocksHash)
block, err := tcSyncer.GetBlock(tipHash)
if err != nil {
t.Fatalf("GetBlock: %+v", err)
}
_, err = tcSyncee.ValidateAndInsertBlock(block, false)
_, err = tcSyncee1.ValidateAndInsertBlock(block, true)
if err != nil {
t.Fatalf("ValidateAndInsertBlock: %+v", err)
}
}
pruningPointUTXOs, err := tcSyncer.GetPruningPointUTXOs(pruningPoint, nil, 1000)
if err != nil {
t.Fatalf("GetPruningPointUTXOs: %+v", err)
}
err = tcSyncee.AppendImportedPruningPointUTXOs(pruningPointUTXOs)
if err != nil {
t.Fatalf("AppendImportedPruningPointUTXOs: %+v", err)
// Add two side blocks to syncee
tipHashSyncee := tipHash
for i := 0; i < 2; i++ {
tipHashSyncee = addBlock(tcSyncee1, []*externalapi.DomainHash{tipHashSyncee}, t)
}
// Check that ValidateAndInsertImportedPruningPoint fails for invalid pruning point
err = tcSyncee.ValidateAndInsertImportedPruningPoint(tipHash)
if !errors.Is(err, ruleerrors.ErrUnexpectedPruningPoint) {
t.Fatalf("Unexpected error: %+v", err)
for i := 0; i < finalityDepth-numSharedBlocks-2; i++ {
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
}
err = tcSyncee.ClearImportedPruningPointData()
if err != nil {
t.Fatalf("ClearImportedPruningPointData: %+v", err)
}
err = tcSyncee.AppendImportedPruningPointUTXOs(makeFakeUTXOs())
if err != nil {
t.Fatalf("AppendImportedPruningPointUTXOs: %+v", err)
// Add a block in the anticone of the pruning point to test such a situation
pruningPointAnticoneBlock := addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
nextPruningPoint := addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{pruningPointAnticoneBlock, nextPruningPoint}, t)
// Add blocks until the pruning point changes
for {
tipHash = addBlock(tcSyncer, []*externalapi.DomainHash{tipHash}, t)
pruningPoint, err := tcSyncer.PruningPoint()
if err != nil {
t.Fatalf("PruningPoint: %+v", err)
}
if !pruningPoint.Equal(consensusConfig.GenesisHash) {
break
}
}
// Check that ValidateAndInsertImportedPruningPoint fails if the UTXO commitment doesn't fit the provided UTXO set.
err = tcSyncee.ValidateAndInsertImportedPruningPoint(pruningPoint)
if !errors.Is(err, ruleerrors.ErrBadPruningPointUTXOSet) {
t.Fatalf("Unexpected error: %+v", err)
}
err = tcSyncee.ClearImportedPruningPointData()
if err != nil {
t.Fatalf("ClearImportedPruningPointData: %+v", err)
}
err = tcSyncee.AppendImportedPruningPointUTXOs(pruningPointUTXOs)
if err != nil {
t.Fatalf("AppendImportedPruningPointUTXOs: %+v", err)
}
// Check that ValidateAndInsertImportedPruningPoint works given the right arguments.
err = tcSyncee.ValidateAndInsertImportedPruningPoint(pruningPoint)
if err != nil {
t.Fatalf("ValidateAndInsertImportedPruningPoint: %+v", err)
}
synceeTips, err := tcSyncee.Tips()
if err != nil {
t.Fatalf("Tips: %+v", err)
}
syncerTips, err := tcSyncer.Tips()
if err != nil {
t.Fatalf("Tips: %+v", err)
}
if !externalapi.HashesEqual(synceeTips, syncerTips) {
t.Fatalf("Syncee's tips are %s while syncer's are %s", synceeTips, syncerTips)
}
tipHash = addBlock(tcSyncer, syncerTips, t)
tip, err := tcSyncer.GetBlock(tipHash)
if err != nil {
t.Fatalf("GetBlock: %+v", err)
}
_, err = tcSyncee.ValidateAndInsertBlock(tip, true)
if err != nil {
t.Fatalf("ValidateAndInsertBlock: %+v", err)
}
blockInfo, err := tcSyncee.GetBlockInfo(tipHash)
if err != nil {
t.Fatalf("GetBlockInfo: %+v", err)
}
if blockInfo.BlockStatus != externalapi.StatusUTXOValid {
t.Fatalf("Tip didn't pass UTXO verification")
}
synceePruningPoint, err := tcSyncee.PruningPoint()
pruningPoint, err := tcSyncer.PruningPoint()
if err != nil {
t.Fatalf("PruningPoint: %+v", err)
}
if !synceePruningPoint.Equal(pruningPoint) {
t.Fatalf("The syncee pruning point has not changed as exepcted")
if !pruningPoint.Equal(nextPruningPoint) {
t.Fatalf("Unexpected pruning point %s", pruningPoint)
}
tcSyncee1Ref := &tcSyncee1
syncConsensuses(&tcSyncer, tcSyncee1Ref)
// Test a situation where a consensus with pruned headers syncs another fresh consensus.
tcSyncee2, teardownSyncee2, err := factory.NewTestConsensus(consensusConfig, "TestValidateAndInsertPruningPointSyncee2")
if err != nil {
t.Fatalf("Error setting up tcSyncee2: %+v", err)
}
defer teardownSyncee2(false)
syncConsensuses(tcSyncee1Ref, &tcSyncee2)
})
}

View File

@@ -4,6 +4,7 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionhelper"
"github.com/kaspanet/kaspad/domain/consensus/utils/virtual"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/pkg/errors"
@@ -15,12 +16,14 @@ func (v *blockValidator) ValidateBodyInContext(stagingArea *model.StagingArea, b
onEnd := logger.LogAndMeasureExecutionTime(log, "ValidateBodyInContext")
defer onEnd()
err := v.checkBlockIsNotPruned(stagingArea, blockHash)
if err != nil {
return err
if !isBlockWithTrustedData {
err := v.checkBlockIsNotPruned(stagingArea, blockHash)
if err != nil {
return err
}
}
err = v.checkBlockTransactions(stagingArea, blockHash)
err := v.checkBlockTransactions(stagingArea, blockHash)
if err != nil {
return err
}
@@ -30,6 +33,11 @@ func (v *blockValidator) ValidateBodyInContext(stagingArea *model.StagingArea, b
if err != nil {
return err
}
err = v.checkCoinbaseSubsidy(stagingArea, blockHash)
if err != nil {
return err
}
}
return nil
}
@@ -52,7 +60,7 @@ func (v *blockValidator) checkBlockIsNotPruned(stagingArea *model.StagingArea, b
return err
}
isAncestorOfSomeTips, err := v.dagTopologyManager.IsAncestorOfAny(stagingArea, blockHash, tips)
isAncestorOfSomeTips, err := v.dagTopologyManagers[0].IsAncestorOfAny(stagingArea, blockHash, tips)
if err != nil {
return err
}
@@ -69,7 +77,7 @@ func (v *blockValidator) checkParentBlockBodiesExist(
stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) error {
missingParentHashes := []*externalapi.DomainHash{}
parents, err := v.dagTopologyManager.Parents(stagingArea, blockHash)
parents, err := v.dagTopologyManagers[0].Parents(stagingArea, blockHash)
if err != nil {
return err
}
@@ -90,7 +98,7 @@ func (v *blockValidator) checkParentBlockBodiesExist(
return err
}
isInPastOfPruningPoint, err := v.dagTopologyManager.IsAncestorOf(stagingArea, parent, pruningPoint)
isInPastOfPruningPoint, err := v.dagTopologyManagers[0].IsAncestorOf(stagingArea, parent, pruningPoint)
if err != nil {
return err
}
@@ -136,3 +144,54 @@ func (v *blockValidator) checkBlockTransactions(
return nil
}
func (v *blockValidator) checkCoinbaseSubsidy(
stagingArea *model.StagingArea, blockHash *externalapi.DomainHash) error {
pruningPoint, err := v.pruningStore.PruningPoint(v.databaseContext, stagingArea)
if err != nil {
return err
}
parents, err := v.dagTopologyManagers[0].Parents(stagingArea, blockHash)
if err != nil {
return err
}
for _, parent := range parents {
isInFutureOfPruningPoint, err := v.dagTopologyManagers[0].IsAncestorOf(stagingArea, pruningPoint, parent)
if err != nil {
return err
}
// The pruning proof ( https://github.com/kaspanet/docs/blob/main/Reference/prunality/Prunality.pdf ) concludes
// that it's impossible for a block to be merged if it was created in the anticone of the pruning point that was
// present at the time of the block creation. So if such a situation happens, we can be sure that it happened during
// IBD and that this block has at least pruningDepth-finalityInterval confirmations.
if !isInFutureOfPruningPoint {
return nil
}
}
block, err := v.blockStore.Block(v.databaseContext, stagingArea, blockHash)
if err != nil {
return err
}
expectedSubsidy, err := v.coinbaseManager.CalcBlockSubsidy(stagingArea, blockHash, block.Header.PruningPoint())
if err != nil {
return err
}
_, _, subsidy, err := v.coinbaseManager.ExtractCoinbaseDataBlueScoreAndSubsidy(block.Transactions[transactionhelper.CoinbaseTransactionIndex])
if err != nil {
return err
}
if expectedSubsidy != subsidy {
return errors.Wrapf(ruleerrors.ErrWrongCoinbaseSubsidy, "the subsidy specified on the coinbase of %s is "+
"wrong: expected %d but got %d", blockHash, expectedSubsidy, subsidy)
}
return nil
}

View File

@@ -20,6 +20,12 @@ func TestCheckBlockIsNotPruned(t *testing.T) {
consensusConfig.FinalityDuration = 2 * consensusConfig.TargetTimePerBlock
consensusConfig.K = 0
// When pruning, blocks in the DAA window of the pruning point and its
// anticone are kept for the sake of IBD. Setting this value to zero
// forces all DAA windows to be empty, and as such, no blocks are kept
// below the pruning point
consensusConfig.DifficultyAdjustmentWindowSize = 0
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestCheckBlockIsNotPruned")
@@ -135,7 +141,7 @@ func TestCheckParentBlockBodiesExist(t *testing.T) {
}
}
// Add anticonePruningBlock's body and Check that it's valid to point to
// Add anticonePruningBlock's body and check that it's valid to point to
// a header only block in the past of the pruning point.
_, err = tc.ValidateAndInsertBlock(anticonePruningBlock, true)
if err != nil {
@@ -192,7 +198,7 @@ func TestIsFinalizedTransaction(t *testing.T) {
if err != nil {
t.Fatalf("Error getting block DAA score : %+v", err)
}
blockParents := block.Header.ParentHashes()
blockParents := block.Header.DirectParents()
parentToSpend, err := tc.GetBlock(blockParents[0])
if err != nil {
t.Fatalf("Error getting block1: %+v", err)

View File

@@ -48,7 +48,7 @@ func (v *blockValidator) ValidateBodyInIsolation(stagingArea *model.StagingArea,
return err
}
err = v.checkCoinbase(block)
err = v.checkCoinbaseBlueScore(block)
if err != nil {
return err
}
@@ -91,11 +91,15 @@ func (v *blockValidator) ValidateBodyInIsolation(stagingArea *model.StagingArea,
return nil
}
func (v *blockValidator) checkCoinbase(block *externalapi.DomainBlock) error {
_, _, err := v.coinbaseManager.ExtractCoinbaseDataAndBlueScore(block.Transactions[transactionhelper.CoinbaseTransactionIndex])
func (v *blockValidator) checkCoinbaseBlueScore(block *externalapi.DomainBlock) error {
coinbaseBlueScore, _, _, err := v.coinbaseManager.ExtractCoinbaseDataBlueScoreAndSubsidy(block.Transactions[transactionhelper.CoinbaseTransactionIndex])
if err != nil {
return err
}
if coinbaseBlueScore != block.Header.BlueScore() {
return errors.Wrapf(ruleerrors.ErrUnexpectedCoinbaseBlueScore, "block blue score of %d is not the expected "+
"value of %d", coinbaseBlueScore, block.Header.BlueScore())
}
return nil
}
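The fixtures in the following test file grow every coinbase payload from 11 to 19 bytes, and the test block builder earlier stamps the blue score into the payload's first 8 bytes. A hedged sketch of the layout this implies; that an 8-byte little-endian subsidy directly follows the blue score is an inference from ExtractCoinbaseDataBlueScoreAndSubsidy, and the remaining bytes are assumed unchanged (encoding/binary import assumed):

// encodeCoinbasePayloadPrefix sketches the assumed payload prefix: blue score,
// then subsidy, both little-endian, followed by the pre-existing payload bytes.
func encodeCoinbasePayloadPrefix(blueScore, subsidy uint64, rest []byte) []byte {
	payload := make([]byte, 16+len(rest))
	binary.LittleEndian.PutUint64(payload[:8], blueScore)
	binary.LittleEndian.PutUint64(payload[8:16], subsidy)
	copy(payload[16:], rest)
	return payload
}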

View File

@@ -3,6 +3,7 @@ package blockvalidator_test
import (
"bytes"
"math"
"math/big"
"testing"
"github.com/kaspanet/kaspad/domain/consensus"
@@ -134,7 +135,7 @@ func TestCheckBlockSanity(t *testing.T) {
var unOrderedParentsBlock = externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
0x00000000,
[]*externalapi.DomainHash{
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x4b, 0xb0, 0x75, 0x35, 0xdf, 0xd5, 0x8e, 0x0b,
0x3c, 0xd6, 0x4f, 0xd7, 0x15, 0x52, 0x80, 0x87,
@@ -147,12 +148,12 @@ var unOrderedParentsBlock = externalapi.DomainBlock{
0x46, 0x11, 0x89, 0x6b, 0x82, 0x1a, 0x68, 0x3b,
0x7a, 0x4e, 0xde, 0xfe, 0x2c, 0x00, 0x00, 0x00,
}),
},
}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x09, 0xaf, 0x3b, 0x09, 0xa8, 0x8f, 0xfc, 0x7e,
0x7d, 0xc6, 0x06, 0x78, 0x04, 0x2b, 0x1c, 0x8a,
0xbe, 0x37, 0x0d, 0x55, 0x41, 0xb0, 0xb8, 0x15,
0xb1, 0x08, 0xd4, 0x01, 0x2a, 0xf0, 0xfd, 0x29,
0x31, 0x33, 0x37, 0x72, 0x5c, 0xde, 0x1c, 0xdf,
0xf5, 0x9f, 0xde, 0x16, 0x74, 0xbf, 0x0c, 0x64,
0x37, 0x40, 0x49, 0xdf, 0x02, 0x05, 0xca, 0x6d,
0x52, 0x23, 0x6f, 0xc2, 0x2b, 0xec, 0xad, 0x42,
}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x80, 0xf7, 0x00, 0xe3, 0x16, 0x3d, 0x04, 0x95,
@@ -169,6 +170,15 @@ var unOrderedParentsBlock = externalapi.DomainBlock{
0x5cd18053000,
0x207fffff,
0x1,
0,
9,
big.NewInt(0),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}),
),
Transactions: []*externalapi.DomainTransaction{
{
@@ -197,7 +207,7 @@ var unOrderedParentsBlock = externalapi.DomainBlock{
},
LockTime: 0,
SubnetworkID: subnetworks.SubnetworkIDCoinbase,
Payload: []byte{9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
Payload: []byte{9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
Version: 0,
@@ -402,7 +412,7 @@ var unOrderedParentsBlock = externalapi.DomainBlock{
var exampleValidBlock = externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
0x00000000,
[]*externalapi.DomainHash{
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x16, 0x5e, 0x38, 0xe8, 0xb3, 0x91, 0x45, 0x95,
0xd9, 0xc6, 0x41, 0xf3, 0xb8, 0xee, 0xc2, 0xf3,
@@ -415,12 +425,12 @@ var exampleValidBlock = externalapi.DomainBlock{
0x2a, 0x04, 0x71, 0xbc, 0xf8, 0x30, 0x95, 0x52,
0x6a, 0xce, 0x0e, 0x38, 0xc6, 0x00, 0x00, 0x00,
}),
},
}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x6e, 0xa3, 0x1a, 0xd6, 0xb8, 0xd9, 0xc1, 0xb2,
0xab, 0x18, 0xcc, 0x59, 0x6d, 0x03, 0x6b, 0x8d,
0x86, 0x59, 0x8f, 0x0e, 0x42, 0x07, 0x81, 0xa9,
0x59, 0x16, 0x95, 0x97, 0x8c, 0x9b, 0x0a, 0x19,
0x86, 0x8b, 0x73, 0xcd, 0x20, 0x51, 0x23, 0x60,
0xea, 0x62, 0x99, 0x9b, 0x87, 0xf6, 0xdd, 0x8d,
0xa4, 0x0b, 0xd7, 0xcf, 0xc6, 0x32, 0x38, 0xee,
0xd9, 0x68, 0x72, 0x1f, 0xa2, 0x51, 0xe4, 0x28,
}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x8a, 0xb7, 0xd6, 0x73, 0x1b, 0xe6, 0xc5, 0xd3,
@@ -432,6 +442,15 @@ var exampleValidBlock = externalapi.DomainBlock{
0x17305aa654a,
0x207fffff,
1,
0,
9,
big.NewInt(0),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}),
),
Transactions: []*externalapi.DomainTransaction{
{
@@ -463,7 +482,7 @@ var exampleValidBlock = externalapi.DomainBlock{
},
LockTime: 0,
SubnetworkID: subnetworks.SubnetworkIDCoinbase,
Payload: []byte{9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
Payload: []byte{9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
Version: 0,
@@ -698,7 +717,7 @@ var exampleValidBlock = externalapi.DomainBlock{
var blockWithWrongTxOrder = externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
0,
[]*externalapi.DomainHash{
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x16, 0x5e, 0x38, 0xe8, 0xb3, 0x91, 0x45, 0x95,
0xd9, 0xc6, 0x41, 0xf3, 0xb8, 0xee, 0xc2, 0xf3,
@@ -711,12 +730,12 @@ var blockWithWrongTxOrder = externalapi.DomainBlock{
0x2a, 0x04, 0x71, 0xbc, 0xf8, 0x30, 0x95, 0x52,
0x6a, 0xce, 0x0e, 0x38, 0xc6, 0x00, 0x00, 0x00,
}),
},
}},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0xfc, 0x03, 0xa8, 0x09, 0x03, 0xf6, 0x64, 0xd9,
0xba, 0xab, 0x6d, 0x50, 0x1c, 0x67, 0xcb, 0xff,
0x2e, 0x53, 0x76, 0x6b, 0x02, 0xa9, 0xd4, 0x78,
0x2b, 0x49, 0xe8, 0x90, 0x33, 0x90, 0xdd, 0xdf,
0x7b, 0x25, 0x8b, 0xfa, 0xfb, 0x49, 0xe4, 0x94,
0x48, 0x2c, 0xf9, 0x74, 0xdd, 0xad, 0x9d, 0x6f,
0x98, 0x8f, 0xfb, 0x01, 0x9d, 0x49, 0x29, 0xbe,
0x3c, 0xec, 0x90, 0xfe, 0xa5, 0x0c, 0xaf, 0x6b,
}),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0xa0, 0x69, 0x2d, 0x16, 0xb5, 0xd7, 0xe4, 0xf3,
@@ -733,6 +752,15 @@ var blockWithWrongTxOrder = externalapi.DomainBlock{
0x5cd16eaa000,
0x207fffff,
1,
0,
9,
big.NewInt(0),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}),
),
Transactions: []*externalapi.DomainTransaction{
{
@@ -764,7 +792,7 @@ var blockWithWrongTxOrder = externalapi.DomainBlock{
},
LockTime: 0,
SubnetworkID: subnetworks.SubnetworkIDCoinbase,
Payload: []byte{9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
Payload: []byte{9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
{
@@ -1318,7 +1346,7 @@ func initBlockWithFirstTransactionDifferentThanCoinbase(consensusConfig *consens
return &externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
constants.MaxBlockVersion,
[]*externalapi.DomainHash{consensusConfig.GenesisHash},
[]externalapi.BlockLevelParents{[]*externalapi.DomainHash{consensusConfig.GenesisHash}},
merkle.CalculateHashMerkleRoot([]*externalapi.DomainTransaction{tx}),
&externalapi.DomainHash{},
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
@@ -1329,7 +1357,16 @@ func initBlockWithFirstTransactionDifferentThanCoinbase(consensusConfig *consens
}),
0x5cd18053000,
0x207fffff,
0x1),
0x1,
0,
0,
big.NewInt(0),
externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
})),
Transactions: []*externalapi.DomainTransaction{tx},
}
}

Some files were not shown because too many files have changed in this diff