Compare commits

134 Commits

stasatdaglabs
c477340b4c Merge branch 'dev' into readme-update 2021-12-12 16:30:25 +02:00
Ori Newman
5806fef35f Lower devnet's initial difficulty (#1869)
* Lower devnet's initial difficulty

* Increase simple-sync timeToPropagate to 10 seconds

* Check expectedAveragePropagationTime

Co-authored-by: Ori Newman <>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-12-12 16:24:43 +02:00
Ori Newman
227ef392ba Update version 2021-12-11 20:55:25 +02:00
Ori Newman
f3d76d6565 Ignore header mass in devnet and testnet (#1879) 2021-12-11 20:36:58 +02:00
Ori Newman
df573bba63 Remove unused args from CalcSubsidy (#1877) 2021-12-11 20:16:23 +02:00
Ori Newman
2a97b7c9bb ExpectedHeaderPruningPoint fix (#1876)
* ExpectedHeaderPruningPoint fix

* Fix off by one error when iterating lowHash's selected chain
2021-12-11 19:26:35 +02:00
Svarog
70900c571b Changes to libkaspawallet to support Kaspaper (#1878)
* Add NewFileFromMnemonics

* Export InternalKeychain and ExternalKeychain

* Rename NewFileFromMnemonics -> NewFileFromMnemonic

* NewFileFromMnemonic: change also argument name

* Use libkaspawallet.ExternalKeychain instead of externalKeychain
2021-12-11 17:52:21 +02:00
Constantine Bytensky
7292438e4a kaspawallet: show-address → new-address + show-addresses (#1870)
* Rename show-address to new-address

* New proto: ShowAddresses

* kaspawallet: added show-addresses command

Co-authored-by: Constantine Bitensky <cbitensky1@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-07 00:49:06 +02:00
Ori Newman
dced1a9376 Fix numThreads using getAEAD instead of decryptMnemonic (#1859)
* Fix num threads using getAEAD instead of decryptMnemonic

* Use d.NumThread to init bruteforce for num threads in getAEAD

Co-authored-by: Ori Newman <>
2021-12-06 23:56:00 +02:00
Ori Newman
32e8e539ac Apply ResolveVirtual diffs to the UTXO index (#1868)
* Apply ResolveVirtual diffs to the UTXO index

* Add comments

Co-authored-by: Ori Newman <>
2021-12-05 13:22:48 +02:00
Ori Newman
11103a36d3 Get rid of genesis's UTXO dump (#1867)
* Get rid of genesis's UTXO dump

* Allow known orphans when AllowSubmitBlockWhenNotSynced=true

* gofmt

* Avoid IBD without changing the pruning point when only genesis is available

* Add DisallowDirectBlocksOnTopOfGenesis=true for mainnet

* Remove any mention of nobanning to let stability tests run

* Rename ifGenesisSetUtxoSet to loadUTXODataForGenesis

Co-authored-by: Ori Newman <>
2021-12-05 12:46:41 +02:00
Elichai Turkel
606b781ca0 Fix bug in RequiredDifficulty and release another version 2021-11-25 21:49:15 +02:00
Elichai Turkel
dbf18d8052 Hard fork - new genesis with the utxo set of the last block (#1856)
* UTXO dump of block 0fca37ca667c2d550a6c4416dad9717e50927128c424fa4edbebc436ab13aeef

* Activate HF immediately and change reward to 1000

* Change protocol version and datadir location

* Delete comments

* Fix zero hash to muhash zero hash in genesis utxo dump check

* Don't omit genesis as direct parent

* Fix tests

* Change subsidy to 500

* Don't assume genesis multiset is empty

* Fix BlockReward test

* Fix TestValidateAndInsertImportedPruningPoint test

* Fix pruning point genesis utxo set

* Fix tests related to mainnet utxo set

* Don't change the difficulty before you have a full window

* Fix TestBlockWindow tests

* Remove global utxo set variable, and persist mainnet utxo deserialization between runs

* Fix last tests

* Make peer banning opt-in

* small fix for a test

* Fix go lint

* Fix Ori's review comments

* Change DAA score of genesis to checkpoint DAA score and fix all tests

* Fix the BlockLevel bits counting

* Fix some tests and make them run a little faster

* Change datadir name back to kaspa-mainnet and change db path from /data to /datadir

* Last changes for the release and change the version to 0.11.5

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Ori Newman <>
Co-authored-by: msutton <mikisiton2@gmail.com>
2021-11-25 20:18:43 +02:00
Michael Sutton
2a1b38ce7a Fix mergeset limit violation bug (#1855)
* Added a test which reproduces a bug where virtual's mergeset exceeds the mergeset limit -- this test currently fails

* Added a simple condition to fix the mergeset limit bug

* Format issue
2021-11-24 19:34:59 +02:00
Ori Newman
29c410d123 Change HF time 2021-11-22 11:27:17 +02:00
Ori Newman
6e6fabf956 Change HF time 2021-11-22 11:24:55 +02:00
Ori Newman
b04292c97a Don't serve parallel PruningPointAndItsAnticoneRequests 2021-11-22 09:56:30 +02:00
Ori Newman
765dd170e4 Optimizations and header size reduction hardfork (#1853)
* Modify DefaultTimeout to 120 seconds

A temporary workaround for nodes having trouble syncing (currently the download of pruning point related data during IBD takes more than 30 seconds)

* Cache existence in reachability store

* Cache block level in the header

* Fix IBD indication on submit block

* Add hardForkOmitGenesisFromParentsDAAScore logic

* Fix NumThreads bug in the wallet

* Get rid of ParentsAtLevel header method

* Fix a bug in BuildPruningPointProof

* Increase race detector timeout

* Add cache to BuildPruningPointProof

* Add comments and temp comment out go vet

* Fix ParentsAtLevel

* Don't fill empty parents

* Change HardForkOmitGenesisFromParentsDAAScore in fast netsync test

* Add --allow-submit-block-when-not-synced in stability tests

* Fix TestPruning

* Return fast tests

* Fix off by one error on kaspawallet

* Fetch only one block with trusted data at a time

* Update fork DAA score

* Don't ban for unexpected message type

* Fix tests

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
Co-authored-by: Ori Newman <>
2021-11-22 09:00:39 +02:00
Ori Newman
8e362845b3 Update to version 0.11.4 2021-11-21 01:52:56 +02:00
Ori Newman
5c1ba9170e Don't set blocks from the pruning point anticone as the header select tip (#1850)
* Decrease the dial timeout to 1 second

* Don't set blocks from the pruning point anticone as the header selected tip.

Co-authored-by: Kaspa Profiler <>
2021-11-13 18:59:20 +02:00
Elichai Turkel
9d8c555bdf Fix a bug in the matrix ranking algorithm (#1849)
* Fix a bug in the matrix ranking algorithm

* Add tests and benchmarks for matrix generation and ranking

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-11-13 17:51:11 +02:00
Ori Newman
a2f574eab8 Update to version 0.11.3 2021-11-13 17:18:40 +02:00
Kaspa Profiler
7bed86dc1b Update changelog.txt 2021-11-11 09:30:12 +02:00
Ori Newman
9b81f5145e Increase p2p msg size and reset utxo after ibd (#1847)
* Don't build unnecessary binaries

* Reset UTXO index after IBD

* Enlarge max msg size to 1gb

* Fix UTXO chunks logic

* Revert UTXO set override change

* Fix sendPruningPointUTXOSet

* Increase tests timeout to 20 minutes

Co-authored-by: Kaspa Profiler <>
2021-11-11 09:27:07 +02:00
Michael Sutton
db36110c2f Update readme
Update readme to reflect that project is no longer at pre-Alpha state
2021-11-11 00:29:28 +02:00
Kaspa Profiler
cd8341ef57 Update to version 0.11.2 2021-11-10 23:19:25 +02:00
Ori Newman
ad8bdbed21 Update changelog (#1845)
Co-authored-by: Kaspa Profiler <>
2021-11-09 00:32:56 +02:00
Elichai Turkel
7cdceb6df0 Cache the miner state (#1844)
* Implement a MinerState to cache the matrix and friends

* Modify the miner and related code to use the new MinerCache

* Change MinerState to State

* Make go lint happy

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Kaspa Profiler <>
2021-11-09 00:12:30 +02:00
stasatdaglabs
cc5248106e Update to version 0.11.1 2021-11-08 09:01:52 +02:00
Elichai Turkel
e3463b7268 Replace Keccak256 in oPoW with CSHAKE256 with domain separation (#1842)
* Replace keccak with CSHAKE256 in oPoW

* Add benchmarks to hash writers to compare blake2b to the CSHAKE

* Update genesis blocks

* Update tests

* Define genesis's block level to be the maximal one

* Add message to genesis coinbase

* Add comments to genesis coinbase

* Fix tests

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-11-07 18:36:30 +02:00
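Editor's note: for readers unfamiliar with cSHAKE, a minimal Go sketch of the domain-separation idea behind this PR follows. It uses golang.org/x/crypto/sha3's NewCShake256; the domain strings and input here are illustrative, not necessarily kaspad's exact customization strings.

```go
package main

import (
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// hashWithDomain hashes data under a cSHAKE256 domain-separation string,
// so hashes computed for one purpose can never collide with hashes
// computed for another. Domain strings below are illustrative.
func hashWithDomain(domain string, data []byte) []byte {
	// N (the NIST function name) is left empty; S carries the domain.
	shake := sha3.NewCShake256(nil, []byte(domain))
	shake.Write(data)
	out := make([]byte, 32)
	shake.Read(out)
	return out
}

func main() {
	h1 := hashWithDomain("ProofOfWorkHash", []byte("block header"))
	h2 := hashWithDomain("HeavyHash", []byte("block header"))
	// Same input, different domains -> different digests.
	fmt.Println(hex.EncodeToString(h1) == hex.EncodeToString(h2)) // false
}
```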
Ori Newman
a2173ef80a Switch PoW to a keccak heavyhash variant (#1841)
* Add another hash domain for HeavyHash

* Add a xoShiRo256PlusPlus implementation

* Add a HeavyHash implementation

* Replace our current PoW algorithm with oPoW

* Change the pow hash to keccak256

* Fix genesis

* Fix tests

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-11-07 11:17:15 +02:00
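Editor's note: a self-contained Go sketch of the xoshiro256++ generator this PR names (the published algorithm; seeding here is a placeholder, whereas kaspad seeds the state from a block-header hash):

```go
package main

import "fmt"

// xoShiRo256PlusPlus is a small, fast PRNG; the PR above uses it to derive
// the pseudo-random matrix for the heavyhash variant.
type xoShiRo256PlusPlus struct {
	s0, s1, s2, s3 uint64
}

func rotl64(x uint64, k uint) uint64 { return (x << k) | (x >> (64 - k)) }

// next advances the generator and returns the next 64-bit output, per the
// reference xoshiro256++ update rule.
func (x *xoShiRo256PlusPlus) next() uint64 {
	result := rotl64(x.s0+x.s3, 23) + x.s0
	t := x.s1 << 17
	x.s2 ^= x.s0
	x.s3 ^= x.s1
	x.s1 ^= x.s2
	x.s0 ^= x.s3
	x.s2 ^= t
	x.s3 = rotl64(x.s3, 45)
	return result
}

func main() {
	// Placeholder seed for this sketch only.
	rng := &xoShiRo256PlusPlus{1, 2, 3, 4}
	fmt.Println(rng.next(), rng.next())
}
```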
stasatdaglabs
aeb4500b61 Add the daglabs-dev mainnet dnsseeder. (#1840) 2021-11-07 10:39:54 +02:00
Ori Newman
0a1daae319 Allow mainnet flag and raise wallet fee (#1838)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-11-07 10:04:27 +02:00
stasatdaglabs
131cd3357e Rename FixedSubsidySwitchHashRateDifference to FixedSubsidySwitchHashRateThreshold and set its value to 150GH/s. (#1837) 2021-11-07 09:33:39 +02:00
Ori Newman
ff72568d6b Fix pruning point anticone order (#1836)
* Send pruning point anticone in topological order
Fix a UTXO pagination bug
Lengthen the stabilization time for the last DAA test

* Extend "sudden hash rate drop" test length to 45 minutes

Co-authored-by: Kaspa Profiler <>
2021-11-07 08:21:34 +02:00
stasatdaglabs
2dddb650b9 Switch to a fixed block subsidy after a certain work threshold (#1831)
* Implement isBlockRewardFixed.

* Fix factory.go.

* Call isBlockRewardFixed from calcBlockSubsidy.

* Fix bad call to ghostdagDataStore.Get.

* Extract blue score and blue work from the header instead of from the ghostdagDataStore.

* Fix coinbasemanager constructor arguments order

* Format consensus_defaults.go

* Check the mainnet switch from the block's point of view rather than the virtual's.

* Don't call newBlockPruningPoint twice in buildBlock.

* Properly handle new pruning point blocks in isBlockRewardFixed.

* Use the correct variable.

* Add a comment explaining what we do when the pruning point is not found in isBlockRewardFixed.

* Implement TestBlockRewardSwitch.

* Add missing error handling.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-31 15:04:51 +02:00
Ori Newman
99aaacd649 Check blue score before requesting a pruning proof (#1835)
* Check blue score before requesting a pruning proof

* BuildPruningPointProof should return empty proof if the pruning point is genesis

* Don't fail many-tips if kaspad exits ungracefully
2021-10-31 12:48:18 +02:00
stasatdaglabs
77a344cc29 In IBD, validate the timestamps of the headers of the pruning point and selected tip (#1829)
* Implement validatePruningPointFutureHeaderTimestamps.

* Fix TestIBDWithPruning.

* Fix wrong logic.

* Add a comment.

* Fix a comment.

* Fix a variable name.

* Add a commment

* Fix TestIBDWithPruning.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-30 20:32:49 +03:00
stasatdaglabs
3dbc42b4f7 Implement the new block subsidy function (#1830)
* Replace the old blockSubsidy parameters with the new ones.

* Return subsidyGenesisReward if blockHash is the genesis hash.

* Traverse a block's past for the subsidy calculation.

* Partially implement SubsidyStore.

* Refer to SubsidyStore from CoinbaseManager.

* Wrap calcBlockSubsidy in getBlockSubsidy, which first checks the database.

* Fix finalityStore not calling GenerateShardingID.

* Implement calculateAveragePastSubsidy.

* Implement calculateMergeSetSubsidySum.

* Implement calculateSubsidyRandomVariable.

* Implement calcBlockSubsidy.

* Add a TODO about floats.

* Update the calcBlockSubsidy TODO.

* Use binary.LittleEndian in calculateSubsidyRandomVariable.

* Fix bad range in calculateSubsidyRandomVariable.

* Replace float64 with big.Rat everywhere except for subsidyRandomVariable.

* Fix a nil dereference.

* Use a random walk to approximate the normal distribution.

* In order to avoid unsupported fractional results from powInt64, flip the numerator and the denominator manually.

* Set standardDeviation to 0.25, MaxSompi to 10_000_000_000 * SompiPerKaspa and defaultSubsidyGenesisReward to 1_000.

* Set the standard deviation to 0.2.

* Use a binomial distribution instead of trying to estimate the normal distribution.

* Change some values around.

* Clamp the block subsidy.

* Remove the fake duplicate constants in the util package.

* Reduce MaxSompi to only 100m Kaspa to avoid hitting the uint64 ceiling.

* Lower MaxSompi further to avoid new and exciting ways for the uint64 ceiling to be hit.

* Remove debug logs.

* Fix a couple of failing tests.

* Fix TestBlockWindow.

* Fix limitTransactionCount sometimes crashing on index-out-of-bounds.

* In TrustedDataDataDAABlock, replace BlockHeader with DomainBlock

* In calculateAveragePastSubsidy, use blockWindow instead of doing a BFS manually.

* Remove the reference to DAGTopologyManager in coinbaseManager.

* Add subsidy to the coinbase payload.

* Get rid of the subsidy store and extract subsidies out of coinbase transactions.

* Keep a blockWindow amount of blocks under the virtual for IBD purposes.

* Manually remove the virtual genesis from the merge set.

* Fix simnet genesis.

* Fix TestPruning.

* Fix TestCheckBlockIsNotPruned.

* Fix TestBlockWindow.

* Fix TestCalculateSignatureHashSchnorr.

* Fix TestCalculateSignatureHashECDSA.

* Fix serializing the wrong value into the coinbase payload.

* Rename coinbaseOutputForBlueBlock to coinbaseOutputAndSubsidyForBlueBlock.

* Add a TODO about optimizing trusted data DAA window blocks.

* Expand on a comment in TestCheckBlockIsNotPruned.

* In calcBlockSubsidy, divide the big.Int numerator by the big.Int denominator instead of converting to float64.

* Clarify a comment.

* Rename SubsidyMinGenesisReward to MinSubsidy.

* Properly handle trusted data blocks in calculateMergeSetSubsidySum.

* Use the first two bytes of the selected parent's hash for randomness instead of math/rand.

* Restore maxSompi to what it used to be.

* Fix TestPruning.

* Fix TestAmountCreation.

* Fix TestBlockWindow.

* Fix TestAmountUnitConversions.

* Increase the timeout in many-tips to 30 minutes.

* Check coinbase subsidy for every block

* Re-rename functions

* Use shift instead of powInt64 to determine subsidyRandom

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-30 10:16:47 +03:00
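Editor's note: the commit list describes drawing the subsidy's random variable as a binomial sample seeded by the first two bytes of the selected parent's hash. A minimal Go sketch of that draw follows; the bit width (16) matches "first two bytes", but how kaspad centers and scales the result is elided here.

```go
package main

import (
	"fmt"
	"math/bits"
)

// subsidyRandomVariable treats the first two bytes of the selected
// parent's hash as 16 unbiased coin flips; the popcount of those bits is
// a Binomial(16, 1/2) sample. The centering/shift applied to this value
// in the actual subsidy formula is not shown here.
func subsidyRandomVariable(selectedParentHash [32]byte) int {
	coinFlips := uint16(selectedParentHash[0])<<8 | uint16(selectedParentHash[1])
	return bits.OnesCount16(coinFlips) // 0..16, mean 8
}

func main() {
	var hash [32]byte
	hash[0], hash[1] = 0xb7, 0x01
	fmt.Println(subsidyRandomVariable(hash))
}
```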
Ori Newman
1b9be28613 Improve ExpectedHeaderPruningPoint performance (#1833)
* Improve ExpectedHeaderPruningPoint perf

* Add suggestedLowHash to nextPruningPointAndCandidateByBlockHash
2021-10-26 11:01:26 +03:00
Ori Newman
5dbb1da84b Implement pruning point proof (#1832)
* Calculate GHOSTDAG, reachability etc for each level

* Don't preallocate cache for dag stores except level 0 and reduce the number of connections in the integration test to 32

* Reduce the number of connections in the integration test to 16

* Increase page file

* BuildPruningPointProof

* BuildPruningPointProof

* Add PruningProofManager

* Implement ApplyPruningPointProof

* Add prefix and fix blockAtDepth and fill headersByLevel

* Some bug fixes

* Include all relevant blocks for each level in the proof

* Fix syncAndValidatePruningPointProof to return the right block hash

* Fix block window

* Fix isAncestorOfPruningPoint

* Ban for rule errors on pruning proof

* Find common ancestor for blockAtDepthMAtNextLevel

* Use pruning proof in TestValidateAndInsertImportedPruningPoint

* stage status and finality point for proof blocks

* Uncomment golint

* Change test timeouts

* Calculate merge set for ApplyPruningPointProof

* Increase test timeout

* Add better caching for daa window store

* Return to default timeout

* Add ErrPruningProofMissesBlocksBelowPruningPoint

* Add errDAAWindowBlockNotFound

* Force connection loop next iteration on connection manager stop

* Revert to Test64IncomingConnections

* Remove BlockAtDepth from DAGTraversalManager

* numBullies->16

* Set page file size to 8gb

* Increase p2p max message size

* Test64IncomingConnections->Test16IncomingConnections

* Add comment for PruningProofM

* Add comment in `func (c *ConnectionManager) Stop()`

* Rename isAncestorOfPruningPoint->isAncestorOfSelectedTip

* Revert page file to 16gb

* Improve ExpectedHeaderPruningPoint perf

* Fix comment

* Revert "Improve ExpectedHeaderPruningPoint perf"

This reverts commit bca1080e71.

* Don't test windows
2021-10-26 09:48:27 +03:00
Ori Newman
afaac28da1 Validate each level parents (#1827)
* Create BlockParentBuilder.

* Implement BuildParents.

* Explicitly set level 0 blocks to be the same as direct parents.

* Add checkIndirectParents to validateBlockHeaderInContext.

* Fix test_block_builder.go and BlockLevelParents::Equal.

* Don't check indirect parents for blocks with trusted data.

* Handle pruned blocks when building block level parents.

* Fix bad deletions from unprocessedXxxParents.

* Fix merge errors.

* Fix bad pruning point parent replaces.

* Fix duplicates in newBlockLevelParents.

* Skip checkIndirectParents

* Get rid of staging constant IDs

* Fix BuildParents

* Fix tests

* Add comments

* Change order of directParentHashes

* Get rid of maybeAddDirectParentParents

* Add comments

* Add blockToReferences type

* Use ParentsAtLevel

Co-authored-by: stasatdaglabs <stas@daglabs.com>
2021-09-13 14:22:00 +03:00
stasatdaglabs
0053ee788d Use the BlueWork declared in an orphan block's header instead of requesting it explicitly from the peer that sent us the orphan (#1828) 2021-09-13 13:13:03 +03:00
stasatdaglabs
af7e7de247 Add PruningPointProof to the P2P protocol (#1825)
* Add PruningPointProof to externalapi.

* Add BuildPruningPointProof and ValidatePruningPointProof to Consensus.

* Add the pruning point proof to the protocol.

* Add the pruning point proof to the wire package.

* Add PruningPointBlueWork.

* Make go vet happy.

* Properly initialize PruningPointProof in consensus.go.

* Validate pruning point blue work.

* Populate PruningPointBlueWork with the actual blue work of the pruning point.

* Revert "Populate PruningPointBlueWork with the actual blue work of the pruning point."

This reverts commit f2a9829998.

* Revert "Validate pruning point blue work."

This reverts commit c6a90c5d2c.

* Revert "Properly initialize PruningPointProof in consensus.go."

This reverts commit 9391574bbf.

* Revert "Add PruningPointBlueWork."

This reverts commit 48182f652a.

* Fix PruningPointProof and MsgPruningPointProof to be two-dimensional.

* Fix wire PruningPointProof to be two-dimensional.
2021-09-05 17:20:15 +03:00
Ori Newman
02a08902a7 Fix current pruning point index cache (#1824)
* Fix ps.currentPruningPointIndexCache

* Remove redundant dependency from block builder

* Fix typo
2021-09-05 07:37:25 +03:00
Ori Newman
d9bc94a2a8 Replace header finality point with pruning point and enforce finality rules on IBD with headers proof (#1823)
* Replace header finality point with pruning point

* Fix TestTransactionAcceptance

* Fix pruning candidate

* Store all past pruning points

* Pass pruning points on IBD

* Add blue score to block header

* Simplify ArePruningPointsInValidChain

* Fix static check errors

* Fix genesis

* Renames and text fixing

* Use ExpectedHeaderPruningPoint in block builder

* Fix TestCheckPruningPointViolation
2021-08-31 08:01:48 +03:00
stasatdaglabs
837dac68b5 Update block headers to include multiple levels of parent blocks (#1822)
* Replace the old parents in the block header with BlockLevelParents.

* Begin fixing compilation errors.

* Implement database serialization for block level parents.

* Implement p2p serialization for block level parents.

* Implement rpc serialization for block level parents.

* Add DirectParents() to the block header interface.

* Use DirectParents() instead of Parents() in some places.

* Revert test_block_builder.go.

* Add block level parents to hash serialization.

* Use the zero hash for genesis finality points.

* Fix failing tests.

* Fix a variable name.

* Update headerEstimatedSerializedSize.

* Add comments in blocklevelparents.go.

* Fix the rpc-stability stability test.

* Change the field number for `parents` fields in p2p.proto and rpc.proto.

* Remove MsgBlockHeader::NumParentBlocks.
2021-08-24 12:06:39 +03:00
Ori Newman
ba5880fab1 Fix pruning candidate (#1821) 2021-08-23 07:26:43 +03:00
stasatdaglabs
7b5720a155 Implement GHOST (#1819)
* Implement GHOST.

* Implement TestGHOST.

* Make GHOST() take arbitrary subDAGs.

* Hold RootHashes in SubDAG rather than one GenesisHash.

* Select which root the GHOST chain starts with instead of passing a lowHash.

* If two child hashes have the same future size, decide which one is larger using the block hash.

* Extract blockHashWithLargestFutureSize to a separate function.

* Calculate future size for each block individually.

* Make TestGHOST deterministic.

* Increase the timeout for connecting 128 connections in TestRPCMaxInboundConnections.

* Implement BenchmarkGHOST.

* Fix an infinite loop.

* Use much larger benchmark data.

* Optimize `futureSizes` using reverse merge sets.

* Temporarily make the benchmark data smaller while GHOST is being optimized.

* Fix a bug in futureSizes.

* Fix a bug in populateReverseMergeSet.

* Choose a selectedChild at random instead of the one with the largest reverse merge set size.

* Rename populateReverseMergeSet to calculateReverseMergeSet.

* Use reachability to resolve isDescendantOf.

* Extract heightMaps to a separate object.

* Iterate using height maps in futureSizes.

* Don't store reverse merge sets in memory.

* Change calculateReverseMergeSet to calculateReverseMergeSetSize.

* Fix bad initial reverseMergeSetSize.

* Optimize calculateReverseMergeSetSize.

* Enlarge the benchmark data to 86k blocks.
2021-08-19 13:59:43 +03:00
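Editor's note: a sketch of one chain-selection step as the early commits in this PR describe it — follow the child with the largest future set, tie-breaking by the larger block hash. futureSize is assumed precomputed (the PR derives it with height maps and reverse merge sets); later commits experiment with other selection rules.

```go
package main

import (
	"bytes"
	"fmt"
)

type hash [32]byte

// ghostChainStep picks the next chain block among a block's children:
// largest future size wins; equal future sizes fall back to comparing
// the block hashes.
func ghostChainStep(children []hash, futureSize map[hash]uint64) hash {
	selected := children[0]
	for _, child := range children[1:] {
		if futureSize[child] > futureSize[selected] ||
			(futureSize[child] == futureSize[selected] &&
				bytes.Compare(child[:], selected[:]) > 0) {
			selected = child
		}
	}
	return selected
}

func main() {
	a, b := hash{1}, hash{2}
	sizes := map[hash]uint64{a: 10, b: 10}
	fmt.Println(ghostChainStep([]hash{a, b}, sizes) == b) // tie -> larger hash
}
```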
stasatdaglabs
65b5a080e4 Fix the RPCClient leaking connections (#1820)
* Fix the RPCClient leaking connections.

* Wrap the error return from GetInfo.
2021-08-16 15:59:35 +03:00
stasatdaglabs
ce17348175 Limit the amount of inbound RPC connections (#1818)
* Limit the amount of inbound RPC connections.

* Increment/decrement the right variable.

* Implement TestRPCMaxInboundConnections.

* Make go vet happy.

* Increase RPCMaxInboundConnections to 128.

* Set NUM_CLIENTS=128 in the rpc-idle-clients stability test.

* Explain why the P2P server has unlimited inbound connections.
2021-08-12 14:40:49 +03:00
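Editor's note: a minimal Go sketch of the counting limiter this PR describes — inbound RPC connections are refused past a fixed maximum (128 in the PR), while the P2P server stays unlimited because peers are needed for block relay. Names here are illustrative, not kaspad's actual types.

```go
package main

import (
	"fmt"
	"sync"
)

// inboundLimiter counts live inbound RPC connections against a maximum.
type inboundLimiter struct {
	mu      sync.Mutex
	current int
	max     int
}

// tryAcquire reserves a connection slot, returning false at the limit.
func (l *inboundLimiter) tryAcquire() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.current >= l.max {
		return false
	}
	l.current++
	return true
}

// release frees a slot when the connection closes.
func (l *inboundLimiter) release() {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.current--
}

func main() {
	l := &inboundLimiter{max: 128}
	fmt.Println(l.tryAcquire()) // true while under the limit
	l.release()
}
```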
stasatdaglabs
d922ee1be2 Add header commitments for DAA score, blue work, and finality points (#1817)
* Add DAAScore, BlueWork, and FinalityPoint to externalapi.BlockHeader.

* Add DAAScore, BlueWork, and FinalityPoint to NewImmutableBlockHeader and fix compilation errors.

* Add DAAScore, BlueWork, and FinalityPoint to protowire header types and fix failing tests.

* Check for header DAA score in validateDifficulty.

* Add DAA score to buildBlock.

* Fix failing tests.

* Add a blue work check in validateDifficultyDAAAndBlueWork.

* Add blue work to buildBlock and fix failing tests.

* Add finality point validation to ValidateHeaderInContext.

* Fix genesis blocks' finality points.

* Add finalityPoint to blockBuilder.

* Fix tests that failed due to missing reachability data.

* Make blockBuilder use VirtualFinalityPoint instead of directly calling FinalityPoint with the virtual hash.

* Don't validate the finality point for blocks with trusted data.

* Add debug logs.

* Skip finality point validation for block whose finality points are the virtual genesis.

* Revert "Add debug logs."

This reverts commit 3c18f519cc.

* Move checkDAAScore and checkBlueWork to validateBlockHeaderInContext.

* Add checkCoinbaseBlueScore to validateBodyInContext.

* Fix failing tests.

* Add DAAScore, blueWork, and finalityPoint to blocks' hashes.

* Generate new genesis blocks.

* Fix failing tests.

* In BuildUTXOInvalidBlock, get the bits from StageDAADataAndReturnRequiredDifficulty instead of calling RequiredDifficulty separately.
2021-08-12 13:25:00 +03:00
stasatdaglabs
4132891ac9 In calculateDiffBetweenPreviousAndCurrentPruningPoints, collect diffChild hashes instead of UTXODiffs to give the GC a chance to clean up UTXODiffs. (#1815)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-08-08 12:46:21 +03:00
stasatdaglabs
2094f4facf Decrease the size of the small chains in many-small-chains-and-one-big-chain.json, since the merge set size limit was reduced to k*10. (#1816) 2021-08-08 11:22:27 +03:00
stasatdaglabs
2de68f43f0 Use blockStatusStore instead of blockStore in missingBlockBodyHashes. (#1814) 2021-08-05 09:45:09 +03:00
Ori Newman
d748089a14 Update the virtual after overriding the virtual UTXO set (#1811)
* Update the virtual after overriding the virtual utxo set

* Put the updateVirtual inside importVirtualUTXOSetAndPruningPointUTXOSet

* Add pruningPoint to importVirtualUTXOSetAndPruningPointUTXOSet

* Remove sanity check
2021-08-02 17:02:15 +03:00
stasatdaglabs
7d1071a9b1 Update testnet version to testnet-6 (#1808)
* Update testnet version to testnet-6.

* Fix failing test.
2021-07-29 10:12:23 +03:00
Ori Newman
f26a7fdedf Return headers first (#1806)
* Return headers first

* Delete TestHandleRelayInvs

* resolve virtual only after IBD

* Fix ResolveVirtual

* Fix comments and variable names
2021-07-27 17:07:29 +03:00
Ori Newman
d207888b67 Implement pruned headers node (#1787)
* Pruning headers p2p basic structure

* Remove headers-first

* Fix consensus tests except TestValidateAndInsertPruningPointWithSideBlocks and TestValidateAndInsertImportedPruningPoint

* Add virtual genesis

* Implement PruningPointAndItsAnticoneWithMetaData

* Start fixing TestValidateAndInsertImportedPruningPoint

* Fix TestValidateAndInsertImportedPruningPoint

* Fix BlockWindow

* Update p2p and gRPC

* Fix all tests except TestHandleRelayInvs

* Delete TestHandleRelayInvs parts that cover the old IBD flow

* Fix lint errors

* Add p2p_request_ibd_blocks.go

* Clean code

* Make MsgBlockWithMetaData implement its own representation

* Remove redundant check if highest share block is below the pruning point

* Fix TestCheckLockTimeVerifyConditionedByAbsoluteTimeWithWrongLockTime

* Fix comments, errors and names

* Fix window size to the real value

* Check reindex root after each block at TestUpdateReindexRoot

* Remove irrelevant check

* Renames and comments

* Remove redundant argument from sendGetBlockLocator

* Don't delete staging on non-recoverable errors

* Renames and comments

* Remove redundant code

* Commit changes inside ResolveVirtual

* Add comment to IsRecoverableError

* Remove blocksWithMetaDataGHOSTDAGDataStore

* Increase windows pagefile

* Move DeleteStagingConsensus outside of defer

* Get rid of mustAccepted in receiveBlockWithMetaData

* Ban on invalid pruning point

* Rename interface_datastructures_daawindowstore.go to interface_datastructures_blocks_with_meta_data_daa_window_store.go

* Change GetVirtualSelectedParentChainFromBlockResponseMessage and VirtualSelectedParentChainChangedNotificationMessage to show only added block hashes
* Remove ResolveVirtual
* Use externalapi.ConsensusWrapper inside MiningManager
* Fix pruningmanager.blockwithmetadata

* Set pruning point selected child when importing the pruning point UTXO set

* Change virtual genesis hash

* replace the selected parent with virtual genesis on removePrunedBlocksFromGHOSTDAGData

* Get rid of low hash in block locators

* Remove +1 from everywhere we use difficultyAdjustmentWindowSize and increase the default value by one

* Add comments about consensus wrapper

* Don't use separate staging area when resolving resolveBlockStatus

* Fix netsync stability test

* Fix checkResolveVirtual

* Rename ConsensusWrapper->ConsensusReference

* Get rid of blockHeapNode

* Add comment to defaultDifficultyAdjustmentWindowSize

* Add SelectedChild to DAGTraversalManager

* Remove redundant copy

* Rename blockWindowHeap->calculateBlockWindowHeap

* Move isVirtualGenesisOnlyParent to utils

* Change BlockWithMetaData->BlockWithTrustedData

* Get rid of maxReasonLength

* Split IBD to 100 blocks each time

* Fix a bug in calculateBlockWindowHeap

* Switch to trusted data when encountering virtual genesis in blockWithTrustedData

* Move ConsensusReference to domain

* Update ConsensusReference comment

* Add comment

* Rename shouldNotAddGenesis->skipAddingGenesis
2021-07-26 12:24:07 +03:00
stasatdaglabs
38e2ee1b43 Change the log level of the transaction propagation log from Info to Debug. (#1804) 2021-07-19 10:30:29 +03:00
talelbaz
aba44e7bfb Disable relative time lock by time (#1800)
* ignore type flag

* Ignore type flag of relative time lock - interpret as DAA score

* Split verifyLockTime into functions with and without a threshold. Relative lockTimes don't need the threshold check

* Change function name and order of the function calls

Co-authored-by: tal <tal@daglabs.com>
2021-07-18 17:52:16 +03:00
Constantine Bitensky
c731d74bc0 Replace queue by stack in GetRedeemer (#1802) 2021-07-15 15:02:06 +03:00
Svarog
60e7a8ebed Make transaction propagation much more frequent (#1799)
* Make transaction propagation much more frequent

* Update f.transactionIDsToPropagate after brodacasting
2021-07-14 16:06:24 +03:00
Svarog
369a3bac09 Limit block mass instead of merge set limit + Introduce SigOpCount to TransactionInput (#1790)
* Update constants

* Add to transaction SigOpCount

* Update mass calculation, and move it from InContext to InIsolation

* Update block validation accordingly

* Add SigOpCount validation during TransactionInContext

* Remove checking of mass vs maxMassAcceptedByBlock from consensusStateManager

* Update mining manager with latest changes

* Add SigOpCount to MsgTx.Copy()

* Fix initTestTransactionAcceptanceDataForClone

* Fix all tests in transaction_equal_clone_test.go

* Fix TestBlockMass

* Fix tests in transactionvalidator package

* Add SigOpCount to sighash

* Fix TestPruningDepth

* Fix problems in libkaspawallet

* Fix integration tests

* Fix CalculateSignatureHash tests

* Remove remaining places talking about block size

* Add sanity check to checkBlockMass to make sure all transactions have their mass filled

* always add own sigOpCount to sigHash

* Update protowire/rpc.md

* Start working on removing any remaining reference to block/tx size

* Update rpc transaction verbose data to include mass rather than size

* Convert verboseData and block size check to mass

* Remove remaining usages of tx size in mempool

* Move transactionEstimatedSerializedSize to transactionvalidator

* Add PopulateMass to fakeRelayInvsContext

* Move PopulateMass to beginning of ValidateAndInsertTransaction + fix in it

* Assign mass a new number for backward-compatibility
2021-07-14 14:21:57 +03:00
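Editor's note: a sketch of the "mass" idea this PR switches to — serialized bytes, scriptPubKey bytes, and signature operations each carry their own weight, and their sum replaces the old byte-size limit at the block level. The per-unit costs below mirror kaspad's scheme but should be read as illustrative values.

```go
package main

import "fmt"

// Illustrative per-unit mass costs (assumed values, not authoritative).
const (
	massPerTxByte           = 1
	massPerScriptPubKeyByte = 10
	massPerSigOp            = 1000
)

// transactionMass combines the three weighted components into a single
// "mass"; blocks are then limited by total mass instead of total size.
func transactionMass(serializedSize, scriptPubKeySize, sigOpCount uint64) uint64 {
	return serializedSize*massPerTxByte +
		scriptPubKeySize*massPerScriptPubKeyByte +
		sigOpCount*massPerSigOp
}

func main() {
	fmt.Println(transactionMass(300, 35, 1)) // 300 + 350 + 1000 = 1650
}
```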
talelbaz
8022e4cbea Validate locktime when admitted into mempool and when building a block. (#1794)
* Validate locktime when admitted into mempool and when building a block. Also, fix isFinalized to use DAA score instead of blue score.

* Change the function name: ValidateTransactionInContextIgnoringUTXO

Co-authored-by: tal <tal@daglabs.com>
2021-07-14 11:00:03 +03:00
talelbaz
28ac77b202 Tests for timelock - check lock time verify (CLTV) (#1751)
* Create a file

* Add tests for lockTime - CLTV scripts conditioned by time and block height

* Add a handle for an unhandled error.

* Renamed the test file

* Fix typo

* Add a counter for current block height.

* Change variable name

* Adds a test for wrong lock time, removed fundingTransaction variable

* Fix LockTimeThreshold constant, fix opcodeCheckLockTimeVerify and opcodeCheckSequenceVerify (padding at the end), add support for sequence and lock time numbers in the script builder, add more checks to the CLTV test.

* Call AddData instead of addData. Rename fixedSize to unpaddedSize

* Create wrapper functions for lockTime & sequence numbers that call a shared function in the script builder.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-08 18:09:45 +03:00
Constantine Bitensky
28af7eb596 Add stability-fast action to pull requests (#1791)
* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests
2021-07-08 16:08:21 +03:00
stasatdaglabs
a4d241c30a Batch transaction inv messages (#1788)
* Implement EnqueueTransactionIDsForPropagation.

* Fix tests.

* Fix TxRelayTest.

* Add a log for transaction propagation.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-07 17:54:57 +03:00
stasatdaglabs
487fab0e2b Implement a stability test to stress test the DAA algorithm (#1767)
* Copy over boilerplate and begin implementing TestDAA.

* Implement a fairly reliable method of hashing at a certain hashrate.

* Convert the DAA test to an application.

* Start kaspad and make sure that hashrate throttling works with that as well.

* Finish implementing testConstantHashRate.

* Tidied up a bit.

* Convert TestDAA back into a go test.

* Reorganize TestDAA to be more like a traditional test.

* Add sudden hashrate drop/jump tests.

* Simplify targetHashNanosecondsFunction.

* Improve progress logs.

* Add more tests.

* Remove the no-longer relevant `hashes` part of targetHashNanosecondsFunction.

* Implement a constant hashrate increase test.

* Implement a constant hashrate decrease test.

* Give the correct run duration to the constant hashrate decrease test.

* Add cooldowns to exponential functions.

* Add run.sh to the DAA test.

* Add a README.

* Add `daa` to run-slow.sh.

* Make go lint happy.

* Fix the README's title.

* Keep running tests even if one of them failed on high block rate deviation.

* Fix hashrate peak/valley tests.

* Preallocate arrays for hash and mining durations.

* Add more statistics to the "mined block" log.

* Make sure runDAATest stops when it's supposed to.

* Add a newline after "5 minute cooldown."

* Fix variable names.

* Rename totalElapsedTime to tatalElapsedDuration.

* In measureMachineHashNanoseconds, generate a random nonce only once.

* In runDAATest, add "DAA" to the start/finish log.

* Remove --logdir from kaspadRunCommand.

* In runDAATest, enlarge the nonce range to the entirety of uint64.

* Explain what targetHashNanosecondsFunction is.

* Move RunKaspadForTesting into common.

* Rename runForDuration to loopForDuration.

* Make go lint happy.

* Extract fetchBlockForMining to a separate function.

* Extract waitUntilTargetHashDurationHadElapsed to a separate function.

* Extract pushHashDuration and pushMiningDuration to separate functions.

* Extract logMinedBlockStatsAndUpdateStatFields to a separate function.

* Extract submitMinedBlock to a separate function.

* Extract tryNonceForMiningAndIncrementNonce to a separate function.

* Add comments.

* Use a rolling average instead of appending to an array for performance/accuracy.

* Change a word in a comment.

* Explain why we wait for five minutes at the end of the exponential increase/decrease tests.

Co-authored-by: Svarog <feanorr@gmail.com>
2021-07-07 16:14:22 +03:00
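Editor's note: a compact Go sketch of the hashrate-throttling approach this test describes — each hash attempt is only started once enough wall-clock time has elapsed to hold the effective rate at a target. Function and variable names are illustrative, not the test's actual helpers.

```go
package main

import (
	"fmt"
	"time"
)

// loopAtHashRate calls tryHash at roughly targetHashesPerSecond for the
// given duration by waiting out each hash's time "slot" before starting
// the next attempt.
func loopAtHashRate(targetHashesPerSecond float64, duration time.Duration, tryHash func()) {
	nanosPerHash := time.Duration(float64(time.Second) / targetHashesPerSecond)
	start := time.Now()
	for i := 0; time.Since(start) < duration; i++ {
		tryHash()
		// Busy-wait until the i-th slot has fully elapsed.
		for time.Since(start) < time.Duration(i+1)*nanosPerHash {
		}
	}
}

func main() {
	hashes := 0
	loopAtHashRate(100, 100*time.Millisecond, func() { hashes++ })
	fmt.Println(hashes) // ~10
}
```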
stasatdaglabs
2f272cd517 Expire old transactions when both an amount of DAA and an amount of time had passed. (#1784)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-06 15:23:53 +03:00
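Editor's note: a tiny sketch of the dual expiry condition in the commit above — a mempool transaction is evicted only once both enough DAA score and enough wall-clock time have passed, so neither a stalled DAG nor a DAA burst alone flushes the pool. Thresholds and names are illustrative.

```go
package main

import "fmt"

// isTransactionExpired requires both conditions to hold. The threshold
// values below are placeholders, not kaspad's configured defaults.
func isTransactionExpired(virtualDAAScore, addedAtDAAScore uint64,
	nowUnixSeconds, addedAtUnixSeconds int64) bool {
	const expiryDAAScoreThreshold = 86_400 // illustrative
	const expirySecondsThreshold = 86_400  // illustrative
	return virtualDAAScore-addedAtDAAScore > expiryDAAScoreThreshold &&
		nowUnixSeconds-addedAtUnixSeconds > expirySecondsThreshold
}

func main() {
	fmt.Println(isTransactionExpired(200_000, 100_000, 200_000, 100_000)) // true
}
```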
Constantine Bitensky
e3a6d9e49a Added RPC connection server version checking, fixes https://github.com/kaspanet/kaspad/issues/1047 (#1783)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-06 15:10:35 +03:00
Svarog
069ee26e84 Adds name to route, and writes it in every error message (#1777)
* Adds name to route, and writes it in every error message

* Update all calls with route name

* Fixed a few missed points

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-07-04 14:40:27 +03:00
Svarog
61aa15fd61 Update lastRebroadcastTime when we are rebroadcasting + Add some logs to mempool (#1776)
* Update lastRebroadcastTime when we are rebroadcasting

* Add some logs to mempool

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-07-04 12:08:44 +03:00
Svarog
f7cce5cb39 Cache virtual past median time (#1775)
* Add cache for virtual pastMedianTime

* Implement InvalidateVirtualPastMedianTimeCache for mocPastMedianTimeManager
2021-07-04 11:47:43 +03:00
cbitensky
2f7a1395e7 Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween (#1772)
* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-06-30 17:21:22 +03:00
Svarog
8b1ac86532 Modify locktime thresholds to accomodate 64 bits and millisecond timestamps (#1770)
* Change SequenceLockTimeDisabled to 1 << 63

* Move LockTimeThreshold to constants

* Update locktime constants according to new proposal

* Fix opcodeCheckSequenceVerify and failed tests

* Disallow numbers above 8 bytes in makeScriptNum

* Use littleEndian.Uint64 for sequence instead of ScriptNum

* Update comments on constants

* Update some more comments
2021-06-30 10:57:09 +03:00
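Editor's note: a Go sketch of the 64-bit locktime scheme these commits describe — bit 63 disables relative lock-time, and an absolute lockTime below a threshold is read as a DAA score while values at or above it are read as a millisecond UNIX timestamp. The threshold shown is illustrative of the proposal.

```go
package main

import "fmt"

const (
	// Bit 63 of a sequence disables relative lock-time for that input
	// (the "1 << 63" change in the commit above).
	SequenceLockTimeDisabled uint64 = 1 << 63
	// Below this, a lockTime is a DAA score; at or above it, a
	// millisecond timestamp. Illustrative value.
	LockTimeThreshold uint64 = 500_000_000_000
)

func lockTimeIsDAAScore(lockTime uint64) bool {
	return lockTime < LockTimeThreshold
}

func main() {
	fmt.Println(lockTimeIsDAAScore(1_000_000))                   // true: DAA score
	fmt.Println(lockTimeIsDAAScore(1_700_000_000_000))           // false: ms timestamp
	fmt.Println(SequenceLockTimeDisabled == 0x8000000000000000)  // true
}
```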
Svarog
ab721f3ad6 All orphans inputs should be added to op.orphansByPreviousOutpoint even if outpoint is not missing (#1766)
* All orphans inputs should be added to op.orphansByPreviousOutpoint even if outpoint is not missing

* Remove redundant log

* processOrphansAfterAcceptedTransaction: Check that UTXOEntry is empty before filling it

* Don't remove redeemers in expireOrphanTransactions
2021-06-24 17:09:16 +03:00
Svarog
798c5fab7d Add allowOrphans to rpcclient.SubmitTransaction (#1765) 2021-06-24 15:23:10 +03:00
Svarog
c13a4d90ed Mempool redesign (#1752)
* Added model and stubs for all main methods

* Add constructors to all main objects

* Implement BlockCandidateTransactions

* implement expireOldTransactions and expireOrphanTransactions

* Rename isHighPriority to neverExpires

* Add stub for checkDoubleSpends

* Revert "Rename isHighPriority to neverExpires"

This reverts commit b2da9a4a00.

* Implement transactionsOrderedByFeeRate

* Orphan maps should be idToOrphan

* Add error.go to mempool

* Invert the condition for banning when mempool rejects a transaction

* Move all model objects to model package

* Implement getParentsInPool

* Implemented mempoolUTXOSet.addTransaction

* Implement removeTransaction, remove sanity checks

* Implemented mempoolUTXOSet.checkDoubleSpends

* Implemented removeOrphan

* Implement removeOrphan

* Implement maybeAddOrphan and AddOrphan

* Implemented processOrphansAfterAcceptedTransaction

* Implement transactionsPool.addTransaction

* Implement RemoveTransaction

* If a transaction was removed from the mempool - update it's redeemers in orphan pool as well

* Use maximumOrphanTransactionCount

* Add allowOrphans to ValidateAndInsertTransaction stub

* Implement validateTransaction functions

* Implement fillInputs

* Implement ValidateAndInsertTransaction

* Implement HandleNewBlockTransactions

* Implement missing mempool interface methods

* Add comments to exported functions

* Call ValidateTransactionInIsolation where needed

* Implement RevalidateHighPriorityTransactions

* Rewire kaspad to use new mempool, and fix compilation errors

* Update rebroadcast logic to use new structure

* Handle non-standard transaction errors properly

* Add mutex to mempool

* bugfix: GetTransaction panics when ok is false

* properly calculate targetBlocksPerSecond in config.go

* Fix various lint errors and tests

* Fix expected text in test for duplicate transactions

* Skip the coinbase transaction in HandleNewBlockTransactions

* Unorphan the correct transactions

* Call ValidateTransactionAndPopulateWithConsensusData on unorphanTransaction

* Re-apply policy_test as check_transactions_standard_test

* removeTransaction: Remove redeemers in orphan pool as well

* Remove redundant check for uint64 < 0

* Export and rename isDust -> IsTransactionOutputDust to allow usage by rothschild

* Add allowOrphan to SubmitTransaction RPC request

* Remove all implementation from mempool.go

* tidy go mod

* Don't pass acceptedOrphans to handleNewBlockTransactions

* Use t.Errorf instead of t.Fatalf

* Remove minimum relay fee from TestDust, as it's no longer configurable

* Add separate VirtualDAAScore method for faster retrieval where it's repeated multiple times

* Broadcast all transactions that were accepted

* Don't re-use GetVirtualDAAScore in GetVirtualInfo - this causes a deadlock

* Use real transaction count, and not Orphan

* Get mempool config from outside, incorporating values received from cli

* Use MinRelayFee and MaxOrphanTxs from global kaspad config

* Add explanation for the seemingly redundant check for transaction version in checkTransactionStandard

* Update some comment

* Convert creation of acceptedTransactions to a single line

* Move mempoolUTXOSet out of checkDoubleSpends

* Add test for attempt to insert double spend into mempool

* fillInputs: Skip check for coinbase - it's always false in mempool

* Clarify comment about removeRedeemers when removing random orphan

* Don't remove high-priority transactions in limitTransactionCount

* Use mempool.removeTransaction in limitTransactionCount

* Add mutex comment to handleNewBlockTransactions

* Return error from limitTransactionCount

* Pluralize the map types

* mempoolUTXOSet.removeTransaction: Don't restore utxo if it was not created in mempool

* Don't evacuate from orphanPool high-priority transactions

* Disallow double-spends in orphan pool

* Don't use exported (and locking) methods from inside mempool

* Check for double spends in mempool during revalidateTransaction

* Add checkOrphanDuplicate

* Add orphan to acceptedOrphans, not current

* Add TestHighPriorityTransactions

* Fix off-by-one error in limitTransactionCount

* Add TestRevalidateHighPriorityTransactions

* Remove checkDoubleSpends from revalidateTransaction

* Fix TestRevalidateHighPriorityTransactions

* Move check for MaximumOrphanCount to beginning of maybeAddOrphan

* Rename all map types to follow the singularToSingularMap convention

* limitOrphanPool only after the orphan was added

* TestDoubleSpendInMempool: use createChildTxWhenParentTxWasAddedByConsensus instead of createTransactionWithUTXOEntry

* Fix some comments

* Have separate min/max transaction versions for mempool

* Add comment on defaultMaximumOrphanTransactionCount to keep it small as long as we have recursion

* Fix comment

* Rename: createChildTxWhenParentTxWasAddedByConsensus -> createChildTxWhereParentTxWasAddedByConsensus

* Handle error from createChildTxWhereParentTxWasAddedByConsensus

* Rename createChildTxWhereParentTxWasAddedByConsensus -> createChildAndParentTxsAndAddParentToConsensus

* Convert all MaximumXXX constants to uint64

* Add comment

* remove mutex comments
2021-06-23 15:49:20 +03:00
Constantine Bitensky
4ba8b14675 Make GetVirtualSelectedParentBlueScore work properly (#1764) 2021-06-21 15:56:37 +03:00
Constantine Bitensky
319cbce768 Friendly error messages for ban and unban with ip with brackets (#1762) 2021-06-21 11:42:45 +03:00
Constantine Bitensky
bdd42903b4 Remove NOP1..10 (#1761) 2021-06-20 16:52:12 +03:00
Svarog
9bedf84740 kaspawallet: Add timeout to connect to daemon, and print friendly error (#1756)
message if it's not available
2021-06-16 15:05:19 +03:00
Svarog
f317f51cdd Change IBD finished log to specify if completed successfully or not (#1754)
* Change IBD finished log to specify if it completed successfully or not

* Move log to outside UnsetIBDRunning

* Style enhancement of IBD finished string
2021-06-16 11:13:29 +03:00
Ori Newman
4207c82f5a Add database prefix (#1750)
* Add prefix to stores

* Add prefix to forgotten stores

* Add a special type for prefix

* Rename transaction->dbTx

* Change error message

* Use countKeyName

* Rename Temporary Consensus to Staging

* Add DeleteStagingConsensus to Domain interface

* Add lock to staging consensus

* Make prefix type-safer

* Use ioutil.TempDir instead of t.TempDir
2021-06-15 17:47:17 +03:00
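Editor's note: a minimal sketch of the prefixing idea in this PR — every store key is namespaced under a database prefix, so a staging consensus can live alongside the active one and be deleted wholesale. The byte values are illustrative.

```go
package main

import "fmt"

// prefix namespaces all keys belonging to one consensus instance.
type prefix byte

const (
	activeConsensus  prefix = 0 // illustrative values
	stagingConsensus prefix = 1
)

// prefixedKey builds the full database key for a store key under a prefix.
func prefixedKey(p prefix, storeKey []byte) []byte {
	return append([]byte{byte(p)}, storeKey...)
}

func main() {
	fmt.Println(prefixedKey(stagingConsensus, []byte("block-store/abc")))
}
```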
Constantine Bitensky
70399dae2a Added a friendly error message when genesis hash is not relevant to database (#1745)
Co-authored-by: Constantine Bitensky <constantinebitensky@gmail.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-06-13 11:36:47 +03:00
Constantine Bitensky
2ae1b7853f Added comment to database lock error (#1746)
More readable database lock error string

Co-authored-by: Constantine Bitensky <constantinebitensky@gmail.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-06-10 19:49:08 +03:00
Constantine Bitensky
d53d040bee Add .github/workflows/stability-fast.yml to run stability test (#1744)
* Added .github/stability-fast.yml

Co-authored-by: Constantine Bitensky <constantine@daglabs.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-06-09 15:11:56 +03:00
Constantine Bitensky
79c74c482b Get connections from Seeder when no connections left (#1742)
Co-authored-by: Constantine Bitensky <constantinebitensky@gmail.com>
2021-06-08 17:42:13 +03:00
Ori Newman
3b0394eefe Skip solving the block if SkipProofOfWork (#1741) 2021-06-08 15:44:27 +03:00
tal
43e6467ff1 Fix merge errors 2021-06-06 17:00:29 +03:00
stasatdaglabs
363494ef7a Implement NotifyVirtualDaaScoreChanged (#1737)
* Add notifyVirtualDaaScoreChanged to protowire.

* Add notifyVirtualDaaScoreChanged to the rest of kaspad.

* Add notifyVirtualDaaScoreChanged to the rest of kaspad.

* Test the DAA score notification in TestVirtualSelectedParentBlueScore.

* Rename TestVirtualSelectedParentBlueScore to TestVirtualSelectedParentBlueScoreAndVirtualDAAScore.

(cherry picked from commit 83e631548f)
2021-06-06 16:46:02 +03:00
cbitensky
d1df97c4c5 added --password and --yes cmdline parameters (#1735)
* added --password and --yes cmdline parameters

* Renamed:
confPassword -> cmdLinePassword
yes -> forceOverride

Added password in ImportMnemonics

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-06-01 15:36:18 +03:00
Svarog
4f52a6de51 Check if err != nil returned from ConnectionManager.Ban() (#1736) 2021-05-31 10:50:12 +03:00
Svarog
4f4a8934e7 Add option to specify blockHash in EstimateNetworkHashesPerSecond (#1731)
* Add BlockHash optional parameter to EstimateNetworkBlockHashesPerSecond

* Allow to pass '-' for optional values in kaspactl

* Solve a division-by-zero in estimateNetworkHashesPerSecond

* Add BlockHash to toAppMessage/fromAppMessage functions

* Rename: topHash -> StartHash

* Return proper error message if provided startHash doesn't deserialize into a hash
2021-05-27 14:59:29 +03:00
Svarog
16ba2bd312 Export DefaultPath + Add logging to kaspawallet daemon (#1730)
* Export DefaultPath + Add logging to kaspawallet daemon

* Export purpose and CoinType instead of defaultPath

* Move TODO to correct place
2021-05-26 18:51:11 +03:00
Svarog
6613faee2d Invert coin-type and purpose in default derivation path (#1728) 2021-05-20 17:43:07 +03:00
Ori Newman
edc459ae1b Improve pickVirtualParents performance and add many-tips to the stability test suite (#1725)
* First limit the candidates size to 3*csm.maxBlockParents before taking the bottom csm.maxBlockParents/2

* Change log level of printing all tips to Tracef

* Add many-tips to run-fast.sh and run-slow.sh

* Fix preallocation size

* Assign intermediate variables
2021-05-19 16:06:52 +03:00
talelbaz
d7f2cf81c0 Change merge set order to topological order (#1654)
* Change mergeSet to be ordered topologically.

* Add special condition for genesis.

* Add check that the coinbase is validated.

* Change names of variables(old: chainHash, blueHash).

* Fix the DAG diagram in the comment above the function.

* Fix variables names.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-05-19 14:40:55 +03:00
Ori Newman
4658f9d05c Implement BIP 39 and HD wallet features (#1705)
* Naive bip39 with address reuse

* Avoid address reuse in libkaspawallet

* Add wallet daemon

* Use daemon everywhere

* Add forceOverride

* Make CreateUnsignedTransaction endpoint receive amount in sompis

* Collect close UTXOs

* Filter out non-spendable UTXOs from selectUTXOs

* Use different paths for multisig and non multisig

* Fix tests to use non zero path

* Fix multisig cosigner index detection

* Add comments

* Fix dump_unencrypted_data.go according to bip39 and bip32

* Fix wrong derivation path for multisig on wallet creation

* Remove IsSynced endpoint and add validation if wallet is synced for the relevant endpoints

* Rename server address to daemon address

* Fix capacity for extendedPublicKeys

* Use ReadBytes instead of ReadLine

* Add validation when importing

* Increment before using index value, and use it as is

* Save keys file exactly where needed

* Use %+v printErrorAndExit

* Remove redundant consts

* Rename collectCloseUTXOs and collectFarUTXOs

* Move typedefs around

* Add comment to addressesToQuery

* Update collectUTXOsFromRecentAddresses comment about locks

* Split collectUTXOs to small functions

* Add sanity check

* Add addEntryToUTXOSet function

* Change validateIsSynced to isSynced

* Simplify createKeyPairsFromFunction logic

* Rename .Sync() to .Save()

* Fix typo

* Create bip39BitSize const

* Add consts to purposes

* Add multisig check for 'send'

* Rename updatedPSTxBytes to partiallySignedTransaction

* Change collectUTXOsFromFarAddresses's comment

* Use setters for last used indexes

* Don't use the pstx acronym

* Fix SetPath

* Remove spaces when reading lines

* Fix walletserver to daemonaddress

* Fix isUTXOSpendable to use DAA score

Co-authored-by: Svarog <feanorr@gmail.com>
2021-05-19 10:03:23 +03:00
Ori Newman
010df3b0d3 Rename IncludeTransactionVerboseData flag to IncludeTransactions (#1717)
* Rename IncludeTransactionVerboseData flag to IncludeTransactions

* Regenerate auto-generated files
2021-05-18 17:40:06 +03:00
stasatdaglabs
346598e67f Fix merge errors. 2021-05-18 17:09:11 +03:00
Ori Newman
268906a7ce Add VirtualDaaScore to GetBlockDagInfo (#1719)
Co-authored-by: Svarog <feanorr@gmail.com>

(cherry picked from commit 36c56f73bf)
2021-05-18 17:07:07 +03:00
stasatdaglabs
befc60b185 Update changelog.txt for v0.10.2.
(cherry picked from commit 91866dd61c)
2021-05-18 16:37:10 +03:00
Ori Newman
dd3e04e671 Fix overflow when checking coinbase maturity and don't ban peers that send transactions with immature spend (#1722)
* Fix overflow when checking coinbase maturity and don't ban peers that send transactions with immature spend

* Fix tests

Co-authored-by: Svarog <feanorr@gmail.com>
(cherry picked from commit a18f2f8802)
2021-05-18 16:33:29 +03:00
stasatdaglabs
9c743db4d6 Fix merge errors. 2021-05-18 16:33:01 +03:00
Ori Newman
eb3dba5c88 Fix calcTxSequenceLockFromReferencedUTXOEntries for loop break condition (#1723)
Co-authored-by: Svarog <feanorr@gmail.com>
(cherry picked from commit 04dc1947ff)
2021-05-18 16:30:26 +03:00
stasatdaglabs
e46e2580b1 Add VirtualDaaScore to GetBlockDagInfo (#1719)
Co-authored-by: Svarog <feanorr@gmail.com>
(cherry picked from commit 36c56f73bf)

# Conflicts:
#	infrastructure/network/netadapter/server/grpcserver/protowire/rpc.pb.go
2021-05-18 16:28:57 +03:00
Ori Newman
414f58fb90 serializeAddress should always serialize as IPv6, since it assumes the IP size is 16 bytes (#1720)
(cherry picked from commit b76ca41109)
2021-05-18 16:05:36 +03:00
Ori Newman
9df80957b1 Fix getBlock and getBlocks RPC commands to return blocks and transactions properly (#1716)
* Fix getBlock RPC command to return transactions

* Fix getBlocks RPC command to return transactions and blocks

* Add GetBlockEvenIfHeaderOnly and use it for getBlock and getBlocks

* Implement GetBlockEvenIfHeaderOnly for fakeRelayInvsContext

* Use less nested code

(cherry picked from commit 50fd86e287)
2021-05-13 16:00:42 +03:00
stasatdaglabs
268c9fa83c Update changelog for v0.10.1.
(cherry picked from commit 9e0b50c0dd)
2021-05-11 12:14:04 +03:00
Ori Newman
2e3592e351 Calculate virtual's acceptance data and multiset after importing a new pruning point (#1700)
(cherry picked from commit b405ea50e5)
2021-05-11 12:14:00 +03:00
talelbaz
19718ac102 Change removeTransactionAndItsChainedTransactions to be non-recursive (#1696)
* Change removeTransactionAndItsChainedTransactions to be non-recursive

* Split the variables assigning.

* Change names of function and variables.

* Append the correct queue.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-05-10 15:21:48 +03:00
talelbaz
28a8e96e65 New stability test - many tips (#1694)
* Unfinished code.

* Update the testnet version to testnet-5. (#1683)

* Generalize stability-tests/docker/Dockerfile. (#1685)

* Committed for rebasing.

* Adds stability-test many-tips, which tests kaspad's handling of many tips in the DAG.

* Delete manytips_test.go.

* Add timeout to the test and create only one RPC client.

* Place the spawn before the for loop and remove a redundant condition.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-05-10 15:03:20 +03:00
Ori Newman
4df283934a Add a link to the kaspanet github project in the README (#1698)
* Add link to project

* Fix README.md
2021-05-04 11:56:51 +03:00
talelbaz
ab89efe3dc Update relayTransactions unit tests. (#1623)
* [NOD-1344] relaytransactions: simple unit tests

* [NOD-1344] Add mid-complexity unit tests for relaytransactions
* Improve TestHandleRelayedTransactions sub-tests
* Improve TestHandleRequestedTransactions sub-tests

* [NOD-1344]  Fix Simple call test

* [NOD-1344]  Fix tests after redesign

* Divide transactionrelay_test.go to 2 separated tests and updates the tests.

* Changes due to review: change the test file names and the test function names, add new comments, and fix a typo.

* Delete an unnecessary comparison to True in the if statement condition.

* Update the branch to v0.11.0-dev.

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-05-02 16:16:57 +03:00
Ori Newman
fa16c30cf3 Implement bip32 (#1676)
* Implement bip32

* Unite private and public extended keys

* Change variable names

* Change test name and add comment

* Rename var name

* Rename ckd.go to child_key_derivation.go

* Rename ser32 -> serializeUint32

* Add PrivateKey method

* Rename Path -> DeriveFromPath

* Add comment to validateChecksum

* Remove redundant condition from parsePath

* Rename Fingerprint->ParentFingerprint

* Merge hardened and non-hardened paths in calcI

* Change fingerPrintFromPoint to method

* Move hash160 to hash.go

* Fix a bug in calcI

* Simplify doubleSha256

* Remove slice end bound

* Split long line

* Change KaspaMainnetPrivate/public to represent kprv/kpub

* Add comments

* Fix comment

* Copy base58 library to kaspad

* Add versions for all networks

* Change versions to hex

* Add comments

Co-authored-by: Svarog <feanorr@gmail.com>
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-04-28 15:27:16 +03:00
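Editor's note: two of the BIP32 building blocks named in this PR, sketched in Go — ser32 (renamed serializeUint32 above) packs a child index as 4 big-endian bytes before HMAC-SHA512 hashing, and indexes at or above 2^31 are "hardened" (derivable from the private key only). Both facts come from the BIP32 spec, not from kaspad internals.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// hardenedIndexStart marks the BIP32 hardened-derivation boundary.
const hardenedIndexStart uint32 = 0x80000000

// serializeUint32 is BIP32's ser32: a child index as 4 big-endian bytes.
func serializeUint32(v uint32) []byte {
	buf := make([]byte, 4)
	binary.BigEndian.PutUint32(buf, v)
	return buf
}

func isHardened(childIndex uint32) bool {
	return childIndex >= hardenedIndexStart
}

func main() {
	fmt.Printf("%x hardened=%v\n",
		serializeUint32(hardenedIndexStart), isHardened(hardenedIndexStart))
}
```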
stasatdaglabs
c28366eb50 Add v0.10.0 to the changelog. (#1692)
(cherry picked from commit 9dd8136e4b)
2021-04-26 15:12:50 +03:00
stasatdaglabs
dc0bf56bf3 Fix the mempool-limits stability test (#1690)
* Add -v to the `go test` command.

* Generate a new keypair for mempool-limits.

* Set mempool-limits to time out only after 24 hours.

(cherry picked from commit eb1703b948)
2021-04-26 15:12:46 +03:00
stasatdaglabs
91de1807ad Generalize stability-tests/docker/Dockerfile. (#1685)
(cherry picked from commit a6da3251d0)
2021-04-26 15:12:44 +03:00
stasatdaglabs
830684167c Update the testnet version to testnet-5. (#1683)
(cherry picked from commit bf198948c4)
2021-04-26 15:12:41 +03:00
stasatdaglabs
1f56a68a28 Add an RPC command: EstimateNetworkHashesPerSecond (#1686)
* Implement EstimateNetworkHashesPerSecond.

* Fix failing tests.

* Add request/response messages to the .proto files.

* Add the EstimateNetworkHashesPerSecond RPC command.

* Add the EstimateNetworkHashesPerSecond RPC client function.

* Add the EstimateNetworkHashesPerSecond RPC command to kaspactl.

* Disallow windowSize lesser than 2.

* Fix wrong scale (milliseconds instead of seconds).

* Handle windowHashes being 0.
2021-04-22 15:18:21 +03:00
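Editor's note: a sketch of the estimate behind this RPC command — the blue work accumulated across a window of blocks divided by the window's elapsed time. Kaspad timestamps are in milliseconds, which is the "wrong scale" fix noted above; the function shape here is illustrative, not kaspad's exact code.

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// estimateNetworkHashesPerSecond divides the work done across a window
// (difference in cumulative blue work) by the window's span in seconds.
func estimateNetworkHashesPerSecond(firstBlueWork, lastBlueWork *big.Int,
	firstTimeMilliseconds, lastTimeMilliseconds int64) (*big.Int, error) {

	windowMilliseconds := lastTimeMilliseconds - firstTimeMilliseconds
	if windowMilliseconds <= 0 {
		return nil, errors.New("window must span a positive duration")
	}
	windowHashes := new(big.Int).Sub(lastBlueWork, firstBlueWork)
	// Scale by 1000 to convert per-millisecond work into per-second.
	return windowHashes.Div(
		windowHashes.Mul(windowHashes, big.NewInt(1000)),
		big.NewInt(windowMilliseconds)), nil
}

func main() {
	hps, _ := estimateNetworkHashesPerSecond(
		big.NewInt(0), big.NewInt(150_000_000), 0, 1000)
	fmt.Println(hps) // 150000000 hashes over 1 second
}
```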
stasatdaglabs
13a6b4cc51 Update to version 0.11.0 2021-04-20 13:40:11 +03:00
Svarog
dfd8b3423d Implement new mechanism for updating UTXO Diffs (#1671)
* Use selectedParent instead of selectedTip for non-selectedTip blocks in restoreSingleBlockStatus

* Cache the selectedParent for re-use in a resolveSingleBlockStatus chain

* Implement and use reverseUTXOSet

* Reverse blocks in correct order

* Support resolveBlockStatus without separate stagingAreas for usage of testConsensus

* Handle the case where the tip of the resolved block is not the next selectedTip

* Unify isResolveTip

* Some minor fixes and cleanup

* Add full finality window re-org test to stability-slow

* rename: useSeparateStagingAreasPerBlock -> useSeparateStagingAreaPerBlock

* Better logs in resolveSingleBlockStatus

* A few retouches to reverseUTXODiffs

* TEMPORARY COMMIT: EXTRACT ALL DIFFFROMS TO SEPARATE METHODS

* TEMPORARY COMMIT: REMOVE DIFFICULTY CHECKS IN DEVNET

* Don't pre-allocate in utxo-algebra, since the numbers are not known ahead-of-time

* Add some logs to reverseUTXODiffs

* Revert "TEMPORARY COMMIT: REMOVE DIFFICULTY CHECKS IN DEVNET"

This reverts commit c0af9dc6ad.

* Revert "TEMPORARY COMMIT: EXTRAT ALL DIFFFROMS TO SEPARATE METHODS"

This reverts commit 4fcca1b48c.

* Remove redundant parentheses

* Revise some log messages

* Rename:oneBlockBeforeCurrentUTXOSet -> lastResolvedBlockUTXOSet

* Don't break if the block was resolved as invalid

* rename unverifiedBlocks to recentlyVerifiedBlocks in reverseUTXODiffs

* Add errors.New to the panic, for a stack trace

* Reverse the UTXODiffs after the main block has been committed

* Use the correct value for previousUTXODiff

* Add test for ReverseUTXODiff

* Fix some names and comments

* Update TestReverseUTXODiffs to use consensus.Config

* Fix comments mentioning 'oneBlockBeforeTip'
2021-04-20 10:26:55 +03:00
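The core idea behind reverseUTXODiffs is easiest to see on a toy model: a diff is an add-set and a remove-set, and swapping the two yields the diff that walks the selected chain backwards. The sketch below uses stand-in types, not kaspad's actual utxo package API.

package main

import "fmt"

// diff is a stand-in for a UTXO diff: outpoints to add and outpoints to remove.
type diff struct {
	toAdd    map[string]bool
	toRemove map[string]bool
}

// reversed swaps the two sets, turning diff(A->B) into diff(B->A).
func reversed(d diff) diff {
	return diff{toAdd: d.toRemove, toRemove: d.toAdd}
}

// apply produces the UTXO set that results from applying d to set.
func apply(set map[string]bool, d diff) map[string]bool {
	out := make(map[string]bool, len(set))
	for outpoint := range set {
		out[outpoint] = true
	}
	for outpoint := range d.toRemove {
		delete(out, outpoint)
	}
	for outpoint := range d.toAdd {
		out[outpoint] = true
	}
	return out
}

func main() {
	utxoSetA := map[string]bool{"tx1:0": true, "tx2:1": true}
	forward := diff{toAdd: map[string]bool{"tx3:0": true}, toRemove: map[string]bool{"tx1:0": true}}
	utxoSetB := apply(utxoSetA, forward)
	// Applying the reversed diff to B recovers A, which is what lets the
	// resolver store diffs pointing backwards along the selected chain.
	fmt.Println(apply(utxoSetB, reversed(forward)))
}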
Svarog
28bfc0fb9c Move pow package from model to utils (#1681) 2021-04-19 15:35:36 +03:00
Elichai Turkel
83beae4463 Add consensus.Config as a wrapper for dagParams (#1680)
* Add a new consensus.Config wrapper to dagParams

* Update all tests to use consensus.Config
2021-04-19 09:07:34 +03:00
Elichai Turkel
a6ebe83198 Disable validate commitment unless explicitly requested (#1679)
* Add a flag for sanity-checking the pruning point UTXO set, and run the sanity check only if it's enabled

* add description to EnableSanityCheckPruningUTXOSet

* review fix

Co-authored-by: Svarog <feanorr@gmail.com>
2021-04-18 17:55:17 +03:00
Svarog
acdc59b565 Add version file to database (#1678)
* Add version file to database

* Remove redundant code

* Check for version before opening the database, create version file after

* Create version file before opening the database
2021-04-14 12:35:39 +03:00
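A minimal sketch of the version-file handshake this commit describes: record the version before the database is opened, and refuse to open a database whose recorded version differs. The file name, version value, and error text here are assumptions; only the check-then-open ordering follows the bullets above.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const currentDatabaseVersion = "1" // assumed version format

func checkDatabaseVersion(dbPath string) error {
	versionFilePath := filepath.Join(dbPath, "version") // hypothetical file name
	contents, err := os.ReadFile(versionFilePath)
	if os.IsNotExist(err) {
		// Fresh database: create the version file before the database is opened.
		if mkdirErr := os.MkdirAll(dbPath, 0700); mkdirErr != nil {
			return mkdirErr
		}
		return os.WriteFile(versionFilePath, []byte(currentDatabaseVersion), 0600)
	}
	if err != nil {
		return err
	}
	if string(contents) != currentDatabaseVersion {
		return fmt.Errorf("database version %q does not match expected version %q",
			contents, currentDatabaseVersion)
	}
	return nil
}

func main() {
	if err := checkDatabaseVersion(filepath.Join(os.TempDir(), "kaspad-example-db")); err != nil {
		fmt.Println(err)
	}
}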
stasatdaglabs
15811b0bcb Fix missing VerboseData in BlockAddedNotifications. (#1675)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-04-13 11:35:06 +03:00
Svarog
a8a7e3dd9b Add windows to the CI + fix errors when testing on Windows (#1674)
* Add windows to the CI

* Cast syscall.Stdin into an integer

* DataDir -> AppDir in service_windows.go

* Rename mempool-limits package to something non-main

* Close database after re-assigning to it

* Up rpcTimeout to 10 seconds
2021-04-12 14:53:34 +03:00
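The "Cast syscall.Stdin into an integer" bullet exists because syscall.Stdin is already an int on Unix but a Handle on Windows, so code that treats it as a file descriptor needs an explicit conversion to build on both. A small illustration of the pattern using golang.org/x/term (the surrounding program is mine, not the commit's code):

package main

import (
	"fmt"
	"syscall"

	"golang.org/x/term"
)

func main() {
	// The int conversion is a no-op on Unix and the required cast on Windows.
	fd := int(syscall.Stdin)
	fmt.Println("stdin is a terminal:", term.IsTerminal(fd))
}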
Svarog
3f193e9219 Use a separate folder for every network under ~/.kaspawallet (#1673) 2021-04-11 18:07:21 +03:00
stasatdaglabs
dfa24d8353 Implement a stability test for mempool limits (#1647)
* Copy some boilerplate from the other stability tests.

* Fix a copy+paste error in run.sh.

* Copy over some stability test boilerplate go code.

* Run kaspad in the background.

* Catch panics and initialize the RPC client.

* Mine enough blocks to fund filling up the mempool.

* Extract coinbase transactions out of the generated blocks.

* Tidy up a bit.

* Implement submitting transactions.

* Lower the amount of outputs in each transaction.

* Verify that the mempool size has the expected amount of transactions.

* Pregenerate enough funds before submitting the first transaction so that block creation doesn't interfere with the test.

* Empty mempool out by continuously adding blocks to the DAG.

* Handle orphan transactions when overfilling the mempool.

* Increase mempoolSizeLimit to 1m.

* Fix a comment.

* Fix a comment.

* Add mempool-limits to run-slow.sh.

* Rename generateTransactionsWithLotsOfOutputs to generateTransactionsWithMultipleOutputs.

* Rename generateCoinbaseTransaction to mineBlockAndGetCoinbaseTransaction.

* Make generateFundingCoinbaseTransactions return an object instead of storing a global variable.

* Convert mempool-limits into a Go test.

* Convert panics to t.Fatalfs.

* Fix a comment.

* Increase mempoolSizeLimit to 1m.

* Run TestMempoolLimits only if RUN_STABILITY_TESTS is set.

* Move the run of mempool-limits in run-slow.sh.

* Add a comment above fundingCoinbaseTransactions.

* Make a couple of stylistic changes.

* Use transactionhelper.CoinbaseTransactionIndex instead of hardcoding 0.

* Make uninteresting errors print %+v instead of %s.

Co-authored-by: Svarog <feanorr@gmail.com>
2021-04-11 16:59:11 +03:00
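The RUN_STABILITY_TESTS gate mentioned in the bullets is a plain environment-variable check; a sketch of the pattern follows (the package name and skip message are illustrative, the env var name comes from the commit):

package mempoollimits

import (
	"os"
	"testing"
)

func TestMempoolLimits(t *testing.T) {
	// Slow stability tests only run when explicitly requested, so the
	// ordinary `go test ./...` run stays fast.
	if os.Getenv("RUN_STABILITY_TESTS") == "" {
		t.Skip("RUN_STABILITY_TESTS is not set, skipping mempool-limits")
	}
	// ... pregenerate funds, overfill the mempool, then mine until it drains ...
}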
Elichai Turkel
3c3ad1425d Make moving the pruning point faster (#1660)
* Add oldPruningPoint to pruningStore

* Make the pruning store work with UTXO diffs and return an iterator over the pruning point UTXO set

* Redesign pruning point utxo storage by creating a diff and modifying the old pruning utxo set

* Fix review comments

* Rename updatePruningPointUTXOSet
2021-04-11 11:17:13 +03:00
Svarog
9bb8123391 Don't include selectedParentHash in block verbose data if it's nil + Fix test vectors in rpc-stability (#1668)
* Fix submitBlockRequest in rpc-stability/commands.json

* Don't include selectedParentHash in block verbose data if it's nil (a.k.a. genesis)
2021-04-08 15:34:10 +03:00
539 changed files with 27750 additions and 13619 deletions

View File

@@ -11,8 +11,8 @@
#>
param(
[System.UInt64] $MinimumSize = 8gb ,
[System.UInt64] $MaximumSize = 8gb ,
[System.UInt64] $MinimumSize = 16gb ,
[System.UInt64] $MaximumSize = 16gb ,
[System.String] $DiskRoot = "D:"
)

View File

@@ -1,7 +1,7 @@
name: Build and Upload assets
name: Build and upload assets
on:
release:
types: [published]
types: [ published ]
jobs:
build:
@@ -10,34 +10,33 @@ jobs:
fail-fast: false
matrix:
os: [ ubuntu-latest, windows-latest, macos-latest ]
name: Building For ${{ matrix.os }}
name: Building, ${{ matrix.os }}
steps:
- name: Fix windows CRLF
- name: Fix CRLF on Windows
if: runner.os == 'Windows'
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# We need to increase the page size because the tests run out of memory on github CI windows.
# Use the powershell script from this github action: https://github.com/al-cheb/configure-pagefile-action/blob/master/scripts/SetPageFileSize.ps1
# MIT License (MIT) Copyright (c) 2020 Maxim Lobanov and contributors
- name: Increase page size on windows
# Increase the pagefile size on Windows to avoid running out of memory
- name: Increase pagefile size on Windows
if: runner.os == 'Windows'
shell: powershell
run: powershell -command .\.github\workflows\SetPageFileSize.ps1
run: powershell -command .github\workflows\SetPageFileSize.ps1
- name: Set up Go 1.x
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Build on linux
- name: Build on Linux
if: runner.os == 'Linux'
# `-extldflags=-static` - means static link everything, `-tags netgo,osusergo` means use pure go replacements for "os/user" and "net"
# `-extldflags=-static` - means static link everything,
# `-tags netgo,osusergo` means use pure go replacements for "os/user" and "net"
# `-s -w` strips the binary to produce smaller size binaries
run: |
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ ./...
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-linux.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-linux.zip"
zip -r "${archive}" ./bin/*
@@ -48,7 +47,7 @@ jobs:
if: runner.os == 'Windows'
shell: bash
run: |
go build -v -ldflags="-s -w" -o bin/ ./...
go build -v -ldflags="-s -w" -o bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-win64.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-win64.zip"
powershell "Compress-Archive bin/* \"${archive}\""
@@ -58,7 +57,7 @@ jobs:
- name: Build on MacOS
if: runner.os == 'macOS'
run: |
go build -v -ldflags="-s -w" -o ./bin/ ./...
go build -v -ldflags="-s -w" -o ./bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-osx.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-osx.zip"
zip -r "${archive}" ./bin/*
@@ -66,7 +65,7 @@ jobs:
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Upload Release Asset
- name: Upload release asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,4 +1,4 @@
name: Go-Race
name: Race
on:
schedule:
@@ -7,7 +7,7 @@ on:
jobs:
race_test:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
@@ -19,10 +19,10 @@ jobs:
with:
fetch-depth: 0
- name: Set up Go 1.x
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.15
go-version: 1.16
- name: Set scheduled branch name
shell: bash
@@ -46,4 +46,4 @@ jobs:
run: |
git checkout "${{ env.run_on }}"
git status
go test -race ./...
go test -timeout 20m -race ./...

View File

@@ -1,38 +1,36 @@
name: Go
name: Tests
on:
push:
pull_request:
# edtited - "title, body, or the base branch of the PR is modified"
# synchronize - "commit(s) pushed to the pull request"
types: [opened, synchronize, edited, reopened]
# edited - because base branch can be modified
# synchronize - update commits on PR
types: [opened, synchronize, edited]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-16.04, macos-10.15 ]
name: Testing on on ${{ matrix.os }}
os: [ ubuntu-latest, macos-latest ]
name: Tests, ${{ matrix.os }}
steps:
- name: Fix windows CRLF
- name: Fix CRLF on Windows
if: runner.os == 'Windows'
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# We need to increase the page size because the tests run out of memory on github CI windows.
# Use the powershell script from this github action: https://github.com/al-cheb/configure-pagefile-action/blob/master/scripts/SetPageFileSize.ps1
# MIT License (MIT) Copyright (c) 2020 Maxim Lobanov and contributors
- name: Increase page size on windows
# Increase the pagefile size on Windows to avoid running out of memory
- name: Increase pagefile size on Windows
if: runner.os == 'Windows'
shell: powershell
run: powershell -command .\.github\workflows\SetPageFileSize.ps1
run: powershell -command .github\workflows\SetPageFileSize.ps1
- name: Set up Go 1.x
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
@@ -51,14 +49,41 @@ jobs:
shell: bash
run: ./build_and_test.sh -v
stability-test-fast:
runs-on: ubuntu-latest
name: Fast stability tests, ${{ github.head_ref }}
steps:
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Install kaspad
run: go install ./...
- name: Install golint
run: go get -u golang.org/x/lint/golint
- name: Run fast stability tests
working-directory: stability-tests
run: ./install_and_test.sh
coverage:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
name: Produce code coverage
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Set up Go 1.x
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
@@ -70,4 +95,4 @@ jobs:
run: go test -v -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...
- name: Upload coverage file
run: bash <(curl -s https://codecov.io/bash)
run: bash <(curl -s https://codecov.io/bash)

View File

@@ -1,16 +1,13 @@
Kaspad
====
Warning: This is pre-alpha software. There's no guarantee anything works.
====
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](https://choosealicense.com/licenses/isc/)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/kaspanet/kaspad)
Kaspad is the reference full node Kaspa implementation written in Go (golang).
This project is currently under active development and is in a pre-Alpha state.
Some things still don't work and APIs are far from finalized. The code is provided for reference only.
This project is currently under active development and is in Beta state.
## What is kaspa
@@ -63,6 +60,8 @@ Join our discord server using the following link: https://discord.gg/YNYnNN5Pf2
The [integrated github issue tracker](https://github.com/kaspanet/kaspad/issues)
is used for this project.
Issue priorities may be seen at https://github.com/orgs/kaspanet/projects/4
## Documentation
The [documentation](https://github.com/kaspanet/docs) is a work-in-progress

View File

@@ -20,7 +20,10 @@ import (
"github.com/kaspanet/kaspad/version"
)
const leveldbCacheSizeMiB = 256
const (
leveldbCacheSizeMiB = 256
defaultDataDirname = "datadir2"
)
var desiredLimits = &limits.DesiredLimits{
FileLimitWant: 2048,
@@ -85,12 +88,6 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
profiling.Start(app.cfg.Profile, log)
}
// Perform upgrades to kaspad as new versions require it.
if err := doUpgrades(); err != nil {
log.Error(err)
return err
}
// Return now if an interrupt signal was triggered.
if signal.InterruptRequested(interrupt) {
return nil
@@ -107,7 +104,7 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
// Open the database
databaseContext, err := openDB(app.cfg)
if err != nil {
log.Error(err)
log.Errorf("Loading database failed: %+v", err)
return err
}
@@ -163,15 +160,9 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
return nil
}
// doUpgrades performs upgrades to kaspad as new versions require it.
// currently it's a placeholder we got from kaspad upstream, that does nothing
func doUpgrades() error {
return nil
}
// dbPath returns the path to the block database given a database type.
func databasePath(cfg *config.Config) string {
return filepath.Join(cfg.AppDir, "data")
return filepath.Join(cfg.AppDir, defaultDataDirname)
}
func removeDatabase(cfg *config.Config) error {
@@ -181,6 +172,17 @@ func removeDatabase(cfg *config.Config) error {
func openDB(cfg *config.Config) (database.Database, error) {
dbPath := databasePath(cfg)
err := checkDatabaseVersion(dbPath)
if err != nil {
return nil, err
}
log.Infof("Loading database from '%s'", dbPath)
return ldb.NewLevelDB(dbPath, leveldbCacheSizeMiB)
db, err := ldb.NewLevelDB(dbPath, leveldbCacheSizeMiB)
if err != nil {
return nil, err
}
return db, nil
}

View File

@@ -2,6 +2,9 @@ package appmessage
import (
"encoding/hex"
"github.com/pkg/errors"
"math/big"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
@@ -28,13 +31,17 @@ func DomainBlockToMsgBlock(domainBlock *externalapi.DomainBlock) *MsgBlock {
func DomainBlockHeaderToBlockHeader(domainBlockHeader externalapi.BlockHeader) *MsgBlockHeader {
return &MsgBlockHeader{
Version: domainBlockHeader.Version(),
ParentHashes: domainBlockHeader.ParentHashes(),
Parents: domainBlockHeader.Parents(),
HashMerkleRoot: domainBlockHeader.HashMerkleRoot(),
AcceptedIDMerkleRoot: domainBlockHeader.AcceptedIDMerkleRoot(),
UTXOCommitment: domainBlockHeader.UTXOCommitment(),
Timestamp: mstime.UnixMilliseconds(domainBlockHeader.TimeInMilliseconds()),
Bits: domainBlockHeader.Bits(),
Nonce: domainBlockHeader.Nonce(),
BlueScore: domainBlockHeader.BlueScore(),
DAAScore: domainBlockHeader.DAAScore(),
BlueWork: domainBlockHeader.BlueWork(),
PruningPoint: domainBlockHeader.PruningPoint(),
}
}
@@ -55,13 +62,17 @@ func MsgBlockToDomainBlock(msgBlock *MsgBlock) *externalapi.DomainBlock {
func BlockHeaderToDomainBlockHeader(blockHeader *MsgBlockHeader) externalapi.BlockHeader {
return blockheader.NewImmutableBlockHeader(
blockHeader.Version,
blockHeader.ParentHashes,
blockHeader.Parents,
blockHeader.HashMerkleRoot,
blockHeader.AcceptedIDMerkleRoot,
blockHeader.UTXOCommitment,
blockHeader.Timestamp.UnixMilliseconds(),
blockHeader.Bits,
blockHeader.Nonce,
blockHeader.DAAScore,
blockHeader.BlueScore,
blockHeader.BlueWork,
blockHeader.PruningPoint,
)
}
@@ -100,6 +111,7 @@ func domainTransactionInputToTxIn(domainTransactionInput *externalapi.DomainTran
PreviousOutpoint: *domainOutpointToOutpoint(domainTransactionInput.PreviousOutpoint),
SignatureScript: domainTransactionInput.SignatureScript,
Sequence: domainTransactionInput.Sequence,
SigOpCount: domainTransactionInput.SigOpCount,
}
}
@@ -148,6 +160,7 @@ func txInToDomainTransactionInput(txIn *TxIn) *externalapi.DomainTransactionInpu
return &externalapi.DomainTransactionInput{
PreviousOutpoint: *outpointToDomainOutpoint(&txIn.PreviousOutpoint), //TODO
SignatureScript: txIn.SignatureScript,
SigOpCount: txIn.SigOpCount,
Sequence: txIn.Sequence,
}
}
@@ -163,14 +176,10 @@ func outpointToDomainOutpoint(outpoint *Outpoint) *externalapi.DomainOutpoint {
func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externalapi.DomainTransaction, error) {
inputs := make([]*externalapi.DomainTransactionInput, len(rpcTransaction.Inputs))
for i, input := range rpcTransaction.Inputs {
transactionID, err := transactionid.FromString(input.PreviousOutpoint.TransactionID)
previousOutpoint, err := RPCOutpointToDomainOutpoint(input.PreviousOutpoint)
if err != nil {
return nil, err
}
previousOutpoint := &externalapi.DomainOutpoint{
TransactionID: *transactionID,
Index: input.PreviousOutpoint.Index,
}
signatureScript, err := hex.DecodeString(input.SignatureScript)
if err != nil {
return nil, err
@@ -179,6 +188,7 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
PreviousOutpoint: *previousOutpoint,
SignatureScript: signatureScript,
Sequence: input.Sequence,
SigOpCount: input.SigOpCount,
}
}
outputs := make([]*externalapi.DomainTransactionOutput, len(rpcTransaction.Outputs))
@@ -213,6 +223,36 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
}, nil
}
// RPCOutpointToDomainOutpoint converts RPCOutpoint to DomainOutpoint
func RPCOutpointToDomainOutpoint(outpoint *RPCOutpoint) (*externalapi.DomainOutpoint, error) {
transactionID, err := transactionid.FromString(outpoint.TransactionID)
if err != nil {
return nil, err
}
return &externalapi.DomainOutpoint{
TransactionID: *transactionID,
Index: outpoint.Index,
}, nil
}
// RPCUTXOEntryToUTXOEntry converts RPCUTXOEntry to UTXOEntry
func RPCUTXOEntryToUTXOEntry(entry *RPCUTXOEntry) (externalapi.UTXOEntry, error) {
script, err := hex.DecodeString(entry.ScriptPublicKey.Script)
if err != nil {
return nil, err
}
return utxo.NewUTXOEntry(
entry.Amount,
&externalapi.ScriptPublicKey{
Script: script,
Version: entry.ScriptPublicKey.Version,
},
entry.IsCoinbase,
entry.BlockDAAScore,
), nil
}
// DomainTransactionToRPCTransaction converts DomainTransactions to RPCTransactions
func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransaction) *RPCTransaction {
inputs := make([]*RPCTransactionInput, len(transaction.Inputs))
@@ -227,6 +267,7 @@ func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransactio
PreviousOutpoint: previousOutpoint,
SignatureScript: signatureScript,
Sequence: input.Sequence,
SigOpCount: input.SigOpCount,
}
}
outputs := make([]*RPCTransactionOutput, len(transaction.Outputs))
@@ -257,22 +298,27 @@ func OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(
domainOutpointAndUTXOEntryPairs := make([]*externalapi.OutpointAndUTXOEntryPair, len(outpointAndUTXOEntryPairs))
for i, outpointAndUTXOEntryPair := range outpointAndUTXOEntryPairs {
domainOutpointAndUTXOEntryPairs[i] = &externalapi.OutpointAndUTXOEntryPair{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: outpointAndUTXOEntryPair.Outpoint.TxID,
Index: outpointAndUTXOEntryPair.Outpoint.Index,
},
UTXOEntry: utxo.NewUTXOEntry(
outpointAndUTXOEntryPair.UTXOEntry.Amount,
outpointAndUTXOEntryPair.UTXOEntry.ScriptPublicKey,
outpointAndUTXOEntryPair.UTXOEntry.IsCoinbase,
outpointAndUTXOEntryPair.UTXOEntry.BlockDAAScore,
),
}
domainOutpointAndUTXOEntryPairs[i] = outpointAndUTXOEntryPairToDomainOutpointAndUTXOEntryPair(outpointAndUTXOEntryPair)
}
return domainOutpointAndUTXOEntryPairs
}
func outpointAndUTXOEntryPairToDomainOutpointAndUTXOEntryPair(
outpointAndUTXOEntryPair *OutpointAndUTXOEntryPair) *externalapi.OutpointAndUTXOEntryPair {
return &externalapi.OutpointAndUTXOEntryPair{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: outpointAndUTXOEntryPair.Outpoint.TxID,
Index: outpointAndUTXOEntryPair.Outpoint.Index,
},
UTXOEntry: utxo.NewUTXOEntry(
outpointAndUTXOEntryPair.UTXOEntry.Amount,
outpointAndUTXOEntryPair.UTXOEntry.ScriptPublicKey,
outpointAndUTXOEntryPair.UTXOEntry.IsCoinbase,
outpointAndUTXOEntryPair.UTXOEntry.BlockDAAScore,
),
}
}
// DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs converts
// domain OutpointAndUTXOEntryPairs to OutpointAndUTXOEntryPairs
func DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(
@@ -298,15 +344,25 @@ func DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(
// DomainBlockToRPCBlock converts DomainBlocks to RPCBlocks
func DomainBlockToRPCBlock(block *externalapi.DomainBlock) *RPCBlock {
parents := make([]*RPCBlockLevelParents, len(block.Header.Parents()))
for i, blockLevelParents := range block.Header.Parents() {
parents[i] = &RPCBlockLevelParents{
ParentHashes: hashes.ToStrings(blockLevelParents),
}
}
header := &RPCBlockHeader{
Version: uint32(block.Header.Version()),
ParentHashes: hashes.ToStrings(block.Header.ParentHashes()),
Parents: parents,
HashMerkleRoot: block.Header.HashMerkleRoot().String(),
AcceptedIDMerkleRoot: block.Header.AcceptedIDMerkleRoot().String(),
UTXOCommitment: block.Header.UTXOCommitment().String(),
Timestamp: block.Header.TimeInMilliseconds(),
Bits: block.Header.Bits(),
Nonce: block.Header.Nonce(),
DAAScore: block.Header.DAAScore(),
BlueScore: block.Header.BlueScore(),
BlueWork: block.Header.BlueWork().Text(16),
PruningPoint: block.Header.PruningPoint().String(),
}
transactions := make([]*RPCTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
@@ -320,13 +376,16 @@ func DomainBlockToRPCBlock(block *externalapi.DomainBlock) *RPCBlock {
// RPCBlockToDomainBlock converts `block` into a DomainBlock
func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
parentHashes := make([]*externalapi.DomainHash, len(block.Header.ParentHashes))
for i, parentHash := range block.Header.ParentHashes {
domainParentHashes, err := externalapi.NewDomainHashFromString(parentHash)
if err != nil {
return nil, err
parents := make([]externalapi.BlockLevelParents, len(block.Header.Parents))
for i, blockLevelParents := range block.Header.Parents {
parents[i] = make(externalapi.BlockLevelParents, len(blockLevelParents.ParentHashes))
for j, parentHash := range blockLevelParents.ParentHashes {
var err error
parents[i][j], err = externalapi.NewDomainHashFromString(parentHash)
if err != nil {
return nil, err
}
}
parentHashes[i] = domainParentHashes
}
hashMerkleRoot, err := externalapi.NewDomainHashFromString(block.Header.HashMerkleRoot)
if err != nil {
@@ -340,15 +399,27 @@ func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
if err != nil {
return nil, err
}
blueWork, success := new(big.Int).SetString(block.Header.BlueWork, 16)
if !success {
return nil, errors.Errorf("failed to parse blue work: %s", block.Header.BlueWork)
}
pruningPoint, err := externalapi.NewDomainHashFromString(block.Header.PruningPoint)
if err != nil {
return nil, err
}
header := blockheader.NewImmutableBlockHeader(
uint16(block.Header.Version),
parentHashes,
parents,
hashMerkleRoot,
acceptedIDMerkleRoot,
utxoCommitment,
block.Header.Timestamp,
block.Header.Bits,
block.Header.Nonce)
block.Header.Nonce,
block.Header.DAAScore,
block.Header.BlueScore,
blueWork,
pruningPoint)
transactions := make([]*externalapi.DomainTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
domainTransaction, err := RPCTransactionToDomainTransaction(transaction)
@@ -362,3 +433,118 @@ func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
Transactions: transactions,
}, nil
}
// BlockWithTrustedDataToDomainBlockWithTrustedData converts *MsgBlockWithTrustedData to *externalapi.BlockWithTrustedData
func BlockWithTrustedDataToDomainBlockWithTrustedData(block *MsgBlockWithTrustedData) *externalapi.BlockWithTrustedData {
daaWindow := make([]*externalapi.TrustedDataDataDAABlock, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &externalapi.TrustedDataDataDAABlock{
Block: MsgBlockToDomainBlock(daaBlock.Block),
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
ghostdagData := make([]*externalapi.BlockGHOSTDAGDataHashPair, len(block.GHOSTDAGData))
for i, datum := range block.GHOSTDAGData {
ghostdagData[i] = &externalapi.BlockGHOSTDAGDataHashPair{
Hash: datum.Hash,
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(datum.GHOSTDAGData),
}
}
return &externalapi.BlockWithTrustedData{
Block: MsgBlockToDomainBlock(block.Block),
DAAScore: block.DAAScore,
DAAWindow: daaWindow,
GHOSTDAGData: ghostdagData,
}
}
func ghostdagDataToDomainGHOSTDAGData(data *BlockGHOSTDAGData) *externalapi.BlockGHOSTDAGData {
bluesAnticoneSizes := make(map[externalapi.DomainHash]externalapi.KType, len(data.BluesAnticoneSizes))
for _, pair := range data.BluesAnticoneSizes {
bluesAnticoneSizes[*pair.BlueHash] = pair.AnticoneSize
}
return externalapi.NewBlockGHOSTDAGData(
data.BlueScore,
data.BlueWork,
data.SelectedParent,
data.MergeSetBlues,
data.MergeSetReds,
bluesAnticoneSizes,
)
}
func domainGHOSTDAGDataGHOSTDAGData(data *externalapi.BlockGHOSTDAGData) *BlockGHOSTDAGData {
bluesAnticoneSizes := make([]*BluesAnticoneSizes, 0, len(data.BluesAnticoneSizes()))
for blueHash, anticoneSize := range data.BluesAnticoneSizes() {
blueHashCopy := blueHash
bluesAnticoneSizes = append(bluesAnticoneSizes, &BluesAnticoneSizes{
BlueHash: &blueHashCopy,
AnticoneSize: anticoneSize,
})
}
return &BlockGHOSTDAGData{
BlueScore: data.BlueScore(),
BlueWork: data.BlueWork(),
SelectedParent: data.SelectedParent(),
MergeSetBlues: data.MergeSetBlues(),
MergeSetReds: data.MergeSetReds(),
BluesAnticoneSizes: bluesAnticoneSizes,
}
}
// DomainBlockWithTrustedDataToBlockWithTrustedData converts *externalapi.BlockWithTrustedData to *MsgBlockWithTrustedData
func DomainBlockWithTrustedDataToBlockWithTrustedData(block *externalapi.BlockWithTrustedData) *MsgBlockWithTrustedData {
daaWindow := make([]*TrustedDataDataDAABlock, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &TrustedDataDataDAABlock{
Block: DomainBlockToMsgBlock(daaBlock.Block),
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
ghostdagData := make([]*BlockGHOSTDAGDataHashPair, len(block.GHOSTDAGData))
for i, datum := range block.GHOSTDAGData {
ghostdagData[i] = &BlockGHOSTDAGDataHashPair{
Hash: datum.Hash,
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(datum.GHOSTDAGData),
}
}
return &MsgBlockWithTrustedData{
Block: DomainBlockToMsgBlock(block.Block),
DAAScore: block.DAAScore,
DAAWindow: daaWindow,
GHOSTDAGData: ghostdagData,
}
}
// MsgPruningPointProofToDomainPruningPointProof converts *MsgPruningPointProof to *externalapi.PruningPointProof
func MsgPruningPointProofToDomainPruningPointProof(pruningPointProofMessage *MsgPruningPointProof) *externalapi.PruningPointProof {
headers := make([][]externalapi.BlockHeader, len(pruningPointProofMessage.Headers))
for blockLevel, blockLevelParents := range pruningPointProofMessage.Headers {
headers[blockLevel] = make([]externalapi.BlockHeader, len(blockLevelParents))
for i, header := range blockLevelParents {
headers[blockLevel][i] = BlockHeaderToDomainBlockHeader(header)
}
}
return &externalapi.PruningPointProof{
Headers: headers,
}
}
// DomainPruningPointProofToMsgPruningPointProof converts *externalapi.PruningPointProof to *MsgPruningPointProof
func DomainPruningPointProofToMsgPruningPointProof(pruningPointProof *externalapi.PruningPointProof) *MsgPruningPointProof {
headers := make([][]*MsgBlockHeader, len(pruningPointProof.Headers))
for blockLevel, blockLevelParents := range pruningPointProof.Headers {
headers[blockLevel] = make([]*MsgBlockHeader, len(blockLevelParents))
for i, header := range blockLevelParents {
headers[blockLevel][i] = DomainBlockHeaderToBlockHeader(header)
}
}
return &MsgPruningPointProof{
Headers: headers,
}
}

View File

@@ -45,24 +45,27 @@ const (
CmdRequestRelayBlocks
CmdInvTransaction
CmdRequestTransactions
CmdIBDBlock
CmdDoneHeaders
CmdTransactionNotFound
CmdReject
CmdHeader
CmdRequestNextHeaders
CmdRequestPruningPointUTXOSetAndBlock
CmdRequestPruningPointUTXOSet
CmdPruningPointUTXOSetChunk
CmdRequestIBDBlocks
CmdUnexpectedPruningPoint
CmdRequestPruningPointHash
CmdPruningPointHash
CmdIBDBlockLocator
CmdIBDBlockLocatorHighestHash
CmdIBDBlockLocatorHighestHashNotFound
CmdBlockHeaders
CmdRequestNextPruningPointUTXOSetChunk
CmdDonePruningPointUTXOSetChunks
CmdBlockWithTrustedData
CmdDoneBlocksWithTrustedData
CmdRequestPruningPointAndItsAnticone
CmdIBDBlock
CmdRequestIBDBlocks
CmdPruningPoints
CmdRequestPruningPointProof
CmdPruningPointProof
// rpc
CmdGetCurrentNetworkRequestMessage
@@ -137,6 +140,11 @@ const (
CmdPruningPointUTXOSetOverrideNotificationMessage
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage
CmdEstimateNetworkHashesPerSecondRequestMessage
CmdEstimateNetworkHashesPerSecondResponseMessage
CmdNotifyVirtualDaaScoreChangedRequestMessage
CmdNotifyVirtualDaaScoreChangedResponseMessage
CmdVirtualDaaScoreChangedNotificationMessage
)
// ProtocolMessageCommandToString maps all MessageCommands to their string representation
@@ -145,7 +153,7 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdVerAck: "VerAck",
CmdRequestAddresses: "RequestAddresses",
CmdAddresses: "Addresses",
CmdRequestHeaders: "RequestHeaders",
CmdRequestHeaders: "CmdRequestHeaders",
CmdBlock: "Block",
CmdTx: "Tx",
CmdPing: "Ping",
@@ -156,24 +164,27 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdRequestRelayBlocks: "RequestRelayBlocks",
CmdInvTransaction: "InvTransaction",
CmdRequestTransactions: "RequestTransactions",
CmdIBDBlock: "IBDBlock",
CmdDoneHeaders: "DoneHeaders",
CmdTransactionNotFound: "TransactionNotFound",
CmdReject: "Reject",
CmdHeader: "Header",
CmdRequestNextHeaders: "RequestNextHeaders",
CmdRequestPruningPointUTXOSetAndBlock: "RequestPruningPointUTXOSetAndBlock",
CmdRequestPruningPointUTXOSet: "RequestPruningPointUTXOSet",
CmdPruningPointUTXOSetChunk: "PruningPointUTXOSetChunk",
CmdRequestIBDBlocks: "RequestIBDBlocks",
CmdUnexpectedPruningPoint: "UnexpectedPruningPoint",
CmdRequestPruningPointHash: "RequestPruningPointHashHash",
CmdPruningPointHash: "PruningPointHash",
CmdIBDBlockLocator: "IBDBlockLocator",
CmdIBDBlockLocatorHighestHash: "IBDBlockLocatorHighestHash",
CmdIBDBlockLocatorHighestHashNotFound: "IBDBlockLocatorHighestHashNotFound",
CmdBlockHeaders: "BlockHeaders",
CmdRequestNextPruningPointUTXOSetChunk: "RequestNextPruningPointUTXOSetChunk",
CmdDonePruningPointUTXOSetChunks: "DonePruningPointUTXOSetChunks",
CmdBlockWithTrustedData: "BlockWithTrustedData",
CmdDoneBlocksWithTrustedData: "DoneBlocksWithTrustedData",
CmdRequestPruningPointAndItsAnticone: "RequestPruningPointAndItsAnticoneHeaders",
CmdIBDBlock: "IBDBlock",
CmdRequestIBDBlocks: "RequestIBDBlocks",
CmdPruningPoints: "PruningPoints",
CmdRequestPruningPointProof: "RequestPruningPointProof",
CmdPruningPointProof: "PruningPointProof",
}
// RPCMessageCommandToString maps all MessageCommands to their string representation
@@ -248,6 +259,11 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdPruningPointUTXOSetOverrideNotificationMessage: "PruningPointUTXOSetOverrideNotification",
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: "StopNotifyingPruningPointUTXOSetOverrideRequest",
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage: "StopNotifyingPruningPointUTXOSetOverrideResponse",
CmdEstimateNetworkHashesPerSecondRequestMessage: "EstimateNetworkHashesPerSecondRequest",
CmdEstimateNetworkHashesPerSecondResponseMessage: "EstimateNetworkHashesPerSecondResponse",
CmdNotifyVirtualDaaScoreChangedRequestMessage: "NotifyVirtualDaaScoreChangedRequest",
CmdNotifyVirtualDaaScoreChangedResponseMessage: "NotifyVirtualDaaScoreChangedResponse",
CmdVirtualDaaScoreChangedNotificationMessage: "VirtualDaaScoreChangedNotification",
}
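These lookup tables are what turn numeric commands into readable names in logs. A small sketch of how a caller inside this package might consult them; commandName is a hypothetical helper for illustration, not part of this diff:

// commandName resolves a command against both tables, protocol first.
func commandName(command MessageCommand) string {
	if name, ok := ProtocolMessageCommandToString[command]; ok {
		return name
	}
	if name, ok := RPCMessageCommandToString[command]; ok {
		return name
	}
	return "Unknown"
}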
// Message is an interface that describes a kaspa message. A type that

View File

@@ -21,13 +21,18 @@ func TestBlock(t *testing.T) {
pver := ProtocolVersion
// Block 1 header.
parentHashes := blockOne.Header.ParentHashes
parents := blockOne.Header.Parents
hashMerkleRoot := blockOne.Header.HashMerkleRoot
acceptedIDMerkleRoot := blockOne.Header.AcceptedIDMerkleRoot
utxoCommitment := blockOne.Header.UTXOCommitment
bits := blockOne.Header.Bits
nonce := blockOne.Header.Nonce
bh := NewBlockHeader(1, parentHashes, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce)
daaScore := blockOne.Header.DAAScore
blueScore := blockOne.Header.BlueScore
blueWork := blockOne.Header.BlueWork
pruningPoint := blockOne.Header.PruningPoint
bh := NewBlockHeader(1, parents, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce,
daaScore, blueScore, blueWork, pruningPoint)
// Ensure the command is expected value.
wantCmd := MessageCommand(5)
@@ -131,7 +136,7 @@ func TestConvertToPartial(t *testing.T) {
var blockOne = MsgBlock{
Header: MsgBlockHeader{
Version: 0,
ParentHashes: []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash},
Parents: []externalapi.BlockLevelParents{[]*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}},
HashMerkleRoot: mainnetGenesisMerkleRoot,
AcceptedIDMerkleRoot: exampleAcceptedIDMerkleRoot,
UTXOCommitment: exampleUTXOCommitment,

View File

@@ -5,13 +5,12 @@
package appmessage
import (
"math"
"math/big"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/pkg/errors"
)
// BaseBlockHeaderPayload is the base number of bytes a block header can be,
@@ -39,8 +38,8 @@ type MsgBlockHeader struct {
// Version of the block. This is not the same as the protocol version.
Version uint16
// Hashes of the parent block headers in the blockDAG.
ParentHashes []*externalapi.DomainHash
// Parents are the parent block hashes of the block in the DAG per superblock level.
Parents []externalapi.BlockLevelParents
// HashMerkleRoot is the merkle tree reference to hash of all transactions for the block.
HashMerkleRoot *externalapi.DomainHash
@@ -60,15 +59,16 @@ type MsgBlockHeader struct {
// Nonce used to generate the block.
Nonce uint64
}
// NumParentBlocks return the number of entries in ParentHashes
func (h *MsgBlockHeader) NumParentBlocks() byte {
numParents := len(h.ParentHashes)
if numParents > math.MaxUint8 {
panic(errors.Errorf("number of parents is %d, which is more than one byte can fit", numParents))
}
return byte(numParents)
// DAAScore is the DAA score of the block.
DAAScore uint64
BlueScore uint64
// BlueWork is the blue work of the block.
BlueWork *big.Int
PruningPoint *externalapi.DomainHash
}
// BlockHash computes the block identifier hash for the given block header.
@@ -76,33 +76,27 @@ func (h *MsgBlockHeader) BlockHash() *externalapi.DomainHash {
return consensushashing.HeaderHash(BlockHeaderToDomainBlockHeader(h))
}
// IsGenesis returns true iff this block is a genesis block
func (h *MsgBlockHeader) IsGenesis() bool {
return h.NumParentBlocks() == 0
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (h *MsgBlockHeader) Command() MessageCommand {
return CmdHeader
}
// NewBlockHeader returns a new MsgBlockHeader using the provided version, previous
// block hash, hash merkle root, accepted ID merkle root, difficulty bits, and nonce used to generate the
// block with defaults or calculated values for the remaining fields.
func NewBlockHeader(version uint16, parentHashes []*externalapi.DomainHash, hashMerkleRoot *externalapi.DomainHash,
acceptedIDMerkleRoot *externalapi.DomainHash, utxoCommitment *externalapi.DomainHash, bits uint32, nonce uint64) *MsgBlockHeader {
func NewBlockHeader(version uint16, parents []externalapi.BlockLevelParents, hashMerkleRoot *externalapi.DomainHash,
acceptedIDMerkleRoot *externalapi.DomainHash, utxoCommitment *externalapi.DomainHash, bits uint32, nonce,
daaScore, blueScore uint64, blueWork *big.Int, pruningPoint *externalapi.DomainHash) *MsgBlockHeader {
// Limit the timestamp to one millisecond precision since the protocol
// doesn't support better.
return &MsgBlockHeader{
Version: version,
ParentHashes: parentHashes,
Parents: parents,
HashMerkleRoot: hashMerkleRoot,
AcceptedIDMerkleRoot: acceptedIDMerkleRoot,
UTXOCommitment: utxoCommitment,
Timestamp: mstime.Now(),
Bits: bits,
Nonce: nonce,
DAAScore: daaScore,
BlueScore: blueScore,
BlueWork: blueWork,
PruningPoint: pruningPoint,
}
}

View File

@@ -5,29 +5,34 @@
package appmessage
import (
"math/big"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
)
// TestBlockHeader tests the MsgBlockHeader API.
func TestBlockHeader(t *testing.T) {
nonce := uint64(0xba4d87a69924a93d)
hashes := []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}
parents := []externalapi.BlockLevelParents{[]*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}}
merkleHash := mainnetGenesisMerkleRoot
acceptedIDMerkleRoot := exampleAcceptedIDMerkleRoot
bits := uint32(0x1d00ffff)
bh := NewBlockHeader(1, hashes, merkleHash, acceptedIDMerkleRoot, exampleUTXOCommitment, bits, nonce)
daaScore := uint64(123)
blueScore := uint64(456)
blueWork := big.NewInt(789)
pruningPoint := simnetGenesisHash
bh := NewBlockHeader(1, parents, merkleHash, acceptedIDMerkleRoot, exampleUTXOCommitment, bits, nonce,
daaScore, blueScore, blueWork, pruningPoint)
// Ensure we get the same data back out.
if !reflect.DeepEqual(bh.ParentHashes, hashes) {
t.Errorf("NewBlockHeader: wrong prev hashes - got %v, want %v",
spew.Sprint(bh.ParentHashes), spew.Sprint(hashes))
if !reflect.DeepEqual(bh.Parents, parents) {
t.Errorf("NewBlockHeader: wrong parents - got %v, want %v",
spew.Sprint(bh.Parents), spew.Sprint(parents))
}
if bh.HashMerkleRoot != merkleHash {
t.Errorf("NewBlockHeader: wrong merkle root - got %v, want %v",
@@ -41,44 +46,20 @@ func TestBlockHeader(t *testing.T) {
t.Errorf("NewBlockHeader: wrong nonce - got %v, want %v",
bh.Nonce, nonce)
}
}
func TestIsGenesis(t *testing.T) {
nonce := uint64(123123) // 0x1e0f3
bits := uint32(0x1d00ffff)
timestamp := mstime.UnixMilliseconds(0x495fab29000)
baseBlockHdr := &MsgBlockHeader{
Version: 1,
ParentHashes: []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash},
HashMerkleRoot: mainnetGenesisMerkleRoot,
Timestamp: timestamp,
Bits: bits,
Nonce: nonce,
if bh.DAAScore != daaScore {
t.Errorf("NewBlockHeader: wrong daaScore - got %v, want %v",
bh.DAAScore, daaScore)
}
genesisBlockHdr := &MsgBlockHeader{
Version: 1,
ParentHashes: []*externalapi.DomainHash{},
HashMerkleRoot: mainnetGenesisMerkleRoot,
Timestamp: timestamp,
Bits: bits,
Nonce: nonce,
if bh.BlueScore != blueScore {
t.Errorf("NewBlockHeader: wrong blueScore - got %v, want %v",
bh.BlueScore, blueScore)
}
tests := []struct {
in *MsgBlockHeader // Block header to encode
isGenesis bool // Expected result for call of .IsGenesis
}{
{genesisBlockHdr, true},
{baseBlockHdr, false},
if bh.BlueWork != blueWork {
t.Errorf("NewBlockHeader: wrong blueWork - got %v, want %v",
bh.BlueWork, blueWork)
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
isGenesis := test.in.IsGenesis()
if isGenesis != test.isGenesis {
t.Errorf("MsgBlockHeader.IsGenesis: #%d got: %t, want: %t",
i, isGenesis, test.isGenesis)
}
if !bh.PruningPoint.Equal(pruningPoint) {
t.Errorf("NewBlockHeader: wrong pruningPoint - got %v, want %v",
bh.PruningPoint, pruningPoint)
}
}

View File

@@ -0,0 +1,54 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"math/big"
)
// MsgBlockWithTrustedData represents a kaspa BlockWithTrustedData message
type MsgBlockWithTrustedData struct {
baseMessage
Block *MsgBlock
DAAScore uint64
DAAWindow []*TrustedDataDataDAABlock
GHOSTDAGData []*BlockGHOSTDAGDataHashPair
}
// Command returns the protocol command string for the message
func (msg *MsgBlockWithTrustedData) Command() MessageCommand {
return CmdBlockWithTrustedData
}
// NewMsgBlockWithTrustedData returns a new MsgBlockWithTrustedData.
func NewMsgBlockWithTrustedData() *MsgBlockWithTrustedData {
return &MsgBlockWithTrustedData{}
}
// TrustedDataDataDAABlock is an appmessage representation of externalapi.TrustedDataDataDAABlock
type TrustedDataDataDAABlock struct {
Block *MsgBlock
GHOSTDAGData *BlockGHOSTDAGData
}
// BlockGHOSTDAGData is an appmessage representation of externalapi.BlockGHOSTDAGData
type BlockGHOSTDAGData struct {
BlueScore uint64
BlueWork *big.Int
SelectedParent *externalapi.DomainHash
MergeSetBlues []*externalapi.DomainHash
MergeSetReds []*externalapi.DomainHash
BluesAnticoneSizes []*BluesAnticoneSizes
}
// BluesAnticoneSizes is an appmessage representation of the BluesAnticoneSizes part of GHOSTDAG data.
type BluesAnticoneSizes struct {
BlueHash *externalapi.DomainHash
AnticoneSize externalapi.KType
}
// BlockGHOSTDAGDataHashPair is an appmessage representation of externalapi.BlockGHOSTDAGDataHashPair
type BlockGHOSTDAGDataHashPair struct {
Hash *externalapi.DomainHash
GHOSTDAGData *BlockGHOSTDAGData
}

View File

@@ -0,0 +1,21 @@
package appmessage
// MsgDoneBlocksWithTrustedData implements the Message interface and represents a kaspa
// DoneBlocksWithTrustedData message
//
// This message has no payload.
type MsgDoneBlocksWithTrustedData struct {
baseMessage
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgDoneBlocksWithTrustedData) Command() MessageCommand {
return CmdDoneBlocksWithTrustedData
}
// NewMsgDoneBlocksWithTrustedData returns a new kaspa DoneBlocksWithTrustedData message that conforms to the
// Message interface.
func NewMsgDoneBlocksWithTrustedData() *MsgDoneBlocksWithTrustedData {
return &MsgDoneBlocksWithTrustedData{}
}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgRequestPruningPointAndItsAnticone represents a kaspa RequestPruningPointAndItsAnticone message
type MsgRequestPruningPointAndItsAnticone struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointAndItsAnticone) Command() MessageCommand {
return CmdRequestPruningPointAndItsAnticone
}
// NewMsgRequestPruningPointAndItsAnticone returns a new MsgRequestPruningPointAndItsAnticone.
func NewMsgRequestPruningPointAndItsAnticone() *MsgRequestPruningPointAndItsAnticone {
return &MsgRequestPruningPointAndItsAnticone{}
}

View File

@@ -1,65 +0,0 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package appmessage
import (
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
)
// TestIBDBlock tests the MsgIBDBlock API.
func TestIBDBlock(t *testing.T) {
pver := ProtocolVersion
// Block 1 header.
parentHashes := blockOne.Header.ParentHashes
hashMerkleRoot := blockOne.Header.HashMerkleRoot
acceptedIDMerkleRoot := blockOne.Header.AcceptedIDMerkleRoot
utxoCommitment := blockOne.Header.UTXOCommitment
bits := blockOne.Header.Bits
nonce := blockOne.Header.Nonce
bh := NewBlockHeader(1, parentHashes, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce)
// Ensure the command is expected value.
wantCmd := MessageCommand(15)
msg := NewMsgIBDBlock(NewMsgBlock(bh))
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgIBDBlock: wrong command - got %v want %v",
cmd, wantCmd)
}
// Ensure max payload is expected value for latest protocol version.
wantPayload := uint32(1024 * 1024 * 32)
maxPayload := msg.MaxPayloadLength(pver)
if maxPayload != wantPayload {
t.Errorf("MaxPayloadLength: wrong max payload length for "+
"protocol version %d - got %v, want %v", pver,
maxPayload, wantPayload)
}
// Ensure we get the same block header data back out.
if !reflect.DeepEqual(&msg.Header, bh) {
t.Errorf("NewMsgIBDBlock: wrong block header - got %v, want %v",
spew.Sdump(&msg.Header), spew.Sdump(bh))
}
// Ensure transactions are added properly.
tx := blockOne.Transactions[0].Copy()
msg.AddTransaction(tx)
if !reflect.DeepEqual(msg.Transactions, blockOne.Transactions) {
t.Errorf("AddTransaction: wrong transactions - got %v, want %v",
spew.Sdump(msg.Transactions),
spew.Sdump(blockOne.Transactions))
}
// Ensure transactions are properly cleared.
msg.ClearTransactions()
if len(msg.Transactions) != 0 {
t.Errorf("ClearTransactions: wrong transactions - got %v, want %v",
len(msg.Transactions), 0)
}
}

View File

@@ -1,23 +0,0 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgPruningPointHashMessage represents a kaspa PruningPointHash message
type MsgPruningPointHashMessage struct {
baseMessage
Hash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPointHashMessage) Command() MessageCommand {
return CmdPruningPointHash
}
// NewPruningPointHashMessage returns a new kaspa PruningPointHash message
func NewPruningPointHashMessage(hash *externalapi.DomainHash) *MsgPruningPointHashMessage {
return &MsgPruningPointHashMessage{
Hash: hash,
}
}

View File

@@ -0,0 +1,20 @@
package appmessage
// MsgPruningPointProof represents a kaspa PruningPointProof message
type MsgPruningPointProof struct {
baseMessage
Headers [][]*MsgBlockHeader
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPointProof) Command() MessageCommand {
return CmdPruningPointProof
}
// NewMsgPruningPointProof returns a new MsgPruningPointProof.
func NewMsgPruningPointProof(headers [][]*MsgBlockHeader) *MsgPruningPointProof {
return &MsgPruningPointProof{
Headers: headers,
}
}

View File

@@ -0,0 +1,20 @@
package appmessage
// MsgPruningPoints represents a kaspa PruningPoints message
type MsgPruningPoints struct {
baseMessage
Headers []*MsgBlockHeader
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPoints) Command() MessageCommand {
return CmdPruningPoints
}
// NewMsgPruningPoints returns a new MsgPruningPoints.
func NewMsgPruningPoints(headers []*MsgBlockHeader) *MsgPruningPoints {
return &MsgPruningPoints{
Headers: headers,
}
}

View File

@@ -10,7 +10,6 @@ import (
// The locator is returned via a locator message (MsgBlockLocator).
type MsgRequestBlockLocator struct {
baseMessage
LowHash *externalapi.DomainHash
HighHash *externalapi.DomainHash
Limit uint32
}
@@ -24,9 +23,8 @@ func (msg *MsgRequestBlockLocator) Command() MessageCommand {
// NewMsgRequestBlockLocator returns a new RequestBlockLocator message that conforms to the
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgRequestBlockLocator(lowHash, highHash *externalapi.DomainHash, limit uint32) *MsgRequestBlockLocator {
func NewMsgRequestBlockLocator(highHash *externalapi.DomainHash, limit uint32) *MsgRequestBlockLocator {
return &MsgRequestBlockLocator{
LowHash: lowHash,
HighHash: highHash,
Limit: limit,
}

View File

@@ -16,7 +16,7 @@ func TestRequestBlockLocator(t *testing.T) {
// Ensure the command is expected value.
wantCmd := MessageCommand(9)
msg := NewMsgRequestBlockLocator(highHash, &externalapi.DomainHash{}, 0)
msg := NewMsgRequestBlockLocator(highHash, 0)
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgRequestBlockLocator: wrong command - got %v want %v",
cmd, wantCmd)

View File

@@ -10,7 +10,7 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// TestRequstIBDBlocks tests the MsgRequestHeaders API.
// TestRequstIBDBlocks tests the MsgRequestIBDBlocks API.
func TestRequstIBDBlocks(t *testing.T) {
hashStr := "000000000002e7ad7b9eef9479e4aabc65cb831269cc20d2632c13684406dee0"
lowHash, err := externalapi.NewDomainHashFromString(hashStr)
@@ -27,14 +27,14 @@ func TestRequstIBDBlocks(t *testing.T) {
// Ensure we get the same data back out.
msg := NewMsgRequstHeaders(lowHash, highHash)
if !msg.HighHash.Equal(highHash) {
t.Errorf("NewMsgRequstHeaders: wrong high hash - got %v, want %v",
t.Errorf("NewMsgRequstIBDBlocks: wrong high hash - got %v, want %v",
msg.HighHash, highHash)
}
// Ensure the command is expected value.
wantCmd := MessageCommand(4)
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgRequstHeaders: wrong command - got %v want %v",
t.Errorf("NewMsgRequstIBDBlocks: wrong command - got %v want %v",
cmd, wantCmd)
}
}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgRequestPruningPointProof represents a kaspa RequestPruningPointProof message
type MsgRequestPruningPointProof struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointProof) Command() MessageCommand {
return CmdRequestPruningPointProof
}
// NewMsgRequestPruningPointProof returns a new MsgRequestPruningPointProof.
func NewMsgRequestPruningPointProof() *MsgRequestPruningPointProof {
return &MsgRequestPruningPointProof{}
}

View File

@@ -4,20 +4,20 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestPruningPointUTXOSetAndBlock represents a kaspa RequestPruningPointUTXOSetAndBlock message
type MsgRequestPruningPointUTXOSetAndBlock struct {
// MsgRequestPruningPointUTXOSet represents a kaspa RequestPruningPointUTXOSet message
type MsgRequestPruningPointUTXOSet struct {
baseMessage
PruningPointHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointUTXOSetAndBlock) Command() MessageCommand {
return CmdRequestPruningPointUTXOSetAndBlock
func (msg *MsgRequestPruningPointUTXOSet) Command() MessageCommand {
return CmdRequestPruningPointUTXOSet
}
// NewMsgRequestPruningPointUTXOSetAndBlock returns a new MsgRequestPruningPointUTXOSetAndBlock
func NewMsgRequestPruningPointUTXOSetAndBlock(pruningPointHash *externalapi.DomainHash) *MsgRequestPruningPointUTXOSetAndBlock {
return &MsgRequestPruningPointUTXOSetAndBlock{
// NewMsgRequestPruningPointUTXOSet returns a new MsgRequestPruningPointUTXOSet
func NewMsgRequestPruningPointUTXOSet(pruningPointHash *externalapi.DomainHash) *MsgRequestPruningPointUTXOSet {
return &MsgRequestPruningPointUTXOSet{
PruningPointHash: pruningPointHash,
}
}

View File

@@ -90,16 +90,18 @@ type TxIn struct {
PreviousOutpoint Outpoint
SignatureScript []byte
Sequence uint64
SigOpCount byte
}
// NewTxIn returns a new kaspa transaction input with the provided
// previous outpoint and signature script with a default sequence of
// MaxTxInSequenceNum.
func NewTxIn(prevOut *Outpoint, signatureScript []byte, sequence uint64) *TxIn {
func NewTxIn(prevOut *Outpoint, signatureScript []byte, sequence uint64, sigOpCount byte) *TxIn {
return &TxIn{
PreviousOutpoint: *prevOut,
SignatureScript: signatureScript,
Sequence: sequence,
SigOpCount: sigOpCount,
}
}
@@ -206,6 +208,7 @@ func (msg *MsgTx) Copy() *MsgTx {
PreviousOutpoint: newOutpoint,
SignatureScript: newScript,
Sequence: oldTxIn.Sequence,
SigOpCount: oldTxIn.SigOpCount,
}
// Finally, append this fully copied txin.

View File

@@ -68,7 +68,7 @@ func TestTx(t *testing.T) {
// Ensure we get the same transaction input back out.
sigScript := []byte{0x04, 0x31, 0xdc, 0x00, 0x1b, 0x01, 0x62}
txIn := NewTxIn(prevOut, sigScript, constants.MaxTxInSequenceNum)
txIn := NewTxIn(prevOut, sigScript, constants.MaxTxInSequenceNum, 1)
if !reflect.DeepEqual(&txIn.PreviousOutpoint, prevOut) {
t.Errorf("NewTxIn: wrong prev outpoint - got %v, want %v",
spew.Sprint(&txIn.PreviousOutpoint),

View File

@@ -1,16 +0,0 @@
package appmessage
// MsgRequestPruningPointHashMessage represents a kaspa RequestPruningPointHashMessage message
type MsgRequestPruningPointHashMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointHashMessage) Command() MessageCommand {
return CmdRequestPruningPointHash
}
// NewMsgRequestPruningPointHashMessage returns a new kaspa RequestPruningPointHash message
func NewMsgRequestPruningPointHashMessage() *MsgRequestPruningPointHashMessage {
return &MsgRequestPruningPointHashMessage{}
}

View File

@@ -12,7 +12,7 @@ import (
const (
// ProtocolVersion is the latest protocol version this package supports.
ProtocolVersion uint32 = 1
ProtocolVersion uint32 = 3
// DefaultServices describes the default services that are supported by
// the server.

View File

@@ -0,0 +1,43 @@
package appmessage
// EstimateNetworkHashesPerSecondRequestMessage is an appmessage corresponding to
// its respective RPC message
type EstimateNetworkHashesPerSecondRequestMessage struct {
baseMessage
StartHash string
WindowSize uint32
}
// Command returns the protocol command string for the message
func (msg *EstimateNetworkHashesPerSecondRequestMessage) Command() MessageCommand {
return CmdEstimateNetworkHashesPerSecondRequestMessage
}
// NewEstimateNetworkHashesPerSecondRequestMessage returns an instance of the message
func NewEstimateNetworkHashesPerSecondRequestMessage(startHash string, windowSize uint32) *EstimateNetworkHashesPerSecondRequestMessage {
return &EstimateNetworkHashesPerSecondRequestMessage{
StartHash: startHash,
WindowSize: windowSize,
}
}
// EstimateNetworkHashesPerSecondResponseMessage is an appmessage corresponding to
// its respective RPC message
type EstimateNetworkHashesPerSecondResponseMessage struct {
baseMessage
NetworkHashesPerSecond uint64
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *EstimateNetworkHashesPerSecondResponseMessage) Command() MessageCommand {
return CmdEstimateNetworkHashesPerSecondResponseMessage
}
// NewEstimateNetworkHashesPerSecondResponseMessage returns an instance of the message
func NewEstimateNetworkHashesPerSecondResponseMessage(networkHashesPerSecond uint64) *EstimateNetworkHashesPerSecondResponseMessage {
return &EstimateNetworkHashesPerSecondResponseMessage{
NetworkHashesPerSecond: networkHashesPerSecond,
}
}
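Constructing the request message is then just a matter of filling the two fields above. A hypothetical usage fragment within this package; the hash is an example value reused from the tests elsewhere in this diff, and 1000 is an arbitrary window size:

request := NewEstimateNetworkHashesPerSecondRequestMessage(
	"000000000002e7ad7b9eef9479e4aabc65cb831269cc20d2632c13684406dee0", // example start hash
	1000) // window size; the command rejects values less than 2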

View File

@@ -4,8 +4,8 @@ package appmessage
// its respective RPC message
type GetBlockRequestMessage struct {
baseMessage
Hash string
IncludeTransactionVerboseData bool
Hash string
IncludeTransactions bool
}
// Command returns the protocol command string for the message
@@ -14,10 +14,10 @@ func (msg *GetBlockRequestMessage) Command() MessageCommand {
}
// NewGetBlockRequestMessage returns an instance of the message
func NewGetBlockRequestMessage(hash string, includeTransactionVerboseData bool) *GetBlockRequestMessage {
func NewGetBlockRequestMessage(hash string, includeTransactions bool) *GetBlockRequestMessage {
return &GetBlockRequestMessage{
Hash: hash,
IncludeTransactionVerboseData: includeTransactionVerboseData,
Hash: hash,
IncludeTransactions: includeTransactions,
}
}

View File

@@ -28,6 +28,7 @@ type GetBlockDAGInfoResponseMessage struct {
Difficulty float64
PastMedianTime int64
PruningPointHash string
VirtualDAAScore uint64
Error *RPCError
}

View File

@@ -4,9 +4,9 @@ package appmessage
// its respective RPC message
type GetBlocksRequestMessage struct {
baseMessage
LowHash string
IncludeBlocks bool
IncludeTransactionVerboseData bool
LowHash string
IncludeBlocks bool
IncludeTransactions bool
}
// Command returns the protocol command string for the message
@@ -16,11 +16,11 @@ func (msg *GetBlocksRequestMessage) Command() MessageCommand {
// NewGetBlocksRequestMessage returns an instance of the message
func NewGetBlocksRequestMessage(lowHash string, includeBlocks bool,
includeTransactionVerboseData bool) *GetBlocksRequestMessage {
includeTransactions bool) *GetBlocksRequestMessage {
return &GetBlocksRequestMessage{
LowHash: lowHash,
IncludeBlocks: includeBlocks,
IncludeTransactionVerboseData: includeTransactionVerboseData,
LowHash: lowHash,
IncludeBlocks: includeBlocks,
IncludeTransactions: includeTransactions,
}
}

View File

@@ -11,8 +11,8 @@ func (msg *GetInfoRequestMessage) Command() MessageCommand {
return CmdGetInfoRequestMessage
}
// NewGeInfoRequestMessage returns an instance of the message
func NewGeInfoRequestMessage() *GetInfoRequestMessage {
// NewGetInfoRequestMessage returns an instance of the message
func NewGetInfoRequestMessage() *GetInfoRequestMessage {
return &GetInfoRequestMessage{}
}
@@ -20,8 +20,9 @@ func NewGeInfoRequestMessage() *GetInfoRequestMessage {
// its respective RPC message
type GetInfoResponseMessage struct {
baseMessage
P2PID string
MempoolSize uint64
P2PID string
MempoolSize uint64
ServerVersion string
Error *RPCError
}
@@ -32,9 +33,10 @@ func (msg *GetInfoResponseMessage) Command() MessageCommand {
}
// NewGetInfoResponseMessage returns an instance of the message
func NewGetInfoResponseMessage(p2pID string, mempoolSize uint64) *GetInfoResponseMessage {
func NewGetInfoResponseMessage(p2pID string, mempoolSize uint64, serverVersion string) *GetInfoResponseMessage {
return &GetInfoResponseMessage{
P2PID: p2pID,
MempoolSize: mempoolSize,
P2PID: p2pID,
MempoolSize: mempoolSize,
ServerVersion: serverVersion,
}
}

View File

@@ -24,7 +24,7 @@ func NewGetVirtualSelectedParentChainFromBlockRequestMessage(startHash string) *
type GetVirtualSelectedParentChainFromBlockResponseMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlocks []*ChainBlock
AddedChainBlockHashes []string
Error *RPCError
}
@@ -35,11 +35,11 @@ func (msg *GetVirtualSelectedParentChainFromBlockResponseMessage) Command() Mess
}
// NewGetVirtualSelectedParentChainFromBlockResponseMessage returns an instance of the message
func NewGetVirtualSelectedParentChainFromBlockResponseMessage(removedChainBlockHashes []string,
addedChainBlocks []*ChainBlock) *GetVirtualSelectedParentChainFromBlockResponseMessage {
func NewGetVirtualSelectedParentChainFromBlockResponseMessage(removedChainBlockHashes,
addedChainBlockHashes []string) *GetVirtualSelectedParentChainFromBlockResponseMessage {
return &GetVirtualSelectedParentChainFromBlockResponseMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlocks: addedChainBlocks,
AddedChainBlockHashes: addedChainBlockHashes,
}
}

View File

@@ -0,0 +1,55 @@
package appmessage
// NotifyVirtualDaaScoreChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualDaaScoreChangedRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualDaaScoreChangedRequestMessage) Command() MessageCommand {
return CmdNotifyVirtualDaaScoreChangedRequestMessage
}
// NewNotifyVirtualDaaScoreChangedRequestMessage returns an instance of the message
func NewNotifyVirtualDaaScoreChangedRequestMessage() *NotifyVirtualDaaScoreChangedRequestMessage {
return &NotifyVirtualDaaScoreChangedRequestMessage{}
}
// NotifyVirtualDaaScoreChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualDaaScoreChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualDaaScoreChangedResponseMessage) Command() MessageCommand {
return CmdNotifyVirtualDaaScoreChangedResponseMessage
}
// NewNotifyVirtualDaaScoreChangedResponseMessage returns an instance of the message
func NewNotifyVirtualDaaScoreChangedResponseMessage() *NotifyVirtualDaaScoreChangedResponseMessage {
return &NotifyVirtualDaaScoreChangedResponseMessage{}
}
// VirtualDaaScoreChangedNotificationMessage is an appmessage corresponding to
// its respective RPC message
type VirtualDaaScoreChangedNotificationMessage struct {
baseMessage
VirtualDaaScore uint64
}
// Command returns the protocol command string for the message
func (msg *VirtualDaaScoreChangedNotificationMessage) Command() MessageCommand {
return CmdVirtualDaaScoreChangedNotificationMessage
}
// NewVirtualDaaScoreChangedNotificationMessage returns an instance of the message
func NewVirtualDaaScoreChangedNotificationMessage(
virtualDaaScore uint64) *VirtualDaaScoreChangedNotificationMessage {
return &VirtualDaaScoreChangedNotificationMessage{
VirtualDaaScore: virtualDaaScore,
}
}
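
A sketch of producing the new notification on the server side; the DAA score value here is made up:

package main

import (
	"fmt"

	"github.com/kaspanet/kaspad/app/appmessage"
)

func main() {
	// Wrap a (made-up) virtual DAA score in the new notification message.
	notification := appmessage.NewVirtualDaaScoreChangedNotificationMessage(1234567)
	fmt.Println(notification.Command(), notification.VirtualDaaScore)
}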

View File

@@ -38,19 +38,7 @@ func NewNotifyVirtualSelectedParentChainChangedResponseMessage() *NotifyVirtualS
type VirtualSelectedParentChainChangedNotificationMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlocks []*ChainBlock
}
// ChainBlock represents a DAG chain-block
type ChainBlock struct {
Hash string
AcceptedBlocks []*AcceptedBlock
}
// AcceptedBlock represents a block accepted into the DAG
type AcceptedBlock struct {
Hash string
AcceptedTransactionIDs []string
AddedChainBlockHashes []string
}
// Command returns the protocol command string for the message
@@ -59,11 +47,11 @@ func (msg *VirtualSelectedParentChainChangedNotificationMessage) Command() Messa
}
// NewVirtualSelectedParentChainChangedNotificationMessage returns an instance of the message
func NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes []string,
addedChainBlocks []*ChainBlock) *VirtualSelectedParentChainChangedNotificationMessage {
func NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes,
addedChainBlocks []string) *VirtualSelectedParentChainChangedNotificationMessage {
return &VirtualSelectedParentChainChangedNotificationMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlocks: addedChainBlocks,
AddedChainBlockHashes: addedChainBlocks,
}
}

View File

@@ -53,7 +53,7 @@ func (msg *SubmitBlockResponseMessage) Command() MessageCommand {
return CmdSubmitBlockResponseMessage
}
// NewSubmitBlockResponseMessage returns a instance of the message
// NewSubmitBlockResponseMessage returns an instance of the message
func NewSubmitBlockResponseMessage() *SubmitBlockResponseMessage {
return &SubmitBlockResponseMessage{}
}
@@ -70,13 +70,22 @@ type RPCBlock struct {
// used over RPC
type RPCBlockHeader struct {
Version uint32
ParentHashes []string
Parents []*RPCBlockLevelParents
HashMerkleRoot string
AcceptedIDMerkleRoot string
UTXOCommitment string
Timestamp int64
Bits uint32
Nonce uint64
DAAScore uint64
BlueScore uint64
BlueWork string
PruningPoint string
}
// RPCBlockLevelParents holds parent hashes for one block level
type RPCBlockLevelParents struct {
ParentHashes []string
}
// RPCBlockVerboseData holds verbose data about a block

View File

@@ -5,6 +5,7 @@ package appmessage
type SubmitTransactionRequestMessage struct {
baseMessage
Transaction *RPCTransaction
AllowOrphan bool
}
// Command returns the protocol command string for the message
@@ -13,9 +14,10 @@ func (msg *SubmitTransactionRequestMessage) Command() MessageCommand {
}
// NewSubmitTransactionRequestMessage returns an instance of the message
func NewSubmitTransactionRequestMessage(transaction *RPCTransaction) *SubmitTransactionRequestMessage {
func NewSubmitTransactionRequestMessage(transaction *RPCTransaction, allowOrphan bool) *SubmitTransactionRequestMessage {
return &SubmitTransactionRequestMessage{
Transaction: transaction,
AllowOrphan: allowOrphan,
}
}
@@ -59,6 +61,7 @@ type RPCTransactionInput struct {
PreviousOutpoint *RPCOutpoint
SignatureScript string
Sequence uint64
SigOpCount byte
VerboseData *RPCTransactionInputVerboseData
}
@@ -96,7 +99,7 @@ type RPCUTXOEntry struct {
type RPCTransactionVerboseData struct {
TransactionID string
Hash string
Size uint64
Mass uint64
BlockHash string
BlockTime uint64
}
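
A sketch of how a caller opts into orphan submission with the new AllowOrphan flag; the empty RPCTransaction is a placeholder for a fully populated transaction:

package main

import (
	"fmt"

	"github.com/kaspanet/kaspad/app/appmessage"
)

func main() {
	// Placeholder transaction; a real caller fills in version, inputs, outputs, etc.
	tx := &appmessage.RPCTransaction{}

	// allowOrphan=true asks the mempool to accept the transaction even if it
	// spends outputs that are not yet known to the node.
	req := appmessage.NewSubmitTransactionRequestMessage(tx, true)
	fmt.Println(req.AllowOrphan)
}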

View File

@@ -4,23 +4,19 @@ import (
"fmt"
"sync/atomic"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
infrastructuredatabase "github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol"
"github.com/kaspanet/kaspad/app/rpc"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/infrastructure/config"
infrastructuredatabase "github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/infrastructure/network/dnsseed"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
"github.com/kaspanet/kaspad/util/panics"
)
@@ -50,8 +46,6 @@ func (a *ComponentManager) Start() {
panics.Exit(log, fmt.Sprintf("Error starting the net adapter: %+v", err))
}
a.maybeSeedFromDNS()
a.connectionManager.Start()
}
@@ -82,7 +76,16 @@ func (a *ComponentManager) Stop() {
func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database, interrupt chan<- struct{}) (
*ComponentManager, error) {
domain, err := domain.New(cfg.ActiveNetParams, db, cfg.IsArchivalNode)
consensusConfig := consensus.Config{
Params: *cfg.ActiveNetParams,
IsArchival: cfg.IsArchivalNode,
EnableSanityCheckPruningUTXOSet: cfg.EnableSanityCheckPruningUTXOSet,
}
mempoolConfig := mempool.DefaultConfig(&consensusConfig.Params)
mempoolConfig.MaximumOrphanTransactionCount = cfg.MaxOrphanTxs
mempoolConfig.MinimumRelayTransactionFee = cfg.MinRelayTxFee
domain, err := domain.New(&consensusConfig, mempoolConfig, db)
if err != nil {
return nil, err
}
@@ -149,29 +152,13 @@ func setupRPC(
utxoIndex,
shutDownChan,
)
protocolManager.SetOnVirtualChange(rpcManager.NotifyVirtualChange)
protocolManager.SetOnBlockAddedToDAGHandler(rpcManager.NotifyBlockAddedToDAG)
protocolManager.SetOnPruningPointUTXOSetOverrideHandler(rpcManager.NotifyPruningPointUTXOSetOverride)
return rpcManager
}
func (a *ComponentManager) maybeSeedFromDNS() {
if !a.cfg.DisableDNSSeed {
dnsseed.SeedFromDNS(a.cfg.NetParams(), a.cfg.DNSSeed, false, nil,
a.cfg.Lookup, func(addresses []*appmessage.NetAddress) {
// Kaspad uses a lookup of the DNS seeder here. Since the seeder returns
// IPs of nodes and not its own IP, we cannot know the real IP of the
// source, so we take the first returned address as the source.
a.addressManager.AddAddresses(addresses...)
})
dnsseed.SeedFromGRPC(a.cfg.NetParams(), a.cfg.GRPCSeed, false, nil,
func(addresses []*appmessage.NetAddress) {
a.addressManager.AddAddresses(addresses...)
})
}
}
// P2PNodeID returns the network ID associated with this ComponentManager
func (a *ComponentManager) P2PNodeID() *id.ID {
return a.netAdapter.ID()

app/db_version.go (new file)
View File

@@ -0,0 +1,57 @@
package app
import (
"os"
"path"
"strconv"
"github.com/pkg/errors"
)
const currentDatabaseVersion = 1
func checkDatabaseVersion(dbPath string) (err error) {
versionFileName := versionFilePath(dbPath)
versionBytes, err := os.ReadFile(versionFileName)
if err != nil {
if os.IsNotExist(err) { // If version file doesn't exist, we assume that the database is new
return createDatabaseVersionFile(dbPath, versionFileName)
}
return err
}
databaseVersion, err := strconv.Atoi(string(versionBytes))
if err != nil {
return err
}
if databaseVersion != currentDatabaseVersion {
// TODO: Once there's more than one database version, it might make sense to add upgrade logic at this point
return errors.Errorf("Invalid database version %d. Expected version: %d", databaseVersion, currentDatabaseVersion)
}
return nil
}
func createDatabaseVersionFile(dbPath string, versionFileName string) error {
err := os.MkdirAll(dbPath, 0700)
if err != nil {
return err
}
versionFile, err := os.Create(versionFileName)
if err != nil {
return err
}
defer versionFile.Close()
versionString := strconv.Itoa(currentDatabaseVersion)
_, err = versionFile.Write([]byte(versionString))
return err
}
func versionFilePath(dbPath string) string {
dbVersionFileName := path.Join(dbPath, "version")
return dbVersionFileName
}
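
The same read-or-create pattern as a standalone sketch against a temporary directory; the constant and file name mirror the code above but are illustrative only:

package main

import (
	"fmt"
	"os"
	"path"
	"strconv"
)

const version = 1

func main() {
	dbPath, _ := os.MkdirTemp("", "dbversion")
	defer os.RemoveAll(dbPath)

	versionFile := path.Join(dbPath, "version")

	// First run: no version file yet, so record the current version.
	if _, err := os.ReadFile(versionFile); os.IsNotExist(err) {
		os.WriteFile(versionFile, []byte(strconv.Itoa(version)), 0600)
	}

	// Subsequent runs: read it back and compare against the expected version.
	bytes, _ := os.ReadFile(versionFile)
	got, _ := strconv.Atoi(string(bytes))
	fmt.Println("database version ok:", got == version)
}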

View File

@@ -8,7 +8,7 @@ import (
// DefaultTimeout is the default duration to wait for enqueuing/dequeuing
// to/from routes.
const DefaultTimeout = 30 * time.Second
const DefaultTimeout = 120 * time.Second
// ErrPeerWithSameIDExists signifies that a peer with the same ID already exists.
var ErrPeerWithSameIDExists = errors.New("ready peer with the same ID already exists")

View File

@@ -1,13 +1,14 @@
package flowcontext
import (
"time"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
@@ -17,7 +18,7 @@ import (
// relays newly unorphaned transactions and possibly rebroadcast
// manually added transactions when not in IBD.
func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
blockInsertionResult *externalapi.BlockInsertionResult) error {
virtualChangeSet *externalapi.VirtualChangeSet) error {
hash := consensushashing.BlockHash(block)
log.Debugf("OnNewBlock start for block %s", hash)
@@ -31,10 +32,10 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
log.Debugf("OnNewBlock: block %s unorphaned %d blocks", hash, len(unorphaningResults))
newBlocks := []*externalapi.DomainBlock{block}
newBlockInsertionResults := []*externalapi.BlockInsertionResult{blockInsertionResult}
newVirtualChangeSets := []*externalapi.VirtualChangeSet{virtualChangeSet}
for _, unorphaningResult := range unorphaningResults {
newBlocks = append(newBlocks, unorphaningResult.block)
newBlockInsertionResults = append(newBlockInsertionResults, unorphaningResult.blockInsertionResult)
newVirtualChangeSets = append(newVirtualChangeSets, unorphaningResult.virtualChangeSet)
}
allAcceptedTransactions := make([]*externalapi.DomainTransaction, 0)
@@ -48,8 +49,8 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
if f.onBlockAddedToDAGHandler != nil {
log.Debugf("OnNewBlock: calling f.onBlockAddedToDAGHandler for block %s", hash)
blockInsertionResult = newBlockInsertionResults[i]
err := f.onBlockAddedToDAGHandler(newBlock, blockInsertionResult)
virtualChangeSet = newVirtualChangeSets[i]
err := f.onBlockAddedToDAGHandler(newBlock, virtualChangeSet)
if err != nil {
return err
}
@@ -59,6 +60,15 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
return f.broadcastTransactionsAfterBlockAdded(newBlocks, allAcceptedTransactions)
}
// OnVirtualChange calls the handler function whenever the virtual block changes.
func (f *FlowContext) OnVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error {
if f.onVirtualChangeHandler != nil && virtualChangeSet != nil {
return f.onVirtualChangeHandler(virtualChangeSet)
}
return nil
}
// OnPruningPointUTXOSetOverride calls the handler function whenever the UTXO set
// resets due to pruning point change via IBD.
func (f *FlowContext) OnPruningPointUTXOSetOverride() error {
@@ -71,8 +81,6 @@ func (f *FlowContext) OnPruningPointUTXOSetOverride() error {
func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
addedBlocks []*externalapi.DomainBlock, transactionsAcceptedToMempool []*externalapi.DomainTransaction) error {
f.updateTransactionsToRebroadcast(addedBlocks)
// Don't relay transactions when in IBD.
if f.IsIBDRunning() {
return nil
@@ -80,7 +88,12 @@ func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
var txIDsToRebroadcast []*externalapi.DomainTransactionID
if f.shouldRebroadcastTransactions() {
txIDsToRebroadcast = f.txIDsToRebroadcast()
txsToRebroadcast, err := f.Domain().MiningManager().RevalidateHighPriorityTransactions()
if err != nil {
return err
}
txIDsToRebroadcast = consensushashing.TransactionIDs(txsToRebroadcast)
f.lastRebroadcastTime = time.Now()
}
txIDsToBroadcast := make([]*externalapi.DomainTransactionID, len(transactionsAcceptedToMempool)+len(txIDsToRebroadcast))
@@ -91,15 +104,7 @@ func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
for i, txID := range txIDsToRebroadcast {
txIDsToBroadcast[offset+i] = txID
}
if len(txIDsToBroadcast) == 0 {
return nil
}
if len(txIDsToBroadcast) > appmessage.MaxInvPerTxInvMsg {
txIDsToBroadcast = txIDsToBroadcast[:appmessage.MaxInvPerTxInvMsg]
}
inv := appmessage.NewMsgInvTransaction(txIDsToBroadcast)
return f.Broadcast(inv)
return f.EnqueueTransactionIDsForPropagation(txIDsToBroadcast)
}
// SharedRequestedBlocks returns a *blockrelay.SharedRequestedBlocks for sharing
@@ -114,14 +119,14 @@ func (f *FlowContext) AddBlock(block *externalapi.DomainBlock) error {
return protocolerrors.Errorf(false, "cannot add header only block")
}
blockInsertionResult, err := f.Domain().Consensus().ValidateAndInsertBlock(block)
virtualChangeSet, err := f.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for block %s: %s", consensushashing.BlockHash(block), err)
}
return err
}
err = f.OnNewBlock(block, blockInsertionResult)
err = f.OnNewBlock(block, virtualChangeSet)
if err != nil {
return err
}
@@ -161,7 +166,6 @@ func (f *FlowContext) UnsetIBDRunning() {
}
f.ibdPeer = nil
log.Infof("IBD finished")
}
// IBDPeer returns the current IBD peer or null if the node is not

View File

@@ -29,3 +29,8 @@ func (*FlowContext) HandleError(err error, flowName string, isStopping *uint32,
errChan <- err
}
}
// IsRecoverableError returns whether the error is recoverable
func (*FlowContext) IsRecoverableError(err error) bool {
return err == nil || errors.Is(err, router.ErrRouteClosed) || errors.As(err, &protocolerrors.ProtocolError{})
}
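
A sketch of how a caller might branch on this classification; the handleFlowError wrapper below is invented for illustration:

package main

import (
	"errors"
	"fmt"

	"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
	"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)

// handleFlowError is a hypothetical caller: recoverable errors only cost us
// the peer connection, while anything else should take the node down.
func handleFlowError(err error) {
	isRecoverable := err == nil ||
		errors.Is(err, router.ErrRouteClosed) ||
		errors.As(err, &protocolerrors.ProtocolError{})
	if isRecoverable {
		fmt.Println("disconnecting peer:", err)
		return
	}
	panic(err) // unrecoverable: crash loudly
}

func main() {
	handleFlowError(router.ErrRouteClosed)
}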

View File

@@ -1,10 +1,11 @@
package flowcontext
import (
"github.com/kaspanet/kaspad/util/mstime"
"sync"
"time"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain"
@@ -21,7 +22,10 @@ import (
// OnBlockAddedToDAGHandler is a handler function that's triggered
// when a block is added to the DAG
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock, virtualChangeSet *externalapi.VirtualChangeSet) error
// OnVirtualChangeHandler is a handler function that's triggered when the virtual changes
type OnVirtualChangeHandler func(virtualChangeSet *externalapi.VirtualChangeSet) error
// OnPruningPointUTXOSetOverrideHandler is a handler function that's triggered whenever the UTXO set
// resets due to pruning point change via IBD.
@@ -42,14 +46,13 @@ type FlowContext struct {
timeStarted int64
onVirtualChangeHandler OnVirtualChangeHandler
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
transactionsToRebroadcastLock sync.Mutex
transactionsToRebroadcast map[externalapi.DomainTransactionID]*externalapi.DomainTransaction
lastRebroadcastTime time.Time
sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
lastRebroadcastTime time.Time
sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
sharedRequestedBlocks *blockrelay.SharedRequestedBlocks
@@ -62,6 +65,10 @@ type FlowContext struct {
orphans map[externalapi.DomainHash]*externalapi.DomainBlock
orphansMutex sync.RWMutex
transactionIDsToPropagate []*externalapi.DomainTransactionID
lastTransactionIDPropagationTime time.Time
transactionIDPropagationLock sync.Mutex
shutdownChan chan struct{}
}
@@ -70,18 +77,19 @@ func New(cfg *config.Config, domain domain.Domain, addressManager *addressmanage
netAdapter *netadapter.NetAdapter, connectionManager *connmanager.ConnectionManager) *FlowContext {
return &FlowContext{
cfg: cfg,
netAdapter: netAdapter,
domain: domain,
addressManager: addressManager,
connectionManager: connectionManager,
sharedRequestedTransactions: transactionrelay.NewSharedRequestedTransactions(),
sharedRequestedBlocks: blockrelay.NewSharedRequestedBlocks(),
peers: make(map[id.ID]*peerpkg.Peer),
transactionsToRebroadcast: make(map[externalapi.DomainTransactionID]*externalapi.DomainTransaction),
orphans: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
timeStarted: mstime.Now().UnixMilliseconds(),
shutdownChan: make(chan struct{}),
cfg: cfg,
netAdapter: netAdapter,
domain: domain,
addressManager: addressManager,
connectionManager: connectionManager,
sharedRequestedTransactions: transactionrelay.NewSharedRequestedTransactions(),
sharedRequestedBlocks: blockrelay.NewSharedRequestedBlocks(),
peers: make(map[id.ID]*peerpkg.Peer),
orphans: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
timeStarted: mstime.Now().UnixMilliseconds(),
transactionIDsToPropagate: []*externalapi.DomainTransactionID{},
lastTransactionIDPropagationTime: time.Now(),
shutdownChan: make(chan struct{}),
}
}
@@ -96,6 +104,11 @@ func (f *FlowContext) ShutdownChan() <-chan struct{} {
return f.shutdownChan
}
// SetOnVirtualChangeHandler sets the onVirtualChangeHandler handler
func (f *FlowContext) SetOnVirtualChangeHandler(onVirtualChangeHandler OnVirtualChangeHandler) {
f.onVirtualChangeHandler = onVirtualChangeHandler
}
// SetOnBlockAddedToDAGHandler sets the onBlockAddedToDAG handler
func (f *FlowContext) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler OnBlockAddedToDAGHandler) {
f.onBlockAddedToDAGHandler = onBlockAddedToDAGHandler

View File

@@ -17,8 +17,8 @@ const maxOrphans = 600
// UnorphaningResult is the result of unorphaning a block
type UnorphaningResult struct {
block *externalapi.DomainBlock
blockInsertionResult *externalapi.BlockInsertionResult
block *externalapi.DomainBlock
virtualChangeSet *externalapi.VirtualChangeSet
}
// AddOrphan adds the block to the orphan set
@@ -73,10 +73,10 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
orphanBlock := f.orphans[orphanHash]
log.Debugf("Considering to unorphan block %s with parents %s",
orphanHash, orphanBlock.Header.ParentHashes())
orphanHash, orphanBlock.Header.DirectParents())
canBeUnorphaned := true
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
for _, orphanBlockParentHash := range orphanBlock.Header.DirectParents() {
orphanBlockParentInfo, err := f.domain.Consensus().GetBlockInfo(orphanBlockParentHash)
if err != nil {
return nil, err
@@ -90,14 +90,14 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
}
}
if canBeUnorphaned {
blockInsertionResult, unorphaningSucceeded, err := f.unorphanBlock(orphanHash)
virtualChangeSet, unorphaningSucceeded, err := f.unorphanBlock(orphanHash)
if err != nil {
return nil, err
}
if unorphaningSucceeded {
unorphaningResults = append(unorphaningResults, &UnorphaningResult{
block: orphanBlock,
blockInsertionResult: blockInsertionResult,
block: orphanBlock,
virtualChangeSet: virtualChangeSet,
})
processQueue = f.addChildOrphansToProcessQueue(&orphanHash, processQueue)
}
@@ -133,7 +133,7 @@ func (f *FlowContext) addChildOrphansToProcessQueue(blockHash *externalapi.Domai
func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash) []externalapi.DomainHash {
var childOrphans []externalapi.DomainHash
for orphanHash, orphanBlock := range f.orphans {
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
for _, orphanBlockParentHash := range orphanBlock.Header.DirectParents() {
if orphanBlockParentHash.Equal(blockHash) {
childOrphans = append(childOrphans, orphanHash)
break
@@ -143,14 +143,14 @@ func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash)
return childOrphans
}
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (*externalapi.BlockInsertionResult, bool, error) {
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (*externalapi.VirtualChangeSet, bool, error) {
orphanBlock, ok := f.orphans[orphanHash]
if !ok {
return nil, false, errors.Errorf("attempted to unorphan a non-orphan block %s", orphanHash)
}
delete(f.orphans, orphanHash)
blockInsertionResult, err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock)
virtualChangeSet, err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for orphan block %s: %s", orphanHash, err)
@@ -160,7 +160,7 @@ func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (*externa
}
log.Infof("Unorphaned block %s", orphanHash)
return blockInsertionResult, true, nil
return virtualChangeSet, true, nil
}
// GetOrphanRoots returns the roots of the missing ancestors DAG of the given orphan
@@ -201,7 +201,7 @@ func (f *FlowContext) GetOrphanRoots(orphan *externalapi.DomainHash) ([]*externa
continue
}
for _, parent := range block.Header.ParentHashes() {
for _, parent := range block.Header.DirectParents() {
if !addedToQueueSet.Contains(parent) {
queue = append(queue, parent)
addedToQueueSet.Add(parent)

View File

@@ -25,6 +25,10 @@ func (f *FlowContext) ShouldMine() (bool, error) {
return false, err
}
if virtualSelectedParent.Equal(f.Config().NetParams().GenesisHash) {
return false, nil
}
virtualSelectedParentHeader, err := f.domain.Consensus().GetBlockHeader(virtualSelectedParent)
if err != nil {
return false, err

View File

@@ -9,34 +9,18 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
// AddTransaction adds a transaction to the mempool and propagates it.
func (f *FlowContext) AddTransaction(tx *externalapi.DomainTransaction) error {
f.transactionsToRebroadcastLock.Lock()
defer f.transactionsToRebroadcastLock.Unlock()
// TransactionIDPropagationInterval is the interval between transaction IDs propagations
const TransactionIDPropagationInterval = 500 * time.Millisecond
err := f.Domain().MiningManager().ValidateAndInsertTransaction(tx, false)
// AddTransaction adds a transaction to the mempool and propagates it.
func (f *FlowContext) AddTransaction(tx *externalapi.DomainTransaction, allowOrphan bool) error {
acceptedTransactions, err := f.Domain().MiningManager().ValidateAndInsertTransaction(tx, true, allowOrphan)
if err != nil {
return err
}
transactionID := consensushashing.TransactionID(tx)
f.transactionsToRebroadcast[*transactionID] = tx
inv := appmessage.NewMsgInvTransaction([]*externalapi.DomainTransactionID{transactionID})
return f.Broadcast(inv)
}
func (f *FlowContext) updateTransactionsToRebroadcast(addedBlocks []*externalapi.DomainBlock) {
f.transactionsToRebroadcastLock.Lock()
defer f.transactionsToRebroadcastLock.Unlock()
for _, block := range addedBlocks {
// Note: if a transaction is included in the DAG but not accepted,
// it won't be rebroadcast anymore, although it is not included in
// the UTXO set
for _, tx := range block.Transactions {
delete(f.transactionsToRebroadcast, *consensushashing.TransactionID(tx))
}
}
acceptedTransactionIDs := consensushashing.TransactionIDs(acceptedTransactions)
return f.EnqueueTransactionIDsForPropagation(acceptedTransactionIDs)
}
func (f *FlowContext) shouldRebroadcastTransactions() bool {
@@ -44,19 +28,6 @@ func (f *FlowContext) shouldRebroadcastTransactions() bool {
return time.Since(f.lastRebroadcastTime) > rebroadcastInterval
}
func (f *FlowContext) txIDsToRebroadcast() []*externalapi.DomainTransactionID {
f.transactionsToRebroadcastLock.Lock()
defer f.transactionsToRebroadcastLock.Unlock()
txIDs := make([]*externalapi.DomainTransactionID, len(f.transactionsToRebroadcast))
i := 0
for _, tx := range f.transactionsToRebroadcast {
txIDs[i] = consensushashing.TransactionID(tx)
i++
}
return txIDs
}
// SharedRequestedTransactions returns a *transactionrelay.SharedRequestedTransactions for sharing
// data about requested transactions between different peers.
func (f *FlowContext) SharedRequestedTransactions() *transactionrelay.SharedRequestedTransactions {
@@ -70,3 +41,42 @@ func (f *FlowContext) OnTransactionAddedToMempool() {
f.onTransactionAddedToMempoolHandler()
}
}
// EnqueueTransactionIDsForPropagation adds the given transaction IDs to the set of IDs to
// propagate. The IDs will be broadcast to all peers within a single transaction Inv message.
// The broadcast itself may happen only during a subsequent call to this method.
func (f *FlowContext) EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error {
f.transactionIDPropagationLock.Lock()
defer f.transactionIDPropagationLock.Unlock()
f.transactionIDsToPropagate = append(f.transactionIDsToPropagate, transactionIDs...)
return f.maybePropagateTransactions()
}
func (f *FlowContext) maybePropagateTransactions() error {
if time.Since(f.lastTransactionIDPropagationTime) < TransactionIDPropagationInterval &&
len(f.transactionIDsToPropagate) < appmessage.MaxInvPerTxInvMsg {
return nil
}
for len(f.transactionIDsToPropagate) > 0 {
transactionIDsToBroadcast := f.transactionIDsToPropagate
if len(transactionIDsToBroadcast) > appmessage.MaxInvPerTxInvMsg {
transactionIDsToBroadcast = f.transactionIDsToPropagate[:appmessage.MaxInvPerTxInvMsg]
}
log.Debugf("Transaction propagation: broadcasting %d transactions", len(transactionIDsToBroadcast))
inv := appmessage.NewMsgInvTransaction(transactionIDsToBroadcast)
err := f.Broadcast(inv)
if err != nil {
return err
}
f.transactionIDsToPropagate = f.transactionIDsToPropagate[len(transactionIDsToBroadcast):]
}
f.lastTransactionIDPropagationTime = time.Now()
return nil
}
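
The flush rule above (broadcast once TransactionIDPropagationInterval has elapsed or a full inv message's worth of IDs has accumulated) is a generic coalescing pattern. A standalone sketch with invented names and a small stand-in for appmessage.MaxInvPerTxInvMsg:

package main

import (
	"fmt"
	"time"
)

const (
	flushInterval = 500 * time.Millisecond // mirrors TransactionIDPropagationInterval
	maxPerMessage = 4                      // stand-in for appmessage.MaxInvPerTxInvMsg
)

type batcher struct {
	pending   []int
	lastFlush time.Time
}

func (b *batcher) enqueue(ids ...int) {
	b.pending = append(b.pending, ids...)
	if time.Since(b.lastFlush) < flushInterval && len(b.pending) < maxPerMessage {
		return // neither threshold reached yet; keep buffering
	}
	for len(b.pending) > 0 {
		n := len(b.pending)
		if n > maxPerMessage {
			n = maxPerMessage
		}
		fmt.Println("broadcasting batch:", b.pending[:n])
		b.pending = b.pending[n:]
	}
	b.lastFlush = time.Now()
}

func main() {
	b := &batcher{lastFlush: time.Now()}
	b.enqueue(1, 2)          // buffered: below both thresholds
	b.enqueue(3, 4, 5, 6, 7) // exceeds maxPerMessage: flushed in two batches
}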

View File

@@ -7,10 +7,8 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
func (flow *handleRelayInvsFlow) sendGetBlockLocator(lowHash *externalapi.DomainHash,
highHash *externalapi.DomainHash, limit uint32) error {
msgGetBlockLocator := appmessage.NewMsgRequestBlockLocator(lowHash, highHash, limit)
func (flow *handleRelayInvsFlow) sendGetBlockLocator(highHash *externalapi.DomainHash, limit uint32) error {
msgGetBlockLocator := appmessage.NewMsgRequestBlockLocator(highHash, limit)
return flow.outgoingRoute.Enqueue(msgGetBlockLocator)
}

View File

@@ -5,6 +5,7 @@ import (
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
@@ -44,7 +45,9 @@ func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *
if err != nil {
return err
}
if !blockInfo.Exists {
// The IBD block locator considers only existing blocks with bodies.
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
continue
}

View File

@@ -48,7 +48,7 @@ func HandleIBDBlockRequests(context HandleIBDBlockRequestsContext, incomingRoute
if err != nil {
return err
}
log.Debugf("sent %d out of %d", i, len(msgRequestIBDBlocks.Hashes))
log.Debugf("sent %d out of %d", i+1, len(msgRequestIBDBlocks.Hashes))
}
}
}

View File

@@ -0,0 +1,95 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"runtime"
"sync/atomic"
)
// PruningPointAndItsAnticoneRequestsContext is the interface for the context needed for the HandlePruningPointAndItsAnticoneRequests flow.
type PruningPointAndItsAnticoneRequestsContext interface {
Domain() domain.Domain
}
var isBusy uint32
// HandlePruningPointAndItsAnticoneRequests listens to appmessage.MsgRequestPruningPointAndItsAnticone messages and sends
// the pruning point and its anticone to the requesting peer.
func HandlePruningPointAndItsAnticoneRequests(context PruningPointAndItsAnticoneRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
err := func() error {
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
if !atomic.CompareAndSwapUint32(&isBusy, 0, 1) {
return protocolerrors.Errorf(false, "node is busy with other pruning point anticone requests")
}
defer atomic.StoreUint32(&isBusy, 0)
log.Debugf("Got request for pruning point and its anticone from %s", peer)
pruningPointHeaders, err := context.Domain().Consensus().PruningPointHeaders()
if err != nil {
return err
}
msgPruningPointHeaders := make([]*appmessage.MsgBlockHeader, len(pruningPointHeaders))
for i, header := range pruningPointHeaders {
msgPruningPointHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(header)
}
err = outgoingRoute.Enqueue(appmessage.NewMsgPruningPoints(msgPruningPointHeaders))
if err != nil {
return err
}
pointAndItsAnticone, err := context.Domain().Consensus().PruningPointAndItsAnticone()
if err != nil {
return err
}
for _, blockHash := range pointAndItsAnticone {
err := sendBlockWithTrustedData(context, outgoingRoute, blockHash)
if err != nil {
return err
}
}
err = outgoingRoute.Enqueue(appmessage.NewMsgDoneBlocksWithTrustedData())
if err != nil {
return err
}
log.Debugf("Sent pruning point and its anticone to %s", peer)
return nil
}()
if err != nil {
return err
}
}
}
func sendBlockWithTrustedData(context PruningPointAndItsAnticoneRequestsContext, outgoingRoute *router.Route, blockHash *externalapi.DomainHash) error {
blockWithTrustedData, err := context.Domain().Consensus().BlockWithTrustedData(blockHash)
if err != nil {
return err
}
err = outgoingRoute.Enqueue(appmessage.DomainBlockWithTrustedDataToBlockWithTrustedData(blockWithTrustedData))
if err != nil {
return err
}
runtime.GC()
return nil
}
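
The isBusy guard is a single-flight pattern: the first request wins the CompareAndSwap and concurrent requests are rejected rather than queued. A standalone sketch:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var isBusy uint32

// serveExpensiveRequest rejects concurrent callers instead of queueing them,
// mirroring the CompareAndSwap guard in the flow above.
func serveExpensiveRequest(id int) {
	if !atomic.CompareAndSwapUint32(&isBusy, 0, 1) {
		fmt.Printf("request %d rejected: busy\n", id)
		return
	}
	defer atomic.StoreUint32(&isBusy, 0)
	fmt.Printf("request %d served\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			serveExpensiveRequest(id)
		}(i)
	}
	wg.Wait()
}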

View File

@@ -1,51 +0,0 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandlePruningPointHashRequestsFlowContext is the interface for the context needed for the handlePruningPointHashRequestsFlow flow.
type HandlePruningPointHashRequestsFlowContext interface {
Domain() domain.Domain
}
type handlePruningPointHashRequestsFlow struct {
HandlePruningPointHashRequestsFlowContext
incomingRoute, outgoingRoute *router.Route
}
// HandlePruningPointHashRequests listens to appmessage.MsgRequestPruningPointHashMessage messages and sends
// the pruning point hash as response.
func HandlePruningPointHashRequests(context HandlePruningPointHashRequestsFlowContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handlePruningPointHashRequestsFlow{
HandlePruningPointHashRequestsFlowContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handlePruningPointHashRequestsFlow) start() error {
for {
_, err := flow.incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for a pruning point hash")
pruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return err
}
err = flow.outgoingRoute.Enqueue(appmessage.NewPruningPointHashMessage(pruningPoint))
if err != nil {
return err
}
log.Debugf("Sent pruning point hash %s", pruningPoint)
}
}

View File

@@ -0,0 +1,40 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// PruningPointProofRequestsContext is the interface for the context needed for the HandlePruningPointProofRequests flow.
type PruningPointProofRequestsContext interface {
Domain() domain.Domain
}
// HandlePruningPointProofRequests listens to appmessage.MsgRequestPruningPointProof messages and sends
// the pruning point proof to the requesting peer.
func HandlePruningPointProofRequests(context PruningPointProofRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for pruning point proof from %s", peer)
pruningPointProof, err := context.Domain().Consensus().BuildPruningPointProof()
if err != nil {
return err
}
pruningPointProofMessage := appmessage.DomainPruningPointProofToMsgPruningPointProof(pruningPointProof)
err = outgoingRoute.Enqueue(pruningPointProofMessage)
if err != nil {
return err
}
log.Debugf("Sent pruning point proof to %s", peer)
}
}

View File

@@ -23,7 +23,8 @@ var orphanResolutionRange uint32 = 5
type RelayInvsContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
OnNewBlock(block *externalapi.DomainBlock, virtualChangeSet *externalapi.VirtualChangeSet) error
OnVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error
OnPruningPointUTXOSetOverride() error
SharedRequestedBlocks() *SharedRequestedBlocks
Broadcast(message appmessage.Message) error
@@ -33,6 +34,7 @@ type RelayInvsContext interface {
IsIBDRunning() bool
TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool
UnsetIBDRunning()
IsRecoverableError(err error) bool
}
type handleRelayInvsFlow struct {
@@ -80,7 +82,18 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
isGenesisVirtualSelectedParent, err := flow.isGenesisVirtualSelectedParent()
if err != nil {
return err
}
if flow.IsOrphan(inv.Hash) {
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced && isGenesisVirtualSelectedParent {
log.Infof("Cannot process orphan %s for a node with only the genesis block. The node needs to IBD "+
"to the recent pruning point before normal operation can resume.", inv.Hash)
continue
}
log.Debugf("Block %s is a known orphan. Requesting its missing ancestors", inv.Hash)
err := flow.AddOrphanRootsToQueue(inv.Hash)
if err != nil {
@@ -110,8 +123,13 @@ func (flow *handleRelayInvsFlow) start() error {
return err
}
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced && !flow.Config().Devnet && flow.isChildOfGenesis(block) {
log.Infof("Cannot process %s because it's a direct child of genesis.", consensushashing.BlockHash(block))
continue
}
log.Debugf("Processing block %s", inv.Hash)
missingParents, blockInsertionResult, err := flow.processBlock(block)
missingParents, virtualChangeSet, err := flow.processBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrPrunedBlock) {
log.Infof("Ignoring pruned block %s", inv.Hash)
@@ -126,7 +144,7 @@ func (flow *handleRelayInvsFlow) start() error {
}
if len(missingParents) > 0 {
log.Debugf("Block %s is orphan and has missing parents: %s", inv.Hash, missingParents)
err := flow.processOrphan(block, missingParents)
err := flow.processOrphan(block)
if err != nil {
return err
}
@@ -139,7 +157,7 @@ func (flow *handleRelayInvsFlow) start() error {
return err
}
log.Infof("Accepted block %s via relay", inv.Hash)
err = flow.OnNewBlock(block, blockInsertionResult)
err = flow.OnNewBlock(block, virtualChangeSet)
if err != nil {
return err
}
@@ -226,9 +244,9 @@ func (flow *handleRelayInvsFlow) readMsgBlock() (msgBlock *appmessage.MsgBlock,
}
}
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, *externalapi.BlockInsertionResult, error) {
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, *externalapi.VirtualChangeSet, error) {
blockHash := consensushashing.BlockHash(block)
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
virtualChangeSet, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return nil, nil, errors.Wrapf(err, "failed to process block %s", blockHash)
@@ -241,7 +259,7 @@ func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([
log.Warnf("Rejected block %s from %s: %s", blockHash, flow.peer, err)
return nil, nil, protocolerrors.Wrapf(true, err, "got invalid block %s from relay", blockHash)
}
return nil, blockInsertionResult, nil
return nil, virtualChangeSet, nil
}
func (flow *handleRelayInvsFlow) relayBlock(block *externalapi.DomainBlock) error {
@@ -249,7 +267,7 @@ func (flow *handleRelayInvsFlow) relayBlock(block *externalapi.DomainBlock) erro
return flow.Broadcast(appmessage.NewMsgInvBlock(blockHash))
}
func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, missingParents []*externalapi.DomainHash) error {
func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock) error {
blockHash := consensushashing.BlockHash(block)
// Return if the block has been orphaned from elsewhere already
@@ -264,6 +282,19 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, m
return err
}
if isBlockInOrphanResolutionRange {
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced {
isGenesisVirtualSelectedParent, err := flow.isGenesisVirtualSelectedParent()
if err != nil {
return err
}
if isGenesisVirtualSelectedParent {
log.Infof("Cannot process orphan %s for a node with only the genesis block. The node needs to IBD "+
"to the recent pruning point before normal operation can resume.", blockHash)
return nil
}
}
log.Debugf("Block %s is within orphan resolution range. "+
"Adding it to the orphan set", blockHash)
flow.AddOrphan(block)
@@ -274,7 +305,21 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, m
// Start IBD unless we already are in IBD
log.Debugf("Block %s is out of orphan resolution range. "+
"Attempting to start IBD against it.", blockHash)
return flow.runIBDIfNotRunning(blockHash)
return flow.runIBDIfNotRunning(block)
}
func (flow *handleRelayInvsFlow) isGenesisVirtualSelectedParent() (bool, error) {
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
return virtualSelectedParent.Equal(flow.Config().NetParams().GenesisHash), nil
}
func (flow *handleRelayInvsFlow) isChildOfGenesis(block *externalapi.DomainBlock) bool {
parents := block.Header.DirectParents()
return len(parents) == 1 && parents[0].Equal(flow.Config().NetParams().GenesisHash)
}
// isBlockInOrphanResolutionRange finds out whether the given blockHash should be
@@ -283,8 +328,7 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, m
// In the response, if we know none of the hashes, we should retrieve the given
// blockHash via IBD. Otherwise, via unorphaning.
func (flow *handleRelayInvsFlow) isBlockInOrphanResolutionRange(blockHash *externalapi.DomainHash) (bool, error) {
lowHash := flow.Config().ActiveNetParams.GenesisHash
err := flow.sendGetBlockLocator(lowHash, blockHash, orphanResolutionRange)
err := flow.sendGetBlockLocator(blockHash, orphanResolutionRange)
if err != nil {
return false, err
}

View File

@@ -32,20 +32,19 @@ func HandleRequestBlockLocator(context RequestBlockLocatorContext, incomingRoute
func (flow *handleRequestBlockLocatorFlow) start() error {
for {
lowHash, highHash, limit, err := flow.receiveGetBlockLocator()
highHash, limit, err := flow.receiveGetBlockLocator()
if err != nil {
return err
}
log.Debugf("Received getBlockLocator with lowHash: %s, highHash: %s, limit: %d",
lowHash, highHash, limit)
log.Debugf("Received getBlockLocator with highHash: %s, limit: %d", highHash, limit)
locator, err := flow.Domain().Consensus().CreateBlockLocator(lowHash, highHash, limit)
locator, err := flow.Domain().Consensus().CreateBlockLocatorFromPruningPoint(highHash, limit)
if err != nil || len(locator) == 0 {
if err != nil {
log.Debugf("Received error from CreateBlockLocator: %s", err)
log.Debugf("Received error from CreateBlockLocatorFromPruningPoint: %s", err)
}
return protocolerrors.Errorf(true, "couldn't build a block "+
"locator between blocks %s and %s", lowHash, highHash)
"locator between the pruning point and %s", highHash)
}
err = flow.sendBlockLocator(locator)
@@ -55,16 +54,15 @@ func (flow *handleRequestBlockLocatorFlow) start() error {
}
}
func (flow *handleRequestBlockLocatorFlow) receiveGetBlockLocator() (lowHash *externalapi.DomainHash,
highHash *externalapi.DomainHash, limit uint32, err error) {
func (flow *handleRequestBlockLocatorFlow) receiveGetBlockLocator() (highHash *externalapi.DomainHash, limit uint32, err error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, nil, 0, err
return nil, 0, err
}
msgGetBlockLocator := message.(*appmessage.MsgRequestBlockLocator)
return msgGetBlockLocator.LowHash, msgGetBlockLocator.HighHash, msgGetBlockLocator.Limit, nil
return msgGetBlockLocator.HighHash, msgGetBlockLocator.Limit, nil
}
func (flow *handleRequestBlockLocatorFlow) sendBlockLocator(locator externalapi.BlockLocator) error {

View File

@@ -12,26 +12,26 @@ import (
const ibdBatchSize = router.DefaultMaxMessages
// RequestIBDBlocksContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestIBDBlocksContext interface {
// RequestHeadersContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestHeadersContext interface {
Domain() domain.Domain
}
type handleRequestHeadersFlow struct {
RequestIBDBlocksContext
RequestHeadersContext
incomingRoute, outgoingRoute *router.Route
peer *peer.Peer
}
// HandleRequestHeaders handles RequestHeaders messages
func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router.Route,
func HandleRequestHeaders(context RequestHeadersContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
flow := &handleRequestHeadersFlow{
RequestIBDBlocksContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
RequestHeadersContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
@@ -49,8 +49,9 @@ func (flow *handleRequestHeadersFlow) start() error {
// GetHashesBetween is a relatively heavy operation so we limit it
// in order to avoid locking the consensus for too long
const maxBlueScoreDifference = 1 << 10
blockHashes, _, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash, maxBlueScoreDifference)
// maxBlocks MUST be >= MergeSetSizeLimit + 1
const maxBlocks = 1 << 10
blockHashes, _, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash, maxBlocks)
if err != nil {
return err
}

View File

@@ -0,0 +1,140 @@
package blockrelay
import (
"errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleRequestPruningPointUTXOSetContext is the interface for the context needed for the HandleRequestPruningPointUTXOSet flow.
type HandleRequestPruningPointUTXOSetContext interface {
Domain() domain.Domain
}
type handleRequestPruningPointUTXOSetFlow struct {
HandleRequestPruningPointUTXOSetContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestPruningPointUTXOSet listens to appmessage.MsgRequestPruningPointUTXOSet messages and sends
// the pruning point UTXO set and block body.
func HandleRequestPruningPointUTXOSet(context HandleRequestPruningPointUTXOSetContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handleRequestPruningPointUTXOSetFlow{
HandleRequestPruningPointUTXOSetContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestPruningPointUTXOSetFlow) start() error {
for {
msgRequestPruningPointUTXOSet, err := flow.waitForRequestPruningPointUTXOSetMessages()
if err != nil {
return err
}
err = flow.handleRequestPruningPointUTXOSetMessage(msgRequestPruningPointUTXOSet)
if err != nil {
return err
}
}
}
func (flow *handleRequestPruningPointUTXOSetFlow) handleRequestPruningPointUTXOSetMessage(
msgRequestPruningPointUTXOSet *appmessage.MsgRequestPruningPointUTXOSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "handleRequestPruningPointUTXOSetFlow")
defer onEnd()
log.Debugf("Got request for pruning point UTXO set")
return flow.sendPruningPointUTXOSet(msgRequestPruningPointUTXOSet)
}
func (flow *handleRequestPruningPointUTXOSetFlow) waitForRequestPruningPointUTXOSetMessages() (
*appmessage.MsgRequestPruningPointUTXOSet, error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, err
}
msgRequestPruningPointUTXOSet, ok := message.(*appmessage.MsgRequestPruningPointUTXOSet)
if !ok {
// TODO: Change to shouldBan: true once we fix the bug of getting redundant messages
return nil, protocolerrors.Errorf(false, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestPruningPointUTXOSet, message.Command())
}
return msgRequestPruningPointUTXOSet, nil
}
func (flow *handleRequestPruningPointUTXOSetFlow) sendPruningPointUTXOSet(
msgRequestPruningPointUTXOSet *appmessage.MsgRequestPruningPointUTXOSet) error {
// Send the UTXO set in `step`-sized chunks
const step = 1000
var fromOutpoint *externalapi.DomainOutpoint
chunksSent := 0
for {
pruningPointUTXOs, err := flow.Domain().Consensus().GetPruningPointUTXOs(
msgRequestPruningPointUTXOSet.PruningPointHash, fromOutpoint, step)
if err != nil {
if errors.Is(err, ruleerrors.ErrWrongPruningPointHash) {
return flow.outgoingRoute.Enqueue(appmessage.NewMsgUnexpectedPruningPoint())
}
return err
}
log.Debugf("Retrieved %d UTXOs for pruning block %s",
len(pruningPointUTXOs), msgRequestPruningPointUTXOSet.PruningPointHash)
outpointAndUTXOEntryPairs :=
appmessage.DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(pruningPointUTXOs)
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgPruningPointUTXOSetChunk(outpointAndUTXOEntryPairs))
if err != nil {
return err
}
finished := len(pruningPointUTXOs) < step
if finished && chunksSent%ibdBatchSize != 0 {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSet.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
if len(pruningPointUTXOs) > 0 {
fromOutpoint = pruningPointUTXOs[len(pruningPointUTXOs)-1].Outpoint
}
chunksSent++
// Wait for the peer to request more chunks every `ibdBatchSize` chunks
if chunksSent%ibdBatchSize == 0 {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
_, ok := message.(*appmessage.MsgRequestNextPruningPointUTXOSetChunk)
if !ok {
// TODO: Change to shouldBan: true once we fix the bug of getting redundant messages
return protocolerrors.Errorf(false, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointUTXOSetChunk, message.Command())
}
if finished {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSet.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
}
}
}
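
The flow streams the UTXO set in step-sized pages and pauses for an acknowledgement every ibdBatchSize chunks, bounding how far the sender runs ahead of the receiver. A schematic sketch of that pagination loop with invented types and counts:

package main

import "fmt"

const (
	step      = 1000 // UTXOs per chunk, as in the flow above
	batchSize = 99   // stand-in for ibdBatchSize (router.DefaultMaxMessages)
)

// fetchPage is a stand-in for GetPruningPointUTXOs: it returns how many
// items (up to step) are available starting at offset, out of total.
func fetchPage(offset, total int) int {
	remaining := total - offset
	if remaining > step {
		return step
	}
	return remaining
}

func main() {
	const totalUTXOs = 250_000
	offset, chunksSent := 0, 0
	for {
		n := fetchPage(offset, totalUTXOs)
		offset += n
		chunksSent++
		if n < step { // last, possibly partial, chunk
			fmt.Println("done after", chunksSent, "chunks")
			return
		}
		if chunksSent%batchSize == 0 {
			// In the real flow this blocks until the peer sends
			// MsgRequestNextPruningPointUTXOSetChunk.
			fmt.Println("waiting for ack after", chunksSent, "chunks")
		}
	}
}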

View File

@@ -1,144 +0,0 @@
package blockrelay
import (
"errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleRequestPruningPointUTXOSetAndBlockContext is the interface for the context needed for the HandleRequestPruningPointUTXOSetAndBlock flow.
type HandleRequestPruningPointUTXOSetAndBlockContext interface {
Domain() domain.Domain
}
type handleRequestPruningPointUTXOSetAndBlockFlow struct {
HandleRequestPruningPointUTXOSetAndBlockContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestPruningPointUTXOSetAndBlock listens to appmessage.MsgRequestPruningPointUTXOSetAndBlock messages and sends
// the pruning point UTXO set and block body.
func HandleRequestPruningPointUTXOSetAndBlock(context HandleRequestPruningPointUTXOSetAndBlockContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handleRequestPruningPointUTXOSetAndBlockFlow{
HandleRequestPruningPointUTXOSetAndBlockContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) start() error {
for {
msgRequestPruningPointUTXOSetAndBlock, err := flow.waitForRequestPruningPointUTXOSetAndBlockMessages()
if err != nil {
return err
}
err = flow.handleRequestPruningPointUTXOSetAndBlockMessage(msgRequestPruningPointUTXOSetAndBlock)
if err != nil {
return err
}
}
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) handleRequestPruningPointUTXOSetAndBlockMessage(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "handleRequestPruningPointUTXOSetAndBlockFlow")
defer onEnd()
log.Debugf("Got request for PruningPointHash UTXOSet and Block")
err := flow.sendPruningPointBlock(msgRequestPruningPointUTXOSetAndBlock)
if err != nil {
return err
}
return flow.sendPruningPointUTXOSet(msgRequestPruningPointUTXOSetAndBlock)
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) waitForRequestPruningPointUTXOSetAndBlockMessages() (
*appmessage.MsgRequestPruningPointUTXOSetAndBlock, error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, err
}
msgRequestPruningPointUTXOSetAndBlock, ok := message.(*appmessage.MsgRequestPruningPointUTXOSetAndBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestPruningPointUTXOSetAndBlock, message.Command())
}
return msgRequestPruningPointUTXOSetAndBlock, nil
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) sendPruningPointBlock(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
block, err := flow.Domain().Consensus().GetBlock(msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
if err != nil {
return err
}
log.Debugf("Retrieved pruning block %s", msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgIBDBlock(appmessage.DomainBlockToMsgBlock(block)))
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) sendPruningPointUTXOSet(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
// Send the UTXO set in `step`-sized chunks
const step = 1000
var fromOutpoint *externalapi.DomainOutpoint
chunksSent := 0
for {
pruningPointUTXOs, err := flow.Domain().Consensus().GetPruningPointUTXOs(
msgRequestPruningPointUTXOSetAndBlock.PruningPointHash, fromOutpoint, step)
if err != nil {
if errors.Is(err, ruleerrors.ErrWrongPruningPointHash) {
return flow.outgoingRoute.Enqueue(appmessage.NewMsgUnexpectedPruningPoint())
}
}
log.Debugf("Retrieved %d UTXOs for pruning block %s",
len(pruningPointUTXOs), msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
outpointAndUTXOEntryPairs :=
appmessage.DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(pruningPointUTXOs)
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgPruningPointUTXOSetChunk(outpointAndUTXOEntryPairs))
if err != nil {
return err
}
if len(pruningPointUTXOs) < step {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
fromOutpoint = pruningPointUTXOs[len(pruningPointUTXOs)-1].Outpoint
chunksSent++
// Wait for the peer to request more chunks every `ibdBatchSize` chunks
if chunksSent%ibdBatchSize == 0 {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
_, ok := message.(*appmessage.MsgRequestNextPruningPointUTXOSetChunk)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointUTXOSetChunk, message.Command())
}
}
}
}

View File

@@ -1,7 +1,6 @@
package blockrelay
import (
"fmt"
"time"
"github.com/kaspanet/kaspad/infrastructure/logger"
@@ -17,78 +16,80 @@ import (
"github.com/pkg/errors"
)
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.DomainHash) error {
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) error {
wasIBDNotRunning := flow.TrySetIBDRunning(flow.peer)
if !wasIBDNotRunning {
log.Debugf("IBD is already running")
return nil
}
defer flow.UnsetIBDRunning()
isFinishedSuccessfully := false
defer func() {
flow.UnsetIBDRunning()
flow.logIBDFinished(isFinishedSuccessfully)
}()
highHash := consensushashing.BlockHash(block)
log.Debugf("IBD started with peer %s and highHash %s", flow.peer, highHash)
log.Debugf("Syncing headers up to %s", highHash)
headersSynced, err := flow.syncHeaders(highHash)
log.Debugf("Syncing blocks up to %s", highHash)
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, highestSharedBlockFound, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return err
}
if !headersSynced {
log.Debugf("Aborting IBD because the headers failed to sync")
return nil
}
log.Debugf("Finished syncing headers up to %s", highHash)
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
log.Debugf("Syncing the current pruning point UTXO set")
syncedPruningPointUTXOSetSuccessfully, err := flow.syncPruningPointUTXOSet()
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(block, highestSharedBlockFound)
if err != nil {
return err
}
if !syncedPruningPointUTXOSetSuccessfully {
log.Debugf("Aborting IBD because the pruning point UTXO set failed to sync")
if !shouldSync {
return nil
}
log.Debugf("Finished syncing the current pruning point UTXO set")
log.Debugf("Downloading block bodies up to %s", highHash)
if shouldDownloadHeadersProof {
log.Infof("Starting IBD with headers proof")
err := flow.ibdWithHeadersProof(highHash)
if err != nil {
return err
}
} else {
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced {
isGenesisVirtualSelectedParent, err := flow.isGenesisVirtualSelectedParent()
if err != nil {
return err
}
if isGenesisVirtualSelectedParent {
log.Infof("Cannot IBD to %s because it won't change the pruning point. The node needs to IBD "+
"to the recent pruning point before normal operation can resume.", highHash)
return nil
}
}
err = flow.syncPruningPointFutureHeaders(flow.Domain().Consensus(), highestSharedBlockHash, highHash)
if err != nil {
return err
}
}
err = flow.syncMissingBlockBodies(highHash)
if err != nil {
return err
}
log.Debugf("Finished downloading block bodies up to %s", highHash)
log.Debugf("Finished syncing blocks up to %s", highHash)
isFinishedSuccessfully = true
return nil
}
// syncHeaders attempts to sync headers from the peer. This method may fail
// because the peer and us have conflicting pruning points. In that case we
// return (false, nil) so that we may stop IBD gracefully.
func (flow *handleRelayInvsFlow) syncHeaders(highHash *externalapi.DomainHash) (bool, error) {
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, highestSharedBlockFound, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return false, err
func (flow *handleRelayInvsFlow) logIBDFinished(isFinishedSuccessfully bool) {
successString := "successfully"
if !isFinishedSuccessfully {
successString = "(interrupted)"
}
if !highestSharedBlockFound {
return false, nil
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
err = flow.downloadHeaders(highestSharedBlockHash, highHash)
if err != nil {
return false, err
}
// If the highHash has not been received, the peer is misbehaving
highHashBlockInfo, err := flow.Domain().Consensus().GetBlockInfo(highHash)
if err != nil {
return false, err
}
if !highHashBlockInfo.Exists {
return false, protocolerrors.Errorf(true, "did not receive "+
"highHash header %s from peer %s during header download", highHash, flow.peer)
}
log.Debugf("Headers downloaded from peer %s", flow.peer)
return true, nil
log.Infof("IBD finished %s", successString)
}
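TrySetIBDRunning and the deferred UnsetIBDRunning form a single-flight guard: only one peer may drive IBD at a time, and logIBDFinished now reports whether that attempt completed or was interrupted. A minimal sketch of such a guard using a compare-and-swap (hypothetical; the real flow context also records which peer owns IBD, and this assumes "sync/atomic"):

	type ibdGuard struct {
		running uint32
	}

	// trySet succeeds for exactly one caller until unset is called.
	func (g *ibdGuard) trySet() bool {
		return atomic.CompareAndSwapUint32(&g.running, 0, 1)
	}

	func (g *ibdGuard) unset() {
		atomic.StoreUint32(&g.running, 0)
	}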
// findHighestSharedBlock attempts to find the highest shared block between the peer
@@ -208,20 +209,22 @@ func (flow *handleRelayInvsFlow) fetchHighestHash(
}
}
func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externalapi.DomainHash,
func (flow *handleRelayInvsFlow) syncPruningPointFutureHeaders(consensus externalapi.Consensus, highestSharedBlockHash *externalapi.DomainHash,
highHash *externalapi.DomainHash) error {
log.Infof("Downloading headers from %s", flow.peer)
err := flow.sendRequestHeaders(highestSharedBlockHash, highHash)
if err != nil {
return err
}
// Keep a short queue of blockHeadersMessages so that there's
// Keep a short queue of BlockHeadersMessages so that there's
// never a moment when the node is not validating and inserting
// headers
blockHeadersMessageChan := make(chan *appmessage.BlockHeadersMessage, 2)
errChan := make(chan error)
spawn("handleRelayInvsFlow-downloadHeaders", func() {
spawn("handleRelayInvsFlow-syncPruningPointFutureHeaders", func() {
for {
blockHeadersMessage, doneIBD, err := flow.receiveHeaders()
if err != nil {
@@ -245,12 +248,21 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
for {
select {
case blockHeadersMessage, ok := <-blockHeadersMessageChan:
case ibdBlocksMessage, ok := <-blockHeadersMessageChan:
if !ok {
// If the highHash has not been received, the peer is misbehaving
highHashBlockInfo, err := consensus.GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashBlockInfo.Exists {
return protocolerrors.Errorf(true, "did not receive "+
"highHash block %s from peer %s during block download", highHash, flow.peer)
}
return nil
}
for _, header := range blockHeadersMessage.BlockHeaders {
err = flow.processHeader(header)
for _, header := range ibdBlocksMessage.BlockHeaders {
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
@@ -268,7 +280,7 @@ func (flow *handleRelayInvsFlow) sendRequestHeaders(highestSharedBlockHash *exte
return flow.outgoingRoute.Enqueue(msgGetBlockInvs)
}
func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneIBD bool, err error) {
func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneHeaders bool, err error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, false, err
@@ -281,11 +293,14 @@ func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.Block
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s", appmessage.CmdHeader, appmessage.CmdDoneHeaders, message.Command())
"expected: %s or %s, got: %s",
appmessage.CmdBlockHeaders,
appmessage.CmdDoneHeaders,
message.Command())
}
}
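The two-slot buffered channel described above is what overlaps network reads with validation: the spawned goroutine keeps receiving the next BlockHeadersMessage while the main loop is still inserting the previous one. A stripped-down sketch of the pattern, with hypothetical receive and process functions and error handling elided:

	func downloadAndProcessHeaders() {
		// A small buffer keeps both sides busy without hoarding memory.
		headersChan := make(chan *appmessage.BlockHeadersMessage, 2)
		go func() {
			defer close(headersChan)
			for {
				message, done := receive() // hypothetical: next headers message from the peer
				if done {
					return
				}
				headersChan <- message
			}
		}()
		for message := range headersChan {
			process(message) // hypothetical: validate and insert each header
		}
	}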
func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlockHeader) error {
func (flow *handleRelayInvsFlow) processHeader(consensus externalapi.Consensus, msgBlockHeader *appmessage.MsgBlockHeader) error {
header := appmessage.BlockHeaderToDomainBlockHeader(msgBlockHeader)
block := &externalapi.DomainBlock{
Header: header,
@@ -293,7 +308,7 @@ func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlo
}
blockHash := consensushashing.BlockHash(block)
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(blockHash)
blockInfo, err := consensus.GetBlockInfo(blockHash)
if err != nil {
return err
}
@@ -301,7 +316,7 @@ func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlo
log.Debugf("Block header %s is already in the DAG. Skipping...", blockHash)
return nil
}
_, err = flow.Domain().Consensus().ValidateAndInsertBlock(block)
_, err = consensus.ValidateAndInsertBlock(block, false)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
@@ -318,129 +333,42 @@ func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlo
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet() (bool, error) {
log.Debugf("Checking if a new pruning point is available")
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointHashMessage())
func (flow *handleRelayInvsFlow) validatePruningPointFutureHeaderTimestamps() error {
headerSelectedTipHash, err := flow.Domain().StagingConsensus().GetHeadersSelectedTip()
if err != nil {
return false, err
return err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
headerSelectedTipHeader, err := flow.Domain().StagingConsensus().GetBlockHeader(headerSelectedTipHash)
if err != nil {
return false, err
}
msgPruningPointHash, ok := message.(*appmessage.MsgPruningPointHashMessage)
if !ok {
return false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPointHash, message.Command())
return err
}
headerSelectedTipTimestamp := headerSelectedTipHeader.TimeInMilliseconds()
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(msgPruningPointHash.Hash)
currentSelectedTipHash, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return false, err
return err
}
if !blockInfo.Exists {
return false, errors.Errorf("The pruning point header is missing")
}
if blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
log.Debugf("Already has the block data of the new suggested pruning point %s", msgPruningPointHash.Hash)
return true, nil
}
log.Infof("Checking if the suggested pruning point %s is compatible to the node DAG", msgPruningPointHash.Hash)
isValid, err := flow.Domain().Consensus().IsValidPruningPoint(msgPruningPointHash.Hash)
currentSelectedTipHeader, err := flow.Domain().Consensus().GetBlockHeader(currentSelectedTipHash)
if err != nil {
return false, err
return err
}
currentSelectedTipTimestamp := currentSelectedTipHeader.TimeInMilliseconds()
if headerSelectedTipTimestamp < currentSelectedTipTimestamp {
return protocolerrors.Errorf(false, "the timestamp of the candidate selected "+
"tip is smaller than the current selected tip")
}
if !isValid {
log.Infof("The suggested pruning point %s is incompatible to this node DAG, so stopping IBD with this"+
" peer", msgPruningPointHash.Hash)
return false, nil
minTimestampDifferenceInMilliseconds := (10 * time.Minute).Milliseconds()
if headerSelectedTipTimestamp-currentSelectedTipTimestamp < minTimestampDifferenceInMilliseconds {
return protocolerrors.Errorf(false, "difference between the timestamps of "+
"the current selected tip and the candidate selected tip is too small. Aborting IBD...")
}
log.Info("Fetching the pruning point UTXO set")
succeed, err := flow.fetchMissingUTXOSet(msgPruningPointHash.Hash)
if err != nil {
return false, err
}
if !succeed {
log.Infof("Couldn't successfully fetch the pruning point UTXO set. Stopping IBD.")
return false, nil
}
log.Info("Fetched the new pruning point UTXO set")
return true, nil
}
func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(pruningPointHash *externalapi.DomainHash) (succeed bool, err error) {
defer func() {
err := flow.Domain().Consensus().ClearImportedPruningPointData()
if err != nil {
panic(fmt.Sprintf("failed to clear imported pruning point data: %s", err))
}
}()
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointUTXOSetAndBlock(pruningPointHash))
if err != nil {
return false, err
}
block, err := flow.receivePruningPointBlock()
if err != nil {
return false, err
}
receivedAll, err := flow.receiveAndInsertPruningPointUTXOSet(pruningPointHash)
if err != nil {
return false, err
}
if !receivedAll {
return false, nil
}
err = flow.Domain().Consensus().ValidateAndInsertImportedPruningPoint(block)
if err != nil {
// TODO: Find a better way to deal with finality conflicts.
if errors.Is(err, ruleerrors.ErrSuggestedPruningViolatesFinality) {
return false, nil
}
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return false, err
}
return true, nil
}
func (flow *handleRelayInvsFlow) receivePruningPointBlock() (*externalapi.DomainBlock, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receivePruningPointBlock")
defer onEnd()
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
ibdBlockMessage, ok := message.(*appmessage.MsgIBDBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlock, message.Command())
}
block := appmessage.MsgBlockToDomainBlock(ibdBlockMessage.MsgBlock)
log.Debugf("Received pruning point block %s", consensushashing.BlockHash(block))
return block, nil
return nil
}
func (flow *handleRelayInvsFlow) receiveAndInsertPruningPointUTXOSet(
pruningPointHash *externalapi.DomainHash) (bool, error) {
consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (bool, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receiveAndInsertPruningPointUTXOSet")
defer onEnd()
@@ -459,7 +387,7 @@ func (flow *handleRelayInvsFlow) receiveAndInsertPruningPointUTXOSet(
domainOutpointAndUTXOEntryPairs :=
appmessage.OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(message.OutpointAndUTXOEntryPairs)
err := flow.Domain().Consensus().AppendImportedPruningPointUTXOs(domainOutpointAndUTXOEntryPairs)
err := consensus.AppendImportedPruningPointUTXOs(domainOutpointAndUTXOEntryPairs)
if err != nil {
return false, err
}
@@ -543,7 +471,7 @@ func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.Do
return err
}
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
virtualChangeSet, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, false)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping IBD Block %s as it has already been added to the DAG", blockHash)
@@ -551,14 +479,36 @@ func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.Do
}
return protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "invalid block %s", blockHash)
}
err = flow.OnNewBlock(block, blockInsertionResult)
err = flow.OnNewBlock(block, virtualChangeSet)
if err != nil {
return err
}
}
}
return nil
return flow.resolveVirtual()
}
func (flow *handleRelayInvsFlow) resolveVirtual() error {
for i := 0; ; i++ {
if i%10 == 0 {
log.Infof("Resolving virtual. This may take some time...")
}
virtualChangeSet, isCompletelyResolved, err := flow.Domain().Consensus().ResolveVirtual()
if err != nil {
return err
}
err = flow.OnVirtualChange(virtualChangeSet)
if err != nil {
return err
}
if isCompletelyResolved {
log.Infof("Resolved virtual")
return nil
}
}
}
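Resolving the virtual in bounded steps rather than in one blocking call keeps the node responsive after a long sync: every ResolveVirtual iteration yields a virtualChangeSet that is immediately propagated through OnVirtualChange, and the log line emitted every ten iterations signals progress during what can be a lengthy catch-up.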
// dequeueIncomingMessageAndSkipInvs is a convenience method to be used during


@@ -0,0 +1,364 @@
package blockrelay
import (
"fmt"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
)
func (flow *handleRelayInvsFlow) ibdWithHeadersProof(highHash *externalapi.DomainHash) error {
err := flow.Domain().InitStagingConsensus()
if err != nil {
return err
}
err = flow.downloadHeadersAndPruningUTXOSet(highHash)
if err != nil {
if !flow.IsRecoverableError(err) {
return err
}
deleteStagingConsensusErr := flow.Domain().DeleteStagingConsensus()
if deleteStagingConsensusErr != nil {
return deleteStagingConsensusErr
}
return err
}
err = flow.Domain().CommitStagingConsensus()
if err != nil {
return err
}
return nil
}
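Running the headers-proof sync against a staging consensus is what makes this path safe: a failed or recoverably erroring sync only ever touches the staging instance, which is deleted on the way out, and the active consensus is replaced only by CommitStagingConsensus once everything has validated.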
func (flow *handleRelayInvsFlow) shouldSyncAndShouldDownloadHeadersProof(highBlock *externalapi.DomainBlock,
highestSharedBlockFound bool) (shouldDownload, shouldSync bool, err error) {
if !highestSharedBlockFound {
hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock)
if err != nil {
return false, false, err
}
if hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore {
return true, true, nil
}
return false, false, nil
}
return false, true, nil
}
func (flow *handleRelayInvsFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock *externalapi.DomainBlock) (bool, error) {
headersSelectedTip, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return false, err
}
headersSelectedTipInfo, err := flow.Domain().Consensus().GetBlockInfo(headersSelectedTip)
if err != nil {
return false, err
}
if highBlock.Header.BlueScore() < headersSelectedTipInfo.BlueScore+flow.Config().NetParams().PruningDepth() {
return false, nil
}
return highBlock.Header.BlueWork().Cmp(headersSelectedTipInfo.BlueWork) > 0, nil
}
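Put differently, a high block whose chain we don't share triggers a headers-proof IBD only when it is at least a pruning depth deeper than our headers selected tip and carries strictly more blue work. A worked sketch with hypothetical numbers (the real depth comes from NetParams().PruningDepth()):

	package main

	import (
		"fmt"
		"math/big"
	)

	func main() {
		// All numbers here are hypothetical illustrations.
		selectedTipBlueScore := uint64(1_000_000)
		pruningDepth := uint64(200_000)
		highBlockBlueScore := uint64(1_250_000)
		selectedTipBlueWork := big.NewInt(900_000)
		highBlockBlueWork := big.NewInt(950_000)

		// Mirrors the check above: deep enough past the pruning depth AND
		// strictly more accumulated blue work than our selected tip.
		qualifies := highBlockBlueScore >= selectedTipBlueScore+pruningDepth &&
			highBlockBlueWork.Cmp(selectedTipBlueWork) > 0
		fmt.Println("download headers proof:", qualifies) // prints: true
	}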
func (flow *handleRelayInvsFlow) syncAndValidatePruningPointProof() (*externalapi.DomainHash, error) {
log.Infof("Downloading the pruning point proof from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointProof())
if err != nil {
return nil, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
pruningPointProofMessage, ok := message.(*appmessage.MsgPruningPointProof)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPointProof, message.Command())
}
pruningPointProof := appmessage.MsgPruningPointProofToDomainPruningPointProof(pruningPointProofMessage)
err = flow.Domain().Consensus().ValidatePruningPointProof(pruningPointProof)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
return nil, protocolerrors.Wrapf(true, err, "pruning point proof validation failed")
}
return nil, err
}
err = flow.Domain().StagingConsensus().ApplyPruningPointProof(pruningPointProof)
if err != nil {
return nil, err
}
return consensushashing.HeaderHash(pruningPointProof.Headers[0][len(pruningPointProof.Headers[0])-1]), nil
}
func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(highHash *externalapi.DomainHash) error {
proofPruningPoint, err := flow.syncAndValidatePruningPointProof()
if err != nil {
return err
}
err = flow.syncPruningPointsAndPruningPointAnticone(proofPruningPoint)
if err != nil {
return err
}
// TODO: Remove this condition once there's more proper way to check finality violation
// in the headers proof.
if proofPruningPoint.Equal(flow.Config().NetParams().GenesisHash) {
return protocolerrors.Errorf(true, "the genesis pruning point violates finality")
}
err = flow.syncPruningPointFutureHeaders(flow.Domain().StagingConsensus(), proofPruningPoint, highHash)
if err != nil {
return err
}
log.Infof("Headers downloaded from peer %s", flow.peer)
highHashInfo, err := flow.Domain().StagingConsensus().GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashInfo.Exists {
return protocolerrors.Errorf(true, "the triggering IBD block was not sent")
}
err = flow.validatePruningPointFutureHeaderTimestamps()
if err != nil {
return err
}
log.Debugf("Syncing the current pruning point UTXO set")
syncedPruningPointUTXOSetSuccessfully, err := flow.syncPruningPointUTXOSet(flow.Domain().StagingConsensus(), proofPruningPoint)
if err != nil {
return err
}
if !syncedPruningPointUTXOSetSuccessfully {
log.Debugf("Aborting IBD because the pruning point UTXO set failed to sync")
return nil
}
log.Debugf("Finished syncing the current pruning point UTXO set")
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointsAndPruningPointAnticone(proofPruningPoint *externalapi.DomainHash) error {
log.Infof("Downloading the past pruning points and the pruning point anticone from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointAndItsAnticone())
if err != nil {
return err
}
err = flow.validateAndInsertPruningPoints(proofPruningPoint)
if err != nil {
return err
}
pruningPointWithMetaData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
}
if done {
return protocolerrors.Errorf(true, "got `done` message before receiving the pruning point")
}
if !pruningPointWithMetaData.Block.Header.BlockHash().Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "first block with trusted data is not the pruning point")
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), pruningPointWithMetaData)
if err != nil {
return err
}
for {
blockWithTrustedData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
}
if done {
break
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), blockWithTrustedData)
if err != nil {
return err
}
}
log.Infof("Finished downloading pruning point and its anticone from %s", flow.peer)
return nil
}
func (flow *handleRelayInvsFlow) processBlockWithTrustedData(
consensus externalapi.Consensus, block *appmessage.MsgBlockWithTrustedData) error {
_, err := consensus.ValidateAndInsertBlockWithTrustedData(appmessage.BlockWithTrustedDataToDomainBlockWithTrustedData(block), false)
return err
}
func (flow *handleRelayInvsFlow) receiveBlockWithTrustedData() (*appmessage.MsgBlockWithTrustedData, bool, error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch downCastedMessage := message.(type) {
case *appmessage.MsgBlockWithTrustedData:
return downCastedMessage, false, nil
case *appmessage.MsgDoneBlocksWithTrustedData:
return nil, true, nil
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s",
(&appmessage.MsgBlockWithTrustedData{}).Command(),
(&appmessage.MsgDoneBlocksWithTrustedData{}).Command(),
downCastedMessage.Command())
}
}
func (flow *handleRelayInvsFlow) receivePruningPoints() (*appmessage.MsgPruningPoints, error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
msgPruningPoints, ok := message.(*appmessage.MsgPruningPoints)
if !ok {
return nil,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPoints, message.Command())
}
return msgPruningPoints, nil
}
func (flow *handleRelayInvsFlow) validateAndInsertPruningPoints(proofPruningPoint *externalapi.DomainHash) error {
currentPruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return err
}
if currentPruningPoint.Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "the proposed pruning point is the same as the current pruning point")
}
pruningPoints, err := flow.receivePruningPoints()
if err != nil {
return err
}
headers := make([]externalapi.BlockHeader, len(pruningPoints.Headers))
for i, header := range pruningPoints.Headers {
headers[i] = appmessage.BlockHeaderToDomainBlockHeader(header)
}
arePruningPointsViolatingFinality, err := flow.Domain().Consensus().ArePruningPointsViolatingFinality(headers)
if err != nil {
return err
}
if arePruningPointsViolatingFinality {
// TODO: Find a better way to deal with finality conflicts.
return protocolerrors.Errorf(false, "pruning points are violating finality")
}
lastPruningPoint := consensushashing.HeaderHash(headers[len(headers)-1])
if !lastPruningPoint.Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "the proof pruning point is not equal to the last pruning "+
"point in the list")
}
err = flow.Domain().StagingConsensus().ImportPruningPoints(headers)
if err != nil {
return err
}
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet(consensus externalapi.Consensus,
pruningPoint *externalapi.DomainHash) (bool, error) {
log.Infof("Checking if the suggested pruning point %s is compatible to the node DAG", pruningPoint)
isValid, err := flow.Domain().StagingConsensus().IsValidPruningPoint(pruningPoint)
if err != nil {
return false, err
}
if !isValid {
return false, protocolerrors.Errorf(true, "invalid pruning point %s", pruningPoint)
}
log.Info("Fetching the pruning point UTXO set")
isSuccessful, err := flow.fetchMissingUTXOSet(consensus, pruningPoint)
if err != nil {
return false, err
}
if !isSuccessful {
log.Infof("Couldn't successfully fetch the pruning point UTXO set. Stopping IBD.")
return false, nil
}
log.Info("Fetched the new pruning point UTXO set")
return true, nil
}
func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (succeed bool, err error) {
defer func() {
err := flow.Domain().StagingConsensus().ClearImportedPruningPointData()
if err != nil {
panic(fmt.Sprintf("failed to clear imported pruning point data: %s", err))
}
}()
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointUTXOSet(pruningPointHash))
if err != nil {
return false, err
}
receivedAll, err := flow.receiveAndInsertPruningPointUTXOSet(consensus, pruningPointHash)
if err != nil {
return false, err
}
if !receivedAll {
return false, nil
}
err = flow.Domain().StagingConsensus().ValidateAndInsertImportedPruningPoint(pruningPointHash)
if err != nil {
// TODO: Find a better way to deal with finality conflicts.
if errors.Is(err, ruleerrors.ErrSuggestedPruningViolatesFinality) {
return false, nil
}
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return false, err
}
return true, nil
}
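Note the defer at the top of fetchMissingUTXOSet: imported pruning point data is cleared unconditionally on exit, so a partial UTXO import never survives a failed attempt, and a failure to clear is treated as unrecoverable and panics.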


@@ -4,12 +4,14 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// SendVirtualSelectedParentInvContext is the interface for the context needed for the SendVirtualSelectedParentInv flow.
type SendVirtualSelectedParentInvContext interface {
Domain() domain.Domain
Config() *config.Config
}
// SendVirtualSelectedParentInv sends a peer the selected parent hash of the virtual
@@ -21,6 +23,11 @@ func SendVirtualSelectedParentInv(context SendVirtualSelectedParentInvContext,
return err
}
if virtualSelectedParent.Equal(context.Config().NetParams().GenesisHash) {
log.Debugf("Skipping sending the virtual selected parent hash to peer %s because it's the genesis", peer)
return nil
}
log.Debugf("Sending virtual selected parent hash %s to peer %s", virtualSelectedParent, peer)
virtualSelectedParentInv := appmessage.NewMsgInvBlock(virtualSelectedParent)


@@ -33,10 +33,5 @@ func (flow *handleRejectsFlow) start() error {
}
rejectMessage := message.(*appmessage.MsgReject)
const maxReasonLength = 255
if len(rejectMessage.Reason) > maxReasonLength {
return protocolerrors.Errorf(false, "got reject message longer than %d", maxReasonLength)
}
return protocolerrors.Errorf(false, "got reject message: `%s`", rejectMessage.Reason)
}

File diff suppressed because it is too large


@@ -1,15 +1,16 @@
package testing
import (
"testing"
"time"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"testing"
"time"
)
type fakeReceiveAddressesContext struct{}
@@ -19,9 +20,9 @@ func (f fakeReceiveAddressesContext) AddressManager() *addressmanager.AddressMan
}
func TestReceiveAddressesErrors(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
incomingRoute := router.NewRoute()
outgoingRoute := router.NewRoute()
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
incomingRoute := router.NewRoute("incoming")
outgoingRoute := router.NewRoute("outgoing")
peer := peerpkg.New(nil)
errChan := make(chan error)
go func() {


@@ -19,8 +19,8 @@ type TransactionsRelayContext interface {
NetAdapter() *netadapter.NetAdapter
Domain() domain.Domain
SharedRequestedTransactions() *SharedRequestedTransactions
Broadcast(message appmessage.Message) error
OnTransactionAddedToMempool()
EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error
}
type handleRelayedTransactionsFlow struct {
@@ -119,8 +119,7 @@ func (flow *handleRelayedTransactionsFlow) readInv() (*appmessage.MsgInvTransact
}
func (flow *handleRelayedTransactionsFlow) broadcastAcceptedTransactions(acceptedTxIDs []*externalapi.DomainTransactionID) error {
inv := appmessage.NewMsgInvTransaction(acceptedTxIDs)
return flow.Broadcast(inv)
return flow.EnqueueTransactionIDsForPropagation(acceptedTxIDs)
}
// readMsgTxOrNotFound returns the next msgTx or msgTransactionNotFound in incomingRoute,
@@ -173,17 +172,18 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
expectedID, txID)
}
err = flow.Domain().MiningManager().ValidateAndInsertTransaction(tx, true)
acceptedTransactions, err :=
flow.Domain().MiningManager().ValidateAndInsertTransaction(tx, false, true)
if err != nil {
ruleErr := &mempool.RuleError{}
if !errors.As(err, ruleErr) {
return errors.Wrapf(err, "failed to process transaction %s", txID)
}
shouldBan := true
shouldBan := false
if txRuleErr := (&mempool.TxRuleError{}); errors.As(ruleErr.Err, txRuleErr) {
if txRuleErr.RejectCode != mempool.RejectInvalid {
shouldBan = false
if txRuleErr.RejectCode == mempool.RejectInvalid {
shouldBan = true
}
}
@@ -193,7 +193,7 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
return protocolerrors.Errorf(true, "rejected transaction %s: %s", txID, ruleErr)
}
err = flow.broadcastAcceptedTransactions([]*externalapi.DomainTransactionID{txID})
err = flow.broadcastAcceptedTransactions(consensushashing.TransactionIDs(acceptedTransactions))
if err != nil {
return err
}
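Note the inverted default in the hunk above: previously any mempool rule violation banned the peer unless its reject code said otherwise; now nothing bans except an explicit RejectInvalid. Condensed, using the same types:

	shouldBan := false
	if txRuleErr := (&mempool.TxRuleError{}); errors.As(ruleErr.Err, txRuleErr) {
		shouldBan = txRuleErr.RejectCode == mempool.RejectInvalid
	}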


@@ -0,0 +1,191 @@
package transactionrelay_test
import (
"errors"
"strings"
"testing"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
type mocTransactionsRelayContext struct {
netAdapter *netadapter.NetAdapter
domain domain.Domain
sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
}
func (m *mocTransactionsRelayContext) NetAdapter() *netadapter.NetAdapter {
return m.netAdapter
}
func (m *mocTransactionsRelayContext) Domain() domain.Domain {
return m.domain
}
func (m *mocTransactionsRelayContext) SharedRequestedTransactions() *transactionrelay.SharedRequestedTransactions {
return m.sharedRequestedTransactions
}
func (m *mocTransactionsRelayContext) EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error {
return nil
}
func (m *mocTransactionsRelayContext) OnTransactionAddedToMempool() {
}
// TestHandleRelayedTransactionsNotFound tests the flow of HandleRelayedTransactions when the peer doesn't
// have the requested transactions in the mempool.
func TestHandleRelayedTransactionsNotFound(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestHandleRelayedTransactionsNotFound")
if err != nil {
t.Fatalf("Error setting up test consensus: %+v", err)
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to create a NetAdapter: %v", err)
}
domainInstance, err := domain.New(consensusConfig, mempool.DefaultConfig(&consensusConfig.Params), tc.Database())
if err != nil {
t.Fatalf("Failed to set up a domain instance: %v", err)
}
context := &mocTransactionsRelayContext{
netAdapter: adapter,
domain: domainInstance,
sharedRequestedTransactions: sharedRequestedTransactions,
}
incomingRoute := router.NewRoute("incoming")
defer incomingRoute.Close()
peerIncomingRoute := router.NewRoute("outgoing")
defer peerIncomingRoute.Close()
txID1 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
txID2 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02})
txIDs := []*externalapi.DomainTransactionID{txID1, txID2}
invMessage := appmessage.NewMsgInvTransaction(txIDs)
err = incomingRoute.Enqueue(invMessage)
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
// The goroutine is representing the peer's actions.
spawn("peerResponseToTheTransactionsRequest", func() {
msg, err := peerIncomingRoute.Dequeue()
if err != nil {
t.Fatalf("Dequeue: %v", err)
}
inv := msg.(*appmessage.MsgRequestTransactions)
if len(txIDs) != len(inv.IDs) {
t.Fatalf("TestHandleRelayedTransactionsNotFound: expected %d transaction IDs, but got %d", len(txIDs), len(inv.IDs))
}
for i, id := range inv.IDs {
if txIDs[i].String() != id.String() {
t.Fatalf("TestHandleRelayedTransactions: expected equal txID: expected %s, but got %s", txIDs[i].String(), id.String())
}
err = incomingRoute.Enqueue(appmessage.NewMsgTransactionNotFound(txIDs[i]))
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
}
// Insert an unexpected message type to stop the infinite loop.
err = incomingRoute.Enqueue(&appmessage.MsgAddresses{})
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
})
err = transactionrelay.HandleRelayedTransactions(context, incomingRoute, peerIncomingRoute)
// Since we inserted an unexpected message type to stop the infinite loop,
// we expect the error to stem from this specific message and to count as a
// protocol error.
if protocolErr := (protocolerrors.ProtocolError{}); err == nil || !errors.As(err, &protocolErr) {
t.Fatalf("Expected a protocol error")
} else {
if !protocolErr.ShouldBan {
t.Fatalf("Exepcted shouldBan true, but got false.")
}
if !strings.Contains(err.Error(), "unexpected Addresses [code 3] message in the block relay flow while expecting an inv message") {
t.Fatalf("Unexpected error: expected: an error due to existence of an Addresses message "+
"in the block relay flow, but got: %v", protocolErr.Cause)
}
}
})
}
// TestOnClosedIncomingRoute verifies that an appropriate error message will be returned when
// trying to dequeue a message from a closed route.
func TestOnClosedIncomingRoute(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestOnClosedIncomingRoute")
if err != nil {
t.Fatalf("Error setting up test consensus: %+v", err)
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to creat a NetAdapter : %v", err)
}
domainInstance, err := domain.New(consensusConfig, mempool.DefaultConfig(&consensusConfig.Params), tc.Database())
if err != nil {
t.Fatalf("Failed to set up a domain instance: %v", err)
}
context := &mocTransactionsRelayContext{
netAdapter: adapter,
domain: domainInstance,
sharedRequestedTransactions: sharedRequestedTransactions,
}
incomingRoute := router.NewRoute("incoming")
outgoingRoute := router.NewRoute("outgoing")
defer outgoingRoute.Close()
txID := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
txIDs := []*externalapi.DomainTransactionID{txID}
err = incomingRoute.Enqueue(&appmessage.MsgInvTransaction{TxIDs: txIDs})
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
incomingRoute.Close()
err = transactionrelay.HandleRelayedTransactions(context, incomingRoute, outgoingRoute)
if err == nil || !errors.Is(err, router.ErrRouteClosed) {
t.Fatalf("Unexpected error: expected: %v, got : %v", router.ErrRouteClosed, err)
}
})
}


@@ -0,0 +1,90 @@
package transactionrelay_test
import (
"testing"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util/panics"
"github.com/pkg/errors"
)
// TestHandleRequestedTransactionsNotFound tests the flow of HandleRequestedTransactions
// when the requested transactions aren't found in the mempool.
func TestHandleRequestedTransactionsNotFound(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestHandleRequestedTransactionsNotFound")
if err != nil {
t.Fatalf("Error setting up test Consensus: %+v", err)
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to create a NetAdapter: %v", err)
}
domainInstance, err := domain.New(consensusConfig, mempool.DefaultConfig(&consensusConfig.Params), tc.Database())
if err != nil {
t.Fatalf("Failed to set up a domain Instance: %v", err)
}
context := &mocTransactionsRelayContext{
netAdapter: adapter,
domain: domainInstance,
sharedRequestedTransactions: sharedRequestedTransactions,
}
incomingRoute := router.NewRoute("incoming")
outgoingRoute := router.NewRoute("outgoing")
defer outgoingRoute.Close()
txID1 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
txID2 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02})
txIDs := []*externalapi.DomainTransactionID{txID1, txID2}
msg := appmessage.NewMsgRequestTransactions(txIDs)
err = incomingRoute.Enqueue(msg)
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
// The goroutine is representing the peer's actions.
spawn("peerResponseToTheTransactionsMessages", func() {
for i := range txIDs {
msg, err := outgoingRoute.Dequeue()
if err != nil {
t.Fatalf("Dequeue: %s", err)
}
outMsg := msg.(*appmessage.MsgTransactionNotFound)
if txIDs[i].String() != outMsg.ID.String() {
t.Fatalf("TestHandleRequestedTransactionsNotFound: expected equal txID: expected %s, but got %s", txIDs[i].String(), outMsg.ID.String())
}
}
// Close the incomingRoute to stop the infinite loop.
incomingRoute.Close()
})
err = transactionrelay.HandleRequestedTransactions(context, incomingRoute, outgoingRoute)
// Make sure the error is due to the closed route.
if err == nil || !errors.Is(err, router.ErrRouteClosed) {
t.Fatalf("Unexpected error: expected: %v, got : %v", router.ErrRouteClosed, err)
}
})
}


@@ -2,10 +2,11 @@ package protocol
import (
"fmt"
"github.com/pkg/errors"
"sync"
"sync/atomic"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
@@ -61,8 +62,8 @@ func (m *Manager) IBDPeer() *peerpkg.Peer {
}
// AddTransaction adds transaction to the mempool and propagates it.
func (m *Manager) AddTransaction(tx *externalapi.DomainTransaction) error {
return m.context.AddTransaction(tx)
func (m *Manager) AddTransaction(tx *externalapi.DomainTransaction, allowOrphan bool) error {
return m.context.AddTransaction(tx, allowOrphan)
}
// AddBlock adds the given block to the DAG and propagates it.
@@ -83,6 +84,11 @@ func (m *Manager) runFlows(flows []*flow, peer *peerpkg.Peer, errChan <-chan err
return <-errChan
}
// SetOnVirtualChange sets the onVirtualChangeHandler handler
func (m *Manager) SetOnVirtualChange(onVirtualChangeHandler flowcontext.OnVirtualChangeHandler) {
m.context.SetOnVirtualChangeHandler(onVirtualChangeHandler)
}
// SetOnBlockAddedToDAGHandler sets the onBlockAddedToDAG handler
func (m *Manager) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler flowcontext.OnBlockAddedToDAGHandler) {
m.context.SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler)


@@ -15,7 +15,7 @@ import (
// maxProtocolVersion version is the maximum supported protocol
// version this kaspad node supports
const maxProtocolVersion = 1
const maxProtocolVersion = 3
// Peer holds data about a peer.
type Peer struct {


@@ -1,8 +1,6 @@
package protocol
import (
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"sync"
"sync/atomic"
@@ -11,10 +9,12 @@ import (
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/handshake"
"github.com/kaspanet/kaspad/app/protocol/flows/ping"
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
@@ -102,11 +102,11 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection, outgoingRoute *routerpkg.Route) {
if protocolErr := (protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if !m.context.Config().DisableBanning && protocolErr.ShouldBan {
if m.context.Config().EnableBanning && protocolErr.ShouldBan {
log.Warnf("Banning %s (reason: %s)", netConnection, protocolErr.Cause)
err := m.context.ConnectionManager().Ban(netConnection)
if !errors.Is(err, connmanager.ErrCannotBanPermanent) {
if err != nil && !errors.Is(err, connmanager.ErrCannotBanPermanent) {
panic(err)
}
@@ -168,10 +168,13 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
}),
m.registerFlow("HandleRelayInvs", router, []appmessage.MessageCommand{
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator, appmessage.CmdIBDBlock,
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator,
appmessage.CmdDoneHeaders, appmessage.CmdUnexpectedPruningPoint, appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdBlockHeaders, appmessage.CmdPruningPointHash, appmessage.CmdIBDBlockLocatorHighestHash,
appmessage.CmdIBDBlockLocatorHighestHashNotFound, appmessage.CmdDonePruningPointUTXOSetChunks},
appmessage.CmdBlockHeaders, appmessage.CmdIBDBlockLocatorHighestHash, appmessage.CmdBlockWithTrustedData,
appmessage.CmdDoneBlocksWithTrustedData, appmessage.CmdIBDBlockLocatorHighestHashNotFound,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdIBDBlock, appmessage.CmdPruningPoints,
appmessage.CmdPruningPointProof,
},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayInvs(m.context, incomingRoute,
outgoingRoute, peer)
@@ -198,14 +201,6 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
},
),
m.registerFlow("HandleRequestPruningPointUTXOSetAndBlock", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointUTXOSetAndBlock,
appmessage.CmdRequestNextPruningPointUTXOSetChunk}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestPruningPointUTXOSetAndBlock(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleIBDBlockRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
@@ -213,10 +208,18 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
},
),
m.registerFlow("HandlePruningPointHashRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointHash}, isStopping, errChan,
m.registerFlow("HandleRequestPruningPointUTXOSet", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointUTXOSet,
appmessage.CmdRequestNextPruningPointUTXOSetChunk}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointHashRequests(m.context, incomingRoute, outgoingRoute)
return blockrelay.HandleRequestPruningPointUTXOSet(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandlePruningPointAndItsAnticoneRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointAndItsAnticone}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointAndItsAnticoneRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
@@ -226,6 +229,13 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
return blockrelay.HandleIBDBlockLocator(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandlePruningPointProofRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointProof}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointProofRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
}
}
@@ -282,7 +292,7 @@ func (m *Manager) registerRejectsFlow(router *routerpkg.Router, isStopping *uint
func (m *Manager) registerFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
route, err := router.AddIncomingRoute(messageTypes)
route, err := router.AddIncomingRoute(name, messageTypes)
if err != nil {
panic(err)
}
@@ -294,7 +304,7 @@ func (m *Manager) registerFlowWithCapacity(name string, capacity int, router *ro
messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
route, err := router.AddIncomingRouteWithCapacity(capacity, messageTypes)
route, err := router.AddIncomingRouteWithCapacity(name, capacity, messageTypes)
if err != nil {
panic(err)
}
@@ -320,7 +330,7 @@ func (m *Manager) registerFlowForRoute(route *routerpkg.Route, name string, isSt
func (m *Manager) registerOneTimeFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand,
isStopping *uint32, stopChan chan error, initializeFunc flowInitializeFunc) *flow {
route, err := router.AddIncomingRoute(messageTypes)
route, err := router.AddIncomingRoute(name, messageTypes)
if err != nil {
panic(err)
}
@@ -346,12 +356,12 @@ func (m *Manager) registerOneTimeFlow(name string, router *routerpkg.Router, mes
func registerHandshakeRoutes(router *routerpkg.Router) (
receiveVersionRoute *routerpkg.Route, sendVersionRoute *routerpkg.Route) {
receiveVersionRoute, err := router.AddIncomingRoute([]appmessage.MessageCommand{appmessage.CmdVersion})
receiveVersionRoute, err := router.AddIncomingRoute("receiveVersion - incoming", []appmessage.MessageCommand{appmessage.CmdVersion})
if err != nil {
panic(err)
}
sendVersionRoute, err = router.AddIncomingRoute([]appmessage.MessageCommand{appmessage.CmdVerAck})
sendVersionRoute, err = router.AddIncomingRoute("sendVersion - incoming", []appmessage.MessageCommand{appmessage.CmdVerAck})
if err != nil {
panic(err)
}


@@ -48,23 +48,11 @@ func NewManager(
}
// NotifyBlockAddedToDAG notifies the manager that a block has been added to the DAG
func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error {
func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyBlockAddedToDAG")
defer onEnd()
if m.context.Config.UTXOIndex {
err := m.notifyUTXOsChanged(blockInsertionResult)
if err != nil {
return err
}
}
err := m.notifyVirtualSelectedParentBlueScoreChanged()
if err != nil {
return err
}
err = m.notifyVirtualSelectedParentChainChanged(blockInsertionResult)
err := m.NotifyVirtualChange(virtualChangeSet)
if err != nil {
return err
}
@@ -78,6 +66,36 @@ func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockIns
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
}
// NotifyVirtualChange notifies the manager that the virtual block has been changed.
func (m *Manager) NotifyVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualChange")
defer onEnd()
if m.context.Config.UTXOIndex {
err := m.notifyUTXOsChanged(virtualChangeSet)
if err != nil {
return err
}
}
err := m.notifyVirtualSelectedParentBlueScoreChanged()
if err != nil {
return err
}
err = m.notifyVirtualDaaScoreChanged()
if err != nil {
return err
}
err = m.notifyVirtualSelectedParentChainChanged(virtualChangeSet)
if err != nil {
return err
}
return nil
}
// NotifyPruningPointUTXOSetOverride notifies the manager whenever the UTXO index
// resets due to pruning point change via IBD.
func (m *Manager) NotifyPruningPointUTXOSetOverride() error {
@@ -112,11 +130,11 @@ func (m *Manager) NotifyFinalityConflictResolved(finalityBlockHash string) error
return m.context.NotificationManager.NotifyFinalityConflictResolved(notification)
}
func (m *Manager) notifyUTXOsChanged(blockInsertionResult *externalapi.BlockInsertionResult) error {
func (m *Manager) notifyUTXOsChanged(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyUTXOsChanged")
defer onEnd()
utxoIndexChanges, err := m.context.UTXOIndex.Update(blockInsertionResult)
utxoIndexChanges, err := m.context.UTXOIndex.Update(virtualChangeSet)
if err != nil {
return err
}
@@ -153,12 +171,25 @@ func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged() error {
return m.context.NotificationManager.NotifyVirtualSelectedParentBlueScoreChanged(notification)
}
func (m *Manager) notifyVirtualSelectedParentChainChanged(blockInsertionResult *externalapi.BlockInsertionResult) error {
func (m *Manager) notifyVirtualDaaScoreChanged() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualDaaScoreChanged")
defer onEnd()
virtualDAAScore, err := m.context.Domain.Consensus().GetVirtualDAAScore()
if err != nil {
return err
}
notification := appmessage.NewVirtualDaaScoreChangedNotificationMessage(virtualDAAScore)
return m.context.NotificationManager.NotifyVirtualDaaScoreChanged(notification)
}
func (m *Manager) notifyVirtualSelectedParentChainChanged(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentChainChanged")
defer onEnd()
notification, err := m.context.ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
blockInsertionResult.VirtualSelectedParentChainChanges)
virtualChangeSet.VirtualSelectedParentChainChanges)
if err != nil {
return err
}


@@ -44,6 +44,8 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdGetInfoRequestMessage: rpchandlers.HandleGetInfo,
appmessage.CmdNotifyPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleNotifyPruningPointUTXOSetOverrideRequest,
appmessage.CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleStopNotifyingPruningPointUTXOSetOverrideRequest,
appmessage.CmdEstimateNetworkHashesPerSecondRequestMessage: rpchandlers.HandleEstimateNetworkHashesPerSecond,
appmessage.CmdNotifyVirtualDaaScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualDaaScoreChanged,
}
func (m *Manager) routerInitializer(router *router.Router, netConnection *netadapter.NetConnection) {
@@ -51,7 +53,7 @@ func (m *Manager) routerInitializer(router *router.Router, netConnection *netada
for messageType := range handlers {
messageTypes = append(messageTypes, messageType)
}
incomingRoute, err := router.AddIncomingRoute(messageTypes)
incomingRoute, err := router.AddIncomingRoute("rpc router", messageTypes)
if err != nil {
panic(err)
}


@@ -3,7 +3,6 @@ package rpccontext
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
// ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage converts
@@ -16,29 +15,9 @@ func (ctx *Context) ConvertVirtualSelectedParentChainChangesToChainChangedNotifi
removedChainBlockHashes[i] = removed.String()
}
addedChainBlocks := make([]*appmessage.ChainBlock, len(selectedParentChainChanges.Added))
addedChainBlocks := make([]string, len(selectedParentChainChanges.Added))
for i, added := range selectedParentChainChanges.Added {
acceptanceData, err := ctx.Domain.Consensus().GetBlockAcceptanceData(added)
if err != nil {
return nil, err
}
acceptedBlocks := make([]*appmessage.AcceptedBlock, len(acceptanceData))
for j, acceptedBlock := range acceptanceData {
acceptedTransactionIDs := make([]string, len(acceptedBlock.TransactionAcceptanceData))
for k, transaction := range acceptedBlock.TransactionAcceptanceData {
transactionID := consensushashing.TransactionID(transaction.Transaction)
acceptedTransactionIDs[k] = transactionID.String()
}
acceptedBlocks[j] = &appmessage.AcceptedBlock{
Hash: acceptedBlock.BlockHash.String(),
AcceptedTransactionIDs: acceptedTransactionIDs,
}
}
addedChainBlocks[i] = &appmessage.ChainBlock{
Hash: added.String(),
AcceptedBlocks: acceptedBlocks,
}
addedChainBlocks[i] = added.String()
}
return appmessage.NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes, addedChainBlocks), nil


@@ -30,6 +30,7 @@ type NotificationListener struct {
propagateFinalityConflictResolvedNotifications bool
propagateUTXOsChangedNotifications bool
propagateVirtualSelectedParentBlueScoreChangedNotifications bool
propagateVirtualDaaScoreChangedNotifications bool
propagatePruningPointUTXOSetOverrideNotifications bool
propagateUTXOsChangedNotificationAddresses map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress
@@ -181,6 +182,25 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentBlueScoreChanged(
return nil
}
// NotifyVirtualDaaScoreChanged notifies the notification manager that the DAG's
// virtual DAA score has changed
func (nm *NotificationManager) NotifyVirtualDaaScoreChanged(
notification *appmessage.VirtualDaaScoreChangedNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagateVirtualDaaScoreChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
return err
}
}
}
return nil
}
// NotifyPruningPointUTXOSetOverride notifies the notification manager that the UTXO index
// was reset due to a pruning point change via IBD.
func (nm *NotificationManager) NotifyPruningPointUTXOSetOverride() error {
@@ -308,6 +328,12 @@ func (nl *NotificationListener) PropagateVirtualSelectedParentBlueScoreChangedNo
nl.propagateVirtualSelectedParentBlueScoreChangedNotifications = true
}
// PropagateVirtualDaaScoreChangedNotifications instructs the listener to send
// virtual DAA score notifications to the remote listener
func (nl *NotificationListener) PropagateVirtualDaaScoreChangedNotifications() {
nl.propagateVirtualDaaScoreChangedNotifications = true
}
// PropagatePruningPointUTXOSetOverrideNotifications instructs the listener to send pruning point UTXO set override notifications
// to the remote listener.
func (nl *NotificationListener) PropagatePruningPointUTXOSetOverrideNotifications() {
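
For context, the new DAA score notification reuses the same opt-in fan-out pattern as the surrounding notification types: each listener keeps a boolean propagate flag, and the manager walks its listeners under a read lock, enqueueing only to those that subscribed. A minimal self-contained sketch of that pattern (generic names, not kaspad's actual types):

package main

import (
    "fmt"
    "sync"
)

type listener struct {
    propagateVirtualDaaScoreChanged bool
}

type manager struct {
    sync.RWMutex
    listeners map[string]*listener
}

// notifyVirtualDaaScoreChanged fans the message out under a read lock,
// and only to listeners that opted in via their propagate flag.
func (m *manager) notifyVirtualDaaScoreChanged(score uint64) {
    m.RLock()
    defer m.RUnlock()
    for id, l := range m.listeners {
        if l.propagateVirtualDaaScoreChanged {
            fmt.Printf("enqueue to %s: virtual DAA score is now %d\n", id, score)
        }
    }
}

func main() {
    m := &manager{listeners: map[string]*listener{
        "conn-1": {propagateVirtualDaaScoreChanged: true},
        "conn-2": {}, // never subscribed, receives nothing
    }}
    m.notifyVirtualDaaScoreChanged(12345)
}

The read lock lets several notification types fan out concurrently; only listener registration needs the write lock.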

View File

@@ -2,14 +2,14 @@ package rpccontext
import (
"encoding/hex"
difficultyPackage "github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
"math"
"math/big"
difficultyPackage "github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/estimatedsize"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/app/appmessage"
@@ -62,12 +62,15 @@ func (ctx *Context) PopulateBlockWithVerboseData(block *appmessage.RPCBlock, dom
}
block.VerboseData = &appmessage.RPCBlockVerboseData{
Hash: blockHash.String(),
Difficulty: ctx.GetDifficultyRatio(domainBlockHeader.Bits(), ctx.Config.ActiveNetParams),
ChildrenHashes: hashes.ToStrings(childrenHashes),
SelectedParentHash: selectedParentHash.String(),
IsHeaderOnly: blockInfo.BlockStatus == externalapi.StatusHeaderOnly,
BlueScore: blockInfo.BlueScore,
Hash: blockHash.String(),
Difficulty: ctx.GetDifficultyRatio(domainBlockHeader.Bits(), ctx.Config.ActiveNetParams),
ChildrenHashes: hashes.ToStrings(childrenHashes),
IsHeaderOnly: blockInfo.BlockStatus == externalapi.StatusHeaderOnly,
BlueScore: blockInfo.BlueScore,
}
// selectedParentHash will be nil in the genesis block
if selectedParentHash != nil {
block.VerboseData.SelectedParentHash = selectedParentHash.String()
}
if blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
@@ -76,7 +79,7 @@ func (ctx *Context) PopulateBlockWithVerboseData(block *appmessage.RPCBlock, dom
// Get the block if we didn't receive it previously
if domainBlock == nil {
domainBlock, err = ctx.Domain.Consensus().GetBlock(blockHash)
domainBlock, err = ctx.Domain.Consensus().GetBlockEvenIfHeaderOnly(blockHash)
if err != nil {
return err
}
@@ -110,10 +113,11 @@ func (ctx *Context) PopulateTransactionWithVerboseData(
return err
}
ctx.Domain.Consensus().PopulateMass(domainTransaction)
transaction.VerboseData = &appmessage.RPCTransactionVerboseData{
TransactionID: consensushashing.TransactionID(domainTransaction).String(),
Hash: consensushashing.TransactionHash(domainTransaction).String(),
Size: estimatedsize.TransactionEstimatedSerializedSize(domainTransaction),
Mass: domainTransaction.Mass,
}
if domainBlockHeader != nil {
transaction.VerboseData.BlockHash = consensushashing.HeaderHash(domainBlockHeader).String()

View File

@@ -12,8 +12,12 @@ func HandleBan(context *rpccontext.Context, _ *router.Router, request appmessage
banRequest := request.(*appmessage.BanRequestMessage)
ip := net.ParseIP(banRequest.IP)
if ip == nil {
hint := ""
if len(banRequest.IP) > 0 && banRequest.IP[0] == '[' {
hint = " (try removing the “[” and “]” brackets)"
}
errorMessage := &appmessage.BanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP %s", banRequest.IP)
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP%s: %s", hint, banRequest.IP)
return errorMessage, nil
}
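
The hint exists because net.ParseIP accepts only a bare address: the bracketed form used in host:port notation is rejected. A quick self-contained demonstration:

package main

import (
    "fmt"
    "net"
    "strings"
)

func main() {
    fmt.Println(net.ParseIP("[::1]")) // <nil>: brackets are not part of the address
    fmt.Println(net.ParseIP("::1"))   // ::1

    // Stripping the brackets, as the hint suggests, makes the address parseable.
    fmt.Println(net.ParseIP(strings.Trim("[::1]", "[]"))) // ::1
}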

View File

@@ -0,0 +1,39 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleEstimateNetworkHashesPerSecond handles the respectively named RPC command
func HandleEstimateNetworkHashesPerSecond(
context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
estimateNetworkHashesPerSecondRequest := request.(*appmessage.EstimateNetworkHashesPerSecondRequestMessage)
windowSize := int(estimateNetworkHashesPerSecondRequest.WindowSize)
startHash := model.VirtualBlockHash
if estimateNetworkHashesPerSecondRequest.StartHash != "" {
var err error
startHash, err = externalapi.NewDomainHashFromString(estimateNetworkHashesPerSecondRequest.StartHash)
if err != nil {
response := &appmessage.EstimateNetworkHashesPerSecondResponseMessage{}
response.Error = appmessage.RPCErrorf("StartHash '%s' is not a valid block hash",
estimateNetworkHashesPerSecondRequest.StartHash)
return response, nil
}
}
networkHashesPerSecond, err := context.Domain.Consensus().EstimateNetworkHashesPerSecond(startHash, windowSize)
if err != nil {
response := &appmessage.EstimateNetworkHashesPerSecondResponseMessage{}
response.Error = appmessage.RPCErrorf("could not resolve network hashes per "+
"second for startHash %s and window size %d: %s", startHash, windowSize, err)
return response, nil
}
return appmessage.NewEstimateNetworkHashesPerSecondResponseMessage(networkHashesPerSecond), nil
}
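
The estimate itself is computed in consensus, but the underlying arithmetic is standard: convert each header's target into expected work, roughly 2^256 / (target + 1), sum that work over the window, and divide by the window's time span. A hedged sketch of that calculation (an illustration, not kaspad's actual implementation):

package main

import (
    "fmt"
    "math/big"
)

// workFromTarget approximates the expected number of hashes needed to find
// a block at the given target: 2^256 / (target + 1).
func workFromTarget(target *big.Int) *big.Int {
    numerator := new(big.Int).Lsh(big.NewInt(1), 256)
    denominator := new(big.Int).Add(target, big.NewInt(1))
    return numerator.Div(numerator, denominator)
}

// estimateHashesPerSecond sums per-block work over a window of blocks and
// divides by the window's time span (in seconds for this sketch).
func estimateHashesPerSecond(targets []*big.Int, firstTimestamp, lastTimestamp int64) *big.Int {
    totalWork := new(big.Int)
    for _, target := range targets {
        totalWork.Add(totalWork, workFromTarget(target))
    }
    seconds := lastTimestamp - firstTimestamp
    if seconds <= 0 {
        seconds = 1 // degenerate window, avoid division by zero
    }
    return totalWork.Div(totalWork, big.NewInt(seconds))
}

func main() {
    // Toy window: three blocks at the same easy target, mined over 3 seconds.
    target, _ := new(big.Int).SetString("00000000ffff0000000000000000000000000000000000000000000000000000", 16)
    window := []*big.Int{target, target, target}
    fmt.Println(estimateHashesPerSecond(window, 0, 3), "hashes per second")
}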

View File

@@ -20,18 +20,22 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
return errorMessage, nil
}
header, err := context.Domain.Consensus().GetBlockHeader(hash)
block, err := context.Domain.Consensus().GetBlockEvenIfHeaderOnly(hash)
if err != nil {
errorMessage := &appmessage.GetBlockResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Block %s not found", hash)
return errorMessage, nil
}
block := &externalapi.DomainBlock{Header: header}
response := appmessage.NewGetBlockResponseMessage()
response.Block = appmessage.DomainBlockToRPCBlock(block)
err = context.PopulateBlockWithVerboseData(response.Block, header, nil, getBlockRequest.IncludeTransactionVerboseData)
if getBlockRequest.IncludeTransactions {
response.Block = appmessage.DomainBlockToRPCBlock(block)
} else {
response.Block = appmessage.DomainBlockToRPCBlock(&externalapi.DomainBlock{Header: block.Header})
}
err = context.PopulateBlockWithVerboseData(response.Block, block.Header, block, getBlockRequest.IncludeTransactions)
if err != nil {
if errors.Is(err, rpccontext.ErrBuildBlockVerboseDataInvalidBlock) {
errorMessage := &appmessage.GetBlockResponseMessage{}

View File

@@ -35,6 +35,7 @@ func HandleGetBlockDAGInfo(context *rpccontext.Context, _ *router.Router, _ appm
response.VirtualParentHashes = hashes.ToStrings(virtualInfo.ParentHashes)
response.Difficulty = context.GetDifficultyRatio(virtualInfo.Bits, context.Config.ActiveNetParams)
response.PastMedianTime = virtualInfo.PastMedianTime
response.VirtualDAAScore = virtualInfo.DAAScore
pruningPoint, err := context.Domain.Consensus().PruningPoint()
if err != nil {

View File

@@ -8,21 +8,15 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
const (
// maxBlocksInGetBlocksResponse is the max amount of blocks that are
// allowed in a GetBlocksResult.
maxBlocksInGetBlocksResponse = 1000
)
// HandleGetBlocks handles the respectively named RPC command
func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
getBlocksRequest := request.(*appmessage.GetBlocksRequestMessage)
// Validate that user didn't set IncludeTransactionVerboseData without setting IncludeBlocks
if !getBlocksRequest.IncludeBlocks && getBlocksRequest.IncludeTransactionVerboseData {
// Validate that user didn't set IncludeTransactions without setting IncludeBlocks
if !getBlocksRequest.IncludeBlocks && getBlocksRequest.IncludeTransactions {
return &appmessage.GetBlocksResponseMessage{
Error: appmessage.RPCErrorf(
"If includeTransactionVerboseData is set, then includeBlockVerboseData must be set as well"),
"If includeTransactions is set, then includeBlockVerboseData must be set as well"),
}, nil
}
@@ -55,7 +49,11 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
if err != nil {
return nil, err
}
blockHashes, highHash, err := context.Domain.Consensus().GetHashesBetween(lowHash, virtualSelectedParent, maxBlocksInGetBlocksResponse)
// We use +1 because lowHash is also returned
// maxBlocks MUST be >= MergeSetSizeLimit + 1
maxBlocks := context.Config.NetParams().MergeSetSizeLimit + 1
blockHashes, highHash, err := context.Domain.Consensus().GetHashesBetween(lowHash, virtualSelectedParent, maxBlocks)
if err != nil {
return nil, err
}
@@ -74,29 +72,28 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
blockHashes = append(blockHashes, virtualSelectedParentAnticone...)
}
// Both GetHashesBetween and Anticone might return more than the allowed number of blocks, so
// trim any extra blocks.
if len(blockHashes) > maxBlocksInGetBlocksResponse {
blockHashes = blockHashes[:maxBlocksInGetBlocksResponse]
}
// Prepare the response
response := appmessage.NewGetBlocksResponseMessage()
response.BlockHashes = hashes.ToStrings(blockHashes)
if getBlocksRequest.IncludeBlocks {
blocks := make([]*appmessage.RPCBlock, len(blockHashes))
rpcBlocks := make([]*appmessage.RPCBlock, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := context.Domain.Consensus().GetBlockHeader(blockHash)
block, err := context.Domain.Consensus().GetBlockEvenIfHeaderOnly(blockHash)
if err != nil {
return nil, err
}
block := &externalapi.DomainBlock{Header: blockHeader}
blocks[i] = appmessage.DomainBlockToRPCBlock(block)
err = context.PopulateBlockWithVerboseData(blocks[i], blockHeader, nil, getBlocksRequest.IncludeTransactionVerboseData)
if getBlocksRequest.IncludeTransactions {
rpcBlocks[i] = appmessage.DomainBlockToRPCBlock(block)
} else {
rpcBlocks[i] = appmessage.DomainBlockToRPCBlock(&externalapi.DomainBlock{Header: block.Header})
}
err = context.PopulateBlockWithVerboseData(rpcBlocks[i], block.Header, nil, getBlocksRequest.IncludeTransactions)
if err != nil {
return nil, err
}
}
response.Blocks = rpcBlocks
}
return response, nil
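
Because a single response is now capped at MergeSetSizeLimit+1 hashes, a client that wants a longer range pages through it: the last hash of one response becomes the lowHash of the next request, and the +1 exists precisely because that boundary hash is returned twice. A hedged sketch of the client-side loop, with a hypothetical getBlocksPage helper standing in for the actual RPC call:

package main

import "fmt"

// getBlocksPage is a hypothetical stand-in for the GetBlocks RPC: it returns
// up to maxPage hashes starting at lowHash, with lowHash itself included.
func getBlocksPage(chain []string, lowHash string, maxPage int) []string {
    start := 0
    for i, hash := range chain {
        if hash == lowHash {
            start = i
            break
        }
    }
    end := start + maxPage
    if end > len(chain) {
        end = len(chain)
    }
    return chain[start:end]
}

func main() {
    chain := []string{"genesis", "a", "b", "c", "d", "e"}
    const maxPage = 3 // stands in for MergeSetSizeLimit + 1

    lowHash := chain[0]
    collected := []string{lowHash}
    for {
        page := getBlocksPage(chain, lowHash, maxPage)
        // The first hash repeats the previous page's last hash, so skip it.
        collected = append(collected, page[1:]...)
        if len(page) < maxPage {
            break // a short page means the tip was reached
        }
        lowHash = page[len(page)-1]
    }
    fmt.Println(collected) // [genesis a b c d e]
}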

View File

@@ -15,7 +15,6 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/testapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/domain/miningmanager"
"github.com/kaspanet/kaspad/infrastructure/config"
)
@@ -24,22 +23,38 @@ type fakeDomain struct {
testapi.TestConsensus
}
func (d fakeDomain) DeleteStagingConsensus() error {
panic("implement me")
}
func (d fakeDomain) StagingConsensus() externalapi.Consensus {
panic("implement me")
}
func (d fakeDomain) InitStagingConsensus() error {
panic("implement me")
}
func (d fakeDomain) CommitStagingConsensus() error {
panic("implement me")
}
func (d fakeDomain) Consensus() externalapi.Consensus { return d }
func (d fakeDomain) MiningManager() miningmanager.MiningManager { return nil }
func TestHandleGetBlocks(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
stagingArea := model.NewStagingArea()
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(params, false, "TestHandleGetBlocks")
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestHandleGetBlocks")
if err != nil {
t.Fatalf("Error setting up consensus: %+v", err)
}
defer teardown(false)
fakeContext := rpccontext.Context{
Config: &config.Config{Flags: &config.Flags{NetworkFlags: config.NetworkFlags{ActiveNetParams: params}}},
Config: &config.Config{Flags: &config.Flags{NetworkFlags: config.NetworkFlags{ActiveNetParams: &consensusConfig.Params}}},
Domain: fakeDomain{tc},
}
@@ -81,7 +96,7 @@ func TestHandleGetBlocks(t *testing.T) {
// \ | /
// etc.
expectedOrder := make([]*externalapi.DomainHash, 0, 40)
mergingBlock := params.GenesisHash
mergingBlock := consensusConfig.GenesisHash
for i := 0; i < 10; i++ {
splitBlocks := make([]*externalapi.DomainHash, 0, 3)
for j := 0; j < 3; j++ {
@@ -134,13 +149,13 @@ func TestHandleGetBlocks(t *testing.T) {
virtualSelectedParent, actualBlocks.BlockHashes)
}
expectedOrder = append([]*externalapi.DomainHash{params.GenesisHash}, expectedOrder...)
expectedOrder = append([]*externalapi.DomainHash{consensusConfig.GenesisHash}, expectedOrder...)
actualOrder := getBlocks(nil)
if !reflect.DeepEqual(actualOrder.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: %v \nactual:\n%v", expectedOrder, actualOrder.BlockHashes)
}
requestAllExplictly := getBlocks(params.GenesisHash)
requestAllExplictly := getBlocks(consensusConfig.GenesisHash)
if !reflect.DeepEqual(requestAllExplictly.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: \n%v\n. actual:\n%v", expectedOrder, requestAllExplictly.BlockHashes)
}

View File

@@ -4,6 +4,7 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/version"
)
// HandleGetInfo handles the respectively named RPC command
@@ -11,6 +12,7 @@ func HandleGetInfo(context *rpccontext.Context, _ *router.Router, _ appmessage.M
response := appmessage.NewGetInfoResponseMessage(
context.NetAdapter.ID().String(),
uint64(context.Domain.MiningManager().TransactionCount()),
version.Version(),
)
return response, nil

View File

@@ -8,9 +8,14 @@ import (
// HandleGetVirtualSelectedParentBlueScore handles the respectively named RPC command
func HandleGetVirtualSelectedParentBlueScore(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
virtualInfo, err := context.Domain.Consensus().GetVirtualInfo()
c := context.Domain.Consensus()
selectedParent, err := c.GetVirtualSelectedParent()
if err != nil {
return nil, err
}
return appmessage.NewGetVirtualSelectedParentBlueScoreResponseMessage(virtualInfo.BlueScore), nil
blockInfo, err := c.GetBlockInfo(selectedParent)
if err != nil {
return nil, err
}
return appmessage.NewGetVirtualSelectedParentBlueScoreResponseMessage(blockInfo.BlueScore), nil
}

View File

@@ -32,6 +32,6 @@ func HandleGetVirtualSelectedParentChainFromBlock(context *rpccontext.Context, _
}
response := appmessage.NewGetVirtualSelectedParentChainFromBlockResponseMessage(
chainChangedNotification.RemovedChainBlockHashes, chainChangedNotification.AddedChainBlocks)
chainChangedNotification.RemovedChainBlockHashes, chainChangedNotification.AddedChainBlockHashes)
return response, nil
}

View File

@@ -0,0 +1,19 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleNotifyVirtualDaaScoreChanged handles the respectively named RPC command
func HandleNotifyVirtualDaaScoreChanged(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.PropagateVirtualDaaScoreChangedNotifications()
response := appmessage.NewNotifyVirtualDaaScoreChangedResponseMessage()
return response, nil
}

View File

@@ -14,9 +14,14 @@ import (
func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
submitBlockRequest := request.(*appmessage.SubmitBlockRequestMessage)
if context.ProtocolManager.IsIBDRunning() {
isSynced, err := context.ProtocolManager.ShouldMine()
if err != nil {
return nil, err
}
if !context.Config.AllowSubmitBlockWhenNotSynced && !isSynced {
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Block not submitted - IBD is running"),
Error: appmessage.RPCErrorf("Block not submitted - node is not synced"),
RejectReason: appmessage.RejectReasonIsInIBD,
}, nil
}

View File

@@ -21,7 +21,7 @@ func HandleSubmitTransaction(context *rpccontext.Context, _ *router.Router, requ
}
transactionID := consensushashing.TransactionID(domainTransaction)
err = context.ProtocolManager.AddTransaction(domainTransaction)
err = context.ProtocolManager.AddTransaction(domainTransaction, submitTransactionRequest.AllowOrphan)
if err != nil {
if !errors.As(err, &mempool.RuleError{}) {
return nil, err

View File

@@ -12,8 +12,12 @@ func HandleUnban(context *rpccontext.Context, _ *router.Router, request appmessa
unbanRequest := request.(*appmessage.UnbanRequestMessage)
ip := net.ParseIP(unbanRequest.IP)
if ip == nil {
hint := ""
if len(unbanRequest.IP) > 0 && unbanRequest.IP[0] == '[' {
hint = " (try removing the “[” and “]” brackets)"
}
errorMessage := &appmessage.UnbanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP %s", unbanRequest.IP)
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP%s: %s", hint, unbanRequest.IP)
return errorMessage, nil
}
err := context.AddressManager.Unban(appmessage.NewNetAddressIPPort(ip, 0))

View File

@@ -15,13 +15,11 @@ golint -set_exit_status ./...
staticcheck -checks SA4006,SA4008,SA4009,SA4010,SA5003,SA1004,SA1014,SA1021,SA1023,SA1024,SA1025,SA1026,SA1027,SA1028,SA2000,SA2001,SA2003,SA4000,SA4001,SA4003,SA4004,SA4011,SA4012,SA4013,SA4014,SA4015,SA4016,SA4017,SA4018,SA4019,SA4020,SA4021,SA4022,SA4023,SA5000,SA5002,SA5004,SA5005,SA5007,SA5008,SA5009,SA5010,SA5011,SA5012,SA6001,SA6002,SA9001,SA9002,SA9003,SA9004,SA9005,SA9006,ST1019 ./...
go vet -composites=false $FLAGS ./...
go build $FLAGS -o kaspad .
if [ -n "${NO_PARALLEL}" ]
then
go test -parallel=1 $FLAGS ./...
go test -timeout 20m -parallel=1 $FLAGS ./...
else
go test $FLAGS ./...
go test -timeout 20m $FLAGS ./...
fi

View File

@@ -1,11 +1,123 @@
Kaspad v0.11.7 - 2021-12-11
===========================
Breaking changes:
* kaspawallet: show-address → new-address + show-addresses (#1870)
Bug fixes:
* Fix numThreads using getAEAD instead of decryptMnemonic (#1859)
* Apply ResolveVirtual diffs to the UTXO index (#1868)
Non-breaking changes:
* Ignore header mass in devnet and testnet (#1879)
* Remove unused args from CalcSubsidy (#1877)
* ExpectedHeaderPruningPoint fix (#1876)
* Changes to libkaspawallet to support Kaspaper (#1878)
* Get rid of genesis's UTXO dump (#1867)
Kaspad v0.11.2 - 2021-11-11
===========================
Bug fixes:
* Enlarge p2p max message size to 1gb
* Fix UTXO chunks logic
* Increase tests timeout to 20 minutes
Kaspad v0.11.1 - 2021-11-09
===========================
Non-breaking changes:
* Cache the miner state
Kaspad v0.10.2 - 2021-05-18
===========================
Non-breaking changes:
* Fix getBlock and getBlocks RPC commands to return blocks and transactions properly (#1716)
* serializeAddress should always serialize as IPv6, since it assumes the IP size is 16 bytes (#1720)
* Add VirtualDaaScore to GetBlockDagInfo (#1719)
* Fix calcTxSequenceLockFromReferencedUTXOEntries for loop break condition (#1723)
* Fix overflow when checking coinbase maturity and don't ban peers that send transactions with immature spend (#1722)
Kaspad v0.10.1 - 2021-05-11
===========================
* Calculate virtual's acceptance data and multiset after importing a new pruning point (#1700)
Kaspad v0.10.0 - 2021-04-26
===========================
Major changes include:
* Implementing a signature hashing scheme similar to BIP-143
* Replacing HASH160 with BLAKE2B
* Replacing ECMH with MuHash
* Removing RIPEMD160 and SHA1 from the codebase entirely
* Making P2PKH transactions non-standard
* Vastly enhancing the CLI wallet
* Restructuring kaspad's app/home directory
* Modifying block and transaction types in the RPC to be easier to consume clientside
A partial list of the more-important commits is as follows:
* Fix data race in GetBlockChildren (#1579)
* Remove payload hash (#1583)
* Add the mempool size to getInfo RPC command (#1584)
* Change the difficulty to be calculated based on the same block instead of its selected parent (#1591)
* Adjust the difficulty in the first difficultyAdjustmentWindowSize blocks (#1592)
* Adding DAA score (#1596)
* Use DAA score where needed (#1602)
* Remove the Services field from NetAddress. (#1610)
* Fix getBlocks to not add the anticone when some blocks were filtered by GetHashesBetween (#1611)
* Restructure the default ~/.kaspad directory layout (#1613)
* Replace the HomeDir flag with a AppDir flag (#1615)
* Implement BIP-143-like sighash (#1598)
* Change --datadir to --appdir and remove symmetrical connection in stability tests (#1617)
* Use BLAKE2B instead of HASH160, and get rid of any usage of RIPEMD160 and SHA1 (#1618)
* Replace ECMH with Muhash (#1624)
* Add support for multiple staging areas (#1633)
* Make sure the ghostdagDataStore cache is at least DifficultyAdjustmentBlockWindow sized (#1635)
* Resolve each block status in its own staging area (#1634)
* Add mass limit to mempool (#1627)
* In RPC, use RPCTransactions and RPCBlocks instead of TransactionMessages and BlockMessages (#1609)
* Use go-secp256k1 v0.0.5 (#1640)
* Add a show-address subcommand to kaspawallet (#1653)
* Replace p2pkh with p2pk (#1650)
* Implement importing private keys into the wallet (#1655)
* Add dump unencrypted data sub command to the wallet (#1661)
* Add ECDSA support (#1657)
* Add OpCheckMultiSigECDSA (#1663)
* Add ECDSA support to the wallet (#1664)
* Make moving the pruning point faster (#1660)
* Implement new mechanism for updating UTXO Diffs (#1671)
Kaspad v0.9.2 - 2021-03-31
===========================
* Increase the route capacity of InvTransaction messages. (#1603) (#1637)
Kaspad v0.9.1 - 2021-03-14
===========================
* Testnet network reset
Kaspad v0.9.0 - 2021-03-04
===========================
* Merge big subdags in pick virtual parents (#1574)
* Write in the reject message the tx rejection reason (#1573)
* Add nil checks for protowire (#1570)
* Increase getBlocks limit to 1000 (#1572)
* Return RPC error if getBlock's lowHash doesn't exist (#1569)
* Add default dns-seeder to testnet (#1568)
* Fix utxoindex deserialization (#1566)
* Add pruning point hash to GetBlockDagInfo response (#1565)
* Use EmitUnpopulated so that kaspactl prints all fields, even the default ones (#1561)
* Stop logging an error whenever an RPC/P2P connection is canceled (#1562)
* Cleanup the logger and make it asynchronous (#1524)
* Close all iterators (#1542)
* Add childrenHashes to GetBlock/s RPC commands (#1560)
* Add ScriptPublicKey.Version to RPC (#1559)
* Fix the target block rate to create less bursty mining (#1554)
Kaspad v0.8.10 - 2021-02-25
===========================
* Fix bug where invalid mempool transactions were not removed (#1551)
* Add RPC reconnection to the miner (#1552)
* Remove virtual diff parents - only selectedTip is virtualDiffParent now (#1550)
* Fix UTXO index (#1548)
* Prevent fast failing (#1545)
* Increase the sleep time in kaspaminer when the node is not synced (#1544)
* Disallow header only blocks on RPC, relay and when requesting IBD full blocks (#1537)
* Make templateManager hold a DomainBlock and isSynced bool instead of a GetBlockTemplateResponseMessage (#1538)

View File

@@ -48,6 +48,10 @@ func setField(commandValue reflect.Value, parameterValue reflect.Value, paramete
}
func stringToValue(parameterDesc *parameterDescription, valueStr string) (reflect.Value, error) {
if valueStr == "-" {
return reflect.Zero(parameterDesc.typeof), nil
}
var value interface{}
var err error
switch parameterDesc.typeof.Kind() {
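
With this change, passing a literal '-' for an optional kaspactl parameter produces the zero value of the parameter's type via reflect.Zero. A small demonstration of what that yields:

package main

import (
    "fmt"
    "reflect"
)

func main() {
    // reflect.Zero returns the zero value for any type, which is what the
    // "-" placeholder is translated into.
    fmt.Printf("%q\n", reflect.Zero(reflect.TypeOf("")))     // ""
    fmt.Println(reflect.Zero(reflect.TypeOf(uint64(0))))     // 0
    fmt.Println(reflect.Zero(reflect.TypeOf([]string(nil)))) // []
}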

View File

@@ -24,6 +24,7 @@ var commandTypes = []reflect.Type{
reflect.TypeOf(protowire.KaspadMessage_GetVirtualSelectedParentBlueScoreRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetVirtualSelectedParentChainFromBlockRequest{}),
reflect.TypeOf(protowire.KaspadMessage_ResolveFinalityConflictRequest{}),
reflect.TypeOf(protowire.KaspadMessage_EstimateNetworkHashesPerSecondRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetBlockTemplateRequest{}),
reflect.TypeOf(protowire.KaspadMessage_SubmitBlockRequest{}),

View File

@@ -27,7 +27,8 @@ func parseConfig() (*configFlags, error) {
}
parser := flags.NewParser(cfg, flags.HelpFlag)
parser.Usage = "kaspactl [OPTIONS] [COMMAND] [COMMAND PARAMETERS].\n\nCommand can be supplied only if --json is not used." +
"\n\nUse `kaspactl --list-commands` to get a list of all commands and their parameters"
"\n\nUse `kaspactl --list-commands` to get a list of all commands and their parameters." +
"\nFor optional parameters- use '-' without quotes to not pass the parameter.\n"
remainingArgs, err := parser.Parse()
if err != nil {
return nil, err

View File

@@ -2,6 +2,7 @@ package main
import (
"fmt"
"github.com/kaspanet/kaspad/version"
"os"
"time"
@@ -33,6 +34,18 @@ func main() {
}
defer client.Disconnect()
kaspadMessage, err := client.Post(&protowire.KaspadMessage{Payload: &protowire.KaspadMessage_GetInfoRequest{GetInfoRequest: &protowire.GetInfoRequestMessage{}}})
if err != nil {
printErrorAndExit(fmt.Sprintf("Cannot post GetInfo message: %s", err))
}
localVersion := version.Version()
remoteVersion := kaspadMessage.GetGetInfoResponse().ServerVersion
if localVersion != remoteVersion {
printErrorAndExit(fmt.Sprintf("Server version mismatch, expect: %s, got: %s", localVersion, remoteVersion))
}
responseChan := make(chan string)
if cfg.RequestJSON != "" {

View File

@@ -6,17 +6,12 @@ import (
"sync/atomic"
"time"
"github.com/kaspanet/kaspad/cmd/kaspaminer/templatemanager"
"github.com/kaspanet/kaspad/domain/consensus/model/pow"
"github.com/kaspanet/kaspad/util/difficulty"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/cmd/kaspaminer/templatemanager"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/pow"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
)
@@ -147,20 +142,20 @@ func mineNextBlock(mineWhenNotSynced bool) *externalapi.DomainBlock {
// In the rare case where the nonce space is exhausted for a specific
// block, it'll keep looping the nonce until a new block template
// is discovered.
block := getBlockForMining(mineWhenNotSynced)
targetDifficulty := difficulty.CompactToBig(block.Header.Bits())
headerForMining := block.Header.ToMutable()
headerForMining.SetNonce(nonce)
block, state := getBlockForMining(mineWhenNotSynced)
state.Nonce = nonce
atomic.AddUint64(&hashesTried, 1)
if pow.CheckProofOfWorkWithTarget(headerForMining, targetDifficulty) {
block.Header = headerForMining.ToImmutable()
log.Infof("Found block %s with parents %s", consensushashing.BlockHash(block), block.Header.ParentHashes())
if state.CheckProofOfWork() {
mutHeader := block.Header.ToMutable()
mutHeader.SetNonce(nonce)
block.Header = mutHeader.ToImmutable()
log.Infof("Found block %s with parents %s", consensushashing.BlockHash(block), block.Header.DirectParents())
return block
}
}
}
func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
func getBlockForMining(mineWhenNotSynced bool) (*externalapi.DomainBlock, *pow.State) {
tryCount := 0
const sleepTime = 500 * time.Millisecond
@@ -170,7 +165,7 @@ func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
tryCount++
shouldLog := (tryCount-1)%10 == 0
template, isSynced := templatemanager.Get()
template, state, isSynced := templatemanager.Get()
if template == nil {
if shouldLog {
log.Info("Waiting for the initial template")
@@ -186,7 +181,7 @@ func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
continue
}
return template
return template, state
}
}

View File

@@ -3,23 +3,26 @@ package templatemanager
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/pow"
"sync"
)
var currentTemplate *externalapi.DomainBlock
var currentState *pow.State
var isSynced bool
var lock = &sync.Mutex{}
// Get returns the template to work on
func Get() (*externalapi.DomainBlock, bool) {
func Get() (*externalapi.DomainBlock, *pow.State, bool) {
lock.Lock()
defer lock.Unlock()
// Shallow copy the block so when the user replaces the header it won't affect the template here.
if currentTemplate == nil {
return nil, false
return nil, nil, false
}
block := *currentTemplate
return &block, isSynced
state := *currentState
return &block, &state, isSynced
}
// Set sets the current template to work on
@@ -31,6 +34,7 @@ func Set(template *appmessage.GetBlockTemplateResponseMessage) error {
lock.Lock()
defer lock.Unlock()
currentTemplate = block
currentState = pow.NewState(block.Header.ToMutable())
isSynced = template.IsSynced
return nil
}
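
Get now hands each caller shallow copies of both the template and the PoW state, so a miner mutating the nonce on its copy never races with Set replacing the shared values. A minimal sketch of that copy-on-get pattern:

package main

import (
    "fmt"
    "sync"
)

type powState struct{ Nonce uint64 }

var (
    lock    sync.Mutex
    current *powState
)

// get returns a copy under the lock, so callers may mutate it freely.
func get() *powState {
    lock.Lock()
    defer lock.Unlock()
    if current == nil {
        return nil
    }
    stateCopy := *current
    return &stateCopy
}

// set replaces the shared state; copies handed out earlier are unaffected.
func set(state *powState) {
    lock.Lock()
    defer lock.Unlock()
    current = state
}

func main() {
    set(&powState{Nonce: 0})
    mining := get()
    mining.Nonce = 42                      // local to this copy
    fmt.Println(get().Nonce, mining.Nonce) // 0 42
}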

View File

@@ -1,50 +1,30 @@
package main
import (
"context"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/client"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
)
func balance(conf *balanceConfig) error {
client, err := connectToRPC(conf.NetParams(), conf.RPCServer)
daemonClient, tearDown, err := client.Connect(conf.DaemonAddress)
if err != nil {
return err
}
defer tearDown()
ctx, cancel := context.WithTimeout(context.Background(), daemonTimeout)
defer cancel()
response, err := daemonClient.GetBalance(ctx, &pb.GetBalanceRequest{})
if err != nil {
return err
}
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
if err != nil {
return err
}
addr, err := libkaspawallet.Address(conf.NetParams(), keysFile.PublicKeys, keysFile.MinimumSignatures, keysFile.ECDSA)
if err != nil {
return err
}
getUTXOsByAddressesResponse, err := client.GetUTXOsByAddresses([]string{addr.String()})
if err != nil {
return err
}
virtualSelectedParentBlueScoreResponse, err := client.GetVirtualSelectedParentBlueScore()
if err != nil {
return err
}
virtualSelectedParentBlueScore := virtualSelectedParentBlueScoreResponse.BlueScore
var availableBalance, pendingBalance uint64
for _, entry := range getUTXOsByAddressesResponse.Entries {
if isUTXOSpendable(entry, virtualSelectedParentBlueScore, conf.ActiveNetParams.BlockCoinbaseMaturity) {
availableBalance += entry.UTXOEntry.Amount
} else {
pendingBalance += entry.UTXOEntry.Amount
}
}
fmt.Printf("Balance:\t\tKAS %f\n", float64(availableBalance)/util.SompiPerKaspa)
if pendingBalance > 0 {
fmt.Printf("Pending balance:\tKAS %f\n", float64(pendingBalance)/util.SompiPerKaspa)
fmt.Printf("Balance:\t\tKAS %f\n", float64(response.Available)/constants.SompiPerKaspa)
if response.Pending > 0 {
fmt.Printf("Pending balance:\tKAS %f\n", float64(response.Pending)/constants.SompiPerKaspa)
}
return nil
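
The daemon reports amounts in sompi, kaspa's smallest unit; constants.SompiPerKaspa (100,000,000) converts to whole KAS for display. A tiny example of the conversion used in the printout:

package main

import "fmt"

const sompiPerKaspa = 100_000_000 // mirrors constants.SompiPerKaspa

func main() {
    available := uint64(12_345_678_900) // sompi, e.g. GetBalanceResponse.Available
    fmt.Printf("Balance:\t\tKAS %f\n", float64(available)/sompiPerKaspa)
    // Prints: Balance:		KAS 123.456789
}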

View File

@@ -1,34 +1,35 @@
package main
import (
"context"
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/client"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
)
func broadcast(conf *broadcastConfig) error {
client, err := connectToRPC(conf.NetParams(), conf.RPCServer)
daemonClient, tearDown, err := client.Connect(conf.DaemonAddress)
if err != nil {
return err
}
defer tearDown()
ctx, cancel := context.WithTimeout(context.Background(), daemonTimeout)
defer cancel()
transaction, err := hex.DecodeString(conf.Transaction)
if err != nil {
return err
}
psTxBytes, err := hex.DecodeString(conf.Transaction)
if err != nil {
return err
}
tx, err := libkaspawallet.ExtractTransaction(psTxBytes)
if err != nil {
return err
}
transactionID, err := sendTransaction(client, tx)
response, err := daemonClient.Broadcast(ctx, &pb.BroadcastRequest{Transaction: transaction})
if err != nil {
return err
}
fmt.Println("Transaction was sent successfully")
fmt.Printf("Transaction ID: \t%s\n", transactionID)
fmt.Printf("Transaction ID: \t%s\n", response.TxID)
return nil
}

View File

@@ -2,31 +2,13 @@ package main
import (
"fmt"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"os"
"time"
)
func isUTXOSpendable(entry *appmessage.UTXOsByAddressesEntry, virtualSelectedParentBlueScore uint64, coinbaseMaturity uint64) bool {
if !entry.UTXOEntry.IsCoinbase {
return true
}
blockBlueScore := entry.UTXOEntry.BlockDAAScore
// TODO: Check for a better alternative than virtualSelectedParentBlueScore
return blockBlueScore+coinbaseMaturity < virtualSelectedParentBlueScore
}
const daemonTimeout = 2 * time.Minute
func printErrorAndExit(err error) {
fmt.Fprintf(os.Stderr, "%s\n", err)
os.Exit(1)
}
func connectToRPC(params *dagconfig.Params, rpcServer string) (*rpcclient.RPCClient, error) {
rpcAddress, err := params.NormalizeRPCServerAddress(rpcServer)
if err != nil {
return nil, err
}
return rpcclient.NewRPCClient(rpcAddress)
}

View File

@@ -15,8 +15,15 @@ const (
createUnsignedTransactionSubCmd = "create-unsigned-transaction"
signSubCmd = "sign"
broadcastSubCmd = "broadcast"
showAddressSubCmd = "show-address"
showAddressesSubCmd = "show-addresses"
newAddressSubCmd = "new-address"
dumpUnencryptedDataSubCmd = "dump-unencrypted-data"
startDaemonSubCmd = "start-daemon"
)
const (
defaultListen = "localhost:8082"
defaultRPCServer = "localhost"
)
type configFlags struct {
@@ -25,6 +32,8 @@ type configFlags struct {
type createConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
Yes bool `long:"yes" short:"y" description:"Assume \"yes\" to all questions"`
MinimumSignatures uint32 `long:"min-signatures" short:"m" description:"Minimum required signatures" default:"1"`
NumPrivateKeys uint32 `long:"num-private-keys" short:"k" description:"Number of private keys" default:"1"`
NumPublicKeys uint32 `long:"num-public-keys" short:"n" description:"Total number of keys" default:"1"`
@@ -34,46 +43,61 @@ type createConfig struct {
}
type balanceConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
config.NetworkFlags
}
type sendConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
config.NetworkFlags
}
type createUnsignedTransactionConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
config.NetworkFlags
}
type signConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
Transaction string `long:"transaction" short:"t" description:"The unsigned transaction to sign on (encoded in hex)" required:"true"`
config.NetworkFlags
}
type broadcastConfig struct {
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
Transaction string `long:"transaction" short:"t" description:"The signed transaction to broadcast (encoded in hex)" required:"true"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
Transaction string `long:"transaction" short:"t" description:"The signed transaction to broadcast (encoded in hex)" required:"true"`
config.NetworkFlags
}
type showAddressConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
type showAddressesConfig struct {
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
config.NetworkFlags
}
type newAddressConfig struct {
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
config.NetworkFlags
}
type startDaemonConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
Listen string `short:"l" long:"listen" description:"Address to listen on (default: 0.0.0.0:8082)"`
config.NetworkFlags
}
type dumpUnencryptedDataConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
Yes bool `long:"yes" short:"y" description:"Assume \"yes\" to all questions"`
config.NetworkFlags
}
@@ -85,15 +109,15 @@ func parseCommandLine() (subCommand string, config interface{}) {
parser.AddCommand(createSubCmd, "Creates a new wallet",
"Creates a private key and 3 public addresses, one for each of MainNet, TestNet and DevNet", createConf)
balanceConf := &balanceConfig{}
balanceConf := &balanceConfig{DaemonAddress: defaultListen}
parser.AddCommand(balanceSubCmd, "Shows the balance of a public address",
"Shows the balance for a public address in Kaspa", balanceConf)
sendConf := &sendConfig{}
sendConf := &sendConfig{DaemonAddress: defaultListen}
parser.AddCommand(sendSubCmd, "Sends a Kaspa transaction to a public address",
"Sends a Kaspa transaction to a public address", sendConf)
createUnsignedTransactionConf := &createUnsignedTransactionConfig{}
createUnsignedTransactionConf := &createUnsignedTransactionConfig{DaemonAddress: defaultListen}
parser.AddCommand(createUnsignedTransactionSubCmd, "Create an unsigned Kaspa transaction",
"Create an unsigned Kaspa transaction", createUnsignedTransactionConf)
@@ -101,19 +125,29 @@ func parseCommandLine() (subCommand string, config interface{}) {
parser.AddCommand(signSubCmd, "Sign the given partially signed transaction",
"Sign the given partially signed transaction", signConf)
broadcastConf := &broadcastConfig{}
broadcastConf := &broadcastConfig{DaemonAddress: defaultListen}
parser.AddCommand(broadcastSubCmd, "Broadcast the given transaction",
"Broadcast the given transaction", broadcastConf)
showAddressConf := &showAddressConfig{}
parser.AddCommand(showAddressSubCmd, "Shows the public address of the current wallet",
"Shows the public address of the current wallet", showAddressConf)
showAddressesConf := &showAddressesConfig{DaemonAddress: defaultListen}
parser.AddCommand(showAddressesSubCmd, "Shows all generated public addresses of the current wallet",
"Shows all generated public addresses of the current wallet", showAddressesConf)
newAddressConf := &newAddressConfig{DaemonAddress: defaultListen}
parser.AddCommand(newAddressSubCmd, "Generates a new public address of the current wallet and shows it",
"Generates a new public address of the current wallet and shows it", newAddressConf)
dumpUnencryptedDataConf := &dumpUnencryptedDataConfig{}
parser.AddCommand(dumpUnencryptedDataSubCmd, "Prints the unencrypted wallet data",
"Prints the unencrypted wallet data including its private keys. Anyone that sees it can access "+
"the funds. Use only on safe environment.", dumpUnencryptedDataConf)
startDaemonConf := &startDaemonConfig{
RPCServer: defaultRPCServer,
Listen: defaultListen,
}
parser.AddCommand(startDaemonSubCmd, "Start the wallet daemon", "Start the wallet daemon", startDaemonConf)
_, err := parser.Parse()
if err != nil {
@@ -169,13 +203,20 @@ func parseCommandLine() (subCommand string, config interface{}) {
printErrorAndExit(err)
}
config = broadcastConf
case showAddressSubCmd:
combineNetworkFlags(&showAddressConf.NetworkFlags, &cfg.NetworkFlags)
err := showAddressConf.ResolveNetwork(parser)
case showAddressesSubCmd:
combineNetworkFlags(&showAddressesConf.NetworkFlags, &cfg.NetworkFlags)
err := showAddressesConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = showAddressConf
config = showAddressesConf
case newAddressSubCmd:
combineNetworkFlags(&newAddressConf.NetworkFlags, &cfg.NetworkFlags)
err := newAddressConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = newAddressConf
case dumpUnencryptedDataSubCmd:
combineNetworkFlags(&dumpUnencryptedDataConf.NetworkFlags, &cfg.NetworkFlags)
err := dumpUnencryptedDataConf.ResolveNetwork(parser)
@@ -183,6 +224,13 @@ func parseCommandLine() (subCommand string, config interface{}) {
printErrorAndExit(err)
}
config = dumpUnencryptedDataConf
case startDaemonSubCmd:
combineNetworkFlags(&startDaemonConf.NetworkFlags, &cfg.NetworkFlags)
err := startDaemonConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = startDaemonConf
}
return parser.Command.Active.Name, config

View File

@@ -2,68 +2,78 @@ package main
import (
"bufio"
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet/bip32"
"github.com/kaspanet/kaspad/cmd/kaspawallet/utils"
"github.com/pkg/errors"
"os"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
)
func create(conf *createConfig) error {
var encryptedPrivateKeys []*keys.EncryptedPrivateKey
var publicKeys [][]byte
var encryptedMnemonics []*keys.EncryptedMnemonic
var signerExtendedPublicKeys []string
var err error
isMultisig := conf.NumPublicKeys > 1
if !conf.Import {
encryptedPrivateKeys, publicKeys, err = keys.CreateKeyPairs(conf.NumPrivateKeys, conf.ECDSA)
encryptedMnemonics, signerExtendedPublicKeys, err = keys.CreateMnemonics(conf.NetParams(), conf.NumPrivateKeys, conf.Password, isMultisig)
} else {
encryptedPrivateKeys, publicKeys, err = keys.ImportKeyPairs(conf.NumPrivateKeys)
encryptedMnemonics, signerExtendedPublicKeys, err = keys.ImportMnemonics(conf.NetParams(), conf.NumPrivateKeys, conf.Password, isMultisig)
}
if err != nil {
return err
}
for i, publicKey := range publicKeys {
fmt.Printf("Public key of private key #%d:\n%x\n\n", i+1, publicKey)
for i, extendedPublicKey := range signerExtendedPublicKeys {
fmt.Printf("Extended public key of mnemonic #%d:\n%s\n\n", i+1, extendedPublicKey)
}
extendedPublicKeys := make([]string, conf.NumPrivateKeys, conf.NumPublicKeys)
copy(extendedPublicKeys, signerExtendedPublicKeys)
reader := bufio.NewReader(os.Stdin)
for i := conf.NumPrivateKeys; i < conf.NumPublicKeys; i++ {
fmt.Printf("Enter public key #%d here:\n", i+1)
line, isPrefix, err := reader.ReadLine()
extendedPublicKey, err := utils.ReadLine(reader)
if err != nil {
return err
}
_, err = bip32.DeserializeExtendedKey(string(extendedPublicKey))
if err != nil {
return errors.Wrapf(err, "%s is invalid extended public key", string(extendedPublicKey))
}
fmt.Println()
if isPrefix {
return errors.Errorf("Public key is too long")
}
publicKey, err := hex.DecodeString(string(line))
if err != nil {
return err
}
publicKeys = append(publicKeys, publicKey)
extendedPublicKeys = append(extendedPublicKeys, string(extendedPublicKey))
}
err = keys.WriteKeysFile(conf.KeysFile, encryptedPrivateKeys, publicKeys, conf.MinimumSignatures, conf.ECDSA)
cosignerIndex, err := libkaspawallet.MinimumCosignerIndex(signerExtendedPublicKeys, extendedPublicKeys)
if err != nil {
return err
}
keysFile, err := keys.ReadKeysFile(conf.KeysFile)
file := keys.File{
Version: keys.LastVersion,
EncryptedMnemonics: encryptedMnemonics,
ExtendedPublicKeys: extendedPublicKeys,
MinimumSignatures: conf.MinimumSignatures,
CosignerIndex: cosignerIndex,
ECDSA: conf.ECDSA,
}
err = file.SetPath(conf.NetParams(), conf.KeysFile, conf.Yes)
if err != nil {
return err
}
addr, err := libkaspawallet.Address(conf.NetParams(), keysFile.PublicKeys, keysFile.MinimumSignatures, keysFile.ECDSA)
err = file.Save()
if err != nil {
return err
}
fmt.Printf("The wallet address is:\n%s\n", addr)
fmt.Printf("Wrote the keys into %s\n", file.Path())
return nil
}
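
Each externally supplied cosigner key is now validated up front by round-tripping it through bip32.DeserializeExtendedKey, so a malformed key fails at wallet creation rather than at signing time. A hedged sketch of that validation step in isolation, reusing the call shown above:

package main

import (
    "fmt"

    "github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet/bip32"
    "github.com/pkg/errors"
)

// validateExtendedPublicKey mirrors the check create() performs on every
// pasted cosigner key: deserialize it and wrap any failure with context.
func validateExtendedPublicKey(extendedPublicKey string) error {
    _, err := bip32.DeserializeExtendedKey(extendedPublicKey)
    if err != nil {
        return errors.Wrapf(err, "%s is invalid extended public key", extendedPublicKey)
    }
    return nil
}

func main() {
    if err := validateExtendedPublicKey("not-a-real-xpub"); err != nil {
        fmt.Println(err) // rejected before it can be written to the keys file
    }
}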

Some files were not shown because too many files have changed in this diff.