Compare commits

172 Commits
Michael Sutton
b8d36a1772 Modify DefaultTimeout to 120 seconds
A temporary workaround for nodes having trouble syncing (currently the download of pruning-point-related data during IBD takes more than 30 seconds)
2021-11-19 12:28:39 +02:00
Ori Newman
5c1ba9170e Don't set blocks from the pruning point anticone as the header select tip (#1850)
* Decrease the dial timeout to 1 second

* Don't set blocks from the pruning point anticone as the header selected tip.

Co-authored-by: Kaspa Profiler <>
2021-11-13 18:59:20 +02:00
Elichai Turkel
9d8c555bdf Fix a bug in the matrix ranking algorithm (#1849)
* Fix a bug in the matrix ranking algorithm

* Add tests and benchmarks for matrix generation and ranking

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-11-13 17:51:11 +02:00
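The ranking algorithm fixed here decides whether a candidate HeavyHash matrix is full-rank (a rank-deficient matrix would weaken the hash). As a rough illustration of what such a ranking step does, and not the exact kaspad code, here is a minimal Gaussian-elimination rank computation in Go:

```go
package main

import (
	"fmt"
	"math"
)

// rank computes the rank of an n×n matrix via Gaussian elimination.
// Illustrative sketch only; the real matrix generation and ranking live in
// the commit above.
func rank(matrix [][]float64) int {
	const epsilon = 1e-9
	n := len(matrix)
	// Work on a copy so the caller's matrix is left untouched.
	m := make([][]float64, n)
	for i := range matrix {
		m[i] = append([]float64(nil), matrix[i]...)
	}

	r := 0
	for col := 0; col < n && r < n; col++ {
		// Find a pivot row for this column.
		pivot := -1
		for row := r; row < n; row++ {
			if math.Abs(m[row][col]) > epsilon {
				pivot = row
				break
			}
		}
		if pivot < 0 {
			continue // Column is linearly dependent on earlier ones.
		}
		m[r], m[pivot] = m[pivot], m[r]
		// Eliminate this column from all other rows.
		for row := 0; row < n; row++ {
			if row != r && math.Abs(m[row][col]) > epsilon {
				factor := m[row][col] / m[r][col]
				for k := col; k < n; k++ {
					m[row][k] -= factor * m[r][k]
				}
			}
		}
		r++
	}
	return r
}

func main() {
	fmt.Println(rank([][]float64{{1, 2}, {2, 4}})) // 1: the rows are dependent
}
```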
Ori Newman
a2f574eab8 Update to version 0.11.3 2021-11-13 17:18:40 +02:00
Kaspa Profiler
7bed86dc1b Update changelog.txt 2021-11-11 09:30:12 +02:00
Ori Newman
9b81f5145e Increase p2p msg size and reset utxo after ibd (#1847)
* Don't build unnecessary binaries

* Reset UTXO index after IBD

* Enlarge max msg size to 1gb

* Fix UTXO chunks logic

* Revert UTXO set override change

* Fix sendPruningPointUTXOSet

* Increase tests timeout to 20 minutes

Co-authored-by: Kaspa Profiler <>
2021-11-11 09:27:07 +02:00
Kaspa Profiler
cd8341ef57 Update to version 0.11.2 2021-11-10 23:19:25 +02:00
Ori Newman
ad8bdbed21 Update changelog (#1845)
Co-authored-by: Kaspa Profiler <>
2021-11-09 00:32:56 +02:00
Elichai Turkel
7cdceb6df0 Cache the miner state (#1844)
* Implement a MinerState to cache the matrix and friends

* Modify the miner and related code to use the new MinerCache

* Change MinerState to State

* Make go lint happy

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Kaspa Profiler <>
2021-11-09 00:12:30 +02:00
stasatdaglabs
cc5248106e Update to version 0.11.1 2021-11-08 09:01:52 +02:00
Elichai Turkel
e3463b7268 Replace Keccak256 in oPoW with CSHAKE256 with domain separation (#1842)
* Replace keccak with CSHAKE256 in oPoW

* Add benchmarks to hash writers to compare blake2b to the CSHAKE

* Update genesis blocks

* Update tests

* Define genesis's block level to be the maximal one

* Add message to genesis coinbase

* Add comments to genesis coinbase

* Fix tests

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-11-07 18:36:30 +02:00
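cSHAKE256 lets each use of the hash carry its own domain-separation string, so the outer PoW hash and HeavyHash can never produce colliding digests for the same input. A minimal sketch using golang.org/x/crypto/sha3; the customization strings below are assumptions, not necessarily the exact domains kaspad registers:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/sha3"
)

// domainHash returns a 32-byte cSHAKE256 digest of data under the given
// domain-separation string.
func domainHash(domain string, data []byte) [32]byte {
	// N (the NIST-reserved function name) is left empty per SP 800-185
	// convention; the domain goes into S, the customization string.
	hasher := sha3.NewCShake256(nil, []byte(domain))
	hasher.Write(data)

	var digest [32]byte
	hasher.Read(digest[:])
	return digest
}

func main() {
	data := []byte("block header")
	fmt.Printf("%x\n", domainHash("ProofOfWorkHash", data)) // assumed domain
	fmt.Printf("%x\n", domainHash("HeavyHash", data))       // assumed domain
}
```

The two calls above yield unrelated digests for identical input, which is the whole point of the separation.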
Ori Newman
a2173ef80a Switch PoW to a keccak heavyhash variant (#1841)
* Add another hash domain for HeavyHash

* Add a xoShiRo256PlusPlus implementation

* Add a HeavyHash implementation

* Replace our current PoW algorithm with oPoW

* Change the pow hash to keccak256

* Fix genesis

* Fix tests

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-11-07 11:17:15 +02:00
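For reference, xoshiro256++ is a small-state PRNG whose entire update is a few xors, shifts, and rotations; kaspad seeds it from a hash so the HeavyHash matrix is derived deterministically. A sketch of the reference algorithm in Go (the seed values in main are arbitrary):

```go
package main

import "fmt"

// xoShiRo256PlusPlus sketches the PRNG this commit adds.
type xoShiRo256PlusPlus struct {
	s [4]uint64
}

func rotl64(x uint64, k uint) uint64 {
	return (x << k) | (x >> (64 - k))
}

// Uint64 advances the generator, following the reference xoshiro256++ update.
func (x *xoShiRo256PlusPlus) Uint64() uint64 {
	result := rotl64(x.s[0]+x.s[3], 23) + x.s[0]

	t := x.s[1] << 17
	x.s[2] ^= x.s[0]
	x.s[3] ^= x.s[1]
	x.s[1] ^= x.s[2]
	x.s[0] ^= x.s[3]
	x.s[2] ^= t
	x.s[3] = rotl64(x.s[3], 45)

	return result
}

func main() {
	// Seed values are arbitrary here; kaspad derives them from a hash.
	rng := &xoShiRo256PlusPlus{s: [4]uint64{1, 2, 3, 4}}
	fmt.Println(rng.Uint64(), rng.Uint64())
}
```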
stasatdaglabs
aeb4500b61 Add the daglabs-dev mainnet dnsseeder. (#1840) 2021-11-07 10:39:54 +02:00
Ori Newman
0a1daae319 Allow mainnet flag and raise wallet fee (#1838)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-11-07 10:04:27 +02:00
stasatdaglabs
131cd3357e Rename FixedSubsidySwitchHashRateDifference to FixedSubsidySwitchHashRateThreshold and set its value to 150GH/s. (#1837) 2021-11-07 09:33:39 +02:00
Ori Newman
ff72568d6b Fix pruning point anticone order (#1836)
* Send pruning point anticone in topological order
Fix a UTXO pagination bug
Lengthen the stabilization time for the last DAA test

* Extend "sudden hash rate drop" test length to 45 minutes

Co-authored-by: Kaspa Profiler <>
2021-11-07 08:21:34 +02:00
stasatdaglabs
2dddb650b9 Switch to a fixed block subsidy after a certain work threshold (#1831)
* Implement isBlockRewardFixed.

* Fix factory.go.

* Call isBlockRewardFixed from calcBlockSubsidy.

* Fix bad call to ghostdagDataStore.Get.

* Extract blue score and blue work from the header instead of from the ghostdagDataStore.

* Fix coinbasemanager constructor arguments order

* Format consensus_defaults.go

* Check the mainnet switch from the block's point of view rather than the virtual's.

* Don't call newBlockPruningPoint twice in buildBlock.

* Properly handle new pruning point blocks in isBlockRewardFixed.

* Use the correct variable.

* Add a comment explaining what we do when the pruning point is not found in isBlockRewardFixed.

* Implement TestBlockRewardSwitch.

* Add missing error handling.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-31 15:04:51 +02:00
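The switch works roughly as follows: estimate the network's average hash rate from data committed in the header (blue work and blue score, per the bullets above) and fix the subsidy once the estimate crosses the threshold. The formula below (blue work divided by blue score, assuming one block per second) is an illustrative assumption, not the exact kaspad computation:

```go
package main

import (
	"fmt"
	"math/big"
)

// isBlockRewardFixed sketches the switch this commit describes: once the
// estimated average hash rate crosses 150 GH/s, the block subsidy becomes
// fixed. The estimate below is an assumption for illustration.
func isBlockRewardFixed(headerBlueWork *big.Int, headerBlueScore uint64) bool {
	const fixedSubsidySwitchHashRateThreshold = 150_000_000_000 // 150 GH/s

	if headerBlueScore == 0 {
		return false // Genesis: no elapsed time to average over.
	}
	// Average hash rate is approximated as accumulated blue work divided by
	// elapsed seconds, taking blue score as a proxy for seconds at one
	// block per second.
	estimatedHashRate := new(big.Int).Div(
		headerBlueWork, new(big.Int).SetUint64(headerBlueScore))
	return estimatedHashRate.Cmp(big.NewInt(fixedSubsidySwitchHashRateThreshold)) >= 0
}

func main() {
	blueWork := new(big.Int).Mul(big.NewInt(200_000_000_000), big.NewInt(1000))
	fmt.Println(isBlockRewardFixed(blueWork, 1000)) // true: 200 GH/s > 150 GH/s
}
```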
Ori Newman
99aaacd649 Check blue score before requesting a pruning proof (#1835)
* Check blue score before requesting a pruning proof

* BuildPruningPointProof should return empty proof if the pruning point is genesis

* Don't fail many-tips if kaspad exits ungracefully
2021-10-31 12:48:18 +02:00
stasatdaglabs
77a344cc29 In IBD, validate the timestamps of the headers of the pruning point and selected tip (#1829)
* Implement validatePruningPointFutureHeaderTimestamps.

* Fix TestIBDWithPruning.

* Fix wrong logic.

* Add a comment.

* Fix a comment.

* Fix a variable name.

* Add a comment

* Fix TestIBDWithPruning.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-30 20:32:49 +03:00
stasatdaglabs
3dbc42b4f7 Implement the new block subsidy function (#1830)
* Replace the old blockSubsidy parameters with the new ones.

* Return subsidyGenesisReward if blockHash is the genesis hash.

* Traverse a block's past for the subsidy calculation.

* Partially implement SubsidyStore.

* Refer to SubsidyStore from CoinbaseManager.

* Wrap calcBlockSubsidy in getBlockSubsidy, which first checks the database.

* Fix finalityStore not calling GenerateShardingID.

* Implement calculateAveragePastSubsidy.

* Implement calculateMergeSetSubsidySum.

* Implement calculateSubsidyRandomVariable.

* Implement calcBlockSubsidy.

* Add a TODO about floats.

* Update the calcBlockSubsidy TODO.

* Use binary.LittleEndian in calculateSubsidyRandomVariable.

* Fix bad range in calculateSubsidyRandomVariable.

* Replace float64 with big.Rat everywhere except for subsidyRandomVariable.

* Fix a nil dereference.

* Use a random walk to approximate the normal distribution.

* In order to avoid unsupported fractional results from powInt64, flip the numerator and the denominator manually.

* Set standardDeviation to 0.25, MaxSompi to 10_000_000_000 * SompiPerKaspa and defaultSubsidyGenesisReward to 1_000.

* Set the standard deviation to 0.2.

* Use a binomial distribution instead of trying to estimate the normal distribution.

* Change some values around.

* Clamp the block subsidy.

* Remove the fake duplicate constants in the util package.

* Reduce MaxSompi to only 100m Kaspa to avoid hitting the uint64 ceiling.

* Lower MaxSompi further to avoid new and exciting ways for the uint64 ceiling to be hit.

* Remove debug logs.

* Fix a couple of failing tests.

* Fix TestBlockWindow.

* Fix limitTransactionCount sometimes crashing on index-out-of-bounds.

* In TrustedDataDataDAABlock, replace BlockHeader with DomainBlock

* In calculateAveragePastSubsidy, use blockWindow instead of doing a BFS manually.

* Remove the reference to DAGTopologyManager in coinbaseManager.

* Add subsidy to the coinbase payload.

* Get rid of the subsidy store and extract subsidies out of coinbase transactions.

* Keep a blockWindow amount of blocks under the virtual for IBD purposes.

* Manually remove the virtual genesis from the merge set.

* Fix simnet genesis.

* Fix TestPruning.

* Fix TestCheckBlockIsNotPruned.

* Fix TestBlockWindow.

* Fix TestCalculateSignatureHashSchnorr.

* Fix TestCalculateSignatureHashECDSA.

* Fix serializing the wrong value into the coinbase payload.

* Rename coinbaseOutputForBlueBlock to coinbaseOutputAndSubsidyForBlueBlock.

* Add a TODO about optimizing trusted data DAA window blocks.

* Expand on a comment in TestCheckBlockIsNotPruned.

* In calcBlockSubsidy, divide the big.Int numerator by the big.Int denominator instead of converting to float64.

* Clarify a comment.

* Rename SubsidyMinGenesisReward to MinSubsidy.

* Properly handle trusted data blocks in calculateMergeSetSubsidySum.

* Use the first two bytes of the selected parent's hash for randomness instead of math/rand.

* Restore maxSompi to what it used to be.

* Fix TestPruning.

* Fix TestAmountCreation.

* Fix TestBlockWindow.

* Fix TestAmountUnitConversions.

* Increase the timeout in many-tips to 30 minutes.

* Check coinbase subsidy for every block

* Re-rename functions

* Use shift instead of powInt64 to determine subsidyRandom

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-10-30 10:16:47 +03:00
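Distilled from the bullets above, the calculation's final shape is: average the subsidy over a past window, derive a small binomial random variable from the first two bytes of the selected parent's hash, apply it as a power-of-two shift (rather than powInt64), and clamp. The sketch below loosely illustrates that shape; names, the centering, and the clamping details are assumptions:

```go
package main

import (
	"fmt"
	"math/big"
	"math/bits"
)

// subsidyRandomVariable sketches the randomness source the commit settles
// on: a binomial variable from the first two bytes of the selected parent's
// hash (popcount of 16 coin flips; the centering is an assumption).
func subsidyRandomVariable(selectedParentHash [32]byte) int {
	coinFlips := uint16(selectedParentHash[0]) | uint16(selectedParentHash[1])<<8
	return bits.OnesCount16(coinFlips) - 8 // Binomial(16, 1/2), centered.
}

// calcBlockSubsidy sketches the final function's shape: scale the average
// past subsidy (a big.Rat) by a power of two chosen by the random variable,
// then clamp the integer result.
func calcBlockSubsidy(averagePastSubsidy *big.Rat, randomVariable int, minSubsidy, maxSubsidy uint64) uint64 {
	scaled := new(big.Rat).Set(averagePastSubsidy)
	if randomVariable >= 0 {
		scaled.Mul(scaled, new(big.Rat).SetInt64(int64(1)<<uint(randomVariable)))
	} else {
		scaled.Quo(scaled, new(big.Rat).SetInt64(int64(1)<<uint(-randomVariable)))
	}
	// Divide the big.Rat's numerator by its denominator (integer result),
	// per the bullet above, instead of converting to float64.
	subsidy := new(big.Int).Quo(scaled.Num(), scaled.Denom()).Uint64()
	if subsidy < minSubsidy {
		return minSubsidy
	}
	if subsidy > maxSubsidy {
		return maxSubsidy
	}
	return subsidy
}

func main() {
	var hash [32]byte
	hash[0], hash[1] = 0xff, 0x00 // 8 set bits: random variable is 0
	fmt.Println(subsidyRandomVariable(hash))
	fmt.Println(calcBlockSubsidy(big.NewRat(500_000_000, 1), 1, 1000, 10_000_000_000))
}
```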
Ori Newman
1b9be28613 Improve ExpectedHeaderPruningPoint performance (#1833)
* Improve ExpectedHeaderPruningPoint perf

* Add suggestedLowHash to nextPruningPointAndCandidateByBlockHash
2021-10-26 11:01:26 +03:00
Ori Newman
5dbb1da84b Implement pruning point proof (#1832)
* Calculate GHOSTDAG, reachability etc for each level

* Don't preallocate cache for dag stores except level 0 and reduce the number of connections in the integration test to 32

* Reduce the number of connections in the integration test to 16

* Increase page file

* BuildPruningPointProof

* BuildPruningPointProof

* Add PruningProofManager

* Implement ApplyPruningPointProof

* Add prefix and fix blockAtDepth and fill headersByLevel

* Some bug fixes

* Include all relevant blocks for each level in the proof

* Fix syncAndValidatePruningPointProof to return the right block hash

* Fix block window

* Fix isAncestorOfPruningPoint

* Ban for rule errors on pruning proof

* Find common ancestor for blockAtDepthMAtNextLevel

* Use pruning proof in TestValidateAndInsertImportedPruningPoint

* stage status and finality point for proof blocks

* Uncomment golint

* Change test timeouts

* Calculate merge set for ApplyPruningPointProof

* Increase test timeout

* Add better caching for daa window store

* Return to default timeout

* Add ErrPruningProofMissesBlocksBelowPruningPoint

* Add errDAAWindowBlockNotFound

* Force connection loop next iteration on connection manager stop

* Revert to Test64IncomingConnections

* Remove BlockAtDepth from DAGTraversalManager

* numBullies->16

* Set page file size to 8gb

* Increase p2p max message size

* Test64IncomingConnections->Test16IncomingConnections

* Add comment for PruningProofM

* Add comment in `func (c *ConnectionManager) Stop()`

* Rename isAncestorOfPruningPoint->isAncestorOfSelectedTip

* Revert page file to 16gb

* Improve ExpectedHeaderPruningPoint perf

* Fix comment

* Revert "Improve ExpectedHeaderPruningPoint perf"

This reverts commit bca1080e71.

* Don't test windows
2021-10-26 09:48:27 +03:00
Ori Newman
afaac28da1 Validate each level parents (#1827)
* Create BlockParentBuilder.

* Implement BuildParents.

* Explicitly set level 0 blocks to be the same as direct parents.

* Add checkIndirectParents to validateBlockHeaderInContext.

* Fix test_block_builder.go and BlockLevelParents::Equal.

* Don't check indirect parents for blocks with trusted data.

* Handle pruned blocks when building block level parents.

* Fix bad deletions from unprocessedXxxParents.

* Fix merge errors.

* Fix bad pruning point parent replaces.

* Fix duplicates in newBlockLevelParents.

* Skip checkIndirectParents

* Get rid of staging constant IDs

* Fix BuildParents

* Fix tests

* Add comments

* Change order of directParentHashes

* Get rid of maybeAddDirectParentParents

* Add comments

* Add blockToReferences type

* Use ParentsAtLevel

Co-authored-by: stasatdaglabs <stas@daglabs.com>
2021-09-13 14:22:00 +03:00
stasatdaglabs
0053ee788d Use the BlueWork declared in an orphan block's header instead of requesting it explicitly from the peer that sent us the orphan (#1828) 2021-09-13 13:13:03 +03:00
stasatdaglabs
af7e7de247 Add PruningPointProof to the P2P protocol (#1825)
* Add PruningPointProof to externalapi.

* Add BuildPruningPointProof and ValidatePruningPointProof to Consensus.

* Add the pruning point proof to the protocol.

* Add the pruning point proof to the wire package.

* Add PruningPointBlueWork.

* Make go vet happy.

* Properly initialize PruningPointProof in consensus.go.

* Validate pruning point blue work.

* Populate PruningPointBlueWork with the actual blue work of the pruning point.

* Revert "Populate PruningPointBlueWork with the actual blue work of the pruning point."

This reverts commit f2a9829998.

* Revert "Validate pruning point blue work."

This reverts commit c6a90c5d2c.

* Revert "Properly initialize PruningPointProof in consensus.go."

This reverts commit 9391574bbf.

* Revert "Add PruningPointBlueWork."

This reverts commit 48182f652a.

* Fix PruningPointProof and MsgPruningPointProof to be two-dimensional.

* Fix wire PruningPointProof to be two-dimensional.
2021-09-05 17:20:15 +03:00
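The last two bullets settle the proof's final shape: two-dimensional, one header list per block level. A toy sketch of that structure (type names abbreviated, not the real externalapi definitions):

```go
package main

import "fmt"

// blockHeader stands in for externalapi.BlockHeader; fields are elided.
type blockHeader struct{ hashString string }

// pruningPointProof sketches the two-dimensional shape this commit ends
// up with: one slice of headers per block level, so a single proof carries
// a separate chain of evidence for every level.
type pruningPointProof struct {
	headers [][]*blockHeader
}

func main() {
	proof := pruningPointProof{headers: [][]*blockHeader{
		{{"genesis"}, {"tip-at-level-0"}}, // level 0
		{{"tip-at-level-1"}},              // level 1
	}}
	fmt.Println(len(proof.headers), "levels in the proof")
}
```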
Ori Newman
02a08902a7 Fix current pruning point index cache (#1824)
* Fix ps.currentPruningPointIndexCache

* Remove redundant dependency from block builder

* Fix typo
2021-09-05 07:37:25 +03:00
Ori Newman
d9bc94a2a8 Replace header finality point with pruning point and enforce finality rules on IBD with headers proof (#1823)
* Replace header finality point with pruning point

* Fix TestTransactionAcceptance

* Fix pruning candidate

* Store all past pruning points

* Pass pruning points on IBD

* Add blue score to block header

* Simplify ArePruningPointsInValidChain

* Fix static check errors

* Fix genesis

* Renames and text fixing

* Use ExpectedHeaderPruningPoint in block builder

* Fix TestCheckPruningPointViolation
2021-08-31 08:01:48 +03:00
stasatdaglabs
837dac68b5 Update block headers to include multiple levels of parent blocks (#1822)
* Replace the old parents in the block header with BlockLevelParents.

* Begin fixing compilation errors.

* Implement database serialization for block level parents.

* Implement p2p serialization for block level parents.

* Implement rpc serialization for block level parents.

* Add DirectParents() to the block header interface.

* Use DirectParents() instead of Parents() in some places.

* Revert test_block_builder.go.

* Add block level parents to hash serialization.

* Use the zero hash for genesis finality points.

* Fix failing tests.

* Fix a variable name.

* Update headerEstimatedSerializedSize.

* Add comments in blocklevelparents.go.

* Fix the rpc-stability stability test.

* Change the field number for `parents` fields in p2p.proto and rpc.proto.

* Remove MsgBlockHeader::NumParentBlocks.
2021-08-24 12:06:39 +03:00
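The resulting header shape, sketched loosely below: a BlockLevelParents-style list per level, with level 0 doubling as the direct (DAG) parents that the new DirectParents() accessor exposes. Types here are simplified stand-ins:

```go
package main

import "fmt"

// domainHash stands in for externalapi.DomainHash.
type domainHash [32]byte

// blockLevelParents is one level's parent hashes, mirroring the
// BlockLevelParents type this commit introduces (sketch, not the real API).
type blockLevelParents []domainHash

// header shows the change in shape: instead of a flat parent list, the
// header stores one parent list per block level.
type header struct {
	parents []blockLevelParents
}

// directParents returns level 0: the block's actual DAG parents, which is
// what the commit's DirectParents() accessor exposes.
func (h *header) directParents() blockLevelParents {
	if len(h.parents) == 0 {
		return nil
	}
	return h.parents[0]
}

func main() {
	h := &header{parents: []blockLevelParents{
		{domainHash{1}, domainHash{2}}, // level 0: direct parents
		{domainHash{3}},                // level 1
	}}
	fmt.Println(len(h.directParents()), "direct parents")
}
```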
Ori Newman
ba5880fab1 Fix pruning candidate (#1821) 2021-08-23 07:26:43 +03:00
stasatdaglabs
7b5720a155 Implement GHOST (#1819)
* Implement GHOST.

* Implement TestGHOST.

* Make GHOST() take arbitrary subDAGs.

* Hold RootHashes in SubDAG rather than one GenesisHash.

* Select which root the GHOST chain starts with instead of passing a lowHash.

* If two child hashes have the same future size, decide which one is larger using the block hash.

* Extract blockHashWithLargestFutureSize to a separate function.

* Calculate future size for each block individually.

* Make TestGHOST deterministic.

* Increase the timeout for connecting 128 connections in TestRPCMaxInboundConnections.

* Implement BenchmarkGHOST.

* Fix an infinite loop.

* Use much larger benchmark data.

* Optimize `futureSizes` using reverse merge sets.

* Temporarily make the benchmark data smaller while GHOST is being optimized.

* Fix a bug in futureSizes.

* Fix a bug in populateReverseMergeSet.

* Choose a selectedChild at random instead of the one with the largest reverse merge set size.

* Rename populateReverseMergeSet to calculateReverseMergeSet.

* Use reachability to resolve isDescendantOf.

* Extract heightMaps to a separate object.

* Iterate using height maps in futureSizes.

* Don't store reverse merge sets in memory.

* Change calculateReverseMergeSet to calculateReverseMergeSetSize.

* Fix bad initial reverseMergeSetSize.

* Optimize calculateReverseMergeSetSize.

* Enlarge the benchmark data to 86k blocks.
2021-08-19 13:59:43 +03:00
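The selection rule described above (largest future set wins, ties broken by the larger block hash) can be sketched as a single step of the chain walk; futureSizes is assumed precomputed, which is what most of the bullets above optimize:

```go
package main

import (
	"bytes"
	"fmt"
)

type hash [32]byte

// ghostStep picks, among a block's children, the one whose future set is
// largest, breaking ties by the larger block hash, per the bullets above.
func ghostStep(children []hash, futureSizes map[hash]uint64) hash {
	var selected hash
	var selectedFutureSize uint64
	for i, child := range children {
		futureSize := futureSizes[child]
		if i == 0 || futureSize > selectedFutureSize ||
			(futureSize == selectedFutureSize &&
				bytes.Compare(child[:], selected[:]) > 0) {
			selected = child
			selectedFutureSize = futureSize
		}
	}
	return selected
}

func main() {
	a, b := hash{0x01}, hash{0x02}
	sizes := map[hash]uint64{a: 5, b: 5} // Tie: b wins on hash comparison.
	fmt.Println(ghostStep([]hash{a, b}, sizes) == b)
}
```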
stasatdaglabs
65b5a080e4 Fix the RPCClient leaking connections (#1820)
* Fix the RPCClient leaking connections.

* Wrap the error return from GetInfo.
2021-08-16 15:59:35 +03:00
stasatdaglabs
ce17348175 Limit the amount of inbound RPC connections (#1818)
* Limit the amount of inbound RPC connections.

* Increment/decrement the right variable.

* Implement TestRPCMaxInboundConnections.

* Make go vet happy.

* Increase RPCMaxInboundConnections to 128.

* Set NUM_CLIENTS=128 in the rpc-idle-clients stability test.

* Explain why the P2P server has unlimited inbound connections.
2021-08-12 14:40:49 +03:00
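A minimal sketch of the gating this commit adds, assuming a simple guarded counter; the real code lives in the RPC server, but the increment/decrement symmetry the second bullet fixes is the essence, and the P2P side deliberately stays unlimited:

```go
package main

import (
	"fmt"
	"sync"
)

// rpcConnectionGate counts inbound RPC connections and rejects new ones
// past a maximum (128 after this commit).
type rpcConnectionGate struct {
	mutex              sync.Mutex
	inboundConnections int
	maxInbound         int
}

// tryAcquire increments the counter only if there is room; callers must
// pair every successful acquire with a release.
func (g *rpcConnectionGate) tryAcquire() bool {
	g.mutex.Lock()
	defer g.mutex.Unlock()
	if g.inboundConnections >= g.maxInbound {
		return false // Reject: too many inbound RPC connections.
	}
	g.inboundConnections++
	return true
}

func (g *rpcConnectionGate) release() {
	g.mutex.Lock()
	defer g.mutex.Unlock()
	g.inboundConnections--
}

func main() {
	gate := &rpcConnectionGate{maxInbound: 128}
	fmt.Println(gate.tryAcquire()) // true until 128 connections are held
	gate.release()
}
```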
stasatdaglabs
d922ee1be2 Add header commitments for DAA score, blue work, and finality points (#1817)
* Add DAAScore, BlueWork, and FinalityPoint to externalapi.BlockHeader.

* Add DAAScore, BlueWork, and FinalityPoint to NewImmutableBlockHeader and fix compilation errors.

* Add DAAScore, BlueWork, and FinalityPoint to protowire header types and fix failing tests.

* Check for header DAA score in validateDifficulty.

* Add DAA score to buildBlock.

* Fix failing tests.

* Add a blue work check in validateDifficultyDAAAndBlueWork.

* Add blue work to buildBlock and fix failing tests.

* Add finality point validation to ValidateHeaderInContext.

* Fix genesis blocks' finality points.

* Add finalityPoint to blockBuilder.

* Fix tests that failed due to missing reachability data.

* Make blockBuilder use VirtualFinalityPoint instead of directly calling FinalityPoint with the virtual hash.

* Don't validate the finality point for blocks with trusted data.

* Add debug logs.

* Skip finality point validation for block whose finality points are the virtual genesis.

* Revert "Add debug logs."

This reverts commit 3c18f519cc.

* Move checkDAAScore and checkBlueWork to validateBlockHeaderInContext.

* Add checkCoinbaseBlueScore to validateBodyInContext.

* Fix failing tests.

* Add DAAScore, blueWork, and finalityPoint to blocks' hashes.

* Generate new genesis blocks.

* Fix failing tests.

* In BuildUTXOInvalidBlock, get the bits from StageDAADataAndReturnRequiredDifficulty instead of calling RequiredDifficulty separately.
2021-08-12 13:25:00 +03:00
stasatdaglabs
4132891ac9 In calculateDiffBetweenPreviousAndCurrentPruningPoints, collect diffChild hashes instead of UTXODiffs to give the GC a chance to clean up UTXODiffs. (#1815)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-08-08 12:46:21 +03:00
stasatdaglabs
2094f4facf Decrease the size of the small chains in many-small-chains-and-one-big-chain.json, since the merge set size limit was reduced to k*10. (#1816) 2021-08-08 11:22:27 +03:00
stasatdaglabs
2de68f43f0 Use blockStatusStore instead of blockStore in missingBlockBodyHashes. (#1814) 2021-08-05 09:45:09 +03:00
Ori Newman
d748089a14 Update the virtual after overriding the virtual UTXO set (#1811)
* Update the virtual after overriding the virtual utxo set

* Put the updateVirtual inside importVirtualUTXOSetAndPruningPointUTXOSet

* Add pruningPoint to importVirtualUTXOSetAndPruningPointUTXOSet

* Remove sanity check
2021-08-02 17:02:15 +03:00
stasatdaglabs
7d1071a9b1 Update testnet version to testnet-6 (#1808)
* Update testnet version to testnet-6.

* Fix failing test.
2021-07-29 10:12:23 +03:00
Ori Newman
f26a7fdedf Return headers first (#1806)
* Return headers first

* Delete TestHandleRelayInvs

* resolve virtual only after IBD

* Fix ResolveVirtual

* Fix comments and variable names
2021-07-27 17:07:29 +03:00
Ori Newman
d207888b67 Implement pruned headers node (#1787)
* Pruning headers p2p basic structure

* Remove headers-first

* Fix consensus tests except TestValidateAndInsertPruningPointWithSideBlocks and TestValidateAndInsertImportedPruningPoint

* Add virtual genesis

* Implement PruningPointAndItsAnticoneWithMetaData

* Start fixing TestValidateAndInsertImportedPruningPoint

* Fix TestValidateAndInsertImportedPruningPoint

* Fix BlockWindow

* Update p2p and gRPC

* Fix all tests except TestHandleRelayInvs

* Delete TestHandleRelayInvs parts that cover the old IBD flow

* Fix lint errors

* Add p2p_request_ibd_blocks.go

* Clean code

* Make MsgBlockWithMetaData implement its own representation

* Remove redundant check if highest share block is below the pruning point

* Fix TestCheckLockTimeVerifyConditionedByAbsoluteTimeWithWrongLockTime

* Fix comments, errors and names

* Fix window size to the real value

* Check reindex root after each block at TestUpdateReindexRoot

* Remove irrelevant check

* Renames and comments

* Remove redundant argument from sendGetBlockLocator

* Don't delete staging on non-recoverable errors

* Renames and comments

* Remove redundant code

* Commit changes inside ResolveVirtual

* Add comment to IsRecoverableError

* Remove blocksWithMetaDataGHOSTDAGDataStore

* Increase windows pagefile

* Move DeleteStagingConsensus outside of defer

* Get rid of mustAccepted in receiveBlockWithMetaData

* Ban on invalid pruning point

* Rename interface_datastructures_daawindowstore.go to interface_datastructures_blocks_with_meta_data_daa_window_store.go

* Change GetVirtualSelectedParentChainFromBlockResponseMessage and VirtualSelectedParentChainChangedNotificationMessage to show only added block hashes
* Remove ResolveVirtual
* Use externalapi.ConsensusWrapper inside MiningManager
* Fix pruningmanager.blockwithmetadata

* Set pruning point selected child when importing the pruning point UTXO set

* Change virtual genesis hash

* replace the selected parent with virtual genesis on removePrunedBlocksFromGHOSTDAGData

* Get rid of low hash in block locators

* Remove +1 from everywhere we use difficultyAdjustmentWindowSize and increase the default value by one

* Add comments about consensus wrapper

* Don't use separate staging area when resolving resolveBlockStatus

* Fix netsync stability test

* Fix checkResolveVirtual

* Rename ConsensusWrapper->ConsensusReference

* Get rid of blockHeapNode

* Add comment to defaultDifficultyAdjustmentWindowSize

* Add SelectedChild to DAGTraversalManager

* Remove redundant copy

* Rename blockWindowHeap->calculateBlockWindowHeap

* Move isVirtualGenesisOnlyParent to utils

* Change BlockWithMetaData->BlockWithTrustedData

* Get rid of maxReasonLength

* Split IBD to 100 blocks each time

* Fix a bug in calculateBlockWindowHeap

* Switch to trusted data when encountering virtual genesis in blockWithTrustedData

* Move ConsensusReference to domain

* Update ConsensusReference comment

* Add comment

* Rename shouldNotAddGenesis->skipAddingGenesis
2021-07-26 12:24:07 +03:00
stasatdaglabs
38e2ee1b43 Change the log level of the transaction propagation log from Info to Debug. (#1804) 2021-07-19 10:30:29 +03:00
talelbaz
aba44e7bfb Disable relative time lock by time (#1800)
* ignore type flag

* Ignore type flag of relative time lock - interpret as DAA score

* Split verifyLockTime into functions with and without threshold. Relative lockTimes don't need a threshold check

* Change function name and order of the functions calls

Co-authored-by: tal <tal@daglabs.com>
2021-07-18 17:52:16 +03:00
Constantine Bitensky
c731d74bc0 Replace queue by stack in GetRedeemer (#1802) 2021-07-15 15:02:06 +03:00
Svarog
60e7a8ebed Make transaction propagation much more frequent (#1799)
* Make transaction propagation much more frequent

* Update f.transactionIDsToPropagate after broadcasting
2021-07-14 16:06:24 +03:00
Svarog
369a3bac09 Limit block mass instead of merge set limit + Introduce SigOpCount to TransactionInput (#1790)
* Update constants

* Add to transaction SigOpCount

* Update mass calculation, and move it from InContext to InIsolation

* Update block validation accordingly

* Add SigOpCount validation during TransactionInContext

* Remove checking of mass vs maxMassAcceptedByBlock from consensusStateManager

* Update mining manager with latest changes

* Add SigOpCount to MsgTx.Copy()

* Fix initTestTransactionAcceptanceDataForClone

* Fix all tests in transaction_equal_clone_test.go

* Fix TestBlockMass

* Fix tests in transactionvalidator package

* Add SigOpCount to sighash

* Fix TestPruningDepth

* Fix problems in libkaspawallet

* Fix integration tests

* Fix CalculateSignatureHash tests

* Remove remaining places talking about block size

* Add sanity check to checkBlockMass to make sure all transactions have their mass filled

* always add own sigOpCount to sigHash

* Update protowire/rpc.md

* Start working on removing any remaining reference to block/tx size

* Update rpc transaction verbose data to include mass rather than size

* Convert verboseData and block size check to mass

* Remove remaining usages of tx size in mempool

* Move transactionEstimatedSerializedSize to transactionvalidator

* Add PopulateMass to fakeRelayInvsContext

* Move PopulateMass to the beginning of ValidateAndInsertTransaction + fix it

* Assign mass a new number for backward-compatibility
2021-07-14 14:21:57 +03:00
talelbaz
8022e4cbea Validate locktime when admitted into mempool and when building a block. (#1794)
* Validate locktime when admitted into mempool and when building a block. Also, fix isFinalized to use DAA score instead of blue score.

* Change the function name:ValidateTransactionInContextIgnoringUTXO

Co-authored-by: tal <tal@daglabs.com>
2021-07-14 11:00:03 +03:00
talelbaz
28ac77b202 Tests for timelock - check lock time verify (CLTV) (#1751)
* Create a file

* Add tests for lockTime - CLTV scripts conditioned by time and block height

* Add a handle for an unhandled error.

* Renamed the test file

* Fix typo

* Add a counter for current block height.

* Change variable name

* Adds a test for wrong lock time, removed fundingTransaction variable

* Fix LockTimeThreshold constant, fix opcodeCheckLockTimeVerify and opcodeCheckSequenceVerify(padding in the end), add support for sequence and lock time number in the script builder, add more checks to the CLTV test.

* Call AddData instead of addData. Rename fixedSize to unpaddedSize

* Creating wrapper functions to lockTime&sequence numbers that call to a shared function in script builder.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-08 18:09:45 +03:00
Constantine Bitensky
28af7eb596 Add stability-fast action to pull requests (#1791)
* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests

* Add stability-fast action to pull requests
2021-07-08 16:08:21 +03:00
stasatdaglabs
a4d241c30a Batch transaction inv messages (#1788)
* Implement EnqueueTransactionIDsForPropagation.

* Fix tests.

* Fix TxRelayTest.

* Add a log for transaction propagation.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-07 17:54:57 +03:00
stasatdaglabs
487fab0e2b Implement a stability test to stress test the DAA algorithm (#1767)
* Copy over boilerplate and begin implementing TestDAA.

* Implement a fairly reliable method of hashing at a certain hashrate.

* Convert the DAA test to an application.

* Start kaspad and make sure that hashrate throttling works with that as well.

* Finish implementing testConstantHashRate.

* Tidied up a bit.

* Convert TestDAA back into a go test.

* Reorganize TestDAA to be more like a traditional test.

* Add sudden hashrate drop/jump tests.

* Simplify targetHashNanosecondsFunction.

* Improve progress logs.

* Add more tests.

* Remove the no-longer relevant `hashes` part of targetHashNanosecondsFunction.

* Implement a constant hashrate increase test.

* Implement a constant hashrate decrease test.

* Give the correct run duration to the constant hashrate decrease test.

* Add cooldowns to exponential functions.

* Add run.sh to the DAA test.

* Add a README.

* Add `daa` to run-slow.sh.

* Make go lint happy.

* Fix the README's title.

* Keep running tests even if one of them failed on high block rate deviation.

* Fix hashrate peak/valley tests.

* Preallocate arrays for hash and mining durations.

* Add more statistics to the "mined block" log.

* Make sure runDAATest stops when it's supposed to.

* Add a newline after "5 minute cooldown."

* Fix variable names.

* Rename totalElapsedTime to tatalElapsedDuration.

* In measureMachineHashNanoseconds, generate a random nonce only once.

* In runDAATest, add "DAA" to the start/finish log.

* Remove --logdir from kaspadRunCommand.

* In runDAATest, enlarge the nonce range to the entirety of uint64.

* Explain what targetHashNanosecondsFunction is.

* Move RunKaspadForTesting into common.

* Rename runForDuration to loopForDuration.

* Make go lint happy.

* Extract fetchBlockForMining to a separate function.

* Extract waitUntilTargetHashDurationHadElapsed to a separate function.

* Extract pushHashDuration and pushMiningDuration to separate functions.

* Extract logMinedBlockStatsAndUpdateStatFields to a separate function.

* Extract submitMinedBlock to a separate function.

* Extract tryNonceForMiningAndIncrementNonce to a separate function.

* Add comments.

* Use a rolling average instead of appending to an array for performance/accuracy.

* Change a word in a comment.

* Explain why we wait for five minutes at the end of the exponential increase/decrease tests.

Co-authored-by: Svarog <feanorr@gmail.com>
2021-07-07 16:14:22 +03:00
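The "fairly reliable method of hashing at a certain hashrate" boils down to pacing: compute the nanoseconds each hash should take at the target rate, then after every attempt wait until that much time has elapsed since the previous one. A minimal sketch, with tryNonce standing in for the test's hashing step:

```go
package main

import (
	"fmt"
	"time"
)

// loopAtHashRate hashes at roughly targetHashesPerSecond for the given
// duration. The real test adds rolling-average bookkeeping on top of the
// same idea.
func loopAtHashRate(targetHashesPerSecond int64, duration time.Duration, tryNonce func()) {
	targetNanosecondsPerHash := time.Second.Nanoseconds() / targetHashesPerSecond
	start := time.Now()
	lastHashTime := start
	for time.Since(start) < duration {
		tryNonce()
		// Wait out the remainder of the per-hash budget.
		for time.Since(lastHashTime).Nanoseconds() < targetNanosecondsPerHash {
		}
		lastHashTime = time.Now()
	}
}

func main() {
	hashes := 0
	loopAtHashRate(100, time.Second, func() { hashes++ })
	fmt.Println(hashes, "hashes in ~1s at a 100 hash/s target")
}
```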
stasatdaglabs
2f272cd517 Expire old transactions when both an amount of DAA and an amount of time had passed. (#1784)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-06 15:23:53 +03:00
Constantine Bitensky
e3a6d9e49a Added RPC connection server version checking, fixes https://github.com/kaspanet/kaspad/issues/1047 (#1783)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-07-06 15:10:35 +03:00
Svarog
069ee26e84 Adds name to route, and writes it in every error message (#1777)
* Adds name to route, and writes it in every error message

* Update all calls with route name

* Fixed a few missed points

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-07-04 14:40:27 +03:00
Svarog
61aa15fd61 Update lastRebroadcastTime when we are rebroadcasting + Add some logs to mempool (#1776)
* Update lastRebroadcastTime when we are rebroadcasting

* Add some logs to mempool

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-07-04 12:08:44 +03:00
Svarog
f7cce5cb39 Cache virtual past median time (#1775)
* Add cache for virtual pastMedianTime

* Implement InvalidateVirtualPastMedianTimeCache for mocPastMedianTimeManager
2021-07-04 11:47:43 +03:00
cbitensky
2f7a1395e7 Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween (#1772)
* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

* Make use of maxBlocks instead of maxBlueScoreDifference in antiPastHashesBetween

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-06-30 17:21:22 +03:00
Svarog
8b1ac86532 Modify locktime thresholds to accommodate 64 bits and millisecond timestamps (#1770)
* Change SequenceLockTimeDisabled to 1 << 63

* Move LockTimeThreshold to constants

* Update locktime constants according to new proposal

* Fix opcodeCheckSequenceVerify and failed tests

* Disallow numbers above 8 bytes in makeScriptNum

* Use littleEndian.Uint64 for sequence instead of ScriptNum

* Update comments on constants

* Update some more comments
2021-06-30 10:57:09 +03:00
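After this change, one uint64 lock-time field carries two meanings, split by a threshold: below it the value is a DAA score, at or above it a millisecond UNIX timestamp, with bit 63 reserved to disable relative locks. A sketch; the threshold value here is an assumption:

```go
package main

import "fmt"

// These mirror the constants this commit reworks for 64-bit lock times and
// millisecond timestamps. The threshold value is assumed for illustration.
const (
	lockTimeThreshold        uint64 = 500_000_000_000 // assumed value
	sequenceLockTimeDisabled uint64 = 1 << 63
)

// isLockTimeActive shows how the two interpretations split: the same field
// is compared against either the DAA score or the median time, depending
// on which side of the threshold it falls.
func isLockTimeActive(lockTime, daaScore, medianTimeMilliseconds uint64) bool {
	if lockTime < lockTimeThreshold {
		return lockTime > daaScore // Interpreted as a DAA score.
	}
	return lockTime > medianTimeMilliseconds // Interpreted as milliseconds.
}

func main() {
	fmt.Println(isLockTimeActive(1_000, 999, 0))                   // true: DAA lock still active
	fmt.Println(sequenceLockTimeDisabled == 0x8000_0000_0000_0000) // true
}
```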
Svarog
ab721f3ad6 All orphans inputs should be added to op.orphansByPreviousOutpoint even if outpoint is not missing (#1766)
* All orphans inputs should be added to op.orphansByPreviousOutpoint even if outpoint is not missing

* Remove redundant log

* processOrphansAfterAcceptedTransaction: Check that UTXOEntry is empty before filling it

* Don't remove redeemers in expireOrphanTransactions
2021-06-24 17:09:16 +03:00
Svarog
798c5fab7d Add allowOrphans to rpcclient.SubmitTransaction (#1765) 2021-06-24 15:23:10 +03:00
Svarog
c13a4d90ed Mempool redesign (#1752)
* Added model and stubs for all main methods

* Add constructors to all main objects

* Implement BlockCandidateTransactions

* implement expireOldTransactions and expireOrphanTransactions

* Rename isHighPriority to neverExpires

* Add stub for checkDoubleSpends

* Revert "Rename isHighPriority to neverExpires"

This reverts commit b2da9a4a00.

* Implement transactionsOrderedByFeeRate

* Orphan maps should be idToOrphan

* Add error.go to mempool

* Invert the condition for banning when mempool rejects a transaction

* Move all model objects to model package

* Implement getParentsInPool

* Implemented mempoolUTXOSet.addTransaction

* Implement removeTransaction, remove sanity checks

* Implemented mempoolUTXOSet.checkDoubleSpends

* Implemented removeOrphan

* Implement removeOrphan

* Implement maybeAddOrphan and AddOrphan

* Implemented processOrphansAfterAcceptedTransaction

* Implement transactionsPool.addTransaction

* Implement RemoveTransaction

* If a transaction was removed from the mempool - update it's redeemers in orphan pool as well

* Use maximumOrphanTransactionCount

* Add allowOrphans to ValidateAndInsertTransaction stub

* Implement validateTransaction functions

* Implement fillInputs

* Implement ValidateAndInsertTransaction

* Implement HandleNewBlockTransactions

* Implement missing mempool interface methods

* Add comments to exported functions

* Call ValidateTransactionInIsolation where needed

* Implement RevalidateHighPriorityTransactions

* Rewire kaspad to use new mempool, and fix compilation errors

* Update rebroadcast logic to use new structure

* Handle non-standard transaction errors properly

* Add mutex to mempool

* bugfix: GetTransaction panics when ok is false

* properly calculate targetBlocksPerSecond in config.go

* Fix various lint errors and tests

* Fix expected text in test for duplicate transactions

* Skip the coinbase transaction in HandleNewBlockTransactions

* Unorphan the correct transactions

* Call ValidateTransactionAndPopulateWithConsensusData on unorphanTransaction

* Re-apply policy_test as check_transactions_standard_test

* removeTransaction: Remove redeemers in orphan pool as well

* Remove redundant check for uint64 < 0

* Export and rename isDust -> IsTransactionOutputDust to allow usage by rothschild

* Add allowOrphan to SubmitTransaction RPC request

* Remove all implementation from mempool.go

* tidy go mod

* Don't pass acceptedOrphans to handleNewBlockTransactions

* Use t.Errorf instead of t.Fatalf

* Remove minimum relay fee from TestDust, as it's no longer configurable

* Add separate VirtualDAAScore method for faster retrieval where it's repeated multiple times

* Broadcast all transactions that were accepted

* Don't re-use GetVirtualDAAScore in GetVirtualInfo - this causes a deadlock

* Use real transaction count, and not Orphan

* Get mempool config from outside, incorporating values received from cli

* Use MinRelayFee and MaxOrphanTxs from global kaspad config

* Add explanation for the seemingly redundant check for transaction version in checkTransactionStandard

* Update some comment

* Convert creation of acceptedTransactions to a single line

* Move mempoolUTXOSet out of checkDoubleSpends

* Add test for attempt to insert double spend into mempool

* fillInputs: Skip check for coinbase - it's always false in mempool

* Clarify comment about removeRedeemers when removing random orphan

* Don't remove high-priority transactions in limitTransactionCount

* Use mempool.removeTransaction in limitTransactionCount

* Add mutex comment to handleNewBlockTransactions

* Return error from limitTransactionCount

* Pluralize the map types

* mempoolUTXOSet.removeTransaction: Don't restore utxo if it was not created in mempool

* Don't evacuate from orphanPool high-priority transactions

* Disallow double-spends in orphan pool

* Don't use exported (and locking) methods from inside mempool

* Check for double spends in mempool during revalidateTransaction

* Add checkOrphanDuplicate

* Add orphan to acceptedOrphans, not current

* Add TestHighPriorityTransactions

* Fix off-by-one error in limitTransactionCount

* Add TestRevalidateHighPriorityTransactions

* Remove checkDoubleSpends from revalidateTransaction

* Fix TestRevalidateHighPriorityTransactions

* Move check for MaximumOrphanCount to the beginning of maybeAddOrphan

* Rename all map type to singulateToSingularMap

* limitOrphanPool only after the orphan was added

* TestDoubleSpendInMempool: use createChildTxWhenParentTxWasAddedByConsensus instead of createTransactionWithUTXOEntry

* Fix some comments

* Have separate min/max transaction versions for mempool

* Add comment on defaultMaximumOrphanTransactionCount to keep it small as long as we have recursion

* Fix comment

* Rename: createChildTxWhenParentTxWasAddedByConsensus -> createChildTxWhereParentTxWasAddedByConsensus

* Handle error from createChildTxWhereParentTxWasAddedByConsensus

* Rename createChildTxWhereParentTxWasAddedByConsensus -> createChildAndParentTxsAndAddParentToConsensus

* Convert all MaximumXXX constants to uint64

* Add comment

* remove mutex comments
2021-06-23 15:49:20 +03:00
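The redesign's top-level model, as a rough sketch assembled from the bullets above: a transactions pool, an orphans pool, a mempool-local UTXO set, and one guarding mutex (the commit adds exactly such a mutex). All field types are simplified stand-ins:

```go
package main

import (
	"fmt"
	"sync"
)

type transactionID string

type mempoolTransaction struct{ neverExpires bool }

// mempool sketches the redesigned structure's shape.
type mempool struct {
	mutex sync.RWMutex

	transactionsPool map[transactionID]*mempoolTransaction
	orphansPool      map[transactionID]*mempoolTransaction
	mempoolUTXOSet   map[string]transactionID // outpoint -> spending transaction
}

// transactionCount reports real (non-orphan) transactions, matching the
// bullet "Use real transaction count, and not Orphan".
func (mp *mempool) transactionCount() int {
	mp.mutex.RLock()
	defer mp.mutex.RUnlock()
	return len(mp.transactionsPool)
}

func main() {
	mp := &mempool{
		transactionsPool: map[transactionID]*mempoolTransaction{},
		orphansPool:      map[transactionID]*mempoolTransaction{},
		mempoolUTXOSet:   map[string]transactionID{},
	}
	mp.transactionsPool["tx1"] = &mempoolTransaction{neverExpires: false}
	fmt.Println(mp.transactionCount())
}
```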
Constantine Bitensky
4ba8b14675 Make GetVirtualSelectedParentBlueScore work properly (#1764) 2021-06-21 15:56:37 +03:00
Constantine Bitensky
319cbce768 Friendly error messages for ban and unban with ip with brackets (#1762) 2021-06-21 11:42:45 +03:00
Constantine Bitensky
bdd42903b4 Remove NOP1..10 (#1761) 2021-06-20 16:52:12 +03:00
Svarog
9bedf84740 kaspawallet: Add timeout to connect to daemon, and print friendly error (#1756)
message if it's not available
2021-06-16 15:05:19 +03:00
Svarog
f317f51cdd Change IBD finished log to specify if completed successfully or not (#1754)
* Change IBD finished log to specify if completed successfully or not

* Move log to outside UnsetIBDRunning

* Style enhancement of the IBD finished string
2021-06-16 11:13:29 +03:00
Ori Newman
4207c82f5a Add database prefix (#1750)
* Add prefix to stores

* Add prefix to forgotten stores

* Add a special type for prefix

* Rename transaction->dbTx

* Change error message

* Use countKeyName

* Rename Temporary Consensus to Staging

* Add DeleteStagingConsensus to Domain interface

* Add lock to staging consensus

* Make prefix type-safer

* Use ioutil.TempDir instead of t.TempDir
2021-06-15 17:47:17 +03:00
Constantine Bitensky
70399dae2a Added a friendly error message when genesis hash is not relevant to database (#1745)
Co-authored-by: Constantine Bitensky <constantinebitensky@gmail.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-06-13 11:36:47 +03:00
Constantine Bitensky
2ae1b7853f Added comment to database lock error (#1746)
More readable database lock error string

Co-authored-by: Constantine Bitensky <constantinebitensky@gmail.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-06-10 19:49:08 +03:00
Constantine Bitensky
d53d040bee Add .github/workflows/stability-fast.yml to run stability test (#1744)
* Added .github/stability-fast.yml

Co-authored-by: Constantine Bitensky <constantine@daglabs.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-06-09 15:11:56 +03:00
Constantine Bitensky
79c74c482b Get connections from Seeder when no connections left (#1742)
Co-authored-by: Constantine Bitensky <constantinebitensky@gmail.com>
2021-06-08 17:42:13 +03:00
Ori Newman
3b0394eefe Skip solving the block if SkipProofOfWork (#1741) 2021-06-08 15:44:27 +03:00
tal
43e6467ff1 Fix merge errors 2021-06-06 17:00:29 +03:00
stasatdaglabs
363494ef7a Implement NotifyVirtualDaaScoreChanged (#1737)
* Add notifyVirtualDaaScoreChanged to protowire.

* Add notifyVirtualDaaScoreChanged to the rest of kaspad.

* Add notifyVirtualDaaScoreChanged to the rest of kaspad.

* Test the DAA score notification in TestVirtualSelectedParentBlueScore.

* Rename TestVirtualSelectedParentBlueScore to TestVirtualSelectedParentBlueScoreAndVirtualDAAScore.

(cherry picked from commit 83e631548f)
2021-06-06 16:46:02 +03:00
cbitensky
d1df97c4c5 added --password and --yes cmdline parameters (#1735)
* added --password and --yes cmdline parameters

* Renamed:
confPassword -> cmdLinePassword
yes -> forceOverride

Added password in ImportMnemonics

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-06-01 15:36:18 +03:00
Svarog
4f52a6de51 Check if err != nil returned from ConnectionManager.Ban() (#1736) 2021-05-31 10:50:12 +03:00
Svarog
4f4a8934e7 Add option to specify blockHash in EstimateNetworkHashesPerSecond (#1731)
* Add BlockHash optional parameter to EstimateNetworkBlockHashesPerSecond

* Allow to pass '-' for optional values in kaspactl

* Solve a division-by-zero in estimateNetworkHashesPerSecond

* Add BlockHash to toAppMessage/fromAppMessage functions

* Rename: topHash -> StartHash

* Return proper error message if provided startHash doesn't deserialize into a hash
2021-05-27 14:59:29 +03:00
Svarog
16ba2bd312 Export DefaultPath + Add logging to kaspawallet daemon (#1730)
* Export DefaultPath + Add logging to kaspawallet daemon

* Export purpose and CoinType instead of defaultPath

* Move TODO to correct place
2021-05-26 18:51:11 +03:00
Svarog
6613faee2d Invert coin-type and purpose in default derivation path (#1728) 2021-05-20 17:43:07 +03:00
Ori Newman
edc459ae1b Improve pickVirtualParents performance and add many-tips to the stability tests suite (#1725)
* First limit the candidates size to 3*csm.maxBlockParents before taking the bottom csm.maxBlockParents/2

* Change log level of printing all tips to Tracef

* Add many-tips to run-fast.sh and run-slow.sh

* Fix preallocation size

* Assign intermediate variables
2021-05-19 16:06:52 +03:00
talelbaz
d7f2cf81c0 Change merge set order to topological order (#1654)
* Change mergeSet to be ordered topologically.

* Add special condition for genesis.

* Add check that the coinbase is validated.

* Change names of variables(old: chainHash, blueHash).

* Fix the DAG diagram in the comment above the function.

* Fix variables names.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-05-19 14:40:55 +03:00
Ori Newman
4658f9d05c Implement BIP 39 and HD wallet features (#1705)
* Naive bip39 with address reuse

* Avoid address reuse in libkaspawallet

* Add wallet daemon

* Use daemon everywhere

* Add forceOverride

* Make CreateUnsignedTransaction endpoint receive amount in sompis

* Collect close UTXOs

* Filter out non-spendable UTXOs from selectUTXOs

* Use different paths for multisig and non multisig

* Fix tests to use non zero path

* Fix multisig cosigner index detection

* Add comments

* Fix dump_unencrypted_data.go according to bip39 and bip32

* Fix wrong derivation path for multisig on wallet creation

* Remove IsSynced endpoint and add validation if wallet is synced for the relevant endpoints

* Rename server address to daemon address

* Fix capacity for extendedPublicKeys

* Use ReadBytes instead of ReadLine

* Add validation when importing

* Increment before using index value, and use it as is

* Save keys file exactly where needed

* Use %+v printErrorAndExit

* Remove redundant consts

* Rename collectCloseUTXOs and collectFarUTXOs

* Move typedefs around

* Add comment to addressesToQuery

* Update collectUTXOsFromRecentAddresses comment about locks

* Split collectUTXOs to small functions

* Add sanity check

* Add addEntryToUTXOSet function

* Change validateIsSynced to isSynced

* Simplify createKeyPairsFromFunction logic

* Rename .Sync() to .Save()

* Fix typo

* Create bip39BitSize const

* Add consts to purposes

* Add multisig check for 'send'

* Rename updatedPSTxBytes to partiallySignedTransaction

* Change collectUTXOsFromFarAddresses's comment

* Use setters for last used indexes

* Don't use the pstx acronym

* Fix SetPath

* Remove spaces when reading lines

* Fix walletserver to daemonaddress

* Fix isUTXOSpendable to use DAA score

Co-authored-by: Svarog <feanorr@gmail.com>
2021-05-19 10:03:23 +03:00
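The path scheme these commits set up, sketched below: separate purposes for multisig and single-signer wallets, with kaspa's coin type in the second position (the ordering that #1728 above later fixes). The concrete purpose values and the coin type are assumptions for illustration:

```go
package main

import "fmt"

// defaultPath sketches the wallet's derivation-path scheme. The purpose
// values (BIP-44 style for single-signer, BIP-45 style for multisig) and
// the coin type are assumed, not confirmed by the commit text.
func defaultPath(multisig bool, account uint32) string {
	const coinType = 111111 // assumed kaspa coin type
	purpose := 44           // assumed single-signer purpose
	if multisig {
		purpose = 45 // assumed multisig purpose
	}
	return fmt.Sprintf("m/%d'/%d'/%d'", purpose, coinType, account)
}

func main() {
	fmt.Println(defaultPath(false, 0)) // m/44'/111111'/0'
	fmt.Println(defaultPath(true, 0))  // m/45'/111111'/0'
}
```

Using different purposes for the two wallet kinds keeps multisig and single-signer keys from ever colliding on the same derivation path, which is what the "Use different paths for multisig and non multisig" bullet is after.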
Ori Newman
010df3b0d3 Rename IncludeTransactionVerboseData flag to IncludeTransactions (#1717)
* Rename IncludeTransactionVerboseData flag to IncludeTransactions

* Regenerate auto-generated files
2021-05-18 17:40:06 +03:00
stasatdaglabs
346598e67f Fix merge errors. 2021-05-18 17:09:11 +03:00
Ori Newman
268906a7ce Add VirtualDaaScore to GetBlockDagInfo (#1719)
Co-authored-by: Svarog <feanorr@gmail.com>

(cherry picked from commit 36c56f73bf)
2021-05-18 17:07:07 +03:00
stasatdaglabs
befc60b185 Update changelog.txt for v0.10.2.
(cherry picked from commit 91866dd61c)
2021-05-18 16:37:10 +03:00
Ori Newman
dd3e04e671 Fix overflow when checking coinbase maturity and don't ban peers that send transactions with immature spend (#1722)
* Fix overflow when checking coinbase maturity and don't ban peers that send transactions with immature spend

* Fix tests

Co-authored-by: Svarog <feanorr@gmail.com>
(cherry picked from commit a18f2f8802)
2021-05-18 16:33:29 +03:00
stasatdaglabs
9c743db4d6 Fix merge errors. 2021-05-18 16:33:01 +03:00
Ori Newman
eb3dba5c88 Fix calcTxSequenceLockFromReferencedUTXOEntries for loop break condition (#1723)
Co-authored-by: Svarog <feanorr@gmail.com>
(cherry picked from commit 04dc1947ff)
2021-05-18 16:30:26 +03:00
stasatdaglabs
e46e2580b1 Add VirtualDaaScore to GetBlockDagInfo (#1719)
Co-authored-by: Svarog <feanorr@gmail.com>
(cherry picked from commit 36c56f73bf)

# Conflicts:
#	infrastructure/network/netadapter/server/grpcserver/protowire/rpc.pb.go
2021-05-18 16:28:57 +03:00
Ori Newman
414f58fb90 serializeAddress should always serialize as IPv6, since it assumes the IP size is 16 bytes (#1720)
(cherry picked from commit b76ca41109)
2021-05-18 16:05:36 +03:00
Ori Newman
9df80957b1 Fix getBlock and getBlocks RPC commands to return blocks and transactions properly (#1716)
* Fix getBlock RPC command to return transactions

* Fix getBlocks RPC command to return transactions and blocks

* Add GetBlockEvenIfHeaderOnly and use it for getBlock and getBlocks

* Implement GetBlockEvenIfHeaderOnly for fakeRelayInvsContext

* Use less nested code

(cherry picked from commit 50fd86e287)
2021-05-13 16:00:42 +03:00
stasatdaglabs
268c9fa83c Update changelog for v0.10.1.
(cherry picked from commit 9e0b50c0dd)
2021-05-11 12:14:04 +03:00
Ori Newman
2e3592e351 Calculate virtual's acceptance data and multiset after importing a new pruning point (#1700)
(cherry picked from commit b405ea50e5)
2021-05-11 12:14:00 +03:00
talelbaz
19718ac102 Change removeTransactionAndItsChainedTransactions to be non-recursive (#1696)
* Change removeTransactionAndItsChainedTransactions to be non-recursive

* Split the variables assigning.

* Change names of function and variables.

* Append the correct queue.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-05-10 15:21:48 +03:00
talelbaz
28a8e96e65 New stability test - many tips (#1694)
* Unfinished code.

* Update the testnet version to testnet-5. (#1683)

* Generalize stability-tests/docker/Dockerfile. (#1685)

* Committed for rebasing.

* Adds stability-test many-tips, which tests kaspad handling with many tips in the DAG.

* Delete manytips_test.go.

* Add timeout to the test and create only one RPC client.

* Place the spawn before the for loop and remove a redundant condition.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-05-10 15:03:20 +03:00
Ori Newman
4df283934a Add a link to the kaspanet github project in the README (#1698)
* Add link to project

* Fix README.md
2021-05-04 11:56:51 +03:00
talelbaz
ab89efe3dc Update relayTransactions unit tests. (#1623)
* [NOD-1344] relaytransactions: simple unit tests

* [NOD-1344]  Add mid-complexity unit tests for relaytransactions
* Improve TestHandleRelayedTransactions sub tests
* Improve TestHandleRequestedTransactions sub tests

* [NOD-1344]  Fix Simple call test

* [NOD-1344]  Fix tests after redesign

* Divide transactionrelay_test.go to 2 separated tests and updates the tests.

* Changes due to review: change the test file names and the test function names, add new comments and fix typos.

* Delete an unnecessary comparison to True in the if statement condition.

* Update the branch to v0.11.0-dev.

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-05-02 16:16:57 +03:00
Ori Newman
fa16c30cf3 Implement bip32 (#1676)
* Implement bip32

* Unite private and public extended keys

* Change variable names

* Change test name and add comment

* Rename var name

* Rename ckd.go to child_key_derivation.go

* Rename ser32 -> serializeUint32

* Add PrivateKey method

* Rename Path -> DeriveFromPath

* Add comment to validateChecksum

* Remove redundant condition from parsePath

* Rename Fingerprint->ParentFingerprint

* Merge hardened and non-hardened paths in calcI

* Change fingerPrintFromPoint to method

* Move hash160 to hash.go

* Fix a bug in calcI

* Simplify doubleSha256

* Remove slice end bound

* Split long line

* Change KaspaMainnetPrivate/public to represent kprv/kpub

* Add comments

* Fix comment

* Copy base58 library to kaspad

* Add versions for all networks

* Change versions to hex

* Add comments

Co-authored-by: Svarog <feanorr@gmail.com>
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-04-28 15:27:16 +03:00
stasatdaglabs
c28366eb50 Add v0.10.0 to the changelog. (#1692)
(cherry picked from commit 9dd8136e4b)
2021-04-26 15:12:50 +03:00
stasatdaglabs
dc0bf56bf3 Fix the mempool-limits stability test (#1690)
* Add -v to the `go test` command.

* Generate a new keypair for mempool-limits.

* Set mempool-limits to time out only after 24 hours.

(cherry picked from commit eb1703b948)
2021-04-26 15:12:46 +03:00
stasatdaglabs
91de1807ad Generalize stability-tests/docker/Dockerfile. (#1685)
(cherry picked from commit a6da3251d0)
2021-04-26 15:12:44 +03:00
stasatdaglabs
830684167c Update the testnet version to testnet-5. (#1683)
(cherry picked from commit bf198948c4)
2021-04-26 15:12:41 +03:00
stasatdaglabs
1f56a68a28 Add an RPC command: EstimateNetworkHashesPerSecond (#1686)
* Implement EstimateNetworkHashesPerSecond.

* Fix failing tests.

* Add request/response messages to the .proto files.

* Add the EstimateNetworkHashesPerSecond RPC command.

* Add the EstimateNetworkHashesPerSecond RPC client function.

* Add the EstimateNetworkHashesPerSecond RPC command to kaspactl.

* Disallow windowSize lesser than 2.

* Fix wrong scale (milliseconds instead of seconds).

* Handle windowHashes being 0.
2021-04-22 15:18:21 +03:00
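The estimate itself is simple: blue work accumulated across a window of blocks, divided by the window's elapsed time in seconds. Most of the bullets above are guards around that division; a sketch:

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
)

// estimateNetworkHashesPerSecond sketches the estimate this RPC command
// computes. The guards mirror the commit's fixes: a window smaller than 2
// is rejected, timestamps are converted from milliseconds to seconds, and
// a zero-length window must not divide by zero.
func estimateNetworkHashesPerSecond(
	firstBlueWork, lastBlueWork *big.Int,
	firstTimestampMillis, lastTimestampMillis int64,
	windowSize int) (*big.Int, error) {

	if windowSize < 2 {
		return nil, errors.New("windowSize must be at least 2")
	}
	durationSeconds := (lastTimestampMillis - firstTimestampMillis) / 1000
	if durationSeconds <= 0 {
		return nil, errors.New("window duration must be positive")
	}
	workDifference := new(big.Int).Sub(lastBlueWork, firstBlueWork)
	return workDifference.Div(workDifference, big.NewInt(durationSeconds)), nil
}

func main() {
	first, last := big.NewInt(0), big.NewInt(10_000_000)
	hashRate, err := estimateNetworkHashesPerSecond(first, last, 0, 10_000, 10)
	fmt.Println(hashRate, err) // 1000000 <nil>: 10M work over 10 seconds
}
```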
stasatdaglabs
13a6b4cc51 Update to version 0.11.0 2021-04-20 13:40:11 +03:00
Svarog
dfd8b3423d Implement new mechanism for updating UTXO Diffs (#1671)
* Use selectedParent instead of selectedTip for non-selectedTip blocks in restoreSingleBlockStatus

* Cache the selectedParent for re-use in a resolveSingleBlockStatus chain

* Implement and use reverseUTXOSet

* Reverse blocks in correct order

* Support resolveBlockStatus without separate stagingAreas for usage of testConsensus

* Handle the case where the tip of the resolved block is not the next selectedTip

* Unify isResolveTip

* Some minor fixes and cleanup

* Add full finality window re-org test to stability-slow

* rename: useSeparateStagingAreasPerBlock -> useSeparateStagingAreaPerBlock

* Better logs in resolveSingleBlockStatus

* A few retouches to reverseUTXODiffs

* TEMPORARY COMMIT: EXTRAT ALL DIFFFROMS TO SEPARATE METHODS

* TEMPORARY COMMIT: REMOVE DIFFICULTY CHECKS IN DEVNET

* Don't pre-allocate in utxo-algebra, since the numbers are not known ahead-of-time

* Add some logs to reverseUTXODiffs

* Revert "TEMPORARY COMMIT: REMOVE DIFFICULTY CHECKS IN DEVNET"

This reverts commit c0af9dc6ad.

* Revert "TEMPORARY COMMIT: EXTRAT ALL DIFFFROMS TO SEPARATE METHODS"

This reverts commit 4fcca1b48c.

* Remove redundant parentheses

* Revise some logs messages

* Rename: oneBlockBeforeCurrentUTXOSet -> lastResolvedBlockUTXOSet

* Don't break if the block was resolved as invalid

* rename unverifiedBlocks to recentlyVerifiedBlocks in reverseUTXODiffs

* Add errors.New to the panic, for a stack trace

* Reverse the UTXODiffs after the main block has been commited

* Use the correct value for previousUTXODiff

* Add test for ReverseUTXODiff

* Fix some names and comments

* Update TestReverseUTXODiffs to use consensus.Config

* Fix comments mentioning 'oneBlockBeforeTip'
2021-04-20 10:26:55 +03:00
Svarog
28bfc0fb9c Move pow package from model to utils (#1681) 2021-04-19 15:35:36 +03:00
Elichai Turkel
83beae4463 Add consensus.Config as a wrapper for dagParams (#1680)
* Add a new consensus.Config wrapper to dagParams

* Update all tests to use consensus.Config
2021-04-19 09:07:34 +03:00
Elichai Turkel
a6ebe83198 Disable validate commitment unless explicitly requested (#1679)
* Add a flag for sanity check pruning point utxo set and do the sanity check only if it's enabled

* add description to EnableSanityCheckPruningUTXOSet

* review fix

Co-authored-by: Svarog <feanorr@gmail.com>
2021-04-18 17:55:17 +03:00
Svarog
acdc59b565 Add version file to database (#1678)
* Add version file to database

* Remove redundant code

* Check for version before opening the database, create version file after

* Create version file before opening the database
2021-04-14 12:35:39 +03:00
stasatdaglabs
15811b0bcb Fix missing VerboseData in BlockAddedNotifications. (#1675)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-04-13 11:35:06 +03:00
Svarog
a8a7e3dd9b Add windows to the CI + fix errors when testing on Windows (#1674)
* Add windows to the CI

* Cast syscall.Stdin into an integer

* DataDir -> AppDir in service_windows.go

* Rename mempool-limits package to something non-main

* Close database after re-assigining to it

* Up rpcTimout to 10 seconds
2021-04-12 14:53:34 +03:00
Svarog
3f193e9219 Use a separate folder for every network under ~/.kaspawallet (#1673) 2021-04-11 18:07:21 +03:00
stasatdaglabs
dfa24d8353 Implement a stability test for mempool limits (#1647)
* Copy some boilerplate from the other stability tests.

* Fix a copy+paste error in run.sh.

* Copy over some stability test boilerplate go code.

* Run kaspad in the background.

* Catch panics and initialize the RPC client.

* Mine enough blocks to fund filling up the mempool.

* Extract coinbase transactions out of the generated blocks.

* Tidy up a bit.

* Implement submitting transactions.

* Lower the amount of outputs in each transaction.

* Verify that the mempool size has the expected amount of transactions.

* Pregenerate enough funds before submitting the first transaction so that block creation doesn't interfere with the test.

* Empty mempool out by continuously adding blocks to the DAG.

* Handle orphan transactions when overfilling the mempool.

* Increase mempoolSizeLimit to 1m.

* Fix a comment.

* Fix a comment.

* Add mempool-limits to run-slow.sh.

* Rename generateTransactionsWithLotsOfOutputs to generateTransactionsWithMultipleOutputs.

* Rename generateCoinbaseTransaction to mineBlockAndGetCoinbaseTransaction.

* Make generateFundingCoinbaseTransactions return an object instead of store a global variable.

* Convert mempool-limits into a Go test.

* Convert panics to t.Fatalfs.

* Fix a comment.

* Increase mempoolSizeLimit to 1m.

* Run TestMempoolLimits only if RUN_STABILITY_TESTS is set.

* Move the run of mempool-limits in run-slow.sh.

* Add a comment above fundingCoinbaseTransactions.

* Make a couple of stylistic changes.

* Use transactionhelper.CoinbaseTransactionIndex instead of hardcoding 0.

* Make uninteresting errors print %+v instead of %s.

Co-authored-by: Svarog <feanorr@gmail.com>
2021-04-11 16:59:11 +03:00
Elichai Turkel
3c3ad1425d Make moving the pruning point faster (#1660)
* Add oldPruningPoint to pruningStore

* Make the pruning store work with utxo diff and return an iterator over pruning point utxoset

* Redesign pruning point utxo storage by creating a diff and modifying the old pruning utxo set

* Fix review comments

* Rename updatePruningPointUTXOSet
2021-04-11 11:17:13 +03:00
Svarog
9bb8123391 Don't include selectedParentHash in block verbose data if it's nil + Fix test vectors in rpc-stability (#1668)
* Fix submitBlockRequest in rpc-stability/commands.json

* Don't include selectedParentHash in block verbose data if it's nil (a.k.a. genesis)
2021-04-08 15:34:10 +03:00
Svarog
347dd8fc4b Support resolveBlockStatus without separate stagingAreas for usage of testConsensus (#1666) 2021-04-08 11:26:17 +03:00
Ori Newman
d2cccd2829 Add ECDSA support to the wallet (#1664)
* Add ECDSA support to the wallet

* Fix genkeypair

* Fix typo and rename var
2021-04-06 17:25:09 +03:00
Ori Newman
7186f83095 Add OpCheckMultiSigECDSA (#1663)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-04-06 16:29:16 +03:00
Ori Newman
5c394c2951 Add PubKeyECDSATy (#1662) 2021-04-06 15:56:31 +03:00
Ori Newman
a786cdc15e Add ECDSA support (#1657)
* Add ECDSA support

* Add domain separation to ECDSA sighash

* Use InfallibleWrite instead of Write

* Rename funcs

* Fix wrong use of vm.sigCache

* Add TestCalculateSignatureHashECDSA

* Add consts

* Fix comment and test name

* Move consts to the top

* Fix comment
2021-04-06 14:27:18 +03:00
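
Domain separation, mentioned in #1657 above, keeps the ECDSA sighash distinct from the Schnorr one, so a digest computed for one scheme can never be replayed under the other. A rough sketch of the concept using a keyed BLAKE2b hasher; the domain strings and helper are illustrative, not the actual consensushashing code:

    package main

    import (
        "fmt"

        "golang.org/x/crypto/blake2b"
    )

    // hashWithDomain hashes data under a fixed domain key, so hashes produced
    // for different purposes can never collide with each other.
    func hashWithDomain(domain string, data []byte) [32]byte {
        // BLAKE2b keys may be up to 64 bytes; the domain string is assumed short.
        h, err := blake2b.New256([]byte(domain))
        if err != nil {
            panic(err)
        }
        h.Write(data)
        var out [32]byte
        copy(out[:], h.Sum(nil))
        return out
    }

    func main() {
        sigHashPreimage := []byte("serialized-sighash-preimage") // placeholder
        schnorrDigest := hashWithDomain("TransactionSigningHash", sigHashPreimage)
        ecdsaDigest := hashWithDomain("TransactionSigningHashECDSA", sigHashPreimage)
        fmt.Printf("%x\n%x\n", schnorrDigest, ecdsaDigest) // two unrelated digests
    }
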
Ori Newman
6dd3d4a9e7 Add dump unencrypted data sub command to the wallet (#1661) 2021-04-06 12:29:13 +03:00
stasatdaglabs
73b36f12f0 Implement importing private keys into the wallet (#1655)
* Implement importing private keys into the wallet.

* Fix bad --import default.

* Fix typo in --import annotation.

* Make go lint happy.

* Make go lint happier.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-04-05 18:10:33 +03:00
stasatdaglabs
a795a9e619 Add a size limit to the address manager (#1652)
* Remove a random address from the address manager if it's full.

* Implement TestOverfillAddressManager.

* Add connectionFailedCount to addresses.

* Mark connection failures.

* Mark connection successes.

* Implement removing by most connection failures.

* Expand TestOverfillAddressManager.

* Add comments.

* Use a better method for finding the address with the greatest connectionFailedCount.

* Fix a comment.

* Compare addresses by IP in TestOverfillAddressManager.

* Add a comment for updateNotBanned.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-04-05 17:56:13 +03:00
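
A compact sketch of the eviction rule from #1652 above, with a simplified address record standing in for the real addressmanager types: when the store is over capacity, drop the entry with the most connection failures.

    package main

    type address struct {
        ip                    string
        connectionFailedCount uint64
    }

    const maxAddresses = 4096 // illustrative capacity

    // maybeEvict removes the address with the greatest connectionFailedCount
    // once the manager holds more than maxAddresses entries.
    func maybeEvict(addresses map[string]*address) {
        if len(addresses) <= maxAddresses {
            return
        }
        var worstKey string
        var worstCount uint64
        for key, addr := range addresses {
            if worstKey == "" || addr.connectionFailedCount > worstCount {
                worstKey = key
                worstCount = addr.connectionFailedCount
            }
        }
        delete(addresses, worstKey)
    }
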
Ori Newman
0be1bba408 Fix TestAddresses (#1656) 2021-04-05 16:24:22 +03:00
Ori Newman
6afc06ce58 Replace p2pkh with p2pk (#1650)
* Replace p2pkh with p2pk

* Fix tests

* Fix comments and variable names

* Add README.md for genkeypair

* Rename pubkey->publicKey

* Rename p2pkh to p2pk

* Use util.PublicKeySize where needed

* Remove redundant pointer

* Fix comment

* Rename pubKey->publicKey
2021-04-05 14:35:34 +03:00
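
P2PK locks an output directly to a 32-byte public key instead of to a hash of it, so the script is just the key followed by a signature check. A minimal sketch; the opcode byte values follow Bitcoin-style conventions and are assumptions here, not the canonical txscript constants:

    package main

    import "fmt"

    const (
        opData32   = 0x20 // push the next 32 bytes onto the stack
        opCheckSig = 0xac // assumed Bitcoin-style opcode value, for illustration
    )

    // payToPubKeyScript builds a script that pushes a 32-byte public key and
    // then verifies a signature against it: <pubkey> OP_CHECKSIG.
    func payToPubKeyScript(publicKey [32]byte) []byte {
        script := make([]byte, 0, 34)
        script = append(script, opData32)
        script = append(script, publicKey[:]...)
        script = append(script, opCheckSig)
        return script
    }

    func main() {
        var publicKey [32]byte
        fmt.Printf("%x\n", payToPubKeyScript(publicKey))
    }
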
stasatdaglabs
d01a213f3d Add a show-address subcommand to kaspawallet (#1653)
* Add a show-address subcommand to kaspawallet.

* Update the description of the key-file command line parameter.
2021-04-05 14:22:03 +03:00
stasatdaglabs
7ad8ce521c Implement reconnection logic within the RPC client (#1643)
* Add a reconnect mechanism to RPCClient.

* Fix Reconnect().

* Connect the internal reconnection logic to the miner reconnection logic.

* Rename shouldReconnect to isClosed.

* Move safe reconnection logic from the miner to rpcclient.

* Remove sleep from HandleSubmitBlock.

* Properly handle client errors and only disconnect if we're already connected.

* Make go lint happy.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-04-05 13:57:28 +03:00
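
The reconnect behavior described in #1643 above amounts to: on a client error, mark the connection closed and retry with a delay until a dial succeeds. A simplified sketch with illustrative types and method names, not the actual rpcclient API:

    package main

    import (
        "errors"
        "log"
        "time"
    )

    type rpcClient struct {
        address  string
        isClosed bool
    }

    func (c *rpcClient) connect() error {
        log.Printf("dialing %s", c.address) // placeholder for the real gRPC dial
        return nil
    }

    // reconnect keeps retrying until the connection is re-established.
    func (c *rpcClient) reconnect() {
        c.isClosed = true
        for {
            if err := c.connect(); err != nil {
                log.Printf("reconnect failed: %v, retrying", err)
                time.Sleep(time.Second)
                continue
            }
            c.isClosed = false
            return
        }
    }

    // call wraps a request and reconnects on failure instead of giving up.
    func (c *rpcClient) call(do func() error) error {
        err := do()
        if err != nil && !c.isClosed {
            c.reconnect()
        }
        return err
    }

    func main() {
        c := &rpcClient{address: "localhost:16110"}
        _ = c.call(func() error { return errors.New("connection dropped") })
    }
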
Ori Newman
86ba80a091 Improve wallet functionality (#1636)
* Add basic wallet library

* Add CLI

* Add multisig support

* Add persistence to wallet

* Add tests

* go mod tidy

* Fix lint errors

* Fix wallet send command

* Always use the password as byte slice

* Remove redundant empty string

* Use different salt per private key

* Don't sign a signed transaction

* Add comment

* Remove old wallet

* Change directory permissions

* Use NormalizeRPCServerAddress

* Fix compilation errors
2021-03-31 15:58:22 +03:00
Ori Newman
088e2114c2 Disconnect from RPC client after finishing the simple sync test (#1641) 2021-03-31 13:44:42 +03:00
stasatdaglabs
2854d91688 Add missing call to broadcastTransactionsAfterBlockAdded (#1639)
* Add missing call to broadcastTransactionsAfterBlockAdded.

* Fix a comment.

* Fix a comment some more.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-31 10:28:02 +03:00
Ori Newman
af10b59181 Use go-secp256k1 v0.0.5 (#1640) 2021-03-30 18:01:56 +03:00
stasatdaglabs
c5b0394bbc In RPC, use RPCTransactions and RPCBlocks instead of TransactionMessages and BlockMessages (#1609)
* Replace BlockMessage with RpcBlock in rpc.proto.

* Convert everything in kaspad to use RPCBlocks and fix tests.

* Fix compilation errors in stability tests and the miner.

* Update TransactionVerboseData in rpc.proto.

* Update TransactionVerboseData in the rest of kaspad.

* Make golint happy.

* Include RpcTransactionVerboseData in RpcTransaction instead of the other way around.

* Regenerate rpc.pb.go after merge.

* Update appmessage types.

* Update appmessage request and response types.

* Reimplement conversion functions between appmessage.RPCTransaction and protowire.RpcTransaction.

* Extract RpcBlockHeader toAppMessage/fromAppMessage out of RpcBlock.

* Fix compilation errors in getBlock, getBlocks, and submitBlock.

* Fix compilation errors in getMempoolEntry.

* Fix compilation errors in notifyBlockAdded.

* Update verbosedata.go.

* Fix compilation errors in getBlock and getBlocks.

* Fix compilation errors in getBlocks tests.

* Fix conversions between getBlocks message types.

* Fix integration tests.

* Fix a comment.

* Add selectedParent to the verbose block response.
2021-03-30 17:43:02 +03:00
Ori Newman
9266d179a9 Add a test with two signed inputs (#1628)
* Add TestSigningTwoInputs

* Rename fundingBlockHash to block1Hash

* Fix error message

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-03-30 16:54:54 +03:00
Ori Newman
321792778e Add mass limit to mempool (#1627)
* Add mass limit to mempool

* Pass only params instead of multiple configuration options

* Remove acceptNonStd from mempool constructor

* Remove acceptNonStd from mempool constructor

* Fix test compilation
2021-03-30 15:37:56 +03:00
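
A rough sketch of a mass-capped mempool admission rule in the spirit of #1627 above, with simplified types; the real mempool tracks considerably more state, and the fee-rate eviction here is only one plausible policy:

    package main

    import "fmt"

    type transaction struct {
        id   string
        mass uint64
        fee  uint64
    }

    type mempool struct {
        transactions map[string]*transaction
        totalMass    uint64
        massLimit    uint64
    }

    // add admits a transaction, then evicts the lowest fee-per-mass entries
    // until the pool is back under its mass limit.
    func (mp *mempool) add(tx *transaction) {
        mp.transactions[tx.id] = tx
        mp.totalMass += tx.mass
        for mp.totalMass > mp.massLimit {
            victim := mp.lowestFeeRate()
            mp.totalMass -= victim.mass
            delete(mp.transactions, victim.id)
        }
    }

    // lowestFeeRate compares fee/mass ratios by cross-multiplying to avoid floats.
    func (mp *mempool) lowestFeeRate() *transaction {
        var worst *transaction
        for _, tx := range mp.transactions {
            if worst == nil || tx.fee*worst.mass < worst.fee*tx.mass {
                worst = tx
            }
        }
        return worst
    }

    func main() {
        mp := &mempool{transactions: map[string]*transaction{}, massLimit: 100000}
        mp.add(&transaction{id: "a", mass: 60000, fee: 1000})
        mp.add(&transaction{id: "b", mass: 60000, fee: 5000})
        fmt.Println(len(mp.transactions)) // 1: the cheaper transaction was evicted
    }
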
talelbaz
70f3fa9893 Update miningManager test (#1593)
* [NOD-1429] add mining manager unit tests

* [NOD-1429] Add additional test

* Found a bug, so stopped working on this test until the bug is fixed.

* Update miningmanager_test.go test.

* Delete payloadHash field - not used anymore in the current version.

* Change the condition for comparing slices instead of pointers.

* Fixes per review notes - change names, use the testutils.CreateTransaction function, and add comments.

* Changes after fetch&merge to v0.10.0-dev

* Create a new function createChildTxWhenParentTxWasAddedByConsensus and add a comment

* Add an argument to create_transaction function and fix review notes

* Optimization

* Change to blockID (instead of the whole transaction) in the error messages and fix review notes

* Change format of error messages.

* Change name of a variable

* Use go:embed to embed sample-kaspad.conf (only on go1.16)

* Revert "Use go:embed to embed sample-kaspad.conf (only on go1.16)"

This reverts commit bd28052b92.

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-30 13:52:40 +03:00
Svarog
4e18031483 Resolve each block status in its own staging area (#1634) 2021-03-30 11:04:43 +03:00
Svarog
2abc284e3b Make sure the ghostdagDataStore cache is at least DifficultyAdjustmentBlockWindow sized (#1635) 2021-03-29 14:38:19 +03:00
Svarog
f1451406f7 Add support for multiple staging areas (#1633)
* Add StagingArea struct

* Implemented staging areas in blockStore

* Move blockStagingShard to separate folder

* Apply staging shard to acceptanceDataStore

* Update blockHeaderStore with StagingArea

* Add StagingArea to BlockRelationStore

* Add StagingArea to blockStatusStore

* Add StagingArea to consensusStateStore

* Add StagingArea to daaBlocksStore

* Add StagingArea to finalityStore

* Add StagingArea to ghostdagDataStore

* Add StagingArea to headersSelectedChainStore and headersSelectedTipStore

* Add StagingArea to multisetStore

* Add StagingArea to pruningStore

* Add StagingArea to reachabilityDataStore

* Add StagingArea to utxoDiffStore

* Fix forgotten compilation error

* Update reachability manager and some more things with StagingArea

* Add StagingArea to dagTopologyManager, and some more

* Add StagingArea to GHOSTDAGManager, and some more

* Add StagingArea to difficultyManager, and some more

* Add StagingArea to dagTraversalManager, and some more

* Add StagingArea to headerTipsManager, and some more

* Add StagingArea to consensusStateManager, pastMedianTimeManager

* Add StagingArea to transactionValidator

* Add StagingArea to finalityManager

* Add StagingArea to mergeDepthManager

* Add StagingArea to pruningManager

* Add StagingArea to rest of ValidateAndInsertBlock

* Add StagingArea to blockValidator

* Add StagingArea to coinbaseManager

* Add StagingArea to syncManager

* Add StagingArea to blockBuilder

* Update consensus with StagingArea

* Add StagingArea to ghostdag2

* Fix remaining compilation errors

* Update names of stagingShards

* Fix forgotten stagingArea passing

* Mark stagingShard.isCommited = true once committed

* Move isStaged to stagingShard, so that it's available without going through store

* Make blockHeaderStore count be available from stagingShard

* Fix remaining forgotten stagingArea passing

* commitAllChanges should call dbTx.Commit in the end

* Fix all tests in blockValidator

* Fix all tests in consensusStateManager and some more

* Fix all tests in pruningManager

* Add many missing stagingAreas in tests

* Fix many tests

* Fix most of all other tests

* Fix ghostdag_test.go

* Add comment to StagingArea

* Make list of StagingShards an array

* Add comment to StagingShardID

* Make sure all staging shards are pointer-receiver

* Undo bucket rename in block_store

* Typo: isCommited -> isCommitted

* Add comment explaining why stagingArea.shards is an array
2021-03-29 10:34:11 +03:00
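
The staging-area pattern from #1633 above, in miniature: every store buffers its writes in an in-memory shard, and nothing touches the database until the whole area commits atomically. A conceptual sketch over a trivial key-value map rather than the real database interfaces:

    package main

    import "fmt"

    type database map[string]string

    // stagingShard accumulates the pending writes of a single store.
    type stagingShard struct {
        toAdd map[string]string
    }

    // stagingArea groups shards so one logical operation commits atomically.
    type stagingArea struct {
        shards      map[string]*stagingShard
        isCommitted bool
    }

    func newStagingArea() *stagingArea {
        return &stagingArea{shards: make(map[string]*stagingShard)}
    }

    func (sa *stagingArea) shard(store string) *stagingShard {
        if _, ok := sa.shards[store]; !ok {
            sa.shards[store] = &stagingShard{toAdd: make(map[string]string)}
        }
        return sa.shards[store]
    }

    // commit flushes every shard in one pass; afterwards the area is spent.
    func (sa *stagingArea) commit(db database) {
        for _, shard := range sa.shards {
            for key, value := range shard.toAdd {
                db[key] = value
            }
        }
        sa.isCommitted = true
    }

    func main() {
        db := database{}
        sa := newStagingArea()
        sa.shard("blockStore").toAdd["block:1"] = "header-bytes"
        sa.commit(db)
        fmt.Println(db["block:1"])
    }
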
talelbaz
c12e180873 Use go:embed to embed sample-kaspad.conf (only on go1.16) (#1631)
* Use go:embed to embed sample-kaspad.conf (only on go1.16)

* Add a comment to justify the blank import.

* Change a variable name to sampleKaspad (instead configurationSampleKaspadString)

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-25 15:56:01 +02:00
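
For reference, the go:embed mechanism used in #1631 above looks like this; it assumes a sample-kaspad.conf file sits next to the source file at build time, and the blank import of the embed package is required whenever only the directive is used:

    package main

    import (
        _ "embed" // blank import needed so the //go:embed directive is honored
        "fmt"
    )

    //go:embed sample-kaspad.conf
    var sampleKaspad string

    func main() {
        fmt.Println(len(sampleKaspad)) // the file's contents, embedded at build time
    }
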
Svarog
3959bc1e7c Fixes to stability tests: Move orphans test to simnet + Change fakePublicKeyHash size to 32 bytes (#1630)
* Move orphans test to simnet

* Change fakePublicKeyHash size to correct one
2021-03-25 12:04:41 +02:00
Elichai Turkel
6ec0a8a559 Replace ECMH with Muhash (#1624)
* Replace ECMH with MuHash

* Update genesis hash

* Update tests for new genesis
2021-03-22 18:15:16 +02:00
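
MuHash replaces the elliptic-curve multiset hash with a multiplicative hash over a prime field: adding an element multiplies the accumulator by the element's hash, removing it multiplies by the modular inverse, so the result is order-independent and updatable in both directions. A toy sketch with a deliberately small prime; production MuHash works modulo the 3072-bit prime 2^3072 - 1103717:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "math/big"
    )

    // A tiny prime for illustration only.
    var prime = big.NewInt(1000003)

    // elementHash is assumed to be nonzero mod prime; production code
    // guarantees invertibility by construction.
    func elementHash(element []byte) *big.Int {
        sum := sha256.Sum256(element)
        h := new(big.Int).SetBytes(sum[:])
        return h.Mod(h, prime)
    }

    func add(accumulator *big.Int, element []byte) {
        accumulator.Mul(accumulator, elementHash(element))
        accumulator.Mod(accumulator, prime)
    }

    func remove(accumulator *big.Int, element []byte) {
        inverse := new(big.Int).ModInverse(elementHash(element), prime)
        accumulator.Mul(accumulator, inverse)
        accumulator.Mod(accumulator, prime)
    }

    func main() {
        acc := big.NewInt(1)
        add(acc, []byte("utxo-a"))
        add(acc, []byte("utxo-b"))
        remove(acc, []byte("utxo-a")) // back to the hash of {utxo-b} alone
        fmt.Println(acc)
    }
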
Svarog
6824be9216 Remove support for ServiceFlags out of DNSSeeder (#1622) 2021-03-18 18:02:57 +02:00
Ori Newman
d0511c1636 Use BLAKE2B instead of HASH160, and get rid of any usage of RIPEMD160 and SHA1 (#1618)
* Use BLAKE2B instead of HASH160, and get rid of any usage of RIPEMD160

* Change genesis coinbase payload script to OP_FALSE

* Fix tests after conflict

* Remove duplicate tests

* Change file name

* Change atomic swap to use proper hash size
2021-03-18 10:20:12 +02:00
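
Switching from HASH160 (RIPEMD160 over SHA256) to BLAKE2b means script hashes become a single 32-byte digest. A minimal sketch with golang.org/x/crypto/blake2b; the one-byte script is just a placeholder:

    package main

    import (
        "fmt"

        "golang.org/x/crypto/blake2b"
    )

    func main() {
        redeemScript := []byte{0x51} // placeholder script
        // One BLAKE2b-256 digest replaces the old two-stage HASH160 construction.
        digest := blake2b.Sum256(redeemScript)
        fmt.Printf("%x\n", digest)
    }
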
Svarog
7d69b66c7c Change --datadir to --appdir and remove symmetrical connection in stability tests (#1617)
* Don't do symmetric connects in the netsync stability test

* Convert --datadir to --appdir everywhere

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-17 17:34:03 +02:00
Svarog
cebcab7f5c Implement BIP-143-like sighash (#1598)
* Move CalculateSignatureHash to consensushashing

* Added CalcSignatureHash_BIP143 with all parameters except the re-used hashes

* Add handling of outputHash

* Add sequencesHash to the mix

* Add previousOutputsHash to the mix

* Replace legacy CalculateSigHash with new one, and re-wire to all non-test code

* Add missing types to WriteElement

* Fix tests in txscript

* Fix tests in rest of code

* Add missing comments

* Add SubnetworkID and Gas to sigHash

* Add TestCalculateSignatureHash

* Invert condition in SigHashSingle getOutputsHash

* Explicitly define that payloadHash for native transactions is 0

* added benchmark to CalculateSignatureHash

* Reformat call for signAndCheck

* Change SigHashes to be true bit-fields

* Add check for transaction version

* Write length of byte array in WriteElement

* hashOutpoint should get outpoint, not txIn

* Use inputIndex instead of i to determine SigHashType

* Use correct transaction version + fix some typos

* Fix hashes in test

* Reformat an overly-long line

* Replace checkHashTypeEncoding with calls to hashType.IsStandardSigHashType

* Convert hashType to uint8

* Add comment
2021-03-17 15:17:38 +02:00
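
The BIP-143-style scheme from #1598 above hashes each input against digests that are precomputed once per transaction (previous outputs, sequences, outputs), so signing n inputs costs O(n) instead of O(n^2). A schematic sketch; the serialization below is simplified and not the actual kaspad encoding:

    package main

    import (
        "encoding/binary"
        "fmt"

        "golang.org/x/crypto/blake2b"
    )

    type input struct {
        previousTxID  [32]byte
        previousIndex uint32
        sequence      uint64
    }

    type output struct {
        value        uint64
        scriptPubKey []byte
    }

    // reusedHashes is computed once per transaction and shared by every input's
    // sighash, which is what makes the scheme linear in the number of inputs.
    type reusedHashes struct {
        previousOutputsHash [32]byte
        sequencesHash       [32]byte
        outputsHash         [32]byte
    }

    func precompute(inputs []input, outputs []output) reusedHashes {
        var prev, seq, outs []byte
        var buf [8]byte
        for _, in := range inputs {
            prev = append(prev, in.previousTxID[:]...)
            binary.LittleEndian.PutUint32(buf[:4], in.previousIndex)
            prev = append(prev, buf[:4]...)
            binary.LittleEndian.PutUint64(buf[:], in.sequence)
            seq = append(seq, buf[:]...)
        }
        for _, out := range outputs {
            binary.LittleEndian.PutUint64(buf[:], out.value)
            outs = append(outs, buf[:]...)
            outs = append(outs, out.scriptPubKey...)
        }
        return reusedHashes{
            previousOutputsHash: blake2b.Sum256(prev),
            sequencesHash:       blake2b.Sum256(seq),
            outputsHash:         blake2b.Sum256(outs),
        }
    }

    func main() {
        h := precompute([]input{{previousIndex: 0, sequence: 1}}, []output{{value: 100}})
        fmt.Printf("%x\n", h.outputsHash)
    }
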
Elichai Turkel
caf251b7a8 Replace the HomeDir flag with a AppDir flag (#1615) 2021-03-17 12:48:38 +02:00
Svarog
1a4161ffc0 Restructure the default ~/.kaspad directory layout (#1613) 2021-03-16 17:36:36 +02:00
Ori Newman
b84080f3d9 Fix getBlocks to not add the anticone when some blocks were filtered by GetHashesBetween (#1611)
* Fix getBlocks to not add the anticone when some blocks were filtered by GetHashesBetween

* Fix TestSyncManager_GetHashesBetween
2021-03-16 14:43:02 +02:00
stasatdaglabs
cbd0bb6d14 Remove the Services field from NetAddress. (#1610) 2021-03-16 14:22:52 +02:00
Ori Newman
d9449a32b8 Use DAA score where needed (#1602)
* Replace blue score with DAA score in UTXO entries

* Use DAA score for coinbase maturity

* Use DAA score for sequence lock

* Fix calcBlockSubsidy to use DAA score

* Don't pay blocks that are not included in the DAA added blocks, and bestow the red blocks' reward on the merging block

* Fix TestGetPruningPointUTXOs

* Fix TestTransactionAcceptance

* Fix TestChainedTransactions

* Fix TestVirtualDiff

* Fix TestBlockWindow

* Fix TestPruning

* Use NewFromSlice instead of manually creating the hash set

* Add assert

* Add comment

* Remove redundant call to UpdateDAADataAndReturnDifficultyBits

* Add RequiredDifficulty, rename UpdateDAADataAndReturnDifficultyBits to StageDAADataAndReturnRequiredDifficulty and add comments

* Make buildUTXOInvalidHeader get bits as an argument

* Fix comments
2021-03-15 13:48:40 +02:00
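
With #1602 above, maturity-style checks key off DAA score rather than blue score; for example, a coinbase output becomes spendable only after the DAA score has advanced past the output's creation point by a maturity interval. A minimal sketch; the constant is illustrative, not the consensus value:

    package main

    import "fmt"

    const coinbaseMaturity = 100 // illustrative; the real value is a consensus parameter

    // isMature reports whether a coinbase output created at entryDAAScore may be
    // spent when the spending context is at virtualDAAScore.
    func isMature(entryDAAScore, virtualDAAScore uint64) bool {
        return virtualDAAScore >= entryDAAScore+coinbaseMaturity
    }

    func main() {
        fmt.Println(isMature(1000, 1099), isMature(1000, 1100)) // false true
    }
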
Ori Newman
0ee8f2b631 Convert appmessage nil mempool entry to gRPC nil mempool entry (#1608) 2021-03-15 11:48:55 +02:00
Ori Newman
ff1c96c149 Wait for flows to finish before shutting down (#1605)
* Wait for flows to finish before shutting down

* Use CompareAndSwap

* Add comment

* Fix error message

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-03-14 20:18:40 +02:00
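
The CompareAndSwap mentioned in #1605 above is the standard way to make a shutdown flag idempotent under concurrency, so in-flight flows are drained exactly once before teardown. A minimal sketch:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var shuttingDown int32

    // tryShutdown flips the flag exactly once even if called concurrently;
    // only the first caller gets true and proceeds with teardown.
    func tryShutdown() bool {
        return atomic.CompareAndSwapInt32(&shuttingDown, 0, 1)
    }

    func main() {
        fmt.Println(tryShutdown(), tryShutdown()) // true false
    }
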
Elichai Turkel
5e335be5ab Update README.md (#1607) 2021-03-14 16:24:22 +02:00
Svarog
032eda4604 Update go-deploy workflow to upload all executables (#1604) 2021-03-14 13:48:36 +02:00
stasatdaglabs
e4e3541a30 Increase the route capacity of InvTransaction messages. (#1603) 2021-03-14 13:02:55 +02:00
talelbaz
b5933bc4fe Update general unit tests for Reachability (#1597)
* [NOD-1424] Write general unit-tests for Reachability

* Update the tests of reachabilityManager.

* Add a diagram for the created DAG in the test.

* Change tabs to spaces in the diagram.

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-03-14 11:42:56 +02:00
Ori Newman
ec446ac511 Adding DAA score (#1596)
* Save DAA score and DAA added blocks for each block

* Add test

* Add pruning support

* Replace 8 with uint64Length

* Separate DAABlocksStore cache size to DAA score and daaAddedBlocks
2021-03-14 09:44:44 +02:00
Elichai Turkel
3d668cc1bd Remove old constants and print actual grpc server address (#1595)
* Remove unneeded old constants in the p2p

* Print the actual address of the grpc server

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-10 16:33:51 +02:00
Elichai Turkel
1486a6312c Adjust the difficulty in the first difficultyAdjustmentWindowSize blocks (#1592)
* Move timesorter to its own package and remove unused functions

* Remove padding+genesis from BlockWindow

* Adjust the difficulty even when there's less than difficultyAdjustmentWindowSize blocks

* Remove unnecessary check from checkBlockTransactionsFinalized

* Update tests with new pastMedianTime and Difficulty

* Review nit
2021-03-10 16:11:46 +02:00
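
The change in #1592 above means the difficulty window simply shrinks when fewer than difficultyAdjustmentWindowSize blocks exist, instead of pinning difficulty to the genesis value. A schematic sketch of that window selection; the window size and the surrounding retarget math are simplified assumptions:

    package main

    import "fmt"

    const difficultyAdjustmentWindowSize = 2641 // illustrative window size

    // requiredTargetWindow picks the timestamps to retarget over: previously the
    // code fell back to the genesis difficulty when fewer than a full window of
    // blocks existed; now it adjusts over whatever prefix is available.
    func requiredTargetWindow(blockTimestamps []int64) []int64 {
        if len(blockTimestamps) > difficultyAdjustmentWindowSize {
            return blockTimestamps[len(blockTimestamps)-difficultyAdjustmentWindowSize:]
        }
        return blockTimestamps // shorter window early in the chain
    }

    func main() {
        window := requiredTargetWindow([]int64{0, 1000, 2000})
        fmt.Println(len(window)) // 3, even though the full window is much larger
    }
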
Elichai Turkel
cd27f2850e Delete the stability tests when doing code coverage (#1594) 2021-03-10 15:00:54 +02:00
Ori Newman
14cf7f81f3 Change the difficulty to be calculated based on the same block instead of its selected parent (#1591) 2021-03-09 17:07:16 +02:00
Ori Newman
74539f8f0b Fix TestDifficulty to better check red blocks (#1590)
* Write better tests for red blocks and DAA

* Fix comments

* Fix blue chain size

* Remove high timestamps from blue chain

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-09 16:38:17 +02:00
Svarog
a7299c1b87 Add stability tests (#1587)
* Add stability-tests

* Fix requires

* Fix golint errors

* Update README.md

* Remove payloadHash from everywhere

* don't run vet on kaspad in stability-tests/install_and_test
2021-03-09 15:01:08 +02:00
Elichai Turkel
27c1e4611e Add a github action deploy script to build and publish releases (#1585)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-09 13:00:57 +02:00
Ori Newman
b8413fcecb Add the mempool size to getInfo RPC command (#1584)
* Add the mempool size to getInfo RPC command

* Add mempool.Len()

* Rename mempool.Len() to mempool.TransactionCount()

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-09 12:48:33 +02:00
Ori Newman
c084c69771 Don't swallow orphan errors (#1581)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-09 12:30:35 +02:00
Ori Newman
53781eed4d Remove payload hash (#1583)
* Remove payload hash

* Fix tests
2021-03-08 15:15:03 +02:00
Elichai Turkel
837fa65735 kaspaminer: Use tickers and regulate each block individually (#1580)
* Use tickers and regulate each block individually

* Add comments, logs and rename variables

* Fix review comments
2021-03-08 10:51:35 +02:00
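
Regulating each block individually, as #1580 above describes, means waiting on a per-block ticker slot rather than averaging the rate over batches. A minimal sketch with time.Ticker; the target rate is illustrative:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const targetBlocksPerSecond = 2 // illustrative rate limit
        ticker := time.NewTicker(time.Second / targetBlocksPerSecond)
        defer ticker.Stop()

        for i := 0; i < 3; i++ {
            <-ticker.C // wait for this block's slot before submitting it
            fmt.Println("submit block", i, time.Now().Format(time.StampMilli))
        }
    }
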
Elichai Turkel
dd3b2cf7d1 Fix data race in GetBlockChildren (#1579) 2021-03-07 16:33:47 +02:00
Mike Zak
3fd324ca28 Update to version 0.10.0 2021-03-03 16:34:45 +02:00
Ori Newman
0271858f25 Readd BlockHashes to getBlocks response (#1575) 2021-03-03 16:27:51 +02:00
Elichai Turkel
7909480757 Merge big subdags in pick virtual parents (#1574)
* Refactor mergeSetIncrease to return the current BFS block to allow easier merging

* Remove unneeded Heap/HashSet usages

* Add new IsAnyAncestorOf to DAGTopologyManager

* Check if the new candidate is in the future of any existing candidate

* Add comments and fix off-by-one in the mergeSetIncrease queue

* Fixed DAGTopology test mock

* Fix review comments
2021-03-03 16:17:16 +02:00
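
The mergeSetIncrease refactor in #1574 above is essentially a BFS over the candidate's past that counts blocks not already covered by existing candidates and aborts once a limit is exceeded. A conceptual sketch with a plain adjacency map standing in for the DAG topology manager:

    package main

    import "fmt"

    // mergeSetIncrease counts how many blocks reachable from candidate are not
    // already in covered, stopping early once limit is exceeded.
    func mergeSetIncrease(candidate string, parents map[string][]string,
        covered map[string]bool, limit int) (int, bool) {

        visited := map[string]bool{candidate: true}
        queue := []string{candidate}
        increase := 0
        for len(queue) > 0 {
            current := queue[0]
            queue = queue[1:]
            if covered[current] {
                continue // everything past a covered block is covered too
            }
            increase++
            if increase > limit {
                return increase, false // candidate would blow up the merge set
            }
            for _, parent := range parents[current] {
                if !visited[parent] {
                    visited[parent] = true
                    queue = append(queue, parent)
                }
            }
        }
        return increase, true
    }

    func main() {
        parents := map[string][]string{"c": {"b"}, "b": {"a"}, "a": {}}
        covered := map[string]bool{"a": true}
        fmt.Println(mergeSetIncrease("c", parents, covered, 10)) // 2 true
    }
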
739 changed files with 43108 additions and 20477 deletions

View File

@@ -11,8 +11,8 @@
#>
param(
[System.UInt64] $MinimumSize = 8gb ,
[System.UInt64] $MaximumSize = 8gb ,
[System.UInt64] $MinimumSize = 16gb ,
[System.UInt64] $MaximumSize = 16gb ,
[System.String] $DiskRoot = "D:"
)

.github/workflows/deploy.yaml (new file)
View File

@@ -0,0 +1,76 @@
name: Build and upload assets
on:
release:
types: [ published ]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-latest, windows-latest, macos-latest ]
name: Building, ${{ matrix.os }}
steps:
- name: Fix CRLF on Windows
if: runner.os == 'Windows'
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# Increase the pagefile size on Windows to avoid running out of memory
- name: Increase pagefile size on Windows
if: runner.os == 'Windows'
run: powershell -command .github\workflows\SetPageFileSize.ps1
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Build on Linux
if: runner.os == 'Linux'
# `-extldflags=-static` - means static link everything,
# `-tags netgo,osusergo` means use pure go replacements for "os/user" and "net"
# `-s -w` strips the binary to produce smaller size binaries
run: |
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-linux.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-linux.zip"
zip -r "${archive}" ./bin/*
echo "archive=${archive}" >> $GITHUB_ENV
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Build on Windows
if: runner.os == 'Windows'
shell: bash
run: |
go build -v -ldflags="-s -w" -o bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-win64.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-win64.zip"
powershell "Compress-Archive bin/* \"${archive}\""
echo "archive=${archive}" >> $GITHUB_ENV
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Build on MacOS
if: runner.os == 'macOS'
run: |
go build -v -ldflags="-s -w" -o ./bin/ . ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-osx.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-osx.zip"
zip -r "${archive}" ./bin/*
echo "archive=${archive}" >> $GITHUB_ENV
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Upload release asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ github.event.release.upload_url }}
asset_path: "./${{ env.archive }}"
asset_name: "${{ env.asset_name }}"
asset_content_type: application/zip

View File

@@ -1,70 +0,0 @@
name: Go
on:
push:
pull_request:
# edited - "title, body, or the base branch of the PR is modified"
# synchronize - "commit(s) pushed to the pull request"
types: [opened, synchronize, edited, reopened]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-16.04, macos-10.15 ]
name: Testing on ${{ matrix.os }}
steps:
- name: Fix windows CRLF
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# We need to increase the page size because the tests run out of memory on github CI windows.
# Use the powershell script from this github action: https://github.com/al-cheb/configure-pagefile-action/blob/master/scripts/SetPageFileSize.ps1
# MIT License (MIT) Copyright (c) 2020 Maxim Lobanov and contributors
- name: Increase page size on windows
if: runner.os == 'Windows'
shell: powershell
run: powershell -command .\.github\workflows\SetPageFileSize.ps1
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.16
# Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
- name: Go Cache
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Test
shell: bash
run: ./build_and_test.sh -v
coverage:
runs-on: ubuntu-20.04
name: Produce code coverage
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Create coverage file
run: go test -v -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...
- name: Upload coverage file
run: bash <(curl -s https://codecov.io/bash)

View File

@@ -1,4 +1,4 @@
name: Go-Race
name: Race
on:
schedule:
@@ -7,7 +7,7 @@ on:
jobs:
race_test:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
@@ -19,10 +19,10 @@ jobs:
with:
fetch-depth: 0
- name: Set up Go 1.x
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.15
go-version: 1.16
- name: Set scheduled branch name
shell: bash

.github/workflows/tests.yaml (new file)
View File

@@ -0,0 +1,98 @@
name: Tests
on:
push:
pull_request:
# edited - because the base branch can be modified
# synchronize - update commits on PR
types: [opened, synchronize, edited]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-latest, macos-latest ]
name: Tests, ${{ matrix.os }}
steps:
- name: Fix CRLF on Windows
if: runner.os == 'Windows'
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# Increase the pagefile size on Windows to avoid running out of memory
- name: Increase pagefile size on Windows
if: runner.os == 'Windows'
run: powershell -command .github\workflows\SetPageFileSize.ps1
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
# Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
- name: Go Cache
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Test
shell: bash
run: ./build_and_test.sh -v
stability-test-fast:
runs-on: ubuntu-latest
name: Fast stability tests, ${{ github.head_ref }}
steps:
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Install kaspad
run: go install ./...
- name: Install golint
run: go get -u golang.org/x/lint/golint
- name: Run fast stability tests
working-directory: stability-tests
run: ./install_and_test.sh
coverage:
runs-on: ubuntu-latest
name: Produce code coverage
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Delete the stability tests from coverage
run: rm -r stability-tests
- name: Create coverage file
run: go test -v -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...
- name: Upload coverage file
run: bash <(curl -s https://codecov.io/bash)

View File

@@ -56,13 +56,15 @@ $ kaspad
```
## Discord
Join our discord server using the following link: https://discord.gg/WmGhhzk
Join our discord server using the following link: https://discord.gg/YNYnNN5Pf2
## Issue Tracker
The [integrated github issue tracker](https://github.com/kaspanet/kaspad/issues)
is used for this project.
Issue priorities may be seen at https://github.com/orgs/kaspanet/projects/4
## Documentation
The [documentation](https://github.com/kaspanet/docs) is a work-in-progress

View File

@@ -85,12 +85,6 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
profiling.Start(app.cfg.Profile, log)
}
// Perform upgrades to kaspad as new versions require it.
if err := doUpgrades(); err != nil {
log.Error(err)
return err
}
// Return now if an interrupt signal was triggered.
if signal.InterruptRequested(interrupt) {
return nil
@@ -107,7 +101,7 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
// Open the database
databaseContext, err := openDB(app.cfg)
if err != nil {
log.Error(err)
log.Errorf("Loading database failed: %+v", err)
return err
}
@@ -163,15 +157,9 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
return nil
}
// doUpgrades performs upgrades to kaspad as new versions require it.
// currently it's a placeholder we got from kaspad upstream, that does nothing
func doUpgrades() error {
return nil
}
// dbPath returns the path to the block database given a database type.
func databasePath(cfg *config.Config) string {
return filepath.Join(cfg.DataDir, "db")
return filepath.Join(cfg.AppDir, "data")
}
func removeDatabase(cfg *config.Config) error {
@@ -181,6 +169,17 @@ func removeDatabase(cfg *config.Config) error {
func openDB(cfg *config.Config) (database.Database, error) {
dbPath := databasePath(cfg)
err := checkDatabaseVersion(dbPath)
if err != nil {
return nil, err
}
log.Infof("Loading database from '%s'", dbPath)
return ldb.NewLevelDB(dbPath, leveldbCacheSizeMiB)
db, err := ldb.NewLevelDB(dbPath, leveldbCacheSizeMiB)
if err != nil {
return nil, err
}
return db, nil
}
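
The checkDatabaseVersion call added above corresponds to #1678 in the commit list: a version file is stored alongside the data and compared before the database is opened. A plausible sketch of such a check; the file name and version constant are assumptions, not the actual kaspad implementation:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strconv"
    )

    const currentDatabaseVersion = 1 // assumed constant

    // checkDatabaseVersion refuses to open a database written by an
    // incompatible version, and stamps fresh databases with the current one.
    func checkDatabaseVersion(dbPath string) error {
        versionFile := filepath.Join(dbPath, "version") // assumed file name
        data, err := os.ReadFile(versionFile)
        if os.IsNotExist(err) {
            if err := os.MkdirAll(dbPath, 0700); err != nil {
                return err
            }
            return os.WriteFile(versionFile, []byte(strconv.Itoa(currentDatabaseVersion)), 0600)
        }
        if err != nil {
            return err
        }
        version, err := strconv.Atoi(string(data))
        if err != nil {
            return err
        }
        if version != currentDatabaseVersion {
            return fmt.Errorf("database version %d is incompatible with %d", version, currentDatabaseVersion)
        }
        return nil
    }

    func main() {
        fmt.Println(checkDatabaseVersion(filepath.Join(os.TempDir(), "kaspad-example-db")))
    }
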

View File

@@ -2,7 +2,11 @@ package appmessage
import (
"encoding/hex"
"github.com/pkg/errors"
"math/big"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
@@ -27,13 +31,17 @@ func DomainBlockToMsgBlock(domainBlock *externalapi.DomainBlock) *MsgBlock {
func DomainBlockHeaderToBlockHeader(domainBlockHeader externalapi.BlockHeader) *MsgBlockHeader {
return &MsgBlockHeader{
Version: domainBlockHeader.Version(),
ParentHashes: domainBlockHeader.ParentHashes(),
Parents: domainBlockHeader.Parents(),
HashMerkleRoot: domainBlockHeader.HashMerkleRoot(),
AcceptedIDMerkleRoot: domainBlockHeader.AcceptedIDMerkleRoot(),
UTXOCommitment: domainBlockHeader.UTXOCommitment(),
Timestamp: mstime.UnixMilliseconds(domainBlockHeader.TimeInMilliseconds()),
Bits: domainBlockHeader.Bits(),
Nonce: domainBlockHeader.Nonce(),
BlueScore: domainBlockHeader.BlueScore(),
DAAScore: domainBlockHeader.DAAScore(),
BlueWork: domainBlockHeader.BlueWork(),
PruningPoint: domainBlockHeader.PruningPoint(),
}
}
@@ -54,13 +62,17 @@ func MsgBlockToDomainBlock(msgBlock *MsgBlock) *externalapi.DomainBlock {
func BlockHeaderToDomainBlockHeader(blockHeader *MsgBlockHeader) externalapi.BlockHeader {
return blockheader.NewImmutableBlockHeader(
blockHeader.Version,
blockHeader.ParentHashes,
blockHeader.Parents,
blockHeader.HashMerkleRoot,
blockHeader.AcceptedIDMerkleRoot,
blockHeader.UTXOCommitment,
blockHeader.Timestamp.UnixMilliseconds(),
blockHeader.Bits,
blockHeader.Nonce,
blockHeader.DAAScore,
blockHeader.BlueScore,
blockHeader.BlueWork,
blockHeader.PruningPoint,
)
}
@@ -83,7 +95,6 @@ func DomainTransactionToMsgTx(domainTransaction *externalapi.DomainTransaction)
LockTime: domainTransaction.LockTime,
SubnetworkID: domainTransaction.SubnetworkID,
Gas: domainTransaction.Gas,
PayloadHash: domainTransaction.PayloadHash,
Payload: domainTransaction.Payload,
}
}
@@ -100,6 +111,7 @@ func domainTransactionInputToTxIn(domainTransactionInput *externalapi.DomainTran
PreviousOutpoint: *domainOutpointToOutpoint(domainTransactionInput.PreviousOutpoint),
SignatureScript: domainTransactionInput.SignatureScript,
Sequence: domainTransactionInput.Sequence,
SigOpCount: domainTransactionInput.SigOpCount,
}
}
@@ -133,7 +145,6 @@ func MsgTxToDomainTransaction(msgTx *MsgTx) *externalapi.DomainTransaction {
LockTime: msgTx.LockTime,
SubnetworkID: msgTx.SubnetworkID,
Gas: msgTx.Gas,
PayloadHash: msgTx.PayloadHash,
Payload: payload,
}
}
@@ -149,6 +160,7 @@ func txInToDomainTransactionInput(txIn *TxIn) *externalapi.DomainTransactionInpu
return &externalapi.DomainTransactionInput{
PreviousOutpoint: *outpointToDomainOutpoint(&txIn.PreviousOutpoint), //TODO
SignatureScript: txIn.SignatureScript,
SigOpCount: txIn.SigOpCount,
Sequence: txIn.Sequence,
}
}
@@ -164,14 +176,10 @@ func outpointToDomainOutpoint(outpoint *Outpoint) *externalapi.DomainOutpoint {
func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externalapi.DomainTransaction, error) {
inputs := make([]*externalapi.DomainTransactionInput, len(rpcTransaction.Inputs))
for i, input := range rpcTransaction.Inputs {
transactionID, err := transactionid.FromString(input.PreviousOutpoint.TransactionID)
previousOutpoint, err := RPCOutpointToDomainOutpoint(input.PreviousOutpoint)
if err != nil {
return nil, err
}
previousOutpoint := &externalapi.DomainOutpoint{
TransactionID: *transactionID,
Index: input.PreviousOutpoint.Index,
}
signatureScript, err := hex.DecodeString(input.SignatureScript)
if err != nil {
return nil, err
@@ -180,6 +188,7 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
PreviousOutpoint: *previousOutpoint,
SignatureScript: signatureScript,
Sequence: input.Sequence,
SigOpCount: input.SigOpCount,
}
}
outputs := make([]*externalapi.DomainTransactionOutput, len(rpcTransaction.Outputs))
@@ -198,10 +207,6 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
if err != nil {
return nil, err
}
payloadHash, err := externalapi.NewDomainHashFromString(rpcTransaction.PayloadHash)
if err != nil {
return nil, err
}
payload, err := hex.DecodeString(rpcTransaction.Payload)
if err != nil {
return nil, err
@@ -214,11 +219,40 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
LockTime: rpcTransaction.LockTime,
SubnetworkID: *subnetworkID,
Gas: rpcTransaction.LockTime,
PayloadHash: *payloadHash,
Payload: payload,
}, nil
}
// RPCOutpointToDomainOutpoint converts RPCOutpoint to DomainOutpoint
func RPCOutpointToDomainOutpoint(outpoint *RPCOutpoint) (*externalapi.DomainOutpoint, error) {
transactionID, err := transactionid.FromString(outpoint.TransactionID)
if err != nil {
return nil, err
}
return &externalapi.DomainOutpoint{
TransactionID: *transactionID,
Index: outpoint.Index,
}, nil
}
// RPCUTXOEntryToUTXOEntry converts RPCUTXOEntry to UTXOEntry
func RPCUTXOEntryToUTXOEntry(entry *RPCUTXOEntry) (externalapi.UTXOEntry, error) {
script, err := hex.DecodeString(entry.ScriptPublicKey.Script)
if err != nil {
return nil, err
}
return utxo.NewUTXOEntry(
entry.Amount,
&externalapi.ScriptPublicKey{
Script: script,
Version: entry.ScriptPublicKey.Version,
},
entry.IsCoinbase,
entry.BlockDAAScore,
), nil
}
// DomainTransactionToRPCTransaction converts DomainTransactions to RPCTransactions
func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransaction) *RPCTransaction {
inputs := make([]*RPCTransactionInput, len(transaction.Inputs))
@@ -233,6 +267,7 @@ func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransactio
PreviousOutpoint: previousOutpoint,
SignatureScript: signatureScript,
Sequence: input.Sequence,
SigOpCount: input.SigOpCount,
}
}
outputs := make([]*RPCTransactionOutput, len(transaction.Outputs))
@@ -244,7 +279,6 @@ func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransactio
}
}
subnetworkID := transaction.SubnetworkID.String()
payloadHash := transaction.PayloadHash.String()
payload := hex.EncodeToString(transaction.Payload)
return &RPCTransaction{
Version: transaction.Version,
@@ -253,7 +287,6 @@ func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransactio
LockTime: transaction.LockTime,
SubnetworkID: subnetworkID,
Gas: transaction.LockTime,
PayloadHash: payloadHash,
Payload: payload,
}
}
@@ -265,22 +298,27 @@ func OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(
domainOutpointAndUTXOEntryPairs := make([]*externalapi.OutpointAndUTXOEntryPair, len(outpointAndUTXOEntryPairs))
for i, outpointAndUTXOEntryPair := range outpointAndUTXOEntryPairs {
domainOutpointAndUTXOEntryPairs[i] = &externalapi.OutpointAndUTXOEntryPair{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: outpointAndUTXOEntryPair.Outpoint.TxID,
Index: outpointAndUTXOEntryPair.Outpoint.Index,
},
UTXOEntry: utxo.NewUTXOEntry(
outpointAndUTXOEntryPair.UTXOEntry.Amount,
outpointAndUTXOEntryPair.UTXOEntry.ScriptPublicKey,
outpointAndUTXOEntryPair.UTXOEntry.IsCoinbase,
outpointAndUTXOEntryPair.UTXOEntry.BlockBlueScore,
),
}
domainOutpointAndUTXOEntryPairs[i] = outpointAndUTXOEntryPairToDomainOutpointAndUTXOEntryPair(outpointAndUTXOEntryPair)
}
return domainOutpointAndUTXOEntryPairs
}
func outpointAndUTXOEntryPairToDomainOutpointAndUTXOEntryPair(
outpointAndUTXOEntryPair *OutpointAndUTXOEntryPair) *externalapi.OutpointAndUTXOEntryPair {
return &externalapi.OutpointAndUTXOEntryPair{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: outpointAndUTXOEntryPair.Outpoint.TxID,
Index: outpointAndUTXOEntryPair.Outpoint.Index,
},
UTXOEntry: utxo.NewUTXOEntry(
outpointAndUTXOEntryPair.UTXOEntry.Amount,
outpointAndUTXOEntryPair.UTXOEntry.ScriptPublicKey,
outpointAndUTXOEntryPair.UTXOEntry.IsCoinbase,
outpointAndUTXOEntryPair.UTXOEntry.BlockDAAScore,
),
}
}
// DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs converts
// domain OutpointAndUTXOEntryPairs to OutpointAndUTXOEntryPairs
func DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(
@@ -297,9 +335,216 @@ func DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(
Amount: outpointAndUTXOEntryPair.UTXOEntry.Amount(),
ScriptPublicKey: outpointAndUTXOEntryPair.UTXOEntry.ScriptPublicKey(),
IsCoinbase: outpointAndUTXOEntryPair.UTXOEntry.IsCoinbase(),
BlockBlueScore: outpointAndUTXOEntryPair.UTXOEntry.BlockBlueScore(),
BlockDAAScore: outpointAndUTXOEntryPair.UTXOEntry.BlockDAAScore(),
},
}
}
return domainOutpointAndUTXOEntryPairs
}
// DomainBlockToRPCBlock converts DomainBlocks to RPCBlocks
func DomainBlockToRPCBlock(block *externalapi.DomainBlock) *RPCBlock {
parents := make([]*RPCBlockLevelParents, len(block.Header.Parents()))
for i, blockLevelParents := range block.Header.Parents() {
parents[i] = &RPCBlockLevelParents{
ParentHashes: hashes.ToStrings(blockLevelParents),
}
}
header := &RPCBlockHeader{
Version: uint32(block.Header.Version()),
Parents: parents,
HashMerkleRoot: block.Header.HashMerkleRoot().String(),
AcceptedIDMerkleRoot: block.Header.AcceptedIDMerkleRoot().String(),
UTXOCommitment: block.Header.UTXOCommitment().String(),
Timestamp: block.Header.TimeInMilliseconds(),
Bits: block.Header.Bits(),
Nonce: block.Header.Nonce(),
DAAScore: block.Header.DAAScore(),
BlueScore: block.Header.BlueScore(),
BlueWork: block.Header.BlueWork().Text(16),
PruningPoint: block.Header.PruningPoint().String(),
}
transactions := make([]*RPCTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
transactions[i] = DomainTransactionToRPCTransaction(transaction)
}
return &RPCBlock{
Header: header,
Transactions: transactions,
}
}
// RPCBlockToDomainBlock converts `block` into a DomainBlock
func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
parents := make([]externalapi.BlockLevelParents, len(block.Header.Parents))
for i, blockLevelParents := range block.Header.Parents {
parents[i] = make(externalapi.BlockLevelParents, len(blockLevelParents.ParentHashes))
for j, parentHash := range blockLevelParents.ParentHashes {
var err error
parents[i][j], err = externalapi.NewDomainHashFromString(parentHash)
if err != nil {
return nil, err
}
}
}
hashMerkleRoot, err := externalapi.NewDomainHashFromString(block.Header.HashMerkleRoot)
if err != nil {
return nil, err
}
acceptedIDMerkleRoot, err := externalapi.NewDomainHashFromString(block.Header.AcceptedIDMerkleRoot)
if err != nil {
return nil, err
}
utxoCommitment, err := externalapi.NewDomainHashFromString(block.Header.UTXOCommitment)
if err != nil {
return nil, err
}
blueWork, success := new(big.Int).SetString(block.Header.BlueWork, 16)
if !success {
return nil, errors.Errorf("failed to parse blue work: %s", block.Header.BlueWork)
}
pruningPoint, err := externalapi.NewDomainHashFromString(block.Header.PruningPoint)
if err != nil {
return nil, err
}
header := blockheader.NewImmutableBlockHeader(
uint16(block.Header.Version),
parents,
hashMerkleRoot,
acceptedIDMerkleRoot,
utxoCommitment,
block.Header.Timestamp,
block.Header.Bits,
block.Header.Nonce,
block.Header.DAAScore,
block.Header.BlueScore,
blueWork,
pruningPoint)
transactions := make([]*externalapi.DomainTransaction, len(block.Transactions))
for i, transaction := range block.Transactions {
domainTransaction, err := RPCTransactionToDomainTransaction(transaction)
if err != nil {
return nil, err
}
transactions[i] = domainTransaction
}
return &externalapi.DomainBlock{
Header: header,
Transactions: transactions,
}, nil
}
// BlockWithTrustedDataToDomainBlockWithTrustedData converts *MsgBlockWithTrustedData to *externalapi.BlockWithTrustedData
func BlockWithTrustedDataToDomainBlockWithTrustedData(block *MsgBlockWithTrustedData) *externalapi.BlockWithTrustedData {
daaWindow := make([]*externalapi.TrustedDataDataDAABlock, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &externalapi.TrustedDataDataDAABlock{
Block: MsgBlockToDomainBlock(daaBlock.Block),
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
ghostdagData := make([]*externalapi.BlockGHOSTDAGDataHashPair, len(block.GHOSTDAGData))
for i, datum := range block.GHOSTDAGData {
ghostdagData[i] = &externalapi.BlockGHOSTDAGDataHashPair{
Hash: datum.Hash,
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(datum.GHOSTDAGData),
}
}
return &externalapi.BlockWithTrustedData{
Block: MsgBlockToDomainBlock(block.Block),
DAAScore: block.DAAScore,
DAAWindow: daaWindow,
GHOSTDAGData: ghostdagData,
}
}
func ghostdagDataToDomainGHOSTDAGData(data *BlockGHOSTDAGData) *externalapi.BlockGHOSTDAGData {
bluesAnticoneSizes := make(map[externalapi.DomainHash]externalapi.KType, len(data.BluesAnticoneSizes))
for _, pair := range data.BluesAnticoneSizes {
bluesAnticoneSizes[*pair.BlueHash] = pair.AnticoneSize
}
return externalapi.NewBlockGHOSTDAGData(
data.BlueScore,
data.BlueWork,
data.SelectedParent,
data.MergeSetBlues,
data.MergeSetReds,
bluesAnticoneSizes,
)
}
func domainGHOSTDAGDataGHOSTDAGData(data *externalapi.BlockGHOSTDAGData) *BlockGHOSTDAGData {
bluesAnticoneSizes := make([]*BluesAnticoneSizes, 0, len(data.BluesAnticoneSizes()))
for blueHash, anticoneSize := range data.BluesAnticoneSizes() {
blueHashCopy := blueHash
bluesAnticoneSizes = append(bluesAnticoneSizes, &BluesAnticoneSizes{
BlueHash: &blueHashCopy,
AnticoneSize: anticoneSize,
})
}
return &BlockGHOSTDAGData{
BlueScore: data.BlueScore(),
BlueWork: data.BlueWork(),
SelectedParent: data.SelectedParent(),
MergeSetBlues: data.MergeSetBlues(),
MergeSetReds: data.MergeSetReds(),
BluesAnticoneSizes: bluesAnticoneSizes,
}
}
// DomainBlockWithTrustedDataToBlockWithTrustedData converts *externalapi.BlockWithTrustedData to *MsgBlockWithTrustedData
func DomainBlockWithTrustedDataToBlockWithTrustedData(block *externalapi.BlockWithTrustedData) *MsgBlockWithTrustedData {
daaWindow := make([]*TrustedDataDataDAABlock, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &TrustedDataDataDAABlock{
Block: DomainBlockToMsgBlock(daaBlock.Block),
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
ghostdagData := make([]*BlockGHOSTDAGDataHashPair, len(block.GHOSTDAGData))
for i, datum := range block.GHOSTDAGData {
ghostdagData[i] = &BlockGHOSTDAGDataHashPair{
Hash: datum.Hash,
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(datum.GHOSTDAGData),
}
}
return &MsgBlockWithTrustedData{
Block: DomainBlockToMsgBlock(block.Block),
DAAScore: block.DAAScore,
DAAWindow: daaWindow,
GHOSTDAGData: ghostdagData,
}
}
// MsgPruningPointProofToDomainPruningPointProof converts *MsgPruningPointProof to *externalapi.PruningPointProof
func MsgPruningPointProofToDomainPruningPointProof(pruningPointProofMessage *MsgPruningPointProof) *externalapi.PruningPointProof {
headers := make([][]externalapi.BlockHeader, len(pruningPointProofMessage.Headers))
for blockLevel, blockLevelParents := range pruningPointProofMessage.Headers {
headers[blockLevel] = make([]externalapi.BlockHeader, len(blockLevelParents))
for i, header := range blockLevelParents {
headers[blockLevel][i] = BlockHeaderToDomainBlockHeader(header)
}
}
return &externalapi.PruningPointProof{
Headers: headers,
}
}
// DomainPruningPointProofToMsgPruningPointProof converts *externalapi.PruningPointProof to *MsgPruningPointProof
func DomainPruningPointProofToMsgPruningPointProof(pruningPointProof *externalapi.PruningPointProof) *MsgPruningPointProof {
headers := make([][]*MsgBlockHeader, len(pruningPointProof.Headers))
for blockLevel, blockLevelParents := range pruningPointProof.Headers {
headers[blockLevel] = make([]*MsgBlockHeader, len(blockLevelParents))
for i, header := range blockLevelParents {
headers[blockLevel][i] = DomainBlockHeaderToBlockHeader(header)
}
}
return &MsgPruningPointProof{
Headers: headers,
}
}

View File

@@ -45,24 +45,27 @@ const (
CmdRequestRelayBlocks
CmdInvTransaction
CmdRequestTransactions
CmdIBDBlock
CmdDoneHeaders
CmdTransactionNotFound
CmdReject
CmdHeader
CmdRequestNextHeaders
CmdRequestPruningPointUTXOSetAndBlock
CmdRequestPruningPointUTXOSet
CmdPruningPointUTXOSetChunk
CmdRequestIBDBlocks
CmdUnexpectedPruningPoint
CmdRequestPruningPointHash
CmdPruningPointHash
CmdIBDBlockLocator
CmdIBDBlockLocatorHighestHash
CmdIBDBlockLocatorHighestHashNotFound
CmdBlockHeaders
CmdRequestNextPruningPointUTXOSetChunk
CmdDonePruningPointUTXOSetChunks
CmdBlockWithTrustedData
CmdDoneBlocksWithTrustedData
CmdRequestPruningPointAndItsAnticone
CmdIBDBlock
CmdRequestIBDBlocks
CmdPruningPoints
CmdRequestPruningPointProof
CmdPruningPointProof
// rpc
CmdGetCurrentNetworkRequestMessage
@@ -137,6 +140,11 @@ const (
CmdPruningPointUTXOSetOverrideNotificationMessage
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage
CmdEstimateNetworkHashesPerSecondRequestMessage
CmdEstimateNetworkHashesPerSecondResponseMessage
CmdNotifyVirtualDaaScoreChangedRequestMessage
CmdNotifyVirtualDaaScoreChangedResponseMessage
CmdVirtualDaaScoreChangedNotificationMessage
)
// ProtocolMessageCommandToString maps all MessageCommands to their string representation
@@ -145,7 +153,7 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdVerAck: "VerAck",
CmdRequestAddresses: "RequestAddresses",
CmdAddresses: "Addresses",
CmdRequestHeaders: "RequestHeaders",
CmdRequestHeaders: "CmdRequestHeaders",
CmdBlock: "Block",
CmdTx: "Tx",
CmdPing: "Ping",
@@ -156,24 +164,27 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdRequestRelayBlocks: "RequestRelayBlocks",
CmdInvTransaction: "InvTransaction",
CmdRequestTransactions: "RequestTransactions",
CmdIBDBlock: "IBDBlock",
CmdDoneHeaders: "DoneHeaders",
CmdTransactionNotFound: "TransactionNotFound",
CmdReject: "Reject",
CmdHeader: "Header",
CmdRequestNextHeaders: "RequestNextHeaders",
CmdRequestPruningPointUTXOSetAndBlock: "RequestPruningPointUTXOSetAndBlock",
CmdRequestPruningPointUTXOSet: "RequestPruningPointUTXOSet",
CmdPruningPointUTXOSetChunk: "PruningPointUTXOSetChunk",
CmdRequestIBDBlocks: "RequestIBDBlocks",
CmdUnexpectedPruningPoint: "UnexpectedPruningPoint",
CmdRequestPruningPointHash: "RequestPruningPointHashHash",
CmdPruningPointHash: "PruningPointHash",
CmdIBDBlockLocator: "IBDBlockLocator",
CmdIBDBlockLocatorHighestHash: "IBDBlockLocatorHighestHash",
CmdIBDBlockLocatorHighestHashNotFound: "IBDBlockLocatorHighestHashNotFound",
CmdBlockHeaders: "BlockHeaders",
CmdRequestNextPruningPointUTXOSetChunk: "RequestNextPruningPointUTXOSetChunk",
CmdDonePruningPointUTXOSetChunks: "DonePruningPointUTXOSetChunks",
CmdBlockWithTrustedData: "BlockWithTrustedData",
CmdDoneBlocksWithTrustedData: "DoneBlocksWithTrustedData",
CmdRequestPruningPointAndItsAnticone: "RequestPruningPointAndItsAnticoneHeaders",
CmdIBDBlock: "IBDBlock",
CmdRequestIBDBlocks: "RequestIBDBlocks",
CmdPruningPoints: "PruningPoints",
CmdRequestPruningPointProof: "RequestPruningPointProof",
CmdPruningPointProof: "PruningPointProof",
}
// RPCMessageCommandToString maps all MessageCommands to their string representation
@@ -248,6 +259,11 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdPruningPointUTXOSetOverrideNotificationMessage: "PruningPointUTXOSetOverrideNotification",
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: "StopNotifyingPruningPointUTXOSetOverrideRequest",
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage: "StopNotifyingPruningPointUTXOSetOverrideResponse",
CmdEstimateNetworkHashesPerSecondRequestMessage: "EstimateNetworkHashesPerSecondRequest",
CmdEstimateNetworkHashesPerSecondResponseMessage: "EstimateNetworkHashesPerSecondResponse",
CmdNotifyVirtualDaaScoreChangedRequestMessage: "NotifyVirtualDaaScoreChangedRequest",
CmdNotifyVirtualDaaScoreChangedResponseMessage: "NotifyVirtualDaaScoreChangedResponse",
CmdVirtualDaaScoreChangedNotificationMessage: "VirtualDaaScoreChangedNotification",
}
// Message is an interface that describes a kaspa message. A type that

View File

@@ -15,19 +15,6 @@ import (
// backing array multiple times.
const defaultTransactionAlloc = 2048
// MaxMassAcceptedByBlock is the maximum total transaction mass a block may accept.
const MaxMassAcceptedByBlock = 10000000
// MaxMassPerTx is the maximum total mass a transaction may have.
const MaxMassPerTx = MaxMassAcceptedByBlock / 2
// MaxTxPerBlock is the maximum number of transactions that could
// possibly fit into a block.
const MaxTxPerBlock = (MaxMassAcceptedByBlock / minTxPayload) + 1
// MaxBlockParents is the maximum allowed number of parents for block.
const MaxBlockParents = 10
// TxLoc holds locator data for the offset and length of where a transaction is
// located within a MsgBlock data buffer.
type TxLoc struct {

View File

@@ -21,13 +21,18 @@ func TestBlock(t *testing.T) {
pver := ProtocolVersion
// Block 1 header.
parentHashes := blockOne.Header.ParentHashes
parents := blockOne.Header.Parents
hashMerkleRoot := blockOne.Header.HashMerkleRoot
acceptedIDMerkleRoot := blockOne.Header.AcceptedIDMerkleRoot
utxoCommitment := blockOne.Header.UTXOCommitment
bits := blockOne.Header.Bits
nonce := blockOne.Header.Nonce
bh := NewBlockHeader(1, parentHashes, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce)
daaScore := blockOne.Header.DAAScore
blueScore := blockOne.Header.BlueScore
blueWork := blockOne.Header.BlueWork
pruningPoint := blockOne.Header.PruningPoint
bh := NewBlockHeader(1, parents, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce,
daaScore, blueScore, blueWork, pruningPoint)
// Ensure the command is expected value.
wantCmd := MessageCommand(5)
@@ -131,7 +136,7 @@ func TestConvertToPartial(t *testing.T) {
var blockOne = MsgBlock{
Header: MsgBlockHeader{
Version: 0,
ParentHashes: []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash},
Parents: []externalapi.BlockLevelParents{[]*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}},
HashMerkleRoot: mainnetGenesisMerkleRoot,
AcceptedIDMerkleRoot: exampleAcceptedIDMerkleRoot,
UTXOCommitment: exampleUTXOCommitment,

View File

@@ -5,13 +5,12 @@
package appmessage
import (
"math"
"math/big"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/pkg/errors"
)
// BaseBlockHeaderPayload is the base number of bytes a block header can be,
@@ -39,8 +38,8 @@ type MsgBlockHeader struct {
// Version of the block. This is not the same as the protocol version.
Version uint16
// Hashes of the parent block headers in the blockDAG.
ParentHashes []*externalapi.DomainHash
// Parents are the parent block hashes of the block in the DAG per superblock level.
Parents []externalapi.BlockLevelParents
// HashMerkleRoot is the merkle tree reference to hash of all transactions for the block.
HashMerkleRoot *externalapi.DomainHash
@@ -60,15 +59,16 @@ type MsgBlockHeader struct {
// Nonce used to generate the block.
Nonce uint64
}
// NumParentBlocks return the number of entries in ParentHashes
func (h *MsgBlockHeader) NumParentBlocks() byte {
numParents := len(h.ParentHashes)
if numParents > math.MaxUint8 {
panic(errors.Errorf("number of parents is %d, which is more than one byte can fit", numParents))
}
return byte(numParents)
// DAAScore is the DAA score of the block.
DAAScore uint64
BlueScore uint64
// BlueWork is the blue work of the block.
BlueWork *big.Int
PruningPoint *externalapi.DomainHash
}
// BlockHash computes the block identifier hash for the given block header.
@@ -76,33 +76,27 @@ func (h *MsgBlockHeader) BlockHash() *externalapi.DomainHash {
return consensushashing.HeaderHash(BlockHeaderToDomainBlockHeader(h))
}
// IsGenesis returns true iff this block is a genesis block
func (h *MsgBlockHeader) IsGenesis() bool {
return h.NumParentBlocks() == 0
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (h *MsgBlockHeader) Command() MessageCommand {
return CmdHeader
}
// NewBlockHeader returns a new MsgBlockHeader using the provided version, previous
// block hash, hash merkle root, accepted ID merkle root, difficulty bits, and nonce used to generate the
// block with defaults or calculated values for the remaining fields.
func NewBlockHeader(version uint16, parentHashes []*externalapi.DomainHash, hashMerkleRoot *externalapi.DomainHash,
acceptedIDMerkleRoot *externalapi.DomainHash, utxoCommitment *externalapi.DomainHash, bits uint32, nonce uint64) *MsgBlockHeader {
func NewBlockHeader(version uint16, parents []externalapi.BlockLevelParents, hashMerkleRoot *externalapi.DomainHash,
acceptedIDMerkleRoot *externalapi.DomainHash, utxoCommitment *externalapi.DomainHash, bits uint32, nonce,
daaScore, blueScore uint64, blueWork *big.Int, pruningPoint *externalapi.DomainHash) *MsgBlockHeader {
// Limit the timestamp to one millisecond precision since the protocol
// doesn't support better.
return &MsgBlockHeader{
Version: version,
ParentHashes: parentHashes,
Parents: parents,
HashMerkleRoot: hashMerkleRoot,
AcceptedIDMerkleRoot: acceptedIDMerkleRoot,
UTXOCommitment: utxoCommitment,
Timestamp: mstime.Now(),
Bits: bits,
Nonce: nonce,
DAAScore: daaScore,
BlueScore: blueScore,
BlueWork: blueWork,
PruningPoint: pruningPoint,
}
}

View File

@@ -5,29 +5,34 @@
package appmessage
import (
"math/big"
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
)
// TestBlockHeader tests the MsgBlockHeader API.
func TestBlockHeader(t *testing.T) {
nonce := uint64(0xba4d87a69924a93d)
hashes := []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}
parents := []externalapi.BlockLevelParents{[]*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}}
merkleHash := mainnetGenesisMerkleRoot
acceptedIDMerkleRoot := exampleAcceptedIDMerkleRoot
bits := uint32(0x1d00ffff)
bh := NewBlockHeader(1, hashes, merkleHash, acceptedIDMerkleRoot, exampleUTXOCommitment, bits, nonce)
daaScore := uint64(123)
blueScore := uint64(456)
blueWork := big.NewInt(789)
pruningPoint := simnetGenesisHash
bh := NewBlockHeader(1, parents, merkleHash, acceptedIDMerkleRoot, exampleUTXOCommitment, bits, nonce,
daaScore, blueScore, blueWork, pruningPoint)
// Ensure we get the same data back out.
if !reflect.DeepEqual(bh.ParentHashes, hashes) {
t.Errorf("NewBlockHeader: wrong prev hashes - got %v, want %v",
spew.Sprint(bh.ParentHashes), spew.Sprint(hashes))
if !reflect.DeepEqual(bh.Parents, parents) {
t.Errorf("NewBlockHeader: wrong parents - got %v, want %v",
spew.Sprint(bh.Parents), spew.Sprint(parents))
}
if bh.HashMerkleRoot != merkleHash {
t.Errorf("NewBlockHeader: wrong merkle root - got %v, want %v",
@@ -41,44 +46,20 @@ func TestBlockHeader(t *testing.T) {
t.Errorf("NewBlockHeader: wrong nonce - got %v, want %v",
bh.Nonce, nonce)
}
}
func TestIsGenesis(t *testing.T) {
nonce := uint64(123123) // 0x1e0f3
bits := uint32(0x1d00ffff)
timestamp := mstime.UnixMilliseconds(0x495fab29000)
baseBlockHdr := &MsgBlockHeader{
Version: 1,
ParentHashes: []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash},
HashMerkleRoot: mainnetGenesisMerkleRoot,
Timestamp: timestamp,
Bits: bits,
Nonce: nonce,
if bh.DAAScore != daaScore {
t.Errorf("NewBlockHeader: wrong daaScore - got %v, want %v",
bh.DAAScore, daaScore)
}
genesisBlockHdr := &MsgBlockHeader{
Version: 1,
ParentHashes: []*externalapi.DomainHash{},
HashMerkleRoot: mainnetGenesisMerkleRoot,
Timestamp: timestamp,
Bits: bits,
Nonce: nonce,
if bh.BlueScore != blueScore {
t.Errorf("NewBlockHeader: wrong blueScore - got %v, want %v",
bh.BlueScore, blueScore)
}
tests := []struct {
in *MsgBlockHeader // Block header to encode
isGenesis bool // Expected result for call of .IsGenesis
}{
{genesisBlockHdr, true},
{baseBlockHdr, false},
if bh.BlueWork != blueWork {
t.Errorf("NewBlockHeader: wrong blueWork - got %v, want %v",
bh.BlueWork, blueWork)
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
isGenesis := test.in.IsGenesis()
if isGenesis != test.isGenesis {
t.Errorf("MsgBlockHeader.IsGenesis: #%d got: %t, want: %t",
i, isGenesis, test.isGenesis)
}
if !bh.PruningPoint.Equal(pruningPoint) {
t.Errorf("NewBlockHeader: wrong pruningPoint - got %v, want %v",
bh.PruningPoint, pruningPoint)
}
}

View File

@@ -0,0 +1,54 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"math/big"
)
// MsgBlockWithTrustedData represents a kaspa BlockWithTrustedData message
type MsgBlockWithTrustedData struct {
baseMessage
Block *MsgBlock
DAAScore uint64
DAAWindow []*TrustedDataDataDAABlock
GHOSTDAGData []*BlockGHOSTDAGDataHashPair
}
// Command returns the protocol command string for the message
func (msg *MsgBlockWithTrustedData) Command() MessageCommand {
return CmdBlockWithTrustedData
}
// NewMsgBlockWithTrustedData returns a new MsgBlockWithTrustedData.
func NewMsgBlockWithTrustedData() *MsgBlockWithTrustedData {
return &MsgBlockWithTrustedData{}
}
// TrustedDataDataDAABlock is an appmessage representation of externalapi.TrustedDataDataDAABlock
type TrustedDataDataDAABlock struct {
Block *MsgBlock
GHOSTDAGData *BlockGHOSTDAGData
}
// BlockGHOSTDAGData is an appmessage representation of externalapi.BlockGHOSTDAGData
type BlockGHOSTDAGData struct {
BlueScore uint64
BlueWork *big.Int
SelectedParent *externalapi.DomainHash
MergeSetBlues []*externalapi.DomainHash
MergeSetReds []*externalapi.DomainHash
BluesAnticoneSizes []*BluesAnticoneSizes
}
// BluesAnticoneSizes is an appmessage representation of the BluesAnticoneSizes part of GHOSTDAG data.
type BluesAnticoneSizes struct {
BlueHash *externalapi.DomainHash
AnticoneSize externalapi.KType
}
// BlockGHOSTDAGDataHashPair is an appmessage representation of externalapi.BlockGHOSTDAGDataHashPair
type BlockGHOSTDAGDataHashPair struct {
Hash *externalapi.DomainHash
GHOSTDAGData *BlockGHOSTDAGData
}
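A rough usage sketch (editor's illustration, not part of this changeset; the helper name and all values are placeholders) showing how the trusted-data types above fit together:

package example

import (
    "math/big"

    "github.com/kaspanet/kaspad/app/appmessage"
    "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// buildBlockWithTrustedData is a hypothetical helper: it pairs a block with
// the GHOSTDAG data a pruning-point-synced node cannot compute on its own.
func buildBlockWithTrustedData(block *appmessage.MsgBlock, selectedParent *externalapi.DomainHash) *appmessage.MsgBlockWithTrustedData {
    msg := appmessage.NewMsgBlockWithTrustedData()
    msg.Block = block
    msg.DAAScore = 123 // placeholder DAA score
    msg.GHOSTDAGData = []*appmessage.BlockGHOSTDAGDataHashPair{{
        Hash: selectedParent,
        GHOSTDAGData: &appmessage.BlockGHOSTDAGData{
            BlueScore:      456,           // placeholder
            BlueWork:       big.NewInt(789), // placeholder
            SelectedParent: selectedParent,
        },
    }}
    return msg
}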

View File

@@ -0,0 +1,21 @@
package appmessage
// MsgDoneBlocksWithTrustedData implements the Message interface and represents a kaspa
// DoneBlocksWithTrustedData message
//
// This message has no payload.
type MsgDoneBlocksWithTrustedData struct {
baseMessage
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgDoneBlocksWithTrustedData) Command() MessageCommand {
return CmdDoneBlocksWithTrustedData
}
// NewMsgDoneBlocksWithTrustedData returns a new kaspa DoneBlocksWithTrustedData message that conforms to the
// Message interface.
func NewMsgDoneBlocksWithTrustedData() *MsgDoneBlocksWithTrustedData {
return &MsgDoneBlocksWithTrustedData{}
}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgRequestPruningPointAndItsAnticone represents a kaspa RequestPruningPointAndItsAnticone message
type MsgRequestPruningPointAndItsAnticone struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointAndItsAnticone) Command() MessageCommand {
return CmdRequestPruningPointAndItsAnticone
}
// NewMsgRequestPruningPointAndItsAnticone returns a new MsgRequestPruningPointAndItsAnticone.
func NewMsgRequestPruningPointAndItsAnticone() *MsgRequestPruningPointAndItsAnticone {
return &MsgRequestPruningPointAndItsAnticone{}
}

View File

@@ -1,65 +0,0 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package appmessage
import (
"reflect"
"testing"
"github.com/davecgh/go-spew/spew"
)
// TestIBDBlock tests the MsgIBDBlock API.
func TestIBDBlock(t *testing.T) {
pver := ProtocolVersion
// Block 1 header.
parentHashes := blockOne.Header.ParentHashes
hashMerkleRoot := blockOne.Header.HashMerkleRoot
acceptedIDMerkleRoot := blockOne.Header.AcceptedIDMerkleRoot
utxoCommitment := blockOne.Header.UTXOCommitment
bits := blockOne.Header.Bits
nonce := blockOne.Header.Nonce
bh := NewBlockHeader(1, parentHashes, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce)
// Ensure the command is expected value.
wantCmd := MessageCommand(15)
msg := NewMsgIBDBlock(NewMsgBlock(bh))
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgIBDBlock: wrong command - got %v want %v",
cmd, wantCmd)
}
// Ensure max payload is expected value for latest protocol version.
wantPayload := uint32(1024 * 1024 * 32)
maxPayload := msg.MaxPayloadLength(pver)
if maxPayload != wantPayload {
t.Errorf("MaxPayloadLength: wrong max payload length for "+
"protocol version %d - got %v, want %v", pver,
maxPayload, wantPayload)
}
// Ensure we get the same block header data back out.
if !reflect.DeepEqual(&msg.Header, bh) {
t.Errorf("NewMsgIBDBlock: wrong block header - got %v, want %v",
spew.Sdump(&msg.Header), spew.Sdump(bh))
}
// Ensure transactions are added properly.
tx := blockOne.Transactions[0].Copy()
msg.AddTransaction(tx)
if !reflect.DeepEqual(msg.Transactions, blockOne.Transactions) {
t.Errorf("AddTransaction: wrong transactions - got %v, want %v",
spew.Sdump(msg.Transactions),
spew.Sdump(blockOne.Transactions))
}
// Ensure transactions are properly cleared.
msg.ClearTransactions()
if len(msg.Transactions) != 0 {
t.Errorf("ClearTransactions: wrong transactions - got %v, want %v",
len(msg.Transactions), 0)
}
}

View File

@@ -1,23 +0,0 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgPruningPointHashMessage represents a kaspa PruningPointHash message
type MsgPruningPointHashMessage struct {
baseMessage
Hash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPointHashMessage) Command() MessageCommand {
return CmdPruningPointHash
}
// NewPruningPointHashMessage returns a new kaspa PruningPointHash message
func NewPruningPointHashMessage(hash *externalapi.DomainHash) *MsgPruningPointHashMessage {
return &MsgPruningPointHashMessage{
Hash: hash,
}
}

View File

@@ -0,0 +1,20 @@
package appmessage
// MsgPruningPointProof represents a kaspa PruningPointProof message
type MsgPruningPointProof struct {
baseMessage
Headers [][]*MsgBlockHeader
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPointProof) Command() MessageCommand {
return CmdPruningPointProof
}
// NewMsgPruningPointProof returns a new MsgPruningPointProof.
func NewMsgPruningPointProof(headers [][]*MsgBlockHeader) *MsgPruningPointProof {
return &MsgPruningPointProof{
Headers: headers,
}
}
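Since Headers is two-dimensional, each inner slice presumably holds the headers of a single block level; a minimal sketch (editor's illustration) of walking it:

package example

import (
    "fmt"

    "github.com/kaspanet/kaspad/app/appmessage"
)

// printProofLevels walks the proof level by level; the outer index is
// assumed to be the block level.
func printProofLevels(proof *appmessage.MsgPruningPointProof) {
    for level, levelHeaders := range proof.Headers {
        fmt.Printf("level %d: %d headers\n", level, len(levelHeaders))
    }
}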

View File

@@ -0,0 +1,20 @@
package appmessage
// MsgPruningPoints represents a kaspa PruningPoints message
type MsgPruningPoints struct {
baseMessage
Headers []*MsgBlockHeader
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPoints) Command() MessageCommand {
return CmdPruningPoints
}
// NewMsgPruningPoints returns a new MsgPruningPoints.
func NewMsgPruningPoints(headers []*MsgBlockHeader) *MsgPruningPoints {
return &MsgPruningPoints{
Headers: headers,
}
}

View File

@@ -31,6 +31,6 @@ type OutpointAndUTXOEntryPair struct {
type UTXOEntry struct {
Amount uint64
ScriptPublicKey *externalapi.ScriptPublicKey
-BlockBlueScore uint64
+BlockDAAScore uint64
IsCoinbase bool
}

View File

@@ -10,7 +10,6 @@ import (
// The locator is returned via a locator message (MsgBlockLocator).
type MsgRequestBlockLocator struct {
baseMessage
-LowHash *externalapi.DomainHash
HighHash *externalapi.DomainHash
Limit uint32
}
@@ -24,9 +23,8 @@ func (msg *MsgRequestBlockLocator) Command() MessageCommand {
// NewMsgRequestBlockLocator returns a new RequestBlockLocator message that conforms to the
// Message interface using the passed parameters and defaults for the remaining
// fields.
-func NewMsgRequestBlockLocator(lowHash, highHash *externalapi.DomainHash, limit uint32) *MsgRequestBlockLocator {
+func NewMsgRequestBlockLocator(highHash *externalapi.DomainHash, limit uint32) *MsgRequestBlockLocator {
return &MsgRequestBlockLocator{
-LowHash: lowHash,
HighHash: highHash,
Limit: limit,
}

View File

@@ -16,7 +16,7 @@ func TestRequestBlockLocator(t *testing.T) {
// Ensure the command is expected value.
wantCmd := MessageCommand(9)
-msg := NewMsgRequestBlockLocator(highHash, &externalapi.DomainHash{}, 0)
+msg := NewMsgRequestBlockLocator(highHash, 0)
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgRequestBlockLocator: wrong command - got %v want %v",
cmd, wantCmd)

View File

@@ -10,7 +10,7 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
-// TestRequstIBDBlocks tests the MsgRequestHeaders API.
+// TestRequstIBDBlocks tests the MsgRequestIBDBlocks API.
func TestRequstIBDBlocks(t *testing.T) {
hashStr := "000000000002e7ad7b9eef9479e4aabc65cb831269cc20d2632c13684406dee0"
lowHash, err := externalapi.NewDomainHashFromString(hashStr)
@@ -27,14 +27,14 @@ func TestRequstIBDBlocks(t *testing.T) {
// Ensure we get the same data back out.
msg := NewMsgRequstHeaders(lowHash, highHash)
if !msg.HighHash.Equal(highHash) {
t.Errorf("NewMsgRequstHeaders: wrong high hash - got %v, want %v",
t.Errorf("NewMsgRequstIBDBlocks: wrong high hash - got %v, want %v",
msg.HighHash, highHash)
}
// Ensure the command is expected value.
wantCmd := MessageCommand(4)
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgRequstHeaders: wrong command - got %v want %v",
t.Errorf("NewMsgRequstIBDBlocks: wrong command - got %v want %v",
cmd, wantCmd)
}
}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgRequestPruningPointProof represents a kaspa RequestPruningPointProof message
type MsgRequestPruningPointProof struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointProof) Command() MessageCommand {
return CmdRequestPruningPointProof
}
// NewMsgRequestPruningPointProof returns a new MsgRequestPruningPointProof.
func NewMsgRequestPruningPointProof() *MsgRequestPruningPointProof {
return &MsgRequestPruningPointProof{}
}

View File

@@ -4,20 +4,20 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
-// MsgRequestPruningPointUTXOSetAndBlock represents a kaspa RequestPruningPointUTXOSetAndBlock message
-type MsgRequestPruningPointUTXOSetAndBlock struct {
+// MsgRequestPruningPointUTXOSet represents a kaspa RequestPruningPointUTXOSet message
+type MsgRequestPruningPointUTXOSet struct {
baseMessage
PruningPointHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
-func (msg *MsgRequestPruningPointUTXOSetAndBlock) Command() MessageCommand {
-return CmdRequestPruningPointUTXOSetAndBlock
+func (msg *MsgRequestPruningPointUTXOSet) Command() MessageCommand {
+return CmdRequestPruningPointUTXOSet
}
-// NewMsgRequestPruningPointUTXOSetAndBlock returns a new MsgRequestPruningPointUTXOSetAndBlock
-func NewMsgRequestPruningPointUTXOSetAndBlock(pruningPointHash *externalapi.DomainHash) *MsgRequestPruningPointUTXOSetAndBlock {
-return &MsgRequestPruningPointUTXOSetAndBlock{
+// NewMsgRequestPruningPointUTXOSet returns a new MsgRequestPruningPointUTXOSet
+func NewMsgRequestPruningPointUTXOSet(pruningPointHash *externalapi.DomainHash) *MsgRequestPruningPointUTXOSet {
+return &MsgRequestPruningPointUTXOSet{
PruningPointHash: pruningPointHash,
}
}

View File

@@ -6,7 +6,6 @@ package appmessage
import (
"encoding/binary"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"strconv"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
@@ -91,16 +90,18 @@ type TxIn struct {
PreviousOutpoint Outpoint
SignatureScript []byte
Sequence uint64
SigOpCount byte
}
// NewTxIn returns a new kaspa transaction input with the provided
// previous outpoint and signature script with a default sequence of
// MaxTxInSequenceNum.
-func NewTxIn(prevOut *Outpoint, signatureScript []byte, sequence uint64) *TxIn {
+func NewTxIn(prevOut *Outpoint, signatureScript []byte, sequence uint64, sigOpCount byte) *TxIn {
return &TxIn{
PreviousOutpoint: *prevOut,
SignatureScript: signatureScript,
Sequence: sequence,
SigOpCount: sigOpCount,
}
}
@@ -133,7 +134,6 @@ type MsgTx struct {
LockTime uint64
SubnetworkID externalapi.DomainSubnetworkID
Gas uint64
-PayloadHash externalapi.DomainHash
Payload []byte
}
@@ -179,7 +179,6 @@ func (msg *MsgTx) Copy() *MsgTx {
LockTime: msg.LockTime,
SubnetworkID: msg.SubnetworkID,
Gas: msg.Gas,
-PayloadHash: msg.PayloadHash,
}
if msg.Payload != nil {
@@ -209,6 +208,7 @@ func (msg *MsgTx) Copy() *MsgTx {
PreviousOutpoint: newOutpoint,
SignatureScript: newScript,
Sequence: oldTxIn.Sequence,
SigOpCount: oldTxIn.SigOpCount,
}
// Finally, append this fully copied txin.
@@ -280,18 +280,12 @@ func newMsgTx(version uint16, txIn []*TxIn, txOut []*TxOut, subnetworkID *extern
txOut = make([]*TxOut, 0, defaultTxInOutAlloc)
}
-var payloadHash externalapi.DomainHash
-if *subnetworkID != subnetworks.SubnetworkIDNative {
-payloadHash = *hashes.PayloadHash(payload)
-}
return &MsgTx{
Version: version,
TxIn: txIn,
TxOut: txOut,
SubnetworkID: *subnetworkID,
Gas: gas,
-PayloadHash: payloadHash,
Payload: payload,
LockTime: lockTime,
}
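A minimal sketch (editor's illustration; the trailing 1 is a placeholder sigOpCount) of the updated NewTxIn constructor, mirroring the test that follows:

package example

import (
    "github.com/kaspanet/kaspad/app/appmessage"
    "github.com/kaspanet/kaspad/domain/consensus/utils/constants"
)

// exampleTxIn builds an input whose SigOpCount now travels with the input
// itself instead of being derived later.
func exampleTxIn(prevOut *appmessage.Outpoint) *appmessage.TxIn {
    sigScript := []byte{0x04, 0x31, 0xdc, 0x00, 0x1b, 0x01, 0x62}
    return appmessage.NewTxIn(prevOut, sigScript, constants.MaxTxInSequenceNum, 1)
}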

View File

@@ -68,7 +68,7 @@ func TestTx(t *testing.T) {
// Ensure we get the same transaction input back out.
sigScript := []byte{0x04, 0x31, 0xdc, 0x00, 0x1b, 0x01, 0x62}
-txIn := NewTxIn(prevOut, sigScript, constants.MaxTxInSequenceNum)
+txIn := NewTxIn(prevOut, sigScript, constants.MaxTxInSequenceNum, 1)
if !reflect.DeepEqual(&txIn.PreviousOutpoint, prevOut) {
t.Errorf("NewTxIn: wrong prev outpoint - got %v, want %v",
spew.Sprint(&txIn.PreviousOutpoint),
@@ -133,8 +133,8 @@ func TestTx(t *testing.T) {
// TestTxHash tests the ability to generate the hash of a transaction accurately.
func TestTxHashAndID(t *testing.T) {
txHash1Str := "4bee9ee495bd93a755de428376bd582a2bb6ec37c041753b711c0606d5745c13"
txID1Str := "f868bd20e816256b80eac976821be4589d24d21141bd1cec6e8005d0c16c6881"
txHash1Str := "93663e597f6c968d32d229002f76408edf30d6a0151ff679fc729812d8cb2acc"
txID1Str := "24079c6d2bdf602fc389cc307349054937744a9c8dc0f07c023e6af0e949a4e7"
wantTxID1, err := transactionid.FromString(txID1Str)
if err != nil {
t.Fatalf("NewTxIDFromStr: %v", err)
@@ -185,14 +185,14 @@ func TestTxHashAndID(t *testing.T) {
spew.Sprint(tx1ID), spew.Sprint(wantTxID1))
}
hash2Str := "cb1bdb4a83d4885535fb3cceb5c96597b7df903db83f0ffcd779d703affd8efd"
hash2Str := "8dafd1bec24527d8e3b443ceb0a3b92fffc0d60026317f890b2faf5e9afc177a"
wantHash2, err := externalapi.NewDomainHashFromString(hash2Str)
if err != nil {
t.Errorf("NewTxIDFromStr: %v", err)
return
}
id2Str := "ca080073d4ddf5b84443a0964af633f3c70a5b290fd3bc35a7e6f93fd33f9330"
id2Str := "89ffb49474637502d9059af38b8a95fc2f0d3baef5c801d7a9b9c8830671b711"
wantID2, err := transactionid.FromString(id2Str)
if err != nil {
t.Errorf("NewTxIDFromStr: %v", err)

View File

@@ -19,7 +19,7 @@ func TestVersion(t *testing.T) {
// Create version message data.
tcpAddrMe := &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 16111}
-me := NewNetAddress(tcpAddrMe, SFNodeNetwork)
+me := NewNetAddress(tcpAddrMe)
generatedID, err := id.GenerateID()
if err != nil {
t.Fatalf("id.GenerateID: %s", err)

View File

@@ -15,9 +15,6 @@ type NetAddress struct {
// Last time the address was seen.
Timestamp mstime.Time
-// Bitfield which identifies the services supported by the address.
-Services ServiceFlag
// IP address of the peer.
IP net.IP
@@ -26,17 +23,6 @@ type NetAddress struct {
Port uint16
}
-// HasService returns whether the specified service is supported by the address.
-func (na *NetAddress) HasService(service ServiceFlag) bool {
-return na.Services&service == service
-}
-// AddService adds service as a supported service by the peer generating the
-// message.
-func (na *NetAddress) AddService(service ServiceFlag) {
-na.Services |= service
-}
// TCPAddress converts the NetAddress to *net.TCPAddr
func (na *NetAddress) TCPAddress() *net.TCPAddr {
return &net.TCPAddr{
@@ -47,20 +33,19 @@ func (na *NetAddress) TCPAddress() *net.TCPAddr {
// NewNetAddressIPPort returns a new NetAddress using the provided IP, port, and
// supported services with defaults for the remaining fields.
-func NewNetAddressIPPort(ip net.IP, port uint16, services ServiceFlag) *NetAddress {
-return NewNetAddressTimestamp(mstime.Now(), services, ip, port)
+func NewNetAddressIPPort(ip net.IP, port uint16) *NetAddress {
+return NewNetAddressTimestamp(mstime.Now(), ip, port)
}
// NewNetAddressTimestamp returns a new NetAddress using the provided
// timestamp, IP, port, and supported services. The timestamp is rounded to
// single millisecond precision.
func NewNetAddressTimestamp(
-timestamp mstime.Time, services ServiceFlag, ip net.IP, port uint16) *NetAddress {
+timestamp mstime.Time, ip net.IP, port uint16) *NetAddress {
// Limit the timestamp to one millisecond precision since the protocol
// doesn't support better.
na := NetAddress{
Timestamp: timestamp,
-Services: services,
IP: ip,
Port: port,
}
@@ -69,6 +54,6 @@ func NewNetAddressTimestamp(
// NewNetAddress returns a new NetAddress using the provided TCP address and
// supported services with defaults for the remaining fields.
-func NewNetAddress(addr *net.TCPAddr, services ServiceFlag) *NetAddress {
-return NewNetAddressIPPort(addr.IP, uint16(addr.Port), services)
+func NewNetAddress(addr *net.TCPAddr) *NetAddress {
+return NewNetAddressIPPort(addr.IP, uint16(addr.Port))
}

View File

@@ -15,7 +15,7 @@ func TestNetAddress(t *testing.T) {
port := 16111
// Test NewNetAddress.
-na := NewNetAddress(&net.TCPAddr{IP: ip, Port: port}, 0)
+na := NewNetAddress(&net.TCPAddr{IP: ip, Port: port})
// Ensure we get the same ip, port, and services back out.
if !na.IP.Equal(ip) {
@@ -25,21 +25,4 @@ func TestNetAddress(t *testing.T) {
t.Errorf("NetNetAddress: wrong port - got %v, want %v", na.Port,
port)
}
-if na.Services != 0 {
-t.Errorf("NetNetAddress: wrong services - got %v, want %v",
-na.Services, 0)
-}
-if na.HasService(SFNodeNetwork) {
-t.Errorf("HasService: SFNodeNetwork service is set")
-}
-// Ensure adding the full service node flag works.
-na.AddService(SFNodeNetwork)
-if na.Services != SFNodeNetwork {
-t.Errorf("AddService: wrong services - got %v, want %v",
-na.Services, SFNodeNetwork)
-}
-if !na.HasService(SFNodeNetwork) {
-t.Errorf("HasService: SFNodeNetwork service not set")
-}
}

View File

@@ -1,16 +0,0 @@
package appmessage
// MsgRequestPruningPointHashMessage represents a kaspa RequestPruningPointHashMessage message
type MsgRequestPruningPointHashMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointHashMessage) Command() MessageCommand {
return CmdRequestPruningPointHash
}
// NewMsgRequestPruningPointHashMessage returns a new kaspa RequestPruningPointHash message
func NewMsgRequestPruningPointHashMessage() *MsgRequestPruningPointHashMessage {
return &MsgRequestPruningPointHashMessage{}
}

View File

@@ -0,0 +1,43 @@
package appmessage
// EstimateNetworkHashesPerSecondRequestMessage is an appmessage corresponding to
// its respective RPC message
type EstimateNetworkHashesPerSecondRequestMessage struct {
baseMessage
StartHash string
WindowSize uint32
}
// Command returns the protocol command string for the message
func (msg *EstimateNetworkHashesPerSecondRequestMessage) Command() MessageCommand {
return CmdEstimateNetworkHashesPerSecondRequestMessage
}
// NewEstimateNetworkHashesPerSecondRequestMessage returns an instance of the message
func NewEstimateNetworkHashesPerSecondRequestMessage(startHash string, windowSize uint32) *EstimateNetworkHashesPerSecondRequestMessage {
return &EstimateNetworkHashesPerSecondRequestMessage{
StartHash: startHash,
WindowSize: windowSize,
}
}
// EstimateNetworkHashesPerSecondResponseMessage is an appmessage corresponding to
// its respective RPC message
type EstimateNetworkHashesPerSecondResponseMessage struct {
baseMessage
NetworkHashesPerSecond uint64
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *EstimateNetworkHashesPerSecondResponseMessage) Command() MessageCommand {
return CmdEstimateNetworkHashesPerSecondResponseMessage
}
// NewEstimateNetworkHashesPerSecondResponseMessage returns an instance of the message
func NewEstimateNetworkHashesPerSecondResponseMessage(networkHashesPerSecond uint64) *EstimateNetworkHashesPerSecondResponseMessage {
return &EstimateNetworkHashesPerSecondResponseMessage{
NetworkHashesPerSecond: networkHashesPerSecond,
}
}
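A minimal sketch (editor's illustration; the 1000-block window is an arbitrary placeholder) of building the request:

package example

import "github.com/kaspanet/kaspad/app/appmessage"

// buildEstimateRequest asks for the hash rate averaged over a window of
// blocks ending at startHash.
func buildEstimateRequest(startHash string) *appmessage.EstimateNetworkHashesPerSecondRequestMessage {
    return appmessage.NewEstimateNetworkHashesPerSecondRequestMessage(startHash, 1000)
}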

View File

@@ -4,8 +4,8 @@ package appmessage
// its respective RPC message
type GetBlockRequestMessage struct {
baseMessage
-Hash string
-IncludeTransactionVerboseData bool
+Hash string
+IncludeTransactions bool
}
// Command returns the protocol command string for the message
@@ -14,10 +14,10 @@ func (msg *GetBlockRequestMessage) Command() MessageCommand {
}
// NewGetBlockRequestMessage returns an instance of the message
-func NewGetBlockRequestMessage(hash string, includeTransactionVerboseData bool) *GetBlockRequestMessage {
+func NewGetBlockRequestMessage(hash string, includeTransactions bool) *GetBlockRequestMessage {
return &GetBlockRequestMessage{
-Hash: hash,
-IncludeTransactionVerboseData: includeTransactionVerboseData,
+Hash: hash,
+IncludeTransactions: includeTransactions,
}
}
@@ -25,7 +25,7 @@ func NewGetBlockRequestMessage(hash string, includeTransactionVerboseData bool)
// its respective RPC message
type GetBlockResponseMessage struct {
baseMessage
-BlockVerboseData *BlockVerboseData
+Block *RPCBlock
Error *RPCError
}
@@ -39,71 +39,3 @@ func (msg *GetBlockResponseMessage) Command() MessageCommand {
func NewGetBlockResponseMessage() *GetBlockResponseMessage {
return &GetBlockResponseMessage{}
}
-// BlockVerboseData holds verbose data about a block
-type BlockVerboseData struct {
-Hash string
-Version uint16
-VersionHex string
-HashMerkleRoot string
-AcceptedIDMerkleRoot string
-UTXOCommitment string
-TxIDs []string
-TransactionVerboseData []*TransactionVerboseData
-Time int64
-Nonce uint64
-Bits string
-Difficulty float64
-ParentHashes []string
-ChildrenHashes []string
-SelectedParentHash string
-BlueScore uint64
-IsHeaderOnly bool
-}
-// TransactionVerboseData holds verbose data about a transaction
-type TransactionVerboseData struct {
-TxID string
-Hash string
-Size uint64
-Version uint16
-LockTime uint64
-SubnetworkID string
-Gas uint64
-PayloadHash string
-Payload string
-TransactionVerboseInputs []*TransactionVerboseInput
-TransactionVerboseOutputs []*TransactionVerboseOutput
-BlockHash string
-Time uint64
-BlockTime uint64
-}
-// TransactionVerboseInput holds data about a transaction input
-type TransactionVerboseInput struct {
-TxID string
-OutputIndex uint32
-ScriptSig *ScriptSig
-Sequence uint64
-}
-// ScriptSig holds data about a script signature
-type ScriptSig struct {
-Asm string
-Hex string
-}
-// TransactionVerboseOutput holds data about a transaction output
-type TransactionVerboseOutput struct {
-Value uint64
-Index uint32
-ScriptPubKey *ScriptPubKeyResult
-}
-// ScriptPubKeyResult holds data about a script public key
-type ScriptPubKeyResult struct {
-Hex string
-Type string
-Address string
-Version uint16
-}

View File

@@ -28,6 +28,7 @@ type GetBlockDAGInfoResponseMessage struct {
Difficulty float64
PastMedianTime int64
PruningPointHash string
VirtualDAAScore uint64
Error *RPCError
}

View File

@@ -23,7 +23,7 @@ func NewGetBlockTemplateRequestMessage(payAddress string) *GetBlockTemplateReque
// its respective RPC message
type GetBlockTemplateResponseMessage struct {
baseMessage
-MsgBlock *MsgBlock
+Block *RPCBlock
IsSynced bool
Error *RPCError
@@ -35,9 +35,9 @@ func (msg *GetBlockTemplateResponseMessage) Command() MessageCommand {
}
// NewGetBlockTemplateResponseMessage returns an instance of the message
-func NewGetBlockTemplateResponseMessage(msgBlock *MsgBlock, isSynced bool) *GetBlockTemplateResponseMessage {
+func NewGetBlockTemplateResponseMessage(block *RPCBlock, isSynced bool) *GetBlockTemplateResponseMessage {
return &GetBlockTemplateResponseMessage{
-MsgBlock: msgBlock,
+Block: block,
IsSynced: isSynced,
}
}

View File

@@ -4,9 +4,9 @@ package appmessage
// its respective RPC message
type GetBlocksRequestMessage struct {
baseMessage
-LowHash string
-IncludeBlockVerboseData bool
-IncludeTransactionVerboseData bool
+LowHash string
+IncludeBlocks bool
+IncludeTransactions bool
}
// Command returns the protocol command string for the message
@@ -15,12 +15,12 @@ func (msg *GetBlocksRequestMessage) Command() MessageCommand {
}
// NewGetBlocksRequestMessage returns an instance of the message
-func NewGetBlocksRequestMessage(lowHash string, includeBlockVerboseData bool,
-includeTransactionVerboseData bool) *GetBlocksRequestMessage {
+func NewGetBlocksRequestMessage(lowHash string, includeBlocks bool,
+includeTransactions bool) *GetBlocksRequestMessage {
return &GetBlocksRequestMessage{
-LowHash: lowHash,
-IncludeBlockVerboseData: includeBlockVerboseData,
-IncludeTransactionVerboseData: includeTransactionVerboseData,
+LowHash: lowHash,
+IncludeBlocks: includeBlocks,
+IncludeTransactions: includeTransactions,
}
}
@@ -28,8 +28,8 @@ func NewGetBlocksRequestMessage(lowHash string, includeBlockVerboseData bool,
// its respective RPC message
type GetBlocksResponseMessage struct {
baseMessage
-BlockHashes []string
-BlockVerboseData []*BlockVerboseData
+BlockHashes []string
+Blocks []*RPCBlock
Error *RPCError
}
@@ -40,11 +40,6 @@ func (msg *GetBlocksResponseMessage) Command() MessageCommand {
}
// NewGetBlocksResponseMessage returns an instance of the message
-func NewGetBlocksResponseMessage(blockHashes []string, blockHexes []string,
-blockVerboseData []*BlockVerboseData) *GetBlocksResponseMessage {
-return &GetBlocksResponseMessage{
-BlockHashes: blockHashes,
-BlockVerboseData: blockVerboseData,
-}
+func NewGetBlocksResponseMessage() *GetBlocksResponseMessage {
+return &GetBlocksResponseMessage{}
}

View File

@@ -11,8 +11,8 @@ func (msg *GetInfoRequestMessage) Command() MessageCommand {
return CmdGetInfoRequestMessage
}
-// NewGeInfoRequestMessage returns an instance of the message
-func NewGeInfoRequestMessage() *GetInfoRequestMessage {
+// NewGetInfoRequestMessage returns an instance of the message
+func NewGetInfoRequestMessage() *GetInfoRequestMessage {
return &GetInfoRequestMessage{}
}
@@ -20,7 +20,9 @@ func NewGeInfoRequestMessage() *GetInfoRequestMessage {
// its respective RPC message
type GetInfoResponseMessage struct {
baseMessage
-P2PID string
+P2PID string
+MempoolSize uint64
+ServerVersion string
Error *RPCError
}
@@ -31,8 +33,10 @@ func (msg *GetInfoResponseMessage) Command() MessageCommand {
}
// NewGetInfoResponseMessage returns an instance of the message
-func NewGetInfoResponseMessage(p2pID string) *GetInfoResponseMessage {
+func NewGetInfoResponseMessage(p2pID string, mempoolSize uint64, serverVersion string) *GetInfoResponseMessage {
return &GetInfoResponseMessage{
-P2PID: p2pID,
+P2PID: p2pID,
+MempoolSize: mempoolSize,
+ServerVersion: serverVersion,
}
}

View File

@@ -28,8 +28,8 @@ type GetMempoolEntryResponseMessage struct {
// MempoolEntry represents a transaction in the mempool.
type MempoolEntry struct {
-Fee uint64
-TransactionVerboseData *TransactionVerboseData
+Fee uint64
+Transaction *RPCTransaction
}
// Command returns the protocol command string for the message
@@ -38,11 +38,11 @@ func (msg *GetMempoolEntryResponseMessage) Command() MessageCommand {
}
// NewGetMempoolEntryResponseMessage returns an instance of the message
-func NewGetMempoolEntryResponseMessage(fee uint64, transactionVerboseData *TransactionVerboseData) *GetMempoolEntryResponseMessage {
+func NewGetMempoolEntryResponseMessage(fee uint64, transaction *RPCTransaction) *GetMempoolEntryResponseMessage {
return &GetMempoolEntryResponseMessage{
Entry: &MempoolEntry{
-Fee: fee,
-TransactionVerboseData: transactionVerboseData,
+Fee: fee,
+Transaction: transaction,
},
}
}

View File

@@ -24,7 +24,7 @@ func NewGetVirtualSelectedParentChainFromBlockRequestMessage(startHash string) *
type GetVirtualSelectedParentChainFromBlockResponseMessage struct {
baseMessage
RemovedChainBlockHashes []string
-AddedChainBlocks []*ChainBlock
+AddedChainBlockHashes []string
Error *RPCError
}
@@ -35,11 +35,11 @@ func (msg *GetVirtualSelectedParentChainFromBlockResponseMessage) Command() Mess
}
// NewGetVirtualSelectedParentChainFromBlockResponseMessage returns an instance of the message
-func NewGetVirtualSelectedParentChainFromBlockResponseMessage(removedChainBlockHashes []string,
-addedChainBlocks []*ChainBlock) *GetVirtualSelectedParentChainFromBlockResponseMessage {
+func NewGetVirtualSelectedParentChainFromBlockResponseMessage(removedChainBlockHashes,
+addedChainBlockHashes []string) *GetVirtualSelectedParentChainFromBlockResponseMessage {
return &GetVirtualSelectedParentChainFromBlockResponseMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
-AddedChainBlocks: addedChainBlocks,
+AddedChainBlockHashes: addedChainBlockHashes,
}
}

View File

@@ -37,8 +37,7 @@ func NewNotifyBlockAddedResponseMessage() *NotifyBlockAddedResponseMessage {
// its respective RPC message
type BlockAddedNotificationMessage struct {
baseMessage
-Block *MsgBlock
-BlockVerboseData *BlockVerboseData
+Block *RPCBlock
}
// Command returns the protocol command string for the message
@@ -47,9 +46,8 @@ func (msg *BlockAddedNotificationMessage) Command() MessageCommand {
}
// NewBlockAddedNotificationMessage returns an instance of the message
-func NewBlockAddedNotificationMessage(block *MsgBlock, blockVerboseData *BlockVerboseData) *BlockAddedNotificationMessage {
+func NewBlockAddedNotificationMessage(block *RPCBlock) *BlockAddedNotificationMessage {
return &BlockAddedNotificationMessage{
-Block: block,
-BlockVerboseData: blockVerboseData,
+Block: block,
}
}

View File

@@ -0,0 +1,55 @@
package appmessage
// NotifyVirtualDaaScoreChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualDaaScoreChangedRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualDaaScoreChangedRequestMessage) Command() MessageCommand {
return CmdNotifyVirtualDaaScoreChangedRequestMessage
}
// NewNotifyVirtualDaaScoreChangedRequestMessage returns an instance of the message
func NewNotifyVirtualDaaScoreChangedRequestMessage() *NotifyVirtualDaaScoreChangedRequestMessage {
return &NotifyVirtualDaaScoreChangedRequestMessage{}
}
// NotifyVirtualDaaScoreChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualDaaScoreChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualDaaScoreChangedResponseMessage) Command() MessageCommand {
return CmdNotifyVirtualDaaScoreChangedResponseMessage
}
// NewNotifyVirtualDaaScoreChangedResponseMessage returns an instance of the message
func NewNotifyVirtualDaaScoreChangedResponseMessage() *NotifyVirtualDaaScoreChangedResponseMessage {
return &NotifyVirtualDaaScoreChangedResponseMessage{}
}
// VirtualDaaScoreChangedNotificationMessage is an appmessage corresponding to
// its respective RPC message
type VirtualDaaScoreChangedNotificationMessage struct {
baseMessage
VirtualDaaScore uint64
}
// Command returns the protocol command string for the message
func (msg *VirtualDaaScoreChangedNotificationMessage) Command() MessageCommand {
return CmdVirtualDaaScoreChangedNotificationMessage
}
// NewVirtualDaaScoreChangedNotificationMessage returns an instance of the message
func NewVirtualDaaScoreChangedNotificationMessage(
virtualDaaScore uint64) *VirtualDaaScoreChangedNotificationMessage {
return &VirtualDaaScoreChangedNotificationMessage{
VirtualDaaScore: virtualDaaScore,
}
}

View File

@@ -38,19 +38,7 @@ func NewNotifyVirtualSelectedParentChainChangedResponseMessage() *NotifyVirtualS
type VirtualSelectedParentChainChangedNotificationMessage struct {
baseMessage
RemovedChainBlockHashes []string
-AddedChainBlocks []*ChainBlock
-}
-// ChainBlock represents a DAG chain-block
-type ChainBlock struct {
-Hash string
-AcceptedBlocks []*AcceptedBlock
-}
-// AcceptedBlock represents a block accepted into the DAG
-type AcceptedBlock struct {
-Hash string
-AcceptedTransactionIDs []string
+AddedChainBlockHashes []string
}
// Command returns the protocol command string for the message
@@ -59,11 +47,11 @@ func (msg *VirtualSelectedParentChainChangedNotificationMessage) Command() Messa
}
// NewVirtualSelectedParentChainChangedNotificationMessage returns an instance of the message
-func NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes []string,
-addedChainBlocks []*ChainBlock) *VirtualSelectedParentChainChangedNotificationMessage {
+func NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes,
+addedChainBlocks []string) *VirtualSelectedParentChainChangedNotificationMessage {
return &VirtualSelectedParentChainChangedNotificationMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
-AddedChainBlocks: addedChainBlocks,
+AddedChainBlockHashes: addedChainBlocks,
}
}

View File

@@ -4,7 +4,7 @@ package appmessage
// its respective RPC message
type SubmitBlockRequestMessage struct {
baseMessage
-Block *MsgBlock
+Block *RPCBlock
}
// Command returns the protocol command string for the message
@@ -13,7 +13,7 @@ func (msg *SubmitBlockRequestMessage) Command() MessageCommand {
}
// NewSubmitBlockRequestMessage returns an instance of the message
-func NewSubmitBlockRequestMessage(block *MsgBlock) *SubmitBlockRequestMessage {
+func NewSubmitBlockRequestMessage(block *RPCBlock) *SubmitBlockRequestMessage {
return &SubmitBlockRequestMessage{
Block: block,
}
@@ -53,7 +53,48 @@ func (msg *SubmitBlockResponseMessage) Command() MessageCommand {
return CmdSubmitBlockResponseMessage
}
-// NewSubmitBlockResponseMessage returns a instance of the message
+// NewSubmitBlockResponseMessage returns an instance of the message
func NewSubmitBlockResponseMessage() *SubmitBlockResponseMessage {
return &SubmitBlockResponseMessage{}
}
// RPCBlock is a kaspad block representation meant to be
// used over RPC
type RPCBlock struct {
Header *RPCBlockHeader
Transactions []*RPCTransaction
VerboseData *RPCBlockVerboseData
}
// RPCBlockHeader is a kaspad block header representation meant to be
// used over RPC
type RPCBlockHeader struct {
Version uint32
Parents []*RPCBlockLevelParents
HashMerkleRoot string
AcceptedIDMerkleRoot string
UTXOCommitment string
Timestamp int64
Bits uint32
Nonce uint64
DAAScore uint64
BlueScore uint64
BlueWork string
PruningPoint string
}
// RPCBlockLevelParents holds parent hashes for one block level
type RPCBlockLevelParents struct {
ParentHashes []string
}
// RPCBlockVerboseData holds verbose data about a block
type RPCBlockVerboseData struct {
Hash string
Difficulty float64
SelectedParentHash string
TransactionIDs []string
IsHeaderOnly bool
BlueScore uint64
ChildrenHashes []string
}
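A minimal sketch (editor's illustration; all values are placeholders) of the string-based wire form, where hashes and BlueWork travel as strings rather than binary:

package example

import "github.com/kaspanet/kaspad/app/appmessage"

// exampleRPCHeader populates the RPC header shape defined above. BlueWork is
// the big.Int's string encoding (presumably hex) rather than a numeric field.
func exampleRPCHeader() *appmessage.RPCBlockHeader {
    return &appmessage.RPCBlockHeader{
        Version:              1,
        Parents:              []*appmessage.RPCBlockLevelParents{{ParentHashes: []string{"placeholder-parent-hash"}}},
        HashMerkleRoot:       "placeholder-merkle-root",
        AcceptedIDMerkleRoot: "placeholder-accepted-id-merkle-root",
        UTXOCommitment:       "placeholder-utxo-commitment",
        Timestamp:            1637000000000, // milliseconds
        Bits:                 0x1d00ffff,
        Nonce:                0xba4d87a69924a93d,
        DAAScore:             123,
        BlueScore:            456,
        BlueWork:             "315",
        PruningPoint:         "placeholder-pruning-point-hash",
    }
}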

View File

@@ -5,6 +5,7 @@ package appmessage
type SubmitTransactionRequestMessage struct {
baseMessage
Transaction *RPCTransaction
AllowOrphan bool
}
// Command returns the protocol command string for the message
@@ -13,9 +14,10 @@ func (msg *SubmitTransactionRequestMessage) Command() MessageCommand {
}
// NewSubmitTransactionRequestMessage returns an instance of the message
-func NewSubmitTransactionRequestMessage(transaction *RPCTransaction) *SubmitTransactionRequestMessage {
+func NewSubmitTransactionRequestMessage(transaction *RPCTransaction, allowOrphan bool) *SubmitTransactionRequestMessage {
return &SubmitTransactionRequestMessage{
Transaction: transaction,
AllowOrphan: allowOrphan,
}
}
@@ -49,8 +51,8 @@ type RPCTransaction struct {
LockTime uint64
SubnetworkID string
Gas uint64
-PayloadHash string
Payload string
+VerboseData *RPCTransactionVerboseData
}
// RPCTransactionInput is a kaspad transaction input representation
@@ -59,6 +61,8 @@ type RPCTransactionInput struct {
PreviousOutpoint *RPCOutpoint
SignatureScript string
Sequence uint64
SigOpCount byte
VerboseData *RPCTransactionInputVerboseData
}
// RPCScriptPublicKey is a kaspad ScriptPublicKey representation
@@ -72,6 +76,7 @@ type RPCScriptPublicKey struct {
type RPCTransactionOutput struct {
Amount uint64
ScriptPublicKey *RPCScriptPublicKey
VerboseData *RPCTransactionOutputVerboseData
}
// RPCOutpoint is a kaspad outpoint representation meant to be used
@@ -86,6 +91,25 @@ type RPCOutpoint struct {
type RPCUTXOEntry struct {
Amount uint64
ScriptPublicKey *RPCScriptPublicKey
-BlockBlueScore uint64
+BlockDAAScore uint64
IsCoinbase bool
}
// RPCTransactionVerboseData holds verbose data about a transaction
type RPCTransactionVerboseData struct {
TransactionID string
Hash string
Mass uint64
BlockHash string
BlockTime uint64
}
// RPCTransactionInputVerboseData holds data about a transaction input
type RPCTransactionInputVerboseData struct {
}
// RPCTransactionOutputVerboseData holds data about a transaction output
type RPCTransactionOutputVerboseData struct {
ScriptPublicKeyType string
ScriptPublicKeyAddress string
}

View File

@@ -4,23 +4,19 @@ import (
"fmt"
"sync/atomic"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
infrastructuredatabase "github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol"
"github.com/kaspanet/kaspad/app/rpc"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/infrastructure/config"
infrastructuredatabase "github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/infrastructure/network/dnsseed"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
"github.com/kaspanet/kaspad/util/panics"
)
@@ -50,8 +46,6 @@ func (a *ComponentManager) Start() {
panics.Exit(log, fmt.Sprintf("Error starting the net adapter: %+v", err))
}
-a.maybeSeedFromDNS()
a.connectionManager.Start()
}
@@ -72,6 +66,8 @@ func (a *ComponentManager) Stop() {
log.Errorf("Error stopping the net adapter: %+v", err)
}
a.protocolManager.Close()
return
}
@@ -80,7 +76,16 @@ func (a *ComponentManager) Stop() {
func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database, interrupt chan<- struct{}) (
*ComponentManager, error) {
-domain, err := domain.New(cfg.ActiveNetParams, db, cfg.IsArchivalNode)
+consensusConfig := consensus.Config{
+Params: *cfg.ActiveNetParams,
+IsArchival: cfg.IsArchivalNode,
+EnableSanityCheckPruningUTXOSet: cfg.EnableSanityCheckPruningUTXOSet,
+}
+mempoolConfig := mempool.DefaultConfig(&consensusConfig.Params)
+mempoolConfig.MaximumOrphanTransactionCount = cfg.MaxOrphanTxs
+mempoolConfig.MinimumRelayTransactionFee = cfg.MinRelayTxFee
+domain, err := domain.New(&consensusConfig, mempoolConfig, db)
if err != nil {
return nil, err
}
@@ -153,23 +158,6 @@ func setupRPC(
return rpcManager
}
-func (a *ComponentManager) maybeSeedFromDNS() {
-if !a.cfg.DisableDNSSeed {
-dnsseed.SeedFromDNS(a.cfg.NetParams(), a.cfg.DNSSeed, appmessage.SFNodeNetwork, false, nil,
-a.cfg.Lookup, func(addresses []*appmessage.NetAddress) {
-// Kaspad uses a lookup of the dns seeder here. Since seeder returns
-// IPs of nodes and not its own IP, we can not know real IP of
-// source. So we'll take first returned address as source.
-a.addressManager.AddAddresses(addresses...)
-})
-dnsseed.SeedFromGRPC(a.cfg.NetParams(), a.cfg.GRPCSeed, appmessage.SFNodeNetwork, false, nil,
-func(addresses []*appmessage.NetAddress) {
-a.addressManager.AddAddresses(addresses...)
-})
-}
-}
// P2PNodeID returns the network ID associated with this ComponentManager
func (a *ComponentManager) P2PNodeID() *id.ID {
return a.netAdapter.ID()

app/db_version.go Normal file
View File

@@ -0,0 +1,57 @@
package app
import (
"os"
"path"
"strconv"
"github.com/pkg/errors"
)
const currentDatabaseVersion = 1
func checkDatabaseVersion(dbPath string) (err error) {
versionFileName := versionFilePath(dbPath)
versionBytes, err := os.ReadFile(versionFileName)
if err != nil {
if os.IsNotExist(err) { // If version file doesn't exist, we assume that the database is new
return createDatabaseVersionFile(dbPath, versionFileName)
}
return err
}
databaseVersion, err := strconv.Atoi(string(versionBytes))
if err != nil {
return err
}
if databaseVersion != currentDatabaseVersion {
// TODO: Once there's more than one database version, it might make sense to add upgrade logic at this point
return errors.Errorf("Invalid database version %d. Expected version: %d", databaseVersion, currentDatabaseVersion)
}
return nil
}
func createDatabaseVersionFile(dbPath string, versionFileName string) error {
err := os.MkdirAll(dbPath, 0700)
if err != nil {
return err
}
versionFile, err := os.Create(versionFileName)
if err != nil {
return err
}
defer versionFile.Close()
versionString := strconv.Itoa(currentDatabaseVersion)
_, err = versionFile.Write([]byte(versionString))
return err
}
func versionFilePath(dbPath string) string {
dbVersionFileName := path.Join(dbPath, "version")
return dbVersionFileName
}
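A sketch of a hypothetical caller (editor's illustration; the function name is invented) wiring the check into startup:

package app

import "github.com/pkg/errors"

// openDatabaseWithVersionCheck refuses to open a data directory whose version
// file does not match currentDatabaseVersion.
func openDatabaseWithVersionCheck(dbPath string) error {
    if err := checkDatabaseVersion(dbPath); err != nil {
        return errors.Wrapf(err, "database version check failed for %s", dbPath)
    }
    // ...open the actual database under dbPath...
    return nil
}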

View File

@@ -8,7 +8,7 @@ import (
// DefaultTimeout is the default duration to wait for enqueuing/dequeuing
// to/from routes.
-const DefaultTimeout = 30 * time.Second
+const DefaultTimeout = 120 * time.Second
// ErrPeerWithSameIDExists signifies that a peer with the same ID already exists.
var ErrPeerWithSameIDExists = errors.New("ready peer with the same ID already exists")

View File

@@ -1,13 +1,14 @@
package flowcontext
import (
"time"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
@@ -37,12 +38,14 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
newBlockInsertionResults = append(newBlockInsertionResults, unorphaningResult.blockInsertionResult)
}
+allAcceptedTransactions := make([]*externalapi.DomainTransaction, 0)
for i, newBlock := range newBlocks {
log.Debugf("OnNewBlock: passing block %s transactions to mining manager", hash)
-_, err = f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
+acceptedTransactions, err := f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
if err != nil {
return err
}
+allAcceptedTransactions = append(allAcceptedTransactions, acceptedTransactions...)
if f.onBlockAddedToDAGHandler != nil {
log.Debugf("OnNewBlock: calling f.onBlockAddedToDAGHandler for block %s", hash)
@@ -54,7 +57,7 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
}
}
-return nil
+return f.broadcastTransactionsAfterBlockAdded(newBlocks, allAcceptedTransactions)
}
// OnPruningPointUTXOSetOverride calls the handler function whenever the UTXO set
@@ -67,9 +70,7 @@ func (f *FlowContext) OnPruningPointUTXOSetOverride() error {
}
func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
-block *externalapi.DomainBlock, transactionsAcceptedToMempool []*externalapi.DomainTransaction) error {
-f.updateTransactionsToRebroadcast(block)
+addedBlocks []*externalapi.DomainBlock, transactionsAcceptedToMempool []*externalapi.DomainTransaction) error {
// Don't relay transactions when in IBD.
if f.IsIBDRunning() {
@@ -78,7 +79,12 @@ func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
var txIDsToRebroadcast []*externalapi.DomainTransactionID
if f.shouldRebroadcastTransactions() {
-txIDsToRebroadcast = f.txIDsToRebroadcast()
+txsToRebroadcast, err := f.Domain().MiningManager().RevalidateHighPriorityTransactions()
+if err != nil {
+return err
+}
+txIDsToRebroadcast = consensushashing.TransactionIDs(txsToRebroadcast)
+f.lastRebroadcastTime = time.Now()
}
txIDsToBroadcast := make([]*externalapi.DomainTransactionID, len(transactionsAcceptedToMempool)+len(txIDsToRebroadcast))
@@ -89,15 +95,7 @@ func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
for i, txID := range txIDsToRebroadcast {
txIDsToBroadcast[offset+i] = txID
}
-if len(txIDsToBroadcast) == 0 {
-return nil
-}
-if len(txIDsToBroadcast) > appmessage.MaxInvPerTxInvMsg {
-txIDsToBroadcast = txIDsToBroadcast[:appmessage.MaxInvPerTxInvMsg]
-}
-inv := appmessage.NewMsgInvTransaction(txIDsToBroadcast)
-return f.Broadcast(inv)
+return f.EnqueueTransactionIDsForPropagation(txIDsToBroadcast)
}
// SharedRequestedBlocks returns a *blockrelay.SharedRequestedBlocks for sharing
@@ -112,7 +110,7 @@ func (f *FlowContext) AddBlock(block *externalapi.DomainBlock) error {
return protocolerrors.Errorf(false, "cannot add header only block")
}
-blockInsertionResult, err := f.Domain().Consensus().ValidateAndInsertBlock(block)
+blockInsertionResult, err := f.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for block %s: %s", consensushashing.BlockHash(block), err)
@@ -159,7 +157,6 @@ func (f *FlowContext) UnsetIBDRunning() {
}
f.ibdPeer = nil
log.Infof("IBD finished")
}
// IBDPeer returns the current IBD peer or null if the node is not

View File

@@ -29,3 +29,8 @@ func (*FlowContext) HandleError(err error, flowName string, isStopping *uint32,
errChan <- err
}
}
// IsRecoverableError returns whether the error is recoverable
func (*FlowContext) IsRecoverableError(err error) bool {
return err == nil || errors.Is(err, router.ErrRouteClosed) || errors.As(err, &protocolerrors.ProtocolError{})
}
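A hypothetical caller sketch (editor's illustration; the method name is invented) of how a flow runner might use IsRecoverableError:

package flowcontext

import "fmt"

// handleFlowResult tolerates recoverable errors (nil, a closed route, or a
// protocol error); anything else is treated as fatal.
func (f *FlowContext) handleFlowResult(err error) {
    if f.IsRecoverableError(err) {
        return
    }
    panic(fmt.Sprintf("unrecoverable flow error: %+v", err))
}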

View File

@@ -1,10 +1,11 @@
package flowcontext
import (
"github.com/kaspanet/kaspad/util/mstime"
"sync"
"time"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain"
@@ -46,10 +47,8 @@ type FlowContext struct {
onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
-transactionsToRebroadcastLock sync.Mutex
-transactionsToRebroadcast map[externalapi.DomainTransactionID]*externalapi.DomainTransaction
-lastRebroadcastTime time.Time
-sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
+lastRebroadcastTime time.Time
+sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
sharedRequestedBlocks *blockrelay.SharedRequestedBlocks
@@ -61,6 +60,12 @@ type FlowContext struct {
orphans map[externalapi.DomainHash]*externalapi.DomainBlock
orphansMutex sync.RWMutex
transactionIDsToPropagate []*externalapi.DomainTransactionID
lastTransactionIDPropagationTime time.Time
transactionIDPropagationLock sync.Mutex
shutdownChan chan struct{}
}
// New returns a new instance of FlowContext.
@@ -68,20 +73,33 @@ func New(cfg *config.Config, domain domain.Domain, addressManager *addressmanage
netAdapter *netadapter.NetAdapter, connectionManager *connmanager.ConnectionManager) *FlowContext {
return &FlowContext{
-cfg: cfg,
-netAdapter: netAdapter,
-domain: domain,
-addressManager: addressManager,
-connectionManager: connectionManager,
-sharedRequestedTransactions: transactionrelay.NewSharedRequestedTransactions(),
-sharedRequestedBlocks: blockrelay.NewSharedRequestedBlocks(),
-peers: make(map[id.ID]*peerpkg.Peer),
-transactionsToRebroadcast: make(map[externalapi.DomainTransactionID]*externalapi.DomainTransaction),
-orphans: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
-timeStarted: mstime.Now().UnixMilliseconds(),
+cfg: cfg,
+netAdapter: netAdapter,
+domain: domain,
+addressManager: addressManager,
+connectionManager: connectionManager,
+sharedRequestedTransactions: transactionrelay.NewSharedRequestedTransactions(),
+sharedRequestedBlocks: blockrelay.NewSharedRequestedBlocks(),
+peers: make(map[id.ID]*peerpkg.Peer),
+orphans: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
+timeStarted: mstime.Now().UnixMilliseconds(),
+transactionIDsToPropagate: []*externalapi.DomainTransactionID{},
+lastTransactionIDPropagationTime: time.Now(),
+shutdownChan: make(chan struct{}),
}
}
// Close signals to all flows that the protocol manager is closed.
func (f *FlowContext) Close() {
close(f.shutdownChan)
}
// ShutdownChan is a chan to which flows can subscribe for the shutdown event.
func (f *FlowContext) ShutdownChan() <-chan struct{} {
return f.shutdownChan
}
// SetOnBlockAddedToDAGHandler sets the onBlockAddedToDAG handler
func (f *FlowContext) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler OnBlockAddedToDAGHandler) {
f.onBlockAddedToDAGHandler = onBlockAddedToDAGHandler

View File

@@ -73,10 +73,10 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
orphanBlock := f.orphans[orphanHash]
log.Debugf("Considering to unorphan block %s with parents %s",
-orphanHash, orphanBlock.Header.ParentHashes())
+orphanHash, orphanBlock.Header.DirectParents())
canBeUnorphaned := true
-for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
+for _, orphanBlockParentHash := range orphanBlock.Header.DirectParents() {
orphanBlockParentInfo, err := f.domain.Consensus().GetBlockInfo(orphanBlockParentHash)
if err != nil {
return nil, err
@@ -133,7 +133,7 @@ func (f *FlowContext) addChildOrphansToProcessQueue(blockHash *externalapi.Domai
func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash) []externalapi.DomainHash {
var childOrphans []externalapi.DomainHash
for orphanHash, orphanBlock := range f.orphans {
-for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
+for _, orphanBlockParentHash := range orphanBlock.Header.DirectParents() {
if orphanBlockParentHash.Equal(blockHash) {
childOrphans = append(childOrphans, orphanHash)
break
@@ -150,7 +150,7 @@ func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (*externa
}
delete(f.orphans, orphanHash)
-blockInsertionResult, err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock)
+blockInsertionResult, err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for orphan block %s: %s", orphanHash, err)
@@ -201,7 +201,7 @@ func (f *FlowContext) GetOrphanRoots(orphan *externalapi.DomainHash) ([]*externa
continue
}
-for _, parent := range block.Header.ParentHashes() {
+for _, parent := range block.Header.DirectParents() {
if !addedToQueueSet.Contains(parent) {
queue = append(queue, parent)
addedToQueueSet.Add(parent)

View File

@@ -9,31 +9,18 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
-// AddTransaction adds transaction to the mempool and propagates it.
-func (f *FlowContext) AddTransaction(tx *externalapi.DomainTransaction) error {
-f.transactionsToRebroadcastLock.Lock()
-defer f.transactionsToRebroadcastLock.Unlock()
+// TransactionIDPropagationInterval is the interval between transaction IDs propagations
+const TransactionIDPropagationInterval = 500 * time.Millisecond
-err := f.Domain().MiningManager().ValidateAndInsertTransaction(tx, false)
+// AddTransaction adds transaction to the mempool and propagates it.
+func (f *FlowContext) AddTransaction(tx *externalapi.DomainTransaction, allowOrphan bool) error {
+acceptedTransactions, err := f.Domain().MiningManager().ValidateAndInsertTransaction(tx, true, allowOrphan)
if err != nil {
return err
}
-transactionID := consensushashing.TransactionID(tx)
-f.transactionsToRebroadcast[*transactionID] = tx
-inv := appmessage.NewMsgInvTransaction([]*externalapi.DomainTransactionID{transactionID})
-return f.Broadcast(inv)
-}
-func (f *FlowContext) updateTransactionsToRebroadcast(block *externalapi.DomainBlock) {
-f.transactionsToRebroadcastLock.Lock()
-defer f.transactionsToRebroadcastLock.Unlock()
-// Note: if the block is red, its transactions won't be rebroadcasted
-// anymore, although they are not included in the UTXO set.
-// This is probably ok, since red blocks are quite rare.
-for _, tx := range block.Transactions {
-delete(f.transactionsToRebroadcast, *consensushashing.TransactionID(tx))
-}
+acceptedTransactionIDs := consensushashing.TransactionIDs(acceptedTransactions)
+return f.EnqueueTransactionIDsForPropagation(acceptedTransactionIDs)
}
func (f *FlowContext) shouldRebroadcastTransactions() bool {
@@ -41,19 +28,6 @@ func (f *FlowContext) shouldRebroadcastTransactions() bool {
return time.Since(f.lastRebroadcastTime) > rebroadcastInterval
}
-func (f *FlowContext) txIDsToRebroadcast() []*externalapi.DomainTransactionID {
-f.transactionsToRebroadcastLock.Lock()
-defer f.transactionsToRebroadcastLock.Unlock()
-txIDs := make([]*externalapi.DomainTransactionID, len(f.transactionsToRebroadcast))
-i := 0
-for _, tx := range f.transactionsToRebroadcast {
-txIDs[i] = consensushashing.TransactionID(tx)
-i++
-}
-return txIDs
-}
// SharedRequestedTransactions returns a *transactionrelay.SharedRequestedTransactions for sharing
// data about requested transactions between different peers.
func (f *FlowContext) SharedRequestedTransactions() *transactionrelay.SharedRequestedTransactions {
@@ -67,3 +41,42 @@ func (f *FlowContext) OnTransactionAddedToMempool() {
f.onTransactionAddedToMempoolHandler()
}
}
// EnqueueTransactionIDsForPropagation adds the given transaction IDs to the set of IDs to
// propagate. The IDs will be broadcast to all peers within a single transaction Inv message.
// The broadcast itself may happen only during a subsequent call to this method.
func (f *FlowContext) EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error {
f.transactionIDPropagationLock.Lock()
defer f.transactionIDPropagationLock.Unlock()
f.transactionIDsToPropagate = append(f.transactionIDsToPropagate, transactionIDs...)
return f.maybePropagateTransactions()
}
func (f *FlowContext) maybePropagateTransactions() error {
if time.Since(f.lastTransactionIDPropagationTime) < TransactionIDPropagationInterval &&
len(f.transactionIDsToPropagate) < appmessage.MaxInvPerTxInvMsg {
return nil
}
for len(f.transactionIDsToPropagate) > 0 {
transactionIDsToBroadcast := f.transactionIDsToPropagate
if len(transactionIDsToBroadcast) > appmessage.MaxInvPerTxInvMsg {
transactionIDsToBroadcast = f.transactionIDsToPropagate[:appmessage.MaxInvPerTxInvMsg]
}
log.Debugf("Transaction propagation: broadcasting %d transactions", len(transactionIDsToBroadcast))
inv := appmessage.NewMsgInvTransaction(transactionIDsToBroadcast)
err := f.Broadcast(inv)
if err != nil {
return err
}
f.transactionIDsToPropagate = f.transactionIDsToPropagate[len(transactionIDsToBroadcast):]
}
f.lastTransactionIDPropagationTime = time.Now()
return nil
}
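A hypothetical caller sketch (editor's illustration; the method name is invented): IDs simply accumulate, and the Inv broadcast fires inside the call once the 500ms interval has elapsed or a full batch is queued.

package flowcontext

import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"

// relayTransactionIDs hands accepted transaction IDs to the batching
// propagation mechanism above.
func (f *FlowContext) relayTransactionIDs(ids []*externalapi.DomainTransactionID) error {
    return f.EnqueueTransactionIDsForPropagation(ids)
}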

View File

@@ -7,10 +7,8 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
-func (flow *handleRelayInvsFlow) sendGetBlockLocator(lowHash *externalapi.DomainHash,
-highHash *externalapi.DomainHash, limit uint32) error {
-msgGetBlockLocator := appmessage.NewMsgRequestBlockLocator(lowHash, highHash, limit)
+func (flow *handleRelayInvsFlow) sendGetBlockLocator(highHash *externalapi.DomainHash, limit uint32) error {
+msgGetBlockLocator := appmessage.NewMsgRequestBlockLocator(highHash, limit)
return flow.outgoingRoute.Enqueue(msgGetBlockLocator)
}

View File

@@ -5,6 +5,7 @@ import (
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
@@ -44,7 +45,9 @@ func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *
if err != nil {
return err
}
-if !blockInfo.Exists {
+// The IBD block locator is checking only existing blocks with bodies.
+if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
continue
}

View File

@@ -48,7 +48,7 @@ func HandleIBDBlockRequests(context HandleIBDBlockRequestsContext, incomingRoute
if err != nil {
return err
}
log.Debugf("sent %d out of %d", i, len(msgRequestIBDBlocks.Hashes))
log.Debugf("sent %d out of %d", i+1, len(msgRequestIBDBlocks.Hashes))
}
}
}

View File

@@ -0,0 +1,62 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// PruningPointAndItsAnticoneRequestsContext is the interface for the context needed for the HandlePruningPointAndItsAnticoneRequests flow.
type PruningPointAndItsAnticoneRequestsContext interface {
Domain() domain.Domain
}
// HandlePruningPointAndItsAnticoneRequests listens to appmessage.MsgRequestPruningPointAndItsAnticone messages and sends
// the pruning point and its anticone to the requesting peer.
func HandlePruningPointAndItsAnticoneRequests(context PruningPointAndItsAnticoneRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for pruning point and its anticone from %s", peer)
pruningPointHeaders, err := context.Domain().Consensus().PruningPointHeaders()
if err != nil {
return err
}
msgPruningPointHeaders := make([]*appmessage.MsgBlockHeader, len(pruningPointHeaders))
for i, header := range pruningPointHeaders {
msgPruningPointHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(header)
}
err = outgoingRoute.Enqueue(appmessage.NewMsgPruningPoints(msgPruningPointHeaders))
if err != nil {
return err
}
blocks, err := context.Domain().Consensus().PruningPointAndItsAnticoneWithTrustedData()
if err != nil {
return err
}
for _, block := range blocks {
err = outgoingRoute.Enqueue(appmessage.DomainBlockWithTrustedDataToBlockWithTrustedData(block))
if err != nil {
return err
}
}
err = outgoingRoute.Enqueue(appmessage.NewMsgDoneBlocksWithTrustedData())
if err != nil {
return err
}
log.Debugf("Sent pruning point and its anticone to %s", peer)
}
}
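The wire sequence this handler produces is one MsgPruningPoints message, then a stream of blocks with trusted data, terminated by MsgDoneBlocksWithTrustedData. A toy consumer of that sequence, with stand-in types rather than the real appmessage API:

package main

import "fmt"

// Stand-in message types; the names only loosely mirror appmessage.
type message interface{ command() string }

type pruningPoints struct{}
type blockWithTrustedData struct{ id int }
type doneBlocksWithTrustedData struct{}

func (pruningPoints) command() string             { return "PruningPoints" }
func (blockWithTrustedData) command() string      { return "BlockWithTrustedData" }
func (doneBlocksWithTrustedData) command() string { return "DoneBlocksWithTrustedData" }

// consume expects exactly one PruningPoints message, then blocks with
// trusted data until the done marker arrives.
func consume(in <-chan message) {
	fmt.Println("got", (<-in).command())
	for m := range in {
		if _, done := m.(doneBlocksWithTrustedData); done {
			fmt.Println("stream finished")
			return
		}
		fmt.Println("got", m.command())
	}
}

func main() {
	in := make(chan message, 4)
	in <- pruningPoints{}
	in <- blockWithTrustedData{id: 1}
	in <- blockWithTrustedData{id: 2}
	in <- doneBlocksWithTrustedData{}
	consume(in)
}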


@@ -1,51 +0,0 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandlePruningPointHashRequestsFlowContext is the interface for the context needed for the handlePruningPointHashRequestsFlow flow.
type HandlePruningPointHashRequestsFlowContext interface {
Domain() domain.Domain
}
type handlePruningPointHashRequestsFlow struct {
HandlePruningPointHashRequestsFlowContext
incomingRoute, outgoingRoute *router.Route
}
// HandlePruningPointHashRequests listens to appmessage.MsgRequestPruningPointHashMessage messages and sends
// the pruning point hash as response.
func HandlePruningPointHashRequests(context HandlePruningPointHashRequestsFlowContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handlePruningPointHashRequestsFlow{
HandlePruningPointHashRequestsFlowContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handlePruningPointHashRequestsFlow) start() error {
for {
_, err := flow.incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for a pruning point hash")
pruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return err
}
err = flow.outgoingRoute.Enqueue(appmessage.NewPruningPointHashMessage(pruningPoint))
if err != nil {
return err
}
log.Debugf("Sent pruning point hash %s", pruningPoint)
}
}


@@ -0,0 +1,40 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// PruningPointProofRequestsContext is the interface for the context needed for the HandlePruningPointProofRequests flow.
type PruningPointProofRequestsContext interface {
Domain() domain.Domain
}
// HandlePruningPointProofRequests listens to appmessage.MsgRequestPruningPointProof messages and sends
// the pruning point proof to the requesting peer.
func HandlePruningPointProofRequests(context PruningPointProofRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for pruning point proof from %s", peer)
pruningPointProof, err := context.Domain().Consensus().BuildPruningPointProof()
if err != nil {
return err
}
pruningPointProofMessage := appmessage.DomainPruningPointProofToMsgPruningPointProof(pruningPointProof)
err = outgoingRoute.Enqueue(pruningPointProofMessage)
if err != nil {
return err
}
log.Debugf("Sent pruning point proof to %s", peer)
}
}


@@ -33,6 +33,7 @@ type RelayInvsContext interface {
IsIBDRunning() bool
TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool
UnsetIBDRunning()
IsRecoverableError(err error) bool
}
type handleRelayInvsFlow struct {
@@ -126,7 +127,7 @@ func (flow *handleRelayInvsFlow) start() error {
}
if len(missingParents) > 0 {
log.Debugf("Block %s is orphan and has missing parents: %s", inv.Hash, missingParents)
err := flow.processOrphan(block, missingParents)
err := flow.processOrphan(block)
if err != nil {
return err
}
@@ -228,7 +229,7 @@ func (flow *handleRelayInvsFlow) readMsgBlock() (msgBlock *appmessage.MsgBlock,
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, *externalapi.BlockInsertionResult, error) {
blockHash := consensushashing.BlockHash(block)
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return nil, nil, errors.Wrapf(err, "failed to process block %s", blockHash)
@@ -249,7 +250,7 @@ func (flow *handleRelayInvsFlow) relayBlock(block *externalapi.DomainBlock) erro
return flow.Broadcast(appmessage.NewMsgInvBlock(blockHash))
}
func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, missingParents []*externalapi.DomainHash) error {
func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock) error {
blockHash := consensushashing.BlockHash(block)
// Return if the block has been orphaned from elsewhere already
@@ -274,7 +275,7 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, m
// Start IBD unless we already are in IBD
log.Debugf("Block %s is out of orphan resolution range. "+
"Attempting to start IBD against it.", blockHash)
return flow.runIBDIfNotRunning(blockHash)
return flow.runIBDIfNotRunning(block)
}
// isBlockInOrphanResolutionRange finds out whether the given blockHash should be
@@ -283,8 +284,7 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, m
// In the response, if we know none of the hashes, we should retrieve the given
// blockHash via IBD. Otherwise, via unorphaning.
func (flow *handleRelayInvsFlow) isBlockInOrphanResolutionRange(blockHash *externalapi.DomainHash) (bool, error) {
lowHash := flow.Config().ActiveNetParams.GenesisHash
err := flow.sendGetBlockLocator(lowHash, blockHash, orphanResolutionRange)
err := flow.sendGetBlockLocator(blockHash, orphanResolutionRange)
if err != nil {
return false, err
}


@@ -32,20 +32,19 @@ func HandleRequestBlockLocator(context RequestBlockLocatorContext, incomingRoute
func (flow *handleRequestBlockLocatorFlow) start() error {
for {
lowHash, highHash, limit, err := flow.receiveGetBlockLocator()
highHash, limit, err := flow.receiveGetBlockLocator()
if err != nil {
return err
}
log.Debugf("Received getBlockLocator with lowHash: %s, highHash: %s, limit: %d",
lowHash, highHash, limit)
log.Debugf("Received getBlockLocator with highHash: %s, limit: %d", highHash, limit)
locator, err := flow.Domain().Consensus().CreateBlockLocator(lowHash, highHash, limit)
locator, err := flow.Domain().Consensus().CreateBlockLocatorFromPruningPoint(highHash, limit)
if err != nil || len(locator) == 0 {
if err != nil {
log.Debugf("Received error from CreateBlockLocator: %s", err)
log.Debugf("Received error from CreateBlockLocatorFromPruningPoint: %s", err)
}
return protocolerrors.Errorf(true, "couldn't build a block "+
"locator between blocks %s and %s", lowHash, highHash)
"locator between the pruning point and %s", highHash)
}
err = flow.sendBlockLocator(locator)
@@ -55,16 +54,15 @@ func (flow *handleRequestBlockLocatorFlow) start() error {
}
}
func (flow *handleRequestBlockLocatorFlow) receiveGetBlockLocator() (lowHash *externalapi.DomainHash,
highHash *externalapi.DomainHash, limit uint32, err error) {
func (flow *handleRequestBlockLocatorFlow) receiveGetBlockLocator() (highHash *externalapi.DomainHash, limit uint32, err error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, nil, 0, err
return nil, 0, err
}
msgGetBlockLocator := message.(*appmessage.MsgRequestBlockLocator)
return msgGetBlockLocator.LowHash, msgGetBlockLocator.HighHash, msgGetBlockLocator.Limit, nil
return msgGetBlockLocator.HighHash, msgGetBlockLocator.Limit, nil
}
func (flow *handleRequestBlockLocatorFlow) sendBlockLocator(locator externalapi.BlockLocator) error {


@@ -12,26 +12,26 @@ import (
const ibdBatchSize = router.DefaultMaxMessages
// RequestIBDBlocksContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestIBDBlocksContext interface {
// RequestHeadersContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestHeadersContext interface {
Domain() domain.Domain
}
type handleRequestHeadersFlow struct {
RequestIBDBlocksContext
RequestHeadersContext
incomingRoute, outgoingRoute *router.Route
peer *peer.Peer
}
// HandleRequestHeaders handles RequestHeaders messages
func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router.Route,
func HandleRequestHeaders(context RequestHeadersContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
flow := &handleRequestHeadersFlow{
RequestIBDBlocksContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
RequestHeadersContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
@@ -49,8 +49,9 @@ func (flow *handleRequestHeadersFlow) start() error {
// GetHashesBetween is a relatively heavy operation so we limit it
// in order to avoid locking the consensus for too long
const maxBlueScoreDifference = 1 << 10
blockHashes, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash, maxBlueScoreDifference)
// maxBlocks MUST be >= MergeSetSizeLimit + 1
const maxBlocks = 1 << 10
blockHashes, _, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash, maxBlocks)
if err != nil {
return err
}


@@ -0,0 +1,138 @@
package blockrelay
import (
"errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleRequestPruningPointUTXOSetContext is the interface for the context needed for the HandleRequestPruningPointUTXOSet flow.
type HandleRequestPruningPointUTXOSetContext interface {
Domain() domain.Domain
}
type handleRequestPruningPointUTXOSetFlow struct {
HandleRequestPruningPointUTXOSetContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestPruningPointUTXOSet listens to appmessage.MsgRequestPruningPointUTXOSet messages and sends
// the pruning point UTXO set as a series of chunks.
func HandleRequestPruningPointUTXOSet(context HandleRequestPruningPointUTXOSetContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handleRequestPruningPointUTXOSetFlow{
HandleRequestPruningPointUTXOSetContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestPruningPointUTXOSetFlow) start() error {
for {
msgRequestPruningPointUTXOSet, err := flow.waitForRequestPruningPointUTXOSetMessages()
if err != nil {
return err
}
err = flow.handleRequestPruningPointUTXOSetMessage(msgRequestPruningPointUTXOSet)
if err != nil {
return err
}
}
}
func (flow *handleRequestPruningPointUTXOSetFlow) handleRequestPruningPointUTXOSetMessage(
msgRequestPruningPointUTXOSet *appmessage.MsgRequestPruningPointUTXOSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "handleRequestPruningPointUTXOSetFlow")
defer onEnd()
log.Debugf("Got request for pruning point UTXO set")
return flow.sendPruningPointUTXOSet(msgRequestPruningPointUTXOSet)
}
func (flow *handleRequestPruningPointUTXOSetFlow) waitForRequestPruningPointUTXOSetMessages() (
*appmessage.MsgRequestPruningPointUTXOSet, error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, err
}
msgRequestPruningPointUTXOSet, ok := message.(*appmessage.MsgRequestPruningPointUTXOSet)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestPruningPointUTXOSet, message.Command())
}
return msgRequestPruningPointUTXOSet, nil
}
func (flow *handleRequestPruningPointUTXOSetFlow) sendPruningPointUTXOSet(
msgRequestPruningPointUTXOSet *appmessage.MsgRequestPruningPointUTXOSet) error {
// Send the UTXO set in `step`-sized chunks
const step = 1000
var fromOutpoint *externalapi.DomainOutpoint
chunksSent := 0
for {
pruningPointUTXOs, err := flow.Domain().Consensus().GetPruningPointUTXOs(
msgRequestPruningPointUTXOSet.PruningPointHash, fromOutpoint, step)
if err != nil {
if errors.Is(err, ruleerrors.ErrWrongPruningPointHash) {
return flow.outgoingRoute.Enqueue(appmessage.NewMsgUnexpectedPruningPoint())
}
}
log.Debugf("Retrieved %d UTXOs for pruning block %s",
len(pruningPointUTXOs), msgRequestPruningPointUTXOSet.PruningPointHash)
outpointAndUTXOEntryPairs :=
appmessage.DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(pruningPointUTXOs)
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgPruningPointUTXOSetChunk(outpointAndUTXOEntryPairs))
if err != nil {
return err
}
finished := len(pruningPointUTXOs) < step
if finished && chunksSent%ibdBatchSize != 0 {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSet.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
if len(pruningPointUTXOs) > 0 {
fromOutpoint = pruningPointUTXOs[len(pruningPointUTXOs)-1].Outpoint
}
chunksSent++
// Wait for the peer to request more chunks every `ibdBatchSize` chunks
if chunksSent%ibdBatchSize == 0 {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
_, ok := message.(*appmessage.MsgRequestNextPruningPointUTXOSetChunk)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointUTXOSetChunk, message.Command())
}
if finished {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSet.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
}
}
}
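The sender above pages through the UTXO set in step-sized chunks and, after every ibdBatchSize chunks, stalls until the peer explicitly asks for more. A schematic of that flow control with toy numbers (both constants are illustrative here; kaspad ties ibdBatchSize to router.DefaultMaxMessages):

package main

import "fmt"

const (
	step         = 1000 // UTXOs per chunk, as in the flow above
	ibdBatchSize = 10   // illustrative; not the real value
)

func main() {
	totalUTXOs := 25_500
	sent, chunksSent := 0, 0
	for {
		n := step
		if remaining := totalUTXOs - sent; remaining < step {
			n = remaining
		}
		sent += n
		chunksSent++
		finished := n < step
		if finished {
			fmt.Printf("done: %d chunks, %d UTXOs\n", chunksSent, sent)
			return
		}
		if chunksSent%ibdBatchSize == 0 {
			// Here the real flow blocks on MsgRequestNextPruningPointUTXOSetChunk.
			fmt.Printf("pausing after %d chunks for the peer's next-chunk request\n", chunksSent)
		}
	}
}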


@@ -1,144 +0,0 @@
package blockrelay
import (
"errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleRequestPruningPointUTXOSetAndBlockContext is the interface for the context needed for the HandleRequestPruningPointUTXOSetAndBlock flow.
type HandleRequestPruningPointUTXOSetAndBlockContext interface {
Domain() domain.Domain
}
type handleRequestPruningPointUTXOSetAndBlockFlow struct {
HandleRequestPruningPointUTXOSetAndBlockContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestPruningPointUTXOSetAndBlock listens to appmessage.MsgRequestPruningPointUTXOSetAndBlock messages and sends
// the pruning point UTXO set and block body.
func HandleRequestPruningPointUTXOSetAndBlock(context HandleRequestPruningPointUTXOSetAndBlockContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handleRequestPruningPointUTXOSetAndBlockFlow{
HandleRequestPruningPointUTXOSetAndBlockContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) start() error {
for {
msgRequestPruningPointUTXOSetAndBlock, err := flow.waitForRequestPruningPointUTXOSetAndBlockMessages()
if err != nil {
return err
}
err = flow.handleRequestPruningPointUTXOSetAndBlockMessage(msgRequestPruningPointUTXOSetAndBlock)
if err != nil {
return err
}
}
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) handleRequestPruningPointUTXOSetAndBlockMessage(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "handleRequestPruningPointUTXOSetAndBlockFlow")
defer onEnd()
log.Debugf("Got request for PruningPointHash UTXOSet and Block")
err := flow.sendPruningPointBlock(msgRequestPruningPointUTXOSetAndBlock)
if err != nil {
return err
}
return flow.sendPruningPointUTXOSet(msgRequestPruningPointUTXOSetAndBlock)
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) waitForRequestPruningPointUTXOSetAndBlockMessages() (
*appmessage.MsgRequestPruningPointUTXOSetAndBlock, error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, err
}
msgRequestPruningPointUTXOSetAndBlock, ok := message.(*appmessage.MsgRequestPruningPointUTXOSetAndBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestPruningPointUTXOSetAndBlock, message.Command())
}
return msgRequestPruningPointUTXOSetAndBlock, nil
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) sendPruningPointBlock(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
block, err := flow.Domain().Consensus().GetBlock(msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
if err != nil {
return err
}
log.Debugf("Retrieved pruning block %s", msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgIBDBlock(appmessage.DomainBlockToMsgBlock(block)))
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) sendPruningPointUTXOSet(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
// Send the UTXO set in `step`-sized chunks
const step = 1000
var fromOutpoint *externalapi.DomainOutpoint
chunksSent := 0
for {
pruningPointUTXOs, err := flow.Domain().Consensus().GetPruningPointUTXOs(
msgRequestPruningPointUTXOSetAndBlock.PruningPointHash, fromOutpoint, step)
if err != nil {
if errors.Is(err, ruleerrors.ErrWrongPruningPointHash) {
return flow.outgoingRoute.Enqueue(appmessage.NewMsgUnexpectedPruningPoint())
}
}
log.Debugf("Retrieved %d UTXOs for pruning block %s",
len(pruningPointUTXOs), msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
outpointAndUTXOEntryPairs :=
appmessage.DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(pruningPointUTXOs)
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgPruningPointUTXOSetChunk(outpointAndUTXOEntryPairs))
if err != nil {
return err
}
if len(pruningPointUTXOs) < step {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
fromOutpoint = pruningPointUTXOs[len(pruningPointUTXOs)-1].Outpoint
chunksSent++
// Wait for the peer to request more chunks every `ibdBatchSize` chunks
if chunksSent%ibdBatchSize == 0 {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
_, ok := message.(*appmessage.MsgRequestNextPruningPointUTXOSetChunk)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointUTXOSetChunk, message.Command())
}
}
}
}


@@ -1,7 +1,6 @@
package blockrelay
import (
"fmt"
"time"
"github.com/kaspanet/kaspad/infrastructure/logger"
@@ -17,78 +16,67 @@ import (
"github.com/pkg/errors"
)
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.DomainHash) error {
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) error {
wasIBDNotRunning := flow.TrySetIBDRunning(flow.peer)
if !wasIBDNotRunning {
log.Debugf("IBD is already running")
return nil
}
defer flow.UnsetIBDRunning()
isFinishedSuccessfully := false
defer func() {
flow.UnsetIBDRunning()
flow.logIBDFinished(isFinishedSuccessfully)
}()
highHash := consensushashing.BlockHash(block)
log.Debugf("IBD started with peer %s and highHash %s", flow.peer, highHash)
log.Debugf("Syncing headers up to %s", highHash)
headersSynced, err := flow.syncHeaders(highHash)
log.Debugf("Syncing blocks up to %s", highHash)
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, highestSharedBlockFound, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return err
}
if !headersSynced {
log.Debugf("Aborting IBD because the headers failed to sync")
return nil
}
log.Debugf("Finished syncing headers up to %s", highHash)
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
log.Debugf("Syncing the current pruning point UTXO set")
syncedPruningPointUTXOSetSuccessfully, err := flow.syncPruningPointUTXOSet()
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(block, highestSharedBlockFound)
if err != nil {
return err
}
if !syncedPruningPointUTXOSetSuccessfully {
log.Debugf("Aborting IBD because the pruning point UTXO set failed to sync")
if !shouldSync {
return nil
}
log.Debugf("Finished syncing the current pruning point UTXO set")
log.Debugf("Downloading block bodies up to %s", highHash)
if shouldDownloadHeadersProof {
log.Infof("Starting IBD with headers proof")
err := flow.ibdWithHeadersProof(highHash)
if err != nil {
return err
}
} else {
err = flow.syncPruningPointFutureHeaders(flow.Domain().Consensus(), highestSharedBlockHash, highHash)
if err != nil {
return err
}
}
err = flow.syncMissingBlockBodies(highHash)
if err != nil {
return err
}
log.Debugf("Finished downloading block bodies up to %s", highHash)
log.Debugf("Finished syncing blocks up to %s", highHash)
isFinishedSuccessfully = true
return nil
}
// syncHeaders attempts to sync headers from the peer. This method may fail
// because we and the peer have conflicting pruning points. In that case we
// return (false, nil) so that we may stop IBD gracefully.
func (flow *handleRelayInvsFlow) syncHeaders(highHash *externalapi.DomainHash) (bool, error) {
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, highestSharedBlockFound, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return false, err
func (flow *handleRelayInvsFlow) logIBDFinished(isFinishedSuccessfully bool) {
successString := "successfully"
if !isFinishedSuccessfully {
successString = "(interrupted)"
}
if !highestSharedBlockFound {
return false, nil
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
err = flow.downloadHeaders(highestSharedBlockHash, highHash)
if err != nil {
return false, err
}
// If the highHash has not been received, the peer is misbehaving
highHashBlockInfo, err := flow.Domain().Consensus().GetBlockInfo(highHash)
if err != nil {
return false, err
}
if !highHashBlockInfo.Exists {
return false, protocolerrors.Errorf(true, "did not receive "+
"highHash header %s from peer %s during header download", highHash, flow.peer)
}
log.Debugf("Headers downloaded from peer %s", flow.peer)
return true, nil
log.Infof("IBD finished %s", successString)
}
// findHighestSharedBlock attempts to find the highest shared block between the peer
@@ -208,20 +196,22 @@ func (flow *handleRelayInvsFlow) fetchHighestHash(
}
}
func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externalapi.DomainHash,
func (flow *handleRelayInvsFlow) syncPruningPointFutureHeaders(consensus externalapi.Consensus, highestSharedBlockHash *externalapi.DomainHash,
highHash *externalapi.DomainHash) error {
log.Infof("Downloading headers from %s", flow.peer)
err := flow.sendRequestHeaders(highestSharedBlockHash, highHash)
if err != nil {
return err
}
// Keep a short queue of blockHeadersMessages so that there's
// Keep a short queue of BlockHeadersMessages so that there's
// never a moment when the node is not validating and inserting
// headers
blockHeadersMessageChan := make(chan *appmessage.BlockHeadersMessage, 2)
errChan := make(chan error)
spawn("handleRelayInvsFlow-downloadHeaders", func() {
spawn("handleRelayInvsFlow-syncPruningPointFutureHeaders", func() {
for {
blockHeadersMessage, doneIBD, err := flow.receiveHeaders()
if err != nil {
@@ -245,12 +235,21 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
for {
select {
case blockHeadersMessage, ok := <-blockHeadersMessageChan:
case ibdBlocksMessage, ok := <-blockHeadersMessageChan:
if !ok {
// If the highHash has not been received, the peer is misbehaving
highHashBlockInfo, err := consensus.GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashBlockInfo.Exists {
return protocolerrors.Errorf(true, "did not receive "+
"highHash block %s from peer %s during block download", highHash, flow.peer)
}
return nil
}
for _, header := range blockHeadersMessage.BlockHeaders {
err = flow.processHeader(header)
for _, header := range ibdBlocksMessage.BlockHeaders {
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
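The structure above keeps a short, two-deep buffer of header batches so validation never idles while the network reader catches up. The same producer/consumer pattern, stripped to its essentials:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Short queue, as in the flow above: the network reader stays at most
	// two batches ahead of validation.
	batches := make(chan []int, 2)

	go func() { // stands in for the receiveHeaders loop
		for i := 0; i < 4; i++ {
			time.Sleep(5 * time.Millisecond) // simulated network latency
			batches <- []int{i * 2, i*2 + 1}
		}
		close(batches) // stands in for the done-headers message
	}()

	for batch := range batches {
		for _, h := range batch {
			fmt.Println("validating header", h) // stands in for processHeader
		}
	}
	fmt.Println("all headers processed")
}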
@@ -268,7 +267,7 @@ func (flow *handleRelayInvsFlow) sendRequestHeaders(highestSharedBlockHash *exte
return flow.outgoingRoute.Enqueue(msgGetBlockInvs)
}
func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneIBD bool, err error) {
func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneHeaders bool, err error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, false, err
@@ -281,11 +280,14 @@ func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.Block
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s", appmessage.CmdHeader, appmessage.CmdDoneHeaders, message.Command())
"expected: %s or %s, got: %s",
appmessage.CmdBlockHeaders,
appmessage.CmdDoneHeaders,
message.Command())
}
}
func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlockHeader) error {
func (flow *handleRelayInvsFlow) processHeader(consensus externalapi.Consensus, msgBlockHeader *appmessage.MsgBlockHeader) error {
header := appmessage.BlockHeaderToDomainBlockHeader(msgBlockHeader)
block := &externalapi.DomainBlock{
Header: header,
@@ -293,7 +295,7 @@ func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlo
}
blockHash := consensushashing.BlockHash(block)
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(blockHash)
blockInfo, err := consensus.GetBlockInfo(blockHash)
if err != nil {
return err
}
@@ -301,7 +303,7 @@ func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlo
log.Debugf("Block header %s is already in the DAG. Skipping...", blockHash)
return nil
}
_, err = flow.Domain().Consensus().ValidateAndInsertBlock(block)
_, err = consensus.ValidateAndInsertBlock(block, false)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
@@ -318,129 +320,42 @@ func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlo
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet() (bool, error) {
log.Debugf("Checking if a new pruning point is available")
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointHashMessage())
func (flow *handleRelayInvsFlow) validatePruningPointFutureHeaderTimestamps() error {
headerSelectedTipHash, err := flow.Domain().StagingConsensus().GetHeadersSelectedTip()
if err != nil {
return false, err
return err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
headerSelectedTipHeader, err := flow.Domain().StagingConsensus().GetBlockHeader(headerSelectedTipHash)
if err != nil {
return false, err
}
msgPruningPointHash, ok := message.(*appmessage.MsgPruningPointHashMessage)
if !ok {
return false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPointHash, message.Command())
return err
}
headerSelectedTipTimestamp := headerSelectedTipHeader.TimeInMilliseconds()
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(msgPruningPointHash.Hash)
currentSelectedTipHash, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return false, err
return err
}
if !blockInfo.Exists {
return false, errors.Errorf("The pruning point header is missing")
}
if blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
log.Debugf("Already has the block data of the new suggested pruning point %s", msgPruningPointHash.Hash)
return true, nil
}
log.Infof("Checking if the suggested pruning point %s is compatible to the node DAG", msgPruningPointHash.Hash)
isValid, err := flow.Domain().Consensus().IsValidPruningPoint(msgPruningPointHash.Hash)
currentSelectedTipHeader, err := flow.Domain().Consensus().GetBlockHeader(currentSelectedTipHash)
if err != nil {
return false, err
return err
}
currentSelectedTipTimestamp := currentSelectedTipHeader.TimeInMilliseconds()
if headerSelectedTipTimestamp < currentSelectedTipTimestamp {
return protocolerrors.Errorf(false, "the timestamp of the candidate selected "+
"tip is smaller than the current selected tip")
}
if !isValid {
log.Infof("The suggested pruning point %s is incompatible to this node DAG, so stopping IBD with this"+
" peer", msgPruningPointHash.Hash)
return false, nil
minTimestampDifferenceInMilliseconds := (10 * time.Minute).Milliseconds()
if headerSelectedTipTimestamp-currentSelectedTipTimestamp < minTimestampDifferenceInMilliseconds {
return protocolerrors.Errorf(false, "difference between the timestamps of "+
"the current pruning point and the candidate pruning point is too small. Aborting IBD...")
}
log.Info("Fetching the pruning point UTXO set")
succeed, err := flow.fetchMissingUTXOSet(msgPruningPointHash.Hash)
if err != nil {
return false, err
}
if !succeed {
log.Infof("Couldn't successfully fetch the pruning point UTXO set. Stopping IBD.")
return false, nil
}
log.Info("Fetched the new pruning point UTXO set")
return true, nil
}
func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(pruningPointHash *externalapi.DomainHash) (succeed bool, err error) {
defer func() {
err := flow.Domain().Consensus().ClearImportedPruningPointData()
if err != nil {
panic(fmt.Sprintf("failed to clear imported pruning point data: %s", err))
}
}()
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointUTXOSetAndBlock(pruningPointHash))
if err != nil {
return false, err
}
block, err := flow.receivePruningPointBlock()
if err != nil {
return false, err
}
receivedAll, err := flow.receiveAndInsertPruningPointUTXOSet(pruningPointHash)
if err != nil {
return false, err
}
if !receivedAll {
return false, nil
}
err = flow.Domain().Consensus().ValidateAndInsertImportedPruningPoint(block)
if err != nil {
// TODO: Find a better way to deal with finality conflicts.
if errors.Is(err, ruleerrors.ErrSuggestedPruningViolatesFinality) {
return false, nil
}
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return false, err
}
return true, nil
}
func (flow *handleRelayInvsFlow) receivePruningPointBlock() (*externalapi.DomainBlock, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receivePruningPointBlock")
defer onEnd()
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
ibdBlockMessage, ok := message.(*appmessage.MsgIBDBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlock, message.Command())
}
block := appmessage.MsgBlockToDomainBlock(ibdBlockMessage.MsgBlock)
log.Debugf("Received pruning point block %s", consensushashing.BlockHash(block))
return block, nil
return nil
}
func (flow *handleRelayInvsFlow) receiveAndInsertPruningPointUTXOSet(
pruningPointHash *externalapi.DomainHash) (bool, error) {
consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (bool, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receiveAndInsertPruningPointUTXOSet")
defer onEnd()
@@ -459,7 +374,7 @@ func (flow *handleRelayInvsFlow) receiveAndInsertPruningPointUTXOSet(
domainOutpointAndUTXOEntryPairs :=
appmessage.OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(message.OutpointAndUTXOEntryPairs)
err := flow.Domain().Consensus().AppendImportedPruningPointUTXOs(domainOutpointAndUTXOEntryPairs)
err := consensus.AppendImportedPruningPointUTXOs(domainOutpointAndUTXOEntryPairs)
if err != nil {
return false, err
}
@@ -543,7 +458,7 @@ func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.Do
return err
}
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, false)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping IBD Block %s as it has already been added to the DAG", blockHash)
@@ -558,7 +473,7 @@ func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.Do
}
}
return nil
return flow.Domain().Consensus().ResolveVirtual()
}
// dequeueIncomingMessageAndSkipInvs is a convenience method to be used during


@@ -0,0 +1,364 @@
package blockrelay
import (
"fmt"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
)
func (flow *handleRelayInvsFlow) ibdWithHeadersProof(highHash *externalapi.DomainHash) error {
err := flow.Domain().InitStagingConsensus()
if err != nil {
return err
}
err = flow.downloadHeadersAndPruningUTXOSet(highHash)
if err != nil {
if !flow.IsRecoverableError(err) {
return err
}
deleteStagingConsensusErr := flow.Domain().DeleteStagingConsensus()
if deleteStagingConsensusErr != nil {
return deleteStagingConsensusErr
}
return err
}
err = flow.Domain().CommitStagingConsensus()
if err != nil {
return err
}
return nil
}
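The lifecycle above is: init a staging consensus, download into it, commit on success; on a recoverable failure the staging instance is deleted so a later IBD attempt starts clean. A schematic under assumed interfaces (this is not kaspad's Domain API):

package main

import (
	"errors"
	"fmt"
)

type stagingConsensus struct{}

func initStaging() *stagingConsensus {
	fmt.Println("staging initialized")
	return &stagingConsensus{}
}

func (*stagingConsensus) deleteStaging() { fmt.Println("staging deleted") }
func (*stagingConsensus) commit()        { fmt.Println("staging committed") }

func ibdWithStaging(download func() error, isRecoverable func(error) bool) error {
	staging := initStaging()
	if err := download(); err != nil {
		if isRecoverable(err) {
			staging.deleteStaging() // keep the node in its pre-IBD state
		}
		return err
	}
	staging.commit()
	return nil
}

func main() {
	netErr := errors.New("connection reset")
	_ = ibdWithStaging(func() error { return netErr }, func(error) bool { return true })
	_ = ibdWithStaging(func() error { return nil }, nil)
}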
func (flow *handleRelayInvsFlow) shouldSyncAndShouldDownloadHeadersProof(highBlock *externalapi.DomainBlock,
highestSharedBlockFound bool) (shouldDownload, shouldSync bool, err error) {
if !highestSharedBlockFound {
hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock)
if err != nil {
return false, false, err
}
if hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore {
return true, true, nil
}
return false, false, nil
}
return false, true, nil
}
func (flow *handleRelayInvsFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock *externalapi.DomainBlock) (bool, error) {
headersSelectedTip, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return false, err
}
headersSelectedTipInfo, err := flow.Domain().Consensus().GetBlockInfo(headersSelectedTip)
if err != nil {
return false, err
}
if highBlock.Header.BlueScore() < headersSelectedTipInfo.BlueScore+flow.Config().NetParams().PruningDepth() {
return false, nil
}
return highBlock.Header.BlueWork().Cmp(headersSelectedTipInfo.BlueWork) > 0, nil
}
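Condensed, the predicate above says: treat the relayed block as a sync trigger only if it is at least pruningDepth blue-score units ahead of our headers-selected tip and carries strictly more blue work. A toy version with made-up numbers and plain types instead of the externalapi header interface:

package main

import (
	"fmt"
	"math/big"
)

func shouldDownloadHeadersProof(highBlueScore, tipBlueScore, pruningDepth uint64,
	highBlueWork, tipBlueWork *big.Int) bool {
	if highBlueScore < tipBlueScore+pruningDepth {
		return false // too close to our tip; regular header sync suffices
	}
	return highBlueWork.Cmp(tipBlueWork) > 0
}

func main() {
	// Far ahead in blue score and heavier in blue work: start headers-proof IBD.
	fmt.Println(shouldDownloadHeadersProof(200_000, 10_000, 185_000,
		big.NewInt(900), big.NewInt(700)))
	// Heavier, but within pruning depth of the tip: sync normally instead.
	fmt.Println(shouldDownloadHeadersProof(20_000, 10_000, 185_000,
		big.NewInt(900), big.NewInt(700)))
}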
func (flow *handleRelayInvsFlow) syncAndValidatePruningPointProof() (*externalapi.DomainHash, error) {
log.Infof("Downloading the pruning point proof from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointProof())
if err != nil {
return nil, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
pruningPointProofMessage, ok := message.(*appmessage.MsgPruningPointProof)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPointProof, message.Command())
}
pruningPointProof := appmessage.MsgPruningPointProofToDomainPruningPointProof(pruningPointProofMessage)
err = flow.Domain().Consensus().ValidatePruningPointProof(pruningPointProof)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
return nil, protocolerrors.Wrapf(true, err, "pruning point proof validation failed")
}
return nil, err
}
err = flow.Domain().StagingConsensus().ApplyPruningPointProof(pruningPointProof)
if err != nil {
return nil, err
}
return consensushashing.HeaderHash(pruningPointProof.Headers[0][len(pruningPointProof.Headers[0])-1]), nil
}
func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(highHash *externalapi.DomainHash) error {
proofPruningPoint, err := flow.syncAndValidatePruningPointProof()
if err != nil {
return err
}
err = flow.syncPruningPointsAndPruningPointAnticone(proofPruningPoint)
if err != nil {
return err
}
// TODO: Remove this condition once there's more proper way to check finality violation
// in the headers proof.
if proofPruningPoint.Equal(flow.Config().NetParams().GenesisHash) {
return protocolerrors.Errorf(true, "the genesis pruning point violates finality")
}
err = flow.syncPruningPointFutureHeaders(flow.Domain().StagingConsensus(), proofPruningPoint, highHash)
if err != nil {
return err
}
log.Debugf("Headers downloaded from peer %s", flow.peer)
highHashInfo, err := flow.Domain().StagingConsensus().GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashInfo.Exists {
return protocolerrors.Errorf(true, "the triggering IBD block was not sent")
}
err = flow.validatePruningPointFutureHeaderTimestamps()
if err != nil {
return err
}
log.Debugf("Syncing the current pruning point UTXO set")
syncedPruningPointUTXOSetSuccessfully, err := flow.syncPruningPointUTXOSet(flow.Domain().StagingConsensus(), proofPruningPoint)
if err != nil {
return err
}
if !syncedPruningPointUTXOSetSuccessfully {
log.Debugf("Aborting IBD because the pruning point UTXO set failed to sync")
return nil
}
log.Debugf("Finished syncing the current pruning point UTXO set")
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointsAndPruningPointAnticone(proofPruningPoint *externalapi.DomainHash) error {
log.Infof("Downloading the past pruning points and the pruning point anticone from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointAndItsAnticone())
if err != nil {
return err
}
err = flow.validateAndInsertPruningPoints(proofPruningPoint)
if err != nil {
return err
}
pruningPointWithMetaData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
}
if done {
return protocolerrors.Errorf(true, "got `done` message before receiving the pruning point")
}
if !pruningPointWithMetaData.Block.Header.BlockHash().Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "first block with trusted data is not the pruning point")
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), pruningPointWithMetaData)
if err != nil {
return err
}
for {
blockWithTrustedData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
}
if done {
break
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), blockWithTrustedData)
if err != nil {
return err
}
}
log.Infof("Finished downloading pruning point and its anticone from %s", flow.peer)
return nil
}
func (flow *handleRelayInvsFlow) processBlockWithTrustedData(
consensus externalapi.Consensus, block *appmessage.MsgBlockWithTrustedData) error {
_, err := consensus.ValidateAndInsertBlockWithTrustedData(appmessage.BlockWithTrustedDataToDomainBlockWithTrustedData(block), false)
return err
}
func (flow *handleRelayInvsFlow) receiveBlockWithTrustedData() (*appmessage.MsgBlockWithTrustedData, bool, error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch downCastedMessage := message.(type) {
case *appmessage.MsgBlockWithTrustedData:
return downCastedMessage, false, nil
case *appmessage.MsgDoneBlocksWithTrustedData:
return nil, true, nil
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s",
(&appmessage.MsgBlockWithTrustedData{}).Command(),
(&appmessage.MsgDoneBlocksWithTrustedData{}).Command(),
downCastedMessage.Command())
}
}
func (flow *handleRelayInvsFlow) receivePruningPoints() (*appmessage.MsgPruningPoints, error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
msgPruningPoints, ok := message.(*appmessage.MsgPruningPoints)
if !ok {
return nil,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPoints, message.Command())
}
return msgPruningPoints, nil
}
func (flow *handleRelayInvsFlow) validateAndInsertPruningPoints(proofPruningPoint *externalapi.DomainHash) error {
currentPruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return err
}
if currentPruningPoint.Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "the proposed pruning point is the same as the current pruning point")
}
pruningPoints, err := flow.receivePruningPoints()
if err != nil {
return err
}
headers := make([]externalapi.BlockHeader, len(pruningPoints.Headers))
for i, header := range pruningPoints.Headers {
headers[i] = appmessage.BlockHeaderToDomainBlockHeader(header)
}
arePruningPointsViolatingFinality, err := flow.Domain().Consensus().ArePruningPointsViolatingFinality(headers)
if err != nil {
return err
}
if arePruningPointsViolatingFinality {
// TODO: Find a better way to deal with finality conflicts.
return protocolerrors.Errorf(false, "pruning points are violating finality")
}
lastPruningPoint := consensushashing.HeaderHash(headers[len(headers)-1])
if !lastPruningPoint.Equal(proofPruningPoint) {
return protocolerrors.Errorf(true, "the proof pruning point is not equal to the last pruning "+
"point in the list")
}
err = flow.Domain().StagingConsensus().ImportPruningPoints(headers)
if err != nil {
return err
}
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet(consensus externalapi.Consensus,
pruningPoint *externalapi.DomainHash) (bool, error) {
log.Infof("Checking if the suggested pruning point %s is compatible to the node DAG", pruningPoint)
isValid, err := flow.Domain().StagingConsensus().IsValidPruningPoint(pruningPoint)
if err != nil {
return false, err
}
if !isValid {
return false, protocolerrors.Errorf(true, "invalid pruning point %s", pruningPoint)
}
log.Info("Fetching the pruning point UTXO set")
isSuccessful, err := flow.fetchMissingUTXOSet(consensus, pruningPoint)
if err != nil {
return false, err
}
if !isSuccessful {
log.Infof("Couldn't successfully fetch the pruning point UTXO set. Stopping IBD.")
return false, nil
}
log.Info("Fetched the new pruning point UTXO set")
return true, nil
}
func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (succeed bool, err error) {
defer func() {
err := flow.Domain().StagingConsensus().ClearImportedPruningPointData()
if err != nil {
panic(fmt.Sprintf("failed to clear imported pruning point data: %s", err))
}
}()
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointUTXOSet(pruningPointHash))
if err != nil {
return false, err
}
receivedAll, err := flow.receiveAndInsertPruningPointUTXOSet(consensus, pruningPointHash)
if err != nil {
return false, err
}
if !receivedAll {
return false, nil
}
err = flow.Domain().StagingConsensus().ValidateAndInsertImportedPruningPoint(pruningPointHash)
if err != nil {
// TODO: Find a better way to deal with finality conflicts.
if errors.Is(err, ruleerrors.ErrSuggestedPruningViolatesFinality) {
return false, nil
}
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return false, err
}
return true, nil
}
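The deferred ClearImportedPruningPointData above wipes partially imported UTXO data on every exit path, and escalates a cleanup failure to a panic, since continuing with stale staging data would be unsafe. The shape of that pattern, reduced to its essentials:

package main

import "fmt"

var importedChunks []string // stands in for staged pruning point UTXO data

func clearImported() error {
	importedChunks = nil
	return nil
}

func fetchUTXOSet() (succeed bool, err error) {
	defer func() {
		// Runs on every return path below, success or failure.
		if err := clearImported(); err != nil {
			panic(fmt.Sprintf("failed to clear imported pruning point data: %s", err))
		}
	}()
	importedChunks = append(importedChunks, "chunk-1", "chunk-2")
	// ... receiving, validating and committing would happen here ...
	return true, nil
}

func main() {
	ok, err := fetchUTXOSet()
	fmt.Println(ok, err, importedChunks) // true <nil> []
}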


@@ -4,12 +4,14 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// SendVirtualSelectedParentInvContext is the interface for the context needed for the SendVirtualSelectedParentInv flow.
type SendVirtualSelectedParentInvContext interface {
Domain() domain.Domain
Config() *config.Config
}
// SendVirtualSelectedParentInv sends a peer the selected parent hash of the virtual
@@ -21,6 +23,11 @@ func SendVirtualSelectedParentInv(context SendVirtualSelectedParentInvContext,
return err
}
if virtualSelectedParent.Equal(context.Config().NetParams().GenesisHash) {
log.Debugf("Skipping sending the virtual selected parent hash to peer %s because it's the genesis", peer)
return nil
}
log.Debugf("Sending virtual selected parent hash %s to peer %s", virtualSelectedParent, peer)
virtualSelectedParentInv := appmessage.NewMsgInvBlock(virtualSelectedParent)


@@ -13,6 +13,7 @@ import (
// SendPingsContext is the interface for the context needed for the SendPings flow.
type SendPingsContext interface {
ShutdownChan() <-chan struct{}
}
type sendPingsFlow struct {
@@ -39,7 +40,13 @@ func (flow *sendPingsFlow) start() error {
ticker := time.NewTicker(pingInterval)
defer ticker.Stop()
for range ticker.C {
for {
select {
case <-flow.ShutdownChan():
return nil
case <-ticker.C:
}
nonce, err := random.Uint64()
if err != nil {
return err
@@ -62,5 +69,4 @@ func (flow *sendPingsFlow) start() error {
}
flow.peer.SetPingIdle()
}
return nil
}
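The ping-loop change above replaces a bare for range ticker.C with a select that also watches ShutdownChan, so the flow exits promptly on shutdown instead of blocking until the next tick. The pattern in isolation:

package main

import (
	"fmt"
	"time"
)

func pingLoop(shutdown <-chan struct{}) {
	ticker := time.NewTicker(30 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-shutdown:
			fmt.Println("ping loop: shutting down")
			return
		case <-ticker.C:
			fmt.Println("ping")
		}
	}
}

func main() {
	shutdown := make(chan struct{})
	go pingLoop(shutdown)
	time.Sleep(100 * time.Millisecond)
	close(shutdown)
	time.Sleep(10 * time.Millisecond) // give the loop time to exit
}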


@@ -33,10 +33,5 @@ func (flow *handleRejectsFlow) start() error {
}
rejectMessage := message.(*appmessage.MsgReject)
const maxReasonLength = 255
if len(rejectMessage.Reason) > maxReasonLength {
return protocolerrors.Errorf(false, "got reject message longer than %d", maxReasonLength)
}
return protocolerrors.Errorf(false, "got reject message: `%s`", rejectMessage.Reason)
}

File diff suppressed because it is too large


@@ -1,15 +1,16 @@
package testing
import (
"testing"
"time"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"testing"
"time"
)
type fakeReceiveAddressesContext struct{}
@@ -19,9 +20,9 @@ func (f fakeReceiveAddressesContext) AddressManager() *addressmanager.AddressMan
}
func TestReceiveAddressesErrors(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
incomingRoute := router.NewRoute()
outgoingRoute := router.NewRoute()
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
incomingRoute := router.NewRoute("incoming")
outgoingRoute := router.NewRoute("outgoing")
peer := peerpkg.New(nil)
errChan := make(chan error)
go func() {


@@ -19,8 +19,8 @@ type TransactionsRelayContext interface {
NetAdapter() *netadapter.NetAdapter
Domain() domain.Domain
SharedRequestedTransactions() *SharedRequestedTransactions
Broadcast(message appmessage.Message) error
OnTransactionAddedToMempool()
EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error
}
type handleRelayedTransactionsFlow struct {
@@ -119,8 +119,7 @@ func (flow *handleRelayedTransactionsFlow) readInv() (*appmessage.MsgInvTransact
}
func (flow *handleRelayedTransactionsFlow) broadcastAcceptedTransactions(acceptedTxIDs []*externalapi.DomainTransactionID) error {
inv := appmessage.NewMsgInvTransaction(acceptedTxIDs)
return flow.Broadcast(inv)
return flow.EnqueueTransactionIDsForPropagation(acceptedTxIDs)
}
// readMsgTxOrNotFound returns the next msgTx or msgTransactionNotFound in incomingRoute,
@@ -173,17 +172,18 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
expectedID, txID)
}
err = flow.Domain().MiningManager().ValidateAndInsertTransaction(tx, true)
acceptedTransactions, err :=
flow.Domain().MiningManager().ValidateAndInsertTransaction(tx, false, true)
if err != nil {
ruleErr := &mempool.RuleError{}
if !errors.As(err, ruleErr) {
return errors.Wrapf(err, "failed to process transaction %s", txID)
}
shouldBan := true
shouldBan := false
if txRuleErr := (&mempool.TxRuleError{}); errors.As(ruleErr.Err, txRuleErr) {
if txRuleErr.RejectCode != mempool.RejectInvalid {
shouldBan = false
if txRuleErr.RejectCode == mempool.RejectInvalid {
shouldBan = true
}
}
@@ -193,7 +193,7 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
return protocolerrors.Errorf(true, "rejected transaction %s: %s", txID, ruleErr)
}
err = flow.broadcastAcceptedTransactions([]*externalapi.DomainTransactionID{txID})
err = flow.broadcastAcceptedTransactions(consensushashing.TransactionIDs(acceptedTransactions))
if err != nil {
return err
}


@@ -0,0 +1,191 @@
package transactionrelay_test
import (
"errors"
"strings"
"testing"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
type mocTransactionsRelayContext struct {
netAdapter *netadapter.NetAdapter
domain domain.Domain
sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
}
func (m *mocTransactionsRelayContext) NetAdapter() *netadapter.NetAdapter {
return m.netAdapter
}
func (m *mocTransactionsRelayContext) Domain() domain.Domain {
return m.domain
}
func (m *mocTransactionsRelayContext) SharedRequestedTransactions() *transactionrelay.SharedRequestedTransactions {
return m.sharedRequestedTransactions
}
func (m *mocTransactionsRelayContext) EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error {
return nil
}
func (m *mocTransactionsRelayContext) OnTransactionAddedToMempool() {
}
// TestHandleRelayedTransactionsNotFound tests the flow of HandleRelayedTransactions when the peer doesn't
// have the requested transactions in the mempool.
func TestHandleRelayedTransactionsNotFound(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestHandleRelayedTransactionsNotFound")
if err != nil {
t.Fatalf("Error setting up test consensus: %+v", err)
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to create a NetAdapter: %v", err)
}
domainInstance, err := domain.New(consensusConfig, mempool.DefaultConfig(&consensusConfig.Params), tc.Database())
if err != nil {
t.Fatalf("Failed to set up a domain instance: %v", err)
}
context := &mocTransactionsRelayContext{
netAdapter: adapter,
domain: domainInstance,
sharedRequestedTransactions: sharedRequestedTransactions,
}
incomingRoute := router.NewRoute("incoming")
defer incomingRoute.Close()
peerIncomingRoute := router.NewRoute("outgoing")
defer peerIncomingRoute.Close()
txID1 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
txID2 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02})
txIDs := []*externalapi.DomainTransactionID{txID1, txID2}
invMessage := appmessage.NewMsgInvTransaction(txIDs)
err = incomingRoute.Enqueue(invMessage)
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
// The goroutine represents the peer's actions.
spawn("peerResponseToTheTransactionsRequest", func() {
msg, err := peerIncomingRoute.Dequeue()
if err != nil {
t.Fatalf("Dequeue: %v", err)
}
inv := msg.(*appmessage.MsgRequestTransactions)
if len(txIDs) != len(inv.IDs) {
t.Fatalf("TestHandleRelayedTransactions: expected %d transactions ID, but got %d", len(txIDs), len(inv.IDs))
}
for i, id := range inv.IDs {
if txIDs[i].String() != id.String() {
t.Fatalf("TestHandleRelayedTransactions: expected equal txID: expected %s, but got %s", txIDs[i].String(), id.String())
}
err = incomingRoute.Enqueue(appmessage.NewMsgTransactionNotFound(txIDs[i]))
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
}
// Insert an unexpected message type to stop the infinite loop.
err = incomingRoute.Enqueue(&appmessage.MsgAddresses{})
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
})
err = transactionrelay.HandleRelayedTransactions(context, incomingRoute, peerIncomingRoute)
// Since we inserted an unexpected message type to stop the infinite loop,
// we expect the error to originate from that specific message and to be
// treated as a protocol error.
if protocolErr := (protocolerrors.ProtocolError{}); err == nil || !errors.As(err, &protocolErr) {
t.Fatalf("Expected to protocol error")
} else {
if !protocolErr.ShouldBan {
t.Fatalf("Exepcted shouldBan true, but got false.")
}
if !strings.Contains(err.Error(), "unexpected Addresses [code 3] message in the block relay flow while expecting an inv message") {
t.Fatalf("Unexpected error: expected: an error due to existence of an Addresses message "+
"in the block relay flow, but got: %v", protocolErr.Cause)
}
}
})
}
// TestOnClosedIncomingRoute verifies that an appropriate error message will be returned when
// trying to dequeue a message from a closed route.
func TestOnClosedIncomingRoute(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestOnClosedIncomingRoute")
if err != nil {
t.Fatalf("Error setting up test consensus: %+v", err)
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to creat a NetAdapter : %v", err)
}
domainInstance, err := domain.New(consensusConfig, mempool.DefaultConfig(&consensusConfig.Params), tc.Database())
if err != nil {
t.Fatalf("Failed to set up a domain instance: %v", err)
}
context := &mocTransactionsRelayContext{
netAdapter: adapter,
domain: domainInstance,
sharedRequestedTransactions: sharedRequestedTransactions,
}
incomingRoute := router.NewRoute("incoming")
outgoingRoute := router.NewRoute("outgoing")
defer outgoingRoute.Close()
txID := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
txIDs := []*externalapi.DomainTransactionID{txID}
err = incomingRoute.Enqueue(&appmessage.MsgInvTransaction{TxIDs: txIDs})
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
incomingRoute.Close()
err = transactionrelay.HandleRelayedTransactions(context, incomingRoute, outgoingRoute)
if err == nil || !errors.Is(err, router.ErrRouteClosed) {
t.Fatalf("Unexpected error: expected: %v, got : %v", router.ErrRouteClosed, err)
}
})
}


@@ -0,0 +1,90 @@
package transactionrelay_test
import (
"testing"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util/panics"
"github.com/pkg/errors"
)
// TestHandleRequestedTransactionsNotFound tests the flow of HandleRequestedTransactions
// when the requested transactions are not found in the mempool.
func TestHandleRequestedTransactionsNotFound(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestHandleRequestedTransactionsNotFound")
if err != nil {
t.Fatalf("Error setting up test Consensus: %+v", err)
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to create a NetAdapter: %v", err)
}
domainInstance, err := domain.New(consensusConfig, mempool.DefaultConfig(&consensusConfig.Params), tc.Database())
if err != nil {
t.Fatalf("Failed to set up a domain Instance: %v", err)
}
context := &mocTransactionsRelayContext{
netAdapter: adapter,
domain: domainInstance,
sharedRequestedTransactions: sharedRequestedTransactions,
}
incomingRoute := router.NewRoute("incoming")
outgoingRoute := router.NewRoute("outgoing")
defer outgoingRoute.Close()
txID1 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
txID2 := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02})
txIDs := []*externalapi.DomainTransactionID{txID1, txID2}
msg := appmessage.NewMsgRequestTransactions(txIDs)
err = incomingRoute.Enqueue(msg)
if err != nil {
t.Fatalf("Unexpected error from incomingRoute.Enqueue: %v", err)
}
// This goroutine represents the peer's actions.
spawn("peerResponseToTheTransactionsMessages", func() {
for i := range txIDs {
msg, err := outgoingRoute.Dequeue()
if err != nil {
t.Fatalf("Dequeue: %s", err)
}
outMsg := msg.(*appmessage.MsgTransactionNotFound)
if txIDs[i].String() != outMsg.ID.String() {
t.Fatalf("TestHandleRelayedTransactions: expected equal txID: expected %s, but got %s", txIDs[i].String(), id.String())
}
}
// Close the incomingRoute to stop the infinite loop.
incomingRoute.Close()
})
err = transactionrelay.HandleRequestedTransactions(context, incomingRoute, outgoingRoute)
// Make sure the error is due to the closed route.
if err == nil || !errors.Is(err, router.ErrRouteClosed) {
t.Fatalf("Unexpected error: expected: %v, got : %v", router.ErrRouteClosed, err)
}
})
}

View File

@@ -2,6 +2,10 @@ package protocol
import (
"fmt"
"sync"
"sync/atomic"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain"
@@ -17,7 +21,9 @@ import (
// Manager manages the p2p protocol
type Manager struct {
context *flowcontext.FlowContext
context *flowcontext.FlowContext
routersWaitGroup sync.WaitGroup
isClosed uint32
}
// NewManager creates a new instance of the p2p protocol manager
@@ -32,6 +38,18 @@ func NewManager(cfg *config.Config, domain domain.Domain, netAdapter *netadapter
return &manager, nil
}
// Close closes the protocol manager and waits until all p2p flows
// finish.
func (m *Manager) Close() {
if !atomic.CompareAndSwapUint32(&m.isClosed, 0, 1) {
panic(errors.New("The protocol manager was already closed"))
}
m.context.Close()
m.routersWaitGroup.Wait()
}
// Peers returns the currently active peers
func (m *Manager) Peers() []*peerpkg.Peer {
return m.context.Peers()
@@ -44,8 +62,8 @@ func (m *Manager) IBDPeer() *peerpkg.Peer {
}
// AddTransaction adds transaction to the mempool and propagates it.
func (m *Manager) AddTransaction(tx *externalapi.DomainTransaction) error {
return m.context.AddTransaction(tx)
func (m *Manager) AddTransaction(tx *externalapi.DomainTransaction, allowOrphan bool) error {
return m.context.AddTransaction(tx, allowOrphan)
}
// AddBlock adds the given block to the DAG and propagates it.
@@ -53,11 +71,13 @@ func (m *Manager) AddBlock(block *externalapi.DomainBlock) error {
return m.context.AddBlock(block)
}
func (m *Manager) runFlows(flows []*flow, peer *peerpkg.Peer, errChan <-chan error) error {
func (m *Manager) runFlows(flows []*flow, peer *peerpkg.Peer, errChan <-chan error, flowsWaitGroup *sync.WaitGroup) error {
flowsWaitGroup.Add(len(flows))
for _, flow := range flows {
executeFunc := flow.executeFunc // extract to new variable so that it's not overwritten
spawn(fmt.Sprintf("flow-%s", flow.name), func() {
executeFunc(peer)
flowsWaitGroup.Done()
})
}
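Close and runFlows together implement a close-once-then-drain shutdown: an atomic flag guards against a double close, and WaitGroups block until every spawned flow has returned. A minimal, self-contained sketch of the same pattern, with illustrative names rather than kaspad's:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type manager struct {
	isClosed  uint32
	waitGroup sync.WaitGroup
}

// run counts each task in the WaitGroup before spawning it.
func (m *manager) run(tasks []func()) {
	m.waitGroup.Add(len(tasks))
	for _, task := range tasks {
		task := task // capture the loop variable for the closure
		go func() {
			defer m.waitGroup.Done()
			task()
		}()
	}
}

// close flips the flag exactly once, then waits for all tasks to drain.
func (m *manager) close() {
	if !atomic.CompareAndSwapUint32(&m.isClosed, 0, 1) {
		panic("the manager was already closed")
	}
	m.waitGroup.Wait()
}

func main() {
	m := &manager{}
	m.run([]func(){func() { fmt.Println("flow finished") }})
	m.close()
}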

View File

@@ -1,20 +1,20 @@
package protocol
import (
"sync"
"sync/atomic"
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/handshake"
"github.com/kaspanet/kaspad/app/protocol/flows/ping"
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
@@ -41,6 +41,13 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
// After the flows are registered, spawn a new goroutine that waits for the connection to finish initializing
// and starts receiving messages
spawn("routerInitializer-runFlows", func() {
m.routersWaitGroup.Add(1)
defer m.routersWaitGroup.Done()
if atomic.LoadUint32(&m.isClosed) == 1 {
panic(errors.Errorf("tried to initialize router when the protocol manager is closed"))
}
isBanned, err := m.context.ConnectionManager().IsBanned(netConnection)
if err != nil && !errors.Is(err, addressmanager.ErrAddressNotFound) {
panic(err)
@@ -79,11 +86,17 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
removeHandshakeRoutes(router)
err = m.runFlows(flows, peer, errChan)
flowsWaitGroup := &sync.WaitGroup{}
err = m.runFlows(flows, peer, errChan, flowsWaitGroup)
if err != nil {
m.handleError(err, netConnection, router.OutgoingRoute())
// We call `flowsWaitGroup.Wait()` in two places instead of deferring, because
// we already defer `m.routersWaitGroup.Done()`, and we want to avoid the error-prone
// and confusing use of multiple dependent defers.
flowsWaitGroup.Wait()
return
}
flowsWaitGroup.Wait()
})
}
@@ -93,7 +106,7 @@ func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection
log.Warnf("Banning %s (reason: %s)", netConnection, protocolErr.Cause)
err := m.context.ConnectionManager().Ban(netConnection)
if !errors.Is(err, connmanager.ErrCannotBanPermanent) {
if err != nil && !errors.Is(err, connmanager.ErrCannotBanPermanent) {
panic(err)
}
@@ -155,10 +168,13 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
}),
m.registerFlow("HandleRelayInvs", router, []appmessage.MessageCommand{
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator, appmessage.CmdIBDBlock,
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator,
appmessage.CmdDoneHeaders, appmessage.CmdUnexpectedPruningPoint, appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdBlockHeaders, appmessage.CmdPruningPointHash, appmessage.CmdIBDBlockLocatorHighestHash,
appmessage.CmdIBDBlockLocatorHighestHashNotFound, appmessage.CmdDonePruningPointUTXOSetChunks},
appmessage.CmdBlockHeaders, appmessage.CmdIBDBlockLocatorHighestHash, appmessage.CmdBlockWithTrustedData,
appmessage.CmdDoneBlocksWithTrustedData, appmessage.CmdIBDBlockLocatorHighestHashNotFound,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdIBDBlock, appmessage.CmdPruningPoints,
appmessage.CmdPruningPointProof,
},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayInvs(m.context, incomingRoute,
outgoingRoute, peer)
@@ -185,14 +201,6 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
},
),
m.registerFlow("HandleRequestPruningPointUTXOSetAndBlock", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointUTXOSetAndBlock,
appmessage.CmdRequestNextPruningPointUTXOSetChunk}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestPruningPointUTXOSetAndBlock(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleIBDBlockRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
@@ -200,10 +208,18 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
},
),
m.registerFlow("HandlePruningPointHashRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointHash}, isStopping, errChan,
m.registerFlow("HandleRequestPruningPointUTXOSet", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointUTXOSet,
appmessage.CmdRequestNextPruningPointUTXOSetChunk}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointHashRequests(m.context, incomingRoute, outgoingRoute)
return blockrelay.HandleRequestPruningPointUTXOSet(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandlePruningPointAndItsAnticoneRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointAndItsAnticone}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointAndItsAnticoneRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
@@ -213,6 +229,13 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
return blockrelay.HandleIBDBlockLocator(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandlePruningPointProofRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointProof}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointProofRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
}
}
@@ -238,7 +261,7 @@ func (m *Manager) registerTransactionRelayFlow(router *routerpkg.Router, isStopp
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerFlow("HandleRelayedTransactions", router,
m.registerFlowWithCapacity("HandleRelayedTransactions", 10_000, router,
[]appmessage.MessageCommand{appmessage.CmdInvTransaction, appmessage.CmdTx, appmessage.CmdTransactionNotFound}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return transactionrelay.HandleRelayedTransactions(m.context, incomingRoute, outgoingRoute)
@@ -269,11 +292,29 @@ func (m *Manager) registerRejectsFlow(router *routerpkg.Router, isStopping *uint
func (m *Manager) registerFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
route, err := router.AddIncomingRoute(messageTypes)
route, err := router.AddIncomingRoute(name, messageTypes)
if err != nil {
panic(err)
}
return m.registerFlowForRoute(route, name, isStopping, errChan, initializeFunc)
}
func (m *Manager) registerFlowWithCapacity(name string, capacity int, router *routerpkg.Router,
messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
route, err := router.AddIncomingRouteWithCapacity(name, capacity, messageTypes)
if err != nil {
panic(err)
}
return m.registerFlowForRoute(route, name, isStopping, errChan, initializeFunc)
}
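registerFlowWithCapacity exists because transaction relay is bursty: with a 10,000-message buffer the router can absorb a flood of inv messages while the handler catches up. As a rough mental model, here is a toy buffered-channel queue, not kaspad's actual Route implementation:

package main

import "fmt"

// route is a toy stand-in for routerpkg.Route: a named, bounded queue.
type route struct {
	name     string
	messages chan string
}

func newRouteWithCapacity(name string, capacity int) *route {
	return &route{name: name, messages: make(chan string, capacity)}
}

// enqueue fails instead of blocking once the buffer is full.
func (r *route) enqueue(message string) error {
	select {
	case r.messages <- message:
		return nil
	default:
		return fmt.Errorf("route %s is full", r.name)
	}
}

func main() {
	r := newRouteWithCapacity("HandleRelayedTransactions", 2)
	for i := 0; i < 3; i++ {
		if err := r.enqueue(fmt.Sprintf("inv-%d", i)); err != nil {
			fmt.Println(err) // the third message overflows the two-slot buffer
		}
	}
}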
func (m *Manager) registerFlowForRoute(route *routerpkg.Route, name string, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
return &flow{
name: name,
executeFunc: func(peer *peerpkg.Peer) {
@@ -289,7 +330,7 @@ func (m *Manager) registerFlow(name string, router *routerpkg.Router, messageTyp
func (m *Manager) registerOneTimeFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand,
isStopping *uint32, stopChan chan error, initializeFunc flowInitializeFunc) *flow {
route, err := router.AddIncomingRoute(messageTypes)
route, err := router.AddIncomingRoute(name, messageTypes)
if err != nil {
panic(err)
}
@@ -315,12 +356,12 @@ func (m *Manager) registerOneTimeFlow(name string, router *routerpkg.Router, mes
func registerHandshakeRoutes(router *routerpkg.Router) (
receiveVersionRoute *routerpkg.Route, sendVersionRoute *routerpkg.Route) {
receiveVersionRoute, err := router.AddIncomingRoute([]appmessage.MessageCommand{appmessage.CmdVersion})
receiveVersionRoute, err := router.AddIncomingRoute("receiveVersion - incoming", []appmessage.MessageCommand{appmessage.CmdVersion})
if err != nil {
panic(err)
}
sendVersionRoute, err = router.AddIncomingRoute([]appmessage.MessageCommand{appmessage.CmdVerAck})
sendVersionRoute, err = router.AddIncomingRoute("sendVersion - incoming", []appmessage.MessageCommand{appmessage.CmdVerAck})
if err != nil {
panic(err)
}

View File

@@ -64,17 +64,22 @@ func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockIns
return err
}
err = m.notifyVirtualDaaScoreChanged()
if err != nil {
return err
}
err = m.notifyVirtualSelectedParentChainChanged(blockInsertionResult)
if err != nil {
return err
}
msgBlock := appmessage.DomainBlockToMsgBlock(block)
blockVerboseData, err := m.context.BuildBlockVerboseData(block.Header, block, false)
rpcBlock := appmessage.DomainBlockToRPCBlock(block)
err = m.context.PopulateBlockWithVerboseData(rpcBlock, block.Header, block, false)
if err != nil {
return err
}
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(msgBlock, blockVerboseData)
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(rpcBlock)
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
}
@@ -153,6 +158,19 @@ func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged() error {
return m.context.NotificationManager.NotifyVirtualSelectedParentBlueScoreChanged(notification)
}
func (m *Manager) notifyVirtualDaaScoreChanged() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualDaaScoreChanged")
defer onEnd()
virtualDAAScore, err := m.context.Domain.Consensus().GetVirtualDAAScore()
if err != nil {
return err
}
notification := appmessage.NewVirtualDaaScoreChangedNotificationMessage(virtualDAAScore)
return m.context.NotificationManager.NotifyVirtualDaaScoreChanged(notification)
}
func (m *Manager) notifyVirtualSelectedParentChainChanged(blockInsertionResult *externalapi.BlockInsertionResult) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentChainChanged")
defer onEnd()

View File

@@ -44,6 +44,8 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdGetInfoRequestMessage: rpchandlers.HandleGetInfo,
appmessage.CmdNotifyPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleNotifyPruningPointUTXOSetOverrideRequest,
appmessage.CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleStopNotifyingPruningPointUTXOSetOverrideRequest,
appmessage.CmdEstimateNetworkHashesPerSecondRequestMessage: rpchandlers.HandleEstimateNetworkHashesPerSecond,
appmessage.CmdNotifyVirtualDaaScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualDaaScoreChanged,
}
func (m *Manager) routerInitializer(router *router.Router, netConnection *netadapter.NetConnection) {
@@ -51,7 +53,7 @@ func (m *Manager) routerInitializer(router *router.Router, netConnection *netada
for messageType := range handlers {
messageTypes = append(messageTypes, messageType)
}
incomingRoute, err := router.AddIncomingRoute(messageTypes)
incomingRoute, err := router.AddIncomingRoute("rpc router", messageTypes)
if err != nil {
panic(err)
}

View File

@@ -3,7 +3,6 @@ package rpccontext
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
// ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage converts
@@ -16,29 +15,9 @@ func (ctx *Context) ConvertVirtualSelectedParentChainChangesToChainChangedNotifi
removedChainBlockHashes[i] = removed.String()
}
addedChainBlocks := make([]*appmessage.ChainBlock, len(selectedParentChainChanges.Added))
addedChainBlocks := make([]string, len(selectedParentChainChanges.Added))
for i, added := range selectedParentChainChanges.Added {
acceptanceData, err := ctx.Domain.Consensus().GetBlockAcceptanceData(added)
if err != nil {
return nil, err
}
acceptedBlocks := make([]*appmessage.AcceptedBlock, len(acceptanceData))
for j, acceptedBlock := range acceptanceData {
acceptedTransactionIDs := make([]string, len(acceptedBlock.TransactionAcceptanceData))
for k, transaction := range acceptedBlock.TransactionAcceptanceData {
transactionID := consensushashing.TransactionID(transaction.Transaction)
acceptedTransactionIDs[k] = transactionID.String()
}
acceptedBlocks[j] = &appmessage.AcceptedBlock{
Hash: acceptedBlock.BlockHash.String(),
AcceptedTransactionIDs: acceptedTransactionIDs,
}
}
addedChainBlocks[i] = &appmessage.ChainBlock{
Hash: added.String(),
AcceptedBlocks: acceptedBlocks,
}
addedChainBlocks[i] = added.String()
}
return appmessage.NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes, addedChainBlocks), nil

View File

@@ -30,6 +30,7 @@ type NotificationListener struct {
propagateFinalityConflictResolvedNotifications bool
propagateUTXOsChangedNotifications bool
propagateVirtualSelectedParentBlueScoreChangedNotifications bool
propagateVirtualDaaScoreChangedNotifications bool
propagatePruningPointUTXOSetOverrideNotifications bool
propagateUTXOsChangedNotificationAddresses map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress
@@ -181,6 +182,25 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentBlueScoreChanged(
return nil
}
// NotifyVirtualDaaScoreChanged notifies the notification manager that the DAG's
// virtual DAA score has changed
func (nm *NotificationManager) NotifyVirtualDaaScoreChanged(
notification *appmessage.VirtualDaaScoreChangedNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagateVirtualDaaScoreChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
return err
}
}
}
return nil
}
// NotifyPruningPointUTXOSetOverride notifies the notification manager that the UTXO index
// was reset due to a pruning point change via IBD.
func (nm *NotificationManager) NotifyPruningPointUTXOSetOverride() error {
@@ -308,6 +328,12 @@ func (nl *NotificationListener) PropagateVirtualSelectedParentBlueScoreChangedNo
nl.propagateVirtualSelectedParentBlueScoreChangedNotifications = true
}
// PropagateVirtualDaaScoreChangedNotifications instructs the listener to send
// virtual DAA score notifications to the remote listener
func (nl *NotificationListener) PropagateVirtualDaaScoreChangedNotifications() {
nl.propagateVirtualDaaScoreChangedNotifications = true
}
// PropagatePruningPointUTXOSetOverrideNotifications instructs the listener to send pruning point UTXO set override notifications
// to the remote listener.
func (nl *NotificationListener) PropagatePruningPointUTXOSetOverrideNotifications() {

View File

@@ -24,7 +24,7 @@ func ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(address string, pair
UTXOEntry: &appmessage.RPCUTXOEntry{
Amount: utxoEntry.Amount(),
ScriptPublicKey: &appmessage.RPCScriptPublicKey{Script: hex.EncodeToString(utxoEntry.ScriptPublicKey().Script), Version: utxoEntry.ScriptPublicKey().Version},
BlockBlueScore: utxoEntry.BlockBlueScore(),
BlockDAAScore: utxoEntry.BlockDAAScore(),
IsCoinbase: utxoEntry.IsCoinbase(),
},
})

View File

@@ -2,23 +2,16 @@ package rpccontext
import (
"encoding/hex"
"fmt"
"math"
"math/big"
"strconv"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/difficulty"
difficultyPackage "github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/estimatedsize"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
@@ -28,79 +21,6 @@ import (
// ErrBuildBlockVerboseDataInvalidBlock indicates that a block that was given to BuildBlockVerboseData is invalid.
var ErrBuildBlockVerboseDataInvalidBlock = errors.New("ErrBuildBlockVerboseDataInvalidBlock")
// BuildBlockVerboseData builds a BlockVerboseData from the given blockHeader.
// A block may optionally also be given if it's available in the calling context.
func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, block *externalapi.DomainBlock,
includeTransactionVerboseData bool) (*appmessage.BlockVerboseData, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "BuildBlockVerboseData")
defer onEnd()
hash := consensushashing.HeaderHash(blockHeader)
blockInfo, err := ctx.Domain.Consensus().GetBlockInfo(hash)
if err != nil {
return nil, err
}
if blockInfo.BlockStatus == externalapi.StatusInvalid {
return nil, errors.Wrap(ErrBuildBlockVerboseDataInvalidBlock, "cannot build verbose data for "+
"invalid block")
}
childrenHashes, err := ctx.Domain.Consensus().GetBlockChildren(hash)
if err != nil {
return nil, err
}
result := &appmessage.BlockVerboseData{
Hash: hash.String(),
Version: blockHeader.Version(),
VersionHex: fmt.Sprintf("%08x", blockHeader.Version()),
HashMerkleRoot: blockHeader.HashMerkleRoot().String(),
AcceptedIDMerkleRoot: blockHeader.AcceptedIDMerkleRoot().String(),
UTXOCommitment: blockHeader.UTXOCommitment().String(),
ParentHashes: hashes.ToStrings(blockHeader.ParentHashes()),
ChildrenHashes: hashes.ToStrings(childrenHashes),
Nonce: blockHeader.Nonce(),
Time: blockHeader.TimeInMilliseconds(),
Bits: strconv.FormatInt(int64(blockHeader.Bits()), 16),
Difficulty: ctx.GetDifficultyRatio(blockHeader.Bits(), ctx.Config.ActiveNetParams),
BlueScore: blockInfo.BlueScore,
IsHeaderOnly: blockInfo.BlockStatus == externalapi.StatusHeaderOnly,
}
if blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
if block == nil {
block, err = ctx.Domain.Consensus().GetBlock(hash)
if err != nil {
return nil, err
}
}
txIDs := make([]string, len(block.Transactions))
for i, tx := range block.Transactions {
txIDs[i] = consensushashing.TransactionID(tx).String()
}
result.TxIDs = txIDs
if includeTransactionVerboseData {
transactionVerboseData := make([]*appmessage.TransactionVerboseData, len(block.Transactions))
for i, tx := range block.Transactions {
txID := consensushashing.TransactionID(tx).String()
data, err := ctx.BuildTransactionVerboseData(tx, txID, blockHeader, hash.String())
if err != nil {
return nil, err
}
transactionVerboseData[i] = data
}
result.TransactionVerboseData = transactionVerboseData
}
}
return result, nil
}
// GetDifficultyRatio returns the proof-of-work difficulty as a multiple of the
// minimum difficulty using the passed bits field from the header of a block.
func (ctx *Context) GetDifficultyRatio(bits uint32, params *dagconfig.Params) float64 {
@@ -108,7 +28,7 @@ func (ctx *Context) GetDifficultyRatio(bits uint32, params *dagconfig.Params) fl
// converted back to a number. Note this is not the same as the proof of
// work limit directly because the block difficulty is encoded in a block
// with the compact form which loses precision.
target := difficulty.CompactToBig(bits)
target := difficultyPackage.CompactToBig(bits)
difficulty := new(big.Rat).SetFrac(params.PowMax, target)
diff, _ := difficulty.Float64()
@@ -119,106 +39,129 @@ func (ctx *Context) GetDifficultyRatio(bits uint32, params *dagconfig.Params) fl
return diff
}
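The ratio above is powMax divided by the target decoded from the compact bits field. A self-contained worked example using the standard compact-to-big decoding shared by Bitcoin-derived codebases (sign handling omitted for brevity; the powMax value is illustrative, not any particular network's):

package main

import (
	"fmt"
	"math/big"
)

// compactToBig decodes the compact "bits" representation into a target.
func compactToBig(compact uint32) *big.Int {
	mantissa := compact & 0x007fffff
	exponent := uint(compact >> 24)
	target := big.NewInt(int64(mantissa))
	if exponent <= 3 {
		target.Rsh(target, 8*(3-exponent))
	} else {
		target.Lsh(target, 8*(exponent-3))
	}
	return target
}

func main() {
	// Illustrative powMax: 2^255 - 1.
	powMax := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 255), big.NewInt(1))
	target := compactToBig(0x1e7fffff)

	// difficulty ratio = powMax / target
	ratio := new(big.Rat).SetFrac(powMax, target)
	diff, _ := ratio.Float64()
	fmt.Printf("difficulty ratio: %g\n", diff) // ~65536 for these inputs
}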
// BuildTransactionVerboseData builds a TransactionVerboseData from
// the given parameters
func (ctx *Context) BuildTransactionVerboseData(tx *externalapi.DomainTransaction, txID string,
blockHeader externalapi.BlockHeader, blockHash string) (
*appmessage.TransactionVerboseData, error) {
// PopulateBlockWithVerboseData populates the given `block` with verbose
// data from `domainBlockHeader` and optionally from `domainBlock`
func (ctx *Context) PopulateBlockWithVerboseData(block *appmessage.RPCBlock, domainBlockHeader externalapi.BlockHeader,
domainBlock *externalapi.DomainBlock, includeTransactionVerboseData bool) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "BuildTransactionVerboseData")
defer onEnd()
blockHash := consensushashing.HeaderHash(domainBlockHeader)
var payloadHash string
if tx.SubnetworkID != subnetworks.SubnetworkIDNative {
payloadHash = tx.PayloadHash.String()
blockInfo, err := ctx.Domain.Consensus().GetBlockInfo(blockHash)
if err != nil {
return err
}
txReply := &appmessage.TransactionVerboseData{
TxID: txID,
Hash: consensushashing.TransactionHash(tx).String(),
Size: estimatedsize.TransactionEstimatedSerializedSize(tx),
TransactionVerboseInputs: ctx.buildTransactionVerboseInputs(tx),
TransactionVerboseOutputs: ctx.buildTransactionVerboseOutputs(tx, nil),
Version: tx.Version,
LockTime: tx.LockTime,
SubnetworkID: tx.SubnetworkID.String(),
Gas: tx.Gas,
PayloadHash: payloadHash,
Payload: hex.EncodeToString(tx.Payload),
if blockInfo.BlockStatus == externalapi.StatusInvalid {
return errors.Wrap(ErrBuildBlockVerboseDataInvalidBlock, "cannot build verbose data for "+
"invalid block")
}
if blockHeader != nil {
txReply.Time = uint64(blockHeader.TimeInMilliseconds())
txReply.BlockTime = uint64(blockHeader.TimeInMilliseconds())
txReply.BlockHash = blockHash
_, selectedParentHash, childrenHashes, err := ctx.Domain.Consensus().GetBlockRelations(blockHash)
if err != nil {
return err
}
return txReply, nil
}
block.VerboseData = &appmessage.RPCBlockVerboseData{
Hash: blockHash.String(),
Difficulty: ctx.GetDifficultyRatio(domainBlockHeader.Bits(), ctx.Config.ActiveNetParams),
ChildrenHashes: hashes.ToStrings(childrenHashes),
IsHeaderOnly: blockInfo.BlockStatus == externalapi.StatusHeaderOnly,
BlueScore: blockInfo.BlueScore,
}
// selectedParentHash will be nil in the genesis block
if selectedParentHash != nil {
block.VerboseData.SelectedParentHash = selectedParentHash.String()
}
func (ctx *Context) buildTransactionVerboseInputs(tx *externalapi.DomainTransaction) []*appmessage.TransactionVerboseInput {
inputs := make([]*appmessage.TransactionVerboseInput, len(tx.Inputs))
for i, transactionInput := range tx.Inputs {
// The disassembled string will contain [error] inline
// if the script doesn't fully parse, so ignore the
// error here.
disbuf, _ := txscript.DisasmString(constants.MaxScriptPublicKeyVersion, transactionInput.SignatureScript)
if blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
return nil
}
input := &appmessage.TransactionVerboseInput{}
input.TxID = transactionInput.PreviousOutpoint.TransactionID.String()
input.OutputIndex = transactionInput.PreviousOutpoint.Index
input.Sequence = transactionInput.Sequence
input.ScriptSig = &appmessage.ScriptSig{
Asm: disbuf,
Hex: hex.EncodeToString(transactionInput.SignatureScript),
// Get the block if we didn't receive it previously
if domainBlock == nil {
domainBlock, err = ctx.Domain.Consensus().GetBlockEvenIfHeaderOnly(blockHash)
if err != nil {
return err
}
inputs[i] = input
}
return inputs
}
transactionIDs := make([]string, len(domainBlock.Transactions))
for i, transaction := range domainBlock.Transactions {
transactionIDs[i] = consensushashing.TransactionID(transaction).String()
}
block.VerboseData.TransactionIDs = transactionIDs
// buildTransactionVerboseOutputs returns a slice of JSON objects for the outputs of the passed
// transaction.
func (ctx *Context) buildTransactionVerboseOutputs(tx *externalapi.DomainTransaction, filterAddrMap map[string]struct{}) []*appmessage.TransactionVerboseOutput {
outputs := make([]*appmessage.TransactionVerboseOutput, len(tx.Outputs))
for i, transactionOutput := range tx.Outputs {
// Ignore the error here since an error means the script
// couldn't parse and there is no additional information about
// it anyways.
scriptClass, addr, _ := txscript.ExtractScriptPubKeyAddress(
transactionOutput.ScriptPublicKey, ctx.Config.ActiveNetParams)
// Encode the addresses while checking if the address passes the
// filter when needed.
passesFilter := len(filterAddrMap) == 0
var encodedAddr string
if addr != nil {
encodedAddr = addr.EncodeAddress()
// If the filter doesn't already pass, make it pass if
// the address exists in the filter.
if _, exists := filterAddrMap[encodedAddr]; exists {
passesFilter = true
if includeTransactionVerboseData {
for _, transaction := range block.Transactions {
err := ctx.PopulateTransactionWithVerboseData(transaction, domainBlockHeader)
if err != nil {
return err
}
}
if !passesFilter {
continue
}
output := &appmessage.TransactionVerboseOutput{}
output.Index = uint32(i)
output.Value = transactionOutput.Value
output.ScriptPubKey = &appmessage.ScriptPubKeyResult{
Version: transactionOutput.ScriptPublicKey.Version,
Address: encodedAddr,
Hex: hex.EncodeToString(transactionOutput.ScriptPublicKey.Script),
Type: scriptClass.String(),
}
outputs[i] = output
}
return outputs
return nil
}
// PopulateTransactionWithVerboseData populates the given `transaction` with
// verbose data from `domainTransaction`
func (ctx *Context) PopulateTransactionWithVerboseData(
transaction *appmessage.RPCTransaction, domainBlockHeader externalapi.BlockHeader) error {
domainTransaction, err := appmessage.RPCTransactionToDomainTransaction(transaction)
if err != nil {
return err
}
ctx.Domain.Consensus().PopulateMass(domainTransaction)
transaction.VerboseData = &appmessage.RPCTransactionVerboseData{
TransactionID: consensushashing.TransactionID(domainTransaction).String(),
Hash: consensushashing.TransactionHash(domainTransaction).String(),
Mass: domainTransaction.Mass,
}
if domainBlockHeader != nil {
transaction.VerboseData.BlockHash = consensushashing.HeaderHash(domainBlockHeader).String()
transaction.VerboseData.BlockTime = uint64(domainBlockHeader.TimeInMilliseconds())
}
for _, input := range transaction.Inputs {
ctx.populateTransactionInputWithVerboseData(input)
}
for _, output := range transaction.Outputs {
err := ctx.populateTransactionOutputWithVerboseData(output)
if err != nil {
return err
}
}
return nil
}
func (ctx *Context) populateTransactionInputWithVerboseData(transactionInput *appmessage.RPCTransactionInput) {
transactionInput.VerboseData = &appmessage.RPCTransactionInputVerboseData{}
}
func (ctx *Context) populateTransactionOutputWithVerboseData(transactionOutput *appmessage.RPCTransactionOutput) error {
scriptPublicKey, err := hex.DecodeString(transactionOutput.ScriptPublicKey.Script)
if err != nil {
return err
}
domainScriptPublicKey := &externalapi.ScriptPublicKey{
Script: scriptPublicKey,
Version: transactionOutput.ScriptPublicKey.Version,
}
// Ignore the error here since an error means the script
// couldn't be parsed and there's no additional information about
// it anyway
scriptPublicKeyType, scriptPublicKeyAddress, _ := txscript.ExtractScriptPubKeyAddress(
domainScriptPublicKey, ctx.Config.ActiveNetParams)
var encodedScriptPublicKeyAddress string
if scriptPublicKeyAddress != nil {
encodedScriptPublicKeyAddress = scriptPublicKeyAddress.EncodeAddress()
}
transactionOutput.VerboseData = &appmessage.RPCTransactionOutputVerboseData{
ScriptPublicKeyType: scriptPublicKeyType.String(),
ScriptPublicKeyAddress: encodedScriptPublicKeyAddress,
}
return nil
}

View File

@@ -12,8 +12,12 @@ func HandleBan(context *rpccontext.Context, _ *router.Router, request appmessage
banRequest := request.(*appmessage.BanRequestMessage)
ip := net.ParseIP(banRequest.IP)
if ip == nil {
hint := ""
if len(banRequest.IP) > 0 && banRequest.IP[0] == '[' {
hint = " (try to remove “[” and “]” symbols)"
}
errorMessage := &appmessage.BanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP %s", banRequest.IP)
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP%s: %s", hint, banRequest.IP)
return errorMessage, nil
}
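The hint exists because net.ParseIP accepts only bare addresses, so the bracketed IPv6 form common in URLs fails to parse. A quick standard-library check:

package main

import (
	"fmt"
	"net"
)

func main() {
	fmt.Println(net.ParseIP("[::1]")) // <nil>: the brackets make parsing fail
	fmt.Println(net.ParseIP("::1"))   // ::1: the bare form parses
}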

View File

@@ -0,0 +1,39 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleEstimateNetworkHashesPerSecond handles the respectively named RPC command
func HandleEstimateNetworkHashesPerSecond(
context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
estimateNetworkHashesPerSecondRequest := request.(*appmessage.EstimateNetworkHashesPerSecondRequestMessage)
windowSize := int(estimateNetworkHashesPerSecondRequest.WindowSize)
startHash := model.VirtualBlockHash
if estimateNetworkHashesPerSecondRequest.StartHash != "" {
var err error
startHash, err = externalapi.NewDomainHashFromString(estimateNetworkHashesPerSecondRequest.StartHash)
if err != nil {
response := &appmessage.EstimateNetworkHashesPerSecondResponseMessage{}
response.Error = appmessage.RPCErrorf("StartHash '%s' is not a valid block hash",
estimateNetworkHashesPerSecondRequest.StartHash)
return response, nil
}
}
networkHashesPerSecond, err := context.Domain.Consensus().EstimateNetworkHashesPerSecond(startHash, windowSize)
if err != nil {
response := &appmessage.EstimateNetworkHashesPerSecondResponseMessage{}
response.Error = appmessage.RPCErrorf("could not resolve network hashes per "+
"second for startHash %s and window size %d: %s", startHash, windowSize, err)
return response, nil
}
return appmessage.NewEstimateNetworkHashesPerSecondResponseMessage(networkHashesPerSecond), nil
}
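The consensus side of EstimateNetworkHashesPerSecond isn't shown in this diff, but the usual estimate divides the proof-of-work accumulated over the window by the window's elapsed time. A hedged back-of-the-envelope sketch, with all numbers illustrative:

package main

import (
	"fmt"
	"math/big"
)

// estimateHashesPerSecond: total work across a window of blocks divided by
// the window's timespan in seconds. This is an assumed model, not kaspad's
// exact implementation.
func estimateHashesPerSecond(windowWork *big.Int, windowSeconds int64) *big.Int {
	if windowSeconds <= 0 {
		return big.NewInt(0)
	}
	return new(big.Int).Div(windowWork, big.NewInt(windowSeconds))
}

func main() {
	// Illustrative: 1.5e14 hashes of work over a 1000-second window.
	work, _ := new(big.Int).SetString("150000000000000", 10)
	fmt.Println(estimateHashesPerSecond(work, 1000)) // 150000000000, i.e. 150 GH/s
}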

View File

@@ -20,7 +20,7 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
return errorMessage, nil
}
header, err := context.Domain.Consensus().GetBlockHeader(hash)
block, err := context.Domain.Consensus().GetBlockEvenIfHeaderOnly(hash)
if err != nil {
errorMessage := &appmessage.GetBlockResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Block %s not found", hash)
@@ -29,7 +29,13 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
response := appmessage.NewGetBlockResponseMessage()
blockVerboseData, err := context.BuildBlockVerboseData(header, nil, getBlockRequest.IncludeTransactionVerboseData)
if getBlockRequest.IncludeTransactions {
response.Block = appmessage.DomainBlockToRPCBlock(block)
} else {
response.Block = appmessage.DomainBlockToRPCBlock(&externalapi.DomainBlock{Header: block.Header})
}
err = context.PopulateBlockWithVerboseData(response.Block, block.Header, block, getBlockRequest.IncludeTransactions)
if err != nil {
if errors.Is(err, rpccontext.ErrBuildBlockVerboseDataInvalidBlock) {
errorMessage := &appmessage.GetBlockResponseMessage{}
@@ -39,7 +45,5 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
return nil, err
}
response.BlockVerboseData = blockVerboseData
return response, nil
}

View File

@@ -35,6 +35,7 @@ func HandleGetBlockDAGInfo(context *rpccontext.Context, _ *router.Router, _ appm
response.VirtualParentHashes = hashes.ToStrings(virtualInfo.ParentHashes)
response.Difficulty = context.GetDifficultyRatio(virtualInfo.Bits, context.Config.ActiveNetParams)
response.PastMedianTime = virtualInfo.PastMedianTime
response.VirtualDAAScore = virtualInfo.DAAScore
pruningPoint, err := context.Domain.Consensus().PruningPoint()
if err != nil {

View File

@@ -31,12 +31,12 @@ func HandleGetBlockTemplate(context *rpccontext.Context, _ *router.Router, reque
if err != nil {
return nil, err
}
msgBlock := appmessage.DomainBlockToMsgBlock(templateBlock)
rpcBlock := appmessage.DomainBlockToRPCBlock(templateBlock)
isSynced, err := context.ProtocolManager.ShouldMine()
if err != nil {
return nil, err
}
return appmessage.NewGetBlockTemplateResponseMessage(msgBlock, isSynced), nil
return appmessage.NewGetBlockTemplateResponseMessage(rpcBlock, isSynced), nil
}

View File

@@ -8,21 +8,15 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
const (
// maxBlocksInGetBlocksResponse is the max amount of blocks that are
// allowed in a GetBlocksResult.
maxBlocksInGetBlocksResponse = 1000
)
// HandleGetBlocks handles the respectively named RPC command
func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
getBlocksRequest := request.(*appmessage.GetBlocksRequestMessage)
// Validate that user didn't set IncludeTransactionVerboseData without setting IncludeBlockVerboseData
if !getBlocksRequest.IncludeBlockVerboseData && getBlocksRequest.IncludeTransactionVerboseData {
// Validate that user didn't set IncludeTransactions without setting IncludeBlocks
if !getBlocksRequest.IncludeBlocks && getBlocksRequest.IncludeTransactions {
return &appmessage.GetBlocksResponseMessage{
Error: appmessage.RPCErrorf(
"If includeTransactionVerboseData is set, then includeBlockVerboseData must be set as well"),
"If includeTransactions is set, then includeBlockVerboseData must be set as well"),
}, nil
}
@@ -55,8 +49,11 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
if err != nil {
return nil, err
}
blockHashes, err := context.Domain.Consensus().GetHashesBetween(
lowHash, virtualSelectedParent, maxBlocksInGetBlocksResponse)
// We use +1 because lowHash is also returned
// maxBlocks MUST be >= MergeSetSizeLimit + 1
maxBlocks := context.Config.NetParams().MergeSetSizeLimit + 1
blockHashes, highHash, err := context.Domain.Consensus().GetHashesBetween(lowHash, virtualSelectedParent, maxBlocks)
if err != nil {
return nil, err
}
@@ -64,9 +61,10 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
// prepend low hash to make it inclusive
blockHashes = append([]*externalapi.DomainHash{lowHash}, blockHashes...)
// If there are no maxBlocksInGetBlocksResponse between lowHash and virtualSelectedParent -
// add virtualSelectedParent's anticone
if len(blockHashes) < maxBlocksInGetBlocksResponse {
// If the high hash is equal to virtualSelectedParent, it means GetHashesBetween didn't skip any hashes, and
// there's space to add the virtualSelectedParent's anticone; otherwise the anticone can't be added because
// there's no guarantee that all of the anticone root ancestors will be present.
if highHash.Equal(virtualSelectedParent) {
virtualSelectedParentAnticone, err := context.Domain.Consensus().Anticone(virtualSelectedParent)
if err != nil {
return nil, err
@@ -74,33 +72,29 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
blockHashes = append(blockHashes, virtualSelectedParentAnticone...)
}
// Both GetHashesBetween and Anticone might return more than the allowed number of blocks, so
// trim any extra blocks.
if len(blockHashes) > maxBlocksInGetBlocksResponse {
blockHashes = blockHashes[:maxBlocksInGetBlocksResponse]
}
// Prepare the response
response := &appmessage.GetBlocksResponseMessage{
BlockHashes: hashes.ToStrings(blockHashes),
}
// Retrieve all block data in case BlockVerboseData was requested
if getBlocksRequest.IncludeBlockVerboseData {
response.BlockVerboseData = make([]*appmessage.BlockVerboseData, len(blockHashes))
response := appmessage.NewGetBlocksResponseMessage()
response.BlockHashes = hashes.ToStrings(blockHashes)
if getBlocksRequest.IncludeBlocks {
rpcBlocks := make([]*appmessage.RPCBlock, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := context.Domain.Consensus().GetBlockHeader(blockHash)
if err != nil {
return nil, err
}
blockVerboseData, err := context.BuildBlockVerboseData(blockHeader, nil,
getBlocksRequest.IncludeTransactionVerboseData)
block, err := context.Domain.Consensus().GetBlockEvenIfHeaderOnly(blockHash)
if err != nil {
return nil, err
}
response.BlockVerboseData[i] = blockVerboseData
if getBlocksRequest.IncludeTransactions {
rpcBlocks[i] = appmessage.DomainBlockToRPCBlock(block)
} else {
rpcBlocks[i] = appmessage.DomainBlockToRPCBlock(&externalapi.DomainBlock{Header: block.Header})
}
err = context.PopulateBlockWithVerboseData(rpcBlocks[i], block.Header, nil, getBlocksRequest.IncludeTransactions)
if err != nil {
return nil, err
}
}
response.Blocks = rpcBlocks
}
return response, nil
}

View File

@@ -5,6 +5,8 @@ import (
"sort"
"testing"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/app/rpc/rpchandlers"
@@ -13,7 +15,6 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/testapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/domain/miningmanager"
"github.com/kaspanet/kaspad/infrastructure/config"
)
@@ -22,20 +23,38 @@ type fakeDomain struct {
testapi.TestConsensus
}
func (d fakeDomain) DeleteStagingConsensus() error {
panic("implement me")
}
func (d fakeDomain) StagingConsensus() externalapi.Consensus {
panic("implement me")
}
func (d fakeDomain) InitStagingConsensus() error {
panic("implement me")
}
func (d fakeDomain) CommitStagingConsensus() error {
panic("implement me")
}
func (d fakeDomain) Consensus() externalapi.Consensus { return d }
func (d fakeDomain) MiningManager() miningmanager.MiningManager { return nil }
func TestHandleGetBlocks(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
testutils.ForAllNets(t, true, func(t *testing.T, consensusConfig *consensus.Config) {
stagingArea := model.NewStagingArea()
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(params, false, "TestHandleGetBlocks")
tc, teardown, err := factory.NewTestConsensus(consensusConfig, "TestHandleGetBlocks")
if err != nil {
t.Fatalf("Error setting up consensus: %+v", err)
}
defer teardown(false)
fakeContext := rpccontext.Context{
Config: &config.Config{Flags: &config.Flags{NetworkFlags: config.NetworkFlags{ActiveNetParams: params}}},
Config: &config.Config{Flags: &config.Flags{NetworkFlags: config.NetworkFlags{ActiveNetParams: &consensusConfig.Params}}},
Domain: fakeDomain{tc},
}
@@ -55,7 +74,7 @@ func TestHandleGetBlocks(t *testing.T) {
antipast := make([]*externalapi.DomainHash, 0, len(slice))
for _, blockHash := range slice {
isInPastOfPovBlock, err := tc.DAGTopologyManager().IsAncestorOf(blockHash, povBlock)
isInPastOfPovBlock, err := tc.DAGTopologyManager().IsAncestorOf(stagingArea, blockHash, povBlock)
if err != nil {
t.Fatalf("Failed doing reachability check: '%v'", err)
}
@@ -77,7 +96,7 @@ func TestHandleGetBlocks(t *testing.T) {
// \ | /
// etc.
expectedOrder := make([]*externalapi.DomainHash, 0, 40)
mergingBlock := params.GenesisHash
mergingBlock := consensusConfig.GenesisHash
for i := 0; i < 10; i++ {
splitBlocks := make([]*externalapi.DomainHash, 0, 3)
for j := 0; j < 3; j++ {
@@ -87,7 +106,7 @@ func TestHandleGetBlocks(t *testing.T) {
}
splitBlocks = append(splitBlocks, blockHash)
}
sort.Sort(sort.Reverse(testutils.NewTestGhostDAGSorter(splitBlocks, tc, t)))
sort.Sort(sort.Reverse(testutils.NewTestGhostDAGSorter(stagingArea, splitBlocks, tc, t)))
restOfSplitBlocks, selectedParent := splitBlocks[:len(splitBlocks)-1], splitBlocks[len(splitBlocks)-1]
expectedOrder = append(expectedOrder, selectedParent)
expectedOrder = append(expectedOrder, restOfSplitBlocks...)
@@ -130,13 +149,13 @@ func TestHandleGetBlocks(t *testing.T) {
virtualSelectedParent, actualBlocks.BlockHashes)
}
expectedOrder = append([]*externalapi.DomainHash{params.GenesisHash}, expectedOrder...)
expectedOrder = append([]*externalapi.DomainHash{consensusConfig.GenesisHash}, expectedOrder...)
actualOrder := getBlocks(nil)
if !reflect.DeepEqual(actualOrder.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: %v \nactual:\n%v", expectedOrder, actualOrder.BlockHashes)
}
requestAllExplicitly := getBlocks(params.GenesisHash)
requestAllExplicitly := getBlocks(consensusConfig.GenesisHash)
if !reflect.DeepEqual(requestAllExplicitly.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: \n%v\n. actual:\n%v", expectedOrder, requestAllExplicitly.BlockHashes)
}

View File

@@ -4,10 +4,16 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/version"
)
// HandleGetInfo handles the respectively named RPC command
func HandleGetInfo(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
response := appmessage.NewGetInfoResponseMessage(context.NetAdapter.ID().String())
response := appmessage.NewGetInfoResponseMessage(
context.NetAdapter.ID().String(),
uint64(context.Domain.MiningManager().TransactionCount()),
version.Version(),
)
return response, nil
}

View File

@@ -3,25 +3,22 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleGetMempoolEntries handles the respectively named RPC command
func HandleGetMempoolEntries(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
transactions := context.Domain.MiningManager().AllTransactions()
entries := make([]*appmessage.MempoolEntry, 0, len(transactions))
for _, tx := range transactions {
transactionVerboseData, err := context.BuildTransactionVerboseData(
tx, consensushashing.TransactionID(tx).String(), nil, "")
for _, transaction := range transactions {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
entries = append(entries, &appmessage.MempoolEntry{
Fee: tx.Fee,
TransactionVerboseData: transactionVerboseData,
Fee: transaction.Fee,
Transaction: rpcTransaction,
})
}

View File

@@ -24,12 +24,11 @@ func HandleGetMempoolEntry(context *rpccontext.Context, _ *router.Router, reques
errorMessage.Error = appmessage.RPCErrorf("Transaction %s was not found", transactionID)
return errorMessage, nil
}
transactionVerboseData, err := context.BuildTransactionVerboseData(
transaction, getMempoolEntryRequest.TxID, nil, "")
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err = context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
return appmessage.NewGetMempoolEntryResponseMessage(transaction.Fee, transactionVerboseData), nil
return appmessage.NewGetMempoolEntryResponseMessage(transaction.Fee, rpcTransaction), nil
}

View File

@@ -8,9 +8,14 @@ import (
// HandleGetVirtualSelectedParentBlueScore handles the respectively named RPC command
func HandleGetVirtualSelectedParentBlueScore(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
virtualInfo, err := context.Domain.Consensus().GetVirtualInfo()
c := context.Domain.Consensus()
selectedParent, err := c.GetVirtualSelectedParent()
if err != nil {
return nil, err
}
return appmessage.NewGetVirtualSelectedParentBlueScoreResponseMessage(virtualInfo.BlueScore), nil
blockInfo, err := c.GetBlockInfo(selectedParent)
if err != nil {
return nil, err
}
return appmessage.NewGetVirtualSelectedParentBlueScoreResponseMessage(blockInfo.BlueScore), nil
}

View File

@@ -32,6 +32,6 @@ func HandleGetVirtualSelectedParentChainFromBlock(context *rpccontext.Context, _
}
response := appmessage.NewGetVirtualSelectedParentChainFromBlockResponseMessage(
chainChangedNotification.RemovedChainBlockHashes, chainChangedNotification.AddedChainBlocks)
chainChangedNotification.RemovedChainBlockHashes, chainChangedNotification.AddedChainBlockHashes)
return response, nil
}

View File

@@ -0,0 +1,19 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleNotifyVirtualDaaScoreChanged handles the respectively named RPC command
func HandleNotifyVirtualDaaScoreChanged(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.PropagateVirtualDaaScoreChangedNotifications()
response := appmessage.NewNotifyVirtualDaaScoreChangedResponseMessage()
return response, nil
}

View File

@@ -14,9 +14,6 @@ import (
func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
submitBlockRequest := request.(*appmessage.SubmitBlockRequestMessage)
msgBlock := submitBlockRequest.Block
domainBlock := appmessage.MsgBlockToDomainBlock(msgBlock)
if context.ProtocolManager.IsIBDRunning() {
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Block not submitted - IBD is running"),
@@ -24,7 +21,15 @@ func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request ap
}, nil
}
err := context.ProtocolManager.AddBlock(domainBlock)
domainBlock, err := appmessage.RPCBlockToDomainBlock(submitBlockRequest.Block)
if err != nil {
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Could not parse block: %s", err),
RejectReason: appmessage.RejectReasonBlockInvalid,
}, nil
}
err = context.ProtocolManager.AddBlock(domainBlock)
if err != nil {
isProtocolOrRuleError := errors.As(err, &ruleerrors.RuleError{}) || errors.As(err, &protocolerrors.ProtocolError{})
if !isProtocolOrRuleError {

View File

@@ -21,7 +21,7 @@ func HandleSubmitTransaction(context *rpccontext.Context, _ *router.Router, requ
}
transactionID := consensushashing.TransactionID(domainTransaction)
err = context.ProtocolManager.AddTransaction(domainTransaction)
err = context.ProtocolManager.AddTransaction(domainTransaction, submitTransactionRequest.AllowOrphan)
if err != nil {
if !errors.As(err, &mempool.RuleError{}) {
return nil, err

View File

@@ -12,11 +12,15 @@ func HandleUnban(context *rpccontext.Context, _ *router.Router, request appmessa
unbanRequest := request.(*appmessage.UnbanRequestMessage)
ip := net.ParseIP(unbanRequest.IP)
if ip == nil {
hint := ""
if len(unbanRequest.IP) > 0 && unbanRequest.IP[0] == '[' {
hint = " (try to remove “[” and “]” symbols)"
}
errorMessage := &appmessage.UnbanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP %s", unbanRequest.IP)
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP%s: %s", hint, unbanRequest.IP)
return errorMessage, nil
}
err := context.AddressManager.Unban(appmessage.NewNetAddressIPPort(ip, 0, 0))
err := context.AddressManager.Unban(appmessage.NewNetAddressIPPort(ip, 0))
if err != nil {
errorMessage := &appmessage.UnbanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not unban IP: %s", err)

View File

@@ -21,7 +21,7 @@ go build $FLAGS -o kaspad .
if [ -n "${NO_PARALLEL}" ]
then
go test -parallel=1 $FLAGS ./...
go test -timeout 20m -parallel=1 $FLAGS ./...
else
go test $FLAGS ./...
go test -timeout 20m $FLAGS ./...
fi

View File

@@ -1,11 +1,107 @@
Kaspad v0.11.2 - 2021-11-11
===========================
Bug fixes:
* Enlarge p2p max message size to 1gb
* Fix UTXO chunks logic
* Increase tests timeout to 20 minutes
Kaspad v0.11.1 - 2021-11-09
===========================
Non-breaking changes:
* Cache the miner state
Kaspad v0.10.2 - 2021-05-18
===========================
Non-breaking changes:
* Fix getBlock and getBlocks RPC commands to return blocks and transactions properly (#1716)
* serializeAddress should always serialize as IPv6, since it assumes the IP size is 16 bytes (#1720)
* Add VirtualDaaScore to GetBlockDagInfo (#1719)
* Fix calcTxSequenceLockFromReferencedUTXOEntries for loop break condition (#1723)
* Fix overflow when checking coinbase maturity and don't ban peers that send transactions with immature spend (#1722)
Kaspad v0.10.1 - 2021-05-11
===========================
* Calculate virtual's acceptance data and multiset after importing a new pruning point (#1700)
Kaspad v0.10.0 - 2021-04-26
===========================
Major changes include:
* Implementing a signature hashing scheme similar to BIP-143
* Replacing HASH160 with BLAKE2B
* Replacing ECMH with MuHash
* Removing RIPEMD160 and SHA1 from the codebase entirely
* Making P2PKH transactions non-standard
* Vastly enhancing the CLI wallet
* Restructuring kaspad's app/home directory
* Modifying block and transaction types in the RPC to be easier to consume clientside
A partial list of the more-important commits is as follows:
* Fix data race in GetBlockChildren (#1579)
* Remove payload hash (#1583)
* Add the mempool size to getInfo RPC command (#1584)
* Change the difficulty to be calculated based on the same block instead of its selected parent (#1591)
* Adjust the difficulty in the first difficultyAdjustmentWindowSize blocks (#1592)
* Adding DAA score (#1596)
* Use DAA score where needed (#1602)
* Remove the Services field from NetAddress. (#1610)
* Fix getBlocks to not add the anticone when some blocks were filtered by GetHashesBetween (#1611)
* Restructure the default ~/.kaspad directory layout (#1613)
* Replace the HomeDir flag with a AppDir flag (#1615)
* Implement BIP-143-like sighash (#1598)
* Change --datadir to --appdir and remove symmetrical connection in stability tests (#1617)
* Use BLAKE2B instead of HASH160, and get rid of any usage of RIPEMD160 and SHA1 (#1618)
* Replace ECMH with Muhash (#1624)
* Add support for multiple staging areas (#1633)
* Make sure the ghostdagDataStore cache is at least DifficultyAdjustmentBlockWindow sized (#1635)
* Resolve each block status in its own staging area (#1634)
* Add mass limit to mempool (#1627)
* In RPC, use RPCTransactions and RPCBlocks instead of TransactionMessages and BlockMessages (#1609)
* Use go-secp256k1 v0.0.5 (#1640)
* Add a show-address subcommand to kaspawallet (#1653)
* Replace p2pkh with p2pk (#1650)
* Implement importing private keys into the wallet (#1655)
* Add dump unencrypted data sub command to the wallet (#1661)
* Add ECDSA support (#1657)
* Add OpCheckMultiSigECDSA (#1663)
* Add ECDSA support to the wallet (#1664)
* Make moving the pruning point faster (#1660)
* Implement new mechanism for updating UTXO Diffs (#1671)
Kaspad v0.9.2 - 2021-03-31
===========================
* Increase the route capacity of InvTransaction messages. (#1603) (#1637)
Kaspad v0.9.1 - 2021-03-14
===========================
* Testnet network reset
Kaspad v0.9.0 - 2021-03-04
===========================
* Merge big subdags in pick virtual parents (#1574)
* Write in the reject message the tx rejection reason (#1573)
* Add nil checks for protowire (#1570)
* Increase getBlocks limit to 1000 (#1572)
* Return RPC error if getBlock's lowHash doesn't exist (#1569)
* Add default dns-seeder to testnet (#1568)
* Fix utxoindex deserialization (#1566)
* Add pruning point hash to GetBlockDagInfo response (#1565)
* Use EmitUnpopulated so that kaspactl prints all fields, even the default ones (#1561)
* Stop logging an error whenever an RPC/P2P connection is canceled (#1562)
* Cleanup the logger and make it asynchronous (#1524)
* Close all iterators (#1542)
* Add childrenHashes to GetBlock/s RPC commands (#1560)
* Add ScriptPublicKey.Version to RPC (#1559)
* Fix the target block rate to create less bursty mining (#1554)
Kaspad v0.8.10 - 2021-02-25
===========================
[*] Fix bug where invalid mempool transactions were not removed (#1551)
[*] Add RPC reconnection to the miner (#1552)
[*] Remove virtual diff parents - only selectedTip is virtualDiffParent now (#1550)
[*] Fix UTXO index (#1548)
[*] Prevent fast failing (#1545)
[*] Increase the sleep time in kaspaminer when the node is not synced (#1544)
[*] Disallow header only blocks on RPC, relay and when requesting IBD full blocks (#1537)
[*] Make templateManager hold a DomainBlock and isSynced bool instead of a GetBlockTemplateResponseMessage (#1538)
* Fix bug where invalid mempool transactions were not removed (#1551)
* Add RPC reconnection to the miner (#1552)
* Remove virtual diff parents - only selectedTip is virtualDiffParent now (#1550)
* Fix UTXO index (#1548)
* Prevent fast failing (#1545)
* Increase the sleep time in kaspaminer when the node is not synced (#1544)
* Disallow header only blocks on RPC, relay and when requesting IBD full blocks (#1537)
* Make templateManager hold a DomainBlock and isSynced bool instead of a GetBlockTemplateResponseMessage (#1538)

Some files were not shown because too many files have changed in this diff