Compare commits

...

212 Commits

Author SHA1 Message Date
stasatdaglabs
d5f2495522 Use SerializeUTXODiff and DeserializeUTXODiff in utxoDiffStore and implement TestUTXODiffSerializationAndDeserialization. 2021-01-28 14:19:06 +02:00
stasatdaglabs
8588774837 Finish implementing SerializeUTXODiff and DeserializeUTXODiff. 2021-01-28 14:01:02 +02:00
stasatdaglabs
94c3f4b80c Begin implementing SerializeUTXODiff and DeserializeUTXODiff. 2021-01-28 13:48:17 +02:00
stasatdaglabs
a5a84e9215 Take buildTestUTXODiff out of the benchmark loops. 2021-01-28 12:58:47 +02:00
stasatdaglabs
1cec4c91cf Use a random number for utxoEntryScriptPublicKeyVersion as well. 2021-01-28 12:51:13 +02:00
stasatdaglabs
3c0b74208a Rename a couple of variables. 2021-01-28 12:49:45 +02:00
stasatdaglabs
08d983b84a Implement BenchmarkUTXODiffSerializationAndDeserialization. 2021-01-28 12:49:04 +02:00
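The serialization-related commits above build up SerializeUTXODiff/DeserializeUTXODiff plus a test and benchmark for them. As a rough, hypothetical illustration of what serializing a UTXO diff involves (length-prefixing the toAdd and toRemove collections of outpoint/entry pairs), here is a hedged Go sketch; all type and field names below are stand-ins, not the kaspad types or encoding:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// Hypothetical stand-ins for kaspad's outpoint and UTXO-entry types.
type outpoint struct {
	TransactionID [32]byte
	Index         uint32
}

type utxoEntry struct {
	Amount          uint64
	ScriptVersion   uint16
	ScriptPublicKey []byte
}

// serializeCollection writes a length prefix followed by each outpoint/entry pair.
func serializeCollection(buf *bytes.Buffer, collection map[outpoint]utxoEntry) {
	binary.Write(buf, binary.LittleEndian, uint64(len(collection)))
	for op, entry := range collection {
		binary.Write(buf, binary.LittleEndian, op.TransactionID)
		binary.Write(buf, binary.LittleEndian, op.Index)
		binary.Write(buf, binary.LittleEndian, entry.Amount)
		binary.Write(buf, binary.LittleEndian, entry.ScriptVersion)
		binary.Write(buf, binary.LittleEndian, uint64(len(entry.ScriptPublicKey)))
		buf.Write(entry.ScriptPublicKey)
	}
}

// serializeUTXODiff serializes the toAdd collection followed by the toRemove collection.
func serializeUTXODiff(toAdd, toRemove map[outpoint]utxoEntry) []byte {
	buf := &bytes.Buffer{}
	serializeCollection(buf, toAdd)
	serializeCollection(buf, toRemove)
	return buf.Bytes()
}

func main() {
	toAdd := map[outpoint]utxoEntry{
		{Index: 0}: {Amount: 100, ScriptVersion: 0, ScriptPublicKey: []byte{0x51}},
	}
	fmt.Printf("serialized UTXO diff: %d bytes\n", len(serializeUTXODiff(toAdd, nil)))
}
```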
Michael Sutton
c9b591f2d3 Final reindex algorithm (#1430)
* Mine JSON

* [Reindex tests] add test_params and validate_mining flag to test_consensus

* Rename file and extend tests

* Ignore local test datasets

* Use spaces over tabs

* Reindex algorithm - full algorithm, initial commit, some tests fail

* Reindex algorithm - a few critical fixes

* Reindex algorithm - move reindex struct and all related operations to new file

* Reindex algorithm - added a validateIntervals method and modified tests to use it (instead of exact comparisons)

* Reindex algorithm - modified reindexIntervals to receive the new child as argument and fixed an important related bug

* Reindex attack tests - move logic to helper function and add stretch test

* Reindex algorithm - variable names and some comments

* Reindex algorithm - minor changes

* Reindex algorithm - minor changes 2

* Reindex algorithm - extended stretch test

* Reindex algorithm - small fix to validate function

* Reindex tests - move tests and add DAG files

* go format fixes

* TestParams doc comment

* Reindex tests - exact comparisons are not needed

* Update to version 0.8.6

* Remove TestParams and use AddUTXOInvalidHeader instead

* Use gzipped test files

* This unintended change somehow slipped in through branch merges

* Rename test

* Move interval increase/decrease methods to reachability interval file

* Addressing a bunch of minor review comments

* Addressed a few more minor review comments

* Make code of offsetSiblingsBefore and offsetSiblingsAfter symmetric

* Optimize reindex logic in cases where reorg occurs + reorg test

* Do not change reindex root too fast (on reorg)

* Some comments

* A few more comments

* Addressing review comments

* Remove TestNoAttackAlternateReorg and assert chain attack

* Minor

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
Co-authored-by: Mike Zak <feanorr@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-27 17:09:20 +02:00
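The reindex PR above concerns the reachability structure, where tree ancestry queries are answered with nested intervals and a reindex reallocates intervals when a subtree runs out of room. A minimal, hypothetical Go sketch of the interval idea only (not the kaspad implementation, and not the reindex itself):

```go
package main

import "fmt"

// interval is a hypothetical stand-in for a reachability tree interval.
type interval struct {
	start, end uint64
}

// contains reports whether other lies fully inside i. Because every block's
// children receive disjoint sub-intervals of their parent's interval,
// containment is equivalent to tree ancestry and is checked in O(1).
func (i interval) contains(other interval) bool {
	return i.start <= other.start && other.end <= i.end
}

// splitExact hands out one sub-interval per requested size, in order. When a
// node's remaining capacity cannot fit a new child, a reindex reallocates the
// intervals in the affected subtree (that reallocation is the hard part the
// PR above implements; it is not shown here).
func (i interval) splitExact(sizes []uint64) []interval {
	result := make([]interval, len(sizes))
	start := i.start
	for idx, size := range sizes {
		result[idx] = interval{start: start, end: start + size - 1}
		start += size
	}
	return result
}

func main() {
	root := interval{start: 1, end: 1000}
	children := root.splitExact([]uint64{500, 500})
	fmt.Println(children[0].contains(interval{start: 10, end: 20})) // true: inside the first child
	fmt.Println(root.contains(children[1]))                         // true: children stay inside the root
}
```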
Ori Newman
8d6e71d490 Add IBD test cases and check for MsgUnexpectedPruningPoint on receivePruningPointBlock as well (#1459)
* Check for MsgUnexpectedPruningPoint on receivePruningPointBlock as well

* Add IBD test cases

* Revert "Check for MsgUnexpectedPruningPoint on receivePruningPointBlock as well"

This reverts commit 6a6d1ea180.

* Change log level for two logs

* Remove "testing a situation where the pruning point moved during IBD (before sending the pruning point block)"
2021-01-27 16:42:42 +02:00
Elichai Turkel
2823461fe2 Change AddressKey from string to port+ipv6 address (#1458) 2021-01-27 16:09:32 +02:00
Elichai Turkel
2075c585da Fix race condition in kaspaminer (#1455)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-27 13:08:35 +02:00
Elichai Turkel
01aee62cb0 Add log and measure to pruning points (#1457) 2021-01-27 11:40:51 +02:00
Ori Newman
a6ee871f7e Increase maxSelectedParentTimeDiffToAllowMiningInMilliSeconds to one hour (#1456) 2021-01-27 11:04:58 +02:00
Mike Zak
6393a8186a Update to version 0.9.0 2021-01-27 09:22:56 +02:00
stasatdaglabs
3916534a7e In kaspactl, prettify responses before printing them (#1453)
* In kaspactl, prettify responses before printing them.

* Indent with four spaces instead of a tab.

* Unwrap the response.

* Simplify unwrapping the response.

* Don't unwrap responses.

* Use protojson.MarshalOptions for prettification.
2021-01-27 09:13:48 +02:00
Elichai Turkel
0561347ff1 Remove default dns/grpc seeders (#1454) 2021-01-26 17:40:54 +02:00
Svarog
fb11981da1 Make kaspactl usable by human beings (#1452)
* Add request_types and their help

* Added command parser

* Updated main to use command if it's specified

* Some progress in making everything work

* Fix command parser for pointers to structs

* Cleanup code

* Enhance usage text

* Fixed typo

* Some minor style fixing, and remove temporary code

* Correctly fallthrough in stringToValue unsupported types

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-26 14:22:35 +02:00
Ori Newman
1742e76af7 Add TestHandleRelayInvsErrors (#1450) 2021-01-26 13:31:28 +02:00
Svarog
ddfe376388 Don't ban peers that sent a requested duplicate block (#1440)
* Ignore ErrDuplicateBlock for any blocks that were requested

* Don't check for DuplicateBlockErr in processHeaders + improve logs

* Return the check for ErrDuplicateBlock in processHeader

* Fix log message
2021-01-26 08:42:04 +02:00
Elichai Turkel
52da65077a codecov: Ignore protobuf autogenerated files (#1449) 2021-01-25 16:58:11 +02:00
Elichai Turkel
ed85f09742 Add P2P/RPC name to the gRPC logs (#1448)
* Add P2P/RPC name to the gRPC logs

* Add p2p/rpc name in more logs
2021-01-25 16:41:43 +02:00
stasatdaglabs
819ec9f2a7 Pass fromOutpoint into GetPruningPointUTXOs instead of offset, so it could Seek over the cursor (#1447)
* Implement TestGetPruningPointUTXOs.

* Fix a bad error message.

* Fix TestGetPruningPointUTXOs for testnet.

* Make sure all the UTXOs are returned in TestGetPruningPointUTXOs.

* Implement BenchmarkGetPruningPointUTXOs.

* Pass fromOutpoint into GetPruningPointUTXOs instead of offset, so it could Seek over the cursor.

* Fix weird benchmark timer calls.

* Remove unnecessary collection of outpointAndUTXOEntryPairs from BenchmarkGetPruningPointUTXOs.

* Fix a comment.
2021-01-25 13:38:59 +02:00
Ori Newman
7ea8a72a9e Add TestReceiveAddressesErrors (#1446)
* Add TestReceiveAddressesErrors

* Change errors to be more descriptive

* Fix checkFlowError
2021-01-24 17:35:20 +02:00
stasatdaglabs
ca04c049ab When the pruning point moves, update its UTXO set outside of a database transaction (#1444)
* Remove pruningPointUTXOSetStaging and implement UpdatePruningPointUTXOSet.

* Implement StageStartSavingNewPruningPointUTXOSet, HadStartedSavingNewPruningPointUTXOSet, and FinishSavingNewPruningPointUTXOSet.

* Fix a bad return.

* Implement UpdatePruningPointUTXOSetIfRequired.

* Call UpdatePruningPointUTXOSetIfRequired on consensus creation.

* Rename savingNewPruningPointUTXOSetKey to updatingPruningPointUTXOSet.

* Add a log.

* Add calls to runtime.GC() at its start and end.

* Rename a variable.

* Replace calls to runtime.GC with calls to LogMemoryStats.

* Wrap the contents of LogMemoryStats in a log closure.
2021-01-24 14:48:11 +02:00
Ori Newman
9a17198e7d Remove redundant type check (#1445) 2021-01-24 14:14:03 +02:00
stasatdaglabs
756f40c59a Sync pruning point UTXO sets incrementally instead of all at once (#1431)
* Replaced the content of MsgIBDRootUTXOSetChunk with pairs of outpoint-utxo entry pairs.

* Rename utxoIter to utxoIterator.

* Add a big stinky TODO on an assert.

* Replace pruningStore staging with a UTXO set iterator.

* Reimplement receiveAndInsertIBDRootUTXOSet.

* Extract OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs into domainconverters.go.

* Pass the outpoint and utxo entry pairs to the pruning store.

* Implement InsertCandidatePruningPointUTXOs.

* Implement ClearCandidatePruningPointUTXOs.

* Implement UpdateCandidatePruningPointMultiset.

* Use the candidate pruning point multiset in updatePruningPoint.

* Implement CandidatePruningPointUTXOIterator.

* Use the pruning point utxo set iterator for StageVirtualUTXOSet.

* Defer ClearCandidatePruningPointUTXOs.

* Implement OverwriteVirtualUTXOSet.

* Implement CommitCandidatePruningPointUTXOSet.

* Implement BeginOverwritingVirtualUTXOSet and FinishOverwritingVirtualUTXOSet.

* Implement overwriteVirtualUTXOSetAndCommitPruningPointUTXOSet.

* Rename ClearCandidatePruningPointUTXOs to ClearCandidatePruningPointData.

* Add missing methods to dbManager.

* Implement PruningPointUTXOs.

* Implement RecoverUTXOIfRequired.

* Delete the utxoserialization package.

* Fix compilation errors in TestValidateAndInsertPruningPoint.

* Switch order of operations in the if statements in PruningPointUTXOs so that Next() wouldn't be unnecessarily called.

* Fix missing pruning point utxo set staging and bad slice length.

* Fix no default multiset in InsertCandidatePruningPointUTXOs.

* Make go vet happy.

* Rename candidateXXX to importedXXX.

* Do some more renaming.

* Rename some more.

* Fix bad MsgIBDRootNotFound logic.

* Fix an error message.

* Simplify receiveIBDRootBlock.

* Fix error message in receiveAndInsertIBDRootUTXOSet.

* Do some more renaming.

* Fix merge errors.

* Fix a bug caused by calling iterator.First() unnecessarily.

* Remove databaseContext from stores and don't use a transaction in ClearXXX functions.

* Simplify receiveAndInsertIBDRootUTXOSet.

* Fix offset count in PruningPointUTXOs().

* Fix readOnlyUTXOIteratorWithDiff.First().

* Split handleRequestIBDRootUTXOSetAndBlockFlow into smaller methods.

* Rename IbdRootNotFound to UnexpectedPruningPoint.

* Rename requestIBDRootHash to requestPruningPointHash.

* Rename IBDRootHash to PruningPointHash.

* Rename RequestIBDRootUTXOSetAndBlock to RequestPruningPointUTXOSetAndBlock.

* Rename IBDRootUTXOSetChunk to PruningPointUTXOSetChunk.

* Rename RequestNextIBDRootUTXOSetChunk to RequestNextPruningPointUTXOSetChunk.

* Rename DoneIBDRootUTXOSetChunks to DonePruningPointUTXOSetChunks.

* Rename remaining references to IBD root.

* Fix an error message.

* Add a check for HadStartedImportingPruningPointUTXOSet in commitVirtualUTXODiff.

* Add a check for HadStartedImportingPruningPointUTXOSet in ImportPruningPointUTXOSetIntoVirtualUTXOSet.

* Move FinishImportingPruningPointUTXOSet closer to HadStartedImportingPruningPointUTXOSet.

* Remove reference to pruningStore in utxoSetIterator.

* Pointerify utxoSetIterator receivers.

* Fix bad insert in CommitImportedPruningPointUTXOSet.

* Rename commitImportedPruningPointUTXOSetAll to applyImportedPruningPointUTXOSet.

* Simplify PruningPointUTXOs.

* Add populateTransactionWithUTXOEntriesFromUTXOSet.

* Fix a TODO comment.

* Rename InsertImportedPruningPointUTXOs to AppendImportedPruningPointUTXOs.

* Extract handleRequestPruningPointUTXOSetAndBlockMessage to a separate method.

* Rename stuff in readOnlyUTXOIteratorWithDiff.First().

* Address toAddIterator in readOnlyUTXOIteratorWithDiff.First().

* Call First() before any full iteration on ReadOnlyUTXOSetIterator.

* Call First() before any full iteration on a database Cursor.

* Put StartImportingPruningPointUTXOSet inside the pruning point transaction.

* Make serializeOutpoint and serializeUTXOEntry free functions in pruningStore.

* Fix readOnlyUTXOIteratorWithDiff.First().

* Fix bad validations in importPruningPoint.

* Remove superfluous call to validateBlockTransactionsAgainstPastUTXO.
2021-01-21 17:24:52 +02:00
Elichai Turkel
6a03d31f98 Remove accidental pointer indirection in dbKey (#1441) 2021-01-21 12:14:52 +02:00
Ori Newman
319ab6cfcd Always request orphan roots, even when you get an inv of a known orphan (#1436)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-01-20 10:58:45 +02:00
Ori Newman
abef96e3de Add TestIBDWithPruning (#1425)
* Add TestIBDWithPruning

* Test block count

* Fix a typo

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
Co-authored-by: Mike Zak <feanorr@gmail.com>
2021-01-20 10:07:32 +02:00
stasatdaglabs
2e0bc0f8c4 Increase P2P connections' dial timeouts to 5 seconds (#1437)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-01-20 09:48:48 +02:00
Ori Newman
acf5423c63 Remove docker test parallelism (#1434)
* Remove docker parallelism

* Remove redundant dash
2021-01-19 17:30:24 +02:00
Ori Newman
effb545d20 Fix wrong condition and add logs (#1435) 2021-01-19 17:20:25 +02:00
Svarog
ad9c213a06 Restructure database to prevent double-slashes in keys, causing bugs in cursors (#1432)
* Add TestValidateAndInsertPruningPointWithSideBlocks

* Optimize infrastructure bucket paths

* Update infrastructure tests

* Refactor the consensus/database layer

* Remove utils/dbkeys

* Use consensus/database in consensus instead of infrastructure

* Fix a bug in dbBucketToDatabaseBucket and MakeBucket combination

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-19 14:19:08 +02:00
Svarog
a4adbabf96 TestBuildBlockErrorCases and remove redundant check of coinbase script length (#1427)
* Write structure of TestBlockBuilderErrorCases

* Remove double verification of script length in serializeCoinbasePayload

* Remove redundant code in TestBuildBlockErrorCases

* Rename povTransactionHash -> povBlockHash

* Convert coinbasePayloadScriptPublicKeyMaxLength to uint8

* Re-use consensus in TestBuildBlockErrorCases
2021-01-19 10:37:51 +02:00
Ori Newman
799eb7515c Test validateAndInsertPruningPoint (#1420)
* Add TestValidateAndInsertPruningPoint

* Check fake UTXO set and validate that the pruning point changed
2021-01-18 18:17:13 +02:00
Mike Zak
0769705b37 Update to version 0.8.6 2021-01-18 15:17:15 +02:00
Svarog
189e3b6be9 Fix missing utxo notifications (#1428)
* fix missing UTXO notifications (#1426)

* Remove redundant semicolon

Co-authored-by: aspect <anton.yemelyanov@gmail.com>
2021-01-18 13:08:16 +02:00
Mike Zak
e8dfbc8367 Merge remote-tracking branch 'origin/master' into v0.8.5-dev 2021-01-18 11:36:25 +02:00
Ori Newman
d70740331a Remove hashesQueueSet (#1424)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-01-18 09:10:26 +02:00
Svarog
9a81b1328a Add the address of the node we connected to in the send/receiveVersion log (#1423)
* Add the address of the node we connected to in the send/receiveVersion log

* Don't call functions before LogAndMeasureExecutionTime

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-17 16:31:48 +02:00
Ori Newman
d4f3a252ff Add TestIsFinalizedTransaction (#1422)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-01-17 15:47:49 +02:00
Ori Newman
f14527de4c Give different limit to the RPC server (#1421) 2021-01-17 13:58:42 +02:00
Ori Newman
dd57e6abe6 Fix checkParentHeadersExist and cover pruning_violation_proof_of_work_and_difficulty.go with tests (#1418)
* Fix checkParentHeadersExist and cover pruning_violation_proof_of_work_and_difficulty.go with tests

* Remove unused variable

* Change consensus violation

* Change condition order

* Get rid of irrelevant error codes in extractRejectCode

* Fix wrong test db names

* Fix checkParentHeadersExist
2021-01-17 11:27:04 +02:00
Ori Newman
67be4d82bf Don't mark bad merkle root as invalid (#1419)
* Don't mark bad merkle root as invalid

* Fix TestBlockStatus

* Move discardAllChanges inside the inner if
2021-01-17 10:40:05 +02:00
Ori Newman
a1381d6768 Add TestCheckParentBlockBodiesExist (#1405)
* Add TestCheckParentBlockBodiesExist

* Use block in pruning point's anticone for the test

* Fix test db name
2021-01-14 13:31:17 +02:00
Ori Newman
10b519a3e2 Add tests to ValidateHeaderInIsolation (#1415)
* Add tests to ValidateHeaderInIsolation

* Fix tests db names

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-14 11:06:08 +02:00
Ori Newman
a35f8269ea Add checkBlockIsNotPruned (#1413)
* Add checkBlockIsNotPruned

* Fix test name and comment

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-14 10:56:22 +02:00
stasatdaglabs
15af6641fc Send the IBD root UTXO set in chunks instead of a massive monolithic message (#1412)
* Extract syncPruningPointUTXOSet to a separate method.

* Implement logic to send pruning point utxo set chunks in a loop.

* Replace IBDRootUTXOSetAndBlockMessage with IbdRootUtxoSetChunkMessage.

* Add a new message: RequestNextIBDRootUTXOSetChunk.

* Add a new message: DoneIBDRootUTXOSetChunks.

* Protect HandleRequestIBDRootUTXOSetAndBlock from rogue messages.

* Reimplement receiveIBDRootUTXOSetAndBlock.

* Add CmdDoneIBDRootUTXOSetChunks to the HandleRelayInvs flow.

* Decrease the max message size to 10mb.

* Fix bad step.

* Fix confusion between outgoing/incoming routes.

* Measure how long it takes to send/receive the UTXO set.

* Use LogAndMeasure in handleRequestIBDRootUTXOSetAndBlockFlow.
2021-01-13 18:03:07 +02:00
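A hedged Go sketch of the sender side of the flow this PR introduces (streaming the UTXO set as chunk messages, with a "request next chunk" / "done" handshake around them); the names and chunk size below are placeholders rather than the actual protowire message definitions:

```go
package main

import "fmt"

// pair stands in for an outpoint/UTXO-entry pair carried by a chunk message.
type pair struct{ outpoint, entry string }

const chunkSize = 3 // real chunks would hold many more pairs

// sendUTXOSetInChunks streams the UTXO set chunk by chunk, waiting for the
// peer to request the next chunk before sending it, and finishes with a
// "done" message (compare IbdRootUtxoSetChunk / RequestNextIBDRootUTXOSetChunk /
// DoneIBDRootUTXOSetChunks in the PR above).
func sendUTXOSetInChunks(pairs []pair, send func(msg interface{}), waitForNextChunkRequest func()) {
	for start := 0; start < len(pairs); start += chunkSize {
		end := start + chunkSize
		if end > len(pairs) {
			end = len(pairs)
		}
		send(pairs[start:end])
		waitForNextChunkRequest()
	}
	send("done")
}

func main() {
	pairs := make([]pair, 7)
	sendUTXOSetInChunks(pairs,
		func(msg interface{}) { fmt.Printf("sent: %v\n", msg) },
		func() { /* block here until the peer asks for the next chunk */ })
}
```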
Svarog
1b97cfb302 Prevent a race condition in findHighestSharedBlockHash where we get headersSelectedTip and then pass it as highHash to GetBlockLocator, without locking consensus (#1410)
* Prevent a race condition in findHighestSharedBlockHash where we get headersSelectedTip and then pass it as highHash to GetBlockLocator, without locking consensus

* Restart findHighestSharedBlockHash if lowHash or highHash are no longer in selectedParentChain

* Test for specifically ErrBlockNotInSelectedParentChain instead of database NotFound error

* Fix TestCreateHeadersSelectedChainBlockLocator

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-13 17:55:37 +02:00
Ori Newman
61be80a60c Add TestCheckMergeSizeLimit (#1408)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-13 16:19:52 +02:00
Elichai Turkel
83134cc2b5 Add a codecov yml, disable patch checks and make status checks always pass (#1414) 2021-01-13 15:57:57 +02:00
Svarog
4988817da1 Reject SubmitBlock if the node is in IBD (#1409)
* Reject SubmitBlock if the node is in IBD

* Add comments

* Don't use iota for RejectReason constants, since in .proto those are hard-coded
2021-01-13 15:04:55 +02:00
Elichai Turkel
68bd8330ac Log the network's hashrate (#1406)
* Log the hashrate of each block

* Add a test for GetHashrateString

* Move difficulty-related functions to their own package

* Convert the validated log in validateAndInsertBlock to a log function

* Add tests for max/min int
2021-01-13 12:51:23 +02:00
Elichai Turkel
192dd2ba8f Add codecov to github (#1358) 2021-01-13 10:35:12 +02:00
Ori Newman
cc49b1826a Reset windowExpectedEndTime after each window (#1407) 2021-01-12 21:34:26 +02:00
Svarog
ce348373c6 Delete existing UTXOSet when committing VirtualUTXOSet. (#1403) 2021-01-12 16:51:56 +02:00
talelbaz
8ad5725421 Adding a test for the error cases of the function 'checkBlockStatus()' (#1398)
* Adds a test for error cases of the function checkBlockStatus.

* Fix review comments.

* Move test to validateandinsertblock_test.go

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-12 15:55:02 +02:00
Ori Newman
23a2fbf401 Remove erroneous finality optimization from LowestChainBlockAboveOrEqualToBlueScore (#1402)
* Remove erroneous finality optimization from LowestChainBlockAboveOrEqualToBlueScore

* Add TestLowestChainBlockAboveOrEqualToBlueScore

* Remove unnecessary fields from dagTraversalManager
2021-01-12 15:27:08 +02:00
Elichai Turkel
c1361e5b3e Change log sizes and add some new features to logger (#1400)
* Increase default log sizes, and increase kaspad log sizes

* Add an option to not print logs to stdout

* Allow logs to be printed in the current working directory

* Add more pruning related logs

* Add comment and increase log rotations to save last 64 logs
2021-01-12 13:26:29 +02:00
Ori Newman
53744ceb45 Compare transaction IDs with Equal (#1401) 2021-01-12 12:53:33 +02:00
Ori Newman
bcf2302460 Add high hash to block locator, and add block locator tests (#1397)
* Include high hash in the block locator

* Add tests for block locator

* Remove redundant function

* Remove redundant assignments
2021-01-12 11:16:25 +02:00
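A block locator is a short list of block hashes running from a high hash down toward a low hash with exponentially growing gaps, so the peers can find their highest shared block cheaply. A hypothetical Go sketch of that construction, using chain indices in place of real block hashes (not the kaspad implementation):

```go
package main

import "fmt"

// blockLocatorIndices returns selected-chain indices from highIndex down to
// lowIndex, doubling the step after each hop, so nearby blocks are dense and
// distant ones are sparse. Real locators carry block hashes, not indices.
func blockLocatorIndices(lowIndex, highIndex uint64) []uint64 {
	indices := []uint64{}
	step := uint64(1)
	current := highIndex
	for current > lowIndex {
		indices = append(indices, current)
		if current < lowIndex+step {
			break
		}
		current -= step
		step *= 2
	}
	return append(indices, lowIndex) // the low hash (e.g. genesis or pruning point) is always last
}

func main() {
	fmt.Println(blockLocatorIndices(0, 100)) // [100 99 97 93 85 69 37 0]
}
```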
Ori Newman
6101e6bdb6 Fix UTXO serialization, its test, and the static check that missed it (#1396)
* Fix UTXO serialization, its test, and the static check that missed it

* Remove duplicate case

* Use one line for static check

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-01-11 17:45:17 +02:00
stasatdaglabs
d9b97afb92 Don't swallow errors in HandleNewBlockTransactions. (#1390)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-11 17:16:15 +02:00
Ori Newman
b8ca33d91d Add selected chain store and optimize block locator with it (#1394)
* Add selected chain store and optimize block locator with it

* Fix build error

* Fix comments

* Fix IsStaged

* Rename CalculateSelectedParentChainChanges to CalculateChainPath and SelectedParentChainChanges->SelectedChainPath

* Use binary.LittleEndian directly to allow compiler optimizations

* Remove boolean from HeadersSelectedChainStore interface

* Prevent endless loop in block locator
2021-01-11 15:51:45 +02:00
Svarog
c7deda41c6 Fix deserialization of script version in UTXOSet deserialization (#1395)
* Initialize protoUTXOSetIterator with index = -1

* Handle error when failed to deserialize Script version

* Add support for (de)serialization of (u)int16

* Log the error when converting it into ErrMalformedUTXO
2021-01-11 15:23:27 +02:00
talelbaz
434cf45112 Adds a new test to validate POW, and Fix Main-net and Test-net genesis block data. (#1389)
* Commit in order to do fetch & merge

* Adds a new test to validate POW, and Fix Main-net and Testnet genesis block data.

* Fix window's test for testnet and change the expected pruning point for mainnet and testnet.

* Delete the function "solveBlock" from proof_of_work_test.go and call the function mining.SolvaBlock instead. Also, remove the use of random in the "solveBlockWithWrongPOW" function.

* Replace 0xFFFFFFFFFFFFFFFF with math.MaxUint64 in the "solveBlockWithWrongPOW" function and change the comment of the "TestPOW" function

* Replace 0xFFFFFFFFFFFFFFFF with math.MaxUint64 in the "solveBlockWithWrongPOW" function and change the comment of the "TestPOW" function

* Change from <= to < in the for statement in "solveBlockWithWrongPOW" function

* Adds one arg to the function call "NewTestConsensus" (the function sig has changed).

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-11 13:15:26 +02:00
Elichai Turkel
2cc0bf1639 Optimize block locator using finality store (#1386)
* Make sure block locator doesn't include a hash lower than the lowHash in
the block locator

* Use finalityStore to optimize LowestChainBlockAboveOrEqualToBlueScore
2021-01-10 13:36:02 +02:00
Ori Newman
0f2d0d45b5 Add TargetBlocksPerSecond for kaspaminer (#1385)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-01-10 12:44:24 +02:00
Svarog
09e1a73340 Added some logs to block-relay and IBD flows (#1384)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-10 12:05:34 +02:00
Svarog
c6d20c1f6f Start IBDBlockLocator from PruningPoint instead of Genesis (#1383) 2021-01-10 11:06:06 +02:00
Svarog
49e0a2a2e7 Add basic support for archival node (#1370)
* Add archival cli flag

* If --archival was activated - don't delete anything

* Fix tests

* Still change block status to StatusHeaderOnly even in archival nodes
2021-01-10 10:25:15 +02:00
Ori Newman
285ae5cd40 Update READMEs and add CONTRIBUTING.md (#1381)
* Update READMEs and add CONTRIBUTING.md

* Update go version

* Update READMEs and CONTRIBUTING.md

* Update README.md

* Update README.md

* Update README.md
2021-01-10 09:13:00 +02:00
stasatdaglabs
541205904e Add RPC documentation (#1379)
* Split messages.proto to p2p and rpc.

* Split messages.proto to p2p and rpc.

* Write a short intro to the RPC docs.

* Start documenting RPC calls.

* Use a custom protoc-gen-doc.

* Continue writing RPC documentation.

* Finish writing RPC documentation.

* Fix a formatting error.

* Fix merge errors.

* Fix formatting into protowire/README.md.

* Rerun go generate ..

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-07 16:55:47 +02:00
Ori Newman
256b7f25f1 Decrease grpc client dial timeout to one second (#1378) 2021-01-07 16:26:37 +02:00
Elichai Turkel
82d95c59a3 Increase log files sizes (#1374) 2021-01-07 11:17:13 +02:00
Ori Newman
79c5d4595e Remove gencerts (#1371) 2021-01-07 10:00:13 +02:00
stasatdaglabs
b195301a99 Add IsIBDPeer to GetConnectedPeerInfoResponse. (#1367)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-06 17:33:51 +02:00
Ori Newman
4ad89056c2 Implement getMempoolEntries (#1369)
* Implement getMempoolEntries

* Fix comment

* Fix comment
2021-01-06 17:26:21 +02:00
Elichai Turkel
64f6cd2178 Merge pull request #1368 from kaspanet/v0.8.4-dev
Upgrade master to 0.8.4rc0
2021-01-06 14:43:39 +02:00
Mike Zak
26368cd674 Update to version 0.8.5 2021-01-06 14:04:46 +02:00
Svarog
9ea4c0fa38 Add a sanity check that makes sure the utxoset we save fits the pruningPoint's commitment. (#1366)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-01-06 13:27:02 +02:00
Elichai Turkel
a04a5462ae Update devnet genesis with older timestamp (#1364)
* Update devnet genesis with older timestamp

* Update BlueWindow test for new genesis
2021-01-06 12:50:27 +02:00
Elichai Turkel
4577023e44 Add script pubkey version to signature hash (#1360)
* Replace 0xffff with math.MaxUint16 on version checks

* Add script version to the signature hash

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-01-06 11:51:12 +02:00
Ori Newman
d8293ef635 Revert "Add recoverability for UTXO index (#1342)" (#1353)
This reverts commit 778375c4
2021-01-06 11:41:52 +02:00
Ori Newman
2059d6ba56 Delete sync rate mechanism (#1356)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-06 10:43:44 +02:00
Ori Newman
3ec1cbe236 Add TestBlockStatus (#1361)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-06 10:37:10 +02:00
Ori Newman
741e0962be Remove diffs from restoreUTXO logs (#1354)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-06 10:31:42 +02:00
Elichai Turkel
6279db2bf1 Reverse ghostdag tie break direction (#1359)
* Remove hash reversal in ghostdag

* Update tests after changing ghostdag hash direction

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-01-05 19:10:59 +02:00
stasatdaglabs
e24bc527f3 Update the max block size and mass constants to reasonable values (#1344)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-01-05 18:36:08 +02:00
talelbaz
8a309a7d2a Upgradability mechanisms script version (#1313)
* ''

* ''

* ''

* Changes genesis block version to 0.

* a

* a

* All tests are done.

* All tests passed for changed block version from int32 to uint16

* Adds validation of rejecting blocks with unknown versions.

* Changes txn version from int32 to uint16.

* .

* Adds comments to exported functions.

* Change functions name from ConvertFromRpcScriptPubKeyToRPCScriptPubKey to ConvertFromAppMsgRPCScriptPubKeyToRPCScriptPubKey and from ConvertFromRPCScriptPubKeyToRpcScriptPubKey to ConvertFromRPCScriptPubKeyToAppMsgRPCScriptPubKey

* change comment to "ScriptPublicKey represents a Kaspad ScriptPublicKey"

* Delete the (tx.Version < 0) part, which cannot occur, from the if statement.

* Revert protobuf version.

* Fix a comment.

* Fix a comment.

* Rename a variable.

* Rename a variable.

* Remove a const.

* Rename a type.

* Rename a field.

* Rename a field.

* Remove commented-out code.

* Remove dangerous nil case in DomainTransactionOutput.Clone().

* Remove a constant.

* Fix a string.

* Fix wrong totalScriptPubKeySize in transactionMassStandalonePart.

* Remove a constant.

* Remove an unused error.

* Fix a serialization error.

* Specify version types to be uint16 explicitly.

* Use constants.ScriptPublicKeyVersion.

* Fix a bad test.

* Remove some whitespace.

* Add a case to utxoEntry.Equal().

* Rename scriptPubKey to scriptPublicKey.

* Remove a TODO.

* Rename constants.

* Rename a variable.

* Add version to parseShortForm.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: stasatdaglabs <stas@daglabs.com>
2021-01-05 17:50:09 +02:00
Elichai Turkel
70d515a5a9 PruningManager: Delete tips that are in pruningPoint.Anticone from the tips list (#1351) 2021-01-05 14:14:40 +02:00
Elichai Turkel
72a7ca53e6 Save and expose the database in TestConsensus (#1349) 2021-01-05 14:13:33 +02:00
Svarog
119e7374e1 Move common log message to trace (#1348) 2021-01-05 12:51:48 +02:00
Elichai Turkel
e509cb1597 Small performance improvements to BlueWindow (#1343)
* Convert BlockGHOSTDAGData from an interface to a public struct with getters

* Move hashes.Less to externalapi so it can access the hashes directly without copying

* Reduce calls to ghostdagstore.Get in blueWindow

* Simplify the logic in RequiredDifficulty and reuse big.Int

* Remove bigintpool as it's no longer used

* Use ChooseSelectedParent in RequiredDifficulty instead of looping over the parents

* Remove comment
2021-01-05 12:13:02 +02:00
Ori Newman
0fb97a4f37 Add logs (#1346)
* Add logs

* Fix logInterval to const
2021-01-04 17:04:13 +02:00
Svarog
789a7379bd Require only inputs not be prefilled (#1345)
* Fix bug in validateAndInsertPruningPoint: if ValidateAndInsertBlock returned a non-RuleError error, the error was ignored

* Convert checkNoPrefilledFields into checkNoPrefilledInputs

* Add log line

* clone pruning point when passing to validateBlockTransactionsAgainstPastUTXO
2021-01-04 15:55:08 +02:00
Ori Newman
778375c4af Add recoverability for UTXO index (#1342)
* Add recoverability for UTXO index

* Add comment

* Rename UTXOOutpointPair->OutpointUTXOPair

* Get rid of the db transaction on resetStore and collect all keys before deleting

* Use VirtualSelectedParent instead of selected tip

* Fix error
2021-01-04 14:15:51 +02:00
stasatdaglabs
acef311fb4 Improve the performance of downloading headers (#1340)
* Add a new message: BlockHeadersMessage.

* Add a new message: BlockHeadersMessage.

* Send a lot of headers as a single message instead of many small messages.

* Keep a short queue of blockHeadersMessages so that there's never a moment when the node is not validating and inserting headers

* Add a missing return statement.

* Remove MsgBlockHeader from payloads.
2021-01-03 17:57:14 +02:00
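A hedged Go sketch of the pipelining idea in the commits above (a short queue of header batches, so the node keeps validating and inserting headers while further batches download); the names are illustrative only:

```go
package main

import "fmt"

// headersBatch stands in for one BlockHeadersMessage carrying many headers.
type headersBatch []string

func main() {
	// A short buffered queue: a couple of batches can sit here, so the
	// downloader never has to wait for validation to finish.
	queue := make(chan headersBatch, 2)

	// Downloader goroutine: pushes batches as they arrive from the peer.
	go func() {
		for i := 0; i < 5; i++ {
			queue <- headersBatch{fmt.Sprintf("header-batch-%d", i)}
		}
		close(queue)
	}()

	// Processor: validates and inserts headers while further batches download.
	for batch := range queue {
		fmt.Println("validating and inserting", batch)
	}
}
```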
stasatdaglabs
e8cad2b2f3 Send headers continuously without needing to run the BlockLocator protocol after every ~maxBlueScoreDifference blocks (#1339)
* Send headers continuously without needing to run the BlockLocator protocol after every ~maxBlueScoreDifference blocks

* Add logging.

* Make logs more descriptive.
2021-01-03 15:50:21 +02:00
Ori Newman
97fddeff4b Don't mine when node is not connected (#1338) 2021-01-03 14:57:09 +02:00
Svarog
d6fe9a3017 Filter headers-only blocks out of parentChildren when selecting virtualSelectedParent (#1337) 2021-01-03 14:35:03 +02:00
Ori Newman
1abffd472c Add lock to mempool.GetTransaction (#1336) 2021-01-03 13:06:46 +02:00
Svarog
8c8da3b01f Convert all log messages in pick_virtual_parents.go to Debugf (#1335) 2021-01-03 10:20:37 +02:00
Elichai Turkel
51625e7967 Remove LevelDBSnapshot from LevelDBTransaction (#1334)
* Remove leveldb snapshot from LevelDBTransaction

* Update transactions_test.go to represent the new Transaction logic
2020-12-31 17:58:01 +02:00
Svarog
6fa3aa1dca Change SyncRateWindow to 15 minutes + update sync times on block headers as well (#1331)
* Change SyncRateWindow to 15 minutes + update sync times on block headers as well

* Rename result to isSyncRateTooLow

* Fix formula for expected blocks
2020-12-31 16:45:56 +02:00
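For context, a rough Go sketch of the kind of sync-rate check these commits tune; the window, target block time, and threshold below are assumptions, not kaspad's actual constants:

```go
package main

import (
	"fmt"
	"time"
)

const (
	syncRateWindow        = 15 * time.Minute // the window this PR changes
	targetTimePerBlock    = 1 * time.Second  // assumed network target block time
	minSyncRateProportion = 0.85             // assumed threshold below which the rate is "too low"
)

// isSyncRateTooLow compares the blocks actually processed inside the window
// with the count expected at the network's target block rate.
func isSyncRateTooLow(blocksInWindow int) bool {
	expectedBlocks := float64(syncRateWindow) / float64(targetTimePerBlock)
	return float64(blocksInWindow) < expectedBlocks*minSyncRateProportion
}

func main() {
	fmt.Println(isSyncRateTooLow(400)) // true: well below the ~900 blocks expected in 15 minutes
}
```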
Ori Newman
23304a4977 Add UTXO set cache (#1333)
* Add UTXO set cache

* Add clear method to cache
2020-12-31 16:34:46 +02:00
stasatdaglabs
b473a09111 Fix a crash in the UTXO index (#1332)
* Add a field to TransactionAcceptanceData: TransactionInputUTXOEntries.

* Fix failing tests.

* Add transactionInputUtxoEntries to the database.

* Populate transactionInputUTXOEntries in applyMergeSetBlocks and use them in the UTXO index.

* Remove UTXOEntry.Clone().

* Add an additional equality test.
2020-12-31 14:50:14 +02:00
Ori Newman
dd35669861 Check that there are no prefilled fields when validating a block (#1329)
* Check that there are no prefilled fields when validating a block

* Use cleanBlockPrefilledFields in AddBlock

* Move cleanBlockPrefilledFields to BuildBlockWithParents

* Move cleanBlockPrefilledFields to func (bb *testBlockBuilder) BuildBlockWithParents
2020-12-30 18:31:17 +02:00
stasatdaglabs
9401b77a4f Improve the performance of GetBlock and GetBlockHeader. (#1328) 2020-12-30 18:13:12 +02:00
Elichai Turkel
e87368157d Update go.yml (#1330) 2020-12-30 17:46:17 +02:00
stasatdaglabs
7dd0188838 Move the heavy lifting in BlockLocator from the syncer to the syncee (#1324)
* Add a new message: BlockLocatorHighestHash.

* Add a new message: IBDBlockLocator.

* Implement HandleIBDBlockLocator.

* Reimplement findHighestSharedBlockHash.

* Make HandleIBDBlockLocator only return hashes that are in the selected parent chain of the target hash.

* Increase the cache sizes of blockRelationStore, reachabilityDataStore, and ghostdagDataStore.

* Fix wrong initial highHash in findHighestSharedBlockHash.

* Make go vet happy.

* Protect against receiving wrong messages when expecting MsgIBDBlockLocatorHighestHash.
2020-12-30 15:44:14 +02:00
Ori Newman
6172e48adc Don't ban when capacity has been reached (#1326) 2020-12-30 15:15:41 +02:00
Elichai Turkel
739cffd918 Remove half the ghostdag store calls in LowestChainBlockAboveOrEqualToBlueScore (#1323) 2020-12-30 13:53:46 +02:00
Elichai Turkel
bd89ca2125 Log only tips length, and inspect ReachabilityData error instead of calling HasReachabilityData (#1321)
* Print the number of tips instead of the tips themselves

* Inspect ReachabilityData error instead of calling HasReachabilityData
2020-12-30 13:02:34 +02:00
Svarog
d917a1fc1e Remove the block delay from kaspaminer (#1320) 2020-12-30 12:37:23 +02:00
Svarog
9b12b9c58a Make ReachabilityData a read-only interface with a writable variant, to prevent cloning (#1316)
* Rename reachabilityManager.data to dataForInsertion, and use it only during insertions

* Make reachabilityData an interface

* Fix db serialization of reachability data

* Fix reachabilityDataStore

* Fix all tests

* Cleanup debugging code

* Fix insertToFutureCoveringSet

* Add comments

* Rename to ReachabilityData and MutableReachabilityData
2020-12-30 09:48:38 +02:00
Elichai Turkel
533fa8c00e Change virtual parents selection to allow faster branch merges in the network (#1315)
* If there are more candidates than the max, choose half with the highest blueWork and half with the lowest

* Add a Test GhostDAG sorter

* Add a test for pick virtual parents

* Fix review nits
2020-12-29 21:42:31 +02:00
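A hedged sketch of the candidate rule described in this PR's first commit: when there are more virtual-parent candidates than allowed, keep half from the highest-blueWork end and half from the lowest. The types and helper below are illustrative, not the pick_virtual_parents code:

```go
package main

import (
	"fmt"
	"sort"
)

// candidate is a hypothetical virtual-parent candidate with its blueWork.
type candidate struct {
	hash     string
	blueWork uint64
}

// pickCandidates keeps at most max candidates: half from the highest-blueWork
// end and half from the lowest, so both heavy and light branches get merged.
func pickCandidates(candidates []candidate, max int) []candidate {
	if len(candidates) <= max {
		return candidates
	}
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].blueWork > candidates[j].blueWork
	})
	half := max / 2
	picked := append([]candidate{}, candidates[:half]...)               // highest blueWork
	picked = append(picked, candidates[len(candidates)-(max-half):]...) // lowest blueWork
	return picked
}

func main() {
	candidates := []candidate{{"a", 90}, {"b", 10}, {"c", 70}, {"d", 30}, {"e", 50}}
	fmt.Println(pickCandidates(candidates, 4)) // [{a 90} {c 70} {d 30} {b 10}]
}
```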
Svarog
0f93189c16 Don't print the whole UTXODiff to log, it might be quite huge (#1318) 2020-12-29 17:16:38 +02:00
Elichai Turkel
52427cb953 Reduce the amount of calls to FinalityPoint() (#1317) 2020-12-29 17:08:06 +02:00
talelbaz
d72f70fabe Adds new dags for GHOSTDAG tests (formatted as json files). (#1187)
* Adds new dags for ghostdag tests.

* Change the error msg for the number of tests (6 instead of 3).

Co-authored-by: tal <tal@daglabs.com>
2020-12-29 14:10:38 +02:00
Ori Newman
49b6cc6038 Add mutable and immutable header interfaces (#1305)
* Add mutable and immutable header interfaces

* Fix ShouldMine()

* Remove false comment

* Fix Equal signature

* Fix Equal implementation
2020-12-29 13:55:17 +02:00
Elichai Turkel
c10a087696 Remove virtualDiffParents that aren't in V.Past or in P.Future (#1310) 2020-12-29 12:07:45 +02:00
Ori Newman
02d5fb29cf Fix notifyVirtualSelectedParentBlueScoreChanged to show the selected tip blue score instead of the virtual's (#1309)
* Fix notifyVirtualSelectedParentBlueScoreChanged to show the selected tip blue score instead of the virtual's

* Fix ShouldMine() to fetch selected tip header
2020-12-29 12:07:05 +02:00
stasatdaglabs
48278bd1c0 Slightly improve the performance of antiPastHashesBetween. (#1312) 2020-12-29 10:36:18 +02:00
Ori Newman
d91afbfe3b Change most Tracef to Debugf (#1302)
* Change most Tracef to Debugf

* Remove diff from log
2020-12-29 10:32:28 +02:00
Ori Newman
5f22632836 Use sync rate for getBlockTemplate's isSynced (#1311)
* Use sync rate for getBlockTemplate's isSynced

* Fix a typo

Co-authored-by: Mike Zak <feanorr@gmail.com>
2020-12-29 09:28:02 +02:00
stasatdaglabs
4aafe8a630 Fix a crash caused by orphans whose validation failed. (#1297) 2020-12-29 09:02:28 +02:00
Elichai Turkel
7e379028f3 Log the time it takes to delete blocks and save the utxo set for pruning point (#1307) 2020-12-28 16:22:00 +02:00
Elichai Turkel
af1b8c8490 Move version initialization to init function to prevent race conditions (#1299) 2020-12-28 16:04:00 +02:00
Ori Newman
c7c8b25c09 Set stream max message size and increase the max message size to 1GB (#1300) 2020-12-28 12:53:11 +02:00
stasatdaglabs
b0251fe1a6 Add missing lock to IsValidPruningPoint. (#1296) 2020-12-28 10:08:36 +02:00
Ori Newman
cfe013eca7 Add IsHeaderOnly field to BlockVerboseData (#1295) 2020-12-27 18:23:51 +02:00
stasatdaglabs
50e74bf412 Add BlueScore to BlockVerboseData. (#1294) 2020-12-27 17:57:43 +02:00
stasatdaglabs
12f1c3dfab Fix a crash in GetMissingBlockBodyHashes (#1289)
* Remove the limit on the amount of hashes returned from antiPastHashesBetween.

* Guard against requests with a non-existing block hash.

* Move missing-block-hash guards to consensus.go.

* Ban a peer that doesn't send us all the requested headers during IBD.

* Extract blockHeap.ToSlice.

* Re-request headers in requestHeaders if we didn't receive the highHash.
2020-12-27 17:03:21 +02:00
Ori Newman
a231ec7214 Make only one transaction in validateAndInsertBlock (#1292) 2020-12-27 16:37:09 +02:00
Ori Newman
8aecf961bc Red inclusion (#1275)
* Accept red blocks transactions

* Add comments to TestTransactionAcceptance

* Fix tests

* Remove fetchUTXOSetIfMissing

* Remove redundant dependency

* Fix comments
2020-12-24 18:12:46 +02:00
stasatdaglabs
0dea766373 Fix the stopOldTemplateSolving channel in the miner getting closed twice (#1286)
* Fix the stopOldTemplateSolving channel in the miner getting closed twice.

* Remove an unused variable.
2020-12-24 17:58:28 +02:00
Ori Newman
830ddf4735 Fill testConsensus's dagParams (#1283) 2020-12-24 17:47:03 +02:00
stasatdaglabs
9d0f513e49 Implement a simple mechanism to stop a miner from mining while kaspad is not synced (#1284)
* Reintroduce isSynced into the GetBlockTemplate response.

* Add a warning for when kaspad is not synced.

* Rephrase a log.
2020-12-24 17:15:44 +02:00
stasatdaglabs
bd97075e07 Remove the limit to the returned headers from buildMsgBlockHeaders (#1281)
* Remove the limit to the returned header hashes from buildMsgBlockHeaders.

* Build msgBlockHeaders in batches rather than all at once.
2020-12-24 16:53:26 +02:00
Svarog
05941a76e7 Make DomainHash and TransactionID read-only structs (#1282)
* Increase size of reachability cache

* Change DomainHash to struct with unexported hashArray

* Fixing compilation errors stemming from new DomainHash structure

* Remove obsolete Read/WriteElement methods in appmessage

* Fix all tests

* Fix all tests

* Add comments

* A few renamings

* go mod tidy
2020-12-24 16:15:23 +02:00
stasatdaglabs
7cbda3b018 Fix RPC requests with unknown payloads crashing kaspad (#1203)
* [NOD-1596] Return an error on an unknown field.

* [NOD-1596] Don't use unknownFields to check whether a message is invalid.
2020-12-24 15:17:34 +02:00
Ori Newman
a0b93e1230 Fix deletePastBlocks (#1280) 2020-12-24 13:53:16 +02:00
stasatdaglabs
b749b2db0b Fix transaction relay not working (#1279)
* Implement HandleGetMempoolEntry.

* Fix equality bug in handleRelayedTransactionsFlow.
2020-12-24 10:40:56 +02:00
Svarog
717914319a Increase size of reachability and block-relations cache (#1272)
* Increase size of reachability cache

* Increase cache size for BlockRelationStore
2020-12-24 10:27:05 +02:00
stasatdaglabs
6ef8eaf133 Fix AddBlock not returning validation failure errors (#1268)
* Fix AddBlock not returning validation failure errors.

* Elevate a log from Info to Warning.

* Elevate a log from Info to Warning.
2020-12-23 15:08:02 +02:00
Elichai Turkel
273c271771 Remove hash reversal (#1270)
* Remove hash reversing

* Move toBig to pow.go

* Update some tests
2020-12-23 12:42:37 +02:00
Ori Newman
729e3db145 Pruning calculation changes (#1250)
* 1) Calculate pruning point incrementally
2) Add IsValidPruningPoint to pruning manager and consensus
3) Use reachability children for selected child iterator

* Add request IBD root hash flow

* Fix UpdatePruningPointByVirtual and IsValidPruningPoint

* Regenerate messages.pb.go

* Make the pruning point the earliest chain block with finality interval higher than the previous pruning point

* Fix merge errors
2020-12-23 11:37:39 +02:00
Elichai Turkel
43c00f5e7f Remove the sorting requirement from BlueWindow (#1266)
* Remove the requirement for sorting in BlueWindow

* Sort the BlueWindow in window_test
2020-12-23 10:00:14 +02:00
stasatdaglabs
90d4dbcba1 Implement a simple CLI wallet (#1261)
* Copy over the CLI wallet from Kasparov.

* Fix trivial compilation errors.

* Reimplement the balance command.

* Extract isUTXOSpendable to a separate function.

* Reimplement the send command.

* Fix bad transaction ID parsing.

* Add a missing newline in a log.

* Don't use msgTx in send().

* Fix isUTXOSpendable not checking whether a UTXO is of a coinbase transaction.

* Add --devnet, --testnet, etc. to command line flags.

* In `create`, only print the public key of the active network.

* Use coinbase maturity in isUTXOSpendable.

* Add a readme.

* Fix formatting in readme.
2020-12-23 09:41:48 +02:00
Ori Newman
cb9d7e313d Implement Clone and Equal for all model types (#1155)
* [NOD-1575] Implement Clone and Equal for all model types

* [NOD-1575] Add assertion for transaction ID equality

* [NOD-1575] Use DomainTransaction.Equal to compare to expected coinbase transaction

* [NOD-1575] Add TestDomainBlockHeader_Clone

* [NOD-1575] Don't clone nil values

* [NOD-1575] Add type assertions

* [NOD-1575] Don't clone nil values

* [NOD-1575] Add missing Equals

* [NOD-1575] Add length checks

* [NOD-1575] Update comment

* [NOD-1575] Check length for TransactionAcceptanceData

* [NOD-1575] Explicitly clone nils where needed

* [NOD-1575] Clone tx id

* [NOD-1575] Flip condition

* Nod 1576 make coverage tests for equal clone inside model externalapi (#1177)

* [NOD-1576] Make coverage tests for equal and clone inside model and externalapi

* Some formatting and naming fixes

* Made transactionToCompare type exported

* Added some tests and made some changes to the tests code

* No changes made

* Some formatting and naming changes made

* Made better test coverage for externalapi clone and equal functions

* Changed expected result for two cases

* Added equal and clone functions tests for ghostdag and utxodiff

* Added tests

* [NOD-1576] Implement reachabilitydata equal/clone unit tests

* [NOD-1576]  Full coverage of reachabilitydata equal/clone unit tests

* Made changes and handling panic to transaction_equal_clone_test.go and formating of utxodiff_equal_clone_test.go

* Added recoverForEqual2 for handling panic to transaction_equal_clone_test.go

* [NOD-1576]  Full coverage of transaction equal unit test

* [NOD-1576] Add expects panic

* [NOD-1576] Allow composites in go vet

* [NOD-1576] Code review fixes (#1223)

* [NOD-1576] Code review fixes

* [NOD-1576] Code review fixes part 2

* [NOD-1576] Fix wrong name

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Karim <karim1king@users.noreply.github.com>

* Fix merge errors

* Use Equal where possible

* Use Equal where possible

* Use Equal where possible

Co-authored-by: andrey-hash <74914043+andrey-hash@users.noreply.github.com>
Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: Karim <karim1king@users.noreply.github.com>
2020-12-22 17:38:54 +02:00
stasatdaglabs
c2cec2f170 Fix the Sequence field in transaction inputs always getting deserialized to MaxUint64. (#1258) 2020-12-22 16:29:38 +02:00
Svarog
e7edfaceb7 Remove redundant rule errors (#1256)
* Remove ErrTimeTooNew and rename ErrBlockIsTooMuchInTheFuture to ErrTimeTooMuchInTheFuture

* Remove ErrBlockMassTooHigh

* Remove ErrHighHash

* Remove ErrInvalidSubnetwork + some cleanup around subnetwork validation

* Remove ErrTxMassTooHigh

* Remove ErrBadTxInput

* Remove ErrOverwriteTx

* Remove ErrTooManySigOps

* Remove ErrParentBlockUnknown

* Remove ErrParentBlockIsNotCurrentTips

* Remove ErrWithDiff

* Remove ErrFinality

* Remove ErrDelayedBlockIsNotAllowed + ErrOrphanBlockIsNotAllowed

* Remove ErrSelectedParentDisqualifiedFromChain

* Remove ErrBuildInTransactionHasGas

* Remove ErrBadFees
2020-12-22 14:05:21 +02:00
FestinaLente666
5632bee49d comments on default constants (#1253)
* comments on default constants

* more comments on default constants

* more comments on default constants

* more comments on default constants

* gofmt

* small typos
2020-12-22 11:44:32 +02:00
stasatdaglabs
21a459c0f4 Implement virtual selected parent chain RPC methods (#1249)
* [NOD-1579] Rename AcceptedTxIDs to AcceptedTransactionIDs.

* [NOD-1579] Add InsertBlockResult to ValidateAndInsertBlock results.

* [NOD-1593] Rename InsertBlockResult to BlockInsertionResult.

* [NOD-1593] Add SelectedParentChainChanges to AddBlockToVirtual's result.

* [NOD-1593] Implement findSelectedParentChainChanges.

* [NOD-1593] Implement TestFindSelectedParentChainChanges.

* [NOD-1593] Fix a string.

* [NOD-1593] Finish implementing TestFindSelectedParentChainChanges.

* [NOD-1593] Fix merge errors.

* [NOD-1597] Begin implementing UTXOIndex.

* [NOD-1597] Connect UTXOIndex to RPC.

* [NOD-1597] Connect Consensus to UTXOIndex.

* [NOD-1597] Add AcceptanceData to BlockInfo.

* [NOD-1597] Implement UTXOIndex.Update().

* [NOD-1597] Implement add(), remove(), and discard() in utxoIndexStore.

* [NOD-1597] Add error cases to add() and remove().

* [NOD-1597] Add special cases to add() and remove().

* [NOD-1597] Implement commit.

* [NOD-1597] Add a mutex around UTXOIndex.Update().

* [NOD-1597] Return changes to the UTXO from Update().

* [NOD-1597] Add NotifyUTXOsChangedRequestMessage and related structs.

* [NOD-1597] Implement HandleNotifyUTXOsChanged.

* [NOD-1597] Begin implementing TestUTXOIndex.

* [NOD-1597] Implement RegisterForUTXOsChangedNotifications.

* [NOD-1597] Fix bad transaction.ID usage.

* [NOD-1597] Implement convertUTXOChangesToUTXOsChangedNotification.

* [NOD-1597] Make UTXOsChangedNotificationMessage.Removed UTXOsByAddressesEntry instead of just RPCOutpoint so that the client can discern which address the UTXO was removed for.

* [NOD-1597] Collect outpoints in TestUTXOIndex.

* [NOD-1597] Rename RPC stuff.

* [NOD-1597] Add messages for GetUTXOsByAddresses.

* [NOD-1597] Implement HandleGetUTXOsByAddresses.

* [NOD-1597] Implement GetUTXOsByAddresses.

* [NOD-1597] Implement UTXOs().

* [NOD-1597] Implement getUTXOOutpointEntryPairs().

* [NOD-1597] Expand TestUTXOIndex.

* [NOD-1597] Convert SubmitTransaction to use RPCTransaction instead of MsgTx.

* [NOD-1597] Finish implementing TestUTXOIndex.

* [NOD-1597] Add messages for GetVirtualSelectedParentBlueScore.

* [NOD-1597] Implement HandleGetVirtualSelectedParentBlueScore and GetVirtualSelectedParentBlueScore.

* [NOD-1597] Implement TestVirtualSelectedParentBlueScore.

* [NOD-1597] Implement NotifyVirtualSelectedParentBlueScoreChanged.

* [NOD-1597] Expand TestVirtualSelectedParentBlueScore.

* [NOD-1597] Implement notifyVirtualSelectedParentBlueScoreChanged.

* [NOD-1597] Make go lint happy.

* [NOD-1593] Fix merge errors.

* [NOD-1593] Rename findSelectedParentChainChanges to calculateSelectedParentChainChanges.

* [NOD-1593] Expand TestCalculateSelectedParentChainChanges.

* [NOD-1597] Add logs to utxoindex.go.

* [NOD-1597] Add logs to utxoindex/store.go.

* [NOD-1597] Add logs to RPCManager.NotifyXXX functions.

* Implement notifySelectedParentChainChanged.

* Implement TestSelectedParentChain.

* Rename NotifyChainChanged to NotifyVirtualSelectedParentChainChanged.

* Rename GetChainFromBlock to GetVirtualSelectedParentChainFromBlock.

* Remove AcceptanceIndex from the config.

* Implement HandleGetVirtualSelectedParentChainFromBlock.

* Expand TestVirtualSelectedParentChain.

* Fix merge errors.

* Add a comment.

* Move a comment.
2020-12-21 14:43:32 +02:00
Elichai Turkel
45edacfbfa Replace Double-SHA256 with blake2b and implement domain separation (#1245)
* Replace default hasher (Double-SHA256) with domain-separated blake2b

* Replace all hashes with domain-separated blake2b

* Update the genesis blocks

* Replace OP_HASH256 with OP_BLAKE2B

* Fix the merkle tree by appending zeros instead of duplicating the hash when there is 1 branch left

* Update tests

* Add a payloadHash function

* Update gitignore to ignore binaries

* Fix a bug in the blake2b opcode
2020-12-21 12:51:45 +02:00
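As a hedged illustration of the domain separation this PR introduces (a separate blake2b instance per hash usage, so hashes from different contexts can never collide), here is a small Go sketch using golang.org/x/crypto/blake2b; the domain strings and the keying approach are assumptions, not necessarily what kaspad does:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/blake2b"
)

// hashWithDomain hashes data with blake2b-256 keyed by a domain tag, so the
// same bytes hashed under different domains yield unrelated digests.
func hashWithDomain(domain string, data []byte) ([]byte, error) {
	hasher, err := blake2b.New256([]byte(domain)) // keyed blake2b acts as the separator
	if err != nil {
		return nil, err
	}
	hasher.Write(data)
	return hasher.Sum(nil), nil
}

func main() {
	blockHash, _ := hashWithDomain("BlockHash", []byte("same payload"))
	transactionHash, _ := hashWithDomain("TransactionHash", []byte("same payload"))
	fmt.Printf("%x\n%x\n", blockHash, transactionHash) // two different digests
}
```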
Svarog
9f8f0fd747 Added safeguard against running TestDifficulty with a fresh genesis block (#1251) 2020-12-21 11:30:43 +02:00
stasatdaglabs
053bb351b5 [NOD-1597] Implement a UTXO index (#1221)
* [NOD-1579] Rename AcceptedTxIDs to AcceptedTransactionIDs.

* [NOD-1579] Add InsertBlockResult to ValidateAndInsertBlock results.

* [NOD-1593] Rename InsertBlockResult to BlockInsertionResult.

* [NOD-1593] Add SelectedParentChainChanges to AddBlockToVirtual's result.

* [NOD-1593] Implement findSelectedParentChainChanges.

* [NOD-1593] Implement TestFindSelectedParentChainChanges.

* [NOD-1593] Fix a string.

* [NOD-1593] Finish implementing TestFindSelectedParentChainChanges.

* [NOD-1593] Fix merge errors.

* [NOD-1597] Begin implementing UTXOIndex.

* [NOD-1597] Connect UTXOIndex to RPC.

* [NOD-1597] Connect Consensus to UTXOIndex.

* [NOD-1597] Add AcceptanceData to BlockInfo.

* [NOD-1597] Implement UTXOIndex.Update().

* [NOD-1597] Implement add(), remove(), and discard() in utxoIndexStore.

* [NOD-1597] Add error cases to add() and remove().

* [NOD-1597] Add special cases to add() and remove().

* [NOD-1597] Implement commit.

* [NOD-1597] Add a mutex around UTXOIndex.Update().

* [NOD-1597] Return changes to the UTXO from Update().

* [NOD-1597] Add NotifyUTXOsChangedRequestMessage and related structs.

* [NOD-1597] Implement HandleNotifyUTXOsChanged.

* [NOD-1597] Begin implementing TestUTXOIndex.

* [NOD-1597] Implement RegisterForUTXOsChangedNotifications.

* [NOD-1597] Fix bad transaction.ID usage.

* [NOD-1597] Implement convertUTXOChangesToUTXOsChangedNotification.

* [NOD-1597] Make UTXOsChangedNotificationMessage.Removed UTXOsByAddressesEntry instead of just RPCOutpoint so that the client can discern which address the UTXO was removed for.

* [NOD-1597] Collect outpoints in TestUTXOIndex.

* [NOD-1597] Rename RPC stuff.

* [NOD-1597] Add messages for GetUTXOsByAddresses.

* [NOD-1597] Implement HandleGetUTXOsByAddresses.

* [NOD-1597] Implement GetUTXOsByAddresses.

* [NOD-1597] Implement UTXOs().

* [NOD-1597] Implement getUTXOOutpointEntryPairs().

* [NOD-1597] Expand TestUTXOIndex.

* [NOD-1597] Convert SubmitTransaction to use RPCTransaction instead of MsgTx.

* [NOD-1597] Finish implementing TestUTXOIndex.

* [NOD-1597] Add messages for GetVirtualSelectedParentBlueScore.

* [NOD-1597] Implement HandleGetVirtualSelectedParentBlueScore and GetVirtualSelectedParentBlueScore.

* [NOD-1597] Implement TestVirtualSelectedParentBlueScore.

* [NOD-1597] Implement NotifyVirtualSelectedParentBlueScoreChanged.

* [NOD-1597] Expand TestVirtualSelectedParentBlueScore.

* [NOD-1597] Implement notifyVirtualSelectedParentBlueScoreChanged.

* [NOD-1597] Make go lint happy.

* [NOD-1593] Fix merge errors.

* [NOD-1593] Rename findSelectedParentChainChanges to calculateSelectedParentChainChanges.

* [NOD-1593] Expand TestCalculateSelectedParentChainChanges.

* [NOD-1597] Add logs to utxoindex.go.

* [NOD-1597] Add logs to utxoindex/store.go.

* [NOD-1597] Add logs to RPCManager.NotifyXXX functions.

* [NOD-1597] Ignore transactions that aren't accepted.

* [NOD-1597] Use GetBlockAcceptanceData instead of GetBlockInfo.

* [NOD-1597] Convert scriptPublicKey to string directly, instead of using hex.

* [NOD-1597] Add a comment.

* [NOD-1597] Guard against calling utxoindex methods when utxoindex is turned off.

* [NOD-1597] Add lock to UTXOs.

* [NOD-1597] Guard against calls to getUTXOOutpointEntryPairs when staging isn't empty.
2020-12-20 17:24:56 +02:00
stasatdaglabs
843edc4ba5 Limit the orphan collection (#1238)
* Limit the orphan collection.

* Fix grammar in a comment.

* Fix a bad log.
2020-12-20 11:20:51 +02:00
Ori Newman
bd5f4e8c6a Pruning fixes (#1243)
* Pruning related fixes

* Rename setBlockStatus->setBlockStatusAfterBlockValidation

* Rename StatusValid->StatusUTXOValid

* Add comment

* Fix typo

* Rename hasValidatedOnlyHeader->hasValidatedHeader

* Rename checkBlockBodiesExist->checkParentBlockBodiesExist

* Add comments and logs

* Adding logs

* Add logs and assert

* Add comment

* Fix typo

* Fix log
2020-12-20 09:38:34 +02:00
Elichai Turkel
6b1e691a57 Add GitHub actions in preparation for deprecating Jenkins (#1164)
* Add a test script

* add gh action for build and test

* added all the tests

* Change github workflow to use the new test script

* Change the docker file to use the new test script

* Add doc comment for ProtocolError.Unwrap()

* Use another github action to increase windows page size

* Run the action after any edit to the PR metadata/base

* Change go version from 1.15 to 1.14

* Rename test.sh to build_and_test.sh

Co-authored-by: Isabella Liu <isabellaliu77@gmail.com>
2020-12-17 15:48:55 +02:00
Ori Newman
bf67c6351e Add TestPruning (#1222)
* Add TestPruning

* Add missing argument to teardown

* Add missing return value to AddBlock
2020-12-16 14:49:55 +02:00
Mike Zak
99a14c5999 Update to version 0.8.4 2020-12-16 14:00:09 +02:00
Ori Newman
b510fc08a7 Change PoW error (#1234)
* Use proper error for invalid PoW

* Add comment
2020-12-16 13:33:10 +02:00
Ori Newman
dc3ae4d3ac Use pointer receivers when needed (#1237) 2020-12-16 12:50:17 +02:00
Svarog
1ebda36b17 Remove IsAwaitingUTXOSet from validateAndInsertBlock log, to prevent long operation (#1235) 2020-12-16 11:33:48 +02:00
Ori Newman
12379bedb6 Fix UTXO serialization errors (#1233) 2020-12-16 11:27:43 +02:00
stasatdaglabs
f90d7d796a [NOD-1593] Return SelectedParentChainChanged from ValidateAndInsertBlock (#1202)
* [NOD-1579] Rename AcceptedTxIDs to AcceptedTransactionIDs.

* [NOD-1579] Add InsertBlockResult to ValidateAndInsertBlock results.

* [NOD-1593] Rename InsertBlockResult to BlockInsertionResult.

* [NOD-1593] Add SelectedParentChainChanges to AddBlockToVirtual's result.

* [NOD-1593] Implement findSelectedParentChainChanges.

* [NOD-1593] Implement TestFindSelectedParentChainChanges.

* [NOD-1593] Fix a string.

* [NOD-1593] Finish implementing TestFindSelectedParentChainChanges.

* [NOD-1593] Fix merge errors.

* [NOD-1593] Fix merge errors.

* [NOD-1593] Rename findSelectedParentChainChanges to calculateSelectedParentChainChanges.

* [NOD-1593] Expand TestCalculateSelectedParentChainChanges.
2020-12-15 11:37:52 +02:00
Ori Newman
fddda46d4f Fix infinite loop on antiPastHashesBetween (#1226)
* Fix infinite loop on antiPastHashesBetween

* Get rid of highBlockBlueScore and lowBlockBlueScore
2020-12-15 10:37:35 +02:00
Svarog
77adb6c99f Make consensus.databaseContext a DBManager and allow keeping data dir in TestConsensus
* Make consensus.databaseContext a DBManager

* Allow keeping data dir in TestConsensus
2020-12-15 10:11:14 +02:00
Ori Newman
48e1a2c396 New headers first flow (#1211)
* Get rid of insertMode

* Rename AddBlockToVirtual->AddBlock

* When F is not in the future of P, enforce finality with P and not with F.

* Don't allow blocks with invalid parents or with missing block body

* Check finality violation before checking block status

* Implement CalculateIndependentPruningPoint

* Move checkBlockStatus to validateBlock

* Add ValidateBlock to block processor interface

* Adjust SetPruningPoint to the new IBD flow

* Add pruning store to CSM's constructor

* Flip wrong condition on AddHeaderTip

* Fix func (hts *headerSelectedTipStore) Has

* Fix block stage order

* Call to ValidateBodyInContext from validatePostProofOfWork

* Enable overrideDAGParams

* Update log

* Rename SetPruningPoint to ValidateAndInsertPruningPoint and move most of its logic inside block processor

* Rename hasValidatedHeader->hasValidatedOnlyHeader

* Fix typo

* Name return values for fetchMissingUTXOSet

* Add comment

* Return ErrMissingParents when block body is missing

* Add logs and comments

* Fix merge error

* Fix pruning point calculation to be by virtual selected parent

* Replace CalculateIndependentPruningPoint with CalculatePruningPointByHeaderSelectedTip

* Fix isAwaitingUTXOSet to check pruning point by headers

* Change isAwaitingUTXOSet indication

* Remove IsBlockInHeaderPruningPointFuture from BlockInfo

* Fix LowestChainBlockAboveOrEqualToBlueScore

* Add validateNewPruningPointTransactions

* Add validateNewPruningAgainstPastUTXO

* Rename set_pruning_utxo_set.go to update_pruning_utxo_set.go

* Check missing block body hashes by missing block instead of status

* Validate pruning point against past UTXO with the pruning point as block hash

* Remove virtualHeaderHash

* Fix comment

* Fix imports
2020-12-14 17:53:08 +02:00
oudeis
6926a7ab81 Update to version 0.8.3 2020-12-14 12:38:03 +00:00
Svarog
a9e0c33e5c Lower minimum difficulty for mainnet and testnet (#1220) 2020-12-14 12:39:54 +02:00
Svarog
2c1688909d Move TestNet to use GRPCSeeds by default (#1217) 2020-12-14 09:09:18 +02:00
Svarog
6714e084e9 Small fix in proof-of-work log (#1205) 2020-12-09 18:31:36 +02:00
Elichai Turkel
3354ac67c8 Parallelize all the tests in ForAllNets (#1199) 2020-12-09 17:43:36 +02:00
Svarog
0d8f7bba40 [NOD-1595] Implement all fields of GetBlockDAGInfo (#1200)
* [NOD-1595] Implement all fields of GetBlockDAGInfo

* [NOD-1595]

* [NOD-1595] Don't swallow errors in GetDifficultyRatio

* [NOD-1595] Change roundingPrecision in GetDifficultyRatio to 2 decimal places
2020-12-09 12:14:15 +02:00
Svarog
e04f76b800 [NOD-1594] Add HeaderCount to GetBlockCount rpc call (#1197) 2020-12-08 19:11:35 +02:00
Elichai Turkel
82fa8e6831 Abstract CheckProofOfWork as a public function and change PoW structure (#1198)
* Expose CheckProofOfWork from model/pow

* Update blockvalidator to call the new CheckProofOfWork

* Update genesis blocks

* Update tools to use the new CheckProofOfWork

* Update tests with new PoW
2020-12-08 17:17:30 +02:00
Svarog
b7ca3f4461 [NOD-1590] Implement optimized finalityPoint calculation mechanism (#1190)
* [NOD-1590] Moved all finality logic to FinalityManager

* [NOD-1590] Add finality store

* [NOD-1590] Implement optimized finalityPoint calculation mechanism

* [NOD-1590] Add comments

* [NOD-1590] Add finalityStore to consensus object, and TestConsensus

* [NOD-1590] Added logs to finalityPoint calculation
2020-12-08 10:26:39 +02:00
Svarog
37bf261da1 [NOD-1581] Enable fsync in database writes (#1168) 2020-12-07 14:18:33 +02:00
Ori Newman
d90e18ec51 Fix v0.8.2-dev build (#1189) 2020-12-07 12:39:40 +02:00
Elichai Turkel
9962527793 Replace BlueWindow implementation to accommodate a better DAA scheme (#1179)
* Change DifficultyAdjustmentWindowSize and TimestampDeviationTolerance from uint64 to int

* refactor block_heap for readability and usage

* Add a new SizedUpHeap

* Refactor BlueWindow with the new DAA

* Update TestBlueBlockWindow with the new DAA window

* Fix review requested changes
2020-12-06 18:42:49 +02:00
talelbaz
78550d3639 Adds "checkDelayedBlock" - checks if the block timeStamp is in the future. (#1183)
* [#1056] Adds "checkDelayedBlock" - checks if the block timeStamp is in the future.
Adds 2 new fields to blockValidator struct.
Adds new Rule Error "ErrDelayedBlock" .

* [#1056] Replace "ErrDelayedBlock" with "ErrBlockIsTooMuchInTheFuture".

* [#1056] Replace "checkDelayedBlock" with "checkBlockTimeStampInIsolation".

* [#1056] Cosmetic changes: timeStamp -> timestamp.

* Merge remote-tracking branch 'origin/v0.8.2-dev' into droppedDelayedBlock

# Conflicts:
#	domain/consensus/factory.go
#	domain/consensus/processes/blockvalidator/block_header_in_isolation.go
#	domain/consensus/processes/blockvalidator/blockvalidator.go

Co-authored-by: tal <tal@daglabs.com>
2020-12-06 18:41:07 +02:00
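The commit above only names the new rule; as a rough illustration (not the kaspad implementation - the function name, error value and tolerance below are assumptions), a timestamp-in-isolation check of this general shape rejects blocks whose timestamp is too far ahead of the node's clock:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errBlockIsTooMuchInTheFuture mirrors the rule-error name mentioned in the
// commit message; the error text and the tolerance value are assumptions.
var errBlockIsTooMuchInTheFuture = errors.New("block timestamp is too far in the future")

// checkBlockTimestampInIsolation rejects a block whose timestamp exceeds the
// current time by more than maxFutureDrift.
func checkBlockTimestampInIsolation(blockTime, now time.Time, maxFutureDrift time.Duration) error {
	if blockTime.After(now.Add(maxFutureDrift)) {
		return errBlockIsTooMuchInTheFuture
	}
	return nil
}

func main() {
	now := time.Now()
	err := checkBlockTimestampInIsolation(now.Add(20*time.Minute), now, 10*time.Minute)
	fmt.Println(err) // block timestamp is too far in the future
}
```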
stasatdaglabs
7f899b0d09 [NOD-1579] Improve the IBD mechanism (#1174)
* [NOD-1579] Remove selected tip hash messages.

* [NOD-1579] Start moving IBD stuff into blockrelay.

* [NOD-1579] Rename relaytransactions to transactionrelay.

* [NOD-1579] Move IBD files into blockrelay.

* [NOD-1579] Remove flow stuff from ibd.go.

* [NOD-1579] Bring back IsInIBD().

* [NOD-1579] Simplify block relay flow.

* [NOD-1579] Check orphan pool for missing parents to avoid unnecessary processing.

* [NOD-1579] Implement processOrphan.

* [NOD-1579] Implement addToOrphanSetAndRequestMissingParents.

* [NOD-1579] Fix TestIBD.

* [NOD-1579] Implement isBlockInOrphanResolutionRange.

* [NOD-1579] Implement limited block locators.

* [NOD-1579] Add some comments.

* [NOD-1579] Specifically check for StatusHeaderOnly in blockrelay.

* [NOD-1579] Simplify runIBDIfNotRunning.

* [NOD-1579] Don't run IBD if it is already running.

* [NOD-1579] Fix a comment.

* [NOD-1579] Rename mode to syncInfo.

* [NOD-1579] Simplify validateAndInsertBlock.

* [NOD-1579] Fix bad SyncStateSynced condition.

* [NOD-1579] Implement validateAgainstSyncStateAndResolveInsertMode.

* [NOD-1579] Use insertModeHeader.

* [NOD-1579] Add logs to TrySetIBDRunning and UnsetIBDRunning.

* [NOD-1579] Implement and use dequeueIncomingMessageAndSkipInvs.

* [NOD-1579] Fix a log.

* [NOD-1579] Fix a bug in createBlockLocator.

* [NOD-1579] Rename a variable.

* [NOD-1579] Fix a slew of bugs in missingBlockBodyHashes and selectedChildIterator.

* [NOD-1579] Fix bad chunk size in syncMissingBlockBodies.

* [NOD-1579] Remove maxOrphanBlueScoreDiff.

* [NOD-1579] Fix merge errors.

* [NOD-1579] Remove a debug log.

* [NOD-1579] Add logs.

* [NOD-1579] Make various go quality tools happy.

* [NOD-1579] Fix a typo in a variable name.

* [NOD-1579] Fix full blocks over header-only blocks not failing the missing-parents validation.

* [NOD-1579] Add an error log about a condition that should never happen.

* [NOD-1579] Check all antiPast hashes instead of just the lowHash's anticone to filter for header-only blocks.

* [NOD-1579] Remove the nil stuff from GetBlockLocator.

* [NOD-1579] Remove superfluous condition in handleRelayInvsFlow.start().

* [NOD-1579] Return a boolean from requestBlock instead of comparing to nil.

* [NOD-1579] Fix a bad log.Debugf.

* [NOD-1579] Remove a redundant check.

* [NOD-1579] Change an info log to a warning log.

* [NOD-1579] Move OnNewBlock out of relayBlock.

* [NOD-1579] Remove redundant exists check from runIBDIfNotRunning.

* [NOD-1579] Fix bad call to OnNewBlock.

* [NOD-1579] Remove an impossible check.

* [NOD-1579] Added a log.

* [NOD-1579] Rename insertModeBlockWithoutUpdatingVirtual to insertModeBlockBody.

* [NOD-1579] Add a check for duplicate headers.

* [NOD-1579] Added a comment.

* [NOD-1579] Tighten a stop condition.

* [NOD-1579] Simplify a log.

* [NOD-1579] Clarify a log.

* [NOD-1579] Move a log.
2020-12-06 16:23:56 +02:00
Svarog
4886425caf [NOD-1589] Re-enable DisableDifficultyAdjustment (#1182)
* [NOD-1589] Re-enable DisableDifficultyAdjustment

* [NOD-1589] Remove simnet from TestDifficulty

* [NOD-1589] Update comment
2020-12-06 16:02:48 +02:00
Elichai Turkel
c3902ed7a8 Replace blue score with blue work in ghostdag (#1172)
* Replace blueScore with blueWork in ghostDAG SelectedParent selection

* Add blueWork to protopuf ghostdag data

* Auto generate protobuf go code

* Serialize/Deserialize blueWork when converting to protobuf

* pass block header store to ghostdagmanager

* Convert tal's ghostdag2 implementation to blueWork

* Change finality test to check the blueWork instead of blueScore

* Update ghostdag_test to pass blockHeaderStore to ghostdag, and test all networks genesis headers

* Add sanity blueWork check to ghostdag_test
2020-12-06 14:45:21 +02:00
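For intuition only: switching selected-parent selection from blue score to blue work means comparing an accumulated-work value (a big integer) instead of a counter, with a hash comparison as a tie-breaker. The sketch below uses hypothetical names and is not kaspad's ghostdagmanager API:

```go
package main

import (
	"bytes"
	"fmt"
	"math/big"
)

// candidate is a hypothetical stand-in for a block's GHOSTDAG data.
type candidate struct {
	hash     [32]byte
	blueWork *big.Int
}

// selectedParent picks the candidate with the most accumulated blue work,
// falling back to the higher hash when blue work is equal.
func selectedParent(candidates []candidate) candidate {
	best := candidates[0]
	for _, c := range candidates[1:] {
		switch c.blueWork.Cmp(best.blueWork) {
		case 1:
			best = c
		case 0:
			if bytes.Compare(c.hash[:], best.hash[:]) > 0 {
				best = c
			}
		}
	}
	return best
}

func main() {
	a := candidate{hash: [32]byte{1}, blueWork: big.NewInt(100)}
	b := candidate{hash: [32]byte{2}, blueWork: big.NewInt(100)}
	fmt.Printf("selected parent: %x\n", selectedParent([]candidate{a, b}).hash[:1])
}
```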
Svarog
33eaf9edac [NOD-1548] Re-add test difficulty + Make GHOSTDAGData immutable + don't clone in store (#1178)
* [NOD-1548] Readd TestDifficulty

* [NOD-1548] Make GHOSTDAGData immutable + don't clone in store

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2020-12-06 12:35:14 +02:00
Svarog
f97b8f7580 [NOD-1587] Add DAGParams to TestConsensus (#1181) 2020-12-06 10:57:57 +02:00
Ori Newman
05979de705 [NOD-1586] Return routes.Disconnect (#1180) 2020-12-06 10:54:34 +02:00
Ori Newman
32a04d1811 Allow to configure consensus (closes #1067)
* Allow to configure consensus with a JSON file

* Define everywhere maxBlockParents as KType

* Move consensus default to consensus_defaults.go
2020-12-03 18:30:01 +02:00
Svarog
a585f32763 [NOD-1551] Make UTXODiff immutable + skip cloning it in datastore (#1167)
* [NOD-1551] Make UTXO-Diff implemented fully in utils/utxo

* [NOD-1551] Fixes everywhere except database

* [NOD-1551] Fix database

* [NOD-1551] Add comments

* [NOD-1551] Partial commit

* [NOD-1551] Complete making UTXOEntry immutable + don't clone it in UTXOCollectionClone

* [NOD-1551] Rename ToUnmutable -> ToImmutable

* [NOD-1551] Track immutable references generated from mutable UTXODiff, and invalidate them if the mutable one changed

* [NOD-1551] Clone scriptPubKey in NewUTXOEntry

* [NOD-1551] Remove redundant code

* [NOD-1551] Remove redundant call for .CloneMutable and then .ToImmutable

* [NOD-1551] Make utxoEntry pointer-receiver + clone ScriptPubKey in getter
2020-12-03 13:24:24 +02:00
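The "track immutable references generated from mutable UTXODiff, and invalidate them if the mutable one changed" bullet describes a pattern rather than a concrete API; a minimal sketch of that idea, with entirely hypothetical names, could look like this:

```go
package main

import "fmt"

// immutableView is a hypothetical read-only snapshot handed out by a mutable
// diff. It is invalidated as soon as the mutable diff changes again.
type immutableView struct {
	valid bool
	data  map[string]uint64
}

type mutableDiff struct {
	data  map[string]uint64
	views []*immutableView
}

// ToImmutable hands out a view backed by the current data and remembers it so
// it can be invalidated later.
func (m *mutableDiff) ToImmutable() *immutableView {
	v := &immutableView{valid: true, data: m.data}
	m.views = append(m.views, v)
	return v
}

// Set mutates the diff and invalidates every previously handed-out view.
func (m *mutableDiff) Set(key string, value uint64) {
	for _, v := range m.views {
		v.valid = false
	}
	m.views = nil
	m.data[key] = value
}

func main() {
	m := &mutableDiff{data: map[string]uint64{}}
	view := m.ToImmutable()
	m.Set("outpoint", 42)
	fmt.Println(view.valid) // false: using a stale view would be a bug
}
```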
Mike Zak
9866abb75a [NOD-1583] Split consensusserialization to consensushashing and serialization 2020-12-02 13:18:50 +02:00
Mike Zak
ab3c81c552 [NOD-1583] Move all TestXXX interfaces to testapi 2020-12-02 13:18:50 +02:00
Ori Newman
9756d64f28 [NOD-1582] Fix orphan resolution (#1169)
* [NOD-1582] Fix multiple request per missing ancestor

* [NOD-1582] Don't remove peer on routerpkg.ErrRouteClosed from RPC

* [NOD-1582] Use LogAndMeasureExecutionTime where possible
2020-12-02 13:05:33 +02:00
Elichai Turkel
21fc2d4219 [NOD-1433] Write specified unit tests for GHOSTDAG (#1010)
commit 3830df34b2
Merge: 46dc2e977 17e7819c2
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Tue Dec 1 16:29:51 2020 +0200

    Merge pull request #1170 from kaspanet/tal-ghost-fix

    Fix GhostDAG tests and jsons

commit 17e7819c27
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Tue Dec 1 16:24:01 2020 +0200

    Remove non-json ghostdag tests

commit 4bebb1d96a
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Tue Dec 1 13:26:06 2020 +0200

    Add a comment above tal's ghostdag2 impl

commit faf21a042e
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Tue Dec 1 13:20:08 2020 +0200

    fix the interfaces after merge

commit a8b7a25b2e
Merge: af91b69b2 f1c6df48c
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Tue Dec 1 13:19:08 2020 +0200

    Merge branch 'v0.8.2-dev' into tal-ghost-fix

commit af91b69b20
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Tue Dec 1 13:18:41 2020 +0200

    Fix the non-json tests

commit c56f34b73b
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Tue Dec 1 13:18:17 2020 +0200

    Fix the jsons

commit 46dc2e9773
Author: tal <tal@daglabs.com>
Date:   Mon Nov 30 17:15:20 2020 +0200

    [NOD - 1143] Cosmetic changes.

commit b28e5ce816
Author: tal <tal@daglabs.com>
Date:   Mon Nov 30 15:48:08 2020 +0200

    [#1126] Place selectedParent to be first on blueMergeSet.

commit 4b56ed2da9
Author: tal <tal@daglabs.com>
Date:   Mon Nov 30 14:51:50 2020 +0200

    [#1126] Change placement between blockRight and blockLeft.

commit b09f31be93
Merge: e17a98b7b 0db39833f
Author: talelbaz <63008512+talelbaz@users.noreply.github.com>
Date:   Mon Nov 30 14:30:22 2020 +0200

    Merge pull request #1162 from kaspanet/new-jsons

    Update the dag json tests

commit e17a98b7ba
Author: tal <tal@daglabs.com>
Date:   Mon Nov 30 14:08:25 2020 +0200

    [#1126] Use WALK function in tests & cosmetic changes.

commit 0db39833f3
Author: Elichai Turkel <elichai.turkel@gmail.com>
Date:   Mon Nov 30 12:20:13 2020 +0200

    Update the dag json tests

commit 5a3da43dd4
Author: tal <tal@daglabs.com>
Date:   Sun Nov 29 12:03:37 2020 +0200

    [NOD-1433] Remove unnecessary code.

commit a6cde558ac
Author: tal <tal@daglabs.com>
Date:   Mon Nov 23 17:05:56 2020 +0200

    [NOD-1433] Change "Stage" sig function according to the new interface - added error as a return type.

commit 07859b6218
Author: tal <tal@daglabs.com>
Date:   Mon Nov 23 17:03:26 2020 +0200

    [NOD-1433] Print formats changed & cosmetic code changes.

commit e1a851664e
Author: tal <tal@daglabs.com>
Date:   Sun Nov 15 17:34:59 2020 +0200

    [NOD-1433] Traverse the tests dir and run each test.

commit 4c7474edc1
Author: tal <tal@daglabs.com>
Date:   Mon Nov 9 12:44:53 2020 +0200

    [NOD-1433] Traverse the tests dir and run each test.

commit 89dd1e61d3
Author: tal <tal@daglabs.com>
Date:   Mon Nov 9 11:48:36 2020 +0200

    [NOD-1433] Change implementation to adjust genesis's score to 0.
    Also, keep changing the test file to fit the new implementation.

commit 6acdcd17de
Author: tal <tal@daglabs.com>
Date:   Sun Nov 8 17:07:22 2020 +0200

    [NOD-1433] New test was added(Test 6).

commit bf23889317
Author: tal <tal@daglabs.com>
Date:   Sun Nov 8 14:59:36 2020 +0200

    Fix golint errors

commit 79ff990b5f
Author: tal <tal@daglabs.com>
Date:   Sun Nov 8 14:47:12 2020 +0200

    added "Optimize imports".

commit 73d0128f63
Author: tal <tal@daglabs.com>
Date:   Sun Nov 8 13:03:22 2020 +0200

    Added an implementation factory.

commit 61ca8b2e7e
Author: tal <tal@daglabs.com>
Date:   Thu Nov 5 16:03:18 2020 +0200

    1. impl - choose the highest hash.
    2. test - changed the test accordingly.

commit ef0943ca29
Author: tal <tal@daglabs.com>
Date:   Thu Oct 29 18:00:45 2020 +0200

    Update Tests

commit 6e5936abff
Author: tal <tal@daglabs.com>
Date:   Tue Oct 27 10:22:45 2020 +0200

    Change to the new API

commit 5a70dc48b3
Author: tal <tal@daglabs.com>
Date:   Mon Oct 26 18:35:31 2020 +0200

    1. Added tests for ori

commit 2b9f78353f
Author: tal <tal@daglabs.com>
Date:   Mon Oct 26 13:04:37 2020 +0200

    1. Added structure "isolatedTest" {k, test}
    2. Added for loop on the tests.
    3. New test - Test 5.

commit c026d7b7a2
Author: tal <tal@daglabs.com>
Date:   Thu Oct 22 17:35:56 2020 +0300

    Fix bugs in the GHOSTDAG: counters, contains and isAncestorOf.
    Added more tests.

commit 74493b27d2
Author: tal <tal@daglabs.com>
Date:   Thu Oct 22 16:49:27 2020 +0300

    added compare between Hashes

commit f689253463
Author: tal <tal@daglabs.com>
Date:   Thu Oct 22 11:49:01 2020 +0300

    added compare between Hashes

commit 66be07f616
Author: tal <tal@daglabs.com>
Date:   Mon Oct 19 18:42:40 2020 +0300

    First test - pass.

commit 327f34f2dc
Author: tal <tal@daglabs.com>
Date:   Mon Oct 19 15:20:27 2020 +0300

    Add alternative implementation for ghostdag.
    change all functions' signatures (add error type)

commit fd2ea3d84a
Author: tal <tal@daglabs.com>
Date:   Mon Oct 19 11:57:05 2020 +0300

    add alternative implementation for ghostdag
2020-12-01 16:54:13 +02:00
Elichai Turkel
f1c6df48c9 [#1028] Replace oldschnorr with the BIP340 schnorr variant (#1165)
* Update go-secp256k1 to v0.0.3

* Update the txscript engine to support only 32 bytes pubkeys

* Update the txscript engine tests

* Update txscript/sign.go to use the new Schnorr KeyPair API

* Update txscript sign_test to use the new schnorr

* Update sigcache tests to use new schnorr pubkey

* Update integration tests to use the new txscript and new schnorr pubkey
2020-12-01 08:48:23 +02:00
Svarog
80c445c78b [NOD-1551] Optimize binary writes + remove redundant VarInt code (#1163)
* [NOD-1551] Optimize binary writes + remove redundant VarInt code

* [NOD-1551] Remove varInt tests

* [NOD-1551] Fix TestBech32 for Go1.15
2020-11-30 13:45:06 +02:00
Svarog
3b6eb73e53 [NOD-1551] Remove lock in SigCache (#1161) 2020-11-30 11:34:19 +02:00
talelbaz
f407c44a8d [Issue - #1126] - Checking pruning point violation - pruning point in the past. (#1160)
* [NOD-1126]
1. Change function name in BlockValidator interface from: "ValidateProofOfWorkAndDifficulty" to "ValidatePruningPointViolationAndProofOfWorkAndDifficulty".
2. Add the pruningManager to the blockValidator struct (also added to the "New" function accordingly).
3. Added new function "checkPruningPointViolation" of blockValidator type.
4. Add new internal check - "checkPruningPointViolation" - to the function "ValidateProofOfWorkAndDifficulty" (the third check).
5. Add new error rule - "ErrPruningPointViolation".

* [Issue-1126]
1. Remove the function "PruningPoint" from PruningManager interface.
2. Changes in blockValidator struct - remove pruningManager and add pruningStore.
3. Read "pruningPoint" from pruningStore instead of pruningManager (because of note 1 above) in the following functions:
   * "checkPruningPointViolation" of type blockValidator.
   * "FindNextPruningPoint" of type pruningManager.

* [Issue-1126]
1. Add missing error handling.

* [Issue-1126] Changes in function "checkPruningPointViolation": If header = genesis, stop checking and return nil.

* [Issue-1126] In function "checkPruningPointViolation" - change from a for loop to the "IsAncestorOfAny" function.

* [#1126] "FindNextPruningPoint" - save the pruning point in case the point is the genesis and change code internal order.

* [#1126] "FindNextPruningPoint" - cosmetics change.

* [#1126] "FindNextPruningPoint" - remove "return nil" when there is no pruning point on the if expression.

Co-authored-by: tal <tal@daglabs.com>
2020-11-30 09:57:15 +02:00
stasatdaglabs
a1af992d15 [NOD-1578] Fix areHeaderTipsSyncedMaxTimeDifference (#1157)
* [NOD-1578] Fix areHeaderTipsSyncedMaxTimeDifference.

* [NOD-1578] Return errors that occur in the new logClosure.
2020-11-29 10:44:50 +02:00
Svarog
048caebda3 [NOD-1551] Add SigCache to TransactionValidator + Option to manipulate it in TestConsensus (#1159)
* [NOD-1551] Add SigCache

* [NOD-1551] Add option to edit SigCache in TestConsensus

* [NOD-1551] Fix comments and make SetSigCache pointer-receiver
2020-11-29 10:18:00 +02:00
oudeis
baa4311a34 Update to version 0.8.2 2020-11-29 05:12:30 +00:00
alexandratran
f6dfce8180 Update README.md 2020-11-26 21:46:57 -08:00
Yuval Shaul
a361d62945 Merge remote-tracking branch 'origin/v0.6.2-dev' 2020-08-16 13:51:25 +03:00
Yuval Shaul
aac173ed72 Merge remote-tracking branch 'origin/v0.6.1-dev' 2020-08-12 10:48:51 +03:00
stasatdaglabs
5f3fb0bf9f [NOD-1238] Fix acceptance index never being initialized. (#859) 2020-08-11 12:04:54 +03:00
Mike Zak
61f383a713 Merge remote-tracking branch 'origin/v0.6.0-dev' 2020-08-09 09:09:27 +03:00
Mike Zak
c62bdb2fa1 Merge remote-tracking branch 'origin/v0.5.0-dev' 2020-07-01 15:05:02 +03:00
Svarog
c88869778d [NOD-869] Add a print after os.Exit(1) to see if it is ever called (#701) 2020-04-22 11:37:30 +03:00
Ori Newman
3fd647b291 [NOD-858] Don't switch sync peer if the syncing process hasn't yet started with the current sync peer (#700)
* [NOD-858] Don't switch sync peer if the syncing process hasn't yet started with the current sync peer

* [NOD-858] SetShouldSendBlockLocator(false) on OnBlockLocator

* [NOD-858] Rename shouldSendBlockLocator->wasBlockLocatorRequested

* [NOD-858] Move panic to shouldReplaceSyncPeer
2020-04-13 15:49:46 +03:00
Mike Zak
2f255952b7 Updated to version v0.3.1 2020-04-13 15:10:27 +03:00
560 changed files with 67622 additions and 20205 deletions

9
.codecov.yml Normal file
View File

@@ -0,0 +1,9 @@
coverage:
  status:
    patch: off
    project:
      default:
        informational: true
ignore:
  - "**/*.pb.go" # Ignore all auto generated protobuf structures.

196
.github/workflows/SetPageFileSize.ps1 vendored Normal file
View File

@@ -0,0 +1,196 @@
<#
# MIT License (MIT) Copyright (c) 2020 Maxim Lobanov and contributors
# Source: https://github.com/al-cheb/configure-pagefile-action/blob/master/scripts/SetPageFileSize.ps1
.SYNOPSIS
Configure Pagefile on Windows machine
.NOTES
Author: Aleksandr Chebotov
.EXAMPLE
SetPageFileSize.ps1 -MinimumSize 4GB -MaximumSize 8GB -DiskRoot "D:"
#>
param(
[System.UInt64] $MinimumSize = 8gb ,
[System.UInt64] $MaximumSize = 8gb ,
[System.String] $DiskRoot = "D:"
)
# https://referencesource.microsoft.com/#System.IdentityModel/System/IdentityModel/NativeMethods.cs,619688d876febbe1
# https://www.geoffchappell.com/studies/windows/km/ntoskrnl/api/mm/modwrite/create.htm
# https://referencesource.microsoft.com/#mscorlib/microsoft/win32/safehandles/safefilehandle.cs,9b08210f3be75520
# https://referencesource.microsoft.com/#mscorlib/system/security/principal/tokenaccesslevels.cs,6eda91f498a38586
# https://www.autoitscript.com/forum/topic/117993-api-ntcreatepagingfile/
$source = @'
using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Security.Principal;
using System.Text;
using Microsoft.Win32;
using Microsoft.Win32.SafeHandles;
namespace Util
{
class NativeMethods
{
[StructLayout(LayoutKind.Sequential)]
internal struct LUID
{
internal uint LowPart;
internal uint HighPart;
}
[StructLayout(LayoutKind.Sequential)]
internal struct LUID_AND_ATTRIBUTES
{
internal LUID Luid;
internal uint Attributes;
}
[StructLayout(LayoutKind.Sequential)]
internal struct TOKEN_PRIVILEGE
{
internal uint PrivilegeCount;
internal LUID_AND_ATTRIBUTES Privilege;
internal static readonly uint Size = (uint)Marshal.SizeOf(typeof(TOKEN_PRIVILEGE));
}
[StructLayoutAttribute(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
internal struct UNICODE_STRING
{
internal UInt16 length;
internal UInt16 maximumLength;
internal string buffer;
}
[DllImport("kernel32.dll", SetLastError=true)]
internal static extern IntPtr LocalFree(IntPtr handle);
[DllImport("advapi32.dll", ExactSpelling = true, CharSet = CharSet.Unicode, SetLastError = true, PreserveSig = false)]
internal static extern bool LookupPrivilegeValueW(
[In] string lpSystemName,
[In] string lpName,
[Out] out LUID luid
);
[DllImport("advapi32.dll", SetLastError = true, PreserveSig = false)]
internal static extern bool AdjustTokenPrivileges(
[In] SafeCloseHandle tokenHandle,
[In] bool disableAllPrivileges,
[In] ref TOKEN_PRIVILEGE newState,
[In] uint bufferLength,
[Out] out TOKEN_PRIVILEGE previousState,
[Out] out uint returnLength
);
[DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true, PreserveSig = false)]
internal static extern bool OpenProcessToken(
[In] IntPtr processToken,
[In] int desiredAccess,
[Out] out SafeCloseHandle tokenHandle
);
[DllImport("ntdll.dll", CharSet = CharSet.Unicode, SetLastError = true, CallingConvention = CallingConvention.StdCall)]
internal static extern Int32 NtCreatePagingFile(
[In] ref UNICODE_STRING pageFileName,
[In] ref Int64 minimumSize,
[In] ref Int64 maximumSize,
[In] UInt32 flags
);
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
internal static extern uint QueryDosDeviceW(
string lpDeviceName,
StringBuilder lpTargetPath,
int ucchMax
);
}
public sealed class SafeCloseHandle: SafeHandleZeroOrMinusOneIsInvalid
{
[DllImport("kernel32.dll", ExactSpelling = true, SetLastError = true)]
internal extern static bool CloseHandle(IntPtr handle);
private SafeCloseHandle() : base(true)
{
}
public SafeCloseHandle(IntPtr preexistingHandle, bool ownsHandle) : base(ownsHandle)
{
SetHandle(preexistingHandle);
}
override protected bool ReleaseHandle()
{
return CloseHandle(handle);
}
}
public class PageFile
{
public static void SetPageFileSize(long minimumValue, long maximumValue, string lpDeviceName)
{
SetPageFilePrivilege();
StringBuilder lpTargetPath = new StringBuilder(260);
UInt32 resultQueryDosDevice = NativeMethods.QueryDosDeviceW(lpDeviceName, lpTargetPath, lpTargetPath.Capacity);
if (resultQueryDosDevice == 0)
{
throw new Win32Exception(Marshal.GetLastWin32Error());
}
string pageFilePath = lpTargetPath.ToString() + "\\pagefile.sys";
NativeMethods.UNICODE_STRING pageFileName = new NativeMethods.UNICODE_STRING
{
length = (ushort)(pageFilePath.Length * 2),
maximumLength = (ushort)(2 * (pageFilePath.Length + 1)),
buffer = pageFilePath
};
Int32 resultNtCreatePagingFile = NativeMethods.NtCreatePagingFile(ref pageFileName, ref minimumValue, ref maximumValue, 0);
if (resultNtCreatePagingFile != 0)
{
throw new Win32Exception(Marshal.GetLastWin32Error());
}
Console.WriteLine("PageFile: {0} / {1} bytes for {2}", minimumValue, maximumValue, pageFilePath);
}
static void SetPageFilePrivilege()
{
const int SE_PRIVILEGE_ENABLED = 0x00000002;
const int AdjustPrivileges = 0x00000020;
const int Query = 0x00000008;
NativeMethods.LUID luid;
NativeMethods.LookupPrivilegeValueW(null, "SeCreatePagefilePrivilege", out luid);
SafeCloseHandle hToken;
NativeMethods.OpenProcessToken(
Process.GetCurrentProcess().Handle,
AdjustPrivileges | Query,
out hToken
);
NativeMethods.TOKEN_PRIVILEGE previousState;
NativeMethods.TOKEN_PRIVILEGE newState;
uint previousSize = 0;
newState.PrivilegeCount = 1;
newState.Privilege.Luid = luid;
newState.Privilege.Attributes = SE_PRIVILEGE_ENABLED;
NativeMethods.AdjustTokenPrivileges(hToken, false, ref newState, NativeMethods.TOKEN_PRIVILEGE.Size, out previousState, out previousSize);
}
}
}
'@
Add-Type -TypeDefinition $source
# Set SetPageFileSize
[Util.PageFile]::SetPageFileSize($minimumSize, $maximumSize, $diskRoot)

70
.github/workflows/go.yml vendored Normal file
View File

@@ -0,0 +1,70 @@
name: Go

on:
  push:
  pull_request:
    # edited - "title, body, or the base branch of the PR is modified"
    # synchronize - "commit(s) pushed to the pull request"
    types: [opened, synchronize, edited, reopened]

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ ubuntu-16.04, macos-10.15 ]
    name: Testing on ${{ matrix.os }}
    steps:
      - name: Fix windows CRLF
        run: git config --global core.autocrlf false

      - name: Check out code into the Go module directory
        uses: actions/checkout@v2

      # We need to increase the page size because the tests run out of memory on github CI windows.
      # Use the powershell script from this github action: https://github.com/al-cheb/configure-pagefile-action/blob/master/scripts/SetPageFileSize.ps1
      # MIT License (MIT) Copyright (c) 2020 Maxim Lobanov and contributors
      - name: Increase page size on windows
        if: runner.os == 'Windows'
        shell: powershell
        run: powershell -command .\.github\workflows\SetPageFileSize.ps1

      - name: Set up Go 1.x
        uses: actions/setup-go@v2
        with:
          go-version: 1.14

      # Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
      - name: Go Cache
        uses: actions/cache@v2
        with:
          path: ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-

      - name: Test
        shell: bash
        run: ./build_and_test.sh

  coverage:
    runs-on: ubuntu-20.04
    name: Produce code coverage
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v2

      - name: Set up Go 1.x
        uses: actions/setup-go@v2
        with:
          go-version: 1.14

      - name: Create coverage file
        # Because of https://github.com/golang/go/issues/27333 this seems to "fail" even though nothing is wrong, so ignore the failure
        run: go test -json -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./... || true

      - name: Upload coverage file
        run: bash <(curl -s https://codecov.io/bash)

18
.gitignore vendored
View File

@@ -13,6 +13,21 @@ kaspad.db
*.o
*.a
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Real binaries, build with `go build .`
kaspad
cmd/gencerts/gencerts
cmd/kaspactl/kaspactl
cmd/kasminer/kaspaminer
*.exe
*.exe~
# Output of the go coverage tool
*.out
# Folders
_obj
@@ -31,8 +46,7 @@ _cgo_export.*
_testmain.go
*.exe
# IDE
.idea
.vscode

19
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,19 @@
# Contributing to Kaspad
Any contribution to Kaspad is very welcome.
## Getting started
If you want to start contributing to Kaspad and don't know where to start, you can pick an issue from
the [list](https://github.com/kaspanet/kaspad/issues).
If you want to make a big change, it's better to discuss it first by opening an issue or talking about it in
[Discord](https://discord.gg/WmGhhzk) to avoid duplicate work.
## Pull Request process
Any pull request should be opened against the development branch of the target version. The development branch format is
as follows: `vx.y.z-dev`, for example: `v0.8.5-dev`.
All pull requests should pass the checks written in `build_and_test.sh`, so it's recommended to run this script before
submitting your PR.

View File

@@ -9,12 +9,16 @@ Warning: This is pre-alpha software. There's no guarantee anything works.
Kaspad is the reference full node Kaspa implementation written in Go (golang).
This project is currently under active development and is in a pre-Alpha state.
Some things still don't work and APIs are far from finalized. The code is provided for reference only.
## What is kaspa
Kaspa is an attempt at a proof-of-work cryptocurrency with instant confirmations and sub-second block times. It is based on [the PHANTOM protocol](https://eprint.iacr.org/2018/104.pdf), a generalization of Nakamoto consensus.
## Requirements
Latest version of [Go](http://golang.org) (currently 1.13).
Go 1.14 or later.
## Installation
@@ -27,23 +31,17 @@ Latest version of [Go](http://golang.org) (currently 1.13).
```bash
$ go version
$ go env GOROOT GOPATH
```
NOTE: The `GOROOT` and `GOPATH` above must not be the same path. It is
recommended that `GOPATH` is set to a directory in your home directory such as
`~/dev/go` to avoid write permission issues. It is also recommended to add
`$GOPATH/bin` to your `PATH` at this point.
- Run the following commands to obtain and install kaspad including all dependencies:
```bash
$ git clone https://github.com/kaspanet/kaspad $GOPATH/src/github.com/kaspanet/kaspad
$ cd $GOPATH/src/github.com/kaspanet/kaspad
$ git clone https://github.com/kaspanet/kaspad
$ cd kaspad
$ go install . ./cmd/...
```
- Kaspad (and utilities) should now be installed in `$GOPATH/bin`. If you did
- Kaspad (and utilities) should now be installed in `$(go env GOPATH)/bin`. If you did
not already add the bin directory to your system path during Go installation,
you are encouraged to do so now.
@@ -53,10 +51,8 @@ $ go install . ./cmd/...
Kaspad has several configuration options available to tweak how it runs, but all
of the basic operations work with zero configuration.
#### Linux/BSD/POSIX/Source
```bash
$ ./kaspad
$ kaspad
```
## Discord
@@ -69,9 +65,8 @@ is used for this project.
## Documentation
The documentation is a work-in-progress. It is located in the [docs](https://github.com/kaspanet/kaspad/tree/master/docs) folder.
The documentation is a work-in-progress.
## License
Kaspad is licensed under the copyfree [ISC License](https://choosealicense.com/licenses/isc/).

View File

@@ -5,34 +5,12 @@
package appmessage
import (
"encoding/binary"
"fmt"
"io"
"math"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
"github.com/kaspanet/kaspad/util/binaryserializer"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/pkg/errors"
)
// MaxVarIntPayload is the maximum payload size for a variable length integer.
const MaxVarIntPayload = 9
// MaxInvPerMsg is the maximum number of inventory vectors that can be in any type of kaspa inv message.
const MaxInvPerMsg = 1 << 17
var (
// littleEndian is a convenience variable since binary.LittleEndian is
// quite long.
littleEndian = binary.LittleEndian
// bigEndian is a convenience variable since binary.BigEndian is quite
// long.
bigEndian = binary.BigEndian
)
// errNonCanonicalVarInt is the common format string used for non-canonically
// encoded variable length integer errors.
var errNonCanonicalVarInt = "non-canonical varint %x - discriminant %x must " +
@@ -40,473 +18,3 @@ var errNonCanonicalVarInt = "non-canonical varint %x - discriminant %x must " +
// errNoEncodingForType signifies that there's no encoding for the given type.
var errNoEncodingForType = errors.New("there's no encoding for this type")
// int64Time represents a unix timestamp with milliseconds precision encoded with
// an int64. It is used as a way to signal the readElement function how to decode
// a timestamp into a Go mstime.Time since it is otherwise ambiguous.
type int64Time mstime.Time
// ReadElement reads the next sequence of bytes from r using little endian
// depending on the concrete type of element pointed to.
func ReadElement(r io.Reader, element interface{}) error {
// Attempt to read the element based on the concrete type via fast
// type assertions first.
switch e := element.(type) {
case *int32:
rv, err := binaryserializer.Uint32(r, littleEndian)
if err != nil {
return err
}
*e = int32(rv)
return nil
case *uint32:
rv, err := binaryserializer.Uint32(r, littleEndian)
if err != nil {
return err
}
*e = rv
return nil
case *int64:
rv, err := binaryserializer.Uint64(r, littleEndian)
if err != nil {
return err
}
*e = int64(rv)
return nil
case *uint64:
rv, err := binaryserializer.Uint64(r, littleEndian)
if err != nil {
return err
}
*e = rv
return nil
case *uint8:
rv, err := binaryserializer.Uint8(r)
if err != nil {
return err
}
*e = rv
return nil
case *bool:
rv, err := binaryserializer.Uint8(r)
if err != nil {
return err
}
if rv == 0x00 {
*e = false
} else {
*e = true
}
return nil
// Unix timestamp encoded as an int64.
case *int64Time:
rv, err := binaryserializer.Uint64(r, binary.LittleEndian)
if err != nil {
return err
}
*e = int64Time(mstime.UnixMilliseconds(int64(rv)))
return nil
// Message header checksum.
case *[4]byte:
_, err := io.ReadFull(r, e[:])
if err != nil {
return err
}
return nil
// Message header command.
case *MessageCommand:
rv, err := binaryserializer.Uint32(r, littleEndian)
if err != nil {
return err
}
*e = MessageCommand(rv)
return nil
// IP address.
case *[16]byte:
_, err := io.ReadFull(r, e[:])
if err != nil {
return err
}
return nil
case *externalapi.DomainHash:
_, err := io.ReadFull(r, e[:])
if err != nil {
return err
}
return nil
case *id.ID:
return e.Deserialize(r)
case *externalapi.DomainSubnetworkID:
_, err := io.ReadFull(r, e[:])
if err != nil {
return err
}
return nil
case *ServiceFlag:
rv, err := binaryserializer.Uint64(r, littleEndian)
if err != nil {
return err
}
*e = ServiceFlag(rv)
return nil
case *KaspaNet:
rv, err := binaryserializer.Uint32(r, littleEndian)
if err != nil {
return err
}
*e = KaspaNet(rv)
return nil
}
return errors.Wrapf(errNoEncodingForType, "couldn't find a way to read type %T", element)
}
// readElements reads multiple items from r. It is equivalent to multiple
// calls to readElement.
func readElements(r io.Reader, elements ...interface{}) error {
for _, element := range elements {
err := ReadElement(r, element)
if err != nil {
return err
}
}
return nil
}
// WriteElement writes the little endian representation of element to w.
func WriteElement(w io.Writer, element interface{}) error {
// Attempt to write the element based on the concrete type via fast
// type assertions first.
switch e := element.(type) {
case int32:
err := binaryserializer.PutUint32(w, littleEndian, uint32(e))
if err != nil {
return err
}
return nil
case uint32:
err := binaryserializer.PutUint32(w, littleEndian, e)
if err != nil {
return err
}
return nil
case int64:
err := binaryserializer.PutUint64(w, littleEndian, uint64(e))
if err != nil {
return err
}
return nil
case uint64:
err := binaryserializer.PutUint64(w, littleEndian, e)
if err != nil {
return err
}
return nil
case uint8:
err := binaryserializer.PutUint8(w, e)
if err != nil {
return err
}
return nil
case bool:
var err error
if e {
err = binaryserializer.PutUint8(w, 0x01)
} else {
err = binaryserializer.PutUint8(w, 0x00)
}
if err != nil {
return err
}
return nil
// Message header checksum.
case [4]byte:
_, err := w.Write(e[:])
if err != nil {
return err
}
return nil
// Message header command.
case MessageCommand:
err := binaryserializer.PutUint32(w, littleEndian, uint32(e))
if err != nil {
return err
}
return nil
// IP address.
case [16]byte:
_, err := w.Write(e[:])
if err != nil {
return err
}
return nil
case *externalapi.DomainHash:
_, err := w.Write(e[:])
if err != nil {
return err
}
return nil
case *id.ID:
return e.Serialize(w)
case *externalapi.DomainSubnetworkID:
_, err := w.Write(e[:])
if err != nil {
return err
}
return nil
case ServiceFlag:
err := binaryserializer.PutUint64(w, littleEndian, uint64(e))
if err != nil {
return err
}
return nil
case KaspaNet:
err := binaryserializer.PutUint32(w, littleEndian, uint32(e))
if err != nil {
return err
}
return nil
}
return errors.Wrapf(errNoEncodingForType, "couldn't find a way to write type %T", element)
}
// writeElements writes multiple items to w. It is equivalent to multiple
// calls to writeElement.
func writeElements(w io.Writer, elements ...interface{}) error {
for _, element := range elements {
err := WriteElement(w, element)
if err != nil {
return err
}
}
return nil
}
// ReadVarInt reads a variable length integer from r and returns it as a uint64.
func ReadVarInt(r io.Reader) (uint64, error) {
discriminant, err := binaryserializer.Uint8(r)
if err != nil {
return 0, err
}
var rv uint64
switch discriminant {
case 0xff:
sv, err := binaryserializer.Uint64(r, littleEndian)
if err != nil {
return 0, err
}
rv = sv
// The encoding is not canonical if the value could have been
// encoded using fewer bytes.
min := uint64(0x100000000)
if rv < min {
return 0, messageError("readVarInt", fmt.Sprintf(
errNonCanonicalVarInt, rv, discriminant, min))
}
case 0xfe:
sv, err := binaryserializer.Uint32(r, littleEndian)
if err != nil {
return 0, err
}
rv = uint64(sv)
// The encoding is not canonical if the value could have been
// encoded using fewer bytes.
min := uint64(0x10000)
if rv < min {
return 0, messageError("readVarInt", fmt.Sprintf(
errNonCanonicalVarInt, rv, discriminant, min))
}
case 0xfd:
sv, err := binaryserializer.Uint16(r, littleEndian)
if err != nil {
return 0, err
}
rv = uint64(sv)
// The encoding is not canonical if the value could have been
// encoded using fewer bytes.
min := uint64(0xfd)
if rv < min {
return 0, messageError("readVarInt", fmt.Sprintf(
errNonCanonicalVarInt, rv, discriminant, min))
}
default:
rv = uint64(discriminant)
}
return rv, nil
}
// WriteVarInt serializes val to w using a variable number of bytes depending
// on its value.
func WriteVarInt(w io.Writer, val uint64) error {
if val < 0xfd {
_, err := w.Write([]byte{uint8(val)})
return errors.WithStack(err)
}
if val <= math.MaxUint16 {
var buf [3]byte
buf[0] = 0xfd
littleEndian.PutUint16(buf[1:], uint16(val))
_, err := w.Write(buf[:])
return errors.WithStack(err)
}
if val <= math.MaxUint32 {
var buf [5]byte
buf[0] = 0xfe
littleEndian.PutUint32(buf[1:], uint32(val))
_, err := w.Write(buf[:])
return errors.WithStack(err)
}
var buf [9]byte
buf[0] = 0xff
littleEndian.PutUint64(buf[1:], val)
_, err := w.Write(buf[:])
return errors.WithStack(err)
}
// VarIntSerializeSize returns the number of bytes it would take to serialize
// val as a variable length integer.
func VarIntSerializeSize(val uint64) int {
// The value is small enough to be represented by itself, so it's
// just 1 byte.
if val < 0xfd {
return 1
}
// Discriminant 1 byte plus 2 bytes for the uint16.
if val <= math.MaxUint16 {
return 3
}
// Discriminant 1 byte plus 4 bytes for the uint32.
if val <= math.MaxUint32 {
return 5
}
// Discriminant 1 byte plus 8 bytes for the uint64.
return 9
}
// ReadVarString reads a variable length string from r and returns it as a Go
// string. A variable length string is encoded as a variable length integer
// containing the length of the string followed by the bytes that represent the
// string itself. An error is returned if the length is greater than the
// maximum block payload size since it helps protect against memory exhaustion
// attacks and forced panics through malformed messages.
func ReadVarString(r io.Reader, pver uint32) (string, error) {
count, err := ReadVarInt(r)
if err != nil {
return "", err
}
// Prevent variable length strings that are larger than the maximum
// message size. It would be possible to cause memory exhaustion and
// panics without a sane upper bound on this count.
if count > MaxMessagePayload {
str := fmt.Sprintf("variable length string is too long "+
"[count %d, max %d]", count, MaxMessagePayload)
return "", messageError("ReadVarString", str)
}
buf := make([]byte, count)
_, err = io.ReadFull(r, buf)
if err != nil {
return "", err
}
return string(buf), nil
}
// WriteVarString serializes str to w as a variable length integer containing
// the length of the string followed by the bytes that represent the string
// itself.
func WriteVarString(w io.Writer, str string) error {
err := WriteVarInt(w, uint64(len(str)))
if err != nil {
return err
}
_, err = w.Write([]byte(str))
return err
}
// ReadVarBytes reads a variable length byte array. A byte array is encoded
// as a varInt containing the length of the array followed by the bytes
// themselves. An error is returned if the length is greater than the
// passed maxAllowed parameter which helps protect against memory exhaustion
// attacks and forced panics through malformed messages. The fieldName
// parameter is only used for the error message so it provides more context in
// the error.
func ReadVarBytes(r io.Reader, pver uint32, maxAllowed uint32,
fieldName string) ([]byte, error) {
count, err := ReadVarInt(r)
if err != nil {
return nil, err
}
// Prevent byte array larger than the max message size. It would
// be possible to cause memory exhaustion and panics without a sane
// upper bound on this count.
if count > uint64(maxAllowed) {
str := fmt.Sprintf("%s is larger than the max allowed size "+
"[count %d, max %d]", fieldName, count, maxAllowed)
return nil, messageError("ReadVarBytes", str)
}
b := make([]byte, count)
_, err = io.ReadFull(r, b)
if err != nil {
return nil, err
}
return b, nil
}
// WriteVarBytes serializes a variable length byte array to w as a varInt
// containing the number of bytes, followed by the bytes themselves.
func WriteVarBytes(w io.Writer, pver uint32, bytes []byte) error {
slen := uint64(len(bytes))
err := WriteVarInt(w, slen)
if err != nil {
return err
}
_, err = w.Write(bytes)
return err
}
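For reference, the helpers removed above encode a VarInt with a discriminant byte: values below 0xfd are written as a single byte, while 0xfd, 0xfe and 0xff prefix 2-, 4- and 8-byte little-endian payloads respectively. A small standalone sketch of the write side (standard library only, not the kaspad code shown here) reproduces the canonical encodings that the tests further down assert:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// writeVarInt appends the variable-length encoding of val to dst using the
// discriminant scheme described above: 1, 3, 5 or 9 bytes in total.
func writeVarInt(dst []byte, val uint64) []byte {
	switch {
	case val < 0xfd:
		return append(dst, byte(val))
	case val <= 0xffff:
		var buf [3]byte
		buf[0] = 0xfd
		binary.LittleEndian.PutUint16(buf[1:], uint16(val))
		return append(dst, buf[:]...)
	case val <= 0xffffffff:
		var buf [5]byte
		buf[0] = 0xfe
		binary.LittleEndian.PutUint32(buf[1:], uint32(val))
		return append(dst, buf[:]...)
	default:
		var buf [9]byte
		buf[0] = 0xff
		binary.LittleEndian.PutUint64(buf[1:], val)
		return append(dst, buf[:]...)
	}
}

func main() {
	for _, v := range []uint64{0xfc, 0xfd, 0x10000, 0x100000000} {
		fmt.Printf("%#x -> % x\n", v, writeVarInt(nil, v))
	}
}
```

Running it prints fc, fd fd 00, fe 00 00 01 00 and ff 00 00 00 00 01 00 00 00, matching the test vectors in TestVarIntEncoding below.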

View File

@@ -1,696 +1,44 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package appmessage
import (
"bytes"
"io"
"reflect"
"strings"
"testing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/pkg/errors"
"github.com/davecgh/go-spew/spew"
)
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// mainnetGenesisHash is the hash of the first block in the block DAG for the
// main network (genesis block).
var mainnetGenesisHash = &externalapi.DomainHash{
var mainnetGenesisHash = externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0xdc, 0x5f, 0x5b, 0x5b, 0x1d, 0xc2, 0xa7, 0x25,
0x49, 0xd5, 0x1d, 0x4d, 0xee, 0xd7, 0xa4, 0x8b,
0xaf, 0xd3, 0x14, 0x4b, 0x56, 0x78, 0x98, 0xb1,
0x8c, 0xfd, 0x9f, 0x69, 0xdd, 0xcf, 0xbb, 0x63,
}
})
// simnetGenesisHash is the hash of the first block in the block DAG for the
// simulation test network.
var simnetGenesisHash = &externalapi.DomainHash{
var simnetGenesisHash = externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x9d, 0x89, 0xb0, 0x6e, 0xb3, 0x47, 0xb5, 0x6e,
0xcd, 0x6c, 0x63, 0x99, 0x45, 0x91, 0xd5, 0xce,
0x9b, 0x43, 0x05, 0xc1, 0xa5, 0x5e, 0x2a, 0xda,
0x90, 0x4c, 0xf0, 0x6c, 0x4d, 0x5f, 0xd3, 0x62,
}
})
// mainnetGenesisMerkleRoot is the hash of the first transaction in the genesis
// block for the main network.
var mainnetGenesisMerkleRoot = &externalapi.DomainHash{
var mainnetGenesisMerkleRoot = externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x4a, 0x5e, 0x1e, 0x4b, 0xaa, 0xb8, 0x9f, 0x3a,
0x32, 0x51, 0x8a, 0x88, 0xc3, 0x1b, 0xc8, 0x7f,
0x61, 0x8f, 0x76, 0x67, 0x3e, 0x2c, 0xc7, 0x7a,
0xb2, 0x12, 0x7b, 0x7a, 0xfd, 0xed, 0xa3, 0x3b,
}
})
var exampleAcceptedIDMerkleRoot = &externalapi.DomainHash{
var exampleAcceptedIDMerkleRoot = externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x09, 0x3B, 0xC7, 0xE3, 0x67, 0x11, 0x7B, 0x3C,
0x30, 0xC1, 0xF8, 0xFD, 0xD0, 0xD9, 0x72, 0x87,
0x7F, 0x16, 0xC5, 0x96, 0x2E, 0x8B, 0xD9, 0x63,
0x65, 0x9C, 0x79, 0x3C, 0xE3, 0x70, 0xD9, 0x5F,
}
})
var exampleUTXOCommitment = &externalapi.DomainHash{
var exampleUTXOCommitment = externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{
0x10, 0x3B, 0xC7, 0xE3, 0x67, 0x11, 0x7B, 0x3C,
0x30, 0xC1, 0xF8, 0xFD, 0xD0, 0xD9, 0x72, 0x87,
0x7F, 0x16, 0xC5, 0x96, 0x2E, 0x8B, 0xD9, 0x63,
0x65, 0x9C, 0x79, 0x3C, 0xE3, 0x70, 0xD9, 0x5F,
}
// TestElementEncoding tests appmessage encode and decode for various element types. This
// is mainly to test the "fast" paths in readElement and writeElement which use
// type assertions to avoid reflection when possible.
func TestElementEncoding(t *testing.T) {
tests := []struct {
in interface{} // Value to encode
buf []byte // Encoded value
}{
{int32(1), []byte{0x01, 0x00, 0x00, 0x00}},
{uint32(256), []byte{0x00, 0x01, 0x00, 0x00}},
{
int64(65536),
[]byte{0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00},
},
{
uint64(4294967296),
[]byte{0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00},
},
{
true,
[]byte{0x01},
},
{
false,
[]byte{0x00},
},
{
[4]byte{0x01, 0x02, 0x03, 0x04},
[]byte{0x01, 0x02, 0x03, 0x04},
},
{
MessageCommand(0x10),
[]byte{
0x10, 0x00, 0x00, 0x00,
},
},
{
[16]byte{
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
},
[]byte{
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
},
},
{
(*externalapi.DomainHash)(&[externalapi.DomainHashSize]byte{
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20,
}),
[]byte{
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20,
},
},
{
ServiceFlag(SFNodeNetwork),
[]byte{0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
},
{
KaspaNet(Mainnet),
[]byte{0x1d, 0xf7, 0xdc, 0x3d},
},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Write to appmessage format.
var buf bytes.Buffer
err := WriteElement(&buf, test.in)
if err != nil {
t.Errorf("writeElement #%d error %v", i, err)
continue
}
if !bytes.Equal(buf.Bytes(), test.buf) {
t.Errorf("writeElement #%d\n got: %s want: %s", i,
spew.Sdump(buf.Bytes()), spew.Sdump(test.buf))
continue
}
// Read from appmessage format.
rbuf := bytes.NewReader(test.buf)
val := test.in
if reflect.ValueOf(test.in).Kind() != reflect.Ptr {
val = reflect.New(reflect.TypeOf(test.in)).Interface()
}
err = ReadElement(rbuf, val)
if err != nil {
t.Errorf("readElement #%d error %v", i, err)
continue
}
ival := val
if reflect.ValueOf(test.in).Kind() != reflect.Ptr {
ival = reflect.Indirect(reflect.ValueOf(val)).Interface()
}
if !reflect.DeepEqual(ival, test.in) {
t.Errorf("readElement #%d\n got: %s want: %s", i,
spew.Sdump(ival), spew.Sdump(test.in))
continue
}
}
}
// TestElementEncodingErrors performs negative tests against appmessage encode and decode
// of various element types to confirm error paths work correctly.
func TestElementEncodingErrors(t *testing.T) {
type writeElementReflect int32
tests := []struct {
in interface{} // Value to encode
max int // Max size of fixed buffer to induce errors
writeErr error // Expected write error
readErr error // Expected read error
}{
{int32(1), 0, io.ErrShortWrite, io.EOF},
{uint32(256), 0, io.ErrShortWrite, io.EOF},
{int64(65536), 0, io.ErrShortWrite, io.EOF},
{true, 0, io.ErrShortWrite, io.EOF},
{[4]byte{0x01, 0x02, 0x03, 0x04}, 0, io.ErrShortWrite, io.EOF},
{
MessageCommand(10),
0, io.ErrShortWrite, io.EOF,
},
{
[16]byte{
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
},
0, io.ErrShortWrite, io.EOF,
},
{
(*externalapi.DomainHash)(&[externalapi.DomainHashSize]byte{
0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20,
}),
0, io.ErrShortWrite, io.EOF,
},
{ServiceFlag(SFNodeNetwork), 0, io.ErrShortWrite, io.EOF},
{KaspaNet(Mainnet), 0, io.ErrShortWrite, io.EOF},
// Type with no supported encoding.
{writeElementReflect(0), 0, errNoEncodingForType, errNoEncodingForType},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Encode to appmessage format.
w := newFixedWriter(test.max)
err := WriteElement(w, test.in)
if !errors.Is(err, test.writeErr) {
t.Errorf("writeElement #%d wrong error got: %v, want: %v",
i, err, test.writeErr)
continue
}
// Decode from appmessage format.
r := newFixedReader(test.max, nil)
val := test.in
if reflect.ValueOf(test.in).Kind() != reflect.Ptr {
val = reflect.New(reflect.TypeOf(test.in)).Interface()
}
err = ReadElement(r, val)
if !errors.Is(err, test.readErr) {
t.Errorf("readElement #%d wrong error got: %v, want: %v",
i, err, test.readErr)
continue
}
}
}
// TestVarIntEncoding tests appmessage encode and decode for variable length integers.
func TestVarIntEncoding(t *testing.T) {
tests := []struct {
value uint64 // Value to encode
buf []byte // Encoded value
}{
// Latest protocol version.
// Single byte
{0, []byte{0x00}},
// Max single byte
{0xfc, []byte{0xfc}},
// Min 2-byte
{0xfd, []byte{0xfd, 0x0fd, 0x00}},
// Max 2-byte
{0xffff, []byte{0xfd, 0xff, 0xff}},
// Min 4-byte
{0x10000, []byte{0xfe, 0x00, 0x00, 0x01, 0x00}},
// Max 4-byte
{0xffffffff, []byte{0xfe, 0xff, 0xff, 0xff, 0xff}},
// Min 8-byte
{
0x100000000,
[]byte{0xff, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00},
},
// Max 8-byte
{
0xffffffffffffffff,
[]byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Encode to appmessage format.
buf := &bytes.Buffer{}
err := WriteVarInt(buf, test.value)
if err != nil {
t.Errorf("WriteVarInt #%d error %v", i, err)
continue
}
if !bytes.Equal(buf.Bytes(), test.buf) {
t.Errorf("WriteVarInt #%d\n got: %s want: %s", i,
spew.Sdump(buf.Bytes()), spew.Sdump(test.buf))
continue
}
// Decode from appmessage format.
rbuf := bytes.NewReader(test.buf)
val, err := ReadVarInt(rbuf)
if err != nil {
t.Errorf("ReadVarInt #%d error %v", i, err)
continue
}
if val != test.value {
t.Errorf("ReadVarInt #%d\n got: %x want: %x", i,
val, test.value)
continue
}
}
}
// TestVarIntEncodingErrors performs negative tests against appmessage encode and decode
// of variable length integers to confirm error paths work correctly.
func TestVarIntEncodingErrors(t *testing.T) {
tests := []struct {
in uint64 // Value to encode
buf []byte // Encoded value
max int // Max size of fixed buffer to induce errors
writeErr error // Expected write error
readErr error // Expected read error
}{
// Force errors on discriminant.
{0, []byte{0x00}, 0, io.ErrShortWrite, io.EOF},
// Force errors on 2-byte read/write.
{0xfd, []byte{0xfd}, 0, io.ErrShortWrite, io.EOF}, // error on writing length
{0xfd, []byte{0xfd}, 2, io.ErrShortWrite, io.ErrUnexpectedEOF}, // error on writing actual data
// Force errors on 4-byte read/write.
{0x10000, []byte{0xfe}, 0, io.ErrShortWrite, io.EOF}, // error on writing length
{0x10000, []byte{0xfe}, 2, io.ErrShortWrite, io.ErrUnexpectedEOF}, // error on writing actual data
// Force errors on 8-byte read/write.
{0x100000000, []byte{0xff}, 0, io.ErrShortWrite, io.EOF}, // error on writing length
{0x100000000, []byte{0xff}, 2, io.ErrShortWrite, io.ErrUnexpectedEOF}, // error on writing actual data
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Encode to appmessage format.
w := newFixedWriter(test.max)
err := WriteVarInt(w, test.in)
if !errors.Is(err, test.writeErr) {
t.Errorf("WriteVarInt #%d wrong error got: %v, want: %v",
i, err, test.writeErr)
continue
}
// Decode from appmessage format.
r := newFixedReader(test.max, test.buf)
_, err = ReadVarInt(r)
if !errors.Is(err, test.readErr) {
t.Errorf("ReadVarInt #%d wrong error got: %v, want: %v",
i, err, test.readErr)
continue
}
}
}
// TestVarIntNonCanonical ensures variable length integers that are not encoded
// canonically return the expected error.
func TestVarIntNonCanonical(t *testing.T) {
pver := ProtocolVersion
tests := []struct {
name string // Test name for easier identification
in []byte // Value to decode
pver uint32 // Protocol version for appmessage encoding
}{
{
"0 encoded with 3 bytes", []byte{0xfd, 0x00, 0x00},
pver,
},
{
"max single-byte value encoded with 3 bytes",
[]byte{0xfd, 0xfc, 0x00}, pver,
},
{
"0 encoded with 5 bytes",
[]byte{0xfe, 0x00, 0x00, 0x00, 0x00}, pver,
},
{
"max three-byte value encoded with 5 bytes",
[]byte{0xfe, 0xff, 0xff, 0x00, 0x00}, pver,
},
{
"0 encoded with 9 bytes",
[]byte{0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
pver,
},
{
"max five-byte value encoded with 9 bytes",
[]byte{0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00},
pver,
},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Decode from appmessage format.
rbuf := bytes.NewReader(test.in)
val, err := ReadVarInt(rbuf)
if msgErr := &(MessageError{}); !errors.As(err, &msgErr) {
t.Errorf("ReadVarInt #%d (%s) unexpected error %v", i,
test.name, err)
continue
}
if val != 0 {
t.Errorf("ReadVarInt #%d (%s)\n got: %d want: 0", i,
test.name, val)
continue
}
}
}
// TestVarIntSerializeSize tests the serialize size for variable length integers.
func TestVarIntSerializeSize(t *testing.T) {
tests := []struct {
val uint64 // Value to get the serialized size for
size int // Expected serialized size
}{
// Single byte
{0, 1},
// Max single byte
{0xfc, 1},
// Min 2-byte
{0xfd, 3},
// Max 2-byte
{0xffff, 3},
// Min 4-byte
{0x10000, 5},
// Max 4-byte
{0xffffffff, 5},
// Min 8-byte
{0x100000000, 9},
// Max 8-byte
{0xffffffffffffffff, 9},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
serializedSize := VarIntSerializeSize(test.val)
if serializedSize != test.size {
t.Errorf("VarIntSerializeSize #%d got: %d, want: %d", i,
serializedSize, test.size)
continue
}
}
}
// TestVarStringEncoding tests appmessage encode and decode for variable length strings.
func TestVarStringEncoding(t *testing.T) {
pver := ProtocolVersion
// str256 is a string that takes a 2-byte varint to encode.
str256 := strings.Repeat("test", 64)
tests := []struct {
in string // String to encode
out string // String to decoded value
buf []byte // Encoded value
pver uint32 // Protocol version for appmessage encoding
}{
// Latest protocol version.
// Empty string
{"", "", []byte{0x00}, pver},
// Single byte varint + string
{"Test", "Test", append([]byte{0x04}, []byte("Test")...), pver},
// 2-byte varint + string
{str256, str256, append([]byte{0xfd, 0x00, 0x01}, []byte(str256)...), pver},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Encode to appmessage format.
var buf bytes.Buffer
err := WriteVarString(&buf, test.in)
if err != nil {
t.Errorf("WriteVarString #%d error %v", i, err)
continue
}
if !bytes.Equal(buf.Bytes(), test.buf) {
t.Errorf("WriteVarString #%d\n got: %s want: %s", i,
spew.Sdump(buf.Bytes()), spew.Sdump(test.buf))
continue
}
// Decode from appmessage format.
rbuf := bytes.NewReader(test.buf)
val, err := ReadVarString(rbuf, test.pver)
if err != nil {
t.Errorf("ReadVarString #%d error %v", i, err)
continue
}
if val != test.out {
t.Errorf("ReadVarString #%d\n got: %s want: %s", i,
val, test.out)
continue
}
}
}
// TestVarStringEncodingErrors performs negative tests against appmessage encode and
// decode of variable length strings to confirm error paths work correctly.
func TestVarStringEncodingErrors(t *testing.T) {
pver := ProtocolVersion
// str256 is a string that takes a 2-byte varint to encode.
str256 := strings.Repeat("test", 64)
tests := []struct {
in string // Value to encode
buf []byte // Encoded value
pver uint32 // Protocol version for appmessage encoding
max int // Max size of fixed buffer to induce errors
writeErr error // Expected write error
readErr error // Expected read error
}{
// Latest protocol version with intentional read/write errors.
// Force errors on empty string.
{"", []byte{0x00}, pver, 0, io.ErrShortWrite, io.EOF},
// Force error on single byte varint + string.
{"Test", []byte{0x04}, pver, 2, io.ErrShortWrite, io.ErrUnexpectedEOF},
// Force errors on 2-byte varint + string.
{str256, []byte{0xfd}, pver, 2, io.ErrShortWrite, io.ErrUnexpectedEOF},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Encode to appmessage format.
w := newFixedWriter(test.max)
err := WriteVarString(w, test.in)
if !errors.Is(err, test.writeErr) {
t.Errorf("WriteVarString #%d wrong error got: %v, want: %v",
i, err, test.writeErr)
continue
}
// Decode from appmessage format.
r := newFixedReader(test.max, test.buf)
_, err = ReadVarString(r, test.pver)
if !errors.Is(err, test.readErr) {
t.Errorf("ReadVarString #%d wrong error got: %v, want: %v",
i, err, test.readErr)
continue
}
}
}
// TestVarStringOverflowErrors performs tests to ensure deserializing variable
// length strings intentionally crafted to use large values for the string
// length are handled properly. This could otherwise potentially be used as an
// attack vector.
func TestVarStringOverflowErrors(t *testing.T) {
pver := ProtocolVersion
tests := []struct {
buf []byte // Encoded value
pver uint32 // Protocol version for appmessage encoding
err error // Expected error
}{
{[]byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
pver, &MessageError{}},
{[]byte{0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
pver, &MessageError{}},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Decode from appmessage format.
rbuf := bytes.NewReader(test.buf)
_, err := ReadVarString(rbuf, test.pver)
if reflect.TypeOf(err) != reflect.TypeOf(test.err) {
t.Errorf("ReadVarString #%d wrong error got: %v, "+
"want: %v", i, err, reflect.TypeOf(test.err))
continue
}
}
}
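The overflow cases above assume the reader rejects declared lengths larger than the maximum message payload before allocating. A hedged sketch of that guard (the real ReadVarString also takes a protocol version and builds a descriptive MessageError):
func readVarStringGuardSketch(r io.Reader) (string, error) {
	count, err := ReadVarInt(r)
	if err != nil {
		return "", err
	}
	if count > MaxMessagePayload {
		// Reject absurd lengths up front so a malicious peer cannot force a huge allocation.
		return "", &MessageError{}
	}
	buf := make([]byte, count)
	if _, err := io.ReadFull(r, buf); err != nil {
		return "", err
	}
	return string(buf), nil
}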
// TestVarBytesEncoding tests appmessage encode and decode for variable length byte array.
func TestVarBytesEncoding(t *testing.T) {
pver := ProtocolVersion
// bytes256 is a byte array that takes a 2-byte varint to encode.
bytes256 := bytes.Repeat([]byte{0x01}, 256)
tests := []struct {
in []byte // Byte Array to write
buf []byte // Encoded value
pver uint32 // Protocol version for appmessage encoding
}{
// Latest protocol version.
// Empty byte array
{[]byte{}, []byte{0x00}, pver},
// Single byte varint + byte array
{[]byte{0x01}, []byte{0x01, 0x01}, pver},
// 2-byte varint + byte array
{bytes256, append([]byte{0xfd, 0x00, 0x01}, bytes256...), pver},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Encode to appmessage format.
var buf bytes.Buffer
err := WriteVarBytes(&buf, test.pver, test.in)
if err != nil {
t.Errorf("WriteVarBytes #%d error %v", i, err)
continue
}
if !bytes.Equal(buf.Bytes(), test.buf) {
t.Errorf("WriteVarBytes #%d\n got: %s want: %s", i,
spew.Sdump(buf.Bytes()), spew.Sdump(test.buf))
continue
}
// Decode from appmessage format.
rbuf := bytes.NewReader(test.buf)
val, err := ReadVarBytes(rbuf, test.pver, MaxMessagePayload,
"test payload")
if err != nil {
t.Errorf("ReadVarBytes #%d error %v", i, err)
continue
}
if !bytes.Equal(val, test.in) {
t.Errorf("ReadVarBytes #%d\n got: %s want: %s", i,
val, test.in)
continue
}
}
}
// TestVarBytesEncodingErrors performs negative tests against appmessage encode and
// decode of variable length byte arrays to confirm error paths work correctly.
func TestVarBytesEncodingErrors(t *testing.T) {
pver := ProtocolVersion
// bytes256 is a byte array that takes a 2-byte varint to encode.
bytes256 := bytes.Repeat([]byte{0x01}, 256)
tests := []struct {
in []byte // Byte Array to write
buf []byte // Encoded value
pver uint32 // Protocol version for appmessage encoding
max int // Max size of fixed buffer to induce errors
writeErr error // Expected write error
readErr error // Expected read error
}{
// Latest protocol version with intentional read/write errors.
// Force errors on empty byte array.
{[]byte{}, []byte{0x00}, pver, 0, io.ErrShortWrite, io.EOF},
// Force error on single byte varint + byte array.
{[]byte{0x01, 0x02, 0x03}, []byte{0x04}, pver, 2, io.ErrShortWrite, io.ErrUnexpectedEOF},
// Force errors on 2-byte varint + byte array.
{bytes256, []byte{0xfd}, pver, 2, io.ErrShortWrite, io.ErrUnexpectedEOF},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Encode to appmessage format.
w := newFixedWriter(test.max)
err := WriteVarBytes(w, test.pver, test.in)
if !errors.Is(err, test.writeErr) {
t.Errorf("WriteVarBytes #%d wrong error got: %v, want: %v",
i, err, test.writeErr)
continue
}
// Decode from appmessage format.
r := newFixedReader(test.max, test.buf)
_, err = ReadVarBytes(r, test.pver, MaxMessagePayload,
"test payload")
if !errors.Is(err, test.readErr) {
t.Errorf("ReadVarBytes #%d wrong error got: %v, want: %v",
i, err, test.readErr)
continue
}
}
}
// TestVarBytesOverflowErrors performs tests to ensure deserializing variable
// length byte arrays intentionally crafted to use large values for the array
// length are handled properly. This could otherwise potentially be used as an
// attack vector.
func TestVarBytesOverflowErrors(t *testing.T) {
pver := ProtocolVersion
tests := []struct {
buf []byte // Encoded value
pver uint32 // Protocol version for appmessage encoding
err error // Expected error
}{
{[]byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
pver, &MessageError{}},
{[]byte{0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
pver, &MessageError{}},
}
t.Logf("Running %d tests", len(tests))
for i, test := range tests {
// Decode from appmessage format.
rbuf := bytes.NewReader(test.buf)
_, err := ReadVarBytes(rbuf, test.pver, MaxMessagePayload,
"test payload")
if reflect.TypeOf(err) != reflect.TypeOf(test.err) {
t.Errorf("ReadVarBytes #%d wrong error got: %v, "+
"want: %v", i, err, reflect.TypeOf(test.err))
continue
}
}
}
})

View File

@@ -1,7 +1,13 @@
package appmessage
import (
"encoding/hex"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionid"
"github.com/kaspanet/kaspad/util/mstime"
)
@@ -17,17 +23,17 @@ func DomainBlockToMsgBlock(domainBlock *externalapi.DomainBlock) *MsgBlock {
}
}
// DomainBlockHeaderToBlockHeader converts an externalapi.DomainBlockHeader to MsgBlockHeader
func DomainBlockHeaderToBlockHeader(domainBlockHeader *externalapi.DomainBlockHeader) *MsgBlockHeader {
// DomainBlockHeaderToBlockHeader converts an externalapi.BlockHeader to MsgBlockHeader
func DomainBlockHeaderToBlockHeader(domainBlockHeader externalapi.BlockHeader) *MsgBlockHeader {
return &MsgBlockHeader{
Version: domainBlockHeader.Version,
ParentHashes: domainBlockHeader.ParentHashes,
HashMerkleRoot: &domainBlockHeader.HashMerkleRoot,
AcceptedIDMerkleRoot: &domainBlockHeader.AcceptedIDMerkleRoot,
UTXOCommitment: &domainBlockHeader.UTXOCommitment,
Timestamp: mstime.UnixMilliseconds(domainBlockHeader.TimeInMilliseconds),
Bits: domainBlockHeader.Bits,
Nonce: domainBlockHeader.Nonce,
Version: domainBlockHeader.Version(),
ParentHashes: domainBlockHeader.ParentHashes(),
HashMerkleRoot: domainBlockHeader.HashMerkleRoot(),
AcceptedIDMerkleRoot: domainBlockHeader.AcceptedIDMerkleRoot(),
UTXOCommitment: domainBlockHeader.UTXOCommitment(),
Timestamp: mstime.UnixMilliseconds(domainBlockHeader.TimeInMilliseconds()),
Bits: domainBlockHeader.Bits(),
Nonce: domainBlockHeader.Nonce(),
}
}
@@ -44,18 +50,18 @@ func MsgBlockToDomainBlock(msgBlock *MsgBlock) *externalapi.DomainBlock {
}
}
// BlockHeaderToDomainBlockHeader converts a MsgBlockHeader to externalapi.DomainBlockHeader
func BlockHeaderToDomainBlockHeader(blockHeader *MsgBlockHeader) *externalapi.DomainBlockHeader {
return &externalapi.DomainBlockHeader{
Version: blockHeader.Version,
ParentHashes: blockHeader.ParentHashes,
HashMerkleRoot: *blockHeader.HashMerkleRoot,
AcceptedIDMerkleRoot: *blockHeader.AcceptedIDMerkleRoot,
UTXOCommitment: *blockHeader.UTXOCommitment,
TimeInMilliseconds: blockHeader.Timestamp.UnixMilliseconds(),
Bits: blockHeader.Bits,
Nonce: blockHeader.Nonce,
}
// BlockHeaderToDomainBlockHeader converts a MsgBlockHeader to externalapi.BlockHeader
func BlockHeaderToDomainBlockHeader(blockHeader *MsgBlockHeader) externalapi.BlockHeader {
return blockheader.NewImmutableBlockHeader(
blockHeader.Version,
blockHeader.ParentHashes,
blockHeader.HashMerkleRoot,
blockHeader.AcceptedIDMerkleRoot,
blockHeader.UTXOCommitment,
blockHeader.Timestamp.UnixMilliseconds(),
blockHeader.Bits,
blockHeader.Nonce,
)
}
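A hedged round-trip sketch of the two converters above, assuming the getter-style externalapi.BlockHeader interface introduced in this change:
func headerRoundTripSketch(msgHeader *MsgBlockHeader) bool {
	domainHeader := BlockHeaderToDomainBlockHeader(msgHeader)
	back := DomainBlockHeaderToBlockHeader(domainHeader)
	// Spot-check two scalar fields; a full comparison would also cover the hashes.
	return back.Nonce == msgHeader.Nonce && back.Bits == msgHeader.Bits
}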
// DomainTransactionToMsgTx converts an externalapi.DomainTransaction into an MsgTx
@@ -153,3 +159,159 @@ func outpointToDomainOutpoint(outpoint *Outpoint) *externalapi.DomainOutpoint {
Index: outpoint.Index,
}
}
// RPCTransactionToDomainTransaction converts RPCTransactions to DomainTransactions
func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externalapi.DomainTransaction, error) {
inputs := make([]*externalapi.DomainTransactionInput, len(rpcTransaction.Inputs))
for i, input := range rpcTransaction.Inputs {
transactionIDBytes, err := hex.DecodeString(input.PreviousOutpoint.TransactionID)
if err != nil {
return nil, err
}
transactionID, err := transactionid.FromBytes(transactionIDBytes)
if err != nil {
return nil, err
}
previousOutpoint := &externalapi.DomainOutpoint{
TransactionID: *transactionID,
Index: input.PreviousOutpoint.Index,
}
signatureScript, err := hex.DecodeString(input.SignatureScript)
if err != nil {
return nil, err
}
inputs[i] = &externalapi.DomainTransactionInput{
PreviousOutpoint: *previousOutpoint,
SignatureScript: signatureScript,
Sequence: input.Sequence,
}
}
outputs := make([]*externalapi.DomainTransactionOutput, len(rpcTransaction.Outputs))
for i, output := range rpcTransaction.Outputs {
scriptPublicKey, err := hex.DecodeString(output.ScriptPublicKey.Script)
if err != nil {
return nil, err
}
outputs[i] = &externalapi.DomainTransactionOutput{
Value: output.Amount,
ScriptPublicKey: &externalapi.ScriptPublicKey{Script: scriptPublicKey, Version: output.ScriptPublicKey.Version},
}
}
subnetworkIDBytes, err := hex.DecodeString(rpcTransaction.SubnetworkID)
if err != nil {
return nil, err
}
subnetworkID, err := subnetworks.FromBytes(subnetworkIDBytes)
if err != nil {
return nil, err
}
payloadHashBytes, err := hex.DecodeString(rpcTransaction.PayloadHash)
if err != nil {
return nil, err
}
payloadHash, err := externalapi.NewDomainHashFromByteSlice(payloadHashBytes)
if err != nil {
return nil, err
}
payload, err := hex.DecodeString(rpcTransaction.Payload)
if err != nil {
return nil, err
}
return &externalapi.DomainTransaction{
Version: rpcTransaction.Version,
Inputs: inputs,
Outputs: outputs,
LockTime: rpcTransaction.LockTime,
SubnetworkID: *subnetworkID,
Gas: rpcTransaction.Gas,
PayloadHash: *payloadHash,
Payload: payload,
}, nil
}
// DomainTransactionToRPCTransaction converts DomainTransactions to RPCTransactions
func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransaction) *RPCTransaction {
inputs := make([]*RPCTransactionInput, len(transaction.Inputs))
for i, input := range transaction.Inputs {
transactionID := input.PreviousOutpoint.TransactionID.String()
previousOutpoint := &RPCOutpoint{
TransactionID: transactionID,
Index: input.PreviousOutpoint.Index,
}
signatureScript := hex.EncodeToString(input.SignatureScript)
inputs[i] = &RPCTransactionInput{
PreviousOutpoint: previousOutpoint,
SignatureScript: signatureScript,
Sequence: input.Sequence,
}
}
outputs := make([]*RPCTransactionOutput, len(transaction.Outputs))
for i, output := range transaction.Outputs {
scriptPublicKey := hex.EncodeToString(output.ScriptPublicKey.Script)
outputs[i] = &RPCTransactionOutput{
Amount: output.Value,
ScriptPublicKey: &RPCScriptPublicKey{Script: scriptPublicKey, Version: output.ScriptPublicKey.Version},
}
}
subnetworkID := hex.EncodeToString(transaction.SubnetworkID[:])
payloadHash := transaction.PayloadHash.String()
payload := hex.EncodeToString(transaction.Payload)
return &RPCTransaction{
Version: transaction.Version,
Inputs: inputs,
Outputs: outputs,
LockTime: transaction.LockTime,
SubnetworkID: subnetworkID,
Gas: transaction.Gas,
PayloadHash: payloadHash,
Payload: payload,
}
}
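A hedged usage sketch of the two RPC converters above; the error can only come from the hex decoding inside RPCTransactionToDomainTransaction:
func rpcTransactionRoundTripSketch(rpcTx *RPCTransaction) (*RPCTransaction, error) {
	domainTx, err := RPCTransactionToDomainTransaction(rpcTx)
	if err != nil {
		return nil, err
	}
	return DomainTransactionToRPCTransaction(domainTx), nil
}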
// OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs converts
// OutpointAndUTXOEntryPairs to domain OutpointAndUTXOEntryPairs
func OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(
outpointAndUTXOEntryPairs []*OutpointAndUTXOEntryPair) []*externalapi.OutpointAndUTXOEntryPair {
domainOutpointAndUTXOEntryPairs := make([]*externalapi.OutpointAndUTXOEntryPair, len(outpointAndUTXOEntryPairs))
for i, outpointAndUTXOEntryPair := range outpointAndUTXOEntryPairs {
domainOutpointAndUTXOEntryPairs[i] = &externalapi.OutpointAndUTXOEntryPair{
Outpoint: &externalapi.DomainOutpoint{
TransactionID: outpointAndUTXOEntryPair.Outpoint.TxID,
Index: outpointAndUTXOEntryPair.Outpoint.Index,
},
UTXOEntry: utxo.NewUTXOEntry(
outpointAndUTXOEntryPair.UTXOEntry.Amount,
outpointAndUTXOEntryPair.UTXOEntry.ScriptPublicKey,
outpointAndUTXOEntryPair.UTXOEntry.IsCoinbase,
outpointAndUTXOEntryPair.UTXOEntry.BlockBlueScore,
),
}
}
return domainOutpointAndUTXOEntryPairs
}
// DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs converts
// domain OutpointAndUTXOEntryPairs to OutpointAndUTXOEntryPairs
func DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(
outpointAndUTXOEntryPairs []*externalapi.OutpointAndUTXOEntryPair) []*OutpointAndUTXOEntryPair {
domainOutpointAndUTXOEntryPairs := make([]*OutpointAndUTXOEntryPair, len(outpointAndUTXOEntryPairs))
for i, outpointAndUTXOEntryPair := range outpointAndUTXOEntryPairs {
domainOutpointAndUTXOEntryPairs[i] = &OutpointAndUTXOEntryPair{
Outpoint: &Outpoint{
TxID: outpointAndUTXOEntryPair.Outpoint.TransactionID,
Index: outpointAndUTXOEntryPair.Outpoint.Index,
},
UTXOEntry: &UTXOEntry{
Amount: outpointAndUTXOEntryPair.UTXOEntry.Amount(),
ScriptPublicKey: outpointAndUTXOEntryPair.UTXOEntry.ScriptPublicKey(),
IsCoinbase: outpointAndUTXOEntryPair.UTXOEntry.IsCoinbase(),
BlockBlueScore: outpointAndUTXOEntryPair.UTXOEntry.BlockBlueScore(),
},
}
}
return domainOutpointAndUTXOEntryPairs
}

View File

@@ -41,8 +41,6 @@ const (
CmdPong
CmdRequestBlockLocator
CmdBlockLocator
CmdSelectedTip
CmdRequestSelectedTip
CmdInvRelayBlock
CmdRequestRelayBlocks
CmdInvTransaction
@@ -53,10 +51,17 @@ const (
CmdReject
CmdHeader
CmdRequestNextHeaders
CmdRequestIBDRootUTXOSetAndBlock
CmdIBDRootUTXOSetAndBlock
CmdRequestPruningPointUTXOSetAndBlock
CmdPruningPointUTXOSetChunk
CmdRequestIBDBlocks
CmdIBDRootNotFound
CmdUnexpectedPruningPoint
CmdRequestPruningPointHash
CmdPruningPointHash
CmdIBDBlockLocator
CmdIBDBlockLocatorHighestHash
CmdBlockHeaders
CmdRequestNextPruningPointUTXOSetChunk
CmdDonePruningPointUTXOSetChunks
// rpc
CmdGetCurrentNetworkRequestMessage
@@ -81,15 +86,15 @@ const (
CmdAddPeerResponseMessage
CmdSubmitTransactionRequestMessage
CmdSubmitTransactionResponseMessage
CmdNotifyChainChangedRequestMessage
CmdNotifyChainChangedResponseMessage
CmdChainChangedNotificationMessage
CmdNotifyVirtualSelectedParentChainChangedRequestMessage
CmdNotifyVirtualSelectedParentChainChangedResponseMessage
CmdVirtualSelectedParentChainChangedNotificationMessage
CmdGetBlockRequestMessage
CmdGetBlockResponseMessage
CmdGetSubnetworkRequestMessage
CmdGetSubnetworkResponseMessage
CmdGetChainFromBlockRequestMessage
CmdGetChainFromBlockResponseMessage
CmdGetVirtualSelectedParentChainFromBlockRequestMessage
CmdGetVirtualSelectedParentChainFromBlockResponseMessage
CmdGetBlocksRequestMessage
CmdGetBlocksResponseMessage
CmdGetBlockCountRequestMessage
@@ -108,88 +113,113 @@ const (
CmdShutDownResponseMessage
CmdGetHeadersRequestMessage
CmdGetHeadersResponseMessage
CmdNotifyUTXOsChangedRequestMessage
CmdNotifyUTXOsChangedResponseMessage
CmdUTXOsChangedNotificationMessage
CmdGetUTXOsByAddressesRequestMessage
CmdGetUTXOsByAddressesResponseMessage
CmdGetVirtualSelectedParentBlueScoreRequestMessage
CmdGetVirtualSelectedParentBlueScoreResponseMessage
CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage
CmdNotifyVirtualSelectedParentBlueScoreChangedResponseMessage
CmdVirtualSelectedParentBlueScoreChangedNotificationMessage
)
// ProtocolMessageCommandToString maps all MessageCommands to their string representation
var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdVersion: "Version",
CmdVerAck: "VerAck",
CmdRequestAddresses: "RequestAddresses",
CmdAddresses: "Addresses",
CmdRequestHeaders: "RequestHeaders",
CmdBlock: "Block",
CmdTx: "Tx",
CmdPing: "Ping",
CmdPong: "Pong",
CmdRequestBlockLocator: "RequestBlockLocator",
CmdBlockLocator: "BlockLocator",
CmdSelectedTip: "SelectedTip",
CmdRequestSelectedTip: "RequestSelectedTip",
CmdInvRelayBlock: "InvRelayBlock",
CmdRequestRelayBlocks: "RequestRelayBlocks",
CmdInvTransaction: "InvTransaction",
CmdRequestTransactions: "RequestTransactions",
CmdIBDBlock: "IBDBlock",
CmdDoneHeaders: "DoneHeaders",
CmdTransactionNotFound: "TransactionNotFound",
CmdReject: "Reject",
CmdHeader: "Header",
CmdRequestNextHeaders: "RequestNextHeaders",
CmdRequestIBDRootUTXOSetAndBlock: "RequestPruningUTXOSetAndBlock",
CmdIBDRootUTXOSetAndBlock: "IBDRootUTXOSetAndBlock",
CmdRequestIBDBlocks: "RequestIBDBlocks",
CmdIBDRootNotFound: "IBDRootNotFound",
CmdVersion: "Version",
CmdVerAck: "VerAck",
CmdRequestAddresses: "RequestAddresses",
CmdAddresses: "Addresses",
CmdRequestHeaders: "RequestHeaders",
CmdBlock: "Block",
CmdTx: "Tx",
CmdPing: "Ping",
CmdPong: "Pong",
CmdRequestBlockLocator: "RequestBlockLocator",
CmdBlockLocator: "BlockLocator",
CmdInvRelayBlock: "InvRelayBlock",
CmdRequestRelayBlocks: "RequestRelayBlocks",
CmdInvTransaction: "InvTransaction",
CmdRequestTransactions: "RequestTransactions",
CmdIBDBlock: "IBDBlock",
CmdDoneHeaders: "DoneHeaders",
CmdTransactionNotFound: "TransactionNotFound",
CmdReject: "Reject",
CmdHeader: "Header",
CmdRequestNextHeaders: "RequestNextHeaders",
CmdRequestPruningPointUTXOSetAndBlock: "RequestPruningPointUTXOSetAndBlock",
CmdPruningPointUTXOSetChunk: "PruningPointUTXOSetChunk",
CmdRequestIBDBlocks: "RequestIBDBlocks",
CmdUnexpectedPruningPoint: "UnexpectedPruningPoint",
CmdRequestPruningPointHash: "RequestPruningPointHash",
CmdPruningPointHash: "PruningPointHash",
CmdIBDBlockLocator: "IBDBlockLocator",
CmdIBDBlockLocatorHighestHash: "IBDBlockLocatorHighestHash",
CmdBlockHeaders: "BlockHeaders",
CmdRequestNextPruningPointUTXOSetChunk: "RequestNextPruningPointUTXOSetChunk",
CmdDonePruningPointUTXOSetChunks: "DonePruningPointUTXOSetChunks",
}
// RPCMessageCommandToString maps all MessageCommands to their string representation
var RPCMessageCommandToString = map[MessageCommand]string{
CmdGetCurrentNetworkRequestMessage: "GetCurrentNetworkRequest",
CmdGetCurrentNetworkResponseMessage: "GetCurrentNetworkResponse",
CmdSubmitBlockRequestMessage: "SubmitBlockRequest",
CmdSubmitBlockResponseMessage: "SubmitBlockResponse",
CmdGetBlockTemplateRequestMessage: "GetBlockTemplateRequest",
CmdGetBlockTemplateResponseMessage: "GetBlockTemplateResponse",
CmdGetBlockTemplateTransactionMessage: "CmdGetBlockTemplateTransaction",
CmdNotifyBlockAddedRequestMessage: "NotifyBlockAddedRequest",
CmdNotifyBlockAddedResponseMessage: "NotifyBlockAddedResponse",
CmdBlockAddedNotificationMessage: "BlockAddedNotification",
CmdGetPeerAddressesRequestMessage: "GetPeerAddressesRequest",
CmdGetPeerAddressesResponseMessage: "GetPeerAddressesResponse",
CmdGetSelectedTipHashRequestMessage: "GetSelectedTipHashRequest",
CmdGetSelectedTipHashResponseMessage: "GetSelectedTipHashResponse",
CmdGetMempoolEntryRequestMessage: "GetMempoolEntryRequest",
CmdGetMempoolEntryResponseMessage: "GetMempoolEntryResponse",
CmdGetConnectedPeerInfoRequestMessage: "GetConnectedPeerInfoRequest",
CmdGetConnectedPeerInfoResponseMessage: "GetConnectedPeerInfoResponse",
CmdAddPeerRequestMessage: "AddPeerRequest",
CmdAddPeerResponseMessage: "AddPeerResponse",
CmdSubmitTransactionRequestMessage: "SubmitTransactionRequest",
CmdSubmitTransactionResponseMessage: "SubmitTransactionResponse",
CmdNotifyChainChangedRequestMessage: "NotifyChainChangedRequest",
CmdNotifyChainChangedResponseMessage: "NotifyChainChangedResponse",
CmdChainChangedNotificationMessage: "ChainChangedNotification",
CmdGetBlockRequestMessage: "GetBlockRequest",
CmdGetBlockResponseMessage: "GetBlockResponse",
CmdGetSubnetworkRequestMessage: "GetSubnetworkRequest",
CmdGetSubnetworkResponseMessage: "GetSubnetworkResponse",
CmdGetChainFromBlockRequestMessage: "GetChainFromBlockRequest",
CmdGetChainFromBlockResponseMessage: "GetChainFromBlockResponse",
CmdGetBlocksRequestMessage: "GetBlocksRequest",
CmdGetBlocksResponseMessage: "GetBlocksResponse",
CmdGetBlockCountRequestMessage: "GetBlockCountRequest",
CmdGetBlockCountResponseMessage: "GetBlockCountResponse",
CmdGetBlockDAGInfoRequestMessage: "GetBlockDAGInfoRequest",
CmdGetBlockDAGInfoResponseMessage: "GetBlockDAGInfoResponse",
CmdResolveFinalityConflictRequestMessage: "ResolveFinalityConflictRequest",
CmdResolveFinalityConflictResponseMessage: "ResolveFinalityConflictResponse",
CmdNotifyFinalityConflictsRequestMessage: "NotifyFinalityConflictsRequest",
CmdNotifyFinalityConflictsResponseMessage: "NotifyFinalityConflictsResponse",
CmdFinalityConflictNotificationMessage: "FinalityConflictNotification",
CmdFinalityConflictResolvedNotificationMessage: "FinalityConflictResolvedNotification",
CmdGetMempoolEntriesRequestMessage: "GetMempoolEntriesRequestMessage",
CmdGetMempoolEntriesResponseMessage: "GetMempoolEntriesResponseMessage",
CmdGetHeadersRequestMessage: "GetHeadersRequest",
CmdGetHeadersResponseMessage: "GetHeadersResponse",
CmdGetCurrentNetworkRequestMessage: "GetCurrentNetworkRequest",
CmdGetCurrentNetworkResponseMessage: "GetCurrentNetworkResponse",
CmdSubmitBlockRequestMessage: "SubmitBlockRequest",
CmdSubmitBlockResponseMessage: "SubmitBlockResponse",
CmdGetBlockTemplateRequestMessage: "GetBlockTemplateRequest",
CmdGetBlockTemplateResponseMessage: "GetBlockTemplateResponse",
CmdGetBlockTemplateTransactionMessage: "CmdGetBlockTemplateTransaction",
CmdNotifyBlockAddedRequestMessage: "NotifyBlockAddedRequest",
CmdNotifyBlockAddedResponseMessage: "NotifyBlockAddedResponse",
CmdBlockAddedNotificationMessage: "BlockAddedNotification",
CmdGetPeerAddressesRequestMessage: "GetPeerAddressesRequest",
CmdGetPeerAddressesResponseMessage: "GetPeerAddressesResponse",
CmdGetSelectedTipHashRequestMessage: "GetSelectedTipHashRequest",
CmdGetSelectedTipHashResponseMessage: "GetSelectedTipHashResponse",
CmdGetMempoolEntryRequestMessage: "GetMempoolEntryRequest",
CmdGetMempoolEntryResponseMessage: "GetMempoolEntryResponse",
CmdGetConnectedPeerInfoRequestMessage: "GetConnectedPeerInfoRequest",
CmdGetConnectedPeerInfoResponseMessage: "GetConnectedPeerInfoResponse",
CmdAddPeerRequestMessage: "AddPeerRequest",
CmdAddPeerResponseMessage: "AddPeerResponse",
CmdSubmitTransactionRequestMessage: "SubmitTransactionRequest",
CmdSubmitTransactionResponseMessage: "SubmitTransactionResponse",
CmdNotifyVirtualSelectedParentChainChangedRequestMessage: "NotifyVirtualSelectedParentChainChangedRequest",
CmdNotifyVirtualSelectedParentChainChangedResponseMessage: "NotifyVirtualSelectedParentChainChangedResponse",
CmdVirtualSelectedParentChainChangedNotificationMessage: "VirtualSelectedParentChainChangedNotification",
CmdGetBlockRequestMessage: "GetBlockRequest",
CmdGetBlockResponseMessage: "GetBlockResponse",
CmdGetSubnetworkRequestMessage: "GetSubnetworkRequest",
CmdGetSubnetworkResponseMessage: "GetSubnetworkResponse",
CmdGetVirtualSelectedParentChainFromBlockRequestMessage: "GetVirtualSelectedParentChainFromBlockRequest",
CmdGetVirtualSelectedParentChainFromBlockResponseMessage: "GetVirtualSelectedParentChainFromBlockResponse",
CmdGetBlocksRequestMessage: "GetBlocksRequest",
CmdGetBlocksResponseMessage: "GetBlocksResponse",
CmdGetBlockCountRequestMessage: "GetBlockCountRequest",
CmdGetBlockCountResponseMessage: "GetBlockCountResponse",
CmdGetBlockDAGInfoRequestMessage: "GetBlockDAGInfoRequest",
CmdGetBlockDAGInfoResponseMessage: "GetBlockDAGInfoResponse",
CmdResolveFinalityConflictRequestMessage: "ResolveFinalityConflictRequest",
CmdResolveFinalityConflictResponseMessage: "ResolveFinalityConflictResponse",
CmdNotifyFinalityConflictsRequestMessage: "NotifyFinalityConflictsRequest",
CmdNotifyFinalityConflictsResponseMessage: "NotifyFinalityConflictsResponse",
CmdFinalityConflictNotificationMessage: "FinalityConflictNotification",
CmdFinalityConflictResolvedNotificationMessage: "FinalityConflictResolvedNotification",
CmdGetMempoolEntriesRequestMessage: "GetMempoolEntriesRequest",
CmdGetMempoolEntriesResponseMessage: "GetMempoolEntriesResponse",
CmdGetHeadersRequestMessage: "GetHeadersRequest",
CmdGetHeadersResponseMessage: "GetHeadersResponse",
CmdNotifyUTXOsChangedRequestMessage: "NotifyUTXOsChangedRequest",
CmdNotifyUTXOsChangedResponseMessage: "NotifyUTXOsChangedResponse",
CmdUTXOsChangedNotificationMessage: "UTXOsChangedNotification",
CmdGetUTXOsByAddressesRequestMessage: "GetUTXOsByAddressesRequest",
CmdGetUTXOsByAddressesResponseMessage: "GetUTXOsByAddressesResponse",
CmdGetVirtualSelectedParentBlueScoreRequestMessage: "GetVirtualSelectedParentBlueScoreRequest",
CmdGetVirtualSelectedParentBlueScoreResponseMessage: "GetVirtualSelectedParentBlueScoreResponse",
CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage: "NotifyVirtualSelectedParentBlueScoreChangedRequest",
CmdNotifyVirtualSelectedParentBlueScoreChangedResponseMessage: "NotifyVirtualSelectedParentBlueScoreChangedResponse",
CmdVirtualSelectedParentBlueScoreChangedNotificationMessage: "VirtualSelectedParentBlueScoreChangedNotification",
}
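A hedged lookup sketch that consults both maps above; the package may already expose an equivalent String helper, so treat this as illustrative only:
func commandNameSketch(cmd MessageCommand) string {
	if name, ok := ProtocolMessageCommandToString[cmd]; ok {
		return name
	}
	if name, ok := RPCMessageCommandToString[cmd]; ok {
		return name
	}
	return "unknown command"
}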
// Message is an interface that describes a kaspa message. A type that

View File

@@ -0,0 +1,19 @@
package appmessage
// BlockHeadersMessage represents a kaspa BlockHeaders message
type BlockHeadersMessage struct {
baseMessage
BlockHeaders []*MsgBlockHeader
}
// Command returns the protocol command string for the message
func (msg *BlockHeadersMessage) Command() MessageCommand {
return CmdBlockHeaders
}
// NewBlockHeadersMessage returns a new kaspa BlockHeaders message
func NewBlockHeadersMessage(blockHeaders []*MsgBlockHeader) *BlockHeadersMessage {
return &BlockHeadersMessage{
BlockHeaders: blockHeaders,
}
}

View File

@@ -1,22 +0,0 @@
package appmessage
// MsgIBDRootNotFound implements the Message interface and represents a kaspa
// IBDRootNotFound message. It is used to notify the IBD root that was requested
// by other peer was not found.
//
// This message has no payload.
type MsgIBDRootNotFound struct {
baseMessage
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgIBDRootNotFound) Command() MessageCommand {
return CmdIBDRootNotFound
}
// NewMsgIBDRootNotFound returns a new kaspa IBDRootNotFound message that conforms to the
// Message interface.
func NewMsgIBDRootNotFound() *MsgDoneHeaders {
return &MsgDoneHeaders{}
}

View File

@@ -71,7 +71,7 @@ func (msg *MsgBlock) MaxPayloadLength(pver uint32) uint32 {
// Note: this operation modifies the block in place.
func (msg *MsgBlock) ConvertToPartial(subnetworkID *externalapi.DomainSubnetworkID) {
for _, tx := range msg.Transactions {
if tx.SubnetworkID != *subnetworkID {
if !tx.SubnetworkID.Equal(subnetworkID) {
tx.Payload = []byte{}
}
}

View File

@@ -5,15 +5,15 @@
package appmessage
import (
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/util/mstime"
"math"
"reflect"
"testing"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
)
// TestBlock tests the MsgBlock API.
@@ -110,7 +110,7 @@ func TestConvertToPartial(t *testing.T) {
for _, testTransaction := range transactions {
var subnetworkTx *MsgTx
for _, blockTransaction := range block.Transactions {
if blockTransaction.SubnetworkID == *testTransaction.subnetworkID {
if blockTransaction.SubnetworkID.Equal(testTransaction.subnetworkID) {
subnetworkTx = blockTransaction
}
}
@@ -127,10 +127,10 @@ func TestConvertToPartial(t *testing.T) {
}
}
// blockOne is the first block in the mainnet block DAG.
//blockOne is the first block in the mainnet block DAG.
var blockOne = MsgBlock{
Header: MsgBlockHeader{
Version: 1,
Version: 0,
ParentHashes: []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash},
HashMerkleRoot: mainnetGenesisMerkleRoot,
AcceptedIDMerkleRoot: exampleAcceptedIDMerkleRoot,
@@ -156,19 +156,21 @@ var blockOne = MsgBlock{
[]*TxOut{
{
Value: 0x12a05f200,
ScriptPubKey: []byte{
0x41, // OP_DATA_65
0x04, 0x96, 0xb5, 0x38, 0xe8, 0x53, 0x51, 0x9c,
0x72, 0x6a, 0x2c, 0x91, 0xe6, 0x1e, 0xc1, 0x16,
0x00, 0xae, 0x13, 0x90, 0x81, 0x3a, 0x62, 0x7c,
0x66, 0xfb, 0x8b, 0xe7, 0x94, 0x7b, 0xe6, 0x3c,
0x52, 0xda, 0x75, 0x89, 0x37, 0x95, 0x15, 0xd4,
0xe0, 0xa6, 0x04, 0xf8, 0x14, 0x17, 0x81, 0xe6,
0x22, 0x94, 0x72, 0x11, 0x66, 0xbf, 0x62, 0x1e,
0x73, 0xa8, 0x2c, 0xbf, 0x23, 0x42, 0xc8, 0x58,
0xee, // 65-byte signature
0xac, // OP_CHECKSIG
},
ScriptPubKey: &externalapi.ScriptPublicKey{
Script: []byte{
0x41, // OP_DATA_65
0x04, 0x96, 0xb5, 0x38, 0xe8, 0x53, 0x51, 0x9c,
0x72, 0x6a, 0x2c, 0x91, 0xe6, 0x1e, 0xc1, 0x16,
0x00, 0xae, 0x13, 0x90, 0x81, 0x3a, 0x62, 0x7c,
0x66, 0xfb, 0x8b, 0xe7, 0x94, 0x7b, 0xe6, 0x3c,
0x52, 0xda, 0x75, 0x89, 0x37, 0x95, 0x15, 0xd4,
0xe0, 0xa6, 0x04, 0xf8, 0x14, 0x17, 0x81, 0xe6,
0x22, 0x94, 0x72, 0x11, 0x66, 0xbf, 0x62, 0x1e,
0x73, 0xa8, 0x2c, 0xbf, 0x23, 0x42, 0xc8, 0x58,
0xee, // 65-byte signature
0xac, // OP_CHECKSIG
},
Version: 0},
},
}),
},
@@ -176,7 +178,7 @@ var blockOne = MsgBlock{
// Block one serialized bytes.
var blockOneBytes = []byte{
0x01, 0x00, 0x00, 0x00, // Version 1
0x00, 0x00, // Version 0
0x02, // NumParentBlocks
0xdc, 0x5f, 0x5b, 0x5b, 0x1d, 0xc2, 0xa7, 0x25, // mainnetGenesisHash
0x49, 0xd5, 0x1d, 0x4d, 0xee, 0xd7, 0xa4, 0x8b,
@@ -202,7 +204,7 @@ var blockOneBytes = []byte{
0xff, 0xff, 0x00, 0x1d, // Bits
0x01, 0xe3, 0x62, 0x99, 0x00, 0x00, 0x00, 0x00, // Fake Nonce
0x01, // TxnCount
0x01, 0x00, 0x00, 0x00, // Version
0x00, 0x00, 0x00, 0x00, // Version
0x01, // Varint for number of transaction inputs
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,

View File

@@ -7,7 +7,7 @@ package appmessage
import (
"math"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
@@ -37,7 +37,7 @@ type MsgBlockHeader struct {
baseMessage
// Version of the block. This is not the same as the protocol version.
Version int32
Version uint16
// Hashes of the parent block headers in the blockDAG.
ParentHashes []*externalapi.DomainHash
@@ -73,7 +73,7 @@ func (h *MsgBlockHeader) NumParentBlocks() byte {
// BlockHash computes the block identifier hash for the given block header.
func (h *MsgBlockHeader) BlockHash() *externalapi.DomainHash {
return consensusserialization.HeaderHash(BlockHeaderToDomainBlockHeader(h))
return consensushashing.HeaderHash(BlockHeaderToDomainBlockHeader(h))
}
// IsGenesis returns true iff this block is a genesis block
@@ -90,7 +90,7 @@ func (h *MsgBlockHeader) Command() MessageCommand {
// NewBlockHeader returns a new MsgBlockHeader using the provided version, previous
// block hash, hash merkle root, accepted ID merkle root, difficulty bits, and nonce used to generate the
// block with defaults or calculated values for the remaining fields.
func NewBlockHeader(version int32, parentHashes []*externalapi.DomainHash, hashMerkleRoot *externalapi.DomainHash,
func NewBlockHeader(version uint16, parentHashes []*externalapi.DomainHash, hashMerkleRoot *externalapi.DomainHash,
acceptedIDMerkleRoot *externalapi.DomainHash, utxoCommitment *externalapi.DomainHash, bits uint32, nonce uint64) *MsgBlockHeader {
// Limit the timestamp to one millisecond precision since the protocol

View File

@@ -3,8 +3,6 @@ package appmessage
import (
"testing"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/davecgh/go-spew/spew"
@@ -13,7 +11,7 @@ import (
// TestBlockLocator tests the MsgBlockLocator API.
func TestBlockLocator(t *testing.T) {
hashStr := "000000000002e7ad7b9eef9479e4aabc65cb831269cc20d2632c13684406dee0"
locatorHash, err := hashes.FromString(hashStr)
locatorHash, err := externalapi.NewDomainHashFromString(hashStr)
if err != nil {
t.Errorf("NewHashFromStr: %v", err)
}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgDonePruningPointUTXOSetChunks represents a kaspa DonePruningPointUTXOSetChunks message
type MsgDonePruningPointUTXOSetChunks struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgDonePruningPointUTXOSetChunks) Command() MessageCommand {
return CmdDonePruningPointUTXOSetChunks
}
// NewMsgDonePruningPointUTXOSetChunks returns a new MsgDonePruningPointUTXOSetChunks.
func NewMsgDonePruningPointUTXOSetChunks() *MsgDonePruningPointUTXOSetChunks {
return &MsgDonePruningPointUTXOSetChunks{}
}

View File

@@ -25,7 +25,7 @@ func TestIBDBlock(t *testing.T) {
bh := NewBlockHeader(1, parentHashes, hashMerkleRoot, acceptedIDMerkleRoot, utxoCommitment, bits, nonce)
// Ensure the command is expected value.
wantCmd := MessageCommand(17)
wantCmd := MessageCommand(15)
msg := NewMsgIBDBlock(NewMsgBlock(bh))
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgIBDBlock: wrong command - got %v want %v",

View File

@@ -0,0 +1,27 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgIBDBlockLocator represents a kaspa ibdBlockLocator message
type MsgIBDBlockLocator struct {
baseMessage
TargetHash *externalapi.DomainHash
BlockLocatorHashes []*externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgIBDBlockLocator) Command() MessageCommand {
return CmdIBDBlockLocator
}
// NewMsgIBDBlockLocator returns a new kaspa ibdBlockLocator message
func NewMsgIBDBlockLocator(targetHash *externalapi.DomainHash,
blockLocatorHashes []*externalapi.DomainHash) *MsgIBDBlockLocator {
return &MsgIBDBlockLocator{
TargetHash: targetHash,
BlockLocatorHashes: blockLocatorHashes,
}
}

View File

@@ -0,0 +1,23 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgIBDBlockLocatorHighestHash represents a kaspa BlockLocatorHighestHash message
type MsgIBDBlockLocatorHighestHash struct {
baseMessage
HighestHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgIBDBlockLocatorHighestHash) Command() MessageCommand {
return CmdIBDBlockLocatorHighestHash
}
// NewMsgIBDBlockLocatorHighestHash returns a new BlockLocatorHighestHash message
func NewMsgIBDBlockLocatorHighestHash(highestHash *externalapi.DomainHash) *MsgIBDBlockLocatorHighestHash {
return &MsgIBDBlockLocatorHighestHash{
HighestHash: highestHash,
}
}

View File

@@ -1,23 +0,0 @@
package appmessage
// MsgIBDRootUTXOSetAndBlock implements the Message interface and represents a kaspa
// IBDRootUTXOSetAndBlock message. It is used to answer RequestIBDRootUTXOSetAndBlock messages.
type MsgIBDRootUTXOSetAndBlock struct {
baseMessage
UTXOSet []byte
Block *MsgBlock
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgIBDRootUTXOSetAndBlock) Command() MessageCommand {
return CmdIBDRootUTXOSetAndBlock
}
// NewMsgIBDRootUTXOSetAndBlock returns a new MsgIBDRootUTXOSetAndBlock.
func NewMsgIBDRootUTXOSetAndBlock(utxoSet []byte, block *MsgBlock) *MsgIBDRootUTXOSetAndBlock {
return &MsgIBDRootUTXOSetAndBlock{
UTXOSet: utxoSet,
Block: block,
}
}

View File

@@ -0,0 +1,23 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgPruningPointHashMessage represents a kaspa PruningPointHash message
type MsgPruningPointHashMessage struct {
baseMessage
Hash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPointHashMessage) Command() MessageCommand {
return CmdPruningPointHash
}
// NewPruningPointHashMessage returns a new kaspa PruningPointHash message
func NewPruningPointHashMessage(hash *externalapi.DomainHash) *MsgPruningPointHashMessage {
return &MsgPruningPointHashMessage{
Hash: hash,
}
}

View File

@@ -0,0 +1,36 @@
package appmessage
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// MsgPruningPointUTXOSetChunk represents a kaspa PruningPointUTXOSetChunk message
type MsgPruningPointUTXOSetChunk struct {
baseMessage
OutpointAndUTXOEntryPairs []*OutpointAndUTXOEntryPair
}
// Command returns the protocol command string for the message
func (msg *MsgPruningPointUTXOSetChunk) Command() MessageCommand {
return CmdPruningPointUTXOSetChunk
}
// NewMsgPruningPointUTXOSetChunk returns a new MsgPruningPointUTXOSetChunk.
func NewMsgPruningPointUTXOSetChunk(outpointAndUTXOEntryPairs []*OutpointAndUTXOEntryPair) *MsgPruningPointUTXOSetChunk {
return &MsgPruningPointUTXOSetChunk{
OutpointAndUTXOEntryPairs: outpointAndUTXOEntryPairs,
}
}
// OutpointAndUTXOEntryPair is an outpoint along with its
// respective UTXO entry
type OutpointAndUTXOEntryPair struct {
Outpoint *Outpoint
UTXOEntry *UTXOEntry
}
// UTXOEntry houses details about an individual transaction output in a UTXO
type UTXOEntry struct {
Amount uint64
ScriptPublicKey *externalapi.ScriptPublicKey
BlockBlueScore uint64
IsCoinbase bool
}
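A hedged construction sketch for a single-pair chunk, using only the types defined in this file; the outpoint and entry values come from the caller:
func singlePairChunkSketch(outpoint *Outpoint, entry *UTXOEntry) *MsgPruningPointUTXOSetChunk {
	pair := &OutpointAndUTXOEntryPair{Outpoint: outpoint, UTXOEntry: entry}
	return NewMsgPruningPointUTXOSetChunk([]*OutpointAndUTXOEntryPair{pair})
}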

View File

@@ -5,13 +5,14 @@ import (
)
// MsgRequestBlockLocator implements the Message interface and represents a kaspa
// RequestBlockLocator message. It is used to request a block locator between high
// and low hash.
// RequestBlockLocator message. It is used to request a block locator between low
// and high hash.
// The locator is returned via a locator message (MsgBlockLocator).
type MsgRequestBlockLocator struct {
baseMessage
HighHash *externalapi.DomainHash
LowHash *externalapi.DomainHash
HighHash *externalapi.DomainHash
Limit uint32
}
// Command returns the protocol command string for the message. This is part
@@ -23,9 +24,10 @@ func (msg *MsgRequestBlockLocator) Command() MessageCommand {
// NewMsgRequestBlockLocator returns a new RequestBlockLocator message that conforms to the
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgRequestBlockLocator(highHash, lowHash *externalapi.DomainHash) *MsgRequestBlockLocator {
func NewMsgRequestBlockLocator(lowHash, highHash *externalapi.DomainHash, limit uint32) *MsgRequestBlockLocator {
return &MsgRequestBlockLocator{
HighHash: highHash,
LowHash: lowHash,
HighHash: highHash,
Limit: limit,
}
}
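A hedged usage sketch of the new constructor signature; whether a zero limit means "no cap" is an assumption here, not something this file states:
func requestLocatorSketch(lowHash, highHash *externalapi.DomainHash) *MsgRequestBlockLocator {
	return NewMsgRequestBlockLocator(lowHash, highHash, 0)
}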

View File

@@ -4,21 +4,19 @@ import (
"testing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
)
// TestRequestBlockLocator tests the MsgRequestBlockLocator API.
func TestRequestBlockLocator(t *testing.T) {
hashStr := "000000000002e7ad7b9eef9479e4aabc65cb831269cc20d2632c13684406dee0"
highHash, err := hashes.FromString(hashStr)
highHash, err := externalapi.NewDomainHashFromString(hashStr)
if err != nil {
t.Errorf("NewHashFromStr: %v", err)
}
// Ensure the command is expected value.
wantCmd := MessageCommand(9)
msg := NewMsgRequestBlockLocator(highHash, &externalapi.DomainHash{})
msg := NewMsgRequestBlockLocator(highHash, &externalapi.DomainHash{}, 0)
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgRequestBlockLocator: wrong command - got %v want %v",
cmd, wantCmd)

View File

@@ -7,26 +7,26 @@ package appmessage
import (
"testing"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// TestRequstIBDBlocks tests the MsgRequestHeaders API.
func TestRequstIBDBlocks(t *testing.T) {
hashStr := "000000000002e7ad7b9eef9479e4aabc65cb831269cc20d2632c13684406dee0"
lowHash, err := hashes.FromString(hashStr)
lowHash, err := externalapi.NewDomainHashFromString(hashStr)
if err != nil {
t.Errorf("NewHashFromStr: %v", err)
}
hashStr = "000000000003ba27aa200b1cecaad478d2b00432346c3f1f3986da1afd33e506"
highHash, err := hashes.FromString(hashStr)
highHash, err := externalapi.NewDomainHashFromString(hashStr)
if err != nil {
t.Errorf("NewHashFromStr: %v", err)
}
// Ensure we get the same data back out.
msg := NewMsgRequstHeaders(lowHash, highHash)
if *msg.HighHash != *highHash {
if !msg.HighHash.Equal(highHash) {
t.Errorf("NewMsgRequstHeaders: wrong high hash - got %v, want %v",
msg.HighHash, highHash)
}

View File

@@ -4,10 +4,6 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MaxRequestIBDBlocksHashes is the maximum number of hashes that can
// be in a single RequestIBDBlocks message.
const MaxRequestIBDBlocksHashes = MaxInvPerMsg
// MsgRequestIBDBlocks implements the Message interface and represents a kaspa
// RequestIBDBlocks message. It is used to request blocks as part of the IBD
// protocol.

View File

@@ -1,26 +0,0 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestIBDRootUTXOSetAndBlock implements the Message interface and represents a kaspa
// RequestIBDRootUTXOSetAndBlock message. It is used to request the UTXO set and block body
// of the IBD root block.
type MsgRequestIBDRootUTXOSetAndBlock struct {
baseMessage
IBDRoot *externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestIBDRootUTXOSetAndBlock) Command() MessageCommand {
return CmdRequestIBDRootUTXOSetAndBlock
}
// NewMsgRequestIBDRootUTXOSetAndBlock returns a new MsgRequestIBDRootUTXOSetAndBlock.
func NewMsgRequestIBDRootUTXOSetAndBlock(ibdRoot *externalapi.DomainHash) *MsgRequestIBDRootUTXOSetAndBlock {
return &MsgRequestIBDRootUTXOSetAndBlock{
IBDRoot: ibdRoot,
}
}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgRequestNextPruningPointUTXOSetChunk represents a kaspa RequestNextPruningPointUTXOSetChunk message
type MsgRequestNextPruningPointUTXOSetChunk struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestNextPruningPointUTXOSetChunk) Command() MessageCommand {
return CmdRequestNextPruningPointUTXOSetChunk
}
// NewMsgRequestNextPruningPointUTXOSetChunk returns a new MsgRequestNextPruningPointUTXOSetChunk.
func NewMsgRequestNextPruningPointUTXOSetChunk() *MsgRequestNextPruningPointUTXOSetChunk {
return &MsgRequestNextPruningPointUTXOSetChunk{}
}

View File

@@ -0,0 +1,23 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestPruningPointUTXOSetAndBlock represents a kaspa RequestPruningPointUTXOSetAndBlock message
type MsgRequestPruningPointUTXOSetAndBlock struct {
baseMessage
PruningPointHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointUTXOSetAndBlock) Command() MessageCommand {
return CmdRequestPruningPointUTXOSetAndBlock
}
// NewMsgRequestPruningPointUTXOSetAndBlock returns a new MsgRequestPruningPointUTXOSetAndBlock
func NewMsgRequestPruningPointUTXOSetAndBlock(pruningPointHash *externalapi.DomainHash) *MsgRequestPruningPointUTXOSetAndBlock {
return &MsgRequestPruningPointUTXOSetAndBlock{
PruningPointHash: pruningPointHash,
}
}

View File

@@ -1,21 +0,0 @@
package appmessage
// MsgRequestSelectedTip implements the Message interface and represents a kaspa
// RequestSelectedTip message. It is used to request the selected tip of another peer.
//
// This message has no payload.
type MsgRequestSelectedTip struct {
baseMessage
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestSelectedTip) Command() MessageCommand {
return CmdRequestSelectedTip
}
// NewMsgRequestSelectedTip returns a new kaspa RequestSelectedTip message that conforms to the
// Message interface.
func NewMsgRequestSelectedTip() *MsgRequestSelectedTip {
return &MsgRequestSelectedTip{}
}

View File

@@ -1,20 +0,0 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package appmessage
import (
"testing"
)
// TestRequestSelectedTip tests the MsgRequestSelectedTip API.
func TestRequestSelectedTip(t *testing.T) {
// Ensure the command is expected value.
wantCmd := MessageCommand(12)
msg := NewMsgRequestSelectedTip()
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgRequestSelectedTip: wrong command - got %v want %v",
cmd, wantCmd)
}
}

View File

@@ -1,28 +0,0 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgSelectedTip implements the Message interface and represents a kaspa
// selectedtip message. It is used to answer getseltip messages and tell
// the asking peer what is the selected tip of this peer.
type MsgSelectedTip struct {
baseMessage
// The selected tip hash of the generator of the message.
SelectedTipHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgSelectedTip) Command() MessageCommand {
return CmdSelectedTip
}
// NewMsgSelectedTip returns a new kaspa selectedtip message that conforms to the
// Message interface.
func NewMsgSelectedTip(selectedTipHash *externalapi.DomainHash) *MsgSelectedTip {
return &MsgSelectedTip{
SelectedTipHash: selectedTipHash,
}
}

View File

@@ -1,18 +0,0 @@
package appmessage
import (
"testing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// TestSelectedTip tests the MsgSelectedTip API.
func TestSelectedTip(t *testing.T) {
// Ensure the command is expected value.
wantCmd := MessageCommand(11)
msg := NewMsgSelectedTip(&externalapi.DomainHash{})
if cmd := msg.Command(); cmd != wantCmd {
t.Errorf("NewMsgSelectedTip: wrong command - got %v want %v",
cmd, wantCmd)
}
}

View File

@@ -6,13 +6,10 @@ package appmessage
import (
"encoding/binary"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"strconv"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
@@ -41,8 +38,8 @@ const (
maxTxInPerMessage = (MaxMessagePayload / minTxInPayload) + 1
// MinTxOutPayload is the minimum payload size for a transaction output.
// Value 8 bytes + Varint for ScriptPubKey length 1 byte.
MinTxOutPayload = 9
// Value 8 bytes + version 2 bytes + Varint for ScriptPublicKey length 1 byte.
MinTxOutPayload = 11
// maxTxOutPerMessage is the maximum number of transactions outputs that
// a transaction which fits into a message could possibly have.
@@ -99,23 +96,23 @@ type TxIn struct {
// NewTxIn returns a new kaspa transaction input with the provided
// previous outpoint point and signature script with a default sequence of
// MaxTxInSequenceNum.
func NewTxIn(prevOut *Outpoint, signatureScript []byte) *TxIn {
func NewTxIn(prevOut *Outpoint, signatureScript []byte, sequence uint64) *TxIn {
return &TxIn{
PreviousOutpoint: *prevOut,
SignatureScript: signatureScript,
Sequence: constants.MaxTxInSequenceNum,
Sequence: sequence,
}
}
// TxOut defines a kaspa transaction output.
type TxOut struct {
Value uint64
ScriptPubKey []byte
ScriptPubKey *externalapi.ScriptPublicKey
}
// NewTxOut returns a new kaspa transaction output with the provided
// transaction value and public key script.
func NewTxOut(value uint64, scriptPubKey []byte) *TxOut {
func NewTxOut(value uint64, scriptPubKey *externalapi.ScriptPublicKey) *TxOut {
return &TxOut{
Value: value,
ScriptPubKey: scriptPubKey,
@@ -130,7 +127,7 @@ func NewTxOut(value uint64, scriptPubKey []byte) *TxOut {
// inputs and outputs.
type MsgTx struct {
baseMessage
Version int32
Version uint16
TxIn []*TxIn
TxOut []*TxOut
LockTime uint64
@@ -162,12 +159,12 @@ func (msg *MsgTx) IsCoinBase() bool {
// TxHash generates the Hash for the transaction.
func (msg *MsgTx) TxHash() *externalapi.DomainHash {
return consensusserialization.TransactionHash(MsgTxToDomainTransaction(msg))
return consensushashing.TransactionHash(MsgTxToDomainTransaction(msg))
}
// TxID generates the Hash for the transaction without the signature script, gas and payload fields.
func (msg *MsgTx) TxID() *externalapi.DomainTransactionID {
return consensusserialization.TransactionID(MsgTxToDomainTransaction(msg))
return consensushashing.TransactionID(MsgTxToDomainTransaction(msg))
}
// Copy creates a deep copy of a transaction so that the original does not get
@@ -220,20 +217,20 @@ func (msg *MsgTx) Copy() *MsgTx {
// Deep copy the old TxOut data.
for _, oldTxOut := range msg.TxOut {
// Deep copy the old ScriptPubKey
var newScript []byte
// Deep copy the old ScriptPublicKey
var newScript externalapi.ScriptPublicKey
oldScript := oldTxOut.ScriptPubKey
oldScriptLen := len(oldScript)
oldScriptLen := len(oldScript.Script)
if oldScriptLen > 0 {
newScript = make([]byte, oldScriptLen)
copy(newScript, oldScript[:oldScriptLen])
newScript = externalapi.ScriptPublicKey{Script: make([]byte, oldScriptLen), Version: oldScript.Version}
copy(newScript.Script, oldScript.Script[:oldScriptLen])
}
// Create new txOut with the deep copied data and append it to
// new Tx.
newTxOut := TxOut{
Value: oldTxOut.Value,
ScriptPubKey: newScript,
ScriptPubKey: &newScript,
}
newTx.TxOut = append(newTx.TxOut, &newTxOut)
}
@@ -259,8 +256,8 @@ func (msg *MsgTx) MaxPayloadLength(pver uint32) uint32 {
// 3. The transaction's subnetwork
func (msg *MsgTx) IsSubnetworkCompatible(subnetworkID *externalapi.DomainSubnetworkID) bool {
return subnetworkID == nil ||
*subnetworkID == subnetworks.SubnetworkIDNative ||
*subnetworkID == msg.SubnetworkID
subnetworkID.Equal(&subnetworks.SubnetworkIDNative) ||
subnetworkID.Equal(&msg.SubnetworkID)
}
// newMsgTx returns a new tx message that conforms to the Message interface.
@@ -272,7 +269,7 @@ func (msg *MsgTx) IsSubnetworkCompatible(subnetworkID *externalapi.DomainSubnetw
// The payload hash is calculated automatically according to provided payload.
// Also, the lock time is set to zero to indicate the transaction is valid
// immediately as opposed to some time in future.
func newMsgTx(version int32, txIn []*TxIn, txOut []*TxOut, subnetworkID *externalapi.DomainSubnetworkID,
func newMsgTx(version uint16, txIn []*TxIn, txOut []*TxOut, subnetworkID *externalapi.DomainSubnetworkID,
gas uint64, payload []byte, lockTime uint64) *MsgTx {
if txIn == nil {
@@ -285,7 +282,7 @@ func newMsgTx(version int32, txIn []*TxIn, txOut []*TxOut, subnetworkID *externa
var payloadHash externalapi.DomainHash
if *subnetworkID != subnetworks.SubnetworkIDNative {
payloadHash = *hashes.HashData(payload)
payloadHash = *hashes.PayloadHash(payload)
}
return &MsgTx{
@@ -301,12 +298,12 @@ func newMsgTx(version int32, txIn []*TxIn, txOut []*TxOut, subnetworkID *externa
}
// NewNativeMsgTx returns a new tx message in the native subnetwork
func NewNativeMsgTx(version int32, txIn []*TxIn, txOut []*TxOut) *MsgTx {
func NewNativeMsgTx(version uint16, txIn []*TxIn, txOut []*TxOut) *MsgTx {
return newMsgTx(version, txIn, txOut, &subnetworks.SubnetworkIDNative, 0, nil, 0)
}
// NewSubnetworkMsgTx returns a new tx message in the specified subnetwork with specified gas and payload
func NewSubnetworkMsgTx(version int32, txIn []*TxIn, txOut []*TxOut, subnetworkID *externalapi.DomainSubnetworkID,
func NewSubnetworkMsgTx(version uint16, txIn []*TxIn, txOut []*TxOut, subnetworkID *externalapi.DomainSubnetworkID,
gas uint64, payload []byte) *MsgTx {
return newMsgTx(version, txIn, txOut, subnetworkID, gas, payload, 0)
@@ -315,12 +312,12 @@ func NewSubnetworkMsgTx(version int32, txIn []*TxIn, txOut []*TxOut, subnetworkI
// NewNativeMsgTxWithLocktime returns a new tx message in the native subnetwork with a locktime.
//
// See newMsgTx for further documentation of the parameters
func NewNativeMsgTxWithLocktime(version int32, txIn []*TxIn, txOut []*TxOut, locktime uint64) *MsgTx {
func NewNativeMsgTxWithLocktime(version uint16, txIn []*TxIn, txOut []*TxOut, locktime uint64) *MsgTx {
return newMsgTx(version, txIn, txOut, &subnetworks.SubnetworkIDNative, 0, nil, locktime)
}
// NewRegistryMsgTx creates a new MsgTx that registers a new subnetwork
func NewRegistryMsgTx(version int32, txIn []*TxIn, txOut []*TxOut, gasLimit uint64) *MsgTx {
func NewRegistryMsgTx(version uint16, txIn []*TxIn, txOut []*TxOut, gasLimit uint64) *MsgTx {
payload := make([]byte, 8)
binary.LittleEndian.PutUint64(payload, gasLimit)

View File

@@ -11,7 +11,7 @@ import (
"reflect"
"testing"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionid"
@@ -52,7 +52,7 @@ func TestTx(t *testing.T) {
// testing package functionality.
prevOutIndex := uint32(1)
prevOut := NewOutpoint(txID, prevOutIndex)
if prevOut.TxID != *txID {
if !prevOut.TxID.Equal(txID) {
t.Errorf("NewOutpoint: wrong ID - got %v, want %v",
spew.Sprint(&prevOut.TxID), spew.Sprint(txID))
}
@@ -68,7 +68,7 @@ func TestTx(t *testing.T) {
// Ensure we get the same transaction input back out.
sigScript := []byte{0x04, 0x31, 0xdc, 0x00, 0x1b, 0x01, 0x62}
txIn := NewTxIn(prevOut, sigScript)
txIn := NewTxIn(prevOut, sigScript, constants.MaxTxInSequenceNum)
if !reflect.DeepEqual(&txIn.PreviousOutpoint, prevOut) {
t.Errorf("NewTxIn: wrong prev outpoint - got %v, want %v",
spew.Sprint(&txIn.PreviousOutpoint),
@@ -82,26 +82,28 @@ func TestTx(t *testing.T) {
// Ensure we get the same transaction output back out.
txValue := uint64(5000000000)
scriptPubKey := []byte{
0x41, // OP_DATA_65
0x04, 0xd6, 0x4b, 0xdf, 0xd0, 0x9e, 0xb1, 0xc5,
0xfe, 0x29, 0x5a, 0xbd, 0xeb, 0x1d, 0xca, 0x42,
0x81, 0xbe, 0x98, 0x8e, 0x2d, 0xa0, 0xb6, 0xc1,
0xc6, 0xa5, 0x9d, 0xc2, 0x26, 0xc2, 0x86, 0x24,
0xe1, 0x81, 0x75, 0xe8, 0x51, 0xc9, 0x6b, 0x97,
0x3d, 0x81, 0xb0, 0x1c, 0xc3, 0x1f, 0x04, 0x78,
0x34, 0xbc, 0x06, 0xd6, 0xd6, 0xed, 0xf6, 0x20,
0xd1, 0x84, 0x24, 0x1a, 0x6a, 0xed, 0x8b, 0x63,
0xa6, // 65-byte signature
0xac, // OP_CHECKSIG
}
scriptPubKey := &externalapi.ScriptPublicKey{
Script: []byte{
0x41, // OP_DATA_65
0x04, 0xd6, 0x4b, 0xdf, 0xd0, 0x9e, 0xb1, 0xc5,
0xfe, 0x29, 0x5a, 0xbd, 0xeb, 0x1d, 0xca, 0x42,
0x81, 0xbe, 0x98, 0x8e, 0x2d, 0xa0, 0xb6, 0xc1,
0xc6, 0xa5, 0x9d, 0xc2, 0x26, 0xc2, 0x86, 0x24,
0xe1, 0x81, 0x75, 0xe8, 0x51, 0xc9, 0x6b, 0x97,
0x3d, 0x81, 0xb0, 0x1c, 0xc3, 0x1f, 0x04, 0x78,
0x34, 0xbc, 0x06, 0xd6, 0xd6, 0xed, 0xf6, 0x20,
0xd1, 0x84, 0x24, 0x1a, 0x6a, 0xed, 0x8b, 0x63,
0xa6, // 65-byte signature
0xac, // OP_CHECKSIG
},
Version: 0}
txOut := NewTxOut(txValue, scriptPubKey)
if txOut.Value != txValue {
t.Errorf("NewTxOut: wrong scriptPubKey - got %v, want %v",
txOut.Value, txValue)
}
if !bytes.Equal(txOut.ScriptPubKey, scriptPubKey) {
if !bytes.Equal(txOut.ScriptPubKey.Script, scriptPubKey.Script) {
t.Errorf("NewTxOut: wrong scriptPubKey - got %v, want %v",
spew.Sdump(txOut.ScriptPubKey),
spew.Sdump(scriptPubKey))
@@ -131,11 +133,15 @@ func TestTx(t *testing.T) {
// TestTxHash tests the ability to generate the hash of a transaction accurately.
func TestTxHashAndID(t *testing.T) {
txID1Str := "a3d29c39bfb578235e4813cc8138a9ba10def63acad193a7a880159624840d7f"
txHash1Str := "4bee9ee495bd93a755de428376bd582a2bb6ec37c041753b711c0606d5745c13"
txID1Str := "f868bd20e816256b80eac976821be4589d24d21141bd1cec6e8005d0c16c6881"
wantTxID1, err := transactionid.FromString(txID1Str)
if err != nil {
t.Errorf("NewTxIDFromStr: %v", err)
return
t.Fatalf("NewTxIDFromStr: %v", err)
}
wantTxHash1, err := transactionid.FromString(txHash1Str)
if err != nil {
t.Fatalf("NewTxIDFromStr: %v", err)
}
// A coinbase transaction
@@ -149,7 +155,7 @@ func TestTxHashAndID(t *testing.T) {
}
txOut := &TxOut{
Value: 5000000000,
ScriptPubKey: []byte{
ScriptPubKey: &externalapi.ScriptPublicKey{Script: []byte{
0x41, // OP_DATA_65
0x04, 0xd6, 0x4b, 0xdf, 0xd0, 0x9e, 0xb1, 0xc5,
0xfe, 0x29, 0x5a, 0xbd, 0xeb, 0x1d, 0xca, 0x42,
@@ -161,32 +167,32 @@ func TestTxHashAndID(t *testing.T) {
0xd1, 0x84, 0x24, 0x1a, 0x6a, 0xed, 0x8b, 0x63,
0xa6, // 65-byte signature
0xac, // OP_CHECKSIG
},
}, Version: 0},
}
tx1 := NewSubnetworkMsgTx(1, []*TxIn{txIn}, []*TxOut{txOut}, &subnetworks.SubnetworkIDCoinbase, 0, nil)
tx1 := NewSubnetworkMsgTx(0, []*TxIn{txIn}, []*TxOut{txOut}, &subnetworks.SubnetworkIDCoinbase, 0, nil)
// Ensure the hash produced is expected.
tx1Hash := tx1.TxHash()
if *tx1Hash != (externalapi.DomainHash)(*wantTxID1) {
if *tx1Hash != (externalapi.DomainHash)(*wantTxHash1) {
t.Errorf("TxHash: wrong hash - got %v, want %v",
spew.Sprint(tx1Hash), spew.Sprint(wantTxID1))
spew.Sprint(tx1Hash), spew.Sprint(wantTxHash1))
}
// Ensure the TxID for coinbase transaction is the same as TxHash.
tx1ID := tx1.TxID()
if *tx1ID != *wantTxID1 {
if !tx1ID.Equal(wantTxID1) {
t.Errorf("TxID: wrong ID - got %v, want %v",
spew.Sprint(tx1ID), spew.Sprint(wantTxID1))
}
hash2Str := "c84f3009b337aaa3adeb2ffd41010d5f62dd773ca25b39c908a77da91f87b729"
wantHash2, err := hashes.FromString(hash2Str)
hash2Str := "cb1bdb4a83d4885535fb3cceb5c96597b7df903db83f0ffcd779d703affd8efd"
wantHash2, err := externalapi.NewDomainHashFromString(hash2Str)
if err != nil {
t.Errorf("NewTxIDFromStr: %v", err)
return
}
id2Str := "7c919f676109743a1271a88beeb43849a6f9cc653f6082e59a7266f3df4802b9"
id2Str := "ca080073d4ddf5b84443a0964af633f3c70a5b290fd3bc35a7e6f93fd33f9330"
wantID2, err := transactionid.FromString(id2Str)
if err != nil {
t.Errorf("NewTxIDFromStr: %v", err)
@@ -196,7 +202,7 @@ func TestTxHashAndID(t *testing.T) {
txIns := []*TxIn{{
PreviousOutpoint: Outpoint{
Index: 0,
TxID: externalapi.DomainTransactionID{1, 2, 3},
TxID: *externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{1, 2, 3}),
},
SignatureScript: []byte{
0x49, 0x30, 0x46, 0x02, 0x21, 0x00, 0xDA, 0x0D, 0xC6, 0xAE, 0xCE, 0xFE, 0x1E, 0x06, 0xEF, 0xDF,
@@ -214,42 +220,42 @@ func TestTxHashAndID(t *testing.T) {
txOuts := []*TxOut{
{
Value: 244623243,
ScriptPubKey: []byte{
ScriptPubKey: &externalapi.ScriptPublicKey{Script: []byte{
0x76, 0xA9, 0x14, 0xBA, 0xDE, 0xEC, 0xFD, 0xEF, 0x05, 0x07, 0x24, 0x7F, 0xC8, 0xF7, 0x42, 0x41,
0xD7, 0x3B, 0xC0, 0x39, 0x97, 0x2D, 0x7B, 0x88, 0xAC,
},
}, Version: 0},
},
{
Value: 44602432,
ScriptPubKey: []byte{
ScriptPubKey: &externalapi.ScriptPublicKey{Script: []byte{
0x76, 0xA9, 0x14, 0xC1, 0x09, 0x32, 0x48, 0x3F, 0xEC, 0x93, 0xED, 0x51, 0xF5, 0xFE, 0x95, 0xE7,
0x25, 0x59, 0xF2, 0xCC, 0x70, 0x43, 0xF9, 0x88, 0xAC,
},
}, Version: 0},
},
}
tx2 := NewSubnetworkMsgTx(1, txIns, txOuts, &externalapi.DomainSubnetworkID{1, 2, 3}, 0, payload)
// Ensure the hash produced is expected.
tx2Hash := tx2.TxHash()
if *tx2Hash != *wantHash2 {
if !tx2Hash.Equal(wantHash2) {
t.Errorf("TxHash: wrong hash - got %v, want %v",
spew.Sprint(tx2Hash), spew.Sprint(wantHash2))
}
// Ensure the TxID for coinbase transaction is the same as TxHash.
tx2ID := tx2.TxID()
if *tx2ID != *wantID2 {
if !tx2ID.Equal(wantID2) {
t.Errorf("TxID: wrong ID - got %v, want %v",
spew.Sprint(tx2ID), spew.Sprint(wantID2))
}
if *tx2ID == (externalapi.DomainTransactionID)(*tx2Hash) {
if tx2ID.Equal((*externalapi.DomainTransactionID)(tx2Hash)) {
t.Errorf("tx2ID and tx2Hash shouldn't be the same for non-coinbase transaction with signature and/or payload")
}
tx2.TxIn[0].SignatureScript = []byte{}
newTx2Hash := tx2.TxHash()
if *tx2ID != (externalapi.DomainTransactionID)(*newTx2Hash) {
t.Errorf("tx2ID and newTx2Hash should be the same for transaction with an empty signature")
if *tx2ID == (externalapi.DomainTransactionID)(*newTx2Hash) {
t.Errorf("tx2ID and newTx2Hash should not be the same even for transaction with an empty signature")
}
}
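
The hunks above capture three API changes at once: NewTxIn now takes an explicit sequence number, TxOut scripts are wrapped in a versioned *externalapi.ScriptPublicKey, and transaction IDs are built through NewDomainTransactionIDFromByteArray. Below is a minimal sketch (not part of the diff) of constructing a transaction against the new signatures, using only calls that appear in the hunks; the import paths for constants and subnetworks are assumptions based on the identifiers used above, and the byte values are illustrative.

package example

import (
	"github.com/kaspanet/kaspad/app/appmessage"
	"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
	"github.com/kaspanet/kaspad/domain/consensus/utils/constants"   // assumed path for MaxTxInSequenceNum
	"github.com/kaspanet/kaspad/domain/consensus/utils/subnetworks" // assumed path for SubnetworkIDCoinbase
)

func buildExampleTx() *appmessage.MsgTx {
	// Transaction IDs are now built from a fixed-size byte array.
	prevTxID := externalapi.NewDomainTransactionIDFromByteArray(&[externalapi.DomainHashSize]byte{1, 2, 3})
	prevOut := &appmessage.Outpoint{TxID: *prevTxID, Index: 0}

	// The sequence number is an explicit constructor argument after this change.
	txIn := appmessage.NewTxIn(prevOut, []byte{0x04, 0x31, 0xdc, 0x00}, constants.MaxTxInSequenceNum)

	// Scripts are wrapped in a versioned ScriptPublicKey instead of a raw byte slice.
	scriptPubKey := &externalapi.ScriptPublicKey{Script: []byte{0xac}, Version: 0}
	txOut := appmessage.NewTxOut(5000000000, scriptPubKey)

	return appmessage.NewSubnetworkMsgTx(0, []*appmessage.TxIn{txIn}, []*appmessage.TxOut{txOut},
		&subnetworks.SubnetworkIDCoinbase, 0, nil)
}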

View File

@@ -53,9 +53,6 @@ type MsgVersion struct {
// on the appmessage. This has a max length of MaxUserAgentLen.
UserAgent string
// The selected tip hash of the generator of the version message.
SelectedTipHash *externalapi.DomainHash
// Don't announce transactions to peer.
DisableRelayTx bool
@@ -85,7 +82,7 @@ func (msg *MsgVersion) Command() MessageCommand {
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgVersion(addr *NetAddress, id *id.ID, network string,
selectedTipHash *externalapi.DomainHash, subnetworkID *externalapi.DomainSubnetworkID) *MsgVersion {
subnetworkID *externalapi.DomainSubnetworkID) *MsgVersion {
// Limit the timestamp to one millisecond precision since the protocol
// doesn't support better.
@@ -97,7 +94,6 @@ func NewMsgVersion(addr *NetAddress, id *id.ID, network string,
Address: addr,
ID: id,
UserAgent: DefaultUserAgent,
SelectedTipHash: selectedTipHash,
DisableRelayTx: false,
SubnetworkID: subnetworkID,
}
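
With SelectedTipHash removed from MsgVersion, the constructor takes one fewer argument. A minimal sketch of the updated call, mirroring the version test updated in the next file; the address and port values are illustrative.

package example

import (
	"net"

	"github.com/kaspanet/kaspad/app/appmessage"
	"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
)

func buildVersionMessage() (*appmessage.MsgVersion, error) {
	tcpAddr := &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 16111}
	me := appmessage.NewNetAddress(tcpAddr, appmessage.SFNodeNetwork)

	generatedID, err := id.GenerateID()
	if err != nil {
		return nil, err
	}

	// The selectedTipHash argument was removed; only the subnetwork ID remains optional.
	return appmessage.NewMsgVersion(me, generatedID, "mainnet", nil), nil
}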

View File

@@ -10,7 +10,6 @@ import (
"testing"
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
)
@@ -19,7 +18,6 @@ func TestVersion(t *testing.T) {
pver := ProtocolVersion
// Create version message data.
selectedTipHash := &externalapi.DomainHash{12, 34}
tcpAddrMe := &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 16111}
me := NewNetAddress(tcpAddrMe, SFNodeNetwork)
generatedID, err := id.GenerateID()
@@ -28,7 +26,7 @@ func TestVersion(t *testing.T) {
}
// Ensure we get the correct data back out.
msg := NewMsgVersion(me, generatedID, "mainnet", selectedTipHash, nil)
msg := NewMsgVersion(me, generatedID, "mainnet", nil)
if msg.ProtocolVersion != pver {
t.Errorf("NewMsgVersion: wrong protocol version - got %v, want %v",
msg.ProtocolVersion, pver)
@@ -45,10 +43,6 @@ func TestVersion(t *testing.T) {
t.Errorf("NewMsgVersion: wrong user agent - got %v, want %v",
msg.UserAgent, DefaultUserAgent)
}
if *msg.SelectedTipHash != *selectedTipHash {
t.Errorf("NewMsgVersion: wrong selected tip hash - got %s, want %s",
msg.SelectedTipHash, selectedTipHash)
}
if msg.DisableRelayTx {
t.Errorf("NewMsgVersion: disable relay tx is not false by "+
"default - got %v, want %v", msg.DisableRelayTx, false)

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgRequestPruningPointHashMessage represents a kaspa RequestPruningPointHashMessage message
type MsgRequestPruningPointHashMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgRequestPruningPointHashMessage) Command() MessageCommand {
return CmdRequestPruningPointHash
}
// NewMsgRequestPruningPointHashMessage returns a new kaspa RequestPruningPointHash message
func NewMsgRequestPruningPointHashMessage() *MsgRequestPruningPointHashMessage {
return &MsgRequestPruningPointHashMessage{}
}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgUnexpectedPruningPoint represents a kaspa UnexpectedPruningPoint message
type MsgUnexpectedPruningPoint struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgUnexpectedPruningPoint) Command() MessageCommand {
return CmdUnexpectedPruningPoint
}
// NewMsgUnexpectedPruningPoint returns a new kaspa UnexpectedPruningPoint message
func NewMsgUnexpectedPruningPoint() *MsgUnexpectedPruningPoint {
return &MsgUnexpectedPruningPoint{}
}

View File

@@ -5,7 +5,6 @@ package appmessage
type GetBlockRequestMessage struct {
baseMessage
Hash string
SubnetworkID string
IncludeTransactionVerboseData bool
}
@@ -15,10 +14,9 @@ func (msg *GetBlockRequestMessage) Command() MessageCommand {
}
// NewGetBlockRequestMessage returns a instance of the message
func NewGetBlockRequestMessage(hash string, subnetworkID string, includeTransactionVerboseData bool) *GetBlockRequestMessage {
func NewGetBlockRequestMessage(hash string, includeTransactionVerboseData bool) *GetBlockRequestMessage {
return &GetBlockRequestMessage{
Hash: hash,
SubnetworkID: subnetworkID,
IncludeTransactionVerboseData: includeTransactionVerboseData,
}
}
@@ -45,7 +43,7 @@ func NewGetBlockResponseMessage() *GetBlockResponseMessage {
// BlockVerboseData holds verbose data about a block
type BlockVerboseData struct {
Hash string
Version int32
Version uint16
VersionHex string
HashMerkleRoot string
AcceptedIDMerkleRoot string
@@ -59,6 +57,7 @@ type BlockVerboseData struct {
ParentHashes []string
SelectedParentHash string
BlueScore uint64
IsHeaderOnly bool
}
// TransactionVerboseData holds verbose data about a transaction
@@ -66,7 +65,7 @@ type TransactionVerboseData struct {
TxID string
Hash string
Size uint64
Version int32
Version uint16
LockTime uint64
SubnetworkID string
Gas uint64
@@ -102,7 +101,6 @@ type TransactionVerboseOutput struct {
// ScriptPubKeyResult holds data about a script public key
type ScriptPubKeyResult struct {
Asm string
Hex string
Type string
Address string

View File

@@ -1,5 +1,7 @@
package appmessage
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// GetBlockCountRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetBlockCountRequestMessage struct {
@@ -20,7 +22,8 @@ func NewGetBlockCountRequestMessage() *GetBlockCountRequestMessage {
// its respective RPC message
type GetBlockCountResponseMessage struct {
baseMessage
BlockCount uint64
BlockCount uint64
HeaderCount uint64
Error *RPCError
}
@@ -31,8 +34,9 @@ func (msg *GetBlockCountResponseMessage) Command() MessageCommand {
}
// NewGetBlockCountResponseMessage returns a instance of the message
func NewGetBlockCountResponseMessage(blockCount uint64) *GetBlockCountResponseMessage {
func NewGetBlockCountResponseMessage(syncInfo *externalapi.SyncInfo) *GetBlockCountResponseMessage {
return &GetBlockCountResponseMessage{
BlockCount: blockCount,
BlockCount: syncInfo.BlockCount,
HeaderCount: syncInfo.HeaderCount,
}
}
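
GetBlockCountResponseMessage is now built from a consensus SyncInfo, so the response reports a header count alongside the block count. A minimal handler-side sketch (not part of the diff), assuming a GetSyncInfo accessor like the one referenced elsewhere in this changeset; the accessor is passed in as a function to keep the sketch self-contained.

package example

import (
	"github.com/kaspanet/kaspad/app/appmessage"
	"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// handleGetBlockCount sketches how an RPC handler would build the new response.
// getSyncInfo stands in for consensus.GetSyncInfo(), which this changeset uses elsewhere.
func handleGetBlockCount(getSyncInfo func() (*externalapi.SyncInfo, error)) (*appmessage.GetBlockCountResponseMessage, error) {
	syncInfo, err := getSyncInfo()
	if err != nil {
		return nil, err
	}
	// Both BlockCount and HeaderCount are copied from SyncInfo by the constructor.
	return appmessage.NewGetBlockCountResponseMessage(syncInfo), nil
}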

View File

@@ -22,6 +22,7 @@ type GetBlockDAGInfoResponseMessage struct {
baseMessage
NetworkName string
BlockCount uint64
HeaderCount uint64
TipHashes []string
VirtualParentHashes []string
Difficulty float64

View File

@@ -24,6 +24,7 @@ func NewGetBlockTemplateRequestMessage(payAddress string) *GetBlockTemplateReque
type GetBlockTemplateResponseMessage struct {
baseMessage
MsgBlock *MsgBlock
IsSynced bool
Error *RPCError
}
@@ -34,6 +35,9 @@ func (msg *GetBlockTemplateResponseMessage) Command() MessageCommand {
}
// NewGetBlockTemplateResponseMessage returns a instance of the message
func NewGetBlockTemplateResponseMessage(msgBlock *MsgBlock) *GetBlockTemplateResponseMessage {
return &GetBlockTemplateResponseMessage{MsgBlock: msgBlock}
func NewGetBlockTemplateResponseMessage(msgBlock *MsgBlock, isSynced bool) *GetBlockTemplateResponseMessage {
return &GetBlockTemplateResponseMessage{
MsgBlock: msgBlock,
IsSynced: isSynced,
}
}

View File

@@ -1,49 +0,0 @@
package appmessage
// GetChainFromBlockRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetChainFromBlockRequestMessage struct {
baseMessage
StartHash string
IncludeBlockVerboseData bool
}
// Command returns the protocol command string for the message
func (msg *GetChainFromBlockRequestMessage) Command() MessageCommand {
return CmdGetChainFromBlockRequestMessage
}
// NewGetChainFromBlockRequestMessage returns a instance of the message
func NewGetChainFromBlockRequestMessage(startHash string, includeBlockVerboseData bool) *GetChainFromBlockRequestMessage {
return &GetChainFromBlockRequestMessage{
StartHash: startHash,
IncludeBlockVerboseData: includeBlockVerboseData,
}
}
// GetChainFromBlockResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetChainFromBlockResponseMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlocks []*ChainBlock
BlockVerboseData []*BlockVerboseData
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetChainFromBlockResponseMessage) Command() MessageCommand {
return CmdGetChainFromBlockResponseMessage
}
// NewGetChainFromBlockResponseMessage returns a instance of the message
func NewGetChainFromBlockResponseMessage(removedChainBlockHashes []string,
addedChainBlocks []*ChainBlock, blockVerboseData []*BlockVerboseData) *GetChainFromBlockResponseMessage {
return &GetChainFromBlockResponseMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlocks: addedChainBlocks,
BlockVerboseData: blockVerboseData,
}
}

View File

@@ -41,11 +41,10 @@ type GetConnectedPeerInfoMessage struct {
ID string
Address string
LastPingDuration int64
SelectedTipHash string
IsSyncNode bool
IsOutbound bool
TimeOffset int64
UserAgent string
AdvertisedProtocolVersion uint32
TimeConnected int64
IsIBDPeer bool
}

View File

@@ -0,0 +1,41 @@
package appmessage
// GetUTXOsByAddressesRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetUTXOsByAddressesRequestMessage struct {
baseMessage
Addresses []string
}
// Command returns the protocol command string for the message
func (msg *GetUTXOsByAddressesRequestMessage) Command() MessageCommand {
return CmdGetUTXOsByAddressesRequestMessage
}
// NewGetUTXOsByAddressesRequestMessage returns a instance of the message
func NewGetUTXOsByAddressesRequestMessage(addresses []string) *GetUTXOsByAddressesRequestMessage {
return &GetUTXOsByAddressesRequestMessage{
Addresses: addresses,
}
}
// GetUTXOsByAddressesResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetUTXOsByAddressesResponseMessage struct {
baseMessage
Entries []*UTXOsByAddressesEntry
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetUTXOsByAddressesResponseMessage) Command() MessageCommand {
return CmdGetUTXOsByAddressesResponseMessage
}
// NewGetUTXOsByAddressesResponseMessage returns a instance of the message
func NewGetUTXOsByAddressesResponseMessage(entries []*UTXOsByAddressesEntry) *GetUTXOsByAddressesResponseMessage {
return &GetUTXOsByAddressesResponseMessage{
Entries: entries,
}
}

View File

@@ -0,0 +1,38 @@
package appmessage
// GetVirtualSelectedParentBlueScoreRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetVirtualSelectedParentBlueScoreRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *GetVirtualSelectedParentBlueScoreRequestMessage) Command() MessageCommand {
return CmdGetVirtualSelectedParentBlueScoreRequestMessage
}
// NewGetVirtualSelectedParentBlueScoreRequestMessage returns a instance of the message
func NewGetVirtualSelectedParentBlueScoreRequestMessage() *GetVirtualSelectedParentBlueScoreRequestMessage {
return &GetVirtualSelectedParentBlueScoreRequestMessage{}
}
// GetVirtualSelectedParentBlueScoreResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetVirtualSelectedParentBlueScoreResponseMessage struct {
baseMessage
BlueScore uint64
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetVirtualSelectedParentBlueScoreResponseMessage) Command() MessageCommand {
return CmdGetVirtualSelectedParentBlueScoreResponseMessage
}
// NewGetVirtualSelectedParentBlueScoreResponseMessage returns a instance of the message
func NewGetVirtualSelectedParentBlueScoreResponseMessage(blueScore uint64) *GetVirtualSelectedParentBlueScoreResponseMessage {
return &GetVirtualSelectedParentBlueScoreResponseMessage{
BlueScore: blueScore,
}
}

View File

@@ -0,0 +1,45 @@
package appmessage
// GetVirtualSelectedParentChainFromBlockRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetVirtualSelectedParentChainFromBlockRequestMessage struct {
baseMessage
StartHash string
}
// Command returns the protocol command string for the message
func (msg *GetVirtualSelectedParentChainFromBlockRequestMessage) Command() MessageCommand {
return CmdGetVirtualSelectedParentChainFromBlockRequestMessage
}
// NewGetVirtualSelectedParentChainFromBlockRequestMessage returns a instance of the message
func NewGetVirtualSelectedParentChainFromBlockRequestMessage(startHash string) *GetVirtualSelectedParentChainFromBlockRequestMessage {
return &GetVirtualSelectedParentChainFromBlockRequestMessage{
StartHash: startHash,
}
}
// GetVirtualSelectedParentChainFromBlockResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetVirtualSelectedParentChainFromBlockResponseMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlocks []*ChainBlock
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetVirtualSelectedParentChainFromBlockResponseMessage) Command() MessageCommand {
return CmdGetVirtualSelectedParentChainFromBlockResponseMessage
}
// NewGetVirtualSelectedParentChainFromBlockResponseMessage returns a instance of the message
func NewGetVirtualSelectedParentChainFromBlockResponseMessage(removedChainBlockHashes []string,
addedChainBlocks []*ChainBlock) *GetVirtualSelectedParentChainFromBlockResponseMessage {
return &GetVirtualSelectedParentChainFromBlockResponseMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlocks: addedChainBlocks,
}
}

View File

@@ -1,69 +0,0 @@
package appmessage
// NotifyChainChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyChainChangedRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyChainChangedRequestMessage) Command() MessageCommand {
return CmdNotifyChainChangedRequestMessage
}
// NewNotifyChainChangedRequestMessage returns a instance of the message
func NewNotifyChainChangedRequestMessage() *NotifyChainChangedRequestMessage {
return &NotifyChainChangedRequestMessage{}
}
// NotifyChainChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyChainChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyChainChangedResponseMessage) Command() MessageCommand {
return CmdNotifyChainChangedResponseMessage
}
// NewNotifyChainChangedResponseMessage returns a instance of the message
func NewNotifyChainChangedResponseMessage() *NotifyChainChangedResponseMessage {
return &NotifyChainChangedResponseMessage{}
}
// ChainChangedNotificationMessage is an appmessage corresponding to
// its respective RPC message
type ChainChangedNotificationMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlocks []*ChainBlock
}
// ChainBlock represents a DAG chain-block
type ChainBlock struct {
Hash string
AcceptedBlocks []*AcceptedBlock
}
// AcceptedBlock represents a block accepted into the DAG
type AcceptedBlock struct {
Hash string
AcceptedTxIDs []string
}
// Command returns the protocol command string for the message
func (msg *ChainChangedNotificationMessage) Command() MessageCommand {
return CmdChainChangedNotificationMessage
}
// NewChainChangedNotificationMessage returns a instance of the message
func NewChainChangedNotificationMessage(removedChainBlockHashes []string,
addedChainBlocks []*ChainBlock) *ChainChangedNotificationMessage {
return &ChainChangedNotificationMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlocks: addedChainBlocks,
}
}

View File

@@ -0,0 +1,62 @@
package appmessage
// NotifyUTXOsChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyUTXOsChangedRequestMessage struct {
baseMessage
Addresses []string
}
// Command returns the protocol command string for the message
func (msg *NotifyUTXOsChangedRequestMessage) Command() MessageCommand {
return CmdNotifyUTXOsChangedRequestMessage
}
// NewNotifyUTXOsChangedRequestMessage returns a instance of the message
func NewNotifyUTXOsChangedRequestMessage(addresses []string) *NotifyUTXOsChangedRequestMessage {
return &NotifyUTXOsChangedRequestMessage{
Addresses: addresses,
}
}
// NotifyUTXOsChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyUTXOsChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyUTXOsChangedResponseMessage) Command() MessageCommand {
return CmdNotifyUTXOsChangedResponseMessage
}
// NewNotifyUTXOsChangedResponseMessage returns a instance of the message
func NewNotifyUTXOsChangedResponseMessage() *NotifyUTXOsChangedResponseMessage {
return &NotifyUTXOsChangedResponseMessage{}
}
// UTXOsChangedNotificationMessage is an appmessage corresponding to
// its respective RPC message
type UTXOsChangedNotificationMessage struct {
baseMessage
Added []*UTXOsByAddressesEntry
Removed []*UTXOsByAddressesEntry
}
// UTXOsByAddressesEntry represents a UTXO of some address
type UTXOsByAddressesEntry struct {
Address string
Outpoint *RPCOutpoint
UTXOEntry *RPCUTXOEntry
}
// Command returns the protocol command string for the message
func (msg *UTXOsChangedNotificationMessage) Command() MessageCommand {
return CmdUTXOsChangedNotificationMessage
}
// NewUTXOsChangedNotificationMessage returns a instance of the message
func NewUTXOsChangedNotificationMessage() *UTXOsChangedNotificationMessage {
return &UTXOsChangedNotificationMessage{}
}

View File

@@ -0,0 +1,55 @@
package appmessage
// NotifyVirtualSelectedParentBlueScoreChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualSelectedParentBlueScoreChangedRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualSelectedParentBlueScoreChangedRequestMessage) Command() MessageCommand {
return CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage
}
// NewNotifyVirtualSelectedParentBlueScoreChangedRequestMessage returns a instance of the message
func NewNotifyVirtualSelectedParentBlueScoreChangedRequestMessage() *NotifyVirtualSelectedParentBlueScoreChangedRequestMessage {
return &NotifyVirtualSelectedParentBlueScoreChangedRequestMessage{}
}
// NotifyVirtualSelectedParentBlueScoreChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualSelectedParentBlueScoreChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualSelectedParentBlueScoreChangedResponseMessage) Command() MessageCommand {
return CmdNotifyVirtualSelectedParentBlueScoreChangedResponseMessage
}
// NewNotifyVirtualSelectedParentBlueScoreChangedResponseMessage returns a instance of the message
func NewNotifyVirtualSelectedParentBlueScoreChangedResponseMessage() *NotifyVirtualSelectedParentBlueScoreChangedResponseMessage {
return &NotifyVirtualSelectedParentBlueScoreChangedResponseMessage{}
}
// VirtualSelectedParentBlueScoreChangedNotificationMessage is an appmessage corresponding to
// its respective RPC message
type VirtualSelectedParentBlueScoreChangedNotificationMessage struct {
baseMessage
VirtualSelectedParentBlueScore uint64
}
// Command returns the protocol command string for the message
func (msg *VirtualSelectedParentBlueScoreChangedNotificationMessage) Command() MessageCommand {
return CmdVirtualSelectedParentBlueScoreChangedNotificationMessage
}
// NewVirtualSelectedParentBlueScoreChangedNotificationMessage returns a instance of the message
func NewVirtualSelectedParentBlueScoreChangedNotificationMessage(
virtualSelectedParentBlueScore uint64) *VirtualSelectedParentBlueScoreChangedNotificationMessage {
return &VirtualSelectedParentBlueScoreChangedNotificationMessage{
VirtualSelectedParentBlueScore: virtualSelectedParentBlueScore,
}
}

View File

@@ -0,0 +1,69 @@
package appmessage
// NotifyVirtualSelectedParentChainChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualSelectedParentChainChangedRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualSelectedParentChainChangedRequestMessage) Command() MessageCommand {
return CmdNotifyVirtualSelectedParentChainChangedRequestMessage
}
// NewNotifyVirtualSelectedParentChainChangedRequestMessage returns a instance of the message
func NewNotifyVirtualSelectedParentChainChangedRequestMessage() *NotifyVirtualSelectedParentChainChangedRequestMessage {
return &NotifyVirtualSelectedParentChainChangedRequestMessage{}
}
// NotifyVirtualSelectedParentChainChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyVirtualSelectedParentChainChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyVirtualSelectedParentChainChangedResponseMessage) Command() MessageCommand {
return CmdNotifyVirtualSelectedParentChainChangedResponseMessage
}
// NewNotifyVirtualSelectedParentChainChangedResponseMessage returns a instance of the message
func NewNotifyVirtualSelectedParentChainChangedResponseMessage() *NotifyVirtualSelectedParentChainChangedResponseMessage {
return &NotifyVirtualSelectedParentChainChangedResponseMessage{}
}
// VirtualSelectedParentChainChangedNotificationMessage is an appmessage corresponding to
// its respective RPC message
type VirtualSelectedParentChainChangedNotificationMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlocks []*ChainBlock
}
// ChainBlock represents a DAG chain-block
type ChainBlock struct {
Hash string
AcceptedBlocks []*AcceptedBlock
}
// AcceptedBlock represents a block accepted into the DAG
type AcceptedBlock struct {
Hash string
AcceptedTransactionIDs []string
}
// Command returns the protocol command string for the message
func (msg *VirtualSelectedParentChainChangedNotificationMessage) Command() MessageCommand {
return CmdVirtualSelectedParentChainChangedNotificationMessage
}
// NewVirtualSelectedParentChainChangedNotificationMessage returns a instance of the message
func NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes []string,
addedChainBlocks []*ChainBlock) *VirtualSelectedParentChainChangedNotificationMessage {
return &VirtualSelectedParentChainChangedNotificationMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlocks: addedChainBlocks,
}
}

View File

@@ -19,11 +19,33 @@ func NewSubmitBlockRequestMessage(block *MsgBlock) *SubmitBlockRequestMessage {
}
}
// RejectReason describes the reason why a block sent by SubmitBlock was rejected
type RejectReason byte
// RejectReason constants
// Not using iota, since in the .proto file those are hardcoded
const (
RejectReasonNone RejectReason = 0
RejectReasonBlockInvalid RejectReason = 1
RejectReasonIsInIBD RejectReason = 2
)
var rejectReasonToString = map[RejectReason]string{
RejectReasonNone: "None",
RejectReasonBlockInvalid: "Block is invalid",
RejectReasonIsInIBD: "Node is in IBD",
}
func (rr RejectReason) String() string {
return rejectReasonToString[rr]
}
// SubmitBlockResponseMessage is an appmessage corresponding to
// its respective RPC message
type SubmitBlockResponseMessage struct {
baseMessage
Error *RPCError
RejectReason RejectReason
Error *RPCError
}
// Command returns the protocol command string for the message
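
SubmitBlock responses now carry a RejectReason alongside the RPC error. A minimal caller-side sketch (not part of the diff) showing how the new field and its String() mapping might be consumed.

package example

import (
	"fmt"

	"github.com/kaspanet/kaspad/app/appmessage"
)

// checkSubmitBlockResponse sketches how a caller might react to the new RejectReason field.
func checkSubmitBlockResponse(response *appmessage.SubmitBlockResponseMessage) error {
	switch response.RejectReason {
	case appmessage.RejectReasonNone:
		return nil
	case appmessage.RejectReasonIsInIBD:
		// String() maps this value to "Node is in IBD".
		return fmt.Errorf("block not submitted: %s", response.RejectReason)
	default:
		return fmt.Errorf("block rejected: %s", response.RejectReason)
	}
}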

View File

@@ -4,7 +4,7 @@ package appmessage
// its respective RPC message
type SubmitTransactionRequestMessage struct {
baseMessage
Transaction *MsgTx
Transaction *RPCTransaction
}
// Command returns the protocol command string for the message
@@ -13,7 +13,7 @@ func (msg *SubmitTransactionRequestMessage) Command() MessageCommand {
}
// NewSubmitTransactionRequestMessage returns a instance of the message
func NewSubmitTransactionRequestMessage(transaction *MsgTx) *SubmitTransactionRequestMessage {
func NewSubmitTransactionRequestMessage(transaction *RPCTransaction) *SubmitTransactionRequestMessage {
return &SubmitTransactionRequestMessage{
Transaction: transaction,
}
@@ -23,7 +23,7 @@ func NewSubmitTransactionRequestMessage(transaction *MsgTx) *SubmitTransactionRe
// its respective RPC message
type SubmitTransactionResponseMessage struct {
baseMessage
TxID string
TransactionID string
Error *RPCError
}
@@ -34,8 +34,58 @@ func (msg *SubmitTransactionResponseMessage) Command() MessageCommand {
}
// NewSubmitTransactionResponseMessage returns a instance of the message
func NewSubmitTransactionResponseMessage(txID string) *SubmitTransactionResponseMessage {
func NewSubmitTransactionResponseMessage(transactionID string) *SubmitTransactionResponseMessage {
return &SubmitTransactionResponseMessage{
TxID: txID,
TransactionID: transactionID,
}
}
// RPCTransaction is a kaspad transaction representation meant to be
// used over RPC
type RPCTransaction struct {
Version uint16
Inputs []*RPCTransactionInput
Outputs []*RPCTransactionOutput
LockTime uint64
SubnetworkID string
Gas uint64
PayloadHash string
Payload string
}
// RPCTransactionInput is a kaspad transaction input representation
// meant to be used over RPC
type RPCTransactionInput struct {
PreviousOutpoint *RPCOutpoint
SignatureScript string
Sequence uint64
}
// RPCScriptPublicKey is a kaspad ScriptPublicKey representation
type RPCScriptPublicKey struct {
Version uint16
Script string
}
// RPCTransactionOutput is a kaspad transaction output representation
// meant to be used over RPC
type RPCTransactionOutput struct {
Amount uint64
ScriptPublicKey *RPCScriptPublicKey
}
// RPCOutpoint is a kaspad outpoint representation meant to be used
// over RPC
type RPCOutpoint struct {
TransactionID string
Index uint32
}
// RPCUTXOEntry is a kaspad utxo entry representation meant to be used
// over RPC
type RPCUTXOEntry struct {
Amount uint64
ScriptPublicKey *RPCScriptPublicKey
BlockBlueScore uint64
IsCoinbase bool
}
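
SubmitTransaction now accepts the string-based RPCTransaction representation instead of an MsgTx. A minimal sketch of building a request from the types defined above; the concrete values are illustrative, and hex encoding of the string fields is an assumption, since the diff only defines the types.

package example

import (
	"github.com/kaspanet/kaspad/app/appmessage"
)

// buildSubmitTransactionRequest sketches the RPC-level transaction representation.
func buildSubmitTransactionRequest() *appmessage.SubmitTransactionRequestMessage {
	transaction := &appmessage.RPCTransaction{
		Version: 0,
		Inputs: []*appmessage.RPCTransactionInput{{
			PreviousOutpoint: &appmessage.RPCOutpoint{
				TransactionID: "f868bd20e816256b80eac976821be4589d24d21141bd1cec6e8005d0c16c6881",
				Index:         0,
			},
			SignatureScript: "",                 // assumed hex-encoded
			Sequence:        0xffffffffffffffff, // max sequence number
		}},
		Outputs: []*appmessage.RPCTransactionOutput{{
			Amount:          5000000000,
			ScriptPublicKey: &appmessage.RPCScriptPublicKey{Version: 0, Script: "ac"},
		}},
		LockTime:     0,
		SubnetworkID: "0000000000000000000000000000000000000000",
		Gas:          0,
		PayloadHash:  "",
		Payload:      "",
	}
	return appmessage.NewSubmitTransactionRequestMessage(transaction)
}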

View File

@@ -4,6 +4,8 @@ import (
"fmt"
"sync/atomic"
"github.com/kaspanet/kaspad/domain/utxoindex"
infrastructuredatabase "github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/domain"
@@ -78,7 +80,7 @@ func (a *ComponentManager) Stop() {
func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database, interrupt chan<- struct{}) (
*ComponentManager, error) {
domain, err := domain.New(cfg.ActiveNetParams, db)
domain, err := domain.New(cfg.ActiveNetParams, db, cfg.IsArchivalNode)
if err != nil {
return nil, err
}
@@ -93,6 +95,12 @@ func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database,
return nil, err
}
var utxoIndex *utxoindex.UTXOIndex
if cfg.UTXOIndex {
utxoIndex = utxoindex.New(domain.Consensus(), db)
log.Infof("UTXO index started")
}
connectionManager, err := connmanager.New(cfg, netAdapter, addressManager)
if err != nil {
return nil, err
@@ -101,7 +109,7 @@ func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database,
if err != nil {
return nil, err
}
rpcManager := setupRPC(cfg, domain, netAdapter, protocolManager, connectionManager, addressManager, interrupt)
rpcManager := setupRPC(cfg, domain, netAdapter, protocolManager, connectionManager, addressManager, utxoIndex, interrupt)
return &ComponentManager{
cfg: cfg,
@@ -121,11 +129,20 @@ func setupRPC(
protocolManager *protocol.Manager,
connectionManager *connmanager.ConnectionManager,
addressManager *addressmanager.AddressManager,
utxoIndex *utxoindex.UTXOIndex,
shutDownChan chan<- struct{},
) *rpc.Manager {
rpcManager := rpc.NewManager(
cfg, domain, netAdapter, protocolManager, connectionManager, addressManager, shutDownChan)
cfg,
domain,
netAdapter,
protocolManager,
connectionManager,
addressManager,
utxoIndex,
shutDownChan,
)
protocolManager.SetOnBlockAddedToDAGHandler(rpcManager.NotifyBlockAddedToDAG)
return rpcManager
@@ -140,9 +157,7 @@ func (a *ComponentManager) maybeSeedFromDNS() {
// source. So we'll take first returned address as source.
a.addressManager.AddAddresses(addresses...)
})
}
if a.cfg.GRPCSeed != "" {
dnsseed.SeedFromGRPC(a.cfg.NetParams(), a.cfg.GRPCSeed, appmessage.SFNodeNetwork, false, nil,
func(addresses []*appmessage.NetAddress) {
a.addressManager.AddAddresses(addresses...)

View File

@@ -50,7 +50,7 @@ func LogBlock(block *externalapi.DomainBlock) {
log.Infof("Processed %d %s in the last %s (%d %s, %s)",
receivedLogBlocks, blockStr, tDuration, receivedLogTx,
txStr, mstime.UnixMilliseconds(block.Header.TimeInMilliseconds))
txStr, mstime.UnixMilliseconds(block.Header.TimeInMilliseconds()))
receivedLogBlocks = 0
receivedLogTx = 0

View File

@@ -2,12 +2,12 @@ package flowcontext
import (
"github.com/kaspanet/kaspad/app/protocol/blocklogger"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/pkg/errors"
"sync/atomic"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
@@ -16,20 +16,40 @@ import (
// OnNewBlock updates the mempool after a new block arrival, and
// relays newly unorphaned transactions and possibly rebroadcast
// manually added transactions when not in IBD.
func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock) error {
unorphanedBlocks, err := f.UnorphanBlocks(block)
func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
blockInsertionResult *externalapi.BlockInsertionResult) error {
hash := consensushashing.BlockHash(block)
log.Debugf("OnNewBlock start for block %s", hash)
defer log.Debugf("OnNewBlock end for block %s", hash)
unorphaningResults, err := f.UnorphanBlocks(block)
if err != nil {
return err
}
newBlocks := append([]*externalapi.DomainBlock{block}, unorphanedBlocks...)
for _, newBlock := range newBlocks {
log.Debugf("OnNewBlock: block %s unorphaned %d blocks", hash, len(unorphaningResults))
newBlocks := []*externalapi.DomainBlock{block}
newBlockInsertionResults := []*externalapi.BlockInsertionResult{blockInsertionResult}
for _, unorphaningResult := range unorphaningResults {
newBlocks = append(newBlocks, unorphaningResult.block)
newBlockInsertionResults = append(newBlockInsertionResults, unorphaningResult.blockInsertionResult)
}
for i, newBlock := range newBlocks {
blocklogger.LogBlock(block)
_ = f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
log.Debugf("OnNewBlock: passing block %s transactions to mining manager", hash)
_, err = f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
if err != nil {
return err
}
if f.onBlockAddedToDAGHandler != nil {
err := f.onBlockAddedToDAGHandler(newBlock)
log.Debugf("OnNewBlock: calling f.onBlockAddedToDAGHandler for block %s", hash)
blockInsertionResult = newBlockInsertionResults[i]
err := f.onBlockAddedToDAGHandler(newBlock, blockInsertionResult)
if err != nil {
return err
}
@@ -45,7 +65,7 @@ func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
f.updateTransactionsToRebroadcast(block)
// Don't relay transactions when in IBD.
if atomic.LoadUint32(&f.isInIBD) != 0 {
if f.IsIBDRunning() {
return nil
}
@@ -56,7 +76,7 @@ func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
txIDsToBroadcast := make([]*externalapi.DomainTransactionID, len(transactionsAcceptedToMempool)+len(txIDsToRebroadcast))
for i, tx := range transactionsAcceptedToMempool {
txIDsToBroadcast[i] = consensusserialization.TransactionID(tx)
txIDsToBroadcast[i] = consensushashing.TransactionID(tx)
}
offset := len(transactionsAcceptedToMempool)
for i, txID := range txIDsToRebroadcast {
@@ -81,17 +101,61 @@ func (f *FlowContext) SharedRequestedBlocks() *blockrelay.SharedRequestedBlocks
// AddBlock adds the given block to the DAG and propagates it.
func (f *FlowContext) AddBlock(block *externalapi.DomainBlock) error {
err := f.Domain().Consensus().ValidateAndInsertBlock(block)
blockInsertionResult, err := f.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Infof("Validation failed for block %s: %s", consensusserialization.BlockHash(block), err)
return nil
log.Warnf("Validation failed for block %s: %s", consensushashing.BlockHash(block), err)
}
return err
}
err = f.OnNewBlock(block)
err = f.OnNewBlock(block, blockInsertionResult)
if err != nil {
return err
}
return f.Broadcast(appmessage.NewMsgInvBlock(consensusserialization.BlockHash(block)))
return f.Broadcast(appmessage.NewMsgInvBlock(consensushashing.BlockHash(block)))
}
// IsIBDRunning returns true if IBD is currently marked as running
func (f *FlowContext) IsIBDRunning() bool {
f.ibdPeerMutex.RLock()
defer f.ibdPeerMutex.RUnlock()
return f.ibdPeer != nil
}
// TrySetIBDRunning attempts to set `isInIBD`. Returns false
// if it is already set
func (f *FlowContext) TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool {
f.ibdPeerMutex.Lock()
defer f.ibdPeerMutex.Unlock()
if f.ibdPeer != nil {
return false
}
f.ibdPeer = ibdPeer
log.Infof("IBD started")
return true
}
// UnsetIBDRunning unsets isInIBD
func (f *FlowContext) UnsetIBDRunning() {
f.ibdPeerMutex.Lock()
defer f.ibdPeerMutex.Unlock()
if f.ibdPeer == nil {
panic("attempted to unset isInIBD when it was not set to begin with")
}
f.ibdPeer = nil
log.Infof("IBD finished")
}
// IBDPeer returns the current IBD peer or null if the node is not
// in IBD
func (f *FlowContext) IBDPeer() *peerpkg.Peer {
f.ibdPeerMutex.RLock()
defer f.ibdPeerMutex.RUnlock()
return f.ibdPeer
}
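
The atomic isInIBD flag is replaced here by an RWMutex-guarded ibdPeer pointer, so "is IBD running" and "who is the IBD peer" can no longer drift apart. A minimal usage sketch with a hypothetical runIBD caller; the flowcontext import path is an assumption based on the package name.

package example

import (
	"github.com/kaspanet/kaspad/app/protocol/flowcontext" // assumed path for package flowcontext
	peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
)

// runIBD is a hypothetical caller showing the claim/release pattern around the single IBD slot.
func runIBD(f *flowcontext.FlowContext, peer *peerpkg.Peer) error {
	if !f.TrySetIBDRunning(peer) {
		// Another flow already claimed IBD; back off instead of running two syncs.
		return nil
	}
	defer f.UnsetIBDRunning()

	// ... request headers and blocks from f.IBDPeer() ...
	return nil
}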

View File

@@ -1,6 +1,7 @@
package flowcontext
import (
"github.com/kaspanet/kaspad/util/mstime"
"sync"
"time"
@@ -9,7 +10,7 @@ import (
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/relaytransactions"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
@@ -20,7 +21,7 @@ import (
// OnBlockAddedToDAGHandler is a handler function that's triggered
// when a block is added to the DAG
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock) error
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
// OnTransactionAddedToMempoolHandler is a handler function that's triggered
// when a transaction is added to the mempool
@@ -35,19 +36,20 @@ type FlowContext struct {
addressManager *addressmanager.AddressManager
connectionManager *connmanager.ConnectionManager
timeStarted int64
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
transactionsToRebroadcastLock sync.Mutex
transactionsToRebroadcast map[externalapi.DomainTransactionID]*externalapi.DomainTransaction
lastRebroadcastTime time.Time
sharedRequestedTransactions *relaytransactions.SharedRequestedTransactions
sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
sharedRequestedBlocks *blockrelay.SharedRequestedBlocks
isInIBD uint32
startIBDMutex sync.Mutex
ibdPeer *peerpkg.Peer
ibdPeer *peerpkg.Peer
ibdPeerMutex sync.RWMutex
peers map[id.ID]*peerpkg.Peer
peersMutex sync.RWMutex
@@ -66,11 +68,12 @@ func New(cfg *config.Config, domain domain.Domain, addressManager *addressmanage
domain: domain,
addressManager: addressManager,
connectionManager: connectionManager,
sharedRequestedTransactions: relaytransactions.NewSharedRequestedTransactions(),
sharedRequestedTransactions: transactionrelay.NewSharedRequestedTransactions(),
sharedRequestedBlocks: blockrelay.NewSharedRequestedBlocks(),
peers: make(map[id.ID]*peerpkg.Peer),
transactionsToRebroadcast: make(map[externalapi.DomainTransactionID]*externalapi.DomainTransaction),
orphans: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
timeStarted: mstime.Now().UnixMilliseconds(),
}
}

View File

@@ -1,130 +0,0 @@
package flowcontext
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"sync/atomic"
"time"
"github.com/kaspanet/kaspad/util/mstime"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
)
// StartIBDIfRequired selects a peer and starts IBD against it
// if required
func (f *FlowContext) StartIBDIfRequired() error {
f.startIBDMutex.Lock()
defer f.startIBDMutex.Unlock()
if f.IsInIBD() {
return nil
}
syncInfo, err := f.domain.Consensus().GetSyncInfo()
if err != nil {
return err
}
if syncInfo.State == externalapi.SyncStateRelay {
return nil
}
peer, err := f.selectPeerForIBD(syncInfo)
if err != nil {
return err
}
if peer == nil {
spawn("StartIBDIfRequired-requestSelectedTipsIfRequired", f.requestSelectedTipsIfRequired)
return nil
}
atomic.StoreUint32(&f.isInIBD, 1)
f.ibdPeer = peer
spawn("StartIBDIfRequired-peer.StartIBD", peer.StartIBD)
return nil
}
// IsInIBD is true if IBD is currently running
func (f *FlowContext) IsInIBD() bool {
return atomic.LoadUint32(&f.isInIBD) != 0
}
// selectPeerForIBD returns the first peer whose selected tip
// hash is not in our DAG
func (f *FlowContext) selectPeerForIBD(syncInfo *externalapi.SyncInfo) (*peerpkg.Peer, error) {
f.peersMutex.RLock()
defer f.peersMutex.RUnlock()
for _, peer := range f.peers {
peerSelectedTipHash := peer.SelectedTipHash()
if f.IsOrphan(peerSelectedTipHash) {
continue
}
blockInfo, err := f.domain.Consensus().GetBlockInfo(peerSelectedTipHash)
if err != nil {
return nil, err
}
if syncInfo.State == externalapi.SyncStateHeadersFirst {
if !blockInfo.Exists {
return peer, nil
}
} else {
if blockInfo.Exists && blockInfo.BlockStatus == externalapi.StatusHeaderOnly &&
blockInfo.IsBlockInHeaderPruningPointFuture {
return peer, nil
}
}
}
return nil, nil
}
func (f *FlowContext) requestSelectedTipsIfRequired() {
dagTimeCurrent, err := f.shouldRequestSelectedTips()
if err != nil {
panic(err)
}
if dagTimeCurrent {
return
}
f.requestSelectedTips()
}
func (f *FlowContext) shouldRequestSelectedTips() (bool, error) {
const minDurationToRequestSelectedTips = time.Minute
virtualSelectedParent, err := f.domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
virtualSelectedParentTime := mstime.UnixMilliseconds(virtualSelectedParent.Header.TimeInMilliseconds)
return mstime.Now().Sub(virtualSelectedParentTime) > minDurationToRequestSelectedTips, nil
}
func (f *FlowContext) requestSelectedTips() {
f.peersMutex.RLock()
defer f.peersMutex.RUnlock()
for _, peer := range f.peers {
peer.RequestSelectedTipIfRequired()
}
}
// FinishIBD finishes the current IBD flow and starts a new one if required.
func (f *FlowContext) FinishIBD() error {
f.ibdPeer = nil
atomic.StoreUint32(&f.isInIBD, 0)
return f.StartIBDIfRequired()
}
// IBDPeer returns the currently active IBD peer.
// Returns nil if we aren't currently in IBD
func (f *FlowContext) IBDPeer() *peerpkg.Peer {
if !f.IsInIBD() {
return nil
}
return f.ibdPeer
}

View File

@@ -3,21 +3,50 @@ package flowcontext
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashset"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/pkg/errors"
)
// maxOrphans is the maximum amount of orphans allowed in the
// orphans collection. This number is an approximation of how
// many orphans there can possibly be on average. It is based
// on: 2^orphanResolutionRange * PHANTOM K.
const maxOrphans = 600
// UnorphaningResult is the result of unorphaning a block
type UnorphaningResult struct {
block *externalapi.DomainBlock
blockInsertionResult *externalapi.BlockInsertionResult
}
// AddOrphan adds the block to the orphan set
func (f *FlowContext) AddOrphan(orphanBlock *externalapi.DomainBlock) {
f.orphansMutex.Lock()
defer f.orphansMutex.Unlock()
orphanHash := consensusserialization.BlockHash(orphanBlock)
orphanHash := consensushashing.BlockHash(orphanBlock)
f.orphans[*orphanHash] = orphanBlock
if len(f.orphans) > maxOrphans {
log.Debugf("Orphan collection size exceeded. Evicting a random orphan")
f.evictRandomOrphan()
}
log.Infof("Received a block with missing parents, adding to orphan pool: %s", orphanHash)
}
func (f *FlowContext) evictRandomOrphan() {
var toEvict externalapi.DomainHash
for hash := range f.orphans {
toEvict = hash
break
}
delete(f.orphans, toEvict)
log.Debugf("Evicted %s from the orphan collection", toEvict)
}
// IsOrphan returns whether the given blockHash belongs to an orphan block
func (f *FlowContext) IsOrphan(blockHash *externalapi.DomainHash) bool {
f.orphansMutex.RLock()
@@ -28,32 +57,32 @@ func (f *FlowContext) IsOrphan(blockHash *externalapi.DomainHash) bool {
}
// UnorphanBlocks removes the block from the orphan set, and remove all of the blocks that are not orphans anymore.
func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*externalapi.DomainBlock, error) {
func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*UnorphaningResult, error) {
f.orphansMutex.Lock()
defer f.orphansMutex.Unlock()
// Find all the children of rootBlock among the orphans
// and add them to the process queue
rootBlockHash := consensusserialization.BlockHash(rootBlock)
rootBlockHash := consensushashing.BlockHash(rootBlock)
processQueue := f.addChildOrphansToProcessQueue(rootBlockHash, []externalapi.DomainHash{})
var unorphanedBlocks []*externalapi.DomainBlock
var unorphaningResults []*UnorphaningResult
for len(processQueue) > 0 {
var orphanHash externalapi.DomainHash
orphanHash, processQueue = processQueue[0], processQueue[1:]
orphanBlock := f.orphans[orphanHash]
log.Tracef("Considering to unorphan block %s with parents %s",
orphanHash, orphanBlock.Header.ParentHashes)
log.Debugf("Considering to unorphan block %s with parents %s",
orphanHash, orphanBlock.Header.ParentHashes())
canBeUnorphaned := true
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes {
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
orphanBlockParentInfo, err := f.domain.Consensus().GetBlockInfo(orphanBlockParentHash)
if err != nil {
return nil, err
}
if !orphanBlockParentInfo.Exists {
log.Tracef("Cannot unorphan block %s. It's missing at "+
if !orphanBlockParentInfo.Exists || orphanBlockParentInfo.BlockStatus == externalapi.StatusHeaderOnly {
log.Debugf("Cannot unorphan block %s. It's missing at "+
"least the following parent: %s", orphanHash, orphanBlockParentHash)
canBeUnorphaned = false
@@ -61,16 +90,21 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*ext
}
}
if canBeUnorphaned {
err := f.unorphanBlock(orphanHash)
blockInsertionResult, unorphaningSucceeded, err := f.unorphanBlock(orphanHash)
if err != nil {
return nil, err
}
unorphanedBlocks = append(unorphanedBlocks, orphanBlock)
processQueue = f.addChildOrphansToProcessQueue(&orphanHash, processQueue)
if unorphaningSucceeded {
unorphaningResults = append(unorphaningResults, &UnorphaningResult{
block: orphanBlock,
blockInsertionResult: blockInsertionResult,
})
processQueue = f.addChildOrphansToProcessQueue(&orphanHash, processQueue)
}
}
}
return unorphanedBlocks, nil
return unorphaningResults, nil
}
// addChildOrphansToProcessQueue finds all child orphans of `blockHash`
@@ -99,8 +133,8 @@ func (f *FlowContext) addChildOrphansToProcessQueue(blockHash *externalapi.Domai
func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash) []externalapi.DomainHash {
var childOrphans []externalapi.DomainHash
for orphanHash, orphanBlock := range f.orphans {
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes {
if *orphanBlockParentHash == *blockHash {
for _, orphanBlockParentHash := range orphanBlock.Header.ParentHashes() {
if orphanBlockParentHash.Equal(blockHash) {
childOrphans = append(childOrphans, orphanHash)
break
}
@@ -109,22 +143,68 @@ func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash)
return childOrphans
}
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) error {
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (*externalapi.BlockInsertionResult, bool, error) {
orphanBlock, ok := f.orphans[orphanHash]
if !ok {
return errors.Errorf("attempted to unorphan a non-orphan block %s", orphanHash)
return nil, false, errors.Errorf("attempted to unorphan a non-orphan block %s", orphanHash)
}
delete(f.orphans, orphanHash)
err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock)
blockInsertionResult, err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Infof("Validation failed for orphan block %s: %s", orphanHash, err)
return nil
log.Warnf("Validation failed for orphan block %s: %s", orphanHash, err)
return nil, false, nil
}
return err
return nil, false, err
}
log.Infof("Unorphaned block %s", orphanHash)
return nil
return blockInsertionResult, true, nil
}
// GetOrphanRoots returns the roots of the missing ancestors DAG of the given orphan
func (f *FlowContext) GetOrphanRoots(orphan *externalapi.DomainHash) ([]*externalapi.DomainHash, bool, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "GetOrphanRoots")
defer onEnd()
f.orphansMutex.RLock()
defer f.orphansMutex.RUnlock()
_, ok := f.orphans[*orphan]
if !ok {
return nil, false, nil
}
queue := []*externalapi.DomainHash{orphan}
addedToQueueSet := hashset.New()
addedToQueueSet.Add(orphan)
roots := []*externalapi.DomainHash{}
for len(queue) > 0 {
var current *externalapi.DomainHash
current, queue = queue[0], queue[1:]
block, ok := f.orphans[*current]
if !ok {
blockInfo, err := f.domain.Consensus().GetBlockInfo(current)
if err != nil {
return nil, false, err
}
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
roots = append(roots, current)
}
continue
}
for _, parent := range block.Header.ParentHashes() {
if !addedToQueueSet.Contains(parent) {
queue = append(queue, parent)
addedToQueueSet.Add(parent)
}
}
}
return roots, true, nil
}
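
The new maxOrphans cap above relies on Go's unspecified map iteration order to evict an arbitrary orphan without extra bookkeeping. A distilled, self-contained sketch of that eviction pattern (hypothetical types, not kaspad code):

package example

// boundedPool distills the eviction pattern used by the orphan pool above:
// a plain map capped at a maximum size, dropping an arbitrary entry when full.
// Go's map iteration order is unspecified, so taking the first ranged key is
// effectively a random eviction.
type boundedPool struct {
	entries map[string][]byte
	maxSize int
}

func (p *boundedPool) add(key string, value []byte) {
	p.entries[key] = value
	if len(p.entries) > p.maxSize {
		for evictKey := range p.entries {
			delete(p.entries, evictKey)
			break
		}
	}
}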

View File

@@ -0,0 +1,43 @@
package flowcontext
import "github.com/kaspanet/kaspad/util/mstime"
const (
maxSelectedParentTimeDiffToAllowMiningInMilliSeconds = 60 * 60 * 1000 // 1 Hour
)
// ShouldMine returns whether it's ok to use block template from this node
// for mining purposes.
func (f *FlowContext) ShouldMine() (bool, error) {
peers := f.Peers()
if len(peers) == 0 {
log.Debugf("The node is not connected, so ShouldMine returns false")
return false, nil
}
if f.IsIBDRunning() {
log.Debugf("IBD is running, so ShouldMine returns false")
return false, nil
}
virtualSelectedParent, err := f.domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
virtualSelectedParentHeader, err := f.domain.Consensus().GetBlockHeader(virtualSelectedParent)
if err != nil {
return false, err
}
now := mstime.Now().UnixMilliseconds()
if now-virtualSelectedParentHeader.TimeInMilliseconds() < maxSelectedParentTimeDiffToAllowMiningInMilliSeconds {
log.Debugf("The selected tip timestamp is recent (%d), so ShouldMine returns true",
virtualSelectedParentHeader.TimeInMilliseconds())
return true, nil
}
log.Debugf("The selected tip timestamp is old (%d), so ShouldMine returns false",
virtualSelectedParentHeader.TimeInMilliseconds())
return false, nil
}

View File

@@ -4,9 +4,9 @@ import (
"time"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/relaytransactions"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
// AddTransaction adds transaction to the mempool and propagates it.
@@ -19,7 +19,7 @@ func (f *FlowContext) AddTransaction(tx *externalapi.DomainTransaction) error {
return err
}
transactionID := consensusserialization.TransactionID(tx)
transactionID := consensushashing.TransactionID(tx)
f.transactionsToRebroadcast[*transactionID] = tx
inv := appmessage.NewMsgInvTransaction([]*externalapi.DomainTransactionID{transactionID})
return f.Broadcast(inv)
@@ -32,7 +32,7 @@ func (f *FlowContext) updateTransactionsToRebroadcast(block *externalapi.DomainB
// anymore, although they are not included in the UTXO set.
// This is probably ok, since red blocks are quite rare.
for _, tx := range block.Transactions {
delete(f.transactionsToRebroadcast, *consensusserialization.TransactionID(tx))
delete(f.transactionsToRebroadcast, *consensushashing.TransactionID(tx))
}
}
@@ -48,15 +48,15 @@ func (f *FlowContext) txIDsToRebroadcast() []*externalapi.DomainTransactionID {
txIDs := make([]*externalapi.DomainTransactionID, len(f.transactionsToRebroadcast))
i := 0
for _, tx := range f.transactionsToRebroadcast {
txIDs[i] = consensusserialization.TransactionID(tx)
txIDs[i] = consensushashing.TransactionID(tx)
i++
}
return txIDs
}
// SharedRequestedTransactions returns a *relaytransactions.SharedRequestedTransactions for sharing
// SharedRequestedTransactions returns a *transactionrelay.SharedRequestedTransactions for sharing
// data about requested transactions between different peers.
func (f *FlowContext) SharedRequestedTransactions() *relaytransactions.SharedRequestedTransactions {
func (f *FlowContext) SharedRequestedTransactions() *transactionrelay.SharedRequestedTransactions {
return f.sharedRequestedTransactions
}

View File

@@ -5,14 +5,12 @@ import (
"github.com/kaspanet/kaspad/app/protocol/common"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// ReceiveAddressesContext is the interface for the context needed for the ReceiveAddresses flow.
type ReceiveAddressesContext interface {
Config() *config.Config
AddressManager() *addressmanager.AddressManager
}

View File

@@ -1,7 +1,6 @@
package addressexchange
import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"math/rand"
"github.com/kaspanet/kaspad/app/appmessage"
@@ -17,16 +16,11 @@ type SendAddressesContext interface {
// SendAddresses sends addresses to a peer that requests it.
func SendAddresses(context SendAddressesContext, incomingRoute *router.Route, outgoingRoute *router.Route) error {
for {
message, err := incomingRoute.Dequeue()
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
_, ok := message.(*appmessage.MsgRequestAddresses)
if !ok {
return protocolerrors.Errorf(true, "unexpected message. "+
"Expected: %s, got: %s", appmessage.CmdRequestAddresses, message.Command())
}
addresses := context.AddressManager().Addresses()
msgAddresses := appmessage.NewMsgAddresses(shuffleAddresses(addresses))

View File

@@ -0,0 +1,29 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
func (flow *handleRelayInvsFlow) sendGetBlockLocator(lowHash *externalapi.DomainHash,
highHash *externalapi.DomainHash, limit uint32) error {
msgGetBlockLocator := appmessage.NewMsgRequestBlockLocator(lowHash, highHash, limit)
return flow.outgoingRoute.Enqueue(msgGetBlockLocator)
}
func (flow *handleRelayInvsFlow) receiveBlockLocator() (blockLocatorHashes []*externalapi.DomainHash, err error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
msgBlockLocator, ok := message.(*appmessage.MsgBlockLocator)
if !ok {
return nil,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdBlockLocator, message.Command())
}
return msgBlockLocator.BlockLocatorHashes, nil
}

View File

@@ -0,0 +1,77 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleIBDBlockLocatorContext is the interface for the context needed for the HandleIBDBlockLocator flow.
type HandleIBDBlockLocatorContext interface {
Domain() domain.Domain
}
// HandleIBDBlockLocator listens to appmessage.MsgIBDBlockLocator messages and sends
// the highest known block that's in the selected parent chain of `targetHash` to the
// requesting peer.
func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
for {
message, err := incomingRoute.Dequeue()
if err != nil {
return err
}
ibdBlockLocatorMessage := message.(*appmessage.MsgIBDBlockLocator)
targetHash := ibdBlockLocatorMessage.TargetHash
log.Debugf("Received IBDBlockLocator from %s with targetHash %s", peer, targetHash)
blockInfo, err := context.Domain().Consensus().GetBlockInfo(targetHash)
if err != nil {
return err
}
if !blockInfo.Exists {
return protocolerrors.Errorf(true, "received IBDBlockLocator "+
"with an unknown targetHash %s", targetHash)
}
foundHighestHashInTheSelectedParentChainOfTargetHash := false
for _, blockLocatorHash := range ibdBlockLocatorMessage.BlockLocatorHashes {
blockInfo, err := context.Domain().Consensus().GetBlockInfo(blockLocatorHash)
if err != nil {
return err
}
if !blockInfo.Exists {
continue
}
isBlockLocatorHashInSelectedParentChainOfHighHash, err :=
context.Domain().Consensus().IsInSelectedParentChainOf(blockLocatorHash, targetHash)
if err != nil {
return err
}
if !isBlockLocatorHashInSelectedParentChainOfHighHash {
continue
}
foundHighestHashInTheSelectedParentChainOfTargetHash = true
log.Debugf("Found a known hash %s amongst peer %s's "+
"blockLocator that's in the selected parent chain of targetHash %s", blockLocatorHash, peer, targetHash)
ibdBlockLocatorHighestHashMessage := appmessage.NewMsgIBDBlockLocatorHighestHash(blockLocatorHash)
err = outgoingRoute.Enqueue(ibdBlockLocatorHighestHashMessage)
if err != nil {
return err
}
break
}
if !foundHighestHashInTheSelectedParentChainOfTargetHash {
return protocolerrors.Errorf(true, "no hash was found in the blockLocator "+
"that was in the selected parent chain of targetHash %s", targetHash)
}
}
}

View File

@@ -1,9 +1,10 @@
package ibd
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
@@ -24,13 +25,14 @@ func HandleIBDBlockRequests(context HandleIBDBlockRequestsContext, incomingRoute
return err
}
msgRequestIBDBlocks := message.(*appmessage.MsgRequestIBDBlocks)
for _, hash := range msgRequestIBDBlocks.Hashes {
log.Debugf("Got request for %d ibd blocks", len(msgRequestIBDBlocks.Hashes))
for i, hash := range msgRequestIBDBlocks.Hashes {
// Fetch the block from the database.
blockInfo, err := context.Domain().Consensus().GetBlockInfo(hash)
if err != nil {
return err
}
if !blockInfo.Exists {
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
return protocolerrors.Errorf(true, "block %s not found", hash)
}
block, err := context.Domain().Consensus().GetBlock(hash)
@@ -46,6 +48,7 @@ func HandleIBDBlockRequests(context HandleIBDBlockRequestsContext, incomingRoute
if err != nil {
return err
}
log.Debugf("sent %d out of %d", i, len(msgRequestIBDBlocks.Hashes))
}
}
}

View File

@@ -0,0 +1,51 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandlePruningPointHashRequestsFlowContext is the interface for the context needed for the handlePruningPointHashRequestsFlow flow.
type HandlePruningPointHashRequestsFlowContext interface {
Domain() domain.Domain
}
type handlePruningPointHashRequestsFlow struct {
HandlePruningPointHashRequestsFlowContext
incomingRoute, outgoingRoute *router.Route
}
// HandlePruningPointHashRequests listens to appmessage.MsgRequestPruningPointHashMessage messages and sends
// the pruning point hash as response.
func HandlePruningPointHashRequests(context HandlePruningPointHashRequestsFlowContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handlePruningPointHashRequestsFlow{
HandlePruningPointHashRequestsFlowContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handlePruningPointHashRequestsFlow) start() error {
for {
_, err := flow.incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for a pruning point hash")
pruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return err
}
err = flow.outgoingRoute.Enqueue(appmessage.NewPruningPointHashMessage(pruningPoint))
if err != nil {
return err
}
log.Debugf("Sent pruning point hash %s", pruningPoint)
}
}

View File

@@ -5,6 +5,7 @@ import (
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
@@ -25,13 +26,14 @@ func HandleRelayBlockRequests(context RelayBlockRequestsContext, incomingRoute *
return err
}
getRelayBlocksMessage := message.(*appmessage.MsgRequestRelayBlocks)
log.Debugf("Got request for relay blocks with hashes %s", getRelayBlocksMessage.Hashes)
for _, hash := range getRelayBlocksMessage.Hashes {
// Fetch the block from the database.
blockInfo, err := context.Domain().Consensus().GetBlockInfo(hash)
if err != nil {
return err
}
if !blockInfo.Exists {
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
return protocolerrors.Errorf(true, "block %s not found", hash)
}
block, err := context.Domain().Consensus().GetBlock(hash)
@@ -45,6 +47,7 @@ func HandleRelayBlockRequests(context RelayBlockRequestsContext, incomingRoute *
if err != nil {
return err
}
log.Debugf("Relayed block with hash %s", hash)
}
}
}

View File

@@ -8,25 +8,30 @@ import (
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/blocks"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
mathUtil "github.com/kaspanet/kaspad/util/math"
"github.com/pkg/errors"
)
// orphanResolutionRange is the maximum amount of blockLocator hashes
// to search for known blocks. See isBlockInOrphanResolutionRange for
// further details
var orphanResolutionRange uint32 = 5
// RelayInvsContext is the interface for the context needed for the HandleRelayInvs flow.
type RelayInvsContext interface {
Domain() domain.Domain
NetAdapter() *netadapter.NetAdapter
OnNewBlock(block *externalapi.DomainBlock) error
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
SharedRequestedBlocks() *SharedRequestedBlocks
StartIBDIfRequired() error
IsInIBD() bool
Broadcast(message appmessage.Message) error
AddOrphan(orphanBlock *externalapi.DomainBlock)
GetOrphanRoots(orphanHash *externalapi.DomainHash) ([]*externalapi.DomainHash, bool, error)
IsOrphan(blockHash *externalapi.DomainHash) bool
IsIBDRunning() bool
TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool
UnsetIBDRunning()
}
type handleRelayInvsFlow struct {
@@ -53,6 +58,7 @@ func HandleRelayInvs(context RelayInvsContext, incomingRoute *router.Route, outg
func (flow *handleRelayInvsFlow) start() error {
for {
log.Debugf("Waiting for inv")
inv, err := flow.readInv()
if err != nil {
return err
@@ -64,35 +70,67 @@ func (flow *handleRelayInvsFlow) start() error {
if err != nil {
return err
}
if blockInfo.Exists {
if blockInfo.Exists && blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
if blockInfo.BlockStatus == externalapi.StatusInvalid {
return protocolerrors.Errorf(true, "sent inv of an invalid block %s",
inv.Hash)
}
log.Debugf("Block %s already exists. continuing...", inv.Hash)
continue
}
if flow.IsOrphan(inv.Hash) {
continue
}
err = flow.StartIBDIfRequired()
if err != nil {
return err
}
if flow.IsInIBD() {
// Block relay is disabled during IBD
continue
}
requestQueue := newHashesQueueSet()
requestQueue.enqueueIfNotExists(inv.Hash)
for requestQueue.len() > 0 {
err := flow.requestBlocks(requestQueue)
log.Debugf("Block %s is a known orphan. Requesting its missing ancestors", inv.Hash)
err := flow.AddOrphanRootsToQueue(inv.Hash)
if err != nil {
return err
}
continue
}
// Block relay is disabled during IBD
if flow.IsIBDRunning() {
log.Debugf("Got block %s while in IBD. continuing...", inv.Hash)
continue
}
log.Debugf("Requesting block %s", inv.Hash)
block, exists, err := flow.requestBlock(inv.Hash)
if err != nil {
return err
}
if exists {
log.Debugf("Aborting requesting block %s because it already exists", inv.Hash)
continue
}
log.Debugf("Processing block %s", inv.Hash)
missingParents, blockInsertionResult, err := flow.processBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Infof("Ignoring duplicate block %s", inv.Hash)
continue
}
return err
}
if len(missingParents) > 0 {
log.Debugf("Block %s is orphan and has missing parents: %s", inv.Hash, missingParents)
err := flow.processOrphan(block, missingParents)
if err != nil {
return err
}
continue
}
log.Debugf("Relaying block %s", inv.Hash)
err = flow.relayBlock(block)
if err != nil {
return err
}
log.Infof("Accepted block %s via relay", inv.Hash)
err = flow.OnNewBlock(block, blockInsertionResult)
if err != nil {
return err
}
}
}
@@ -117,68 +155,34 @@ func (flow *handleRelayInvsFlow) readInv() (*appmessage.MsgInvRelayBlock, error)
return inv, nil
}
func (flow *handleRelayInvsFlow) requestBlocks(requestQueue *hashesQueueSet) error {
numHashesToRequest := mathUtil.MinInt(appmessage.MaxRequestRelayBlocksHashes, requestQueue.len())
hashesToRequest := requestQueue.dequeue(numHashesToRequest)
pendingBlocks := map[externalapi.DomainHash]struct{}{}
var filteredHashesToRequest []*externalapi.DomainHash
for _, hash := range hashesToRequest {
exists := flow.SharedRequestedBlocks().addIfNotExists(hash)
if exists {
continue
}
// The block can become known from another peer in the process of orphan resolution
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(hash)
if err != nil {
return err
}
if blockInfo.Exists {
continue
}
pendingBlocks[*hash] = struct{}{}
filteredHashesToRequest = append(filteredHashesToRequest, hash)
func (flow *handleRelayInvsFlow) requestBlock(requestHash *externalapi.DomainHash) (*externalapi.DomainBlock, bool, error) {
exists := flow.SharedRequestedBlocks().addIfNotExists(requestHash)
if exists {
return nil, true, nil
}
// Exit early if we've filtered out all the hashes
if len(filteredHashesToRequest) == 0 {
return nil
}
// In case the function returns earlier than expected, we want to make sure requestedBlocks is
// In case the function returns earlier than expected, we want to make sure flow.SharedRequestedBlocks() is
// clean from any pending blocks.
defer flow.SharedRequestedBlocks().removeSet(pendingBlocks)
defer flow.SharedRequestedBlocks().remove(requestHash)
getRelayBlocksMsg := appmessage.NewMsgRequestRelayBlocks(filteredHashesToRequest)
getRelayBlocksMsg := appmessage.NewMsgRequestRelayBlocks([]*externalapi.DomainHash{requestHash})
err := flow.outgoingRoute.Enqueue(getRelayBlocksMsg)
if err != nil {
return err
return nil, false, err
}
for len(pendingBlocks) > 0 {
msgBlock, err := flow.readMsgBlock()
if err != nil {
return err
}
block := appmessage.MsgBlockToDomainBlock(msgBlock)
blockHash := consensusserialization.BlockHash(block)
if _, ok := pendingBlocks[*blockHash]; !ok {
return protocolerrors.Errorf(true, "got unrequested block %s", blockHash)
}
err = flow.processAndRelayBlock(requestQueue, block)
if err != nil {
return err
}
delete(pendingBlocks, *blockHash)
flow.SharedRequestedBlocks().remove(blockHash)
msgBlock, err := flow.readMsgBlock()
if err != nil {
return nil, false, err
}
return nil
block := appmessage.MsgBlockToDomainBlock(msgBlock)
blockHash := consensushashing.BlockHash(block)
if !blockHash.Equal(requestHash) {
return nil, false, protocolerrors.Errorf(true, "got unrequested block %s", blockHash)
}
return block, false, nil
}
// readMsgBlock returns the next msgBlock in msgChan, and populates invsQueue with any inv messages that meanwhile arrive.
@@ -202,68 +206,104 @@ func (flow *handleRelayInvsFlow) readMsgBlock() (msgBlock *appmessage.MsgBlock,
}
}
func (flow *handleRelayInvsFlow) processAndRelayBlock(requestQueue *hashesQueueSet, block *externalapi.DomainBlock) error {
blockHash := consensusserialization.BlockHash(block)
err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, *externalapi.BlockInsertionResult, error) {
blockHash := consensushashing.BlockHash(block)
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return errors.Wrapf(err, "failed to process block %s", blockHash)
return nil, nil, errors.Wrapf(err, "failed to process block %s", blockHash)
}
missingParentsError := &ruleerrors.ErrMissingParents{}
if errors.As(err, missingParentsError) {
blueScore, err := blocks.ExtractBlueScore(block)
if err != nil {
return protocolerrors.Errorf(true, "received an orphan "+
"block %s with malformed blue score", blockHash)
}
const maxOrphanBlueScoreDiff = 10000
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return err
}
selectedTipBlueScore, err := blocks.ExtractBlueScore(virtualSelectedParent)
if err != nil {
return err
}
if blueScore > selectedTipBlueScore+maxOrphanBlueScoreDiff {
log.Infof("Orphan block %s has blue score %d and the selected tip blue score is "+
"%d. Ignoring orphans with a blue score difference from the selected tip greater than %d",
blockHash, blueScore, selectedTipBlueScore, maxOrphanBlueScoreDiff)
return nil
}
// Add the orphan to the orphan pool
flow.AddOrphan(block)
// Request the parents for the orphan block from the peer that sent it.
for _, missingAncestor := range missingParentsError.MissingParentHashes {
requestQueue.enqueueIfNotExists(missingAncestor)
}
return nil
return missingParentsError.MissingParentHashes, nil, nil
}
log.Infof("Rejected block %s from %s: %s", blockHash, flow.peer, err)
log.Warnf("Rejected block %s from %s: %s", blockHash, flow.peer, err)
return nil, nil, protocolerrors.Wrapf(true, err, "got invalid block %s from relay", blockHash)
}
return nil, blockInsertionResult, nil
}
return protocolerrors.Wrapf(true, err, "got invalid block %s from relay", blockHash)
func (flow *handleRelayInvsFlow) relayBlock(block *externalapi.DomainBlock) error {
blockHash := consensushashing.BlockHash(block)
return flow.Broadcast(appmessage.NewMsgInvBlock(blockHash))
}
func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock, missingParents []*externalapi.DomainHash) error {
blockHash := consensushashing.BlockHash(block)
// Return if the block has been orphaned from elsewhere already
if flow.IsOrphan(blockHash) {
log.Debugf("Skipping orphan processing for block %s because it is already an orphan", blockHash)
return nil
}
err = flow.Broadcast(appmessage.NewMsgInvBlock(blockHash))
// Add the block to the orphan set if it's within orphan resolution range
isBlockInOrphanResolutionRange, err := flow.isBlockInOrphanResolutionRange(blockHash)
if err != nil {
return err
}
if isBlockInOrphanResolutionRange {
log.Debugf("Block %s is within orphan resolution range. "+
"Adding it to the orphan set", blockHash)
flow.AddOrphan(block)
log.Debugf("Requesting block %s missing ancestors", blockHash)
return flow.AddOrphanRootsToQueue(blockHash)
}
// Start IBD unless we already are in IBD
log.Debugf("Block %s is out of orphan resolution range. "+
"Attempting to start IBD against it.", blockHash)
return flow.runIBDIfNotRunning(blockHash)
}
// isBlockInOrphanResolutionRange finds out whether the given blockHash should be
// retrieved via the unorphaning mechanism or via IBD. This method sends a
// getBlockLocator request to the peer with a limit of orphanResolutionRange.
// In the response, if we know none of the hashes, we should retrieve the given
// blockHash via IBD. Otherwise, via unorphaning.
func (flow *handleRelayInvsFlow) isBlockInOrphanResolutionRange(blockHash *externalapi.DomainHash) (bool, error) {
lowHash := flow.Config().ActiveNetParams.GenesisHash
err := flow.sendGetBlockLocator(lowHash, blockHash, orphanResolutionRange)
if err != nil {
return false, err
}
blockLocatorHashes, err := flow.receiveBlockLocator()
if err != nil {
return false, err
}
for _, blockLocatorHash := range blockLocatorHashes {
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(blockLocatorHash)
if err != nil {
return false, err
}
if blockInfo.Exists && blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
return true, nil
}
}
return false, nil
}
func (flow *handleRelayInvsFlow) AddOrphanRootsToQueue(orphan *externalapi.DomainHash) error {
orphanRoots, orphanExists, err := flow.GetOrphanRoots(orphan)
if err != nil {
return err
}
log.Infof("Accepted block %s via relay", blockHash)
err = flow.StartIBDIfRequired()
if err != nil {
return err
}
err = flow.OnNewBlock(block)
if err != nil {
return err
if !orphanExists {
log.Infof("Orphan block %s was missing from the orphan pool while requesting for its roots. This "+
"probably happened because it was randomly evicted immediately after it was added.", orphan)
}
log.Infof("Block %s has %d missing ancestors. Adding them to the invs queue...", orphan, len(orphanRoots))
invMessages := make([]*appmessage.MsgInvRelayBlock, len(orphanRoots))
for i, root := range orphanRoots {
log.Debugf("Adding block %s missing ancestor %s to the invs queue", orphan, root)
invMessages[i] = appmessage.NewMsgInvBlock(root)
}
flow.invsQueue = append(invMessages, flow.invsQueue...)
return nil
}

View File
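The pivotal decision in the rewritten flow is isBlockInOrphanResolutionRange: request a block locator of at most orphanResolutionRange hashes from the peer, and unorphan only if at least one of those hashes is already known locally with its block body; otherwise fall back to IBD. A self-contained toy of that predicate, using illustrative names rather than the consensus API:

// shouldUnorphan mirrors the check above over plain strings: the orphan is
// resolvable via the unorphaning mechanism only if some locator hash is
// already known locally with a block body (i.e. not header-only).
func shouldUnorphan(locatorHashes []string, knownWithBody map[string]bool) bool {
	for _, hash := range locatorHashes {
		if knownWithBody[hash] {
			return true
		}
	}
	return false
}

When this returns false, the real flow calls runIBDIfNotRunning instead of adding the block to the orphan pool.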

@@ -1,4 +1,4 @@
package ibd
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
@@ -32,13 +32,18 @@ func HandleRequestBlockLocator(context RequestBlockLocatorContext, incomingRoute
func (flow *handleRequestBlockLocatorFlow) start() error {
for {
lowHash, highHash, err := flow.receiveGetBlockLocator()
lowHash, highHash, limit, err := flow.receiveGetBlockLocator()
if err != nil {
return err
}
log.Debugf("Received getBlockLocator with lowHash: %s, highHash: %s, limit: %d",
lowHash, highHash, limit)
locator, err := flow.Domain().Consensus().CreateBlockLocator(lowHash, highHash)
locator, err := flow.Domain().Consensus().CreateBlockLocator(lowHash, highHash, limit)
if err != nil || len(locator) == 0 {
if err != nil {
log.Debugf("Received error from CreateBlockLocator: %s", err)
}
return protocolerrors.Errorf(true, "couldn't build a block "+
"locator between blocks %s and %s", lowHash, highHash)
}
@@ -51,15 +56,15 @@ func (flow *handleRequestBlockLocatorFlow) start() error {
}
func (flow *handleRequestBlockLocatorFlow) receiveGetBlockLocator() (lowHash *externalapi.DomainHash,
highHash *externalapi.DomainHash, err error) {
highHash *externalapi.DomainHash, limit uint32, err error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, nil, err
return nil, nil, 0, err
}
msgGetBlockLocator := message.(*appmessage.MsgRequestBlockLocator)
return msgGetBlockLocator.LowHash, msgGetBlockLocator.HighHash, nil
return msgGetBlockLocator.LowHash, msgGetBlockLocator.HighHash, msgGetBlockLocator.Limit, nil
}
func (flow *handleRequestBlockLocatorFlow) sendBlockLocator(locator externalapi.BlockLocator) error {

View File

@@ -1,16 +1,16 @@
package ibd
package blockrelay
import (
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
const ibdBatchSize = router.DefaultMaxMessages
const maxHeaders = appmessage.MaxInvPerMsg
// RequestIBDBlocksContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestIBDBlocksContext interface {
@@ -20,14 +20,18 @@ type RequestIBDBlocksContext interface {
type handleRequestBlocksFlow struct {
RequestIBDBlocksContext
incomingRoute, outgoingRoute *router.Route
peer *peer.Peer
}
// HandleRequestHeaders handles RequestHeaders messages
func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router.Route, outgoingRoute *router.Route) error {
func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
flow := &handleRequestBlocksFlow{
RequestIBDBlocksContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
@@ -38,40 +42,48 @@ func (flow *handleRequestBlocksFlow) start() error {
if err != nil {
return err
}
log.Debugf("Recieved requestHeaders with lowHash: %s, highHash: %s", lowHash, highHash)
msgHeaders, err := flow.buildMsgBlockHeaders(lowHash, highHash)
if err != nil {
return err
}
for !lowHash.Equal(highHash) {
log.Debugf("Getting block headers between %s and %s to %s", lowHash, highHash, flow.peer)
for offset := 0; offset < len(msgHeaders); offset += ibdBatchSize {
end := offset + ibdBatchSize
if end > len(msgHeaders) {
end = len(msgHeaders)
}
blocksToSend := msgHeaders[offset:end]
err = flow.sendHeaders(blocksToSend)
// GetHashesBetween is a relatively heavy operation so we limit it
// in order to avoid locking the consensus for too long
const maxBlueScoreDifference = 1 << 10
blockHashes, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash, maxBlueScoreDifference)
if err != nil {
return nil
return err
}
log.Debugf("Got %d header hashes above lowHash %s", len(blockHashes), lowHash)
blockHeaders := make([]*appmessage.MsgBlockHeader, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := flow.Domain().Consensus().GetBlockHeader(blockHash)
if err != nil {
return err
}
blockHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(blockHeader)
}
// Exit the loop and don't wait for the GetNextIBDBlocks message if the last batch was
// less than ibdBatchSize.
if len(blocksToSend) < ibdBatchSize {
break
blockHeadersMessage := appmessage.NewBlockHeadersMessage(blockHeaders)
err = flow.outgoingRoute.Enqueue(blockHeadersMessage)
if err != nil {
return err
}
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return err
}
if _, ok := message.(*appmessage.MsgRequestNextHeaders); !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextHeaders, message.Command())
}
// The next lowHash is the last element in blockHashes
lowHash = blockHashes[len(blockHashes)-1]
}
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
return err
@@ -90,37 +102,3 @@ func receiveRequestHeaders(incomingRoute *router.Route) (lowHash *externalapi.Do
return msgRequestIBDBlocks.LowHash, msgRequestIBDBlocks.HighHash, nil
}
func (flow *handleRequestBlocksFlow) buildMsgBlockHeaders(lowHash *externalapi.DomainHash,
highHash *externalapi.DomainHash) ([]*appmessage.MsgBlockHeader, error) {
blockHashes, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash)
if err != nil {
return nil, err
}
if len(blockHashes) > maxHeaders {
blockHashes = blockHashes[:maxHeaders]
}
msgBlockHeaders := make([]*appmessage.MsgBlockHeader, len(blockHashes))
for i, blockHash := range blockHashes {
header, err := flow.Domain().Consensus().GetBlockHeader(blockHash)
if err != nil {
return nil, err
}
msgBlockHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(header)
}
return msgBlockHeaders, nil
}
func (flow *handleRequestBlocksFlow) sendHeaders(headers []*appmessage.MsgBlockHeader) error {
for _, msgBlockHeader := range headers {
err := flow.outgoingRoute.Enqueue(msgBlockHeader)
if err != nil {
return err
}
}
return nil
}

View File
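The rewritten HandleRequestHeaders pages through the DAG instead of building one header list up front: each round fetches at most maxBlueScoreDifference (1 << 10 = 1024) hashes above lowHash, sends their headers, waits for MsgRequestNextHeaders, and advances lowHash to the last hash returned. A runnable toy of that paging pattern over integers, purely illustrative:

package main

import "fmt"

// Toy illustration (not the consensus API): fetch at most `limit` items above
// lowIndex, send them, then advance lowIndex to the last item returned, until
// highIndex is reached.
func main() {
	const highIndex = 10
	const limit = 4 // stands in for maxBlueScoreDifference, 1 << 10 in the real flow
	lowIndex := 0
	for lowIndex != highIndex {
		end := lowIndex + limit
		if end > highIndex {
			end = highIndex
		}
		batch := make([]int, 0, limit)
		for i := lowIndex + 1; i <= end; i++ {
			batch = append(batch, i)
		}
		fmt.Println("sending headers for", batch)
		// The real flow waits for a MsgRequestNextHeaders message here.
		lowIndex = batch[len(batch)-1]
	}
	fmt.Println("done")
}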

@@ -0,0 +1,144 @@
package blockrelay
import (
"errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleRequestPruningPointUTXOSetAndBlockContext is the interface for the context needed for the HandleRequestPruningPointUTXOSetAndBlock flow.
type HandleRequestPruningPointUTXOSetAndBlockContext interface {
Domain() domain.Domain
}
type handleRequestPruningPointUTXOSetAndBlockFlow struct {
HandleRequestPruningPointUTXOSetAndBlockContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestPruningPointUTXOSetAndBlock listens to appmessage.MsgRequestPruningPointUTXOSetAndBlock messages and sends
// the pruning point UTXO set and block body.
func HandleRequestPruningPointUTXOSetAndBlock(context HandleRequestPruningPointUTXOSetAndBlockContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handleRequestPruningPointUTXOSetAndBlockFlow{
HandleRequestPruningPointUTXOSetAndBlockContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) start() error {
for {
msgRequestPruningPointUTXOSetAndBlock, err := flow.waitForRequestPruningPointUTXOSetAndBlockMessages()
if err != nil {
return err
}
err = flow.handleRequestPruningPointUTXOSetAndBlockMessage(msgRequestPruningPointUTXOSetAndBlock)
if err != nil {
return err
}
}
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) handleRequestPruningPointUTXOSetAndBlockMessage(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "handleRequestPruningPointUTXOSetAndBlockFlow")
defer onEnd()
log.Debugf("Got request for PruningPointHash UTXOSet and Block")
err := flow.sendPruningPointBlock(msgRequestPruningPointUTXOSetAndBlock)
if err != nil {
return err
}
return flow.sendPruningPointUTXOSet(msgRequestPruningPointUTXOSetAndBlock)
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) waitForRequestPruningPointUTXOSetAndBlockMessages() (
*appmessage.MsgRequestPruningPointUTXOSetAndBlock, error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, err
}
msgRequestPruningPointUTXOSetAndBlock, ok := message.(*appmessage.MsgRequestPruningPointUTXOSetAndBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestPruningPointUTXOSetAndBlock, message.Command())
}
return msgRequestPruningPointUTXOSetAndBlock, nil
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) sendPruningPointBlock(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
block, err := flow.Domain().Consensus().GetBlock(msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
if err != nil {
return err
}
log.Debugf("Retrieved pruning block %s", msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgIBDBlock(appmessage.DomainBlockToMsgBlock(block)))
}
func (flow *handleRequestPruningPointUTXOSetAndBlockFlow) sendPruningPointUTXOSet(
msgRequestPruningPointUTXOSetAndBlock *appmessage.MsgRequestPruningPointUTXOSetAndBlock) error {
// Send the UTXO set in `step`-sized chunks
const step = 1000
var fromOutpoint *externalapi.DomainOutpoint
chunksSent := 0
for {
pruningPointUTXOs, err := flow.Domain().Consensus().GetPruningPointUTXOs(
msgRequestPruningPointUTXOSetAndBlock.PruningPointHash, fromOutpoint, step)
if err != nil {
if errors.Is(err, ruleerrors.ErrWrongPruningPointHash) {
return flow.outgoingRoute.Enqueue(appmessage.NewMsgUnexpectedPruningPoint())
}
return err
}
log.Debugf("Retrieved %d UTXOs for pruning block %s",
len(pruningPointUTXOs), msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
outpointAndUTXOEntryPairs :=
appmessage.DomainOutpointAndUTXOEntryPairsToOutpointAndUTXOEntryPairs(pruningPointUTXOs)
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgPruningPointUTXOSetChunk(outpointAndUTXOEntryPairs))
if err != nil {
return err
}
if len(pruningPointUTXOs) < step {
log.Debugf("Finished sending UTXOs for pruning block %s",
msgRequestPruningPointUTXOSetAndBlock.PruningPointHash)
return flow.outgoingRoute.Enqueue(appmessage.NewMsgDonePruningPointUTXOSetChunks())
}
fromOutpoint = pruningPointUTXOs[len(pruningPointUTXOs)-1].Outpoint
chunksSent++
// Wait for the peer to request more chunks every `ibdBatchSize` chunks
if chunksSent%ibdBatchSize == 0 {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
_, ok := message.(*appmessage.MsgRequestNextPruningPointUTXOSetChunk)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointUTXOSetChunk, message.Command())
}
}
}
}

View File
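sendPruningPointUTXOSet streams the UTXO set in step-sized chunks (1000 UTXOs each), using the last outpoint of each chunk as the cursor for the next GetPruningPointUTXOs call, and it pauses every ibdBatchSize chunks until the peer sends MsgRequestNextPruningPointUTXOSetChunk. A runnable toy of that cursor-plus-flow-control pattern with illustrative values (ibdBatchSize is router.DefaultMaxMessages in the real code):

package main

import "fmt"

func main() {
	const step = 3      // UTXOs per chunk; 1000 in the real flow
	const batchSize = 2 // chunks per flow-control round; ibdBatchSize in the real flow
	utxos := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

	cursor := 0 // index of the next UTXO to send; the real flow tracks fromOutpoint
	chunksSent := 0
	for {
		end := cursor + step
		if end > len(utxos) {
			end = len(utxos)
		}
		chunk := utxos[cursor:end]
		fmt.Println("sending chunk", chunk)
		if len(chunk) < step {
			fmt.Println("sending done message") // MsgDonePruningPointUTXOSetChunks in the real flow
			return
		}
		cursor = end
		chunksSent++
		if chunksSent%batchSize == 0 {
			// The real flow blocks here until MsgRequestNextPruningPointUTXOSetChunk arrives.
			fmt.Println("waiting for the peer to request the next chunk")
		}
	}
}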

@@ -1,37 +0,0 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
type hashesQueueSet struct {
queue []*externalapi.DomainHash
set map[externalapi.DomainHash]struct{}
}
func (r *hashesQueueSet) enqueueIfNotExists(hash *externalapi.DomainHash) {
if _, ok := r.set[*hash]; ok {
return
}
r.queue = append(r.queue, hash)
r.set[*hash] = struct{}{}
}
func (r *hashesQueueSet) dequeue(numItems int) []*externalapi.DomainHash {
var hashes []*externalapi.DomainHash
hashes, r.queue = r.queue[:numItems], r.queue[numItems:]
for _, hash := range hashes {
delete(r.set, *hash)
}
return hashes
}
func (r *hashesQueueSet) len() int {
return len(r.queue)
}
func newHashesQueueSet() *hashesQueueSet {
return &hashesQueueSet{
set: make(map[externalapi.DomainHash]struct{}),
}
}

View File

@@ -0,0 +1,534 @@
package blockrelay
import (
"fmt"
"time"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
)
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.DomainHash) error {
wasIBDNotRunning := flow.TrySetIBDRunning(flow.peer)
if !wasIBDNotRunning {
log.Debugf("IBD is already running")
return nil
}
defer flow.UnsetIBDRunning()
log.Debugf("IBD started with peer %s and highHash %s", flow.peer, highHash)
log.Debugf("Syncing headers up to %s", highHash)
err := flow.syncHeaders(highHash)
if err != nil {
return err
}
log.Debugf("Finished syncing headers up to %s", highHash)
log.Debugf("Syncing the current pruning point UTXO set")
syncedPruningPointUTXOSetSuccessfully, err := flow.syncPruningPointUTXOSet()
if err != nil {
return err
}
if !syncedPruningPointUTXOSetSuccessfully {
log.Debugf("Aborting IBD because the pruning point UTXO set failed to sync")
return nil
}
log.Debugf("Finished syncing the current pruning point UTXO set")
log.Debugf("Downloading block bodies up to %s", highHash)
err = flow.syncMissingBlockBodies(highHash)
if err != nil {
return err
}
log.Debugf("Finished downloading block bodies up to %s", highHash)
return nil
}
func (flow *handleRelayInvsFlow) syncHeaders(highHash *externalapi.DomainHash) error {
highHashReceived := false
for !highHashReceived {
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return err
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
err = flow.downloadHeaders(highestSharedBlockHash, highHash)
if err != nil {
return err
}
// We're finished once highHash has been inserted into the DAG
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(highHash)
if err != nil {
return err
}
highHashReceived = blockInfo.Exists
log.Debugf("Headers downloaded from peer %s. Are further headers required: %t", flow.peer, !highHashReceived)
}
return nil
}
func (flow *handleRelayInvsFlow) findHighestSharedBlockHash(targetHash *externalapi.DomainHash) (*externalapi.DomainHash, error) {
log.Debugf("Sending a blockLocator to %s between pruning point and headers selected tip", flow.peer)
blockLocator, err := flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
if err != nil {
return nil, err
}
for {
highestHash, err := flow.fetchHighestHash(targetHash, blockLocator)
if err != nil {
return nil, err
}
highestHashIndex, err := flow.findHighestHashIndex(highestHash, blockLocator)
if err != nil {
return nil, err
}
if highestHashIndex == 0 ||
// If the block locator contains only two adjacent chain blocks, the
// syncer will always find the same highest chain block, so to avoid
// an endless loop, we explicitly stop the loop in such situation.
(len(blockLocator) == 2 && highestHashIndex == 1) {
return highestHash, nil
}
locatorHashAboveHighestHash := highestHash
if highestHashIndex > 0 {
locatorHashAboveHighestHash = blockLocator[highestHashIndex-1]
}
blockLocator, err = flow.nextBlockLocator(highestHash, locatorHashAboveHighestHash)
if err != nil {
return nil, err
}
}
}
func (flow *handleRelayInvsFlow) nextBlockLocator(lowHash, highHash *externalapi.DomainHash) (externalapi.BlockLocator, error) {
log.Debugf("Sending a blockLocator to %s between %s and %s", flow.peer, lowHash, highHash)
blockLocator, err := flow.Domain().Consensus().CreateHeadersSelectedChainBlockLocator(lowHash, highHash)
if err != nil {
if !errors.Is(err, model.ErrBlockNotInSelectedParentChain) {
return nil, err
}
log.Debugf("Headers selected parent chain moved since findHighestSharedBlockHash - " +
"restarting with full block locator")
blockLocator, err = flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
if err != nil {
return nil, err
}
}
return blockLocator, nil
}
func (flow *handleRelayInvsFlow) findHighestHashIndex(
highestHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (int, error) {
highestHashIndex := 0
highestHashIndexFound := false
for i, blockLocatorHash := range blockLocator {
if highestHash.Equal(blockLocatorHash) {
highestHashIndex = i
highestHashIndexFound = true
break
}
}
if !highestHashIndexFound {
return 0, protocolerrors.Errorf(true, "highest hash %s "+
"returned from peer %s is not in the original blockLocator", highestHash, flow.peer)
}
log.Debugf("The index of the highest hash in the original "+
"blockLocator sent to %s is %d", flow.peer, highestHashIndex)
return highestHashIndex, nil
}
func (flow *handleRelayInvsFlow) fetchHighestHash(
targetHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (*externalapi.DomainHash, error) {
ibdBlockLocatorMessage := appmessage.NewMsgIBDBlockLocator(targetHash, blockLocator)
err := flow.outgoingRoute.Enqueue(ibdBlockLocatorMessage)
if err != nil {
return nil, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
ibdBlockLocatorHighestHashMessage, ok := message.(*appmessage.MsgIBDBlockLocatorHighestHash)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlockLocatorHighestHash, message.Command())
}
highestHash := ibdBlockLocatorHighestHashMessage.HighestHash
log.Debugf("The highest hash the peer %s knows is %s", flow.peer, highestHash)
return highestHash, nil
}
func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externalapi.DomainHash,
highHash *externalapi.DomainHash) error {
err := flow.sendRequestHeaders(highestSharedBlockHash, highHash)
if err != nil {
return err
}
// Keep a short queue of blockHeadersMessages so that there's
// never a moment when the node is not validating and inserting
// headers
blockHeadersMessageChan := make(chan *appmessage.BlockHeadersMessage, 2)
errChan := make(chan error)
doneChan := make(chan interface{})
spawn("handleRelayInvsFlow-downloadHeaders", func() {
for {
blockHeadersMessage, doneIBD, err := flow.receiveHeaders()
if err != nil {
errChan <- err
return
}
if doneIBD {
doneChan <- struct{}{}
return
}
blockHeadersMessageChan <- blockHeadersMessage
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestNextHeaders())
if err != nil {
errChan <- err
return
}
}
})
for {
select {
case blockHeadersMessage := <-blockHeadersMessageChan:
for _, header := range blockHeadersMessage.BlockHeaders {
err = flow.processHeader(header)
if err != nil {
return err
}
}
case err := <-errChan:
return err
case <-doneChan:
return nil
}
}
}
func (flow *handleRelayInvsFlow) sendRequestHeaders(highestSharedBlockHash *externalapi.DomainHash,
peerSelectedTipHash *externalapi.DomainHash) error {
msgGetBlockInvs := appmessage.NewMsgRequstHeaders(highestSharedBlockHash, peerSelectedTipHash)
return flow.outgoingRoute.Enqueue(msgGetBlockInvs)
}
func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneIBD bool, err error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch message := message.(type) {
case *appmessage.BlockHeadersMessage:
return message, false, nil
case *appmessage.MsgDoneHeaders:
return nil, true, nil
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s", appmessage.CmdHeader, appmessage.CmdDoneHeaders, message.Command())
}
}
func (flow *handleRelayInvsFlow) processHeader(msgBlockHeader *appmessage.MsgBlockHeader) error {
header := appmessage.BlockHeaderToDomainBlockHeader(msgBlockHeader)
block := &externalapi.DomainBlock{
Header: header,
Transactions: nil,
}
blockHash := consensushashing.BlockHash(block)
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(blockHash)
if err != nil {
return err
}
if blockInfo.Exists {
log.Debugf("Block header %s is already in the DAG. Skipping...", blockHash)
return nil
}
_, err = flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
}
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping block header %s as it is a duplicate", blockHash)
} else {
log.Infof("Rejected block header %s from %s during IBD: %s", blockHash, flow.peer, err)
return protocolerrors.Wrapf(true, err, "got invalid block header %s during IBD", blockHash)
}
}
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet() (bool, error) {
log.Debugf("Checking if a new pruning point is available")
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointHashMessage())
if err != nil {
return false, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return false, err
}
msgPruningPointHash, ok := message.(*appmessage.MsgPruningPointHashMessage)
if !ok {
return false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdPruningPointHash, message.Command())
}
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(msgPruningPointHash.Hash)
if err != nil {
return false, err
}
if !blockInfo.Exists {
return false, errors.Errorf("The pruning point header is missing")
}
if blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
log.Debugf("Already has the block data of the new suggested pruning point %s", msgPruningPointHash.Hash)
return true, nil
}
log.Infof("Checking if the suggested pruning point %s is compatible to the node DAG", msgPruningPointHash.Hash)
isValid, err := flow.Domain().Consensus().IsValidPruningPoint(msgPruningPointHash.Hash)
if err != nil {
return false, err
}
if !isValid {
log.Infof("The suggested pruning point %s is incompatible to this node DAG, so stopping IBD with this"+
" peer", msgPruningPointHash.Hash)
return false, nil
}
log.Info("Fetching the pruning point UTXO set")
succeed, err := flow.fetchMissingUTXOSet(msgPruningPointHash.Hash)
if err != nil {
return false, err
}
if !succeed {
log.Infof("Couldn't successfully fetch the pruning point UTXO set. Stopping IBD.")
return false, nil
}
log.Info("Fetched the new pruning point UTXO set")
return true, nil
}
func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(pruningPointHash *externalapi.DomainHash) (succeed bool, err error) {
defer func() {
err := flow.Domain().Consensus().ClearImportedPruningPointData()
if err != nil {
panic(fmt.Sprintf("failed to clear imported pruning point data: %s", err))
}
}()
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointUTXOSetAndBlock(pruningPointHash))
if err != nil {
return false, err
}
block, err := flow.receivePruningPointBlock()
if err != nil {
return false, err
}
receivedAll, err := flow.receiveAndInsertPruningPointUTXOSet(pruningPointHash)
if err != nil {
return false, err
}
if !receivedAll {
return false, nil
}
err = flow.Domain().Consensus().ValidateAndInsertImportedPruningPoint(block)
if err != nil {
// TODO: Find a better way to deal with finality conflicts.
if errors.Is(err, ruleerrors.ErrSuggestedPruningViolatesFinality) {
return false, nil
}
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
return true, nil
}
func (flow *handleRelayInvsFlow) receivePruningPointBlock() (*externalapi.DomainBlock, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receivePruningPointBlock")
defer onEnd()
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
}
ibdBlockMessage, ok := message.(*appmessage.MsgIBDBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlock, message.Command())
}
block := appmessage.MsgBlockToDomainBlock(ibdBlockMessage.MsgBlock)
log.Debugf("Received pruning point block %s", consensushashing.BlockHash(block))
return block, nil
}
func (flow *handleRelayInvsFlow) receiveAndInsertPruningPointUTXOSet(
pruningPointHash *externalapi.DomainHash) (bool, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receiveAndInsertPruningPointUTXOSet")
defer onEnd()
receivedChunkCount := 0
receivedUTXOCount := 0
for {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return false, err
}
switch message := message.(type) {
case *appmessage.MsgPruningPointUTXOSetChunk:
receivedUTXOCount += len(message.OutpointAndUTXOEntryPairs)
domainOutpointAndUTXOEntryPairs :=
appmessage.OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(message.OutpointAndUTXOEntryPairs)
err := flow.Domain().Consensus().AppendImportedPruningPointUTXOs(domainOutpointAndUTXOEntryPairs)
if err != nil {
return false, err
}
receivedChunkCount++
if receivedChunkCount%ibdBatchSize == 0 {
log.Debugf("Received %d UTXO set chunks so far, totaling in %d UTXOs",
receivedChunkCount, receivedUTXOCount)
requestNextPruningPointUTXOSetChunkMessage := appmessage.NewMsgRequestNextPruningPointUTXOSetChunk()
err := flow.outgoingRoute.Enqueue(requestNextPruningPointUTXOSetChunkMessage)
if err != nil {
return false, err
}
}
case *appmessage.MsgDonePruningPointUTXOSetChunks:
log.Infof("Finished receiving the UTXO set. Total UTXOs: %d", receivedUTXOCount)
return true, nil
case *appmessage.MsgUnexpectedPruningPoint:
log.Infof("Could not receive the next UTXO chunk because the pruning point %s "+
"is no longer the pruning point of peer %s", pruningPointHash, flow.peer)
return false, nil
default:
return false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s or %s, got: %s", appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdUnexpectedPruningPoint, message.Command(),
)
}
}
}
func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.DomainHash) error {
hashes, err := flow.Domain().Consensus().GetMissingBlockBodyHashes(highHash)
if err != nil {
return err
}
for offset := 0; offset < len(hashes); offset += ibdBatchSize {
var hashesToRequest []*externalapi.DomainHash
if offset+ibdBatchSize < len(hashes) {
hashesToRequest = hashes[offset : offset+ibdBatchSize]
} else {
hashesToRequest = hashes[offset:]
}
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestIBDBlocks(hashesToRequest))
if err != nil {
return err
}
for _, expectedHash := range hashesToRequest {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return err
}
msgIBDBlock, ok := message.(*appmessage.MsgIBDBlock)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlock, message.Command())
}
block := appmessage.MsgBlockToDomainBlock(msgIBDBlock.MsgBlock)
blockHash := consensushashing.BlockHash(block)
if !expectedHash.Equal(blockHash) {
return protocolerrors.Errorf(true, "expected block %s but got %s", expectedHash, blockHash)
}
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping IBD Block %s as it has already been added to the DAG", blockHash)
continue
}
return protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "invalid block %s", blockHash)
}
err = flow.OnNewBlock(block, blockInsertionResult)
if err != nil {
return err
}
}
}
return nil
}
// dequeueIncomingMessageAndSkipInvs is a convenience method to be used during
// IBD. Inv messages are expected to arrive at any given moment, but should be
// ignored while we're in IBD
func (flow *handleRelayInvsFlow) dequeueIncomingMessageAndSkipInvs(timeout time.Duration) (appmessage.Message, error) {
for {
message, err := flow.incomingRoute.DequeueWithTimeout(timeout)
if err != nil {
return nil, err
}
if _, ok := message.(*appmessage.MsgInvRelayBlock); !ok {
return message, nil
}
}
}

View File
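downloadHeaders decouples receiving BlockHeaders messages from validating them: a spawned goroutine keeps up to two messages buffered in a channel and immediately requests the next batch, while the main loop validates headers and also watches separate error and done channels. A self-contained toy of that producer/consumer shape, using plain strings instead of the kaspad types:

package main

import "fmt"

func main() {
	batchChan := make(chan []string, 2) // same small buffer as in downloadHeaders

	go func() {
		defer close(batchChan) // stands in for the done channel signalled on MsgDoneHeaders
		batches := [][]string{{"h1", "h2"}, {"h3"}, {"h4", "h5"}}
		for _, batch := range batches {
			batchChan <- batch
			// The real receiver enqueues MsgRequestNextHeaders here before waiting again.
		}
	}()

	for batch := range batchChan {
		for _, header := range batch {
			fmt.Println("processing header", header) // ValidateAndInsertBlock in the real flow
		}
	}
	fmt.Println("done downloading headers")
}

The real flow additionally selects over an error channel so a receive failure in the spawned goroutine aborts header processing immediately.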

@@ -24,7 +24,6 @@ type HandleHandshakeContext interface {
NetAdapter() *netadapter.NetAdapter
Domain() domain.Domain
AddressManager() *addressmanager.AddressManager
StartIBDIfRequired() error
AddToPeers(peer *peerpkg.Peer) error
HandleError(err error, flowName string, isStopping *uint32, errChan chan<- error)
}
@@ -83,7 +82,7 @@ func HandleHandshake(context HandleHandshakeContext, netConnection *netadapter.N
err := context.AddToPeers(peer)
if err != nil {
if errors.As(err, &common.ErrPeerWithSameIDExists) {
if errors.Is(err, common.ErrPeerWithSameIDExists) {
return nil, protocolerrors.Wrap(false, err, "peer already exists")
}
return nil, err
@@ -92,12 +91,6 @@ func HandleHandshake(context HandleHandshakeContext, netConnection *netadapter.N
if peerAddress != nil {
context.AddressManager().AddAddresses(peerAddress)
}
err = context.StartIBDIfRequired()
if err != nil {
return nil, err
}
return peer, nil
}

View File

@@ -5,6 +5,7 @@ import (
"github.com/kaspanet/kaspad/app/protocol/common"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
@@ -41,11 +42,18 @@ func ReceiveVersion(context HandleHandshakeContext, incomingRoute *router.Route,
}
func (flow *receiveVersionFlow) start() (*appmessage.NetAddress, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receiveVersionFlow.start")
defer onEnd()
log.Debugf("Starting receiveVersionFlow with %s", flow.peer.Address())
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, err
}
log.Debugf("Got version message")
msgVersion, ok := message.(*appmessage.MsgVersion)
if !ok {
return nil, protocolerrors.New(true, "a version message must precede all others")
@@ -84,7 +92,7 @@ func (flow *receiveVersionFlow) start() (*appmessage.NetAddress, error) {
isRemoteNodeFull := msgVersion.SubnetworkID == nil
isOutbound := flow.peer.Connection().IsOutbound()
if (isLocalNodeFull && !isRemoteNodeFull && isOutbound) ||
(!isLocalNodeFull && !isRemoteNodeFull && *msgVersion.SubnetworkID != *localSubnetworkID) {
(!isLocalNodeFull && !isRemoteNodeFull && !msgVersion.SubnetworkID.Equal(localSubnetworkID)) {
return nil, protocolerrors.New(false, "incompatible subnetworks")
}

View File

@@ -4,7 +4,7 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/version"
)
@@ -48,17 +48,16 @@ func SendVersion(context HandleHandshakeContext, incomingRoute *router.Route,
}
func (flow *sendVersionFlow) start() error {
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return err
}
selectedTipHash := consensusserialization.BlockHash(virtualSelectedParent)
subnetworkID := flow.Config().SubnetworkID
onEnd := logger.LogAndMeasureExecutionTime(log, "sendVersionFlow.start")
defer onEnd()
log.Debugf("Starting sendVersionFlow with %s", flow.peer.Address())
// Version message.
localAddress := flow.AddressManager().BestLocalAddress(flow.peer.Connection().NetAddress())
subnetworkID := flow.Config().SubnetworkID
msg := appmessage.NewMsgVersion(localAddress, flow.NetAdapter().ID(),
flow.Config().ActiveNetParams.Name, selectedTipHash, subnetworkID)
flow.Config().ActiveNetParams.Name, subnetworkID)
msg.AddUserAgent(userAgentName, userAgentVersion, flow.Config().UserAgentComments...)
// Advertise the services flag
@@ -70,15 +69,17 @@ func (flow *sendVersionFlow) start() error {
// Advertise if inv messages for transactions are desired.
msg.DisableRelayTx = flow.Config().BlocksOnly
err = flow.outgoingRoute.Enqueue(msg)
err := flow.outgoingRoute.Enqueue(msg)
if err != nil {
return err
}
// Wait for verack
log.Debugf("Waiting for verack")
_, err = flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
log.Debugf("Got verack")
return nil
}

View File

@@ -1,15 +0,0 @@
package ibd
import (
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"testing"
)
func TestMaxHeaders(t *testing.T) {
testutils.ForAllNets(t, false, func(t *testing.T, params *dagconfig.Params) {
if params.FinalityDepth() > maxHeaders {
t.Errorf("FinalityDepth() in %s should be lower or equal to appmessage.MaxInvPerMsg", params.Name)
}
})
}

View File

@@ -1,65 +0,0 @@
package ibd
import (
"errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleRequestIBDRootUTXOSetAndBlockContext is the interface for the context needed for the HandleRequestIBDRootUTXOSetAndBlock flow.
type HandleRequestIBDRootUTXOSetAndBlockContext interface {
Domain() domain.Domain
}
type handleRequestIBDRootUTXOSetAndBlockFlow struct {
HandleRequestIBDRootUTXOSetAndBlockContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestIBDRootUTXOSetAndBlock listens to appmessage.MsgRequestIBDRootUTXOSetAndBlock messages and sends
// the IBD root UTXO set and block body.
func HandleRequestIBDRootUTXOSetAndBlock(context HandleRequestIBDRootUTXOSetAndBlockContext, incomingRoute,
outgoingRoute *router.Route) error {
flow := &handleRequestIBDRootUTXOSetAndBlockFlow{
HandleRequestIBDRootUTXOSetAndBlockContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestIBDRootUTXOSetAndBlockFlow) start() error {
for {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return err
}
msgRequestIBDRootUTXOSetAndBlock := message.(*appmessage.MsgRequestIBDRootUTXOSetAndBlock)
utxoSet, err := flow.Domain().Consensus().GetPruningPointUTXOSet(msgRequestIBDRootUTXOSetAndBlock.IBDRoot)
if err != nil {
if errors.Is(err, ruleerrors.ErrWrongPruningPointHash) {
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgIBDRootNotFound())
if err != nil {
return err
}
continue
}
}
block, err := flow.Domain().Consensus().GetBlock(msgRequestIBDRootUTXOSetAndBlock.IBDRoot)
if err != nil {
return err
}
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgIBDRootUTXOSetAndBlock(utxoSet,
appmessage.DomainBlockToMsgBlock(block)))
if err != nil {
return err
}
}
}

View File

@@ -1,355 +0,0 @@
package ibd
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
// HandleIBDContext is the interface for the context needed for the HandleIBD flow.
type HandleIBDContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock) error
FinishIBD() error
}
type handleIBDFlow struct {
HandleIBDContext
incomingRoute, outgoingRoute *router.Route
peer *peerpkg.Peer
}
// HandleIBD waits for IBD start and handles it when IBD is triggered for this peer
func HandleIBD(context HandleIBDContext, incomingRoute *router.Route, outgoingRoute *router.Route,
peer *peerpkg.Peer) error {
flow := &handleIBDFlow{
HandleIBDContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
func (flow *handleIBDFlow) start() error {
for {
err := flow.runIBD()
if err != nil {
return err
}
}
}
func (flow *handleIBDFlow) runIBD() error {
flow.peer.WaitForIBDStart()
err := flow.ibdLoop()
if err != nil {
finishIBDErr := flow.FinishIBD()
if finishIBDErr != nil {
return finishIBDErr
}
return err
}
return flow.FinishIBD()
}
func (flow *handleIBDFlow) ibdLoop() error {
for {
syncInfo, err := flow.Domain().Consensus().GetSyncInfo()
if err != nil {
return err
}
switch syncInfo.State {
case externalapi.SyncStateHeadersFirst:
err := flow.syncHeaders()
if err != nil {
return err
}
case externalapi.SyncStateMissingUTXOSet:
found, err := flow.fetchMissingUTXOSet(syncInfo.IBDRootUTXOBlockHash)
if err != nil {
return err
}
if !found {
return nil
}
case externalapi.SyncStateMissingBlockBodies:
err := flow.syncMissingBlockBodies()
if err != nil {
return err
}
case externalapi.SyncStateRelay:
return nil
default:
return errors.Errorf("unexpected state %s", syncInfo.State)
}
}
}
func (flow *handleIBDFlow) syncHeaders() error {
peerSelectedTipHash := flow.peer.SelectedTipHash()
log.Debugf("Trying to find highest shared chain block with peer %s with selected tip %s", flow.peer, peerSelectedTipHash)
highestSharedBlockHash, err := flow.findHighestSharedBlockHash(peerSelectedTipHash)
if err != nil {
return err
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
return flow.downloadHeaders(highestSharedBlockHash, peerSelectedTipHash)
}
func (flow *handleIBDFlow) syncMissingBlockBodies() error {
hashes, err := flow.Domain().Consensus().GetMissingBlockBodyHashes(flow.peer.SelectedTipHash())
if err != nil {
return err
}
for offset := 0; offset < len(hashes); offset += appmessage.MaxRequestIBDBlocksHashes {
var hashesToRequest []*externalapi.DomainHash
if offset+appmessage.MaxRequestIBDBlocksHashes < len(hashes) {
hashesToRequest = hashes[offset : offset+appmessage.MaxRequestIBDBlocksHashes]
} else {
hashesToRequest = hashes[offset:]
}
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestIBDBlocks(hashesToRequest))
if err != nil {
return err
}
for _, expectedHash := range hashesToRequest {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
msgIBDBlock, ok := message.(*appmessage.MsgIBDBlock)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlock, message.Command())
}
block := appmessage.MsgBlockToDomainBlock(msgIBDBlock.MsgBlock)
blockHash := consensusserialization.BlockHash(block)
if *expectedHash != *blockHash {
return protocolerrors.Errorf(true, "expected block %s but got %s", expectedHash, blockHash)
}
err = flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
return protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "invalid block %s", blockHash)
}
err = flow.OnNewBlock(block)
if err != nil {
return err
}
}
}
return nil
}
func (flow *handleIBDFlow) fetchMissingUTXOSet(ibdRootHash *externalapi.DomainHash) (bool, error) {
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestIBDRootUTXOSetAndBlock(ibdRootHash))
if err != nil {
return false, err
}
utxoSet, block, found, err := flow.receiveIBDRootUTXOSetAndBlock()
if err != nil {
return false, err
}
if !found {
return false, nil
}
err = flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
blockHash := consensusserialization.BlockHash(block)
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "got invalid block %s during IBD", blockHash)
}
err = flow.Domain().Consensus().SetPruningPointUTXOSet(utxoSet)
if err != nil {
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with IBD root UTXO set")
}
return true, nil
}
func (flow *handleIBDFlow) receiveIBDRootUTXOSetAndBlock() ([]byte, *externalapi.DomainBlock, bool, error) {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, nil, false, err
}
switch message := message.(type) {
case *appmessage.MsgIBDRootUTXOSetAndBlock:
return message.UTXOSet,
appmessage.MsgBlockToDomainBlock(message.Block), true, nil
case *appmessage.MsgIBDRootNotFound:
return nil, nil, false, nil
default:
return nil, nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s",
appmessage.CmdIBDRootUTXOSetAndBlock, appmessage.CmdIBDRootNotFound, message.Command(),
)
}
}
func (flow *handleIBDFlow) findHighestSharedBlockHash(peerSelectedTipHash *externalapi.DomainHash) (lowHash *externalapi.DomainHash,
err error) {
lowHash = flow.Config().ActiveNetParams.GenesisHash
highHash := peerSelectedTipHash
for {
err := flow.sendGetBlockLocator(lowHash, highHash)
if err != nil {
return nil, err
}
blockLocatorHashes, err := flow.receiveBlockLocator()
if err != nil {
return nil, err
}
// We check whether the locator's highest hash is in the local DAG.
// If it is, return it. If it isn't, we need to narrow our
// getBlockLocator request and try again.
locatorHighHash := blockLocatorHashes[0]
locatorHighHashInfo, err := flow.Domain().Consensus().GetBlockInfo(locatorHighHash)
if err != nil {
return nil, err
}
if locatorHighHashInfo.Exists {
return locatorHighHash, nil
}
highHash, lowHash, err = flow.Domain().Consensus().FindNextBlockLocatorBoundaries(blockLocatorHashes)
if err != nil {
return nil, err
}
}
}
func (flow *handleIBDFlow) sendGetBlockLocator(lowHash *externalapi.DomainHash, highHash *externalapi.DomainHash) error {
msgGetBlockLocator := appmessage.NewMsgRequestBlockLocator(highHash, lowHash)
return flow.outgoingRoute.Enqueue(msgGetBlockLocator)
}
func (flow *handleIBDFlow) receiveBlockLocator() (blockLocatorHashes []*externalapi.DomainHash, err error) {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, err
}
msgBlockLocator, ok := message.(*appmessage.MsgBlockLocator)
if !ok {
return nil,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdBlockLocator, message.Command())
}
return msgBlockLocator.BlockLocatorHashes, nil
}
func (flow *handleIBDFlow) downloadHeaders(highestSharedBlockHash *externalapi.DomainHash,
peerSelectedTipHash *externalapi.DomainHash) error {
err := flow.sendRequestHeaders(highestSharedBlockHash, peerSelectedTipHash)
if err != nil {
return err
}
blocksReceived := 0
for {
msgBlockHeader, doneIBD, err := flow.receiveHeader()
if err != nil {
return err
}
if doneIBD {
return nil
}
err = flow.processHeader(msgBlockHeader)
if err != nil {
return err
}
blocksReceived++
if blocksReceived%ibdBatchSize == 0 {
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestNextHeaders())
if err != nil {
return err
}
}
}
}
func (flow *handleIBDFlow) sendRequestHeaders(highestSharedBlockHash *externalapi.DomainHash,
peerSelectedTipHash *externalapi.DomainHash) error {
msgGetBlockInvs := appmessage.NewMsgRequstHeaders(highestSharedBlockHash, peerSelectedTipHash)
return flow.outgoingRoute.Enqueue(msgGetBlockInvs)
}
func (flow *handleIBDFlow) receiveHeader() (msgIBDBlock *appmessage.MsgBlockHeader, doneIBD bool, err error) {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch message := message.(type) {
case *appmessage.MsgBlockHeader:
return message, false, nil
case *appmessage.MsgDoneHeaders:
return nil, true, nil
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s", appmessage.CmdHeader, appmessage.CmdDoneHeaders, message.Command())
}
}
func (flow *handleIBDFlow) processHeader(msgBlockHeader *appmessage.MsgBlockHeader) error {
header := appmessage.BlockHeaderToDomainBlockHeader(msgBlockHeader)
block := &externalapi.DomainBlock{
Header: header,
Transactions: nil,
}
blockHash := consensusserialization.BlockHash(block)
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(blockHash)
if err != nil {
return err
}
if blockInfo.Exists {
log.Debugf("Block header %s is already in the DAG. Skipping...", blockHash)
return nil
}
err = flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
}
log.Infof("Rejected block header %s from %s during IBD: %s", blockHash, flow.peer, err)
return protocolerrors.Wrapf(true, err, "got invalid block %s during IBD", blockHash)
}
return nil
}
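The offset loop in syncMissingBlockBodies above requests missing block bodies in batches of appmessage.MaxRequestIBDBlocksHashes and then dequeues one MsgIBDBlock per requested hash. Below is a minimal standalone sketch of just the batching step, assuming a hypothetical maxHashesPerRequest constant and plain strings in place of *externalapi.DomainHash:

package main

import "fmt"

// maxHashesPerRequest stands in for appmessage.MaxRequestIBDBlocksHashes in this sketch.
const maxHashesPerRequest = 3

// chunkHashes splits the missing-block hashes into request-sized batches,
// mirroring the offset loop in syncMissingBlockBodies.
func chunkHashes(hashes []string) [][]string {
	var batches [][]string
	for offset := 0; offset < len(hashes); offset += maxHashesPerRequest {
		end := offset + maxHashesPerRequest
		if end > len(hashes) {
			end = len(hashes)
		}
		batches = append(batches, hashes[offset:end])
	}
	return batches
}

func main() {
	for i, batch := range chunkHashes([]string{"a", "b", "c", "d", "e", "f", "g"}) {
		fmt.Printf("request %d: %v\n", i, batch)
	}
}

Clamping the final slice to len(hashes) plays the same role as the offset+appmessage.MaxRequestIBDBlocksHashes < len(hashes) branch above: the last batch may simply be shorter than the others.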

View File

@@ -1,9 +0,0 @@
package ibd
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.IBDS)
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -1,66 +0,0 @@
package selectedtip
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
// HandleRequestSelectedTipContext is the interface for the context needed for the HandleRequestSelectedTip flow.
type HandleRequestSelectedTipContext interface {
Domain() domain.Domain
}
type handleRequestSelectedTipFlow struct {
HandleRequestSelectedTipContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestSelectedTip handles getSelectedTip messages
func HandleRequestSelectedTip(context HandleRequestSelectedTipContext, incomingRoute *router.Route, outgoingRoute *router.Route) error {
flow := &handleRequestSelectedTipFlow{
HandleRequestSelectedTipContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestSelectedTipFlow) start() error {
for {
err := flow.receiveGetSelectedTip()
if err != nil {
return err
}
err = flow.sendSelectedTipHash()
if err != nil {
return err
}
}
}
func (flow *handleRequestSelectedTipFlow) receiveGetSelectedTip() error {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return err
}
_, ok := message.(*appmessage.MsgRequestSelectedTip)
if !ok {
return errors.Errorf("received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestSelectedTip, message.Command())
}
return nil
}
func (flow *handleRequestSelectedTipFlow) sendSelectedTipHash() error {
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return err
}
msgSelectedTip := appmessage.NewMsgSelectedTip(consensusserialization.BlockHash(virtualSelectedParent))
return flow.outgoingRoute.Enqueue(msgSelectedTip)
}

View File

@@ -1,78 +0,0 @@
package selectedtip
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// RequestSelectedTipContext is the interface for the context needed for the RequestSelectedTip flow.
type RequestSelectedTipContext interface {
Domain() domain.Domain
StartIBDIfRequired() error
}
type requestSelectedTipFlow struct {
RequestSelectedTipContext
incomingRoute, outgoingRoute *router.Route
peer *peerpkg.Peer
}
// RequestSelectedTip waits for selected tip requests and handles them
func RequestSelectedTip(context RequestSelectedTipContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
flow := &requestSelectedTipFlow{
RequestSelectedTipContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
func (flow *requestSelectedTipFlow) start() error {
for {
err := flow.runSelectedTipRequest()
if err != nil {
return err
}
}
}
func (flow *requestSelectedTipFlow) runSelectedTipRequest() error {
flow.peer.WaitForSelectedTipRequests()
defer flow.peer.FinishRequestingSelectedTip()
err := flow.requestSelectedTip()
if err != nil {
return err
}
peerSelectedTipHash, err := flow.receiveSelectedTip()
if err != nil {
return err
}
flow.peer.SetSelectedTipHash(peerSelectedTipHash)
return flow.StartIBDIfRequired()
}
func (flow *requestSelectedTipFlow) requestSelectedTip() error {
msgGetSelectedTip := appmessage.NewMsgRequestSelectedTip()
return flow.outgoingRoute.Enqueue(msgGetSelectedTip)
}
func (flow *requestSelectedTipFlow) receiveSelectedTip() (selectedTipHash *externalapi.DomainHash, err error) {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, err
}
msgSelectedTip := message.(*appmessage.MsgSelectedTip)
return msgSelectedTip.SelectedTipHash, nil
}
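The deleted receiveSelectedTip above uses a single-value type assertion (message.(*appmessage.MsgSelectedTip)), which panics if the peer sends anything else; the flows that remain in this diff use the checked two-value form and return a protocol error instead. A minimal sketch of the difference, with hypothetical message types standing in for the appmessage ones:

package main

import (
	"errors"
	"fmt"
)

// message and msgSelectedTip are hypothetical stand-ins for the appmessage types.
type message interface{ Command() string }

type msgSelectedTip struct{ hash string }

func (m *msgSelectedTip) Command() string { return "selectedtip" }

type msgPing struct{}

func (m *msgPing) Command() string { return "ping" }

// receiveSelectedTip uses the two-value assertion: ok is false on a mismatch
// instead of the single-value form's runtime panic.
func receiveSelectedTip(incoming message) (string, error) {
	selectedTip, ok := incoming.(*msgSelectedTip)
	if !ok {
		return "", errors.New("received unexpected message type: " + incoming.Command())
	}
	return selectedTip.hash, nil
}

func main() {
	if _, err := receiveSelectedTip(&msgPing{}); err != nil {
		fmt.Println(err)
	}
	hash, _ := receiveSelectedTip(&msgSelectedTip{hash: "deadbeef"})
	fmt.Println(hash)
}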

View File

@@ -0,0 +1,23 @@
package testing
import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/pkg/errors"
"strings"
"testing"
)
func checkFlowError(t *testing.T, err error, isProtocolError bool, shouldBan bool, contains string) {
pErr := &protocolerrors.ProtocolError{}
if errors.As(err, &pErr) != isProtocolError {
t.Fatalf("Unexpected error %+v", err)
}
if pErr.ShouldBan != shouldBan {
t.Fatalf("Expected shouldBan %t but got %t", shouldBan, pErr.ShouldBan)
}
if !strings.Contains(err.Error(), contains) {
t.Fatalf("Unexpected error. Expected error to contain '%s' but got: %+v", contains, err)
}
}

File diff suppressed because it is too large.

View File

@@ -0,0 +1,50 @@
package testing
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"testing"
"time"
)
type fakeReceiveAddressesContext struct{}
func (f fakeReceiveAddressesContext) AddressManager() *addressmanager.AddressManager {
return nil
}
func TestReceiveAddressesErrors(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
incomingRoute := router.NewRoute()
outgoingRoute := router.NewRoute()
peer := peerpkg.New(nil)
errChan := make(chan error)
go func() {
errChan <- addressexchange.ReceiveAddresses(fakeReceiveAddressesContext{}, incomingRoute, outgoingRoute, peer)
}()
_, err := outgoingRoute.DequeueWithTimeout(time.Second)
if err != nil {
t.Fatalf("DequeueWithTimeout: %+v", err)
}
// Sending addressmanager.GetAddressesMax+1 addresses should trigger a ban
err = incomingRoute.Enqueue(appmessage.NewMsgAddresses(make([]*appmessage.NetAddress,
addressmanager.GetAddressesMax+1)))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
}
select {
case err := <-errChan:
checkFlowError(t, err, true, true, "address count exceeded")
case <-time.After(time.Second):
t.Fatalf("timed out after %s", time.Second)
}
})
}

View File

@@ -1,4 +1,4 @@
package relaytransactions
package transactionrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
@@ -6,7 +6,7 @@ import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensusserialization"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
@@ -159,7 +159,7 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
return err
}
if msgTxNotFound != nil {
if msgTxNotFound.ID != expectedID {
if !msgTxNotFound.ID.Equal(expectedID) {
return protocolerrors.Errorf(true, "expected transaction %s, but got %s",
expectedID, msgTxNotFound.ID)
}
@@ -167,8 +167,8 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
continue
}
tx := appmessage.MsgTxToDomainTransaction(msgTx)
txID := consensusserialization.TransactionID(tx)
if txID != expectedID {
txID := consensushashing.TransactionID(tx)
if !txID.Equal(expectedID) {
return protocolerrors.Errorf(true, "expected transaction %s, but got %s",
expectedID, txID)
}
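The two hunks above swap != / == comparisons of transaction IDs for the Equal method. Both operands are pointers, so != only reports whether they reference the same allocation; identical IDs decoded from different messages would wrongly compare as different. A small sketch of why, using a hypothetical hash type in place of the transaction ID type:

package main

import "fmt"

// hash is a hypothetical stand-in for a fixed-size ID type such as a transaction ID.
type hash [4]byte

// Equal compares the underlying values rather than the pointers.
func (h *hash) Equal(other *hash) bool {
	if h == nil || other == nil {
		return h == other
	}
	return *h == *other
}

func main() {
	a := &hash{1, 2, 3, 4}
	b := &hash{1, 2, 3, 4}

	// Pointer comparison: false, even though the contents match.
	fmt.Println(a == b)

	// Value comparison via Equal: true.
	fmt.Println(a.Equal(b))
}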

View File

@@ -1,4 +1,4 @@
package relaytransactions
package transactionrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"

View File

@@ -1,4 +1,4 @@
package relaytransactions
package transactionrelay
import (
"sync"

View File

@@ -37,8 +37,8 @@ func (m *Manager) Peers() []*peerpkg.Peer {
return m.context.Peers()
}
// IBDPeer returns the currently active IBD peer.
// Returns nil if we aren't currently in IBD
// IBDPeer returns the current IBD peer or nil if the node is not
// in IBD
func (m *Manager) IBDPeer() *peerpkg.Peer {
return m.context.IBDPeer()
}
@@ -73,3 +73,14 @@ func (m *Manager) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler flowconte
func (m *Manager) SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler flowcontext.OnTransactionAddedToMempoolHandler) {
m.context.SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler)
}
// ShouldMine returns whether it's ok to use block template from this node
// for mining purposes.
func (m *Manager) ShouldMine() (bool, error) {
return m.context.ShouldMine()
}
// IsIBDRunning returns true if IBD is currently marked as running
func (m *Manager) IsIBDRunning() bool {
return m.context.IsIBDRunning()
}

View File

@@ -2,7 +2,6 @@ package peer
import (
"sync"
"sync/atomic"
"time"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
@@ -22,9 +21,6 @@ const maxProtocolVersion = 1
type Peer struct {
connection *netadapter.NetConnection
selectedTipHashMtx sync.RWMutex
selectedTipHash *externalapi.DomainHash
userAgent string
services appmessage.ServiceFlag
advertisedProtocolVerion uint32 // protocol version advertised by remote
@@ -39,21 +35,13 @@ type Peer struct {
lastPingNonce uint64 // The nonce of the last ping we sent
lastPingTime time.Time // Time we sent last ping
lastPingDuration time.Duration // Time for last ping to return
isSelectedTipRequested uint32
selectedTipRequestChan chan struct{}
lastSelectedTipRequest mstime.Time
ibdStartChan chan struct{}
}
// New returns a new Peer
func New(connection *netadapter.NetConnection) *Peer {
return &Peer{
connection: connection,
selectedTipRequestChan: make(chan struct{}),
ibdStartChan: make(chan struct{}),
connectionStarted: time.Now(),
connection: connection,
connectionStarted: time.Now(),
}
}
@@ -62,20 +50,6 @@ func (p *Peer) Connection() *netadapter.NetConnection {
return p.connection
}
// SelectedTipHash returns the selected tip of the peer.
func (p *Peer) SelectedTipHash() *externalapi.DomainHash {
p.selectedTipHashMtx.RLock()
defer p.selectedTipHashMtx.RUnlock()
return p.selectedTipHash
}
// SetSelectedTipHash sets the selected tip of the peer.
func (p *Peer) SetSelectedTipHash(hash *externalapi.DomainHash) {
p.selectedTipHashMtx.Lock()
defer p.selectedTipHashMtx.Unlock()
p.selectedTipHash = hash
}
// SubnetworkID returns the subnetwork the peer is associated with.
// It is nil in full nodes.
func (p *Peer) SubnetworkID() *externalapi.DomainSubnetworkID {
@@ -128,7 +102,6 @@ func (p *Peer) UpdateFieldsFromMsgVersion(msg *appmessage.MsgVersion) {
p.userAgent = msg.UserAgent
p.disableRelayTx = msg.DisableRelayTx
p.selectedTipHash = msg.SelectedTipHash
p.subnetworkID = msg.SubnetworkID
p.timeOffset = mstime.Since(msg.Timestamp)
@@ -156,46 +129,6 @@ func (p *Peer) String() string {
return p.connection.String()
}
// RequestSelectedTipIfRequired notifies the peer that requesting
// a selected tip is required. This triggers the selected tip
// request flow.
func (p *Peer) RequestSelectedTipIfRequired() {
if atomic.SwapUint32(&p.isSelectedTipRequested, 1) != 0 {
return
}
const minGetSelectedTipInterval = time.Minute
if mstime.Since(p.lastSelectedTipRequest) < minGetSelectedTipInterval {
return
}
p.lastSelectedTipRequest = mstime.Now()
p.selectedTipRequestChan <- struct{}{}
}
// WaitForSelectedTipRequests blocks the current thread until
// a selected tip is requested from this peer
func (p *Peer) WaitForSelectedTipRequests() {
<-p.selectedTipRequestChan
}
// FinishRequestingSelectedTip finishes requesting the selected
// tip from this peer
func (p *Peer) FinishRequestingSelectedTip() {
atomic.StoreUint32(&p.isSelectedTipRequested, 0)
}
// StartIBD starts the IBD process for this peer
func (p *Peer) StartIBD() {
p.ibdStartChan <- struct{}{}
}
// WaitForIBDStart blocks the current thread until
// IBD start is requested from this peer
func (p *Peer) WaitForIBDStart() {
<-p.ibdStartChan
}
// Address returns the address associated with this connection
func (p *Peer) Address() string {
return p.connection.Address()

View File

@@ -8,10 +8,8 @@ import (
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/handshake"
"github.com/kaspanet/kaspad/app/protocol/flows/ibd"
"github.com/kaspanet/kaspad/app/protocol/flows/ibd/selectedtip"
"github.com/kaspanet/kaspad/app/protocol/flows/ping"
"github.com/kaspanet/kaspad/app/protocol/flows/relaytransactions"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
@@ -46,6 +44,7 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
panic(err)
}
if isBanned {
log.Infof("Peer %s is banned. Disconnecting...", netConnection)
netConnection.Disconnect()
return
}
@@ -89,6 +88,7 @@ func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection
panic(err)
}
}
log.Debugf("Disconnecting from %s (reason: %s)", netConnection, protocolErr.Cause)
netConnection.Disconnect()
return
}
@@ -107,7 +107,6 @@ func (m *Manager) registerFlows(router *routerpkg.Router, errChan chan error, is
flows = m.registerAddressFlows(router, isStopping, errChan)
flows = append(flows, m.registerBlockRelayFlows(router, isStopping, errChan)...)
flows = append(flows, m.registerPingFlows(router, isStopping, errChan)...)
flows = append(flows, m.registerIBDFlows(router, isStopping, errChan)...)
flows = append(flows, m.registerTransactionRelayFlow(router, isStopping, errChan)...)
flows = append(flows, m.registerRejectsFlow(router, isStopping, errChan)...)
@@ -136,8 +135,12 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerFlow("HandleRelayInvs", router, []appmessage.MessageCommand{appmessage.CmdInvRelayBlock, appmessage.CmdBlock}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
m.registerFlow("HandleRelayInvs", router, []appmessage.MessageCommand{
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator, appmessage.CmdIBDBlock,
appmessage.CmdDoneHeaders, appmessage.CmdUnexpectedPruningPoint, appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdBlockHeaders, appmessage.CmdPruningPointHash, appmessage.CmdIBDBlockLocatorHighestHash,
appmessage.CmdDonePruningPointUTXOSetChunks},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayInvs(m.context, incomingRoute,
outgoingRoute, peer)
},
@@ -148,6 +151,49 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
return blockrelay.HandleRelayBlockRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandleRequestBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdRequestBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestBlockLocator(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestHeaders", router,
[]appmessage.MessageCommand{appmessage.CmdRequestHeaders, appmessage.CmdRequestNextHeaders}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestHeaders(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandleRequestPruningPointUTXOSetAndBlock", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointUTXOSetAndBlock,
appmessage.CmdRequestNextPruningPointUTXOSetChunk}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestPruningPointUTXOSetAndBlock(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleIBDBlockRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBDBlockRequests(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandlePruningPointHashRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointHash}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointHashRequests(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleIBDBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdIBDBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBDBlockLocator(m.context, incomingRoute, outgoingRoute, peer)
},
),
}
}
@@ -169,62 +215,6 @@ func (m *Manager) registerPingFlows(router *routerpkg.Router, isStopping *uint32
}
}
func (m *Manager) registerIBDFlows(router *routerpkg.Router, isStopping *uint32, errChan chan error) []*flow {
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerFlow("HandleIBD", router, []appmessage.MessageCommand{appmessage.CmdBlockLocator, appmessage.CmdIBDBlock,
appmessage.CmdDoneHeaders, appmessage.CmdIBDRootNotFound, appmessage.CmdIBDRootUTXOSetAndBlock, appmessage.CmdHeader},
isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ibd.HandleIBD(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("RequestSelectedTip", router,
[]appmessage.MessageCommand{appmessage.CmdSelectedTip}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return selectedtip.RequestSelectedTip(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandleRequestSelectedTip", router,
[]appmessage.MessageCommand{appmessage.CmdRequestSelectedTip}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return selectedtip.HandleRequestSelectedTip(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdRequestBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ibd.HandleRequestBlockLocator(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestHeaders", router,
[]appmessage.MessageCommand{appmessage.CmdRequestHeaders, appmessage.CmdRequestNextHeaders}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ibd.HandleRequestHeaders(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestIBDRootUTXOSetAndBlock", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDRootUTXOSetAndBlock}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ibd.HandleRequestIBDRootUTXOSetAndBlock(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleIBDBlockRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ibd.HandleIBDBlockRequests(m.context, incomingRoute, outgoingRoute)
},
),
}
}
func (m *Manager) registerTransactionRelayFlow(router *routerpkg.Router, isStopping *uint32, errChan chan error) []*flow {
outgoingRoute := router.OutgoingRoute()
@@ -232,13 +222,13 @@ func (m *Manager) registerTransactionRelayFlow(router *routerpkg.Router, isStopp
m.registerFlow("HandleRelayedTransactions", router,
[]appmessage.MessageCommand{appmessage.CmdInvTransaction, appmessage.CmdTx, appmessage.CmdTransactionNotFound}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return relaytransactions.HandleRelayedTransactions(m.context, incomingRoute, outgoingRoute)
return transactionrelay.HandleRelayedTransactions(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestTransactions", router,
[]appmessage.MessageCommand{appmessage.CmdRequestTransactions}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return relaytransactions.HandleRequestedTransactions(m.context, incomingRoute, outgoingRoute)
return transactionrelay.HandleRequestedTransactions(m.context, incomingRoute, outgoingRoute)
},
),
}
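The hunks above mostly rearrange which message commands are routed to which flow: registerFlow binds a list of commands to one handler, and the router then delivers every matching message to that flow's incoming route. A rough standalone sketch of that idea, where router, command and dispatch are simplified stand-ins for routerpkg.Router, appmessage.MessageCommand and the real delivery path:

package main

import "fmt"

// command stands in for appmessage.MessageCommand in this sketch.
type command string

// router delivers each incoming message to the route registered for its command.
type router struct {
	routes map[command]chan string
}

func newRouter() *router { return &router{routes: make(map[command]chan string)} }

// addRoute registers one incoming channel for a whole set of commands,
// the way registerFlow above registers one flow for a list of commands.
func (r *router) addRoute(commands []command, route chan string) {
	for _, cmd := range commands {
		r.routes[cmd] = route
	}
}

// dispatch hands a message to the route registered for its command, if any.
func (r *router) dispatch(cmd command, payload string) {
	if route, ok := r.routes[cmd]; ok {
		route <- payload
		return
	}
	fmt.Printf("no route registered for command %q\n", cmd)
}

func main() {
	r := newRouter()
	blockRelayRoute := make(chan string, 1)
	r.addRoute([]command{"InvRelayBlock", "Block", "BlockLocator", "IBDBlock"}, blockRelayRoute)

	r.dispatch("IBDBlock", "block payload")
	fmt.Println("block relay flow received:", <-blockRelayRoute)

	r.dispatch("Ping", "ignored")
}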

View File

@@ -16,6 +16,7 @@ func (e *ProtocolError) Error() string {
return e.Cause.Error()
}
// Unwrap returns the cause of ProtocolError, to be used with `errors.Unwrap()`
func (e *ProtocolError) Unwrap() error {
return e.Cause
}
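The Unwrap method documented above is what lets helpers such as checkFlowError earlier in this diff use errors.As to find a ProtocolError anywhere inside a wrapped chain. A minimal sketch with a simplified protocolError type (not the real one):

package main

import (
	"errors"
	"fmt"
)

// protocolError mirrors the shape of ProtocolError above: a ban flag plus a wrapped cause.
type protocolError struct {
	shouldBan bool
	cause     error
}

func (e *protocolError) Error() string { return e.cause.Error() }

// Unwrap exposes the cause so errors.Is and errors.As can see through the wrapper.
func (e *protocolError) Unwrap() error { return e.cause }

var errRuleViolation = errors.New("rule violation")

func main() {
	err := fmt.Errorf("handling peer: %w", &protocolError{shouldBan: true, cause: errRuleViolation})

	// errors.As walks the wrap chain and finds the *protocolError.
	pErr := &protocolError{}
	if errors.As(err, &pErr) {
		fmt.Println("protocol error, shouldBan =", pErr.shouldBan)
	}

	// errors.Is keeps unwrapping down to the original cause.
	fmt.Println("is rule violation:", errors.Is(err, errRuleViolation))
}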

View File

@@ -6,7 +6,9 @@ import (
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
@@ -25,6 +27,7 @@ func NewManager(
protocolManager *protocol.Manager,
connectionManager *connmanager.ConnectionManager,
addressManager *addressmanager.AddressManager,
utxoIndex *utxoindex.UTXOIndex,
shutDownChan chan<- struct{}) *Manager {
manager := Manager{
@@ -35,6 +38,7 @@ func NewManager(
protocolManager,
connectionManager,
addressManager,
utxoIndex,
shutDownChan,
),
}
@@ -44,19 +48,86 @@ func NewManager(
}
// NotifyBlockAddedToDAG notifies the manager that a block has been added to the DAG
func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock) error {
notification := appmessage.NewBlockAddedNotificationMessage(appmessage.DomainBlockToMsgBlock(block))
return m.context.NotificationManager.NotifyBlockAdded(notification)
func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyBlockAddedToDAG")
defer onEnd()
if m.context.Config.UTXOIndex {
err := m.notifyUTXOsChanged(blockInsertionResult)
if err != nil {
return err
}
}
err := m.notifyVirtualSelectedParentBlueScoreChanged()
if err != nil {
return err
}
err = m.notifyVirtualSelectedParentChainChanged(blockInsertionResult)
if err != nil {
return err
}
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(appmessage.DomainBlockToMsgBlock(block))
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
}
// NotifyFinalityConflict notifies the manager that there's a finality conflict in the DAG
func (m *Manager) NotifyFinalityConflict(violatingBlockHash string) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyFinalityConflict")
defer onEnd()
notification := appmessage.NewFinalityConflictNotificationMessage(violatingBlockHash)
return m.context.NotificationManager.NotifyFinalityConflict(notification)
}
// NotifyFinalityConflictResolved notifies the manager that a finality conflict in the DAG has been resolved
func (m *Manager) NotifyFinalityConflictResolved(finalityBlockHash string) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyFinalityConflictResolved")
defer onEnd()
notification := appmessage.NewFinalityConflictResolvedNotificationMessage(finalityBlockHash)
return m.context.NotificationManager.NotifyFinalityConflictResolved(notification)
}
func (m *Manager) notifyUTXOsChanged(blockInsertionResult *externalapi.BlockInsertionResult) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyUTXOsChanged")
defer onEnd()
utxoIndexChanges, err := m.context.UTXOIndex.Update(blockInsertionResult.VirtualSelectedParentChainChanges)
if err != nil {
return err
}
return m.context.NotificationManager.NotifyUTXOsChanged(utxoIndexChanges)
}
func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentBlueScoreChanged")
defer onEnd()
virtualSelectedParent, err := m.context.Domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return err
}
blockInfo, err := m.context.Domain.Consensus().GetBlockInfo(virtualSelectedParent)
if err != nil {
return err
}
notification := appmessage.NewVirtualSelectedParentBlueScoreChangedNotificationMessage(blockInfo.BlueScore)
return m.context.NotificationManager.NotifyVirtualSelectedParentBlueScoreChanged(notification)
}
func (m *Manager) notifyVirtualSelectedParentChainChanged(blockInsertionResult *externalapi.BlockInsertionResult) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentChainChanged")
defer onEnd()
notification, err := m.context.ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
blockInsertionResult.VirtualSelectedParentChainChanges)
if err != nil {
return err
}
return m.context.NotificationManager.NotifyVirtualSelectedParentChainChanged(notification)
}
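Each Notify* method above opens with onEnd := logger.LogAndMeasureExecutionTime(log, ...) and immediately defers onEnd(), so the elapsed time is logged however the method returns. A tiny sketch of that pattern, with logAndMeasureExecutionTime as a hypothetical stand-in for the real logger helper:

package main

import (
	"fmt"
	"time"
)

// logAndMeasureExecutionTime logs on entry and returns a function that logs
// the elapsed time when deferred; it is a stand-in for the logger helper used above.
func logAndMeasureExecutionTime(name string) (onEnd func()) {
	start := time.Now()
	fmt.Printf("%s started\n", name)
	return func() {
		fmt.Printf("%s finished in %s\n", name, time.Since(start))
	}
}

func notifyBlockAdded() {
	onEnd := logAndMeasureExecutionTime("RPCManager.NotifyBlockAddedToDAG")
	defer onEnd()

	// Simulated work standing in for building and sending the notifications.
	time.Sleep(10 * time.Millisecond)
}

func main() {
	notifyBlockAdded()
}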

View File

@@ -12,28 +12,32 @@ import (
type handler func(context *rpccontext.Context, router *router.Router, request appmessage.Message) (appmessage.Message, error)
var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdGetCurrentNetworkRequestMessage: rpchandlers.HandleGetCurrentNetwork,
appmessage.CmdSubmitBlockRequestMessage: rpchandlers.HandleSubmitBlock,
appmessage.CmdGetBlockTemplateRequestMessage: rpchandlers.HandleGetBlockTemplate,
appmessage.CmdNotifyBlockAddedRequestMessage: rpchandlers.HandleNotifyBlockAdded,
appmessage.CmdGetPeerAddressesRequestMessage: rpchandlers.HandleGetPeerAddresses,
appmessage.CmdGetSelectedTipHashRequestMessage: rpchandlers.HandleGetSelectedTipHash,
appmessage.CmdGetMempoolEntryRequestMessage: rpchandlers.HandleGetMempoolEntry,
appmessage.CmdGetConnectedPeerInfoRequestMessage: rpchandlers.HandleGetConnectedPeerInfo,
appmessage.CmdAddPeerRequestMessage: rpchandlers.HandleAddPeer,
appmessage.CmdSubmitTransactionRequestMessage: rpchandlers.HandleSubmitTransaction,
appmessage.CmdNotifyChainChangedRequestMessage: rpchandlers.HandleNotifyChainChanged,
appmessage.CmdGetBlockRequestMessage: rpchandlers.HandleGetBlock,
appmessage.CmdGetSubnetworkRequestMessage: rpchandlers.HandleGetSubnetwork,
appmessage.CmdGetChainFromBlockRequestMessage: rpchandlers.HandleGetChainFromBlock,
appmessage.CmdGetBlocksRequestMessage: rpchandlers.HandleGetBlocks,
appmessage.CmdGetBlockCountRequestMessage: rpchandlers.HandleGetBlockCount,
appmessage.CmdGetBlockDAGInfoRequestMessage: rpchandlers.HandleGetBlockDAGInfo,
appmessage.CmdResolveFinalityConflictRequestMessage: rpchandlers.HandleResolveFinalityConflict,
appmessage.CmdNotifyFinalityConflictsRequestMessage: rpchandlers.HandleNotifyFinalityConflicts,
appmessage.CmdGetMempoolEntriesRequestMessage: rpchandlers.HandleGetMempoolEntries,
appmessage.CmdShutDownRequestMessage: rpchandlers.HandleShutDown,
appmessage.CmdGetHeadersRequestMessage: rpchandlers.HandleGetHeaders,
appmessage.CmdGetCurrentNetworkRequestMessage: rpchandlers.HandleGetCurrentNetwork,
appmessage.CmdSubmitBlockRequestMessage: rpchandlers.HandleSubmitBlock,
appmessage.CmdGetBlockTemplateRequestMessage: rpchandlers.HandleGetBlockTemplate,
appmessage.CmdNotifyBlockAddedRequestMessage: rpchandlers.HandleNotifyBlockAdded,
appmessage.CmdGetPeerAddressesRequestMessage: rpchandlers.HandleGetPeerAddresses,
appmessage.CmdGetSelectedTipHashRequestMessage: rpchandlers.HandleGetSelectedTipHash,
appmessage.CmdGetMempoolEntryRequestMessage: rpchandlers.HandleGetMempoolEntry,
appmessage.CmdGetConnectedPeerInfoRequestMessage: rpchandlers.HandleGetConnectedPeerInfo,
appmessage.CmdAddPeerRequestMessage: rpchandlers.HandleAddPeer,
appmessage.CmdSubmitTransactionRequestMessage: rpchandlers.HandleSubmitTransaction,
appmessage.CmdNotifyVirtualSelectedParentChainChangedRequestMessage: rpchandlers.HandleNotifyVirtualSelectedParentChainChanged,
appmessage.CmdGetBlockRequestMessage: rpchandlers.HandleGetBlock,
appmessage.CmdGetSubnetworkRequestMessage: rpchandlers.HandleGetSubnetwork,
appmessage.CmdGetVirtualSelectedParentChainFromBlockRequestMessage: rpchandlers.HandleGetVirtualSelectedParentChainFromBlock,
appmessage.CmdGetBlocksRequestMessage: rpchandlers.HandleGetBlocks,
appmessage.CmdGetBlockCountRequestMessage: rpchandlers.HandleGetBlockCount,
appmessage.CmdGetBlockDAGInfoRequestMessage: rpchandlers.HandleGetBlockDAGInfo,
appmessage.CmdResolveFinalityConflictRequestMessage: rpchandlers.HandleResolveFinalityConflict,
appmessage.CmdNotifyFinalityConflictsRequestMessage: rpchandlers.HandleNotifyFinalityConflicts,
appmessage.CmdGetMempoolEntriesRequestMessage: rpchandlers.HandleGetMempoolEntries,
appmessage.CmdShutDownRequestMessage: rpchandlers.HandleShutDown,
appmessage.CmdGetHeadersRequestMessage: rpchandlers.HandleGetHeaders,
appmessage.CmdNotifyUTXOsChangedRequestMessage: rpchandlers.HandleNotifyUTXOsChanged,
appmessage.CmdGetUTXOsByAddressesRequestMessage: rpchandlers.HandleGetUTXOsByAddresses,
appmessage.CmdGetVirtualSelectedParentBlueScoreRequestMessage: rpchandlers.HandleGetVirtualSelectedParentBlueScore,
appmessage.CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualSelectedParentBlueScoreChanged,
}
func (m *Manager) routerInitializer(router *router.Router, netConnection *netadapter.NetConnection) {

View File

@@ -0,0 +1,45 @@
package rpccontext
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
// ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage converts
// VirtualSelectedParentChainChanges to VirtualSelectedParentChainChangedNotificationMessage
func (ctx *Context) ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
selectedParentChainChanges *externalapi.SelectedChainPath) (*appmessage.VirtualSelectedParentChainChangedNotificationMessage, error) {
removedChainBlockHashes := make([]string, len(selectedParentChainChanges.Removed))
for i, removed := range selectedParentChainChanges.Removed {
removedChainBlockHashes[i] = removed.String()
}
addedChainBlocks := make([]*appmessage.ChainBlock, len(selectedParentChainChanges.Added))
for i, added := range selectedParentChainChanges.Added {
acceptanceData, err := ctx.Domain.Consensus().GetBlockAcceptanceData(added)
if err != nil {
return nil, err
}
acceptedBlocks := make([]*appmessage.AcceptedBlock, len(acceptanceData))
for j, acceptedBlock := range acceptanceData {
acceptedTransactionIDs := make([]string, len(acceptedBlock.TransactionAcceptanceData))
for k, transaction := range acceptedBlock.TransactionAcceptanceData {
transactionID := consensushashing.TransactionID(transaction.Transaction)
acceptedTransactionIDs[k] = transactionID.String()
}
acceptedBlocks[j] = &appmessage.AcceptedBlock{
Hash: acceptedBlock.BlockHash.String(),
AcceptedTransactionIDs: acceptedTransactionIDs,
}
}
addedChainBlocks[i] = &appmessage.ChainBlock{
Hash: added.String(),
AcceptedBlocks: acceptedBlocks,
}
}
return appmessage.NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes, addedChainBlocks), nil
}

View File

@@ -3,6 +3,7 @@ package rpccontext
import (
"github.com/kaspanet/kaspad/app/protocol"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
@@ -17,6 +18,7 @@ type Context struct {
ProtocolManager *protocol.Manager
ConnectionManager *connmanager.ConnectionManager
AddressManager *addressmanager.AddressManager
UTXOIndex *utxoindex.UTXOIndex
ShutDownChan chan<- struct{}
NotificationManager *NotificationManager
@@ -29,6 +31,7 @@ func NewContext(cfg *config.Config,
protocolManager *protocol.Manager,
connectionManager *connmanager.ConnectionManager,
addressManager *addressmanager.AddressManager,
utxoIndex *utxoindex.UTXOIndex,
shutDownChan chan<- struct{}) *Context {
context := &Context{
@@ -38,8 +41,10 @@ func NewContext(cfg *config.Config,
ProtocolManager: protocolManager,
ConnectionManager: connectionManager,
AddressManager: addressManager,
UTXOIndex: utxoIndex,
ShutDownChan: shutDownChan,
}
context.NotificationManager = NewNotificationManager()
return context
}

View File

@@ -1,10 +1,12 @@
package rpccontext
import (
"sync"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/utxoindex"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
"sync"
)
// NotificationManager manages notifications for the RPC
@@ -13,12 +15,23 @@ type NotificationManager struct {
listeners map[*routerpkg.Router]*NotificationListener
}
// UTXOsChangedNotificationAddress represents a kaspad address.
// This type is meant to be used in UTXOsChanged notifications
type UTXOsChangedNotificationAddress struct {
Address string
ScriptPublicKeyString utxoindex.ScriptPublicKeyString
}
// NotificationListener represents a registered RPC notification listener
type NotificationListener struct {
propagateBlockAddedNotifications bool
propagateChainChangedNotifications bool
propagateFinalityConflictNotifications bool
propagateFinalityConflictResolvedNotifications bool
propagateBlockAddedNotifications bool
propagateVirtualSelectedParentChainChangedNotifications bool
propagateFinalityConflictNotifications bool
propagateFinalityConflictResolvedNotifications bool
propagateUTXOsChangedNotifications bool
propagateVirtualSelectedParentBlueScoreChangedNotifications bool
propagateUTXOsChangedNotificationAddresses []*UTXOsChangedNotificationAddress
}
// NewNotificationManager creates a new NotificationManager
@@ -65,7 +78,9 @@ func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAd
for router, listener := range nm.listeners {
if listener.propagateBlockAddedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
if errors.Is(err, routerpkg.ErrRouteClosed) {
log.Warnf("Couldn't send notification: %s", err)
} else if err != nil {
return err
}
}
@@ -73,13 +88,13 @@ func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAd
return nil
}
// NotifyChainChanged notifies the notification manager that the DAG's selected parent chain has changed
func (nm *NotificationManager) NotifyChainChanged(notification *appmessage.ChainChangedNotificationMessage) error {
// NotifyVirtualSelectedParentChainChanged notifies the notification manager that the DAG's selected parent chain has changed
func (nm *NotificationManager) NotifyVirtualSelectedParentChainChanged(notification *appmessage.VirtualSelectedParentChainChangedNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagateChainChangedNotifications {
if listener.propagateVirtualSelectedParentChainChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
return err
@@ -121,12 +136,58 @@ func (nm *NotificationManager) NotifyFinalityConflictResolved(notification *appm
return nil
}
// NotifyUTXOsChanged notifies the notification manager that UTXOs have been changed
func (nm *NotificationManager) NotifyUTXOsChanged(utxoChanges *utxoindex.UTXOChanges) error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagateUTXOsChangedNotifications {
// Filter utxoChanges and create a notification
notification := listener.convertUTXOChangesToUTXOsChangedNotification(utxoChanges)
// Don't send the notification if it's empty
if len(notification.Added) == 0 && len(notification.Removed) == 0 {
continue
}
// Enqueue the notification
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
return err
}
}
}
return nil
}
// NotifyVirtualSelectedParentBlueScoreChanged notifies the notification manager that the DAG's
// virtual selected parent blue score has changed
func (nm *NotificationManager) NotifyVirtualSelectedParentBlueScoreChanged(
notification *appmessage.VirtualSelectedParentBlueScoreChangedNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagateVirtualSelectedParentBlueScoreChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
return err
}
}
}
return nil
}
func newNotificationListener() *NotificationListener {
return &NotificationListener{
propagateBlockAddedNotifications: false,
propagateChainChangedNotifications: false,
propagateFinalityConflictNotifications: false,
propagateFinalityConflictResolvedNotifications: false,
propagateBlockAddedNotifications: false,
propagateVirtualSelectedParentChainChangedNotifications: false,
propagateFinalityConflictNotifications: false,
propagateFinalityConflictResolvedNotifications: false,
propagateUTXOsChangedNotifications: false,
propagateVirtualSelectedParentBlueScoreChangedNotifications: false,
}
}
@@ -136,10 +197,10 @@ func (nl *NotificationListener) PropagateBlockAddedNotifications() {
nl.propagateBlockAddedNotifications = true
}
// PropagateChainChangedNotifications instructs the listener to send chain changed notifications
// PropagateVirtualSelectedParentChainChangedNotifications instructs the listener to send chain changed notifications
// to the remote listener
func (nl *NotificationListener) PropagateChainChangedNotifications() {
nl.propagateChainChangedNotifications = true
func (nl *NotificationListener) PropagateVirtualSelectedParentChainChangedNotifications() {
nl.propagateVirtualSelectedParentChainChangedNotifications = true
}
// PropagateFinalityConflictNotifications instructs the listener to send finality conflict notifications
@@ -153,3 +214,41 @@ func (nl *NotificationListener) PropagateFinalityConflictNotifications() {
func (nl *NotificationListener) PropagateFinalityConflictResolvedNotifications() {
nl.propagateFinalityConflictResolvedNotifications = true
}
// PropagateUTXOsChangedNotifications instructs the listener to send UTXOs changed notifications
// to the remote listener
func (nl *NotificationListener) PropagateUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
nl.propagateUTXOsChangedNotifications = true
nl.propagateUTXOsChangedNotificationAddresses = addresses
}
func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
utxoChanges *utxoindex.UTXOChanges) *appmessage.UTXOsChangedNotificationMessage {
notification := &appmessage.UTXOsChangedNotificationMessage{}
for _, listenerAddress := range nl.propagateUTXOsChangedNotificationAddresses {
listenerScriptPublicKeyString := listenerAddress.ScriptPublicKeyString
if addedPairs, ok := utxoChanges.Added[listenerScriptPublicKeyString]; ok {
notification.Added = append(notification.Added,
ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)...)
}
if removedOutpoints, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
for outpoint := range removedOutpoints {
notification.Removed = append(notification.Removed, &appmessage.UTXOsByAddressesEntry{
Address: listenerAddress.Address,
Outpoint: &appmessage.RPCOutpoint{
TransactionID: outpoint.TransactionID.String(),
Index: outpoint.Index,
},
})
}
}
}
return notification
}
// PropagateVirtualSelectedParentBlueScoreChangedNotifications instructs the listener to send
// virtual selected parent blue score notifications to the remote listener
func (nl *NotificationListener) PropagateVirtualSelectedParentBlueScoreChangedNotifications() {
nl.propagateVirtualSelectedParentBlueScoreChangedNotifications = true
}
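convertUTXOChangesToUTXOsChangedNotification above forwards only the UTXO changes that touch script public keys the listener subscribed to. A simplified standalone sketch of that filtering, with hypothetical types in place of utxoindex.UTXOChanges and the appmessage entries:

package main

import "fmt"

// scriptPublicKeyString and outpoint are hypothetical stand-ins for the utxoindex types.
type scriptPublicKeyString string

type outpoint struct {
	txID  string
	index uint32
}

// utxoChanges mirrors the rough shape of utxoindex.UTXOChanges: added and removed
// outpoints grouped by the script public key that owns them.
type utxoChanges struct {
	added   map[scriptPublicKeyString][]outpoint
	removed map[scriptPublicKeyString][]outpoint
}

// filterForListener keeps only the changes for the keys the listener subscribed to.
func filterForListener(changes *utxoChanges, subscribed []scriptPublicKeyString) (added, removed []outpoint) {
	for _, key := range subscribed {
		added = append(added, changes.added[key]...)
		removed = append(removed, changes.removed[key]...)
	}
	return added, removed
}

func main() {
	changes := &utxoChanges{
		added: map[scriptPublicKeyString][]outpoint{
			"spkA": {{txID: "tx1", index: 0}},
			"spkB": {{txID: "tx2", index: 1}},
		},
		removed: map[scriptPublicKeyString][]outpoint{
			"spkA": {{txID: "tx0", index: 0}},
		},
	}
	added, removed := filterForListener(changes, []scriptPublicKeyString{"spkA"})
	fmt.Println("added:", added)
	fmt.Println("removed:", removed)
}

The per-listener filter also explains the empty-notification check in NotifyUTXOsChanged above: a listener subscribed only to addresses untouched by a block gets no message at all.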

Some files were not shown because too many files have changed in this diff.