Compare commits

...

89 Commits

Author SHA1 Message Date
Elichai Turkel
7de07cdc97 Add a github action deploy script to build and publish releases 2021-03-08 18:17:35 +02:00
Svarog
921ca19b42 Update testnet genesis and testnet network name (#1582)
* Update testnet genesis and testnet network name

* Move genesis closer to the present
2021-03-08 15:04:18 +02:00
Mike Zak
98c2dc8189 Update to version 0.9.1 2021-03-08 11:49:38 +02:00
Mike Zak
37654156a6 Update changelog for v0.9.0 2021-03-04 10:53:00 +02:00
Ori Newman
0271858f25 Readd BlockHashes to getBlocks response (#1575) 2021-03-03 16:27:51 +02:00
Elichai Turkel
7909480757 Merge big subdags in pick virtual parents (#1574)
* Refactor mergeSetIncrease to return the current BFS block to allow easier merging

* Remove unneeded Heap/HashSet usages

* Add new IsAnyAncestorOf to DAGTopologyManager

* Check if the new candidate is in the future of any existing candidate

* Add comments and fix off-by-one in the mergeSetIncrease queue

* Fixed DAGTopology test mock

* Fix review comments
2021-03-03 16:17:16 +02:00
Ori Newman
18274c236b Write in the reject message the tx rejection reason (#1573)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-03-02 20:11:31 +02:00
Elichai Turkel
7829a9fd76 Add nil checks for protowire (#1570)
* Handle errors in p2p handshake better

* Add a new errNil in protowire

* Add nil checks for all protowire.toAppMessage() functions for the p2p

* Add nil checks for all protowire.toAppMessage() functions for the RPC

* Add nil check for protowire KaspadMessage

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-02 19:14:31 +02:00
Ori Newman
1548ed9629 Increase getBlocks limit to 1000 (#1572) 2021-03-02 18:24:37 +02:00
Svarog
32cd643e8b Clone transactions before returning them out of mempool (#1571)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-03-02 17:28:53 +02:00
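A note on the commit above: returning pointers to transactions that the mempool still holds would let callers mutate mempool-internal state. The following is a minimal sketch of the clone-before-return idea, with hypothetical types rather than the actual kaspad mempool code:

package main

import "fmt"

// Transaction is a stand-in for the domain transaction type.
type Transaction struct {
	ID      string
	Payload []byte
}

// Clone returns a deep copy so callers cannot mutate mempool-owned data.
func (tx *Transaction) Clone() *Transaction {
	payload := make([]byte, len(tx.Payload))
	copy(payload, tx.Payload)
	return &Transaction{ID: tx.ID, Payload: payload}
}

type mempool struct {
	transactions map[string]*Transaction
}

// AllTransactions returns clones, never the internal pointers.
func (mp *mempool) AllTransactions() []*Transaction {
	result := make([]*Transaction, 0, len(mp.transactions))
	for _, tx := range mp.transactions {
		result = append(result, tx.Clone())
	}
	return result
}

func main() {
	mp := &mempool{transactions: map[string]*Transaction{
		"tx1": {ID: "tx1", Payload: []byte{0x01}},
	}}
	txs := mp.AllTransactions()
	txs[0].Payload[0] = 0xff                       // mutates the clone only
	fmt.Println(mp.transactions["tx1"].Payload[0]) // still prints 1
}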
Ori Newman
05df6e3e4e Return RPC error if getBlock's lowHash doesn't exist (#1569)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-02 16:21:55 +02:00
Svarog
ce326b3c7d Add default dns-seeder to testnet (#1568) 2021-03-02 12:57:16 +02:00
Svarog
a4ae263ed5 Don't print stack trace on disconnected (#1567) 2021-03-02 12:25:33 +02:00
Elichai Turkel
79be1edaa5 Fix utxoindex deserialization (#1566)
* Fix broken deserialization in utxoindex

* Add Tests for hashes serialization in utxo index
2021-03-01 19:06:19 +02:00
Svarog
c1ef5f0c56 Add pruning point hash to GetBlockDagInfo response (#1565)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-01 18:46:45 +02:00
Elichai Turkel
458e409654 Rename handleRequestBlocksFlow to handleRequestHeadersFlow (#1563) 2021-03-01 15:01:59 +02:00
Svarog
df19bdfaf3 Use EmitUnpopulated so that kaspactl prints all fields, even the default ones (#1561)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-03-01 14:47:34 +02:00
Elichai Turkel
103edf97d0 Stop logging an error whenever an RPC/P2P connection is canceled (#1562)
* Don't log error when connectionLoop is canceled

* Print new line after Exiting...

* Add stacktrace to the unknown error from connectionLoops
2021-03-01 14:37:42 +02:00
Elichai Turkel
1f69f9eed9 Cleanup the logger and make it asynchronous (#1524)
* Remove Subsystems map and replace with RegisterSubSystem

* Clean up the logger

* Fix LOGFLAGS and make LongFile work correctly

* Parallelize the logger backend

* More logger cleanup

* Initialize and close the logger backend wherever it's needed

* Move the location where the backend is closed, also print the log if it panics while writing

* Add TestMain to reachability manager tests to preserve the same log level

* Fix review comments

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-01 14:04:40 +02:00
Elichai Turkel
6a12428504 Close iterators (#1542)
* Add Close() function to all the iterators

* Add defer iterator.Close() whenever we open an iterator

* Add isClosed to all iterators and panic/return error if used after closing

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-01 11:15:59 +02:00
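A rough illustration of the iterator pattern introduced above, using hypothetical names rather than the actual kaspad interfaces: every iterator gets a Close method, callers defer it, and an isClosed flag turns use-after-close into an explicit error:

package main

import (
	"errors"
	"fmt"
)

// hashIterator is illustrative only; the real kaspad iterators differ.
type hashIterator struct {
	hashes   []string
	index    int
	isClosed bool
}

// Next returns the next hash, or ok=false when the iterator is exhausted.
func (it *hashIterator) Next() (hash string, ok bool, err error) {
	if it.isClosed {
		return "", false, errors.New("Next called on a closed iterator")
	}
	if it.index >= len(it.hashes) {
		return "", false, nil
	}
	hash = it.hashes[it.index]
	it.index++
	return hash, true, nil
}

// Close marks the iterator as closed; further use is an error.
func (it *hashIterator) Close() error {
	if it.isClosed {
		return errors.New("Close called on an already closed iterator")
	}
	it.isClosed = true
	return nil
}

func main() {
	iterator := &hashIterator{hashes: []string{"aa", "bb"}}
	defer iterator.Close() // the defer-Close pattern added wherever an iterator is opened

	for {
		hash, ok, err := iterator.Next()
		if err != nil || !ok {
			break
		}
		fmt.Println(hash)
	}
}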
Svarog
63b1d2a05a Add childrenHashes to GetBlock/s RPC commands (#1560)
* Add childrenHashes to GetBlock/s RPC commands

* Fix missed error + implement GetBlockChildren in fakeRelayInvsContext
2021-02-28 18:28:29 +02:00
Svarog
089115c389 Add ScriptPublicKey.Version to RPC (#1559)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-25 17:43:06 +02:00
Elichai Turkel
24a12cf2a1 Fix subnetworks FromString and disable reversal (#1557)
* Remove DomainSubnetworkID reversal

* Fix DomainSubnetworkID FromString implementation

* Change RPC conversion logic to use Stringer/FromString

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-25 16:35:31 +02:00
Mike Zak
6597f24bab Add changelog for v0.8.10 2021-02-25 14:56:05 +02:00
Svarog
dc84913214 Convert ProtocolError to value-type, so that it can be used with errors.As + fix SubmitBlock ProtocolError condition (#1555)
* Fix condition from || to &&

* Convert ProtocolError to value-type, so that it can be used with errors.As

* Simplify condition further
2021-02-24 17:13:59 +02:00
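The value-type change above matters because errors.As matches against the concrete type stored in the error chain. A minimal sketch, with an illustrative ProtocolError definition that is not the actual kaspad one, showing errors.As working with a value-type error:

package main

import (
	"errors"
	"fmt"
)

// ProtocolError is a value-type error, so errors.As can match it directly.
// The fields here are illustrative, not the real kaspad definition.
type ProtocolError struct {
	ShouldBan bool
	Reason    string
}

func (e ProtocolError) Error() string { return e.Reason }

func main() {
	// Wrap the value-type error as usual.
	err := fmt.Errorf("handshake failed: %w", ProtocolError{ShouldBan: true, Reason: "bad version"})

	var protocolErr ProtocolError
	if errors.As(err, &protocolErr) {
		fmt.Println("protocol error, should ban:", protocolErr.ShouldBan)
	}
}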
Ori Newman
201646282b Fix the target block rate to create less bursty mining (#1554)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-24 16:35:38 +02:00
Svarog
3f08bf87a8 Fix mempool.RemoveTransactions (#1551)
* Fix mempool.RemoveTransactions

* Don't crash if mempool.RemoveTransactions returns an error. log.Critical instead
2021-02-24 12:47:21 +02:00
Ori Newman
581a12db96 Add RPC reconnection to the miner (#1552)
* Add RPC reconnection to the miner

* Fix wrapf

* Change logs
2021-02-24 10:25:13 +02:00
Svarog
fb6c9c8f21 Remove virtual diff parents (#1550)
* resolveSingleBlockStatus: If the block being resolved is not going to be the next selectedTip - set its diffParent to be the old selectedTip

* resolveSingleBlockStatus: If the block being resolved is going to be the next selectedTip - set it as old selectedTip's diffChild

* Remove any mentions of virtualDiffParents

* If block is genesis - don't do all the mumbo-jumbo with oldSelectedTip

* Check an unchecked error

* Write a better log message
2021-02-23 17:19:35 +02:00
Ori Newman
2adb4f5d0f Fix UTXO index (#1548)
* Add VirtualUTXODiff and VirtualParents to block insertion result

* Add GetVirtualUTXOs

* Add OnPruningPointUTXOSetOverrideHandler

* Add recovery to UTXO index

* Add UTXO set override notification

* Fix compilation error

* Fix iterators in UTXO index and fix TestUTXOIndex

* Change Dialing to DEBUG

* Change LogBlock location

* Rename StopNotify to StopNotifying

* Add sanity check

* Add comment

* Remove receiver from serialization functions

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-02-23 16:51:51 +02:00
Ori Newman
9ffda2b1da Change disconnect log level to INFO (#1549)
* Change disconnect log level to INFO

* Add disconnected log and change log levels
2021-02-23 13:52:47 +02:00
Svarog
bee0893660 Renamed BlueBlockWindow to just BlockWindow (#1547)
* Renamed BlueBlockWindow to just BlockWindow

* Update comment
2021-02-22 11:33:39 +02:00
talelbaz
a7bb1853f9 Adds tests for transaction validator and block validators (#1531)
* [NOD-1453] cover failing block validation

* [NOD-1453] Complete covering test for invalid block

* [NOD-1453] Fix validator tests after rebase

* [NOD-1453] Cover tests for valid blocks

* [NOD-1453] Implement unit tests for ValidateTransactionInIsolation

* [NOD-1453] Add tests for ValidateTransactionInContextAndPopulateMassAndFee

* [NOD-1453] Cover ValidateHeaderInContext test

* [NOD-1453] Fix after rebase

* Not finished

* Committed to update the branch.

* Adds new tests to block_body_in_isolation_test.go according to (and instead of) blockvalisator_test.go

* Adds a comment to type MEDIAN.

* Fixes according to the review notes: add notes and change variable names.

* Fix comment.

* Remove an unused test (all the tests in this file were moved to other test files).

* Change a variable name (txWithAnEmptyInvalidScript to txWithInvalidSignature).

* Adds missing '}'.

* Change spaces to tab

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: Karim A <karim.a@it-dimension.com>
Co-authored-by: tal <tal@daglabs.com>
2021-02-21 17:46:22 +02:00
talelbaz
35e555e959 Tests validateDifficulty ( ValidatePruningPointViolationAndProofOfWorkAndDifficulty) (#1532)
* Adds tests for validateDifficulty

* Fixes according to the review notes: add the test's goal and fix a mismatched test name in NewTestConsensus.

* Fixes according to the review notes: delete the function genesisBits (no usages).

* Fix according to review - fix comments.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-21 17:00:34 +02:00
Elichai Turkel
a250f697ee Prevent fast failing (#1545) 2021-02-21 16:09:19 +02:00
Elichai Turkel
f66708b3c6 Make the CI more verbose and cache kaspad dependencies in the dockerfile (#1541)
* Make the CI more verbose

* Improve the docker caching of kaspad dependencies

Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-21 12:15:32 +02:00
Svarog
8fbea5d239 Increase the sleep time in kaspaminer when the node is not synced (#1544)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-21 11:46:06 +02:00
Svarog
5fa06fe7d7 Upgrade everything to go1.16 (#1539)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-21 09:17:21 +02:00
Ori Newman
06fd6f1b95 Parallelize tests on Dockerfile (#1540)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-18 11:38:44 +02:00
Ori Newman
d2f4ed660c Disallow header only blocks on RPC, relay and when requesting IBD full blocks (#1537) 2021-02-18 10:39:12 +02:00
Elichai Turkel
19878aa062 Make templateManager hold a DomainBlock and isSynced bool instead of a GetBlockTemplateResponseMessage (#1538) 2021-02-18 00:59:11 +02:00
Elichai Turkel
6415e525c3 go test race detector in github actions at cron job (#1534)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-17 18:59:42 +02:00
Svarog
995e526dae Make antiPastHashesBetween return blocks sorted in ghostdag-order (#1536)
* Make antiPastHashesBetween return blocks sorted in ghostdag-order

* Return sortedMergeSet instead of blueMergeSet

* Invert the order of parameters of IsAncestorOf

* Add RenderDAGToDot to TestConsensus

* Add HighHash explicitly, unless lowHash == highHash

* Use Equal instead of == when comparing hashes

* Fixed TestSyncManager_GetHashesBetween

* Fix tests

* findHighHashAccordingToMaxBlueScoreDifference: don't start looking if the whole thing fits

* Handle a missed error

* Remove redundant call to RenderToDot

* Fix bug in findHighHashAccordingToMaxBlueScoreDifference
2021-02-17 18:22:08 +02:00
Elichai Turkel
00a023620d Fix a data race in the block logger (#1533) 2021-02-17 17:05:25 +02:00
Ori Newman
2908a46441 Don't ban when sending pruned blocks (#1530) 2021-02-15 16:43:35 +02:00
Ori Newman
e78cdff3d0 Don't mark block that got rejected because of ruleerrors.ErrPrunedBlock as invalid (#1529)
* Don't mark block that got rejected because of ruleerrors.ErrPrunedBlock as invalid

* Update comment
2021-02-15 15:34:21 +02:00
Ori Newman
2a31074fc4 Make getBlock return an error for invalid blocks (#1528) 2021-02-15 14:39:25 +02:00
stasatdaglabs
d835f72e74 Make AddressManager persistent (#1525)
* Move existing address/bannedAddress functionality to a new addressStore object.

* Implement TestAddressManager.

* Implement serializeAddressKey and deserializeAddressKey.

* Implement serializeNetAddress and deserializeNetAddress.

* Store addresses and banned addresses to disk.

* Implement restoreNotBannedAddresses.

* Fix bannedDatabaseKey.

* Implement restoreBannedAddresses.

* Implement TestRestoreAddressManager.

* Defer closing the database in TestRestoreAddressManager.

* Defer closing the database in TestRestoreAddressManager.

* Add a log.

* Return errors where appropriate.

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-02-14 19:08:06 +02:00
Elichai Turkel
a581dea127 Remove unused utils and structures (#1526)
* Remove unused utils

* Remove unneeded randomness from tests

* Remove more unused functions

* Remove unused protobuf structures

* Fix small errors
2021-02-14 18:13:20 +02:00
stasatdaglabs
7b4b5668e2 Enhance UTXOsChanged notifications (#1522)
* In PropagateUTXOsChangedNotifications, add the given addresses to the address list instead of replacing them.

* Add StopNotifyingUtxosChangedRequestMessage to rpc.proto.

* Implement StopNotifyingUTXOsChanged.

* Optimize convertUTXOChangesToUTXOsChangedNotification.
2021-02-14 12:58:29 +02:00
Elichai Turkel
0e2061d838 Make RPC command GetBlocks prepend lowHash to return value and fix error when lowHash=highHash (#1520)
* Don't error out if antiPastHashesBetween have 2 blocks with the same blue score

* Prepend lowHash to RPC GetBlocks request

* Add a test for GetHashesBetween

* Add a test for GetBlocks RPC call

* Update antipast.go

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-11 18:13:46 +02:00
Svarog
0a579e7f78 DownloadHeaders: Instead of using doneChan - close blockHeadersMessageChan. (#1523) 2021-02-11 17:01:15 +02:00
hashdag
1ed6c4c086 Update README.md 2021-02-11 15:04:15 +02:00
Svarog
fea83e5c6c Change Testnet name to kaspad-testnet-2 (#1521)
* Change Testnet name to kaspad-testnet-2

* Fix tests that hardcoded network names
2021-02-11 15:02:25 +02:00
Elichai Turkel
7c3beb526e Limit stdout log level to info (#1518)
* Rename debuglevel to loglevel

* Limit stdout level to info by default

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-10 18:35:13 +02:00
Svarog
171deded4e Implement GetBlocks RPC command (#1514)
* Remove BlockHexes from GetBlocks request and response

* Add GetBlocks RPC

* Include the selectedTip's anticone in GetBlocks

* Add Anticone to fakeRelayInvsContext

* Include verbose data only if it was requested + Add comments to HandleGetBlocks

* Allow antiPastHashesBetween to receive unrelated low and high hashes

* Convert to/from protowire GetBlocksResponse with no verbose data correctly

* Removed NextLowHash

* Update GetBlocks in rpc_client

* Validate in consensus.Anticone that blockHash exists

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-10 18:27:04 +02:00
stasatdaglabs
94cdc77481 Send peers the hash of the virtual selected parent once connection is established (#1519)
* Send peers the hash of the virtual selected parent once connection is established.

* Add a log to SendVirtualSelectedParentInv.

* Fix TestIBDWithPruning.

* Fix TestIBDWithPruning better and signal from the IBD syncer to the IBD syncee that the DAG is split amongst them.

* Fix TestVirtualSelectedParentChain.

* Add comments.
2021-02-10 18:09:25 +02:00
Svarog
1222a555f2 Prune blocks below pruning point when moving pruning point during IBD (#1513)
* Split deletePastBlocks into sub-routines

* Remove SelectedParentIterator, and refactor SelectedChildIterator to support First and Error

* Implement PruneAllBlocks

* Prune all blocks in the store

* Prune only blocks that are below the pruning point

* Defer call onEnd of LogAndMeasureExecutionTime

* Handle a forgotten error

* Minor style fixes
2021-02-10 16:39:36 +02:00
talelbaz
f13fc35b9e Adds new tests for "BlockAtDepth" function and validate the old tests on DAGTraversal. (#1500)
* Adds tests for the "blockAtDepth" function and verifies the other old tests.

* Optimize creation of the DAG chain.

* Changes according to the review - more detailed error messages, added constants, changed to 3 independent graphs (instead of extending), and changes all the abbreviations.

* Changes according to the review - divide the test into three separate tests and change names to variables.

* Changes according to the review - the order of the function has changed.

* delete double lines

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-09 15:28:37 +02:00
Elichai Turkel
2d61a67592 Change some logs (#1511) 2021-02-09 14:00:02 +02:00
stasatdaglabs
3a4fa6e0e1 Add blockVerboseData to blockAddedNotifications (#1508)
* Add blockVerboseData to blockAddedNotifications.

* Run the documentation generator.
2021-02-09 10:30:16 +02:00
Elichai Turkel
2edf6bfd07 Minimize memory usage in tests (#1495)
* Make leveldb cache configurable

* Fix leveldb tests

* Add a preallocate option to all caches and disable in tests

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-08 18:37:02 +02:00
Ori Newman
8225f7fb3c Add GetInfo RPC command (#1504)
* Add GetInfo RPC command

* Rename ID to p2p ID
2021-02-08 16:33:21 +02:00
Elichai Turkel
3d0a2a47b2 Move testGHOSTDagSorter to testutils, and build a boilerplate for overriding specific managers (#1486)
* Move testGHOSTDagSorter to testutils

* Allow overriding managers in consensus, starting with ghostdag

* Add test prefix to SetDataDir and SetGHOSTDAGManager

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-08 15:24:26 +02:00
Ori Newman
4a354cd538 Validate transactions on BuildBlock (#1491)
* Validate transactions on BuildBlock

* Rename tx -> transactions

* Add transaction validator to block builder constructor and fix TestValidateAndInsertErrors

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-08 14:59:43 +02:00
Ori Newman
1a3b16aaa3 Don't change the new reindex root if the blue score of the selected tip is lower than the current reindex root (#1501) 2021-02-08 14:00:53 +02:00
Ori Newman
5b5a7e60af Add aggregated headers processing logs (#1487)
* Add aggregated headers processing logs

* Unite headers and blocks log

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-08 10:45:13 +02:00
Ori Newman
d30f05b250 Remove IsPushOnlyScript from mempool validation (#1492)
* Remove IsPushOnlyScript from mempool validation

* Fix TestCheckTransactionStandard
2021-02-08 10:04:19 +02:00
Svarog
6bc7a4eb85 Allow GetMissingBlockBodyHashes to return an empty list if the missing blocks were requested before IBD start (#1498)
* Allow GetMissingBlockBodyHashes to return an empty list if the missing blocks were requested before IBD start

* Add link to issue in comment about error to be fixed
2021-02-07 16:12:15 +02:00
Ori Newman
608d1f8ef9 Add TestBlueWork (#1488)
* Add TestBlueWork

* Add comments and blue score check

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-04 17:12:33 +02:00
stasatdaglabs
a792d4a19e Don't fsync immediately after all writes (#1490) 2021-02-04 16:36:46 +02:00
stasatdaglabs
8941c518fc Remove the no-longer relevant highHashReceived mechanism in syncHeaders. (#1489) 2021-02-04 16:06:20 +02:00
Ori Newman
6f53da18b1 Increase stores cache (#1485)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-04 10:06:02 +02:00
stasatdaglabs
44280b9006 Require the --miningaddr parameter in kaspaminer. (#1482)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-04 09:42:02 +02:00
Ori Newman
dbababb978 Limit mempool size to a million transactions and remove the least profitable transactions (#1483)
* Limit mempool size to a million transactions and remove the least profitable transactions

* Simplify insert

* Fix typo

* Improve findTxIndexInOrderedTransactionsByFeeRate readability
2021-02-03 19:45:39 +02:00
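As a hedged sketch of the eviction idea described above (not the actual mempool implementation, and with a tiny limit instead of one million): keep transactions ordered by fee rate and drop the least profitable one once the cap is exceeded:

package main

import (
	"fmt"
	"sort"
)

// mempoolTx is illustrative; the real mempool keeps much richer state.
type mempoolTx struct {
	id      string
	feeRate float64 // fee per unit of transaction mass
}

const maxTransactions = 3 // the real limit described above is one million

// insertByFeeRate keeps the slice ordered by ascending fee rate and evicts
// the least profitable transaction when the limit is exceeded.
func insertByFeeRate(ordered []mempoolTx, tx mempoolTx) []mempoolTx {
	i := sort.Search(len(ordered), func(i int) bool { return ordered[i].feeRate >= tx.feeRate })
	ordered = append(ordered, mempoolTx{})
	copy(ordered[i+1:], ordered[i:])
	ordered[i] = tx
	if len(ordered) > maxTransactions {
		ordered = ordered[1:] // drop the lowest fee-rate transaction
	}
	return ordered
}

func main() {
	var ordered []mempoolTx
	for _, tx := range []mempoolTx{{"a", 2}, {"b", 5}, {"c", 1}, {"d", 9}} {
		ordered = insertByFeeRate(ordered, tx)
	}
	fmt.Println(ordered) // prints [{a 2} {b 5} {d 9}]; "c" was evicted
}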
Ori Newman
238950cb98 Add logs (#1484)
* Add logs

* Fix log name
2021-02-03 17:47:59 +02:00
Ori Newman
ee8fa32ff8 Refactor miner and mine when waiting for block to validate (#1481)
* Refactor miner and mine when waiting for block to validate

* Fix -n to work after the refactor.
Change foundBlockChan capacity.
Use lock instead of atomic in the template manager.

* Fix self assignment

* Fix lock

* Fix Dockerfile

* Add comment
2021-02-03 11:53:55 +02:00
Elichai Turkel
e7f9606683 Add dummy go files for test only package, to mitigate golang/go#27333 (#1480)
* Add dummy go files for test only package, to mitigate golang/go#27333

* Stop ignoring errors when producing the coverage

* Add comments explaining the dummy go files

* Make the coverage output non-json
2021-02-02 18:20:15 +02:00
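For context on the commit above: golang/go#27333 makes coverage runs misbehave for packages that contain only _test.go files, and the usual mitigation is a dummy non-test source file so the package has at least one buildable file. A sketch of such a file (the package name is illustrative, not necessarily what the commit adds):

// Package integration contains only _test.go files, which trips
// golang/go#27333 when running go test with -coverpkg=./...
// This otherwise empty file gives the package one buildable source file.
package integration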
stasatdaglabs
97be133cee Add logs to help debug long virtual parent selection. (#1470)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-01 19:15:50 +02:00
Ori Newman
aeb8e9d2cd Unban address after one day (#1479)
* Unban address after one day

* Unban addresses one by one

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-01 18:52:49 +02:00
Ori Newman
b636ae234e Add ban and unban RPC commands (#1478)
* Add ban and unban RPC commands

* Fix names

* Fix commands strings

* Update RPC documentation

* Rename functions

* Simplify return

* Use IP strings in app messages

* Add parse IP error

* Fix wrong condition
2021-02-01 17:34:43 +02:00
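Since the Ban/Unban RPC messages above carry IP strings, the handler has to validate them before acting on them. A small hedged sketch of that validation using the standard library (hypothetical function, not the actual kaspad handler):

package main

import (
	"fmt"
	"net"
)

// validateIP mirrors the kind of check a Ban/Unban handler needs before
// acting on a user-supplied IP string.
func validateIP(ipString string) (net.IP, error) {
	ip := net.ParseIP(ipString)
	if ip == nil {
		return nil, fmt.Errorf("could not parse IP %q", ipString)
	}
	return ip, nil
}

func main() {
	for _, candidate := range []string{"203.0.113.7", "not-an-ip"} {
		ip, err := validateIP(candidate)
		fmt.Println(candidate, ip, err)
	}
}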
Elichai Turkel
a3913dbf80 Update to version 0.9.0 2021-02-01 15:39:39 +02:00
Elichai Turkel
2871a6a527 Update to version 0.8.7 2021-02-01 15:38:40 +02:00
Svarog
d5a3a96bde Use hard-coded sample config instead of assumed path (#1466)
* Use hard-coded sample config instead of assumed path

* Fix bad path to sample-kaspad.conf in TestCreateDefaultConfigFile

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-02-01 15:15:37 +02:00
Elichai Turkel
12c438d389 Fix data races in ConnectionManager and flow tests (#1474)
* Reuse the ticker in ConnectionManager.waitTillNextIteration

* Fix a data race in ConnectionManager by locking the mutex

* Add a mutex to fakeRelayInvsContext in block relay flow test

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-01 15:03:31 +02:00
Elichai Turkel
280fa3de46 Prevent infinite ticker leaks in kaspaminer (#1476)
* Prevent infinite ticker leaks in kaspaminer

* Reset ticker in ConnectionManager instead of allocating a new one

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-01 14:52:17 +02:00
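Background on the commit above: allocating a new time.Ticker on every iteration leaks the previous ticker until it is stopped, whereas a single ticker can be reused via Ticker.Reset (added in Go 1.15). A brief sketch of the reuse pattern:

package main

import (
	"fmt"
	"time"
)

func main() {
	// One ticker for the whole loop instead of a fresh one per iteration.
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()

	for i := 0; i < 3; i++ {
		<-ticker.C
		fmt.Println("tick", i)

		// Adjust the interval in place; no new ticker, no leak.
		ticker.Reset(50 * time.Millisecond)
	}
}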
Elichai Turkel
d281dabdb4 Bump Go version to 1.15 (#1477) 2021-02-01 14:35:11 +02:00
Ori Newman
331042edf1 Add defaultTargetBlocksPerSecond (#1473)
* Add defaultTargetBlocksPerSecond

* Use different default per network
2021-02-01 14:26:45 +02:00
Ori Newman
669a9ab4c3 Ban by IP (#1471)
* Ban by IP

* Fix panic

* Fix error format

* Remove failed addresses

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-01 10:51:18 +02:00
375 changed files with 12143 additions and 5874 deletions

68
.github/workflows/go-deploy.yml vendored Normal file
View File

@@ -0,0 +1,68 @@
name: Build and Upload assets
on:
  release:
    types: [published]
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ ubuntu-latest, windows-latest, macos-latest ]
    name: Building For ${{ matrix.os }}
    steps:
      - name: Fix windows CRLF
        run: git config --global core.autocrlf false
      - name: Check out code into the Go module directory
        uses: actions/checkout@v2
      # We need to increase the page size because the tests run out of memory on github CI windows.
      # Use the powershell script from this github action: https://github.com/al-cheb/configure-pagefile-action/blob/master/scripts/SetPageFileSize.ps1
      # MIT License (MIT) Copyright (c) 2020 Maxim Lobanov and contributors
      - name: Increase page size on windows
        if: runner.os == 'Windows'
        shell: powershell
        run: powershell -command .\.github\workflows\SetPageFileSize.ps1
      - name: Set up Go 1.x
        uses: actions/setup-go@v2
        with:
          go-version: 1.16
      - name: Build on linux
        if: runner.os == 'Linux'
        # `-extldflags=-static` - means static link everything, `-tags netgo,osusergo` means use pure go replacements for "os/user" and "net"
        # `-s -w` strips the binary to produce smaller size binaries
        run: |
          binary="kaspad-${{ github.event.release.tag_name }}-linux"
          echo "binary=${binary}" >> $GITHUB_ENV
          go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o "${binary}"
      - name: Build on Windows
        if: runner.os == 'Windows'
        shell: bash
        run: |
          binary="kaspad-${{ github.event.release.tag_name }}-win64.exe"
          echo "binary=${binary}" >> $GITHUB_ENV
          go build -v -ldflags="-s -w" -o "${binary}"
      - name: Build on MacOS
        if: runner.os == 'macOS'
        run: |
          binary="kaspad-${{ github.event.release.tag_name }}-osx"
          echo "binary=${binary}" >> $GITHUB_ENV
          go build -v -ldflags="-s -w" -o "${binary}"
      - name: Upload Release Asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ github.event.release.upload_url }}
          asset_path: "./${{ env.binary }}"
          asset_name: "${{ env.binary }}"
          asset_content_type: application/zip

49
.github/workflows/go-race.yml vendored Normal file
View File

@@ -0,0 +1,49 @@
name: Go-Race
on:
  schedule:
    - cron: "0 0 * * *"
  workflow_dispatch:
jobs:
  race_test:
    runs-on: ubuntu-20.04
    strategy:
      fail-fast: false
      matrix:
        branch: [ master, latest ]
    name: Race detection on ${{ matrix.branch }}
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Set up Go 1.x
        uses: actions/setup-go@v2
        with:
          go-version: 1.15
      - name: Set scheduled branch name
        shell: bash
        if: github.event_name == 'schedule'
        run: |
          if [ "${{ matrix.branch }}" == "master" ]; then
            echo "run_on=master" >> $GITHUB_ENV
          fi
          if [ "${{ matrix.branch }}" == "latest" ]; then
            branch=$(git branch -r | grep 'v\([0-9]\+\.\)\([0-9]\+\.\)\([0-9]\+\)-dev' | sort -Vr | head -1 | xargs)
            echo "run_on=${branch}" >> $GITHUB_ENV
          fi
      - name: Set manual branch name
        shell: bash
        if: github.event_name == 'workflow_dispatch'
        run: echo "run_on=${{ github.ref }}" >> $GITHUB_ENV
      - name: Test with race detector
        shell: bash
        run: |
          git checkout "${{ env.run_on }}"
          git status
          go test -race ./...

View File

@@ -11,6 +11,7 @@ jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-16.04, macos-10.15 ]
name: Testing on on ${{ matrix.os }}
@@ -34,7 +35,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.14
go-version: 1.16
# Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
@@ -48,7 +49,7 @@ jobs:
- name: Test
shell: bash
run: ./build_and_test.sh
run: ./build_and_test.sh -v
coverage:
runs-on: ubuntu-20.04
@@ -60,11 +61,10 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.14
go-version: 1.16
- name: Create coverage file
# Because of https://github.com/golang/go/issues/27333 this seem to "fail" even though nothing is wrong, so ignore the failure
run: go test -json -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./... || true
run: go test -v -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...
- name: Upload coverage file
run: bash <(curl -s https://codecov.io/bash)

View File

@@ -18,7 +18,7 @@ Kaspa is an attempt at a proof-of-work cryptocurrency with instant confirmations
## Requirements
Go 1.14 or later.
Go 1.16 or later.
## Installation
@@ -65,7 +65,7 @@ is used for this project.
## Documentation
The documentation is a work-in-progress.
The [documentation](https://github.com/kaspanet/docs) is a work-in-progress
## License

View File

@@ -7,21 +7,21 @@ import (
"runtime"
"time"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/infrastructure/db/database/ldb"
"github.com/kaspanet/kaspad/infrastructure/os/signal"
"github.com/kaspanet/kaspad/util/profiling"
"github.com/kaspanet/kaspad/version"
"github.com/kaspanet/kaspad/util/panics"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/os/execenv"
"github.com/kaspanet/kaspad/infrastructure/os/limits"
"github.com/kaspanet/kaspad/infrastructure/os/signal"
"github.com/kaspanet/kaspad/infrastructure/os/winservice"
"github.com/kaspanet/kaspad/util/panics"
"github.com/kaspanet/kaspad/util/profiling"
"github.com/kaspanet/kaspad/version"
)
const leveldbCacheSizeMiB = 256
var desiredLimits = &limits.DesiredLimits{
FileLimitWant: 2048,
FileLimitMin: 1024,
@@ -49,6 +49,7 @@ func StartApp() error {
fmt.Fprintln(os.Stderr, err)
return err
}
defer logger.BackendLog.Close()
defer panics.HandlePanic(log, "MAIN", nil)
app := &kaspadApp{cfg: cfg}
@@ -181,5 +182,5 @@ func removeDatabase(cfg *config.Config) error {
func openDB(cfg *config.Config) (database.Database, error) {
dbPath := databasePath(cfg)
log.Infof("Loading database from '%s'", dbPath)
return ldb.NewLevelDB(dbPath)
return ldb.NewLevelDB(dbPath, leveldbCacheSizeMiB)
}

View File

@@ -164,11 +164,7 @@ func outpointToDomainOutpoint(outpoint *Outpoint) *externalapi.DomainOutpoint {
func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externalapi.DomainTransaction, error) {
inputs := make([]*externalapi.DomainTransactionInput, len(rpcTransaction.Inputs))
for i, input := range rpcTransaction.Inputs {
transactionIDBytes, err := hex.DecodeString(input.PreviousOutpoint.TransactionID)
if err != nil {
return nil, err
}
transactionID, err := transactionid.FromBytes(transactionIDBytes)
transactionID, err := transactionid.FromString(input.PreviousOutpoint.TransactionID)
if err != nil {
return nil, err
}
@@ -198,19 +194,11 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
}
}
subnetworkIDBytes, err := hex.DecodeString(rpcTransaction.SubnetworkID)
subnetworkID, err := subnetworks.FromString(rpcTransaction.SubnetworkID)
if err != nil {
return nil, err
}
subnetworkID, err := subnetworks.FromBytes(subnetworkIDBytes)
if err != nil {
return nil, err
}
payloadHashBytes, err := hex.DecodeString(rpcTransaction.PayloadHash)
if err != nil {
return nil, err
}
payloadHash, err := externalapi.NewDomainHashFromByteSlice(payloadHashBytes)
payloadHash, err := externalapi.NewDomainHashFromString(rpcTransaction.PayloadHash)
if err != nil {
return nil, err
}
@@ -255,7 +243,7 @@ func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransactio
ScriptPublicKey: &RPCScriptPublicKey{Script: scriptPublicKey, Version: output.ScriptPublicKey.Version},
}
}
subnetworkID := hex.EncodeToString(transaction.SubnetworkID[:])
subnetworkID := transaction.SubnetworkID.String()
payloadHash := transaction.PayloadHash.String()
payload := hex.EncodeToString(transaction.Payload)
return &RPCTransaction{

View File

@@ -59,6 +59,7 @@ const (
CmdPruningPointHash
CmdIBDBlockLocator
CmdIBDBlockLocatorHighestHash
CmdIBDBlockLocatorHighestHashNotFound
CmdBlockHeaders
CmdRequestNextPruningPointUTXOSetChunk
CmdDonePruningPointUTXOSetChunks
@@ -116,6 +117,8 @@ const (
CmdNotifyUTXOsChangedRequestMessage
CmdNotifyUTXOsChangedResponseMessage
CmdUTXOsChangedNotificationMessage
CmdStopNotifyingUTXOsChangedRequestMessage
CmdStopNotifyingUTXOsChangedResponseMessage
CmdGetUTXOsByAddressesRequestMessage
CmdGetUTXOsByAddressesResponseMessage
CmdGetVirtualSelectedParentBlueScoreRequestMessage
@@ -123,6 +126,17 @@ const (
CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage
CmdNotifyVirtualSelectedParentBlueScoreChangedResponseMessage
CmdVirtualSelectedParentBlueScoreChangedNotificationMessage
CmdBanRequestMessage
CmdBanResponseMessage
CmdUnbanRequestMessage
CmdUnbanResponseMessage
CmdGetInfoRequestMessage
CmdGetInfoResponseMessage
CmdNotifyPruningPointUTXOSetOverrideRequestMessage
CmdNotifyPruningPointUTXOSetOverrideResponseMessage
CmdPruningPointUTXOSetOverrideNotificationMessage
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage
)
// ProtocolMessageCommandToString maps all MessageCommands to their string representation
@@ -156,6 +170,7 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdPruningPointHash: "PruningPointHash",
CmdIBDBlockLocator: "IBDBlockLocator",
CmdIBDBlockLocatorHighestHash: "IBDBlockLocatorHighestHash",
CmdIBDBlockLocatorHighestHashNotFound: "IBDBlockLocatorHighestHashNotFound",
CmdBlockHeaders: "BlockHeaders",
CmdRequestNextPruningPointUTXOSetChunk: "RequestNextPruningPointUTXOSetChunk",
CmdDonePruningPointUTXOSetChunks: "DonePruningPointUTXOSetChunks",
@@ -213,6 +228,8 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdNotifyUTXOsChangedRequestMessage: "NotifyUTXOsChangedRequest",
CmdNotifyUTXOsChangedResponseMessage: "NotifyUTXOsChangedResponse",
CmdUTXOsChangedNotificationMessage: "UTXOsChangedNotification",
CmdStopNotifyingUTXOsChangedRequestMessage: "StopNotifyingUTXOsChangedRequest",
CmdStopNotifyingUTXOsChangedResponseMessage: "StopNotifyingUTXOsChangedResponse",
CmdGetUTXOsByAddressesRequestMessage: "GetUTXOsByAddressesRequest",
CmdGetUTXOsByAddressesResponseMessage: "GetUTXOsByAddressesResponse",
CmdGetVirtualSelectedParentBlueScoreRequestMessage: "GetVirtualSelectedParentBlueScoreRequest",
@@ -220,6 +237,17 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage: "NotifyVirtualSelectedParentBlueScoreChangedRequest",
CmdNotifyVirtualSelectedParentBlueScoreChangedResponseMessage: "NotifyVirtualSelectedParentBlueScoreChangedResponse",
CmdVirtualSelectedParentBlueScoreChangedNotificationMessage: "VirtualSelectedParentBlueScoreChangedNotification",
CmdBanRequestMessage: "BanRequest",
CmdBanResponseMessage: "BanResponse",
CmdUnbanRequestMessage: "UnbanRequest",
CmdUnbanResponseMessage: "UnbanResponse",
CmdGetInfoRequestMessage: "GetInfoRequest",
CmdGetInfoResponseMessage: "GeInfoResponse",
CmdNotifyPruningPointUTXOSetOverrideRequestMessage: "NotifyPruningPointUTXOSetOverrideRequest",
CmdNotifyPruningPointUTXOSetOverrideResponseMessage: "NotifyPruningPointUTXOSetOverrideResponse",
CmdPruningPointUTXOSetOverrideNotificationMessage: "PruningPointUTXOSetOverrideNotification",
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: "StopNotifyingPruningPointUTXOSetOverrideRequest",
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage: "StopNotifyingPruningPointUTXOSetOverrideResponse",
}
// Message is an interface that describes a kaspa message. A type that

View File

@@ -11,15 +11,11 @@ import (
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/kaspanet/kaspad/util/random"
)
// TestBlockHeader tests the MsgBlockHeader API.
func TestBlockHeader(t *testing.T) {
nonce, err := random.Uint64()
if err != nil {
t.Errorf("random.Uint64: Error generating nonce: %v", err)
}
nonce := uint64(0xba4d87a69924a93d)
hashes := []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}

View File

@@ -0,0 +1,16 @@
package appmessage
// MsgIBDBlockLocatorHighestHashNotFound represents a kaspa BlockLocatorHighestHashNotFound message
type MsgIBDBlockLocatorHighestHashNotFound struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *MsgIBDBlockLocatorHighestHashNotFound) Command() MessageCommand {
return CmdIBDBlockLocatorHighestHashNotFound
}
// NewMsgIBDBlockLocatorHighestHashNotFound returns a new IBDBlockLocatorHighestHashNotFound message
func NewMsgIBDBlockLocatorHighestHashNotFound() *MsgIBDBlockLocatorHighestHashNotFound {
return &MsgIBDBlockLocatorHighestHashNotFound{}
}

View File

@@ -6,17 +6,12 @@ package appmessage
import (
"testing"
"github.com/kaspanet/kaspad/util/random"
)
// TestPing tests the MsgPing API against the latest protocol version.
func TestPing(t *testing.T) {
// Ensure we get the same nonce back out.
nonce, err := random.Uint64()
if err != nil {
t.Errorf("random.Uint64: Error generating nonce: %v", err)
}
nonce := uint64(0x61c2c5535902862)
msg := NewMsgPing(nonce)
if msg.Nonce != nonce {
t.Errorf("NewMsgPing: wrong nonce - got %v, want %v",

View File

@@ -6,16 +6,11 @@ package appmessage
import (
"testing"
"github.com/kaspanet/kaspad/util/random"
)
// TestPongLatest tests the MsgPong API against the latest protocol version.
func TestPongLatest(t *testing.T) {
nonce, err := random.Uint64()
if err != nil {
t.Errorf("random.Uint64: error generating nonce: %v", err)
}
nonce := uint64(0x1a05b581a5182c)
msg := NewMsgPong(nonce)
if msg.Nonce != nonce {
t.Errorf("NewMsgPong: wrong nonce - got %v, want %v",

39
app/appmessage/rpc_ban.go Normal file
View File

@@ -0,0 +1,39 @@
package appmessage
// BanRequestMessage is an appmessage corresponding to
// its respective RPC message
type BanRequestMessage struct {
baseMessage
IP string
}
// Command returns the protocol command string for the message
func (msg *BanRequestMessage) Command() MessageCommand {
return CmdBanRequestMessage
}
// NewBanRequestMessage returns an instance of the message
func NewBanRequestMessage(ip string) *BanRequestMessage {
return &BanRequestMessage{
IP: ip,
}
}
// BanResponseMessage is an appmessage corresponding to
// its respective RPC message
type BanResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *BanResponseMessage) Command() MessageCommand {
return CmdBanResponseMessage
}
// NewBanResponseMessage returns a instance of the message
func NewBanResponseMessage() *BanResponseMessage {
return &BanResponseMessage{}
}

View File

@@ -55,6 +55,7 @@ type BlockVerboseData struct {
Bits string
Difficulty float64
ParentHashes []string
ChildrenHashes []string
SelectedParentHash string
BlueScore uint64
IsHeaderOnly bool
@@ -104,4 +105,5 @@ type ScriptPubKeyResult struct {
Hex string
Type string
Address string
Version uint16
}

View File

@@ -27,6 +27,7 @@ type GetBlockDAGInfoResponseMessage struct {
VirtualParentHashes []string
Difficulty float64
PastMedianTime int64
PruningPointHash string
Error *RPCError
}

View File

@@ -4,9 +4,9 @@ package appmessage
// its respective RPC message
type GetBlocksRequestMessage struct {
baseMessage
LowHash string
IncludeBlockHexes bool
IncludeBlockVerboseData bool
LowHash string
IncludeBlockVerboseData bool
IncludeTransactionVerboseData bool
}
// Command returns the protocol command string for the message
@@ -15,11 +15,12 @@ func (msg *GetBlocksRequestMessage) Command() MessageCommand {
}
// NewGetBlocksRequestMessage returns a instance of the message
func NewGetBlocksRequestMessage(lowHash string, includeBlockHexes bool, includeBlockVerboseData bool) *GetBlocksRequestMessage {
func NewGetBlocksRequestMessage(lowHash string, includeBlockVerboseData bool,
includeTransactionVerboseData bool) *GetBlocksRequestMessage {
return &GetBlocksRequestMessage{
LowHash: lowHash,
IncludeBlockHexes: includeBlockHexes,
IncludeBlockVerboseData: includeBlockVerboseData,
LowHash: lowHash,
IncludeBlockVerboseData: includeBlockVerboseData,
IncludeTransactionVerboseData: includeTransactionVerboseData,
}
}
@@ -28,7 +29,6 @@ func NewGetBlocksRequestMessage(lowHash string, includeBlockHexes bool, includeB
type GetBlocksResponseMessage struct {
baseMessage
BlockHashes []string
BlockHexes []string
BlockVerboseData []*BlockVerboseData
Error *RPCError
@@ -45,7 +45,6 @@ func NewGetBlocksResponseMessage(blockHashes []string, blockHexes []string,
return &GetBlocksResponseMessage{
BlockHashes: blockHashes,
BlockHexes: blockHexes,
BlockVerboseData: blockVerboseData,
}
}

View File

@@ -0,0 +1,38 @@
package appmessage
// GetInfoRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetInfoRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *GetInfoRequestMessage) Command() MessageCommand {
return CmdGetInfoRequestMessage
}
// NewGeInfoRequestMessage returns a instance of the message
func NewGeInfoRequestMessage() *GetInfoRequestMessage {
return &GetInfoRequestMessage{}
}
// GetInfoResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetInfoResponseMessage struct {
baseMessage
P2PID string
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetInfoResponseMessage) Command() MessageCommand {
return CmdGetInfoResponseMessage
}
// NewGetInfoResponseMessage returns a instance of the message
func NewGetInfoResponseMessage(p2pID string) *GetInfoResponseMessage {
return &GetInfoResponseMessage{
P2PID: p2pID,
}
}

View File

@@ -37,7 +37,8 @@ func NewNotifyBlockAddedResponseMessage() *NotifyBlockAddedResponseMessage {
// its respective RPC message
type BlockAddedNotificationMessage struct {
baseMessage
Block *MsgBlock
Block *MsgBlock
BlockVerboseData *BlockVerboseData
}
// Command returns the protocol command string for the message
@@ -46,8 +47,9 @@ func (msg *BlockAddedNotificationMessage) Command() MessageCommand {
}
// NewBlockAddedNotificationMessage returns a instance of the message
func NewBlockAddedNotificationMessage(block *MsgBlock) *BlockAddedNotificationMessage {
func NewBlockAddedNotificationMessage(block *MsgBlock, blockVerboseData *BlockVerboseData) *BlockAddedNotificationMessage {
return &BlockAddedNotificationMessage{
Block: block,
Block: block,
BlockVerboseData: blockVerboseData,
}
}

View File

@@ -0,0 +1,83 @@
package appmessage
// NotifyPruningPointUTXOSetOverrideRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyPruningPointUTXOSetOverrideRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyPruningPointUTXOSetOverrideRequestMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideRequestMessage
}
// NewNotifyPruningPointUTXOSetOverrideRequestMessage returns a instance of the message
func NewNotifyPruningPointUTXOSetOverrideRequestMessage() *NotifyPruningPointUTXOSetOverrideRequestMessage {
return &NotifyPruningPointUTXOSetOverrideRequestMessage{}
}
// NotifyPruningPointUTXOSetOverrideResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyPruningPointUTXOSetOverrideResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyPruningPointUTXOSetOverrideResponseMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideResponseMessage
}
// NewNotifyPruningPointUTXOSetOverrideResponseMessage returns a instance of the message
func NewNotifyPruningPointUTXOSetOverrideResponseMessage() *NotifyPruningPointUTXOSetOverrideResponseMessage {
return &NotifyPruningPointUTXOSetOverrideResponseMessage{}
}
// PruningPointUTXOSetOverrideNotificationMessage is an appmessage corresponding to
// its respective RPC message
type PruningPointUTXOSetOverrideNotificationMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *PruningPointUTXOSetOverrideNotificationMessage) Command() MessageCommand {
return CmdPruningPointUTXOSetOverrideNotificationMessage
}
// NewPruningPointUTXOSetOverrideNotificationMessage returns a instance of the message
func NewPruningPointUTXOSetOverrideNotificationMessage() *PruningPointUTXOSetOverrideNotificationMessage {
return &PruningPointUTXOSetOverrideNotificationMessage{}
}
// StopNotifyingPruningPointUTXOSetOverrideRequestMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingPruningPointUTXOSetOverrideRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingPruningPointUTXOSetOverrideRequestMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideRequestMessage
}
// NewStopNotifyingPruningPointUTXOSetOverrideRequestMessage returns a instance of the message
func NewStopNotifyingPruningPointUTXOSetOverrideRequestMessage() *StopNotifyingPruningPointUTXOSetOverrideRequestMessage {
return &StopNotifyingPruningPointUTXOSetOverrideRequestMessage{}
}
// StopNotifyingPruningPointUTXOSetOverrideResponseMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingPruningPointUTXOSetOverrideResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingPruningPointUTXOSetOverrideResponseMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideResponseMessage
}
// NewStopNotifyingPruningPointUTXOSetOverrideResponseMessage returns a instance of the message
func NewStopNotifyingPruningPointUTXOSetOverrideResponseMessage() *StopNotifyingPruningPointUTXOSetOverrideResponseMessage {
return &StopNotifyingPruningPointUTXOSetOverrideResponseMessage{}
}

View File

@@ -0,0 +1,37 @@
package appmessage
// StopNotifyingUTXOsChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingUTXOsChangedRequestMessage struct {
baseMessage
Addresses []string
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingUTXOsChangedRequestMessage) Command() MessageCommand {
return CmdStopNotifyingUTXOsChangedRequestMessage
}
// NewStopNotifyingUTXOsChangedRequestMessage returns a instance of the message
func NewStopNotifyingUTXOsChangedRequestMessage(addresses []string) *StopNotifyingUTXOsChangedRequestMessage {
return &StopNotifyingUTXOsChangedRequestMessage{
Addresses: addresses,
}
}
// StopNotifyingUTXOsChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingUTXOsChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingUTXOsChangedResponseMessage) Command() MessageCommand {
return CmdStopNotifyingUTXOsChangedResponseMessage
}
// NewStopNotifyingUTXOsChangedResponseMessage returns a instance of the message
func NewStopNotifyingUTXOsChangedResponseMessage() *StopNotifyingUTXOsChangedResponseMessage {
return &StopNotifyingUTXOsChangedResponseMessage{}
}

View File

@@ -0,0 +1,39 @@
package appmessage
// UnbanRequestMessage is an appmessage corresponding to
// its respective RPC message
type UnbanRequestMessage struct {
baseMessage
IP string
}
// Command returns the protocol command string for the message
func (msg *UnbanRequestMessage) Command() MessageCommand {
return CmdUnbanRequestMessage
}
// NewUnbanRequestMessage returns an instance of the message
func NewUnbanRequestMessage(ip string) *UnbanRequestMessage {
return &UnbanRequestMessage{
IP: ip,
}
}
// UnbanResponseMessage is an appmessage corresponding to
// its respective RPC message
type UnbanResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *UnbanResponseMessage) Command() MessageCommand {
return CmdUnbanResponseMessage
}
// NewUnbanResponseMessage returns a instance of the message
func NewUnbanResponseMessage() *UnbanResponseMessage {
return &UnbanResponseMessage{}
}

View File

@@ -90,14 +90,18 @@ func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database,
return nil, err
}
addressManager, err := addressmanager.New(addressmanager.NewConfig(cfg))
addressManager, err := addressmanager.New(addressmanager.NewConfig(cfg), db)
if err != nil {
return nil, err
}
var utxoIndex *utxoindex.UTXOIndex
if cfg.UTXOIndex {
utxoIndex = utxoindex.New(domain.Consensus(), db)
utxoIndex, err = utxoindex.New(domain.Consensus(), db)
if err != nil {
return nil, err
}
log.Infof("UTXO index started")
}
@@ -144,6 +148,7 @@ func setupRPC(
shutDownChan,
)
protocolManager.SetOnBlockAddedToDAGHandler(rpcManager.NotifyBlockAddedToDAG)
protocolManager.SetOnPruningPointUTXOSetOverrideHandler(rpcManager.NotifyPruningPointUTXOSetOverride)
return rpcManager
}

View File

@@ -7,8 +7,6 @@ package app
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.KASD)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("KASD")

View File

@@ -1,58 +0,0 @@
// Copyright (c) 2015-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package blocklogger
import (
"sync"
"time"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
)
var (
receivedLogBlocks int64
receivedLogTx int64
lastBlockLogTime = mstime.Now()
mtx sync.Mutex
)
// LogBlock logs a new block blue score as an information message
// to show progress to the user. In order to prevent spam, it limits logging to
// one message every 10 seconds with duration and totals included.
func LogBlock(block *externalapi.DomainBlock) {
mtx.Lock()
defer mtx.Unlock()
receivedLogBlocks++
receivedLogTx += int64(len(block.Transactions))
now := mstime.Now()
duration := now.Sub(lastBlockLogTime)
if duration < time.Second*10 {
return
}
// Truncate the duration to 10s of milliseconds.
tDuration := duration.Round(10 * time.Millisecond)
// Log information about new block blue score.
blockStr := "blocks"
if receivedLogBlocks == 1 {
blockStr = "block"
}
txStr := "transactions"
if receivedLogTx == 1 {
txStr = "transaction"
}
log.Infof("Processed %d %s in the last %s (%d %s, %s)",
receivedLogBlocks, blockStr, tDuration, receivedLogTx,
txStr, mstime.UnixMilliseconds(block.Header.TimeInMilliseconds()))
receivedLogBlocks = 0
receivedLogTx = 0
lastBlockLogTime = now
}

View File

@@ -1,8 +1,8 @@
package flowcontext
import (
"github.com/kaspanet/kaspad/app/protocol/blocklogger"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/pkg/errors"
@@ -38,8 +38,6 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
}
for i, newBlock := range newBlocks {
blocklogger.LogBlock(block)
log.Debugf("OnNewBlock: passing block %s transactions to mining manager", hash)
_, err = f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
if err != nil {
@@ -59,6 +57,15 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
return nil
}
// OnPruningPointUTXOSetOverride calls the handler function whenever the UTXO set
// resets due to pruning point change via IBD.
func (f *FlowContext) OnPruningPointUTXOSetOverride() error {
if f.onPruningPointUTXOSetOverrideHandler != nil {
return f.onPruningPointUTXOSetOverrideHandler()
}
return nil
}
func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
block *externalapi.DomainBlock, transactionsAcceptedToMempool []*externalapi.DomainTransaction) error {
@@ -101,6 +108,10 @@ func (f *FlowContext) SharedRequestedBlocks() *blockrelay.SharedRequestedBlocks
// AddBlock adds the given block to the DAG and propagates it.
func (f *FlowContext) AddBlock(block *externalapi.DomainBlock) error {
if len(block.Transactions) == 0 {
return protocolerrors.Errorf(false, "cannot add header only block")
}
blockInsertionResult, err := f.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {

View File

@@ -18,11 +18,11 @@ import (
func (*FlowContext) HandleError(err error, flowName string, isStopping *uint32, errChan chan<- error) {
isErrRouteClosed := errors.Is(err, router.ErrRouteClosed)
if !isErrRouteClosed {
if protocolErr := &(protocolerrors.ProtocolError{}); !errors.As(err, &protocolErr) {
if protocolErr := (protocolerrors.ProtocolError{}); !errors.As(err, &protocolErr) {
panic(err)
}
log.Errorf("error from %s: %+v", flowName, err)
log.Errorf("error from %s: %s", flowName, err)
}
if atomic.AddUint32(isStopping, 1) == 1 {

View File

@@ -23,6 +23,10 @@ import (
// when a block is added to the DAG
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
// OnPruningPointUTXOSetOverrideHandler is a handle function that's triggered whenever the UTXO set
// resets due to pruning point change via IBD.
type OnPruningPointUTXOSetOverrideHandler func() error
// OnTransactionAddedToMempoolHandler is a handler function that's triggered
// when a transaction is added to the mempool
type OnTransactionAddedToMempoolHandler func()
@@ -38,8 +42,9 @@ type FlowContext struct {
timeStarted int64
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
transactionsToRebroadcastLock sync.Mutex
transactionsToRebroadcast map[externalapi.DomainTransactionID]*externalapi.DomainTransaction
@@ -82,6 +87,11 @@ func (f *FlowContext) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler OnBlo
f.onBlockAddedToDAGHandler = onBlockAddedToDAGHandler
}
// SetOnPruningPointUTXOSetOverrideHandler sets the onPruningPointUTXOSetOverrideHandler handler
func (f *FlowContext) SetOnPruningPointUTXOSetOverrideHandler(onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler) {
f.onPruningPointUTXOSetOverrideHandler = onPruningPointUTXOSetOverrideHandler
}
// SetOnTransactionAddedToMempoolHandler sets the onTransactionAddedToMempool handler
func (f *FlowContext) SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler) {
f.onTransactionAddedToMempoolHandler = onTransactionAddedToMempoolHandler

View File

@@ -2,8 +2,6 @@ package flowcontext
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("PROT")

View File

@@ -194,6 +194,9 @@ func (f *FlowContext) GetOrphanRoots(orphan *externalapi.DomainHash) ([]*externa
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
roots = append(roots, current)
} else {
log.Debugf("Block %s was skipped when checking for orphan roots: "+
"exists: %t, status: %s", current, blockInfo.Exists, blockInfo.BlockStatus)
}
continue
}

View File

@@ -35,6 +35,5 @@ func ReceiveAddresses(context ReceiveAddressesContext, incomingRoute *router.Rou
return protocolerrors.Errorf(true, "address count exceeded %d", addressmanager.GetAddressesMax)
}
context.AddressManager().AddAddresses(msgAddresses.AddressList...)
return nil
return context.AddressManager().AddAddresses(msgAddresses.AddressList...)
}

View File

@@ -70,8 +70,14 @@ func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *
}
if !foundHighestHashInTheSelectedParentChainOfTargetHash {
return protocolerrors.Errorf(true, "no hash was found in the blockLocator "+
log.Warnf("no hash was found in the blockLocator "+
"that was in the selected parent chain of targetHash %s", targetHash)
ibdBlockLocatorHighestHashNotFoundMessage := appmessage.NewMsgIBDBlockLocatorHighestHashNotFound()
err = outgoingRoute.Enqueue(ibdBlockLocatorHighestHashNotFoundMessage)
if err != nil {
return err
}
}
}
}

View File

@@ -24,6 +24,7 @@ type RelayInvsContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
OnPruningPointUTXOSetOverride() error
SharedRequestedBlocks() *SharedRequestedBlocks
Broadcast(message appmessage.Message) error
AddOrphan(orphanBlock *externalapi.DomainBlock)
@@ -104,9 +105,19 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
err = flow.banIfBlockIsHeaderOnly(block)
if err != nil {
return err
}
log.Debugf("Processing block %s", inv.Hash)
missingParents, blockInsertionResult, err := flow.processBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrPrunedBlock) {
log.Infof("Ignoring pruned block %s", inv.Hash)
continue
}
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Infof("Ignoring duplicate block %s", inv.Hash)
continue
@@ -135,6 +146,15 @@ func (flow *handleRelayInvsFlow) start() error {
}
}
func (flow *handleRelayInvsFlow) banIfBlockIsHeaderOnly(block *externalapi.DomainBlock) error {
if len(block.Transactions) == 0 {
return protocolerrors.Errorf(true, "sent header of %s block where expected block with body",
consensushashing.BlockHash(block))
}
return nil
}
func (flow *handleRelayInvsFlow) readInv() (*appmessage.MsgInvRelayBlock, error) {
if len(flow.invsQueue) > 0 {
var inv *appmessage.MsgInvRelayBlock

View File

@@ -17,7 +17,7 @@ type RequestIBDBlocksContext interface {
Domain() domain.Domain
}
type handleRequestBlocksFlow struct {
type handleRequestHeadersFlow struct {
RequestIBDBlocksContext
incomingRoute, outgoingRoute *router.Route
peer *peer.Peer
@@ -27,7 +27,7 @@ type handleRequestBlocksFlow struct {
func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
flow := &handleRequestBlocksFlow{
flow := &handleRequestHeadersFlow{
RequestIBDBlocksContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
@@ -36,7 +36,7 @@ func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router
return flow.start()
}
func (flow *handleRequestBlocksFlow) start() error {
func (flow *handleRequestHeadersFlow) start() error {
for {
lowHash, highHash, err := receiveRequestHeaders(flow.incomingRoute)
if err != nil {

View File

@@ -28,10 +28,14 @@ func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.Domain
log.Debugf("IBD started with peer %s and highHash %s", flow.peer, highHash)
log.Debugf("Syncing headers up to %s", highHash)
err := flow.syncHeaders(highHash)
headersSynced, err := flow.syncHeaders(highHash)
if err != nil {
return err
}
if !headersSynced {
log.Debugf("Aborting IBD because the headers failed to sync")
return nil
}
log.Debugf("Finished syncing headers up to %s", highHash)
log.Debugf("Syncing the current pruning point UTXO set")
@@ -55,47 +59,61 @@ func (flow *handleRelayInvsFlow) runIBDIfNotRunning(highHash *externalapi.Domain
return nil
}
func (flow *handleRelayInvsFlow) syncHeaders(highHash *externalapi.DomainHash) error {
highHashReceived := false
for !highHashReceived {
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return err
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
err = flow.downloadHeaders(highestSharedBlockHash, highHash)
if err != nil {
return err
}
// We're finished once highHash has been inserted into the DAG
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(highHash)
if err != nil {
return err
}
highHashReceived = blockInfo.Exists
log.Debugf("Headers downloaded from peer %s. Are further headers required: %t", flow.peer, !highHashReceived)
// syncHeaders attempts to sync headers from the peer. This method may fail
// because we and the peer have conflicting pruning points. In that case we
// return (false, nil) so that we may stop IBD gracefully.
func (flow *handleRelayInvsFlow) syncHeaders(highHash *externalapi.DomainHash) (bool, error) {
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, highestSharedBlockFound, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return false, err
}
return nil
if !highestSharedBlockFound {
return false, nil
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
err = flow.downloadHeaders(highestSharedBlockHash, highHash)
if err != nil {
return false, err
}
// If the highHash has not been received, the peer is misbehaving
highHashBlockInfo, err := flow.Domain().Consensus().GetBlockInfo(highHash)
if err != nil {
return false, err
}
if !highHashBlockInfo.Exists {
return false, protocolerrors.Errorf(true, "did not receive "+
"highHash header %s from peer %s during header download", highHash, flow.peer)
}
log.Debugf("Headers downloaded from peer %s", flow.peer)
return true, nil
}
func (flow *handleRelayInvsFlow) findHighestSharedBlockHash(targetHash *externalapi.DomainHash) (*externalapi.DomainHash, error) {
// findHighestSharedBlock attempts to find the highest shared block between the peer
// and this node. This method may fail because we and the peer have conflicting pruning
// points. In that case we return (nil, false, nil) so that we may stop IBD gracefully.
func (flow *handleRelayInvsFlow) findHighestSharedBlockHash(
targetHash *externalapi.DomainHash) (*externalapi.DomainHash, bool, error) {
log.Debugf("Sending a blockLocator to %s between pruning point and headers selected tip", flow.peer)
blockLocator, err := flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
if err != nil {
return nil, err
return nil, false, err
}
for {
highestHash, err := flow.fetchHighestHash(targetHash, blockLocator)
highestHash, highestHashFound, err := flow.fetchHighestHash(targetHash, blockLocator)
if err != nil {
return nil, err
return nil, false, err
}
if !highestHashFound {
return nil, false, nil
}
highestHashIndex, err := flow.findHighestHashIndex(highestHash, blockLocator)
if err != nil {
return nil, err
return nil, false, err
}
if highestHashIndex == 0 ||
@@ -104,7 +122,7 @@ func (flow *handleRelayInvsFlow) findHighestSharedBlockHash(targetHash *external
// an endless loop, we explicitly stop the loop in such a situation.
(len(blockLocator) == 2 && highestHashIndex == 1) {
return highestHash, nil
return highestHash, true, nil
}
locatorHashAboveHighestHash := highestHash
@@ -114,7 +132,7 @@ func (flow *handleRelayInvsFlow) findHighestSharedBlockHash(targetHash *external
blockLocator, err = flow.nextBlockLocator(highestHash, locatorHashAboveHighestHash)
if err != nil {
return nil, err
return nil, false, err
}
}
}
@@ -159,27 +177,35 @@ func (flow *handleRelayInvsFlow) findHighestHashIndex(
return highestHashIndex, nil
}
// fetchHighestHash attempts to fetch the highest hash the peer knows amongst the given
// blockLocator. This method may fail because we and the peer have conflicting pruning
// points. In that case we return (nil, false, nil) so that we may stop IBD gracefully.
func (flow *handleRelayInvsFlow) fetchHighestHash(
targetHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (*externalapi.DomainHash, error) {
targetHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (*externalapi.DomainHash, bool, error) {
ibdBlockLocatorMessage := appmessage.NewMsgIBDBlockLocator(targetHash, blockLocator)
err := flow.outgoingRoute.Enqueue(ibdBlockLocatorMessage)
if err != nil {
return nil, err
return nil, false, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, err
return nil, false, err
}
ibdBlockLocatorHighestHashMessage, ok := message.(*appmessage.MsgIBDBlockLocatorHighestHash)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
switch message := message.(type) {
case *appmessage.MsgIBDBlockLocatorHighestHash:
highestHash := message.HighestHash
log.Debugf("The highest hash the peer %s knows is %s", flow.peer, highestHash)
return highestHash, true, nil
case *appmessage.MsgIBDBlockLocatorHighestHashNotFound:
log.Debugf("Peer %s does not know any block within our blockLocator. "+
"This should only happen if there's a DAG split deeper than the pruning point.", flow.peer)
return nil, false, nil
default:
return nil, false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlockLocatorHighestHash, message.Command())
}
highestHash := ibdBlockLocatorHighestHashMessage.HighestHash
log.Debugf("The highest hash the peer %s knows is %s", flow.peer, highestHash)
return highestHash, nil
}
func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externalapi.DomainHash,
@@ -195,7 +221,6 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
// headers
blockHeadersMessageChan := make(chan *appmessage.BlockHeadersMessage, 2)
errChan := make(chan error)
doneChan := make(chan interface{})
spawn("handleRelayInvsFlow-downloadHeaders", func() {
for {
blockHeadersMessage, doneIBD, err := flow.receiveHeaders()
@@ -204,7 +229,7 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
return
}
if doneIBD {
doneChan <- struct{}{}
close(blockHeadersMessageChan)
return
}
@@ -220,7 +245,10 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
for {
select {
case blockHeadersMessage := <-blockHeadersMessageChan:
case blockHeadersMessage, ok := <-blockHeadersMessageChan:
if !ok {
return nil
}
for _, header := range blockHeadersMessage.BlockHeaders {
err = flow.processHeader(header)
if err != nil {
@@ -229,8 +257,6 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
}
case err := <-errChan:
return err
case <-doneChan:
return nil
}
}
}
@@ -384,6 +410,11 @@ func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(pruningPointHash *externala
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return false, err
}
return true, nil
}
@@ -468,6 +499,13 @@ func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.Do
if err != nil {
return err
}
if len(hashes) == 0 {
// Blocks can be inserted into the DAG during IBD if they were requested before IBD started.
// In rare cases, all the IBD blocks might already be inserted by the time we reach this point.
// In these cases, GetMissingBlockBodyHashes returns an empty array.
log.Debugf("No missing block body hashes found.")
return nil
}
for offset := 0; offset < len(hashes); offset += ibdBatchSize {
var hashesToRequest []*externalapi.DomainHash
@@ -500,6 +538,11 @@ func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.Do
return protocolerrors.Errorf(true, "expected block %s but got %s", expectedHash, blockHash)
}
err = flow.banIfBlockIsHeaderOnly(block)
if err != nil {
return err
}
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {

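downloadHeaders above drops the separate doneChan: the producer goroutine now signals completion by closing blockHeadersMessageChan, and the consumer detects that with the comma-ok form of the receive. A minimal sketch of the same pattern outside the kaspad types:

package main

import "fmt"

func main() {
	headers := make(chan string, 2)
	errs := make(chan error)

	// Producer: closes the channel when there is nothing more to send,
	// instead of writing to a dedicated done channel.
	go func() {
		for _, h := range []string{"header-1", "header-2", "header-3"} {
			headers <- h
		}
		close(headers)
	}()

	// Consumer: the comma-ok receive reports false once the channel is
	// closed and drained, which is the signal to return.
	for {
		select {
		case h, ok := <-headers:
			if !ok {
				fmt.Println("done")
				return
			}
			fmt.Println("processing", h)
		case err := <-errs:
			fmt.Println("error:", err)
			return
		}
	}
}

The same file also changes syncHeaders, findHighestSharedBlockHash and fetchHighestHash to return an extra bool, so a conflicting pruning point aborts IBD gracefully by returning (false, nil) rather than a banning error.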
View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -0,0 +1,28 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// SendVirtualSelectedParentInvContext is the interface for the context needed for the SendVirtualSelectedParentInv flow.
type SendVirtualSelectedParentInvContext interface {
Domain() domain.Domain
}
// SendVirtualSelectedParentInv sends a peer the selected parent hash of the virtual
func SendVirtualSelectedParentInv(context SendVirtualSelectedParentInvContext,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
virtualSelectedParent, err := context.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return err
}
log.Debugf("Sending virtual selected parent hash %s to peer %s", virtualSelectedParent, peer)
virtualSelectedParentInv := appmessage.NewMsgInvBlock(virtualSelectedParent)
return outgoingRoute.Enqueue(virtualSelectedParentInv)
}

View File

@@ -89,7 +89,10 @@ func HandleHandshake(context HandleHandshakeContext, netConnection *netadapter.N
}
if peerAddress != nil {
context.AddressManager().AddAddresses(peerAddress)
err := context.AddressManager().AddAddresses(peerAddress)
if err != nil {
return nil, err
}
}
return peer, nil
}
@@ -104,7 +107,7 @@ func handleError(err error, flowName string, isStopping *uint32, errChan chan er
return
}
if protocolErr := &(protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if protocolErr := (protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
log.Errorf("Handshake protocol error from %s: %s", flowName, err)
if atomic.AddUint32(isStopping, 1) == 1 {
errChan <- err

View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -60,7 +60,7 @@ func (flow *receiveVersionFlow) start() (*appmessage.NetAddress, error) {
}
if !allowSelfConnections && flow.NetAdapter().ID().IsEqual(msgVersion.ID) {
return nil, protocolerrors.New(true, "connected to self")
return nil, protocolerrors.New(false, "connected to self")
}
// Disconnect and ban peers from a different network

View File

@@ -1,14 +1,15 @@
package testing
import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/pkg/errors"
"strings"
"testing"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/pkg/errors"
)
func checkFlowError(t *testing.T, err error, isProtocolError bool, shouldBan bool, contains string) {
pErr := &protocolerrors.ProtocolError{}
pErr := protocolerrors.ProtocolError{}
if errors.As(err, &pErr) != isProtocolError {
t.Fatalf("Unexpected error %+v", err)
}

View File

@@ -2,6 +2,10 @@ package testing
import (
"fmt"
"sync"
"testing"
"time"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
@@ -18,10 +22,21 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/pkg/errors"
"testing"
"time"
)
var headerOnlyBlock = &externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
constants.MaxBlockVersion,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
&externalapi.DomainHash{},
&externalapi.DomainHash{},
&externalapi.DomainHash{},
0,
0,
0,
),
}
var orphanBlock = &externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
constants.MaxBlockVersion,
@@ -33,6 +48,7 @@ var orphanBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var validPruningPointBlock = &externalapi.DomainBlock{
@@ -46,6 +62,7 @@ var validPruningPointBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var invalidPruningPointBlock = &externalapi.DomainBlock{
@@ -59,6 +76,7 @@ var invalidPruningPointBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var unexpectedIBDBlock = &externalapi.DomainBlock{
@@ -72,6 +90,7 @@ var unexpectedIBDBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var invalidBlock = &externalapi.DomainBlock{
@@ -85,6 +104,7 @@ var invalidBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var unknownBlockHash = externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})
@@ -93,6 +113,7 @@ var validPruningPointHash = consensushashing.BlockHash(validPruningPointBlock)
var invalidBlockHash = consensushashing.BlockHash(invalidBlock)
var invalidPruningPointHash = consensushashing.BlockHash(invalidPruningPointBlock)
var orphanBlockHash = consensushashing.BlockHash(orphanBlock)
var headerOnlyBlockHash = consensushashing.BlockHash(headerOnlyBlock)
type fakeRelayInvsContext struct {
testName string
@@ -105,6 +126,23 @@ type fakeRelayInvsContext struct {
validateAndInsertImportedPruningPointResponse error
getBlockInfoResponse *externalapi.BlockInfo
validateAndInsertBlockResponse error
rwLock sync.RWMutex
}
func (f *fakeRelayInvsContext) GetBlockChildren(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
panic(errors.Errorf("called unimplemented function from test '%s'", f.testName))
}
func (f *fakeRelayInvsContext) OnPruningPointUTXOSetOverride() error {
return nil
}
func (f *fakeRelayInvsContext) GetVirtualUTXOs(expectedVirtualParents []*externalapi.DomainHash, fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {
panic(errors.Errorf("called unimplemented function from test '%s'", f.testName))
}
func (f *fakeRelayInvsContext) Anticone(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
panic(errors.Errorf("called unimplemented function from test '%s'", f.testName))
}
func (f *fakeRelayInvsContext) BuildBlock(coinbaseData *externalapi.DomainCoinbaseData, transactions []*externalapi.DomainTransaction) (*externalapi.DomainBlock, error) {
@@ -128,6 +166,8 @@ func (f *fakeRelayInvsContext) GetBlockHeader(blockHash *externalapi.DomainHash)
}
func (f *fakeRelayInvsContext) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalapi.BlockInfo, error) {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
if f.getBlockInfoResponse != nil {
return f.getBlockInfoResponse, nil
}
@@ -167,6 +207,8 @@ func (f *fakeRelayInvsContext) AppendImportedPruningPointUTXOs(outpointAndUTXOEn
}
func (f *fakeRelayInvsContext) ValidateAndInsertImportedPruningPoint(newPruningPoint *externalapi.DomainBlock) error {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return f.validateAndInsertImportedPruningPointResponse
}
@@ -179,12 +221,16 @@ func (f *fakeRelayInvsContext) CreateBlockLocator(lowHash, highHash *externalapi
}
func (f *fakeRelayInvsContext) CreateHeadersSelectedChainBlockLocator(lowHash, highHash *externalapi.DomainHash) (externalapi.BlockLocator, error) {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return externalapi.BlockLocator{
f.params.GenesisHash,
}, nil
}
func (f *fakeRelayInvsContext) CreateFullHeadersSelectedChainBlockLocator() (externalapi.BlockLocator, error) {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return externalapi.BlockLocator{
f.params.GenesisHash,
}, nil
@@ -203,6 +249,8 @@ func (f *fakeRelayInvsContext) GetVirtualInfo() (*externalapi.VirtualInfo, error
}
func (f *fakeRelayInvsContext) IsValidPruningPoint(blockHash *externalapi.DomainHash) (bool, error) {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return f.isValidPruningPointResponse, nil
}
@@ -231,6 +279,8 @@ func (f *fakeRelayInvsContext) Domain() domain.Domain {
}
func (f *fakeRelayInvsContext) Config() *config.Config {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return &config.Config{
Flags: &config.Flags{
NetworkFlags: config.NetworkFlags{
@@ -269,13 +319,59 @@ func (f *fakeRelayInvsContext) IsIBDRunning() bool {
}
func (f *fakeRelayInvsContext) TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return f.trySetIBDRunningResponse
}
func (f *fakeRelayInvsContext) UnsetIBDRunning() {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
close(f.finishedIBD)
}
func (f *fakeRelayInvsContext) SetValidateAndInsertBlockResponse(err error) {
f.rwLock.Lock()
defer f.rwLock.Unlock()
f.validateAndInsertBlockResponse = err
}
func (f *fakeRelayInvsContext) SetValidateAndInsertImportedPruningPointResponse(err error) {
f.rwLock.Lock()
defer f.rwLock.Unlock()
f.validateAndInsertImportedPruningPointResponse = err
}
func (f *fakeRelayInvsContext) SetGetBlockInfoResponse(info externalapi.BlockInfo) {
f.rwLock.Lock()
defer f.rwLock.Unlock()
f.getBlockInfoResponse = &info
}
func (f *fakeRelayInvsContext) SetTrySetIBDRunningResponse(b bool) {
f.rwLock.Lock()
defer f.rwLock.Unlock()
f.trySetIBDRunningResponse = b
}
func (f *fakeRelayInvsContext) SetIsValidPruningPointResponse(b bool) {
f.rwLock.Lock()
defer f.rwLock.Unlock()
f.isValidPruningPointResponse = b
}
func (f *fakeRelayInvsContext) GetGenesisHeader() externalapi.BlockHeader {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return f.params.GenesisBlock.Header
}
func (f *fakeRelayInvsContext) GetFinishedIBDChan() chan struct{} {
f.rwLock.RLock()
defer f.rwLock.RUnlock()
return f.finishedIBD
}
func TestHandleRelayInvs(t *testing.T) {
triggerIBD := func(t *testing.T, incomingRoute, outgoingRoute *router.Route, context *fakeRelayInvsContext) {
err := incomingRoute.Enqueue(appmessage.NewMsgInvBlock(consensushashing.BlockHash(orphanBlock)))
@@ -289,10 +385,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestRelayBlocks)
context.validateAndInsertBlockResponse = ruleerrors.NewErrMissingParents(orphanBlock.Header.ParentHashes())
defer func() {
context.validateAndInsertBlockResponse = nil
}()
context.SetValidateAndInsertBlockResponse(ruleerrors.NewErrMissingParents(orphanBlock.Header.ParentHashes()))
err = incomingRoute.Enqueue(appmessage.DomainBlockToMsgBlock(orphanBlock))
if err != nil {
@@ -342,10 +435,10 @@ func TestHandleRelayInvs(t *testing.T) {
name: "sending a known invalid inv",
funcToExecute: func(t *testing.T, incomingRoute, outgoingRoute *router.Route, context *fakeRelayInvsContext) {
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusInvalid,
}
})
err := incomingRoute.Enqueue(appmessage.NewMsgInvBlock(knownInvalidBlockHash))
if err != nil {
@@ -388,6 +481,29 @@ func TestHandleRelayInvs(t *testing.T) {
expectsBan: true,
expectsErrToContain: "got unrequested block",
},
{
name: "sending header only block on relay",
funcToExecute: func(t *testing.T, incomingRoute, outgoingRoute *router.Route, context *fakeRelayInvsContext) {
err := incomingRoute.Enqueue(appmessage.NewMsgInvBlock(headerOnlyBlockHash))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
}
msg, err := outgoingRoute.DequeueWithTimeout(time.Second)
if err != nil {
t.Fatalf("DequeueWithTimeout: %+v", err)
}
_ = msg.(*appmessage.MsgRequestRelayBlocks)
err = incomingRoute.Enqueue(appmessage.DomainBlockToMsgBlock(headerOnlyBlock))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
}
},
expectsProtocolError: true,
expectsBan: true,
expectsErrToContain: "block where expected block with body",
},
{
name: "sending invalid block",
funcToExecute: func(t *testing.T, incomingRoute, outgoingRoute *router.Route, context *fakeRelayInvsContext) {
@@ -402,7 +518,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestRelayBlocks)
context.validateAndInsertBlockResponse = ruleerrors.ErrBadMerkleRoot
context.SetValidateAndInsertBlockResponse(ruleerrors.ErrBadMerkleRoot)
err = incomingRoute.Enqueue(appmessage.DomainBlockToMsgBlock(invalidBlock))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -426,7 +542,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestRelayBlocks)
context.validateAndInsertBlockResponse = ruleerrors.NewErrMissingParents(orphanBlock.Header.ParentHashes())
context.SetValidateAndInsertBlockResponse(ruleerrors.NewErrMissingParents(orphanBlock.Header.ParentHashes()))
err = incomingRoute.Enqueue(appmessage.DomainBlockToMsgBlock(orphanBlock))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -452,7 +568,7 @@ func TestHandleRelayInvs(t *testing.T) {
{
name: "starting IBD when peer is already in IBD",
funcToExecute: func(t *testing.T, incomingRoute, outgoingRoute *router.Route, context *fakeRelayInvsContext) {
context.trySetIBDRunningResponse = false
context.SetTrySetIBDRunningResponse(false)
triggerIBD(t, incomingRoute, outgoingRoute, context)
checkNoActivity(t, outgoingRoute)
@@ -558,15 +674,15 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestHeaders)
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(
appmessage.NewBlockHeadersMessage(
[]*appmessage.MsgBlockHeader{
appmessage.DomainBlockHeaderToBlockHeader(context.params.GenesisBlock.Header)},
appmessage.DomainBlockHeaderToBlockHeader(context.GetGenesisHeader())},
),
)
if err != nil {
@@ -581,10 +697,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
// Finish the IBD by sending DoneHeaders and send incompatible pruning point
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
@@ -598,7 +714,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestPruningPointHashMessage)
context.isValidPruningPointResponse = false
context.SetIsValidPruningPointResponse(false)
err = incomingRoute.Enqueue(appmessage.NewPruningPointHashMessage(invalidPruningPointHash))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -630,11 +746,11 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestHeaders)
context.validateAndInsertBlockResponse = ruleerrors.ErrDuplicateBlock
context.SetValidateAndInsertBlockResponse(ruleerrors.ErrDuplicateBlock)
err = incomingRoute.Enqueue(
appmessage.NewBlockHeadersMessage(
[]*appmessage.MsgBlockHeader{
appmessage.DomainBlockHeaderToBlockHeader(context.params.GenesisBlock.Header)},
appmessage.DomainBlockHeaderToBlockHeader(context.GetGenesisHeader())},
),
)
if err != nil {
@@ -649,10 +765,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
// Finish the IBD by sending DoneHeaders and send incompatible pruning point
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
@@ -666,7 +782,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestPruningPointHashMessage)
context.isValidPruningPointResponse = false
context.SetIsValidPruningPointResponse(false)
err = incomingRoute.Enqueue(appmessage.NewPruningPointHashMessage(invalidPruningPointHash))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -698,7 +814,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestHeaders)
context.validateAndInsertBlockResponse = ruleerrors.ErrBadMerkleRoot
context.SetValidateAndInsertBlockResponse(ruleerrors.ErrBadMerkleRoot)
err = incomingRoute.Enqueue(
appmessage.NewBlockHeadersMessage(
[]*appmessage.MsgBlockHeader{
@@ -738,10 +854,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -790,10 +906,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -806,7 +922,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestPruningPointHashMessage)
context.isValidPruningPointResponse = false
context.SetIsValidPruningPointResponse(false)
err = incomingRoute.Enqueue(appmessage.NewPruningPointHashMessage(invalidPruningPointHash))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -840,10 +956,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -905,10 +1021,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -968,10 +1084,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -1037,10 +1153,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -1064,7 +1180,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestPruningPointUTXOSetAndBlock)
context.validateAndInsertImportedPruningPointResponse = ruleerrors.ErrBadMerkleRoot
context.SetValidateAndInsertImportedPruningPointResponse(ruleerrors.ErrBadMerkleRoot)
err = incomingRoute.Enqueue(appmessage.NewMsgIBDBlock(appmessage.DomainBlockToMsgBlock(invalidPruningPointBlock)))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -1104,10 +1220,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -1131,7 +1247,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestPruningPointUTXOSetAndBlock)
context.validateAndInsertImportedPruningPointResponse = ruleerrors.ErrSuggestedPruningViolatesFinality
context.SetValidateAndInsertImportedPruningPointResponse(ruleerrors.ErrSuggestedPruningViolatesFinality)
err = incomingRoute.Enqueue(appmessage.NewMsgIBDBlock(appmessage.DomainBlockToMsgBlock(validPruningPointBlock)))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -1168,10 +1284,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -1247,10 +1363,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -1324,10 +1440,10 @@ func TestHandleRelayInvs(t *testing.T) {
// This is done so it'll think it added the high hash to the DAG and proceed with fetching
// the pruning point UTXO set.
context.getBlockInfoResponse = &externalapi.BlockInfo{
context.SetGetBlockInfoResponse(externalapi.BlockInfo{
Exists: true,
BlockStatus: externalapi.StatusHeaderOnly,
}
})
err = incomingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
@@ -1367,7 +1483,7 @@ func TestHandleRelayInvs(t *testing.T) {
}
_ = msg.(*appmessage.MsgRequestIBDBlocks)
context.validateAndInsertBlockResponse = ruleerrors.ErrBadMerkleRoot
context.SetValidateAndInsertImportedPruningPointResponse(ruleerrors.ErrBadMerkleRoot)
err = incomingRoute.Enqueue(appmessage.NewMsgIBDBlock(appmessage.DomainBlockToMsgBlock(invalidBlock)))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
@@ -1417,11 +1533,11 @@ func TestHandleRelayInvs(t *testing.T) {
}
select {
case <-context.finishedIBD:
case <-context.GetFinishedIBDChan():
if !test.expectsIBDToFinish {
t.Fatalf("IBD unexpectedly finished")
}
case <-time.After(time.Second):
case <-time.After(10 * time.Second):
if test.expectsIBDToFinish {
t.Fatalf("IBD didn't finish after %s", 10*time.Second)
}
@@ -1436,7 +1552,7 @@ func TestHandleRelayInvs(t *testing.T) {
if !errors.Is(err, router.ErrRouteClosed) {
t.Fatalf("unexpected error %+v", err)
}
case <-time.After(time.Second):
case <-time.After(10 * time.Second):
t.Fatalf("waiting for flow to finish timed out after %s", 10*time.Second)
}
}

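The test fake above gains a sync.RWMutex so the test goroutine can mutate canned responses (SetValidateAndInsertBlockResponse and friends) while the flow goroutine reads them concurrently. A minimal sketch of that setter/getter locking discipline, with hypothetical names:

package main

import (
	"errors"
	"fmt"
	"sync"
)

// fakeContext mimics the locking discipline of fakeRelayInvsContext:
// writers take the write lock, readers take the read lock.
type fakeContext struct {
	rwLock                         sync.RWMutex
	validateAndInsertBlockResponse error
}

func (f *fakeContext) SetValidateAndInsertBlockResponse(err error) {
	f.rwLock.Lock()
	defer f.rwLock.Unlock()
	f.validateAndInsertBlockResponse = err
}

func (f *fakeContext) ValidateAndInsertBlock() error {
	f.rwLock.RLock()
	defer f.rwLock.RUnlock()
	return f.validateAndInsertBlockResponse
}

func main() {
	f := &fakeContext{}
	done := make(chan struct{})
	go func() {
		defer close(done)
		f.SetValidateAndInsertBlockResponse(errors.New("bad merkle root"))
	}()
	<-done
	fmt.Println(f.ValidateAndInsertBlock())
}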
View File

@@ -0,0 +1,4 @@
package testing
// Because of a bug in Go, coverage fails for packages that contain only test files. See https://github.com/golang/go/issues/27333
// This is therefore a dummy non-test Go file in the package.

View File

@@ -191,7 +191,7 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
continue
}
return protocolerrors.Errorf(true, "rejected transaction %s", txID)
return protocolerrors.Errorf(true, "rejected transaction %s: %s", txID, ruleErr)
}
err = flow.broadcastAcceptedTransactions([]*externalapi.DomainTransactionID{txID})
if err != nil {

View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -69,6 +69,11 @@ func (m *Manager) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler flowconte
m.context.SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler)
}
// SetOnPruningPointUTXOSetOverrideHandler sets the OnPruningPointUTXOSetOverride handler
func (m *Manager) SetOnPruningPointUTXOSetOverrideHandler(onPruningPointUTXOSetOverrideHandler flowcontext.OnPruningPointUTXOSetOverrideHandler) {
m.context.SetOnPruningPointUTXOSetOverrideHandler(onPruningPointUTXOSetOverrideHandler)
}
// SetOnTransactionAddedToMempoolHandler sets the onTransactionAddedToMempool handler
func (m *Manager) SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler flowcontext.OnTransactionAddedToMempoolHandler) {
m.context.SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler)

View File

@@ -2,8 +2,6 @@ package peer
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("PROT")

View File

@@ -1,9 +1,11 @@
package protocol
import (
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"sync/atomic"
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
@@ -57,8 +59,20 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
peer, err := handshake.HandleHandshake(m.context, netConnection, receiveVersionRoute,
sendVersionRoute, router.OutgoingRoute())
if err != nil {
m.handleError(err, netConnection, router.OutgoingRoute())
// non-blocking read from channel
select {
case innerError := <-errChan:
if errors.Is(err, routerpkg.ErrRouteClosed) {
m.handleError(innerError, netConnection, router.OutgoingRoute())
} else {
log.Errorf("Peer %s sent invalid message: %s", netConnection, innerError)
m.handleError(err, netConnection, router.OutgoingRoute())
}
default:
m.handleError(err, netConnection, router.OutgoingRoute())
}
return
}
defer m.context.RemoveFromPeers(peer)
@@ -74,12 +88,12 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
}
func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection, outgoingRoute *routerpkg.Route) {
if protocolErr := &(protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if protocolErr := (protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if !m.context.Config().DisableBanning && protocolErr.ShouldBan {
log.Warnf("Banning %s (reason: %s)", netConnection, protocolErr.Cause)
err := m.context.ConnectionManager().Ban(netConnection)
if err != nil && !errors.Is(err, addressmanager.ErrAddressNotFound) {
if !errors.Is(err, connmanager.ErrCannotBanPermanent) {
panic(err)
}
@@ -88,7 +102,7 @@ func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection
panic(err)
}
}
log.Debugf("Disconnecting from %s (reason: %s)", netConnection, protocolErr.Cause)
log.Infof("Disconnecting from %s (reason: %s)", netConnection, protocolErr.Cause)
netConnection.Disconnect()
return
}
@@ -135,11 +149,16 @@ func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerOneTimeFlow("SendVirtualSelectedParentInv", router, []appmessage.MessageCommand{},
isStopping, errChan, func(route *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.SendVirtualSelectedParentInv(m.context, outgoingRoute, peer)
}),
m.registerFlow("HandleRelayInvs", router, []appmessage.MessageCommand{
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator, appmessage.CmdIBDBlock,
appmessage.CmdDoneHeaders, appmessage.CmdUnexpectedPruningPoint, appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdBlockHeaders, appmessage.CmdPruningPointHash, appmessage.CmdIBDBlockLocatorHighestHash,
appmessage.CmdDonePruningPointUTXOSetChunks},
appmessage.CmdIBDBlockLocatorHighestHashNotFound, appmessage.CmdDonePruningPointUTXOSetChunks},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayInvs(m.context, incomingRoute,
outgoingRoute, peer)

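When the handshake fails, routerInitializer above first performs a non-blocking read from errChan: if a flow already reported a more specific error, it is preferred over the generic route-closed error. A minimal sketch of a non-blocking channel read via select with a default case:

package main

import (
	"errors"
	"fmt"
)

func main() {
	errChan := make(chan error, 1)
	errChan <- errors.New("flow reported: invalid version message")

	outerErr := errors.New("route closed")

	// Non-blocking read: the default case runs immediately if errChan is empty.
	select {
	case innerErr := <-errChan:
		fmt.Println("using the flow's error:", innerErr)
	default:
		fmt.Println("no flow error pending, using:", outerErr)
	}
}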
View File

@@ -12,19 +12,19 @@ type ProtocolError struct {
Cause error
}
func (e *ProtocolError) Error() string {
func (e ProtocolError) Error() string {
return e.Cause.Error()
}
// Unwrap returns the cause of ProtocolError, to be used with `errors.Unwrap()`
func (e *ProtocolError) Unwrap() error {
func (e ProtocolError) Unwrap() error {
return e.Cause
}
// Errorf formats according to a format specifier and returns the string
// as a ProtocolError.
func Errorf(shouldBan bool, format string, args ...interface{}) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.Errorf(format, args...),
}
@@ -33,7 +33,7 @@ func Errorf(shouldBan bool, format string, args ...interface{}) error {
// New returns a ProtocolError with the supplied message.
// New also records the stack trace at the point it was called.
func New(shouldBan bool, message string) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.New(message),
}
@@ -41,7 +41,7 @@ func New(shouldBan bool, message string) error {
// Wrap wraps the given error and returns it as a ProtocolError.
func Wrap(shouldBan bool, err error, message string) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.Wrap(err, message),
}
@@ -49,7 +49,7 @@ func Wrap(shouldBan bool, err error, message string) error {
// Wrapf wraps the given error with the given format and returns it as a ProtocolError.
func Wrapf(shouldBan bool, err error, format string, args ...interface{}) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.Wrapf(err, format, args...),
}

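ProtocolError switches here from pointer to value receivers, and the call sites change their errors.As target from a *ProtocolError to a plain ProtocolError variable. A self-contained sketch showing that errors.As still matches when the value type (not a pointer to it) travels through a wrapped error chain:

package main

import (
	"errors"
	"fmt"
)

// ProtocolError mirrors the value-receiver shape used above.
type ProtocolError struct {
	ShouldBan bool
	Cause     error
}

func (e ProtocolError) Error() string { return e.Cause.Error() }
func (e ProtocolError) Unwrap() error { return e.Cause }

func main() {
	// A ProtocolError value wrapped by another error.
	err := fmt.Errorf("handshake: %w", ProtocolError{
		ShouldBan: true,
		Cause:     errors.New("connected to self"),
	})

	// The target is a pointer to a value-typed variable, matching the
	// `protocolErr := (protocolerrors.ProtocolError{})` pattern in the diff.
	pErr := ProtocolError{}
	if errors.As(err, &pErr) {
		fmt.Println("protocol error, shouldBan:", pErr.ShouldBan, "cause:", pErr.Cause)
	}
}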
View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.RPCS)
var log = logger.RegisterSubSystem("RPCS")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -69,10 +69,31 @@ func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockIns
return err
}
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(appmessage.DomainBlockToMsgBlock(block))
msgBlock := appmessage.DomainBlockToMsgBlock(block)
blockVerboseData, err := m.context.BuildBlockVerboseData(block.Header, block, false)
if err != nil {
return err
}
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(msgBlock, blockVerboseData)
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
}
// NotifyPruningPointUTXOSetOverride notifies the manager whenever the UTXO index
// resets due to a pruning point change via IBD.
func (m *Manager) NotifyPruningPointUTXOSetOverride() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyPruningPointUTXOSetOverride")
defer onEnd()
if m.context.Config.UTXOIndex {
err := m.notifyPruningPointUTXOSetOverride()
if err != nil {
return err
}
}
return nil
}
// NotifyFinalityConflict notifies the manager that there's a finality conflict in the DAG
func (m *Manager) NotifyFinalityConflict(violatingBlockHash string) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyFinalityConflict")
@@ -95,13 +116,25 @@ func (m *Manager) notifyUTXOsChanged(blockInsertionResult *externalapi.BlockInse
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyUTXOsChanged")
defer onEnd()
utxoIndexChanges, err := m.context.UTXOIndex.Update(blockInsertionResult.VirtualSelectedParentChainChanges)
utxoIndexChanges, err := m.context.UTXOIndex.Update(blockInsertionResult)
if err != nil {
return err
}
return m.context.NotificationManager.NotifyUTXOsChanged(utxoIndexChanges)
}
func (m *Manager) notifyPruningPointUTXOSetOverride() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.notifyPruningPointUTXOSetOverride")
defer onEnd()
err := m.context.UTXOIndex.Reset()
if err != nil {
return err
}
return m.context.NotificationManager.NotifyPruningPointUTXOSetOverride()
}
func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentBlueScoreChanged")
defer onEnd()

View File

@@ -35,9 +35,15 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdShutDownRequestMessage: rpchandlers.HandleShutDown,
appmessage.CmdGetHeadersRequestMessage: rpchandlers.HandleGetHeaders,
appmessage.CmdNotifyUTXOsChangedRequestMessage: rpchandlers.HandleNotifyUTXOsChanged,
appmessage.CmdStopNotifyingUTXOsChangedRequestMessage: rpchandlers.HandleStopNotifyingUTXOsChanged,
appmessage.CmdGetUTXOsByAddressesRequestMessage: rpchandlers.HandleGetUTXOsByAddresses,
appmessage.CmdGetVirtualSelectedParentBlueScoreRequestMessage: rpchandlers.HandleGetVirtualSelectedParentBlueScore,
appmessage.CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualSelectedParentBlueScoreChanged,
appmessage.CmdBanRequestMessage: rpchandlers.HandleBan,
appmessage.CmdUnbanRequestMessage: rpchandlers.HandleUnban,
appmessage.CmdGetInfoRequestMessage: rpchandlers.HandleGetInfo,
appmessage.CmdNotifyPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleNotifyPruningPointUTXOSetOverrideRequest,
appmessage.CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleStopNotifyingPruningPointUTXOSetOverrideRequest,
}
func (m *Manager) routerInitializer(router *router.Router, netConnection *netadapter.NetConnection) {

View File

@@ -2,8 +2,6 @@ package rpccontext
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.RPCS)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("RPCS")

View File

@@ -30,8 +30,9 @@ type NotificationListener struct {
propagateFinalityConflictResolvedNotifications bool
propagateUTXOsChangedNotifications bool
propagateVirtualSelectedParentBlueScoreChangedNotifications bool
propagatePruningPointUTXOSetOverrideNotifications bool
propagateUTXOsChangedNotificationAddresses []*UTXOsChangedNotificationAddress
propagateUTXOsChangedNotificationAddresses map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress
}
// NewNotificationManager creates a new NotificationManager
@@ -180,6 +181,23 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentBlueScoreChanged(
return nil
}
// NotifyPruningPointUTXOSetOverride notifies the notification manager that the UTXO index
// was reset due to a pruning point change via IBD.
func (nm *NotificationManager) NotifyPruningPointUTXOSetOverride() error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagatePruningPointUTXOSetOverrideNotifications {
err := router.OutgoingRoute().Enqueue(appmessage.NewPruningPointUTXOSetOverrideNotificationMessage())
if err != nil {
return err
}
}
}
return nil
}
func newNotificationListener() *NotificationListener {
return &NotificationListener{
propagateBlockAddedNotifications: false,
@@ -188,6 +206,7 @@ func newNotificationListener() *NotificationListener {
propagateFinalityConflictResolvedNotifications: false,
propagateUTXOsChangedNotifications: false,
propagateVirtualSelectedParentBlueScoreChangedNotifications: false,
propagatePruningPointUTXOSetOverrideNotifications: false,
}
}
@@ -216,34 +235,70 @@ func (nl *NotificationListener) PropagateFinalityConflictResolvedNotifications()
}
// PropagateUTXOsChangedNotifications instructs the listener to send UTXOs changed notifications
// to the remote listener
// to the remote listener for the given addresses. Subsequent calls instruct the listener to
// send UTXOs changed notifications for those addresses along with the old ones. Duplicate addresses
// are ignored.
func (nl *NotificationListener) PropagateUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
nl.propagateUTXOsChangedNotifications = true
nl.propagateUTXOsChangedNotificationAddresses = addresses
if !nl.propagateUTXOsChangedNotifications {
nl.propagateUTXOsChangedNotifications = true
nl.propagateUTXOsChangedNotificationAddresses =
make(map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress, len(addresses))
}
for _, address := range addresses {
nl.propagateUTXOsChangedNotificationAddresses[address.ScriptPublicKeyString] = address
}
}
// StopPropagatingUTXOsChangedNotifications instructs the listener to stop sending UTXOs
// changed notifications to the remote listener for the given addresses. Addresses for which
// notifications are not currently sent are ignored.
func (nl *NotificationListener) StopPropagatingUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
if !nl.propagateUTXOsChangedNotifications {
return
}
for _, address := range addresses {
delete(nl.propagateUTXOsChangedNotificationAddresses, address.ScriptPublicKeyString)
}
}
func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
utxoChanges *utxoindex.UTXOChanges) *appmessage.UTXOsChangedNotificationMessage {
// As an optimization, we iterate over the smaller set (O(n)) among the two below
// and check existence over the larger set (O(1))
utxoChangesSize := len(utxoChanges.Added) + len(utxoChanges.Removed)
addressesSize := len(nl.propagateUTXOsChangedNotificationAddresses)
notification := &appmessage.UTXOsChangedNotificationMessage{}
for _, listenerAddress := range nl.propagateUTXOsChangedNotificationAddresses {
listenerScriptPublicKeyString := listenerAddress.ScriptPublicKeyString
if addedPairs, ok := utxoChanges.Added[listenerScriptPublicKeyString]; ok {
notification.Added = append(notification.Added,
ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)...)
if utxoChangesSize < addressesSize {
for scriptPublicKeyString, addedPairs := range utxoChanges.Added {
if listenerAddress, ok := nl.propagateUTXOsChangedNotificationAddresses[scriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
}
if removedOutpoints, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
for outpoint := range removedOutpoints {
notification.Removed = append(notification.Removed, &appmessage.UTXOsByAddressesEntry{
Address: listenerAddress.Address,
Outpoint: &appmessage.RPCOutpoint{
TransactionID: outpoint.TransactionID.String(),
Index: outpoint.Index,
},
})
for scriptPublicKeyString, removedOutpoints := range utxoChanges.Removed {
if listenerAddress, ok := nl.propagateUTXOsChangedNotificationAddresses[scriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
} else {
for _, listenerAddress := range nl.propagateUTXOsChangedNotificationAddresses {
listenerScriptPublicKeyString := listenerAddress.ScriptPublicKeyString
if addedPairs, ok := utxoChanges.Added[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
if removedOutpoints, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
}
return notification
}
@@ -252,3 +307,15 @@ func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
func (nl *NotificationListener) PropagateVirtualSelectedParentBlueScoreChangedNotifications() {
nl.propagateVirtualSelectedParentBlueScoreChangedNotifications = true
}
// PropagatePruningPointUTXOSetOverrideNotifications instructs the listener to send pruning point UTXO set override notifications
// to the remote listener.
func (nl *NotificationListener) PropagatePruningPointUTXOSetOverrideNotifications() {
nl.propagatePruningPointUTXOSetOverrideNotifications = true
}
// StopPropagatingPruningPointUTXOSetOverrideNotifications instructs the listener to stop sending pruning
// point UTXO set override notifications to the remote listener.
func (nl *NotificationListener) StopPropagatingPruningPointUTXOSetOverrideNotifications() {
nl.propagatePruningPointUTXOSetOverrideNotifications = false
}

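convertUTXOChangesToUTXOsChangedNotification above now keys the listener's addresses by script public key and, as its comment notes, iterates over whichever of the two sets is smaller while doing O(1) lookups in the larger one. A minimal sketch of that intersection pattern on plain maps:

package main

import "fmt"

// intersectKeys returns the keys present in both maps, always looping over
// the smaller map and probing the larger one, mirroring the optimization in
// convertUTXOChangesToUTXOsChangedNotification.
func intersectKeys(changes map[string]int, listeners map[string]bool) []string {
	var result []string
	if len(changes) < len(listeners) {
		for key := range changes {
			if listeners[key] {
				result = append(result, key)
			}
		}
		return result
	}
	for key := range listeners {
		if _, ok := changes[key]; ok {
			result = append(result, key)
		}
	}
	return result
}

func main() {
	changes := map[string]int{"spkA": 1, "spkB": 2}
	listeners := map[string]bool{"spkB": true, "spkC": true, "spkD": true}
	fmt.Println(intersectKeys(changes, listeners)) // [spkB]
}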
View File

@@ -2,6 +2,9 @@ package rpccontext
import (
"encoding/hex"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/utxoindex"
@@ -28,3 +31,43 @@ func ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(address string, pair
}
return utxosByAddressesEntries
}
// convertUTXOOutpointsToUTXOsByAddressesEntries converts
// UTXOOutpoints to a slice of UTXOsByAddressesEntry
func convertUTXOOutpointsToUTXOsByAddressesEntries(address string, outpoints utxoindex.UTXOOutpoints) []*appmessage.UTXOsByAddressesEntry {
utxosByAddressesEntries := make([]*appmessage.UTXOsByAddressesEntry, 0, len(outpoints))
for outpoint := range outpoints {
utxosByAddressesEntries = append(utxosByAddressesEntries, &appmessage.UTXOsByAddressesEntry{
Address: address,
Outpoint: &appmessage.RPCOutpoint{
TransactionID: outpoint.TransactionID.String(),
Index: outpoint.Index,
},
})
}
return utxosByAddressesEntries
}
// ConvertAddressStringsToUTXOsChangedNotificationAddresses converts address strings
// to UTXOsChangedNotificationAddresses
func (ctx *Context) ConvertAddressStringsToUTXOsChangedNotificationAddresses(
addressStrings []string) ([]*UTXOsChangedNotificationAddress, error) {
addresses := make([]*UTXOsChangedNotificationAddress, len(addressStrings))
for i, addressString := range addressStrings {
address, err := util.DecodeAddress(addressString, ctx.Config.ActiveNetParams.Prefix)
if err != nil {
return nil, errors.Errorf("Could not decode address '%s': %s", addressString, err)
}
scriptPublicKey, err := txscript.PayToAddrScript(address)
if err != nil {
return nil, errors.Errorf("Could not create a scriptPublicKey for address '%s': %s", addressString, err)
}
scriptPublicKeyString := utxoindex.ConvertScriptPublicKeyToString(scriptPublicKey)
addresses[i] = &UTXOsChangedNotificationAddress{
Address: addressString,
ScriptPublicKeyString: scriptPublicKeyString,
}
}
return addresses, nil
}

View File

@@ -3,12 +3,15 @@ package rpccontext
import (
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/util/difficulty"
"math"
"math/big"
"strconv"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/estimatedsize"
@@ -20,17 +23,36 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/util/pointers"
)
// BuildBlockVerboseData builds a BlockVerboseData from the given block.
func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, includeTransactionVerboseData bool) (*appmessage.BlockVerboseData, error) {
// ErrBuildBlockVerboseDataInvalidBlock indicates that a block that was given to BuildBlockVerboseData is invalid.
var ErrBuildBlockVerboseDataInvalidBlock = errors.New("ErrBuildBlockVerboseDataInvalidBlock")
// BuildBlockVerboseData builds a BlockVerboseData from the given blockHeader.
// A block may optionally also be given if it's available in the calling context.
func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, block *externalapi.DomainBlock,
includeTransactionVerboseData bool) (*appmessage.BlockVerboseData, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "BuildBlockVerboseData")
defer onEnd()
hash := consensushashing.HeaderHash(blockHeader)
blockInfo, err := ctx.Domain.Consensus().GetBlockInfo(hash)
if err != nil {
return nil, err
}
if blockInfo.BlockStatus == externalapi.StatusInvalid {
return nil, errors.Wrap(ErrBuildBlockVerboseDataInvalidBlock, "cannot build verbose data for "+
"invalid block")
}
childrenHashes, err := ctx.Domain.Consensus().GetBlockChildren(hash)
if err != nil {
return nil, err
}
result := &appmessage.BlockVerboseData{
Hash: hash.String(),
Version: blockHeader.Version(),
@@ -39,6 +61,7 @@ func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, i
AcceptedIDMerkleRoot: blockHeader.AcceptedIDMerkleRoot().String(),
UTXOCommitment: blockHeader.UTXOCommitment().String(),
ParentHashes: hashes.ToStrings(blockHeader.ParentHashes()),
ChildrenHashes: hashes.ToStrings(childrenHashes),
Nonce: blockHeader.Nonce(),
Time: blockHeader.TimeInMilliseconds(),
Bits: strconv.FormatInt(int64(blockHeader.Bits()), 16),
@@ -48,9 +71,11 @@ func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, i
}
if blockInfo.BlockStatus != externalapi.StatusHeaderOnly {
block, err := ctx.Domain.Consensus().GetBlock(hash)
if err != nil {
return nil, err
if block == nil {
block, err = ctx.Domain.Consensus().GetBlock(hash)
if err != nil {
return nil, err
}
}
txIDs := make([]string, len(block.Transactions))
@@ -100,6 +125,9 @@ func (ctx *Context) BuildTransactionVerboseData(tx *externalapi.DomainTransactio
blockHeader externalapi.BlockHeader, blockHash string) (
*appmessage.TransactionVerboseData, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "BuildTransactionVerboseData")
defer onEnd()
var payloadHash string
if tx.SubnetworkID != subnetworks.SubnetworkIDNative {
payloadHash = tx.PayloadHash.String()
@@ -167,7 +195,7 @@ func (ctx *Context) buildTransactionVerboseOutputs(tx *externalapi.DomainTransac
passesFilter := len(filterAddrMap) == 0
var encodedAddr string
if addr != nil {
encodedAddr = *pointers.String(addr.EncodeAddress())
encodedAddr = addr.EncodeAddress()
// If the filter doesn't already pass, make it pass if
// the address exists in the filter.
@@ -184,6 +212,7 @@ func (ctx *Context) buildTransactionVerboseOutputs(tx *externalapi.DomainTransac
output.Index = uint32(i)
output.Value = transactionOutput.Value
output.ScriptPubKey = &appmessage.ScriptPubKeyResult{
Version: transactionOutput.ScriptPublicKey.Version,
Address: encodedAddr,
Hex: hex.EncodeToString(transactionOutput.ScriptPublicKey.Script),
Type: scriptClass.String(),

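BuildBlockVerboseData now accepts an optional *externalapi.DomainBlock, so callers that already hold the block (such as the block-added notification path) avoid a second GetBlock lookup, while callers that only have the header pass nil. A small sketch of that "use the supplied value or fetch it" pattern, with hypothetical types:

package main

import "fmt"

type block struct{ hash string }

// fetchBlock stands in for Consensus().GetBlock(hash).
func fetchBlock(hash string) (*block, error) {
	fmt.Println("fetching", hash, "from the database")
	return &block{hash: hash}, nil
}

// verboseData uses the caller-supplied block when available and falls back
// to fetching it otherwise, mirroring the nil check added in the diff.
func verboseData(hash string, b *block) (string, error) {
	if b == nil {
		var err error
		b, err = fetchBlock(hash)
		if err != nil {
			return "", err
		}
	}
	return "verbose data for " + b.hash, nil
}

func main() {
	// Caller already has the block: no fetch happens.
	fmt.Println(verboseData("abc", &block{hash: "abc"}))
	// Caller only has the hash: the block is fetched.
	fmt.Println(verboseData("def", nil))
}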
View File

@@ -0,0 +1,28 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"net"
)
// HandleBan handles the respectively named RPC command
func HandleBan(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
banRequest := request.(*appmessage.BanRequestMessage)
ip := net.ParseIP(banRequest.IP)
if ip == nil {
errorMessage := &appmessage.BanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP %s", banRequest.IP)
return errorMessage, nil
}
err := context.ConnectionManager.BanByIP(ip)
if err != nil {
errorMessage := &appmessage.BanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not ban IP: %s", err)
return errorMessage, nil
}
response := appmessage.NewBanResponseMessage()
return response, nil
}


@@ -5,6 +5,7 @@ import (
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
// HandleGetBlock handles the respectively named RPC command
@@ -28,10 +29,16 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
response := appmessage.NewGetBlockResponseMessage()
blockVerboseData, err := context.BuildBlockVerboseData(header, getBlockRequest.IncludeTransactionVerboseData)
blockVerboseData, err := context.BuildBlockVerboseData(header, nil, getBlockRequest.IncludeTransactionVerboseData)
if err != nil {
if errors.Is(err, rpccontext.ErrBuildBlockVerboseDataInvalidBlock) {
errorMessage := &appmessage.GetBlockResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Block %s is invalid", hash)
return errorMessage, nil
}
return nil, err
}
response.BlockVerboseData = blockVerboseData
return response, nil


@@ -36,5 +36,11 @@ func HandleGetBlockDAGInfo(context *rpccontext.Context, _ *router.Router, _ appm
response.Difficulty = context.GetDifficultyRatio(virtualInfo.Bits, context.Config.ActiveNetParams)
response.PastMedianTime = virtualInfo.PastMedianTime
pruningPoint, err := context.Domain.Consensus().PruningPoint()
if err != nil {
return nil, err
}
response.PruningPointHash = pruningPoint.String()
return response, nil
}


@@ -3,18 +3,104 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
const (
// maxBlocksInGetBlocksResponse is the maximum number of blocks that are
// allowed in a GetBlocksResult.
maxBlocksInGetBlocksResponse = 100
maxBlocksInGetBlocksResponse = 1000
)
// HandleGetBlocks handles the respectively named RPC command
func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
response := &appmessage.GetBlocksResponseMessage{}
response.Error = appmessage.RPCErrorf("not implemented")
getBlocksRequest := request.(*appmessage.GetBlocksRequestMessage)
// Validate that the user didn't set IncludeTransactionVerboseData without setting IncludeBlockVerboseData
if !getBlocksRequest.IncludeBlockVerboseData && getBlocksRequest.IncludeTransactionVerboseData {
return &appmessage.GetBlocksResponseMessage{
Error: appmessage.RPCErrorf(
"If includeTransactionVerboseData is set, then includeBlockVerboseData must be set as well"),
}, nil
}
// Decode lowHash
// If lowHash is empty - use genesis instead.
lowHash := context.Config.ActiveNetParams.GenesisHash
if getBlocksRequest.LowHash != "" {
var err error
lowHash, err = externalapi.NewDomainHashFromString(getBlocksRequest.LowHash)
if err != nil {
return &appmessage.GetBlocksResponseMessage{
Error: appmessage.RPCErrorf("Could not decode lowHash %s: %s", getBlocksRequest.LowHash, err),
}, nil
}
blockInfo, err := context.Domain.Consensus().GetBlockInfo(lowHash)
if err != nil {
return nil, err
}
if !blockInfo.Exists {
return &appmessage.GetBlocksResponseMessage{
Error: appmessage.RPCErrorf("Could not find lowHash %s", getBlocksRequest.LowHash),
}, nil
}
}
// Get hashes between lowHash and virtualSelectedParent
virtualSelectedParent, err := context.Domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return nil, err
}
blockHashes, err := context.Domain.Consensus().GetHashesBetween(
lowHash, virtualSelectedParent, maxBlocksInGetBlocksResponse)
if err != nil {
return nil, err
}
// prepend low hash to make it inclusive
blockHashes = append([]*externalapi.DomainHash{lowHash}, blockHashes...)
// If there are fewer than maxBlocksInGetBlocksResponse blocks between lowHash and virtualSelectedParent -
// add virtualSelectedParent's anticone
if len(blockHashes) < maxBlocksInGetBlocksResponse {
virtualSelectedParentAnticone, err := context.Domain.Consensus().Anticone(virtualSelectedParent)
if err != nil {
return nil, err
}
blockHashes = append(blockHashes, virtualSelectedParentAnticone...)
}
// Both GetHashesBetween and Anticone might return more than the allowed number of blocks, so
// trim any extra blocks.
if len(blockHashes) > maxBlocksInGetBlocksResponse {
blockHashes = blockHashes[:maxBlocksInGetBlocksResponse]
}
// Prepare the response
response := &appmessage.GetBlocksResponseMessage{
BlockHashes: hashes.ToStrings(blockHashes),
}
// Retrieve all block data in case BlockVerboseData was requested
if getBlocksRequest.IncludeBlockVerboseData {
response.BlockVerboseData = make([]*appmessage.BlockVerboseData, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := context.Domain.Consensus().GetBlockHeader(blockHash)
if err != nil {
return nil, err
}
blockVerboseData, err := context.BuildBlockVerboseData(blockHeader, nil,
getBlocksRequest.IncludeTransactionVerboseData)
if err != nil {
return nil, err
}
response.BlockVerboseData[i] = blockVerboseData
}
}
return response, nil
}
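The hash list above is assembled in three steps: lowHash is prepended so the range is inclusive, the virtual selected parent's anticone pads the list only when GetHashesBetween comes back short of the limit, and the combined slice is trimmed back to the limit. A condensed sketch of that ordering logic, using plain strings instead of *externalapi.DomainHash (illustration only, not handler code):

func assembleHashes(lowHash string, between, anticone []string, limit int) []string {
	hashes := append([]string{lowHash}, between...) // make the range inclusive of lowHash
	if len(hashes) < limit {
		hashes = append(hashes, anticone...) // pad with the virtual selected parent's anticone
	}
	if len(hashes) > limit {
		hashes = hashes[:limit] // either source may overshoot, so trim to the limit
	}
	return hashes
}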


@@ -0,0 +1,144 @@
package rpchandlers_test
import (
"reflect"
"sort"
"testing"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/app/rpc/rpchandlers"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/model/testapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/domain/miningmanager"
"github.com/kaspanet/kaspad/infrastructure/config"
)
type fakeDomain struct {
testapi.TestConsensus
}
func (d fakeDomain) Consensus() externalapi.Consensus { return d }
func (d fakeDomain) MiningManager() miningmanager.MiningManager { return nil }
func TestHandleGetBlocks(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(params, false, "TestHandleGetBlocks")
if err != nil {
t.Fatalf("Error setting up consensus: %+v", err)
}
defer teardown(false)
fakeContext := rpccontext.Context{
Config: &config.Config{Flags: &config.Flags{NetworkFlags: config.NetworkFlags{ActiveNetParams: params}}},
Domain: fakeDomain{tc},
}
getBlocks := func(lowHash *externalapi.DomainHash) *appmessage.GetBlocksResponseMessage {
request := appmessage.GetBlocksRequestMessage{}
if lowHash != nil {
request.LowHash = lowHash.String()
}
response, err := rpchandlers.HandleGetBlocks(&fakeContext, nil, &request)
if err != nil {
t.Fatalf("Expected empty request to not fail, instead: '%v'", err)
}
return response.(*appmessage.GetBlocksResponseMessage)
}
filterAntiPast := func(povBlock *externalapi.DomainHash, slice []*externalapi.DomainHash) []*externalapi.DomainHash {
antipast := make([]*externalapi.DomainHash, 0, len(slice))
for _, blockHash := range slice {
isInPastOfPovBlock, err := tc.DAGTopologyManager().IsAncestorOf(blockHash, povBlock)
if err != nil {
t.Fatalf("Failed doing reachability check: '%v'", err)
}
if !isInPastOfPovBlock {
antipast = append(antipast, blockHash)
}
}
return antipast
}
// Create a DAG with the following structure:
// merging block
// / | \
// split1 split2 split3
// \ | /
// merging block
// / | \
// split1 split2 split3
// \ | /
// etc.
expectedOrder := make([]*externalapi.DomainHash, 0, 40)
mergingBlock := params.GenesisHash
for i := 0; i < 10; i++ {
splitBlocks := make([]*externalapi.DomainHash, 0, 3)
for j := 0; j < 3; j++ {
blockHash, _, err := tc.AddBlock([]*externalapi.DomainHash{mergingBlock}, nil, nil)
if err != nil {
t.Fatalf("Failed adding block: %v", err)
}
splitBlocks = append(splitBlocks, blockHash)
}
sort.Sort(sort.Reverse(testutils.NewTestGhostDAGSorter(splitBlocks, tc, t)))
restOfSplitBlocks, selectedParent := splitBlocks[:len(splitBlocks)-1], splitBlocks[len(splitBlocks)-1]
expectedOrder = append(expectedOrder, selectedParent)
expectedOrder = append(expectedOrder, restOfSplitBlocks...)
mergingBlock, _, err = tc.AddBlock(splitBlocks, nil, nil)
if err != nil {
t.Fatalf("Failed adding block: %v", err)
}
expectedOrder = append(expectedOrder, mergingBlock)
}
virtualSelectedParent, err := tc.GetVirtualSelectedParent()
if err != nil {
t.Fatalf("Failed getting SelectedParent: %v", err)
}
if !virtualSelectedParent.Equal(expectedOrder[len(expectedOrder)-1]) {
t.Fatalf("Expected %s to be selectedParent, instead found: %s", expectedOrder[len(expectedOrder)-1], virtualSelectedParent)
}
requestSelectedParent := getBlocks(virtualSelectedParent)
if !reflect.DeepEqual(requestSelectedParent.BlockHashes, hashes.ToStrings([]*externalapi.DomainHash{virtualSelectedParent})) {
t.Fatalf("TestHandleGetBlocks expected:\n%v\nactual:\n%v", virtualSelectedParent, requestSelectedParent.BlockHashes)
}
for i, blockHash := range expectedOrder {
expectedBlocks := filterAntiPast(blockHash, expectedOrder)
expectedBlocks = append([]*externalapi.DomainHash{blockHash}, expectedBlocks...)
actualBlocks := getBlocks(blockHash)
if !reflect.DeepEqual(actualBlocks.BlockHashes, hashes.ToStrings(expectedBlocks)) {
t.Fatalf("TestHandleGetBlocks %d \nexpected: \n%v\nactual:\n%v", i,
hashes.ToStrings(expectedBlocks), actualBlocks.BlockHashes)
}
}
// Explicitly make sure that if lowHash == highHash we get a slice with a single hash.
actualBlocks := getBlocks(virtualSelectedParent)
if !reflect.DeepEqual(actualBlocks.BlockHashes, []string{virtualSelectedParent.String()}) {
t.Fatalf("TestHandleGetBlocks expected blocks to contain just '%s', instead got: \n%v",
virtualSelectedParent, actualBlocks.BlockHashes)
}
expectedOrder = append([]*externalapi.DomainHash{params.GenesisHash}, expectedOrder...)
actualOrder := getBlocks(nil)
if !reflect.DeepEqual(actualOrder.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: %v \nactual:\n%v", expectedOrder, actualOrder.BlockHashes)
}
requestAllExplicitly := getBlocks(params.GenesisHash)
if !reflect.DeepEqual(requestAllExplicitly.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: \n%v\n. actual:\n%v", expectedOrder, requestAllExplicitly.BlockHashes)
}
})
}


@@ -0,0 +1,13 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleGetInfo handles the respectively named RPC command
func HandleGetInfo(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
response := appmessage.NewGetInfoResponseMessage(context.NetAdapter.ID().String())
return response, nil
}


@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.RPCS)
var log = logger.RegisterSubSystem("RPCS")
var spawn = panics.GoroutineWrapperFunc(log)


@@ -0,0 +1,19 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleNotifyPruningPointUTXOSetOverrideRequest handles the respectively named RPC command
func HandleNotifyPruningPointUTXOSetOverrideRequest(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.PropagatePruningPointUTXOSetOverrideNotifications()
response := appmessage.NewNotifyPruningPointUTXOSetOverrideResponseMessage()
return response, nil
}


@@ -3,10 +3,7 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util"
)
// HandleNotifyUTXOsChanged handles the respectively named RPC command
@@ -18,26 +15,11 @@ func HandleNotifyUTXOsChanged(context *rpccontext.Context, router *router.Router
}
notifyUTXOsChangedRequest := request.(*appmessage.NotifyUTXOsChangedRequestMessage)
addresses := make([]*rpccontext.UTXOsChangedNotificationAddress, len(notifyUTXOsChangedRequest.Addresses))
for i, addressString := range notifyUTXOsChangedRequest.Addresses {
address, err := util.DecodeAddress(addressString, context.Config.ActiveNetParams.Prefix)
if err != nil {
errorMessage := appmessage.NewNotifyUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Could not decode address '%s': %s", addressString, err)
return errorMessage, nil
}
scriptPublicKey, err := txscript.PayToAddrScript(address)
if err != nil {
errorMessage := appmessage.NewNotifyUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Could not create a scriptPublicKey for address '%s': %s", addressString, err)
return errorMessage, nil
}
scriptPublicKeyString := utxoindex.ConvertScriptPublicKeyToString(scriptPublicKey)
addresses[i] = &rpccontext.UTXOsChangedNotificationAddress{
Address: addressString,
ScriptPublicKeyString: scriptPublicKeyString,
}
addresses, err := context.ConvertAddressStringsToUTXOsChangedNotificationAddresses(notifyUTXOsChangedRequest.Addresses)
if err != nil {
errorMessage := appmessage.NewNotifyUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Parsing error: %s", err)
return errorMessage, nil
}
listener, err := context.NotificationManager.Listener(router)
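For reference, the per-address work that the removed lines above did inline is presumably what the new rpccontext helper wraps. A sketch under that assumption (hypothetical function, reconstructed only from the removed code and assuming the util, txscript, utxoindex, dagconfig and errors imports; not the actual rpccontext implementation):

func convertAddressStrings(addressStrings []string, params *dagconfig.Params) (
	[]*rpccontext.UTXOsChangedNotificationAddress, error) {

	addresses := make([]*rpccontext.UTXOsChangedNotificationAddress, len(addressStrings))
	for i, addressString := range addressStrings {
		address, err := util.DecodeAddress(addressString, params.Prefix)
		if err != nil {
			return nil, errors.Wrapf(err, "could not decode address '%s'", addressString)
		}
		scriptPublicKey, err := txscript.PayToAddrScript(address)
		if err != nil {
			return nil, errors.Wrapf(err, "could not create a scriptPublicKey for address '%s'", addressString)
		}
		addresses[i] = &rpccontext.UTXOsChangedNotificationAddress{
			Address:               addressString,
			ScriptPublicKeyString: utxoindex.ConvertScriptPublicKeyToString(scriptPublicKey),
		}
	}
	return addresses, nil
}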


@@ -0,0 +1,19 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleStopNotifyingPruningPointUTXOSetOverrideRequest handles the respectively named RPC command
func HandleStopNotifyingPruningPointUTXOSetOverrideRequest(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.StopPropagatingPruningPointUTXOSetOverrideNotifications()
response := appmessage.NewStopNotifyingPruningPointUTXOSetOverrideResponseMessage()
return response, nil
}


@@ -0,0 +1,33 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleStopNotifyingUTXOsChanged handles the respectively named RPC command
func HandleStopNotifyingUTXOsChanged(context *rpccontext.Context, router *router.Router, request appmessage.Message) (appmessage.Message, error) {
if !context.Config.UTXOIndex {
errorMessage := appmessage.NewStopNotifyingUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Method unavailable when kaspad is run without --utxoindex")
return errorMessage, nil
}
stopNotifyingUTXOsChangedRequest := request.(*appmessage.StopNotifyingUTXOsChangedRequestMessage)
addresses, err := context.ConvertAddressStringsToUTXOsChangedNotificationAddresses(stopNotifyingUTXOsChangedRequest.Addresses)
if err != nil {
errorMessage := appmessage.NewStopNotifyingUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Parsing error: %s", err)
return errorMessage, nil
}
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.StopPropagatingUTXOsChangedNotifications(addresses)
response := appmessage.NewStopNotifyingUTXOsChangedResponseMessage()
return response, nil
}


@@ -2,6 +2,7 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
@@ -25,9 +26,11 @@ func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request ap
err := context.ProtocolManager.AddBlock(domainBlock)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
isProtocolOrRuleError := errors.As(err, &ruleerrors.RuleError{}) || errors.As(err, &protocolerrors.ProtocolError{})
if !isProtocolOrRuleError {
return nil, err
}
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Block rejected. Reason: %s", err),
RejectReason: appmessage.RejectReasonBlockInvalid,


@@ -0,0 +1,27 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"net"
)
// HandleUnban handles the respectively named RPC command
func HandleUnban(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
unbanRequest := request.(*appmessage.UnbanRequestMessage)
ip := net.ParseIP(unbanRequest.IP)
if ip == nil {
errorMessage := &appmessage.UnbanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not parse IP %s", unbanRequest.IP)
return errorMessage, nil
}
err := context.AddressManager.Unban(appmessage.NewNetAddressIPPort(ip, 0, 0))
if err != nil {
errorMessage := &appmessage.UnbanResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not unban IP: %s", err)
return errorMessage, nil
}
response := appmessage.NewUnbanResponseMessage()
return response, nil
}

changelog.txt

@@ -0,0 +1,31 @@
Kaspad v0.9.0 - 2021-03-04
===========================
* Merge big subdags in pick virtual parents (#1574)
* Write in the reject message the tx rejection reason (#1573)
* Add nil checks for protowire (#1570)
* Increase getBlocks limit to 1000 (#1572)
* Return RPC error if getBlock's lowHash doesn't exist (#1569)
* Add default dns-seeder to testnet (#1568)
* Fix utxoindex deserialization (#1566)
* Add pruning point hash to GetBlockDagInfo response (#1565)
* Use EmitUnpopulated so that kaspactl prints all fields, even the default ones (#1561)
* Stop logging an error whenever an RPC/P2P connection is canceled (#1562)
* Cleanup the logger and make it asynchronous (#1524)
* Close all iterators (#1542)
* Add childrenHashes to GetBlock/s RPC commands (#1560)
* Add ScriptPublicKey.Version to RPC (#1559)
* Fix the target block rate to create less bursty mining (#1554)
Kaspad v0.8.10 - 2021-02-25
===========================
* Fix bug where invalid mempool transactions were not removed (#1551)
* Add RPC reconnection to the miner (#1552)
* Remove virtual diff parents - only selectedTip is virtualDiffParent now (#1550)
* Fix UTXO index (#1548)
* Prevent fast failing (#1545)
* Increase the sleep time in kaspaminer when the node is not synced (#1544)
* Disallow header only blocks on RPC, relay and when requesting IBD full blocks (#1537)
* Make templateManager hold a DomainBlock and isSynced bool instead of a GetBlockTemplateResponseMessage (#1538)


@@ -4,7 +4,7 @@ kaspactl is an RPC client for kaspad
## Requirements
Go 1.14 or later.
Go 1.16 or later.
## Installation


@@ -13,6 +13,7 @@ var commandTypes = []reflect.Type{
reflect.TypeOf(protowire.KaspadMessage_GetConnectedPeerInfoRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetPeerAddressesRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetCurrentNetworkRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetInfoRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetBlockRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetBlocksRequest{}),
@@ -32,6 +33,9 @@ var commandTypes = []reflect.Type{
reflect.TypeOf(protowire.KaspadMessage_SubmitTransactionRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetUtxosByAddressesRequest{}),
reflect.TypeOf(protowire.KaspadMessage_BanRequest{}),
reflect.TypeOf(protowire.KaspadMessage_UnbanRequest{}),
}
type commandDescription struct {


@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.14-alpine AS build
FROM golang:1.16-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad


@@ -2,10 +2,11 @@ package main
import (
"fmt"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/server/grpcserver/protowire"
"os"
"time"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/server/grpcserver/protowire"
"github.com/pkg/errors"
"google.golang.org/protobuf/encoding/protojson"
@@ -67,7 +68,7 @@ func postCommand(cfg *configFlags, client *grpcclient.GRPCClient, responseChan c
if err != nil {
printErrorAndExit(fmt.Sprintf("error posting the request to the RPC server: %s", err))
}
responseBytes, err := protojson.Marshal(response)
responseBytes, err := protojson.MarshalOptions{EmitUnpopulated: true}.Marshal(response)
if err != nil {
printErrorAndExit(errors.Wrapf(err, "error parsing the response from the RPC server").Error())
}
@@ -92,6 +93,7 @@ func prettifyResponse(response string) string {
marshalOptions := &protojson.MarshalOptions{}
marshalOptions.Indent = " "
marshalOptions.EmitUnpopulated = true
return marshalOptions.Format(kaspadMessage)
}


@@ -4,7 +4,7 @@ Kaspaminer is a CPU-based miner for kaspad
## Requirements
Go 1.14 or later.
Go 1.16 or later.
## Installation


@@ -5,42 +5,95 @@ import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"github.com/pkg/errors"
"sync"
"sync/atomic"
"time"
)
const minerTimeout = 10 * time.Second
type minerClient struct {
*rpcclient.RPCClient
isReconnecting uint32
clientLock sync.RWMutex
rpcClient *rpcclient.RPCClient
cfg *configFlags
blockAddedNotificationChan chan struct{}
}
func newMinerClient(cfg *configFlags) (*minerClient, error) {
rpcAddress, err := cfg.NetParams().NormalizeRPCServerAddress(cfg.RPCServer)
if err != nil {
return nil, err
}
rpcClient, err := rpcclient.NewRPCClient(rpcAddress)
if err != nil {
return nil, err
}
rpcClient.SetTimeout(minerTimeout)
rpcClient.SetLogger(backendLog, logger.LevelTrace)
func (mc *minerClient) safeRPCClient() *rpcclient.RPCClient {
mc.clientLock.RLock()
defer mc.clientLock.RUnlock()
return mc.rpcClient
}
minerClient := &minerClient{
RPCClient: rpcClient,
blockAddedNotificationChan: make(chan struct{}),
func (mc *minerClient) reconnect() {
swapped := atomic.CompareAndSwapUint32(&mc.isReconnecting, 0, 1)
if !swapped {
return
}
err = rpcClient.RegisterForBlockAddedNotifications(func(_ *appmessage.BlockAddedNotificationMessage) {
defer atomic.StoreUint32(&mc.isReconnecting, 0)
mc.clientLock.Lock()
defer mc.clientLock.Unlock()
retryDuration := time.Second
const maxRetryDuration = time.Minute
log.Infof("Reconnecting RPC connection")
for {
err := mc.connect()
if err == nil {
return
}
if retryDuration < time.Minute {
retryDuration *= 2
} else {
retryDuration = maxRetryDuration
}
log.Errorf("Got error '%s' while reconnecting. Trying again in %s", err, retryDuration)
time.Sleep(retryDuration)
}
}
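Stripped of the RPC details, reconnect is a single-flight retry: the compare-and-swap lets exactly one goroutine run the loop while concurrent callers return immediately, and the wait between attempts grows up to a one-minute cap. A minimal standalone sketch of just that pattern (illustrative, not repository code):

package retry

import (
	"sync/atomic"
	"time"
)

var inFlight uint32

// runOnce retries connect until it succeeds; concurrent callers bail out early
// because only one of them wins the compare-and-swap.
func runOnce(connect func() error) {
	if !atomic.CompareAndSwapUint32(&inFlight, 0, 1) {
		return // a retry loop is already running
	}
	defer atomic.StoreUint32(&inFlight, 0)

	wait := time.Second
	const maxWait = time.Minute
	for connect() != nil {
		time.Sleep(wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait // cap the backoff at one minute
		}
	}
}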
func (mc *minerClient) connect() error {
rpcAddress, err := mc.cfg.NetParams().NormalizeRPCServerAddress(mc.cfg.RPCServer)
if err != nil {
return err
}
mc.rpcClient, err = rpcclient.NewRPCClient(rpcAddress)
if err != nil {
return err
}
mc.rpcClient.SetTimeout(minerTimeout)
mc.rpcClient.SetLogger(backendLog, logger.LevelTrace)
err = mc.rpcClient.RegisterForBlockAddedNotifications(func(_ *appmessage.BlockAddedNotificationMessage) {
select {
case minerClient.blockAddedNotificationChan <- struct{}{}:
case mc.blockAddedNotificationChan <- struct{}{}:
default:
}
})
if err != nil {
return nil, errors.Wrapf(err, "error requesting block-added notifications")
return errors.Wrapf(err, "error requesting block-added notifications")
}
log.Infof("Connected to %s", rpcAddress)
return nil
}
func newMinerClient(cfg *configFlags) (*minerClient, error) {
minerClient := &minerClient{
cfg: cfg,
blockAddedNotificationChan: make(chan struct{}),
}
err := minerClient.connect()
if err != nil {
return nil, err
}
return minerClient, nil


@@ -17,8 +17,9 @@ import (
)
const (
defaultLogFilename = "kaspaminer.log"
defaultErrLogFilename = "kaspaminer_err.log"
defaultLogFilename = "kaspaminer.log"
defaultErrLogFilename = "kaspaminer_err.log"
defaultTargetBlockRateRatio = 2.0
)
var (
@@ -30,13 +31,13 @@ var (
)
type configFlags struct {
ShowVersion bool `short:"V" long:"version" description:"Display version information and exit"`
RPCServer string `short:"s" long:"rpcserver" description:"RPC server to connect to"`
MiningAddr string `long:"miningaddr" description:"Address to mine to"`
NumberOfBlocks uint64 `short:"n" long:"numblocks" description:"Number of blocks to mine. If omitted, will mine until the process is interrupted."`
MineWhenNotSynced bool `long:"mine-when-not-synced" description:"Mine even if the node is not synced with the rest of the network."`
Profile string `long:"profile" description:"Enable HTTP profiling on given port -- NOTE port must be between 1024 and 65536"`
TargetBlocksPerSecond float64 `long:"target-blocks-per-second" description:"Sets a maximum block rate. This flag is for debugging purposes."`
ShowVersion bool `short:"V" long:"version" description:"Display version information and exit"`
RPCServer string `short:"s" long:"rpcserver" description:"RPC server to connect to"`
MiningAddr string `long:"miningaddr" description:"Address to mine to"`
NumberOfBlocks uint64 `short:"n" long:"numblocks" description:"Number of blocks to mine. If omitted, will mine until the process is interrupted."`
MineWhenNotSynced bool `long:"mine-when-not-synced" description:"Mine even if the node is not synced with the rest of the network."`
Profile string `long:"profile" description:"Enable HTTP profiling on given port -- NOTE port must be between 1024 and 65536"`
TargetBlocksPerSecond *float64 `long:"target-blocks-per-second" description:"Sets a maximum block rate. 0 means no limit (the default is 2 * the network's target block rate)"`
config.NetworkFlags
}
@@ -64,6 +65,11 @@ func parseConfig() (*configFlags, error) {
return nil, err
}
if cfg.TargetBlocksPerSecond == nil {
targetBlocksPerSecond := defaultTargetBlockRateRatio / cfg.NetParams().TargetTimePerBlock.Seconds()
cfg.TargetBlocksPerSecond = &targetBlocksPerSecond
}
if cfg.Profile != "" {
profilePort, err := strconv.Atoi(cfg.Profile)
if err != nil || profilePort < 1024 || profilePort > 65535 {
@@ -71,6 +77,10 @@ func parseConfig() (*configFlags, error) {
}
}
if cfg.MiningAddr == "" {
return nil, errors.New("--miningaddr is required")
}
initLog(defaultLogFile, defaultErrLogFile)
return cfg, nil
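Worked example of the default computed above (the 1-second TargetTimePerBlock is an assumption for illustration, not a value read from the network params):

package main

import (
	"fmt"
	"time"
)

func main() {
	const defaultTargetBlockRateRatio = 2.0
	targetTimePerBlock := 1 * time.Second // assumed target block time
	// 2.0 / 1.0 = 2: by default the miner caps itself at twice the network's target block rate.
	fmt.Println(defaultTargetBlockRateRatio / targetTimePerBlock.Seconds())
}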


@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.14-alpine AS build
FROM golang:1.16-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad


@@ -14,6 +14,7 @@ var (
)
func initLog(logFile, errLogFile string) {
log.SetLevel(logger.LevelDebug)
err := backendLog.AddLogFile(logFile, logger.LevelTrace)
if err != nil {
fmt.Fprintf(os.Stderr, "Error adding log file %s as log rotator for level %s: %s", logFile, logger.LevelTrace, err)
@@ -24,4 +25,15 @@ func initLog(logFile, errLogFile string) {
fmt.Fprintf(os.Stderr, "Error adding log file %s as log rotator for level %s: %s", errLogFile, logger.LevelWarn, err)
os.Exit(1)
}
err = backendLog.AddLogWriter(os.Stdout, logger.LevelInfo)
if err != nil {
fmt.Fprintf(os.Stderr, "Error adding stdout to the loggerfor level %s: %s", logger.LevelWarn, err)
os.Exit(1)
}
err = backendLog.Run()
if err != nil {
fmt.Fprintf(os.Stderr, "Error starting the logger: %s ", err)
os.Exit(1)
}
}
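Together with the deferred backendLog.Close() added in the kaspaminer main below, the asynchronous logger's lifecycle implied by this diff is: register the sinks, start the backend, close on exit. A condensed sketch reusing only the calls shown here (error handling simplified; not repository code):

// setUpLogging registers the file and stdout sinks and starts the asynchronous
// backend; the caller is expected to defer backendLog.Close().
func setUpLogging(logFile, errLogFile string) error {
	if err := backendLog.AddLogFile(logFile, logger.LevelTrace); err != nil {
		return err
	}
	if err := backendLog.AddLogFile(errLogFile, logger.LevelWarn); err != nil {
		return err
	}
	if err := backendLog.AddLogWriter(os.Stdout, logger.LevelInfo); err != nil {
		return err
	}
	return backendLog.Run()
}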


@@ -26,6 +26,7 @@ func main() {
fmt.Fprintf(os.Stderr, "Error parsing command-line arguments: %s\n", err)
os.Exit(1)
}
defer backendLog.Close()
// Show version at startup.
log.Infof("Version %s", version.Version())
@@ -39,7 +40,7 @@ func main() {
if err != nil {
panic(errors.Wrap(err, "error connecting to the RPC server"))
}
defer client.Disconnect()
defer client.safeRPCClient().Disconnect()
miningAddr, err := util.DecodeAddress(cfg.MiningAddr, cfg.ActiveNetParams.Prefix)
if err != nil {
@@ -48,7 +49,7 @@ func main() {
doneChan := make(chan struct{})
spawn("mineLoop", func() {
err = mineLoop(client, cfg.NumberOfBlocks, cfg.TargetBlocksPerSecond, cfg.MineWhenNotSynced, miningAddr)
err = mineLoop(client, cfg.NumberOfBlocks, *cfg.TargetBlocksPerSecond, cfg.MineWhenNotSynced, miningAddr)
if err != nil {
panic(errors.Wrap(err, "error in mine loop"))
}


@@ -2,12 +2,13 @@ package main
import (
nativeerrors "errors"
"github.com/kaspanet/kaspad/util/difficulty"
"math/rand"
"sync/atomic"
"time"
"github.com/kaspanet/kaspad/cmd/kaspaminer/templatemanager"
"github.com/kaspanet/kaspad/domain/consensus/model/pow"
"github.com/kaspanet/kaspad/util/difficulty"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
@@ -29,11 +30,19 @@ func mineLoop(client *minerClient, numberOfBlocks uint64, targetBlocksPerSecond
rand.Seed(time.Now().UnixNano()) // Seed the global concurrent-safe random source.
errChan := make(chan error)
templateStopChan := make(chan struct{})
doneChan := make(chan struct{})
spawn("mineLoop-internalLoop", func() {
// We don't want to send router.DefaultMaxMessages blocks at once because there's
// a high chance we'll get disconnected from the node, so we make the channel
// capacity router.DefaultMaxMessages/2 (we give some slack for getBlockTemplate
// requests)
foundBlockChan := make(chan *externalapi.DomainBlock, router.DefaultMaxMessages/2)
spawn("templatesLoop", func() {
templatesLoop(client, miningAddr, errChan)
})
spawn("blocksLoop", func() {
const windowSize = 10
var expectedDurationForWindow time.Duration
var windowExpectedEndTime time.Time
@@ -44,31 +53,37 @@ func mineLoop(client *minerClient, numberOfBlocks uint64, targetBlocksPerSecond
}
blockInWindowIndex := 0
for i := uint64(0); numberOfBlocks == 0 || i < numberOfBlocks; i++ {
sleepTime := 0 * time.Second
foundBlock := make(chan *externalapi.DomainBlock)
mineNextBlock(client, miningAddr, foundBlock, mineWhenNotSynced, templateStopChan, errChan)
block := <-foundBlock
templateStopChan <- struct{}{}
err := handleFoundBlock(client, block)
if err != nil {
errChan <- err
}
for {
foundBlockChan <- mineNextBlock(mineWhenNotSynced)
if hasBlockRateTarget {
blockInWindowIndex++
if blockInWindowIndex == windowSize-1 {
deviation := windowExpectedEndTime.Sub(time.Now())
if deviation > 0 {
log.Infof("Finished to mine %d blocks %s earlier than expected. Sleeping %s to compensate",
windowSize, deviation, deviation)
time.Sleep(deviation)
sleepTime = deviation / windowSize
log.Infof("Finished to mine %d blocks %s earlier than expected. Setting the miner "+
"to sleep %s between blocks to compensate",
windowSize, deviation, sleepTime)
}
blockInWindowIndex = 0
windowExpectedEndTime = time.Now().Add(expectedDurationForWindow)
}
time.Sleep(sleepTime)
}
}
})
spawn("handleFoundBlock", func() {
for i := uint64(0); numberOfBlocks == 0 || i < numberOfBlocks; i++ {
block := <-foundBlockChan
err := handleFoundBlock(client, block)
if err != nil {
errChan <- err
return
}
}
doneChan <- struct{}{}
})
@@ -99,26 +114,15 @@ func logHashRate() {
})
}
func mineNextBlock(client *minerClient, miningAddr util.Address, foundBlock chan *externalapi.DomainBlock, mineWhenNotSynced bool,
templateStopChan chan struct{}, errChan chan error) {
newTemplateChan := make(chan *appmessage.GetBlockTemplateResponseMessage)
spawn("templatesLoop", func() {
templatesLoop(client, miningAddr, newTemplateChan, errChan, templateStopChan)
})
spawn("solveLoop", func() {
solveLoop(newTemplateChan, foundBlock, mineWhenNotSynced)
})
}
func handleFoundBlock(client *minerClient, block *externalapi.DomainBlock) error {
blockHash := consensushashing.BlockHash(block)
log.Infof("Found block %s with parents %s. Submitting to %s", blockHash, block.Header.ParentHashes(), client.Address())
log.Infof("Submitting block %s to %s", blockHash, client.safeRPCClient().Address())
rejectReason, err := client.SubmitBlock(block)
rejectReason, err := client.safeRPCClient().SubmitBlock(block)
if err != nil {
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while submitting block %s to %s: %s", blockHash, client.Address(), err)
log.Warnf("Got timeout while submitting block %s to %s: %s", blockHash, client.safeRPCClient().Address(), err)
client.reconnect()
return nil
}
if rejectReason == appmessage.RejectReasonIsInIBD {
@@ -127,82 +131,88 @@ func handleFoundBlock(client *minerClient, block *externalapi.DomainBlock) error
time.Sleep(waitTime)
return nil
}
return errors.Errorf("Error submitting block %s to %s: %s", blockHash, client.Address(), err)
return errors.Wrapf(err, "Error submitting block %s to %s", blockHash, client.safeRPCClient().Address())
}
return nil
}
func solveBlock(block *externalapi.DomainBlock, stopChan chan struct{}, foundBlock chan *externalapi.DomainBlock) {
targetDifficulty := difficulty.CompactToBig(block.Header.Bits())
headerForMining := block.Header.ToMutable()
initialNonce := rand.Uint64() // Use the global concurrent-safe random source.
for i := initialNonce; i != initialNonce-1; i++ {
select {
case <-stopChan:
return
default:
headerForMining.SetNonce(i)
atomic.AddUint64(&hashesTried, 1)
if pow.CheckProofOfWorkWithTarget(headerForMining, targetDifficulty) {
block.Header = headerForMining.ToImmutable()
foundBlock <- block
return
}
}
}
}
func templatesLoop(client *minerClient, miningAddr util.Address,
newTemplateChan chan *appmessage.GetBlockTemplateResponseMessage, errChan chan error, stopChan chan struct{}) {
getBlockTemplate := func() {
template, err := client.GetBlockTemplate(miningAddr.String())
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while requesting block template from %s: %s", client.Address(), err)
return
} else if err != nil {
errChan <- errors.Errorf("Error getting block template from %s: %s", client.Address(), err)
return
}
newTemplateChan <- template
}
getBlockTemplate()
func mineNextBlock(mineWhenNotSynced bool) *externalapi.DomainBlock {
nonce := rand.Uint64() // Use the global concurrent-safe random source.
for {
select {
case <-stopChan:
close(newTemplateChan)
return
case <-client.blockAddedNotificationChan:
getBlockTemplate()
case <-time.Tick(500 * time.Millisecond):
getBlockTemplate()
nonce++
// For each nonce we try to build a block from the most up-to-date
// block template.
// In the rare case where the nonce space is exhausted for a specific
// block, it'll keep looping the nonce until a new block template
// is discovered.
block := getBlockForMining(mineWhenNotSynced)
targetDifficulty := difficulty.CompactToBig(block.Header.Bits())
headerForMining := block.Header.ToMutable()
headerForMining.SetNonce(nonce)
atomic.AddUint64(&hashesTried, 1)
if pow.CheckProofOfWorkWithTarget(headerForMining, targetDifficulty) {
block.Header = headerForMining.ToImmutable()
log.Infof("Found block %s with parents %s", consensushashing.BlockHash(block), block.Header.ParentHashes())
return block
}
}
}
func solveLoop(newTemplateChan chan *appmessage.GetBlockTemplateResponseMessage, foundBlock chan *externalapi.DomainBlock,
mineWhenNotSynced bool) {
func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
tryCount := 0
var stopOldTemplateSolving chan struct{}
for template := range newTemplateChan {
if !template.IsSynced && !mineWhenNotSynced {
log.Warnf("Kaspad is not synced. Skipping current block template")
const sleepTime = 500 * time.Millisecond
const sleepTimeWhenNotSynced = 5 * time.Second
for {
tryCount++
shouldLog := (tryCount-1)%10 == 0
template, isSynced := templatemanager.Get()
if template == nil {
if shouldLog {
log.Info("Waiting for the initial template")
}
time.Sleep(sleepTime)
continue
}
if !isSynced && !mineWhenNotSynced {
if shouldLog {
log.Warnf("Kaspad is not synced. Skipping current block template")
}
time.Sleep(sleepTimeWhenNotSynced)
continue
}
if stopOldTemplateSolving != nil {
close(stopOldTemplateSolving)
}
stopOldTemplateSolving = make(chan struct{})
block := appmessage.MsgBlockToDomainBlock(template.MsgBlock)
stopOldTemplateSolvingCopy := stopOldTemplateSolving
spawn("solveBlock", func() {
solveBlock(block, stopOldTemplateSolvingCopy, foundBlock)
})
}
if stopOldTemplateSolving != nil {
close(stopOldTemplateSolving)
return template
}
}
func templatesLoop(client *minerClient, miningAddr util.Address, errChan chan error) {
getBlockTemplate := func() {
template, err := client.safeRPCClient().GetBlockTemplate(miningAddr.String())
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while requesting block template from %s: %s", client.safeRPCClient().Address(), err)
client.reconnect()
return
}
if err != nil {
errChan <- errors.Wrapf(err, "Error getting block template from %s", client.safeRPCClient().Address())
return
}
templatemanager.Set(template)
}
getBlockTemplate()
const tickerTime = 500 * time.Millisecond
ticker := time.NewTicker(tickerTime)
for {
select {
case <-client.blockAddedNotificationChan:
getBlockTemplate()
ticker.Reset(tickerTime)
case <-ticker.C:
getBlockTemplate()
}
}
}


@@ -0,0 +1,32 @@
package templatemanager
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"sync"
)
var currentTemplate *externalapi.DomainBlock
var isSynced bool
var lock = &sync.Mutex{}
// Get returns the template to work on
func Get() (*externalapi.DomainBlock, bool) {
lock.Lock()
defer lock.Unlock()
// Shallow copy the block so when the user replaces the header it won't affect the template here.
if currentTemplate == nil {
return nil, false
}
block := *currentTemplate
return &block, isSynced
}
// Set sets the current template to work on
func Set(template *appmessage.GetBlockTemplateResponseMessage) {
block := appmessage.MsgBlockToDomainBlock(template.MsgBlock)
lock.Lock()
defer lock.Unlock()
currentTemplate = block
isSynced = template.IsSynced
}
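Hypothetical caller sketch (mirrors getBlockForMining above and assumes the externalapi and templatemanager imports): because Get hands back a shallow copy, the miner can swap in its own mutated header without touching the template stored here.

// buildCandidateBlock is illustrative only: it mutates the header of its own
// copy of the template, leaving the manager's stored template untouched.
func buildCandidateBlock(nonce uint64) (*externalapi.DomainBlock, bool) {
	block, isSynced := templatemanager.Get()
	if block == nil {
		return nil, false
	}
	headerForMining := block.Header.ToMutable()
	headerForMining.SetNonce(nonce)
	block.Header = headerForMining.ToImmutable() // only the local copy changes
	return block, isSynced
}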


@@ -10,7 +10,7 @@ It is capable of generating wallet key-pairs, printing a wallet's current balanc
## Requirements
Go 1.14 or later.
Go 1.16 or later.
## Installation


@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.14-alpine AS build
FROM golang:1.16-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad
@@ -16,9 +16,12 @@ RUN go get -u golang.org/x/lint/golint \
COPY go.mod .
COPY go.sum .
# Cache kaspad dependencies
RUN go mod download
COPY . .
RUN NO_PARALLEL=1 ./build_and_test.sh
RUN ./build_and_test.sh
# --- multistage docker build: stage #2: runtime image
FROM alpine
@@ -27,7 +30,7 @@ WORKDIR /app
RUN apk add --no-cache ca-certificates tini
COPY --from=build /go/src/github.com/kaspanet/kaspad/kaspad /app/
COPY --from=build /go/src/github.com/kaspanet/kaspad/sample-kaspad.conf /app/
COPY --from=build /go/src/github.com/kaspanet/kaspad/infrastructure/config/sample-kaspad.conf /app/
USER nobody
ENTRYPOINT [ "/sbin/tini", "--" ]


@@ -157,6 +157,15 @@ func (s *consensus) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalap
return blockInfo, nil
}
func (s *consensus) GetBlockChildren(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
blockRelation, err := s.blockRelationStore.BlockRelation(s.databaseContext, blockHash)
if err != nil {
return nil, err
}
return blockRelation.Children, nil
}
func (s *consensus) GetBlockAcceptanceData(blockHash *externalapi.DomainHash) (externalapi.AcceptanceData, error) {
s.lock.Lock()
defer s.lock.Unlock()
@@ -223,6 +232,30 @@ func (s *consensus) GetPruningPointUTXOs(expectedPruningPointHash *externalapi.D
return pruningPointUTXOs, nil
}
func (s *consensus) GetVirtualUTXOs(expectedVirtualParents []*externalapi.DomainHash,
fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {
s.lock.Lock()
defer s.lock.Unlock()
virtualParents, err := s.dagTopologyManager.Parents(model.VirtualBlockHash)
if err != nil {
return nil, err
}
if !externalapi.HashesEqual(expectedVirtualParents, virtualParents) {
return nil, errors.Wrapf(ruleerrors.ErrGetVirtualUTXOsWrongVirtualParents, "expected virtual parents %s but got %s",
expectedVirtualParents,
virtualParents)
}
virtualUTXOs, err := s.consensusStateStore.VirtualUTXOs(s.databaseContext, fromOutpoint, limit)
if err != nil {
return nil, err
}
return virtualUTXOs, nil
}
func (s *consensus) PruningPoint() (*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()
@@ -403,3 +436,15 @@ func (s *consensus) GetHeadersSelectedTip() (*externalapi.DomainHash, error) {
return s.headersSelectedTipStore.HeadersSelectedTip(s.databaseContext)
}
func (s *consensus) Anticone(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()
err := s.validateBlockHashExists(blockHash)
if err != nil {
return nil, err
}
return s.dagTraversalManager.Anticone(blockHash)
}


@@ -0,0 +1,21 @@
package consensus
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"math/big"
"time"
)
// GHOSTDAGManagerConstructor is the function signature for a constructor of a type implementing model.GHOSTDAGManager
type GHOSTDAGManagerConstructor func(model.DBReader, model.DAGTopologyManager,
model.GHOSTDAGDataStore, model.BlockHeaderStore, model.KType) model.GHOSTDAGManager
// DifficultyManagerConstructor is the function signature for a constructor of a type implementing model.DifficultyManager
type DifficultyManagerConstructor func(model.DBReader, model.GHOSTDAGManager, model.GHOSTDAGDataStore,
model.BlockHeaderStore, model.DAGTopologyManager, model.DAGTraversalManager, *big.Int, int, bool, time.Duration,
*externalapi.DomainHash) model.DifficultyManager
// PastMedianTimeManagerConstructor is the function signature for a constructor of a type implementing model.PastMedianTimeManager
type PastMedianTimeManagerConstructor func(int, model.DBReader, model.DAGTraversalManager, model.BlockHeaderStore,
model.GHOSTDAGDataStore) model.PastMedianTimeManager


@@ -3,25 +3,40 @@ package database
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/pkg/errors"
)
type dbCursor struct {
cursor database.Cursor
cursor database.Cursor
isClosed bool
}
func (d dbCursor) Next() bool {
if d.isClosed {
panic("Tried using a closed DBCursor")
}
return d.cursor.Next()
}
func (d dbCursor) First() bool {
if d.isClosed {
panic("Tried using a closed DBCursor")
}
return d.cursor.First()
}
func (d dbCursor) Seek(key model.DBKey) error {
if d.isClosed {
return errors.New("Tried using a closed DBCursor")
}
return d.cursor.Seek(dbKeyToDatabaseKey(key))
}
func (d dbCursor) Key() (model.DBKey, error) {
if d.isClosed {
return nil, errors.New("Tried using a closed DBCursor")
}
key, err := d.cursor.Key()
if err != nil {
return nil, err
@@ -31,11 +46,23 @@ func (d dbCursor) Key() (model.DBKey, error) {
}
func (d dbCursor) Value() ([]byte, error) {
if d.isClosed {
return nil, errors.New("Tried using a closed DBCursor")
}
return d.cursor.Value()
}
func (d dbCursor) Close() error {
return d.cursor.Close()
if d.isClosed {
return errors.New("Tried using a closed DBCursor")
}
d.isClosed = true
err := d.cursor.Close()
if err != nil {
return err
}
d.cursor = nil
return nil
}
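An illustrative caller for the guarded cursor above (hypothetical helper, assuming the model.DBCursor interface shown in this file): the cursor is drained and closed exactly once, and a second Close would now return an error instead of reaching the already-freed underlying cursor.

// collectValues drains a cursor and closes it; usage sketch, not repository code.
func collectValues(cursor model.DBCursor) ([][]byte, error) {
	defer cursor.Close()

	var values [][]byte
	for ok := cursor.First(); ok; ok = cursor.Next() {
		value, err := cursor.Value()
		if err != nil {
			return nil, err
		}
		values = append(values, value)
	}
	return values, nil
}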
func newDBCursor(cursor database.Cursor) model.DBCursor {


@@ -1473,100 +1473,6 @@ func (x *DbUtxoDiff) GetToRemove() []*DbUtxoCollectionItem {
return nil
}
type DbPruningPointUTXOSetBytes struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Bytes []byte `protobuf:"bytes,1,opt,name=bytes,proto3" json:"bytes,omitempty"`
}
func (x *DbPruningPointUTXOSetBytes) Reset() {
*x = DbPruningPointUTXOSetBytes{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[24]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DbPruningPointUTXOSetBytes) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DbPruningPointUTXOSetBytes) ProtoMessage() {}
func (x *DbPruningPointUTXOSetBytes) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[24]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DbPruningPointUTXOSetBytes.ProtoReflect.Descriptor instead.
func (*DbPruningPointUTXOSetBytes) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{24}
}
func (x *DbPruningPointUTXOSetBytes) GetBytes() []byte {
if x != nil {
return x.Bytes
}
return nil
}
type DbHeaderTips struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Tips []*DbHash `protobuf:"bytes,1,rep,name=tips,proto3" json:"tips,omitempty"`
}
func (x *DbHeaderTips) Reset() {
*x = DbHeaderTips{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[25]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DbHeaderTips) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DbHeaderTips) ProtoMessage() {}
func (x *DbHeaderTips) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[25]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DbHeaderTips.ProtoReflect.Descriptor instead.
func (*DbHeaderTips) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{25}
}
func (x *DbHeaderTips) GetTips() []*DbHash {
if x != nil {
return x.Tips
}
return nil
}
type DbTips struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -1578,7 +1484,7 @@ type DbTips struct {
func (x *DbTips) Reset() {
*x = DbTips{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[26]
mi := &file_dbobjects_proto_msgTypes[24]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1591,7 +1497,7 @@ func (x *DbTips) String() string {
func (*DbTips) ProtoMessage() {}
func (x *DbTips) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[26]
mi := &file_dbobjects_proto_msgTypes[24]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1604,7 +1510,7 @@ func (x *DbTips) ProtoReflect() protoreflect.Message {
// Deprecated: Use DbTips.ProtoReflect.Descriptor instead.
func (*DbTips) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{26}
return file_dbobjects_proto_rawDescGZIP(), []int{24}
}
func (x *DbTips) GetTips() []*DbHash {
@@ -1614,53 +1520,6 @@ func (x *DbTips) GetTips() []*DbHash {
return nil
}
type DbVirtualDiffParents struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
VirtualDiffParents []*DbHash `protobuf:"bytes,1,rep,name=virtualDiffParents,proto3" json:"virtualDiffParents,omitempty"`
}
func (x *DbVirtualDiffParents) Reset() {
*x = DbVirtualDiffParents{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[27]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DbVirtualDiffParents) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DbVirtualDiffParents) ProtoMessage() {}
func (x *DbVirtualDiffParents) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[27]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DbVirtualDiffParents.ProtoReflect.Descriptor instead.
func (*DbVirtualDiffParents) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{27}
}
func (x *DbVirtualDiffParents) GetVirtualDiffParents() []*DbHash {
if x != nil {
return x.VirtualDiffParents
}
return nil
}
type DbBlockCount struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -1672,7 +1531,7 @@ type DbBlockCount struct {
func (x *DbBlockCount) Reset() {
*x = DbBlockCount{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[28]
mi := &file_dbobjects_proto_msgTypes[25]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1685,7 +1544,7 @@ func (x *DbBlockCount) String() string {
func (*DbBlockCount) ProtoMessage() {}
func (x *DbBlockCount) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[28]
mi := &file_dbobjects_proto_msgTypes[25]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1698,7 +1557,7 @@ func (x *DbBlockCount) ProtoReflect() protoreflect.Message {
// Deprecated: Use DbBlockCount.ProtoReflect.Descriptor instead.
func (*DbBlockCount) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{28}
return file_dbobjects_proto_rawDescGZIP(), []int{25}
}
func (x *DbBlockCount) GetCount() uint64 {
@@ -1719,7 +1578,7 @@ type DbBlockHeaderCount struct {
func (x *DbBlockHeaderCount) Reset() {
*x = DbBlockHeaderCount{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[29]
mi := &file_dbobjects_proto_msgTypes[26]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1732,7 +1591,7 @@ func (x *DbBlockHeaderCount) String() string {
func (*DbBlockHeaderCount) ProtoMessage() {}
func (x *DbBlockHeaderCount) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[29]
mi := &file_dbobjects_proto_msgTypes[26]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1745,7 +1604,7 @@ func (x *DbBlockHeaderCount) ProtoReflect() protoreflect.Message {
// Deprecated: Use DbBlockHeaderCount.ProtoReflect.Descriptor instead.
func (*DbBlockHeaderCount) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{29}
return file_dbobjects_proto_rawDescGZIP(), []int{26}
}
func (x *DbBlockHeaderCount) GetCount() uint64 {
@@ -1981,32 +1840,19 @@ var file_dbobjects_proto_rawDesc = []byte{
0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x73,
0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x55,
0x74, 0x78, 0x6f, 0x43, 0x6f, 0x6c, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x74, 0x65,
0x6d, 0x52, 0x08, 0x74, 0x6f, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x22, 0x32, 0x0a, 0x1a, 0x44,
0x62, 0x50, 0x72, 0x75, 0x6e, 0x69, 0x6e, 0x67, 0x50, 0x6f, 0x69, 0x6e, 0x74, 0x55, 0x54, 0x58,
0x4f, 0x53, 0x65, 0x74, 0x42, 0x79, 0x74, 0x65, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x62, 0x79, 0x74,
0x65, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x62, 0x79, 0x74, 0x65, 0x73, 0x22,
0x39, 0x0a, 0x0c, 0x44, 0x62, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x54, 0x69, 0x70, 0x73, 0x12,
0x29, 0x0a, 0x04, 0x74, 0x69, 0x70, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e,
0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62,
0x48, 0x61, 0x73, 0x68, 0x52, 0x04, 0x74, 0x69, 0x70, 0x73, 0x22, 0x33, 0x0a, 0x06, 0x44, 0x62,
0x54, 0x69, 0x70, 0x73, 0x12, 0x29, 0x0a, 0x04, 0x74, 0x69, 0x70, 0x73, 0x18, 0x01, 0x20, 0x03,
0x28, 0x0b, 0x32, 0x15, 0x2e, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x48, 0x61, 0x73, 0x68, 0x52, 0x04, 0x74, 0x69, 0x70, 0x73, 0x22,
0x5d, 0x0a, 0x14, 0x44, 0x62, 0x56, 0x69, 0x72, 0x74, 0x75, 0x61, 0x6c, 0x44, 0x69, 0x66, 0x66,
0x50, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x73, 0x12, 0x45, 0x0a, 0x12, 0x76, 0x69, 0x72, 0x74, 0x75,
0x61, 0x6c, 0x44, 0x69, 0x66, 0x66, 0x50, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x01, 0x20,
0x6d, 0x52, 0x08, 0x74, 0x6f, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x22, 0x33, 0x0a, 0x06, 0x44,
0x62, 0x54, 0x69, 0x70, 0x73, 0x12, 0x29, 0x0a, 0x04, 0x74, 0x69, 0x70, 0x73, 0x18, 0x01, 0x20,
0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74,
0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x48, 0x61, 0x73, 0x68, 0x52, 0x12, 0x76, 0x69, 0x72, 0x74,
0x75, 0x61, 0x6c, 0x44, 0x69, 0x66, 0x66, 0x50, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x73, 0x22, 0x24,
0x0a, 0x0c, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x14,
0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63,
0x6f, 0x75, 0x6e, 0x74, 0x22, 0x2a, 0x0a, 0x12, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x48,
0x65, 0x61, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f,
0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x42, 0x2a, 0x5a, 0x28, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6b,
0x61, 0x73, 0x70, 0x61, 0x6e, 0x65, 0x74, 0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x64, 0x2f, 0x73,
0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x62, 0x06, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x33,
0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x48, 0x61, 0x73, 0x68, 0x52, 0x04, 0x74, 0x69, 0x70, 0x73,
0x22, 0x24, 0x0a, 0x0c, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x43, 0x6f, 0x75, 0x6e, 0x74,
0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52,
0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0x2a, 0x0a, 0x12, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63,
0x6b, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x14, 0x0a, 0x05,
0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75,
0x6e, 0x74, 0x42, 0x2a, 0x5a, 0x28, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x6e, 0x65, 0x74, 0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x64,
0x2f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x62, 0x06,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -2021,7 +1867,7 @@ func file_dbobjects_proto_rawDescGZIP() []byte {
return file_dbobjects_proto_rawDescData
}
var file_dbobjects_proto_msgTypes = make([]protoimpl.MessageInfo, 30)
var file_dbobjects_proto_msgTypes = make([]protoimpl.MessageInfo, 27)
var file_dbobjects_proto_goTypes = []interface{}{
(*DbBlock)(nil), // 0: serialization.DbBlock
(*DbBlockHeader)(nil), // 1: serialization.DbBlockHeader
@@ -2047,12 +1893,9 @@ var file_dbobjects_proto_goTypes = []interface{}{
(*DbReachabilityData)(nil), // 21: serialization.DbReachabilityData
(*DbReachabilityInterval)(nil), // 22: serialization.DbReachabilityInterval
(*DbUtxoDiff)(nil), // 23: serialization.DbUtxoDiff
(*DbPruningPointUTXOSetBytes)(nil), // 24: serialization.DbPruningPointUTXOSetBytes
(*DbHeaderTips)(nil), // 25: serialization.DbHeaderTips
(*DbTips)(nil), // 26: serialization.DbTips
(*DbVirtualDiffParents)(nil), // 27: serialization.DbVirtualDiffParents
(*DbBlockCount)(nil), // 28: serialization.DbBlockCount
(*DbBlockHeaderCount)(nil), // 29: serialization.DbBlockHeaderCount
(*DbTips)(nil), // 24: serialization.DbTips
(*DbBlockCount)(nil), // 25: serialization.DbBlockCount
(*DbBlockHeaderCount)(nil), // 26: serialization.DbBlockHeaderCount
}
var file_dbobjects_proto_depIdxs = []int32{
1, // 0: serialization.DbBlock.header:type_name -> serialization.DbBlockHeader
@@ -2090,14 +1933,12 @@ var file_dbobjects_proto_depIdxs = []int32{
2, // 32: serialization.DbReachabilityData.futureCoveringSet:type_name -> serialization.DbHash
18, // 33: serialization.DbUtxoDiff.toAdd:type_name -> serialization.DbUtxoCollectionItem
18, // 34: serialization.DbUtxoDiff.toRemove:type_name -> serialization.DbUtxoCollectionItem
2, // 35: serialization.DbHeaderTips.tips:type_name -> serialization.DbHash
2, // 36: serialization.DbTips.tips:type_name -> serialization.DbHash
2, // 37: serialization.DbVirtualDiffParents.virtualDiffParents:type_name -> serialization.DbHash
38, // [38:38] is the sub-list for method output_type
38, // [38:38] is the sub-list for method input_type
38, // [38:38] is the sub-list for extension type_name
38, // [38:38] is the sub-list for extension extendee
0, // [0:38] is the sub-list for field type_name
2, // 35: serialization.DbTips.tips:type_name -> serialization.DbHash
36, // [36:36] is the sub-list for method output_type
36, // [36:36] is the sub-list for method input_type
36, // [36:36] is the sub-list for extension type_name
36, // [36:36] is the sub-list for extension extendee
0, // [0:36] is the sub-list for field type_name
}
func init() { file_dbobjects_proto_init() }
@@ -2395,30 +2236,6 @@ func file_dbobjects_proto_init() {
}
}
file_dbobjects_proto_msgTypes[24].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbPruningPointUTXOSetBytes); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_dbobjects_proto_msgTypes[25].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbHeaderTips); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_dbobjects_proto_msgTypes[26].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbTips); i {
case 0:
return &v.state
@@ -2430,19 +2247,7 @@ func file_dbobjects_proto_init() {
return nil
}
}
file_dbobjects_proto_msgTypes[27].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbVirtualDiffParents); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_dbobjects_proto_msgTypes[28].Exporter = func(v interface{}, i int) interface{} {
file_dbobjects_proto_msgTypes[25].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbBlockCount); i {
case 0:
return &v.state
@@ -2454,7 +2259,7 @@ func file_dbobjects_proto_init() {
return nil
}
}
file_dbobjects_proto_msgTypes[29].Exporter = func(v interface{}, i int) interface{} {
file_dbobjects_proto_msgTypes[26].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbBlockHeaderCount); i {
case 0:
return &v.state
@@ -2473,7 +2278,7 @@ func file_dbobjects_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_dbobjects_proto_rawDesc,
NumEnums: 0,
NumMessages: 30,
NumMessages: 27,
NumExtensions: 0,
NumServices: 0,
},


@@ -139,22 +139,10 @@ message DbUtxoDiff {
repeated DbUtxoCollectionItem toRemove = 2;
}
message DbPruningPointUTXOSetBytes {
bytes bytes = 1;
}
message DbHeaderTips {
repeated DbHash tips = 1;
}
message DbTips {
repeated DbHash tips = 1;
}
message DbVirtualDiffParents {
repeated DbHash virtualDiffParents = 1;
}
message DbBlockCount {
uint64 count = 1;
}


@@ -1,17 +0,0 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// HeaderTipsToDBHeaderTips converts a slice of hashes to DbHeaderTips
func HeaderTipsToDBHeaderTips(tips []*externalapi.DomainHash) *DbHeaderTips {
return &DbHeaderTips{
Tips: DomainHashesToDbHashes(tips),
}
}
// DBHeaderTipsToHeaderTips converts DbHeaderTips to a slice of hashes
func DBHeaderTipsToHeaderTips(dbHeaderTips *DbHeaderTips) ([]*externalapi.DomainHash, error) {
return DbHashesToDomainHashes(dbHeaderTips.Tips)
}


@@ -1,15 +1,15 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
)
func utxoCollectionToDBUTXOCollection(utxoCollection model.UTXOCollection) ([]*DbUtxoCollectionItem, error) {
func utxoCollectionToDBUTXOCollection(utxoCollection externalapi.UTXOCollection) ([]*DbUtxoCollectionItem, error) {
items := make([]*DbUtxoCollectionItem, utxoCollection.Len())
i := 0
utxoIterator := utxoCollection.Iterator()
defer utxoIterator.Close()
for ok := utxoIterator.First(); ok; ok = utxoIterator.Next() {
outpoint, entry, err := utxoIterator.Get()
if err != nil {
@@ -26,7 +26,7 @@ func utxoCollectionToDBUTXOCollection(utxoCollection model.UTXOCollection) ([]*D
return items, nil
}
func dbUTXOCollectionToUTXOCollection(items []*DbUtxoCollectionItem) (model.UTXOCollection, error) {
func dbUTXOCollectionToUTXOCollection(items []*DbUtxoCollectionItem) (externalapi.UTXOCollection, error) {
utxoMap := make(map[externalapi.DomainOutpoint]externalapi.UTXOEntry, len(items))
for _, item := range items {
outpoint, err := DbOutpointToDomainOutpoint(item.Outpoint)


@@ -1,12 +1,12 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
)
// UTXODiffToDBUTXODiff converts UTXODiff to DbUtxoDiff
func UTXODiffToDBUTXODiff(diff model.UTXODiff) (*DbUtxoDiff, error) {
func UTXODiffToDBUTXODiff(diff externalapi.UTXODiff) (*DbUtxoDiff, error) {
toAdd, err := utxoCollectionToDBUTXOCollection(diff.ToAdd())
if err != nil {
return nil, err
@@ -24,7 +24,7 @@ func UTXODiffToDBUTXODiff(diff model.UTXODiff) (*DbUtxoDiff, error) {
}
// DBUTXODiffToUTXODiff converts DbUtxoDiff to UTXODiff
func DBUTXODiffToUTXODiff(diff *DbUtxoDiff) (model.UTXODiff, error) {
func DBUTXODiffToUTXODiff(diff *DbUtxoDiff) (externalapi.UTXODiff, error) {
toAdd, err := dbUTXOCollectionToUTXOCollection(diff.ToAdd)
if err != nil {
return nil, err

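With the converters above now operating on externalapi.UTXODiff, a round trip through the generated DbUtxoDiff message looks roughly as follows. This is a hypothetical sketch, not part of the diff: the package name and helper are illustrative, and the input diff is assumed to come from consensus code that is not shown here.

package serializationsketch

import (
	"github.com/golang/protobuf/proto"
	"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
	"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// utxoDiffRoundTrip marshals a UTXO diff through the DbUtxoDiff converters and
// back. It is a hypothetical consistency check, not part of the diff.
func utxoDiffRoundTrip(diff externalapi.UTXODiff) (externalapi.UTXODiff, error) {
	dbDiff, err := serialization.UTXODiffToDBUTXODiff(diff)
	if err != nil {
		return nil, err
	}
	serialized, err := proto.Marshal(dbDiff)
	if err != nil {
		return nil, err
	}
	deserialized := &serialization.DbUtxoDiff{}
	err = proto.Unmarshal(serialized, deserialized)
	if err != nil {
		return nil, err
	}
	return serialization.DBUTXODiffToUTXODiff(deserialized)
}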

@@ -1,17 +0,0 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// VirtualDiffParentsToDBHeaderVirtualDiffParents converts a slice of hashes to DbVirtualDiffParents
func VirtualDiffParentsToDBHeaderVirtualDiffParents(tips []*externalapi.DomainHash) *DbVirtualDiffParents {
return &DbVirtualDiffParents{
VirtualDiffParents: DomainHashesToDbHashes(tips),
}
}
// DBVirtualDiffParentsToVirtualDiffParents converts DbVirtualDiffParents to a slice of hashes
func DBVirtualDiffParentsToVirtualDiffParents(dbVirtualDiffParents *DbVirtualDiffParents) ([]*externalapi.DomainHash, error) {
return DbHashesToDomainHashes(dbVirtualDiffParents.VirtualDiffParents)
}


@@ -19,11 +19,11 @@ type acceptanceDataStore struct {
}
// New instantiates a new AcceptanceDataStore
func New(cacheSize int) model.AcceptanceDataStore {
func New(cacheSize int, preallocate bool) model.AcceptanceDataStore {
return &acceptanceDataStore{
staging: make(map[externalapi.DomainHash]externalapi.AcceptanceData),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize),
cache: lrucache.New(cacheSize, preallocate),
}
}
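The preallocate flag added to these store constructors is forwarded to lrucache.New, whose internals are not part of this diff. Below is a minimal sketch, under that assumption, of how such a constructor might pre-size its backing map when preallocate is true; the package name, field layout, and the omission of eviction logic are illustrative only.

package lrucachesketch

import (
	"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// LRUCache is an illustrative stand-in for the cache used by the stores above;
// the real lrucache package is not shown in this diff.
type LRUCache struct {
	cache    map[externalapi.DomainHash]interface{}
	capacity int
}

// New pre-sizes the backing map when preallocate is true, trading memory up
// front for fewer map growths later. Eviction logic is omitted from this sketch.
func New(capacity int, preallocate bool) *LRUCache {
	var cache map[externalapi.DomainHash]interface{}
	if preallocate {
		cache = make(map[externalapi.DomainHash]interface{}, capacity)
	} else {
		cache = make(map[externalapi.DomainHash]interface{})
	}
	return &LRUCache{cache: cache, capacity: capacity}
}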


@@ -21,11 +21,11 @@ type blockHeaderStore struct {
}
// New instantiates a new BlockHeaderStore
func New(dbContext model.DBReader, cacheSize int) (model.BlockHeaderStore, error) {
func New(dbContext model.DBReader, cacheSize int, preallocate bool) (model.BlockHeaderStore, error) {
blockHeaderStore := &blockHeaderStore{
staging: make(map[externalapi.DomainHash]externalapi.BlockHeader),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize),
cache: lrucache.New(cacheSize, preallocate),
}
err := blockHeaderStore.initializeCount(dbContext)


@@ -18,10 +18,10 @@ type blockRelationStore struct {
}
// New instantiates a new BlockRelationStore
func New(cacheSize int) model.BlockRelationStore {
func New(cacheSize int, preallocate bool) model.BlockRelationStore {
return &blockRelationStore{
staging: make(map[externalapi.DomainHash]*model.BlockRelations),
cache: lrucache.New(cacheSize),
cache: lrucache.New(cacheSize, preallocate),
}
}


@@ -18,10 +18,10 @@ type blockStatusStore struct {
}
// New instantiates a new BlockStatusStore
func New(cacheSize int) model.BlockStatusStore {
func New(cacheSize int, preallocate bool) model.BlockStatusStore {
return &blockStatusStore{
staging: make(map[externalapi.DomainHash]externalapi.BlockStatus),
cache: lrucache.New(cacheSize),
cache: lrucache.New(cacheSize, preallocate),
}
}


@@ -7,6 +7,7 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/pkg/errors"
)
var bucket = database.MakeBucket([]byte("blocks"))
@@ -21,11 +22,11 @@ type blockStore struct {
}
// New instantiates a new BlockStore
func New(dbContext model.DBReader, cacheSize int) (model.BlockStore, error) {
func New(dbContext model.DBReader, cacheSize int, preallocate bool) (model.BlockStore, error) {
blockStore := &blockStore{
staging: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
toDelete: make(map[externalapi.DomainHash]struct{}),
cache: lrucache.New(cacheSize),
cache: lrucache.New(cacheSize, preallocate),
}
err := blockStore.initializeCount(dbContext)
@@ -212,3 +213,57 @@ func (bs *blockStore) serializeBlockCount(count uint64) ([]byte, error) {
dbBlockCount := &serialization.DbBlockCount{Count: count}
return proto.Marshal(dbBlockCount)
}
type allBlockHashesIterator struct {
cursor model.DBCursor
isClosed bool
}
func (a allBlockHashesIterator) First() bool {
if a.isClosed {
panic("Tried using a closed AllBlockHashesIterator")
}
return a.cursor.First()
}
func (a allBlockHashesIterator) Next() bool {
if a.isClosed {
panic("Tried using a closed AllBlockHashesIterator")
}
return a.cursor.Next()
}
func (a allBlockHashesIterator) Get() (*externalapi.DomainHash, error) {
if a.isClosed {
return nil, errors.New("Tried using a closed AllBlockHashesIterator")
}
key, err := a.cursor.Key()
if err != nil {
return nil, err
}
blockHashBytes := key.Suffix()
return externalapi.NewDomainHashFromByteSlice(blockHashBytes)
}
func (a allBlockHashesIterator) Close() error {
if a.isClosed {
return errors.New("Tried using a closed AllBlockHashesIterator")
}
a.isClosed = true
err := a.cursor.Close()
if err != nil {
return err
}
a.cursor = nil
return nil
}
func (bs *blockStore) AllBlockHashesIterator(dbContext model.DBReader) (model.BlockIterator, error) {
cursor, err := dbContext.Cursor(bucket)
if err != nil {
return nil, err
}
return &allBlockHashesIterator{cursor: cursor}, nil
}
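The new AllBlockHashesIterator follows a First/Next/Get/Close cursor pattern. The following helper is a hypothetical caller, not part of the diff; its local interface simply mirrors the methods implemented above.

package blockstoresketch

import (
	"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// blockHashIterator lists only the methods the helper below uses; the
// allBlockHashesIterator added above provides all of them.
type blockHashIterator interface {
	First() bool
	Next() bool
	Get() (*externalapi.DomainHash, error)
	Close() error
}

// collectAllBlockHashes drains an iterator into a slice and closes it.
// It is a hypothetical caller, not part of the diff.
func collectAllBlockHashes(iterator blockHashIterator) ([]*externalapi.DomainHash, error) {
	defer iterator.Close()

	var hashes []*externalapi.DomainHash
	for ok := iterator.First(); ok; ok = iterator.Next() {
		blockHash, err := iterator.Get()
		if err != nil {
			return nil, err
		}
		hashes = append(hashes, blockHash)
	}
	return hashes, nil
}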


@@ -8,27 +8,24 @@ import (
// consensusStateStore represents a store for the current consensus state
type consensusStateStore struct {
tipsStaging []*externalapi.DomainHash
virtualDiffParentsStaging []*externalapi.DomainHash
virtualUTXODiffStaging model.UTXODiff
tipsStaging []*externalapi.DomainHash
virtualUTXODiffStaging externalapi.UTXODiff
virtualUTXOSetCache *utxolrucache.LRUCache
tipsCache []*externalapi.DomainHash
virtualDiffParentsCache []*externalapi.DomainHash
tipsCache []*externalapi.DomainHash
}
// New instantiates a new ConsensusStateStore
func New(utxoSetCacheSize int) model.ConsensusStateStore {
func New(utxoSetCacheSize int, preallocate bool) model.ConsensusStateStore {
return &consensusStateStore{
virtualUTXOSetCache: utxolrucache.New(utxoSetCacheSize),
virtualUTXOSetCache: utxolrucache.New(utxoSetCacheSize, preallocate),
}
}
func (css *consensusStateStore) Discard() {
css.tipsStaging = nil
css.virtualUTXODiffStaging = nil
css.virtualDiffParentsStaging = nil
}
func (css *consensusStateStore) Commit(dbTx model.DBTransaction) error {
@@ -36,10 +33,6 @@ func (css *consensusStateStore) Commit(dbTx model.DBTransaction) error {
if err != nil {
return err
}
err = css.commitVirtualDiffParents(dbTx)
if err != nil {
return err
}
err = css.commitVirtualUTXODiff(dbTx)
if err != nil {
@@ -53,6 +46,5 @@ func (css *consensusStateStore) Commit(dbTx model.DBTransaction) error {
func (css *consensusStateStore) IsStaged() bool {
return css.tipsStaging != nil ||
css.virtualDiffParentsStaging != nil ||
css.virtualUTXODiffStaging != nil
}


@@ -19,7 +19,7 @@ func utxoKey(outpoint *externalapi.DomainOutpoint) (model.DBKey, error) {
return utxoSetBucket.Key(serializedOutpoint), nil
}
func (css *consensusStateStore) StageVirtualUTXODiff(virtualUTXODiff model.UTXODiff) {
func (css *consensusStateStore) StageVirtualUTXODiff(virtualUTXODiff externalapi.UTXODiff) {
css.virtualUTXODiffStaging = virtualUTXODiff
}
@@ -37,6 +37,7 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
}
toRemoveIterator := css.virtualUTXODiffStaging.ToRemove().Iterator()
defer toRemoveIterator.Close()
for ok := toRemoveIterator.First(); ok; ok = toRemoveIterator.Next() {
toRemoveOutpoint, _, err := toRemoveIterator.Get()
if err != nil {
@@ -56,6 +57,7 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
}
toAddIterator := css.virtualUTXODiffStaging.ToAdd().Iterator()
defer toAddIterator.Close()
for ok := toAddIterator.First(); ok; ok = toAddIterator.Next() {
toAddOutpoint, toAddEntry, err := toAddIterator.Get()
if err != nil {
@@ -149,7 +151,45 @@ func (css *consensusStateStore) hasUTXOByOutpointFromStagedVirtualUTXODiff(dbCon
return dbContext.Has(key)
}
func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader) (model.ReadOnlyUTXOSetIterator, error) {
func (css *consensusStateStore) VirtualUTXOs(dbContext model.DBReader,
fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {
cursor, err := dbContext.Cursor(utxoSetBucket)
if err != nil {
return nil, err
}
defer cursor.Close()
if fromOutpoint != nil {
serializedFromOutpoint, err := serializeOutpoint(fromOutpoint)
if err != nil {
return nil, err
}
seekKey := utxoSetBucket.Key(serializedFromOutpoint)
err = cursor.Seek(seekKey)
if err != nil {
return nil, err
}
}
iterator := newCursorUTXOSetIterator(cursor)
defer iterator.Close()
outpointAndUTXOEntryPairs := make([]*externalapi.OutpointAndUTXOEntryPair, 0, limit)
for len(outpointAndUTXOEntryPairs) < limit && iterator.Next() {
outpoint, utxoEntry, err := iterator.Get()
if err != nil {
return nil, err
}
outpointAndUTXOEntryPairs = append(outpointAndUTXOEntryPairs, &externalapi.OutpointAndUTXOEntryPair{
Outpoint: outpoint,
UTXOEntry: utxoEntry,
})
}
return outpointAndUTXOEntryPairs, nil
}
func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader) (externalapi.ReadOnlyUTXOSetIterator, error) {
cursor, err := dbContext.Cursor(utxoSetBucket)
if err != nil {
return nil, err
@@ -164,22 +204,32 @@ func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader)
}
type utxoSetIterator struct {
cursor model.DBCursor
cursor model.DBCursor
isClosed bool
}
func newCursorUTXOSetIterator(cursor model.DBCursor) model.ReadOnlyUTXOSetIterator {
func newCursorUTXOSetIterator(cursor model.DBCursor) externalapi.ReadOnlyUTXOSetIterator {
return &utxoSetIterator{cursor: cursor}
}
func (u utxoSetIterator) First() bool {
if u.isClosed {
panic("Tried using a closed utxoSetIterator")
}
return u.cursor.First()
}
func (u utxoSetIterator) Next() bool {
if u.isClosed {
panic("Tried using a closed utxoSetIterator")
}
return u.cursor.Next()
}
func (u utxoSetIterator) Get() (outpoint *externalapi.DomainOutpoint, utxoEntry externalapi.UTXOEntry, err error) {
if u.isClosed {
return nil, nil, errors.New("Tried using a closed utxoSetIterator")
}
key, err := u.cursor.Key()
if err != nil {
panic(err)
@@ -202,3 +252,16 @@ func (u utxoSetIterator) Get() (outpoint *externalapi.DomainOutpoint, utxoEntry
return outpoint, utxoEntry, nil
}
func (u utxoSetIterator) Close() error {
if u.isClosed {
return errors.New("Tried using a closed utxoSetIterator")
}
u.isClosed = true
err := u.cursor.Close()
if err != nil {
return err
}
u.cursor = nil
return nil
}
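The VirtualUTXOs method added above in this file returns at most `limit` outpoint/UTXO-entry pairs starting from an optional fromOutpoint, which suggests paged retrieval of the virtual UTXO set. Below is a hedged sketch of such a paging loop; the helper, its local interface, and the page size are assumptions, and whether the boundary outpoint repeats across pages depends on the cursor's Seek/Next semantics, which this diff does not pin down.

package consensusstatestoresketch

import (
	"github.com/kaspanet/kaspad/domain/consensus/model"
	"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// virtualUTXOPager mirrors the signature of the VirtualUTXOs method added above.
type virtualUTXOPager interface {
	VirtualUTXOs(dbContext model.DBReader, fromOutpoint *externalapi.DomainOutpoint,
		limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error)
}

// forEachVirtualUTXO pages through the virtual UTXO set and applies process to
// every pair. It is a hypothetical caller, not part of the diff.
func forEachVirtualUTXO(store virtualUTXOPager, dbContext model.DBReader,
	process func(*externalapi.OutpointAndUTXOEntryPair) error) error {

	const pageSize = 1000
	var fromOutpoint *externalapi.DomainOutpoint // nil starts from the first outpoint
	for {
		pairs, err := store.VirtualUTXOs(dbContext, fromOutpoint, pageSize)
		if err != nil {
			return err
		}
		for _, pair := range pairs {
			if err := process(pair); err != nil {
				return err
			}
		}
		if len(pairs) < pageSize {
			return nil
		}
		// Resume from the last outpoint of this page. Whether that outpoint is
		// visited again depends on the cursor's Seek/Next semantics, so a real
		// caller may need to de-duplicate at page boundaries.
		fromOutpoint = pairs[len(pairs)-1].Outpoint
	}
}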


@@ -1,74 +0,0 @@
package consensusstatestore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
var virtualDiffParentsKey = database.MakeBucket(nil).Key([]byte("virtual-diff-parents"))
func (css *consensusStateStore) VirtualDiffParents(dbContext model.DBReader) ([]*externalapi.DomainHash, error) {
if css.virtualDiffParentsStaging != nil {
return externalapi.CloneHashes(css.virtualDiffParentsStaging), nil
}
if css.virtualDiffParentsCache != nil {
return externalapi.CloneHashes(css.virtualDiffParentsCache), nil
}
virtualDiffParentsBytes, err := dbContext.Get(virtualDiffParentsKey)
if err != nil {
return nil, err
}
virtualDiffParents, err := css.deserializeVirtualDiffParents(virtualDiffParentsBytes)
if err != nil {
return nil, err
}
css.virtualDiffParentsCache = virtualDiffParents
return externalapi.CloneHashes(virtualDiffParents), nil
}
func (css *consensusStateStore) StageVirtualDiffParents(tipHashes []*externalapi.DomainHash) {
css.virtualDiffParentsStaging = externalapi.CloneHashes(tipHashes)
}
func (css *consensusStateStore) commitVirtualDiffParents(dbTx model.DBTransaction) error {
if css.virtualDiffParentsStaging == nil {
return nil
}
virtualDiffParentsBytes, err := css.serializeVirtualDiffParents(css.virtualDiffParentsStaging)
if err != nil {
return err
}
err = dbTx.Put(virtualDiffParentsKey, virtualDiffParentsBytes)
if err != nil {
return err
}
css.virtualDiffParentsCache = css.virtualDiffParentsStaging
// Note: we don't discard the staging here since that's
// being done at the end of Commit()
return nil
}
func (css *consensusStateStore) serializeVirtualDiffParents(virtualDiffParentsBytes []*externalapi.DomainHash) ([]byte, error) {
virtualDiffParents := serialization.VirtualDiffParentsToDBHeaderVirtualDiffParents(virtualDiffParentsBytes)
return proto.Marshal(virtualDiffParents)
}
func (css *consensusStateStore) deserializeVirtualDiffParents(virtualDiffParentsBytes []byte) ([]*externalapi.DomainHash,
error) {
dbVirtualDiffParents := &serialization.DbVirtualDiffParents{}
err := proto.Unmarshal(virtualDiffParentsBytes, dbVirtualDiffParents)
if err != nil {
return nil, err
}
return serialization.DBVirtualDiffParentsToVirtualDiffParents(dbVirtualDiffParents)
}


@@ -3,6 +3,7 @@ package consensusstatestore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/pkg/errors"
)
@@ -21,7 +22,7 @@ func (css *consensusStateStore) FinishImportingPruningPointUTXOSet(dbContext mod
}
func (css *consensusStateStore) ImportPruningPointUTXOSetIntoVirtualUTXOSet(dbContext model.DBWriter,
pruningPointUTXOSetIterator model.ReadOnlyUTXOSetIterator) error {
pruningPointUTXOSetIterator externalapi.ReadOnlyUTXOSetIterator) error {
if css.virtualUTXODiffStaging != nil {
return errors.New("cannot import virtual UTXO set while virtual UTXO diff is staged")
@@ -44,6 +45,7 @@ func (css *consensusStateStore) ImportPruningPointUTXOSetIntoVirtualUTXOSet(dbCo
if err != nil {
return err
}
defer deleteCursor.Close()
for ok := deleteCursor.First(); ok; ok = deleteCursor.Next() {
key, err := deleteCursor.Key()
if err != nil {

Some files were not shown because too many files have changed in this diff.