Compare commits


107 Commits

Author SHA1 Message Date
Ori Newman
f6d46fd23f Update changelog.txt for v0.12.5 (#2130)
* Update changelog.txt for v0.12.5

* Update version and add #2131 to changelog.txt
2022-08-28 13:53:33 +03:00
Svarog
2a7e03e232 Kaspawallet.send(): Make separate context for Broadcast, to prolong timeout (#2131)
* Kaspawallet.send(): Make separate context for Broadcast, to prolong timeout

* Amend comment

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-28 13:17:58 +03:00
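
The fix in #2131 above boils down to decoupling the broadcast deadline from the send deadline. Below is a minimal sketch of that pattern in Go; all names are hypothetical, not kaspad's actual wallet API:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// sendAndBroadcast builds a transaction under the caller's context, but
// broadcasts under a separate, longer-lived context, so an almost-exhausted
// send deadline cannot cut the broadcast short.
func sendAndBroadcast(ctx context.Context, tx string) error {
	// Building/signing honors the caller's deadline.
	if err := ctx.Err(); err != nil {
		return err
	}

	// Broadcasting gets its own timeout, detached from the caller's context.
	broadcastCtx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	return broadcast(broadcastCtx, tx)
}

func broadcast(ctx context.Context, tx string) error {
	fmt.Println("broadcasting", tx)
	return ctx.Err()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(sendAndBroadcast(ctx, "tx-1"))
}
```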
Ori Newman
3286a7d010 Call update pruning point if required on resolve virtual (#2129)
* Call UpdatePruningPointIfRequired when resolving virtual

* Don't sanity check pruning point UTXO set if it's genesis

* Add UpdatePruningPointIfRequired on init
2022-08-24 13:32:39 +03:00
Ori Newman
aabbc741d7 Add UseExistingChangeAddress option to the wallet (#2127) 2022-08-21 17:12:05 +03:00
Elichai Turkel
20b7ab89f9 Add tests for hash writers (#2120)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-21 09:08:20 +03:00
Ori Newman
10f1e7e3f4 Replace daglabs's dnsseeder with Wolfie's (#2119)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-21 03:10:59 +03:00
Ori Newman
d941c73701 Change testnet dnsseeder (#2126)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-21 02:32:40 +03:00
Michael Sutton
3f80638c86 Add missing locks to notification listener modifications (#2124) 2022-08-21 01:26:03 +03:00
Michael Sutton
266ec6c270 Calculate pruning point utxo set from acceptance data (#2123)
* Calc new pruning point utxo diff through chain acceptance data

* Fix daa score to chain block
2022-08-17 15:56:53 +03:00
Michael Sutton
9ee409afaa Fix RPC client memory/goroutine leak (#2122)
* Showcase the RPC client memory leak

* Fixes an RPC client goroutine leak by properly closing the underlying connection
2022-08-09 16:30:24 +03:00
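
For context on #2122: the usual shape of such a goroutine leak is a Close method that tears down the wrapper but not the underlying connection, leaving the connection's reader goroutine blocked forever. A minimal sketch with hypothetical types:

```go
package main

import "fmt"

// connection stands in for an underlying transport (e.g. a gRPC connection)
// whose reader goroutine only exits once the connection itself is closed.
type connection struct{ closed bool }

func (c *connection) Close() error { c.closed = true; return nil }

// RPCClient wraps the connection; its Close must propagate to the transport,
// otherwise the reader goroutine (and its buffers) leak on every client.
type RPCClient struct{ conn *connection }

func (c *RPCClient) Close() error {
	return c.conn.Close() // the crucial step: close the underlying connection
}

func main() {
	client := &RPCClient{conn: &connection{}}
	_ = client.Close()
	fmt.Println("underlying connection closed:", client.conn.closed)
}
```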
Michael Sutton
715cb3b1ac Fix a subtle lock sync issue in consensus insert block (#2121)
* Manage the locks more tightly and fix a slight and rare sync issue

* Extract virtualResolveChunk constant
2022-08-02 10:33:39 +03:00
D-Stacks
eb693c4a86 Mempool: Retrieve stable state of the mempool. Optimize get mempool entries by addresses (#2111)
* fix mempool accessing, rewrite get_mempool_entries_by_addresses

* fix counter, add verbose

* fmt

* addresses as string

* Define error in case utxoEntry is missing.

* fix error variable to string

* stop tests from failing (see in code comment)

* access both pools in the same state via parameters

* get rid of todo message

* fmt - very important!

* perf: scriptpublickey in mempool, no txscript.

* address review

* fmt fix

* mixed up isOrphan bool, pass tests now

* do map preallocation in mempoolByAddresses

* no preallocation for orphanpool sending.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-07-26 23:06:08 +03:00
Aleoami
7a61c637b0 Add RPC timeout parameter to wallet daemon (#2104)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-07-21 13:55:12 +03:00
Michael Sutton
c7bd84ef9d Bump version and changelog for v0.12.4 (#2118)
* Bump version and changelog
2022-07-17 03:07:51 +03:00
Svarog
b26b9f6c4b Implement multi-layer auto-compound (#2115)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-07-15 14:12:34 +03:00
Michael Sutton
1c9bb54cc2 Crucial fix for the UTXO difference mechanism (#2114)
* Illustrate the bug through prints

* Change consensus API to a single ResolveVirtual call

* nil changeset is not expected when err=nil

* Fixes a deep bug in the resolve virtual process

* Be more defensive at resolving virtual when adding a block

* When finally resolved, set virtual parents properly

* Return nil changeset when nothing happened

* Make sure the block at the split point is reversed to new chain as well

* bump to version 0.12.4

* Avoid locking consensus twice in the common case of adding block with updateVirtual=true

* check err

* Parents must be picked first before set as virtual parents

* Keep the flag for tracking virtual state, since tip sorting perf is high with many tips

* Improve and clarify resolve virtual tests

* Addressing minor review comments

* Fixed a bug in the reported virtual changeset, and modified the test to verify it

* Addressing review comments
2022-07-15 12:35:20 +03:00
Michael Sutton
b9093d59eb Bump version and add change log (#2110) 2022-06-29 16:24:56 +03:00
D-Stacks
18d000f625 Logger: change log level for "Couldn't find UTXO entry" to debug (and ignore message on orphan transactions) (#2108)
* Logger: change log level for "Couldn't find UTXO entry" to debug

* ignore error for orphans

* continue also if orphans

* check if blocks exist

* safe rpc mode

* limit window size

* verify block status depending on context

* allow a 2 factor gap in expected mergeset size

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-06-29 15:54:40 +03:00
Ori Newman
c5aade7e7f Update changelog to v0.12.2 (#2091) 2022-06-17 18:24:21 +03:00
Ori Newman
d4b741fd7c Bump version to v0.12.2 (#2086) 2022-06-16 12:54:38 +03:00
D-Stacks
74a4f927e9 RPC: include orphans into mempool entries (#2046)
* RPC: include orphans into mempool entries

* no need for + 1

* give request option to choose mempool pool(s)

* add to wallet, fix bug

* use orphanpool rpc to test for orphans

* fix fmt

* fix crash when querying orphan pool in get_mempool_entries

* pass the tests, fix fromAppMessage bug

* Update config_test.go

don't think this is needed

* needed for tests to pass

* inverse to transactionpoolfilter, cut down code to two ifs.

* fmt

* update test to true false, forgot one includetransactionpool renaming

* update and simplify get_mempool_entry handler

* comment outdated

* I think we usually use make instead of var.

* Fix some leftovers of includeTransactionPool

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-15 23:46:44 +03:00
biospb
847aafc91f Fix RPC connections counting (#2026)
* Fix RPC connections counting

* show incoming connections count

* Use the flag RPCMaxClients instead of the const RPCMaxInboundConnections

* Add grpc server name to log message

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 22:49:36 +03:00
Aleoami
c87e541570 Clarify wallet message concerning a wallet daemon sync state (#2045)
* upd: clarified wallet daemon synchronization state log message

* Update address.go

* Update create_unsigned_transaction.go

* Update external_spendable_utxos.go

* Update sync.go

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 16:05:01 +03:00
Aleoami
2ea1c4f922 Change the way the miner executable reports execution errors (closes issue #1677) (#2048)
* fix: changed the way the miner executable reports execution errors

* Update main.go

* Update main.go

* Update main.go

* Update main.go

* Rolled back

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 14:05:05 +03:00
Aleoami
5e9c28b77b fix the confusing term in the getHashrate() (#2081)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 03:24:10 +03:00
Michael Sutton
d957a6d93a Fix UTXO diff child error (#2084)
* Avoid creating the chain iterator if high hash is actually low hash

* Always use iterator in nextPruningPointAndCandidateByBlockHash

* Initial failing test

* Minimal failing test + some comments

* go lint

* Add simpler tests with two different errors

* Missed some error checks

* Minor

* A workaround patch for preventing the missing utxo child diff bug

* Make sure we fully resolve virtual

* Move ResolveVirtualWithMaxParam to test consensus

* Mark virtual not updated and loop in batches

* Refactor: remove VirtualChangeSet from functions return values

* Remove workaround comments

* If block has no body, virtual is still considered updated

* Remove special error ErrReverseUTXODiffsUTXODiffChildNotFound

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-15 02:52:14 +03:00
Michael Sutton
b2648aa5bd Fix not in selected chain crash (#2082)
* Avoid creating the chain iterator if high hash is actually low hash

* Always use iterator in nextPruningPointAndCandidateByBlockHash

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-15 02:32:47 +03:00
D-Stacks
3908f274ae [Ready] - RPC & UtxoIndex: keep track of, query and test circulating supply. (#2070)
* start

* pass tests, (cannot get before put!) and clean up.

* add rpc call.

* create and pass tests, fix bugs. Fully implement RPC

* As always fmt

* remove old test

* clean up proto comment

* put the logger back in place.

* revert back to 10 sec limit.

* migration, change utxoChanged removal to whole utxoEntryPair, add methods to update circulating supply, initialize circulating supply from reset.

* Update utxoindex.go

* ad

* rename to max, change comment

* one more total to max

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-05 16:42:08 +03:00
Aleoami
fa7ea121ff fix kaspawallet help messages, clarify sweep command help string (#2067)
* fix kaspawallet help messages, clarify sweep command help string

* fix sweep cmd help string phrasing

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-05 15:39:56 +03:00
biospb
24848da895 Wallet parse/send/create commands improvement (#2024)
* Fix wallet parsing of multi tx data

* Output wallet msgs to stderr when creating/signing tx

* Fix go fmt

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-03 19:06:51 +03:00
Michael Sutton
b200b77541 Use chunks for GetBlocksAcceptanceData calls in order to avoid blocking consensus for too long (#2075) 2022-06-03 18:25:02 +03:00
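
The chunking in #2075 keeps each consensus call short so the consensus lock is released between batches. A generic sketch of the pattern (the chunk size and callback here are illustrative, not kaspad's actual values):

```go
package main

import "fmt"

// processInChunks walks the hashes in fixed-size batches; each call to
// process represents one short consensus call, so other work can interleave
// between batches instead of waiting behind one long lock-holding call.
func processInChunks(hashes []string, chunkSize int, process func([]string)) {
	for start := 0; start < len(hashes); start += chunkSize {
		end := start + chunkSize
		if end > len(hashes) {
			end = len(hashes)
		}
		process(hashes[start:end])
	}
}

func main() {
	hashes := []string{"a", "b", "c", "d", "e"}
	processInChunks(hashes, 2, func(chunk []string) { fmt.Println(chunk) })
}
```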
Michael Sutton
d50ad0667c Unite multiple GetBlockAcceptanceData consensus calls to one (#2074)
* Unite multiple `GetBlockAcceptanceData` consensus calls to one

* Variable rename
2022-06-01 12:19:21 +03:00
Ori Newman
5cea285960 Update many-small-chains-and-one-big-chain DAG to not fail merge depth limit (#2072)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-31 16:13:43 +03:00
Michael Sutton
7eb5085f6b Update changelog with v0.12.1 changes (#2071)
* Update changelog with v0.12.1 changes

* Remove full links and credits
2022-05-31 15:00:13 +03:00
D-Stacks
491e3569d2 RPC: include isSynced and isUtxoIndexed in GetInfoResponse (#2068)
* include utxoIndex and isNearlySynced in GetInfo

* fmt fixing

* add missing IsNearlySynced

* change `isNearlySynced` -> `isSynced` & `isUtxoIndexSet` ->`isUtxoIndexed`

* Add a check for `HasPeers` to make `isSynced` identical to `GetBlockTemplate.isSynced`

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-05-30 13:12:00 +03:00
Aleoami
440aea19b0 Add wallet daemon state messages to a terminal window log (#2062)
* add wallet daemon state messages to a terminal window log

* refactoring: changed var name to harmonize it

* fix: made global vars part of the "server" class

* fix log message phrasing

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 22:55:53 +03:00
Aleoami
968d47c3e6 add explanatory message about the mnemonic at the wallet creation (#2047)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 18:36:38 +03:00
D-Stacks
052193865e populate with verbose (#2064)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 18:23:44 +03:00
D-Stacks
85febcb551 update rpc.md with includeAcceptedTransactionIds (#2065)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 18:13:35 +03:00
D-Stacks
a4d9fa10bf fix typo (#2060)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-25 10:17:58 +03:00
Michael Sutton
cd5fd86ad3 Wrap the entire wallet send operation with a lock (#2063) 2022-05-25 08:55:35 +03:00
Ori Newman
b84d6fed2c broadcast is using refreshUTXOs, so it should be protected by lock (#2061) 2022-05-25 01:16:54 +03:00
Svarog
24c94b38be Move OnBlockAdded event to the channel that was only used by virtualChanged events (#2059)
* Move OnBlockAdded event to the channel that was only used by virtualChanged events

* Don't send notifications for header-only and invalid blocks

* Return block status from block processor and use it for event raising decision

* Use MaybeEnqueue for consensus events

* go lint

* Fix RPC call to actually include tx ids

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-05-24 01:11:01 +03:00
Ori Newman
4dd7113dc5 Increase virtualChangeChan to 100e3 (#2056)
* Increase virtualChangeChan to 100e3
Don't crash when sending UTXO RPC notification to a closed route
Throw error if virtualChangeChan is full

* Use MaybeEnqueue in more places

* Remove comment

* Ignore capacity reached errors on MaybeEnqueue
2022-05-20 19:31:13 +03:00
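
MaybeEnqueue, as used in #2056, is best understood as a non-blocking channel send that drops (rather than blocks or crashes) when the channel is full. A sketch under that assumption:

```go
package main

import "fmt"

// maybeEnqueue attempts a non-blocking send: if the channel has spare
// capacity the event goes in, otherwise it is dropped and false is returned,
// which the caller may choose to ignore (as with capacity-reached errors).
func maybeEnqueue[T any](ch chan T, event T) bool {
	select {
	case ch <- event:
		return true
	default:
		return false
	}
}

func main() {
	events := make(chan int, 2) // kaspad's channel is far larger (100e3)
	for i := 0; i < 3; i++ {
		fmt.Println(maybeEnqueue(events, i)) // true, true, false
	}
}
```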
biospb
48c7fa0104 Wallet synchronization improvement (#2025)
* Wallet synchronization improvement

* Much faster sync on startup
* Avoid double scan of the same address range

* Eliminate 1 sec delay on start

* Rename constant and add numIndexesToQueryForRecentAddresses const

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-19 17:58:33 +03:00
Ori Newman
4d0cf2169a Bump to v0.12.1 (#2054) 2022-05-19 16:45:35 +03:00
Ori Newman
5f7cc079e9 Make kaspawallet ignore outputs that exist in the mempool (#2053)
* Make kaspawallet ignore outputs that exist in the mempool

* Make kaspawallet ignore outputs that exist in the mempool

* Add comment
2022-05-19 16:12:17 +03:00
Michael Sutton
016ddfdfce Use a channel for utxo change events (#2052)
* Use a channel from within consensus to raise change events in order -- note that this is only a draft commit for discussion

* Fix compilation

* Check for nil

* Allow nil virtualChangeChan

* Remove redundant comments

* Call notifyVirtualChange instead of notifyUTXOsChanged

* Remove redundant comment

* Add a separate function for initVirtualChangeHandler

* Remove redundant type

* Check for nil in the right place

* Fix integration test

* Add data to virtual changeset and cleanup block added event logic

* Renames

* Comment

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-19 14:07:48 +03:00
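
The design in #2052 replaces per-call event plumbing with a single channel: consensus produces virtual-change events, and one consumer goroutine raises notifications in production order. A toy sketch with a hypothetical event type:

```go
package main

import "fmt"

// virtualChangeEvent stands in for the real consensus event payload.
type virtualChangeEvent struct{ daaScore uint64 }

func main() {
	virtualChangeChan := make(chan virtualChangeEvent, 100)

	done := make(chan struct{})
	go func() { // a single consumer preserves event order
		defer close(done)
		for event := range virtualChangeChan {
			fmt.Println("virtual changed at DAA score", event.daaScore)
		}
	}()

	for score := uint64(1); score <= 3; score++ {
		virtualChangeChan <- virtualChangeEvent{daaScore: score}
	}
	close(virtualChangeChan) // nil/closed-channel handling is elided here
	<-done
}
```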
Svarog
5d24e2afbc Allow blank address in NotifyUTXOsChanged to get all updates (#2027)
* Allow blank address in NotifyUTXOsChanged to get all updates

* To see if address is possible to extract: Check for NonStandardTy rather than error

* Don't swallow errors

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-18 22:28:33 +03:00
biospb
8735da045f Block template cache improvement (#2023)
* Block template cache improvement

* Avoid concurrent calls to template builder
* Clear cache on new block event
* Move IsNearlySynced logic to within consensus and cache it for each template 
* Use a single consensus call for building template and checking synced

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-05-06 11:36:07 +03:00
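
The template cache in #2023 combines two ideas: serialize access to the (expensive) template builder, and invalidate the cached template on every new-block event. A compact sketch with hypothetical types:

```go
package main

import (
	"fmt"
	"sync"
)

// templateCache serializes builder calls and caches the last-built template
// until clear is called (which the node would do on a new-block event).
type templateCache struct {
	mu       sync.Mutex
	template string // stands in for the real block template type
}

func (c *templateCache) get(build func() string) string {
	c.mu.Lock() // avoid concurrent calls to the template builder
	defer c.mu.Unlock()
	if c.template == "" {
		c.template = build()
	}
	return c.template
}

func (c *templateCache) clear() { // invoked on new-block events
	c.mu.Lock()
	defer c.mu.Unlock()
	c.template = ""
}

func main() {
	c := &templateCache{}
	fmt.Println(c.get(func() string { return "template-1" })) // builds
	fmt.Println(c.get(func() string { return "template-1" })) // cached
	c.clear()
	fmt.Println(c.get(func() string { return "template-2" })) // rebuilds
}
```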
Ori Newman
c839337425 Add finality check to ResolveVirtual (#2041)
* Add finality check to ResolveVirtual

* Warn on finality conflict only if needed
2022-05-06 00:29:48 +03:00
Ori Newman
7390651072 Remove HF1 activation code (#2042)
* Remove HF1 activation code

* Remove test with an overflow

* Fix "max sompi amount is never dust"

* Remove estimatedHeaderUpperBound logic
2022-05-05 20:35:17 +03:00
Svarog
52fbeedf20 Add AcceptedTransactionIDs to ChainChanged notification and VirtualSelectedParentChain RPC (#2036)
* Add acceptedTransactionIds to GetVirtualSelectedParentChainFromBlockResponseMessage and VirtualSelectedParentChainChangedNotificationMessage

* Modify appmessage structs to include new fields

* Implement the functionality for acceptedTransactionID notifications

* Add missing field for IncludeAcceptedTransactionIds

* Notify of block added before notifying that chain changed

* Use consensushashing instead of Transaction.ID

* Don't notify of empty virtual changes

* Don't generate virtualChainChanged notification if there's nobody subscribed

* Fix test to not expect empty notifications

* Don't generate acceptedTransactionIDs if they were not requested by anyone

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-05 12:35:02 +03:00
biospb
1660cf0cf1 Improved staging shard performance (#2034)
Use a map instead of a growing array, as shard IDs can be 1000+ in some cases

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-04 21:24:25 +03:00
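
To illustrate the trade-off behind #2034: a slice indexed by shard ID must grow to the highest ID seen, while a map's size tracks only the shards actually staged. A toy comparison:

```go
package main

import "fmt"

func main() {
	shardID := 1024 // shard IDs can be 1000+ in some cases

	// Slice keyed by ID: must allocate shardID+1 slots to hold one entry.
	bySlice := make([]string, shardID+1)
	bySlice[shardID] = "staged"

	// Map keyed by ID: holds exactly the staged shards.
	byMap := map[int]string{shardID: "staged"}

	fmt.Println("slice slots:", len(bySlice), "map entries:", len(byMap))
}
```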
Ori Newman
2b5202be7a Update Dockerfile for go 1.18 (#2038) 2022-05-03 00:22:47 +03:00
Ori Newman
9ffbb15160 Use double pointer when passing rpcError to errors.As (#2039) 2022-04-29 13:23:17 +03:00
tmrlvi
540b0d3a22 Add support for from address in kaspawallet send (#1964)
* Added option to choose from address in kaspawallet + fixed a bug with 0 change

* Checking if From is missing

* Fixed after merge to kaspad0.12

* Applying changes also to send command

* Better help description

* Fixed bug where we take all utxos except the one we wanted

* Swallow the parsing error only when we want to filter

* checking for wallet address directly

* go fmt

* Changing to `--from-address`

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-27 22:51:10 +03:00
biospb
8d5faee53a Fix wallet output in send/broadcast cmd (#2032)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-27 01:41:02 +03:00
D-Stacks
6e2fd0633b add "GetMempoolEntriesByAddresses" to kaspad RPC. (#2022)
* add "GetMempoolEntriesByAddresses" to kaspad RPC.

* fmt formatting.

* some things I forgot

* update rpc.md

* forgot to add handler

* fix fmt

* bug fix, implicit testing & error handling

* address review

* address review
2022-04-26 12:31:31 +03:00
Ori Newman
beb038c815 Update changelog.txt for v0.12.0 (#2021)
* Update changelog.txt for v0.12.0

* Fix typo
2022-04-14 13:28:19 +03:00
Ori Newman
35a959b56f Making a workaround for the UTXO diff child bug (#2020)
* Making a workaround for the UTXO diff child bug

* Fix error message

* Fix error message
2022-04-14 12:55:20 +03:00
D-Stacks
57c6118be8 Adds a "sweep" command to kaspawallet (#2018)
* Add "sweep" command to kaspawallet

* minor linting & layout improvements.

* contain code in "sweep"

* update to sweep.go

* formatting and bugs

* clean up renaming of ExtractTransaction functions

* Nicer formatting of display

* address review.

* one more sweeped -> swept

* missed one review comment (extra mass).

* remove redundant code from split function.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-14 02:16:16 +03:00
Michael Sutton
723aebbec9 Do not apply merge depth root heuristic on orphan roots (#2019) 2022-04-13 21:06:08 +03:00
Ori Newman
2b395e34b1 Use separate depth than finality depth for merge set calculations after HF (#2013)
* Use separate depth than finality depth for merge set calculations after HF

* Add comments and edit error messages

* Fix TestValidateTransactionInContextAndPopulateFee

* Don't disconnect from node if isViolatingBoundedMergeDepth

* Use new merge root for virtual pick parents; apply HF1 daa score split for validation only

* Use `blue work` heuristic to skip irrelevant relay blocks

* Minor

* Make sure virtual's merge depth root is a real block

* For ghostdag data we always use the non-trusted data

* Fix TestBoundedMergeDepth and in IBD use VirtualMergeDepthRoot instead of MergeDepthRoot

* Update HF1DAAScore

* Make sure merge root and finality are called + avoid calculating virtual root twice

* Update block version to 1 after HF

* Update to v0.12.0

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-04-12 00:26:44 +03:00
Svarog
ada559f007 Kaspawallet daemon: Add Send and Sign commands (#2016)
* Add send and sign commands to protobuf

* Added Send and Sign stubs in kaspawalletd server

* Implement Sign

* Implemented Send

* Allow Broadcast command to supply multiple transactions

* No longer prompt for password in DecryptMnemonics

* Rename TransactionIDs -> TxIDs to keep consistency with Broadcast

* Add some comments and formatting

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-11 23:42:01 +03:00
Ori Newman
357e8ce73c Use cosigner index 0 for read only wallets (#2014) 2022-04-11 02:35:47 +03:00
Ori Newman
6725902663 Bump version and update changelog (#2011) 2022-04-06 22:06:57 +03:00
Ori Newman
99bb21c512 Decrement estimatedHeaderUpperBound from mempool's MaxBlockMass (#2009) 2022-04-06 21:53:00 +03:00
Ori Newman
a4669f3fb5 Bump version and update changelog (#2008) 2022-04-06 02:01:08 +03:00
Ori Newman
e8f40bdff9 Don't skip wallet address with different cosigner index (#2007) 2022-04-06 01:51:13 +03:00
Michael Sutton
68a407ea37 Update changelog for v0.11.15 (#2006)
* Update changelog for v0.11.15

* Print full json of rule violating header
2022-04-05 16:13:11 +03:00
Michael Sutton
80879cabe1 Clean up debug log level by moving many frequent logs to trace level (#2004)
* Clean up debug log level and move most/frequent logs to trace level

* Change function call traces to trace log level
2022-04-03 23:25:29 +03:00
Ori Newman
71afc62298 Avoid override and accidental staging of pruning point by index (#2005)
* Avoid override and accidental staging of pruning point by index

* Update pruning point when resolving virtual
2022-04-03 22:52:05 +03:00
Michael Sutton
ca5c8549b9 Add DB compaction following the deletion of a DB prefix (#2003) 2022-04-03 19:08:32 +03:00
Michael Sutton
ab73def07a Cache the pruning point anticone (#2002) 2022-04-03 14:44:42 +03:00
Ori Newman
3f840233d8 Fix migrate resolve virtual percents (#2001)
* Fix migration virtual resolution percents

* Don't sync consensus if pruning point is genesis

* Don't add virtual to pruning point proof

* Remove redundant check
2022-04-03 01:01:49 +03:00
Ori Newman
90d9edb8e5 Bump version to v0.11.15 (#1999)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-04-02 22:20:41 +03:00
Ori Newman
b9b360bce4 Remove increase pagefile (#2000)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-04-02 22:12:57 +03:00
Ori Newman
27654961f9 Remove v4 p2p version (#1998)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-04-02 21:54:40 +03:00
Ori Newman
d45af760d8 Validate GetBlockTemplate extra data (#1997)
* Validate GetBlockTemplate extra data

* Check payload instead of extra data

* Fix error message
2022-04-02 21:25:58 +03:00
Michael Sutton
95fa045297 New definition for "out of sync" (#1996)
* Changed the definition of should mine and redefined "out of sync"

* Check `IsNearlySynced` only if IBD running since IBD check is cheap
2022-04-01 10:36:59 +03:00
Ori Newman
cb65dae63d Add extra data to GetBlockTemplate request (#1995) 2022-03-31 20:12:33 +03:00
Michael Sutton
21b82d7efc Block template cache (#1994)
* minor text fix

* Implement a block template cache with template modification/reuse mechanism

* Fix compilation error

* Address review comments

* Added a thorough TestModifyBlockTemplate test

* Update header timestamp if possible

* Avoid copying the transactions when only the header changed

* go fmt
2022-03-31 16:37:48 +03:00
Ori Newman
63c6d7443b In blockParentBuilder.BuildParents check if a block is isInFutureOfVi… (#1993)
* In blockParentBuilder.BuildParents check if a block is isInFutureOfVirtualGenesisChildren instead of checking if it has reachability data

* Add comment
2022-03-31 02:43:07 +03:00
Svarog
753f4a2ec1 Add package name to kaspawalletd .proto file (#1991)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-03-30 20:56:36 +03:00
Svarog
ed667f7e54 Upgrade to go 1.18 (#1992)
* Upgrade to go 1.18

* Fix stability-test shell script
2022-03-30 20:17:29 +03:00
Michael Sutton
c4a034eb43 Optimize the miner-kaspad flow and latency (#1988)
* protobuf for new block template notification structs

* appmessage and wire for new block template notification structs

* Set up the entire handler/call-chain for managing the new-block-template event
2022-03-28 23:41:59 +03:00
Aleoami
2eca0f0b5f Add names to nameless routes (#1986)
* add names to nameless routes

* Update router.go

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-03-28 09:41:25 +03:00
Ori Newman
58d627e05a Unite reachability stores (#1963)
* Unite all reachability stores

* Upgrade script

* Fix tests

* Add UpdateReindexRoot to RebuildReachability

* Use dbTx when deleting reachability stores

* Use ghostdagDataWithoutPrunedBlocks when rebuilding reachability

* Use next tree ancestor wherever possible and avoid finality point search if the block is too close to pruning point

* Address the boundary case where the pruning point becomes the finality point

* some minor fixes

* Remove RebuildReachability and use manual syncing between old and new consensus for migration

* Remove sanity test (it failed when tips were not in the same order)

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-03-27 21:23:03 +03:00
Svarog
639183ba0e Add support for auto-compound in kaspawallet send (#1951)
* Add GetUTXOsByBalances command to rpc

* Fix wrong commands in GetBalanceByAddress

* Moved calculation of TransactionMass out of TransactionValidator, so that it can be used in kaspawallet

* Allow CreateUnsignedTransaction to return multiple transactions

* Start working on split

* Implement maybeSplitTransactionInner

* estimateMassIncreaseForSignatures should multiply by the number of inputs

* Implement createSplitTransaction

* Implement mergeTransactions

* Broadcast all transactions, not only 1

* workaround missing UTXOEntry in partially signed transaction

* Bugfix in broadcast loop

* Add underscores in some constants

* Make all nets RelayNonStdTxs: false

* Change estimateMassIncreaseForSignatures to estimateMassAfterSignatures

* Allow situations where merge transaction doesn't have enough funds to pay fees

* Add comments

* A few renames

* Handle missed errors

* Fix clone of PubKeySignaturePair to properly clone nil signatures

* Add sanity check to make sure originalTransaction has exactly two outputs

* Re-use change address for splitAddress

* Add one more utxo if the total amount is smaller than what we need to send due to fees

* Fix off-by-1 error in splitTransaction

* Add a comment to maybeAutoCompoundTransaction

* Add comment on why we are increasing inputCountPerSplit

* Add comment explaining why originalTransaction has 1 or 2 outputs

* Move oneMoreUTXOForMergeTransaction to split_transaction.go

* Allow to add multiple utxos to pay fee for mergeTransactions, if needed

* calculate split input counts and sizes properly

* Allow multiple transactions inside the create-unsigned-transaction -> sign -> broadcast workflow

* Print the number of the transaction which was sent, in case there were multiple

* Rename broadcastConfig.Transaction(File) to Transactions(File)

* Convert alreadySelectedUTXOs to a map

* Fix a typo

* Add comment explaining that we assume all inputs are the same

* Revert over-refactor of rename of config.Transaction -> config.Transactions

* Rename: inputPerSplitCount -> inputsPerSplitCount

* Add comment for splitAndInputPerSplitCounts

* Use createSplitTransaction to calculate the upper bound of mass for split transactions
2022-03-27 20:06:55 +03:00
Michael Sutton
9fa08442cf Minor log fixes (#1983) 2022-03-21 11:45:52 +02:00
Michael Sutton
0dd50394ec Fix a bug in the new p2p v5 IBD chain negotiation (#1981)
* Fix a bug in the case where syncer chain is fully known to syncee

* Extract chain negotiation to a separate method

* Bump version and update the changelog

* Add zoom-in progress validation and some debug logs

* Improved error explanation

* go fmt

* Validate zoom-in progress through a total count
2022-03-20 18:06:55 +02:00
Michael Sutton
ac8d4e1341 Update changelog with v0.11.13 changes (#1980) 2022-03-16 10:48:11 +02:00
Michael Sutton
2488fbde78 Apply avoiding IBD patch10 logic to p2p v4 (#1979)
* a patch for fixing p2p v4 IBD issues for all side-chains

* Perform side-chain check earlier to avoid IBD start

* A few comments explaining the IBD patch
2022-03-15 19:16:25 +02:00
Michael Sutton
2ab8065142 Improve output of non-critical protocol errors to avoid user panic (#1978)
* Improve output of non-critical protocol errors to avoid user panic

* Add log messages at the end of IBD with headers proof

* Found a case where this was falsely triggered due to the wrong equality test
2022-03-15 17:23:29 +02:00
Michael Sutton
25410b86ae Make sure there are no negative numbers in the progress report (#1977) 2022-03-15 11:58:15 +02:00
Michael Sutton
4e44dd8510 Various P2P V5 IBD fixes (#1976)
* The first message is expected to contain headers and not a "done" message (+comment and error text fixes)

* Dequeue w/o timeout during pp anticone batch processing

* Add a verification step for catching possible new IBD errors

* Fetch missing bodies for both syncer selected tip past and relay block past

* Make sure the syncer is behaving correctly to avoid out of index errors

* Make sure progress reporter does not exceed 100%

* No orphan roots, so no need to queue the empty list

* Add a log to report utxo fetch failure with err message

* Duplicate blocks should not appear as a warning

* typo
2022-03-14 12:21:32 +02:00
Svarog
1e56a22b32 Ignore not found errors from tp.transactionsOrderedByFeeRate.Remove. (#1974)
* Fix error message referencing wrong function name

* Ignore not found errors from tp.transactionsOrderedByFeeRate.Remove.

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2022-03-13 18:01:13 +02:00
Ori Newman
7a95f0c7a4 Use nil suggestedLowHash if selected parent pruning point is not in the future of the current one (#1972)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2022-03-13 17:22:03 +02:00
Michael Sutton
c81506220b Send pruning point anticone in batches (#1973)
* add p2p v5 which is currently identical to v4

* set all internal imports to v5

* set default version to 5

* Send pruning point and its anticone in batches

* go lint

* Fix json format

* Use DequeueWithTimeout

* Assert that batch size < route capacity

* oops, this is a flow handler, by definition it needs to be w/o a timeout

* here however, a timeout is required

* Keep IDs of prev messages unmodified

* previous merge operation accidentally erased an important part of this pr

* Extend timeout of simple sync

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-03-13 16:31:34 +02:00
Michael Sutton
e5598c15a7 Fix ibd shared past negotiation to be non quadratic also in the worst-case (#1969)
* add p2p v5 which is currently identical to v4

* set all internal imports to v5

* wip

* set default version to 5

* protobuf gen for new ibd chain locator

* wire for new ibd chain locator types

* new ibd shared block algo -- only basic test passing

* address the case where pruning points disagree, now both IBD tests pass

* protobuf gen for new past diff request message

* wire for new request past diff message

* handle and flow for new request past diff message - logic unimplemented yet

* implement ibd sync past diff of relay and selected tip

* go fmt

* remove unused methods

* missed one err check

* addressing simple comments

* apply the traversal limit logic and sort headers

* rename pastdiff -> anticone

* apply Don't relay blocks in virtual anticone #1970 to v5

* go fmt

* Fixed minor comments

* Limit the number of chain negotiation restarts
2022-03-13 11:27:50 +02:00
Svarog
433af5e0fe Make findTransactionIndex return wasFound explicitly + fix crash caused by invalid handling of not found transaction (#1971)
* Make findTransactionIndex return wasFound explicitly + fix crash caused by invalid handling of not found transaction

* Add comment on findTransactionIndex
2022-03-12 11:07:10 +02:00
Ori Newman
b7be807167 Don't relay blocks in virtual anticone (#1970) 2022-03-11 13:24:45 +02:00
Ori Newman
e687ceeae7 Add version to block template (#1967) 2022-03-11 08:56:36 +02:00
Ori Newman
04e35321aa Bump to v0.11.13 (#1968) 2022-03-11 08:33:53 +02:00
Ori Newman
061e65be93 Fix argument order for IsAncestorOf in boundedMergeBreakingParents (#1966) 2022-03-09 21:11:00 +02:00
Ori Newman
190e725dd0 Optimize expected header pruning point (#1962)
* Use the correct heuristic to avoid checking for next pruning point movement when not needed
2022-03-07 00:16:29 +02:00
351 changed files with 13741 additions and 4114 deletions

View File

@@ -19,16 +19,11 @@ jobs:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# Increase the pagefile size on Windows to avoid running out of memory
- name: Increase pagefile size on Windows
if: runner.os == 'Windows'
run: powershell -command .github\workflows\SetPageFileSize.ps1
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
go-version: 1.18
- name: Build on Linux
if: runner.os == 'Linux'

View File

@@ -22,7 +22,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
go-version: 1.18
- name: Set scheduled branch name
shell: bash

View File

@@ -33,7 +33,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
go-version: 1.18
# Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
@@ -58,7 +58,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
go-version: 1.18
- name: Checkout
uses: actions/checkout@v2
@@ -86,7 +86,7 @@ jobs:
- name: Setup Go
uses: actions/setup-go@v2
with:
go-version: 1.16
go-version: 1.18
- name: Delete the stability tests from coverage
run: rm -r stability-tests

View File

@@ -15,7 +15,7 @@ Kaspa is an attempt at a proof-of-work cryptocurrency with instant confirmations
## Requirements
Go 1.16 or later.
Go 1.18 or later.
## Installation

View File

@@ -69,6 +69,10 @@ const (
CmdReady
CmdTrustedData
CmdBlockWithTrustedDataV4
CmdRequestNextPruningPointAndItsAnticoneBlocks
CmdRequestIBDChainBlockLocator
CmdIBDChainBlockLocator
CmdRequestAnticone
// rpc
CmdGetCurrentNetworkRequestMessage
@@ -152,6 +156,13 @@ const (
CmdVirtualDaaScoreChangedNotificationMessage
CmdGetBalancesByAddressesRequestMessage
CmdGetBalancesByAddressesResponseMessage
CmdNotifyNewBlockTemplateRequestMessage
CmdNotifyNewBlockTemplateResponseMessage
CmdNewBlockTemplateNotificationMessage
CmdGetMempoolEntriesByAddressesRequestMessage
CmdGetMempoolEntriesByAddressesResponseMessage
CmdGetCoinSupplyRequestMessage
CmdGetCoinSupplyResponseMessage
)
// ProtocolMessageCommandToString maps all MessageCommands to their string representation
@@ -195,6 +206,10 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdReady: "Ready",
CmdTrustedData: "TrustedData",
CmdBlockWithTrustedDataV4: "BlockWithTrustedDataV4",
CmdRequestNextPruningPointAndItsAnticoneBlocks: "RequestNextPruningPointAndItsAnticoneBlocks",
CmdRequestIBDChainBlockLocator: "RequestIBDChainBlockLocator",
CmdIBDChainBlockLocator: "IBDChainBlockLocator",
CmdRequestAnticone: "RequestAnticone",
}
// RPCMessageCommandToString maps all MessageCommands to their string representation
@@ -278,6 +293,13 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdVirtualDaaScoreChangedNotificationMessage: "VirtualDaaScoreChangedNotification",
CmdGetBalancesByAddressesRequestMessage: "GetBalancesByAddressesRequest",
CmdGetBalancesByAddressesResponseMessage: "GetBalancesByAddressesResponse",
CmdNotifyNewBlockTemplateRequestMessage: "NotifyNewBlockTemplateRequest",
CmdNotifyNewBlockTemplateResponseMessage: "NotifyNewBlockTemplateResponse",
CmdNewBlockTemplateNotificationMessage: "NewBlockTemplateNotification",
CmdGetMempoolEntriesByAddressesRequestMessage: "GetMempoolEntriesByAddressesRequest",
CmdGetMempoolEntriesByAddressesResponseMessage: "GetMempoolEntriesByAddressesResponse",
CmdGetCoinSupplyRequestMessage: "GetCoinSupplyRequest",
CmdGetCoinSupplyResponseMessage: "GetCoinSupplyResponse",
}
// Message is an interface that describes a kaspa message. A type that

View File

@@ -0,0 +1,27 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgIBDChainBlockLocator implements the Message interface and represents a kaspa
// locator message. It is used to find the blockLocator of a peer that is
// syncing with you.
type MsgIBDChainBlockLocator struct {
baseMessage
BlockLocatorHashes []*externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgIBDChainBlockLocator) Command() MessageCommand {
return CmdIBDChainBlockLocator
}
// NewMsgIBDChainBlockLocator returns a new kaspa locator message that conforms to
// the Message interface. See MsgBlockLocator for details.
func NewMsgIBDChainBlockLocator(locatorHashes []*externalapi.DomainHash) *MsgIBDChainBlockLocator {
return &MsgIBDChainBlockLocator{
BlockLocatorHashes: locatorHashes,
}
}

View File

@@ -0,0 +1,33 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestAnticone implements the Message interface and represents a kaspa
// RequestHeaders message. It is used to request the set past(ContextHash) \cap anticone(BlockHash)
type MsgRequestAnticone struct {
baseMessage
BlockHash *externalapi.DomainHash
ContextHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestAnticone) Command() MessageCommand {
return CmdRequestAnticone
}
// NewMsgRequestAnticone returns a new kaspa RequestAnticone message that conforms to the
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgRequestAnticone(blockHash, contextHash *externalapi.DomainHash) *MsgRequestAnticone {
return &MsgRequestAnticone{
BlockHash: blockHash,
ContextHash: contextHash,
}
}

View File

@@ -0,0 +1,31 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestIBDChainBlockLocator implements the Message interface and represents a kaspa
// IBDRequestChainBlockLocator message. It is used to request a block locator between low
// and high hash.
// The locator is returned via a locator message (MsgIBDChainBlockLocator).
type MsgRequestIBDChainBlockLocator struct {
baseMessage
HighHash *externalapi.DomainHash
LowHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestIBDChainBlockLocator) Command() MessageCommand {
return CmdRequestIBDChainBlockLocator
}
// NewMsgIBDRequestChainBlockLocator returns a new IBDRequestChainBlockLocator message that conforms to the
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgIBDRequestChainBlockLocator(highHash, lowHash *externalapi.DomainHash) *MsgRequestIBDChainBlockLocator {
return &MsgRequestIBDChainBlockLocator{
HighHash: highHash,
LowHash: lowHash,
}
}

View File

@@ -0,0 +1,22 @@
package appmessage
// MsgRequestNextPruningPointAndItsAnticoneBlocks implements the Message interface and represents a kaspa
// RequestNextPruningPointAndItsAnticoneBlocks message. It is used to notify the IBD syncer peer to send
// more blocks from the pruning anticone.
//
// This message has no payload.
type MsgRequestNextPruningPointAndItsAnticoneBlocks struct {
baseMessage
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestNextPruningPointAndItsAnticoneBlocks) Command() MessageCommand {
return CmdRequestNextPruningPointAndItsAnticoneBlocks
}
// NewMsgRequestNextPruningPointAndItsAnticoneBlocks returns a new kaspa RequestNextPruningPointAndItsAnticoneBlocks message that conforms to the
// Message interface.
func NewMsgRequestNextPruningPointAndItsAnticoneBlocks() *MsgRequestNextPruningPointAndItsAnticoneBlocks {
return &MsgRequestNextPruningPointAndItsAnticoneBlocks{}
}

View File

@@ -5,6 +5,7 @@ package appmessage
type GetBlockTemplateRequestMessage struct {
baseMessage
PayAddress string
ExtraData string
}
// Command returns the protocol command string for the message
@@ -13,9 +14,10 @@ func (msg *GetBlockTemplateRequestMessage) Command() MessageCommand {
}
// NewGetBlockTemplateRequestMessage returns an instance of the message
func NewGetBlockTemplateRequestMessage(payAddress string) *GetBlockTemplateRequestMessage {
func NewGetBlockTemplateRequestMessage(payAddress, extraData string) *GetBlockTemplateRequestMessage {
return &GetBlockTemplateRequestMessage{
PayAddress: payAddress,
ExtraData: extraData,
}
}

View File

@@ -0,0 +1,40 @@
package appmessage
// GetCoinSupplyRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetCoinSupplyRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *GetCoinSupplyRequestMessage) Command() MessageCommand {
return CmdGetCoinSupplyRequestMessage
}
// NewGetCoinSupplyRequestMessage returns an instance of the message
func NewGetCoinSupplyRequestMessage() *GetCoinSupplyRequestMessage {
return &GetCoinSupplyRequestMessage{}
}
// GetCoinSupplyResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetCoinSupplyResponseMessage struct {
baseMessage
MaxSompi uint64
CirculatingSompi uint64
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetCoinSupplyResponseMessage) Command() MessageCommand {
return CmdGetCoinSupplyResponseMessage
}
// NewGetCoinSupplyResponseMessage returns an instance of the message
func NewGetCoinSupplyResponseMessage(maxSompi uint64, circulatingSompi uint64) *GetCoinSupplyResponseMessage {
return &GetCoinSupplyResponseMessage{
MaxSompi: maxSompi,
CirculatingSompi: circulatingSompi,
}
}

View File

@@ -23,6 +23,8 @@ type GetInfoResponseMessage struct {
P2PID string
MempoolSize uint64
ServerVersion string
IsUtxoIndexed bool
IsSynced bool
Error *RPCError
}
@@ -33,10 +35,12 @@ func (msg *GetInfoResponseMessage) Command() MessageCommand {
}
// NewGetInfoResponseMessage returns an instance of the message
func NewGetInfoResponseMessage(p2pID string, mempoolSize uint64, serverVersion string) *GetInfoResponseMessage {
func NewGetInfoResponseMessage(p2pID string, mempoolSize uint64, serverVersion string, isUtxoIndexed bool, isSynced bool) *GetInfoResponseMessage {
return &GetInfoResponseMessage{
P2PID: p2pID,
MempoolSize: mempoolSize,
ServerVersion: serverVersion,
IsUtxoIndexed: isUtxoIndexed,
IsSynced: isSynced,
}
}

View File

@@ -4,6 +4,8 @@ package appmessage
// its respective RPC message
type GetMempoolEntriesRequestMessage struct {
baseMessage
IncludeOrphanPool bool
FilterTransactionPool bool
}
// Command returns the protocol command string for the message
@@ -12,8 +14,11 @@ func (msg *GetMempoolEntriesRequestMessage) Command() MessageCommand {
}
// NewGetMempoolEntriesRequestMessage returns an instance of the message
func NewGetMempoolEntriesRequestMessage() *GetMempoolEntriesRequestMessage {
return &GetMempoolEntriesRequestMessage{}
func NewGetMempoolEntriesRequestMessage(includeOrphanPool bool, filterTransactionPool bool) *GetMempoolEntriesRequestMessage {
return &GetMempoolEntriesRequestMessage{
IncludeOrphanPool: includeOrphanPool,
FilterTransactionPool: filterTransactionPool,
}
}
// GetMempoolEntriesResponseMessage is an appmessage corresponding to

View File

@@ -0,0 +1,52 @@
package appmessage
// MempoolEntryByAddress represents MempoolEntries associated with some address
type MempoolEntryByAddress struct {
Address string
Receiving []*MempoolEntry
Sending []*MempoolEntry
}
// GetMempoolEntriesByAddressesRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetMempoolEntriesByAddressesRequestMessage struct {
baseMessage
Addresses []string
IncludeOrphanPool bool
FilterTransactionPool bool
}
// Command returns the protocol command string for the message
func (msg *GetMempoolEntriesByAddressesRequestMessage) Command() MessageCommand {
return CmdGetMempoolEntriesByAddressesRequestMessage
}
// NewGetMempoolEntriesByAddressesRequestMessage returns an instance of the message
func NewGetMempoolEntriesByAddressesRequestMessage(addresses []string, includeOrphanPool bool, filterTransactionPool bool) *GetMempoolEntriesByAddressesRequestMessage {
return &GetMempoolEntriesByAddressesRequestMessage{
Addresses: addresses,
IncludeOrphanPool: includeOrphanPool,
FilterTransactionPool: filterTransactionPool,
}
}
// GetMempoolEntriesByAddressesResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetMempoolEntriesByAddressesResponseMessage struct {
baseMessage
Entries []*MempoolEntryByAddress
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetMempoolEntriesByAddressesResponseMessage) Command() MessageCommand {
return CmdGetMempoolEntriesByAddressesResponseMessage
}
// NewGetMempoolEntriesByAddressesResponseMessage returns an instance of the message
func NewGetMempoolEntriesByAddressesResponseMessage(entries []*MempoolEntryByAddress) *GetMempoolEntriesByAddressesResponseMessage {
return &GetMempoolEntriesByAddressesResponseMessage{
Entries: entries,
}
}

View File

@@ -4,7 +4,9 @@ package appmessage
// its respective RPC message
type GetMempoolEntryRequestMessage struct {
baseMessage
TxID string
TxID string
IncludeOrphanPool bool
FilterTransactionPool bool
}
// Command returns the protocol command string for the message
@@ -13,8 +15,12 @@ func (msg *GetMempoolEntryRequestMessage) Command() MessageCommand {
}
// NewGetMempoolEntryRequestMessage returns an instance of the message
func NewGetMempoolEntryRequestMessage(txID string) *GetMempoolEntryRequestMessage {
return &GetMempoolEntryRequestMessage{TxID: txID}
func NewGetMempoolEntryRequestMessage(txID string, includeOrphanPool bool, filterTransactionPool bool) *GetMempoolEntryRequestMessage {
return &GetMempoolEntryRequestMessage{
TxID: txID,
IncludeOrphanPool: includeOrphanPool,
FilterTransactionPool: filterTransactionPool,
}
}
// GetMempoolEntryResponseMessage is an appmessage corresponding to
@@ -30,6 +36,7 @@ type GetMempoolEntryResponseMessage struct {
type MempoolEntry struct {
Fee uint64
Transaction *RPCTransaction
IsOrphan bool
}
// Command returns the protocol command string for the message
@@ -38,11 +45,12 @@ func (msg *GetMempoolEntryResponseMessage) Command() MessageCommand {
}
// NewGetMempoolEntryResponseMessage returns an instance of the message
func NewGetMempoolEntryResponseMessage(fee uint64, transaction *RPCTransaction) *GetMempoolEntryResponseMessage {
func NewGetMempoolEntryResponseMessage(fee uint64, transaction *RPCTransaction, isOrphan bool) *GetMempoolEntryResponseMessage {
return &GetMempoolEntryResponseMessage{
Entry: &MempoolEntry{
Fee: fee,
Transaction: transaction,
IsOrphan: isOrphan,
},
}
}

View File

@@ -4,7 +4,8 @@ package appmessage
// its respective RPC message
type GetVirtualSelectedParentChainFromBlockRequestMessage struct {
baseMessage
StartHash string
StartHash string
IncludeAcceptedTransactionIDs bool
}
// Command returns the protocol command string for the message
@@ -13,18 +14,29 @@ func (msg *GetVirtualSelectedParentChainFromBlockRequestMessage) Command() Messa
}
// NewGetVirtualSelectedParentChainFromBlockRequestMessage returns an instance of the message
func NewGetVirtualSelectedParentChainFromBlockRequestMessage(startHash string) *GetVirtualSelectedParentChainFromBlockRequestMessage {
func NewGetVirtualSelectedParentChainFromBlockRequestMessage(
startHash string, includeAcceptedTransactionIDs bool) *GetVirtualSelectedParentChainFromBlockRequestMessage {
return &GetVirtualSelectedParentChainFromBlockRequestMessage{
StartHash: startHash,
StartHash: startHash,
IncludeAcceptedTransactionIDs: includeAcceptedTransactionIDs,
}
}
// AcceptedTransactionIDs is a part of the GetVirtualSelectedParentChainFromBlockResponseMessage and
// VirtualSelectedParentChainChangedNotificationMessage appmessages
type AcceptedTransactionIDs struct {
AcceptingBlockHash string
AcceptedTransactionIDs []string
}
// GetVirtualSelectedParentChainFromBlockResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetVirtualSelectedParentChainFromBlockResponseMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlockHashes []string
AcceptedTransactionIDs []*AcceptedTransactionIDs
Error *RPCError
}
@@ -36,10 +48,11 @@ func (msg *GetVirtualSelectedParentChainFromBlockResponseMessage) Command() Mess
// NewGetVirtualSelectedParentChainFromBlockResponseMessage returns an instance of the message
func NewGetVirtualSelectedParentChainFromBlockResponseMessage(removedChainBlockHashes,
addedChainBlockHashes []string) *GetVirtualSelectedParentChainFromBlockResponseMessage {
addedChainBlockHashes []string, acceptedTransactionIDs []*AcceptedTransactionIDs) *GetVirtualSelectedParentChainFromBlockResponseMessage {
return &GetVirtualSelectedParentChainFromBlockResponseMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlockHashes: addedChainBlockHashes,
AcceptedTransactionIDs: acceptedTransactionIDs,
}
}

View File

@@ -0,0 +1,50 @@
package appmessage
// NotifyNewBlockTemplateRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyNewBlockTemplateRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyNewBlockTemplateRequestMessage) Command() MessageCommand {
return CmdNotifyNewBlockTemplateRequestMessage
}
// NewNotifyNewBlockTemplateRequestMessage returns an instance of the message
func NewNotifyNewBlockTemplateRequestMessage() *NotifyNewBlockTemplateRequestMessage {
return &NotifyNewBlockTemplateRequestMessage{}
}
// NotifyNewBlockTemplateResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyNewBlockTemplateResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyNewBlockTemplateResponseMessage) Command() MessageCommand {
return CmdNotifyNewBlockTemplateResponseMessage
}
// NewNotifyNewBlockTemplateResponseMessage returns an instance of the message
func NewNotifyNewBlockTemplateResponseMessage() *NotifyNewBlockTemplateResponseMessage {
return &NotifyNewBlockTemplateResponseMessage{}
}
// NewBlockTemplateNotificationMessage is an appmessage corresponding to
// its respective RPC message
type NewBlockTemplateNotificationMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NewBlockTemplateNotificationMessage) Command() MessageCommand {
return CmdNewBlockTemplateNotificationMessage
}
// NewNewBlockTemplateNotificationMessage returns an instance of the message
func NewNewBlockTemplateNotificationMessage() *NewBlockTemplateNotificationMessage {
return &NewBlockTemplateNotificationMessage{}
}

View File

@@ -4,6 +4,7 @@ package appmessage
// its respective RPC message
type NotifyVirtualSelectedParentChainChangedRequestMessage struct {
baseMessage
IncludeAcceptedTransactionIDs bool
}
// Command returns the protocol command string for the message
@@ -11,9 +12,13 @@ func (msg *NotifyVirtualSelectedParentChainChangedRequestMessage) Command() Mess
return CmdNotifyVirtualSelectedParentChainChangedRequestMessage
}
// NewNotifyVirtualSelectedParentChainChangedRequestMessage returns a instance of the message
func NewNotifyVirtualSelectedParentChainChangedRequestMessage() *NotifyVirtualSelectedParentChainChangedRequestMessage {
return &NotifyVirtualSelectedParentChainChangedRequestMessage{}
// NewNotifyVirtualSelectedParentChainChangedRequestMessage returns an instance of the message
func NewNotifyVirtualSelectedParentChainChangedRequestMessage(
includeAcceptedTransactionIDs bool) *NotifyVirtualSelectedParentChainChangedRequestMessage {
return &NotifyVirtualSelectedParentChainChangedRequestMessage{
IncludeAcceptedTransactionIDs: includeAcceptedTransactionIDs,
}
}
// NotifyVirtualSelectedParentChainChangedResponseMessage is an appmessage corresponding to
@@ -39,6 +44,7 @@ type VirtualSelectedParentChainChangedNotificationMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlockHashes []string
AcceptedTransactionIDs []*AcceptedTransactionIDs
}
// Command returns the protocol command string for the message
@@ -48,10 +54,11 @@ func (msg *VirtualSelectedParentChainChangedNotificationMessage) Command() Messa
// NewVirtualSelectedParentChainChangedNotificationMessage returns an instance of the message
func NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes,
addedChainBlocks []string) *VirtualSelectedParentChainChangedNotificationMessage {
addedChainBlocks []string, acceptedTransactionIDs []*AcceptedTransactionIDs) *VirtualSelectedParentChainChangedNotificationMessage {
return &VirtualSelectedParentChainChangedNotificationMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlockHashes: addedChainBlocks,
AcceptedTransactionIDs: acceptedTransactionIDs,
}
}

View File

@@ -4,6 +4,8 @@ import (
"fmt"
"sync/atomic"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
"github.com/kaspanet/kaspad/app/protocol"
@@ -67,6 +69,7 @@ func (a *ComponentManager) Stop() {
}
a.protocolManager.Close()
close(a.protocolManager.Context().Domain().ConsensusEventsChannel())
return
}
@@ -118,7 +121,7 @@ func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database,
if err != nil {
return nil, err
}
rpcManager := setupRPC(cfg, domain, netAdapter, protocolManager, connectionManager, addressManager, utxoIndex, interrupt)
rpcManager := setupRPC(cfg, domain, netAdapter, protocolManager, connectionManager, addressManager, utxoIndex, domain.ConsensusEventsChannel(), interrupt)
return &ComponentManager{
cfg: cfg,
@@ -139,6 +142,7 @@ func setupRPC(
connectionManager *connmanager.ConnectionManager,
addressManager *addressmanager.AddressManager,
utxoIndex *utxoindex.UTXOIndex,
consensusEventsChan chan externalapi.ConsensusEvent,
shutDownChan chan<- struct{},
) *rpc.Manager {
@@ -150,10 +154,10 @@ func setupRPC(
connectionManager,
addressManager,
utxoIndex,
consensusEventsChan,
shutDownChan,
)
protocolManager.SetOnVirtualChange(rpcManager.NotifyVirtualChange)
protocolManager.SetOnBlockAddedToDAGHandler(rpcManager.NotifyBlockAddedToDAG)
protocolManager.SetOnNewBlockTemplateHandler(rpcManager.NotifyNewBlockTemplate)
protocolManager.SetOnPruningPointUTXOSetOverrideHandler(rpcManager.NotifyPruningPointUTXOSetOverride)
return rpcManager

View File

@@ -16,53 +16,42 @@ import (
// OnNewBlock updates the mempool after a new block arrival, and
// relays newly unorphaned transactions and possibly rebroadcast
// manually added transactions when not in IBD.
func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
virtualChangeSet *externalapi.VirtualChangeSet) error {
func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock) error {
hash := consensushashing.BlockHash(block)
log.Debugf("OnNewBlock start for block %s", hash)
defer log.Debugf("OnNewBlock end for block %s", hash)
log.Tracef("OnNewBlock start for block %s", hash)
defer log.Tracef("OnNewBlock end for block %s", hash)
unorphaningResults, err := f.UnorphanBlocks(block)
unorphanedBlocks, err := f.UnorphanBlocks(block)
if err != nil {
return err
}
log.Debugf("OnNewBlock: block %s unorphaned %d blocks", hash, len(unorphaningResults))
log.Debugf("OnNewBlock: block %s unorphaned %d blocks", hash, len(unorphanedBlocks))
newBlocks := []*externalapi.DomainBlock{block}
newVirtualChangeSets := []*externalapi.VirtualChangeSet{virtualChangeSet}
for _, unorphaningResult := range unorphaningResults {
newBlocks = append(newBlocks, unorphaningResult.block)
newVirtualChangeSets = append(newVirtualChangeSets, unorphaningResult.virtualChangeSet)
}
newBlocks = append(newBlocks, unorphanedBlocks...)
allAcceptedTransactions := make([]*externalapi.DomainTransaction, 0)
for i, newBlock := range newBlocks {
for _, newBlock := range newBlocks {
log.Debugf("OnNewBlock: passing block %s transactions to mining manager", hash)
acceptedTransactions, err := f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
if err != nil {
return err
}
allAcceptedTransactions = append(allAcceptedTransactions, acceptedTransactions...)
if f.onBlockAddedToDAGHandler != nil {
log.Debugf("OnNewBlock: calling f.onBlockAddedToDAGHandler for block %s", hash)
virtualChangeSet = newVirtualChangeSets[i]
err := f.onBlockAddedToDAGHandler(newBlock, virtualChangeSet)
if err != nil {
return err
}
}
}
return f.broadcastTransactionsAfterBlockAdded(newBlocks, allAcceptedTransactions)
}
// OnVirtualChange calls the handler function whenever the virtual block changes.
func (f *FlowContext) OnVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error {
if f.onVirtualChangeHandler != nil && virtualChangeSet != nil {
return f.onVirtualChangeHandler(virtualChangeSet)
// OnNewBlockTemplate calls the handler function whenever a new block template is available for miners.
func (f *FlowContext) OnNewBlockTemplate() error {
// Clear current template cache. Note we call this even if the handler is nil, in order to keep the
// state consistent without dependency on external event registration
f.Domain().MiningManager().ClearBlockTemplate()
if f.onNewBlockTemplateHandler != nil {
return f.onNewBlockTemplateHandler()
}
return nil
@@ -118,14 +107,18 @@ func (f *FlowContext) AddBlock(block *externalapi.DomainBlock) error {
return protocolerrors.Errorf(false, "cannot add header only block")
}
virtualChangeSet, err := f.Domain().Consensus().ValidateAndInsertBlock(block, true)
err := f.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for block %s: %s", consensushashing.BlockHash(block), err)
}
return err
}
err = f.OnNewBlock(block, virtualChangeSet)
err = f.OnNewBlockTemplate()
if err != nil {
return err
}
err = f.OnNewBlock(block)
if err != nil {
return err
}
@@ -150,7 +143,7 @@ func (f *FlowContext) TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool {
return false
}
f.ibdPeer = ibdPeer
log.Infof("IBD started")
log.Infof("IBD started with peer %s", ibdPeer)
return true
}


@@ -2,6 +2,7 @@ package flowcontext
import (
"errors"
"strings"
"sync/atomic"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
@@ -9,6 +10,11 @@ import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
)
var (
// ErrPingTimeout signifies that a ping operation timed out.
ErrPingTimeout = protocolerrors.New(false, "timeout expired on ping")
)
// HandleError handles an error from a flow.
// It sends the error to errChan if isStopping == 0 and increments isStopping
//
@@ -21,8 +27,15 @@ func (*FlowContext) HandleError(err error, flowName string, isStopping *uint32,
if protocolErr := (protocolerrors.ProtocolError{}); !errors.As(err, &protocolErr) {
panic(err)
}
log.Errorf("error from %s: %s", flowName, err)
if errors.Is(err, ErrPingTimeout) {
// Avoid printing the call stack on ping timeouts, since it alarms users and this case is not interesting
log.Errorf("error from %s: %s", flowName, err)
} else {
// Explain to the user that this is not a panic, but only a protocol error with a specific peer
logFrame := strings.Repeat("=", 52)
log.Errorf("Non-critical peer protocol error from %s, printing the full stack for debug purposes: \n%s\n%+v \n%s",
flowName, logFrame, err, logFrame)
}
}
if atomic.AddUint32(isStopping, 1) == 1 {


@@ -18,12 +18,8 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
)
// OnBlockAddedToDAGHandler is a handler function that's triggered
// when a block is added to the DAG
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock, virtualChangeSet *externalapi.VirtualChangeSet) error
// OnVirtualChangeHandler is a handler function that's triggered when the virtual changes
type OnVirtualChangeHandler func(virtualChangeSet *externalapi.VirtualChangeSet) error
// OnNewBlockTemplateHandler is a handler function that's triggered when a new block template is available
type OnNewBlockTemplateHandler func() error
// OnPruningPointUTXOSetOverrideHandler is a handler function that's triggered whenever the UTXO set
// resets due to pruning point change via IBD.
@@ -44,8 +40,7 @@ type FlowContext struct {
timeStarted int64
onVirtualChangeHandler OnVirtualChangeHandler
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onNewBlockTemplateHandler OnNewBlockTemplateHandler
onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
@@ -102,14 +97,14 @@ func (f *FlowContext) ShutdownChan() <-chan struct{} {
return f.shutdownChan
}
// SetOnVirtualChangeHandler sets the onVirtualChangeHandler handler
func (f *FlowContext) SetOnVirtualChangeHandler(onVirtualChangeHandler OnVirtualChangeHandler) {
f.onVirtualChangeHandler = onVirtualChangeHandler
// IsNearlySynced returns whether the current consensus is considered synced or close to being synced.
func (f *FlowContext) IsNearlySynced() (bool, error) {
return f.Domain().Consensus().IsNearlySynced()
}
// SetOnBlockAddedToDAGHandler sets the onBlockAddedToDAG handler
func (f *FlowContext) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler OnBlockAddedToDAGHandler) {
f.onBlockAddedToDAGHandler = onBlockAddedToDAGHandler
// SetOnNewBlockTemplateHandler sets the onNewBlockTemplateHandler handler
func (f *FlowContext) SetOnNewBlockTemplateHandler(onNewBlockTemplateHandler OnNewBlockTemplateHandler) {
f.onNewBlockTemplateHandler = onNewBlockTemplateHandler
}
// SetOnPruningPointUTXOSetOverrideHandler sets the onPruningPointUTXOSetOverrideHandler handler


@@ -72,3 +72,10 @@ func (f *FlowContext) Peers() []*peerpkg.Peer {
}
return peers
}
// HasPeers returns whether there are currently active peers
func (f *FlowContext) HasPeers() bool {
f.peersMutex.RLock()
defer f.peersMutex.RUnlock()
return len(f.peers) > 0
}


@@ -15,12 +15,6 @@ import (
// on: 2^orphanResolutionRange * PHANTOM K.
const maxOrphans = 600
// UnorphaningResult is the result of unorphaning a block
type UnorphaningResult struct {
block *externalapi.DomainBlock
virtualChangeSet *externalapi.VirtualChangeSet
}
// AddOrphan adds the block to the orphan set
func (f *FlowContext) AddOrphan(orphanBlock *externalapi.DomainBlock) {
f.orphansMutex.Lock()
@@ -57,7 +51,7 @@ func (f *FlowContext) IsOrphan(blockHash *externalapi.DomainHash) bool {
}
// UnorphanBlocks removes the block from the orphan set, and removes all of the blocks that are no longer orphans.
func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*UnorphaningResult, error) {
func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*externalapi.DomainBlock, error) {
f.orphansMutex.Lock()
defer f.orphansMutex.Unlock()
@@ -66,7 +60,7 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
rootBlockHash := consensushashing.BlockHash(rootBlock)
processQueue := f.addChildOrphansToProcessQueue(rootBlockHash, []externalapi.DomainHash{})
var unorphaningResults []*UnorphaningResult
var unorphanedBlocks []*externalapi.DomainBlock
for len(processQueue) > 0 {
var orphanHash externalapi.DomainHash
orphanHash, processQueue = processQueue[0], processQueue[1:]
@@ -90,21 +84,18 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
}
}
if canBeUnorphaned {
virtualChangeSet, unorphaningSucceeded, err := f.unorphanBlock(orphanHash)
unorphaningSucceeded, err := f.unorphanBlock(orphanHash)
if err != nil {
return nil, err
}
if unorphaningSucceeded {
unorphaningResults = append(unorphaningResults, &UnorphaningResult{
block: orphanBlock,
virtualChangeSet: virtualChangeSet,
})
unorphanedBlocks = append(unorphanedBlocks, orphanBlock)
processQueue = f.addChildOrphansToProcessQueue(&orphanHash, processQueue)
}
}
}
return unorphaningResults, nil
return unorphanedBlocks, nil
}
// addChildOrphansToProcessQueue finds all child orphans of `blockHash`
@@ -143,24 +134,24 @@ func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash)
return childOrphans
}
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (*externalapi.VirtualChangeSet, bool, error) {
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (bool, error) {
orphanBlock, ok := f.orphans[orphanHash]
if !ok {
return nil, false, errors.Errorf("attempted to unorphan a non-orphan block %s", orphanHash)
return false, errors.Errorf("attempted to unorphan a non-orphan block %s", orphanHash)
}
delete(f.orphans, orphanHash)
virtualChangeSet, err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock, true)
err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for orphan block %s: %s", orphanHash, err)
return nil, false, nil
return false, nil
}
return nil, false, err
return false, err
}
log.Infof("Unorphaned block %s", orphanHash)
return virtualChangeSet, true, nil
return true, nil
}
// GetOrphanRoots returns the roots of the missing ancestors DAG of the given orphan


@@ -1,47 +0,0 @@
package flowcontext
import "github.com/kaspanet/kaspad/util/mstime"
const (
maxSelectedParentTimeDiffToAllowMiningInMilliSeconds = 60 * 60 * 1000 // 1 Hour
)
// ShouldMine returns whether it's ok to use block template from this node
// for mining purposes.
func (f *FlowContext) ShouldMine() (bool, error) {
peers := f.Peers()
if len(peers) == 0 {
log.Debugf("The node is not connected, so ShouldMine returns false")
return false, nil
}
if f.IsIBDRunning() {
log.Debugf("IBD is running, so ShouldMine returns false")
return false, nil
}
virtualSelectedParent, err := f.domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
if virtualSelectedParent.Equal(f.Config().NetParams().GenesisHash) {
return false, nil
}
virtualSelectedParentHeader, err := f.domain.Consensus().GetBlockHeader(virtualSelectedParent)
if err != nil {
return false, err
}
now := mstime.Now().UnixMilliseconds()
if now-virtualSelectedParentHeader.TimeInMilliseconds() < maxSelectedParentTimeDiffToAllowMiningInMilliSeconds {
log.Debugf("The selected tip timestamp is recent (%d), so ShouldMine returns true",
virtualSelectedParentHeader.TimeInMilliseconds())
return true, nil
}
log.Debugf("The selected tip timestamp is old (%d), so ShouldMine returns false",
virtualSelectedParentHeader.TimeInMilliseconds())
return false, nil
}


@@ -18,9 +18,9 @@ var (
// minAcceptableProtocolVersion is the lowest protocol version that a
// connected peer may support.
minAcceptableProtocolVersion = uint32(4)
minAcceptableProtocolVersion = uint32(5)
maxAcceptableProtocolVersion = uint32(4)
maxAcceptableProtocolVersion = uint32(5)
)
type receiveVersionFlow struct {


@@ -0,0 +1,16 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"testing"
)
func TestIBDBatchSizeLessThanRouteCapacity(t *testing.T) {
// The `ibdBatchSize` constant must be equal at both syncer and syncee. Therefore, we do not want
// to set it to `router.DefaultMaxMessages` to avoid confusion and human errors.
// Nonetheless, we must enforce that it does not exceed `router.DefaultMaxMessages`
if ibdBatchSize >= router.DefaultMaxMessages {
t.Fatalf("IBD batch size (%d) must be smaller than router.DefaultMaxMessages (%d)",
ibdBatchSize, router.DefaultMaxMessages)
}
}


@@ -21,7 +21,7 @@ func (flow *handleRelayInvsFlow) receiveBlockLocator() (blockLocatorHashes []*ex
switch message := message.(type) {
case *appmessage.MsgInvRelayBlock:
flow.invsQueue = append(flow.invsQueue, message)
flow.invsQueue = append(flow.invsQueue, invRelayBlock{Hash: message.Hash, IsOrphanRoot: false})
case *appmessage.MsgBlockLocator:
return message.BlockLocatorHashes, nil
default:


@@ -5,7 +5,6 @@ import (
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
@@ -34,7 +33,7 @@ func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *
if err != nil {
return err
}
if !blockInfo.Exists {
if !blockInfo.HasHeader() {
return protocolerrors.Errorf(true, "received IBDBlockLocator "+
"with an unknown targetHash %s", targetHash)
}
@@ -47,7 +46,7 @@ func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *
}
// The IBD block locator checks only existing blocks with bodies.
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
if !blockInfo.HasBody() {
continue
}


@@ -4,7 +4,6 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
@@ -32,8 +31,8 @@ func HandleIBDBlockRequests(context HandleIBDBlockRequestsContext, incomingRoute
if err != nil {
return err
}
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
return protocolerrors.Errorf(true, "block %s not found", hash)
if !blockInfo.HasBody() {
return protocolerrors.Errorf(true, "block %s not found (v5)", hash)
}
block, err := context.Domain().Consensus().GetBlock(hash)
if err != nil {


@@ -0,0 +1,85 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
// RequestIBDChainBlockLocatorContext is the interface for the context needed for the HandleRequestBlockLocator flow.
type RequestIBDChainBlockLocatorContext interface {
Domain() domain.Domain
}
type handleRequestIBDChainBlockLocatorFlow struct {
RequestIBDChainBlockLocatorContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestIBDChainBlockLocator handles getBlockLocator messages
func HandleRequestIBDChainBlockLocator(context RequestIBDChainBlockLocatorContext, incomingRoute *router.Route,
outgoingRoute *router.Route) error {
flow := &handleRequestIBDChainBlockLocatorFlow{
RequestIBDChainBlockLocatorContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestIBDChainBlockLocatorFlow) start() error {
for {
highHash, lowHash, err := flow.receiveRequestIBDChainBlockLocator()
if err != nil {
return err
}
log.Debugf("Received getIBDChainBlockLocator with highHash: %s, lowHash: %s", highHash, lowHash)
var locator externalapi.BlockLocator
if highHash == nil || lowHash == nil {
locator, err = flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
} else {
locator, err = flow.Domain().Consensus().CreateHeadersSelectedChainBlockLocator(lowHash, highHash)
if errors.Is(model.ErrBlockNotInSelectedParentChain, err) {
// The chain has been modified, signal it by sending an empty locator
locator, err = externalapi.BlockLocator{}, nil
}
}
if err != nil {
log.Debugf("Received error from CreateHeadersSelectedChainBlockLocator: %s", err)
return protocolerrors.Errorf(true, "couldn't build a block "+
"locator between %s and %s", lowHash, highHash)
}
err = flow.sendIBDChainBlockLocator(locator)
if err != nil {
return err
}
}
}
func (flow *handleRequestIBDChainBlockLocatorFlow) receiveRequestIBDChainBlockLocator() (highHash, lowHash *externalapi.DomainHash, err error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, nil, err
}
msgGetBlockLocator := message.(*appmessage.MsgRequestIBDChainBlockLocator)
return msgGetBlockLocator.HighHash, msgGetBlockLocator.LowHash, nil
}
func (flow *handleRequestIBDChainBlockLocatorFlow) sendIBDChainBlockLocator(locator externalapi.BlockLocator) error {
msgIBDChainBlockLocator := appmessage.NewMsgIBDChainBlockLocator(locator)
err := flow.outgoingRoute.Enqueue(msgIBDChainBlockLocator)
if err != nil {
return err
}
return nil
}


@@ -118,7 +118,7 @@ func HandlePruningPointAndItsAnticoneRequests(context PruningPointAndItsAnticone
return err
}
for _, blockHash := range pointAndItsAnticone {
for i, blockHash := range pointAndItsAnticone {
block, err := context.Domain().Consensus().GetBlock(blockHash)
if err != nil {
return err
@@ -128,6 +128,19 @@ func HandlePruningPointAndItsAnticoneRequests(context PruningPointAndItsAnticone
if err != nil {
return err
}
if (i+1)%ibdBatchSize == 0 {
// No timeout here, as we don't care if the syncee takes its time computing,
// since it only blocks this dedicated flow
message, err := incomingRoute.Dequeue()
if err != nil {
return err
}
if _, ok := message.(*appmessage.MsgRequestNextPruningPointAndItsAnticoneBlocks); !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointAndItsAnticoneBlocks, message.Command())
}
}
}
err = outgoingRoute.Enqueue(appmessage.NewMsgDoneBlocksWithTrustedData())
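The syncee's side of this batching handshake is not shown in this diff; the following is a hedged sketch of the matching loop, where the helper name and the assumption that anticone blocks arrive as MsgBlockWithTrustedDataV4 messages are ours:

package ibdsketch

import (
	"github.com/kaspanet/kaspad/app/appmessage"
	"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)

const ibdBatchSize = 99 // must match the syncer's constant

// receivePruningAnticone mirrors the syncer loop above: after every
// ibdBatchSize received blocks it requests the next batch, so the bounded
// routes on both sides never overflow.
func receivePruningAnticone(incomingRoute, outgoingRoute *router.Route) error {
	for receivedBlocks := 0; ; {
		message, err := incomingRoute.Dequeue()
		if err != nil {
			return err
		}
		switch message.(type) {
		case *appmessage.MsgDoneBlocksWithTrustedData:
			return nil // The syncer finished streaming the anticone
		case *appmessage.MsgBlockWithTrustedDataV4:
			// ...validate and insert the block here...
			receivedBlocks++
			if receivedBlocks%ibdBatchSize == 0 {
				// Signal readiness for the next batch
				if err := outgoingRoute.Enqueue(
					appmessage.NewMsgRequestNextPruningPointAndItsAnticoneBlocks()); err != nil {
					return err
				}
			}
		}
	}
}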


@@ -5,7 +5,6 @@ import (
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
@@ -33,7 +32,7 @@ func HandleRelayBlockRequests(context RelayBlockRequestsContext, incomingRoute *
if err != nil {
return err
}
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
if !blockInfo.HasBody() {
return protocolerrors.Errorf(true, "block %s not found", hash)
}
block, err := context.Domain().Consensus().GetBlock(hash)


@@ -7,9 +7,11 @@ import (
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashset"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
@@ -24,8 +26,8 @@ var orphanResolutionRange uint32 = 5
type RelayInvsContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock, virtualChangeSet *externalapi.VirtualChangeSet) error
OnVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error
OnNewBlock(block *externalapi.DomainBlock) error
OnNewBlockTemplate() error
OnPruningPointUTXOSetOverride() error
SharedRequestedBlocks() *flowcontext.SharedRequestedBlocks
Broadcast(message appmessage.Message) error
@@ -34,13 +36,19 @@ type RelayInvsContext interface {
IsOrphan(blockHash *externalapi.DomainHash) bool
IsIBDRunning() bool
IsRecoverableError(err error) bool
IsNearlySynced() (bool, error)
}
type invRelayBlock struct {
Hash *externalapi.DomainHash
IsOrphanRoot bool
}
type handleRelayInvsFlow struct {
RelayInvsContext
incomingRoute, outgoingRoute *router.Route
peer *peerpkg.Peer
invsQueue []*appmessage.MsgInvRelayBlock
invsQueue []invRelayBlock
}
// HandleRelayInvs listens to appmessage.MsgInvRelayBlock messages, requests their corresponding blocks if they
@@ -53,7 +61,7 @@ func HandleRelayInvs(context RelayInvsContext, incomingRoute *router.Route, outg
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
invsQueue: make([]*appmessage.MsgInvRelayBlock, 0),
invsQueue: make([]invRelayBlock, 0),
}
err := flow.start()
// Currently, HandleRelayInvs flow is the only place where IBD is triggered, so the channel can be closed now
@@ -104,10 +112,16 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
// Block relay is disabled during IBD
// Block relay is disabled if the node is already in IBD AND considered out of sync
if flow.IsIBDRunning() {
log.Debugf("Got block %s while in IBD. continuing...", inv.Hash)
continue
isNearlySynced, err := flow.IsNearlySynced()
if err != nil {
return err
}
if !isNearlySynced {
log.Debugf("Got block %s while in IBD and the node is out of sync. Continuing...", inv.Hash)
continue
}
}
log.Debugf("Requesting block %s", inv.Hash)
@@ -130,8 +144,36 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
// Note we do not apply the heuristic below if inv was queued as an orphan root, since
// that means the process was started by a proper and relevant relay block
if !inv.IsOrphanRoot {
// Check bounded merge depth to avoid requesting irrelevant data which cannot be merged under virtual
virtualMergeDepthRoot, err := flow.Domain().Consensus().VirtualMergeDepthRoot()
if err != nil {
return err
}
if !virtualMergeDepthRoot.Equal(model.VirtualGenesisBlockHash) {
mergeDepthRootHeader, err := flow.Domain().Consensus().GetBlockHeader(virtualMergeDepthRoot)
if err != nil {
return err
}
// Since `BlueWork` respects topology, this condition means that the relay
// block is not in the future of virtual's merge depth root, and thus cannot be merged unless
// other valid blocks Kosherize it, in which case it will be obtained once the merger is relayed
if block.Header.BlueWork().Cmp(mergeDepthRootHeader.BlueWork()) <= 0 {
log.Debugf("Block %s has lower blue work than virtual's merge root %s (%d <= %d), hence we are skipping it",
inv.Hash, virtualMergeDepthRoot, block.Header.BlueWork(), mergeDepthRootHeader.BlueWork())
continue
}
}
}
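// Worked example (ours, for illustration only): if virtual's merge depth root
// carries blue work 1,000,000 and a relayed block carries blue work 900,000,
// the block cannot lie in the root's future, since blue work only grows along
// DAG edges. It can therefore never enter virtual's mergeset on its own, and
// requesting its body would be wasted bandwidth.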
log.Debugf("Processing block %s", inv.Hash)
missingParents, virtualChangeSet, err := flow.processBlock(block)
oldVirtualInfo, err := flow.Domain().Consensus().GetVirtualInfo()
if err != nil {
return err
}
missingParents, err := flow.processBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrPrunedBlock) {
log.Infof("Ignoring pruned block %s", inv.Hash)
@@ -153,13 +195,44 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
log.Debugf("Relaying block %s", inv.Hash)
err = flow.relayBlock(block)
oldVirtualParents := hashset.New()
for _, parent := range oldVirtualInfo.ParentHashes {
oldVirtualParents.Add(parent)
}
newVirtualInfo, err := flow.Domain().Consensus().GetVirtualInfo()
if err != nil {
return err
}
virtualHasNewParents := false
for _, parent := range newVirtualInfo.ParentHashes {
if oldVirtualParents.Contains(parent) {
continue
}
virtualHasNewParents = true
block, err := flow.Domain().Consensus().GetBlock(parent)
if err != nil {
return err
}
blockHash := consensushashing.BlockHash(block)
log.Debugf("Relaying block %s", blockHash)
err = flow.relayBlock(block)
if err != nil {
return err
}
}
if virtualHasNewParents {
log.Debugf("Virtual %d has new parents, raising new block template event", newVirtualInfo.DAAScore)
err = flow.OnNewBlockTemplate()
if err != nil {
return err
}
}
log.Infof("Accepted block %s via relay", inv.Hash)
err = flow.OnNewBlock(block, virtualChangeSet)
err = flow.OnNewBlock(block)
if err != nil {
return err
}
@@ -175,24 +248,24 @@ func (flow *handleRelayInvsFlow) banIfBlockIsHeaderOnly(block *externalapi.Domai
return nil
}
func (flow *handleRelayInvsFlow) readInv() (*appmessage.MsgInvRelayBlock, error) {
func (flow *handleRelayInvsFlow) readInv() (invRelayBlock, error) {
if len(flow.invsQueue) > 0 {
var inv *appmessage.MsgInvRelayBlock
var inv invRelayBlock
inv, flow.invsQueue = flow.invsQueue[0], flow.invsQueue[1:]
return inv, nil
}
msg, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, err
return invRelayBlock{}, err
}
inv, ok := msg.(*appmessage.MsgInvRelayBlock)
msgInv, ok := msg.(*appmessage.MsgInvRelayBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "unexpected %s message in the block relay handleRelayInvsFlow while "+
return invRelayBlock{}, protocolerrors.Errorf(true, "unexpected %s message in the block relay handleRelayInvsFlow while "+
"expecting an inv message", msg.Command())
}
return inv, nil
return invRelayBlock{Hash: msgInv.Hash, IsOrphanRoot: false}, nil
}
func (flow *handleRelayInvsFlow) requestBlock(requestHash *externalapi.DomainHash) (*externalapi.DomainBlock, bool, error) {
@@ -237,7 +310,7 @@ func (flow *handleRelayInvsFlow) readMsgBlock() (msgBlock *appmessage.MsgBlock,
switch message := message.(type) {
case *appmessage.MsgInvRelayBlock:
flow.invsQueue = append(flow.invsQueue, message)
flow.invsQueue = append(flow.invsQueue, invRelayBlock{Hash: message.Hash, IsOrphanRoot: false})
case *appmessage.MsgBlock:
return message, nil
default:
@@ -246,22 +319,25 @@ func (flow *handleRelayInvsFlow) readMsgBlock() (msgBlock *appmessage.MsgBlock,
}
}
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, *externalapi.VirtualChangeSet, error) {
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, error) {
blockHash := consensushashing.BlockHash(block)
virtualChangeSet, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, true)
err := flow.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return nil, nil, errors.Wrapf(err, "failed to process block %s", blockHash)
return nil, errors.Wrapf(err, "failed to process block %s", blockHash)
}
missingParentsError := &ruleerrors.ErrMissingParents{}
if errors.As(err, missingParentsError) {
return missingParentsError.MissingParentHashes, nil, nil
return missingParentsError.MissingParentHashes, nil
}
log.Warnf("Rejected block %s from %s: %s", blockHash, flow.peer, err)
return nil, nil, protocolerrors.Wrapf(true, err, "got invalid block %s from relay", blockHash)
// A duplicate block should not appear to the user as a warning, as it is already reported in the calling function
if !errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Warnf("Rejected block %s from %s: %s", blockHash, flow.peer, err)
}
return nil, protocolerrors.Wrapf(true, err, "got invalid block %s from relay", blockHash)
}
return nil, virtualChangeSet, nil
return nil, nil
}
func (flow *handleRelayInvsFlow) relayBlock(block *externalapi.DomainBlock) error {
@@ -369,12 +445,16 @@ func (flow *handleRelayInvsFlow) AddOrphanRootsToQueue(orphan *externalapi.Domai
"probably happened because it was randomly evicted immediately after it was added.", orphan)
}
if len(orphanRoots) == 0 {
// In some rare cases we get here when there are already no orphan roots
return nil
}
log.Infof("Block %s has %d missing ancestors. Adding them to the invs queue...", orphan, len(orphanRoots))
invMessages := make([]*appmessage.MsgInvRelayBlock, len(orphanRoots))
invMessages := make([]invRelayBlock, len(orphanRoots))
for i, root := range orphanRoots {
log.Debugf("Adding block %s missing ancestor %s to the invs queue", orphan, root)
invMessages[i] = appmessage.NewMsgInvBlock(root)
invMessages[i] = invRelayBlock{Hash: root, IsOrphanRoot: true}
}
flow.invsQueue = append(invMessages, flow.invsQueue...)


@@ -0,0 +1,95 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"sort"
)
// RequestAnticoneContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestAnticoneContext interface {
Domain() domain.Domain
Config() *config.Config
}
type handleRequestAnticoneFlow struct {
RequestAnticoneContext
incomingRoute, outgoingRoute *router.Route
peer *peer.Peer
}
// HandleRequestAnticone handles RequestAnticone messages
func HandleRequestAnticone(context RequestAnticoneContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
flow := &handleRequestAnticoneFlow{
RequestAnticoneContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
func (flow *handleRequestAnticoneFlow) start() error {
for {
blockHash, contextHash, err := receiveRequestAnticone(flow.incomingRoute)
if err != nil {
return err
}
log.Debugf("Received requestAnticone with blockHash: %s, contextHash: %s", blockHash, contextHash)
log.Debugf("Getting past(%s) cap anticone(%s) for peer %s", contextHash, blockHash, flow.peer)
// GetAnticone is expected to be called by the syncee for getting the anticone of the header selected tip
// intersected by past of relayed block, and is thus expected to be bounded by mergeset limit since
// we relay blocks only if they enter virtual's mergeset. We add a factor of 2 for possible sync gaps.
blockHashes, err := flow.Domain().Consensus().GetAnticone(blockHash, contextHash,
flow.Config().ActiveNetParams.MergeSetSizeLimit*2)
if err != nil {
return protocolerrors.Wrap(true, err, "Failed querying anticone")
}
log.Debugf("Got %d header hashes in past(%s) cap anticone(%s)", len(blockHashes), contextHash, blockHash)
blockHeaders := make([]*appmessage.MsgBlockHeader, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := flow.Domain().Consensus().GetBlockHeader(blockHash)
if err != nil {
return err
}
blockHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(blockHeader)
}
// We sort the headers in bottom-up topological order before sending
sort.Slice(blockHeaders, func(i, j int) bool {
return blockHeaders[i].BlueWork.Cmp(blockHeaders[j].BlueWork) < 0
})
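// Note (ours): a block's blue work strictly exceeds each of its parents',
// so sorting by ascending blue work yields a valid bottom-up topological
// order for inserting these headers.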
blockHeadersMessage := appmessage.NewBlockHeadersMessage(blockHeaders)
err = flow.outgoingRoute.Enqueue(blockHeadersMessage)
if err != nil {
return err
}
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
return err
}
}
}
func receiveRequestAnticone(incomingRoute *router.Route) (blockHash *externalapi.DomainHash,
contextHash *externalapi.DomainHash, err error) {
message, err := incomingRoute.Dequeue()
if err != nil {
return nil, nil, err
}
msgRequestAnticone := message.(*appmessage.MsgRequestAnticone)
return msgRequestAnticone.BlockHash, msgRequestAnticone.ContextHash, nil
}


@@ -10,7 +10,9 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
const ibdBatchSize = router.DefaultMaxMessages
// This constant must be equal at both syncer and syncee. Therefore, never (!!) change this constant unless a new p2p
// version is introduced. See `TestIBDBatchSizeLessThanRouteCapacity` as well.
const ibdBatchSize = 99
// RequestHeadersContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestHeadersContext interface {
@@ -42,7 +44,34 @@ func (flow *handleRequestHeadersFlow) start() error {
if err != nil {
return err
}
log.Debugf("Recieved requestHeaders with lowHash: %s, highHash: %s", lowHash, highHash)
log.Debugf("Received requestHeaders with lowHash: %s, highHash: %s", lowHash, highHash)
consensus := flow.Domain().Consensus()
lowHashInfo, err := consensus.GetBlockInfo(lowHash)
if err != nil {
return err
}
if !lowHashInfo.HasHeader() {
return protocolerrors.Errorf(true, "Block %s does not exist", lowHash)
}
highHashInfo, err := consensus.GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashInfo.HasHeader() {
return protocolerrors.Errorf(true, "Block %s does not exist", highHash)
}
isLowSelectedAncestorOfHigh, err := consensus.IsInSelectedParentChainOf(lowHash, highHash)
if err != nil {
return err
}
if !isLowSelectedAncestorOfHigh {
return protocolerrors.Errorf(true, "Expected %s to be on the selected chain of %s",
lowHash, highHash)
}
for !lowHash.Equal(highHash) {
log.Debugf("Getting block headers between %s and %s to %s", lowHash, highHash, flow.peer)
@@ -51,7 +80,7 @@ func (flow *handleRequestHeadersFlow) start() error {
// in order to avoid locking the consensus for too long
// maxBlocks MUST be >= MergeSetSizeLimit + 1
const maxBlocks = 1 << 10
blockHashes, _, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash, maxBlocks)
blockHashes, _, err := consensus.GetHashesBetween(lowHash, highHash, maxBlocks)
if err != nil {
return err
}
@@ -59,7 +88,7 @@ func (flow *handleRequestHeadersFlow) start() error {
blockHeaders := make([]*appmessage.MsgBlockHeader, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := flow.Domain().Consensus().GetBlockHeader(blockHash)
blockHeader, err := consensus.GetBlockHeader(blockHash)
if err != nil {
return err
}


@@ -6,7 +6,6 @@ import (
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
@@ -21,8 +20,8 @@ import (
type IBDContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock, virtualChangeSet *externalapi.VirtualChangeSet) error
OnVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error
OnNewBlock(block *externalapi.DomainBlock) error
OnNewBlockTemplate() error
OnPruningPointUTXOSetOverride() error
IsIBDRunning() bool
TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool
@@ -76,38 +75,19 @@ func (flow *handleIBDFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) er
flow.logIBDFinished(isFinishedSuccessfully)
}()
highHash := consensushashing.BlockHash(block)
log.Criticalf("IBD started with peer %s and highHash %s", flow.peer, highHash)
log.Criticalf("Syncing blocks up to %s", highHash)
log.Criticalf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, highestSharedBlockFound, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return err
}
log.Criticalf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
checkpoint, err := externalapi.NewDomainHashFromString("05ff0f2e1d201dcaee7c5e567cc2c1d42ca3cce9fefbd3b519dc68b5bb89d0b9")
relayBlockHash := consensushashing.BlockHash(block)
log.Debugf("IBD started with peer %s and relayBlockHash %s", flow.peer, relayBlockHash)
log.Debugf("Syncing blocks up to %s", relayBlockHash)
log.Debugf("Trying to find highest known syncer chain block from peer %s with relay hash %s", flow.peer, relayBlockHash)
syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, err := flow.negotiateMissingSyncerChainSegment()
if err != nil {
return err
}
info, err := flow.Domain().Consensus().GetBlockInfo(checkpoint)
if err != nil {
return err
}
if info.Exists {
isInSelectedParentChainOf, err := flow.Domain().Consensus().IsInSelectedParentChainOf(checkpoint, highestSharedBlockHash)
if err != nil {
return err
}
if !isInSelectedParentChainOf {
log.Criticalf("Stopped IBD because the checkpoint %s is not in the selected chain of %s", checkpoint, highestSharedBlockHash)
return nil
}
}
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(block, highestSharedBlockFound)
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(
block, highestKnownSyncerChainHash)
if err != nil {
return err
}
@@ -118,7 +98,7 @@ func (flow *handleIBDFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) er
if shouldDownloadHeadersProof {
log.Infof("Starting IBD with headers proof")
err := flow.ibdWithHeadersProof(highHash, block.Header.DAAScore())
err := flow.ibdWithHeadersProof(syncerHeaderSelectedTipHash, relayBlockHash, block.Header.DAAScore())
if err != nil {
return err
}
@@ -131,27 +111,162 @@ func (flow *handleIBDFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) er
if isGenesisVirtualSelectedParent {
log.Infof("Cannot IBD to %s because it won't change the pruning point. The node needs to IBD "+
"to the recent pruning point before normal operation can resume.", highHash)
"to the recent pruning point before normal operation can resume.", relayBlockHash)
return nil
}
}
err = flow.syncPruningPointFutureHeaders(flow.Domain().Consensus(), highestSharedBlockHash, highHash, block.Header.DAAScore())
err = flow.syncPruningPointFutureHeaders(
flow.Domain().Consensus(),
syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, relayBlockHash, block.Header.DAAScore())
if err != nil {
return err
}
}
err = flow.syncMissingBlockBodies(highHash)
// We start by syncing missing bodies over the syncer selected chain
err = flow.syncMissingBlockBodies(syncerHeaderSelectedTipHash)
if err != nil {
return err
}
relayBlockInfo, err := flow.Domain().Consensus().GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
// Relay block might be in the anticone of syncer selected tip, thus
// check its chain for missing bodies as well.
// Note: this operation can be slightly optimized to avoid the full chain search since relay block
// is in syncer virtual mergeset which has bounded size.
if relayBlockInfo.BlockStatus == externalapi.StatusHeaderOnly {
err = flow.syncMissingBlockBodies(relayBlockHash)
if err != nil {
return err
}
}
log.Debugf("Finished syncing blocks up to %s", highHash)
log.Debugf("Finished syncing blocks up to %s", relayBlockHash)
isFinishedSuccessfully = true
return nil
}
func (flow *handleIBDFlow) negotiateMissingSyncerChainSegment() (*externalapi.DomainHash, *externalapi.DomainHash, error) {
/*
Algorithm:
Request full selected chain block locator from syncer
Find the highest block which we know
Repeat the locator step over the new range until finding max(past(syncee) \cap chain(syncer))
*/
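// Illustrative walk-through (ours, not part of the source): suppose the
// syncer's locator is [tip, mid, quarter, ..., genesis] with exponentially
// growing gaps. If we know quarter but not mid, we re-request a locator
// bounded by (mid, quarter] and repeat. Each zoom-in shrinks the candidate
// range, so the negotiation terminates after a number of steps logarithmic
// in the chain length, much like a binary search over the syncer's chain.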
// Empty hashes indicate that the full chain is queried
locatorHashes, err := flow.getSyncerChainBlockLocator(nil, nil, common.DefaultTimeout)
if err != nil {
return nil, nil, err
}
if len(locatorHashes) == 0 {
return nil, nil, protocolerrors.Errorf(true, "Expecting initial syncer chain block locator "+
"to contain at least one element")
}
log.Debugf("IBD chain negotiation with peer %s started and received %d hashes (%s, %s)", flow.peer,
len(locatorHashes), locatorHashes[0], locatorHashes[len(locatorHashes)-1])
syncerHeaderSelectedTipHash := locatorHashes[0]
var highestKnownSyncerChainHash *externalapi.DomainHash
chainNegotiationRestartCounter := 0
chainNegotiationZoomCounts := 0
initialLocatorLen := len(locatorHashes)
for {
var lowestUnknownSyncerChainHash, currentHighestKnownSyncerChainHash *externalapi.DomainHash
for _, syncerChainHash := range locatorHashes {
info, err := flow.Domain().Consensus().GetBlockInfo(syncerChainHash)
if err != nil {
return nil, nil, err
}
if info.Exists {
currentHighestKnownSyncerChainHash = syncerChainHash
break
}
lowestUnknownSyncerChainHash = syncerChainHash
}
// No unknown blocks, break. Note this can only happen in the first iteration
if lowestUnknownSyncerChainHash == nil {
highestKnownSyncerChainHash = currentHighestKnownSyncerChainHash
break
}
// No shared block, break
if currentHighestKnownSyncerChainHash == nil {
highestKnownSyncerChainHash = nil
break
}
// No point in zooming further
if len(locatorHashes) == 1 {
highestKnownSyncerChainHash = currentHighestKnownSyncerChainHash
break
}
// Zoom in
locatorHashes, err = flow.getSyncerChainBlockLocator(
lowestUnknownSyncerChainHash,
currentHighestKnownSyncerChainHash, time.Second*10)
if err != nil {
return nil, nil, err
}
if len(locatorHashes) > 0 {
if !locatorHashes[0].Equal(lowestUnknownSyncerChainHash) ||
!locatorHashes[len(locatorHashes)-1].Equal(currentHighestKnownSyncerChainHash) {
return nil, nil, protocolerrors.Errorf(true, "Expecting the high and low "+
"hashes to match the locator bounds")
}
chainNegotiationZoomCounts++
log.Debugf("IBD chain negotiation with peer %s zoomed in (%d) and received %d hashes (%s, %s)", flow.peer,
chainNegotiationZoomCounts, len(locatorHashes), locatorHashes[0], locatorHashes[len(locatorHashes)-1])
if len(locatorHashes) == 2 {
// We found our search target
highestKnownSyncerChainHash = currentHighestKnownSyncerChainHash
break
}
if chainNegotiationZoomCounts > initialLocatorLen*2 {
// Since the zoom-in always queries two consecutive entries in the previous locator, it is
// expected to decrease in size at least every two iterations
return nil, nil, protocolerrors.Errorf(true,
"IBD chain negotiation: Number of zoom-in steps %d exceeded the upper bound of 2*%d",
chainNegotiationZoomCounts, initialLocatorLen)
}
} else { // Empty locator signals a restart due to chain changes
chainNegotiationZoomCounts = 0
chainNegotiationRestartCounter++
if chainNegotiationRestartCounter > 32 {
return nil, nil, protocolerrors.Errorf(false,
"IBD chain negotiation with syncer %s exceeded restart limit %d", flow.peer, chainNegotiationRestartCounter)
}
log.Warnf("IBD chain negotiation with syncer %s restarted %d times", flow.peer, chainNegotiationRestartCounter)
// An empty locator signals that the syncer chain was modified and no longer contains one of
// the queried hashes, so we restart the search. We use a shorter timeout here to avoid a timeout attack
locatorHashes, err = flow.getSyncerChainBlockLocator(nil, nil, time.Second*10)
if err != nil {
return nil, nil, err
}
if len(locatorHashes) == 0 {
return nil, nil, protocolerrors.Errorf(true, "Expecting initial syncer chain block locator "+
"to contain at least one element")
}
log.Infof("IBD chain negotiation with peer %s restarted (%d) and received %d hashes (%s, %s)", flow.peer,
chainNegotiationRestartCounter, len(locatorHashes), locatorHashes[0], locatorHashes[len(locatorHashes)-1])
initialLocatorLen = len(locatorHashes)
// Reset syncer's header selected tip
syncerHeaderSelectedTipHash = locatorHashes[0]
}
}
log.Debugf("Found highest known syncer chain block %s from peer %s",
highestKnownSyncerChainHash, flow.peer)
return syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, nil
}
func (flow *handleIBDFlow) isGenesisVirtualSelectedParent() (bool, error) {
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
@@ -166,141 +281,57 @@ func (flow *handleIBDFlow) logIBDFinished(isFinishedSuccessfully bool) {
if !isFinishedSuccessfully {
successString = "(interrupted)"
}
log.Infof("IBD finished %s", successString)
log.Infof("IBD with peer %s finished %s", flow.peer, successString)
}
// findHighestSharedBlock attempts to find the highest shared block between the peer
// and this node. This method may fail because the peer and us have conflicting pruning
// points. In that case we return (nil, false, nil) so that we may stop IBD gracefully.
func (flow *handleIBDFlow) findHighestSharedBlockHash(
targetHash *externalapi.DomainHash) (*externalapi.DomainHash, bool, error) {
func (flow *handleIBDFlow) getSyncerChainBlockLocator(
highHash, lowHash *externalapi.DomainHash, timeout time.Duration) ([]*externalapi.DomainHash, error) {
log.Debugf("Sending a blockLocator to %s between pruning point and headers selected tip", flow.peer)
blockLocator, err := flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
requestIbdChainBlockLocatorMessage := appmessage.NewMsgIBDRequestChainBlockLocator(highHash, lowHash)
err := flow.outgoingRoute.Enqueue(requestIbdChainBlockLocatorMessage)
if err != nil {
return nil, false, err
return nil, err
}
for {
highestHash, highestHashFound, err := flow.fetchHighestHash(targetHash, blockLocator)
if err != nil {
return nil, false, err
}
if !highestHashFound {
return nil, false, nil
}
highestHashIndex, err := flow.findHighestHashIndex(highestHash, blockLocator)
if err != nil {
return nil, false, err
}
if highestHashIndex == 0 ||
// If the block locator contains only two adjacent chain blocks, the
// syncer will always find the same highest chain block, so to avoid
// an endless loop, we explicitly stop the loop in such situation.
(len(blockLocator) == 2 && highestHashIndex == 1) {
return highestHash, true, nil
}
locatorHashAboveHighestHash := highestHash
if highestHashIndex > 0 {
locatorHashAboveHighestHash = blockLocator[highestHashIndex-1]
}
blockLocator, err = flow.nextBlockLocator(highestHash, locatorHashAboveHighestHash)
if err != nil {
return nil, false, err
}
}
}
func (flow *handleIBDFlow) nextBlockLocator(lowHash, highHash *externalapi.DomainHash) (externalapi.BlockLocator, error) {
log.Debugf("Sending a blockLocator to %s between %s and %s", flow.peer, lowHash, highHash)
blockLocator, err := flow.Domain().Consensus().CreateHeadersSelectedChainBlockLocator(lowHash, highHash)
message, err := flow.incomingRoute.DequeueWithTimeout(timeout)
if err != nil {
if errors.Is(model.ErrBlockNotInSelectedParentChain, err) {
return nil, err
}
log.Debugf("Headers selected parent chain moved since findHighestSharedBlockHash - " +
"restarting with full block locator")
blockLocator, err = flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
if err != nil {
return nil, err
}
}
return blockLocator, nil
}
func (flow *handleIBDFlow) findHighestHashIndex(
highestHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (int, error) {
highestHashIndex := 0
highestHashIndexFound := false
for i, blockLocatorHash := range blockLocator {
if highestHash.Equal(blockLocatorHash) {
highestHashIndex = i
highestHashIndexFound = true
break
}
}
if !highestHashIndexFound {
return 0, protocolerrors.Errorf(true, "highest hash %s "+
"returned from peer %s is not in the original blockLocator", highestHash, flow.peer)
}
log.Debugf("The index of the highest hash in the original "+
"blockLocator sent to %s is %d", flow.peer, highestHashIndex)
return highestHashIndex, nil
}
// fetchHighestHash attempts to fetch the highest hash the peer knows amongst the given
// blockLocator. This method may fail because the peer and us have conflicting pruning
// points. In that case we return (nil, false, nil) so that we may stop IBD gracefully.
func (flow *handleIBDFlow) fetchHighestHash(
targetHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (*externalapi.DomainHash, bool, error) {
ibdBlockLocatorMessage := appmessage.NewMsgIBDBlockLocator(targetHash, blockLocator)
err := flow.outgoingRoute.Enqueue(ibdBlockLocatorMessage)
if err != nil {
return nil, false, err
}
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, false, err
return nil, err
}
switch message := message.(type) {
case *appmessage.MsgIBDBlockLocatorHighestHash:
highestHash := message.HighestHash
log.Debugf("The highest hash the peer %s knows is %s", flow.peer, highestHash)
return highestHash, true, nil
case *appmessage.MsgIBDBlockLocatorHighestHashNotFound:
log.Debugf("Peer %s does not know any block within our blockLocator. "+
"This should only happen if there's a DAG split deeper than the pruning point.", flow.peer)
return nil, false, nil
case *appmessage.MsgIBDChainBlockLocator:
if len(message.BlockLocatorHashes) > 64 {
return nil, protocolerrors.Errorf(true,
"Got block locator of size %d>64 while expecting locator to have size "+
"which is logarithmic in DAG size (which should never exceed 2^64)",
len(message.BlockLocatorHashes))
}
return message.BlockLocatorHashes, nil
default:
return nil, false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlockLocatorHighestHash, message.Command())
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDChainBlockLocator, message.Command())
}
}
func (flow *handleIBDFlow) syncPruningPointFutureHeaders(consensus externalapi.Consensus, highestSharedBlockHash *externalapi.DomainHash,
highHash *externalapi.DomainHash, highBlockDAAScore uint64) error {
func (flow *handleIBDFlow) syncPruningPointFutureHeaders(consensus externalapi.Consensus,
syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, relayBlockHash *externalapi.DomainHash,
highBlockDAAScoreHint uint64) error {
log.Infof("Downloading headers from %s", flow.peer)
err := flow.sendRequestHeaders(highestSharedBlockHash, highHash)
if highestKnownSyncerChainHash.Equal(syncerHeaderSelectedTipHash) {
// No need to get syncer selected tip headers, so sync relay past and return
return flow.syncMissingRelayPast(consensus, syncerHeaderSelectedTipHash, relayBlockHash)
}
err := flow.sendRequestHeaders(highestKnownSyncerChainHash, syncerHeaderSelectedTipHash)
if err != nil {
return err
}
highestSharedBlockHeader, err := consensus.GetBlockHeader(highestSharedBlockHash)
highestSharedBlockHeader, err := consensus.GetBlockHeader(highestKnownSyncerChainHash)
if err != nil {
return err
}
progressReporter := newIBDProgressReporter(highestSharedBlockHeader.DAAScore(), highBlockDAAScore, "block headers")
progressReporter := newIBDProgressReporter(highestSharedBlockHeader.DAAScore(), highBlockDAAScoreHint, "block headers")
// Keep a short queue of BlockHeadersMessages so that there's
// never a moment when the node is not validating and inserting
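A generic sketch — ours, rather than the code below — of this pipelining pattern: a small buffered channel lets the network reader stay a step ahead of validation without unbounded memory growth.

package pipelinesketch

// pipeline receives chunks on one goroutine and processes them on another;
// the buffered channel bounds how far the receiver may run ahead.
func pipeline(receive func() (chunk []int, done bool), process func(int)) {
	const queueSize = 2
	chunks := make(chan []int, queueSize)
	go func() {
		defer close(chunks)
		for {
			chunk, done := receive()
			if done {
				return
			}
			chunks <- chunk // Blocks once queueSize chunks are pending
		}
	}()
	for chunk := range chunks {
		for _, item := range chunk {
			process(item)
		}
	}
}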
@@ -318,6 +349,11 @@ func (flow *handleIBDFlow) syncPruningPointFutureHeaders(consensus externalapi.C
close(blockHeadersMessageChan)
return
}
if len(blockHeadersMessage.BlockHeaders) == 0 {
// The syncer should have sent a done message if the search completed, and not an empty list
errChan <- protocolerrors.Errorf(true, "Received an empty headers message from peer %s", flow.peer)
return
}
blockHeadersMessageChan <- blockHeadersMessage
@@ -329,24 +365,14 @@ func (flow *handleIBDFlow) syncPruningPointFutureHeaders(consensus externalapi.C
}
})
count := 0
for {
select {
case ibdBlocksMessage, ok := <-blockHeadersMessageChan:
if !ok {
// If the highHash has not been received, the peer is misbehaving
highHashBlockInfo, err := consensus.GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashBlockInfo.Exists {
return protocolerrors.Errorf(true, "did not receive "+
"highHash block %s from peer %s during block download", highHash, flow.peer)
}
return nil
return flow.syncMissingRelayPast(consensus, syncerHeaderSelectedTipHash, relayBlockHash)
}
for _, header := range ibdBlocksMessage.BlockHeaders {
_, err := flow.processHeader(consensus, header)
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
@@ -360,11 +386,70 @@ func (flow *handleIBDFlow) syncPruningPointFutureHeaders(consensus externalapi.C
}
}
func (flow *handleIBDFlow) sendRequestHeaders(highestSharedBlockHash *externalapi.DomainHash,
peerSelectedTipHash *externalapi.DomainHash) error {
func (flow *handleIBDFlow) syncMissingRelayPast(consensus externalapi.Consensus, syncerHeaderSelectedTipHash *externalapi.DomainHash, relayBlockHash *externalapi.DomainHash) error {
// Finished downloading syncer selected tip blocks,
// check if we already have the triggering relayBlockHash
relayBlockInfo, err := consensus.GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
if !relayBlockInfo.Exists {
// Send a special header request for the selected tip anticone. This is expected to
// be a small set, as it is bounded by the size of virtual's mergeset.
err = flow.sendRequestAnticone(syncerHeaderSelectedTipHash, relayBlockHash)
if err != nil {
return err
}
anticoneHeadersMessage, anticoneDone, err := flow.receiveHeaders()
if err != nil {
return err
}
if anticoneDone {
return protocolerrors.Errorf(true,
"Expected one anticone header chunk for past(%s) cap anticone(%s) but got zero",
relayBlockHash, syncerHeaderSelectedTipHash)
}
_, anticoneDone, err = flow.receiveHeaders()
if err != nil {
return err
}
if !anticoneDone {
return protocolerrors.Errorf(true,
"Expected only one anticone header chunk for past(%s) cap anticone(%s)",
relayBlockHash, syncerHeaderSelectedTipHash)
}
for _, header := range anticoneHeadersMessage.BlockHeaders {
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
}
}
msgGetBlockInvs := appmessage.NewMsgRequstHeaders(highestSharedBlockHash, peerSelectedTipHash)
return flow.outgoingRoute.Enqueue(msgGetBlockInvs)
// If the relayBlockHash has still not been received, the peer is misbehaving
relayBlockInfo, err = consensus.GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
if !relayBlockInfo.Exists {
return protocolerrors.Errorf(true, "did not receive "+
"relayBlockHash block %s from peer %s during block download", relayBlockHash, flow.peer)
}
return nil
}
func (flow *handleIBDFlow) sendRequestAnticone(
syncerHeaderSelectedTipHash, relayBlockHash *externalapi.DomainHash) error {
msgRequestAnticone := appmessage.NewMsgRequestAnticone(syncerHeaderSelectedTipHash, relayBlockHash)
return flow.outgoingRoute.Enqueue(msgRequestAnticone)
}
func (flow *handleIBDFlow) sendRequestHeaders(
highestKnownSyncerChainHash, syncerHeaderSelectedTipHash *externalapi.DomainHash) error {
msgRequestHeaders := appmessage.NewMsgRequstHeaders(highestKnownSyncerChainHash, syncerHeaderSelectedTipHash)
return flow.outgoingRoute.Enqueue(msgRequestHeaders)
}
func (flow *handleIBDFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneHeaders bool, err error) {
@@ -387,7 +472,7 @@ func (flow *handleIBDFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeader
}
}
func (flow *handleIBDFlow) processHeader(consensus externalapi.Consensus, msgBlockHeader *appmessage.MsgBlockHeader) (bool, error) {
func (flow *handleIBDFlow) processHeader(consensus externalapi.Consensus, msgBlockHeader *appmessage.MsgBlockHeader) error {
header := appmessage.BlockHeaderToDomainBlockHeader(msgBlockHeader)
block := &externalapi.DomainBlock{
Header: header,
@@ -397,26 +482,27 @@ func (flow *handleIBDFlow) processHeader(consensus externalapi.Consensus, msgBlo
blockHash := consensushashing.BlockHash(block)
blockInfo, err := consensus.GetBlockInfo(blockHash)
if err != nil {
return false, err
return err
}
if blockInfo.Exists {
log.Debugf("Block header %s is already in the DAG. Skipping...", blockHash)
return false, nil
return nil
}
_, err = consensus.ValidateAndInsertBlock(block, false)
err = consensus.ValidateAndInsertBlock(block, false)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return false, errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
return errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
}
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping block header %s as it is a duplicate", blockHash)
} else {
log.Infof("Rejected block header %s from %s during IBD: %s", blockHash, flow.peer, err)
return false, protocolerrors.Wrapf(true, err, "got invalid block header %s during IBD", blockHash)
return protocolerrors.Wrapf(true, err, "got invalid block header %s during IBD", blockHash)
}
}
return true, nil
return nil
}
func (flow *handleIBDFlow) validatePruningPointFutureHeaderTimestamps() error {
@@ -480,7 +566,7 @@ func (flow *handleIBDFlow) receiveAndInsertPruningPointUTXOSet(
receivedChunkCount++
if receivedChunkCount%ibdBatchSize == 0 {
log.Debugf("Received %d UTXO set chunks so far, totaling in %d UTXOs",
log.Infof("Received %d UTXO set chunks so far, totaling in %d UTXOs",
receivedChunkCount, receivedUTXOCount)
requestNextPruningPointUTXOSetChunkMessage := appmessage.NewMsgRequestNextPruningPointUTXOSetChunk()
@@ -568,7 +654,7 @@ func (flow *handleIBDFlow) syncMissingBlockBodies(highHash *externalapi.DomainHa
return err
}
virtualChangeSet, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, false)
err = flow.Domain().Consensus().ValidateAndInsertBlock(block, false)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping IBD Block %s as it has already been added to the DAG", blockHash)
@@ -576,7 +662,7 @@ func (flow *handleIBDFlow) syncMissingBlockBodies(highHash *externalapi.DomainHa
}
return protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "invalid block %s", blockHash)
}
err = flow.OnNewBlock(block, virtualChangeSet)
err = flow.OnNewBlock(block)
if err != nil {
return err
}
@@ -600,38 +686,28 @@ func (flow *handleIBDFlow) banIfBlockIsHeaderOnly(block *externalapi.DomainBlock
}
func (flow *handleIBDFlow) resolveVirtual(estimatedVirtualDAAScoreTarget uint64) error {
virtualDAAScoreStart, err := flow.Domain().Consensus().GetVirtualDAAScore()
err := flow.Domain().Consensus().ResolveVirtual(func(virtualDAAScoreStart uint64, virtualDAAScore uint64) {
var percents int
if estimatedVirtualDAAScoreTarget-virtualDAAScoreStart <= 0 {
percents = 100
} else {
percents = int(float64(virtualDAAScore-virtualDAAScoreStart) / float64(estimatedVirtualDAAScoreTarget-virtualDAAScoreStart) * 100)
}
if percents < 0 {
percents = 0
} else if percents > 100 {
percents = 100
}
log.Infof("Resolving virtual. Estimated progress: %d%%", percents)
})
if err != nil {
return err
}
for i := 0; ; i++ {
if i%10 == 0 {
virtualDAAScore, err := flow.Domain().Consensus().GetVirtualDAAScore()
if err != nil {
return err
}
var percents int
if estimatedVirtualDAAScoreTarget-virtualDAAScoreStart <= 0 {
percents = 100
} else {
percents = int(float64(virtualDAAScore-virtualDAAScoreStart) / float64(estimatedVirtualDAAScoreTarget-virtualDAAScoreStart) * 100)
}
log.Infof("Resolving virtual. Estimated progress: %d%%", percents)
}
virtualChangeSet, isCompletelyResolved, err := flow.Domain().Consensus().ResolveVirtual()
if err != nil {
return err
}
err = flow.OnVirtualChange(virtualChangeSet)
if err != nil {
return err
}
if isCompletelyResolved {
log.Infof("Resolved virtual")
return nil
}
log.Infof("Resolved virtual")
err = flow.OnNewBlockTemplate()
if err != nil {
return err
}
return nil
}
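
The new ResolveVirtual callback above reports progress as a clamped percentage of the estimated DAA-score target. A minimal standalone sketch of that calculation (not the kaspad source; names are illustrative), including a guard against uint64 underflow when the current score is still below the start:

package main

import "fmt"

// progressPercent estimates sync progress toward a target DAA score.
// The target is only a hint, so the result is clamped to [0, 100].
func progressPercent(start, current, target uint64) int {
	if target <= start {
		return 100
	}
	if current < start {
		return 0 // guard against uint64 underflow
	}
	percent := int(float64(current-start) / float64(target-start) * 100)
	if percent > 100 {
		percent = 100
	}
	return percent
}

func main() {
	fmt.Println(progressPercent(1000, 1500, 2000)) // 50
}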

View File

@@ -10,6 +10,10 @@ type ibdProgressReporter struct {
}
func newIBDProgressReporter(lowDAAScore uint64, highDAAScore uint64, objectName string) *ibdProgressReporter {
if highDAAScore <= lowDAAScore {
// Avoid a zero or negative diff
highDAAScore = lowDAAScore + 1
}
return &ibdProgressReporter{
lowDAAScore: lowDAAScore,
highDAAScore: highDAAScore,
@@ -23,7 +27,16 @@ func newIBDProgressReporter(lowDAAScore uint64, highDAAScore uint64, objectName
func (ipr *ibdProgressReporter) reportProgress(processedDelta int, highestProcessedDAAScore uint64) {
ipr.processed += processedDelta
relativeDAAScore := highestProcessedDAAScore - ipr.lowDAAScore
// Avoid exploding numbers in the percentage report, since the original `highDAAScore` might have been only a hint
if highestProcessedDAAScore > ipr.highDAAScore {
ipr.highDAAScore = highestProcessedDAAScore + 1 // + 1 for keeping it at 99%
ipr.totalDAAScoreDifference = ipr.highDAAScore - ipr.lowDAAScore
}
relativeDAAScore := uint64(0)
if highestProcessedDAAScore > ipr.lowDAAScore {
// Avoid a negative diff
relativeDAAScore = highestProcessedDAAScore - ipr.lowDAAScore
}
progressPercent := int((float64(relativeDAAScore) / float64(ipr.totalDAAScoreDifference)) * 100)
if progressPercent > ipr.lastReportedProgressPercent {
log.Infof("IBD: Processed %d %s (%d%%)", ipr.processed, ipr.objectName, progressPercent)

View File

@@ -12,18 +12,20 @@ import (
"time"
)
func (flow *handleIBDFlow) ibdWithHeadersProof(highHash *externalapi.DomainHash, highBlockDAAScore uint64) error {
err := flow.Domain().InitStagingConsensus()
func (flow *handleIBDFlow) ibdWithHeadersProof(
syncerHeaderSelectedTipHash, relayBlockHash *externalapi.DomainHash, highBlockDAAScore uint64) error {
err := flow.Domain().InitStagingConsensusWithoutGenesis()
if err != nil {
return err
}
err = flow.downloadHeadersAndPruningUTXOSet(highHash, highBlockDAAScore)
err = flow.downloadHeadersAndPruningUTXOSet(syncerHeaderSelectedTipHash, relayBlockHash, highBlockDAAScore)
if err != nil {
if !flow.IsRecoverableError(err) {
return err
}
log.Infof("IBD with pruning proof from %s was unsuccessful. Deleting the staging consensus.", flow.peer)
deleteStagingConsensusErr := flow.Domain().DeleteStagingConsensus()
if deleteStagingConsensusErr != nil {
return deleteStagingConsensusErr
@@ -32,6 +34,8 @@ func (flow *handleIBDFlow) ibdWithHeadersProof(highHash *externalapi.DomainHash,
return err
}
log.Infof("Header download stage of IBD with pruning proof completed successfully from %s. "+
"Committing the staging consensus and deleting the previous obsolete one if such exists.", flow.peer)
err = flow.Domain().CommitStagingConsensus()
if err != nil {
return err
@@ -45,11 +49,29 @@ func (flow *handleIBDFlow) ibdWithHeadersProof(highHash *externalapi.DomainHash,
return nil
}
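
The flow above treats the staging consensus like a transaction: it is initialized, headers and the pruning UTXO set are downloaded into it, and it is then either committed or, on a recoverable error, deleted. A condensed sketch of that lifecycle (function names are placeholders, not the domain API):

package main

import (
	"errors"
	"fmt"
)

var errRecoverable = errors.New("recoverable sync error")

func syncWithStaging(download func() error) error {
	fmt.Println("init staging consensus")
	if err := download(); err != nil {
		if !errors.Is(err, errRecoverable) {
			return err // unrecoverable: leave cleanup to the caller
		}
		fmt.Println("delete staging consensus")
		return err
	}
	fmt.Println("commit staging consensus")
	return nil
}

func main() {
	_ = syncWithStaging(func() error { return errRecoverable })
}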
func (flow *handleIBDFlow) shouldSyncAndShouldDownloadHeadersProof(highBlock *externalapi.DomainBlock,
highestSharedBlockFound bool) (shouldDownload, shouldSync bool, err error) {
func (flow *handleIBDFlow) shouldSyncAndShouldDownloadHeadersProof(
relayBlock *externalapi.DomainBlock,
highestKnownSyncerChainHash *externalapi.DomainHash) (shouldDownload, shouldSync bool, err error) {
if !highestSharedBlockFound {
hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock)
var highestSharedBlockFound, isPruningPointInSharedBlockChain bool
if highestKnownSyncerChainHash != nil {
highestSharedBlockFound = true
pruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return false, false, err
}
isPruningPointInSharedBlockChain, err = flow.Domain().Consensus().IsInSelectedParentChainOf(
pruningPoint, highestKnownSyncerChainHash)
if err != nil {
return false, false, err
}
}
// Note: in the case where `highestSharedBlockFound == true && isPruningPointInSharedBlockChain == false`
// we might have info here that is relevant to finality conflict decisions. This should be taken into
// account when we improve this aspect.
if !highestSharedBlockFound || !isPruningPointInSharedBlockChain {
hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(relayBlock)
if err != nil {
return false, false, err
}
@@ -64,7 +86,7 @@ func (flow *handleIBDFlow) shouldSyncAndShouldDownloadHeadersProof(highBlock *ex
return false, true, nil
}
func (flow *handleIBDFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock *externalapi.DomainBlock) (bool, error) {
func (flow *handleIBDFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(relayBlock *externalapi.DomainBlock) (bool, error) {
headersSelectedTip, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return false, err
@@ -75,11 +97,11 @@ func (flow *handleIBDFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruni
return false, err
}
if highBlock.Header.BlueScore() < headersSelectedTipInfo.BlueScore+flow.Config().NetParams().PruningDepth() {
if relayBlock.Header.BlueScore() < headersSelectedTipInfo.BlueScore+flow.Config().NetParams().PruningDepth() {
return false, nil
}
return highBlock.Header.BlueWork().Cmp(headersSelectedTipInfo.BlueWork) > 0, nil
return relayBlock.Header.BlueWork().Cmp(headersSelectedTipInfo.BlueWork) > 0, nil
}
func (flow *handleIBDFlow) syncAndValidatePruningPointProof() (*externalapi.DomainHash, error) {
@@ -114,7 +136,10 @@ func (flow *handleIBDFlow) syncAndValidatePruningPointProof() (*externalapi.Doma
return consensushashing.HeaderHash(pruningPointProof.Headers[0][len(pruningPointProof.Headers[0])-1]), nil
}
func (flow *handleIBDFlow) downloadHeadersAndPruningUTXOSet(highHash *externalapi.DomainHash, highBlockDAAScore uint64) error {
func (flow *handleIBDFlow) downloadHeadersAndPruningUTXOSet(
syncerHeaderSelectedTipHash, relayBlockHash *externalapi.DomainHash,
highBlockDAAScore uint64) error {
proofPruningPoint, err := flow.syncAndValidatePruningPointProof()
if err != nil {
return err
@@ -131,19 +156,20 @@ func (flow *handleIBDFlow) downloadHeadersAndPruningUTXOSet(highHash *externalap
return protocolerrors.Errorf(true, "the genesis pruning point violates finality")
}
err = flow.syncPruningPointFutureHeaders(flow.Domain().StagingConsensus(), proofPruningPoint, highHash, highBlockDAAScore)
err = flow.syncPruningPointFutureHeaders(flow.Domain().StagingConsensus(),
syncerHeaderSelectedTipHash, proofPruningPoint, relayBlockHash, highBlockDAAScore)
if err != nil {
return err
}
log.Infof("Headers downloaded from peer %s", flow.peer)
highHashInfo, err := flow.Domain().StagingConsensus().GetBlockInfo(highHash)
relayBlockInfo, err := flow.Domain().StagingConsensus().GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
if !highHashInfo.Exists {
if !relayBlockInfo.Exists {
return protocolerrors.Errorf(true, "the triggering IBD block was not sent")
}
@@ -206,7 +232,8 @@ func (flow *handleIBDFlow) syncPruningPointsAndPruningPointAnticone(proofPruning
return err
}
for {
i := 0
for ; ; i++ {
blockWithTrustedData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
@@ -220,9 +247,19 @@ func (flow *handleIBDFlow) syncPruningPointsAndPruningPointAnticone(proofPruning
if err != nil {
return err
}
// We check i+2 rather than i+1 because the pruning point itself was downloaded outside the loop,
// so the anticone block at index i is the (i+2)th block received overall.
if (i+2)%ibdBatchSize == 0 {
log.Infof("Downloaded %d blocks from the pruning point anticone", i+1)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestNextPruningPointAndItsAnticoneBlocks())
if err != nil {
return err
}
}
}
log.Infof("Finished downloading pruning point and its anticone from %s", flow.peer)
log.Infof("Finished downloading pruning point and its anticone from %s. Total blocks downloaded: %d", flow.peer, i+1)
return nil
}
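
A toy illustration of the i+2 batching rule above, with a hypothetical batch size of 4 (the real ibdBatchSize differs): because the pruning point was downloaded before the loop, anticone block i is the (i+2)th block overall, and a new batch is requested exactly when the next block would cross a batch boundary:

package main

import "fmt"

func main() {
	const batchSize = 4 // illustrative; not the real ibdBatchSize
	for i := 0; i < 10; i++ {
		// Block i is the (i+2)th block overall: the pruning point was #1.
		if (i+2)%batchSize == 0 {
			fmt.Printf("after anticone block %d (#%d overall): request next batch\n", i, i+2)
		}
	}
}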
@@ -243,7 +280,7 @@ func (flow *handleIBDFlow) processBlockWithTrustedData(
blockWithTrustedData.GHOSTDAGData = append(blockWithTrustedData.GHOSTDAGData, appmessage.GHOSTDAGHashPairToDomainGHOSTDAGHashPair(data.GHOSTDAGData[index]))
}
_, err := consensus.ValidateAndInsertBlockWithTrustedData(blockWithTrustedData, false)
err := consensus.ValidateAndInsertBlockWithTrustedData(blockWithTrustedData, false)
return err
}
@@ -344,6 +381,7 @@ func (flow *handleIBDFlow) syncPruningPointUTXOSet(consensus externalapi.Consens
log.Info("Fetching the pruning point UTXO set")
isSuccessful, err := flow.fetchMissingUTXOSet(consensus, pruningPoint)
if err != nil {
log.Infof("An error occurred while fetching the pruning point UTXO set. Stopping IBD. (%s)", err)
return false, err
}

View File

@@ -2,6 +2,8 @@ package ping
import (
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/pkg/errors"
"time"
"github.com/kaspanet/kaspad/app/appmessage"
@@ -61,6 +63,9 @@ func (flow *sendPingsFlow) start() error {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
if errors.Is(err, router.ErrTimeout) {
return errors.Wrapf(flowcontext.ErrPingTimeout, err.Error())
}
return err
}
pongMessage := message.(*appmessage.MsgPong)

View File

@@ -1,14 +1,14 @@
package v4
package v5
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/kaspanet/kaspad/app/protocol/flows/v4/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/v4/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/v4/ping"
"github.com/kaspanet/kaspad/app/protocol/flows/v4/rejects"
"github.com/kaspanet/kaspad/app/protocol/flows/v4/transactionrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/ping"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/rejects"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
@@ -78,6 +78,7 @@ func registerBlockRelayFlows(m protocolManager, router *routerpkg.Router, isStop
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdIBDBlock, appmessage.CmdPruningPoints,
appmessage.CmdPruningPointProof,
appmessage.CmdTrustedData,
appmessage.CmdIBDChainBlockLocator,
},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBD(m.Context(), incomingRoute,
@@ -121,7 +122,7 @@ func registerBlockRelayFlows(m protocolManager, router *routerpkg.Router, isStop
),
m.RegisterFlow("HandlePruningPointAndItsAnticoneRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointAndItsAnticone}, isStopping, errChan,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointAndItsAnticone, appmessage.CmdRequestNextPruningPointAndItsAnticoneBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointAndItsAnticoneRequests(m.Context(), incomingRoute, outgoingRoute, peer)
},
@@ -134,6 +135,20 @@ func registerBlockRelayFlows(m protocolManager, router *routerpkg.Router, isStop
},
),
m.RegisterFlow("HandleRequestIBDChainBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDChainBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestIBDChainBlockLocator(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterFlow("HandleRequestAnticone", router,
[]appmessage.MessageCommand{appmessage.CmdRequestAnticone}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestAnticone(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
m.RegisterFlow("HandlePruningPointProofRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointProof}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {

View File

@@ -1,7 +1,7 @@
package testing
import (
"github.com/kaspanet/kaspad/app/protocol/flows/v4/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/addressexchange"
"testing"
"time"

View File

@@ -22,7 +22,7 @@ type TransactionsRelayContext interface {
SharedRequestedTransactions() *flowcontext.SharedRequestedTransactions
OnTransactionAddedToMempool()
EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error
IsIBDRunning() bool
IsNearlySynced() (bool, error)
}
type handleRelayedTransactionsFlow struct {
@@ -50,7 +50,12 @@ func (flow *handleRelayedTransactionsFlow) start() error {
return err
}
if flow.IsIBDRunning() {
isNearlySynced, err := flow.IsNearlySynced()
if err != nil {
return err
}
// Transaction relay is disabled if the node is out of sync and thus not mining
if !isNearlySynced {
continue
}
@@ -97,7 +102,7 @@ func (flow *handleRelayedTransactionsFlow) requestInvTransactions(
func (flow *handleRelayedTransactionsFlow) isKnownTransaction(txID *externalapi.DomainTransactionID) bool {
// Ask the transaction memory pool if the transaction is known
// to it in any form (main pool or orphan).
if _, ok := flow.Domain().MiningManager().GetTransaction(txID); ok {
if _, _, ok := flow.Domain().MiningManager().GetTransaction(txID, true, true); ok {
return true
}

View File

@@ -3,7 +3,7 @@ package transactionrelay_test
import (
"errors"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/kaspanet/kaspad/app/protocol/flows/v4/transactionrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/transactionrelay"
"strings"
"testing"
@@ -47,8 +47,8 @@ func (m *mocTransactionsRelayContext) EnqueueTransactionIDsForPropagation(transa
func (m *mocTransactionsRelayContext) OnTransactionAddedToMempool() {
}
func (m *mocTransactionsRelayContext) IsIBDRunning() bool {
return false
func (m *mocTransactionsRelayContext) IsNearlySynced() (bool, error) {
return true, nil
}
// TestHandleRelayedTransactionsNotFound tests the flow of HandleRelayedTransactions when the peer doesn't

View File

@@ -30,7 +30,7 @@ func (flow *handleRequestedTransactionsFlow) start() error {
}
for _, transactionID := range msgRequestTransactions.IDs {
tx, ok := flow.Domain().MiningManager().GetTransaction(transactionID)
tx, _, ok := flow.Domain().MiningManager().GetTransaction(transactionID, true, false)
if !ok {
msgTransactionNotFound := appmessage.NewMsgTransactionNotFound(transactionID)
@@ -40,7 +40,6 @@ func (flow *handleRequestedTransactionsFlow) start() error {
}
continue
}
err := flow.outgoingRoute.Enqueue(appmessage.DomainTransactionToMsgTx(tx))
if err != nil {
return err

View File

@@ -2,7 +2,7 @@ package transactionrelay_test
import (
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/kaspanet/kaspad/app/protocol/flows/v4/transactionrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/transactionrelay"
"testing"
"github.com/kaspanet/kaspad/app/appmessage"

View File

@@ -2,10 +2,11 @@ package protocol
import (
"fmt"
"github.com/kaspanet/kaspad/app/protocol/common"
"sync"
"sync/atomic"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain"
@@ -90,14 +91,9 @@ func (m *Manager) runFlows(flows []*common.Flow, peer *peerpkg.Peer, errChan <-c
return <-errChan
}
// SetOnVirtualChange sets the onVirtualChangeHandler handler
func (m *Manager) SetOnVirtualChange(onVirtualChangeHandler flowcontext.OnVirtualChangeHandler) {
m.context.SetOnVirtualChangeHandler(onVirtualChangeHandler)
}
// SetOnBlockAddedToDAGHandler sets the onBlockAddedToDAG handler
func (m *Manager) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler flowcontext.OnBlockAddedToDAGHandler) {
m.context.SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler)
// SetOnNewBlockTemplateHandler sets the onNewBlockTemplate handler
func (m *Manager) SetOnNewBlockTemplateHandler(onNewBlockTemplateHandler flowcontext.OnNewBlockTemplateHandler) {
m.context.SetOnNewBlockTemplateHandler(onNewBlockTemplateHandler)
}
// SetOnPruningPointUTXOSetOverrideHandler sets the OnPruningPointUTXOSetOverride handler
@@ -110,12 +106,6 @@ func (m *Manager) SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMemp
m.context.SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler)
}
// ShouldMine returns whether it's ok to use block template from this node
// for mining purposes.
func (m *Manager) ShouldMine() (bool, error) {
return m.context.ShouldMine()
}
// IsIBDRunning returns true if IBD is currently marked as running
func (m *Manager) IsIBDRunning() bool {
return m.context.IsIBDRunning()

View File

@@ -3,7 +3,7 @@ package protocol
import (
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flows/ready"
v4 "github.com/kaspanet/kaspad/app/protocol/flows/v4"
"github.com/kaspanet/kaspad/app/protocol/flows/v5"
"sync"
"sync/atomic"
@@ -23,7 +23,7 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
// errChan is used by the flow goroutines to return to runFlows when an error occurs.
// They are both initialized here and passed to register flows.
isStopping := uint32(0)
errChan := make(chan error)
errChan := make(chan error, 1)
receiveVersionRoute, sendVersionRoute, receiveReadyRoute := registerHandshakeRoutes(router)
@@ -76,8 +76,8 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
var flows []*common.Flow
log.Infof("Registering p2p flows for peer %s for protocol version %d", peer, peer.ProtocolVersion())
switch peer.ProtocolVersion() {
case 4:
flows = v4.Register(m, router, errChan, &isStopping)
case 5:
flows = v5.Register(m, router, errChan, &isStopping)
default:
panic(errors.Errorf("no way to handle protocol version %d", peer.ProtocolVersion()))
}
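
The change from make(chan error) to make(chan error, 1) above means a flow goroutine that fails after the reader has already returned can still deliver its error and exit, instead of blocking forever on an unbuffered send. A minimal demonstration of the difference (a sketch, not the kaspad code):

package main

import (
	"errors"
	"fmt"
)

func main() {
	errChan := make(chan error, 1) // with capacity 0 the goroutine below could leak

	go func() {
		// Never blocks with capacity 1, even if nobody is receiving yet.
		errChan <- errors.New("flow failed")
	}()

	fmt.Println(<-errChan)
}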

View File

@@ -12,6 +12,7 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/pkg/errors"
)
// Manager is an RPC manager
@@ -28,6 +29,7 @@ func NewManager(
connectionManager *connmanager.ConnectionManager,
addressManager *addressmanager.AddressManager,
utxoIndex *utxoindex.UTXOIndex,
consensusEventsChan chan externalapi.ConsensusEvent,
shutDownChan chan<- struct{}) *Manager {
manager := Manager{
@@ -44,50 +46,90 @@ func NewManager(
}
netAdapter.SetRPCRouterInitializer(manager.routerInitializer)
manager.initConsensusEventsHandler(consensusEventsChan)
return &manager
}
// NotifyBlockAddedToDAG notifies the manager that a block has been added to the DAG
func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyBlockAddedToDAG")
func (m *Manager) initConsensusEventsHandler(consensusEventsChan chan externalapi.ConsensusEvent) {
spawn("consensusEventsHandler", func() {
for {
consensusEvent, ok := <-consensusEventsChan
if !ok {
return
}
switch event := consensusEvent.(type) {
case *externalapi.VirtualChangeSet:
err := m.notifyVirtualChange(event)
if err != nil {
panic(err)
}
case *externalapi.BlockAdded:
err := m.notifyBlockAddedToDAG(event.Block)
if err != nil {
panic(err)
}
default:
panic(errors.Errorf("Got event of unsupported type %T", consensusEvent))
}
}
})
}
// notifyBlockAddedToDAG notifies the manager that a block has been added to the DAG
func (m *Manager) notifyBlockAddedToDAG(block *externalapi.DomainBlock) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.notifyBlockAddedToDAG")
defer onEnd()
err := m.NotifyVirtualChange(virtualChangeSet)
if err != nil {
return err
// Before converting the block and populating it, we check if any listeners are interested.
// This is done since most nodes do not use this event.
if !m.context.NotificationManager.HasBlockAddedListeners() {
return nil
}
rpcBlock := appmessage.DomainBlockToRPCBlock(block)
err = m.context.PopulateBlockWithVerboseData(rpcBlock, block.Header, block, false)
err := m.context.PopulateBlockWithVerboseData(rpcBlock, block.Header, block, true)
if err != nil {
return err
}
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(rpcBlock)
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
err = m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
if err != nil {
return err
}
return nil
}
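
notifyBlockAddedToDAG above short-circuits before the relatively expensive block-to-RPC conversion when no listener subscribed to BlockAdded events. The same guard-before-work pattern in isolation (illustrative types, not the RPC manager):

package main

import "fmt"

type listener struct{ wantsBlockAdded bool }

type manager struct{ listeners []listener }

func (m *manager) hasBlockAddedListeners() bool {
	for _, l := range m.listeners {
		if l.wantsBlockAdded {
			return true
		}
	}
	return false
}

func (m *manager) notifyBlockAdded(block string) {
	// Skip the expensive conversion when nobody is listening.
	if !m.hasBlockAddedListeners() {
		return
	}
	fmt.Println("converted and notified:", block)
}

func main() {
	(&manager{}).notifyBlockAdded("block A")                              // no-op
	(&manager{listeners: []listener{{true}}}).notifyBlockAdded("block B") // notifies
}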
// NotifyVirtualChange notifies the manager that the virtual block has been changed.
func (m *Manager) NotifyVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyBlockAddedToDAG")
// notifyVirtualChange notifies the manager that the virtual block has been changed.
func (m *Manager) notifyVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualChange")
defer onEnd()
if m.context.Config.UTXOIndex {
if m.context.Config.UTXOIndex && virtualChangeSet.VirtualUTXODiff != nil {
err := m.notifyUTXOsChanged(virtualChangeSet)
if err != nil {
return err
}
}
err := m.notifyVirtualSelectedParentBlueScoreChanged()
err := m.notifyVirtualSelectedParentBlueScoreChanged(virtualChangeSet.VirtualSelectedParentBlueScore)
if err != nil {
return err
}
err = m.notifyVirtualDaaScoreChanged()
err = m.notifyVirtualDaaScoreChanged(virtualChangeSet.VirtualDAAScore)
if err != nil {
return err
}
if virtualChangeSet.VirtualSelectedParentChainChanges == nil ||
(len(virtualChangeSet.VirtualSelectedParentChainChanges.Added) == 0 &&
len(virtualChangeSet.VirtualSelectedParentChainChanges.Removed) == 0) {
return nil
}
err = m.notifyVirtualSelectedParentChainChanged(virtualChangeSet)
if err != nil {
return err
@@ -96,6 +138,13 @@ func (m *Manager) NotifyVirtualChange(virtualChangeSet *externalapi.VirtualChang
return nil
}
// NotifyNewBlockTemplate notifies the manager that a new
// block template is available for miners
func (m *Manager) NotifyNewBlockTemplate() error {
notification := appmessage.NewNewBlockTemplateNotificationMessage()
return m.context.NotificationManager.NotifyNewBlockTemplate(notification)
}
// NotifyPruningPointUTXOSetOverride notifies the manager whenever the UTXO index
// resets due to pruning point change via IBD.
func (m *Manager) NotifyPruningPointUTXOSetOverride() error {
@@ -138,6 +187,7 @@ func (m *Manager) notifyUTXOsChanged(virtualChangeSet *externalapi.VirtualChange
if err != nil {
return err
}
return m.context.NotificationManager.NotifyUTXOsChanged(utxoIndexChanges)
}
@@ -153,33 +203,18 @@ func (m *Manager) notifyPruningPointUTXOSetOverride() error {
return m.context.NotificationManager.NotifyPruningPointUTXOSetOverride()
}
func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged() error {
func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged(virtualSelectedParentBlueScore uint64) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentBlueScoreChanged")
defer onEnd()
virtualSelectedParent, err := m.context.Domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return err
}
blockInfo, err := m.context.Domain.Consensus().GetBlockInfo(virtualSelectedParent)
if err != nil {
return err
}
notification := appmessage.NewVirtualSelectedParentBlueScoreChangedNotificationMessage(blockInfo.BlueScore)
notification := appmessage.NewVirtualSelectedParentBlueScoreChangedNotificationMessage(virtualSelectedParentBlueScore)
return m.context.NotificationManager.NotifyVirtualSelectedParentBlueScoreChanged(notification)
}
func (m *Manager) notifyVirtualDaaScoreChanged() error {
func (m *Manager) notifyVirtualDaaScoreChanged(virtualDAAScore uint64) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualDaaScoreChanged")
defer onEnd()
virtualDAAScore, err := m.context.Domain.Consensus().GetVirtualDAAScore()
if err != nil {
return err
}
notification := appmessage.NewVirtualDaaScoreChangedNotificationMessage(virtualDAAScore)
return m.context.NotificationManager.NotifyVirtualDaaScoreChanged(notification)
}
@@ -188,10 +223,16 @@ func (m *Manager) notifyVirtualSelectedParentChainChanged(virtualChangeSet *exte
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentChainChanged")
defer onEnd()
notification, err := m.context.ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
virtualChangeSet.VirtualSelectedParentChainChanges)
if err != nil {
return err
hasListeners, includeAcceptedTransactionIDs := m.context.NotificationManager.HasListenersThatPropagateVirtualSelectedParentChainChanged()
if hasListeners {
notification, err := m.context.ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
virtualChangeSet.VirtualSelectedParentChainChanges, includeAcceptedTransactionIDs)
if err != nil {
return err
}
return m.context.NotificationManager.NotifyVirtualSelectedParentChainChanged(notification)
}
return m.context.NotificationManager.NotifyVirtualSelectedParentChainChanged(notification)
return nil
}

View File

@@ -48,6 +48,9 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleStopNotifyingPruningPointUTXOSetOverrideRequest,
appmessage.CmdEstimateNetworkHashesPerSecondRequestMessage: rpchandlers.HandleEstimateNetworkHashesPerSecond,
appmessage.CmdNotifyVirtualDaaScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualDaaScoreChanged,
appmessage.CmdNotifyNewBlockTemplateRequestMessage: rpchandlers.HandleNotifyNewBlockTemplate,
appmessage.CmdGetCoinSupplyRequestMessage: rpchandlers.HandleGetCoinSupply,
appmessage.CmdGetMempoolEntriesByAddressesRequestMessage: rpchandlers.HandleGetMempoolEntriesByAddresses,
}
func (m *Manager) routerInitializer(router *router.Router, netConnection *netadapter.NetConnection) {

View File

@@ -3,12 +3,14 @@ package rpccontext
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
// ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage converts
// VirtualSelectedParentChainChanges to VirtualSelectedParentChainChangedNotificationMessage
func (ctx *Context) ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
selectedParentChainChanges *externalapi.SelectedChainPath) (*appmessage.VirtualSelectedParentChainChangedNotificationMessage, error) {
selectedParentChainChanges *externalapi.SelectedChainPath, includeAcceptedTransactionIDs bool) (
*appmessage.VirtualSelectedParentChainChangedNotificationMessage, error) {
removedChainBlockHashes := make([]string, len(selectedParentChainChanges.Removed))
for i, removed := range selectedParentChainChanges.Removed {
@@ -20,5 +22,58 @@ func (ctx *Context) ConvertVirtualSelectedParentChainChangesToChainChangedNotifi
addedChainBlocks[i] = added.String()
}
return appmessage.NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes, addedChainBlocks), nil
var acceptedTransactionIDs []*appmessage.AcceptedTransactionIDs
if includeAcceptedTransactionIDs {
var err error
acceptedTransactionIDs, err = ctx.getAndConvertAcceptedTransactionIDs(selectedParentChainChanges)
if err != nil {
return nil, err
}
}
return appmessage.NewVirtualSelectedParentChainChangedNotificationMessage(
removedChainBlockHashes, addedChainBlocks, acceptedTransactionIDs), nil
}
func (ctx *Context) getAndConvertAcceptedTransactionIDs(selectedParentChainChanges *externalapi.SelectedChainPath) (
[]*appmessage.AcceptedTransactionIDs, error) {
acceptedTransactionIDs := make([]*appmessage.AcceptedTransactionIDs, len(selectedParentChainChanges.Added))
const chunk = 1000
position := 0
for position < len(selectedParentChainChanges.Added) {
var chainBlocksChunk []*externalapi.DomainHash
if position+chunk > len(selectedParentChainChanges.Added) {
chainBlocksChunk = selectedParentChainChanges.Added[position:]
} else {
chainBlocksChunk = selectedParentChainChanges.Added[position : position+chunk]
}
// We use chunks in order to avoid blocking consensus for too long
chainBlocksAcceptanceData, err := ctx.Domain.Consensus().GetBlocksAcceptanceData(chainBlocksChunk)
if err != nil {
return nil, err
}
for i, addedChainBlock := range chainBlocksChunk {
chainBlockAcceptanceData := chainBlocksAcceptanceData[i]
acceptedTransactionIDs[position+i] = &appmessage.AcceptedTransactionIDs{
AcceptingBlockHash: addedChainBlock.String(),
AcceptedTransactionIDs: nil,
}
for _, blockAcceptanceData := range chainBlockAcceptanceData {
for _, transactionAcceptanceData := range blockAcceptanceData.TransactionAcceptanceData {
if transactionAcceptanceData.IsAccepted {
acceptedTransactionIDs[position+i].AcceptedTransactionIDs =
append(acceptedTransactionIDs[position+i].AcceptedTransactionIDs,
consensushashing.TransactionID(transactionAcceptanceData.Transaction).String())
}
}
}
}
position += chunk
}
return acceptedTransactionIDs, nil
}
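
The chunked iteration above bounds how long each GetBlocksAcceptanceData call can hold consensus resources. The pattern in isolation (a sketch; the chunk size and callback are illustrative):

package main

import "fmt"

// forEachChunk processes items in bounded slices so no single call
// holds shared resources for too long.
func forEachChunk(items []string, chunkSize int, process func(chunk []string)) {
	for position := 0; position < len(items); position += chunkSize {
		end := position + chunkSize
		if end > len(items) {
			end = len(items)
		}
		process(items[position:end])
	}
}

func main() {
	forEachChunk([]string{"a", "b", "c", "d", "e"}, 2, func(chunk []string) {
		fmt.Println(chunk)
	})
}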

View File

@@ -44,7 +44,7 @@ func NewContext(cfg *config.Config,
UTXOIndex: utxoIndex,
ShutDownChan: shutDownChan,
}
context.NotificationManager = NewNotificationManager()
context.NotificationManager = NewNotificationManager(cfg.ActiveNetParams)
return context
}

View File

@@ -3,6 +3,11 @@ package rpccontext
import (
"sync"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/utxoindex"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
@@ -13,6 +18,7 @@ import (
type NotificationManager struct {
sync.RWMutex
listeners map[*routerpkg.Router]*NotificationListener
params *dagconfig.Params
}
// UTXOsChangedNotificationAddress represents a kaspad address.
@@ -24,6 +30,8 @@ type UTXOsChangedNotificationAddress struct {
// NotificationListener represents a registered RPC notification listener
type NotificationListener struct {
params *dagconfig.Params
propagateBlockAddedNotifications bool
propagateVirtualSelectedParentChainChangedNotifications bool
propagateFinalityConflictNotifications bool
@@ -32,13 +40,16 @@ type NotificationListener struct {
propagateVirtualSelectedParentBlueScoreChangedNotifications bool
propagateVirtualDaaScoreChangedNotifications bool
propagatePruningPointUTXOSetOverrideNotifications bool
propagateNewBlockTemplateNotifications bool
propagateUTXOsChangedNotificationAddresses map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress
propagateUTXOsChangedNotificationAddresses map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress
includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications bool
}
// NewNotificationManager creates a new NotificationManager
func NewNotificationManager() *NotificationManager {
func NewNotificationManager(params *dagconfig.Params) *NotificationManager {
return &NotificationManager{
params: params,
listeners: make(map[*routerpkg.Router]*NotificationListener),
}
}
@@ -48,7 +59,7 @@ func (nm *NotificationManager) AddListener(router *routerpkg.Router) {
nm.Lock()
defer nm.Unlock()
listener := newNotificationListener()
listener := newNotificationListener(nm.params)
nm.listeners[router] = listener
}
@@ -72,6 +83,19 @@ func (nm *NotificationManager) Listener(router *routerpkg.Router) (*Notification
return listener, nil
}
// HasBlockAddedListeners indicates if the notification manager has any listeners for `BlockAdded` events
func (nm *NotificationManager) HasBlockAddedListeners() bool {
nm.RLock()
defer nm.RUnlock()
for _, listener := range nm.listeners {
if listener.propagateBlockAddedNotifications {
return true
}
}
return false
}
// NotifyBlockAdded notifies the notification manager that a block has been added to the DAG
func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAddedNotificationMessage) error {
nm.RLock()
@@ -79,10 +103,8 @@ func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAd
for router, listener := range nm.listeners {
if listener.propagateBlockAddedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if errors.Is(err, routerpkg.ErrRouteClosed) {
log.Warnf("Couldn't send notification: %s", err)
} else if err != nil {
err := router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
}
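
Judging from the replaced lines, MaybeEnqueue appears to fold the old ErrRouteClosed handling into the route itself, so one disconnected listener no longer aborts notifying the rest. A self-contained sketch under that assumption (toy types, not the router package):

package main

import (
	"errors"
	"fmt"
)

var errRouteClosed = errors.New("route is closed")

type route struct{ closed bool }

func (r *route) enqueue(msg string) error {
	if r.closed {
		return errRouteClosed
	}
	fmt.Println("sent:", msg)
	return nil
}

// maybeEnqueue treats a closed route as non-fatal, so one disconnected
// listener cannot abort notifying the remaining listeners.
func (r *route) maybeEnqueue(msg string) error {
	err := r.enqueue(msg)
	if errors.Is(err, errRouteClosed) {
		fmt.Println("warning: couldn't send notification:", err)
		return nil
	}
	return err
}

func main() {
	open, closed := &route{}, &route{closed: true}
	_ = open.maybeEnqueue("block added")
	_ = closed.maybeEnqueue("block added")
}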
@@ -91,13 +113,27 @@ func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAd
}
// NotifyVirtualSelectedParentChainChanged notifies the notification manager that the DAG's selected parent chain has changed
func (nm *NotificationManager) NotifyVirtualSelectedParentChainChanged(notification *appmessage.VirtualSelectedParentChainChangedNotificationMessage) error {
func (nm *NotificationManager) NotifyVirtualSelectedParentChainChanged(
notification *appmessage.VirtualSelectedParentChainChangedNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
notificationWithoutAcceptedTransactionIDs := &appmessage.VirtualSelectedParentChainChangedNotificationMessage{
RemovedChainBlockHashes: notification.RemovedChainBlockHashes,
AddedChainBlockHashes: notification.AddedChainBlockHashes,
}
for router, listener := range nm.listeners {
if listener.propagateVirtualSelectedParentChainChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
var err error
if listener.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications {
err = router.OutgoingRoute().MaybeEnqueue(notification)
} else {
err = router.OutgoingRoute().MaybeEnqueue(notificationWithoutAcceptedTransactionIDs)
}
if err != nil {
return err
}
@@ -106,6 +142,31 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentChainChanged(notificat
return nil
}
// HasListenersThatPropagateVirtualSelectedParentChainChanged returns whether there's any listener that is
// subscribed to VirtualSelectedParentChainChanged notifications as well as checks if any such listener requested
// to include AcceptedTransactionIDs.
func (nm *NotificationManager) HasListenersThatPropagateVirtualSelectedParentChainChanged() (hasListeners, hasListenersThatRequireAcceptedTransactionIDs bool) {
nm.RLock()
defer nm.RUnlock()
hasListeners = false
hasListenersThatRequireAcceptedTransactionIDs = false
for _, listener := range nm.listeners {
if listener.propagateVirtualSelectedParentChainChangedNotifications {
hasListeners = true
// Generating acceptedTransactionIDs is a heavy operation, so we check if it's needed by any listener.
if listener.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications {
hasListenersThatRequireAcceptedTransactionIDs = true
break
}
}
}
return hasListeners, hasListenersThatRequireAcceptedTransactionIDs
}
// NotifyFinalityConflict notifies the notification manager that there's a finality conflict in the DAG
func (nm *NotificationManager) NotifyFinalityConflict(notification *appmessage.FinalityConflictNotificationMessage) error {
nm.RLock()
@@ -146,7 +207,10 @@ func (nm *NotificationManager) NotifyUTXOsChanged(utxoChanges *utxoindex.UTXOCha
for router, listener := range nm.listeners {
if listener.propagateUTXOsChangedNotifications {
// Filter utxoChanges and create a notification
notification := listener.convertUTXOChangesToUTXOsChangedNotification(utxoChanges)
notification, err := listener.convertUTXOChangesToUTXOsChangedNotification(utxoChanges)
if err != nil {
return err
}
// Don't send the notification if it's empty
if len(notification.Added) == 0 && len(notification.Removed) == 0 {
@@ -154,7 +218,7 @@ func (nm *NotificationManager) NotifyUTXOsChanged(utxoChanges *utxoindex.UTXOCha
}
// Enqueue the notification
err := router.OutgoingRoute().Enqueue(notification)
err = router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
@@ -173,7 +237,7 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentBlueScoreChanged(
for router, listener := range nm.listeners {
if listener.propagateVirtualSelectedParentBlueScoreChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
err := router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
@@ -192,6 +256,25 @@ func (nm *NotificationManager) NotifyVirtualDaaScoreChanged(
for router, listener := range nm.listeners {
if listener.propagateVirtualDaaScoreChangedNotifications {
err := router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
}
}
return nil
}
// NotifyNewBlockTemplate notifies the notification manager that a new
// block template is available for miners
func (nm *NotificationManager) NotifyNewBlockTemplate(
notification *appmessage.NewBlockTemplateNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagateNewBlockTemplateNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
return err
@@ -218,18 +301,27 @@ func (nm *NotificationManager) NotifyPruningPointUTXOSetOverride() error {
return nil
}
func newNotificationListener() *NotificationListener {
func newNotificationListener(params *dagconfig.Params) *NotificationListener {
return &NotificationListener{
params: params,
propagateBlockAddedNotifications: false,
propagateVirtualSelectedParentChainChangedNotifications: false,
propagateFinalityConflictNotifications: false,
propagateFinalityConflictResolvedNotifications: false,
propagateUTXOsChangedNotifications: false,
propagateVirtualSelectedParentBlueScoreChangedNotifications: false,
propagateNewBlockTemplateNotifications: false,
propagatePruningPointUTXOSetOverrideNotifications: false,
}
}
// IncludeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications returns true if this listener
// includes accepted transaction IDs in its virtual-selected-parent-chain-changed notifications
func (nl *NotificationListener) IncludeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications() bool {
return nl.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications
}
// PropagateBlockAddedNotifications instructs the listener to send block added notifications
// to the remote listener
func (nl *NotificationListener) PropagateBlockAddedNotifications() {
@@ -238,8 +330,9 @@ func (nl *NotificationListener) PropagateBlockAddedNotifications() {
// PropagateVirtualSelectedParentChainChangedNotifications instructs the listener to send chain changed notifications
// to the remote listener
func (nl *NotificationListener) PropagateVirtualSelectedParentChainChangedNotifications() {
func (nl *NotificationListener) PropagateVirtualSelectedParentChainChangedNotifications(includeAcceptedTransactionIDs bool) {
nl.propagateVirtualSelectedParentChainChangedNotifications = true
nl.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications = includeAcceptedTransactionIDs
}
// PropagateFinalityConflictNotifications instructs the listener to send finality conflict notifications
@@ -258,7 +351,11 @@ func (nl *NotificationListener) PropagateFinalityConflictResolvedNotifications()
// to the remote listener for the given addresses. Subsequent calls instruct the listener to
// send UTXOs changed notifications for those addresses along with the old ones. Duplicate addresses
// are ignored.
func (nl *NotificationListener) PropagateUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
func (nm *NotificationManager) PropagateUTXOsChangedNotifications(nl *NotificationListener, addresses []*UTXOsChangedNotificationAddress) {
// Apply a write-lock since the internal listener address map is modified
nm.Lock()
defer nm.Unlock()
if !nl.propagateUTXOsChangedNotifications {
nl.propagateUTXOsChangedNotifications = true
nl.propagateUTXOsChangedNotificationAddresses =
@@ -273,7 +370,11 @@ func (nl *NotificationListener) PropagateUTXOsChangedNotifications(addresses []*
// StopPropagatingUTXOsChangedNotifications instructs the listener to stop sending UTXOs
// changed notifications to the remote listener for the given addresses. Addresses for which
// notifications are not currently sent are ignored.
func (nl *NotificationListener) StopPropagatingUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
func (nm *NotificationManager) StopPropagatingUTXOsChangedNotifications(nl *NotificationListener, addresses []*UTXOsChangedNotificationAddress) {
// Apply a write-lock since the internal listener address map is modified
nm.Lock()
defer nm.Unlock()
if !nl.propagateUTXOsChangedNotifications {
return
}
@@ -284,7 +385,7 @@ func (nl *NotificationListener) StopPropagatingUTXOsChangedNotifications(address
}
func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
utxoChanges *utxoindex.UTXOChanges) *appmessage.UTXOsChangedNotificationMessage {
utxoChanges *utxoindex.UTXOChanges) (*appmessage.UTXOsChangedNotificationMessage, error) {
// As an optimization, we iterate over the smaller set (O(n)) among the two below
// and check existence over the larger set (O(1))
@@ -299,27 +400,64 @@ func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
}
for scriptPublicKeyString, removedOutpoints := range utxoChanges.Removed {
for scriptPublicKeyString, removedPairs := range utxoChanges.Removed {
if listenerAddress, ok := nl.propagateUTXOsChangedNotificationAddresses[scriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, removedPairs)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
} else {
} else if addressesSize > 0 {
for _, listenerAddress := range nl.propagateUTXOsChangedNotificationAddresses {
listenerScriptPublicKeyString := listenerAddress.ScriptPublicKeyString
if addedPairs, ok := utxoChanges.Added[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
if removedOutpoints, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
if removedPairs, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, removedPairs)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
} else {
for scriptPublicKeyString, addedPairs := range utxoChanges.Added {
addressString, err := nl.scriptPubKeyStringToAddressString(scriptPublicKeyString)
if err != nil {
return nil, err
}
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(addressString, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
for scriptPublicKeyString, removedPairs := range utxoChanges.Removed {
addressString, err := nl.scriptPubKeyStringToAddressString(scriptPublicKeyString)
if err != nil {
return nil, err
}
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(addressString, removedPairs)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
return notification
return notification, nil
}
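
The optimization noted at the top of this function, iterating over the smaller of two sets while probing the larger, in isolation (a generic sketch, not the listener code):

package main

import "fmt"

// intersectKeys iterates the smaller map (O(min(n, m)) iterations) and
// performs O(1) lookups in the larger one.
func intersectKeys(a, b map[string]bool) []string {
	small, large := a, b
	if len(b) < len(a) {
		small, large = b, a
	}
	var keys []string
	for k := range small {
		if large[k] {
			keys = append(keys, k)
		}
	}
	return keys
}

func main() {
	fmt.Println(intersectKeys(
		map[string]bool{"x": true},
		map[string]bool{"x": true, "y": true},
	))
}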
func (nl *NotificationListener) scriptPubKeyStringToAddressString(scriptPublicKeyString utxoindex.ScriptPublicKeyString) (string, error) {
scriptPubKey := externalapi.NewScriptPublicKeyFromString(string(scriptPublicKeyString))
// Scripts of a non-standard (unknown) type are mapped to an empty address string below
scriptType, address, err := txscript.ExtractScriptPubKeyAddress(scriptPubKey, nl.params)
if err != nil {
return "", err
}
var addressString string
if scriptType == txscript.NonStandardTy {
addressString = ""
} else {
addressString = address.String()
}
return addressString, nil
}
// PropagateVirtualSelectedParentBlueScoreChangedNotifications instructs the listener to send
@@ -334,6 +472,12 @@ func (nl *NotificationListener) PropagateVirtualDaaScoreChangedNotifications() {
nl.propagateVirtualDaaScoreChangedNotifications = true
}
// PropagateNewBlockTemplateNotifications instructs the listener to send
// new block template notifications to the remote listener
func (nl *NotificationListener) PropagateNewBlockTemplateNotifications() {
nl.propagateNewBlockTemplateNotifications = true
}
// PropagatePruningPointUTXOSetOverrideNotifications instructs the listener to send pruning point UTXO set override notifications
// to the remote listener.
func (nl *NotificationListener) PropagatePruningPointUTXOSetOverrideNotifications() {

View File

@@ -32,22 +32,6 @@ func ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(address string, pair
return utxosByAddressesEntries
}
// convertUTXOOutpointsToUTXOsByAddressesEntries converts
// UTXOOutpoints to a slice of UTXOsByAddressesEntry
func convertUTXOOutpointsToUTXOsByAddressesEntries(address string, outpoints utxoindex.UTXOOutpoints) []*appmessage.UTXOsByAddressesEntry {
utxosByAddressesEntries := make([]*appmessage.UTXOsByAddressesEntry, 0, len(outpoints))
for outpoint := range outpoints {
utxosByAddressesEntries = append(utxosByAddressesEntries, &appmessage.UTXOsByAddressesEntry{
Address: address,
Outpoint: &appmessage.RPCOutpoint{
TransactionID: outpoint.TransactionID.String(),
Index: outpoint.Index,
},
})
}
return utxosByAddressesEntries
}
// ConvertAddressStringsToUTXOsChangedNotificationAddresses converts address strings
// to UTXOsChangedNotificationAddresses
func (ctx *Context) ConvertAddressStringsToUTXOsChangedNotificationAddresses(
@@ -63,7 +47,7 @@ func (ctx *Context) ConvertAddressStringsToUTXOsChangedNotificationAddresses(
if err != nil {
return nil, errors.Errorf("Could not create a scriptPublicKey for address '%s': %s", addressString, err)
}
scriptPublicKeyString := utxoindex.ConvertScriptPublicKeyToString(scriptPublicKey)
scriptPublicKeyString := utxoindex.ScriptPublicKeyString(scriptPublicKey.String())
addresses[i] = &UTXOsChangedNotificationAddress{
Address: addressString,
ScriptPublicKeyString: scriptPublicKeyString,

View File

@@ -9,6 +9,14 @@ import (
// HandleAddPeer handles the respectively named RPC command
func HandleAddPeer(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
if context.Config.SafeRPC {
log.Warn("AddPeer RPC command called while node in safe RPC mode -- ignoring.")
response := appmessage.NewAddPeerResponseMessage()
response.Error =
appmessage.RPCErrorf("AddPeer RPC command called while node in safe RPC mode")
return response, nil
}
AddPeerRequest := request.(*appmessage.AddPeerRequestMessage)
address, err := network.NormalizeAddress(AddPeerRequest.Address, context.Config.ActiveNetParams.DefaultPort)
if err != nil {

View File

@@ -9,6 +9,14 @@ import (
// HandleBan handles the respectively named RPC command
func HandleBan(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
if context.Config.SafeRPC {
log.Warn("Ban RPC command called while node in safe RPC mode -- ignoring.")
response := appmessage.NewBanResponseMessage()
response.Error =
appmessage.RPCErrorf("Ban RPC command called while node in safe RPC mode")
return response, nil
}
banRequest := request.(*appmessage.BanRequestMessage)
ip := net.ParseIP(banRequest.IP)
if ip == nil {

View File

@@ -27,6 +27,27 @@ func HandleEstimateNetworkHashesPerSecond(
}
}
if context.Config.SafeRPC {
const windowSizeLimit = 10000
if windowSize > windowSizeLimit {
response := &appmessage.EstimateNetworkHashesPerSecondResponseMessage{}
response.Error =
appmessage.RPCErrorf(
"Requested window size %d is larger than max allowed in RPC safe mode (%d)",
windowSize, windowSizeLimit)
return response, nil
}
}
if uint64(windowSize) > context.Config.ActiveNetParams.PruningDepth() {
response := &appmessage.EstimateNetworkHashesPerSecondResponseMessage{}
response.Error =
appmessage.RPCErrorf(
"Requested window size %d is larger than pruning point depth %d",
windowSize, context.Config.ActiveNetParams.PruningDepth())
return response, nil
}
networkHashesPerSecond, err := context.Domain.Consensus().EstimateNetworkHashesPerSecond(startHash, windowSize)
if err != nil {
response := &appmessage.EstimateNetworkHashesPerSecondResponseMessage{}

View File

@@ -22,7 +22,7 @@ func HandleGetBalanceByAddress(context *rpccontext.Context, _ *router.Router, re
balance, err := getBalanceByAddress(context, getBalanceByAddressRequest.Address)
if err != nil {
rpcError := &appmessage.RPCError{}
if !errors.As(err, rpcError) {
if !errors.As(err, &rpcError) {
return nil, err
}
errorMessage := &appmessage.GetUTXOsByAddressesResponseMessage{}

View File

@@ -23,7 +23,7 @@ func HandleGetBalancesByAddresses(context *rpccontext.Context, _ *router.Router,
if err != nil {
rpcError := &appmessage.RPCError{}
if !errors.As(err, rpcError) {
if !errors.As(err, &rpcError) {
return nil, err
}
errorMessage := &appmessage.GetUTXOsByAddressesResponseMessage{}

View File

@@ -4,9 +4,11 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionhelper"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/version"
)
// HandleGetBlockTemplate handles the respectively named RPC command
@@ -15,7 +17,7 @@ func HandleGetBlockTemplate(context *rpccontext.Context, _ *router.Router, reque
payAddress, err := util.DecodeAddress(getBlockTemplateRequest.PayAddress, context.Config.ActiveNetParams.Prefix)
if err != nil {
errorMessage := &appmessage.GetBlockResponseMessage{}
errorMessage := &appmessage.GetBlockTemplateResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not decode address: %s", err)
return errorMessage, nil
}
@@ -25,18 +27,20 @@ func HandleGetBlockTemplate(context *rpccontext.Context, _ *router.Router, reque
return nil, err
}
coinbaseData := &externalapi.DomainCoinbaseData{ScriptPublicKey: scriptPublicKey}
coinbaseData := &externalapi.DomainCoinbaseData{ScriptPublicKey: scriptPublicKey, ExtraData: []byte(version.Version() + "/" + getBlockTemplateRequest.ExtraData)}
templateBlock, err := context.Domain.MiningManager().GetBlockTemplate(coinbaseData)
templateBlock, isNearlySynced, err := context.Domain.MiningManager().GetBlockTemplate(coinbaseData)
if err != nil {
return nil, err
}
if uint64(len(templateBlock.Transactions[transactionhelper.CoinbaseTransactionIndex].Payload)) > context.Config.NetParams().MaxCoinbasePayloadLength {
errorMessage := &appmessage.GetBlockTemplateResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Coinbase payload is above max length (%d). Try to shorten the extra data.", context.Config.NetParams().MaxCoinbasePayloadLength)
return errorMessage, nil
}
rpcBlock := appmessage.DomainBlockToRPCBlock(templateBlock)
isSynced, err := context.ProtocolManager.ShouldMine()
if err != nil {
return nil, err
}
return appmessage.NewGetBlockTemplateResponseMessage(rpcBlock, isSynced), nil
return appmessage.NewGetBlockTemplateResponseMessage(rpcBlock, context.ProtocolManager.Context().HasPeers() && isNearlySynced), nil
}

View File

@@ -23,6 +23,10 @@ type fakeDomain struct {
testapi.TestConsensus
}
func (d fakeDomain) ConsensusEventsChannel() chan externalapi.ConsensusEvent {
panic("implement me")
}
func (d fakeDomain) DeleteStagingConsensus() error {
panic("implement me")
}
@@ -31,7 +35,7 @@ func (d fakeDomain) StagingConsensus() externalapi.Consensus {
panic("implement me")
}
func (d fakeDomain) InitStagingConsensus() error {
func (d fakeDomain) InitStagingConsensusWithoutGenesis() error {
panic("implement me")
}

View File

@@ -0,0 +1,29 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleGetCoinSupply handles the respectively named RPC command
func HandleGetCoinSupply(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
if !context.Config.UTXOIndex {
errorMessage := &appmessage.GetCoinSupplyResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Method unavailable when kaspad is run without --utxoindex")
return errorMessage, nil
}
circulatingSompiSupply, err := context.UTXOIndex.GetCirculatingSompiSupply()
if err != nil {
return nil, err
}
response := appmessage.NewGetCoinSupplyResponseMessage(
constants.MaxSompi,
circulatingSompiSupply,
)
return response, nil
}

View File

@@ -9,10 +9,17 @@ import (
// HandleGetInfo handles the respectively named RPC command
func HandleGetInfo(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
isNearlySynced, err := context.Domain.Consensus().IsNearlySynced()
if err != nil {
return nil, err
}
response := appmessage.NewGetInfoResponseMessage(
context.NetAdapter.ID().String(),
uint64(context.Domain.MiningManager().TransactionCount()),
uint64(context.Domain.MiningManager().TransactionCount(true, false)),
version.Version(),
context.Config.UTXOIndex,
context.ProtocolManager.Context().HasPeers() && isNearlySynced,
)
return response, nil

View File

@@ -7,19 +7,40 @@ import (
)
// HandleGetMempoolEntries handles the respectively named RPC command
func HandleGetMempoolEntries(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
transactions := context.Domain.MiningManager().AllTransactions()
entries := make([]*appmessage.MempoolEntry, 0, len(transactions))
for _, transaction := range transactions {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
func HandleGetMempoolEntries(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
getMempoolEntriesRequest := request.(*appmessage.GetMempoolEntriesRequestMessage)
entries := make([]*appmessage.MempoolEntry, 0)
transactionPoolTransactions, orphanPoolTransactions := context.Domain.MiningManager().AllTransactions(!getMempoolEntriesRequest.FilterTransactionPool, getMempoolEntriesRequest.IncludeOrphanPool)
if !getMempoolEntriesRequest.FilterTransactionPool {
for _, transaction := range transactionPoolTransactions {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
entries = append(entries, &appmessage.MempoolEntry{
Fee: transaction.Fee,
Transaction: rpcTransaction,
IsOrphan: false,
})
}
}
if getMempoolEntriesRequest.IncludeOrphanPool {
for _, transaction := range orphanPoolTransactions {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
entries = append(entries, &appmessage.MempoolEntry{
Fee: transaction.Fee,
Transaction: rpcTransaction,
IsOrphan: true,
})
}
entries = append(entries, &appmessage.MempoolEntry{
Fee: transaction.Fee,
Transaction: rpcTransaction,
})
}
return appmessage.NewGetMempoolEntriesResponseMessage(entries), nil

View File

@@ -0,0 +1,122 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util"
)
// HandleGetMempoolEntriesByAddresses handles the respectively named RPC command
func HandleGetMempoolEntriesByAddresses(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
getMempoolEntriesByAddressesRequest := request.(*appmessage.GetMempoolEntriesByAddressesRequestMessage)
mempoolEntriesByAddresses := make([]*appmessage.MempoolEntryByAddress, 0)
sendingInTransactionPool, receivingInTransactionPool, sendingInOrphanPool, receivingInOrphanPool, err := context.Domain.MiningManager().GetTransactionsByAddresses(!getMempoolEntriesByAddressesRequest.FilterTransactionPool, getMempoolEntriesByAddressesRequest.IncludeOrphanPool)
if err != nil {
return nil, err
}
for _, addressString := range getMempoolEntriesByAddressesRequest.Addresses {
address, err := util.DecodeAddress(addressString, context.Config.NetParams().Prefix)
if err != nil {
errorMessage := &appmessage.GetMempoolEntriesByAddressesResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not decode address '%s': %s", addressString, err)
return errorMessage, nil
}
sending := make([]*appmessage.MempoolEntry, 0)
receiving := make([]*appmessage.MempoolEntry, 0)
scriptPublicKey, err := txscript.PayToAddrScript(address)
if err != nil {
errorMessage := &appmessage.GetMempoolEntriesByAddressesResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Could not extract scriptPublicKey from address '%s': %s", addressString, err)
return errorMessage, nil
}
if !getMempoolEntriesByAddressesRequest.FilterTransactionPool {
if transaction, found := sendingInTransactionPool[scriptPublicKey.String()]; found {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
sending = append(sending, &appmessage.MempoolEntry{
Fee: transaction.Fee,
Transaction: rpcTransaction,
IsOrphan: false,
},
)
}
if transaction, found := receivingInTransactionPool[scriptPublicKey.String()]; found {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
receiving = append(receiving, &appmessage.MempoolEntry{
Fee: transaction.Fee,
Transaction: rpcTransaction,
IsOrphan: false,
},
)
}
}
if getMempoolEntriesByAddressesRequest.IncludeOrphanPool {
if transaction, found := sendingInOrphanPool[scriptPublicKey.String()]; found {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
sending = append(sending, &appmessage.MempoolEntry{
Fee: transaction.Fee,
Transaction: rpcTransaction,
IsOrphan: true,
},
)
}
if transaction, found := receivingInOrphanPool[scriptPublicKey.String()]; found {
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
err := context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
receiving = append(receiving, &appmessage.MempoolEntry{
Fee: transaction.Fee,
Transaction: rpcTransaction,
IsOrphan: true,
},
)
}
}
if len(sending) > 0 || len(receiving) > 0 {
mempoolEntriesByAddresses = append(
mempoolEntriesByAddresses,
&appmessage.MempoolEntryByAddress{
Address: address.String(),
Sending: sending,
Receiving: receiving,
},
)
}
}
return appmessage.NewGetMempoolEntriesByAddressesResponseMessage(mempoolEntriesByAddresses), nil
}

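Note that the lookup maps returned by GetTransactionsByAddresses are keyed by the string form of the script public key, not by the address string. Any code that needs to build the same key can reuse the handler's own two calls; a small sketch using only functions that appear in the handler above:

// Derives the map key the handler uses: address -> scriptPublicKey string.
func scriptKeyForAddress(addressString string, prefix util.Bech32Prefix) (string, error) {
	address, err := util.DecodeAddress(addressString, prefix)
	if err != nil {
		return "", err
	}
	scriptPublicKey, err := txscript.PayToAddrScript(address)
	if err != nil {
		return "", err
	}
	return scriptPublicKey.String(), nil
}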

@@ -3,12 +3,18 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/transactionid"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleGetMempoolEntry handles the respectively named RPC command
func HandleGetMempoolEntry(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
transaction := &externalapi.DomainTransaction{}
var found bool
var isOrphan bool
getMempoolEntryRequest := request.(*appmessage.GetMempoolEntryRequestMessage)
transactionID, err := transactionid.FromString(getMempoolEntryRequest.TxID)
@@ -18,17 +24,18 @@ func HandleGetMempoolEntry(context *rpccontext.Context, _ *router.Router, reques
return errorMessage, nil
}
transaction, ok := context.Domain.MiningManager().GetTransaction(transactionID)
if !ok {
mempoolTransaction, isOrphan, found := context.Domain.MiningManager().GetTransaction(transactionID, !getMempoolEntryRequest.FilterTransactionPool, getMempoolEntryRequest.IncludeOrphanPool)
if !found {
errorMessage := &appmessage.GetMempoolEntryResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Transaction %s was not found", transactionID)
return errorMessage, nil
}
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(transaction)
rpcTransaction := appmessage.DomainTransactionToRPCTransaction(mempoolTransaction)
err = context.PopulateTransactionWithVerboseData(rpcTransaction, nil)
if err != nil {
return nil, err
}
return appmessage.NewGetMempoolEntryResponseMessage(transaction.Fee, rpcTransaction), nil
return appmessage.NewGetMempoolEntryResponseMessage(mempoolTransaction.Fee, rpcTransaction, isOrphan), nil
}

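The single-entry lookup takes the same two pool flags. A sketch, assuming the rpcclient wrapper orders them (includeOrphanPool, filterTransactionPool) after the transaction ID; depending on the wrapper, a not-found result may surface either as resp.Error or as a returned Go error, so the sketch handles both:

func mempoolEntryFor(client *rpcclient.RPCClient, txID string) error {
	resp, err := client.GetMempoolEntry(txID, true, false)
	if err != nil {
		return err // transport failure, or an RPC error the wrapper converted
	}
	if resp.Error != nil {
		return errors.New(resp.Error.Message) // e.g. "Transaction ... was not found"
	}
	fmt.Printf("fee=%d orphan=%t\n", resp.Entry.Fee, resp.Entry.IsOrphan)
	return nil
}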

@@ -26,12 +26,14 @@ func HandleGetVirtualSelectedParentChainFromBlock(context *rpccontext.Context, _
return response, nil
}
chainChangedNotification, err := context.ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(virtualSelectedParentChain)
chainChangedNotification, err := context.ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
virtualSelectedParentChain, getVirtualSelectedParentChainFromBlockRequest.IncludeAcceptedTransactionIDs)
if err != nil {
return nil, err
}
response := appmessage.NewGetVirtualSelectedParentChainFromBlockResponseMessage(
chainChangedNotification.RemovedChainBlockHashes, chainChangedNotification.AddedChainBlockHashes)
chainChangedNotification.RemovedChainBlockHashes, chainChangedNotification.AddedChainBlockHashes,
chainChangedNotification.AcceptedTransactionIDs)
return response, nil
}

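With IncludeAcceptedTransactionIDs set, the response carries, per added chain block, the transactions that block accepted; this is the mechanism #2036 added so exchanges can track confirmations. A sketch, assuming the wrapper takes the flag as its second argument and each acceptance entry exposes AcceptingBlockHash plus the accepted IDs:

func acceptedSince(client *rpcclient.RPCClient, startHash string) error {
	resp, err := client.GetVirtualSelectedParentChainFromBlock(startHash, true)
	if err != nil {
		return err
	}
	for _, acceptance := range resp.AcceptedTransactionIDs {
		fmt.Printf("chain block %s accepted %d transactions\n",
			acceptance.AcceptingBlockHash, len(acceptance.AcceptedTransactionIDs))
	}
	return nil
}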

@@ -0,0 +1,19 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleNotifyNewBlockTemplate handles the respectively named RPC command
func HandleNotifyNewBlockTemplate(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.PropagateNewBlockTemplateNotifications()
response := appmessage.NewNotifyNewBlockTemplateResponseMessage()
return response, nil
}


@@ -26,7 +26,7 @@ func HandleNotifyUTXOsChanged(context *rpccontext.Context, router *router.Router
if err != nil {
return nil, err
}
listener.PropagateUTXOsChangedNotifications(addresses)
context.NotificationManager.PropagateUTXOsChangedNotifications(listener, addresses)
response := appmessage.NewNotifyUTXOsChangedResponseMessage()
return response, nil


@@ -7,12 +7,17 @@ import (
)
// HandleNotifyVirtualSelectedParentChainChanged handles the respectively named RPC command
func HandleNotifyVirtualSelectedParentChainChanged(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
func HandleNotifyVirtualSelectedParentChainChanged(context *rpccontext.Context, router *router.Router,
request appmessage.Message) (appmessage.Message, error) {
notifyVirtualSelectedParentChainChangedRequest := request.(*appmessage.NotifyVirtualSelectedParentChainChangedRequestMessage)
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.PropagateVirtualSelectedParentChainChangedNotifications()
listener.PropagateVirtualSelectedParentChainChangedNotifications(
notifyVirtualSelectedParentChainChangedRequest.IncludeAcceptedTransactionIDs)
response := appmessage.NewNotifyVirtualSelectedParentChainChangedResponseMessage()
return response, nil

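The same flag threads through the streaming path. A sketch of the client-side registration, with the callback form assumed from the rpcclient's other Register* helpers (appmessage import assumed):

func trackChain(client *rpcclient.RPCClient) error {
	const includeAcceptedTransactionIDs = true
	return client.RegisterForVirtualSelectedParentChainChangedNotifications(
		includeAcceptedTransactionIDs,
		func(notification *appmessage.VirtualSelectedParentChainChangedNotificationMessage) {
			// Only populated when the flag above was set at registration time.
			for _, acceptance := range notification.AcceptedTransactionIDs {
				fmt.Printf("block %s accepted %d txs\n",
					acceptance.AcceptingBlockHash, len(acceptance.AcceptedTransactionIDs))
			}
		})
}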

@@ -8,6 +8,14 @@ import (
// HandleResolveFinalityConflict handles the respectively named RPC command
func HandleResolveFinalityConflict(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
if context.Config.SafeRPC {
log.Warn("ResolveFinalityConflict RPC command called while node in safe RPC mode -- ignoring.")
response := &appmessage.ResolveFinalityConflictResponseMessage{}
response.Error =
appmessage.RPCErrorf("ResolveFinalityConflict RPC command called while node in safe RPC mode")
return response, nil
}
response := &appmessage.ResolveFinalityConflictResponseMessage{}
response.Error = appmessage.RPCErrorf("not implemented")
return response, nil

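The same guard is repeated verbatim in the ShutDown and Unban handlers below. A hypothetical helper, not present in the repo, distilling the shared shape: log a warning, then return the refusal as an RPCError on a normal response rather than as a transport error:

// Hypothetical: returns a non-nil RPCError when the node is in safe RPC
// mode; callers attach it to their command-specific response message.
func safeRPCError(context *rpccontext.Context, command string) *appmessage.RPCError {
	if !context.Config.SafeRPC {
		return nil
	}
	log.Warnf("%s RPC command called while node in safe RPC mode -- ignoring.", command)
	return appmessage.RPCErrorf("%s RPC command called while node in safe RPC mode", command)
}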

@@ -12,6 +12,14 @@ const pauseBeforeShutDown = time.Second
// HandleShutDown handles the respectively named RPC command
func HandleShutDown(context *rpccontext.Context, _ *router.Router, _ appmessage.Message) (appmessage.Message, error) {
if context.Config.SafeRPC {
log.Warn("ShutDown RPC command called while node in safe RPC mode -- ignoring.")
response := appmessage.NewShutDownResponseMessage()
response.Error =
appmessage.RPCErrorf("ShutDown RPC command called while node in safe RPC mode")
return response, nil
}
log.Warn("ShutDown RPC called.")
// Wait a second before shutting down, to allow time to return the response to the caller


@@ -26,7 +26,7 @@ func HandleStopNotifyingUTXOsChanged(context *rpccontext.Context, router *router
if err != nil {
return nil, err
}
listener.StopPropagatingUTXOsChangedNotifications(addresses)
context.NotificationManager.StopPropagatingUTXOsChangedNotifications(listener, addresses)
response := appmessage.NewStopNotifyingUTXOsChangedResponseMessage()
return response, nil


@@ -1,6 +1,7 @@
package rpchandlers
import (
"encoding/json"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
@@ -14,9 +15,14 @@ import (
func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
submitBlockRequest := request.(*appmessage.SubmitBlockRequestMessage)
isSynced, err := context.ProtocolManager.ShouldMine()
if err != nil {
return nil, err
var err error
isSynced := false
// The node is considered synced if it has peers and consensus state is nearly synced
if context.ProtocolManager.Context().HasPeers() {
isSynced, err = context.ProtocolManager.Context().IsNearlySynced()
if err != nil {
return nil, err
}
}
if !context.Config.AllowSubmitBlockWhenNotSynced && !isSynced {
@@ -58,6 +64,12 @@ func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request ap
return nil, err
}
jsonBytes, _ := json.MarshalIndent(submitBlockRequest.Block.Header, "", " ")
if jsonBytes != nil {
log.Warnf("The RPC submitted block triggered a rule/protocol error (%s), printing "+
"the full header for debug purposes: \n%s", err, string(jsonBytes))
}
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Block rejected. Reason: %s", err),
RejectReason: appmessage.RejectReasonBlockInvalid,

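From the miner's side, an in-IBD rejection is transient while RejectReasonBlockInvalid is not. A sketch, assuming the client wrapper returns the reject reason alongside the error (externalapi and stdlib log imports assumed):

func submit(client *rpcclient.RPCClient, block *externalapi.DomainBlock) error {
	rejectReason, err := client.SubmitBlock(block)
	if err != nil {
		if rejectReason == appmessage.RejectReasonIsInIBD {
			// Not a problem with the block itself: the node is still syncing.
			log.Printf("node is in IBD, retrying later: %s", err)
			return nil
		}
		return err // includes the "Block rejected. Reason: ..." RPC error above
	}
	return nil
}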

@@ -9,6 +9,14 @@ import (
// HandleUnban handles the respectively named RPC command
func HandleUnban(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
if context.Config.SafeRPC {
log.Warn("Unban RPC command called while node in safe RPC mode -- ignoring.")
response := appmessage.NewUnbanResponseMessage()
response.Error =
appmessage.RPCErrorf("Unban RPC command called while node in safe RPC mode")
return response, nil
}
unbanRequest := request.(*appmessage.UnbanRequestMessage)
ip := net.ParseIP(unbanRequest.IP)
if ip == nil {


@@ -5,9 +5,8 @@ FLAGS=$@
go version
go get $FLAGS -t -d ./...
# This is to bypass a go bug: https://github.com/golang/go/issues/27643
GO111MODULE=off go get $FLAGS golang.org/x/lint/golint \
honnef.co/go/tools/cmd/staticcheck
GO111MODULE=off go get $FLAGS golang.org/x/lint/golint
go install $FLAGS honnef.co/go/tools/cmd/staticcheck@latest
test -z "$(go fmt ./...)"


@@ -1,3 +1,153 @@
Kaspad v0.12.5 - 2022-08-28
===========================
* Add tests for hash writers (#2120)
* Replace daglabs's dnsseeder with Wolfie's (#2119)
* Change testnet dnsseeder (#2126)
* Add RPC timeout parameter to wallet daemon (#2104)
Wallet new features:
* Add UseExistingChangeAddress option to the wallet (#2127)
Bug fixes:
* Call update pruning point if required on resolve virtual or startup (#2129)
* Add missing locks to notification listener modifications (#2124)
* Calculate pruning point utxo set from acceptance data (#2123)
* Fix RPC client memory/goroutine leak (#2122)
* Fix a subtle lock sync issue in consensus insert block (#2121)
* Mempool: Retrieve stable state of the mempool. Optimize get mempool entries by addresses (#2111)
* Kaspawallet.send(): Make separate context for Broadcast, to prolong timeout (#2131)
Kaspad v0.12.4 - 2022-07-17
===========================
* Crucial fix for the UTXO difference mechanism (#2114)
* Implement multi-layer auto-compound (#2115)
Kaspad v0.12.3 - 2022-06-29
===========================
* Fixes a few bugs which can lead to node crashes or out-of-memory errors
Kaspad v0.12.2 - 2022-06-17
===========================
* Clarify wallet message concerning a wallet daemon sync state (#2045)
* Change the way the miner executable reports execution errors (closes issue #1677) (#2048)
* Fix kaspawallet help messages, clarify sweep command help string (#2067)
* Wallet parse/send/create commands improvement (#2024)
* Use chunks for `GetBlocksAcceptanceData` calls in order to avoid blocking consensus for too long (#2075)
* Unite multiple `GetBlockAcceptanceData` consensus calls to one (#2074)
* Update many-small-chains-and-one-big-chain DAG to not fail merge depth limit (#2072)
RPC API Changes:
* RPC: include orphans into mempool entries (#2046)
* RPC & UtxoIndex: keep track of, query and test circulating supply. (#2070)
Bug Fixes:
* Fix RPC connections counting (#2026)
* Fix UTXO diff child error (#2084)
* Fix `not in selected chain` crash (#2082)
Kaspad v0.12.1 - 2022-05-31
===========================
* Fix utxoindex synchronization bug which resulted in kaspawallet orphan tx errors (#2052, #2056, #2059)
* Add a channel mechanism for consensus events to be processed in the order they were produced (#2052, #2056, #2059)
* Block template cache improvement (#2023)
* Improved staging shard performance (#2034)
* Add finality check to ResolveVirtual (#2041)
* Update Dockerfile for go 1.18 (#2038)
* Remove HF1 activation code (#2042)
Kaspa wallet:
* Various kaspawallet text fixes and log additions (#2032, #2047, #2062)
* Wallet address synchronization improvement (#2025)
* Add support for `from` address in `kaspawallet send` (#1964)
* Make kaspawallet ignore outputs that exist in the mempool (#2053)
* Wrap the entire wallet send operation with a lock (#2063)
RPC API:
* Add "GetMempoolEntriesByAddresses" to kaspad RPC (#2022)
* Make sure RPCErrors are returned and do not crash the system (#2039)
* Add AcceptedTransactionIDs to ChainChanged notification and VirtualSelectedParentChain RPC (#2036, for exchanges to track tx confirmations)
* Allow blank address in NotifyUTXOsChanged to get all updates (#2027)
* Include isSynced and isUtxoIndexed in GetInfoResponse (#2068)
Kaspad v0.12.0 - 2022-04-14
===========================
Breaking changes:
Hard-fork at DAA score 14687583 (estimated to be on 28/04 16:38 UTC) which includes:
* Using separate depth than finality depth for merge set calculations (#2013)
* Not counting the header size as part of the block mass (#2013)
* Increasing block version to 1 (#2013)
* Removing the limit on amount of KAS that can be sent in one transaction (#2013)
Bug fixes:
* Making a workaround for the UTXO diff child bug (#2020)
* Use cosigner index 0 for read only wallets (#2014)
Non-breaking changes:
* Adding a "sweep" command to `kaspawallet` (#2018)
* Use `blue work` heuristic to skip irrelevant relay blocks
* Kaspawallet daemon: Add Send and Sign commands (#2016)
Kaspad v0.11.17 - 2022-04-06
===========================
* Decrement estimatedHeaderUpperBound from mempool's MaxBlockMass (#2009)
Kaspad v0.11.16 - 2022-04-05
===========================
* Don't skip wallet address with different cosigner index (#2007)
Kaspad v0.11.15 - 2022-04-05
===========================
* Add support for auto-compound in `kaspawallet send` (#1951)
* Unite reachability stores (#1963, #1993, #2001)
* Add names to nameless routes (#1986)
* Optimize the miner-kaspad flow and latency (#1988)
* Upgrade to go 1.18 (#1992)
* Add package name to kaspawalletd .proto file (#1991)
* Block template cache (#1994)
* Add extra data to GetBlockTemplate request (#1995, #1997)
* New definition for "out of sync" (#1996)
* Remove v4 p2p version (#1998)
* Remove increase pagefile from deploy.yaml (#2000)
* Cache the pruning point anticone (#2002)
* Add DB compaction after the deletion of a DB prefix (#2003)
* Fixed a bug in staging of pruning point by index (#2005)
* Clean up debug log level by moving many frequent logs to trace level (#2004)
Kaspad v0.11.14 - 2022-03-20
===========================
* Fix a bug in the new p2p v5 IBD chain negotiation (#1981)
Kaspad v0.11.13 - 2022-03-16
===========================
* Display progress of IBD process in Kaspad logs (#1938, #1939, #1949, #1977)
* Optimize DB writes during fresh IBD (#1937)
* Add AllowConnectionToDifferentVersions flag to kaspactl (#1940)
* Drop support for p2p v3 (#1942)
* Various transaction processing fixes and workarounds (#1943, #1946, #1971, #1974)
* Make kaspawallet store the utxos sorted by amount (#1947)
* Implement a `parse` sub command in the kaspawallet (#1953)
* Set MaxBlockLevels for non-mainnet networks to 250 (#1952)
* Add cache to DAA block window (#1948)
* kaspactl: string slice parser for GetUtxosByAddresses (#1955, first contribution by @icook)
* Add MergeSet and IsChainBlock to RPC (#1961)
* Ignore transaction invs during IBD (#1960)
* Optimize validation of expected header pruning point (#1962)
* Fix a bug in bounded merge depth validation (#1966)
* Don't relay blocks in virtual anticone (#1970)
* Add version to block template to allow tracking of miner's kaspad version (#1967)
* New p2p version: v5 (#1969)
* Fix IBD shared past negotiation to be non-quadratic also in the worst case (#1969, p2p v5)
* Send pruning point anticone in batches (#1973, p2p v5)
* Cleanup log output mistakes and try to be more clear to the user (#1976, #1978)
* Apply avoiding IBD logic from patch10 to p2p v4 IBD handling (#1979)
Kaspad v0.11.11 - 2022-01-27
===========================
* Fix for rare consensus bug regarding DAA window order. The bug only affected IBD from scratch and only today (#1934)


@@ -4,7 +4,7 @@ kaspactl is an RPC client for kaspad
## Requirements
Go 1.16 or later.
Go 1.18 or later.
## Installation


@@ -31,10 +31,13 @@ var commandTypes = []reflect.Type{
reflect.TypeOf(protowire.KaspadMessage_GetMempoolEntryRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetMempoolEntriesRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetMempoolEntriesByAddressesRequest{}),
reflect.TypeOf(protowire.KaspadMessage_SubmitTransactionRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetUtxosByAddressesRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetBalanceByAddressRequest{}),
reflect.TypeOf(protowire.KaspadMessage_GetCoinSupplyRequest{}),
reflect.TypeOf(protowire.KaspadMessage_BanRequest{}),
reflect.TypeOf(protowire.KaspadMessage_UnbanRequest{}),


@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.16-alpine AS build
FROM golang:1.18-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad


@@ -4,7 +4,7 @@ Kaspaminer is a CPU-based miner for kaspad
## Requirements
Go 1.16 or later.
Go 1.18 or later.
## Installation


@@ -13,8 +13,8 @@ const minerTimeout = 10 * time.Second
type minerClient struct {
*rpcclient.RPCClient
cfg *configFlags
blockAddedNotificationChan chan struct{}
cfg *configFlags
newBlockTemplateNotificationChan chan struct{}
}
func (mc *minerClient) connect() error {
@@ -30,14 +30,14 @@ func (mc *minerClient) connect() error {
mc.SetTimeout(minerTimeout)
mc.SetLogger(backendLog, logger.LevelTrace)
err = mc.RegisterForBlockAddedNotifications(func(_ *appmessage.BlockAddedNotificationMessage) {
err = mc.RegisterForNewBlockTemplateNotifications(func(_ *appmessage.NewBlockTemplateNotificationMessage) {
select {
case mc.blockAddedNotificationChan <- struct{}{}:
case mc.newBlockTemplateNotificationChan <- struct{}{}:
default:
}
})
if err != nil {
return errors.Wrapf(err, "error requesting block-added notifications")
return errors.Wrapf(err, "error requesting new-block-template notifications")
}
log.Infof("Connected to %s", rpcAddress)
@@ -47,8 +47,8 @@ func (mc *minerClient) connect() error {
func newMinerClient(cfg *configFlags) (*minerClient, error) {
minerClient := &minerClient{
cfg: cfg,
blockAddedNotificationChan: make(chan struct{}),
cfg: cfg,
newBlockTemplateNotificationChan: make(chan struct{}),
}
err := minerClient.connect()


@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.16-alpine AS build
FROM golang:1.18-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad


@@ -23,8 +23,7 @@ func main() {
cfg, err := parseConfig()
if err != nil {
fmt.Fprintf(os.Stderr, "Error parsing command-line arguments: %s\n", err)
os.Exit(1)
printErrorAndExit(errors.Errorf("Error parsing command-line arguments: %s", err))
}
defer backendLog.Close()
@@ -44,7 +43,7 @@ func main() {
miningAddr, err := util.DecodeAddress(cfg.MiningAddr, cfg.ActiveNetParams.Prefix)
if err != nil {
panic(errors.Wrap(err, "error decoding mining address"))
printErrorAndExit(errors.Errorf("Error decoding mining address: %s", err))
}
doneChan := make(chan struct{})
@@ -61,3 +60,8 @@ func main() {
case <-interrupt:
}
}
func printErrorAndExit(err error) {
fmt.Fprintf(os.Stderr, "%+v\n", err)
os.Exit(1)
}


@@ -2,6 +2,7 @@ package main
import (
nativeerrors "errors"
"github.com/kaspanet/kaspad/version"
"math/rand"
"sync/atomic"
"time"
@@ -187,7 +188,7 @@ func getBlockForMining(mineWhenNotSynced bool) (*externalapi.DomainBlock, *pow.S
func templatesLoop(client *minerClient, miningAddr util.Address, errChan chan error) {
getBlockTemplate := func() {
template, err := client.GetBlockTemplate(miningAddr.String())
template, err := client.GetBlockTemplate(miningAddr.String(), "kaspaminer-"+version.Version())
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while requesting block template from %s: %s", client.Address(), err)
reconnectErr := client.Reconnect()
@@ -217,7 +218,7 @@ func templatesLoop(client *minerClient, miningAddr util.Address, errChan chan er
ticker := time.NewTicker(tickerTime)
for {
select {
case <-client.blockAddedNotificationChan:
case <-client.newBlockTemplateNotificationChan:
getBlockTemplate()
ticker.Reset(tickerTime)
case <-ticker.C:


@@ -3,19 +3,12 @@ package main
import (
"context"
"fmt"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/client"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/cmd/kaspawallet/utils"
)
func formatKas(amount uint64) string {
res := " "
if amount > 0 {
res = fmt.Sprintf("%19.8f", float64(amount)/constants.SompiPerKaspa)
}
return res
}
func balance(conf *balanceConfig) error {
daemonClient, tearDown, err := client.Connect(conf.DaemonAddress)
if err != nil {
@@ -39,12 +32,12 @@ func balance(conf *balanceConfig) error {
println("Address Available Pending")
println("-----------------------------------------------------------------------------------------------------------")
for _, addressBalance := range response.AddressBalances {
fmt.Printf("%s %s %s\n", addressBalance.Address, formatKas(addressBalance.Available), formatKas(addressBalance.Pending))
fmt.Printf("%s %s %s\n", addressBalance.Address, utils.FormatKas(addressBalance.Available), utils.FormatKas(addressBalance.Pending))
}
println("-----------------------------------------------------------------------------------------------------------")
print(" ")
}
fmt.Printf("Total balance, KAS %s %s%s\n", formatKas(response.Available), formatKas(response.Pending), pendingSuffix)
fmt.Printf("Total balance, KAS %s %s%s\n", utils.FormatKas(response.Available), utils.FormatKas(response.Pending), pendingSuffix)
return nil
}

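The local formatKas helper removed here moved into the shared cmd/kaspawallet/utils package so other subcommands can reuse it. A sketch of the exported version, mirroring the removed local one (the exact width of the blank field is an assumption matching the %19.8f format):

package utils

import (
	"fmt"
	"strings"

	"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
)

// FormatKas renders a sompi amount as a fixed-width KAS string with eight
// decimal places, leaving the field blank for a zero amount.
func FormatKas(amount uint64) string {
	res := strings.Repeat(" ", 19) // width assumed to match the %19.8f field below
	if amount > 0 {
		res = fmt.Sprintf("%19.8f", float64(amount)/constants.SompiPerKaspa)
	}
	return res
}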

@@ -2,13 +2,13 @@ package main
import (
"context"
"encoding/hex"
"fmt"
"io/ioutil"
"strings"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/client"
"github.com/kaspanet/kaspad/cmd/kaspawallet/daemon/pb"
"github.com/pkg/errors"
"io/ioutil"
"strings"
)
func broadcast(conf *broadcastConfig) error {
@@ -21,34 +21,36 @@ func broadcast(conf *broadcastConfig) error {
ctx, cancel := context.WithTimeout(context.Background(), daemonTimeout)
defer cancel()
if conf.Transaction == "" && conf.TransactionFile == "" {
if conf.Transactions == "" && conf.TransactionsFile == "" {
return errors.Errorf("Either --transaction or --transaction-file is required")
}
if conf.Transaction != "" && conf.TransactionFile != "" {
if conf.Transactions != "" && conf.TransactionsFile != "" {
return errors.Errorf("Both --transaction and --transaction-file cannot be passed at the same time")
}
transactionHex := conf.Transaction
if conf.TransactionFile != "" {
transactionHexBytes, err := ioutil.ReadFile(conf.TransactionFile)
transactionsHex := conf.Transactions
if conf.TransactionsFile != "" {
transactionHexBytes, err := ioutil.ReadFile(conf.TransactionsFile)
if err != nil {
return errors.Wrapf(err, "Could not read hex from %s", conf.TransactionFile)
return errors.Wrapf(err, "Could not read hex from %s", conf.TransactionsFile)
}
transactionHex = strings.TrimSpace(string(transactionHexBytes))
transactionsHex = strings.TrimSpace(string(transactionHexBytes))
}
transaction, err := hex.DecodeString(transactionHex)
transactions, err := decodeTransactionsFromHex(transactionsHex)
if err != nil {
return err
}
response, err := daemonClient.Broadcast(ctx, &pb.BroadcastRequest{Transaction: transaction})
response, err := daemonClient.Broadcast(ctx, &pb.BroadcastRequest{Transactions: transactions})
if err != nil {
return err
}
fmt.Println("Transaction was sent successfully")
fmt.Printf("Transaction ID: \t%s\n", response.TxID)
fmt.Println("Transactions were sent successfully")
fmt.Println("Transaction ID(s): ")
for _, txID := range response.TxIDs {
fmt.Printf("\t%s\n", txID)
}
return nil
}

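decodeTransactionsFromHex is not shown in this diff. A plausible sketch, assuming the multi-transaction encoding introduced with multi-layer auto-compound (#2115) is simply hex blobs joined by a separator; the separator character here is an assumption:

// Plausible sketch only; the real helper lives next to the wallet's
// sign/send code. Assumes "," separates the hex-encoded transactions.
func decodeTransactionsFromHex(transactionsHex string) ([][]byte, error) {
	transactionHexes := strings.Split(transactionsHex, ",")
	transactions := make([][]byte, len(transactionHexes))
	for i, transactionHex := range transactionHexes {
		transaction, err := hex.DecodeString(strings.TrimSpace(transactionHex))
		if err != nil {
			return nil, errors.Wrapf(err, "could not decode transaction #%d", i+1)
		}
		transactions[i] = transaction
	}
	return transactions, nil
}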

@@ -1,9 +1,10 @@
package main
import (
"os"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/pkg/errors"
"os"
"github.com/jessevdk/go-flags"
)
@@ -12,6 +13,7 @@ const (
createSubCmd = "create"
balanceSubCmd = "balance"
sendSubCmd = "send"
sweepSubCmd = "sweep"
createUnsignedTransactionSubCmd = "create-unsigned-transaction"
signSubCmd = "sign"
broadcastSubCmd = "broadcast"
@@ -44,39 +46,49 @@ type createConfig struct {
}
type balanceConfig struct {
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to"`
Verbose bool `long:"verbose" short:"v" description:"Verbose: show addresses with balance"`
config.NetworkFlags
}
type sendConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
FromAddresses []string `long:"from-address" short:"a" description:"Specific public address to send Kaspa from. Use multiple times to accept several addresses" required:"false"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
UseExistingChangeAddress bool `long:"use-existing-change-address" short:"u" description:"Will use an existing change address (in case no change address was ever used, it will use a new one)"`
config.NetworkFlags
}
type sweepConfig struct {
PrivateKey string `long:"private-key" short:"k" description:"Private key in hex format"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to"`
config.NetworkFlags
}
type createUnsignedTransactionConfig struct {
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to"`
ToAddress string `long:"to-address" short:"t" description:"The public address to send Kaspa to" required:"true"`
FromAddresses []string `long:"from-address" short:"a" description:"Specific public address to send Kaspa from. Use multiple times to accept several addresses" required:"false"`
SendAmount float64 `long:"send-amount" short:"v" description:"An amount to send in Kaspa (e.g. 1234.12345678)" required:"true"`
UseExistingChangeAddress bool `long:"use-existing-change-address" short:"u" description:"Will use an existing change address (in case no change address was ever used, it will use a new one)"`
config.NetworkFlags
}
type signConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
Transaction string `long:"transaction" short:"t" description:"The unsigned transaction to sign on (encoded in hex)"`
TransactionFile string `long:"transaction-file" short:"F" description:"The file containing the unsigned transaction to sign on (encoded in hex)"`
Transaction string `long:"transaction" short:"t" description:"The unsigned transaction(s) to sign on (encoded in hex)"`
TransactionFile string `long:"transaction-file" short:"F" description:"The file containing the unsigned transaction(s) to sign on (encoded in hex)"`
config.NetworkFlags
}
type broadcastConfig struct {
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
Transaction string `long:"transaction" short:"t" description:"The signed transaction to broadcast (encoded in hex)"`
TransactionFile string `long:"transaction-file" short:"F" description:"The file containing the unsigned transaction to sign on (encoded in hex)"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to"`
Transactions string `long:"transaction" short:"t" description:"The signed transaction to broadcast (encoded in hex)"`
TransactionsFile string `long:"transaction-file" short:"F" description:"The file containing the unsigned transaction to sign on (encoded in hex)"`
config.NetworkFlags
}
@@ -88,12 +100,12 @@ type parseConfig struct {
}
type showAddressesConfig struct {
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to"`
config.NetworkFlags
}
type newAddressConfig struct {
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to (default: localhost:8082)"`
DaemonAddress string `long:"daemonaddress" short:"d" description:"Wallet daemon server to connect to"`
config.NetworkFlags
}
@@ -101,7 +113,8 @@ type startDaemonConfig struct {
KeysFile string `long:"keys-file" short:"f" description:"Keys file location (default: ~/.kaspawallet/keys.json (*nix), %USERPROFILE%\\AppData\\Local\\Kaspawallet\\key.json (Windows))"`
Password string `long:"password" short:"p" description:"Wallet password"`
RPCServer string `long:"rpcserver" short:"s" description:"RPC server to connect to"`
Listen string `short:"l" long:"listen" description:"Address to listen on (default: 0.0.0.0:8082)"`
Listen string `long:"listen" short:"l" description:"Address to listen on (default: 0.0.0.0:8082)"`
Timeout uint32 `long:"wait-timeout" short:"w" description:"Waiting timeout for RPC calls, seconds (default: 30 s)"`
Profile string `long:"profile" description:"Enable HTTP profiling on given port -- NOTE port must be between 1024 and 65536"`
config.NetworkFlags
}
@@ -129,6 +142,12 @@ func parseCommandLine() (subCommand string, config interface{}) {
parser.AddCommand(sendSubCmd, "Sends a Kaspa transaction to a public address",
"Sends a Kaspa transaction to a public address", sendConf)
sweepConf := &sweepConfig{DaemonAddress: defaultListen}
parser.AddCommand(sweepSubCmd, "Sends all funds associated with the given schnorr private key to a new address of the current wallet",
"Sends all funds associated with the given schnorr private key to a newly created external (i.e. not a change) address of the "+
"keyfile that is under the daemon's contol. Can be used with a private key generated with the genkeypair utilily "+
"to send funds to your main wallet.", sweepConf)
createUnsignedTransactionConf := &createUnsignedTransactionConfig{DaemonAddress: defaultListen}
parser.AddCommand(createUnsignedTransactionSubCmd, "Create an unsigned Kaspa transaction",
"Create an unsigned Kaspa transaction", createUnsignedTransactionConf)
@@ -165,7 +184,6 @@ func parseCommandLine() (subCommand string, config interface{}) {
parser.AddCommand(startDaemonSubCmd, "Start the wallet daemon", "Start the wallet daemon", startDaemonConf)
_, err := parser.Parse()
if err != nil {
var flagsErr *flags.Error
if ok := errors.As(err, &flagsErr); ok && flagsErr.Type == flags.ErrHelp {
@@ -198,6 +216,13 @@ func parseCommandLine() (subCommand string, config interface{}) {
printErrorAndExit(err)
}
config = sendConf
case sweepSubCmd:
combineNetworkFlags(&sweepConf.NetworkFlags, &cfg.NetworkFlags)
err := sweepConf.ResolveNetwork(parser)
if err != nil {
printErrorAndExit(err)
}
config = sweepConf
case createUnsignedTransactionSubCmd:
combineNetworkFlags(&createUnsignedTransactionConf.NetworkFlags, &cfg.NetworkFlags)
err := createUnsignedTransactionConf.ResolveNetwork(parser)


@@ -3,11 +3,12 @@ package main
import (
"bufio"
"fmt"
"os"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet"
"github.com/kaspanet/kaspad/cmd/kaspawallet/libkaspawallet/bip32"
"github.com/kaspanet/kaspad/cmd/kaspawallet/utils"
"github.com/pkg/errors"
"os"
"github.com/kaspanet/kaspad/cmd/kaspawallet/keys"
)
@@ -30,6 +31,10 @@ func create(conf *createConfig) error {
fmt.Printf("Extended public key of mnemonic #%d:\n%s\n\n", i+1, extendedPublicKey)
}
fmt.Printf("Notice the above is neither a secret key to your wallet " +
"(use \"kaspawallet dump-unencrypted-data\" to see a secret seed phrase) " +
"nor a wallet public address (use \"kaspawallet new-address\" to create and see one)\n\n")
extendedPublicKeys := make([]string, conf.NumPrivateKeys, conf.NumPublicKeys)
copy(extendedPublicKeys, signerExtendedPublicKeys)
reader := bufio.NewReader(os.Stdin)
@@ -50,9 +55,13 @@ func create(conf *createConfig) error {
extendedPublicKeys = append(extendedPublicKeys, string(extendedPublicKey))
}
cosignerIndex, err := libkaspawallet.MinimumCosignerIndex(signerExtendedPublicKeys, extendedPublicKeys)
if err != nil {
return err
// For a read only wallet the cosigner index is 0
cosignerIndex := uint32(0)
if len(signerExtendedPublicKeys) > 0 {
cosignerIndex, err = libkaspawallet.MinimumCosignerIndex(signerExtendedPublicKeys, extendedPublicKeys)
if err != nil {
return err
}
}
file := keys.File{

Some files were not shown because too many files have changed in this diff.