Compare commits

...

236 Commits

Author SHA1 Message Date
Michael Sutton
4bb5bf25d3
Bump version to 0.12.22 (#2307) 2025-05-08 11:47:23 +03:00
Michael Sutton
25c2dd8670
apply post-crescendo coinbase maturity to all nets (#2306) 2025-05-05 15:01:13 +03:00
Sergi Rene
c93100ccd0
getPayloadHash returns non-zero when payload is included (#2305) 2025-04-23 17:24:35 +03:00
Ori Newman
03cc7dfc19 Update version to v0.12.20 2025-03-18 11:06:56 +02:00
Toni Lukkaroinen
ed745a9acb
Fix block verbosedata population… (#2275)
* Fix block verbosedata population from skipping transaction verbosedata population when BlockInfo erroneously reports block as Header only, but domainBlock is still found with GetBlockEvenIfHeaderOnly.

* Update app/rpc/rpccontext/verbosedata.go

---------

Co-authored-by: Toni Lukkaroinen <toni.lukkaroinen@hoosat.fi>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2025-03-18 11:03:47 +02:00
Ori Newman
c23c1d141c Update gh actions 2025-03-18 10:59:21 +02:00
Ori Newman
352d261fd6
Fix serializeTransaction (#2302) 2025-02-23 15:03:16 +02:00
Veronica
43b9523919
Adding support for mass. (#2301)
* Adding support for mass.

* update grpc v1.69.2

* Upgrade protos using latest protoc

* Use MassCommitment instead of Mass

* Fix DomainTransaction clone, equal and tests

---------

Co-authored-by: veronica <jimjouny@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2025-01-21 10:57:28 +02:00
Michael Sutton
6085d1fc84
bump version to 0.12.19 (#2295) 2024-10-22 13:09:31 +03:00
Ori Newman
1e9ddc42d0
Add fee estimation to wallet (#2291)
* Add fee estimation to wallet

* Add fee rate to kaspawallet parse

* Update go version

* Get rid of golint

* Add RBF support to wallet

* Fix bump_fee UTXO lookup and fix wrong change address

* impl storage mass as per KIP9

* Use CalculateTransactionOverallMass where needed

* Some fixes

* Minor typos

* Fix test

* update version

* BroadcastRBF -> BroadcastReplacement

* rc3

* align proto files to only use camel case (fixed on RK as well)

* Rename to FeePolicy and add MaxFee option + todo

* apply max fee constraints

* increase minChangeTarget to 10kas

* fmt

* Some fixes

* fix description: maximum -> minimum

* put min feerate check in the correct location

* Fix calculateFeeLimits nil handling

* Add validations to CLI flags

* Change to rc6

* Add checkTransactionFeeRate

* Add failed broadcast transactions on send error

* Fix estimateFee change value

* Estimate fee correctly for --send-all

* On estimateFee always assume that the recipient has an ECDSA address

* remove patch version

---------

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>
2024-10-22 12:34:54 +03:00
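A sketch of how a fee-rate/MaxFee policy like the one described in this commit could work. The formula (fee = mass × fee rate, clamped by a maximum) and all names below are illustrative assumptions, not kaspawallet's actual code:

```go
package main

import (
	"fmt"
	"math"
)

// feeForMass is a hypothetical helper: compute a transaction fee from its
// mass and a fee rate (sompi per mass unit), clamped to a maximum fee in
// the spirit of the MaxFee option mentioned above.
func feeForMass(mass uint64, feeRate float64, maxFee uint64) uint64 {
	fee := uint64(math.Ceil(feeRate * float64(mass)))
	if fee > maxFee {
		return maxFee
	}
	return fee
}

func main() {
	fmt.Println(feeForMass(2000, 1.5, 10_000)) // 3000
	fmt.Println(feeForMass(9000, 1.5, 10_000)) // clamped to 10000
}
```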
Ori Newman
48a142e12f
Add deprecated message (#2282)
* Add deprecated message

* fix readme
2024-08-15 08:15:30 +03:00
Michael Sutton
86b89065cf
KIP9 basic wallet compatibility (#2276)
* introduce min change target

* clarify wallet help messages for from-address and send-all
2024-05-09 18:53:21 +03:00
Michael Sutton
f41dc7fa0b
Bump version to 0.12.17 (#2268)
* Bump version to 0.12.17

* changelog
2024-02-19 11:58:40 +02:00
Michael Sutton
6b38bf7069
RPC SubmitTransaction: Dequeue old responses from previous requests (#2262) 2024-01-05 14:58:19 +02:00
Ori Newman
d2453f8e7b
Lazy wallet utxo sync after broadcasting a tx (#2258)
* Lazy wallet utxo sync after broadcasting a tx

* Make a more granular lock for refreshUTXOs

* Don't push to forceSyncChan if it has an element

* Better policy for used outpoints and wait for first sync when creating an unsigned tx

* fix expire condition

* lock address reading

* fix small memory leak

* add an rpc client dedicated for background ops

* rename to follow conventions

* one more rename

* Compare wallet addresses by value

* small fixes

* Add comment

---------

Co-authored-by: Michael Sutton <msutton@cs.huji.ac.il>
2023-12-27 18:10:16 +02:00
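The "don't push to forceSyncChan if it has an element" bullet describes a standard Go coalescing pattern; a minimal sketch, where only the channel name comes from the commit and the rest is illustrative:

```go
package main

import "fmt"

// A buffered channel of size 1 plus a non-blocking send: duplicate sync
// requests coalesce into the one already pending instead of piling up.
var forceSyncChan = make(chan struct{}, 1)

func requestForceSync() {
	select {
	case forceSyncChan <- struct{}{}:
		// queued a sync request
	default:
		// a request is already pending; coalesce instead of blocking
	}
}

func main() {
	requestForceSync()
	requestForceSync() // second call is a no-op while one is pending
	fmt.Println(len(forceSyncChan)) // 1
}
```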
Ori Newman
629faa8436
Add options to see wallet and wallet daemon versions (#2257) 2023-12-26 16:00:44 +02:00
Michael Sutton
91e6c6b74b
version bump + changelog (#2255) 2023-12-25 10:22:29 +02:00
Michael Sutton
0819244ba1
if the tx has change and thus 2 outputs, try having at least 2 inputs as well (in order to not be slowed down by dust patch) (#2254) 2023-12-25 09:19:04 +02:00
Ori Newman
a0149cd8d0
Broadcast wallet transactions in chunks (#2253) 2023-12-13 16:15:25 +02:00
Ori Newman
5a3b8a0066
Fix type detection in RemoveInvalidTransactions (#2252) 2023-12-12 17:13:27 +02:00
Michael Sutton
8e71f79f98
use rpc to identify testnet 11 (#2211)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2023-12-12 12:16:35 +02:00
Ori Newman
346341a709
Fix extract atomic swap data pushes (#2203)
* Remove unnecessary drop from ExtractAtomicSwapDataPushes

* fixed atomicswap 

fixed ExtractAtomicSwapDataPushes to extract the correct RefundBlake2b

* Fix message converters

* Fix locktime data size in ExtractAtomicSwapDataPushes

---------

Co-authored-by: pieroforfora <124444595+pieroforfora@users.noreply.github.com>
2023-12-11 16:24:16 +02:00
supertypo
8c881aea39
Added a mainnet dnsseeder (#2247)
Co-authored-by: supertypo <suprtypo@pm.me>
2023-12-07 21:14:25 +02:00
Ori Newman
40ec440dcf
Fix windows asset building by increasing go version (#2245) 2023-12-07 14:10:26 +02:00
Ori Newman
88bdcb43bc
Anti-spam measurements against dust attack (#2223) (#2244)
* BlockCandidateTransactions patch

* Fix condition

* Fix fee

* Fix bug

* Reject from mempool

* Fix hasCoinbaseInput

* Fix position

* Bump version to v0.12.14
2023-12-06 14:38:04 +02:00
Ori Newman
9d1e44673f
Use removeRedeemers correctly (#2235)
* Use removeRedeemers correctly

* Fix topological iteration

* Some fixes

* Ignore RejectDuplicate and swallow other rule errors

* Fix typo

* Don't remove redeemers and skip mempool full revalidation
2023-12-06 14:29:29 +02:00
coderofstuff
387fade044
Fix off by small amounts in sent amount kaspa (#2220)
* Fix off by small amounts in sent amount kaspa

Floating point math causes inconsistencies when converting kas to sompi.

Use a method that parses the amount as a string, then converts it to
sompi, then parses it back to uint64

* Deal with SendAmount as strings all the way

* Consistent config handling

* Set variables directly from utils.KasToSompi

Use = instead of := to ensure no shadowing

* Fix validate amount regex

* Use decimal places as defined by constants

Also check if SompiPerKaspa is multiple of 10

* Minor updates for context clarity
2023-09-23 11:28:38 +03:00
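A minimal sketch of the string-parsing technique this commit describes, assuming 1 KAS = 10^8 sompi; the helper name is hypothetical and this is not kaspad's actual utils.KasToSompi:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const sompiPerKaspa = 100_000_000 // 10^8, i.e. 8 decimal places

// kasToSompi parses a decimal KAS amount as a string and converts it to
// sompi using integer arithmetic only, avoiding float rounding drift.
func kasToSompi(amount string) (uint64, error) {
	parts := strings.SplitN(amount, ".", 2)
	whole, err := strconv.ParseUint(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var frac uint64
	if len(parts) == 2 {
		fracStr := parts[1]
		if len(fracStr) > 8 {
			return 0, fmt.Errorf("too many decimal places: %q", amount)
		}
		// Right-pad to exactly 8 digits so "1.5" becomes 50000000 sompi.
		fracStr += strings.Repeat("0", 8-len(fracStr))
		frac, err = strconv.ParseUint(fracStr, 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return whole*sompiPerKaspa + frac, nil
}

func main() {
	sompi, _ := kasToSompi("123.456")
	fmt.Println(sompi) // 12345600000, with no floating-point involved
}
```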
Ori Newman
c417c8b525
Update ECDSA address test to use a valid public key (#2202) 2023-04-09 14:40:01 +03:00
Ori Newman
bd1420220a
Bump version to v0.12.13 (#2196) 2023-03-06 17:12:35 +02:00
dependabot[bot]
5640ec4020
Bump golang.org/x/net from 0.0.0-20210405180319-a5a99cb37ef4 to 0.7.0 (#2194)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.0.0-20210405180319-a5a99cb37ef4 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/commits/v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2023-03-06 14:56:21 +02:00
dependabot[bot]
1c0887ca60
Bump golang.org/x/crypto from 0.0.0-20210513164829-c07d793c2f9a to 0.1.0 (#2195)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.0.0-20210513164829-c07d793c2f9a to 0.1.0.
- [Release notes](https://github.com/golang/crypto/releases)
- [Commits](https://github.com/golang/crypto/commits/v0.1.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-06 12:54:24 +02:00
Ori Newman
7be3f41aa7
Avoid sending transactions with no funds (#2193) 2023-03-06 12:20:42 +02:00
Ori Newman
26c4c73624
Update changelog.txt and version.go (#2192) 2023-02-28 18:14:01 +02:00
D-Stacks
880d917e58
Rename last references to blockheight - closes #1036 (#2089)
* Rename references of blockheight - closes #1036

* Update tx_invalid.json

* bluescore -> DAAScore

---------

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2023-02-27 17:14:41 +02:00
Eyal Yablonka
3c53c6d8cd
Eyal (#2183)
* Create CODE_OF_CONDUCT.md

Code of conduct added

* changed discord to google form

* Update CODE_OF_CONDUCT.md

Updated code of conduct to match community project.

* Update CODE_OF_CONDUCT.md

Updated code of conduct to match community project.
Added google form.

---------

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2023-02-27 16:10:27 +02:00
Ori Newman
3c4b973090
Extend TestGetPreciseSigOps with more tests (#2188) 2023-02-27 15:58:10 +02:00
Ori Newman
8aee8f81c5
Add Dockerfile to kaspawallet (#2187) 2023-02-27 13:11:52 +02:00
Svarog
ec3441e63f
Add --send-all to kaspawallet send command (#2181)
* Allow to send --all

* Fix a typo

---------

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2023-02-27 12:39:01 +02:00
dependabot[bot]
e3ba1ca07e
Bump golang.org/x/text from 0.3.5 to 0.3.8 (#2190)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.5 to 0.3.8.
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.3.5...v0.3.8)

---
updated-dependencies:
- dependency-name: golang.org/x/text
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2023-02-27 12:26:52 +02:00
Ori Newman
27fdbd9c88
Upgrade to go 1.19 (#2191)
* Upgrade to go 1.19

* fmt
2023-02-27 10:03:11 +02:00
Michael Sutton
377d9aaaeb
Update changelog.txt for v0.12.11 (#2176) 2022-12-01 12:33:48 +02:00
Ori Newman
beee947dda
Fix IBD sync conditions (#2174)
* Fix IBD sync conditions

* Fix syntax

* Fix Sprintf

* Bump version

* On negotiation check only blocks in future of PP

* Only log error and add comment

* Fix comment
2022-11-29 17:18:07 +02:00
Ori Newman
d4a27bf1c1
Update changelog.txt for v0.12.10 (#2172) 2022-11-23 12:26:47 +02:00
Ori Newman
eec6eb9669
Check rule errors when validating blocks with trusted data (#2171) 2022-11-21 23:06:00 +02:00
Ori Newman
d5c10832c2
Update README.md (#2163) 2022-11-20 14:50:47 +02:00
Ori Newman
9fbfba17b6
Compare blue score with selected tip when checking if a pruning point… (#2169)
* Compare blue score with selected tip when checking if a pruning point proof is needed

* Don't redeclare err

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-11-20 13:35:00 +02:00
Ori Newman
09d698dd0e
Add note about change addresses to 'show-addresses' (#2170)
* Add note about change addresses to 'show-addresses'

* Use stdout
2022-11-20 12:12:09 +02:00
Ori Newman
ec51c6926a
Use one of the From addresses as a change address (#2164)
* Use one of the From addresses as a change address

* Use change address from fromAddress only if useExisting is set to true

* Change FromAddresses description
2022-11-17 15:31:18 +02:00
Ori Newman
7d44275eb1
Add found to GetBlock (#2165) 2022-11-16 23:48:05 +02:00
Ori Newman
a3387a56b3
Increase devnet's initial difficulty (#2167) 2022-11-13 14:01:29 +02:00
Ori Newman
c2ae03fc89
Fix nativeTx to be actually native (#2166) 2022-11-13 13:15:18 +02:00
Ori Newman
6c774c966b
Changelog for v0.12.9 (#2161) 2022-10-24 00:50:38 +03:00
Ori Newman
2d54c9693b
Create directory before locking lock file (#2160) 2022-10-24 00:41:59 +03:00
Ori Newman
d8350d62b0
v0.12.8 changelog.txt (#2159) 2022-10-23 16:29:26 +03:00
Ori Newman
26c7db251f
Make more checks if status is invalid even if the block exists (#2158)
* Make more checks if status is invalid even if the block exists

* Use HasHeader
2022-10-13 19:22:00 +03:00
Michael Sutton
4d435f2b3a
Use utxo diff algo for pruning point move and use acceptance data method only as a fall-back (#2157)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-10-12 12:37:13 +03:00
Tiram
067688f549
Add a new testnet dnsseeder (#2156)
* Add a new testnet dnsseeder

* Apply code format

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-10-05 09:23:35 +03:00
Ori Newman
3a3fa0d3f0
Add lock file to kaspawallet (#2154)
* Add lock file to kaspawallet

* Add a possible explanation to password error

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-10-02 23:42:14 +03:00
Ori Newman
cf4073b773
Remove HF activation code (#2152)
* Remove HF activation code

* Remove unused var totalInputs
2022-10-02 19:17:33 +03:00
Michael Sutton
6a5e7c9e3f
Add IBD error details to the log (#2150) 2022-09-30 13:23:54 +03:00
Ori Newman
7e9b5b9010
Security Patch + HF (#2142)
* HF

* Fix lint
2022-09-21 18:58:32 +03:00
D-Stacks
953838e0d8
Allow mismatched versioning connections with rpc_client, with a warning. (#2137) 2022-09-14 09:53:30 +03:00
Ori Newman
a1dcb34c29
Update changelog for v0.12.6 (#2136) 2022-09-09 15:57:43 +03:00
Ori Newman
23764e1b0b
Optionally show serialized transactions on send (#2135)
* Optionally show serialized transactions on send

* Explain more about serialized transactions

* Increase grpc server send message size
2022-09-09 15:51:20 +03:00
Ori Newman
0838cc8e32
Update virtual on IBD if nearly synced (#2134)
* Update virtual on IBD if nearly synced

* Don't resolve virtual if updateVirtual
2022-09-09 00:52:08 +03:00
Ori Newman
9f51330f38
Remove tests from docker files (#2133)
* Remove tests from docker files

* Remove unused interface method
2022-09-01 14:14:37 +03:00
Ori Newman
f6d46fd23f
Update changelog.txt for v0.12.5 (#2130)
* Update changelog.txt for v0.12.5

* Update version and add #2131 to changelog.txt
2022-08-28 13:53:33 +03:00
Svarog
2a7e03e232
Kaspawallet.send(): Make separate context for Broadcast, to prolong timeout (#2131)
* Kaspawallet.send(): Make separate context for Broadcast, to prolong timeout

* Amend comment

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-28 13:17:58 +03:00
Ori Newman
3286a7d010
Call update pruning point if required on resolve virtual (#2129)
* Call UpdatePruningPointIfRequired when resolving virtual

* Don't sanity check pruning point UTXO set if it's genesis

* Add UpdatePruningPointIfRequired on init
2022-08-24 13:32:39 +03:00
Ori Newman
aabbc741d7
Add UseExistingChangeAddress option to the wallet (#2127) 2022-08-21 17:12:05 +03:00
Elichai Turkel
20b7ab89f9
Add tests for hash writers (#2120)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-21 09:08:20 +03:00
Ori Newman
10f1e7e3f4
Replace daglabs's dnsseeder with Wolfie's (#2119)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-21 03:10:59 +03:00
Ori Newman
d941c73701
Change testnet dnsseeder (#2126)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-08-21 02:32:40 +03:00
Michael Sutton
3f80638c86
Add missing locks to notification listener modifications (#2124) 2022-08-21 01:26:03 +03:00
Michael Sutton
266ec6c270
Calculate pruning point utxo set from acceptance data (#2123)
* Calc new pruning point utxo diff through chain acceptance data

* Fix daa score to chain block
2022-08-17 15:56:53 +03:00
Michael Sutton
9ee409afaa
Fix RPC client memory/goroutine leak (#2122)
* Showcase the RPC client memory leak

* Fixes an RPC client goroutine leak by properly closing the underlying connection
2022-08-09 16:30:24 +03:00
Michael Sutton
715cb3b1ac
Fix a subtle lock sync issue in consensus insert block (#2121)
* Manage the locks more tightly and fix a slight and rare sync issue

* Extract virtualResolveChunk constant
2022-08-02 10:33:39 +03:00
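A hedged sketch of the chunked-resolution idea suggested by the extracted virtualResolveChunk constant: process pending work in bounded chunks and release the lock between chunks. Only the constant name comes from the commit; everything else is illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// virtualResolveChunk bounds how much work is done per lock acquisition,
// so other consensus calls can interleave between chunks. (Value is
// illustrative, not kaspad's.)
const virtualResolveChunk = 100

func resolveInChunks(mu *sync.Mutex, pending *[]int) {
	for {
		mu.Lock()
		n := len(*pending)
		if n == 0 {
			mu.Unlock()
			return
		}
		if n > virtualResolveChunk {
			n = virtualResolveChunk
		}
		// Process one chunk under the lock...
		*pending = (*pending)[n:]
		mu.Unlock() // ...then yield the lock before the next chunk.
	}
}

func main() {
	var mu sync.Mutex
	pending := make([]int, 250)
	resolveInChunks(&mu, &pending)
	fmt.Println(len(pending)) // 0
}
```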
D-Stacks
eb693c4a86
Mempool: Retrieve stable state of the mempool. Optimize get mempool entries by addresses (#2111)
* fix mempool accessing, rewrite get_mempool_entries_by_addresses

* fix counter, add verbose

* fmt

* addresses as string

* Define error in case utxoEntry is missing.

* fix error variable to string

* stop tests from failing (see comment in code)

* access both pools in the same state via parameters

* get rid of todo message

* fmt - very important!

* perf: scriptpublickey in mempool, no txscript.

* address review

* fmt fix

* mixed up isOrphan bool, pass tests now

* do map preallocation in mempoolbyaddresses

* no preallocation for orphanpool sending.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-07-26 23:06:08 +03:00
Aleoami
7a61c637b0
Add RPC timeout parameter to wallet daemon (#2104)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-07-21 13:55:12 +03:00
Michael Sutton
c7bd84ef9d
Bump version and changelog for v0.12.4 (#2118)
* Bump version and changelog
2022-07-17 03:07:51 +03:00
Svarog
b26b9f6c4b
Implement multi-layer auto-compound (#2115)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-07-15 14:12:34 +03:00
Michael Sutton
1c9bb54cc2
Crucial fix for the UTXO difference mechanism (#2114)
* Illustrate the bug through prints

* Change consensus API to a single ResolveVirtual call

* nil changeset is not expected when err=nil

* Fixes a deep bug in the resolve virtual process

* Be more defensive at resolving virtual when adding a block

* When finally resolved, set virtual parents properly

* Return nil changeset when nothing happened

* Make sure the block at the split point is reversed to new chain as well

* bump to version 0.12.4

* Avoid locking consensus twice in the common case of adding block with updateVirtual=true

* check err

* Parents must be picked first before set as virtual parents

* Keep the flag for tracking virtual state, since tip sorting perf is high with many tips

* Improve and clarify resolve virtual tests

* Addressing minor review comments

* Fixed a bug in the reported virtual changeset, and modified the test to verify it

* Addressing review comments
2022-07-15 12:35:20 +03:00
Michael Sutton
b9093d59eb
Bump version and add change log (#2110) 2022-06-29 16:24:56 +03:00
D-Stacks
18d000f625
Logger: change log level for "Couldn't find UTXO entry" to debug (and ignore message on orphan transactions) (#2108)
* Logger: change log level for "Couldn't find UTXO entry" to debug

* ignore error for orphans

* continue also if orphans

* check if blocks exist

* safe rpc mode

* limit window size

* verify block status depending on context

* allow a 2 factor gap in expected mergeset size

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-06-29 15:54:40 +03:00
Ori Newman
c5aade7e7f
Update changelog to v0.12.2 (#2091) 2022-06-17 18:24:21 +03:00
Ori Newman
d4b741fd7c
Bump version to v0.12.2 (#2086) 2022-06-16 12:54:38 +03:00
D-Stacks
74a4f927e9
RPC: include orphans into mempool entries (#2046)
* RPC: include orphans into mempool entries

* no need for + 1

* give request option to choose mempool pool(s)

* add to wallet, fix bug

* use orphanpool rpc to test for orphans

* fix fmt

* fix crash when querying orphan pool in get_mempool_entries

* pass the tests, fix fromAppMessage bug

* Update config_test.go

don't think this is needed

* needed for tests to pass

* inverse to transactionpoolfilter, cut down code to two ifs.

* fmt

* update test to true false, forgot one includetransactionpool renaming

* update and simplify get_mempool_entry handler

* comment outdated

* i think we usually use make instead of var.

* Fix some leftovers of includeTransactionPool

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-15 23:46:44 +03:00
biospb
847aafc91f
Fix RPC connections counting (#2026)
* Fix RPC connections counting

* show incoming connections count

* Use the flag RPCMaxClients instead of the const RPCMaxInboundConnections

* Add grpc server name to log message

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 22:49:36 +03:00
Aleoami
c87e541570
Clarify wallet message concerning a wallet daemon sync state (#2045)
* update: clarified wallet daemon synchronization state log message

* Update address.go

* Update create_unsigned_transaction.go

* Update external_spendable_utxos.go

* Update sync.go

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 16:05:01 +03:00
Aleoami
2ea1c4f922
Change the way the miner executable reports execution errors (closes issue #1677) (#2048)
* fix changed the way the miner executable reports execution errors

* Update main.go

* Update main.go

* Update main.go

* Update main.go

* Rolled back

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 14:05:05 +03:00
Aleoami
5e9c28b77b
fix the confusing term in the getHashrate() (#2081)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-06-15 03:24:10 +03:00
Michael Sutton
d957a6d93a
Fix UTXO diff child error (#2084)
* Avoid creating the chain iterator if high hash is actually low hash

* Always use iterator in nextPruningPointAndCandidateByBlockHash

* Initial failing test

* Minimal failing test + some comments

* go lint

* Add simpler tests with two different errors

* Missed some error checks

* Minor

* A workaround patch for preventing the missing utxo child diff bug

* Make sure we fully resolve virtual

* Move ResolveVirtualWithMaxParam to test consensus

* Mark virtual not updated and loop in batches

* Refactor: remove VirtualChangeSet from functions return values

* Remove workaround comments

* If block has no body, virtual is still considered updated

* Remove special error ErrReverseUTXODiffsUTXODiffChildNotFound

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-15 02:52:14 +03:00
Michael Sutton
b2648aa5bd
Fix not in selected chain crash (#2082)
* Avoid creating the chain iterator if high hash is actually low hash

* Always use iterator in nextPruningPointAndCandidateByBlockHash

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-15 02:32:47 +03:00
D-Stacks
3908f274ae
[Ready] - RPC & UtxoIndex: keep track of, query and test circulating supply. (#2070)
* start

* pass tests, (cannot get before put!) and clean up.

* add rpc call.

* create and pass tests, fix bugs.  fully implement rpc

* As always fmt

* remove old test

* clean up proto comment

* put the logger back in place.

* revert back to 10 sec limit.

* migration, change utxoChanged removal to whole utxoEntryPair, add methods to update circulating supply, initialize circulating supply from reset.

* Update utxoindex.go

* ad

* rename to max, change comment

* one more total to max

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-05 16:42:08 +03:00
Aleoami
fa7ea121ff
fix kaspawallet help messages, clarify sweep command help string (#2067)
* fix kaspawallet help messages, clarify sweep command help string

* fix sweep cmd help string phrasing

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-05 15:39:56 +03:00
biospb
24848da895
Wallet parse/send/create commands improvement (#2024)
* Fix wallet parsing of multi tx data

* Output wallet msgs to stderr when creating/signing tx

* Fix go fmt

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-06-03 19:06:51 +03:00
Michael Sutton
b200b77541
Use chunks for GetBlocksAcceptanceData calls in order to avoid blocking consensus for too long (#2075) 2022-06-03 18:25:02 +03:00
Michael Sutton
d50ad0667c
Unite multiple GetBlockAcceptanceData consensus calls to one (#2074)
* Unite multiple `GetBlockAcceptanceData` consensus calls to one

* Variable rename
2022-06-01 12:19:21 +03:00
Ori Newman
5cea285960
Update many-small-chains-and-one-big-chain DAG to not fail merge depth limit (#2072)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-31 16:13:43 +03:00
Michael Sutton
7eb5085f6b
Update changelog with v0.12.1 changes (#2071)
* Update changelog with v0.12.1 changes

* Remove full links and credits
2022-05-31 15:00:13 +03:00
D-Stacks
491e3569d2
RPC: include isSynced and isUtxoIndexed in GetInfoResponse (#2068)
* include utxoIndex and isNearlySynced in GetInfo

* fmt fixing

* add missing IsNearlySynced

* change `isNearlySynced` -> `isSynced` & `isUtxoIndexSet` ->`isUtxoIndexed`

* Add a check for `HasPeers` to make `isSynced` identical to `GetBlockTemplate.isSynced`

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-05-30 13:12:00 +03:00
Aleoami
440aea19b0
Add wallet daemon state messages to a terminal window log (#2062)
* add wallet daemon state messages to a terminal window log

* refactoring changed var name to harmonize it

* fix: global vars made part of the "server" class

* fix log message phrasing

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 22:55:53 +03:00
Aleoami
968d47c3e6
add explanatory message about the mnemonic at the wallet creation (#2047)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 18:36:38 +03:00
D-Stacks
052193865e
populate with verbose (#2064)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 18:23:44 +03:00
D-Stacks
85febcb551
update rpc.md with includeAcceptedTransactionIds (#2065)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-05-25 18:13:35 +03:00
D-Stacks
a4d9fa10bf
fix typo (#2060)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-25 10:17:58 +03:00
Michael Sutton
cd5fd86ad3
Wrap the entire wallet send operation with a lock (#2063) 2022-05-25 08:55:35 +03:00
Ori Newman
b84d6fed2c
broadcast is using refreshUTXOs, so it should be protected by lock (#2061) 2022-05-25 01:16:54 +03:00
Svarog
24c94b38be
Move OnBlockAdded event to the channel that was only used by virtualChanged events (#2059)
* Move OnBlockAdded event to the channel that was only used by virtualChanged events

* Don't send notifications for header-only and invalid blocks

* Return block status from block processor and use it for event raising decision

* Use MaybeEnqueue for consensus events

* go lint

* Fix RPC call to actually include tx ids

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-05-24 01:11:01 +03:00
Ori Newman
4dd7113dc5
Increase virtualChangeChan to 100e3 (#2056)
* Increase virtualChangeChan to 100e3
Don't crash when sending UTXO RPC notification to a closed route
Throw error if virtualChangeChan is full

* Use MaybeEnqueue in more places

* Remove comment

* Ignore capacity reached errors on MaybeEnqueue
2022-05-20 19:31:13 +03:00
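A sketch of the MaybeEnqueue behavior these bullets describe, using a plain buffered channel as a stand-in for kaspad's route type; dropping when capacity is reached is the point, the code itself is illustrative:

```go
package main

import "fmt"

// maybeEnqueue attempts a non-blocking send: when the consumer lags and
// the buffer is full, the event is dropped instead of blocking the
// producer or treating the full buffer as a fatal error.
func maybeEnqueue(ch chan int, v int) bool {
	select {
	case ch <- v:
		return true
	default:
		return false // capacity reached: ignored, as the bullets describe
	}
}

func main() {
	ch := make(chan int, 2)
	for i := 0; i < 3; i++ {
		fmt.Println(maybeEnqueue(ch, i)) // true, true, false
	}
}
```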
biospb
48c7fa0104
Wallet synchronization improvement (#2025)
* Wallet synchronization improvement

* Much faster sync on startup
* Avoid double scan of the same address range

* Eliminate 1 sec delay on start

* Rename constant and add numIndexesToQueryForRecentAddresses const

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-19 17:58:33 +03:00
Ori Newman
4d0cf2169a
Bump to v0.12.1 (#2054) 2022-05-19 16:45:35 +03:00
Ori Newman
5f7cc079e9
Make kaspawallet ignore outputs that exist in the mempool (#2053)
* Make kaspawallet ignore outputs that exist in the mempool

* Make kaspawallet ignore outputs that exist in the mempool

* Add comment
2022-05-19 16:12:17 +03:00
Michael Sutton
016ddfdfce
Use a channel for utxo change events (#2052)
* Use a channel from within consensus in order to raise change events in order -- note that this is only a draft commit for discussion

* Fix compilation

* Check for nil

* Allow nil virtualChangeChan

* Remove redundant comments

* Call notifyVirtualChange instead of notifyUTXOsChanged

* Remove redundant comment

* Add a separate function for initVirtualChangeHandler

* Remove redundant type

* Check for nil in the right place

* Fix integration test

* Add data to virtual changeset and cleanup block added event logic

* Renames

* Comment

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-19 14:07:48 +03:00
Svarog
5d24e2afbc
Allow blank address in NotifyUTXOsChanged to get all updates (#2027)
* Allow blank address in NotifyUTXOsChanged to get all updates

* To see if address is possible to extract: Check for NonStandardTy rather than error

* Don't swallow errors

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-18 22:28:33 +03:00
biospb
8735da045f
Block template cache improvement (#2023)
* Block template cache improvement

* Avoid concurrent calls to template builder
* Clear cache on new block event
* Move IsNearlySynced logic to within consensus and cache it for each template 
* Use a single consensus call for building template and checking synced

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-05-06 11:36:07 +03:00
Ori Newman
c839337425
Add finality check to ResolveVirtual (#2041)
* Add finality check to ResolveVirtual

* Warn on finality conflict only if needed
2022-05-06 00:29:48 +03:00
Ori Newman
7390651072
Remove HF1 activation code (#2042)
* Remove HF1 activation code

* Remove test with an overflow

* Fix "max sompi amount is never dust"

* Remove estimatedHeaderUpperBound logic
2022-05-05 20:35:17 +03:00
Svarog
52fbeedf20
Add AcceptedTransactionIDs to ChainChanged notification and VirtualSelectedParentChain RPC (#2036)
* Add acceptedTransactionIds to GetVirtualSelectedParentChainFromBlockResponseMessage and VirtualSelectedParentChainChangedNotificationMessage

* Modify appmessage structs to include new fields

* Implement the functionality for acceptedTransactionID notifications

* Add missing field for IncludeAcceptedTransactionIds

* Notify of block added before notifying that chain changed

* Use consensushashing instead of Transaction.ID

* Don't notify of empty virtual changes

* Don't generate virtualChainChanged notification if there's nobody subscribed

* Fix test to not expect empty notifications

* Don't generate acceptedTransactionIDs if they were not requested by anyone

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-05 12:35:02 +03:00
biospb
1660cf0cf1
Improved staging shard performance (#2034)
using map instead of growing array as shard id can be 1000+ in some cases

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-05-04 21:24:25 +03:00
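A sketch of the map-versus-growing-array change this commit describes; the types are hypothetical, not kaspad's staging-area types:

```go
package main

import "fmt"

// Keying staging shards by a map avoids allocating and growing a slice
// up to the largest shard ID seen (1000+ in some cases) when only a few
// IDs are actually touched.
type shard struct{ id int }

type stagingArea struct {
	shards map[int]*shard // sparse: only touched IDs are stored
}

func (sa *stagingArea) getOrCreateShard(id int) *shard {
	if s, ok := sa.shards[id]; ok {
		return s
	}
	s := &shard{id: id}
	sa.shards[id] = s
	return s
}

func main() {
	sa := &stagingArea{shards: make(map[int]*shard)}
	s := sa.getOrCreateShard(1024) // no 1025-element slice required
	fmt.Println(s.id)
}
```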
Ori Newman
2b5202be7a
Update Dockerfile for go 1.18 (#2038) 2022-05-03 00:22:47 +03:00
Ori Newman
9ffbb15160
Use double pointer when passing rpcError to errors.As (#2039) 2022-04-29 13:23:17 +03:00
tmrlvi
540b0d3a22
Add support for from address in kaspawallet send (#1964)
* Added option to choose from address in kaspawallet + fixed a bug with 0 change

* Checking if From is missing

* Fixed after merge to kaspad0.12

* Applying changes also to send command

* Better help description

* Fixed bug where we take all utxos except the one we wanted

* Swallow the parsing error only when we want to filter

* checking for wallet address directly

* go fmt

* Changing to `--from-address`

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-27 22:51:10 +03:00
biospb
8d5faee53a
Fix wallet output in send/broadcast cmd (#2032)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-27 01:41:02 +03:00
D-Stacks
6e2fd0633b
add "GetMempoolEntriesByAddresses" to kaspad RPC. (#2022)
* add "GetMempoolEntriesByAddresses" to kaspad RPC.

* fmt formatting.

* some things I forgot

* update rpc.md

* forgot to add handler

* fix fmt

* bug fix, implicit testing & error handling

* address review

* address review
2022-04-26 12:31:31 +03:00
Ori Newman
beb038c815
Update changelog.txt for v0.12.0 (#2021)
* Update changelog.txt for v0.12.0

* Fix typo
2022-04-14 13:28:19 +03:00
Ori Newman
35a959b56f
Making a workaround for the UTXO diff child bug (#2020)
* Making a workaround for the UTXO diff child bug

* Fix error message

* Fix error message
2022-04-14 12:55:20 +03:00
D-Stacks
57c6118be8
Adds a "sweep" command to kaspawallet (#2018)
* Add "sweep" command to kaspawallet

* minor linting & layout improvements.

* contain code in "sweep"

* update to sweep.go

* formatting and bugs

* clean up renaming of ExtractTransaction functions

* Nicer formatting of display

* address review.

* one more sweeped -> swept

* missed one review comment. (extra mass).

* remove redundant code from split function.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-14 02:16:16 +03:00
Michael Sutton
723aebbec9
Do not apply merge depth root heuristic on orphan roots (#2019) 2022-04-13 21:06:08 +03:00
Ori Newman
2b395e34b1
Use separate depth than finality depth for merge set calculations after HF (#2013)
* Use separate depth than finality depth for merge set calculations after HF

* Add comments and edit error messages

* Fix TestValidateTransactionInContextAndPopulateFee

* Don't disconnect from node if isViolatingBoundedMergeDepth

* Use new merge root for virtual pick parents; apply HF1 daa score split for validation only

* Use `blue work` heuristic to skip irrelevant relay blocks

* Minor

* Make sure virtual's merge depth root is a real block

* For ghostdag data we always use the non-trusted data

* Fix TestBoundedMergeDepth and in IBD use VirtualMergeDepthRoot instead of MergeDepthRoot

* Update HF1DAAScore

* Make sure merge root and finality are called + avoid calculating virtual root twice

* Update block version to 1 after HF

* Update to v0.12.0

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-04-12 00:26:44 +03:00
Svarog
ada559f007
Kaspawallet daemon: Add Send and Sign commands (#2016)
* Add send and sign commands to protobuf

* Added Send and Sign stubs in kaspawalletd server

* Implement Sign

* Implemented Send

* Allow Broadcast command to supply multiple transactions

* No longer prompt for password in DecryptMnemonics

* Rename TransactionIDs -> TxIDs to keep consistency with Broadcast

* Add some comments and formatting

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-04-11 23:42:01 +03:00
Ori Newman
357e8ce73c
Use cosigner index 0 for read only wallets (#2014) 2022-04-11 02:35:47 +03:00
Ori Newman
6725902663
Bump version and update changelog (#2011) 2022-04-06 22:06:57 +03:00
Ori Newman
99bb21c512
Decrement estimatedHeaderUpperBound from mempool's MaxBlockMass (#2009) 2022-04-06 21:53:00 +03:00
Ori Newman
a4669f3fb5
Bump version and update changelog (#2008) 2022-04-06 02:01:08 +03:00
Ori Newman
e8f40bdff9
Don't skip wallet address with different cosigner index (#2007) 2022-04-06 01:51:13 +03:00
Michael Sutton
68a407ea37
Update changelog for v0.11.15 (#2006)
* Update changelog for v0.11.15

* Print full json of rule violating header
2022-04-05 16:13:11 +03:00
Michael Sutton
80879cabe1
Clean up debug log level by moving many frequent logs to trace level (#2004)
* Clean up debug log level and move most/frequent logs to trace level

* Change function call traces to trace log level
2022-04-03 23:25:29 +03:00
Ori Newman
71afc62298
Avoid override and accidental staging of pruning point by index (#2005)
* Avoid override and accidental staging of pruning point by index

* Update pruning point when resolving virtual
2022-04-03 22:52:05 +03:00
Michael Sutton
ca5c8549b9
Add DB compaction following the deletion of a DB prefix (#2003) 2022-04-03 19:08:32 +03:00
Michael Sutton
ab73def07a
Cache the pruning point anticone (#2002) 2022-04-03 14:44:42 +03:00
Ori Newman
3f840233d8
Fix migrate resolve virtual percents (#2001)
* Fix migration virtual resolution percents

* Don't sync consensus if pruning point is genesis

* Don't add virtual to pruning point proof

* Remove redundant check
2022-04-03 01:01:49 +03:00
Ori Newman
90d9edb8e5
Bump version to v0.11.15 (#1999)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-04-02 22:20:41 +03:00
Ori Newman
b9b360bce4
Remove increase pagefile (#2000)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-04-02 22:12:57 +03:00
Ori Newman
27654961f9
Remove v4 p2p version (#1998)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-04-02 21:54:40 +03:00
Ori Newman
d45af760d8
Validate GetBlockTemplate extra data (#1997)
* Validate GetBlockTemplate extra data

* Check payload instead of extra data

* Fix error message
2022-04-02 21:25:58 +03:00
Michael Sutton
95fa045297
New definition for "out of sync" (#1996)
* Changed the definition of should mine and redefined "out of sync"

* Check `IsNearlySynced` only if IBD running since IBD check is cheap
2022-04-01 10:36:59 +03:00
Ori Newman
cb65dae63d
Add extra data to GetBlockTemplate request (#1995) 2022-03-31 20:12:33 +03:00
Michael Sutton
21b82d7efc
Block template cache (#1994)
* minor text fix

* Implement a block template cache with template modification/reuse mechanism

* Fix compilation error

* Address review comments

* Added a thorough TestModifyBlockTemplate test

* Update header timestamp if possible

* Avoid copying the transactions when only the header changed

* go fmt
2022-03-31 16:37:48 +03:00
Ori Newman
63c6d7443b
In blockParentBuilder.BuildParents check if a block is isInFutureOfVi… (#1993)
* In blockParentBuilder.BuildParents check if a block is isInFutureOfVirtualGenesisChildren instead of checking if it has reachability data

* Add comment
2022-03-31 02:43:07 +03:00
Svarog
753f4a2ec1
Add package name to kaspawalletd .proto file (#1991)
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2022-03-30 20:56:36 +03:00
Svarog
ed667f7e54
Upgrade to go 1.18 (#1992)
* Upgrade to go 1.18

* Fix stability-test shell script
2022-03-30 20:17:29 +03:00
Michael Sutton
c4a034eb43
Optimize the miner-kaspad flow and latency (#1988)
* protobuf for new block template notification structs

* appmessage and wire for new block template notification structs

* Set up the entire handler/call-chain for managing the new-block-template event
2022-03-28 23:41:59 +03:00
Aleoami
2eca0f0b5f
Add names to nameless routes (#1986)
* add names to nameless routes

* Update router.go

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-03-28 09:41:25 +03:00
Ori Newman
58d627e05a
Unite reachability stores (#1963)
* Unite all reachability stores

* Upgrade script

* Fix tests

* Add UpdateReindexRoot to RebuildReachability

* Use dbTx when deleting reachability stores

* Use ghostdagDataWithoutPrunedBlocks when rebuilding reachability

* Use next tree ancestor wherever possible and avoid finality point search if the block is too close to pruning point

* Address the boundary case where the pruning point becomes the finality point

* some minor fixes

* Remove RebuildReachability and use manual syncing between old and new consensus for migration

* Remove sanity test (it failed when tips were not in the same order)

Co-authored-by: msutton <mikisiton2@gmail.com>
2022-03-27 21:23:03 +03:00
Svarog
639183ba0e
Add support for auto-compound in kaspawallet send (#1951)
* Add GetUTXOsByBalances command to rpc

* Fix wrong commands in GetBalanceByAddress

* Moved calculation of TransactionMass out of TransactionValidator, so that it can be used in kaspawallet

* Allow CreateUnsignedTransaction to return multiple transactions

* Start working on split

* Implement maybeSplitTransactionInner

* estimateMassIncreaseForSignatures should multiply by the number of inputs

* Implement createSplitTransaction

* Implement mergeTransactions

* Broadcast all transaction, not only 1

* workaround missing UTXOEntry in partially signed transaction

* Bugfix in broadcast loop

* Add underscores in some constants

* Make all nets RelayNonStdTxs: false

* Change estimateMassIncreaseForSignatures to estimateMassAfterSignatures

* Allow situations where merge transaction doesn't have enough funds to pay fees

* Add comments

* A few renames

* Handle missed errors

* Fix clone of PubKeySignaturePair  to properly clone nil signatures

* Add sanity check to make sure originalTransaction has exactly two outputs

* Re-use change address for splitAddress

* Add one more utxo if the total amount is smaller than what we need to send due to fees

* Fix off-by-1 error in splitTransaction

* Add a comment to maybeAutoCompoundTransaction

* Add comment on why we are increasing inputCountPerSplit

* Add comment explaining why originalTransaction has 1 or 2 outputs

* Move oneMoreUTXOForMergeTransaction to split_transaction.go

* Allow to add multiple utxos to pay fee for mergeTransactions, if needed

* calculate split input counts and sizes properly

* Allow multiple transactions inside the create-unsigned-transaction -> sign -> broadcast workflow

* Print the number of transaction which was sent, in case there were multiple

* Rename broadcastConfig.Transaction(File) to Transactions(File)

* Convert alreadySelectedUTXOs to a map

* Fix a typo

* Add comment explaining that we assume all inputs are the same

* Revert over-refactor of rename of config.Transaction -> config.Transactions

* Rename: inputPerSplitCount -> inputsPerSplitCount

* Add comment for splitAndInputPerSplitCounts

* Use createSplitTransaction to calculate the upper bound of mass for split transactions
2022-03-27 20:06:55 +03:00
Michael Sutton
9fa08442cf
Minor log fixes (#1983) 2022-03-21 11:45:52 +02:00
Michael Sutton
0dd50394ec
Fix a bug in the new p2p v5 IBD chain negotiation (#1981)
* Fix a bug in the case where syncer chain is fully known to syncee

* Extract chain negotiation to a separated method

* Bump version and update the changelog

* Add zoom-in progress validation and some debug logs

* Improved error explanation

* go fmt

* Validate zoom-in progress through a total count
2022-03-20 18:06:55 +02:00
Michael Sutton
ac8d4e1341
Update changelog with v0.11.13 changes (#1980) 2022-03-16 10:48:11 +02:00
Michael Sutton
2488fbde78
Apply avoiding IBD patch10 logic to p2p v4 (#1979)
* a patch for fixing p2p v4 IBD issues for all side-chains

* Perform side-chain check earlier to avoid IBD start

* A few comments explaining the IBD patch
2022-03-15 19:16:25 +02:00
Michael Sutton
2ab8065142
Improve output of non-critical protocol errors to avoid user panic (#1978)
* Improve output of non-critical protocol errors to avoid user panic

* Add log messages at the end of IBD with headers proof

* Found a case where this was falsely triggered due to the wrong equality test
2022-03-15 17:23:29 +02:00
Michael Sutton
25410b86ae
Make sure there are no negative numbers in the progress report (#1977) 2022-03-15 11:58:15 +02:00
Michael Sutton
4e44dd8510
Various P2P V5 IBD fixes (#1976)
* The first message is expected to contain headers and not a "done" message (+comment and error text fixes)

* Dequeue w/o timeout during pp anticone batch processing

* Add a verification step for catching possible new IBD errors

* Fetch missing bodies for both, syncer selected tip past and relay block past

* Make sure the syncer is behaving correctly to avoid out of index errors

* Make sure progress reporter does not exceed 100%

* No orphan roots, so no need to queue the empty list

* Add a log to report utxo fetch failure with err message

* Duplicate blocks should not appear as a warning

* typo
2022-03-14 12:21:32 +02:00
Svarog
1e56a22b32
Ignore not found errors from tp.transactionsOrderedByFeeRate.Remove. (#1974)
* Fix error message referencing wrong function name

* Ignore not found errors from tp.transactionsOrderedByFeeRate.Remove.

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2022-03-13 18:01:13 +02:00
Ori Newman
7a95f0c7a4
Use nil suggestedLowHash if selected parent pruning point is not in the future of the current one (#1972)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2022-03-13 17:22:03 +02:00
Michael Sutton
c81506220b
Send pruning point anticone in batches (#1973)
* add p2p v5 which is currently identical to v4

* set all internal imports to v5

* set default version to 5

* Send pruning point and its anticone in batches

* go lint

* Fix json format

* Use DequeueWithTimeout

* Assert that batch size < route capacity

* oops, this is a flow handler, by definition it needs to be w/o a timeout

* here however, a timeout is required

* Keep IDs of prev messages unmodified

* previous merge operation accidentally erased an important part of this pr

* Extend timeout of simple sync

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-03-13 16:31:34 +02:00
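The batching itself is a generic chunking pattern; a minimal sketch (batch size and element types are illustrative — and note the commit additionally asserts that the batch size is smaller than the route capacity):

```go
package main

import "fmt"

// batches splits items into consecutive chunks of at most batchSize, so
// a large set (like a pruning point anticone) can be sent over a route
// with bounded capacity.
func batches(items []int, batchSize int) [][]int {
	var out [][]int
	for start := 0; start < len(items); start += batchSize {
		end := start + batchSize
		if end > len(items) {
			end = len(items)
		}
		out = append(out, items[start:end])
	}
	return out
}

func main() {
	fmt.Println(batches([]int{1, 2, 3, 4, 5}, 2)) // [[1 2] [3 4] [5]]
}
```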
Michael Sutton
e5598c15a7
Fix ibd shared past negotiation to be non quadratic also in the worst-case (#1969)
* add p2p v5 which is currently identical to v4

* set all internal imports to v5

* wip

* set default version to 5

* protobuf gen for new ibd chain locator

* wire for new ibd chain locator types

* new ibd shared block algo -- only basic test passing

* address the case where pruning points disagree, now both IBD tests pass

* protobuf gen for new past diff request message

* wire for new request past diff message

* handle and flow for new request past diff message - logic unimplemented yet

* implement ibd sync past diff of relay and selected tip

* go fmt

* remove unused methods

* missed one err check

* addressing simple comments

* apply the traversal limit logic and sort headers

* rename pastdiff -> anticone

* apply Don't relay blocks in virtual anticone #1970 to v5

* go fmt

* Fixed minor comments

* Limit the number of chain negotiation restarts
2022-03-13 11:27:50 +02:00
Svarog
433af5e0fe
Make findTransactionIndex return wasFound explicitly + fix crash caused by invalid handling of not found transaction (#1971)
* Make findTransactionIndex return wasFound explicitly + fix crash caused by invalid handling of not found transaction

* Add comment on findTransactionIndex
2022-03-12 11:07:10 +02:00
Ori Newman
b7be807167
Don't relay blocks in virtual anticone (#1970) 2022-03-11 13:24:45 +02:00
Ori Newman
e687ceeae7
Add version to block template (#1967) 2022-03-11 08:56:36 +02:00
Ori Newman
04e35321aa
Bump to v0.11.13 (#1968) 2022-03-11 08:33:53 +02:00
Ori Newman
061e65be93
Fix argument order for IsAncestorOf in boundedMergeBreakingParents (#1966) 2022-03-09 21:11:00 +02:00
Ori Newman
190e725dd0
Optimize expected header pruning point (#1962)
* Use the correct heuristic to avoid checking for next pruning point movement when not needed
2022-03-07 00:16:29 +02:00
Ori Newman
6449b03034
Ignore transaction invs on IBD (#1960)
* Ignore transaction invs on IBD

* Add IsIBDRunning mock to TestHandleRelayedTransactionsNotFound

Co-authored-by: Ori Newman <>
2022-02-26 22:20:08 +02:00
Ori Newman
9f02a24e8b
Add merge set and IsChainBlock to the RPC (#1961)
* Add merge set and IsChainBlock to the RPC

* Fix BlockInfo.Clone()
2022-02-25 16:22:00 +02:00
Isaac Cook
9b23bbcdb5
kaspactl: string slice deser for GetUtxosByAddresses (#1955)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-02-24 00:40:01 +02:00
stasatdaglabs
b30f7309a2
Implement a parse sub command in the wallet (#1953)
* Add boilerplate for the `parse` sub command.

* Deserialize the given transaction hex.

* Implement the rest of the wallet parse command.

* Hide transaction inputs behind a `verbose` flag.

* Indicate that we aren't able to extract an address out of a nonstandard transaction.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-02-20 22:12:23 +02:00
Ori Newman
1c18a49992
Add cache to block window (#1948)
* Add cache to block window

* Copy the window heap slice with the right capacity

* Use WindowHeapSliceStore

* Use the selected parent window as a basis (and some comments and variable renames)

* Clone slice on newSizedUpHeapFromSlice

* Rename isNotFoundError->currentIsNonTrustedBlock

* Increase windowHeapSliceStore cache size to 2000 and some cosmetic changes
2022-02-20 16:52:36 +02:00
stasatdaglabs
28d0f1ea2e
Set MaxBlockLevels for non-mainnet networks to 250 (#1952)
* Make MaxBlockLevel a DAG params instead of a constant.

* Change the testnet network name to 9.

* Fix TestBlockWindow.

* Set MaxBlockLevels for non-mainnet networks to 250.

* Revert "Fix TestBlockWindow."

This reverts commit 30a7892f53e0bb8d0d24435a68f0561a8efab575.

* Fix TestPruning.
2022-02-20 13:43:42 +02:00
Ori Newman
3f7e482291
Add progress indication for virtual resolution (#1949)
* Add progress indication for virtual resolution

* Avoid division by zero
2022-02-20 00:32:41 +02:00
Ori Newman
ce4f5fcc33
Remove BlockWithTrustedData method (#1950)
* Remove BlockWithTrustedData method

* Remove unused methods
2022-02-20 00:04:47 +02:00
Svarog
be3a6604d7
Make kaspawallet store the utxos sorted by amount (#1947)
* Make kaspawallet store the utxos sorted by amount, so that the bigger utxos are spent first - making it less likely a compound will be required

* Start refactor addEntryToUTXOSet

* Add GetUTXOsByBalances command to rpc

* Store list of addresses, updated with the collectAddresses methods
(replacing collectUTXOs methods)

* Fix wrong commands in GetBalanceByAddress

* Rename: refreshExistingUTXOs -> refreshUTXOs

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2022-02-18 15:08:08 +02:00
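A sketch of the largest-first ordering this commit describes, with hypothetical types: sorting descending by amount means coin selection drains big UTXOs before small ones, making compounding less likely:

```go
package main

import (
	"fmt"
	"sort"
)

// utxo is an illustrative stand-in for kaspawallet's UTXO type.
type utxo struct {
	outpoint string
	amount   uint64 // in sompi
}

// sortUTXOs orders UTXOs largest-first so selection spends the bigger
// outputs before the smaller ones.
func sortUTXOs(utxos []utxo) {
	sort.Slice(utxos, func(i, j int) bool {
		return utxos[i].amount > utxos[j].amount
	})
}

func main() {
	us := []utxo{{"a", 5}, {"b", 500}, {"c", 50}}
	sortUTXOs(us)
	fmt.Println(us) // [{b 500} {c 50} {a 5}]
}
```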
Svarog
f452531df0
Fix bookkeeping of chained transactions in mempool (#1946)
* Fix bookkeeping of chained transactions in mempool

* Fix golint error
2022-02-18 13:21:45 +02:00
Ori Newman
13a09da848
Bump version to v0.11.12 (#1941)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2022-02-17 18:27:25 +02:00
Michael Sutton
f58aeb4f9f
In profile dump file - use a time format supporting all file systems (#1945)
* Use a time format without ":" to support all file systems

* go fmt
2022-02-12 22:00:46 +02:00
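A minimal sketch of a file-system-safe timestamp in Go (the exact layout kaspad chose may differ): use "-" instead of ":" in the clock portion, since ":" is illegal in file names on Windows and some other file systems:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go reference-time formatting; the clock part uses "-" rather than
	// ":" so the resulting name is valid on all common file systems.
	name := fmt.Sprintf("heap-%s.pprof", time.Now().Format("2006-01-02T15-04-05"))
	fmt.Println(name) // e.g. heap-2022-02-12T22-00-46.pprof
}
```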
Ori Newman
82f0a4d74f
Drop support for p2p v3 (#1942)
* Drop support for p2p v3

* Remove redundant aliases

* Remove redundant condition
2022-02-06 17:28:06 +02:00
Ori Newman
69d90fe827
Remove duplicate median time calculation on tx validation (#1943)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2022-02-06 16:34:05 +02:00
Ori Newman
c85b5d70fd
Add AllowConnectionToDifferentVersions flag to kaspactl (#1940) 2022-02-06 15:43:09 +02:00
Ori Newman
1cd712a63e
Increase headers proof timeout and add more progress logs (#1939)
* Increase timeout for pruning proof and add some logs

* Show resolving virtual progress as whole percents
2022-02-06 12:42:28 +02:00
stasatdaglabs
27ba9d0374
Split ApplyPruningPointProof to multiple small database transactions (#1937)
* Split ApplyPruningPointProof to multiple small database transactions.

* Increase the timeout duration in TestIBDWithPruning.

* Increase the timeout duration in simple-sync.

* Explain that if ApplyPruningPointProof fails, the database must be discarded.
2022-02-06 10:36:27 +02:00
stasatdaglabs
b1229f7908
Display progress of IBD process in Kaspad logs (#1938)
* Report progress percentage when downloading headers in IBD.

* Extract reporting logic to a separate type.

* Report progress for IBD missing block bodies.
2022-02-02 21:24:45 +02:00
Michael Sutton
4a560f25a6
Update changelog and version number for v0.11.11 (#1935) 2022-01-27 21:53:05 +02:00
Michael Sutton
dab1a881fe
Fix for rare consensus bug regarding daa window order (#1934)
* Fix for rare consensus bug: daa window min-time-block was not deterministic when timestamps are equal

* Something is missing

* Extract compare logic to a function with better performance

* typo
2022-01-27 20:29:44 +02:00
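A sketch of the deterministic comparison the fix calls for: order by timestamp first, then break ties by hash so every node selects the same minimum block. Types are illustrative, not kaspad's:

```go
package main

import (
	"bytes"
	"fmt"
)

// blockRef is an illustrative stand-in for a block header reference.
type blockRef struct {
	timestamp int64
	hash      []byte
}

// less orders blocks by timestamp, breaking timestamp ties by hash so
// the "min-time-block" of a DAA window is the same on every node.
func less(a, b blockRef) bool {
	if a.timestamp != b.timestamp {
		return a.timestamp < b.timestamp
	}
	return bytes.Compare(a.hash, b.hash) < 0
}

func main() {
	a := blockRef{100, []byte{0x01}}
	b := blockRef{100, []byte{0x02}}
	fmt.Println(less(a, b)) // true: the tie is broken deterministically
}
```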
Ori Newman
598392d0cf
Update changelog to v0.11.10 (#1933)
Co-authored-by: Ori Newman <>
2022-01-27 12:56:26 +02:00
Michael Sutton
6d27637055
Add monitoring of heap and save heap profile if size is over some limit (#1932)
* Add monitoring of heap and save heap profile if size is over some limit

* Exported function

* Extract dump logic to a function (for defer close)

* Change trackHeapSize ticker interval to 10 seconds

* Add timestamp to dump file name

Co-authored-by: Ori Newman <>
2022-01-27 11:57:28 +02:00
Michael Sutton
4855d845b3
Extract IBD management from invs relay flow to a new separated flow (#1930)
* Separate IBD to a new flow (so now invs are handled concurrently and no route capacity errors)

* Invs messages should be queued while waiting for BlockLocator msg

* Close IBD channel so that HandleIBDFlow exits too

* Apply flow separation to p2p protocol v4

* Manage the IBDRequestChannel through the Peer struct

* Some IBDs take a little longer
2022-01-24 09:30:41 +02:00
stasatdaglabs
b1b179c105
Add --transaction-file options to the sign and broadcast wallet subcommands (#1927)
* Add --transaction-file to the sign wallet subcommand.

* Fix bad short sign config option.

* Trim whitespace around the hex file.

* Add --transaction-file to the broadcast subcommand.
2022-01-16 14:59:54 +02:00
Ori Newman
dadacdc0f4
Filter redundant blocks from daa window (#1925)
* Copy blockrelay flows to v4

* Remove duplicate sending of the same DAA blocks

* Advance testnet version

* Renames and add comments

* Add IBD test between v3 and v4

* Fix syncee v4 p2p version

* Check if netsync finished with selected tip
2022-01-09 16:58:51 +02:00
Ori Newman
d2379608ad
P2P upgrade mechanism (#1921)
* Implement upgrade mechanism for p2p

* Remove dependencies from flowcontext to v3

* Add p2p v4

* Add Ready flow

* Remove copy paste code of v3

* Register SendAddresses flow at the top level

* Add option to set protocol version from CLI and add TestAddressExchangeV3V4

* Send ready message on minimal net adapter

* Rename defaultMaxProtocolVersion->maxAcceptableProtocolVersion
2022-01-09 09:59:45 +02:00
Michael Sutton
14b2bcbd81
Update to version v0.11.9 plus changelog update (#1919) 2021-12-30 18:37:51 +02:00
Michael Sutton
71b284f4d5
Address manager randomization weighted by connection failures (#1916)
* Address manager refactor stage 1

* Use a simpler weightedRand function which makes it easier to parameterize the process

* Switch back to connectionFailedCount

* Simplify selected entry deletion

* Fix function comment

Co-authored-by: Constantine Bitensky <cbitensky1@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-30 12:05:53 +02:00
Michael Sutton
0e1d247915
Fix stability test mining (allow submission of non DAA blocks too) (#1917)
* Fix stability test mining (allow submission of non DAA blocks too)

* Comments for go fmt

* Use SubmitBlockAlsoIfNonDAA for all tests

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-12-30 11:04:56 +02:00
Michael Sutton
504ec36612
Remove DNS seeder managed by Elichai Turkel (its offline) (#1918) 2021-12-30 10:10:57 +02:00
Ori Newman
c80b113319
Fix two pruning proof bugs (#1913)
* Don't assume pruning point is at specific level

* Reverse pruning points order in ArePruningPointsViolatingFinality

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2021-12-29 21:24:29 +02:00
stasatdaglabs
0bdd19136f
Implement the new monetary policy (#1892)
* Remove unused functions, constants, and variables.

* Rename maxSubsidy to baseSubsidy.

* Remove unused parameters from CalcBlockSubsidy.

* Remove link to old monetary policy.

* If a block's DAA score is smaller than half a year, it should have a base subsidy.

* Fix merge errors.

* Fix more merge errors.

* Add DeflationaryPhaseBaseSubsidy to the params.

* Implement TestCalcDeflationaryPeriodBlockSubsidy.

* Implement calcDeflationaryPeriodBlockSubsidy naively.

* Implement calcDeflationaryPeriodBlockSubsidy not naively.

* Adjust the subsidy based on target block rate.

* Fix deflationaryPhaseDaaScore in TestCalcDeflationaryPeriodBlockSubsidy.

* Explain how secondsPerMonth is calculated.

* Don't adjust the subsidy based on the target block rate.

* Update defaultDeflationaryPhaseDaaScore and add an explanation.

* Use a pre-calculated table for subsidy per month

* Make the generation function fail if base subsidy is changed

* go fmt

* Use test logger for printing + simplify print loop

Co-authored-by: msutton <mikisiton2@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-29 20:49:20 +02:00
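A simplified sketch of a smooth monthly-decay schedule in the spirit of this commit: the subsidy decays by a factor of 2^(1/12) each month, so it halves once per year. The constants below are illustrative, and as the bullets note, kaspad uses a pre-calculated subsidy table rather than runtime floating point so consensus stays deterministic:

```go
package main

import (
	"fmt"
	"math"
)

const (
	secondsPerMonth  = 365.25 * 24 * 60 * 60 / 12 // 2629800 (365.25-day year / 12)
	blocksPerMonth   = secondsPerMonth            // at one block per second
	baseSubsidySompi = 440 * 100_000_000          // hypothetical base subsidy
)

// deflationarySubsidy returns the block subsidy for a DAA score measured
// from the start of the deflationary phase: halve once per 12 months via
// a 2^(month/12) divisor. (Sketch only; real code must avoid floats.)
func deflationarySubsidy(daaScoreInPhase uint64) uint64 {
	month := daaScoreInPhase / blocksPerMonth
	return uint64(float64(baseSubsidySompi) / math.Pow(2, float64(month)/12))
}

func main() {
	fmt.Println(deflationarySubsidy(0))                   // full base subsidy
	fmt.Println(deflationarySubsidy(12 * blocksPerMonth)) // halved after a year
}
```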
Svarog
7c1cddff11
Address Manager: Don't seed if we have enough Addresses + Remove Address that failed connection too many times (#1899)
* Seed from DNS only if we ran out of addresses to connect to

* Remove address if it has failed 10 connections

* Change connectionFailedCountForRemove to 3
2021-12-29 20:34:32 +02:00
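A sketch of the removal policy in the second bullet; only the threshold constant name and its value (3) come from the commit, the rest is illustrative:

```go
package main

import "fmt"

// connectionFailedCountForRemove is the threshold from the commit above.
const connectionFailedCountForRemove = 3

type addressManager struct {
	failures map[string]int // consecutive failures per address
}

func (am *addressManager) markSuccess(addr string) {
	delete(am.failures, addr) // a successful connection resets the count
}

func (am *addressManager) markFailure(addr string) (removed bool) {
	am.failures[addr]++
	if am.failures[addr] >= connectionFailedCountForRemove {
		delete(am.failures, addr) // drop the address entirely
		return true
	}
	return false
}

func main() {
	am := &addressManager{failures: make(map[string]int)}
	am.markFailure("1.2.3.4:16111")
	am.markFailure("1.2.3.4:16111")
	fmt.Println(am.markFailure("1.2.3.4:16111")) // true: removed after 3 failures
}
```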
Michael Sutton
064b0454e8
Reject out of date blocks submitted via RPC (#1914)
* Disallow by default RPC submission of out of date blocks (blocks which are out of virtual DAA window)

* go fmt

* Better condition test

* Make allowNonDAABlocks an RPC field and not a command line flag

* go fmt

* one more go fmt
2021-12-29 20:07:45 +02:00
Michael Sutton
8282fb486e
Fix protobuf generation for PR #1885 (#1915) 2021-12-29 15:30:07 +02:00
Constantine Bytensky
428449bb7d
Wallet: show balance by addresses (#1904)
Co-authored-by: Constantine Bitensky <cbitensky1@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-25 14:25:15 +02:00
Aleoami
f54659ead0
Add kaspadns.kaspacalc.net to seeder list (#1910)
* Update to version v0.11.8

* bugfix: addresses issue #1903, raised kaspawallet bruteforce core limit to 256

* fix contributing.md text

* added dns seeder kaspadns.kaspacalc.net

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-25 13:22:44 +02:00
Aleoami
99f82eb80f
fix CONTRIBUTING.md text (#1908)
* Update to version v0.11.8

* bugfix: addresses issue #1903, raised kaspawallet bruteforce core limit to 256

* fix contributing.md text

* Update cmd/kaspawallet/keys/keys.go

Co-authored-by: Ori Newman <orinewman1@gmail.com>

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-25 12:47:04 +02:00
georges-kuenzli
aa43c14fc5
Added seeder[1-4].kaspad.net to seeder list (#1901)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-23 00:42:17 +02:00
FestinaLente666
129e9119d2
Add request balance by address to kaspactl (#1885)
* Added new RPC command for getting balance of wallet address

* small details

* Regenerate protobuffs

* fixes

* routing fix

* Add 'Message' suffix to GetBalanceByAddressRequest and GetBalanceByAddressResponse

* Add Message

* linting

* linting

* fixed name

Co-authored-by: Mike Zak <feanorr@gmail.com>
Co-authored-by: Ori Newman <>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
2021-12-22 22:21:33 +02:00
Constantine Bytensky
cae7faced2
Added dnsseed.cbytensky.org to seeder list (#1897)
Co-authored-by: Constantine Bitensky <cbitensky1@gmail.com>
2021-12-20 08:42:02 +02:00
Elichai Turkel
f036b18f9e
Add a profile option to kaspawallet daemon (#1854)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-12-19 09:36:43 +02:00
Ori Newman
1d740e1eab
UTXO index fix (#1891)
* Update to version v0.11.8

* Call OnPruningPointUTXOSetOverride after committing the staging consensus

* use domain.Consensus() in utxo index so it'll always check the right consensus
2021-12-19 08:56:18 +02:00
Ori Newman
011871cda2
Update reindex root for each block level (#1881)
Co-authored-by: Ori Newman <>
2021-12-13 00:06:33 +02:00
Michael Sutton
7b4f761fb9
Update readme (#1848)
Update readme to reflect that the project is no longer in a pre-Alpha state

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-12-12 17:07:59 +02:00
Ori Newman
5806fef35f
Lower devnet's initial difficulty (#1869)
* Lower devnet's initial difficulty

* Increase simple-sync timeToPropagate to 10 seconds

* Check expectedAveragePropagationTime

Co-authored-by: Ori Newman <>
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-12-12 16:24:43 +02:00
Ori Newman
227ef392ba Update version 2021-12-11 20:55:25 +02:00
Ori Newman
f3d76d6565
Ignore header mass in devnet and testnet (#1879) 2021-12-11 20:36:58 +02:00
Ori Newman
df573bba63
Remove unused args from CalcSubsidy (#1877) 2021-12-11 20:16:23 +02:00
Ori Newman
2a97b7c9bb
ExpectedHeaderPruningPoint fix (#1876)
* ExpectedHeaderPruningPoint fix

* Fix off by one error when iterating lowHash's selected chain
2021-12-11 19:26:35 +02:00
Svarog
70900c571b
Changes to libkaspawallet to support Kaspaper (#1878)
* Add NewFileFromMnemonics

* Export InternalKeychain and ExternalKeychain

* Rename NewFileFromMnemonics -> NewFileFromMnemonic

* NewFileFromMnemonic: change also argument name

* Use libkaspawallet.ExternalKeychain instead of externalKeychain
2021-12-11 17:52:21 +02:00
Constantine Bytensky
7292438e4a
kaspawallet: show-address → new-address + show-addresses (#1870)
* Rename show-address to new-address

* New proto: ShowAddresses

* kaspawallet: added show-addresses command

Co-authored-by: Constantine Bitensky <cbitensky1@gmail.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-12-07 00:49:06 +02:00
Ori Newman
dced1a9376
Fix numThreads using getAEAD instead of decryptMnemonic (#1859)
* Fix num threads using getAEAD instead of decryptMnemonic

* Use d.NumThread to init bruteforce for num threads in getAEAD

Co-authored-by: Ori Newman <>
2021-12-06 23:56:00 +02:00
Ori Newman
32e8e539ac
Apply ResolveVirtual diffs to the UTXO index (#1868)
* Apply ResolveVirtual diffs to the UTXO index

* Add comments

Co-authored-by: Ori Newman <>
2021-12-05 13:22:48 +02:00
Ori Newman
11103a36d3
Get rid of genesis's UTXO dump (#1867)
* Get rid of genesis's UTXO dump

* Allow known orphans when AllowSubmitBlockWhenNotSynced=true

* gofmt

* Avoid IBD without changing the pruning point when only genesis is available

* Add DisallowDirectBlocksOnTopOfGenesis=true for mainnet

* Remove any mention of nobanning to let stability tests run

* Rename ifGenesisSetUtxoSet to loadUTXODataForGenesis

Co-authored-by: Ori Newman <>
2021-12-05 12:46:41 +02:00
Elichai Turkel
606b781ca0
Fix bug in RequiredDifficulty and release another version 2021-11-25 21:49:15 +02:00
Elichai Turkel
dbf18d8052
Hard fork - new genesis with the utxo set of the last block (#1856)
* UTXO dump of block 0fca37ca667c2d550a6c4416dad9717e50927128c424fa4edbebc436ab13aeef

* Activate HF immediately and change reward to 1000

* Change protocol version and datadir location

* Delete comments

* Fix zero hash to muhash zero hash in genesis utxo dump check

* Don't omit genesis as direct parent

* Fix tests

* Change subsidy to 500

* Don't assume genesis multiset is empty

* Fix BlockReward test

* Fix TestValidateAndInsertImportedPruningPoint test

* Fix pruning point genesis utxo set

* Fix tests related to mainnet utxo set

* Don't change the difficulty before you have a full window

* Fix TestBlockWindow tests

* Remove the global utxo set variable, and persist mainnet utxo deserialization between runs

* Fix last tests

* Make peer banning opt-in

* small fix for a test

* Fix go lint

* Fix Ori's review comments

* Change DAA score of genesis to checkpoint DAA score and fix all tests

* Fix the BlockLevel bits counting

* Fix some tests and make them run a little faster

* Change datadir name back to kaspa-mainnet and change db path from /data to /datadir

* Last changes for the release and change the version to 0.11.5

Co-authored-by: Ori Newman <orinewman1@gmail.com>
Co-authored-by: Ori Newman <>
Co-authored-by: msutton <mikisiton2@gmail.com>
2021-11-25 20:18:43 +02:00
Michael Sutton
2a1b38ce7a
Fix mergeset limit violation bug (#1855)
* Added a test which reproduces a bug where virtual's mergeset exceeds the mergeset limit -- this test currently fails

* Added a simple condition to fix the mergeset limit bug

* Format issue
2021-11-24 19:34:59 +02:00
Ori Newman
29c410d123 Change HF time 2021-11-22 11:27:17 +02:00
Ori Newman
6e6fabf956 Change HF time 2021-11-22 11:24:55 +02:00
Ori Newman
b04292c97a Don't serve parallel PruningPointAndItsAnticoneRequests 2021-11-22 09:56:30 +02:00
Ori Newman
765dd170e4
Optimizations and header size reduce hardfork (#1853)
* Modify DefaultTimeout to 120 seconds

A temporary workaround for nodes having trouble syncing (currently the download of pruning-point-related data during IBD takes more than 30 seconds)

* Cache existence in reachability store

* Cache block level in the header

* Fix IBD indication on submit block

* Add hardForkOmitGenesisFromParentsDAAScore logic

* Fix NumThreads bug in the wallet

* Get rid of ParentsAtLevel header method

* Fix a bug in BuildPruningPointProof

* Increase race detector timeout

* Add cache to BuildPruningPointProof

* Add comments and temp comment out go vet

* Fix ParentsAtLevel

* Don't fill empty parents

* Change HardForkOmitGenesisFromParentsDAAScore in fast netsync test

* Add --allow-submit-block-when-not-synced in stability tests

* Fix TestPruning

* Return fast tests

* Fix off by one error on kaspawallet

* Fetch only one block with trusted data at a time

* Update fork DAA score

* Don't ban for unexpected message type

* Fix tests

Co-authored-by: Michael Sutton <mikisiton2@gmail.com>
Co-authored-by: Ori Newman <>
2021-11-22 09:00:39 +02:00
Ori Newman
8e362845b3 Update to version 0.11.4 2021-11-21 01:52:56 +02:00
506 changed files with 28349 additions and 10605 deletions

View File

@ -1,7 +1,7 @@
name: Build and upload assets
on:
release:
types: [ published ]
types: [published]
jobs:
build:
@ -9,7 +9,7 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [ ubuntu-latest, windows-latest, macos-latest ]
os: [ubuntu-latest, windows-latest, macos-latest]
name: Building, ${{ matrix.os }}
steps:
- name: Fix CRLF on Windows
@ -17,18 +17,12 @@ jobs:
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# Increase the pagefile size on Windows to avoid running out of memory
- name: Increase pagefile size on Windows
if: runner.os == 'Windows'
run: powershell -command .github\workflows\SetPageFileSize.ps1
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: 1.16
go-version: 1.21
- name: Build on Linux
if: runner.os == 'Linux'
@ -36,7 +30,7 @@ jobs:
# `-tags netgo,osusergo` means use pure go replacements for "os/user" and "net"
# `-s -w` strips the binary to produce smaller size binaries
run: |
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ . ./cmd/...
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-linux.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-linux.zip"
zip -r "${archive}" ./bin/*
@ -47,7 +41,7 @@ jobs:
if: runner.os == 'Windows'
shell: bash
run: |
go build -v -ldflags="-s -w" -o bin/ . ./cmd/...
go build -v -ldflags="-s -w" -o bin/ ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-win64.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-win64.zip"
powershell "Compress-Archive bin/* \"${archive}\""
@ -57,14 +51,13 @@ jobs:
- name: Build on MacOS
if: runner.os == 'macOS'
run: |
go build -v -ldflags="-s -w" -o ./bin/ . ./cmd/...
go build -v -ldflags="-s -w" -o ./bin/ ./cmd/...
archive="bin/kaspad-${{ github.event.release.tag_name }}-osx.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-osx.zip"
zip -r "${archive}" ./bin/*
echo "archive=${archive}" >> $GITHUB_ENV
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Upload release asset
uses: actions/upload-release-asset@v1
env:

View File

@ -11,18 +11,18 @@ jobs:
strategy:
fail-fast: false
matrix:
branch: [ master, latest ]
branch: [master, latest]
name: Race detection on ${{ matrix.branch }}
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: 1.16
go-version: 1.23
- name: Set scheduled branch name
shell: bash
@ -46,4 +46,4 @@ jobs:
run: |
git checkout "${{ env.run_on }}"
git status
go test -race ./...
go test -timeout 20m -race ./...

View File

@ -8,22 +8,20 @@ on:
types: [opened, synchronize, edited]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-latest, macos-latest ]
os: [ubuntu-latest, macos-latest]
name: Tests, ${{ matrix.os }}
steps:
- name: Fix CRLF on Windows
if: runner.os == 'Windows'
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
uses: actions/checkout@v4
# Increase the pagefile size on Windows to avoid running out of memory
- name: Increase pagefile size on Windows
@ -31,14 +29,13 @@ jobs:
run: powershell -command .github\workflows\SetPageFileSize.ps1
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: 1.16
go-version: 1.23
# Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
- name: Go Cache
uses: actions/cache@v2
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
@ -49,19 +46,17 @@ jobs:
shell: bash
run: ./build_and_test.sh -v
stability-test-fast:
runs-on: ubuntu-latest
name: Fast stability tests, ${{ github.head_ref }}
steps:
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: 1.16
go-version: 1.23
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v4
with:
fetch-depth: 0
@ -75,18 +70,17 @@ jobs:
working-directory: stability-tests
run: ./install_and_test.sh
coverage:
runs-on: ubuntu-latest
name: Produce code coverage
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Setup Go
uses: actions/setup-go@v2
uses: actions/setup-go@v5
with:
go-version: 1.16
go-version: 1.23
- name: Delete the stability tests from coverage
run: rm -r stability-tests

1
.gitignore vendored
View File

@ -53,6 +53,7 @@ _testmain.go
debug
debug.test
__debug_bin
*__debug_*
# CI
version.txt

43
CODE_OF_CONDUCT.md Normal file
View File

@ -0,0 +1,43 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project maintainers on this [Google form][gform]. The project maintainers will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project maintainers are obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[gform]: https://forms.gle/dnKXMJL7VxdUjt3x5
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/

View File

@ -12,8 +12,7 @@ If you want to make a big change it's better to discuss it first by opening an i
## Pull Request process
Any pull request should be opened against the development branch of the target version. The development branch format is
as follows: `vx.y.z-dev`, for example: `v0.8.5-dev`.
Any pull request should be opened against the development branch `dev`.
All pull requests should pass the checks written in `build_and_test.sh`, so it's recommended to run this script before
submitting your PR.

View File

@ -1,16 +1,15 @@
# DEPRECATED
Kaspad
====
Warning: This is pre-alpha software. There's no guarantee anything works.
====
The full node reference implementation was [rewritten in Rust](https://github.com/kaspanet/rusty-kaspa), as a result, the Go implementation is now deprecated.
PLEASE NOTE: Any pull requests or issues opened in this repository will be closed without being addressed, except for issues or pull requests related to the kaspawallet, which remains maintained. In any other case, please use the [Rust implementation](https://github.com/kaspanet/rusty-kaspa) instead.
# Kaspad
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](https://choosealicense.com/licenses/isc/)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/kaspanet/kaspad)
Kaspad is the reference full node Kaspa implementation written in Go (golang).
This project is currently under active development and is in a pre-Alpha state.
Some things still don't work and APIs are far from finalized. The code is provided for reference only.
Kaspad was the reference full node Kaspa implementation written in Go (golang).
## What is kaspa
@ -18,7 +17,7 @@ Kaspa is an attempt at a proof-of-work cryptocurrency with instant confirmations
## Requirements
Go 1.16 or later.
Go 1.23 or later.
## Installation
@ -45,7 +44,6 @@ $ go install . ./cmd/...
not already add the bin directory to your system path during Go installation,
you are encouraged to do so now.
## Getting Started
Kaspad has several configuration options available to tweak how it runs, but all
@ -56,6 +54,7 @@ $ kaspad
```
## Discord
Join our discord server using the following link: https://discord.gg/YNYnNN5Pf2
## Issue Tracker

View File

@ -20,7 +20,10 @@ import (
"github.com/kaspanet/kaspad/version"
)
const leveldbCacheSizeMiB = 256
const (
leveldbCacheSizeMiB = 256
defaultDataDirname = "datadir2"
)
var desiredLimits = &limits.DesiredLimits{
FileLimitWant: 2048,
@ -84,6 +87,7 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
if app.cfg.Profile != "" {
profiling.Start(app.cfg.Profile, log)
}
profiling.TrackHeap(app.cfg.AppDir, log)
// Return now if an interrupt signal was triggered.
if signal.InterruptRequested(interrupt) {
@ -159,7 +163,7 @@ func (app *kaspadApp) main(startedChan chan<- struct{}) error {
// dbPath returns the path to the block database given a database type.
func databasePath(cfg *config.Config) string {
return filepath.Join(cfg.AppDir, "data")
return filepath.Join(cfg.AppDir, defaultDataDirname)
}
func removeDatabase(cfg *config.Config) error {

View File

@ -6,7 +6,7 @@ supported kaspa messages to and from the appmessage. This package does not deal
with the specifics of message handling such as what to do when a message is
received. This provides the caller with a high level of flexibility.
Kaspa Message Overview
# Kaspa Message Overview
The kaspa protocol consists of exchanging messages between peers. Each
message is preceded by a header which identifies information about it such as
@ -22,7 +22,7 @@ messages, all of the details of marshalling and unmarshalling to and from the
appmessage using kaspa encoding are handled so the caller doesn't have to concern
themselves with the specifics.
Message Interaction
# Message Interaction
The following provides a quick summary of how the kaspa messages are intended
to interact with one another. As stated above, these interactions are not
@ -45,13 +45,13 @@ interactions in no particular order.
notfound message (MsgNotFound)
ping message (MsgPing) pong message (MsgPong)
Common Parameters
# Common Parameters
There are several common parameters that arise when using this package to read
and write kaspa messages. The following sections provide a quick overview of
these parameters so the next sections can build on them.
Protocol Version
# Protocol Version
The protocol version should be negotiated with the remote peer at a higher
level than this package via the version (MsgVersion) message exchange, however,
@ -60,7 +60,7 @@ latest protocol version this package supports and is typically the value to use
for all outbound connections before a potentially lower protocol version is
negotiated.
Kaspa Network
# Kaspa Network
The kaspa network is a magic number which is used to identify the start of a
message and which kaspa network the message applies to. This package provides
@ -71,7 +71,7 @@ the following constants:
appmessage.Simnet (Simulation test network)
appmessage.Devnet (Development network)
Determining Message Type
# Determining Message Type
As discussed in the kaspa message overview section, this package reads
and writes kaspa messages using a generic interface named Message. In
@ -89,7 +89,7 @@ switch or type assertion. An example of a type switch follows:
fmt.Printf("Number of tx in block: %d", msg.Header.TxnCount)
}
Reading Messages
# Reading Messages
In order to unmarshall kaspa messages from the appmessage, use the ReadMessage
function. It accepts any io.Reader, but typically this will be a net.Conn to
@ -104,7 +104,7 @@ a remote node running a kaspa peer. Example syntax is:
// Log and handle the error
}
Writing Messages
# Writing Messages
In order to marshall kaspa messages to the appmessage, use the WriteMessage
function. It accepts any io.Writer, but typically this will be a net.Conn to
@ -122,7 +122,7 @@ from a remote peer is:
// Log and handle the error
}
Errors
# Errors
Errors returned by this package are either the raw errors provided by underlying
calls to read/write from streams such as io.EOF, io.ErrUnexpectedEOF, and
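Tying the documentation's fragments together, here is a hedged sketch of the type switch it alludes to; the two concrete cases use message types added elsewhere in this diff, while the surrounding function is illustrative:

```go
package example

import (
	"fmt"

	"github.com/kaspanet/kaspad/app/appmessage"
)

// describe dispatches on the concrete type behind the generic Message
// interface, as the package documentation describes.
func describe(msg appmessage.Message) {
	switch msg := msg.(type) {
	case *appmessage.MsgRequestAnticone:
		fmt.Println("anticone request for block", msg.BlockHash)
	case *appmessage.MsgIBDChainBlockLocator:
		fmt.Println("chain block locator with", len(msg.BlockLocatorHashes), "hashes")
	default:
		fmt.Println("unhandled message command:", msg.Command())
	}
}
```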

View File

@ -2,9 +2,10 @@ package appmessage
import (
"encoding/hex"
"github.com/pkg/errors"
"math/big"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/utils/blockheader"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
@ -218,7 +219,8 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
Outputs: outputs,
LockTime: rpcTransaction.LockTime,
SubnetworkID: *subnetworkID,
Gas: rpcTransaction.LockTime,
Gas: rpcTransaction.Gas,
MassCommitment: rpcTransaction.Mass,
Payload: payload,
}, nil
}
@ -286,7 +288,8 @@ func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransactio
Outputs: outputs,
LockTime: transaction.LockTime,
SubnetworkID: subnetworkID,
Gas: transaction.LockTime,
Gas: transaction.Gas,
Mass: transaction.MassCommitment,
Payload: payload,
}
}
@ -436,10 +439,10 @@ func RPCBlockToDomainBlock(block *RPCBlock) (*externalapi.DomainBlock, error) {
// BlockWithTrustedDataToDomainBlockWithTrustedData converts *MsgBlockWithTrustedData to *externalapi.BlockWithTrustedData
func BlockWithTrustedDataToDomainBlockWithTrustedData(block *MsgBlockWithTrustedData) *externalapi.BlockWithTrustedData {
daaWindow := make([]*externalapi.TrustedDataDataDAABlock, len(block.DAAWindow))
daaWindow := make([]*externalapi.TrustedDataDataDAAHeader, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &externalapi.TrustedDataDataDAABlock{
Block: MsgBlockToDomainBlock(daaBlock.Block),
daaWindow[i] = &externalapi.TrustedDataDataDAAHeader{
Header: BlockHeaderToDomainBlockHeader(&daaBlock.Block.Header),
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
@ -454,12 +457,27 @@ func BlockWithTrustedDataToDomainBlockWithTrustedData(block *MsgBlockWithTrusted
return &externalapi.BlockWithTrustedData{
Block: MsgBlockToDomainBlock(block.Block),
DAAScore: block.DAAScore,
DAAWindow: daaWindow,
GHOSTDAGData: ghostdagData,
}
}
// TrustedDataDataDAABlockV4ToTrustedDataDataDAAHeader converts *TrustedDataDAAHeader to *externalapi.TrustedDataDataDAAHeader
func TrustedDataDataDAABlockV4ToTrustedDataDataDAAHeader(daaBlock *TrustedDataDAAHeader) *externalapi.TrustedDataDataDAAHeader {
return &externalapi.TrustedDataDataDAAHeader{
Header: BlockHeaderToDomainBlockHeader(daaBlock.Header),
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
// GHOSTDAGHashPairToDomainGHOSTDAGHashPair converts *BlockGHOSTDAGDataHashPair to *externalapi.BlockGHOSTDAGDataHashPair
func GHOSTDAGHashPairToDomainGHOSTDAGHashPair(datum *BlockGHOSTDAGDataHashPair) *externalapi.BlockGHOSTDAGDataHashPair {
return &externalapi.BlockGHOSTDAGDataHashPair{
Hash: datum.Hash,
GHOSTDAGData: ghostdagDataToDomainGHOSTDAGData(datum.GHOSTDAGData),
}
}
func ghostdagDataToDomainGHOSTDAGData(data *BlockGHOSTDAGData) *externalapi.BlockGHOSTDAGData {
bluesAnticoneSizes := make(map[externalapi.DomainHash]externalapi.KType, len(data.BluesAnticoneSizes))
for _, pair := range data.BluesAnticoneSizes {
@ -500,7 +518,9 @@ func DomainBlockWithTrustedDataToBlockWithTrustedData(block *externalapi.BlockWi
daaWindow := make([]*TrustedDataDataDAABlock, len(block.DAAWindow))
for i, daaBlock := range block.DAAWindow {
daaWindow[i] = &TrustedDataDataDAABlock{
Block: DomainBlockToMsgBlock(daaBlock.Block),
Block: &MsgBlock{
Header: *DomainBlockHeaderToBlockHeader(daaBlock.Header),
},
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
@ -515,7 +535,41 @@ func DomainBlockWithTrustedDataToBlockWithTrustedData(block *externalapi.BlockWi
return &MsgBlockWithTrustedData{
Block: DomainBlockToMsgBlock(block.Block),
DAAScore: block.DAAScore,
DAAScore: block.Block.Header.DAAScore(),
DAAWindow: daaWindow,
GHOSTDAGData: ghostdagData,
}
}
// DomainBlockWithTrustedDataToBlockWithTrustedDataV4 converts a set of *externalapi.DomainBlock, daa window indices and ghostdag data indices
// to *MsgBlockWithTrustedDataV4
func DomainBlockWithTrustedDataToBlockWithTrustedDataV4(block *externalapi.DomainBlock, daaWindowIndices, ghostdagDataIndices []uint64) *MsgBlockWithTrustedDataV4 {
return &MsgBlockWithTrustedDataV4{
Block: DomainBlockToMsgBlock(block),
DAAWindowIndices: daaWindowIndices,
GHOSTDAGDataIndices: ghostdagDataIndices,
}
}
// DomainTrustedDataToTrustedData converts *externalapi.BlockWithTrustedData to *MsgBlockWithTrustedData
func DomainTrustedDataToTrustedData(domainDAAWindow []*externalapi.TrustedDataDataDAAHeader, domainGHOSTDAGData []*externalapi.BlockGHOSTDAGDataHashPair) *MsgTrustedData {
daaWindow := make([]*TrustedDataDAAHeader, len(domainDAAWindow))
for i, daaBlock := range domainDAAWindow {
daaWindow[i] = &TrustedDataDAAHeader{
Header: DomainBlockHeaderToBlockHeader(daaBlock.Header),
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(daaBlock.GHOSTDAGData),
}
}
ghostdagData := make([]*BlockGHOSTDAGDataHashPair, len(domainGHOSTDAGData))
for i, datum := range domainGHOSTDAGData {
ghostdagData[i] = &BlockGHOSTDAGDataHashPair{
Hash: datum.Hash,
GHOSTDAGData: domainGHOSTDAGDataGHOSTDAGData(datum.GHOSTDAGData),
}
}
return &MsgTrustedData{
DAAWindow: daaWindow,
GHOSTDAGData: ghostdagData,
}

View File

@ -38,6 +38,10 @@ type RPCError struct {
Message string
}
// Error returns the error message string, making RPCError satisfy the error interface.
func (err RPCError) Error() string {
return err.Message
}
// RPCErrorf formats according to a format specifier and returns the string
// as an RPCError.
func RPCErrorf(format string, args ...interface{}) *RPCError {

View File

@ -66,6 +66,13 @@ const (
CmdPruningPoints
CmdRequestPruningPointProof
CmdPruningPointProof
CmdReady
CmdTrustedData
CmdBlockWithTrustedDataV4
CmdRequestNextPruningPointAndItsAnticoneBlocks
CmdRequestIBDChainBlockLocator
CmdIBDChainBlockLocator
CmdRequestAnticone
// rpc
CmdGetCurrentNetworkRequestMessage
@ -124,6 +131,8 @@ const (
CmdStopNotifyingUTXOsChangedResponseMessage
CmdGetUTXOsByAddressesRequestMessage
CmdGetUTXOsByAddressesResponseMessage
CmdGetBalanceByAddressRequestMessage
CmdGetBalanceByAddressResponseMessage
CmdGetVirtualSelectedParentBlueScoreRequestMessage
CmdGetVirtualSelectedParentBlueScoreResponseMessage
CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage
@ -145,6 +154,19 @@ const (
CmdNotifyVirtualDaaScoreChangedRequestMessage
CmdNotifyVirtualDaaScoreChangedResponseMessage
CmdVirtualDaaScoreChangedNotificationMessage
CmdGetBalancesByAddressesRequestMessage
CmdGetBalancesByAddressesResponseMessage
CmdNotifyNewBlockTemplateRequestMessage
CmdNotifyNewBlockTemplateResponseMessage
CmdNewBlockTemplateNotificationMessage
CmdGetMempoolEntriesByAddressesRequestMessage
CmdGetMempoolEntriesByAddressesResponseMessage
CmdGetCoinSupplyRequestMessage
CmdGetCoinSupplyResponseMessage
CmdGetFeeEstimateRequestMessage
CmdGetFeeEstimateResponseMessage
CmdSubmitTransactionReplacementRequestMessage
CmdSubmitTransactionReplacementResponseMessage
)
// ProtocolMessageCommandToString maps all MessageCommands to their string representation
@ -185,6 +207,13 @@ var ProtocolMessageCommandToString = map[MessageCommand]string{
CmdPruningPoints: "PruningPoints",
CmdRequestPruningPointProof: "RequestPruningPointProof",
CmdPruningPointProof: "PruningPointProof",
CmdReady: "Ready",
CmdTrustedData: "TrustedData",
CmdBlockWithTrustedDataV4: "BlockWithTrustedDataV4",
CmdRequestNextPruningPointAndItsAnticoneBlocks: "RequestNextPruningPointAndItsAnticoneBlocks",
CmdRequestIBDChainBlockLocator: "RequestIBDChainBlockLocator",
CmdIBDChainBlockLocator: "IBDChainBlockLocator",
CmdRequestAnticone: "RequestAnticone",
}
// RPCMessageCommandToString maps all MessageCommands to their string representation
@ -243,6 +272,8 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdStopNotifyingUTXOsChangedResponseMessage: "StopNotifyingUTXOsChangedResponse",
CmdGetUTXOsByAddressesRequestMessage: "GetUTXOsByAddressesRequest",
CmdGetUTXOsByAddressesResponseMessage: "GetUTXOsByAddressesResponse",
CmdGetBalanceByAddressRequestMessage: "GetBalanceByAddressRequest",
CmdGetBalanceByAddressResponseMessage: "GetBalanceByAddressResponse",
CmdGetVirtualSelectedParentBlueScoreRequestMessage: "GetVirtualSelectedParentBlueScoreRequest",
CmdGetVirtualSelectedParentBlueScoreResponseMessage: "GetVirtualSelectedParentBlueScoreResponse",
CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage: "NotifyVirtualSelectedParentBlueScoreChangedRequest",
@ -264,6 +295,19 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdNotifyVirtualDaaScoreChangedRequestMessage: "NotifyVirtualDaaScoreChangedRequest",
CmdNotifyVirtualDaaScoreChangedResponseMessage: "NotifyVirtualDaaScoreChangedResponse",
CmdVirtualDaaScoreChangedNotificationMessage: "VirtualDaaScoreChangedNotification",
CmdGetBalancesByAddressesRequestMessage: "GetBalancesByAddressesRequest",
CmdGetBalancesByAddressesResponseMessage: "GetBalancesByAddressesResponse",
CmdNotifyNewBlockTemplateRequestMessage: "NotifyNewBlockTemplateRequest",
CmdNotifyNewBlockTemplateResponseMessage: "NotifyNewBlockTemplateResponse",
CmdNewBlockTemplateNotificationMessage: "NewBlockTemplateNotification",
CmdGetMempoolEntriesByAddressesRequestMessage: "GetMempoolEntriesByAddressesRequest",
CmdGetMempoolEntriesByAddressesResponseMessage: "GetMempoolEntriesByAddressesResponse",
CmdGetCoinSupplyRequestMessage: "GetCoinSupplyRequest",
CmdGetCoinSupplyResponseMessage: "GetCoinSupplyResponse",
CmdGetFeeEstimateRequestMessage: "GetFeeEstimateRequest",
CmdGetFeeEstimateResponseMessage: "GetFeeEstimateResponse",
CmdSubmitTransactionReplacementRequestMessage: "SubmitTransactionReplacementRequest",
CmdSubmitTransactionReplacementResponseMessage: "SubmitTransactionReplacementResponse",
}
// Message is an interface that describes a kaspa message. A type that

View File

@ -18,7 +18,7 @@ import (
// TestBlock tests the MsgBlock API.
func TestBlock(t *testing.T) {
pver := ProtocolVersion
pver := uint32(4)
// Block 1 header.
parents := blockOne.Header.Parents
@ -132,7 +132,7 @@ func TestConvertToPartial(t *testing.T) {
}
}
//blockOne is the first block in the mainnet block DAG.
// blockOne is the first block in the mainnet block DAG.
var blockOne = MsgBlock{
Header: MsgBlockHeader{
Version: 0,

View File

@ -0,0 +1,20 @@
package appmessage
// MsgBlockWithTrustedDataV4 represents a kaspa BlockWithTrustedDataV4 message
type MsgBlockWithTrustedDataV4 struct {
baseMessage
Block *MsgBlock
DAAWindowIndices []uint64
GHOSTDAGDataIndices []uint64
}
// Command returns the protocol command string for the message
func (msg *MsgBlockWithTrustedDataV4) Command() MessageCommand {
return CmdBlockWithTrustedDataV4
}
// NewMsgBlockWithTrustedDataV4 returns a new MsgBlockWithTrustedDataV4.
func NewMsgBlockWithTrustedDataV4() *MsgBlockWithTrustedDataV4 {
return &MsgBlockWithTrustedDataV4{}
}
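The V4 message exists so that the shared trusted data is not resent with every block: the syncer sends a single MsgTrustedData up front, and each MsgBlockWithTrustedDataV4 then references its entries by index. A hedged sketch of a resolver on the receiving side (the function itself is an assumption; the message types are as defined in this diff):

```go
package example

import (
	"github.com/pkg/errors"

	"github.com/kaspanet/kaspad/app/appmessage"
)

// resolveTrustedData maps the indices carried by a MsgBlockWithTrustedDataV4
// back to the entries of a previously received MsgTrustedData.
func resolveTrustedData(trusted *appmessage.MsgTrustedData, block *appmessage.MsgBlockWithTrustedDataV4) (
	[]*appmessage.TrustedDataDAAHeader, []*appmessage.BlockGHOSTDAGDataHashPair, error) {

	daaWindow := make([]*appmessage.TrustedDataDAAHeader, 0, len(block.DAAWindowIndices))
	for _, index := range block.DAAWindowIndices {
		if index >= uint64(len(trusted.DAAWindow)) {
			return nil, nil, errors.Errorf("DAA window index %d is out of range", index)
		}
		daaWindow = append(daaWindow, trusted.DAAWindow[index])
	}

	ghostdagData := make([]*appmessage.BlockGHOSTDAGDataHashPair, 0, len(block.GHOSTDAGDataIndices))
	for _, index := range block.GHOSTDAGDataIndices {
		if index >= uint64(len(trusted.GHOSTDAGData)) {
			return nil, nil, errors.Errorf("GHOSTDAG data index %d is out of range", index)
		}
		ghostdagData = append(ghostdagData, trusted.GHOSTDAGData[index])
	}
	return daaWindow, ghostdagData, nil
}
```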

View File

@ -0,0 +1,27 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgIBDChainBlockLocator implements the Message interface and represents a kaspa
// locator message. It is used to find the blockLocator of a peer that is
// syncing with you.
type MsgIBDChainBlockLocator struct {
baseMessage
BlockLocatorHashes []*externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgIBDChainBlockLocator) Command() MessageCommand {
return CmdIBDChainBlockLocator
}
// NewMsgIBDChainBlockLocator returns a new kaspa locator message that conforms to
// the Message interface. See MsgBlockLocator for details.
func NewMsgIBDChainBlockLocator(locatorHashes []*externalapi.DomainHash) *MsgIBDChainBlockLocator {
return &MsgIBDChainBlockLocator{
BlockLocatorHashes: locatorHashes,
}
}

View File

@ -0,0 +1,33 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestAnticone implements the Message interface and represents a kaspa
// RequestHeaders message. It is used to request the set past(ContextHash) \cap anticone(BlockHash)
type MsgRequestAnticone struct {
baseMessage
BlockHash *externalapi.DomainHash
ContextHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestAnticone) Command() MessageCommand {
return CmdRequestAnticone
}
// NewMsgRequestAnticone returns a new kaspa RequestAnticone message that conforms to the
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgRequestAnticone(blockHash, contextHash *externalapi.DomainHash) *MsgRequestAnticone {
return &MsgRequestAnticone{
BlockHash: blockHash,
ContextHash: contextHash,
}
}

View File

@ -0,0 +1,31 @@
package appmessage
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// MsgRequestIBDChainBlockLocator implements the Message interface and represents a kaspa
// IBDRequestChainBlockLocator message. It is used to request a block locator between low
// and high hash.
// The locator is returned via a locator message (MsgIBDChainBlockLocator).
type MsgRequestIBDChainBlockLocator struct {
baseMessage
HighHash *externalapi.DomainHash
LowHash *externalapi.DomainHash
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestIBDChainBlockLocator) Command() MessageCommand {
return CmdRequestIBDChainBlockLocator
}
// NewMsgIBDRequestChainBlockLocator returns a new IBDRequestChainBlockLocator message that conforms to the
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgIBDRequestChainBlockLocator(highHash, lowHash *externalapi.DomainHash) *MsgRequestIBDChainBlockLocator {
return &MsgRequestIBDChainBlockLocator{
HighHash: highHash,
LowHash: lowHash,
}
}

View File

@ -0,0 +1,22 @@
package appmessage
// MsgRequestNextPruningPointAndItsAnticoneBlocks implements the Message interface and represents a kaspa
// RequestNextPruningPointAndItsAnticoneBlocks message. It is used to notify the IBD syncer peer to send
// more blocks from the pruning anticone.
//
// This message has no payload.
type MsgRequestNextPruningPointAndItsAnticoneBlocks struct {
baseMessage
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgRequestNextPruningPointAndItsAnticoneBlocks) Command() MessageCommand {
return CmdRequestNextPruningPointAndItsAnticoneBlocks
}
// NewMsgRequestNextPruningPointAndItsAnticoneBlocks returns a new kaspa RequestNextPruningPointAndItsAnticoneBlocks message that conforms to the
// Message interface.
func NewMsgRequestNextPruningPointAndItsAnticoneBlocks() *MsgRequestNextPruningPointAndItsAnticoneBlocks {
return &MsgRequestNextPruningPointAndItsAnticoneBlocks{}
}

View File

@ -0,0 +1,25 @@
package appmessage
// MsgTrustedData represents a kaspa TrustedData message
type MsgTrustedData struct {
baseMessage
DAAWindow []*TrustedDataDAAHeader
GHOSTDAGData []*BlockGHOSTDAGDataHashPair
}
// Command returns the protocol command string for the message
func (msg *MsgTrustedData) Command() MessageCommand {
return CmdTrustedData
}
// NewMsgTrustedData returns a new MsgTrustedData.
func NewMsgTrustedData() *MsgTrustedData {
return &MsgTrustedData{}
}
// TrustedDataDAAHeader is an appmessage representation of externalapi.TrustedDataDataDAAHeader
type TrustedDataDAAHeader struct {
Header *MsgBlockHeader
GHOSTDAGData *BlockGHOSTDAGData
}

View File

@ -22,7 +22,7 @@ import (
// TestTx tests the MsgTx API.
func TestTx(t *testing.T) {
pver := ProtocolVersion
pver := uint32(4)
txIDStr := "000000000003ba27aa200b1cecaad478d2b00432346c3f1f3986da1afd33e506"
txID, err := transactionid.FromString(txIDStr)
@ -133,8 +133,8 @@ func TestTx(t *testing.T) {
// TestTxHash tests the ability to generate the hash of a transaction accurately.
func TestTxHashAndID(t *testing.T) {
txHash1Str := "93663e597f6c968d32d229002f76408edf30d6a0151ff679fc729812d8cb2acc"
txID1Str := "24079c6d2bdf602fc389cc307349054937744a9c8dc0f07c023e6af0e949a4e7"
txHash1Str := "b06f8b650115b5cf4d59499e10764a9312742930cb43c9b4ff6495d76f332ed7"
txID1Str := "e20225c3d065ee41743607ee627db44d01ef396dc9779b05b2caf55bac50e12d"
wantTxID1, err := transactionid.FromString(txID1Str)
if err != nil {
t.Fatalf("NewTxIDFromStr: %v", err)
@ -185,7 +185,7 @@ func TestTxHashAndID(t *testing.T) {
spew.Sprint(tx1ID), spew.Sprint(wantTxID1))
}
hash2Str := "8dafd1bec24527d8e3b443ceb0a3b92fffc0d60026317f890b2faf5e9afc177a"
hash2Str := "fa16a8ce88d52ca1ff45187bbba0d33044e9f5fe309e8d0b22d4812dcf1782b7"
wantHash2, err := externalapi.NewDomainHashFromString(hash2Str)
if err != nil {
t.Errorf("NewTxIDFromStr: %v", err)

View File

@ -82,12 +82,12 @@ func (msg *MsgVersion) Command() MessageCommand {
// Message interface using the passed parameters and defaults for the remaining
// fields.
func NewMsgVersion(addr *NetAddress, id *id.ID, network string,
subnetworkID *externalapi.DomainSubnetworkID) *MsgVersion {
subnetworkID *externalapi.DomainSubnetworkID, protocolVersion uint32) *MsgVersion {
// Limit the timestamp to one millisecond precision since the protocol
// doesn't support better.
return &MsgVersion{
ProtocolVersion: ProtocolVersion,
ProtocolVersion: protocolVersion,
Network: network,
Services: 0,
Timestamp: mstime.Now(),

View File

@ -15,7 +15,7 @@ import (
// TestVersion tests the MsgVersion API.
func TestVersion(t *testing.T) {
pver := ProtocolVersion
pver := uint32(4)
// Create version message data.
tcpAddrMe := &net.TCPAddr{IP: net.ParseIP("127.0.0.1"), Port: 16111}
@ -26,7 +26,7 @@ func TestVersion(t *testing.T) {
}
// Ensure we get the correct data back out.
msg := NewMsgVersion(me, generatedID, "mainnet", nil)
msg := NewMsgVersion(me, generatedID, "mainnet", nil, 4)
if msg.ProtocolVersion != pver {
t.Errorf("NewMsgVersion: wrong protocol version - got %v, want %v",
msg.ProtocolVersion, pver)

View File

@ -5,8 +5,9 @@
package appmessage
import (
"github.com/kaspanet/kaspad/util/mstime"
"net"
"github.com/kaspanet/kaspad/util/mstime"
)
// NetAddress defines information about a peer on the network including the time
@ -57,3 +58,7 @@ func NewNetAddressTimestamp(
func NewNetAddress(addr *net.TCPAddr) *NetAddress {
return NewNetAddressIPPort(addr.IP, uint16(addr.Port))
}
// String returns a string representation of the network address.
func (na NetAddress) String() string {
return na.TCPAddress().String()
}

View File

@ -0,0 +1,22 @@
package appmessage
// MsgReady implements the Message interface and represents a kaspa
// Ready message. It is used to notify that the peer is ready to receive
// messages.
//
// This message has no payload.
type MsgReady struct {
baseMessage
}
// Command returns the protocol command string for the message. This is part
// of the Message interface implementation.
func (msg *MsgReady) Command() MessageCommand {
return CmdReady
}
// NewMsgReady returns a new kaspa Ready message that conforms to the
// Message interface.
func NewMsgReady() *MsgReady {
return &MsgReady{}
}

View File

@ -11,9 +11,6 @@ import (
)
const (
// ProtocolVersion is the latest protocol version this package supports.
ProtocolVersion uint32 = 1
// DefaultServices describes the default services that are supported by
// the server.
DefaultServices = SFNodeNetwork | SFNodeBloom | SFNodeCF

View File

@ -0,0 +1,47 @@
package appmessage
// GetFeeEstimateRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetFeeEstimateRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *GetFeeEstimateRequestMessage) Command() MessageCommand {
return CmdGetFeeEstimateRequestMessage
}
// NewGetFeeEstimateRequestMessage returns an instance of the message
func NewGetFeeEstimateRequestMessage() *GetFeeEstimateRequestMessage {
return &GetFeeEstimateRequestMessage{}
}
// RPCFeeRateBucket represents a single fee-rate bucket of the fee estimate
type RPCFeeRateBucket struct {
Feerate float64
EstimatedSeconds float64
}
// RPCFeeEstimate holds the priority, normal and low fee-rate buckets of the fee estimate
type RPCFeeEstimate struct {
PriorityBucket RPCFeeRateBucket
NormalBuckets []RPCFeeRateBucket
LowBuckets []RPCFeeRateBucket
}
// GetFeeEstimateResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetFeeEstimateResponseMessage struct {
baseMessage
Estimate RPCFeeEstimate
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetFeeEstimateResponseMessage) Command() MessageCommand {
return CmdGetFeeEstimateResponseMessage
}
// NewGetFeeEstimateResponseMessage returns an instance of the message
func NewGetFeeEstimateResponseMessage() *GetFeeEstimateResponseMessage {
return &GetFeeEstimateResponseMessage{}
}
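A short sketch of how a client might consume these buckets. Assumptions: feerate is denominated in sompi per gram of transaction mass (so fee = feerate × mass), and both helper names are illustrative:

```go
package example

import "github.com/kaspanet/kaspad/app/appmessage"

// pickFeerate chooses the lowest feerate whose estimated confirmation time
// is acceptable, falling back to the priority bucket.
func pickFeerate(estimate appmessage.RPCFeeEstimate, maxSeconds float64) float64 {
	best := estimate.PriorityBucket.Feerate
	for _, buckets := range [][]appmessage.RPCFeeRateBucket{estimate.NormalBuckets, estimate.LowBuckets} {
		for _, bucket := range buckets {
			if bucket.EstimatedSeconds <= maxSeconds && bucket.Feerate < best {
				best = bucket.Feerate
			}
		}
	}
	return best
}

// feeForMass turns a feerate into an absolute fee for a transaction of the
// given mass.
func feeForMass(feerate float64, mass uint64) uint64 {
	return uint64(feerate * float64(mass))
}
```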

View File

@ -0,0 +1,41 @@
package appmessage
// GetBalanceByAddressRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetBalanceByAddressRequestMessage struct {
baseMessage
Address string
}
// Command returns the protocol command string for the message
func (msg *GetBalanceByAddressRequestMessage) Command() MessageCommand {
return CmdGetBalanceByAddressRequestMessage
}
// NewGetBalanceByAddressRequest returns an instance of the message
func NewGetBalanceByAddressRequest(address string) *GetBalanceByAddressRequestMessage {
return &GetBalanceByAddressRequestMessage{
Address: address,
}
}
// GetBalanceByAddressResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetBalanceByAddressResponseMessage struct {
baseMessage
Balance uint64
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetBalanceByAddressResponseMessage) Command() MessageCommand {
return CmdGetBalanceByAddressResponseMessage
}
// NewGetBalanceByAddressResponse returns an instance of the message
func NewGetBalanceByAddressResponse(Balance uint64) *GetBalanceByAddressResponseMessage {
return &GetBalanceByAddressResponseMessage{
Balance: Balance,
}
}

View File

@ -0,0 +1,47 @@
package appmessage
// GetBalancesByAddressesRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetBalancesByAddressesRequestMessage struct {
baseMessage
Addresses []string
}
// Command returns the protocol command string for the message
func (msg *GetBalancesByAddressesRequestMessage) Command() MessageCommand {
return CmdGetBalancesByAddressesRequestMessage
}
// NewGetBalancesByAddressesRequest returns an instance of the message
func NewGetBalancesByAddressesRequest(addresses []string) *GetBalancesByAddressesRequestMessage {
return &GetBalancesByAddressesRequestMessage{
Addresses: addresses,
}
}
// BalancesByAddressesEntry represents the balance of some address
type BalancesByAddressesEntry struct {
Address string
Balance uint64
}
// GetBalancesByAddressesResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetBalancesByAddressesResponseMessage struct {
baseMessage
Entries []*BalancesByAddressesEntry
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetBalancesByAddressesResponseMessage) Command() MessageCommand {
return CmdGetBalancesByAddressesResponseMessage
}
// NewGetBalancesByAddressesResponse returns an instance of the message
func NewGetBalancesByAddressesResponse(entries []*BalancesByAddressesEntry) *GetBalancesByAddressesResponseMessage {
return &GetBalancesByAddressesResponseMessage{
Entries: entries,
}
}

View File

@ -5,6 +5,7 @@ package appmessage
type GetBlockTemplateRequestMessage struct {
baseMessage
PayAddress string
ExtraData string
}
// Command returns the protocol command string for the message
@ -13,9 +14,10 @@ func (msg *GetBlockTemplateRequestMessage) Command() MessageCommand {
}
// NewGetBlockTemplateRequestMessage returns an instance of the message
func NewGetBlockTemplateRequestMessage(payAddress string) *GetBlockTemplateRequestMessage {
func NewGetBlockTemplateRequestMessage(payAddress, extraData string) *GetBlockTemplateRequestMessage {
return &GetBlockTemplateRequestMessage{
PayAddress: payAddress,
ExtraData: extraData,
}
}

View File

@ -0,0 +1,40 @@
package appmessage
// GetCoinSupplyRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetCoinSupplyRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *GetCoinSupplyRequestMessage) Command() MessageCommand {
return CmdGetCoinSupplyRequestMessage
}
// NewGetCoinSupplyRequestMessage returns an instance of the message
func NewGetCoinSupplyRequestMessage() *GetCoinSupplyRequestMessage {
return &GetCoinSupplyRequestMessage{}
}
// GetCoinSupplyResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetCoinSupplyResponseMessage struct {
baseMessage
MaxSompi uint64
CirculatingSompi uint64
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetCoinSupplyResponseMessage) Command() MessageCommand {
return CmdGetCoinSupplyResponseMessage
}
// NewGetCoinSupplyResponseMessage returns an instance of the message
func NewGetCoinSupplyResponseMessage(maxSompi uint64, circulatingSompi uint64) *GetCoinSupplyResponseMessage {
return &GetCoinSupplyResponseMessage{
MaxSompi: maxSompi,
CirculatingSompi: circulatingSompi,
}
}

View File

@ -23,6 +23,8 @@ type GetInfoResponseMessage struct {
P2PID string
MempoolSize uint64
ServerVersion string
IsUtxoIndexed bool
IsSynced bool
Error *RPCError
}
@ -33,10 +35,12 @@ func (msg *GetInfoResponseMessage) Command() MessageCommand {
}
// NewGetInfoResponseMessage returns an instance of the message
func NewGetInfoResponseMessage(p2pID string, mempoolSize uint64, serverVersion string) *GetInfoResponseMessage {
func NewGetInfoResponseMessage(p2pID string, mempoolSize uint64, serverVersion string, isUtxoIndexed bool, isSynced bool) *GetInfoResponseMessage {
return &GetInfoResponseMessage{
P2PID: p2pID,
MempoolSize: mempoolSize,
ServerVersion: serverVersion,
IsUtxoIndexed: isUtxoIndexed,
IsSynced: isSynced,
}
}

View File

@ -4,6 +4,8 @@ package appmessage
// its respective RPC message
type GetMempoolEntriesRequestMessage struct {
baseMessage
IncludeOrphanPool bool
FilterTransactionPool bool
}
// Command returns the protocol command string for the message
@ -12,8 +14,11 @@ func (msg *GetMempoolEntriesRequestMessage) Command() MessageCommand {
}
// NewGetMempoolEntriesRequestMessage returns a instance of the message
func NewGetMempoolEntriesRequestMessage() *GetMempoolEntriesRequestMessage {
return &GetMempoolEntriesRequestMessage{}
func NewGetMempoolEntriesRequestMessage(includeOrphanPool bool, filterTransactionPool bool) *GetMempoolEntriesRequestMessage {
return &GetMempoolEntriesRequestMessage{
IncludeOrphanPool: includeOrphanPool,
FilterTransactionPool: filterTransactionPool,
}
}
// GetMempoolEntriesResponseMessage is an appmessage corresponding to

View File

@ -0,0 +1,52 @@
package appmessage
// MempoolEntryByAddress represents MempoolEntries associated with some address
type MempoolEntryByAddress struct {
Address string
Receiving []*MempoolEntry
Sending []*MempoolEntry
}
// GetMempoolEntriesByAddressesRequestMessage is an appmessage corresponding to
// its respective RPC message
type GetMempoolEntriesByAddressesRequestMessage struct {
baseMessage
Addresses []string
IncludeOrphanPool bool
FilterTransactionPool bool
}
// Command returns the protocol command string for the message
func (msg *GetMempoolEntriesByAddressesRequestMessage) Command() MessageCommand {
return CmdGetMempoolEntriesByAddressesRequestMessage
}
// NewGetMempoolEntriesByAddressesRequestMessage returns an instance of the message
func NewGetMempoolEntriesByAddressesRequestMessage(addresses []string, includeOrphanPool bool, filterTransactionPool bool) *GetMempoolEntriesByAddressesRequestMessage {
return &GetMempoolEntriesByAddressesRequestMessage{
Addresses: addresses,
IncludeOrphanPool: includeOrphanPool,
FilterTransactionPool: filterTransactionPool,
}
}
// GetMempoolEntriesByAddressesResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetMempoolEntriesByAddressesResponseMessage struct {
baseMessage
Entries []*MempoolEntryByAddress
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *GetMempoolEntriesByAddressesResponseMessage) Command() MessageCommand {
return CmdGetMempoolEntriesByAddressesResponseMessage
}
// NewGetMempoolEntriesByAddressesResponseMessage returns an instance of the message
func NewGetMempoolEntriesByAddressesResponseMessage(entries []*MempoolEntryByAddress) *GetMempoolEntriesByAddressesResponseMessage {
return &GetMempoolEntriesByAddressesResponseMessage{
Entries: entries,
}
}

View File

@ -5,6 +5,8 @@ package appmessage
type GetMempoolEntryRequestMessage struct {
baseMessage
TxID string
IncludeOrphanPool bool
FilterTransactionPool bool
}
// Command returns the protocol command string for the message
@ -13,8 +15,12 @@ func (msg *GetMempoolEntryRequestMessage) Command() MessageCommand {
}
// NewGetMempoolEntryRequestMessage returns an instance of the message
func NewGetMempoolEntryRequestMessage(txID string) *GetMempoolEntryRequestMessage {
return &GetMempoolEntryRequestMessage{TxID: txID}
func NewGetMempoolEntryRequestMessage(txID string, includeOrphanPool bool, filterTransactionPool bool) *GetMempoolEntryRequestMessage {
return &GetMempoolEntryRequestMessage{
TxID: txID,
IncludeOrphanPool: includeOrphanPool,
FilterTransactionPool: filterTransactionPool,
}
}
// GetMempoolEntryResponseMessage is an appmessage corresponding to
@ -30,6 +36,7 @@ type GetMempoolEntryResponseMessage struct {
type MempoolEntry struct {
Fee uint64
Transaction *RPCTransaction
IsOrphan bool
}
// Command returns the protocol command string for the message
@ -38,11 +45,12 @@ func (msg *GetMempoolEntryResponseMessage) Command() MessageCommand {
}
// NewGetMempoolEntryResponseMessage returns an instance of the message
func NewGetMempoolEntryResponseMessage(fee uint64, transaction *RPCTransaction) *GetMempoolEntryResponseMessage {
func NewGetMempoolEntryResponseMessage(fee uint64, transaction *RPCTransaction, isOrphan bool) *GetMempoolEntryResponseMessage {
return &GetMempoolEntryResponseMessage{
Entry: &MempoolEntry{
Fee: fee,
Transaction: transaction,
IsOrphan: isOrphan,
},
}
}

View File

@ -5,6 +5,7 @@ package appmessage
type GetVirtualSelectedParentChainFromBlockRequestMessage struct {
baseMessage
StartHash string
IncludeAcceptedTransactionIDs bool
}
// Command returns the protocol command string for the message
@ -13,18 +14,29 @@ func (msg *GetVirtualSelectedParentChainFromBlockRequestMessage) Command() Messa
}
// NewGetVirtualSelectedParentChainFromBlockRequestMessage returns an instance of the message
func NewGetVirtualSelectedParentChainFromBlockRequestMessage(startHash string) *GetVirtualSelectedParentChainFromBlockRequestMessage {
func NewGetVirtualSelectedParentChainFromBlockRequestMessage(
startHash string, includeAcceptedTransactionIDs bool) *GetVirtualSelectedParentChainFromBlockRequestMessage {
return &GetVirtualSelectedParentChainFromBlockRequestMessage{
StartHash: startHash,
IncludeAcceptedTransactionIDs: includeAcceptedTransactionIDs,
}
}
// AcceptedTransactionIDs is a part of the GetVirtualSelectedParentChainFromBlockResponseMessage and
// VirtualSelectedParentChainChangedNotificationMessage appmessages
type AcceptedTransactionIDs struct {
AcceptingBlockHash string
AcceptedTransactionIDs []string
}
// GetVirtualSelectedParentChainFromBlockResponseMessage is an appmessage corresponding to
// its respective RPC message
type GetVirtualSelectedParentChainFromBlockResponseMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlockHashes []string
AcceptedTransactionIDs []*AcceptedTransactionIDs
Error *RPCError
}
@ -36,10 +48,11 @@ func (msg *GetVirtualSelectedParentChainFromBlockResponseMessage) Command() Mess
// NewGetVirtualSelectedParentChainFromBlockResponseMessage returns an instance of the message
func NewGetVirtualSelectedParentChainFromBlockResponseMessage(removedChainBlockHashes,
addedChainBlockHashes []string) *GetVirtualSelectedParentChainFromBlockResponseMessage {
addedChainBlockHashes []string, acceptedTransactionIDs []*AcceptedTransactionIDs) *GetVirtualSelectedParentChainFromBlockResponseMessage {
return &GetVirtualSelectedParentChainFromBlockResponseMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlockHashes: addedChainBlockHashes,
AcceptedTransactionIDs: acceptedTransactionIDs,
}
}
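A small consumption sketch (the function is illustrative; the fields are the ones defined just above): when IncludeAcceptedTransactionIDs is set on the request, each added chain block arrives paired with the IDs of the transactions it accepted.

```go
package example

import (
	"fmt"

	"github.com/kaspanet/kaspad/app/appmessage"
)

// printAcceptedTransactions walks the acceptance data attached to the
// response when IncludeAcceptedTransactionIDs was requested.
func printAcceptedTransactions(resp *appmessage.GetVirtualSelectedParentChainFromBlockResponseMessage) {
	for _, entry := range resp.AcceptedTransactionIDs {
		fmt.Printf("block %s accepted %d transactions\n",
			entry.AcceptingBlockHash, len(entry.AcceptedTransactionIDs))
	}
}
```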

View File

@ -0,0 +1,50 @@
package appmessage
// NotifyNewBlockTemplateRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyNewBlockTemplateRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyNewBlockTemplateRequestMessage) Command() MessageCommand {
return CmdNotifyNewBlockTemplateRequestMessage
}
// NewNotifyNewBlockTemplateRequestMessage returns an instance of the message
func NewNotifyNewBlockTemplateRequestMessage() *NotifyNewBlockTemplateRequestMessage {
return &NotifyNewBlockTemplateRequestMessage{}
}
// NotifyNewBlockTemplateResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyNewBlockTemplateResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyNewBlockTemplateResponseMessage) Command() MessageCommand {
return CmdNotifyNewBlockTemplateResponseMessage
}
// NewNotifyNewBlockTemplateResponseMessage returns an instance of the message
func NewNotifyNewBlockTemplateResponseMessage() *NotifyNewBlockTemplateResponseMessage {
return &NotifyNewBlockTemplateResponseMessage{}
}
// NewBlockTemplateNotificationMessage is an appmessage corresponding to
// its respective RPC message
type NewBlockTemplateNotificationMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NewBlockTemplateNotificationMessage) Command() MessageCommand {
return CmdNewBlockTemplateNotificationMessage
}
// NewNewBlockTemplateNotificationMessage returns an instance of the message
func NewNewBlockTemplateNotificationMessage() *NewBlockTemplateNotificationMessage {
return &NewBlockTemplateNotificationMessage{}
}

View File

@ -4,6 +4,7 @@ package appmessage
// its respective RPC message
type NotifyVirtualSelectedParentChainChangedRequestMessage struct {
baseMessage
IncludeAcceptedTransactionIDs bool
}
// Command returns the protocol command string for the message
@ -11,9 +12,13 @@ func (msg *NotifyVirtualSelectedParentChainChangedRequestMessage) Command() Mess
return CmdNotifyVirtualSelectedParentChainChangedRequestMessage
}
// NewNotifyVirtualSelectedParentChainChangedRequestMessage returns a instance of the message
func NewNotifyVirtualSelectedParentChainChangedRequestMessage() *NotifyVirtualSelectedParentChainChangedRequestMessage {
return &NotifyVirtualSelectedParentChainChangedRequestMessage{}
// NewNotifyVirtualSelectedParentChainChangedRequestMessage returns an instance of the message
func NewNotifyVirtualSelectedParentChainChangedRequestMessage(
includeAcceptedTransactionIDs bool) *NotifyVirtualSelectedParentChainChangedRequestMessage {
return &NotifyVirtualSelectedParentChainChangedRequestMessage{
IncludeAcceptedTransactionIDs: includeAcceptedTransactionIDs,
}
}
// NotifyVirtualSelectedParentChainChangedResponseMessage is an appmessage corresponding to
@ -39,6 +44,7 @@ type VirtualSelectedParentChainChangedNotificationMessage struct {
baseMessage
RemovedChainBlockHashes []string
AddedChainBlockHashes []string
AcceptedTransactionIDs []*AcceptedTransactionIDs
}
// Command returns the protocol command string for the message
@ -48,10 +54,11 @@ func (msg *VirtualSelectedParentChainChangedNotificationMessage) Command() Messa
// NewVirtualSelectedParentChainChangedNotificationMessage returns an instance of the message
func NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes,
addedChainBlocks []string) *VirtualSelectedParentChainChangedNotificationMessage {
addedChainBlocks []string, acceptedTransactionIDs []*AcceptedTransactionIDs) *VirtualSelectedParentChainChangedNotificationMessage {
return &VirtualSelectedParentChainChangedNotificationMessage{
RemovedChainBlockHashes: removedChainBlockHashes,
AddedChainBlockHashes: addedChainBlocks,
AcceptedTransactionIDs: acceptedTransactionIDs,
}
}
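
A one-line sketch of opting into the new data: passing true here asks the node to attach accepted transaction IDs to every chain-changed notification.

req := NewNotifyVirtualSelectedParentChainChangedRequestMessage(true)
_ = req // with IncludeAcceptedTransactionIDs set, each notification carries AcceptedTransactionIDs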

View File

@ -5,6 +5,7 @@ package appmessage
type SubmitBlockRequestMessage struct {
baseMessage
Block *RPCBlock
AllowNonDAABlocks bool
}
// Command returns the protocol command string for the message
@ -13,9 +14,10 @@ func (msg *SubmitBlockRequestMessage) Command() MessageCommand {
}
// NewSubmitBlockRequestMessage returns an instance of the message
func NewSubmitBlockRequestMessage(block *RPCBlock) *SubmitBlockRequestMessage {
func NewSubmitBlockRequestMessage(block *RPCBlock, allowNonDAABlocks bool) *SubmitBlockRequestMessage {
return &SubmitBlockRequestMessage{
Block: block,
AllowNonDAABlocks: allowNonDAABlocks,
}
}
@ -97,4 +99,7 @@ type RPCBlockVerboseData struct {
IsHeaderOnly bool
BlueScore uint64
ChildrenHashes []string
MergeSetBluesHashes []string
MergeSetRedsHashes []string
IsChainBlock bool
}
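
A minimal sketch of the extended constructor; the *RPCBlock is assumed to have been converted from a domain block elsewhere.

var rpcBlock *RPCBlock // assumed: populated from a domain block
req := NewSubmitBlockRequestMessage(rpcBlock, false) // false: do not allow non-DAA blocks
_ = req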

View File

@ -52,6 +52,7 @@ type RPCTransaction struct {
SubnetworkID string
Gas uint64
Payload string
Mass uint64
VerboseData *RPCTransactionVerboseData
}

View File

@ -0,0 +1,42 @@
package appmessage
// SubmitTransactionReplacementRequestMessage is an appmessage corresponding to
// its respective RPC message
type SubmitTransactionReplacementRequestMessage struct {
baseMessage
Transaction *RPCTransaction
}
// Command returns the protocol command string for the message
func (msg *SubmitTransactionReplacementRequestMessage) Command() MessageCommand {
return CmdSubmitTransactionReplacementRequestMessage
}
// NewSubmitTransactionReplacementRequestMessage returns an instance of the message
func NewSubmitTransactionReplacementRequestMessage(transaction *RPCTransaction) *SubmitTransactionReplacementRequestMessage {
return &SubmitTransactionReplacementRequestMessage{
Transaction: transaction,
}
}
// SubmitTransactionReplacementResponseMessage is an appmessage corresponding to
// its respective RPC message
type SubmitTransactionReplacementResponseMessage struct {
baseMessage
TransactionID string
ReplacedTransaction *RPCTransaction
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *SubmitTransactionReplacementResponseMessage) Command() MessageCommand {
return CmdSubmitTransactionReplacementResponseMessage
}
// NewSubmitTransactionReplacementResponseMessage returns an instance of the message
func NewSubmitTransactionReplacementResponseMessage(transactionID string) *SubmitTransactionReplacementResponseMessage {
return &SubmitTransactionReplacementResponseMessage{
TransactionID: transactionID,
}
}
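
A sketch of how these replacement (RBF) messages pair up; the transaction is assumed to conflict with, and pay a higher fee than, a transaction already in the mempool.

var replacementTx *RPCTransaction // assumed: a higher-fee conflicting transaction
req := NewSubmitTransactionReplacementRequestMessage(replacementTx)
_ = req
// On success the node responds with the new transaction's ID; note the
// constructor leaves ReplacedTransaction to be filled in by the handler.
resp := NewSubmitTransactionReplacementResponseMessage("newTransactionID")
_ = resp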

View File

@ -4,6 +4,8 @@ import (
"fmt"
"sync/atomic"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/miningmanager/mempool"
"github.com/kaspanet/kaspad/app/protocol"
@ -67,6 +69,7 @@ func (a *ComponentManager) Stop() {
}
a.protocolManager.Close()
close(a.protocolManager.Context().Domain().ConsensusEventsChannel())
return
}
@ -102,7 +105,7 @@ func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database,
var utxoIndex *utxoindex.UTXOIndex
if cfg.UTXOIndex {
utxoIndex, err = utxoindex.New(domain.Consensus(), db)
utxoIndex, err = utxoindex.New(domain, db)
if err != nil {
return nil, err
}
@ -118,7 +121,7 @@ func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database,
if err != nil {
return nil, err
}
rpcManager := setupRPC(cfg, domain, netAdapter, protocolManager, connectionManager, addressManager, utxoIndex, interrupt)
rpcManager := setupRPC(cfg, domain, netAdapter, protocolManager, connectionManager, addressManager, utxoIndex, domain.ConsensusEventsChannel(), interrupt)
return &ComponentManager{
cfg: cfg,
@ -139,6 +142,7 @@ func setupRPC(
connectionManager *connmanager.ConnectionManager,
addressManager *addressmanager.AddressManager,
utxoIndex *utxoindex.UTXOIndex,
consensusEventsChan chan externalapi.ConsensusEvent,
shutDownChan chan<- struct{},
) *rpc.Manager {
@ -150,9 +154,10 @@ func setupRPC(
connectionManager,
addressManager,
utxoIndex,
consensusEventsChan,
shutDownChan,
)
protocolManager.SetOnBlockAddedToDAGHandler(rpcManager.NotifyBlockAddedToDAG)
protocolManager.SetOnNewBlockTemplateHandler(rpcManager.NotifyNewBlockTemplate)
protocolManager.SetOnPruningPointUTXOSetOverrideHandler(rpcManager.NotifyPruningPointUTXOSetOverride)
return rpcManager
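
To make the new plumbing concrete, a sketch of how the channel obtained from domain.ConsensusEventsChannel() might be drained; the consumer body is hypothetical (the actual consumer lives inside the RPC manager this function constructs).

consensusEventsChan := domain.ConsensusEventsChannel()
go func() {
	// The channel is closed by ComponentManager.Stop, ending this loop.
	for event := range consensusEventsChan {
		_ = event // e.g. fan out chain-changed events to RPC subscribers
	}
}()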

View File

@ -1,6 +1,8 @@
package common
import (
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"time"
"github.com/pkg/errors"
@ -8,7 +10,18 @@ import (
// DefaultTimeout is the default duration to wait for enqueuing/dequeuing
// to/from routes.
const DefaultTimeout = 30 * time.Second
const DefaultTimeout = 120 * time.Second
// ErrPeerWithSameIDExists signifies that a peer with the same ID already exists.
var ErrPeerWithSameIDExists = errors.New("ready peer with the same ID already exists")
type flowExecuteFunc func(peer *peerpkg.Peer)
// Flow is a data structure used to associate a p2p flow with some route in a router.
type Flow struct {
Name string
ExecuteFunc flowExecuteFunc
}
// FlowInitializeFunc is a function used to initialize a flow
type FlowInitializeFunc func(route *routerpkg.Route, peer *peerpkg.Peer) error
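
A minimal sketch of how these newly shared types are meant to be used; the flow name and body are hypothetical.

exampleFlow := &Flow{
	Name: "HandleExampleMessages", // hypothetical flow name
	ExecuteFunc: func(peer *peerpkg.Peer) {
		// the per-peer flow body runs here until its route closes
	},
}
_ = exampleFlow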

View File

@ -11,55 +11,52 @@ import (
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
)
// OnNewBlock updates the mempool after a new block arrival, relays newly
// unorphaned transactions, and possibly rebroadcasts manually added
// transactions when not in IBD.
func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
blockInsertionResult *externalapi.BlockInsertionResult) error {
func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock) error {
hash := consensushashing.BlockHash(block)
log.Debugf("OnNewBlock start for block %s", hash)
defer log.Debugf("OnNewBlock end for block %s", hash)
log.Tracef("OnNewBlock start for block %s", hash)
defer log.Tracef("OnNewBlock end for block %s", hash)
unorphaningResults, err := f.UnorphanBlocks(block)
unorphanedBlocks, err := f.UnorphanBlocks(block)
if err != nil {
return err
}
log.Debugf("OnNewBlock: block %s unorphaned %d blocks", hash, len(unorphaningResults))
log.Debugf("OnNewBlock: block %s unorphaned %d blocks", hash, len(unorphanedBlocks))
newBlocks := []*externalapi.DomainBlock{block}
newBlockInsertionResults := []*externalapi.BlockInsertionResult{blockInsertionResult}
for _, unorphaningResult := range unorphaningResults {
newBlocks = append(newBlocks, unorphaningResult.block)
newBlockInsertionResults = append(newBlockInsertionResults, unorphaningResult.blockInsertionResult)
}
newBlocks = append(newBlocks, unorphanedBlocks...)
allAcceptedTransactions := make([]*externalapi.DomainTransaction, 0)
for i, newBlock := range newBlocks {
for _, newBlock := range newBlocks {
log.Debugf("OnNewBlock: passing block %s transactions to mining manager", hash)
acceptedTransactions, err := f.Domain().MiningManager().HandleNewBlockTransactions(newBlock.Transactions)
if err != nil {
return err
}
allAcceptedTransactions = append(allAcceptedTransactions, acceptedTransactions...)
if f.onBlockAddedToDAGHandler != nil {
log.Debugf("OnNewBlock: calling f.onBlockAddedToDAGHandler for block %s", hash)
blockInsertionResult = newBlockInsertionResults[i]
err := f.onBlockAddedToDAGHandler(newBlock, blockInsertionResult)
if err != nil {
return err
}
}
}
return f.broadcastTransactionsAfterBlockAdded(newBlocks, allAcceptedTransactions)
}
// OnNewBlockTemplate calls the handler function whenever a new block template is available for miners.
func (f *FlowContext) OnNewBlockTemplate() error {
// Clear current template cache. Note we call this even if the handler is nil, in order to keep the
// state consistent without dependency on external event registration
f.Domain().MiningManager().ClearBlockTemplate()
if f.onNewBlockTemplateHandler != nil {
return f.onNewBlockTemplateHandler()
}
return nil
}
// OnPruningPointUTXOSetOverride calls the handler function whenever the UTXO set
// resets due to pruning point change via IBD.
func (f *FlowContext) OnPruningPointUTXOSetOverride() error {
@ -100,7 +97,7 @@ func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
// SharedRequestedBlocks returns a *SharedRequestedBlocks for sharing
// data about requested blocks between different peers.
func (f *FlowContext) SharedRequestedBlocks() *blockrelay.SharedRequestedBlocks {
func (f *FlowContext) SharedRequestedBlocks() *SharedRequestedBlocks {
return f.sharedRequestedBlocks
}
@ -110,14 +107,18 @@ func (f *FlowContext) AddBlock(block *externalapi.DomainBlock) error {
return protocolerrors.Errorf(false, "cannot add header only block")
}
blockInsertionResult, err := f.Domain().Consensus().ValidateAndInsertBlock(block, true)
err := f.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for block %s: %s", consensushashing.BlockHash(block), err)
}
return err
}
err = f.OnNewBlock(block, blockInsertionResult)
err = f.OnNewBlockTemplate()
if err != nil {
return err
}
err = f.OnNewBlock(block)
if err != nil {
return err
}
@ -142,7 +143,7 @@ func (f *FlowContext) TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool {
return false
}
f.ibdPeer = ibdPeer
log.Infof("IBD started")
log.Infof("IBD started with peer %s", ibdPeer)
return true
}

View File

@ -2,6 +2,7 @@ package flowcontext
import (
"errors"
"strings"
"sync/atomic"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
@ -9,6 +10,11 @@ import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
)
var (
// ErrPingTimeout signifies that a ping operation timed out.
ErrPingTimeout = protocolerrors.New(false, "timeout expired on ping")
)
// HandleError handles an error from a flow,
// It sends the error to errChan if isStopping == 0 and increments isStopping
//
@ -21,8 +27,15 @@ func (*FlowContext) HandleError(err error, flowName string, isStopping *uint32,
if protocolErr := (protocolerrors.ProtocolError{}); !errors.As(err, &protocolErr) {
panic(err)
}
if errors.Is(err, ErrPingTimeout) {
// Avoid printing the call stack on ping timeouts, since it alarms users and this case is not interesting
log.Errorf("error from %s: %s", flowName, err)
} else {
// Explain to the user that this is not a panic, but only a protocol error with a specific peer
logFrame := strings.Repeat("=", 52)
log.Errorf("Non-critical peer protocol error from %s, printing the full stack for debug purposes: \n%s\n%+v \n%s",
flowName, logFrame, err, logFrame)
}
}
if atomic.AddUint32(isStopping, 1) == 1 {

View File

@ -10,8 +10,6 @@ import (
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
@ -20,9 +18,8 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/id"
)
// OnBlockAddedToDAGHandler is a handler function that's triggered
// when a block is added to the DAG
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
// OnNewBlockTemplateHandler is a handler function that's triggered when a new block template is available
type OnNewBlockTemplateHandler func() error
// OnPruningPointUTXOSetOverrideHandler is a handler function that's triggered whenever the UTXO set
// resets due to pruning point change via IBD.
@ -43,14 +40,14 @@ type FlowContext struct {
timeStarted int64
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onNewBlockTemplateHandler OnNewBlockTemplateHandler
onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
lastRebroadcastTime time.Time
sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
sharedRequestedTransactions *SharedRequestedTransactions
sharedRequestedBlocks *blockrelay.SharedRequestedBlocks
sharedRequestedBlocks *SharedRequestedBlocks
ibdPeer *peerpkg.Peer
ibdPeerMutex sync.RWMutex
@ -78,8 +75,8 @@ func New(cfg *config.Config, domain domain.Domain, addressManager *addressmanage
domain: domain,
addressManager: addressManager,
connectionManager: connectionManager,
sharedRequestedTransactions: transactionrelay.NewSharedRequestedTransactions(),
sharedRequestedBlocks: blockrelay.NewSharedRequestedBlocks(),
sharedRequestedTransactions: NewSharedRequestedTransactions(),
sharedRequestedBlocks: NewSharedRequestedBlocks(),
peers: make(map[id.ID]*peerpkg.Peer),
orphans: make(map[externalapi.DomainHash]*externalapi.DomainBlock),
timeStarted: mstime.Now().UnixMilliseconds(),
@ -100,9 +97,14 @@ func (f *FlowContext) ShutdownChan() <-chan struct{} {
return f.shutdownChan
}
// SetOnBlockAddedToDAGHandler sets the onBlockAddedToDAG handler
func (f *FlowContext) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler OnBlockAddedToDAGHandler) {
f.onBlockAddedToDAGHandler = onBlockAddedToDAGHandler
// IsNearlySynced returns whether the current consensus is considered synced or close to being synced.
func (f *FlowContext) IsNearlySynced() (bool, error) {
return f.Domain().Consensus().IsNearlySynced()
}
// SetOnNewBlockTemplateHandler sets the onNewBlockTemplateHandler handler
func (f *FlowContext) SetOnNewBlockTemplateHandler(onNewBlockTemplateHandler OnNewBlockTemplateHandler) {
f.onNewBlockTemplateHandler = onNewBlockTemplateHandler
}
// SetOnPruningPointUTXOSetOverrideHandler sets the onPruningPointUTXOSetOverrideHandler handler

View File

@ -72,3 +72,10 @@ func (f *FlowContext) Peers() []*peerpkg.Peer {
}
return peers
}
// HasPeers returns whether there are currently active peers
func (f *FlowContext) HasPeers() bool {
f.peersMutex.RLock()
defer f.peersMutex.RUnlock()
return len(f.peers) > 0
}
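
As a hedged sketch of how HasPeers pairs with the new IsNearlySynced, here is a hypothetical helper gating work on both conditions; it mirrors the typical sync gate, not a verbatim excerpt from this changeset.

// isReadyToServeWork is a hypothetical helper: a node should only be treated
// as synced when it both has peers and is nearly synced.
func (f *FlowContext) isReadyToServeWork() (bool, error) {
	isNearlySynced, err := f.IsNearlySynced()
	if err != nil {
		return false, err
	}
	return f.HasPeers() && isNearlySynced, nil
}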

View File

@ -15,12 +15,6 @@ import (
// on: 2^orphanResolutionRange * PHANTOM K.
const maxOrphans = 600
// UnorphaningResult is the result of unorphaning a block
type UnorphaningResult struct {
block *externalapi.DomainBlock
blockInsertionResult *externalapi.BlockInsertionResult
}
// AddOrphan adds the block to the orphan set
func (f *FlowContext) AddOrphan(orphanBlock *externalapi.DomainBlock) {
f.orphansMutex.Lock()
@ -57,7 +51,7 @@ func (f *FlowContext) IsOrphan(blockHash *externalapi.DomainHash) bool {
}
// UnorphanBlocks removes the block from the orphan set, and removes all of the blocks that are no longer orphans.
func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*UnorphaningResult, error) {
func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*externalapi.DomainBlock, error) {
f.orphansMutex.Lock()
defer f.orphansMutex.Unlock()
@ -66,7 +60,7 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
rootBlockHash := consensushashing.BlockHash(rootBlock)
processQueue := f.addChildOrphansToProcessQueue(rootBlockHash, []externalapi.DomainHash{})
var unorphaningResults []*UnorphaningResult
var unorphanedBlocks []*externalapi.DomainBlock
for len(processQueue) > 0 {
var orphanHash externalapi.DomainHash
orphanHash, processQueue = processQueue[0], processQueue[1:]
@ -90,21 +84,18 @@ func (f *FlowContext) UnorphanBlocks(rootBlock *externalapi.DomainBlock) ([]*Uno
}
}
if canBeUnorphaned {
blockInsertionResult, unorphaningSucceeded, err := f.unorphanBlock(orphanHash)
unorphaningSucceeded, err := f.unorphanBlock(orphanHash)
if err != nil {
return nil, err
}
if unorphaningSucceeded {
unorphaningResults = append(unorphaningResults, &UnorphaningResult{
block: orphanBlock,
blockInsertionResult: blockInsertionResult,
})
unorphanedBlocks = append(unorphanedBlocks, orphanBlock)
processQueue = f.addChildOrphansToProcessQueue(&orphanHash, processQueue)
}
}
}
return unorphaningResults, nil
return unorphanedBlocks, nil
}
// addChildOrphansToProcessQueue finds all child orphans of `blockHash`
@ -143,24 +134,24 @@ func (f *FlowContext) findChildOrphansOfBlock(blockHash *externalapi.DomainHash)
return childOrphans
}
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (*externalapi.BlockInsertionResult, bool, error) {
func (f *FlowContext) unorphanBlock(orphanHash externalapi.DomainHash) (bool, error) {
orphanBlock, ok := f.orphans[orphanHash]
if !ok {
return nil, false, errors.Errorf("attempted to unorphan a non-orphan block %s", orphanHash)
return false, errors.Errorf("attempted to unorphan a non-orphan block %s", orphanHash)
}
delete(f.orphans, orphanHash)
blockInsertionResult, err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock, true)
err := f.domain.Consensus().ValidateAndInsertBlock(orphanBlock, true)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
log.Warnf("Validation failed for orphan block %s: %s", orphanHash, err)
return nil, false, nil
return false, nil
}
return nil, false, err
return false, err
}
log.Infof("Unorphaned block %s", orphanHash)
return blockInsertionResult, true, nil
return true, nil
}
// GetOrphanRoots returns the roots of the missing ancestors DAG of the given orphan

View File

@ -1,4 +1,4 @@
package blockrelay
package flowcontext
import (
"sync"
@ -13,13 +13,15 @@ type SharedRequestedBlocks struct {
sync.Mutex
}
func (s *SharedRequestedBlocks) remove(hash *externalapi.DomainHash) {
// Remove removes a block from the set.
func (s *SharedRequestedBlocks) Remove(hash *externalapi.DomainHash) {
s.Lock()
defer s.Unlock()
delete(s.blocks, *hash)
}
func (s *SharedRequestedBlocks) removeSet(blockHashes map[externalapi.DomainHash]struct{}) {
// RemoveSet removes a set of blocks from the set.
func (s *SharedRequestedBlocks) RemoveSet(blockHashes map[externalapi.DomainHash]struct{}) {
s.Lock()
defer s.Unlock()
for hash := range blockHashes {
@ -27,7 +29,8 @@ func (s *SharedRequestedBlocks) removeSet(blockHashes map[externalapi.DomainHash
}
}
func (s *SharedRequestedBlocks) addIfNotExists(hash *externalapi.DomainHash) (exists bool) {
// AddIfNotExists adds a block to the set if it doesn't exist yet.
func (s *SharedRequestedBlocks) AddIfNotExists(hash *externalapi.DomainHash) (exists bool) {
s.Lock()
defer s.Unlock()
_, ok := s.blocks[*hash]
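
With the methods now exported, a hypothetical helper sketches the intended request-once pattern across peer flows:

// requestOnce is a hypothetical illustration: reserve the hash, run the
// request, and release the reservation on failure so another peer may retry.
func requestOnce(shared *SharedRequestedBlocks, hash *externalapi.DomainHash, request func() error) error {
	if exists := shared.AddIfNotExists(hash); exists {
		return nil // some other peer flow is already fetching this block
	}
	if err := request(); err != nil {
		shared.Remove(hash)
		return err
	}
	return nil
}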

View File

@ -1,4 +1,4 @@
package transactionrelay
package flowcontext
import (
"sync"
@ -13,13 +13,15 @@ type SharedRequestedTransactions struct {
sync.Mutex
}
func (s *SharedRequestedTransactions) remove(txID *externalapi.DomainTransactionID) {
// Remove removes a transaction from the set.
func (s *SharedRequestedTransactions) Remove(txID *externalapi.DomainTransactionID) {
s.Lock()
defer s.Unlock()
delete(s.transactions, *txID)
}
func (s *SharedRequestedTransactions) removeMany(txIDs []*externalapi.DomainTransactionID) {
// RemoveMany removes a set of transactions from the set.
func (s *SharedRequestedTransactions) RemoveMany(txIDs []*externalapi.DomainTransactionID) {
s.Lock()
defer s.Unlock()
for _, txID := range txIDs {
@ -27,7 +29,8 @@ func (s *SharedRequestedTransactions) removeMany(txIDs []*externalapi.DomainTran
}
}
func (s *SharedRequestedTransactions) addIfNotExists(txID *externalapi.DomainTransactionID) (exists bool) {
// AddIfNotExists adds a transaction to the set if it doesn't exist yet.
func (s *SharedRequestedTransactions) AddIfNotExists(txID *externalapi.DomainTransactionID) (exists bool) {
s.Lock()
defer s.Unlock()
_, ok := s.transactions[*txID]

View File

@ -1,43 +0,0 @@
package flowcontext
import "github.com/kaspanet/kaspad/util/mstime"
const (
maxSelectedParentTimeDiffToAllowMiningInMilliSeconds = 60 * 60 * 1000 // 1 Hour
)
// ShouldMine returns whether it's ok to use block template from this node
// for mining purposes.
func (f *FlowContext) ShouldMine() (bool, error) {
peers := f.Peers()
if len(peers) == 0 {
log.Debugf("The node is not connected, so ShouldMine returns false")
return false, nil
}
if f.IsIBDRunning() {
log.Debugf("IBD is running, so ShouldMine returns false")
return false, nil
}
virtualSelectedParent, err := f.domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
virtualSelectedParentHeader, err := f.domain.Consensus().GetBlockHeader(virtualSelectedParent)
if err != nil {
return false, err
}
now := mstime.Now().UnixMilliseconds()
if now-virtualSelectedParentHeader.TimeInMilliseconds() < maxSelectedParentTimeDiffToAllowMiningInMilliSeconds {
log.Debugf("The selected tip timestamp is recent (%d), so ShouldMine returns true",
virtualSelectedParentHeader.TimeInMilliseconds())
return true, nil
}
log.Debugf("The selected tip timestamp is old (%d), so ShouldMine returns false",
virtualSelectedParentHeader.TimeInMilliseconds())
return false, nil
}

View File

@ -4,7 +4,6 @@ import (
"time"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
@ -30,7 +29,7 @@ func (f *FlowContext) shouldRebroadcastTransactions() bool {
// SharedRequestedTransactions returns a *SharedRequestedTransactions for sharing
// data about requested transactions between different peers.
func (f *FlowContext) SharedRequestedTransactions() *transactionrelay.SharedRequestedTransactions {
func (f *FlowContext) SharedRequestedTransactions() *SharedRequestedTransactions {
return f.sharedRequestedTransactions
}

View File

@ -1,62 +0,0 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// PruningPointAndItsAnticoneRequestsContext is the interface for the context needed for the HandlePruningPointAndItsAnticoneRequests flow.
type PruningPointAndItsAnticoneRequestsContext interface {
Domain() domain.Domain
}
// HandlePruningPointAndItsAnticoneRequests listens to appmessage.MsgRequestPruningPointAndItsAnticone messages and sends
// the pruning point and its anticone to the requesting peer.
func HandlePruningPointAndItsAnticoneRequests(context PruningPointAndItsAnticoneRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
log.Debugf("Got request for pruning point and its anticone from %s", peer)
pruningPointHeaders, err := context.Domain().Consensus().PruningPointHeaders()
if err != nil {
return err
}
msgPruningPointHeaders := make([]*appmessage.MsgBlockHeader, len(pruningPointHeaders))
for i, header := range pruningPointHeaders {
msgPruningPointHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(header)
}
err = outgoingRoute.Enqueue(appmessage.NewMsgPruningPoints(msgPruningPointHeaders))
if err != nil {
return err
}
blocks, err := context.Domain().Consensus().PruningPointAndItsAnticoneWithTrustedData()
if err != nil {
return err
}
for _, block := range blocks {
err = outgoingRoute.Enqueue(appmessage.DomainBlockWithTrustedDataToBlockWithTrustedData(block))
if err != nil {
return err
}
}
err = outgoingRoute.Enqueue(appmessage.NewMsgDoneBlocksWithTrustedData())
if err != nil {
return err
}
log.Debugf("Sent pruning point and its anticone to %s", peer)
}
}

View File

@ -1,492 +0,0 @@
package blockrelay
import (
"time"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
)
func (flow *handleRelayInvsFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) error {
wasIBDNotRunning := flow.TrySetIBDRunning(flow.peer)
if !wasIBDNotRunning {
log.Debugf("IBD is already running")
return nil
}
isFinishedSuccessfully := false
defer func() {
flow.UnsetIBDRunning()
flow.logIBDFinished(isFinishedSuccessfully)
}()
highHash := consensushashing.BlockHash(block)
log.Debugf("IBD started with peer %s and highHash %s", flow.peer, highHash)
log.Debugf("Syncing blocks up to %s", highHash)
log.Debugf("Trying to find highest shared chain block with peer %s with high hash %s", flow.peer, highHash)
highestSharedBlockHash, highestSharedBlockFound, err := flow.findHighestSharedBlockHash(highHash)
if err != nil {
return err
}
log.Debugf("Found highest shared chain block %s with peer %s", highestSharedBlockHash, flow.peer)
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(block, highestSharedBlockFound)
if err != nil {
return err
}
if !shouldSync {
return nil
}
if shouldDownloadHeadersProof {
log.Infof("Starting IBD with headers proof")
err := flow.ibdWithHeadersProof(highHash)
if err != nil {
return err
}
} else {
err = flow.syncPruningPointFutureHeaders(flow.Domain().Consensus(), highestSharedBlockHash, highHash)
if err != nil {
return err
}
}
err = flow.syncMissingBlockBodies(highHash)
if err != nil {
return err
}
log.Debugf("Finished syncing blocks up to %s", highHash)
isFinishedSuccessfully = true
return nil
}
func (flow *handleRelayInvsFlow) logIBDFinished(isFinishedSuccessfully bool) {
successString := "successfully"
if !isFinishedSuccessfully {
successString = "(interrupted)"
}
log.Infof("IBD finished %s", successString)
}
// findHighestSharedBlockHash attempts to find the highest shared block between the peer
// and this node. This method may fail because we and the peer have conflicting pruning
// points. In that case we return (nil, false, nil) so that we may stop IBD gracefully.
func (flow *handleRelayInvsFlow) findHighestSharedBlockHash(
targetHash *externalapi.DomainHash) (*externalapi.DomainHash, bool, error) {
log.Debugf("Sending a blockLocator to %s between pruning point and headers selected tip", flow.peer)
blockLocator, err := flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
if err != nil {
return nil, false, err
}
for {
highestHash, highestHashFound, err := flow.fetchHighestHash(targetHash, blockLocator)
if err != nil {
return nil, false, err
}
if !highestHashFound {
return nil, false, nil
}
highestHashIndex, err := flow.findHighestHashIndex(highestHash, blockLocator)
if err != nil {
return nil, false, err
}
if highestHashIndex == 0 ||
// If the block locator contains only two adjacent chain blocks, the
// syncer will always find the same highest chain block, so to avoid
// an endless loop, we explicitly stop the loop in such a situation.
(len(blockLocator) == 2 && highestHashIndex == 1) {
return highestHash, true, nil
}
locatorHashAboveHighestHash := highestHash
if highestHashIndex > 0 {
locatorHashAboveHighestHash = blockLocator[highestHashIndex-1]
}
blockLocator, err = flow.nextBlockLocator(highestHash, locatorHashAboveHighestHash)
if err != nil {
return nil, false, err
}
}
}
func (flow *handleRelayInvsFlow) nextBlockLocator(lowHash, highHash *externalapi.DomainHash) (externalapi.BlockLocator, error) {
log.Debugf("Sending a blockLocator to %s between %s and %s", flow.peer, lowHash, highHash)
blockLocator, err := flow.Domain().Consensus().CreateHeadersSelectedChainBlockLocator(lowHash, highHash)
if err != nil {
if errors.Is(model.ErrBlockNotInSelectedParentChain, err) {
return nil, err
}
log.Debugf("Headers selected parent chain moved since findHighestSharedBlockHash - " +
"restarting with full block locator")
blockLocator, err = flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
if err != nil {
return nil, err
}
}
return blockLocator, nil
}
func (flow *handleRelayInvsFlow) findHighestHashIndex(
highestHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (int, error) {
highestHashIndex := 0
highestHashIndexFound := false
for i, blockLocatorHash := range blockLocator {
if highestHash.Equal(blockLocatorHash) {
highestHashIndex = i
highestHashIndexFound = true
break
}
}
if !highestHashIndexFound {
return 0, protocolerrors.Errorf(true, "highest hash %s "+
"returned from peer %s is not in the original blockLocator", highestHash, flow.peer)
}
log.Debugf("The index of the highest hash in the original "+
"blockLocator sent to %s is %d", flow.peer, highestHashIndex)
return highestHashIndex, nil
}
// fetchHighestHash attempts to fetch the highest hash the peer knows amongst the given
// blockLocator. This method may fail because we and the peer have conflicting pruning
// points. In that case we return (nil, false, nil) so that we may stop IBD gracefully.
func (flow *handleRelayInvsFlow) fetchHighestHash(
targetHash *externalapi.DomainHash, blockLocator externalapi.BlockLocator) (*externalapi.DomainHash, bool, error) {
ibdBlockLocatorMessage := appmessage.NewMsgIBDBlockLocator(targetHash, blockLocator)
err := flow.outgoingRoute.Enqueue(ibdBlockLocatorMessage)
if err != nil {
return nil, false, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch message := message.(type) {
case *appmessage.MsgIBDBlockLocatorHighestHash:
highestHash := message.HighestHash
log.Debugf("The highest hash the peer %s knows is %s", flow.peer, highestHash)
return highestHash, true, nil
case *appmessage.MsgIBDBlockLocatorHighestHashNotFound:
log.Debugf("Peer %s does not know any block within our blockLocator. "+
"This should only happen if there's a DAG split deeper than the pruning point.", flow.peer)
return nil, false, nil
default:
return nil, false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlockLocatorHighestHash, message.Command())
}
}
func (flow *handleRelayInvsFlow) syncPruningPointFutureHeaders(consensus externalapi.Consensus, highestSharedBlockHash *externalapi.DomainHash,
highHash *externalapi.DomainHash) error {
log.Infof("Downloading headers from %s", flow.peer)
err := flow.sendRequestHeaders(highestSharedBlockHash, highHash)
if err != nil {
return err
}
// Keep a short queue of BlockHeadersMessages so that there's
// never a moment when the node is not validating and inserting
// headers
blockHeadersMessageChan := make(chan *appmessage.BlockHeadersMessage, 2)
errChan := make(chan error)
spawn("handleRelayInvsFlow-syncPruningPointFutureHeaders", func() {
for {
blockHeadersMessage, doneIBD, err := flow.receiveHeaders()
if err != nil {
errChan <- err
return
}
if doneIBD {
close(blockHeadersMessageChan)
return
}
blockHeadersMessageChan <- blockHeadersMessage
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestNextHeaders())
if err != nil {
errChan <- err
return
}
}
})
for {
select {
case ibdBlocksMessage, ok := <-blockHeadersMessageChan:
if !ok {
// If the highHash has not been received, the peer is misbehaving
highHashBlockInfo, err := consensus.GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashBlockInfo.Exists {
return protocolerrors.Errorf(true, "did not receive "+
"highHash block %s from peer %s during block download", highHash, flow.peer)
}
return nil
}
for _, header := range ibdBlocksMessage.BlockHeaders {
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
}
case err := <-errChan:
return err
}
}
}
func (flow *handleRelayInvsFlow) sendRequestHeaders(highestSharedBlockHash *externalapi.DomainHash,
peerSelectedTipHash *externalapi.DomainHash) error {
msgGetBlockInvs := appmessage.NewMsgRequstHeaders(highestSharedBlockHash, peerSelectedTipHash)
return flow.outgoingRoute.Enqueue(msgGetBlockInvs)
}
func (flow *handleRelayInvsFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneHeaders bool, err error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch message := message.(type) {
case *appmessage.BlockHeadersMessage:
return message, false, nil
case *appmessage.MsgDoneHeaders:
return nil, true, nil
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s",
appmessage.CmdBlockHeaders,
appmessage.CmdDoneHeaders,
message.Command())
}
}
func (flow *handleRelayInvsFlow) processHeader(consensus externalapi.Consensus, msgBlockHeader *appmessage.MsgBlockHeader) error {
header := appmessage.BlockHeaderToDomainBlockHeader(msgBlockHeader)
block := &externalapi.DomainBlock{
Header: header,
Transactions: nil,
}
blockHash := consensushashing.BlockHash(block)
blockInfo, err := consensus.GetBlockInfo(blockHash)
if err != nil {
return err
}
if blockInfo.Exists {
log.Debugf("Block header %s is already in the DAG. Skipping...", blockHash)
return nil
}
_, err = consensus.ValidateAndInsertBlock(block, false)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
}
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping block header %s as it is a duplicate", blockHash)
} else {
log.Infof("Rejected block header %s from %s during IBD: %s", blockHash, flow.peer, err)
return protocolerrors.Wrapf(true, err, "got invalid block header %s during IBD", blockHash)
}
}
return nil
}
func (flow *handleRelayInvsFlow) validatePruningPointFutureHeaderTimestamps() error {
headerSelectedTipHash, err := flow.Domain().StagingConsensus().GetHeadersSelectedTip()
if err != nil {
return err
}
headerSelectedTipHeader, err := flow.Domain().StagingConsensus().GetBlockHeader(headerSelectedTipHash)
if err != nil {
return err
}
headerSelectedTipTimestamp := headerSelectedTipHeader.TimeInMilliseconds()
currentSelectedTipHash, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return err
}
currentSelectedTipHeader, err := flow.Domain().Consensus().GetBlockHeader(currentSelectedTipHash)
if err != nil {
return err
}
currentSelectedTipTimestamp := currentSelectedTipHeader.TimeInMilliseconds()
if headerSelectedTipTimestamp < currentSelectedTipTimestamp {
return protocolerrors.Errorf(false, "the timestamp of the candidate selected "+
"tip is smaller than the current selected tip")
}
minTimestampDifferenceInMilliseconds := (10 * time.Minute).Milliseconds()
if headerSelectedTipTimestamp-currentSelectedTipTimestamp < minTimestampDifferenceInMilliseconds {
return protocolerrors.Errorf(false, "difference between the timestamps of "+
"the current pruning point and the candidate pruning point is too small. Aborting IBD...")
}
return nil
}
func (flow *handleRelayInvsFlow) receiveAndInsertPruningPointUTXOSet(
consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (bool, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receiveAndInsertPruningPointUTXOSet")
defer onEnd()
receivedChunkCount := 0
receivedUTXOCount := 0
for {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return false, err
}
switch message := message.(type) {
case *appmessage.MsgPruningPointUTXOSetChunk:
receivedUTXOCount += len(message.OutpointAndUTXOEntryPairs)
domainOutpointAndUTXOEntryPairs :=
appmessage.OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(message.OutpointAndUTXOEntryPairs)
err := consensus.AppendImportedPruningPointUTXOs(domainOutpointAndUTXOEntryPairs)
if err != nil {
return false, err
}
receivedChunkCount++
if receivedChunkCount%ibdBatchSize == 0 {
log.Debugf("Received %d UTXO set chunks so far, totaling in %d UTXOs",
receivedChunkCount, receivedUTXOCount)
requestNextPruningPointUTXOSetChunkMessage := appmessage.NewMsgRequestNextPruningPointUTXOSetChunk()
err := flow.outgoingRoute.Enqueue(requestNextPruningPointUTXOSetChunkMessage)
if err != nil {
return false, err
}
}
case *appmessage.MsgDonePruningPointUTXOSetChunks:
log.Infof("Finished receiving the UTXO set. Total UTXOs: %d", receivedUTXOCount)
return true, nil
case *appmessage.MsgUnexpectedPruningPoint:
log.Infof("Could not receive the next UTXO chunk because the pruning point %s "+
"is no longer the pruning point of peer %s", pruningPointHash, flow.peer)
return false, nil
default:
return false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s or %s, got: %s", appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdUnexpectedPruningPoint, message.Command(),
)
}
}
}
func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.DomainHash) error {
hashes, err := flow.Domain().Consensus().GetMissingBlockBodyHashes(highHash)
if err != nil {
return err
}
if len(hashes) == 0 {
// Blocks can be inserted into the DAG during IBD if they were requested before IBD started.
// In rare cases, all the IBD blocks might already be inserted by the time we reach this point.
// In these cases, GetMissingBlockBodyHashes would return an empty array.
log.Debugf("No missing block body hashes found.")
return nil
}
for offset := 0; offset < len(hashes); offset += ibdBatchSize {
var hashesToRequest []*externalapi.DomainHash
if offset+ibdBatchSize < len(hashes) {
hashesToRequest = hashes[offset : offset+ibdBatchSize]
} else {
hashesToRequest = hashes[offset:]
}
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestIBDBlocks(hashesToRequest))
if err != nil {
return err
}
for _, expectedHash := range hashesToRequest {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
if err != nil {
return err
}
msgIBDBlock, ok := message.(*appmessage.MsgIBDBlock)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlock, message.Command())
}
block := appmessage.MsgBlockToDomainBlock(msgIBDBlock.MsgBlock)
blockHash := consensushashing.BlockHash(block)
if !expectedHash.Equal(blockHash) {
return protocolerrors.Errorf(true, "expected block %s but got %s", expectedHash, blockHash)
}
err = flow.banIfBlockIsHeaderOnly(block)
if err != nil {
return err
}
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, false)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping IBD Block %s as it has already been added to the DAG", blockHash)
continue
}
return protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "invalid block %s", blockHash)
}
err = flow.OnNewBlock(block, blockInsertionResult)
if err != nil {
return err
}
}
}
return flow.Domain().Consensus().ResolveVirtual()
}
// dequeueIncomingMessageAndSkipInvs is a convenience method to be used during
// IBD. Inv messages are expected to arrive at any given moment, but should be
// ignored while we're in IBD
func (flow *handleRelayInvsFlow) dequeueIncomingMessageAndSkipInvs(timeout time.Duration) (appmessage.Message, error) {
for {
message, err := flow.incomingRoute.DequeueWithTimeout(timeout)
if err != nil {
return nil, err
}
if _, ok := message.(*appmessage.MsgInvRelayBlock); !ok {
return message, nil
}
}
}

View File

@ -28,7 +28,7 @@ type HandleHandshakeContext interface {
HandleError(err error, flowName string, isStopping *uint32, errChan chan<- error)
}
// HandleHandshake sets up the handshake protocol - It sends a version message and waits for an incoming
// HandleHandshake sets up the new_handshake protocol - It sends a version message and waits for an incoming
// version message, as well as a verack for the sent version
func HandleHandshake(context HandleHandshakeContext, netConnection *netadapter.NetConnection,
receiveVersionRoute *routerpkg.Route, sendVersionRoute *routerpkg.Route, outgoingRoute *routerpkg.Route,
@ -98,7 +98,7 @@ func HandleHandshake(context HandleHandshakeContext, netConnection *netadapter.N
}
// Handshake is different from other flows, since it should forward router.ErrRouteClosed to errChan
// Therefore we implement a separate handleError for handshake
// Therefore we implement a separate handleError for new_handshake
func handleError(err error, flowName string, isStopping *uint32, errChan chan error) {
if errors.Is(err, routerpkg.ErrRouteClosed) {
if atomic.AddUint32(isStopping, 1) == 1 {

View File

@ -7,6 +7,7 @@ import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
var (
@ -17,7 +18,9 @@ var (
// minAcceptableProtocolVersion is the lowest protocol version that a
// connected peer may support.
minAcceptableProtocolVersion = appmessage.ProtocolVersion
minAcceptableProtocolVersion = uint32(5)
maxAcceptableProtocolVersion = uint32(5)
)
type receiveVersionFlow struct {
@ -97,7 +100,12 @@ func (flow *receiveVersionFlow) start() (*appmessage.NetAddress, error) {
return nil, protocolerrors.New(false, "incompatible subnetworks")
}
flow.peer.UpdateFieldsFromMsgVersion(msgVersion)
if flow.Config().ProtocolVersion > maxAcceptableProtocolVersion {
return nil, errors.Errorf("%d is a non existing protocol version", flow.Config().ProtocolVersion)
}
maxProtocolVersion := flow.Config().ProtocolVersion
flow.peer.UpdateFieldsFromMsgVersion(msgVersion, maxProtocolVersion)
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgVerAck())
if err != nil {
return nil, err

View File

@ -7,6 +7,7 @@ import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/version"
"github.com/pkg/errors"
)
var (
@ -56,15 +57,18 @@ func (flow *sendVersionFlow) start() error {
// Version message.
localAddress := flow.AddressManager().BestLocalAddress(flow.peer.Connection().NetAddress())
subnetworkID := flow.Config().SubnetworkID
if flow.Config().ProtocolVersion < minAcceptableProtocolVersion {
return errors.Errorf("configured protocol version %d is obsolete", flow.Config().ProtocolVersion)
}
msg := appmessage.NewMsgVersion(localAddress, flow.NetAdapter().ID(),
flow.Config().ActiveNetParams.Name, subnetworkID)
flow.Config().ActiveNetParams.Name, subnetworkID, flow.Config().ProtocolVersion)
msg.AddUserAgent(userAgentName, userAgentVersion, flow.Config().UserAgentComments...)
// Advertise the services flag
msg.Services = defaultServices
// Advertise our max supported protocol version.
msg.ProtocolVersion = appmessage.ProtocolVersion
msg.ProtocolVersion = flow.Config().ProtocolVersion
// Advertise if inv messages for transactions are desired.
msg.DisableRelayTx = flow.Config().BlocksOnly

View File

@ -0,0 +1,9 @@
package ready
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@ -0,0 +1,56 @@
package ready
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"sync/atomic"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
// HandleReady notifies the other peer that this peer is ready for messages, and waits for the
// other peer to send a ready message before starting to run the flows.
func HandleReady(incomingRoute *routerpkg.Route, outgoingRoute *routerpkg.Route,
peer *peerpkg.Peer,
) error {
log.Debugf("Sending ready message to %s", peer)
isStopping := uint32(0)
err := outgoingRoute.Enqueue(appmessage.NewMsgReady())
if err != nil {
return handleError(err, "HandleReady", &isStopping)
}
_, err = incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return handleError(err, "HandleReady", &isStopping)
}
log.Debugf("Got ready message from %s", peer)
return nil
}
// Ready is different from other flows, since it should forward router.ErrRouteClosed to errChan
// Therefore we implement a separate handleError for 'ready'
func handleError(err error, flowName string, isStopping *uint32) error {
if errors.Is(err, routerpkg.ErrRouteClosed) {
if atomic.AddUint32(isStopping, 1) == 1 {
return err
}
return nil
}
if protocolErr := (protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
log.Errorf("Ready protocol error from %s: %s", flowName, err)
if atomic.AddUint32(isStopping, 1) == 1 {
return err
}
return nil
}
panic(err)
}
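
A sketch of where this slots into connection setup; the wiring function is hypothetical, but the ordering (handshake first, then ready, then the remaining flows) is the point.

// registerReadyPeer is a hypothetical wrapper showing the intended ordering.
func registerReadyPeer(readyIncoming, outgoing *routerpkg.Route, peer *peerpkg.Peer) error {
	// Block until both sides have exchanged ready messages.
	if err := HandleReady(readyIncoming, outgoing, peer); err != nil {
		return err // the peer never became ready; drop the connection
	}
	// Only now would the remaining p2p flows be started for this peer.
	return nil
}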

View File

@ -0,0 +1,16 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"testing"
)
func TestIBDBatchSizeLessThanRouteCapacity(t *testing.T) {
// The `ibdBatchSize` constant must be equal at both syncer and syncee. Therefore, we do not want
// to set it to `router.DefaultMaxMessages`, to avoid confusion and human errors.
// Nonetheless, we must enforce that it does not exceed `router.DefaultMaxMessages`
if ibdBatchSize >= router.DefaultMaxMessages {
t.Fatalf("IBD batch size (%d) must be smaller than router.DefaultMaxMessages (%d)",
ibdBatchSize, router.DefaultMaxMessages)
}
}

View File

@ -13,15 +13,21 @@ func (flow *handleRelayInvsFlow) sendGetBlockLocator(highHash *externalapi.Domai
}
func (flow *handleRelayInvsFlow) receiveBlockLocator() (blockLocatorHashes []*externalapi.DomainHash, err error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
for {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, err
}
msgBlockLocator, ok := message.(*appmessage.MsgBlockLocator)
if !ok {
switch message := message.(type) {
case *appmessage.MsgInvRelayBlock:
flow.invsQueue = append(flow.invsQueue, invRelayBlock{Hash: message.Hash, IsOrphanRoot: false})
case *appmessage.MsgBlockLocator:
return message.BlockLocatorHashes, nil
default:
return nil,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdBlockLocator, message.Command())
}
return msgBlockLocator.BlockLocatorHashes, nil
}
}

View File

@ -5,7 +5,6 @@ import (
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
@ -34,7 +33,7 @@ func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *
if err != nil {
return err
}
if !blockInfo.Exists {
if !blockInfo.HasHeader() {
return protocolerrors.Errorf(true, "received IBDBlockLocator "+
"with an unknown targetHash %s", targetHash)
}
@ -47,7 +46,7 @@ func HandleIBDBlockLocator(context HandleIBDBlockLocatorContext, incomingRoute *
}
// The IBD block locator checks only existing blocks with bodies.
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
if !blockInfo.HasBody() {
continue
}

View File

@ -4,7 +4,6 @@ import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
@ -28,18 +27,15 @@ func HandleIBDBlockRequests(context HandleIBDBlockRequestsContext, incomingRoute
log.Debugf("Got request for %d ibd blocks", len(msgRequestIBDBlocks.Hashes))
for i, hash := range msgRequestIBDBlocks.Hashes {
// Fetch the block from the database.
blockInfo, err := context.Domain().Consensus().GetBlockInfo(hash)
if err != nil {
return err
}
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
return protocolerrors.Errorf(true, "block %s not found", hash)
}
block, err := context.Domain().Consensus().GetBlock(hash)
block, found, err := context.Domain().Consensus().GetBlock(hash)
if err != nil {
return errors.Wrapf(err, "unable to fetch requested block hash %s", hash)
}
if !found {
return protocolerrors.Errorf(false, "IBD block %s not found", hash)
}
// TODO (Partial nodes): Convert block to partial block if needed
blockMessage := appmessage.DomainBlockToMsgBlock(block)

View File

@ -0,0 +1,85 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
// RequestIBDChainBlockLocatorContext is the interface for the context needed for the HandleRequestIBDChainBlockLocator flow.
type RequestIBDChainBlockLocatorContext interface {
Domain() domain.Domain
}
type handleRequestIBDChainBlockLocatorFlow struct {
RequestIBDChainBlockLocatorContext
incomingRoute, outgoingRoute *router.Route
}
// HandleRequestIBDChainBlockLocator handles getIBDChainBlockLocator messages
func HandleRequestIBDChainBlockLocator(context RequestIBDChainBlockLocatorContext, incomingRoute *router.Route,
outgoingRoute *router.Route) error {
flow := &handleRequestIBDChainBlockLocatorFlow{
RequestIBDChainBlockLocatorContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
}
return flow.start()
}
func (flow *handleRequestIBDChainBlockLocatorFlow) start() error {
for {
highHash, lowHash, err := flow.receiveRequestIBDChainBlockLocator()
if err != nil {
return err
}
log.Debugf("Received getIBDChainBlockLocator with highHash: %s, lowHash: %s", highHash, lowHash)
var locator externalapi.BlockLocator
if highHash == nil || lowHash == nil {
locator, err = flow.Domain().Consensus().CreateFullHeadersSelectedChainBlockLocator()
} else {
locator, err = flow.Domain().Consensus().CreateHeadersSelectedChainBlockLocator(lowHash, highHash)
if errors.Is(model.ErrBlockNotInSelectedParentChain, err) {
// The chain has been modified, signal it by sending an empty locator
locator, err = externalapi.BlockLocator{}, nil
}
}
if err != nil {
log.Debugf("Received error from CreateHeadersSelectedChainBlockLocator: %s", err)
return protocolerrors.Errorf(true, "couldn't build a block "+
"locator between %s and %s", lowHash, highHash)
}
err = flow.sendIBDChainBlockLocator(locator)
if err != nil {
return err
}
}
}
func (flow *handleRequestIBDChainBlockLocatorFlow) receiveRequestIBDChainBlockLocator() (highHash, lowHash *externalapi.DomainHash, err error) {
message, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, nil, err
}
msgGetBlockLocator := message.(*appmessage.MsgRequestIBDChainBlockLocator)
return msgGetBlockLocator.HighHash, msgGetBlockLocator.LowHash, nil
}
func (flow *handleRequestIBDChainBlockLocatorFlow) sendIBDChainBlockLocator(locator externalapi.BlockLocator) error {
msgIBDChainBlockLocator := appmessage.NewMsgIBDChainBlockLocator(locator)
err := flow.outgoingRoute.Enqueue(msgIBDChainBlockLocator)
if err != nil {
return err
}
return nil
}
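
For the requesting (syncee) side, a hypothetical helper; the request constructor name is assumed to mirror the message type, and nil high/low hashes ask for a locator over the syncer's full headers selected chain, matching the nil handling above.

// requestFullChainLocator is a hypothetical sketch of the requesting side.
func requestFullChainLocator(outgoingRoute *router.Route) error {
	return outgoingRoute.Enqueue(appmessage.NewMsgRequestIBDChainBlockLocator(nil, nil))
}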

View File

@ -0,0 +1,162 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"sync/atomic"
)
// PruningPointAndItsAnticoneRequestsContext is the interface for the context needed for the HandlePruningPointAndItsAnticoneRequests flow.
type PruningPointAndItsAnticoneRequestsContext interface {
Domain() domain.Domain
Config() *config.Config
}
var isBusy uint32
// HandlePruningPointAndItsAnticoneRequests listens to appmessage.MsgRequestPruningPointAndItsAnticone messages and sends
// the pruning point and its anticone to the requesting peer.
func HandlePruningPointAndItsAnticoneRequests(context PruningPointAndItsAnticoneRequestsContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peerpkg.Peer) error {
for {
err := func() error {
_, err := incomingRoute.Dequeue()
if err != nil {
return err
}
if !atomic.CompareAndSwapUint32(&isBusy, 0, 1) {
return protocolerrors.Errorf(false, "node is busy with other pruning point anticone requests")
}
defer atomic.StoreUint32(&isBusy, 0)
log.Debugf("Got request for pruning point and its anticone from %s", peer)
pruningPointHeaders, err := context.Domain().Consensus().PruningPointHeaders()
if err != nil {
return err
}
msgPruningPointHeaders := make([]*appmessage.MsgBlockHeader, len(pruningPointHeaders))
for i, header := range pruningPointHeaders {
msgPruningPointHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(header)
}
err = outgoingRoute.Enqueue(appmessage.NewMsgPruningPoints(msgPruningPointHeaders))
if err != nil {
return err
}
pointAndItsAnticone, err := context.Domain().Consensus().PruningPointAndItsAnticone()
if err != nil {
return err
}
windowSize := context.Config().NetParams().DifficultyAdjustmentWindowSize
daaWindowBlocks := make([]*externalapi.TrustedDataDataDAAHeader, 0, windowSize)
daaWindowHashesToIndex := make(map[externalapi.DomainHash]int, windowSize)
trustedDataDAABlockIndexes := make(map[externalapi.DomainHash][]uint64)
ghostdagData := make([]*externalapi.BlockGHOSTDAGDataHashPair, 0)
ghostdagDataHashToIndex := make(map[externalapi.DomainHash]int)
trustedDataGHOSTDAGDataIndexes := make(map[externalapi.DomainHash][]uint64)
for _, blockHash := range pointAndItsAnticone {
blockDAAWindowHashes, err := context.Domain().Consensus().BlockDAAWindowHashes(blockHash)
if err != nil {
return err
}
trustedDataDAABlockIndexes[*blockHash] = make([]uint64, 0, windowSize)
for i, daaBlockHash := range blockDAAWindowHashes {
index, exists := daaWindowHashesToIndex[*daaBlockHash]
if !exists {
trustedDataDataDAAHeader, err := context.Domain().Consensus().TrustedDataDataDAAHeader(blockHash, daaBlockHash, uint64(i))
if err != nil {
return err
}
daaWindowBlocks = append(daaWindowBlocks, trustedDataDataDAAHeader)
index = len(daaWindowBlocks) - 1
daaWindowHashesToIndex[*daaBlockHash] = index
}
trustedDataDAABlockIndexes[*blockHash] = append(trustedDataDAABlockIndexes[*blockHash], uint64(index))
}
ghostdagDataBlockHashes, err := context.Domain().Consensus().TrustedBlockAssociatedGHOSTDAGDataBlockHashes(blockHash)
if err != nil {
return err
}
trustedDataGHOSTDAGDataIndexes[*blockHash] = make([]uint64, 0, context.Config().NetParams().K)
for _, ghostdagDataBlockHash := range ghostdagDataBlockHashes {
index, exists := ghostdagDataHashToIndex[*ghostdagDataBlockHash]
if !exists {
data, err := context.Domain().Consensus().TrustedGHOSTDAGData(ghostdagDataBlockHash)
if err != nil {
return err
}
ghostdagData = append(ghostdagData, &externalapi.BlockGHOSTDAGDataHashPair{
Hash: ghostdagDataBlockHash,
GHOSTDAGData: data,
})
index = len(ghostdagData) - 1
ghostdagDataHashToIndex[*ghostdagDataBlockHash] = index
}
trustedDataGHOSTDAGDataIndexes[*blockHash] = append(trustedDataGHOSTDAGDataIndexes[*blockHash], uint64(index))
}
}
err = outgoingRoute.Enqueue(appmessage.DomainTrustedDataToTrustedData(daaWindowBlocks, ghostdagData))
if err != nil {
return err
}
for i, blockHash := range pointAndItsAnticone {
block, found, err := context.Domain().Consensus().GetBlock(blockHash)
if err != nil {
return err
}
if !found {
return protocolerrors.Errorf(false, "pruning point anticone block %s not found", blockHash)
}
err = outgoingRoute.Enqueue(appmessage.DomainBlockWithTrustedDataToBlockWithTrustedDataV4(block, trustedDataDAABlockIndexes[*blockHash], trustedDataGHOSTDAGDataIndexes[*blockHash]))
if err != nil {
return err
}
if (i+1)%ibdBatchSize == 0 {
// No timeout here, as we don't care if the syncee takes its time computing,
// since it only blocks this dedicated flow
message, err := incomingRoute.Dequeue()
if err != nil {
return err
}
if _, ok := message.(*appmessage.MsgRequestNextPruningPointAndItsAnticoneBlocks); !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointAndItsAnticoneBlocks, message.Command())
}
}
}
err = outgoingRoute.Enqueue(appmessage.NewMsgDoneBlocksWithTrustedData())
if err != nil {
return err
}
log.Debugf("Sent pruning point and its anticone to %s", peer)
return nil
}()
if err != nil {
return err
}
}
}
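
The flow above deduplicates trusted data across the anticone: each DAA-window header and each GHOSTDAG-data entry is appended once to a shared array, and every block records only indices into those arrays. A distilled, self-contained sketch of that indexing pattern (illustrative names only, not kaspad APIs):

```go
package main

import "fmt"

func main() {
	shared := make([]string, 0)        // e.g. the shared DAA-window headers
	indexOf := make(map[string]int)    // object -> its position in `shared`
	perBlock := make(map[string][]int) // block -> indices into `shared`

	anticone := []struct {
		block  string
		window []string
	}{
		{"block1", []string{"h1", "h2"}},
		{"block2", []string{"h2", "h3"}}, // "h2" is referenced, not re-sent
	}
	for _, entry := range anticone {
		for _, obj := range entry.window {
			idx, exists := indexOf[obj]
			if !exists {
				// First occurrence: append once to the shared array
				shared = append(shared, obj)
				idx = len(shared) - 1
				indexOf[obj] = idx
			}
			perBlock[entry.block] = append(perBlock[entry.block], idx)
		}
	}
	fmt.Println(shared)   // [h1 h2 h3]
	fmt.Println(perBlock) // map[block1:[0 1] block2:[1 2]]
}
```

The syncee-side inverse of this mapping appears in the `processBlockWithTrustedData` hunk further below, where `DAAWindowIndices` and `GHOSTDAGDataIndices` are resolved back against the shared `MsgTrustedData` arrays.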

View File

@ -5,7 +5,6 @@ import (
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
@ -29,18 +28,15 @@ func HandleRelayBlockRequests(context RelayBlockRequestsContext, incomingRoute *
log.Debugf("Got request for relay blocks with hashes %s", getRelayBlocksMessage.Hashes)
for _, hash := range getRelayBlocksMessage.Hashes {
// Fetch the block from the database.
blockInfo, err := context.Domain().Consensus().GetBlockInfo(hash)
if err != nil {
return err
}
if !blockInfo.Exists || blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
return protocolerrors.Errorf(true, "block %s not found", hash)
}
block, err := context.Domain().Consensus().GetBlock(hash)
block, found, err := context.Domain().Consensus().GetBlock(hash)
if err != nil {
return errors.Wrapf(err, "unable to fetch requested block hash %s", hash)
}
if !found {
return protocolerrors.Errorf(false, "Relay block %s not found", hash)
}
// TODO (Partial nodes): Convert block to partial block if needed
err = outgoingRoute.Enqueue(appmessage.DomainBlockToMsgBlock(block))

View File

@ -3,12 +3,15 @@ package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashset"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
@ -23,24 +26,29 @@ var orphanResolutionRange uint32 = 5
type RelayInvsContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
OnNewBlock(block *externalapi.DomainBlock) error
OnNewBlockTemplate() error
OnPruningPointUTXOSetOverride() error
SharedRequestedBlocks() *SharedRequestedBlocks
SharedRequestedBlocks() *flowcontext.SharedRequestedBlocks
Broadcast(message appmessage.Message) error
AddOrphan(orphanBlock *externalapi.DomainBlock)
GetOrphanRoots(orphanHash *externalapi.DomainHash) ([]*externalapi.DomainHash, bool, error)
IsOrphan(blockHash *externalapi.DomainHash) bool
IsIBDRunning() bool
TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool
UnsetIBDRunning()
IsRecoverableError(err error) bool
IsNearlySynced() (bool, error)
}
type invRelayBlock struct {
Hash *externalapi.DomainHash
IsOrphanRoot bool
}
type handleRelayInvsFlow struct {
RelayInvsContext
incomingRoute, outgoingRoute *router.Route
peer *peerpkg.Peer
invsQueue []*appmessage.MsgInvRelayBlock
invsQueue []invRelayBlock
}
// HandleRelayInvs listens to appmessage.MsgInvRelayBlock messages, requests their corresponding blocks if they
@ -53,9 +61,12 @@ func HandleRelayInvs(context RelayInvsContext, incomingRoute *router.Route, outg
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
invsQueue: make([]*appmessage.MsgInvRelayBlock, 0),
invsQueue: make([]invRelayBlock, 0),
}
return flow.start()
err := flow.start()
// Currently, HandleRelayInvs flow is the only place where IBD is triggered, so the channel can be closed now
close(peer.IBDRequestChannel())
return err
}
func (flow *handleRelayInvsFlow) start() error {
@ -81,7 +92,18 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
isGenesisVirtualSelectedParent, err := flow.isGenesisVirtualSelectedParent()
if err != nil {
return err
}
if flow.IsOrphan(inv.Hash) {
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced && isGenesisVirtualSelectedParent {
log.Infof("Cannot process orphan %s for a node with only the genesis block. The node needs to IBD "+
"to the recent pruning point before normal operation can resume.", inv.Hash)
continue
}
log.Debugf("Block %s is a known orphan. Requesting its missing ancestors", inv.Hash)
err := flow.AddOrphanRootsToQueue(inv.Hash)
if err != nil {
@ -90,11 +112,17 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
// Block relay is disabled during IBD
// Block relay is disabled if the node is already in IBD AND considered out of sync
if flow.IsIBDRunning() {
log.Debugf("Got block %s while in IBD. continuing...", inv.Hash)
isNearlySynced, err := flow.IsNearlySynced()
if err != nil {
return err
}
if !isNearlySynced {
log.Debugf("Got block %s while in IBD and the node is out of sync. Continuing...", inv.Hash)
continue
}
}
log.Debugf("Requesting block %s", inv.Hash)
block, exists, err := flow.requestBlock(inv.Hash)
@ -111,8 +139,41 @@ func (flow *handleRelayInvsFlow) start() error {
return err
}
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced && !flow.Config().Devnet && flow.isChildOfGenesis(block) {
log.Infof("Cannot process %s because it's a direct child of genesis.", consensushashing.BlockHash(block))
continue
}
// Note we do not apply the heuristic below if inv was queued as an orphan root, since
// that means the process was started by a proper and relevant relay block
if !inv.IsOrphanRoot {
// Check bounded merge depth to avoid requesting irrelevant data which cannot be merged under virtual
virtualMergeDepthRoot, err := flow.Domain().Consensus().VirtualMergeDepthRoot()
if err != nil {
return err
}
if !virtualMergeDepthRoot.Equal(model.VirtualGenesisBlockHash) {
mergeDepthRootHeader, err := flow.Domain().Consensus().GetBlockHeader(virtualMergeDepthRoot)
if err != nil {
return err
}
// Since `BlueWork` respects topology, this condition means that the relay
// block is not in the future of virtual's merge depth root, and thus cannot be merged unless
// other valid blocks Kosherize it, in which case it will be obtained once the merger is relayed
if block.Header.BlueWork().Cmp(mergeDepthRootHeader.BlueWork()) <= 0 {
log.Debugf("Block %s has lower blue work than virtual's merge root %s (%d <= %d), hence we are skipping it",
inv.Hash, virtualMergeDepthRoot, block.Header.BlueWork(), mergeDepthRootHeader.BlueWork())
continue
}
}
}
log.Debugf("Processing block %s", inv.Hash)
missingParents, blockInsertionResult, err := flow.processBlock(block)
oldVirtualInfo, err := flow.Domain().Consensus().GetVirtualInfo()
if err != nil {
return err
}
missingParents, err := flow.processBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrPrunedBlock) {
log.Infof("Ignoring pruned block %s", inv.Hash)
@ -134,13 +195,48 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
log.Debugf("Relaying block %s", inv.Hash)
oldVirtualParents := hashset.New()
for _, parent := range oldVirtualInfo.ParentHashes {
oldVirtualParents.Add(parent)
}
newVirtualInfo, err := flow.Domain().Consensus().GetVirtualInfo()
if err != nil {
return err
}
virtualHasNewParents := false
for _, parent := range newVirtualInfo.ParentHashes {
if oldVirtualParents.Contains(parent) {
continue
}
virtualHasNewParents = true
block, found, err := flow.Domain().Consensus().GetBlock(parent)
if err != nil {
return err
}
if !found {
return protocolerrors.Errorf(false, "Virtual parent %s not found", parent)
}
blockHash := consensushashing.BlockHash(block)
log.Debugf("Relaying block %s", blockHash)
err = flow.relayBlock(block)
if err != nil {
return err
}
}
if virtualHasNewParents {
log.Debugf("Virtual %d has new parents, raising new block template event", newVirtualInfo.DAAScore)
err = flow.OnNewBlockTemplate()
if err != nil {
return err
}
}
log.Infof("Accepted block %s via relay", inv.Hash)
err = flow.OnNewBlock(block, blockInsertionResult)
err = flow.OnNewBlock(block)
if err != nil {
return err
}
@ -156,35 +252,35 @@ func (flow *handleRelayInvsFlow) banIfBlockIsHeaderOnly(block *externalapi.Domai
return nil
}
func (flow *handleRelayInvsFlow) readInv() (*appmessage.MsgInvRelayBlock, error) {
func (flow *handleRelayInvsFlow) readInv() (invRelayBlock, error) {
if len(flow.invsQueue) > 0 {
var inv *appmessage.MsgInvRelayBlock
var inv invRelayBlock
inv, flow.invsQueue = flow.invsQueue[0], flow.invsQueue[1:]
return inv, nil
}
msg, err := flow.incomingRoute.Dequeue()
if err != nil {
return nil, err
return invRelayBlock{}, err
}
inv, ok := msg.(*appmessage.MsgInvRelayBlock)
msgInv, ok := msg.(*appmessage.MsgInvRelayBlock)
if !ok {
return nil, protocolerrors.Errorf(true, "unexpected %s message in the block relay handleRelayInvsFlow while "+
return invRelayBlock{}, protocolerrors.Errorf(true, "unexpected %s message in the block relay handleRelayInvsFlow while "+
"expecting an inv message", msg.Command())
}
return inv, nil
return invRelayBlock{Hash: msgInv.Hash, IsOrphanRoot: false}, nil
}
func (flow *handleRelayInvsFlow) requestBlock(requestHash *externalapi.DomainHash) (*externalapi.DomainBlock, bool, error) {
exists := flow.SharedRequestedBlocks().addIfNotExists(requestHash)
exists := flow.SharedRequestedBlocks().AddIfNotExists(requestHash)
if exists {
return nil, true, nil
}
// In case the function returns earlier than expected, we want to make sure flow.SharedRequestedBlocks() is
// clean from any pending blocks.
defer flow.SharedRequestedBlocks().remove(requestHash)
defer flow.SharedRequestedBlocks().Remove(requestHash)
getRelayBlocksMsg := appmessage.NewMsgRequestRelayBlocks([]*externalapi.DomainHash{requestHash})
err := flow.outgoingRoute.Enqueue(getRelayBlocksMsg)
@ -218,7 +314,7 @@ func (flow *handleRelayInvsFlow) readMsgBlock() (msgBlock *appmessage.MsgBlock,
switch message := message.(type) {
case *appmessage.MsgInvRelayBlock:
flow.invsQueue = append(flow.invsQueue, message)
flow.invsQueue = append(flow.invsQueue, invRelayBlock{Hash: message.Hash, IsOrphanRoot: false})
case *appmessage.MsgBlock:
return message, nil
default:
@ -227,22 +323,25 @@ func (flow *handleRelayInvsFlow) readMsgBlock() (msgBlock *appmessage.MsgBlock,
}
}
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, *externalapi.BlockInsertionResult, error) {
func (flow *handleRelayInvsFlow) processBlock(block *externalapi.DomainBlock) ([]*externalapi.DomainHash, error) {
blockHash := consensushashing.BlockHash(block)
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block, true)
err := flow.Domain().Consensus().ValidateAndInsertBlock(block, true)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return nil, nil, errors.Wrapf(err, "failed to process block %s", blockHash)
return nil, errors.Wrapf(err, "failed to process block %s", blockHash)
}
missingParentsError := &ruleerrors.ErrMissingParents{}
if errors.As(err, missingParentsError) {
return missingParentsError.MissingParentHashes, nil, nil
return missingParentsError.MissingParentHashes, nil
}
// A duplicate block should not appear to the user as a warning and is already reported in the calling function
if !errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Warnf("Rejected block %s from %s: %s", blockHash, flow.peer, err)
return nil, nil, protocolerrors.Wrapf(true, err, "got invalid block %s from relay", blockHash)
}
return nil, blockInsertionResult, nil
return nil, protocolerrors.Wrapf(true, err, "got invalid block %s from relay", blockHash)
}
return nil, nil
}
func (flow *handleRelayInvsFlow) relayBlock(block *externalapi.DomainBlock) error {
@ -265,6 +364,19 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock) e
return err
}
if isBlockInOrphanResolutionRange {
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced {
isGenesisVirtualSelectedParent, err := flow.isGenesisVirtualSelectedParent()
if err != nil {
return err
}
if isGenesisVirtualSelectedParent {
log.Infof("Cannot process orphan %s for a node with only the genesis block. The node needs to IBD "+
"to the recent pruning point before normal operation can resume.", blockHash)
return nil
}
}
log.Debugf("Block %s is within orphan resolution range. "+
"Adding it to the orphan set", blockHash)
flow.AddOrphan(block)
@ -275,7 +387,28 @@ func (flow *handleRelayInvsFlow) processOrphan(block *externalapi.DomainBlock) e
// Start IBD unless we already are in IBD
log.Debugf("Block %s is out of orphan resolution range. "+
"Attempting to start IBD against it.", blockHash)
return flow.runIBDIfNotRunning(block)
// Send the block to IBD flow via the IBDRequestChannel.
// Note that this is a non-blocking send, since if IBD is already running, there is no need to trigger it
select {
case flow.peer.IBDRequestChannel() <- block:
default:
}
return nil
}
func (flow *handleRelayInvsFlow) isGenesisVirtualSelectedParent() (bool, error) {
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
return virtualSelectedParent.Equal(flow.Config().NetParams().GenesisHash), nil
}
func (flow *handleRelayInvsFlow) isChildOfGenesis(block *externalapi.DomainBlock) bool {
parents := block.Header.DirectParents()
return len(parents) == 1 && parents[0].Equal(flow.Config().NetParams().GenesisHash)
}
// isBlockInOrphanResolutionRange finds out whether the given blockHash should be
@ -316,12 +449,16 @@ func (flow *handleRelayInvsFlow) AddOrphanRootsToQueue(orphan *externalapi.Domai
"probably happened because it was randomly evicted immediately after it was added.", orphan)
}
if len(orphanRoots) == 0 {
// In some rare cases we get here when there are already no orphan roots
return nil
}
log.Infof("Block %s has %d missing ancestors. Adding them to the invs queue...", orphan, len(orphanRoots))
invMessages := make([]*appmessage.MsgInvRelayBlock, len(orphanRoots))
invMessages := make([]invRelayBlock, len(orphanRoots))
for i, root := range orphanRoots {
log.Debugf("Adding block %s missing ancestor %s to the invs queue", orphan, root)
invMessages[i] = appmessage.NewMsgInvBlock(root)
invMessages[i] = invRelayBlock{Hash: root, IsOrphanRoot: true}
}
flow.invsQueue = append(invMessages, flow.invsQueue...)
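
The relay flow above hands IBD triggers to the dedicated IBD flow through `peer.IBDRequestChannel()` with a non-blocking send, and closes the channel when the relay flow exits. A minimal sketch of that pattern (hypothetical names; the channel's buffering here is illustrative):

```go
package main

import "fmt"

func main() {
	ibdRequests := make(chan string, 1) // stands in for peer.IBDRequestChannel()

	trigger := func(block string) {
		// Non-blocking send: if a trigger is already pending, drop this one,
		// since a running (or queued) IBD makes another trigger redundant.
		select {
		case ibdRequests <- block:
			fmt.Println("queued IBD trigger for", block)
		default:
			fmt.Println("IBD already triggered; dropping", block)
		}
	}
	trigger("blockA") // queued
	trigger("blockB") // dropped

	close(ibdRequests) // the relay flow closes the channel on exit; once drained,
	// a receive yields ok == false and the IBD flow's loop terminates cleanly
	for block := range ibdRequests {
		fmt.Println("IBD flow handling", block)
	}
}
```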

View File

@ -0,0 +1,95 @@
package blockrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"sort"
)
// RequestAnticoneContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestAnticoneContext interface {
Domain() domain.Domain
Config() *config.Config
}
type handleRequestAnticoneFlow struct {
RequestAnticoneContext
incomingRoute, outgoingRoute *router.Route
peer *peer.Peer
}
// HandleRequestAnticone handles RequestAnticone messages
func HandleRequestAnticone(context RequestAnticoneContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
flow := &handleRequestAnticoneFlow{
RequestAnticoneContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
func (flow *handleRequestAnticoneFlow) start() error {
for {
blockHash, contextHash, err := receiveRequestAnticone(flow.incomingRoute)
if err != nil {
return err
}
log.Debugf("Received requestAnticone with blockHash: %s, contextHash: %s", blockHash, contextHash)
log.Debugf("Getting past(%s) cap anticone(%s) for peer %s", contextHash, blockHash, flow.peer)
// GetAnticone is expected to be called by the syncee for getting the anticone of the header selected tip
// intersected by past of relayed block, and is thus expected to be bounded by mergeset limit since
// we relay blocks only if they enter virtual's mergeset. We add a factor of 2 for possible sync gaps.
blockHashes, err := flow.Domain().Consensus().GetAnticone(blockHash, contextHash,
flow.Config().ActiveNetParams.MergeSetSizeLimit*2)
if err != nil {
return protocolerrors.Wrap(true, err, "Failed querying anticone")
}
log.Debugf("Got %d header hashes in past(%s) cap anticone(%s)", len(blockHashes), contextHash, blockHash)
blockHeaders := make([]*appmessage.MsgBlockHeader, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := flow.Domain().Consensus().GetBlockHeader(blockHash)
if err != nil {
return err
}
blockHeaders[i] = appmessage.DomainBlockHeaderToBlockHeader(blockHeader)
}
// We sort the headers in bottom-up topological order before sending
sort.Slice(blockHeaders, func(i, j int) bool {
return blockHeaders[i].BlueWork.Cmp(blockHeaders[j].BlueWork) < 0
})
blockHeadersMessage := appmessage.NewBlockHeadersMessage(blockHeaders)
err = flow.outgoingRoute.Enqueue(blockHeadersMessage)
if err != nil {
return err
}
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgDoneHeaders())
if err != nil {
return err
}
}
}
func receiveRequestAnticone(incomingRoute *router.Route) (blockHash *externalapi.DomainHash,
contextHash *externalapi.DomainHash, err error) {
message, err := incomingRoute.Dequeue()
if err != nil {
return nil, nil, err
}
msgRequestAnticone := message.(*appmessage.MsgRequestAnticone)
return msgRequestAnticone.BlockHash, msgRequestAnticone.ContextHash, nil
}
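
The sort above relies on the fact that blue work strictly increases from any block to every block in its future, so ascending blue work is a valid bottom-up topological order. A toy demonstration under that assumption (not kaspad code):

```go
package main

import (
	"fmt"
	"math/big"
	"sort"
)

type header struct {
	id       string
	parents  []string
	blueWork *big.Int // assumed strictly greater than every parent's blue work
}

func main() {
	headers := []header{
		{id: "c", parents: []string{"a", "b"}, blueWork: big.NewInt(30)},
		{id: "a", parents: nil, blueWork: big.NewInt(10)},
		{id: "b", parents: []string{"a"}, blueWork: big.NewInt(20)},
	}
	// Same comparator as the flow above
	sort.Slice(headers, func(i, j int) bool {
		return headers[i].blueWork.Cmp(headers[j].blueWork) < 0
	})
	// Verify: every parent appears before its child
	seen := map[string]bool{}
	for _, h := range headers {
		for _, p := range h.parents {
			if !seen[p] {
				panic("not a topological order") // unreachable under the assumption
			}
		}
		seen[h.id] = true
		fmt.Println(h.id) // prints a, b, c
	}
}
```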

View File

@ -10,7 +10,9 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
const ibdBatchSize = router.DefaultMaxMessages
// This constant must be equal at both syncer and syncee. Therefore, never (!!) change this constant unless a new p2p
// version is introduced. See `TestIBDBatchSizeLessThanRouteCapacity` as well.
const ibdBatchSize = 99
// RequestHeadersContext is the interface for the context needed for the HandleRequestHeaders flow.
type RequestHeadersContext interface {
@ -42,7 +44,34 @@ func (flow *handleRequestHeadersFlow) start() error {
if err != nil {
return err
}
log.Debugf("Recieved requestHeaders with lowHash: %s, highHash: %s", lowHash, highHash)
log.Debugf("Received requestHeaders with lowHash: %s, highHash: %s", lowHash, highHash)
consensus := flow.Domain().Consensus()
lowHashInfo, err := consensus.GetBlockInfo(lowHash)
if err != nil {
return err
}
if !lowHashInfo.HasHeader() {
return protocolerrors.Errorf(true, "Block %s does not exist", lowHash)
}
highHashInfo, err := consensus.GetBlockInfo(highHash)
if err != nil {
return err
}
if !highHashInfo.HasHeader() {
return protocolerrors.Errorf(true, "Block %s does not exist", highHash)
}
isLowSelectedAncestorOfHigh, err := consensus.IsInSelectedParentChainOf(lowHash, highHash)
if err != nil {
return err
}
if !isLowSelectedAncestorOfHigh {
return protocolerrors.Errorf(true, "Expected %s to be on the selected chain of %s",
lowHash, highHash)
}
for !lowHash.Equal(highHash) {
log.Debugf("Getting block headers between %s and %s to %s", lowHash, highHash, flow.peer)
@ -51,7 +80,7 @@ func (flow *handleRequestHeadersFlow) start() error {
// in order to avoid locking the consensus for too long
// maxBlocks MUST be >= MergeSetSizeLimit + 1
const maxBlocks = 1 << 10
blockHashes, _, err := flow.Domain().Consensus().GetHashesBetween(lowHash, highHash, maxBlocks)
blockHashes, _, err := consensus.GetHashesBetween(lowHash, highHash, maxBlocks)
if err != nil {
return err
}
@ -59,7 +88,7 @@ func (flow *handleRequestHeadersFlow) start() error {
blockHeaders := make([]*appmessage.MsgBlockHeader, len(blockHashes))
for i, blockHash := range blockHashes {
blockHeader, err := flow.Domain().Consensus().GetBlockHeader(blockHash)
blockHeader, err := consensus.GetBlockHeader(blockHash)
if err != nil {
return err
}
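
The `ibdBatchSize` comment above points at `TestIBDBatchSizeLessThanRouteCapacity`; the assertion sketched below is an assumption about what that test checks. The point is that a full batch must fit in the receiving route (`router.DefaultMaxMessages`), or the syncee's incoming route could overflow before it sends the next batch request:

```go
package blockrelay

import (
	"testing"

	"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)

// A sketch of the capacity guard; the real test's exact contents are assumed.
func TestIBDBatchSizeLessThanRouteCapacity(t *testing.T) {
	if ibdBatchSize >= router.DefaultMaxMessages {
		t.Fatalf("ibdBatchSize (%d) must be smaller than route capacity (%d)",
			ibdBatchSize, router.DefaultMaxMessages)
	}
}
```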

View File

@ -70,7 +70,8 @@ func (flow *handleRequestPruningPointUTXOSetFlow) waitForRequestPruningPointUTXO
}
msgRequestPruningPointUTXOSet, ok := message.(*appmessage.MsgRequestPruningPointUTXOSet)
if !ok {
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
// TODO: Change to shouldBan: true once we fix the bug of getting redundant messages
return nil, protocolerrors.Errorf(false, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestPruningPointUTXOSet, message.Command())
}
return msgRequestPruningPointUTXOSet, nil
@ -123,7 +124,8 @@ func (flow *handleRequestPruningPointUTXOSetFlow) sendPruningPointUTXOSet(
}
_, ok := message.(*appmessage.MsgRequestNextPruningPointUTXOSetChunk)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
// TODO: Change to shouldBan: true once we fix the bug of getting redundant messages
return protocolerrors.Errorf(false, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdRequestNextPruningPointUTXOSetChunk, message.Command())
}

View File

@ -0,0 +1,751 @@
package blockrelay
import (
"fmt"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
"time"
)
// IBDContext is the interface for the context needed for the HandleIBD flow.
type IBDContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock) error
OnNewBlockTemplate() error
OnPruningPointUTXOSetOverride() error
IsIBDRunning() bool
TrySetIBDRunning(ibdPeer *peerpkg.Peer) bool
UnsetIBDRunning()
IsRecoverableError(err error) bool
}
type handleIBDFlow struct {
IBDContext
incomingRoute, outgoingRoute *router.Route
peer *peerpkg.Peer
}
// HandleIBD handles IBD
func HandleIBD(context IBDContext, incomingRoute *router.Route, outgoingRoute *router.Route,
peer *peerpkg.Peer) error {
flow := &handleIBDFlow{
IBDContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
peer: peer,
}
return flow.start()
}
func (flow *handleIBDFlow) start() error {
for {
// Wait for IBD requests triggered by other flows
block, ok := <-flow.peer.IBDRequestChannel()
if !ok {
return nil
}
err := flow.runIBDIfNotRunning(block)
if err != nil {
return err
}
}
}
func (flow *handleIBDFlow) runIBDIfNotRunning(block *externalapi.DomainBlock) error {
wasIBDNotRunning := flow.TrySetIBDRunning(flow.peer)
if !wasIBDNotRunning {
log.Debugf("IBD is already running")
return nil
}
isFinishedSuccessfully := false
var err error
defer func() {
flow.UnsetIBDRunning()
flow.logIBDFinished(isFinishedSuccessfully, err)
}()
relayBlockHash := consensushashing.BlockHash(block)
log.Infof("IBD started with peer %s and relayBlockHash %s", flow.peer, relayBlockHash)
log.Infof("Syncing blocks up to %s", relayBlockHash)
log.Infof("Trying to find highest known syncer chain block from peer %s with relay hash %s", flow.peer, relayBlockHash)
syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, err := flow.negotiateMissingSyncerChainSegment()
if err != nil {
return err
}
shouldDownloadHeadersProof, shouldSync, err := flow.shouldSyncAndShouldDownloadHeadersProof(
block, highestKnownSyncerChainHash)
if err != nil {
return err
}
if !shouldSync {
return nil
}
if shouldDownloadHeadersProof {
log.Infof("Starting IBD with headers proof")
err = flow.ibdWithHeadersProof(syncerHeaderSelectedTipHash, relayBlockHash, block.Header.DAAScore())
if err != nil {
return err
}
} else {
if flow.Config().NetParams().DisallowDirectBlocksOnTopOfGenesis && !flow.Config().AllowSubmitBlockWhenNotSynced {
isGenesisVirtualSelectedParent, err := flow.isGenesisVirtualSelectedParent()
if err != nil {
return err
}
if isGenesisVirtualSelectedParent {
log.Infof("Cannot IBD to %s because it won't change the pruning point. The node needs to IBD "+
"to the recent pruning point before normal operation can resume.", relayBlockHash)
return nil
}
}
err = flow.syncPruningPointFutureHeaders(
flow.Domain().Consensus(),
syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, relayBlockHash, block.Header.DAAScore())
if err != nil {
return err
}
}
// We start by syncing missing bodies over the syncer selected chain
err = flow.syncMissingBlockBodies(syncerHeaderSelectedTipHash)
if err != nil {
return err
}
relayBlockInfo, err := flow.Domain().Consensus().GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
// Relay block might be in the anticone of syncer selected tip, thus
// check its chain for missing bodies as well.
// Note: this operation can be slightly optimized to avoid the full chain search since the relay block
// is in the syncer's virtual mergeset, which has bounded size.
if relayBlockInfo.BlockStatus == externalapi.StatusHeaderOnly {
err = flow.syncMissingBlockBodies(relayBlockHash)
if err != nil {
return err
}
}
log.Debugf("Finished syncing blocks up to %s", relayBlockHash)
isFinishedSuccessfully = true
return nil
}
func (flow *handleIBDFlow) negotiateMissingSyncerChainSegment() (*externalapi.DomainHash, *externalapi.DomainHash, error) {
/*
Algorithm:
Request full selected chain block locator from syncer
Find the highest block which we know
Repeat the locator step over the new range until finding max(past(syncee) \cap chain(syncer))
*/
// Empty hashes indicate that the full chain is queried
locatorHashes, err := flow.getSyncerChainBlockLocator(nil, nil, common.DefaultTimeout)
if err != nil {
return nil, nil, err
}
if len(locatorHashes) == 0 {
return nil, nil, protocolerrors.Errorf(true, "Expecting initial syncer chain block locator "+
"to contain at least one element")
}
log.Debugf("IBD chain negotiation with peer %s started and received %d hashes (%s, %s)", flow.peer,
len(locatorHashes), locatorHashes[0], locatorHashes[len(locatorHashes)-1])
syncerHeaderSelectedTipHash := locatorHashes[0]
var highestKnownSyncerChainHash *externalapi.DomainHash
chainNegotiationRestartCounter := 0
chainNegotiationZoomCounts := 0
initialLocatorLen := len(locatorHashes)
pruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return nil, nil, err
}
for {
var lowestUnknownSyncerChainHash, currentHighestKnownSyncerChainHash *externalapi.DomainHash
for _, syncerChainHash := range locatorHashes {
info, err := flow.Domain().Consensus().GetBlockInfo(syncerChainHash)
if err != nil {
return nil, nil, err
}
if info.Exists {
if info.BlockStatus == externalapi.StatusInvalid {
return nil, nil, protocolerrors.Errorf(true, "Sent invalid chain block %s", syncerChainHash)
}
isPruningPointOnSyncerChain, err := flow.Domain().Consensus().IsInSelectedParentChainOf(pruningPoint, syncerChainHash)
if err != nil {
log.Errorf("Error checking isPruningPointOnSyncerChain: %s", err)
}
// We're only interested in syncer chain blocks that have our pruning
// point in their selected chain. Otherwise, it means one of the following:
// 1) We will not switch the virtual selected chain to the syncer's chain since it will violate finality
// (hence we can ignore it unless merged by others).
// 2) syncerChainHash is actually in the past of our pruning point so there's no
// point in syncing from it.
if err == nil && isPruningPointOnSyncerChain {
currentHighestKnownSyncerChainHash = syncerChainHash
break
}
}
lowestUnknownSyncerChainHash = syncerChainHash
}
// No unknown blocks, break. Note this can only happen in the first iteration
if lowestUnknownSyncerChainHash == nil {
highestKnownSyncerChainHash = currentHighestKnownSyncerChainHash
break
}
// No shared block, break
if currentHighestKnownSyncerChainHash == nil {
highestKnownSyncerChainHash = nil
break
}
// No point in zooming further
if len(locatorHashes) == 1 {
highestKnownSyncerChainHash = currentHighestKnownSyncerChainHash
break
}
// Zoom in
locatorHashes, err = flow.getSyncerChainBlockLocator(
lowestUnknownSyncerChainHash,
currentHighestKnownSyncerChainHash, time.Second*10)
if err != nil {
return nil, nil, err
}
if len(locatorHashes) > 0 {
if !locatorHashes[0].Equal(lowestUnknownSyncerChainHash) ||
!locatorHashes[len(locatorHashes)-1].Equal(currentHighestKnownSyncerChainHash) {
return nil, nil, protocolerrors.Errorf(true, "Expecting the high and low "+
"hashes to match the locator bounds")
}
chainNegotiationZoomCounts++
log.Debugf("IBD chain negotiation with peer %s zoomed in (%d) and received %d hashes (%s, %s)", flow.peer,
chainNegotiationZoomCounts, len(locatorHashes), locatorHashes[0], locatorHashes[len(locatorHashes)-1])
if len(locatorHashes) == 2 {
// We found our search target
highestKnownSyncerChainHash = currentHighestKnownSyncerChainHash
break
}
if chainNegotiationZoomCounts > initialLocatorLen*2 {
// Since the zoom-in always queries two consecutive entries in the previous locator, it is
// expected to decrease in size at least every two iterations
return nil, nil, protocolerrors.Errorf(true,
"IBD chain negotiation: Number of zoom-in steps %d exceeded the upper bound of 2*%d",
chainNegotiationZoomCounts, initialLocatorLen)
}
} else { // Empty locator signals a restart due to chain changes
chainNegotiationZoomCounts = 0
chainNegotiationRestartCounter++
if chainNegotiationRestartCounter > 32 {
return nil, nil, protocolerrors.Errorf(false,
"IBD chain negotiation with syncer %s exceeded restart limit %d", flow.peer, chainNegotiationRestartCounter)
}
log.Warnf("IBD chain negotiation with syncer %s restarted %d times", flow.peer, chainNegotiationRestartCounter)
// An empty locator signals that the syncer chain was modified and no longer contains one of
// the queried hashes, so we restart the search. We use a shorter timeout here to avoid a timeout attack
locatorHashes, err = flow.getSyncerChainBlockLocator(nil, nil, time.Second*10)
if err != nil {
return nil, nil, err
}
if len(locatorHashes) == 0 {
return nil, nil, protocolerrors.Errorf(true, "Expecting initial syncer chain block locator "+
"to contain at least one element")
}
log.Infof("IBD chain negotiation with peer %s restarted (%d) and received %d hashes (%s, %s)", flow.peer,
chainNegotiationRestartCounter, len(locatorHashes), locatorHashes[0], locatorHashes[len(locatorHashes)-1])
initialLocatorLen = len(locatorHashes)
// Reset syncer's header selected tip
syncerHeaderSelectedTipHash = locatorHashes[0]
}
}
log.Infof("Found highest known syncer chain block %s from peer %s",
highestKnownSyncerChainHash, flow.peer)
return syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, nil
}
func (flow *handleIBDFlow) isGenesisVirtualSelectedParent() (bool, error) {
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
return virtualSelectedParent.Equal(flow.Config().NetParams().GenesisHash), nil
}
func (flow *handleIBDFlow) logIBDFinished(isFinishedSuccessfully bool, err error) {
successString := "successfully"
if !isFinishedSuccessfully {
if err != nil {
successString = fmt.Sprintf("(interrupted: %s)", err)
} else {
successString = "(interrupted)"
}
}
log.Infof("IBD with peer %s finished %s", flow.peer, successString)
}
func (flow *handleIBDFlow) getSyncerChainBlockLocator(
highHash, lowHash *externalapi.DomainHash, timeout time.Duration) ([]*externalapi.DomainHash, error) {
requestIbdChainBlockLocatorMessage := appmessage.NewMsgIBDRequestChainBlockLocator(highHash, lowHash)
err := flow.outgoingRoute.Enqueue(requestIbdChainBlockLocatorMessage)
if err != nil {
return nil, err
}
message, err := flow.incomingRoute.DequeueWithTimeout(timeout)
if err != nil {
return nil, err
}
switch message := message.(type) {
case *appmessage.MsgIBDChainBlockLocator:
if len(message.BlockLocatorHashes) > 64 {
return nil, protocolerrors.Errorf(true,
"Got block locator of size %d>64 while expecting locator to have size "+
"which is logarithmic in DAG size (which should never exceed 2^64)",
len(message.BlockLocatorHashes))
}
return message.BlockLocatorHashes, nil
default:
return nil, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDChainBlockLocator, message.Command())
}
}
func (flow *handleIBDFlow) syncPruningPointFutureHeaders(consensus externalapi.Consensus,
syncerHeaderSelectedTipHash, highestKnownSyncerChainHash, relayBlockHash *externalapi.DomainHash,
highBlockDAAScoreHint uint64) error {
log.Infof("Downloading headers from %s", flow.peer)
if highestKnownSyncerChainHash.Equal(syncerHeaderSelectedTipHash) {
// No need to get syncer selected tip headers, so sync relay past and return
return flow.syncMissingRelayPast(consensus, syncerHeaderSelectedTipHash, relayBlockHash)
}
err := flow.sendRequestHeaders(highestKnownSyncerChainHash, syncerHeaderSelectedTipHash)
if err != nil {
return err
}
highestSharedBlockHeader, err := consensus.GetBlockHeader(highestKnownSyncerChainHash)
if err != nil {
return err
}
progressReporter := newIBDProgressReporter(highestSharedBlockHeader.DAAScore(), highBlockDAAScoreHint, "block headers")
// Keep a short queue of BlockHeadersMessages so that there's
// never a moment when the node is not validating and inserting
// headers
blockHeadersMessageChan := make(chan *appmessage.BlockHeadersMessage, 2)
errChan := make(chan error)
spawn("handleRelayInvsFlow-syncPruningPointFutureHeaders", func() {
for {
blockHeadersMessage, doneIBD, err := flow.receiveHeaders()
if err != nil {
errChan <- err
return
}
if doneIBD {
close(blockHeadersMessageChan)
return
}
if len(blockHeadersMessage.BlockHeaders) == 0 {
// The syncer should have sent a done message if the search completed, and not an empty list
errChan <- protocolerrors.Errorf(true, "Received an empty headers message from peer %s", flow.peer)
return
}
blockHeadersMessageChan <- blockHeadersMessage
err = flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestNextHeaders())
if err != nil {
errChan <- err
return
}
}
})
for {
select {
case ibdBlocksMessage, ok := <-blockHeadersMessageChan:
if !ok {
return flow.syncMissingRelayPast(consensus, syncerHeaderSelectedTipHash, relayBlockHash)
}
for _, header := range ibdBlocksMessage.BlockHeaders {
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
}
lastReceivedHeader := ibdBlocksMessage.BlockHeaders[len(ibdBlocksMessage.BlockHeaders)-1]
progressReporter.reportProgress(len(ibdBlocksMessage.BlockHeaders), lastReceivedHeader.DAAScore)
case err := <-errChan:
return err
}
}
}
func (flow *handleIBDFlow) syncMissingRelayPast(consensus externalapi.Consensus, syncerHeaderSelectedTipHash *externalapi.DomainHash, relayBlockHash *externalapi.DomainHash) error {
// Finished downloading syncer selected tip blocks,
// check if we already have the triggering relayBlockHash
relayBlockInfo, err := consensus.GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
if !relayBlockInfo.Exists {
// Send a special header request for the selected tip anticone. This is expected to
// be a small set, as it is bounded by the size of virtual's mergeset.
err = flow.sendRequestAnticone(syncerHeaderSelectedTipHash, relayBlockHash)
if err != nil {
return err
}
anticoneHeadersMessage, anticoneDone, err := flow.receiveHeaders()
if err != nil {
return err
}
if anticoneDone {
return protocolerrors.Errorf(true,
"Expected one anticone header chunk for past(%s) cap anticone(%s) but got zero",
relayBlockHash, syncerHeaderSelectedTipHash)
}
_, anticoneDone, err = flow.receiveHeaders()
if err != nil {
return err
}
if !anticoneDone {
return protocolerrors.Errorf(true,
"Expected only one anticone header chunk for past(%s) cap anticone(%s)",
relayBlockHash, syncerHeaderSelectedTipHash)
}
for _, header := range anticoneHeadersMessage.BlockHeaders {
err = flow.processHeader(consensus, header)
if err != nil {
return err
}
}
}
// If the relayBlockHash has still not been received, the peer is misbehaving
relayBlockInfo, err = consensus.GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
if !relayBlockInfo.Exists {
return protocolerrors.Errorf(true, "did not receive "+
"relayBlockHash block %s from peer %s during block download", relayBlockHash, flow.peer)
}
return nil
}
func (flow *handleIBDFlow) sendRequestAnticone(
syncerHeaderSelectedTipHash, relayBlockHash *externalapi.DomainHash) error {
msgRequestAnticone := appmessage.NewMsgRequestAnticone(syncerHeaderSelectedTipHash, relayBlockHash)
return flow.outgoingRoute.Enqueue(msgRequestAnticone)
}
func (flow *handleIBDFlow) sendRequestHeaders(
highestKnownSyncerChainHash, syncerHeaderSelectedTipHash *externalapi.DomainHash) error {
msgRequestHeaders := appmessage.NewMsgRequstHeaders(highestKnownSyncerChainHash, syncerHeaderSelectedTipHash)
return flow.outgoingRoute.Enqueue(msgRequestHeaders)
}
func (flow *handleIBDFlow) receiveHeaders() (msgIBDBlock *appmessage.BlockHeadersMessage, doneHeaders bool, err error) {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch message := message.(type) {
case *appmessage.BlockHeadersMessage:
return message, false, nil
case *appmessage.MsgDoneHeaders:
return nil, true, nil
default:
return nil, false,
protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s, got: %s",
appmessage.CmdBlockHeaders,
appmessage.CmdDoneHeaders,
message.Command())
}
}
func (flow *handleIBDFlow) processHeader(consensus externalapi.Consensus, msgBlockHeader *appmessage.MsgBlockHeader) error {
header := appmessage.BlockHeaderToDomainBlockHeader(msgBlockHeader)
block := &externalapi.DomainBlock{
Header: header,
Transactions: nil,
}
blockHash := consensushashing.BlockHash(block)
blockInfo, err := consensus.GetBlockInfo(blockHash)
if err != nil {
return err
}
if blockInfo.Exists {
log.Debugf("Block header %s is already in the DAG. Skipping...", blockHash)
return nil
}
err = consensus.ValidateAndInsertBlock(block, false)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
return errors.Wrapf(err, "failed to process header %s during IBD", blockHash)
}
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping block header %s as it is a duplicate", blockHash)
} else {
log.Infof("Rejected block header %s from %s during IBD: %s", blockHash, flow.peer, err)
return protocolerrors.Wrapf(true, err, "got invalid block header %s during IBD", blockHash)
}
}
return nil
}
func (flow *handleIBDFlow) validatePruningPointFutureHeaderTimestamps() error {
headerSelectedTipHash, err := flow.Domain().StagingConsensus().GetHeadersSelectedTip()
if err != nil {
return err
}
headerSelectedTipHeader, err := flow.Domain().StagingConsensus().GetBlockHeader(headerSelectedTipHash)
if err != nil {
return err
}
headerSelectedTipTimestamp := headerSelectedTipHeader.TimeInMilliseconds()
currentSelectedTipHash, err := flow.Domain().Consensus().GetHeadersSelectedTip()
if err != nil {
return err
}
currentSelectedTipHeader, err := flow.Domain().Consensus().GetBlockHeader(currentSelectedTipHash)
if err != nil {
return err
}
currentSelectedTipTimestamp := currentSelectedTipHeader.TimeInMilliseconds()
if headerSelectedTipTimestamp < currentSelectedTipTimestamp {
return protocolerrors.Errorf(false, "the timestamp of the candidate selected "+
"tip is smaller than the current selected tip")
}
minTimestampDifferenceInMilliseconds := (10 * time.Minute).Milliseconds()
if headerSelectedTipTimestamp-currentSelectedTipTimestamp < minTimestampDifferenceInMilliseconds {
return protocolerrors.Errorf(false, "difference between the timestamps of "+
"the current pruning point and the candidate pruning point is too small. Aborting IBD...")
}
return nil
}
func (flow *handleIBDFlow) receiveAndInsertPruningPointUTXOSet(
consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (bool, error) {
onEnd := logger.LogAndMeasureExecutionTime(log, "receiveAndInsertPruningPointUTXOSet")
defer onEnd()
receivedChunkCount := 0
receivedUTXOCount := 0
for {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return false, err
}
switch message := message.(type) {
case *appmessage.MsgPruningPointUTXOSetChunk:
receivedUTXOCount += len(message.OutpointAndUTXOEntryPairs)
domainOutpointAndUTXOEntryPairs :=
appmessage.OutpointAndUTXOEntryPairsToDomainOutpointAndUTXOEntryPairs(message.OutpointAndUTXOEntryPairs)
err := consensus.AppendImportedPruningPointUTXOs(domainOutpointAndUTXOEntryPairs)
if err != nil {
return false, err
}
receivedChunkCount++
if receivedChunkCount%ibdBatchSize == 0 {
log.Infof("Received %d UTXO set chunks so far, totaling in %d UTXOs",
receivedChunkCount, receivedUTXOCount)
requestNextPruningPointUTXOSetChunkMessage := appmessage.NewMsgRequestNextPruningPointUTXOSetChunk()
err := flow.outgoingRoute.Enqueue(requestNextPruningPointUTXOSetChunkMessage)
if err != nil {
return false, err
}
}
case *appmessage.MsgDonePruningPointUTXOSetChunks:
log.Infof("Finished receiving the UTXO set. Total UTXOs: %d", receivedUTXOCount)
return true, nil
case *appmessage.MsgUnexpectedPruningPoint:
log.Infof("Could not receive the next UTXO chunk because the pruning point %s "+
"is no longer the pruning point of peer %s", pruningPointHash, flow.peer)
return false, nil
default:
return false, protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s or %s or %s, got: %s", appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdUnexpectedPruningPoint, message.Command(),
)
}
}
}
func (flow *handleIBDFlow) syncMissingBlockBodies(highHash *externalapi.DomainHash) error {
hashes, err := flow.Domain().Consensus().GetMissingBlockBodyHashes(highHash)
if err != nil {
return err
}
if len(hashes) == 0 {
// Blocks can be inserted inside the DAG during IBD if they were requested before IBD started.
// In rare cases, all the IBD blocks might already be inserted by the time we reach this point.
// In these cases - GetMissingBlockBodyHashes would return an empty array.
log.Debugf("No missing block body hashes found.")
return nil
}
lowBlockHeader, err := flow.Domain().Consensus().GetBlockHeader(hashes[0])
if err != nil {
return err
}
highBlockHeader, err := flow.Domain().Consensus().GetBlockHeader(hashes[len(hashes)-1])
if err != nil {
return err
}
progressReporter := newIBDProgressReporter(lowBlockHeader.DAAScore(), highBlockHeader.DAAScore(), "blocks")
highestProcessedDAAScore := lowBlockHeader.DAAScore()
// If the IBD is small, we want to update the virtual after each block in order to avoid complications and possible bugs.
updateVirtual, err := flow.Domain().Consensus().IsNearlySynced()
if err != nil {
return err
}
for offset := 0; offset < len(hashes); offset += ibdBatchSize {
var hashesToRequest []*externalapi.DomainHash
if offset+ibdBatchSize < len(hashes) {
hashesToRequest = hashes[offset : offset+ibdBatchSize]
} else {
hashesToRequest = hashes[offset:]
}
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestIBDBlocks(hashesToRequest))
if err != nil {
return err
}
for _, expectedHash := range hashesToRequest {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
msgIBDBlock, ok := message.(*appmessage.MsgIBDBlock)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdIBDBlock, message.Command())
}
block := appmessage.MsgBlockToDomainBlock(msgIBDBlock.MsgBlock)
blockHash := consensushashing.BlockHash(block)
if !expectedHash.Equal(blockHash) {
return protocolerrors.Errorf(true, "expected block %s but got %s", expectedHash, blockHash)
}
err = flow.banIfBlockIsHeaderOnly(block)
if err != nil {
return err
}
err = flow.Domain().Consensus().ValidateAndInsertBlock(block, updateVirtual)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Debugf("Skipping IBD Block %s as it has already been added to the DAG", blockHash)
continue
}
return protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "invalid block %s", blockHash)
}
err = flow.OnNewBlock(block)
if err != nil {
return err
}
highestProcessedDAAScore = block.Header.DAAScore()
}
progressReporter.reportProgress(len(hashesToRequest), highestProcessedDAAScore)
}
// We need to resolve virtual only if it wasn't updated while syncing block bodies
if !updateVirtual {
err := flow.resolveVirtual(highestProcessedDAAScore)
if err != nil {
return err
}
}
return flow.OnNewBlockTemplate()
}
func (flow *handleIBDFlow) banIfBlockIsHeaderOnly(block *externalapi.DomainBlock) error {
if len(block.Transactions) == 0 {
return protocolerrors.Errorf(true, "sent header of %s block where expected block with body",
consensushashing.BlockHash(block))
}
return nil
}
func (flow *handleIBDFlow) resolveVirtual(estimatedVirtualDAAScoreTarget uint64) error {
err := flow.Domain().Consensus().ResolveVirtual(func(virtualDAAScoreStart uint64, virtualDAAScore uint64) {
var percents int
// DAA scores are unsigned, so compare directly rather than subtracting, which could underflow
if estimatedVirtualDAAScoreTarget <= virtualDAAScoreStart {
percents = 100
} else {
percents = int(float64(virtualDAAScore-virtualDAAScoreStart) / float64(estimatedVirtualDAAScoreTarget-virtualDAAScoreStart) * 100)
}
if percents < 0 {
percents = 0
} else if percents > 100 {
percents = 100
}
log.Infof("Resolving virtual. Estimated progress: %d%%", percents)
})
if err != nil {
return err
}
log.Infof("Resolved virtual")
return nil
}
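
The negotiation loop above narrows the search window with repeated locator queries, roughly a binary search over the syncer's selected chain. A self-contained simulation of the zoom-in (illustrative only; the real flow also bounds zoom counts, validates locator bounds, and restarts on empty locators — here integer heights stand in for hashes):

```go
package main

import "fmt"

// locator samples chain[low..high] at exponentially growing gaps, high first.
func locator(chain []int, low, high int) []int {
	out := []int{chain[high]}
	step := 1
	for i := high - step; i > low; i -= step {
		out = append(out, chain[i])
		step *= 2
	}
	return append(out, chain[low])
}

func main() {
	chain := make([]int, 1000) // the syncer's selected chain, index == height
	for i := range chain {
		chain[i] = i
	}
	synceeKnows := func(h int) bool { return h <= 613 } // syncee's view of this chain

	low, high := 0, len(chain)-1
	for {
		loc := locator(chain, low, high)
		var lowestUnknown, highestKnown int
		for _, h := range loc { // scan from high to low
			if synceeKnows(h) {
				highestKnown = h
				break
			}
			lowestUnknown = h
		}
		fmt.Printf("locator of %d hashes; zooming into (%d, %d)\n",
			len(loc), highestKnown, lowestUnknown)
		if len(loc) == 2 || lowestUnknown-highestKnown <= 1 {
			// Found max(past(syncee) \cap chain(syncer))
			fmt.Println("highest known syncer chain block:", highestKnown)
			return
		}
		low, high = highestKnown, lowestUnknown // zoom in and repeat
	}
}
```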

View File

@ -0,0 +1,45 @@
package blockrelay
type ibdProgressReporter struct {
lowDAAScore uint64
highDAAScore uint64
objectName string
totalDAAScoreDifference uint64
lastReportedProgressPercent int
processed int
}
func newIBDProgressReporter(lowDAAScore uint64, highDAAScore uint64, objectName string) *ibdProgressReporter {
if highDAAScore <= lowDAAScore {
// Avoid a zero or negative diff
highDAAScore = lowDAAScore + 1
}
return &ibdProgressReporter{
lowDAAScore: lowDAAScore,
highDAAScore: highDAAScore,
objectName: objectName,
totalDAAScoreDifference: highDAAScore - lowDAAScore,
lastReportedProgressPercent: 0,
processed: 0,
}
}
func (ipr *ibdProgressReporter) reportProgress(processedDelta int, highestProcessedDAAScore uint64) {
ipr.processed += processedDelta
// Avoid exploding numbers in the percentage report, since the original `highDAAScore` might have been only a hint
if highestProcessedDAAScore > ipr.highDAAScore {
ipr.highDAAScore = highestProcessedDAAScore + 1 // + 1 for keeping it at 99%
ipr.totalDAAScoreDifference = ipr.highDAAScore - ipr.lowDAAScore
}
relativeDAAScore := uint64(0)
if highestProcessedDAAScore > ipr.lowDAAScore {
// Avoid a negative diff
relativeDAAScore = highestProcessedDAAScore - ipr.lowDAAScore
}
progressPercent := int((float64(relativeDAAScore) / float64(ipr.totalDAAScoreDifference)) * 100)
if progressPercent > ipr.lastReportedProgressPercent {
log.Infof("IBD: Processed %d %s (%d%%)", ipr.processed, ipr.objectName, progressPercent)
ipr.lastReportedProgressPercent = progressPercent
}
}
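
For illustration, a usage sketch of the reporter above (hypothetical scores, same package): the high DAA score is only a hint, and an overshooting processed score re-stretches the range so the report stays at 99% until completion:

```go
package blockrelay

// illustrateProgressReporting is a non-test sketch of the clamping behavior.
func illustrateProgressReporting() {
	reporter := newIBDProgressReporter(1000, 2000, "blocks")
	reporter.reportProgress(100, 1500) // logs: IBD: Processed 100 blocks (50%)
	reporter.reportProgress(100, 2500) // overshoot: high re-stretches to 2501, logs 99%
}
```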

View File

@ -9,20 +9,23 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/pkg/errors"
"time"
)
func (flow *handleRelayInvsFlow) ibdWithHeadersProof(highHash *externalapi.DomainHash) error {
err := flow.Domain().InitStagingConsensus()
func (flow *handleIBDFlow) ibdWithHeadersProof(
syncerHeaderSelectedTipHash, relayBlockHash *externalapi.DomainHash, highBlockDAAScore uint64) error {
err := flow.Domain().InitStagingConsensusWithoutGenesis()
if err != nil {
return err
}
err = flow.downloadHeadersAndPruningUTXOSet(highHash)
err = flow.downloadHeadersAndPruningUTXOSet(syncerHeaderSelectedTipHash, relayBlockHash, highBlockDAAScore)
if err != nil {
if !flow.IsRecoverableError(err) {
return err
}
log.Infof("IBD with pruning proof from %s was unsuccessful. Deleting the staging consensus. (%s)", flow.peer, err)
deleteStagingConsensusErr := flow.Domain().DeleteStagingConsensus()
if deleteStagingConsensusErr != nil {
return deleteStagingConsensusErr
@ -31,19 +34,49 @@ func (flow *handleRelayInvsFlow) ibdWithHeadersProof(highHash *externalapi.Domai
return err
}
log.Infof("Header download stage of IBD with pruning proof completed successfully from %s. "+
"Committing the staging consensus and deleting the previous obsolete one if such exists.", flow.peer)
err = flow.Domain().CommitStagingConsensus()
if err != nil {
return err
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return err
}
return nil
}
func (flow *handleRelayInvsFlow) shouldSyncAndShouldDownloadHeadersProof(highBlock *externalapi.DomainBlock,
highestSharedBlockFound bool) (shouldDownload, shouldSync bool, err error) {
func (flow *handleIBDFlow) shouldSyncAndShouldDownloadHeadersProof(
relayBlock *externalapi.DomainBlock,
highestKnownSyncerChainHash *externalapi.DomainHash) (shouldDownload, shouldSync bool, err error) {
if !highestSharedBlockFound {
hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock)
var highestSharedBlockFound, isPruningPointInSharedBlockChain bool
if highestKnownSyncerChainHash != nil {
blockInfo, err := flow.Domain().Consensus().GetBlockInfo(highestKnownSyncerChainHash)
if err != nil {
return false, false, err
}
highestSharedBlockFound = blockInfo.HasBody()
pruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return false, false, err
}
isPruningPointInSharedBlockChain, err = flow.Domain().Consensus().IsInSelectedParentChainOf(
pruningPoint, highestKnownSyncerChainHash)
if err != nil {
return false, false, err
}
}
// Note: in the case where `highestSharedBlockFound == true && isPruningPointInSharedBlockChain == false`
// we might have info that is relevant to finality conflict decisions. This should be taken into
// account when we improve this aspect.
if !highestSharedBlockFound || !isPruningPointInSharedBlockChain {
hasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore, err := flow.checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(relayBlock)
if err != nil {
return false, false, err
}
@ -52,37 +85,42 @@ func (flow *handleRelayInvsFlow) shouldSyncAndShouldDownloadHeadersProof(highBlo
return true, true, nil
}
if highestKnownSyncerChainHash == nil {
log.Infof("Stopping IBD since IBD from this node will cause a finality conflict")
return false, false, nil
}
return false, true, nil
}
return false, true, nil
}
func (flow *handleRelayInvsFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(highBlock *externalapi.DomainBlock) (bool, error) {
headersSelectedTip, err := flow.Domain().Consensus().GetHeadersSelectedTip()
func (flow *handleIBDFlow) checkIfHighHashHasMoreBlueWorkThanSelectedTipAndPruningDepthMoreBlueScore(relayBlock *externalapi.DomainBlock) (bool, error) {
virtualSelectedParent, err := flow.Domain().Consensus().GetVirtualSelectedParent()
if err != nil {
return false, err
}
headersSelectedTipInfo, err := flow.Domain().Consensus().GetBlockInfo(headersSelectedTip)
virtualSelectedTipInfo, err := flow.Domain().Consensus().GetBlockInfo(virtualSelectedParent)
if err != nil {
return false, err
}
if highBlock.Header.BlueScore() < headersSelectedTipInfo.BlueScore+flow.Config().NetParams().PruningDepth() {
if relayBlock.Header.BlueScore() < virtualSelectedTipInfo.BlueScore+flow.Config().NetParams().PruningDepth() {
return false, nil
}
return highBlock.Header.BlueWork().Cmp(headersSelectedTipInfo.BlueWork) > 0, nil
return relayBlock.Header.BlueWork().Cmp(virtualSelectedTipInfo.BlueWork) > 0, nil
}
func (flow *handleRelayInvsFlow) syncAndValidatePruningPointProof() (*externalapi.DomainHash, error) {
func (flow *handleIBDFlow) syncAndValidatePruningPointProof() (*externalapi.DomainHash, error) {
log.Infof("Downloading the pruning point proof from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointProof())
if err != nil {
return nil, err
}
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
message, err := flow.incomingRoute.DequeueWithTimeout(10 * time.Minute)
if err != nil {
return nil, err
}
@ -108,7 +146,10 @@ func (flow *handleRelayInvsFlow) syncAndValidatePruningPointProof() (*externalap
return consensushashing.HeaderHash(pruningPointProof.Headers[0][len(pruningPointProof.Headers[0])-1]), nil
}
func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(highHash *externalapi.DomainHash) error {
func (flow *handleIBDFlow) downloadHeadersAndPruningUTXOSet(
syncerHeaderSelectedTipHash, relayBlockHash *externalapi.DomainHash,
highBlockDAAScore uint64) error {
proofPruningPoint, err := flow.syncAndValidatePruningPointProof()
if err != nil {
return err
@ -125,19 +166,20 @@ func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(highHash *exte
return protocolerrors.Errorf(true, "the genesis pruning point violates finality")
}
err = flow.syncPruningPointFutureHeaders(flow.Domain().StagingConsensus(), proofPruningPoint, highHash)
err = flow.syncPruningPointFutureHeaders(flow.Domain().StagingConsensus(),
syncerHeaderSelectedTipHash, proofPruningPoint, relayBlockHash, highBlockDAAScore)
if err != nil {
return err
}
log.Debugf("Headers downloaded from peer %s", flow.peer)
log.Infof("Headers downloaded from peer %s", flow.peer)
highHashInfo, err := flow.Domain().StagingConsensus().GetBlockInfo(highHash)
relayBlockInfo, err := flow.Domain().StagingConsensus().GetBlockInfo(relayBlockHash)
if err != nil {
return err
}
if !highHashInfo.Exists {
if !relayBlockInfo.Exists {
return protocolerrors.Errorf(true, "the triggering IBD block was not sent")
}
@ -159,7 +201,7 @@ func (flow *handleRelayInvsFlow) downloadHeadersAndPruningUTXOSet(highHash *exte
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointsAndPruningPointAnticone(proofPruningPoint *externalapi.DomainHash) error {
func (flow *handleIBDFlow) syncPruningPointsAndPruningPointAnticone(proofPruningPoint *externalapi.DomainHash) error {
log.Infof("Downloading the past pruning points and the pruning point anticone from %s", flow.peer)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestPruningPointAndItsAnticone())
if err != nil {
@ -171,6 +213,17 @@ func (flow *handleRelayInvsFlow) syncPruningPointsAndPruningPointAnticone(proofP
return err
}
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return err
}
msgTrustedData, ok := message.(*appmessage.MsgTrustedData)
if !ok {
return protocolerrors.Errorf(true, "received unexpected message type. "+
"expected: %s, got: %s", appmessage.CmdTrustedData, message.Command())
}
pruningPointWithMetaData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
@ -184,12 +237,13 @@ func (flow *handleRelayInvsFlow) syncPruningPointsAndPruningPointAnticone(proofP
return protocolerrors.Errorf(true, "first block with trusted data is not the pruning point")
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), pruningPointWithMetaData)
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), pruningPointWithMetaData, msgTrustedData)
if err != nil {
return err
}
for {
i := 0
for ; ; i++ {
blockWithTrustedData, done, err := flow.receiveBlockWithTrustedData()
if err != nil {
return err
@ -199,31 +253,61 @@ func (flow *handleRelayInvsFlow) syncPruningPointsAndPruningPointAnticone(proofP
break
}
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), blockWithTrustedData)
err = flow.processBlockWithTrustedData(flow.Domain().StagingConsensus(), blockWithTrustedData, msgTrustedData)
if err != nil {
return err
}
// We use i+2 rather than i+1 because we want to check whether the next block belongs to the next
// batch, and the pruning point itself was already downloaded outside this loop.
if (i+2)%ibdBatchSize == 0 {
log.Infof("Downloaded %d blocks from the pruning point anticone", i+1)
err := flow.outgoingRoute.Enqueue(appmessage.NewMsgRequestNextPruningPointAndItsAnticoneBlocks())
if err != nil {
return err
}
}
}
log.Infof("Finished downloading pruning point and its anticone from %s", flow.peer)
log.Infof("Finished downloading pruning point and its anticone from %s. Total blocks downloaded: %d", flow.peer, i+1)
return nil
}
func (flow *handleRelayInvsFlow) processBlockWithTrustedData(
consensus externalapi.Consensus, block *appmessage.MsgBlockWithTrustedData) error {
func (flow *handleIBDFlow) processBlockWithTrustedData(
consensus externalapi.Consensus, block *appmessage.MsgBlockWithTrustedDataV4, data *appmessage.MsgTrustedData) error {
_, err := consensus.ValidateAndInsertBlockWithTrustedData(appmessage.BlockWithTrustedDataToDomainBlockWithTrustedData(block), false)
blockWithTrustedData := &externalapi.BlockWithTrustedData{
Block: appmessage.MsgBlockToDomainBlock(block.Block),
DAAWindow: make([]*externalapi.TrustedDataDataDAAHeader, 0, len(block.DAAWindowIndices)),
GHOSTDAGData: make([]*externalapi.BlockGHOSTDAGDataHashPair, 0, len(block.GHOSTDAGDataIndices)),
}
for _, index := range block.DAAWindowIndices {
blockWithTrustedData.DAAWindow = append(blockWithTrustedData.DAAWindow, appmessage.TrustedDataDataDAABlockV4ToTrustedDataDataDAAHeader(data.DAAWindow[index]))
}
for _, index := range block.GHOSTDAGDataIndices {
blockWithTrustedData.GHOSTDAGData = append(blockWithTrustedData.GHOSTDAGData, appmessage.GHOSTDAGHashPairToDomainGHOSTDAGHashPair(data.GHOSTDAGData[index]))
}
err := consensus.ValidateAndInsertBlockWithTrustedData(blockWithTrustedData, false)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {
return protocolerrors.Wrapf(true, err, "failed validating block with trusted data")
}
return err
}
return nil
}
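The V4 messages deduplicate trusted data: each block carries indices into a shared MsgTrustedData pool rather than its own copies of DAA-window headers and GHOSTDAG pairs, and processBlockWithTrustedData dereferences them as shown above. A simplified sketch of that pattern, with stand-in types instead of the real appmessage structs:

package main

import "fmt"

// trustedPool stands in for the shared MsgTrustedData payload.
type trustedPool struct {
	daaWindow []string
}

// blockRefs stands in for a block's DAAWindowIndices.
type blockRefs struct {
	daaWindowIndices []int
}

// resolve rebuilds a block's DAA window by dereferencing shared pool entries.
func resolve(pool trustedPool, refs blockRefs) []string {
	window := make([]string, 0, len(refs.daaWindowIndices))
	for _, index := range refs.daaWindowIndices {
		window = append(window, pool.daaWindow[index])
	}
	return window
}

func main() {
	pool := trustedPool{daaWindow: []string{"h0", "h1", "h2"}}
	refs := blockRefs{daaWindowIndices: []int{2, 0}}
	fmt.Println(resolve(pool, refs)) // [h2 h0]
}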
func (flow *handleRelayInvsFlow) receiveBlockWithTrustedData() (*appmessage.MsgBlockWithTrustedData, bool, error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
func (flow *handleIBDFlow) receiveBlockWithTrustedData() (*appmessage.MsgBlockWithTrustedDataV4, bool, error) {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, false, err
}
switch downCastedMessage := message.(type) {
case *appmessage.MsgBlockWithTrustedData:
case *appmessage.MsgBlockWithTrustedDataV4:
return downCastedMessage, false, nil
case *appmessage.MsgDoneBlocksWithTrustedData:
return nil, true, nil
@ -237,8 +321,8 @@ func (flow *handleRelayInvsFlow) receiveBlockWithTrustedData() (*appmessage.MsgB
}
}
func (flow *handleRelayInvsFlow) receivePruningPoints() (*appmessage.MsgPruningPoints, error) {
message, err := flow.dequeueIncomingMessageAndSkipInvs(common.DefaultTimeout)
func (flow *handleIBDFlow) receivePruningPoints() (*appmessage.MsgPruningPoints, error) {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
return nil, err
}
@ -253,7 +337,7 @@ func (flow *handleRelayInvsFlow) receivePruningPoints() (*appmessage.MsgPruningP
return msgPruningPoints, nil
}
func (flow *handleRelayInvsFlow) validateAndInsertPruningPoints(proofPruningPoint *externalapi.DomainHash) error {
func (flow *handleIBDFlow) validateAndInsertPruningPoints(proofPruningPoint *externalapi.DomainHash) error {
currentPruningPoint, err := flow.Domain().Consensus().PruningPoint()
if err != nil {
return err
@ -297,7 +381,7 @@ func (flow *handleRelayInvsFlow) validateAndInsertPruningPoints(proofPruningPoin
return nil
}
func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet(consensus externalapi.Consensus,
func (flow *handleIBDFlow) syncPruningPointUTXOSet(consensus externalapi.Consensus,
pruningPoint *externalapi.DomainHash) (bool, error) {
log.Infof("Checking if the suggested pruning point %s is compatible to the node DAG", pruningPoint)
@ -313,6 +397,7 @@ func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet(consensus externalapi.C
log.Info("Fetching the pruning point UTXO set")
isSuccessful, err := flow.fetchMissingUTXOSet(consensus, pruningPoint)
if err != nil {
log.Infof("An error occurred while fetching the pruning point UTXO set. Stopping IBD. (%s)", err)
return false, err
}
@ -325,7 +410,7 @@ func (flow *handleRelayInvsFlow) syncPruningPointUTXOSet(consensus externalapi.C
return true, nil
}
func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (succeed bool, err error) {
func (flow *handleIBDFlow) fetchMissingUTXOSet(consensus externalapi.Consensus, pruningPointHash *externalapi.DomainHash) (succeed bool, err error) {
defer func() {
err := flow.Domain().StagingConsensus().ClearImportedPruningPointData()
if err != nil {
@ -355,10 +440,5 @@ func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(consensus externalapi.Conse
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return false, err
}
return true, nil
}

@ -2,6 +2,8 @@ package ping
import (
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/pkg/errors"
"time"
"github.com/kaspanet/kaspad/app/appmessage"
@ -61,6 +63,9 @@ func (flow *sendPingsFlow) start() error {
message, err := flow.incomingRoute.DequeueWithTimeout(common.DefaultTimeout)
if err != nil {
if errors.Is(err, router.ErrTimeout) {
return errors.Wrapf(flowcontext.ErrPingTimeout, err.Error())
}
return err
}
pongMessage := message.(*appmessage.MsgPong)
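The wrapping above turns a route timeout into the flowcontext.ErrPingTimeout sentinel while preserving the original message. A standard-library sketch of the same pattern (the real code uses github.com/pkg/errors, whose wrapped errors are likewise matchable with errors.Is):

package main

import (
	"errors"
	"fmt"
)

// errPingTimeout stands in for the flowcontext.ErrPingTimeout sentinel.
var errPingTimeout = errors.New("ping timed out")

func main() {
	cause := errors.New("got timeout after 2m") // stands in for router.ErrTimeout
	// Wrap the sentinel with the cause's text so callers can match it.
	wrapped := fmt.Errorf("%w: %s", errPingTimeout, cause)
	fmt.Println(errors.Is(wrapped, errPingTimeout)) // true
}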

@ -0,0 +1,209 @@
package v5
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/ping"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/rejects"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
type protocolManager interface {
RegisterFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc common.FlowInitializeFunc) *common.Flow
RegisterOneTimeFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand,
isStopping *uint32, stopChan chan error, initializeFunc common.FlowInitializeFunc) *common.Flow
RegisterFlowWithCapacity(name string, capacity int, router *routerpkg.Router,
messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc common.FlowInitializeFunc) *common.Flow
Context() *flowcontext.FlowContext
}
// Register registers all the protocol flows to the given router.
func Register(m protocolManager, router *routerpkg.Router, errChan chan error, isStopping *uint32) (flows []*common.Flow) {
flows = registerAddressFlows(m, router, isStopping, errChan)
flows = append(flows, registerBlockRelayFlows(m, router, isStopping, errChan)...)
flows = append(flows, registerPingFlows(m, router, isStopping, errChan)...)
flows = append(flows, registerTransactionRelayFlow(m, router, isStopping, errChan)...)
flows = append(flows, registerRejectsFlow(m, router, isStopping, errChan)...)
return flows
}
func registerAddressFlows(m protocolManager, router *routerpkg.Router, isStopping *uint32, errChan chan error) []*common.Flow {
outgoingRoute := router.OutgoingRoute()
return []*common.Flow{
m.RegisterFlow("SendAddresses", router, []appmessage.MessageCommand{appmessage.CmdRequestAddresses}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return addressexchange.SendAddresses(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterOneTimeFlow("ReceiveAddresses", router, []appmessage.MessageCommand{appmessage.CmdAddresses}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return addressexchange.ReceiveAddresses(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
}
}
func registerBlockRelayFlows(m protocolManager, router *routerpkg.Router, isStopping *uint32, errChan chan error) []*common.Flow {
outgoingRoute := router.OutgoingRoute()
return []*common.Flow{
m.RegisterOneTimeFlow("SendVirtualSelectedParentInv", router, []appmessage.MessageCommand{},
isStopping, errChan, func(route *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.SendVirtualSelectedParentInv(m.Context(), outgoingRoute, peer)
}),
m.RegisterFlow("HandleRelayInvs", router, []appmessage.MessageCommand{
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator,
},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayInvs(m.Context(), incomingRoute,
outgoingRoute, peer)
},
),
m.RegisterFlow("HandleIBD", router, []appmessage.MessageCommand{
appmessage.CmdDoneHeaders, appmessage.CmdUnexpectedPruningPoint, appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdBlockHeaders, appmessage.CmdIBDBlockLocatorHighestHash, appmessage.CmdBlockWithTrustedDataV4,
appmessage.CmdDoneBlocksWithTrustedData, appmessage.CmdIBDBlockLocatorHighestHashNotFound,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdIBDBlock, appmessage.CmdPruningPoints,
appmessage.CmdPruningPointProof,
appmessage.CmdTrustedData,
appmessage.CmdIBDChainBlockLocator,
},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBD(m.Context(), incomingRoute,
outgoingRoute, peer)
},
),
m.RegisterFlow("HandleRelayBlockRequests", router, []appmessage.MessageCommand{appmessage.CmdRequestRelayBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayBlockRequests(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
m.RegisterFlow("HandleRequestBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdRequestBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestBlockLocator(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterFlow("HandleRequestHeaders", router,
[]appmessage.MessageCommand{appmessage.CmdRequestHeaders, appmessage.CmdRequestNextHeaders}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestHeaders(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
m.RegisterFlow("HandleIBDBlockRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBDBlockRequests(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterFlow("HandleRequestPruningPointUTXOSet", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointUTXOSet,
appmessage.CmdRequestNextPruningPointUTXOSetChunk}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestPruningPointUTXOSet(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterFlow("HandlePruningPointAndItsAnticoneRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointAndItsAnticone, appmessage.CmdRequestNextPruningPointAndItsAnticoneBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointAndItsAnticoneRequests(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
m.RegisterFlow("HandleIBDBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdIBDBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBDBlockLocator(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
m.RegisterFlow("HandleRequestIBDChainBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDChainBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestIBDChainBlockLocator(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterFlow("HandleRequestAnticone", router,
[]appmessage.MessageCommand{appmessage.CmdRequestAnticone}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestAnticone(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
m.RegisterFlow("HandlePruningPointProofRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointProof}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointProofRequests(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
}
}
func registerPingFlows(m protocolManager, router *routerpkg.Router, isStopping *uint32, errChan chan error) []*common.Flow {
outgoingRoute := router.OutgoingRoute()
return []*common.Flow{
m.RegisterFlow("ReceivePings", router, []appmessage.MessageCommand{appmessage.CmdPing}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ping.ReceivePings(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterFlow("SendPings", router, []appmessage.MessageCommand{appmessage.CmdPong}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ping.SendPings(m.Context(), incomingRoute, outgoingRoute, peer)
},
),
}
}
func registerTransactionRelayFlow(m protocolManager, router *routerpkg.Router, isStopping *uint32, errChan chan error) []*common.Flow {
outgoingRoute := router.OutgoingRoute()
return []*common.Flow{
m.RegisterFlowWithCapacity("HandleRelayedTransactions", 10_000, router,
[]appmessage.MessageCommand{appmessage.CmdInvTransaction, appmessage.CmdTx, appmessage.CmdTransactionNotFound}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return transactionrelay.HandleRelayedTransactions(m.Context(), incomingRoute, outgoingRoute)
},
),
m.RegisterFlow("HandleRequestTransactions", router,
[]appmessage.MessageCommand{appmessage.CmdRequestTransactions}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return transactionrelay.HandleRequestedTransactions(m.Context(), incomingRoute, outgoingRoute)
},
),
}
}
func registerRejectsFlow(m protocolManager, router *routerpkg.Router, isStopping *uint32, errChan chan error) []*common.Flow {
outgoingRoute := router.OutgoingRoute()
return []*common.Flow{
m.RegisterFlow("HandleRejects", router,
[]appmessage.MessageCommand{appmessage.CmdReject}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return rejects.HandleRejects(m.Context(), incomingRoute, outgoingRoute)
},
),
}
}

@ -1,11 +1,11 @@
package testing
import (
"github.com/kaspanet/kaspad/app/protocol/flows/v5/addressexchange"
"testing"
"time"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"

@ -3,6 +3,7 @@ package transactionrelay
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
@ -18,9 +19,10 @@ import (
type TransactionsRelayContext interface {
NetAdapter() *netadapter.NetAdapter
Domain() domain.Domain
SharedRequestedTransactions() *SharedRequestedTransactions
SharedRequestedTransactions() *flowcontext.SharedRequestedTransactions
OnTransactionAddedToMempool()
EnqueueTransactionIDsForPropagation(transactionIDs []*externalapi.DomainTransactionID) error
IsNearlySynced() (bool, error)
}
type handleRelayedTransactionsFlow struct {
@ -48,6 +50,15 @@ func (flow *handleRelayedTransactionsFlow) start() error {
return err
}
isNearlySynced, err := flow.IsNearlySynced()
if err != nil {
return err
}
// Transaction relay is disabled if the node is out of sync and thus not mining
if !isNearlySynced {
continue
}
requestedIDs, err := flow.requestInvTransactions(inv)
if err != nil {
return err
@ -68,7 +79,7 @@ func (flow *handleRelayedTransactionsFlow) requestInvTransactions(
if flow.isKnownTransaction(txID) {
continue
}
exists := flow.SharedRequestedTransactions().addIfNotExists(txID)
exists := flow.SharedRequestedTransactions().AddIfNotExists(txID)
if exists {
continue
}
@ -82,7 +93,7 @@ func (flow *handleRelayedTransactionsFlow) requestInvTransactions(
msgGetTransactions := appmessage.NewMsgRequestTransactions(idsToRequest)
err = flow.outgoingRoute.Enqueue(msgGetTransactions)
if err != nil {
flow.SharedRequestedTransactions().removeMany(idsToRequest)
flow.SharedRequestedTransactions().RemoveMany(idsToRequest)
return nil, err
}
return idsToRequest, nil
@ -91,7 +102,7 @@ func (flow *handleRelayedTransactionsFlow) requestInvTransactions(
func (flow *handleRelayedTransactionsFlow) isKnownTransaction(txID *externalapi.DomainTransactionID) bool {
// Ask the transaction memory pool if the transaction is known
// to it in any form (main pool or orphan).
if _, ok := flow.Domain().MiningManager().GetTransaction(txID); ok {
if _, _, ok := flow.Domain().MiningManager().GetTransaction(txID, true, true); ok {
return true
}
@ -151,7 +162,7 @@ func (flow *handleRelayedTransactionsFlow) readMsgTxOrNotFound() (
func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransactions []*externalapi.DomainTransactionID) error {
// In case the function returns earlier than expected, we want to make sure sharedRequestedTransactions is
// cleared of any pending transactions.
defer flow.SharedRequestedTransactions().removeMany(requestedTransactions)
defer flow.SharedRequestedTransactions().RemoveMany(requestedTransactions)
for _, expectedID := range requestedTransactions {
msgTx, msgTxNotFound, err := flow.readMsgTxOrNotFound()
if err != nil {

@ -2,10 +2,11 @@ package transactionrelay_test
import (
"errors"
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/transactionrelay"
"strings"
"testing"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
@ -24,7 +25,7 @@ import (
type mocTransactionsRelayContext struct {
netAdapter *netadapter.NetAdapter
domain domain.Domain
sharedRequestedTransactions *transactionrelay.SharedRequestedTransactions
sharedRequestedTransactions *flowcontext.SharedRequestedTransactions
}
func (m *mocTransactionsRelayContext) NetAdapter() *netadapter.NetAdapter {
@ -35,7 +36,7 @@ func (m *mocTransactionsRelayContext) Domain() domain.Domain {
return m.domain
}
func (m *mocTransactionsRelayContext) SharedRequestedTransactions() *transactionrelay.SharedRequestedTransactions {
func (m *mocTransactionsRelayContext) SharedRequestedTransactions() *flowcontext.SharedRequestedTransactions {
return m.sharedRequestedTransactions
}
@ -46,6 +47,10 @@ func (m *mocTransactionsRelayContext) EnqueueTransactionIDsForPropagation(transa
func (m *mocTransactionsRelayContext) OnTransactionAddedToMempool() {
}
func (m *mocTransactionsRelayContext) IsNearlySynced() (bool, error) {
return true, nil
}
// TestHandleRelayedTransactionsNotFound tests the flow of HandleRelayedTransactions when the peer doesn't
// have the requested transactions in the mempool.
func TestHandleRelayedTransactionsNotFound(t *testing.T) {
@ -60,7 +65,7 @@ func TestHandleRelayedTransactionsNotFound(t *testing.T) {
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
sharedRequestedTransactions := flowcontext.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to create a NetAdapter: %v", err)
@ -153,7 +158,7 @@ func TestOnClosedIncomingRoute(t *testing.T) {
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
sharedRequestedTransactions := flowcontext.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to creat a NetAdapter : %v", err)

@ -30,7 +30,7 @@ func (flow *handleRequestedTransactionsFlow) start() error {
}
for _, transactionID := range msgRequestTransactions.IDs {
tx, ok := flow.Domain().MiningManager().GetTransaction(transactionID)
tx, _, ok := flow.Domain().MiningManager().GetTransaction(transactionID, true, false)
if !ok {
msgTransactionNotFound := appmessage.NewMsgTransactionNotFound(transactionID)
@ -40,7 +40,6 @@ func (flow *handleRequestedTransactionsFlow) start() error {
}
continue
}
err := flow.outgoingRoute.Enqueue(appmessage.DomainTransactionToMsgTx(tx))
if err != nil {
return err

@ -1,10 +1,11 @@
package transactionrelay_test
import (
"github.com/kaspanet/kaspad/app/protocol/flowcontext"
"github.com/kaspanet/kaspad/app/protocol/flows/v5/transactionrelay"
"testing"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
"github.com/kaspanet/kaspad/domain"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
@ -31,7 +32,7 @@ func TestHandleRequestedTransactionsNotFound(t *testing.T) {
}
defer teardown(false)
sharedRequestedTransactions := transactionrelay.NewSharedRequestedTransactions()
sharedRequestedTransactions := flowcontext.NewSharedRequestedTransactions()
adapter, err := netadapter.NewNetAdapter(config.DefaultConfig())
if err != nil {
t.Fatalf("Failed to create a NetAdapter: %v", err)

@ -5,6 +5,8 @@ import (
"sync"
"sync/atomic"
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain"
@ -71,11 +73,16 @@ func (m *Manager) AddBlock(block *externalapi.DomainBlock) error {
return m.context.AddBlock(block)
}
func (m *Manager) runFlows(flows []*flow, peer *peerpkg.Peer, errChan <-chan error, flowsWaitGroup *sync.WaitGroup) error {
// Context returns the manager's flow context
func (m *Manager) Context() *flowcontext.FlowContext {
return m.context
}
func (m *Manager) runFlows(flows []*common.Flow, peer *peerpkg.Peer, errChan <-chan error, flowsWaitGroup *sync.WaitGroup) error {
flowsWaitGroup.Add(len(flows))
for _, flow := range flows {
executeFunc := flow.executeFunc // extract to new variable so that it's not overwritten
spawn(fmt.Sprintf("flow-%s", flow.name), func() {
executeFunc := flow.ExecuteFunc // extract to new variable so that it's not overwritten
spawn(fmt.Sprintf("flow-%s", flow.Name), func() {
executeFunc(peer)
flowsWaitGroup.Done()
})
@ -84,9 +91,9 @@ func (m *Manager) runFlows(flows []*flow, peer *peerpkg.Peer, errChan <-chan err
return <-errChan
}
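The "extract to new variable" comment in runFlows guards against the classic Go loop-variable capture pitfall (fixed at the language level only in Go 1.22): without the copy, every spawned goroutine may observe the final iteration's value. A minimal sketch:

package main

import (
	"fmt"
	"sync"
)

func main() {
	names := []string{"flow-a", "flow-b", "flow-c"}
	var wg sync.WaitGroup
	for _, name := range names {
		name := name // copy; without it (pre-Go 1.22) all goroutines may print "flow-c"
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(name)
		}()
	}
	wg.Wait()
}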
// SetOnBlockAddedToDAGHandler sets the onBlockAddedToDAG handler
func (m *Manager) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler flowcontext.OnBlockAddedToDAGHandler) {
m.context.SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler)
// SetOnNewBlockTemplateHandler sets the onNewBlockTemplate handler
func (m *Manager) SetOnNewBlockTemplateHandler(onNewBlockTemplateHandler flowcontext.OnNewBlockTemplateHandler) {
m.context.SetOnNewBlockTemplateHandler(onNewBlockTemplateHandler)
}
// SetOnPruningPointUTXOSetOverrideHandler sets the OnPruningPointUTXOSetOverride handler
@ -99,12 +106,6 @@ func (m *Manager) SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMemp
m.context.SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler)
}
// ShouldMine returns whether it's ok to use block template from this node
// for mining purposes.
func (m *Manager) ShouldMine() (bool, error) {
return m.context.ShouldMine()
}
// IsIBDRunning returns true if IBD is currently marked as running
func (m *Manager) IsIBDRunning() bool {
return m.context.IsIBDRunning()

@ -13,10 +13,6 @@ import (
"github.com/kaspanet/kaspad/util/mstime"
)
// maxProtocolVersion version is the maximum supported protocol
// version this kaspad node supports
const maxProtocolVersion = 1
// Peer holds data about a peer.
type Peer struct {
connection *netadapter.NetConnection
@ -35,6 +31,8 @@ type Peer struct {
lastPingNonce uint64 // The nonce of the last ping we sent
lastPingTime time.Time // Time we sent last ping
lastPingDuration time.Duration // Time for last ping to return
ibdRequestChannel chan *externalapi.DomainBlock // A channel used to communicate IBD requests between flows
}
// New returns a new Peer
@ -42,6 +40,7 @@ func New(connection *netadapter.NetConnection) *Peer {
return &Peer{
connection: connection,
connectionStarted: time.Now(),
ibdRequestChannel: make(chan *externalapi.DomainBlock),
}
}
@ -76,6 +75,11 @@ func (p *Peer) AdvertisedProtocolVersion() uint32 {
return p.advertisedProtocolVerion
}
// ProtocolVersion returns the protocol version which is used when communicating with the peer.
func (p *Peer) ProtocolVersion() uint32 {
return p.protocolVersion
}
// TimeConnected returns the time since the connection to this peer was started.
func (p *Peer) TimeConnected() time.Duration {
return time.Since(p.connectionStarted)
@ -87,7 +91,7 @@ func (p *Peer) IsOutbound() bool {
}
// UpdateFieldsFromMsgVersion updates the peer with the data from the version message.
func (p *Peer) UpdateFieldsFromMsgVersion(msg *appmessage.MsgVersion) {
func (p *Peer) UpdateFieldsFromMsgVersion(msg *appmessage.MsgVersion, maxProtocolVersion uint32) {
// Negotiate the protocol version.
p.advertisedProtocolVerion = msg.ProtocolVersion
p.protocolVersion = mathUtil.MinUint32(maxProtocolVersion, p.advertisedProtocolVerion)
@ -142,3 +146,8 @@ func (p *Peer) LastPingDuration() time.Duration {
return p.lastPingDuration
}
// IBDRequestChannel returns the channel used to communicate an IBD request between peer flows
func (p *Peer) IBDRequestChannel() chan *externalapi.DomainBlock {
return p.ibdRequestChannel
}
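ibdRequestChannel is created unbuffered, so sending a block through it is a rendezvous: the sender blocks until the IBD flow is actually receiving. A sketch of that handoff with a stand-in Block type (not the actual flow code):

package main

import "fmt"

// Block stands in for *externalapi.DomainBlock.
type Block struct{ name string }

func main() {
	requests := make(chan *Block) // unbuffered, like ibdRequestChannel
	done := make(chan struct{})

	// IBD-side flow: waits for a trigger from another flow.
	go func() {
		block := <-requests
		fmt.Println("starting IBD for", block.name)
		close(done)
	}()

	// Relay-side flow: the send completes only when the IBD side receives.
	requests <- &Block{name: "relay-block"}
	<-done
}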

@ -1,16 +1,14 @@
package protocol
import (
"github.com/kaspanet/kaspad/app/protocol/common"
"github.com/kaspanet/kaspad/app/protocol/flows/ready"
"github.com/kaspanet/kaspad/app/protocol/flows/v5"
"sync"
"sync/atomic"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
"github.com/kaspanet/kaspad/app/protocol/flows/blockrelay"
"github.com/kaspanet/kaspad/app/protocol/flows/handshake"
"github.com/kaspanet/kaspad/app/protocol/flows/ping"
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"github.com/kaspanet/kaspad/app/protocol/flows/transactionrelay"
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
@ -20,23 +18,14 @@ import (
"github.com/pkg/errors"
)
type flowInitializeFunc func(route *routerpkg.Route, peer *peerpkg.Peer) error
type flowExecuteFunc func(peer *peerpkg.Peer)
type flow struct {
name string
executeFunc flowExecuteFunc
}
func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *netadapter.NetConnection) {
// isStopping flag is raised the moment that the connection associated with this router is disconnected
// errChan is used by the flow goroutines to return to runFlows when an error occurs.
// They are both initialized here and passed to register flows.
isStopping := uint32(0)
errChan := make(chan error)
errChan := make(chan error, 1)
flows := m.registerFlows(router, errChan, &isStopping)
receiveVersionRoute, sendVersionRoute := registerHandshakeRoutes(router)
receiveVersionRoute, sendVersionRoute, receiveReadyRoute := registerHandshakeRoutes(router)
// After flows were registered - spawn a new thread that will wait for connection to finish initializing
// and start receiving messages
@ -84,6 +73,21 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
}
defer m.context.RemoveFromPeers(peer)
var flows []*common.Flow
log.Infof("Registering p2p flows for peer %s for protocol version %d", peer, peer.ProtocolVersion())
switch peer.ProtocolVersion() {
case 5:
flows = v5.Register(m, router, errChan, &isStopping)
default:
panic(errors.Errorf("no way to handle protocol version %d", peer.ProtocolVersion()))
}
err = ready.HandleReady(receiveReadyRoute, router.OutgoingRoute(), peer)
if err != nil {
m.handleError(err, netConnection, router.OutgoingRoute())
return
}
removeHandshakeRoutes(router)
flowsWaitGroup := &sync.WaitGroup{}
@ -102,7 +106,7 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection, outgoingRoute *routerpkg.Route) {
if protocolErr := (protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if !m.context.Config().DisableBanning && protocolErr.ShouldBan {
if m.context.Config().EnableBanning && protocolErr.ShouldBan {
log.Warnf("Banning %s (reason: %s)", netConnection, protocolErr.Cause)
err := m.context.ConnectionManager().Ban(netConnection)
@ -130,167 +134,9 @@ func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection
panic(err)
}
func (m *Manager) registerFlows(router *routerpkg.Router, errChan chan error, isStopping *uint32) (flows []*flow) {
flows = m.registerAddressFlows(router, isStopping, errChan)
flows = append(flows, m.registerBlockRelayFlows(router, isStopping, errChan)...)
flows = append(flows, m.registerPingFlows(router, isStopping, errChan)...)
flows = append(flows, m.registerTransactionRelayFlow(router, isStopping, errChan)...)
flows = append(flows, m.registerRejectsFlow(router, isStopping, errChan)...)
return flows
}
func (m *Manager) registerAddressFlows(router *routerpkg.Router, isStopping *uint32, errChan chan error) []*flow {
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerFlow("SendAddresses", router, []appmessage.MessageCommand{appmessage.CmdRequestAddresses}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return addressexchange.SendAddresses(m.context, incomingRoute, outgoingRoute)
},
),
m.registerOneTimeFlow("ReceiveAddresses", router, []appmessage.MessageCommand{appmessage.CmdAddresses}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return addressexchange.ReceiveAddresses(m.context, incomingRoute, outgoingRoute, peer)
},
),
}
}
func (m *Manager) registerBlockRelayFlows(router *routerpkg.Router, isStopping *uint32, errChan chan error) []*flow {
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerOneTimeFlow("SendVirtualSelectedParentInv", router, []appmessage.MessageCommand{},
isStopping, errChan, func(route *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.SendVirtualSelectedParentInv(m.context, outgoingRoute, peer)
}),
m.registerFlow("HandleRelayInvs", router, []appmessage.MessageCommand{
appmessage.CmdInvRelayBlock, appmessage.CmdBlock, appmessage.CmdBlockLocator,
appmessage.CmdDoneHeaders, appmessage.CmdUnexpectedPruningPoint, appmessage.CmdPruningPointUTXOSetChunk,
appmessage.CmdBlockHeaders, appmessage.CmdIBDBlockLocatorHighestHash, appmessage.CmdBlockWithTrustedData,
appmessage.CmdDoneBlocksWithTrustedData, appmessage.CmdIBDBlockLocatorHighestHashNotFound,
appmessage.CmdDonePruningPointUTXOSetChunks, appmessage.CmdIBDBlock, appmessage.CmdPruningPoints,
appmessage.CmdPruningPointProof,
},
isStopping, errChan, func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayInvs(m.context, incomingRoute,
outgoingRoute, peer)
},
),
m.registerFlow("HandleRelayBlockRequests", router, []appmessage.MessageCommand{appmessage.CmdRequestRelayBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRelayBlockRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandleRequestBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdRequestBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestBlockLocator(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestHeaders", router,
[]appmessage.MessageCommand{appmessage.CmdRequestHeaders, appmessage.CmdRequestNextHeaders}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestHeaders(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandleIBDBlockRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestIBDBlocks}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBDBlockRequests(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestPruningPointUTXOSet", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointUTXOSet,
appmessage.CmdRequestNextPruningPointUTXOSetChunk}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleRequestPruningPointUTXOSet(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandlePruningPointAndItsAnticoneRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointAndItsAnticone}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointAndItsAnticoneRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandleIBDBlockLocator", router,
[]appmessage.MessageCommand{appmessage.CmdIBDBlockLocator}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandleIBDBlockLocator(m.context, incomingRoute, outgoingRoute, peer)
},
),
m.registerFlow("HandlePruningPointProofRequests", router,
[]appmessage.MessageCommand{appmessage.CmdRequestPruningPointProof}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return blockrelay.HandlePruningPointProofRequests(m.context, incomingRoute, outgoingRoute, peer)
},
),
}
}
func (m *Manager) registerPingFlows(router *routerpkg.Router, isStopping *uint32, errChan chan error) []*flow {
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerFlow("ReceivePings", router, []appmessage.MessageCommand{appmessage.CmdPing}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ping.ReceivePings(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("SendPings", router, []appmessage.MessageCommand{appmessage.CmdPong}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return ping.SendPings(m.context, incomingRoute, outgoingRoute, peer)
},
),
}
}
func (m *Manager) registerTransactionRelayFlow(router *routerpkg.Router, isStopping *uint32, errChan chan error) []*flow {
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerFlowWithCapacity("HandleRelayedTransactions", 10_000, router,
[]appmessage.MessageCommand{appmessage.CmdInvTransaction, appmessage.CmdTx, appmessage.CmdTransactionNotFound}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return transactionrelay.HandleRelayedTransactions(m.context, incomingRoute, outgoingRoute)
},
),
m.registerFlow("HandleRequestTransactions", router,
[]appmessage.MessageCommand{appmessage.CmdRequestTransactions}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return transactionrelay.HandleRequestedTransactions(m.context, incomingRoute, outgoingRoute)
},
),
}
}
func (m *Manager) registerRejectsFlow(router *routerpkg.Router, isStopping *uint32, errChan chan error) []*flow {
outgoingRoute := router.OutgoingRoute()
return []*flow{
m.registerFlow("HandleRejects", router,
[]appmessage.MessageCommand{appmessage.CmdReject}, isStopping, errChan,
func(incomingRoute *routerpkg.Route, peer *peerpkg.Peer) error {
return rejects.HandleRejects(m.context, incomingRoute, outgoingRoute)
},
),
}
}
func (m *Manager) registerFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
// RegisterFlow registers a flow to the given router.
func (m *Manager) RegisterFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc common.FlowInitializeFunc) *common.Flow {
route, err := router.AddIncomingRoute(name, messageTypes)
if err != nil {
@ -300,9 +146,10 @@ func (m *Manager) registerFlow(name string, router *routerpkg.Router, messageTyp
return m.registerFlowForRoute(route, name, isStopping, errChan, initializeFunc)
}
func (m *Manager) registerFlowWithCapacity(name string, capacity int, router *routerpkg.Router,
// RegisterFlowWithCapacity registers a flow to the given router with a custom capacity.
func (m *Manager) RegisterFlowWithCapacity(name string, capacity int, router *routerpkg.Router,
messageTypes []appmessage.MessageCommand, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
errChan chan error, initializeFunc common.FlowInitializeFunc) *common.Flow {
route, err := router.AddIncomingRouteWithCapacity(name, capacity, messageTypes)
if err != nil {
@ -313,11 +160,11 @@ func (m *Manager) registerFlowWithCapacity(name string, capacity int, router *ro
}
func (m *Manager) registerFlowForRoute(route *routerpkg.Route, name string, isStopping *uint32,
errChan chan error, initializeFunc flowInitializeFunc) *flow {
errChan chan error, initializeFunc common.FlowInitializeFunc) *common.Flow {
return &flow{
name: name,
executeFunc: func(peer *peerpkg.Peer) {
return &common.Flow{
Name: name,
ExecuteFunc: func(peer *peerpkg.Peer) {
err := initializeFunc(route, peer)
if err != nil {
m.context.HandleError(err, name, isStopping, errChan)
@ -327,17 +174,18 @@ func (m *Manager) registerFlowForRoute(route *routerpkg.Route, name string, isSt
}
}
func (m *Manager) registerOneTimeFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand,
isStopping *uint32, stopChan chan error, initializeFunc flowInitializeFunc) *flow {
// RegisterOneTimeFlow registers a one-time flow (that exits once some operations are done) to the given router.
func (m *Manager) RegisterOneTimeFlow(name string, router *routerpkg.Router, messageTypes []appmessage.MessageCommand,
isStopping *uint32, stopChan chan error, initializeFunc common.FlowInitializeFunc) *common.Flow {
route, err := router.AddIncomingRoute(name, messageTypes)
if err != nil {
panic(err)
}
return &flow{
name: name,
executeFunc: func(peer *peerpkg.Peer) {
return &common.Flow{
Name: name,
ExecuteFunc: func(peer *peerpkg.Peer) {
defer func() {
err := router.RemoveRoute(messageTypes)
if err != nil {
@ -355,7 +203,7 @@ func (m *Manager) registerOneTimeFlow(name string, router *routerpkg.Router, mes
}
func registerHandshakeRoutes(router *routerpkg.Router) (
receiveVersionRoute *routerpkg.Route, sendVersionRoute *routerpkg.Route) {
receiveVersionRoute, sendVersionRoute, receiveReadyRoute *routerpkg.Route) {
receiveVersionRoute, err := router.AddIncomingRoute("recieveVersion - incoming", []appmessage.MessageCommand{appmessage.CmdVersion})
if err != nil {
panic(err)
@ -366,11 +214,16 @@ func registerHandshakeRoutes(router *routerpkg.Router) (
panic(err)
}
return receiveVersionRoute, sendVersionRoute
receiveReadyRoute, err = router.AddIncomingRoute("recieveReady - incoming", []appmessage.MessageCommand{appmessage.CmdReady})
if err != nil {
panic(err)
}
return receiveVersionRoute, sendVersionRoute, receiveReadyRoute
}
func removeHandshakeRoutes(router *routerpkg.Router) {
err := router.RemoveRoute([]appmessage.MessageCommand{appmessage.CmdVersion, appmessage.CmdVerAck})
err := router.RemoveRoute([]appmessage.MessageCommand{appmessage.CmdVersion, appmessage.CmdVerAck, appmessage.CmdReady})
if err != nil {
panic(err)
}

@ -12,6 +12,7 @@ import (
"github.com/kaspanet/kaspad/infrastructure/network/addressmanager"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter"
"github.com/pkg/errors"
)
// Manager is an RPC manager
@ -28,6 +29,7 @@ func NewManager(
connectionManager *connmanager.ConnectionManager,
addressManager *addressmanager.AddressManager,
utxoIndex *utxoindex.UTXOIndex,
consensusEventsChan chan externalapi.ConsensusEvent,
shutDownChan chan<- struct{}) *Manager {
manager := Manager{
@ -44,43 +46,103 @@ func NewManager(
}
netAdapter.SetRPCRouterInitializer(manager.routerInitializer)
manager.initConsensusEventsHandler(consensusEventsChan)
return &manager
}
// NotifyBlockAddedToDAG notifies the manager that a block has been added to the DAG
func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyBlockAddedToDAG")
func (m *Manager) initConsensusEventsHandler(consensusEventsChan chan externalapi.ConsensusEvent) {
spawn("consensusEventsHandler", func() {
for {
consensusEvent, ok := <-consensusEventsChan
if !ok {
return
}
switch event := consensusEvent.(type) {
case *externalapi.VirtualChangeSet:
err := m.notifyVirtualChange(event)
if err != nil {
panic(err)
}
case *externalapi.BlockAdded:
err := m.notifyBlockAddedToDAG(event.Block)
if err != nil {
panic(err)
}
default:
panic(errors.Errorf("Got event of unsupported type %T", consensusEvent))
}
}
})
}
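initConsensusEventsHandler drains a single channel that carries heterogeneous consensus events and dispatches on their concrete type. A self-contained sketch of the same dispatch pattern, with stand-in event types instead of externalapi.VirtualChangeSet and externalapi.BlockAdded:

package main

import "fmt"

type event interface{}

type virtualChange struct{ daaScore uint64 } // stands in for externalapi.VirtualChangeSet
type blockAdded struct{ hash string }        // stands in for externalapi.BlockAdded

func main() {
	events := make(chan event, 2)
	events <- &blockAdded{hash: "abc"}
	events <- &virtualChange{daaScore: 42}
	close(events)

	for ev := range events {
		switch e := ev.(type) {
		case *virtualChange:
			fmt.Println("virtual changed, DAA score:", e.daaScore)
		case *blockAdded:
			fmt.Println("block added:", e.hash)
		default:
			panic(fmt.Sprintf("got event of unsupported type %T", ev))
		}
	}
}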
// notifyBlockAddedToDAG notifies the manager that a block has been added to the DAG
func (m *Manager) notifyBlockAddedToDAG(block *externalapi.DomainBlock) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.notifyBlockAddedToDAG")
defer onEnd()
if m.context.Config.UTXOIndex {
err := m.notifyUTXOsChanged(blockInsertionResult)
if err != nil {
return err
}
}
err := m.notifyVirtualSelectedParentBlueScoreChanged()
if err != nil {
return err
}
err = m.notifyVirtualDaaScoreChanged()
if err != nil {
return err
}
err = m.notifyVirtualSelectedParentChainChanged(blockInsertionResult)
if err != nil {
return err
// Before converting the block and populating it, we check if any listeners are interested.
// This is done since most nodes do not use this event.
if !m.context.NotificationManager.HasBlockAddedListeners() {
return nil
}
rpcBlock := appmessage.DomainBlockToRPCBlock(block)
err = m.context.PopulateBlockWithVerboseData(rpcBlock, block.Header, block, false)
err := m.context.PopulateBlockWithVerboseData(rpcBlock, block.Header, block, true)
if err != nil {
return err
}
blockAddedNotification := appmessage.NewBlockAddedNotificationMessage(rpcBlock)
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
err = m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
if err != nil {
return err
}
return nil
}
// notifyVirtualChange notifies the manager that the virtual block has been changed.
func (m *Manager) notifyVirtualChange(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualChange")
defer onEnd()
if m.context.Config.UTXOIndex && virtualChangeSet.VirtualUTXODiff != nil {
err := m.notifyUTXOsChanged(virtualChangeSet)
if err != nil {
return err
}
}
err := m.notifyVirtualSelectedParentBlueScoreChanged(virtualChangeSet.VirtualSelectedParentBlueScore)
if err != nil {
return err
}
err = m.notifyVirtualDaaScoreChanged(virtualChangeSet.VirtualDAAScore)
if err != nil {
return err
}
if virtualChangeSet.VirtualSelectedParentChainChanges == nil ||
(len(virtualChangeSet.VirtualSelectedParentChainChanges.Added) == 0 &&
len(virtualChangeSet.VirtualSelectedParentChainChanges.Removed) == 0) {
return nil
}
err = m.notifyVirtualSelectedParentChainChanged(virtualChangeSet)
if err != nil {
return err
}
return nil
}
// NotifyNewBlockTemplate notifies the manager that a new
// block template is available for miners
func (m *Manager) NotifyNewBlockTemplate() error {
notification := appmessage.NewNewBlockTemplateNotificationMessage()
return m.context.NotificationManager.NotifyNewBlockTemplate(notification)
}
// NotifyPruningPointUTXOSetOverride notifies the manager whenever the UTXO index
@ -117,14 +179,15 @@ func (m *Manager) NotifyFinalityConflictResolved(finalityBlockHash string) error
return m.context.NotificationManager.NotifyFinalityConflictResolved(notification)
}
func (m *Manager) notifyUTXOsChanged(blockInsertionResult *externalapi.BlockInsertionResult) error {
func (m *Manager) notifyUTXOsChanged(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyUTXOsChanged")
defer onEnd()
utxoIndexChanges, err := m.context.UTXOIndex.Update(blockInsertionResult)
utxoIndexChanges, err := m.context.UTXOIndex.Update(virtualChangeSet)
if err != nil {
return err
}
return m.context.NotificationManager.NotifyUTXOsChanged(utxoIndexChanges)
}
@ -140,45 +203,36 @@ func (m *Manager) notifyPruningPointUTXOSetOverride() error {
return m.context.NotificationManager.NotifyPruningPointUTXOSetOverride()
}
func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged() error {
func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged(virtualSelectedParentBlueScore uint64) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentBlueScoreChanged")
defer onEnd()
virtualSelectedParent, err := m.context.Domain.Consensus().GetVirtualSelectedParent()
if err != nil {
return err
}
blockInfo, err := m.context.Domain.Consensus().GetBlockInfo(virtualSelectedParent)
if err != nil {
return err
}
notification := appmessage.NewVirtualSelectedParentBlueScoreChangedNotificationMessage(blockInfo.BlueScore)
notification := appmessage.NewVirtualSelectedParentBlueScoreChangedNotificationMessage(virtualSelectedParentBlueScore)
return m.context.NotificationManager.NotifyVirtualSelectedParentBlueScoreChanged(notification)
}
func (m *Manager) notifyVirtualDaaScoreChanged() error {
func (m *Manager) notifyVirtualDaaScoreChanged(virtualDAAScore uint64) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualDaaScoreChanged")
defer onEnd()
virtualDAAScore, err := m.context.Domain.Consensus().GetVirtualDAAScore()
if err != nil {
return err
}
notification := appmessage.NewVirtualDaaScoreChangedNotificationMessage(virtualDAAScore)
return m.context.NotificationManager.NotifyVirtualDaaScoreChanged(notification)
}
func (m *Manager) notifyVirtualSelectedParentChainChanged(blockInsertionResult *externalapi.BlockInsertionResult) error {
func (m *Manager) notifyVirtualSelectedParentChainChanged(virtualChangeSet *externalapi.VirtualChangeSet) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentChainChanged")
defer onEnd()
hasListeners, includeAcceptedTransactionIDs := m.context.NotificationManager.HasListenersThatPropagateVirtualSelectedParentChainChanged()
if hasListeners {
notification, err := m.context.ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
blockInsertionResult.VirtualSelectedParentChainChanges)
virtualChangeSet.VirtualSelectedParentChainChanges, includeAcceptedTransactionIDs)
if err != nil {
return err
}
return m.context.NotificationManager.NotifyVirtualSelectedParentChainChanged(notification)
}
return nil
}

@ -28,6 +28,7 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdGetVirtualSelectedParentChainFromBlockRequestMessage: rpchandlers.HandleGetVirtualSelectedParentChainFromBlock,
appmessage.CmdGetBlocksRequestMessage: rpchandlers.HandleGetBlocks,
appmessage.CmdGetBlockCountRequestMessage: rpchandlers.HandleGetBlockCount,
appmessage.CmdGetBalanceByAddressRequestMessage: rpchandlers.HandleGetBalanceByAddress,
appmessage.CmdGetBlockDAGInfoRequestMessage: rpchandlers.HandleGetBlockDAGInfo,
appmessage.CmdResolveFinalityConflictRequestMessage: rpchandlers.HandleResolveFinalityConflict,
appmessage.CmdNotifyFinalityConflictsRequestMessage: rpchandlers.HandleNotifyFinalityConflicts,
@ -37,6 +38,7 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdNotifyUTXOsChangedRequestMessage: rpchandlers.HandleNotifyUTXOsChanged,
appmessage.CmdStopNotifyingUTXOsChangedRequestMessage: rpchandlers.HandleStopNotifyingUTXOsChanged,
appmessage.CmdGetUTXOsByAddressesRequestMessage: rpchandlers.HandleGetUTXOsByAddresses,
appmessage.CmdGetBalancesByAddressesRequestMessage: rpchandlers.HandleGetBalancesByAddresses,
appmessage.CmdGetVirtualSelectedParentBlueScoreRequestMessage: rpchandlers.HandleGetVirtualSelectedParentBlueScore,
appmessage.CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualSelectedParentBlueScoreChanged,
appmessage.CmdBanRequestMessage: rpchandlers.HandleBan,
@ -46,6 +48,9 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleStopNotifyingPruningPointUTXOSetOverrideRequest,
appmessage.CmdEstimateNetworkHashesPerSecondRequestMessage: rpchandlers.HandleEstimateNetworkHashesPerSecond,
appmessage.CmdNotifyVirtualDaaScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualDaaScoreChanged,
appmessage.CmdNotifyNewBlockTemplateRequestMessage: rpchandlers.HandleNotifyNewBlockTemplate,
appmessage.CmdGetCoinSupplyRequestMessage: rpchandlers.HandleGetCoinSupply,
appmessage.CmdGetMempoolEntriesByAddressesRequestMessage: rpchandlers.HandleGetMempoolEntriesByAddresses,
}
func (m *Manager) routerInitializer(router *router.Router, netConnection *netadapter.NetConnection) {

View File

@ -3,12 +3,14 @@ package rpccontext
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
)
// ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage converts
// VirtualSelectedParentChainChanges to VirtualSelectedParentChainChangedNotificationMessage
func (ctx *Context) ConvertVirtualSelectedParentChainChangesToChainChangedNotificationMessage(
selectedParentChainChanges *externalapi.SelectedChainPath) (*appmessage.VirtualSelectedParentChainChangedNotificationMessage, error) {
selectedParentChainChanges *externalapi.SelectedChainPath, includeAcceptedTransactionIDs bool) (
*appmessage.VirtualSelectedParentChainChangedNotificationMessage, error) {
removedChainBlockHashes := make([]string, len(selectedParentChainChanges.Removed))
for i, removed := range selectedParentChainChanges.Removed {
@ -20,5 +22,58 @@ func (ctx *Context) ConvertVirtualSelectedParentChainChangesToChainChangedNotifi
addedChainBlocks[i] = added.String()
}
return appmessage.NewVirtualSelectedParentChainChangedNotificationMessage(removedChainBlockHashes, addedChainBlocks), nil
var acceptedTransactionIDs []*appmessage.AcceptedTransactionIDs
if includeAcceptedTransactionIDs {
var err error
acceptedTransactionIDs, err = ctx.getAndConvertAcceptedTransactionIDs(selectedParentChainChanges)
if err != nil {
return nil, err
}
}
return appmessage.NewVirtualSelectedParentChainChangedNotificationMessage(
removedChainBlockHashes, addedChainBlocks, acceptedTransactionIDs), nil
}
func (ctx *Context) getAndConvertAcceptedTransactionIDs(selectedParentChainChanges *externalapi.SelectedChainPath) (
[]*appmessage.AcceptedTransactionIDs, error) {
acceptedTransactionIDs := make([]*appmessage.AcceptedTransactionIDs, len(selectedParentChainChanges.Added))
const chunk = 1000
position := 0
for position < len(selectedParentChainChanges.Added) {
var chainBlocksChunk []*externalapi.DomainHash
if position+chunk > len(selectedParentChainChanges.Added) {
chainBlocksChunk = selectedParentChainChanges.Added[position:]
} else {
chainBlocksChunk = selectedParentChainChanges.Added[position : position+chunk]
}
// We use chunks to avoid blocking consensus for too long
chainBlocksAcceptanceData, err := ctx.Domain.Consensus().GetBlocksAcceptanceData(chainBlocksChunk)
if err != nil {
return nil, err
}
for i, addedChainBlock := range chainBlocksChunk {
chainBlockAcceptanceData := chainBlocksAcceptanceData[i]
acceptedTransactionIDs[position+i] = &appmessage.AcceptedTransactionIDs{
AcceptingBlockHash: addedChainBlock.String(),
AcceptedTransactionIDs: nil,
}
for _, blockAcceptanceData := range chainBlockAcceptanceData {
for _, transactionAcceptanceData := range blockAcceptanceData.TransactionAcceptanceData {
if transactionAcceptanceData.IsAccepted {
acceptedTransactionIDs[position+i].AcceptedTransactionIDs =
append(acceptedTransactionIDs[position+i].AcceptedTransactionIDs,
consensushashing.TransactionID(transactionAcceptanceData.Transaction).String())
}
}
}
}
position += chunk
}
return acceptedTransactionIDs, nil
}
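The chunking loop above walks the added chain blocks in fixed windows so each consensus call stays short; with chunk=1000 and 2500 added blocks the windows are [0:1000), [1000:2000), [2000:2500). A minimal sketch of just the windowing:

package main

import "fmt"

func main() {
	added := make([]int, 2500) // stands in for selectedParentChainChanges.Added
	const chunk = 1000
	position := 0
	for position < len(added) {
		end := position + chunk
		if end > len(added) {
			end = len(added) // final, shorter window
		}
		fmt.Printf("processing blocks [%d:%d)\n", position, end)
		position += chunk
	}
}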

@ -44,7 +44,7 @@ func NewContext(cfg *config.Config,
UTXOIndex: utxoIndex,
ShutDownChan: shutDownChan,
}
context.NotificationManager = NewNotificationManager()
context.NotificationManager = NewNotificationManager(cfg.ActiveNetParams)
return context
}

@ -3,6 +3,11 @@ package rpccontext
import (
"sync"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/utxoindex"
routerpkg "github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
@ -13,6 +18,7 @@ import (
type NotificationManager struct {
sync.RWMutex
listeners map[*routerpkg.Router]*NotificationListener
params *dagconfig.Params
}
// UTXOsChangedNotificationAddress represents a kaspad address.
@ -24,6 +30,8 @@ type UTXOsChangedNotificationAddress struct {
// NotificationListener represents a registered RPC notification listener
type NotificationListener struct {
params *dagconfig.Params
propagateBlockAddedNotifications bool
propagateVirtualSelectedParentChainChangedNotifications bool
propagateFinalityConflictNotifications bool
@ -32,13 +40,16 @@ type NotificationListener struct {
propagateVirtualSelectedParentBlueScoreChangedNotifications bool
propagateVirtualDaaScoreChangedNotifications bool
propagatePruningPointUTXOSetOverrideNotifications bool
propagateNewBlockTemplateNotifications bool
propagateUTXOsChangedNotificationAddresses map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress
includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications bool
}
// NewNotificationManager creates a new NotificationManager
func NewNotificationManager() *NotificationManager {
func NewNotificationManager(params *dagconfig.Params) *NotificationManager {
return &NotificationManager{
params: params,
listeners: make(map[*routerpkg.Router]*NotificationListener),
}
}
@ -48,7 +59,7 @@ func (nm *NotificationManager) AddListener(router *routerpkg.Router) {
nm.Lock()
defer nm.Unlock()
listener := newNotificationListener()
listener := newNotificationListener(nm.params)
nm.listeners[router] = listener
}
@ -72,6 +83,19 @@ func (nm *NotificationManager) Listener(router *routerpkg.Router) (*Notification
return listener, nil
}
// HasBlockAddedListeners indicates if the notification manager has any listeners for `BlockAdded` events
func (nm *NotificationManager) HasBlockAddedListeners() bool {
nm.RLock()
defer nm.RUnlock()
for _, listener := range nm.listeners {
if listener.propagateBlockAddedNotifications {
return true
}
}
return false
}
// NotifyBlockAdded notifies the notification manager that a block has been added to the DAG
func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAddedNotificationMessage) error {
nm.RLock()
@ -79,10 +103,8 @@ func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAd
for router, listener := range nm.listeners {
if listener.propagateBlockAddedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if errors.Is(err, routerpkg.ErrRouteClosed) {
log.Warnf("Couldn't send notification: %s", err)
} else if err != nil {
err := router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
}
@ -91,13 +113,27 @@ func (nm *NotificationManager) NotifyBlockAdded(notification *appmessage.BlockAd
}
// NotifyVirtualSelectedParentChainChanged notifies the notification manager that the DAG's selected parent chain has changed
func (nm *NotificationManager) NotifyVirtualSelectedParentChainChanged(notification *appmessage.VirtualSelectedParentChainChangedNotificationMessage) error {
func (nm *NotificationManager) NotifyVirtualSelectedParentChainChanged(
notification *appmessage.VirtualSelectedParentChainChangedNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
notificationWithoutAcceptedTransactionIDs := &appmessage.VirtualSelectedParentChainChangedNotificationMessage{
RemovedChainBlockHashes: notification.RemovedChainBlockHashes,
AddedChainBlockHashes: notification.AddedChainBlockHashes,
}
for router, listener := range nm.listeners {
if listener.propagateVirtualSelectedParentChainChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
var err error
if listener.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications {
err = router.OutgoingRoute().MaybeEnqueue(notification)
} else {
err = router.OutgoingRoute().MaybeEnqueue(notificationWithoutAcceptedTransactionIDs)
}
if err != nil {
return err
}
@@ -106,6 +142,31 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentChainChanged(notificat
return nil
}
// HasListenersThatPropagateVirtualSelectedParentChainChanged returns whether any listener is
// subscribed to VirtualSelectedParentChainChanged notifications, and whether any such listener
// requested to include AcceptedTransactionIDs.
func (nm *NotificationManager) HasListenersThatPropagateVirtualSelectedParentChainChanged() (hasListeners, hasListenersThatRequireAcceptedTransactionIDs bool) {
nm.RLock()
defer nm.RUnlock()
hasListeners = false
hasListenersThatRequireAcceptedTransactionIDs = false
for _, listener := range nm.listeners {
if listener.propagateVirtualSelectedParentChainChangedNotifications {
hasListeners = true
// Generating acceptedTransactionIDs is a heavy operation, so we check if it's needed by any listener.
if listener.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications {
hasListenersThatRequireAcceptedTransactionIDs = true
break
}
}
}
return hasListeners, hasListenersThatRequireAcceptedTransactionIDs
}
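The two returned flags let the producer avoid computing accepted transaction IDs, the heavy part, unless some listener explicitly asked for them. A hypothetical caller sketch; the stub below stands in for the manager and is not the actual consensus-side code:

package main

import "fmt"

// hasChainChangedListeners stands in for
// HasListenersThatPropagateVirtualSelectedParentChainChanged in this sketch.
func hasChainChangedListeners() (hasListeners, needsAcceptedIDs bool) {
    return true, false
}

func main() {
    hasListeners, needsAcceptedIDs := hasChainChangedListeners()
    if !hasListeners {
        return // nobody subscribed; skip building the notification entirely
    }
    var acceptedTransactionIDs []string
    if needsAcceptedIDs {
        // The expensive lookup would run only on demand (hypothetical placeholder).
        acceptedTransactionIDs = append(acceptedTransactionIDs, "txid-placeholder")
    }
    fmt.Println("notifying; accepted IDs attached:", len(acceptedTransactionIDs) > 0)
}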
// NotifyFinalityConflict notifies the notification manager that there's a finality conflict in the DAG
func (nm *NotificationManager) NotifyFinalityConflict(notification *appmessage.FinalityConflictNotificationMessage) error {
nm.RLock()
@@ -146,7 +207,10 @@ func (nm *NotificationManager) NotifyUTXOsChanged(utxoChanges *utxoindex.UTXOCha
for router, listener := range nm.listeners {
if listener.propagateUTXOsChangedNotifications {
// Filter utxoChanges and create a notification
notification := listener.convertUTXOChangesToUTXOsChangedNotification(utxoChanges)
notification, err := listener.convertUTXOChangesToUTXOsChangedNotification(utxoChanges)
if err != nil {
return err
}
// Don't send the notification if it's empty
if len(notification.Added) == 0 && len(notification.Removed) == 0 {
@@ -154,7 +218,7 @@ func (nm *NotificationManager) NotifyUTXOsChanged(utxoChanges *utxoindex.UTXOCha
}
// Enqueue the notification
err := router.OutgoingRoute().Enqueue(notification)
err = router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
@@ -173,7 +237,7 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentBlueScoreChanged(
for router, listener := range nm.listeners {
if listener.propagateVirtualSelectedParentBlueScoreChangedNotifications {
err := router.OutgoingRoute().Enqueue(notification)
err := router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
@@ -192,6 +256,25 @@ func (nm *NotificationManager) NotifyVirtualDaaScoreChanged(
for router, listener := range nm.listeners {
if listener.propagateVirtualDaaScoreChangedNotifications {
err := router.OutgoingRoute().MaybeEnqueue(notification)
if err != nil {
return err
}
}
}
return nil
}
// NotifyNewBlockTemplate notifies the notification manager that a new
// block template is available for miners
func (nm *NotificationManager) NotifyNewBlockTemplate(
notification *appmessage.NewBlockTemplateNotificationMessage) error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagateNewBlockTemplateNotifications {
err := router.OutgoingRoute().Enqueue(notification)
if err != nil {
return err
@@ -218,18 +301,27 @@ func (nm *NotificationManager) NotifyPruningPointUTXOSetOverride() error {
return nil
}
func newNotificationListener() *NotificationListener {
func newNotificationListener(params *dagconfig.Params) *NotificationListener {
return &NotificationListener{
params: params,
propagateBlockAddedNotifications: false,
propagateVirtualSelectedParentChainChangedNotifications: false,
propagateFinalityConflictNotifications: false,
propagateFinalityConflictResolvedNotifications: false,
propagateUTXOsChangedNotifications: false,
propagateVirtualSelectedParentBlueScoreChangedNotifications: false,
propagateNewBlockTemplateNotifications: false,
propagatePruningPointUTXOSetOverrideNotifications: false,
}
}
// IncludeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications returns true if this listener
// includes accepted transaction IDs in its virtual-selected-parent-chain-changed notifications
func (nl *NotificationListener) IncludeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications() bool {
return nl.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications
}
// PropagateBlockAddedNotifications instructs the listener to send block added notifications
// to the remote listener
func (nl *NotificationListener) PropagateBlockAddedNotifications() {
@@ -238,8 +330,9 @@ func (nl *NotificationListener) PropagateBlockAddedNotifications() {
// PropagateVirtualSelectedParentChainChangedNotifications instructs the listener to send chain changed notifications
// to the remote listener
func (nl *NotificationListener) PropagateVirtualSelectedParentChainChangedNotifications() {
func (nl *NotificationListener) PropagateVirtualSelectedParentChainChangedNotifications(includeAcceptedTransactionIDs bool) {
nl.propagateVirtualSelectedParentChainChangedNotifications = true
nl.includeAcceptedTransactionIDsInVirtualSelectedParentChainChangedNotifications = includeAcceptedTransactionIDs
}
// PropagateFinalityConflictNotifications instructs the listener to send finality conflict notifications
@@ -258,7 +351,11 @@ func (nl *NotificationListener) PropagateFinalityConflictResolvedNotifications()
// to the remote listener for the given addresses. Subsequent calls instruct the listener to
// send UTXOs changed notifications for those addresses along with the old ones. Duplicate addresses
// are ignored.
func (nl *NotificationListener) PropagateUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
func (nm *NotificationManager) PropagateUTXOsChangedNotifications(nl *NotificationListener, addresses []*UTXOsChangedNotificationAddress) {
// Apply a write-lock since the internal listener address map is modified
nm.Lock()
defer nm.Unlock()
if !nl.propagateUTXOsChangedNotifications {
nl.propagateUTXOsChangedNotifications = true
nl.propagateUTXOsChangedNotificationAddresses =
@@ -273,7 +370,11 @@ func (nl *NotificationListener) PropagateUTXOsChangedNotifications(addresses []*
// StopPropagatingUTXOsChangedNotifications instructs the listener to stop sending UTXOs
// changed notifications to the remote listener for the given addresses. Addresses for which
// notifications are not currently sent are ignored.
func (nl *NotificationListener) StopPropagatingUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
func (nm *NotificationManager) StopPropagatingUTXOsChangedNotifications(nl *NotificationListener, addresses []*UTXOsChangedNotificationAddress) {
// Apply a write-lock since the internal listener address map is modified
nm.Lock()
defer nm.Unlock()
if !nl.propagateUTXOsChangedNotifications {
return
}
@@ -284,7 +385,7 @@ func (nl *NotificationListener) StopPropagatingUTXOsChangedNotifications(address
}
func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
utxoChanges *utxoindex.UTXOChanges) *appmessage.UTXOsChangedNotificationMessage {
utxoChanges *utxoindex.UTXOChanges) (*appmessage.UTXOsChangedNotificationMessage, error) {
// As an optimization, we iterate over the smaller set (O(n)) among the two below
// and check existence over the larger set (O(1))
@@ -299,27 +400,64 @@ func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
}
for scriptPublicKeyString, removedOutpoints := range utxoChanges.Removed {
for scriptPublicKeyString, removedPairs := range utxoChanges.Removed {
if listenerAddress, ok := nl.propagateUTXOsChangedNotificationAddresses[scriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, removedPairs)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
} else {
} else if addressesSize > 0 {
for _, listenerAddress := range nl.propagateUTXOsChangedNotificationAddresses {
listenerScriptPublicKeyString := listenerAddress.ScriptPublicKeyString
if addedPairs, ok := utxoChanges.Added[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
if removedOutpoints, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
if removedPairs, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, removedPairs)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
} else {
for scriptPublicKeyString, addedPairs := range utxoChanges.Added {
addressString, err := nl.scriptPubKeyStringToAddressString(scriptPublicKeyString)
if err != nil {
return nil, err
}
return notification
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(addressString, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
for scriptPublicKeyString, removedPairs := range utxoChanges.Removed {
addressString, err := nl.scriptPubKeyStringToAddressString(scriptPublicKeyString)
if err != nil {
return nil, err
}
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(addressString, removedPairs)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
return notification, nil
}
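The O(n)/O(1) comment at the top of this function refers to a standard map-intersection trick: iterate whichever set is smaller and probe the other with constant-time lookups, so the cost tracks the smaller of the subscribed-address set and the changed-key set. A generic self-contained sketch of that trick:

package main

import "fmt"

// intersectKeys returns the keys present in both maps, iterating the smaller
// map (O(min(|a|,|b|)) iterations) and probing the larger one (O(1) per probe).
func intersectKeys(a, b map[string]struct{}) []string {
    small, large := a, b
    if len(b) < len(a) {
        small, large = b, a
    }
    var out []string
    for k := range small {
        if _, ok := large[k]; ok {
            out = append(out, k)
        }
    }
    return out
}

func main() {
    subscribed := map[string]struct{}{"spkA": {}, "spkB": {}}
    changed := map[string]struct{}{"spkB": {}, "spkC": {}, "spkD": {}}
    fmt.Println(intersectKeys(subscribed, changed)) // [spkB]
}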
func (nl *NotificationListener) scriptPubKeyStringToAddressString(scriptPublicKeyString utxoindex.ScriptPublicKeyString) (string, error) {
scriptPubKey := externalapi.NewScriptPublicKeyFromString(string(scriptPublicKeyString))
// A script of unknown type yields NonStandardTy (handled below as an empty address string); an actual extraction error is propagated to the caller
scriptType, address, err := txscript.ExtractScriptPubKeyAddress(scriptPubKey, nl.params)
if err != nil {
return "", err
}
var addressString string
if scriptType == txscript.NonStandardTy {
addressString = ""
} else {
addressString = address.String()
}
return addressString, nil
}
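Note the fallback in the helper above: a script that decodes to NonStandardTy is reported with an empty address string instead of failing, so a single unrecognized UTXO cannot abort the whole notification. A reduced sketch of that rule with simplified types; this is not the txscript API:

package main

import "fmt"

// addressStringOrEmpty reduces the fallback rule above: nonstandard script
// types map to "", every other type reports its decoded address.
func addressStringOrEmpty(scriptType, decodedAddress string) string {
    if scriptType == "nonstandard" { // stands in for txscript.NonStandardTy
        return ""
    }
    return decodedAddress
}

func main() {
    fmt.Printf("%q\n", addressStringOrEmpty("nonstandard", ""))               // ""
    fmt.Printf("%q\n", addressStringOrEmpty("pubkey", "kaspa:example-addr")) // "kaspa:example-addr"
}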
// PropagateVirtualSelectedParentBlueScoreChangedNotifications instructs the listener to send
@@ -334,6 +472,12 @@ func (nl *NotificationListener) PropagateVirtualDaaScoreChangedNotifications() {
nl.propagateVirtualDaaScoreChangedNotifications = true
}
// PropagateNewBlockTemplateNotifications instructs the listener to send
// new block template notifications to the remote listener
func (nl *NotificationListener) PropagateNewBlockTemplateNotifications() {
nl.propagateNewBlockTemplateNotifications = true
}
// PropagatePruningPointUTXOSetOverrideNotifications instructs the listener to send pruning point UTXO set override notifications
// to the remote listener.
func (nl *NotificationListener) PropagatePruningPointUTXOSetOverrideNotifications() {

View File

@@ -32,22 +32,6 @@ func ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(address string, pair
return utxosByAddressesEntries
}
// convertUTXOOutpointsToUTXOsByAddressesEntries converts
// UTXOOutpoints to a slice of UTXOsByAddressesEntry
func convertUTXOOutpointsToUTXOsByAddressesEntries(address string, outpoints utxoindex.UTXOOutpoints) []*appmessage.UTXOsByAddressesEntry {
utxosByAddressesEntries := make([]*appmessage.UTXOsByAddressesEntry, 0, len(outpoints))
for outpoint := range outpoints {
utxosByAddressesEntries = append(utxosByAddressesEntries, &appmessage.UTXOsByAddressesEntry{
Address: address,
Outpoint: &appmessage.RPCOutpoint{
TransactionID: outpoint.TransactionID.String(),
Index: outpoint.Index,
},
})
}
return utxosByAddressesEntries
}
// ConvertAddressStringsToUTXOsChangedNotificationAddresses converts address strings
// to UTXOsChangedNotificationAddresses
func (ctx *Context) ConvertAddressStringsToUTXOsChangedNotificationAddresses(
@@ -63,7 +47,7 @@ func (ctx *Context) ConvertAddressStringsToUTXOsChangedNotificationAddresses(
if err != nil {
return nil, errors.Errorf("Could not create a scriptPublicKey for address '%s': %s", addressString, err)
}
scriptPublicKeyString := utxoindex.ConvertScriptPublicKeyToString(scriptPublicKey)
scriptPublicKeyString := utxoindex.ScriptPublicKeyString(scriptPublicKey.String())
addresses[i] = &UTXOsChangedNotificationAddress{
Address: addressString,
ScriptPublicKeyString: scriptPublicKeyString,

View File

@@ -56,7 +56,12 @@ func (ctx *Context) PopulateBlockWithVerboseData(block *appmessage.RPCBlock, dom
"invalid block")
}
_, selectedParentHash, childrenHashes, err := ctx.Domain.Consensus().GetBlockRelations(blockHash)
_, childrenHashes, err := ctx.Domain.Consensus().GetBlockRelations(blockHash)
if err != nil {
return err
}
isChainBlock, err := ctx.Domain.Consensus().IsChainBlock(blockHash)
if err != nil {
return err
}
@@ -67,14 +72,13 @@ func (ctx *Context) PopulateBlockWithVerboseData(block *appmessage.RPCBlock, dom
ChildrenHashes: hashes.ToStrings(childrenHashes),
IsHeaderOnly: blockInfo.BlockStatus == externalapi.StatusHeaderOnly,
BlueScore: blockInfo.BlueScore,
MergeSetBluesHashes: hashes.ToStrings(blockInfo.MergeSetBlues),
MergeSetRedsHashes: hashes.ToStrings(blockInfo.MergeSetReds),
IsChainBlock: isChainBlock,
}
// selectedParentHash will be nil in the genesis block
if selectedParentHash != nil {
block.VerboseData.SelectedParentHash = selectedParentHash.String()
}
if blockInfo.BlockStatus == externalapi.StatusHeaderOnly {
return nil
if blockInfo.SelectedParent != nil {
block.VerboseData.SelectedParentHash = blockInfo.SelectedParent.String()
}
// Get the block if we didn't receive it previously
@@ -85,6 +89,10 @@ func (ctx *Context) PopulateBlockWithVerboseData(block *appmessage.RPCBlock, dom
}
}
if len(domainBlock.Transactions) == 0 {
return nil
}
transactionIDs := make([]string, len(domainBlock.Transactions))
for i, transaction := range domainBlock.Transactions {
transactionIDs[i] = consensushashing.TransactionID(transaction).String()
@@ -114,6 +122,7 @@ func (ctx *Context) PopulateTransactionWithVerboseData(
}
ctx.Domain.Consensus().PopulateMass(domainTransaction)
transaction.VerboseData = &appmessage.RPCTransactionVerboseData{
TransactionID: consensushashing.TransactionID(domainTransaction).String(),
Hash: consensushashing.TransactionHash(domainTransaction).String(),

View File

@@ -9,6 +14,7 @@ import (
// HandleAddPeer handles the respectively named RPC command
func HandleAddPeer(context *rpccontext.Context, _ *router.Router, request appmessage.Message) (appmessage.Message, error) {
if context.Config.SafeRPC {
log.Warn("AddPeer RPC command called while node in safe RPC mode -- ignoring.")
response := appmessage.NewAddPeerResponseMessage()
response.Error =
appmessage.RPCErrorf("AddPeer RPC command called while node in safe RPC mode")
return response, nil
}
AddPeerRequest := request.(*appmessage.AddPeerRequestMessage)
address, err := network.NormalizeAddress(AddPeerRequest.Address, context.Config.ActiveNetParams.DefaultPort)
if err != nil {

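The SafeRPC guard added to HandleAddPeer answers with an in-band RPC error instead of performing the state-changing operation, which keeps the connection alive for read-only queries. A generalized self-contained sketch of the guard; the response type and handler are hypothetical, not kaspad's:

package main

import "fmt"

// response models the in-band error channel RPC handlers use in this sketch.
type response struct{ Err string }

// handleMutatingCommand refuses state-changing work while safe RPC mode is on,
// mirroring the guard added to HandleAddPeer above.
func handleMutatingCommand(safeRPC bool) response {
    if safeRPC {
        return response{Err: "command disabled while node is in safe RPC mode"}
    }
    // ... perform the state-changing operation ...
    return response{}
}

func main() {
    fmt.Println(handleMutatingCommand(true).Err)
}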
Some files were not shown because too many files have changed in this diff.