Compare commits


47 Commits

Author SHA1 Message Date
stasatdaglabs
9893b7396c [NOD-1105] When recovering acceptance index, use a database transaction per block instead of for the entire recovery (#781)
* [NOD-1105] Don't use a database transaction when recovering acceptance index.

* Revert "[NOD-1105] Don't use a database transaction when recovering acceptance index."

This reverts commit da550f8e

* [NOD-1105] When recovering acceptance index, use a database transaction per block instead of for the entire recovery.
2020-07-01 13:43:51 +03:00
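The approach NOD-1105 lands — committing one short transaction per recovered block instead of holding a single transaction across the entire acceptance-index recovery — can be sketched as follows. This is a minimal illustration of the pattern, not kaspad's actual API; the `tx` type and function names are hypothetical stand-ins.

```go
package main

import "fmt"

// tx is a hypothetical database transaction.
type tx struct{ writes int }

func (t *tx) put(key string) { t.writes++ }
func (t *tx) commit() error  { return nil }

// recoverBlock writes one block's acceptance data inside its own
// short-lived transaction, so a failure mid-recovery loses at most one
// block's worth of work and transaction memory stays bounded.
func recoverBlock(hash string) error {
	t := &tx{}
	t.put("acceptance-" + hash)
	return t.commit()
}

func recoverAcceptanceIndex(hashes []string) (recovered int, err error) {
	for _, hash := range hashes {
		if err := recoverBlock(hash); err != nil {
			return recovered, err // earlier blocks are already committed
		}
		recovered++
	}
	return recovered, nil
}

func main() {
	n, err := recoverAcceptanceIndex([]string{"a", "b", "c"})
	fmt.Println(n, err)
}
```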
Svarog
8c90344f28 [NOD-1103] Fix testnetGenesisTxPayload with 8-byte blue-score (#780)
* [NOD-1103] Fix testnetGenesisTxPayload with 8-byte blue-score

* [NOD-1103] Fix genesis block bytes
2020-07-01 09:21:42 +03:00
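NOD-1103 widens the blue-score field carried in the genesis coinbase payload to 8 bytes. Encoding a uint64 as a fixed 8-byte little-endian value looks like this; the payload layout shown is illustrative, not kaspad's exact wire format.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeBlueScore writes a blue score as a fixed 8-byte little-endian
// value, as a coinbase payload prefix might carry it.
func encodeBlueScore(blueScore uint64) []byte {
	buf := make([]byte, 8)
	binary.LittleEndian.PutUint64(buf, blueScore)
	return buf
}

// decodeBlueScore reads the blue score back from the payload prefix.
func decodeBlueScore(payload []byte) uint64 {
	return binary.LittleEndian.Uint64(payload[:8])
}

func main() {
	p := encodeBlueScore(0)
	fmt.Printf("%x (%d bytes)\n", p, len(p)) // 0000000000000000 (8 bytes)
}
```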
Mike Zak
8a7b0314e5 Merge branch 'v0.5.0-dev' of github.com:kaspanet/kaspad into v0.5.0-dev 2020-06-29 12:17:41 +03:00
Mike Zak
e87d00c9cf [NOD-1063] Fix a bug in which a block is pointing directly to a block in the selected parent chain below the reindex root
commit e303efef42
Author: stasatdaglabs <stas@daglabs.com>
Date:   Mon Jun 29 11:59:36 2020 +0300

    [NOD-1063] Rename a test.

commit bfecd57470
Author: stasatdaglabs <stas@daglabs.com>
Date:   Mon Jun 29 11:57:36 2020 +0300

    [NOD-1063] Fix a comment.

commit b969e5922d
Author: stasatdaglabs <stas@daglabs.com>
Date:   Sun Jun 28 18:14:44 2020 +0300

    [NOD-1063] Convert modifiedTreeNode to an out param.

commit 170f9872f4
Author: stasatdaglabs <stas@daglabs.com>
Date:   Sun Jun 28 17:05:01 2020 +0300

    [NOD-1063] Fix a bug in which a block is added to the selected parent chain below the reindex root.
2020-06-29 12:16:47 +03:00
stasatdaglabs
336347b3c5 [NOD-1063] Fix a bug in which a block is pointing directly to a block in the selected parent chain below the reindex root (#777)
* [NOD-1063] Fix a bug in which a block is added to the selected parent chain below the reindex root.

* [NOD-1063] Convert modifiedTreeNode to an out param.

* [NOD-1063] Fix a comment.

* [NOD-1063] Rename a test.
2020-06-29 12:13:51 +03:00
Mike Zak
ad096f9781 Update to version 0.5.0 2020-06-29 08:59:56 +03:00
stasatdaglabs
57b1653383 [NOD-1063] Optimize deep reachability tree insertions (#773)
* [NOD-1055] Give higher priority for requesting missing ancestors when sending a getdata message (#767)

* [NOD-1063] Remove the remainingInterval field.

* [NOD-1063] Add helper functions to reachabilityTreeNode.

* [NOD-1063] Add reachabilityReindexRoot.

* [NOD-1063] Start implementing findNextReachabilityReindexRoot.

* [NOD-1063] Implement findCommonAncestor.

* [NOD-1063] Implement findReachabilityTreeAncestorInChildren.

* [NOD-1063] Add reachabilityReindexWindow.

* [NOD-1063] Fix findReachabilityTreeAncestorInChildren.

* [NOD-1063] Remove BlockDAG reference in findReachabilityTreeAncestorInChildren.

* [NOD-1063] Extract updateReachabilityReindexRoot to a separate function.

* [NOD-1063] Add reachabilityReindexSlack.

* [NOD-1063] Implement splitReindexRootChildrenAroundChosen.

* [NOD-1063] Implement calcReachabilityTreeNodeSizes.

* [NOD-1063] Implement propagateChildIntervals.

* [NOD-1063] Extract tightenReachabilityTreeIntervalsBeforeChosenReindexRootChild and tightenReachabilityTreeIntervalsAfterChosenReindexRootChild to separate functions.

* [NOD-1063] Implement expandReachabilityTreeIntervalInChosenReindexRootChild.

* [NOD-1063] Finished implementing concentrateReachabilityTreeIntervalAroundReindexRootChild.

* [NOD-1063] Begin implementing reindexIntervalsBeforeReindexRoot.

* [NOD-1063] Implement top-level logic of reindexIntervalsBeforeReindexRoot.

* [NOD-1063] Implement reclaimIntervalBeforeChosenChild.

* [NOD-1063] Add a debug log for reindexIntervalsBeforeReindexRoot.

* [NOD-1063] Rename reindexIntervalsBeforeReindexRoot to reindexIntervalsEarlierThanReindexRoot.

* [NOD-1063] Implement reclaimIntervalAfterChosenChild.

* [NOD-1063] Add a debug log for updateReachabilityReindexRoot.

* [NOD-1063] Convert modifiedTreeNodes from slices to sets.

* [NOD-1063] Fix findCommonAncestor.

* [NOD-1063] Fix reindexIntervalsEarlierThanReindexRoot.

* [NOD-1063] Remove redundant nil conditions.

* [NOD-1063] Make map[*reachabilityTreeNode]struct{} into a type alias with a copyAllFrom method.

* [NOD-1063] Remove setInterval.

* [NOD-1063] Create a new struct to hold reachability stuff called reachabilityTree.

* [NOD-1063] Rename functions under reachabilityTree.

* [NOD-1063] Move reachabilityStore into reachabilityTree.

* [NOD-1063] Move the rest of the functions in reachability.go into the reachabilityTree struct.

* [NOD-1063] Update newReachabilityTree to take an instance of reachabilityStore.

* [NOD-1063] Fix merge errors.

* [NOD-1063] Fix merge errors.

* [NOD-1063] Pass a reference to the dag into reachabilityTree.

* [NOD-1063] Use Wrapf instead of Errorf.

* [NOD-1063] Merge assignments.

* [NOD-1063] Disambiguate a variable name.

* [NOD-1063] Add a test case for intervalBefore.

* [NOD-1063] Simplify splitChildrenAroundChosenChild.

* [NOD-1063] Fold temporary variables into newReachabilityInterval.

* [NOD-1063] Fold more temporary variables into newReachabilityInterval.

* [NOD-1063] Fix a bug in expandIntervalInReindexRootChosenChild.

* [NOD-1063] Remove blockNode from futureCoveringBlock.

* [NOD-1063] Get rid of futureCoveringBlock.

* [NOD-1063] Use findIndex directly in findAncestorAmongChildren.

* [NOD-1063] Make findIndex a bit nicer to use. Also rename it to findAncestorIndexOfNode.

* [NOD-1063] Rename childIntervalAllocationRange to intervalRangeForChildAllocation.

* [NOD-1063] Optimize findCommonAncestor.

* [NOD-1063] In reindexIntervalsBeforeChosenChild, use chosenChild.interval.start - 1 instead of childrenBeforeChosen[len(childrenBeforeChosen)-1].interval.end + 1.

* [NOD-1063] Rename reindexIntervalsBeforeChosenChild to reindexIntervalsBeforeNode.

* [NOD-1063] Add a comment explaining what "the chosen child" is.

* [NOD-1063] In concentrateIntervalAroundReindexRootChosenChild, rename modifiedTreeNodes to allModifiedTreeNodes.

* [NOD-1063] Extract propagateIntervals to a function.

* [NOD-1063] Extract interval "contains" logic to a separate function.

* [NOD-1063] Simplify "looping up" logic in reclaimIntervalXXXChosenChild.

* [NOD-1063] Add comments to reclaimIntervalXXXChosenChild.

* [NOD-1063] Rename copyAllFrom to addAll.

* [NOD-1063] Rename reachabilityStore (the variable) to just store.

* [NOD-1063] Fix an error message.

* [NOD-1063] Reword a comment.

* [NOD-1063] Don't return -1 from findAncestorIndexOfNode.

* [NOD-1063] Extract slackReachabilityIntervalForReclaiming to a constant.

* [NOD-1063] Add a missing condition.

* [NOD-1063] Call isAncestorOf directly in insertNode.

* [NOD-1063] Rename chosenReindexRootChild to reindexRootChosenChild.

* [NOD-1063] Rename treeNodeSet to orderedTreeNodeSet.

* [NOD-1063] Add a disclaimer to orderedTreeNodeSet.

* [NOD-1063] Implement StoreReachabilityReindexRoot and FetchReachabilityReindexRoot.

* [NOD-1063] Move storing the reindex root to within reachabilityTree.

* [NOD-1063] Remove isAncestorOf from reachabilityInterval.

* [NOD-1063] Add a comment about graph theory conventions.

* [NOD-1063] Fix tests.

* [NOD-1063] Change inclusion in isAncestorOf functions.

* [NOD-1063] Rename a test.

* [NOD-1063] Implement TestIsInFuture.

* [NOD-1063] Fix error messages in TestIsInFuture.

* [NOD-1063] Fix error messages in TestIsInFuture.

* [NOD-1063] Rename isInSelectedParentChain to isInSelectedParentChainOf.

* [NOD-1063] Rename isInFuture to isInPast.

* [NOD-1063] Expand on a comment.

* [NOD-1063] Rename modifiedTreeNodes.

* [NOD-1063] Implement test: TestReindexIntervalsEarlierThanReindexRoot.

* [NOD-1063] Implement test: TestUpdateReindexRoot.

* [NOD-1063] Explain a check.

* [NOD-1063] Use a method instead of calling reachabilityStore.loaded directly.

* [NOD-1063] Lowercasified an error message.

* [NOD-1063] Fix failing test.

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2020-06-28 14:27:01 +03:00
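The reachability work in NOD-1063 revolves around assigning each tree node an integer interval, with a parent's interval partitioned among its children (functions like calcReachabilityTreeNodeSizes and propagateChildIntervals above). A simplified sketch of that allocation step — not kaspad's implementation, and without the slack/bias logic the PR adds:

```go
package main

import "fmt"

// interval is a closed range [start, end], as used by interval-based
// reachability trees.
type interval struct{ start, end uint64 }

func (i interval) size() uint64 { return i.end - i.start + 1 }

// splitExact allocates contiguous sub-intervals of the given sizes out
// of a parent interval. The sizes must sum to exactly the parent's size.
func splitExact(parent interval, sizes []uint64) ([]interval, error) {
	total := uint64(0)
	for _, s := range sizes {
		total += s
	}
	if total != parent.size() {
		return nil, fmt.Errorf("sizes sum to %d, parent holds %d", total, parent.size())
	}
	children := make([]interval, len(sizes))
	start := parent.start
	for i, s := range sizes {
		children[i] = interval{start, start + s - 1}
		start += s
	}
	return children, nil
}

func main() {
	children, _ := splitExact(interval{1, 100}, []uint64{50, 30, 20})
	fmt.Println(children)
}
```

Reindexing (the expensive operation the PR optimizes) is what happens when a subtree outgrows the interval it was allocated and sizes must be recomputed and re-propagated.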
Ori Newman
a86255ba51 [NOD-1088] Rename RejectReasion to RejectReason (#775) 2020-06-25 18:08:58 +03:00
Ori Newman
895f67a8d4 [NOD-1035] Use reachability to check finality (#763)
* [NOD-1034] Use reachability to check finality

* [NOD-1034] Add comments and rename variables

* [NOD-1034] Fix comments

* [NOD-1034] Rename checkFinalityRules->checkFinalityViolation

* [NOD-1034] Change isAncestorOf to be exclusive

* [NOD-1034] Make isAncestorOf exclusive and also more explicit, and add TestReachabilityTreeNodeIsAncestorOf
2020-06-22 17:14:59 +03:00
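The "make isAncestorOf exclusive" change above means a node is no longer considered its own ancestor. With interval-based reachability this reduces to a containment check; a minimal sketch, with illustrative field names rather than kaspad's actual types:

```go
package main

import "fmt"

// treeNode carries the reachability interval assigned to a block.
type treeNode struct{ start, end uint64 }

// isAncestorOf returns true if n is a strict (exclusive) ancestor of
// other: other's interval is contained in n's, and the nodes differ.
func (n treeNode) isAncestorOf(other treeNode) bool {
	if n == other {
		return false // exclusive: a node is not its own ancestor
	}
	return n.start <= other.start && other.end <= n.end
}

func main() {
	root := treeNode{1, 100}
	child := treeNode{10, 20}
	fmt.Println(root.isAncestorOf(child), child.isAncestorOf(root), root.isAncestorOf(root))
}
```

This O(1) check is what lets finality be verified without walking the chain, which is the point of NOD-1035.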
Svarog
56e807b663 [NOD-999] Set TargetOutbound=0 to prevent rogue connectionRequests except the one requested (#771) 2020-06-22 14:20:12 +03:00
Mike Zak
af64c7dc2d Merge remote-tracking branch 'origin/v0.4.1-dev' into v0.5.0-dev 2020-06-21 16:15:05 +03:00
Svarog
1e6458973b [NOD-1064] Don't send GetBlockInvsMsg with lowHash = nil (#769) 2020-06-21 09:09:07 +03:00
Ori Newman
7bf8bb5436 [NOD-1017] Move peers.json to db (#733)
* [NOD-1017] Move peers.json to db

* [NOD-1017] Fix tests

* [NOD-1017] Change comments and rename variables

* [NOD-1017] Separate to smaller functions

* [NOD-1017] Renames

* [NOD-1017] Name newAddrManagerForTest return params

* [NOD-1017] Fix handling of non existing peersState

* [NOD-1017] Add getPeersState rpc command

* [NOD-1017] Fix comment

* [NOD-1017] Split long line

* [NOD-1017] Rename getPeersState->getPeerAddresses

* [NOD-1017] Rename getPeerInfo->getConnectedPeerInfo
2020-06-18 12:12:49 +03:00
Svarog
1358911d95 [NOD-1064] Don't send GetBlockInvsMsg with lowHash = nil (#769) 2020-06-17 14:18:00 +03:00
Ori Newman
1271d2f113 [NOD-1038] Give higher priority for requesting missing ancestors when sending a getdata message (#767) 2020-06-17 11:52:10 +03:00
Ori Newman
bc0227b49b [NOD-1059] Always call sm.restartSyncIfNeeded() when getting selectedTip message (#766) 2020-06-16 16:51:38 +03:00
Ori Newman
dc643c2d76 [NOD-833] Remove getBlockTemplate capabilities and move mining address to getBlockTemplate (#762)
* [NOD-833] Remove getBlockTemplate capabilities and move mining address to getBlockTemplate

* [NOD-833] Fix tests

* [NOD-833] Break long lines
2020-06-16 11:01:06 +03:00
Ori Newman
0744e8ebc0 [NOD-1042] Ignore very high orphans (#761)
* [NOD-530] Remove coinbase inputs and add blue score to payload

* [NOD-1042] Ignore very high orphans

* [NOD-1042] Add ban score to an orphan with malformed blue score

* [NOD-1042] Fix log
2020-06-15 16:08:25 +03:00
Ori Newman
d4c9fdf6ac [NOD-614] Add ban score (#760)
* [NOD-614] Copy bitcoin-core ban score policy

* [NOD-614] Add ban score to disconnects

* [NOD-614] Fix wrong branch of AddBanScore

* [NOD-614] Add ban score on sending too many addresses

* [NOD-614] Add comments

* [NOD-614] Remove redundant reject messages

* [NOD-614] Fix log message

* [NOD-614] Ban every node that sends invalid invs

* [NOD-614] Make constants for ban scores
2020-06-15 12:12:38 +03:00
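NOD-614 ports bitcoin-core's ban-score policy: each misbehavior adds points to a per-peer score, and crossing a threshold disconnects and bans the peer. The essential mechanics can be sketched as below; the threshold constant and names are illustrative, not kaspad's actual values.

```go
package main

import "fmt"

// banThreshold is the score at which a peer is banned (illustrative).
const banThreshold = 100

type peer struct {
	banScore uint32
	banned   bool
}

// addBanScore raises the peer's misbehavior score for the given reason
// and reports whether the peer has crossed the ban threshold.
func (p *peer) addBanScore(points uint32, reason string) bool {
	p.banScore += points
	if p.banScore >= banThreshold {
		p.banned = true
	}
	return p.banned
}

func main() {
	p := &peer{}
	p.addBanScore(50, "invalid inv")
	banned := p.addBanScore(50, "invalid inv")
	fmt.Println(p.banScore, banned)
}
```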
stasatdaglabs
829979b6c7 [NOD-1007] Split checkBlockSanity subroutines (#743)
* [NOD-1007] Split checkBlockSanity subroutines.

* [NOD-1007] Put back the comments about performance.

* [NOD-1007] Make all the functions in checkBlockSanity take a *util.Block.

* [NOD-1007] Rename checkBlockTransactionsOrderedBySubnetwork to checkBlockTransactionOrder.

* [NOD-1007] Move a comment up a scope level.
2020-06-15 11:07:52 +03:00
Mike Zak
32cd29bf70 Merge remote-tracking branch 'origin/v0.4.1-dev' into v0.5.0-dev 2020-06-14 12:51:59 +03:00
stasatdaglabs
03cb6cbd4d [NOD-1048] Use a smaller writeBuffer and use disableSeeksCompaction directly. (#759) 2020-06-11 16:11:22 +03:00
Ori Newman
ba4a89488e [NOD-530] Remove coinbase inputs and add blue score to payload (#752)
* [NOD-530] Remove coinbase inputs and add blue score to payload

* [NOD-530] Fix comment

* [NOD-530] Change util.Block private fields comments
2020-06-11 15:54:11 +03:00
Ori Newman
b0d4a92e47 [NOD-1046] Delete redundant conversion from rule error (#755) 2020-06-11 12:19:49 +03:00
stasatdaglabs
3e5a840c5a [NOD-1052] Add a lock around clearOldEntries to protect against concurrent access of utxoDiffStore.loaded. (#758) 2020-06-11 11:56:25 +03:00
Ori Newman
d6d34238d2 [NOD-1049] Allow empty addr messages (#753) 2020-06-10 16:13:13 +03:00
Ori Newman
8bbced5925 [NOD-1051] Don't disconnect from sync peer if it sends an orphan (#757) 2020-06-10 16:05:48 +03:00
stasatdaglabs
20da1b9c9a [NOD-1048] Make leveldb compaction much less frequent (#756)
* [NOD-1048] Make leveldb compaction much less frequent. Also, allocate an entire gigabyte for leveldb's blockCache and writeBuffer.

* [NOD-1048] Implement changing the options for testing purposes.

* [NOD-1048] Rename originalOptions to originalLDBOptions.

* [NOD-1048] Add a comment.
2020-06-10 16:05:02 +03:00
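The "changing the options for testing purposes" step in NOD-1048 (with its originalLDBOptions rename) suggests a common Go pattern: a package-level options factory that tests swap out and restore. A self-contained sketch of that pattern, with made-up option fields standing in for the real leveldb tuning:

```go
package main

import "fmt"

// ldbOptions stands in for the real leveldb options struct.
type ldbOptions struct {
	writeBufferMiB         int
	disableSeeksCompaction bool
}

// defaultOptions mirrors production tuning.
func defaultOptions() ldbOptions {
	return ldbOptions{writeBufferMiB: 512, disableSeeksCompaction: true}
}

// Options is the hook tests override; production code always calls it.
var Options = defaultOptions

func main() {
	original := Options // keep the original to restore after the test
	Options = func() ldbOptions { return ldbOptions{writeBufferMiB: 8} }
	fmt.Println(Options().writeBufferMiB) // small buffer while overridden
	Options = original
	fmt.Println(Options().writeBufferMiB)
}
```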
Ori Newman
b6a6e577c4 [NOD-1013] Don't block handleBlockDAGNotification when calling peerNotifier (#749)
* [NOD-1013] Don't block handleBlockDAGNotification when calling peerNotifier

* [NOD-1013] Add comment
2020-06-09 12:12:18 +03:00
Mike Zak
84888221ae Merge remote-tracking branch 'origin/v0.4.1-dev' into v0.5.0-dev 2020-06-08 12:23:33 +03:00
stasatdaglabs
222477b33e [NOD-1040] Don't remove DAG tips from the diffStore's loaded set (#750)
* [NOD-1040] Don't remove DAG tips from the diffStore's loaded set

* [NOD-1040] Remove a debug log.
2020-06-08 12:14:58 +03:00
Mike Zak
4a50d94633 Update to v0.4.1 2020-06-07 17:54:30 +03:00
stasatdaglabs
b4dba782fb [NOD-1040] Increase maxBlueScoreDifferenceToKeepLoaded to 1500 (#746)
* [NOD-1040] Don't remove DAG tips from the diffStore's loaded set

* [NOD-1040] Fix TestClearOldEntries.

* Revert "[NOD-1040] Fix TestClearOldEntries."

This reverts commit e0705814

* Revert "[NOD-1040] Don't remove DAG tips from the diffStore's loaded set"

This reverts commit d3eba1c1

* [NOD-1040] Increase maxBlueScoreDifferenceToKeepLoaded to 1500.
2020-06-07 17:50:57 +03:00
stasatdaglabs
9c78a797e4 [NOD-1041] Call outboundPeerConnected and outboundPeerConnectionFailed directly instead of routing them through peerHandler (#748)
* [NOD-1041] Fix a deadlock between connHandler and peerHandler.

* [NOD-1041] Simplified the fix.
2020-06-07 16:35:48 +03:00
Ori Newman
35c733a4c1 [NOD-970] Add isSyncing flag (#747)
* [NOD-970] Add isSyncing flag

* [NOD-970] Rename shouldSendSelectedTip->peerShouldSendSelectedTip
2020-06-07 16:31:17 +03:00
Mike Zak
e5810d023e Merge remote-tracking branch 'origin/v0.4.1-dev' into v0.5.0-dev 2020-06-07 14:21:49 +03:00
stasatdaglabs
96930bd6ea [NOD-1039] Remove the call to SetGCPercent. (#745) 2020-06-07 09:19:28 +03:00
Mike Zak
e09ce32146 Merge remote-tracking branch 'origin/v0.4.1-dev' into v0.5.0-dev 2020-06-04 15:11:40 +03:00
stasatdaglabs
d15c009b3c [NOD-1030] Disconnect from syncPeers that send orphan blocks (#744)
* [NOD-1030] Disconnect from syncPeers that send orphan blocks.

* [NOD-1030] Remove debug log.

* [NOD-1030] Remove unnecessary call to stopSyncFromPeer.
2020-06-04 15:11:05 +03:00
stasatdaglabs
95c8b8e9d8 [NOD-1023] Rename isCurrent/current to isSynced/synced (#742)
* [NOD-1023] Rename BlockDAG.isCurrent to isSynced.

* [NOD-1023] Rename SyncManager.current to synced.

* [NOD-1023] Fix comments.
2020-06-03 16:04:14 +03:00
stasatdaglabs
2d798a5611 [NOD-1020] Do send addr response to getaddr messages even if there aren't any addresses to send. (#740) 2020-06-01 14:09:18 +03:00
stasatdaglabs
3a22249be9 [NOD-1012] Fix erroneous partial node check (#739)
* [NOD-1012] Fix bad partial node check.

* [NOD-1012] Fix unit tests.
2020-05-31 14:13:30 +03:00
stasatdaglabs
a4c1898624 [NOD-1012] Disable subnetworks (#731)
* [NOD-1012] Disallow non-native/coinbase transactions.

* [NOD-1012] Fix logic error.

* [NOD-1012] Fix/skip tests and remove --subnetwork.

* [NOD-1012] Disconnect from non-native peers.

* [NOD-1012] Don't skip subnetwork tests.

* [NOD-1012] Use EnableNonNativeSubnetworks in peer.go.

* [NOD-1012] Set EnableNonNativeSubnetworks = true in the tests that need them rather than by default in Simnet.
2020-05-31 10:50:46 +03:00
Ori Newman
672f02490a [NOD-763] Change genesis version (#737) 2020-05-28 09:55:59 +03:00
stasatdaglabs
fc00275d9c [NOD-553] Get rid of base58 (#735)
* [NOD-553] Get rid of wif.

* [NOD-553] Get rid of base58.
2020-05-27 17:37:03 +03:00
Ori Newman
3a4571d671 [NOD-965] Make LookupNode return boolean (#729)
* [NOD-965] Make dag.index.LookupNode return false if node is not found

* [NOD-965] Rename blockDAG->dag

* [NOD-965] Remove irrelevant test

* [NOD-965] Use bi.index's ok in LookupNode
2020-05-25 12:51:30 +03:00
Ori Newman
96052ac69a [NOD-809] Change fee rate to fee per megagram (#730) 2020-05-24 16:59:37 +03:00
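NOD-809 expresses fee rates per megagram (1,000,000 grams) of transaction mass instead of per gram, keeping typical rates in readable whole numbers. The conversion is a single scaling step; the function and unit names below are illustrative:

```go
package main

import "fmt"

// feePerMegagram scales a transaction's fee to a rate per 1,000,000
// grams of transaction mass.
func feePerMegagram(fee, massInGrams uint64) uint64 {
	return fee * 1_000_000 / massInGrams
}

func main() {
	// A 1500-gram transaction paying a fee of 3000 base units.
	fmt.Println(feePerMegagram(3000, 1500)) // 2000000
}
```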
104 changed files with 2909 additions and 2474 deletions


@@ -5,16 +5,16 @@
package addrmgr
import (
"bytes"
"container/list"
crand "crypto/rand" // for seeding
"encoding/binary"
"encoding/json"
"encoding/gob"
"github.com/kaspanet/kaspad/dbaccess"
"github.com/pkg/errors"
"io"
"math/rand"
"net"
"os"
"path/filepath"
"strconv"
"sync"
"sync/atomic"
@@ -26,14 +26,13 @@ import (
"github.com/kaspanet/kaspad/wire"
)
type newBucket [newBucketCount]map[string]*KnownAddress
type triedBucket [triedBucketCount]*list.List
type newBucket [NewBucketCount]map[string]*KnownAddress
type triedBucket [TriedBucketCount]*list.List
// AddrManager provides a concurrency safe address manager for caching potential
// peers on the Kaspa network.
type AddrManager struct {
mtx sync.Mutex
peersFile string
lookupFunc func(string) ([]net.IP, error)
rand *rand.Rand
key [32]byte
@@ -66,10 +65,12 @@ type serializedKnownAddress struct {
// no refcount or tried, that is available from context.
}
type serializedNewBucket [newBucketCount][]string
type serializedTriedBucket [triedBucketCount][]string
type serializedNewBucket [NewBucketCount][]string
type serializedTriedBucket [TriedBucketCount][]string
type serializedAddrManager struct {
// PeersStateForSerialization is the data model that is used to
// serialize the peers state to any encoding.
type PeersStateForSerialization struct {
Version int
Key [32]byte
Addresses []*serializedKnownAddress
@@ -118,17 +119,17 @@ const (
// tried address bucket.
triedBucketSize = 256
// triedBucketCount is the number of buckets we split tried
// TriedBucketCount is the number of buckets we split tried
// addresses over.
triedBucketCount = 64
TriedBucketCount = 64
// newBucketSize is the maximum number of addresses in each new address
// bucket.
newBucketSize = 64
// newBucketCount is the number of buckets that we spread new addresses
// NewBucketCount is the number of buckets that we spread new addresses
// over.
newBucketCount = 1024
NewBucketCount = 1024
// triedBucketsPerGroup is the number of tried buckets over which an
// address group will be spread.
@@ -162,17 +163,17 @@ const (
// to a getAddr. If we have less than this amount, we send everything.
getAddrMin = 50
// getAddrMax is the most addresses that we will send in response
// GetAddrMax is the most addresses that we will send in response
// to a getAddr (in practise the most addresses we will return from a
// call to AddressCache()).
getAddrMax = 2500
GetAddrMax = 2500
// getAddrPercent is the percentage of total addresses known that we
// will share with a call to AddressCache.
getAddrPercent = 23
// serialisationVersion is the current version of the on-disk format.
serialisationVersion = 1
// serializationVersion is the current version of the on-disk format.
serializationVersion = 1
)
// updateAddress is a helper function to either update an address already known
@@ -392,7 +393,7 @@ func (a *AddrManager) getNewBucket(netAddr, srcAddr *wire.NetAddress) int {
data2 = append(data2, hashbuf[:]...)
hash2 := daghash.DoubleHashB(data2)
return int(binary.LittleEndian.Uint64(hash2) % newBucketCount)
return int(binary.LittleEndian.Uint64(hash2) % NewBucketCount)
}
func (a *AddrManager) getTriedBucket(netAddr *wire.NetAddress) int {
@@ -411,7 +412,7 @@ func (a *AddrManager) getTriedBucket(netAddr *wire.NetAddress) int {
data2 = append(data2, hashbuf[:]...)
hash2 := daghash.DoubleHashB(data2)
return int(binary.LittleEndian.Uint64(hash2) % triedBucketCount)
return int(binary.LittleEndian.Uint64(hash2) % TriedBucketCount)
}
// addressHandler is the main handler for the address manager. It must be run
@@ -423,30 +424,62 @@ out:
for {
select {
case <-dumpAddressTicker.C:
a.savePeers()
err := a.savePeers()
if err != nil {
panic(errors.Wrap(err, "error saving peers"))
}
case <-a.quit:
break out
}
}
a.savePeers()
err := a.savePeers()
if err != nil {
panic(errors.Wrap(err, "error saving peers"))
}
a.wg.Done()
log.Trace("Address handler done")
}
// savePeers saves all the known addresses to a file so they can be read back
// savePeers saves all the known addresses to the database so they can be read back
// in at next run.
func (a *AddrManager) savePeers() {
func (a *AddrManager) savePeers() error {
serializedPeersState, err := a.serializePeersState()
if err != nil {
return err
}
return dbaccess.StorePeersState(dbaccess.NoTx(), serializedPeersState)
}
func (a *AddrManager) serializePeersState() ([]byte, error) {
peersState, err := a.PeersStateForSerialization()
if err != nil {
return nil, err
}
w := &bytes.Buffer{}
encoder := gob.NewEncoder(w)
err = encoder.Encode(&peersState)
if err != nil {
return nil, errors.Wrap(err, "failed to encode peers state")
}
return w.Bytes(), nil
}
// PeersStateForSerialization returns the data model that is used to serialize the peers state to any encoding.
func (a *AddrManager) PeersStateForSerialization() (*PeersStateForSerialization, error) {
a.mtx.Lock()
defer a.mtx.Unlock()
// First we make a serialisable datastructure so we can encode it to
// json.
sam := new(serializedAddrManager)
sam.Version = serialisationVersion
copy(sam.Key[:], a.key[:])
// First we make a serializable data structure so we can encode it to
// gob.
peersState := new(PeersStateForSerialization)
peersState.Version = serializationVersion
copy(peersState.Key[:], a.key[:])
sam.Addresses = make([]*serializedKnownAddress, len(a.addrIndex))
peersState.Addresses = make([]*serializedKnownAddress, len(a.addrIndex))
i := 0
for k, v := range a.addrIndex {
ska := new(serializedKnownAddress)
@@ -463,119 +496,104 @@ func (a *AddrManager) savePeers() {
ska.LastSuccess = v.lastsuccess.Unix()
// Tried and refs are implicit in the rest of the structure
// and will be worked out from context on unserialisation.
sam.Addresses[i] = ska
peersState.Addresses[i] = ska
i++
}
sam.NewBuckets = make(map[string]*serializedNewBucket)
peersState.NewBuckets = make(map[string]*serializedNewBucket)
for subnetworkID := range a.addrNew {
subnetworkIDStr := subnetworkID.String()
sam.NewBuckets[subnetworkIDStr] = &serializedNewBucket{}
peersState.NewBuckets[subnetworkIDStr] = &serializedNewBucket{}
for i := range a.addrNew[subnetworkID] {
sam.NewBuckets[subnetworkIDStr][i] = make([]string, len(a.addrNew[subnetworkID][i]))
peersState.NewBuckets[subnetworkIDStr][i] = make([]string, len(a.addrNew[subnetworkID][i]))
j := 0
for k := range a.addrNew[subnetworkID][i] {
sam.NewBuckets[subnetworkIDStr][i][j] = k
peersState.NewBuckets[subnetworkIDStr][i][j] = k
j++
}
}
}
for i := range a.addrNewFullNodes {
sam.NewBucketFullNodes[i] = make([]string, len(a.addrNewFullNodes[i]))
peersState.NewBucketFullNodes[i] = make([]string, len(a.addrNewFullNodes[i]))
j := 0
for k := range a.addrNewFullNodes[i] {
sam.NewBucketFullNodes[i][j] = k
peersState.NewBucketFullNodes[i][j] = k
j++
}
}
sam.TriedBuckets = make(map[string]*serializedTriedBucket)
peersState.TriedBuckets = make(map[string]*serializedTriedBucket)
for subnetworkID := range a.addrTried {
subnetworkIDStr := subnetworkID.String()
sam.TriedBuckets[subnetworkIDStr] = &serializedTriedBucket{}
peersState.TriedBuckets[subnetworkIDStr] = &serializedTriedBucket{}
for i := range a.addrTried[subnetworkID] {
sam.TriedBuckets[subnetworkIDStr][i] = make([]string, a.addrTried[subnetworkID][i].Len())
peersState.TriedBuckets[subnetworkIDStr][i] = make([]string, a.addrTried[subnetworkID][i].Len())
j := 0
for e := a.addrTried[subnetworkID][i].Front(); e != nil; e = e.Next() {
ka := e.Value.(*KnownAddress)
sam.TriedBuckets[subnetworkIDStr][i][j] = NetAddressKey(ka.na)
peersState.TriedBuckets[subnetworkIDStr][i][j] = NetAddressKey(ka.na)
j++
}
}
}
for i := range a.addrTriedFullNodes {
sam.TriedBucketFullNodes[i] = make([]string, a.addrTriedFullNodes[i].Len())
peersState.TriedBucketFullNodes[i] = make([]string, a.addrTriedFullNodes[i].Len())
j := 0
for e := a.addrTriedFullNodes[i].Front(); e != nil; e = e.Next() {
ka := e.Value.(*KnownAddress)
sam.TriedBucketFullNodes[i][j] = NetAddressKey(ka.na)
peersState.TriedBucketFullNodes[i][j] = NetAddressKey(ka.na)
j++
}
}
w, err := os.Create(a.peersFile)
if err != nil {
log.Errorf("Error opening file %s: %s", a.peersFile, err)
return
}
enc := json.NewEncoder(w)
defer w.Close()
if err := enc.Encode(&sam); err != nil {
log.Errorf("Failed to encode file %s: %s", a.peersFile, err)
return
}
return peersState, nil
}
// loadPeers loads the known address from the saved file. If empty, missing, or
// malformed file, just don't load anything and start fresh
func (a *AddrManager) loadPeers() {
// loadPeers loads the known address from the database. If missing,
// just don't load anything and start fresh.
func (a *AddrManager) loadPeers() error {
a.mtx.Lock()
defer a.mtx.Unlock()
err := a.deserializePeers(a.peersFile)
if err != nil {
log.Errorf("Failed to parse file %s: %s", a.peersFile, err)
// if it is invalid we nuke the old one unconditionally.
err = os.Remove(a.peersFile)
if err != nil {
log.Warnf("Failed to remove corrupt peers file %s: %s",
a.peersFile, err)
}
serializedPeerState, err := dbaccess.FetchPeersState(dbaccess.NoTx())
if dbaccess.IsNotFoundError(err) {
a.reset()
return
}
log.Infof("Loaded %d addresses from file '%s'", a.totalNumAddresses(), a.peersFile)
}
func (a *AddrManager) deserializePeers(filePath string) error {
_, err := os.Stat(filePath)
if os.IsNotExist(err) {
log.Info("No peers state was found in the database. Created a new one", a.totalNumAddresses())
return nil
}
r, err := os.Open(filePath)
if err != nil {
return errors.Errorf("%s error opening file: %s", filePath, err)
}
defer r.Close()
var sam serializedAddrManager
dec := json.NewDecoder(r)
err = dec.Decode(&sam)
if err != nil {
return errors.Errorf("error reading %s: %s", filePath, err)
return err
}
if sam.Version != serialisationVersion {
err = a.deserializePeersState(serializedPeerState)
if err != nil {
return err
}
log.Infof("Loaded %d addresses from database", a.totalNumAddresses())
return nil
}
func (a *AddrManager) deserializePeersState(serializedPeerState []byte) error {
var peersState PeersStateForSerialization
r := bytes.NewBuffer(serializedPeerState)
dec := gob.NewDecoder(r)
err := dec.Decode(&peersState)
if err != nil {
return errors.Wrap(err, "error deserializing peers state")
}
if peersState.Version != serializationVersion {
return errors.Errorf("unknown version %d in serialized "+
"addrmanager", sam.Version)
"peers state", peersState.Version)
}
copy(a.key[:], sam.Key[:])
copy(a.key[:], peersState.Key[:])
for _, v := range sam.Addresses {
for _, v := range peersState.Addresses {
ka := new(KnownAddress)
ka.na, err = a.DeserializeNetAddress(v.Addr)
if err != nil {
@@ -600,12 +618,12 @@ func (a *AddrManager) deserializePeers(filePath string) error {
a.addrIndex[NetAddressKey(ka.na)] = ka
}
for subnetworkIDStr := range sam.NewBuckets {
for subnetworkIDStr := range peersState.NewBuckets {
subnetworkID, err := subnetworkid.NewFromStr(subnetworkIDStr)
if err != nil {
return err
}
for i, subnetworkNewBucket := range sam.NewBuckets[subnetworkIDStr] {
for i, subnetworkNewBucket := range peersState.NewBuckets[subnetworkIDStr] {
for _, val := range subnetworkNewBucket {
ka, ok := a.addrIndex[val]
if !ok {
@@ -622,7 +640,7 @@ func (a *AddrManager) deserializePeers(filePath string) error {
}
}
for i, newBucket := range sam.NewBucketFullNodes {
for i, newBucket := range peersState.NewBucketFullNodes {
for _, val := range newBucket {
ka, ok := a.addrIndex[val]
if !ok {
@@ -638,12 +656,12 @@ func (a *AddrManager) deserializePeers(filePath string) error {
}
}
for subnetworkIDStr := range sam.TriedBuckets {
for subnetworkIDStr := range peersState.TriedBuckets {
subnetworkID, err := subnetworkid.NewFromStr(subnetworkIDStr)
if err != nil {
return err
}
for i, subnetworkTriedBucket := range sam.TriedBuckets[subnetworkIDStr] {
for i, subnetworkTriedBucket := range peersState.TriedBuckets[subnetworkIDStr] {
for _, val := range subnetworkTriedBucket {
ka, ok := a.addrIndex[val]
if !ok {
@@ -658,7 +676,7 @@ func (a *AddrManager) deserializePeers(filePath string) error {
}
}
for i, triedBucket := range sam.TriedBucketFullNodes {
for i, triedBucket := range peersState.TriedBucketFullNodes {
for _, val := range triedBucket {
ka, ok := a.addrIndex[val]
if !ok {
@@ -704,20 +722,24 @@ func (a *AddrManager) DeserializeNetAddress(addr string) (*wire.NetAddress, erro
// Start begins the core address handler which manages a pool of known
// addresses, timeouts, and interval based writes.
func (a *AddrManager) Start() {
func (a *AddrManager) Start() error {
// Already started?
if atomic.AddInt32(&a.started, 1) != 1 {
return
return nil
}
log.Trace("Starting address manager")
// Load peers we already know about from file.
a.loadPeers()
// Load peers we already know about from the database.
err := a.loadPeers()
if err != nil {
return err
}
// Start the address ticker to save addresses periodically.
a.wg.Add(1)
spawn(a.addressHandler)
return nil
}
// Stop gracefully shuts down the address manager by stopping the main handler.
@@ -839,8 +861,8 @@ func (a *AddrManager) AddressCache(includeAllSubnetworks bool, subnetworkID *sub
}
numAddresses := len(allAddr) * getAddrPercent / 100
if numAddresses > getAddrMax {
numAddresses = getAddrMax
if numAddresses > GetAddrMax {
numAddresses = GetAddrMax
}
if len(allAddr) < getAddrMin {
numAddresses = len(allAddr)
@@ -1333,9 +1355,8 @@ func (a *AddrManager) GetBestLocalAddress(remoteAddr *wire.NetAddress) *wire.Net
// New returns a new Kaspa address manager.
// Use Start to begin processing asynchronous address updates.
func New(dataDir string, lookupFunc func(string) ([]net.IP, error), subnetworkID *subnetworkid.SubnetworkID) *AddrManager {
func New(lookupFunc func(string) ([]net.IP, error), subnetworkID *subnetworkid.SubnetworkID) *AddrManager {
am := AddrManager{
peersFile: filepath.Join(dataDir, "peers.json"),
lookupFunc: lookupFunc,
rand: rand.New(rand.NewSource(time.Now().UnixNano())),
quit: make(chan struct{}),
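The diff above replaces JSON-on-disk peer persistence with gob-encoded state stored in the database. The encode/decode round trip mirrors serializePeersState and deserializePeersState; this sketch uses a cut-down stand-in for PeersStateForSerialization rather than the real struct:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// peersState is a cut-down stand-in for PeersStateForSerialization.
// gob only encodes exported fields, hence the capitalized names.
type peersState struct {
	Version   int
	Key       [32]byte
	Addresses []string
}

// serialize gob-encodes the state into bytes suitable for DB storage.
func serialize(state *peersState) ([]byte, error) {
	var w bytes.Buffer
	if err := gob.NewEncoder(&w).Encode(state); err != nil {
		return nil, err
	}
	return w.Bytes(), nil
}

// deserialize decodes bytes previously produced by serialize.
func deserialize(data []byte) (*peersState, error) {
	var state peersState
	if err := gob.NewDecoder(bytes.NewBuffer(data)).Decode(&state); err != nil {
		return nil, err
	}
	return &state, nil
}

func main() {
	in := &peersState{Version: 1, Addresses: []string{"1.2.3.4:16111"}}
	data, _ := serialize(in)
	out, _ := deserialize(data)
	fmt.Println(out.Version, out.Addresses[0])
}
```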


@@ -8,7 +8,9 @@ import (
"fmt"
"github.com/kaspanet/kaspad/config"
"github.com/kaspanet/kaspad/dagconfig"
"github.com/kaspanet/kaspad/dbaccess"
"github.com/kaspanet/kaspad/util/subnetworkid"
"io/ioutil"
"net"
"reflect"
"testing"
@@ -101,14 +103,41 @@ func addNaTest(ip string, port uint16, want string) {
naTests = append(naTests, test)
}
func lookupFunc(host string) ([]net.IP, error) {
func lookupFuncForTest(host string) ([]net.IP, error) {
return nil, errors.New("not implemented")
}
func newAddrManagerForTest(t *testing.T, testName string,
localSubnetworkID *subnetworkid.SubnetworkID) (addressManager *AddrManager, teardown func()) {
dbPath, err := ioutil.TempDir("", testName)
if err != nil {
t.Fatalf("Error creating temporary directory: %s", err)
}
err = dbaccess.Open(dbPath)
if err != nil {
t.Fatalf("error creating db: %s", err)
}
addressManager = New(lookupFuncForTest, localSubnetworkID)
return addressManager, func() {
err := dbaccess.Close()
if err != nil {
t.Fatalf("error closing the database: %s", err)
}
}
}
func TestStartStop(t *testing.T) {
n := New("teststartstop", lookupFunc, nil)
n.Start()
err := n.Stop()
amgr, teardown := newAddrManagerForTest(t, "TestStartStop", nil)
defer teardown()
err := amgr.Start()
if err != nil {
t.Fatalf("Address Manager failed to start: %v", err)
}
err = amgr.Stop()
if err != nil {
t.Fatalf("Address Manager failed to stop: %v", err)
}
@@ -148,7 +177,8 @@ func TestAddAddressByIP(t *testing.T) {
},
}
amgr := New("testaddressbyip", nil, nil)
amgr, teardown := newAddrManagerForTest(t, "TestAddAddressByIP", nil)
defer teardown()
for i, test := range tests {
err := amgr.AddAddressByIP(test.addrIP, nil)
if test.err != nil && err == nil {
@@ -213,7 +243,8 @@ func TestAddLocalAddress(t *testing.T) {
true,
},
}
amgr := New("testaddlocaladdress", nil, nil)
amgr, teardown := newAddrManagerForTest(t, "TestAddLocalAddress", nil)
defer teardown()
for x, test := range tests {
result := amgr.AddLocalAddress(&test.address, test.priority)
if result == nil && !test.valid {
@@ -239,21 +270,22 @@ func TestAttempt(t *testing.T) {
})
defer config.SetActiveConfig(originalActiveCfg)
n := New("testattempt", lookupFunc, nil)
amgr, teardown := newAddrManagerForTest(t, "TestAttempt", nil)
defer teardown()
// Add a new address and get it
err := n.AddAddressByIP(someIP+":8333", nil)
err := amgr.AddAddressByIP(someIP+":8333", nil)
if err != nil {
t.Fatalf("Adding address failed: %v", err)
}
ka := n.GetAddress()
ka := amgr.GetAddress()
if !ka.LastAttempt().IsZero() {
t.Errorf("Address should not have attempts, but does")
}
na := ka.NetAddress()
n.Attempt(na)
amgr.Attempt(na)
if ka.LastAttempt().IsZero() {
t.Errorf("Address should have an attempt, but does not")
@@ -270,19 +302,20 @@ func TestConnected(t *testing.T) {
})
defer config.SetActiveConfig(originalActiveCfg)
n := New("testconnected", lookupFunc, nil)
amgr, teardown := newAddrManagerForTest(t, "TestConnected", nil)
defer teardown()
// Add a new address and get it
err := n.AddAddressByIP(someIP+":8333", nil)
err := amgr.AddAddressByIP(someIP+":8333", nil)
if err != nil {
t.Fatalf("Adding address failed: %v", err)
}
ka := n.GetAddress()
ka := amgr.GetAddress()
na := ka.NetAddress()
// make it an hour ago
na.Timestamp = time.Unix(time.Now().Add(time.Hour*-1).Unix(), 0)
n.Connected(na)
amgr.Connected(na)
if !ka.NetAddress().Timestamp.After(na.Timestamp) {
t.Errorf("Address should have a new timestamp, but does not")
@@ -299,9 +332,10 @@ func TestNeedMoreAddresses(t *testing.T) {
})
defer config.SetActiveConfig(originalActiveCfg)
n := New("testneedmoreaddresses", lookupFunc, nil)
amgr, teardown := newAddrManagerForTest(t, "TestNeedMoreAddresses", nil)
defer teardown()
addrsToAdd := 1500
b := n.NeedMoreAddresses()
b := amgr.NeedMoreAddresses()
if !b {
t.Errorf("Expected that we need more addresses")
}
@@ -310,7 +344,7 @@ func TestNeedMoreAddresses(t *testing.T) {
var err error
for i := 0; i < addrsToAdd; i++ {
s := fmt.Sprintf("%d.%d.173.147:8333", i/128+60, i%128+60)
addrs[i], err = n.DeserializeNetAddress(s)
addrs[i], err = amgr.DeserializeNetAddress(s)
if err != nil {
t.Errorf("Failed to turn %s into an address: %v", s, err)
}
@@ -318,13 +352,13 @@ func TestNeedMoreAddresses(t *testing.T) {
srcAddr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 8333, 0)
n.AddAddresses(addrs, srcAddr, nil)
numAddrs := n.TotalNumAddresses()
amgr.AddAddresses(addrs, srcAddr, nil)
numAddrs := amgr.TotalNumAddresses()
if numAddrs > addrsToAdd {
t.Errorf("Number of addresses is too many %d vs %d", numAddrs, addrsToAdd)
}
b = n.NeedMoreAddresses()
b = amgr.NeedMoreAddresses()
if b {
t.Errorf("Expected that we don't need more addresses")
}
@@ -340,7 +374,8 @@ func TestGood(t *testing.T) {
})
defer config.SetActiveConfig(originalActiveCfg)
n := New("testgood", lookupFunc, nil)
amgr, teardown := newAddrManagerForTest(t, "TestGood", nil)
defer teardown()
addrsToAdd := 64 * 64
addrs := make([]*wire.NetAddress, addrsToAdd)
subnetworkCount := 32
@@ -349,7 +384,7 @@ func TestGood(t *testing.T) {
var err error
for i := 0; i < addrsToAdd; i++ {
s := fmt.Sprintf("%d.173.147.%d:8333", i/64+60, i%64+60)
addrs[i], err = n.DeserializeNetAddress(s)
addrs[i], err = amgr.DeserializeNetAddress(s)
if err != nil {
t.Errorf("Failed to turn %s into an address: %v", s, err)
}
@@ -361,24 +396,24 @@ func TestGood(t *testing.T) {
srcAddr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 8333, 0)
n.AddAddresses(addrs, srcAddr, nil)
amgr.AddAddresses(addrs, srcAddr, nil)
for i, addr := range addrs {
n.Good(addr, subnetworkIDs[i%subnetworkCount])
amgr.Good(addr, subnetworkIDs[i%subnetworkCount])
}
numAddrs := n.TotalNumAddresses()
numAddrs := amgr.TotalNumAddresses()
if numAddrs >= addrsToAdd {
t.Errorf("Number of addresses is too many: %d vs %d", numAddrs, addrsToAdd)
}
numCache := len(n.AddressCache(true, nil))
numCache := len(amgr.AddressCache(true, nil))
if numCache == 0 || numCache >= numAddrs/4 {
t.Errorf("Number of addresses in cache: got %d, want positive and less than %d",
numCache, numAddrs/4)
}
for i := 0; i < subnetworkCount; i++ {
numCache = len(n.AddressCache(false, subnetworkIDs[i]))
numCache = len(amgr.AddressCache(false, subnetworkIDs[i]))
if numCache == 0 || numCache >= numAddrs/subnetworkCount {
t.Errorf("Number of addresses in subnetwork cache: got %d, want positive and less than %d",
numCache, numAddrs/4/subnetworkCount)
@@ -396,17 +431,18 @@ func TestGoodChangeSubnetworkID(t *testing.T) {
})
defer config.SetActiveConfig(originalActiveCfg)
n := New("test_good_change_subnetwork_id", lookupFunc, nil)
amgr, teardown := newAddrManagerForTest(t, "TestGoodChangeSubnetworkID", nil)
defer teardown()
addr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 8333, 0)
addrKey := NetAddressKey(addr)
srcAddr := wire.NewNetAddressIPPort(net.IPv4(173, 144, 173, 111), 8333, 0)
oldSubnetwork := subnetworkid.SubnetworkIDNative
n.AddAddress(addr, srcAddr, oldSubnetwork)
n.Good(addr, oldSubnetwork)
amgr.AddAddress(addr, srcAddr, oldSubnetwork)
amgr.Good(addr, oldSubnetwork)
// make sure address was saved to addrIndex under oldSubnetwork
ka := n.find(addr)
ka := amgr.find(addr)
if ka == nil {
t.Fatalf("Address was not found after first time .Good called")
}
@@ -415,7 +451,7 @@ func TestGoodChangeSubnetworkID(t *testing.T) {
}
// make sure address was added to correct bucket under oldSubnetwork
bucket := n.addrTried[*oldSubnetwork][n.getTriedBucket(addr)]
bucket := amgr.addrTried[*oldSubnetwork][amgr.getTriedBucket(addr)]
wasFound := false
for e := bucket.Front(); e != nil; e = e.Next() {
if NetAddressKey(e.Value.(*KnownAddress).NetAddress()) == addrKey {
@@ -428,10 +464,10 @@ func TestGoodChangeSubnetworkID(t *testing.T) {
// now call .Good again with a different subnetwork
newSubnetwork := subnetworkid.SubnetworkIDRegistry
n.Good(addr, newSubnetwork)
amgr.Good(addr, newSubnetwork)
// make sure address was updated in addrIndex under newSubnetwork
ka = n.find(addr)
ka = amgr.find(addr)
if ka == nil {
t.Fatalf("Address was not found after second time .Good called")
}
@@ -440,7 +476,7 @@ func TestGoodChangeSubnetworkID(t *testing.T) {
}
// make sure address was removed from bucket under oldSubnetwork
bucket = n.addrTried[*oldSubnetwork][n.getTriedBucket(addr)]
bucket = amgr.addrTried[*oldSubnetwork][amgr.getTriedBucket(addr)]
wasFound = false
for e := bucket.Front(); e != nil; e = e.Next() {
if NetAddressKey(e.Value.(*KnownAddress).NetAddress()) == addrKey {
@@ -452,7 +488,7 @@ func TestGoodChangeSubnetworkID(t *testing.T) {
}
// make sure address was added to correct bucket under newSubnetwork
bucket = n.addrTried[*newSubnetwork][n.getTriedBucket(addr)]
bucket = amgr.addrTried[*newSubnetwork][amgr.getTriedBucket(addr)]
wasFound = false
for e := bucket.Front(); e != nil; e = e.Next() {
if NetAddressKey(e.Value.(*KnownAddress).NetAddress()) == addrKey {
@@ -475,34 +511,35 @@ func TestGetAddress(t *testing.T) {
defer config.SetActiveConfig(originalActiveCfg)
localSubnetworkID := &subnetworkid.SubnetworkID{0xff}
n := New("testgetaddress", lookupFunc, localSubnetworkID)
amgr, teardown := newAddrManagerForTest(t, "TestGetAddress", localSubnetworkID)
defer teardown()
// Get an address from an empty set (should error)
if rv := n.GetAddress(); rv != nil {
if rv := amgr.GetAddress(); rv != nil {
t.Errorf("GetAddress failed: got: %v want: %v\n", rv, nil)
}
// Add a new address and get it
err := n.AddAddressByIP(someIP+":8332", localSubnetworkID)
err := amgr.AddAddressByIP(someIP+":8332", localSubnetworkID)
if err != nil {
t.Fatalf("Adding address failed: %v", err)
}
ka := n.GetAddress()
ka := amgr.GetAddress()
if ka == nil {
t.Fatalf("Did not get an address where there is one in the pool")
}
n.Attempt(ka.NetAddress())
amgr.Attempt(ka.NetAddress())
// Checks that we don't get it if we find that it has other subnetwork ID than expected.
actualSubnetworkID := &subnetworkid.SubnetworkID{0xfe}
n.Good(ka.NetAddress(), actualSubnetworkID)
ka = n.GetAddress()
amgr.Good(ka.NetAddress(), actualSubnetworkID)
ka = amgr.GetAddress()
if ka != nil {
t.Errorf("Didn't expect to get an address because there shouldn't be any address from subnetwork ID %s or nil", localSubnetworkID)
}
// Checks that the total number of addresses was incremented although the new address is not a full node or a partial node of the same subnetwork as the local node.
numAddrs := n.TotalNumAddresses()
numAddrs := amgr.TotalNumAddresses()
if numAddrs != 1 {
t.Errorf("Wrong number of addresses: got %d, want %d", numAddrs, 1)
}
@@ -510,11 +547,11 @@ func TestGetAddress(t *testing.T) {
// Now we repeat the same process, but now the address has the expected subnetwork ID.
// Add a new address and get it
err = n.AddAddressByIP(someIP+":8333", localSubnetworkID)
err = amgr.AddAddressByIP(someIP+":8333", localSubnetworkID)
if err != nil {
t.Fatalf("Adding address failed: %v", err)
}
ka = n.GetAddress()
ka = amgr.GetAddress()
if ka == nil {
t.Fatalf("Did not get an address where there is one in the pool")
}
@@ -524,11 +561,11 @@ func TestGetAddress(t *testing.T) {
if !ka.SubnetworkID().IsEqual(localSubnetworkID) {
t.Errorf("Wrong Subnetwork ID: got %v, want %v", *ka.SubnetworkID(), localSubnetworkID)
}
n.Attempt(ka.NetAddress())
amgr.Attempt(ka.NetAddress())
// Mark this as a good address and get it
n.Good(ka.NetAddress(), localSubnetworkID)
ka = n.GetAddress()
amgr.Good(ka.NetAddress(), localSubnetworkID)
ka = amgr.GetAddress()
if ka == nil {
t.Fatalf("Did not get an address where there is one in the pool")
}
@@ -539,7 +576,7 @@ func TestGetAddress(t *testing.T) {
t.Errorf("Wrong Subnetwork ID: got %v, want %v", ka.SubnetworkID(), localSubnetworkID)
}
numAddrs = n.TotalNumAddresses()
numAddrs = amgr.TotalNumAddresses()
if numAddrs != 2 {
t.Errorf("Wrong number of addresses: got %d, want %d", numAddrs, 2)
}
@@ -604,7 +641,8 @@ func TestGetBestLocalAddress(t *testing.T) {
*/
}
amgr := New("testgetbestlocaladdress", nil, nil)
amgr, teardown := newAddrManagerForTest(t, "TestGetBestLocalAddress", nil)
defer teardown()
// Test against default when there's no address
for x, test := range tests {

View File

@@ -105,8 +105,6 @@ func (dag *BlockDAG) maybeAcceptBlock(block *util.Block, flags BehaviorFlags) er
}
}
block.SetBlueScore(newNode.blueScore)
// Connect the passed block to the DAG. This also handles validation of the
// transaction scripts.
chainUpdates, err := dag.addBlock(newNode, block, selectedParentAnticone, flags)
@@ -133,17 +131,17 @@ func (dag *BlockDAG) maybeAcceptBlock(block *util.Block, flags BehaviorFlags) er
return nil
}
func lookupParentNodes(block *util.Block, blockDAG *BlockDAG) (blockSet, error) {
func lookupParentNodes(block *util.Block, dag *BlockDAG) (blockSet, error) {
header := block.MsgBlock().Header
parentHashes := header.ParentHashes
nodes := newBlockSet()
for _, parentHash := range parentHashes {
node := blockDAG.index.LookupNode(parentHash)
if node == nil {
node, ok := dag.index.LookupNode(parentHash)
if !ok {
str := fmt.Sprintf("parent block %s is unknown", parentHash)
return nil, ruleError(ErrParentBlockUnknown, str)
} else if blockDAG.index.NodeStatus(node).KnownInvalid() {
} else if dag.index.NodeStatus(node).KnownInvalid() {
str := fmt.Sprintf("parent block %s is known to be invalid", parentHash)
return nil, ruleError(ErrInvalidAncestorBlock, str)
}

View File

@@ -63,7 +63,10 @@ func TestMaybeAcceptBlockErrors(t *testing.T) {
if isOrphan {
t.Fatalf("TestMaybeAcceptBlockErrors: incorrectly returned block 1 is an orphan")
}
blockNode1 := dag.index.LookupNode(block1.Hash())
blockNode1, ok := dag.index.LookupNode(block1.Hash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", block1.Hash())
}
dag.index.SetStatusFlags(blockNode1, statusValidateFailed)
block2 := blocks[2]

View File

@@ -11,14 +11,14 @@ import (
func TestBlockHeap(t *testing.T) {
// Create a new database and DAG instance to run tests against.
dag, teardownFunc, err := DAGSetup("TestBlockHeap", true, Config{
DAGParams: &dagconfig.MainnetParams,
DAGParams: &dagconfig.SimnetParams,
})
if err != nil {
t.Fatalf("TestBlockHeap: Failed to setup DAG instance: %s", err)
}
defer teardownFunc()
block0Header := dagconfig.MainnetParams.GenesisBlock.Header
block0Header := dagconfig.SimnetParams.GenesisBlock.Header
block0, _ := dag.newBlockNode(&block0Header, newBlockSet())
block100000Header := Block100000.Header

View File

@@ -50,11 +50,11 @@ func (bi *blockIndex) HaveBlock(hash *daghash.Hash) bool {
// return nil if there is no entry for the hash.
//
// This function is safe for concurrent access.
func (bi *blockIndex) LookupNode(hash *daghash.Hash) *blockNode {
func (bi *blockIndex) LookupNode(hash *daghash.Hash) (*blockNode, bool) {
bi.RLock()
defer bi.RUnlock()
node := bi.index[*hash]
return node
node, ok := bi.index[*hash]
return node, ok
}
// AddNode adds the provided node to the block index and marks it as dirty.

View File
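The hunk above changes `LookupNode` from returning a bare `*blockNode` (nil on miss) to Go's two-value comma-ok idiom, which is what lets every caller in the rest of this diff write `node, ok := ...; if !ok { ... }` instead of nil checks. A minimal standalone sketch of the pattern (hypothetical `blockIndex` with string keys, not the real kaspad types):

```go
package main

import "fmt"

type blockNode struct{ height int }

// blockIndex maps a hash key to its node. LookupNode mirrors Go's
// two-value map access, so a missing entry is reported explicitly
// via ok=false rather than a nil pointer the caller might forget to check.
type blockIndex struct{ index map[string]*blockNode }

func (bi *blockIndex) LookupNode(hash string) (*blockNode, bool) {
	node, ok := bi.index[hash]
	return node, ok
}

func main() {
	bi := &blockIndex{index: map[string]*blockNode{"abc": {height: 7}}}
	if node, ok := bi.LookupNode("abc"); ok {
		fmt.Println("found node with height", node.height)
	}
	if _, ok := bi.LookupNode("missing"); !ok {
		fmt.Println("unknown hash reported via ok=false")
	}
}
```

The real implementation additionally takes the index's read lock, as shown in the diff; the two-value return itself is just the map access passed through.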

@@ -29,8 +29,15 @@ func (dag *BlockDAG) BlockLocatorFromHashes(highHash, lowHash *daghash.Hash) (Bl
dag.dagLock.RLock()
defer dag.dagLock.RUnlock()
highNode := dag.index.LookupNode(highHash)
lowNode := dag.index.LookupNode(lowHash)
highNode, ok := dag.index.LookupNode(highHash)
if !ok {
return nil, errors.Errorf("block %s is unknown", highHash)
}
lowNode, ok := dag.index.LookupNode(lowHash)
if !ok {
return nil, errors.Errorf("block %s is unknown", lowHash)
}
return dag.blockLocator(highNode, lowNode)
}
@@ -88,8 +95,8 @@ func (dag *BlockDAG) FindNextLocatorBoundaries(locator BlockLocator) (highHash,
lowNode := dag.genesis
nextBlockLocatorIndex := int64(len(locator) - 1)
for i, hash := range locator {
node := dag.index.LookupNode(hash)
if node != nil {
node, ok := dag.index.LookupNode(hash)
if ok {
lowNode = node
nextBlockLocatorIndex = int64(i) - 1
break

View File

@@ -133,7 +133,10 @@ func TestBlueBlockWindow(t *testing.T) {
t.Fatalf("block %v was unexpectedly orphan", blockData.id)
}
node := dag.index.LookupNode(utilBlock.Hash())
node, ok := dag.index.LookupNode(utilBlock.Hash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", utilBlock.Hash())
}
blockByIDMap[blockData.id] = node
idByBlockMap[node] = blockData.id

View File

@@ -5,15 +5,14 @@ import (
"bytes"
"encoding/binary"
"github.com/kaspanet/kaspad/dbaccess"
"github.com/kaspanet/kaspad/util/subnetworkid"
"github.com/pkg/errors"
"io"
"math"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/util/coinbasepayload"
"github.com/kaspanet/kaspad/util/daghash"
"github.com/kaspanet/kaspad/util/subnetworkid"
"github.com/kaspanet/kaspad/util/txsort"
"github.com/kaspanet/kaspad/wire"
"github.com/pkg/errors"
"io"
)
// compactFeeData is a specialized data type to store a compact list of fees
@@ -98,7 +97,10 @@ func (node *blockNode) validateCoinbaseTransaction(dag *BlockDAG, block *util.Bl
return nil
}
blockCoinbaseTx := block.CoinbaseTransaction().MsgTx()
scriptPubKey, extraData, err := DeserializeCoinbasePayload(blockCoinbaseTx)
_, scriptPubKey, extraData, err := coinbasepayload.DeserializeCoinbasePayload(blockCoinbaseTx)
if errors.Is(err, coinbasepayload.ErrIncorrectScriptPubKeyLen) {
return ruleError(ErrBadCoinbaseTransaction, err.Error())
}
if err != nil {
return err
}
@@ -125,16 +127,15 @@ func (node *blockNode) expectedCoinbaseTransaction(dag *BlockDAG, txsAcceptanceD
txOuts := []*wire.TxOut{}
for _, blue := range node.blues {
txIn, txOut, err := coinbaseInputAndOutputForBlueBlock(dag, blue, txsAcceptanceData, bluesFeeData)
txOut, err := coinbaseOutputForBlueBlock(dag, blue, txsAcceptanceData, bluesFeeData)
if err != nil {
return nil, err
}
txIns = append(txIns, txIn)
if txOut != nil {
txOuts = append(txOuts, txOut)
}
}
payload, err := SerializeCoinbasePayload(scriptPubKey, extraData)
payload, err := coinbasepayload.SerializeCoinbasePayload(node.blueScore, scriptPubKey, extraData)
if err != nil {
return nil, err
}
@@ -143,83 +144,33 @@ func (node *blockNode) expectedCoinbaseTransaction(dag *BlockDAG, txsAcceptanceD
return util.NewTx(sortedCoinbaseTx), nil
}
// SerializeCoinbasePayload builds the coinbase payload based on the provided scriptPubKey and extra data.
func SerializeCoinbasePayload(scriptPubKey []byte, extraData []byte) ([]byte, error) {
w := &bytes.Buffer{}
err := wire.WriteVarInt(w, uint64(len(scriptPubKey)))
if err != nil {
return nil, err
}
_, err = w.Write(scriptPubKey)
if err != nil {
return nil, err
}
_, err = w.Write(extraData)
if err != nil {
return nil, err
}
return w.Bytes(), nil
}
// DeserializeCoinbasePayload deserializes the coinbase payload to its components (scriptPubKey and extra data).
func DeserializeCoinbasePayload(tx *wire.MsgTx) (scriptPubKey []byte, extraData []byte, err error) {
r := bytes.NewReader(tx.Payload)
scriptPubKeyLen, err := wire.ReadVarInt(r)
if err != nil {
return nil, nil, err
}
scriptPubKey = make([]byte, scriptPubKeyLen)
_, err = r.Read(scriptPubKey)
if err != nil {
return nil, nil, err
}
extraData = make([]byte, r.Len())
if r.Len() != 0 {
_, err = r.Read(extraData)
if err != nil {
return nil, nil, err
}
}
return scriptPubKey, extraData, nil
}
// coinbaseInputAndOutputForBlueBlock calculates the input and output that should go into the coinbase transaction of blueBlock
// If blueBlock gets no fee - returns only txIn and nil for txOut
func coinbaseInputAndOutputForBlueBlock(dag *BlockDAG, blueBlock *blockNode,
txsAcceptanceData MultiBlockTxsAcceptanceData, feeData map[daghash.Hash]compactFeeData) (
*wire.TxIn, *wire.TxOut, error) {
// coinbaseOutputForBlueBlock calculates the output that should go into the coinbase transaction of blueBlock
// If blueBlock gets no fee - returns nil for txOut
func coinbaseOutputForBlueBlock(dag *BlockDAG, blueBlock *blockNode,
txsAcceptanceData MultiBlockTxsAcceptanceData, feeData map[daghash.Hash]compactFeeData) (*wire.TxOut, error) {
blockTxsAcceptanceData, ok := txsAcceptanceData.FindAcceptanceData(blueBlock.hash)
if !ok {
return nil, nil, errors.Errorf("No txsAcceptanceData for block %s", blueBlock.hash)
return nil, errors.Errorf("No txsAcceptanceData for block %s", blueBlock.hash)
}
blockFeeData, ok := feeData[*blueBlock.hash]
if !ok {
return nil, nil, errors.Errorf("No feeData for block %s", blueBlock.hash)
return nil, errors.Errorf("No feeData for block %s", blueBlock.hash)
}
if len(blockTxsAcceptanceData.TxAcceptanceData) != blockFeeData.Len() {
return nil, nil, errors.Errorf(
return nil, errors.Errorf(
"length of accepted transaction data(%d) and fee data(%d) is not equal for block %s",
len(blockTxsAcceptanceData.TxAcceptanceData), blockFeeData.Len(), blueBlock.hash)
}
txIn := &wire.TxIn{
SignatureScript: []byte{},
PreviousOutpoint: wire.Outpoint{
TxID: daghash.TxID(*blueBlock.hash),
Index: math.MaxUint32,
},
Sequence: wire.MaxTxInSequenceNum,
}
totalFees := uint64(0)
feeIterator := blockFeeData.iterator()
for _, txAcceptanceData := range blockTxsAcceptanceData.TxAcceptanceData {
fee, err := feeIterator.next()
if err != nil {
return nil, nil, errors.Errorf("Error retrieving fee from compactFeeData iterator: %s", err)
return nil, errors.Errorf("Error retrieving fee from compactFeeData iterator: %s", err)
}
if txAcceptanceData.IsAccepted {
totalFees += fee
@@ -229,13 +180,13 @@ func coinbaseInputAndOutputForBlueBlock(dag *BlockDAG, blueBlock *blockNode,
totalReward := CalcBlockSubsidy(blueBlock.blueScore, dag.dagParams) + totalFees
if totalReward == 0 {
return txIn, nil, nil
return nil, nil
}
// the ScriptPubKey for the coinbase is parsed from the coinbase payload
scriptPubKey, _, err := DeserializeCoinbasePayload(blockTxsAcceptanceData.TxAcceptanceData[0].Tx.MsgTx())
_, scriptPubKey, _, err := coinbasepayload.DeserializeCoinbasePayload(blockTxsAcceptanceData.TxAcceptanceData[0].Tx.MsgTx())
if err != nil {
return nil, nil, err
return nil, err
}
txOut := &wire.TxOut{
@@ -243,5 +194,5 @@ func coinbaseInputAndOutputForBlueBlock(dag *BlockDAG, blueBlock *blockNode,
ScriptPubKey: scriptPubKey,
}
return txIn, txOut, nil
return txOut, nil
}

View File
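The file above moves payload (de)serialization into `util/coinbasepayload` and adds the block's blue score as a new leading field (the 8-byte blue score also mentioned in the NOD-1103 commit). A rough sketch of the implied layout, blue score first, then a varint-prefixed scriptPubKey, then trailing extra data. This is illustrative only: field order and the little-endian/uvarint encoding here are assumptions standing in for `wire.WriteVarInt` and the real `coinbasepayload` package:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
	"io"
)

// serializeCoinbasePayload writes: 8-byte little-endian blue score,
// uvarint scriptPubKey length, scriptPubKey bytes, then extra data.
func serializeCoinbasePayload(blueScore uint64, scriptPubKey, extraData []byte) []byte {
	w := &bytes.Buffer{}
	binary.Write(w, binary.LittleEndian, blueScore)
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(scriptPubKey)))
	w.Write(lenBuf[:n])
	w.Write(scriptPubKey)
	w.Write(extraData)
	return w.Bytes()
}

// deserializeCoinbasePayload reverses the layout above, returning the
// blue score alongside the scriptPubKey and extra data — matching the
// extra return value the diff threads through validateCoinbaseTransaction.
func deserializeCoinbasePayload(payload []byte) (blueScore uint64, scriptPubKey, extraData []byte, err error) {
	r := bytes.NewReader(payload)
	if err = binary.Read(r, binary.LittleEndian, &blueScore); err != nil {
		return 0, nil, nil, err
	}
	spkLen, err := binary.ReadUvarint(r)
	if err != nil {
		return 0, nil, nil, err
	}
	if spkLen > uint64(r.Len()) {
		return 0, nil, nil, errors.New("scriptPubKey length exceeds payload size")
	}
	scriptPubKey = make([]byte, spkLen)
	if _, err = io.ReadFull(r, scriptPubKey); err != nil {
		return 0, nil, nil, err
	}
	// Whatever remains is extra data; it may be empty.
	extraData = make([]byte, r.Len())
	io.ReadFull(r, extraData)
	return blueScore, scriptPubKey, extraData, nil
}

func main() {
	payload := serializeCoinbasePayload(42, []byte{0x51}, []byte("extra"))
	blueScore, spk, extra, _ := deserializeCoinbasePayload(payload)
	fmt.Println(blueScore, spk, string(extra))
}
```

A length check before allocating `scriptPubKey` guards against a malformed varint claiming more bytes than the payload holds, the kind of condition the diff maps to `ErrBadCoinbaseTransaction` via `ErrIncorrectScriptPubKeyLen`.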

@@ -178,8 +178,8 @@ func prepareAndProcessBlockByParentMsgBlocks(t *testing.T, dag *BlockDAG, parent
}
func nodeByMsgBlock(t *testing.T, dag *BlockDAG, block *wire.MsgBlock) *blockNode {
node := dag.index.LookupNode(block.BlockHash())
if node == nil {
node, ok := dag.index.LookupNode(block.BlockHash())
if !ok {
t.Fatalf("couldn't find block node with hash %s", block.BlockHash())
}
return node

View File

@@ -151,9 +151,10 @@ type BlockDAG struct {
lastFinalityPoint *blockNode
utxoDiffStore *utxoDiffStore
reachabilityStore *reachabilityStore
multisetStore *multisetStore
utxoDiffStore *utxoDiffStore
multisetStore *multisetStore
reachabilityTree *reachabilityTree
recentBlockProcessingTimestamps []time.Time
startTime time.Time
@@ -165,7 +166,7 @@ type BlockDAG struct {
//
// This function is safe for concurrent access.
func (dag *BlockDAG) IsKnownBlock(hash *daghash.Hash) bool {
return dag.IsInDAG(hash) || dag.IsKnownOrphan(hash) || dag.isKnownDelayedBlock(hash)
return dag.IsInDAG(hash) || dag.IsKnownOrphan(hash) || dag.isKnownDelayedBlock(hash) || dag.IsKnownInvalid(hash)
}
// AreKnownBlocks returns whether or not the DAG instances has all blocks represented
@@ -209,8 +210,8 @@ func (dag *BlockDAG) IsKnownOrphan(hash *daghash.Hash) bool {
//
// This function is safe for concurrent access.
func (dag *BlockDAG) IsKnownInvalid(hash *daghash.Hash) bool {
node := dag.index.LookupNode(hash)
if node == nil {
node, ok := dag.index.LookupNode(hash)
if !ok {
return false
}
return dag.index.NodeStatus(node).KnownInvalid()
@@ -551,8 +552,8 @@ func (node *blockNode) validateAcceptedIDMerkleRoot(dag *BlockDAG, txsAcceptance
func (dag *BlockDAG) connectBlock(node *blockNode,
block *util.Block, selectedParentAnticone []*blockNode, fastAdd bool) (*chainUpdates, error) {
// No warnings about unknown rules or versions until the DAG is
// current.
if dag.isCurrent() {
// synced.
if dag.isSynced() {
// Warn if any unknown new rules are either about to activate or
// have already been activated.
if err := dag.warnUnknownRuleActivations(node); err != nil {
@@ -566,7 +567,7 @@ func (dag *BlockDAG) connectBlock(node *blockNode,
}
}
if err := dag.checkFinalityRules(node); err != nil {
if err := dag.checkFinalityViolation(node); err != nil {
return nil, err
}
@@ -577,10 +578,6 @@ func (dag *BlockDAG) connectBlock(node *blockNode,
newBlockPastUTXO, txsAcceptanceData, newBlockFeeData, newBlockMultiSet, err :=
node.verifyAndBuildUTXO(dag, block.Transactions(), fastAdd)
if err != nil {
var ruleErr RuleError
if ok := errors.As(err, &ruleErr); ok {
return nil, ruleError(ruleErr.ErrorCode, fmt.Sprintf("error verifying UTXO for %s: %s", node, err))
}
return nil, errors.Wrapf(err, "error verifying UTXO for %s", node)
}
@@ -658,22 +655,20 @@ func (node *blockNode) selectedParentMultiset(dag *BlockDAG) (*secp256k1.MultiSe
}
func addTxToMultiset(ms *secp256k1.MultiSet, tx *wire.MsgTx, pastUTXO UTXOSet, blockBlueScore uint64) (*secp256k1.MultiSet, error) {
isCoinbase := tx.IsCoinBase()
if !isCoinbase {
for _, txIn := range tx.TxIn {
entry, ok := pastUTXO.Get(txIn.PreviousOutpoint)
if !ok {
return nil, errors.Errorf("Couldn't find entry for outpoint %s", txIn.PreviousOutpoint)
}
for _, txIn := range tx.TxIn {
entry, ok := pastUTXO.Get(txIn.PreviousOutpoint)
if !ok {
return nil, errors.Errorf("Couldn't find entry for outpoint %s", txIn.PreviousOutpoint)
}
var err error
ms, err = removeUTXOFromMultiset(ms, entry, &txIn.PreviousOutpoint)
if err != nil {
return nil, err
}
var err error
ms, err = removeUTXOFromMultiset(ms, entry, &txIn.PreviousOutpoint)
if err != nil {
return nil, err
}
}
isCoinbase := tx.IsCoinBase()
for i, txOut := range tx.TxOut {
outpoint := *wire.NewOutpoint(tx.TxID(), uint32(i))
entry := NewUTXOEntry(txOut, isCoinbase, blockBlueScore)
@@ -706,7 +701,7 @@ func (dag *BlockDAG) saveChangesFromBlock(block *util.Block, virtualUTXODiff *UT
return err
}
err = dag.reachabilityStore.flushToDB(dbTx)
err = dag.reachabilityTree.storeState(dbTx)
if err != nil {
return err
}
@@ -766,7 +761,7 @@ func (dag *BlockDAG) saveChangesFromBlock(block *util.Block, virtualUTXODiff *UT
dag.index.clearDirtyEntries()
dag.utxoDiffStore.clearDirtyEntries()
dag.utxoDiffStore.clearOldEntries()
dag.reachabilityStore.clearDirtyEntries()
dag.reachabilityTree.store.clearDirtyEntries()
dag.multisetStore.clearNewEntries()
return nil
@@ -822,20 +817,40 @@ func (dag *BlockDAG) LastFinalityPointHash() *daghash.Hash {
return dag.lastFinalityPoint.hash
}
// checkFinalityRules checks the new block does not violate the finality rules
// specifically - the new block selectedParent chain should contain the old finality point
func (dag *BlockDAG) checkFinalityRules(newNode *blockNode) error {
// isInSelectedParentChainOf returns whether `node` is in the selected parent chain of `other`.
func (dag *BlockDAG) isInSelectedParentChainOf(node *blockNode, other *blockNode) (bool, error) {
// By definition, a node is not in the selected parent chain of itself.
if node == other {
return false, nil
}
return dag.reachabilityTree.isReachabilityTreeAncestorOf(node, other)
}
// checkFinalityViolation checks the new block does not violate the finality rules
// specifically - the new block selectedParent chain should contain the old finality point.
func (dag *BlockDAG) checkFinalityViolation(newNode *blockNode) error {
// the genesis block can not violate finality rules
if newNode.isGenesis() {
return nil
}
for currentNode := newNode; currentNode != dag.lastFinalityPoint; currentNode = currentNode.selectedParent {
// If we went past dag's last finality point without encountering it -
// the new block has violated finality.
if currentNode.blueScore <= dag.lastFinalityPoint.blueScore {
return ruleError(ErrFinality, "The last finality point is not in the selected chain of this block")
}
// Because newNode doesn't have reachability data we
// need to check if the last finality point is in the
// selected parent chain of newNode.selectedParent, so
// we explicitly check if newNode.selectedParent is
// the finality point.
if dag.lastFinalityPoint == newNode.selectedParent {
return nil
}
isInSelectedChain, err := dag.isInSelectedParentChainOf(dag.lastFinalityPoint, newNode.selectedParent)
if err != nil {
return err
}
if !isInSelectedChain {
return ruleError(ErrFinality, "the last finality point is not in the selected parent chain of this block")
}
return nil
}
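The rewritten `checkFinalityViolation` above replaces a blue-score-bounded walk with a reachability-tree ancestor query. The rule itself is unchanged: the last finality point must lie on the new block's selected-parent chain. A minimal sketch of that rule, using a naive selected-parent walk as a stand-in for kaspad's `reachabilityTree.isReachabilityTreeAncestorOf` (the types here are simplified, not the real `blockNode`):

```go
package main

import "fmt"

type blockNode struct {
	blueScore      uint64
	selectedParent *blockNode
}

// violatesFinality walks the selected-parent chain of newNode looking
// for finalityPoint. Passing below the finality point's blue score
// without meeting it means the chain forked off before finality.
func violatesFinality(newNode, finalityPoint *blockNode) bool {
	for current := newNode.selectedParent; current != nil; current = current.selectedParent {
		if current == finalityPoint {
			return false
		}
		if current.blueScore <= finalityPoint.blueScore {
			return true
		}
	}
	return true
}

func main() {
	genesis := &blockNode{blueScore: 0}
	a := &blockNode{blueScore: 1, selectedParent: genesis}
	b := &blockNode{blueScore: 2, selectedParent: a}
	onChain := &blockNode{blueScore: 3, selectedParent: b}
	offChain := &blockNode{blueScore: 1, selectedParent: genesis}
	fmt.Println(violatesFinality(onChain, a))  // finality point a is an ancestor
	fmt.Println(violatesFinality(offChain, a)) // forked off below a
}
```

The reachability-tree query answers the same ancestor question in logarithmic time instead of walking the chain, which is the point of the refactor.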
@@ -900,10 +915,10 @@ func (dag *BlockDAG) finalizeNodesBelowFinalityPoint(deleteDiffData bool) {
// IsKnownFinalizedBlock returns whether the block is below the finality point.
// IsKnownFinalizedBlock might be false-negative because node finality status is
// updated in a separate goroutine. To get a definite answer if a block
// is finalized or not, use dag.checkFinalityRules.
// is finalized or not, use dag.checkFinalityViolation.
func (dag *BlockDAG) IsKnownFinalizedBlock(blockHash *daghash.Hash) bool {
node := dag.index.LookupNode(blockHash)
return node != nil && node.isFinalized
node, ok := dag.index.LookupNode(blockHash)
return ok && node.isFinalized
}
// NextBlockCoinbaseTransaction prepares the coinbase transaction for the next mined block
@@ -951,8 +966,8 @@ func (dag *BlockDAG) TxsAcceptedByVirtual() (MultiBlockTxsAcceptanceData, error)
//
// This function MUST be called with the DAG read-lock held
func (dag *BlockDAG) TxsAcceptedByBlockHash(blockHash *daghash.Hash) (MultiBlockTxsAcceptanceData, error) {
node := dag.index.LookupNode(blockHash)
if node == nil {
node, ok := dag.index.LookupNode(blockHash)
if !ok {
return nil, errors.Errorf("Couldn't find block %s", blockHash)
}
_, _, txsAcceptanceData, err := dag.pastUTXO(node)
@@ -976,10 +991,10 @@ func (dag *BlockDAG) applyDAGChanges(node *blockNode, newBlockPastUTXO UTXOSet,
newBlockMultiset *secp256k1.MultiSet, selectedParentAnticone []*blockNode) (
virtualUTXODiff *UTXODiff, chainUpdates *chainUpdates, err error) {
// Add the block to the reachability structures
err = dag.updateReachability(node, selectedParentAnticone)
// Add the block to the reachability tree
err = dag.reachabilityTree.addBlock(node, selectedParentAnticone)
if err != nil {
return nil, nil, errors.Wrap(err, "failed updating reachability")
return nil, nil, errors.Wrap(err, "failed adding block to the reachability tree")
}
dag.multisetStore.setMultiset(node, newBlockMultiset)
@@ -1310,18 +1325,18 @@ func updateTipsUTXO(dag *BlockDAG, virtualUTXO UTXOSet) error {
return nil
}
// isCurrent returns whether or not the DAG believes it is current. Several
// isSynced returns whether or not the DAG believes it is synced. Several
// factors are used to guess, but the key factors that allow the DAG to
// believe it is current are:
// believe it is synced are:
// - Latest block has a timestamp newer than 24 hours ago
//
// This function MUST be called with the DAG state lock held (for reads).
func (dag *BlockDAG) isCurrent() bool {
// Not current if the virtual's selected parent has a timestamp
func (dag *BlockDAG) isSynced() bool {
// Not synced if the virtual's selected parent has a timestamp
// before 24 hours ago. If the DAG is empty, we take the genesis
// block timestamp.
//
// The DAG appears to be current if none of the checks reported
// The DAG appears to be synced if none of the checks reported
// otherwise.
var dagTimestamp int64
selectedTip := dag.selectedTip()
@@ -1341,17 +1356,17 @@ func (dag *BlockDAG) Now() time.Time {
return dag.timeSource.Now()
}
// IsCurrent returns whether or not the DAG believes it is current. Several
// IsSynced returns whether or not the DAG believes it is synced. Several
// factors are used to guess, but the key factors that allow the DAG to
// believe it is current are:
// believe it is synced are:
// - Latest block has a timestamp newer than 24 hours ago
//
// This function is safe for concurrent access.
func (dag *BlockDAG) IsCurrent() bool {
func (dag *BlockDAG) IsSynced() bool {
dag.dagLock.RLock()
defer dag.dagLock.RUnlock()
return dag.isCurrent()
return dag.isSynced()
}
// selectedTip returns the current selected tip for the DAG.
@@ -1407,8 +1422,8 @@ func (dag *BlockDAG) GetUTXOEntry(outpoint wire.Outpoint) (*UTXOEntry, bool) {
// BlueScoreByBlockHash returns the blue score of a block with the given hash.
func (dag *BlockDAG) BlueScoreByBlockHash(hash *daghash.Hash) (uint64, error) {
node := dag.index.LookupNode(hash)
if node == nil {
node, ok := dag.index.LookupNode(hash)
if !ok {
return 0, errors.Errorf("block %s is unknown", hash)
}
@@ -1417,8 +1432,8 @@ func (dag *BlockDAG) BlueScoreByBlockHash(hash *daghash.Hash) (uint64, error) {
// BluesByBlockHash returns the blues of the block for the given hash.
func (dag *BlockDAG) BluesByBlockHash(hash *daghash.Hash) ([]*daghash.Hash, error) {
node := dag.index.LookupNode(hash)
if node == nil {
node, ok := dag.index.LookupNode(hash)
if !ok {
return nil, errors.Errorf("block %s is unknown", hash)
}
@@ -1449,8 +1464,8 @@ func (dag *BlockDAG) BlockConfirmationsByHashNoLock(hash *daghash.Hash) (uint64,
return 0, nil
}
node := dag.index.LookupNode(hash)
if node == nil {
node, ok := dag.index.LookupNode(hash)
if !ok {
return 0, errors.Errorf("block %s is unknown", hash)
}
@@ -1565,8 +1580,8 @@ func (dag *BlockDAG) oldestChainBlockWithBlueScoreGreaterThan(blueScore uint64)
//
// This method MUST be called with the DAG lock held
func (dag *BlockDAG) IsInSelectedParentChain(blockHash *daghash.Hash) (bool, error) {
blockNode := dag.index.LookupNode(blockHash)
if blockNode == nil {
blockNode, ok := dag.index.LookupNode(blockHash)
if !ok {
str := fmt.Sprintf("block %s is not in the DAG", blockHash)
return false, errNotInDAG(str)
}
@@ -1598,7 +1613,10 @@ func (dag *BlockDAG) SelectedParentChain(blockHash *daghash.Hash) ([]*daghash.Ha
for !isBlockInSelectedParentChain {
removedChainHashes = append(removedChainHashes, blockHash)
node := dag.index.LookupNode(blockHash)
node, ok := dag.index.LookupNode(blockHash)
if !ok {
return nil, nil, errors.Errorf("block %s does not exist in the DAG", blockHash)
}
blockHash = node.selectedParent.hash
isBlockInSelectedParentChain, err = dag.IsInSelectedParentChain(blockHash)
@@ -1661,8 +1679,8 @@ func (dag *BlockDAG) CurrentBits() uint32 {
// HeaderByHash returns the block header identified by the given hash or an
// error if it doesn't exist.
func (dag *BlockDAG) HeaderByHash(hash *daghash.Hash) (*wire.BlockHeader, error) {
node := dag.index.LookupNode(hash)
if node == nil {
node, ok := dag.index.LookupNode(hash)
if !ok {
err := errors.Errorf("block %s is not known", hash)
return &wire.BlockHeader{}, err
}
@@ -1675,8 +1693,8 @@ func (dag *BlockDAG) HeaderByHash(hash *daghash.Hash) (*wire.BlockHeader, error)
//
// This function is safe for concurrent access.
func (dag *BlockDAG) ChildHashesByHash(hash *daghash.Hash) ([]*daghash.Hash, error) {
node := dag.index.LookupNode(hash)
if node == nil {
node, ok := dag.index.LookupNode(hash)
if !ok {
str := fmt.Sprintf("block %s is not in the DAG", hash)
return nil, errNotInDAG(str)
@@ -1690,8 +1708,8 @@ func (dag *BlockDAG) ChildHashesByHash(hash *daghash.Hash) ([]*daghash.Hash, err
//
// This function is safe for concurrent access.
func (dag *BlockDAG) SelectedParentHash(blockHash *daghash.Hash) (*daghash.Hash, error) {
node := dag.index.LookupNode(blockHash)
if node == nil {
node, ok := dag.index.LookupNode(blockHash)
if !ok {
str := fmt.Sprintf("block %s is not in the DAG", blockHash)
return nil, errNotInDAG(str)
@@ -1725,12 +1743,12 @@ func (dag *BlockDAG) antiPastHashesBetween(lowHash, highHash *daghash.Hash, maxH
//
// This function MUST be called with the DAG state lock held (for reads).
func (dag *BlockDAG) antiPastBetween(lowHash, highHash *daghash.Hash, maxEntries uint64) ([]*blockNode, error) {
lowNode := dag.index.LookupNode(lowHash)
if lowNode == nil {
lowNode, ok := dag.index.LookupNode(lowHash)
if !ok {
return nil, errors.Errorf("Couldn't find low hash %s", lowHash)
}
highNode := dag.index.LookupNode(highHash)
if highNode == nil {
highNode, ok := dag.index.LookupNode(highHash)
if !ok {
return nil, errors.Errorf("Couldn't find high hash %s", highHash)
}
if lowNode.blueScore >= highNode.blueScore {
@@ -1763,7 +1781,7 @@ func (dag *BlockDAG) antiPastBetween(lowHash, highHash *daghash.Hash, maxEntries
continue
}
visited.add(current)
isCurrentAncestorOfLowNode, err := dag.isAncestorOf(current, lowNode)
isCurrentAncestorOfLowNode, err := dag.isInPast(current, lowNode)
if err != nil {
return nil, err
}
@@ -1789,6 +1807,10 @@ func (dag *BlockDAG) antiPastBetween(lowHash, highHash *daghash.Hash, maxEntries
return nodes, nil
}
func (dag *BlockDAG) isInPast(this *blockNode, other *blockNode) (bool, error) {
return dag.reachabilityTree.isInPast(this, other)
}
// AntiPastHashesBetween returns the hashes of the blocks between the
// lowHash's antiPast and highHash's antiPast, or up to the provided
// max number of block hashes.
@@ -1825,8 +1847,9 @@ func (dag *BlockDAG) antiPastHeadersBetween(lowHash, highHash *daghash.Hash, max
func (dag *BlockDAG) GetTopHeaders(highHash *daghash.Hash, maxHeaders uint64) ([]*wire.BlockHeader, error) {
highNode := &dag.virtual.blockNode
if highHash != nil {
highNode = dag.index.LookupNode(highHash)
if highNode == nil {
var ok bool
highNode, ok = dag.index.LookupNode(highHash)
if !ok {
return nil, errors.Errorf("Couldn't find the high hash %s in the dag", highHash)
}
}
@@ -2041,8 +2064,8 @@ func New(config *Config) (*BlockDAG, error) {
dag.virtual = newVirtualBlock(dag, nil)
dag.utxoDiffStore = newUTXODiffStore(dag)
dag.reachabilityStore = newReachabilityStore(dag)
dag.multisetStore = newMultisetStore(dag)
dag.reachabilityTree = newReachabilityTree(dag)
// Initialize the DAG state from the passed database. When the db
// does not yet contain any DAG state, both it and the DAG state
@@ -2061,9 +2084,9 @@ func New(config *Config) (*BlockDAG, error) {
}
}
genesis := index.LookupNode(params.GenesisHash)
genesis, ok := index.LookupNode(params.GenesisHash)
if genesis == nil {
if !ok {
genesisBlock := util.NewBlock(dag.dagParams.GenesisBlock)
// To prevent the creation of a new err variable unintentionally so the
// defered function above could read err - declare isOrphan and isDelayed explicitly.
@@ -2073,12 +2096,15 @@ func New(config *Config) (*BlockDAG, error) {
return nil, err
}
if isDelayed {
return nil, errors.New("Genesis block shouldn't be in the future")
return nil, errors.New("genesis block shouldn't be in the future")
}
if isOrphan {
return nil, errors.New("Genesis block is unexpectedly orphan")
return nil, errors.New("genesis block is unexpectedly orphan")
}
genesis, ok = index.LookupNode(params.GenesisHash)
if !ok {
return nil, errors.New("genesis is not found in the DAG after it was processed")
}
genesis = index.LookupNode(params.GenesisHash)
}
// Save a reference to the genesis block.


@@ -207,7 +207,7 @@ func TestIsKnownBlock(t *testing.T) {
{hash: dagconfig.SimnetParams.GenesisHash.String(), want: true},
// Block 3b should be present (as a second child of Block 2).
{hash: "48a752afbe36ad66357f751f8dee4f75665d24e18f644d83a3409b398405b46b", want: true},
{hash: "2eb8903d3eb7f977ab329649f56f4125afa532662f7afe5dba0d4a3f1b93746f", want: true},
// Block 100000 should be present (as an orphan).
{hash: "65b20b048a074793ebfd1196e49341c8d194dabfc6b44a4fd0c607406e122baf", want: true},
@@ -621,7 +621,10 @@ func TestAcceptingInInit(t *testing.T) {
testBlock := blocks[1]
// Create a test blockNode with an unvalidated status
genesisNode := dag.index.LookupNode(genesisBlock.Hash())
genesisNode, ok := dag.index.LookupNode(genesisBlock.Hash())
if !ok {
t.Fatalf("genesis block does not exist in the DAG")
}
testNode, _ := dag.newBlockNode(&testBlock.MsgBlock().Header, blockSetFromSlice(genesisNode))
testNode.status = statusDataStored
@@ -659,7 +662,11 @@ func TestAcceptingInInit(t *testing.T) {
}
// Make sure that the test node's status is valid
testNode = dag.index.LookupNode(testBlock.Hash())
testNode, ok = dag.index.LookupNode(testBlock.Hash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", testBlock.Hash())
}
if testNode.status&statusValid == 0 {
t.Fatalf("testNode is unexpectedly invalid")
}
@@ -1045,8 +1052,8 @@ func TestDAGIndexFailedStatus(t *testing.T) {
"is an orphan\n")
}
invalidBlockNode := dag.index.LookupNode(invalidBlock.Hash())
if invalidBlockNode == nil {
invalidBlockNode, ok := dag.index.LookupNode(invalidBlock.Hash())
if !ok {
t.Fatalf("invalidBlockNode wasn't added to the block index as expected")
}
if invalidBlockNode.status&statusValidateFailed != statusValidateFailed {
@@ -1074,8 +1081,8 @@ func TestDAGIndexFailedStatus(t *testing.T) {
t.Fatalf("ProcessBlock incorrectly returned invalidBlockChild " +
"is an orphan\n")
}
invalidBlockChildNode := dag.index.LookupNode(invalidBlockChild.Hash())
if invalidBlockChildNode == nil {
invalidBlockChildNode, ok := dag.index.LookupNode(invalidBlockChild.Hash())
if !ok {
t.Fatalf("invalidBlockChild wasn't added to the block index as expected")
}
if invalidBlockChildNode.status&statusInvalidAncestor != statusInvalidAncestor {
@@ -1102,8 +1109,8 @@ func TestDAGIndexFailedStatus(t *testing.T) {
t.Fatalf("ProcessBlock incorrectly returned invalidBlockGrandChild " +
"is an orphan\n")
}
invalidBlockGrandChildNode := dag.index.LookupNode(invalidBlockGrandChild.Hash())
if invalidBlockGrandChildNode == nil {
invalidBlockGrandChildNode, ok := dag.index.LookupNode(invalidBlockGrandChild.Hash())
if !ok {
t.Fatalf("invalidBlockGrandChild wasn't added to the block index as expected")
}
if invalidBlockGrandChildNode.status&statusInvalidAncestor != statusInvalidAncestor {
@@ -1257,7 +1264,7 @@ func TestDoubleSpends(t *testing.T) {
func TestUTXOCommitment(t *testing.T) {
// Create a new database and dag instance to run tests against.
params := dagconfig.DevnetParams
params := dagconfig.SimnetParams
params.BlockCoinbaseMaturity = 0
dag, teardownFunc, err := DAGSetup("TestUTXOCommitment", true, Config{
DAGParams: &params,
@@ -1312,8 +1319,8 @@ func TestUTXOCommitment(t *testing.T) {
blockD := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{blockB.BlockHash(), blockC.BlockHash()}, blockDTxs)
// Get the pastUTXO of blockD
blockNodeD := dag.index.LookupNode(blockD.BlockHash())
if blockNodeD == nil {
blockNodeD, ok := dag.index.LookupNode(blockD.BlockHash())
if !ok {
t.Fatalf("TestUTXOCommitment: blockNode for block D not found")
}
blockDPastUTXO, _, _, _ := dag.pastUTXO(blockNodeD)
@@ -1372,8 +1379,8 @@ func TestPastUTXOMultiSet(t *testing.T) {
blockC := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{blockB.BlockHash()}, nil)
// Take blockC's selectedParentMultiset
blockNodeC := dag.index.LookupNode(blockC.BlockHash())
if blockNodeC == nil {
blockNodeC, ok := dag.index.LookupNode(blockC.BlockHash())
if !ok {
t.Fatalf("TestPastUTXOMultiSet: blockNode for blockC not found")
}
blockCSelectedParentMultiset, err := blockNodeC.selectedParentMultiset(dag)


@@ -209,7 +209,7 @@ func (dag *BlockDAG) initDAGState() error {
}
log.Debugf("Loading reachability data...")
err = dag.reachabilityStore.init(dbaccess.NoTx())
err = dag.reachabilityTree.init(dbaccess.NoTx())
if err != nil {
return err
}
@@ -233,7 +233,12 @@ func (dag *BlockDAG) initDAGState() error {
}
log.Debugf("Setting the last finality point...")
dag.lastFinalityPoint = dag.index.LookupNode(dagState.LastFinalityPoint)
var ok bool
dag.lastFinalityPoint, ok = dag.index.LookupNode(dagState.LastFinalityPoint)
if !ok {
return errors.Errorf("finality point block %s "+
"does not exist in the DAG", dagState.LastFinalityPoint)
}
dag.finalizeNodesBelowFinalityPoint(false)
log.Debugf("Processing unprocessed blockNodes...")
@@ -348,8 +353,8 @@ func (dag *BlockDAG) initUTXOSet() (fullUTXOCollection utxoCollection, err error
func (dag *BlockDAG) initVirtualBlockTips(state *dagState) error {
tips := newBlockSet()
for _, tipHash := range state.TipHashes {
tip := dag.index.LookupNode(tipHash)
if tip == nil {
tip, ok := dag.index.LookupNode(tipHash)
if !ok {
return errors.Errorf("cannot find "+
"DAG tip %s in block index", tipHash)
}
@@ -426,8 +431,8 @@ func (dag *BlockDAG) deserializeBlockNode(blockRow []byte) (*blockNode, error) {
node.parents = newBlockSet()
for _, hash := range header.ParentHashes {
parent := dag.index.LookupNode(hash)
if parent == nil {
parent, ok := dag.index.LookupNode(hash)
if !ok {
return nil, errors.Errorf("deserializeBlockNode: Could "+
"not find parent %s for block %s", hash, header.BlockHash())
}
@@ -447,7 +452,11 @@ func (dag *BlockDAG) deserializeBlockNode(blockRow []byte) (*blockNode, error) {
// Because genesis doesn't have selected parent, it's serialized as zero hash
if !selectedParentHash.IsEqual(&daghash.ZeroHash) {
node.selectedParent = dag.index.LookupNode(selectedParentHash)
var ok bool
node.selectedParent, ok = dag.index.LookupNode(selectedParentHash)
if !ok {
return nil, errors.Errorf("block %s does not exist in the DAG", selectedParentHash)
}
}
node.blueScore, err = binaryserializer.Uint64(buffer, byteOrder)
@@ -466,7 +475,12 @@ func (dag *BlockDAG) deserializeBlockNode(blockRow []byte) (*blockNode, error) {
if _, err := io.ReadFull(buffer, hash[:]); err != nil {
return nil, err
}
node.blues[i] = dag.index.LookupNode(hash)
var ok bool
node.blues[i], ok = dag.index.LookupNode(hash)
if !ok {
return nil, errors.Errorf("block %s does not exist in the DAG", hash)
}
}
bluesAnticoneSizesLen, err := wire.ReadVarInt(buffer)
@@ -484,8 +498,8 @@ func (dag *BlockDAG) deserializeBlockNode(blockRow []byte) (*blockNode, error) {
if err != nil {
return nil, err
}
blue := dag.index.LookupNode(hash)
if blue == nil {
blue, ok := dag.index.LookupNode(hash)
if !ok {
return nil, errors.Errorf("couldn't find block with hash %s", hash)
}
node.bluesAnticoneSizes[blue] = dagconfig.KType(bluesAnticoneSize)
@@ -590,8 +604,8 @@ func blockHashFromBlockIndexKey(BlockIndexKey []byte) (*daghash.Hash, error) {
// This function is safe for concurrent access.
func (dag *BlockDAG) BlockByHash(hash *daghash.Hash) (*util.Block, error) {
// Lookup the block hash in block index and ensure it is in the DAG
node := dag.index.LookupNode(hash)
if node == nil {
node, ok := dag.index.LookupNode(hash)
if !ok {
str := fmt.Sprintf("block %s is not in the DAG", hash)
return nil, errNotInDAG(str)
}


@@ -114,7 +114,11 @@ func TestDifficulty(t *testing.T) {
if isOrphan {
t.Fatalf("block was unexpectedly orphan")
}
return dag.index.LookupNode(block.BlockHash())
node, ok := dag.index.LookupNode(block.BlockHash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", block.BlockHash())
}
return node
}
tip := dag.genesis
for i := uint64(0); i < dag.difficultyAdjustmentWindowSize; i++ {


@@ -69,6 +69,9 @@ const (
// the expected value.
ErrBadUTXOCommitment
// ErrInvalidSubnetwork indicates the subnetwork is not allowed.
ErrInvalidSubnetwork
// ErrFinalityPointTimeTooOld indicates a block has a timestamp before the
// last finality point.
ErrFinalityPointTimeTooOld


@@ -186,6 +186,7 @@ func TestSubnetworkRegistry(t *testing.T) {
params := dagconfig.SimnetParams
params.K = 1
params.BlockCoinbaseMaturity = 0
params.EnableNonNativeSubnetworks = true
dag, teardownFunc, err := blockdag.DAGSetup("TestSubnetworkRegistry", true, blockdag.Config{
DAGParams: &params,
})
@@ -410,6 +411,7 @@ func TestGasLimit(t *testing.T) {
params := dagconfig.SimnetParams
params.K = 1
params.BlockCoinbaseMaturity = 0
params.EnableNonNativeSubnetworks = true
dag, teardownFunc, err := blockdag.DAGSetup("TestSubnetworkRegistry", true, blockdag.Config{
DAGParams: &params,
})


@@ -57,7 +57,7 @@ func (dag *BlockDAG) ghostdag(newNode *blockNode) (selectedParentAnticone []*blo
// newNode is always in the future of blueCandidate, so there's
// no point in checking it.
if chainBlock != newNode {
if isAncestorOfBlueCandidate, err := dag.isAncestorOf(chainBlock, blueCandidate); err != nil {
if isAncestorOfBlueCandidate, err := dag.isInPast(chainBlock, blueCandidate); err != nil {
return nil, err
} else if isAncestorOfBlueCandidate {
break
@@ -66,7 +66,7 @@ func (dag *BlockDAG) ghostdag(newNode *blockNode) (selectedParentAnticone []*blo
for _, block := range chainBlock.blues {
// Skip blocks that exist in the past of blueCandidate.
if isAncestorOfBlueCandidate, err := dag.isAncestorOf(block, blueCandidate); err != nil {
if isAncestorOfBlueCandidate, err := dag.isInPast(block, blueCandidate); err != nil {
return nil, err
} else if isAncestorOfBlueCandidate {
continue
@@ -148,7 +148,7 @@ func (dag *BlockDAG) selectedParentAnticone(node *blockNode) ([]*blockNode, erro
if anticoneSet.contains(parent) || selectedParentPast.contains(parent) {
continue
}
isAncestorOfSelectedParent, err := dag.isAncestorOf(parent, node.selectedParent)
isAncestorOfSelectedParent, err := dag.isInPast(parent, node.selectedParent)
if err != nil {
return nil, err
}


@@ -33,7 +33,7 @@ func TestGHOSTDAG(t *testing.T) {
}{
{
k: 3,
expectedReds: []string{"F", "G", "H", "I", "N", "O"},
expectedReds: []string{"F", "G", "H", "I", "N", "Q"},
dagData: []*testBlockData{
{
parents: []string{"A"},
@@ -166,7 +166,7 @@ func TestGHOSTDAG(t *testing.T) {
id: "T",
expectedScore: 13,
expectedSelectedParent: "S",
expectedBlues: []string{"S", "P", "Q"},
expectedBlues: []string{"S", "O", "P"},
},
},
},
@@ -215,7 +215,10 @@ func TestGHOSTDAG(t *testing.T) {
t.Fatalf("TestGHOSTDAG: block %v was unexpectedly orphan", blockData.id)
}
node := dag.index.LookupNode(utilBlock.Hash())
node, ok := dag.index.LookupNode(utilBlock.Hash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", utilBlock.Hash())
}
blockByIDMap[blockData.id] = node
idByBlockMap[node] = blockData.id
@@ -305,8 +308,15 @@ func TestBlueAnticoneSizeErrors(t *testing.T) {
}
// Get references to the tips of the two chains
blockNodeA := dag.index.LookupNode(currentBlockA.BlockHash())
blockNodeB := dag.index.LookupNode(currentBlockB.BlockHash())
blockNodeA, ok := dag.index.LookupNode(currentBlockA.BlockHash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", currentBlockA.BlockHash())
}
blockNodeB, ok := dag.index.LookupNode(currentBlockB.BlockHash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", currentBlockB.BlockHash())
}
// Try getting the blueAnticoneSize between them. Since the two
// blocks are not in the anticones of each other, this should fail.
@@ -339,7 +349,7 @@ func TestGHOSTDAGErrors(t *testing.T) {
block3 := prepareAndProcessBlockByParentMsgBlocks(t, dag, block1, block2)
// Clear the reachability store
dag.reachabilityStore.loaded = map[daghash.Hash]*reachabilityData{}
dag.reachabilityTree.store.loaded = map[daghash.Hash]*reachabilityData{}
dbTx, err := dbaccess.NewTx()
if err != nil {
@@ -359,12 +369,15 @@ func TestGHOSTDAGErrors(t *testing.T) {
// Try to rerun GHOSTDAG on the last block. GHOSTDAG uses
// reachability data, so we expect it to fail.
blockNode3 := dag.index.LookupNode(block3.BlockHash())
blockNode3, ok := dag.index.LookupNode(block3.BlockHash())
if !ok {
t.Fatalf("block %s does not exist in the DAG", block3.BlockHash())
}
_, err = dag.ghostdag(blockNode3)
if err == nil {
t.Fatalf("TestGHOSTDAGErrors: ghostdag unexpectedly succeeded")
}
expectedErrSubstring := "Couldn't find reachability data"
expectedErrSubstring := "couldn't find reachability data"
if !strings.Contains(err.Error(), expectedErrSubstring) {
t.Fatalf("TestGHOSTDAGErrors: ghostdag returned wrong error. "+
"Want: %s, got: %s", expectedErrSubstring, err)


@@ -59,13 +59,13 @@ func (idx *AcceptanceIndex) Init(dag *blockdag.BlockDAG) error {
//
// This is part of the Indexer interface.
func (idx *AcceptanceIndex) recover() error {
dbTx, err := dbaccess.NewTx()
if err != nil {
return err
}
defer dbTx.RollbackUnlessClosed()
return idx.dag.ForEachHash(func(hash daghash.Hash) error {
dbTx, err := dbaccess.NewTx()
if err != nil {
return err
}
defer dbTx.RollbackUnlessClosed()
err = idx.dag.ForEachHash(func(hash daghash.Hash) error {
exists, err := dbaccess.HasAcceptanceData(dbTx, &hash)
if err != nil {
return err
@@ -77,13 +77,13 @@ func (idx *AcceptanceIndex) recover() error {
if err != nil {
return err
}
return idx.ConnectBlock(dbTx, &hash, txAcceptanceData)
})
if err != nil {
return err
}
err = idx.ConnectBlock(dbTx, &hash, txAcceptanceData)
if err != nil {
return err
}
return dbTx.Commit()
return dbTx.Commit()
})
}
// ConnectBlock is invoked by the index manager when a new block has been

View File

@@ -63,8 +63,8 @@ func TestProcessOrphans(t *testing.T) {
}
// Make sure that the child block had been rejected
node := dag.index.LookupNode(childBlock.Hash())
if node == nil {
node, ok := dag.index.LookupNode(childBlock.Hash())
if !ok {
t.Fatalf("TestProcessOrphans: child block missing from block index")
}
if !dag.index.NodeStatus(node).KnownInvalid() {

File diff suppressed because it is too large


@@ -1,6 +1,8 @@
package blockdag
import (
"github.com/kaspanet/kaspad/dagconfig"
"github.com/kaspanet/kaspad/util/daghash"
"reflect"
"strings"
"testing"
@@ -11,19 +13,20 @@ func TestAddChild(t *testing.T) {
// root -> a -> b -> c...
// Create the root node of a new reachability tree
root := newReachabilityTreeNode(&blockNode{})
root.setInterval(newReachabilityInterval(1, 100))
root.interval = newReachabilityInterval(1, 100)
// Add a chain of child nodes just before a reindex occurs (2^6=64 < 100)
currentTip := root
for i := 0; i < 6; i++ {
node := newReachabilityTreeNode(&blockNode{})
modifiedNodes, err := currentTip.addChild(node)
modifiedNodes := newModifiedTreeNodes()
err := currentTip.addChild(node, root, modifiedNodes)
if err != nil {
t.Fatalf("TestAddChild: addChild failed: %s", err)
}
// Expect only the node and its parent to be affected
expectedModifiedNodes := []*reachabilityTreeNode{currentTip, node}
expectedModifiedNodes := newModifiedTreeNodes(currentTip, node)
if !reflect.DeepEqual(modifiedNodes, expectedModifiedNodes) {
t.Fatalf("TestAddChild: unexpected modifiedNodes. "+
"want: %s, got: %s", expectedModifiedNodes, modifiedNodes)
@@ -34,7 +37,8 @@ func TestAddChild(t *testing.T) {
// Add another node to the tip of the chain to trigger a reindex (100 < 2^7=128)
lastChild := newReachabilityTreeNode(&blockNode{})
modifiedNodes, err := currentTip.addChild(lastChild)
modifiedNodes := newModifiedTreeNodes()
err := currentTip.addChild(lastChild, root, modifiedNodes)
if err != nil {
t.Fatalf("TestAddChild: addChild failed: %s", err)
}
@@ -45,19 +49,23 @@ func TestAddChild(t *testing.T) {
t.Fatalf("TestAddChild: unexpected amount of modifiedNodes.")
}
// Expect the tip to have an interval of 1 and remaining interval of 0
// Expect the tip to have an interval of 1 and remaining interval of 0 both before and after
tipInterval := lastChild.interval.size()
if tipInterval != 1 {
t.Fatalf("TestAddChild: unexpected tip interval size: want: 1, got: %d", tipInterval)
}
tipRemainingInterval := lastChild.remainingInterval.size()
if tipRemainingInterval != 0 {
t.Fatalf("TestAddChild: unexpected tip interval size: want: 0, got: %d", tipRemainingInterval)
tipRemainingIntervalBefore := lastChild.remainingIntervalBefore().size()
if tipRemainingIntervalBefore != 0 {
t.Fatalf("TestAddChild: unexpected tip interval before size: want: 0, got: %d", tipRemainingIntervalBefore)
}
tipRemainingIntervalAfter := lastChild.remainingIntervalAfter().size()
if tipRemainingIntervalAfter != 0 {
t.Fatalf("TestAddChild: unexpected tip interval after size: want: 0, got: %d", tipRemainingIntervalAfter)
}
// Expect all nodes to be descendant nodes of root
currentNode := currentTip
for currentNode != nil {
for currentNode != root {
if !root.isAncestorOf(currentNode) {
t.Fatalf("TestAddChild: currentNode is not a descendant of root")
}
@@ -68,19 +76,20 @@ func TestAddChild(t *testing.T) {
// root -> a, b, c...
// Create the root node of a new reachability tree
root = newReachabilityTreeNode(&blockNode{})
root.setInterval(newReachabilityInterval(1, 100))
root.interval = newReachabilityInterval(1, 100)
// Add child nodes to root just before a reindex occurs (2^6=64 < 100)
childNodes := make([]*reachabilityTreeNode, 6)
for i := 0; i < len(childNodes); i++ {
childNodes[i] = newReachabilityTreeNode(&blockNode{})
modifiedNodes, err := root.addChild(childNodes[i])
modifiedNodes := newModifiedTreeNodes()
err := root.addChild(childNodes[i], root, modifiedNodes)
if err != nil {
t.Fatalf("TestAddChild: addChild failed: %s", err)
}
// Expect only the node and the root to be affected
expectedModifiedNodes := []*reachabilityTreeNode{root, childNodes[i]}
expectedModifiedNodes := newModifiedTreeNodes(root, childNodes[i])
if !reflect.DeepEqual(modifiedNodes, expectedModifiedNodes) {
t.Fatalf("TestAddChild: unexpected modifiedNodes. "+
"want: %s, got: %s", expectedModifiedNodes, modifiedNodes)
@@ -89,7 +98,8 @@ func TestAddChild(t *testing.T) {
// Add another node to the root to trigger a reindex (100 < 2^7=128)
lastChild = newReachabilityTreeNode(&blockNode{})
modifiedNodes, err = root.addChild(lastChild)
modifiedNodes = newModifiedTreeNodes()
err = root.addChild(lastChild, root, modifiedNodes)
if err != nil {
t.Fatalf("TestAddChild: addChild failed: %s", err)
}
@@ -100,14 +110,18 @@ func TestAddChild(t *testing.T) {
t.Fatalf("TestAddChild: unexpected amount of modifiedNodes.")
}
// Expect the last-added child to have an interval of 1 and remaining interval of 0
// Expect the last-added child to have an interval of 1 and remaining interval of 0 both before and after
lastChildInterval := lastChild.interval.size()
if lastChildInterval != 1 {
t.Fatalf("TestAddChild: unexpected lastChild interval size: want: 1, got: %d", lastChildInterval)
}
lastChildRemainingInterval := lastChild.remainingInterval.size()
if lastChildRemainingInterval != 0 {
t.Fatalf("TestAddChild: unexpected lastChild interval size: want: 0, got: %d", lastChildRemainingInterval)
lastChildRemainingIntervalBefore := lastChild.remainingIntervalBefore().size()
if lastChildRemainingIntervalBefore != 0 {
t.Fatalf("TestAddChild: unexpected lastChild interval before size: want: 0, got: %d", lastChildRemainingIntervalBefore)
}
lastChildRemainingIntervalAfter := lastChild.remainingIntervalAfter().size()
if lastChildRemainingIntervalAfter != 0 {
t.Fatalf("TestAddChild: unexpected lastChild interval after size: want: 0, got: %d", lastChildRemainingIntervalAfter)
}
// Expect all nodes to be descendant nodes of root
@@ -118,6 +132,91 @@ func TestAddChild(t *testing.T) {
}
}
func TestReachabilityTreeNodeIsAncestorOf(t *testing.T) {
root := newReachabilityTreeNode(&blockNode{})
currentTip := root
const numberOfDescendants = 6
descendants := make([]*reachabilityTreeNode, numberOfDescendants)
for i := 0; i < numberOfDescendants; i++ {
node := newReachabilityTreeNode(&blockNode{})
err := currentTip.addChild(node, root, newModifiedTreeNodes())
if err != nil {
t.Fatalf("TestReachabilityTreeNodeIsAncestorOf: addChild failed: %s", err)
}
descendants[i] = node
currentTip = node
}
// Expect all descendants to be in the future of root
for _, node := range descendants {
if !root.isAncestorOf(node) {
t.Fatalf("TestReachabilityTreeNodeIsAncestorOf: node is not a descendant of root")
}
}
if !root.isAncestorOf(root) {
t.Fatalf("TestReachabilityTreeNodeIsAncestorOf: root is expected to be an ancestor of root")
}
}
func TestIntervalContains(t *testing.T) {
tests := []struct {
name string
this, other *reachabilityInterval
thisContainsOther bool
}{
{
name: "this == other",
this: newReachabilityInterval(10, 100),
other: newReachabilityInterval(10, 100),
thisContainsOther: true,
},
{
name: "this.start == other.start && this.end < other.end",
this: newReachabilityInterval(10, 90),
other: newReachabilityInterval(10, 100),
thisContainsOther: false,
},
{
name: "this.start == other.start && this.end > other.end",
this: newReachabilityInterval(10, 100),
other: newReachabilityInterval(10, 90),
thisContainsOther: true,
},
{
name: "this.start > other.start && this.end == other.end",
this: newReachabilityInterval(20, 100),
other: newReachabilityInterval(10, 100),
thisContainsOther: false,
},
{
name: "this.start < other.start && this.end == other.end",
this: newReachabilityInterval(10, 100),
other: newReachabilityInterval(20, 100),
thisContainsOther: true,
},
{
name: "this.start > other.start && this.end < other.end",
this: newReachabilityInterval(20, 90),
other: newReachabilityInterval(10, 100),
thisContainsOther: false,
},
{
name: "this.start < other.start && this.end > other.end",
this: newReachabilityInterval(10, 100),
other: newReachabilityInterval(20, 90),
thisContainsOther: true,
},
}
for _, test := range tests {
if thisContainsOther := test.this.contains(test.other); thisContainsOther != test.thisContainsOther {
t.Errorf("test.this.contains(test.other) is expected to be %t but got %t",
test.thisContainsOther, thisContainsOther)
}
}
}
func TestSplitFraction(t *testing.T) {
tests := []struct {
interval *reachabilityInterval
@@ -346,140 +445,140 @@ func TestSplitWithExponentialBias(t *testing.T) {
}
}
func TestIsInFuture(t *testing.T) {
blocks := futureCoveringBlockSet{
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(2, 3)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(4, 67)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(67, 77)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(657, 789)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)}},
func TestHasAncestorOf(t *testing.T) {
treeNodes := futureCoveringTreeNodeSet{
&reachabilityTreeNode{interval: newReachabilityInterval(2, 3)},
&reachabilityTreeNode{interval: newReachabilityInterval(4, 67)},
&reachabilityTreeNode{interval: newReachabilityInterval(67, 77)},
&reachabilityTreeNode{interval: newReachabilityInterval(657, 789)},
&reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)},
&reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)},
}
tests := []struct {
block *futureCoveringBlock
treeNode *reachabilityTreeNode
expectedResult bool
}{
{
block: &futureCoveringBlock{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1, 1)}},
treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1, 1)},
expectedResult: false,
},
{
block: &futureCoveringBlock{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(5, 7)}},
treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(5, 7)},
expectedResult: true,
},
{
block: &futureCoveringBlock{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(67, 76)}},
treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(67, 76)},
expectedResult: true,
},
{
block: &futureCoveringBlock{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(78, 100)}},
treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(78, 100)},
expectedResult: false,
},
{
block: &futureCoveringBlock{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1980, 2000)}},
treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1980, 2000)},
expectedResult: false,
},
{
block: &futureCoveringBlock{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1920, 1920)}},
treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1920, 1920)},
expectedResult: true,
},
}
for i, test := range tests {
result := blocks.isInFuture(test.block)
result := treeNodes.hasAncestorOf(test.treeNode)
if result != test.expectedResult {
t.Errorf("TestIsInFuture: unexpected result in test #%d. Want: %t, got: %t",
t.Errorf("TestHasAncestorOf: unexpected result in test #%d. Want: %t, got: %t",
i, test.expectedResult, result)
}
}
}
func TestInsertBlock(t *testing.T) {
blocks := futureCoveringBlockSet{
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1, 3)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(4, 67)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(67, 77)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(657, 789)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)}},
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)}},
func TestInsertNode(t *testing.T) {
treeNodes := futureCoveringTreeNodeSet{
&reachabilityTreeNode{interval: newReachabilityInterval(1, 3)},
&reachabilityTreeNode{interval: newReachabilityInterval(4, 67)},
&reachabilityTreeNode{interval: newReachabilityInterval(67, 77)},
&reachabilityTreeNode{interval: newReachabilityInterval(657, 789)},
&reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)},
&reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)},
}
tests := []struct {
toInsert []*futureCoveringBlock
expectedResult futureCoveringBlockSet
toInsert []*reachabilityTreeNode
expectedResult futureCoveringTreeNodeSet
}{
{
toInsert: []*futureCoveringBlock{
{treeNode: &reachabilityTreeNode{interval: newReachabilityInterval(5, 7)}},
toInsert: []*reachabilityTreeNode{
{interval: newReachabilityInterval(5, 7)},
},
expectedResult: futureCoveringTreeNodeSet{
&reachabilityTreeNode{interval: newReachabilityInterval(1, 3)},
&reachabilityTreeNode{interval: newReachabilityInterval(4, 67)},
&reachabilityTreeNode{interval: newReachabilityInterval(67, 77)},
&reachabilityTreeNode{interval: newReachabilityInterval(657, 789)},
&reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)},
&reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)},
},
},
{
toInsert: []*reachabilityTreeNode{
{interval: newReachabilityInterval(65, 78)},
},
expectedResult: futureCoveringTreeNodeSet{
&reachabilityTreeNode{interval: newReachabilityInterval(1, 3)},
&reachabilityTreeNode{interval: newReachabilityInterval(4, 67)},
&reachabilityTreeNode{interval: newReachabilityInterval(65, 78)},
&reachabilityTreeNode{interval: newReachabilityInterval(657, 789)},
&reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)},
&reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)},
},
},
{
toInsert: []*reachabilityTreeNode{
{interval: newReachabilityInterval(88, 97)},
},
expectedResult: futureCoveringTreeNodeSet{
&reachabilityTreeNode{interval: newReachabilityInterval(1, 3)},
&reachabilityTreeNode{interval: newReachabilityInterval(4, 67)},
&reachabilityTreeNode{interval: newReachabilityInterval(67, 77)},
&reachabilityTreeNode{interval: newReachabilityInterval(88, 97)},
&reachabilityTreeNode{interval: newReachabilityInterval(657, 789)},
&reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)},
&reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)},
},
},
{
toInsert: []*reachabilityTreeNode{
{interval: newReachabilityInterval(88, 97)},
{interval: newReachabilityInterval(3000, 3010)},
},
expectedResult: futureCoveringTreeNodeSet{
&reachabilityTreeNode{interval: newReachabilityInterval(1, 3)},
&reachabilityTreeNode{interval: newReachabilityInterval(4, 67)},
&reachabilityTreeNode{interval: newReachabilityInterval(67, 77)},
&reachabilityTreeNode{interval: newReachabilityInterval(88, 97)},
&reachabilityTreeNode{interval: newReachabilityInterval(657, 789)},
&reachabilityTreeNode{interval: newReachabilityInterval(1000, 1000)},
&reachabilityTreeNode{interval: newReachabilityInterval(1920, 1921)},
&reachabilityTreeNode{interval: newReachabilityInterval(3000, 3010)},
},
},
}
for i, test := range tests {
// Create a clone of treeNodes so that we have a clean start for every test
treeNodesClone := make(futureCoveringTreeNodeSet, len(treeNodes))
for i, treeNode := range treeNodes {
treeNodesClone[i] = treeNode
}
for _, treeNode := range test.toInsert {
treeNodesClone.insertNode(treeNode)
}
if !reflect.DeepEqual(treeNodesClone, test.expectedResult) {
t.Errorf("TestInsertNode: unexpected result in test #%d. Want: %s, got: %s",
i, test.expectedResult, treeNodesClone)
}
}
}
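The insertion semantics the table above exercises can be modeled in a few lines. This is a standalone sketch inferred from the expected results (the `interval`, `covers`, and `insert` names are illustrative, not kaspad's API): a node already covered by a set member is absorbed (test #1), a node that covers an existing member replaces it (test #2), and anything else is inserted in interval order (tests #3 and #4).

```go
package main

import (
	"fmt"
	"sort"
)

// interval mirrors the inclusive [start, end] reachability intervals above.
type interval struct{ start, end uint64 }

// covers reports whether a's interval fully contains b's.
func (a interval) covers(b interval) bool { return a.start <= b.start && b.end <= a.end }

// insert adds iv to an interval-ordered set, absorbing covered nodes and
// replacing covered members, as the expected results above suggest.
func insert(set []interval, iv interval) []interval {
	// Find the first member whose interval starts after iv.
	i := sort.Search(len(set), func(j int) bool { return set[j].start > iv.start })
	if i > 0 && set[i-1].covers(iv) {
		return set // already covered by an existing member: no-op
	}
	if i < len(set) && iv.covers(set[i]) {
		set[i] = iv // the new node covers an existing member: replace it
		return set
	}
	out := make([]interval, 0, len(set)+1)
	out = append(out, set[:i]...)
	out = append(out, iv)
	return append(out, set[i:]...)
}

func main() {
	base := []interval{{1, 3}, {4, 67}, {67, 77}, {657, 789}, {1000, 1000}, {1920, 1921}}
	fmt.Println(insert(append([]interval{}, base...), interval{5, 7}))   // covered by {4 67}: unchanged
	fmt.Println(insert(append([]interval{}, base...), interval{65, 78})) // covers {67 77}: replaces it
	fmt.Println(insert(append([]interval{}, base...), interval{88, 97})) // plain ordered insert
}
```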
@@ -580,14 +679,14 @@ func TestSplitWithExponentialBiasErrors(t *testing.T) {
func TestReindexIntervalErrors(t *testing.T) {
// Create a treeNode and give it size = 100
treeNode := newReachabilityTreeNode(&blockNode{})
treeNode.interval = newReachabilityInterval(0, 99)
// Add a chain of 100 child treeNodes to treeNode
var err error
currentTreeNode := treeNode
for i := 0; i < 100; i++ {
childTreeNode := newReachabilityTreeNode(&blockNode{})
err = currentTreeNode.addChild(childTreeNode, treeNode, newModifiedTreeNodes())
if err != nil {
break
}
@@ -619,12 +718,12 @@ func BenchmarkReindexInterval(b *testing.B) {
// its first child gets half of the interval, so a reindex
// from the root should happen after adding subTreeSize
// nodes.
root.interval = newReachabilityInterval(0, subTreeSize*2)
currentTreeNode := root
for i := 0; i < subTreeSize; i++ {
childTreeNode := newReachabilityTreeNode(&blockNode{})
err := currentTreeNode.addChild(childTreeNode, root, newModifiedTreeNodes())
if err != nil {
b.Fatalf("addChild: %s", err)
}
@@ -632,50 +731,47 @@ func BenchmarkReindexInterval(b *testing.B) {
currentTreeNode = childTreeNode
}
originalRemainingInterval := *root.remainingIntervalAfter()
// After we added subTreeSize nodes, adding the next
// node should lead to a reindex from root.
fullReindexTriggeringNode := newReachabilityTreeNode(&blockNode{})
b.StartTimer()
err := currentTreeNode.addChild(fullReindexTriggeringNode, root, newModifiedTreeNodes())
b.StopTimer()
if err != nil {
b.Fatalf("addChild: %s", err)
}
if *root.remainingIntervalAfter() == originalRemainingInterval {
b.Fatal("Expected a reindex from root, but it didn't happen")
}
}
}
func TestFutureCoveringTreeNodeSetString(t *testing.T) {
treeNodeA := newReachabilityTreeNode(&blockNode{})
treeNodeA.interval = newReachabilityInterval(123, 456)
treeNodeB := newReachabilityTreeNode(&blockNode{})
treeNodeB.interval = newReachabilityInterval(457, 789)
futureCoveringSet := futureCoveringTreeNodeSet{treeNodeA, treeNodeB}
str := futureCoveringSet.String()
expectedStr := "[123,456][457,789]"
if str != expectedStr {
t.Fatalf("TestFutureCoveringTreeNodeSetString: unexpected "+
"string. Want: %s, got: %s", expectedStr, str)
}
}
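The formatting this test expects can be sketched independently: each interval renders as `[start,end]` and the set concatenates them with no separator. A minimal model of the expected output (illustrative names, not the kaspad implementation):

```go
package main

import (
	"fmt"
	"strings"
)

type interval struct{ start, end uint64 }

// setString concatenates each interval as "[start,end]" with no separator,
// matching the "[123,456][457,789]" form the test above expects.
func setString(set []interval) string {
	var sb strings.Builder
	for _, iv := range set {
		fmt.Fprintf(&sb, "[%d,%d]", iv.start, iv.end)
	}
	return sb.String()
}

func main() {
	fmt.Println(setString([]interval{{123, 456}, {457, 789}})) // [123,456][457,789]
}
```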
func TestReachabilityTreeNodeString(t *testing.T) {
treeNodeA := newReachabilityTreeNode(&blockNode{})
treeNodeA.interval = newReachabilityInterval(100, 199)
treeNodeB1 := newReachabilityTreeNode(&blockNode{})
treeNodeB1.interval = newReachabilityInterval(100, 150)
treeNodeB2 := newReachabilityTreeNode(&blockNode{})
treeNodeB2.interval = newReachabilityInterval(150, 199)
treeNodeC := newReachabilityTreeNode(&blockNode{})
treeNodeC.interval = newReachabilityInterval(100, 149)
treeNodeA.children = []*reachabilityTreeNode{treeNodeB1, treeNodeB2}
treeNodeB2.children = []*reachabilityTreeNode{treeNodeC}
@@ -686,3 +782,268 @@ func TestReachabilityTreeNodeString(t *testing.T) {
"string. Want: %s, got: %s", expectedStr, str)
}
}
func TestIsInPast(t *testing.T) {
// Create a new database and DAG instance to run tests against.
dag, teardownFunc, err := DAGSetup("TestIsInPast", true, Config{
DAGParams: &dagconfig.SimnetParams,
})
if err != nil {
t.Fatalf("TestIsInPast: Failed to setup DAG instance: %v", err)
}
defer teardownFunc()
// Add a chain of two blocks above the genesis. This will be the
// selected parent chain.
blockA := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
blockB := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{blockA.BlockHash()}, nil)
// Add another block above the genesis
blockC := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
nodeC, ok := dag.index.LookupNode(blockC.BlockHash())
if !ok {
t.Fatalf("TestIsInPast: block C is not in the block index")
}
// Add a block whose parents are the two tips
blockD := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{blockB.BlockHash(), blockC.BlockHash()}, nil)
nodeD, ok := dag.index.LookupNode(blockD.BlockHash())
if !ok {
t.Fatalf("TestIsInPast: block D is not in the block index")
}
// Make sure that node C is in the past of node D
isInPast, err := dag.reachabilityTree.isInPast(nodeC, nodeD)
if err != nil {
t.Fatalf("TestIsInPast: isInPast unexpectedly failed: %s", err)
}
if !isInPast {
t.Fatalf("TestIsInPast: node C is unexpectedly not in the past of node D")
}
}
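The query above relies on interval-based reachability: each tree node's interval contains the intervals of its entire subtree, so tree-ancestry reduces to a constant-time containment check. A minimal sketch of that core check (the real isInPast also consults each node's future covering set for non-tree edges, which is omitted here; names are illustrative):

```go
package main

import "fmt"

// interval is a reachability tree interval; a node's interval contains the
// intervals of every node in its subtree.
type interval struct{ start, end uint64 }

// isTreeAncestorOf reports whether the node holding interval a is a tree
// ancestor of the node holding interval b, via simple containment.
func isTreeAncestorOf(a, b interval) bool {
	return a.start <= b.start && b.end <= a.end
}

func main() {
	genesis := interval{1, 1000}
	descendant := interval{500, 750}
	fmt.Println(isTreeAncestorOf(genesis, descendant)) // true
	fmt.Println(isTreeAncestorOf(descendant, genesis)) // false
}
```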
func TestAddChildThatPointsDirectlyToTheSelectedParentChainBelowReindexRoot(t *testing.T) {
// Create a new database and DAG instance to run tests against.
dag, teardownFunc, err := DAGSetup("TestAddChildThatPointsDirectlyToTheSelectedParentChainBelowReindexRoot",
true, Config{DAGParams: &dagconfig.SimnetParams})
if err != nil {
t.Fatalf("Failed to setup DAG instance: %v", err)
}
defer teardownFunc()
// Set the reindex window to a low number to make this test run fast
originalReachabilityReindexWindow := reachabilityReindexWindow
reachabilityReindexWindow = 10
defer func() {
reachabilityReindexWindow = originalReachabilityReindexWindow
}()
// Add a block on top of the genesis block
chainRootBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
// Add a chain of reachabilityReindexWindow blocks above chainRootBlock.
// This should move the reindex root
chainRootBlockTipHash := chainRootBlock.BlockHash()
for i := uint64(0); i < reachabilityReindexWindow; i++ {
chainBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{chainRootBlockTipHash}, nil)
chainRootBlockTipHash = chainBlock.BlockHash()
}
// Add another block over genesis
PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
}
func TestUpdateReindexRoot(t *testing.T) {
// Create a new database and DAG instance to run tests against.
dag, teardownFunc, err := DAGSetup("TestUpdateReindexRoot", true, Config{
DAGParams: &dagconfig.SimnetParams,
})
if err != nil {
t.Fatalf("Failed to setup DAG instance: %v", err)
}
defer teardownFunc()
// Set the reindex window to a low number to make this test run fast
originalReachabilityReindexWindow := reachabilityReindexWindow
reachabilityReindexWindow = 10
defer func() {
reachabilityReindexWindow = originalReachabilityReindexWindow
}()
// Add two blocks on top of the genesis block
chain1RootBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
chain2RootBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
// Add a chain of reachabilityReindexWindow - 1 blocks above chain1RootBlock and
// chain2RootBlock, respectively. This should not move the reindex root
chain1RootBlockTipHash := chain1RootBlock.BlockHash()
chain2RootBlockTipHash := chain2RootBlock.BlockHash()
genesisTreeNode, err := dag.reachabilityTree.store.treeNodeByBlockHash(dag.genesis.hash)
if err != nil {
t.Fatalf("failed to get tree node: %s", err)
}
for i := uint64(0); i < reachabilityReindexWindow-1; i++ {
chain1Block := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{chain1RootBlockTipHash}, nil)
chain1RootBlockTipHash = chain1Block.BlockHash()
chain2Block := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{chain2RootBlockTipHash}, nil)
chain2RootBlockTipHash = chain2Block.BlockHash()
if dag.reachabilityTree.reindexRoot != genesisTreeNode {
t.Fatalf("reindex root unexpectedly moved")
}
}
// Add another block over chain1. This will move the reindex root to chain1RootBlock
PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{chain1RootBlockTipHash}, nil)
// Make sure that chain1RootBlock is now the reindex root
chain1RootTreeNode, err := dag.reachabilityTree.store.treeNodeByBlockHash(chain1RootBlock.BlockHash())
if err != nil {
t.Fatalf("failed to get tree node: %s", err)
}
if dag.reachabilityTree.reindexRoot != chain1RootTreeNode {
t.Fatalf("chain1RootBlock is not the reindex root after reindex")
}
// Make sure that tight intervals have been applied to chain2. Since
// we added reachabilityReindexWindow-1 blocks to chain2, the size
// of the interval at its root should be equal to reachabilityReindexWindow
chain2RootTreeNode, err := dag.reachabilityTree.store.treeNodeByBlockHash(chain2RootBlock.BlockHash())
if err != nil {
t.Fatalf("failed to get tree node: %s", err)
}
if chain2RootTreeNode.interval.size() != reachabilityReindexWindow {
t.Fatalf("got unexpected chain2RootNode interval. Want: %d, got: %d",
reachabilityReindexWindow, chain2RootTreeNode.interval.size())
}
// Make sure that the rest of the interval has been allocated to
// chain1RootNode, minus slack from both sides
expectedChain1RootIntervalSize := genesisTreeNode.interval.size() - 1 -
chain2RootTreeNode.interval.size() - 2*reachabilityReindexSlack
if chain1RootTreeNode.interval.size() != expectedChain1RootIntervalSize {
t.Fatalf("got unexpected chain1RootNode interval. Want: %d, got: %d",
expectedChain1RootIntervalSize, chain1RootTreeNode.interval.size())
}
}
func TestReindexIntervalsEarlierThanReindexRoot(t *testing.T) {
// Create a new database and DAG instance to run tests against.
dag, teardownFunc, err := DAGSetup("TestReindexIntervalsEarlierThanReindexRoot", true, Config{
DAGParams: &dagconfig.SimnetParams,
})
if err != nil {
t.Fatalf("Failed to setup DAG instance: %v", err)
}
defer teardownFunc()
// Set the reindex window and slack to low numbers to make this test
// run fast
originalReachabilityReindexWindow := reachabilityReindexWindow
originalReachabilityReindexSlack := reachabilityReindexSlack
reachabilityReindexWindow = 10
reachabilityReindexSlack = 5
defer func() {
reachabilityReindexWindow = originalReachabilityReindexWindow
reachabilityReindexSlack = originalReachabilityReindexSlack
}()
// Add three children to the genesis: leftBlock, centerBlock, rightBlock
leftBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
centerBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
rightBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
// Add a chain of reachabilityReindexWindow blocks above centerBlock.
// This will move the reindex root to centerBlock
centerTipHash := centerBlock.BlockHash()
for i := uint64(0); i < reachabilityReindexWindow; i++ {
block := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{centerTipHash}, nil)
centerTipHash = block.BlockHash()
}
// Make sure that centerBlock is now the reindex root
centerTreeNode, err := dag.reachabilityTree.store.treeNodeByBlockHash(centerBlock.BlockHash())
if err != nil {
t.Fatalf("failed to get tree node: %s", err)
}
if dag.reachabilityTree.reindexRoot != centerTreeNode {
t.Fatalf("centerBlock is not the reindex root after reindex")
}
// Get the current interval for leftBlock. The reindex should have
// resulted in a tight interval there
leftTreeNode, err := dag.reachabilityTree.store.treeNodeByBlockHash(leftBlock.BlockHash())
if err != nil {
t.Fatalf("failed to get tree node: %s", err)
}
if leftTreeNode.interval.size() != 1 {
t.Fatalf("leftBlock interval not tight after reindex")
}
// Get the current interval for rightBlock. The reindex should have
// resulted in a tight interval there
rightTreeNode, err := dag.reachabilityTree.store.treeNodeByBlockHash(rightBlock.BlockHash())
if err != nil {
t.Fatalf("failed to get tree node: %s", err)
}
if rightTreeNode.interval.size() != 1 {
t.Fatalf("rightBlock interval not tight after reindex")
}
// Get the current interval for centerBlock. Its interval should be:
// genesisInterval - 1 - leftInterval - leftSlack - rightInterval - rightSlack
genesisTreeNode, err := dag.reachabilityTree.store.treeNodeByBlockHash(dag.genesis.hash)
if err != nil {
t.Fatalf("failed to get tree node: %s", err)
}
expectedCenterInterval := genesisTreeNode.interval.size() - 1 -
leftTreeNode.interval.size() - reachabilityReindexSlack -
rightTreeNode.interval.size() - reachabilityReindexSlack
if centerTreeNode.interval.size() != expectedCenterInterval {
t.Fatalf("unexpected centerBlock interval. Want: %d, got: %d",
expectedCenterInterval, centerTreeNode.interval.size())
}
// Add a chain of reachabilityReindexWindow - 1 blocks above leftBlock.
// Each addition will trigger a lower-than-reindex-root reindex. We
// expect the centerInterval to shrink by 1 each time, but its child
// to remain unaffected
treeChildOfCenterBlock := centerTreeNode.children[0]
treeChildOfCenterBlockOriginalIntervalSize := treeChildOfCenterBlock.interval.size()
leftTipHash := leftBlock.BlockHash()
for i := uint64(0); i < reachabilityReindexWindow-1; i++ {
block := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{leftTipHash}, nil)
leftTipHash = block.BlockHash()
expectedCenterInterval--
if centerTreeNode.interval.size() != expectedCenterInterval {
t.Fatalf("unexpected centerBlock interval. Want: %d, got: %d",
expectedCenterInterval, centerTreeNode.interval.size())
}
if treeChildOfCenterBlock.interval.size() != treeChildOfCenterBlockOriginalIntervalSize {
t.Fatalf("the interval of centerBlock's child unexpectedly changed")
}
}
// Add a chain of reachabilityReindexWindow - 1 blocks above rightBlock.
// Each addition will trigger a lower-than-reindex-root reindex. We
// expect the centerInterval to shrink by 1 each time, but its child
// to remain unaffected
rightTipHash := rightBlock.BlockHash()
for i := uint64(0); i < reachabilityReindexWindow-1; i++ {
block := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{rightTipHash}, nil)
rightTipHash = block.BlockHash()
expectedCenterInterval--
if centerTreeNode.interval.size() != expectedCenterInterval {
t.Fatalf("unexpected centerBlock interval. Want: %d, got: %d",
expectedCenterInterval, centerTreeNode.interval.size())
}
if treeChildOfCenterBlock.interval.size() != treeChildOfCenterBlockOriginalIntervalSize {
t.Fatalf("the interval of centerBlock's child unexpectedly changed")
}
}
}
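The interval bookkeeping these assertions depend on is plain arithmetic over inclusive intervals, where size follows the convention that [1000, 1000] has size 1. A sketch with illustrative numbers matching the test's parameters (the specific interval bounds here are made up for the example):

```go
package main

import "fmt"

type interval struct{ start, end uint64 }

// size uses the inclusive-interval convention from the tests above:
// [1000, 1000] has size 1.
func (iv interval) size() uint64 { return iv.end - iv.start + 1 }

func main() {
	// After the reindex, centerBlock keeps whatever remains of the genesis
	// interval once the tight left/right intervals, one slack allocation per
	// side, and 1 for genesis itself are subtracted.
	genesis := interval{1, 1000}
	left := interval{2, 2}  // tight: size 1
	right := interval{4, 4} // tight: size 1
	slack := uint64(5)      // reachabilityReindexSlack in the test
	expectedCenter := genesis.size() - 1 -
		left.size() - slack -
		right.size() - slack
	fmt.Println(expectedCenter) // 987
}
```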


@@ -12,7 +12,7 @@ import (
type reachabilityData struct {
treeNode *reachabilityTreeNode
futureCoveringSet futureCoveringTreeNodeSet
}
type reachabilityStore struct {
@@ -41,11 +41,11 @@ func (store *reachabilityStore) setTreeNode(treeNode *reachabilityTreeNode) {
store.setBlockAsDirty(node.hash)
}
func (store *reachabilityStore) setFutureCoveringSet(node *blockNode, futureCoveringSet futureCoveringTreeNodeSet) error {
// load the reachability data from DB to store.loaded
_, exists := store.reachabilityDataByHash(node.hash)
if !exists {
return reachabilityNotFoundError(node.hash)
}
store.loaded[*node.hash].futureCoveringSet = futureCoveringSet
@@ -57,22 +57,26 @@ func (store *reachabilityStore) setBlockAsDirty(blockHash *daghash.Hash) {
store.dirty[*blockHash] = struct{}{}
}
func reachabilityNotFoundError(hash *daghash.Hash) error {
return errors.Errorf("couldn't find reachability data for block %s", hash)
}
func (store *reachabilityStore) treeNodeByBlockHash(hash *daghash.Hash) (*reachabilityTreeNode, error) {
reachabilityData, exists := store.reachabilityDataByHash(hash)
if !exists {
return nil, reachabilityNotFoundError(hash)
}
return reachabilityData.treeNode, nil
}
func (store *reachabilityStore) treeNodeByBlockNode(node *blockNode) (*reachabilityTreeNode, error) {
return store.treeNodeByBlockHash(node.hash)
}
func (store *reachabilityStore) futureCoveringSetByBlockNode(node *blockNode) (futureCoveringTreeNodeSet, error) {
reachabilityData, exists := store.reachabilityDataByHash(node.hash)
if !exists {
return nil, reachabilityNotFoundError(node.hash)
}
return reachabilityData.futureCoveringSet, nil
}
@@ -176,7 +180,10 @@ func (store *reachabilityStore) loadReachabilityDataFromCursor(cursor database.C
}
// Connect the treeNode with its blockNode
reachabilityData.treeNode.blockNode, ok = store.dag.index.LookupNode(hash)
if !ok {
return errors.Errorf("block %s does not exist in the DAG", hash)
}
return nil
}
@@ -212,12 +219,6 @@ func (store *reachabilityStore) serializeTreeNode(w io.Writer, treeNode *reachab
return err
}
// Serialize the parent
// If this is the genesis block, write the zero hash instead
parentHash := &daghash.ZeroHash
@@ -262,16 +263,16 @@ func (store *reachabilityStore) serializeReachabilityInterval(w io.Writer, inter
return nil
}
func (store *reachabilityStore) serializeFutureCoveringSet(w io.Writer, futureCoveringSet futureCoveringTreeNodeSet) error {
// Serialize the set size
err := wire.WriteVarInt(w, uint64(len(futureCoveringSet)))
if err != nil {
return err
}
// Serialize each node in the set
for _, node := range futureCoveringSet {
err = wire.WriteElement(w, node.blockNode.hash)
if err != nil {
return err
}
@@ -308,13 +309,6 @@ func (store *reachabilityStore) deserializeTreeNode(r io.Reader, destination *re
}
destination.treeNode.interval = interval
// Deserialize the parent
// If this is the zero hash, this node is the genesis and as such doesn't have a parent
parentHash := &daghash.Hash{}
@@ -385,25 +379,18 @@ func (store *reachabilityStore) deserializeFutureCoveringSet(r io.Reader, destin
}
// Deserialize each block in the set
futureCoveringSet := make(futureCoveringTreeNodeSet, setSize)
for i := uint64(0); i < setSize; i++ {
blockHash := &daghash.Hash{}
err = wire.ReadElement(r, blockHash)
if err != nil {
return err
}
blockReachabilityData, ok := store.reachabilityDataByHash(blockHash)
if !ok {
return errors.Errorf("block reachability data not found for hash: %s", blockHash)
}
futureCoveringSet[i] = blockReachabilityData.treeNode
}
destination.futureCoveringSet = futureCoveringSet


@@ -179,11 +179,6 @@ func newTxValidator(utxoSet UTXOSet, flags txscript.ScriptFlags, sigCache *txscr
// ValidateTransactionScripts validates the scripts for the passed transaction
// using multiple goroutines.
func ValidateTransactionScripts(tx *util.Tx, utxoSet UTXOSet, flags txscript.ScriptFlags, sigCache *txscript.SigCache) error {
// Collect all of the transaction inputs and required information for
// validation.
txIns := tx.MsgTx().TxIn
@@ -213,10 +208,6 @@ func checkBlockScripts(block *blockNode, utxoSet UTXOSet, transactions []*util.T
}
txValItems := make([]*txValidateItem, 0, numInputs)
for _, tx := range transactions {
for txInIdx, txIn := range tx.MsgTx().TxIn {
txVI := &txValidateItem{
txInIndex: txInIdx,


@@ -5,9 +5,11 @@ package blockdag
import (
"compress/bzip2"
"encoding/binary"
"github.com/kaspanet/kaspad/database/ffldb/ldb"
"github.com/kaspanet/kaspad/dbaccess"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
"github.com/syndtr/goleveldb/leveldb/opt"
"io"
"io/ioutil"
"os"
@@ -62,6 +64,15 @@ func DAGSetup(dbName string, openDb bool, config Config) (*BlockDAG, func(), err
return nil, nil, errors.Errorf("error creating temp dir: %s", err)
}
// We set ldb.Options here to return nil because normally
// the database is initialized with very large caches that
// can make opening/closing the database for every test
// quite heavy.
originalLDBOptions := ldb.Options
ldb.Options = func() *opt.Options {
return nil
}
dbPath := filepath.Join(tmpDir, dbName)
_ = os.RemoveAll(dbPath)
err = dbaccess.Open(dbPath)
@@ -75,6 +86,7 @@ func DAGSetup(dbName string, openDb bool, config Config) (*BlockDAG, func(), err
spawnWaitGroup.Wait()
spawn = realSpawn
dbaccess.Close()
ldb.Options = originalLDBOptions
os.RemoveAll(dbPath)
}
} else {
@@ -146,8 +158,8 @@ func SetVirtualForTest(dag *BlockDAG, virtual VirtualForTest) VirtualForTest {
func GetVirtualFromParentsForTest(dag *BlockDAG, parentHashes []*daghash.Hash) (VirtualForTest, error) {
parents := newBlockSet()
for _, hash := range parentHashes {
parent, ok := dag.index.LookupNode(hash)
if !ok {
return nil, errors.Errorf("GetVirtualFromParentsForTest: didn't find node for hash %s", hash)
}
parents.add(parent)

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -336,8 +336,8 @@ func (dag *BlockDAG) initThresholdCaches() error {
}
// No warnings about unknown rules or versions until the DAG is
// synced.
if dag.isSynced() {
// Warn if a high enough percentage of the last blocks have
// unexpected versions.
bestNode := dag.selectedTip()


@@ -156,17 +156,24 @@ func (diffStore *utxoDiffStore) clearDirtyEntries() {
var maxBlueScoreDifferenceToKeepLoaded uint64 = 100
// clearOldEntries removes entries whose blue score is lower than
// virtual.blueScore - maxBlueScoreDifferenceToKeepLoaded. Note
// that tips are not removed, even if their blue score is lower
// than the above threshold.
func (diffStore *utxoDiffStore) clearOldEntries() {
diffStore.mtx.HighPriorityWriteLock()
defer diffStore.mtx.HighPriorityWriteUnlock()
virtualBlueScore := diffStore.dag.VirtualBlueScore()
minBlueScore := virtualBlueScore - maxBlueScoreDifferenceToKeepLoaded
if maxBlueScoreDifferenceToKeepLoaded > virtualBlueScore {
minBlueScore = 0
}
tips := diffStore.dag.virtual.tips()
toRemove := make(map[*blockNode]struct{})
for node := range diffStore.loaded {
if node.blueScore < minBlueScore {
if node.blueScore < minBlueScore && !tips.contains(node) {
toRemove[node] = struct{}{}
}
}
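The eviction rule in the hunk above can be sketched end to end: evict loaded entries whose blue score falls below the threshold, guard against underflow when the virtual blue score is still small, and never evict a current tip. Types here are simplified stand-ins for kaspad's blockNode and utxoDiffStore.

```go
package main

import "fmt"

type node struct {
	name      string
	blueScore uint64
}

// clearOldEntries evicts loaded entries below virtualBlueScore - maxDiff,
// clamping the threshold at zero and always keeping tips loaded.
func clearOldEntries(loaded map[*node]bool, tips map[*node]bool, virtualBlueScore, maxDiff uint64) {
	minBlueScore := uint64(0)
	if virtualBlueScore > maxDiff {
		minBlueScore = virtualBlueScore - maxDiff
	}
	for n := range loaded {
		if n.blueScore < minBlueScore && !tips[n] {
			delete(loaded, n) // deleting during range is safe in Go
		}
	}
}

func main() {
	old := &node{"old", 10}
	oldTip := &node{"oldTip", 20}
	recent := &node{"recent", 950}
	loaded := map[*node]bool{old: true, oldTip: true, recent: true}
	tips := map[*node]bool{oldTip: true}
	clearOldEntries(loaded, tips, 1000, 100)
	fmt.Println(len(loaded)) // 2: oldTip survives only because it is a tip
}
```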


@@ -1,12 +1,13 @@
package blockdag
import (
"github.com/kaspanet/kaspad/dagconfig"
"github.com/kaspanet/kaspad/dbaccess"
"github.com/kaspanet/kaspad/util/daghash"
"github.com/kaspanet/kaspad/wire"
"reflect"
"testing"
)
func TestUTXODiffStore(t *testing.T) {
@@ -114,8 +115,8 @@ func TestClearOldEntries(t *testing.T) {
for i := 0; i < 10; i++ {
processedBlock := PrepareAndProcessBlockForTest(t, dag, dag.TipHashes(), nil)
node, ok := dag.index.LookupNode(processedBlock.BlockHash())
if !ok {
t.Fatalf("TestClearOldEntries: missing blockNode for hash %s", processedBlock.BlockHash())
}
blockNodes[i] = node
@@ -144,15 +145,16 @@ func TestClearOldEntries(t *testing.T) {
// Add a block on top of the genesis to force the retrieval of all diffData
processedBlock := PrepareAndProcessBlockForTest(t, dag, []*daghash.Hash{dag.genesis.hash}, nil)
node, ok := dag.index.LookupNode(processedBlock.BlockHash())
if !ok {
t.Fatalf("TestClearOldEntries: missing blockNode for hash %s", processedBlock.BlockHash())
}
// Make sure that the child-of-genesis node is in the loaded set, since it
// is a tip.
_, ok = dag.utxoDiffStore.loaded[node]
if !ok {
t.Fatalf("TestClearOldEntries: diffData for node %s is not in the loaded set", node.hash)
}
// Make sure that all the old nodes still do not exist in the loaded set


@@ -52,7 +52,12 @@ func (diffStore *utxoDiffStore) deserializeBlockUTXODiffData(serializedDiffData
if err != nil {
return nil, err
}
var ok bool
diffData.diffChild, ok = diffStore.dag.index.LookupNode(hash)
if !ok {
return nil, errors.Errorf("block %s does not exist in the DAG", hash)
}
}
diffData.diff, err = deserializeUTXODiff(r)


@@ -457,17 +457,15 @@ func (fus *FullUTXOSet) WithDiff(other *UTXODiff) (UTXOSet, error) {
//
// This function MUST be called with the DAG lock held.
func (fus *FullUTXOSet) AddTx(tx *wire.MsgTx, blueScore uint64) (isAccepted bool, err error) {
if !fus.containsInputs(tx) {
return false, nil
}
for _, txIn := range tx.TxIn {
fus.remove(txIn.PreviousOutpoint)
}
isCoinbase := tx.IsCoinBase()
for i, txOut := range tx.TxOut {
outpoint := *wire.NewOutpoint(tx.TxID(), uint32(i))
entry := NewUTXOEntry(txOut, isCoinbase, blueScore)
@@ -543,12 +541,11 @@ func (dus *DiffUTXOSet) WithDiff(other *UTXODiff) (UTXOSet, error) {
// If dus.UTXODiff.useMultiset is true, this function MUST be
// called with the DAG lock held.
func (dus *DiffUTXOSet) AddTx(tx *wire.MsgTx, blockBlueScore uint64) (bool, error) {
isCoinbase := tx.IsCoinBase()
if !isCoinbase && !dus.containsInputs(tx) {
if !dus.containsInputs(tx) {
return false, nil
}
err := dus.appendTx(tx, blockBlueScore, isCoinbase)
err := dus.appendTx(tx, blockBlueScore)
if err != nil {
return false, err
}
@@ -556,20 +553,19 @@ func (dus *DiffUTXOSet) AddTx(tx *wire.MsgTx, blockBlueScore uint64) (bool, erro
return true, nil
}
func (dus *DiffUTXOSet) appendTx(tx *wire.MsgTx, blockBlueScore uint64, isCoinbase bool) error {
if !isCoinbase {
for _, txIn := range tx.TxIn {
entry, ok := dus.Get(txIn.PreviousOutpoint)
if !ok {
return errors.Errorf("Couldn't find entry for outpoint %s", txIn.PreviousOutpoint)
}
err := dus.UTXODiff.RemoveEntry(txIn.PreviousOutpoint, entry)
if err != nil {
return err
}
func (dus *DiffUTXOSet) appendTx(tx *wire.MsgTx, blockBlueScore uint64) error {
for _, txIn := range tx.TxIn {
entry, ok := dus.Get(txIn.PreviousOutpoint)
if !ok {
return errors.Errorf("couldn't find entry for outpoint %s", txIn.PreviousOutpoint)
}
err := dus.UTXODiff.RemoveEntry(txIn.PreviousOutpoint, entry)
if err != nil {
return err
}
}
isCoinbase := tx.IsCoinBase()
for i, txOut := range tx.TxOut {
outpoint := *wire.NewOutpoint(tx.TxID(), uint32(i))
entry := NewUTXOEntry(txOut, isCoinbase, blockBlueScore)
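The `AddTx`/`appendTx` refactor above drops the coinbase special case: inputs are always checked and removed, because a coinbase transaction now simply has zero inputs. A minimal sketch of that flow over a plain map (the `outpoint`, `tx`, and `utxoSet` types here are simplified stand-ins for the `wire` types and `DiffUTXOSet`):

```go
package main

import "fmt"

// outpoint is a simplified stand-in for wire.Outpoint.
type outpoint struct {
	txID  string
	index uint32
}

type txIn struct{ previousOutpoint outpoint }
type txOut struct{ value uint64 }

type tx struct {
	id   string
	ins  []txIn
	outs []txOut
}

// utxoSet stands in for DiffUTXOSet.
type utxoSet struct{ entries map[outpoint]txOut }

// addTx mirrors the refactored flow: every input must exist in the set
// and is removed, then every output is added. A coinbase needs no
// special case because it carries zero inputs.
func (u *utxoSet) addTx(t *tx) bool {
	for _, in := range t.ins {
		if _, ok := u.entries[in.previousOutpoint]; !ok {
			return false // missing input: transaction not accepted
		}
	}
	for _, in := range t.ins {
		delete(u.entries, in.previousOutpoint)
	}
	for i, out := range t.outs {
		u.entries[outpoint{txID: t.id, index: uint32(i)}] = out
	}
	return true
}

func main() {
	u := &utxoSet{entries: make(map[outpoint]txOut)}
	coinbase := &tx{id: "cb", outs: []txOut{{value: 50}}}
	fmt.Println("coinbase accepted:", u.addTx(coinbase))
	spend := &tx{id: "t1", ins: []txIn{{outpoint{"cb", 0}}}, outs: []txOut{{value: 50}}}
	fmt.Println("spend accepted:", u.addTx(spend))
	fmt.Println("double spend accepted:", u.addTx(spend))
}
```

Note how removing the `isCoinbase` branch makes the zero-input case fall out naturally: both loops over `t.ins` are no-ops for a coinbase.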


@@ -1,7 +1,6 @@
package blockdag
import (
"math"
"reflect"
"testing"
@@ -947,12 +946,9 @@ func TestUTXOSetDiffRules(t *testing.T) {
// TestDiffUTXOSet_addTx makes sure that diffUTXOSet addTx works as expected
func TestDiffUTXOSet_addTx(t *testing.T) {
// coinbaseTX is coinbase. As such, it has exactly one input with hash zero and MaxUint32 index
txID0, _ := daghash.NewTxIDFromStr("0000000000000000000000000000000000000000000000000000000000000000")
txIn0 := &wire.TxIn{SignatureScript: []byte{}, PreviousOutpoint: wire.Outpoint{TxID: *txID0, Index: math.MaxUint32}, Sequence: 0}
txOut0 := &wire.TxOut{ScriptPubKey: []byte{0}, Value: 10}
utxoEntry0 := NewUTXOEntry(txOut0, true, 0)
coinbaseTX := wire.NewSubnetworkMsgTx(1, []*wire.TxIn{txIn0}, []*wire.TxOut{txOut0}, subnetworkid.SubnetworkIDCoinbase, 0, nil)
coinbaseTX := wire.NewSubnetworkMsgTx(1, []*wire.TxIn{}, []*wire.TxOut{txOut0}, subnetworkid.SubnetworkIDCoinbase, 0, nil)
// transaction1 spends coinbaseTX
id1 := coinbaseTX.TxID()


@@ -400,10 +400,11 @@ func CalcTxMass(tx *util.Tx, previousScriptPubKeys [][]byte) uint64 {
//
// The flags do not modify the behavior of this function directly, however they
// are needed to pass along to checkProofOfWork.
func (dag *BlockDAG) checkBlockHeaderSanity(header *wire.BlockHeader, flags BehaviorFlags) (delay time.Duration, err error) {
func (dag *BlockDAG) checkBlockHeaderSanity(block *util.Block, flags BehaviorFlags) (delay time.Duration, err error) {
// Ensure the proof of work bits in the block header is in min/max range
// and the block hash is less than the target value described by the
// bits.
header := &block.MsgBlock().Header
err = dag.checkProofOfWork(header, flags)
if err != nil {
return 0, err
@@ -465,101 +466,182 @@ func checkBlockParentsOrder(header *wire.BlockHeader) error {
// The flags do not modify the behavior of this function directly, however they
// are needed to pass along to checkBlockHeaderSanity.
func (dag *BlockDAG) checkBlockSanity(block *util.Block, flags BehaviorFlags) (time.Duration, error) {
msgBlock := block.MsgBlock()
header := &msgBlock.Header
delay, err := dag.checkBlockHeaderSanity(header, flags)
delay, err := dag.checkBlockHeaderSanity(block, flags)
if err != nil {
return 0, err
}
err = dag.checkBlockContainsAtLeastOneTransaction(block)
if err != nil {
return 0, err
}
err = dag.checkBlockContainsLessThanMaxBlockMassTransactions(block)
if err != nil {
return 0, err
}
err = dag.checkFirstBlockTransactionIsCoinbase(block)
if err != nil {
return 0, err
}
err = dag.checkBlockContainsOnlyOneCoinbase(block)
if err != nil {
return 0, err
}
err = dag.checkBlockTransactionOrder(block)
if err != nil {
return 0, err
}
err = dag.checkNoNonNativeTransactions(block)
if err != nil {
return 0, err
}
err = dag.checkBlockTransactionSanity(block)
if err != nil {
return 0, err
}
err = dag.checkBlockHashMerkleRoot(block)
if err != nil {
return 0, err
}
// A block must have at least one transaction.
numTx := len(msgBlock.Transactions)
if numTx == 0 {
return 0, ruleError(ErrNoTransactions, "block does not contain "+
"any transactions")
// The following check will be fairly quick since the transaction IDs
// are already cached due to building the merkle tree above.
err = dag.checkBlockDuplicateTransactions(block)
if err != nil {
return 0, err
}
err = dag.checkBlockDoubleSpends(block)
if err != nil {
return 0, err
}
return delay, nil
}
func (dag *BlockDAG) checkBlockContainsAtLeastOneTransaction(block *util.Block) error {
transactions := block.Transactions()
numTx := len(transactions)
if numTx == 0 {
return ruleError(ErrNoTransactions, "block does not contain "+
"any transactions")
}
return nil
}
func (dag *BlockDAG) checkBlockContainsLessThanMaxBlockMassTransactions(block *util.Block) error {
// A block must not have more transactions than the max block mass or
// else it is certainly over the block mass limit.
transactions := block.Transactions()
numTx := len(transactions)
if numTx > wire.MaxMassPerBlock {
str := fmt.Sprintf("block contains too many transactions - "+
"got %d, max %d", numTx, wire.MaxMassPerBlock)
return 0, ruleError(ErrBlockMassTooHigh, str)
return ruleError(ErrBlockMassTooHigh, str)
}
return nil
}
// The first transaction in a block must be a coinbase.
func (dag *BlockDAG) checkFirstBlockTransactionIsCoinbase(block *util.Block) error {
transactions := block.Transactions()
if !transactions[util.CoinbaseTransactionIndex].IsCoinBase() {
return 0, ruleError(ErrFirstTxNotCoinbase, "first transaction in "+
return ruleError(ErrFirstTxNotCoinbase, "first transaction in "+
"block is not a coinbase")
}
return nil
}
txOffset := util.CoinbaseTransactionIndex + 1
// A block must not have more than one coinbase. And transactions must be
// ordered by subnetwork
for i, tx := range transactions[txOffset:] {
func (dag *BlockDAG) checkBlockContainsOnlyOneCoinbase(block *util.Block) error {
transactions := block.Transactions()
for i, tx := range transactions[util.CoinbaseTransactionIndex+1:] {
if tx.IsCoinBase() {
str := fmt.Sprintf("block contains second coinbase at "+
"index %d", i+2)
return 0, ruleError(ErrMultipleCoinbases, str)
}
if i != 0 && subnetworkid.Less(&tx.MsgTx().SubnetworkID, &transactions[i].MsgTx().SubnetworkID) {
return 0, ruleError(ErrTransactionsNotSorted, "transactions must be sorted by subnetwork")
return ruleError(ErrMultipleCoinbases, str)
}
}
return nil
}
// Do some preliminary checks on each transaction to ensure they are
// sane before continuing.
func (dag *BlockDAG) checkBlockTransactionOrder(block *util.Block) error {
transactions := block.Transactions()
for i, tx := range transactions[util.CoinbaseTransactionIndex+1:] {
if i != 0 && subnetworkid.Less(&tx.MsgTx().SubnetworkID, &transactions[i].MsgTx().SubnetworkID) {
return ruleError(ErrTransactionsNotSorted, "transactions must be sorted by subnetwork")
}
}
return nil
}
func (dag *BlockDAG) checkNoNonNativeTransactions(block *util.Block) error {
// Disallow non-native/coinbase subnetworks in networks that don't allow them
if !dag.dagParams.EnableNonNativeSubnetworks {
transactions := block.Transactions()
for _, tx := range transactions {
if !(tx.MsgTx().SubnetworkID.IsEqual(subnetworkid.SubnetworkIDNative) ||
tx.MsgTx().SubnetworkID.IsEqual(subnetworkid.SubnetworkIDCoinbase)) {
return ruleError(ErrInvalidSubnetwork, "non-native/coinbase subnetworks are not allowed")
}
}
}
return nil
}
func (dag *BlockDAG) checkBlockTransactionSanity(block *util.Block) error {
transactions := block.Transactions()
for _, tx := range transactions {
err := CheckTransactionSanity(tx, dag.subnetworkID)
if err != nil {
return 0, err
return err
}
}
return nil
}
func (dag *BlockDAG) checkBlockHashMerkleRoot(block *util.Block) error {
// Build merkle tree and ensure the calculated merkle root matches the
// entry in the block header. This also has the effect of caching all
// of the transaction hashes in the block to speed up future hash
// checks.
hashMerkleTree := BuildHashMerkleTreeStore(block.Transactions())
calculatedHashMerkleRoot := hashMerkleTree.Root()
if !header.HashMerkleRoot.IsEqual(calculatedHashMerkleRoot) {
if !block.MsgBlock().Header.HashMerkleRoot.IsEqual(calculatedHashMerkleRoot) {
str := fmt.Sprintf("block hash merkle root is invalid - block "+
"header indicates %s, but calculated value is %s",
header.HashMerkleRoot, calculatedHashMerkleRoot)
return 0, ruleError(ErrBadMerkleRoot, str)
block.MsgBlock().Header.HashMerkleRoot, calculatedHashMerkleRoot)
return ruleError(ErrBadMerkleRoot, str)
}
return nil
}
// Check for duplicate transactions. This check will be fairly quick
// since the transaction IDs are already cached due to building the
// merkle tree above.
func (dag *BlockDAG) checkBlockDuplicateTransactions(block *util.Block) error {
existingTxIDs := make(map[daghash.TxID]struct{})
transactions := block.Transactions()
for _, tx := range transactions {
id := tx.ID()
if _, exists := existingTxIDs[*id]; exists {
str := fmt.Sprintf("block contains duplicate "+
"transaction %s", id)
return 0, ruleError(ErrDuplicateTx, str)
return ruleError(ErrDuplicateTx, str)
}
existingTxIDs[*id] = struct{}{}
}
return nil
}
// Check for double spends with transactions on the same block.
func (dag *BlockDAG) checkBlockDoubleSpends(block *util.Block) error {
usedOutpoints := make(map[wire.Outpoint]*daghash.TxID)
transactions := block.Transactions()
for _, tx := range transactions {
for _, txIn := range tx.MsgTx().TxIn {
if spendingTxID, exists := usedOutpoints[txIn.PreviousOutpoint]; exists {
str := fmt.Sprintf("transaction %s spends "+
"outpoint %s that was already spent by "+
"transaction %s in this block", tx.ID(), txIn.PreviousOutpoint, spendingTxID)
return 0, ruleError(ErrDoubleSpendInSameBlock, str)
return ruleError(ErrDoubleSpendInSameBlock, str)
}
usedOutpoints[txIn.PreviousOutpoint] = tx.ID()
}
}
return delay, nil
return nil
}
// checkBlockHeaderContext performs several validation checks on the block header
@@ -616,7 +698,7 @@ func (dag *BlockDAG) validateParents(blockHeader *wire.BlockHeader, parents bloc
for parentA := range parents {
// isFinalized might be false-negative because node finality status is
// updated in a separate goroutine. This is why later the block is
// checked more thoroughly on the finality rules in dag.checkFinalityRules.
// checked more thoroughly on the finality rules in dag.checkFinalityViolation.
if parentA.isFinalized {
return ruleError(ErrFinality, fmt.Sprintf("block %s is a finalized "+
"parent of block %s", parentA.hash, blockHeader.BlockHash()))
@@ -627,7 +709,7 @@ func (dag *BlockDAG) validateParents(blockHeader *wire.BlockHeader, parents bloc
continue
}
isAncestorOf, err := dag.isAncestorOf(parentA, parentB)
isAncestorOf, err := dag.isInPast(parentA, parentB)
if err != nil {
return err
}
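The refactor above splits the monolithic `checkBlockSanity` into small, named validators that each check one property and return a descriptive error. A sketch of that structure, with a minimal `block` stand-in (the real checks operate on `*util.Block` and return `ruleError` values):

```go
package main

import (
	"errors"
	"fmt"
)

// block is a minimal stand-in for *util.Block.
type block struct{ txIDs []string }

// Each helper validates exactly one property, mirroring helpers such as
// checkBlockContainsAtLeastOneTransaction above.
func checkAtLeastOneTransaction(b *block) error {
	if len(b.txIDs) == 0 {
		return errors.New("block does not contain any transactions")
	}
	return nil
}

// checkNoDuplicateTransactions mirrors checkBlockDuplicateTransactions:
// a set of seen IDs catches any repeat in one pass.
func checkNoDuplicateTransactions(b *block) error {
	seen := make(map[string]struct{})
	for _, id := range b.txIDs {
		if _, exists := seen[id]; exists {
			return fmt.Errorf("block contains duplicate transaction %s", id)
		}
		seen[id] = struct{}{}
	}
	return nil
}

// checkBlockSanity runs each check in order and stops at the first
// failure, just like the sequence of early returns above.
func checkBlockSanity(b *block) error {
	for _, check := range []func(*block) error{
		checkAtLeastOneTransaction,
		checkNoDuplicateTransactions,
	} {
		if err := check(b); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(checkBlockSanity(&block{}))
	fmt.Println(checkBlockSanity(&block{txIDs: []string{"a", "b", "a"}}))
	fmt.Println(checkBlockSanity(&block{txIDs: []string{"a", "b"}}))
}
```

The payoff of the decomposition is visible in the diff itself: each rule gets a name, its own early return, and can be tested in isolation instead of threading a `(delay, err)` pair through one long function.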


@@ -125,12 +125,6 @@ func TestCheckConnectBlockTemplate(t *testing.T) {
"block 4: %v", err)
}
blockNode3 := dag.index.LookupNode(blocks[3].Hash())
blockNode4 := dag.index.LookupNode(blocks[4].Hash())
if blockNode3.children.contains(blockNode4) {
t.Errorf("Block 4 wasn't successfully detached as a child from block3")
}
// Block 3a should connect even though it does not build on dag tips.
err = dag.CheckConnectBlockTemplateNoLock(blocks[5])
if err != nil {


@@ -97,7 +97,7 @@ func TestVirtualBlock(t *testing.T) {
tipsToSet: []*blockNode{},
tipsToAdd: []*blockNode{node0, node1, node2, node3, node4, node5, node6},
expectedTips: blockSetFromSlice(node2, node5, node6),
expectedSelectedParent: node6,
expectedSelectedParent: node5,
},
}


@@ -36,6 +36,7 @@ type configFlags struct {
RPCServer string `short:"s" long:"rpcserver" description:"RPC server to connect to"`
RPCCert string `short:"c" long:"rpccert" description:"RPC server certificate chain for validation"`
DisableTLS bool `long:"notls" description:"Disable TLS"`
MiningAddr string `long:"miningaddr" description:"Address to mine to"`
Verbose bool `long:"verbose" short:"v" description:"Enable logging of RPC requests"`
NumberOfBlocks uint64 `short:"n" long:"numblocks" description:"Number of blocks to mine. If omitted, will mine until the process is interrupted."`
BlockDelay uint64 `long:"block-delay" description:"Delay for block submission (in milliseconds). This is used only for testing purposes."`


@@ -2,6 +2,7 @@ package main
import (
"fmt"
"github.com/kaspanet/kaspad/util"
"os"
"github.com/kaspanet/kaspad/version"
@@ -39,15 +40,20 @@ func main() {
client, err := connectToServer(cfg)
if err != nil {
panic(errors.Wrap(err, "Error connecting to the RPC server"))
panic(errors.Wrap(err, "error connecting to the RPC server"))
}
defer client.Disconnect()
miningAddr, err := util.DecodeAddress(cfg.MiningAddr, cfg.ActiveNetParams.Prefix)
if err != nil {
panic(errors.Wrap(err, "error decoding mining address"))
}
doneChan := make(chan struct{})
spawn(func() {
err = mineLoop(client, cfg.NumberOfBlocks, cfg.BlockDelay, cfg.MineWhenNotSynced)
err = mineLoop(client, cfg.NumberOfBlocks, cfg.BlockDelay, cfg.MineWhenNotSynced, miningAddr)
if err != nil {
panic(errors.Errorf("Error in mine loop: %s", err))
panic(errors.Wrap(err, "error in mine loop"))
}
doneChan <- struct{}{}
})


@@ -25,7 +25,9 @@ var hashesTried uint64
const logHashRateInterval = 10 * time.Second
func mineLoop(client *minerClient, numberOfBlocks uint64, blockDelay uint64, mineWhenNotSynced bool) error {
func mineLoop(client *minerClient, numberOfBlocks uint64, blockDelay uint64, mineWhenNotSynced bool,
miningAddr util.Address) error {
errChan := make(chan error)
templateStopChan := make(chan struct{})
@@ -35,7 +37,7 @@ func mineLoop(client *minerClient, numberOfBlocks uint64, blockDelay uint64, min
wg := sync.WaitGroup{}
for i := uint64(0); numberOfBlocks == 0 || i < numberOfBlocks; i++ {
foundBlock := make(chan *util.Block)
mineNextBlock(client, foundBlock, mineWhenNotSynced, templateStopChan, errChan)
mineNextBlock(client, miningAddr, foundBlock, mineWhenNotSynced, templateStopChan, errChan)
block := <-foundBlock
templateStopChan <- struct{}{}
wg.Add(1)
@@ -80,12 +82,12 @@ func logHashRate() {
})
}
func mineNextBlock(client *minerClient, foundBlock chan *util.Block, mineWhenNotSynced bool,
func mineNextBlock(client *minerClient, miningAddr util.Address, foundBlock chan *util.Block, mineWhenNotSynced bool,
templateStopChan chan struct{}, errChan chan error) {
newTemplateChan := make(chan *rpcmodel.GetBlockTemplateResult)
spawn(func() {
templatesLoop(client, newTemplateChan, errChan, templateStopChan)
templatesLoop(client, miningAddr, newTemplateChan, errChan, templateStopChan)
})
spawn(func() {
solveLoop(newTemplateChan, foundBlock, mineWhenNotSynced, errChan)
@@ -134,7 +136,7 @@ func parseBlock(template *rpcmodel.GetBlockTemplateResult) (*util.Block, error)
wire.NewBlockHeader(template.Version, parentHashes, &daghash.Hash{},
acceptedIDMerkleRoot, utxoCommitment, bits, 0))
for i, txResult := range append([]rpcmodel.GetBlockTemplateResultTx{*template.CoinbaseTxn}, template.Transactions...) {
for i, txResult := range template.Transactions {
reader := hex.NewDecoder(strings.NewReader(txResult.Data))
tx := &wire.MsgTx{}
if err := tx.KaspaDecode(reader, 0); err != nil {
@@ -169,7 +171,9 @@ func solveBlock(block *util.Block, stopChan chan struct{}, foundBlock chan *util
}
func templatesLoop(client *minerClient, newTemplateChan chan *rpcmodel.GetBlockTemplateResult, errChan chan error, stopChan chan struct{}) {
func templatesLoop(client *minerClient, miningAddr util.Address,
newTemplateChan chan *rpcmodel.GetBlockTemplateResult, errChan chan error, stopChan chan struct{}) {
longPollID := ""
getBlockTemplateLongPoll := func() {
if longPollID != "" {
@@ -177,7 +181,7 @@ func templatesLoop(client *minerClient, newTemplateChan chan *rpcmodel.GetBlockT
} else {
log.Infof("Requesting template without longPollID from %s", client.Host())
}
template, err := getBlockTemplate(client, longPollID)
template, err := getBlockTemplate(client, miningAddr, longPollID)
if nativeerrors.Is(err, rpcclient.ErrResponseTimedOut) {
log.Infof("Got timeout while requesting template '%s' from %s", longPollID, client.Host())
return
@@ -205,8 +209,8 @@ func templatesLoop(client *minerClient, newTemplateChan chan *rpcmodel.GetBlockT
}
}
func getBlockTemplate(client *minerClient, longPollID string) (*rpcmodel.GetBlockTemplateResult, error) {
return client.GetBlockTemplate([]string{"coinbasetxn"}, longPollID)
func getBlockTemplate(client *minerClient, miningAddr util.Address, longPollID string) (*rpcmodel.GetBlockTemplateResult, error) {
return client.GetBlockTemplate(miningAddr.String(), longPollID)
}
func solveLoop(newTemplateChan chan *rpcmodel.GetBlockTemplateResult, foundBlock chan *util.Block,
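The miner's `templatesLoop` above threads the `longPollID` from each `GetBlockTemplate` response into the next request, so the server can hold the request open until the template actually changes. A sketch of just that long-poll pattern, with the RPC call replaced by a hypothetical `fetch` callback (the real loop also handles `ErrResponseTimedOut` and a stop channel):

```go
package main

import "fmt"

// template is a hypothetical stand-in for
// rpcmodel.GetBlockTemplateResult; only longPollID matters here.
type template struct {
	longPollID string
	height     int
}

// templatesLoop carries each response's longPollID into the next
// request instead of polling blindly, and sends templates downstream.
func templatesLoop(fetch func(longPollID string) template, rounds int, out chan<- template) {
	longPollID := "" // the first request carries no longPollID
	for i := 0; i < rounds; i++ {
		t := fetch(longPollID)
		longPollID = t.longPollID
		out <- t
	}
	close(out)
}

func main() {
	calls := 0
	fetch := func(longPollID string) template {
		calls++
		return template{longPollID: fmt.Sprintf("lp-%d", calls), height: calls}
	}
	out := make(chan template)
	go templatesLoop(fetch, 3, out)
	for t := range out {
		fmt.Println("got template at height", t.height)
	}
}
```

The same shape explains why `miningAddr` had to be threaded all the way down in the diff: the template request is built inside this loop, several calls away from `main`.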


@@ -117,7 +117,6 @@ type Flags struct {
Upnp bool `long:"upnp" description:"Use UPnP to map our listening port outside of NAT"`
MinRelayTxFee float64 `long:"minrelaytxfee" description:"The minimum transaction fee in KAS/kB to be considered a non-zero fee."`
MaxOrphanTxs int `long:"maxorphantx" description:"Max number of orphan transactions to keep in memory"`
MiningAddrs []string `long:"miningaddr" description:"Add the specified payment address to the list of addresses to use for generated blocks -- At least one address is required if the generate option is set"`
BlockMaxMass uint64 `long:"blockmaxmass" description:"Maximum transaction mass to be used when creating a block"`
UserAgentComments []string `long:"uacomment" description:"Comment to add to the user agent -- See BIP 14 for more information."`
NoPeerBloomFilters bool `long:"nopeerbloomfilters" description:"Disable bloom filtering support"`
@@ -127,7 +126,6 @@ type Flags struct {
DropAcceptanceIndex bool `long:"dropacceptanceindex" description:"Deletes the hash-based acceptance index from the database on start up and then exits."`
RelayNonStd bool `long:"relaynonstd" description:"Relay non-standard transactions regardless of the default settings for the active network."`
RejectNonStd bool `long:"rejectnonstd" description:"Reject non-standard transactions regardless of the default settings for the active network."`
Subnetwork string `long:"subnetwork" description:"If subnetwork ID is specified, than node will request and process only payloads from specified subnetwork. And if subnetwork ID is ommited, than payloads of all subnetworks are processed. Subnetworks with IDs 2 through 255 are reserved for future use and are currently not allowed."`
ResetDatabase bool `long:"reset-db" description:"Reset database before starting node. It's needed when switching between subnetworks."`
NetworkFlags
}
@@ -608,36 +606,6 @@ func loadConfig() (*Config, []string, error) {
return nil, nil, err
}
// Check mining addresses are valid and saved parsed versions.
activeConfig.MiningAddrs = make([]util.Address, 0, len(activeConfig.Flags.MiningAddrs))
for _, strAddr := range activeConfig.Flags.MiningAddrs {
addr, err := util.DecodeAddress(strAddr, activeConfig.NetParams().Prefix)
if err != nil {
str := "%s: mining address '%s' failed to decode: %s"
err := errors.Errorf(str, funcName, strAddr, err)
fmt.Fprintln(os.Stderr, err)
fmt.Fprintln(os.Stderr, usageMessage)
return nil, nil, err
}
if !addr.IsForPrefix(activeConfig.NetParams().Prefix) {
str := "%s: mining address '%s' is on the wrong network"
err := errors.Errorf(str, funcName, strAddr)
fmt.Fprintln(os.Stderr, err)
fmt.Fprintln(os.Stderr, usageMessage)
return nil, nil, err
}
activeConfig.MiningAddrs = append(activeConfig.MiningAddrs, addr)
}
if activeConfig.Flags.Subnetwork != "" {
activeConfig.SubnetworkID, err = subnetworkid.NewFromStr(activeConfig.Flags.Subnetwork)
if err != nil {
return nil, nil, err
}
} else {
activeConfig.SubnetworkID = nil
}
// Add default port to all listener addresses if needed and remove
// duplicate addresses.
activeConfig.Listeners, err = network.NormalizeAddresses(activeConfig.Listeners,


@@ -6,17 +6,18 @@ package connmgr
import (
"fmt"
"github.com/kaspanet/kaspad/addrmgr"
"github.com/kaspanet/kaspad/config"
"github.com/kaspanet/kaspad/dagconfig"
"github.com/pkg/errors"
"io"
"io/ioutil"
"net"
"os"
"sync/atomic"
"testing"
"time"
"github.com/kaspanet/kaspad/addrmgr"
"github.com/kaspanet/kaspad/config"
"github.com/kaspanet/kaspad/dagconfig"
"github.com/kaspanet/kaspad/dbaccess"
"github.com/pkg/errors"
)
func init() {
@@ -192,19 +193,26 @@ func addressManagerForTest(t *testing.T, testName string, numAddresses uint8) (*
}
func createEmptyAddressManagerForTest(t *testing.T, testName string) (*addrmgr.AddrManager, func()) {
path, err := ioutil.TempDir("", fmt.Sprintf("%s-addressmanager", testName))
path, err := ioutil.TempDir("", fmt.Sprintf("%s-database", testName))
if err != nil {
t.Fatalf("createEmptyAddressManagerForTest: TempDir unexpectedly "+
"failed: %s", err)
}
return addrmgr.New(path, nil, nil), func() {
// Wait for the connection manager to finish
err = dbaccess.Open(path)
if err != nil {
t.Fatalf("error creating db: %s", err)
}
return addrmgr.New(nil, nil), func() {
// Wait for the connection manager to finish, so it'll
// have access to the address manager as long as it's
// alive.
time.Sleep(10 * time.Millisecond)
err := os.RemoveAll(path)
err := dbaccess.Close()
if err != nil {
t.Fatalf("couldn't remove path %s", path)
t.Fatalf("error closing the database: %s", err)
}
}
}
@@ -568,7 +576,7 @@ func TestMaxRetryDuration(t *testing.T) {
connected := make(chan *ConnReq)
cmgr, err := New(&Config{
RetryDuration: time.Millisecond,
TargetOutbound: 1,
TargetOutbound: 0,
Dial: timedDialer,
OnConnection: func(c *ConnReq, conn net.Conn) {
connected <- c


@@ -5,7 +5,6 @@
package dagconfig
import (
"math"
"time"
"github.com/kaspanet/kaspad/util/daghash"
@@ -13,22 +12,10 @@ import (
"github.com/kaspanet/kaspad/wire"
)
var genesisTxIns = []*wire.TxIn{
{
PreviousOutpoint: wire.Outpoint{
TxID: daghash.TxID{},
Index: math.MaxUint32,
},
SignatureScript: []byte{
0x00, 0x00, 0x0b, 0x2f, 0x50, 0x32, 0x53, 0x48,
0x2f, 0x62, 0x74, 0x63, 0x64, 0x2f,
},
Sequence: math.MaxUint64,
},
}
var genesisTxOuts = []*wire.TxOut{}
var genesisTxPayload = []byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Blue score
0x17, // Varint
0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, // OP-TRUE p2sh
0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7,
@@ -37,58 +24,46 @@ var genesisTxPayload = []byte{
// genesisCoinbaseTx is the coinbase transaction for the genesis blocks for
// the main network.
var genesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, genesisTxIns, genesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, genesisTxPayload)
var genesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, []*wire.TxIn{}, genesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, genesisTxPayload)
// genesisHash is the hash of the first block in the block DAG for the main
// network (genesis block).
var genesisHash = daghash.Hash{
0x9b, 0x22, 0x59, 0x44, 0x66, 0xf0, 0xbe, 0x50,
0x7c, 0x1c, 0x8a, 0xf6, 0x06, 0x27, 0xe6, 0x33,
0x38, 0x7e, 0xd1, 0xd5, 0x8c, 0x42, 0x59, 0x1a,
0x31, 0xac, 0x9a, 0xa6, 0x2e, 0xd5, 0x2b, 0x0f,
0xb3, 0x5d, 0x34, 0x3c, 0xf6, 0xbb, 0xd5, 0xaf,
0x40, 0x4b, 0xff, 0x3f, 0x83, 0x27, 0x71, 0x1e,
0xe1, 0x83, 0xf6, 0x41, 0x32, 0x8c, 0xba, 0xe6,
0xd3, 0xba, 0x13, 0xef, 0x7b, 0x7e, 0x61, 0x65,
}
// genesisMerkleRoot is the hash of the first transaction in the genesis block
// for the main network.
var genesisMerkleRoot = daghash.Hash{
0x72, 0x10, 0x35, 0x85, 0xdd, 0xac, 0x82, 0x5c,
0x49, 0x13, 0x9f, 0xc0, 0x0e, 0x37, 0xc0, 0x45,
0x71, 0xdf, 0xd9, 0xf6, 0x36, 0xdf, 0x4c, 0x42,
0x72, 0x7b, 0x9e, 0x86, 0xdd, 0x37, 0xd2, 0xbd,
0xca, 0x85, 0x56, 0x27, 0xc7, 0x6a, 0xb5, 0x7a,
0x26, 0x1d, 0x63, 0x62, 0x1e, 0x57, 0x21, 0xf0,
0x5e, 0x60, 0x1f, 0xee, 0x1d, 0x4d, 0xaa, 0x53,
0x72, 0xe1, 0x16, 0xda, 0x4b, 0xb3, 0xd8, 0x0e,
}
// genesisBlock defines the genesis block of the block DAG which serves as the
// public transaction ledger for the main network.
var genesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
Version: 0x10000000,
ParentHashes: []*daghash.Hash{},
HashMerkleRoot: &genesisMerkleRoot,
AcceptedIDMerkleRoot: &daghash.Hash{},
UTXOCommitment: &daghash.ZeroHash,
Timestamp: time.Unix(0x5cdac4b0, 0),
Timestamp: time.Unix(0x5edf4ce0, 0),
Bits: 0x207fffff,
Nonce: 0x1,
Nonce: 0,
},
Transactions: []*wire.MsgTx{genesisCoinbaseTx},
}
var devnetGenesisTxIns = []*wire.TxIn{
{
PreviousOutpoint: wire.Outpoint{
TxID: daghash.TxID{},
Index: math.MaxUint32,
},
SignatureScript: []byte{
0x00, 0x00, 0x0b, 0x2f, 0x50, 0x32, 0x53, 0x48,
0x2f, 0x62, 0x74, 0x63, 0x64, 0x2f,
},
Sequence: math.MaxUint64,
},
}
var devnetGenesisTxOuts = []*wire.TxOut{}
var devnetGenesisTxPayload = []byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Blue score
0x17, // Varint
0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, // OP-TRUE p2sh
0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7,
@@ -98,58 +73,46 @@ var devnetGenesisTxPayload = []byte{
// devnetGenesisCoinbaseTx is the coinbase transaction for the genesis blocks for
// the development network.
var devnetGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, devnetGenesisTxIns, devnetGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, devnetGenesisTxPayload)
var devnetGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, []*wire.TxIn{}, devnetGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, devnetGenesisTxPayload)
// devnetGenesisHash is the hash of the first block in the block DAG for the development
// network (genesis block).
var devnetGenesisHash = daghash.Hash{
0x17, 0x59, 0x5c, 0x09, 0xdd, 0x1a, 0x51, 0x65,
0x14, 0xbc, 0x19, 0xff, 0x29, 0xea, 0xf3, 0xcb,
0xe2, 0x76, 0xf0, 0xc7, 0x86, 0xf8, 0x0c, 0x53,
0x59, 0xbe, 0xee, 0x0c, 0x2b, 0x5d, 0x00, 0x00,
0x50, 0x92, 0xd1, 0x1f, 0xaa, 0xba, 0xd3, 0x58,
0xa8, 0x22, 0xd7, 0xec, 0x8e, 0xe3, 0xf4, 0x26,
0x17, 0x18, 0x74, 0xd7, 0x87, 0x05, 0x9d, 0xed,
0x33, 0xcd, 0xe1, 0x26, 0x1a, 0x69, 0x00, 0x00,
}
// devnetGenesisMerkleRoot is the hash of the first transaction in the genesis block
// for the development network.
var devnetGenesisMerkleRoot = daghash.Hash{
0x16, 0x0a, 0xc6, 0x8b, 0x77, 0x08, 0xf4, 0x96,
0xa3, 0x07, 0x05, 0xbc, 0x92, 0xda, 0xee, 0x73,
0x26, 0x5e, 0xd0, 0x85, 0x78, 0xa2, 0x5d, 0x02,
0x49, 0x8a, 0x2a, 0x22, 0xef, 0x41, 0xc9, 0xc3,
0x68, 0x60, 0xe7, 0x77, 0x47, 0x74, 0x7f, 0xd5,
0x55, 0x58, 0x8a, 0xb5, 0xc2, 0x29, 0x0c, 0xa6,
0x65, 0x44, 0xb4, 0x4f, 0xfa, 0x31, 0x7a, 0xfa,
0x55, 0xe0, 0xcf, 0xac, 0x9c, 0x86, 0x30, 0x2a,
}
// devnetGenesisBlock defines the genesis block of the block DAG which serves as the
// public transaction ledger for the development network.
var devnetGenesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
Version: 0x10000000,
ParentHashes: []*daghash.Hash{},
HashMerkleRoot: &devnetGenesisMerkleRoot,
AcceptedIDMerkleRoot: &daghash.Hash{},
UTXOCommitment: &daghash.ZeroHash,
Timestamp: time.Unix(0x5e15e758, 0),
Timestamp: time.Unix(0x5edf4ce0, 0),
Bits: 0x1e7fffff,
Nonce: 0x282ac,
Nonce: 0xb3ed,
},
Transactions: []*wire.MsgTx{devnetGenesisCoinbaseTx},
}
var regtestGenesisTxIns = []*wire.TxIn{
{
PreviousOutpoint: wire.Outpoint{
TxID: daghash.TxID{},
Index: math.MaxUint32,
},
SignatureScript: []byte{
0x00, 0x00, 0x0b, 0x2f, 0x50, 0x32, 0x53, 0x48,
0x2f, 0x62, 0x74, 0x63, 0x64, 0x2f,
},
Sequence: math.MaxUint64,
},
}
var regtestGenesisTxOuts = []*wire.TxOut{}
var regtestGenesisTxPayload = []byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Blue score
0x17, // Varint
0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, // OP-TRUE p2sh
0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7,
@@ -159,58 +122,46 @@ var regtestGenesisTxPayload = []byte{
// regtestGenesisCoinbaseTx is the coinbase transaction for
// the genesis blocks for the regtest network.
var regtestGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, regtestGenesisTxIns, regtestGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, regtestGenesisTxPayload)
var regtestGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, []*wire.TxIn{}, regtestGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, regtestGenesisTxPayload)
// regtestGenesisHash is the hash of the first block in the block DAG for the regtest
// network (genesis block).
var regtestGenesisHash = daghash.Hash{
0xfc, 0x02, 0x19, 0x6f, 0x79, 0x7a, 0xed, 0x2d,
0x0f, 0x31, 0xa5, 0xbd, 0x32, 0x13, 0x29, 0xc7,
0x7c, 0x0c, 0x5c, 0x1a, 0x5b, 0x7c, 0x20, 0x68,
0xb7, 0xc9, 0x9f, 0x61, 0x13, 0x11, 0x00, 0x00,
0xf8, 0x1d, 0xe9, 0x86, 0xa5, 0x60, 0xe0, 0x34,
0x0f, 0x02, 0xaa, 0x8d, 0xea, 0x6f, 0x1f, 0xc6,
0x2a, 0xb4, 0x77, 0xbd, 0xca, 0xed, 0xad, 0x3c,
0x99, 0xe6, 0x98, 0x7c, 0x7b, 0x5e, 0x00, 0x00,
}
// regtestGenesisMerkleRoot is the hash of the first transaction in the genesis block
// for the regtest.
var regtestGenesisMerkleRoot = daghash.Hash{
0x3a, 0x9f, 0x62, 0xc9, 0x2b, 0x16, 0x17, 0xb3,
0x41, 0x6d, 0x9e, 0x2d, 0x87, 0x93, 0xfd, 0x72,
0x77, 0x4d, 0x1d, 0x6f, 0x6d, 0x38, 0x5b, 0xf1,
0x24, 0x1b, 0xdc, 0x96, 0xce, 0xbf, 0xa1, 0x09,
0x1e, 0x08, 0xae, 0x1f, 0x43, 0xf5, 0xfc, 0x24,
0xe6, 0xec, 0x54, 0x5b, 0xf7, 0x52, 0x99, 0xe4,
0xcc, 0x4c, 0xa0, 0x79, 0x41, 0xfc, 0xbe, 0x76,
0x72, 0x4c, 0x7e, 0xd8, 0xa3, 0x43, 0x65, 0x94,
}
// regtestGenesisBlock defines the genesis block of the block DAG which serves as the
// public transaction ledger for the regtest network.
var regtestGenesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
Version: 0x10000000,
ParentHashes: []*daghash.Hash{},
HashMerkleRoot: &regtestGenesisMerkleRoot,
AcceptedIDMerkleRoot: &daghash.Hash{},
UTXOCommitment: &daghash.ZeroHash,
Timestamp: time.Unix(0x5e15e2d8, 0),
Timestamp: time.Unix(0x5edf4ce0, 0),
Bits: 0x1e7fffff,
Nonce: 0x15a6,
Nonce: 0x4a78,
},
Transactions: []*wire.MsgTx{regtestGenesisCoinbaseTx},
}
var simnetGenesisTxIns = []*wire.TxIn{
{
PreviousOutpoint: wire.Outpoint{
TxID: daghash.TxID{},
Index: math.MaxUint32,
},
SignatureScript: []byte{
0x00, 0x00, 0x0b, 0x2f, 0x50, 0x32, 0x53, 0x48,
0x2f, 0x62, 0x74, 0x63, 0x64, 0x2f,
},
Sequence: math.MaxUint64,
},
}
var simnetGenesisTxOuts = []*wire.TxOut{}
var simnetGenesisTxPayload = []byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Blue score
0x17, // Varint
0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, // OP-TRUE p2sh
0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7,
@@ -219,96 +170,84 @@ var simnetGenesisTxPayload = []byte{
}
// simnetGenesisCoinbaseTx is the coinbase transaction for the simnet genesis block.
var simnetGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, simnetGenesisTxIns, simnetGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, simnetGenesisTxPayload)
var simnetGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, []*wire.TxIn{}, simnetGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, simnetGenesisTxPayload)
// simnetGenesisHash is the hash of the first block in the block DAG for
// the simnet (genesis block).
var simnetGenesisHash = daghash.Hash{
0xff, 0x69, 0xcc, 0x45, 0x45, 0x74, 0x5b, 0xf9,
0xd5, 0x4e, 0x43, 0x56, 0x4f, 0x1b, 0xdf, 0x31,
0x09, 0xb7, 0x76, 0xaa, 0x2a, 0x33, 0x35, 0xc9,
0xa1, 0x80, 0xe0, 0x92, 0xbb, 0xae, 0xcd, 0x49,
0x34, 0x43, 0xed, 0xdc, 0xab, 0x0c, 0x39, 0x53,
0xa2, 0xc5, 0x6d, 0x12, 0x4b, 0xc2, 0x41, 0x1c,
0x1a, 0x05, 0x24, 0xb4, 0xff, 0xeb, 0xe8, 0xbd,
0xee, 0x6e, 0x9a, 0x77, 0xc7, 0xbb, 0x70, 0x7d,
}
// simnetGenesisMerkleRoot is the hash of the first transaction in the genesis block
// for the simnet.
var simnetGenesisMerkleRoot = daghash.Hash{
0xb0, 0x1c, 0x3b, 0x9e, 0x0d, 0x9a, 0xc0, 0x80,
0x0a, 0x08, 0x42, 0x50, 0x02, 0xa3, 0xea, 0xdb,
0xed, 0xc8, 0xd0, 0xad, 0x35, 0x03, 0xd8, 0x0e,
0x11, 0x3c, 0x7b, 0xb2, 0xb5, 0x20, 0xe5, 0x84,
0x47, 0x52, 0xc7, 0x23, 0x70, 0x4d, 0x89, 0x17,
0xbd, 0x44, 0x26, 0xfa, 0x82, 0x7e, 0x1b, 0xa9,
0xc6, 0x46, 0x1a, 0x37, 0x5a, 0x73, 0x88, 0x09,
0xe8, 0x17, 0xff, 0xb1, 0xdb, 0x1a, 0xb3, 0x3f,
}
// simnetGenesisBlock defines the genesis block of the block DAG which serves as the
// public transaction ledger for the simulation test network.
var simnetGenesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
Version: 0x10000000,
ParentHashes: []*daghash.Hash{},
HashMerkleRoot: &simnetGenesisMerkleRoot,
AcceptedIDMerkleRoot: &daghash.Hash{},
UTXOCommitment: &daghash.ZeroHash,
Timestamp: time.Unix(0x5e15d31c, 0),
Timestamp: time.Unix(0x5ede5261, 0),
Bits: 0x207fffff,
Nonce: 0x3,
Nonce: 0x2,
},
Transactions: []*wire.MsgTx{simnetGenesisCoinbaseTx},
}
var testnetGenesisTxIns = []*wire.TxIn{
{
PreviousOutpoint: wire.Outpoint{
TxID: daghash.TxID{},
Index: math.MaxUint32,
},
SignatureScript: []byte{
0x00, 0x00, 0x0b, 0x2f, 0x50, 0x32, 0x53, 0x48,
0x2f, 0x62, 0x74, 0x63, 0x64, 0x2f,
},
Sequence: math.MaxUint64,
},
}
var testnetGenesisTxOuts = []*wire.TxOut{}
var testnetGenesisTxPayload = []byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Blue score
0x01, // Varint
0x00, // OP-FALSE
0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d, 0x74, 0x65, 0x73, 0x74, 0x6e, 0x65, 0x74, // kaspa-testnet
}
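The payload bytes above decompose into an 8-byte little-endian blue score, a one-byte varint script length, the script itself (here a single OP-FALSE), and the ASCII network tag. A minimal sketch that rebuilds the testnet payload from those parts — `buildGenesisPayload` is an illustrative helper, not part of kaspad:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// buildGenesisPayload assembles a payload in the layout annotated above:
// an 8-byte little-endian blue score, a varint script length, the script
// bytes, then the ASCII network tag. The single-byte varint only covers
// scripts shorter than 0xfd bytes, which suffices for these genesis scripts.
func buildGenesisPayload(blueScore uint64, script []byte, tag string) []byte {
	payload := make([]byte, 8)
	binary.LittleEndian.PutUint64(payload, blueScore)
	payload = append(payload, byte(len(script)))
	payload = append(payload, script...)
	payload = append(payload, tag...)
	return payload
}

func main() {
	// Blue score 0, an OP-FALSE script, and the "kaspa-testnet" tag
	// reproduce the testnetGenesisTxPayload bytes listed above.
	fmt.Printf("% x\n", buildGenesisPayload(0, []byte{0x00}, "kaspa-testnet"))
}
```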
// testnetGenesisCoinbaseTx is the coinbase transaction for the testnet genesis block.
var testnetGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, testnetGenesisTxIns, testnetGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, testnetGenesisTxPayload)
var testnetGenesisCoinbaseTx = wire.NewSubnetworkMsgTx(1, []*wire.TxIn{}, testnetGenesisTxOuts, subnetworkid.SubnetworkIDCoinbase, 0, testnetGenesisTxPayload)
// testnetGenesisHash is the hash of the first block in the block DAG for the test
// network (genesis block).
var testnetGenesisHash = daghash.Hash{
0x22, 0x15, 0x34, 0xa9, 0xff, 0x10, 0xdd, 0x47,
0xcd, 0x21, 0x11, 0x25, 0xc5, 0x6d, 0x85, 0x9a,
0x97, 0xc8, 0x63, 0x63, 0x79, 0x40, 0x80, 0x04,
0x74, 0xe6, 0x29, 0x7b, 0xbc, 0x08, 0x00, 0x00,
0xFC, 0x21, 0x64, 0x1A, 0xB5, 0x59, 0x61, 0x8E,
0xF3, 0x9A, 0x95, 0xF1, 0xDA, 0x07, 0x79, 0xBD,
0x11, 0x2F, 0x90, 0xFC, 0x8B, 0x33, 0x14, 0x8A,
0x90, 0x6B, 0x76, 0x08, 0x4B, 0x52, 0x00, 0x00,
}
// testnetGenesisMerkleRoot is the hash of the first transaction in the genesis block
// for testnet.
var testnetGenesisMerkleRoot = daghash.Hash{
0x88, 0x05, 0xd0, 0xe7, 0x8f, 0x41, 0x77, 0x39,
0x2c, 0xb6, 0xbb, 0xb4, 0x19, 0xa8, 0x48, 0x4a,
0xdf, 0x77, 0xb0, 0x82, 0xd6, 0x70, 0xd8, 0x24,
0x6a, 0x36, 0x05, 0xaa, 0xbd, 0x7a, 0xd1, 0x62,
0xA0, 0xA1, 0x3D, 0xFD, 0x86, 0x41, 0x35, 0xC8,
0xBD, 0xBB, 0xE6, 0x37, 0x35, 0xBB, 0x4C, 0x51,
0x11, 0x7B, 0x26, 0x90, 0x15, 0x64, 0x0F, 0x42,
0x6D, 0x2B, 0x6F, 0x37, 0x4D, 0xC1, 0xA9, 0x72,
}
// testnetGenesisBlock defines the genesis block of the block DAG which serves as the
// public transaction ledger for testnet.
var testnetGenesisBlock = wire.MsgBlock{
Header: wire.BlockHeader{
Version: 1,
Version: 0x10000000,
ParentHashes: []*daghash.Hash{},
HashMerkleRoot: &testnetGenesisMerkleRoot,
AcceptedIDMerkleRoot: &daghash.ZeroHash,
UTXOCommitment: &daghash.ZeroHash,
Timestamp: time.Unix(0x5e15adfe, 0),
Timestamp: time.Unix(0x5efc2128, 0),
Bits: 0x1e7fffff,
Nonce: 0x20a1,
Nonce: 0x1124,
},
Transactions: []*wire.MsgTx{testnetGenesisCoinbaseTx},
}


@@ -148,187 +148,101 @@ func TestDevnetGenesisBlock(t *testing.T) {
// genesisBlockBytes are the wire encoded bytes for the genesis block of the
// main network as of protocol version 1.
var genesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x72, 0x10, 0x35, 0x85, 0xdd, 0xac, 0x82, 0x5c, 0x49, 0x13, 0x9f,
0xc0, 0x0e, 0x37, 0xc0, 0x45, 0x71, 0xdf, 0xd9, 0xf6, 0x36, 0xdf, 0x4c, 0x42, 0x72, 0x7b, 0x9e,
0x86, 0xdd, 0x37, 0xd2, 0xbd, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x10, 0x00, 0xca, 0x85, 0x56, 0x27, 0xc7, 0x6a, 0xb5, 0x7a, 0x26, 0x1d, 0x63,
0x62, 0x1e, 0x57, 0x21, 0xf0, 0x5e, 0x60, 0x1f, 0xee, 0x1d, 0x4d, 0xaa, 0x53, 0x72, 0xe1, 0x16,
0xda, 0x4b, 0xb3, 0xd8, 0x0e, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0xb0, 0xc4, 0xda, 0x5c, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x20, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0xe0, 0x4c, 0xdf, 0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff,
0xff, 0xff, 0xff, 0x0e, 0x00, 0x00, 0x0b, 0x2f, 0x50, 0x32, 0x53, 0x48, 0x2f, 0x62, 0x74, 0x63,
0x64, 0x2f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd2,
0xea, 0x82, 0x4e, 0xb8, 0x87, 0x42, 0xd0, 0x6d, 0x1f, 0x8d, 0xc3, 0xad, 0x9f, 0x43, 0x9e, 0xed,
0x6f, 0x43, 0x3c, 0x02, 0x71, 0x71, 0x69, 0xfb, 0xbc, 0x91, 0x44, 0xac, 0xf1, 0x93, 0xd3, 0x18,
0x17, 0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71,
0xc7, 0x7e, 0xba, 0x30, 0xcd, 0x5a, 0x4b, 0x87,
0x00, 0x00, 0x00, 0x00, 0xc4, 0x41, 0xe6, 0x78, 0x1d, 0xf7, 0xb3, 0x39, 0x66, 0x4d, 0x1a, 0x03,
0x97, 0x63, 0xc7, 0x2c, 0xfc, 0x70, 0xd7, 0x75, 0xb6, 0xd9, 0xfc, 0x1a, 0x96, 0xf0, 0xac, 0x07,
0xef, 0xfa, 0x26, 0x38, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xa9, 0x14,
0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7, 0x7e, 0xba,
0x30, 0xcd, 0x5a, 0x4b, 0x87,
}
// regtestGenesisBlockBytes are the wire encoded bytes for the genesis block of
// the regression test network as of protocol version 1.
var regtestGenesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x3a, 0x9f, 0x62,
0xc9, 0x2b, 0x16, 0x17, 0xb3, 0x41, 0x6d, 0x9e,
0x2d, 0x87, 0x93, 0xfd, 0x72, 0x77, 0x4d, 0x1d,
0x6f, 0x6d, 0x38, 0x5b, 0xf1, 0x24, 0x1b, 0xdc,
0x96, 0xce, 0xbf, 0xa1, 0x09, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0xd8, 0xe2, 0x15,
0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x1e, 0xa6, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff,
0xff, 0xff, 0xff, 0x0e, 0x00, 0x00, 0x0b, 0x2f,
0x50, 0x32, 0x53, 0x48, 0x2f, 0x62, 0x74, 0x63,
0x64, 0x2f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xed,
0x32, 0xec, 0xb4, 0xf8, 0x3c, 0x7a, 0x32, 0x0f,
0xd2, 0xe5, 0x24, 0x77, 0x89, 0x43, 0x3a, 0x78,
0x0a, 0xda, 0x68, 0x2d, 0xf6, 0xaa, 0xb1, 0x19,
0xdd, 0xd8, 0x97, 0x15, 0x4b, 0xcb, 0x42, 0x25,
0x17, 0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5,
0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71,
0xc7, 0x7e, 0xba, 0x30, 0xcd, 0x5a, 0x4b, 0x87,
0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d, 0x72, 0x65,
0x67, 0x74, 0x65, 0x73, 0x74,
0x00, 0x00, 0x00, 0x10, 0x00, 0x1e, 0x08, 0xae, 0x1f, 0x43, 0xf5, 0xfc, 0x24, 0xe6, 0xec, 0x54,
0x5b, 0xf7, 0x52, 0x99, 0xe4, 0xcc, 0x4c, 0xa0, 0x79, 0x41, 0xfc, 0xbe, 0x76, 0x72, 0x4c, 0x7e,
0xd8, 0xa3, 0x43, 0x65, 0x94, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0xe0, 0x4c, 0xdf, 0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x1e, 0x78, 0x4a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xd4, 0xc4, 0x87, 0x77, 0xf2, 0xe7, 0x5d, 0xf7, 0xff, 0x2d, 0xbb, 0xb6,
0x2a, 0x73, 0x1f, 0x54, 0x36, 0x33, 0xa7, 0x99, 0xad, 0xb1, 0x09, 0x65, 0xc0, 0xf0, 0xf4, 0x53,
0xba, 0xfb, 0x88, 0xae, 0x2d, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xa9, 0x14,
0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7, 0x7e, 0xba,
0x30, 0xcd, 0x5a, 0x4b, 0x87, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d, 0x72, 0x65, 0x67, 0x74, 0x65,
0x73, 0x74,
}
// testnetGenesisBlockBytes are the wire encoded bytes for the genesis block of
// the test network as of protocol version 1.
var testnetGenesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x88, 0x05, 0xd0,
0xe7, 0x8f, 0x41, 0x77, 0x39, 0x2c, 0xb6, 0xbb,
0xb4, 0x19, 0xa8, 0x48, 0x4a, 0xdf, 0x77, 0xb0,
0x82, 0xd6, 0x70, 0xd8, 0x24, 0x6a, 0x36, 0x05,
0xaa, 0xbd, 0x7a, 0xd1, 0x62, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0xfe, 0xad, 0x15,
0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x1e, 0xa1, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff,
0xff, 0xff, 0xff, 0x0e, 0x00, 0x00, 0x0b, 0x2f,
0x50, 0x32, 0x53, 0x48, 0x2f, 0x62, 0x74, 0x63,
0x64, 0x2f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xcc,
0x72, 0xe6, 0x7e, 0x37, 0xa1, 0x34, 0x89, 0x23,
0x24, 0xaf, 0xae, 0x99, 0x1f, 0x89, 0x09, 0x41,
0x1a, 0x4d, 0x58, 0xfe, 0x5a, 0x04, 0xb0, 0x3e,
0xeb, 0x1b, 0x5b, 0xb8, 0x65, 0xa8, 0x65, 0x0f,
0x01, 0x00, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d,
0x74, 0x65, 0x73, 0x74, 0x6e, 0x65, 0x74,
0x00, 0x00, 0x00, 0x10, 0x00, 0xa0, 0xa1, 0x3d, 0xfd, 0x86, 0x41, 0x35, 0xc8, 0xbd, 0xbb, 0xe6,
0x37, 0x35, 0xbb, 0x4c, 0x51, 0x11, 0x7b, 0x26, 0x90, 0x15, 0x64, 0x0f, 0x42, 0x6d, 0x2b, 0x6f,
0x37, 0x4d, 0xc1, 0xa9, 0x72, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x28, 0x21, 0xfc, 0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x1e, 0x24, 0x11, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xf5, 0x41, 0x4c, 0xf4, 0xa8, 0xa2, 0x8c, 0x47, 0x9d, 0xb5, 0x75, 0x5e,
0x0f, 0x38, 0xd3, 0x27, 0x82, 0xc6, 0xd1, 0x89, 0xc1, 0x60, 0x49, 0xd9, 0x99, 0xc6, 0x2e, 0xbf,
0x4b, 0x5a, 0x3a, 0xcf, 0x17, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x6b,
0x61, 0x73, 0x70, 0x61, 0x2d, 0x74, 0x65, 0x73, 0x74, 0x6e, 0x65, 0x74,
}
// simnetGenesisBlockBytes are the wire encoded bytes for the genesis block of
// the simulation test network as of protocol version 1.
var simnetGenesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0xb0, 0x1c, 0x3b,
0x9e, 0x0d, 0x9a, 0xc0, 0x80, 0x0a, 0x08, 0x42,
0x50, 0x02, 0xa3, 0xea, 0xdb, 0xed, 0xc8, 0xd0,
0xad, 0x35, 0x03, 0xd8, 0x0e, 0x11, 0x3c, 0x7b,
0xb2, 0xb5, 0x20, 0xe5, 0x84, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x1c, 0xd3, 0x15,
0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x20, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff,
0xff, 0xff, 0xff, 0x0e, 0x00, 0x00, 0x0b, 0x2f,
0x50, 0x32, 0x53, 0x48, 0x2f, 0x62, 0x74, 0x63,
0x64, 0x2f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x89,
0x48, 0xd3, 0x23, 0x9c, 0xf9, 0x88, 0x2b, 0x63,
0xc7, 0x33, 0x0f, 0xa3, 0x64, 0xf2, 0xdb, 0x39,
0x73, 0x5f, 0x2b, 0xa8, 0xd5, 0x7b, 0x5c, 0x31,
0x68, 0xc9, 0x63, 0x37, 0x5c, 0xe7, 0x41, 0x24,
0x17, 0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5,
0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71,
0xc7, 0x7e, 0xba, 0x30, 0xcd, 0x5a, 0x4b, 0x87,
0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d, 0x73, 0x69,
0x6d, 0x6e, 0x65, 0x74,
0x00, 0x00, 0x00, 0x10, 0x00, 0x47, 0x52, 0xc7, 0x23, 0x70, 0x4d, 0x89, 0x17, 0xbd, 0x44, 0x26,
0xfa, 0x82, 0x7e, 0x1b, 0xa9, 0xc6, 0x46, 0x1a, 0x37, 0x5a, 0x73, 0x88, 0x09, 0xe8, 0x17, 0xff,
0xb1, 0xdb, 0x1a, 0xb3, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x61, 0x52, 0xde, 0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x20, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xd9, 0x39, 0x5f, 0x40, 0x2a, 0x5e, 0x24, 0x09, 0x1b, 0x9a, 0x4b, 0xdf,
0x7f, 0x0c, 0x03, 0x7f, 0xf1, 0xd2, 0x48, 0x8c, 0x26, 0xb0, 0xa3, 0x74, 0x60, 0xd9, 0x48, 0x18,
0x2b, 0x33, 0x22, 0x64, 0x2c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xa9, 0x14,
0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7, 0x7e, 0xba,
0x30, 0xcd, 0x5a, 0x4b, 0x87, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d, 0x73, 0x69, 0x6d, 0x6e, 0x65,
0x74,
}
// devnetGenesisBlockBytes are the wire encoded bytes for the genesis block of
// the development network as of protocol version 1.
var devnetGenesisBlockBytes = []byte{
0x01, 0x00, 0x00, 0x00, 0x00, 0x16, 0x0a, 0xc6,
0x8b, 0x77, 0x08, 0xf4, 0x96, 0xa3, 0x07, 0x05,
0xbc, 0x92, 0xda, 0xee, 0x73, 0x26, 0x5e, 0xd0,
0x85, 0x78, 0xa2, 0x5d, 0x02, 0x49, 0x8a, 0x2a,
0x22, 0xef, 0x41, 0xc9, 0xc3, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x58, 0xe7, 0x15,
0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x1e, 0xac, 0x82, 0x02, 0x00, 0x00, 0x00, 0x00,
0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff,
0xff, 0xff, 0xff, 0x0e, 0x00, 0x00, 0x0b, 0x2f,
0x50, 0x32, 0x53, 0x48, 0x2f, 0x62, 0x74, 0x63,
0x64, 0x2f, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
0xff, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x11,
0xc7, 0x0c, 0x02, 0x9e, 0xb2, 0x2e, 0xb3, 0xad,
0x24, 0x10, 0xfe, 0x2c, 0xdb, 0x8e, 0x1d, 0xde,
0x81, 0x5b, 0xbb, 0x42, 0xfe, 0xb4, 0x93, 0xd6,
0xe3, 0xbe, 0x86, 0x02, 0xe6, 0x3a, 0x65, 0x24,
0x17, 0xa9, 0x14, 0xda, 0x17, 0x45, 0xe9, 0xb5,
0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71,
0xc7, 0x7e, 0xba, 0x30, 0xcd, 0x5a, 0x4b, 0x87,
0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d, 0x64, 0x65,
0x76, 0x6e, 0x65, 0x74,
0x00, 0x00, 0x00, 0x10, 0x00, 0x68, 0x60, 0xe7, 0x77, 0x47, 0x74, 0x7f, 0xd5, 0x55, 0x58, 0x8a,
0xb5, 0xc2, 0x29, 0x0c, 0xa6, 0x65, 0x44, 0xb4, 0x4f, 0xfa, 0x31, 0x7a, 0xfa, 0x55, 0xe0, 0xcf,
0xac, 0x9c, 0x86, 0x30, 0x2a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0xe0, 0x4c, 0xdf, 0x5e, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x7f,
0x1e, 0xed, 0xb3, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x1d, 0x1c, 0x05, 0x21, 0x10, 0x45, 0x61, 0xed, 0xc6, 0x0b, 0xdc, 0x85,
0xc0, 0x0a, 0x70, 0x2b, 0x15, 0xd5, 0x3c, 0x07, 0xb0, 0x54, 0x4f, 0x5b, 0x1a, 0x04, 0xcd, 0x49,
0xf1, 0x7b, 0xd6, 0x27, 0x2c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x17, 0xa9, 0x14,
0xda, 0x17, 0x45, 0xe9, 0xb5, 0x49, 0xbd, 0x0b, 0xfa, 0x1a, 0x56, 0x99, 0x71, 0xc7, 0x7e, 0xba,
0x30, 0xcd, 0x5a, 0x4b, 0x87, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x2d, 0x64, 0x65, 0x76, 0x6e, 0x65,
0x74,
}


@@ -176,6 +176,9 @@ type Params struct {
// Address encoding magics
PrivateKeyID byte // First byte of a WIF private key
// EnableNonNativeSubnetworks enables non-native/coinbase transactions
EnableNonNativeSubnetworks bool
}
// NormalizeRPCServerAddress returns addr with the current network default
@@ -230,6 +233,9 @@ var MainnetParams = Params{
// Address encoding magics
PrivateKeyID: 0x80, // starts with 5 (uncompressed) or K (compressed)
// EnableNonNativeSubnetworks enables non-native/coinbase transactions
EnableNonNativeSubnetworks: false,
}
// RegressionNetParams defines the network parameters for the regression test
@@ -280,6 +286,9 @@ var RegressionNetParams = Params{
// Address encoding magics
PrivateKeyID: 0xef, // starts with 9 (uncompressed) or c (compressed)
// EnableNonNativeSubnetworks enables non-native/coinbase transactions
EnableNonNativeSubnetworks: false,
}
// TestnetParams defines the network parameters for the test Kaspa network.
@@ -328,6 +337,9 @@ var TestnetParams = Params{
// Address encoding magics
PrivateKeyID: 0xef, // starts with 9 (uncompressed) or c (compressed)
// EnableNonNativeSubnetworks enables non-native/coinbase transactions
EnableNonNativeSubnetworks: false,
}
// SimnetParams defines the network parameters for the simulation test Kaspa
@@ -380,6 +392,9 @@ var SimnetParams = Params{
PrivateKeyID: 0x64, // starts with 4 (uncompressed) or F (compressed)
// Human-readable part for Bech32 encoded addresses
Prefix: util.Bech32PrefixKaspaSim,
// EnableNonNativeSubnetworks enables non-native/coinbase transactions
EnableNonNativeSubnetworks: false,
}
// DevnetParams defines the network parameters for the development Kaspa network.
@@ -428,6 +443,9 @@ var DevnetParams = Params{
// Address encoding magics
PrivateKeyID: 0xef, // starts with 9 (uncompressed) or c (compressed)
// EnableNonNativeSubnetworks enables non-native/coinbase transactions
EnableNonNativeSubnetworks: false,
}
var (


@@ -15,7 +15,7 @@ type LevelDB struct {
// NewLevelDB opens a leveldb instance defined by the given path.
func NewLevelDB(path string) (*LevelDB, error) {
// Open leveldb. If it doesn't exist, create it.
ldb, err := leveldb.OpenFile(path, nil)
ldb, err := leveldb.OpenFile(path, Options())
// If the database is corrupted, attempt to recover.
if _, corrupted := err.(*ldbErrors.ErrCorrupted); corrupted {


@@ -0,0 +1,19 @@
package ldb
import "github.com/syndtr/goleveldb/leveldb/opt"
var (
defaultOptions = opt.Options{
Compression: opt.NoCompression,
BlockCacheCapacity: 256 * opt.MiB,
WriteBuffer: 128 * opt.MiB,
DisableSeeksCompaction: true,
}
// Options is a function that returns a leveldb
// opt.Options struct for opening a database.
// It's defined as a variable for the sake of testing.
Options = func() *opt.Options {
return &defaultOptions
}
)
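Exposing `Options` as a package-level function variable, rather than a plain function declaration, lets tests substitute tuned options without touching `NewLevelDB` — which matches the comment "defined as a variable for the sake of testing". A simplified illustration of that pattern, with a stand-in struct in place of goleveldb's `opt.Options`:

```go
package main

import "fmt"

// options is a simplified stand-in for goleveldb's opt.Options.
type options struct {
	BlockCacheCapacityMiB int
	WriteBufferMiB        int
}

var defaultOptions = options{
	BlockCacheCapacityMiB: 256,
	WriteBufferMiB:        128,
}

// Options is a function variable, not a function declaration, so a test
// can reassign it before opening a database, just as the ldb package
// above exposes its Options.
var Options = func() *options {
	return &defaultOptions
}

func main() {
	fmt.Println(Options().WriteBufferMiB) // 128

	// A test might shrink buffers to keep fixtures small:
	Options = func() *options {
		return &options{BlockCacheCapacityMiB: 8, WriteBufferMiB: 1}
	}
	fmt.Println(Options().WriteBufferMiB) // 1
}
```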

dbaccess/peers.go

@@ -0,0 +1,26 @@
package dbaccess
import "github.com/kaspanet/kaspad/database"
var (
peersKey = database.MakeBucket().Key([]byte("peers"))
)
// StorePeersState stores the peers state in the database.
func StorePeersState(context Context, peersState []byte) error {
accessor, err := context.accessor()
if err != nil {
return err
}
return accessor.Put(peersKey, peersState)
}
// FetchPeersState retrieves the peers state from the database.
// Returns ErrNotFound if the state is missing from the database.
func FetchPeersState(context Context) ([]byte, error) {
accessor, err := context.accessor()
if err != nil {
return nil, err
}
return accessor.Get(peersKey)
}


@@ -6,6 +6,7 @@ import (
)
var reachabilityDataBucket = database.MakeBucket([]byte("reachability"))
var reachabilityReindexKey = database.MakeBucket().Key([]byte("reachability-reindex-root"))
func reachabilityKey(hash *daghash.Hash) *database.Key {
return reachabilityDataBucket.Key(hash[:])
@@ -38,3 +39,26 @@ func StoreReachabilityData(context Context, blockHash *daghash.Hash, reachabilit
func ClearReachabilityData(dbTx *TxContext) error {
return clearBucket(dbTx, reachabilityDataBucket)
}
// StoreReachabilityReindexRoot stores the reachability reindex root in the database.
func StoreReachabilityReindexRoot(context Context, reachabilityReindexRoot *daghash.Hash) error {
accessor, err := context.accessor()
if err != nil {
return err
}
return accessor.Put(reachabilityReindexKey, reachabilityReindexRoot[:])
}
// FetchReachabilityReindexRoot retrieves the reachability reindex root from the database.
// Returns ErrNotFound if the state is missing from the database.
func FetchReachabilityReindexRoot(context Context) (*daghash.Hash, error) {
accessor, err := context.accessor()
if err != nil {
return nil, err
}
bytes, err := accessor.Get(reachabilityReindexKey)
if err != nil {
return nil, err
}
return daghash.NewHash(bytes)
}
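The reindex root round-trips through its raw bytes: `Store` writes `reachabilityReindexRoot[:]`, and `Fetch` rebuilds a hash via `daghash.NewHash`, which fails on a slice of the wrong length. A stdlib-only sketch of that round trip, where `hash` and `newHash` are simplified stand-ins for the daghash types:

```go
package main

import (
	"errors"
	"fmt"
)

// hashSize matches daghash.Hash's 32-byte width.
const hashSize = 32

// hash is a simplified stand-in for daghash.Hash.
type hash [hashSize]byte

// newHash mirrors daghash.NewHash: it rejects slices of the wrong length,
// which is how FetchReachabilityReindexRoot can surface a decode error.
func newHash(b []byte) (*hash, error) {
	if len(b) != hashSize {
		return nil, errors.New("invalid hash length")
	}
	var h hash
	copy(h[:], b)
	return &h, nil
}

func main() {
	original := hash{0x01, 0x02}
	// Store persists original[:]; Fetch reads those bytes back and rebuilds.
	restored, err := newHash(original[:])
	if err != nil {
		panic(err)
	}
	fmt.Println(*restored == original) // true
}
```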

go.mod

@@ -14,7 +14,7 @@ require (
github.com/kaspanet/go-secp256k1 v0.0.2
github.com/kr/pretty v0.1.0 // indirect
github.com/pkg/errors v0.9.1
github.com/syndtr/goleveldb v1.0.0
github.com/syndtr/goleveldb v1.0.1-0.20190923125748-758128399b1d
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59
golang.org/x/sys v0.0.0-20190426135247-a129542de9ae // indirect
golang.org/x/text v0.3.2 // indirect

go.sum

@@ -38,6 +38,8 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/syndtr/goleveldb v1.0.0 h1:fBdIW9lB4Iz0n9khmH8w27SJ3QEJ7+IgjPEwGSZiFdE=
github.com/syndtr/goleveldb v1.0.0/go.mod h1:ZVVdQEZoIme9iO1Ch2Jdy24qqXrMMOU6lpPAyBWyWuQ=
github.com/syndtr/goleveldb v1.0.1-0.20190923125748-758128399b1d h1:gZZadD8H+fF+n9CmNhYL1Y0dJB+kLOmKd7FbPJLeGHs=
github.com/syndtr/goleveldb v1.0.1-0.20190923125748-758128399b1d/go.mod h1:9OrXJhf154huy1nPWmuSrkgjPUtUNhA+Zmy+6AESzuA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550 h1:ObdrDkeb4kJdCP557AjRjq69pTHfNouLtWZG7j9rPN8=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=


@@ -10,7 +10,6 @@ import (
"os"
"path/filepath"
"runtime"
"runtime/debug"
"runtime/pprof"
"strings"
"time"
@@ -265,12 +264,6 @@ func main() {
// Use all processor cores.
runtime.GOMAXPROCS(runtime.NumCPU())
// Block and transaction processing can cause bursty allocations. This
// limits the garbage collector from excessively overallocating during
// bursts. This value was arrived at with the help of profiling live
// usage.
debug.SetGCPercent(10)
// Up some limits.
if err := limits.SetLimits(); err != nil {
fmt.Fprintf(os.Stderr, "failed to set limits: %s\n", err)


@@ -668,12 +668,16 @@ func (mp *TxPool) RemoveDoubleSpends(tx *util.Tx) {
func (mp *TxPool) addTransaction(tx *util.Tx, fee uint64, parentsInPool []*wire.Outpoint) (*TxDesc, error) {
// Add the transaction to the pool and mark the referenced outpoints
// as spent by the pool.
mass, err := blockdag.CalcTxMassFromUTXOSet(tx, mp.mpUTXOSet)
if err != nil {
return nil, err
}
txD := &TxDesc{
TxDesc: mining.TxDesc{
Tx: tx,
Added: time.Now(),
Fee: fee,
FeePerKB: fee * 1000 / uint64(tx.MsgTx().SerializeSize()),
Tx: tx,
Added: time.Now(),
Fee: fee,
FeePerMegaGram: fee * 1e6 / mass,
},
depCount: len(parentsInPool),
}
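This hunk replaces the serialized-size-based `FeePerKB` with a mass-based `FeePerMegaGram`, scaling the fee by 10^6 before the integer division so small transactions keep precision. The arithmetic in isolation:

```go
package main

import "fmt"

// feePerMegaGram mirrors the expression in the hunk above: scale the fee
// by 1e6 before dividing by the transaction's mass, so integer division
// does not truncate small fee rates to zero.
func feePerMegaGram(fee, mass uint64) uint64 {
	return fee * 1e6 / mass
}

func main() {
	// For example, a 1000-unit fee on a transaction with mass 2500:
	fmt.Println(feePerMegaGram(1000, 2500)) // 400000
}
```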
@@ -808,6 +812,14 @@ func (mp *TxPool) maybeAcceptTransaction(tx *util.Tx, rejectDupOrphans bool) ([]
return nil, nil, txRuleError(wire.RejectInvalid, str)
}
// Disallow non-native/coinbase subnetworks in networks that don't allow them
if !mp.cfg.DAGParams.EnableNonNativeSubnetworks {
if !(tx.MsgTx().SubnetworkID.IsEqual(subnetworkid.SubnetworkIDNative) ||
tx.MsgTx().SubnetworkID.IsEqual(subnetworkid.SubnetworkIDCoinbase)) {
return nil, nil, txRuleError(wire.RejectInvalid, "non-native/coinbase subnetworks are not allowed")
}
}
// Perform preliminary sanity checks on the transaction. This makes
// use of blockDAG which contains the invariant rules for what
// transactions are allowed into blocks.
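The new gate above admits a transaction only if its subnetwork is native or coinbase whenever `EnableNonNativeSubnetworks` is false. A reduced sketch of that predicate, where `subnetworkID` is an illustrative stand-in for kaspad's fixed-size `subnetworkid.SubnetworkID` type:

```go
package main

import "fmt"

// subnetworkID is an illustrative stand-in for subnetworkid.SubnetworkID,
// which in kaspad is a fixed-size byte array.
type subnetworkID byte

const (
	subnetworkIDNative   subnetworkID = 0
	subnetworkIDCoinbase subnetworkID = 1
)

// allowed mirrors the mempool gate: with non-native subnetworks disabled,
// only native and coinbase transactions are accepted.
func allowed(enableNonNativeSubnetworks bool, id subnetworkID) bool {
	if enableNonNativeSubnetworks {
		return true
	}
	return id == subnetworkIDNative || id == subnetworkIDCoinbase
}

func main() {
	fmt.Println(allowed(false, subnetworkIDCoinbase)) // true
	fmt.Println(allowed(false, subnetworkID(7)))      // false
	fmt.Println(allowed(true, subnetworkID(7)))       // true
}
```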


@@ -849,7 +849,7 @@ func TestDoubleSpendsFromDAG(t *testing.T) {
//TestFetchTransaction checks that FetchTransaction
//returns only transaction from the main pool and not from the orphan pool
func TestFetchTransaction(t *testing.T) {
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestFetchTransaction")
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestFetchTransaction")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -895,7 +895,7 @@ func TestFetchTransaction(t *testing.T) {
// they are all orphans. Finally, it adds the linking transaction and ensures
// the entire orphan chain is moved to the transaction pool.
func TestSimpleOrphanChain(t *testing.T) {
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestSimpleOrphanChain")
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestSimpleOrphanChain")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -955,7 +955,7 @@ func TestSimpleOrphanChain(t *testing.T) {
// TestOrphanReject ensures that orphans are properly rejected when the allow
// orphans flag is not set on ProcessTransaction.
func TestOrphanReject(t *testing.T) {
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestOrphanReject")
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestOrphanReject")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1009,7 +1009,7 @@ func TestOrphanReject(t *testing.T) {
// it will check if we are beyond nextExpireScan, and if so, it will remove
// all expired orphan transactions
func TestOrphanExpiration(t *testing.T) {
tc, _, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestOrphanExpiration")
tc, _, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestOrphanExpiration")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1071,7 +1071,7 @@ func TestOrphanExpiration(t *testing.T) {
//TestMaxOrphanTxSize ensures that a transaction that is
//bigger than MaxOrphanTxSize will get rejected
func TestMaxOrphanTxSize(t *testing.T) {
tc, _, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestMaxOrphanTxSize")
tc, _, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestMaxOrphanTxSize")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1098,7 +1098,7 @@ func TestMaxOrphanTxSize(t *testing.T) {
}
func TestRemoveTransaction(t *testing.T) {
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 2, "TestRemoveTransaction")
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 2, "TestRemoveTransaction")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1141,7 +1141,7 @@ func TestRemoveTransaction(t *testing.T) {
// TestOrphanEviction ensures that exceeding the maximum number of orphans
// evicts entries to make room for the new ones.
func TestOrphanEviction(t *testing.T) {
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestOrphanEviction")
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestOrphanEviction")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1202,7 +1202,7 @@ func TestOrphanEviction(t *testing.T) {
// Attempt to remove orphans by tag,
// and ensure the state of all other orphans are unaffected.
func TestRemoveOrphansByTag(t *testing.T) {
tc, _, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestRemoveOrphansByTag")
tc, _, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestRemoveOrphansByTag")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1255,7 +1255,7 @@ func TestRemoveOrphansByTag(t *testing.T) {
// redeems it and when there is not.
func TestBasicOrphanRemoval(t *testing.T) {
const maxOrphans = 4
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestBasicOrphanRemoval")
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestBasicOrphanRemoval")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1329,7 +1329,7 @@ func TestBasicOrphanRemoval(t *testing.T) {
// from other orphans) are removed as expected.
func TestOrphanChainRemoval(t *testing.T) {
const maxOrphans = 10
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestOrphanChainRemoval")
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestOrphanChainRemoval")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1394,7 +1394,7 @@ func TestOrphanChainRemoval(t *testing.T) {
// output that is spent by another transaction entering the pool are removed.
func TestMultiInputOrphanDoubleSpend(t *testing.T) {
const maxOrphans = 4
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestMultiInputOrphanDoubleSpend")
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestMultiInputOrphanDoubleSpend")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1478,7 +1478,7 @@ func TestMultiInputOrphanDoubleSpend(t *testing.T) {
// TestCheckSpend tests that CheckSpend returns the expected spends found in
// the mempool.
func TestCheckSpend(t *testing.T) {
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestCheckSpend")
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestCheckSpend")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1544,7 +1544,7 @@ func TestCheckSpend(t *testing.T) {
}
func TestCount(t *testing.T) {
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 1, "TestCount")
tc, outputs, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 1, "TestCount")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1647,7 +1647,7 @@ func TestExtractRejectCode(t *testing.T) {
// TestHandleNewBlock
func TestHandleNewBlock(t *testing.T) {
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.MainnetParams, 2, "TestHandleNewBlock")
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &dagconfig.SimnetParams, 2, "TestHandleNewBlock")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)
}
@@ -1770,21 +1770,7 @@ var dummyBlock = wire.MsgBlock{
Transactions: []*wire.MsgTx{
{
Version: 1,
TxIn: []*wire.TxIn{
{
PreviousOutpoint: wire.Outpoint{
TxID: daghash.TxID{
0x9b, 0x22, 0x59, 0x44, 0x66, 0xf0, 0xbe, 0x50,
0x7c, 0x1c, 0x8a, 0xf6, 0x06, 0x27, 0xe6, 0x33,
0x38, 0x7e, 0xd1, 0xd5, 0x8c, 0x42, 0x59, 0x1a,
0x31, 0xac, 0x9a, 0xa6, 0x2e, 0xd5, 0x2b, 0x0f,
},
Index: 0xffffffff,
},
SignatureScript: nil,
Sequence: math.MaxUint64,
},
},
TxIn: []*wire.TxIn{},
TxOut: []*wire.TxOut{
{
Value: 0x12a05f200, // 5000000000
@@ -1797,7 +1783,10 @@ var dummyBlock = wire.MsgBlock{
},
LockTime: 0,
SubnetworkID: *subnetworkid.SubnetworkIDCoinbase,
Payload: []byte{0x00},
Payload: []byte{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00,
},
PayloadHash: &daghash.Hash{
0x14, 0x06, 0xe0, 0x58, 0x81, 0xe2, 0x99, 0x36,
0x77, 0x66, 0xd3, 0x13, 0xe2, 0x6c, 0x05, 0x56,
@@ -1811,6 +1800,7 @@ var dummyBlock = wire.MsgBlock{
func TestTransactionGas(t *testing.T) {
params := dagconfig.SimnetParams
params.BlockCoinbaseMaturity = 0
params.EnableNonNativeSubnetworks = true
tc, spendableOuts, teardownFunc, err := newPoolHarness(t, &params, 6, "TestTransactionGas")
if err != nil {
t.Fatalf("unable to create test pool: %v", err)


@@ -35,8 +35,8 @@ type TxDesc struct {
// Fee is the total fee the transaction associated with the entry pays.
Fee uint64
// FeePerKB is the fee the transaction pays in sompi per 1000 bytes.
FeePerKB uint64
// FeePerMegaGram is the fee the transaction pays in sompi per million gram.
FeePerMegaGram uint64
}
// TxSource represents a source of transactions to consider for inclusion in
@@ -79,12 +79,6 @@ type BlockTemplate struct {
// Height is the height at which the block template connects to the DAG
Height uint64
// ValidPayAddress indicates whether or not the template coinbase pays
// to an address or is redeemable by anyone. See the documentation on
// NewBlockTemplate for details on why this can be useful to generate
// templates without a coinbase payment address.
ValidPayAddress bool
}
// BlkTmplGenerator provides a type that can be used to generate block templates
@@ -212,10 +206,9 @@ func (g *BlkTmplGenerator) NewBlockTemplate(payToAddress util.Address, extraNonc
txsForBlockTemplate.totalMass, util.CompactToBig(msgBlock.Header.Bits))
return &BlockTemplate{
Block: msgBlock,
TxMasses: txsForBlockTemplate.txMasses,
Fees: txsForBlockTemplate.txFees,
ValidPayAddress: payToAddress != nil,
Block: msgBlock,
TxMasses: txsForBlockTemplate.txMasses,
Fees: txsForBlockTemplate.txFees,
}, nil
}


@@ -301,8 +301,8 @@ func (g *BlkTmplGenerator) populateTemplateFromCandidates(candidateTxs []*candid
txsForBlockTemplate.totalMass += selectedTx.txMass
txsForBlockTemplate.totalFees += selectedTx.txDesc.Fee
log.Tracef("Adding tx %s (feePerKB %d)",
tx.ID(), selectedTx.txDesc.FeePerKB)
log.Tracef("Adding tx %s (feePerMegaGram %d)",
tx.ID(), selectedTx.txDesc.FeePerMegaGram)
markCandidateTxForDeletion(selectedTx)
}


@@ -132,13 +132,13 @@ type requestQueueAndSet struct {
// peerSyncState stores additional information that the SyncManager tracks
// about a peer.
type peerSyncState struct {
syncCandidate bool
lastSelectedTipRequest time.Time
isPendingForSelectedTip bool
requestQueueMtx sync.Mutex
requestQueues map[wire.InvType]*requestQueueAndSet
requestedTxns map[daghash.TxID]struct{}
requestedBlocks map[daghash.Hash]struct{}
syncCandidate bool
lastSelectedTipRequest time.Time
peerShouldSendSelectedTip bool
requestQueueMtx sync.Mutex
requestQueues map[wire.InvType]*requestQueueAndSet
requestedTxns map[daghash.TxID]struct{}
requestedBlocks map[daghash.Hash]struct{}
}
// SyncManager is used to communicate block related messages with peers. The
@@ -158,6 +158,7 @@ type SyncManager struct {
wg sync.WaitGroup
quit chan struct{}
syncPeerLock sync.Mutex
isSyncing bool
// These fields should only be accessed from the messageHandler thread
rejectedTxns map[daghash.TxID]struct{}
@@ -206,13 +207,21 @@ func (sm *SyncManager) startSync() {
syncPeer.SelectedTipHash(), syncPeer.Addr())
syncPeer.PushGetBlockLocatorMsg(syncPeer.SelectedTipHash(), sm.dagParams.GenesisHash)
sm.isSyncing = true
sm.syncPeer = syncPeer
return
}
pendingForSelectedTips := false
if sm.shouldQueryPeerSelectedTips() {
sm.isSyncing = true
hasSyncCandidates := false
for peer, state := range sm.peerStates {
if state.peerShouldSendSelectedTip {
pendingForSelectedTips = true
continue
}
if !state.syncCandidate {
continue
}
@@ -222,21 +231,26 @@ func (sm *SyncManager) startSync() {
continue
}
queueMsgGetSelectedTip(peer, state)
sm.queueMsgGetSelectedTip(peer, state)
pendingForSelectedTips = true
}
if !hasSyncCandidates {
log.Warnf("No sync peer candidates available")
}
}
if !pendingForSelectedTips {
sm.isSyncing = false
}
}
func (sm *SyncManager) shouldQueryPeerSelectedTips() bool {
return sm.dag.Now().Sub(sm.dag.CalcPastMedianTime()) > minDAGTimeDelay
}
func queueMsgGetSelectedTip(peer *peerpkg.Peer, state *peerSyncState) {
func (sm *SyncManager) queueMsgGetSelectedTip(peer *peerpkg.Peer, state *peerSyncState) {
state.lastSelectedTipRequest = time.Now()
state.isPendingForSelectedTip = true
state.peerShouldSendSelectedTip = true
peer.QueueMessage(wire.NewMsgGetSelectedTip(), nil)
}
@@ -284,7 +298,7 @@ func (sm *SyncManager) handleNewPeerMsg(peer *peerpkg.Peer) {
// Initialize the peer state
isSyncCandidate := sm.isSyncCandidate(peer)
requestQueues := make(map[wire.InvType]*requestQueueAndSet)
requestQueueInvTypes := []wire.InvType{wire.InvTypeTx, wire.InvTypeBlock, wire.InvTypeSyncBlock}
requestQueueInvTypes := []wire.InvType{wire.InvTypeTx, wire.InvTypeBlock, wire.InvTypeSyncBlock, wire.InvTypeMissingAncestor}
for _, invType := range requestQueueInvTypes {
requestQueues[invType] = &requestQueueAndSet{
set: make(map[daghash.Hash]struct{}),
@@ -337,8 +351,6 @@ func (sm *SyncManager) handleDonePeerMsg(peer *peerpkg.Peer) {
}
func (sm *SyncManager) stopSyncFromPeer(peer *peerpkg.Peer) {
// Attempt to find a new peer to sync from if the quitting peer is the
// sync peer.
if sm.syncPeer == peer {
sm.syncPeer = nil
sm.restartSyncIfNeeded()
@@ -362,9 +374,8 @@ func (sm *SyncManager) handleTxMsg(tmsg *txMsg) {
// If we didn't ask for this transaction then the peer is misbehaving.
txID := tmsg.tx.ID()
if _, exists = state.requestedTxns[*txID]; !exists {
log.Warnf("Got unrequested transaction %s from %s -- "+
"disconnecting", txID, peer.Addr())
peer.Disconnect()
peer.AddBanScoreAndPushRejectMsg(wire.CmdTx, wire.RejectNotRequested, (*daghash.Hash)(txID),
peerpkg.BanScoreUnrequestedTx, 0, fmt.Sprintf("got unrequested transaction %s", txID))
return
}
@@ -398,36 +409,31 @@ func (sm *SyncManager) handleTxMsg(tmsg *txMsg) {
// When the error is a rule error, it means the transaction was
// simply rejected as opposed to something actually going wrong,
// so log it as such. Otherwise, something really did go wrong,
// so log it as an actual error.
if errors.As(err, &mempool.RuleError{}) {
log.Debugf("Rejected transaction %s from %s: %s",
txID, peer, err)
} else {
log.Errorf("Failed to process transaction %s: %s",
txID, err)
// so panic.
ruleErr := &mempool.RuleError{}
if !errors.As(err, ruleErr) {
panic(errors.Wrapf(err, "failed to process transaction %s", txID))
}
// Convert the error into an appropriate reject message and
// send it.
code, reason := mempool.ErrToRejectErr(err)
peer.PushRejectMsg(wire.CmdTx, code, reason, (*daghash.Hash)(txID), false)
shouldIncreaseBanScore := false
if txRuleErr := (&mempool.TxRuleError{}); errors.As(ruleErr.Err, txRuleErr) {
if txRuleErr.RejectCode == wire.RejectInvalid {
shouldIncreaseBanScore = true
}
} else if dagRuleErr := (&blockdag.RuleError{}); errors.As(ruleErr.Err, dagRuleErr) {
shouldIncreaseBanScore = true
}
if shouldIncreaseBanScore {
peer.AddBanScoreAndPushRejectMsg(wire.CmdTx, wire.RejectInvalid, (*daghash.Hash)(txID),
peerpkg.BanScoreInvalidTx, 0, fmt.Sprintf("rejected transaction %s: %s", txID, err))
}
return
}
sm.peerNotifier.AnnounceNewTransactions(acceptedTxs)
}
// current returns true if we believe we are synced with our peers, false if we
// still have blocks to check
//
// We consider ourselves current iff both of the following are true:
// 1. there's no syncPeer, a.k.a. all connected peers are at the same tip
// 2. the DAG considers itself current - to prevent attacks where a peer sends an
// unknown tip but never lets us sync to it.
func (sm *SyncManager) current() bool {
return sm.syncPeer == nil && sm.dag.IsCurrent()
}
// restartSyncIfNeeded finds a new sync candidate if we're not expecting any
// blocks from the current one.
func (sm *SyncManager) restartSyncIfNeeded() {
@@ -477,9 +483,8 @@ func (sm *SyncManager) handleBlockMsg(bmsg *blockMsg) {
// mode in this case so the DAG code is actually fed the
// duplicate blocks.
if sm.dagParams != &dagconfig.RegressionNetParams {
log.Warnf("Got unrequested block %s from %s -- "+
"disconnecting", blockHash, peer.Addr())
peer.Disconnect()
peer.AddBanScoreAndPushRejectMsg(wire.CmdBlock, wire.RejectNotRequested, blockHash,
peerpkg.BanScoreUnrequestedBlock, 0, fmt.Sprintf("got unrequested block %s", blockHash))
return
}
}
@@ -515,13 +520,8 @@ func (sm *SyncManager) handleBlockMsg(bmsg *blockMsg) {
log.Infof("Rejected block %s from %s: %s", blockHash,
peer, err)
// Convert the error into an appropriate reject message and
// send it.
code, reason := mempool.ErrToRejectErr(err)
peer.PushRejectMsg(wire.CmdBlock, code, reason, blockHash, false)
// Disconnect from the misbehaving peer.
peer.Disconnect()
peer.AddBanScoreAndPushRejectMsg(wire.CmdBlock, wire.RejectInvalid, blockHash,
peerpkg.BanScoreInvalidBlock, 0, fmt.Sprintf("got invalid block: %s", err))
return
}
@@ -529,15 +529,34 @@ func (sm *SyncManager) handleBlockMsg(bmsg *blockMsg) {
return
}
// Request the parents for the orphan block from the peer that sent it.
if isOrphan {
blueScore, err := bmsg.block.BlueScore()
if err != nil {
log.Errorf("Received an orphan block %s with malformed blue score from %s. Disconnecting...",
blockHash, peer)
peer.AddBanScoreAndPushRejectMsg(wire.CmdBlock, wire.RejectInvalid, blockHash,
peerpkg.BanScoreMalformedBlueScoreInOrphan, 0,
fmt.Sprintf("Received an orphan block %s with malformed blue score", blockHash))
return
}
const maxOrphanBlueScoreDiff = 10000
selectedTipBlueScore := sm.dag.SelectedTipBlueScore()
if blueScore > selectedTipBlueScore+maxOrphanBlueScoreDiff {
log.Infof("Orphan block %s has blue score %d and the selected tip blue score is "+
"%d. Ignoring orphans with a blue score difference from the selected tip greater than %d",
blockHash, blueScore, selectedTipBlueScore, maxOrphanBlueScoreDiff)
return
}
// Request the parents for the orphan block from the peer that sent it.
missingAncestors, err := sm.dag.GetOrphanMissingAncestorHashes(blockHash)
if err != nil {
log.Errorf("Failed to find missing ancestors for block %s: %s",
blockHash, err)
return
}
sm.addBlocksToRequestQueue(state, missingAncestors, false)
sm.addBlocksToRequestQueue(state, missingAncestors, wire.InvTypeMissingAncestor)
} else {
// When the block is not an orphan, log information about it and
// update the DAG state.
@@ -563,15 +582,11 @@ func (sm *SyncManager) handleBlockMsg(bmsg *blockMsg) {
}
}
func (sm *SyncManager) addBlocksToRequestQueue(state *peerSyncState, hashes []*daghash.Hash, isRelayedInv bool) {
func (sm *SyncManager) addBlocksToRequestQueue(state *peerSyncState, hashes []*daghash.Hash, invType wire.InvType) {
state.requestQueueMtx.Lock()
defer state.requestQueueMtx.Unlock()
for _, hash := range hashes {
if _, exists := sm.requestedBlocks[*hash]; !exists {
invType := wire.InvTypeSyncBlock
if isRelayedInv {
invType = wire.InvTypeBlock
}
iv := wire.NewInvVect(invType, hash)
state.addInvToRequestQueueNoLock(iv)
}
@@ -583,10 +598,13 @@ func (state *peerSyncState) addInvToRequestQueueNoLock(iv *wire.InvVect) {
if !ok {
panic(errors.Errorf("got unsupported inventory type %s", iv.Type))
}
if _, exists := requestQueue.set[*iv.Hash]; !exists {
requestQueue.set[*iv.Hash] = struct{}{}
requestQueue.queue = append(requestQueue.queue, iv)
if _, exists := requestQueue.set[*iv.Hash]; exists {
return
}
requestQueue.set[*iv.Hash] = struct{}{}
requestQueue.queue = append(requestQueue.queue, iv)
}
func (state *peerSyncState) addInvToRequestQueue(iv *wire.InvVect) {
@@ -602,6 +620,8 @@ func (state *peerSyncState) addInvToRequestQueue(iv *wire.InvVect) {
// (either the main pool or orphan pool).
func (sm *SyncManager) haveInventory(invVect *wire.InvVect) (bool, error) {
switch invVect.Type {
case wire.InvTypeMissingAncestor:
fallthrough
case wire.InvTypeSyncBlock:
fallthrough
case wire.InvTypeBlock:
@@ -677,6 +697,7 @@ func (sm *SyncManager) handleInvMsg(imsg *invMsg) {
case wire.InvTypeSyncBlock:
case wire.InvTypeTx:
default:
log.Warnf("got unsupported inv type %s from %s", iv.Type, peer)
continue
}
@@ -715,6 +736,11 @@ func (sm *SyncManager) handleInvMsg(imsg *invMsg) {
}
if iv.IsBlockOrSyncBlock() {
if sm.dag.IsKnownInvalid(iv.Hash) {
peer.AddBanScoreAndPushRejectMsg(imsg.inv.Command(), wire.RejectInvalid, iv.Hash,
peerpkg.BanScoreInvalidInvBlock, 0, fmt.Sprintf("sent inv of invalid block %s", iv.Hash))
return
}
// The block is an orphan block that we already have.
// When the existing orphan was processed, it requested
// the missing parent blocks. When this scenario
@@ -726,13 +752,22 @@ func (sm *SyncManager) handleInvMsg(imsg *invMsg) {
// to signal there are more missing blocks that need to
// be requested.
if sm.dag.IsKnownOrphan(iv.Hash) {
if iv.Type == wire.InvTypeSyncBlock {
peer.AddBanScoreAndPushRejectMsg(imsg.inv.Command(), wire.RejectInvalid, iv.Hash,
peerpkg.BanScoreOrphanInvAsPartOfNetsync, 0,
fmt.Sprintf("sent inv of orphan block %s as part of netsync", iv.Hash))
// Whether the peer will be banned or not, syncing from a node that doesn't follow
// the netsync protocol is undesired.
sm.stopSyncFromPeer(peer)
return
}
missingAncestors, err := sm.dag.GetOrphanMissingAncestorHashes(iv.Hash)
if err != nil {
log.Errorf("Failed to find missing ancestors for block %s: %s",
iv.Hash, err)
return
}
sm.addBlocksToRequestQueue(state, missingAncestors, iv.Type != wire.InvTypeSyncBlock)
sm.addBlocksToRequestQueue(state, missingAncestors, wire.InvTypeMissingAncestor)
continue
}
@@ -754,7 +789,7 @@ func (sm *SyncManager) handleInvMsg(imsg *invMsg) {
log.Errorf("Failed to send invs from queue: %s", err)
}
if haveUnknownInvBlock && !sm.current() {
if haveUnknownInvBlock && !sm.isSyncing {
// If one of the inv messages is an unknown block
// it is an indication that one of our peers has more
// up-to-date data than us.
@@ -806,6 +841,8 @@ func (sm *SyncManager) addInvsToGetDataMessageFromQueue(gdmsg *wire.MsgGetData,
for _, iv := range invsToAdd {
delete(requestQueue.set, *iv.Hash)
switch invType {
case wire.InvTypeMissingAncestor:
addBlockInv(iv)
case wire.InvTypeSyncBlock:
addBlockInv(iv)
case wire.InvTypeBlock:
@@ -834,13 +871,21 @@ func (sm *SyncManager) addInvsToGetDataMessageFromQueue(gdmsg *wire.MsgGetData,
func (sm *SyncManager) sendInvsFromRequestQueue(peer *peerpkg.Peer, state *peerSyncState) error {
state.requestQueueMtx.Lock()
defer state.requestQueueMtx.Unlock()
if len(sm.requestedBlocks) != 0 {
return nil
}
gdmsg := wire.NewMsgGetData()
err := sm.addInvsToGetDataMessageFromQueue(gdmsg, state, wire.InvTypeSyncBlock, wire.MaxSyncBlockInvPerGetDataMsg)
if err != nil {
return err
}
if sm.syncPeer == nil || sm.isSynced() {
err := sm.addInvsToGetDataMessageFromQueue(gdmsg, state, wire.InvTypeBlock, wire.MaxInvPerGetDataMsg)
if !sm.isSyncing || sm.isSynced() {
err := sm.addInvsToGetDataMessageFromQueue(gdmsg, state, wire.InvTypeMissingAncestor, wire.MaxInvPerGetDataMsg)
if err != nil {
return err
}
err = sm.addInvsToGetDataMessageFromQueue(gdmsg, state, wire.InvTypeBlock, wire.MaxInvPerGetDataMsg)
if err != nil {
return err
}
@@ -909,15 +954,12 @@ func (sm *SyncManager) handleSelectedTipMsg(msg *selectedTipMsg) {
peer := msg.peer
selectedTipHash := msg.selectedTipHash
state := sm.peerStates[peer]
if !state.isPendingForSelectedTip {
log.Warnf("Got unrequested selected tip message from %s -- "+
"disconnecting", peer.Addr())
peer.Disconnect()
}
state.isPendingForSelectedTip = false
if selectedTipHash.IsEqual(peer.SelectedTipHash()) {
if !state.peerShouldSendSelectedTip {
peer.AddBanScoreAndPushRejectMsg(wire.CmdSelectedTip, wire.RejectNotRequested, nil,
peerpkg.BanScoreUnrequestedSelectedTip, 0, "got unrequested selected tip message")
return
}
state.peerShouldSendSelectedTip = false
peer.SetSelectedTipHash(selectedTipHash)
sm.restartSyncIfNeeded()
}
@@ -1017,17 +1059,22 @@ func (sm *SyncManager) handleBlockDAGNotification(notification *blockdag.Notific
}
})
// Relay if we are current and the block was not just now unorphaned.
// Otherwise peers that are current should already know about it
if sm.current() && !data.WasUnorphaned {
iv := wire.NewInvVect(wire.InvTypeBlock, block.Hash())
sm.peerNotifier.RelayInventory(iv, block.MsgBlock().Header)
}
// sm.peerNotifier sends messages to the rebroadcastHandler, so we call
// it in its own goroutine so it won't block dag.ProcessBlock in case
// rebroadcastHandler channel is full.
spawn(func() {
// Relay if we are current and the block was not just now unorphaned.
// Otherwise peers that are current should already know about it
if sm.isSynced() && !data.WasUnorphaned {
iv := wire.NewInvVect(wire.InvTypeBlock, block.Hash())
sm.peerNotifier.RelayInventory(iv, block.MsgBlock().Header)
}
for msg := range ch {
sm.peerNotifier.TransactionConfirmed(msg.Tx)
sm.peerNotifier.AnnounceNewTransactions(msg.AcceptedTxs)
}
for msg := range ch {
sm.peerNotifier.TransactionConfirmed(msg.Tx)
sm.peerNotifier.AnnounceNewTransactions(msg.AcceptedTxs)
}
})
}
}

peer/banscores.go Normal file

@@ -0,0 +1,36 @@
package peer
// Ban scores for misbehaving nodes
const (
BanScoreUnrequestedBlock = 100
BanScoreInvalidBlock = 100
BanScoreInvalidInvBlock = 100
BanScoreOrphanInvAsPartOfNetsync = 100
BanScoreMalformedBlueScoreInOrphan = 100
BanScoreUnrequestedSelectedTip = 20
BanScoreUnrequestedTx = 20
BanScoreInvalidTx = 100
BanScoreMalformedMessage = 10
BanScoreNonVersionFirstMessage = 1
BanScoreDuplicateVersion = 1
BanScoreDuplicateVerack = 1
BanScoreSentTooManyAddresses = 20
BanScoreMsgAddrWithInvalidSubnetwork = 10
BanScoreInvalidFeeFilter = 100
BanScoreNoFilterLoaded = 5
BanScoreInvalidMsgGetBlockInvs = 10
BanScoreInvalidMsgBlockLocator = 100
BanScoreSentTxToBlocksOnly = 20
BanScoreNodeBloomFlagViolation = 100
BanScoreStallTimeout = 1
)


@@ -31,6 +31,7 @@ func mockRemotePeer() error {
UserAgentVersion: "1.0.0", // User agent version to advertise.
DAGParams: &dagconfig.SimnetParams,
SelectedTipHash: fakeSelectedTipFn,
SubnetworkID: nil,
}
// Accept connections on the simnet port.
@@ -90,6 +91,7 @@ func Example_newOutboundPeer() {
},
},
SelectedTipHash: fakeSelectedTipFn,
SubnetworkID: nil,
}
p, err := peer.NewOutboundPeer(peerCfg, "127.0.0.1:18555")
if err != nil {


@@ -46,6 +46,8 @@ func invSummary(invList []*wire.InvVect) string {
return fmt.Sprintf("block %s", iv.Hash)
case wire.InvTypeSyncBlock:
return fmt.Sprintf("sync block %s", iv.Hash)
case wire.InvTypeMissingAncestor:
return fmt.Sprintf("missing ancestor %s", iv.Hash)
case wire.InvTypeTx:
return fmt.Sprintf("tx %s", iv.Hash)
}


@@ -199,6 +199,13 @@ type Config struct {
// the DAG.
IsInDAG func(*daghash.Hash) bool
// AddBanScore increases the persistent and decaying ban score fields by the
// values passed as parameters. If the resulting score exceeds half of the ban
// threshold, a warning is logged including the reason provided. Further, if
// the score is above the ban threshold, the peer will be banned and
// disconnected.
AddBanScore func(persistent, transient uint32, reason string)
// HostToNetAddress returns the netaddress for the given host. This can be
// nil in which case the host will be parsed as an IP address.
HostToNetAddress HostToNetAddrFunc
@@ -646,6 +653,22 @@ func (p *Peer) IsSelectedTipKnown() bool {
return !p.cfg.IsInDAG(p.selectedTipHash)
}
// AddBanScore increases the persistent and decaying ban score fields by the
// values passed as parameters. If the resulting score exceeds half of the ban
// threshold, a warning is logged including the reason provided. Further, if
// the score is above the ban threshold, the peer will be banned and
// disconnected.
func (p *Peer) AddBanScore(persistent, transient uint32, reason string) {
p.cfg.AddBanScore(persistent, transient, reason)
}
// AddBanScoreAndPushRejectMsg increases ban score and sends a
// reject message to the misbehaving peer.
func (p *Peer) AddBanScoreAndPushRejectMsg(command string, code wire.RejectCode, hash *daghash.Hash, persistent, transient uint32, reason string) {
p.PushRejectMsg(command, code, reason, hash, true)
p.cfg.AddBanScore(persistent, transient, reason)
}
// LastSend returns the last send time of the peer.
//
// This function is safe for concurrent access.
@@ -755,18 +778,12 @@ func (p *Peer) localVersionMsg() (*wire.MsgVersion, error) {
// addresses. This function is useful over manually sending the message via
// QueueMessage since it automatically limits the addresses to the maximum
// number allowed by the message and randomizes the chosen addresses when there
// are too many. It returns the addresses that were actually sent and no
// message will be sent if there are no entries in the provided addresses slice.
// are too many. It returns the addresses that were actually sent.
//
// This function is safe for concurrent access.
func (p *Peer) PushAddrMsg(addresses []*wire.NetAddress, subnetworkID *subnetworkid.SubnetworkID) ([]*wire.NetAddress, error) {
addressCount := len(addresses)
// Nothing to send.
if addressCount == 0 {
return nil, nil
}
msg := wire.NewMsgAddr(false, subnetworkID)
msg.AddrList = make([]*wire.NetAddress, addressCount)
copy(msg.AddrList, addresses)
@@ -897,6 +914,11 @@ func (p *Peer) handleRemoteVersionMsg(msg *wire.MsgVersion) error {
return errors.New(reason)
}
// Disconnect from partial nodes in networks that don't allow them
if !p.cfg.DAGParams.EnableNonNativeSubnetworks && msg.SubnetworkID != nil {
return errors.New("partial nodes are not allowed")
}
// Disconnect if:
// - we are a full node and the outbound connection we've initiated is a partial node
// - the remote node is partial and our subnetwork doesn't match their subnetwork
@@ -1240,9 +1262,7 @@ out:
continue
}
log.Debugf("Peer %s appears to be stalled or "+
"misbehaving, %s timeout -- "+
"disconnecting", p, command)
p.AddBanScore(BanScoreStallTimeout, 0, fmt.Sprintf("got timeout for command %s", command))
p.Disconnect()
break
}
@@ -1317,15 +1337,15 @@ out:
log.Errorf(errMsg)
}
// Push a reject message for the malformed message and wait for
// the message to be sent before disconnecting.
// Add ban score, push a reject message for the malformed message
// and wait for the message to be sent before disconnecting.
//
// NOTE: Ideally this would include the command in the header if
// at least that much of the message was valid, but that is not
// currently exposed by wire, so just used malformed for the
// command.
p.PushRejectMsg("malformed", wire.RejectMalformed, errMsg, nil,
true)
p.AddBanScoreAndPushRejectMsg("malformed", wire.RejectMalformed, nil,
BanScoreMalformedMessage, 0, errMsg)
}
break out
}
@@ -1337,18 +1357,18 @@ out:
switch msg := rmsg.(type) {
case *wire.MsgVersion:
p.PushRejectMsg(msg.Command(), wire.RejectDuplicate,
"duplicate version message", nil, true)
break out
reason := "duplicate version message"
p.AddBanScoreAndPushRejectMsg(msg.Command(), wire.RejectDuplicate, nil,
BanScoreDuplicateVersion, 0, reason)
case *wire.MsgVerAck:
// No read lock is necessary because verAckReceived is not written
// to in any other goroutine.
if p.verAckReceived {
log.Infof("Already received 'verack' from peer %s -- "+
"disconnecting", p)
break out
p.AddBanScoreAndPushRejectMsg(msg.Command(), wire.RejectDuplicate, nil,
BanScoreDuplicateVerack, 0, "verack sent twice")
log.Warnf("Already received 'verack' from peer %s", p)
}
p.markVerAckReceived()
if p.cfg.Listeners.OnVerAck != nil {
@@ -1538,7 +1558,7 @@ out:
// No handshake? They'll find out soon enough.
if p.VersionKnown() {
// If this is a new block, then we'll blast it
// out immediately, sipping the inv trickle
// out immediately, skipping the inv trickle
// queue.
if iv.Type == wire.InvTypeBlock {
invMsg := wire.NewMsgInvSizeHint(1)
@@ -1868,6 +1888,8 @@ func (p *Peer) readRemoteVersionMsg() error {
errStr := "A version message must precede all others"
log.Errorf(errStr)
p.AddBanScore(BanScoreNonVersionFirstMessage, 0, errStr)
rejectMsg := wire.NewMsgReject(msg.Command(), wire.RejectMalformed,
errStr)
return p.writeMessage(rejectMsg)


@@ -223,6 +223,7 @@ func TestPeerConnection(t *testing.T) {
ProtocolVersion: wire.ProtocolVersion, // Configure with older version
Services: 0,
SelectedTipHash: fakeSelectedTipFn,
SubnetworkID: nil,
}
outPeerCfg := &Config{
Listeners: MessageListeners{
@@ -243,6 +244,7 @@ func TestPeerConnection(t *testing.T) {
ProtocolVersion: wire.ProtocolVersion + 1,
Services: wire.SFNodeNetwork,
SelectedTipHash: fakeSelectedTipFn,
SubnetworkID: nil,
}
wantStats1 := peerStats{
@@ -401,6 +403,7 @@ func TestPeerListeners(t *testing.T) {
DAGParams: &dagconfig.MainnetParams,
Services: wire.SFNodeBloom,
SelectedTipHash: fakeSelectedTipFn,
SubnetworkID: nil,
}
outPeerCfg := &Config{}
@@ -517,6 +520,7 @@ func TestOutboundPeer(t *testing.T) {
UserAgentComments: []string{"comment"},
DAGParams: &dagconfig.MainnetParams,
Services: 0,
SubnetworkID: nil,
}
_, p, err := setupPeers(peerCfg, peerCfg)


@@ -71,11 +71,11 @@ type FutureGetBlockTemplateResult chan *response
// the returned instance.
//
// See GetBlockTemplate for the blocking version and more details
func (c *Client) GetBlockTemplateAsync(capabilities []string, longPollID string) FutureGetBlockTemplateResult {
func (c *Client) GetBlockTemplateAsync(payAddress string, longPollID string) FutureGetBlockTemplateResult {
request := &rpcmodel.TemplateRequest{
Mode: "template",
Capabilities: capabilities,
LongPollID: longPollID,
Mode: "template",
LongPollID: longPollID,
PayAddress: payAddress,
}
cmd := rpcmodel.NewGetBlockTemplateCmd(request)
return c.sendCmd(cmd)
@@ -97,6 +97,6 @@ func (r FutureGetBlockTemplateResult) Receive() (*rpcmodel.GetBlockTemplateResul
}
// GetBlockTemplate requests a block template from the server, to mine upon
func (c *Client) GetBlockTemplate(capabilities []string, longPollID string) (*rpcmodel.GetBlockTemplateResult, error) {
return c.GetBlockTemplateAsync(capabilities, longPollID).Receive()
func (c *Client) GetBlockTemplate(payAddress string, longPollID string) (*rpcmodel.GetBlockTemplateResult, error) {
return c.GetBlockTemplateAsync(payAddress, longPollID).Receive()
}


@@ -186,26 +186,26 @@ func (c *Client) PingAsync() FuturePingResult {
// Ping queues a ping to be sent to each connected peer.
//
// Use the GetPeerInfo function and examine the PingTime and PingWait fields to
// Use the GetConnectedPeerInfo function and examine the PingTime and PingWait fields to
// access the ping times.
func (c *Client) Ping() error {
return c.PingAsync().Receive()
}
// FutureGetPeerInfoResult is a future promise to deliver the result of a
// GetPeerInfoAsync RPC invocation (or an applicable error).
type FutureGetPeerInfoResult chan *response
// FutureGetConnectedPeerInfo is a future promise to deliver the result of a
// GetConnectedPeerInfoAsync RPC invocation (or an applicable error).
type FutureGetConnectedPeerInfo chan *response
// Receive waits for the response promised by the future and returns data about
// each connected network peer.
func (r FutureGetPeerInfoResult) Receive() ([]rpcmodel.GetPeerInfoResult, error) {
func (r FutureGetConnectedPeerInfo) Receive() ([]rpcmodel.GetConnectedPeerInfoResult, error) {
res, err := receiveFuture(r)
if err != nil {
return nil, err
}
// Unmarshal result as an array of getpeerinfo result objects.
var peerInfo []rpcmodel.GetPeerInfoResult
// Unmarshal result as an array of getConnectedPeerInfo result objects.
var peerInfo []rpcmodel.GetConnectedPeerInfoResult
err = json.Unmarshal(res, &peerInfo)
if err != nil {
return nil, err
@@ -214,19 +214,19 @@ func (r FutureGetPeerInfoResult) Receive() ([]rpcmodel.GetPeerInfoResult, error)
return peerInfo, nil
}
// GetPeerInfoAsync returns an instance of a type that can be used to get the
// GetConnectedPeerInfoAsync returns an instance of a type that can be used to get the
// result of the RPC at some future time by invoking the Receive function on the
// returned instance.
//
// See GetPeerInfo for the blocking version and more details.
func (c *Client) GetPeerInfoAsync() FutureGetPeerInfoResult {
cmd := rpcmodel.NewGetPeerInfoCmd()
// See GetConnectedPeerInfo for the blocking version and more details.
func (c *Client) GetConnectedPeerInfoAsync() FutureGetConnectedPeerInfo {
cmd := rpcmodel.NewGetConnectedPeerInfoCmd()
return c.sendCmd(cmd)
}
// GetPeerInfo returns data about each connected network peer.
func (c *Client) GetPeerInfo() ([]rpcmodel.GetPeerInfoResult, error) {
return c.GetPeerInfoAsync().Receive()
// GetConnectedPeerInfo returns data about each connected network peer.
func (c *Client) GetConnectedPeerInfo() ([]rpcmodel.GetConnectedPeerInfoResult, error) {
return c.GetConnectedPeerInfoAsync().Receive()
}
// FutureGetNetTotalsResult is a future promise to deliver the result of a


@@ -206,8 +206,7 @@ func NewGetBlockHeaderCmd(hash string, verbose *bool) *GetBlockHeaderCmd {
// TemplateRequest is a request object as defined in BIP22. It is optionally
// provided as a pointer argument to GetBlockTemplateCmd.
type TemplateRequest struct {
Mode string `json:"mode,omitempty"`
Capabilities []string `json:"capabilities,omitempty"`
Mode string `json:"mode,omitempty"`
// Optional long polling.
LongPollID string `json:"longPollId,omitempty"`
@@ -225,6 +224,8 @@ type TemplateRequest struct {
// "proposal".
Data string `json:"data,omitempty"`
WorkID string `json:"workId,omitempty"`
PayAddress string `json:"payAddress"`
}
// convertTemplateRequestField potentially converts the provided value as
@@ -381,13 +382,13 @@ func NewGetNetTotalsCmd() *GetNetTotalsCmd {
return &GetNetTotalsCmd{}
}
// GetPeerInfoCmd defines the getPeerInfo JSON-RPC command.
type GetPeerInfoCmd struct{}
// GetConnectedPeerInfoCmd defines the getConnectedPeerInfo JSON-RPC command.
type GetConnectedPeerInfoCmd struct{}
// NewGetPeerInfoCmd returns a new instance which can be used to issue a getpeer
// NewGetConnectedPeerInfoCmd returns a new instance which can be used to issue a getpeer
// JSON-RPC command.
func NewGetPeerInfoCmd() *GetPeerInfoCmd {
return &GetPeerInfoCmd{}
func NewGetConnectedPeerInfoCmd() *GetConnectedPeerInfoCmd {
return &GetConnectedPeerInfoCmd{}
}
// GetRawMempoolCmd defines the getmempool JSON-RPC command.
@@ -654,6 +655,14 @@ type VersionCmd struct{}
// version command.
func NewVersionCmd() *VersionCmd { return new(VersionCmd) }
// GetPeerAddressesCmd defines the getPeerAddresses JSON-RPC command.
type GetPeerAddressesCmd struct {
}
// NewGetPeerAddressesCmd returns a new instance which can be used to issue a JSON-RPC
// getPeerAddresses command.
func NewGetPeerAddressesCmd() *GetPeerAddressesCmd { return new(GetPeerAddressesCmd) }
func init() {
// No special flags for commands in this file.
flags := UsageFlag(0)
@@ -680,7 +689,8 @@ func init() {
MustRegisterCommand("getMempoolInfo", (*GetMempoolInfoCmd)(nil), flags)
MustRegisterCommand("getNetworkInfo", (*GetNetworkInfoCmd)(nil), flags)
MustRegisterCommand("getNetTotals", (*GetNetTotalsCmd)(nil), flags)
MustRegisterCommand("getPeerInfo", (*GetPeerInfoCmd)(nil), flags)
MustRegisterCommand("getConnectedPeerInfo", (*GetConnectedPeerInfoCmd)(nil), flags)
MustRegisterCommand("getPeerAddresses", (*GetPeerAddressesCmd)(nil), flags)
MustRegisterCommand("getRawMempool", (*GetRawMempoolCmd)(nil), flags)
MustRegisterCommand("getSubnetwork", (*GetSubnetworkCmd)(nil), flags)
MustRegisterCommand("getTxOut", (*GetTxOutCmd)(nil), flags)

@@ -256,72 +256,72 @@ func TestRPCServerCommands(t *testing.T) {
{
name: "getBlockTemplate optional - template request",
newCmd: func() (interface{}, error) {
return rpcmodel.NewCommand("getBlockTemplate", `{"mode":"template","capabilities":["longpoll","coinbasetxn"]}`)
return rpcmodel.NewCommand("getBlockTemplate", `{"mode":"template","payAddress":"kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3"}`)
},
staticCmd: func() interface{} {
template := rpcmodel.TemplateRequest{
Mode: "template",
Capabilities: []string{"longpoll", "coinbasetxn"},
Mode: "template",
PayAddress: "kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3",
}
return rpcmodel.NewGetBlockTemplateCmd(&template)
},
marshalled: `{"jsonrpc":"1.0","method":"getBlockTemplate","params":[{"mode":"template","capabilities":["longpoll","coinbasetxn"]}],"id":1}`,
marshalled: `{"jsonrpc":"1.0","method":"getBlockTemplate","params":[{"mode":"template","payAddress":"kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3"}],"id":1}`,
unmarshalled: &rpcmodel.GetBlockTemplateCmd{
Request: &rpcmodel.TemplateRequest{
Mode: "template",
Capabilities: []string{"longpoll", "coinbasetxn"},
Mode: "template",
PayAddress: "kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3",
},
},
},
{
name: "getBlockTemplate optional - template request with tweaks",
newCmd: func() (interface{}, error) {
return rpcmodel.NewCommand("getBlockTemplate", `{"mode":"template","capabilities":["longPoll","coinbaseTxn"],"sigOpLimit":500,"massLimit":100000000,"maxVersion":1}`)
return rpcmodel.NewCommand("getBlockTemplate", `{"mode":"template","sigOpLimit":500,"massLimit":100000000,"maxVersion":1,"payAddress":"kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3"}`)
},
staticCmd: func() interface{} {
template := rpcmodel.TemplateRequest{
Mode: "template",
Capabilities: []string{"longPoll", "coinbaseTxn"},
SigOpLimit: 500,
MassLimit: 100000000,
MaxVersion: 1,
Mode: "template",
PayAddress: "kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3",
SigOpLimit: 500,
MassLimit: 100000000,
MaxVersion: 1,
}
return rpcmodel.NewGetBlockTemplateCmd(&template)
},
marshalled: `{"jsonrpc":"1.0","method":"getBlockTemplate","params":[{"mode":"template","capabilities":["longPoll","coinbaseTxn"],"sigOpLimit":500,"massLimit":100000000,"maxVersion":1}],"id":1}`,
marshalled: `{"jsonrpc":"1.0","method":"getBlockTemplate","params":[{"mode":"template","sigOpLimit":500,"massLimit":100000000,"maxVersion":1,"payAddress":"kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3"}],"id":1}`,
unmarshalled: &rpcmodel.GetBlockTemplateCmd{
Request: &rpcmodel.TemplateRequest{
Mode: "template",
Capabilities: []string{"longPoll", "coinbaseTxn"},
SigOpLimit: int64(500),
MassLimit: int64(100000000),
MaxVersion: 1,
Mode: "template",
PayAddress: "kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3",
SigOpLimit: int64(500),
MassLimit: int64(100000000),
MaxVersion: 1,
},
},
},
{
name: "getBlockTemplate optional - template request with tweaks 2",
newCmd: func() (interface{}, error) {
return rpcmodel.NewCommand("getBlockTemplate", `{"mode":"template","capabilities":["longPoll","coinbaseTxn"],"sigOpLimit":true,"massLimit":100000000,"maxVersion":1}`)
return rpcmodel.NewCommand("getBlockTemplate", `{"mode":"template","payAddress":"kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3","sigOpLimit":true,"massLimit":100000000,"maxVersion":1}`)
},
staticCmd: func() interface{} {
template := rpcmodel.TemplateRequest{
Mode: "template",
Capabilities: []string{"longPoll", "coinbaseTxn"},
SigOpLimit: true,
MassLimit: 100000000,
MaxVersion: 1,
Mode: "template",
PayAddress: "kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3",
SigOpLimit: true,
MassLimit: 100000000,
MaxVersion: 1,
}
return rpcmodel.NewGetBlockTemplateCmd(&template)
},
marshalled: `{"jsonrpc":"1.0","method":"getBlockTemplate","params":[{"mode":"template","capabilities":["longPoll","coinbaseTxn"],"sigOpLimit":true,"massLimit":100000000,"maxVersion":1}],"id":1}`,
marshalled: `{"jsonrpc":"1.0","method":"getBlockTemplate","params":[{"mode":"template","sigOpLimit":true,"massLimit":100000000,"maxVersion":1,"payAddress":"kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3"}],"id":1}`,
unmarshalled: &rpcmodel.GetBlockTemplateCmd{
Request: &rpcmodel.TemplateRequest{
Mode: "template",
Capabilities: []string{"longPoll", "coinbaseTxn"},
SigOpLimit: true,
MassLimit: int64(100000000),
MaxVersion: 1,
Mode: "template",
PayAddress: "kaspa:qph364lxa0ul5h0jrvl3u7xu8erc7mu3dv7prcn7x3",
SigOpLimit: true,
MassLimit: int64(100000000),
MaxVersion: 1,
},
},
},
@@ -444,15 +444,15 @@ func TestRPCServerCommands(t *testing.T) {
unmarshalled: &rpcmodel.GetNetTotalsCmd{},
},
{
name: "getPeerInfo",
name: "getConnectedPeerInfo",
newCmd: func() (interface{}, error) {
return rpcmodel.NewCommand("getPeerInfo")
return rpcmodel.NewCommand("getConnectedPeerInfo")
},
staticCmd: func() interface{} {
return rpcmodel.NewGetPeerInfoCmd()
return rpcmodel.NewGetConnectedPeerInfoCmd()
},
marshalled: `{"jsonrpc":"1.0","method":"getPeerInfo","params":[],"id":1}`,
unmarshalled: &rpcmodel.GetPeerInfoCmd{},
marshalled: `{"jsonrpc":"1.0","method":"getConnectedPeerInfo","params":[],"id":1}`,
unmarshalled: &rpcmodel.GetConnectedPeerInfoCmd{},
},
{
name: "getRawMempool",

@@ -4,7 +4,10 @@
package rpcmodel
import "encoding/json"
import (
"encoding/json"
"github.com/kaspanet/kaspad/addrmgr"
)
// GetBlockHeaderVerboseResult models the data from the getblockheader command when
// the verbose flag is set. When the verbose flag is not set, getblockheader
@@ -127,12 +130,6 @@ type GetBlockTemplateResultTx struct {
Fee uint64 `json:"fee"`
}
// GetBlockTemplateResultAux models the coinbaseaux field of the
// getblocktemplate command.
type GetBlockTemplateResultAux struct {
Flags string `json:"flags"`
}
// GetBlockTemplateResult models the data returned from the getblocktemplate
// command.
type GetBlockTemplateResult struct {
@@ -147,9 +144,6 @@ type GetBlockTemplateResult struct {
AcceptedIDMerkleRoot string `json:"acceptedIdMerkleRoot"`
UTXOCommitment string `json:"utxoCommitment"`
Version int32 `json:"version"`
CoinbaseAux *GetBlockTemplateResultAux `json:"coinbaseAux,omitempty"`
CoinbaseTxn *GetBlockTemplateResultTx `json:"coinbaseTxn,omitempty"`
CoinbaseValue *uint64 `json:"coinbaseValue,omitempty"`
WorkID string `json:"workId,omitempty"`
IsSynced bool `json:"isSynced"`
@@ -168,8 +162,8 @@ type GetBlockTemplateResult struct {
NonceRange string `json:"nonceRange,omitempty"`
// Block proposal from BIP 0023.
Capabilities []string `json:"capabilities,omitempty"`
RejectReasion string `json:"rejectReason,omitempty"`
Capabilities []string `json:"capabilities,omitempty"`
RejectReason string `json:"rejectReason,omitempty"`
}
// GetMempoolEntryResult models the data returned from the getMempoolEntry
@@ -222,8 +216,8 @@ type GetNetworkInfoResult struct {
Warnings string `json:"warnings"`
}
// GetPeerInfoResult models the data returned from the getpeerinfo command.
type GetPeerInfoResult struct {
// GetConnectedPeerInfoResult models the data returned from the getConnectedPeerInfo command.
type GetConnectedPeerInfoResult struct {
ID int32 `json:"id"`
Addr string `json:"addr"`
Services string `json:"services"`
@@ -245,6 +239,34 @@ type GetPeerInfoResult struct {
SyncNode bool `json:"syncNode"`
}
// GetPeerAddressesResult models the data returned from the getPeerAddresses command.
type GetPeerAddressesResult struct {
Version int
Key [32]byte
Addresses []*GetPeerAddressesKnownAddressResult
NewBuckets map[string]*GetPeerAddressesNewBucketResult // string is Subnetwork ID
NewBucketFullNodes GetPeerAddressesNewBucketResult
TriedBuckets map[string]*GetPeerAddressesTriedBucketResult // string is Subnetwork ID
TriedBucketFullNodes GetPeerAddressesTriedBucketResult
}
// GetPeerAddressesKnownAddressResult models a GetPeerAddressesResult known address.
type GetPeerAddressesKnownAddressResult struct {
Addr string
Src string
SubnetworkID string
Attempts int
TimeStamp int64
LastAttempt int64
LastSuccess int64
}
// GetPeerAddressesNewBucketResult models a GetPeerAddressesResult new bucket.
type GetPeerAddressesNewBucketResult [addrmgr.NewBucketCount][]string
// GetPeerAddressesTriedBucketResult models a GetPeerAddressesResult tried bucket.
type GetPeerAddressesTriedBucketResult [addrmgr.TriedBucketCount][]string
// GetRawMempoolVerboseResult models the data returned from the getrawmempool
// command when the verbose flag is set. When the verbose flag is not set,
// getrawmempool returns an array of transaction hashes.

@@ -1,6 +1,8 @@
package p2p
import (
"fmt"
"github.com/kaspanet/kaspad/addrmgr"
"github.com/kaspanet/kaspad/config"
"github.com/kaspanet/kaspad/peer"
"github.com/kaspanet/kaspad/wire"
@@ -18,18 +20,16 @@ func (sp *Peer) OnAddr(_ *peer.Peer, msg *wire.MsgAddr) {
return
}
// A message that has no addresses is invalid.
if len(msg.AddrList) == 0 {
peerLog.Errorf("Command [%s] from %s does not contain any addresses",
msg.Command(), sp.Peer)
sp.Disconnect()
if len(msg.AddrList) > addrmgr.GetAddrMax {
sp.AddBanScoreAndPushRejectMsg(msg.Command(), wire.RejectInvalid, nil,
peer.BanScoreSentTooManyAddresses, 0, fmt.Sprintf("address count exceeded %d", addrmgr.GetAddrMax))
return
}
if msg.IncludeAllSubnetworks {
peerLog.Errorf("Got unexpected IncludeAllSubnetworks=true in [%s] command from %s",
msg.Command(), sp.Peer)
sp.Disconnect()
sp.AddBanScoreAndPushRejectMsg(msg.Command(), wire.RejectInvalid, nil,
peer.BanScoreMsgAddrWithInvalidSubnetwork, 0,
fmt.Sprintf("got unexpected IncludeAllSubnetworks=true in [%s] command", msg.Command()))
return
} else if !msg.SubnetworkID.IsEqual(config.ActiveConfig().SubnetworkID) && msg.SubnetworkID != nil {
peerLog.Errorf("Only full nodes and %s subnetwork IDs are allowed in [%s] command, but got subnetwork ID %s from %s",
@@ -59,5 +59,5 @@ func (sp *Peer) OnAddr(_ *peer.Peer, msg *wire.MsgAddr) {
// Add addresses to server address manager. The address manager handles
// the details of things such as preventing duplicate addresses, max
// addresses, and last seen updates.
sp.server.addrManager.AddAddresses(msg.AddrList, sp.NA(), msg.SubnetworkID)
sp.server.AddrManager.AddAddresses(msg.AddrList, sp.NA(), msg.SubnetworkID)
}

@@ -1,6 +1,7 @@
package p2p
import (
"fmt"
"github.com/kaspanet/kaspad/peer"
"github.com/kaspanet/kaspad/util"
"github.com/kaspanet/kaspad/wire"
@@ -14,9 +15,8 @@ import (
func (sp *Peer) OnFeeFilter(_ *peer.Peer, msg *wire.MsgFeeFilter) {
// Check that the passed minimum fee is a valid amount.
if msg.MinFee < 0 || msg.MinFee > util.MaxSompi {
peerLog.Debugf("Peer %s sent an invalid feefilter '%s' -- "+
"disconnecting", sp, util.Amount(msg.MinFee))
sp.Disconnect()
sp.AddBanScoreAndPushRejectMsg(msg.Command(), wire.RejectInvalid, nil,
peer.BanScoreInvalidFeeFilter, 0, fmt.Sprintf("sent an invalid feefilter '%s'", util.Amount(msg.MinFee)))
return
}
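The rejection path now goes through `AddBanScoreAndPushRejectMsg` instead of an immediate disconnect; the validity check itself is unchanged. That bounds check can be sketched standalone (the exact value of `util.MaxSompi` here is an assumption for illustration):

```go
package main

import "fmt"

// maxSompi approximates util.MaxSompi (total supply in sompi); treat the
// exact value as an assumption for this sketch.
const maxSompi = 21_000_000 * 100_000_000

// validFeeFilter mirrors the bounds check in OnFeeFilter: the advertised
// minimum fee must be a non-negative amount no larger than the supply.
func validFeeFilter(minFee int64) bool {
	return minFee >= 0 && minFee <= maxSompi
}

func main() {
	fmt.Println(validFeeFilter(1000), validFeeFilter(-1), validFeeFilter(maxSompi+1))
}
```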

@@ -17,9 +17,8 @@ func (sp *Peer) OnFilterAdd(_ *peer.Peer, msg *wire.MsgFilterAdd) {
}
if sp.filter.IsLoaded() {
peerLog.Debugf("%s sent a filteradd request with no filter "+
"loaded -- disconnecting", sp)
sp.Disconnect()
sp.AddBanScoreAndPushRejectMsg(wire.CmdFilterAdd, wire.RejectInvalid, nil,
peer.BanScoreNoFilterLoaded, 0, "sent a filteradd request with no filter loaded")
return
}

@@ -17,9 +17,8 @@ func (sp *Peer) OnFilterClear(_ *peer.Peer, msg *wire.MsgFilterClear) {
}
if !sp.filter.IsLoaded() {
peerLog.Debugf("%s sent a filterclear request with no "+
"filter loaded -- disconnecting", sp)
sp.Disconnect()
sp.AddBanScoreAndPushRejectMsg(wire.CmdFilterClear, wire.RejectInvalid, nil,
peer.BanScoreNoFilterLoaded, 0, "sent a filterclear request with no filter loaded")
return
}

@@ -34,7 +34,7 @@ func (sp *Peer) OnGetAddr(_ *peer.Peer, msg *wire.MsgGetAddr) {
sp.sentAddrs = true
// Get the current known addresses from the address manager.
addrCache := sp.server.addrManager.AddressCache(msg.IncludeAllSubnetworks, msg.SubnetworkID)
addrCache := sp.server.AddrManager.AddressCache(msg.IncludeAllSubnetworks, msg.SubnetworkID)
// Push the addresses.
sp.pushAddrMsg(addrCache, sp.SubnetworkID())

@@ -1,6 +1,7 @@
package p2p
import (
"fmt"
"github.com/kaspanet/kaspad/peer"
"github.com/kaspanet/kaspad/wire"
)
@@ -23,8 +24,9 @@ func (sp *Peer) OnGetBlockInvs(_ *peer.Peer, msg *wire.MsgGetBlockInvs) {
hashList, err := dag.AntiPastHashesBetween(msg.LowHash, msg.HighHash,
wire.MaxInvPerMsg)
if err != nil {
peerLog.Warnf("Error getting antiPast hashes between %s and %s: %s", msg.LowHash, msg.HighHash, err)
sp.Disconnect()
sp.AddBanScoreAndPushRejectMsg(wire.CmdGetBlockInvs, wire.RejectInvalid, nil,
peer.BanScoreInvalidMsgGetBlockInvs, 0,
fmt.Sprintf("error getting antiPast hashes between %s and %s: %s", msg.LowHash, msg.HighHash, err))
return
}

@@ -11,13 +11,13 @@ import (
func (sp *Peer) OnGetBlockLocator(_ *peer.Peer, msg *wire.MsgGetBlockLocator) {
locator, err := sp.server.DAG.BlockLocatorFromHashes(msg.HighHash, msg.LowHash)
if err != nil || len(locator) == 0 {
warning := fmt.Sprintf("Couldn't build a block locator between blocks "+
"%s and %s that was requested from peer %s", msg.HighHash, msg.LowHash, sp)
if err != nil {
warning = fmt.Sprintf("%s: %s", warning, err)
peerLog.Warnf("Couldn't build a block locator between blocks "+
"%s and %s that was requested from peer %s: %s", msg.HighHash, msg.LowHash, sp, err)
}
peerLog.Warnf(warning)
sp.Disconnect()
sp.AddBanScoreAndPushRejectMsg(msg.Command(), wire.RejectInvalid, nil,
peer.BanScoreInvalidMsgBlockLocator, 0,
fmt.Sprintf("couldn't build a block locator between blocks %s and %s", msg.HighHash, msg.LowHash))
return
}

@@ -44,6 +44,8 @@ func (sp *Peer) OnGetData(_ *peer.Peer, msg *wire.MsgGetData) {
err = sp.server.pushTxMsg(sp, (*daghash.TxID)(iv.Hash), c, waitChan)
case wire.InvTypeSyncBlock:
fallthrough
case wire.InvTypeMissingAncestor:
fallthrough
case wire.InvTypeBlock:
err = sp.server.pushBlockMsg(sp, iv.Hash, c, waitChan)
case wire.InvTypeFilteredBlock:

@@ -23,9 +23,8 @@ func (sp *Peer) OnInv(_ *peer.Peer, msg *wire.MsgInv) {
if invVect.Type == wire.InvTypeTx {
peerLog.Tracef("Ignoring tx %s in inv from %s -- "+
"blocksonly enabled", invVect.Hash, sp)
peerLog.Infof("Peer %s is announcing "+
"transactions -- disconnecting", sp)
sp.Disconnect()
sp.AddBanScoreAndPushRejectMsg(msg.Command(), wire.RejectNotRequested, invVect.Hash,
peer.BanScoreSentTxToBlocksOnly, 0, "announced transactions when blocksonly is enabled")
return
}
err := newInv.AddInvVect(invVect)

@@ -21,13 +21,13 @@ func (sp *Peer) OnVersion(_ *peer.Peer, msg *wire.MsgVersion) {
// to specified peers and actively avoids advertising and connecting to
// discovered peers.
if !config.ActiveConfig().Simnet {
addrManager := sp.server.addrManager
addrManager := sp.server.AddrManager
// Outbound connections.
if !sp.Inbound() {
// TODO(davec): Only do this if not doing the initial block
// download and the local address is routable.
if !config.ActiveConfig().DisableListen /* && isCurrent? */ {
if !config.ActiveConfig().DisableListen {
// Get address that best matches.
lna := addrManager.GetBestLocalAddress(sp.NA())
if addrmgr.IsRoutable(lna) {

@@ -8,6 +8,7 @@ package p2p
import (
"crypto/rand"
"encoding/binary"
"fmt"
"math"
"net"
"runtime"
@@ -109,15 +110,6 @@ type relayMsg struct {
data interface{}
}
type outboundPeerConnectedMsg struct {
connReq *connmgr.ConnReq
conn net.Conn
}
type outboundPeerConnectionFailedMsg struct {
connReq *connmgr.ConnReq
}
// Peer extends the peer to maintain state shared by the server and
// the blockmanager.
type Peer struct {
@@ -222,26 +214,24 @@ type Server struct {
shutdownSched int32
DAGParams *dagconfig.Params
addrManager *addrmgr.AddrManager
AddrManager *addrmgr.AddrManager
connManager *connmgr.ConnManager
SigCache *txscript.SigCache
SyncManager *netsync.SyncManager
DAG *blockdag.BlockDAG
TxMemPool *mempool.TxPool
modifyRebroadcastInv chan interface{}
newPeers chan *Peer
donePeers chan *Peer
banPeers chan *Peer
newOutboundConnection chan *outboundPeerConnectedMsg
newOutboundConnectionFailed chan *outboundPeerConnectionFailedMsg
Query chan interface{}
relayInv chan relayMsg
broadcast chan broadcastMsg
wg sync.WaitGroup
nat serverutils.NAT
TimeSource blockdag.TimeSource
services wire.ServiceFlag
modifyRebroadcastInv chan interface{}
newPeers chan *Peer
donePeers chan *Peer
banPeers chan *Peer
Query chan interface{}
relayInv chan relayMsg
broadcast chan broadcastMsg
wg sync.WaitGroup
nat serverutils.NAT
TimeSource blockdag.TimeSource
services wire.ServiceFlag
// We add to quitWaitGroup before every instance in which we wait for
// the quit channel so that all those instances finish before we shut
@@ -339,13 +329,8 @@ func (sp *Peer) pushAddrMsg(addresses []*wire.NetAddress, subnetworkID *subnetwo
// the score is above the ban threshold, the peer will be banned and
// disconnected.
func (sp *Peer) addBanScore(persistent, transient uint32, reason string) {
// No warning is logged and no score is calculated if banning is disabled.
if config.ActiveConfig().DisableBanning {
return
}
if sp.isWhitelisted {
peerLog.Debugf("Misbehaving whitelisted peer %s: %s", sp, reason)
return
}
warnThreshold := config.ActiveConfig().BanThreshold >> 1
@@ -359,16 +344,22 @@ func (sp *Peer) addBanScore(persistent, transient uint32, reason string) {
}
return
}
score := sp.DynamicBanScore.Increase(persistent, transient)
logMsg := fmt.Sprintf("Misbehaving peer %s: %s -- ban score increased to %d",
sp, reason, score)
if score > warnThreshold {
peerLog.Warnf("Misbehaving peer %s: %s -- ban score increased to %d",
sp, reason, score)
if score > config.ActiveConfig().BanThreshold {
peerLog.Warn(logMsg)
if !config.ActiveConfig().DisableBanning && !sp.isWhitelisted && score > config.ActiveConfig().BanThreshold {
peerLog.Warnf("Misbehaving peer %s -- banning and disconnecting",
sp)
sp.server.BanPeer(sp)
sp.Disconnect()
}
} else if persistent != 0 {
peerLog.Warn(logMsg)
} else {
peerLog.Trace(logMsg)
}
}
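The refactored addBanScore now computes the score unconditionally and gates only the ban itself on `DisableBanning` and whitelisting; `warnThreshold` is half of `BanThreshold`. The branch structure can be sketched as:

```go
package main

import "fmt"

// action mirrors the branches of the refactored addBanScore: always log,
// warn once the score passes half the threshold, and ban past the full
// threshold unless banning is disabled or the peer is whitelisted.
func action(score, banThreshold uint32, disableBanning, whitelisted bool) string {
	warnThreshold := banThreshold >> 1
	switch {
	case score > banThreshold && !disableBanning && !whitelisted:
		return "ban"
	case score > warnThreshold:
		return "warn"
	default:
		return "log"
	}
}

func main() {
	fmt.Println(action(30, 100, false, false))  // below warn threshold
	fmt.Println(action(60, 100, false, false))  // above warn, below ban
	fmt.Println(action(120, 100, false, false)) // above ban threshold
	fmt.Println(action(120, 100, false, true))  // whitelisted: warn only
}
```

Note the behavioral change from the old code: a whitelisted or banning-disabled peer still has its score tracked and logged, where previously the function returned early and tracked nothing.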
@@ -386,7 +377,7 @@ func (sp *Peer) enforceNodeBloomFlag(cmd string) bool {
// Disconnect the peer regardless of whether it was
// banned.
sp.addBanScore(100, 0, cmd)
sp.addBanScore(peer.BanScoreNodeBloomFlagViolation, 0, cmd)
sp.Disconnect()
return false
}
@@ -678,7 +669,7 @@ func (s *Server) handleDonePeerMsg(state *peerState, sp *Peer) {
// Update the address' last seen time if the peer has acknowledged
// our version and has sent us its version as well.
if sp.VerAckReceived() && sp.VersionKnown() && sp.NA() != nil {
s.addrManager.Connected(sp.NA())
s.AddrManager.Connected(sp.NA())
}
// If we get here it means that either we didn't know about the peer
@@ -725,7 +716,7 @@ func (s *Server) handleRelayInvMsg(state *peerState, msg relayMsg) {
// Don't relay the transaction if the transaction fee-per-kb
// is less than the peer's feefilter.
feeFilter := uint64(atomic.LoadInt64(&sp.FeeFilterInt))
if feeFilter > 0 && txD.FeePerKB < feeFilter {
if feeFilter > 0 && txD.FeePerMegaGram < feeFilter {
return true
}
@@ -948,7 +939,8 @@ func newPeerConfig(sp *Peer) *peer.Config {
},
SelectedTipHash: sp.selectedTipHash,
IsInDAG: sp.blockExists,
HostToNetAddress: sp.server.addrManager.HostToNetAddress,
AddBanScore: sp.addBanScore,
HostToNetAddress: sp.server.AddrManager.HostToNetAddress,
Proxy: config.ActiveConfig().Proxy,
UserAgentName: userAgentName,
UserAgentVersion: userAgentVersion,
@@ -977,19 +969,19 @@ func (s *Server) inboundPeerConnected(conn net.Conn) {
// peer instance, associates it with the relevant state such as the connection
// request instance and the connection itself, and finally notifies the address
// manager of the attempt.
func (s *Server) outboundPeerConnected(state *peerState, msg *outboundPeerConnectedMsg) {
sp := newServerPeer(s, msg.connReq.Permanent)
outboundPeer, err := peer.NewOutboundPeer(newPeerConfig(sp), msg.connReq.Addr.String())
func (s *Server) outboundPeerConnected(connReq *connmgr.ConnReq, conn net.Conn) {
sp := newServerPeer(s, connReq.Permanent)
outboundPeer, err := peer.NewOutboundPeer(newPeerConfig(sp), connReq.Addr.String())
if err != nil {
srvrLog.Debugf("Cannot create outbound peer %s: %s", msg.connReq.Addr, err)
s.connManager.Disconnect(msg.connReq.ID())
srvrLog.Debugf("Cannot create outbound peer %s: %s", connReq.Addr, err)
s.connManager.Disconnect(connReq.ID())
}
sp.Peer = outboundPeer
sp.connReq = msg.connReq
sp.connReq = connReq
s.peerConnected(sp, msg.conn)
s.peerConnected(sp, conn)
s.addrManager.Attempt(sp.NA())
s.AddrManager.Attempt(sp.NA())
}
func (s *Server) peerConnected(sp *Peer, conn net.Conn) {
@@ -1012,20 +1004,20 @@ func (s *Server) peerConnected(sp *Peer, conn net.Conn) {
// outboundPeerConnectionFailed is invoked by the connection manager when a new
// outbound connection fails to be established.
func (s *Server) outboundPeerConnectionFailed(msg *outboundPeerConnectionFailedMsg) {
func (s *Server) outboundPeerConnectionFailed(connReq *connmgr.ConnReq) {
// If the connection request has no address
// associated to it, do nothing.
if msg.connReq.Addr == nil {
if connReq.Addr == nil {
return
}
host, portStr, err := net.SplitHostPort(msg.connReq.Addr.String())
host, portStr, err := net.SplitHostPort(connReq.Addr.String())
if err != nil {
srvrLog.Debugf("Cannot extract address host and port %s: %s", msg.connReq.Addr, err)
srvrLog.Debugf("Cannot extract address host and port %s: %s", connReq.Addr, err)
}
port, err := strconv.ParseUint(portStr, 10, 16)
if err != nil {
srvrLog.Debugf("Cannot parse port %s: %s", msg.connReq.Addr, err)
srvrLog.Debugf("Cannot parse port %s: %s", connReq.Addr, err)
}
// defaultServices is used here because Attempt makes no use
@@ -1033,7 +1025,7 @@ func (s *Server) outboundPeerConnectionFailed(msg *outboundPeerConnectionFailedM
// take nil for it.
netAddress := wire.NewNetAddressIPPort(net.ParseIP(host), uint16(port), defaultServices)
s.addrManager.Attempt(netAddress)
s.AddrManager.Attempt(netAddress)
}
// peerDoneHandler handles peer disconnects by notifying the server that it's
@@ -1066,7 +1058,10 @@ func (s *Server) peerHandler() {
// to this handler and rather than adding more channels to sychronize
// things, it's easier and slightly faster to simply start and stop them
// in this handler.
s.addrManager.Start()
err := s.AddrManager.Start()
if err != nil {
panic(errors.Wrap(err, "address manager failed to start"))
}
s.SyncManager.Start()
s.quitWaitGroup.Add(1)
@@ -1087,7 +1082,7 @@ func (s *Server) peerHandler() {
// Kaspad uses a lookup of the dns seeder here. Since seeder returns
// IPs of nodes and not its own IP, we can not know real IP of
// source. So we'll take first returned address as source.
s.addrManager.AddAddresses(addrs, addrs[0], subnetworkID)
s.AddrManager.AddAddresses(addrs, addrs[0], subnetworkID)
})
}
@@ -1137,12 +1132,6 @@ out:
})
s.quitWaitGroup.Done()
break out
case opcMsg := <-s.newOutboundConnection:
s.outboundPeerConnected(state, opcMsg)
case opcfMsg := <-s.newOutboundConnectionFailed:
s.outboundPeerConnectionFailed(opcfMsg)
}
}
@@ -1152,7 +1141,7 @@ out:
s.connManager.Stop()
s.SyncManager.Stop()
s.addrManager.Stop()
s.AddrManager.Stop()
// Drain channels before exiting so nothing is left waiting around
// to send.
@@ -1445,7 +1434,7 @@ out:
}
na := wire.NewNetAddressIPPort(externalip, uint16(listenPort),
s.services)
err = s.addrManager.AddLocalAddress(na, addrmgr.UpnpPrio)
err = s.AddrManager.AddLocalAddress(na, addrmgr.UpnpPrio)
if err != nil {
// XXX DeletePortMapping?
}
@@ -1479,13 +1468,13 @@ func NewServer(listenAddrs []string, dagParams *dagconfig.Params, interrupt <-ch
services &^= wire.SFNodeBloom
}
amgr := addrmgr.New(config.ActiveConfig().DataDir, serverutils.KaspadLookup, config.ActiveConfig().SubnetworkID)
addressManager := addrmgr.New(serverutils.KaspadLookup, config.ActiveConfig().SubnetworkID)
var listeners []net.Listener
var nat serverutils.NAT
if !config.ActiveConfig().DisableListen {
var err error
listeners, nat, err = initListeners(amgr, listenAddrs, services)
listeners, nat, err = initListeners(addressManager, listenAddrs, services)
if err != nil {
return nil, err
}
@@ -1497,23 +1486,21 @@ func NewServer(listenAddrs []string, dagParams *dagconfig.Params, interrupt <-ch
maxPeers := config.ActiveConfig().TargetOutboundPeers + config.ActiveConfig().MaxInboundPeers
s := Server{
DAGParams: dagParams,
addrManager: amgr,
newPeers: make(chan *Peer, maxPeers),
donePeers: make(chan *Peer, maxPeers),
banPeers: make(chan *Peer, maxPeers),
Query: make(chan interface{}),
relayInv: make(chan relayMsg, maxPeers),
broadcast: make(chan broadcastMsg, maxPeers),
quit: make(chan struct{}),
modifyRebroadcastInv: make(chan interface{}),
newOutboundConnection: make(chan *outboundPeerConnectedMsg, config.ActiveConfig().TargetOutboundPeers),
newOutboundConnectionFailed: make(chan *outboundPeerConnectionFailedMsg, config.ActiveConfig().TargetOutboundPeers),
nat: nat,
TimeSource: blockdag.NewTimeSource(),
services: services,
SigCache: txscript.NewSigCache(config.ActiveConfig().SigCacheMaxSize),
notifyNewTransactions: notifyNewTransactions,
DAGParams: dagParams,
AddrManager: addressManager,
newPeers: make(chan *Peer, maxPeers),
donePeers: make(chan *Peer, maxPeers),
banPeers: make(chan *Peer, maxPeers),
Query: make(chan interface{}),
relayInv: make(chan relayMsg, maxPeers),
broadcast: make(chan broadcastMsg, maxPeers),
quit: make(chan struct{}),
modifyRebroadcastInv: make(chan interface{}),
nat: nat,
TimeSource: blockdag.NewTimeSource(),
services: services,
SigCache: txscript.NewSigCache(config.ActiveConfig().SigCacheMaxSize),
notifyNewTransactions: notifyNewTransactions,
}
// Create indexes if needed.
@@ -1576,23 +1563,14 @@ func NewServer(listenAddrs []string, dagParams *dagconfig.Params, interrupt <-ch
// Create a connection manager.
cmgr, err := connmgr.New(&connmgr.Config{
Listeners: listeners,
OnAccept: s.inboundPeerConnected,
RetryDuration: connectionRetryInterval,
TargetOutbound: uint32(config.ActiveConfig().TargetOutboundPeers),
Dial: serverutils.KaspadDial,
OnConnection: func(c *connmgr.ConnReq, conn net.Conn) {
s.newOutboundConnection <- &outboundPeerConnectedMsg{
connReq: c,
conn: conn,
}
},
OnConnectionFailed: func(c *connmgr.ConnReq) {
s.newOutboundConnectionFailed <- &outboundPeerConnectionFailedMsg{
connReq: c,
}
},
AddrManager: s.addrManager,
Listeners: listeners,
OnAccept: s.inboundPeerConnected,
RetryDuration: connectionRetryInterval,
TargetOutbound: uint32(config.ActiveConfig().TargetOutboundPeers),
Dial: serverutils.KaspadDial,
OnConnection: s.outboundPeerConnected,
OnConnectionFailed: s.outboundPeerConnectionFailed,
AddrManager: s.AddrManager,
})
if err != nil {
return nil, err

@@ -4,7 +4,6 @@ import (
"bytes"
"encoding/hex"
"fmt"
"math/rand"
"strconv"
"strings"
"sync"
@@ -44,16 +43,6 @@ var (
"time", "transactions/add", "parentblock", "coinbase/append",
}
// gbtCoinbaseAux describes additional data that miners should include
// in the coinbase signature script. It is declared here to avoid the
// overhead of creating a new object on every invocation for constant
// data.
gbtCoinbaseAux = &rpcmodel.GetBlockTemplateResultAux{
Flags: hex.EncodeToString(builderScript(txscript.
NewScriptBuilder().
AddData([]byte(mining.CoinbaseFlags)))),
}
// gbtCapabilities describes additional capabilities returned with a
// block template generated by the getBlockTemplate RPC. It is
// declared here to avoid the overhead of creating the slice on every
@@ -72,6 +61,7 @@ type gbtWorkState struct {
template *mining.BlockTemplate
notifyMap map[string]map[int64]chan struct{}
timeSource blockdag.TimeSource
payAddress util.Address
}
// newGbtWorkState returns a new instance of a gbtWorkState with all internal
@@ -122,42 +112,8 @@ func handleGetBlockTemplate(s *Server, cmd interface{}, closeChan <-chan struct{
// handleGetBlockTemplateRequest is a helper for handleGetBlockTemplate which
// deals with generating and returning block templates to the caller. It
// handles both long poll requests as specified by BIP 0022 as well as regular
// requests. In addition, it detects the capabilities reported by the caller
// in regards to whether or not it supports creating its own coinbase (the
// coinbasetxn and coinbasevalue capabilities) and modifies the returned block
// template accordingly.
// requests.
func handleGetBlockTemplateRequest(s *Server, request *rpcmodel.TemplateRequest, closeChan <-chan struct{}) (interface{}, error) {
// Extract the relevant passed capabilities and restrict the result to
// either a coinbase value or a coinbase transaction object depending on
// the request. Default to only providing a coinbase value.
useCoinbaseValue := true
if request != nil {
var hasCoinbaseValue, hasCoinbaseTxn bool
for _, capability := range request.Capabilities {
switch capability {
case "coinbasetxn":
hasCoinbaseTxn = true
case "coinbasevalue":
hasCoinbaseValue = true
}
}
if hasCoinbaseTxn && !hasCoinbaseValue {
useCoinbaseValue = false
}
}
// When a coinbase transaction has been requested, respond with an error
// if there are no addresses to pay the created block template to.
if !useCoinbaseValue && len(config.ActiveConfig().MiningAddrs) == 0 {
return nil, &rpcmodel.RPCError{
Code: rpcmodel.ErrRPCInternal.Code,
Message: "A coinbase transaction has been requested, " +
"but the server has not been configured with " +
"any payment addresses via --miningaddr",
}
}
// Return an error if there are no peers connected since there is no
// way to relay a found block or receive transactions to work on.
// However, allow this state when running in the regression test or
@@ -171,12 +127,16 @@ func handleGetBlockTemplateRequest(s *Server, request *rpcmodel.TemplateRequest,
}
}
payAddr, err := util.DecodeAddress(request.PayAddress, s.cfg.DAGParams.Prefix)
if err != nil {
return nil, err
}
// When a long poll ID was provided, this is a long poll request by the
// client to be notified when block template referenced by the ID should
// be replaced with a new one.
if request != nil && request.LongPollID != "" {
return handleGetBlockTemplateLongPoll(s, request.LongPollID,
useCoinbaseValue, closeChan)
return handleGetBlockTemplateLongPoll(s, request.LongPollID, payAddr, closeChan)
}
// Protect concurrent access when updating block templates.
@@ -190,10 +150,10 @@ func handleGetBlockTemplateRequest(s *Server, request *rpcmodel.TemplateRequest,
// seconds since the last template was generated. Otherwise, the
// timestamp for the existing block template is updated (and possibly
// the difficulty on testnet per the consensus rules).
if err := state.updateBlockTemplate(s, useCoinbaseValue); err != nil {
if err := state.updateBlockTemplate(s, payAddr); err != nil {
return nil, err
}
return state.blockTemplateResult(s, useCoinbaseValue)
return state.blockTemplateResult(s)
}
// handleGetBlockTemplateLongPoll is a helper for handleGetBlockTemplateRequest
@@ -204,10 +164,10 @@ func handleGetBlockTemplateRequest(s *Server, request *rpcmodel.TemplateRequest,
// old block template is no longer valid due to a solution already being found
// and added to the block DAG, or new transactions have shown up and some time
// has passed without finding a solution.
func handleGetBlockTemplateLongPoll(s *Server, longPollID string, useCoinbaseValue bool, closeChan <-chan struct{}) (interface{}, error) {
func handleGetBlockTemplateLongPoll(s *Server, longPollID string, payAddr util.Address, closeChan <-chan struct{}) (interface{}, error) {
state := s.gbtWorkState
result, longPollChan, err := blockTemplateOrLongPollChan(s, longPollID, useCoinbaseValue)
result, longPollChan, err := blockTemplateOrLongPollChan(s, longPollID, payAddr)
if err != nil {
return nil, err
}
@@ -231,14 +191,14 @@ func handleGetBlockTemplateLongPoll(s *Server, longPollID string, useCoinbaseVal
state.Lock()
defer state.Unlock()
if err := state.updateBlockTemplate(s, useCoinbaseValue); err != nil {
if err := state.updateBlockTemplate(s, payAddr); err != nil {
return nil, err
}
// Include whether or not it is valid to submit work against the old
// block template depending on whether or not a solution has already
// been found and added to the block DAG.
result, err = state.blockTemplateResult(s, useCoinbaseValue)
result, err = state.blockTemplateResult(s)
if err != nil {
return nil, err
}
@@ -250,7 +210,7 @@ func handleGetBlockTemplateLongPoll(s *Server, longPollID string, useCoinbaseVal
// template identified by the provided long poll ID is stale or
// invalid. Otherwise, it returns a channel that will notify
// when there's a more current template.
func blockTemplateOrLongPollChan(s *Server, longPollID string, useCoinbaseValue bool) (*rpcmodel.GetBlockTemplateResult, chan struct{}, error) {
func blockTemplateOrLongPollChan(s *Server, longPollID string, payAddr util.Address) (*rpcmodel.GetBlockTemplateResult, chan struct{}, error) {
state := s.gbtWorkState
state.Lock()
@@ -259,7 +219,7 @@ func blockTemplateOrLongPollChan(s *Server, longPollID string, useCoinbaseValue
// be manually unlocked before waiting for a notification about block
// template changes.
if err := state.updateBlockTemplate(s, useCoinbaseValue); err != nil {
if err := state.updateBlockTemplate(s, payAddr); err != nil {
return nil, nil, err
}
@@ -267,7 +227,7 @@ func blockTemplateOrLongPollChan(s *Server, longPollID string, useCoinbaseValue
// the caller is invalid.
parentHashes, lastGenerated, err := decodeLongPollID(longPollID)
if err != nil {
result, err := state.blockTemplateResult(s, useCoinbaseValue)
result, err := state.blockTemplateResult(s)
if err != nil {
return nil, nil, err
}
@@ -285,7 +245,7 @@ func blockTemplateOrLongPollChan(s *Server, longPollID string, useCoinbaseValue
// Include whether or not it is valid to submit work against the
// old block template depending on whether or not a solution has
// already been found and added to the block DAG.
result, err := state.blockTemplateResult(s, useCoinbaseValue)
result, err := state.blockTemplateResult(s)
if err != nil {
return nil, nil, err
}
@@ -566,7 +526,7 @@ func (state *gbtWorkState) templateUpdateChan(tipHashes []*daghash.Hash, lastGen
// addresses.
//
// This function MUST be called with the state locked.
func (state *gbtWorkState) updateBlockTemplate(s *Server, useCoinbaseValue bool) error {
func (state *gbtWorkState) updateBlockTemplate(s *Server, payAddr util.Address) error {
generator := s.cfg.Generator
lastTxUpdate := generator.TxSource().LastUpdated()
if lastTxUpdate.IsZero() {
@@ -583,6 +543,7 @@ func (state *gbtWorkState) updateBlockTemplate(s *Server, useCoinbaseValue bool)
template := state.template
if template == nil || state.tipHashes == nil ||
!daghash.AreEqual(state.tipHashes, tipHashes) ||
state.payAddress.String() != payAddr.String() ||
(state.lastTxUpdate != lastTxUpdate &&
time.Now().After(state.lastGenerated.Add(time.Second*
gbtRegenerateSeconds))) {
@@ -592,14 +553,6 @@ func (state *gbtWorkState) updateBlockTemplate(s *Server, useCoinbaseValue bool)
// again.
state.tipHashes = nil
// Choose a payment address at random if the caller requests a
// full coinbase as opposed to only the pertinent details needed
// to create their own coinbase.
var payAddr util.Address
if !useCoinbaseValue {
payAddr = config.ActiveConfig().MiningAddrs[rand.Intn(len(config.ActiveConfig().MiningAddrs))]
}
// Create a new block template that has a coinbase which anyone
// can redeem. This is only acceptable because the returned
// block template doesn't include the coinbase, so the caller
@@ -634,6 +587,7 @@ func (state *gbtWorkState) updateBlockTemplate(s *Server, useCoinbaseValue bool)
state.lastTxUpdate = lastTxUpdate
state.tipHashes = tipHashes
state.minTimestamp = minTimestamp
state.payAddress = payAddr
log.Debugf("Generated block template (timestamp %s, "+
"target %s, merkle root %s)",
@@ -650,32 +604,6 @@ func (state *gbtWorkState) updateBlockTemplate(s *Server, useCoinbaseValue bool)
// trigger a new block template to be generated. So, update the
// existing block template.
// When the caller requires a full coinbase as opposed to only
// the pertinent details needed to create their own coinbase,
// add a payment address to the output of the coinbase of the
// template if it doesn't already have one. Since this requires
// mining addresses to be specified via the config, an error is
// returned if none have been specified.
if !useCoinbaseValue && !template.ValidPayAddress {
// Choose a payment address at random.
payToAddr := config.ActiveConfig().MiningAddrs[rand.Intn(len(config.ActiveConfig().MiningAddrs))]
// Update the block coinbase output of the template to
// pay to the randomly selected payment address.
scriptPubKey, err := txscript.PayToAddrScript(payToAddr)
if err != nil {
context := "Failed to create pay-to-addr script"
return internalRPCError(err.Error(), context)
}
template.Block.Transactions[util.CoinbaseTransactionIndex].TxOut[0].ScriptPubKey = scriptPubKey
template.ValidPayAddress = true
// Update the merkle root.
block := util.NewBlock(template.Block)
hashMerkleTree := blockdag.BuildHashMerkleTreeStore(block.Transactions())
template.Block.Header.HashMerkleRoot = hashMerkleTree.Root()
}
// Set locals for convenience.
msgBlock = template.Block
targetDifficulty = fmt.Sprintf("%064x",
@@ -700,7 +628,7 @@ func (state *gbtWorkState) updateBlockTemplate(s *Server, useCoinbaseValue bool)
// and returned to the caller.
//
// This function MUST be called with the state locked.
func (state *gbtWorkState) blockTemplateResult(s *Server, useCoinbaseValue bool) (*rpcmodel.GetBlockTemplateResult, error) {
func (state *gbtWorkState) blockTemplateResult(s *Server) (*rpcmodel.GetBlockTemplateResult, error) {
dag := s.cfg.DAG
// Ensure the timestamps are still in valid range for the template.
// This should really only ever happen if the local clock is changed
@@ -731,11 +659,6 @@ func (state *gbtWorkState) blockTemplateResult(s *Server, useCoinbaseValue bool)
txID := tx.TxID()
txIndex[*txID] = int64(i)
// Skip the coinbase transaction.
if i == 0 {
continue
}
// Create an array of 1-based indices to transactions that come
// before this one in the transactions list which this one
// depends on. This is necessary since the created block must
@@ -775,7 +698,7 @@ func (state *gbtWorkState) blockTemplateResult(s *Server, useCoinbaseValue bool)
// Including MinTime -> time/decrement
// Omitting CoinbaseTxn -> coinbase, generation
targetDifficulty := fmt.Sprintf("%064x", util.CompactToBig(header.Bits))
longPollID := encodeLongPollID(state.tipHashes, state.lastGenerated)
longPollID := encodeLongPollID(state.tipHashes, state.payAddress, state.lastGenerated)
// Check whether this node is synced with the rest of the
// network. There's almost never a good reason to mine on top
@@ -806,48 +729,13 @@ func (state *gbtWorkState) blockTemplateResult(s *Server, useCoinbaseValue bool)
IsSynced: isSynced,
}
if useCoinbaseValue {
reply.CoinbaseAux = gbtCoinbaseAux
reply.CoinbaseValue = &msgBlock.Transactions[util.CoinbaseTransactionIndex].TxOut[0].Value
} else {
// Ensure the template has a valid payment address associated
// with it when a full coinbase is requested.
if !template.ValidPayAddress {
return nil, &rpcmodel.RPCError{
Code: rpcmodel.ErrRPCInternal.Code,
Message: "A coinbase transaction has been " +
"requested, but the server has not " +
"been configured with any payment " +
"addresses via --miningaddr",
}
}
// Serialize the transaction for conversion to hex.
tx := msgBlock.Transactions[util.CoinbaseTransactionIndex]
txBuf := bytes.NewBuffer(make([]byte, 0, tx.SerializeSize()))
if err := tx.Serialize(txBuf); err != nil {
context := "Failed to serialize transaction"
return nil, internalRPCError(err.Error(), context)
}
resultTx := rpcmodel.GetBlockTemplateResultTx{
Data: hex.EncodeToString(txBuf.Bytes()),
ID: tx.TxID().String(),
Depends: []int64{},
Mass: template.TxMasses[0],
Fee: template.Fees[0],
}
reply.CoinbaseTxn = &resultTx
}
return &reply, nil
}
// encodeLongPollID encodes the passed details into an ID that can be used to
// uniquely identify a block template.
func encodeLongPollID(parentHashes []*daghash.Hash, lastGenerated time.Time) string {
return fmt.Sprintf("%s-%d", daghash.JoinHashesStrings(parentHashes, ""), lastGenerated.Unix())
func encodeLongPollID(parentHashes []*daghash.Hash, miningAddress util.Address, lastGenerated time.Time) string {
return fmt.Sprintf("%s-%s-%d", daghash.JoinHashesStrings(parentHashes, ""), miningAddress, lastGenerated.Unix())
}
// decodeLongPollID decodes an ID that is used to uniquely identify a block


@@ -6,14 +6,14 @@ import (
"time"
)
// handleGetPeerInfo implements the getPeerInfo command.
func handleGetPeerInfo(s *Server, cmd interface{}, closeChan <-chan struct{}) (interface{}, error) {
// handleGetConnectedPeerInfo implements the getConnectedPeerInfo command.
func handleGetConnectedPeerInfo(s *Server, cmd interface{}, closeChan <-chan struct{}) (interface{}, error) {
peers := s.cfg.ConnMgr.ConnectedPeers()
syncPeerID := s.cfg.SyncMgr.SyncPeerID()
infos := make([]*rpcmodel.GetPeerInfoResult, 0, len(peers))
infos := make([]*rpcmodel.GetConnectedPeerInfoResult, 0, len(peers))
for _, p := range peers {
statsSnap := p.ToPeer().StatsSnapshot()
info := &rpcmodel.GetPeerInfoResult{
info := &rpcmodel.GetConnectedPeerInfoResult{
ID: statsSnap.ID,
Addr: statsSnap.Addr,
Services: fmt.Sprintf("%08d", uint64(statsSnap.Services)),


@@ -0,0 +1,57 @@
package rpc
import "github.com/kaspanet/kaspad/rpcmodel"
// handleGetPeerAddresses handles getPeerAddresses commands.
func handleGetPeerAddresses(s *Server, cmd interface{}, closeChan <-chan struct{}) (interface{}, error) {
peersState, err := s.cfg.addressManager.PeersStateForSerialization()
if err != nil {
return nil, err
}
rpcPeersState := rpcmodel.GetPeerAddressesResult{
Version: peersState.Version,
Key: peersState.Key,
Addresses: make([]*rpcmodel.GetPeerAddressesKnownAddressResult, len(peersState.Addresses)),
NewBuckets: make(map[string]*rpcmodel.GetPeerAddressesNewBucketResult),
NewBucketFullNodes: rpcmodel.GetPeerAddressesNewBucketResult{},
TriedBuckets: make(map[string]*rpcmodel.GetPeerAddressesTriedBucketResult),
TriedBucketFullNodes: rpcmodel.GetPeerAddressesTriedBucketResult{},
}
for i, addr := range peersState.Addresses {
rpcPeersState.Addresses[i] = &rpcmodel.GetPeerAddressesKnownAddressResult{
Addr: addr.Addr,
Src: addr.Src,
SubnetworkID: addr.SubnetworkID,
Attempts: addr.Attempts,
TimeStamp: addr.TimeStamp,
LastAttempt: addr.LastAttempt,
LastSuccess: addr.LastSuccess,
}
}
for subnetworkID, bucket := range peersState.NewBuckets {
rpcPeersState.NewBuckets[subnetworkID] = &rpcmodel.GetPeerAddressesNewBucketResult{}
for i, addr := range bucket {
rpcPeersState.NewBuckets[subnetworkID][i] = addr
}
}
for i, addr := range peersState.NewBucketFullNodes {
rpcPeersState.NewBucketFullNodes[i] = addr
}
for subnetworkID, bucket := range peersState.TriedBuckets {
rpcPeersState.TriedBuckets[subnetworkID] = &rpcmodel.GetPeerAddressesTriedBucketResult{}
for i, addr := range bucket {
rpcPeersState.TriedBuckets[subnetworkID][i] = addr
}
}
for i, addr := range peersState.TriedBucketFullNodes {
rpcPeersState.TriedBucketFullNodes[i] = addr
}
return rpcPeersState, nil
}


@@ -12,6 +12,7 @@ import (
"encoding/base64"
"encoding/json"
"fmt"
"github.com/kaspanet/kaspad/addrmgr"
"io"
"io/ioutil"
"math/rand"
@@ -83,7 +84,8 @@ var rpcHandlersBeforeInit = map[string]commandHandler{
"getMempoolInfo": handleGetMempoolInfo,
"getMempoolEntry": handleGetMempoolEntry,
"getNetTotals": handleGetNetTotals,
"getPeerInfo": handleGetPeerInfo,
"getConnectedPeerInfo": handleGetConnectedPeerInfo,
"getPeerAddresses": handleGetPeerAddresses,
"getRawMempool": handleGetRawMempool,
"getSubnetwork": handleGetSubnetwork,
"getTxOut": handleGetTxOut,
@@ -783,6 +785,9 @@ type rpcserverConfig struct {
// These fields define any optional indexes the RPC server can make use
// of to provide additional data when queried.
AcceptanceIndex *indexers.AcceptanceIndex
// addressManager defines the address manager for the RPC server to use.
addressManager *addrmgr.AddrManager
}
// setupRPCListeners returns a slice of listeners that are configured for use
@@ -855,6 +860,7 @@ func NewRPCServer(
StartupTime: startupTime,
ConnMgr: &rpcConnManager{p2pServer},
SyncMgr: &rpcSyncMgr{p2pServer, p2pServer.SyncManager},
addressManager: p2pServer.AddrManager,
TimeSource: p2pServer.TimeSource,
DAGParams: p2pServer.DAGParams,
TxMemPool: p2pServer.TxMemPool,


@@ -283,15 +283,15 @@ var helpDescsEnUS = map[string]string{
"getBlockHeaderVerboseResult-childHashes": "The hashes of the child blocks (only if there are any)",
// TemplateRequest help.
"templateRequest-mode": "This is 'template', 'proposal', or omitted",
"templateRequest-capabilities": "List of capabilities",
"templateRequest-longPollId": "The long poll ID of a job to monitor for expiration; required and valid only for long poll requests ",
"templateRequest-sigOpLimit": "Number of signature operations allowed in blocks (this parameter is ignored)",
"templateRequest-massLimit": "Max transaction mass allowed in blocks (this parameter is ignored)",
"templateRequest-maxVersion": "Highest supported block version number (this parameter is ignored)",
"templateRequest-target": "The desired target for the block template (this parameter is ignored)",
"templateRequest-data": "Hex-encoded block data (only for mode=proposal)",
"templateRequest-workId": "The server provided workid if provided in block template (not applicable)",
"templateRequest-mode": "This is 'template', 'proposal', or omitted",
"templateRequest-payAddress": "The address the coinbase pays to",
"templateRequest-longPollId": "The long poll ID of a job to monitor for expiration; required and valid only for long poll requests ",
"templateRequest-sigOpLimit": "Number of signature operations allowed in blocks (this parameter is ignored)",
"templateRequest-massLimit": "Max transaction mass allowed in blocks (this parameter is ignored)",
"templateRequest-maxVersion": "Highest supported block version number (this parameter is ignored)",
"templateRequest-target": "The desired target for the block template (this parameter is ignored)",
"templateRequest-data": "Hex-encoded block data (only for mode=proposal)",
"templateRequest-workId": "The server provided workid if provided in block template (not applicable)",
// GetBlockTemplateResultTx help.
"getBlockTemplateResultTx-data": "Hex-encoded transaction data (byte-for-byte)",
@@ -416,29 +416,55 @@ var helpDescsEnUS = map[string]string{
"getNetTotalsResult-totalBytesSent": "Total bytes sent",
"getNetTotalsResult-timeMillis": "Number of milliseconds since 1 Jan 1970 GMT",
// GetPeerInfoResult help.
"getPeerInfoResult-id": "A unique node ID",
"getPeerInfoResult-addr": "The ip address and port of the peer",
"getPeerInfoResult-services": "Services bitmask which represents the services supported by the peer",
"getPeerInfoResult-relayTxes": "Peer has requested transactions be relayed to it",
"getPeerInfoResult-lastSend":      "Time the last message was sent in seconds since 1 Jan 1970 GMT",
"getPeerInfoResult-lastRecv":      "Time the last message was received in seconds since 1 Jan 1970 GMT",
"getPeerInfoResult-bytesSent": "Total bytes sent",
"getPeerInfoResult-bytesRecv": "Total bytes received",
"getPeerInfoResult-connTime": "Time the connection was made in seconds since 1 Jan 1970 GMT",
"getPeerInfoResult-timeOffset": "The time offset of the peer",
"getPeerInfoResult-pingTime": "Number of microseconds the last ping took",
"getPeerInfoResult-pingWait": "Number of microseconds a queued ping has been waiting for a response",
"getPeerInfoResult-version": "The protocol version of the peer",
"getPeerInfoResult-subVer": "The user agent of the peer",
"getPeerInfoResult-inbound": "Whether or not the peer is an inbound connection",
"getPeerInfoResult-selectedTip": "The selected tip of the peer",
"getPeerInfoResult-banScore": "The ban score",
"getPeerInfoResult-feeFilter": "The requested minimum fee a transaction must have to be announced to the peer",
"getPeerInfoResult-syncNode": "Whether or not the peer is the sync peer",
// GetConnectedPeerInfoResult help.
"getConnectedPeerInfoResult-id": "A unique node ID",
"getConnectedPeerInfoResult-addr": "The ip address and port of the peer",
"getConnectedPeerInfoResult-services": "Services bitmask which represents the services supported by the peer",
"getConnectedPeerInfoResult-relayTxes": "Peer has requested transactions be relayed to it",
"getConnectedPeerInfoResult-lastSend":      "Time the last message was sent in seconds since 1 Jan 1970 GMT",
"getConnectedPeerInfoResult-lastRecv":      "Time the last message was received in seconds since 1 Jan 1970 GMT",
"getConnectedPeerInfoResult-bytesSent": "Total bytes sent",
"getConnectedPeerInfoResult-bytesRecv": "Total bytes received",
"getConnectedPeerInfoResult-connTime": "Time the connection was made in seconds since 1 Jan 1970 GMT",
"getConnectedPeerInfoResult-timeOffset": "The time offset of the peer",
"getConnectedPeerInfoResult-pingTime": "Number of microseconds the last ping took",
"getConnectedPeerInfoResult-pingWait": "Number of microseconds a queued ping has been waiting for a response",
"getConnectedPeerInfoResult-version": "The protocol version of the peer",
"getConnectedPeerInfoResult-subVer": "The user agent of the peer",
"getConnectedPeerInfoResult-inbound": "Whether or not the peer is an inbound connection",
"getConnectedPeerInfoResult-selectedTip": "The selected tip of the peer",
"getConnectedPeerInfoResult-banScore": "The ban score",
"getConnectedPeerInfoResult-feeFilter": "The requested minimum fee a transaction must have to be announced to the peer",
"getConnectedPeerInfoResult-syncNode": "Whether or not the peer is the sync peer",
// GetPeerInfoCmd help.
"getPeerInfo--synopsis": "Returns data about each connected network peer as an array of json objects.",
// GetConnectedPeerInfoCmd help.
"getConnectedPeerInfo--synopsis": "Returns data about each connected network peer as an array of json objects.",
// GetPeerAddressesResult help.
"getPeerAddressesResult-version": "Peers state serialization version",
"getPeerAddressesResult-key": "Address manager's key for randomness purposes.",
"getPeerAddressesResult-addresses": "The node's known addresses",
"getPeerAddressesResult-newBuckets": "Peers state subnetwork new buckets",
"getPeerAddressesResult-newBuckets--desc": "New buckets keyed by subnetwork ID",
"getPeerAddressesResult-newBuckets--key": "subnetworkId",
"getPeerAddressesResult-newBuckets--value": "New bucket",
"getPeerAddressesResult-newBucketFullNodes": "Peers state full nodes new bucket",
"getPeerAddressesResult-triedBuckets": "Peers state subnetwork tried buckets",
"getPeerAddressesResult-triedBuckets--desc": "Tried buckets keyed by subnetwork ID",
"getPeerAddressesResult-triedBuckets--key": "subnetworkId",
"getPeerAddressesResult-triedBuckets--value": "Tried bucket",
"getPeerAddressesResult-triedBucketFullNodes": "Peers state tried full nodes bucket",
"getPeerAddressesKnownAddressResult-addr": "Address",
"getPeerAddressesKnownAddressResult-src": "Address of the peer that handed the address",
"getPeerAddressesKnownAddressResult-subnetworkId": "Address subnetwork ID",
"getPeerAddressesKnownAddressResult-attempts": "Number of attempts to connect to the address",
"getPeerAddressesKnownAddressResult-timeStamp": "Time the address was added",
"getPeerAddressesKnownAddressResult-lastAttempt": "Last attempt to connect to the address",
"getPeerAddressesKnownAddressResult-lastSuccess": "Last successful attempt to connect to the address",
// GetPeerAddressesCmd help.
"getPeerAddresses--synopsis": "Returns the peers state.",
// GetRawMempoolVerboseResult help.
"getRawMempoolVerboseResult-size": "Transaction size in bytes",
@@ -488,7 +514,7 @@ var helpDescsEnUS = map[string]string{
// PingCmd help.
"ping--synopsis": "Queues a ping to be sent to each connected peer.\n" +
"Ping times are provided by getPeerInfo via the pingtime and pingwait fields.",
"Ping times are provided by getConnectedPeerInfo via the pingtime and pingwait fields.",
// RemoveManualNodeCmd help.
"removeManualNode--synopsis": "Removes a peer from the manual nodes list",
@@ -616,7 +642,8 @@ var rpcResultTypes = map[string][]interface{}{
"getMempoolInfo": {(*rpcmodel.GetMempoolInfoResult)(nil)},
"getMempoolEntry": {(*rpcmodel.GetMempoolEntryResult)(nil)},
"getNetTotals": {(*rpcmodel.GetNetTotalsResult)(nil)},
"getPeerInfo": {(*[]rpcmodel.GetPeerInfoResult)(nil)},
"getConnectedPeerInfo": {(*[]rpcmodel.GetConnectedPeerInfoResult)(nil)},
"getPeerAddresses": {(*[]rpcmodel.GetPeerAddressesResult)(nil)},
"getRawMempool": {(*[]string)(nil), (*rpcmodel.GetRawMempoolVerboseResult)(nil)},
"getSubnetwork": {(*rpcmodel.GetSubnetworkResult)(nil)},
"getTxOut": {(*rpcmodel.GetTxOutResult)(nil)},


@@ -629,7 +629,13 @@ func (m *wsNotificationManager) notifyFilteredBlockAdded(clients map[chan struct
"added notification: %s", err)
return
}
ntfn := rpcmodel.NewFilteredBlockAddedNtfn(block.BlueScore(), hex.EncodeToString(w.Bytes()), nil)
blueScore, err := block.BlueScore()
if err != nil {
log.Errorf("Failed to deserialize blue score for filtered block "+
"added notification: %s", err)
return
}
ntfn := rpcmodel.NewFilteredBlockAddedNtfn(blueScore, hex.EncodeToString(w.Bytes()), nil)
// Search for relevant transactions for each client and save them
// serialized in hex encoding for the notification.


@@ -1,23 +0,0 @@
base58
==========
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](https://choosealicense.com/licenses/isc/)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/kaspanet/kaspad/util/base58)
Package base58 provides an API for encoding and decoding to and from the
modified base58 encoding.
A comprehensive suite of tests is provided to ensure proper functionality.
## Examples
* [Decode Example](http://godoc.org/github.com/kaspanet/kaspad/util/base58#example-Decode)
Demonstrates how to decode modified base58 encoded data.
* [Encode Example](http://godoc.org/github.com/kaspanet/kaspad/util/base58#example-Encode)
Demonstrates how to encode data using the modified base58 encoding scheme.
* [CheckDecode Example](http://godoc.org/github.com/kaspanet/kaspad/util/base58#example-CheckDecode)
Demonstrates how to decode Base58Check encoded data.
* [CheckEncode Example](http://godoc.org/github.com/kaspanet/kaspad/util/base58#example-CheckEncode)
Demonstrates how to encode data using the Base58Check encoding scheme.


@@ -1,49 +0,0 @@
// Copyright (c) 2015 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
// AUTOGENERATED by genalphabet.go; do not edit.
package base58
const (
// alphabet is the modified base58 alphabet used by kaspa.
alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
alphabetIdx0 = '1'
)
var b58 = [256]byte{
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 0, 1, 2, 3, 4, 5, 6,
7, 8, 255, 255, 255, 255, 255, 255,
255, 9, 10, 11, 12, 13, 14, 15,
16, 255, 17, 18, 19, 20, 21, 255,
22, 23, 24, 25, 26, 27, 28, 29,
30, 31, 32, 255, 255, 255, 255, 255,
255, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 255, 44, 45, 46,
47, 48, 49, 50, 51, 52, 53, 54,
55, 56, 57, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
255, 255, 255, 255, 255, 255, 255, 255,
}


@@ -1,75 +0,0 @@
// Copyright (c) 2013-2015 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package base58
import (
"math/big"
)
//go:generate go run genalphabet.go
var bigRadix = big.NewInt(58)
var bigZero = big.NewInt(0)
// Decode decodes a modified base58 string to a byte slice.
func Decode(b string) []byte {
answer := big.NewInt(0)
j := big.NewInt(1)
scratch := new(big.Int)
for i := len(b) - 1; i >= 0; i-- {
tmp := b58[b[i]]
if tmp == 255 {
return []byte("")
}
scratch.SetInt64(int64(tmp))
scratch.Mul(j, scratch)
answer.Add(answer, scratch)
j.Mul(j, bigRadix)
}
tmpval := answer.Bytes()
var numZeros int
for numZeros = 0; numZeros < len(b); numZeros++ {
if b[numZeros] != alphabetIdx0 {
break
}
}
flen := numZeros + len(tmpval)
val := make([]byte, flen)
copy(val[numZeros:], tmpval)
return val
}
// Encode encodes a byte slice to a modified base58 string.
func Encode(b []byte) string {
x := new(big.Int)
x.SetBytes(b)
answer := make([]byte, 0, len(b)*136/100)
for x.Cmp(bigZero) > 0 {
mod := new(big.Int)
x.DivMod(x, bigRadix, mod)
answer = append(answer, alphabet[mod.Int64()])
}
// leading zero bytes
for _, i := range b {
if i != 0 {
break
}
answer = append(answer, alphabetIdx0)
}
// reverse
alen := len(answer)
for i := 0; i < alen/2; i++ {
answer[i], answer[alen-1-i] = answer[alen-1-i], answer[i]
}
return string(answer)
}
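The encoder above can be exercised as a standalone program. This restates the same algorithm — repeated big-integer division by 58, plus one '1' digit per leading zero byte — so its output can be checked against the package's test vectors; it is a sketch for illustration, not the package API:

```go
package main

import (
	"fmt"
	"math/big"
)

// alphabet is the modified base58 alphabet used by kaspa (no 0, O, I, l).
const alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

// encode converts a byte slice to a modified base58 string, mirroring the
// Encode function shown above.
func encode(b []byte) string {
	x := new(big.Int).SetBytes(b)
	radix := big.NewInt(58)
	mod := new(big.Int)
	var out []byte
	for x.Sign() > 0 {
		x.DivMod(x, radix, mod)
		out = append(out, alphabet[mod.Int64()])
	}
	// Each leading zero byte maps to the zero digit '1'.
	for _, v := range b {
		if v != 0 {
			break
		}
		out = append(out, alphabet[0])
	}
	// Digits were produced least-significant first; reverse them.
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return string(out)
}

func main() {
	fmt.Println(encode([]byte{0x61}))  // 2g
	fmt.Println(encode([]byte("abc"))) // ZiCa
}
```

The leading-zero loop is why the alphabet's first character matters: a base conversion alone would silently drop `0x00` prefix bytes, so they are preserved explicitly as `'1'` digits, and Decode re-counts them the same way.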


@@ -1,97 +0,0 @@
// Copyright (c) 2013-2017 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package base58_test
import (
"bytes"
"encoding/hex"
"github.com/kaspanet/kaspad/util/base58"
"testing"
)
var stringTests = []struct {
in string
out string
}{
{"", ""},
{" ", "Z"},
{"-", "n"},
{"0", "q"},
{"1", "r"},
{"-1", "4SU"},
{"11", "4k8"},
{"abc", "ZiCa"},
{"1234598760", "3mJr7AoUXx2Wqd"},
{"abcdefghijklmnopqrstuvwxyz", "3yxU3u1igY8WkgtjK92fbJQCd4BZiiT1v25f"},
{"00000000000000000000000000000000000000000000000000000000000000", "3sN2THZeE9Eh9eYrwkvZqNstbHGvrxSAM7gXUXvyFQP8XvQLUqNCS27icwUeDT7ckHm4FUHM2mTVh1vbLmk7y"},
}
var invalidStringTests = []struct {
in string
out string
}{
{"0", ""},
{"O", ""},
{"I", ""},
{"l", ""},
{"3mJr0", ""},
{"O3yxU", ""},
{"3sNI", ""},
{"4kl8", ""},
{"0OIl", ""},
{"!@#$%^&*()-_=+~`", ""},
}
var hexTests = []struct {
in string
out string
}{
{"61", "2g"},
{"626262", "a3gV"},
{"636363", "aPEr"},
{"73696d706c792061206c6f6e6720737472696e67", "2cFupjhnEsSn59qHXstmK2ffpLv2"},
{"00eb15231dfceb60925886b67d065299925915aeb172c06647", "1NS17iag9jJgTHD1VXjvLCEnZuQ3rJDE9L"},
{"516b6fcd0f", "ABnLTmg"},
{"bf4f89001e670274dd", "3SEo3LWLoPntC"},
{"572e4794", "3EFU7m"},
{"ecac89cad93923c02321", "EJDM8drfXA6uyA"},
{"10c8511e", "Rt5zm"},
{"00000000000000000000", "1111111111"},
}
func TestBase58(t *testing.T) {
// Encode tests
for x, test := range stringTests {
tmp := []byte(test.in)
if res := base58.Encode(tmp); res != test.out {
t.Errorf("Encode test #%d failed: got: %s want: %s",
x, res, test.out)
continue
}
}
// Decode tests
for x, test := range hexTests {
b, err := hex.DecodeString(test.in)
if err != nil {
t.Errorf("hex.DecodeString failed #%d: got: %s", x, test.in)
continue
}
if res := base58.Decode(test.out); !bytes.Equal(res, b) {
t.Errorf("Decode test #%d failed: got: %q want: %q",
x, res, test.in)
continue
}
}
// Decode with invalid input
for x, test := range invalidStringTests {
if res := base58.Decode(test.in); string(res) != test.out {
t.Errorf("Decode invalidString test #%d failed: got: %q want: %q",
x, res, test.out)
continue
}
}
}


@@ -1,34 +0,0 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package base58_test
import (
"bytes"
"github.com/kaspanet/kaspad/util/base58"
"testing"
)
func BenchmarkBase58Encode(b *testing.B) {
b.StopTimer()
data := bytes.Repeat([]byte{0xff}, 5000)
b.SetBytes(int64(len(data)))
b.StartTimer()
for i := 0; i < b.N; i++ {
base58.Encode(data)
}
}
func BenchmarkBase58Decode(b *testing.B) {
b.StopTimer()
data := bytes.Repeat([]byte{0xff}, 5000)
encoded := base58.Encode(data)
b.SetBytes(int64(len(encoded)))
b.StartTimer()
for i := 0; i < b.N; i++ {
base58.Decode(encoded)
}
}


@@ -1,52 +0,0 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package base58
import (
"crypto/sha256"
"github.com/pkg/errors"
)
// ErrChecksum indicates that the checksum of a check-encoded string does not verify against
// the checksum.
var ErrChecksum = errors.New("checksum error")
// ErrInvalidFormat indicates that the check-encoded string has an invalid format.
var ErrInvalidFormat = errors.New("invalid format: version and/or checksum bytes missing")
// checksum: first four bytes of sha256^2
func checksum(input []byte) (cksum [4]byte) {
h := sha256.Sum256(input)
h2 := sha256.Sum256(h[:])
copy(cksum[:], h2[:4])
return
}
// CheckEncode prepends a version byte and appends a four byte checksum.
func CheckEncode(input []byte, version byte) string {
b := make([]byte, 0, 1+len(input)+4)
b = append(b, version)
b = append(b, input[:]...)
cksum := checksum(b)
b = append(b, cksum[:]...)
return Encode(b)
}
// CheckDecode decodes a string that was encoded with CheckEncode and verifies the checksum.
func CheckDecode(input string) (result []byte, version byte, err error) {
decoded := Decode(input)
if len(decoded) < 5 {
return nil, 0, ErrInvalidFormat
}
version = decoded[0]
var cksum [4]byte
copy(cksum[:], decoded[len(decoded)-4:])
if checksum(decoded[:len(decoded)-4]) != cksum {
return nil, 0, ErrChecksum
}
payload := decoded[1 : len(decoded)-4]
result = append(result, payload...)
return
}
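The checksum scheme used by CheckEncode/CheckDecode above (first four bytes of a double SHA-256 over version byte plus payload) can be sketched on its own. This is a minimal, self-contained version; `checkEncodeBytes` is a hypothetical helper name for the raw byte sequence that CheckEncode would pass to base58 Encode.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// checksum4 returns the first four bytes of double-SHA256, matching the
// package's checksum helper above.
func checksum4(input []byte) (cksum [4]byte) {
	h := sha256.Sum256(input)
	h2 := sha256.Sum256(h[:])
	copy(cksum[:], h2[:4])
	return
}

// checkEncodeBytes (hypothetical name) prepends the version byte and appends
// the four-byte checksum, producing the raw bytes CheckEncode base58-encodes.
func checkEncodeBytes(input []byte, version byte) []byte {
	b := make([]byte, 0, 1+len(input)+4)
	b = append(b, version)
	b = append(b, input...)
	cksum := checksum4(b)
	return append(b, cksum[:]...)
}

func main() {
	raw := checkEncodeBytes([]byte("Test data"), 20)
	// The trailing four bytes must match a recomputation over the prefix.
	var got [4]byte
	copy(got[:], raw[len(raw)-4:])
	fmt.Println(got == checksum4(raw[:len(raw)-4])) // true
}
```

This also makes clear why CheckDecode rejects inputs shorter than 5 bytes: one version byte plus a four-byte checksum is the minimum possible length.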


@@ -1,65 +0,0 @@
// Copyright (c) 2013-2014 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package base58_test
import (
"github.com/kaspanet/kaspad/util/base58"
"testing"
)
var checkEncodingStringTests = []struct {
version byte
in string
out string
}{
{20, "", "3MNQE1X"},
{20, " ", "B2Kr6dBE"},
{20, "-", "B3jv1Aft"},
{20, "0", "B482yuaX"},
{20, "1", "B4CmeGAC"},
{20, "-1", "mM7eUf6kB"},
{20, "11", "mP7BMTDVH"},
{20, "abc", "4QiVtDjUdeq"},
{20, "1234598760", "ZmNb8uQn5zvnUohNCEPP"},
{20, "abcdefghijklmnopqrstuvwxyz", "K2RYDcKfupxwXdWhSAxQPCeiULntKm63UXyx5MvEH2"},
{20, "00000000000000000000000000000000000000000000000000000000000000", "bi1EWXwJay2udZVxLJozuTb8Meg4W9c6xnmJaRDjg6pri5MBAxb9XwrpQXbtnqEoRV5U2pixnFfwyXC8tRAVC8XxnjK"},
}
func TestBase58Check(t *testing.T) {
for x, test := range checkEncodingStringTests {
// test encoding
if res := base58.CheckEncode([]byte(test.in), test.version); res != test.out {
t.Errorf("CheckEncode test #%d failed: got %s, want: %s", x, res, test.out)
}
// test decoding
res, version, err := base58.CheckDecode(test.out)
if err != nil {
t.Errorf("CheckDecode test #%d failed with err: %v", x, err)
} else if version != test.version {
t.Errorf("CheckDecode test #%d failed: got version: %d want: %d", x, version, test.version)
} else if string(res) != test.in {
t.Errorf("CheckDecode test #%d failed: got: %s want: %s", x, res, test.in)
}
}
// test the two decoding failure cases
// case 1: checksum error
_, _, err := base58.CheckDecode("3MNQE1Y")
if err != base58.ErrChecksum {
t.Error("Checkdecode test failed, expected ErrChecksum")
}
// case 2: invalid formats (string lengths below 5 mean the version byte and/or the checksum
// bytes are missing).
testString := ""
for length := 0; length < 4; length++ {
// test a string of length `length`
_, _, err = base58.CheckDecode(testString)
if err != base58.ErrInvalidFormat {
t.Error("Checkdecode test failed, expected ErrInvalidFormat")
}
testString += "x"
}
}


@@ -1,19 +0,0 @@
/*
Package base58 provides an API for working with modified base58 and Base58Check
encodings.
Modified Base58 Encoding
Standard base58 encoding is similar to standard base64 encoding except, as the
name implies, it uses a 58-character alphabet, which results in an alphanumeric
string and allows characters that are problematic for humans to be
excluded. Due to this, there can be various base58 alphabets.
The modified base58 alphabet used by kaspa, and hence this package, omits the
0, O, I, and l characters that look the same in many fonts and are therefore
hard for humans to distinguish.
At the time of this writing, the Base58 encoding scheme is primarily used
for kaspa private keys.
*/
package base58
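The encoding the doc comment describes can be sketched with a simple big.Int-based encoder over the kaspa alphabet; the package's own Encode may be implemented differently, but the output should agree. The expected string below comes from the ExampleEncode output elsewhere in this diff.

```go
package main

import (
	"fmt"
	"math/big"
)

// The modified base58 alphabet described above: 0, O, I and l are omitted.
const alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

// encode is a minimal base58 encoder: treat the bytes as a big-endian
// integer, emit base-58 digits, and map each leading zero byte to '1'.
func encode(b []byte) string {
	x := new(big.Int).SetBytes(b)
	base := big.NewInt(58)
	mod := new(big.Int)
	var out []byte
	for x.Sign() > 0 {
		x.DivMod(x, base, mod)
		out = append(out, alphabet[mod.Int64()])
	}
	// Each leading zero byte maps to the zero digit '1'.
	for _, c := range b {
		if c != 0 {
			break
		}
		out = append(out, alphabet[0])
	}
	// Digits were produced least-significant first; reverse them.
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return string(out)
}

func main() {
	fmt.Println(encode([]byte("Test data"))) // 25JnwSn7XKfNQ
}
```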


@@ -1,70 +0,0 @@
// Copyright (c) 2014 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package base58_test
import (
"fmt"
"github.com/kaspanet/kaspad/util/base58"
)
// This example demonstrates how to decode modified base58 encoded data.
func ExampleDecode() {
// Decode example modified base58 encoded data.
encoded := "25JnwSn7XKfNQ"
decoded := base58.Decode(encoded)
// Show the decoded data.
fmt.Println("Decoded Data:", string(decoded))
// Output:
// Decoded Data: Test data
}
// This example demonstrates how to encode data using the modified base58
// encoding scheme.
func ExampleEncode() {
// Encode example data with the modified base58 encoding scheme.
data := []byte("Test data")
encoded := base58.Encode(data)
// Show the encoded data.
fmt.Println("Encoded Data:", encoded)
// Output:
// Encoded Data: 25JnwSn7XKfNQ
}
// This example demonstrates how to decode Base58Check encoded data.
func ExampleCheckDecode() {
// Decode an example Base58Check encoded data.
encoded := "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"
decoded, version, err := base58.CheckDecode(encoded)
if err != nil {
fmt.Println(err)
return
}
// Show the decoded data.
fmt.Printf("Decoded data: %x\n", decoded)
fmt.Println("Version Byte:", version)
// Output:
// Decoded data: 62e907b15cbf27d5425399ebf6f0fb50ebb88f18
// Version Byte: 0
}
// This example demonstrates how to encode data using the Base58Check encoding
// scheme.
func ExampleCheckEncode() {
// Encode example data with the Base58Check encoding scheme.
data := []byte("Test data")
encoded := base58.CheckEncode(data, 0)
// Show the encoded data.
fmt.Println("Encoded Data:", encoded)
// Output:
// Encoded Data: 182iP79GRURMp7oMHDU
}


@@ -1,79 +0,0 @@
// Copyright (c) 2015 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
//+build ignore
package base58
import (
"bytes"
"io"
"log"
"os"
"strconv"
)
var (
start = []byte(`// Copyright (c) 2015 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
// AUTOGENERATED by genalphabet.go; do not edit.
package base58
const (
// alphabet is the modified base58 alphabet used by kaspa.
alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
alphabetIdx0 = '1'
)
var b58 = [256]byte{`)
end = []byte(`}`)
alphabet = []byte("123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz")
tab = []byte("\t")
invalid = []byte("255")
comma = []byte(",")
space = []byte(" ")
nl = []byte("\n")
)
func write(w io.Writer, b []byte) {
_, err := w.Write(b)
if err != nil {
log.Fatal(err)
}
}
func main() {
fi, err := os.Create("alphabet.go")
if err != nil {
log.Fatal(err)
}
defer fi.Close()
write(fi, start)
write(fi, nl)
for i := byte(0); i < 32; i++ {
write(fi, tab)
for j := byte(0); j < 8; j++ {
idx := bytes.IndexByte(alphabet, i*8+j)
if idx == -1 {
write(fi, invalid)
} else {
write(fi, strconv.AppendInt(nil, int64(idx), 10))
}
write(fi, comma)
if j != 7 {
write(fi, space)
}
}
write(fi, nl)
}
write(fi, end)
write(fi, nl)
}
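genalphabet.go above writes a static 256-entry lookup table into alphabet.go at generation time. As a sketch, the same table can be built at runtime, which shows what the generated values mean: each byte maps to its base58 digit value, with 255 marking bytes outside the alphabet.

```go
package main

import "fmt"

const alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

// buildTable computes at runtime the same [256]byte table that
// genalphabet.go emits: table[c] is the base58 digit value of byte c,
// or 255 for bytes that are not in the alphabet.
func buildTable() (table [256]byte) {
	for i := range table {
		table[i] = 255
	}
	for i := 0; i < len(alphabet); i++ {
		table[alphabet[i]] = byte(i)
	}
	return
}

func main() {
	table := buildTable()
	// '1' is digit 0, 'z' is digit 57; '0' and 'l' are not in the alphabet.
	fmt.Println(table['1'], table['z'], table['0'], table['l']) // 0 57 255 255
}
```

Generating the table ahead of time simply trades this small startup cost for a compile-time constant.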


@@ -7,6 +7,7 @@ package util
import (
"bytes"
"fmt"
"github.com/kaspanet/kaspad/util/coinbasepayload"
"io"
"time"
@@ -33,12 +34,23 @@ func (e OutOfRangeError) Error() string {
// transactions on their first access so subsequent accesses don't have to
// repeat the relatively expensive hashing operations.
type Block struct {
msgBlock *wire.MsgBlock // Underlying MsgBlock
serializedBlock []byte // Serialized bytes for the block
blockHash *daghash.Hash // Cached block hash
transactions []*Tx // Transactions
txnsGenerated bool // ALL wrapped transactions generated
blueScore uint64 // Blue score
// Underlying MsgBlock
msgBlock *wire.MsgBlock
// Serialized bytes for the block. This is used only internally; use .Bytes() everywhere else.
serializedBlock []byte
// Cached block hash. This is used only internally; use .Hash() everywhere else.
blockHash *daghash.Hash
// Transactions. This is used only internally; use .Transactions() everywhere else.
transactions []*Tx
// ALL wrapped transactions generated
txnsGenerated bool
// Blue score. This is used only internally; use .BlueScore() everywhere else.
blueScore *uint64
}
// MsgBlock returns the underlying wire.MsgBlock for the Block.
@@ -200,13 +212,15 @@ func (b *Block) Timestamp() time.Time {
}
// BlueScore returns this block's blue score.
func (b *Block) BlueScore() uint64 {
return b.blueScore
}
// SetBlueScore sets the blue score of the block.
func (b *Block) SetBlueScore(blueScore uint64) {
b.blueScore = blueScore
func (b *Block) BlueScore() (uint64, error) {
if b.blueScore == nil {
blueScore, _, _, err := coinbasepayload.DeserializeCoinbasePayload(b.CoinbaseTransaction().MsgTx())
if err != nil {
return 0, err
}
b.blueScore = &blueScore
}
return *b.blueScore, nil
}
// NewBlock returns a new instance of a kaspa block given an underlying
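The new BlueScore accessor above uses a nil pointer as a "not yet computed" marker: the first call deserializes the value and caches it, and later calls return the cached copy. The pattern can be sketched in isolation; `loadBlueScore` is a hypothetical stand-in for the coinbasepayload deserialization, and the counter exists only to make the caching visible.

```go
package main

import "fmt"

// block mimics util.Block's lazy blueScore field: a nil pointer means
// "not yet computed", as in the diff above.
type block struct {
	blueScore *uint64
	loads     int // counts how often the expensive load ran (illustration only)
}

// loadBlueScore stands in for deserializing the coinbase payload; it is a
// hypothetical placeholder, not the real coinbasepayload call.
func (b *block) loadBlueScore() (uint64, error) {
	b.loads++
	return 42, nil
}

// BlueScore computes the value on first access and caches it, matching the
// shape of the new accessor in the diff.
func (b *block) BlueScore() (uint64, error) {
	if b.blueScore == nil {
		score, err := b.loadBlueScore()
		if err != nil {
			return 0, err
		}
		b.blueScore = &score
	}
	return *b.blueScore, nil
}

func main() {
	b := &block{}
	s1, _ := b.BlueScore()
	s2, _ := b.BlueScore()
	fmt.Println(s1, s2, b.loads) // 42 42 1 — the load ran only once
}
```

One caveat of this pattern: unlike the old SetBlueScore setter, errors from the underlying load now surface on every first access, which is why the accessor's signature gained an error return.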


@@ -212,18 +212,10 @@ func TestFilterInsertWithTweak(t *testing.T) {
// TestFilterInsertKey ensures inserting public keys and addresses works as
// expected.
func TestFilterInsertKey(t *testing.T) {
secret := "5Kg1gnAjaLfKiwhhPpGS3QfRg2m6awQvaj98JCZBZQ5SuS2F15C"
wif, err := util.DecodeWIF(secret)
if err != nil {
t.Errorf("TestFilterInsertKey DecodeWIF failed: %v", err)
return
}
f := bloom.NewFilter(2, 0, 0.001, wire.BloomUpdateAll)
serializedPubKey, err := wif.SerializePubKey()
serializedPubKey, err := hex.DecodeString("045b81f0017e2091e2edcd5eecf10d5bdd120a5514cb3ee65b8447ec18bfc4575c6d5bf415e54e03b1067934a0f0ba76b01c6b9ab227142ee1d543764b69d901e0")
if err != nil {
t.Errorf("TestFilterInsertKey SerializePubKey failed: %v", err)
t.Errorf("TestFilterInsertKey DecodeString failed: %v", err)
return
}
f.Add(serializedPubKey)


@@ -0,0 +1,66 @@
package coinbasepayload
import (
"bytes"
"encoding/binary"
"github.com/kaspanet/kaspad/util/binaryserializer"
"github.com/kaspanet/kaspad/wire"
"github.com/pkg/errors"
)
var byteOrder = binary.LittleEndian
// SerializeCoinbasePayload builds the coinbase payload from the provided blueScore, scriptPubKey and extra data.
func SerializeCoinbasePayload(blueScore uint64, scriptPubKey []byte, extraData []byte) ([]byte, error) {
w := &bytes.Buffer{}
err := binaryserializer.PutUint64(w, byteOrder, blueScore)
if err != nil {
return nil, err
}
err = wire.WriteVarInt(w, uint64(len(scriptPubKey)))
if err != nil {
return nil, err
}
_, err = w.Write(scriptPubKey)
if err != nil {
return nil, err
}
_, err = w.Write(extraData)
if err != nil {
return nil, err
}
return w.Bytes(), nil
}
// ErrIncorrectScriptPubKeyLen indicates that the script pub key length is not as expected.
var ErrIncorrectScriptPubKeyLen = errors.New("incorrect script pub key length")
// DeserializeCoinbasePayload deserializes the coinbase payload into its components (blueScore, scriptPubKey and extra data).
func DeserializeCoinbasePayload(tx *wire.MsgTx) (blueScore uint64, scriptPubKey []byte, extraData []byte, err error) {
r := bytes.NewReader(tx.Payload)
blueScore, err = binaryserializer.Uint64(r, byteOrder)
if err != nil {
return 0, nil, nil, err
}
scriptPubKeyLen, err := wire.ReadVarInt(r)
if err != nil {
return 0, nil, nil, err
}
scriptPubKey = make([]byte, scriptPubKeyLen)
n, err := r.Read(scriptPubKey)
if err != nil {
return 0, nil, nil, err
}
if uint64(n) != scriptPubKeyLen {
return 0, nil, nil,
errors.Wrapf(ErrIncorrectScriptPubKeyLen, "expected %d bytes in script pub key but got %d", scriptPubKeyLen, n)
}
extraData = make([]byte, r.Len())
if r.Len() != 0 {
_, err = r.Read(extraData)
if err != nil {
return 0, nil, nil, err
}
}
return blueScore, scriptPubKey, extraData, nil
}


@@ -1,170 +0,0 @@
// Copyright (c) 2013-2016 The btcsuite developers
// Use of this source code is governed by an ISC
// license that can be found in the LICENSE file.
package util
import (
"bytes"
"github.com/kaspanet/go-secp256k1"
"github.com/kaspanet/kaspad/util/base58"
"github.com/kaspanet/kaspad/util/daghash"
"github.com/pkg/errors"
)
// ErrMalformedPrivateKey describes an error where a WIF-encoded private
// key cannot be decoded due to being improperly formatted. This may occur
// if the byte length is incorrect or an unexpected magic number was
// encountered.
var ErrMalformedPrivateKey = errors.New("malformed private key")
// compressMagic is the magic byte used to identify a WIF encoding for
// an address created from a compressed serialized public key.
const compressMagic byte = 0x01
// WIF contains the individual components described by the Wallet Import Format
// (WIF). A WIF string is typically used to represent a private key and its
// associated address in a way that may be easily copied and imported into or
// exported from wallet software. WIF strings may be decoded into this
// structure by calling DecodeWIF or created with a user-provided private key
// by calling NewWIF.
type WIF struct {
// PrivKey is the private key being imported or exported.
PrivKey *secp256k1.PrivateKey
// CompressPubKey specifies whether the address controlled by the
// imported or exported private key was created by hashing a
// compressed (33-byte) serialized public key, rather than an
// uncompressed (65-byte) one.
CompressPubKey bool
// netID is the kaspa network identifier byte used when
// WIF encoding the private key.
netID byte
}
// NewWIF creates a new WIF structure to export an address and its private key
// as a string encoded in the Wallet Import Format. The compress argument
// specifies whether the address intended to be imported or exported was created
// by serializing the public key compressed rather than uncompressed.
func NewWIF(privKey *secp256k1.PrivateKey, privateKeyID byte, compress bool) (*WIF, error) {
return &WIF{privKey, compress, privateKeyID}, nil
}
// IsForNet returns whether or not the decoded WIF structure is associated
// with the passed kaspa network.
func (w *WIF) IsForNet(privateKeyID byte) bool {
return w.netID == privateKeyID
}
// DecodeWIF creates a new WIF structure by decoding the string encoding of
// the import format.
//
// The WIF string must be a base58-encoded string of the following byte
// sequence:
//
// * 1 byte to identify the network, must be 0x80 for mainnet or 0xef for
// either testnet or the regression test network
// * 32 bytes of a binary-encoded, big-endian, zero-padded private key
// * Optional 1 byte (equal to 0x01) if the address being imported or exported
// was created by taking the RIPEMD160 after SHA256 hash of a serialized
// compressed (33-byte) public key
// * 4 bytes of checksum, must equal the first four bytes of the double SHA256
// of every byte before the checksum in this sequence
//
// If the base58-decoded byte sequence does not match this, DecodeWIF will
// return a non-nil error. ErrMalformedPrivateKey is returned when the WIF
// is of an impossible length or the expected compressed pubkey magic number
// does not equal the expected value of 0x01. ErrChecksumMismatch is returned
// if the expected WIF checksum does not match the calculated checksum.
func DecodeWIF(wif string) (*WIF, error) {
decoded := base58.Decode(wif)
decodedLen := len(decoded)
var compress bool
// Length of base58 decoded WIF must be 32 bytes + an optional 1 byte
// (0x01) if compressed, plus 1 byte for netID + 4 bytes of checksum.
switch decodedLen {
case 1 + secp256k1.SerializedPrivateKeySize + 1 + 4:
if decoded[33] != compressMagic {
return nil, ErrMalformedPrivateKey
}
compress = true
case 1 + secp256k1.SerializedPrivateKeySize + 4:
compress = false
default:
return nil, ErrMalformedPrivateKey
}
// Checksum is first four bytes of double SHA256 of the identifier byte
// and privKey. Verify this matches the final 4 bytes of the decoded
// private key.
var tosum []byte
if compress {
tosum = decoded[:1+secp256k1.SerializedPrivateKeySize+1]
} else {
tosum = decoded[:1+secp256k1.SerializedPrivateKeySize]
}
cksum := daghash.DoubleHashB(tosum)[:4]
if !bytes.Equal(cksum, decoded[decodedLen-4:]) {
return nil, ErrChecksumMismatch
}
netID := decoded[0]
privKeyBytes := decoded[1 : 1+secp256k1.SerializedPrivateKeySize]
privKey, err := secp256k1.DeserializePrivateKeyFromSlice(privKeyBytes)
if err != nil {
return nil, err
}
return &WIF{privKey, compress, netID}, nil
}
// String creates the Wallet Import Format string encoding of a WIF structure.
// See DecodeWIF for a detailed breakdown of the format and requirements of
// a valid WIF string.
func (w *WIF) String() string {
// Precalculate size. Maximum number of bytes before base58 encoding
// is one byte for the network, 32 bytes of private key, possibly one
// extra byte if the pubkey is to be compressed, and finally four
// bytes of checksum.
encodeLen := 1 + secp256k1.SerializedPrivateKeySize + 4
if w.CompressPubKey {
encodeLen++
}
a := make([]byte, 0, encodeLen)
a = append(a, w.netID)
// Pad and append bytes manually, instead of using Serialize, to
// avoid another call to make.
a = paddedAppend(secp256k1.SerializedPrivateKeySize, a, w.PrivKey.Serialize()[:])
if w.CompressPubKey {
a = append(a, compressMagic)
}
cksum := daghash.DoubleHashB(a)[:4]
a = append(a, cksum...)
return base58.Encode(a)
}
// SerializePubKey serializes the associated public key of the imported or
// exported private key in either a compressed or uncompressed format. The
// serialization format chosen depends on the value of w.CompressPubKey.
func (w *WIF) SerializePubKey() ([]byte, error) {
pk, err := w.PrivKey.SchnorrPublicKey()
if err != nil {
return nil, err
}
if w.CompressPubKey {
return pk.SerializeCompressed()
}
return pk.SerializeUncompressed()
}
// paddedAppend appends the src byte slice to dst, returning the new slice.
// If the length of the source is smaller than the passed size, leading zero
// bytes are appended to the dst slice before appending src.
func paddedAppend(size uint, dst, src []byte) []byte {
for i := 0; i < int(size)-len(src); i++ {
dst = append(dst, 0)
}
return append(dst, src...)
}
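The byte layout that DecodeWIF's comment describes (netID byte, 32-byte private key, optional 0x01 compression magic, four-byte double-SHA256 checksum) can be validated in isolation. This sketch assumes a 32-byte key (secp256k1.SerializedPrivateKeySize) and uses crypto/sha256 directly in place of daghash.DoubleHashB; `checkWIFBytes` is a hypothetical helper name.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

const keySize = 32           // stands in for secp256k1.SerializedPrivateKeySize
const compressMagic = 0x01   // marker for a compressed-pubkey WIF

func doubleSHA256(b []byte) []byte {
	h := sha256.Sum256(b)
	h2 := sha256.Sum256(h[:])
	return h2[:]
}

// checkWIFBytes validates a decoded (pre-base58) WIF byte sequence per the
// rules in DecodeWIF's comment: length, optional compression magic, checksum.
func checkWIFBytes(decoded []byte) (netID byte, compress bool, err error) {
	switch len(decoded) {
	case 1 + keySize + 1 + 4:
		if decoded[1+keySize] != compressMagic {
			return 0, false, fmt.Errorf("malformed private key")
		}
		compress = true
	case 1 + keySize + 4:
		compress = false
	default:
		return 0, false, fmt.Errorf("malformed private key")
	}
	// Checksum covers every byte before the final four.
	tosum := decoded[:len(decoded)-4]
	if !bytes.Equal(doubleSHA256(tosum)[:4], decoded[len(decoded)-4:]) {
		return 0, false, fmt.Errorf("checksum mismatch")
	}
	return decoded[0], compress, nil
}

func main() {
	// Build a valid uncompressed-style byte sequence and validate it.
	payload := append([]byte{0x80}, make([]byte, keySize)...)
	wifBytes := append(payload, doubleSHA256(payload)[:4]...)
	netID, compress, err := checkWIFBytes(wifBytes)
	fmt.Println(netID, compress, err) // 128 false <nil>
}
```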

Some files were not shown because too many files have changed in this diff.