Compare commits

...

56 Commits

Author SHA1 Message Date
Svarog
e457503aea Update go-deploy workflow to upload all executables (#1604) 2021-03-14 13:47:54 +02:00
Elichai Turkel
c6c63c7ff9 Add a github action deploy script to build and publish releases (#1586)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-09 12:56:12 +02:00
Mike Zak
dd91ffb7d8 Added changelog for v0.9.1 2021-03-09 12:42:30 +02:00
Svarog
921ca19b42 Update testnet genesis and testnet network name (#1582)
* Update testnet genesis and testnet network name

* Move genesis closer to the present
2021-03-08 15:04:18 +02:00
Mike Zak
98c2dc8189 Update to version 0.9.1 2021-03-08 11:49:38 +02:00
Mike Zak
37654156a6 Update changelog for v0.9.0 2021-03-04 10:53:00 +02:00
Ori Newman
0271858f25 Re-add BlockHashes to getBlocks response (#1575) 2021-03-03 16:27:51 +02:00
Elichai Turkel
7909480757 Merge big subdags in pick virtual parents (#1574)
* Refactor mergeSetIncrease to return the current BFS block to allow easier merging

* Remove unneeded Heap/HashSet usages

* Add new IsAnyAncestorOf to DAGTopologyManager

* Check if the new candidate is in the future of any existing candidate

* Add comments and fix off-by-one in the mergeSetIncrease queue

* Fixed DAGTopology test mock

* Fix review comments
2021-03-03 16:17:16 +02:00
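The bullet list above describes a bounded BFS over a candidate parent's past to measure how much it would grow the merge set. A minimal, self-contained Go sketch of that idea follows; the dag and hashSet types, the mergeSetIncrease signature and the limit handling are illustrative assumptions, not kaspad's actual consensus code.

package main

import "fmt"

// hashSet is a hypothetical stand-in for the consensus hash-set type.
type hashSet map[string]bool

// dag maps each block hash to its direct parents. This is a toy structure,
// not the real DAGTopologyManager.
type dag map[string][]string

// mergeSetIncrease walks the past of `candidate` with a BFS and counts how many
// blocks would be added to the merge set, i.e. blocks not already covered by
// `selectedPast`. It stops early once `limit` is exceeded, mirroring the
// early-exit the commit describes. The off-by-one trap is counting a block both
// when it is enqueued and when it is dequeued — here each block is counted
// exactly once, at enqueue time.
func mergeSetIncrease(d dag, candidate string, selectedPast hashSet, limit int) (int, bool) {
	visited := hashSet{candidate: true}
	queue := []string{candidate}
	increase := 1 // the candidate itself joins the merge set
	for len(queue) > 0 {
		current := queue[0]
		queue = queue[1:]
		for _, parent := range d[current] {
			if visited[parent] || selectedPast[parent] {
				continue
			}
			visited[parent] = true
			increase++
			if increase > limit {
				return increase, false // merge set too big — reject this candidate
			}
			queue = append(queue, parent)
		}
	}
	return increase, true
}

func main() {
	d := dag{"E": {"C", "D"}, "C": {"A"}, "D": {"B"}, "A": {}, "B": {}}
	inc, ok := mergeSetIncrease(d, "E", hashSet{"A": true}, 10)
	fmt.Println(inc, ok) // 4 true: E, C, D, B (A is already in the selected past)
}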
Ori Newman
18274c236b Write in the reject message the tx rejection reason (#1573)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-03-02 20:11:31 +02:00
Elichai Turkel
7829a9fd76 Add nil checks for protowire (#1570)
* Handle errors in p2p handshake better

* Add a new errNil in protowire

* Add nil checks for all protowire.toAppMessage() functions for the p2p

* Add nil checks for all protowire.toAppMessage() functions for the RPC

* Add nil check for protowire KaspadMessage

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-02 19:14:31 +02:00
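The commit above guards every protowire-to-appmessage conversion against nil embedded messages. A hedged sketch of the pattern, with made-up wirePing/appPing types standing in for the generated protobuf and appmessage structs:

package main

import (
	"errors"
	"fmt"
)

// errNil mirrors the shared "nil field" error the commit adds to protowire;
// the wirePing/appPing types and this conversion function are illustrative
// placeholders, not the generated protobuf or appmessage code.
var errNil = errors.New("a required field is nil")

type wirePing struct{ Nonce uint64 }
type appPing struct{ Nonce uint64 }

// toAppMessage converts a wire-level ping into an application-level message.
// The nil check is the point of the commit: without it, a message whose
// payload field was never set would cause a nil-pointer panic deep inside the
// conversion instead of a clean, reportable error.
func (x *wirePing) toAppMessage() (*appPing, error) {
	if x == nil {
		return nil, errNil
	}
	return &appPing{Nonce: x.Nonce}, nil
}

func main() {
	var p *wirePing // simulate a KaspadMessage whose payload was never populated
	if _, err := p.toAppMessage(); err != nil {
		fmt.Println("conversion failed cleanly:", err)
	}
}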
Ori Newman
1548ed9629 Increase getBlocks limit to 1000 (#1572) 2021-03-02 18:24:37 +02:00
Svarog
32cd643e8b Clone transactions before returning them out of mempool (#1571)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-03-02 17:28:53 +02:00
Ori Newman
05df6e3e4e Return RPC error if getBlock's lowHash doesn't exist (#1569)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-02 16:21:55 +02:00
Svarog
ce326b3c7d Add default dns-seeder to testnet (#1568) 2021-03-02 12:57:16 +02:00
Svarog
a4ae263ed5 Don't print stack trace on disconnected (#1567) 2021-03-02 12:25:33 +02:00
Elichai Turkel
79be1edaa5 Fix utxoindex deserialization (#1566)
* Fix broken deserialization in utxoindex

* Add Tests for hashes serialization in utxo index
2021-03-01 19:06:19 +02:00
Svarog
c1ef5f0c56 Add pruning point hash to GetBlockDagInfo response (#1565)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-03-01 18:46:45 +02:00
Elichai Turkel
458e409654 Rename handleRequestBlocksFlow to handleRequestHeadersFlow (#1563) 2021-03-01 15:01:59 +02:00
Svarog
df19bdfaf3 Use EmitUnpopulated so that kaspactl prints all fields, even the default ones (#1561)
Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-03-01 14:47:34 +02:00
Elichai Turkel
103edf97d0 Stop logging an error whenever an RPC/P2P connection is canceled (#1562)
* Don't log error when connectionLoop is canceled

* Print new line after Exiting...

* Add stacktrace to the unknown error from connectionLoops
2021-03-01 14:37:42 +02:00
Elichai Turkel
1f69f9eed9 Cleanup the logger and make it asynchronous (#1524)
* Remove Subsystems map and replace with RegisterSubSystem

* Clean up the logger

* Fix LOGFLAGS and make LongFile work correctly

* Parallelize the logger backend

* More logger cleanup

* Initialize and close the logger backend wherever it's needed

* Move the location where the backend is closed, also print the log if it panics while writing

* Add TestMain to reachability manager tests to preserve the same log level

* Fix review comments

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-01 14:04:40 +02:00
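A toy sketch of the asynchronous-backend idea from the commit above: callers push formatted lines onto a buffered channel and a single goroutine performs the slow writes, and closing the backend flushes it (which is why StartApp now defers a backend close, as seen in the diffs below). The type and method names here are illustrative, not the actual logger package API.

package main

import (
	"fmt"
	"os"
	"sync"
)

// asyncBackend illustrates the "parallelize the logger backend" idea: logging
// never blocks the hot path beyond a channel send.
type asyncBackend struct {
	lines chan string
	done  sync.WaitGroup
}

func newAsyncBackend(buffer int) *asyncBackend {
	b := &asyncBackend{lines: make(chan string, buffer)}
	b.done.Add(1)
	go func() {
		defer b.done.Done()
		for line := range b.lines {
			fmt.Fprintln(os.Stdout, line) // the only place actual I/O happens
		}
	}()
	return b
}

// Log enqueues a line; it only blocks if the buffer is full.
func (b *asyncBackend) Log(subsystem, msg string) {
	b.lines <- fmt.Sprintf("[%s] %s", subsystem, msg)
}

// Close flushes pending lines and stops the writer goroutine.
func (b *asyncBackend) Close() {
	close(b.lines)
	b.done.Wait()
}

func main() {
	backend := newAsyncBackend(1024)
	defer backend.Close()
	backend.Log("KASD", "kaspad started")
	backend.Log("PROT", "peer connected")
}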
Elichai Turkel
6a12428504 Close iterators (#1542)
* Add Close() function to all the iterators

* Add defer iterator.Close() whenever we open an iterator

* Add isClosed to all iterators and panic/return error if used after closing

Co-authored-by: Svarog <feanorr@gmail.com>
2021-03-01 11:15:59 +02:00
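A minimal sketch of the iterator discipline this commit introduces: every iterator exposes Close, callers defer it as soon as the iterator is opened, and an isClosed flag turns use-after-close into an explicit error or panic instead of a silent read of stale state. The sliceIterator type is a stand-in, not the real database iterator.

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

type sliceIterator struct {
	items    []string
	index    int
	isClosed bool
}

func (it *sliceIterator) Next() bool {
	if it.isClosed {
		panic(errors.New("Next called on a closed iterator"))
	}
	it.index++
	return it.index < len(it.items)
}

func (it *sliceIterator) Value() (string, error) {
	if it.isClosed {
		return "", errors.New("Value called on a closed iterator")
	}
	return it.items[it.index], nil
}

func (it *sliceIterator) Close() error {
	if it.isClosed {
		return errors.New("Close called on an already closed iterator")
	}
	it.isClosed = true
	it.items = nil // release the underlying resource
	return nil
}

func main() {
	it := &sliceIterator{items: []string{"a", "b", "c"}, index: -1}
	defer it.Close() // the "defer iterator.Close() whenever we open an iterator" rule
	for it.Next() {
		v, _ := it.Value()
		fmt.Println(v)
	}
}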
Svarog
63b1d2a05a Add childrenHashes to GetBlock/s RPC commands (#1560)
* Add childrenHashes to GetBlock/s RPC commands

* Fix missed error + implement GetBlockChildren in fakeRelayInvsContext
2021-02-28 18:28:29 +02:00
Svarog
089115c389 Add ScriptPublicKey.Version to RPC (#1559)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-25 17:43:06 +02:00
Elichai Turkel
24a12cf2a1 Fix subnetworks FromString and disable reversal (#1557)
* Remove DomainSubnetworkID reversal

* Fix DomainSubnetworkID FromString implementation

* Change RPC conversion logic to use Stringer/FromString

Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-25 16:35:31 +02:00
Mike Zak
6597f24bab Add changelog for v0.8.10 2021-02-25 14:56:05 +02:00
Svarog
dc84913214 Convert ProtocolError to value-type, so that it can be used with errors.As + fix SubmitBlock ProtocolError condition (#1555)
* Fix condition from || to &&

* Convert ProtocolError to value-type, so that it can be used with errors.As

* Simplify condition further
2021-02-24 17:13:59 +02:00
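A small sketch of why the value-type conversion matters: with a value type, callers can declare a zero-value error and let errors.As fill it in even after the error has been wrapped on its way up the stack. The protocolError type below is simplified from the one in the protocolerrors diff further down.

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// protocolError is a simplified, value-type version of the error in the
// commit: it carries a should-ban flag alongside the underlying cause.
type protocolError struct {
	ShouldBan bool
	Cause     error
}

func (e protocolError) Error() string { return e.Cause.Error() }
func (e protocolError) Unwrap() error { return e.Cause }

func rejectBlock() error {
	return protocolError{ShouldBan: true, Cause: errors.New("got unrequested block")}
}

func main() {
	// Wrap the error as it crosses layers, then recover the typed value.
	err := errors.Wrap(rejectBlock(), "relay flow failed")

	protocolErr := protocolError{}
	if errors.As(err, &protocolErr) {
		fmt.Printf("protocol error (shouldBan=%v): %v\n", protocolErr.ShouldBan, protocolErr.Cause)
	}
}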
Ori Newman
201646282b Fix the target block rate to create less bursty mining (#1554)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-24 16:35:38 +02:00
Svarog
3f08bf87a8 Fix mempool.RemoveTransactions (#1551)
* Fix mempool.RemoveTransactions

* Don't crash if mempool.RemoveTransactions returns an error; log.Critical instead
2021-02-24 12:47:21 +02:00
Ori Newman
581a12db96 Add RPC reconnection to the miner (#1552)
* Add RPC reconnection to the miner

* Fix wrapf

* Change logs
2021-02-24 10:25:13 +02:00
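A hedged sketch of the reconnection behaviour this commit adds to kaspaminer: instead of exiting when the node's RPC server goes away, wait a bit and dial again. The connectWithRetry helper, the interval and the attempt cap are illustrative, not the miner's actual code.

package main

import (
	"fmt"
	"time"

	"github.com/pkg/errors"
)

// connectFn stands in for whatever dials the node's RPC server; in the real
// miner that would be the RPC client constructor.
type connectFn func() error

// connectWithRetry keeps trying to (re)connect until it succeeds or gives up.
func connectWithRetry(connect connectFn, retryInterval time.Duration, maxAttempts int) error {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := connect()
		if err == nil {
			return nil
		}
		fmt.Printf("RPC connection attempt %d failed: %s. Retrying in %s\n",
			attempt, err, retryInterval)
		time.Sleep(retryInterval)
	}
	return errors.Errorf("could not connect after %d attempts", maxAttempts)
}

func main() {
	attempts := 0
	err := connectWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection refused")
		}
		return nil // pretend the node came back up
	}, 10*time.Millisecond, 5)
	fmt.Println("connected:", err == nil)
}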
Svarog
fb6c9c8f21 Remove virtual diff parents (#1550)
* resolveSingleBlockStatus: If the block being resolved is not going to be the next selectedTip - set its diffParent to be the old selectedTip

* resolveSingleBlockStatus: If the block being resolved is going to be the next selectedTip - set it as old selectedTip's diffChild

* Remove any mentions of virtualDiffParents

* If block is genesis - don't do all the mumbo-jumbo with oldSelectedTip

* Check an unchecked error

* Write a better log message
2021-02-23 17:19:35 +02:00
Ori Newman
2adb4f5d0f Fix UTXO index (#1548)
* Add VirtualUTXODiff and VirtualParents to block insertion result

* Add GetVirtualUTXOs

* Add OnPruningPointUTXOSetOverrideHandler

* Add recovery to UTXO index

* Add UTXO set override notification

* Fix compilation error

* Fix iterators in UTXO index and fix TestUTXOIndex

* Change Dialing to DEBUG

* Change LogBlock location

* Rename StopNotify to StopNotifying

* Add sanity check

* Add comment

* Remove receiver from serialization functions

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-02-23 16:51:51 +02:00
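A toy sketch of the recovery flow the bullets above add: when the pruning point UTXO set is overridden during IBD, the UTXO index is reset and RPC listeners are notified. Only the reset-then-notify order is taken from the RPC manager diff further down in this compare; the types here are placeholders.

package main

import "fmt"

// utxoIndex and notifier are toy stand-ins for the UTXO index and the RPC
// notification manager; the real types live in domain/utxoindex and
// app/rpc/rpccontext.
type utxoIndex struct{ entries int }

func (u *utxoIndex) Reset() error {
	u.entries = 0 // the real implementation rebuilds from the virtual UTXO set
	return nil
}

type notifier struct{}

func (n *notifier) NotifyPruningPointUTXOSetOverride() error {
	fmt.Println("PruningPointUTXOSetOverrideNotification sent to listeners")
	return nil
}

// onPruningPointUTXOSetOverride mirrors the reset-then-notify sequence shown
// in the Manager.notifyPruningPointUTXOSetOverride hunk below.
func onPruningPointUTXOSetOverride(index *utxoIndex, n *notifier) error {
	if err := index.Reset(); err != nil {
		return err
	}
	return n.NotifyPruningPointUTXOSetOverride()
}

func main() {
	index := &utxoIndex{entries: 42}
	if err := onPruningPointUTXOSetOverride(index, &notifier{}); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("index entries after reset:", index.entries)
}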
Ori Newman
9ffda2b1da Change disconnect log level to INFO (#1549)
* Change disconnect log level to INFO

* Add disconnected log and change log levels
2021-02-23 13:52:47 +02:00
Svarog
bee0893660 Renamed BlueBlockWindow to just BlockWindow (#1547)
* Renamed BlueBlockWindow to just BlockWindow

* Update comment
2021-02-22 11:33:39 +02:00
talelbaz
a7bb1853f9 Adds tests for transaction validator and block validators (#1531)
* [NOD-1453] cover failing block validation

* [NOD-1453] Complete covering test for invalid block

* [NOD-1453] Fix validator tests after rebase

* [NOD-1453] Cover tests for valid blocks

* [NOD-1453] Implement unit tests for ValidateTransactionInIsolation

* [NOD-1453] Add tests for ValidateTransactionInContextAndPopulateMassAndFee

* [NOD-1453] Cover ValidateHeaderInContext test

* [NOD-1453] Fix after rebase

* Not finished

* Committed to update the branch.

* Adds new tests to block_body_in_isolation_test.go according to (and instead of) blockvalidator_test.go

* Adds a comment to type MEDIAN.

* Fixes according to the review notes: add notes and change variable names.

* Fix comment.

* Remove an unused test (all the tests in this file were moved to other test files).

* Change a variable name (txWithAnEmptyInvalidScript to txWithInvalidSignature).

* Adds missing '}'.

* Change spaces to tab

Co-authored-by: karim1king <karimkaspersky@yahoo.com>
Co-authored-by: Karim A <karim.a@it-dimension.com>
Co-authored-by: tal <tal@daglabs.com>
2021-02-21 17:46:22 +02:00
talelbaz
35e555e959 Tests validateDifficulty (ValidatePruningPointViolationAndProofOfWorkAndDifficulty) (#1532)
* Adds tests for validateDifficulty

* Fixes according to the review notes: add the test's goal and fix a mismatched test name in NewTestConsensus.

* Fixes according to the review notes: delete the function genesisBits (no usages).

* Fix according to review - fix comments.

Co-authored-by: tal <tal@daglabs.com>
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-21 17:00:34 +02:00
Elichai Turkel
a250f697ee Prevent fast failing (#1545) 2021-02-21 16:09:19 +02:00
Elichai Turkel
f66708b3c6 Make the CI more verbose and cache kaspad dependencies in the dockerfile (#1541)
* Make the CI more verbose

* Improve the docker caching of kaspad dependencies

Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-21 12:15:32 +02:00
Svarog
8fbea5d239 Increase the sleep time in kaspaminer when the node is not synced (#1544)
Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-21 11:46:06 +02:00
Svarog
5fa06fe7d7 Upgrade everything to go1.16 (#1539)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-21 09:17:21 +02:00
Ori Newman
06fd6f1b95 Parallelize tests on Dockerfile (#1540)
Co-authored-by: Svarog <feanorr@gmail.com>
2021-02-18 11:38:44 +02:00
Ori Newman
d2f4ed660c Disallow header only blocks on RPC, relay and when requesting IBD full blocks (#1537) 2021-02-18 10:39:12 +02:00
Elichai Turkel
19878aa062 Make templateManager hold a DomainBlock and isSynced bool instead of a GetBlockTemplateResponseMessage (#1538) 2021-02-18 00:59:11 +02:00
Elichai Turkel
6415e525c3 Run go test with the race detector in GitHub Actions as a cron job (#1534)
Co-authored-by: Ori Newman <orinewman1@gmail.com>
2021-02-17 18:59:42 +02:00
Svarog
995e526dae Make antiPastHashesBetween return blocks sorted in ghostdag-order (#1536)
* Make antiPastHashesBetween return blocks sorted in ghostdag-order

* Return sortedMergeSet instead of blueMergeSet

* Invert the order of parameters of IsAncestorOf

* Add RenderDAGToDot to TestConsensus

* Add HighHash explicitly, unless lowHash == highHash

* Use Equal instead of == when comparing hashes

* Fixed TestSyncManager_GetHashesBetween

* Fix tests

* findHighHashAccordingToMaxBlueScoreDifference: don't start looking if the whole thing fits

* Handle a missed error

* Remove redundant call to RenderToDot

* Fix bug in findHighHashAccordingToMaxBlueScoreDifference
2021-02-17 18:22:08 +02:00
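On the bullet "Use Equal instead of == when comparing hashes": hashes are passed around as pointers, so == compares addresses rather than contents. A minimal illustration with a toy domainHash type (the real externalapi.DomainHash exposes an equivalent Equal method):

package main

import (
	"bytes"
	"fmt"
)

// domainHash is a toy version of externalapi.DomainHash, just enough to show
// the pitfall: comparing two *domainHash values with == compares pointers.
type domainHash struct{ hash [32]byte }

func (h *domainHash) Equal(other *domainHash) bool {
	if h == nil || other == nil {
		return h == other
	}
	return bytes.Equal(h.hash[:], other.hash[:])
}

func main() {
	a := &domainHash{hash: [32]byte{1, 2, 3}}
	b := &domainHash{hash: [32]byte{1, 2, 3}}

	fmt.Println(a == b)     // false: different pointers, despite equal contents
	fmt.Println(a.Equal(b)) // true: compares the underlying bytes
}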
Elichai Turkel
00a023620d Fix a data race in the block logger (#1533) 2021-02-17 17:05:25 +02:00
Ori Newman
2908a46441 Don't ban when sending pruned blocks (#1530) 2021-02-15 16:43:35 +02:00
Ori Newman
e78cdff3d0 Don't mark block that got rejected because of ruleerrors.ErrPrunedBlock as invalid (#1529)
* Don't mark block that got rejected because of ruleerrors.ErrPrunedBlock as invalid

* Update comment
2021-02-15 15:34:21 +02:00
Ori Newman
2a31074fc4 Make getBlock return an error for invalid blocks (#1528) 2021-02-15 14:39:25 +02:00
stasatdaglabs
d835f72e74 Make AddressManager persistent (#1525)
* Move existing address/bannedAddress functionality to a new addressStore object.

* Implement TestAddressManager.

* Implement serializeAddressKey and deserializeAddressKey.

* Implement serializeNetAddress and deserializeNetAddress.

* Store addresses and banned addresses to disk.

* Implement restoreNotBannedAddresses.

* Fix bannedDatabaseKey.

* Implement restoreBannedAddresses.

* Implement TestRestoreAddressManager.

* Defer closing the database in TestRestoreAddressManager.

* Defer closing the database in TestRestoreAddressManager.

* Add a log.

* Return errors where appropriate.

Co-authored-by: Elichai Turkel <elichai.turkel@gmail.com>
2021-02-14 19:08:06 +02:00
Elichai Turkel
a581dea127 Remove unused utils and structures (#1526)
* Remove unused utils

* Remove unneeded randomness from tests

* Remove more unused functions

* Remove unused protobuf structures

* Fix small errors
2021-02-14 18:13:20 +02:00
stasatdaglabs
7b4b5668e2 Enhance UTXOsChanged notifications (#1522)
* In PropagateUTXOsChangedNotifications, add the given addresses to the address list instead of replacing them.

* Add StopNotifyingUtxosChangedRequestMessage to rpc.proto.

* Implement StopNotifyingUTXOsChanged.

* Optimize convertUTXOChangesToUTXOsChangedNotification.
2021-02-14 12:58:29 +02:00
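A simplified model of the first bullet: the listener keys its watched addresses by script-public-key string in a map, so repeated NotifyUTXOsChanged registrations accumulate, duplicates are no-ops, and StopNotifyingUTXOsChanged just deletes keys. The full version appears in the NotificationListener diff near the end of this compare; the type below is a stripped-down stand-in.

package main

import "fmt"

// listener is a toy model of the RPC notification listener.
type listener struct {
	addresses map[string]bool // scriptPublicKeyString -> subscribed
}

// propagateUTXOsChangedNotifications adds addresses; it never replaces the
// existing set, which is the behaviour change the commit describes.
func (l *listener) propagateUTXOsChangedNotifications(addresses []string) {
	if l.addresses == nil {
		l.addresses = make(map[string]bool, len(addresses))
	}
	for _, address := range addresses {
		l.addresses[address] = true
	}
}

// stopPropagatingUTXOsChangedNotifications removes addresses; unknown ones are ignored.
func (l *listener) stopPropagatingUTXOsChangedNotifications(addresses []string) {
	for _, address := range addresses {
		delete(l.addresses, address)
	}
}

func main() {
	l := &listener{}
	l.propagateUTXOsChangedNotifications([]string{"spkA", "spkB"})
	l.propagateUTXOsChangedNotifications([]string{"spkB", "spkC"}) // adds, doesn't replace
	l.stopPropagatingUTXOsChangedNotifications([]string{"spkA"})
	fmt.Println(len(l.addresses)) // 2: spkB and spkC remain
}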
Elichai Turkel
0e2061d838 Make RPC command GetBlocks prepend lowHash to return value and fix error when lowHash=highHash (#1520)
* Don't error out if antiPastHashesBetween has 2 blocks with the same blue score

* Prepend lowHash to RPC GetBlocks request

* Add a test for GetHashesBetween

* Add a test for GetBlocks RPC call

* Update antipast.go

Co-authored-by: stasatdaglabs <39559713+stasatdaglabs@users.noreply.github.com>
2021-02-11 18:13:46 +02:00
Svarog
0a579e7f78 DownloadHeaders: Instead of using doneChan - close blockHeadersMessageChan. (#1523) 2021-02-11 17:01:15 +02:00
hashdag
1ed6c4c086 Update README.md 2021-02-11 15:04:15 +02:00
Svarog
fea83e5c6c Change Testnet name to kaspad-testnet-2 (#1521)
* Change Testnet name to kaspad-testnet-2

* Fix tests that hardcoded network names
2021-02-11 15:02:25 +02:00
302 changed files with 7780 additions and 4357 deletions

.github/workflows/go-deploy.yml (vendored, new file, 77 lines)
View File

@@ -0,0 +1,77 @@
name: Build and Upload assets
on:
release:
types: [published]
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-latest, windows-latest, macos-latest ]
name: Building For ${{ matrix.os }}
steps:
- name: Fix windows CRLF
run: git config --global core.autocrlf false
- name: Check out code into the Go module directory
uses: actions/checkout@v2
# We need to increase the page size because the tests run out of memory on github CI windows.
# Use the powershell script from this github action: https://github.com/al-cheb/configure-pagefile-action/blob/master/scripts/SetPageFileSize.ps1
# MIT License (MIT) Copyright (c) 2020 Maxim Lobanov and contributors
- name: Increase page size on windows
if: runner.os == 'Windows'
shell: powershell
run: powershell -command .\.github\workflows\SetPageFileSize.ps1
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Build on linux
if: runner.os == 'Linux'
# `-extldflags=-static` - means static link everything, `-tags netgo,osusergo` means use pure go replacements for "os/user" and "net"
# `-s -w` strips the binary to produce smaller size binaries
run: |
go build -v -ldflags="-s -w -extldflags=-static" -tags netgo,osusergo -o ./bin/ ./...
archive="bin/kaspad-${{ github.event.release.tag_name }}-linux.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-linux.zip"
zip -r "${archive}" ./bin/*
echo "archive=${archive}" >> $GITHUB_ENV
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Build on Windows
if: runner.os == 'Windows'
shell: bash
run: |
go build -v -ldflags="-s -w" -o bin/ ./...
archive="bin/kaspad-${{ github.event.release.tag_name }}-win64.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-win64.zip"
powershell "Compress-Archive bin/* \"${archive}\""
echo "archive=${archive}" >> $GITHUB_ENV
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Build on MacOS
if: runner.os == 'macOS'
run: |
go build -v -ldflags="-s -w" -o ./bin/ ./...
archive="bin/kaspad-${{ github.event.release.tag_name }}-osx.zip"
asset_name="kaspad-${{ github.event.release.tag_name }}-osx.zip"
zip -r "${archive}" ./bin/*
echo "archive=${archive}" >> $GITHUB_ENV
echo "asset_name=${asset_name}" >> $GITHUB_ENV
- name: Upload Release Asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ github.event.release.upload_url }}
asset_path: "./${{ env.archive }}"
asset_name: "${{ env.asset_name }}"
asset_content_type: application/zip

.github/workflows/go-race.yml (vendored, new file, 49 lines)
View File

@@ -0,0 +1,49 @@
name: Go-Race
on:
schedule:
- cron: "0 0 * * *"
workflow_dispatch:
jobs:
race_test:
runs-on: ubuntu-20.04
strategy:
fail-fast: false
matrix:
branch: [ master, latest ]
name: Race detection on ${{ matrix.branch }}
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.15
- name: Set scheduled branch name
shell: bash
if: github.event_name == 'schedule'
run: |
if [ "${{ matrix.branch }}" == "master" ]; then
echo "run_on=master" >> $GITHUB_ENV
fi
if [ "${{ matrix.branch }}" == "latest" ]; then
branch=$(git branch -r | grep 'v\([0-9]\+\.\)\([0-9]\+\.\)\([0-9]\+\)-dev' | sort -Vr | head -1 | xargs)
echo "run_on=${branch}" >> $GITHUB_ENV
fi
- name: Set manual branch name
shell: bash
if: github.event_name == 'workflow_dispatch'
run: echo "run_on=${{ github.ref }}" >> $GITHUB_ENV
- name: Test with race detector
shell: bash
run: |
git checkout "${{ env.run_on }}"
git status
go test -race ./...

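For context on what the nightly go test -race job above catches, here is a minimal example of a data race and its race-free counterpart; it is generic Go, unrelated to any specific kaspad package.

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// racyCount increments a plain int from many goroutines — an unsynchronized
// read-modify-write that the race detector reports.
func racyCount(n int) int {
	counter := 0
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter++ // data race: flagged by -race
		}()
	}
	wg.Wait()
	return counter
}

// safeCount does the same work with sync/atomic and passes -race cleanly.
func safeCount(n int) int64 {
	var counter int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&counter, 1)
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println(racyCount(1000)) // may print less than 1000; `go run -race` reports the race
	fmt.Println(safeCount(1000)) // always 1000
}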
View File

@@ -11,6 +11,7 @@ jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ ubuntu-16.04, macos-10.15 ]
name: Testing on on ${{ matrix.os }}
@@ -34,7 +35,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.15
go-version: 1.16
# Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
@@ -48,7 +49,7 @@ jobs:
- name: Test
shell: bash
run: ./build_and_test.sh
run: ./build_and_test.sh -v
coverage:
runs-on: ubuntu-20.04
@@ -60,10 +61,10 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: 1.15
go-version: 1.16
- name: Create coverage file
run: go test -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...
run: go test -v -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...
- name: Upload coverage file
run: bash <(curl -s https://codecov.io/bash)

View File

@@ -18,7 +18,7 @@ Kaspa is an attempt at a proof-of-work cryptocurrency with instant confirmations
## Requirements
Go 1.15 or later.
Go 1.16 or later.
## Installation
@@ -65,7 +65,7 @@ is used for this project.
## Documentation
The documentation is a work-in-progress.
The [documentation](https://github.com/kaspanet/docs) is a work-in-progress
## License

View File

@@ -7,19 +7,17 @@ import (
"runtime"
"time"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/kaspanet/kaspad/infrastructure/db/database/ldb"
"github.com/kaspanet/kaspad/infrastructure/os/signal"
"github.com/kaspanet/kaspad/util/profiling"
"github.com/kaspanet/kaspad/version"
"github.com/kaspanet/kaspad/util/panics"
"github.com/kaspanet/kaspad/infrastructure/config"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/os/execenv"
"github.com/kaspanet/kaspad/infrastructure/os/limits"
"github.com/kaspanet/kaspad/infrastructure/os/signal"
"github.com/kaspanet/kaspad/infrastructure/os/winservice"
"github.com/kaspanet/kaspad/util/panics"
"github.com/kaspanet/kaspad/util/profiling"
"github.com/kaspanet/kaspad/version"
)
const leveldbCacheSizeMiB = 256
@@ -51,6 +49,7 @@ func StartApp() error {
fmt.Fprintln(os.Stderr, err)
return err
}
defer logger.BackendLog.Close()
defer panics.HandlePanic(log, "MAIN", nil)
app := &kaspadApp{cfg: cfg}

View File

@@ -164,11 +164,7 @@ func outpointToDomainOutpoint(outpoint *Outpoint) *externalapi.DomainOutpoint {
func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externalapi.DomainTransaction, error) {
inputs := make([]*externalapi.DomainTransactionInput, len(rpcTransaction.Inputs))
for i, input := range rpcTransaction.Inputs {
transactionIDBytes, err := hex.DecodeString(input.PreviousOutpoint.TransactionID)
if err != nil {
return nil, err
}
transactionID, err := transactionid.FromBytes(transactionIDBytes)
transactionID, err := transactionid.FromString(input.PreviousOutpoint.TransactionID)
if err != nil {
return nil, err
}
@@ -198,19 +194,11 @@ func RPCTransactionToDomainTransaction(rpcTransaction *RPCTransaction) (*externa
}
}
subnetworkIDBytes, err := hex.DecodeString(rpcTransaction.SubnetworkID)
subnetworkID, err := subnetworks.FromString(rpcTransaction.SubnetworkID)
if err != nil {
return nil, err
}
subnetworkID, err := subnetworks.FromBytes(subnetworkIDBytes)
if err != nil {
return nil, err
}
payloadHashBytes, err := hex.DecodeString(rpcTransaction.PayloadHash)
if err != nil {
return nil, err
}
payloadHash, err := externalapi.NewDomainHashFromByteSlice(payloadHashBytes)
payloadHash, err := externalapi.NewDomainHashFromString(rpcTransaction.PayloadHash)
if err != nil {
return nil, err
}
@@ -255,7 +243,7 @@ func DomainTransactionToRPCTransaction(transaction *externalapi.DomainTransactio
ScriptPublicKey: &RPCScriptPublicKey{Script: scriptPublicKey, Version: output.ScriptPublicKey.Version},
}
}
subnetworkID := hex.EncodeToString(transaction.SubnetworkID[:])
subnetworkID := transaction.SubnetworkID.String()
payloadHash := transaction.PayloadHash.String()
payload := hex.EncodeToString(transaction.Payload)
return &RPCTransaction{

View File

@@ -117,6 +117,8 @@ const (
CmdNotifyUTXOsChangedRequestMessage
CmdNotifyUTXOsChangedResponseMessage
CmdUTXOsChangedNotificationMessage
CmdStopNotifyingUTXOsChangedRequestMessage
CmdStopNotifyingUTXOsChangedResponseMessage
CmdGetUTXOsByAddressesRequestMessage
CmdGetUTXOsByAddressesResponseMessage
CmdGetVirtualSelectedParentBlueScoreRequestMessage
@@ -130,6 +132,11 @@ const (
CmdUnbanResponseMessage
CmdGetInfoRequestMessage
CmdGetInfoResponseMessage
CmdNotifyPruningPointUTXOSetOverrideRequestMessage
CmdNotifyPruningPointUTXOSetOverrideResponseMessage
CmdPruningPointUTXOSetOverrideNotificationMessage
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage
)
// ProtocolMessageCommandToString maps all MessageCommands to their string representation
@@ -221,6 +228,8 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdNotifyUTXOsChangedRequestMessage: "NotifyUTXOsChangedRequest",
CmdNotifyUTXOsChangedResponseMessage: "NotifyUTXOsChangedResponse",
CmdUTXOsChangedNotificationMessage: "UTXOsChangedNotification",
CmdStopNotifyingUTXOsChangedRequestMessage: "StopNotifyingUTXOsChangedRequest",
CmdStopNotifyingUTXOsChangedResponseMessage: "StopNotifyingUTXOsChangedResponse",
CmdGetUTXOsByAddressesRequestMessage: "GetUTXOsByAddressesRequest",
CmdGetUTXOsByAddressesResponseMessage: "GetUTXOsByAddressesResponse",
CmdGetVirtualSelectedParentBlueScoreRequestMessage: "GetVirtualSelectedParentBlueScoreRequest",
@@ -232,8 +241,13 @@ var RPCMessageCommandToString = map[MessageCommand]string{
CmdBanResponseMessage: "BanResponse",
CmdUnbanRequestMessage: "UnbanRequest",
CmdUnbanResponseMessage: "UnbanResponse",
CmdGetInfoRequestMessage: "GetInfoRequestMessage",
CmdGetInfoResponseMessage: "GeInfoResponseMessage",
CmdGetInfoRequestMessage: "GetInfoRequest",
CmdGetInfoResponseMessage: "GeInfoResponse",
CmdNotifyPruningPointUTXOSetOverrideRequestMessage: "NotifyPruningPointUTXOSetOverrideRequest",
CmdNotifyPruningPointUTXOSetOverrideResponseMessage: "NotifyPruningPointUTXOSetOverrideResponse",
CmdPruningPointUTXOSetOverrideNotificationMessage: "PruningPointUTXOSetOverrideNotification",
CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: "StopNotifyingPruningPointUTXOSetOverrideRequest",
CmdStopNotifyingPruningPointUTXOSetOverrideResponseMessage: "StopNotifyingPruningPointUTXOSetOverrideResponse",
}
// Message is an interface that describes a kaspa message. A type that

View File

@@ -11,15 +11,11 @@ import (
"github.com/davecgh/go-spew/spew"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/util/mstime"
"github.com/kaspanet/kaspad/util/random"
)
// TestBlockHeader tests the MsgBlockHeader API.
func TestBlockHeader(t *testing.T) {
nonce, err := random.Uint64()
if err != nil {
t.Errorf("random.Uint64: Error generating nonce: %v", err)
}
nonce := uint64(0xba4d87a69924a93d)
hashes := []*externalapi.DomainHash{mainnetGenesisHash, simnetGenesisHash}

View File

@@ -6,17 +6,12 @@ package appmessage
import (
"testing"
"github.com/kaspanet/kaspad/util/random"
)
// TestPing tests the MsgPing API against the latest protocol version.
func TestPing(t *testing.T) {
// Ensure we get the same nonce back out.
nonce, err := random.Uint64()
if err != nil {
t.Errorf("random.Uint64: Error generating nonce: %v", err)
}
nonce := uint64(0x61c2c5535902862)
msg := NewMsgPing(nonce)
if msg.Nonce != nonce {
t.Errorf("NewMsgPing: wrong nonce - got %v, want %v",

View File

@@ -6,16 +6,11 @@ package appmessage
import (
"testing"
"github.com/kaspanet/kaspad/util/random"
)
// TestPongLatest tests the MsgPong API against the latest protocol version.
func TestPongLatest(t *testing.T) {
nonce, err := random.Uint64()
if err != nil {
t.Errorf("random.Uint64: error generating nonce: %v", err)
}
nonce := uint64(0x1a05b581a5182c)
msg := NewMsgPong(nonce)
if msg.Nonce != nonce {
t.Errorf("NewMsgPong: wrong nonce - got %v, want %v",

View File

@@ -55,6 +55,7 @@ type BlockVerboseData struct {
Bits string
Difficulty float64
ParentHashes []string
ChildrenHashes []string
SelectedParentHash string
BlueScore uint64
IsHeaderOnly bool
@@ -104,4 +105,5 @@ type ScriptPubKeyResult struct {
Hex string
Type string
Address string
Version uint16
}

View File

@@ -27,6 +27,7 @@ type GetBlockDAGInfoResponseMessage struct {
VirtualParentHashes []string
Difficulty float64
PastMedianTime int64
PruningPointHash string
Error *RPCError
}

View File

@@ -0,0 +1,83 @@
package appmessage
// NotifyPruningPointUTXOSetOverrideRequestMessage is an appmessage corresponding to
// its respective RPC message
type NotifyPruningPointUTXOSetOverrideRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *NotifyPruningPointUTXOSetOverrideRequestMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideRequestMessage
}
// NewNotifyPruningPointUTXOSetOverrideRequestMessage returns a instance of the message
func NewNotifyPruningPointUTXOSetOverrideRequestMessage() *NotifyPruningPointUTXOSetOverrideRequestMessage {
return &NotifyPruningPointUTXOSetOverrideRequestMessage{}
}
// NotifyPruningPointUTXOSetOverrideResponseMessage is an appmessage corresponding to
// its respective RPC message
type NotifyPruningPointUTXOSetOverrideResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *NotifyPruningPointUTXOSetOverrideResponseMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideResponseMessage
}
// NewNotifyPruningPointUTXOSetOverrideResponseMessage returns a instance of the message
func NewNotifyPruningPointUTXOSetOverrideResponseMessage() *NotifyPruningPointUTXOSetOverrideResponseMessage {
return &NotifyPruningPointUTXOSetOverrideResponseMessage{}
}
// PruningPointUTXOSetOverrideNotificationMessage is an appmessage corresponding to
// its respective RPC message
type PruningPointUTXOSetOverrideNotificationMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *PruningPointUTXOSetOverrideNotificationMessage) Command() MessageCommand {
return CmdPruningPointUTXOSetOverrideNotificationMessage
}
// NewPruningPointUTXOSetOverrideNotificationMessage returns a instance of the message
func NewPruningPointUTXOSetOverrideNotificationMessage() *PruningPointUTXOSetOverrideNotificationMessage {
return &PruningPointUTXOSetOverrideNotificationMessage{}
}
// StopNotifyingPruningPointUTXOSetOverrideRequestMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingPruningPointUTXOSetOverrideRequestMessage struct {
baseMessage
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingPruningPointUTXOSetOverrideRequestMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideRequestMessage
}
// NewStopNotifyingPruningPointUTXOSetOverrideRequestMessage returns a instance of the message
func NewStopNotifyingPruningPointUTXOSetOverrideRequestMessage() *StopNotifyingPruningPointUTXOSetOverrideRequestMessage {
return &StopNotifyingPruningPointUTXOSetOverrideRequestMessage{}
}
// StopNotifyingPruningPointUTXOSetOverrideResponseMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingPruningPointUTXOSetOverrideResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingPruningPointUTXOSetOverrideResponseMessage) Command() MessageCommand {
return CmdNotifyPruningPointUTXOSetOverrideResponseMessage
}
// NewStopNotifyingPruningPointUTXOSetOverrideResponseMessage returns a instance of the message
func NewStopNotifyingPruningPointUTXOSetOverrideResponseMessage() *StopNotifyingPruningPointUTXOSetOverrideResponseMessage {
return &StopNotifyingPruningPointUTXOSetOverrideResponseMessage{}
}

View File

@@ -0,0 +1,37 @@
package appmessage
// StopNotifyingUTXOsChangedRequestMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingUTXOsChangedRequestMessage struct {
baseMessage
Addresses []string
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingUTXOsChangedRequestMessage) Command() MessageCommand {
return CmdStopNotifyingUTXOsChangedRequestMessage
}
// NewStopNotifyingUTXOsChangedRequestMessage returns a instance of the message
func NewStopNotifyingUTXOsChangedRequestMessage(addresses []string) *StopNotifyingUTXOsChangedRequestMessage {
return &StopNotifyingUTXOsChangedRequestMessage{
Addresses: addresses,
}
}
// StopNotifyingUTXOsChangedResponseMessage is an appmessage corresponding to
// its respective RPC message
type StopNotifyingUTXOsChangedResponseMessage struct {
baseMessage
Error *RPCError
}
// Command returns the protocol command string for the message
func (msg *StopNotifyingUTXOsChangedResponseMessage) Command() MessageCommand {
return CmdStopNotifyingUTXOsChangedResponseMessage
}
// NewStopNotifyingUTXOsChangedResponseMessage returns a instance of the message
func NewStopNotifyingUTXOsChangedResponseMessage() *StopNotifyingUTXOsChangedResponseMessage {
return &StopNotifyingUTXOsChangedResponseMessage{}
}

View File

@@ -90,14 +90,18 @@ func NewComponentManager(cfg *config.Config, db infrastructuredatabase.Database,
return nil, err
}
addressManager, err := addressmanager.New(addressmanager.NewConfig(cfg))
addressManager, err := addressmanager.New(addressmanager.NewConfig(cfg), db)
if err != nil {
return nil, err
}
var utxoIndex *utxoindex.UTXOIndex
if cfg.UTXOIndex {
utxoIndex = utxoindex.New(domain.Consensus(), db)
utxoIndex, err = utxoindex.New(domain.Consensus(), db)
if err != nil {
return nil, err
}
log.Infof("UTXO index started")
}
@@ -144,6 +148,7 @@ func setupRPC(
shutDownChan,
)
protocolManager.SetOnBlockAddedToDAGHandler(rpcManager.NotifyBlockAddedToDAG)
protocolManager.SetOnPruningPointUTXOSetOverrideHandler(rpcManager.NotifyPruningPointUTXOSetOverride)
return rpcManager
}

View File

@@ -7,8 +7,6 @@ package app
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.KASD)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("KASD")

View File

@@ -2,6 +2,7 @@ package flowcontext
import (
peerpkg "github.com/kaspanet/kaspad/app/protocol/peer"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/pkg/errors"
@@ -56,6 +57,15 @@ func (f *FlowContext) OnNewBlock(block *externalapi.DomainBlock,
return nil
}
// OnPruningPointUTXOSetOverride calls the handler function whenever the UTXO set
// resets due to pruning point change via IBD.
func (f *FlowContext) OnPruningPointUTXOSetOverride() error {
if f.onPruningPointUTXOSetOverrideHandler != nil {
return f.onPruningPointUTXOSetOverrideHandler()
}
return nil
}
func (f *FlowContext) broadcastTransactionsAfterBlockAdded(
block *externalapi.DomainBlock, transactionsAcceptedToMempool []*externalapi.DomainTransaction) error {
@@ -98,6 +108,10 @@ func (f *FlowContext) SharedRequestedBlocks() *blockrelay.SharedRequestedBlocks
// AddBlock adds the given block to the DAG and propagates it.
func (f *FlowContext) AddBlock(block *externalapi.DomainBlock) error {
if len(block.Transactions) == 0 {
return protocolerrors.Errorf(false, "cannot add header only block")
}
blockInsertionResult, err := f.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if errors.As(err, &ruleerrors.RuleError{}) {

View File

@@ -18,7 +18,7 @@ import (
func (*FlowContext) HandleError(err error, flowName string, isStopping *uint32, errChan chan<- error) {
isErrRouteClosed := errors.Is(err, router.ErrRouteClosed)
if !isErrRouteClosed {
if protocolErr := &(protocolerrors.ProtocolError{}); !errors.As(err, &protocolErr) {
if protocolErr := (protocolerrors.ProtocolError{}); !errors.As(err, &protocolErr) {
panic(err)
}

View File

@@ -23,6 +23,10 @@ import (
// when a block is added to the DAG
type OnBlockAddedToDAGHandler func(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
// OnPruningPointUTXOSetOverrideHandler is a handle function that's triggered whenever the UTXO set
// resets due to pruning point change via IBD.
type OnPruningPointUTXOSetOverrideHandler func() error
// OnTransactionAddedToMempoolHandler is a handler function that's triggered
// when a transaction is added to the mempool
type OnTransactionAddedToMempoolHandler func()
@@ -38,8 +42,9 @@ type FlowContext struct {
timeStarted int64
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
onBlockAddedToDAGHandler OnBlockAddedToDAGHandler
onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler
onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler
transactionsToRebroadcastLock sync.Mutex
transactionsToRebroadcast map[externalapi.DomainTransactionID]*externalapi.DomainTransaction
@@ -82,6 +87,11 @@ func (f *FlowContext) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler OnBlo
f.onBlockAddedToDAGHandler = onBlockAddedToDAGHandler
}
// SetOnPruningPointUTXOSetOverrideHandler sets the onPruningPointUTXOSetOverrideHandler handler
func (f *FlowContext) SetOnPruningPointUTXOSetOverrideHandler(onPruningPointUTXOSetOverrideHandler OnPruningPointUTXOSetOverrideHandler) {
f.onPruningPointUTXOSetOverrideHandler = onPruningPointUTXOSetOverrideHandler
}
// SetOnTransactionAddedToMempoolHandler sets the onTransactionAddedToMempool handler
func (f *FlowContext) SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler OnTransactionAddedToMempoolHandler) {
f.onTransactionAddedToMempoolHandler = onTransactionAddedToMempoolHandler

View File

@@ -2,8 +2,6 @@ package flowcontext
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("PROT")

View File

@@ -35,6 +35,5 @@ func ReceiveAddresses(context ReceiveAddressesContext, incomingRoute *router.Rou
return protocolerrors.Errorf(true, "address count exceeded %d", addressmanager.GetAddressesMax)
}
context.AddressManager().AddAddresses(msgAddresses.AddressList...)
return nil
return context.AddressManager().AddAddresses(msgAddresses.AddressList...)
}

View File

@@ -24,6 +24,7 @@ type RelayInvsContext interface {
Domain() domain.Domain
Config() *config.Config
OnNewBlock(block *externalapi.DomainBlock, blockInsertionResult *externalapi.BlockInsertionResult) error
OnPruningPointUTXOSetOverride() error
SharedRequestedBlocks() *SharedRequestedBlocks
Broadcast(message appmessage.Message) error
AddOrphan(orphanBlock *externalapi.DomainBlock)
@@ -104,9 +105,19 @@ func (flow *handleRelayInvsFlow) start() error {
continue
}
err = flow.banIfBlockIsHeaderOnly(block)
if err != nil {
return err
}
log.Debugf("Processing block %s", inv.Hash)
missingParents, blockInsertionResult, err := flow.processBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrPrunedBlock) {
log.Infof("Ignoring pruned block %s", inv.Hash)
continue
}
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {
log.Infof("Ignoring duplicate block %s", inv.Hash)
continue
@@ -135,6 +146,15 @@ func (flow *handleRelayInvsFlow) start() error {
}
}
func (flow *handleRelayInvsFlow) banIfBlockIsHeaderOnly(block *externalapi.DomainBlock) error {
if len(block.Transactions) == 0 {
return protocolerrors.Errorf(true, "sent header of %s block where expected block with body",
consensushashing.BlockHash(block))
}
return nil
}
func (flow *handleRelayInvsFlow) readInv() (*appmessage.MsgInvRelayBlock, error) {
if len(flow.invsQueue) > 0 {
var inv *appmessage.MsgInvRelayBlock

View File

@@ -17,7 +17,7 @@ type RequestIBDBlocksContext interface {
Domain() domain.Domain
}
type handleRequestBlocksFlow struct {
type handleRequestHeadersFlow struct {
RequestIBDBlocksContext
incomingRoute, outgoingRoute *router.Route
peer *peer.Peer
@@ -27,7 +27,7 @@ type handleRequestBlocksFlow struct {
func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router.Route,
outgoingRoute *router.Route, peer *peer.Peer) error {
flow := &handleRequestBlocksFlow{
flow := &handleRequestHeadersFlow{
RequestIBDBlocksContext: context,
incomingRoute: incomingRoute,
outgoingRoute: outgoingRoute,
@@ -36,7 +36,7 @@ func HandleRequestHeaders(context RequestIBDBlocksContext, incomingRoute *router
return flow.start()
}
func (flow *handleRequestBlocksFlow) start() error {
func (flow *handleRequestHeadersFlow) start() error {
for {
lowHash, highHash, err := receiveRequestHeaders(flow.incomingRoute)
if err != nil {

View File

@@ -221,7 +221,6 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
// headers
blockHeadersMessageChan := make(chan *appmessage.BlockHeadersMessage, 2)
errChan := make(chan error)
doneChan := make(chan interface{})
spawn("handleRelayInvsFlow-downloadHeaders", func() {
for {
blockHeadersMessage, doneIBD, err := flow.receiveHeaders()
@@ -230,7 +229,7 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
return
}
if doneIBD {
doneChan <- struct{}{}
close(blockHeadersMessageChan)
return
}
@@ -246,7 +245,10 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
for {
select {
case blockHeadersMessage := <-blockHeadersMessageChan:
case blockHeadersMessage, ok := <-blockHeadersMessageChan:
if !ok {
return nil
}
for _, header := range blockHeadersMessage.BlockHeaders {
err = flow.processHeader(header)
if err != nil {
@@ -255,8 +257,6 @@ func (flow *handleRelayInvsFlow) downloadHeaders(highestSharedBlockHash *externa
}
case err := <-errChan:
return err
case <-doneChan:
return nil
}
}
}
@@ -410,6 +410,11 @@ func (flow *handleRelayInvsFlow) fetchMissingUTXOSet(pruningPointHash *externala
return false, protocolerrors.ConvertToBanningProtocolErrorIfRuleError(err, "error with pruning point UTXO set")
}
err = flow.OnPruningPointUTXOSetOverride()
if err != nil {
return false, err
}
return true, nil
}
@@ -533,6 +538,11 @@ func (flow *handleRelayInvsFlow) syncMissingBlockBodies(highHash *externalapi.Do
return protocolerrors.Errorf(true, "expected block %s but got %s", expectedHash, blockHash)
}
err = flow.banIfBlockIsHeaderOnly(block)
if err != nil {
return err
}
blockInsertionResult, err := flow.Domain().Consensus().ValidateAndInsertBlock(block)
if err != nil {
if errors.Is(err, ruleerrors.ErrDuplicateBlock) {

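The downloadHeaders hunks above drop the separate doneChan and instead close blockHeadersMessageChan, letting the consumer detect completion from the channel receive's second return value. A self-contained sketch of that pattern with placeholder types:

package main

import "fmt"

type header struct{ height int }

// produceHeaders sends a few headers and then closes the channel — closing is
// the "done" signal, so no extra done channel is needed.
func produceHeaders(count int) <-chan *header {
	headers := make(chan *header, 2)
	go func() {
		defer close(headers)
		for i := 0; i < count; i++ {
			headers <- &header{height: i}
		}
	}()
	return headers
}

// consumeHeaders processes headers until the channel is closed or an error arrives.
func consumeHeaders(headers <-chan *header, errs <-chan error) error {
	for {
		select {
		case h, ok := <-headers:
			if !ok {
				return nil // channel closed: the producer is done
			}
			fmt.Println("processing header", h.height)
		case err := <-errs:
			return err
		}
	}
}

func main() {
	errs := make(chan error)
	if err := consumeHeaders(produceHeaders(3), errs); err != nil {
		fmt.Println("error:", err)
	}
}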
View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -89,7 +89,10 @@ func HandleHandshake(context HandleHandshakeContext, netConnection *netadapter.N
}
if peerAddress != nil {
context.AddressManager().AddAddresses(peerAddress)
err := context.AddressManager().AddAddresses(peerAddress)
if err != nil {
return nil, err
}
}
return peer, nil
}
@@ -104,7 +107,7 @@ func handleError(err error, flowName string, isStopping *uint32, errChan chan er
return
}
if protocolErr := &(protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if protocolErr := (protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
log.Errorf("Handshake protocol error from %s: %s", flowName, err)
if atomic.AddUint32(isStopping, 1) == 1 {
errChan <- err

View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -1,14 +1,15 @@
package testing
import (
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/pkg/errors"
"strings"
"testing"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/pkg/errors"
)
func checkFlowError(t *testing.T, err error, isProtocolError bool, shouldBan bool, contains string) {
pErr := &protocolerrors.ProtocolError{}
pErr := protocolerrors.ProtocolError{}
if errors.As(err, &pErr) != isProtocolError {
t.Fatalf("Unexepcted error %+v", err)
}

View File

@@ -24,6 +24,19 @@ import (
"github.com/pkg/errors"
)
var headerOnlyBlock = &externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
constants.MaxBlockVersion,
[]*externalapi.DomainHash{externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})},
&externalapi.DomainHash{},
&externalapi.DomainHash{},
&externalapi.DomainHash{},
0,
0,
0,
),
}
var orphanBlock = &externalapi.DomainBlock{
Header: blockheader.NewImmutableBlockHeader(
constants.MaxBlockVersion,
@@ -35,6 +48,7 @@ var orphanBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var validPruningPointBlock = &externalapi.DomainBlock{
@@ -48,6 +62,7 @@ var validPruningPointBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var invalidPruningPointBlock = &externalapi.DomainBlock{
@@ -61,6 +76,7 @@ var invalidPruningPointBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var unexpectedIBDBlock = &externalapi.DomainBlock{
@@ -74,6 +90,7 @@ var unexpectedIBDBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var invalidBlock = &externalapi.DomainBlock{
@@ -87,6 +104,7 @@ var invalidBlock = &externalapi.DomainBlock{
0,
0,
),
Transactions: []*externalapi.DomainTransaction{{}},
}
var unknownBlockHash = externalapi.NewDomainHashFromByteArray(&[externalapi.DomainHashSize]byte{1})
@@ -95,6 +113,7 @@ var validPruningPointHash = consensushashing.BlockHash(validPruningPointBlock)
var invalidBlockHash = consensushashing.BlockHash(invalidBlock)
var invalidPruningPointHash = consensushashing.BlockHash(invalidPruningPointBlock)
var orphanBlockHash = consensushashing.BlockHash(orphanBlock)
var headerOnlyBlockHash = consensushashing.BlockHash(headerOnlyBlock)
type fakeRelayInvsContext struct {
testName string
@@ -110,6 +129,18 @@ type fakeRelayInvsContext struct {
rwLock sync.RWMutex
}
func (f *fakeRelayInvsContext) GetBlockChildren(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
panic(errors.Errorf("called unimplemented function from test '%s'", f.testName))
}
func (f *fakeRelayInvsContext) OnPruningPointUTXOSetOverride() error {
return nil
}
func (f *fakeRelayInvsContext) GetVirtualUTXOs(expectedVirtualParents []*externalapi.DomainHash, fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {
panic(errors.Errorf("called unimplemented function from test '%s'", f.testName))
}
func (f *fakeRelayInvsContext) Anticone(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
panic(errors.Errorf("called unimplemented function from test '%s'", f.testName))
}
@@ -450,6 +481,29 @@ func TestHandleRelayInvs(t *testing.T) {
expectsBan: true,
expectsErrToContain: "got unrequested block",
},
{
name: "sending header only block on relay",
funcToExecute: func(t *testing.T, incomingRoute, outgoingRoute *router.Route, context *fakeRelayInvsContext) {
err := incomingRoute.Enqueue(appmessage.NewMsgInvBlock(headerOnlyBlockHash))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
}
msg, err := outgoingRoute.DequeueWithTimeout(time.Second)
if err != nil {
t.Fatalf("DequeueWithTimeout: %+v", err)
}
_ = msg.(*appmessage.MsgRequestRelayBlocks)
err = incomingRoute.Enqueue(appmessage.DomainBlockToMsgBlock(headerOnlyBlock))
if err != nil {
t.Fatalf("Enqueue: %+v", err)
}
},
expectsProtocolError: true,
expectsBan: true,
expectsErrToContain: "block where expected block with body",
},
{
name: "sending invalid block",
funcToExecute: func(t *testing.T, incomingRoute, outgoingRoute *router.Route, context *fakeRelayInvsContext) {

View File

@@ -191,7 +191,7 @@ func (flow *handleRelayedTransactionsFlow) receiveTransactions(requestedTransact
continue
}
return protocolerrors.Errorf(true, "rejected transaction %s", txID)
return protocolerrors.Errorf(true, "rejected transaction %s: %s", txID, ruleErr)
}
err = flow.broadcastAcceptedTransactions([]*externalapi.DomainTransactionID{txID})
if err != nil {

View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var log = logger.RegisterSubSystem("PROT")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -69,6 +69,11 @@ func (m *Manager) SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler flowconte
m.context.SetOnBlockAddedToDAGHandler(onBlockAddedToDAGHandler)
}
// SetOnPruningPointUTXOSetOverrideHandler sets the OnPruningPointUTXOSetOverride handler
func (m *Manager) SetOnPruningPointUTXOSetOverrideHandler(onPruningPointUTXOSetOverrideHandler flowcontext.OnPruningPointUTXOSetOverrideHandler) {
m.context.SetOnPruningPointUTXOSetOverrideHandler(onPruningPointUTXOSetOverrideHandler)
}
// SetOnTransactionAddedToMempoolHandler sets the onTransactionAddedToMempool handler
func (m *Manager) SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler flowcontext.OnTransactionAddedToMempoolHandler) {
m.context.SetOnTransactionAddedToMempoolHandler(onTransactionAddedToMempoolHandler)

View File

@@ -2,8 +2,6 @@ package peer
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.PROT)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("PROT")

View File

@@ -1,9 +1,10 @@
package protocol
import (
"sync/atomic"
"github.com/kaspanet/kaspad/app/protocol/flows/rejects"
"github.com/kaspanet/kaspad/infrastructure/network/connmanager"
"sync/atomic"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/flows/addressexchange"
@@ -58,8 +59,20 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
peer, err := handshake.HandleHandshake(m.context, netConnection, receiveVersionRoute,
sendVersionRoute, router.OutgoingRoute())
if err != nil {
m.handleError(err, netConnection, router.OutgoingRoute())
// non-blocking read from channel
select {
case innerError := <-errChan:
if errors.Is(err, routerpkg.ErrRouteClosed) {
m.handleError(innerError, netConnection, router.OutgoingRoute())
} else {
log.Errorf("Peer %s sent invalid message: %s", netConnection, innerError)
m.handleError(err, netConnection, router.OutgoingRoute())
}
default:
m.handleError(err, netConnection, router.OutgoingRoute())
}
return
}
defer m.context.RemoveFromPeers(peer)
@@ -75,7 +88,7 @@ func (m *Manager) routerInitializer(router *routerpkg.Router, netConnection *net
}
func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection, outgoingRoute *routerpkg.Route) {
if protocolErr := &(protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if protocolErr := (protocolerrors.ProtocolError{}); errors.As(err, &protocolErr) {
if !m.context.Config().DisableBanning && protocolErr.ShouldBan {
log.Warnf("Banning %s (reason: %s)", netConnection, protocolErr.Cause)
@@ -89,7 +102,7 @@ func (m *Manager) handleError(err error, netConnection *netadapter.NetConnection
panic(err)
}
}
log.Debugf("Disconnecting from %s (reason: %s)", netConnection, protocolErr.Cause)
log.Infof("Disconnecting from %s (reason: %s)", netConnection, protocolErr.Cause)
netConnection.Disconnect()
return
}

View File

@@ -12,19 +12,19 @@ type ProtocolError struct {
Cause error
}
func (e *ProtocolError) Error() string {
func (e ProtocolError) Error() string {
return e.Cause.Error()
}
// Unwrap returns the cause of ProtocolError, to be used with `errors.Unwrap()`
func (e *ProtocolError) Unwrap() error {
func (e ProtocolError) Unwrap() error {
return e.Cause
}
// Errorf formats according to a format specifier and returns the string
// as a ProtocolError.
func Errorf(shouldBan bool, format string, args ...interface{}) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.Errorf(format, args...),
}
@@ -33,7 +33,7 @@ func Errorf(shouldBan bool, format string, args ...interface{}) error {
// New returns a ProtocolError with the supplied message.
// New also records the stack trace at the point it was called.
func New(shouldBan bool, message string) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.New(message),
}
@@ -41,7 +41,7 @@ func New(shouldBan bool, message string) error {
// Wrap wraps the given error and returns it as a ProtocolError.
func Wrap(shouldBan bool, err error, message string) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.Wrap(err, message),
}
@@ -49,7 +49,7 @@ func Wrap(shouldBan bool, err error, message string) error {
// Wrapf wraps the given error with the given format and returns it as a ProtocolError.
func Wrapf(shouldBan bool, err error, format string, args ...interface{}) error {
return &ProtocolError{
return ProtocolError{
ShouldBan: shouldBan,
Cause: errors.Wrapf(err, format, args...),
}

View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.RPCS)
var log = logger.RegisterSubSystem("RPCS")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -78,6 +78,22 @@ func (m *Manager) NotifyBlockAddedToDAG(block *externalapi.DomainBlock, blockIns
return m.context.NotificationManager.NotifyBlockAdded(blockAddedNotification)
}
// NotifyPruningPointUTXOSetOverride notifies the manager whenever the UTXO index
// resets due to pruning point change via IBD.
func (m *Manager) NotifyPruningPointUTXOSetOverride() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyPruningPointUTXOSetOverride")
defer onEnd()
if m.context.Config.UTXOIndex {
err := m.notifyPruningPointUTXOSetOverride()
if err != nil {
return err
}
}
return nil
}
// NotifyFinalityConflict notifies the manager that there's a finality conflict in the DAG
func (m *Manager) NotifyFinalityConflict(violatingBlockHash string) error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyFinalityConflict")
@@ -100,13 +116,25 @@ func (m *Manager) notifyUTXOsChanged(blockInsertionResult *externalapi.BlockInse
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyUTXOsChanged")
defer onEnd()
utxoIndexChanges, err := m.context.UTXOIndex.Update(blockInsertionResult.VirtualSelectedParentChainChanges)
utxoIndexChanges, err := m.context.UTXOIndex.Update(blockInsertionResult)
if err != nil {
return err
}
return m.context.NotificationManager.NotifyUTXOsChanged(utxoIndexChanges)
}
func (m *Manager) notifyPruningPointUTXOSetOverride() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.notifyPruningPointUTXOSetOverride")
defer onEnd()
err := m.context.UTXOIndex.Reset()
if err != nil {
return err
}
return m.context.NotificationManager.NotifyPruningPointUTXOSetOverride()
}
func (m *Manager) notifyVirtualSelectedParentBlueScoreChanged() error {
onEnd := logger.LogAndMeasureExecutionTime(log, "RPCManager.NotifyVirtualSelectedParentBlueScoreChanged")
defer onEnd()

View File

@@ -35,12 +35,15 @@ var handlers = map[appmessage.MessageCommand]handler{
appmessage.CmdShutDownRequestMessage: rpchandlers.HandleShutDown,
appmessage.CmdGetHeadersRequestMessage: rpchandlers.HandleGetHeaders,
appmessage.CmdNotifyUTXOsChangedRequestMessage: rpchandlers.HandleNotifyUTXOsChanged,
appmessage.CmdStopNotifyingUTXOsChangedRequestMessage: rpchandlers.HandleStopNotifyingUTXOsChanged,
appmessage.CmdGetUTXOsByAddressesRequestMessage: rpchandlers.HandleGetUTXOsByAddresses,
appmessage.CmdGetVirtualSelectedParentBlueScoreRequestMessage: rpchandlers.HandleGetVirtualSelectedParentBlueScore,
appmessage.CmdNotifyVirtualSelectedParentBlueScoreChangedRequestMessage: rpchandlers.HandleNotifyVirtualSelectedParentBlueScoreChanged,
appmessage.CmdBanRequestMessage: rpchandlers.HandleBan,
appmessage.CmdUnbanRequestMessage: rpchandlers.HandleUnban,
appmessage.CmdGetInfoRequestMessage: rpchandlers.HandleGetInfo,
appmessage.CmdNotifyPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleNotifyPruningPointUTXOSetOverrideRequest,
appmessage.CmdStopNotifyingPruningPointUTXOSetOverrideRequestMessage: rpchandlers.HandleStopNotifyingPruningPointUTXOSetOverrideRequest,
}
func (m *Manager) routerInitializer(router *router.Router, netConnection *netadapter.NetConnection) {

View File

@@ -2,8 +2,6 @@ package rpccontext
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.RPCS)
var spawn = panics.GoroutineWrapperFunc(log)
var log = logger.RegisterSubSystem("RPCS")

View File

@@ -30,8 +30,9 @@ type NotificationListener struct {
propagateFinalityConflictResolvedNotifications bool
propagateUTXOsChangedNotifications bool
propagateVirtualSelectedParentBlueScoreChangedNotifications bool
propagatePruningPointUTXOSetOverrideNotifications bool
propagateUTXOsChangedNotificationAddresses []*UTXOsChangedNotificationAddress
propagateUTXOsChangedNotificationAddresses map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress
}
// NewNotificationManager creates a new NotificationManager
@@ -180,6 +181,23 @@ func (nm *NotificationManager) NotifyVirtualSelectedParentBlueScoreChanged(
return nil
}
// NotifyPruningPointUTXOSetOverride notifies the notification manager that the UTXO index
// reset due to pruning point change via IBD.
func (nm *NotificationManager) NotifyPruningPointUTXOSetOverride() error {
nm.RLock()
defer nm.RUnlock()
for router, listener := range nm.listeners {
if listener.propagatePruningPointUTXOSetOverrideNotifications {
err := router.OutgoingRoute().Enqueue(appmessage.NewPruningPointUTXOSetOverrideNotificationMessage())
if err != nil {
return err
}
}
}
return nil
}
func newNotificationListener() *NotificationListener {
return &NotificationListener{
propagateBlockAddedNotifications: false,
@@ -188,6 +206,7 @@ func newNotificationListener() *NotificationListener {
propagateFinalityConflictResolvedNotifications: false,
propagateUTXOsChangedNotifications: false,
propagateVirtualSelectedParentBlueScoreChangedNotifications: false,
propagatePruningPointUTXOSetOverrideNotifications: false,
}
}
@@ -216,34 +235,70 @@ func (nl *NotificationListener) PropagateFinalityConflictResolvedNotifications()
}
// PropagateUTXOsChangedNotifications instructs the listener to send UTXOs changed notifications
// to the remote listener
// to the remote listener for the given addresses. Subsequent calls instruct the listener to
// send UTXOs changed notifications for those addresses along with the old ones. Duplicate addresses
// are ignored.
func (nl *NotificationListener) PropagateUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
nl.propagateUTXOsChangedNotifications = true
nl.propagateUTXOsChangedNotificationAddresses = addresses
if !nl.propagateUTXOsChangedNotifications {
nl.propagateUTXOsChangedNotifications = true
nl.propagateUTXOsChangedNotificationAddresses =
make(map[utxoindex.ScriptPublicKeyString]*UTXOsChangedNotificationAddress, len(addresses))
}
for _, address := range addresses {
nl.propagateUTXOsChangedNotificationAddresses[address.ScriptPublicKeyString] = address
}
}
// StopPropagatingUTXOsChangedNotifications instructs the listener to stop sending UTXOs
// changed notifications to the remote listener for the given addresses. Addresses for which
// notifications are not currently sent are ignored.
func (nl *NotificationListener) StopPropagatingUTXOsChangedNotifications(addresses []*UTXOsChangedNotificationAddress) {
if !nl.propagateUTXOsChangedNotifications {
return
}
for _, address := range addresses {
delete(nl.propagateUTXOsChangedNotificationAddresses, address.ScriptPublicKeyString)
}
}
func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
utxoChanges *utxoindex.UTXOChanges) *appmessage.UTXOsChangedNotificationMessage {
// As an optimization, we iterate over the smaller set (O(n)) among the two below
// and check existence over the larger set (O(1))
utxoChangesSize := len(utxoChanges.Added) + len(utxoChanges.Removed)
addressesSize := len(nl.propagateUTXOsChangedNotificationAddresses)
notification := &appmessage.UTXOsChangedNotificationMessage{}
for _, listenerAddress := range nl.propagateUTXOsChangedNotificationAddresses {
listenerScriptPublicKeyString := listenerAddress.ScriptPublicKeyString
if addedPairs, ok := utxoChanges.Added[listenerScriptPublicKeyString]; ok {
notification.Added = append(notification.Added,
ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)...)
if utxoChangesSize < addressesSize {
for scriptPublicKeyString, addedPairs := range utxoChanges.Added {
if listenerAddress, ok := nl.propagateUTXOsChangedNotificationAddresses[scriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
}
if removedOutpoints, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
for outpoint := range removedOutpoints {
notification.Removed = append(notification.Removed, &appmessage.UTXOsByAddressesEntry{
Address: listenerAddress.Address,
Outpoint: &appmessage.RPCOutpoint{
TransactionID: outpoint.TransactionID.String(),
Index: outpoint.Index,
},
})
for scriptPublicKeyString, removedOutpoints := range utxoChanges.Removed {
if listenerAddress, ok := nl.propagateUTXOsChangedNotificationAddresses[scriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
} else {
for _, listenerAddress := range nl.propagateUTXOsChangedNotificationAddresses {
listenerScriptPublicKeyString := listenerAddress.ScriptPublicKeyString
if addedPairs, ok := utxoChanges.Added[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(listenerAddress.Address, addedPairs)
notification.Added = append(notification.Added, utxosByAddressesEntries...)
}
if removedOutpoints, ok := utxoChanges.Removed[listenerScriptPublicKeyString]; ok {
utxosByAddressesEntries := convertUTXOOutpointsToUTXOsByAddressesEntries(listenerAddress.Address, removedOutpoints)
notification.Removed = append(notification.Removed, utxosByAddressesEntries...)
}
}
}
return notification
}
@@ -252,3 +307,15 @@ func (nl *NotificationListener) convertUTXOChangesToUTXOsChangedNotification(
func (nl *NotificationListener) PropagateVirtualSelectedParentBlueScoreChangedNotifications() {
nl.propagateVirtualSelectedParentBlueScoreChangedNotifications = true
}
// PropagatePruningPointUTXOSetOverrideNotifications instructs the listener to send pruning point UTXO set override notifications
// to the remote listener.
func (nl *NotificationListener) PropagatePruningPointUTXOSetOverrideNotifications() {
nl.propagatePruningPointUTXOSetOverrideNotifications = true
}
// StopPropagatingPruningPointUTXOSetOverrideNotifications instructs the listener to stop sending pruning
// point UTXO set override notifications to the remote listener.
func (nl *NotificationListener) StopPropagatingPruningPointUTXOSetOverrideNotifications() {
nl.propagatePruningPointUTXOSetOverrideNotifications = false
}
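A note on the branch in convertUTXOChangesToUTXOsChangedNotification above: it loops over whichever collection is smaller and does O(1) membership checks against the larger map, so the cost is bounded by the smaller side. The standalone sketch below shows the same pattern with simplified types (plain strings and ints rather than the utxoindex types); it is an illustration, not the handler code.

package main

import "fmt"

// intersectCounts keeps only the changed keys that are also being watched.
// It iterates over the smaller of the two collections and probes the other one,
// mirroring the utxoChangesSize < addressesSize branch above.
func intersectCounts(changes map[string]int, watched map[string]bool) map[string]int {
	result := make(map[string]int)
	if len(changes) < len(watched) {
		// Fewer changed keys than watched keys: loop over changes (O(n))
		// and check membership in the watched map (O(1) per lookup).
		for key, count := range changes {
			if watched[key] {
				result[key] = count
			}
		}
		return result
	}
	// Otherwise loop over the watched keys and probe the changes map.
	for key := range watched {
		if count, ok := changes[key]; ok {
			result[key] = count
		}
	}
	return result
}

func main() {
	changes := map[string]int{"a": 2, "b": 1}
	watched := map[string]bool{"a": true, "c": true}
	fmt.Println(intersectCounts(changes, watched)) // map[a:2]
}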

View File

@@ -2,6 +2,9 @@ package rpccontext
import (
"encoding/hex"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/util"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/utxoindex"
@@ -28,3 +31,43 @@ func ConvertUTXOOutpointEntryPairsToUTXOsByAddressesEntries(address string, pair
}
return utxosByAddressesEntries
}
// convertUTXOOutpointsToUTXOsByAddressesEntries converts
// UTXOOutpoints to a slice of UTXOsByAddressesEntry
func convertUTXOOutpointsToUTXOsByAddressesEntries(address string, outpoints utxoindex.UTXOOutpoints) []*appmessage.UTXOsByAddressesEntry {
utxosByAddressesEntries := make([]*appmessage.UTXOsByAddressesEntry, 0, len(outpoints))
for outpoint := range outpoints {
utxosByAddressesEntries = append(utxosByAddressesEntries, &appmessage.UTXOsByAddressesEntry{
Address: address,
Outpoint: &appmessage.RPCOutpoint{
TransactionID: outpoint.TransactionID.String(),
Index: outpoint.Index,
},
})
}
return utxosByAddressesEntries
}
// ConvertAddressStringsToUTXOsChangedNotificationAddresses converts address strings
// to UTXOsChangedNotificationAddresses
func (ctx *Context) ConvertAddressStringsToUTXOsChangedNotificationAddresses(
addressStrings []string) ([]*UTXOsChangedNotificationAddress, error) {
addresses := make([]*UTXOsChangedNotificationAddress, len(addressStrings))
for i, addressString := range addressStrings {
address, err := util.DecodeAddress(addressString, ctx.Config.ActiveNetParams.Prefix)
if err != nil {
return nil, errors.Errorf("Could not decode address '%s': %s", addressString, err)
}
scriptPublicKey, err := txscript.PayToAddrScript(address)
if err != nil {
return nil, errors.Errorf("Could not create a scriptPublicKey for address '%s': %s", addressString, err)
}
scriptPublicKeyString := utxoindex.ConvertScriptPublicKeyToString(scriptPublicKey)
addresses[i] = &UTXOsChangedNotificationAddress{
Address: addressString,
ScriptPublicKeyString: scriptPublicKeyString,
}
}
return addresses, nil
}

View File

@@ -3,13 +3,15 @@ package rpccontext
import (
"encoding/hex"
"fmt"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/difficulty"
"math"
"math/big"
"strconv"
"github.com/kaspanet/kaspad/domain/consensus/utils/constants"
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/difficulty"
"github.com/pkg/errors"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/estimatedsize"
@@ -21,9 +23,11 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/util/pointers"
)
// ErrBuildBlockVerboseDataInvalidBlock indicates that a block that was given to BuildBlockVerboseData is invalid.
var ErrBuildBlockVerboseDataInvalidBlock = errors.New("ErrBuildBlockVerboseDataInvalidBlock")
// BuildBlockVerboseData builds a BlockVerboseData from the given blockHeader.
// A block may optionally also be given if it's available in the calling context.
func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, block *externalapi.DomainBlock,
@@ -38,6 +42,17 @@ func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, b
if err != nil {
return nil, err
}
if blockInfo.BlockStatus == externalapi.StatusInvalid {
return nil, errors.Wrap(ErrBuildBlockVerboseDataInvalidBlock, "cannot build verbose data for "+
"invalid block")
}
childrenHashes, err := ctx.Domain.Consensus().GetBlockChildren(hash)
if err != nil {
return nil, err
}
result := &appmessage.BlockVerboseData{
Hash: hash.String(),
Version: blockHeader.Version(),
@@ -46,6 +61,7 @@ func (ctx *Context) BuildBlockVerboseData(blockHeader externalapi.BlockHeader, b
AcceptedIDMerkleRoot: blockHeader.AcceptedIDMerkleRoot().String(),
UTXOCommitment: blockHeader.UTXOCommitment().String(),
ParentHashes: hashes.ToStrings(blockHeader.ParentHashes()),
ChildrenHashes: hashes.ToStrings(childrenHashes),
Nonce: blockHeader.Nonce(),
Time: blockHeader.TimeInMilliseconds(),
Bits: strconv.FormatInt(int64(blockHeader.Bits()), 16),
@@ -179,7 +195,7 @@ func (ctx *Context) buildTransactionVerboseOutputs(tx *externalapi.DomainTransac
passesFilter := len(filterAddrMap) == 0
var encodedAddr string
if addr != nil {
encodedAddr = *pointers.String(addr.EncodeAddress())
encodedAddr = addr.EncodeAddress()
// If the filter doesn't already pass, make it pass if
// the address exists in the filter.
@@ -196,6 +212,7 @@ func (ctx *Context) buildTransactionVerboseOutputs(tx *externalapi.DomainTransac
output.Index = uint32(i)
output.Value = transactionOutput.Value
output.ScriptPubKey = &appmessage.ScriptPubKeyResult{
Version: transactionOutput.ScriptPublicKey.Version,
Address: encodedAddr,
Hex: hex.EncodeToString(transactionOutput.ScriptPublicKey.Script),
Type: scriptClass.String(),

View File

@@ -5,6 +5,7 @@ import (
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/pkg/errors"
)
// HandleGetBlock handles the respectively named RPC command
@@ -30,8 +31,14 @@ func HandleGetBlock(context *rpccontext.Context, _ *router.Router, request appme
blockVerboseData, err := context.BuildBlockVerboseData(header, nil, getBlockRequest.IncludeTransactionVerboseData)
if err != nil {
if errors.Is(err, rpccontext.ErrBuildBlockVerboseDataInvalidBlock) {
errorMessage := &appmessage.GetBlockResponseMessage{}
errorMessage.Error = appmessage.RPCErrorf("Block %s is invalid", hash)
return errorMessage, nil
}
return nil, err
}
response.BlockVerboseData = blockVerboseData
return response, nil

View File

@@ -36,5 +36,11 @@ func HandleGetBlockDAGInfo(context *rpccontext.Context, _ *router.Router, _ appm
response.Difficulty = context.GetDifficultyRatio(virtualInfo.Bits, context.Config.ActiveNetParams)
response.PastMedianTime = virtualInfo.PastMedianTime
pruningPoint, err := context.Domain.Consensus().PruningPoint()
if err != nil {
return nil, err
}
response.PruningPointHash = pruningPoint.String()
return response, nil
}

View File

@@ -11,7 +11,7 @@ import (
const (
// maxBlocksInGetBlocksResponse is the max amount of blocks that are
// allowed in a GetBlocksResult.
maxBlocksInGetBlocksResponse = 100
maxBlocksInGetBlocksResponse = 1000
)
// HandleGetBlocks handles the respectively named RPC command
@@ -37,6 +37,17 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
Error: appmessage.RPCErrorf("Could not decode lowHash %s: %s", getBlocksRequest.LowHash, err),
}, nil
}
blockInfo, err := context.Domain.Consensus().GetBlockInfo(lowHash)
if err != nil {
return nil, err
}
if !blockInfo.Exists {
return &appmessage.GetBlocksResponseMessage{
Error: appmessage.RPCErrorf("Could not find lowHash %s", getBlocksRequest.LowHash),
}, nil
}
}
// Get hashes between lowHash and virtualSelectedParent
@@ -50,6 +61,9 @@ func HandleGetBlocks(context *rpccontext.Context, _ *router.Router, request appm
return nil, err
}
// prepend low hash to make it inclusive
blockHashes = append([]*externalapi.DomainHash{lowHash}, blockHashes...)
// If there are no maxBlocksInGetBlocksResponse between lowHash and virtualSelectedParent -
// add virtualSelectedParent's anticone
if len(blockHashes) < maxBlocksInGetBlocksResponse {
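To isolate the "prepend low hash to make it inclusive" step above: the hashes fetched between lowHash and the virtual selected parent presumably exclude lowHash itself, so it is added back at the front before the response is built. The helper below is hypothetical and only demonstrates that inclusive-range idea, not the handler's anticone logic.

package main

import "fmt"

// prependLowHash puts the requested low hash back at the front of the result,
// making the returned range inclusive of the hash the caller asked about.
func prependLowHash(lowHash string, hashesAboveLow []string) []string {
	return append([]string{lowHash}, hashesAboveLow...)
}

func main() {
	fmt.Println(prependLowHash("low", []string{"a", "b"})) // [low a b]
}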

View File

@@ -0,0 +1,144 @@
package rpchandlers_test
import (
"reflect"
"sort"
"testing"
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/app/rpc/rpchandlers"
"github.com/kaspanet/kaspad/domain/consensus"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/model/testapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/hashes"
"github.com/kaspanet/kaspad/domain/consensus/utils/testutils"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/domain/miningmanager"
"github.com/kaspanet/kaspad/infrastructure/config"
)
type fakeDomain struct {
testapi.TestConsensus
}
func (d fakeDomain) Consensus() externalapi.Consensus { return d }
func (d fakeDomain) MiningManager() miningmanager.MiningManager { return nil }
func TestHandleGetBlocks(t *testing.T) {
testutils.ForAllNets(t, true, func(t *testing.T, params *dagconfig.Params) {
factory := consensus.NewFactory()
tc, teardown, err := factory.NewTestConsensus(params, false, "TestHandleGetBlocks")
if err != nil {
t.Fatalf("Error setting up consensus: %+v", err)
}
defer teardown(false)
fakeContext := rpccontext.Context{
Config: &config.Config{Flags: &config.Flags{NetworkFlags: config.NetworkFlags{ActiveNetParams: params}}},
Domain: fakeDomain{tc},
}
getBlocks := func(lowHash *externalapi.DomainHash) *appmessage.GetBlocksResponseMessage {
request := appmessage.GetBlocksRequestMessage{}
if lowHash != nil {
request.LowHash = lowHash.String()
}
response, err := rpchandlers.HandleGetBlocks(&fakeContext, nil, &request)
if err != nil {
t.Fatalf("Expected empty request to not fail, instead: '%v'", err)
}
return response.(*appmessage.GetBlocksResponseMessage)
}
filterAntiPast := func(povBlock *externalapi.DomainHash, slice []*externalapi.DomainHash) []*externalapi.DomainHash {
antipast := make([]*externalapi.DomainHash, 0, len(slice))
for _, blockHash := range slice {
isInPastOfPovBlock, err := tc.DAGTopologyManager().IsAncestorOf(blockHash, povBlock)
if err != nil {
t.Fatalf("Failed doing reachability check: '%v'", err)
}
if !isInPastOfPovBlock {
antipast = append(antipast, blockHash)
}
}
return antipast
}
// Create a DAG with the following structure:
//         merging block
//       /      |      \
//  split1   split2   split3
//       \      |      /
//         merging block
//       /      |      \
//  split1   split2   split3
//       \      |      /
//            etc.
expectedOrder := make([]*externalapi.DomainHash, 0, 40)
mergingBlock := params.GenesisHash
for i := 0; i < 10; i++ {
splitBlocks := make([]*externalapi.DomainHash, 0, 3)
for j := 0; j < 3; j++ {
blockHash, _, err := tc.AddBlock([]*externalapi.DomainHash{mergingBlock}, nil, nil)
if err != nil {
t.Fatalf("Failed adding block: %v", err)
}
splitBlocks = append(splitBlocks, blockHash)
}
sort.Sort(sort.Reverse(testutils.NewTestGhostDAGSorter(splitBlocks, tc, t)))
restOfSplitBlocks, selectedParent := splitBlocks[:len(splitBlocks)-1], splitBlocks[len(splitBlocks)-1]
expectedOrder = append(expectedOrder, selectedParent)
expectedOrder = append(expectedOrder, restOfSplitBlocks...)
mergingBlock, _, err = tc.AddBlock(splitBlocks, nil, nil)
if err != nil {
t.Fatalf("Failed adding block: %v", err)
}
expectedOrder = append(expectedOrder, mergingBlock)
}
virtualSelectedParent, err := tc.GetVirtualSelectedParent()
if err != nil {
t.Fatalf("Failed getting SelectedParent: %v", err)
}
if !virtualSelectedParent.Equal(expectedOrder[len(expectedOrder)-1]) {
t.Fatalf("Expected %s to be selectedParent, instead found: %s", expectedOrder[len(expectedOrder)-1], virtualSelectedParent)
}
requestSelectedParent := getBlocks(virtualSelectedParent)
if !reflect.DeepEqual(requestSelectedParent.BlockHashes, hashes.ToStrings([]*externalapi.DomainHash{virtualSelectedParent})) {
t.Fatalf("TestHandleGetBlocks expected:\n%v\nactual:\n%v", virtualSelectedParent, requestSelectedParent.BlockHashes)
}
for i, blockHash := range expectedOrder {
expectedBlocks := filterAntiPast(blockHash, expectedOrder)
expectedBlocks = append([]*externalapi.DomainHash{blockHash}, expectedBlocks...)
actualBlocks := getBlocks(blockHash)
if !reflect.DeepEqual(actualBlocks.BlockHashes, hashes.ToStrings(expectedBlocks)) {
t.Fatalf("TestHandleGetBlocks %d \nexpected: \n%v\nactual:\n%v", i,
hashes.ToStrings(expectedBlocks), actualBlocks.BlockHashes)
}
}
// Make explicitly sure that if lowHash==highHash we get a slice with a single hash.
actualBlocks := getBlocks(virtualSelectedParent)
if !reflect.DeepEqual(actualBlocks.BlockHashes, []string{virtualSelectedParent.String()}) {
t.Fatalf("TestHandleGetBlocks expected blocks to contain just '%s', instead got: \n%v",
virtualSelectedParent, actualBlocks.BlockHashes)
}
expectedOrder = append([]*externalapi.DomainHash{params.GenesisHash}, expectedOrder...)
actualOrder := getBlocks(nil)
if !reflect.DeepEqual(actualOrder.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: %v \nactual:\n%v", expectedOrder, actualOrder.BlockHashes)
}
requestAllExplicitly := getBlocks(params.GenesisHash)
if !reflect.DeepEqual(requestAllExplicitly.BlockHashes, hashes.ToStrings(expectedOrder)) {
t.Fatalf("TestHandleGetBlocks \nexpected: \n%v\n. actual:\n%v", expectedOrder, requestAllExplicitly.BlockHashes)
}
})
}

View File

@@ -5,5 +5,5 @@ import (
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.RPCS)
var log = logger.RegisterSubSystem("RPCS")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -0,0 +1,19 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleNotifyPruningPointUTXOSetOverrideRequest handles the respectively named RPC command
func HandleNotifyPruningPointUTXOSetOverrideRequest(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.PropagatePruningPointUTXOSetOverrideNotifications()
response := appmessage.NewNotifyPruningPointUTXOSetOverrideResponseMessage()
return response, nil
}

View File

@@ -3,10 +3,7 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/utils/txscript"
"github.com/kaspanet/kaspad/domain/utxoindex"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
"github.com/kaspanet/kaspad/util"
)
// HandleNotifyUTXOsChanged handles the respectively named RPC command
@@ -18,26 +15,11 @@ func HandleNotifyUTXOsChanged(context *rpccontext.Context, router *router.Router
}
notifyUTXOsChangedRequest := request.(*appmessage.NotifyUTXOsChangedRequestMessage)
addresses := make([]*rpccontext.UTXOsChangedNotificationAddress, len(notifyUTXOsChangedRequest.Addresses))
for i, addressString := range notifyUTXOsChangedRequest.Addresses {
address, err := util.DecodeAddress(addressString, context.Config.ActiveNetParams.Prefix)
if err != nil {
errorMessage := appmessage.NewNotifyUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Could not decode address '%s': %s", addressString, err)
return errorMessage, nil
}
scriptPublicKey, err := txscript.PayToAddrScript(address)
if err != nil {
errorMessage := appmessage.NewNotifyUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Could not create a scriptPublicKey for address '%s': %s", addressString, err)
return errorMessage, nil
}
scriptPublicKeyString := utxoindex.ConvertScriptPublicKeyToString(scriptPublicKey)
addresses[i] = &rpccontext.UTXOsChangedNotificationAddress{
Address: addressString,
ScriptPublicKeyString: scriptPublicKeyString,
}
addresses, err := context.ConvertAddressStringsToUTXOsChangedNotificationAddresses(notifyUTXOsChangedRequest.Addresses)
if err != nil {
errorMessage := appmessage.NewNotifyUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Parsing error: %s", err)
return errorMessage, nil
}
listener, err := context.NotificationManager.Listener(router)

View File

@@ -0,0 +1,19 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleStopNotifyingPruningPointUTXOSetOverrideRequest handles the respectively named RPC command
func HandleStopNotifyingPruningPointUTXOSetOverrideRequest(context *rpccontext.Context, router *router.Router, _ appmessage.Message) (appmessage.Message, error) {
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.StopPropagatingPruningPointUTXOSetOverrideNotifications()
response := appmessage.NewStopNotifyingPruningPointUTXOSetOverrideResponseMessage()
return response, nil
}

View File

@@ -0,0 +1,33 @@
package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/router"
)
// HandleStopNotifyingUTXOsChanged handles the respectively named RPC command
func HandleStopNotifyingUTXOsChanged(context *rpccontext.Context, router *router.Router, request appmessage.Message) (appmessage.Message, error) {
if !context.Config.UTXOIndex {
errorMessage := appmessage.NewStopNotifyingUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Method unavailable when kaspad is run without --utxoindex")
return errorMessage, nil
}
stopNotifyingUTXOsChangedRequest := request.(*appmessage.StopNotifyingUTXOsChangedRequestMessage)
addresses, err := context.ConvertAddressStringsToUTXOsChangedNotificationAddresses(stopNotifyingUTXOsChangedRequest.Addresses)
if err != nil {
errorMessage := appmessage.NewStopNotifyingUTXOsChangedResponseMessage()
errorMessage.Error = appmessage.RPCErrorf("Parsing error: %s", err)
return errorMessage, nil
}
listener, err := context.NotificationManager.Listener(router)
if err != nil {
return nil, err
}
listener.StopPropagatingUTXOsChangedNotifications(addresses)
response := appmessage.NewStopNotifyingUTXOsChangedResponseMessage()
return response, nil
}

View File

@@ -2,6 +2,7 @@ package rpchandlers
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/app/protocol/protocolerrors"
"github.com/kaspanet/kaspad/app/rpc/rpccontext"
"github.com/kaspanet/kaspad/domain/consensus/ruleerrors"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
@@ -25,9 +26,11 @@ func HandleSubmitBlock(context *rpccontext.Context, _ *router.Router, request ap
err := context.ProtocolManager.AddBlock(domainBlock)
if err != nil {
if !errors.As(err, &ruleerrors.RuleError{}) {
isProtocolOrRuleError := errors.As(err, &ruleerrors.RuleError{}) || errors.As(err, &protocolerrors.ProtocolError{})
if !isProtocolOrRuleError {
return nil, err
}
return &appmessage.SubmitBlockResponseMessage{
Error: appmessage.RPCErrorf("Block rejected. Reason: %s", err),
RejectReason: appmessage.RejectReasonBlockInvalid,

changelog.txt (new file, 35 lines)
View File

@@ -0,0 +1,35 @@
Kaspad v0.9.1 - 2021-03-14
===========================
* Testnet network reset
Kaspad v0.9.0 - 2021-03-04
===========================
* Merge big subdags in pick virtual parents (#1574)
* Write in the reject message the tx rejection reason (#1573)
* Add nil checks for protowire (#1570)
* Increase getBlocks limit to 1000 (#1572)
* Return RPC error if getBlock's lowHash doesn't exist (#1569)
* Add default dns-seeder to testnet (#1568)
* Fix utxoindex deserialization (#1566)
* Add pruning point hash to GetBlockDagInfo response (#1565)
* Use EmitUnpopulated so that kaspactl prints all fields, even the default ones (#1561)
* Stop logging an error whenever an RPC/P2P connection is canceled (#1562)
* Cleanup the logger and make it asynchronous (#1524)
* Close all iterators (#1542)
* Add childrenHashes to GetBlock/s RPC commands (#1560)
* Add ScriptPublicKey.Version to RPC (#1559)
* Fix the target block rate to create less bursty mining (#1554)
Kaspad v0.8.10 - 2021-02-25
===========================
* Fix bug where invalid mempool transactions were not removed (#1551)
* Add RPC reconnection to the miner (#1552)
* Remove virtual diff parents - only selectedTip is virtualDiffParent now (#1550)
* Fix UTXO index (#1548)
* Prevent fast failing (#1545)
* Increase the sleep time in kaspaminer when the node is not synced (#1544)
* Disallow header only blocks on RPC, relay and when requesting IBD full blocks (#1537)
* Make templateManager hold a DomainBlock and isSynced bool instead of a GetBlockTemplateResponseMessage (#1538)

View File

@@ -4,7 +4,7 @@ kaspactl is an RPC client for kaspad
## Requirements
Go 1.15 or later.
Go 1.16 or later.
## Installation

View File

@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.15-alpine AS build
FROM golang:1.16-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad

View File

@@ -2,10 +2,11 @@ package main
import (
"fmt"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/server/grpcserver/protowire"
"os"
"time"
"github.com/kaspanet/kaspad/infrastructure/network/netadapter/server/grpcserver/protowire"
"github.com/pkg/errors"
"google.golang.org/protobuf/encoding/protojson"
@@ -67,7 +68,7 @@ func postCommand(cfg *configFlags, client *grpcclient.GRPCClient, responseChan c
if err != nil {
printErrorAndExit(fmt.Sprintf("error posting the request to the RPC server: %s", err))
}
responseBytes, err := protojson.Marshal(response)
responseBytes, err := protojson.MarshalOptions{EmitUnpopulated: true}.Marshal(response)
if err != nil {
printErrorAndExit(errors.Wrapf(err, "error parsing the response from the RPC server").Error())
}
@@ -92,6 +93,7 @@ func prettifyResponse(response string) string {
marshalOptions := &protojson.MarshalOptions{}
marshalOptions.Indent = " "
marshalOptions.EmitUnpopulated = true
return marshalOptions.Format(kaspadMessage)
}

View File

@@ -4,7 +4,7 @@ Kaspaminer is a CPU-based miner for kaspad
## Requirements
Go 1.15 or later.
Go 1.16 or later.
## Installation

View File

@@ -5,42 +5,95 @@ import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/infrastructure/network/rpcclient"
"github.com/pkg/errors"
"sync"
"sync/atomic"
"time"
)
const minerTimeout = 10 * time.Second
type minerClient struct {
*rpcclient.RPCClient
isReconnecting uint32
clientLock sync.RWMutex
rpcClient *rpcclient.RPCClient
cfg *configFlags
blockAddedNotificationChan chan struct{}
}
func newMinerClient(cfg *configFlags) (*minerClient, error) {
rpcAddress, err := cfg.NetParams().NormalizeRPCServerAddress(cfg.RPCServer)
if err != nil {
return nil, err
}
rpcClient, err := rpcclient.NewRPCClient(rpcAddress)
if err != nil {
return nil, err
}
rpcClient.SetTimeout(minerTimeout)
rpcClient.SetLogger(backendLog, logger.LevelTrace)
func (mc *minerClient) safeRPCClient() *rpcclient.RPCClient {
mc.clientLock.RLock()
defer mc.clientLock.RUnlock()
return mc.rpcClient
}
minerClient := &minerClient{
RPCClient: rpcClient,
blockAddedNotificationChan: make(chan struct{}),
func (mc *minerClient) reconnect() {
swapped := atomic.CompareAndSwapUint32(&mc.isReconnecting, 0, 1)
if !swapped {
return
}
err = rpcClient.RegisterForBlockAddedNotifications(func(_ *appmessage.BlockAddedNotificationMessage) {
defer atomic.StoreUint32(&mc.isReconnecting, 0)
mc.clientLock.Lock()
defer mc.clientLock.Unlock()
retryDuration := time.Second
const maxRetryDuration = time.Minute
log.Infof("Reconnecting RPC connection")
for {
err := mc.connect()
if err == nil {
return
}
if retryDuration < time.Minute {
retryDuration *= 2
} else {
retryDuration = maxRetryDuration
}
log.Errorf("Got error '%s' while reconnecting. Trying again in %s", err, retryDuration)
time.Sleep(retryDuration)
}
}
func (mc *minerClient) connect() error {
rpcAddress, err := mc.cfg.NetParams().NormalizeRPCServerAddress(mc.cfg.RPCServer)
if err != nil {
return err
}
mc.rpcClient, err = rpcclient.NewRPCClient(rpcAddress)
if err != nil {
return err
}
mc.rpcClient.SetTimeout(minerTimeout)
mc.rpcClient.SetLogger(backendLog, logger.LevelTrace)
err = mc.rpcClient.RegisterForBlockAddedNotifications(func(_ *appmessage.BlockAddedNotificationMessage) {
select {
case minerClient.blockAddedNotificationChan <- struct{}{}:
case mc.blockAddedNotificationChan <- struct{}{}:
default:
}
})
if err != nil {
return nil, errors.Wrapf(err, "error requesting block-added notifications")
return errors.Wrapf(err, "error requesting block-added notifications")
}
log.Infof("Connected to %s", rpcAddress)
return nil
}
func newMinerClient(cfg *configFlags) (*minerClient, error) {
minerClient := &minerClient{
cfg: cfg,
blockAddedNotificationChan: make(chan struct{}),
}
err := minerClient.connect()
if err != nil {
return nil, err
}
return minerClient, nil
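A brief, stripped-down sketch of the reconnect pattern above: CompareAndSwap guarantees only one goroutine runs the reconnect loop at a time, and the retry delay doubles up to a cap. The real miner uses a 1-second delay doubling up to 1 minute; the durations below are shortened so the example finishes quickly, and dial is a stand-in for minerClient.connect.

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

type reconnector struct {
	isReconnecting uint32
	dial           func() error
}

func (r *reconnector) reconnect() {
	if !atomic.CompareAndSwapUint32(&r.isReconnecting, 0, 1) {
		return // another goroutine is already reconnecting
	}
	defer atomic.StoreUint32(&r.isReconnecting, 0)

	retryDuration := 10 * time.Millisecond
	const maxRetryDuration = 80 * time.Millisecond
	for {
		if err := r.dial(); err == nil {
			return
		}
		// Exponential backoff, capped at maxRetryDuration.
		if retryDuration < maxRetryDuration {
			retryDuration *= 2
		} else {
			retryDuration = maxRetryDuration
		}
		fmt.Printf("reconnect failed, retrying in %s\n", retryDuration)
		time.Sleep(retryDuration)
	}
}

func main() {
	attempts := 0
	r := &reconnector{dial: func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection refused")
		}
		return nil
	}}
	r.reconnect()
	fmt.Println("connected after", attempts, "attempts")
}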

View File

@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.15-alpine AS build
FROM golang:1.16-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad

View File

@@ -14,6 +14,7 @@ var (
)
func initLog(logFile, errLogFile string) {
log.SetLevel(logger.LevelDebug)
err := backendLog.AddLogFile(logFile, logger.LevelTrace)
if err != nil {
fmt.Fprintf(os.Stderr, "Error adding log file %s as log rotator for level %s: %s", logFile, logger.LevelTrace, err)
@@ -24,4 +25,15 @@ func initLog(logFile, errLogFile string) {
fmt.Fprintf(os.Stderr, "Error adding log file %s as log rotator for level %s: %s", errLogFile, logger.LevelWarn, err)
os.Exit(1)
}
err = backendLog.AddLogWriter(os.Stdout, logger.LevelInfo)
if err != nil {
fmt.Fprintf(os.Stderr, "Error adding stdout to the loggerfor level %s: %s", logger.LevelWarn, err)
os.Exit(1)
}
err = backendLog.Run()
if err != nil {
fmt.Fprintf(os.Stderr, "Error starting the logger: %s ", err)
os.Exit(1)
}
}

View File

@@ -26,6 +26,7 @@ func main() {
fmt.Fprintf(os.Stderr, "Error parsing command-line arguments: %s\n", err)
os.Exit(1)
}
defer backendLog.Close()
// Show version at startup.
log.Infof("Version %s", version.Version())
@@ -39,7 +40,7 @@ func main() {
if err != nil {
panic(errors.Wrap(err, "error connecting to the RPC server"))
}
defer client.Disconnect()
defer client.safeRPCClient().Disconnect()
miningAddr, err := util.DecodeAddress(cfg.MiningAddr, cfg.ActiveNetParams.Prefix)
if err != nil {

View File

@@ -2,13 +2,14 @@ package main
import (
nativeerrors "errors"
"github.com/kaspanet/kaspad/cmd/kaspaminer/templatemanager"
"github.com/kaspanet/kaspad/domain/consensus/model/pow"
"github.com/kaspanet/kaspad/util/difficulty"
"math/rand"
"sync/atomic"
"time"
"github.com/kaspanet/kaspad/cmd/kaspaminer/templatemanager"
"github.com/kaspanet/kaspad/domain/consensus/model/pow"
"github.com/kaspanet/kaspad/util/difficulty"
"github.com/kaspanet/kaspad/domain/consensus/utils/consensushashing"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
@@ -52,6 +53,8 @@ func mineLoop(client *minerClient, numberOfBlocks uint64, targetBlocksPerSecond
}
blockInWindowIndex := 0
sleepTime := 0 * time.Second
for {
foundBlockChan <- mineNextBlock(mineWhenNotSynced)
@@ -60,15 +63,16 @@ func mineLoop(client *minerClient, numberOfBlocks uint64, targetBlocksPerSecond
if blockInWindowIndex == windowSize-1 {
deviation := windowExpectedEndTime.Sub(time.Now())
if deviation > 0 {
log.Infof("Finished to mine %d blocks %s earlier than expected. Sleeping %s to compensate",
windowSize, deviation, deviation)
time.Sleep(deviation)
sleepTime = deviation / windowSize
log.Infof("Finished to mine %d blocks %s earlier than expected. Setting the miner "+
"to sleep %s between blocks to compensate",
windowSize, deviation, sleepTime)
}
blockInWindowIndex = 0
windowExpectedEndTime = time.Now().Add(expectedDurationForWindow)
}
time.Sleep(sleepTime)
}
}
})
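The pacing change above replaces one long sleep at the end of a window with a per-block sleep of deviation/windowSize, which keeps block emission smooth instead of bursty. A tiny worked example with hypothetical numbers (not taken from the source):

package main

import (
	"fmt"
	"time"
)

func main() {
	// If a window of 100 blocks finished 5 seconds earlier than expected,
	// sleep deviation/windowSize between blocks rather than the whole
	// deviation once at the window boundary.
	const windowSize = 100
	deviation := 5 * time.Second

	sleepTimePerBlock := deviation / windowSize
	fmt.Println(sleepTimePerBlock) // 50ms
}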
@@ -112,12 +116,13 @@ func logHashRate() {
func handleFoundBlock(client *minerClient, block *externalapi.DomainBlock) error {
blockHash := consensushashing.BlockHash(block)
log.Infof("Submitting block %s to %s", blockHash, client.Address())
log.Infof("Submitting block %s to %s", blockHash, client.safeRPCClient().Address())
rejectReason, err := client.SubmitBlock(block)
rejectReason, err := client.safeRPCClient().SubmitBlock(block)
if err != nil {
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while submitting block %s to %s: %s", blockHash, client.Address(), err)
log.Warnf("Got timeout while submitting block %s to %s: %s", blockHash, client.safeRPCClient().Address(), err)
client.reconnect()
return nil
}
if rejectReason == appmessage.RejectReasonIsInIBD {
@@ -126,7 +131,7 @@ func handleFoundBlock(client *minerClient, block *externalapi.DomainBlock) error
time.Sleep(waitTime)
return nil
}
return errors.Errorf("Error submitting block %s to %s: %s", blockHash, client.Address(), err)
return errors.Wrapf(err, "Error submitting block %s to %s", blockHash, client.safeRPCClient().Address())
}
return nil
}
@@ -155,11 +160,15 @@ func mineNextBlock(mineWhenNotSynced bool) *externalapi.DomainBlock {
func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
tryCount := 0
const sleepTime = 500 * time.Millisecond
const sleepTimeWhenNotSynced = 5 * time.Second
for {
tryCount++
const sleepTime = 500 * time.Millisecond
shouldLog := (tryCount-1)%10 == 0
template := templatemanager.Get()
template, isSynced := templatemanager.Get()
if template == nil {
if shouldLog {
log.Info("Waiting for the initial template")
@@ -167,27 +176,28 @@ func getBlockForMining(mineWhenNotSynced bool) *externalapi.DomainBlock {
time.Sleep(sleepTime)
continue
}
if !template.IsSynced && !mineWhenNotSynced {
if !isSynced && !mineWhenNotSynced {
if shouldLog {
log.Warnf("Kaspad is not synced. Skipping current block template")
}
time.Sleep(sleepTime)
time.Sleep(sleepTimeWhenNotSynced)
continue
}
return appmessage.MsgBlockToDomainBlock(template.MsgBlock)
return template
}
}
func templatesLoop(client *minerClient, miningAddr util.Address, errChan chan error) {
getBlockTemplate := func() {
template, err := client.GetBlockTemplate(miningAddr.String())
template, err := client.safeRPCClient().GetBlockTemplate(miningAddr.String())
if nativeerrors.Is(err, router.ErrTimeout) {
log.Warnf("Got timeout while requesting block template from %s: %s", client.Address(), err)
log.Warnf("Got timeout while requesting block template from %s: %s", client.safeRPCClient().Address(), err)
client.reconnect()
return
}
if err != nil {
errChan <- errors.Errorf("Error getting block template from %s: %s", client.Address(), err)
errChan <- errors.Wrapf(err, "Error getting block template from %s", client.safeRPCClient().Address())
return
}
templatemanager.Set(template)

View File

@@ -2,22 +2,31 @@ package templatemanager
import (
"github.com/kaspanet/kaspad/app/appmessage"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"sync"
)
var currentTemplate *appmessage.GetBlockTemplateResponseMessage
var currentTemplate *externalapi.DomainBlock
var isSynced bool
var lock = &sync.Mutex{}
// Get returns the template to work on
func Get() *appmessage.GetBlockTemplateResponseMessage {
func Get() (*externalapi.DomainBlock, bool) {
lock.Lock()
defer lock.Unlock()
return currentTemplate
// Shallow copy the block so when the user replaces the header it won't affect the template here.
if currentTemplate == nil {
return nil, false
}
block := *currentTemplate
return &block, isSynced
}
// Set sets the current template to work on
func Set(template *appmessage.GetBlockTemplateResponseMessage) {
block := appmessage.MsgBlockToDomainBlock(template.MsgBlock)
lock.Lock()
defer lock.Unlock()
currentTemplate = template
currentTemplate = block
isSynced = template.IsSynced
}
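A minimal sketch of the shallow-copy guarantee in Get above, using a simplified block type standing in for externalapi.DomainBlock: replacing the Header pointer on the returned copy leaves the stored template untouched (mutating through a shared pointer would still be visible, which the miner avoids by swapping in a whole new header).

package main

import "fmt"

type header struct{ nonce uint64 }

type block struct{ Header *header }

var current = &block{Header: &header{nonce: 0}}

// get mirrors templatemanager.Get: it returns a shallow copy, so the caller can
// replace the copy's Header without affecting the stored template.
func get() *block {
	copyOfBlock := *current // shallow copy
	return &copyOfBlock
}

func main() {
	mine := get()
	mine.Header = &header{nonce: 42}  // replace the header on the copy only
	fmt.Println(current.Header.nonce) // still 0
}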

View File

@@ -10,7 +10,7 @@ It is capable of generating wallet key-pairs, printing a wallet's current balanc
## Requirements
Go 1.15 or later.
Go 1.16 or later.
## Installation

View File

@@ -1,5 +1,5 @@
# -- multistage docker build: stage #1: build stage
FROM golang:1.15-alpine AS build
FROM golang:1.16-alpine AS build
RUN mkdir -p /go/src/github.com/kaspanet/kaspad
@@ -16,9 +16,12 @@ RUN go get -u golang.org/x/lint/golint \
COPY go.mod .
COPY go.sum .
# Cache kaspad dependencies
RUN go mod download
COPY . .
RUN NO_PARALLEL=1 ./build_and_test.sh
RUN ./build_and_test.sh
# --- multistage docker build: stage #2: runtime image
FROM alpine

View File

@@ -157,6 +157,15 @@ func (s *consensus) GetBlockInfo(blockHash *externalapi.DomainHash) (*externalap
return blockInfo, nil
}
func (s *consensus) GetBlockChildren(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error) {
blockRelation, err := s.blockRelationStore.BlockRelation(s.databaseContext, blockHash)
if err != nil {
return nil, err
}
return blockRelation.Children, nil
}
func (s *consensus) GetBlockAcceptanceData(blockHash *externalapi.DomainHash) (externalapi.AcceptanceData, error) {
s.lock.Lock()
defer s.lock.Unlock()
@@ -223,6 +232,30 @@ func (s *consensus) GetPruningPointUTXOs(expectedPruningPointHash *externalapi.D
return pruningPointUTXOs, nil
}
func (s *consensus) GetVirtualUTXOs(expectedVirtualParents []*externalapi.DomainHash,
fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {
s.lock.Lock()
defer s.lock.Unlock()
virtualParents, err := s.dagTopologyManager.Parents(model.VirtualBlockHash)
if err != nil {
return nil, err
}
if !externalapi.HashesEqual(expectedVirtualParents, virtualParents) {
return nil, errors.Wrapf(ruleerrors.ErrGetVirtualUTXOsWrongVirtualParents, "expected virtual parents %s but got %s",
expectedVirtualParents,
virtualParents)
}
virtualUTXOs, err := s.consensusStateStore.VirtualUTXOs(s.databaseContext, fromOutpoint, limit)
if err != nil {
return nil, err
}
return virtualUTXOs, nil
}
func (s *consensus) PruningPoint() (*externalapi.DomainHash, error) {
s.lock.Lock()
defer s.lock.Unlock()

View File

@@ -1,6 +1,21 @@
package consensus
import "github.com/kaspanet/kaspad/domain/consensus/model"
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"math/big"
"time"
)
// GHOSTDAGManagerConstructor is the function signature for a constructor of a type implementing model.GHOSTDAGManager
type GHOSTDAGManagerConstructor func(model.DBReader, model.DAGTopologyManager, model.GHOSTDAGDataStore, model.BlockHeaderStore, model.KType) model.GHOSTDAGManager
type GHOSTDAGManagerConstructor func(model.DBReader, model.DAGTopologyManager,
model.GHOSTDAGDataStore, model.BlockHeaderStore, model.KType) model.GHOSTDAGManager
// DifficultyManagerConstructor is the function signature for a constructor of a type implementing model.DifficultyManager
type DifficultyManagerConstructor func(model.DBReader, model.GHOSTDAGManager, model.GHOSTDAGDataStore,
model.BlockHeaderStore, model.DAGTopologyManager, model.DAGTraversalManager, *big.Int, int, bool, time.Duration,
*externalapi.DomainHash) model.DifficultyManager
// PastMedianTimeManagerConstructor is the function signature for a constructor of a type implementing model.PastMedianTimeManager
type PastMedianTimeManagerConstructor func(int, model.DBReader, model.DAGTraversalManager, model.BlockHeaderStore,
model.GHOSTDAGDataStore) model.PastMedianTimeManager

View File

@@ -3,25 +3,40 @@ package database
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/infrastructure/db/database"
"github.com/pkg/errors"
)
type dbCursor struct {
cursor database.Cursor
cursor database.Cursor
isClosed bool
}
func (d dbCursor) Next() bool {
if d.isClosed {
panic("Tried using a closed DBCursor")
}
return d.cursor.Next()
}
func (d dbCursor) First() bool {
if d.isClosed {
panic("Tried using a closed DBCursor")
}
return d.cursor.First()
}
func (d dbCursor) Seek(key model.DBKey) error {
if d.isClosed {
return errors.New("Tried using a closed DBCursor")
}
return d.cursor.Seek(dbKeyToDatabaseKey(key))
}
func (d dbCursor) Key() (model.DBKey, error) {
if d.isClosed {
return nil, errors.New("Tried using a closed DBCursor")
}
key, err := d.cursor.Key()
if err != nil {
return nil, err
@@ -31,11 +46,23 @@ func (d dbCursor) Key() (model.DBKey, error) {
}
func (d dbCursor) Value() ([]byte, error) {
if d.isClosed {
return nil, errors.New("Tried using a closed DBCursor")
}
return d.cursor.Value()
}
func (d dbCursor) Close() error {
return d.cursor.Close()
if d.isClosed {
return errors.New("Tried using a closed DBCursor")
}
d.isClosed = true
err := d.cursor.Close()
if err != nil {
return err
}
d.cursor = nil
return nil
}
func newDBCursor(cursor database.Cursor) model.DBCursor {

View File

@@ -1473,100 +1473,6 @@ func (x *DbUtxoDiff) GetToRemove() []*DbUtxoCollectionItem {
return nil
}
type DbPruningPointUTXOSetBytes struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Bytes []byte `protobuf:"bytes,1,opt,name=bytes,proto3" json:"bytes,omitempty"`
}
func (x *DbPruningPointUTXOSetBytes) Reset() {
*x = DbPruningPointUTXOSetBytes{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[24]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DbPruningPointUTXOSetBytes) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DbPruningPointUTXOSetBytes) ProtoMessage() {}
func (x *DbPruningPointUTXOSetBytes) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[24]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DbPruningPointUTXOSetBytes.ProtoReflect.Descriptor instead.
func (*DbPruningPointUTXOSetBytes) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{24}
}
func (x *DbPruningPointUTXOSetBytes) GetBytes() []byte {
if x != nil {
return x.Bytes
}
return nil
}
type DbHeaderTips struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Tips []*DbHash `protobuf:"bytes,1,rep,name=tips,proto3" json:"tips,omitempty"`
}
func (x *DbHeaderTips) Reset() {
*x = DbHeaderTips{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[25]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DbHeaderTips) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DbHeaderTips) ProtoMessage() {}
func (x *DbHeaderTips) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[25]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DbHeaderTips.ProtoReflect.Descriptor instead.
func (*DbHeaderTips) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{25}
}
func (x *DbHeaderTips) GetTips() []*DbHash {
if x != nil {
return x.Tips
}
return nil
}
type DbTips struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -1578,7 +1484,7 @@ type DbTips struct {
func (x *DbTips) Reset() {
*x = DbTips{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[26]
mi := &file_dbobjects_proto_msgTypes[24]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1591,7 +1497,7 @@ func (x *DbTips) String() string {
func (*DbTips) ProtoMessage() {}
func (x *DbTips) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[26]
mi := &file_dbobjects_proto_msgTypes[24]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1604,7 +1510,7 @@ func (x *DbTips) ProtoReflect() protoreflect.Message {
// Deprecated: Use DbTips.ProtoReflect.Descriptor instead.
func (*DbTips) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{26}
return file_dbobjects_proto_rawDescGZIP(), []int{24}
}
func (x *DbTips) GetTips() []*DbHash {
@@ -1614,53 +1520,6 @@ func (x *DbTips) GetTips() []*DbHash {
return nil
}
type DbVirtualDiffParents struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
VirtualDiffParents []*DbHash `protobuf:"bytes,1,rep,name=virtualDiffParents,proto3" json:"virtualDiffParents,omitempty"`
}
func (x *DbVirtualDiffParents) Reset() {
*x = DbVirtualDiffParents{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[27]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *DbVirtualDiffParents) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DbVirtualDiffParents) ProtoMessage() {}
func (x *DbVirtualDiffParents) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[27]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DbVirtualDiffParents.ProtoReflect.Descriptor instead.
func (*DbVirtualDiffParents) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{27}
}
func (x *DbVirtualDiffParents) GetVirtualDiffParents() []*DbHash {
if x != nil {
return x.VirtualDiffParents
}
return nil
}
type DbBlockCount struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -1672,7 +1531,7 @@ type DbBlockCount struct {
func (x *DbBlockCount) Reset() {
*x = DbBlockCount{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[28]
mi := &file_dbobjects_proto_msgTypes[25]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1685,7 +1544,7 @@ func (x *DbBlockCount) String() string {
func (*DbBlockCount) ProtoMessage() {}
func (x *DbBlockCount) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[28]
mi := &file_dbobjects_proto_msgTypes[25]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1698,7 +1557,7 @@ func (x *DbBlockCount) ProtoReflect() protoreflect.Message {
// Deprecated: Use DbBlockCount.ProtoReflect.Descriptor instead.
func (*DbBlockCount) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{28}
return file_dbobjects_proto_rawDescGZIP(), []int{25}
}
func (x *DbBlockCount) GetCount() uint64 {
@@ -1719,7 +1578,7 @@ type DbBlockHeaderCount struct {
func (x *DbBlockHeaderCount) Reset() {
*x = DbBlockHeaderCount{}
if protoimpl.UnsafeEnabled {
mi := &file_dbobjects_proto_msgTypes[29]
mi := &file_dbobjects_proto_msgTypes[26]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -1732,7 +1591,7 @@ func (x *DbBlockHeaderCount) String() string {
func (*DbBlockHeaderCount) ProtoMessage() {}
func (x *DbBlockHeaderCount) ProtoReflect() protoreflect.Message {
mi := &file_dbobjects_proto_msgTypes[29]
mi := &file_dbobjects_proto_msgTypes[26]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -1745,7 +1604,7 @@ func (x *DbBlockHeaderCount) ProtoReflect() protoreflect.Message {
// Deprecated: Use DbBlockHeaderCount.ProtoReflect.Descriptor instead.
func (*DbBlockHeaderCount) Descriptor() ([]byte, []int) {
return file_dbobjects_proto_rawDescGZIP(), []int{29}
return file_dbobjects_proto_rawDescGZIP(), []int{26}
}
func (x *DbBlockHeaderCount) GetCount() uint64 {
@@ -1981,32 +1840,19 @@ var file_dbobjects_proto_rawDesc = []byte{
0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x18, 0x02, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x23, 0x2e, 0x73,
0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x55,
0x74, 0x78, 0x6f, 0x43, 0x6f, 0x6c, 0x6c, 0x65, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x74, 0x65,
0x6d, 0x52, 0x08, 0x74, 0x6f, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x22, 0x32, 0x0a, 0x1a, 0x44,
0x62, 0x50, 0x72, 0x75, 0x6e, 0x69, 0x6e, 0x67, 0x50, 0x6f, 0x69, 0x6e, 0x74, 0x55, 0x54, 0x58,
0x4f, 0x53, 0x65, 0x74, 0x42, 0x79, 0x74, 0x65, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x62, 0x79, 0x74,
0x65, 0x73, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0c, 0x52, 0x05, 0x62, 0x79, 0x74, 0x65, 0x73, 0x22,
0x39, 0x0a, 0x0c, 0x44, 0x62, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x54, 0x69, 0x70, 0x73, 0x12,
0x29, 0x0a, 0x04, 0x74, 0x69, 0x70, 0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e,
0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62,
0x48, 0x61, 0x73, 0x68, 0x52, 0x04, 0x74, 0x69, 0x70, 0x73, 0x22, 0x33, 0x0a, 0x06, 0x44, 0x62,
0x54, 0x69, 0x70, 0x73, 0x12, 0x29, 0x0a, 0x04, 0x74, 0x69, 0x70, 0x73, 0x18, 0x01, 0x20, 0x03,
0x28, 0x0b, 0x32, 0x15, 0x2e, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x48, 0x61, 0x73, 0x68, 0x52, 0x04, 0x74, 0x69, 0x70, 0x73, 0x22,
0x5d, 0x0a, 0x14, 0x44, 0x62, 0x56, 0x69, 0x72, 0x74, 0x75, 0x61, 0x6c, 0x44, 0x69, 0x66, 0x66,
0x50, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x73, 0x12, 0x45, 0x0a, 0x12, 0x76, 0x69, 0x72, 0x74, 0x75,
0x61, 0x6c, 0x44, 0x69, 0x66, 0x66, 0x50, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x73, 0x18, 0x01, 0x20,
0x6d, 0x52, 0x08, 0x74, 0x6f, 0x52, 0x65, 0x6d, 0x6f, 0x76, 0x65, 0x22, 0x33, 0x0a, 0x06, 0x44,
0x62, 0x54, 0x69, 0x70, 0x73, 0x12, 0x29, 0x0a, 0x04, 0x74, 0x69, 0x70, 0x73, 0x18, 0x01, 0x20,
0x03, 0x28, 0x0b, 0x32, 0x15, 0x2e, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74,
0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x48, 0x61, 0x73, 0x68, 0x52, 0x12, 0x76, 0x69, 0x72, 0x74,
0x75, 0x61, 0x6c, 0x44, 0x69, 0x66, 0x66, 0x50, 0x61, 0x72, 0x65, 0x6e, 0x74, 0x73, 0x22, 0x24,
0x0a, 0x0c, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x14,
0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63,
0x6f, 0x75, 0x6e, 0x74, 0x22, 0x2a, 0x0a, 0x12, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x48,
0x65, 0x61, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f,
0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74,
0x42, 0x2a, 0x5a, 0x28, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6b,
0x61, 0x73, 0x70, 0x61, 0x6e, 0x65, 0x74, 0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x64, 0x2f, 0x73,
0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x62, 0x06, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x33,
0x69, 0x6f, 0x6e, 0x2e, 0x44, 0x62, 0x48, 0x61, 0x73, 0x68, 0x52, 0x04, 0x74, 0x69, 0x70, 0x73,
0x22, 0x24, 0x0a, 0x0c, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63, 0x6b, 0x43, 0x6f, 0x75, 0x6e, 0x74,
0x12, 0x14, 0x0a, 0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52,
0x05, 0x63, 0x6f, 0x75, 0x6e, 0x74, 0x22, 0x2a, 0x0a, 0x12, 0x44, 0x62, 0x42, 0x6c, 0x6f, 0x63,
0x6b, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x43, 0x6f, 0x75, 0x6e, 0x74, 0x12, 0x14, 0x0a, 0x05,
0x63, 0x6f, 0x75, 0x6e, 0x74, 0x18, 0x01, 0x20, 0x01, 0x28, 0x04, 0x52, 0x05, 0x63, 0x6f, 0x75,
0x6e, 0x74, 0x42, 0x2a, 0x5a, 0x28, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d,
0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x6e, 0x65, 0x74, 0x2f, 0x6b, 0x61, 0x73, 0x70, 0x61, 0x64,
0x2f, 0x73, 0x65, 0x72, 0x69, 0x61, 0x6c, 0x69, 0x7a, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x62, 0x06,
0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
@@ -2021,7 +1867,7 @@ func file_dbobjects_proto_rawDescGZIP() []byte {
return file_dbobjects_proto_rawDescData
}
var file_dbobjects_proto_msgTypes = make([]protoimpl.MessageInfo, 30)
var file_dbobjects_proto_msgTypes = make([]protoimpl.MessageInfo, 27)
var file_dbobjects_proto_goTypes = []interface{}{
(*DbBlock)(nil), // 0: serialization.DbBlock
(*DbBlockHeader)(nil), // 1: serialization.DbBlockHeader
@@ -2047,12 +1893,9 @@ var file_dbobjects_proto_goTypes = []interface{}{
(*DbReachabilityData)(nil), // 21: serialization.DbReachabilityData
(*DbReachabilityInterval)(nil), // 22: serialization.DbReachabilityInterval
(*DbUtxoDiff)(nil), // 23: serialization.DbUtxoDiff
(*DbPruningPointUTXOSetBytes)(nil), // 24: serialization.DbPruningPointUTXOSetBytes
(*DbHeaderTips)(nil), // 25: serialization.DbHeaderTips
(*DbTips)(nil), // 26: serialization.DbTips
(*DbVirtualDiffParents)(nil), // 27: serialization.DbVirtualDiffParents
(*DbBlockCount)(nil), // 28: serialization.DbBlockCount
(*DbBlockHeaderCount)(nil), // 29: serialization.DbBlockHeaderCount
(*DbTips)(nil), // 24: serialization.DbTips
(*DbBlockCount)(nil), // 25: serialization.DbBlockCount
(*DbBlockHeaderCount)(nil), // 26: serialization.DbBlockHeaderCount
}
var file_dbobjects_proto_depIdxs = []int32{
1, // 0: serialization.DbBlock.header:type_name -> serialization.DbBlockHeader
@@ -2090,14 +1933,12 @@ var file_dbobjects_proto_depIdxs = []int32{
2, // 32: serialization.DbReachabilityData.futureCoveringSet:type_name -> serialization.DbHash
18, // 33: serialization.DbUtxoDiff.toAdd:type_name -> serialization.DbUtxoCollectionItem
18, // 34: serialization.DbUtxoDiff.toRemove:type_name -> serialization.DbUtxoCollectionItem
2, // 35: serialization.DbHeaderTips.tips:type_name -> serialization.DbHash
2, // 36: serialization.DbTips.tips:type_name -> serialization.DbHash
2, // 37: serialization.DbVirtualDiffParents.virtualDiffParents:type_name -> serialization.DbHash
38, // [38:38] is the sub-list for method output_type
38, // [38:38] is the sub-list for method input_type
38, // [38:38] is the sub-list for extension type_name
38, // [38:38] is the sub-list for extension extendee
0, // [0:38] is the sub-list for field type_name
2, // 35: serialization.DbTips.tips:type_name -> serialization.DbHash
36, // [36:36] is the sub-list for method output_type
36, // [36:36] is the sub-list for method input_type
36, // [36:36] is the sub-list for extension type_name
36, // [36:36] is the sub-list for extension extendee
0, // [0:36] is the sub-list for field type_name
}
func init() { file_dbobjects_proto_init() }
@@ -2395,30 +2236,6 @@ func file_dbobjects_proto_init() {
}
}
file_dbobjects_proto_msgTypes[24].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbPruningPointUTXOSetBytes); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_dbobjects_proto_msgTypes[25].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbHeaderTips); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_dbobjects_proto_msgTypes[26].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbTips); i {
case 0:
return &v.state
@@ -2430,19 +2247,7 @@ func file_dbobjects_proto_init() {
return nil
}
}
file_dbobjects_proto_msgTypes[27].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbVirtualDiffParents); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_dbobjects_proto_msgTypes[28].Exporter = func(v interface{}, i int) interface{} {
file_dbobjects_proto_msgTypes[25].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbBlockCount); i {
case 0:
return &v.state
@@ -2454,7 +2259,7 @@ func file_dbobjects_proto_init() {
return nil
}
}
file_dbobjects_proto_msgTypes[29].Exporter = func(v interface{}, i int) interface{} {
file_dbobjects_proto_msgTypes[26].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*DbBlockHeaderCount); i {
case 0:
return &v.state
@@ -2473,7 +2278,7 @@ func file_dbobjects_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_dbobjects_proto_rawDesc,
NumEnums: 0,
NumMessages: 30,
NumMessages: 27,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -139,22 +139,10 @@ message DbUtxoDiff {
repeated DbUtxoCollectionItem toRemove = 2;
}
message DbPruningPointUTXOSetBytes {
bytes bytes = 1;
}
message DbHeaderTips {
repeated DbHash tips = 1;
}
message DbTips {
repeated DbHash tips = 1;
}
message DbVirtualDiffParents {
repeated DbHash virtualDiffParents = 1;
}
message DbBlockCount {
uint64 count = 1;
}

View File

@@ -1,17 +0,0 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// HeaderTipsToDBHeaderTips converts a slice of hashes to DbHeaderTips
func HeaderTipsToDBHeaderTips(tips []*externalapi.DomainHash) *DbHeaderTips {
return &DbHeaderTips{
Tips: DomainHashesToDbHashes(tips),
}
}
// DBHeaderTipsToHeaderTips converts DbHeaderTips to a slice of hashes
func DBHeaderTipsToHeaderTips(dbHeaderTips *DbHeaderTips) ([]*externalapi.DomainHash, error) {
return DbHashesToDomainHashes(dbHeaderTips.Tips)
}

View File

@@ -1,15 +1,15 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
)
func utxoCollectionToDBUTXOCollection(utxoCollection model.UTXOCollection) ([]*DbUtxoCollectionItem, error) {
func utxoCollectionToDBUTXOCollection(utxoCollection externalapi.UTXOCollection) ([]*DbUtxoCollectionItem, error) {
items := make([]*DbUtxoCollectionItem, utxoCollection.Len())
i := 0
utxoIterator := utxoCollection.Iterator()
defer utxoIterator.Close()
for ok := utxoIterator.First(); ok; ok = utxoIterator.Next() {
outpoint, entry, err := utxoIterator.Get()
if err != nil {
@@ -26,7 +26,7 @@ func utxoCollectionToDBUTXOCollection(utxoCollection model.UTXOCollection) ([]*D
return items, nil
}
func dbUTXOCollectionToUTXOCollection(items []*DbUtxoCollectionItem) (model.UTXOCollection, error) {
func dbUTXOCollectionToUTXOCollection(items []*DbUtxoCollectionItem) (externalapi.UTXOCollection, error) {
utxoMap := make(map[externalapi.DomainOutpoint]externalapi.UTXOEntry, len(items))
for _, item := range items {
outpoint, err := DbOutpointToDomainOutpoint(item.Outpoint)

View File

@@ -1,12 +1,12 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/utxo"
)
// UTXODiffToDBUTXODiff converts UTXODiff to DbUtxoDiff
func UTXODiffToDBUTXODiff(diff model.UTXODiff) (*DbUtxoDiff, error) {
func UTXODiffToDBUTXODiff(diff externalapi.UTXODiff) (*DbUtxoDiff, error) {
toAdd, err := utxoCollectionToDBUTXOCollection(diff.ToAdd())
if err != nil {
return nil, err
@@ -24,7 +24,7 @@ func UTXODiffToDBUTXODiff(diff model.UTXODiff) (*DbUtxoDiff, error) {
}
// DBUTXODiffToUTXODiff converts DbUtxoDiff to UTXODiff
func DBUTXODiffToUTXODiff(diff *DbUtxoDiff) (model.UTXODiff, error) {
func DBUTXODiffToUTXODiff(diff *DbUtxoDiff) (externalapi.UTXODiff, error) {
toAdd, err := dbUTXOCollectionToUTXOCollection(diff.ToAdd)
if err != nil {
return nil, err

View File

@@ -1,17 +0,0 @@
package serialization
import (
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
// VirtualDiffParentsToDBHeaderVirtualDiffParents converts a slice of hashes to DbVirtualDiffParents
func VirtualDiffParentsToDBHeaderVirtualDiffParents(tips []*externalapi.DomainHash) *DbVirtualDiffParents {
return &DbVirtualDiffParents{
VirtualDiffParents: DomainHashesToDbHashes(tips),
}
}
// DBVirtualDiffParentsToVirtualDiffParents converts DbVirtualDiffParents to a slice of hashes
func DBVirtualDiffParentsToVirtualDiffParents(dbVirtualDiffParents *DbVirtualDiffParents) ([]*externalapi.DomainHash, error) {
return DbHashesToDomainHashes(dbVirtualDiffParents.VirtualDiffParents)
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/consensus/utils/lrucache"
"github.com/pkg/errors"
)
var bucket = database.MakeBucket([]byte("blocks"))
@@ -214,18 +215,28 @@ func (bs *blockStore) serializeBlockCount(count uint64) ([]byte, error) {
}
type allBlockHashesIterator struct {
cursor model.DBCursor
cursor model.DBCursor
isClosed bool
}
func (a allBlockHashesIterator) First() bool {
if a.isClosed {
panic("Tried using a closed AllBlockHashesIterator")
}
return a.cursor.First()
}
func (a allBlockHashesIterator) Next() bool {
if a.isClosed {
panic("Tried using a closed AllBlockHashesIterator")
}
return a.cursor.Next()
}
func (a allBlockHashesIterator) Get() (*externalapi.DomainHash, error) {
if a.isClosed {
return nil, errors.New("Tried using a closed AllBlockHashesIterator")
}
key, err := a.cursor.Key()
if err != nil {
return nil, err
@@ -235,6 +246,19 @@ func (a allBlockHashesIterator) Get() (*externalapi.DomainHash, error) {
return externalapi.NewDomainHashFromByteSlice(blockHashBytes)
}
func (a allBlockHashesIterator) Close() error {
if a.isClosed {
return errors.New("Tried using a closed AllBlockHashesIterator")
}
a.isClosed = true
err := a.cursor.Close()
if err != nil {
return err
}
a.cursor = nil
return nil
}
func (bs *blockStore) AllBlockHashesIterator(dbContext model.DBReader) (model.BlockIterator, error) {
cursor, err := dbContext.Cursor(bucket)
if err != nil {

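The blockStore hunk above is one instance of a pattern repeated across the stores in this diff: iterators gain an isClosed flag, panic or return an error when used after Close, and drop their underlying cursor on Close. A toy, self-contained sketch of that guard pattern with generic names, not kaspad's actual types:

package example

import "errors"

// hashIterator demonstrates the close-guard pattern used by the store iterators.
type hashIterator struct {
    hashes   [][]byte
    index    int
    isClosed bool
}

func (it *hashIterator) First() bool {
    if it.isClosed {
        panic("tried using a closed hashIterator")
    }
    it.index = 0
    return len(it.hashes) > 0
}

func (it *hashIterator) Next() bool {
    if it.isClosed {
        panic("tried using a closed hashIterator")
    }
    it.index++
    return it.index < len(it.hashes)
}

func (it *hashIterator) Get() ([]byte, error) {
    if it.isClosed {
        return nil, errors.New("tried using a closed hashIterator")
    }
    return it.hashes[it.index], nil
}

func (it *hashIterator) Close() error {
    if it.isClosed {
        return errors.New("tried using a closed hashIterator")
    }
    it.isClosed = true
    it.hashes = nil // mirrors the stores setting cursor = nil once closed
    return nil
}
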
View File

@@ -8,14 +8,12 @@ import (
// consensusStateStore represents a store for the current consensus state
type consensusStateStore struct {
tipsStaging []*externalapi.DomainHash
virtualDiffParentsStaging []*externalapi.DomainHash
virtualUTXODiffStaging model.UTXODiff
tipsStaging []*externalapi.DomainHash
virtualUTXODiffStaging externalapi.UTXODiff
virtualUTXOSetCache *utxolrucache.LRUCache
tipsCache []*externalapi.DomainHash
virtualDiffParentsCache []*externalapi.DomainHash
tipsCache []*externalapi.DomainHash
}
// New instantiates a new ConsensusStateStore
@@ -28,7 +26,6 @@ func New(utxoSetCacheSize int, preallocate bool) model.ConsensusStateStore {
func (css *consensusStateStore) Discard() {
css.tipsStaging = nil
css.virtualUTXODiffStaging = nil
css.virtualDiffParentsStaging = nil
}
func (css *consensusStateStore) Commit(dbTx model.DBTransaction) error {
@@ -36,10 +33,6 @@ func (css *consensusStateStore) Commit(dbTx model.DBTransaction) error {
if err != nil {
return err
}
err = css.commitVirtualDiffParents(dbTx)
if err != nil {
return err
}
err = css.commitVirtualUTXODiff(dbTx)
if err != nil {
@@ -53,6 +46,5 @@ func (css *consensusStateStore) Commit(dbTx model.DBTransaction) error {
func (css *consensusStateStore) IsStaged() bool {
return css.tipsStaging != nil ||
css.virtualDiffParentsStaging != nil ||
css.virtualUTXODiffStaging != nil
}

View File

@@ -19,7 +19,7 @@ func utxoKey(outpoint *externalapi.DomainOutpoint) (model.DBKey, error) {
return utxoSetBucket.Key(serializedOutpoint), nil
}
func (css *consensusStateStore) StageVirtualUTXODiff(virtualUTXODiff model.UTXODiff) {
func (css *consensusStateStore) StageVirtualUTXODiff(virtualUTXODiff externalapi.UTXODiff) {
css.virtualUTXODiffStaging = virtualUTXODiff
}
@@ -37,6 +37,7 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
}
toRemoveIterator := css.virtualUTXODiffStaging.ToRemove().Iterator()
defer toRemoveIterator.Close()
for ok := toRemoveIterator.First(); ok; ok = toRemoveIterator.Next() {
toRemoveOutpoint, _, err := toRemoveIterator.Get()
if err != nil {
@@ -56,6 +57,7 @@ func (css *consensusStateStore) commitVirtualUTXODiff(dbTx model.DBTransaction)
}
toAddIterator := css.virtualUTXODiffStaging.ToAdd().Iterator()
defer toAddIterator.Close()
for ok := toAddIterator.First(); ok; ok = toAddIterator.Next() {
toAddOutpoint, toAddEntry, err := toAddIterator.Get()
if err != nil {
@@ -149,7 +151,45 @@ func (css *consensusStateStore) hasUTXOByOutpointFromStagedVirtualUTXODiff(dbCon
return dbContext.Has(key)
}
func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader) (model.ReadOnlyUTXOSetIterator, error) {
func (css *consensusStateStore) VirtualUTXOs(dbContext model.DBReader,
fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {
cursor, err := dbContext.Cursor(utxoSetBucket)
if err != nil {
return nil, err
}
defer cursor.Close()
if fromOutpoint != nil {
serializedFromOutpoint, err := serializeOutpoint(fromOutpoint)
if err != nil {
return nil, err
}
seekKey := utxoSetBucket.Key(serializedFromOutpoint)
err = cursor.Seek(seekKey)
if err != nil {
return nil, err
}
}
iterator := newCursorUTXOSetIterator(cursor)
defer iterator.Close()
outpointAndUTXOEntryPairs := make([]*externalapi.OutpointAndUTXOEntryPair, 0, limit)
for len(outpointAndUTXOEntryPairs) < limit && iterator.Next() {
outpoint, utxoEntry, err := iterator.Get()
if err != nil {
return nil, err
}
outpointAndUTXOEntryPairs = append(outpointAndUTXOEntryPairs, &externalapi.OutpointAndUTXOEntryPair{
Outpoint: outpoint,
UTXOEntry: utxoEntry,
})
}
return outpointAndUTXOEntryPairs, nil
}
func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader) (externalapi.ReadOnlyUTXOSetIterator, error) {
cursor, err := dbContext.Cursor(utxoSetBucket)
if err != nil {
return nil, err
@@ -164,22 +204,32 @@ func (css *consensusStateStore) VirtualUTXOSetIterator(dbContext model.DBReader)
}
type utxoSetIterator struct {
cursor model.DBCursor
cursor model.DBCursor
isClosed bool
}
func newCursorUTXOSetIterator(cursor model.DBCursor) model.ReadOnlyUTXOSetIterator {
func newCursorUTXOSetIterator(cursor model.DBCursor) externalapi.ReadOnlyUTXOSetIterator {
return &utxoSetIterator{cursor: cursor}
}
func (u utxoSetIterator) First() bool {
if u.isClosed {
panic("Tried using a closed utxoSetIterator")
}
return u.cursor.First()
}
func (u utxoSetIterator) Next() bool {
if u.isClosed {
panic("Tried using a closed utxoSetIterator")
}
return u.cursor.Next()
}
func (u utxoSetIterator) Get() (outpoint *externalapi.DomainOutpoint, utxoEntry externalapi.UTXOEntry, err error) {
if u.isClosed {
return nil, nil, errors.New("Tried using a closed utxoSetIterator")
}
key, err := u.cursor.Key()
if err != nil {
panic(err)
@@ -202,3 +252,16 @@ func (u utxoSetIterator) Get() (outpoint *externalapi.DomainOutpoint, utxoEntry
return outpoint, utxoEntry, nil
}
func (u utxoSetIterator) Close() error {
if u.isClosed {
return errors.New("Tried using a closed utxoSetIterator")
}
u.isClosed = true
err := u.cursor.Close()
if err != nil {
return err
}
u.cursor = nil
return nil
}
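
The new VirtualUTXOs method above pages through the virtual UTXO set by seeking to fromOutpoint and collecting at most limit pairs. A hypothetical caller-side helper, assuming only the ConsensusStateStore signature shown later in this diff, that keeps requesting pages until a short page marks the end:

package example

import (
    "github.com/kaspanet/kaspad/domain/consensus/model"
    "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// collectVirtualUTXOs pages through the virtual UTXO set. Each call resumes
// from the last outpoint of the previous page; a page shorter than pageSize
// signals that the set is exhausted.
func collectVirtualUTXOs(store model.ConsensusStateStore, dbContext model.DBReader,
    pageSize int) ([]*externalapi.OutpointAndUTXOEntryPair, error) {

    var all []*externalapi.OutpointAndUTXOEntryPair
    var fromOutpoint *externalapi.DomainOutpoint // nil means start from the beginning
    for {
        page, err := store.VirtualUTXOs(dbContext, fromOutpoint, pageSize)
        if err != nil {
            return nil, err
        }
        all = append(all, page...)
        if len(page) < pageSize {
            return all, nil
        }
        fromOutpoint = page[len(page)-1].Outpoint
    }
}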

View File

@@ -1,74 +0,0 @@
package consensusstatestore
import (
"github.com/golang/protobuf/proto"
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)
var virtualDiffParentsKey = database.MakeBucket(nil).Key([]byte("virtual-diff-parents"))
func (css *consensusStateStore) VirtualDiffParents(dbContext model.DBReader) ([]*externalapi.DomainHash, error) {
if css.virtualDiffParentsStaging != nil {
return externalapi.CloneHashes(css.virtualDiffParentsStaging), nil
}
if css.virtualDiffParentsCache != nil {
return externalapi.CloneHashes(css.virtualDiffParentsCache), nil
}
virtualDiffParentsBytes, err := dbContext.Get(virtualDiffParentsKey)
if err != nil {
return nil, err
}
virtualDiffParents, err := css.deserializeVirtualDiffParents(virtualDiffParentsBytes)
if err != nil {
return nil, err
}
css.virtualDiffParentsCache = virtualDiffParents
return externalapi.CloneHashes(virtualDiffParents), nil
}
func (css *consensusStateStore) StageVirtualDiffParents(tipHashes []*externalapi.DomainHash) {
css.virtualDiffParentsStaging = externalapi.CloneHashes(tipHashes)
}
func (css *consensusStateStore) commitVirtualDiffParents(dbTx model.DBTransaction) error {
if css.virtualDiffParentsStaging == nil {
return nil
}
virtualDiffParentsBytes, err := css.serializeVirtualDiffParents(css.virtualDiffParentsStaging)
if err != nil {
return err
}
err = dbTx.Put(virtualDiffParentsKey, virtualDiffParentsBytes)
if err != nil {
return err
}
css.virtualDiffParentsCache = css.virtualDiffParentsStaging
// Note: we don't discard the staging here since that's
// being done at the end of Commit()
return nil
}
func (css *consensusStateStore) serializeVirtualDiffParents(virtualDiffParentsBytes []*externalapi.DomainHash) ([]byte, error) {
virtualDiffParents := serialization.VirtualDiffParentsToDBHeaderVirtualDiffParents(virtualDiffParentsBytes)
return proto.Marshal(virtualDiffParents)
}
func (css *consensusStateStore) deserializeVirtualDiffParents(virtualDiffParentsBytes []byte) ([]*externalapi.DomainHash,
error) {
dbVirtualDiffParents := &serialization.DbVirtualDiffParents{}
err := proto.Unmarshal(virtualDiffParentsBytes, dbVirtualDiffParents)
if err != nil {
return nil, err
}
return serialization.DBVirtualDiffParentsToVirtualDiffParents(dbVirtualDiffParents)
}

View File

@@ -3,6 +3,7 @@ package consensusstatestore
import (
"github.com/kaspanet/kaspad/domain/consensus/database"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/pkg/errors"
)
@@ -21,7 +22,7 @@ func (css *consensusStateStore) FinishImportingPruningPointUTXOSet(dbContext mod
}
func (css *consensusStateStore) ImportPruningPointUTXOSetIntoVirtualUTXOSet(dbContext model.DBWriter,
pruningPointUTXOSetIterator model.ReadOnlyUTXOSetIterator) error {
pruningPointUTXOSetIterator externalapi.ReadOnlyUTXOSetIterator) error {
if css.virtualUTXODiffStaging != nil {
return errors.New("cannot import virtual UTXO set while virtual UTXO diff is staged")
@@ -44,6 +45,7 @@ func (css *consensusStateStore) ImportPruningPointUTXOSetIntoVirtualUTXOSet(dbCo
if err != nil {
return err
}
defer deleteCursor.Close()
for ok := deleteCursor.First(); ok; ok = deleteCursor.Next() {
key, err := deleteCursor.Key()
if err != nil {

View File

@@ -6,6 +6,7 @@ import (
"github.com/kaspanet/kaspad/domain/consensus/database/serialization"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/pkg/errors"
)
var importedPruningPointUTXOsBucket = database.MakeBucket([]byte("imported-pruning-point-utxos"))
@@ -16,6 +17,7 @@ func (ps *pruningStore) ClearImportedPruningPointUTXOs(dbContext model.DBWriter)
if err != nil {
return err
}
defer cursor.Close()
for ok := cursor.First(); ok; ok = cursor.Next() {
key, err := cursor.Key()
@@ -51,7 +53,7 @@ func (ps *pruningStore) AppendImportedPruningPointUTXOs(dbTx model.DBTransaction
return nil
}
func (ps *pruningStore) ImportedPruningPointUTXOIterator(dbContext model.DBReader) (model.ReadOnlyUTXOSetIterator, error) {
func (ps *pruningStore) ImportedPruningPointUTXOIterator(dbContext model.DBReader) (externalapi.ReadOnlyUTXOSetIterator, error) {
cursor, err := dbContext.Cursor(importedPruningPointUTXOsBucket)
if err != nil {
return nil, err
@@ -60,22 +62,32 @@ func (ps *pruningStore) ImportedPruningPointUTXOIterator(dbContext model.DBReade
}
type utxoSetIterator struct {
cursor model.DBCursor
cursor model.DBCursor
isClosed bool
}
func (ps *pruningStore) newCursorUTXOSetIterator(cursor model.DBCursor) model.ReadOnlyUTXOSetIterator {
func (ps *pruningStore) newCursorUTXOSetIterator(cursor model.DBCursor) externalapi.ReadOnlyUTXOSetIterator {
return &utxoSetIterator{cursor: cursor}
}
func (u *utxoSetIterator) First() bool {
if u.isClosed {
panic("Tried using a closed utxoSetIterator")
}
return u.cursor.First()
}
func (u *utxoSetIterator) Next() bool {
if u.isClosed {
panic("Tried using a closed utxoSetIterator")
}
return u.cursor.Next()
}
func (u *utxoSetIterator) Get() (outpoint *externalapi.DomainOutpoint, utxoEntry externalapi.UTXOEntry, err error) {
if u.isClosed {
return nil, nil, errors.New("Tried using a closed utxoSetIterator")
}
key, err := u.cursor.Key()
if err != nil {
panic(err)
@@ -99,6 +111,19 @@ func (u *utxoSetIterator) Get() (outpoint *externalapi.DomainOutpoint, utxoEntry
return outpoint, utxoEntry, nil
}
func (u *utxoSetIterator) Close() error {
if u.isClosed {
return errors.New("Tried using a closed utxoSetIterator")
}
u.isClosed = true
err := u.cursor.Close()
if err != nil {
return err
}
u.cursor = nil
return nil
}
func (ps *pruningStore) importedPruningPointUTXOKey(outpoint *externalapi.DomainOutpoint) (model.DBKey, error) {
serializedOutpoint, err := serializeOutpoint(outpoint)
if err != nil {
@@ -175,6 +200,7 @@ func (ps *pruningStore) CommitImportedPruningPointUTXOSet(dbContext model.DBWrit
if err != nil {
return err
}
defer deleteCursor.Close()
for ok := deleteCursor.First(); ok; ok = deleteCursor.Next() {
key, err := deleteCursor.Key()
if err != nil {
@@ -191,6 +217,7 @@ func (ps *pruningStore) CommitImportedPruningPointUTXOSet(dbContext model.DBWrit
if err != nil {
return err
}
defer insertCursor.Close()
for ok := insertCursor.First(); ok; ok = insertCursor.Next() {
importedPruningPointUTXOSetKey, err := insertCursor.Key()
if err != nil {

View File

@@ -117,13 +117,14 @@ func (ps *pruningStore) Commit(dbTx model.DBTransaction) error {
}
func (ps *pruningStore) UpdatePruningPointUTXOSet(dbContext model.DBWriter,
utxoSetIterator model.ReadOnlyUTXOSetIterator) error {
utxoSetIterator externalapi.ReadOnlyUTXOSetIterator) error {
// Delete all the old UTXOs from the database
deleteCursor, err := dbContext.Cursor(pruningPointUTXOSetBucket)
if err != nil {
return err
}
defer deleteCursor.Close()
for ok := deleteCursor.First(); ok; ok = deleteCursor.Next() {
key, err := deleteCursor.Key()
if err != nil {
@@ -196,20 +197,6 @@ func (ps *pruningStore) deserializePruningPoint(pruningPointBytes []byte) (*exte
return serialization.DbHashToDomainHash(dbHash)
}
func (ps *pruningStore) serializeUTXOSetBytes(pruningPointUTXOSetBytes []byte) ([]byte, error) {
return proto.Marshal(&serialization.DbPruningPointUTXOSetBytes{Bytes: pruningPointUTXOSetBytes})
}
func (ps *pruningStore) deserializeUTXOSetBytes(dbPruningPointUTXOSetBytes []byte) ([]byte, error) {
dbPruningPointUTXOSet := &serialization.DbPruningPointUTXOSetBytes{}
err := proto.Unmarshal(dbPruningPointUTXOSetBytes, dbPruningPointUTXOSet)
if err != nil {
return nil, err
}
return dbPruningPointUTXOSet.Bytes, nil
}
func (ps *pruningStore) HasPruningPoint(dbContext model.DBReader) (bool, error) {
if ps.pruningPointStaging != nil {
return true, nil
@@ -229,6 +216,7 @@ func (ps *pruningStore) PruningPointUTXOs(dbContext model.DBReader,
if err != nil {
return nil, err
}
defer cursor.Close()
if fromOutpoint != nil {
serializedFromOutpoint, err := serializeOutpoint(fromOutpoint)
@@ -243,6 +231,7 @@ func (ps *pruningStore) PruningPointUTXOs(dbContext model.DBReader,
}
pruningPointUTXOIterator := ps.newCursorUTXOSetIterator(cursor)
defer pruningPointUTXOIterator.Close()
outpointAndUTXOEntryPairs := make([]*externalapi.OutpointAndUTXOEntryPair, 0, limit)
for len(outpointAndUTXOEntryPairs) < limit && pruningPointUTXOIterator.Next() {

View File

@@ -15,7 +15,7 @@ var utxoDiffChildBucket = database.MakeBucket([]byte("utxo-diff-children"))
// utxoDiffStore represents a store of UTXODiffs
type utxoDiffStore struct {
utxoDiffStaging map[externalapi.DomainHash]model.UTXODiff
utxoDiffStaging map[externalapi.DomainHash]externalapi.UTXODiff
utxoDiffChildStaging map[externalapi.DomainHash]*externalapi.DomainHash
toDelete map[externalapi.DomainHash]struct{}
utxoDiffCache *lrucache.LRUCache
@@ -25,7 +25,7 @@ type utxoDiffStore struct {
// New instantiates a new UTXODiffStore
func New(cacheSize int, preallocate bool) model.UTXODiffStore {
return &utxoDiffStore{
utxoDiffStaging: make(map[externalapi.DomainHash]model.UTXODiff),
utxoDiffStaging: make(map[externalapi.DomainHash]externalapi.UTXODiff),
utxoDiffChildStaging: make(map[externalapi.DomainHash]*externalapi.DomainHash),
toDelete: make(map[externalapi.DomainHash]struct{}),
utxoDiffCache: lrucache.New(cacheSize, preallocate),
@@ -34,7 +34,7 @@ func New(cacheSize int, preallocate bool) model.UTXODiffStore {
}
// Stage stages the given utxoDiff for the given blockHash
func (uds *utxoDiffStore) Stage(blockHash *externalapi.DomainHash, utxoDiff model.UTXODiff, utxoDiffChild *externalapi.DomainHash) {
func (uds *utxoDiffStore) Stage(blockHash *externalapi.DomainHash, utxoDiff externalapi.UTXODiff, utxoDiffChild *externalapi.DomainHash) {
uds.utxoDiffStaging[*blockHash] = utxoDiff
if utxoDiffChild != nil {
@@ -55,7 +55,7 @@ func (uds *utxoDiffStore) IsBlockHashStaged(blockHash *externalapi.DomainHash) b
}
func (uds *utxoDiffStore) Discard() {
uds.utxoDiffStaging = make(map[externalapi.DomainHash]model.UTXODiff)
uds.utxoDiffStaging = make(map[externalapi.DomainHash]externalapi.UTXODiff)
uds.utxoDiffChildStaging = make(map[externalapi.DomainHash]*externalapi.DomainHash)
uds.toDelete = make(map[externalapi.DomainHash]struct{})
}
@@ -107,13 +107,13 @@ func (uds *utxoDiffStore) Commit(dbTx model.DBTransaction) error {
}
// UTXODiff gets the utxoDiff associated with the given blockHash
func (uds *utxoDiffStore) UTXODiff(dbContext model.DBReader, blockHash *externalapi.DomainHash) (model.UTXODiff, error) {
func (uds *utxoDiffStore) UTXODiff(dbContext model.DBReader, blockHash *externalapi.DomainHash) (externalapi.UTXODiff, error) {
if utxoDiff, ok := uds.utxoDiffStaging[*blockHash]; ok {
return utxoDiff, nil
}
if utxoDiff, ok := uds.utxoDiffCache.Get(blockHash); ok {
return utxoDiff.(model.UTXODiff), nil
return utxoDiff.(externalapi.UTXODiff), nil
}
utxoDiffBytes, err := dbContext.Get(uds.utxoDiffHashAsKey(blockHash))
@@ -187,7 +187,7 @@ func (uds *utxoDiffStore) utxoDiffChildHashAsKey(hash *externalapi.DomainHash) m
return utxoDiffChildBucket.Key(hash.ByteSlice())
}
func (uds *utxoDiffStore) serializeUTXODiff(utxoDiff model.UTXODiff) ([]byte, error) {
func (uds *utxoDiffStore) serializeUTXODiff(utxoDiff externalapi.UTXODiff) ([]byte, error) {
dbUtxoDiff, err := serialization.UTXODiffToDBUTXODiff(utxoDiff)
if err != nil {
return nil, err
@@ -201,7 +201,7 @@ func (uds *utxoDiffStore) serializeUTXODiff(utxoDiff model.UTXODiff) ([]byte, er
return bytes, nil
}
func (uds *utxoDiffStore) deserializeUTXODiff(utxoDiffBytes []byte) (model.UTXODiff, error) {
func (uds *utxoDiffStore) deserializeUTXODiff(utxoDiffBytes []byte) (externalapi.UTXODiff, error) {
dbUTXODiff := &serialization.DbUtxoDiff{}
err := proto.Unmarshal(utxoDiffBytes, dbUTXODiff)
if err != nil {

View File

@@ -63,19 +63,25 @@ type Factory interface {
SetTestGHOSTDAGManager(ghostdagConstructor GHOSTDAGManagerConstructor)
SetTestLevelDBCacheSize(cacheSizeMiB int)
SetTestPreAllocateCache(preallocateCaches bool)
SetTestPastMedianTimeManager(medianTimeConstructor PastMedianTimeManagerConstructor)
SetTestDifficultyManager(difficultyConstructor DifficultyManagerConstructor)
}
type factory struct {
dataDir string
ghostdagConstructor GHOSTDAGManagerConstructor
cacheSizeMiB *int
preallocateCaches *bool
dataDir string
ghostdagConstructor GHOSTDAGManagerConstructor
pastMedianTimeConsructor PastMedianTimeManagerConstructor
difficultyConstructor DifficultyManagerConstructor
cacheSizeMiB *int
preallocateCaches *bool
}
// NewFactory creates a new Consensus factory
func NewFactory() Factory {
return &factory{
ghostdagConstructor: ghostdagmanager.New,
ghostdagConstructor: ghostdagmanager.New,
pastMedianTimeConsructor: pastmediantimemanager.New,
difficultyConstructor: difficultymanager.New,
}
}
@@ -144,7 +150,7 @@ func (f *factory) NewConsensus(dagParams *dagconfig.Params, db infrastructuredat
reachabilityDataStore,
ghostdagManager,
consensusStateStore)
pastMedianTimeManager := pastmediantimemanager.New(
pastMedianTimeManager := f.pastMedianTimeConsructor(
dagParams.TimestampDeviationTolerance,
dbManager,
dagTraversalManager,
@@ -159,7 +165,7 @@ func (f *factory) NewConsensus(dagParams *dagconfig.Params, db infrastructuredat
dbManager,
pastMedianTimeManager,
ghostdagDataStore)
difficultyManager := difficultymanager.New(
difficultyManager := f.difficultyConstructor(
dbManager,
ghostdagManager,
ghostdagDataStore,
@@ -466,6 +472,15 @@ func (f *factory) SetTestGHOSTDAGManager(ghostdagConstructor GHOSTDAGManagerCons
f.ghostdagConstructor = ghostdagConstructor
}
func (f *factory) SetTestPastMedianTimeManager(medianTimeConstructor PastMedianTimeManagerConstructor) {
f.pastMedianTimeConsructor = medianTimeConstructor
}
// SetTestDifficultyManager is a setter for the difficultyManager field on the factory.
func (f *factory) SetTestDifficultyManager(difficultyConstructor DifficultyManagerConstructor) {
f.difficultyConstructor = difficultyConstructor
}
func (f *factory) SetTestLevelDBCacheSize(cacheSizeMiB int) {
f.cacheSizeMiB = &cacheSizeMiB
}
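
The factory hunk above turns the past-median-time and difficulty managers into injectable constructor fields, so SetTestPastMedianTimeManager and SetTestDifficultyManager can swap them in tests. The real constructor signatures are not part of this diff, so the following is a deliberately generic sketch of that injection pattern with toy types rather than kaspad's:

package example

// DifficultyManager is a toy stand-in; only the injection pattern matters here.
type DifficultyManager interface {
    RequiredDifficulty() uint32
}

type constantDifficulty uint32

func (c constantDifficulty) RequiredDifficulty() uint32 { return uint32(c) }

// DifficultyManagerConstructor mirrors the idea of a swappable constructor field.
type DifficultyManagerConstructor func() DifficultyManager

type factory struct {
    difficultyConstructor DifficultyManagerConstructor
}

// NewFactory wires the production constructor by default.
func NewFactory() *factory {
    return &factory{
        difficultyConstructor: func() DifficultyManager { return constantDifficulty(0x1d00ffff) },
    }
}

// SetTestDifficultyManager lets a test override the constructor before the
// object graph is built.
func (f *factory) SetTestDifficultyManager(constructor DifficultyManagerConstructor) {
    f.difficultyConstructor = constructor
}

// NewConsensus builds with whichever constructor is currently set.
func (f *factory) NewConsensus() DifficultyManager {
    return f.difficultyConstructor()
}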

View File

@@ -2,6 +2,8 @@ package consensus
import (
"github.com/kaspanet/kaspad/infrastructure/logger"
"github.com/kaspanet/kaspad/util/panics"
)
var log, _ = logger.Get(logger.SubsystemTags.BDAG)
var log = logger.RegisterSubSystem("BDAG")
var spawn = panics.GoroutineWrapperFunc(log)

View File

@@ -7,4 +7,5 @@ type BlockIterator interface {
First() bool
Next() bool
Get() (*externalapi.DomainHash, error)
Close() error
}
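
With Close added to BlockIterator, callers are expected to release the underlying cursor when done. A small hypothetical helper showing the intended First/Next/Get/Close usage, assuming an iterator obtained from any of the stores in this diff:

package example

import (
    "github.com/kaspanet/kaspad/domain/consensus/model"
    "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
)

// collectHashes drains a model.BlockIterator into a slice of hashes.
func collectHashes(iterator model.BlockIterator) ([]*externalapi.DomainHash, error) {
    defer iterator.Close() // release the underlying cursor even on early return

    var hashes []*externalapi.DomainHash
    for ok := iterator.First(); ok; ok = iterator.Next() {
        hash, err := iterator.Get()
        if err != nil {
            return nil, err
        }
        hashes = append(hashes, hash)
    }
    return hashes, nil
}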

View File

@@ -9,11 +9,13 @@ type Consensus interface {
GetBlock(blockHash *DomainHash) (*DomainBlock, error)
GetBlockHeader(blockHash *DomainHash) (BlockHeader, error)
GetBlockInfo(blockHash *DomainHash) (*BlockInfo, error)
GetBlockChildren(blockHash *DomainHash) ([]*DomainHash, error)
GetBlockAcceptanceData(blockHash *DomainHash) (AcceptanceData, error)
GetHashesBetween(lowHash, highHash *DomainHash, maxBlueScoreDifference uint64) ([]*DomainHash, error)
GetMissingBlockBodyHashes(highHash *DomainHash) ([]*DomainHash, error)
GetPruningPointUTXOs(expectedPruningPointHash *DomainHash, fromOutpoint *DomainOutpoint, limit int) ([]*OutpointAndUTXOEntryPair, error)
GetVirtualUTXOs(expectedVirtualParents []*DomainHash, fromOutpoint *DomainOutpoint, limit int) ([]*OutpointAndUTXOEntryPair, error)
PruningPoint() (*DomainHash, error)
ClearImportedPruningPointData() error
AppendImportedPruningPointUTXOs(outpointAndUTXOEntryPairs []*OutpointAndUTXOEntryPair) error

View File

@@ -3,6 +3,8 @@ package externalapi
// BlockInsertionResult is auxiliary data returned from ValidateAndInsertBlock
type BlockInsertionResult struct {
VirtualSelectedParentChainChanges *SelectedChainPath
VirtualUTXODiff UTXODiff
VirtualParents []*DomainHash
}
// SelectedChainPath is a path of the selected chains between two blocks.

View File

@@ -0,0 +1,10 @@
package externalapi
// ReadOnlyUTXOSetIterator is an iterator over all entries in a
// ReadOnlyUTXOSet
type ReadOnlyUTXOSetIterator interface {
First() bool
Next() bool
Get() (outpoint *DomainOutpoint, utxoEntry UTXOEntry, err error)
Close() error
}

View File

@@ -10,9 +10,6 @@ type DomainSubnetworkID [DomainSubnetworkIDSize]byte
// String stringifies a subnetwork ID.
func (id DomainSubnetworkID) String() string {
for i := 0; i < DomainSubnetworkIDSize/2; i++ {
id[i], id[DomainSubnetworkIDSize-1-i] = id[DomainSubnetworkIDSize-1-i], id[i]
}
return hex.EncodeToString(id[:])
}

View File

@@ -1,12 +1,10 @@
package model
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
package externalapi
// UTXOCollection represents a collection of UTXO entries, indexed by their outpoint
type UTXOCollection interface {
Iterator() ReadOnlyUTXOSetIterator
Get(outpoint *externalapi.DomainOutpoint) (externalapi.UTXOEntry, bool)
Contains(outpoint *externalapi.DomainOutpoint) bool
Get(outpoint *DomainOutpoint) (UTXOEntry, bool)
Contains(outpoint *DomainOutpoint) bool
Len() int
}
@@ -29,5 +27,5 @@ type MutableUTXODiff interface {
ToRemove() UTXOCollection
WithDiffInPlace(other UTXODiff) error
AddTransaction(transaction *externalapi.DomainTransaction, blockBlueScore uint64) error
AddTransaction(transaction *DomainTransaction, blockBlueScore uint64) error
}

View File

@@ -7,19 +7,18 @@ type ConsensusStateStore interface {
Store
IsStaged() bool
StageVirtualUTXODiff(virtualUTXODiff UTXODiff)
StageVirtualUTXODiff(virtualUTXODiff externalapi.UTXODiff)
UTXOByOutpoint(dbContext DBReader, outpoint *externalapi.DomainOutpoint) (externalapi.UTXOEntry, error)
HasUTXOByOutpoint(dbContext DBReader, outpoint *externalapi.DomainOutpoint) (bool, error)
VirtualUTXOSetIterator(dbContext DBReader) (ReadOnlyUTXOSetIterator, error)
StageVirtualDiffParents(virtualDiffParents []*externalapi.DomainHash)
VirtualDiffParents(dbContext DBReader) ([]*externalapi.DomainHash, error)
VirtualUTXOSetIterator(dbContext DBReader) (externalapi.ReadOnlyUTXOSetIterator, error)
VirtualUTXOs(dbContext DBReader,
fromOutpoint *externalapi.DomainOutpoint, limit int) ([]*externalapi.OutpointAndUTXOEntryPair, error)
StageTips(tipHashes []*externalapi.DomainHash)
Tips(dbContext DBReader) ([]*externalapi.DomainHash, error)
StartImportingPruningPointUTXOSet(dbContext DBWriter) error
HadStartedImportingPruningPointUTXOSet(dbContext DBWriter) (bool, error)
ImportPruningPointUTXOSetIntoVirtualUTXOSet(dbContext DBWriter, pruningPointUTXOSetIterator ReadOnlyUTXOSetIterator) error
ImportPruningPointUTXOSetIntoVirtualUTXOSet(dbContext DBWriter, pruningPointUTXOSetIterator externalapi.ReadOnlyUTXOSetIterator) error
FinishImportingPruningPointUTXOSet(dbContext DBWriter) error
}

View File

@@ -16,11 +16,11 @@ type PruningStore interface {
StageStartUpdatingPruningPointUTXOSet()
HadStartedUpdatingPruningPointUTXOSet(dbContext DBWriter) (bool, error)
FinishUpdatingPruningPointUTXOSet(dbContext DBWriter) error
UpdatePruningPointUTXOSet(dbContext DBWriter, utxoSetIterator ReadOnlyUTXOSetIterator) error
UpdatePruningPointUTXOSet(dbContext DBWriter, utxoSetIterator externalapi.ReadOnlyUTXOSetIterator) error
ClearImportedPruningPointUTXOs(dbContext DBWriter) error
AppendImportedPruningPointUTXOs(dbTx DBTransaction, outpointAndUTXOEntryPairs []*externalapi.OutpointAndUTXOEntryPair) error
ImportedPruningPointUTXOIterator(dbContext DBReader) (ReadOnlyUTXOSetIterator, error)
ImportedPruningPointUTXOIterator(dbContext DBReader) (externalapi.ReadOnlyUTXOSetIterator, error)
ClearImportedPruningPointMultiset(dbContext DBWriter) error
ImportedPruningPointMultiset(dbContext DBReader) (Multiset, error)
UpdateImportedPruningPointMultiset(dbTx DBTransaction, multiset Multiset) error

View File

@@ -5,9 +5,9 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// UTXODiffStore represents a store of UTXODiffs
type UTXODiffStore interface {
Store
Stage(blockHash *externalapi.DomainHash, utxoDiff UTXODiff, utxoDiffChild *externalapi.DomainHash)
Stage(blockHash *externalapi.DomainHash, utxoDiff externalapi.UTXODiff, utxoDiffChild *externalapi.DomainHash)
IsStaged() bool
UTXODiff(dbContext DBReader, blockHash *externalapi.DomainHash) (UTXODiff, error)
UTXODiff(dbContext DBReader, blockHash *externalapi.DomainHash) (externalapi.UTXODiff, error)
UTXODiffChild(dbContext DBReader, blockHash *externalapi.DomainHash) (*externalapi.DomainHash, error)
HasUTXODiffChild(dbContext DBReader, blockHash *externalapi.DomainHash) (bool, error)
Delete(blockHash *externalapi.DomainHash)

View File

@@ -4,11 +4,11 @@ import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// ConsensusStateManager manages the node's consensus state
type ConsensusStateManager interface {
AddBlock(blockHash *externalapi.DomainHash) (*externalapi.SelectedChainPath, error)
AddBlock(blockHash *externalapi.DomainHash) (*externalapi.SelectedChainPath, externalapi.UTXODiff, error)
PopulateTransactionWithUTXOEntries(transaction *externalapi.DomainTransaction) error
ImportPruningPoint(newPruningPoint *externalapi.DomainBlock) error
RestorePastUTXOSetIterator(blockHash *externalapi.DomainHash) (ReadOnlyUTXOSetIterator, error)
CalculatePastUTXOAndAcceptanceData(blockHash *externalapi.DomainHash) (UTXODiff, externalapi.AcceptanceData, Multiset, error)
RestorePastUTXOSetIterator(blockHash *externalapi.DomainHash) (externalapi.ReadOnlyUTXOSetIterator, error)
CalculatePastUTXOAndAcceptanceData(blockHash *externalapi.DomainHash) (externalapi.UTXODiff, externalapi.AcceptanceData, Multiset, error)
GetVirtualSelectedParentChainFromBlock(blockHash *externalapi.DomainHash) (*externalapi.SelectedChainPath, error)
RecoverUTXOIfRequired() error
}

View File

@@ -11,6 +11,7 @@ type DAGTopologyManager interface {
IsChildOf(blockHashA *externalapi.DomainHash, blockHashB *externalapi.DomainHash) (bool, error)
IsAncestorOf(blockHashA *externalapi.DomainHash, blockHashB *externalapi.DomainHash) (bool, error)
IsAncestorOfAny(blockHash *externalapi.DomainHash, potentialDescendants []*externalapi.DomainHash) (bool, error)
IsAnyAncestorOf(potentialAncestors []*externalapi.DomainHash, blockHash *externalapi.DomainHash) (bool, error)
IsInSelectedParentChainOf(blockHashA *externalapi.DomainHash, blockHashB *externalapi.DomainHash) (bool, error)
ChildInSelectedParentChainOf(context, highHash *externalapi.DomainHash) (*externalapi.DomainHash, error)

View File

@@ -11,7 +11,7 @@ type DAGTraversalManager interface {
// from lowHash (exclusive) to highHash (inclusive) over highHash's selected parent chain
SelectedChildIterator(highHash, lowHash *externalapi.DomainHash) (BlockIterator, error)
Anticone(blockHash *externalapi.DomainHash) ([]*externalapi.DomainHash, error)
BlueWindow(highHash *externalapi.DomainHash, windowSize int) ([]*externalapi.DomainHash, error)
BlockWindow(highHash *externalapi.DomainHash, windowSize int) ([]*externalapi.DomainHash, error)
NewDownHeap() BlockHeap
NewUpHeap() BlockHeap
CalculateChainPath(

View File

@@ -1,17 +0,0 @@
package model
import "github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
// ReadOnlyUTXOSet represents a UTXOSet that can only be read from
type ReadOnlyUTXOSet interface {
Iterator() ReadOnlyUTXOSetIterator
Entry(outpoint *externalapi.DomainOutpoint) externalapi.UTXOEntry
}
// ReadOnlyUTXOSetIterator is an iterator over all entries in a
// ReadOnlyUTXOSet
type ReadOnlyUTXOSetIterator interface {
First() bool
Next() bool
Get() (outpoint *externalapi.DomainOutpoint, utxoEntry externalapi.UTXOEntry, err error)
}

View File

@@ -12,7 +12,7 @@ type TestBlockBuilder interface {
// BuildBlockWithParents builds a block with provided parents, coinbaseData and transactions,
// and returns the block together with its past UTXO-diff from the virtual.
BuildBlockWithParents(parentHashes []*externalapi.DomainHash, coinbaseData *externalapi.DomainCoinbaseData,
transactions []*externalapi.DomainTransaction) (*externalapi.DomainBlock, model.UTXODiff, error)
transactions []*externalapi.DomainTransaction) (*externalapi.DomainBlock, externalapi.UTXODiff, error)
BuildUTXOInvalidHeader(parentHashes []*externalapi.DomainHash) (externalapi.BlockHeader, error)

View File

@@ -1,11 +1,12 @@
package testapi
import (
"io"
"github.com/kaspanet/kaspad/domain/consensus/model"
"github.com/kaspanet/kaspad/domain/consensus/model/externalapi"
"github.com/kaspanet/kaspad/domain/dagconfig"
"github.com/kaspanet/kaspad/infrastructure/db/database"
"io"
)
// MineJSONBlockType indicates which type of blocks MineJSON mines
@@ -31,7 +32,7 @@ type TestConsensus interface {
Database() database.Database
BuildBlockWithParents(parentHashes []*externalapi.DomainHash, coinbaseData *externalapi.DomainCoinbaseData,
transactions []*externalapi.DomainTransaction) (*externalapi.DomainBlock, model.UTXODiff, error)
transactions []*externalapi.DomainTransaction) (*externalapi.DomainBlock, externalapi.UTXODiff, error)
BuildHeaderWithParents(parentHashes []*externalapi.DomainHash) (externalapi.BlockHeader, error)
@@ -50,6 +51,8 @@ type TestConsensus interface {
MineJSON(r io.Reader, blockType MineJSONBlockType) (tips []*externalapi.DomainHash, err error)
DiscardAllStores()
RenderDAGToDot(filename string) error
AcceptanceDataStore() model.AcceptanceDataStore
BlockHeaderStore() model.BlockHeaderStore
BlockRelationStore() model.BlockRelationStore

View File

@@ -4,4 +4,4 @@ import (
"github.com/kaspanet/kaspad/infrastructure/logger"
)
var log, _ = logger.Get(logger.SubsystemTags.BDAG)
var log = logger.RegisterSubSystem("BDAG")

Some files were not shown because too many files have changed in this diff