Ori Newman d207888b67
Implement pruned headers node (#1787)
* Pruning headers p2p basic structure

* Remove headers-first

* Fix consensus tests except TestValidateAndInsertPruningPointWithSideBlocks and TestValidateAndInsertImportedPruningPoint

* Add virtual genesis

* Implement PruningPointAndItsAnticoneWithMetaData

* Start fixing TestValidateAndInsertImportedPruningPoint

* Fix TestValidateAndInsertImportedPruningPoint

* Fix BlockWindow

* Update p2p and gRPC

* Fix all tests except TestHandleRelayInvs

* Delete TestHandleRelayInvs parts that cover the old IBD flow

* Fix lint errors

* Add p2p_request_ibd_blocks.go

* Clean code

* Make MsgBlockWithMetaData implement its own representation

* Remove redundant check of whether the highest shared block is below the pruning point

* Fix TestCheckLockTimeVerifyConditionedByAbsoluteTimeWithWrongLockTime

* Fix comments, errors and names

* Fix window size to the real value

* Check reindex root after each block at TestUpdateReindexRoot

* Remove irrelevant check

* Renames and comments

* Remove redundant argument from sendGetBlockLocator

* Don't delete staging on non-recoverable errors

* Renames and comments

* Remove redundant code

* Commit changes inside ResolveVirtual

* Add comment to IsRecoverableError

* Remove blocksWithMetaDataGHOSTDAGDataStore

* Increase Windows pagefile

* Move DeleteStagingConsensus outside of defer

* Get rid of mustAccepted in receiveBlockWithMetaData

* Ban on invalid pruning point

* Rename interface_datastructures_daawindowstore.go to interface_datastructures_blocks_with_meta_data_daa_window_store.go

* Change GetVirtualSelectedParentChainFromBlockResponseMessage and VirtualSelectedParentChainChangedNotificationMessage to show only added block hashes (sketched below)
* Remove ResolveVirtual
* Use externalapi.ConsensusWrapper inside MiningManager
* Fix pruningmanager.blockwithmetadata
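
A rough sketch of the payload change in the first bullet above. The field names are illustrative, not the exact kaspad RPC definitions: the point is that the message now carries only hashes for added chain blocks where it previously carried full chain-block objects.

// Sketch only; hypothetical struct modeling the slimmed-down notification.
package main

import "fmt"

type virtualSelectedParentChainChangedNotification struct {
	RemovedChainBlockHashes []string
	AddedChainBlockHashes   []string // previously full chain-block data
}

func main() {
	n := virtualSelectedParentChainChangedNotification{
		RemovedChainBlockHashes: []string{"aaa"},
		AddedChainBlockHashes:   []string{"bbb", "ccc"},
	}
	fmt.Printf("removed=%d added=%d\n",
		len(n.RemovedChainBlockHashes), len(n.AddedChainBlockHashes))
}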

* Set pruning point selected child when importing the pruning point UTXO set

* Change virtual genesis hash

* Replace the selected parent with virtual genesis in removePrunedBlocksFromGHOSTDAGData

* Get rid of low hash in block locators
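
For context, a block locator is a list of hashes stepping back from a tip at exponentially growing distances. This toy sketch (integer heights instead of real hashes; not kaspad's actual implementation) shows the shape of a locator built without a low-hash lower bound, running all the way to genesis.

// Toy model of a block locator with no low-hash bound.
package main

import "fmt"

// buildLocator returns heights stepping back from tip with exponentially
// growing gaps, down to genesis.
func buildLocator(tip int) []int {
	locator := []int{}
	step := 1
	for h := tip; h > 0; h -= step {
		locator = append(locator, h)
		if len(locator) > 10 {
			step *= 2 // widen the gap after the first few entries
		}
	}
	return append(locator, 0) // always include genesis
}

func main() {
	fmt.Println(buildLocator(1000))
}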

* Remove +1 from everywhere we use difficultyAdjustmentWindowSize and increase the default value by one
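
A minimal sketch of this off-by-one cleanup, with hypothetical stand-ins for kaspad's real constant and window builder: call sites used to pass difficultyAdjustmentWindowSize+1, so the default is bumped by one and the +1 is dropped everywhere.

package main

import "fmt"

// Hypothetical stand-in; the default now already includes the extra slot
// that call sites previously added with "+1".
const defaultDifficultyAdjustmentWindowSize = 264

// blockWindow returns the last `size` heights up to and including tip —
// a toy model of selecting a difficulty (DAA) window.
func blockWindow(tip, size int) []int {
	window := make([]int, 0, size)
	for h := tip; h > tip-size && h >= 0; h-- {
		window = append(window, h)
	}
	return window
}

func main() {
	// Call sites now pass the constant directly instead of size+1.
	fmt.Println(len(blockWindow(1000, defaultDifficultyAdjustmentWindowSize)))
}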

* Add comments about consensus wrapper

* Don't use a separate staging area in resolveBlockStatus

* Fix netsync stability test

* Fix checkResolveVirtual

* Rename ConsensusWrapper->ConsensusReference
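
The rename hints at what the type is for. Here is a toy sketch of the idea (names are illustrative, not kaspad's actual externalapi types): holders keep a reference that re-resolves the current consensus instance on each access, so a consensus swapped underneath — e.g. when a staging consensus is committed, going by the staging-consensus bullets above — is picked up transparently.

package main

import "fmt"

// consensus stands in for the real externalapi.Consensus interface.
type consensus interface {
	Name() string
}

type consensusImpl struct{ name string }

func (c *consensusImpl) Name() string { return c.name }

// consensusReference re-resolves the current consensus instead of
// pinning one instance.
type consensusReference struct {
	current **consensusImpl
}

func (r consensusReference) Consensus() consensus { return *r.current }

func main() {
	instance := &consensusImpl{name: "pre-import"}
	ref := consensusReference{current: &instance}
	fmt.Println(ref.Consensus().Name()) // pre-import

	// Swapping the underlying consensus is visible through the reference.
	instance = &consensusImpl{name: "post-import"}
	fmt.Println(ref.Consensus().Name()) // post-import
}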

* Get rid of blockHeapNode

* Add comment to defaultDifficultyAdjustmentWindowSize

* Add SelectedChild to DAGTraversalManager

* Remove redundant copy

* Rename blockWindowHeap->calculateBlockWindowHeap

* Move isVirtualGenesisOnlyParent to utils

* Change BlockWithMetaData->BlockWithTrustedData

* Get rid of maxReasonLength

* Split IBD into batches of 100 blocks at a time
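
A sketch of the batching idea (the constant name and helper are hypothetical; kaspad's actual IBD code differs): instead of requesting the whole missing range at once, blocks are requested in fixed chunks of 100.

package main

import "fmt"

// ibdBatchSize mirrors the batching described above; illustrative only.
const ibdBatchSize = 100

// downloadInBatches requests hashes in fixed-size chunks rather than
// streaming the entire missing range in one request.
func downloadInBatches(hashes []string, request func(batch []string)) {
	for offset := 0; offset < len(hashes); offset += ibdBatchSize {
		end := offset + ibdBatchSize
		if end > len(hashes) {
			end = len(hashes)
		}
		request(hashes[offset:end])
	}
}

func main() {
	hashes := make([]string, 250)
	for i := range hashes {
		hashes[i] = fmt.Sprintf("hash-%d", i)
	}
	downloadInBatches(hashes, func(batch []string) {
		fmt.Printf("requesting %d blocks\n", len(batch))
	})
}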

* Fix a bug in calculateBlockWindowHeap

* Switch to trusted data when encountering virtual genesis in blockWithTrustedData

* Move ConsensusReference to domain

* Update ConsensusReference comment

* Add comment

* Rename shouldNotAddGenesis->skipAddingGenesis
2021-07-26 12:24:07 +03:00


package main

import (
	"sync/atomic"

	"github.com/kaspanet/kaspad/stability-tests/common"
	"github.com/kaspanet/kaspad/util/panics"
	"github.com/kaspanet/kaspad/util/profiling"
	"github.com/pkg/errors"
)

func main() {
	defer panics.HandlePanic(log, "netsync-main", nil)

	err := parseConfig()
	if err != nil {
		panic(errors.Wrap(err, "error in parseConfig"))
	}
	defer backendLog.Close()
	common.UseLogger(backendLog, log.Level())

	cfg := activeConfig()
	if cfg.Profile != "" {
		profiling.Start(cfg.Profile, log)
	}

	// shutdown is flipped to 1 once the test completes, so the error
	// handlers below can tell expected teardown disconnects apart from
	// genuine mid-test failures.
	shutdown := uint64(0)

	// Set up the syncer node, which serves blocks during the test.
	syncerClient, syncerTeardown, err := setupSyncer()
	if err != nil {
		panic(errors.Wrap(err, "error in setupSyncer"))
	}
	syncerClient.SetOnErrorHandler(func(err error) {
		if atomic.LoadUint64(&shutdown) == 0 {
			log.Debugf("received error from SYNCER: %s", err)
		}
	})
	defer func() {
		syncerClient.Disconnect()
		syncerTeardown()
	}()

	// Set up the syncee node, which downloads blocks from the syncer.
	syncedClient, syncedTeardown, err := setupSyncee()
	if err != nil {
		panic(errors.Wrap(err, "error in setupSyncee"))
	}
	syncedClient.SetOnErrorHandler(func(err error) {
		if atomic.LoadUint64(&shutdown) == 0 {
			log.Debugf("received error from SYNCEE: %s", err)
		}
	})
	defer func() {
		syncedClient.Disconnect()
		syncedTeardown()
	}()

	// Verify that the syncee catches up to the syncer fast enough, then
	// verify that virtual resolution completes on the synced node.
	err = checkSyncRate(syncerClient, syncedClient)
	if err != nil {
		panic(errors.Wrap(err, "error in checkSyncRate"))
	}

	err = checkResolveVirtual(syncerClient, syncedClient)
	if err != nil {
		panic(errors.Wrap(err, "error in checkResolveVirtual"))
	}

	atomic.StoreUint64(&shutdown, 1)
}