The problem with the old code was that during a downgrade only members
running the downgrade target version were allowed to join. This is
unrealistic, as it does not handle existing members that disconnect and
need to rejoin.
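As a rough illustration, the relaxed check could look like the sketch
below; the function and parameter names are hypothetical, not the
actual etcd implementation. The idea is to accept a joining member that
runs either the current cluster version or the downgrade target
version, so existing members can disconnect and rejoin while the
downgrade is in progress.
```
package main

import "fmt"

// allowedToJoin is a hypothetical sketch: while a downgrade is enabled,
// accept members running either the current cluster version or the
// downgrade target version, instead of only the target version.
func allowedToJoin(memberVer, clusterVer, downgradeTarget string, downgradeEnabled bool) bool {
	if memberVer == clusterVer {
		return true
	}
	return downgradeEnabled && memberVer == downgradeTarget
}

func main() {
	// A 3.5.0 member rejoining a 3.5.0 cluster that is downgrading to
	// 3.4.0 was rejected by the old check; the relaxed check accepts it.
	fmt.Println(allowedToJoin("3.5.0", "3.5.0", "3.4.0", true)) // true
}
```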
To make it possible to alert on misconfigured etcd clusters that have
missing/superfluous peers, expose the list of peers as a metric.
This metric can, for example, be compared against the control-plane
nodes of a Kubernetes cluster.
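A minimal sketch of exposing the peers through the Prometheus Go client
is shown below; the metric and label names are illustrative
assumptions, not necessarily the ones etcd ships.
```
package metrics

import "github.com/prometheus/client_golang/prometheus"

// knownPeers publishes one sample per (local, remote) pair so alerting
// rules can compare the label set against an expected list of
// control-plane nodes.
var knownPeers = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "etcd",
		Subsystem: "network",
		Name:      "known_peers", // illustrative name
		Help:      "Known peers of this member, labeled by local and remote member ID.",
	},
	[]string{"Local", "Remote"},
)

func init() {
	prometheus.MustRegister(knownPeers)
}

// updatePeerMetric re-publishes the peer list whenever membership changes.
func updatePeerMetric(localID string, remoteIDs []string) {
	knownPeers.Reset()
	for _, remote := range remoteIDs {
		knownPeers.WithLabelValues(localID, remote).Set(1)
	}
}
```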
Thanks to this change:
- all the bucket -> buffer maps are indexed by ints instead of strings,
so there is no byte[] -> string -> hash conversion on each access.
- buckets are strongly typed in the backend/mvcc API, as sketched below.
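The shape of the change might look roughly like the following; the type
and bucket names are illustrative, etcd's real definitions live in its
backend packages.
```
package buckets

// BucketID is a small integer used as the map key for per-bucket
// buffers, so lookups no longer need a byte[] -> string -> hash
// conversion.
type BucketID int

// Bucket is a strongly typed handle passed through the backend/mvcc
// API instead of a raw []byte name.
type Bucket interface {
	ID() BucketID
	Name() []byte
}

type bucket struct {
	id   BucketID
	name []byte
}

func (b bucket) ID() BucketID { return b.id }
func (b bucket) Name() []byte { return b.name }

// Illustrative bucket definitions.
var (
	Key     Bucket = bucket{id: 1, name: []byte("key")}
	Meta    Bucket = bucket{id: 2, name: []byte("meta")}
	Members Bucket = bucket{id: 3, name: []byte("members")}
)

// Write buffers indexed by the integer ID rather than the bucket name.
var writeBuffers = map[BucketID][][]byte{}
```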
This makes the (bbolt) backend a fully featured snapshot in terms of
WAL/raft, i.e. it carries:
- commit (applied_index)
- confState
Benefits:
- The backend becomes a point-in-time definition sufficient to start
replaying the WAL: we have applied_index & confState in a consistent
state (see the sketch after this list).
- In case of emergency, a backend state can be used for recovery with a
correct 'backend' (bbolt) membership context.
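A rough sketch of persisting the confState in the backend, using bbolt
directly; the bucket and key names here are assumptions for
illustration, not necessarily the exact keys etcd uses.
```
package main

import (
	"encoding/json"
	"log"

	bolt "go.etcd.io/bbolt"
	"go.etcd.io/etcd/raft/v3/raftpb"
)

// saveConfState stores the raft ConfState as JSON in the meta bucket,
// next to the consistent/applied index, so the backend alone is enough
// to resume replaying the WAL.
func saveConfState(db *bolt.DB, cs *raftpb.ConfState) error {
	return db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("meta"))
		if err != nil {
			return err
		}
		v, err := json.Marshal(cs)
		if err != nil {
			return err
		}
		return b.Put([]byte("confState"), v)
	})
}

func main() {
	db, err := bolt.Open("backend.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := saveConfState(db, &raftpb.ConfState{Voters: []uint64{1, 2, 3}}); err != nil {
		log.Fatal(err)
	}
}
```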
Prior to this change the 'restored' backend would still contain:
- the old member IDs (an mvcc deletion was used, whereas membership
lives in a bolt bucket, not in the mvcc part):
```
mvs := mvcc.NewStore(s.lg, be, lessor, ci, mvcc.StoreConfig{CompactionBatchLimit: math.MaxInt32})
defer mvs.Close()
txn := mvs.Write(traceutil.TODO())
btx := be.BatchTx()
del := func(k, v []byte) error {
	txn.DeleteRange(k, nil)
	return nil
}
// delete stored members from old cluster since using new members
btx.UnsafeForEach([]byte("members"), del)
```
- and didn't get the new members added.
The ClusterVersionSet, ClusterMemberAttrSet, and DowngradeInfoSet
functions write both to the V2store and the backend. Prior to this CL
they were in a branch that was not executed if shouldApplyV3 was false,
e.g. during restore, when the backend is up to date (has a high
consistency-index) while the v2store requires replay from the WAL log.
The most serious consequence of this bug was that the v2store after
restore could have a different index (revision) than the exact same
store before restore, and thus potentially different content between
replicas.
This change also suppresses double-applying of membership
(ClusterConfig) changes on the backend (store v3); luckily these are
not part of the MVCC/KeyValue store, so they did not cause revisions to
be bumped.
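A simplified sketch of the fixed control flow, with hypothetical helper
names standing in for the real apply functions:
```
package main

import "fmt"

// Hypothetical stand-ins for the v2store and backend writes performed
// by ClusterVersionSet and friends.
func setClusterVersionV2Store(ver string) { fmt.Println("v2store version =", ver) }
func setClusterVersionBackend(ver string) { fmt.Println("backend version =", ver) }

// applyClusterVersionSet: the v2store write always happens so its index
// matches a WAL replay, while the backend write is guarded by
// shouldApplyV3 so an already up-to-date backend is not double-applied.
func applyClusterVersionSet(ver string, shouldApplyV3 bool) {
	setClusterVersionV2Store(ver)
	if shouldApplyV3 {
		setClusterVersionBackend(ver)
	}
}

func main() {
	// During restore the backend may already contain this entry.
	applyClusterVersionSet("3.5.0", false)
}
```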
Inspired by jingyih@'s comment:
https://github.com/etcd-io/etcd/pull/12820#issuecomment-815299406