This PR:
- moves the wrapping of appliers (needed because of Alarms) out of server.go into uber_applier.go
- clearly divides the application logic into a chain of:
a) 'WrapApply' (generic logic shared across all the methods)
b) a dispatcher (translation of an Apply into a specific method like 'Put')
c) a chain of 'wrappers' around the specific methods (like Put); see the sketch below.
- when we do recovery (restore from snapshot), we create a new instance of the appliers.
The purpose is to make sure we control all the dependencies of the apply process, i.e.
we can supply e.g. a special instance of 'backend' to the application logic.
The PR removes the calls to applierV3base logic from server.go code that is NOT part of 'application'.
The original idea was that read-only transactions and Range calls shared logic with Apply,
so they could call the appliers directly (while bypassing all the 'corrupt', 'quota' and 'auth' wrappers).
This PR moves all that logic to a separate file (which can later become a package of its own).
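For illustration, here is a minimal sketch of the wrapper-chain idea, assuming simplified, hypothetical names (the real applierV3 interface and wrappers in etcd have many more methods and checks):

```go
package apply

import (
	"context"
	"errors"
)

// errNoSpace stands in for the real quota error.
var errNoSpace = errors.New("etcdserver: no space")

// applierV3 is a trimmed, illustrative applier interface; the real one
// exposes many more methods (Range, Txn, Compaction, Alarm, ...).
type applierV3 interface {
	Put(ctx context.Context, key, value []byte) error
}

// quotaApplier is one link in the chain of wrappers around a specific
// method: it enforces a storage quota, then delegates to the next applier.
type quotaApplier struct {
	next     applierV3
	hasSpace func(needed int) bool // hypothetical quota check
}

func (a *quotaApplier) Put(ctx context.Context, key, value []byte) error {
	if !a.hasSpace(len(key) + len(value)) {
		return errNoSpace
	}
	return a.next.Put(ctx, key, value)
}

// newApplierChain wires the wrappers around the base applier that talks to
// the backend; on recovery a fresh chain is built for the restored backend.
func newApplierChain(base applierV3, hasSpace func(int) bool) applierV3 {
	return &quotaApplier{next: base, hasSpace: hasSpace}
}
```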
A client-side optimization was made in #6100 to filter out ascending key sorts and avoid an unnecessary re-sort, since this is the order the backend logic already returns.
It seems to me that this really belongs on the server side, since it is tied to the server implementation and should apply to any caller of the KV API (for example non-Go clients).
Relatedly, the client/v3 syncer depends on this default ordering, which isn't explicit in the KV API contract. So I'm proposing that the required sort parameters be included explicitly; the request will take the fast path either way, as in the example below.
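For example, a syncer-style range over a prefix can pass the ascending-by-key ordering explicitly through the existing clientv3 sort options (the endpoint and key prefix below are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Request ascending key order explicitly instead of relying on the
	// implicit default ordering of range responses.
	resp, err := cli.Get(ctx, "foo/",
		clientv3.WithPrefix(),
		clientv3.WithSort(clientv3.SortByKey, clientv3.SortAscend))
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
	}
}
```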
Narrowly prevent etcd from crashing when given a bad ACTIVATE payload, e.g.:
$ curl -d "{\"action\":\"ACTIVATE\"}" ${ETCD}/v3/maintenance/alarm
curl: (52) Empty reply from server
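For context, a hedged sketch of the kind of validation involved, not the exact etcd fix; checkAlarmRequest is a hypothetical helper over the real etcdserverpb types:

```go
package maintenance

import (
	"errors"
	"fmt"

	pb "go.etcd.io/etcd/api/v3/etcdserverpb"
)

// checkAlarmRequest is an illustrative guard (not the exact etcd code) that
// rejects malformed alarm requests, such as an ACTIVATE without a concrete
// alarm type, instead of letting them crash deeper in the apply path.
func checkAlarmRequest(ar *pb.AlarmRequest) error {
	if ar.Action == pb.AlarmRequest_ACTIVATE && ar.Alarm == pb.AlarmType_NONE {
		return errors.New("alarm: ACTIVATE requires a concrete alarm type")
	}
	switch ar.Alarm {
	case pb.AlarmType_NONE, pb.AlarmType_NOSPACE, pb.AlarmType_CORRUPT:
		return nil
	default:
		return fmt.Errorf("alarm: unknown alarm type %v", ar.Alarm)
	}
}
```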
The ClusterVersionSet, ClusterMemberAttrSet and DowngradeInfoSet functions
write both to the V2store and to the backend. Prior to this CL they were
in a branch that was not executed if shouldApplyV3 was false,
e.g. during restore, when the backend is up-to-date (has a high
consistency-index) while the v2store requires replay from the WAL log.
The most serious consequence of this bug was that the v2store after restore
could have a different index (revision) than the exact same store before restore,
and thus potentially different content between replicas.
This change also suppresses double-applying of Membership
(ClusterConfig) changes on the backend (store v3); luckily these are not
part of the MVCC/KeyValue store, so they did not cause revisions to be
bumped.
Inspired by jingyih@'s comment:
https://github.com/etcd-io/etcd/pull/12820#issuecomment-815299406
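A minimal sketch of the intended split, assuming hypothetical helper names (the real code lives in the membership/cluster apply path):

```go
package membership

// cluster is a trimmed stand-in for the real cluster state; the helper
// methods below are hypothetical placeholders for the v2store and backend
// writers used by ClusterVersionSet and friends.
type cluster struct {
	v2Version      string
	backendVersion string
}

func (c *cluster) saveVersionToV2Store(ver string) { c.v2Version = ver }
func (c *cluster) saveVersionToBackend(ver string) { c.backendVersion = ver }

// setClusterVersion illustrates the split described above: the v2store write
// always happens (it may need to be replayed from the WAL after restore),
// while the backend (v3) write is skipped when the backend's
// consistency-index shows the entry was already applied.
func (c *cluster) setClusterVersion(ver string, shouldApplyV3 bool) {
	c.saveVersionToV2Store(ver)
	if shouldApplyV3 {
		c.saveVersionToBackend(ver)
	}
}
```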