When a client lacks permission to perform an operation, applying the
request may fail. We should still move consistent_index forward in
this case; otherwise consistent_index may end up smaller than the
snapshot index.
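A minimal, self-contained sketch of the intended apply flow (all types and names here are hypothetical stand-ins, not the actual etcdserver code):
```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical, trimmed-down stand-ins for the etcdserver apply path.
type entry struct{ Index uint64 }
type request struct{ authorized bool }

type server struct{ consistentIndex uint64 }

var errPermissionDenied = errors.New("etcdserver: permission denied")

// applyEntry sketches the intended flow: the consistent index advances
// even when the request itself is rejected for lack of permission.
func (s *server) applyEntry(e entry, req request) error {
	// Advance the consistent index first: this entry came from the Raft
	// log and must not be re-applied after a snapshot restore. Skipping
	// this on failure is what let consistent_index fall behind the
	// snapshot index.
	s.consistentIndex = e.Index

	if !req.authorized {
		// The operation fails, but the index above stays advanced.
		return errPermissionDenied
	}
	// ... apply the operation to the MVCC store ...
	return nil
}

func main() {
	s := &server{}
	_ = s.applyEntry(entry{Index: 42}, request{authorized: false})
	fmt.Println(s.consistentIndex) // 42, despite the rejected request
}
```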
When a user sets up a Mirror with a restricted user that doesn't have
access to the `foo` path, we will fail to get the most recent revision
due to permission issues.
With this change, when a prefix is provided we will get the initial
revision from the prefix rather than /foo. This allows restricted users
to set up sync.
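A minimal sketch of the idea using the public clientv3 API (the endpoint, key names, and variables are illustrative; the actual fix lives in the mirror syncer):
```go
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	prefix := "foo" // the path the restricted user can actually read

	// Fetch the current revision with a ranged read on the prefix itself,
	// instead of a fixed key the user may not have access to.
	resp, err := cli.Get(context.Background(), prefix,
		clientv3.WithPrefix(), clientv3.WithKeysOnly(), clientv3.WithLimit(1))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("starting sync from revision %d", resp.Header.Revision)
}
```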
When etcdserver receives a LeaseRenew request, it may still be in the
middle of processing a LeaseGrant request for exactly the same
leaseID. Accordingly it may return TTL=0 to the client due to a
"lease not found" error. So the leader should wait for the applied
index to catch up before processing client requests.
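A hedged sketch of the waiting step, assuming hypothetical committedIndex/appliedIndex accessors (the real etcdserver internals differ):
```go
package example

import (
	"context"
	"errors"
	"time"
)

// Hypothetical accessors standing in for the etcdserver internals.
type server struct {
	committedIndex func() uint64
	appliedIndex   func() uint64
}

var errWaitApplied = errors.New("etcdserver: timed out waiting for applied index")

// waitAppliedIndex sketches the fix: before the leader serves a request
// such as LeaseRenew, it blocks until the applied index catches up with
// the committed index, so an in-flight LeaseGrant on the same leaseID
// has been applied and is visible.
func (s *server) waitAppliedIndex(ctx context.Context) error {
	target := s.committedIndex()
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for s.appliedIndex() < target {
		select {
		case <-ctx.Done():
			return errWaitApplied
		case <-ticker.C:
		}
	}
	return nil
}
```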
To avoid inconsistent behavior during cluster upgrade we are feature
gating persistence behind the cluster version. This should ensure that
all cluster members are upgraded to v3.6 before the behavior changes.
To allow backporting this fix to v3.5 we are also introducing the flag
--experimental-enable-lease-checkpoint-persist, which will allow for
a smooth upgrade in v3.5 clusters with this feature enabled.
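A hedged sketch of the gating rule (function and parameter names are illustrative, not etcd's actual lessor code):
```go
package main

import (
	"fmt"

	"github.com/coreos/go-semver/semver"
)

// shouldPersistCheckpoints sketches the gating rule: persist lease
// checkpoints only once the whole cluster reports v3.6+, or when the
// operator opted in via --experimental-enable-lease-checkpoint-persist.
func shouldPersistCheckpoints(clusterVersion *semver.Version, flagEnabled bool) bool {
	v3_6 := semver.Version{Major: 3, Minor: 6}
	return flagEnabled || (clusterVersion != nil && !clusterVersion.LessThan(v3_6))
}

func main() {
	cv := semver.New("3.5.0")
	fmt.Println(shouldPersistCheckpoints(cv, false)) // false: wait for full upgrade
	fmt.Println(shouldPersistCheckpoints(cv, true))  // true: explicit opt-in on v3.5
}
```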
The current checkpointing mechanism is buggy: new checkpoints for any
lease are scheduled only until the first leader change. This adds a fix
for that and a test that checks it.
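A trimmed-down sketch of the shape of the fix (all names are hypothetical):
```go
package example

import "time"

// Hypothetical, trimmed-down lessor internals.
type lease struct{ remaining time.Duration }

type lessor struct {
	leaseMap             map[int64]*lease
	scheduledCheckpoints []int64
}

// promote sketches the shape of the fix: when this node becomes leader
// it (re)schedules a checkpoint for every live lease, instead of relying
// on schedules that existed only on the previous leader.
func (le *lessor) promote() {
	for id, l := range le.leaseMap {
		if l.remaining > 0 {
			le.scheduledCheckpoints = append(le.scheduledCheckpoints, id)
		}
	}
}
```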
Motivation is as follows:
- etcdctl depends only on clientv3 APIs, with no dependencies on bolt, backend, mvcc, or the file layout.
- etcdctl can be officially supported across a wide range of versions, while etcdutl is pretty specific to the file format at a particular version.
- It's a step towards the desired module layout, documented in: https://etcd.io/docs/next/dev-internal/modules/
Infrequently the test flaked. Reproducible with:
```
go test go.etcd.io/etcd/tests/v3/integration --run TestFirstCommitNotification --count=500
```
moveLeader finishes when the config change is committed by quorum.
It doesn't guarantee that the 'empty' record was committed by the new leader.
From time to time appliedLeaderIndex returned 9 (without the empty
entry) and the test flaked; in the healthy case the applied index
was 10.
Fixed by putting a kv pair after the leader change, as sketched below.
By the time the put finishes, the pair is guaranteed to be stored at
its index (and so is the empty entry).
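A hedged sketch of that stabilizing step (the key, value, and helper name are arbitrary):
```go
package example

import (
	"context"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// ensureApplied sketches the stabilizing step: after MoveLeader, write
// through the client. Once Put returns, its entry is committed and
// applied, and so is every earlier entry, including the new leader's
// 'empty' record, making the expected applied index deterministic.
func ensureApplied(ctx context.Context, cli *clientv3.Client) error {
	_, err := cli.Put(ctx, "stable-key", "v")
	return err
}
```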
So far each instance of the embedded server was overriding the grpc loggers and the zap global loggers.
It's counterintuitive that the last created embedded server was 'winning', and moreover it broke grpc's expectation that the loggers change "only" before the grpc stack is used.
This PR introduces an explicit call, `embed.Config::SetupGlobalLoggers()`, that changes the loggers where requested. The call is used by the etcd main binary.
The immediate benefit from this change is a reduction in test flakiness, as some flakes were caused by the wrong logger being used across tests.
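A minimal usage sketch: an embedding application that wants the old behavior now opts in explicitly (the data directory and surrounding setup are illustrative):
```go
package main

import (
	"log"

	"go.etcd.io/etcd/server/v3/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"

	// Explicit opt-in: only the process entry point mutates the grpc and
	// zap global loggers, and it does so before the grpc stack is used.
	cfg.SetupGlobalLoggers()

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
}
```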
The ClusterVersionSet, ClusterMemberAttrSet, and DowngradeInfoSet
functions write to both the V2store and the backend. Prior to this CL,
these writes sat in a branch that was not executed if shouldApplyV3 was
false, e.g. during restore when the backend is up-to-date (has a high
consistency-index) while the v2store requires replay from the WAL log
(see the sketch below).
The most serious consequence of this bug was that the v2store after restore
could have a different index (revision) than the exact same store before
restore, so potentially different content between replicas.
This change also suppresses double-applying of membership
(ClusterConfig) changes on the backend (store v3) - which luckily are
not part of the MVCC/KeyValue store, so they didn't cause revisions to
be bumped.
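A simplified sketch of the corrected dual-write rule (the interfaces and names are hypothetical stand-ins):
```go
package example

// Hypothetical, simplified stand-ins illustrating the dual-write rule.
type v2Store interface{ ApplyMemberAttr(raftIndex uint64) }
type backend interface{ MustSaveMemberAttr() }

// applyClusterMemberAttrSet sketches the corrected shape: the v2store
// write happens unconditionally (v2store is rebuilt from the WAL on
// restore), while the backend write is skipped when the backend is
// already ahead, i.e. its consistent index covers this entry
// (shouldApplyV3 == false).
func applyClusterMemberAttrSet(st v2Store, be backend, raftIndex uint64, shouldApplyV3 bool) {
	st.ApplyMemberAttr(raftIndex) // always: keeps the v2store index in sync during WAL replay

	if shouldApplyV3 {
		be.MustSaveMemberAttr() // only when the backend hasn't seen this entry yet
	}
}
```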
Inspired by jingyih@ comment:
https://github.com/etcd-io/etcd/pull/12820#issuecomment-815299406