This is because etcd v3.5 will panic when it encounters a
ClusterVersionSet entry with a version >3.5.0. For downgrades to v3.5
to work, we need to make sure this entry is snapshotted.
The problem with the old code was that, during a downgrade, only
members with the downgrade target version were allowed to join. This
is unrealistic, as it doesn't handle members that disconnect and
rejoin.
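
A minimal sketch of the intended check, with illustrative names (the
real logic lives in etcd's membership code):

```go
package main

import (
	"fmt"

	"github.com/coreos/go-semver/semver"
)

// allowedVersion reports whether a member's server version may join.
// During a downgrade, every version in [downgradeTarget, clusterVersion]
// must be accepted so that members which disconnect and rejoin before
// being downgraded are not locked out. All names here are illustrative.
func allowedVersion(member, cluster, downgradeTarget *semver.Version) bool {
	if downgradeTarget == nil {
		// No downgrade in progress (simplified): member must match the
		// cluster's minor version.
		return member.Major == cluster.Major && member.Minor == cluster.Minor
	}
	// Downgrade in progress: accept the whole range [target, cluster].
	return !member.LessThan(*downgradeTarget) && !cluster.LessThan(*member)
}

func main() {
	cluster, target := semver.New("3.6.0"), semver.New("3.5.0")
	fmt.Println(allowedVersion(semver.New("3.6.0"), cluster, target)) // true: may rejoin
	fmt.Println(allowedVersion(semver.New("3.5.0"), cluster, target)) // true: already downgraded
}
```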
To make it possible to alert on misconfigured etcd clusters that have
missing/superfluous peers, expose the list of peers as a metric.
This metric can, for example, be compared to the control-plane nodes
of a Kubernetes cluster.
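
A sketch of how such a metric could be exposed with client_golang; the
metric and label names here are placeholders, not necessarily what etcd
ships. An alert can then compare the set of `peer_url` series against
the expected node list:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// peerGauge exposes one time series per configured peer.
var peerGauge = prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Namespace: "etcd",
	Subsystem: "server",
	Name:      "peer_info", // illustrative name
	Help:      "Set to 1 for each peer currently in the member list.",
}, []string{"peer_url"})

func main() {
	prometheus.MustRegister(peerGauge)
	// In etcd this would be driven by the actual member list.
	for _, u := range []string{"https://10.0.0.1:2380", "https://10.0.0.2:2380"} {
		peerGauge.WithLabelValues(u).Set(1)
	}
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":2381", nil))
}
```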
The storage version should follow the cluster version. During
upgrades this should be immediate, as the storage version can always
be upgraded: storage is backward compatible. During downgrades it will
be delayed, as it takes time for incompatible changes to be
snapshotted.
As a storage version change can happen long after the cluster has
started, we need to add a step during bootstrap to validate that the
loaded data can be understood by the migrator.
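
A minimal sketch of that bootstrap check, with hypothetical types and
bounds (not etcd's actual API):

```go
package main

import "fmt"

// Version is a stand-in for the on-disk storage version.
type Version struct{ Major, Minor int }

var (
	minSupported = Version{3, 5} // oldest storage version the migrator understands
	maxSupported = Version{3, 6} // newest storage version the migrator understands
)

// validateStorageVersion refuses to start if the loaded data is outside
// the range the migrator can understand.
func validateStorageVersion(onDisk Version) error {
	if onDisk.Major != minSupported.Major ||
		onDisk.Minor < minSupported.Minor || onDisk.Minor > maxSupported.Minor {
		return fmt.Errorf("storage version %d.%d is not supported", onDisk.Major, onDisk.Minor)
	}
	return nil
}

func main() {
	fmt.Println(validateStorageVersion(Version{3, 5})) // <nil>
	fmt.Println(validateStorageVersion(Version{3, 4})) // error
}
```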
A client-side optimization was made in #6100 to filter ascending key sorts, avoiding an unnecessary re-sort since this is the order already returned by the backend logic.
It seems to me that this really belongs on the server side, since it's tied to the server implementation and should apply to any caller of the KV API (for example, non-Go clients).
Relatedly, the client/v3 syncer depends on this default sorting, which isn't explicit in the KV API contract. So I'm proposing that the required sort parameters be included explicitly; the request will take the fast path either way.
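
For reference, a client can already request the sort explicitly today;
with the Go client (endpoint and key are placeholders) it looks like
this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Ask for ascending key order explicitly instead of relying on the
	// backend's implicit ordering; the server can still take the fast path.
	resp, err := cli.Get(context.Background(), "foo",
		clientv3.WithPrefix(),
		clientv3.WithSort(clientv3.SortByKey, clientv3.SortAscend))
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
	}
}
```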
When a unary request takes more than a predefined duration, the
request is considered "expensive" and a warning is printed. The
expensive request duration is hard-coded to 300 ms, which may not be
enough, for example, for transactions with many operations. The
warnings just blow up the log files and reduce throughput.
This fix allows the user to configure the "expensive" request
duration.
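
A minimal sketch of the idea, using an illustrative flag name (the
patch's actual flag may differ):

```go
package main

import (
	"flag"
	"log"
	"time"
)

// warningUnaryRequestDuration replaces the previous hard-coded 300 ms
// threshold; the flag name here is illustrative.
var warningUnaryRequestDuration = flag.Duration(
	"warning-unary-request-duration", 300*time.Millisecond,
	"duration above which a unary request is logged as expensive")

// maybeWarn logs a warning only when the request exceeds the
// user-configured threshold.
func maybeWarn(took time.Duration, method string) {
	if took > *warningUnaryRequestDuration {
		log.Printf("expensive request: %s took %v", method, took)
	}
}

func main() {
	flag.Parse()
	maybeWarn(450*time.Millisecond, "Txn") // warns with the default threshold
}
```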
Signed-off-by: Alexey Roytman <roytman@il.ibm.com>
The file `zap_raft.go` adds a raft.Logger proxy on top of
`*zap.Logger`. Adding such a proxy requires adding the option
`zap.AddCallerSkip(1)` so that log messages specify the correct
caller. Two of the three constructors in `zap_raft.go` add this
option; this commit fixes the third constructor so that it also adds
`zap.AddCallerSkip`.
Before fix:
`{"level":"info","ts":"2021-07-22T17:46:01.435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd07d29169ff0c5a [logterm: 2, index: 8, vote: 38447ba545569bbe] ignored MsgPreVote from c7baeaad79d6d5ed [logterm: 2, index: 8] at term 2: lease is not expired (remaining ticks: 10)"}`
After fix:
`{"level":"info","ts":"2021-07-22T17:46:51.227Z","logger":"raft","caller":"raft/raft.go:859","msg":"bd07d29169ff0c5a [logterm: 2, index: 8, vote: c7baeaad79d6d5ed] ignored MsgPreVote from 38447ba545569bbe [logterm: 2, index: 8] at term 2: lease is not expired (remaining ticks: 9)"}`