This is to deflake TestAuthMemberRemove.
When the client has multiple endpoints, it might send a request with a
valid token to a follower member which hasn't received the replicated
log entry carrying that token yet. That member will reject the request.
For instance, the maintenance.Status API will return "auth: invalid auth
token". But the client doesn't recognize the error, so it won't retry
and refresh the auth token. maintenance.Status should run the error
through togRPCError before returning, so that the client can refresh
the token. This aligns with the existing APIs.
Since the maintenance client always creates one connection to the
target member, the member will have the token after the auth refresh.
Maybe we can introduce a sync that waits until the member is ready with
the token, instead of refreshing.
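A minimal sketch of the intended pattern, assuming stand-in types and a
togRPCError-style helper (the names below are illustrative, not the
exact etcd code):

```go
package example

import (
	"context"
	"errors"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Stand-ins for the real request/response types used by maintenance.Status.
var errInvalidAuthToken = errors.New("auth: invalid auth token")

type statusRequest struct{}
type statusResponse struct{ Version string }

type maintenanceServer struct {
	status func(context.Context, *statusRequest) (*statusResponse, error)
}

// toGRPCError plays the role of etcd's togRPCError: it maps internal errors
// to gRPC status errors, so the client interceptor can see Unauthenticated
// and refresh its token before retrying.
func toGRPCError(err error) error {
	if errors.Is(err, errInvalidAuthToken) {
		return status.Error(codes.Unauthenticated, err.Error())
	}
	return err
}

// Status converts the error before returning, matching the other APIs.
func (ms *maintenanceServer) Status(ctx context.Context, r *statusRequest) (*statusResponse, error) {
	resp, err := ms.status(ctx, r)
	if err != nil {
		return nil, toGRPCError(err)
	}
	return resp, nil
}
```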
Fixes: #15758
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Progress notifications requested using ProgressRequest were sent
directly using the ctrlStream, which means that they could race
against watch responses in the watchStream.
This would especially happen when the stream was not synced - e.g. if
you requested a progress notification on a freshly created unsynced
watcher, the notification would typically arrive indicating a revision
for which not all watch responses had been sent.
This changes the behaviour so that v3rpc always goes through the watch
stream, using a new RequestProgressAll function that closely matches
the behaviour previously implemented in the v3rpc code - i.e.
1. Generate a message with WatchId -1, indicating the revision for
*all* watchers in the stream
2. Guarantee that a response is (eventually) sent
The latter might require us to defer the response until all watchers
are synced, which is likely how it should be anyway. Note that we do
*not* guarantee that the number of progress notifications matches the
number of requests, only that at least one eventually gets sent.
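A hedged sketch of the idea, using local stand-in types rather than the
real mvcc/v3rpc structures: the stream-wide progress notification uses
WatchId -1 and is only emitted once no watcher is still catching up.

```go
package example

import "sync"

// progressResponse is a stand-in for a watch response that only reports progress.
type progressResponse struct {
	WatchID  int64 // -1 marks a notification covering all watchers in the stream
	Revision int64
}

// watchStream is illustrative only; unsynced counts watchers still replaying
// historical events, and respc is the single stream all responses go through
// (assumed buffered / drained by the sending goroutine).
type watchStream struct {
	mu       sync.Mutex
	unsynced int
	rev      int64
	respc    chan progressResponse
}

// RequestProgressAll defers the notification until the stream is synced, so
// the reported revision can never run ahead of watch responses still to be
// sent. The caller retries later if it returns false.
func (ws *watchStream) RequestProgressAll() bool {
	ws.mu.Lock()
	defer ws.mu.Unlock()
	if ws.unsynced > 0 {
		return false
	}
	ws.respc <- progressResponse{WatchID: -1, Revision: ws.rev}
	return true
}
```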
Signed-off-by: Peter Wortmann <peter.wortmann@skao.int>
The huge (100k+) value was justified when storev2 was being dumped
completely with every snapshot. With storev2 being decommissioned, we
can checkpoint more frequently for faster recovery.
Signed-off-by: James Blair <mail@jamesblair.net>
The old name (raftDone) of the channel (now notifyc), which signals
that the apply has been completed, was left unchanged in the comments,
resulting in confusion when reading the source code.
Signed-off-by: caojiamingalan <alan.c.19971111@gmail.com>
When promoting a learner, we need to wait until the leader's applied
index catches up with the commit index. Afterwards, check whether the
learner ID exists or not, and return `membership.ErrIDNotFound` directly
in the API if the member ID is not found, to avoid the request being
unnecessarily delivered to raft.
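A hedged sketch of the fail-fast check; waitAppliedIndex, memberExists
and proposePromote are hypothetical stand-ins, not the real etcd
functions:

```go
package example

import (
	"context"
	"errors"
)

// errIDNotFound stands in for membership.ErrIDNotFound.
var errIDNotFound = errors.New("membership: ID not found")

type server struct {
	// waitAppliedIndex blocks until the leader's applied index reaches its commit index.
	waitAppliedIndex func(ctx context.Context) error
	memberExists     func(id uint64) bool
	proposePromote   func(ctx context.Context, id uint64) error
}

// MemberPromote checks the membership view after catching up, and returns the
// not-found error directly instead of sending a doomed proposal through raft.
func (s *server) MemberPromote(ctx context.Context, id uint64) error {
	if err := s.waitAppliedIndex(ctx); err != nil {
		return err
	}
	if !s.memberExists(id) {
		return errIDNotFound
	}
	return s.proposePromote(ctx, id)
}
```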
Signed-off-by: Benjamin Wang <wachao@vmware.com>
We need to return io.ErrUnexpectedEOF in the error chain, so that
etcdserver can repair it automatically.
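A minimal sketch of the error-wrapping pattern, using a hypothetical
decode helper: wrapping with %w keeps io.ErrUnexpectedEOF in the chain
so errors.Is can still match it further up the stack.

```go
package example

import (
	"fmt"
	"io"
)

// decodeRecord is a hypothetical helper; the point is that the sentinel error
// survives annotation, so a caller can do errors.Is(err, io.ErrUnexpectedEOF)
// and decide to repair the file automatically.
func decodeRecord(buf []byte, want int) ([]byte, error) {
	if len(buf) < want {
		return nil, fmt.Errorf("record truncated (%d of %d bytes): %w",
			len(buf), want, io.ErrUnexpectedEOF)
	}
	return buf[:want], nil
}
```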
Signed-off-by: Benjamin Wang <wachao@vmware.com>
The pb package provides accessor methods to get fields, and they will
not panic if the owner is nil. Also add a non-empty RangeResponse to
the test case.
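For illustration, this is the shape of the generated accessor (a sketch
of the gogo/protobuf getter pattern with local stand-in types, not
copied from the generated file): calling the getter on a nil message
returns the zero value instead of panicking.

```go
package example

// keyValue and rangeResponse are local stand-ins for the etcdserverpb types.
type keyValue struct {
	Key   []byte
	Value []byte
}

type rangeResponse struct {
	Kvs []*keyValue
}

// GetKvs mirrors the generated accessor: nil receiver, no panic.
func (m *rangeResponse) GetKvs() []*keyValue {
	if m != nil {
		return m.Kvs
	}
	return nil
}
```

So callers can write resp.GetKvs() even when resp may be nil, where
resp.Kvs would panic.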
Signed-off-by: Wei Fu <fuweid89@gmail.com>
TODO:
1. Update Documentation/contributor-guide/modules.svg;
2. Update bill-of-materials.json when raft and raftexample are removed in the future;
Signed-off-by: Benjamin Wang <wachao@vmware.com>
When two members in a 5-member cluster are corrupted and they have
different hashes, etcd will raise an alarm for both members, but the
order isn't guaranteed. However, if the two corrupted members have the
same hash, then the order is guaranteed: the leader always raises
alarms in the same order as the member list.
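A hedged test-side sketch of how an assertion can stay
order-independent when the hashes differ (alarm is a local stand-in for
the AlarmMember message):

```go
package example

import "sort"

// alarm is a local stand-in for etcd's AlarmMember in this sketch.
type alarm struct{ MemberID uint64 }

// sortedMemberIDs extracts and sorts the member IDs from raised alarms, so a
// test can compare them as a set when the raise order isn't deterministic.
func sortedMemberIDs(alarms []alarm) []uint64 {
	ids := make([]uint64, 0, len(alarms))
	for _, a := range alarms {
		ids = append(ids, a.MemberID)
	}
	sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })
	return ids
}
```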
Signed-off-by: Benjamin Wang <wachao@vmware.com>
If quorum doesn't exist, we don't know which members' data are
corrupted. In such a situation, we intentionally set the memberID to 0,
which means the alarm affects the whole cluster.
This aligns with what we did for 3.4 and 3.5 in
https://github.com/etcd-io/etcd/issues/14849
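A minimal sketch of the intent, with local stand-in types rather than
the real AlarmRequest definition:

```go
package example

// corruptAlarm is illustrative; MemberID 0 means the corruption cannot be
// attributed to a single member and therefore affects the whole cluster.
type corruptAlarm struct {
	MemberID uint64
}

func corruptAlarms(hasQuorum bool, corrupted []uint64) []corruptAlarm {
	if !hasQuorum {
		return []corruptAlarm{{MemberID: 0}}
	}
	alarms := make([]corruptAlarm, 0, len(corrupted))
	for _, id := range corrupted {
		alarms = append(alarms, corruptAlarm{MemberID: id})
	}
	return alarms
}
```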
Signed-off-by: Benjamin Wang <wachao@vmware.com>
When the leader detects data inconsistency by comparing hashes, it
currently assumes that the follower is the corrupted member. That isn't
correct: the leader might be the corrupted member as well.
We should depend on quorum to identify the corrupted member. For
example, in a 3-member cluster, if 2 members have the same hash, the
member with a different hash is the corrupted one. In a 5-member
cluster, if 3 members have the same hash, the corrupted member is one
of the remaining two members; it's also possible that both of the
remaining members are corrupted.
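A hedged sketch of quorum-based attribution (illustrative, not the
actual etcd implementation): group members by reported hash, and only
when one hash value is shared by a quorum can the members outside that
group be flagged.

```go
package example

// corruptedMembers returns the IDs whose hash disagrees with the quorum hash,
// or nil if no hash value reaches quorum (in which case we cannot tell who is
// corrupted; it could even include the leader itself).
func corruptedMembers(hashes map[uint64]uint32, clusterSize int) []uint64 {
	quorum := clusterSize/2 + 1
	byHash := map[uint32]int{}
	for _, h := range hashes {
		byHash[h]++
	}
	for h, n := range byHash {
		if n < quorum {
			continue
		}
		var bad []uint64
		for id, mh := range hashes {
			if mh != h {
				bad = append(bad, id)
			}
		}
		return bad
	}
	return nil
}
```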
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Comments fixed as per goword in the go test files that the shell
function go_srcs_in_module lists, as per the changes in #14827
Helps in #14827
Signed-off-by: Bhargav Ravuri <bhargav.ravuri@infracloud.io>
This commit makes the rarely used `raftpb.Message.Snapshot` field nullable.
In doing so, it reduces the memory size of a `raftpb.Message` message from
264 bytes to 128 bytes — a 52% reduction in size.
While this commit does not change the protobuf encoding, it does change
how that encoding is used. `(gogoproto.nullable) = false` instructs the
generated proto marshaling logic to always encode a value for the field,
even if that value is empty. `(gogoproto.nullable) = true` instructs the
generated proto marshaling logic to omit an encoded value for the field
if the field is nil.
This raises compatibility concerns in both directions. Messages encoded
by new binary versions without a `Snapshot` field will be decoded as an
empty field by old binary versions. In other words, old binary versions
can't tell the difference. However, messages encoded by old binary versions
with an empty Snapshot field will be decoded as a non-nil, empty field by
new binary versions. As a result, new binary versions need to be prepared
to handle such messages.
While Message.Snapshot is not intentionally part of the external interface
of this library, it was possible for users of the library to access it and
manipulate it. As such, this change may be considered a breaking change.
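A hedged, self-contained sketch of the receive-side handling (local
stand-in types, not raftpb): after the field becomes nullable, new
binaries must treat a nil pointer and a non-nil but all-empty Snapshot
the same way, because old encoders always wrote a value.

```go
package example

// snapshot and message are local stand-ins for the raftpb types.
type snapshot struct {
	Data  []byte
	Index uint64
	Term  uint64
}

type message struct {
	Snapshot *snapshot // previously a non-nullable value field
}

// hasSnapshot treats "omitted by a new encoder" (nil) and "empty value written
// by an old encoder" (non-nil, all zero) identically.
func hasSnapshot(m message) bool {
	if m.Snapshot == nil {
		return false
	}
	s := m.Snapshot
	return s.Index != 0 || s.Term != 0 || len(s.Data) != 0
}
```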
Signed-off-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>