When clients use TLS CommonName based authentication, the
authTokenBundle is always nil. It is still possible for the clients
to get an `rpctypes.ErrAuthOldRevision` response when the clients
concurrently modify auth data (e.g. addUser, deleteUser, etc.).
In this case there is no need to refresh the token; the
clients just need to retry the operations (e.g. Put, Delete, etc.).
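A rough sketch of the retry decision follows, with a hypothetical helper
and a plain boolean standing in for the client's authTokenBundle check;
the actual clientv3 interceptor code differs (import path assumes the 3.5
module layout):

    import "go.etcd.io/etcd/api/v3/v3rpc/rpctypes"

    // shouldRefreshToken sketches the decision: refresh the token only when
    // token-based auth is actually in use. With TLS CommonName based auth a
    // stale auth revision only means the request raced with a concurrent
    // auth change, so the RPC can simply be retried as-is.
    func shouldRefreshToken(err error, hasTokenBundle bool) bool {
        return hasTokenBundle && rpctypes.Error(err) == rpctypes.ErrAuthOldRevision
    }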
Signed-off-by: Benjamin Wang <wachao@vmware.com>
The client retries the connection without backoff when the server goes away
after the watch stream is established. This results in high CPU usage
in the client process. This change introduces backoff when the stream
fails and the server is unavailable.
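A minimal sketch of the backoff idea, using a hypothetical openStream
helper rather than the actual clientv3 watch internals:

    import (
        "context"
        "time"
    )

    // runWatchLoop re-establishes the stream and, when it fails, waits with
    // a capped exponential backoff before redialing instead of spinning.
    func runWatchLoop(ctx context.Context, openStream func(context.Context) error) {
        const maxBackoff = time.Second
        backoff := 10 * time.Millisecond
        for ctx.Err() == nil {
            if err := openStream(ctx); err == nil {
                backoff = 10 * time.Millisecond // healthy stream: reset backoff
                continue
            }
            time.Sleep(backoff) // stream failed/unavailable: back off before retrying
            if backoff *= 2; backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }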
Signed-off-by: Hisanobu Tomari <posco.grubb@gmail.com>
If one of the nodes in the cluster has lost its DNS record,
restarting the second node will break it.
This PR adds a comparison that does not use a resolver,
which protects the cluster from DNS errors and does not change
the current logic of comparing URLs in the URLStringsEqual function.
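A simplified sketch of the idea (the real comparison helpers live in the
netutil package and handle more cases; resolveAndCompare is hypothetical):
compare the URL lists textually first and only fall back to the resolver
when that fails.

    import (
        "context"
        "reflect"
        "sort"
    )

    // urlStringsEqualSketch returns true when the two URL lists are already
    // equal without any DNS resolution; only otherwise does it fall back to
    // the resolver-based comparison.
    func urlStringsEqualSketch(ctx context.Context, a, b []string) (bool, error) {
        as := append([]string(nil), a...)
        bs := append([]string(nil), b...)
        sort.Strings(as)
        sort.Strings(bs)
        if reflect.DeepEqual(as, bs) {
            return true, nil // identical URLs: a lost DNS record cannot break this path
        }
        return resolveAndCompare(ctx, a, b) // hypothetical resolver-based fallback
    }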
You can read more in issue #7798.
Fixes #7798
Signed-off-by: Prasad Chandrasekaran <prasadc@vmware.com>
Due to a duplicate call of clientConfigFromCmd, the move-leader command
would fail with "conflicting environment variable is shadowed by corresponding command-line flag",
even in scenarios where no command-line flag was supplied.
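A rough sketch of the shape of the fix, with hypothetical signatures
(only clientConfigFromCmd is taken from the message above): build the
configuration once and derive the client from it, so the flag/env-var
conflict check does not run a second time.

    import "github.com/spf13/cobra"

    func moveLeaderCommandFunc(cmd *cobra.Command, args []string) {
        cfg := clientConfigFromCmd(cmd)  // single call: conflict check runs once
        cli := mustClientFromConfig(cfg) // hypothetical helper; no second parse
        defer cli.Close()
        // ... transfer leadership using cli ...
    }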
Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>
A WAL object was closed by defer; however, the WAL was rewritten afterwards,
so the deferred call closed the already-closed WAL but not the new one. This
caused a data race between writing the file and cleaning up the temporary
test directory, which led to a non-deterministic test failure.
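A generic sketch of the pitfall and the fix, using os.File instead of the
actual WAL test code: the receiver of a deferred Close is captured when the
defer statement runs, so reassigning the variable later leaves the new
object uncovered.

    import "os"

    func rewriteFile(path string) error {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer f.Close() // receiver captured here: this only covers the first file

        f.Close()
        f, err = os.Create(path) // the "rewrite": the variable now holds a new object
        if err != nil {
            return err
        }
        defer f.Close() // fix: the new object needs its own deferred Close
        _, err = f.WriteString("data")
        return err
    }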
Fixes #14332
Signed-off-by: Vladimir Sokolov <vsvastey@gmail.com>
For a cluster with only one member, raft always sends identical
unstable entries and committed entries to etcdserver, and etcd
responds to the client once it finishes (in fact, only partially
finishes) the applying workflow.
When the client receives the response, it doesn't mean etcd has already
successfully saved the data to both BoltDB and the WAL, because:
1. etcd commits the BoltDB transaction periodically instead of on each request;
2. etcd saves WAL entries in parallel with applying the committed entries.
Accordingly, data may be lost if etcd crashes
immediately after responding to the client and before BoltDB and the WAL
have successfully saved the data to disk.
Note that this issue can only happen for clusters with only one member.
For clusters with multiple members it isn't an issue, because etcd will
not commit & apply the data before it has been replicated to a majority of members.
When the client receives the response, it means the data must have been applied,
which in turn means the data must have been committed.
Note: for clusters with multiple members, raft will never send identical
unstable entries and committed entries to etcdserver.
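A conceptual sketch only, following the ordering recommended by the raft
package documentation rather than the actual etcdserver change (import
paths assume the 3.5 module layout): persist the HardState and unstable
entries to the WAL before applying the committed entries, so the client
response can never outrun the fsync.

    import (
        "go.etcd.io/etcd/raft/v3"
        "go.etcd.io/etcd/raft/v3/raftpb"
        "go.etcd.io/etcd/server/v3/wal"
    )

    func handleReady(rd raft.Ready, w *wal.WAL, apply func(raftpb.Entry)) error {
        // Save the HardState and unstable entries first; wal.Save syncs to
        // disk when needed.
        if err := w.Save(rd.HardState, rd.Entries); err != nil {
            return err
        }
        // Only then apply the committed entries; the response to the client
        // follows from the apply step and therefore never precedes the sync.
        for _, ent := range rd.CommittedEntries {
            apply(ent)
        }
        return nil
    }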
Signed-off-by: Benjamin Wang <wachao@vmware.com>
In v3.5 it is assumed that the logger is never nil; however, it can
still be nil in v3.4. The PR targeting v3.5 was backported to 3.4, and
that's why it's possible to get a panic on a nil logger in 3.4. This commit
fixes the issue.
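A minimal sketch of the kind of guard needed in 3.4 (the placement is
hypothetical): fall back to a no-op zap logger instead of dereferencing nil.

    import "go.uber.org/zap"

    // ensureLogger returns a usable logger even when the caller passed nil,
    // which is still possible on v3.4 code paths.
    func ensureLogger(lg *zap.Logger) *zap.Logger {
        if lg == nil {
            return zap.NewNop()
        }
        return lg
    }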
Fixes #14402
Signed-off-by: Vladimir Sokolov <vsvastey@gmail.com>