The last step of the gRPC update: auditing behavior changes to resolve CVE #16740 in 3.5.
This PR backports #14922, #16338, #16587, #16630, #16636 and #16739 to release-3.5.
Signed-off-by: Chao Chen <chaochn@amazon.com>
so that they can be configured via the config file.
Co-authored-by: Shawn Gerrard <shawn.gerrard@gmail.com>
Signed-off-by: James Blair <mail@jamesblair.net>
Fix the unexpected blocking when using Barrier.Wait(), e.g.
NewBarrier(client, "a").Wait() will block if key "a" does not exist but "a0" does, when it should return immediately.
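For illustration, a minimal sketch of the expected behavior (the endpoint is a placeholder; the Barrier recipe lives under client/v3/experimental/recipes):

    package main

    import (
        "fmt"

        clientv3 "go.etcd.io/etcd/client/v3"
        recipe "go.etcd.io/etcd/client/v3/experimental/recipes"
    )

    func main() {
        cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // Key "a" does not exist (only "a0" does), so Wait() should
        // return immediately instead of blocking on the watch.
        if err := recipe.NewBarrier(cli, "a").Wait(); err != nil {
            fmt.Println("wait failed:", err)
        }
    }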
Signed-off-by: zhangwenkang <zwenkang@vmware.com>
Historical capnslog timestamps are in microsecond resolution. We need to match that when we migrate to the zap logger.
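A minimal sketch of how zap can be configured to match, assuming a custom time layout (the exact format string here is illustrative):

    package main

    import (
        "go.uber.org/zap"
        "go.uber.org/zap/zapcore"
    )

    func main() {
        cfg := zap.NewProductionConfig()
        // Format timestamps with microsecond resolution, matching capnslog.
        cfg.EncoderConfig.EncodeTime = zapcore.TimeEncoderOfLayout("2006-01-02T15:04:05.000000Z0700")
        logger, err := cfg.Build()
        if err != nil {
            panic(err)
        }
        defer logger.Sync()
        logger.Info("microsecond-resolution timestamp")
    }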
Signed-off-by: James Blair <mail@jamesblair.net>
In order to fix https://github.com/etcd-io/etcd/issues/12385,
PR https://github.com/etcd-io/etcd/pull/14322 introduced a change
in which the client side may retry based on the error message
returned from the server side.
This is not good: it's too fragile, and it also changed the
protocol between client and server. Please see the discussion
in https://github.com/kubernetes/kubernetes/pull/114403
Note: the issue https://github.com/etcd-io/etcd/issues/12385 only
happens when auth is enabled and the client side reuses the same
client to watch.
So we decided to roll back the change on 3.5, for two reasons:
1. K8s doesn't enable auth at all, so this has no impact on K8s.
2. It's very easy for client applications to work around the issue:
the client just needs to create a new client each time before watching.
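A sketch of that workaround (endpoints and credentials are placeholders):

    package main

    import (
        "context"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // watchOnce creates a fresh client for each watch, so a stale auth
    // token held by a long-lived client cannot break the watch stream.
    func watchOnce(ctx context.Context, key string) error {
        cli, err := clientv3.New(clientv3.Config{
            Endpoints: []string{"localhost:2379"},
            Username:  "user",
            Password:  "pass",
        })
        if err != nil {
            return err
        }
        defer cli.Close()

        for resp := range cli.Watch(ctx, key) {
            if err := resp.Err(); err != nil {
                return err
            }
            // handle resp.Events ...
        }
        return nil
    }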
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Add an empty implementation for the reuse-port socket option, since
Solaris does not support SO_REUSEPORT.
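A minimal sketch of such a no-op, assuming the build-tag and function shape of the other sockopt files in client/pkg/transport:

    //go:build solaris

    package transport

    import "syscall"

    // setReusePort is a no-op because Solaris has no SO_REUSEPORT.
    func setReusePort(network, address string, c syscall.RawConn) error {
        return nil
    }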
(cherry picked from commit af626ebfdeb46c1025f9a717959b241fecc44d0a)
Conflicts:
client/pkg/transport/sockopt_unix.go
Signed-off-by: Andrew Stormont <andyjstormont@gmail.com>
When users use TLS CommonName based authentication, the
authTokenBundle is always nil. But it's possible for clients
to get an `rpctypes.ErrAuthOldRevision` response when they
concurrently modify auth data (e.g. addUser, deleteUser, etc.).
In this case, there is no need to refresh the token; the
clients just need to retry the operations (e.g. Put, Delete, etc.).
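For illustration, a hedged sketch of such a retry at the application level (the helper is hypothetical):

    package main

    import (
        "context"

        "go.etcd.io/etcd/api/v3/v3rpc/rpctypes"
        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // putWithRetry is a hypothetical helper: with CommonName-based auth
    // there is no token to refresh, so it simply retries the operation.
    func putWithRetry(ctx context.Context, cli *clientv3.Client, key, val string) error {
        for {
            _, err := cli.Put(ctx, key, val)
            if rpctypes.Error(err) == rpctypes.ErrAuthOldRevision {
                continue // auth store revision moved; retry the Put
            }
            return err
        }
    }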
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Check the client count before creating the ephemeral key, and do
not create the key if there are already too many clients. Check the
count again after creating the key; if the total number of kvs is
bigger than the expected count, then check the rev of the current
key and act accordingly: if its rev is within the first ${count},
it's a valid client; otherwise, it should fail.
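A rough sketch of that check-create-recheck sequence (the prefix layout and helper name are hypothetical):

    package main

    import (
        "context"
        "fmt"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // tryRegister is hypothetical: it creates an ephemeral key only if the
    // client is one of the first `max` registrants, per the steps above.
    func tryRegister(ctx context.Context, cli *clientv3.Client, prefix string, max int64, lease clientv3.LeaseID) (bool, error) {
        // 1. Check the client count before creating the ephemeral key.
        cnt, err := cli.Get(ctx, prefix, clientv3.WithPrefix(), clientv3.WithCountOnly())
        if err != nil || cnt.Count >= max {
            return false, err
        }
        // 2. Create the ephemeral key under the lease.
        put, err := cli.Put(ctx, fmt.Sprintf("%s/%x", prefix, lease), "", clientv3.WithLease(lease))
        if err != nil {
            return false, err
        }
        // 3. Re-check: fetch the first `max` keys by create revision.
        first, err := cli.Get(ctx, prefix, clientv3.WithPrefix(),
            clientv3.WithSort(clientv3.SortByCreateRevision, clientv3.SortAscend),
            clientv3.WithLimit(max))
        if err != nil {
            return false, err
        }
        // 4. Valid only if our key's rev is within the first `max` keys.
        for _, kv := range first.Kvs {
            if kv.CreateRevision == put.Header.Revision {
                return true, nil
            }
        }
        return false, nil
    }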
Signed-off-by: Benjamin Wang <wachao@vmware.com>
The client retries the connection without backoff when the server is gone
after the watch stream is established. This results in high CPU usage
in the client process. This change introduces backoff when the stream
fails and is unavailable.
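A minimal sketch of the kind of backoff loop meant here (the intervals are illustrative, not the client's actual values):

    package main

    import (
        "context"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // watchWithBackoff re-establishes the watch with exponential backoff
    // instead of spinning when the stream fails.
    func watchWithBackoff(ctx context.Context, cli *clientv3.Client, key string) {
        backoff := 25 * time.Millisecond
        const maxBackoff = 2 * time.Second
        for ctx.Err() == nil {
            for resp := range cli.Watch(ctx, key) {
                backoff = 25 * time.Millisecond // reset once we make progress
                _ = resp.Events                 // handle events ...
            }
            // Stream failed; wait before retrying instead of busy-looping.
            time.Sleep(backoff)
            if backoff *= 2; backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }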
Signed-off-by: Hisanobu Tomari <posco.grubb@gmail.com>
Only `net.TCPConn` supports `SetKeepAlive` and `SetKeepAlivePeriod`
by default, so if you want to wrap multiple layers of net.Listener,
the `keepaliveListener` should be the one closest to the
original `net.Listener` implementation, namely `TCPListener`.
Also refer to https://github.com/etcd-io/etcd/pull/14356
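A sketch of the ordering described above, using transport.NewKeepAliveListener from client/pkg/transport (address and scheme are placeholders):

    package main

    import (
        "net"

        "go.etcd.io/etcd/client/pkg/v3/transport"
    )

    func main() {
        // The keepalive wrapper goes directly around the *net.TCPListener,
        // so accepted conns are *net.TCPConn and SetKeepAlive /
        // SetKeepAlivePeriod actually work; TLS or other layers go outside.
        tcpl, err := net.Listen("tcp", "127.0.0.1:2379")
        if err != nil {
            panic(err)
        }
        kal, err := transport.NewKeepAliveListener(tcpl, "http", nil)
        if err != nil {
            panic(err)
        }
        defer kal.Close()
    }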
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Streams are now closed after being used in the lessor `keepAliveOnce` method.
This prevents the "failed to receive lease keepalive request from gRPC stream"
message from being logged by the server after the context is cancelled by the
client.
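For illustration, a hedged sketch of the pattern (mirroring the shape of keepAliveOnce; details elided):

    package main

    import (
        "context"

        pb "go.etcd.io/etcd/api/v3/etcdserverpb"
    )

    // keepAliveOnce-style sketch: closing the send side when done keeps the
    // server from logging a stream receive error after the client cancels.
    func keepAliveOnce(ctx context.Context, remote pb.LeaseClient, id int64) error {
        cctx, cancel := context.WithCancel(ctx)
        defer cancel()

        stream, err := remote.LeaseKeepAlive(cctx)
        if err != nil {
            return err
        }
        // Close the stream once we're finished with it.
        defer func() { _ = stream.CloseSend() }()

        if err := stream.Send(&pb.LeaseKeepAliveRequest{ID: id}); err != nil {
            return err
        }
        _, err = stream.Recv()
        return err
    }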
Signed-off-by: Justin Kolberg <amd.prophet@gmail.com>