When implementing the fix for progress notifications
(https://github.com/etcd-io/etcd/pull/15237) we made an incorrect
assumption that unsynced watches will always get at least one event.
Unsynced watches include not only slow watchers, but also newly created
watches that requested a current or older revision. If none of the
events match the watch filter, those newly created watches might become
synced without any event going through.
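A minimal sketch of the scenario, assuming a running etcd on
localhost:2379 (import path shown is the 3.5 client layout):
    package main
    import (
        "context"
        "log"
        clientv3 "go.etcd.io/etcd/client/v3"
    )
    func main() {
        cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()
        // Watching from an older revision starts the watcher unsynced; with
        // all PUT events filtered out, it can catch up (become synced)
        // without a single event being delivered on wch.
        wch := cli.Watch(context.Background(), "key",
            clientv3.WithRev(1),
            clientv3.WithFilterPut())
        for range wch {
        }
    }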
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
Backport of PR #16822, commits f7e488dc9262685d6624755e0d3bb0a655863248,
67f17166bf2ba337dafb8e0ea8eea5f74a990767,
and f7ff898fd6c2d6dbb54278343073aa4fa5f46a03.
Co-authored-by: Benjamin Wang <benjamin.wang@broadcom.com>
Signed-off-by: Ivan Valdes <ivan@vald.es>
Add two separate probes, one for liveness and one for readiness. The
liveness probe would check that the local individual node is up and
running, or else restart the node, while the readiness probe would
check that the cluster is ready to serve traffic. This would make
etcd's health checks fully compliant with the Kubernetes API.
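As a hedged sketch, assuming the proposed endpoints end up named /livez
(liveness) and /readyz (readiness), each probe amounts to an HTTP GET
that fails on anything but 200:
    package main
    import (
        "fmt"
        "net/http"
    )
    // probe returns an error unless the endpoint answers 200 OK.
    func probe(url string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("probe %s failed: %s", url, resp.Status)
        }
        return nil
    }
    func main() {
        fmt.Println("liveness:", probe("http://127.0.0.1:2379/livez"))
        fmt.Println("readiness:", probe("http://127.0.0.1:2379/readyz"))
    }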
Signed-off-by: Siyuan Zhang <sizhang@google.com>
Disable following redirects in peer HTTP communication on the client side.
The etcd server may run into SSRF (Server-Side Request Forgery) when adding a new
member. If users provide a malicious peer URL, the existing etcd members may be
redirected to another unexpected internal URL when getting the new member's
version.
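The mechanism is the standard library's redirect hook; a minimal
illustrative sketch (not the actual etcd change):
    package main
    import "net/http"
    // newNoRedirectClient returns an HTTP client that never follows 3xx
    // redirects: http.ErrUseLastResponse tells the client to hand back
    // the redirect response itself instead of requesting the Location
    // target.
    func newNoRedirectClient() *http.Client {
        return &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }
    }
    func main() {
        _ = newNoRedirectClient()
    }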
Signed-off-by: Ivan Valdes <ivan@vald.es>
This is not yet an implementation, just the API and tests, to be
filled in with the implementation in subsequent CLs,
tracked by: https://github.com/etcd-io/etcd/issues/12652
We propose here 3 packages:
- clientv3/naming/endpoints ->
An abstraction layer over etcd that allows writing, reading, and
watching Endpoints information. It is independent of the gRPC API
and hides the storage details.
- clientv3/naming/endpoints/internal ->
Contains the gRPC-compatible Update type used to preserve the
internal JSON marshalling format.
- clientv3/naming/resolver ->
Implements the gRPC resolver API, so that etcd can be used as the
target of grpc.Dial.
Please see the grpc_naming.md document changes and the new
grpcproxy/cluster.go integration to see how the new abstractions work.
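A hedged usage sketch of the intended API (the implementation is to
follow in later CLs, so names may shift; import paths shown follow the
eventual 3.5 module layout):
    package main
    import (
        "context"
        "log"
        clientv3 "go.etcd.io/etcd/client/v3"
        "go.etcd.io/etcd/client/v3/naming/endpoints"
        "go.etcd.io/etcd/client/v3/naming/resolver"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )
    func main() {
        cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()
        // endpoints: publish an address under the service prefix.
        em, err := endpoints.NewManager(cli, "foo/service")
        if err != nil {
            log.Fatal(err)
        }
        err = em.AddEndpoint(context.Background(), "foo/service/10.0.0.1",
            endpoints.Endpoint{Addr: "10.0.0.1:8080"})
        if err != nil {
            log.Fatal(err)
        }
        // resolver: let gRPC discover those endpoints through etcd.
        b, err := resolver.NewBuilder(cli)
        if err != nil {
            log.Fatal(err)
        }
        conn, err := grpc.Dial("etcd:///foo/service", grpc.WithResolvers(b),
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
    }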
Signed-off-by: Chao Chen <chaochn@amazon.com>
This contains a slight refactoring to expose enough information
to write meaningful tests for auth applier v3.
Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>
Mitigates #15993 by not checking each key individually for permission
when auth is entirely disabled or the admin user is calling the method.
Backport of #16005
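A minimal sketch of the mitigation's shape, with hypothetical names
standing in for the auth applier's internals:
    package sketch
    // authStore is a stand-in for the subset of the auth store this
    // sketch needs.
    type authStore interface {
        IsAuthEnabled() bool
        IsAdmin(username string) bool
    }
    // checkKeyPermissions short-circuits before the per-key loop when
    // auth is disabled or the caller is the admin user.
    func checkKeyPermissions(as authStore, user string, keys [][]byte,
        permitted func(key []byte) bool) bool {
        if !as.IsAuthEnabled() || as.IsAdmin(user) {
            return true
        }
        for _, k := range keys {
            if !permitted(k) {
                return false
            }
        }
        return true
    }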
Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>
Progress notifications requested using ProgressRequest were sent
directly using the ctrlStream, which means that they could race
against watch responses in the watchStream.
This would especially happen when the stream was not synced - e.g. if
you requested a progress notification on a freshly created unsynced
watcher, the notification would typically arrive indicating a revision
for which not all watch responses had been sent.
This changes the behaviour so that v3rpc always goes through the watch
stream, using a new RequestProgressAll function that closely matches
the behaviour of the v3rpc code - i.e.
1. Generate a message with WatchId -1, indicating the revision for
*all* watchers in the stream
2. Guarantee that a response is (eventually) sent
The latter might require us to defer the response until all watchers
are synced, which is likely how it should be. Note that we do *not*
guarantee that the number of progress notifications matches the number
of requests, only that eventually at least one gets sent.
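A hedged sketch of the broadcast response's shape (import path per the
3.5 layout; 3.4 vendors the same protobuf types elsewhere):
    package sketch
    import pb "go.etcd.io/etcd/api/v3/etcdserverpb"
    // progressNotifyAll builds the broadcast progress notification:
    // WatchId -1 addresses every watcher on the stream, and the response
    // carries only a header revision, never events.
    func progressNotifyAll(rev int64) *pb.WatchResponse {
        return &pb.WatchResponse{
            Header:  &pb.ResponseHeader{Revision: rev},
            WatchId: -1,
        }
    }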
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Prevent etcd from crashing when given a bad grant payload, e.g.:
    $ curl -d '{"name": "foo"}' http://localhost:2379/v3/auth/role/add
    {"header":{"cluster_id":"14841639068965178418", ...
    $ curl -d '{"name": "foo"}' http://localhost:2379/v3/auth/role/grant
    curl: (52) Empty reply from server
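In spirit the fix is a nil check before the payload is dereferenced; a
minimal sketch with assumed names (import path per the 3.5 layout):
    package sketch
    import (
        "errors"
        pb "go.etcd.io/etcd/api/v3/etcdserverpb"
    )
    var errPermissionNotGiven = errors.New("auth: permission not given")
    // validateRoleGrant rejects a grant request whose Perm field is
    // missing instead of dereferencing the nil pointer and crashing.
    func validateRoleGrant(req *pb.AuthRoleGrantPermissionRequest) error {
        if req.Perm == nil {
            return errPermissionNotGiven
        }
        return nil
    }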
Signed-off-by: Gyuho Lee <leegyuho@amazon.com>
Signed-off-by: J. David Lowe <j.david.lowe@gmail.com>
Backport https://github.com/etcd-io/etcd/pull/15095 to 3.4.
When promoting a learner, we need to wait until the leader's applied
index catches up to the commit index. Afterwards, check whether the
learner ID exists or not, and return `membership.ErrIDNotFound` directly
from the API if the member ID is not found, to avoid the request being
unnecessarily delivered to raft.
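A hedged sketch of the early check, with simplified signatures (import
paths per the 3.5 layout; 3.4 differs):
    package sketch
    import (
        "go.etcd.io/etcd/client/pkg/v3/types"
        "go.etcd.io/etcd/server/v3/etcdserver/api/membership"
    )
    // cluster is a stand-in for the membership view the server consults.
    type cluster interface {
        Member(id types.ID) *membership.Member
    }
    // checkPromote fails fast with ErrIDNotFound for an unknown learner
    // so the request never reaches raft.
    func checkPromote(c cluster, id uint64) error {
        if c.Member(types.ID(id)) == nil {
            return membership.ErrIDNotFound
        }
        return nil
    }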
Signed-off-by: Benjamin Wang <wachao@vmware.com>
For a cluster with only one member, raft always sends identical
unstable entries and committed entries to etcdserver, and etcd
responds to the client once it finishes (actually only partially
finishes) the applying workflow.
When the client receives the response, it doesn't mean etcd has already
successfully saved the data, including to BoltDB and the WAL, because:
1. etcd commits the boltDB transaction periodically instead of on each request;
2. etcd saves WAL entries in parallel with applying the committed entries.
Accordingly, it may run into data loss when etcd crashes
immediately after responding to the client and before the boltDB and WAL
successfully save the data to disk.
Note that this issue can only happen for clusters with only one member.
For clusters with multiple members, it isn't an issue, because etcd will
not commit & apply the data before it has been replicated to a majority of members.
When the client receives the response, it means the data must have been applied.
It further means the data must have been committed.
Note: for clusters with multiple members, raft will never send identical
unstable entries and committed entries to etcdserver.
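An illustration of the window in plain Go (not etcd code): the
acknowledgement races the parallel WAL save, and a crash between the
two loses an acknowledged write:
    package main
    import (
        "fmt"
        "time"
    )
    func main() {
        walDone := make(chan struct{})
        go func() {
            time.Sleep(10 * time.Millisecond) // simulated fsync latency
            close(walDone)
        }()
        // The entry is applied and the client is answered here, while the
        // WAL save above is still in flight; a crash now loses the write.
        fmt.Println("applied; responding to client")
        <-walDone
        fmt.Println("WAL entry durable")
    }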
Signed-off-by: Benjamin Wang <wachao@vmware.com>
In v3.5 it is assumed that the logger should not be nil; however, it
can still be nil in v3.4. A PR targeted at v3.5 was backported to 3.4,
which is why it's possible to get a panic on a nil logger in 3.4. This
commit fixes that issue.
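A minimal sketch of the defensive pattern for such 3.4 code paths
(assumed shape):
    package sketch
    import "go.uber.org/zap"
    // ensureLogger falls back to a no-op logger instead of letting
    // callers panic on a nil *zap.Logger, which is still legal in 3.4.
    func ensureLogger(lg *zap.Logger) *zap.Logger {
        if lg == nil {
            return zap.NewNop()
        }
        return lg
    }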
Fixes #14402
Signed-off-by: Vladimir Sokolov <vsvastey@gmail.com>