The `err` variable is shared throughout the NewServer function and is
used on line 396 to defer the decision of whether the backend should be
closed if starting the server failed.
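For illustration, a minimal sketch of the pattern with stand-in types
(not etcd's actual NewServer signature):

    package sketch

    type backend struct{}

    func (b *backend) Close() {}

    type Server struct{ be *backend }

    func NewServer(start func() error) (srv *Server, err error) {
        be := &backend{}
        defer func() {
            // err is the shared named return: any failure below leaves
            // err non-nil, so the deferred function closes the backend.
            if err != nil {
                be.Close()
            }
        }()
        if err = start(); err != nil {
            return nil, err
        }
        return &Server{be: be}, nil
    }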
The `snapshot` variable is first defined on line 407, redeclared
locally (shadowing the outer variable) on line 496, and later used
again on line 625. The creation of the local variable is a bug
introduced in https://github.com/etcd-io/etcd/pull/11888.
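A self-contained illustration of this bug class (names and values are
made up, not the actual etcdserver code):

    package main

    import "fmt"

    func load() (string, error) { return "loaded-snapshot", nil }

    func main() {
        var snapshot string
        var err error
        haveWAL := true
        if haveWAL {
            // BUG: ":=" declares new local snapshot and err variables
            // that shadow the outer ones; the outer snapshot stays "".
            snapshot, err := load()
            fmt.Println("inner:", snapshot, err)
        }
        // The later use sees the zero value, not the loaded snapshot.
        // Using "=" instead of ":=" inside the block fixes it.
        fmt.Println("outer:", snapshot, err)
    }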
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
Disable following redirects in peer HTTP communication on the client's
side.
The etcd server may run into SSRF (Server-Side Request Forgery) when
adding a new member: if users provide a malicious peer URL, the
existing etcd members may be redirected to an unexpected internal URL
when fetching the new member's version.
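The standard Go mechanism for this is the http.Client CheckRedirect
hook; a minimal sketch (the client construction is illustrative, not
etcd's actual peer transport code):

    package sketch

    import "net/http"

    func newPeerHTTPClient() *http.Client {
        return &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                // Returning ErrUseLastResponse makes the client hand back
                // the 3xx response instead of following its Location header.
                return http.ErrUseLastResponse
            },
        }
    }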
Signed-off-by: Ivan Valdes <ivan@vald.es>
Add two separate probes, one for liveness and one for readiness. The liveness probe would check that the individual local node is up and running, so that Kubernetes can restart the node when it is not, while the readiness probe would check that the cluster is ready to serve traffic. This would make the etcd health check fully compliant with the Kubernetes API.
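Assuming the two probes are exposed over HTTP as /livez and /readyz
(following the Kubernetes convention), a client-side check could look
like this sketch; the address and timeout are illustrative:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(path string) bool {
        c := &http.Client{Timeout: 2 * time.Second}
        resp, err := c.Get("http://127.0.0.1:2379" + path)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        fmt.Println("live:", probe("/livez"))   // local node up and running
        fmt.Println("ready:", probe("/readyz")) // cluster ready to serve traffic
    }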
Signed-off-by: Siyuan Zhang <sizhang@google.com>
It's possible that the etcd server may run into an SSRF (Server-Side Request Forgery) situation when adding a new member. If users provide a malicious peer URL, the existing etcd members may be redirected to an unexpected internal URL when getting the new member's version.
Signed-off-by: James Blair <mail@jamesblair.net>
This contains a slight refactoring to expose enough information
to write meaningful tests for auth applier v3.
Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>
Mitigates etcd-io#15993 by not checking each key individually for
permission when auth is entirely disabled or the admin user is calling
the method.
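A hedged sketch of the short-circuit; authEnabled, isAdmin and
permitted are hypothetical stand-ins for the real auth-store lookups:

    package sketch

    import "errors"

    var errPermissionDenied = errors.New("auth: permission denied")

    func authEnabled() bool               { return true }
    func isAdmin(user string) bool        { return user == "root" }
    func permitted(user, key string) bool { return false }

    func checkRangePerms(user string, keys []string) error {
        // Fast path: skip the per-key permission walk entirely when
        // auth is disabled or the caller is the admin user.
        if !authEnabled() || isAdmin(user) {
            return nil
        }
        for _, k := range keys {
            if !permitted(user, k) {
                return errPermissionDenied
            }
        }
        return nil
    }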
Signed-off-by: Thomas Jungblut <tjungblu@redhat.com>
Progress notifications requested using ProgressRequest were sent
directly using the ctrlStream, which means that they could race
against watch responses in the watchStream.
This would especially happen when the stream was not synced - e.g. if
you requested a progress notification on a freshly created unsynced
watcher, the notification would typically arrive indicating a revision
for which not all watch responses had been sent.
This changes the behaviour so that v3rpc always goes through the watch
stream, using a new RequestProgressAll function that closely matches
the behaviour of the v3rpc code - i.e.
1. Generate a message with WatchId -1, indicating the revision for
*all* watchers in the stream
2. Guarantee that a response is (eventually) sent
The latter might require us to defer the response until all watchers
are synced, which is likely how it should be anyway. Note that we do *not*
guarantee that the number of progress notifications matches the number
of requests, only that eventually at least one gets sent.
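A sketch of the resulting flow; types, fields, and the retry contract
are simplified stand-ins for the actual mvcc/v3rpc code:

    package sketch

    type WatchResponse struct {
        WatchID  int64 // -1 covers every watcher in the stream
        Revision int64
    }

    type watchStream struct {
        unsynced int   // watchers still catching up
        rev      int64 // current store revision
        ch       chan WatchResponse
    }

    // RequestProgressAll sends one progress notification through the
    // watch stream itself, so it cannot overtake pending watch
    // responses. It reports whether the notification was sent; if not
    // (stream not yet synced), the caller retries later, so at least
    // one response is eventually delivered.
    func (s *watchStream) RequestProgressAll() bool {
        synced := s.unsynced == 0
        if synced {
            s.ch <- WatchResponse{WatchID: -1, Revision: s.rev}
        }
        return synced
    }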
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Backport https://github.com/etcd-io/etcd/pull/15095.
When promoting a learner, we need to wait until the leader's applied
index catches up to the committed index. Afterwards, check whether the
learner ID exists or not, and return `membership.ErrIDNotFound`
directly from the API if the member ID is not found, to avoid the
request being unnecessarily delivered to raft.
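A sketch of the two checks; errIDNotFound stands in for
membership.ErrIDNotFound, and the other helpers are hypothetical:

    package sketch

    import "errors"

    var errIDNotFound = errors.New("membership: ID not found")

    type cluster struct{ members map[uint64]bool }

    func (c *cluster) Member(id uint64) bool { return c.members[id] }

    type server struct{ cluster *cluster }

    func (s *server) waitAppliedIndex() error     { return nil }
    func (s *server) proposePromote(uint64) error { return nil }

    func (s *server) PromoteMember(id uint64) error {
        // Wait for the leader's applied index to catch up to the
        // committed index so the membership view is current.
        if err := s.waitAppliedIndex(); err != nil {
            return err
        }
        // Fail fast in the API layer instead of delivering a doomed
        // request to raft.
        if !s.cluster.Member(id) {
            return errIDNotFound
        }
        return s.proposePromote(id)
    }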
Signed-off-by: Benjamin Wang <wachao@vmware.com>
We need to return io.ErrUnexpectedEOF in the error chain, so that
etcdserver can repair it automatically.
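Concretely, "in the error chain" means wrapping with %w so errors.Is
can match it; a minimal sketch (the record-reading shape is
illustrative):

    package main

    import (
        "errors"
        "fmt"
        "io"
    )

    func readRecord(got, want int) error {
        if got < want {
            // %w keeps io.ErrUnexpectedEOF reachable via errors.Is.
            return fmt.Errorf("wal: short read (%d < %d): %w", got, want, io.ErrUnexpectedEOF)
        }
        return nil
    }

    func main() {
        err := readRecord(3, 8)
        // etcdserver-style check before attempting automatic repair:
        fmt.Println(errors.Is(err, io.ErrUnexpectedEOF)) // true
    }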
Backport https://github.com/etcd-io/etcd/pull/15068
Signed-off-by: Benjamin Wang <wachao@vmware.com>
For a cluster with only one member, raft always sends identical
unstable entries and committed entries to etcdserver, and etcd responds
to the client once it finishes (actually only partially finishes) the
applying workflow.
When the client receives the response, it doesn't mean etcd has already
successfully saved the data to BoltDB and the WAL, because:
1. etcd commits the boltDB transaction periodically instead of on each request;
2. etcd saves WAL entries in parallel with applying the committed entries.
Accordingly, it may run into data loss if etcd crashes immediately
after responding to the client and before BoltDB and the WAL have
successfully persisted the data to disk.
Note that this issue can only happen in clusters with only one member.
For clusters with multiple members it isn't an issue, because etcd will
not commit & apply the data before it has been replicated to a majority
of members. When the client receives the response, the data must have
been applied, which in turn means it must have been committed.
Note: for clusters with multiple members, raft will never send
identical unstable entries and committed entries to etcdserver.
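A sketch of the overlap check such a fix relies on; entry mirrors the
relevant raftpb.Entry fields, and the upstream helper's exact shape may
differ:

    package sketch

    type entry struct{ Term, Index uint64 }

    // shouldWaitWALSync reports whether the committed entries overlap
    // the still-unstable entries, i.e. the identical-entries case
    // described above, which only occurs in a one-member cluster. When
    // it returns true, applying must wait for the WAL fsync to finish.
    func shouldWaitWALSync(unstable, committed []entry) bool {
        if len(unstable) == 0 || len(committed) == 0 {
            return false
        }
        last := committed[len(committed)-1]
        for _, e := range unstable {
            if e.Term == last.Term && e.Index == last.Index {
                return true
            }
        }
        return false
    }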
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Problem: we pass the gRPC context down to the applier in a read-only
serializable txn. This context can be cancelled, for example due to a
timeout, which triggers a panic inside applyTxn.
Solution: only panic for transactions with write operations.
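A sketch of the guard; isWrite and the request type are simplified
stand-ins for the applier code:

    package sketch

    import "fmt"

    type txnRequest struct{ writes int }

    func isWrite(r *txnRequest) bool { return r.writes > 0 }

    func applyTxnResult(r *txnRequest, err error) error {
        if err == nil {
            return nil
        }
        if isWrite(r) {
            // A half-applied write txn would corrupt the store, so
            // this stays fatal.
            panic(fmt.Sprintf("unexpected error during txn: %v", err))
        }
        // A read-only serializable txn may legitimately fail when the
        // gRPC context is cancelled (e.g. timeout): return the error.
        return err
    }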
fixes https://github.com/etcd-io/etcd/issues/14110
main PR https://github.com/etcd-io/etcd/pull/14149
Signed-off-by: Bogdan Kanivets <bkanivets@apple.com>