Progress notifications requested using ProgressRequest were sent
directly over the ctrlStream, which meant that they could race
against watch responses in the watchStream.
This would especially happen when the stream was not synced - e.g. if
you requested a progress notification on a freshly created unsynced
watcher, the notification would typically arrive indicating a revision
for which not all watch responses had been sent.
This changes the behaviour so that v3rpc progress requests always go
through the watch stream, using a new RequestProgressAll function that
closely matches the behaviour of the v3rpc code - i.e.
1. Generate a message with WatchId -1, indicating the revision for
*all* watchers in the stream
2. Guarantee that a response is (eventually) sent
The latter might require us to defer the response until all watchers
are synced, which is likely how it should be anyway. Note that we do *not*
guarantee that the number of progress notifications matches the number
of requests, only that eventually at least one gets sent.
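For illustration, here is a minimal sketch of how a client could observe such a broadcast progress notification, assuming the v3.5 clientv3 module layout and its RequestProgress/IsProgressNotify API; at the protocol level the broadcast response carries WatchId -1:
```
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// A freshly created, possibly unsynced watcher.
	wch := cli.Watch(ctx, "foo")

	// Ask the server for a progress notification on this watch stream.
	if err := cli.RequestProgress(ctx); err != nil {
		panic(err)
	}

	for resp := range wch {
		if resp.IsProgressNotify() {
			// With the fix, this revision covers *all* watchers on the stream.
			fmt.Println("progress notification at revision", resp.Header.Revision)
			return
		}
	}
}
```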
Signed-off-by: Benjamin Wang <wachao@vmware.com>
To avoid inconsistent behavior during cluster upgrade we are feature-gating
persistence behind the cluster version. This should ensure that
all cluster members are upgraded to v3.6 before changing behavior.
To allow backporting this fix to v3.5, we are also introducing the
--experimental-enable-lease-checkpoint-persist flag, which will allow a
smooth upgrade for v3.5 clusters with this feature enabled.
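As a rough sketch of how this could be enabled when running etcd embedded (the config field names are assumed from the v3.5 backport and may differ; they mirror the CLI flags):
```
package main

import (
	"log"

	"go.etcd.io/etcd/server/v3/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd"
	// Mirrors --experimental-enable-lease-checkpoint and (assumed field name)
	// --experimental-enable-lease-checkpoint-persist.
	cfg.ExperimentalEnableLeaseCheckpoint = true
	cfg.ExperimentalEnableLeaseCheckpointPersist = true

	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	<-e.Server.ReadyNotify()
	log.Println("etcd ready with lease checkpoint persistence enabled")
}
```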
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
This is a backport of https://github.com/etcd-io/etcd/pull/13435 and is
part of the work for 3.4.20
https://github.com/etcd-io/etcd/issues/14232.
The original change had a second commit that modifies a changelog file.
The 3.4 branch does not include any changelog file, so that part was not
cherry-picked.
Local Testing:
- `make build`
- `make test`
Both succeed.
Signed-off-by: Ramsés Morales <ramses@gmail.com>
The current checkpointing mechanism is buggy: new checkpoints for any lease
are scheduled only until the first leader change. This adds a fix for that
and a test that verifies it.
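The gist of the fix, as an illustrative sketch only (this is not the actual lessor code): the checkpoint scheduler has to be restarted on every leadership transition, not just the first one.
```
package main

import (
	"context"
	"fmt"
	"time"
)

// checkpointLoop periodically persists remaining lease TTLs while ctx is alive.
func checkpointLoop(ctx context.Context, interval time.Duration) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			fmt.Println("checkpointing lease TTLs")
		}
	}
}

func main() {
	// true = this node became leader, false = it lost leadership.
	leadership := make(chan bool)
	go func() {
		leadership <- true
		time.Sleep(300 * time.Millisecond)
		leadership <- false
		leadership <- true // regained leadership: checkpoints must resume
		time.Sleep(300 * time.Millisecond)
		close(leadership)
	}()

	var cancel context.CancelFunc
	for isLeader := range leadership {
		if cancel != nil {
			cancel() // stop the previous scheduler on any transition
			cancel = nil
		}
		if isLeader {
			var ctx context.Context
			ctx, cancel = context.WithCancel(context.Background())
			go checkpointLoop(ctx, 100*time.Millisecond)
		}
	}
	if cancel != nil {
		cancel()
	}
}
```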
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
Cherry pick https://github.com/etcd-io/etcd/pull/13932 to 3.4.
When etcdserver receives a LeaseRenew request, it may still be in the
process of applying the LeaseGrantRequest for exactly the same leaseID.
Accordingly it may return a TTL=0 to the client due to a lease-not-found
error. So the leader should wait for the applied index to catch up before
processing client requests.
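A sketch of the idea (the helper name and polling approach are illustrative, not the exact etcdserver code): the leader blocks a LeaseRenew until the applied index has caught up with the committed index.
```
package main

import (
	"context"
	"errors"
	"sync/atomic"
	"time"
)

var errTimeout = errors.New("timed out waiting for applied index")

// waitAppliedIndex polls until appliedIndex has caught up with committedIndex.
func waitAppliedIndex(ctx context.Context, applied, committed *uint64) error {
	for atomic.LoadUint64(applied) < atomic.LoadUint64(committed) {
		select {
		case <-ctx.Done():
			return errTimeout
		case <-time.After(10 * time.Millisecond):
		}
	}
	return nil
}

func main() {
	var applied, committed uint64 = 5, 7
	go func() { // the apply loop eventually catches up
		time.Sleep(50 * time.Millisecond)
		atomic.StoreUint64(&applied, 7)
	}()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if err := waitAppliedIndex(ctx, &applied, &committed); err != nil {
		panic(err)
	}
	// Safe to look up the lease and renew it now.
}
```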
Signed-off-by: Benjamin Wang <wachao@vmware.com>
Fix the following error in the integration pipeline:
```
=== RUN TestTLSReloadCopy
v3_grpc_test.go:1754: tls: failed to find any PEM data in key input
v3_grpc_test.go:1754: tls: private key does not match public key
v3_grpc_test.go:1754: tls: private key does not match public key
v3_grpc_test.go:1754: tls: private key does not match public key
```
Refer to https://github.com/etcd-io/etcd/runs/7123775361?check_suite_focus=true
Signed-off-by: Benjamin Wang <wachao@vmware.com>
We shouldn't fail the grpc-server (completely) on a not-implemented RPC.
Failing the whole server via a remote request is an anti-pattern and a
security risk.
Refer to https://github.com/etcd-io/etcd/runs/7034342964?check_suite_focus=true#step:5:2284
```
=== RUN TestWatchRequestProgress/1-watcher
panic: not implemented
goroutine 83024 [running]:
go.etcd.io/etcd/proxy/grpcproxy.(*watchProxyStream).recvLoop(0xc009232f00, 0x4a73e1, 0xc00e2406e0)
/home/runner/work/etcd/etcd/proxy/grpcproxy/watch.go:265 +0xbf2
go.etcd.io/etcd/proxy/grpcproxy.(*watchProxy).Watch.func1(0xc0038a3bc0, 0xc009232f00)
/home/runner/work/etcd/etcd/proxy/grpcproxy/watch.go:125 +0x70
created by go.etcd.io/etcd/proxy/grpcproxy.(*watchProxy).Watch
/home/runner/work/etcd/etcd/proxy/grpcproxy/watch.go:123 +0x73b
FAIL go.etcd.io/etcd/clientv3/integration 222.813s
FAIL
```
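The shape of the fix, sketched here with made-up types rather than the real grpcproxy ones: handle unknown request types by returning an error for that stream instead of panicking and taking down the process.
```
package main

import "fmt"

type watchRequest interface{}

type createRequest struct{ key string }
type progressRequest struct{}

// recvLoop processes requests from one client stream.
func recvLoop(reqs []watchRequest) error {
	for _, r := range reqs {
		switch req := r.(type) {
		case createRequest:
			fmt.Println("creating watcher for", req.key)
		case progressRequest:
			fmt.Println("forwarding progress request")
		default:
			// Previously: panic("not implemented"), which tears down the
			// whole gRPC server from a remote request.
			return fmt.Errorf("unsupported watch request type %T", req)
		}
	}
	return nil
}

func main() {
	if err := recvLoop([]watchRequest{createRequest{"foo"}, struct{}{}}); err != nil {
		fmt.Println("stream closed with error:", err)
	}
}
```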
Signed-off-by: Benjamin Wang <wachao@vmware.com>
The grpc proxy opens 2 additional watch channels. The metric is shared
between etcd-server & grpc_proxy, so all assertions on the number of open
watch channels need to take the additional 2 channels into account.
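A hypothetical test helper illustrating the adjustment (the constant and function names are made up for this sketch):
```
package main

import "fmt"

// proxyWatchChannels is the number of extra channels the grpc proxy keeps open.
const proxyWatchChannels = 2

// expectedWatchStreams returns the expected metric value for a test,
// offset by the proxy's own channels when running through the proxy.
func expectedWatchStreams(clientStreams int, throughProxy bool) int {
	if throughProxy {
		return clientStreams + proxyWatchChannels
	}
	return clientStreams
}

func main() {
	fmt.Println(expectedWatchStreams(3, true))  // 5
	fmt.Println(expectedWatchStreams(3, false)) // 3
}
```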
In go1.13, TLS 1.3 is enabled by default, and as per the go1.13 release notes:
TLS 1.3 cipher suites are not configurable. All supported cipher suites are safe,
and if PreferServerCipherSuites is set in Config the preference order is based
on the available hardware.
Fix the test case for go1.13 by limiting the TLS version to TLS 1.2.
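A minimal sketch of that approach using the standard crypto/tls package, pinning the maximum version to TLS 1.2 so cipher-suite preferences remain configurable:
```
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	cfg := &tls.Config{
		// Cap the negotiated version at TLS 1.2; under Go 1.13+ the TLS 1.3
		// cipher suites are not configurable.
		MaxVersion:               tls.VersionTLS12,
		PreferServerCipherSuites: true,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
		},
	}
	fmt.Printf("max TLS version: %x\n", cfg.MaxVersion)
}
```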
Currently, watch cancel requests are only sent to the server after a
message comes through on a watch where the client has cancelled. This
means that cancelled watches that don't receive any new messages are never
cancelled on the server; they persist for the lifetime of the client stream.
This has negative consequences for locking applications, where a watch may
observe a key which might never change again after cancellation, leading to
many watches accumulating on the server.
By cancelling proactively, in most cases we simply move the cancel
request to happen earlier, and additionally we solve the case where the
cancel request would never be sent.
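An illustrative sketch of the proactive cancellation (simplified types, not the clientv3 internals): the cancel request is pushed onto the stream's outbound queue as soon as the watch context is cancelled, rather than on the next inbound message.
```
package main

import (
	"context"
	"fmt"
)

type cancelRequest struct{ watchID int64 }

type watcher struct {
	id     int64
	ctx    context.Context
	sendCh chan<- cancelRequest // outbound requests to the server stream
}

// run forwards a cancel request as soon as the watch context is done.
func (w *watcher) run(done chan<- struct{}) {
	<-w.ctx.Done()
	// Proactive cancellation: previously this was only sent after the next
	// server message arrived on the (already cancelled) watch.
	w.sendCh <- cancelRequest{watchID: w.id}
	close(done)
}

func main() {
	reqCh := make(chan cancelRequest, 1)
	ctx, cancel := context.WithCancel(context.Background())
	w := &watcher{id: 7, ctx: ctx, sendCh: reqCh}

	done := make(chan struct{})
	go w.run(done)

	cancel() // client cancels the watch; no further events will ever arrive
	<-done
	fmt.Println("sent cancel for watch", (<-reqCh).watchID)
}
```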
Fixes #9416
Heavy inspiration drawn from the solutions proposed there.