The difference in load configuration for the watch delay tests shows how huge the
impact is. Even with the random write scheduler, grpc served under the http
server can only handle 500 KB with a 2 second delay. On the other hand, a
separate grpc server easily hits 10, 100 or even 1000 MB within 100 milliseconds.
The priority write scheduler that was used in most previous releases
is far worse than the random one.
Tests are configured to only 5 MB to avoid flakes and to avoid taking too long to fill
etcd.
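As a rough sketch of what such a limit could look like in a test configuration
(the struct and field names below are hypothetical, not the actual test harness
types):

    package example

    import "time"

    // watchDelayTestConfig is a hypothetical configuration for a watch delay test.
    type watchDelayTestConfig struct {
        maxWatchDelay   time.Duration // fail if a watch event is delivered later than this
        maxTotalTraffic int64         // total bytes written to etcd during the test
    }

    // Keep traffic at 5 MB so the test neither flakes nor spends too long filling etcd.
    var defaultWatchDelayTestConfig = watchDelayTestConfig{
        maxWatchDelay:   100 * time.Millisecond,
        maxTotalTraffic: 5 * 1024 * 1024,
    }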
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
Two goroutines access the `gs` grpc server variable. Before the
insecure `gs` server starts, `gs` can be reassigned to the secure server,
and the client will then fail to connect to etcd with an insecure request.
This is a data race. The new goroutine should receive the server as an
argument instead of capturing the shared variable.
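A minimal sketch of the pattern (names are illustrative, not the exact etcd
code): capturing the shared variable races with the later reassignment, while
passing it as an argument fixes the goroutine's view of the server.

    package example

    import "google.golang.org/grpc"

    func startServers() {
        gs := grpc.NewServer() // insecure server

        // Racy: the goroutine reads gs while the parent goroutine may
        // reassign it below.
        go func() { serve(gs) }()

        // Fixed: pass gs as an argument so the goroutine keeps serving the
        // insecure server it was started for, even after gs is reassigned.
        go func(s *grpc.Server) { serve(s) }(gs)

        gs = grpc.NewServer() // reassigned to the secure server (TLS options elided)
        serve(gs)
    }

    func serve(s *grpc.Server) { _ = s /* net.Listener setup elided */ }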
fix: #15495
Signed-off-by: Wei Fu <fuweid89@gmail.com>
(cherry picked from commit a9988e2625eede1af81d189b5f2ecf7d4af3edf1)
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Problem: during restore in watchableStore.Restore, synced watchers are moved to unsynced.
Their minRev will be behind, since it is not updated while a watcher stays synced.
Solution: update minRev when moving the watchers to unsynced.
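A simplified sketch of the idea, with the types reduced to the relevant field
(not the actual mvcc code):

    package example

    // watcher is a simplified stand-in for the mvcc watcher type.
    type watcher struct {
        minRev int64 // the next revision this watcher still needs to receive
    }

    // restoreWatchers sketches moving synced watchers to unsynced while also
    // bumping minRev, so a watcher that was fully synced does not keep a
    // stale minimum revision after the restore.
    func restoreWatchers(synced, unsynced map[*watcher]struct{}, currentRev int64) {
        for w := range synced {
            if w.minRev < currentRev+1 {
                w.minRev = currentRev + 1
            }
            unsynced[w] = struct{}{}
            delete(synced, w)
        }
    }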
fixes: https://github.com/etcd-io/etcd/issues/15271
Signed-off-by: Bogdan Kanivets <bkanivets@apple.com>
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
Backport https://github.com/etcd-io/etcd/pull/15095.
When promoting a learner, we need to wait until the leader's applied index
catches up with the committed index. Afterwards, check whether the learner ID
exists or not, and return `membership.ErrIDNotFound` directly from the API
if the member ID is not found, to avoid the request being unnecessarily
delivered to raft.
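A rough sketch of that ordering, with illustrative interface and method names
standing in for the actual etcdserver code:

    package example

    import (
        "context"
        "errors"
    )

    // errIDNotFound stands in for membership.ErrIDNotFound.
    var errIDNotFound = errors.New("membership: ID not found")

    // cluster abstracts the pieces of the server used below (names are illustrative).
    type cluster interface {
        waitAppliedIndex(ctx context.Context) error
        isMemberExist(id uint64) bool
        proposePromote(ctx context.Context, id uint64) error
    }

    // promoteMember sketches the ordering: wait for the applied index to
    // catch up with the committed index, then validate the member ID locally
    // before the request is ever handed to raft.
    func promoteMember(ctx context.Context, s cluster, id uint64) error {
        if err := s.waitAppliedIndex(ctx); err != nil {
            return err
        }
        if !s.isMemberExist(id) {
            return errIDNotFound // reject unknown IDs without going through raft
        }
        return s.proposePromote(ctx, id)
    }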
Signed-off-by: Benjamin Wang <wachao@vmware.com>
We need to include io.ErrUnexpectedEOF in the error chain, so that
etcdserver can repair it automatically.
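A minimal sketch of the error wrapping (the decode function is illustrative):
using %w keeps io.ErrUnexpectedEOF in the chain, so errors.Is can still detect
it further up the stack.

    package main

    import (
        "errors"
        "fmt"
        "io"
    )

    // decodeRecord is an illustrative stand-in for a WAL decoding step that
    // hits a truncated record.
    func decodeRecord() error {
        // Wrapping with %w keeps io.ErrUnexpectedEOF in the error chain.
        return fmt.Errorf("failed to read the full record: %w", io.ErrUnexpectedEOF)
    }

    func main() {
        err := decodeRecord()
        // The caller can detect the truncated record and trigger a repair.
        fmt.Println(errors.Is(err, io.ErrUnexpectedEOF)) // true
    }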
Backport https://github.com/etcd-io/etcd/pull/15068
Signed-off-by: Benjamin Wang <wachao@vmware.com>
`unsafeCommit` is called by both `(*batchTxBuffered) commit` and
`(*backend) defrag`. When users perform the defragmentation
operation, etcd doesn't update the consistent index. If etcd
crashes (e.g. panics) during the process for whatever reason, it
replays the WAL entries starting from the latest snapshot, and may
therefore re-apply entries which have already been applied. Eventually
the revision isn't consistent with the other members.
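A sketch of the general idea with simplified stand-in types (not the actual
backend code): persist the consistent index in the same transaction before it
is committed, so a replay after a crash resumes from the right entry.

    package example

    // tx and consistentIndexer are simplified stand-ins for the backend
    // transaction and the consistent-index tracker in etcdserver.
    type tx interface {
        put(bucket, key string, value uint64)
        commit()
    }

    type consistentIndexer interface {
        consistentIndex() uint64
    }

    // commitWithConsistentIndex sketches the idea behind the fix: every
    // commit, including the ones issued during defragmentation, first
    // persists the consistent index so a post-crash WAL replay does not
    // re-apply entries that were already applied.
    func commitWithConsistentIndex(t tx, ci consistentIndexer) {
        t.put("meta", "consistent_index", ci.consistentIndex())
        t.commit()
    }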
Refer to the discussion in https://github.com/etcd-io/etcd/pull/14685
Signed-off-by: Benjamin Wang <wachao@vmware.com>
A WAL object was closed by defer; however, the WAL was recreated afterwards,
so the defer closed the already closed WAL but not the new one. This caused a
data race between writing the file and cleaning up the temporary test directory,
which led to a non-deterministic bug.
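The pitfall, sketched with an illustrative stand-in type: defer evaluates its
receiver when the statement runs, so after the variable is reassigned the
original defer still points at the old WAL; deferring a closure postpones the
lookup until the function returns.

    package example

    // walFile is a simplified stand-in for a WAL handle used in the test.
    type walFile struct{ path string }

    func (w *walFile) Close() error { return nil }

    func openWAL(path string) *walFile { return &walFile{path: path} }

    func testScenario() {
        w := openWAL("wal-dir")

        // Pitfall: the receiver is evaluated now, so this always closes the
        // first WAL object, even if w is reassigned below.
        defer w.Close()

        // Better: defer a closure, so whatever WAL w holds at function
        // return is the one that gets closed.
        defer func() { w.Close() }()

        w = openWAL("wal-dir") // the WAL is recreated; the first defer misses it
    }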
Fixes #14332
Signed-off-by: Vladimir Sokolov <vsvastey@gmail.com>