For a cluster with only one member, raft always sends identical
unstable entries and committed entries to etcdserver, and etcd
responds to the client once it finishes (actually only partially
finishes) the apply workflow.
When the client receives the response, it doesn't mean etcd has already
successfully persisted the data to BoltDB and the WAL, because:
1. etcd commits the BoltDB transaction periodically instead of on each request;
2. etcd saves WAL entries in parallel with applying the committed entries.
Accordingly, data loss may occur if etcd crashes immediately after
responding to the client but before BoltDB and the WAL have persisted
the data to disk.
Note that this issue can only happen in clusters with a single member.
For clusters with multiple members it isn't an issue, because etcd will
not commit & apply the data before it has been replicated to a majority
of members. When the client receives the response, the data must have
been applied, which in turn means it must have been committed.
Note: for clusters with multiple members, raft never sends identical
unstable entries and committed entries to etcdserver.
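Below is a minimal, self-contained sketch of the race described above (illustrative only; the `Entry` type, `handleReady`, and the artificial timing are stand-ins, not etcd's actual code): the client acknowledgement can race ahead of the WAL fsync because the two run concurrently.
```go
package main

import (
	"fmt"
	"time"
)

// Entry stands in for a raft log entry in this illustration.
type Entry struct{ Data string }

// handleReady models the single-member case: the same entries arrive as
// both unstable entries (must be fsynced to the WAL) and committed
// entries (ready to apply). Persisting and applying run in parallel, so
// the client can be acknowledged before the fsync has finished.
func handleReady(unstable, committed []Entry, ack func(Entry)) {
	walDone := make(chan struct{})
	go func() {
		_ = unstable                      // pretend: write these to the WAL ...
		time.Sleep(10 * time.Millisecond) // ... and wait for the fsync
		close(walDone)
	}()

	for _, e := range committed {
		ack(e) // the client sees "success" here; a crash now loses the entry
	}
	<-walDone // durability is only guaranteed after this point
}

func main() {
	entries := []Entry{{Data: "put k v"}}
	handleReady(entries, entries, func(e Entry) {
		fmt.Println("acknowledged to client:", e.Data)
	})
}
```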
Signed-off-by: Benjamin Wang <wachao@vmware.com>
In v3.5 it is assumed that the logger is never nil; however, that is not
guaranteed in v3.4. A PR targeting v3.5 was backported to 3.4, which is
why a panic on a nil logger became possible in 3.4. This commit fixes
the issue.
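A hedged sketch of the kind of guard this requires (the package and function names are illustrative, not the actual patch): any v3.4 code path that may still receive a nil *zap.Logger should fall back to a non-nil logger before using it.
```go
package logutil // illustrative package name

import "go.uber.org/zap"

// ensureLogger returns a usable logger even when the caller passed nil,
// which is still possible in v3.4 and would otherwise panic on the first
// lg.Info/lg.Warn call.
func ensureLogger(lg *zap.Logger) *zap.Logger {
	if lg == nil {
		return zap.NewNop() // any non-nil fallback prevents the panic
	}
	return lg
}
```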
Fixes #14402
Signed-off-by: Vladimir Sokolov <vsvastey@gmail.com>
The latest golang.org/x/tools version v0.1.12 was just released on
Jul 27, 2022, and it is causing an issue (see below) in the govet check:
```
govet_shadow' started at Sun Jul 31 23:23:27 PDT 2022
go get: upgraded golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 => v0.0.0-20220722155237-a158d28d115b
go get: upgraded golang.org/x/sys v0.0.0-20211019181941-9d821ace8654 => v0.0.0-20220722155257-8c9f86f7a55f
go get: upgraded golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135 => v0.1.12
/root/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-prometheus@v1.2.0/client_metrics.go:7:2: missing go.sum entry for module providing package golang.org/x/net/context (imported by go.etcd.io/etcd/etcdserver/etcdserverpb); to add:
go get go.etcd.io/etcd/etcdserver/etcdserverpb
/root/go/pkg/mod/google.golang.org/grpc@v1.26.0/internal/transport/controlbuf.go:28:2: missing go.sum entry for module providing package golang.org/x/net/http2 (imported by go.etcd.io/etcd/embed); to add:
go get go.etcd.io/etcd/embed
/root/go/pkg/mod/google.golang.org/grpc@v1.26.0/internal/transport/controlbuf.go:29:2: missing go.sum entry for module providing package golang.org/x/net/http2/hpack (imported by github.com/soheilhy/cmux); to add:
go get github.com/soheilhy/cmux@v0.1.4
/root/go/pkg/mod/google.golang.org/grpc@v1.26.0/server.go:36:2: missing go.sum entry for module providing package golang.org/x/net/trace (imported by go.etcd.io/etcd/embed); to add:
go get go.etcd.io/etcd/embed
```
It isn't good to always use the latest version. Instead, we should pin
the version; v0.1.11 was confirmed to be working.
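One common way to pin a Go tool's version so it cannot drift to the latest release is a build-tagged tools file tracked by go.mod; the sketch below is an illustration of that pattern, not necessarily how the etcd 3.4 scripts pin it.
```go
//go:build tools
// +build tools

// Package tools records build-time tool dependencies (here the shadow
// analyzer used by the govet_shadow check) in go.mod, so the module pins
// a fixed golang.org/x/tools version (e.g. v0.1.11) instead of whatever
// happens to be the latest release.
package tools

import (
	_ "golang.org/x/tools/go/analysis/passes/shadow/cmd/shadow"
)
```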
Signed-off-by: Benjamin Wang <wachao@vmware.com>
To avoid inconsistent behavior during cluster upgrades, we are feature
gating persistence behind the cluster version. This ensures that all
cluster members are upgraded to v3.6 before the behavior changes.
To allow backporting this fix to v3.5, we are also introducing the flag
--experimental-enable-lease-checkpoint-persist, which allows a smooth
upgrade of v3.5 clusters with this feature enabled.
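A minimal sketch of the gate described above (the `clusterVersion` type and `shouldPersistCheckpoints` function are assumptions made for illustration, not etcd's exact API):
```go
package lease // illustrative placement

// clusterVersion is a simplified stand-in for the cluster-wide version.
type clusterVersion struct{ Major, Minor int }

// shouldPersistCheckpoints sketches the gating rule: persist lease
// checkpoint state only once the whole cluster reports v3.6, or earlier
// when the operator opted in with
// --experimental-enable-lease-checkpoint-persist.
func shouldPersistCheckpoints(cv *clusterVersion, persistFlag bool) bool {
	if persistFlag {
		return true // explicit opt-in, intended for v3.5 clusters
	}
	if cv == nil {
		return false // cluster version not yet known; keep the old behavior
	}
	return cv.Major > 3 || (cv.Major == 3 && cv.Minor >= 6)
}
```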
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
This is a backport of https://github.com/etcd-io/etcd/pull/13435 and is
part of the work for 3.4.20
https://github.com/etcd-io/etcd/issues/14232.
The original change had a second commit that modified a changelog file.
The 3.4 branch does not include a changelog file, so that part was not
cherry-picked.
Local Testing:
- `make build`
- `make test`
Both succeed.
Signed-off-by: Ramsés Morales <ramses@gmail.com>
The current checkpointing mechanism is buggy: new checkpoints for any
lease are scheduled only until the first leader change. This adds a fix
for that and a test that verifies it.
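A minimal sketch of the intended behavior, assuming a lessor that restarts its checkpoint scheduling on every promotion (the names and structure are illustrative, not the actual patch):
```go
package lease // illustrative sketch, not the actual patch

import "time"

type lessor struct {
	checkpointInterval time.Duration
	stopScheduling     chan struct{}
}

// Promote is called whenever this member becomes leader. The essence of
// the fix is that checkpoint scheduling must be (re)started on every
// promotion, not only the first one; otherwise a leader change silently
// stops new checkpoints from being scheduled.
func (le *lessor) Promote() {
	le.stopScheduling = make(chan struct{})
	go le.scheduleCheckpoints(le.stopScheduling)
}

// Demote is called when leadership is lost.
func (le *lessor) Demote() {
	if le.stopScheduling != nil {
		close(le.stopScheduling)
		le.stopScheduling = nil
	}
}

func (le *lessor) scheduleCheckpoints(stop <-chan struct{}) {
	t := time.NewTicker(le.checkpointInterval)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			// schedule checkpoints for the remaining TTLs of active leases
		case <-stop:
			return
		}
	}
}
```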
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
To improve debuggability of `agreement among raft nodes before
linearized reading`, we added some tracing inside
`linearizableReadLoop`.
This will allow us to know the timing of `s.r.ReadIndex` vs
`s.applyWait.Wait(rs.Index)`.
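A self-contained sketch of what the added timing makes visible (the `timedStep` helper and the stand-in phases are illustrative; the real code traces `s.r.ReadIndex` and `s.applyWait.Wait(rs.Index)` inside `linearizableReadLoop`):
```go
package main

import (
	"fmt"
	"time"
)

// timedStep runs one phase of a linearizable read and reports how long it
// took, which is essentially what the added tracing records for the
// read-index request and the apply wait.
func timedStep(name string, step func() error) error {
	start := time.Now()
	err := step()
	fmt.Printf("step %q took %v\n", name, time.Since(start))
	return err
}

func main() {
	// Stand-ins for the two phases traced in linearizableReadLoop.
	requestReadIndex := func() error { time.Sleep(5 * time.Millisecond); return nil }
	waitForApply := func() error { time.Sleep(2 * time.Millisecond); return nil }

	_ = timedStep("read-index", requestReadIndex)
	_ = timedStep("apply-wait", waitForApply)
}
```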
Signed-off-by: Chao Chen <chaochn@amazon.com>
This change ensures that all members returned during the client's
AutoSync are started and are not learners, since learners are not valid
etcd members to send requests to.
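A minimal sketch of the filtering intent (the `member` type and `eligibleEndpoints` helper are simplified stand-ins for what AutoSync does with the MemberList response):
```go
package main

import "fmt"

// member is a simplified stand-in for the fields AutoSync cares about;
// the real client works with the MemberList response types.
type member struct {
	Name       string
	IsLearner  bool
	ClientURLs []string // empty until the member has actually started
}

// eligibleEndpoints mirrors the intent of the fix: only advertise client
// URLs of members that are started (have published client URLs) and are
// not learners, since learners must not serve client requests.
func eligibleEndpoints(members []member) []string {
	var eps []string
	for _, m := range members {
		if m.IsLearner || len(m.ClientURLs) == 0 {
			continue
		}
		eps = append(eps, m.ClientURLs...)
	}
	return eps
}

func main() {
	ms := []member{
		{Name: "started", ClientURLs: []string{"http://10.0.0.1:2379"}},
		{Name: "learner", IsLearner: true, ClientURLs: []string{"http://10.0.0.2:2379"}},
		{Name: "not-started"}, // no client URLs published yet
	}
	fmt.Println(eligibleEndpoints(ms)) // [http://10.0.0.1:2379]
}
```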
Signed-off-by: Chris Ayoub <cayoub@hubspot.com>
Cherry pick https://github.com/etcd-io/etcd/pull/13932 to 3.4.
When etcdserver receives a LeaseRenew request, it may still be in the
middle of processing the LeaseGrantRequest for exactly the same leaseID.
Accordingly, it may return TTL=0 to the client because the leaseID is
not found yet. So the leader should wait for the applied index to catch
up before processing client requests.
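A self-contained sketch of the waiting idea (the `waitAppliedIndex` helper and its polling loop are illustrative, not the actual implementation):
```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync/atomic"
	"time"
)

// waitAppliedIndex blocks until the locally applied index has caught up
// with the given committed index, so that a committed-but-not-yet-applied
// LeaseGrant becomes visible before the LeaseRenew is served.
func waitAppliedIndex(ctx context.Context, applied func() uint64, committed uint64) error {
	for applied() < committed {
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for the applied index to catch up")
		case <-time.After(10 * time.Millisecond):
		}
	}
	return nil
}

func main() {
	var applied uint64 = 4
	committed := uint64(5) // the LeaseGrant is committed but not applied yet

	go func() { // the apply loop catches up shortly afterwards
		time.Sleep(20 * time.Millisecond)
		atomic.StoreUint64(&applied, 5)
	}()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	if err := waitAppliedIndex(ctx, func() uint64 { return atomic.LoadUint64(&applied) }, committed); err != nil {
		fmt.Println("LeaseRenew failed:", err)
		return
	}
	fmt.Println("applied index caught up; the lease lookup will now find the granted lease")
}
```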
Signed-off-by: Benjamin Wang <wachao@vmware.com>