raftNode was being initialized in start(), which caused
hangs when trying to stop the etcd server since the stop channel
would not be initialized in time for the stop call. Instead,
set up the non-configurable bits in a constructor.
Fixes #7668
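A minimal sketch of the idea, with simplified stand-in types (the real
raftNode carries more channels and configuration than shown here):

    package raftnode

    // raftNodeConfig stands in for the configurable part callers pass in.
    type raftNodeConfig struct{}

    // raftNode owns the channels that stop() depends on.
    type raftNode struct {
        raftNodeConfig
        stopped chan struct{}
        done    chan struct{}
    }

    // newRaftNode allocates the non-configurable bits up front, so stop()
    // never races against start() for a nil channel.
    func newRaftNode(cfg raftNodeConfig) *raftNode {
        return &raftNode{
            raftNodeConfig: cfg,
            stopped:        make(chan struct{}),
            done:           make(chan struct{}),
        }
    }

    // start only launches the run loop; it no longer initializes state.
    func (r *raftNode) start() {
        go func() {
            defer close(r.done)
            <-r.stopped // simplified run loop
        }()
    }

    // stop signals the run loop and waits for it to exit.
    func (r *raftNode) stop() {
        close(r.stopped)
        <-r.done
    }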
Fix https://github.com/coreos/etcd/issues/7470.
This patch removes an unnecessary term look-up in
'createMergedSnapshotMessage', which can trigger a panic
if the raft entry at etcdProgress.appliedi has been compacted
by a subsequent 'MsgSnap' message. If a follower is lagging
(in this case, because of network latency spikes), it can
receive multiple 'MsgSnap' requests from the leader.
The etcd server-side 'applyAll' routine and raft's Ready
processing routine become asynchronous once raft entries
are persisted. Since the raft Ready routine takes less time
to finish, a second 'MsgSnap' may already be handled while
the slow 'applyAll' is still processing the first (old)
'MsgSnap'. The raft Ready routine can then compact the log
past an index that 'applyAll' has not reached yet. That is
how 'createMergedSnapshotMessage' ended up looking up a raft
term with an outdated etcdProgress.appliedi.
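A standalone illustration of the failure mode, using the raft
MemoryStorage directly (the entry indexes are made up; the point is
that Term() on a compacted index fails, which the old helper turned
into a panic):

    package main

    import (
        "fmt"

        "github.com/coreos/etcd/raft"
        "github.com/coreos/etcd/raft/raftpb"
    )

    func main() {
        ms := raft.NewMemoryStorage()
        ms.Append([]raftpb.Entry{{Index: 1, Term: 1}, {Index: 2, Term: 1}, {Index: 3, Term: 2}})

        // The Ready-processing routine compacts up to the index of a newer
        // snapshot while the slow applyAll is still behind.
        ms.Compact(3)

        // applyAll still holds the old etcdProgress.appliedi; looking up
        // its term now fails with raft.ErrCompacted.
        if _, err := ms.Term(2); err != nil {
            fmt.Println("term lookup failed:", err)
        }
    }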
Signed-off-by: Gyu-Ho Lee <gyuhox@gmail.com>
This commit adds jwt token support in v3 auth API.
Remaining major ToDos:
- Currently the token type isn't hidden from etcdserver. In the near
future the information should be completely invisible to the
etcdserver package.
- Configurable token expiration. Currently a token stays valid
until the signing keys are changed.
How to use:
1. generate keys for signing and verifying jwt tokens:
$ openssl genrsa -out app.rsa 1024
$ openssl rsa -in app.rsa -pubout > app.rsa.pub
2. add command line options to etcd like below:
--auth-token-type jwt \
--auth-jwt-pub-key app.rsa.pub --auth-jwt-priv-key app.rsa \
--auth-jwt-sign-method RS512
3. launch etcd cluster
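For reference, a rough sketch of how a token signed with app.rsa can be
verified against app.rsa.pub using the jwt-go library; the claim names
here are only illustrative, not necessarily what the auth store emits:

    package main

    import (
        "fmt"
        "io/ioutil"
        "log"

        "github.com/dgrijalva/jwt-go"
    )

    func main() {
        // Sign a token the way the server side would (claims illustrative).
        signBytes, err := ioutil.ReadFile("app.rsa")
        if err != nil {
            log.Fatal(err)
        }
        privKey, err := jwt.ParseRSAPrivateKeyFromPEM(signBytes)
        if err != nil {
            log.Fatal(err)
        }
        tk := jwt.NewWithClaims(jwt.SigningMethodRS512, jwt.MapClaims{
            "username": "root",
            "revision": 1,
        })
        signed, err := tk.SignedString(privKey)
        if err != nil {
            log.Fatal(err)
        }

        // Verify it with the public key, as done on each authenticated request.
        verifyBytes, err := ioutil.ReadFile("app.rsa.pub")
        if err != nil {
            log.Fatal(err)
        }
        pubKey, err := jwt.ParseRSAPublicKeyFromPEM(verifyBytes)
        if err != nil {
            log.Fatal(err)
        }
        parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
            return pubKey, nil
        })
        if err != nil || !parsed.Valid {
            log.Fatal("invalid token")
        }
        fmt.Println("claims:", parsed.Claims.(jwt.MapClaims))
    }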
Below is a performance comparison of serializable read w/ and w/o jwt
token. All three etcd nodes are run on a single machine. The signing
method is RS512 and the key length is 1024 bits. As the results show,
the jwt-based token introduces a performance overhead, but it should
be acceptable for use cases that require authentication.
w/o jwt token auth (no auth):
Summary:
Total: 1.6172 secs.
Slowest: 0.0125 secs.
Fastest: 0.0001 secs.
Average: 0.0002 secs.
Stddev: 0.0004 secs.
Requests/sec: 6183.5877
Response time histogram:
0.000 [1] |
0.001 [9982] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
0.003 [1] |
0.004 [1] |
0.005 [0] |
0.006 [0] |
0.008 [6] |
0.009 [0] |
0.010 [1] |
0.011 [5] |
0.013 [3] |
Latency distribution:
10% in 0.0001 secs.
25% in 0.0001 secs.
50% in 0.0001 secs.
75% in 0.0001 secs.
90% in 0.0002 secs.
95% in 0.0002 secs.
99% in 0.0003 secs.
w/ jwt token auth:
Summary:
Total: 2.5364 secs.
Slowest: 0.0182 secs.
Fastest: 0.0002 secs.
Average: 0.0003 secs.
Stddev: 0.0005 secs.
Requests/sec: 3942.5185
Response time histogram:
0.000 [1] |
0.002 [9975] |∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎∎
0.004 [0] |
0.006 [1] |
0.007 [11] |
0.009 [2] |
0.011 [4] |
0.013 [5] |
0.015 [0] |
0.016 [0] |
0.018 [1] |
Latency distribution:
10% in 0.0002 secs.
25% in 0.0002 secs.
50% in 0.0002 secs.
75% in 0.0002 secs.
90% in 0.0003 secs.
95% in 0.0003 secs.
99% in 0.0004 secs.
Keep more WAL entries in memory for fast follower recovery.
10,000 was too small a number and triggered quite a few snapshots.
ZooKeeper shows that 100,000 is a reasonable number even for old,
less powerful machines.
Eventually we should provide both a count limit and a max-memory
limit (for large entries).
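The change itself amounts to a constant bump along these lines
(identifier and package shown are illustrative):

    package etcdserver

    // Number of applied entries to keep before triggering a snapshot.
    // 10,000 forced frequent snapshotting; 100,000 follows ZooKeeper's
    // experience on even modest hardware.
    const DefaultSnapCount = 100000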
Suppose a lease grant request from a follower goes through and is
followed by a lease lookup or renewal. The leader might not yet have
applied the lease grant request locally, so it might not find the lease
for the lookup or renewal request, which results in a lease-not-found
error. To fix this issue, we force the leader to apply its pending
committed index before looking up the lease.
Fixes #6978
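A minimal sketch of the idea, with illustrative names (the real server
has its own progress counters and notification machinery):

    package leasewait

    import (
        "context"
        "time"
    )

    // progress exposes the leader's raft progress; field names are illustrative.
    type progress struct {
        appliedIndex   func() uint64
        committedIndex func() uint64
    }

    // waitApplied blocks until every already-committed entry (including a
    // just-granted lease) has been applied locally, so a following lookup
    // or renewal cannot miss it.
    func waitApplied(ctx context.Context, p progress) error {
        for p.appliedIndex() < p.committedIndex() {
            select {
            case <-time.After(10 * time.Millisecond): // simple poll; a notifier works too
            case <-ctx.Done():
                return ctx.Err()
            }
        }
        return nil
    }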
This commit protects membership change operations with auth. Only
users that have the root role can issue these operations.
Implements https://github.com/coreos/etcd/issues/6899
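Conceptually the guard looks like this (a sketch against the v3 auth
store's admin check; the wiring in the actual patch may differ):

    package membership

    import "github.com/coreos/etcd/auth"

    // checkMemberChangePermission rejects member add/remove/update requests
    // unless auth is disabled or the caller holds the root role.
    func checkMemberChangePermission(as auth.AuthStore, ai *auth.AuthInfo) error {
        if as == nil || !as.IsAuthEnabled() {
            return nil // auth not enabled: nothing to enforce
        }
        return as.IsAdminPermitted(ai) // only root-role users pass
    }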
If we promote the lessor before we finish applying all
entries from the last term, we might incorrectly renew
leases that have already been revoked.
Here is an example:
- Term 1: revoke lease A accepted by raft
- Old leader fails, new election happens
- Term 2: promote
- Term 2: keep-alive for A succeeds; A now has a 10-second TTL
- Term 2: the revoke of lease A from Term 1 gets committed and applied
- Term 2: lease A, which the client believes has a 10-second TTL, is revoked
To solve this, the new leader MUST apply all entries from the old term
before promoting its lessor to start accepting renew requests.
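A sketch of the ordering this enforces (types and names are stand-ins,
not the actual server fields):

    package lessor

    // server carries just the pieces needed for the sketch.
    type server struct {
        appliedIndex   func() uint64
        committedIndex func() uint64
        applyNextEntry func()
        promoteLessor  func()
    }

    // onElected drains every entry committed in earlier terms (including
    // pending lease revocations) before the lessor starts accepting
    // keep-alive requests.
    func (s *server) onElected() {
        for s.appliedIndex() < s.committedIndex() {
            s.applyNextEntry()
        }
        s.promoteLessor()
    }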
When 1,000 leases expire at the same time, etcd takes more than 5 seconds
to clean them up. This means that even after the leases have expired, keys
associated with those leases are still accessible. I increase the deletion
throughput by parallelizing the lease deletion process.
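A rough sketch of the parallelization (the actual revoke path goes
through the v3 RPC layer; names here are illustrative):

    package lease

    import "sync"

    // revokeExpired fans the expired leases out to a bounded number of
    // workers instead of revoking them one by one.
    func revokeExpired(expired []int64, revoke func(id int64), workers int) {
        ids := make(chan int64)
        var wg sync.WaitGroup
        wg.Add(workers)
        for i := 0; i < workers; i++ {
            go func() {
                defer wg.Done()
                for id := range ids {
                    revoke(id) // each revoke deletes the lease's attached keys
                }
            }()
        }
        for _, id := range expired {
            ids <- id
        }
        close(ids)
        wg.Wait()
    }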
All outstanding goroutines now go into the etcdserver waitgroup. Goroutines
are shut down with a "stopping" channel, which is closed when the run()
goroutine shuts down. The done channel only closes once the waitgroup is
totally cleared.
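The pattern, in miniature (field and method names simplified):

    package shutdown

    import "sync"

    type server struct {
        wg       sync.WaitGroup
        stopping chan struct{} // closed when run() shuts down
        done     chan struct{} // closed only after wg is fully drained
    }

    // goAttach tracks every outstanding goroutine in the waitgroup; each
    // attached goroutine is expected to watch s.stopping and return.
    func (s *server) goAttach(f func()) {
        s.wg.Add(1)
        go func() {
            defer s.wg.Done()
            f()
        }()
    }

    // waitShutdown closes stopping so attached goroutines exit, waits for
    // the waitgroup to drain, and only then closes done.
    func (s *server) waitShutdown() {
        close(s.stopping)
        s.wg.Wait()
        close(s.done)
    }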
If a user upgrades etcd from 2.3.x to 3.0 and shuts down the
cluster immediately without triggering any new backend writes,
then the consistent index in the backend will be zero.
The user then cannot restart etcdserver because of today's strict
index match checking. We now have to loosen this a bit for this case.
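A sketch of the relaxed check (illustrative; the real check compares the
backend's consistent index against the latest snapshot index at startup):

    package recovercheck

    import "fmt"

    // verifyConsistentIndex tolerates a zero index: it means the v3 backend
    // was never written after the 2.3.x -> 3.0 upgrade, so there is nothing
    // to match. Any non-zero index must still line up.
    func verifyConsistentIndex(backendIndex, snapshotIndex uint64) error {
        if backendIndex == 0 {
            return nil // untouched backend: skip the strict check
        }
        if backendIndex < snapshotIndex {
            return fmt.Errorf("backend consistent index %d is behind snapshot index %d",
                backendIndex, snapshotIndex)
        }
        return nil
    }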
kv.commit updates the consistent index in the backend. When
executing in parallel with apply, it might grab the tx lock
after apply updates the consistent index but before apply
starts to execute the operation. If the server dies right
after kv.commit, the consistent index is updated but the
operation is not executed. If we restart the etcd server,
etcd will skip the operation. :(
There are a few other places that we need to take care of,
but let us fix this one first.
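One way to picture the invariant the fix needs (a sketch, not
necessarily the exact approach of this patch): the consistent index and
the operation it belongs to must reach the backend under one lock
acquisition.

    package consistentindex

    import "sync"

    // batchTx is a stand-in for the backend's batch transaction.
    type batchTx struct{ mu sync.Mutex }

    func (t *batchTx) Lock()   { t.mu.Lock() }
    func (t *batchTx) Unlock() { t.mu.Unlock() }

    // applyEntry writes the operation and the consistent index while
    // holding the same lock, so a concurrent kv.commit can never persist
    // the index without the operation's effects.
    func applyEntry(tx *batchTx, index uint64, op func(), saveIndex func(uint64)) {
        tx.Lock()
        defer tx.Unlock()
        op()
        saveIndex(index)
    }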
If a server isn't serving txn requests from a client, the server
doesn't need the results of the range requests in the txn.
This is a follow-up to
https://github.com/coreos/etcd/pull/5689