Check the values of myKey and myRev first in the Unlock method to prevent calling Unlock without Lock, because doing so may cause the value of pfx to be deleted by mistake.
Signed-off-by: chenyahui <cyhone@qq.com>
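A minimal sketch of that guard, assuming a simplified mutex type with the same myKey/myRev/pfx fields (the real type lives in etcd's clientv3 concurrency package):
```go
package lockexample

import (
	"context"
	"errors"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// errLockReleased is an illustrative sentinel for unlocking a mutex that was
// never acquired (or was already released).
var errLockReleased = errors.New("lock is not acquired or already released")

// mutex is a simplified stand-in for the fields involved in the fix.
type mutex struct {
	client *clientv3.Client
	pfx    string // key prefix shared by all contenders
	myKey  string // key created by Lock; empty until Lock succeeds
	myRev  int64  // creation revision of myKey; <= 0 until Lock succeeds
}

// Unlock refuses to do anything unless Lock actually created a key, so a
// caller that never locked cannot delete keys under pfx by mistake.
func (m *mutex) Unlock(ctx context.Context) error {
	if m.myKey == "" || m.myRev <= 0 {
		return errLockReleased
	}
	if _, err := m.client.Delete(ctx, m.myKey); err != nil {
		return err
	}
	m.myKey = ""
	m.myRev = -1
	return nil
}
```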
Check the client count before creating the ephemeral key and do not
create the key if there are already too many clients. Check the
count again after creating the key; if the total number of kvs is
bigger than the expected count, then check the rev of the current
key and act based on it: if its rev is within the first ${count},
it's a valid client, otherwise it should fail.
Signed-off-by: Benjamin Wang <wachao@vmware.com>
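A hedged sketch of this admission check using the clientv3 API; the registerClient helper and the pfx/maxClients names are illustrative, not part of etcd:
```go
package clientcount

import (
	"context"
	"fmt"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// registerClient creates an ephemeral key under pfx, but only admits the
// client when the number of registered keys stays within maxClients.
func registerClient(ctx context.Context, c *clientv3.Client, pfx, id string, ttl, maxClients int64) error {
	// Pre-check: refuse to create the key if there are already too many clients.
	resp, err := c.Get(ctx, pfx, clientv3.WithPrefix(), clientv3.WithCountOnly())
	if err != nil {
		return err
	}
	if resp.Count >= maxClients {
		return fmt.Errorf("too many clients: %d", resp.Count)
	}

	// Create the ephemeral key bound to a lease so it disappears with the client.
	lease, err := c.Grant(ctx, ttl)
	if err != nil {
		return err
	}
	putResp, err := c.Put(ctx, pfx+id, "", clientv3.WithLease(lease.ID))
	if err != nil {
		return err
	}
	myRev := putResp.Header.Revision // creation revision of the new key

	// Post-check: if the total went over the limit, only the first maxClients
	// keys by creation revision are valid; later ones must back out.
	resp, err = c.Get(ctx, pfx, clientv3.WithPrefix(),
		clientv3.WithSort(clientv3.SortByCreateRevision, clientv3.SortAscend),
		clientv3.WithLimit(maxClients))
	if err != nil {
		return err
	}
	for _, kv := range resp.Kvs {
		if kv.CreateRevision == myRev {
			return nil // our key is within the first maxClients: valid client
		}
	}
	// Our key lost the race; remove it and report failure.
	if _, err := c.Delete(ctx, pfx+id); err != nil {
		return err
	}
	return fmt.Errorf("client %s rejected: over the %d-client limit", id, maxClients)
}
```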
Since the HTTP/2 spec defines the receive window's size and the max frame
size in a stream, the underlying layer (the gRPC client) can pre-read data
from the server even if the application layer hasn't read it yet.
The initialized cluster has a 20 KiB snapshot, which can be fully pre-read
by that underlying layer. We should increase the snapshot's size so that
io.Copy does not miss the canceled or timeout error.
Fixes: #14477
Signed-off-by: Wei Fu <fuweid89@gmail.com>
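A hypothetical test helper along these lines, padding the keyspace so the snapshot outgrows what the HTTP/2 window can buffer (the helper name, key layout, and sizes are assumptions):
```go
package snapexample

import (
	"context"
	"fmt"
	"strings"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// padSnapshot writes enough key/value data that the resulting snapshot is
// larger than the HTTP/2 receive window, so the gRPC layer cannot pre-buffer
// the whole stream and io.Copy still observes the canceled or timeout error.
func padSnapshot(ctx context.Context, c *clientv3.Client, keys, valueSize int) error {
	val := strings.Repeat("x", valueSize)
	for i := 0; i < keys; i++ {
		if _, err := c.Put(ctx, fmt.Sprintf("pad-%06d", i), val); err != nil {
			return err
		}
	}
	return nil
}
```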
To verify the distributed tracing feature is correctly set up, this PR adds
an integration test for it.
In the process of writing the test, I discovered a goroutine leak caused by
the TracerProvider not being closed. This PR fixes that issue as well.
Signed-off-by: Yingrong Zhao <yingrong.zhao@gmail.com>
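A small sketch of the general OpenTelemetry pattern involved, assuming the standard Go SDK; setupTracing is an illustrative name, not etcd's code:
```go
package tracingexample

import (
	"context"

	"go.opentelemetry.io/otel"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// setupTracing wires up a TracerProvider and hands back its Shutdown func.
// If the caller (e.g. the test) never invokes it, the batch span processor's
// goroutines keep running, which is the kind of leak described above.
func setupTracing(exporter sdktrace.SpanExporter) (shutdown func(context.Context) error) {
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	otel.SetTracerProvider(tp)
	return tp.Shutdown
}
```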
When a client has no permission to perform a given operation, applying
the entry may fail. We should also move consistent_index forward
in this case, otherwise the consistent_index may be smaller than the
snapshot index.
When a user sets up a Mirror with a restricted user that doesn't have
access to the `foo` path, we will fail to get the most recent revision
due to permission issues.
With this change, when a prefix is provided we will get the initial
revision from the prefix rather than from /foo. This allows restricted
users to set up sync.
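A usage sketch with the clientv3 mirror package, wrapped in a hypothetical mirrorPrefix helper; with this change, a role scoped to the prefix is enough:
```go
package mirrorexample

import (
	"context"
	"log"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/mirror"
)

// mirrorPrefix mirrors only the keys under prefix. With the fix, the initial
// revision is derived from a read on the prefix itself, so a user whose role
// only grants that prefix can run the sync.
func mirrorPrefix(ctx context.Context, c *clientv3.Client, prefix string) error {
	syncer := mirror.NewSyncer(c, prefix, 0) // rev=0: start from the current revision

	// Replay the existing keys under the prefix.
	respCh, errCh := syncer.SyncBase(ctx)
	for resp := range respCh {
		for _, kv := range resp.Kvs {
			log.Printf("base: %s=%s", kv.Key, kv.Value)
		}
	}
	if err := <-errCh; err != nil {
		return err
	}

	// Then follow live updates from where the base sync left off.
	for wresp := range syncer.SyncUpdates(ctx) {
		for _, ev := range wresp.Events {
			log.Printf("update: %s %s", ev.Type, ev.Kv.Key)
		}
	}
	return nil
}
```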
When etcdserver receives a LeaseRenew request, it may still be in
the process of handling the LeaseGrantRequest for exactly the same
leaseID. Accordingly it may return TTL=0 to the client because the
leaseID is not found yet. So the leader should wait for the applied
index to catch up before processing client requests.
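A rough sketch of such a wait, with a hypothetical server type standing in for the real raft-backed indexes:
```go
package leasewait

import (
	"context"
	"time"
)

// server is a minimal stand-in exposing the two indexes that matter here;
// the real etcd server tracks them through raft.
type server struct {
	appliedIndex   func() uint64
	committedIndex func() uint64
}

// waitAppliedIndex polls until every committed entry (including a pending
// LeaseGrant) has been applied, so a following LeaseRenew sees the lease.
func (s *server) waitAppliedIndex(ctx context.Context) error {
	for s.appliedIndex() < s.committedIndex() {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(10 * time.Millisecond): // hypothetical poll interval
		}
	}
	return nil
}
```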
I think the strict (not-equal) relationship was too restrictive when expressed with 1s granularity.
```
logger.go:130: 2022-04-03T22:15:15.242+0200 WARN m1 leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk {"member": "m1", "to": "cb785755eb80ac1", "heartbeat-interval": "10ms", "expected-duration": "20ms", "exceeded-duration": "24.666613ms"}
logger.go:130: 2022-04-03T22:15:15.262+0200 INFO m-1 published local member to cluster through raft {"member": "m-1", "local-member-id": "e2dd9f523aa7be87", "local-member-attributes": "{Name:m-1 ClientURLs:[unix://127.0.0.1:2196386040]}", "cluster-id": "b4b8e7e41c23c8b5", "publish-timeout": "5.2s"}
v3_lease_test.go:415: Expected lease ttl (4m58s) to be greather than (4m58s)
```
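A sketch of the relaxed assertion, assuming ttlBefore/ttlAfter are the two observed TTLs in seconds; at 1s granularity they may legitimately be equal, so the check should be >= rather than strictly >:
```go
package leasettl

import "testing"

// assertTTLNotShrunk accepts equal TTLs instead of requiring a strict increase.
func assertTTLNotShrunk(t *testing.T, ttlBefore, ttlAfter int64) {
	if ttlAfter < ttlBefore {
		t.Fatalf("expected lease ttl (%ds) to be at least (%ds)", ttlAfter, ttlBefore)
	}
}
```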