The etcd tester runs the etcd runner as a separate binary.
It sends SIGSTOP to the runner when the tester wants to stop stressing,
and SIGCONT when it wants to start stressing again.
When the tester needs to clean up, it sends SIGINT to the runner.
Fixes #7026
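A minimal sketch of that signal protocol, assuming the runner is spawned with os/exec (the helper names here are illustrative, not the tester's actual code):

```
package tester

import (
	"os/exec"
	"syscall"
)

// spawnRunner starts the etcd runner as a child process so the tester
// can drive it with signals.
func spawnRunner(path string, args ...string) (*exec.Cmd, error) {
	cmd := exec.Command(path, args...)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

// pause suspends stressing: SIGSTOP freezes the runner process.
func pause(cmd *exec.Cmd) error { return cmd.Process.Signal(syscall.SIGSTOP) }

// resume restarts stressing: SIGCONT wakes the runner back up.
func resume(cmd *exec.Cmd) error { return cmd.Process.Signal(syscall.SIGCONT) }

// cleanup asks the runner to exit so the tester can clean up.
func cleanup(cmd *exec.Cmd) error { return cmd.Process.Signal(syscall.SIGINT) }
```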
The functional tester sometimes experiences a timeout during the compaction phase. I changed the timeout calculation to be based on the number of entries created and deleted.
Fixes #6805
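A hedged sketch of such a calculation; baseTimeout and compactQPS are assumptions for illustration, not the tester's actual constants:

```
package tester

import "time"

// compactTimeout scales the compaction wait with write volume: the more
// entries were created and deleted, the longer compaction may take.
func compactTimeout(created, deleted int64) time.Duration {
	const (
		baseTimeout = 30 * time.Second // assumed floor
		compactQPS  = 1000             // assumed entries compacted per second
	)
	return baseTimeout + time.Duration((created+deleted)/compactQPS)*time.Second
}
```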
The checkers and stressers should be composable without special cases; this
patch tries to address that while refactoring out some old cruft.
Namely,
* Single stresser/checker for a tester; built from composition
* Composite stresser via comma-separated list of stressers
* Split stressers into separate files
* Removed v2-only flags and special cases
* Rate limiter shared between the key stresser and the lease stresser
* Composite checker is now concurrent
* Stresser can return a Checker to check its invariants (see the sketch after this list)
* Each lease checker only operates on a single lease stresser
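A minimal sketch of that composition; the Stresser and Checker shapes below are illustrative, not the exact interfaces in the patch:

```
package stresser

// Checker validates an invariant after stressing.
type Checker interface {
	Check() error
}

// Stresser applies load and can hand back a Checker for its invariants.
type Stresser interface {
	Stress() error
	Cancel()
	Checker() Checker
}

// compositeStresser fans calls out to its children, so the tester always
// holds exactly one Stresser no matter how many were configured.
type compositeStresser struct{ stressers []Stresser }

func (cs *compositeStresser) Stress() error {
	for _, s := range cs.stressers {
		if err := s.Stress(); err != nil {
			return err
		}
	}
	return nil
}

func (cs *compositeStresser) Cancel() {
	for _, s := range cs.stressers {
		s.Cancel()
	}
}

func (cs *compositeStresser) Checker() Checker {
	chks := make([]Checker, 0, len(cs.stressers))
	for _, s := range cs.stressers {
		if c := s.Checker(); c != nil {
			chks = append(chks, c)
		}
	}
	return compositeChecker(chks)
}

// compositeChecker runs its child checks concurrently and returns the
// first error, matching the "composite checker is now concurrent" item.
type compositeChecker []Checker

func (cc compositeChecker) Check() error {
	errc := make(chan error, len(cc))
	for _, c := range cc {
		go func(c Checker) { errc <- c.Check() }(c)
	}
	for range cc {
		if err := <-errc; err != nil {
			return err
		}
	}
	return nil
}
```

A comma-separated stresser flag would then just split on commas and build one child per name before wrapping them all in a single compositeStresser.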
This commit decouples the stresser from the tester of
functional-tester. To do so, it adds a new option
--stresser to etcd-tester. The option accepts two types of stresser:
"default" and "nop". If the option is "default", etcd-tester stresses
its etcd cluster with the existing stresser. If the option is "nop",
etcd-tester does nothing for stressing.
Partially fixes https://github.com/coreos/etcd/issues/6446
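A hedged sketch of the "nop" side, reusing the illustrative Stresser interface from the earlier sketch:

```
// nopStresser satisfies Stresser but applies no load, letting the tester
// run its failure/recovery cycle without any stress traffic.
type nopStresser struct{}

func (nopStresser) Stress() error    { return nil }
func (nopStresser) Cancel()          {}
func (nopStresser) Checker() Checker { return nil }

// newStresser picks the implementation from the --stresser flag value;
// dflt stands in for however the existing stresser is constructed.
func newStresser(kind string, dflt Stresser) Stresser {
	if kind == "nop" {
		return nopStresser{}
	}
	return dflt // "default": stress the cluster as before
}
```

With this shape the tester's main loop never branches on the stresser kind; it just calls Stress and Cancel on whatever newStresser returned.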
1. fix failure case counting
2. match ErrClientConnClosing in stresser
3. longer timeout for set-health-key
4. fixed range for range/delete stresser
5. remove Limit in RangeRequest (see the sketch below)
Fix https://github.com/coreos/etcd/issues/5573.
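For item 5, a sketch of what an unbounded ranged read looks like from clientv3; the endpoint and key range are placeholders. Omitting a limit means the server returns the whole range:

```
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // placeholder endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// No clientv3.WithLimit option here, mirroring the removal of Limit
	// from the stresser's RangeRequest: the whole range comes back.
	resp, err := cli.Get(ctx, "foo", clientv3.WithRange("foo_end"))
	if err != nil {
		panic(err)
	}
	fmt.Println("keys in range:", len(resp.Kvs))
}
```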
Currently the stresser starts at the same time as the cluster.
If the stresser is launched too fast/early, all stressers
exit with the error 'etcdserver: not capable', which
means the cluster is not ready yet. This adds additional
error checking, so the stresser can retry.
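A minimal sketch of that retry, matching on the error string; the one-second backoff is an assumption:

```
package tester

import (
	"context"
	"strings"
	"time"
)

// retryStress retries while the cluster reports it is not ready yet,
// instead of exiting like the old stresser did. (Illustrative sketch.)
func retryStress(ctx context.Context, stress func() error) error {
	for {
		err := stress()
		if err == nil || !strings.Contains(err.Error(), "etcdserver: not capable") {
			return err
		}
		// cluster not ready yet: back off and try again
		select {
		case <-time.After(time.Second): // assumed retry interval
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```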
When the connection is closed, the error will be one of:
```
grpc.ErrorDesc(err) == context.Canceled.Error() ||
grpc.ErrorDesc(err) == context.DeadlineExceeded.Error() ||
grpc.ErrorDesc(err) == "transport is closing" ||
grpc.ErrorDesc(err) == "grpc: the client connection is closing"
```
Currently the gRPC connection just gets recreated
for every Stress call. When Stress ends or gets
canceled, the gRPC connection must also be closed.
For https://github.com/coreos/etcd/issues/4464.
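A hedged sketch of tying the connection's lifetime to the Stress call; the struct fields and run helper are illustrative, not the tester's actual code:

```
package tester

import (
	"context"

	"google.golang.org/grpc"
)

type stresser struct {
	Endpoint string
	cancel   context.CancelFunc
}

// Stress dials its own gRPC connection and guarantees it is closed when
// the run ends or is canceled, rather than leaking one per call.
func (s *stresser) Stress() error {
	conn, err := grpc.Dial(s.Endpoint, grpc.WithInsecure())
	if err != nil {
		return err
	}
	ctx, cancel := context.WithCancel(context.Background())
	s.cancel = cancel
	go func() {
		<-ctx.Done() // fires on Cancel() or when Stress returns
		conn.Close()
	}()
	defer cancel()
	return s.run(ctx, conn)
}

// run stands in for the actual request loop against conn.
func (s *stresser) run(ctx context.Context, conn *grpc.ClientConn) error {
	return nil // the real code issues stress requests until ctx ends
}
```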
Extend the timeout from 1s to defaultRequestTimeout (5s).
The 1s timeout may put an unwanted burden on the target member. If the member is
busy recovering, it has limited bandwidth for client requests. A
short timeout on the client side retries quickly while keeping the
in-flight connections open. Thus, etcd queues up lots of requests and
connections and takes a long time to clear them, which finally causes the
member health check to time out.
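Sketched with clientv3, assuming the health check is a simple key write (the helper name and key/value are illustrative):

```
package tester

import (
	"context"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// defaultRequestTimeout gives a busy or recovering member time to answer,
// instead of piling up fast client-side retries.
const defaultRequestTimeout = 5 * time.Second

func setHealthKey(cli *clientv3.Client) error {
	ctx, cancel := context.WithTimeout(context.Background(), defaultRequestTimeout)
	defer cancel()
	_, err := cli.Put(ctx, "health", "good") // placeholder key/value
	return err
}
```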
This is an instance of a general problem: how etcd should handle a large
number of simultaneous requests gracefully. We don't plan to address it at the
current stage.