When a compactKV request is halted before the final acknowledgement,
the tester used to just continue on the next endpoint. But compactKV
can end up being requested twice, with the first request already
replicated and applied by the time the second one is applied, so the
second returns a compact-revision error. This skips that case by
parsing the error message.
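A minimal sketch of that check (not the actual tester code): the
compactKV helper below and the substring matched against the error
description are assumptions, and the real server error text may differ.
```
package tester

import (
	"strings"

	"golang.org/x/net/context"
	"google.golang.org/grpc"

	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)

// compactKV issues a compaction and tolerates the error that a retried
// request gets when the first request was already applied.
func compactKV(ctx context.Context, kvc pb.KVClient, rev int64) error {
	_, err := kvc.Compact(ctx, &pb.CompactionRequest{Revision: rev})
	if err == nil {
		return nil
	}
	// Assumed substring of the server's compact-revision error message;
	// hitting it means an earlier compaction already succeeded.
	if strings.Contains(grpc.ErrorDesc(err), "has been compacted") {
		return nil
	}
	return err
}
```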
gRPC and the v2 client address share the same host (and port), but
gRPC does not work when a scheme is specified. This fixes it by adding
another member address for gRPC without the scheme, as we had before.
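A rough illustration of the idea; the field names below are
assumptions, not the real member struct:
```
package tester

import "strings"

// member keeps both forms of the client address: a scheme-prefixed URL
// for the v2 client and a bare host:port for grpc.Dial.
type member struct {
	ClientURL string // e.g. "http://127.0.0.1:2379"
	grpcAddr  string // e.g. "127.0.0.1:2379"
}

// grpcAddrFromURL strips the scheme so the address can be passed to
// grpc.Dial, which does not accept scheme-prefixed targets here.
func grpcAddrFromURL(clientURL string) string {
	s := strings.TrimPrefix(clientURL, "https://")
	return strings.TrimPrefix(s, "http://")
}
```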
The standard log package by default prints only second-resolution
timestamps, so the third-party log feeder mixes up the order of
events, which makes debugging hard. This replaces it with capnslog to
keep the format consistent with all other etcd logs.
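A short sketch of the switch, assuming the usual capnslog pattern used
elsewhere in etcd; the repo and package names passed to
NewPackageLogger are illustrative:
```
package tester

import "github.com/coreos/pkg/capnslog"

// plog replaces calls to the standard "log" package, so log lines are
// formatted consistently with the rest of etcd and can be ordered
// reliably by the log feeder.
var plog = capnslog.NewPackageLogger("github.com/coreos/etcd", "etcd-tester")

func logRound(round int) {
	plog.Printf("starting round %d", round)
}
```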
When closed, errors will be one of:
```
grpc.ErrorDesc(err) == context.Canceled.Error() ||
grpc.ErrorDesc(err) == context.DeadlineExceeded.Error() ||
grpc.ErrorDesc(err) == "transport is closing" ||
grpc.ErrorDesc(err) == "grpc: the client connection is closing"
```
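One way to fold those checks into a helper, as a sketch; the name
isHaltErr is an assumption:
```
package tester

import (
	"golang.org/x/net/context"
	"google.golang.org/grpc"
)

// isHaltErr reports whether err indicates the stream or connection was
// closed on purpose, so the caller can stop instead of retrying.
func isHaltErr(err error) bool {
	if err == nil {
		return false
	}
	desc := grpc.ErrorDesc(err)
	return desc == context.Canceled.Error() ||
		desc == context.DeadlineExceeded.Error() ||
		desc == "transport is closing" ||
		desc == "grpc: the client connection is closing"
}
```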
Currently the gRPC connection gets recreated for every Stress call.
When Stress ends or gets canceled, the gRPC connection must also be
closed.
For https://github.com/coreos/etcd/issues/4464.
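A sketch of keeping the connection on the stresser and closing it
together with the cancel; the field and method names are hypothetical,
not the actual tester code:
```
package tester

import (
	"sync"

	"golang.org/x/net/context"
	"google.golang.org/grpc"

	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)

type stresser struct {
	Endpoint string

	mu     sync.Mutex
	conn   *grpc.ClientConn
	cancel context.CancelFunc
}

// Stress dials once and keeps the connection for the whole stress run.
func (s *stresser) Stress() error {
	conn, err := grpc.Dial(s.Endpoint, grpc.WithInsecure())
	if err != nil {
		return err
	}
	ctx, cancel := context.WithCancel(context.Background())

	s.mu.Lock()
	s.conn, s.cancel = conn, cancel
	s.mu.Unlock()

	go s.run(ctx, pb.NewKVClient(conn))
	return nil
}

// run issues requests until the context is canceled.
func (s *stresser) run(ctx context.Context, kvc pb.KVClient) {
	for ctx.Err() == nil {
		kvc.Put(ctx, &pb.PutRequest{Key: []byte("foo"), Value: []byte("bar")})
	}
}

// Cancel stops the stress loop and closes the gRPC connection instead
// of leaking it until the next Stress call.
func (s *stresser) Cancel() {
	s.mu.Lock()
	conn, cancel := s.conn, s.cancel
	s.mu.Unlock()

	cancel()
	conn.Close()
}
```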
The Hash method returns either (nil, err) or (Hash, nil). The current
error checking is wrong: it only needs to check whether the error is
nil or non-nil. The broken check causes the panic in
https://github.com/coreos/etcd/issues/4463 by allowing the case where
resp is nil but err is non-nil.
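A minimal sketch of the corrected pattern, with a stand-in response
type and hypothetical names; only err needs to be checked before resp
is used:
```
package tester

// hashResponse stands in for the real response type.
type hashResponse struct {
	Hash uint32
}

type hasher interface {
	Hash() (*hashResponse, error)
}

func getHash(h hasher) (uint32, error) {
	resp, err := h.Hash()
	if err != nil {
		// resp is nil here; returning early avoids the nil dereference
		// that caused the panic.
		return 0, err
	}
	return resp.Hash, nil
}
```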
Extend the timeout from 1s to defaultRequestTimeout (5s).
The 1s timeout may put an unwanted burden on the target member. If the
member is busy recovering, it has limited bandwidth for client
requests. A short client-side timeout makes the client retry quickly
while keeping the ongoing connections open. Thus, etcd queues up lots
of requests and connections and takes a long time to clear them, which
finally causes the member health check to time out.
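As a sketch of the change, assuming a package-level
defaultRequestTimeout constant; the health-check call shown is
illustrative only:
```
package tester

import (
	"time"

	"golang.org/x/net/context"

	pb "github.com/coreos/etcd/etcdserver/etcdserverpb"
)

const defaultRequestTimeout = 5 * time.Second

func checkHealth(kvc pb.KVClient) error {
	// 5s instead of 1s: a recovering member is not hammered by rapid
	// client-side retries that pile up requests and connections.
	ctx, cancel := context.WithTimeout(context.Background(), defaultRequestTimeout)
	defer cancel()
	_, err := kvc.Range(ctx, &pb.RangeRequest{Key: []byte("health")})
	return err
}
```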
This is a more general problem of how etcd should handle large numbers
of simultaneous requests gracefully. We don't plan to address it at
the current stage.