The single watcher / group watcher distinction limited and
complicated watcher coalescing more than necessary. Reworked:
Each server watcher is represented by a WatchBroadcast, each
client "Watcher" attaches to some WatchBroadcast. WatchBroadcasts
hold all WatchBroadcast instances for a range. WatchRanges holds
all WatchBroadcasts for the proxy.
WatchProxyStreams represent a grpc watch stream between the proxy and
a client. When a client requests a new watcher through its grpc stream,
the ProxyStream allocates a Watcher and WatchRanges assigns it to
some WatchBroadcast based on its range.
Coalescing is done by WatchBroadcasts when it receives an update
notification from a WatchBroadcast.
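Roughly, the types relate as in this sketch; the fields are invented
for illustration and the real structs carry locks, revisions, and
response channels:

package proxysketch

// Watcher is a single client watcher attached to a broadcast.
type Watcher struct{ id int64 }

// WatchBroadcast represents one watcher on the etcd server; many
// client Watchers coalesce onto it.
type WatchBroadcast struct {
    watchers map[*Watcher]struct{}
}

// WatchBroadcasts holds all WatchBroadcast instances for one range.
type WatchBroadcasts struct {
    bcasts map[*WatchBroadcast]struct{}
}

// WatchRanges holds all WatchBroadcasts for the proxy, keyed by range.
type WatchRanges struct {
    bcasts map[string]*WatchBroadcasts
}

// WatchProxyStream represents a grpc watch stream between the proxy
// and a client; new Watchers are allocated here and assigned to a
// WatchBroadcast by WatchRanges.
type WatchProxyStream struct {
    ranges   *WatchRanges
    watchers map[int64]*Watcher
}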
Supports leader failure detection so watches on a bad member
can migrate to other members. Coincidentally, fixes #6303.
This changes the two immutable defaults into constants, which allows
packages embedding etcd to import them as consts. If they are variables,
compilation fails with "const initializer foo is not a constant".
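A tiny illustration of the difference, with hypothetical names standing
in for the real defaults:

package embedsketch

var defaultVar = "default-var"       // a package variable
const defaultConst = "default-const" // a package constant

// An embedding package can only build consts from consts:
const ok = defaultConst // fine
// const bad = defaultVar // compile error: "const initializer defaultVar is not a constant"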
Also remove all legacy sub-dependency packages from glide.yaml.
They were added when we migrated from godep; glide now handles
them automatically via the glide.lock file.
The 'glide vc --no-tests' flag removes the 'testify/assert' deps
in the v2 client. Until we deprecate the v2 tests, just copy the
necessary files back as a workaround.
Also remove the '--skip-tests' flag in case we add dependencies
in test files.
If we want to add an endpoint with a lease, we need this option.
For example:
resp, err := cli.Grant(context.TODO(), 5) // grant a 5-second lease
if err != nil {
    log.Fatal(err)
}
// Register the endpoint under the lease so it expires with it.
err = r.Update(context.TODO(), serviceName, naming.Update{Op: naming.Add, Addr: exposedAddr}, clientv3.WithLease(resp.ID))
if err != nil {
    log.Fatal(err)
}
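Since the endpoint is stored under the lease, it is deleted
automatically when the lease expires, so an instance that stops
refreshing its lease drops out of the naming records.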
This exists to prevent sending so many requests that the applier
falls behind Raft's proposal acceptance.
Based on recent benchmarks, etcd was able to process heavy workloads
(2 million writes with 1K concurrent clients).
The limit of 1000 is too conservative to test those workloads.
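A minimal sketch of the kind of backpressure check such a limit drives;
the fields, error, and gap value here are assumptions, not the actual
etcd code:

package backpressure

import "errors"

var errTooManyRequests = errors.New("etcdserver: too many requests")

// server is a stand-in for the etcd server; the index fields are
// assumptions for illustration.
type server struct {
    committed uint64 // latest raft commit index
    applied   uint64 // latest index applied to the store
}

// maxPendingGap is hypothetical; the commit only says the old limit
// of 1000 was too conservative.
const maxPendingGap = 5000

// checkRequestGap rejects new proposals once the applier trails the
// committed raft index by more than the allowed gap.
func (s *server) checkRequestGap() error {
    if s.committed-s.applied > maxPendingGap {
        return errTooManyRequests
    }
    return nil
}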
The functional tester sometimes experiences timeouts during the compaction
phase. Changed the timeout calculation to be based on the number of entries
created and deleted.
Fixes #6805
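One way to express that calculation, with made-up constants:

package tester

import "time"

// compactionTimeout scales the allowed compaction time with the amount
// of work the test generated; the per-entry cost and floor are
// hypothetical.
func compactionTimeout(created, deleted int) time.Duration {
    const perEntry = 100 * time.Microsecond
    const base = 5 * time.Second
    return base + time.Duration(created+deleted)*perEntry
}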
If MsgTimeoutNow arrived after a node was removed, the node could start
and win an election, then panic in becomeLeader (see
cockroachdb/cockroach#8535).
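A minimal sketch (not the actual raft fix) of the guard this implies:
a removed node has no entry for itself in the progress map, so it must
refuse to campaign even when it receives MsgTimeoutNow.

package raftsketch

// raft is a simplified stand-in for the raft state machine.
type raft struct {
    id  uint64
    prs map[uint64]struct{} // member IDs still in the configuration
}

// promotable reports whether this node may start an election.
func (r *raft) promotable() bool {
    _, ok := r.prs[r.id]
    return ok
}

func (r *raft) hup() {
    if !r.promotable() { // removed nodes ignore election triggers
        return
    }
    // ... start campaigning ...
}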