When adding a new member, the etcdserver generates the ID from the current time
and the given peerURLs. Including the time adds uniqueness, since a node with the
same peerURLs should be able to be added and then removed several times.
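A minimal sketch of that derivation, assuming a hypothetical `computeMemberID` helper; the real generation may differ in detail:

```go
package main

import (
	"crypto/sha1"
	"encoding/binary"
	"fmt"
	"sort"
	"time"
)

// computeMemberID hashes the sorted peer URLs together with the
// current time, so re-adding a member with identical peer URLs
// after a removal still yields a fresh ID.
func computeMemberID(peerURLs []string, now time.Time) uint64 {
	sort.Strings(peerURLs)
	h := sha1.New()
	for _, u := range peerURLs {
		h.Write([]byte(u))
	}
	h.Write([]byte(now.String()))
	sum := h.Sum(nil)
	return binary.BigEndian.Uint64(sum[:8])
}

func main() {
	id := computeMemberID([]string{"http://10.0.0.1:2380"}, time.Now())
	fmt.Printf("member ID: %x\n", id)
}
```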
This adds the remaining two stats endpoints: `/v2/stats/self`, for
various statistics on the EtcdServer, and `/v2/stats/leader`, for
statistics on a leader's followers.
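A minimal sketch of how the two endpoints might be wired up with `net/http`; the handler bodies and payload shapes are illustrative, not the actual etcdserver API:

```go
package main

import (
	"encoding/json"
	"net/http"
)

type selfStats struct {
	Name string `json:"name"`
	ID   string `json:"id"`
}

func main() {
	mux := http.NewServeMux()
	// /v2/stats/self: statistics about this server.
	mux.HandleFunc("/v2/stats/self", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(selfStats{Name: "node1", ID: "1234abcd"})
	})
	// /v2/stats/leader: per-follower statistics, served by the leader.
	mux.HandleFunc("/v2/stats/leader", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]interface{}{"followers": map[string]interface{}{}})
	})
	http.ListenAndServe(":2379", mux)
}
```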
Most of the stats code is copied across from 0.4.x, updated where
necessary to integrate with the new decoupling of raft from
transport.
This does not satisfactorily resolve the question of name vs ID. In the
old world, names were unique in the cluster and transmitted over the
wire, so they could be used safely in all statistics. In the new world,
a given EtcdServer only knows its own name, and it is instead IDs that
are communicated among the cluster members. Hence in most places here we
simply substitute a string-encoded ID for the name, retaining the actual
given name of the EtcdServer only where possible.
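A minimal sketch of that substitution, with a hypothetical helper name: a raft ID rendered as a string wherever a name used to appear.

```go
package main

import (
	"fmt"
	"strconv"
)

// idAsName returns the string-encoded form of a member ID, used in
// stats where only the ID, not the human-given name, is known.
func idAsName(id uint64) string {
	return strconv.FormatUint(id, 16)
}

func main() {
	fmt.Println(idAsName(0xcafe1234)) // prints "cafe1234"
}
```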
It's unclear why this timeout is exposed as configurable, and the
`-timeout` flag does not exist in 0.4.x, so for now, remove the
flag until we have evidence that it is needed.
This adds a StartIndex field to the Watcher interface, which represents
the Etcd-Index at which the Watcher is created.
This also refactors the HTTP tests to use a table for most handleWatch tests.
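A minimal sketch of the extended interface; the other methods shown are assumptions, and only StartIndex is the addition described here:

```go
package store

// Event is a placeholder standing in for store.Event.
type Event struct{}

type Watcher interface {
	// EventChan delivers events observed by this watcher.
	EventChan() chan *Event
	// StartIndex is the Etcd-Index at which the watcher was created.
	StartIndex() uint64
	// Remove detaches the watcher from the store.
	Remove()
}
```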
This introduces two new concepts: the cluster and the member.
Members are logical etcd instances that have a name, raft ID, and a list
of peer and client addresses.
A cluster is made up of a list of members.
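A minimal sketch of the two concepts as Go types; the field names are illustrative rather than the exact etcdserver definitions:

```go
package main

import "fmt"

// Member is a logical etcd instance.
type Member struct {
	Name       string   // human-given name
	ID         uint64   // raft ID
	PeerURLs   []string // addresses used for raft traffic
	ClientURLs []string // addresses used by clients
}

// Cluster is the set of members that make up an etcd cluster.
type Cluster struct {
	Members []Member
}

func main() {
	c := Cluster{Members: []Member{{
		Name:     "node1",
		ID:       0x1,
		PeerURLs: []string{"http://10.0.0.1:2380"},
	}}}
	fmt.Printf("%+v\n", c)
}
```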
This adds an EtcdIndex field to store.Event and uses that as the header
instead of the node's modifiedIndex. To facilitate this in a non-racy
way, we set the EtcdIndex while holding the lock.
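A minimal sketch of stamping the event while the lock is held, so the index and the event stay consistent; the names are illustrative, not the exact store internals:

```go
package main

import "sync"

type Event struct {
	EtcdIndex     uint64
	ModifiedIndex uint64
}

type store struct {
	mu           sync.Mutex
	currentIndex uint64
}

// set bumps the index and stamps the event with EtcdIndex before
// releasing the lock, so no concurrent write can slip in between
// the node update and the header index.
func (s *store) set() *Event {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.currentIndex++
	e := &Event{ModifiedIndex: s.currentIndex}
	e.EtcdIndex = s.currentIndex
	return e
}

func main() {
	s := &store{}
	_ = s.set()
}
```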