A database snapshot can be as large as 5GB, so it is reasonable
to log before receiving it; otherwise the user might not know what
is happening and why etcd starts to use I/O intensively.
The issue is not caused by this code, but by reading the snapshot
from disk. etcd assumes the v3 kv snapshot fits in memory; if it
does not, etcd does not work well anyway.
This moves the code that creates the listener and roundTripper for raft
communication into the same place, and uses explicit functions to build them.
This prevents possible development errors in the future.
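A minimal sketch of that shape, with illustrative names (newPeerListener, newPeerRoundTripper) that are assumptions rather than the actual etcd functions:
```go
package rafthttpsketch

import (
	"crypto/tls"
	"net"
	"net/http"
	"time"
)

// newPeerListener builds the listener that accepts raft traffic from peers.
func newPeerListener(addr string, tlsCfg *tls.Config) (net.Listener, error) {
	l, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, err
	}
	if tlsCfg != nil {
		l = tls.NewListener(l, tlsCfg)
	}
	return l, nil
}

// newPeerRoundTripper builds the round tripper used to dial peers, keeping
// its dial and TLS settings next to the listener construction above.
func newPeerRoundTripper(tlsCfg *tls.Config, dialTimeout time.Duration) http.RoundTripper {
	return &http.Transport{
		DialContext:     (&net.Dialer{Timeout: dialTimeout}).DialContext,
		TLSClientConfig: tlsCfg,
	}
}
```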
rafthttp logs the same message repeatedly when a burst of message drops
occurs, which becomes log spam.
Use MergeLogger to merge these log lines in that case.
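The merging idea, as a simplified sketch (this is not etcd's actual MergeLogger API; the names and flush period are illustrative):
```go
package mergelogsketch

import (
	"log"
	"sync"
	"time"
)

// mergeLogger collapses identical messages seen within one period into a
// single line with a repeat count, so a burst of message-drop logs does not
// spam the output.
type mergeLogger struct {
	mu     sync.Mutex
	counts map[string]int
	period time.Duration
}

func newMergeLogger(period time.Duration) *mergeLogger {
	l := &mergeLogger{counts: make(map[string]int), period: period}
	go l.flushLoop()
	return l
}

// MergeError records one occurrence of msg instead of printing it directly.
func (l *mergeLogger) MergeError(msg string) {
	l.mu.Lock()
	l.counts[msg]++
	l.mu.Unlock()
}

// flushLoop prints each distinct message once per period with its count.
func (l *mergeLogger) flushLoop() {
	for range time.Tick(l.period) {
		l.mu.Lock()
		for msg, n := range l.counts {
			log.Printf("%s (repeated %d times in the last %v)", msg, n, l.period)
			delete(l.counts, msg)
		}
		l.mu.Unlock()
	}
}
```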
The logging mechanism is verbose, so it is removed from peerStatus.
We would like to see the status changes of connections with peers,
and the one error that leads to a deactivation.
There is no need to print out every non-repeated error.
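A sketch of the reduced logging: only flips between active and inactive are printed, together with the error that caused the deactivation. The names are illustrative, not the exact etcd code:
```go
package rafthttpsketch

import (
	"log"
	"sync"
	"time"
)

// peerStatus tracks whether the connection to a peer is considered active.
type peerStatus struct {
	mu     sync.Mutex
	id     string
	active bool
	since  time.Time
}

// activate logs only when the status flips from inactive to active.
func (s *peerStatus) activate() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if !s.active {
		log.Printf("peer %s became active", s.id)
		s.active = true
		s.since = time.Now()
	}
}

// deactivate logs the single error that caused the flip to inactive;
// further errors while already inactive are not printed.
func (s *peerStatus) deactivate(cause error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.active {
		log.Printf("peer %s became inactive: %v", s.id, cause)
		s.active = false
		s.since = time.Time{}
	}
}
```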
This pairs with remote timeout listeners.
etcd uses timeout listeners and times out accepted connections
when there is no activity, so idle connections may time out easily.
Because the timeout transport doesn't reuse connections, it avoids
using a timed-out connection.
This fixes the problem that etcd fails to get the version of its peers.
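A sketch of the transport side of this pairing; etcd has its own timeout transport helper, so the names and wiring here are illustrative assumptions:
```go
package transportsketch

import (
	"net"
	"net/http"
	"time"
)

// newTimeoutTransport returns a transport whose connections enforce per-I/O
// deadlines. Keep-alives are disabled so that a connection that idled out on
// the remote timeout listener is never reused for a later request.
func newTimeoutTransport(dialTimeout, rwTimeout time.Duration) *http.Transport {
	return &http.Transport{
		Dial: func(network, addr string) (net.Conn, error) {
			c, err := net.DialTimeout(network, addr, dialTimeout)
			if err != nil {
				return nil, err
			}
			return &timeoutConn{Conn: c, timeout: rwTimeout}, nil
		},
		// every request gets a fresh connection instead of a possibly
		// timed-out idle one
		DisableKeepAlives: true,
	}
}

// timeoutConn resets a deadline before each read and write.
type timeoutConn struct {
	net.Conn
	timeout time.Duration
}

func (c *timeoutConn) Read(b []byte) (int, error) {
	if err := c.SetReadDeadline(time.Now().Add(c.timeout)); err != nil {
		return 0, err
	}
	return c.Conn.Read(b)
}

func (c *timeoutConn) Write(b []byte) (int, error) {
	if err := c.SetWriteDeadline(time.Now().Add(c.timeout)); err != nil {
		return 0, err
	}
	return c.Conn.Write(b)
}
```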
In rafthttp, when making a request to some endpoint, it may receive a
response with an unexpected status code or header. This indicates that the
endpoint does not function correctly, so it should mark the endpoint unreachable.
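A sketch of that check; the reporter interface and function name are assumptions for illustration only:
```go
package rafthttpsketch

import (
	"fmt"
	"net/http"
	"net/url"
)

// unreachableReporter stands in for whatever component tracks which peer
// URLs are currently usable (a URL picker, for example).
type unreachableReporter interface {
	unreachable(u url.URL)
}

// checkResponse marks the endpoint unreachable when the response carries an
// unexpected status code, since that means the endpoint is not behaving as a
// rafthttp peer.
func checkResponse(resp *http.Response, u url.URL, r unreachableReporter) error {
	switch resp.StatusCode {
	case http.StatusOK, http.StatusNoContent:
		return nil
	default:
		r.unreachable(u)
		return fmt.Errorf("unexpected http status %s from %s", resp.Status, u.String())
	}
}
```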
pipeline.handle is a long-lived loop and should continue to receive the
next message to send out when the current message fails to send. So it
should `continue` instead of `return` here.
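A stripped-down sketch of the loop shape, with illustrative fields and helpers:
```go
package rafthttpsketch

import "log"

type raftMessage struct{ data []byte }

// pipeline is a simplified stand-in for rafthttp's pipeline; only the pieces
// needed to show the handle loop are defined here.
type pipeline struct {
	msgc  chan raftMessage
	stopc chan struct{}
	post  func(raftMessage) error // sends one message to the peer
}

// handle is a long-lived sender loop. When posting a message fails, it logs
// the error and continues with the next message; returning here would stop
// the pipeline and silently drop every later message.
func (p *pipeline) handle() {
	for {
		select {
		case m := <-p.msgc:
			if err := p.post(m); err != nil {
				log.Printf("failed to post message: %v", err)
				continue
			}
		case <-p.stopc:
			return
		}
	}
}
```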
When the snapshot store requests a raft snapshot from the etcdserver apply
loop, it may block on the channel for some time, or wait some time for the
KV to snapshot. This is unexpected, because the raft state machine should
never be blocked. Even worse, this blocking can lead to a deadlock:
1. the raft state machine waits on getting a snapshot from raft memory storage
2. raft memory storage waits for the snapshot store to get a snapshot
3. the snapshot store requests a raft snapshot from the apply loop
4. the apply loop is applying entries, and waits for the raftNode loop to
finish sending messages
5. the raftNode loop waits for the peer loop in Transport to send out messages
6. the peer loop in Transport waits for the raft state machine to process messages
Fix it by changing getSnap to create the snapshot asynchronously.
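A sketch of the asynchronous shape of getSnap; the names, the retry error, and the signalling channel are illustrative assumptions, not etcd's exact code:
```go
package snapsketch

import (
	"errors"
	"sync"
)

// errSnapshotTemporarilyUnavailable follows the raft storage convention of
// telling the state machine to retry later instead of blocking it.
var errSnapshotTemporarilyUnavailable = errors.New("snapshot is temporarily unavailable")

type snapshot struct{ data []byte }

// snapshotStore asks the apply loop for a snapshot without ever blocking the
// caller: the first getSnap kicks off creation and returns "temporarily
// unavailable"; later calls return the snapshot once it is ready.
type snapshotStore struct {
	mu        sync.Mutex
	snap      *snapshot
	creating  bool
	createReq chan struct{} // signals the apply loop to create a snapshot
}

func newSnapshotStore() *snapshotStore {
	return &snapshotStore{createReq: make(chan struct{}, 1)}
}

func (s *snapshotStore) getSnap() (*snapshot, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.snap != nil {
		return s.snap, nil
	}
	if !s.creating {
		s.creating = true
		// non-blocking request to the apply loop, so the raft state machine
		// never waits here
		select {
		case s.createReq <- struct{}{}:
		default:
		}
	}
	return nil, errSnapshotTemporarilyUnavailable
}

// setSnap is called by the apply loop once the snapshot has been created.
func (s *snapshotStore) setSnap(snap *snapshot) {
	s.mu.Lock()
	s.snap = snap
	s.creating = false
	s.mu.Unlock()
}
```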
streamTypeMsgApp is only used by etcd 2.0. etcd 2.3 should not talk to
etcd 2.0, either sending or receiving requests. So I deprecate streamTypeMsgApp
and remove it and its related code from the rafthttp package.
Term updating is only used by streamTypeMsgApp, so it is removed too.
Use snapshotSender to send v3 snapshot messages. It puts the raft snapshot
message and the v3 snapshot into the request body, then sends it to the target peer.
When it receives http.StatusNoContent, it knows the message has been
received and processed successfully.
On the receiving side, snapHandler saves the v3 snapshot, then processes the
raft snapshot message and responds with http.StatusNoContent.
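A sketch of both sides of this exchange; the URL path, the body layout, and the helper signatures are assumptions for illustration:
```go
package snapsendsketch

import (
	"fmt"
	"io"
	"net/http"
)

// send posts the merged body (raft snapshot message followed by the v3
// snapshot data) to the peer, and treats http.StatusNoContent as proof that
// the peer received and processed the whole message.
func send(client *http.Client, peerURL string, merged io.Reader) error {
	req, err := http.NewRequest("POST", peerURL+"/raft/snapshot", merged)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusNoContent {
		return fmt.Errorf("unexpected http status %s when sending snapshot", resp.Status)
	}
	return nil
}

// snapHandler is the receiving side: save the v3 snapshot body, hand the
// raft snapshot message to the raft state machine, and only then reply with
// 204 so the sender knows processing succeeded.
func snapHandler(save func(io.Reader) error, process func() error) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if err := save(r.Body); err != nil {
			http.Error(w, "failed to save snapshot", http.StatusInternalServerError)
			return
		}
		if err := process(); err != nil {
			http.Error(w, "failed to process snapshot message", http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	})
}
```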
rafthttp has different requirements for the connections created by the
transport for different usages, which is hard to satisfy when it is handed a
single http.RoundTripper. Pass the data needed to build transports into the
package instead, and let rafthttp build its own transports.
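One way this can look, as a sketch: rafthttp receives the raw inputs (TLS config, dial timeout) and builds differently tuned round trippers for streams and pipelines. The struct and method names are assumptions:
```go
package rafthttpsketch

import (
	"crypto/tls"
	"net"
	"net/http"
	"time"
)

// TransportInfo carries the data rafthttp needs to build its own
// http.RoundTrippers instead of receiving one pre-built RoundTripper.
type TransportInfo struct {
	TLSConfig   *tls.Config
	DialTimeout time.Duration
}

// streamRoundTripper serves long-lived streaming connections, so it sets no
// per-request timeout and keeps connections alive.
func (ti TransportInfo) streamRoundTripper() http.RoundTripper {
	return &http.Transport{
		DialContext:     (&net.Dialer{Timeout: ti.DialTimeout}).DialContext,
		TLSClientConfig: ti.TLSConfig,
	}
}

// pipelineRoundTripper serves short requests; it bounds how long a response
// may take and disables keep-alives so stale connections are never reused.
func (ti TransportInfo) pipelineRoundTripper(requestTimeout time.Duration) http.RoundTripper {
	return &http.Transport{
		DialContext:           (&net.Dialer{Timeout: ti.DialTimeout}).DialContext,
		TLSClientConfig:       ti.TLSConfig,
		ResponseHeaderTimeout: requestTimeout,
		DisableKeepAlives:     true,
	}
}
```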
It identifies request timeout errors that are possibly caused by a lost
connection, and prints out a better log for the user to understand.
It handles two cases:
1. the leader cannot connect to the majority of the cluster.
2. the connection between a follower and the leader has been down for a while,
so it loses proposals.
log format:
```
20:04:19 etcd3 | 2015-08-25 20:04:19.368126 E | etcdhttp: etcdserver:
request timed out, possibly due to connection lost
20:04:19 etcd3 | 2015-08-25 20:04:19.368227 E | etcdhttp: etcdserver:
request timed out, possibly due to connection lost
```
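A sketch of the error distinction; the exact error values and how etcdserver decides connection loss may differ, this only illustrates the classification:
```go
package etcdserversketch

import "errors"

// Two timeout flavours: a plain timeout, and one that is likely caused by a
// lost peer connection, which deserves the more specific log line above.
var (
	errTimeout                    = errors.New("etcdserver: request timed out")
	errTimeoutDueToConnectionLost = errors.New("etcdserver: request timed out, possibly due to connection lost")
)

// classifyTimeout picks the more specific error when the server observed the
// connection to the leader (or to a majority of peers) being inactive around
// the time the request timed out.
func classifyTimeout(peerConnLost bool) error {
	if peerConnLost {
		return errTimeoutDueToConnectionLost
	}
	return errTimeout
}
```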
The original workflow may fail to cancel if stop() cancels the finished
request just before dial() assigns a new cancel. This commit checks the
streamReader status before setting cancel to avoid this problem.
It has been tested on Travis 300 times: Go 1.5 always works well, while
Go 1.4 failed to stop once.
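A sketch of the check, with illustrative names; the point is that cancel is only installed under the lock after confirming the reader has not already been stopped:
```go
package streamsketch

import (
	"errors"
	"net/http"
	"sync"
)

var errStopped = errors.New("stream reader is stopped")

// streamReader keeps the cancel function for the in-flight request; stop()
// and dial() coordinate through the mutex and the closed flag.
type streamReader struct {
	mu     sync.Mutex
	closed bool
	cancel func()
}

// dial installs the cancel for the new request only if stop() has not run
// yet, so stop() can never end up cancelling an already-finished request
// while leaving the new one uncancelled.
func (r *streamReader) dial(tr *http.Transport, req *http.Request) (*http.Response, error) {
	r.mu.Lock()
	if r.closed {
		r.mu.Unlock()
		return nil, errStopped
	}
	r.cancel = func() { tr.CancelRequest(req) }
	r.mu.Unlock()

	return tr.RoundTrip(req)
}

// stop marks the reader closed and cancels whatever request is in flight.
func (r *streamReader) stop() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.closed = true
	if r.cancel != nil {
		r.cancel()
		r.cancel = nil
	}
}
```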