We should wrap the blocking function with a closure, and create the
goroutine first to execute the function. Otherwise the inner function
may block before the goroutine is created.
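A minimal sketch of that ordering (the function names here are made up, not
the actual etcd code): the goroutine is created first and the blocking call
is wrapped in a closure that it executes, so the caller can never block
before the goroutine exists.

```go
// Hypothetical sketch: runAsync creates the goroutine before anything can
// block, and the (possibly blocking) work runs inside the closure it
// executes. Calling blockingWork() inline before the `go` statement could
// block the caller, and the goroutine would never be created.
func runAsync(blockingWork func()) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done)
		blockingWork()
	}()
	return done
}
```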
We should open a real txn for applying txn requests. Otherwise the
intermediate state might be observed by readers.
This also fixes #3803: using the same consistent (raft) index for multiple
independent operations confuses consistentStore.
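A hedged sketch of the idea (the Backend and Txn interfaces below are
illustrative, not the real etcd types): every operation of a txn request is
applied inside one real transaction, so a reader sees either none of them or
all of them, and the request is accounted for once rather than as several
independent operations.

```go
// Illustrative only: these interfaces are assumptions, not the etcd backend API.
type Txn interface {
	Put(key, value []byte)
	Delete(key []byte)
	Commit() error
}

type Backend interface {
	BeginTxn() Txn
}

// applyTxn applies all operations of a txn request inside a single real
// transaction, so no intermediate state is observable before Commit.
func applyTxn(b Backend, ops []func(Txn)) error {
	txn := b.BeginTxn()
	for _, op := range ops {
		op(txn)
	}
	return txn.Commit()
}
```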
We need to be able to force an election (on one node) after creating a
new group (cockroachdb/cockroach#1384), but it is difficult to ensure
that our call to Campaign does not race with an election that may be
started by raft itself. A redundant call to Campaign should be a no-op
instead of a panic. (But the panic in becomeCandidate remains, because
we don't want to update the term or change the committed index in this
case)
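A rough sketch of the intended behavior (the node type and state handling
below are illustrative, not the raft package internals): a redundant
Campaign returns early, while becomeCandidate keeps its panic for genuinely
invalid transitions.

```go
type state int

const (
	stateFollower state = iota
	stateCandidate
	stateLeader
)

// node is a hypothetical stand-in for a raft node, not the raft package type.
type node struct{ st state }

func (n *node) becomeCandidate() {
	if n.st == stateLeader {
		// the original panic remains: a leader must never re-enter candidacy,
		// since that would bump the term and disturb the committed index
		panic("invalid transition [leader -> candidate]")
	}
	n.st = stateCandidate
}

// Campaign is a no-op if an election is already underway or already won,
// so a redundant call after group creation cannot crash the node.
func (n *node) Campaign() {
	if n.st == stateCandidate || n.st == stateLeader {
		return
	}
	n.becomeCandidate()
}
```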
Rather than copying in .proto files, use the same symlink
trick we use for the actual etcd build.
Also check early on for the exact version of protoc.
Extend the timeout from 1s to defaultRequestTimeout (5s).
The 1s timeout may put an unwanted burden on the target member. If the
member is busy recovering, it has limited bandwidth for client requests.
A short client-side timeout retries quickly while keeping the on-going
connections open, so etcd queues up lots of requests and connections and
takes a long time to clear them. This eventually causes the member health
check to time out.
How etcd should handle a large number of simultaneous requests gracefully
is a general problem that we don't plan to address at the current stage.
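For illustration, a health probe under the longer timeout could look like
the sketch below; the HTTP client, URL, and function are assumptions, and
only the defaultRequestTimeout value comes from the change itself.

```go
package health // hypothetical placement; the real change edits an existing call site

import (
	"net/http"
	"time"
)

const defaultRequestTimeout = 5 * time.Second

// memberHealthy probes a member with the 5s timeout instead of a hard-coded
// 1s, so a member that is busy recovering gets room to respond before we
// retry and pile up queued requests and connections.
func memberHealthy(url string) bool {
	c := &http.Client{Timeout: defaultRequestTimeout}
	resp, err := c.Get(url + "/health")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}
```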
This adds build constraints in order to pass memory-map flags to bolt.Options.
If the backend package passes the syscall.MAP_POPULATE flag, boltdb does
read-ahead, which speeds up reading the entire database and therefore makes
storage Restore faster. Benchmarks show that a 4GB database opens and loads
6x faster on SSD and 12x faster on HDD; a 2GB database loads 1.6x faster
with MAP_POPULATE on SSD.
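A minimal sketch of such a build-constrained file (bolt.Options.MmapFlags
and syscall.MAP_POPULATE are real; the file layout, package, and variable
name are assumptions about how it could be wired up):

```go
// +build linux

package backend

import (
	"syscall"

	"github.com/boltdb/bolt"
)

// boltOpenOptions is Linux-only: MAP_POPULATE asks the kernel to read ahead
// through the mmap'ed file, which speeds up reading the whole database
// during Restore. A sibling file without this constraint would leave the
// flag unset on other platforms.
var boltOpenOptions = &bolt.Options{
	MmapFlags: syscall.MAP_POPULATE,
}
```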
The primary goal of this doc is to confirm that the memory
consumption of watch is as expected. Each connection
consumes O(10kb) of memory. Each stream consumes O(10kb)
of memory. Each watching consumes < O(1kb) of memory.
So when you have a large number of watchings with a small
number of connections and streams, the average memory
consumption per watch will be O(1kb).
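As a back-of-the-envelope check (the workload counts below are made up; only
the per-object costs come from the doc):

```go
package main

import "fmt"

const (
	connBytes  = 10 * 1024 // O(10kb) per connection
	strmBytes  = 10 * 1024 // O(10kb) per stream
	watchBytes = 1 * 1024  // < O(1kb) per watching
)

func main() {
	conns, streams, watchings := 10, 100, 100000
	total := conns*connBytes + streams*strmBytes + watchings*watchBytes
	// total is roughly 100MB, so the average cost per watching stays near 1kb
	fmt.Printf("%.1f kb per watching\n", float64(total)/float64(watchings)/1024)
}
```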
Edit to replace a relative link (which won't work with that target) with
an absolute link.
Change heading 1 to title case.
Polish paragraph 3.
Fixes https://github.com/coreos/docs/issues/662
We have a structure called InternalRaftRequest. Shortening the function
name to processInternalRaftReq seems arbitrary and reduces readability,
so we just use the full name.
This fixes coreos#3647 by using a shorter proxy refresh interval whenever
no endpoints are found. The sleep command in the Procfile and in the proxy
documentation is deleted accordingly.
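A hedged sketch of the refresh behavior (the interval values, names, and
signature are assumptions, not the actual proxy code): when no endpoints are
found, the next refresh is scheduled much sooner, which replaces the sleep
that used to live in the Procfile.

```go
package proxy // hypothetical placement; only the retry logic is sketched

import "time"

// refreshLoop re-reads the endpoint list, retrying quickly while there is
// nothing to proxy to and backing off to the normal interval once endpoints
// appear. The interval values are illustrative.
func refreshLoop(fetch func() []string, update func([]string), stop <-chan struct{}) {
	const (
		normalInterval    = 30 * time.Second
		noneFoundInterval = time.Second
	)
	for {
		eps := fetch()
		update(eps)
		interval := normalInterval
		if len(eps) == 0 {
			interval = noneFoundInterval // no endpoints yet: retry sooner
		}
		select {
		case <-time.After(interval):
		case <-stop:
			return
		}
	}
}
```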