Merge pull request #9394 from jeis2497052/master
*: fix typos in markdown docs
Commit: abedaa31e1
@@ -26,7 +26,7 @@ Go OS/Arch: linux/amd64
 Bootstrap another machine, outside of the etcd cluster, and run the [`hey` HTTP benchmark tool](https://github.com/rakyll/hey) with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions](../../hack/benchmark/) for the patch and the steps to reproduce our procedures.
-The performance is calulated through results of 100 benchmark rounds.
+The performance is calculated through results of 100 benchmark rounds.
 ## Performance
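For a feel of what such a benchmark run does, here is a rough Go sketch (not the `hey` tool or the linked connection-reuse patch) that issues concurrent PUTs against one member's v2 keys endpoint while reusing a single `http.Client`, so TCP connections are shared. The endpoint, payload size, and request counts are placeholder values.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"sync"
	"time"
)

func main() {
	const (
		endpoint    = "http://10.0.0.1:2379/v2/keys/bench" // placeholder member address
		total       = 10000
		concurrency = 64
	)
	client := &http.Client{} // one shared client: keep-alive connections are reused

	body := url.Values{"value": {strings.Repeat("x", 64)}}.Encode()
	work := make(chan struct{}, total)
	for i := 0; i < total; i++ {
		work <- struct{}{}
	}
	close(work)

	start := time.Now()
	var wg sync.WaitGroup
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range work {
				req, err := http.NewRequest("PUT", endpoint, strings.NewReader(body))
				if err != nil {
					continue
				}
				req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
				resp, err := client.Do(req)
				if err != nil {
					continue
				}
				resp.Body.Close()
			}
		}()
	}
	wg.Wait()
	elapsed := time.Since(start)
	fmt.Printf("%d requests in %v (~%.0f req/s)\n", total, elapsed, float64(total)/elapsed.Seconds())
}
```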
@@ -38,7 +38,7 @@ Therefore, the permission checking logic should be added to the state machine of
 ### Authentication
-At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The reponse will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.
+At first, a client must create a gRPC connection only to authenticate its user ID and password. An etcd server will respond with an authentication reply. The response will be an authentication token on success or an error on failure. The client can use its authentication token to present its credentials to etcd when making API requests.
 The client connection used to request the authentication token is typically thrown away; it cannot carry the new token's credentials. This is because gRPC doesn't provide a way for adding per RPC credential after creation of the connection (calling `grpc.Dial()`). Therefore, a client cannot assign a token to its connection that is obtained through the connection. The client needs a new connection for using the token.
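The paragraph being corrected describes the v3 authentication flow. As a minimal sketch of how a client typically drives it, assuming the Go `clientv3` package (its import path varies by etcd release) and placeholder endpoint and credentials: the user name and password are supplied when the client is created, and the library performs the token exchange and attaches the token to later RPCs over a fresh connection.

```go
package main

import (
	"context"
	"time"

	"go.etcd.io/etcd/clientv3" // older releases use github.com/coreos/etcd/clientv3
)

func main() {
	// Username/Password drive the initial authentication described above; the
	// returned token is attached to subsequent requests by the client library.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder member address
		DialTimeout: 5 * time.Second,
		Username:    "root",             // placeholder user
		Password:    "example-password", // placeholder password
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// This request carries the token's credentials rather than the raw password.
	if _, err := cli.Get(context.Background(), "foo"); err != nil {
		panic(err)
	}
}
```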
@@ -4,7 +4,7 @@
 etcd provides stable, sustained high performance. Two factors define performance: latency and throughput. Latency is the time taken to complete an operation. Throughput is the total operations completed within some time period. Usually average latency increases as the overall throughput increases when etcd accepts concurrent client requests. In common cloud environments, like a standard `n-4` on Google Compute Engine (GCE) or a comparable machine type on AWS, a three member etcd cluster finishes a request in less than one millisecond under light load, and can complete more than 30,000 requests per second under heavy load.
-etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanant storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
+etcd uses the Raft consensus algorithm to replicate requests among members and reach agreement. Consensus performance, especially commit latency, is limited by two physical constraints: network IO latency and disk IO latency. The minimum time to finish an etcd request is the network Round Trip Time (RTT) between members, plus the time `fdatasync` requires to commit the data to permanent storage. The RTT within a datacenter may be as long as several hundred microseconds. A typical RTT within the United States is around 50ms, and can be as slow as 400ms between continents. The typical fdatasync latency for a spinning disk is about 10ms. For SSDs, the latency is often lower than 1ms. To increase throughput, etcd batches multiple requests together and submits them to Raft. This batching policy lets etcd attain high throughput despite heavy load.
 There are other sub-systems which impact the overall performance of etcd. Each serialized etcd request must run through etcd’s boltdb-backed MVCC storage engine, which usually takes tens of microseconds to finish. Periodically etcd incrementally snapshots its recently applied requests, merging them back with the previous on-disk snapshot. This process may lead to a latency spike. Although this is usually not a problem on SSDs, it may double the observed latency on HDD. Likewise, inflight compactions can impact etcd’s performance. Fortunately, the impact is often insignificant since the compaction is staggered so it does not compete for resources with regular requests. The RPC system, gRPC, gives etcd a well-defined, extensible API, but it also introduces additional latency, especially for local reads.
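As a back-of-envelope illustration of the arithmetic in the corrected paragraph (a sketch using the example figures from the text, not measured data), the per-request floor is roughly one RTT plus one `fdatasync`, which also shows why batching multiple requests into a single Raft round raises throughput.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Example figures from the text above; real values vary by deployment.
	rtt := 500 * time.Microsecond      // intra-datacenter round trip
	fdatasync := 10 * time.Millisecond // spinning disk; SSDs are often below 1ms

	perRequest := rtt + fdatasync
	sequential := float64(time.Second) / float64(perRequest)

	fmt.Printf("per-request floor: %v\n", perRequest)
	fmt.Printf("unbatched sequential writes: ~%.0f req/s\n", sequential)

	// If N requests share one Raft round, the same network and disk budget is
	// amortized across N operations (hypothetical batch size for illustration).
	const batch = 100
	fmt.Printf("with %d-request batches: ~%.0f req/s\n", batch, batch*sequential)
}
```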
@@ -31,7 +31,7 @@ Go OS/Arch: linux/amd64
 Bootstrap another machine, outside of the etcd cluster, and run the [`boom` HTTP benchmark tool][boom] with a connection reuse patch to send requests to each etcd cluster member. See the [benchmark instructions][hack] for the patch and the steps to reproduce our procedures.
-The performance is calulated through results of 100 benchmark rounds.
+The performance is calculated through results of 100 benchmark rounds.
 ## Performance
@@ -50,7 +50,7 @@ After extracting the contents of the tar file and running the install script, th
 A oneshot service which wipes all etcd2 data and restores a single-node cluster from the rclone backup. This is for restoring the **founding member** only.
 * `etcd2-join.service`
-A oneshot service which wipes all etcd2 data and re-joins the new cluster. This is for adding members **after** the **founding member** has succesfully established the new cluster via `etcd2-restore.service`
+A oneshot service which wipes all etcd2 data and re-joins the new cluster. This is for adding members **after** the **founding member** has successfully established the new cluster via `etcd2-restore.service`
 ## Recovery
@@ -1143,7 +1143,7 @@ RPC: AuthEnable/AuthDisable
 ### ROLE \<subcommand\>
-ROLE is used to specify differnt roles which can be assigned to etcd user(s).
+ROLE is used to specify different roles which can be assigned to etcd user(s).
 ### ROLE ADD \<role name\>
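The corrected line documents the `ROLE` subcommands of `etcdctl`. The same operations are exposed through the v3 Auth RPCs; as a minimal sketch, assuming the Go `clientv3` package and a placeholder role name:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3" // older releases use github.com/coreos/etcd/clientv3
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"}, // placeholder member address
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Rough equivalent of `etcdctl role add example-role`: issues the RoleAdd auth RPC.
	if _, err := cli.RoleAdd(context.Background(), "example-role"); err != nil {
		panic(err)
	}
	fmt.Println("created role example-role")
}
```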