Documentation: clear out some TODOs
commit 11fdf2dd18
parent b4162f8a45
@@ -2,7 +2,7 @@
 ## System requirements

-TODO
+The etcd performance benchmarks run etcd on 8 vCPU, 16GB RAM, 50GB SSD GCE instances, but any relatively modern machine with low latency storage and a few gigabytes of memory should suffice for most use cases. Applications with large v2 data stores will require more memory than a large v3 data store since data is kept in anonymous memory instead of memory mapped from a file. For running etcd on a cloud provider, we suggest at least a medium instance on AWS or a standard-1 instance on GCE.

 ## Download the pre-built binary
@@ -2,9 +2,9 @@
 etcd is designed to withstand machine failures. An etcd cluster automatically recovers from temporary failures (e.g., machine reboots) and tolerates up to *(N-1)/2* permanent failures for a cluster of N members. When a member permanently fails, whether due to hardware failure or disk corruption, it loses access to the cluster. If the cluster permanently loses more than *(N-1)/2* members then it disastrously fails, irrevocably losing quorum. Once quorum is lost, the cluster cannot reach consensus and therefore cannot continue accepting updates.

-To recover from disastrous failure, etcd provides snapshot and restore facilities to recreate the cluster without data loss.
+To recover from disastrous failure, etcd v3 provides snapshot and restore facilities to recreate the cluster without v3 key data loss. To recover v2 keys, refer to the [v2 admin guide][v2_recover].

-TODO(xiangli): add note to clarify this only recovers for the kv store of etcd3.
+[v2_recover]: ../v2/admin_guide.md#disaster-recovery

 ### Snapshotting the keyspace
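A minimal sketch of the v3 snapshot and restore workflow the new text refers to, assuming a single member listening on localhost; the endpoint, file name, member name, and URLs below are illustrative, not part of the commit:

```sh
# Save a point-in-time snapshot of the v3 keyspace from a live member.
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 snapshot save backup.db

# Restore the snapshot into a fresh data directory for a new one-member cluster;
# the restored member is then started against that data directory.
ETCDCTL_API=3 etcdctl snapshot restore backup.db \
  --name m1 \
  --data-dir m1.etcd \
  --initial-cluster m1=http://127.0.0.1:2380 \
  --initial-advertise-peer-urls http://127.0.0.1:2380
```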
@@ -48,9 +48,7 @@ All changes to the cluster are done one at a time:
 * To decrease from 5 to 3, make two remove operations

 All of these examples will use the `etcdctl` command line tool that ships with etcd.
-To change membership without `etcdctl`, use the [members API][member-api].
-
-TODO: v3 member API documentation
+To change membership without `etcdctl`, use the [v2 HTTP members API][member-api] or the [v3 gRPC members API][member-api-grpc].
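A brief sketch of one such one-at-a-time change with `etcdctl` and the v2 HTTP members API, assuming a local endpoint; the member ID shown is made up for illustration:

```sh
# List current members to find the hex ID of the member to remove.
ETCDCTL_API=3 etcdctl member list

# Remove one member at a time; to shrink from 5 to 3, run a second remove afterwards.
ETCDCTL_API=3 etcdctl member remove 8e9e05c52164694d

# The same membership data is also exposed by the v2 HTTP members API:
curl http://127.0.0.1:2379/v2/members
```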

 ### Update a member
@@ -105,7 +103,7 @@ It is safe to remove the leader, however the cluster will be inactive while a ne
 Adding a member is a two-step process:

-* Add the new member to the cluster via the [members API][member-api] or the `etcdctl member add` command.
+* Add the new member to the cluster via the [HTTP members API][member-api], the [gRPC members API][member-api-grpc], or the `etcdctl member add` command.
 * Start the new member with the new cluster configuration, including a list of the updated members (existing members + the new member).

 Using `etcdctl`, let's add the new member to the cluster by specifying its [name][conf-name] and [advertised peer URLs][conf-adv-peer]:
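A sketch of that two-step flow, assuming an existing three-member cluster infra0–infra2 gaining a fourth member infra3; all names and URLs are illustrative:

```sh
# Step 1: announce the new member to the running cluster.
ETCDCTL_API=3 etcdctl member add infra3 --peer-urls=http://10.0.1.13:2380

# Step 2: start the new member with the full, updated member list.
etcd --name infra3 \
  --initial-cluster "infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380,infra3=http://10.0.1.13:2380" \
  --initial-cluster-state existing \
  --initial-advertise-peer-urls http://10.0.1.13:2380 \
  --listen-peer-urls http://10.0.1.13:2380
```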
@@ -181,6 +179,7 @@ It is recommended to enable this option. However, it is disabled by default beca
 [fault tolerance table]: ../v2/admin_guide.md#fault-tolerance-table
 [majority failure]: #restart-cluster-from-majority-failure
 [member-api]: ../v2/members_api.md
+[member-api-grpc]: ../dev-guide/api_reference_v3.md#service-cluster-etcdserveretcdserverpbrpcproto
 [member migration]: ../v2/admin_guide.md#member-migration
 [remove member]: #remove-a-member
 [runtime-reconf]: runtime-reconf-design.md