Documentation: cleanup and fix some typo
Signed-off-by: Hui Kang <kangh@us.ibm.com>
Commit 663f8835cf (parent b7cf080e2c)
@@ -430,9 +430,9 @@ Here is the command to keep the same lease alive:
```bash
$ etcdctl lease keep-alive 32695410dcc0ca06
-lease 32695410dcc0ca06 keepalived with TTL(100)
-lease 32695410dcc0ca06 keepalived with TTL(100)
-lease 32695410dcc0ca06 keepalived with TTL(100)
+lease 32695410dcc0ca06 keepalived with TTL(10)
+lease 32695410dcc0ca06 keepalived with TTL(10)
+lease 32695410dcc0ca06 keepalived with TTL(10)
...
```
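
For readers following along in Go rather than with `etcdctl`, here is a minimal sketch of the same keep-alive flow using the `clientv3` package. The endpoint, the 10-second TTL, and the import path (which differs between etcd releases) are assumptions, not part of the original example.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Hypothetical endpoint; point this at the local cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Grant a 10-second lease, mirroring the TTL(10) output above.
	lease, err := cli.Grant(context.Background(), 10)
	if err != nil {
		log.Fatal(err)
	}

	// KeepAlive refreshes the lease periodically; each response reports the TTL
	// the lease was refreshed to, much like the etcdctl output shown above.
	ch, err := cli.KeepAlive(context.Background(), lease.ID)
	if err != nil {
		log.Fatal(err)
	}
	for resp := range ch {
		fmt.Printf("lease %016x keepalived with TTL(%d)\n", resp.ID, resp.TTL)
	}
}
```

If the keep-alive loop stops (for example, the program exits or the context is canceled), the lease simply expires after its TTL and any attached keys are deleted.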
@@ -472,4 +472,3 @@ lease 694d5765fc71500b granted with TTL(500s), remaining(132s), attached keys([zoo2 zoo1]
# if the lease has expired or does not exist it will give the below response:
Error: etcdserver: requested lease not found
```
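
The same check can be made from Go with the lease `TimeToLive` call. This is only a sketch: it assumes a `*clientv3.Client` named `cli` (built as in the earlier sketch) and a `leaseID` obtained from a previous `Grant`; depending on the etcd version, an expired or unknown lease is reported either as a remaining TTL of -1 or as the "requested lease not found" error shown above.

```go
package example

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

// checkLease prints the remaining TTL and attached keys of a lease, similar to
// `etcdctl lease timetolive --keys`.
func checkLease(cli *clientv3.Client, leaseID clientv3.LeaseID) error {
	resp, err := cli.TimeToLive(context.Background(), leaseID, clientv3.WithAttachedKeys())
	if err != nil {
		// Older servers surface an expired or unknown lease as an error,
		// e.g. "etcdserver: requested lease not found".
		return err
	}
	fmt.Printf("lease %016x granted with TTL(%ds), remaining(%ds), attached keys(%q)\n",
		resp.ID, resp.GrantedTTL, resp.TTL, resp.Keys)
	return nil
}
```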
@@ -47,7 +47,7 @@ message ResponseHeader {

An application may read the Cluster_ID (Member_ID) field to ensure it is communicating with the intended cluster (member).

-Applications can use the `Revision` to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make time `travel query` and wishes to know the latest revision at the time of the request.
+Applications can use the `Revision` to know the latest revision of the key-value store. This is especially useful when applications specify a historical revision to make a `time travel query` and wish to know the latest revision at the time of the request.

Applications can use `Raft_Term` to detect when the cluster completes a new leader election.
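
As an illustration of the fields described above, here is a minimal Go sketch that reads the header returned with an ordinary `Get`. The endpoint and key are hypothetical; the header fields themselves (`ClusterId`, `MemberId`, `Revision`, `RaftTerm`) are the ones defined in `ResponseHeader`.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // hypothetical endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	resp, err := cli.Get(context.Background(), "foo") // "foo" is a placeholder key
	if err != nil {
		log.Fatal(err)
	}

	// Every etcd response carries the same header; applications can use it to
	// verify which cluster and member they are talking to, track the latest
	// revision, and detect leader elections via the Raft term.
	h := resp.Header
	fmt.Printf("cluster_id=%x member_id=%x revision=%d raft_term=%d\n",
		h.ClusterId, h.MemberId, h.Revision, h.RaftTerm)
}
```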
@@ -2,15 +2,15 @@

etcd is designed to reliably store infrequently updated data and provide reliable watch queries. etcd exposes previous versions of key-value pairs to support inexpensive snapshots and watch history events (“time travel queries”). A persistent, multi-version, concurrency-control data model is a good fit for these use cases.

-etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generates a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.
+etcd stores data in a multiversion [persistent][persistent-ds] key-value store. The persistent key-value store preserves the previous version of a key-value pair when its value is superseded with new data. The key-value store is effectively immutable; its operations do not update the structure in-place, but instead always generate a new updated structure. All past versions of keys are still accessible and watchable after modification. To prevent the data store from growing indefinitely over time and from maintaining old versions, the store may be compacted to shed the oldest versions of superseded data.

### Logical view

The store’s logical view is a flat binary key space. The key space has a lexically sorted index on byte string keys so range queries are inexpensive.

-The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to recover space, revisions before the compact revision will be removed.
+The key space maintains multiple revisions. Each atomic mutative operation (e.g., a transaction operation may contain multiple operations) creates a new revision on the key space. All data held by previous revisions remains unchanged. Old versions of key can still be accessed through previous revisions. Likewise, revisions are indexed as well; ranging over revisions with watchers is efficient. If the store is compacted to save space, revisions before the compact revision will be removed.

-A key’s lifetime spans a generation. Each key may have one or multiple generations. Creating a key increments the generation of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key’s current generation. Each modification of a key creates a new version of the key. Once a compaction happens, any generation ended before the given revision will be removed and values set before the compaction revision except the latest one will be removed.
+A key’s lifetime spans a generation, denoted by its version. Each key may have one or multiple generations. Creating a key increments the version of that key, starting at 1 if the key never existed. Deleting a key generates a key tombstone, concluding the key’s current generation by resetting its version. Each modification of a key creates a new generation of the key and increases its version. Once a compaction happens, any version that ended before the given revision will be removed, and values set before the compaction revision, except the latest one, will be removed.
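
A small Go sketch may make revisions, versions, and compaction concrete. It assumes a running cluster, a `*clientv3.Client` named `cli` built as in the earlier sketches, and a hypothetical key `foo`; it illustrates the model described above and is not part of the original document.

```go
package example

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

func revisionsExample(cli *clientv3.Client) error {
	ctx := context.Background()

	// Each atomic mutation creates a new revision of the whole key space.
	first, err := cli.Put(ctx, "foo", "v1")
	if err != nil {
		return err
	}
	if _, err := cli.Put(ctx, "foo", "v2"); err != nil {
		return err
	}

	// "Time travel query": read foo as it was at the revision created by the
	// first Put. The returned key-value also carries its version, which counts
	// modifications within the key's current generation.
	old, err := cli.Get(ctx, "foo", clientv3.WithRev(first.Header.Revision))
	if err != nil {
		return err
	}
	fmt.Printf("foo@%d = %s (version %d)\n",
		first.Header.Revision, old.Kvs[0].Value, old.Kvs[0].Version)

	// Compact history older than the current revision; after this, reads at
	// earlier revisions fail because those revisions have been removed.
	latest, err := cli.Get(ctx, "foo")
	if err != nil {
		return err
	}
	_, err = cli.Compact(ctx, latest.Header.Revision)
	return err
}
```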
### Physical view
@@ -2,7 +2,7 @@

etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.

-Reconfiguration requests can only be processed when a majority of cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not able to make progress and need to [restart from majority failure][majority failure].
+Reconfiguration requests can only be processed when a majority of cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and need to [restart from majority failure][majority failure].

To better understand the design behind runtime reconfiguration, please read [the runtime reconfiguration document][runtime-reconf].
@@ -41,7 +41,7 @@ Before making any change, a simple majority (quorum) of etcd members must be ava
All changes to the cluster must be done sequentially:

* To update a single member peerURLs, issue an update operation
-* To replace a healthy single member, add a new member then remove the old member
+* To replace a healthy single member, remove the old member then add a new member
* To increase from 3 to 5 members, issue two add operations
* To decrease from 5 to 3, issue two remove operations
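
To make the replace-a-member step in the list above concrete, here is a minimal Go sketch using the `clientv3` membership API. The member ID and peer URL are hypothetical, and the new member still has to be started afterwards with `--initial-cluster-state=existing`, as described later in this document.

```go
package example

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

// replaceMember removes the old member first, then adds its replacement, so the
// cluster size (and therefore the quorum requirement) never grows during the swap.
func replaceMember(cli *clientv3.Client, oldID uint64, newPeerURL string) error {
	ctx := context.Background()

	if _, err := cli.MemberRemove(ctx, oldID); err != nil {
		return err
	}

	resp, err := cli.MemberAdd(ctx, []string{newPeerURL})
	if err != nil {
		return err
	}
	fmt.Printf("added member %x; start it with --initial-cluster-state=existing\n", resp.Member.ID)
	return nil
}
```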
@@ -57,7 +57,7 @@ To update the advertise client URLs of a member, simply restart that member with

To update the advertise peer URLs of a member, first update it explicitly via member command and then restart the member. The additional action is required since updating peer URLs changes the cluster wide configuration and can affect the health of the etcd cluster.

-To update the peer URLs, first find the target member's ID. To list all members with `etcdctl`:
+To update the advertise peer URLs, first find the target member's ID. To list all members with `etcdctl`:

```sh
$ etcdctl member list
@@ -69,7 +69,7 @@ a8266ecf031671f3: name=node1 peerURLs=http://localhost:23801 clientURLs=http://1
This example will `update` a8266ecf031671f3 member ID and change its peerURLs value to `http://10.0.1.10:2380`:

```sh
-$ etcdctl member update a8266ecf031671f3 http://10.0.1.10:2380
+$ etcdctl member update a8266ecf031671f3 --peer-urls=http://10.0.1.10:2380
Updated member with ID a8266ecf031671f3 in cluster
```
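
For completeness, the same update can be issued through the Go client instead of `etcdctl`. This sketch reuses the member ID and peer URL from the example above and assumes a `*clientv3.Client` named `cli`.

```go
package example

import (
	"context"

	"go.etcd.io/etcd/clientv3"
)

// updatePeerURLs changes the advertised peer URLs of the member shown above.
func updatePeerURLs(cli *clientv3.Client) error {
	const memberID uint64 = 0xa8266ecf031671f3
	_, err := cli.MemberUpdate(context.Background(), memberID,
		[]string{"http://10.0.1.10:2380"})
	return err
}
```

As with the `etcdctl` example, the member must then be restarted so it advertises the new peer URLs.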
@@ -10,13 +10,13 @@ In etcd, every runtime reconfiguration has to go through [two phases][add-member

Phase 1 - Inform cluster of new configuration

-To add a member into etcd cluster, make an API call to request a new member to be added to the cluster. This is only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
+To add a member into etcd cluster, make an API call to request a new member to be added to the cluster. This is the only way to add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.

Phase 2 - Start new member

-To join the etcd member into the existing cluster, specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, the cluster has reached the expected configuration.
+To join the new etcd member into the existing cluster, specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, the cluster has reached the expected configuration.

-By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change,, it will not be accepted by the cluster.
+By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change, it will not be accepted by the cluster.

Without the explicit workflow around cluster membership etcd would be vulnerable to unexpected cluster membership changes. For example, if etcd is running under an init system such as systemd, etcd would be restarted after being removed via the membership API, and attempt to rejoin the cluster on startup. This cycle would continue every time a member is removed via the API and systemd is set to restart etcd after failing, which is unexpected.
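
Here is a minimal Go sketch of the two-phase flow described above, assuming an existing cluster reachable through a `*clientv3.Client` named `cli`; the member names and URLs are hypothetical placeholders. Phase 1 is the API call; phase 2 is starting the new etcd process with the flags shown in the comment.

```go
package example

import (
	"context"
	"fmt"

	"go.etcd.io/etcd/clientv3"
)

func addNewMember(cli *clientv3.Client) error {
	// Phase 1 - inform the cluster of the new configuration. The call returns
	// once the cluster has agreed on the membership change.
	resp, err := cli.MemberAdd(context.Background(), []string{"http://10.0.1.13:2380"})
	if err != nil {
		return err
	}
	fmt.Printf("member %x accepted by the cluster\n", resp.Member.ID)

	// Phase 2 - start the new member (outside this program) with the correct
	// initial cluster and state, for example:
	//
	//   etcd --name infra3 \
	//     --initial-cluster "infra0=http://10.0.1.10:2380,infra3=http://10.0.1.13:2380" \
	//     --initial-cluster-state existing
	//
	// The names and URLs above are placeholders; --initial-cluster must list
	// every member of the resulting cluster, including the new one.
	return nil
}
```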
@@ -28,7 +28,7 @@ If a cluster permanently loses a majority of its members, a new cluster will nee

It is entirely possible to force removing the failed members from the existing cluster to recover. However, we decided not to support this method since it bypasses the normal consensus committing phase, which is unsafe. If the member to remove is not actually dead or force removed through different members in the same cluster, etcd will end up with a diverged cluster with same clusterID. This is very dangerous and hard to debug/fix afterwards.

-With a correct deployment, the possibility of permanent majority lose is very low. But it is a severe enough problem that worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and prepare for permanent majority lose before putting etcd into production.
+With a correct deployment, the possibility of permanent majority loss is very low. But it is a severe enough problem that it is worth special care. We strongly suggest reading the [disaster recovery documentation][disaster-recovery] and preparing for permanent majority loss before putting etcd into production.

## Do not use public discovery service for runtime reconfiguration
@@ -15,13 +15,13 @@ In etcd, every runtime reconfiguration has to go through [two phases][add-member

Phase 1 - Inform cluster of new configuration

-To add a member into etcd cluster, you need to make an API call to request a new member to be added to the cluster. And this is only way that you can add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.
+To add a member into etcd cluster, you need to make an API call to request a new member to be added to the cluster. And this is the only way that you can add a new member into an existing cluster. The API call returns when the cluster agrees on the configuration change.

Phase 2 - Start new member

-To join the etcd member into the existing cluster, you need to specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, you know your cluster reached the expected configuration.
+To join the new etcd member into the existing cluster, you need to specify the correct `initial-cluster` and set `initial-cluster-state` to `existing`. When the member starts, it will contact the existing cluster first and verify the current cluster configuration matches the expected one specified in `initial-cluster`. When the new member successfully starts, you know your cluster reached the expected configuration.

-By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change,, it will not be accepted by the cluster.
+By splitting the process into two discrete phases users are forced to be explicit regarding cluster membership changes. This actually gives users more flexibility and makes things easier to reason about. For example, if there is an attempt to add a new member with the same ID as an existing member in an etcd cluster, the action will fail immediately during phase one without impacting the running cluster. Similar protection is provided to prevent adding new members by mistake. If a new etcd member attempts to join the cluster before the cluster has accepted the configuration change, it will not be accepted by the cluster.

Without the explicit workflow around cluster membership etcd would be vulnerable to unexpected cluster membership changes. For example, if etcd is running under an init system such as systemd, etcd would be restarted after being removed via the membership API, and attempt to rejoin the cluster on startup. This cycle would continue every time a member is removed via the API and systemd is set to restart etcd after failing, which is unexpected.