Documentation: fix typos
I found some typos. Please let me know if you have any feedback. Thanks.

Documentation: fix metrics.md typo

Documentation: trim blank lines in metrics.md
commit d05d6f8bbb (parent eadfd138a4)
@@ -134,7 +134,7 @@ $ etcdctl role remove myrolename

 ## Enabling authentication

-The minimal steps to enabling auth follow. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
+The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.

 Make sure the root user is created:
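For context, the commands that section goes on to walk through look roughly like this; a minimal sketch assuming a local cluster on the default endpoints (etcdctl prompts for the root password, and output is abbreviated):

```sh
# Create the root user; etcdctl prompts for a password.
$ etcdctl user add root
New password:

# Enable authentication; subsequent requests require credentials.
$ etcdctl auth enable
Authentication Enabled
```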
@@ -384,7 +384,7 @@ $ etcd --proxy on -discovery-srv example.com

 #### Error Cases

-You might see the an error like `cannot find local etcd $name from SRV records.`. That means the etcd member fails to find itself from the cluster defined in SRV records. The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
+You might see an error like `cannot find local etcd $name from SRV records.`. That means the etcd member fails to find itself from the cluster defined in SRV records. The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
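To debug that mismatch, it can help to compare the SRV targets DNS actually returns against the advertised peer URL; a sketch, reusing the `example.com` domain from the surrounding doc (the `infra0` host is hypothetical):

```sh
# Inspect the SRV records that define the cluster.
$ dig +noall +answer SRV _etcd-server._tcp.example.com
_etcd-server._tcp.example.com. 300 IN SRV 0 0 2380 infra0.example.com.

# The advertised peer URL must resolve to one of those SRV targets.
$ etcd -name infra0 -discovery-srv example.com \
    -initial-advertise-peer-urls http://infra0.example.com:2380
```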
# 0.4 to 2.0+ Migration Guide
@@ -244,7 +244,7 @@ For example, it may panic if other members in the cluster are still alive.
 Follow the instructions when using these flags.

 ##### -force-new-cluster
-+ Force to create a new one-member cluster. It commits configuration changes in force to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
++ Force to create a new one-member cluster. It commits configuration changes forcing to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
 + default: false
 + env variable: ETCD_FORCE_NEW_CLUSTER
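A sketch of how this flag is typically used when restoring from a backup (the data directory path below is hypothetical):

```sh
# Start a one-member cluster from restored data, dropping the old membership.
$ etcd -data-dir=/var/lib/etcd -force-new-cluster

# Equivalent via the environment variable:
$ ETCD_FORCE_NEW_CLUSTER=true etcd -data-dir=/var/lib/etcd
```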
@@ -37,7 +37,7 @@ timeout.

 A proxy is a redirection server to the etcd cluster. The proxy handles the
 redirection of a client to the current configuration of the etcd cluster. A
-typical usecase is to start a proxy on a machine, and on first boot up of the
+typical use case is to start a proxy on a machine, and on first boot up of the
 proxy specify both the `--proxy` flag and the `--initial-cluster` flag.

 From there, any etcdctl client that starts up automatically speaks to the local
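That first boot looks roughly like this; the member names and addresses below are placeholders:

```sh
# Start a local proxy that follows the cluster given in --initial-cluster.
$ etcd --proxy on \
    --initial-cluster "infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
```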
@@ -57,7 +57,7 @@ and their integration with the reconfiguration API.
 Thus, a member that is down, even infinitely, will never be automatically
 removed from the etcd cluster member list.

-This makes sense because its usually an application level / administrative
+This makes sense because it's usually an application level / administrative
 action to determine whether a reconfiguration should happen based on health.

 For more information, refer to [Documentation/runtime-reconfiguration.md].
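That administrative action is usually an explicit etcdctl call; a sketch (the member ID and addresses shown are hypothetical, and output is abbreviated):

```sh
# List members, then remove the dead one by its ID.
$ etcdctl member list
6e3bd23ae5f1eae0: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://10.0.1.11:2379

$ etcdctl member remove 6e3bd23ae5f1eae0
Removed member 6e3bd23ae5f1eae0 from cluster
```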
@@ -46,7 +46,7 @@ ExecStart=/usr/bin/etcd

 There are several error cases:

-0) Init has already ran and the data directory is already configured
+0) Init has already run and the data directory is already configured
 1) Discovery fails because of network timeout, etc
 2) Discovery fails because the cluster is already full and etcd needs to fall back to proxy
 3) Static cluster configuration fails because of conflict, misconfiguration or timeout
@@ -7,9 +7,9 @@ etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server.
 The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics` of etcd. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).


-You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Promethus server and monitor etcd metrics.
+You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Prometheus server and monitor etcd metrics.

-The naming of metrics follows the suggested [best practice of Promethus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
+The naming of metrics follows the suggested [best practice of Prometheus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).

 etcd now exposes the following metrics:
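The cURL in question is a one-liner; a sketch assuming the default client port on localhost:

```sh
# Dump the current metrics in the Prometheus exposition format.
$ curl -L http://localhost:2379/metrics
```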
@@ -127,4 +127,4 @@ Example Prometheus queries that may be useful from these metrics (across all etcd members):
 * `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`

 Number of failed request on the proxy. This should be 0, spikes here indicate connectivity issues to etcd cluster.
-
+
@@ -2,7 +2,7 @@

 etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.

-Reconfiguration requests can only be processed when the the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not able to make progress and need to [restart from majority failure][majority failure].
+Reconfiguration requests can only be processed when the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not able to make progress and need to [restart from majority failure][majority failure].

 To better understand the design behind runtime reconfiguration, we suggest you read [this](runtime-reconf-design.md).
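The arithmetic behind that warning: quorum for an n-member cluster is ⌊n/2⌋ + 1, so a 2-member cluster needs both members to agree (quorum 2, tolerating zero failures), while a 3-member cluster needs only 2 and can safely lose one member.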
@@ -154,7 +154,7 @@ etcdserver: assign ids error: unmatched member while checking PeerURLs
 exit 1
 ```

-When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any alive member in the cluster:
+When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any active member in the cluster:

 ```sh
 $ etcd
@@ -38,10 +38,10 @@ Discovery service is designed for bootstrapping an etcd cluster in the cloud environment

 It seems that using public discovery service is a convenient way to do runtime reconfiguration, after all discovery service already has all the cluster configuration information. However relying on public discovery service brings troubles:

-1. it introduces a external dependencies for the entire life-cycle of your cluster, not just bootstrap time. If there is a network issue between your cluster and public discover service, your cluster will suffer from it.
+1. it introduces external dependencies for the entire life-cycle of your cluster, not just bootstrap time. If there is a network issue between your cluster and public discovery service, your cluster will suffer from it.

 2. public discovery service must reflect correct runtime configuration of your cluster during it life-cycle. It has to provide security mechanism to avoid bad actions, and it is hard.

 3. public discovery service has to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.

-If you want to have a discovery service that supports runtime reconfiguration, the best choice is to build your private one.
+If you want to have a discovery service that supports runtime reconfiguration, the best choice is to build your private one.
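Bootstrapping against a private discovery service works like the public one, just with a different URL; a sketch, assuming an existing etcd cluster at the hypothetical host `myetcd.local` backs the service:

```sh
# Register a discovery token with the expected cluster size (here 3).
$ UUID=$(uuidgen)
$ curl -X PUT "http://myetcd.local:2379/v2/keys/discovery/${UUID}/_config/size" -d value=3

# New members then point -discovery at the private URL instead of discovery.etcd.io.
$ etcd -name infra0 -discovery "http://myetcd.local:2379/v2/keys/discovery/${UUID}"
```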
@@ -10,7 +10,7 @@ The network isn't the only source of latency. Each request and response may be impacted
 The underlying distributed consensus protocol relies on two separate time parameters to ensure that nodes can handoff leadership if one stalls or goes offline.
 The first parameter is called the *Heartbeat Interval*.
 This is the frequency with which the leader will notify followers that it is still the leader.
-For best pratices, the parameter should be set around round-trip time between members.
+For best practices, the parameter should be set around round-trip time between members.
 By default, etcd uses a `100ms` heartbeat interval.

 The second parameter is the *Election Timeout*.
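Both parameters are set on the command line or via environment variables; a sketch with values sized for a higher-latency network (etcd's defaults are a 100ms heartbeat interval and a 1000ms election timeout):

```sh
# Values are in milliseconds; keep the election timeout several times the heartbeat interval.
$ etcd -heartbeat-interval=200 -election-timeout=2000

# Equivalent environment variables:
$ ETCD_HEARTBEAT_INTERVAL=200 ETCD_ELECTION_TIMEOUT=2000 etcd
```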