From d05d6f8bbb2575effb1f8cf38f09f06f4756a714 Mon Sep 17 00:00:00 2001
From: Gyu-Ho Lee
Date: Fri, 9 Oct 2015 07:27:03 -0700
Subject: [PATCH] Documentation: fix typos

I found some typos. Please let me know if you have any feedback.

Thanks,

Documentation: fix metrics.md typo

Documentation: trim blank lines in metrics.md
---
 Documentation/authentication.md        | 2 +-
 Documentation/clustering.md            | 2 +-
 Documentation/configuration.md         | 2 +-
 Documentation/faq.md                   | 4 ++--
 Documentation/implementation-faq.md    | 2 +-
 Documentation/metrics.md               | 6 +++---
 Documentation/runtime-configuration.md | 4 ++--
 Documentation/runtime-reconf-design.md | 4 ++--
 Documentation/tuning.md                | 2 +-
 9 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/Documentation/authentication.md b/Documentation/authentication.md
index d43ba1659..0ce6233d5 100644
--- a/Documentation/authentication.md
+++ b/Documentation/authentication.md
@@ -134,7 +134,7 @@ $ etcdctl role remove myrolename
 
 ## Enabling authentication
 
-The minimal steps to enabling auth follow. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
+The minimal steps to enable auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference.
 
 Make sure the root user is created:
 
diff --git a/Documentation/clustering.md b/Documentation/clustering.md
index a2300cab4..eed72890e 100644
--- a/Documentation/clustering.md
+++ b/Documentation/clustering.md
@@ -384,7 +384,7 @@ $ etcd --proxy on -discovery-srv example.com
 
 #### Error Cases
 
-You might see the an error like `cannot find local etcd $name from SRV records.`. That means the etcd member fails to find itself from the cluster defined in SRV records. The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
+You might see an error like `cannot find local etcd $name from SRV records.`. That means the etcd member fails to find itself from the cluster defined in SRV records. The resolved address in `-initial-advertise-peer-urls` *must match* one of the resolved addresses in the SRV targets.
 
 # 0.4 to 2.0+ Migration Guide
 
diff --git a/Documentation/configuration.md b/Documentation/configuration.md
index 657c9223f..e17bfe232 100644
--- a/Documentation/configuration.md
+++ b/Documentation/configuration.md
@@ -244,7 +244,7 @@ For example, it may panic if other members in the cluster are still alive.
 Follow the instructions when using these flags.
 
 ##### -force-new-cluster
-+ Force to create a new one-member cluster. It commits configuration changes in force to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
++ Force to create a new one-member cluster. It forcibly commits configuration changes to remove all existing members in the cluster and add itself. It needs to be set to [restore a backup][restore].
 + default: false
 + env variable: ETCD_FORCE_NEW_CLUSTER
 
diff --git a/Documentation/faq.md b/Documentation/faq.md
index 963db85c2..a2bcf56ad 100644
--- a/Documentation/faq.md
+++ b/Documentation/faq.md
@@ -37,7 +37,7 @@ timeout.
 
 A proxy is a redirection server to the etcd cluster. The proxy handles the
 redirection of a client to the current configuration of the etcd cluster. A
-typical usecase is to start a proxy on a machine, and on first boot up of the
+typical use case is to start a proxy on a machine, and on first boot up of the
 proxy specify both the `--proxy` flag and the `--initial-cluster` flag.
 
 From there, any etcdctl client that starts up automatically speaks to the local
@@ -57,7 +57,7 @@ and their integration with the reconfiguration API.
 Thus, a member that is down, even infinitely, will never be automatically
 removed from the etcd cluster member list.
 
-This makes sense because its usually an application level / administrative
+This makes sense because it's usually an application level / administrative
 action to determine whether a reconfiguration should happen based on health.
 
 For more information, refer to [Documentation/runtime-reconfiguration.md].
diff --git a/Documentation/implementation-faq.md b/Documentation/implementation-faq.md
index f045a4e9e..d6d68d713 100644
--- a/Documentation/implementation-faq.md
+++ b/Documentation/implementation-faq.md
@@ -46,7 +46,7 @@ ExecStart=/usr/bin/etcd
 
 There are several error cases:
 
-0) Init has already ran and the data directory is already configured
+0) Init has already run and the data directory is already configured
 1) Discovery fails because of network timeout, etc
 2) Discovery fails because the cluster is already full and etcd needs to fall back to proxy
 3) Static cluster configuration fails because of conflict, misconfiguration or timeout
diff --git a/Documentation/metrics.md b/Documentation/metrics.md
index fc096693d..016405e7c 100644
--- a/Documentation/metrics.md
+++ b/Documentation/metrics.md
@@ -7,9 +7,9 @@ etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the serve
 
 The simplest way to see the available metrics is to cURL the metrics endpoint `/metrics` of etcd. The format is described [here](http://prometheus.io/docs/instrumenting/exposition_formats/).
 
-You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Promethus server and monitor etcd metrics.
+You can also follow the doc [here](http://prometheus.io/docs/introduction/getting_started/) to start a Prometheus server and monitor etcd metrics.
 
-The naming of metrics follows the suggested [best practice of Promethus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
+The naming of metrics follows the suggested [best practice of Prometheus](http://prometheus.io/docs/practices/naming/). A metric name has an `etcd` prefix as its namespace and a subsystem prefix (for example `wal` and `etcdserver`).
 
 etcd now exposes the following metrics:
 
@@ -127,4 +127,4 @@ Example Prometheus queries that may be useful from these metrics (across all etc
 
   * `sum(rate(etcd_proxy_dropped_total{job="etcd"}[1m])) by (proxying_error)`
     Number of failed request on the proxy. This should be 0, spikes here indicate connectivity issues to etcd cluster.
- 
\ No newline at end of file
+
diff --git a/Documentation/runtime-configuration.md b/Documentation/runtime-configuration.md
index a84497f9c..edc5c5738 100644
--- a/Documentation/runtime-configuration.md
+++ b/Documentation/runtime-configuration.md
@@ -2,7 +2,7 @@
 
 etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.
 
-Reconfiguration requests can only be processed when the the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not able to make progress and need to [restart from majority failure][majority failure].
+Reconfiguration requests can only be processed when the majority of the cluster members are functioning. It is **highly recommended** to always have a cluster size greater than two in production. It is unsafe to remove a member from a two member cluster. The majority of a two member cluster is also two. If there is a failure during the removal process, the cluster might not be able to make progress and needs to [restart from majority failure][majority failure].
 
 To better understand the design behind runtime reconfiguration, we suggest you read [this](runtime-reconf-design.md).
 
@@ -154,7 +154,7 @@ etcdserver: assign ids error: unmatched member while checking PeerURLs
 exit 1
 ```
 
-When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any alive member in the cluster:
+When we start etcd using the data directory of a removed member, etcd will exit automatically if it connects to any active member in the cluster:
 
 ```sh
 $ etcd
diff --git a/Documentation/runtime-reconf-design.md b/Documentation/runtime-reconf-design.md
index cbbdd993b..314c7486e 100644
--- a/Documentation/runtime-reconf-design.md
+++ b/Documentation/runtime-reconf-design.md
@@ -38,10 +38,10 @@ Discovery service is designed for bootstrapping an etcd cluster in the cloud env
 
 It seems that using public discovery service is a convenient way to do runtime reconfiguration, after all discovery service already has all the cluster configuration information. However relying on public discovery service brings troubles:
 
-1. it introduces a external dependencies for the entire life-cycle of your cluster, not just bootstrap time. If there is a network issue between your cluster and public discover service, your cluster will suffer from it.
+1. it introduces external dependencies for the entire life-cycle of your cluster, not just bootstrap time. If there is a network issue between your cluster and public discovery service, your cluster will suffer from it.
 2. public discovery service must reflect correct runtime configuration of your cluster during it life-cycle. It has to provide security mechanism to avoid bad actions, and it is hard.
 3. public discovery service has to keep tens of thousands of cluster configurations. Our public discovery service backend is not ready for that workload.
 
-If you want to have a discovery service that supports runtime reconfiguration, the best choice is to build your private one.
\ No newline at end of file
+If you want to have a discovery service that supports runtime reconfiguration, the best choice is to build your private one.
diff --git a/Documentation/tuning.md b/Documentation/tuning.md
index c04ff3282..264e09ee1 100644
--- a/Documentation/tuning.md
+++ b/Documentation/tuning.md
@@ -10,7 +10,7 @@ The network isn't the only source of latency. Each request and response may be i
 The underlying distributed consensus protocol relies on two separate time parameters to ensure that nodes can handoff leadership if one stalls or goes offline.
 The first parameter is called the *Heartbeat Interval*.
 This is the frequency with which the leader will notify followers that it is still the leader.
-For best pratices, the parameter should be set around round-trip time between members.
+As a best practice, the parameter should be set around the round-trip time between members.
 By default, etcd uses a `100ms` heartbeat interval.
 
 The second parameter is the *Election Timeout*.