Merge pull request #3327 from yichengq/bench-2.2

docs/benchmarks: add benchmark result for 2.2

Commit e1dfcec0ab
`Documentation/benchmarks/etcd-2-2-benchmarks.md` (new file, 69 lines)
## Physical machines

GCE n1-highcpu-2 machine type (see the provisioning sketch below)

- 1x dedicated local SSD mounted under /var/lib/etcd
- 1x dedicated slow disk for the OS
- 1.8 GB memory
- 2x CPUs
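For reference, machines of this shape can be created with the gcloud CLI. The following is a minimal sketch, assuming gcloud is installed and authenticated; the instance name, zone, and SSD interface are illustrative, and formatting the OS disk and mounting the SSD under /var/lib/etcd still have to be done by hand:

```
gcloud compute instances create etcd-bench-1 \
    --zone us-central1-a \
    --machine-type n1-highcpu-2 \
    --local-ssd interface=scsi
```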
## etcd Cluster

3 etcd 2.2.0-alpha.1 members, each running on a single machine.

Detailed versions:
```
etcd Version: 2.2.0-alpha.1+git
Git SHA: 28b61ac
Go Version: go1.4.2
Go OS/Arch: linux/amd64
```
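This block matches what the etcd binary reports about itself, so it can be reproduced on each member with (assuming etcd is on the PATH):

```
etcd --version
```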
Also, we use 3 etcd 2.1.0 alpha-stage members to form a cluster as the performance baseline. etcd's commit head is at [c7146bd5](https://github.com/coreos/etcd/commits/c7146bd5f2c73716091262edc638401bb8229144), the same commit used in the [etcd 2.1 benchmark](./etcd-2-1-0-benchmarks.md).
## Testing

Bootstrap another machine and use the benchmark tool [boom](https://github.com/rakyll/boom) to send requests to each etcd member. See [hack/benchmark](../../hack/benchmark/) for instructions.
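Each measurement below boils down to one boom invocation per member. As a minimal sketch (the member address is a placeholder), a 64-client read test against a single member looks like:

```
# 6400 GET requests against /v2/keys/foo from 64 concurrent clients
./boom -n 6400 -c 64 http://10.240.201.15:4001/v2/keys/foo
```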
## Performance

Percentage deltas in the tables below are relative to the etcd 2.1 baseline cluster described above.

### reading one single key

| key size in bytes | number of clients | target etcd server | read QPS     | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|--------------|------------------------------|
| 64                | 1                 | leader only        | 2216 (+5%)   | 0.5 (-17%)                   |
| 64                | 64                | leader only        | 16038 (-10%) | 6.1 (+0%)                    |
| 64                | 256               | leader only        | 15497 (-16%) | 22.4 (+5%)                   |
| 256               | 1                 | leader only        | 2115 (-8%)   | 0.5 (+0%)                    |
| 256               | 64                | leader only        | 16083 (-13%) | 6.1 (+8%)                    |
| 256               | 256               | leader only        | 15444 (-17%) | 21.9 (+2%)                   |
| 64                | 64                | all servers        | 45101 (-9%)  | 2.1 (+5%)                    |
| 64                | 256               | all servers        | 50558 (-14%) | 8.0 (+8%)                    |
| 256               | 64                | all servers        | 45415 (-8%)  | 2.1 (+5%)                    |
| 256               | 256               | all servers        | 50531 (-14%) | 8.1 (+20%)                   |
### writing one single key

| key size in bytes | number of clients | target etcd server | write QPS   | 90th Percentile Latency (ms) |
|-------------------|-------------------|--------------------|-------------|------------------------------|
| 64                | 1                 | leader only        | 61 (+3%)    | 18.0 (-15%)                  |
| 64                | 64                | leader only        | 2092 (+14%) | 37.2 (-8%)                   |
| 64                | 256               | leader only        | 2407 (-43%) | 71.0 (+2%)                   |
| 256               | 1                 | leader only        | 60 (+15%)   | 18.5 (-38%)                  |
| 256               | 64                | leader only        | 2186 (+33%) | 37.2 (-16%)                  |
| 256               | 256               | leader only        | 2385 (-42%) | 81.9 (+8%)                   |
| 64                | 64                | all servers        | 1758 (+72%) | 53.1 (-50%)                  |
| 64                | 256               | all servers        | 4547 (+31%) | 86.7 (-31%)                  |
| 256               | 64                | all servers        | 1667 (+66%) | 54.7 (-50%)                  |
| 256               | 256               | all servers        | 4695 (+33%) | 81.3 (-25%)                  |
### performance changes explanation

- read QPS in all scenarios decreased by 10-20%. One reason is that etcd now records store metrics for each store operation; the metrics are important for monitoring and debugging, so this cost is acceptable. The other reason is that the HTTP handler checks key access permissions on each request for authentication purposes. We could improve this by skipping the check when the authentication feature is disabled.

- write QPS to the leader increased by 10-20%, except in the 256-client cases. This is because we decoupled the raft main loop from the entry-apply loop, so they no longer block each other.

- write QPS to the leader with 256 clients decreased by roughly 40%. This is caused by etcd improperly limiting the number of client connections. We will improve the limiting method to eliminate this performance regression.

- write QPS to all servers increased by 30-70% because followers now receive the latest commit index earlier and commit proposals faster.
`hack/benchmark/README.md` (new file, 14 lines)
## Usage

Benchmark a 3-member etcd cluster to measure its read and write performance.
## Instructions

1. Start a 3-member etcd cluster on 3 machines.
2. Update `$leader` and `$servers` in the script (see the example below).
3. Run the script on a separate machine.
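The two variables sit at the top of [bench.sh](./bench.sh). The addresses below are the placeholders from the script; replace them with your members' client URLs:

```
leader=http://10.240.201.15:4001
servers=( http://10.240.201.15:4001 http://10.240.212.209:4001 http://10.240.95.3:4001 )
```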
## Caveat

1. Set the environment variable `GOMAXPROCS` to the number of available cores to maximize CPU utilization for both the etcd members and the benchmark process.
2. Raise the open-file limit per process to 10000 to accommodate the large number of client connections, again for both the etcd members and the benchmark process (a sketch of both settings follows).
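A minimal sketch of applying both settings in the shell that will launch etcd or the benchmark; the core count of 2 matches the n1-highcpu-2 machines used above, so adjust it for your hardware:

```
ulimit -n 10000      # allow up to 10000 open file descriptors per process
export GOMAXPROCS=2  # let the Go runtime use both cores
```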
`hack/benchmark/bench.sh` (new file, 65 lines)
#!/bin/bash -e

leader=http://10.240.201.15:4001
# assume three servers
servers=( http://10.240.201.15:4001 http://10.240.212.209:4001 http://10.240.95.3:4001 )

# benchmark with 64-byte and 256-byte values
keyarray=( 64 256 )

for keysize in "${keyarray[@]}"; do

	# each boom call writes a value of $keysize copies of the letter "a" (octal 141),
	# then keeps only the throughput and tail-latency lines, collapsed onto one line
	echo write, 1 client, $keysize key size, to leader
	./boom -m PUT -n 10 -d value=$(head -c $keysize < /dev/zero | tr '\0' '\141') -c 1 $leader/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo

	echo write, 64 client, $keysize key size, to leader
	./boom -m PUT -n 640 -d value=$(head -c $keysize < /dev/zero | tr '\0' '\141') -c 64 $leader/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo

	echo write, 256 client, $keysize key size, to leader
	./boom -m PUT -n 2560 -d value=$(head -c $keysize < /dev/zero | tr '\0' '\141') -c 256 $leader/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo

	echo write, 64 client, $keysize key size, to all servers
	for i in "${servers[@]}"; do
		./boom -m PUT -n 210 -d value=$(head -c $keysize < /dev/zero | tr '\0' '\141') -c 21 $i/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo &
	done
	# wait for all booms to start running
	sleep 3
	# wait for all booms to finish
	for pid in $(pgrep 'boom'); do
		while kill -0 "$pid" 2> /dev/null; do
			sleep 3
		done
	done

	echo write, 256 client, $keysize key size, to all servers
	for i in "${servers[@]}"; do
		./boom -m PUT -n 850 -d value=$(head -c $keysize < /dev/zero | tr '\0' '\141') -c 85 $i/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo &
	done
	# wait for all booms to start running
	sleep 3
	# wait for all booms to finish
	for pid in $(pgrep 'boom'); do
		while kill -0 "$pid" 2> /dev/null; do
			sleep 3
		done
	done

	echo read, 1 client, $keysize key size, to leader
	./boom -n 100 -c 1 $leader/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo

	echo read, 64 client, $keysize key size, to leader
	./boom -n 6400 -c 64 $leader/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo

	echo read, 256 client, $keysize key size, to leader
	./boom -n 25600 -c 256 $leader/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo

	echo read, 64 client, $keysize key size, to all servers
	# bench servers one by one, so this benchmark machine is not overloaded;
	# it doesn't affect correctness because read requests don't involve peer interaction
	for i in "${servers[@]}"; do
		./boom -n 21000 -c 21 $i/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo
	done

	echo read, 256 client, $keysize key size, to all servers
	for i in "${servers[@]}"; do
		./boom -n 85000 -c 85 $i/v2/keys/foo | grep -e "Requests/sec" -e "Latency" -e "90%" | tr "\n" "\t" | xargs echo
	done

done
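A typical run, assuming the boom binary has been built (for example with `go get github.com/rakyll/boom`) and placed next to this script; `tee` keeps a copy of the results on disk:

```
GOMAXPROCS=2 ./bench.sh | tee bench-output.txt
```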
|
Loading…
x
Reference in New Issue
Block a user