Currently, Member already exports GrpcURL as a struct field. However,
when addressing the var-naming linting issues, renaming it from GrpcURL
to GRPCURL clashes with the GRPCURL() function.
Given that the field is already exported, there's no need to define a
getter function. Usage is also mixed, with some call sites reading the
exported field (GrpcURL) and others calling the function (GRPCURL()).
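A minimal Go sketch of the clash (the member type below is a stand-in, not the real e2e struct; only the GrpcURL field and GRPCURL() names come from the code being changed):
```go
// Sketch only; "member" is a placeholder for the actual e2e type.
package e2e

type member struct {
	GrpcURL string // exported field; callers can read it directly
}

// Existing getter that a field rename would collide with: a Go type cannot
// have a field and a method with the same name, so renaming the field to
// GRPCURL turns this declaration into a compile error
// ("field and method with the same name GRPCURL").
func (m *member) GRPCURL() string { return m.GrpcURL }
```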
Signed-off-by: Ivan Valdes <ivan@vald.es>
Unlike SnapshotWithVersion, client.Snapshot doesn't wait for the first
response, so the server could end up opening the db only after we close
the connection or shut down the server. We can read a few bytes to
ensure the server has opened boltdb.
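A sketch of that check (the helper name is hypothetical; clientv3's Snapshot returning an io.ReadCloser is the real API):
```go
// Sketch: force a small read from the snapshot stream so we know the server
// has opened boltdb before the test closes the connection or stops the server.
package integration

import (
	"context"
	"io"
	"testing"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func snapshotAfterFirstBytes(ctx context.Context, t *testing.T, cli *clientv3.Client) io.ReadCloser {
	t.Helper()

	rc, err := cli.Snapshot(ctx)
	if err != nil {
		t.Fatal(err)
	}
	// client.Snapshot returns before the first response; block until at least
	// one byte has arrived, which implies the server-side boltdb is open and
	// being streamed.
	if _, err := io.ReadFull(rc, make([]byte, 1)); err != nil {
		rc.Close()
		t.Fatal(err)
	}
	return rc
}
```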
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Fixed comments flagged by goword in the Go test files that the shell
function go_srcs_in_module lists, per the changes in #14827.
Helps in #14827
Signed-off-by: Bhargav Ravuri <bhargav.ravuri@infracloud.io>
Since the HTTP/2 spec defines the receive window size and the maximum
frame size of a stream, the underlying layer (the gRPC client) can
pre-read data from the server even if the application layer hasn't read
it yet. The initialized cluster has a 20 KiB snapshot, which can be
pre-read entirely by that underlying layer. We should increase the
snapshot size so that io.Copy cannot be satisfied from the pre-read
data alone and still returns the canceled or timeout error.
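A sketch of the idea (key count and value size are assumptions, not the exact numbers in the fix): write enough data that the snapshot no longer fits in what the gRPC/HTTP/2 layer is allowed to pre-buffer.
```go
// Sketch: grow the snapshot well beyond the default ~20 KiB so a canceled or
// timed-out transfer cannot complete from pre-read flow-control buffers.
package integration

import (
	"context"
	"fmt"
	"strings"
	"testing"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func growSnapshot(ctx context.Context, t *testing.T, cli *clientv3.Client) {
	t.Helper()

	val := strings.Repeat("v", 1024) // 1 KiB per value (assumed size)
	for i := 0; i < 512; i++ {       // ~512 KiB total (assumed count)
		if _, err := cli.Put(ctx, fmt.Sprintf("key-%03d", i), val); err != nil {
			t.Fatal(err)
		}
	}
}
```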
Fixes: #14477
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Nearly none of the tests were checking the value; they just assumed
WaitLeader succeeded (see the sketch after the log excerpt below).
```
maintenance_test.go:277: Waiting for leader...
logger.go:130: 2022-04-03T08:01:09.914+0200 INFO m0 cluster version differs from storage version. {"member": "m0", "cluster-version": "3.6.0", "storage-version": "3.5.0"}
logger.go:130: 2022-04-03T08:01:09.915+0200 WARN m0 leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk {"member": "m0", "to": "2acc3d3b521981", "heartbeat-interval": "10ms", "expected-duration": "20ms", "exceeded-duration": "103.756219ms"}
logger.go:130: 2022-04-03T08:01:09.916+0200 INFO m0 updated storage version {"member": "m0", "new-storage-version": "3.6.0"}
...
logger.go:130: 2022-04-03T08:01:09.926+0200 INFO grpc [[roundrobin] roundrobinPicker: Build called with info: {map[0xc002630ac0:{{unix:localhost:m0 localhost <nil> 0 <nil>}} 0xc002630af0:{{unix:localhost:m1 localhost <nil> 0 <nil>}} 0xc002630b20:{{unix:localhost:m2 localhost <nil> 0 <nil>}}]}]
logger.go:130: 2022-04-03T08:01:09.926+0200 WARN m0 apply request took too long {"member": "m0", "took": "114.661766ms", "expected-duration": "100ms", "prefix": "", "request": "header:<ID:12658633312866157316 > cluster_version_set:<ver:\"3.6.0\" > ", "response": ""}
logger.go:130: 2022-04-03T08:01:09.927+0200 INFO m0 cluster version is updated {"member": "m0", "cluster-version": "3.6"}
logger.go:130: 2022-04-03T08:01:09.955+0200 INFO m2.raft 9f96af25a04e2ec3 [logterm: 2, index: 8, vote: 9903a56eaf96afac] ignored MsgVote from 2acc3d3b521981 [logterm: 2, index: 8] at term 2: lease is not expired (remaining ticks: 10) {"member": "m2"}
logger.go:130: 2022-04-03T08:01:09.955+0200 INFO m0.raft 9903a56eaf96afac [logterm: 2, index: 8, vote: 9903a56eaf96afac] ignored MsgVote from 2acc3d3b521981 [logterm: 2, index: 8] at term 2: lease is not expired (remaining ticks: 5) {"member": "m0"}
logger.go:130: 2022-04-03T08:01:09.955+0200 INFO m0.raft 9903a56eaf96afac [term: 2] received a MsgAppResp message with higher term from 2acc3d3b521981 [term: 3] {"member": "m0"}
logger.go:130: 2022-04-03T08:01:09.955+0200 INFO m0.raft 9903a56eaf96afac became follower at term 3 {"member": "m0"}
logger.go:130: 2022-04-03T08:01:09.955+0200 INFO m0.raft raft.node: 9903a56eaf96afac lost leader 9903a56eaf96afac at term 3 {"member": "m0"}
maintenance_test.go:279: Leader established.
```
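A hypothetical sketch of the missing check (the cluster type, Members field, WaitLeader signature, and import path are all assumptions here):
```go
// Sketch: fail the test immediately when WaitLeader does not return a valid
// member index instead of silently assuming a leader was elected.
package integration_test

import (
	"testing"

	"go.etcd.io/etcd/tests/v3/framework/integration" // assumed import path
)

func mustWaitLeader(t *testing.T, clus *integration.Cluster) int {
	t.Helper()

	lead := clus.WaitLeader(t)
	if lead < 0 || lead >= len(clus.Members) {
		t.Fatalf("WaitLeader returned invalid member index %d", lead)
	}
	return lead
}
```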
Tmp
The io/ioutil package has been deprecated as of Go 1.16, see
https://golang.org/doc/go1.16#ioutil. This commit replaces the existing
io/ioutil functions with their equivalents in the io and os packages.
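The usual one-to-one replacements look like this (standard-library mappings, not a listing of this commit's exact diff; file names are placeholders):
```go
// Deprecated io/ioutil helpers and their Go 1.16+ equivalents.
package ioutilmigration

import (
	"io"
	"os"
)

func examples() error {
	// ioutil.ReadFile  -> os.ReadFile
	data, err := os.ReadFile("testdata/in.txt")
	if err != nil {
		return err
	}

	// ioutil.WriteFile -> os.WriteFile
	if err := os.WriteFile("testdata/out.txt", data, 0o600); err != nil {
		return err
	}

	// ioutil.ReadAll   -> io.ReadAll
	f, err := os.Open("testdata/out.txt")
	if err != nil {
		return err
	}
	defer f.Close()
	if _, err := io.ReadAll(f); err != nil {
		return err
	}

	// ioutil.TempDir -> os.MkdirTemp; ioutil.TempFile -> os.CreateTemp
	dir, err := os.MkdirTemp("", "etcd-test-*")
	if err != nil {
		return err
	}
	return os.RemoveAll(dir)
}
```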
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
When run 100 times in a row, those tests flaked around 10-20% of the
time. Based on some experimentation, writing 10 keys was enough to
ensure that a WAL snapshot is created, which prevented any flakes.
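A sketch of that workaround (the helper name and key layout are assumptions; it also assumes the test cluster is configured with a low enough snapshot count that 10 entries trigger a WAL snapshot):
```go
// Sketch: write 10 keys so the member reliably produces a WAL snapshot before
// the rest of the test runs.
package integration

import (
	"context"
	"fmt"
	"testing"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func forceWALSnapshot(ctx context.Context, t *testing.T, cli *clientv3.Client) {
	t.Helper()

	for i := 0; i < 10; i++ {
		if _, err := cli.Put(ctx, fmt.Sprintf("key-%d", i), "value"); err != nil {
			t.Fatal(err)
		}
	}
}
```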
They used to take >10 min with coverage, so they were causing
interrupted Travis runs.
Now they fit in 100-150 s (together), thanks also to parallel
execution.