Limit can cause multiple requests due to pagination.
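For illustration, a minimal sketch (the helper name is hypothetical) of how
a limited read turns into several Range requests through the etcd client:

    import (
    	"context"

    	"go.etcd.io/etcd/api/v3/mvccpb"
    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    // paginatedList reads all keys under prefix, limit keys per request.
    // Each response can set More, forcing another Range request.
    func paginatedList(ctx context.Context, c *clientv3.Client, prefix string, limit int64) ([]*mvccpb.KeyValue, error) {
    	var kvs []*mvccpb.KeyValue
    	key := prefix
    	end := clientv3.GetPrefixRangeEnd(prefix)
    	for {
    		resp, err := c.Get(ctx, key, clientv3.WithRange(end), clientv3.WithLimit(limit))
    		if err != nil {
    			return nil, err
    		}
    		kvs = append(kvs, resp.Kvs...)
    		if !resp.More {
    			return kvs, nil
    		}
    		// Continue from just after the last returned key.
    		key = string(resp.Kvs[len(resp.Kvs)-1].Key) + "\x00"
    	}
    }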
For reads after a failed write, we would like to return to normal write
requests as soon as possible.
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
Kubernetes uses WithRequireLeader to modify the context passed
to the Watch() and RequestProgress() calls in order to ensure
that a leader is present in the cluster before serving the request.
This commit mimics that behaviour in the Kubernetes traffic.
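A minimal sketch of the pattern being mimicked (the function name is
hypothetical; WithRequireLeader and RequestProgress are the real client calls):

    import (
    	"context"

    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    // watchWithLeader wraps the context so the server fails the stream
    // with an error when the cluster has no leader, instead of serving it.
    func watchWithLeader(ctx context.Context, c *clientv3.Client, key string) clientv3.WatchChan {
    	ctx = clientv3.WithRequireLeader(ctx)
    	w := c.Watch(ctx, key)
    	// Progress requests go over the same leader-requiring context.
    	_ = c.RequestProgress(ctx)
    	return w
    }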
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Kubernetes relies on the PrevKV() option in the watches it opens
against etcd. This commit adds a robustness test to validate this
behaviour.
A watch response returned with PrevKV() is valid if:
The value in the current event's prevKV matches the previous
event's value for the same key, when this is not a create event.
There are cases where even a create event can carry a prevKV,
for example when the watch is opened after the key is created.
Since we don't simulate that, we don't check for it.
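A rough sketch of that check (names and types are hypothetical, built on
clientv3 watch events):

    import (
    	"fmt"

    	clientv3 "go.etcd.io/etcd/client/v3"
    )

    // validatePrevKV checks a single watch event against the last value
    // this watch observed for the event's key.
    func validatePrevKV(lastValue map[string]string, ev *clientv3.Event) error {
    	key := string(ev.Kv.Key)
    	if !ev.IsCreate() {
    		if ev.PrevKv == nil {
    			return fmt.Errorf("missing prevKV for key %q", key)
    		}
    		if got, want := string(ev.PrevKv.Value), lastValue[key]; got != want {
    			return fmt.Errorf("prevKV mismatch for %q: got %q, want %q", key, got, want)
    		}
    	}
    	// A create event may still carry a prevKV (e.g. the watch was opened
    	// after the key was created); we don't simulate that, so we skip it.
    	lastValue[key] = string(ev.Kv.Value)
    	return nil
    }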
Further, this adjusts revision numbers so that we can successfully create
a new replay. This is needed now because unit tests with and without
PrevKV will co-exist, and we require creation of a new replay every
time we validate PrevKV.
We also regenerate the test data so that prevKV exists in it.
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>
Having too many delete requests is bad: since they are not unique requests,
linearization is more prone to time out on them.
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>
For now we only validate the stale read revision, not the response content.
The reason is that the etcd model only stores the latest version of each
key, with no history like real etcd.
Validating stale read contents needs to be done outside of the model,
as storing the whole history is just too costly for linearization validation.
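As a sketch of the limitation (the types are hypothetical, not the actual
model):

    // The model keeps only the latest revision per key, so for a stale
    // read we can check that the requested revision is plausible, but we
    // cannot reconstruct what the response content should have been.
    type modelState struct {
    	revision int64             // latest revision the model has seen
    	values   map[string]string // latest value per key, no history
    }

    func (s modelState) validStaleReadRevision(rev int64) bool {
    	return rev > 0 && rev <= s.revision
    }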
Signed-off-by: Marek Siarkowicz <siarkowicz@google.com>