support follower read #5051
Conversation
/run-all-tests
Rest LGTM
cluster.must_transfer_leader(region.get_id(), p2.clone());

// Block all write cmd applying of Peer 3.
fail::cfg("on_apply_write_cmd", "sleep(5000)").unwrap();
How about pause? It works like sleep but blocks forever until it is turned off.
If we used pause, the apply thread could not exit after an assertion failed (before the fail point was turned off).
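For context, here is a minimal sketch of the fail-point pattern discussed above, using the fail crate's cfg/remove API (assuming fail >= 0.4 with its failpoints feature enabled). The TiKV test harness and the real on_apply_write_cmd call site are omitted; only the sleep-vs-pause trade-off is shown.

use fail::fail_point;

fn apply_write_cmd() {
    // The write-apply path is instrumented with a fail point so tests
    // can delay it on demand.
    fail_point!("on_apply_write_cmd");
    // ... apply the write command ...
}

fn main() {
    let scenario = fail::FailScenario::setup();

    // `sleep(5000)` delays each hit of the fail point by 5s and then
    // recovers by itself, so the apply thread can still finish even if
    // a later assertion panics. `pause` would block here until the
    // fail point is removed, which is why it was rejected above.
    fail::cfg("on_apply_write_cmd", "sleep(5000)").unwrap();

    apply_write_cmd();

    // Always clear the fail point once the test is done with it.
    fail::remove("on_apply_write_cmd");
    scenario.teardown();
}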
LGTM
/run-all-tests
* raftstore,server: support follower read
* raftstore: fix condition check of read on replica
* raftstore: follower read waits for apply index reaches read index
* add a test of waiting for read index
* fix test_wait_for_apply_index
* dec pending reads count after follower handle read index cmd
* update comments
* remove unused file
* fix test_wait_for_apply_index
* update comments
* update test_wait_for_apply_index
* update dependency 'kvproto'

Signed-off-by: 5kbpers <[email protected]>
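The commit "raftstore: follower read waits for apply index reaches read index" above captures the core safety rule: a follower may serve a read only after its applied index has caught up with the read index confirmed by the leader. A rough sketch of that rule, using hypothetical names rather than TiKV's actual raftstore types:

// Hypothetical sketch of the "wait for apply index to reach read index"
// rule; not the actual TiKV raftstore code.
struct PendingRead {
    read_index: u64, // index confirmed by the leader via ReadIndex
}

struct FollowerState {
    applied_index: u64, // highest log index this peer has applied
    pending_reads: Vec<PendingRead>,
}

impl FollowerState {
    /// Called after the apply worker advances `applied_index`:
    /// any pending read whose read index has been applied is now safe
    /// to serve locally from the follower.
    fn ready_reads(&mut self) -> Vec<PendingRead> {
        let applied = self.applied_index;
        let (ready, pending): (Vec<_>, Vec<_>) = self
            .pending_reads
            .drain(..)
            .partition(|r| r.read_index <= applied);
        self.pending_reads = pending;
        ready
    }
}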
* raftstore,server: support follower read
Signed-off-by: 5kbpers <[email protected]>
What have you changed? (mandatory)
This PR introduces a new feature, which enables clients to read data from follower peers.
We add a new RPC option follower_read to struct Context within every request; see 5kbpers/kvproto@e55978c.
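For illustration, here is a hedged sketch of how a client might set this option on a request, assuming the follower_read field added to kvproto's Context in the commit linked above and the generated setters; the RawGetRequest usage is only an example, not part of this PR.

// Hedged sketch: opt a single request in to follower read.
// `set_follower_read` is assumed from the linked kvproto change.
use kvproto::kvrpcpb::{Context, RawGetRequest};

fn build_follower_read_request(region_id: u64, key: Vec<u8>) -> RawGetRequest {
    let mut ctx = Context::default();
    ctx.set_region_id(region_id);
    // Ask TiKV to serve this read from a follower peer instead of the leader.
    ctx.set_follower_read(true);

    let mut req = RawGetRequest::default();
    req.set_context(ctx);
    req.set_key(key);
    req
}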
What are the types of the changes? (mandatory)
How has this PR been tested? (mandatory)
integration tests
Does this PR affect tidb-ansible update? (mandatory)
No.
Refer to a related PR or issue link (optional)
pingcap/kvproto#424