Introduce reusable query buffer for client reads #337
base: unstable
Conversation
This PR optimizes client query buffer handling in Valkey by introducing a shared query buffer that is used by default for client reads. This reduces memory usage by ~20KB per client by avoiding allocations for most clients using short (<16KB) complete commands. For larger or partial commands, the client still gets its own private buffer.

The primary changes are:

* Adding a shared query buffer `shared_qb` that clients use by default
* Modifying client querybuf initialization and reset logic
* Copying any partial query from the shared to the private buffer before command execution
* Freeing idle client query buffers when empty to allow reuse of the shared buffer
* Keeping master client query buffers private, as their contents must be preserved for the replication stream

In addition to the memory savings, this change shows a 3% improvement in latency and throughput when running with 1000 active clients. The memory reduction may also help reduce the need to evict clients when reaching the max memory limit, as the query buffer is the main memory consumer per client.

---------

Signed-off-by: Uri Yagelnik <[email protected]>
Signed-off-by: Madelyn Olson <[email protected]>
Co-authored-by: Madelyn Olson <[email protected]>
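As a rough illustration of the scheme above, here is a simplified standalone sketch; `client`, `acquire_query_buffer`, and `release_query_buffer` are illustrative names for this model only, not the actual Valkey API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified standalone model of the reusable query buffer described
 * above; not the real Valkey code. */
#define SHARED_QB_SIZE (16 * 1024)

static __thread char *thread_shared_qb = NULL; /* one buffer per thread */

typedef struct client {
    char *querybuf;   /* NULL while the client has no private buffer */
    size_t qb_len;    /* bytes held over from a partial command */
    int using_shared; /* true while borrowing thread_shared_qb */
} client;

/* Borrow the thread-local buffer for a read, allocating it lazily. */
static char *acquire_query_buffer(client *c) {
    if (c->querybuf) return c->querybuf; /* client already owns a buffer */
    if (!thread_shared_qb) thread_shared_qb = malloc(SHARED_QB_SIZE);
    c->using_shared = 1;
    return thread_shared_qb;
}

/* A fully processed command leaves nothing behind, so the shared buffer
 * is handed back untouched; a partial command is copied into a private
 * buffer so it survives until the client's next read. */
static void release_query_buffer(client *c, const char *buf, size_t leftover) {
    if (!c->using_shared) return;
    if (leftover > 0) {
        c->querybuf = malloc(leftover);
        memcpy(c->querybuf, buf, leftover);
        c->qb_len = leftover;
    }
    c->using_shared = 0;
}
```

In this model, the common case (a short, complete command) never touches `malloc` for the client, which is where the ~20KB-per-client saving comes from.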
(redis#593) The `query buffer resized correctly` test started [failing](https://github.com/valkey-io/valkey/actions/runs/9278013807) with non-jemalloc allocators after PR #258.

With jemalloc, we allocate ~20KB for the query buffer. In the test, we read 1 byte initially and then ensure there is at least 16KB of free space in the buffer for the second read, which is satisfied by jemalloc's 20KB allocation. However, with non-jemalloc allocators, the first read allocates exactly 16KB. When we check again, we don't have 16KB free due to the 1 byte already read. This triggers a greedy reallocation (doubling the requested size of 16KB+1), causing the query buffer size to exceed the 32KB limit, thus failing the test condition.

This PR adjusts the test's query buffer upper limit to 32KB+2.

Signed-off-by: Uri Yagelnik <[email protected]>
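The size arithmetic can be sketched as follows; `greedy_resize` is a hypothetical stand-in for the doubling behavior described above, not the actual sds code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of the resize behavior described above: when the
 * buffer lacks 16KB of free space, the needed size (16KB plus the 1
 * byte already read) is doubled, overshooting the old 32KB test limit. */
#define PROTO_IOBUF_LEN (16 * 1024)

static size_t greedy_resize(size_t needed) {
    return needed * 2; /* greedy: allocate twice the requested size */
}
```

With a non-jemalloc allocator the second read therefore requests `greedy_resize(16KB + 1)`, i.e. 32KB + 2 bytes, which is exactly why the test's upper bound was raised to 32KB + 2.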
Revert "…_qb_used": ProcessingEventsWhileBlocked is not thread safe. This reverts commit 9d70fa3.
did you look for any followup commits fixing bugs introduced by the one you cherry picked? maybe comparing these areas in the code with the latest branch?
src/networking.c
Outdated
__thread sds thread_shared_qb = NULL;
__thread int thread_shared_qb_used = 0; /* Avoid multiple clients using shared query
                                         * buffer due to nested command execution. */
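The `thread_shared_qb_used` flag guards against re-entrancy: a nested command execution could otherwise try to read into a buffer the outer call is still using. A minimal sketch of the acquire/release protocol, with hypothetical helper names:

```c
#include <assert.h>
#include <stdlib.h>

static __thread char *thread_shared_qb = NULL;
static __thread int thread_shared_qb_used = 0;

/* Returns the reusable buffer, or NULL when it is already lent out,
 * in which case the caller must fall back to a private allocation. */
static char *try_acquire_shared_qb(void) {
    if (thread_shared_qb_used) return NULL;
    if (!thread_shared_qb) thread_shared_qb = malloc(16 * 1024);
    thread_shared_qb_used = 1;
    return thread_shared_qb;
}

static void release_shared_qb(void) {
    thread_shared_qb_used = 0;
}
```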
only now (when looking at the code for the first time) i realize the buffer isn't shared. it's re-usable.
i.e. not serving multiple clients at the same time.
that said, i suppose we won't want to rename it...
yeah, it's more of a public query buffer.
do you mean it should be `thread_querybuffer` or `thread_querybuffer_reusable`?
in my eyes it's a bad term for this purpose. i'd think that a shared buffer is one that is used by multiple entities at the same time. i'd just replace the term "shared" with "reusable", in both variable names and comments.
but considering we might wanna cherry pick later fixes from valkey, this rename might be unproductive.
so i'd be ok with keeping the name and just editing the comment that describes it.
your call.
src/networking.c
Outdated
 * and a new empty buffer will be allocated for the shared buffer. */
static void resetSharedQueryBuf(client *c) {
    serverAssert(c->flags & CLIENT_SHARED_QUERYBUFFER);
    if (c->querybuf != thread_shared_qb || sdslen(c->querybuf) > c->qb_pos) {
`c->querybuf != thread_shared_qb`: how is this possible? and if it is, do we really want to do `thread_shared_qb = NULL`?
> `c->querybuf != thread_shared_qb`: how is this possible?

`c->querybuf` may be expanded in processMultibulkBuffer().
https://github.com/redis/redis/blob/60f22ca830c59a630b4156b112f5e73ce75adc64/src/networking.c#L2401

> and if it is, do we really want to do `thread_shared_qb = NULL`?

this means that `c->querybuf` has acquired ownership, the old pointer to thread_shared_qb is invalid, and we need to reset it so that it can be created again when it is used again.
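The ownership handoff discussed here can be sketched like this (a simplified standalone model; the real `resetSharedQueryBuf` operates on sds strings and client flags):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the reset logic: if parsing reallocated the buffer
 * (so c->querybuf no longer equals thread_shared_qb) or unparsed bytes
 * remain, the client keeps the buffer and the thread-local slot is
 * cleared so a fresh shared buffer is created on next use. */
static __thread char *thread_shared_qb = NULL;

typedef struct client {
    char *querybuf;
    size_t qb_len;  /* total bytes in the buffer */
    size_t qb_pos;  /* bytes already parsed */
    int using_shared;
} client;

static void reset_shared_query_buf(client *c) {
    if (c->querybuf != thread_shared_qb || c->qb_len > c->qb_pos) {
        thread_shared_qb = NULL; /* client took ownership of the buffer */
    } else {
        c->querybuf = NULL;      /* hand the buffer back untouched */
        c->qb_len = c->qb_pos = 0;
    }
    c->using_shared = 0;
}
```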
@@ -2674,6 +2701,7 @@ void readQueryFromClient(connection *conn) {
    if (c->reqtype == PROTO_REQ_MULTIBULK && c->multibulklen && c->bulklen != -1
        && c->bulklen >= PROTO_MBULK_BIG_ARG)
    {
        if (!c->querybuf) c->querybuf = sdsempty();
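The lazy `sdsempty()` here exists because big bulk arguments deliberately bypass the reusable buffer: reading them into a private buffer lets the payload later be used in place. A hypothetical sketch of that decision (illustrative names, not the real code path):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical model: small commands read into the thread's reusable
 * buffer, while a bulk argument at or above PROTO_MBULK_BIG_ARG gets a
 * lazily created private buffer sized for the payload plus CRLF. */
#define PROTO_MBULK_BIG_ARG (32 * 1024)

typedef struct client { char *querybuf; } client;

static __thread char thread_shared_qb[16 * 1024];

static char *pick_read_target(client *c, long long bulklen) {
    if (bulklen >= PROTO_MBULK_BIG_ARG) {
        if (!c->querybuf) c->querybuf = malloc((size_t)bulklen + 2);
        return c->querybuf;
    }
    return thread_shared_qb;
}
```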
maybe add a comment that we don't reuse the shared buffer here because we aim for the big arg optimization? or do you think it's clear from the context above?
done with 3ebb1b3 (#337)
tests/unit/querybuf.tcl
Outdated
@@ -63,7 +104,7 @@ start_server {tags {"querybuf slow"}} {
     # Write something smaller, so query buf peak can shrink
     $rd set x [string repeat A 100]
     set new_test_client_qbuf [client_query_buffer test_client]
-    if {$new_test_client_qbuf < $orig_test_client_qbuf} { break }
+    if {$new_test_client_qbuf < $orig_test_client_qbuf && $new_test_client_qbuf > 0} { break }
was there a race condition here?
this is a cherry-pick (CP) mistake from unstable Valkey.
tests/unit/querybuf.tcl
Outdated
@@ -78,6 +119,11 @@ start_server {tags {"querybuf slow"}} {
     $rd write "*3\r\n\$3\r\nset\r\n\$1\r\na\r\n\$1000000\r\n"
     $rd flush

+    after 200
we need to wait for redis to read that incomplete command? maybe we better use wait_for to avoid timing issues.
same in theory applies for the `after 20` below, but i'm not sure we can detect it. maybe by looking at the `argv-mem` and `qbuf` fields in CLIENT LIST?
> we need to wait for redis to read that incomplete command? maybe we better use wait_for to avoid timing issues.

fixed in e5a4a67 (#337).

> same in theory applies for the `after 20` below, but i'm not sure we can detect it. maybe by looking at the `argv-mem` and `qbuf` fields in CLIENT LIST?

IIRC, the `after 20` is used to verify that the next client cron doesn't shrink the client's query buffer. in e5a4a67 (#337), we turn on cron before the `after`, so `after 120` should be more reasonable.
well, that can still have a race condition, if we want to wait for cron, we can maybe use a wait_for on some new tick metric. but we don't do that elsewhere. i suppose 120 is good.
yes, but in theory it's hard for it to run for more than 2 seconds; I'll check it in the daily CI.
no need. if we'll ever see it fail we'll adjust. we have similar after 120 in other places.
test "Client executes small argv commands using shared query buffer" {
    set rd [redis_deferring_client]
    $rd client setname test_client
    set res [r client list]
we have no guarantee that the previous command (setname) was run (missing `rd read`)
missing `rd read`, fixed it.
Co-authored-by: oranagra <[email protected]>
OK, i'll double check it.
We've been seeing some pretty consistent failures from `test-valgrind-test` and `test-sanitizer-address` because of the querybuf test periodically failing. I tracked it down to the test periodically taking too long and the client cron getting triggered. A simple solution is to just disable the cron during the key race condition. I was able to run this locally for 100 iterations without seeing a failure. Example: https://github.com/valkey-io/valkey/actions/runs/9474458354/job/26104103514 and https://github.com/valkey-io/valkey/actions/runs/9474458354/job/26104106830. Signed-off-by: Madelyn Olson <[email protected]>
src/networking.c
Outdated
@@ -28,7 +28,7 @@ int postponeClientRead(client *c);
 char *getClientSockname(client *c);
 int ProcessingEventsWhileBlocked = 0; /* See processEventsWhileBlocked(). */
 __thread sds thread_shared_qb = NULL;
-__thread int thread_shared_qb_used = 0; /* Avoid multiple clients using shared query
+__thread int thread_shared_qb_used = 0; /* Avoid multiple clients using reusable query
you renamed everything, but didn't rename the variable
🙃 i only updated the name in the comments and the `resetReusableQueryBuf()` method (which I added), so it looks like almost everything has changed.
ok, it looked like you changed a lot of lines for that (which could cause a lot of conflicts). i thought i was suggesting just editing one comment. but i didn't look at the original code; do what you think is best.
i actually reworked a pretty big chunk of code, the code is now much more efficient and readable.
ok, so feel free to rename the variable too
This PR is based on the commits from PR valkey-io/valkey#258, valkey-io/valkey#593, valkey-io/valkey#639
This PR optimizes client query buffer handling in Redis by introducing a reusable query buffer that is used by default for client reads. This reduces memory usage by ~20KB per client by avoiding allocations for most clients using short (<16KB) complete commands. For larger or partial commands, the client still gets its own private buffer.

The primary changes are:

* Adding a reusable query buffer `thread_shared_qb` that clients use by default.

In addition to the memory savings, this change shows a 3% improvement in latency and throughput when running with 1000 active clients.

The memory reduction may also help reduce the need to evict clients when reaching the max memory limit, as the query buffer is the main memory consumer per client.

This PR is different from valkey-io/valkey#258 in that it adds `thread_shared_qb_used` to avoid multiple clients requiring the reusable query buffer at the same time.

Signed-off-by: Uri Yagelnik [email protected]
Signed-off-by: Madelyn Olson [email protected]
Co-authored-by: Uri Yagelnik [email protected]
Co-authored-by: Madelyn Olson [email protected]