pipeline mode #2295
Here's how psycopg implemented this: psycopg/psycopg#89.

```python
with conn.pipeline() as pipeline:
    cur.execute(...)
    cur2.execute(...)
    pipeline.sync()  # calls PQpipelineSync
    r1 = cur.fetchone()
    r2 = cur2.fetchall()
```

We might have to pitch this feature on hasql.
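For reference, this is roughly what a hasql binding would need to wrap at the libpq level. A minimal sketch using libpq's pipeline API (PostgreSQL 14+): `PQenterPipelineMode`, `PQsendQueryParams`, `PQpipelineSync`, `PQgetResult` and `PQexitPipelineMode`. The connection string and queries are placeholders and error handling is mostly omitted.

```c
/* Minimal libpq pipeline-mode sketch (needs libpq from PostgreSQL 14+).
 * Placeholder conninfo and queries; error handling mostly omitted.
 * For large batches, the docs recommend non-blocking mode to avoid deadlocks. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    PQenterPipelineMode(conn);

    /* Queue two queries without waiting for either result. */
    PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
    PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
    PQpipelineSync(conn);   /* marks the end of the batch and flushes it */

    /* Results come back in send order: for each query, read until
     * PQgetResult returns NULL, then move on to the next query. */
    for (int q = 0; q < 2; q++) {
        PGresult *res;
        while ((res = PQgetResult(conn)) != NULL) {
            if (PQresultStatus(res) == PGRES_TUPLES_OK)
                printf("query %d -> %s\n", q + 1, PQgetvalue(res, 0, 0));
            PQclear(res);
        }
    }

    /* A PGRES_PIPELINE_SYNC result marks the sync point. */
    PQclear(PQgetResult(conn));

    PQexitPipelineMode(conn);
    PQfinish(conn);
    return 0;
}
```

The psycopg block above is essentially this call sequence with the bookkeeping hidden behind the `pipeline()` context manager.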
Supporting pipeline mode would be great!
Maybe this could also help us do an EXPLAIN before the request is executed. Related to #915 (comment)
Out of curiosity I pgbenched pipeline mode on fa50be2. I got:
So a 14% increase in TPS. This was tested with pg + pgrst on the same machine; I assume that if PostgREST is on a separate machine the gains would be bigger. cc @robx
Thanks for this, particularly for explicitly showing how to get those numbers! I'll wrap up some related experiments with postgrest itself in the loop and post numbers in a bit.
Ok, here are some bounds on what pipeline mode could conceivably get us. With #2682, I ran PostgREST:
Networking:
Then the "request rate" output of postgrest-loadtest is:
This ratio is an upper bound on the improvement we could see with pipelining. The results show that the "actual work" done by
(I'm not that confident in my findings here; if someone wants to replicate this, it should be straightforward using #2682, and that would be great!)
It might be interesting to get some numbers which use a
That's a good point, I missed the fact that there are potentially two statements in

Though maybe it's better to just go ahead with trying out pipelining -- I came at this thinking that the database round trips we'd save probably don't matter enough to be worth the effort of introducing pipelining. My benchmarking could have proved that the potential benefit is irrelevant, but I don't think it has. (It also doesn't prove we'd get that 10-20% improvement, of course.)
Tried the above. Replaced with
I'm able to reproduce the above numbers with `setPgLocals`, so `SELECT 1` is a bit faster. Likely it's not that noticeable with
Currently we send 2 queries for each request: one for the http context (+ role auth + `search_path`) and another one for the CRUD operation. Removing the http context query grants about a 33% increase in TPS with plain `pgbench` tests.

The 2 queries cannot be merged into one because the CRUD query needs the `role` + `search_path` settings beforehand.

I believe libpq pipeline mode could help us gain perf here. With pipeline mode we wouldn't need to wait for the result of the first query (which we don't care about) before sending the second.
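To make that concrete, here is a hedged sketch of the two-query request at the libpq level (not PostgREST's actual code; the SQL, role and table names are placeholders): both queries are queued before any round trip, the context query's result is drained and discarded, and only the CRUD result is consumed.

```c
/* Sketch: queue the request-context query and the CRUD query in one pipeline,
 * so we don't wait a round trip for the first result (which we discard).
 * Placeholder SQL and names, not PostgREST's actual statements. */
#include <stdio.h>
#include <libpq-fe.h>

void run_request(PGconn *conn)
{
    PQenterPipelineMode(conn);

    /* 1st query: role + search_path (+ http context).  Statements queued
     * before the sync point run in an implicit transaction, so these
     * transaction-local settings are visible to the next query. */
    PQsendQueryParams(conn,
                      "SELECT set_config('role', 'web_anon', true), "
                      "       set_config('search_path', 'public', true)",
                      0, NULL, NULL, NULL, NULL, 0);

    /* 2nd query: the CRUD statement itself (placeholder). */
    PQsendQueryParams(conn, "SELECT * FROM projects",
                      0, NULL, NULL, NULL, NULL, 0);

    PQpipelineSync(conn);   /* a single flush sends both queries */

    /* Drain and discard the context query's result(s). */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);

    /* The CRUD result is the only one we actually use. */
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("crud query returned %d row(s)\n", PQntuples(res));
        PQclear(res);
    }

    PQclear(PQgetResult(conn));   /* consume the PGRES_PIPELINE_SYNC marker */
    PQexitPipelineMode(conn);
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=postgres");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;
    run_request(conn);
    PQfinish(conn);
    return 0;
}
```

One caveat: if the first query fails, the server skips the rest of the batch up to the sync point, so even a "don't care" result still needs an error check when its result is drained.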