Implicit pipelining is slightly slower than not pipelining #147
Oh, it seems the benchmark results depend on each person's environment. I'll close this issue so that people can refer to this benchmark result :)
```
BenchmarkTestQPS/simple-hkeys-40    5000000    8942 ns/op    284 B/op    7 allocs/op
BenchmarkTestQPS/simple-hkeys-40    5000000    8634 ns/op    286 B/op    7 allocs/op
BenchmarkTestQPS/simple-hkeys-40    5000000    9868 ns/op    289 B/op    7 allocs/op
```
The default of 150us was chosen because it's the recommended value for production in https://github.com/joomcode/redispipe (where we got the inspiration for implicit pipelining). There is no other hard reasoning behind it, and we could even change it in the future if it turns out to be too much for the common case, but the "correct" value will always depend on the workload and requirements.
@nussjustin Thanks for the information!
Hi guys, I'm still keeping my eyes on the Radix project!
This issue came from the first time I applied Radix in our production product. We noticed that our latency was slightly higher, by 1~2 ms, and we got a hint from pprof: some of the pipelining-related code showed up in the profiling image. I've tried a simple test with the latest commit and this is the result.
Option 1) With implicit pipelining (no other options)
Option 2) Without implicit pipelining (disabled via options)
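For reference, the two configurations above can be expressed roughly as follows. This is a hedged sketch of the radix v3 pool options as I understand them; the exact option name and signature (`PoolPipelineWindow`) should be checked against the godoc for the version in use.

```go
// Sketch only: assumes github.com/mediocregopher/radix/v3 and its
// PoolPipelineWindow(window, limit) pool option.

// Option 1: defaults, implicit pipelining on (150us window).
pipelined, err := radix.NewPool("tcp", "127.0.0.1:6379", 10)

// Option 2: implicit pipelining disabled (a zero window turns it off).
unpipelined, err := radix.NewPool("tcp", "127.0.0.1:6379", 10,
	radix.PoolPipelineWindow(0, 0),
)
```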
We're handling 25,000+ req/s in our production service, and I think a 1000 ns difference can affect our latency. Any ideas on this?