Kafka perf improvements #1964
Conversation
consumer.store_offset(&topic, partition, offset)?;
last_offset = msg.offset();
…
buffer.push(sender.send(&consumer_group_id, msg));
This `send` under the hood awaits on the bifrost append.
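As a rough illustration of the buffering being discussed here, the sketch below shows one way to keep several of those sends (and hence bifrost appends) in flight using a `FuturesUnordered`. This is a hedged sketch, not the PR's actual code; `drain_if_full`, `max_in_flight` and the error type are placeholders.

```rust
use futures::stream::{FuturesUnordered, StreamExt};
use std::future::Future;

/// Illustrative helper (not the PR's code): once the buffer of in-flight
/// `send` futures, each of which awaits a bifrost append, grows past
/// `max_in_flight`, drain it before ingesting more Kafka records.
async fn drain_if_full<F, E>(
    buffer: &mut FuturesUnordered<F>,
    max_in_flight: usize,
) -> Result<(), E>
where
    F: Future<Output = Result<(), E>>,
{
    if buffer.len() >= max_in_flight {
        while let Some(result) = buffer.next().await {
            // Each completed future means one append has been acknowledged.
            result?;
        }
    }
    Ok(())
}
```

In the excerpt above, each `buffer.push(sender.send(&consumer_group_id, msg))` would be paired with such a drain, so appends overlap with consumption instead of being awaited one by one.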
After some thinking, perhaps for the time being it just makes sense to remove this commit with the buffering. In any case, this code will change dramatically when #1651 happens.
Removed it.
What's your take on this, @AhmedSoliman?
I suggest you use Appender or BackgroundAppender when writing to bifrost; it's an easy change and will get you even better performance IMHO. This is adjacent to what we'll do regarding ingress -> pp communication in the future. The downside with all current options (your PR doesn't introduce the issue) is that we don't have proper back-pressure or control over fairness from the partition processor, so it's easy to overwhelm the partition processor if Kafka's ingestion rate is faster than the PP's processing rate.
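For readers outside the codebase, the suggestion boils down to the background-appender pattern: a dedicated task owns the log writer and is fed through a bounded channel, so batching happens in one place and the channel gives the producer side at least coarse back-pressure. The sketch below shows only that general shape; it is not restate's actual `Appender`/`BackgroundAppender` API, and `Record` and `append_batch` are placeholders.

```rust
use std::future::Future;
use tokio::sync::mpsc;

struct Record(Vec<u8>); // placeholder payload type

/// General background-appender shape (illustrative only): wait for at least
/// one record, opportunistically drain whatever else is already queued, and
/// append the whole batch in a single call.
async fn background_appender<F, Fut>(
    mut rx: mpsc::Receiver<Record>,
    mut append_batch: F,
) -> anyhow::Result<()>
where
    F: FnMut(Vec<Record>) -> Fut,
    Fut: Future<Output = anyhow::Result<()>>,
{
    let mut batch = Vec::new();
    while let Some(first) = rx.recv().await {
        batch.push(first);
        while let Ok(more) = rx.try_recv() {
            batch.push(more);
        }
        append_batch(std::mem::take(&mut batch)).await?;
    }
    Ok(())
}
```

The Kafka consumer side would then just `tx.send(record).await`, which suspends when the channel is full. That gives back-pressure between the consumer and the appender task, though, as noted above, still not from the partition processor itself.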
For now I won't make this change; we should revisit it once we sort out how the Kafka ingress behaves in the distributed setup.
Force-pushed from 2e50cf3 to 806b0ff
… to split the main consumer queue into subqueues, such that we can spawn a subtask for each topic-partition tuple. This roughly gives us an 8-10x improvement in throughput (on my machine). (See the sketch after these commit messages.)
…fede1024/rust-rdkafka#638 (comment) for more details. No visible performance differences.
…aph the consumption rate.
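The first commit above is the core change. As a rough, hedged illustration (not the PR's actual code) of the per-partition subtask idea, rust-rdkafka's `split_partition_queue` detaches each topic-partition into its own queue, which a dedicated task can then serve; the broker address, topic name, group id and the partition count of 24 are placeholders matching the test setup described later.

```rust
use std::sync::Arc;

use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};
use rdkafka::Message;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let consumer: Arc<StreamConsumer> = Arc::new(
        ClientConfig::new()
            .set("bootstrap.servers", "localhost:9092")
            .set("group.id", "perf-test")
            .create()?,
    );
    consumer.subscribe(&["my-topic"])?;

    for partition in 0..24 {
        // Detach this partition's messages from the main consumer queue.
        let queue = consumer
            .split_partition_queue("my-topic", partition)
            .expect("partition queue");
        tokio::spawn(async move {
            loop {
                match queue.recv().await {
                    // A real implementation would forward the record to the
                    // ingress / bifrost and store offsets here.
                    Ok(msg) => {
                        let _ = msg.payload();
                    }
                    Err(err) => eprintln!("partition {partition} error: {err}"),
                }
            }
        });
    }

    // The main consumer still has to be driven so librdkafka can serve
    // rebalances and other events; split partitions never show up here.
    loop {
        let msg = consumer.recv().await;
        eprintln!("unexpected message on the main queue: {msg:?}");
    }
}
```

Offset handling is omitted here; the PR keeps the `store_offset` logic shown in the diff excerpt above.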
Force-pushed from 806b0ff to 8762e9d
Nice improvements indeed. 🚢
This is the result of investigating the Kafka ingress performance.
Before: [throughput graph]
After: [throughput graph]
The red line is the Bifrost append throughput, the green line is the PP Invoke command throughput. The Kafka load tool generates as much load as possible (around 25k records/s on my machine).
In both cases the initial slow section seems to be caused by the load tool, which takes a good amount of my CPU. After it finishes producing, Restate takes all the CPU.
In the after case, the Kafka container OOMs before finishing (probably caused by the high load generated by the consumer), so I cut off the graph after that point.
I ran this test using the Rust SDK with a virtual object as the target, and a subscription from the Kafka topic to one of its handlers (sketched below).
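For readers who haven't set one up, a Restate subscription is created through the admin API's /subscriptions endpoint with a payload along the lines of the sketch below; the cluster, topic, service and handler names are placeholders, not the ones used in this test.

```json
{
  "source": "kafka://my-kafka-cluster/my-topic",
  "sink": "service://MyVirtualObject/myHandler",
  "options": { "auto.offset.reset": "earliest" }
}
```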
The Kafka topic has 24 partitions (the same number as Restate's partitions). The throughput improvement in this PR is heavily affected by the number of Kafka topic partitions: more Kafka partitions means higher throughput.
Tuning the knobs, though, has a negligible impact most of the time (at least on my machine). I tried tuning `fetch.wait.max.ms` too; it just increases CPU usage.
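For reference, this is where such knobs live on the rdkafka consumer configuration; a minimal sketch with placeholder broker and group values, not the actual test configuration.

```rust
use rdkafka::config::ClientConfig;
use rdkafka::consumer::StreamConsumer;
use rdkafka::error::KafkaResult;

// Illustrative only: tuning fetch behaviour on the consumer config.
fn build_consumer() -> KafkaResult<StreamConsumer> {
    ClientConfig::new()
        .set("bootstrap.servers", "localhost:9092")
        .set("group.id", "perf-test")
        // librdkafka fetch knob mentioned above; in this test,
        // tuning it only increased CPU usage.
        .set("fetch.wait.max.ms", "100")
        .create()
}
```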