perf: Combine small chunks in sinks for streaming pipelines #14346

Merged: ritchie46 merged 15 commits into pola-rs:main from itamarst:11699-combine-small-chunks-when-writing on Feb 13, 2024
Commits (15)

- de43480 perf: Make sure we don't write tiny chunks to a file (#11699) (pythonspeed)
- 8e82812 Refactor to be agnostic to the number of threads. (pythonspeed)
- 0f221d2 Choose a faster constant. (pythonspeed)
- 64b039c A better way to deal with interleaved small and large chunks. (pythonspeed)
- 21e98e1 Sketch of adding buffering to OrderedSink. (pythonspeed)
- 92d7d4d A nicer implementation. (pythonspeed)
- 424da2f Move to a better location, better docs. (pythonspeed)
- 27a7060 Add a test. (pythonspeed)
- 60929c9 Fix formatting. (pythonspeed)
- e571f0b Switch away from a const generic. (pythonspeed)
- e62f122 Restrict vstacking to OrderedSink only. (pythonspeed)
- 6030433 Drop proptest. (pythonspeed)
- 463568f Making chunks contiguous is now the caller's responsibility. (pythonspeed)
- db50a74 only in file-writer (ritchie46)
- 126ccc1 feature gate (ritchie46)
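The commits above describe the core technique: buffer ("vstack") small incoming chunks in the sink and only hand them to the writer once enough rows have accumulated, so the file writer never sees a stream of tiny chunks. A minimal Python sketch of that idea, not the actual polars implementation (which is in Rust, and the row threshold here is an arbitrary illustration):

```python
class ChunkBuffer:
    """Sketch of the PR's buffering idea: accumulate small chunks and
    emit one combined chunk once a minimum row count is reached.
    Chunks are modeled as plain lists of rows for illustration."""

    def __init__(self, min_rows=50_000):
        self.min_rows = min_rows
        self.pending = []       # buffered small chunks
        self.pending_rows = 0   # total rows currently buffered

    def push(self, chunk):
        """Accept a chunk; return a combined chunk ready to write,
        or None while still accumulating."""
        self.pending.append(chunk)
        self.pending_rows += len(chunk)
        if self.pending_rows >= self.min_rows:
            return self.flush()
        return None

    def flush(self):
        """Concatenate everything buffered (the 'vstack') and reset.
        Called by push() on overflow, and by the sink at end of stream."""
        combined = [row for chunk in self.pending for row in chunk]
        self.pending = []
        self.pending_rows = 0
        return combined
```

For example, with `min_rows=5`, pushing a 2-row chunk returns `None`, and pushing a further 3-row chunk returns the combined 5-row chunk. The sink must call `flush()` once more at the end of the stream so a final partial buffer is not lost.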
Conversation
If we were going to rechunk, we could simply rechunk here. But I don't want to do that, as rechunking should be left to the consumer of the streaming engine.
In that case it should probably be done in write_parquet(), otherwise the collect(streaming=True).write_parquet() case will continue to be slow.
Which would require the new struct to be moved into, e.g., the polars-core crate and made public. Here's the runtime with the latest commit on my computer; the last case is slow again:
But maybe also write_feather(), or the cloud parquet writer, etc. (Having it in OrderedSink seemed like a low-cost smoothing of performance bumps, limited to a single place.)
Then that logic should indeed be in write_parquet. That writer should check the chunk sizes. I will first merge this one, and then we can follow up with the write_parquet optimization.
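The proposed follow-up, a writer that inspects chunk sizes itself and rechunks only when chunks are pathologically small, could look roughly like this. This is a hypothetical stdlib-only sketch: the function name, threshold, and list-of-lists chunk model are illustrative, not actual polars API.

```python
MIN_AVG_ROWS_PER_CHUNK = 10_000  # illustrative threshold, not the PR's constant

def maybe_rechunk(chunks):
    """Return the chunks unchanged if they are big enough on average;
    otherwise concatenate them into a single contiguous chunk.
    Callers that already produce large chunks pay no copy cost."""
    total_rows = sum(len(c) for c in chunks)
    if not chunks or total_rows // len(chunks) >= MIN_AVG_ROWS_PER_CHUNK:
        return chunks  # already fine: no copy performed
    # "Rechunk": one full copy into a single contiguous chunk before writing.
    return [[row for chunk in chunks for row in chunk]]
```

This matches the review's preference: the cost lives in the writer that benefits from contiguous data, rather than in the streaming engine, where it would penalize consumers that handle many small chunks just fine.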
Yes, but it is more expensive for other operations. Operations themselves should know their best chunking strategies.