
Limit parallelism / buffering when writing to a single parquet file in parallel #7591

Closed
alamb opened this issue Sep 18, 2023 · 0 comments · Fixed by #7655
Labels
enhancement New feature or request

Comments


alamb commented Sep 18, 2023

Is your feature request related to a problem or challenge?

When writing to a parquet file in parallel, the implementation in #7562 may buffer encoded parquet data faster than it can be written to the final output, because there is no back pressure and the intermediate files are buffered entirely in memory.
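The missing back pressure can be illustrated with a bounded channel: if the buffer between producers and the writer has a fixed capacity, a producer that outruns the writer blocks instead of growing memory without bound. This is a minimal stand-alone sketch using `std::sync::mpsc::sync_channel`, not the actual DataFusion implementation; `produce_and_consume` and the byte payloads are hypothetical stand-ins for encoded row groups.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical sketch: a bounded channel caps how many encoded row groups
// can sit in memory at once. With a small capacity, a fast producer blocks
// on `send` until the consumer drains a slot -- that blocking is the back
// pressure the unbounded buffering in #7562 lacks.
fn produce_and_consume(num_row_groups: usize, capacity: usize) -> usize {
    let (tx, rx) = sync_channel::<Vec<u8>>(capacity);
    let producer = thread::spawn(move || {
        for i in 0..num_row_groups {
            // stand-in for an encoded row group
            tx.send(vec![i as u8; 16]).unwrap(); // blocks when buffer is full
        }
    });
    let mut written = 0;
    for bytes in rx {
        written += bytes.len(); // stand-in for flushing to the final output
    }
    producer.join().unwrap();
    written
}
```

With capacity 2, at most two encoded row groups are held in the channel at any moment, regardless of how many the producer generates.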

As described by @devinjdangelo in #7562 (comment)

I think the best possible solution would consume the sub parquet files incrementally from memory as they are produced, rather than buffering the entire file.

And #7562 (comment)

Ultimately, I'd like to be able to call SerializedRowGroupWriter.append_column as soon as possible -- before any parquet file has been completely serialized in memory. I.e., as a parallel task finishes encoding a single column for a single row group, eagerly flush those bytes to the concatenation task, then flush to ObjectStore and discard from memory. If the concatenation task can keep up with all of the parallel serializing tasks, then we could avoid ever buffering an entire row group in memory.
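The per-column eager-flush idea quoted above can be sketched as parallel encoder tasks handing each encoded column chunk to a single concatenation task through a bounded channel. This is a hedged illustration only: `encode_column` and `concatenate_eagerly` are hypothetical stand-ins for real parquet column encoding and for the `append_column`-then-flush step, not APIs from the parquet crate.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Hypothetical stand-in for encoding one column of a row group.
fn encode_column(col: usize, rows: usize) -> Vec<u8> {
    vec![col as u8; rows]
}

// Each encoder task sends its column's bytes as soon as they are encoded;
// a single concatenation task drains them, so at most `capacity = 1` chunk
// waits in the channel -- encoders block rather than accumulate memory.
fn concatenate_eagerly(num_cols: usize, rows: usize) -> usize {
    let (tx, rx) = sync_channel::<(usize, Vec<u8>)>(1);
    let mut handles = Vec::new();
    for col in 0..num_cols {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            // blocks if the concatenation task has fallen behind
            tx.send((col, encode_column(col, rows))).unwrap();
        }));
    }
    drop(tx); // close the channel once all encoders hold their own sender
    let mut total = 0;
    for (_col, bytes) in rx {
        total += bytes.len(); // stand-in for append_column + flush + discard
    }
    for h in handles {
        h.join().unwrap();
    }
    total
}
```

In a real implementation the concatenation task would also need to write chunks in the correct column order; the sketch elides that bookkeeping to show only the memory-bounding structure.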

Describe the solution you'd like

I would like to see the output row groups written as they are produced, rather than all buffered and written after the fact, as suggested by @devinjdangelo.

Describe alternatives you've considered

No response

Additional context

No response
