
Support streaming datasets with pyarrow.parquet.read_table #6251

Merged

Conversation

albertvillanova
Member

Support streaming datasets with pyarrow.parquet.read_table.

See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2

CC: @AndreaFrancis

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Sep 20, 2023

The documentation is not available anymore as the PR was closed or merged.

@mariosasko
Collaborator

This function reads an entire Arrow table in one go, which is not ideal memory-wise, so I don't think we should encourage its use, considering we want to keep RAM usage as low as possible in streaming mode.

(Note that Parquet files are compressed, meaning the loaded table can be significantly larger than the size in Parquet.)

Instead, we should suggest that the authors use:

```python
import pyarrow as pa
import pyarrow.parquet as pq

with open(doc_path, "rb") as f:
    parquet_file = pq.ParquetFile(f)
    # idx is the running example index, initialized earlier in the generator
    for batch in parquet_file.iter_batches():
        pa_table = pa.Table.from_batches([batch])
        yield idx, pa_table
        idx += 1
```
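The batched approach yields the same rows as a single read_table call, but with bounded peak memory. A minimal sketch (toy file and name made up for the example):

```python
import pyarrow as pa
import pyarrow.parquet as pq

pq.write_table(pa.table({"text": [f"doc {i}" for i in range(1000)]}),
               "shard.parquet")

# One-shot: the whole (decompressed) table lands in RAM at once.
full = pq.read_table("shard.parquet")

# Batched: rows arrive in bounded-size record batches.
num_rows = 0
for batch in pq.ParquetFile("shard.parquet").iter_batches(batch_size=100):
    num_rows += batch.num_rows

assert num_rows == full.num_rows  # same data either way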

@albertvillanova
Member Author

@mariosasko I think the potential problem you raise is independent of whether or not we support streaming mode:

  • if the user's script with read_table works in non-streaming mode, it will also work in streaming mode after this PR

In fact, what we should suggest instead is the script-less approach, so that our packaged parquet module is used, with all its optimizations. But this approach is not possible in all cases, and some use cases need to implement a script. And if they have small Parquet files and use read_table, I think we should support streaming.

In summary, let me clarify the goal and the scope of this PR:

  • a user needs to use a loading script
  • their files are small enough so that they use read_table
  • their loading script works in non-streaming mode
  • therefore, this PR allows loading their dataset in streaming mode as well

@mariosasko
Collaborator

Yes, the no-script approach with metadata configs makes the most sense.

> their files are small enough so that they use read_table

Some of the Parquet files in that repo are larger than 1 GB ...

Also, I'd wait for more instances of people using the read_table function on the Hub before merging this PR.

@albertvillanova
Member Author

@mariosasko, yes, this solution is not specifically for the "uonlp/CulturaX" dataset, but for other use cases, as I explained above. Indeed, they finally removed the use of read_table because their data files are too large.

> Also, I'd wait for more instances of people using the read_table function on the Hub before merging this PR.

Do you know how many datasets are currently using read_table?

@mariosasko
Collaborator

> Do you know how many datasets are currently using read_table?

Zero (based on the script that checks the script contents of the public Hub datasets).

@albertvillanova
Member Author

I see... Thanks! 🤗

@albertvillanova
Member Author

I'm merging this PR as discussed in private.

@albertvillanova albertvillanova merged commit dd786d3 into huggingface:main Sep 27, 2023
10 of 12 checks passed
@albertvillanova albertvillanova deleted the stream-pq-read-table branch September 27, 2023 06:26
@github-actions


PyArrow==8.0.0


Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.008267 / 0.011353 (-0.003086) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.005813 / 0.011008 (-0.005195) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.108802 / 0.038508 (0.070294) |
| read_batch_unformated after write_array2d | 0.093996 / 0.023109 (0.070886) |
| read_batch_unformated after write_flattened_sequence | 0.403115 / 0.275898 (0.127217) |
| read_batch_unformated after write_nested_sequence | 0.457299 / 0.323480 (0.133819) |
| read_col_formatted_as_numpy after write_array2d | 0.006277 / 0.007986 (-0.001709) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.004701 / 0.004328 (0.000373) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.080700 / 0.004250 (0.076449) |
| read_col_unformated after write_array2d | 0.077906 / 0.037052 (0.040854) |
| read_col_unformated after write_flattened_sequence | 0.409972 / 0.258489 (0.151483) |
| read_col_unformated after write_nested_sequence | 0.477707 / 0.293841 (0.183867) |
| read_formatted_as_numpy after write_array2d | 0.041816 / 0.128546 (-0.086731) |
| read_formatted_as_numpy after write_flattened_sequence | 0.011250 / 0.075646 (-0.064397) |
| read_formatted_as_numpy after write_nested_sequence | 0.390634 / 0.419271 (-0.028637) |
| read_unformated after write_array2d | 0.065361 / 0.043533 (0.021828) |
| read_unformated after write_flattened_sequence | 0.404501 / 0.255139 (0.149362) |
| read_unformated after write_nested_sequence | 0.448162 / 0.283200 (0.164962) |
| write_array2d | 0.032823 / 0.141683 (-0.108860) |
| write_flattened_sequence | 1.899892 / 1.452155 (0.447737) |
| write_nested_sequence | 2.044561 / 1.492716 (0.551844) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.241093 / 0.018006 (0.223086) |
| get_batch_of_1024_rows | 0.482111 / 0.000490 (0.481622) |
| get_first_row | 0.005505 / 0.000200 (0.005305) |
| get_last_row | 0.000094 / 0.000054 (0.000039) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.034861 / 0.037411 (-0.002551) |
| shard | 0.109296 / 0.014526 (0.094770) |
| shuffle | 0.127594 / 0.176557 (-0.048962) |
| sort | 0.191815 / 0.737135 (-0.545320) |
| train_test_split | 0.122630 / 0.296338 (-0.173709) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.452194 / 0.215209 (0.236985) |
| read 50000 | 4.486200 / 2.077655 (2.408545) |
| read_batch 50000 10 | 2.155635 / 1.504120 (0.651515) |
| read_batch 50000 100 | 2.004569 / 1.541195 (0.463374) |
| read_batch 50000 1000 | 2.142570 / 1.468490 (0.674080) |
| read_formatted numpy 5000 | 0.561488 / 4.584777 (-4.023289) |
| read_formatted pandas 5000 | 4.381102 / 3.745712 (0.635390) |
| read_formatted tensorflow 5000 | 3.914920 / 5.269862 (-1.354942) |
| read_formatted torch 5000 | 2.474271 / 4.565676 (-2.091406) |
| read_formatted_batch numpy 5000 10 | 0.067528 / 0.424275 (-0.356747) |
| read_formatted_batch numpy 5000 1000 | 0.008723 / 0.007607 (0.001116) |
| shuffled read 5000 | 0.536077 / 0.226044 (0.310033) |
| shuffled read 50000 | 5.342050 / 2.268929 (3.073122) |
| shuffled read_batch 50000 10 | 2.735747 / 55.444624 (-52.708877) |
| shuffled read_batch 50000 100 | 2.353938 / 6.876477 (-4.522539) |
| shuffled read_batch 50000 1000 | 2.442878 / 2.142072 (0.300805) |
| shuffled read_formatted numpy 5000 | 0.685404 / 4.805227 (-4.119823) |
| shuffled read_formatted_batch numpy 5000 10 | 0.156657 / 6.500664 (-6.344007) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.071714 / 0.075469 (-0.003755) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.562852 / 1.841788 (-0.278935) |
| map fast-tokenizer batched | 24.538203 / 8.074308 (16.463895) |
| map identity | 16.857777 / 10.191392 (6.666385) |
| map identity batched | 0.184221 / 0.680424 (-0.496203) |
| map no-op batched | 0.021688 / 0.534201 (-0.512513) |
| map no-op batched numpy | 0.470700 / 0.579283 (-0.108583) |
| map no-op batched pandas | 0.470593 / 0.434364 (0.036229) |
| map no-op batched pytorch | 0.645066 / 0.540337 (0.104729) |
| map no-op batched tensorflow | 0.756075 / 1.386936 (-0.630861) |
PyArrow==latest

Benchmark: benchmark_array_xd.json

| metric | new / old (diff) |
|---|---|
| read_batch_formatted_as_numpy after write_array2d | 0.009486 / 0.011353 (-0.001867) |
| read_batch_formatted_as_numpy after write_flattened_sequence | 0.004694 / 0.011008 (-0.006314) |
| read_batch_formatted_as_numpy after write_nested_sequence | 0.080216 / 0.038508 (0.041708) |
| read_batch_unformated after write_array2d | 0.093479 / 0.023109 (0.070369) |
| read_batch_unformated after write_flattened_sequence | 0.537353 / 0.275898 (0.261455) |
| read_batch_unformated after write_nested_sequence | 0.551631 / 0.323480 (0.228151) |
| read_col_formatted_as_numpy after write_array2d | 0.007373 / 0.007986 (-0.000613) |
| read_col_formatted_as_numpy after write_flattened_sequence | 0.004044 / 0.004328 (-0.000285) |
| read_col_formatted_as_numpy after write_nested_sequence | 0.075301 / 0.004250 (0.071051) |
| read_col_unformated after write_array2d | 0.069408 / 0.037052 (0.032355) |
| read_col_unformated after write_flattened_sequence | 0.527962 / 0.258489 (0.269473) |
| read_col_unformated after write_nested_sequence | 0.559423 / 0.293841 (0.265582) |
| read_formatted_as_numpy after write_array2d | 0.039351 / 0.128546 (-0.089195) |
| read_formatted_as_numpy after write_flattened_sequence | 0.010801 / 0.075646 (-0.064845) |
| read_formatted_as_numpy after write_nested_sequence | 0.092803 / 0.419271 (-0.326468) |
| read_unformated after write_array2d | 0.058876 / 0.043533 (0.015343) |
| read_unformated after write_flattened_sequence | 0.513742 / 0.255139 (0.258603) |
| read_unformated after write_nested_sequence | 0.574666 / 0.283200 (0.291466) |
| write_array2d | 0.030277 / 0.141683 (-0.111406) |
| write_flattened_sequence | 1.884936 / 1.452155 (0.432782) |
| write_nested_sequence | 2.008260 / 1.492716 (0.515543) |

Benchmark: benchmark_getitem_100B.json

| metric | new / old (diff) |
|---|---|
| get_batch_of_1024_random_rows | 0.242162 / 0.018006 (0.224156) |
| get_batch_of_1024_rows | 0.467400 / 0.000490 (0.466910) |
| get_first_row | 0.005348 / 0.000200 (0.005148) |
| get_last_row | 0.000103 / 0.000054 (0.000049) |

Benchmark: benchmark_indices_mapping.json

| metric | new / old (diff) |
|---|---|
| select | 0.038022 / 0.037411 (0.000611) |
| shard | 0.108239 / 0.014526 (0.093713) |
| shuffle | 0.121514 / 0.176557 (-0.055042) |
| sort | 0.184951 / 0.737135 (-0.552184) |
| train_test_split | 0.123138 / 0.296338 (-0.173200) |

Benchmark: benchmark_iterating.json

| metric | new / old (diff) |
|---|---|
| read 5000 | 0.558587 / 0.215209 (0.343377) |
| read 50000 | 5.740312 / 2.077655 (3.662657) |
| read_batch 50000 10 | 3.172164 / 1.504120 (1.668044) |
| read_batch 50000 100 | 2.852908 / 1.541195 (1.311713) |
| read_batch 50000 1000 | 2.894435 / 1.468490 (1.425945) |
| read_formatted numpy 5000 | 0.586399 / 4.584777 (-3.998378) |
| read_formatted pandas 5000 | 4.498342 / 3.745712 (0.752630) |
| read_formatted tensorflow 5000 | 4.000569 / 5.269862 (-1.269292) |
| read_formatted torch 5000 | 2.610988 / 4.565676 (-1.954688) |
| read_formatted_batch numpy 5000 10 | 0.068415 / 0.424275 (-0.355860) |
| read_formatted_batch numpy 5000 1000 | 0.008602 / 0.007607 (0.000994) |
| shuffled read 5000 | 0.614731 / 0.226044 (0.388686) |
| shuffled read 50000 | 6.068158 / 2.268929 (3.799229) |
| shuffled read_batch 50000 10 | 3.301070 / 55.444624 (-52.143554) |
| shuffled read_batch 50000 100 | 2.868034 / 6.876477 (-4.008443) |
| shuffled read_batch 50000 1000 | 2.959072 / 2.142072 (0.816999) |
| shuffled read_formatted numpy 5000 | 0.684174 / 4.805227 (-4.121053) |
| shuffled read_formatted_batch numpy 5000 10 | 0.154099 / 6.500664 (-6.346565) |
| shuffled read_formatted_batch numpy 5000 1000 | 0.070641 / 0.075469 (-0.004828) |

Benchmark: benchmark_map_filter.json

| metric | new / old (diff) |
|---|---|
| filter | 1.835667 / 1.841788 (-0.006120) |
| map fast-tokenizer batched | 24.981645 / 8.074308 (16.907337) |
| map identity | 17.218517 / 10.191392 (7.027125) |
| map identity batched | 0.197055 / 0.680424 (-0.483368) |
| map no-op batched | 0.025465 / 0.534201 (-0.508736) |
| map no-op batched numpy | 0.523498 / 0.579283 (-0.055785) |
| map no-op batched pandas | 0.528268 / 0.434364 (0.093904) |
| map no-op batched pytorch | 0.599518 / 0.540337 (0.059180) |
| map no-op batched tensorflow | 0.887206 / 1.386936 (-0.499730) |
