[Data] Avoid pickling LanceFragment when creating read tasks for Lance #45392
Conversation
Signed-off-by: Cheng Su <[email protected]>
```python
num_rows = sum([f.count_rows() for f in fragments])
input_files = [
    data_file.path() for f in fragments for data_file in f.data_files()
]
```
I think you can still keep these. We've established that `count_rows` is not the slow part; in fact, it's even faster than `get_fragments()`.

Also, small nit: you don't need the `[]` inside of `sum`. If you omit them, you get a generator expression, which avoids allocating the whole list:

```python
num_rows = sum(f.count_rows() for f in fragments)
```
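A small self-contained illustration of that nit, using plain integers as a stand-in for the per-fragment `count_rows()` results: both forms give the same sum, but the generator expression streams values into `sum()` without materializing an intermediate list.

```python
# Stand-in values for what f.count_rows() would return per fragment.
counts = [3, 5, 2]

with_list = sum([c for c in counts])  # allocates a throwaway list first
with_gen = sum(c for c in counts)     # feeds sum() lazily, no extra list

assert with_list == with_gen == 10
```

For a handful of fragments the difference is negligible, but the generator form is the idiomatic choice since it never pays for the intermediate allocation.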
Yes, updated.
```python
for fragment_id in fragment_ids:
    fragment = lance_ds.get_fragment(fragment_id)
    batches = fragment.to_batches(columns=columns, filter=row_filter)
    for batch in batches:
        yield pyarrow.Table.from_batches([batch])
```
If you wanted something that did some IO prefetching, you could instead do:
Suggested change:

```python
fragments = [lance_ds.get_fragment(id) for id in fragment_ids]
scanner = lance_ds.scanner(
    columns,
    filter=row_filter,
    fragments=fragments,
)
for batch in scanner.to_reader():
    yield pyarrow.Table.from_batches([batch])
```
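The prefetching idea behind that suggestion can be sketched without Lance at all. This is a minimal, hypothetical illustration (not Lance's actual implementation): a background thread reads ahead into a bounded queue so that producing the next item overlaps with the consumer's work, which is the same benefit a scanner's reader gives over a plain per-fragment loop.

```python
import queue
import threading

def prefetch(iterable, buffer_size=4):
    """Yield items from `iterable`, reading ahead in a background thread."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks end of the stream

    def producer():
        # Runs concurrently with the consumer; blocks when the buffer is full.
        for item in iterable:
            q.put(item)
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not sentinel:
        yield item

# Stand-in for fragment batches produced by IO.
batches = (i * i for i in range(5))
result = list(prefetch(batches))
print(result)  # [0, 1, 4, 9, 16]
```

Order is preserved because a single producer feeds a FIFO queue; the win comes purely from overlapping the producer's (IO-bound) work with the consumer's.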
Cool, updated.
Why are these changes needed?
Avoid pickling LanceFragment when creating read tasks for Lance, as this is expensive.
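The cost being avoided can be illustrated with a hedged, stand-alone sketch. `HeavyFragment` below is a hypothetical stand-in (not a Lance class) for an object that drags a large payload along when pickled; shipping just its ID to a read task serializes far fewer bytes, which is the essence of this change.

```python
import pickle

class HeavyFragment:
    """Hypothetical stand-in for an object that is expensive to pickle."""
    def __init__(self, fragment_id):
        self.fragment_id = fragment_id
        # Simulated heavy state carried along on every pickle.
        self.metadata = ["row-group-stats"] * 10_000

fragments = [HeavyFragment(i) for i in range(8)]

# Pickling the objects themselves vs. pickling only their IDs.
heavy = len(pickle.dumps(fragments))
light = len(pickle.dumps([f.fragment_id for f in fragments]))

print(f"objects: {heavy} bytes, ids: {light} bytes")
assert light < heavy
```

With the IDs, each read task can cheaply look the fragment back up on the worker (as `lance_ds.get_fragment(fragment_id)` does in the diff) instead of paying the serialization cost up front.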
Related issue number
Checks

- I've signed off every commit (`git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I've added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.