
[C#] Support compression in IPC format #32370

Closed
asfimport opened this issue Jul 13, 2022 · 1 comment

Comments

@asfimport
Collaborator

asfimport commented Jul 13, 2022

A "hello world" round trip between R's write_feather() and C#'s ArrowFileReader.ReadNextRecordBatch() fails with default settings. This is specific to compressed files (see workaround below). It looks like C# parses the batches correctly but provides the caller with the compressed data buffers instead of the decompressed ones: while all of the various Length properties are set correctly in C#, the data arrays are too short to contain all of the values in the file, the bytes do not match what the decompressed bytes should be, and basic data accessors like PrimitiveArray.Values can't be used because they throw ArgumentOutOfRangeException. Looking through the C# classes in the GitHub repo, there doesn't appear to be a way for the caller to request decompression, so I'm guessing decompression is supposed to be automatic but, for some reason, isn't.
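A quick way to see the mismatch described above (a sketch only; the file name and the DoubleArray cast follow the reproduction further down):

using System;
using System.IO;
using System.Linq;
using Apache.Arrow;
using Apache.Arrow.Ipc;

using FileStream stream = new("test lz4.feather", FileMode.Open, FileAccess.Read, FileShare.Read);
using ArrowFileReader arrowFile = new(stream);
RecordBatch batch = arrowFile.ReadNextRecordBatch();

DoubleArray values = (DoubleArray)batch.Arrays.First();

// Length reports the logical value count from the record batch metadata (21 here)...
Console.WriteLine(values.Length);

// ...but the value buffer is too short to hold that many doubles; it still contains the compressed bytes.
Console.WriteLine(values.ValueBuffer.Length); // fewer than 21 * sizeof(double) = 168 bytes

// Accessors that slice the buffer by Length therefore throw, as described above.
try
{
    ReadOnlySpan<double> span = values.Values; // throws ArgumentOutOfRangeException
}
catch (ArgumentOutOfRangeException)
{
    Console.WriteLine("ValueBuffer is shorter than Length implies");
}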

 

While functionally successful, the workaround of using uncompressed feather isn't great as the uncompressed files are bigger than .csv. In my application the resulting disk space penalty is hundreds of megabytes compared to the footprint of using compressed feather.

 

Simple single-field reprex:

In R (arrow 8.0.0):
write_feather(tibble(value = seq(0, 1, length.out = 21)), "test lz4.feather")

In C# (Apache.Arrow 8.0.0):

using System;
using System.IO;
using System.Linq;
using System.Runtime.InteropServices;
using Apache.Arrow;
using Apache.Arrow.Ipc;

using FileStream stream = new("test lz4.feather", FileMode.Open, FileAccess.Read, FileShare.Read);
using ArrowFileReader arrowFile = new(stream);

for (RecordBatch batch = arrowFile.ReadNextRecordBatch(); batch != null; batch = arrowFile.ReadNextRecordBatch())
{
    IArrowArray[] fields = batch.Arrays.ToArray();

    // 15 incorrect values instead of the 21 correctly incrementing ones (0, 0.05, 0.10, ..., 1)
    ReadOnlySpan<double> test = MemoryMarshal.Cast<byte, double>(((DoubleArray)fields[0]).ValueBuffer.Span);
}

Workaround in R:

write_feather(tibble(value = seq(0, 1, length.out = 21)), "test.feather", compression = "uncompressed")
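With the uncompressed file, the same C# reader code returns the expected values; a minimal check (a sketch, reusing the reader pattern from the reproduction above):

using System;
using System.IO;
using System.Linq;
using Apache.Arrow;
using Apache.Arrow.Ipc;

using FileStream stream = new("test.feather", FileMode.Open, FileAccess.Read, FileShare.Read);
using ArrowFileReader arrowFile = new(stream);

// The uncompressed file decodes correctly with the same reader code.
RecordBatch batch = arrowFile.ReadNextRecordBatch();
DoubleArray values = (DoubleArray)batch.Arrays.First();
Console.WriteLine(string.Join(", ", values.Values.ToArray())); // 0, 0.05, 0.10, ..., 1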

 

Apologies if this is a known issue. I didn't find anything in a Jira search, and this isn't included in the known issues list on GitHub.

Environment: Arrow 8.0.0, R 4.2.1, VS 17.2.4
Reporter: Todd West

Related issues:

Note: This issue was originally created as ARROW-17062. Please see the migration documentation for further details.

@asfimport
Collaborator Author

Neal Richardson / @nealrichardson:
It looks like the C# implementation does not yet support compression: https://arrow.apache.org/docs/status.html#ipc-format
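For later readers: decompression support has since been added to the C# implementation through the separate Apache.Arrow.Compression package. A minimal sketch, assuming a recent Apache.Arrow release plus that package; the reader constructor overload taking a codec factory is an assumption here, not something confirmed in this thread:

using System;
using System.IO;
using System.Linq;
using Apache.Arrow;
using Apache.Arrow.Compression; // assumed: separate NuGet package providing the LZ4/ZSTD codecs
using Apache.Arrow.Ipc;

using FileStream stream = new("test lz4.feather", FileMode.Open, FileAccess.Read, FileShare.Read);

// Assumed overload: passing a codec factory lets the reader decompress record batch buffers.
using ArrowFileReader arrowFile = new(stream, new CompressionCodecFactory());

for (RecordBatch batch = arrowFile.ReadNextRecordBatch(); batch != null; batch = arrowFile.ReadNextRecordBatch())
{
    DoubleArray values = (DoubleArray)batch.Arrays.First();
    Console.WriteLine(string.Join(", ", values.Values.ToArray())); // 0, 0.05, ..., 1 once decompression is applied
}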
