[CI][Java] Integration jobs with Spark fail with NoSuchMethodError:io.netty.buffer.PooledByteBufAllocator #36332
Comments
@BryanCutler @lidavidm @kiszk FYI
Possibly we can do something via runtime reflection (though that would have to be carefully benchmarked).
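For illustration, here is a minimal sketch (not Arrow's actual code) of the kind of runtime probe a reflection-based approach implies: detect at runtime which Netty API is present and branch accordingly. The class name is the real Netty allocator, but the probed method is only a placeholder for whichever member changed between Netty releases; any such branch would sit near the allocation path, hence the benchmarking caveat above.

```java
// Hypothetical sketch of a reflection-based compatibility probe; the probed
// method is a placeholder for whichever PooledByteBufAllocator member
// changed between Netty releases.
public final class NettyCompatProbe {

    /** Returns true if the class exposes a public method with this name and signature. */
    static boolean hasMethod(Class<?> cls, String name, Class<?>... paramTypes) {
        try {
            cls.getMethod(name, paramTypes);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        try {
            Class<?> allocator = Class.forName("io.netty.buffer.PooledByteBufAllocator");
            // "metric" is used here only to illustrate the probe pattern.
            boolean newApi = hasMethod(allocator, "metric");
            System.out.println(newApi ? "use newer Netty code path" : "fall back to older code path");
        } catch (ClassNotFoundException e) {
            System.out.println("Netty is not on the classpath");
        }
    }
}
```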
Thank you for sharing this. Here is a discussion on the Spark side.
Could we have our Spark tests use Arrow with shaded dependencies?
I think shaded Netty would be the best solution (it would also hopefully unblock downstream Spark), but we don't build such an artifact currently. Could we force the test to use Arrow with arrow-memory-unsafe added and arrow-memory-netty excluded?
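For reference, a rough sketch of what that dependency swap could look like in a consuming pom (illustrative only; the version property, scope, and surrounding module are assumptions, not the actual Spark build):

```xml
<!-- Sketch only: depend on the unsafe-based allocator and keep the
     Netty-backed one off the test classpath. -->
<dependency>
  <groupId>org.apache.arrow</groupId>
  <artifactId>arrow-memory-unsafe</artifactId>
  <version>${arrow.version}</version>
  <scope>test</scope>
</dependency>
<!-- And, if arrow-memory-netty is pulled in via another Arrow module,
     exclude it there. -->
<dependency>
  <groupId>org.apache.arrow</groupId>
  <artifactId>arrow-vector</artifactId>
  <version>${arrow.version}</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.arrow</groupId>
      <artifactId>arrow-memory-netty</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```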
@lidavidm @BryanCutler is this, or should this be, a blocker for 13.0.0?
I think we should evaluate whether shading Netty or using arrow-memory-unsafe in place of arrow-memory-netty works, or else evaluate whether something reflection-based might work.
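For the shading route, a rough maven-shade-plugin sketch of relocating Netty inside a hypothetical shaded artifact (the relocated package name is illustrative; as noted above, Arrow does not currently publish such an artifact):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Illustrative relocation target, not an actual Arrow package. -->
            <pattern>io.netty</pattern>
            <shadedPattern>org.apache.arrow.shaded.io.netty</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```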
Er, so that is to say: yes, let's consider this a blocker.
It looks like to shade Netty or use arrow-memory-unsafe, we'd have to modify the Spark pom; I'm not sure that quite qualifies as a solution for non-HEAD Spark.
I sent a note to the Spark ML: https://lists.apache.org/thread/ndmj3ht85j2g40n8clfh92ny6qqbvd09 So far, I think we are leaning towards Spark resolving HEAD on their side and breaking backwards compatibility with older Spark versions.
Thanks @danepitkin!
I think Spark will just have to be ignored.
I never posted the JIRA ticket that was opened on SPARK. Adding it for reference: https://issues.apache.org/jira/projects/SPARK/issues/SPARK-44212
I merged that PR, so hopefully this is fixed.
I ran crossbow on that PR; it looks like it does pass now.
Thanks @lidavidm, I am closing it then!
### What changes were proposed in this pull request?
This PR upgrades Apache Arrow from 13.0.0 to 14.0.0.

### Why are the changes needed?
The Apache Arrow 14.0.0 release brings a number of enhancements and bug fixes. In terms of bug fixes, the release addresses several critical issues that were causing failures in integration jobs with Spark ([GH-36332](apache/arrow#36332)) and problems with importing empty data arrays ([GH-37056](apache/arrow#37056)). It also optimizes the process of appending variable-length vectors ([GH-37829](apache/arrow#37829)) and includes C++ libraries for macOS AArch64 in Java JARs ([GH-38076](apache/arrow#38076)).

The new features and improvements focus on enhancing the handling and manipulation of data. This includes the introduction of DefaultVectorComparators for large types ([GH-25659](apache/arrow#25659)), support for extended expressions in ScannerBuilder ([GH-34252](apache/arrow#34252)), and the exposure of the VectorAppender class ([GH-37246](apache/arrow#37246)). The release also brings enhancements to the development and testing process, with the CI environment now using JDK 21 ([GH-36994](apache/arrow#36994)).

In addition, the release introduces vector validation consistent with C++, ensuring consistency across different languages ([GH-37702](apache/arrow#37702)). Furthermore, the usability of VarChar writers and binary writers has been improved with the addition of extra input methods ([GH-37705](apache/arrow#37705)), and VarCharWriter now supports writing from `Text` and `String` ([GH-37706](apache/arrow#37706)). The release also adds typed getters for StructVector, improving the ease of accessing data ([GH-37863](apache/arrow#37863)).

The full release notes are as follows:
- https://arrow.apache.org/release/14.0.0.html

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass GitHub Actions.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #43650 from LuciferYang/arrow-14.

Lead-authored-by: yangjie01 <[email protected]>
Co-authored-by: YangJie <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
Describe the bug, including details regarding any error messages, version, and platform.
It does seem that #36211, which updated from `PoolThreadCache` to `PoolArenasCache`, has made our nightly integration tests with both previous and current Spark development versions fail. The error is the `NoSuchMethodError` on `io.netty.buffer.PooledByteBufAllocator` shown in the title. Spark hasn't yet updated to Netty 4.1.94.Final. I am unsure how we can fix this, but does this mean we break backwards compatibility with previous Spark versions?
Component(s)
Continuous Integration, Java