Fix Hive Parquet test failure on certain Arm machines #15185
Conversation
The code used double-precision arithmetic to compute bounds for decimal values. This is prone to double rounding errors and indeed produces "approximate" results (the approximation is rather poor for decimal precision > 17). On some machines, this could push `start` or `end` beyond the allowed range for the type (exceeding its precision), so the test would fail due to a test value overflow. The failure could surface as an NPE, because Hive replaces overflows with nulls. For example, printing `BigDecimal.valueOf(Math.pow(10, 29)).subtract(BigDecimal.valueOf(1)).negate().toBigInteger()` was observed producing "-100000000000000009999999999999" (30 digits and the minus sign) on one laptop and "-99999999999999999999999999999" (29 digits and the minus sign) on another.
Do not use `BigDecimal` where `BigInteger` would suffice.
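For context, here is a minimal sketch contrasting the two approaches (the class and method names are hypothetical, not from this PR): `Math.pow` returns a `double`, whose 53-bit significand cannot hold 10^29 (which needs 97 bits), so the computed bound may gain a digit on some CPUs, while the pure-`BigInteger` version is exact everywhere.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class DecimalBounds
{
    // Buggy variant: 10^29 needs 97 bits, but a double has only a 53-bit
    // significand, and Math.pow results may additionally vary by platform
    // within 1 ulp. The error can change the digit count of the bound.
    static BigInteger maxUnscaledViaDouble(int precision)
    {
        return BigDecimal.valueOf(Math.pow(10, precision))
                .subtract(BigDecimal.valueOf(1))
                .toBigInteger();
    }

    // Exact variant: 10^precision - 1 computed in integer arithmetic only.
    static BigInteger maxUnscaledExact(int precision)
    {
        return BigInteger.TEN.pow(precision).subtract(BigInteger.ONE);
    }

    public static void main(String[] args)
    {
        // On some CPUs the double-based value for precision 29 prints with
        // 30 digits (100000000000000009999999999999), exceeding the type's
        // precision; the exact value always has 29 digits.
        System.out.println(maxUnscaledViaDouble(29));
        System.out.println(maxUnscaledExact(29));
    }
}
```

Since the bounds are integers by construction, computing them with `BigInteger.TEN.pow` avoids floating point entirely, presumably along the lines of the fix here.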
Can you share what the difference between the architectures is here? This seems really interesting.
@skrzypo987 see the first commit message
Wow. I guess the problem was on an M1 CPU. I wonder if common server ARMs suffer from the same "feature".
Shall we assume that
Nothing calculated using floating-point arithmetic has a deterministic result anywhere. I wouldn't be surprised if the result from the M1 was correct, because it's a new CPU and they forgot to make it wrong to be compatible with older ones.
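As a side note on where the platform dependence enters, here is a small sketch (not from the PR): `java.lang.Math.pow` is only required to be within 1 ulp of the exact result and may differ between platforms, whereas `java.lang.StrictMath.pow` is defined to give bit-identical results on every JVM.

```java
public class PowDeterminism
{
    public static void main(String[] args)
    {
        double lenient = Math.pow(10, 29);       // may vary by platform/CPU
        double strict = StrictMath.pow(10, 29);  // bit-identical everywhere
        System.out.println(Double.toHexString(lenient));
        System.out.println(Double.toHexString(strict));
        // Either double is still inexact: 10^29 needs 97 bits, a double has
        // a 53-bit significand, so exact bounds require BigInteger anyway.
    }
}
```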
CI #15187