Incorrect NaN handling for Min and Max operators on CPU with a single element input #21455
Labels
contributions welcome (lower priority issues for the core ORT teams)
core runtime (issues related to core runtime)
Comments
github-actions bot added the ep:CUDA (issues related to the CUDA execution provider) label on Jul 23, 2024
skottmckay added the core runtime (issues related to core runtime) label and removed the ep:CUDA (issues related to the CUDA execution provider) label on Jul 23, 2024
May need to use e.g. for Min: onnxruntime/onnxruntime/core/providers/cpu/math/element_wise_ops.cc Lines 721 to 733 in dd010ed
skottmckay added the contributions welcome (lower priority issues for the core ORT teams) label on Jul 23, 2024
I have a fix nearly ready for this, will make a PR soon.
tianleiwu pushed a commit that referenced this issue on Sep 24, 2024:
This makes min and max with NaN for either operand always return NaN for float16 data, matching the behaviour of float and double. The behaviour for floats and doubles was previously fixed for the CPU provider in #21492 and the CUDA provider in #19984, but those PRs didn't fix the behaviour for float16 because the tests caused asan errors. The memory access violations with float16 data have now been fixed in #22135, so this PR is a follow-up to make float16 min and max behave the same as float and double for both the CPU and CUDA providers, now that we can add tests for this.

### Motivation and Context

Relevant previous issues (not float16 specific):
* #21455
* onnx/onnx#6003
Describe the issue
The Min and Max operators should return NaN if any input is NaN, but the CPU provider will incorrectly return the value from the second input when it is a single-element tensor and the first input is NaN. The GPU provider behaves correctly (since #19984).
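The distinction at play can be shown with numpy (an illustration, not ORT code): np.minimum propagates NaN, which is the behaviour Min should have, while np.fmin discards it, which is analogous to the incorrect result described above.

```python
import numpy as np

a = np.array([np.nan, 2.0], dtype=np.float32)
b = np.array([1.0, 1.0], dtype=np.float32)

# np.minimum propagates NaN: the first element of the result is nan.
print(np.minimum(a, b))
# np.fmin ignores NaN and returns the numeric operand: first element is 1.0.
print(np.fmin(a, b))
```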
This issue was discovered when exporting a Torch model to ONNX that used the torch.Tensor.clip method.

To reproduce
This outputs something like:
The first row of the CPU session output should be nan.
If I change num_cols to anything > 1 then the behaviour is correct and the first row is all nans for both the CPU and GPU providers, although I can also reproduce the issue if num_cols is > 1 but the number of rows is 1 and I reverse the order of the inputs to the operator.

Urgency
No response
Platform
Linux
OS Version
Fedora 40
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.18.1
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response