Bump iree to 20231130.724 #212
Conversation
(In the future, a more descriptive PR title/description please.)
Also, looks like some test failures with the bump.
Bumping torch-mlir will break the llama test; some more ops need to be supported in the torch-to-linalg lowering:
I don't really know where we got this particular test from, and it is not exercising our production path. If we must, we can xfail it and fix forward.
It's from here. I think we should xfail it: #36
Can we add an issue to fix it when we do that, and make sure we actually work it? It is important to have some generality in coverage, but this stuff is new enough that I don't think we need to stop the train for it.
Here we go: #221
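For reference, a minimal sketch of how the llama test could be marked as an expected failure with pytest. The test name and reason string below are illustrative, not the actual change made in #221:

```python
import pytest

# Hypothetical: mark the llama end-to-end test as an expected failure until the
# missing torch-to-linalg lowerings land upstream. strict=False lets the test
# pass quietly once the lowerings arrive, without breaking CI in the meantime.
@pytest.mark.xfail(
    reason="missing torch-to-linalg lowerings after the torch-mlir bump (#221)",
    strict=False,
)
def test_llama_e2e():
    ...
```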
One more test error with uninitialized=True: SHARK-Turbine/tests/aot/globals_test.py, line 351 at 0c658bd.
This was added in #201 and worked with 20231121.715, but with 20231130.724 it regresses in IREE. Might the uninitialized=True support patch have been reverted in IREE?
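For context, a rough sketch of the kind of construct that exercises this path, assuming the shark_turbine.aot API shape; the module and class names are illustrative and the exact keywords may differ from the real globals_test.py:

```python
import torch
from shark_turbine import aot


class SimpleParams(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = torch.nn.Linear(20, 30)


# Illustrative only: export the module's parameters as uninitialized, mutable
# globals so the compiled module reserves storage without baking initial values
# into the IR. uninitialized=True is the behavior added in #201.
class GlobalModule(aot.CompiledModule):
    params = aot.export_parameters(
        SimpleParams(), uninitialized=True, mutable=True
    )
```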
Something is wrong -- that revert could not be present in anything built in the last two weeks. We need to look at iree and see why it seems to be going back in time.
Where are you seeing this? I think it must be testing an old wheel somehow.
To solve the batchnorm2d issue nod-ai#110. Xfail llama_test because of missing ops from torch to linalg.
It's on my local machine. You are right; I just double-checked it and it was trying to use my old iree_build.
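A quick, generic sanity check for this kind of mix-up is to print which IREE packages Python actually resolves; nothing here is specific to this PR:

```python
# Confirm which IREE compiler/runtime Python is actually importing, so a stale
# local build (e.g. an old iree_build on PYTHONPATH) is easy to spot.
from importlib import metadata

import iree.compiler
import iree.runtime

print("iree-compiler wheel:", metadata.version("iree-compiler"))
print("iree-runtime wheel:", metadata.version("iree-runtime"))
print("compiler imported from:", iree.compiler.__file__)
print("runtime imported from:", iree.runtime.__file__)
```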
To solve the issue #110