
Merge main -> google #5262

Merged 16 commits into iree-org:google on Mar 31, 2021
Conversation

KoolJBlack (Contributor)

inho9606 and others added 15 commits March 26, 2021 02:20
This allowed a lot of file IO code to go away; the abstraction was needless
since most of these utilities had only a single user, which was already
platform-specialized.

Progress on iree-org#4369 and iree-org#3848.
Fixes iree-org#4642.
Unblocks iree-org#3845, which can now be added cleanly.
This should be done upstream, if someone desires it.
* make bazel build on macOS work again

* run yapf
…ree-org#5234)

Also start adding a framework to query tile size and workgroup size for
different ops.
This commit adjusts dispatch region formation to additionally
recognize linalg.generic as a root op and reverses the order
in which we decide fusion groups. This lets us fuse the producers
of linalg.generic output tensors into the same group, so the
linalg.fill feeding a reduction linalg.generic can be pulled in.
This works for both tiled and non-tiled cases. We already treated
linalg.generic as a root op (as a second step when deciding fusion
groups); this change simplifies the logic by unifying the two steps.

This avoids sad dispatch regions like the following:

```mlir
  flow.executable @call_dispatch_143 attributes {sym_visibility = "private"} {
    flow.dispatch.entry @call_dispatch_143 attributes {signature = () -> tensor<f32>, workgroup_rank = 3 : index}
    module  {
      func @call_dispatch_143(%arg0: !flow.dispatch.tensor<writeonly:f32>) {
        %cst = constant 0xFF800000 : f32
        %0 = linalg.init_tensor [] : tensor<f32>
        %1 = linalg.fill(%0, %cst) : tensor<f32>, f32 -> tensor<f32>
        flow.dispatch.tensor.store %1, %arg0 : tensor<f32> -> !flow.dispatch.tensor<writeonly:f32>
        return
      }
    }
  }
```
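A minimal C++-style sketch of the reversed fusion-group walk described above, using hypothetical `Op`/`isRoot`/`producers` stand-ins (the real pass operates on MLIR linalg ops):

```cpp
#include <vector>

// Hypothetical stand-ins sketching the reversed fusion-group walk described
// above; the real pass operates on linalg ops in MLIR.
struct Op {
  bool isRoot = false;         // e.g. a reduction linalg.generic
  std::vector<Op*> producers;  // ops feeding its output tensors, e.g. linalg.fill
  int fusionGroup = -1;        // -1 means "not assigned yet"
};

// Walk the op list in reverse so each root is seen before its producers; a
// root opens a group and pulls unassigned output-tensor producers into it,
// so a linalg.fill feeding a reduction no longer lands in its own dispatch.
void formFusionGroups(std::vector<Op*> &ops) {
  int nextGroup = 0;
  for (auto it = ops.rbegin(); it != ops.rend(); ++it) {
    Op *op = *it;
    if (!op->isRoot || op->fusionGroup != -1) continue;
    op->fusionGroup = nextGroup++;
    for (Op *producer : op->producers)
      if (producer->fusionGroup == -1)
        producer->fusionGroup = op->fusionGroup;
  }
}
```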
This commit removes the command-line option to force fusion
and its uses in tests. This keeps us honest about what we do
and do not support, avoiding differences and surprises
between test cases and real use cases.

This commit also removes duplicate tests and merges tests
into one file.
Using unbalanced malloc + system-allocator free was breaking Tracy.
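A sketch of the invariant behind the fix (function names hypothetical): every traced allocation must be paired with a free reported through the same allocator's hook, or a profiler like Tracy sees an unmatched event.

```cpp
#include <cstdlib>

// Allocation path: report the allocation to the profiler.
void *traced_alloc(std::size_t size) {
  void *ptr = std::malloc(size);
  // TracyAlloc(ptr, size);   // profiler records the allocation
  return ptr;
}

// Free path: report the matching free through the same hook.
void traced_free(void *ptr) {
  // TracyFree(ptr);          // profiler records the matching free
  std::free(ptr);
}

// Bug pattern: void *p = traced_alloc(16); std::free(p);
// The allocation is reported but the free never is, so the pairing breaks.
```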
…ree-org#5254)

We use `std::make_unique` throughout the project. If we needed compatibility with older compilers, we could switch to `absl::make_unique` ([source](https://github.com/abseil/abseil-cpp/blob/9fde5a6eb081ea080f5aa895102a9154c3a2d09f/absl/memory/memory.h#L96-L103)) or add our own implementation without a dep on abseil (see iree-org#3848).
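For reference, the "own implementation" option mentioned above could look roughly like this; a minimal sketch for the single-object case only, with a hypothetical namespace name:

```cpp
#include <memory>
#include <utility>

namespace iree_compat {  // hypothetical namespace

// Minimal stand-in for std::make_unique on pre-C++14 toolchains. Unlike the
// standard version, this sketch omits the array overloads.
template <typename T, typename... Args>
std::unique_ptr<T> make_unique(Args&&... args) {
  return std::unique_ptr<T>(new T(std::forward<Args>(args)...));
}

}  // namespace iree_compat
```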
…te (iree-org#5251)

Add pattern to do affine.min canonicalization after tile and distribute
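A plain-arithmetic sketch of the property such a canonicalization relies on (all names hypothetical): after tiling, the per-iteration extent is `min(tileSize, ub - iv)`, which folds to the constant `tileSize` whenever the tile size evenly divides the loop bound.

```cpp
#include <algorithm>
#include <cassert>

// Per-iteration extent produced by tiling: min(tileSize, ub - iv).
int tileChunk(int iv, int ub, int tileSize) {
  return std::min(tileSize, ub - iv);
}

int main() {
  const int ub = 128, tileSize = 32;  // 32 divides 128 evenly
  // Every iteration gets a full tile, so the min can fold to tileSize.
  for (int iv = 0; iv < ub; iv += tileSize)
    assert(tileChunk(iv, ub, tileSize) == tileSize);
  return 0;
}
```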
…tch region (iree-org#5236)

In general, the operations cloned into a dispatch region can form a
DAG. They have to be cloned in an order consistent with their
use-def chains. This change adds a method to clone the operations
in the right order, and also cleans up the dispatch region creation
code.

Fixes iree-org#5151.
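A hedged sketch of the ordering step, with a hypothetical `Op` standing in for `mlir::Operation`: a post-order DFS over use-def edges within the cloned set yields an order in which every producer precedes its consumers.

```cpp
#include <functional>
#include <unordered_set>
#include <vector>

// Hypothetical stand-in for an IR operation and its use-def edges.
struct Op { std::vector<Op*> operands; };

// Returns the ops of |toClone| ordered so every producer precedes its
// consumers; cloning in this order cannot violate a use-def chain. Values
// defined outside the cloned set are ignored.
std::vector<Op*> topoOrderForCloning(const std::vector<Op*> &toClone) {
  std::unordered_set<Op*> inSet(toClone.begin(), toClone.end());
  std::unordered_set<Op*> visited;
  std::vector<Op*> order;
  std::function<void(Op*)> visit = [&](Op *op) {
    if (!visited.insert(op).second) return;  // already placed
    for (Op *def : op->operands)
      if (inSet.count(def)) visit(def);      // producers first
    order.push_back(op);                     // post-order: op after its defs
  };
  for (Op *op : toClone) visit(op);
  return order;
}
```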
…ree-org#5258)

When moving, the target must be released regardless of whether the pointers
match, in order to keep things balanced. This could happen in cases where
a register contained a ref ptr and a function call returned the same
pointer with the move bit set, leaking the release and failing to clobber.
Fixes iree-org#5141.
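A minimal sketch of the fix, with a hypothetical `Ref` type standing in for IREE's VM ref machinery: the move must release the old target unconditionally, even when source and target already point at the same object.

```cpp
#include <atomic>

struct Ref { std::atomic<int> count{1}; };  // hypothetical refcounted object
inline void release(Ref *r) { if (r) r->count.fetch_sub(1); /* destroy at 0 */ }

// Moves |src|'s reference into |*dst|. The reference previously held by the
// register must be dropped even when src == *dst; the buggy pattern
//   if (*dst != src) release(*dst);
// leaks one reference whenever a call returns the same pointer with the
// move bit set.
void moveRef(Ref *src, Ref **dst) {
  Ref *old = *dst;
  *dst = src;    // takes ownership of the moved-in reference
  release(old);  // balance the old reference unconditionally
}
```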
@google-cla google-cla bot added the cla: yes label Mar 31, 2021
@copybara-service copybara-service bot merged commit 54c8bf5 into iree-org:google Mar 31, 2021
@KoolJBlack KoolJBlack mentioned this pull request Mar 31, 2021
KoolJBlack added a commit that referenced this pull request Apr 1, 2021
24774c5 Synchronize submodules with LLVM at llvm/llvm-project@fcf6800
10ae8dc Synchronize submodules with LLVM at llvm/llvm-project@fcf6800
46aa337 Integrate LLVM at llvm/llvm-project@fcf6800
54c8bf5 Merge pull request #5262 from KoolJBlack:main-to-google
fda00cf Integrate LLVM at llvm/llvm-project@8396aeb
16670ba Integrate LLVM at llvm/llvm-project@afed50a
431ede6 Merge branch 'google' into main-to-google
7a8867c Integrate LLVM at llvm/llvm-project@c06a8f9
0a378bb Synchronize submodules with LLVM at llvm/llvm-project@73adc05
4fe87f3 Synchronize submodules with LLVM at llvm/llvm-project@73adc05
2c9e502 Integrate LLVM at llvm/llvm-project@73adc05
20a2ba4 Integrate LLVM at llvm/llvm-project@77d81c2
65945ba Update benefit of numerically unstable Sigmoid legalization to zero
f2f173b Integrate LLVM at llvm/llvm-project@c51e91e
0a0db13 Integrate LLVM at llvm/llvm-project@4822830
3c3cb7c Integrate LLVM at llvm/llvm-project@20d5c42
01e8cb5 Integrate LLVM at llvm/llvm-project@594e0ba
1006028 Integrate LLVM at llvm/llvm-project@4157a07
8455942 Merge main -> google