Merge google -> main #5276
Merged

Conversation
Contributor KoolJBlack commented on Mar 31, 2021:
- 24774c5 Synchronize submodules with LLVM at llvm/llvm-project@fcf680050686
- 10ae8dc Synchronize submodules with LLVM at llvm/llvm-project@fcf680050686
- 46aa337 Integrate LLVM at llvm/llvm-project@fcf680050686
- 54c8bf5 Merge pull request Merge main -> google #5262 from KoolJBlack:main-to-google
- fda00cf Integrate LLVM at llvm/llvm-project@8396aeb07cdd
- 16670ba Integrate LLVM at llvm/llvm-project@afed50a14b34
- 431ede6 Merge branch 'google' into main-to-google
- 7a8867c Integrate LLVM at llvm/llvm-project@c06a8f9caa51
- 0a378bb Synchronize submodules with LLVM at llvm/llvm-project@73adc05cedb2
- 4fe87f3 Synchronize submodules with LLVM at llvm/llvm-project@73adc05cedb2
- 2c9e502 Integrate LLVM at llvm/llvm-project@73adc05cedb2
- 20a2ba4 Integrate LLVM at llvm/llvm-project@77d81c2270c6
- 65945ba Update benefit of numerically unstable Sigmoid legalization to zero
- f2f173b Integrate LLVM at llvm/llvm-project@c51e91e04681
- 0a0db13 Integrate LLVM at llvm/llvm-project@482283042f79
- 3c3cb7c Integrate LLVM at llvm/llvm-project@20d5c42e0ef5
- 01e8cb5 Integrate LLVM at llvm/llvm-project@594e0ba96967
- 1006028 Integrate LLVM at llvm/llvm-project@4157a079afbf
- 8455942 Merge main -> google
* da64c93 Integrate MLIR-EmitC at iml130/mlir-emitc@dde739f (iree-org#5228)
* a8d6c2a Fix doc publication by escaping our IR snippets in markdown. (iree-org#5227)
* bd5e535 Introduce RVV VLS code-gen (iree-org#5199)
* d496bc9 Avoid allocating temporary buffer for tensors derived from read-only tensors (..
* 1a93557 Tidying up a few pages of documentation. (iree-org#5225)
* 46e8331 Add a dedicated iree_c_module CMake module (iree-org#5214)
* 603b208 Merge google -> main (iree-org#5219)
* c9f3742 Add support for control flow lowering in the VM to emitc target (iree-org#5208)
* c8a7b2f Sort descriptors as expected by the native module (iree-org#5207)
* 47183fb Revise compiler tracing to enable statistics aggregation. (iree-org#5218)
* 500ec7b Set author to match committer for llvm submodule update action. (iree-org#5217)
* 881de81 Generate and use the iree_hal_executable_library_t metadata. (iree-org#5195)
* cf8180e Increase K tile size for small matrices. (iree-org#5213)
* f5804ec Fix problem bug in MobileBert with vectorization enable. (iree-org#5211)

PiperOrigin-RevId: 365076337
Updates LLVM usage to match [4157a079afbf](llvm/llvm-project@4157a079afbf) PiperOrigin-RevId: 365150758
Updates LLVM usage to match [594e0ba96967](llvm/llvm-project@594e0ba96967) PiperOrigin-RevId: 365282704
Updates LLVM usage to match [20d5c42e0ef5](llvm/llvm-project@20d5c42e0ef5) PiperOrigin-RevId: 365666232
Updates LLVM usage to match [482283042f79](llvm/llvm-project@482283042f79) PiperOrigin-RevId: 365710568
Updates LLVM usage to match [c51e91e04681](llvm/llvm-project@c51e91e04681) PiperOrigin-RevId: 365802786
cr/318489247 implemented a numerically stable legalization for MLIR, but it wasn't being used because the old implementation was still around. Updating the benefit will result in use of the direct pattern unless the logistic op is illegal. Also, enable the Python test with this numerical issue now that _UnaryOpsComposition is supported in MLIR. Added a TODO to remove the pattern from legalization. Specify generated names so that legalization depth can be computed and legalizations are preferred based on the benefit. PiperOrigin-RevId: 365851709
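The mechanism behind this change: MLIR rewrite patterns carry a numeric benefit, and the pattern driver tries higher-benefit patterns first, so lowering the unstable expansion's benefit to zero makes the numerically stable direct pattern win whenever its result op is legal on the target. A toy model of that selection order (plain Python for illustration only; the real mechanism is MLIR's `PatternBenefit` in C++, and the names below are made up):

```python
# Toy model of benefit-ordered pattern selection (illustration only;
# not the MLIR API).

def select_pattern(patterns, op_is_legal):
    """Try patterns from highest benefit to lowest; skip any pattern
    whose produced op is illegal on the target."""
    for p in sorted(patterns, key=lambda p: p["benefit"], reverse=True):
        if op_is_legal(p["produces"]):
            return p["name"]
    return None

patterns = [
    # Numerically stable direct lowering, preferred.
    {"name": "direct_logistic", "benefit": 1, "produces": "logistic"},
    # Old, numerically unstable expansion; benefit set to zero so it is
    # only reached as a fallback.
    {"name": "unstable_expansion", "benefit": 0, "produces": "exp_based"},
]

# When the logistic op is legal on the target, the direct pattern wins.
print(select_pattern(patterns, lambda op: True))              # direct_logistic
# When logistic is illegal, the driver falls back to the expansion.
print(select_pattern(patterns, lambda op: op != "logistic"))  # unstable_expansion
```

With both patterns at equal benefit, which one fires would be unspecified; the explicit zero benefit is what guarantees the fallback ordering described in the commit.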
Updates LLVM usage to match [77d81c2270c6](llvm/llvm-project@77d81c2270c6) PiperOrigin-RevId: 365870238
Updates LLVM usage to match [73adc05cedb2](llvm/llvm-project@73adc05cedb2) PiperOrigin-RevId: 365901717
Updates LLVM dependencies to match [73adc05cedb2](llvm/llvm-project@73adc05cedb2). - llvm-bazel to [fad5434701aa](google/llvm-bazel@fad5434701aa) - TensorFlow to [8bd49272bc4d](tensorflow/tensorflow@8bd49272bc4d) - MLIR-HLO to [e78c59d92779](https://github.com/tensorflow/mlir-hlo/commit/${MLIR_HLO_SHA?}) `./scripts/git/update_to_llvm_syncpoint.py` Automated submodule bump from .github/workflows/update_llvm_dependent_submodules.yml PiperOrigin-RevId: 365931662
Updates LLVM usage to match [c06a8f9caa51](llvm/llvm-project@c06a8f9caa51) PiperOrigin-RevId: 365935998
Updates LLVM usage to match [afed50a14b34](llvm/llvm-project@afed50a14b34) PiperOrigin-RevId: 365986449
Updates LLVM usage to match [8396aeb07cdd](llvm/llvm-project@8396aeb07cdd) PiperOrigin-RevId: 366034463
PiperOrigin-RevId: 366091704
Updates LLVM usage to match [fcf680050686](llvm/llvm-project@fcf680050686) PiperOrigin-RevId: 366101230
Updates LLVM dependencies to match [fcf680050686](llvm/llvm-project@fcf680050686). - llvm-bazel to [14a6c5dcc87f](google/llvm-bazel@14a6c5dcc87f) - TensorFlow to [75e42f8f26b7](tensorflow/tensorflow@75e42f8f26b7) - MLIR-HLO to [7b0a6bfeeedb](https://github.com/tensorflow/mlir-hlo/commit/${MLIR_HLO_SHA?}) `./scripts/git/update_to_llvm_syncpoint.py` Automated submodule bump from .github/workflows/update_llvm_dependent_submodules.yml PiperOrigin-RevId: 366136514
GMNGeoffrey approved these changes on Apr 1, 2021.

copybara-service bot pushed a commit that referenced this pull request on Apr 6, 2021:
* 6bd5658 Merge google -> main (#5319)
* 2e5257d Merge branch 'main' into google-to-main
* 6936ee7 Patch VMLA performance by reserving vector size before pushing to it. (#5316)
* f2f0041 NFC: Cleanup ConcretizeTileAmongstWorkgroupsPass. (#5297)
* f96726a Add tests to run few other (smaller) models with Linalg on tensors path. (#5306)
* fd64070 Revert "Add wasm-micro-runtime submodule and get building with CMake." (#5312)
* ce0285f Continue pruning abseil usage: switch from absl::InlinedVector to std::vector...
* 71e24b6 Removing hal.buffer.fill and hal.buffer.copy. (#5307)
* 3c611d3 Add Mako benchmark config template file. (#5200)
* 4d1a394 Fix RFFT bugs in VMLA. (#5308)
* 0d55c95 Add configure_bazel.py step to TensorFlow getting started doc.
* 1386d2c Switch simple_embedding_test to include drivers explicitly. (#5304)
* 402550b Add StripAsserts pass and handle tf.Identity ops on tensor lists. (#5294)
* fbdb4ef Add new metrics to MobileNetV2 benchmarks. (#5301)
* 99c8eac Implementing Vulkan dispatch tracing. (#5287)
* 2681dff Insert clones prior to mutation and not where it originates. (#5292)
* aeafd9e Fix CUDA HAL bug and enable more execution tests (#5296)
* 2801780 [CUDA Codegen] Enable tiling and vectorization for MatMulOp (#5293)
* c61fefe Extend AffineMin canonicalization to support scf.parallel (#5289)
* e0ee3f3 Add directory for microbenchmarking (#5260)
* b8da32c Set wasm-export-name attributes on exported functions again. (#5286)
* e2a2f81 Canonicalize affine min before applying tile-and-vecotrize passes (#5285)
* 23861f7 [CUDA codegen] add vectorization infrastructure (#5278)
* 6f443c4 Drop deps on Abseil's core_headers, synchronization, macros. (#5275)
* e5b9e8a Actually run MobileNet with fake weights to check correctness (#5284)
* e56db9a Remove dead code in LinalgToSPIRV (#5281)
* 8863aa1 [NFC] Fix typos in variable names. (#5279)
* 9cd93ba Turn vectorization on by default for linalg on tensors path (#5280)
* 894dac6 Merge google -> main #5276
* b738162 Changing HAL dialect syntax to express all types. (#5239)
* 1ba4e88 Merge branch 'main' into google-to-main
* 531c73e Fix yml syntax (#5274)
* 494fe32 Bumping the tracy version to 0.7.7 (WIP). (#5272)
* 3616323 Disable Vulkan float16 tests on Pixel4 (#5273)
* ade7ff1 Disable running BERT on Vulkan (see Issue #5268) (#5269)
* 25ddc10 Add tracing to allocations made from VMA. (#5271)
* df454f4 Changing iree_vm_list_resize to grow by 2x. (#5270)
* bd9a113 Adding command buffer queue affinity. (#5265)
* de834ae Make status matcher print the message when it fails. (#5266)
* 10f5eaf Add f16 e2e tests for vulkan (#5257)
* 1bdc3a4 Actually make MobileBERT run in the test. (#5264)
* 2e05313 Add support for module almost_eq check for f16 type (#5261)

COPYBARA_INTEGRATE_REVIEW=#5321 from NatashaKnk:main-to-google 6bd5658
PiperOrigin-RevId: 366926967
GMNGeoffrey pushed a commit that referenced this pull request on Apr 6, 2021, carrying the same commit list as above (PiperOrigin-RevId: 366926967).