
[TUTORIAL] Add unmasked matrix multiply example to triton-cpu #23

Merged (3 commits, Jun 14, 2024)

Conversation

@Kuigesi (Collaborator) commented Jun 14, 2024

Add un-masked tiled matrix-multiplication example to triton-cpu
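The kernel being added is an unmasked tiled matrix multiply: it assumes M, N, and K are exact multiples of the block sizes, so no boundary masks are needed when loading tiles. The tiling scheme can be sketched in plain NumPy (an illustrative sketch, not the tutorial's actual Triton kernel; block sizes follow the benchmark config used later in this thread):

```python
import numpy as np

# Block sizes mirror the benchmark config (BLOCK_SIZE_M/N/K = 32).
BLOCK_M, BLOCK_N, BLOCK_K = 32, 32, 32

def tiled_matmul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Unmasked tiled matmul: requires M, N, K to be exact multiples of
    the block sizes, so every tile load is full and needs no mask."""
    M, K = a.shape
    K2, N = b.shape
    assert K == K2
    assert M % BLOCK_M == 0 and N % BLOCK_N == 0 and K % BLOCK_K == 0
    c = np.zeros((M, N), dtype=a.dtype)
    for m in range(0, M, BLOCK_M):
        for n in range(0, N, BLOCK_N):
            # Accumulate one BLOCK_M x BLOCK_N output tile over the K axis.
            acc = np.zeros((BLOCK_M, BLOCK_N), dtype=a.dtype)
            for k in range(0, K, BLOCK_K):
                acc += a[m:m+BLOCK_M, k:k+BLOCK_K] @ b[k:k+BLOCK_K, n:n+BLOCK_N]
            c[m:m+BLOCK_M, n:n+BLOCK_N] = acc
    return c
```

In the real Triton kernel each (m, n) tile is one program instance and the inner K loop uses `tl.load`/`tl.dot` without masks; dropping the masks is what makes this variant simpler (and cheaper) than the masked tutorial kernel.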

@Kuigesi Kuigesi requested a review from ptillet as a code owner June 14, 2024 00:59
@Kuigesi (Collaborator, Author) commented Jun 14, 2024

Current performance numbers on an AMD EPYC 9654 96-core processor.

triton_cpu_output_with_torch.float32_inputs=tensor([[ 34.1671,  -0.6265,   2.1590,  ..., -24.7347,  45.6671,  12.1024],
        [-12.4534, -18.2695,   5.6328,  ...,   3.7349, -24.7166,  18.7390],
        [ 10.1346,   7.3148, -15.2706,  ...,  33.5417, -18.3999, -61.9606],
        ...,
        [-25.3489,  16.8383,  25.1225,  ...,  22.5323, -34.2459,  22.0001],
        [  9.3619,  47.0127,  12.2615,  ..., -32.7080, -19.9729,   7.6856],
        [-12.7559, -20.7132,  14.4755,  ...,  23.1968, -14.5273,  19.9374]])
torch_cpu_output_with_torch.float32_inputs=tensor([[ 34.1671,  -0.6265,   2.1590,  ..., -24.7347,  45.6671,  12.1024],
        [-12.4534, -18.2695,   5.6328,  ...,   3.7349, -24.7166,  18.7390],
        [ 10.1346,   7.3148, -15.2706,  ...,  33.5417, -18.3999, -61.9606],
        ...,
        [-25.3489,  16.8383,  25.1225,  ...,  22.5323, -34.2459,  22.0001],
        [  9.3619,  47.0127,  12.2615,  ..., -32.7080, -19.9729,   7.6856],
        [-12.7560, -20.7132,  14.4755,  ...,  23.1968, -14.5273,  19.9374]])
✅ TritonCPU and TorchCPU match
triton_gpu_output_with_torch.float32_inputs=tensor([[ 34.1426,  -0.6313,   2.1575,  ..., -24.7246,  45.6221,  12.0942],
        [-12.4454, -18.2539,   5.6301,  ...,   3.7342, -24.6937,  18.7249],
        [ 10.1283,   7.3111, -15.2640,  ...,  33.5260, -18.3706, -61.9241],
        ...,
        [-25.3207,  16.8165,  25.1075,  ...,  22.5187, -34.2108,  22.0029],
        [  9.3466,  46.9721,  12.2512,  ..., -32.6826, -19.9512,   7.6826],
        [-12.7462, -20.6899,  14.4676,  ...,  23.1768, -14.5369,  19.9426]],
       device='cuda:0')
torch_gpu_output_with_torch.float32_inputs=tensor([[ 34.1671,  -0.6265,   2.1589,  ..., -24.7347,  45.6671,  12.1024],
        [-12.4534, -18.2695,   5.6328,  ...,   3.7349, -24.7166,  18.7390],
        [ 10.1346,   7.3148, -15.2706,  ...,  33.5417, -18.3999, -61.9606],
        ...,
        [-25.3489,  16.8383,  25.1225,  ...,  22.5323, -34.2459,  22.0001],
        [  9.3619,  47.0127,  12.2615,  ..., -32.7080, -19.9729,   7.6856],
        [-12.7559, -20.7132,  14.4755,  ...,  23.1968, -14.5273,  19.9374]],
       device='cuda:0')
❌ TritonGPU and TorchGPU differ, the maximum difference is 0.08477020263671875
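The ✅/❌ lines above come from an elementwise comparison of the two outputs. A minimal NumPy sketch of that kind of check (the function name and tolerance here are illustrative, not the tutorial's actual code):

```python
import numpy as np

def report_match(name_a, a, name_b, b, atol=1e-2):
    """Print a match/differ line in the style of the log above and
    return the maximum absolute elementwise difference."""
    max_diff = float(np.max(np.abs(a - b)))
    if max_diff <= atol:
        print(f"✅ {name_a} and {name_b} match")
    else:
        print(f"❌ {name_a} and {name_b} differ, the maximum difference is {max_diff}")
    return max_diff
```

A small nonzero GPU-vs-GPU difference like the 0.0848 above is plausible for fp32 matmul, since the two implementations can use different reduction orders (and, on Ampere-class hardware, possibly TF32 accumulation), so it does not by itself indicate a bug.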
matmul-performance-fp32 (BLOCK_SIZE_M=32, BLOCK_SIZE_N=32, BLOCK_SIZE_K=32, GROUP_SIZE_M=8):
         M       N       K  TritonCPU 1   TritonCPU     TorchCPU     TritonGPU      TorchGPU
0    256.0   256.0   256.0     6.115126  122.798460   501.471045   4369.066777   2716.518137
1    384.0   384.0   384.0     5.901555  218.960189  1492.595597  11835.932848   7237.104321
2    512.0   512.0   512.0     5.477325  222.247163  1672.036592  23109.112999  14563.555040
3    640.0   640.0   640.0     5.838525  224.329285  1518.372663  34933.902558  19252.643521
4    768.0   768.0   768.0     5.792854  240.503161  2605.757184  45082.088535  23340.108060
5    896.0   896.0   896.0     5.986253  246.079258  2646.360390  52034.370283  32791.901709
6   1024.0  1024.0  1024.0     5.308255  189.351354  3101.865729  60295.476559  33238.664573
7   1152.0  1152.0  1152.0     6.071139  245.742297  3270.295361  63489.358641  36581.732159
8   1280.0  1280.0  1280.0     5.922744  210.926092  3200.078081  63719.978592  32331.523340
9   1408.0  1408.0  1408.0     5.886528  196.662598  3501.813199  67462.041266  39757.711162
10  1536.0  1536.0  1536.0     5.728521  239.912651   265.572162  67690.498883  37032.769105
11  1664.0  1664.0  1664.0     5.965041  207.735833  3762.382046  69139.302925  38085.594705
12  1792.0  1792.0  1792.0     5.849714  224.266499   332.486163  70439.003490  40940.417199
13  1920.0  1920.0  1920.0     5.996897  252.569990  3426.911560  71074.551326  43520.880582
14  2048.0  2048.0  2048.0     5.221438  231.761808  3642.000616  71649.661170  50775.137875
15  2176.0  2176.0  2176.0     5.765917  233.517503   586.022025  72265.375659  42677.231838
16  2304.0  2304.0  2304.0     5.762947  249.061190  3426.411733  72815.001096  40069.818786
17  2432.0  2432.0  2432.0     5.769728  237.455235  3775.321086  73186.487986  44444.474595
18  2560.0  2560.0  2560.0     5.462908  211.604101  1489.292651  72944.416068  46809.340035

@Kuigesi Kuigesi changed the title [CPU] Add unmasked matrix multiply example to triton-cpu [TUTORIAL] Add unmasked matrix multiply example to triton-cpu Jun 14, 2024
@minjang minjang self-requested a review June 14, 2024 01:32
@minjang (Collaborator) left a comment

Looks good!

@minjang (Collaborator) commented Jun 14, 2024

Let me figure out how to correctly add TorchCPU with Inductor. But this is good. Thanks!

@minjang minjang merged commit 7bf0591 into triton-lang:main Jun 14, 2024
1 of 4 checks passed
minjang pushed a commit to minjang/triton-cpu that referenced this pull request Jun 22, 2024
…-lang#23)

* add un-masked tiled matrix-multiplication for triton-cpu

* clean and add comment

* move test under tutorials
minjang pushed a commit that referenced this pull request Jun 24, 2024
When running
[convert_blocked1d_to_slice0](https://github.com/triton-lang/triton/blob/0ba5f0c3cd029d5c3d1f01b9bf29dac32c27345e/test/Conversion/tritongpu_to_llvm.mlir#L924)
Triton ends up computing a rank of a matrix with 0 columns during linear
layout lowering, which trips up f2reduce, and causes undefined behavior,
detectable through
[UBSAN](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html).

Fix this by returning the rank (0) early in these cases, without calling
f2reduce.
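The idea of the fix can be sketched in Python (an illustrative sketch over GF(2) bitmask rows; the actual code is C++ in f2reduce and LinearLayout.cpp, and the names here are hypothetical). The point is the early return for a zero-column matrix, which keeps the degenerate case out of the elimination routine entirely:

```python
def gf2_rank(rows: list, num_cols: int) -> int:
    """Rank of a GF(2) matrix; each row is an int bitmask of its columns.
    Note: `rows` is modified in place by the elimination below."""
    if num_cols == 0:
        return 0  # the fix: a matrix with 0 columns trivially has rank 0
    rank = 0
    for col in range(num_cols):
        pivot_bit = 1 << col
        # Find a pivot row with this column bit set, below the current rank.
        pivot = next((i for i in range(rank, len(rows)) if rows[i] & pivot_bit), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        # Eliminate this column bit from every other row.
        for i in range(len(rows)):
            if i != rank and rows[i] & pivot_bit:
                rows[i] ^= rows[rank]
        rank += 1
    return rank
```

Python shifts cannot trip UBSAN, of course; in the C++ version the zero-width case fed a shift exponent of `num_cols - 1` (wrapping to 2^64 - 1 as unsigned) into `inplace_rref_small`, which is exactly what the early return avoids.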

<details><summary>Stack trace</summary>
<p>

```
third_party/triton/third_party/f2reduce/f2reduce.cpp:421:30: runtime error: shift exponent 18446744073709551615 is too large for 64-bit type 'unsigned long long'
    #0 0x556ee2fea3be in inplace_rref_small third_party/triton/third_party/f2reduce/f2reduce.cpp:421:30
    #1 0x556ee2fea3be in f2reduce::inplace_rref_strided(unsigned long*, unsigned long, unsigned long, unsigned long) third_party/triton/third_party/f2reduce/f2reduce.cpp:470:9
    #2 0x556ee2ea70da in getMatrixRank third_party/triton/lib/Tools/LinearLayout.cpp:125:3
    #3 0x556ee2ea70da in mlir::triton::LinearLayout::checkInvariants(bool) third_party/triton/lib/Tools/LinearLayout.cpp:299:7
    #4 0x556ee2ea656d in mlir::triton::LinearLayout::tryCreate(llvm::MapVector<mlir::StringAttr, std::__u::vector<std::__u::vector<int, std::__u::allocator<int>>, std::__u::allocator<std::__u::vector<int, std::__u::allocator<int>>>>, llvm::DenseMap<mlir::StringAttr, unsigned int, llvm::DenseMapInfo<mlir::StringAttr, void>, llvm::detail::DenseMapPair<mlir::StringAttr, unsigned int>>, llvm::SmallVector<std::__u::pair<mlir::StringAttr, std::__u::vector<std::__u::vector<int, std::__u::allocator<int>>, std::__u::allocator<std::__u::vector<int, std::__u::allocator<int>>>>>, 0u>>, llvm::ArrayRef<std::__u::pair<mlir::StringAttr, int>>, bool) third_party/triton/lib/Tools/LinearLayout.cpp:190:41
    #5 0x556ee2eb2150 in mlir::triton::LinearLayout::divideRight(mlir::triton::LinearLayout const&) third_party/triton/lib/Tools/LinearLayout.cpp:654:51
    #6 0x556ee2ee1c39 in mlir::cvtNeedsSharedMemory(mlir::RankedTensorType, mlir::RankedTensorType) third_party/triton/lib/Analysis/Utility.cpp:652:14
    #7 0x556ee2cf38fd in mlir::triton::getRepShapeForCvtLayout(mlir::triton::gpu::ConvertLayoutOp) third_party/triton/lib/Analysis/Allocation.cpp:66:8
    #8 0x556ee2cf3efa in mlir::triton::getScratchConfigForCvtLayout(mlir::triton::gpu::ConvertLayoutOp, unsigned int&, unsigned int&) third_party/triton/lib/Analysis/Allocation.cpp:95:19
    #9 0x556ee2cf6057 in mlir::triton::AllocationAnalysis::getScratchValueSize(mlir::Operation*) third_party/triton/lib/Analysis/Allocation.cpp:272:24
    #10 0x556ee2cf5499 in operator() third_party/triton/lib/Analysis/Allocation.cpp:343:7
    #11 0x556ee2cf5499 in void llvm::function_ref<void (mlir::Operation*)>::callback_fn<mlir::triton::AllocationAnalysis::getValuesAndSizes()::'lambda'(mlir::Operation*)>(long, mlir::Operation*) third_party/llvm/llvm-project/llvm/include/llvm/ADT/STLFunctionalExtras.h:45:12
    #12 0x556edeeee7a9 in operator() third_party/llvm/llvm-project/llvm/include/llvm/ADT/STLFunctionalExtras.h:68:12
    #13 0x556edeeee7a9 in void mlir::detail::walk<mlir::ForwardIterator>(mlir::Operation*, llvm::function_ref<void (mlir::Operation*)>, mlir::WalkOrder) third_party/llvm/llvm-project/mlir/include/mlir/IR/Visitors.h:174:5
    #14 0x556edeeee87c in void mlir::detail::walk<mlir::ForwardIterator>(mlir::Operation*, llvm::function_ref<void (mlir::Operation*)>, mlir::WalkOrder) third_party/llvm/llvm-project/mlir/include/mlir/IR/Visitors.h:182:9
    #15 0x556ee2cf49e7 in walk<(mlir::WalkOrder)0, mlir::ForwardIterator, (lambda at third_party/triton/lib/Analysis/Allocation.cpp:341:42), mlir::Operation *, void> third_party/llvm/llvm-project/mlir/include/mlir/IR/Visitors.h:313:10
    #16 0x556ee2cf49e7 in walk<(mlir::WalkOrder)0, mlir::ForwardIterator, (lambda at third_party/triton/lib/Analysis/Allocation.cpp:341:42), void> third_party/llvm/llvm-project/mlir/include/mlir/IR/Operation.h:794:12
    #17 0x556ee2cf49e7 in mlir::triton::AllocationAnalysis::getValuesAndSizes() third_party/triton/lib/Analysis/Allocation.cpp:341:16
    #18 0x556ee2cf4852 in run third_party/triton/lib/Analysis/Allocation.cpp:182:5
    #19 0x556ee2cf4852 in AllocationAnalysis third_party/triton/lib/Analysis/Allocation.cpp:169:5
    #20 0x556ee2cf4852 in mlir::Allocation::run(llvm::DenseMap<mlir::FunctionOpInterface, mlir::Allocation, llvm::DenseMapInfo<mlir::FunctionOpInterface, void>, llvm::detail::DenseMapPair<mlir::FunctionOpInterface, mlir::Allocation>>&) third_party/triton/lib/Analysis/Allocation.cpp:627:3
    #21 0x556ee1677402 in operator() third_party/triton/include/triton/Analysis/Allocation.h:227:26
    #22 0x556ee1677402 in void mlir::CallGraph<mlir::Allocation>::doWalk<(mlir::WalkOrder)0, (mlir::WalkOrder)1, mlir::ModuleAllocation::ModuleAllocation(mlir::ModuleOp)::'lambda'(mlir::CallOpInterface, mlir::FunctionOpInterface), mlir::ModuleAllocation::ModuleAllocation(mlir::ModuleOp)::'lambda'(mlir::FunctionOpInterface)>(mlir::FunctionOpInterface, llvm::DenseSet<mlir::FunctionOpInterface, llvm::DenseMapInfo<mlir::FunctionOpInterface, void>>&, mlir::ModuleAllocation::ModuleAllocation(mlir::ModuleOp)::'lambda'(mlir::CallOpInterface, mlir::FunctionOpInterface), mlir::ModuleAllocation::ModuleAllocation(mlir::ModuleOp)::'lambda'(mlir::FunctionOpInterface)) third_party/triton/include/triton/Analysis/Utility.h:350:7
    #23 0x556ee16756b3 in walk<(mlir::WalkOrder)0, (mlir::WalkOrder)1, (lambda at third_party/triton/include/triton/Analysis/Allocation.h:222:9), (lambda at third_party/triton/include/triton/Analysis/Allocation.h:224:9)> third_party/triton/include/triton/Analysis/Utility.h:242:7
    #24 0x556ee16756b3 in mlir::ModuleAllocation::ModuleAllocation(mlir::ModuleOp) third_party/triton/include/triton/Analysis/Allocation.h:220:5
    #25 0x556ee2c2bf18 in (anonymous namespace)::AllocateSharedMemory::runOnOperation() third_party/triton/lib/Conversion/TritonGPUToLLVM/AllocateSharedMemory.cpp:26:22
...
UndefinedBehaviorSanitizer: invalid-shift-exponent third_party/triton/third_party/f2reduce/f2reduce.cpp:421:30 
```
</p>
</details>
minjang pushed a commit that referenced this pull request Jun 24, 2024
Devjiu pushed a commit to Devjiu/triton-cpu that referenced this pull request Aug 13, 2024
int3 pushed a commit that referenced this pull request Aug 29, 2024
minjang pushed a commit that referenced this pull request Sep 22, 2024
minjang pushed a commit that referenced this pull request Oct 22, 2024
minjang pushed a commit that referenced this pull request Oct 24, 2024