
Transposition-aware matrix multiplication #200

Closed
daphne-eu opened this issue Mar 1, 2022 · 0 comments · Fixed by #406
Assignees
Labels
AMLS summer 2022 Student project for the Architecture of Machine Learning Systems lecture at TU Graz (summer 2022). student project Suitable for a bachelor/master student's programming project.

Comments

@daphne-eu (Owner)

In GitLab by @pdamme on Mar 1, 2022, 19:24

Matrix multiplications are at the heart of most ML algorithms and can be very expensive in terms of runtime. Highly optimized routines (e.g., BLAS) exist for executing matrix multiplications, and in many cases one or both inputs are transposed matrices. Linear algebra libraries like BLAS accept transposition flags, so they can process transposed inputs efficiently without materializing the transposed representation. However, DAPHNE's MatMulOp is currently unaware of whether its inputs are transposed. Thus, an expression like C = t(A) @ B; would first explicitly transpose A before computing the matrix multiplication.
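To illustrate why materializing the transpose is unnecessary, here is a minimal standalone sketch (not DAPHNE code; the function name and row-major layout are assumptions for illustration) that computes C = t(A) @ B by merely swapping the index pattern used to read A:

```cpp
#include <vector>

// Computes C = t(A) * B without materializing t(A).
// A is m x n (row-major), so t(A) is n x m; B is m x p; C is n x p.
// Where a non-transposed kernel would read A(i, k), we read A(k, i).
std::vector<double> matmulTransposedLhs(const std::vector<double>& A,
                                        const std::vector<double>& B,
                                        int m, int n, int p) {
    std::vector<double> C(n * p, 0.0);
    for (int i = 0; i < n; ++i)          // rows of t(A)
        for (int k = 0; k < m; ++k)      // shared dimension
            for (int j = 0; j < p; ++j)  // columns of B
                C[i * p + j] += A[k * n + i] * B[k * p + j];
    return C;
}
```

This is exactly the trick BLAS GEMM performs internally when passed a transpose flag (e.g., CblasTrans in the CBLAS interface), which is why forwarding the flag is cheaper than an explicit transpose kernel.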

The task is to modify the existing MatMulOp such that it has two boolean flags indicating whether the left/right-hand-side input is transposed. The DaphneDSL parser shall initially create MatMulOps assuming non-transposed inputs, while a new compiler pass shall identify whether an input to a MatMulOp is the result of a TransposeOp and rewrite the program accordingly. Finally, the runtime kernels of the MatMulOp must pass this transposition information on to the BLAS kernels they call internally. Implementation in C++.
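The rewrite pass can be sketched as follows. This is a hypothetical, heavily simplified stand-in for the actual MLIR-based canonicalization in DAPHNE: the Op/MatMulOp structs and the transa/transb field names are invented for illustration. Using a loop rather than a single check also handles nested transposes such as t(t(A)), where the two transpositions cancel:

```cpp
// Hypothetical, simplified IR nodes; DAPHNE's real pass operates on MLIR ops.
struct Op {
    bool isTranspose = false;  // true if this node is a TransposeOp
    Op* input = nullptr;       // the operand of the TransposeOp, if any
};

struct MatMulOp {
    Op* lhs;
    Op* rhs;
    bool transa = false, transb = false;  // the new boolean flags
};

// Canonicalization: whenever an operand is produced by a TransposeOp,
// drop the transpose and flip the corresponding flag instead. The while
// loops absorb nested transposes (t(t(A)) cancels back to A).
void foldTransposes(MatMulOp& mm) {
    while (mm.lhs && mm.lhs->isTranspose) {
        mm.lhs = mm.lhs->input;
        mm.transa = !mm.transa;
    }
    while (mm.rhs && mm.rhs->isTranspose) {
        mm.rhs = mm.rhs->input;
        mm.transb = !mm.transb;
    }
}
```

After this rewrite, the runtime kernel only needs to forward transa/transb to the underlying GEMM call instead of invoking a separate transpose kernel first.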

@daphne-eu daphne-eu self-assigned this Mar 31, 2022
@Hiebl Hiebl assigned Hiebl and unassigned daphne-eu Apr 5, 2022
pdamme pushed a commit that referenced this issue Jun 26, 2022
* Adds a canonicalization pass for transposition-aware matrix multiplication
* Refactors the MatMul kernel to utilize transposition information
* Adds boolean flags and script-level tests for MatMul
Hiebl added a commit that referenced this issue Jun 29, 2022
* Adds "nested" transpose rewrites
* Fixes shape inference on transpose rewrites
@corepointer corepointer linked a pull request Aug 31, 2022 that will close this issue
pdamme pushed a commit that referenced this issue Jun 15, 2023
* Adds a canonicalization pass for transposition-aware matrix multiplication
* Refactors the MatMul kernel to utilize transposition information
* Adds boolean flags and script-level tests for MatMul
pdamme pushed a commit that referenced this issue Jun 15, 2023
* Adds "nested" transpose rewrites
* Fixes shape inference on transpose rewrites