Check matmul types and error at compile-time if the backend doesn't support them #540
Conversation
/blossom-ci
Force-pushed from 1942fbb to 0bd925c
/blossom-ci
@cliffburdick Great work! But I wonder why this doesn't work with int8_t. Code to reproduce:

#include "matx.h"
#include <cassert>
#include <cstdio>

using namespace matx;

#define TYPE int8_t

int main() {
  MATX_ENTER_HANDLER();
  index_t M = 2;
  index_t N = 3;
  auto m = make_tensor<TYPE>({M, N});
  auto v = make_tensor<TYPE>({N, 1});
  m.SetVals({{1, 2, 3},
             {4, 5, 6}});
  v.SetVals({{1}, {2}, {3}});   // v is an N x 1 column vector
  auto out = make_tensor<TYPE>({M, 1});
  (out = matmul(m, v)).run();
  cudaStreamSynchronize(0);
  printf("m:\n");
  print(m);
  printf("v:\n");
  print(v);
  printf("out:\n");
  print(out);
  CUDA_CHECK_LAST_ERROR();
  MATX_EXIT_HANDLER();
}

Output:
I'm going by the support matrix here:
I will try it out.
@AtomicVar the reason it's not working is that the compute and scale types are wrong. Looking into it.
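For context, and purely as an illustrative sketch rather than MatX's internal code: with the cuBLASLt backend, int8 inputs (CUDA_R_8I) are generally paired with the CUBLAS_COMPUTE_32I compute type and a CUDA_R_32I scale type (int32 alpha/beta and an int32 output), and there are additional layout and alignment restrictions on top of that. A minimal descriptor setup under those assumptions could look like:

#include <cublasLt.h>
#include <cstdio>

int main() {
  cublasLtHandle_t handle;
  cublasLtCreate(&handle);

  // int8 GEMM: compute type CUBLAS_COMPUTE_32I, scale type CUDA_R_32I.
  cublasLtMatmulDesc_t op_desc;
  cublasLtMatmulDescCreate(&op_desc, CUBLAS_COMPUTE_32I, CUDA_R_32I);

  // A/B carry int8 data; the output/accumulator is int32 (column-major, ld = rows).
  cublasLtMatrixLayout_t a_desc, b_desc, c_desc;
  cublasLtMatrixLayoutCreate(&a_desc, CUDA_R_8I, 2, 3, 2);
  cublasLtMatrixLayoutCreate(&b_desc, CUDA_R_8I, 3, 1, 3);
  cublasLtMatrixLayoutCreate(&c_desc, CUDA_R_32I, 2, 1, 2);

  printf("descriptors created\n");

  cublasLtMatrixLayoutDestroy(c_desc);
  cublasLtMatrixLayoutDestroy(b_desc);
  cublasLtMatrixLayoutDestroy(a_desc);
  cublasLtMatmulDescDestroy(op_desc);
  cublasLtDestroy(handle);
  return 0;
}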
Even with the correct scalar and compute types it's failing the heuristic check. Still investigating.
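The "heuristic check" here refers to asking cuBLASLt for an algorithm that matches the problem descriptors; if it returns zero results, the requested type/layout combination is not supported. A self-contained sketch of that query (again an illustration, not MatX's actual code) might look like:

#include <cublasLt.h>

// Ask cuBLASLt whether any kernel exists for the given descriptors.
// Zero returned algorithms means the matmul is not supported as described.
bool has_supported_algo(cublasLtHandle_t handle,
                        cublasLtMatmulDesc_t op_desc,
                        cublasLtMatrixLayout_t a_desc,
                        cublasLtMatrixLayout_t b_desc,
                        cublasLtMatrixLayout_t c_desc) {
  cublasLtMatmulPreference_t pref;
  cublasLtMatmulPreferenceCreate(&pref);

  cublasLtMatmulHeuristicResult_t result{};
  int returned = 0;
  cublasLtMatmulAlgoGetHeuristic(handle, op_desc, a_desc, b_desc, c_desc,
                                 c_desc, pref, 1, &result, &returned);

  cublasLtMatmulPreferenceDestroy(pref);
  return returned > 0;  // 0 => fails the heuristic check
}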
I'm also facing this weird issue. I don't know where the problem is.
/blossom-ci
Fixes #538
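To illustrate the kind of compile-time check the PR title describes (hypothetical trait and function names, not MatX's actual implementation): a type trait can enumerate the element types a backend supports, and a static_assert turns an unsupported matmul into a compile-time error instead of a runtime heuristic failure.

#include <cstdint>
#include <type_traits>

// Hypothetical trait listing element types a matmul backend supports.
template <typename T>
struct backend_supports_matmul : std::false_type {};

template <> struct backend_supports_matmul<float>   : std::true_type {};
template <> struct backend_supports_matmul<double>  : std::true_type {};
template <> struct backend_supports_matmul<int8_t>  : std::true_type {};

template <typename T>
void check_matmul_type() {
  // Unsupported types fail here at compile time rather than at runtime.
  static_assert(backend_supports_matmul<T>::value,
                "matmul: element type not supported by this backend");
}

int main() {
  check_matmul_type<float>();    // compiles
  // check_matmul_type<bool>();  // would fail to compile
  return 0;
}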