Support for MacOS Ventura #25
I know somebody ran dlprimitives on an M1; unless there are bugs, it should run. I suggest trying a build: start from dlprimitives to check that the backend works, and then build the pytorch backend. Or you can start directly from pytorch; it shouldn't be a big problem.
No, only 32-bit float is supported at this point.
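Since only float32 is implemented, a frontend that wants to feed such a backend has to downcast (or refuse) other dtypes. A minimal pure-Python sketch of that policy; the function and names are illustrative, not part of the real pytorch_dlprim API:

```python
# Assumption from this thread: the backend implements float32 only.
SUPPORTED_DTYPES = {"float32"}

def to_backend_dtype(requested: str) -> str:
    """Map a requested dtype name to one the backend supports, or raise."""
    if requested in SUPPORTED_DTYPES:
        return requested
    # float64 can fall back to float32 if the caller accepts the
    # precision loss; complex dtypes have no sensible fallback here.
    if requested in ("float64", "double"):
        return "float32"
    raise TypeError(f"dtype {requested!r} is not supported by this backend")

print(to_backend_dtype("float64"))  # float32
```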
I installed OpenCL-CLHPP
Set:
But I'm getting this build error:
What step am I missing?
I 'hacked' /Users/davidlaxer/pytorch_dlprim/dlprimitives/include/dlprim/opencl_include.hpp:
Ran:
It built and installed without errors.
Change it to:
The 'mnist.py' test program completes with --device 'mps' and with 'cpu'.
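For context, a sketch of how a test script like mnist.py might expose such a flag. The --device flag name matches the thread; everything else here is an illustrative stand-in, not the project's actual script:

```python
import argparse

def parse_args(argv=None):
    p = argparse.ArgumentParser(description="toy mnist-style runner")
    # 'cpu', 'mps', or an OpenCL device spec are all plain strings here;
    # the framework decides what each one means.
    p.add_argument("--device", default="cpu",
                   help="device string, e.g. cpu, mps, or an OpenCL device")
    return p.parse_args(argv)

args = parse_args(["--device", "mps"])
print(args.device)  # mps
```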
Looks like the first device is actually the CPU. You should have other platforms/devices for the GPU, assuming OpenCL drivers are installed for the GPU. What is the output of:
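The thread addresses OpenCL devices as platform:device index pairs (0:0, 0:1). A pure-Python sketch of splitting such a spec, only to illustrate the addressing scheme; this is not dlprimitives' actual parser:

```python
def parse_ocl_device(spec: str):
    """Split a 'platform:device' spec like '0:1' into integer indices.

    On macOS the first device (0:0) may be the CPU driver, so the GPU
    can be 0:1 or live on another platform entirely.
    """
    platform_str, _, device_str = spec.partition(":")
    device_str = device_str or "0"   # bare '0' means device 0 on platform 0
    return int(platform_str), int(device_str)

print(parse_ocl_device("0:1"))  # (0, 1)
```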
Correct!
It's faster than 'mps'.
Great! What is the 'mps' device?
Ohhh... that is interesting. Probably I'll need to add a special case for header detection of both. Thanks. Can you build dlprimitives (outside of pytorch) and run some tests to see that it works properly? (Note: you'll probably need to set a cmake parameter, something like
'MPS' is Apple's Metal Performance Shaders framework. PyTorch now supports 'MPS' as a backend (alongside CUDA, etc.): https://pytorch.org/docs/stable/notes/mps.html How do I run test.py?
From iPython:
Interesting. Can you run some benchmarks of the opencl vs. metal backends? Here are some examples:
Please check these variants:
And of course mps for comparison; it would be highly interesting to see how my results compare to Apple's Metal results.
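The comparison requested above boils down to timing the same workload on each device. A backend-agnostic timing sketch, with an illustrative helper name; real GPU benchmarks must also synchronize the device before reading the clock, which pure Python cannot do:

```python
import time

def benchmark(fn, warmup=3, iters=10):
    """Time fn() after a few warmup calls; returns mean seconds per call."""
    for _ in range(warmup):
        fn()                         # warmup hides one-time setup costs
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# toy workload standing in for a forward/backward pass
mean = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{mean:.6f} s/iter")
```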
Because at one point opencl support was planned and opencl is a reserved device type, but it was never realized. For an out-of-tree backend I can use the privateuseone device, which I can rename to 'ocl' as well. But
There were two errors building dlprimitives; error in
Here are the results of running 'make test':
You are running on device 0:0 instead of 0:1. See:
See what I mentioned before.
I think you need to rerun
And then run 'make test'.
What I see in the benchmark is that it does not use the Winograd convolution kernel, and that is very important for resnet performance. What I see in the clinfo log:
When I check for Winograd compatibility I use:
And to check for AMD I do:
While the vendor ID is clearly not the same... I need to think about how to fix it.
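The mismatch can be sketched in a few lines: a check keyed on AMD's PCI vendor ID misses an AMD GPU that Apple's OpenCL platform reports under Apple's vendor ID. The two ID constants below are real PCI vendor IDs, but the checks themselves are only an illustration of the thread's point, not dlprimitives' actual code:

```python
AMD_PCI_VENDOR_ID = 0x1002   # AMD's PCI vendor ID
APPLE_VENDOR_ID   = 0x106B   # what Apple's OpenCL platform reports

def is_amd_by_vendor_id(vendor_id: int) -> bool:
    """Strict check: matches only the AMD PCI vendor ID."""
    return vendor_id == AMD_PCI_VENDOR_ID

def is_amd_by_name(device_name: str) -> bool:
    """Name-based fallback: catches an AMD GPU even when the platform
    reports Apple's vendor ID, as on this macOS setup."""
    name = device_name.lower()
    return "amd" in name or "radeon" in name

# An AMD GPU behind Apple's OpenCL driver fails the strict check:
print(is_amd_by_vendor_id(APPLE_VENDOR_ID))                     # False
print(is_amd_by_name("AMD Radeon Pro 5700 XT Compute Engine"))  # True
```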
Can you try to change the line:
To something like:
And then rerun flops to see whether some of the kernels use Winograd convolution and not only gemm.
... and after editing context.cpp:
Test
Will pytorch_dlprim run on macOS 13.2 Ventura?
I have an AMD Radeon Pro 5700 XT GPU.
Does OpenCL support the torch.float64 and torch.cfloat data types?