Apple Silicon Support #115
Replies: 7 comments 10 replies
-
I've also looked into this and came away with the impression that the MPS backend is not really ready yet. There's an issue tracking supported operations: pytorch/pytorch#77764
-
You could just do the FFTs on CPU (or use precompiled conditioning latents; there is a section on the latter in the docs). I believe that is the only step that computes an FFT; the rest of the model is traditional matrix math, so it should probably work. The only other thing I'd be concerned about is memory consumption. I'd love to hear how fast it is if you get it working.
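A minimal sketch of the CPU-fallback idea, assuming PyTorch with MPS support; the function name and parameters here are illustrative, not from tortoise-tts:

```python
import torch

# Hypothetical helper: run the FFT step (here, torch.stft) on CPU when the
# active device ("mps") does not support it, then move the result back so
# the rest of the model's matrix math stays on the GPU.
def fft_on_cpu(x: torch.Tensor, n_fft: int = 1024) -> torch.Tensor:
    original_device = x.device
    spec = torch.stft(
        x.to("cpu"),                       # unsupported op runs on CPU
        n_fft=n_fft,
        window=torch.hann_window(n_fft),   # window tensor also on CPU
        return_complex=True,
    )
    return spec.to(original_device)        # hand result back to the GPU
```

Recent PyTorch builds also have an opt-in CPU fallback for unsupported MPS ops via the `PYTORCH_ENABLE_MPS_FALLBACK=1` environment variable (set before importing torch), which might cover this without code changes.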
-
I'm going to accept your "MPS backend is not really ready yet" feedback and walk away from Apple Silicon support for now 😂
-
No luck here either.
-
Is there any update on this? It has been over a year.
-
Merged in #550!
-
Hi! Any plans to add Apple Silicon support? I'm just a basic C# programmer professionally, but I gave it a good try. I had to compile torchaudio from source to get tortoise-tts running natively on my M1 Max with GPU support. Unfortunately, it seems that the M1 Max GPU (or specifically the "mps" device, the equivalent of "cuda" in torch) doesn't support ... something over my head. I got errors about FFT ops not being supported.
I don't know how involved this would be, but if it could be made to work, it seems like the unified memory architecture in Apple Silicon could be an advantage. Thanks!
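For anyone else hitting this, a small probe along these lines can confirm whether the failure is the FFT ops specifically (a sketch, assuming a PyTorch build new enough to expose `torch.backends.mps`; the function name is illustrative):

```python
import torch

# Check whether the "mps" device is available at all, and if so, whether
# a basic FFT op actually runs on it (the op that tortoise-tts trips on).
def probe_mps_fft() -> str:
    mps = getattr(torch.backends, "mps", None)
    if mps is None or not mps.is_available():
        return "mps unavailable"
    try:
        torch.fft.rfft(torch.randn(1024, device="mps"))
        return "fft supported on mps"
    except (RuntimeError, NotImplementedError):
        return "fft not supported on mps"
```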