
ANE support #18

Closed
tmc opened this issue Dec 6, 2023 · 15 comments

Labels
wontfix This will not be worked on

Comments

@tmc

tmc commented Dec 6, 2023

The top-level README mentions that current device support is limited to CPU and GPU. Is ANE support in the works?

@vade

vade commented Dec 7, 2023

This is a bit above my pay grade, but my understanding is that:

  1. The ANE is mostly designed as an inference-only device that supports only forward prop
  2. The ANE has layer support implemented in hardware and can't be easily extended (?)
  3. The ANE only supports half-float (Float16) accelerated compute; everything is managed / converted to it by the runtime
  4. The ANE requires talking to the OS runtime for scheduling, and there's never a guarantee that you'll be resident on the ANE; you can only request it and hope you get a time slice (see the sketch after this list)
  5. The API is only exposed via CoreML / Swift
  6. Internally the API is Espresso (IIRC from stack traces?), a C++ library which isn't public
  7. CoreMLTools, the public Pythonic way to create CoreML models, exposes a CoreML runtime, but requires CoreML model specs (protobufs) to run on the ANE
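
For concreteness on points 3–5: through the public API an app can only request the Neural Engine when loading a CoreML model, and the precision and scheduling caveats above still apply. A minimal sketch (the model path is a placeholder):

```swift
import CoreML
import Foundation

// Placeholder path to a compiled CoreML model bundle (.mlmodelc).
let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc")

let config = MLModelConfiguration()
// Request the Neural Engine. This is only a request: CoreML may still
// schedule layers on GPU/CPU depending on op support and system load.
config.computeUnits = .cpuAndNeuralEngine // macOS 13+/iOS 16+; use .all on older systems

do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Loaded:", model.modelDescription)
} catch {
    print("Failed to load model:", error)
}
```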

I doubt that Apple would let an open source project leak the internal tooling of the ANE runtime (Espresso?).

There's some ANE reverse-engineering work happening sporadically, but I suspect this will be Metal / GPU for a while unless Apple exposes some cool new ways to publicly run arbitrary programs on the ANE (which would be dope).

Sorry to pop bubbles, and apologies if any of this is factually incorrect!

@vade

vade commented Dec 7, 2023

One thought which would be cool, however, to get both MLX and ANE inference would be (a rough sketch of the custom-layer piece follows at the end of this comment):

  1. Implement MLX via the Swift runtime.
  2. Expose custom layers in your CoreML model export.
  3. Implement those layers via MLX in a native Swift app.
  4. Use the CoreML runtime to load the model and request ANE inference.

You could also implement MLX preprocessing to easily get IOSurface-backed memory buffers in half float, which would grant your app the same unified memory access and avoid a ton of the overhead of moving data to the ANE, which is the default path without IOSurface-backed buffers.

In theory you'd get:

  1. Fast MLX pre-processing with unified memory
  2. CoreML ANE acceleration on native layers
  3. MLX layer ops with unified memory outputs

That actually sounds fucking awesome.
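
A rough sketch of what steps 2–4 could look like on the Swift side, assuming the older neuralnetwork model format (CoreML custom layers aren't available for ML Program models). The class name is hypothetical, and the evaluate body is a plain-Swift placeholder marking where an MLX Swift kernel would run:

```swift
import CoreML
import Foundation

// Hypothetical custom layer that the CoreML runtime calls back into.
// The rest of the graph can still be scheduled on the ANE by CoreML.
@objc(MLXBackedLayer)
final class MLXBackedLayer: NSObject, MLCustomLayer {
    init(parameters: [String: Any]) throws {
        super.init()
    }

    func setWeightData(_ weights: [Data]) throws {
        // In the proposed scheme, these bytes would be wrapped as MLX arrays.
    }

    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        // Shape-preserving op for this sketch.
        return inputShapes
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        // Placeholder: element-wise copy. This is where an MLX Swift kernel
        // would read the input buffer and write the result.
        for (input, output) in zip(inputs, outputs) {
            for i in 0..<input.count {
                output[i] = input[i]
            }
        }
    }
}
```

At conversion time the corresponding layers are marked as custom in the model spec, and CoreML instantiates the class by name at load time; the custom layers run on CPU/GPU while supported layers remain eligible for the ANE.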

@vade vade mentioned this issue Dec 7, 2023
@awni
Member

awni commented Dec 8, 2023

@vade basically said it all already, but at the moment we don't have plans to support ANE in MLX given it is a closed source API.

If / when that changes we will be first in line to add it as a supported device.

@MikeyBeez

MikeyBeez commented Jan 7, 2024

Apple blocks its developers. It always has. That way, only Apple can write good modern code. Then they don't and ignore their desktop anyway. New features are always only for integration with devices. I call their development environment block-ware, and Apple excels at it. Developers can't use the ANE. They can't use native TTS or STT. So how can one write a modern app? Pyobjc is a mess. Apple breaks their own peripherals with new versions of MacOS, etc. So just buy their new stuff, if they bother to create it, and forget about developing anything meaningful on their desktop platform.

@vade

vade commented Jan 7, 2024

What are you talking about?

I ship / have shipped code for ANE via CoreML.

You can use TTS via NSSpeechSynthesizer or 3rd-party APIs. You can use STT with NSSpeechRecognizer or via 3rd-party tools like Whisper, which have ironically been leveraged to use CoreML or Metal (see Whisper.cpp). A short sketch of the native routes follows below.

I'm not sure what your problem is other than not having accurate information.
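
For reference, a minimal sketch of the native routes mentioned above. NSSpeechSynthesizer and NSSpeechRecognizer are the older AppKit APIs (AVSpeechSynthesizer and SFSpeechRecognizer are the newer equivalents), and both run on Apple Silicon:

```swift
import AppKit

// Text-to-speech with the default system voice.
let synth = NSSpeechSynthesizer()
_ = synth.startSpeaking("Hello from macOS text to speech.")

// Command-style speech recognition over a fixed vocabulary.
let recognizer = NSSpeechRecognizer()
recognizer?.commands = ["hello", "goodbye"]
recognizer?.listensInForegroundOnly = false
recognizer?.startListening()

// Keep the run loop alive so speech and recognition callbacks can fire.
RunLoop.main.run(until: Date(timeIntervalSinceNow: 10))
```

Acting on recognized commands requires a delegate implementing speechRecognizer(_:didRecognizeCommand:), omitted here for brevity.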

@MikeyBeez

@vade, On Apple Silicon? Neither API works on my M1 Mac.

@vade

vade commented Jan 7, 2024

Yes, on Apple Silicon.

@MikeyBeez

MikeyBeez commented Jan 7, 2024

BTW, yes, pywhispercpp does work, but I don't think that uses NSSpeechRecognizer. Whisper.cpp uses its own model, which means running one on precious unified memory. If you have a code sample that does work on Apple Silicon through NSSpeechRecognizer, I'd love to see it.

@vade

vade commented Jan 7, 2024

This is getting off topic. I never claimed Whisper.cpp uses Apple's native APIs; it clearly doesn't. The point I was making is that there are both working native and 3rd-party solutions for TTS and STT.

@MikeyBeez

@vade, I misread you. Yes, CoreML does work, but I've been unable to convert Hugging Face models to .mlmodel format. There is one example of doing this, but I have not been able to extend the method to converting other models. And the example says the new model won't be as good anyway, because the conversion process is lossy.

@vade

vade commented Jan 7, 2024

CoreML is definitely a bit of a black art for conversion. We've had to learn a ton. Best to check Apple's CoreML Tools repo / examples and git issues for guidance. The conversion process is only lossy if you choose to natively support the Neural Engine, which, as stated in this issue, only supports 16-bit float natively. You can run CoreML on CPU or GPU at native 32-bit float, however.
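
A small sketch of the runtime side of that trade-off, assuming a placeholder model path: steering CoreML away from the FP16-only Neural Engine so inference stays at 32-bit float on CPU/GPU (the conversion-time precision choice itself is made in coremltools):

```swift
import CoreML
import Foundation

let config = MLModelConfiguration()
// Skip the Neural Engine; CPU and GPU support 32-bit float.
config.computeUnits = .cpuAndGPU
// Also ask the GPU not to accumulate in reduced precision.
config.allowLowPrecisionAccumulationOnGPU = false

// Placeholder path to a compiled model bundle.
let model = try? MLModel(
    contentsOf: URL(fileURLWithPath: "/path/to/Model.mlmodelc"),
    configuration: config
)
```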

@MikeyBeez

@vade, I appreciate your reply. Thank you, but I think I'm done trying to get Apple's block-ware to run. I don't want to "learn a ton" for something that should only require a simple API call. But that's what Apple does to its developers. Something that takes five minutes on Linux takes months on Apple. Why? Because it's block-ware. It's intended to be impossible or nearly impossible. Apple gives users the best user experience, but it screws its would-be developers. As I said, Apple wants a monopoly on meaningful development. Then it doesn't bother. I loved Xcode years ago. Now it's a nightmare. They discontinued their best developer tools, like Quartz Composer. Why? Because that's for in-house developers.

@RahulBhalley

@MikeyBeez I agree with you somewhat. I did have trouble converting my model to CoreML because it's impossible to implement an SVD op using whatever basic op implementations exist in CoreMLTools. I was stuck on this problem for ~8 months; it took me 2-3 days to do the same with the LibTorch-Lite library. There was no support for FFT ops for 3 years after they were requested, and there's still no support for 5-dimensional arrays in CoreML.

CoreML is hard to use unless Apple's in-house developers build with it.

I was super surprised to see that Apple released Stable Diffusion models converted to CoreML to run on-device on iPhone, while I couldn't run my comparatively lightweight model (<300 MB) on 1024x1024 images!

@fakerybakery

fakerybakery commented Feb 21, 2024

@vade

vade commented Feb 21, 2024

The top one is inference only, as the ANE doesn't support back prop, as stated earlier.

The second is private API / reverse engineering of the ANE, which, if you think about it, won't be sanctioned or supported by Apple in any real scenario.

Now that MLX Swift exists, in theory there are ways of doing zero-copy CoreML custom layers implemented in MLX, so you can take a model and cut it up so that the parts of the graph with operations the ANE supports run on the ANE, and the layers that can't can be implemented in MLX.

In theory it's the best of both worlds, but it requires ad hoc support per model (or, perhaps better phrased, per architecture).
