Replies: 2 comments
-
Great question. It's an open feature request, #290. I don't have a timeline for it yet, but we are interested in supporting export to ONNX.
-
I tried to quantize with mlx_lm.convert, then use Optimum. Optimum works on the fine-tuned model, but I am still trying to find a solution for the MLX side. It is interesting now because WebGPU works fine with Phi-3.
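The workflow described in that comment can be sketched as two CLI steps. This is a hedged sketch, not a confirmed pipeline: the model id is a placeholder you must substitute, and note that Optimum exports from the original Hugging Face checkpoint, since there is no direct MLX-to-ONNX converter (that is the feature being requested here).

```shell
# Step 1: quantize a Hugging Face checkpoint into MLX format.
# <your-model> is a placeholder for a Hugging Face model id or local path.
python -m mlx_lm.convert --hf-path <your-model> -q --mlx-path ./mlx-model

# Step 2: export to ONNX with Optimum. This reads the original
# Transformers weights, not the MLX output from step 1, because
# MLX -> ONNX conversion is not currently supported.
optimum-cli export onnx --model <your-model> ./onnx-model
```

The two outputs are independent: the MLX directory serves Apple-silicon inference, while the ONNX directory can target runtimes such as ONNX Runtime or WebGPU-based backends.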
-
Original question: Are there plans for MLX to ONNX conversion?