Replies: 3 comments 1 reply
-
Tagging @dfm and @superbobry, who have been working on JAX FFI.
-
For reference, the library in question has some 16,000 C++ files, or something in that range.
-
It's hard to give very concrete suggestions given the generality of this discussion, but I would say that what you're suggesting probably isn't straightforward. It is possible to call external C++ libraries (see dfm/extending-jax and Custom operations for GPUs with C++ and CUDA for more details), but it's not always simple to expose the appropriate interface. And, in particular, when you call out to external libraries, you are now responsible for providing implementations of the relevant operations to support autodiff, if that's something you require. All that to say, I don't think there's any magical solution for you here, but hopefully those links can give you a starting point for investigating the tradeoffs for your use case!
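To make the autodiff point concrete, here is a minimal sketch of what "providing implementations of the relevant operations" looks like. It uses `jax.pure_callback` as a stand-in for a real FFI binding (the `cpp_square` function here is a hypothetical placeholder for an external C++ routine, which you would normally expose via pybind11/nanobind or the JAX FFI), and `jax.custom_vjp` to supply the hand-written gradient that JAX cannot derive for you:

```python
import jax
import jax.numpy as jnp
import numpy as np

# Hypothetical stand-in for a function implemented in an external C++
# library; in a real setup this would be a pybind11/nanobind binding
# or a JAX FFI call.
def cpp_square(x: np.ndarray) -> np.ndarray:
    return np.square(x)

@jax.custom_vjp
def square(x):
    # pure_callback escapes the JAX tracer to call host-side code,
    # so JAX cannot trace (and therefore cannot differentiate) it.
    return jax.pure_callback(
        cpp_square, jax.ShapeDtypeStruct(x.shape, x.dtype), x
    )

def square_fwd(x):
    # Save x as the residual needed by the backward pass.
    return square(x), x

def square_bwd(x, g):
    # Hand-written VJP: d/dx x^2 = 2x.
    return (2.0 * x * g,)

square.defvjp(square_fwd, square_bwd)

x = jnp.array(3.0)
print(square(x))            # 9.0
print(jax.grad(square)(x))  # 6.0
```

Every externally implemented primitive you want to differentiate through needs this kind of rule (or a `custom_jvp` equivalent), which is a large part of why wrapping a big C++ codebase is rarely a mechanical exercise.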
-
Sup Jax,
I'm building an application where I need a dependency on a large codebase for 3D processing: millions of lines of C++ code that I need to use as an environment for a 3D ML model.
I know that making an application pure JAX and running it on GPUs/TPUs can make the computation some 100-4000 times faster, thanks to JIT compilation via MLIR and the elimination of transfers between CPU and TPU.
Is there a way we could use C++ support in an application without a very hefty rewrite? And if the C++ codebase could be run exclusively on TPUs, would the JAX part of the application suffer a lot in terms of performance?
Thanks everyone,
MRiabov