-
Natively, Isaac Sim supports numpy and torch backends, while Orbit right now mainly uses torch since that is more useful for GPU parallelization. Additionally, for small-dimension vectors, there isn't a lot of overhead in moving them from torch to numpy when needed (such as for ROS). Our future plan is to support the warp backend since that would simplify the processing of large image observations in a batched manner. We haven't yet investigated using JAX. From what I saw here, there aren't any PyTorch<->JAX operations available right now. It would definitely be useful for researchers since JAX has its benefits. Are there any Gym environments that support JAX? We can take a look and see how much effort this would take.
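As a minimal sketch of the small-vector point (this is not Orbit/Isaac Sim code; the tensor name and shape are hypothetical), moving a short state vector from torch to numpy, e.g. to fill a ROS message, is a single cheap device-to-host copy:

import torch

joint_positions = torch.rand(7, device="cuda")      # hypothetical 7-DoF arm state on the GPU
joint_positions_np = joint_positions.cpu().numpy()  # small transfer, negligible overhead
print(joint_positions_np)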
-
Some gym envs in JAX: https://github.com/RobertTLange/gymnax. For converting JAX to torch tensors directly on CUDA, you can use DLPack like so:
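The original snippet isn't preserved in this thread; a minimal sketch of the idea (assuming a recent jax and torch build with CUDA available) looks like this:

import torch
import torch.utils.dlpack
import jax
import jax.dlpack

x_jax = jax.random.normal(jax.random.PRNGKey(0), (1024, 4))
# JAX -> torch: zero-copy via DLPack, the data stays on the GPU
x_torch = torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(x_jax))
# torch -> JAX: same mechanism in the other direction
x_back = jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(x_torch.contiguous()))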
-
Nice! Thanks a lot for pointing this out @StoneT2000. Is there significant overhead in these conversions when dealing with large tensors? If not, then I am happy to have an interface that converts the environments to be compatible with JAX-based libraries.
-
Maybe? I haven't tested it at scale just yet, although I plan to soon; it would be nice to see a speed test.
-
I tried to make a script out of the code mentioned above but haven't been able to run it; I always get some error. I am leaving the script here for someone to try it out :)

import torch
import torch.utils.dlpack
import jax
import jax.dlpack
import time

# Convert tensors between JAX and PyTorch via DLPack (zero-copy on GPU).
def j2t(x_jax):
    x_torch = torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(x_jax))
    return x_torch

def t2j(x_torch):
    x_torch = x_torch.contiguous()  # https://github.com/google/jax/issues/8082
    x_jax = jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(x_torch))
    return x_jax

# time the torch -> JAX conversion
x = torch.randn(2048, 512, 4).cuda()
start = time.perf_counter()
for _ in range(100):
    y = t2j(x)
print(f"torch -> JAX time: {(time.perf_counter() - start) / 100:.6f} s")

# time the JAX -> torch conversion
x = jax.random.normal(jax.random.PRNGKey(0), (2048, 512, 4))
start = time.perf_counter()
for _ in range(100):
    y = j2t(x)
print(f"JAX -> torch time: {(time.perf_counter() - start) / 100:.6f} s")
-
I modified the code so that it runs.
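The modified script itself isn't preserved in this thread. A plausible sketch, assuming the original failure came from JAX preallocating most of the GPU memory before torch could allocate, with device synchronization added so the timings are meaningful:

import os
os.environ.setdefault("XLA_PYTHON_CLIENT_PREALLOCATE", "false")  # must be set before importing jax

import time
import torch
import torch.utils.dlpack
import jax
import jax.dlpack

def t2j(x_torch):
    # torch (CUDA) -> JAX, zero-copy via DLPack
    return jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(x_torch.contiguous()))

def j2t(x_jax):
    # JAX -> torch (CUDA), zero-copy via DLPack
    return torch.utils.dlpack.from_dlpack(jax.dlpack.to_dlpack(x_jax))

def bench(fn, x, n=100):
    torch.cuda.synchronize()  # wait for pending GPU work before starting the clock
    start = time.perf_counter()
    for _ in range(n):
        y = fn(x)
    torch.cuda.synchronize()  # and again before stopping it
    return (time.perf_counter() - start) / n

x_torch = torch.randn(2048, 512, 4, device="cuda")
x_jax = jax.random.normal(jax.random.PRNGKey(0), (2048, 512, 4))

print(f"torch -> JAX: {bench(t2j, x_torch):.6f} s")
print(f"JAX -> torch: {bench(j2t, x_jax):.6f} s")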
There doesn't seem to be a serious performance drop, and obviously transferring larger matrices in fewer loops makes it even faster.
-
For people who are further interested in this functionality, I found another library that seems useful: https://github.com/luchris429/purejaxrl
-
Starting with skrl v1.0.0-rc.1, there is JAX support for Isaac Gym, Omniverse Isaac Gym, and Isaac Orbit, among others. I am working on integrating skrl's JAX examples into the Isaac Orbit extensions :)
-
Is it possible to use JAX as the learning framework without any loss in simulation performance/speed?