I know there are examples for Google Cloud and AWS in the kuberay folder.

I need to set up Ray-LLM on an Azure cluster, and at a minimum I need to know which CPU and GPU types to request. For example, when should I use `accelerator_type_a10: 0.01` and when should I use `accelerator_type_v100: 1`, as in #44?

Please at least give more explanation of the meaning of `accelerator_type_a10` and `accelerator_type_v100`, so that we can figure out the Azure config on our own.

For example, with `accelerator_type_a100_80g` I can understand what A100 means from https://docs.ray.io/en/latest/ray-core/accelerator-types.html, but what does `80g` mean?
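For context, here is roughly where those keys appear in a per-model YAML. This is only a sketch based on the existing #44-style AWS/GCP examples, not an authoritative Azure config, and the exact field names may differ across RayLLM versions. As far as I understand, `accelerator_type_*` entries are Ray custom resources, so a fractional request like `0.01` mainly pins the worker onto a node that advertises that accelerator type, while `1` reserves a whole unit of it:

```yaml
# Sketch of a RayLLM model scaling_config (field names assumed from the
# existing AWS/GCP examples; adjust node/GPU types for Azure).
scaling_config:
  num_workers: 1
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  resources_per_worker:
    # Fractional request: only requires the node to expose this custom
    # resource, i.e. "place this worker on an A10 node" without consuming
    # a full unit of it.
    accelerator_type_a10: 0.01
    # A value of 1 would instead reserve one whole unit of the custom
    # resource, e.g.:
    # accelerator_type_v100: 1
```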