I am trying to deploy RayLLM locally, and these are the commands I am running:
However, when I run the serve command, I get:
2024-01-24 16:34:15,360 INFO scripts.py:411 -- Running config file: '/home/ray/serve_configs/amazon--LightGPT.yaml'.
2024-01-24 16:34:18,947 INFO worker.py:1715 -- Started a local Ray instance. View the dashboard at 127.0.0.1:8265
It then hangs for a while and never actually creates the Ray instance.
I am running this on Red Hat 9, and I know the GPU is connected because I can run nvidia-smi.
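In case it helps with debugging, here is a quick way to check whether Ray itself registers the GPU (a minimal sketch, assuming ray is importable inside the pod; this is separate from the serve command above):

```python
import ray

# Start a local Ray instance and print the resources it detected.
ray.init(ignore_reinit_error=True)

# Expect a "GPU" key here if Ray picked up the card that nvidia-smi reports.
print(ray.cluster_resources())

ray.shutdown()
```

If this prints no "GPU" entry even though nvidia-smi works, the problem is likely Ray's GPU detection rather than the serve config itself.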
Interestingly, I cannot cd into the data directory from inside the pod; it says I need sudo permissions. Could this be part of the problem?