cuda does not install #71

Open
dorooddorood606 opened this issue Apr 12, 2021 · 18 comments

@dorooddorood606

dorooddorood606 commented Apr 12, 2021

Python: 3.7
CUDA: 11.1
PyTorch: 1.8

I am trying to compile the CUDA code, but it does not work. Could you have a look, please? Thanks.

Traceback (most recent call last):
  File "jit.py", line 3, in <module>
    'lltm_cuda', ['lltm_cuda.cpp', 'lltm_cuda_kernel.cu'], verbose=True)
  File "/user/x/anaconda3/envs/test1/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1091, in load
    keep_intermediates=keep_intermediates)
  File "/user/x/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1302, in _jit_compile
    is_standalone=is_standalone)
  File "/user/x/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1400, in _write_ninja_file_and_build_library
    is_standalone=is_standalone)
  File "/user/x/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1782, in _write_ninja_file_to_build_library
    cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
  File "/user/x/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1561, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'

IndexError: list index out of range
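For context, the script in the traceback matches the PyTorch custom C++/CUDA extension tutorial; a minimal sketch of what jit.py is presumably doing (file names taken from the traceback above, so treat this as an illustration rather than the reporter's exact script):

from torch.utils.cpp_extension import load

# JIT-compile the extension; this call eventually reaches _get_cuda_arch_flags,
# which is where the IndexError above is raised
lltm_cuda = load(
    'lltm_cuda', ['lltm_cuda.cpp', 'lltm_cuda_kernel.cu'], verbose=True)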
@gaetan-landreau

gaetan-landreau commented Apr 22, 2021

I tried to investigate this issue a bit, since I've faced the same problem in one of my Docker containers.

If you're currently building through a setup.py, you should set TORCH_CUDA_ARCH_LIST when you run it:

TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" python setup.py install

(or an ARG TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" in your Dockerfile, for instance)

Additional info can be found here: https://pytorch.org/docs/stable/cpp_extension.html
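For anyone unsure what "building through a setup.py" looks like here, a minimal sketch of such a build script, reusing the hypothetical tutorial file names from above (an illustration, not any project's actual setup.py):

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name='lltm_cuda',
    ext_modules=[
        # compiles the .cpp/.cu pair into a single importable extension;
        # the arch flags come from TORCH_CUDA_ARCH_LIST if it is set
        CUDAExtension('lltm_cuda', ['lltm_cuda.cpp', 'lltm_cuda_kernel.cu']),
    ],
    cmdclass={'build_ext': BuildExtension},
)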

@kuzand

kuzand commented Jul 29, 2021

> I tried to investigate this issue a bit, since I've faced the same problem in one of my Docker containers.
> If you're currently building through a setup.py, you should set TORCH_CUDA_ARCH_LIST when you run it:
> TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" python setup.py install
> (or an ARG TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" in your Dockerfile, for instance)
> Additional info can be found here: https://pytorch.org/docs/stable/cpp_extension.html

How do I find the "YOUR_GPUs_CC+PTX" value for my GPU?

@gaetan-landreau

You should find everything you need at this link (go to the section "CUDA-Enabled NVIDIA Quadro and NVIDIA RTX").
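If the GPU is already visible to torch, you can also derive the string programmatically; a small sketch (the printed value is just an example):

import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"{major}.{minor}+PTX")  # e.g. "8.6+PTX" for an RTX 30-series card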

@darkdevahm

> I tried to investigate this issue a bit, since I've faced the same problem in one of my Docker containers.
> If you're currently building through a setup.py, you should set TORCH_CUDA_ARCH_LIST when you run it:
> TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" python setup.py install
> (or an ARG TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" in your Dockerfile, for instance)
> Additional info can be found here: https://pytorch.org/docs/stable/cpp_extension.html

> How do I find the "YOUR_GPUs_CC+PTX" value for my GPU?

Have you solved this issue?

@oliver-batchelor

Is torch.cuda.is_available() False? I have only had this when trying to compile with a broken install of PyTorch or CUDA.
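A few quick checks along those lines, in case it helps others narrow this down (standard torch introspection calls, nothing project-specific):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version torch was built against; None for CPU-only wheels
print(torch.cuda.is_available())  # False reproduces the failure mode described above
print(torch.cuda.device_count())  # 0 means the arch list ends up empty during the build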

@darkdevahm

> Is torch.cuda.is_available() False? I have only had this when trying to compile with a broken install of PyTorch or CUDA.

Which CUDA and PyTorch versions did you use?

@oliver-batchelor

oliver-batchelor commented Oct 17, 2021 via email

@MalteEbner

MalteEbner commented Mar 8, 2022

The solution that worked for me on Linux:
Docker needs access to the CUDA libraries during build time. To ensure this, make sure that
your /etc/docker/daemon.json file looks as follows:

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}

If not, change it accordingly and then restart Docker with

sudo systemctl restart docker

@ClementPinard
Contributor

Hello, for anyone visiting this issue, the problem is caused here: https://github.com/pytorch/pytorch/blob/master/torch/utils/cpp_extension.py#L1694

Basically, arch_list is supposed to be populated with the architectures discovered via torch.cuda.get_device_capability(i).

The thing is, when no CUDA card is detected, torch.cuda.device_count() returns 0, and thus no architecture is added to that list.

That leads to the last line, which essentially says "add '+PTX' to the name of the last architecture", which obviously fails when arch_list is empty.

As such, this problem essentially occurs because no CUDA hardware was found by torch. Possible reasons and solutions:

If there is no way to detect the GPU at build time, but you know which architecture it should run on, you can set it explicitly with an environment variable, as described in this comment ( #71 (comment) )
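A stripped-down sketch of that code path (simplified from cpp_extension.py, not a verbatim copy) shows why an empty list blows up:

import torch

arch_list = []
for i in range(torch.cuda.device_count()):      # 0 when no CUDA device is visible
    major, minor = torch.cuda.get_device_capability(i)
    arch_list.append(f'{major}.{minor}')
arch_list[-1] += '+PTX'                         # IndexError: list index out of range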

@apatsekin

apatsekin commented Jul 13, 2022

If you are building in an NVIDIA Docker container without an actual GPU, you can use something like this:

CUDA_VERSION=$(/usr/local/cuda/bin/nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
if [[ ${CUDA_VERSION} == 9.0* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;7.0+PTX"
elif [[ ${CUDA_VERSION} == 9.2* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0+PTX"
elif [[ ${CUDA_VERSION} == 10.* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5+PTX"
elif [[ ${CUDA_VERSION} == 11.0* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0+PTX"
elif [[ ${CUDA_VERSION} == 11.* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0;8.6+PTX"
else
    echo "unsupported cuda version."
    exit 1
fi
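One caveat if you put this in a Dockerfile: each RUN starts a fresh shell, so the exported variable only takes effect if this snippet runs in the same RUN instruction as the build itself (or if you set the value via ENV/ARG instead, as suggested above).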

@andriworld

andriworld commented Aug 20, 2022

I had the same error running in WSL on Windows. The above solutions of setting the TORCH_CUDA_ARCH_LIST environment variable fixed the issue.

@XiangFeng66

How do I solve this problem on the Windows platform? @gaetan-landreau @ClementPinard

@earor-R

earor-R commented Mar 7, 2023

> I tried to investigate this issue a bit, since I've faced the same problem in one of my Docker containers.
> If you're currently building through a setup.py, you should set TORCH_CUDA_ARCH_LIST when you run it:
> TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" python setup.py install
> (or an ARG TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" in your Dockerfile, for instance)
> Additional info can be found here: https://pytorch.org/docs/stable/cpp_extension.html

> How do I find the "YOUR_GPUs_CC+PTX" value for my GPU?

If the GPU driver is loaded correctly, execute the following statement in the Python console:

>>> torch.cuda.get_device_capability(0)
(6, 1)

That means TORCH_CUDA_ARCH_LIST="6.1". However, in most cases CUDA is unavailable because the GPU is specified incorrectly, for example CUDA_VISIBLE_DEVICES is set and the specified GPU is not available.
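To illustrate that failure mode (a hypothetical repro, not from the comment above): masking all GPUs makes torch report zero devices, which is exactly the empty-arch-list case from earlier in this thread:

import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''  # must be set before CUDA is initialised
import torch

print(torch.cuda.device_count())  # 0 -> empty arch list -> IndexError during the build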

@mnpenner

I got CUDA working inside Docker on Windows 10 thanks to the instructions here and a little help from ChatGPT.

The issue is, as @earor-R said, that you can figure out the TORCH_CUDA_ARCH_LIST value, but the GPU still isn't available during docker build. You can, however, make it available during docker run by adding --gpus=all.

So you can automate half of the setup in a Dockerfile like this:

FROM nvidia/cuda:11.7.1-devel-ubuntu22.04

WORKDIR /srv

RUN apt update && apt install -y curl build-essential git

RUN curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > /tmp/miniconda.sh

RUN bash /tmp/miniconda.sh -b -p /opt/miniconda

ENV PATH="/opt/miniconda/bin:$PATH"

RUN pip install torch torchvision torchaudio

RUN git clone https://github.com/oobabooga/text-generation-webui .

RUN mkdir /srv/repositories
RUN cd /srv/repositories && git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda

Then build it:

docker build . -t oobabooga --progress=plain

Then run it, give the container a name, add --gpus all, and don't add --rm:

docker run --gpus all -it --name temp-container oobabooga /bin/bash

Then once inside, you can get the compute capability as @earor-R said and finish the install:

python -c 'import torch; print(".".join(map(str, torch.cuda.get_device_capability(0))))'
export TORCH_CUDA_ARCH_LIST="8.6+PTX"
cd /srv/repositories/GPTQ-for-LLaMa && python setup_cuda.py install

Then exit the container and commit it back into an image:

 docker commit temp-container oobabooga-run

And then finally you can run it:

docker run -it --gpus=all --rm -p 7860:7860 --mount "type=bind,src=$(wslpath -w text-generation-webui/models),dst=/srv/models,readonly" oobabooga-run python server.py --auto-devices --chat --model=gpt4-x-alpaca-13b-native-4bit-128g --wbits=4 --groupsize=128 --gpu-memory=18 --listen

I wish I could automate the build more easily so this is maintainable, but that's the best I've got right now.
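One possible way to fold the two manual steps into a single script executed inside docker run --gpus all, where the GPU is visible (an untested sketch; the path is the one from the Dockerfile above):

import os
import subprocess
import torch

# derive "X.Y+PTX" from the GPU that --gpus all exposed to the container
cc = '.'.join(map(str, torch.cuda.get_device_capability(0)))
env = dict(os.environ, TORCH_CUDA_ARCH_LIST=f'{cc}+PTX')
subprocess.run(['python', 'setup_cuda.py', 'install'],
               cwd='/srv/repositories/GPTQ-for-LLaMa', check=True, env=env)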

@alexmeri98

> I tried to investigate this issue a bit, since I've faced the same problem in one of my Docker containers.
> If you're currently building through a setup.py, you should set TORCH_CUDA_ARCH_LIST when you run it:
> TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" python setup.py install
> (or an ARG TORCH_CUDA_ARCH_LIST="YOUR_GPUs_CC+PTX" in your Dockerfile, for instance)
> Additional info can be found here: https://pytorch.org/docs/stable/cpp_extension.html

> How do I find the "YOUR_GPUs_CC+PTX" value for my GPU?

You can use the following script to obtain your GPU archs:

import torch
torch.cuda.get_arch_list()

You will get ['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86'], and you will have to parse this into "3.7 5.0 6.0 7.0 7.5 8.0 8.6+PTX".
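A small helper for that parsing step (hypothetical; note that get_arch_list() reports what the installed torch binary was compiled for, not necessarily what your GPU supports):

import torch

archs = [a[len('sm_'):] for a in torch.cuda.get_arch_list()]  # e.g. ['37', '50', ...]
print(';'.join(f'{a[0]}.{a[1:]}' for a in archs) + '+PTX')    # e.g. "3.7;5.0;...;8.6+PTX"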

@imzeroan

I solved this by running TORCH_CUDA_ARCH_LIST="6.1+PTX" python setup.py install for my GTX 1080 Ti. The compute capability 6.1 is the value for the 1080 Ti; refer to https://developer.nvidia.com/cuda-gpus

@Leask

Leask commented Oct 25, 2023

> CUDA_VERSION=$(/usr/local/cuda/bin/nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
> if [[ ${CUDA_VERSION} == 9.0* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;7.0+PTX"
> elif [[ ${CUDA_VERSION} == 9.2* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0+PTX"
> elif [[ ${CUDA_VERSION} == 10.* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5+PTX"
> elif [[ ${CUDA_VERSION} == 11.0* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0+PTX"
> elif [[ ${CUDA_VERSION} == 11.* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0;8.6+PTX"
> else
>     echo "unsupported cuda version."
>     exit 1
> fi

Updated this workaround to support CUDA v12:

CUDA_VERSION=$(/usr/local/cuda/bin/nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
if [[ ${CUDA_VERSION} == 9.0* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;7.0+PTX"
elif [[ ${CUDA_VERSION} == 9.2* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0+PTX"
elif [[ ${CUDA_VERSION} == 10.* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5+PTX"
elif [[ ${CUDA_VERSION} == 11.0* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0+PTX"
elif [[ ${CUDA_VERSION} == 11.* ]]; then
    export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0;8.6+PTX"
elif [[ ${CUDA_VERSION} == 12.* ]]; then
    export TORCH_CUDA_ARCH_LIST="5.0;5.2;5.3;6.0;6.1;6.2;7.0;7.2;7.5;8.0;8.6;8.7;8.9;9.0+PTX"
else
    echo "unsupported cuda version."
    exit 1
fi
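(Editorial note on the 12.x branch: it drops the 3.5 entry because CUDA 12 removed support for Kepler-class compute capability 3.5, so nvcc 12 would reject that architecture.)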

@VAllens

VAllens commented Oct 6, 2024

> If you are building in an NVIDIA Docker container without an actual GPU, you can use something like this:
>
> CUDA_VERSION=$(/usr/local/cuda/bin/nvcc --version | sed -n 's/^.*release \([0-9]\+\.[0-9]\+\).*$/\1/p')
> if [[ ${CUDA_VERSION} == 9.0* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;7.0+PTX"
> elif [[ ${CUDA_VERSION} == 9.2* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0+PTX"
> elif [[ ${CUDA_VERSION} == 10.* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5+PTX"
> elif [[ ${CUDA_VERSION} == 11.0* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0+PTX"
> elif [[ ${CUDA_VERSION} == 11.* ]]; then
>     export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0;8.6+PTX"
> else
>     echo "unsupported cuda version."
>     exit 1
> fi

This works for me, thanks.
I develop and compile projects on machines without an NVIDIA GPU,
and run the test programs on machines that have one.
