
Guard CUDA calls with an explicit check #516

Merged 1 commit into RangiLyu:main on Jul 12, 2023

Conversation

@st235 (Contributor) commented on Jun 10, 2023:

If called on macOS, where CUDA is not available, the method crashes with the following error:

  File "/nanodet/model/arch/one_stage_detector.py", line 50, in inference
    torch.cuda.synchronize()
  File "/python3.9/site-packages/torch/cuda/__init__.py", line 564, in synchronize
    _lazy_init()
  File "/python3.9/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

The proposed fix is to check whether CUDA is available before accessing the API.
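
A minimal sketch of the kind of guard proposed, assuming the call site shown in the traceback (the surrounding inference body is abbreviated and illustrative, not the exact code in the repo):

    import torch

    class OneStageDetector(torch.nn.Module):
        def inference(self, meta):
            with torch.no_grad():
                results = self(meta["img"])
                # torch.cuda.synchronize() raises AssertionError on CPU-only builds
                # (e.g. macOS wheels compiled without CUDA), so guard the call.
                if torch.cuda.is_available():
                    torch.cuda.synchronize()
            return results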

@st235 (Contributor, Author) commented on Jun 12, 2023:

@RangiLyu, could you please take a look?

@deshpandeneeraj commented:

I would suggest using next(self.parameters()).device instead, since CUDA can be initialized but still not used.
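
A sketch of that alternative, which keys the synchronization off the device the model's parameters actually live on rather than off CUDA availability (illustrative, not the change that was merged):

    import torch

    class OneStageDetector(torch.nn.Module):
        def inference(self, meta):
            with torch.no_grad():
                results = self(meta["img"])
                # Synchronize only if the model itself sits on a CUDA device;
                # a CUDA-capable build running the model on CPU skips the call.
                device = next(self.parameters()).device
                if device.type == "cuda":
                    torch.cuda.synchronize(device)
            return results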

@st235 (Contributor, Author) commented on Jun 26, 2023:

Thank you for the feedback.

I would stick with the proposed approach, as it is consistent with the existing checks already in the repo.
For example, torch.cuda.memory_reserved() / 1e9 if torch.cuda.is_available() else 0.

> Since CUDA can be initialized but still not used

I believe this case is less severe, as it does not lead to a runtime crash.

@st235 (Contributor, Author) commented on Jun 27, 2023:

@RangiLyu, any chance you can take a look?

@RangiLyu (Owner) commented:

> @RangiLyu, any chance you can take a look?

Sorry for the late reply! (I've been too busy lately)
Thanks a lot for your contribution!

@RangiLyu merged commit 3c9607c into RangiLyu:main on Jul 12, 2023.
@st235 deleted the fix/unguarded_cuda_call branch on Jul 12, 2023 at 07:34.