
RuntimeError: CUDA out of memory. Tried to allocate 768.00 MiB (GPU 0; 6.00 GiB total capacity; 3.63 GiB already allocated; 328.06 MiB free; 3.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF #11
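The allocator hint at the end of the error message can be tried before any code changes. A minimal sketch of setting it from Python; the 128 MiB split size is an illustrative value to tune, not a recommendation:

```python
import os

# Must be set before the first CUDA allocation (ideally before importing torch).
# max_split_size_mb caps the block size the caching allocator will split,
# which can reduce fragmentation; 128 is an illustrative starting point.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch only after this point so the allocator picks up the setting
```

The same value can also be exported in the shell before launching the script.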

Closed
nanshanvv opened this issue May 25, 2023 · 5 comments

Comments

@nanshanvv

I set the batch_size to 1 and still get this error when I run the model.
Here is the GPU information of my computer:
[screenshot of GPU information]
Could you tell me how to fix this error?

@WuJunde
Collaborator

WuJunde commented May 25, 2023

  1. If running in 3D, decrease -chunk, -num_sample, and the other parameters I mentioned in the README.
  2. Reduce parameter precision to fp16 (or mixed fp32/fp16). I have not implemented this, so you would need to implement it yourself; luckily, there are many relevant resources on GitHub. I may add it in the future.
  3. Use a bigger GPU; 6 GB of memory is not enough for most deep-learning projects now.
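The memory saving behind point 2 is simple to see: halving the precision of the weights halves the bytes per parameter. A NumPy illustration of the arithmetic (the 1024x1024 shape is arbitrary; this only demonstrates the memory math, not the actual conversion the repo would need):

```python
import numpy as np

# A stand-in 1024x1024 weight matrix in the default fp32 precision.
w32 = np.random.rand(1024, 1024).astype(np.float32)

# Casting to fp16 halves the storage per parameter.
w16 = w32.astype(np.float16)

print(w32.nbytes // 2**20, "MiB")  # fp32 copy: 4 MiB
print(w16.nbytes // 2**20, "MiB")  # fp16 copy: 2 MiB
```

In PyTorch the analogous operations would be model.half() or autocast-based mixed precision, at some cost in numerical range.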

@nanshanvv
Author

Thank you for your reply

@WuJunde WuJunde closed this as completed May 25, 2023
@AmrinKareem

AmrinKareem commented Jun 7, 2023

Hi @nanshanvv have you solved this issue? Could you please share your solution? Thanks!

@nanshanvv
Author

> Hi @nanshanvv have you solved this issue? Could you please share your solution? Thanks!

Hi, I solved this problem by renting cloud GPU servers online to get more computing power.

@Nimophilist

Hi, could you share the configuration of the GPU you used?
