Missing binding for cuda.empty_cache #896

Open
xd009642 opened this issue Oct 1, 2024 · 2 comments

xd009642 commented Oct 1, 2024

In the tch and torch-sys crates there doesn't appear to be a binding for https://pytorch.org/docs/stable/generated/torch.cuda.empty_cache.html#torch-cuda-empty-cache or the torch._C._cuda_emptyCache function it calls. I'll have a deeper look into this and put up a PR, but any guidance would be appreciated, as this is a fairly important feature when sharing GPUs with other jobs.
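For context, a rough sketch of what the Rust side of such a binding might look like. The shim name below is hypothetical, not an existing torch-sys symbol; the real work would be a small C wrapper in libtch around libtorch's c10::cuda::CUDACachingAllocator::emptyCache():

```rust
// Hypothetical sketch only: `atc_cuda_empty_cache` does not exist in torch-sys
// today; it stands in for a C shim function that would call
// c10::cuda::CUDACachingAllocator::emptyCache() in libtorch.
extern "C" {
    fn atc_cuda_empty_cache();
}

/// Releases cached, unused device memory held by the CUDA caching allocator,
/// mirroring Python's torch.cuda.empty_cache().
pub fn empty_cache() {
    unsafe { atc_cuda_empty_cache() }
}
```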

xd009642 commented Oct 1, 2024

That, and the other memory controls. For my own issue, though, I'm happy going for the blunt-force approach to get torch to free up some of its excessive allocations.

xd009642 commented

Bumping slightly as this is posing more of a problem. I'm probably going to resort to using https://crates.io/crates/nvml-wrapper in the short term to detect issues before they happen, and otherwise try to find some time to look into how empty_cache would be implemented.
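For the stopgap, a minimal sketch of the monitoring side with nvml-wrapper, assuming a recent (0.9-style) API; the device index 0 and the report format are just illustrative:

```rust
use nvml_wrapper::Nvml;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialise NVML and pick the GPU the job runs on (index 0 here).
    let nvml = Nvml::init()?;
    let device = nvml.device_by_index(0)?;

    // memory_info() reports device-wide totals in bytes, so torch's cached
    // (but unused) allocations count as "used", which is exactly what other
    // jobs sharing the GPU will see.
    let mem = device.memory_info()?;
    println!(
        "GPU memory: {} MiB used / {} MiB total ({} MiB free)",
        mem.used / (1024 * 1024),
        mem.total / (1024 * 1024),
        mem.free / (1024 * 1024),
    );
    Ok(())
}
```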
