Training process (distill) #85

Open
NikitaVasilevN opened this issue Jul 24, 2024 · 1 comment

NikitaVasilevN commented Jul 24, 2024

Dear authors, thank you for your work.
I am trying to reproduce your results on the ScanNet dataset. I train the model on an A40 (46 GB GPU) with different numbers of scenes (1000, 500, 10) and the default parameters in the config ours_openseg.yaml, but I can't, because I run out of memory. Could you say how much memory you used to train your model on ScanNet? Or what should I do to run the distill process on the whole dataset?

Also, how did you run test mode, and how did you preprocess the ScanNet test dataset?

Thanks in advance


oneHFR commented Nov 13, 2024

Hi, I encountered a similar issue.

Initially, I also ran into issues with distill on ScanNet when using an RTX 3090 (24 GB): the process would get killed. After switching to an L20 with 48 GB, I was able to run it smoothly. I monitored the GPU memory usage in real time, and at certain points it required close to 45 GB, so memory is likely the main limitation here.
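One simple way to check peak usage from inside the training script is PyTorch's built-in memory counters (a minimal sketch; the distill iterations themselves are elided):

```python
import torch

# Reset the peak-memory counter right before the part of training you want to profile.
torch.cuda.reset_peak_memory_stats()

# ... run one or more distill training iterations here ...

# Peak memory allocated by tensors on the current GPU, in GiB.
peak_gib = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak GPU memory allocated: {peak_gib:.2f} GiB")
```

(Watching `nvidia-smi` in a second terminal gives a similar real-time view, and also includes allocator overhead that the counters above don't capture.)

If a larger GPU isn't an option, one generic workaround (not specific to this repo) is gradient accumulation, which trades a smaller per-step batch for more backward passes per optimizer step. A minimal sketch with toy stand-ins for the model and data loader:

```python
import torch
import torch.nn as nn

# Toy stand-ins; in practice these would be the distillation model and the ScanNet loader.
model = nn.Linear(16, 8)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.randn(4, 16), torch.randn(4, 8)) for _ in range(8)]

accum_steps = 4  # effective batch size = per-step batch size * accum_steps
optimizer.zero_grad()
for i, (inputs, targets) in enumerate(loader):
    loss = criterion(model(inputs), targets) / accum_steps  # scale so the accumulated sum is an average
    loss.backward()  # gradients accumulate across the accum_steps micro-batches
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Whether this can be dropped in directly depends on how the repo's training loop is structured, so treat it as a sketch rather than a tested patch.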

Hope this helps!
