Dear authors, thank you for your work.
I am trying to reproduce your results on the ScanNet dataset. I train my model on an A40 (46 GB GPU) with different numbers of scenes (1000, 500, 10) and the default parameters in the config ours_openseg.yaml, but I can't do it because I run out of memory. Could you say how much memory you used to train your model on ScanNet? Or what should I do to run the distill process on the whole dataset?
Also, how did you run test mode and preprocess the ScanNet test dataset?
Thanks in advance.
Initially, I also encountered issues with distill on ScanNet when using an RTX 3090 (24 GB), where the process would get killed. After switching to an L20 with 48 GB, I was able to run it smoothly. I monitored the GPU memory usage in real time, and at certain points it required close to 45 GB. So it's likely that memory is the main limitation here.
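In case it helps others checking whether their card is large enough, here is a minimal sketch of how peak GPU memory can be tracked during the distill run, assuming the training loop is PyTorch-based as in this repo; the loop and variable names in the comments are hypothetical, not the repo's actual script:

```python
import torch

def log_gpu_memory(tag: str, device: int = 0) -> None:
    """Print current and peak GPU memory allocated by PyTorch (in GB)."""
    allocated = torch.cuda.memory_allocated(device) / 1024**3
    peak = torch.cuda.max_memory_allocated(device) / 1024**3
    print(f"[{tag}] allocated: {allocated:.2f} GB, peak: {peak:.2f} GB")

# Hypothetical usage inside a training loop:
# torch.cuda.reset_peak_memory_stats()
# for step, batch in enumerate(train_loader):
#     loss = model(batch)          # forward + distillation loss
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
#     if step % 100 == 0:
#         log_gpu_memory(f"step {step}")
```

Running `watch -n 1 nvidia-smi` in a separate terminal gives a similar real-time picture at the process level.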