I am trying to train the reconstruction model on a Tesla K40 (12 GB memory), but unfortunately I run out of GPU memory midway through training. I'm training the network on the CoMA dataset as given in the code (more than 20k faces).
Could you suggest some strategies to work around this issue? I do have a cluster of GPUs at my disposal, but since the paper doesn't mention the resources required to train the networks, I thought I would ask you first.
Alternatively, if possible, could you please share the pretrained reconstruction model? I would like to use the encoder network on one of my in-house datasets.
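For reference, the main workaround I've been considering is shrinking the per-step batch size and compensating with gradient accumulation so the effective batch size stays the same. A minimal back-of-envelope helper (my own sketch, not part of this repo's code) for picking the split:

```python
import math

def accumulation_plan(effective_batch, max_batch_that_fits):
    """Split an effective batch into micro-batches that fit in GPU memory.

    effective_batch:     the batch size the training recipe calls for
    max_batch_that_fits: the largest batch size that runs without OOM

    Returns (micro_batch, accum_steps) such that
    micro_batch * accum_steps >= effective_batch, i.e. accumulating
    gradients over accum_steps micro-batches before each optimizer step
    matches (or slightly exceeds) the original effective batch size.
    """
    micro = min(effective_batch, max_batch_that_fits)
    steps = math.ceil(effective_batch / micro)
    return micro, steps

# e.g. a batch of 32 that OOMs, where 12 samples fit:
# run micro-batches of 12 and step the optimizer every 3 batches.
print(accumulation_plan(32, 12))  # → (12, 3)
```

In the training loop this just means calling `loss.backward()` on each micro-batch and invoking `optimizer.step()` / `optimizer.zero_grad()` only every `accum_steps` iterations (dividing the loss by `accum_steps` to keep gradient magnitudes comparable). I'd still prefer your recommendation, since I don't know which batch size the reported results assume.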
Thank you,
Niraj Pandkar