Pretraining memory #9

Open
Dobby114 opened this issue Aug 30, 2022 · 1 comment

Comments

@Dobby114

Hello, I found that during pre-training, memory usage keeps increasing across iterations. I'd like to know why this happens. Did the same occur in your training, and how much memory does it take to train one epoch? Thanks!
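
For reference, a frequent cause of memory growing every iteration in PyTorch training loops is keeping a reference to the loss tensor across iterations, which retains each step's autograd graph. The sketch below is a hypothetical minimal loop, not necessarily this repository's code; `model`, `loader`, `optimizer`, and `criterion` are assumed placeholders.

```python
import torch

# Hypothetical minimal training loop illustrating a common cause of
# per-iteration memory growth: accumulating the loss *tensor* keeps
# every step's autograd graph alive instead of letting it be freed.
def train_one_epoch(model, loader, optimizer, criterion, device="cuda"):
    model.train()
    running_loss = 0.0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        # BAD:  running_loss += loss         # retains the graph -> memory grows
        # GOOD: running_loss += loss.item()  # detaches to a float -> memory flat
        running_loss += loss.item()
    return running_loss / len(loader)
```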

@JonghwanMun
Contributor

The training was done on 8 x NVIDIA V100 GPUs (32 GB each) with no memory issues.
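
For comparison, one quick way to check whether per-GPU memory stays flat across iterations is PyTorch's built-in memory counters (both are standard `torch.cuda` APIs; where you place the call in the loop is up to you):

```python
import torch

# Log per-GPU memory once per iteration to see whether usage is flat or growing.
def log_gpu_memory(step, device=0):
    allocated = torch.cuda.memory_allocated(device) / 1024**3  # tensors currently in use
    reserved = torch.cuda.memory_reserved(device) / 1024**3    # cached by the allocator
    print(f"step {step}: allocated {allocated:.2f} GiB, reserved {reserved:.2f} GiB")
```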
