Rule of thumb for kimg? #274
Comments
The general understanding is: 1 kimg = 1,000 images. This means 1,000 images are shown to the network during training. Based on my experience, I would suggest 4000 kimg (`--kimg=4000`) in the training configuration as a good starting point to observe how the G and D behave. After that, you may go for a lower or higher kimg, depending on your dataset.
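To connect kimg to epochs, here is a minimal sketch (hypothetical helper, not part of any training script): since 1 kimg = 1,000 real images shown to the networks regardless of dataset size, the implied number of epochs follows from simple division.

```python
def kimg_to_epochs(kimg: float, dataset_size: int) -> float:
    """Approximate passes over the dataset implied by a --kimg setting.

    kimg counts thousands of (real) images shown during training,
    independent of how many unique images the dataset contains.
    """
    return kimg * 1000 / dataset_size

# Hypothetical example: --kimg=4000 on a 4,000-image dataset
# corresponds to roughly 1,000 epochs.
print(kimg_to_epochs(4000, 4000))  # 1000.0
```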
Thank you for the explanation. What's a good metric for observing both the generator and the discriminator? FID for the generator and logits for the discriminator? Or simply logits for both?
@lebeli KID is good for small datasets, as the original KID paper suggests (Bińkowski, Sutherland, Arbel, Gretton, "Demystifying MMD GANs", https://arxiv.org/abs/1801.01401). FID is widely used and a good fit for large datasets, such as over 10k images, according to my understanding at the moment. One metric for both.
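For reference, KID is the squared maximum mean discrepancy (MMD) between Inception features of real and generated images, using a cubic polynomial kernel. A minimal NumPy sketch of the unbiased estimator follows (feature extraction is not shown; `real_feats` and `fake_feats` are assumed to be arrays of shape `(n_samples, feat_dim)`):

```python
import numpy as np

def polynomial_kernel(X, Y):
    """Cubic polynomial kernel k(x, y) = (x.y / d + 1)^3 used by KID."""
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3

def kid(real_feats, fake_feats):
    """Unbiased squared-MMD estimate between two feature sets."""
    m, n = len(real_feats), len(fake_feats)
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    # Drop the diagonal in the within-set terms for an unbiased estimate.
    return ((k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
            + (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
            - 2 * k_rf.mean())
```

One reason KID suits small datasets is that this estimator is unbiased, whereas FID's Gaussian-moment estimate is biased at small sample sizes.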
I have a dataset with ~4,000 images (CAD images without any noise). Is there a rule of thumb for choosing the kimg hyperparameter? Also, the kimg hyperparameter is basically how one controls the number of epochs, right?