Training on 7k dataset encounters mode collapse and generator leakage #8
Hi, I have always found a warmup of 5e5 to be sufficient, but if the fake images up to the warmup iteration look quite poor, it would be worth trying a longer warmup stage, depending on the dataset. Let me know if these suggestions do not address your issue. Thanks.
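As an illustration, a longer warmup could be tried by raising the `--warmup` value in the training command from the issue (the value 1e6 below is an arbitrary example to sketch the idea, not a tested recommendation):

```shell
# Same training command as in the issue, but with a longer warmup stage.
# --warmup=1e6 is an illustrative value only; tune it for your dataset.
python train.py --outdir=training-runs --data=datasets/face7k.zip \
    --aug=ada --warmup=1e6 --cfg=paper256_2fmap --gpus=2 --kimg=5000 \
    --batch=16 --snap=25 --cv-loss=multilevel_sigmoid_s --augcv=ada \
    --cv=input-clip-output-conv_multi_level --metrics=none
```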
Hi, also, does switching to DiffAugment using the command below help in any way?
Another thing that might be useful in this case is to resume from an FFHQ pre-trained model. Thanks.
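In stylegan2-ada-pytorch-style codebases, resuming from a pre-trained model is usually done with a `--resume` flag; assuming this repository exposes the same option (check `train.py --help` to confirm the flag and accepted values), the invocation might look like:

```shell
# Resume from an FFHQ pre-trained checkpoint. The --resume flag and the
# "ffhq256" shorthand are assumptions based on stylegan2-ada-pytorch;
# a local .pkl checkpoint path could be used instead.
python train.py --outdir=training-runs --data=datasets/face7k.zip \
    --resume=ffhq256 \
    --aug=ada --warmup=5e5 --cfg=paper256_2fmap --gpus=2 --kimg=5000 \
    --batch=16 --snap=25 --cv-loss=multilevel_sigmoid_s --augcv=ada \
    --cv=input-clip-output-conv_multi_level --metrics=none
```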
Hi,
Does fine-tuning from an FFHQ-trained model with vision-aided-loss help?
Hi!
Sorry for contacting you so frequently recently! I'm very interested in your work!
I encountered some problems while reproducing your results. I trained on my own 7k dataset with the following command:
python train.py --outdir=training-runs --data=datasets/face7k.zip --aug=ada --warmup=5e5 --cfg=paper256_2fmap --gpus=2 --kimg=5000 --batch=16 --snap=25 --cv-loss=multilevel_sigmoid_s --augcv=ada --cv=input-clip-output-conv_multi_level --metrics=none
The probability of the adaptive discriminator augmentation increased rapidly during training, and at present the quality of the generated samples is very poor. Compared with StyleGAN2-ADA, the mode collapse and generator leakage are much more severe. I don't know what details I missed.
I hope you can help me! Thank you again for answering my questions before!