Training gets overfitting #18
Comments
Hi Hai, if you set the weight of self.G_img_loss to 0.000, you only train the autoencoder. You may set it to 0.0001 after the autoencoder is stable.
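A minimal sketch of that staged schedule, assuming the loss is combined as in the line quoted in the original issue below; the warm-up epoch count here is an assumption, not a value from the repo:

```python
# Sketch only. The attribute names (EG_loss, G_img_loss, E_z_loss, tv_loss)
# follow the loss line quoted in this thread; `warmup_epochs` is an assumed
# threshold for "the autoencoder is stable", not a repo default.
def combined_eg_loss(self, epoch, warmup_epochs=20):
    # Phase 1: adversarial image term weighted 0 -> pure autoencoder training.
    # Phase 2: small adversarial weight (0.0001) once reconstructions are stable.
    g_img_weight = 0.0 if epoch < warmup_epochs else 0.0001
    return (self.EG_loss
            + g_img_weight * self.G_img_loss
            + 0.01 * self.E_z_loss
            + 0.0 * self.tv_loss)
```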
Hi @susanqq, thanks for your reply. I tried that configuration, but I still get overfitting. What is the actual configuration you used to get the published results?
Hi Hai, the results published in the original paper are trained on a large face aging dataset, of which UTKFace is one part. Due to copyright, we cannot provide the other datasets. All the faces are aligned with an affine transformation. If you want to further improve the performance, you can add a mask to remove the background.
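A rough sketch of that masking idea, assuming an L2 reconstruction loss; `face_mask` is a hypothetical per-image binary mask you would have to produce yourself (e.g. from facial landmarks), not something the repo provides:

```python
import tensorflow as tf

# Hypothetical helper, not part of the repo: zero out background pixels
# before computing the reconstruction loss so the model is not penalized
# for (and does not waste capacity on) the background.
def masked_reconstruction_loss(input_image, generated_image, face_mask):
    # face_mask: same spatial size as the images, 1 inside the face, 0 outside.
    diff = (input_image - generated_image) * face_mask
    return tf.reduce_mean(tf.square(diff))
```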
Hi, by the way, I am trying to reproduce your results to compare with our team's approach for our next research paper. Would you mind sharing your trained models as well as the image testing code? If OK, you can send them to my email: [email protected]. I would really appreciate it if you could help.
I think setting the std of all the weight initializers to 0.02 can solve this problem.
Change the code related to the weight initializer:
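The code referred to here was not included in the comment; the following is only a sketch of the kind of change meant, assuming the layers build their weights with a truncated-normal initializer (the exact variable names in the repo's ops.py may differ):

```python
import tensorflow as tf

# Sketch only: create layer weight variables with stddev=0.02 everywhere,
# as suggested above. Adapt to wherever the repo actually builds its
# conv/fc weight variables.
def make_weights(name, shape, stddev=0.02):
    return tf.get_variable(
        name, shape,
        initializer=tf.truncated_normal_initializer(stddev=stddev))
```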
@susanqq how can I train the autoencoder to be stable? Does it mean we will train them separately?
@shz1314 Stability here means you can generate good (but blurry) reconstructed images which are quite similar to the given input. You can then increase the lambda to generate sharper results. It is not necessary to train them separately. However, in practice, we noticed that it is easier and more stable to train the autoencoder first and then add the adversarial loss.
Thank you very much for your reply. I have a new problem: when I use Local Binary Patterns as a loss function, the result doesn't get any better. I want to do some work on texture information.
Hi @susanqq Thanks, |
Hi HaiPhan, |
Hello @susanqq Thank you in advance. |
Hi @ZZUTK,
Thanks for your great work. I have trained on UTKFace with the following configuration:
self.loss_EG = self.EG_loss + 0.0000 * self.G_img_loss + 0.01 * self.E_z_loss + 0.0000 * self.tv_loss
The sample results during training look good, but when I try some test images, the results are not good.
Could you tell me exactly which numbers you chose to get your results?
Thanks,
Hai
Input: (attached image)
Output: (attached image)