Hi, I am using this code in my own project, and I have a couple of points of confusion since I am new to cGANs.
The first one is how to choose a suitable batch_size. Others have said that batch_size = 1 leads to better performance. However, my training set is fairly large (about 42k images, plus another 8k for validation and test). Should batch_size be kept at 1, or set to a larger value such as 128? On my first attempt I used batch_size = 1, but gen_loss increased quickly within one epoch. When I set batch_size to 128, the increase in gen_loss slowed down.
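My rough intuition for why the two curves look so different is that a larger batch averages the per-image loss over more samples, which mainly smooths the per-step estimate rather than changing what is learned. A toy sketch of that effect (pure NumPy, the numbers are made up and nothing here comes from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are noisy per-image generator losses around some mean value.
per_image_loss = rng.normal(loc=0.7, scale=0.3, size=42_000)

for batch_size in (1, 128):
    # The loss reported at each training step is the mean over the batch.
    n_steps = len(per_image_loss) // batch_size
    step_losses = (
        per_image_loss[: n_steps * batch_size]
        .reshape(n_steps, batch_size)
        .mean(axis=1)
    )
    # Larger batches -> much lower step-to-step variance in the reported loss.
    print(batch_size, step_losses.std())
```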
So here is the second question: when to stop training. I learned from some blogs that an ideal stopping condition is when the discriminator can no longer tell real images from generated ones, i.e. predict_fake = predict_real = 0.5. If I understand correctly, gen_loss (-log(predict_fake)) should then be about 0.7, while discrim_loss (-(log(predict_real) + log(1 - predict_fake))) should be about 1.4. Is this the correct condition for stopping training?
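To double-check that arithmetic, here is a quick sanity check of the equilibrium values (assuming the usual pix2pix-style adversarial losses with natural logarithms; predict_real/predict_fake are the discriminator outputs, and if the total generator loss also includes an L1 term it would sit above this):

```python
import math

# At the theoretical equilibrium the discriminator outputs 0.5 for both
# real and generated images.
predict_real = 0.5
predict_fake = 0.5

# Generator GAN loss: -log(D(G(x))) -> ln(2) ~= 0.693
gen_loss_GAN = -math.log(predict_fake)

# Discriminator loss: -(log(D(x)) + log(1 - D(G(x)))) -> 2*ln(2) ~= 1.386
discrim_loss = -(math.log(predict_real) + math.log(1 - predict_fake))

print(gen_loss_GAN)   # 0.6931...
print(discrim_loss)   # 1.3862...
```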
Looking forward to your expert explanation. Thanks a lot!