
a question #20

Open
mikeyjhom opened this issue Apr 1, 2024 · 2 comments

@mikeyjhom

Dear Mr. Tal Daniel,
I apologize for taking up your time. I have a question and would like to request your help. I trained the soft-intro-vae model using the code you provided, and the images generated during training were very good. However, after training was complete, I used the trained model to generate images and got strange results.
This is the image generated during the training process

(image attachment: samples generated during training)

This is the image generated by calling the trained model

(image attachments: samples from the trained model)
Is the method I used to call the trained model incorrect?
I would like to seek your advice on a solution. I hope to receive your help. Thank you very much.

@taldatech
Owner

Hmmm, I think it has to do with Batch Normalization. I assume you are using the standard architecture (and not the style-based one). Try turning BatchNorm on/off (model.train() / model.eval()) and see how it affects the results. The architecture in this model is very outdated; I would replace all the Batch Normalization layers with Group Normalization to avoid depending on batch statistics.
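A minimal sketch of both suggestions, assuming a generic PyTorch model (the actual SoftIntroVAE class and its constructor come from this repository, so the model below is only a stand-in): first switch to eval mode so BatchNorm uses its running statistics when generating, then, if you are willing to retrain, swap BatchNorm2d layers for GroupNorm so inference no longer depends on batch statistics at all.

```python
import torch
import torch.nn as nn

# Stand-in model for illustration only; use the trained SoftIntroVAE instead.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(),
)

# 1) When sampling from a trained model, call eval() so BatchNorm uses its
#    accumulated running statistics instead of the statistics of the current batch.
model.eval()
with torch.no_grad():
    x = torch.randn(1, 3, 32, 32)
    out = model(x)

# 2) Optionally replace every BatchNorm2d with GroupNorm (requires retraining),
#    so normalization no longer depends on batch statistics.
def replace_bn_with_gn(module: nn.Module, num_groups: int = 32) -> None:
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            # num_groups must divide the channel count; clamp it if needed.
            groups = min(num_groups, child.num_features)
            setattr(module, name, nn.GroupNorm(groups, child.num_features))
        else:
            replace_bn_with_gn(child, num_groups)

replace_bn_with_gn(model)
print(model)
```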

@mikeyjhom
Author

Okay, thank you very much for your reply. I will try to solve the problem based on your suggestions.
