I appreciate your paper very much, and have read it many times!
Recently I started reading your code to figure out some implementation details. But it seems that your code is not consistent with the description in your paper, which makes it hard for me to follow.
For example:
One key observation claimed in your paper is to reuse the autoencoder when training the GAN. But the autoencoder is not pre-trained in your code; how does this affect your final results?
In the paragraph "Constrained random code sampling" of Section 5, you mention that "Enc (in the VAE) is the recursive encoder originally trained with the autoencoder (before adversarial tuning), running in test mode." But according to your code, the parameters of Enc in the VAE are also updated, which confuses me.
According to the description in your paper, it seems that Enc and D share the same parameters. But in your code they do not. How does this affect your final results?
In the paragraph "Structure prior for G" of Section 5, you mention constraining the hierarchies inferred by G to lie in a plausible set. How is this achieved? I haven't been able to figure it out from your code.
Could you please give more of an introduction to, or explanation of, your implementation, so that I can read through your code quickly?
Thank you in advance. @junli-lj
Thanks for your interest in our paper.
Regarding your concerns, please see my comments below:
In the final implementation, we did not use the pre-trained autoencoder to initialize the GAN, since we found that even without this initialization, our network still converges to the same performance.
Actually, in a VAE-GAN, the VAE network and the GAN network must be trained jointly. You can use a pre-trained model for parameter initialization, but you then need to fine-tune or train them jointly. So the Enc in the VAE should be updated.
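To make this concrete, here is a minimal PyTorch sketch of joint VAE-GAN training (this is not the repository code; the module definitions, dimensions, and unit loss weights are hypothetical placeholders). The key point is that Enc's parameters are handed to the optimizer, so they keep updating alongside the decoder/generator instead of staying frozen after pre-training:

```python
# Hypothetical sketch of joint VAE-GAN training; not the repository code.
import torch
import torch.nn as nn

code_dim, data_dim = 8, 32
Enc = nn.Linear(data_dim, 2 * code_dim)   # outputs mean and log-variance
Dec = nn.Linear(code_dim, data_dim)       # also serves as the generator
D = nn.Sequential(nn.Linear(data_dim, 1), nn.Sigmoid())

# Enc's parameters are passed to the optimizer, so they are updated
# jointly with Dec rather than frozen after pre-training.
opt_vae = torch.optim.Adam(list(Enc.parameters()) + list(Dec.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCELoss()

for step in range(100):  # toy loop on random data
    x = torch.randn(16, data_dim)
    mu, logvar = Enc(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    recon = Dec(z)

    # VAE/generator update: reconstruction + KL + fooling the discriminator
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    g_loss = (recon - x).pow(2).mean() + kl + bce(D(recon), torch.ones(16, 1))
    opt_vae.zero_grad()
    g_loss.backward()
    opt_vae.step()

    # Discriminator update on real vs. reconstructed samples
    d_loss = bce(D(x), torch.ones(16, 1)) + bce(D(recon.detach()), torch.zeros(16, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
```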
The Enc in the VAE and the D in the GAN do not share parameters; they are different networks. We only mentioned that D can be initialized from the pre-trained autoencoder.
Each shape in the training data can be encoded to a code by the current Enc in the VAE. For a random code, we first find the m closest codes among the training data; the hierarchies corresponding to those m codes form a plausible hierarchy set for this random code. The top K candidates can then be further selected by the scores from D.
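A short sketch of that sampling step may help (again, not the repository code; `train_codes`, `hierarchies`, and `score_with_D` are hypothetical stand-ins for the encoded training set and the discriminator scoring):

```python
# Hypothetical sketch of constrained random code sampling; not the repository code.
import numpy as np

def plausible_hierarchies(z, train_codes, hierarchies, score_with_D, m=10, k=3):
    """Return the top-k candidate hierarchies for a random code z.

    z            : (d,) random code drawn from the prior
    train_codes  : (n, d) codes of all training shapes under the current Enc
    hierarchies  : list of n hierarchies, aligned with train_codes
    score_with_D : callable mapping (z, hierarchy) -> discriminator score
    """
    # Find the m nearest training codes in Euclidean distance.
    dists = np.linalg.norm(train_codes - z, axis=1)
    nearest = np.argsort(dists)[:m]

    # Their hierarchies form the plausible set for this random code.
    candidates = [hierarchies[i] for i in nearest]

    # Rank candidates by the discriminator's score and keep the top k.
    scores = np.array([score_with_D(z, h) for h in candidates])
    top = np.argsort(scores)[::-1][:k]
    return [candidates[i] for i in top]
```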