
Replicating Keras Autoencoder Example: understanding blocks #601

Answered by zachgk
hmf asked this question in Q&A


The relu needs to go between (or after) each usage of Linear. The batchFlatten is not a linear transformation, just a reshape from a 2D image into a 1D feature vector. What the Keras "simplest possible autoencoder" does is {Linear, relu, Linear, sigmoid}; we would just add a reshape (batchFlatten) before all of that, as in the sketch below.
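
A minimal sketch of that layer stack in DJL. The 28×28 input size and 32-unit bottleneck are assumptions taken from the Keras tutorial, not from this thread, and the builder names assume a recent DJL release:

```java
import ai.djl.nn.Activation;
import ai.djl.nn.Blocks;
import ai.djl.nn.SequentialBlock;
import ai.djl.nn.core.Linear;

public class SimplestAutoencoder {
    public static void main(String[] args) {
        // Assumed sizes: 28*28 MNIST pixels, 32-unit bottleneck.
        int pixels = 28 * 28;
        int bottleneck = 32;

        SequentialBlock autoencoder = new SequentialBlock()
                .add(Blocks.batchFlattenBlock(pixels))              // reshape only, no weights
                .add(Linear.builder().setUnits(bottleneck).build()) // encoder
                .add(Activation::relu)                              // relu after the first Linear
                .add(Linear.builder().setUnits(pixels).build())     // decoder
                .add(Activation::sigmoid);                          // sigmoid after the second Linear

        System.out.println(autoencoder);
    }
}
```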

For your loss, the softmaxCrossEntropyLoss is definitely not right. It is a classification loss, where each value in your prediction is a different class and the label indicates which class is correct. What you are looking for here is a pixel-by-pixel comparison that says the original and reconstructed images should be as similar as possible. The SigmoidBinaryCrossEntropyLoss is a better fit.
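
A sketch of wiring that loss into a training config. Passing `fromSigmoid = true` is an assumption that matches the block above, which already ends in a sigmoid; with the no-arg factory the prediction would be treated as raw logits and sigmoid applied inside the loss:

```java
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.loss.Loss;

public class AutoencoderLoss {
    public static void main(String[] args) {
        // fromSigmoid = true: the network's last block is already a sigmoid,
        // so the loss must not apply sigmoid a second time.
        Loss reconstruction =
                Loss.sigmoidBinaryCrossEntropyLoss("SigmoidBCELoss", 1, true);

        DefaultTrainingConfig config = new DefaultTrainingConfig(reconstruction);
        System.out.println(config.getLossFunction().getName());
    }
}
```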
