Transposed weights sharing == tied weights for AE #670
Comments
Yes, with the right model definition. See #1211 (comment).
@shelhamer I can't find a solution to transposed weight sharing there! Can you point me to the particular comment you are talking about?
I am also looking into transposed weight sharing, but I can't find the solution mentioned above either... is there an example? Thanks!
@SBodenstedt see if #2565 is of some help to you!
@Macbull Thanks for the tip! I checked it out and compiled it, but it doesn't seem to work. I used the Transpose layer to share weights between two InnerProduct layers, but when trying to train the net, a seg fault occurred.
Removing the Transpose layer fixes the issue.
@SBodenstedt sorry for that; maybe I forgot to update the PR. You may try this: https://github.com/Macbull/ELM-Caffe
Thank you for your help. I cloned the master branch this time, but the error is still the same :(
@SBodenstedt Oh! Sorry, but I have currently lost track of this project of mine (which is still incomplete). There are a few changes you may try. If those don't work, I would suggest modifying the InnerProduct layer itself to take a boolean transpose input (you will see how once you read the InnerProduct layer code). It should be easy and straightforward.
Whoops, fixed that typo, but I still have the same error...
Yes! By explicitly setting the transpose parameter in the modified InnerProduct layer, you can share weights without an additional layer (set transpose = true for one layer and false for the other).
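To make the suggestion above concrete, here is a hedged prototxt sketch of how two InnerProduct layers could share one weight blob via a common param name, with the decoder interpreting it transposed. The param name "tied_w" and the layer/blob names are illustrative, and this assumes an InnerProduct implementation that accepts a boolean `transpose` field as discussed in this thread:

```protobuf
# Encoder: weights stored as (num_output x input_dim) = (100 x 784).
layer {
  name: "encode"
  type: "InnerProduct"
  bottom: "data"
  top: "code"
  param { name: "tied_w" }   # share the weight blob by name
  inner_product_param { num_output: 100 }
}
# Decoder: with transpose: true, the weights are read as (input_dim x
# num_output) = (100 x 784), matching the encoder's shared blob.
layer {
  name: "decode"
  type: "InnerProduct"
  bottom: "code"
  top: "recon"
  param { name: "tied_w" }   # same blob, used transposed
  inner_product_param {
    num_output: 784   # must equal the encoder's input dimension
    transpose: true
  }
}
```

Only the weight param is named, so each layer keeps its own (unshared) bias.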
Hi,
Is there any way to use weight sharing (as in #546), but transposed? It would be useful for autoencoders and deconvnets.
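For anyone landing here, the idea being asked about is "tied weights": the decoder reuses the transpose of the encoder's weight matrix instead of learning a separate one. A minimal NumPy sketch of the forward pass, independent of Caffe (all names here are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 3

# One shared weight matrix: the encoder uses W, the decoder uses W.T.
W = rng.standard_normal((n_hidden, n_in)) * 0.1
b_enc = np.zeros(n_hidden)  # biases are typically NOT tied
b_dec = np.zeros(n_in)

def encode(x):
    # Hidden code: h = tanh(W x + b_enc)
    return np.tanh(W @ x + b_enc)

def decode(h):
    # Tied decoder: reconstruction uses the transposed encoder weights.
    return W.T @ h + b_dec

x = rng.standard_normal(n_in)
recon = decode(encode(x))
print(recon.shape)  # (8,) -- reconstruction has the input's dimension
```

Because both directions read from the same array `W`, a gradient update to `W` changes the encoder and decoder simultaneously, which is exactly what transposed weight sharing in Caffe is meant to achieve.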