
Maybe there is a bug? #8

Open
KangGrandesty opened this issue Aug 15, 2017 · 2 comments

Comments

@KangGrandesty

drops_ does not get reset after Forward and Backward; it still keeps the values from the last Forward.
I suggest resetting drops_ at the beginning of Forward.
This is my implementation of FractalNet with global and local drop-path in Caffe: https://github.com/KangGrandesty/fractalnet.
It may be useful for you.
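
A minimal standalone sketch of the suggested fix (not the repository's actual layer code): the member name drops_ comes from the comment above, while FractalJoinSketch, Forward, and drop_prob are illustrative assumptions. The point is simply that the drop mask is cleared at the start of every Forward instead of carrying over from the previous iteration.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

struct FractalJoinSketch {
  std::vector<bool> drops_;  // one drop flag per incoming path
  std::mt19937 rng_{0};

  explicit FractalJoinSketch(std::size_t num_paths) : drops_(num_paths, false) {}

  void Forward(double drop_prob) {
    // Proposed fix: reset drops_ before sampling, instead of relying on
    // whatever values were left over from the last Forward call.
    std::fill(drops_.begin(), drops_.end(), false);

    // Sample fresh local drop decisions for this pass.
    std::bernoulli_distribution drop(drop_prob);
    for (std::size_t i = 0; i < drops_.size(); ++i) {
      drops_[i] = drop(rng_);
    }
    // Keep at least one path alive, as local drop-path requires.
    if (std::all_of(drops_.begin(), drops_.end(), [](bool d) { return d; })) {
      drops_[rng_() % drops_.size()] = false;
    }
  }
};
```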

@zhanglonghao1992

@KangGrandesty Hi, I read your code; it's cool, but I have one question.
In fractal_join_layer.cpp, I don't understand why "Dtype mult = Dtype(bottom_size) / Dtype(total_undrop_)".
I think it should be "Dtype mult = Dtype(1) / Dtype(total_undrop_)", which makes the top the element-wise mean of the bottoms.

@KangGrandesty
Author

@zhanglonghao1992 I just set it up like the dropout layer: if half of the neurons are dropped randomly, the rest are scaled up so that the output's expected value stays close to the non-dropout case. I have since changed it to the element-wise mean and modified the code.
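
A standalone sketch (not the repository's fractal_join_layer.cpp) contrasting the two scalings discussed here; bottom_size and total_undrop follow the names in the comments above, while the Join helper and its parameters are illustrative. Dropout-style scaling (bottom_size / total_undrop) preserves the expected value of the sum over all paths, whereas 1 / total_undrop averages the surviving paths, which matches the element-wise mean join described in the FractalNet paper.

```cpp
#include <cstddef>
#include <vector>

// Join the undropped bottoms into a single top blob.
// Assumes at least one path is undropped (drop-path guarantees this).
std::vector<float> Join(const std::vector<std::vector<float>>& bottoms,
                        const std::vector<bool>& drops,
                        bool dropout_style_scaling) {
  const std::size_t bottom_size = bottoms.size();
  std::size_t total_undrop = 0;
  for (bool d : drops) {
    if (!d) ++total_undrop;
  }

  // dropout-style:     mult = bottom_size / total_undrop
  // element-wise mean: mult = 1 / total_undrop
  const float mult = dropout_style_scaling
      ? static_cast<float>(bottom_size) / static_cast<float>(total_undrop)
      : 1.0f / static_cast<float>(total_undrop);

  std::vector<float> top(bottoms[0].size(), 0.0f);
  for (std::size_t p = 0; p < bottom_size; ++p) {
    if (drops[p]) continue;            // a dropped path contributes nothing
    for (std::size_t i = 0; i < top.size(); ++i) {
      top[i] += mult * bottoms[p][i];  // scale and accumulate survivors
    }
  }
  return top;
}
```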
