
My implementation with better results based on reasonable hyperparameters #10

Open
waterhorse1 opened this issue May 1, 2020 · 0 comments


waterhorse1 commented May 1, 2020

Dear authors,

I have reproduced the algorithm in the paper. In your original paper, you set the inner-loop learning rate to 5e-5 and the outer-loop learning rate to 5e-6, which from my perspective is too low for effective learning. So I reset these hyperparameters and tested the MAE of my implementation, and it turns out to be better than your reported results.

So I wonder whether the hyperparameters in your paper are simply not well chosen, or whether there is some other reason to lower the inner and outer learning rates to that extent.

My implementation is in:
https://github.com/waterhorse1/MELU_pytorch
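To make the scale issue concrete, here is a minimal first-order MAML-style sketch on a toy quadratic loss, showing how the inner and outer learning rates enter the update. The toy tasks and function names are hypothetical illustrations, not MeLU's actual model; only the 5e-5 / 5e-6 rates come from the discussion above, and the larger rates are just an illustrative alternative.

```python
# Sketch of a first-order MAML-style inner/outer update on toy quadratic
# losses, illustrating how tiny learning rates stall the meta-update.
# The tasks and structure are hypothetical, not MeLU's real architecture.

def inner_update(theta, grad_fn, inner_lr, steps=1):
    """Task-specific adaptation: a few gradient steps starting from theta."""
    phi = theta
    for _ in range(steps):
        phi = phi - inner_lr * grad_fn(phi)
    return phi

def outer_update(theta, tasks, inner_lr, outer_lr):
    """Meta-update: average gradients at the adapted parameters (first-order)."""
    meta_grad = 0.0
    for grad_fn in tasks:
        phi = inner_update(theta, grad_fn, inner_lr)
        meta_grad += grad_fn(phi)  # first-order MAML approximation
    return theta - outer_lr * meta_grad / len(tasks)

# Toy tasks: losses (theta - c)^2 with gradient 2 * (theta - c).
tasks = [lambda t, c=c: 2.0 * (t - c) for c in (1.0, 3.0)]

theta = 0.0
# With the paper's rates (5e-5 / 5e-6), one meta-step barely moves theta.
slow = outer_update(theta, tasks, inner_lr=5e-5, outer_lr=5e-6)
# Larger, more typical rates make visible progress toward the task optima.
fast = outer_update(theta, tasks, inner_lr=1e-2, outer_lr=1e-3)
```

With the paper's rates, one meta-step moves `theta` by roughly 2e-5, while the larger rates move it two orders of magnitude further, which is the kind of gap the issue is pointing at.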

@waterhorse1 waterhorse1 changed the title Better results with reasonable hyperparameters My implementation with better results based on reasonable hyperparameters May 1, 2020