What will it take to get this working on regression problems as well? #15
Comments
Thanks! That's definitely something that I've wanted to get added. I think the option that would make the most sense would be to allow the user to specify their own output layer. That's not something that the GP can evolve anyway, and that way the user has 100% control over the types of outputs they can have, eventually allowing them to use DEvol on any kind of deep net: regressors, autoencoders, potentially DQNs, etc. Obviously some more work would have to be done on layer parameters to make that work right, but I think it would at least allow for regression vs. classification as you've brought up.
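To make the idea concrete, usage might look roughly like this. A sketch only: the `output_layer` argument is hypothetical, and the other `GenomeHandler` parameters are approximations of DEvol's constructor.

```python
from tensorflow.keras.layers import Dense
from devol import DEvol, GenomeHandler

# Hypothetical: the user supplies the output layer, so the GP only evolves
# the hidden architecture. A single linear unit gives a regressor; a softmax
# over n classes would give a classifier.
genome_handler = GenomeHandler(
    max_conv_layers=4,
    max_dense_layers=2,
    max_filters=256,
    max_dense_nodes=1024,
    input_shape=(28, 28, 1),
    output_layer=Dense(1, activation='linear'),  # hypothetical parameter
)
devol = DEvol(genome_handler)
```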
Great idea! Elegant solution to many problems. Doesn't add much (any?) complexity to the project itself, while opening up tons of flexibility for users. Big fan of this idea.
Any quick thoughts on what it will take to implement this, @joeddav?
It should be very straightforward. Just pass in the output layer to the …
That's exactly the response I was hoping for. Would we have to modify the scoring functions at all?
Mmm, I'm not sure, as I've never done a regression problem with Keras. I would guess that you would just make sure to use loss instead of accuracy as your metric and you'd be fine, but you may have to experiment.
Hi there, First of all, regression tasks should produce a single output value, so change the output layer's unit count to 1. Second, mean squared error should be used when dealing with regression tasks, and in Keras it has already been implemented under losses with the name 'mse'. Finally, as @joeddav mentioned, using loss instead of accuracy is much more suitable.
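A minimal sketch of those three changes on a plain Keras model (the layer sizes and input shape are illustrative):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(10,)),  # evolved hidden layers would go here
    Dense(1, activation='linear'),  # one output unit, not a softmax over classes
])

# Mean squared error as the objective; model selection should use loss,
# not accuracy, since accuracy is meaningless for regression.
model.compile(optimizer='adam', loss='mse')
```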
Right, so I guess the user also needs to control the parameters of the …
Any movement on this? If not, I can attempt it.
I haven't myself - I would go for it unless @ClimbsRocks has.
I'll get started. I know we will want to add LSTM layers. For coding style: should I just subclass GenomeHandler, call it RegressionGenomeHandler, and add the new regression-specific things?
That might be a good approach, depending on exactly how you're doing it. It may be easier to just pass it as a parameter, but your way would be cleaner in the long run. If I were doing it, I would probably just extend the current functionality to allow the user to pass an output layer of their own choosing.
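For the subclassing route, a bare-bones sketch might look like this (assuming DEvol's `GenomeHandler` import; the `output_layer` attribute and how the decode step would consume it are hypothetical):

```python
from tensorflow.keras.layers import Dense
from devol import GenomeHandler

class RegressionGenomeHandler(GenomeHandler):
    """Hypothetical subclass: swap the classification head for a user-supplied output layer."""

    def __init__(self, *args, output_layer=None, **kwargs):
        super().__init__(*args, **kwargs)
        # Default regression head: a single linear unit, trained against 'mse'.
        self.output_layer = output_layer or Dense(1, activation='linear')
        # The genome-decoding step would then append self.output_layer
        # instead of the default Dense(n_classes, activation='softmax').
```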
@qorrect thanks for taking this on! Any updates?
No, I'm stumbling on the genome layout; not quite sure how to calculate the offsets given the new parameters I introduced. Let me dig it up and post the specific problem.
I have a simple PR I can commit tonight once I work out a few kinks. It won't do anything with added LSTM layers, just allow for regression, but that should probably be separate anyway.
I like simple PRs, and iterative improvements that build on each other. Thanks for adding that in! Looking forward to trying it out soon.
Any advances on this, guys?
Hi, I would also be interested if there are any advances.
@joeddav, could you commit the PR you wrote up?
Passing by to check if there are any plans to add both regression and LSTM support?
@reinert I think this project is dead.
@reinert You could probably check out TPOT. It seems like it has a similar purpose, and I think it supports regression, but I'm not sure.
@gavinfernandes2012 Yeah, TPOT is a much more mature project.
As I've said, I'm not presently able to actively work on this project, so short of occasional PRs from others, it is not being actively developed. Check out TPOT for genetic optimization. If you're just looking for hyperparameter optimization (i.e. not specifically genetic), then there are numerous other algorithms and open source projects to look at, like Google Vizier. Bayesian optimization is typically a better approach in any case since it is penalized less by expensive evaluations. Many of these other tools just do hyperparameters and not architectures (adding & removing layers) as this one was intended to, but many provide a generic black box function that you can define any way you'd like.
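For anyone landing here from search, a minimal TPOT regression run looks roughly like this (based on TPOT's scikit-learn-style API; the data and parameter values are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from tpot import TPOTRegressor

# Toy data standing in for a real regression dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=0.1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# Genetic search over scikit-learn pipelines, scored on a regression metric.
tpot = TPOTRegressor(generations=5, population_size=20, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('best_pipeline.py')  # write the winning pipeline out as Python code
```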
Looks like an awesome project!
We've found a fun use case for deep learning on regression problems with auto_ml. We train the neural network, then instead of taking its output from the final layer (just a linear model), we get the features it learned in its penultimate layer and feed those into a gradient boosted model (which is typically better at turning features into predictions than a linear model); see the sketch at the end of this comment.
Your library looks like a great way to optimize the deep learning model.
Any thoughts on what it would take to get regression models supported?
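A rough sketch of that penultimate-layer pattern, using plain Keras and scikit-learn rather than auto_ml's internals (shapes and layer sizes are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Dense

# Toy data standing in for a real regression dataset.
X = np.random.rand(500, 10)
y = X.sum(axis=1) + np.random.normal(scale=0.1, size=500)

# Train a small regression net end to end.
net = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    Dense(16, activation='relu'),   # penultimate layer: the learned features
    Dense(1, activation='linear'),  # final layer: effectively a linear model
])
net.compile(optimizer='adam', loss='mse')
net.fit(X, y, epochs=10, verbose=0)

# Re-wire the model to emit the penultimate layer's activations...
feature_extractor = Model(inputs=net.inputs, outputs=net.layers[-2].output)
features = feature_extractor.predict(X, verbose=0)

# ...and hand those learned features to a gradient boosted model.
gbm = GradientBoostingRegressor().fit(features, y)
```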