Entire refactor of search metrics #141
Conversation
Seems to be a great enhancement! A few questions/concerns:
Sorry for not giving enough details yesterday; I wanted to clarify some parts this morning and improve the plot.
-> I have identified a small bug on the test set. I will add a fix in my latest commit.
-> The first two subplots show the rolling average of the number of nodes visited to prove optimality and the number of nodes visited to find the first solution (the rolling average smooths out the disparities between solved instances; see the sketch after this list).
-> This is the second plot.
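For clarity, here is a minimal sketch of the kind of rolling average meant above (the helper name and window size are illustrative, not the PR's actual code):

```julia
# Rolling average over a fixed window, used to smooth per-episode node counts.
rollingmean(x::AbstractVector{<:Real}, window::Int) =
    [sum(@view x[i:i+window-1]) / window for i in 1:length(x)-window+1]

rollingmean([10, 50, 30, 70, 40], 3)  # -> [30.0, 50.0, 46.67] (approximately)
```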
OK, thanks! I understand the last picture. We should find a way to reduce the number of curves: for instance, we could add a parameter defining every how many episodes a new curve is plotted. Also, in practice, it may be difficult to compute the relative score, as it requires knowing the optimal solution; maybe we can replace it with the score relative to the best solution found. It could also be interesting to have a measure of the raw value of the objective.
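As a rough illustration of the "one curve every N episodes" suggestion (the function and keyword names below are hypothetical, not part of this PR):

```julia
using Plots

# Plot one score curve per `plot_every` episodes to keep the figure readable.
function plot_scores(scores_per_episode::Vector{Vector{Float64}}; plot_every::Int = 10)
    plt = plot(xlabel = "nodes visited", ylabel = "relative score")
    for (i, scores) in enumerate(scores_per_episode)
        i % plot_every == 0 || continue  # skip most episodes to reduce clutter
        plot!(plt, scores, label = "episode $i")
    end
    return plt
end
```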
Some minor modifications are still needed (mainly on naming conventions and documentation), but this is an amazing improvement to bring more insight into SeaPearl's performance.
Looks good to me! (provided that the last build passes :-)) I am very eager to see these plots in the next learning experiments!
This is my proposal for the implementation of generic and adaptable metrics that store many useful pieces of information and results obtained during the search, such as the number of nodes visited for every solution of every episode during training, or the time needed to complete a search on an episode.
Previously, the metrics had to be defined by the user while creating the model. This was not relevant, as the retrieved results do not depend on the kind of problem (be it knapsack, graph coloring, ...) addressed by SeaPearl.
By subtyping the abstract struct `AbstractMetrics`, the user can define their own `CustomMetrics`, as long as it satisfies some requirements. This PR also provides a `basicmetrics` that can be used out of the box. This PR goes hand in hand with another PR on SeaPearlZoo that will update the current examples (knapsack, graph_coloring, tsptw) to fit this new format.
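A minimal sketch of what such a user-defined metrics type could look like; the field names and the functor-style update hook are assumptions for illustration, since the PR itself defines the exact requirements:

```julia
using SeaPearl

# Hypothetical user-defined metrics: one record per episode.
struct CustomMetrics <: SeaPearl.AbstractMetrics
    nodesVisited::Vector{Int}    # nodes visited per episode
    searchTime::Vector{Float64}  # seconds spent on each episode's search
end
CustomMetrics() = CustomMetrics(Int[], Float64[])

# Assumed update hook called once an episode's search completes; the `dt`
# argument and the `statistics.numberOfNodes` field are assumptions here.
function (metrics::CustomMetrics)(model::SeaPearl.CPModel, dt::Float64)
    push!(metrics.nodesVisited, model.statistics.numberOfNodes)
    push!(metrics.searchTime, dt)
end
```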
Key features:
- `basicmetrics` can be used on any kind of `CPModel` (containing an objective or not)
- `basicmetrics` can be used with any kind of heuristic (classic or trained)
- `basicmetrics` provides useful plotting functions: the evolution of the number of nodes visited to find a first solution / prove optimality along the training, and the evolution of the score of solutions found compared to the optimal one during the search of an evaluation (a usage sketch follows the plots below)
Evolution of the number of nodes visited to find the first solution / prove optimality along the training (for illustration purposes; no noticeable learning can be observed):
Evolution of the score of solutions found compared to the optimal one during the search of an evaluation:
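For reference, a hedged end-to-end usage sketch; the exact `BasicMetrics` constructor signature and the plotting helper name are assumptions, not taken verbatim from this PR:

```julia
using SeaPearl

trailer = SeaPearl.Trailer()
model = SeaPearl.CPModel(trailer)      # any CPModel, with or without an objective
heuristic = SeaPearl.BasicHeuristic()  # a classic heuristic; a trained one works too

metrics = SeaPearl.BasicMetrics(model, heuristic)  # assumed constructor

# ... run the training / search, letting SeaPearl update `metrics` ...

# Hypothetical plotting helper: nodes visited to first solution / optimality proof.
plotNodeVisited(metrics)
```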