An interesting point from the kickoff meeting was that there are broadly two directions that any new tools should try to consider:
Creating a differentiable workflow from scratch
Interfacing with existing approaches
It would be very constructive to highlight the considerations one should make for each case :)
E.g. Kyle Cranmer pointed out that to be truly ‘optimal’, regardless of approach, one should learn the true likelihood function rather than objectives like those targeted by INFERNO (inverse Fisher information) or neos (p-value from a hypothesis test).
I agree that learning the fully parameterized likelihood, likelihood ratio, or score (with both nuisance and interest parameters) is truly optimal in a stricter sense. That said, training a well-performing parametrized model like that can be challenging when many parameters are considered, and in the case of the likelihood and likelihood ratio the result also needs to be cross-checked and calibrated, which can be difficult. SALLY and SALLINO are somewhat more similar to INFERNO and neos, in that a summary statistic can be constructed from their output and used with the same tools that are currently in use.
That said, I think these are not two different directions per se: some of the most efficient methods in MadMiner require gradients with respect to statistical model parameters, so if we build a set of tools with abstractions that allow constructing a differentiable functional version of sets of events and their weights (see my answer in #7 (comment)) as a function of nuisance parameters and parameters of interest, all the approaches could benefit.
I think the idea of having a set of modules, abstractions, and examples for building that first part could be useful for all approaches. Then you could plug and play different training modules and objective functions, which could also be implemented in a common framework.
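To make the "plug and play" idea concrete, here is a minimal sketch in JAX of what such an abstraction could look like. Everything here is hypothetical and purely illustrative: `event_weights` stands in for the differentiable event-weight function discussed above, its Gaussian parameterization is invented for the example, and the yield-based objective is just a placeholder for whatever training objective (an INFERNO- or neos-style loss, a likelihood-based one, etc.) one would actually plug in.

```python
import jax
import jax.numpy as jnp


def event_weights(theta, nu, events):
    """Hypothetical differentiable event weights.

    `theta` plays the role of a parameter of interest (an overall
    signal strength) and `nu` a nuisance parameter (a shift of the
    observable). The Gaussian form is invented for illustration only.
    """
    shifted = events + nu
    return theta * jnp.exp(-0.5 * shifted**2)


def objective(params, events):
    """Placeholder objective: total expected yield.

    Any differentiable training objective could be swapped in here;
    the abstraction only requires that it be a function of the
    parameterized event weights.
    """
    theta, nu = params
    return jnp.sum(event_weights(theta, nu, events))


# Toy "events": values of a single observable.
events = jnp.linspace(-2.0, 2.0, 5)

# Gradients with respect to both the parameter of interest and the
# nuisance parameter come for free from automatic differentiation.
grads = jax.grad(objective)((1.0, 0.1), events)
```

The point is the interface, not the physics: once events and their weights are exposed as a differentiable function of `(theta, nu)`, swapping the objective changes nothing upstream.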