The latter even has a document (http://aif360.mybluemix.net/resources#guidance) that briefly explains what using these metrics entails. In my view, that document does convey an adequate summary:
> If the application follows the WAE worldview, then the demographic parity metrics should be used: disparate_impact and statistical_parity_difference. If the application follows the WYSIWYG worldview, then the equality of odds metrics should be used: average_odds_difference and average_abs_odds_difference. Other group fairness metrics (some are often labeled equality of opportunity) lie in-between the two worldviews and may be used appropriately: false_negative_rate_ratio, false_negative_rate_difference, false_positive_rate_ratio, false_positive_rate_difference, false_discovery_rate_ratio, false_discovery_rate_difference, false_omission_rate_ratio, false_omission_rate_difference, error_rate_ratio, and error_rate_difference.
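To make the distinction concrete, here is a minimal sketch of the definitions behind the two metric families named above. This is plain NumPy rather than the AIF360 API, and the data and the variable names `y_true`, `y_pred`, and `priv` are hypothetical:

```python
import numpy as np

def selection_rate(y_pred, mask):
    """P(y_hat = 1) within the group selected by mask."""
    return y_pred[mask].mean()

def tpr_fpr(y_true, y_pred, mask):
    """True- and false-positive rates within a group."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = yp[yt == 1].mean()  # P(y_hat = 1 | y = 1)
    fpr = yp[yt == 0].mean()  # P(y_hat = 1 | y = 0)
    return tpr, fpr

# Hypothetical example data: binary labels, binary predictions,
# and a boolean indicator of membership in the privileged group.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
priv   = np.array([True, True, True, True, False, False, False, False])

sr_unpriv = selection_rate(y_pred, ~priv)
sr_priv   = selection_rate(y_pred, priv)

# WAE worldview: demographic parity metrics compare selection rates.
statistical_parity_difference = sr_unpriv - sr_priv
disparate_impact = sr_unpriv / sr_priv

# WYSIWYG worldview: equality-of-odds metrics condition on the true label.
tpr_u, fpr_u = tpr_fpr(y_true, y_pred, ~priv)
tpr_p, fpr_p = tpr_fpr(y_true, y_pred, priv)
average_odds_difference = ((fpr_u - fpr_p) + (tpr_u - tpr_p)) / 2
average_abs_odds_difference = (abs(fpr_u - fpr_p) + abs(tpr_u - tpr_p)) / 2
```

Note that the demographic parity metrics look at selection rates alone, while the equality-of-odds metrics condition on the true label, which is exactly where the worldview assumption (can the recorded labels be trusted?) enters.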
But for a user facing these questions, this may be a bit too brief. (I would assume this brevity is also due to caution about appearing to convey too much of an opinion.)
I'd like to suggest that, in the context of modelops, we provide a more verbose guide to the assumptions underlying the choice of metric, making it easier for the user to figure out what they, in fact, want to measure or assure.
As an extreme example, consider this decision tree, readily found when searching the web: http://aequitas.dssg.io/static/images/metrictree.png. As a matter-of-fact presentation it is great, but since it frames its questions like so:
* do you want to ...
* do you need to ...
it does not help the user figure out what it is they might want to do in the first place.
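To sketch the direction I mean (the question, the function name, and the mapping below are all hypothetical illustration, not a proposed API), a guide could first ask about the user's assumptions about the data and only then surface metrics:

```python
# Hypothetical sketch of an assumption-driven metric chooser.

METRICS_BY_WORLDVIEW = {
    # "We're all equal": observed disparities reflect bias in
    # measurement or in society, not true differences.
    "WAE": ["disparate_impact", "statistical_parity_difference"],
    # "What you see is what you get": the recorded labels can be trusted.
    "WYSIWYG": ["average_odds_difference", "average_abs_odds_difference"],
}

def suggest_metrics(labels_are_trustworthy: bool) -> list[str]:
    """Map an assumption about the data to a metric family.

    If the user believes the recorded outcomes faithfully reflect
    reality (WYSIWYG), equality-of-odds metrics apply; if they
    believe observed disparities stem from bias in measurement or
    in society (WAE), demographic parity metrics apply.
    """
    worldview = "WYSIWYG" if labels_are_trustworthy else "WAE"
    return METRICS_BY_WORLDVIEW[worldview]

print(suggest_metrics(labels_are_trustworthy=False))
# ['disparate_impact', 'statistical_parity_difference']
```

The point is not this particular mapping, but that the branching criterion is an assumption about the data rather than a "do you want to ..." question.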
As @skeydan first outlined in the prototype repo (juliasilge/deploytidymodels#3), we can prioritize a guide for choosing fairness metrics.