
Support for fairness metrics + guide to thinking more holistically about fairness #59

Open
juliasilge opened this issue Nov 16, 2021 · 0 comments
Labels
feature (a feature request or enhancement)

@juliasilge (Member)

As @skeydan first outlined in the prototype repo in juliasilge/deploytidymodels#3, we can prioritize a guide for choosing fairness metrics.

In R, there is no shortage of excellent fairness metrics packages, such as https://github.com/kozodoi/fairness or, first and foremost, the bindings to AI Fairness 360 (http://aif360.mybluemix.net/).

The latter even has a document (http://aif360.mybluemix.net/resources#guidance) briefly explaining what using these metrics entails. In my view, that document conveys an adequate summary:

> If the application follows the WAE worldview, then the demographic parity metrics should be used: disparate_impact and statistical_parity_difference. If the application follows the WYSIWYG worldview, then the equality of odds metrics should be used: average_odds_difference and average_abs_odds_difference. Other group fairness metrics (some are often labeled equality of opportunity) lie in-between the two worldviews and may be used appropriately: false_negative_rate_ratio, false_negative_rate_difference, false_positive_rate_ratio, false_positive_rate_difference, false_discovery_rate_ratio, false_discovery_rate_difference, false_omission_rate_ratio, false_omission_rate_difference, error_rate_ratio, and error_rate_difference.
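For concreteness, here is a minimal sketch (in Python, written from scratch rather than via the AI Fairness 360 bindings) of what two of the demographic parity metrics and one equality-of-odds metric from the quote above actually compute. The function names mirror the metric names in the quote, but the implementations and the "unprivileged minus (or over) privileged" sign conventions are my assumptions for illustration, not the package's API.

```python
# Illustrative fairness-metric definitions, assuming binary labels and
# predictions (0/1) and group == 1 marking the privileged group.

def _rate(xs):
    """Fraction of positive (== 1) values in a list of 0/1 values."""
    return sum(xs) / len(xs)

def statistical_parity_difference(y_pred, group):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged)."""
    unpriv = [p for p, g in zip(y_pred, group) if g == 0]
    priv = [p for p, g in zip(y_pred, group) if g == 1]
    return _rate(unpriv) - _rate(priv)

def disparate_impact(y_pred, group):
    """P(pred=1 | unprivileged) / P(pred=1 | privileged)."""
    unpriv = [p for p, g in zip(y_pred, group) if g == 0]
    priv = [p for p, g in zip(y_pred, group) if g == 1]
    return _rate(unpriv) / _rate(priv)

def average_odds_difference(y_true, y_pred, group):
    """Mean of (FPR_unpriv - FPR_priv) and (TPR_unpriv - TPR_priv)."""
    def tpr_fpr(sel):
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, group) if g == sel]
        tpr = _rate([p for t, p in pairs if t == 1])  # TP / (TP + FN)
        fpr = _rate([p for t, p in pairs if t == 0])  # FP / (FP + TN)
        return tpr, fpr
    tpr_u, fpr_u = tpr_fpr(0)
    tpr_p, fpr_p = tpr_fpr(1)
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))

# Toy example: the privileged group (first four) receives positive
# predictions at rate 0.75, the unprivileged group at rate 0.25.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 0, 0, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]

print(statistical_parity_difference(y_pred, group))       # -0.5
print(disparate_impact(y_pred, group))                    # 0.333...
print(average_odds_difference(y_true, y_pred, group))     # -0.5
```

The point for a guide is visible even in this toy: parity is 0 (or a ratio of 1) when positive prediction rates match, regardless of the true labels (the WAE worldview), whereas the odds-based metric compares error rates conditional on the true labels (the WYSIWYG worldview).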

But for a user facing these questions, this may be a bit too brief. (I would assume that this brevity is also due to caution about being seen as conveying too much of an opinion.)

I'd like to suggest that, in the context of modelops, we provide a more verbose guide to the assumptions underlying metric choice, making it easier for the user to figure out what they, in fact, want to measure or assure.

As an extreme example, consider this readily-found-when-searching-the-web decision tree: http://aequitas.dssg.io/static/images/metrictree.png. In a matter-of-fact way, it is great, but since it frames its questions like so

* do you want to ...

* do you need to ...

it does not help the user figure out what it is they might want to do.

@juliasilge added the feature (a feature request or enhancement) label Nov 16, 2021