Initial documentation for CFs
jklaise committed May 22, 2019
1 parent e290a2b commit 8a14dd3
Showing 2 changed files with 150 additions and 0 deletions.
1 change: 1 addition & 0 deletions doc/source/index.rst
@@ -25,6 +25,7 @@

methods/Anchors.ipynb
methods/CEM.ipynb
methods/CF.ipynb
methods/TrustScores.ipynb

.. toctree::
149 changes: 149 additions & 0 deletions doc/source/methods/CF.ipynb
@@ -0,0 +1,149 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[[source]](../api/alibi.explainers.counterfactual.rst)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Counterfactual Instances"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Overview"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A counterfactual explanation of an outcome or a situation $Y$ takes the form \"If $X$ had not occured, $Y$ would not have occured\" ([Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/counterfactual.html)). In the context of a machine learning classifier $X$ would be an instance of interest and $Y$ would be the label predicted by the model. The task of finding a counterfactual explanation is then to find some $X^\\prime$ that is in some way related to the original instance $X$ but leading to a different prediction $Y^\\prime$. Reasoning in counterfactual terms is very natural for humans, e.g. asking what should have been done differently to achieve a different result. As a consequence counterfactual instances for machine learning predictions is a promising method for human-interpretable explanations.\n",
"\n",
"The counterfactual method described here is the most basic way of defining the problem of finding such $X^\\prime$. Our algorithm loosely follows Wachter et al. (2017): [Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR](https://arxiv.org/abs/1711.00399). For an extension to the basic method which provides ways of finding higher quality counterfactual instances $X^\\prime$ in a quicker time, please refer to [Counterfactuals Guided by Prototypes](CFProto.ipynb).\n",
"\n",
"We can reason that the most basic requirements for a counterfactual $X^\\prime$ are as follows:\n",
"\n",
"- The predicted class of $X^\\prime$ is different from the predicted class of $X$\n",
"- The difference between $X$ and $X^\\prime$ should be human-interpretable.\n",
"\n",
"While the first condition is straight-forward, the second condition does not immediately lend itself to a condition as we need to first define \"interpretability\" in a mathematical sense. For this method we restrict ourselves to a particular definition by asserting that $X^\\prime$ should be as close as possible to $X$ without violating the first condition. There main issue with this definition of \"interpretability\" is that the difference between $X^\\prime$ and $X$ required to change the model prediciton might be so small as to be un-interpretable to the human eye in which case [we need a more sophisticated approach](CFProto.ipynb).\n",
"\n",
"That being said, we can now cast the search for $X^\\prime$ as a simple optimization problem with the following loss:\n",
"\n",
"$$L = L_{\\text{pred}} + \\lambda L_{\\text{dist}},$$\n",
"\n",
"where the first lost term $L_{\\text{pred}}$ guides the search towards points $X^\\prime$ which would change the model prediction and the second term $\\lambda L_{\\text{dist}}$ ensures that $X^\\prime$ is close to $X$. This form of loss has a single hyperparameter $\\lambda$ weighing the contributions of the two competing terms.\n",
"\n",
"The specific loss in our implementation is as follows:\n",
"\n",
"$$L(X^\\prime\\vert X) = (f_t(X^\\prime) - p_t)^2 + \\lambda L_1(X^\\prime, X).$$\n",
"\n",
"Here $t$ is the desired target class for $X^\\prime$ which can either be specified in advance or left up to the optimization algorithm to find, $p_t$ is the target probability of this class (typically $p_t=1$), $f_t$ is the model prediction on class $t$ and $L_1$ is the distance between the proposed counterfactual instance $X^\\prime$ and the instance to be explained $X$. The use of the $L_1$ distance should ensure that the $X^\\prime$ is a sparse counterfactual - minimizing the number of features to be changed in order to change the prediction.\n",
"\n",
"The optimal value of the hyperparameter $\\lambda$ will vary from dataset to dataset and even within a dataset for each instance to be explained and the desired target class. As such it is difficult to set and we learn it as part of the optimization algorithm, i.e. we want to optimize\n",
"\n",
"$$\\min_{X^{\\prime}}\\max_{\\lambda}L(X^\\prime\\vert X)$$\n",
"\n",
"subject to\n",
"\n",
"$$\\vert f_t(X^\\prime)-p_t\\vert\\leq\\epsilon \\text{ (counterfactual constraint)},$$\n",
"\n",
"where $\\epsilon$ is a tolerance parameter. In practice this is done in two steps, on the first pass we sweep a broad range of $\\lambda$, e.g. $\\lambda\\in(10^{-1},\\dots,10^{-10}$) to find lower and upper bounds $\\lambda_{\\text{lb}}, \\lambda_{\\text{ub}}$ where counterfactuals exist. Then we use bisection to find the maximum $\\lambda\\in[\\lambda_{\\text{lb}}, \\lambda_{\\text{ub}}]$ such that the counterfactual constraint still holds. The result is a set of counterfactual instances $X^\\prime$ with varying distance from the test instance $X$."
]
},
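{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the loss concrete, the cell below gives a minimal NumPy sketch of $L(X^\\prime\\vert X)$ for a single proposed counterfactual. It is an illustration only, not the library implementation; the names `cf_loss` and `predict_fn` are hypothetical."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def cf_loss(x_cf, x, predict_fn, target_class, lam, p_t=1.0):\n",
"    # hypothetical sketch of L(X'|X) = (f_t(X') - p_t)^2 + lam * L1(X', X)\n",
"    probas = predict_fn(x_cf.reshape(1, -1))[0]  # class probabilities for the candidate X'\n",
"    l_pred = (probas[target_class] - p_t) ** 2   # squared gap to the target probability p_t\n",
"    l_dist = np.abs(x_cf - x).sum()              # L1 distance encourages sparse changes\n",
"    return l_pred + lam * l_dist"
]
},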
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Fit\n",
"\n"
]
},
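{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a sketch of initializing the explainer, assuming the `alibi.explainers.CounterFactual` API referenced at the top of this page. `model` and `x_train` are placeholders, and the exact keyword arguments should be checked against the [API reference](../api/alibi.explainers.counterfactual.rst)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from alibi.explainers import CounterFactual\n",
"\n",
"shape = (1,) + x_train.shape[1:]  # instance shape, including the leading batch dimension\n",
"\n",
"cf = CounterFactual(predict_fn=model.predict,  # black-box callable returning class probabilities\n",
"                    shape=shape,\n",
"                    target_proba=1.0,          # p_t in the loss\n",
"                    tol=0.05,                  # epsilon in the counterfactual constraint\n",
"                    target_class='other',      # any class other than the original prediction\n",
"                    max_iter=1000,\n",
"                    lam_init=1e-1,             # initial value of lambda\n",
"                    max_lam_steps=10)          # bisection steps in the search over lambda"
]
},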
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Explanation\n"
]
},
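{
"cell_type": "markdown",
"metadata": {},
"source": [
"A sketch of generating and inspecting an explanation; the key names on the returned object are assumptions here and the authoritative schema is in the [API reference](../api/alibi.explainers.counterfactual.rst)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X = x_test[0].reshape(1, -1)  # instance to be explained; must match `shape` above\n",
"explanation = cf.explain(X)\n",
"\n",
"# key names below are assumptions for illustration\n",
"x_prime = explanation['cf']['X']    # the counterfactual instance X'\n",
"proba = explanation['cf']['proba']  # model prediction on X'"
]
},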
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Numerical Gradients"
]
},
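{
"cell_type": "markdown",
"metadata": {},
"source": [
"For a truly black-box model only the prediction function is available, so the gradient of $L_{\\text{pred}}$ with respect to $X^\\prime$ cannot be obtained by automatic differentiation and has to be estimated numerically. The cell below is a generic central-difference sketch of this idea, not the library implementation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def num_grad(f, x, eps=1e-2):\n",
"    # central-difference estimate of the gradient of a scalar function f at x\n",
"    grad = np.zeros_like(x)\n",
"    for i in range(x.size):\n",
"        pert = np.zeros_like(x)\n",
"        pert.flat[i] = eps\n",
"        grad.flat[i] = (f(x + pert) - f(x - pert)) / (2 * eps)\n",
"    return grad"
]
},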
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Examples"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
},
"varInspector": {
"cols": {
"lenName": 16,
"lenType": 16,
"lenVar": 40
},
"kernels_config": {
"python": {
"delete_cmd_postfix": "",
"delete_cmd_prefix": "del ",
"library": "var_list.py",
"varRefreshCmd": "print(var_dic_list())"
},
"r": {
"delete_cmd_postfix": ") ",
"delete_cmd_prefix": "rm(",
"library": "var_list.r",
"varRefreshCmd": "cat(var_dic_list()) "
}
},
"types_to_exclude": [
"module",
"function",
"builtin_function_or_method",
"instance",
"_Feature"
],
"window_display": false
}
},
"nbformat": 4,
"nbformat_minor": 2
}
