This is the source code for the paper "Restricting the Flow: Information Bottlenecks for Attribution" - Oral at ICLR 2020.

Note: This implementation might not be up-to-date. The reference implementation is in another repository.
*(Figure: iterations of the Per-Sample Bottleneck)*
Setup:

- Clone this repository:

  ```
  $ git clone https://github.com/attribution-bottleneck/attribution-bottleneck-pytorch.git && cd attribution-bottleneck-pytorch
  ```
- Create a conda environment with all packages:

  ```
  $ conda create -n <env-name> --file requirements.txt
  ```
- Using your new conda environment, install this repository with pip:

  ```
  $ pip install .
  ```
- Download the model weights from the release page and unpack them in the repository root directory:

  ```
  $ tar -xvf bottleneck_for_attribution_weights.tar.gz
  ```
Optional:

- If you want to retrain the Readout Bottleneck, place the ImageNet dataset under `data/imagenet`. You can simply create a link with `ln -s [image dir] data/imagenet`.
- Test it with:

  ```
  $ python ./scripts/eval_degradation.py resnet50 8 Saliency test
  ```
We provide some Jupyter notebooks to demonstrate the usage of both the Per-Sample and the Readout Bottleneck:

- `example_per-sample.ipynb`: usage of the Per-Sample Bottleneck on an example image
- `example_readout.ipynb`: usage of the Readout Bottleneck on an example image
- `compare_methods.ipynb`: visually compare different attribution methods on an example image
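If you just want a feel for what the Per-Sample Bottleneck does before opening the notebooks, below is a heavily simplified, self-contained sketch of the idea: a per-element mask at an intermediate layer is optimized so that the target prediction survives while the rest of the feature map is replaced by noise, and the mask is read out as an attribution map. This is not this repository's API; the choice of `layer2`, the simple mean-mask penalty (the paper uses an information term), the number of steps, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed example image
target = torch.tensor([243])      # stand-in for the target class index

# One forward pass just to record the shape of the chosen feature map (layer2).
shapes = []
probe = model.layer2.register_forward_hook(lambda m, i, o: shapes.append(o.shape))
with torch.no_grad():
    model(x)
probe.remove()

# Mask parameters, one per element of the feature map.
alpha = torch.zeros(shapes[0], requires_grad=True)

def bottleneck(module, inputs, output):
    """Blend the feature map with noise according to the learned mask."""
    lam = torch.sigmoid(alpha)                         # per-element mask in [0, 1]
    noise = torch.randn_like(output) * output.std() + output.mean()
    return lam * output + (1.0 - lam) * noise

handle = model.layer2.register_forward_hook(bottleneck)

# Keep the target logit high while pushing the mask towards zero.
optimizer = torch.optim.Adam([alpha], lr=1.0)
beta = 10.0                                            # illustrative trade-off weight
for _ in range(10):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), target) + beta * torch.sigmoid(alpha).mean()
    loss.backward()
    optimizer.step()
handle.remove()

# Average the mask over channels to obtain a coarse spatial attribution map.
heatmap = torch.sigmoid(alpha).mean(dim=1).squeeze(0).detach()
print(heatmap.shape)   # e.g. torch.Size([28, 28]) for a 224x224 input
```

The notebooks show the full method, including the information-theoretic objective and the upscaling of the mask to the input resolution.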
The scripts to reproduce our evaluation can be found in the `scripts` directory. Each script takes one of the implemented attribution methods (for example `Saliency`, as used in the test command above) as its `[attribution]` argument.
For the bounding box task, replace `[model]` with either `vgg16` or `resnet50`:

```
$ eval_bounding_boxes.py [model] [attribution]
```
For the degradation task, you also have to specify the tile size. In the paper, we used 8 and 14:

```
$ eval_degradation.py [model] [tile size] [attribution]
```
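To make the tile-size argument concrete, here is a small, self-contained sketch of a tile-based degradation test: tiles are ranked by their summed attribution and occluded most-relevant-first while the target logit is tracked. This is not the `eval_degradation.py` implementation; the occlusion value, the interpretation of the tile size as an edge length in pixels, and the random stand-ins for the image and heatmap are assumptions for illustration.

```python
import torch
from torchvision.models import resnet50

model = resnet50(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)     # stand-in for a preprocessed image
heatmap = torch.rand(224, 224)      # stand-in for an attribution map
target, tile = 243, 8               # illustrative class index and tile size

# Sum the attribution inside each non-overlapping tile and rank the tiles.
grid = 224 // tile
tile_scores = heatmap.reshape(grid, tile, grid, tile).sum(dim=(1, 3))
order = tile_scores.flatten().argsort(descending=True)

occlusion_value = x.mean()          # simple baseline used to remove a tile
degraded = x.clone()
logit_curve = []
with torch.no_grad():
    for idx in order.tolist():
        r = (idx // grid) * tile
        c = (idx % grid) * tile
        degraded[..., r:r + tile, c:c + tile] = occlusion_value
        logit_curve.append(model(degraded)[0, target].item())

# A faster drop of the target logit indicates a better attribution map;
# the real benchmark averages such curves over many images.
print(logit_curve[:5])
```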
The results on sensitivity-n can be calculated with:

```
$ eval_sensitivity_n.py [model] [tile size] [attribution]
```
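For orientation, this is a compact, self-contained sketch of the sensitivity-n idea (introduced by Ancona et al.): over random subsets of n occluded tiles, the summed attribution of the removed tiles should correlate with the drop in the target logit. Again, this is not the `eval_sensitivity_n.py` implementation; the number of subsets, the zero occlusion value, and the stand-in image and heatmap are illustrative assumptions.

```python
import numpy as np
import torch
from torchvision.models import resnet50

model = resnet50(pretrained=True).eval()
x = torch.randn(1, 3, 224, 224)     # stand-in for a preprocessed image
heatmap = torch.rand(224, 224)      # stand-in for an attribution map
target, tile, n = 243, 8, 30        # illustrative class index, tile size, subset size

grid = 224 // tile
tile_attr = heatmap.reshape(grid, tile, grid, tile).sum(dim=(1, 3)).flatten()

attr_sums, logit_drops = [], []
with torch.no_grad():
    full_logit = model(x)[0, target].item()
    for _ in range(20):                          # a few random subsets of n tiles
        subset = torch.randperm(grid * grid)[:n]
        masked = x.clone()
        for idx in subset.tolist():
            r = (idx // grid) * tile
            c = (idx % grid) * tile
            masked[..., r:r + tile, c:c + tile] = 0.0
        attr_sums.append(tile_attr[subset].sum().item())
        logit_drops.append(full_logit - model(masked)[0, target].item())

# Higher correlation means the attribution better reflects the model's behaviour.
print(np.corrcoef(attr_sums, logit_drops)[0, 1])
```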