Logo_ER10

Tutorials

This folder contains DIANNA tutorial notebooks. To install the dependencies for the tutorials, run (in the main dianna folder):

```
pip install .[notebooks]
```

🠊 For a general demonstration of DIANNA, click on the logo Logo_ER10 or run it in Colab: Open In Colab.

🠊 For tutorials on how to convert a Keras, PyTorch, Scikit-learn or TensorFlow model to ONNX, please see the conversion tutorials (a minimal PyTorch export is sketched below).

🠊 For specific XAI methods (explainers):

  • Click on the explainer names to watch explanatory videos for the respective method.
  • Click on the logos for direct access to a tutorial notebook. Run the tutorials directly in Google Colab by clicking on the Colab buttons.
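As a quick taste of what the conversion tutorials cover, here is a minimal sketch of exporting a PyTorch model to ONNX with `torch.onnx.export`. The `resnet18` model, the input shape, and the output path are placeholders; the conversion notebooks cover the framework-specific details.

```python
import torch
import torchvision

# Placeholder model: any torch.nn.Module is exported the same way.
model = torchvision.models.resnet18(weights=None)
model.eval()

# Dummy input fixing the input shape (batch, channels, height, width).
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; "model.onnx" is an arbitrary output path.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```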

Datasets and Tasks

Illustrative (Simple)

| Data modality | Dataset | Task | Logo |
|---|---|---|---|
| Images | Binary MNIST | Binary digit classification | mnist_zero_and_one_half_size |
| Images | Simple Geometric (circles and triangles) | Binary shape classification | SimpleGeometric Logo |
| Images | Imagenet | $1000$-class natural image classification | ImageNet_autocrop |
| Text | Stanford Sentiment Treebank | Positive or negative movie review sentiment classification | nlp-logo_half_size |
| Timeseries | Coffee dataset | Binary classification of Robusta and Arabica coffee beans | Coffe Logo |
| Timeseries | Weather dataset | Binary classification (summer/winter) of temperature time series | Weather Logo |
| Tabular | Penguin dataset | Classification of $3$ penguin species (Adélie, Chinstrap, Gentoo) | Penguin Logo |
| Tabular | Weather dataset | Next-day sunshine hours prediction (regression) | Weather Logo |

Scientific use-cases

| Data modality | Dataset | Task | Logo |
|---|---|---|---|
| Images | Simple Scientific (LeafSnap30) | $30$ tree species leaf classification | LeafSnap30 Logo |
| Text | | | |
| Timeseries | Fast Radio Burst (FRB) dataset (not publicly available) | Binary classification of FRB time series: noise or a real FRB | FRB logo |
| Tabular | Land atmosphere dataset | Prediction of "latent heat flux" (regression). A random forest model is used as an emulator replacing the physical model STEMMUS_SCOPE to predict global maps of latent heat flux. | Atmosphere Logo |

Models

The ONNX models used in the tutorials are available at dianna/models, or linked from their respective tutorial notebooks.
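The tutorials pass these ONNX models to DIANNA directly, but it can be useful to sanity-check a downloaded model with onnxruntime first. A minimal sketch, assuming a single-input model; the path and input shape are placeholders:

```python
import numpy as np
import onnxruntime as ort

# Placeholder path; use any of the ONNX models from dianna/models.
session = ort.InferenceSession("model.onnx")

# Inspect the expected input name and shape.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)

# Run the model on a random input of the right shape (assumed 1x1x28x28 here).
dummy = np.random.rand(1, 1, 28, 28).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0])
```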

Summary of all Tutorials

Illustrative (Simple)

| Modality \ Method | RISE | LIME | KernelSHAP |
|---|---|---|---|
| Images | mnist_zero_and_one_half_size or Open In Colab <br> ImageNet_autocrop or Open In Colab | | mnist_zero_and_one_half_size or Open In Colab <br> SimpleGeometric Logo or Open In Colab |
| Text | nlp-logo_half_size or Open In Colab | nlp-logo_half_size or Open In Colab | |
| Time series | Weather Logo or Open In Colab | Weather Logo or Open In Colab <br> Coffee Logo or Open In Colab | |
| Tabular | Penguin Logo or Open In Colab | Penguin Logo or Open In Colab <br> Weather Logo or Open In Colab | Penguin Logo or Open In Colab <br> Weather Logo or Open In Colab |

To learn more about how we approach the masking for time-series data, please read our Masking time-series for XAI blog post.

Scientific use-cases

| Modality \ Method | RISE | LIME | KernelSHAP |
|---|---|---|---|
| Images | | LeafSnap30 Logo or Open In Colab | |
| Text | | | |
| Time series | FRB logo or Open In Colab | | |
| Tabular | | | Atmosphere Logo or Open In Colab |

IMPORTANT: Hyperparameters

The XAI methods (explainers) are sensitive to the choice of their hyperparameters! This sensitivity is investigated in this master's thesis, which draws useful conclusions. The tables below give the default hyperparameters used in DIANNA for each explainer, as well as the values chosen in some tutorials, together with their data modality (i - images, txt - text, ts - time series, tab - tabular). The main conclusions (🠊) from the thesis (on images and text) about the effect of the hyperparameters are also listed.

RISE

| Hyperparameter | Default value | ImageNet_autocrop (i) | mnist_zero_and_one_half_size (i) | nlp-logo_half_size (txt) | Weather Logo (ts) | FRB logo (ts) |
|---|---|---|---|---|---|---|
| $n_{masks}$ | $1000$ | default | $5000$ | default | $10000$ | $5000$ |
| $p_{keep}$ | optimized (i, txt), $0.5$ (ts) | $0.1$ | $0.1$ | default | $0.1$ | $0.1$ |
| $n_{features}$ | $8$ | $6$ | default | default | default | $16$ |

🠊 The most crucial parameter is $p_{keep}$. Lower values of $p_{keep}$ lead to more sensitive explanations (observed for both images and text). Easier classification tasks usually require a lower $p_{keep}$, as this causes more perturbation of the input and therefore a more distinct signal in the model predictions.

🠊 The feature resolution $n_{features}$ exhibited an optimum at a value of $6$. Higher values can offer a finer-grained result but require (far) more masks ($n_{masks}$). This also depends on the scale of the phenomena in the input data that the explanation should take into account.

🠊 Larger $n_{masks}$ values return more consistent results at the cost of computation time. If two identical runs yield (very) different results, those results likely contain a lot of (or even mostly) noise, and a higher value of $n_{masks}$ should be used instead.
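To experiment with these RISE hyperparameters, they can be passed as keyword arguments to DIANNA's high-level API. A minimal sketch, assuming a binary image classifier in ONNX format; the model path and image shape are placeholders, and the keyword names (e.g. `feature_res` for the feature resolution $n_{features}$) may differ between DIANNA versions:

```python
import numpy as np
import dianna

# Hypothetical ONNX model path and a dummy grayscale image (channels-last).
model_path = "mnist_model.onnx"
image = np.random.rand(28, 28, 1).astype(np.float32)

# RISE hyperparameters from the table above; keyword names may vary by version.
heatmaps = dianna.explain_image(
    model_path,
    image,
    method="RISE",
    labels=[0, 1],
    n_masks=5000,   # more masks -> more consistent results, longer runtime
    p_keep=0.1,     # lower p_keep -> stronger perturbation, more distinct signal
    feature_res=6,  # resolution of the masking grid (n_features above)
)
```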

LIME

| Hyperparameter | Default value | LeafSnap30 Logo (i) | Weather Logo (ts) | Coffe Logo (ts) |
|---|---|---|---|---|
| $n_{samples}$ | $5000$ | $1000$ | $10000$ | $500$ |
| Kernel width | $25$ | default | default | default |
| $n_{features}$ | $10$ | $30$ | default | default |

🠊 The most crucial parameter is the kernel width: low values cause high sensitivity, although that observation depended on the evaluation metric.
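These LIME hyperparameters can likewise be passed through DIANNA's high-level API. A minimal sketch for a time-series classifier, assuming an ONNX model; the path and series shape are placeholders, and the keyword names mirror the underlying lime package (`num_samples`, `num_features`, `kernel_width`) but may vary between DIANNA versions:

```python
import numpy as np
import dianna

# Hypothetical ONNX model path and a dummy univariate time series (one year, daily).
model_path = "weather_model.onnx"
timeseries = np.random.rand(365, 1).astype(np.float32)

# LIME hyperparameters from the table above; keyword names may vary by version.
explanation = dianna.explain_timeseries(
    model_path,
    timeseries,
    method="LIME",
    labels=[0, 1],
    num_samples=10000,  # perturbed samples the local surrogate model is fit on
    num_features=10,    # number of segments reported in the explanation
    kernel_width=25,    # width of the kernel weighting perturbed samples
)
```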

KernelSHAP

| Hyperparameter | Default value | mnist_zero_and_one_half_size (i) | SimpleGeometric Logo (i) | Atmosphere Logo (tab) |
|---|---|---|---|---|
| $n_{samples}$ | auto/int | $1000$ | $2000$ | $136588$ |
| $n_{segments}$ | $100$ | $200$ | $200$ | default |
| $\sigma$ | $0$ | default | default | default |

🠊 The most crucial parameter is the number of super-pixels $n_{segments}$. Higher values led to higher sensitivity, although that observation depended on the evaluation metric.

🠊 Regularization had only a marginal detrimental effect; the best results were obtained with no regularization (no smoothing, $\sigma = 0$) or with least-squares regression.
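And the same pattern applies to KernelSHAP. A minimal sketch for an image classifier, assuming an ONNX model; the path and image shape are placeholders, and the keyword names (`nsamples`, `n_segments`, `sigma`) follow DIANNA's KernelSHAP wrapper but may vary between versions:

```python
import numpy as np
import dianna

# Hypothetical ONNX model path and a dummy grayscale image (channels-last).
model_path = "mnist_model.onnx"
image = np.random.rand(28, 28, 1).astype(np.float32)

# KernelSHAP hyperparameters from the table above; keyword names may vary by version.
shap_values = dianna.explain_image(
    model_path,
    image,
    method="KernelSHAP",
    labels=[0, 1],
    nsamples=1000,   # number of model evaluations (n_samples above)
    n_segments=200,  # number of super-pixels in the segmentation
    sigma=0,         # no smoothing of the segmentation, per the conclusion above
)
```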