Evaluation Framework for Multimodal Biomedical Image Registration Methods

Code for the paper Is Image-to-Image Translation the Panacea for Multimodal Image Registration? A Comparative Study (arXiv)

Open-access data: Datasets for Evaluation of Multimodal Image Registration

Overview

This repository provides an open-source quantitative evaluation framework for multimodal biomedical image registration, aiming to contribute to the openness and reproducibility of future research.

  • evaluate.py is the main script: it calls the registration methods and computes their performance (a minimal end-to-end sketch follows this list).

  • ./Datasets/ contains detailed descriptions of the evaluation datasets, along with instructions and scripts to customise them.

  • The *.sh scripts provide examples for setting up large-scale evaluations.

  • plot.py and show_samples.py can be used to plot the registration performance and visualise the modality-translation results (see paper for examples).

  • Each method folder contains a modified implementation of that method, tested for compatibility with this evaluation framework (see paper for details).

  • Other files should be self-explanatory; otherwise, please open an issue.
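
For orientation, here is a minimal end-to-end sketch of how these pieces fit together, using the same {dataset}, {fold} and {gpu_id} placeholders as the Usage section below (pix2pix/CycleGAN is taken as the example method; the other methods follow the same pattern with their own scripts):

# 1. train an image-to-image translation model (here: pix2pix/CycleGAN)
cd pytorch-CycleGAN-and-pix2pix/
./commands_{dataset}.sh {fold} {gpu_id}

# 2. translate the evaluation patches: {Dataset}_patches -> {Dataset}_patches_fake
./predict_{dataset}.sh

# 3. back at the repository root, run the registration evaluation
cd ..
python evaluate.py -h   # lists the available options

# 4. plot and inspect the results with plot.py and show_samples.py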

Usage

Image-to-Image translation

  • pix2pix and CycleGAN: run commands_*.sh to train and predict_*.sh to translate (a sketch for looping over several folds follows this subsection)
# train and test 
cd pytorch-CycleGAN-and-pix2pix/
./commands_{dataset}.sh {fold} {gpu_id} # no {fold} for Histological data

# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_{dataset}.sh

# for RIRE dataset
# RIRE_temp -> RIRE_slices_fake
./predict_rire.sh
  • DRIT++: run commands_*.sh to train and predict_*.sh to translate
# train and test
cd ../DRIT/src/
./commands_{dataset}.sh

# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_{dataset}.sh

# for RIRE dataset
# ../../pytorch-CycleGAN-and-pix2pix/datasets/rire_cyc_train -> RIRE_slices_fake
./predict_rire.sh
  • StarGANv2: run commands_*.sh to train and predict_*.sh to translate
# train (for all datasets)
cd ../stargan-v2/
./commands_{dataset}.sh {fold} {gpu_id} # no {fold} for Histological data

# test
# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_{dataset}.sh

# for RIRE dataset
# RIRE_temp -> RIRE_slices_fake
./predict_rire.sh
  • CoMIR: run commands_train.sh to train and predict_all.sh to translate
# train and test (for all datasets)
cd ../CoMIR/
./commands_train.sh

# modality mapping of evaluation data
# {Dataset}_patches -> {Dataset}_patches_fake
./predict_all.sh {gpu_id}
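
The commands_{dataset}.sh wrappers of the methods that take a {fold} argument (pix2pix/CycleGAN and StarGANv2 above) handle one fold per call, so covering several folds is just a loop around them. The sketch below only illustrates that pattern; the fold indices are placeholders, not values taken from this README:

# run several folds of one dataset on the same GPU (fold indices are placeholders)
cd pytorch-CycleGAN-and-pix2pix/
for fold in 1 2 3; do
    ./commands_{dataset}.sh "$fold" {gpu_id}
done
# then translate the evaluation patches once training is done
./predict_{dataset}.sh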

Evaluate registration performance

Run python evaluate.py -h or python evaluate_3D.py -h to see the options.
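
For example (the comments are only a reading of the script names; check the -h output for the actual options and the data each script expects):

python evaluate.py -h      # options for the 2D evaluation
python evaluate_3D.py -h   # options for the 3D (RIRE) evaluation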

Dependencies

environment.yml includes the full list of packages used to run most of the experiments; some packages might be unnecessary, and there are a few exceptions.
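
If you use conda, the environment can be recreated directly from that file; the environment name to activate is whichever name environment.yml defines (placeholder below):

conda env create -f environment.yml
conda activate {env_name}   # placeholder: use the name field from environment.yml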

Citation

Please consider citing our paper and dataset if you find the code useful for your research.

@article{luImagetoImageTranslationPanacea2021,
  title = {Is {{Image}}-to-{{Image Translation}} the {{Panacea}} for {{Multimodal Image Registration}}? {{A Comparative Study}}},
  shorttitle = {Is {{Image}}-to-{{Image Translation}} the {{Panacea}} for {{Multimodal Image Registration}}?},
  author = {Lu, Jiahao and {\"O}fverstedt, Johan and Lindblad, Joakim and Sladoje, Nata{\v s}a},
  year = {2022},
  month = nov,
  journal = {PLOS ONE},
  volume = {17},
  number = {11},
  pages = {e0276196},
  issn = {1932-6203},
  doi = {10.1371/journal.pone.0276196},
  langid = {english}
}

@dataset{luDatasetsEvaluationMultimodal2021,
  title = {Datasets for {{Evaluation}} of {{Multimodal Image Registration}}},
  author = {Lu, Jiahao and {\"O}fverstedt, Johan and Lindblad, Joakim and Sladoje, Nata{\v s}a},
  year = {2021},
  month = apr,
  publisher = {{Zenodo}},
  doi = {10.5281/zenodo.5557568},
  language = {eng}
}

Code Reference

The modified implementations under pytorch-CycleGAN-and-pix2pix/, DRIT/, stargan-v2/, and CoMIR/ are adapted from the corresponding original repositories.
