voxelmorph (legacy branch)

Unsupervised Learning with CNNs for Image Registration
This repository incorporates several variants, first presented at CVPR2018 (initial unsupervised learning) and then at MICCAI2018 (probabilistic and diffeomorphic formulation).

keywords: machine learning, convolutional neural networks, alignment, mapping, registration

Recent Updates

  • Due to popular demand, we added a preliminary pytorch version; please see the pytorch folder.
    Note: we will soon release a new voxelmorph version that integrates both the keras/tensorflow and pytorch versions.

  • A new core tutorial provides intuition for voxelmorph!

  • See our learning method to automatically build atlases using VoxelMorph, to appear at NeurIPS2019.

Instructions

Setup

It might be useful to have each folder inside the ext folder on your Python path. Assuming voxelmorph is set up at /path/to/voxelmorph/:

export PYTHONPATH=$PYTHONPATH:/path/to/voxelmorph/ext/neuron/:/path/to/voxelmorph/ext/pynd-lib/:/path/to/voxelmorph/ext/pytools-lib/
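
If you prefer not to modify your shell environment, an alternative is to extend sys.path at the top of your own scripts. A minimal sketch, assuming the same /path/to/voxelmorph/ layout:

import sys
# add the bundled ext packages (neuron, pynd-lib, pytools-lib) to the module search path
for pkg in ('neuron', 'pynd-lib', 'pytools-lib'):
    sys.path.append('/path/to/voxelmorph/ext/' + pkg)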

If you would like to train/test your own model, you will likely need to write some of the data loading code in 'datagenerator.py' for your own datasets and data formats. There are several hard-coded elements related to data preprocessing and format.
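
As a rough starting point, a generator for the per-subject npz layout described under Training below might look like the sketch here; the filename pattern, the vol_data key, and the batch handling are assumptions to adapt to your own data:

import glob
import numpy as np

def example_vol_generator(data_dir, batch_size=1):
    # yields batches of shape (batch_size, *vol_shape, 1), drawn at random
    # from per-subject npz files stored under the key 'vol_data'
    files = glob.glob(data_dir + '/*.npz')
    while True:
        picks = np.random.choice(files, size=batch_size)
        vols = [np.load(f)['vol_data'][np.newaxis, ..., np.newaxis] for f in picks]
        yield np.concatenate(vols, axis=0)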

Training

These instructions are for the MICCAI2018 variant using train_miccai2018.py.
If you'd like to run the CVPR version (no diffeomorphism or uncertainty measures, and using CC/MSE as the loss function), use train.py.

  1. Change the top parameters in train_miccai2018.py to the location of your image files.
  2. Run train_miccai2018.py with options described in the main function at the bottom of the file. Example:
train_miccai2018.py /my/path/to/data --gpu 0 --model_dir /my/path/to/save/models 

In our experiments, /my/path/to/data contains one npz file for each subject saved in the variable vol_data.
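
For reference, converting a preprocessed volume into this layout could be as simple as the sketch below (the nibabel usage and filenames are illustrative assumptions):

import nibabel as nib
import numpy as np

# load a preprocessed volume and save it as one npz file per subject,
# stored under the key 'vol_data' as expected by the training scripts
vol = nib.load('/my/path/to/subject01.nii.gz').get_fdata().astype('float32')
np.savez_compressed('/my/path/to/data/subject01.npz', vol_data=vol)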

We provide a T1 brain atlas used in our papers at data/atlas_norm.npz.

Testing (measuring Dice scores)

  1. Put test filenames in data/test_examples.txt, and anatomical labels in data/test_labels.mat.
  2. Run python test_miccai2018.py [gpu-id] [model_dir] [iter-num]
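
For orientation, the Dice scores reported here are the standard per-label overlap between corresponding anatomical segmentations. A stand-alone sketch (label values and array names are placeholders):

import numpy as np

def dice(seg1, seg2, labels):
    # per-label Dice overlap between two integer label maps of equal shape
    scores = []
    for lab in labels:
        a, b = (seg1 == lab), (seg2 == lab)
        denom = a.sum() + b.sum()
        scores.append(2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else np.nan)
    return np.array(scores)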

Registration

If you simply want to register two images:

  1. Choose the appropriate model, or train your own.
  2. Use register.py. For example, let's say we have a model trained to register a subject (moving) to an atlas (fixed). One could run:
python register.py --gpu 0 /path/to/test_vol.nii.gz /path/to/atlas_norm.nii.gz --out_img /path/to/out.nii.gz --model_file ../models/cvpr2018_vm2_cc.h5 

Parameter choices

CVPR version

For the CC loss function, we found a reg parameter of 1 to work best. For the MSE loss function, we found 0.01 to work best.
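
For context, the reg parameter weights a smoothness penalty on the predicted deformation against the image similarity term, roughly loss = similarity(fixed, moved) + reg * smoothness(flow). A schematic of the MSE variant (not the repository's exact implementation):

import numpy as np

def example_loss(fixed, moved, flow, reg=0.01):
    # image similarity (MSE) plus a reg-weighted penalty on the spatial
    # gradients of the displacement field; flow has shape (*vol_shape, ndims)
    sim = np.mean((fixed - moved) ** 2)
    smooth = sum(np.mean(np.diff(flow, axis=d) ** 2) for d in range(flow.ndim - 1))
    return sim + reg * smooth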

MICCAI version

For our data, we found image_sigma=0.01 and prior_lambda=25 to work best.

In the original MICCAI code, the parameters were applied after the scaling of the velocity field. With the newest code, this has been "fixed", with different default parameters reflecting the change. We recommend running the updated code. However, if you'd like to run the very original MICCAI2018 mode, please use the xy indexing and use_miccai_int network options, with the MICCAI2018 parameters.

Spatial Transforms and Integration

  • The spatial transform code, found at neuron.layers.SpatialTransform, accepts N-dimensional affine and dense transforms, and includes linear and nearest-neighbor interpolation options. Note that the original development of VoxelMorph used xy indexing, whereas we are now emphasizing ij indexing.

  • For the MICCAI2018 version, we integrate the velocity field using neuron.layers.VecInt. By default we integrate using scaling and squaring, which we found efficient.
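
For intuition, scaling and squaring integrates a stationary velocity field by scaling it down by 2**nb_steps and then composing the resulting small displacement with itself nb_steps times. A plain numpy/scipy sketch for a 2D field, using ij indexing (an illustration only, not the neuron.layers.VecInt implementation):

import numpy as np
from scipy.ndimage import map_coordinates

def integrate_vec_2d(vel, nb_steps=7):
    # vel: velocity/displacement field of shape (H, W, 2)
    disp = vel / (2 ** nb_steps)
    grid = np.stack(np.meshgrid(np.arange(vel.shape[0]),
                                np.arange(vel.shape[1]), indexing='ij'), axis=-1)
    for _ in range(nb_steps):
        # compose the map with itself: disp_new(x) = disp(x) + disp(x + disp(x))
        coords = (grid + disp).transpose(2, 0, 1)  # (2, H, W), as map_coordinates expects
        warped = np.stack([map_coordinates(disp[..., d], coords, order=1)
                           for d in range(2)], axis=-1)
        disp = disp + warped
    return disp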

VoxelMorph Papers

If you use voxelmorph or some part of the code, please cite the relevant papers (see bibtex).

Notes on Data

In our initial papers, we used publicly available data, but unfortunately we cannot redistribute it (due to the constraints of those datasets). We do a certain amount of pre-processing for the brain images we work with, to eliminate sources of variation and to compare algorithms on a level playing field. In particular, we perform FreeSurfer recon-all steps up to skull stripping and affine normalization to Talairach space, and crop the images via ((48, 48), (31, 33), (3, 29)).
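
For clarity, each tuple in the crop spec reads as (voxels removed from the start, voxels removed from the end) of the corresponding axis; on a 256^3 FreeSurfer-conformed volume this yields 160 x 192 x 224. A sketch of this interpretation:

import numpy as np

def crop_vol(vol, crop=((48, 48), (31, 33), (3, 29))):
    # remove (lo, hi) voxels from each axis of a 3D volume
    slices = tuple(slice(lo, vol.shape[i] - hi) for i, (lo, hi) in enumerate(crop))
    return vol[slices]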

We encourage users to download and process their own data. See a list of medical imaging datasets here. Note that you likely do not need to perform all of the preprocessing steps, and indeed VoxelMorph has been used in other work with other data.

Creation of Deformable Templates

We present a template construction method in this preprint:

To experiment with this method, please use train_img_template.py for unconditional templates and train_cond_template.py for conditional templates, which use the same conventions as voxelmorph (please note that these files are less polished than the rest of the voxelmorph library).

We've also provided an unconditional atlas in /data/uncond_atlas_creation_k.npy.

Model weights in h5 format are provided for the unconditional atlas here, and for the conditional atlas here.

Explore the atlases interactively here with tipiX!

Unified Segmentation

We recently published a deep learning method for unsupervised segmentation that makes use of the voxelmorph infrastructure. See the unified seg README for more information.

Significant Updates

2019-11-28: Added a preliminary version of pytorch
2019-08-08: Added support for building templates
2019-04-27: Added support for unified segmentation
2019-01-07: Added example register.py file
2018-11-10: Added support for multi-gpu training
2018-10-12: Significant overhaul of code, especially training scripts and new model files.
2018-09-15: Added MICCAI2018 support and py3 transition
2018-05-14: Initial Repository for CVPR version, py2.7

Contact:

For any problems or questions, please open an issue on GitHub.
