# Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation (DAPSAM)

This is the official code of our MICCAI 2024 paper DAPSAM 🥳

## Requirements

```bash
pip install -r requirements.txt
```

## Data Preparation

- Prostate Segmentation
- RIGA+ Segmentation

Please download the pretrained SAM model (provided by the official SAM repository) and place it in the `./pretrained` folder.

We also provide well-trained models at Release. Please put them in the `./snapshot` folder for evaluation.
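
For reference, a minimal sketch of the expected layout after these steps is below; the filenames are illustrative (the SAM name assumes the official ViT-B checkpoint, and the snapshot name depends on the asset you download from the Release page):

```bash
# Illustrative only: create the expected folders and move the downloaded checkpoints into them.
mkdir -p pretrained snapshot
# Official SAM weights (the ViT-B checkpoint name is an assumption; use the file you actually downloaded)
mv /path/to/sam_vit_b_01ec64.pth ./pretrained/
# Well-trained DAPSAM model from the Release page (filename here is hypothetical)
mv /path/to/dapsam_released_model.pth ./snapshot/
```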

## Prostate Segmentation

We take the setting using RUNMC (source domain) and the other five datasets (target domains) as an example.

```bash
cd prostate
# Training
CUDA_VISIBLE_DEVICES=0 python train.py --root_path dataset_path --output output_path --Source_Dataset RUNMC --Target_Dataset BIDMC BMC HK I2CVB UCL
# Test
CUDA_VISIBLE_DEVICES=0 python test.py --root_path dataset_path --output_dir output_path --Source_Dataset RUNMC --Target_Dataset BIDMC BMC HK I2CVB UCL --snapshot snapshot_path
```
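
The other settings simply swap which dataset is the source. As a sketch, the command below assumes the scripts accept any of the six sites as `--Source_Dataset`; only the domain lists change, all other flags stay the same:

```bash
cd prostate
# Example of a different split: BIDMC as the source domain, the remaining five sites as targets
CUDA_VISIBLE_DEVICES=0 python train.py --root_path dataset_path --output output_path --Source_Dataset BIDMC --Target_Dataset RUNMC BMC HK I2CVB UCL
```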

## RIGA+ Segmentation

We take the setting using BinRushed (source domain) and the other three datasets (target domains) as an example.

```bash
cd fundus
# Training
CUDA_VISIBLE_DEVICES=0 python train.py --root_path dataset_path --output output_path --Source_Dataset BinRushed --Target_Dataset MESSIDOR_Base1 MESSIDOR_Base2 MESSIDOR_Base3
# Test
CUDA_VISIBLE_DEVICES=0 python test.py --root_path dataset_path --output output_path --Source_Dataset BinRushed --Target_Dataset MESSIDOR_Base1 MESSIDOR_Base2 MESSIDOR_Base3 --snapshot snapshot_path
```

## Cite

If you find this code useful, please cite:

```bibtex
@inproceedings{wei2024prompting,
  title={Prompting Segment Anything Model with Domain-Adaptive Prototype for Generalizable Medical Image Segmentation},
  author={Wei, Zhikai and Dong, Wenhui and Zhou, Peilin and Gu, Yuliang and Zhao, Zhou and Xu, Yongchao},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={533--543},
  year={2024},
  organization={Springer}
}
```

## Acknowledgement

We appreciate the developers of the Segment Anything Model. The code of DAPSAM is built upon SAMed, and we are grateful to both projects.