MedICL-VU/Kidney-Stone-Segmentation

The Development of a Computer Vision Model for Automated Kidney Stone Segmentation and an Evaluation against Expert Surgeons


Ekamjit S. Deol (co-first author) 1, Daiwei Lu (co-first author) 2, Tatsuki Koyama 3, Ipek Oguz 2, Nicholas L. Kavoussi 4

1 Saint Louis University School of Medicine, St. Louis, MO, USA

2 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA

3 Department of Biostatistics, Vanderbilt University, Nashville, TN, USA

4 Department of Urology, Vanderbilt University Medical Center, Nashville, TN, USA

Submitted to JEndourology 2024


Adapted from the StoneAnno (Stone Annotation) paper:

Stoebner, Zachary A., Daiwei Lu, Seok Hee Hong, Nicholas L. Kavoussi, and Ipek Oguz. "Segmentation of kidney stones in endoscopic video feeds." In Medical Imaging 2022: Image Processing, vol. 12032, pp. 900-908. SPIE, 2022.

Install & Requirements

This project should be run inside a conda environment; otherwise you are likely to run into dependency problems, particularly with OpenCV.

Required install commands:

  • conda install -c conda-forge opencv

  • conda install pytorch torchvision torchaudio -c pytorch

    • Prefer the conda install command generated for your machine on the PyTorch website, so the build matches your CUDA setup.
  • conda install -c conda-forge seaborn

  • conda install -c conda-forge pandas

  • pip install comet_ml

  • conda install -c conda-forge tensorboard

  • conda install -c conda-forge scikit-learn

  • pip install tqdm

  • pip install scikit-image

  • pip install segmentation-models-pytorch

  • pip install albumentations


Comet Experiments

Our codebase uses Comet to log experiments. Create an account and workspace, then follow the Comet instructions to obtain an API key. Create a config.yaml file with the following structure:

api_key: <KEY>
project_name: <PROJECT_NAME>
workspace: <WORKSPACE>
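A minimal sketch of loading this config, assuming the flat three-key layout above. The loader function is illustrative, not part of the codebase; yaml.safe_load from PyYAML works equally well:

```python
from pathlib import Path

def load_comet_config(path="config.yaml"):
    """Parse the flat `key: value` config.yaml into a dict (stdlib-only)."""
    cfg = {}
    for line in Path(path).read_text().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            cfg[key.strip()] = value.strip()
    return cfg

# The resulting dict maps directly onto the comet_ml.Experiment constructor:
#   from comet_ml import Experiment
#   experiment = Experiment(**load_comet_config())
```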

Data

Data is expected in the form of cropped endoscopy images (.png/.jpg/etc.) with corresponding binary (0/1) segmentation masks, in the following structure:

root
-data
--train
---images
----video1folder
-----image1.png
-----image2.png
----video2folder
-----image1.png
-----image2.png
----...
---masks
----video1folder
-----image1.png
-----image2.png
----video2folder
-----image1.png
-----image2.png
--val
---...
--test
---...

Video folder names and frame names are assumed to be identical between the images and masks directories.
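Before training, it can help to verify that the masks tree mirrors the images tree. This is a hypothetical helper (not in the repository), written against the layout above:

```python
from pathlib import Path

def check_split(split_dir):
    """Return the paths of masks missing for any image in a split (e.g. data/train).

    An empty list means every image frame has a mask with the same
    video folder and frame name.
    """
    images = Path(split_dir, "images")
    masks = Path(split_dir, "masks")
    missing = []
    for img in images.rglob("*.png"):
        mask = masks / img.relative_to(images)
        if not mask.exists():
            missing.append(str(mask))
    return missing
```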


Scripts

  1. Run train.py or test.py:

python ../<phase>.py [--<argument name> <arg value> ...]

Available arguments are listed in util/__init__.py.

If running from the repository root, uncomment any os.chdir('..') instructions in the main script flow; the default behavior is to call the scripts from the slurm dir for running on a cluster.

  2. To run a helper script, from the project root:

python scripts/<script>.py [--<argument name> <arg value> ...]

Troubleshooting

  • If the model was trained on normalized inputs, then inputs for testing and synthesis must be normalized with the same statistics.
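To illustrate, the normalization statistics computed on the training set must be reused verbatim at test time; recomputing them on test data shifts the input distribution the model sees. A toy sketch (the function and values are hypothetical, not the project's actual preprocessing):

```python
def normalize(pixels, mean, std):
    """Apply the *training-set* mean/std to any input fed to the model."""
    return [(p - mean) / std for p in pixels]

# Training and test inputs must share the same (mean, std), e.g.:
train_mean, train_std = 10.0, 5.0
test_input = normalize([10, 20], train_mean, train_std)
```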
