Driven-Data Biomedical Research Competition
Team BUET Synapticans under M-Health Research Lab, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology
Placed 7th on the private leaderboard with an MCC score of 0.7392
Head over to the Alzheimer's Stall Catchers competition website for competition details.
This competition asks for a binary classifier: given a sample video, the model has to predict whether the sample is stalled or non-stalled. We prepared the data in two ways to help the model differentiate between the two classes. Viewed as a point cloud, stalled and non-stalled samples are easy to visualize and tell apart. For instance, a stalled sample:
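The point cloud renderings shown here can be reproduced with a few lines of matplotlib. The snippet below is a minimal sketch, assuming the cloud is stored as an (N, 3) array of (x, y, frame-index) coordinates; the `.npy` file name is a placeholder, not a file shipped with the repository.

```python
# Minimal sketch for viewing a saved point cloud as a 3D scatter plot.
# Assumes an (N, 3) array of (x, y, frame-index) points; the file name
# below is a placeholder.
import numpy as np
import matplotlib.pyplot as plt

cloud = np.load("sample_stalled_cloud.npy")        # shape: (N, 3)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(cloud[:, 0], cloud[:, 1], cloud[:, 2], s=1)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("frame (t)")
plt.show()
```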
The proposed solution primarily comprises two major pipelines:
- Image based approach
- Point cloud based approach
Several deep learning classifier models have been tested in both approaches, including ResNet, DenseNet, ResNet 3D, and several other 3D-convolution-based models. Implementation details for every model tried and tested on the dataset are discussed further in the subdirectories of the repository: 'Point Cloud Based Approach' holds all the code for the point cloud based approach, and 'Image Based Approach' holds all the code for the image based approach.
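For orientation, the snippet below shows the general shape of such a classifier: a minimal sketch that puts a single-logit head on torchvision's ResNet 3D (`r3d_18`). It is not the exact competition model; the architectures and hyperparameters actually used are documented in the notebooks inside the subdirectories.

```python
# Minimal sketch of a 3D-convolution-based binary classifier (illustrative,
# not the exact competition model). Input clips are (batch, 3, frames, H, W).
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

model = r3d_18()                               # 3D ResNet backbone (load pretrained weights in practice)
model.fc = nn.Linear(model.fc.in_features, 1)  # single logit: stalled vs. non-stalled

clips = torch.randn(2, 3, 16, 112, 112)        # dummy batch of two 16-frame clips
probs = torch.sigmoid(model(clips))            # probability of "stalled"
print(probs.shape)                             # torch.Size([2, 1])
```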
The final submission of the competition was an ensemble of the best models from the two major pipelines. The ensemble scheme is also explained in detail.
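The exact weighting is documented in the 'Final Ensemble of Models' directory; the sketch below only illustrates the general idea of blending per-sample stall probabilities from the two pipelines and thresholding the average. The CSV names, the equal weights, and the 0.5 threshold are placeholders.

```python
# Illustrative ensemble: average the stall probabilities predicted by the
# image-based and point-cloud-based pipelines, then threshold.
# File names, weights, and threshold are placeholder assumptions.
import pandas as pd

img_preds = pd.read_csv("image_pipeline_probs.csv")       # columns: filename, stalled_prob
pc_preds = pd.read_csv("point_cloud_pipeline_probs.csv")  # same layout

merged = img_preds.merge(pc_preds, on="filename", suffixes=("_img", "_pc"))
merged["stalled_prob"] = 0.5 * merged["stalled_prob_img"] + 0.5 * merged["stalled_prob_pc"]
merged["stalled"] = (merged["stalled_prob"] > 0.5).astype(int)

merged[["filename", "stalled"]].to_csv("submission.csv", index=False)
```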
The repository encapsulates the work of all the team members during the competition. The subdirectories contain all the major work towards the final submission.
- Dataset Visualization and Processing contains code for visualizing and processing the given dataset (videos, images with the ROI (region of interest) extracted, point clouds of blood vessels, grayscale ROI frame extraction, point cloud extraction)
- Image Based Approach contains the classification scheme based on image frames extracted from each sample video's ROI.
- Point Cloud Based Approach contains the classification scheme based on point clouds of the blood vessels present in the sample videos.
- Final Ensemble of Models contains the ensemble scheme of the best models from the two approaches mentioned above.
- Other Approaches contains solution approaches tried during the competition that did not perform as expected, but have been included in the repository for future reference.
- First, run the code to generate point clouds from the video data. The code can be found in the 'Dataset Visualization and Processing' folder (a rough sketch of what this conversion involves is given after this list).
python dataset_generator.py
- Then, to train, run the notebook from the 'Point Cloud Based Approach' folder for the DenseNet and ResNet models, or run this notebook for the ResNext and Wide_ResNet models.
- Lastly, for inference, run the notebook from the 'Point Cloud Based Approach' folder for the DenseNet and ResNet models, or run this notebook for ResNext and Wide_ResNet model inference. All the notebooks were used in Google Colab, so you may need to modify them to run on your local machine.
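The actual conversion is handled by `dataset_generator.py`. As a rough, hedged illustration of what turning a video into a point cloud involves, the snippet below collects the bright pixels of every frame into (x, y, t) points; the brightness threshold and the video path are assumptions, not values taken from the script.

```python
# Illustrative sketch of video-to-point-cloud conversion (the real logic is
# in dataset_generator.py). Bright pixels of each frame become (x, y, t) points;
# the threshold of 128 and the file name are placeholder assumptions.
import cv2
import numpy as np

def video_to_point_cloud(video_path, threshold=128):
    cap = cv2.VideoCapture(video_path)
    points, t = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ys, xs = np.where(gray > threshold)               # candidate vessel pixels
        points.append(np.stack([xs, ys, np.full_like(xs, t)], axis=1))
        t += 1
    cap.release()
    return np.concatenate(points, axis=0) if points else np.empty((0, 3))

cloud = video_to_point_cloud("sample.mp4")
print(cloud.shape)                                        # (N_points, 3) -> x, y, frame index
```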
- First, create a folder named after your dataset (e.g. "micro") and keep all the videos in that folder.
- Then place this code in the directory containing your dataset folder, open a terminal, and run the following command:
python extract_frames_new.py --dataset_name micro
- This creates a new folder named "micro_frames" containing the extracted video frames with the region of interest: the background and other unnecessary pixels are colored blue, and the region of interest is outlined with an orange contour (a hedged OpenCV sketch of this masking is given after this list).
- A subset of the dataset preprocessed this way has been uploaded to Kaggle.
- Finally, use this notebook to conduct your training. Details of every cell of the training code can be found here.
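Referring back to the masking step above, the sketch below shows one way such ROI extraction could be done with OpenCV. The HSV colour range and the choice of the largest contour as the ROI are illustrative assumptions; the actual logic lives in `extract_frames_new.py`.

```python
# Hedged sketch of the ROI masking described above: locate the orange contour
# and paint everything outside it blue. The HSV range and largest-contour
# heuristic are assumptions; see extract_frames_new.py for the real logic.
import cv2
import numpy as np

def mask_frame(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    orange = cv2.inRange(hsv, (5, 100, 100), (25, 255, 255))   # orange-ish outline
    contours, _ = cv2.findContours(orange, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame
    roi = max(contours, key=cv2.contourArea)                   # largest contour as ROI
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.drawContours(mask, [roi], -1, 255, thickness=cv2.FILLED)
    out = frame.copy()
    out[mask == 0] = (255, 0, 0)                               # paint background blue (BGR)
    return out
```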
Team Members:
- Partho Ghosh
- Md. Abrar Istiak Akib
- Swapnil Saha
- Mir Sayeed Mohammad