In this work, we present new ways to successfully train very deep GCNs. We borrow concepts from CNNs, mainly residual/dense connections and dilated convolutions, and adapt them to GCN architectures. Through extensive experiments, we show the positive effect of these deep GCN frameworks.
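To illustrate the core idea, here is a minimal sketch of a residual GCN block in PyTorch. This is not the repository's implementation; `SimpleGraphConv`, `ResGraphBlock`, and the dense-adjacency aggregation are illustrative assumptions (the paper's models build k-NN graphs and use EdgeConv-style aggregation).

```python
# Minimal sketch of a residual GCN block (illustrative, not the repo's code).
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """Hypothetical graph convolution: aggregate neighbors, then transform."""
    def __init__(self, channels):
        super().__init__()
        self.linear = nn.Linear(channels, channels)

    def forward(self, x, adj):
        # adj: (num_nodes, num_nodes) adjacency matrix (assumed row-normalized)
        return torch.relu(self.linear(adj @ x))

class ResGraphBlock(nn.Module):
    """Graph conv with a residual (skip) connection, the key to going deep."""
    def __init__(self, channels):
        super().__init__()
        self.conv = SimpleGraphConv(channels)

    def forward(self, x, adj):
        return self.conv(x, adj) + x  # vertex-wise skip connection

# Usage: stack many blocks; the skip connections keep gradients flowing.
x = torch.randn(1024, 64)    # 1024 nodes, 64 features each
adj = torch.eye(1024)        # placeholder graph (identity adjacency)
blocks = nn.ModuleList([ResGraphBlock(64) for _ in range(28)])
for blk in blocks:
    x = blk(x, adj)
```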
[Project] [Paper] [Slides] [TensorFlow Code] [PyTorch Code]
We conduct extensive experiments to show how different components (#Layers, #Filters, #Nearest Neighbors, Dilation, etc.) affect DeepGCNs. We also provide ablation studies on different types of Deep GCNs (MRGCN, EdgeConv, GraphSAGE and GIN).
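For the dilation component, the idea is a dilated k-NN: compute the k·d nearest neighbors and keep every d-th one, enlarging the receptive field without adding neighbors. The sketch below assumes point-cloud input; `dilated_knn` is an illustrative name, not the repo's API.

```python
# Sketch of dilated k-NN neighbor selection (illustrative, not the repo's API).
import torch

def dilated_knn(points, k=16, d=2):
    """points: (num_points, 3). Returns (num_points, k) neighbor indices."""
    dist = torch.cdist(points, points)             # pairwise Euclidean distances
    # take the k*d closest points, then subsample with stride d
    idx = dist.topk(k * d, largest=False).indices  # (num_points, k*d)
    return idx[:, ::d]                             # dilated neighbor set

pts = torch.randn(1024, 3)
neighbors = dilated_knn(pts, k=16, d=2)  # 16 dilated neighbors per point
# Note: in this simple sketch each point includes itself as its nearest neighbor.
```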
For further information and details, please contact Guohao Li and Matthias Müller.
- TensorFlow 1.12.0
- h5py
- vtk (only needed for visualization)
- jupyter notebook (only needed for visualization)
In order to set up a conda environment with all necessary dependencies, run:
conda env create -f environment.yml
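Afterwards, activate the environment. The environment name is defined in environment.yml; `deepgcn` below is an assumption, so check the file if activation fails:

conda activate deepgcn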
You will find detailed instructions on how to use our code for semantic segmentation of 3D point clouds in the sem_seg folder. Currently, we provide the following:
- Conda environment
- Setup of S3DIS Dataset
- Training code
- Evaluation code
- Several pretrained models
- Visualization code
Please cite our paper if you find anything helpful:
@InProceedings{li2019deepgcns,
  title={DeepGCNs: Can GCNs Go as Deep as CNNs?},
  author={Guohao Li and Matthias Müller and Ali Thabet and Bernard Ghanem},
  booktitle={The IEEE International Conference on Computer Vision (ICCV)},
  year={2019}
}

@misc{li2019deepgcns_journal,
  title={DeepGCNs: Making GCNs Go as Deep as CNNs},
  author={Guohao Li and Matthias Müller and Guocheng Qian and Itzel C. Delgadillo and Abdulellah Abualshour and Ali Thabet and Bernard Ghanem},
  year={2019},
  eprint={1910.06849},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
MIT License
This code borrows heavily from PointNet and EdgeConv. We would also like to thank 3d-semantic-segmentation for the visualization code.