
# CrossLoc3D: Aerial-Ground Cross-Source 3D Place Recognition (Accepted by ICCV 2023)


**CrossLoc3D: Aerial-Ground Cross-Source 3D Place Recognition**

Tianrui Guan, Aswath Muthuselvam, Montana Hoover, Xijun Wang, Jing Liang, Adarsh Jagan Sathyamoorthy, Damon Conover, Dinesh Manocha

## Motivation

Representation gap between aerial and ground sources: bounding boxes of the same color mark the same region and highlight the differences between aerial (left) and ground (right) LiDAR scans.

- **Scopes** (cyan, `#00ffff`): aerial scans cover a large region, while ground scans cover only a local area.
- **Coverages** (green, `#65f015`): aerial scans capture the tops of buildings, while ground scans capture more detail at ground level.
- **Densities** (blue, `#151cf0`): the distribution and density of the points differ because of the varying scan patterns, effective ranges, and fidelity of the LiDARs.
- **Noise patterns** (red, `#f03c15`): aerial scans carry more noise, as seen in the bird's-eye and top-down views of a building corner.

## Network Architecture

If you find this project useful in your research, please cite our work:

```bibtex
@InProceedings{Guan_2023_ICCV,
    author    = {Guan, Tianrui and Muthuselvam, Aswath and Hoover, Montana and Wang, Xijun and Liang, Jing and Sathyamoorthy, Adarsh Jagan and Conover, Damon and Manocha, Dinesh},
    title     = {CrossLoc3D: Aerial-Ground Cross-Source 3D Place Recognition},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
}
```

## Getting Started

### Setting up the Environment

Create the conda environment and install PyTorch:

```bash
conda create -n crossloc python=3.7 pandas tensorboard numpy -c conda-forge
conda activate crossloc
conda install pytorch=1.9.1 torchvision cudatoolkit=11.1 -c pytorch -c nvidia
```

Install the BLAS and OpenEXR dependencies:

```bash
conda install openblas-devel -c anaconda
sudo apt-get install openexr libopenexr-dev
conda install -c conda-forge openexr
```

Install the remaining Python packages:

```bash
pip install laspy pytest addict pytorch-metric-learning==0.9.97 yapf==0.40.1 bitarray==1.6.0 h5py transforms3d open3d
pip install tqdm setuptools==59.5.0 einops
pip install bagpy utm pptk
conda install -c conda-forge openexr-python
pip install pyexr pyntcloud
```

Build and install MinkowskiEngine against the conda-provided OpenBLAS:

```bash
cd MinkowskiEngine
python setup.py install --blas_include_dirs=${CONDA_PREFIX}/include --blas=openblas
```
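To verify the setup, a minimal sanity check (a sketch assuming only the packages installed above) confirms that PyTorch sees the GPU and that MinkowskiEngine imports cleanly:

```python
# Quick sanity check for the environment installed above.
import torch
import MinkowskiEngine as ME

print("torch:", torch.__version__)            # expect 1.9.1
print("CUDA available:", torch.cuda.is_available())
print("MinkowskiEngine:", ME.__version__)
```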

## Dataset

### Oxford RobotCar dataset

Follow the instructions of this repo or download `benchmark_datasets.zip` from here, then put the `benchmark_datasets` folder in the `data` folder.

Generate the training and test tuples:

```bash
python ./datasets/preprocess/generate_training_tuples_baseline.py
python ./datasets/preprocess/generate_test_sets.py
```
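PointNetVLAD-style preprocessing scripts like these typically serialize the generated tuples as Python pickles; the file name below is an assumption, so substitute whatever the scripts report writing:

```python
# Peek at a generated tuple file. The file name is a guess based on
# PointNetVLAD-style preprocessing output; adjust it to the actual file
# the scripts produce.
import pickle

with open("training_queries_baseline.pickle", "rb") as f:
    queries = pickle.load(f)

print(type(queries), len(queries))
```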

### CS-Campus3D (Ours)

The dataset can be accessed here.

Download the data and put the `benchmark_datasets` folder in the `data` folder.
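For either dataset, the code expects the point clouds under `data/benchmark_datasets`; a quick check like the sketch below (paths per the instructions above, subfolder names will vary by dataset) confirms the layout:

```python
# Confirm the dataset was unpacked where the README expects it.
from pathlib import Path

root = Path("data") / "benchmark_datasets"
assert root.is_dir(), f"missing dataset folder: {root.resolve()}"
for entry in sorted(root.iterdir()):
    print(entry.name)  # dataset subfolders
```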

## Training

```bash
CUDA_VISIBLE_DEVICES=0 python main.py ./configs/<config_file>.py
```
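The configs are plain Python files, so you can inspect what one defines before launching a run. A minimal sketch, assuming the config imports nothing heavy; `./configs/oxford_example.py` is a hypothetical name, use one of the config files linked in the Checkpoints table below:

```python
# Load a config module by its path and list the names it defines.
# "./configs/oxford_example.py" is a hypothetical file name.
import importlib.util

spec = importlib.util.spec_from_file_location("cfg", "./configs/oxford_example.py")
cfg = importlib.util.module_from_spec(spec)
spec.loader.exec_module(cfg)
print([name for name in dir(cfg) if not name.startswith("_")])
```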

## Evaluation

```bash
CUDA_VISIBLE_DEVICES=0 python main.py ./configs/<config_file>.py --mode val --resume_from <ckpt_location>.pth
```
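If resuming fails, it can help to see what a checkpoint actually contains. A minimal sketch using standard PyTorch; the path is hypothetical, and the keys printed depend on how the checkpoint was saved:

```python
# Inspect a checkpoint before passing it to --resume_from.
# "checkpoints/crossloc3d_oxford.pth" is a hypothetical path; point it
# at a downloaded checkpoint file.
import torch

ckpt = torch.load("checkpoints/crossloc3d_oxford.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))   # commonly model weights, optimizer state, epoch
else:
    print(type(ckpt))
```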

## Checkpoints

| Name       | Dataset     | Config | Checkpoint |
| ---------- | ----------- | ------ | ---------- |
| CrossLoc3D | Oxford      | config | ckpt       |
| CrossLoc3D | CS-Campus3D | config | ckpt       |