LGU-SLAM: Learnable Gaussian Uncertainty Matching with Deformable Correlation Sampling for Deep Visual SLAM
Yucheng Huang, Luping Ji, Hudong Liu, Mao Ye
```
@misc{huang2024lguslamlearnablegaussianuncertainty,
  title={LGU-SLAM: Learnable Gaussian Uncertainty Matching with Deformable Correlation Sampling for Deep Visual SLAM},
  author={Yucheng Huang and Luping Ji and Hudong Liu and Mao Ye},
  year={2024},
  eprint={2410.23231},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2410.23231},
}
```
Initial Code Release: This repo currently provides a single GPU implementation of our monocular, stereo, and RGB-D SLAM systems. It currently contains demos, training, and evaluation scripts.
To run the code you will need:
- Inference: running the demos requires a GPU with at least 12 GB of memory.
- Training: training requires a GPU with at least 24 GB of memory.
```
git clone https://github.com/UESTC-nnLab/LGU-SLAM.git
```
- Create a new anaconda environment using the provided `.yaml` file. Use `environment_novis.yaml` if you do not want to use the visualization.
```
conda env create -f environment.yaml
pip install evo --upgrade --no-binary evo
pip install gdown
```
- Compile the lietorch CUDA extensions (takes about 6 minutes):
```
python setup.py install
```
- Compile the LGU CUDA extensions (learnable Gaussian uncertainty + deformable sampling + low-memory deformable sampling; takes about 8 minutes):
```
python offersample_LGS/setup.py install
```
- Download some sample videos using the provided script:
```
./tools/download_sample_data.sh
```
Run the demo on any of the samples (all demos can be run on a GPU with 12 GB of memory). While running, press the "s" key to increase the filtering threshold (= more points) and "a" to decrease it (= fewer points). To save the reconstruction with full-resolution depth maps, use the `--reconstruction_path` flag.
```
python demo.py --imagedir=data/abandonedfactory --calib=calib/tartan.txt --stride=2
python demo.py --imagedir=data/sfm_bench/rgb --calib=calib/eth.txt
python demo.py --imagedir=data/Barn --calib=calib/barn.txt --stride=1 --backend_nms=4
python demo.py --imagedir=data/mav0/cam0/data --calib=calib/euroc.txt --t0=150
python demo.py --imagedir=data/rgbd_dataset_freiburg3_cabinet/rgb --calib=calib/tum3.txt
```
Running on your own data: all you need is a calibration file. Calibration files are in the form
```
fx fy cx cy [k1 k2 p1 p2 [ k3 [ k4 k5 k6 ]]]
```
with the parameters in brackets optional.
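As an illustration only (this helper is not part of the repo; the name `parse_calib` is hypothetical), a calibration line in the format above can be split into an intrinsics matrix and a distortion vector like this:

```python
import numpy as np

def parse_calib(line):
    """Parse 'fx fy cx cy [k1 k2 p1 p2 [k3 [k4 k5 k6]]]' into (K, dist)."""
    vals = [float(x) for x in line.split()]
    if len(vals) not in (4, 8, 9, 12):
        raise ValueError("expected 4, 8, 9, or 12 calibration parameters")
    fx, fy, cx, cy = vals[:4]
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    dist = np.array(vals[4:])  # empty if no distortion terms are given
    return K, dist

# example: a pinhole calibration with no distortion terms
K, dist = parse_calib("320.0 320.0 320.0 240.0")
```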
Download the TartanAir dataset using the script `thirdparty/tartanair_tools/download_training.py` and put it in `datasets/TartanAir`.
```
./tools/validate_tartanair.sh --plot_curve          # monocular eval
./tools/validate_tartanair.sh --plot_curve --stereo # stereo eval
```
Download the EuRoC sequences (ASL format) and put them in `datasets/EuRoC`.
```
./tools/evaluate_euroc.sh          # monocular eval
./tools/evaluate_euroc.sh --stereo # stereo eval
```
Download the fr1 sequences from TUM-RGBD and put them in `datasets/TUM-RGBD`.
```
./tools/evaluate_tum.sh # monocular eval
```
Download the ETH3D dataset.
```
./tools/evaluate_eth3d.sh # RGB-D eval
```
First download the TartanAir dataset. The download script can be found in `thirdparty/tartanair_tools/download_training.py`. You will only need the `rgb` and `depth` data.
```
python download_training.py --rgb --depth
```
Note: On the first training run, covisibility is computed between all pairs of frames. This can take several hours, but the results are cached so that future training runs will start immediately.
```
python train.py --datapath=<path to tartanair>
```
Data from TartanAir was used to train our model. We additionally use evaluation tools from evo and tartanair_tools. We thank the authors of DROID-SLAM for inspiring our work.