Adaptive Fusion of LiDAR Height-sliced BEV and Vision for Place Recognition


LocFuse (Pytorch)

LocFuse overview

LocFuse is our dual-modal descriptor generation network for the place recognition task; its overall architecture is shown below.

(Figure: overview of the LocFuse architecture)

Our Demo

Successful matching candidates are marked in green, and failed ones are marked in red.

(Figure: matching demo)

Preparation

To begin, download the four demonstration sequences from the RobotCar dataset through this link. If you need all of the sequences, please refer to the benchmark established by PointNetVLAD.

After downloading the four sequences, place the "RobotCar_samples" folder in the root directory of the project.

Once the dataset files are in place, run the generate_training_tuples.py and generating_test_sets.py scripts in the "generating_queries" folder to obtain the .pickle files required for training and testing.

cd generating_queries
python generate_training_tuples.py
python generating_test_sets.py
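
As a quick sanity check, the generated .pickle files can be inspected with a short snippet like the one below; the file name here is only an assumption, so substitute whichever files the two scripts actually write.

import pickle

# Hypothetical file name; use the actual .pickle produced by the scripts above.
with open("training_queries.pickle", "rb") as f:
    queries = pickle.load(f)

# In PointNetVLAD-style benchmarks this is usually a dict of query entries,
# each holding the query file plus its positive and negative neighbours.
print(type(queries), len(queries))
print(next(iter(queries.items())))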

Train

During training we use multi-GPU training by default via nn.DataParallel. If you only have a single GPU available, you will need to modify the corresponding parts of the code (see the sketch after the command below). The trained parameters from our experiments are saved in the file "weights2/tmp_9_22_best/weight_best.pth". The training command is as follows:

python train_qua.py
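
If only one GPU is available, the nn.DataParallel wrapper can simply be skipped and the model moved to a single device. A minimal sketch of that change, with a placeholder module standing in for the network built in train_qua.py:

import torch
import torch.nn as nn

# Placeholder standing in for the LocFuse network constructed in train_qua.py.
model = nn.Linear(256, 256)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

if torch.cuda.device_count() > 1:
    # Multi-GPU path (the default in our experiments): replicate the model
    # across all visible GPUs.
    model = nn.DataParallel(model)

# Single-GPU (or CPU) path: just move the model to the chosen device.
model = model.to(device)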

Test

During testing, the parameters are imported from the file "weights2/tmp_9_22_best/weight_best.pth" by default. You can modify the import path of the parameters as needed (see the sketch after the command below). The testing command is as follows:

python test.py
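
Changing the checkpoint only requires pointing the load call at a different file. A minimal sketch of the weight-loading step, assuming the checkpoint is a plain PyTorch state dict; the model construction line is left as a placeholder, since the exact class used in test.py is not reproduced here:

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Default checkpoint shipped with this repository; change the path as needed.
checkpoint_path = "weights2/tmp_9_22_best/weight_best.pth"
state_dict = torch.load(checkpoint_path, map_location=device)

# Checkpoints saved from an nn.DataParallel model prefix every key with
# "module."; strip it before loading into an unwrapped model.
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

# model = ...                       # build the network exactly as test.py does
# model.load_state_dict(state_dict)
# model.to(device).eval()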
