This repository provides LocFuse, our dual-modal descriptor generation network for place recognition.
Successful matching candidates are marked in green, and failed ones are marked in red.
To get started, download four demonstration sequences from the RobotCar dataset via this link. If you need all the sequences, please refer to the benchmark established in PointNetVLAD.
After downloading the four sequences, place the "RobotCar_samples" folder in the root directory of the project.
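After extraction, the project should look roughly like this (the sequence folder names are placeholders; keep whatever names the download contains):

LocFuse/                  (project root)
├── RobotCar_samples/
│   ├── <sequence_1>/
│   ├── <sequence_2>/
│   ├── <sequence_3>/
│   └── <sequence_4>/
├── generating_queries/
├── train_qua.py
└── test.py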
Once the dataset files are in place, run the generate_training_tuples.py and generating_test_sets.py scripts in the "generating_queries" folder to obtain the .pickle files required for training and testing:
cd generating_queries
python generate_training_tuples.py
python generating_test_sets.py
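If you want to confirm the generated files before training, a minimal sanity check looks like the sketch below; the file name is a placeholder, so match it to an actual output of the scripts above.

import pickle

# Load one of the generated query files and inspect it; replace the
# placeholder name with a real output of the scripts above.
with open("training_queries.pickle", "rb") as f:
    queries = pickle.load(f)
print(type(queries), len(queries))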
During training, we use multi-GPU training by default via nn.DataParallel. If you only have a single GPU, you will need to modify the corresponding parts of the code accordingly; a sketch of the change follows the command below. The trained parameters from our experiments are saved in "weights2/tmp_9_22_best/weight_best.pth". The training command is as follows:
python train_qua.py
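A minimal sketch of the single- vs. multi-GPU switch; the variable "model" below is a stand-in module, where train_qua.py would build the LocFuse network:

import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the LocFuse network

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.device_count() > 1:
    # Our default: replicate the model across all visible GPUs.
    model = nn.DataParallel(model)
model = model.to(device)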
During testing, the parameters are loaded from "weights2/tmp_9_22_best/weight_best.pth" by default. You can modify the import path of the parameters as needed; a sketch follows the command below. The testing command is as follows:
python test.py
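To point test.py at a different checkpoint, something along these lines should work; the state_dict handling is an assumption based on how nn.DataParallel saves weights, so adapt it to the actual loading code in test.py:

import torch

# Path to the released weights; change this to load your own checkpoint.
weight_path = "weights2/tmp_9_22_best/weight_best.pth"
state = torch.load(weight_path, map_location="cpu")

# Checkpoints saved from an nn.DataParallel model prefix every key with
# "module."; strip the prefix when loading into a bare single-GPU model.
if all(k.startswith("module.") for k in state):
    state = {k[len("module."):]: v for k, v in state.items()}

# model.load_state_dict(state)  # "model" would be the LocFuse network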