How to visualize my registration results? #3
You can use the predicted pose: convert it into a homogeneous transform matrix and apply the transform to the source scan point cloud (the prediction maps the source scan to the target), then visualize the transformed scan and the target scan together in Open3D with different colors. A sample visualization script, "viz-pointcloud-reg", is provided under the tools folder; a minimal sketch of the same idea is below.
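A minimal sketch of that workflow, assuming the predicted pose is given as a quaternion plus translation (the actual output format of the checkpoint may differ) and that `source.pcd` / `target.pcd` are your scan files:

```python
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

# Hypothetical prediction format: quaternion (x, y, z, w) + translation.
# Replace with however your model actually emits the pose.
quat = np.array([0.0, 0.0, 0.0, 1.0])
trans = np.array([0.0, 0.0, 0.0])

# Build the 4x4 homogeneous transform from source to target.
T = np.eye(4)
T[:3, :3] = Rotation.from_quat(quat).as_matrix()
T[:3, 3] = trans

source = o3d.io.read_point_cloud("source.pcd")  # assumed file names
target = o3d.io.read_point_cloud("target.pcd")

# Apply the predicted transform to the source scan in place.
source.transform(T)

# Color the two clouds differently so the overlap is easy to judge.
source.paint_uniform_color([1.0, 0.706, 0.0])    # yellow: transformed source
target.paint_uniform_color([0.0, 0.651, 0.929])  # blue: target

o3d.visualization.draw_geometries([source, target])
```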
Train and eval are now merged into a single script: just follow the evaluation part and set "mode" to "test" once your training is done and a checkpoint is saved. Yes, the training loss is composed of three parts, with the rank-regularizer loss dominant, so 1.3 is relatively small. You should still plot the train and validation loss curves offline, e.g. by exporting the terminal logs into a txt file. Our original version was planned to integrate TensorBoard for better loss visualization, but due to a local environment conflict it is not enabled, sorry for that; just plot the loss offline with matplotlib, as in the sketch below.
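A small sketch of that offline plotting, assuming the exported log contains lines like `epoch 12 train_loss 1.234 val_loss 1.456` (this format is an assumption; adjust the regex to whatever your terminal output actually looks like):

```python
import re
import matplotlib.pyplot as plt

# Hypothetical log-line format; edit the pattern to match your logs.
pattern = re.compile(r"epoch\s+(\d+).*?train_loss\s+([\d.]+).*?val_loss\s+([\d.]+)")

epochs, train_losses, val_losses = [], [], []
with open("train_log.txt") as f:  # terminal output exported to a txt file
    for line in f:
        m = pattern.search(line)
        if m:
            epochs.append(int(m.group(1)))
            train_losses.append(float(m.group(2)))
            val_losses.append(float(m.group(3)))

plt.plot(epochs, train_losses, label="train")
plt.plot(epochs, val_losses, label="val")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curve.png")
```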
If you want to use the pre-trained model directly for registration on your own point cloud scan pairs, I recommend first checking whether your scenario is close to the 3DMatch indoor or KITTI outdoor scenes; if not, there may be a domain gap between the pre-training data and your own data. If your data is similar to a large-scene room dataset like ScanNet, then replace the PointNet descriptor-extraction encoder with the public PointTransformerV2 or V3 API and train only the following equi-layers and registration layers, so they learn the new equi-features from the large room-scan data. Otherwise you have to train the whole model together on your own data, from the descriptor extraction through the equi-GNN to the regression decoder, so the data volume also has to be considered to match the model capacity. A rough sketch of the partial fine-tuning setup is below.
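A rough PyTorch sketch of that partial fine-tuning, assuming the model exposes `encoder`, `equi_layers`, and `reg_head` submodules (these attribute names are hypothetical placeholders; substitute the actual module names from the repo):

```python
import torch

# Load the trained model; if the .pth file holds only a state_dict,
# rebuild the model first and call model.load_state_dict(...) instead.
model = torch.load("checkpoints/model_epoch_500.pth")

# Freeze (or swap out) the descriptor encoder; "encoder" is a
# hypothetical name for the repo's descriptor-extraction module.
for p in model.encoder.parameters():
    p.requires_grad = False

# Optimize only the equivariant layers and the registration head.
trainable = list(model.equi_layers.parameters()) + list(model.reg_head.parameters())
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```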
Great work! Thanks a lot!
Hello, author! I have run into difficulties. When I run python src/train_egnn.py, I get a model_epoch_500.pth file in checkpoints, but I do not know how to use it to register the two PCD point clouds I collected myself. Is it possible to train on the dataset provided by the project and then register and visualize my own pair of point clouds? I hope to get your answer, thank you very much.