
How to visualize my registration results? #3

Open
CossellCollege opened this issue Oct 16, 2024 · 5 comments

Comments

@CossellCollege

Hello, author! I have run into some difficulties. When I run python src/train_egnn.py, I get a model_epoch_500.pth file in checkpoints. But I do not know how to use the two PCD point clouds I collected for registration. Is it possible to train on the dataset provided by the project, and then register and visualize two point clouds of my own collection? I hope to get your answer, thank you very much.

@alexandor91
Owner

You can convert the predicted pose into a homogeneous transform matrix and apply the transform to the source scan point cloud, since the prediction maps the source scan to the target. Then visualize the transformed scan and the target scan together in Open3D with different colors. A sample visualization script "viz-pointcloud-reg" is provided under the tools folder.
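
For reference, a minimal sketch of that visualization step. The file paths and the R/t values below are placeholders (the real pose comes from the network output, and the repo's viz-pointcloud-reg script is the authoritative version):

```python
import numpy as np
import open3d as o3d

# Placeholder paths: point these at your own source/target scans.
source = o3d.io.read_point_cloud("source.pcd")
target = o3d.io.read_point_cloud("target.pcd")

# Placeholder pose: replace with the rotation/translation predicted
# by the network (source -> target).
R = np.eye(3)
t = np.zeros(3)

# Build the 4x4 homogeneous transform from the predicted pose.
T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

# Move the source scan into the target frame.
source.transform(T)

# Color the scans differently and render them together.
source.paint_uniform_color([1.0, 0.0, 0.0])  # transformed source: red
target.paint_uniform_color([0.0, 1.0, 0.0])  # target: green
o3d.visualization.draw_geometries([source, target])
```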

@CossellCollege
Author

CossellCollege commented Oct 16, 2024

At which step do I get the prediction pose? Also, when I train with the provided 3DMatch data, this is what happens:
[screenshot of training output showing a loss around 1.3]
I am not sure if I have trained well.

@alexandor91
Owner

Train and eval are now merged into a single script: just follow the evaluation part and set "mode" to "test" once your training is done and the checkpoint is saved. Yes, the training loss is actually composed of three parts, with the rank regularizer loss dominant, so 1.3 is relatively small. Still, you should plot the loss curve offline with both train and val loss, for example by exporting the terminal logs into a txt file. Our original version was planned to be integrated with TensorBoard for better loss visualization, but due to a local environment conflict it is not enabled, sorry for that; just plot the loss offline using matplotlib, as in the sketch below.
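
A rough sketch of that offline plotting, assuming the terminal output was redirected to train_log.txt and contains per-epoch lines with train and val loss (adjust the regex to the actual log format the script prints):

```python
import re
import matplotlib.pyplot as plt

# Assumed log line format -- adapt this regex to whatever the
# training script actually prints per epoch.
pattern = re.compile(r"train_loss[:\s]+([\d.]+).*?val_loss[:\s]+([\d.]+)")

train_loss, val_loss = [], []
with open("train_log.txt") as f:  # exported terminal output
    for line in f:
        m = pattern.search(line)
        if m:
            train_loss.append(float(m.group(1)))
            val_loss.append(float(m.group(2)))

plt.plot(train_loss, label="train")
plt.plot(val_loss, label="val")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss_curve.png")
```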

@alexandor91
Owner

If you want to use the pre-trained model directly for registration on your own point cloud scan pairs, I recommend first checking your point cloud scenario: is it close to the 3DMatch indoor or KITTI outdoor scenes? If not, there may be a data gap between the pre-trained weights and your own data. If your data is similar to a large-scene room dataset like ScanNet, then replace the PointNet descriptor-extraction encoder with the public PointTransformerV2 or V3 API, and train only the following equi-layers and registration layers so they learn the new equi-features from the large room-scan data. Otherwise you have to train the whole model, from descriptor extraction through the equi-GNN to the regression decoder, on your own data; in that case the data volume has to be considered as well to match the model capacity. The partial fine-tuning setup would look like the sketch below.
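
A hypothetical sketch of that partial fine-tuning pattern in PyTorch; the module names here (encoder, equi_gnn, decoder) are illustrative stand-ins, not the actual names in this repo:

```python
import torch
import torch.nn as nn

# Toy stand-in for the real model; it only illustrates the pattern of
# freezing the descriptor encoder and training the later layers.
class RegModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(3, 64)    # descriptor extraction (frozen)
        self.equi_gnn = nn.Linear(64, 64)  # equivariant layers (trained)
        self.decoder = nn.Linear(64, 7)    # pose regression head (trained)

    def forward(self, x):
        return self.decoder(self.equi_gnn(self.encoder(x)))

model = RegModel()

# Freeze the descriptor encoder so only the equi-layers and the
# decoder receive gradient updates.
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```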

@CossellCollege
Author

Great work! Thanks a lot!
