
When orientation_method=pca, transformation_matrix can be incorrect #3417

Closed
ChenYutongTHU opened this issue Sep 8, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@ChenYutongTHU

Hi NeRFStudio's contributors,

Thanks for your work!
I notice that when setting ColmapDataParserConfig.orientation_method=pca, a misalignment between initial pointcloud and camera pose can happen due to the incorrect transformation_matrix

In my example, I rendered the COLMAP-initialized point cloud at step 0 and found that it was misaligned with the ground-truth image; specifically, the rendering is vertically flipped.

[Screenshots from the original issue: the step-0 rendering and the corresponding ground-truth image]

I found that the issue relates to these lines:

    oriented_poses = transform @ poses
    if oriented_poses.mean(dim=0)[2, 1] < 0:
        oriented_poses[:, 1:3] = -1 * oriented_poses[:, 1:3]

The check 'oriented_poses.mean(dim=0)[2, 1] < 0' tests how the cameras' y-axis (up direction) projects onto the world's z-axis: the up direction in the images should align with the world's +z axis. If it does not, the code negates the y and z axes in the cameras' coordinates. However, this negation produces vertically flipped renderings, misaligned with the label images, which are not flipped.

If I turn off this negation, the step-0 renderings become aligned again.
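To make the distinction concrete, here is a minimal NumPy sketch (a hypothetical illustration with a single 3x4 pose, not nerfstudio code) contrasting a camera-axis negation with a world-frame negation:

```python
import numpy as np

# Hypothetical 3x4 camera-to-world matrix [R | t], illustrating the two
# kinds of negation discussed above.
c2w = np.hstack([np.eye(3), np.array([[1.0], [2.0], [3.0]])])

# Negating columns 1:3 flips the camera's own y/z axes: the camera stays
# where it is (t unchanged), but what it sees is mirrored.
cam_flip = c2w.copy()
cam_flip[:, 1:3] *= -1

# Negating rows 1:3 is a world-frame flip, diag(1, -1, -1) @ c2w: the
# camera position changes too, so the point cloud must be flipped as well.
world_flip = c2w.copy()
world_flip[1:3, :] *= -1

print(cam_flip[:, 3])    # translation unchanged: [1. 2. 3.]
print(world_flip[:, 3])  # translation flipped:   [ 1. -2. -3.]
```

Only the world-frame version is a rigid change of coordinates that can be shared with the point cloud; the camera-axis version changes each camera's image.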

To address this issue, we need to apply the flip in world coordinates (negating the world's y and z axes) and fold it into the transform, instead of flipping each camera's own axes. For example:

        if oriented_poses.mean(dim=0)[2, 1] < 0:
            # oriented_poses[:, 1:3] = -1 * oriented_poses[:, 1:3]  # original negation
            # Flip the world's y and z axes, and propagate the flip to
            # `transform`, which is later applied to the input point cloud
            transform_plus = torch.eye(3)
            transform_plus[1, 1] = -1
            transform_plus[2, 2] = -1
            oriented_poses = transform_plus @ oriented_poses
            transform = transform_plus @ transform

This leads to much better final results, since the initial point cloud is now correct.
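The reason this works can be sketched in a few lines of NumPy (hypothetical shapes, not nerfstudio code): applying the same world-space flip S to both the camera pose and the point cloud leaves every point's camera-frame coordinates unchanged, so the renderings again match the ground truth.

```python
import numpy as np

rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # orthonormal camera basis
t = rng.normal(size=3)                         # camera position (world)
pts = rng.normal(size=(5, 3))                  # world-space points

S = np.diag([1.0, -1.0, -1.0])                 # flip world y and z axes

def world_to_cam(R, t, pts):
    # p_cam = R^T @ (p_world - t), applied row-wise
    return (pts - t) @ R

before = world_to_cam(R, t, pts)
# Flip the pose (rotation and translation) AND the point cloud together.
after = world_to_cam(S @ R, S @ t, pts @ S.T)
print(np.allclose(before, after))  # True
```

Flipping only the poses (what the buggy path effectively does, since the point cloud is transformed separately) breaks this invariance and yields the mirrored renderings shown above.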

Great thanks :)

@jb-ye
Collaborator

jb-ye commented Sep 10, 2024

Indeed, this seems like a bug.

Could the following change also do the same job on your dataset?

oriented_poses[1:3, :] = -1 * oriented_poses[1:3, :]

Can you send a PR to fix it? Thanks for your contribution.

@ChenYutongTHU
Author

> Indeed, this seems like a bug.
>
> Could the following change also do the same job on your dataset?
>
> oriented_poses[1:3, :] = -1 * oriented_poses[1:3, :]
>
> Can you send a PR to fix it? Thanks for your contribution.

Hi. In addition to transforming the C2W matrices, we also need to modify the transform matrix accordingly, since it is later used to transform the input point cloud:

            oriented_poses[1:3, :] = -1 * oriented_poses[1:3, :]
            transform[1:3, :] = -1 * transform[1:3, :]

Thanks!
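For reference, the two variants proposed in this thread agree: left-multiplying by diag(1, -1, -1) (the transform_plus version) is exactly the same operation as negating rows 1:3 in place (the slicing version). A quick NumPy check on an arbitrary 3x4 matrix:

```python
import numpy as np

S = np.diag([1.0, -1.0, -1.0])                 # world y/z flip
M = np.arange(12, dtype=float).reshape(3, 4)   # arbitrary 3x4 example

via_matmul = S @ M                             # transform_plus style
via_rows = M.copy()
via_rows[1:3, :] *= -1                         # row-slicing style
print(np.allclose(via_matmul, via_rows))  # True
```

So the choice between them is purely stylistic, as long as the same flip is applied to both the poses and the transform.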

@jb-ye jb-ye added the bug Something isn't working label Sep 24, 2024
jb-ye pushed a commit to jb-ye/nerfstudio that referenced this issue Sep 24, 2024
jb-ye added a commit to jb-ye/nerfstudio that referenced this issue Sep 24, 2024
@jb-ye jb-ye mentioned this issue Sep 24, 2024