
The depth maps exported by Nerfstudio are usually color images. Is it possible to export the raw depth images? #3428

Open
Lizhinwafu opened this issue Sep 15, 2024 · 8 comments

Comments

@Lizhinwafu

The depth maps exported by Nerfstudio are usually false color images. Is it possible to export the raw depth images?

@Tavish9

Tavish9 commented Sep 15, 2024


Hi, @Lizhinwafu, Nerfstudio applies a colormap to the image after rendering. For more details, please refer to utils/colormaps.py. If you would like to save the raw depth as a .npy file, you will need to manually modify the corresponding code.
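For example, intercepting the depth before the colormap is applied and dumping it with NumPy might look like this (a minimal sketch; save_raw_depth is a hypothetical helper, not part of the Nerfstudio API):

```python
import numpy as np

def save_raw_depth(depth: np.ndarray, path: str) -> None:
    """Save a rendered depth map as a raw .npy file instead of a colormapped image.

    `depth` is expected as a float array of shape (h, w, 1); for a torch
    tensor, convert first with depth.detach().cpu().numpy().
    """
    np.save(path, np.squeeze(depth, axis=-1).astype(np.float32))
```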

@Lizhinwafu

I want to save the raw depth map as **.jpg. Which code do I need to modify?

@Tavish9

Tavish9 commented Sep 18, 2024


By “raw depth image,” are you referring to saving the depth map in its unprocessed form? If you’re looking to save the depth map as a grayscale image, you can simply set ColormapOptions.colormap = "gray". On the other hand, if you’re aiming to save the raw depth data as a .npy file, you’ll need to modify the code at this block.
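One caveat worth noting: JPEG is 8-bit and lossy, so a .jpg cannot store raw depth values faithfully. A common workaround (a sketch, not Nerfstudio code; save_depth_png16 and the max_depth parameter are assumptions) is to scale the float depth into a 16-bit PNG instead:

```python
import numpy as np
from PIL import Image

def save_depth_png16(depth: np.ndarray, path: str, max_depth: float) -> None:
    # JPEG is 8-bit and lossy, so raw depth does not survive a .jpg round trip.
    # Scaling into a 16-bit PNG keeps far more precision and is still an image.
    d = np.squeeze(depth, axis=-1)                    # (h, w, 1) -> (h, w)
    d16 = np.clip(d / max_depth, 0.0, 1.0) * 65535.0  # map [0, max_depth] -> [0, 65535]
    Image.fromarray(d16.astype(np.uint16)).save(path)
```

Dividing by a fixed max_depth (rather than the per-image maximum) keeps the scale consistent across views, which matters later if the depths are fused into one point cloud.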

@Lizhinwafu

Thanks. With a depth camera we can get RGB and depth directly. My goal is to generate depth in the same form as the camera produces. Pseudo-color images are only useful for visualization. I want to combine the rendered depth map with RGB to generate the point cloud myself.

@Tavish9

Tavish9 commented Sep 18, 2024


What are the data type and shape of the depth captured by your camera? The depth generated by NeRF is of type float with shape (h, w, 1). After applying a colormap to the raw depth, the shape changes to (h, w, 3).
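A minimal NumPy illustration of the shapes involved (the "gray" colormap here just repeats the normalized channel; this is a sketch, not the Nerfstudio implementation):

```python
import numpy as np

depth = np.random.rand(4, 6, 1).astype(np.float32)       # raw depth: float, (h, w, 1)
normed = (depth - depth.min()) / (np.ptp(depth) + 1e-8)  # normalize to [0, 1]
gray_rgb = np.repeat(normed, 3, axis=-1)                 # "gray" colormap: (h, w, 3)
```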

@Lizhinwafu

I rendered RGB and depth, and generated point clouds from multiple views using the intrinsic and extrinsic parameters estimated by COLMAP. Why do the point clouds generated from multiple views not overlap with each other?

@Lizhinwafu

When using Nerfstudio for data preprocessing, it generates a transform.json file, and I have also obtained the RGB and depth images for each view, which I used to generate point clouds for each view. How can I use the transform.json file to register the point clouds from multiple views into a single unified point cloud?

@Tavish9

Tavish9 commented Oct 3, 2024


Hi, sorry for the late reply.

If you have already trained your NeRF model, you can generate the point cloud with:

ns-export pointcloud --load-config path/to/config.yml --output-dir exports/pcd/ --num-points 100000 --remove-outliers True --normal-method open3d --save-world-frame False

This command generates a single unified point cloud.
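If you would rather fuse the per-view clouds yourself, the camera poses in the generated JSON can be applied directly. A rough sketch (backproject and register_views are hypothetical helpers; it assumes shared intrinsics fl_x/fl_y/cx/cy at the top level and the OpenGL camera convention of the Nerfstudio data format):

```python
import json
import numpy as np

def backproject(depth: np.ndarray, fx, fy, cx, cy) -> np.ndarray:
    # Unproject an (h, w) depth map to camera-space points, assuming the
    # OpenGL convention used by the Nerfstudio data format (+x right, +y up,
    # camera looking down -z).
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = -(v - cy) / fy * depth
    z = -depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def register_views(transforms_path: str, depths: dict) -> np.ndarray:
    # `depths` maps each frame's file_path to its (h, w) depth array.
    # Assumes shared intrinsics (fl_x, fl_y, cx, cy) at the top level of
    # the transforms JSON, as Nerfstudio writes them.
    with open(transforms_path) as f:
        meta = json.load(f)
    fx, fy, cx, cy = meta["fl_x"], meta["fl_y"], meta["cx"], meta["cy"]
    clouds = []
    for frame in meta["frames"]:
        depth = depths.get(frame["file_path"])
        if depth is None:
            continue
        c2w = np.array(frame["transform_matrix"])  # 4x4 camera-to-world
        pts = backproject(depth, fx, fy, cx, cy)
        pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        clouds.append((c2w @ pts_h.T).T[:, :3])    # into the shared world frame
    return np.concatenate(clouds, axis=0)
```

If the per-view clouds still fail to overlap, a common cause is a scale or coordinate-convention mismatch between the rendered depth and the poses, so checking both conventions is worthwhile before debugging further.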
