Yes, it is very much possible. You could try our direct inference using extrinsics and intrinsics provided by DUSt3R (or another foundation model). Please look through our PyTorch dataset file nerds360_ae.py to make the required changes. You can save the DUSt3R-provided extrinsics and intrinsics in the same format as our NERDS360 dataset, i.e. a pose.json file, and read the poses with our read_poses function. This is probably the easiest way to swap out the GT poses with DUSt3R-provided poses.
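For reference, here is a minimal sketch of dumping DUSt3R-recovered poses to a pose.json-style file. The exact schema that NERDS360's read_poses expects is not shown in this thread, so the key names ("transform", "intrinsics") are placeholders; adjust them to whatever nerds360_ae.py actually parses.

```python
# Hypothetical sketch: write DUSt3R-recovered poses/intrinsics to a pose.json-style file.
# The real NERDS360 pose.json key names are not given here -- "transform" and
# "intrinsics" are placeholders to be matched against read_poses in nerds360_ae.py.
import json
import numpy as np

def save_poses(img_names, c2w_mats, intrinsics, out_path="pose.json"):
    """img_names: image filenames, c2w_mats: 4x4 camera-to-world matrices,
    intrinsics: 3x3 pinhole K matrices."""
    frames = {}
    for name, c2w, K in zip(img_names, c2w_mats, intrinsics):
        frames[name] = {
            "transform": np.asarray(c2w).tolist(),   # camera-to-world pose
            "intrinsics": np.asarray(K).tolist(),    # 3x3 intrinsic matrix
        }
    with open(out_path, "w") as f:
        json.dump(frames, f, indent=2)
```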
A quick pointer: DUSt3R, to my understanding, saves poses in the OpenCV convention; please convert them to the OpenGL/NeRF convention before using them with our codebase. After you run DUSt3R on the NERDS360 dataset, you can run our visualization script to make sure the poses look good before running training or inference. Hope it helps!
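The convention flip itself is standard: OpenCV cameras look down +z with +y pointing down, while OpenGL/NeRF cameras look down -z with +y pointing up, so the y and z camera axes are negated. A minimal sketch, assuming 4x4 camera-to-world matrices (invert first if your poses are world-to-camera):

```python
# Minimal sketch of the OpenCV -> OpenGL/NeRF camera-convention conversion.
import numpy as np

# Negate the camera y and z axes (right-multiplying flips the axes of the
# camera frame without touching the camera position).
CV_TO_GL = np.diag([1.0, -1.0, -1.0, 1.0])

def opencv_to_opengl(c2w_cv: np.ndarray) -> np.ndarray:
    """Convert a 4x4 camera-to-world matrix from OpenCV to OpenGL/NeRF convention."""
    return c2w_cv @ CV_TO_GL
```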
Yes, it is very much possible. We have released the 10 scenes with COLMAP poses and show an example overfitting reconstruction here. Please feel free to look into those links and let us know if you run into any issues. For generalizable reconstruction, centering and canonicalization of the COLMAP poses is also important.
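As a rough illustration of that centering step (the exact canonicalization used by the codebase is not spelled out in this thread, so treat this as a generic recenter-and-rescale sketch over 4x4 camera-to-world matrices):

```python
# Sketch: recenter camera-to-world poses at the origin and rescale them to a
# unit sphere. This is a generic normalization, not necessarily the exact
# canonicalization the NERDS360 pipeline applies.
import numpy as np

def center_and_scale(c2w_mats):
    c2w = np.stack([np.asarray(m, dtype=np.float64) for m in c2w_mats])  # (N, 4, 4)
    centers = c2w[:, :3, 3]
    centroid = centers.mean(axis=0)
    c2w[:, :3, 3] -= centroid                              # move camera centroid to the origin
    scale = np.linalg.norm(c2w[:, :3, 3], axis=1).max()
    c2w[:, :3, 3] /= max(scale, 1e-8)                      # fit cameras inside a unit sphere
    return c2w, centroid, scale
```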
Providing multiple images for the 360-degree view, but without any COLMAP data