Hello, I ran into some problems when running on a dataset I made myself, and I would like to ask for your advice.
Background:
I photographed a full loop of my laboratory (covering two rows of desks and chairs, the computers, the ceiling, and the floor), using horizontal sweeps, downward and upward shots, and orbiting shots, for a total of 411 images.
For these images, I used COLMAP to save database.db, ran feature extraction, feature matching, and sparse reconstruction, and finally used Export model to save the result under the "sparse/0/" folder.
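For reference, the equivalent COLMAP CLI steps look roughly like this (I ran them through the GUI, so treat this as a sketch; the data paths are assumptions matching my layout):

```python
# Rough sketch of the COLMAP pipeline above via the CLI.
# Paths (data/room/...) are assumptions; adjust to your layout.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

run(["colmap", "feature_extractor",
     "--database_path", "data/room/database.db",
     "--image_path", "data/room/images"])
run(["colmap", "exhaustive_matcher",
     "--database_path", "data/room/database.db"])
run(["colmap", "mapper",
     "--database_path", "data/room/database.db",
     "--image_path", "data/room/images",
     "--output_path", "data/room/sparse"])  # writes sparse/0/ with cameras, images, points3D
```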
I then used Fyusion/LLFF to convert the exported poses into the LLFF data format. Because the number of photos (411) did not match the number COLMAP actually registered (401), I hit the error `ERROR: the correct camera poses for current points cannot be accessed`. Following Fyusion/LLFF#60 (comment) fixed it: I obtained poses_bounds.npy and deleted the photos that COLMAP had not registered.
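To delete the unregistered photos, I compared the filenames on disk against the images recorded in the sparse model. A minimal sketch of that cleanup step, assuming LLFF's colmap_read_model.py is importable (paths again are my local layout):

```python
# Remove images that COLMAP failed to register, so the photo count
# matches the registered count. Assumes colmap_read_model.py from
# Fyusion/LLFF (llff/poses/) is on the Python path.
import os
from colmap_read_model import read_images_binary

images_dir = "data/room/images"
model_images = read_images_binary("data/room/sparse/0/images.bin")
registered = {im.name for im in model_images.values()}

for fname in sorted(os.listdir(images_dir)):
    if fname not in registered:
        print("unregistered, removing:", fname)
        os.remove(os.path.join(images_dir, fname))
```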
Problem:
When I run `python run.py --config configs/custom/room.py`, it fails with the following output:

```
Loading images from data/room/dense/images_2
Loaded image data (2880, 1620, 3, 401) [2880. 1620. 2119.14228993]
Loaded data/room/dense 8.725662744276601e-10 38.88514969557468
recentered (3, 5)
[[ 1.0000000e+00 7.9511068e-08 7.2999038e-09 -8.4269325e+01]
[-7.9511068e-08 1.0000000e+00 -1.7426238e-07 3.5367581e+02]
[-7.2999176e-09 1.7426238e-07 1.0000000e+00 -1.4172569e+02]]
/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/lib/load_llff.py:409: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at /opt/conda/conda-bld/pytorch_1634272204863/work/torch/csrc/utils/tensor_new.cpp:201.)
render_poses = torch.Tensor(render_poses)
Data:
(401, 3, 5) (401, 2880, 1620, 3) (401, 2)
HOLDOUT view is 214
Loaded llff (401, 2880, 1620, 3) torch.Size([120, 3, 5]) [2880. 1620. 2119.1423] data/room/dense
DEFINING BOUNDS
NEAR FAR 0.0 1.0
train: start
compute_bbox_by_cam_frustrm: start
/home/vcis6/anaconda3/envs/mega-nerf/lib/python3.9/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1634272204863/work/aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
compute_bbox_by_cam_frustrm: xyz_min tensor([nan, nan, nan])
compute_bbox_by_cam_frustrm: xyz_max tensor([nan, nan, nan])
compute_bbox_by_cam_frustrm: finish
train: skip coarse geometry searching
scene_rep_reconstruction (fine): train from scratch
scene_rep_reconstruction (fine): use multiplane images
dmpigo: world_size tensor([-9223372036854775808, -9223372036854775808, 256])
dmpigo: voxel_size_ratio 1.0
Traceback (most recent call last):
File "/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/run.py", line 630, in
train(args, cfg, data_dict)
File "/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/run.py", line 562, in train
scene_rep_reconstruction(
File "/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/run.py", line 328, in scene_rep_reconstruction
model, optimizer = create_new_model(cfg, cfg_model, cfg_train, xyz_min, xyz_max, stage, coarse_ckpt_path)
File "/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/run.py", line 266, in create_new_model
model = dmpigo.DirectMPIGO(
File "/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/lib/dmpigo.py", line 41, in init
self.density = grid.create_grid(
File "/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/lib/grid.py", line 29, in create_grid
return DenseGrid(**kwargs)
File "/home/vcis6/Userlist/Zouchen/LargeScaleNeRFPytorch/lib/grid.py", line 45, in init
self.grid = nn.Parameter(torch.zeros([1, channels, *world_size]))
RuntimeError: Trying to create tensor with negative dimension -9223372036854775808: [1, 1, -9223372036854775808, -9223372036854775808, 256]
```
(Before this, I had taken 38 photos from a single direction and, following exactly the same steps as above, was able to generate the model and the video without problems.)
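In case it helps with diagnosis: the near bound of ~8.7e-10 in the log above makes me suspect degenerate depth bounds in poses_bounds.npy, which could explain the NaN bounding box and the negative world_size. A quick sanity check I can run (the path is my local layout):

```python
# Sanity-check poses_bounds.npy. LLFF convention: shape (N, 17) =
# 15 pose entries (a 3x5 matrix) + per-image (near, far) bounds.
import numpy as np

pb = np.load("data/room/poses_bounds.npy")
poses = pb[:, :15].reshape(-1, 3, 5)
bounds = pb[:, 15:]  # columns: near, far

print("shape:", pb.shape)
print("any NaN:", np.isnan(pb).any())
print("near min/max:", bounds[:, 0].min(), bounds[:, 0].max())
print("far  min/max:", bounds[:, 1].min(), bounds[:, 1].max())
```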