Hi, first of all, thanks for sharing this great work!
I have a question about the Point-NeRF comparison results in the paper.
To my knowledge, Point-NeRF requires input images and builds its initial neural point cloud with MVSNet, so each point carries not only a 3D position but also an image-feature embedding of dimension F, giving an (N, 3+F) representation.
However, your method takes only point clouds of shape (N, 3). So my question is: how did you build the neural point clouds for Point-NeRF? Did you additionally use 2D images as input?
Thanks in advance.
Point-NeRF can be trained using only point positions (xyz), but both training and inference performance are much worse than with "xyz + F".
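To make the distinction concrete, here is a minimal PyTorch sketch (not from either codebase) of the two neural-point-cloud variants being discussed: an xyz-only representation versus one that also stores a learnable F-dimensional feature per point. The random feature initialization and the `feat_dim=32` choice below are illustrative assumptions; in the actual Point-NeRF pipeline the per-point features come from MVSNet image features, not random init.

```python
import torch
import torch.nn as nn

class NeuralPointCloud(nn.Module):
    """Sketch of a Point-NeRF-style neural point cloud.

    Each point stores a 3D position; optionally it also carries a
    learnable per-point feature vector (the "F" channels). In the
    real Point-NeRF these features would be initialized from MVSNet
    image features rather than at random as done here.
    """

    def __init__(self, xyz: torch.Tensor, feat_dim: int = 0):
        super().__init__()
        # xyz: (N, 3) point positions, e.g. from a raw point cloud
        self.register_buffer("xyz", xyz)
        if feat_dim > 0:
            # (N, F) per-point embeddings; random init stands in for
            # the image-derived features used in the paper
            self.features = nn.Parameter(
                torch.randn(xyz.shape[0], feat_dim) * 0.01
            )
        else:
            self.features = None

    def forward(self) -> torch.Tensor:
        # Returns (N, 3) for the xyz-only variant, (N, 3 + F) otherwise
        if self.features is None:
            return self.xyz
        return torch.cat([self.xyz, self.features], dim=-1)

pts = torch.rand(1024, 3)

# xyz-only variant: all a pure point-cloud input provides
xyz_only = NeuralPointCloud(pts)                  # forward() -> (1024, 3)

# xyz + F variant: what MVSNet-initialized Point-NeRF uses (F = 32 here)
with_feats = NeuralPointCloud(pts, feat_dim=32)   # forward() -> (1024, 35)
```

The xyz-only variant matches the (N, 3) input setting asked about above; the performance gap in the reply comes from dropping the image-derived F channels.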