
Pursue higher resolution for mesh #36

Open
Sapium59 opened this issue Oct 14, 2022 · 8 comments

@Sapium59

I am happy to see the pre-trained model released! Thank you!

I have now run inference a few times with the car-category checkpoint. As far as I can observe, the triangular meshes all have roughly the same resolution of about 5 cm, or 5% of the model's total length. Is there a way to produce more detailed meshes?

Also, I assume your mesh reconstruction is based on the SDF-driven DMTet algorithm. Do you expect degraded behavior due to SDF inaccuracy when applying a finer resolution?

P.S. My inference took 20 minutes to produce 100 textured meshes, so computational complexity does not really affect me.

@SteveJunGao
Collaborator

SteveJunGao commented Oct 14, 2022

Hi @zhu-yuefeng,

Re detailed mesh: what is your tet_res when you run the inference? You can increase it to 100 or even higher to get a more detailed mesh (you can follow this readme to generate a tet grid at other resolutions). (EDIT: I found a bug in the code for this and just pushed a fix, so please pull again if you want to try it.)
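For anyone trying this, a minimal sketch for sanity-checking a freshly generated tet grid before pointing inference at it, assuming only that data/generate_tets.py writes a NumPy .npz archive (the file name below is a placeholder, not the repo's actual output path):

```python
import numpy as np

# Placeholder path: point this at whatever data/generate_tets.py actually wrote.
tet_file = "data/tets/tets_res100.npz"

with np.load(tet_file) as data:
    # Report whichever arrays the archive contains (key names vary by script version).
    for name in data.files:
        print(f"{name}: shape={data[name].shape}, dtype={data[name].dtype}")
```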

Re inaccurate SDF: it could be a problem if training and inference use different resolutions, but the effect shouldn't be large, since the network that predicts the SDF is a continuous function of position. In our model we train GET3D at a resolution of 90; you can train at a higher resolution and run inference at a higher resolution as well.
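A minimal sketch of why the query resolution is decoupled from training, assuming only that the SDF is predicted by a coordinate-based network (this toy MLP is not GET3D's actual architecture): the same weights can be evaluated at the vertices of a coarse or a fine tet grid.

```python
import torch
import torch.nn as nn

# Toy coordinate-based SDF predictor: maps 3D positions to SDF values,
# so it can be queried at the vertices of a tet grid of any resolution.
sdf_net = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

coarse_vertices = torch.rand(8_000, 3)    # stand-in for a res-90 grid's vertices
fine_vertices = torch.rand(200_000, 3)    # stand-in for a much finer grid

with torch.no_grad():
    sdf_coarse = sdf_net(coarse_vertices)  # same weights, sparser queries
    sdf_fine = sdf_net(fine_vertices)      # same weights, denser queries
print(sdf_coarse.shape, sdf_fine.shape)
```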

Re computational complexity: there are a couple of contributing factors.
1. At the beginning of this script, some CUDA extensions need to be compiled, which can take a while (e.g. upfirdn2d, bias_act, filtered_lrelu), along with some ops inside nvdiffrast. This compilation only happens on the first run of the script.
2. We do observe slowness in xatlas; the most time-consuming part is this line, and we're working on speeding it up so that it can be much faster.

Note that in our paper, when we report inference time, we exclude these two parts, as they are not the actual processing time of GET3D.
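A standalone sketch for confirming where the time goes, using the xatlas Python binding directly rather than the repo's code (the toy mesh below is just a stand-in for a generated one):

```python
import time
import numpy as np
import xatlas  # standalone Python binding for xatlas UV unwrapping

# Toy mesh (two triangles) standing in for a generated mesh.
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
faces = np.array([[0, 1, 2], [0, 2, 3]], dtype=np.uint32)

start = time.perf_counter()
vmapping, indices, uvs = xatlas.parametrize(vertices, faces)
print(f"UV unwrapping took {time.perf_counter() - start:.3f}s for {len(faces)} faces")
```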

@SteveJunGao
Collaborator

Hi @zhu-yuefeng, any updates on this issue? Do you have further questions, or is it good to close this?

@Sapium59
Author

Hi, I was on holiday this weekend; now I am back to the code.

In all previous experiments I used the default tet_res=90. After changing it to 100 I see little improvement in resolution. I am going to do some training work, which may take a few days before I can report back.

Your explanation and guidance about the SDF are quite clear and helpful.

The time-cost analysis also makes sense. Actually, I was just describing my setup and was not eager to optimize this part, since its speed is acceptable to me; it seems I didn't make myself clear.

@Sapium59
Author

Sapium59 commented Nov 4, 2022

Update: I generated a very high-resolution *.npz tet file, but my GPU memory is not large enough to support it.

Details: I installed quartet and then followed your instructions in data/generate_tets.py to produce an npz file with a resolution of 1000. I expected this resolution to yield very detailed meshes at inference, since your pretrained counterparts use resolutions in the 64 ~ 100 range.
However, even with a debug-level inference setting (--batch=4), CUDA raises out-of-memory errors. As far as I can tell on an NVIDIA V100 with 16 GB of GPU memory, the default inference with --batch=4 and --tet_res=90 already uses 11 GB. Can we say that, without a better machine, the resolution limit is only a little above 100?
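A rough back-of-envelope on why res=1000 is so much heavier, assuming the number of cells in a uniform tet grid grows roughly with the cube of the resolution (an illustration only, not an exact memory model):

```python
# Assumption: cell count in a uniform tet grid grows roughly as res**3.
baseline_res = 90
for res in (64, 90, 100, 300, 1000):
    factor = (res / baseline_res) ** 3
    print(f"res={res:5d}: ~{factor:8.1f}x the cells of res={baseline_res}")
# res=1000 works out to roughly 1370x the cells of res=90.
```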

BTW: data/generate_tets.py seems to use different naming conventions on line 23 and line 44. Do these lines share the same relationship between res and frac?

Thank you!

@Sapium59
Author

Sapium59 commented Nov 4, 2022

Another question: why do we use quartet here?
I read about quartet; it claims to be a method for approximating a given mesh with a uniform tetrahedral mesh. Judging from example objs such as dragon.obj, the reconstructed mesh does not retain as much geometric detail as the original. So why do we need quartet?
A related question: in generate_tets.py you use cube.obj to extract the desired npz file. Is such a simple mesh enough?

@Sapium59
Author

Sapium59 commented Nov 4, 2022

An experiment with --tet_res=64 also peaked at 11 GB of memory. I am confused about the relationship between memory usage and resolution.

@Bathsheba

Bathsheba commented Feb 7, 2023

I am seeing interesting results training at tet_res 100 with an 8 GB card. I lowered some other options to make it fit. (I secretly believe that latent_dim 512 is not needed for my project.)

@p4vv37

p4vv37 commented Mar 9, 2023

With an A4000 I got to almost 1,000,000 polys for the things I was generating. That's a lot :)
The latent dim did not change the memory footprint much; the most significant impact definitely comes from tet_res. Lowering the GPU batch size also helps (by default it is 4; it is not exposed as a parameter, you need to do this). I lowered it to 1 for generation to be able to do this; see the sketch below.
--tet_res 300 was the largest value I used.
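A hypothetical sketch of the batch-size trick, with a stand-in generator module rather than GET3D's actual one: push latents through one at a time instead of four at a time so peak GPU memory stays low.

```python
import torch
import torch.nn as nn

# Stand-in generator (not GET3D's): the point is only the loop structure.
class ToyGenerator(nn.Module):
    def __init__(self, latent_dim: int = 512):
        super().__init__()
        self.net = nn.Linear(latent_dim, 1024)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

generator = ToyGenerator()
latents = torch.randn(16, 512)  # 512 matches the latent_dim discussed above

outputs = []
with torch.no_grad():
    for z in latents.split(1, dim=0):   # effective GPU batch size of 1
        outputs.append(generator(z))    # peak memory scales with batch size
```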
