Why is meshopt_simplifyWithAttributes (silently) limited to 16 attributes? #628

Closed · aras-p opened this issue Oct 31, 2023 · 4 comments · Fixed by #629

aras-p commented Oct 31, 2023

meshopt_simplifyWithAttributes seems to be limited to 16 attributes (i.e. 16 "floats") maximum. It will assert in a debug build, but corrupt the stack in a release build. This does not seem to be documented in the header.

But I'm also wondering why the limit is exactly 16 attributes. For a use case of, say, (float3 normal, float4 tangent, float2 uv, float4 blendWeights, float4 blendIndices), that is already 17 attributes. Could the limit be bumped up? There is a kMaxAttributes constant, which sounds like an easy change; I'm just not sure whether raising it has negative effects.
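For illustration, here's roughly the layout I mean (a sketch only - field names and weight values are placeholders, and the full call is elided; the relevant meshopt_simplifyWithAttributes parameters are noted in the comments):

```cpp
// Sketch: per-vertex attribute record for the use case above, 3+4+2+4+4 = 17 floats.
struct VertexAttributes
{
    float normal[3];
    float tangent[4];
    float uv[2];
    float blendWeights[4];
    float blendIndices[4]; // bone indices stored as floats
};

// One weight per attribute float; these values are placeholders, not tuned.
const float attribute_weights[17] = {
    0.5f, 0.5f, 0.5f,       // normal
    0.1f, 0.1f, 0.1f, 0.1f, // tangent
    0.5f, 0.5f,             // uv
    0.1f, 0.1f, 0.1f, 0.1f, // blendWeights
    0.1f, 0.1f, 0.1f, 0.1f, // blendIndices
};

// With attrs being the application's array of VertexAttributes, the relevant
// meshopt_simplifyWithAttributes arguments would be roughly:
//   vertex_attributes        = &attrs[0].normal[0]
//   vertex_attributes_stride = sizeof(VertexAttributes)
//   attribute_weights        = attribute_weights
//   attribute_count          = 17  // > kMaxAttributes (16), hence this issue
```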

aras-p added the bug label Oct 31, 2023

zeux (Owner) commented Oct 31, 2023

Yeah, the limit should be documented; I'll fix that.

If you'd like to change this locally / in your copy for testing, that seems completely fine. In the current state of the code this only marginally impacts one function; in theory that function could even be rewritten to not rely on stack space, and the limit could be lifted altogether. However, this algorithm is in a state of much-needed further development (which, hopefully foreshadowing, I will be able to dedicate some time to next year), so I'm reluctant to change this, or the limit, at the moment. The limit is 16 because it's about double what I tested :) (the UV/normal/color combination)
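For reference, the local change is tiny; a sketch, assuming the constant is declared roughly like this in the simplifier source (the exact declaration and location may differ in your copy):

```cpp
// In meshoptimizer's simplifier source (search for kMaxAttributes); raising this
// only grows a few stack buffers proportionally.
const size_t kMaxAttributes = 32; // default is 16; bumped locally for experimentation
```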

In general, right now, due to how the metrics for the algorithm work (which I need to somehow fully rework - they're based on state-of-the-art research from 1999, nothing new has come out since, but they are not actually good enough for complex reasons this margin can't contain), I think it's unreasonable to just feed your entire vertex into the algorithm and hope it magically figures things out.

In my testing I found that it becomes harder and harder to tune the weights and balance the impact of attributes vs. geometry (something else I hope to find a better solution for) as you increase the number or complexity of attributes. Even with UV + normal + color it was not obvious what the weights should be or how to guide the algorithm toward making obviously better decisions - again, partly because the metrics aren't final.

Additionally, the algorithm works best when you give it decorrelated and interpolatable data. Someone even tried changing the normal representation in the past (when integrating an earlier version into Godot) and got better results; so I'm very skeptical that, for example, giving the algorithm both normal and tangent data makes it better at all - just the normal, plus maybe a tangent flip with a large weight, would likely suffice.

Giving the algorithm bone indices and weights is going to backfire. The indices aren't interpolatable or stable and don't even have a meaningful distance metric. Nearby vertices will have a different index set, sometimes in a benign fashion - e.g. switching from ABCD to ABDE - or worse, from ABCD to BACE, and this will not end well. I suspect that in terms of representative power, the blend weight/index representation above is about equal to, maybe worse than, just two floats - the single dominant bone index and its weight. That would still be unstable etc., but a little less so. Ideally you don't even want this - you probably want to skin the mesh to a different pose and feed the resulting vertex positions as an attribute...
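To sketch what I mean by that last suggestion (everything here is hypothetical - the helper names are made up, and it assumes 4 influences per vertex and column-major bone matrices):

```cpp
#include <cstddef>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; }; // column-major bone matrix for the chosen pose

static Vec3 transformPoint(const Mat4& t, const Vec3& p)
{
    return {
        t.m[0] * p.x + t.m[4] * p.y + t.m[8] * p.z + t.m[12],
        t.m[1] * p.x + t.m[5] * p.y + t.m[9] * p.z + t.m[13],
        t.m[2] * p.x + t.m[6] * p.y + t.m[10] * p.z + t.m[14]
    };
}

// Writes one skinned position (3 floats) per vertex into out_attr; that buffer
// can then be fed to the simplifier as a 3-float attribute instead of the raw
// blend weights/indices.
void appendSkinnedPositions(const Vec3* positions, const float* blend_weights,
                            const unsigned int* blend_indices, size_t vertex_count,
                            const Mat4* pose, float* out_attr)
{
    for (size_t i = 0; i < vertex_count; ++i)
    {
        Vec3 acc = {0, 0, 0};
        for (int j = 0; j < 4; ++j) // assumes 4 influences per vertex
        {
            float w = blend_weights[i * 4 + j];
            Vec3 p = transformPoint(pose[blend_indices[i * 4 + j]], positions[i]);
            acc.x += w * p.x;
            acc.y += w * p.y;
            acc.z += w * p.z;
        }
        out_attr[i * 3 + 0] = acc.x;
        out_attr[i * 3 + 1] = acc.y;
        out_attr[i * 3 + 2] = acc.z;
    }
}
```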

So basically right now I feel like it's really not practical to just feed everything as is and hope for the best.

Of course, if the metric and the implementation were more final, the limit would be fairly inconsequential, but I'm highlighting the fact that if right now you think you're bumping against the limit, you're probably actually not :) Let me know if that makes sense.

zeux (Owner) commented Oct 31, 2023

Oh, but again, I welcome experimenting with higher attribute counts and higher limits. If there are results that say "oh, actually it's completely fine, here's the attribute set and weights, and see, it works great", that would be great motivation to bump the limit and maybe even rework the code to not carry a stack buffer proportional to the limit; it's just not obvious to me right now that it's necessary.

Even if there are results showing that something like 12 attributes is meaningful, bumping up the limit would be warranted per the "double the tested limit" rule :D

zeux (Owner) commented Oct 31, 2023

All that said, there's probably no real significance to the number 16. Let me bump it to 24, which per the reasoning above should really be sufficient… after sleeping on this, I think for now let's just keep this as is; it's easy to change locally, and this will need more experimentation. It should be very easy for the caller to just pass 16 and ignore the following attributes while this limitation is in place.
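Concretely, that workaround is just clamping what you pass to the simplifier; a sketch with made-up names:

```cpp
// Keep the full attribute layout and stride, but only let the simplifier
// weight the first 16 floats per vertex while the limit is in place.
const size_t kSimplifierAttributeLimit = 16;

size_t attribute_count = 17; // e.g. the normal/tangent/uv/blend layout from above
if (attribute_count > kSimplifierAttributeLimit)
    attribute_count = kSimplifierAttributeLimit; // trailing attributes are simply ignored

// vertex_attributes_stride still covers the full vertex record; only attribute_count
// (and the matching number of attribute_weights entries) shrinks.
```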

zeux mentioned this issue Oct 31, 2023

aras-p (Author) commented Oct 31, 2023

Yeah, all good points! I was only puzzled initially, until I thought to build in debug and hit the assert. Thanks for the in-depth explanations and for updating the docs.

zeux closed this as completed in #629 on Nov 1, 2023