The underlying library supports a direct signed-distance algorithm (which I think is new) that also acts as a true mesh containment check, by implementing this paper. It seems to be more accurate for containment because it doesn't suffer from #3.
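For intuition, here's a minimal sketch of why a signed-distance query doubles as a containment check. The `signed_distance` callable is a hypothetical stand-in, not this library's actual API:

```python
import numpy as np

def contains(signed_distance, points):
    """Containment straight from signed distance: with the usual convention
    that distances are negative inside the surface, a point is contained
    iff its signed distance is below zero. No ray casting or parity
    counting needed."""
    dists = np.asarray([signed_distance(p) for p in points])
    return dists < 0.0
```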
For a simple unit cube mesh, it's slower than checking with 1 ray but faster than checking with 2. For the teapot it's much slower; table 1 in the paper suggests that the shape of the mesh, not just the number of polygons, has quite a large impact on how ray casting and this algorithm compare.
All this to say: a more robust algorithm is available for containment checks, at the cost of performance on real-world meshes. This could be left to the user: the correct algorithm (signed distance) would be preferred by default, but the caller could choose the quick and dirty route (ray casting) if they prefer; a rough sketch of what that choice could look like is below. The paper does note issues with using ray casts for containment when a ray hits an edge or vertex, which we could probably iron out and still keep the speed benefit.

What do you think, users - @schlegelp , @ceesem ?
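A rough sketch of that user-facing choice (every name here, `ContainmentStrategy`, `signed_distance`, `ray_cast_contains`, is a hypothetical stand-in, not the current API):

```python
from enum import Enum

import numpy as np


class ContainmentStrategy(Enum):
    SIGNED_DISTANCE = "signed_distance"  # robust, but slower on complex meshes
    RAY_CAST = "ray_cast"                # fast, but can misfire on edges/vertices


def points_in_volume(volume, points, strategy=ContainmentStrategy.SIGNED_DISTANCE):
    """Default to the correct-but-slower signed-distance check; let the caller
    opt into ray casting when speed matters more than edge-case accuracy."""
    points = np.asarray(points, dtype=float)
    if strategy is ContainmentStrategy.SIGNED_DISTANCE:
        return np.array([volume.signed_distance(p) < 0 for p in points])
    return np.array([volume.ray_cast_contains(p) for p in points])
```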
I don't have a strong opinion, but I would say that meshes automatically generated by the PCG are frequently pretty poor in terms of watertightness and quality, in exchange for other benefits. My intuition is that the ray-casting route (the majority-based one) is relatively robust to this, whereas smarter algorithms probably rest on assumptions about good-quality meshes.
Ah, yes, something else I'd noticed is that there are now some mesh validation routines inside parry. Using those seemed like a no-brainer, but it could be made configurable too. At the moment the ray-casting strategy checks that all rays hit a backface, but there is a PR for a majority consensus. This could all be configurable, as it's all more or less implemented.
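To make the difference between the two acceptance rules concrete, a sketch (assuming some `cast_ray` helper that reports whether a hit was a backface; none of these names are from the real code):

```python
def contains_all_backfaces(cast_ray, point, ray_dirs):
    """Current behaviour as described above: the point only counts as inside
    if every ray fired from it hits a backface first."""
    return all(cast_ray(point, d).hit_backface for d in ray_dirs)


def contains_majority(cast_ray, point, ray_dirs):
    """Majority-consensus variant: tolerate a few bad rays (e.g. one grazing
    an edge or escaping through a small hole) by voting instead of requiring
    unanimity."""
    votes = [cast_ray(point, d).hit_backface for d in ray_dirs]
    return sum(votes) > len(votes) / 2
```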
I think both the new mesh containment check and the validation method sound very interesting. I'd be keen to take them for a spin with some of our typical meshes and see what works better. That said, making all of this configurable and keeping the current behaviour as the default is probably the right way to go.