Vulkan: Objects close to the camera jitter a lot when far from the origin (even with `float=64`) #58516
This is an issue caused by precision loss, because your frame of reference is still incredibly far away. It is important to understand that although the engine is computing in double-precision space, you are truncating those computations back down to 32 bits of precision. As you get further and further away and the significant-digit count increases, the limited precision available in the mantissa diminishes, causing flicker. This is probably exaggerated further by the z-buffer, which in some engines is constrained to an even smaller range of values. In fact, most flickering I have seen has been due to precision issues plus z-buffer depth flicker.

How can you solve it? It should be quite easy, since you are now in 64 bits of precision. Having compiled the engine with 64-bit precision, you should have more than enough precision to perform camera-based rendering as you said, introducing only very minimal error. However, you don't even need that much: you should simply be able to get away with camera-origin-based rendering. Take the inverse of the camera's translation, pre-apply it to the scene's main (top-level) transformation, and remove the camera's translation in the render pipeline. After the final object transforms are computed, you get object coordinates from the CPU in double precision, in camera-translation space. When they are truncated down to 32 bits, since your reference (the camera, in this case) is now closer to the objects, the number of significant digits is reduced and you have more precision available in the mantissa.

The problem you need to solve is to reduce the number of significant digits in the positions of the objects closest to the camera, so that subsequent multiplies on the GPU can utilize as much of the mantissa as possible, reducing flicker near the camera.
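The precision-loss mechanics described above can be demonstrated in a few lines. This is a minimal sketch in plain Python (not engine code), using `struct` to emulate the float32 truncation that happens when a transform is sent to the GPU; the positions are hypothetical:

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip a Python double through IEEE-754 binary32,
    emulating the truncation applied when uploading to the GPU."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Hypothetical scene: object and camera both ~1,000,000 units from the
# origin, only 0.0005 units apart.
obj_x = 1_000_000.0005
cam_x = 1_000_000.0

# World space: truncate the absolute positions first. Near 1e6 a float32
# ulp is 0.0625, so the 0.0005 offset is completely swallowed.
sep_world = to_f32(obj_x) - to_f32(cam_x)   # 0.0: object and camera collapse together

# Camera-relative: subtract in double precision FIRST, then truncate.
# The small result keeps its full mantissa.
sep_rel = to_f32(obj_x - cam_x)             # ~0.0005: offset preserved

print(sep_world, sep_rel)
```

This is why subtracting the camera translation on the CPU, at double precision, removes the jitter: the truncation then happens on small numbers whose significant digits fit comfortably in a float32 mantissa.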
And actually, using a full camera-space system (including rotation) would be worse than simple translation, because you would go from a top-level orthonormal basis built on exact 1, 0, and -1 values to an imprecise orthonormal basis (because the camera is rotated, and rotations that do not lie exactly on precise axes introduce even greater error when inverted). In fact, that last point is more important than it seems. For example, if you export from Blender with the default FBX exporter, it performs the change of basis using rotation matrices based on sines and cosines of the rotation, instead of simply flipping signs and reordering the basis vectors. This wreaks havoc when imported into game engines, as points in space that were once perfectly square or orthogonal are now skewed by the precision error introduced in the change of basis.

TL;DR: Apply the inverse translation of the camera to the top-level matrix of the scene, and strip the translation from the camera when it is sent to the GPU.

The downside of this approach (and similarly, though to a lesser extent, of origin shifting) is that you must recompute the final transformation matrices OF EVERY OBJECT IN THE SCENE, EVERY FRAME. Since you must perform the change of reference at higher precision, before the transform is truncated and sent to the GPU, and your reference frame is constantly changing because the camera is moving, you cannot cache these transforms (in the classical sense). Even static objects will need new transforms every frame. This happens with origin shifting as well, except the shift and mass recomputation only occur when the origin shifts.

You can, however, optimize this a little. Since you are in double-precision space, you can keep all of your transforms in origin space and only apply the change of reference frame right before you send each transform to the GPU.
This essentially keeps the scene transforms cached in origin-0 space, and you change their frame of reference in double-precision space right before sending them to the GPU. However, it still has the overhead of a transform multiply for every object, every frame. Fortunately, since we only want to move the origin point (and not the actual basis vectors), we can cheat even further: we can take the final homogeneous transformation matrix sent to the GPU and perform a single subtraction on the translation portion of the matrix.
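The translation-column cheat above can be sketched as follows. This is a minimal illustration, not Godot's actual code: it assumes a row-major 4x4 layout with the translation in the last column (column-vector convention), and uses `struct` to emulate the float32 truncation done on upload.

```python
import struct

def to_f32(x: float) -> float:
    """Truncate a double to IEEE-754 binary32 precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

def rebase_and_truncate(world_xform, cam_origin):
    """world_xform: row-major 4x4 list of lists of doubles, translation in
    the last column (an assumption for this sketch). Subtract the camera
    origin from the translation in DOUBLE precision, then truncate every
    entry to float32 for the GPU. The basis vectors are left untouched."""
    out = [row[:] for row in world_xform]
    for i in range(3):
        out[i][3] -= cam_origin[i]          # the single subtraction per axis
    return [[to_f32(v) for v in row] for row in out]

# Cached origin-space transform of a static object ~1,000,000 units out:
m = [[1.0, 0.0, 0.0, 1_000_000.0005],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

gpu_m = rebase_and_truncate(m, (1_000_000.0, 0.0, 0.0))
print(gpu_m[0][3])   # ~0.0005: the small value survives float32 truncation
```

The cached double-precision matrix never changes for a static object; only this cheap per-frame rebase runs before upload.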
@marstaik Excellent writeup. Indeed, camera-origin-based rendering is all we need, so that the only thing we need to move around is the translation of objects; the rest of the basis, such as rotation, doesn't need to change. Do you think you would be able to implement this feature in Godot?
@aaronfranke I have not been using Godot for a while now, and do not have time to do this myself. But I think this should be relatively easy for anyone to implement.
As someone who hasn't used Godot yet (or, in fact, any game engine), here's what I am currently doing in DirectX. I have an object hierarchy which the camera itself is actually part of. The hierarchy stores transformations going both up and down the branches of the DAG. I start at the camera, which is usually at a leaf, and push and pop transformations as I traverse the DAG, rendering as I go. I'm not sure if this is what you are calling origin shifting, but everything starts with the camera. For things like chunks that don't need to be rotated, it doesn't have to do a full matrix multiply. This is all done in double precision. When I send a matrix to the GPU, it converts it to float at that time. So basically I go directly to view space without going to world space first. This all works except at very long distances, like thousands of kilometers. For that, I found I had to take the projection out of the matrix and do it as a post step on the GPU, which seemed to solve the issue. Not sure if any of this is relevant for Godot, but there you have it.
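That traversal idea can be sketched as follows, translation-only for brevity (all names here are hypothetical, and a real engine composes full rotation/scale transforms): accumulate the parent chain in doubles, go straight to camera-relative space, and truncate to float32 only at emit time.

```python
import struct

def to_f32(x):
    """Truncate a double to IEEE-754 binary32 precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

class Node:
    """Minimal scene-graph node: a local translation plus children.
    Translation-only for brevity."""
    def __init__(self, local, children=()):
        self.local = list(local)        # double-precision local translation
        self.children = list(children)

def emit_view_space(node, parent_world, cam_world, out):
    """Depth-first walk: accumulate the world translation in doubles, subtract
    the camera's world translation (still in doubles) to go straight to view
    space, and truncate to float32 only when emitting for the GPU."""
    world = [parent_world[i] + node.local[i] for i in range(3)]
    view = [world[i] - cam_world[i] for i in range(3)]   # double-precision subtraction
    out.append(tuple(to_f32(v) for v in view))
    for child in node.children:
        emit_view_space(child, world, cam_world, out)

# A chunk far from the origin, with a small prop 0.25 units from it;
# the camera sits at the chunk.
root = Node([0.0, 0.0, 0.0],
            [Node([1e6, 0.0, 0.0],
                  [Node([0.25, 0.0, 0.0])])])
cam = [1e6, 0.0, 0.0]
emitted = []
emit_view_space(root, [0.0, 0.0, 0.0], cam, emitted)
print(emitted)   # [(-1000000.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.25, 0.0, 0.0)]
```

The prop's 0.25-unit offset survives intact because the subtraction happened before truncation, exactly as in the camera-origin approach described earlier in the thread.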
If anyone would like to co-write this or guide me in how to accomplish it, I'd be delighted to learn. I've written a render engine and am familiar with what has to be done at an abstract level, but not too familiar with the code base below the level of the servers, nor with advanced C++.
I am not sure how relevant it is, but here is my implementation of velocity-adjusted origin rebase (it occurs a few times a second or less; it is not a continuous transform adjustment): https://github.com/roalyr/GDTLancer/blob/master/Scripts/Ship.gd (the code is not refactored; it's a proof of concept).
Naturally, with a larger threshold you will have more jitter; my workaround was to increase the camera-ship distance and the camera FOV, and at some point just hide the model itself. Sure enough, with double-precision spatial coordinates it would be much better: the origin rebase threshold could be larger and the jitter would be smaller. You can play with it by grabbing a fresh release of GDTLancer.
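For comparison with the camera-relative approach, a threshold-based origin rebase like the one in the linked script can be sketched as follows (plain Python rather than the actual GDScript; the names and the threshold value are hypothetical):

```python
REBASE_THRESHOLD = 10_000.0   # hypothetical: larger = rarer shifts but more jitter

def maybe_rebase(reference_pos, object_positions):
    """If the reference (ship or camera) has drifted past the threshold,
    shift the whole world so the reference returns to the origin.
    Positions are mutable [x, y, z] lists of doubles.
    Returns True if a rebase happened this call."""
    if max(abs(c) for c in reference_pos) < REBASE_THRESHOLD:
        return False                      # still close enough: do nothing
    shift = list(reference_pos)
    for pos in object_positions:
        for i in range(3):
            pos[i] -= shift[i]            # mass re-shift: every object, once per rebase
    for i in range(3):
        reference_pos[i] -= shift[i]      # reference lands back at the origin
    return True

ship = [25_000.0, 0.0, 3_000.0]
world = [[25_001.0, 0.0, 3_000.5], [0.0, 0.0, 0.0]]
rebased = maybe_rebase(ship, world)
print(rebased, ship, world[0])   # True [0.0, 0.0, 0.0] [1.0, 0.0, 0.5]
```

Unlike per-frame camera-relative rendering, the mass re-shift here only runs when the threshold is crossed, which is why jitter grows as the reference drifts between rebases.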
It is also worth mentioning that it is not just rendering that suffers from precision loss (does it even?), but object positioning and physics too, so I can't really tell whether doing this in the rendering pipeline only is the right approach.
@roalyr did you try with doubles, though? As I mentioned in my first post, it works fine at distances like 1000 km away (note: you might want immensely more than that, but I'm not sure Godot will support that natively as easily).
There is already the assumption that you are using double precision on the CPU side; you would have to have an extremely, extremely large world to start getting into mantissa issues on the CPU at that point. Not that it isn't possible, but at that point you should probably be fragmenting your world into multiple origin zones anyway...
I haven't tried 4.0 at all yet, since I need to get my product done sooner rather than later and beg for donations... So I am sticking with 3.x and slowly tweaking what I have figured out. Maybe at some point I'll formalize my origin rebase approach as an asset node or something. It is not really straightforward and a bit convoluted (it manipulates two kinds of spatial nodes, reads linear velocity, has some tweakable values, etc.).
This will have to be implemented for GPU particles as well, since their transforms are calculated on the GPU. I believe it's done camera-independently, so I'm not sure how that might work. |
Here is what I do personally, and it seems to work well in my 3D space game, though let me know if there's an issue with it. (I'm using Godot 4.0 right now with doubles, but this can be applied to 3.4.4 as well; just replace Node3D with Spatial.)
Please note:
On my side, my camera looks very smooth now, no matter how far away I am. Even though far-away objects might lose a bit of precision while moving, due to floating-point errors (since this modifies all origins while the camera stays at [0, 0, 0]), it doesn't matter: precisely because they are far away, we don't need that much precision for them. I think it's pretty close to (or may actually be) what @marstaik said: don't move around what you're rendering; instead, make the camera itself the reference for rendering objects. Quite important:
Godot version
v4.0.alpha.custom_build [dae843869]
System information
Windows 10 64 bits GLES3 GeForce GTX 1060 6GB/PCIe/SSE2
Issue description
I've been working lately with a version of Godot 4 compiled with double precision (`float=64`), which has proven to work quite well for stuff happening CPU-side, like physics. The main problem is graphics: everything is jittering, as if it were still working with 32-bit float transforms.
Of course, a cheap workaround is origin shifting; however, the point of using doubles here is that we don't actually need to take on that burden. For graphics, there seems to be a better-adapted solution to make things easier: camera-relative rendering. I think far-away objects can jitter too, but because they are far away, it should not be noticeable.
There are other related problems, like triplanar mapping: if configured with world positions, the result is gibberish.
There are also shadow artifacts, and maybe more.
Note: another topic is how to render stuff that is very far away. For example, if I place something at the origin and move 1000 km away, I'd like to still see it. However, this issue is about stuff near the camera.
Steps to reproduce
Compile Godot with `float=64`, open the editor, place anything at (1000000, 1000000, 1000000), press F to move towards it, and move around it. Observe jittering.

Minimal reproduction project
TestDoubles.zip
With a version of Godot compiled with `float=64`, run main.tscn.
Rotate the camera with the middle mouse button, pan around with Shift, and move with WASD.
Click the button at the top to switch between being near the origin and far from it.