Add a Viewport property to use full floating-point precision in HDR (3.x) #51708
Conversation
This is only available on the GLES3 backend. This can be useful for advanced shaders, but it should generally not be enabled otherwise as full precision has a performance cost. For general-purpose rendering, the built-in debanding filter should be used to reduce banding instead.
```diff
@@ -2069,12 +2069,17 @@ SceneTree::SceneTree() {
 	const float sharpen_intensity = GLOBAL_GET("rendering/quality/filters/sharpen_intensity");
 	root->set_sharpen_intensity(sharpen_intensity);

-	GLOBAL_DEF_RST("rendering/quality/depth/hdr", true);
+	GLOBAL_DEF("rendering/quality/depth/hdr", true);
```
It turns out restarting the editor isn't required to apply HDR changes, at least in my experience. I could notice the added banding immediately as soon as I unchecked HDR in the project settings.
The same applies to the new 32 bpc depth setting.
@Calinou It seems data is being implicitly converted somewhere between storage and sampling: values below roughly 1.175494 × 10⁻³⁸ appear to be flushed to zero when the buffer is sampled. Basically, any float under that value survives within the shader that wrote it, but reads back as zero from another shader.

Most likely the former? I hope my explanation is not too confusing, and I apologize in advance. I've attached a sample project with shaders for testing data packing and restoring.
@Calinou I investigated a little further and I think I may have found what's going on. There are two issues happening here:

**Clamped Negative Numbers**

As we could verify in #51844, negative values written to the buffer get clamped.

**NaNs and Denormals**

The OpenGL specification states that the smallest normalized value (the one nearest to zero) is 1.175494 × 10⁻³⁸... which is eerily similar to the number I mentioned above. Floats whose exponent field is zero and mantissa is non-zero are denormalized (subnormal) numbers. The specification states that the way NaNs and denormals are handled is implementation-dependent, and vendors may flush them to 0, which means this is intended. However, this seems to only happen when sampling the buffer from another shader, which strikes me as odd. The following shader code results in 1 within the shader, but results in 0 when sampling the buffer from another shader:

```glsl
void fragment() {
	ALBEDO.r = intBitsToFloat(0x00000001); // becomes the denormal 1e-45
	ALBEDO.gb = vec2(0.0);
}

void light() {
	// float 1e-45 is reinterpret-cast to int 1,
	// then left-shifted 30 bits to become 0x40000000,
	// and finally reinterpret-cast to float 2.0f
	DIRECTIONAL_LIGHT = intBitsToFloat(floatBitsToInt(ALBEDO.r) << 30);
}
```

The bit representation of the denormal is preserved within the shader that wrote it, but flushed to zero once the buffer is sampled from another shader. The Nvidia OpenGL implementation specification says that they flush NaNs and denorms to 0, but they don't specify where.
I wouldn't remove negative color clamping as it would break the 3D rendering appearance when using strong negative lights. Unless we find a way to fix this in the lights themselves, but I don't know if this is feasible.
Negative color clamping seems like a regression to me; negative color values are not a bug, they are intended and supported on framebuffers, and they've been used on effect buffers since forever. As far as I know, negative colors only become visible if they underflow due to subtraction. Likewise, strong positive lights may overflow and become black.

Commonly, implementations normalize light energy values instead of outright clamping color values. Generally speaking, it's advised not to use excessively high values for lights, as there are only 16 bits of precision to work with and they can't store extreme values, negative or positive. It's always advised to normalize light energy by the total scene exposure; in this case, dividing every light's energy by the strongest light in the scene. In my project I'm finishing implementing a system that does exactly that.

If color clamping is staying, then I sure hope it's optional, because I'm certainly not going to use it. I have so many shader chains that rely on negative values, I'm not even joking: depth, screen-space bent normals, world-space fragment position, a vertex position feedback-loop buffer for my GPU Verlet cloth system, volumetrics... just off the top of my head. I haven't tested my project on the latest releases of the engine, but I can already see everything breaking apart. Additionally, if floats are being clamped, they certainly aren't full floating-point precision.
I have a few ideas, but I need to see about optimizing them. Do we know whether the values "break" during the light loop or during the high-dynamic-range to low-dynamic-range conversion?
@Lauson1ex I don't think the current renderer clamps light colours. I think the colour clamping Calinou is referring to is clamping the inputs to the tonemapping functions. Tonemapping breaks with negative values. This shouldn't impact your workflow because it only applies to viewports with tonemapping enabled.
Yes, that's exactly what we're talking about: the color values, after they've been multiplied and added by lights, are fed as input to the tonemapper and get clamped.
Except negative color values are being clamped despite tonemapping being disabled.
@Lauson1ex are you sure you have tonemapping disabled? E.g. does your viewport have `keep_3d_linear` checked? Setting tonemapping to "linear" is not the same as disabling tonemapping.
@clayjohn A shader writes a solid color to the buffer, which is then read by another shader and output to the screen. `src.r` is written to `dst.g`. Reproduction test project:
@Lauson1ex thanks for confirming. Looks like we need to dig in and see where else the outputs are being clamped. From memory, I thought the only place was during tonemapping. But there must be a clamp hidden elsewhere.
Implementation and feature seem fine to me. I'll let others in @godotengine/rendering validate/merge.
Poke @godotengine/rendering
Thanks!
3.x version of #51709. This implements godotengine/godot-proposals#2935 for 3.x. I'll have to see about creating a `master` version of this PR.

@Lauson1ex Do you have an example shader to test this?
While I don't have a suitable shader for testing the difference, I can note that there is a subtle visual difference in a test scene I made (verified by dssim).
16 bpc (default)
32 bpc
Difference