WebGLRenderTarget: Allow opt-in for output transforms / image formation #29429
Hm. If the user's intention is to force blending in a particular color space like sRGB (building a 2D compositing application, for example), then expressing that in terms of the render target's color space won't be correct. WebGL decodes an sRGB framebuffer to linear when blending; it blends in linear space whether it is writing to a linear or an sRGB framebuffer. This is in contrast to writing to the drawing buffer, where it blends in sRGB. So instead, the custom blending space use case might require:

```js
renderer.outputColorSpace = SRGBColorSpace;

const target = new WebGLRenderTarget( width, height, {
	type: HalfFloatType,
	colorSpace: NoColorSpace, // ???
	needsOutputTransform: true
} );
```

I don't love that; this needs some thought.
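To make the distinction above concrete, here is a minimal sketch (plain JavaScript, not three.js API; the helper names are mine) showing that blending the same two colors in sRGB space versus linear space produces visibly different results. The transfer functions are the standard sRGB encode/decode formulas.

```javascript
// Standard sRGB transfer functions (per-channel, [0, 1] range).
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// 50% blend of sRGB black (0.0) over sRGB white (1.0).
const alpha = 0.5;

// Blending in sRGB space (drawing buffer behavior):
// a plain lerp on the encoded values.
const blendedSrgb = alpha * 0.0 + (1 - alpha) * 1.0; // 0.5

// Blending in linear space (sRGB framebuffer behavior):
// the hardware decodes, lerps, and re-encodes.
const blendedLinear = linearToSrgb(
  alpha * srgbToLinear(0.0) + (1 - alpha) * srgbToLinear(1.0)
);

console.log(blendedSrgb.toFixed(3));   // 0.500
console.log(blendedLinear.toFixed(3)); // ≈ 0.735 (a noticeably lighter gray)
```

The ≈0.235 gap between the two results is exactly what a user building a 2D compositing app would notice when their blends no longer match other design tools.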
Thinking through implications.
The problem with using … So, unfortunately, I don't see a way to allow users to do blending in sRGB space for artistic reasons without abandoning a linear workflow. We do allow that, but it isn't generally recommended. With that, my previous comment can be ignored, and we're back to what I'd suggested in the OP:
Another possible API:

```js
renderer.setRenderTarget( target, undefined, undefined, needsToneMapping = true );
```
Do you have an example of when, specifically, this is a problem? Is there a case where you want a render target to have tone mapping etc. applied? I've never had this issue, and using tone mapping etc. when writing to render targets shouldn't be the default, imo.

The way I see it, all these effects should be applied when the image is displayed. When performing forward rendering to the canvas, this happens in whatever surface shader is being used to render the objects. When using a postprocessing pipeline of some sort, these transformations need to be applied whenever the last postprocessing step is written to the canvas. There's no reason it has to be a whole separate pass; that's just how three.js' example postprocessing pipeline is set up. The pmndrs/postprocessing project merges all the postprocessing code it can into a single shader, if I recall right, in order to avoid expensive extra passes. Adding a flag to RenderTarget to avoid an extra postprocessing pass feels like a heavy-handed fix to a fixable flaw in the three.js postprocessing pipeline, unless there are other cases I'm not thinking of.

edit: I missed that you linked to a discourse post and that it's specifically a transparency issue; I'll take a look at that in a bit.
This is a pretty annoying case 😅 Unfortunately, as you've pointed out, it doesn't seem like there's a way to easily control the blending color space in WebGL. The only thing I can think of for enabling a toggle while using linear color rendering is to go back to handling all color transformations manually and otherwise storing all data in buffers as sRGB, which has a lot of other implications and is a lot to maintain. I'm hoping unmanaged color can be used to handle this case where users need to match behavior from other applications (e.g., designer requests), but it looks like some changes are needed. Otherwise, I'd think you might need a custom composite step when rendering transparent objects, where you perform the blending calculations manually in the desired color space.
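The custom composite step mentioned above could look something like this sketch (plain JavaScript, hypothetical helper names, not three.js API): keep the buffer contents linear, but perform the blend itself in sRGB space by encoding, mixing, and decoding again. In practice this logic would live in a fragment shader that samples the destination as a texture.

```javascript
// Standard sRGB transfer functions (per-channel, [0, 1] range).
const toSrgb = (c) =>
  c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
const toLinear = (c) =>
  c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);

// Blend src over dst in sRGB space while keeping storage linear:
// encode both operands, lerp the encoded values, decode the result.
function blendInSrgb(srcLinear, dstLinear, alpha) {
  const s = toSrgb(srcLinear);
  const d = toSrgb(dstLinear);
  const blended = alpha * s + (1 - alpha) * d; // "over" on encoded values
  return toLinear(blended); // store the result back as linear
}

// 50% black over white, blended in sRGB but returned as a linear value:
console.log(blendInSrgb(0.0, 1.0, 0.5)); // ≈ 0.214 (re-encodes to sRGB 0.5)
```

The cost is the extra encode/decode per blend, which is why this only makes sense as a targeted composite pass for the transparent objects that need it.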
I'm starting to feel that solutions to "support alpha blending in sRGB space" (if we choose to support it) will need to be different from the proposed WebGLRenderTarget changes. Notably, disabling color management (even if changed as proposed in #29549) may not be enough, as I think the OP in the Discord thread would like to keep a proper linear rendering workflow, but also to blend in sRGB space. If they didn't want a linear lit rendering workflow and are just concerned with blending, they could force an sRGB working color space for compositing, but with the linear rendering requirement I worry it's too rare and hard to support.
This has been coming up a lot in the pmndrs discord, let me see if I can get a more concrete example. I think it is mainly a performance problem.
It wasn't clear to me if that's what they wanted. From the thread they say the following:
"Opting out" of color management should not affect the end look of hex colors at all, which makes me think something else is going on, like one of the issues relating to textures mentioned in #29549 (comment). The color space docs page they reference also mentions nothing about having to set "outputColorSpace" to linear in order to retain CSS colors. Definitely agreed, though, that if linear rendering is required, then that plus sRGB blending isn't something we can easily support.
I'll be curious to hear. I'm not sure what the solution you proposed provides in terms of performance over what can already be done by combining passes into a single shader, though (e.g., making sure the final postprocessing pass includes the tone mapping etc. fragments). I expect in part postprocessing is just more expensive than people realize, especially on mobile with high pixel ratios. Relating to color spaces, I wonder what kind of impact using "SRGB8_ALPHA8" and floating point buffers has on mobile performance.
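For reference, the "image formation" step being discussed amounts to a small amount of per-channel math that can be appended to any final pass. Here is a plain-JavaScript sketch of what the shader would compute, assuming Reinhard tone mapping (one of several operators three.js offers; the function name is mine):

```javascript
// Per-channel image formation: exposure, tone mapping, then sRGB encode.
// Pure JS illustration of the shader math; not three.js API.
function imageFormation(linearColor, exposure) {
  const exposed = linearColor * exposure;     // toneMappingExposure
  const toneMapped = exposed / (1 + exposed); // Reinhard operator
  // Output color space encode (linear -> sRGB):
  return toneMapped <= 0.0031308
    ? toneMapped * 12.92
    : 1.055 * Math.pow(toneMapped, 1 / 2.4) - 0.055;
}

// A bright HDR value is compressed into displayable range:
console.log(imageFormation(4.0, 1.0)); // ≈ 0.906
```

Since this is a handful of ALU operations, folding it into the last postprocessing shader is far cheaper on mobile than a dedicated fullscreen pass, which costs an extra render target write and read.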
Description

In recent three.js versions, `renderer.outputColorSpace`, `renderer.toneMapping`, and `renderer.toneMappingExposure` (jointly, the image formation step) are applied only when writing to the canvas drawing buffer (or WebXR), and disabled when writing to a render target. I think that's the right default. But it does constrain users' choices, and can force an additional (expensive!) pass in a post-processing stack. Based on feedback, this is problematic for mobile applications.

Solution
I'd like to suggest that we add a new property to WebGLRenderTarget, perhaps called `target.needsOutputTransform`. If enabled, WebGLRenderer would apply … when drawing to that render target. I believe `renderer.outputColorSpace` is not relevant here, since the render target already has a `.colorSpace` property.

Alternatives
Alternatively, we could make WebGLRenderer always apply these settings when drawing to a render target, and require explicit changes to the renderer's properties by the user. I expect this would be a significant breaking change, however.
Additional context
I don't know what this looks like in WebGPURenderer, or whether similar changes would be required there.
/cc @WestLangley