Add WebGLContextAttributes.pixelFormat RGBA8/SRGB8_ALPHA8/RGBA16F. #3220
@@ -730,6 +730,12 @@ <h3>Types</h3>
// The power preference settings are documented in the WebGLContextAttributes
// section of the specification.
enum WebGLPowerPreference { "default", "low-power", "high-performance" };

enum WebGLPixelFormat {
  "RGBA8",
  "SRGB8_ALPHA8",
I think there is value in including RGB10_A2 as an option here; there are a few ideas of where this will become useful. That said, this is a perfectly good starting set to have.

Would Rec. 2020 with

Long answer, to avoid confusing the issue: the value written to the back buffer is in the canvas's color space, whatever that space is. The value assigned to gl_FragColor may not be the same as the value that is written to the backbuffer.

So, regarding

Does that answer the question? BTW, the terminology in KHR_gl_colorspace is really bad and confusing (they use the word "color space" for things that aren't color spaces, and then when they do talk about real color spaces, they invert the meanings of terms). So beware of that.

BT.2020 defines its own legacy OETF for 10- and 12-bit quantization. Additionally, BT.2100 provides HLG and PQ transfers, which are likely better suited for HDR content. Writing real linear values

Yeah, sRGB encoding is primarily there to address the banding you'd get if you tried to store linear values directly into unorm8. It solves this by applying the transfer function, such that any unorm8 quantization banding is spread equally throughout the more perceptually flat E side of the OETF. With 10 bits per channel you don't have to worry about banding as much, I suppose, and you could just tag the canvas as e.g. rec2020-linear (or srgb-linear, or whatever). It's on the app to tag correctly based on the output of its rendering, but I think with this, the major pieces of the puzzle are ready. I would prefer to handle this in a follow-up, but it's a good idea.

FWIW, modern Apple GPUs have this: https://developer.apple.com/documentation/metal/mtlpixelformat/mtlpixelformatbgr10_xr_srgb

Wow, this is the first I'd ever seen such a thing! Maybe it will become a thing! I don't know of any standard BT.2020-primaries + sRGB-transfer-function color space that is common (which is where this would shine).

Actually, it may not be enough sometimes even for standard-range content: the linear precision of 8-bit sRGB varies roughly from 12 to 7 bits. Also, applications would have to be aware of blending with only 2 bits of alpha. So my point here is that using

Looking forward, I think most of the fancier spaces/TFs will just want RGBA16F, but we'll keep an eye out for other options.
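The banding discussion above can be made concrete with the sRGB transfer function itself. A minimal JavaScript sketch of the piecewise sRGB OETF/EOTF from IEC 61966-2-1, for illustration only; the function names are not part of any proposed API:

```javascript
// sRGB OETF (IEC 61966-2-1): maps a linear value in [0, 1] to its
// sRGB-encoded value in [0, 1]. Quantizing the *encoded* value to
// unorm8 spreads the 256 steps more evenly in perceptual terms.
function srgbEncode(linear) {
  return linear <= 0.0031308
    ? 12.92 * linear
    : 1.055 * Math.pow(linear, 1 / 2.4) - 0.055;
}

// Inverse (EOTF direction): sRGB-encoded value back to linear.
function srgbDecode(encoded) {
  return encoded <= 0.04045
    ? encoded / 12.92
    : Math.pow((encoded + 0.055) / 1.055, 2.4);
}

// Near black, one unorm8 step of *linear* storage (1/255) is a much
// coarser quantum than one step of sRGB-encoded storage, which is
// exactly the banding that sRGB encoding mitigates.
const step = 1 / 255;
const linearQuantum = step;           // linear unorm8 step at black
const srgbQuantum = srgbDecode(step); // linear size of the first sRGB step
console.log(srgbQuantum < linearQuantum); // true: encoded steps are finer near black
```

This is why storing linear values straight into an 8-bit unorm buffer bands near black, while an SRGB8_ALPHA8 backbuffer does not.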
  "RGBA16F",
};
</pre>

<!-- ======================================================================================================= -->
@@ -753,6 +759,7 @@ <h3><a name="WEBGLCONTEXTATTRIBUTES">WebGLContextAttributes</a></h3>
    WebGLPowerPreference powerPreference = "default";
    boolean failIfMajorPerformanceCaveat = false;
    boolean desynchronized = false;
    WebGLPixelFormat pixelFormat;
Per the above, I'd prefer if we defined some WebGL2ContextAttributes and put this new field there. WebGL2's getContextAttributes override can then return WebGL2ContextAttributes.
};</pre>

<h4>Context creation parameters</h4>
@@ -919,6 +926,50 @@ <h4>Context creation parameters</h4>
</p>
</div>
</dd>
<dt><span class=prop-value>pixelFormat</span></dt>
<dd>
<p>
If defined, require a backbuffer of a specific effective internal format.
This is a hard requirement: if the requested format is not available, context creation will fail.
This would be the first time a WebGL context creation attribute can cause a failure to create a context. Here's why the working group previously decided not to do this, and to fall back to supported attributes instead: if a context of the same type (webgl, webgl2) has already been created for the canvas / OffscreenCanvas, https://html.spec.whatwg.org/multipage/canvas.html#dom-canvas-getcontext specifies that it has to be returned, regardless of the context creation attributes. To handle composability of WebGL libraries, where any one could have created the context, this means that any caller of

Having the caller now additionally check for

I appreciate that there's a feature detection problem here, and that some applications might want to fall back in very specific ways if they can't get their desired pixel format. Do you think there's a different way to handle this? What if we added a setPixelFormat API call at the same time, which would be defined as reallocating and zeroing out the back buffer? That one could potentially throw exceptions or similar. Since the context would have been created already, users would also have the option of explicitly checking for extension support (for float16 renderability).

failIfMajorPerformanceCaveat definitely causes failure to create a context!

With a setPixelFormat-like procedural API, I could see us getting closer to a headless design:

This is #3222

Fair point about failIfMajorPerformanceCaveat - had forgotten about that. Can consider this - but let's discuss on #3222, since that might end up being a simpler and more useful form of this functionality.

On some underlying implementations (e.g. OpenGL ES), switching to an SRGB8_ALPHA8 render buffer will require creating a new context. Are your implementations happy to have to tear down the context and make a new one under the covers in response to setPixelFormat?

Well, in Gecko we rarely actually use the default framebuffer for WebGL contexts, and when we do it's something we make for ourselves (e.g. EGLSurfaces for an ID3D11Texture). Most of the time we request "headless" contexts when our EGL equivalent supports it. I'm not worried about this aspect, but thanks for mentioning it!
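The composability concern above can be illustrated by the fallback dance a caller would need. This is a hedged sketch, assuming the proposed (not shipped) pixelFormat attribute and today's getContextAttributes behavior; the helper name is purely illustrative:

```javascript
// Hypothetical helper: request a specific backbuffer format, but fall
// back gracefully. getContext() may return an already-created context
// whose attributes ignore the new request entirely, so the caller must
// re-check what it actually got.
function getContextWithFormat(canvas, format) {
  let gl = canvas.getContext("webgl2", { pixelFormat: format });
  if (!gl) {
    // Creation failed outright (format unsupported under this proposal);
    // retry without the new attribute.
    gl = canvas.getContext("webgl2");
  }
  if (!gl) return null;
  const attrs = gl.getContextAttributes();
  if (attrs && attrs.pixelFormat !== format) {
    console.warn(`Wanted ${format}, got ${attrs.pixelFormat ?? "default"}`);
  }
  return gl;
}
```

Every library sharing the canvas would need this re-check, which is the burden the comment above is pointing at.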
</p>
<p>
The allowed values are:
</p>
<dl><dd>
<p>
<em>undefined</em>
</p>
<p>
If not defined, choose implicitly based on <code>alpha</code>.
This is the historical behavior.
</p>
<p>
<em><code>"RGBA8"</code></em>
</p>
<p>
Explicitly require an RGBA8 backbuffer.
This is similar to <code>{alpha: true, pixelFormat: "implicit"}</code>.
This is the same effective format as a renderbuffer or texture specified with a <code>sizedFormat</code> or <code>internalFormat</code> of <code>RGBA8</code>,
or a texture specified with <code>{internalFormat: RGBA, unpackFormat: RGBA, unpackType: UNSIGNED_BYTE}</code>.
</p>
<p>
<em><code>"SRGB8_ALPHA8"</code></em>
</p>
<p>
Explicitly require an SRGB8_ALPHA8 "sRGB-encoded" backbuffer.
This is the same effective format as a renderbuffer or texture specified with a <code>sizedFormat</code> or <code>internalFormat</code> of <code>SRGB8_ALPHA8</code>,
or a texture specified with <code>{internalFormat: SRGB_ALPHA, unpackFormat: SRGB_ALPHA, unpackType: UNSIGNED_BYTE}</code>.
</p>
<p>
<em><code>"RGBA16F"</code></em>
</p>
<p>
Explicitly require an RGBA16F half-precision floating-point backbuffer.
This is the same effective format as a renderbuffer or texture specified with a <code>sizedFormat</code> or <code>internalFormat</code> of <code>RGBA16F</code>,
or a texture specified with <code>{internalFormat: RGBA, unpackFormat: RGBA, unpackType: HALF_FLOAT_OES}</code>.

WebGL 1.0 doesn't have floating-point textures by default, which makes exposing this pixel format there doubly complicated. If we're moving forward with exposing this on WebGL 1.0, then this should auto-enable OES_texture_float and EXT_color_buffer_half_float (and if they aren't available, this isn't supported). WebGL 2.0 would auto-enable EXT_color_buffer_half_float. Documentation for

I don't think these should auto-enable. I think auto-enable is more tricky than not.

Chromium will create the resources for the back buffer before the end user has a chance to enable any extensions with the newly-created context. I don't know how we can make this form of the API work without auto-enabling of extensions. Creating the back buffer with these formats will implicitly enable the formats for applications - nearly all of the WebGL validation logic is at a lower level in Chromium's implementation, and we would not want to put back in manual validation of certain internal formats. Let's discuss on #3222 though, since that may sidestep this issue completely.
</p>
</dd></dl>
</dd>
</dl>
<div class="example">
Here is an ECMAScript example which passes a WebGLContextAttributes argument
I think we should start off by defining this enum in the WebGL 2.0 spec, where SRGB8_ALPHA8 and RGBA16F textures are included in the core spec, even though renderability of RGBA16F is still an extension there. The interactions are more minimal, and this could be promoted to the WebGL 1.0 spec once it's been prototyped and better understood. Would you consider that?

I don't think it's actually easier to go WebGL 1 first, particularly given RGBA16F on WebGL 2.

Let's discuss on #3222, which allows feature detection and requires the user to explicitly enable the required extensions. If we agree there, then we can sidestep this disagreement.
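Taken together, the attribute as drafted in this diff would be used roughly as follows. This is a hedged sketch of proposed, unshipped behavior; it also assumes (as with failIfMajorPerformanceCaveat today) that a failed creation attempt leaves the canvas free for a retry with different attributes:

```javascript
// Probe backbuffer formats in preference order. Under this proposal an
// unsupported pixelFormat fails context creation, so each failure falls
// through to the next candidate. "pixelFormat" is proposed in this PR
// and is not part of any shipping WebGL implementation.
const candidates = ["RGBA16F", "SRGB8_ALPHA8", "RGBA8"];

function createContext(canvas) {
  for (const pixelFormat of candidates) {
    const gl = canvas.getContext("webgl2", { alpha: true, pixelFormat });
    if (gl) return { gl, pixelFormat };
  }
  // Final fallback: historical behavior (format chosen implicitly from `alpha`).
  return { gl: canvas.getContext("webgl2"), pixelFormat: undefined };
}
```

Note that this probing only works because failure returns null; once a context is successfully created, later getContext calls return it regardless of attributes, which is the feature-detection tension discussed throughout this thread.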