
Add WebGLContextAttributes.pixelFormat RGBA8/SRGB8_ALPHA8/RGBA16F. #3220

Closed

Conversation

@kdashg (Contributor) commented Feb 26, 2021:

  • Fixed a typo in EXT_sRGB.

@kdashg (author) commented Feb 26, 2021:

Something like that!

The main goal here is to allow "sRGB-encoded" backbuffers via SRGB8_ALPHA8.
We can pare this down to just RGBA8/SRGB8_ALPHA8 if we want, but RGBA16F seems like the obvious next step. Interested in what y'all think!

I didn't auto-activate the required extensions, since that seems like too much complexity. Let the app discover its capabilities normally instead of having a different initial state.
I don't use the enum values, but instead the enum key strings, which should be familiar and natural enough to use here, without making people type out WebGL2RenderingContext.SRGB8_ALPHA8.
Also, this makes it really clear that we're only supporting a subset of these. No RG8UI backbuffers please!
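
To make the intended usage concrete, here's a minimal sketch of how an app might request and then verify the attribute, assuming it ships as drafted (the fallback branch is illustrative, not part of the proposal):

// Hypothetical usage of the proposed pixelFormat attribute.
const gl = canvas.getContext('webgl2', { pixelFormat: 'SRGB8_ALPHA8' });
// A previously-created context is returned regardless of the requested
// attributes, so check what was actually granted.
if (!gl || gl.getContextAttributes().pixelFormat !== 'SRGB8_ALPHA8') {
  // Fall back to the default RGBA8 path.
}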

@ccameron-chromium (Contributor) left a comment:

This looks great to me, though I am not an authority.

I do think that adding RGB10_A2 as an option will be something people want.


enum WebGLPixelFormat {
"RGBA8",
"SRGB8_ALPHA8",
A Contributor commented on this snippet:

I think there is value in including RGB10_A2 as an option here. A few ideas of where this will become useful:

  • this lets us do full rec2020 with 32-bits-per-pixel (not blowing up to 64-bits-per-pixel)
  • medical imaging applications, which demand high greyscale resolution (IEEE 754 has enough, but maybe they would be particular about fixed point?)
  • if we end up adding rec2020-pq as a color space at some point in the future, this would be the ideal format for it

That said, this is a perfectly good starting set to have.

A Member commented:

Would Rec. 2020 with RGB10_A2 imply applying the non-linear ITU transfer function in GLSL when writing to gl_FragColor (as happens with RGBA8 and sRGB today)?

@ccameron-chromium (Contributor) commented Feb 26, 2021:

Long answer to avoid confusing the issue.

The value written to the back buffer is in the canvas' color space, whatever that space is.

The value assigned to gl_FragColor may not be the same as the value that is written to the backbuffer.

  • In particular, if the backbuffer has sRGB framebuffer encoding enabled (which is implicitly true if the format specified here is SRGB8_ALPHA8 -- on desktop platforms it's true when FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING is SRGB and FRAMEBUFFER_SRGB is enabled), then the linear-to-sRGB encoding function is applied before writing the value (see the sketch after this list).
  • In that case, gl_FragColor is in a "linearized" version of the canvas' color space.
    • This is true for "srgb" and "display-p3" color spaces, because those spaces use the sRGB transfer function
    • For something like "rec2020" or "rec2020-pq", this ends up being a well-defined, but not very useful space, because they don't use the sRGB transfer function (the result is not linear, not standard, not ... anything, just the result of a bunch of formulas that aren't usually put together)
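
For reference, the transform in question is the standard sRGB encode; a quick sketch (this is just the sRGB spec's piecewise function, nothing specific to this PR):

// Linear-to-sRGB encode, i.e. what the hardware applies on write when
// sRGB framebuffer encoding is enabled.
function linearToSrgb(c) {
  return c <= 0.0031308
      ? 12.92 * c
      : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}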

So, regarding RGB10_A2:

  • no hardware supports any framebuffer color encoding functions for 10-bit formats
    • not sRGB, which we do have for 8-bit formats
    • and not PQ, which doesn't exist in any hardware (but hey, maybe it would be a nice feature request!)
  • so the values written would be "values in [0, 1] in the rec2020 color space"

Does that answer the question?

BTW, the terminology in KHR_gl_colorspace is really bad and confusing (they use the word "color space" for things that aren't color spaces, and then when they do talk about real color spaces, they invert the meanings of terms). So beware of that.

@lexaknyazev (Member) commented Feb 26, 2021:

> Does that answer the question?

BT.2020 defines its own legacy OETF for 10- and 12-bit quantization. Additionally, BT.2100 provides HLG and PQ transfers, which are likely better suited for HDR content.

Writing real linear values [0.0 .. 1.0] to RGB10_A2 would likely lose some perceptual precision compared to pre-applying, in the fragment shader, some non-linear transfer known to the compositor.

@kdashg (author) commented:

Yeah, sRGB encoding is primarily there to address the banding you'd get if you tried to store linear values directly into unorm8. It solves this by applying the transfer function, such that any unorm8 quantization banding is spread evenly throughout the more perceptually flat encoded (E) side of the OETF. With 10bpc you don't have to worry about banding as much, I suppose, and you could just tag the canvas as e.g. rec2020-linear. (Or srgb-linear! Or whatever!) It's on the app to tag correctly based on the output of their rendering, but I think with this, the major pieces of the puzzle are ready.

Would prefer to handle this in a follow-up, but it's a good idea.
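
A rough sketch of what that tagging could look like; everything here is hypothetical (an 'RGB10_A2' pixelFormat is only being discussed in this thread, and 'rec2020-linear' is not a defined canvas color-space value):

const gl = canvas.getContext('webgl2', {
  pixelFormat: 'RGB10_A2',       // hypothetical; not in this PR's enum
  colorSpace: 'rec2020-linear',  // hypothetical canvas color-space tag
});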

A Member commented:

> no hardware supports any framebuffer color encoding functions for 10-bit formats

FWIW, modern Apple GPUs have this: https://developer.apple.com/documentation/metal/mtlpixelformat/mtlpixelformatbgr10_xr_srgb

A Contributor commented:

Wow, this is the first time I'd ever seen such a thing! Maybe it will become a thing! I don't know of any common, standard BT.2020-primaries + sRGB-transfer-function color space (which is where this would shine).

@lexaknyazev (Member) commented Feb 27, 2021:

> With 10bpc, you don't have to worry about banding as much

For sRGB or P3 gamut, maybe.

Actually, it may not always be enough even for standard-range content: the linear precision of 8-bit sRGB roughly varies from 12 bits (near black) to 7 bits (near white). Also, applications would have to be aware of blending with only 2 bits of alpha.

So my point here is that using RGB10_A2 as a backbuffer storage is a very niche option with non-obvious tradeoffs.
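
A quick back-of-the-envelope check of that 12-to-7-bit range, using the standard sRGB decode (the helper names here are ours):

const srgbToLinear = s =>
    s <= 0.04045 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
// Effective linear precision, in bits, of one 8-bit code step at code i:
const stepBits = i =>
    -Math.log2(srgbToLinear((i + 1) / 255) - srgbToLinear(i / 255));
stepBits(0);   // ~11.7 bits near black
stepBits(254); // ~6.8 bits near white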

A Member commented:

> if we end up adding rec2020-pq as a color space at some point in the future

How would blending work there without platform-provided linearization? The 10-bit PQ transfer is even more aggressive than typical sRGB-style curves, as PQ's linear precision may reach 28 bits for values near zero.

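For context, the PQ encode (inverse EOTF) from SMPTE ST 2084 looks like this; its enormous slope near zero is what concentrates so much linear precision in the lowest codes (a sketch; the constants are the standard's):

function pqEncode(Y) {
  // Y: display luminance normalized to 10000 cd/m^2.
  const m1 = 2610 / 16384;       // ~0.1593
  const m2 = 2523 / 4096 * 128;  // 78.84375
  const c1 = 3424 / 4096;        // 0.8359375
  const c2 = 2413 / 4096 * 32;   // 18.8515625
  const c3 = 2392 / 4096 * 32;   // 18.6875
  const p = Math.pow(Y, m1);
  return Math.pow((c1 + c2 * p) / (1 + c3 * p), m2);
}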

@kdashg (author) commented:

Looking forward, I think most of the fancier spaces/transfer functions will just want RGBA16F, but we'll keep an eye out for other options.

@kenrussell (Member) left a comment:

Thanks for putting this together, @jdashg.

<dd>
<p>
If defined, require a backbuffer of a specific effective internal format.
This is a requirement, and if the requested format is not available, context creation will fail.
A Member commented:

This would be the first time a WebGL context creation attribute can cause a failure to create a context. Here's why the working group decided previously to not do this, and fall back to supported attributes:

If a context of the same type (webgl, webgl2) has already been created for the canvas / OffscreenCanvas, https://html.spec.whatwg.org/multipage/canvas.html#dom-canvas-getcontext specifies that it has to be returned, regardless of the context creation attributes.

To handle composability of WebGL libraries, where any one could have created the context, this means that any caller of getContext('webgl[2]', ...) has to check the context creation attributes to make sure they got what they want.

Having the caller now additionally check for null is a little ambiguous - the only way to know why getContext returned null would be to check for a webglcontextcreationerror event fired against the canvas element, which would make context creation extremely contorted and asynchronous.

I appreciate that there's a feature detection problem here, and that some applications might want to fall back in very specific ways if they can't get their desired pixel format.

Do you think there's a different way to handle this? What if we added a setPixelFormat API call at the same time which would be defined as reallocating and zeroing out the back buffer? That one could potentially throw exceptions or similar. Since the context would have been created already, users would have the option of explicitly checking for extension support (for float16 renderability), too.
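
A sketch of how that could look from the application's side; setPixelFormat and its throwing behavior are hypothetical here:

const gl = canvas.getContext('webgl2');          // created with defaults
gl.getExtension('EXT_color_buffer_half_float');  // opt in explicitly
try {
  gl.setPixelFormat('RGBA16F');  // hypothetical: reallocates and zeroes the back buffer
} catch (e) {
  // Keep the default RGBA8 back buffer as the fallback.
}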

@kdashg (author) commented:

failIfMajorPerformanceCaveat definitely causes failure to create context!

@kdashg (author) commented:

With a setPixelFormat-like procedural API, I could see us getting closer to a headless design:

const gl = new WebGLRenderContext();
gl.backbufferStorage(gl.RGBA16F, W, H);

@kdashg (author) commented:

This is #3222

A Member commented:

Fair point about failIfMajorPerformanceCaveat - had forgotten about that.

Can consider this - but let's discuss on #3222, since that might end up being a simpler and more useful form of this functionality.

A Collaborator commented:

> Do you think there's a different way to handle this? What if we added a setPixelFormat API call at the same time which would be defined as reallocating and zeroing out the back buffer? That one could potentially throw exceptions or similar. Since the context would have been created already, users would have the option of explicitly checking for extension support (for float16 renderability), too.

On some underlying implementations (e.g. OpenGL ES), switching to an SRGB8_ALPHA8 render buffer will require creating a new context. Are your implementations happy to have to tear down the context and make a new one under the covers in response to setPixelFormat?

@kdashg (author) commented:

Well, in Gecko we rarely actually use the default framebuffer for WebGL contexts, and when we do it's something we make for ourselves (e.g. EGLSurfaces for an ID3D11Texture). Most of the time we request "headless" contexts when our EGL equivalent supports it. I'm not worried about this aspect, but thanks for mentioning it!

@@ -730,6 +730,12 @@ <h3>Types</h3>
// The power preference settings are documented in the WebGLContextAttributes
// section of the specification.
enum WebGLPowerPreference { "default", "low-power", "high-performance" };

enum WebGLPixelFormat {
A Member commented:

I think we should start off by defining this enum in the WebGL 2.0 spec, where SRGB8_ALPHA8 and RGBA16F textures are included in the core spec, even though renderability of RGBA16F is still an extension there. The interactions are more minimal, and this could be promoted to the WebGL 1.0 spec once it's been prototyped and better understood. Would you consider that?

@kdashg (author) commented:

I don't think it's actually easier to go WebGL 1 first, particularly given RGBA16F on WebGL 2.

A Member commented:

Let's discuss on #3222, which allows feature detection and requires the user to explicitly enable the required extensions. If we agree there, then we can sidestep this disagreement.

@@ -753,6 +759,7 @@ <h3><a name="WEBGLCONTEXTATTRIBUTES">WebGLContextAttributes</a></h3>
WebGLPowerPreference powerPreference = "default";
boolean failIfMajorPerformanceCaveat = false;
boolean desynchronized = false;
WebGLPixelFormat pixelFormat;
A Member commented:

Per above, I'd prefer that we define a WebGL2ContextAttributes dictionary and put this new field there. WebGL2's getContextAttributes override can then return WebGL2ContextAttributes.

<p>
Explicitly require an RGBA16F half-precision floating point backbuffer.
This is the same effective format as a renderbuffer or texture specified with a <code>sizedFormat</code> or <code>internalFormat</code> of <code>RGBA16F</code>,
or a texture specified with <code>{internalFormat: RGBA, unpackFormat: RGBA, unpackType: HALF_FLOAT_OES}</code>.
A Member commented:

WebGL 1.0 doesn't have floating-point textures by default, which makes exposing this pixel format there doubly complicated. If we're moving forward with exposing this on WebGL 1.0, then this should auto-enable OES_texture_float and EXT_color_buffer_half_float (and if they aren't available, this isn't supported). WebGL 2.0 would auto-enable EXT_color_buffer_half_float.

Documentation for readPixels needs to be updated in some way - if this is the pixel format, then RGBA/UNSIGNED_BYTE isn't supported for readbacks from the default back buffer; RGBA/FLOAT is.
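
For illustration, a readback from an RGBA16F back buffer would presumably look like this (a sketch; the exact required format/type combinations would be defined by the spec change):

const w = gl.drawingBufferWidth, h = gl.drawingBufferHeight;
const pixels = new Float32Array(w * h * 4);
gl.readPixels(0, 0, w, h, gl.RGBA, gl.FLOAT, pixels);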

@kdashg (author) commented:

I don't think these should auto-enable; auto-enabling is trickier than not.
Internally, Gecko's WebGL only has extension-checking logic at resource creation. Once a resource (like the backbuffer) is created, it's smooth sailing to use it however, even if the extensions aren't enabled.

A Member commented:

Chromium will create the resources for the back buffer before the end user has a chance to enable any extensions with the newly-created context. I don't know how we can make this form of the API work without auto-enabling extensions. Creating the back buffer with these formats will implicitly enable the formats for applications - nearly all of the WebGL validation logic is at a lower level in Chromium's implementation, and we would not want to put manual validation of certain internal formats back in.

Let's discuss on #3222 though since that may sidestep this issue completely.

@kenrussell (Member) left a comment:

@lexaknyazev @ccameron-chromium please review #3222 - I'll explicitly add you to the review for that one. If we can agree on it, it would sidestep a few issues and give the application programmer more control.


@kenrussell (Member) commented:

In recent discussions it seems everyone is happier with the direction @jdashg is proposing in #3222, so let's focus our attention on that pull request and close this one.

@kenrussell closed this Mar 25, 2021.