
Canvas 2D color management #646

Closed · 1 task done
ccameron-chromium opened this issue Jun 10, 2021 · 13 comments
Labels: Progress: propose closing · Topic: graphics

ccameron-chromium commented Jun 10, 2021

I'm requesting a TAG review of Canvas 2D color management.

This was developed in the W3C's ColorWeb CG, and has been reviewed and updated in WhatWG review. I would like TAG to put their eyes on it too!

Summary: This formalizes the convention that 2D canvases are in the sRGB color space by default, that input content is converted to the 2D canvas's color space when drawing, and that "untagged" content is interpreted as sRGB. It adds a parameter whereby a 2D canvas can specify a different color space (with Display P3 being the only additional value exposed so far). Similarly, it formalizes that ImageData is sRGB by default, and adds a parameter to specify its color space.
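
For illustration, a minimal sketch of the two new parameters, assuming a browser that implements the proposal:

// Request a 2D context backed by Display P3 instead of the sRGB default.
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });

// ImageData likewise takes a color space; its pixel values are then
// interpreted in that space.
const pixels = new ImageData(16, 16, { colorSpace: 'display-p3' });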

Further details:

We'd prefer the TAG provide feedback as:
💬 leave review feedback as a comment in this issue and @-notify ccameron-chromium

LeaVerou (Member) commented

Hi @ccameron-chromium!

Fantastic to see a proposal to color manage Canvas, and extend it beyond sRGB. 👍🏼
It's unfortunate that sRGB has to be the default, but completely understandable for web compat.

Here are some questions that occurred to me on initial reading.

  • You say the target is Chrome 92. However, to my knowledge, there are no plans to implement color(display-p3) or lab() or lch() in Chrome 92. Without those, it would be impossible to draw graphics that utilize P3 colors, and thus the only value-add of this implementation would be the ability to paint P3 images on canvas. Is that the case? Does Chromium plan to implement color(display-p3) but only for Canvas? Something else?
  • I imagine eventually we'd want to extend this to the Paint API’s PaintRenderingContext2D. Given that the context for that is pre-generated, what would that look like?
  • Is this intended to become 10 bit by default once 10 bits per component are supported? Could this introduce web compat problems?
  • Input colors (e.g, fillStyle and strokeStyle) follow the same interpretation as CSS color literals, regardless of the canvas color space.
    What happens when someone paints an e.g. rec2020 color on an sRGB or Display P3 canvas? Is the result gamut mapped? If so, how?
  • If I understand the explainer correctly, this means that the first script that calls getContext() gets to define the color space the canvas is in. What happens on any subsequent getContext() calls, either without a colorSpace argument, or with a different one? Do they produce an error or silently return the existing context, color managed with a different color space than the one the author specified? Do they clear the contents? Not sure any of these options is better than making colorSpace be mutable (which would also address PaintRenderingContext2D). It is not that unheard of to change the color space of color managed graphics contexts, e.g. it's possible in every color managed graphics application I know of, and there are several reasonable ways to do it.
  • Am I reading it correctly that getImageData() will return sRGB data even in a P3 canvas, unless P3 data is explicitly requested? What's the rationale for not defaulting to the current color space?
  • The color space is then an immutable property of the CanvasRenderingContext2D.
    Unless I missed it, none of your Web IDL snippets include this readonly attribute. I assume in unsupported color spaces this attribute will be "srgb"?

LeaVerou added the "Progress: in progress", "Progress: pending external feedback", and "Topic: graphics" labels and removed the "Progress: untriaged" label on Jun 10, 2021
ccameron-chromium (Author) commented

Thank you so much for the quick look!

Something I should have emphasized is that CanvasColorSpaceProposal.md is what was brought to WhatWG, and then the WhatWG PR is what came out of that review. It may be that I should update CanvasColorSpaceProposal.md to reflect those changes.


  • You say the target is Chrome 92. However to my knowledge, there are no plans to implement color(display-p3) or lab() or lch() in Chrome 92. Without those, it would be impossible to draw graphics that utilize P3 colors, and thus the only value-add of this implementation would be the ability to paint P3 images on canvas. Is that the case? Does Chromium plan to implement color(display-p3) but only for Canvas? Something else?

Indeed Chrome 92 will not have color(display-p3) et al. Wide color gamut (WCG) content can be drawn to a 2D canvas via Images and via ImageData.

When we were trying to decide which pieces to pick off first (CSS color vs 2D canvas), the balance came out in favor of canvas, for applications that wanted to ensure that their images weren't crushed to sRGB (even if all CSS colors were still limited to sRGB). Ultimately the two are much more useful together.
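
As a sketch of what that enables (assuming a <canvas> element named canvas and a browser implementing the proposal), wide-gamut pixel values can be supplied through ImageData even while fillStyle strings remain limited to sRGB:

const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });

// Fully saturated Display P3 red, written as raw pixel data; this color lies
// outside the sRGB gamut and could not be expressed with an sRGB-only
// fillStyle string.
const red = new ImageData(1, 1, { colorSpace: 'display-p3' });
red.data.set([255, 0, 0, 255]);
ctx.putImageData(red, 0, 0);

// P3-tagged images drawn via drawImage are likewise converted to the canvas's
// color space, so their wide-gamut values survive.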

  • I imagine eventually we'd want to extend this to the Paint API’s PaintRenderingContext2D. Given that the context for that is pre-generated, how would that look like?

For PaintRenderingContext2D, the actual output color space is not observable by JavaScript (getImageData isn't exposed). This is unlike CanvasRenderingContext2D, where the color space is observable (and has historically been a fingerprinting vector). Because of that, my sense is that the user agent should be able to select the best color space for the display device (just as it does when deciding the color space in which elements are drawn and composited), and potentially change that space behind the scenes. Having the application specify a color space for PaintRenderingContext2D feels like an unnatural constraint.

Similarly, ImageBitmap and ImageBitmapRenderingContext don't want color spaces -- one should just be able to create an ImageBitmap from a source and send it to ImageBitmapRenderingContext and, by default, have it appear the same as the source would have if drawn directly as an element. (Of note is that we will likely add a color space to ImageBitmapOptions to allow asynchronous-ahead-of-time conversion for when uploading into a WebGL/GPU texture, but that is outside of the 2D context).

  • Is this intended to become 10 bit by default once 10 bits per component are supported? Could this introduce web compat problems?

Indeed for non-srgb-or-display-p3 spaces, we may want to default to something more than 8 bits per channel. That's part of why we decided not to include rec2020 in the spec (the other part being disputes about its proper definition!!).

For srgb and display-p3, the overwhelming preference is for 8 bits per channel, so that will remain the default (using more than 8 bits per channel comes with substantial power and memory penalties, for almost no perceptual gain). As you noted, in the HDR spec we may want to make the selection of a color space imply a particular pixel format (I'm still on the fence about that -- fortunately we're avoiding being affected by how that decision lands -- display-p3 is the most requested space).

  • Input colors (e.g, fillStyle and strokeStyle) follow the same interpretation as CSS color literals, regardless of the canvas color space.

    What happens when someone paints an e.g. rec2020 color on an sRGB or Display P3 canvas? Is the result gamut mapped? If so, how?

The input colors (like other inputs) are converted from the input's color space to the canvas's color space using relative colorimetric mapping, which is the "don't do anything fancy" mapping. In your example, the rec2020 color can always be transformed to some pixel in sRGB, but that pixel may have RGB values outside of the 0-to-1 interval. Relative colorimetric intent just clamps the individual color values to 0-to-1.

This is what happens today in all browsers if the browser, e.g., loads a rec2020 image that uses the full gamut and attempts to display it on a less capable monitor.

(Somewhat relatedly, one thing that came up in a separate review is that it might be useful for developer tools to have a "please pretend I have a less capable monitor than I do" mode).
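
A sketch of the convert-then-clamp behavior described above; convertColor here is a hypothetical helper standing in for the colorimetric conversion, not a spec algorithm:

// Convert an input color into the canvas's color space; out-of-gamut inputs
// produce components outside [0, 1], which relative colorimetric intent (as
// implemented here) simply clamps per component.
function toCanvasColor(values, fromSpace, canvasSpace) {
  const converted = convertColor(values, fromSpace, canvasSpace); // hypothetical helper
  return converted.map(v => Math.min(1, Math.max(0, v)));
}

// e.g. a fully saturated rec2020 red landing on an sRGB canvas ends up with
// each channel clamped into [0, 1].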

  • If I understand the explainer correctly, this means that the first script that calls getContext() gets to define the color space the canvas is in. What happens on any subsequent getContext() calls, either without a colorSpace argument, or with a different one? Do they produce an error or silently return the existing context, color managed with a different color space than the one the author specified? Do they clear the contents?

The current behavior is that the subsequent call to getContext('2d') will return the previously created context, even if it has different properties than what was requested the second time around. This applies to all of the settings (alpha, etc).
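
A sketch of that behavior, assuming a <canvas> element named canvas in a browser implementing the proposal:

const ctx1 = canvas.getContext('2d', { colorSpace: 'display-p3' });
// The second call does not create or reconfigure anything; it returns the
// context created above, still using display-p3.
const ctx2 = canvas.getContext('2d', { colorSpace: 'srgb' });
console.log(ctx1 === ctx2);                           // true
console.log(ctx2.getContextAttributes().colorSpace);  // "display-p3"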

Not sure any of these options is better than making colorSpace be mutable (which would also address PaintRenderingContext2D). It is not that unheard of to change the color space of color managed graphics contexts, e.g. it's possible in every color managed graphics application I know of, and there are several reasonable ways to do it.

Yes, this was another tricky area. There was some discussion around making colorSpace a mutable attribute, but a few things pushed against it. One was that there were indeed many reasonable things to do (clear the canvas, reinterpret_cast the pixels, convert the pixels?), and no single option was a clear winner. Another was that this matched the behavior for alpha (which will likely match the future canvas bit depth). Another was that it felt conceptually like a bad fit (especially in comparison with, e.g., WebGPU, where the GPUSwapChainDescriptor is the natural spot, and can be changed on frame boundaries).

So that's how we ended up landing where we did. Does that feel reasonable to you too?

In practice, if one wants to swap out a canvas for a differently-configured canvas, one can create the new element (or offscreen canvas) and drawImage the previous canvas into it (which will achieve the "convert" behavior).
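
A sketch of that workaround, assuming an existing element named oldCanvas whose contents should move to a Display P3 canvas:

// Create a replacement canvas with the desired color space.
const newCanvas = document.createElement('canvas');
newCanvas.width = oldCanvas.width;
newCanvas.height = oldCanvas.height;
const newCtx = newCanvas.getContext('2d', { colorSpace: 'display-p3' });

// drawImage converts the old canvas's pixels into the new canvas's color
// space, i.e. the "convert" behavior described above.
newCtx.drawImage(oldCanvas, 0, 0);
oldCanvas.replaceWith(newCanvas);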

We also briefly discussed whether it was possible for the canvas to automatically update its color space to support whatever is drawn into it (it turns out it's not, at least not without scrutinizing every pixel of every texture that gets sent to it, and even then that may not be desirable).

  • Am I reading it correctly that getImageData() will return sRGB data even in a P3 canvas, unless P3 data is explicitly requested? What's the rationale for not defaulting to the current color space?

Yes, this is a good point -- the WhatWG review changed this behavior (again, sorry I wasn't more clear about that earlier).

The text that landed is what you suggest (getImageData returns data in the canvas's color space). Critically, getImageData, toDataURL, and toBlob have the property that if one exports a canvas into an ImageData, blob, or URL, and then draws the result back onto the same canvas, the operation is a no-op (no data is lost ... unless you choose lossy compression).
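
A sketch of that round-trip property, assuming a display-p3 canvas named canvas:

const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });

// getImageData now defaults to the canvas's color space...
const snapshot = ctx.getImageData(0, 0, canvas.width, canvas.height);
console.log(snapshot.colorSpace); // "display-p3"

// ...so writing it straight back loses no data.
ctx.putImageData(snapshot, 0, 0);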

  • The color space is then an immutable property of the CanvasRenderingContext2D.

    Unless I missed it, none of your Web IDL snippets include this readonly attribute. I assume in unsupported color spaces this attribute will be "srgb"?

Following alpha's pattern, it's query-able using getContextAttributes (it will be in the returned CanvasRenderingContext2DSettings).

When creating a context, the color space for the context is set to the color space in the attributes, so all enum values that get past the IDL must be supported for 2D canvas and for ImageData. (Also, the proposal document advertised a feature detection interface, which was nixed in WhatWG review).

If the browser doesn't support this feature at all, then there will be no colorSpace entry in CanvasRenderingContext2DSettings, so the feature may be detected through that mechanism.
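
A sketch of that detection mechanism, assuming a <canvas> element named canvas:

const attrs = canvas.getContext('2d').getContextAttributes();

// In a browser without this feature there is no colorSpace member at all;
// in a supporting browser it defaults to "srgb".
const supportsCanvasColorSpace = 'colorSpace' in attrs;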

Thank you again for the quick feedback!

LeaVerou removed the "Progress: pending external feedback" label on Jun 11, 2021
plinss added this to the 2021-06-21-week milestone on Jun 16, 2021
ccameron-chromium (Author) commented

I've updated the listed explainer to reference this document. This is the best place to look for a concise description of the formalizations and changes being proposed in this feature.

This is a revised and streamlined version of the initial proposal, reflecting the changes made during WhatWG review.

LeaVerou (Member) commented

What happens when someone paints an e.g. rec2020 color on an sRGB or Display P3 canvas? Is the result gamut mapped? If so, how?

The input colors (like other inputs) are converted from the input's color space to the canvas's color space using relative colorimetric mapping, which is the "don't do anything fancy" mapping. In your example, the rec2020 color can always be transformed to some pixel in sRGB, but that pixel may have RGB values outside of the 0-to-1 interval. Relative colorimetric intent just clamps the individual color values to 0-to-1.

This is what happens today in all browsers if the browser, e.g, loads a rec2020 image that uses the full gamut and attempts to display it on a less capable monitor.

Relative Colorimetric is essentially a set of rules for how gamut mapping should happen, not a gamut mapping algorithm. The per-component clamping you describe does conform to RC, but is a very poor implementation of it. E.g. consider the sRGB color rgb(100% 200% 400%). Using per-component clamping, it would just be converted to achromatic white.

That said, Canvas is not the place to define how gamut mapping happens in the Web platform, and there are plans to flesh this out more in CSS Color 4. Meanwhile, please avoid prose in the spec that would render implementations non-conformant if they don't use naïve clamping (in case there was any such prose).

But beyond how gamut mapping happens, there's also the question of whether it happens. The current behavior of restricting everything on a canvas to the gamut of the color space it's defined on is reasonable. Using the colorSpace argument to just specify a working color space, and allowing both in-gamut and out-of-gamut colors on the canvas also seems reasonable. What was the rationale of going with the first, rather than the second, option? Did you find it satisfies more use cases?

ccameron-chromium (Author) commented

Relative Colorimetric is essentially a set of rules for how gamut mapping should happen, not a gamut mapping algorithm. The per-component clamping you describe does conform to RC, but is a very poor implementation of it. E.g. consider the sRGB color rgb(100% 200% 400%). Using per-component clamping, it would just be converted to achromatic white.

Yes, good point. And yes, particularly when extended into HDR, per-component clamping can create pretty poor-looking results.

That said, Canvas is not the place to define how gamut mapping happens in the Web platform, and there are plans to flesh this out more in CSS Color 4. Meanwhile, please avoid prose in the spec that would render implementations non-conformant if they don't use naïve clamping (in case there was any such prose).

Thanks for the heads-up. We can be softer on the language with respect to the particular gamut mapping algorithm in the canvas section (I had been trying to get that variable nailed down, but if that's getting taken care of in a more central effort, that would be better).

FYI, a related topic, HDR tone mapping (mapping from a larger luminance+chrominance range down to a narrower one), comes up periodically in the ColorWeb CG HDR discussions.

But beyond how gamut mapping happens, there's also the question of whether it happens. The current behavior of restricting everything on a canvas to the gamut of the color space it's defined on is reasonable. Using the colorSpace argument to just specify a working color space, and allowing both in-gamut and out-of-gamut colors on the canvas also seems reasonable. What was the rationale of going with the first, rather than the second, option? Did you find it satisfies more use cases?

With respect to Display P3, most (perhaps all?) users and use cases we encountered wanted the gamut capability of Display P3, rather than having Display P3 as a working space (they didn't mind having Display P3 as the working space -- it's "sRGB-like" enough that it comes with no surprises compared to the default behavior, but that wasn't the part of the feature they were most after).

Allowing in-gamut and out-of-gamut colors requires having >8 bits per channel of storage. That isn't much for a moderately powerful desktop or laptop, but it is quite a burden (especially with respect to power consumption) for small battery-powered devices, and so most (I'm again tempted to say all?) users that I've encountered wanted Display P3 with 8 bits per channel.

(The rest of this might be getting a bit ramble-y, but it also might be some useful background on how we ended up where we did):

In some of the very early versions of the canvas work we tried to separate the working color space from the storage color space. That ended up becoming unwieldy, and we discarded it -- it ended up being much more straightforward to have the storage and working space be the same. In practice, having a separate working space meant having an additional pass using that working space as a storage space, and so having the two not match ended up being downside-only. (There was one sort-of-exception, sRGB framebuffer encoding, which is useful for physically based rendering engines, but is very tightly tied to hardware texture/renderbuffer formats, and so we ended up moving it to a separate WebGL change, and those formats will also eventually find their way to WebGPU's GPUSwapChainDescriptor).

We also discussed having some way to automatically allow arbitrary-gamut content that "just works", without having to specify any additional parameters and without any performance penalties. One of the ideas was to automatically detect out-of-gamut inputs and upgrade the canvas. This was discarded because it would add performance cliffs, would have a complicated implementation, and might not be what an application wants (e.g., if just one pixel is one bit outside of the gamut, the application may prefer it to be clipped rather than pay a cost). Another idea was to use the output display device's color space, but that would then become a fingerprinting vector (and would also have the issue that the output display device is a moving target).

torgo added the "Progress: propose closing" label and removed the "Progress: in progress" label on Jun 21, 2021
LeaVerou (Member) commented

Following alpha's pattern, it's query-able using getContextAttributes (it will be in the returned CanvasRenderingContext2DSettings).

When creating a context, the color space for the context is set to the color space in the attributes, so all enum values that get past the IDL must be supported for 2D canvas and for ImageData. (Also, the proposal document advertised a feature detection interface, which was nixed in WhatWG review).

Just noticed this -- so if I'm reading this right the colorSpace from the attributes will be srgb in that case?

ccameron-chromium (Author) commented

Following alpha's pattern, it's query-able using getContextAttributes (it will be in the returned CanvasRenderingContext2DSettings).
When creating a context, the color space for the context is set to the color space in the attributes, so all enum values that get past the IDL must be supported for 2D canvas and for ImageData. (Also, the proposal document advertised a feature detection interface, which was nixed in WhatWG review).

Just noticed this -- so if I'm reading this right the colorSpace from the attributes will be srgb in that case?

Sorry, I might not have understood the context of the question (let me know if I miss it again here!). Regarding the question of whether, for unsupported color spaces, this attribute will be "srgb": there can be two meanings of "unsupported":

  • A string that isn't a valid PredefinedColorSpace. This will throw an invalid enum exception.
  • A string that is a valid PredefinedColorSpace, but isn't supported by the implementation. The intent of the spec's language is for this category not to exist (e.g., there is no "supported versus not" check in the context creation algorithm). In some earlier versions of the WhatWG PR there was a category of "a valid but not supported color space that falls back to sRGB", but this was considered too complicated (see discussion here for more details).

There's also the case of a user agent that hasn't implemented this feature. In that case, there will be no colorSpace entry in CanvasRenderingContext2DSettings.

LeaVerou (Member) commented

To clarify my question further:

I suppose user agents will implement this proposal by first implementing the srgb and display-p3 color spaces. However, you plan to eventually extend this enum with more values, so there will be a transitional period where authors may try to use e.g. colorSpace: "rec2020" in user agents that only support srgb and display-p3.
In that case, if I'm reading your message correctly, it will throw an invalid enum exception?

annevk (Member) commented Jun 23, 2021

Yeah, that's correct.

LeaVerou (Member) commented

Yeah, that's correct.

Thank you. Is it correct to assume it would throw with the same error in subsequent calls to getContext()?

I.e.

let ctx = canvas.getContext("2d", {colorSpace: "display-p3" });
let ctx2 = canvas.getContext("2d", {colorSpace: "flugelhorn" }); // throws?

Another question that came up in a breakout this week. I see that some examples in the explainer use a media query to decide which color space to use. I assume, however, that the canvas color space and the display device color space are entirely decoupled, and that it's therefore entirely possible to work on a P3 canvas on a less capable (e.g. sRGB) display device. You would obviously not see the non-sRGB colors, but the underlying numbers would be unaffected. Is my assumption correct?

annevk (Member) commented Jun 23, 2021

Yeah (IDL enum validation happens prior to executing the method steps). And yeah, that's correct: the canvas color space and computations are their own thing and are not impacted by any kind of global state.
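
For completeness, the media-query pattern mentioned above only chooses which color space to request up front; it does not couple the canvas to the display. A sketch using the color-gamut media feature (assuming a <canvas> element named canvas):

// Pick a wider canvas color space only when the display advertises P3 support.
const wideGamut = window.matchMedia('(color-gamut: p3)').matches;
const ctx = canvas.getContext('2d', {
  colorSpace: wideGamut ? 'display-p3' : 'srgb',
});
// Either way, the chosen space is a fixed property of the canvas, and the
// numbers stored in it are unaffected by the display the page ends up on.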

LeaVerou (Member) commented

Hi @ccameron-chromium,

We reviewed this proposal this week and overall we are happy with the direction. We were initially troubled by some of the design decisions, but after discussing them further, we came to the same conclusions.

Therefore, we are going to close this issue. We are looking forward to seeing this feature evolve further.

ccameron-chromium (Author) commented

Thank you for the review! Please feel free to reach out if there are any follow-up questions or related topics.
