[HTML] Canvas place element #1076

khushalsagar opened this issue Sep 25, 2024 · 1 comment
khushalsagar commented Sep 25, 2024

Request for Mozilla Position on an Emerging Web Specification

Other information

This proposal is being incubated in WICG and following the WHATWG stages process. Please use the WICG repo linked above for detailed issues.

khushalsagar (Author)

Copying over some of the offline discussion on this for posterity.

  1. Lack of / insufficient explanation of why current technologies don't or can't solve this problem, e.g. why can't web devs just position an element over their canvas? For some use cases, graphics APIs like WebGL are probably enough.

At first glance, positioning an element over the canvas seems like an obvious polyfill for this feature, but it's difficult to interleave canvas-painted and DOM-painted content. For example, you can't render canvas-painted content on top of the DOM element. There are workarounds, like stacking multiple canvases and DOM elements, but that increases complexity. For 3D, there are both UI use cases and text-in-GL use cases.

  2. Likely to rapidly cause webcompat issues for every other browser (and thus pressure others into shipping "something" prematurely before a chance at proper incubation/standardization, issue raising/resolving, etc.)

What we have so far in Chrome is just a proof-of-concept. We plan to incubate this feature in WHATWG following the Stages process to ensure there’s extensive cross-browser and developer feedback before the API is shipped. We realize that this is not a trivial feature and will take time to properly consider all the issues.

  3. Confused about how event handling would work, e.g.: what if you render a <select> in the canvas, then expand it, and now you somehow have a whole mess of things to do.

A terse overview of how the feature works: the browser keeps track of where the placed element draws into the canvas buffer, so that if the element's content changes (including its bounding box size), the canvas buffer is re-composited with the updated content.

The browser also keeps track of the bounding box for each draw command, along with its transform matrix and clip, to support hit-testing. These data structures stay in sync with the element's content as painted into the canvas buffer, so the author wouldn't need to do anything if the element's rendering changes (from user interaction or otherwise). Top-layer elements, like select popovers, could potentially be rendered outside of the canvas.
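The bookkeeping described above can be sketched in plain JavaScript. This is a hand-rolled illustration, not the proposal's API: the record shape, the `hitTest` helper, and the use of canvas-style 2D affine matrices `[a, b, c, d, e, f]` are all assumptions made for this sketch.

```javascript
// Each placed element gets a record: its bounding box in local space
// plus the canvas transform that was active when it was drawn.
// (Record shape and helper names are invented for illustration.)

// Invert a 2D affine matrix [a, b, c, d, e, f] in canvas-style ordering:
// x' = a*x + c*y + e,  y' = b*x + d*y + f.
function invert([a, b, c, d, e, f]) {
  const det = a * d - b * c;
  return [
    d / det, -b / det,
    -c / det, a / det,
    (c * f - d * e) / det,
    (b * e - a * f) / det,
  ];
}

function apply([a, b, c, d, e, f], x, y) {
  return { x: a * x + c * y + e, y: b * x + d * y + f };
}

// Records are kept in draw order; hit-test front-to-back (topmost first).
function hitTest(records, x, y) {
  for (let i = records.length - 1; i >= 0; i--) {
    const r = records[i];
    // Map the canvas-space point into the element's local space.
    const p = apply(invert(r.transform), x, y);
    const { left, top, width, height } = r.bbox;
    if (p.x >= left && p.x < left + width &&
        p.y >= top && p.y < top + height) {
      return r.element;
    }
  }
  return null; // fell through to the canvas itself
}
```

Because the browser owns these records for 2D canvas, it can update them whenever the element repaints, which is why the author gets hit-testing "for free" in that mode.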

The situation is more complicated for WebGL/WebGPU, where the browser can't automatically support re-rendering. The pattern there is to notify script when an element needs repainting (i.e. when the browser would do a paint invalidation internally) and allow script to redraw the canvas in response. The hit-testing data structures mentioned above are provided by script in this case. This does mean more work for the author, who has to redraw the element and keep the hit-testing data in sync. But with this feature, that would be a small subset of the work they do today, where in addition to managing redraw/hit-testing of interactive content, they have to:

  • Ship a text rendering engine to lay out and paint multi-line text.
  • Provide accessibility support.
  • Handle text selection and event routing (all the capabilities built into the web).
  • Handle platform differences to try to keep the theme consistent with the underlying OS.
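The invalidation-driven redraw loop for WebGL/WebGPU described above can be mocked in a few lines. This is a pure-logic sketch, not the proposed API: `PlacedElementRegistry`, `invalidate`, and `flush` are invented names, and `flush` stands in for the browser's per-frame batching.

```javascript
// Mock of the pattern: the browser side marks elements dirty when their
// rendering changes; at a frame boundary the author's redraw callback
// receives the coalesced batch and repaints the canvas.
// (All names here are invented for illustration.)
class PlacedElementRegistry {
  constructor(onNeedsRedraw) {
    this.onNeedsRedraw = onNeedsRedraw; // author-supplied redraw callback
    this.dirty = new Set();             // coalesces repeat invalidations
  }

  // "Browser" side: an element's rendering changed (style, content, size).
  invalidate(element) {
    this.dirty.add(element);
  }

  // Frame boundary: hand the coalesced batch to the author's callback.
  // A real browser would drive this from its own frame loop.
  flush() {
    if (this.dirty.size === 0) return;
    const batch = [...this.dirty];
    this.dirty.clear();
    this.onNeedsRedraw(batch);
  }
}

// "Author" side: redraw the scene and rebuild hit-test data for just
// the elements that changed.
const redrawn = [];
const registry = new PlacedElementRegistry(batch => {
  for (const el of batch) redrawn.push(el);
});

registry.invalidate('button');
registry.invalidate('button'); // coalesced with the first
registry.invalidate('label');
registry.flush();
```

The Set-based coalescing is the important part: however many times an element invalidates within a frame, the author redraws it once.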

  4. How do events work when elements are drawn over or transformed arbitrarily using shaders? E.g. what happens when drawing a line over a button, clicking the antialiased edge of a rotated button (or the antialiased edge of the line), clicking a button that has been morphed by a shader into an irregular shape (potentially something as arbitrary as a fluid sim), or the demo with the glass dragon in front of elements… where does a click go?

There are two ways we're thinking about hit-testing:

  • For simple cases, a declarative approach which provides the same level of support as DOM elements. Likely the most complicated thing we could do here is analogous to an arbitrary CSS clip-path. For 2D canvas, this can be done automatically. For WebGL/WebGPU, script provides this as a part of painting the element.
  • There could be other paint commands occluding the element, like the line over a button case you mentioned. For 2D canvas it’s possible to track the bounding box for each draw command (Blink already does that for optimizations). So hit-testing can account for it. We could also add a pointer-events type of param to each draw command to indicate whether it should swallow an event or not. We don’t think it’s doable for WebGL/WebGPU but want to explore this further during standardization.

For complex cases in WebGL/WebGPU, we're considering a script API: if an event hits the canvas buffer, we ask script to hit-test, provide the target element, and map the point into the element's image. This would take care of advanced shader cases, like splitting an element in two.

The script API ensures that authors can handle any case and as patterns become more common, we can try to make them declarative.
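The occlusion case from the first bullet (a line drawn over a button) can be sketched as a front-to-back walk over per-command bounding boxes. This is an illustration of the idea, not the proposal: the command shape and the `pointerEvents` field are assumptions for this sketch.

```javascript
// Each 2D draw command records its bounding box; element placements also
// record their target. A per-command "pointerEvents" flag (invented name,
// analogous to CSS pointer-events) says whether a plain paint command
// swallows events or lets them fall through.
function route(commands, x, y) {
  for (let i = commands.length - 1; i >= 0; i--) {
    const c = commands[i];
    const inBox =
      x >= c.bbox.left && x < c.bbox.left + c.bbox.width &&
      y >= c.bbox.top && y < c.bbox.top + c.bbox.height;
    if (!inBox) continue;
    if (c.target) return c.target;                   // a placed element
    if (c.pointerEvents !== 'none') return 'canvas'; // occluder swallows
    // pointerEvents: 'none' -> keep searching the commands below
  }
  return 'canvas'; // nothing hit; the event goes to the canvas itself
}
```

Whether the occluding line swallows the click or lets it reach the button underneath is then a one-flag decision per draw command, which is the behavior the declarative option above would need to pin down.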

  5. Feels/looks like another Web Components v0, Houdini, WebRTC (Plan B)

Is this question similar to 2? Does our answer there address this concern?

  6. Seems like the sort of thing all vendors need to be actively involved with for it to succeed without years of turmoil

Absolutely agreed. The standardization for this has only just started. We're expecting a good chunk of the discussion to happen at WHATWG, with instances of joint CSS/WHATWG meetings, since there's overlap with CSS concepts.

  7. Seems easy to cause bad performance (a perf footgun)

There are design aspects which require close attention for good performance. For example, re-compositing the canvas when an element changes theoretically requires preserving every draw command that has ever been rendered onto the canvas. That is of course not feasible, so browsers would need to squash/layerize internally; detect when a command (such as a clear) allows discarding the commands before it; or issue an event asking authors to redraw the canvas completely (this already exists for cases where the GPU context is lost).

While it’s not trivial, ensuring good perf for this feature doesn’t seem insurmountable.
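The "a clear lets you discard earlier commands" optimization mentioned above can be sketched directly. This is an illustration under assumed data shapes (the `opaque` flag and `bbox` fields are invented for this sketch), not how any engine actually stores its display list.

```javascript
// Keep the canvas's draw-command log, but once a command opaquely covers
// the whole canvas (e.g. a full-canvas clearRect or opaque fillRect),
// nothing before it can ever show through, so those commands can be dropped.
function pruneLog(commands, canvasWidth, canvasHeight) {
  let cut = 0; // index of the last full-canvas opaque cover, if any
  for (let i = 0; i < commands.length; i++) {
    const c = commands[i];
    const coversCanvas =
      c.bbox.left <= 0 && c.bbox.top <= 0 &&
      c.bbox.left + c.bbox.width >= canvasWidth &&
      c.bbox.top + c.bbox.height >= canvasHeight;
    if (c.opaque && coversCanvas) cut = i;
  }
  // Everything before the last full-canvas cover is unreachable.
  return commands.slice(cut);
}
```

Bounding the retained command log this way is what keeps "re-composite on element change" from turning into unbounded memory growth; the redraw event is the fallback when even the pruned log is too expensive to keep.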
