BroadcastChannel postMessage requires synchronous possibly-cross-process access #5327
And by the way, I do think this somewhat fits what UAs are doing in practice. For example, in the constructor they all seem to connect the channel to some background component (which can be modelled as a parallel queue). FF: https://searchfox.org/mozilla-central/source/dom/broadcastchannel/BroadcastChannel.cpp#325 You can also see pretty clearly how this infrastructure is set up when the first channel is created in a given context, see https://github.com/chromium/chromium/blob/ccd149af47315e4c6f2fc45d55be1b271f39062c/third_party/blink/renderer/modules/broadcastchannel/broadcast_channel.cc#L27 postMessage also goes through that component. FF: https://searchfox.org/mozilla-central/source/dom/broadcastchannel/BroadcastChannel.cpp#372 It's a bit harder to see exactly how messages sent are then received, but it seems there is "something in the renderer" (which would match the "Per-global router" I tried to spec) and "something in the browser" (which would match the "Per-UA router") that are connected and enable messages to be routed around. When a message is actually received by that "something in the renderer", a task is indeed queued. A pretty good higher-level overview for Chromium can be found at https://github.com/chromium/chromium/blob/ea7392901ed885ff4ed64db6dcda7ee23eb62ff5/third_party/blink/public/mojom/broadcastchannel/broadcast_channel.mojom#L14
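The renderer/browser split described in this comment can be sketched as two kinds of parallel queues: every "per-global router" registers itself with a single UA-side router, and each queue drains its steps in order on its own background thread. This is a hypothetical Python model of that topology, not spec or implementation text; all class and attribute names here are invented for illustration.

```python
import queue
import threading


class ParallelQueue:
    """A FIFO of steps drained, in order, by one background thread."""
    def __init__(self):
        self._steps = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def enqueue(self, step):
        self._steps.put(step)

    def _run(self):
        while True:
            # Run each enqueued step; steps from one caller stay ordered.
            self._steps.get()()


class UARouter(ParallelQueue):
    """The unique browser-side router: origin -> list of per-global routers."""
    def __init__(self):
        super().__init__()
        self.globals_by_origin = {}


class PerGlobalRouter(ParallelQueue):
    """Renderer-side router for one global; registers with the UA router."""
    def __init__(self, ua_router, origin):
        super().__init__()
        self.origin = origin
        # Registration itself happens as a step on the UA router's queue.
        ua_router.enqueue(
            lambda: ua_router.globals_by_origin
                             .setdefault(origin, []).append(self))
```

Because each queue is strictly FIFO, enqueuing a sentinel step after the registrations and waiting for it guarantees the registrations have run.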
I think you're correct that something in that direction would be a more accurate description, but it's also worth considering what we gain from describing it that way, as in some ways it will make things harder to follow.
Yes, it's definitely complicated, yet it might be worth the cost, for two reasons:
Also, this might not actually be worth it for BroadcastChannel; the much simpler change of #5305 is probably more worthwhile. So this is also meant as a kind of proof of concept for specifying some of the IPC machinery more precisely, as it's "easier" to do for this channel than for cross-window messaging, for example (which would require imperatively allocating a yet-to-be-defined proxy to a cross-origin WindowProxy, and so on).
Well, reverse engineering might still be needed here and there, especially if implementations want similar performance profiles. See also the note at the end of https://infra.spec.whatwg.org/#algorithms. It's important for specifications to spell out a model that does not lead to observable differences when implemented, but observable differences due to performance or inherent races are deemed acceptable, if that makes sense. As we make things more concrete around event loops, I do agree we want some language to deal with what we're doing here. Either saying we're doing something unusual or spelling it out through parallel queues seems reasonable, but once we're closer with the other things the answer might be more obvious.
This way nothing happens if something got closed during delivery. (Implementations already did this.) Tests: web-platform-tests/wpt#21895. Fixes #1371. Possible follow-up: #5327.
I've been looking a bit into the Service Worker spec, and I think what it does with the "job queue" concept and the various algorithms is quite similar to what I've tried to express here (I wasn't aware of these existing concepts before). For example, concepts like reject-job-promise are specified as performing an operation on an event loop, by way of queuing a task on the event loop of the job's client. So I think the SW spec offers a much clearer example of how to potentially solve various issues like this one or #3691 (that still leaves the question of whether it's actually worth the complexity for the issues in this spec).
Branched off of #3691 and in the context of #5305. @domenic @bzbarsky @annevk
I think BroadcastChannel's postMessage is facing some of the same "cross-process" issues, and this one actually seems "easier" than the window equivalent, since it doesn't involve potential navigation (although this is perhaps not true, given the involved origin check?). In light of my suggestion in the other issue to "use a parallel queue" to solve this, I will try to do it for BroadcastChannel here. Please bear with me, as this will probably be a very loose initial definition...
First a few definitions:
When the BroadcastChannel() constructor is invoked, after having created the channel, but before returning it, add the following steps:
Let "global" be the settings object of the new channel.
If global's "is_managing_broadcast_channels" flag is false, run the following "start managing broadcast channels" steps:
Now that global is set up to manage channels, run the following steps:
(end of BroadcastChannel() constructor definition)

To run the "local broadcast steps" means:
- Let "local destinations" be the list of channels in the set found at "channel-name" in the global's map of channels
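A hedged sketch of these "local broadcast steps": look up the set of channels stored under the channel name in the global's map, then queue a task on the global's event loop for each destination. The `Global`/`Channel` classes, the source-channel exclusion, and the event loop as a plain list are my own simplifications for illustration (a real model would also serialize/deserialize the message).

```python
from dataclasses import dataclass, field


@dataclass
class Global:
    channels: dict = field(default_factory=dict)    # name -> set of channels
    event_loop: list = field(default_factory=list)  # queued tasks (simplified)


@dataclass(eq=False)  # identity-based hashing so channels can live in sets
class Channel:
    name: str
    received: list = field(default_factory=list)


def local_broadcast_steps(global_, channel_name, message, source=None):
    # Let "local destinations" be the channels found at channel_name
    # in the global's map of channels.
    local_destinations = global_.channels.get(channel_name, set())
    for channel in local_destinations:
        if channel is source:
            continue  # a channel does not receive its own messages
        # Queue a task on the global's event loop to fire a message event.
        global_.event_loop.append(lambda c=channel: c.received.append(message))
```

Draining `event_loop` then delivers the message to every same-named channel except the source.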
The postMessage(message) method, when invoked on a BroadcastChannel object, must run the following steps:
(Run steps 1 to 6)
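Since the individual postMessage steps are elided above, this is only a guess at the overall data flow the issue describes: postMessage enqueues steps on the per-global router, which forwards to the unique UA router, which fans the message out to every same-origin per-global router, each of which would then run the local broadcast steps. Routers are modelled here as plain FIFO lists drained synchronously so the sketch is deterministic; all names are illustrative, not spec terms.

```python
class Router:
    """A parallel queue reduced to a FIFO list, drained on demand."""
    def __init__(self):
        self.steps = []

    def enqueue(self, step):
        self.steps.append(step)

    def drain(self):
        while self.steps:
            self.steps.pop(0)()


def post_message(message, channel_name, source_global, all_globals, ua_router):
    """Sketch of the fan-out: per-global router -> UA router -> each
    same-origin per-global router, which queues local delivery."""
    origin = source_global["origin"]

    def forward_to_ua():
        def fan_out():
            for g in all_globals:
                if g["origin"] != origin:
                    continue  # only same-origin globals receive the message
                g["router"].enqueue(
                    lambda g=g: g["inbox"].append((channel_name, message)))
        ua_router.enqueue(fan_out)

    source_global["router"].enqueue(forward_to_ua)
```

Note the source global still receives the message here; excluding the source *channel* (not the whole global) would happen in the local broadcast steps.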
The "clean-up the Per-global broadcast router" steps are:
1. Set the global's "is_managing_broadcast_channels" to false.
2. Discard the Per-global broadcast router, as well as any tasks it would have enqueued.
3. Enqueue the following steps to the unique "UA broadcast router" of the UA:
- Remove the Per-global broadcast router from the map found at origin.
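The three clean-up steps above can be sketched directly, assuming a UA router that keeps a map of origin to per-global routers (globals and routers as plain dicts, with names invented for illustration):

```python
def clean_up_per_global_router(global_, ua_router):
    # 1. The global stops managing broadcast channels.
    global_["is_managing_broadcast_channels"] = False

    # 2. Discard the per-global router, as well as any tasks
    #    it would have enqueued.
    router = global_.pop("router")
    router["steps"].clear()

    # 3. Enqueue a step on the unique UA router to remove this
    #    per-global router from the map found at the origin.
    origin = global_["origin"]
    ua_router["steps"].append(
        lambda: ua_router["routers_by_origin"][origin].remove(router))
```

Because step 3 is enqueued rather than run synchronously, the UA-side removal happens asynchronously, matching the parallel-queue model used elsewhere in this sketch.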
Every time a BroadcastChannel object is GC'ed, remove the channel from the map of the global. If the map is empty, run the "clean-up the Per-global broadcast router" steps.
Looking forward to your thoughts.