automation: Reading updates without active Sensor instances should be stashed, not discarded #478
Comments
Oh yeah, and the Automation section needs to remain decoupled enough from the rest of the spec so that it can continue being used by the Device Orientation API spec...
I agree the "In an implementation-defined way, make parsedReading available so that it can be obtained by platform sensors connected to virtualSensor." step is doing a lot of heavy lifting, and it would be better to make the current reading a property of the virtual sensor and/or platform sensor so that this data flow could be better defined.
As described in #478, the current behavior is underspecified and allows implementations to discard readings provided to the "Update virtual sensor reading" WebDriver endpoint when no sensors are currently registered to receive readings. This makes writing some tests (especially Device Orientation API ones) harder, as one then needs to order calls in a very specific way so that a virtual sensor is created, a sensor is activated, a virtual sensor reading is provided, and only then is it consumed by a sensor. To complicate things further, implementations may differ in when they deactivate sensors (think page visibility handling, for example). It is easier to mandate one specific behavior:

* Readings are never discarded. They are always saved and are stored for as long as the virtual sensor exists.
* If there are active sensors when they are received, they are also made available to said active sensors.
* Whenever a sensor connects to a virtual sensor and wants to start receiving readings, the virtual sensor will provide its saved reading to it if there is one.

In practice, this means one can call `test_driver.update_virtual_sensor()` before a Sensor object is even created (or a Device Orientation event listener is added), and that if one calls Sensor.stop() followed by Sensor.start() the previous reading will be reported again (see the sketch after this description).

Making this possible requires several changes across the specification though, as virtual sensors need to keep track of platform sensors but platform sensors did not have their lifetime properly defined. Furthermore, the virtual sensor changes need to be flexible enough to be usable in the Device Orientation specification, which has no concept of platform sensors or sensor types at all.

Platform sensors:

- Platform sensors are now Document-bound. Their definition in the "Concepts" section remains more abstract, but when used in algorithms and interacting with platform objects, they are always treated as belonging to a Document. This change also removes all usages of "browsing context" and finally gets rid of all Bikeshed warnings.
- Platform sensors are kept in a per-Document `[[sensorMapping]]` internal slot, a map of sensor types to platform sensors. This helps with the above, and also helps define that there is at most one platform sensor per sensor type in a Document, which is shared by Sensor instances with the same associated sensor type.

Virtual sensors:

- The concept of "platform sensor-like" was added to account for the Device Orientation API. It is like a platform sensor, but without the Generic Sensor-specific parts that are not used in the Automation section (sensor type, latest reading map, activated sensor objects).
- Virtual sensors now have two more associated items: a set of platform sensor-likes (which are notified when there is a reading), and a latest saved reading map (self-explanatory).
- When "update virtual sensor reading" is called, the stored reading now contains a "timestamp" key, so that if a reading is sent more than once it contains the same timestamp.

The interaction between per-Document platform sensors and per-top-level traversable virtual sensors works as follows:

- When a virtual sensor exists, a platform sensor adds itself to the former's set of connected platform sensor-likes and also sets it as its associated device sensor.
- When "set sensor settings" is called with an empty active sensor objects set, the platform sensor removes itself from its virtual sensor's set of connected platform sensor-likes. When the active sensor objects set is not empty, it checks whether it is being activated (i.e. the previous sampling frequency was 0) and, if so, it retrieves the virtual sensor's latest saved reading.
- When a Document is unloaded, all platform sensors in `[[sensorMapping]]` are removed from their virtual sensors' connected platform sensor-likes set.
- When a virtual sensor is removed, any connected platform sensor-likes have their associated device sensor set to null.

Fixes #444 (removes all remaining browsing context references). Related to #478.
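A minimal WPT-style sketch of the intended ordering, assuming an accelerometer virtual sensor and the testdriver.js helpers named above; the reading values are placeholders:

```js
// Provide a reading before any Sensor instance exists; it must be stashed,
// not discarded.
await test_driver.create_virtual_sensor('accelerometer');
await test_driver.update_virtual_sensor('accelerometer', {x: 1, y: 2, z: 3});

const sensor = new Accelerometer();
sensor.addEventListener('reading', () => {
  // Fires with the stashed reading once the sensor activates.
});
sensor.start();

// Later, stop() followed by start() causes the saved reading to be
// reported again, since the virtual sensor never discards it.
sensor.stop();
sensor.start();
```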
Just like with w3c/sensors#487, the idea is to make it possible for users to write, for example

```js
await test_driver.create_virtual_sensor(...);
await test_driver.update_virtual_sensor(...);
window.addEventListener('deviceorientation', ...);
```

and receive the readings above even if the connection to the virtual sensor was made only after the addEventListener() call. Previously, users would need to carefully order the calls to addEventListener() and update_virtual_sensor(), and possibly even add a dummy event listener first to get everything to work correctly.

Unfortunately, just as with w3c/sensors#487 this requires quite a few changes, even more so in this specification, which is quite vague when it comes to connecting to sensors and the lifetimes of such connections. Some of that behavior is now specified for the virtual sensors case, so that each Document has a `[[virtualSensorMapping]]` that maps virtual sensor types to "orientation event platform sensor-likes", a concept borrowed from Generic Sensor's Automation section. Similarly to that section, the idea is that:

- Whenever the user agent needs to connect to the underlying hardware because addEventListener() was called, it first attempts to find a virtual sensor, gets/creates an orientation event platform sensor-like, adds it to `[[virtualSensorMapping]]` and to the virtual sensor's connected platform sensors set, and retrieves any existing readings from the virtual sensor.
- When a Document is being unloaded, all platform sensor-likes in `[[virtualSensorMapping]]` are removed from their virtual sensors' connected platform sensors set.
- Similarly, when a virtual sensor is removed, any platform sensor-likes in its connected platform sensors set have their associated "device sensor" set to null.

In the rest of the spec, we switch from attempting to derive a virtual sensor from the top-level traversable directly to trying to find a suitable platform sensor-like entry in `[[virtualSensorMapping]]` and checking its associated virtual sensor when one is set.

Related to: w3c/sensors#478.
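For illustration only, a rough JS model of the connection flow described above; none of the object shapes or names here (sensorLike, deviceSensor, connectedPlatformSensorLikes, latestSavedReading) are actual spec concepts:

```js
// Sketch of the per-Document [[virtualSensorMapping]] behavior; all names
// below are assumptions made for this example, not spec terminology.
const virtualSensorMapping = new Map(); // virtual sensor type -> platform sensor-like

function connectForOrientationEvents(virtualSensorType, virtualSensor) {
  // Triggered when addEventListener('deviceorientation', ...) needs a sensor.
  let sensorLike = virtualSensorMapping.get(virtualSensorType);
  if (!sensorLike) {
    sensorLike = { deviceSensor: null, latestReading: null };
    virtualSensorMapping.set(virtualSensorType, sensorLike);
  }
  if (virtualSensor) {
    sensorLike.deviceSensor = virtualSensor;
    virtualSensor.connectedPlatformSensorLikes.add(sensorLike);
    // Pick up any reading the virtual sensor has already saved.
    sensorLike.latestReading = virtualSensor.latestSavedReading;
  }
  return sensorLike;
}

function onDocumentUnload() {
  // On unload, disconnect every platform sensor-like from its virtual sensor.
  for (const sensorLike of virtualSensorMapping.values()) {
    sensorLike.deviceSensor?.connectedPlatformSensorLikes.delete(sensorLike);
    sensorLike.deviceSensor = null;
  }
}
```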
This was originally discussed in https://chromium-review.googlesource.com/c/chromium/src/+/5233627, where it was argued that if https://w3c.github.io/sensors/#update-virtual-sensor-reading-command is invoked when no active Sensor instances of a given type exist, then the reading should be stashed and processed when the state changes.
In JS, this means that with code along the following lines:
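A sketch of such a snippet, assuming the testdriver.js virtual sensor helpers and an accelerometer virtual sensor; the x/y/z values match those mentioned below:

```js
// Provide a reading while no Accelerometer instance is active yet.
await test_driver.create_virtual_sensor('accelerometer');
await test_driver.update_virtual_sensor('accelerometer', {x: 1, y: 2, z: 3});

const sensor = new Accelerometer();
sensor.addEventListener('reading', () => {
  console.log(sensor.x, sensor.y, sensor.z); // 1 2 3
});
sensor.start();
```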
A "reading" event will be fired on sensor, and the x/y/z attributes will return 1/2/3 respectively instead of the update being discarded because at the time update_virtual_sensor() (and consequently https://w3c.github.io/sensors/#update-virtual-sensor-reading-command) is invoked there is no active Accelerometer instance. This applies to other situations, such as if all sensors of a given type have been stopped, or if the page becomes hidden.
The current wording in the spec is intentionally vague -- it does not explicitly prevent the behavior described above, but it does not mandate it either, so implementations are currently free to just discard readings in the situations above.
I am not entirely sure how to fix this in the spec. One possibility would be making https://w3c.github.io/sensors/#update-virtual-sensor-reading-command stash the reading under some circumstances and getting https://w3c.github.io/sensors/#activate-a-sensor-object to use the stashed reading based on the size of a platform sensor's activated sensor objects set. Still:
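A rough sketch of that possibility, using made-up names (stashedReading, connectedPlatformSensors, deliverReading) rather than actual spec hooks:

```js
// Illustrative only; these functions stand in for the WebDriver command and
// the "activate a sensor object" steps, and the property names are invented.
function updateVirtualSensorReading(virtualSensor, reading) {
  virtualSensor.stashedReading = reading; // never discard the reading
  for (const platformSensor of virtualSensor.connectedPlatformSensors) {
    platformSensor.deliverReading(reading); // forward it to anyone listening
  }
}

function activateSensorObject(platformSensor) {
  const virtualSensor = platformSensor.deviceSensor;
  // On (re)activation, replay the stashed reading if there is one.
  if (virtualSensor && virtualSensor.stashedReading !== null) {
    platformSensor.deliverReading(virtualSensor.stashedReading);
  }
}
```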