
Request for Ambient Light Sensor web developer feedback #64

Open
anssiko opened this issue Oct 26, 2020 · 24 comments

@anssiko
Member

anssiko commented Oct 26, 2020

RESOLUTION: Continue monitoring ALS API developer interest and work with more browser vendors. Encourage developers to experiment with existing prototypes.

via https://www.w3.org/2020/10/23-dap-minutes.html#r01

📢 Web developers - To help accelerate this spec's advancement, please share here any pointers to web experiments or prototypes using the Ambient Light Sensor.

I'm cross-linking one such innovative usage, #13 (comment) from @Joe-Palmer, that was brought to the group's attention earlier.

@tomayac
Contributor

tomayac commented Oct 26, 2020

(Signal-boosted on Twitter.)

@tomayac
Contributor

tomayac commented Dec 3, 2020

The comment from @willmorgan at w3c/screen-wake-lock#129 (comment) makes a connection from maximum screen brightness to ALS.

@anssiko
Member Author

anssiko commented Dec 3, 2020

@willmorgan could you expand on your use case for "QR code scanning, document scanning, and other interactive authentication methods" w3c/screen-wake-lock#129 (comment) and explain how the Ambient Light Sensor would help realize these use cases? Also, what types of interactions with other specs (e.g. Screen Wake Lock API, a yet-to-be-specced brightness control API) do you foresee?

Your feedback will help inform implementers' shipping decisions as well as future work on related specs, so it is much appreciated. Feel free to also provide other use cases that could not be realized without the ALS API.

@willmorgan

Hi @anssiko, gladly.

I've actually been working with @Joe-Palmer and @GlenHughes on the same Web-based product at iProov that uses light reflection from human features to assert identities online, similar to how Face ID works, but more secure and resistant to replay attacks and compromised devices. It is possibly the most complex and cool thing I've ever worked on and hinges on a lot of new web platform tech.

To expand upon Joe's original message, we do still rely on a good level of signal strength of light reflecting back from the user's face in order to perform authentication in a user-friendly way. The easy way to obtain this is to maximise the screen brightness, which we can do with a native app on iOS and Android. We can't currently do this on mobile (or laptop!) web.

Without that ability, one idea is to fall back to detecting the current environmental conditions and directing the user to orient themselves away from harsh lighting conditions to increase the signal strength in that way. Ideally one would be able to detect the orientation of the ambient light sensor relative to other devices but I imagine that would complicate the rollout of any standard significantly.

In real-world use cases, consider industries like banking, with its "know your customer" (KYC) requirements, travel, and security in general. I would much prefer to scan my passport, driving license or travel document and then use my mobile device to assert my identity against that document, all on the mobile web, without downloading an app. It beats the paper-based process, which is painful at the best of times - try applying for a mortgage in the UK or EU; I believe the US is even more challenging!

QR codes are a slightly separate use case, but for scenarios like boarding a plane with a boarding pass, entering events or gyms, or granting access to an Amazon locker, the ability to increase brightness to improve the scan-ability of the displayed QR code would also help usability. Failing that, we would fall back to ambient light readings to direct the user accordingly.

I am sure that our competitors in the eKYC space would appreciate the same, but I won't speak for them 😁

Ultimately we are for all initiatives that help close the gap between mobile web and native feature set and performance so would be happy to assist in any reasonable way.

@anssiko
Member Author

anssiko commented Dec 3, 2020

Thank you @willmorgan for explaining the use cases. What, in your mind, are the MVP requirements for a Web API for adjusting screen brightness? E.g. do you need to know the minimum, current, or maximum brightness, or is it enough to have a boolean you flip to turn the brightness to its maximum (a request that might be user settable, or rejected by the user, for example)?

Often Web APIs do not directly map to the full feature set of the respective platform APIs due to privacy and other reasons, so I'm trying to gauge the minimum feature set that'd enable your use cases.

@willmorgan

willmorgan commented Dec 3, 2020

Thanks for getting back to me @anssiko.

Right now our MVP requirements for a Web API are (a hypothetical sketch follows the lists):

Screen Brightness

  • MVP: ability to request maximum brightness
  • MVP: ability to set back to the device's original value (either user specified or set dynamically via the OS according to its own ALS)
  • MVP: ability to handle cases where such requests are denied.
    • To avoid fingerprinting concerns, I'd say we explicitly don't need to know about a failure/deny reason (such as low battery or other user setting). If ALS support existed we could fall back to using that to direct the user wrt lighting.

Ambient Light

  • MVP: Ability to request an ambient light reading, to within a 50 lux resolution.
  • MVP: Ability to know when the ambient light reading was provided, to within 1000ms resolution.
  • Stretch: Ability to get and set update interval, within hardware parameters.
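To make the shape concrete, here's a hypothetical sketch of what an API meeting those requirements could look like. Nothing below exists: `screen.brightness`, its `request`/`release` methods, and the `'maximum'` value are all invented for illustration.

```js
// Hypothetical API surface - not specced or shipped anywhere.
async function runBrightFlow() {
  try {
    // MVP: request maximum brightness; the UA may prompt or deny.
    await screen.brightness.request('maximum');
    // ... run the brightness-sensitive flow (face verification, QR display) ...
  } catch {
    // MVP: handle denial; no reason is exposed, to limit fingerprinting.
    // Fall back to ALS-based guidance here if a reading is available.
  } finally {
    // MVP: restore the device's original value (user- or OS-set).
    await screen.brightness.release();
  }
}
```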

@anssiko
Member Author

anssiko commented Dec 9, 2020

@willmorgan, thanks for the MVP requirements!

Your Ambient Light requirements will be taken into consideration as we plan new work in this Working Group.

The proposal for Screen Brightness would need to be incubated in a Community Group first, given this is a new Web API proposal. If you'd like to help get this process started, see the instructions for writing such a proposal; to capture interactions, please drop a link to the proposal here. After adequate incubation, this Working Group could consider adopting it.

@willmorgan

Thanks @anssiko, done ☝️ above.

@willmorgan

At the 2021 Q2 DAS meeting, we discussed how to advance the ALS spec, potentially by moving the Ambient Light Sensor into getUserMedia in order to benefit from the existing privacy and UI framework that could be used to gate permissions for this data.

ALS inside getUserMedia

This clever hack would potentially provide an expedient foundation to bring ALS into the web platform.

However, having looked into this further, I'm not sure this is worth pursuing:

  • The existing AmbientLightSensor spec and Chrome implementation provide an API that emits data on state change, which is a different model from a MediaStreamTrack's continuous data stream. This would impact existing use cases by requiring them to manage state changes themselves and deal with a lot of irrelevant data, especially considering the resolution of ALS is limited to 50 lux intervals.
  • An ambient light data stream is difficult to categorize in a Media Streams API context without introducing undue deviation. If we entertain this idea further:
    • We thought of the ALS sensor stream as a 1x1 camera stream providing data in a single dimension (lux, rather than an RGB tuple).
    • To produce a clean and progressive API surface, I'd suggest adding a new "sensor" value to MediaStreamConstraints, then implementing a SensorTrack (see the hypothetical sketch below).
    • Taking this approach, to do it cleanly, the rest of the API surface should probably be implemented too. This would introduce scope creep for the use cases described so far. Alternatively, we could implement the bare minimum, but IMHO doing so would introduce a strange API surface.
    • As far as a "full" SensorTrack API implementation is concerned, the existing use cases don't require applying constraints such as sensitivity and resolution, and one could argue that the visibility and success of constraint application could be used for fingerprinting.
  • If we simply said "to get devicelight events, use getUserMedia({ video: true })", this would introduce more dilemmas for all use cases not requiring a media stream:
    • displaying the camera icon wouldn't be quite right in terms of UI design - and that would introduce more scope creep
    • what would happen if we got camera permission and then closed the stream after acquiring permission?
    • how would this work with OS level settings to block cameras but not explicitly block other sensors?
  • This would conflict with and confuse the generic sensor work, which is arguably more mainstream.

In short, having looked into this, I'm not sure that getUserMedia would represent a fast way of making this data available for use cases: it would present a new and unique UX challenge for browser vendors, and present an unusual or quirky API surface for developers to interact with.
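For illustration, the rejected shape might have looked something like this - the `sensor` constraint, `SensorTrack`, and its `data` event are all invented names, which is itself part of the problem:

```js
// Entirely hypothetical - none of these names exist in any spec or engine.
const stream = await navigator.mediaDevices.getUserMedia({
  sensor: { type: 'ambient-light' }, // invented MediaStreamConstraints member
});
const [track] = stream.getTracks(); // would be a SensorTrack, not a video track
track.addEventListener('data', (event) => {
  // A "1x1 camera" emitting a single dimension (lux) instead of RGB.
  console.log(`${event.value} lux`);
});
```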

Bringing this capability into the Web Platform using the specs we have today

Per the above, I would prefer to help tackle the remaining problems with generic sensors to achieve an outcome where we have a sensible and standard way to access this and other APIs.

As we know, the way that existing sensors like the gyroscope are currently accessed is generally to bind to the devicemotion / deviceorientation event on window. In Safari you'd call DeviceMotionEvent.requestPermission() before receiving this information, and the information is further gated by permissions policy (or feature policy 😉).

Today, web developers requiring motion data can obtain it in this way, feature detecting if further permission prompting is required, and handling that as needed. It isn't the cleanest of APIs but it's fairly uncontroversial barring a few feature policy issues which are getting fixed.
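For reference, a minimal sketch of that flow as it works today; the `requestPermission()` call exists in WebKit only and must be triggered from a user gesture (a `#start` button is assumed here for illustration):

```js
async function startMotion() {
  // WebKit gates motion data behind an explicit permission request.
  if (typeof DeviceMotionEvent !== 'undefined' &&
      typeof DeviceMotionEvent.requestPermission === 'function') {
    const state = await DeviceMotionEvent.requestPermission();
    if (state !== 'granted') return;
  }
  window.addEventListener('devicemotion', (event) => {
    const rate = event.rotationRate;
    console.log('Rotation rate (deg/s):', rate?.alpha, rate?.beta, rate?.gamma);
  });
}
// Call from a click handler so the permission prompt is allowed:
document.querySelector('#start')?.addEventListener('click', startMotion);
```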

The way the existing Chrome implementation of ALS works is perfectly suitable for my use case at the moment (sketch after the list):

  • 50 lux resolution;
  • emitting on change;
  • complying with permissions policy (ambient-light-sensor);
  • speculatively in WebKit browsers, calling DeviceLightEvent.requestPermission() if required.
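That model looks roughly like this with the Generic Sensor-based API, as implemented behind a flag in Chromium:

```js
// Requires the 'ambient-light-sensor' permissions policy and, in Chromium,
// the #enable-generic-sensor-extra-classes flag.
if ('AmbientLightSensor' in window) {
  const sensor = new AmbientLightSensor();
  sensor.addEventListener('reading', () => {
    // Readings are quantized (e.g. to 50 lux) and emitted on change.
    console.log(`${sensor.illuminance} lux`);
  });
  sensor.addEventListener('error', (event) => {
    console.error(event.error.name, event.error.message);
  });
  sensor.start();
}
```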

We also discussed that the concerns around privacy intrusion have been disproven, are generally of low severity, are mitigated by reducing resolution and frequency of readings, and may benefit from further mitigation strategies in the future through the generic sensor spec.

As an example, this blog post shows how one could determine browser history using devicelight events by inspecting visited link styles, logos and so on, before resolution was reduced. Lukasz's post is wonderfully creative, and has implications for other areas this group is focusing on, such as the Wake Lock API, but the techniques shown also rely on a lot of modern security tooling, like Content Security Policy, either being misconfigured or simply not present.

I would point out that the use case I'm advancing works in a similar, but higher-fidelity, way to these scenarios, as that's a major part of its value proposition. It fundamentally differs only in that it needs to produce a correlation score from colours flashed on the screen to yield a true/false pass result, rather than estimate websites visited, user behaviour, or some other sensitive credential.

In order to achieve acceptable precision, my own use case requires a full RGB camera feed. It requires ambient light readings because it can only reach a high degree of confidence when environmental light isn't introducing too much noise. I can't envisage how a single lux data stream, even at high precision or frequency, could do this alone -- but would honestly be fascinated and grateful if someone out there could show me how! 😉

Summary and my own thoughts on next steps

To summarise, I do not believe there is much risk of harm in introducing ALS as it is today, or even under the devicelight event. What we have today meets my use case, and the security and privacy implications can be mitigated with the appropriate tooling (CSP and Permissions Policy).

In the spirit of moving things forward, perhaps it would make sense to keep on the lookout for use cases, slightly reduce the existing spec's scope if needed, and proceed from there?

Thanks for reading my massive wall of text!

@anssiko
Member Author

anssiko commented Apr 23, 2021

Thanks @willmorgan for your detailed assessment of pros/cons of ALS inside getUserMedia versus standalone ALS. This addresses the resolution we took at our recent virtual meeting.

This issue will remain open to accept further use case input.

In parallel, we look for opportunities to reduce the ALS scope in a way that won't negatively affect the known key use cases. The group welcomes proposals on ways to further reduce privacy risks while still enabling key use cases. Please consider your proposals in the context of the Security and Privacy Considerations, which note potential privacy risks and current mitigation strategies. Please note that with new information, the existing considerations can be revised.

@mburrett-jumio

Even the most minimal implementation of this - providing a rough estimate of ambient light level coupled with an approximate timestamp - would provide significant benefits to web applications performing any kind of biometric analysis via the camera. The lighting itself is of course a critical component in this type of process.

@anssiko
Member Author

anssiko commented Sep 23, 2021

Here's another use case with a proof-of-concept:

https://tonytellstime.netlify.app/ via #69 (thanks @Aayam555) uses getUserMedia to approximate the ambient light level (by reading pixel values off a canvas) and announces the current time using the Web Speech API when it detects changes.

A more privacy-preserving and energy-efficient version of this app would use Ambient Light Sensor instead.
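For context, the camera-based approximation in that proof-of-concept boils down to something like this (a sketch of the general technique, not the app's actual code):

```js
// Approximate ambient light by averaging camera luma - the workaround
// an Ambient Light Sensor API would make unnecessary.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const video = Object.assign(document.createElement('video'), {
  srcObject: stream,
  muted: true,
});
await video.play();

const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
const ctx = canvas.getContext('2d');

function averageLuma() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let sum = 0;
  for (let i = 0; i < data.length; i += 4) {
    // Rec. 601 luma weights for R, G, B.
    sum += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
  }
  return sum / (data.length / 4); // 0 (dark) to 255 (bright)
}
```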

@rakuco
Member

rakuco commented Sep 28, 2021

As we know, the way that existing sensors like the gyroscope are currently accessed is generally to bind to the devicemotion / deviceorientation event on window. In Safari you'd call DeviceMotionEvent.requestPermission() before receiving this information, and the information is further gated by permissions policy (or feature policy ).
[...]

  • speculatively in WebKit browsers, calling DeviceLightEvent.requestPermission() if required.
    [...]
    To summarise, I do not believe there is much risk of harm in introducing ALS as it is today, or even under the devicelight event.

Just a few clarifications that don't disprove @willmorgan's points:

  • There's a Gyroscope spec that is built upon the Generic Sensors framework. Chromium-based browsers expose it by default. In Chromium-based browsers, the devicemotion and deviceorientation events (which come from the DeviceOrientation Event spec) happen to have their internals implemented using the same code that implements the Generic Sensors-based APIs, but the spec predates the Generic Sensors ones and they are not related (see the side-by-side sketch below).
  • DeviceMotionEvent.requestPermission() exists in WebKit, but not in Blink.
  • DeviceLightEvent and the devicelight event have not existed since 2016 (and they're not shipped by any web engine at this point). They were part of a former incarnation of the Ambient Light Sensor spec, when it was not based on the Generic Sensors framework.
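The distinction in the first point is easiest to see side by side; in Chromium the data comes from the same underlying code but through unrelated specs:

```js
// Generic Sensors-based Gyroscope spec:
const gyro = new Gyroscope({ frequency: 60 });
gyro.addEventListener('reading', () => {
  console.log('Angular velocity (rad/s):', gyro.x, gyro.y, gyro.z);
});
gyro.start();

// Older DeviceOrientation Event spec, predating Generic Sensors:
window.addEventListener('devicemotion', (event) => {
  console.log('Rotation rate (deg/s):', event.rotationRate?.alpha);
});
```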

@marcoscaceres
Member

A more privacy-preserving and energy-efficient version of this app would use Ambient Light Sensor instead.

Sure, but that example is totally contrived (it's hard to imagine anyone doing that in a real app). It also requires a permission to allow camera access, which mitigates the privacy aspects.

@anssiko
Member Author

anssiko commented Sep 29, 2021

This is an important discussion as we figure out the path forward for this API.

First, I wouldn't dismiss any proof-of-concept but encourage all web developers to share their experiments. Thanks all who already shared experiments with us!

Specific to #64 (comment), this is perhaps not the next Instagram, but a minimal functional example of a long-running task that wants to react to changes in a specific attribute in the environment, available light aka ambient light. The long-running nature imposes further requirements on energy efficiency of the implementation.

Given this group is committed to privacy-preserving APIs, I think using a camera API to monitor the ambient light level would be a violation of the data minimization principle. I'm not blaming anyone, web developers will use what they have at their disposal to get their job done. But I think we can do better and help web developers do the right thing the right way.

TL;DR: I'd challenge the group to think of appropriate abstractions that map close enough to the real-world use cases.

Another thought. Using an API for a purpose other than its primary function will likely confuse the user. For a camera API, the primary function would be to capture and/or display a visual image. This concern applies to all APIs that are multi-purpose, and is not specific to this case. Just wanted to note how gating an API behind a permission when it is used in unexpected ways will likely lead to a confusing user experience.

I know some native platforms allow prompting with a custom description with more context, but faking that to get the user to grant access is a concern. @marcoscaceres did Permissions API consider adding that feature and how did that discussion go?

@larsgk

larsgk commented Oct 29, 2021

Another thought. Using an API for a purpose other than its primary function will likely confuse the user. For a camera API, the primary function would be to capture and/or display a visual image.

Not only confusion. A concrete case we have would be using the ALS to adjust the color scheme (e.g. for day / dusk / night mode) for dashboards in cars and on the bridge of vessels, where it's important not to blind users' night vision and to generally provide the best UI for the ambient light at the time. This requires continuous monitoring of the ALS and would not make sense bundled with a camera API (including its permissions).
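A minimal sketch of that dashboard case, assuming the Generic Sensor-based AmbientLightSensor is exposed and allowed by permissions policy; the lux thresholds are illustrative, not from any product:

```js
// Illustrative thresholds only - a real dashboard would tune these.
function schemeFor(lux) {
  if (lux < 10) return 'night';
  if (lux < 200) return 'dusk';
  return 'day';
}

const sensor = new AmbientLightSensor();
sensor.addEventListener('reading', () => {
  // Continuously retheme the dashboard as ambient light changes.
  document.documentElement.dataset.scheme = schemeFor(sensor.illuminance);
});
sensor.start();
```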

@marcoscaceres
Member

The use case is not in question: it's a great and valid use case. However, what's in question is the solution (ALS) to address the use case.

The use case seems very tied to prefers-color-scheme (literally for UIs, as mentioned). The ALS doesn't have a nice way of hooking into CSS. Wouldn't it make more sense to just add "dusk" or whatever to prefers-color-scheme? That would afford users control over when the UI is applied, without needing ALS at all (ALS can still be used by the browser to make the "dusk" determination - or the user can just choose "I always prefer dusk", or just let the system decide (auto), like in macOS).
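For comparison, hooking into the existing preference from script is already straightforward today, with just the shipped dark/light pair:

```js
// React to the OS-level color-scheme preference the browser already exposes.
const query = window.matchMedia('(prefers-color-scheme: dark)');
function applyScheme(mq) {
  document.documentElement.dataset.theme = mq.matches ? 'dark' : 'light';
}
applyScheme(query);
query.addEventListener('change', applyScheme);
```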

@tomayac
Contributor

tomayac commented Nov 4, 2021

(Bikeshedding, I know.) Wording it as "dusk" is dangerous, though, as dusk is connected to the time between day and night. You don't call temporary darkness in a tunnel "dusk".

The Google Maps platform-specific apps turn dark when one drives through a tunnel. Experiments suggest they're not using an ALS to do so, which to me is surprising.

@marcoscaceres
Member

Yes, I agree... that wasn't to imply that we would use "dusk". I should have been clearer.

Experiments suggest it's not using an ALS for doing so, which to me is surprising.

I think that's correct/good... I think macOS also does it based on the time of sunset, but I'm not sure. I think having multiple heuristics is actually a good thing (which may or may not include ALS) - including just giving users control.

@anaestheticsapp

I added ALS to my logbook app a year ago to adjust the app's color scheme based on ambient light (https://twitter.com/AnaestheticsApp/status/1499425060402212864). In many countries it is mandatory for anaesthetic doctors to log every case they do, and many people do this in theatre. The problem is that, depending on the type of surgery, light conditions in theatre are either very bright (dark mode becomes unusable) or very dark (light mode is too bright and distracts other people in theatre). Users currently have to manually switch the color scheme multiple times a day or manually change the brightness of their screen. It would be great to see this implemented!

@anssiko
Member Author

anssiko commented May 3, 2023

@anaestheticsapp thank you for your encouraging feedback! I can't stress enough how important it is for us folks working on new web capabilities to hear directly from forward-looking web developers (you!) who understand the context-sensitive, real-world user needs.

@marcoscaceres
Member

This still seems like an OS wide problem, not a web page level problem.

@anaestheticsapp, like, what do the rest of the apps in the OS do?

@anaestheticsapp

Good question - I don't develop native apps, so I don't know if they have access to an ALS. But I wouldn't expect every app to behave this way, just the ones actively used in frequently changing light conditions, and I would expect users to opt in to this behaviour for each app.

@willmorgan

willmorgan commented May 12, 2023

Google Maps on iOS/CarPlay adapts the display based on the ambient light sensor. If I drive through a tunnel, for example, it knows to switch to dark mode.
