WebXRManager: Use reported XRView.eye for setting stereo layers #29872
Description
Inspired by #29742, this PR uses the reported `XRView.eye` when creating cameras for an XR device's views - as well as finalising #29742 for cameras beyond the first two.

I've removed the explicit `cameraL` and `cameraR` variables, as well as the separate `cameras` array, in favour of directly using `cameraXR.cameras` and relying entirely on the device's reported `XRViewerPose`. The knock-on effect is that all of the cameras are created on receiving the device's first `XRViewerPose`, rather than at the instantiation of WebXRManager (relevant: #23972).
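To make the intent concrete, here is a minimal sketch of creating the cameras from the first `XRViewerPose` and keying their stereo layers on `XRView.eye` rather than on view index. This is not the actual patch: the helper name `ensureCameras` is hypothetical, and the layer numbers simply follow three.js's existing convention of layer 1 for left-eye content and layer 2 for right-eye content.

```js
import * as THREE from 'three';

// Hypothetical helper: lazily populate cameraXR.cameras from the first
// XRViewerPose reported by the device, instead of pre-creating cameraL/cameraR.
function ensureCameras( cameraXR, viewerPose ) {

	if ( cameraXR.cameras.length > 0 ) return;

	for ( const view of viewerPose.views ) {

		const camera = new THREE.PerspectiveCamera();

		// Key the stereo layer on the spec-reported eye, not the view's index:
		// 'right' views see layer 2; 'left' and 'none' views see layer 1.
		camera.layers.enable( view.eye === 'right' ? 2 : 1 );

		cameraXR.cameras.push( camera );

	}

}
```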
This leads to two additional assumptions:

1. Devices correctly report the `eye` set on their `XRView`s.
2. It's acceptable for views with `eye` set to `"none"` to have the 'left' content rendered, which I think is reasonable based on this from immersive-web: We shouldn't default XREyes to left when they're unknown immersive-web/webxr#620 (comment)

The combination of these two things means that the main regression I've been able to envision would be that, for a device which doesn't report its `eye`s correctly, the right eye would be a duplicate of the left eye (whereas currently we're assuming that the second view is always the right eye).
The upside is that for devices with more than two views (even if those views are all `"none"`-eyed), pre-rendered stereo content (e.g. webxr_vr_video) will be rendered correctly.

If these assumptions are sound, I think that moving the camera creation after `frame.getViewerPose()` is reasonable, as it allows the XRManager to rely on the spec for eye differentiation - but I'm happy to scale this back to just ensuring that all the child cameras' layers are kept in sync if this seems too sweeping.
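For reference, a simplified sketch (in the style of the webxr_vr_video example, with the UV remapping that maps each mesh to its half of the video omitted) of how pre-rendered stereo content relies on those layers:

```js
import * as THREE from 'three';

const scene = new THREE.Scene();

const video = document.createElement( 'video' );
video.src = 'textures/stereo_video.mp4'; // hypothetical asset path
const texture = new THREE.VideoTexture( video );

const material = new THREE.MeshBasicMaterial( { map: texture, side: THREE.BackSide } );
const geometry = new THREE.SphereGeometry( 500, 60, 40 );

// Left-eye mesh: visible only to cameras with layer 1 enabled
// ('left' views, and 'none' views under this PR's assumption).
const meshL = new THREE.Mesh( geometry, material );
meshL.layers.set( 1 );
scene.add( meshL );

// Right-eye mesh: visible only to cameras with layer 2 enabled ('right' views).
const meshR = new THREE.Mesh( geometry.clone(), material );
meshR.layers.set( 2 );
scene.add( meshR );
```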