Add clarifications for when a frame is rendered #718
Comments
For example, is this before or after render buffering, and is it affected by vsync, etc.? One might also consider what happens when the same track is rendered in multiple places - not something you would do in practice, but it is possible, so the spec should clarify.
https://wicg.github.io/video-rvfc/#dom-videoframecallbackmetadata-expecteddisplaytime
Given the focus on WebRTC, I think https://wicg.github.io/video-rvfc/#dom-videoframecallbackmetadata-presentationtime is probably the metric that makes the most sense to align with - it doesn't have a "guess about the future" component. It is defined as "The time at which the user agent submitted the frame for composition".
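Not a proposal, but for reference, here is a minimal sketch of how both timestamps are surfaced today by requestVideoFrameCallback (assuming a `<video>` element attached to a remote WebRTC track):

```ts
// Sketch: logging the two candidate timestamps exposed by requestVideoFrameCallback.
// Assumes `video` is an HTMLVideoElement playing a remote MediaStreamTrack.
const video = document.querySelector('video') as HTMLVideoElement;

function onFrame(now: DOMHighResTimeStamp, metadata: VideoFrameCallbackMetadata) {
  // Time at which the UA submitted the frame for composition (no future estimate).
  console.log('presentationTime:', metadata.presentationTime);
  // UA's estimate of when the frame will become visible (vsync-dependent guess).
  console.log('expectedDisplayTime:', metadata.expectedDisplayTime);
  video.requestVideoFrameCallback(onFrame);
}

video.requestVideoFrameCallback(onFrame);
```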
So that would be after any potential buffering in the render pipeline? E.g. some feedback would be needed to tell WebRTC that the frame it previously output is now finally being submitted for composition.
@drkron Do you have opinions here?
Otherwise... ready for PR?
Looks good to me. So this, together with #717, would be a simpler way of determining various render frame rates. The alternative that exists today is to use requestVideoFrameCallback. I wonder if a potential continuation is to also add various latency metrics, such as receive-to-render.
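To make that alternative concrete, a rough sketch (not a spec proposal) of deriving a render frame rate and a receive-to-render latency from rVFC metadata, assuming a remote WebRTC source so that `receiveTime` is populated:

```ts
// Sketch: approximate render frame rate and receive-to-render latency via rVFC.
// Assumes `video` is an HTMLVideoElement playing a remote WebRTC track.
const video = document.querySelector('video') as HTMLVideoElement;

let renderedFrames = 0;
let latencySumMs = 0;
const startMs = performance.now();

function onFrame(_now: DOMHighResTimeStamp, metadata: VideoFrameCallbackMetadata) {
  renderedFrames++;
  if (metadata.receiveTime !== undefined) {
    // Receive-to-render: last packet of the frame received -> frame submitted for composition.
    latencySumMs += metadata.presentationTime - metadata.receiveTime;
  }
  video.requestVideoFrameCallback(onFrame);
}
video.requestVideoFrameCallback(onFrame);

setInterval(() => {
  const elapsedS = (performance.now() - startMs) / 1000;
  console.log('render fps:', renderedFrames / elapsedS,
              'avg receive-to-render (ms):', latencySumMs / Math.max(renderedFrames, 1));
}, 1000);
```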
On that related note, if we want to go even further, we could consider a "media-playout" stats object for video. We recently added a "media-playout" stats object for audio: RTCAudioPlayoutStats. If it existed for video, it would make sense to put the playout-related stuff there (a separate issue if so, and not entirely sure we want to; unlike audio, video does not share the same path across multiple RTP streams).
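For context, the existing audio-only object can already be read via getStats today; a video variant is hypothetical. A minimal sketch, assuming `pc` is an existing RTCPeerConnection:

```ts
// Sketch: reading the existing "media-playout" stats (currently audio only,
// i.e. RTCAudioPlayoutStats); a video variant, if added, could presumably
// be surfaced through the same stats type.
async function logPlayoutStats(pc: RTCPeerConnection) {
  const report = await pc.getStats();
  report.forEach((stats) => {
    if (stats.type === 'media-playout') {
      console.log('playout stats:', stats);
    }
  });
}
```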
Several metrics (pause, freeze, and inter-frame if #717 is merged) talk about incrementing or measuring "just after" a frame is rendered. We should expand this definition in a common place that all of these metrics can point to.