Add clarifications for when a frame is rendered #718

Open
henbos opened this issue Dec 13, 2022 · 8 comments

Comments

@henbos
Collaborator

henbos commented Dec 13, 2022

Several metrics (pause, freeze, and inter-frame if #717 is merged) talk about incrementing or measuring something "just after" a frame is rendered. We should expand on this definition in a common place that all of these metrics can point to.
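
For concreteness, a minimal sketch of how these counters surface today (assuming `pc` is an `RTCPeerConnection` receiving video; field names as in the current webrtc-stats draft):

```ts
async function logRenderTimingStats(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats: any) => {
    if (stats.type === 'inbound-rtp' && stats.kind === 'video') {
      // Each of these increments or accumulates "just after" a frame is
      // rendered -- the moment this issue asks the spec to define precisely.
      console.log('pauses:', stats.pauseCount, stats.totalPausesDuration);
      console.log('freezes:', stats.freezeCount, stats.totalFreezesDuration);
    }
  });
}
```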

@henbos
Collaborator Author

henbos commented Dec 13, 2022

For example: is this before or after render buffering? Is it affected by vsync, etc.? One might also consider what happens when the same track is rendered in multiple places - not something you would do in practice, but it is possible, so the spec should clarify.

@fippo
Contributor

fippo commented Dec 13, 2022

https://wicg.github.io/video-rvfc/#dom-videoframecallbackmetadata-expecteddisplaytime has a definition, but it is rather vague (and I have heard concerns about it not being accurate). Still, we should align these specs.

@alvestrand
Contributor

Given the focus on WebRTC, I think https://wicg.github.io/video-rvfc/#dom-videoframecallbackmetadata-presentationtime is probably the metric that makes the most sense to align with - it doesn't have a "guess about the future" component.

Defined as "The time at which the user agent submitted the frame for composition".
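
For illustration, a sketch of reading both rVFC timestamps being compared here (assuming `video` is the `HTMLVideoElement` playing out the remote track; cast to `any` in case the DOM typings don't yet include requestVideoFrameCallback):

```ts
function observeCompositionTimes(video: HTMLVideoElement): void {
  const onFrame = (_now: number, metadata: any) => {
    // presentationTime: when the UA submitted the frame for composition.
    // expectedDisplayTime: the UA's estimate of when it becomes visible,
    // i.e. the "guess about the future" component mentioned above.
    console.log('presentationTime:', metadata.presentationTime,
                'expectedDisplayTime:', metadata.expectedDisplayTime);
    (video as any).requestVideoFrameCallback(onFrame);
  };
  (video as any).requestVideoFrameCallback(onFrame);
}
```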

@henbos
Collaborator Author

henbos commented Dec 16, 2022

So that would be after any potential buffering in the render pipeline? E.g. some feedback would be needed to tell WebRTC that the frame it previously output is now finally being submitted for composition.

@henbos
Collaborator Author

henbos commented Dec 16, 2022

@drkron Do you have opinions here?

@henbos
Collaborator Author

henbos commented Dec 16, 2022

Otherwise... ready for PR?

@drkron
Contributor

drkron commented Dec 16, 2022

@drkron Do you have opinions here?

Looks good to me. So this, together with #717, would give a simpler way of determining various render frame rates. The alternative that exists today is to use requestVideoFrameCallback. I wonder if a potential continuation is to also add various latency metrics, such as receive-to-render.
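
Roughly, that rVFC-based alternative looks like the following sketch (hypothetical helper, not from the spec; it estimates the rendered frame rate from deltas of `presentedFrames` over `presentationTime`):

```ts
function measureRenderedFps(video: HTMLVideoElement,
                            onFps: (fps: number) => void): void {
  let lastFrames = 0;
  let lastTime = 0;
  const onFrame = (_now: number, metadata: any) => {
    if (lastTime > 0 && metadata.presentationTime > lastTime) {
      // Frames presented since the last callback, divided by elapsed seconds.
      const frames = metadata.presentedFrames - lastFrames;
      const seconds = (metadata.presentationTime - lastTime) / 1000;
      onFps(frames / seconds);
    }
    lastFrames = metadata.presentedFrames;
    lastTime = metadata.presentationTime;
    (video as any).requestVideoFrameCallback(onFrame);
  };
  (video as any).requestVideoFrameCallback(onFrame);
}
```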

@henbos
Collaborator Author

henbos commented Dec 16, 2022

On that note, if we want to go even further, we could consider a "media-playout" stats object for video. We recently added a "media-playout" stats object for audio: RTCAudioPlayoutStats. If it existed for video, it would make sense to put the playout-related stuff there (that would be a separate issue, and I'm not entirely sure we want it: unlike audio, video does not share the same playout path across multiple RTP streams).
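
For comparison, a sketch of how the existing audio "media-playout" stats are read today (field names as in RTCAudioPlayoutStats in the current draft; a video equivalent, if added, would presumably be exposed the same way):

```ts
async function logAudioPlayoutStats(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats: any) => {
    if (stats.type === 'media-playout' && stats.kind === 'audio') {
      // Average playout delay per played-out sample.
      console.log('avg playout delay (s):',
                  stats.totalPlayoutDelay / stats.totalSamplesCount);
      console.log('synthesized (concealed) audio (s):',
                  stats.synthesizedSamplesDuration);
    }
  });
}
```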
