Add a test for ReplaceTrack that verifies video track content. #22779

Merged 1 commit on Apr 30, 2020
40 changes: 36 additions & 4 deletions webrtc/RTCPeerConnection-helper.js
@@ -582,7 +582,7 @@ const trackFactories = {
return dst.stream.getAudioTracks()[0];
},

video({width = 640, height = 480, signal = null} = {}) {
const canvas = Object.assign(
document.createElement("canvas"), {width, height}
);
@@ -593,8 +593,13 @@ const trackFactories = {
setInterval(() => {
ctx.fillStyle = `rgb(${count%255}, ${count*count%255}, ${count%255})`;
count += 1;

ctx.fillRect(0, 0, width, height);
// If signal is set, add a constant-color box to the video frame.
if (signal !== null) {
ctx.fillStyle = `rgb(${signal}, ${signal}, ${signal})`;
ctx.fillRect(10, 10, 20, 20);
let pixel = ctx.getImageData(15, 15, 1, 1);
Contributor: What is the purpose of `let pixel = ctx.getImageData(15, 15, 1, 1)`?

Member: This does look like there's no good reason for it. I'll send a PR to delete this line once this PR is landed.

}
}, 100);

if (document.body) {
@@ -609,13 +614,40 @@ const trackFactories = {
}
};
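The noise-video factory above cycles its background color with a frame counter. As a pure function (a hypothetical extraction for illustration; `frameColor` is not a name used in the patch), the per-frame fill color is:

```javascript
// Hypothetical extraction of the per-frame background color used by the
// noise-video factory: each channel cycles with the frame counter modulo 255.
function frameColor(count) {
  return `rgb(${count % 255}, ${(count * count) % 255}, ${count % 255})`;
}
```

Because the channels wrap at 255 rather than 256, the background repeats every 255 frames; the constant-color signal box painted on top is what the test actually measures.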

// Get the signal from a video element inserted by createNoiseStream
function getVideoSignal(v) {
if (v.videoWidth < 21 || v.videoHeight < 21) {
return null;
}
const canvas = new OffscreenCanvas(v.videoWidth, v.videoHeight);
Member: @alvestrand is it necessary to use OffscreenCanvas here? In https://wpt.fyi/results/webrtc/RTCRtpSender-replaceTrack.https.html?label=pr_head&max-count=1&pr=22779 it looks like Firefox and Safari fail this test because of it. Could a regular canvas be used just as well?

Contributor: Even if a regular canvas is used, the test's expectation is unclear when the codec is lossy: why should a pixel at an exact coordinate be expected to hold an exact value under a lossy video codec? The timeout is probably related to loadedmetadata never being fired in the code in this PR; see w3c/webrtc-pc#2506 (comment).

Member: detectSignal allows for an error margin of 1, but good point that this may not always hold. I don't see a timeout in the test results, so there isn't anything to investigate there, I think.

Contributor: The test paints a box in a specific color, then checks that the middle of that box is the expected color (or close to it). The margin should be big enough that even edge-blurring codecs don't mess things up; if the video width and height are the same (no resize), the box should be in the same place. The amount of color disparity one can expect should be tested; good point. Later CL.

let context = canvas.getContext('2d');
context.drawImage(v, 0, 0, v.videoWidth, v.videoHeight);
// Extract pixel value at position 20, 20
let pixel = context.getImageData(20, 20, 1, 1);
return (pixel.data[0] + pixel.data[1] + pixel.data[2]) / 3;
}
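The readback above reduces one pixel to a scalar by averaging its color channels. A hypothetical pure form of that last step (for illustration; `pixelToSignal` is not a name used in the patch):

```javascript
// Hypothetical pure form of getVideoSignal's final step: the "signal" is
// the mean of a pixel's R, G and B channels (the alpha channel is ignored).
// The factory paints the box as rgb(signal, signal, signal), so an
// uncompressed pixel averages back to the original signal value exactly.
function pixelToSignal(rgba) {
  return (rgba[0] + rgba[1] + rgba[2]) / 3;
}
```

Lossy encoding can perturb the individual channels, which is why the comparison downstream allows a margin rather than demanding equality.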

function detectSignal(t, v, value) {
return new Promise((resolve) => {
let check = () => {
const signal = getVideoSignal(v);
if (signal !== null && signal < value + 1 && signal > value - 1) {
resolve();
} else {
t.step_timeout(check, 100);
}
}
check();
});
}
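detectSignal polls every 100 ms until the measured signal is strictly within 1 of the target. The acceptance check can be sketched as a standalone predicate (a hypothetical helper, not part of the patch):

```javascript
// Hypothetical predicate mirroring the acceptance test inside detectSignal:
// a measurement matches when it lies strictly within `tolerance` of the
// expected value; null means no decodable frame yet, so the caller keeps polling.
function signalMatches(measured, expected, tolerance = 1) {
  if (measured === null) {
    return false;
  }
  return measured > expected - tolerance && measured < expected + tolerance;
}
```

Because the bound is strict, an integer measurement passes only on an exact match, which is what the review thread questions for lossy codecs.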

// Generate a MediaStream bearing the specified tracks.
//
// @param {object} [caps]
// @param {boolean} [caps.audio] - flag indicating whether the generated stream
// should include an audio track
// @param {boolean} [caps.video] - flag indicating whether the generated stream
// should include a video track, or parameters for video
async function getNoiseStream(caps = {}) {
if (!trackFactories.canCreate(caps)) {
return navigator.mediaDevices.getUserMedia(caps);
@@ -627,7 +659,7 @@ async function getNoiseStream(caps = {}) {
}

if (caps.video) {
tracks.push(trackFactories.video(caps.video));
}

return new MediaStream(tracks);
32 changes: 31 additions & 1 deletion webrtc/RTCRtpSender-replaceTrack.https.html
@@ -272,5 +272,35 @@
without negotiating.
3. Queue a task that runs the following steps:
1. If connection's [[isClosed]] slot is true, abort these steps.
*/

promise_test(async t => {
const v = document.createElement('video');
v.autoplay = true;
const pc1 = new RTCPeerConnection();
t.add_cleanup(() => pc1.close());
const pc2 = new RTCPeerConnection();
t.add_cleanup(() => pc2.close());
const stream1 = await getNoiseStream({video: {signal: 20}});
t.add_cleanup(() => stream1.getTracks().forEach(track => track.stop()));
const [track1] = stream1.getTracks();
const stream2 = await getNoiseStream({video: {signal: 250}});
t.add_cleanup(() => stream2.getTracks().forEach(track => track.stop()));
const [track2] = stream2.getTracks();
const sender = pc1.addTrack(track1);
pc2.ontrack = (e) => {
v.srcObject = new MediaStream([e.track]);
};
const metadataToBeLoaded = new Promise((resolve) => {
Contributor: One issue: the metadataToBeLoaded Promise never fulfills.

Member: The results from running this test in Chrome, Firefox and Safari are at https://wpt.fyi/results/webrtc/RTCRtpSender-replaceTrack.https.html?label=pr_head&max-count=1&pr=22779. The failure due to OffscreenCanvas is inside detectSignal, so this promise did resolve. If you see some other behavior locally, it's probably because of differences in rules around autoplaying video. In CI we pass additional flags to browsers to allow autoplay, since so many tests depend on it.

v.addEventListener('loadedmetadata', () => {
resolve();
});
});
exchangeIceCandidates(pc1, pc2);
doSignalingHandshake(pc1, pc2);
await metadataToBeLoaded;
await detectSignal(t, v, 20);
await sender.replaceTrack(track2);
await detectSignal(t, v, 250);
}, 'ReplaceTrack transmits the new track not the old track');
</script>