question! #8

Closed
titoBouzout opened this issue Aug 8, 2019 · 28 comments

@titoBouzout

Hello! You seem to be the only one on the entire internet with the same problem I am having.

I have an array of blobs; the array has two videos. I have been trying to play one video after the other without any kind of interruption. As you can guess, when the second video is appended via sourceBuffer.appendBuffer(blob) the video freezes.

Have you found any solution to this problem, to be able to play two videos without any kind of stop? Thanks in advance!

@guest271314
Owner

Hello! You seem to be the only one on the entire internet with the same problem I am having.

Hi.

I have an array of blobs; the array has two videos. I have been trying to play one video after the other without any kind of interruption. As you can guess, when the second video is appended via sourceBuffer.appendBuffer(blob) the video freezes.

Can you create a plnkr https://plnkr.co to demonstrate? Note that MediaSource has recently implemented changeType https://googlechrome.github.io/samples/media/sourcebuffer-changetype.html. If the video is first encoded to WebM using MediaRecorder, the resulting Blob can be represented as an ArrayBuffer and passed to appendBuffer(); if "segments" mode is used, timestampOffset should be updated.
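For reference, a minimal sketch of that approach, assuming blobs is an array of complete WebM recordings produced by MediaRecorder; the variable names are illustrative, not taken from the repository:

const video = document.querySelector("video");
const mediaSource = new MediaSource();
mediaSource.addEventListener("sourceopen", async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm;codecs="vp8,opus"');
  sourceBuffer.mode = "segments";
  for (const blob of blobs) {
    // offset the next recording so it starts where the buffered media ends
    sourceBuffer.timestampOffset = video.buffered.length
      ? video.buffered.end(video.buffered.length - 1)
      : 0;
    // Blob -> ArrayBuffer, then append
    sourceBuffer.appendBuffer(await blob.arrayBuffer());
    // wait for this append to finish before appending the next recording
    await new Promise(resolve =>
      sourceBuffer.addEventListener("updateend", resolve, { once: true })
    );
  }
  mediaSource.endOfStream(); // allows seeking of the finished media
});
video.src = URL.createObjectURL(mediaSource);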

The result depends on the method chosen to achieve the output of "without any kind of interruptions" and the browser used. Using canvas.captureStream() and AudioContext() to create a video and audio media stream can be implemented at Firefox and Chrome with similar results, save for issues specific to each browser.

Have you tried running the code in the branches in this repository in the environments where the video is expected to be rendered?

Are there any restrictions on how the output is derived which renders the video to the user in the browser?

Have you found any solution to this problem, to be able to play two videos without any kind of stop? Thanks in advance!

Yes, the requirement is possible.

If the video can be played at an HTML <video> element at that specific browser, there are several approaches utilizing browser APIs to record multiple videos to a single output WebM file.

@titoBouzout
Author

I didn't notice the repository had branches, but I saw your code all over the place in Stack Overflow answers and GitHub comments. I tried to use timestampOffset before, but it seems I was making some sort of mistake because it didn't work. Looking at how you used it and replicating it made it work!

Made a simple plnkr to demonstrate the approach working: https://plnkr.co/edit/DGffcP. Feel free to include it in the repository if you wish. (Btw, I suggest using folders in the same repository to make it more discoverable, because branches are a bit hidden [maybe it's just me not paying much attention to branches].) Thanks a lot for the hint on how to use timestampOffset. Take care! ♥

@guest271314
Owner

@titoBouzout A slightly modified version of the code: https://plnkr.co/edit/28tA98JZJMA1yL6ssrFg?p=preview. Note the endOfStream() call at line #37, which allows seeking of the media.

Updated this repository to include a list of the branches of this repository at each branch.

If the code is working as intended this issue can be closed, correct?

@titoBouzout
Author

Yes, thanks again. The update is looking good! :)

@titoBouzout
Author

titoBouzout commented Aug 10, 2019

Hi again, well, I'm still having problems with this, been reading for hours.

1 -- SourceBuffer.timestampOffset will fail to update if the last blob fed does not contain a complete segment

It seems that trying to update timestampOffset most of the time gives an error.

The problem seems to be that when fetching blobs via MediaRecorder, the blobs obtained sometimes don't contain a complete segment (is this true? it seems to be, given the following paragraph from the specification).

The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.

So internally (something that isn't exposed) the SourceBuffer is still waiting for more blobs to be able to parse the last segment. The workaround is to call SourceBuffer.abort(), but that will drop the segment, so while it may not be very noticeable in the video, the audio glitches. https://bugs.chromium.org/p/chromium/issues/detail?id=766002#c3

2. -- MediaRecorder.stop() does not give a full segment at the end?

I was trying to figure out how to get segments from MediaRecorder to solve issue 1.

So I tried to get a "complete" video using MediaRecorder. Just for testing I put a setInterval that fires every 5 seconds and calls MediaRecorder.stop(); when I feed these blobs to the SourceBuffer it still gives the same parsing error :@ . So does this mean that when we use MediaRecorder and call stop() the last blob does not give a complete segment? I doubt it given the cited paragraph, but I'm still seeing the error.

3. -- If MediaRecorder.stop() does not give a full segment, then what?

4. -- My original problem, so you understand why I'm in this situation. File headers?

I'm recording a stream with MediaRecorder. I stop and start recording every 5 seconds to generate videos of 5 seconds, then I play them all together. Why do I do this? Because if I refresh the page the server will send me blobs from the middle of the stream, and feeding blobs from the middle of a stream to SourceBuffer simply does not work because it seems that WebM files have a header. Maybe what I can do here is generate the header myself and always append that first and then append the incoming blobs. Will this work?
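To make the idea concrete, a sketch of it using the recorder from the code above and a hypothetical transport (broadcast_to_clients and client.send are placeholders); this is the approach being asked about, not a confirmed fix:

// streamer side: keep the first timesliced chunk, which carries the WebM header/initialization data
let headerChunk = null;
recorder.ondataavailable = async (e) => {
  if (e.data.size === 0) return;
  const buf = await e.data.arrayBuffer();
  if (!headerChunk) headerChunk = buf;
  broadcast_to_clients(buf); // hypothetical transport
};

// when a client joins mid-stream, send the cached header before the live chunks
function onClientJoin(client) {
  client.send(headerChunk); // hypothetical transport
}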

@titoBouzout titoBouzout reopened this Aug 10, 2019
@guest271314
Owner

@titoBouzout

Why is using MediaSource necessary?

Hi again, well, I'm still having problems with this, been reading for hours.

1 -- SourceBuffer.timestampOffset will fail to update if the last blob fed does not contain a complete segment

It seems that trying to update timestampOffset most of the time gives an error.

Can you create a plnkr https://plnkr.co to demonstrate?

The problem seems to be that when fetching blobs via MediaRecorder, the blobs obtained sometimes don't contain a complete segment (is this true? it seems to be, given the following paragraph from the specification).

The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.

What specification are you referring to that is relevant to "complete segment"?

If timeslice is passed to start() and one of the individual Blobs is passed to URL.createObjectURL(), that individual Blob need not be playable. The resulting complete WebM file, potentially containing multiple concatenated Blobs from the dataavailable event, must be playable. The writing of the WebM file is not specified to output multiple individually playable WebM files, only a single playable file.
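A minimal sketch of that guarantee, assuming stream is an existing MediaStream: the individual timesliced chunks may not play on their own, but the concatenation of all chunks from a completed recording does.

const chunks = [];
const recorder = new MediaRecorder(stream);
recorder.ondataavailable = (e) => {
  if (e.data.size > 0) chunks.push(e.data); // individual chunks, possibly not playable alone
};
recorder.onstop = () => {
  // the combination of all chunks from the completed recording is a playable WebM file
  const file = new Blob(chunks, { type: recorder.mimeType });
  const video = document.createElement("video");
  video.controls = true;
  video.src = URL.createObjectURL(file);
  document.body.appendChild(video);
};
recorder.start(1000); // timeslice: roughly one chunk per second
// later: recorder.stop();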

So internally (something that isn't exposed) the SourceBuffer is still waiting for more blobs to be able to parse the last segment. The workaround is to call SourceBuffer.abort(), but that will drop the segment, so while it may not be very noticeable in the video, the audio glitches. https://bugs.chromium.org/p/chromium/issues/detail?id=766002#c3

Not certain what you mean by "waiting for more blobs"? At the code in mater branch of this repository waiting event of HTML <video> element is used to append ArrayBuffers to the sourceBuffer, after executing abort() and timestampOffset. The issue with the code in the master branch of this repository is not playback of the ArrayBuffers, but recording the <video> where MediaSource is set at the src with MediaRecorder.

2. -- MediaRecorder.stop() does not give a full segment at the end?

Again, what do you specifically mean by "full segment"?

I was trying to figure out how to get segments from MediaRecorder to solve issue 1.

So I tried to get a "complete" video using MediaRecorder. Just for testing I put a setInterval that fires every 5 seconds and calls MediaRecorder.stop(); when I feed these blobs to the SourceBuffer it still gives the same parsing error :@ . So does this mean that when we use MediaRecorder and call stop() the last blob does not give a complete segment? I doubt it given the cited paragraph, but I'm still seeing the error.

3. -- If MediaRecorder.stop() does not give a full segment, then what?

4. -- My original problem, so you understand why I'm in this situation. File headers?

Not certain what you mean by "file headers"?

Note that the Chromium/Chrome implementation of MediaRecorder does not write a duration to the output WebM file.

I'm recording a stream with MediaRecorder. I stop and start recording every 5 seconds to generate videos of 5 seconds, then I play them all together. Why do I do this? Because if I refresh the page the server will send me blobs from the middle of the stream, and feeding blobs from the middle of a stream to SourceBuffer simply does not work because it seems that WebM files have a header. Maybe what I can do here is generate the header myself and always append that first and then append the incoming blobs. Will this work?

Why is using MediaSource a requirement? If you are already using captureStream() with MediaRecorder you can alternatively use canvas.captureStream() and AudioContext.createMediaStreamDestination(); or WebRTC RTCRtpSender.replaceTrack() https://w3c.github.io/webrtc-pc/#dom-rtcrtpsender-replacetrack

If sending is true, and withTrack is not null, have the sender switch seamlessly to transmitting withTrack instead of the sender's existing track.

together with MediaRecorder, as used in three branches of this repository, to create a single video from multiple MediaStreamTracks which are replaced in the recorded MediaStream.
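A rough sketch of that pattern, assuming two MediaStreams (firstStream, secondStream) and a local RTCPeerConnection loopback; signaling is reduced to the minimum for two connections in one page, and the names are illustrative:

async function recordWithReplaceTrack(firstStream, secondStream) {
  const pc1 = new RTCPeerConnection();
  const pc2 = new RTCPeerConnection();
  pc1.onicecandidate = (e) => e.candidate && pc2.addIceCandidate(e.candidate);
  pc2.onicecandidate = (e) => e.candidate && pc1.addIceCandidate(e.candidate);

  // send the first video track; keep the RTCRtpSender to replace the track later
  const sender = pc1.addTrack(firstStream.getVideoTracks()[0], firstStream);

  pc2.ontrack = ({ streams: [remote] }) => {
    // record the remote stream; the recording continues across replaceTrack()
    const recorder = new MediaRecorder(remote);
    recorder.ondataavailable = (e) => console.log("recorded chunk", e.data);
    recorder.start();
  };

  const offer = await pc1.createOffer();
  await pc1.setLocalDescription(offer);
  await pc2.setRemoteDescription(offer);
  const answer = await pc2.createAnswer();
  await pc2.setLocalDescription(answer);
  await pc1.setRemoteDescription(answer);

  // later: switch seamlessly to the second video without restarting the recorder
  setTimeout(() => sender.replaceTrack(secondStream.getVideoTracks()[0]), 5000);
}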

If using MediaSource is part of the requirement, can you create a minimal, complete, verifiable example of the code that you are using, including comments where errors or issues occur which do not result in the expected output?

@guest271314
Owner

@titoBouzout At which browser are you trying the code?

@guest271314
Owner

@titoBouzout Note, the code at master branch of this repository outputs the expected result at Firefox, not at Chromium. Re MediaRecorder implementation at Chromium/Chrome outputting seekable WebM files, see https://github.com/legokichi/ts-ebml#media-recorder-media-source-gap.

A version of the code at master using ts-ebml to set the duration of WebM files from MediaRecorder at Chromium, using input videos having the same width and height: https://plnkr.co/edit/dTEO4HepKOY3xwqBemm5?p=preview. The audio and video are noticeably not synchronized near the completion of the output stream at the <video> where MediaSource is set.

To merge input tracks into a single video, would suggest utilizing canvas.captureStream() and AudioContext() or WebRTC RTCRtpSender.replaceTrack(). Depending on the scope of the application, if the videos are all created using MediaRecorder, which outputs WebM files (MediaRecorder at Chromium/Chrome can also output Matroska files), you can alternatively use mkvmerge, which also sets the duration of the merged tracks; see https://github.com/guest271314/native-messaging-mkvmerge.

Trying to compose code for MediaSource and MediaRecorder to output the same result at Firefox and Chromium/Chrome is not a trivial task.

@guest271314
Owner

@titoBouzout Perhaps composing code for MediaSource and MediaRecorder that outputs the same result at Firefox and Chromium/Chrome is trivial; have not yet achieved that result here. The source of the ts-ebml code is legokichi/ts-ebml#14 (specifically using the version of the pattern from legokichi/ts-ebml#14 (comment)). See https://stackoverflow.com/questions/45217962/how-to-use-blob-url-mediasource-or-other-methods-to-play-concatenated-blobs-of, w3c/mediacapture-record#147.

@titoBouzout
Author

Thanks. I will try to explain, for the sake of simplicity I left out irrelevant code.

streamer.js -- So this page gets a stream and records it. This page never dies, the stream is infinite.

// streamer.js

var recorder = new MediaRecorder(stream)
recorder.ondataavailable = function(e) {
	if (e.data.size > 0) {
		send_data_to_client_as_arraybuffer(e.data)
	}
}
recorder.start(1000)

client.js -- So this page receives the blobs from streamer.js and plays it. There could be as many client.js as you need.

// client.js

var media_source = new MediaSource()
media_source.onsourceopen = function (event) {
	var source_buffer = media_source.addSourceBuffer('video/webm;codecs=vp9')
	window.on_data_from_streamer = function(arraybuffer) {
		if (source_buffer.updating === false) {
			source_buffer.appendBuffer(new Uint8Array(arraybuffer))
		}
	}
}
var video = document.createElement('video')
video.src = URL.createObjectURL(media_source)

This code just works in a perfect world. So what's the problem with it?

  • a client.js can't just join if the streamer.js already started

Why? Because:

The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.

You can't just feed source_buffer.appendBuffer with blobs from the middle of a stream, it will not work.

I know I could just start a new parallel stream.js to serve the client that joined, but my bandwidth will not scale, I can't afford it. I also know about the canvas hack, I need a real solution.

Something I tried:

  1. To start and stop the stream on an interval to generate small videos.

Problems with this:

  1. ondataavailable gives you chunks that may not contain a complete last segment. So when you append this video via source_buffer.appendBuffer the buffer gets stuck and will not allow you to update the timestampOffset, because it is waiting for the remaining data to complete the segment parsing. What does this mean? You made a small video in which the last segment is incomplete. source_buffer.abort() will just drop that segment, which means that you will be dropping segments each time you append a video, so the video and audio will glitch. I'm not sure about the following, but as you have an incomplete video, when you load it to get media.duration, the duration value that you will pass to timestampOffset could be wrong, making the glitch even more noticeable, but I don't know. It will depend on whether media.duration considers the time of the last incomplete segment.

  2. Restarting the stream is not easy. You can call recorder.stop() and then immediately call recorder.start(), but these functions are not synced. This means that the first chunk you get after recorder.start() is not the continuation of the last chunk received when you called recorder.stop(). So for each small video that you create you will have a double glitch.

I didn't try WebRTC; how does WebRTC solve my problem?

@titoBouzout
Author

You can get the duration of a video with something like this; I don't know whether it performs better or worse than using ts-ebml

const media = document.createElement('video')
media.preload = 'metadata'
media.currentTime = 24 * 60 * 60 // ridiculously high, so the browser resolves the real duration
media.ondurationchange = function() {
  if (media.duration != Infinity) {
    console.log(media.duration)
    media.ondurationchange = null
  }
}
media.src = URL.createObjectURL(blob)

@guest271314
Owner

Thanks. I will try to explain, for the sake of simplicity I left out irrelevant code.

Before proceeding, can you kindly answer the questions presented above

  • Is using MediaSource a requirement?

  • Which browser(s) are being targeted and at which browsers have you tested the code?

streamer.js -- So this page gets a stream and records it. This page never dies, the stream is infinite.

// streamer.js

var recorder = new MediaRecorder(stream)
recorder.ondataavailable = function(e) {
	if (e.data.size > 0) {
		send_data_to_client_as_arraybuffer(e.data)
	}
}
recorder.start(1000)

Why is timeslice used at start()?

client.js -- So this page receives the blobs from streamer.js and plays it. There could be as many client.js as you need.

// client.js

var media_source = new MediaSource()
media_source.onsourceopen = function (event) {
	var source_buffer = media_source.addSourceBuffer('video/webm;codecs=vp9')
	window.on_data_from_streamer = function(arraybuffer) {
		if (source_buffer.updating === false) {
			source_buffer.appendBuffer(new Uint8Array(arraybuffer))
		}
	}
}
var video = document.createElement('video')
video.src = URL.createObjectURL(media_source)

This code just works in a perfect world. So what's the problem with it?

Again, why is timeslice being used?

  • a client.js can't just join if the streamer.js already started

Why? Because:

The UA MUST record stream in such a way that the original Tracks can be retrieved at playback time. When multiple Blobs are returned (because of timeslice or requestData()), the individual Blobs need not be playable, but the combination of all the Blobs from a completed recording MUST be playable.

You can't just feed source_buffer.appendBuffer with blobs from the middle of a stream, it will not work.

You can if timeslice is not used.

I know I could just start a new parallel stream.js to serve the client that joined, but my bandwidth will not scale, I can't afford it. I also know about the canvas hack, I need a real solution.

Not sure what you mean by "canvas hack"?

Using canvas.captureStream() and AudioContext() is one viable solution.
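For reference, a minimal sketch of that approach, assuming videos is an array of <video> elements to be played back to back; the names are illustrative:

const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
const audioCtx = new AudioContext();
const destination = audioCtx.createMediaStreamDestination();

// one MediaStream composed of the canvas video track and the mixed audio track;
// it can be set as <video>.srcObject or passed to MediaRecorder
const mixed = new MediaStream([
  ...canvas.captureStream(30).getVideoTracks(),
  ...destination.stream.getAudioTracks()
]);

async function playInto(video) {
  // route this element's audio into the shared destination
  audioCtx.createMediaElementSource(video).connect(destination);
  await video.play();
  canvas.width = video.videoWidth || 640;
  canvas.height = video.videoHeight || 360;
  (function draw() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    if (!video.ended) requestAnimationFrame(draw);
  })();
  await new Promise((resolve) => (video.onended = resolve));
}

(async () => {
  for (const video of videos) {
    await playInto(video); // each source plays into the same canvas/audio graph
  }
})();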

Something I tried:

  1. To start and stop the stream on an interval to generate small videos.

Problems with this:

  1. ondataavailable gives you chunks that may not contain a complete last segment. So when you append this video via source_buffer.appendBuffer the buffer gets stuck and will not allow you to update the timestampOffset, because it is waiting for the remaining data to complete the segment parsing. What does this mean? You made a small video in which the last segment is incomplete. source_buffer.abort() will just drop that segment, which means that you will be dropping segments each time you append a video, so the video and audio will glitch. I'm not sure about the following, but as you have an incomplete video, when you load it to get media.duration, the duration value that you will pass to timestampOffset could be wrong, making the glitch even more noticeable, but I don't know. It will depend on whether media.duration considers the time of the last incomplete segment.
  2. Restarting the stream is not easy. You can call recorder.stop() and then immediately call recorder.start(), but these functions are not synced. This means that the first chunk you get after recorder.start() is not the continuation of the last chunk received when you called recorder.stop(). So for each small video that you create you will have a double glitch.

I didn't try WebRTC; how does WebRTC solve my problem?

WebRTC provides the ability to share a local MediaStream with a remote peer. The MediaStreamTracks within a MediaStream can be replaced.

Have you read the previous comment describing the specified procedure that replaceTrack() (http://next.plnkr.co/plunk/l0RoRH) performs, and tried the code at the three branches of this repository which utilize RTCRtpSender.replaceTrack()?

Would, again, suggest to try each branch of this repository in the browser(s) that you are targeting.

This is an example of code that is similar to what is described. The video is streamed from a JavaScript Module to a different HTML document https://plnkr.co/edit/5bvp9xv0ciMYfVzG?p=preview. Technically, the stream from the module can be infinite.

@guest271314
Owner

You can get the duration of a video with something like this; I don't know whether it performs better or worse than using ts-ebml

const media = document.createElement('video')
media.preload = 'metadata'
media.currentTime = 24 * 60 * 60 // ridiculously high, so the browser resolves the real duration
media.ondurationchange = function() {
  if (media.duration != Infinity) {
    console.log(media.duration)
    media.ondurationchange = null
  }
}
media.src = URL.createObjectURL(blob)

Yes, am familiar with that approach to get the duration of a video. The code is described in the linked SO answer, where, AFAIK, the author of the answer is the origin of that approach. Have you read all of the links provided above? The code in this repository explores a variety of approaches to recording media fragments and entire videos to a single video using various browser APIs. If you run the code in the branches you are likely to find an approach which outputs the expected result for your requirement; in this case, using either canvas.captureStream() with AudioContext() or WebRTC replaceTrack().

When you run the code at the linked plnkr from the previous comment, the media stream is being exported to the parent document. Any number of videos can be streamed by continuing to grab the current frame using createImageBitmap(). The audio is not an issue due to AudioContext already having the capability to merge channel data from multiple audio buffers, or, when using AudioWorklet, each value currently being output as audio.

@guest271314
Owner

Note, getting the duration of the video using the loadedmetadata event and setting the currentTime to a value beyond the expected duration does not change the fact that the duration is not written to the actual WebM file output by the MediaRecorder implementation at Chromium/Chrome.

It would help if you created a minimal, complete example of the code at plnkr, which should be possible since https://plnkr.co allows including files and the ability to open and communicate with windows.

@guest271314
Owner

An alternative approach for "client.js" would be to set the <video> srcObject to a MediaStream and use canvas.captureStream() and AudioContext() or WebRTC RTCSender.replaceTrack() to replace the video and/or audio MediaStreamTrack. That would not provide a means to seek the playback, but since the stream can be infinite seeking is not expected in the first instance.

@titoBouzout
Author

Is using MediaSource a requirement?

It's not really a requirement, not sure what else I could do?

Which browser(s) are being targeted and at which browsers have you tested the code?

Only Chrome, I don't have time for more than one browser anymore.

Why is timeslice used at start()?

Because it is a stream and I need the chunks to send to the clients. If I don't select a timeslice then I will need to wait for the stream to finish, which defeats the purpose of streaming :P

createImageBitmap

I can't use that, I need a compressed video, my bandwidth is limited.

WebRTC provides the ability to share a local MediaStream with a remote peer. The MediaStreamTracks within a MediaStream can be replaced.

That looks like a solution to my problem, but the API is so weird that I don't understand it. I can't give you example code, it's just too complicated.

I have read your code and examples, but none of them solve the problem of broadcasting a MediaStream. The code I posted solves that, but it breaks as soon as a client gets disconnected and rejoins. That's why I've been trying to slice the video into smaller videos.

@guest271314
Owner

It's not really a requirement, not sure what else I could do?

As suggested, you can set <video> srcObject to a MediaStream and use replaceTrack().

Because it is a stream and I need the chunks to send to the clients. If I don't select a timeslice then I will need to wait for the stream to finish, which defeats the purpose of streaming :P

You could alternatively use multiple instances of MediaRecorder, corresponding to each timeslice, then stop() each and send the Blob as an ArrayBuffer to get a WebM file containing the initial metadata.
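A rough sketch of that idea, reusing the stream and the hypothetical send_data_to_client_as_arraybuffer() transport from the code above; each slice is recorded by its own MediaRecorder instance, so each Blob is a complete, standalone WebM file:

const SLICE_MS = 5000;

function recordSlice(stream) {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = async (e) => {
    if (e.data.size > 0) {
      // a complete WebM file with its own initial metadata
      send_data_to_client_as_arraybuffer(await e.data.arrayBuffer());
    }
  };
  recorder.start(); // no timeslice
  setTimeout(() => recorder.stop(), SLICE_MS); // stop to finalize this slice
}

recordSlice(stream);
setInterval(() => recordSlice(stream), SLICE_MS); // start a new recorder for each slice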

createImageBitmap

I can't use that, I need a compressed video, my bandwidth is limited.

createImageBitmap() would be used to capture the <video> being streamed and recorded as an ImageBitmap, which can be transferred to a <canvas> element where the MediaStreamTrack within the MediaStream from canvas.captureStream() is set as the video MediaStreamTrack for the MediaStream set as srcObject at the client ("client.js").

WebRTC provides the ability to share a local MediaStream with a remote peer. The MediaStreamTracks within a MediaStream can be replaced.

That looks like a solution to my problem, but the API is so weird that I don't understand it. I can't give you example code, it's just too complicated.

There is a learning curve when using WebRTC, though there is a learning curve for any field of endeavor.

Have you taken the time to actually read and try the code at the branches of this repository?

I have read your code and examples, but none of them solve the problem of broadcasting a MediaStream. The code I posted solves that, but it breaks as soon as a client gets disconnected and rejoins. That's why I've been trying to slice the video into smaller videos.

Yes, there are examples of "broadcasting" a MediaStream, e.g., the last link at this comment #8 (comment).

This repository was created initially to record multiple media fragments to a single WebM file. The first attempt was using MediaSource. The subsequent branches explore the various means to achieve the requirement using available browser APIs, without necessarily using any third-party libraries. Different approaches exist based on the requirement of the application. Use the approach which outputs the expected result, adjusting the code if necessary.

The example of using a JavaScript Module to export a ReadableStream which reads images from a <video> as ImageBitmaps demonstrates how to "broadcast" a media stream. What may not be immediately obvious, though from the perspective here is important to note, is that a "media stream" is, at its lowest common denominator, a series of images and/or bytes comprising audio channels. Thus, a media stream can be written and read in more than one way.

@guest271314
Owner

@titoBouzout

Only Chrome, I don't have time for more than one browser anymore.

Since the target browser is Chrome, consider this code https://github.com/guest271314/MediaFragmentRecorder/blob/imagecapture-audiocontext-readablestream-writablestream/MediaFragmentRecorder.html, which uses ImageCapture.grabFrame() (currently Firefox supports takePhoto() though not grabFrame()).

What is occurring:

  • An instance of ImageCapture is created.
  • ImageCapture.grabFrame() is executed within ReadableStream pull() method.
  • The ImageBitmap is drawn onto the canvas.
  • The canvas is captured using captureStream() to get the MediaStreamTrack of kind "video".

What that means is that an ImageBitmap can represent a captured frame from one or more <video>s that are being played. Instead of "broadcasting" a MediaStream, ImageBitmaps are broadcast to clients from "streamer.js"; at the client ("client.js") each ImageBitmap is drawn onto a <canvas>, and captureStream() is used to get a MediaStreamTrack of kind "video" which can then be set as the MediaStreamTrack of a MediaStream that is set as srcObject of a <video> element. The same applies to audio. Both an ImageBitmap and a Float32Array can be transferred to different browsing contexts using one of the message event APIs, e.g., BroadcastChannel or SharedWorker, etc.
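A condensed sketch of those steps, assuming videoTrack is the MediaStreamTrack of kind "video" being captured; the names are illustrative:

const imageCapture = new ImageCapture(videoTrack);
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
// the canvas capture provides the MediaStreamTrack of kind "video" for the client
const broadcastStream = canvas.captureStream();

const reader = new ReadableStream({
  async pull(controller) {
    // grab the current frame as an ImageBitmap and draw it onto the canvas
    const frame = await imageCapture.grabFrame();
    canvas.width = frame.width;
    canvas.height = frame.height;
    ctx.drawImage(frame, 0, 0);
    controller.enqueue(frame); // the ImageBitmap could also be posted to another context
  }
}).getReader();

// keep reading so pull() keeps grabbing frames
(async () => {
  while (true) {
    await reader.read();
  }
})();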

If I gather the requirement correctly, will try to create a plnkr to demonstrate an approach to achieve the expected output.

@guest271314
Owner

@titoBouzout An example of streaming ImageBitmaps from "streamer.js" (index.html) to "client.js" ("client.html"). Audio can be streamed similarly using AudioWorklet which has a MessageChannel defined for the ability to post (Float32Array as ArrayBuffer) and receive messages from other browsing contexts.

"streamer.js" (index.html)

<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <canvas style="border:1px solid blue"></canvas>
  <script>
    const canvas = document.querySelector("canvas");
    const width = 300;
    const height = 200;
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext("2d");
    const canvasStream = canvas.captureStream(0);
    const [videoTrack] = canvasStream.getVideoTracks();
    const stream = [canvasStream, videoTrack].find(({
      requestFrame: rF
    }) => rF);
    const colors = ["red", "blue", "green", "yellow", "orange", "purple"];
    const len = colors.length;
    // stream frames from a video
    async function* streamFrames() {
      let frames = 0;
      let i = 0;
      while (true) {
        for (; i < len; i++) {
          for (let j = 0; j < 30; j++) {
            ctx.clearRect(0, 0, width, height);
            ctx.fillStyle = colors[i];
            ctx.fillRect(0, 0, width, height);
            stream.requestFrame();
            const frame = await new Promise(resolve => {
              setTimeout(_ => {
                console.log(`${++frames} frames streamed to client`);
                resolve(self.createImageBitmap(canvas));
              }, 1000 / 30);
            });
            yield frame;
          }
        }
        i = 0;
      }
    }
    const clientStream = window.open("client.html", "_blank");
    const startStream = async _ => {
      for await (const frame of streamFrames()) {
        clientStream.postMessage(frame, [frame]);
      }
    }
    onmessage = e => {
      console.log(e.data, e.origin);
      if (e.origin === location.origin) {
        startStream();
      }
    }
  </script>
</body>
</html>

"client.js" (client.html)

<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <video controls muted autoplay></video>
  <script>
    console.log("loaded");
    const video = document.querySelector("video");
    const canvas = document.createElement("canvas");
    canvas.width = 300;
    canvas.height = 200;
    const ctx = canvas.getContext("2d");
    const canvasStream = canvas.captureStream();
    video.srcObject = canvasStream; // set <video> srcObject to MediaStream
    opener.postMessage("ready");
    onmessage = e => {
      ctx.drawImage(e.data, 0, 0);
    }
  </script>
</body>
</html>

https://plnkr.co/edit/KIrO4G?p=preview
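Regarding streaming the audio similarly with AudioWorklet, a minimal sketch with illustrative file and processor names, assuming Float32Array chunks arrive sized to the 128-sample render quantum:

// chunk-player.js (illustrative name): an AudioWorkletProcessor that plays Float32Array chunks posted to its port
registerProcessor("chunk-player", class extends AudioWorkletProcessor {
  constructor() {
    super();
    this.queue = [];
    this.port.onmessage = (e) => this.queue.push(new Float32Array(e.data));
  }
  process(inputs, outputs) {
    const output = outputs[0][0]; // first channel of the first output
    const chunk = this.queue.shift();
    if (chunk) output.set(chunk.subarray(0, output.length));
    return true; // keep the processor alive
  }
});

// at the client ("client.js"), a separate message channel from the video frames
(async () => {
  const audioCtx = new AudioContext();
  await audioCtx.audioWorklet.addModule("chunk-player.js");
  const node = new AudioWorkletNode(audioCtx, "chunk-player");
  node.connect(audioCtx.destination);
  // forward each ArrayBuffer of audio samples received from the streamer
  onmessage = (e) => node.port.postMessage(e.data, [e.data]);
})();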

@guest271314
Owner

@titoBouzout A similar approach to https://plnkr.co/edit/KIrO4G?p=preview using MediaSource with SourceBuffer mode set to "sequence" (timestamps automatically generated) at "client.js"

"streamer.js" (index.html)

<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <canvas style="border:1px solid blue"></canvas>
  <script src="ts-ebml-min.js"></script>
  <script>
    const {
      Decoder, Encoder, tools, Reader
    } = require("ts-ebml");
    const readAsArrayBuffer = function(blob) {
      return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.readAsArrayBuffer(blob);
        reader.onloadend = () => {
          resolve(reader.result);
        };
        reader.onerror = (ev) => {
          reject(ev.error);
        };
      });
    }

    const injectMetadata = function(blob) {
      const decoder = new Decoder();
      const reader = new Reader();
      reader.logging = false;
      reader.drop_default_duration = false;

      return readAsArrayBuffer(blob).then((buffer) => {
        const elms = decoder.decode(buffer);
        elms.forEach((elm) => {
          reader.read(elm);
        });
        reader.stop();

        const refinedMetadataBuf = tools.makeMetadataSeekable(
          reader.metadatas, reader.duration, reader.cues);
        const body = buffer.slice(reader.metadataSize);

        const result = new Blob([refinedMetadataBuf, body], {
          type: blob.type
        });

        return result;
      });
    }
  </script>
  <script>
    const canvas = document.querySelector("canvas");
    const width = 300;
    const height = 200;
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext("2d");
    const canvasStream = canvas.captureStream(0);
    const [videoTrack] = canvasStream.getVideoTracks();
    const stream = [canvasStream, videoTrack].find(({
      requestFrame: rF
    }) => rF);
    const colors = ["red", "blue", "green", "yellow", "orange", "purple"];
    const len = colors.length;
    // stream frames from a video
    async function* streamFrames() {
      while (true) {
        yield await new Promise(async resolve => {
          let i = 0;
          // substitute canvasStream for the appropriate MediaStream instance here
          let recorder = new MediaRecorder(canvasStream);
          recorder.ondataavailable = async e => {
            try {
              // set duration of WebM file using ts-ebml
              const makeMediaRecorderBlobSeekable = await injectMetadata(e.data);
              resolve(await new Response(makeMediaRecorderBlobSeekable).arrayBuffer());
            } catch (e) {
              console.error(e);
              console.trace();
            }
          };
          recorder.start();
          setTimeout(() => {
            recorder.stop();
          }, 5000);
          // video stream for demonstration
          do {
            for (; i < len; i++) {
              for (let j = 0; j < 30; j++) {
                ctx.clearRect(0, 0, width, height);
                ctx.fillStyle = colors[i];
                ctx.fillRect(0, 0, width, height);
                stream.requestFrame();
                if (recorder.state !== "recording") {
                  break;
                }
                await new Promise(_resolve => setTimeout(_resolve, 1000 / 30));
              }
            }
            i = 0;
          } while (recorder.state === "recording");
        });
      }
    }
    const clientStream = window.open("client.html", "_blank");
    const startStream = async _ => {
      for await (const frame of streamFrames()) {
        clientStream.postMessage(frame, [frame]);
      }
    }
    onmessage = e => {
      console.log(e.data, e.origin);
      if (e.origin === location.origin) {
        startStream();
      }
    }
  </script>
</body>
</html>

"client.js" (client.html)

<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <video controls muted autoplay></video>
  <script>
    console.log("loaded");
    const video = document.querySelector("video");
    const mediaSource = new MediaSource();
    const mimeCodec = "video/webm;codecs=vp8";
    let sourceBuffer;
    mediaSource.onsourceopen = e => {
      sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
      sourceBuffer.mode = "sequence";
      sourceBuffer.onupdateend = e => {
        console.log(e);
      }
    }
    video.src = URL.createObjectURL(mediaSource);
    opener.postMessage("ready");
    onmessage = e => {
      if (sourceBuffer.updating === false) {
        sourceBuffer.appendBuffer(e.data);
      } else {
        console.log(sourceBuffer);
      }
    }
  </script>
</body>
</html>

https://plnkr.co/edit/KXdaXG?p=preview

@guest271314
Owner

An alternative approach would be to use addTrack() and removeTrack() of MediaStream at "client.js", which would not require setting the duration of the WebM file using ts-ebml. Since the video at the client is not expected to be recorded, we do not need to be concerned with MediaRecorder stopping when a MediaStreamTrack is added to or removed from the MediaStream instance set as srcObject of the HTML <video> element.

At "client.js"

const mediaStream = new MediaStream();
const video = document.querySelector("video");
video.srcObject = mediaStream;
window.on_data_from_streamer = function(arraybuffer) {
  const videoStream = document.createElement("video");
  videoStream.onplay = _ => {
    const stream = videoStream.captureStream();
    mediaStream.getTracks().forEach(track => {
      mediaStream.removeTrack(track);
    });
    stream.getTracks().forEach(track => {
      mediaStream.addTrack(track);
    });
  };
  videoStream.src = URL.createObjectURL(new Blob([arraybuffer]));
}

@guest271314
Owner

@titoBouzout An approach using MediaStream addTrack() and removeTrack() methods at "client.js" (client.html)

<!DOCTYPE html>
<html>

<head>
</head>

<body>
  <video controls autoplay muted></video>
  <script>
    console.log("loaded");
    const video = document.querySelector("video");
    const mediaStream = new MediaStream();
    video.srcObject = mediaStream;
    const videoStream = document.createElement("video");
    videoStream.oncanplay = async e => {
      if (videoStream.buffered.length) {
        console.log(videoStream.buffered.start(0));
      }
      if (videoStream.paused) {
        try {
          videoStream.play();
        } catch (e) {
          console.error(e);
        }
      } else {
        console.log(videoStream.readyState);
      }
    };
    opener.postMessage("ready");
    onmessage = e => {
      videoStream.onplay = _ => {
        videoStream.onplay = null;
        const stream = videoStream.captureStream();
        const [audioTrack] = stream.getAudioTracks();
        const [videoTrack] = stream.getVideoTracks();
        stream.getTracks().forEach(track => {
          mediaStream.addTrack(track);
        });
        [mediaStream.getAudioTracks().find(({
            id
          }) => id !== audioTrack.id),
          mediaStream.getVideoTracks().find(({
            id
          }) => id !== videoTrack.id)
        ]
        .forEach(track => {
          if (track) {
            // set track enabled to false, stop() track
            track.enabled = false;
            track.stop();
            mediaStream.removeTrack(track);
          }
        });
      };
      videoStream.src = URL.createObjectURL(new Blob([e.data], {
        type: "video/webm;codecs=vp9"
      }));
    };
  </script>
</body>
</html>

https://plnkr.co/edit/MeWIKs?p=preview

@titoBouzout
Author

titoBouzout commented Aug 12, 2019

Thanks, that's a lot of examples. Sadly I can't use the canvas one, because when you get a frame with createImageBitmap you generate an uncompressed image. If you were using a local network that may not be a problem, but over the internet, and to many clients located basically anywhere, it is difficult.

@guest271314
Owner

@titoBouzout Can you describe the issue with generating an uncompressed image? Would substituting OffscreenCanvas for createImageBitmap() resolve that concern?

@titoBouzout
Author

I need to send these images over the network, around 60 images (1920x1080) per second; that's why video formats exist. Btw, I solved the problem differently. I just restart the stream when I need to, and I let the current video finish while at the same time I create a new video at the very same position as the current one. Once the one we're watching ends, I remove it and the other plays.

@guest271314
Owner

@titoBouzout Is this issue resolved?

@guest271314
Owner

@titoBouzout Since window.open() is used for the examples, another option would be to assign the existing MediaStream globally to the new window instead of postMessage()

    // "streamer.js"
    const clientStream = window.open("client.html", "_blank");
    clientStream.mediaStream = canvasStream;
    // "client.js"
    const video = document.querySelector("video");
    video.srcObject = this.mediaStream;

https://plnkr.co/edit/BwVzYU?p=preview

@titoBouzout
Author

Thanks, it is resolved yes :)
