Further improvements to Introduction to Elixir WebRTC tutorial (#141)
LVala authored Jul 25, 2024
1 parent be18ebc commit af1fec4
Showing 6 changed files with 250 additions and 173 deletions.
# Modifying the session

In the introductory tutorials, we focused on forwarding data back to the same peer. Usually, you want to connect with multiple peers, which means adding
more PeerConnections to the Elixir app, like in the diagram below.

```mermaid
flowchart LR
    WB1((Web Browser 1)) <-.-> PC1(PeerConnection 1)
    WB2((Web Browser 2)) <-.-> PC2(PeerConnection 2)
```

…new negotiation has to take place!
>
> But what does that even mean?
> Each transceiver is responsible for sending and/or receiving a single track. When you call `PeerConnection.add_track`, we actually look for a free transceiver
> (that is, one that is not sending a track already) and use it, or create a new transceiver if we don't find anything suitable. If you are very sure
> that the remote peer added _N_ new video tracks, you can add _N_ video transceivers (using `PeerConnection.add_transceiver`) and begin the negotiation as
> the offerer. If you didn't add the transceivers, the tracks added by the remote peer (the answerer) would be ignored.
135 changes: 122 additions & 13 deletions guides/introduction/consuming.md
# Consuming media data

Other than just forwarding, we would like to be able to use the media right in the Elixir app, e.g.
as input to a machine learning model, or to create a recording of a meeting.

In this tutorial, we are going to learn how to use received media as input for ML inference.

## From raw media to RTP

When the browser sends audio or video, it does the following things:

1. Capturing the media from your peripheral devices, like a webcam or microphone.
2. Encoding the media, so it takes less space and uses less network bandwidth.
3. Packing it into a single or multiple RTP packets, depending on the media chunk (e.g., video frame) size.
4. Sending it to the other peer using WebRTC.

We have to reverse these steps in order to be able to use the media:

1. We receive the media from WebRTC.
2. We unpack the encoded media from RTP packets.
3. We decode the media to a raw format.
4. We use the media however we like.

We already know how to do step 1 from previous tutorials, and step 4 is completely up to the user, so let's go through steps 2 and 3 in the next sections.

> #### Codecs {: .info}
> A media codec is a program/technique used to encode/decode digital video and audio streams. Codecs also compress the media data,
> otherwise, it would be too big to send over the network (bitrate of raw 24-bit color depth, FullHD, 60 fps video is about 3 Gbit/s!).
>
> In WebRTC, most likely you will encounter VP8, H264 or AV1 video codecs and Opus audio codec. Codecs used during the session are negotiated in
> the SDP offer/answer exchange. You can tell what codec is carried in an RTP packet by inspecting its payload type (`payload_type` field in the case of Elixir WebRTC).
> This value should correspond to one of the codecs included in the SDP offer/answer.
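
For illustration, telling audio apart from video this way boils down to a payload-type lookup. Here is a minimal sketch - the `PayloadTypes` module and the concrete numbers are made up for this example; in a real session, the mapping comes from the negotiated SDP, as payload types are assigned dynamically:

```elixir
defmodule PayloadTypes do
  # illustrative payload type numbers - read the real ones
  # from the negotiated SDP answer
  @types %{96 => :vp8, 111 => :opus}

  def codec(%{payload_type: pt}), do: Map.get(@types, pt, :unknown)
end
```

You could then pattern match on `PayloadTypes.codec(packet)` to route each packet to the right depayloader.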

## Depayloading RTP

We refer to the process of getting the media payload out of RTP packets as _depayloading_. Usually, a single video frame is split into
multiple RTP packets, and in the case of audio, each packet carries roughly 20 milliseconds of sound. Fortunately, you don't have to worry about this:
just use one of the depayloaders provided by Elixir WebRTC (see the `ExWebRTC.RTP.<codec>` submodules). For instance, when receiving VP8 RTP packets, we could depayload
the video by doing:

```elixir
def init(_) do
  # ...
  state = %{depayloader: ExWebRTC.RTP.VP8.Depayloader.new()}
  {:ok, state}
end

def handle_info({:ex_webrtc, _from, {:rtp, _track_id, nil, packet}}, state) do
  depayloader =
    case ExWebRTC.RTP.VP8.Depayloader.write(state.depayloader, packet) do
      {:ok, depayloader} ->
        depayloader

      {:ok, frame, depayloader} ->
        # we collected a whole frame (it is just a binary)!
        # we will learn what to do with it in a moment
        depayloader
    end

  {:noreply, %{state | depayloader: depayloader}}
end
```

Every time enough RTP packets arrive to complete a video frame, `VP8.Depayloader.write` returns the whole frame for further processing.

> #### Codec configuration {: .warning}
> By default, `ExWebRTC.PeerConnection` will use a set of default codecs when negotiating the connection. In such a case, you have to either:
>
> * support depayloading/decoding for all of the negotiated codecs
> * force some specific set of codecs (or even a single codec) in the `PeerConnection` configuration.
>
> Of course, the second option is much simpler, but it increases the risk of failing the negotiation, as the other peer might not support your codec of choice.
> If you still want to do it the simple way, set the codecs in `PeerConnection.start_link`:
>
> ```elixir
> codec = %ExWebRTC.RTPCodecParameters{
>   payload_type: 96,
>   mime_type: "video/VP8",
>   clock_rate: 90_000
> }
>
> {:ok, pc} = ExWebRTC.PeerConnection.start_link(video_codecs: [codec])
> ```
> This way, you will either always send/receive VP8 video, or you won't be able to negotiate a video stream at all. At least you won't encounter
> unpleasant bugs in video decoding!

## Decoding the media to raw format

Before we use the video as input to a machine learning model, we need to decode it into a raw format. Video decoding and encoding are very
complex and resource-heavy processes, so Elixir WebRTC does not provide anything for them, but you can use the `xav` library, a simple wrapper over `ffmpeg`,
to decode the VP8 video. Let's modify the snippet from the previous section to do so.
```elixir
def init(_) do
  # ...
  serving = # set up your machine learning model (e.g. using Bumblebee)

  state = %{
    depayloader: ExWebRTC.RTP.VP8.Depayloader.new(),
    decoder: Xav.Decoder.new(:vp8),
    serving: serving
  }

  {:ok, state}
end

def handle_info({:ex_webrtc, _from, {:rtp, _track_id, nil, packet}}, state) do
  depayloader =
    with {:ok, frame, depayloader} <- ExWebRTC.RTP.VP8.Depayloader.write(state.depayloader, packet),
         {:ok, raw_frame} <- Xav.Decoder.decode(state.decoder, frame) do
      # a raw frame is just a 3D matrix with the shape of resolution x colors
      # (e.g. 1920 x 1080 x 3 for a FullHD, RGB frame);
      # we can cast it to an Elixir Nx tensor and use it as the machine learning model input
      # machine learning stuff is out of scope of this tutorial, but you probably want to check out Elixir Nx and friends
      tensor = Xav.Frame.to_nx(raw_frame)
      prediction = Nx.Serving.run(state.serving, tensor)
      # do something with the prediction
      depayloader
    else
      {:ok, depayloader} ->
        depayloader

      {:error, _err} ->
        # handle the error; here we just keep the old depayloader state
        state.depayloader
    end

  {:noreply, %{state | depayloader: depayloader}}
end
```
We decoded the video, used it as input to the machine learning model, and got back some kind of prediction - do with it whatever you want.

> #### Jitter buffer {: .warning}
> Do you recall that WebRTC uses UDP under the hood, and UDP does not ensure packet ordering? We could ignore this fact when forwarding the packets (as
> it was not our job to decode/play/save the media), but out-of-order packets can seriously mess up the decoding process.
> To remedy this issue, something called a _jitter buffer_ can be used. Its basic function
> is to delay/buffer incoming packets by some amount of time, let's say 100 milliseconds, waiting for packets that might be late. Only if a packet does not arrive within the
> additional 100 milliseconds do we count it as lost. To learn more about jitter buffers, read [this article](https://bloggeek.me/webrtcglossary/jitter-buffer/).
>
> As of now, Elixir WebRTC does not provide a jitter buffer, so you either have to build something yourself or hope that such issues won't occur - but if anything
> is wrong with the decoded video, this might be the cause.
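
To make the idea concrete, here is a toy, purely illustrative reordering buffer keyed by sequence number. It has no timers and no RTP sequence number rollover handling - a real jitter buffer needs both - so treat it as a sketch only:

```elixir
defmodule NaiveOrderingBuffer do
  # store packets by sequence number and release them in order;
  # give up on a missing packet once `max_gap` newer ones are waiting
  defstruct packets: %{}, next_seq: nil, max_gap: 50

  # a packet older than the release point is too late - drop it
  def insert(%__MODULE__{next_seq: next} = buf, %{sequence_number: seq})
      when next != nil and seq < next,
      do: {[], buf}

  def insert(%__MODULE__{} = buf, %{sequence_number: seq} = packet) do
    buf = %{buf | next_seq: buf.next_seq || seq, packets: Map.put(buf.packets, seq, packet)}
    flush(buf, [])
  end

  defp flush(buf, acc) do
    cond do
      Map.has_key?(buf.packets, buf.next_seq) ->
        {packet, packets} = Map.pop(buf.packets, buf.next_seq)
        flush(%{buf | packets: packets, next_seq: buf.next_seq + 1}, [packet | acc])

      map_size(buf.packets) > buf.max_gap ->
        # the next packet is probably lost - skip it and keep flushing
        flush(%{buf | next_seq: buf.next_seq + 1}, acc)

      true ->
        {Enum.reverse(acc), buf}
    end
  end
end
```

`insert/2` returns `{packets_ready_in_order, buffer}`; the caller would feed the released packets to the depayloader.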

This tutorial shows, more or less, what the [Recognizer](https://github.com/elixir-webrtc/apps/tree/master/recognizer) app does. Check it out, along with the other
example apps in the [apps](https://github.com/elixir-webrtc/apps) repository - they're a great reference on how to implement fully-fledged apps based on Elixir WebRTC.

108 changes: 29 additions & 79 deletions guides/introduction/forwarding.md
The `packet` is an RTP packet. It contains the media data alongside some other useful information.
> RTP is a network protocol created for carrying real-time data (like media) and is used by WebRTC.
> It provides some useful features like:
>
> * sequence numbers: UDP (which is usually used by WebRTC) does not provide packet ordering, thus we need this to catch missing or out-of-order packets
> * timestamp: these can be used to correctly play the media back to the user (e.g. using the right framerate for the video)
> * payload type: thanks to this combined with information in the SDP offer/answer, we can tell which codec is carried by this packet
>
```mermaid
flowchart LR
WB((Web Browser)) <-.-> PC
```

The only thing we have to implement is the `Forwarder` process. In practice, making it a `GenServer` is probably the
easiest option, and that's what we are going to do here. Let's combine the ideas from the previous section to write it.

```elixir
def init(_) do
  {:ok, pc} = PeerConnection.start_link(ice_servers: [%{urls: "stun:stun.l.google.com:19302"}])

  # we expect to receive two tracks from the web browser - audio and video
  # so we also need to add two tracks here, we will use them
  # to loop media from the web browser tracks back to the browser
  stream_id = MediaStreamTrack.generate_stream_id()
  audio_track = MediaStreamTrack.new(:audio, [stream_id])
  video_track = MediaStreamTrack.new(:video, [stream_id])

  {:ok, _sender} = PeerConnection.add_track(pc, audio_track)
  {:ok, _sender} = PeerConnection.add_track(pc, video_track)

  # in_tracks (tracks we will receive from the browser) = %{id => kind}
  # out_tracks (tracks we will send to the browser) = %{kind => id}
  in_tracks = %{}
  out_tracks = %{audio: audio_track.id, video: video_track.id}
  {:ok, %{pc: pc, out_tracks: out_tracks, in_tracks: in_tracks}}
end
```

We started by creating the PeerConnection and adding two tracks (one for audio and one for video).
Remember that these tracks will be used to *send* data to the web browser peer. Remote tracks (the ones we will set up on the JavaScript side, like in the previous tutorial)
will arrive as messages after the negotiation is completed.

> #### What are the tracks? {: .tip}
> In the context of Elixir WebRTC, a track is simply a _track id_, _ids_ of streams this track belongs to, and a _kind_ (audio/video).
> We can either add tracks to the PeerConnection (these tracks will be used to *send* data when calling `PeerConnection.send_rtp/4` and
> for each one of the tracks, the remote peer should fire the `track` event)
>
> If you want to know more about transceivers, read the [Mastering Transceivers](https://hexdocs.pm/ex_webrtc/mastering_transceivers.html) guide.

Next, we need to take care of the offer/answer and ICE candidate exchange. This can be done exactly the same way as in the previous
tutorial, so we won't go into it here.

After the negotiation, we can expect to receive messages with notifications about new remote tracks.
Let's handle these and match them with the tracks that we are going to send to.
We need to be careful not to send packets from the audio track on a video track by mistake!
```elixir
@impl true
def handle_info({:ex_webrtc, _from, {:track, track}}, state) do
  state = put_in(state.in_tracks[track.id], track.kind)
  {:noreply, state}
end
```

We are ready to handle the incoming RTP packets!

```elixir
@impl true
def handle_info({:ex_webrtc, _from, {:rtp, track_id, nil, packet}}, state) do
  kind = Map.fetch!(state.in_tracks, track_id)
  id = Map.fetch!(state.out_tracks, kind)
  PeerConnection.send_rtp(state.pc, id, packet)

  {:noreply, state}
end
```

> change between two tracks, the payload types are dynamically assigned and may differ between RTP sessions), and some RTP header extensions. All of that is
> done by Elixir WebRTC behind the scenes, but be aware - it is not as simple as forwarding the same piece of data!
Lastly, let's take care of the client-side code. It's nearly identical to what we have written in the previous tutorial,
except for the fact that we need to handle tracks added by the Elixir's PeerConnection.

```js
const localStream = await navigator.mediaDevices.getUserMedia({audio: true, video: true});
const pc = new RTCPeerConnection({iceServers: [{urls: "stun:stun.l.google.com:19302"}]});
localStream.getTracks().forEach(track => pc.addTrack(track, localStream));

// these will be the tracks that we added using `PeerConnection.add_track` in Elixir
// but be careful! even for the same track, the ids might be different for each of the peers
pc.ontrack = event => videoPlayer.srcObject = event.streams[0];

// sending/receiving the offer/answer/candidates to the other peer is your responsibility
pc.onicecandidate = event => send_to_other_peer(event.candidate);
on_cand_received(cand => pc.addIceCandidate(cand));

// remember that we set up the Elixir app to just handle the incoming offer
// so we need to generate and send it (and thus, start the negotiation) here
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
send_offer_to_other_peer(offer);

const answer = await receive_answer_from_other_peer();
await pc.setRemoteDescription(answer);
```

And that's it! The other peer should be able to see and hear the echoed video and audio.
2 changes: 1 addition & 1 deletion guides/introduction/intro.md
your web application. Here are some example use cases:
In general, all of the use cases come down to getting media from one peer to another. In the case of Elixir WebRTC, one of the peers is usually a server,
like your Phoenix app (although it doesn't have to - there's no concept of server/client in WebRTC, so you might as well connect two browsers or two Elixir peers).

This is what the next tutorials will focus on - we will try to get media from a web browser to a simple Elixir app.