Add jitter buffer #158

Merged
merged 10 commits into master from sgfn/jitter-buffer
Sep 20, 2024
Conversation

Member
@sgfn sgfn commented Aug 20, 2024

The battle-tested (I would assume) implementation from Membrane RTP Plugin

@sgfn sgfn requested a review from LVala August 20, 2024 15:03

codecov bot commented Aug 20, 2024

Codecov Report

Attention: Patch coverage is 96.00000% with 5 lines in your changes missing coverage. Please review.

Project coverage is 88.03%. Comparing base (f310ba0) to head (babf6db).
Report is 1 commit behind head on master.

Files with missing lines                          Patch %   Missing
lib/ex_webrtc/rtp/jitter_buffer/packet_store.ex   92.85%    4
lib/ex_webrtc/rtp/jitter_buffer/heap.ex           97.22%    1
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #158      +/-   ##
==========================================
+ Coverage   87.44%   88.03%   +0.58%     
==========================================
  Files          40       44       +4     
  Lines        2231     2356     +125     
==========================================
+ Hits         1951     2074     +123     
- Misses        280      282       +2     
Files with missing lines                          Coverage             Δ
lib/ex_webrtc/rtp/jitter_buffer.ex                100.00% <100.00%>    (ø)
test/support/packet_factory.ex                    100.00% <100.00%>    (ø)
lib/ex_webrtc/rtp/jitter_buffer/heap.ex           97.22% <97.22%>      (ø)
lib/ex_webrtc/rtp/jitter_buffer/packet_store.ex   92.85% <92.85%>      (ø)

... and 2 files with indirect coverage changes


Continue to review full report in Codecov by Sentry.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f310ba0...babf6db.

lib/ex_webrtc/rtp/jitter_buffer.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved)
@sgfn sgfn requested a review from LVala August 22, 2024 10:54
Member
@LVala LVala left a comment

Have @mickel8 take a look at the API and we are probably good to go

lib/ex_webrtc/rtp/jitter_buffer.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved)
@mickel8 mickel8 self-requested a review August 24, 2024 11:28
mix.exs (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer/packet_store.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer/packet_store.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved)
  @opaque t :: %__MODULE__{
            latency: non_neg_integer(),
            store: PacketStore.t(),
            state: :initial_wait | :timer_set | :timer_not_set
Member

I think we could replace this with a started? boolean and change maybe_set_timer to get_next_timeout.
Just return the time to the next action.

  @spec get_next_timeout(t()) :: non_neg_integer() | nil
  defp get_next_timeout(buffer)

  defp get_next_timeout(%{started: false} = buffer) do
    case PacketStore.first_record_timestamp(buffer.store) do
      # If we're inserting the very first packet, set the initial latency timer
      nil -> buffer.latency
      _ts -> nil
    end
  end

  defp get_next_timeout(%{started: true} = buffer) do
    case PacketStore.first_record_timestamp(buffer.store) do
      nil ->
        nil

      timestamp_ms ->
        since_insertion = System.monotonic_time(:millisecond) - timestamp_ms
        max(0, buffer.latency - since_insertion)
    end
  end

If we also decide that the latency for the first packet is not needed, we could simplify this further into:

  defp get_next_timeout(buffer) do
    case PacketStore.first_record_timestamp(buffer.store) do
      nil ->
        nil

      timestamp_ms ->
        since_insertion = System.monotonic_time(:millisecond) - timestamp_ms
        max(0, buffer.latency - since_insertion)
    end
  end

Does it make sense to you?

Member Author

Not differentiating between having the timer set and unset means we'll be telling the user to set a timer every time a packet is inserted, which to me seems quite unnecessary and wasteful when it can easily be avoided with just a few lines of code.
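
For illustration only, a rough sketch of the kind of guard meant here (the actual maybe_set_timer/1 in this PR may have a different shape and return value):

  # Sketch: report a new timeout to the caller only when no timer is currently
  # set, so ordinary inserts don't keep asking the user to re-arm a timer that
  # is already running.
  defp maybe_set_timer(%__MODULE__{state: :timer_not_set} = buffer) do
    case PacketStore.first_record_timestamp(buffer.store) do
      # Nothing buffered yet, so there is nothing to schedule.
      nil ->
        {buffer, nil}

      timestamp_ms ->
        since_insertion = System.monotonic_time(:millisecond) - timestamp_ms
        {%__MODULE__{buffer | state: :timer_set}, max(0, buffer.latency - since_insertion)}
    end
  end

  # With a timer already running (or during the initial wait), don't ask for another one.
  defp maybe_set_timer(buffer), do: {buffer, nil}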

Member Author

PS I love that you chose to include the typespec for the private function here :)

Member

Not differentiating between having the timer set and unset means we'll be telling the user to set a timer every time a packet is inserted

To be precise, every time the user inserts something while we are waiting for some packet. In most cases, when packets are ordered, we will flush them immediately.

Anyway, this indeed might not be the best UX, I didn't notice that earlier 🤔

On the other hand, right now, if someone calls handle_timeout too early, they will get nil as the timer and think everything is okay

Member Author

To be precise, every time the user inserts something while we are waiting for some packet. In most cases, when packets are ordered, we will flush them immediately.

Yeah, you're correct, my bad

On the other hand, right now, if someone calls handle_timeout too early, they will get nil as the timer and think everything is okay

I don't see how that would happen: we change the buffer state to :timer_not_set immediately when handle_timeout is called, so all of the timer-setting code gets executed:

  def handle_timeout(buffer) do
    %__MODULE__{buffer | state: :timer_not_set} |> send_packets()
  end
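
For context, a purely hypothetical caller-side sketch of the timer protocol being discussed; the message name, the GenServer wrapper, the forward_packet/1 helper, and the return shape of handle_timeout/1 are assumptions for illustration, not the actual ex_webrtc API:

  # Hypothetical caller (return shape {packets, timeout_ms, buffer} is assumed):
  # when the jitter buffer reports a timeout, arm a timer; when it fires, call
  # handle_timeout/1 and forward whatever packets it releases.
  def handle_info(:jitter_buffer_timeout, state) do
    {packets, timeout_ms, buffer} = JitterBuffer.handle_timeout(state.buffer)
    if timeout_ms, do: Process.send_after(self(), :jitter_buffer_timeout, timeout_ms)
    Enum.each(packets, &forward_packet/1)
    {:noreply, %{state | buffer: buffer}}
  end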

lib/ex_webrtc/rtp/jitter_buffer.ex (resolved)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer/packet_store.ex (resolved, outdated)
test/ex_webrtc/rtp/jitter_buffer_test.exs (resolved, outdated)
test/ex_webrtc/rtp/jitter_buffer_test.exs (resolved, outdated)
test/ex_webrtc/rtp/jitter_buffer_test.exs (resolved)
test/ex_webrtc/rtp/jitter_buffer_test.exs (resolved)
Member Author
@sgfn sgfn left a comment

First part of the second batch of fixes. Will replace the PQ and fix the commented-out parts in packet_store_test.exs

lib/ex_webrtc/rtp/jitter_buffer/packet_store.ex (resolved, outdated)
lib/ex_webrtc/rtp/jitter_buffer.ex (resolved)
lib/ex_webrtc/rtp/jitter_buffer/packet_store.ex (resolved, outdated)
test/ex_webrtc/rtp/jitter_buffer/packet_store_test.exs (resolved, outdated)
test/ex_webrtc/rtp/jitter_buffer_test.exs (resolved)
test/ex_webrtc/rtp/jitter_buffer_test.exs (resolved, outdated)
test/ex_webrtc/rtp/jitter_buffer_test.exs (resolved)
Member
@mickel8 mickel8 left a comment

Just take care of the commented-out code in the tests and merge 👍


alias ExWebRTC.RTP.JitterBuffer.Heap

test "stores things and reports size" do
Member

Nitpick: in Elixir WebRTC we decided to avoid the old Membrane style of describing tests. Sometimes you can still find such tests, or it might not be possible to describe them in another way, but in general we favor something like this:

test "Heap.push/1"
test "Heap.root/1"
test "Heap.new/1"

That's how tests in the Elixir repo are written. We don't have to change these tests, this is just a note for the future.
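
Purely for illustration of this naming convention (using Elixir's built-in MapSet rather than anything from this PR), such tests read like:

  defmodule MapSetNamingExampleTest do
    use ExUnit.Case, async: true

    # One test per public function, named "Module.function/arity".
    test "MapSet.new/0" do
      assert MapSet.size(MapSet.new()) == 0
    end

    test "MapSet.put/2" do
      set = MapSet.new() |> MapSet.put(:a)
      assert MapSet.member?(set, :a)
    end
  end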

@@ -0,0 +1,118 @@
defmodule ExWebRTC.RTP.JitterBuffer.Heap do
Member

Awesome, thanks! 🎉

Comment on lines 95 to 107
# seq_numbers =
# store
# |> PacketStore.dump()
# |> Enum.map(& &1.packet.sequence_number)

# assert seq_numbers == [65_535, 0, 1]

# indexes =
# store
# |> PacketStore.dump()
# |> Enum.map(& &1.index)

# assert indexes == [@seq_number_limit - 1, @seq_number_limit, @seq_number_limit + 1]
Member

👀

@sgfn sgfn merged commit e42c09d into master Sep 20, 2024
1 check passed
@sgfn sgfn deleted the sgfn/jitter-buffer branch September 20, 2024 13:52