1661 bug fix raw diff viewer #6

Draft: wants to merge 43 commits into main from 1661-bug-fix

Conversation

gretchenfrage (Owner)

No description provided.

gretchenfrage changed the title from prototype fix to 1661 bug fix raw diff viewer on Apr 14, 2024
gretchenfrage force-pushed the 1661-bug-fix branch 6 times, most recently from bfc9b75 to 89f208d on April 16, 2024 00:48
gretchenfrage force-pushed the 1661-bug-fix branch 6 times, most recently from d0e8175 to ce62a7a on April 20, 2024 22:02
gretchenfrage force-pushed the 1661-bug-fix branch 3 times, most recently from 3dcc72a to eb372bb on April 24, 2024 05:35
Commit messages:

Retry source CIDs are under our control and are already required to be
unpredictable, so they make a fine basis for key derivation.

Allows multiple tasks to concurrently wait for writability on the same
socket. See the [tokio] and [async-io] docs for details.

[tokio]: https://docs.rs/tokio/latest/tokio/net/struct.UdpSocket.html
[async-io]: https://docs.rs/async-io/latest/async_io/struct.Async.html#concurrent-io

Dropping these has low cost because they're not associated with any
connection.

`UdpState` was removed some time ago, and various other interface
details have changed. As a result, the build would fail on platforms
without native support.

As we no longer buffer multiple transmits in memory, this complexity
is unused. GSO is expected to account for most, if not all, of the
performance benefit.

Because we no longer buffer transmits for unpredictable periods,
there's no need to share ownership of their contents.

This didn't impact any tests, but was confusing.

We no longer need to share ownership of this memory, so we should use
a simpler type to reflect our simpler requirements.
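
One of the commit messages above notes that multiple tasks can now concurrently wait for writability on the same socket. The snippet below is a minimal sketch of that pattern at the tokio level, not quinn's own socket wrapper; the addresses and payloads are placeholders, and a production sender would retry on `WouldBlock` rather than sending once.

```rust
use std::sync::Arc;

use tokio::net::UdpSocket;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Placeholder addresses; any bound UDP socket works for the illustration.
    let socket = Arc::new(UdpSocket::bind("127.0.0.1:0").await?);
    socket.connect("127.0.0.1:9000").await?;

    let mut tasks = Vec::new();
    for i in 0..2u32 {
        let socket = Arc::clone(&socket);
        tasks.push(tokio::spawn(async move {
            // Both tasks may await writability on the shared socket concurrently.
            socket.writable().await?;
            // `writable` can complete spuriously; a real sender would retry on
            // `WouldBlock`, but one attempt keeps the sketch short.
            let _ = socket.try_send(format!("hello from task {i}").as_bytes());
            Ok::<_, std::io::Error>(())
        }));
    }
    for task in tasks {
        task.await.expect("task panicked")?;
    }
    Ok(())
}
```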
gretchenfrage force-pushed the 1661-bug-fix branch 2 times, most recently from c63dc5d to 03294a8 on April 26, 2024 04:31
gretchenfrage force-pushed the 1661-bug-fix branch 2 times, most recently from fc23130 to eba944e on April 30, 2024 23:18
Closes quinn-rs#1820

The fix:

- The Endpoint now maintains a slab with an entry for each pending Incoming,
  used to buffer datagrams received for that connection attempt (see the
  sketch below).
- ConnectionIndex now maps the initial DCID to that slab key as soon as the
  Incoming is constructed.
- If the Incoming is accepted, the mapping is replaced with one to the new
  ConnectionHandle, and all buffered datagrams are fed to the newly
  constructed Connection.
- If the Incoming is refused, retried, or ignored, or if accepting it errors,
  the mapping and slab entry are cleaned up to prevent a memory leak.
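
For orientation, here is a rough sketch of the routing and buffering scheme described in the list above. The names (`RouteTo`, `register_incoming`, `accept_incoming`, `discard_incoming`, and so on) are invented for this illustration and are not quinn's actual internals; the real code additionally enforces the limits discussed under "Additional considerations".

```rust
// Illustrative sketch only; not quinn's actual implementation.
use std::collections::HashMap;

use slab::Slab;

#[derive(Clone, Copy)]
struct ConnectionHandle(usize);

/// Where datagrams carrying a given initial DCID should be routed.
enum RouteTo {
    /// Still pending: buffer into the slab entry at this key.
    Incoming(usize),
    /// Accepted: hand off to the established connection.
    Connection(ConnectionHandle),
}

#[derive(Default)]
struct Endpoint {
    /// Plays the ConnectionIndex role: initial DCID -> routing decision.
    index: HashMap<Vec<u8>, RouteTo>,
    /// One entry per pending Incoming, holding datagrams received before the
    /// application has decided what to do with the attempt.
    incoming_buffers: Slab<Vec<Vec<u8>>>,
}

impl Endpoint {
    /// On construction of an Incoming: map the initial DCID to a fresh slab
    /// entry immediately, so follow-up datagrams can be buffered.
    fn register_incoming(&mut self, initial_dcid: Vec<u8>) -> usize {
        let key = self.incoming_buffers.insert(Vec::new());
        self.index.insert(initial_dcid, RouteTo::Incoming(key));
        key
    }

    /// On receipt of a datagram for a still-pending Incoming: buffer it.
    fn buffer_datagram(&mut self, key: usize, datagram: Vec<u8>) {
        self.incoming_buffers[key].push(datagram);
    }

    /// On accept: re-point the index at the new connection and return the
    /// buffered datagrams so they can be fed to the new Connection.
    fn accept_incoming(
        &mut self,
        initial_dcid: &[u8],
        handle: ConnectionHandle,
    ) -> Vec<Vec<u8>> {
        match self
            .index
            .insert(initial_dcid.to_vec(), RouteTo::Connection(handle))
        {
            Some(RouteTo::Incoming(key)) => self.incoming_buffers.remove(key),
            _ => Vec::new(),
        }
    }

    /// On refuse/retry/ignore, or if accepting fails: drop both the index
    /// entry and the buffered datagrams so nothing leaks.
    fn discard_incoming(&mut self, initial_dcid: &[u8]) {
        if let Some(RouteTo::Incoming(key)) = self.index.remove(initial_dcid) {
            self.incoming_buffers.remove(key);
        }
    }
}
```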

Additional considerations:

- The Incoming::ignore operation can no longer be implemented as simply
  dropping the Incoming. To help prevent incorrect API usage, proto::Incoming
  now logs a warning if it is dropped without being passed to
  Endpoint::accept/refuse/retry/ignore (see the drop-guard sketch below).
- Three things protect against memory exhaustion attacks here (an example of
  tuning them appears after this list):

  1. The MAX_INCOMING_CONNECTIONS limit is moved from quinn to proto, limiting
     the number of concurrent incoming connections for which datagrams will be
     buffered before the application decides what to do with them. It is also
     changed from a constant to a server config field, max_incoming.
  2. The data buffered for each individual Incoming is capped by a new server
     config field, incoming_buffer_size; datagrams received beyond that limit
     are discarded.
  3. The total data buffered across all pending Incomings is capped by a new
     server config field, incoming_buffer_size_total; datagrams received
     beyond that limit are likewise discarded.
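
A minimal sketch of the drop-guard behavior mentioned in the first consideration, assuming a simple `consumed` flag; the field and method names here are illustrative, not quinn-proto's actual ones.

```rust
// Sketch of the "warn if dropped without a decision" guard; names are invented.
use tracing::warn;

pub struct Incoming {
    /// Set once the Incoming has been handed to accept/refuse/retry/ignore.
    consumed: bool,
    // ... details of the connection attempt elided ...
}

impl Incoming {
    /// Called by Endpoint::accept/refuse/retry/ignore before taking ownership
    /// of the attempt's state.
    fn mark_consumed(&mut self) {
        self.consumed = true;
    }
}

impl Drop for Incoming {
    fn drop(&mut self) {
        if !self.consumed {
            // The endpoint is still buffering datagrams for this attempt, so a
            // silent drop would strand that state; make the misuse loud.
            warn!("Incoming dropped without being passed to Endpoint::accept/refuse/retry/ignore");
        }
    }
}
```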
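And a sketch of how an application might tune the three limits. The names come from the description above, but treating them as setter-style methods on quinn::ServerConfig is an assumption of this sketch rather than the released API, and the numeric values are arbitrary examples.

```rust
// Assumed setter-style methods on quinn::ServerConfig; values are examples only.
fn tune_incoming_limits(config: &mut quinn::ServerConfig) {
    // 1. Bound how many connection attempts may sit undecided at once
    //    (replacing the old MAX_INCOMING_CONNECTIONS constant in quinn).
    config.max_incoming(1 << 16);
    // 2. Bound the bytes buffered for any single pending Incoming; datagrams
    //    past this limit are discarded.
    config.incoming_buffer_size(10 << 20);
    // 3. Bound the bytes buffered across all pending Incomings combined.
    config.incoming_buffer_size_total(100 << 20);
}
```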