0-RTT packets can be lost due to race condition introduced along with Incoming #1820
gretchenfrage changed the title from "0-RTT packets can be lost due to bug introduced along with Incoming" to "0-RTT packets can be lost due to race condition introduced along with Incoming" on Apr 14, 2024
Thanks for following up on this!
gretchenfrage added a commit to gretchenfrage/quinn that referenced this issue on Apr 14, 2024:
Closes quinn-rs#1820

The fix:
- Endpoint now maintains a slab with an entry for each pending Incoming, to buffer received data.
- ConnectionIndex now maps the initial DCID to that slab key immediately upon construction of the Incoming.
- If the Incoming is accepted, the association is replaced with an association to the ConnectionHandle, and all buffered datagrams are fed to the newly constructed Connection.
- If the Incoming is refused/retried/ignored, or accepting errors, the association and slab entry are cleaned up to prevent a memory leak.

Additional considerations:
- The Incoming::ignore operation can no longer be implemented as just dropping it. To help prevent incorrect API usage, proto::Incoming is modified to log a warning if it is dropped without being passed to Endpoint::accept/refuse/retry/ignore.
- To help protect against memory exhaustion attacks, per-Incoming buffered data is limited to twice the receive window or 10 KB, whichever is larger. Excess packets are silently dropped.
- Does this introduce a new vulnerability, in which an attacker who observed a client attempting to initiate a 0-RTT connection could spam the server with 0-RTT packets carrying the same connection ID? I do think so. Is this a severe problem? Two reasons I don't think so: 1. The default receive window is set to the maximum value, so this limit won't actually kick in unless the user is already hardening against adverse conditions. 2. An on-path attacker can already disrupt a connection handshake if 0.5-RTT data is being used, so this probably doesn't expand the set of situations vulnerable to this kind of attack. Could it be avoided? Possibly, by adding state to the buffering logic to validate whether these packets are validly encrypted for the associated connection; however, that may make these operations costly enough to start defeating the DDoS resistance of the Incoming API.
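The buffering scheme described above can be pictured with a small, self-contained model. This is a hedged sketch, not quinn's actual code: the names (`RouteEntry`, `Endpoint::new_incoming`, `accept`, the `delivered` map standing in for real connections) are all illustrative, and real quinn uses a proper slab rather than a `Vec<Option<_>>`.

```rust
use std::collections::HashMap;

// What a DCID currently resolves to inside the endpoint.
enum RouteEntry {
    // Handshake not yet accepted: datagrams go into the pending buffer.
    Pending(usize), // slab key
    // Accepted: datagrams go straight to the connection.
    Connection(u32), // connection handle
}

struct Endpoint {
    index: HashMap<Vec<u8>, RouteEntry>, // DCID -> route (models ConnectionIndex)
    pending: Vec<Option<Vec<Vec<u8>>>>,  // slab of per-Incoming datagram buffers
    delivered: HashMap<u32, Vec<Vec<u8>>>, // stand-in for real Connections
}

impl Endpoint {
    fn new() -> Self {
        Endpoint { index: HashMap::new(), pending: Vec::new(), delivered: HashMap::new() }
    }

    // When an Initial arrives: create the Incoming and map its DCID to a
    // pending-buffer slot immediately, closing the race window.
    fn new_incoming(&mut self, dcid: Vec<u8>) {
        let key = self.pending.len();
        self.pending.push(Some(Vec::new()));
        self.index.insert(dcid, RouteEntry::Pending(key));
    }

    fn handle_datagram(&mut self, dcid: &[u8], datagram: Vec<u8>) {
        match self.index.get(dcid) {
            Some(RouteEntry::Pending(key)) => {
                // 0-RTT data racing ahead of accept(): buffer instead of drop.
                self.pending[*key].as_mut().unwrap().push(datagram);
            }
            Some(RouteEntry::Connection(handle)) => {
                self.delivered.entry(*handle).or_default().push(datagram);
            }
            None => {} // unknown DCID
        }
    }

    // Accepting replaces the pending association with a connection handle
    // and feeds every buffered datagram to the new connection.
    fn accept(&mut self, dcid: &[u8], handle: u32) {
        let buffered = match self.index.insert(dcid.to_vec(), RouteEntry::Connection(handle)) {
            Some(RouteEntry::Pending(key)) => self.pending[key].take().unwrap_or_default(),
            _ => Vec::new(),
        };
        self.delivered.entry(handle).or_default().extend(buffered);
    }
}

fn main() {
    let mut ep = Endpoint::new();
    ep.new_incoming(b"dcid-1".to_vec());
    // 0-RTT datagrams arrive before the application accepts:
    ep.handle_datagram(b"dcid-1", b"zero-rtt-1".to_vec());
    ep.handle_datagram(b"dcid-1", b"zero-rtt-2".to_vec());
    ep.accept(b"dcid-1", 7);
    // Both buffered datagrams reached the connection instead of being lost.
    assert_eq!(ep.delivered[&7].len(), 2);
    // Later datagrams route directly to the connection.
    ep.handle_datagram(b"dcid-1", b"one-rtt".to_vec());
    assert_eq!(ep.delivered[&7].len(), 3);
    println!("ok");
}
```

The key design point is that the DCID is indexed at `Incoming` construction time, so there is no window in which a 0-RTT datagram matches nothing.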
gretchenfrage added a commit to gretchenfrage/quinn that referenced this issue on Apr 15, 2024:
Closes quinn-rs#1820

The fix:
- Endpoint now maintains a slab with an entry for each pending Incoming, to buffer received data.
- ConnectionIndex now maps the initial DCID to that slab key immediately upon construction of the Incoming.
- If the Incoming is accepted, the association is replaced with an association to the ConnectionHandle, and all buffered datagrams are fed to the newly constructed Connection.
- If the Incoming is refused/retried/ignored, or accepting errors, the association and slab entry are cleaned up to prevent a memory leak.

Additional considerations:
- The Incoming::ignore operation can no longer be implemented as just dropping it. To help prevent incorrect API usage, proto::Incoming is modified to log a warning if it is dropped without being passed to Endpoint::accept/refuse/retry/ignore.
- Three things protect against memory exhaustion attacks here:
  1. The MAX_INCOMING_CONNECTIONS limit is moved from quinn to proto, limiting the number of concurrent incoming connections for which datagrams will be buffered before the application decides what to do with them.
  2. Per-incoming buffered data is limited to the maximum initially receivable stream data per the transport parameters, plus one datagram; beyond this, subsequent packets are discarded if received in these conditions.
  3. The sum total of all incoming buffered data is limited to a new MAX_ALL_INCOMING_BUFFERED constant; beyond this, subsequent packets are discarded if received in these conditions.
gretchenfrage added a commit to gretchenfrage/quinn that referenced this issue on Apr 20, 2024:
Closes quinn-rs#1820

The fix:
- Endpoint now maintains a slab with an entry for each pending Incoming, to buffer received data.
- ConnectionIndex now maps the initial DCID to that slab key immediately upon construction of the Incoming.
- If the Incoming is accepted, the association is replaced with an association to the ConnectionHandle, and all buffered datagrams are fed to the newly constructed Connection.
- If the Incoming is refused/retried/ignored, or accepting errors, the association and slab entry are cleaned up to prevent a memory leak.

Additional considerations:
- The Incoming::ignore operation can no longer be implemented as just dropping it. To help prevent incorrect API usage, proto::Incoming is modified to log a warning if it is dropped without being passed to Endpoint::accept/refuse/retry/ignore.
- Three things protect against memory exhaustion attacks here:
  1. The MAX_INCOMING_CONNECTIONS limit is moved from quinn to proto, limiting the number of concurrent incoming connections for which datagrams will be buffered before the application decides what to do with them. Also, it is changed from a constant to a field of the server config, max_incoming.
  2. Per-incoming buffered data is limited to a new limit stored in the server config, max_buffer_bytes_per_incoming, which defaults to the maximum initially receivable stream data per the transport parameters, plus 1200 bytes to allow for some QUIC overhead. Beyond this limit, subsequent packets are discarded if received in these conditions.
  3. The sum total of all incoming buffered data is limited to a new limit stored in the server config, max_buffer_bytes_all_incoming, beyond which subsequent packets are discarded if received in these conditions.
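The three-layer limit check described in this revision can be sketched as a single predicate. This is an illustrative model only: the struct and function names below are hypothetical, and the field names merely mirror the ones the commit message proposes (`max_incoming`, `max_buffer_bytes_per_incoming`, `max_buffer_bytes_all_incoming`); the concrete default values are made up for the example.

```rust
struct ServerConfig {
    max_incoming: usize,                 // cap on concurrently pending Incomings
    max_buffer_bytes_per_incoming: u64,  // cap per pending Incoming
    max_buffer_bytes_all_incoming: u64,  // cap across all pending Incomings
}

struct BufferState {
    incoming_count: usize,   // pending Incomings currently tracked
    per_incoming_bytes: u64, // bytes already buffered for this Incoming
    total_bytes: u64,        // bytes already buffered across all Incomings
}

// Decide whether one more datagram of `datagram_len` bytes may be buffered;
// if any of the three limits would be exceeded, the packet is discarded.
fn may_buffer(cfg: &ServerConfig, st: &BufferState, datagram_len: u64) -> bool {
    st.incoming_count <= cfg.max_incoming
        && st.per_incoming_bytes + datagram_len <= cfg.max_buffer_bytes_per_incoming
        && st.total_bytes + datagram_len <= cfg.max_buffer_bytes_all_incoming
}

fn main() {
    let cfg = ServerConfig {
        max_incoming: 1 << 16,
        // Illustrative default: an initial stream-data window of 65,536 bytes
        // plus 1200 bytes of QUIC overhead, per the commit message.
        max_buffer_bytes_per_incoming: 65_536 + 1200,
        max_buffer_bytes_all_incoming: 100 * 1024 * 1024,
    };
    let st = BufferState { incoming_count: 1, per_incoming_bytes: 66_000, total_bytes: 66_000 };
    assert!(may_buffer(&cfg, &st, 700));   // 66,700 is still under the 66,736-byte cap
    assert!(!may_buffer(&cfg, &st, 1200)); // 67,200 would exceed the per-incoming cap
    println!("ok");
}
```

Checking limits before buffering, rather than after, means an attacker's excess packets never occupy memory at all.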
gretchenfrage added a commit to gretchenfrage/quinn that referenced this issue on Apr 24, 2024:
Closes quinn-rs#1820

The fix:
- Endpoint now maintains a slab with an entry for each pending Incoming, to buffer received data.
- ConnectionIndex now maps the initial DCID to that slab key immediately upon construction of the Incoming.
- If the Incoming is accepted, the association is replaced with an association to the ConnectionHandle, and all buffered datagrams are fed to the newly constructed Connection.
- If the Incoming is refused/retried/ignored, or accepting errors, the association and slab entry are cleaned up to prevent a memory leak.

Additional considerations:
- The Incoming::ignore operation can no longer be implemented as just dropping it. To help prevent incorrect API usage, proto::Incoming is modified to log a warning if it is dropped without being passed to Endpoint::accept/refuse/retry/ignore.
- Three things protect against memory exhaustion attacks here:
  1. The MAX_INCOMING_CONNECTIONS limit is moved from quinn to proto, limiting the number of concurrent incoming connections for which datagrams will be buffered before the application decides what to do with them. Also, it is changed from a constant to a field of the server config, max_incoming.
  2. Per-incoming buffered data is limited to a new limit stored in the server config, incoming_buffer_size, beyond which subsequent packets are discarded if received in these conditions.
  3. The sum total of all incoming buffered data is limited to a new limit stored in the server config, incoming_buffer_size_total, beyond which subsequent packets are discarded if received in these conditions.
djc pushed a commit that referenced this issue on May 1, 2024:
Closes #1820

The fix:
- Endpoint now maintains a slab with an entry for each pending Incoming, to buffer received data.
- ConnectionIndex now maps the initial DCID to that slab key immediately upon construction of the Incoming.
- If the Incoming is accepted, the association is replaced with an association to the ConnectionHandle, and all buffered datagrams are fed to the newly constructed Connection.
- If the Incoming is refused/retried/ignored, or accepting errors, the association and slab entry are cleaned up to prevent a memory leak.

Additional considerations:
- The Incoming::ignore operation can no longer be implemented as just dropping it. To help prevent incorrect API usage, proto::Incoming is modified to log a warning if it is dropped without being passed to Endpoint::accept/refuse/retry/ignore.
- Three things protect against memory exhaustion attacks here:
  1. The MAX_INCOMING_CONNECTIONS limit is moved from quinn to proto, limiting the number of concurrent incoming connections for which datagrams will be buffered before the application decides what to do with them. Also, it is changed from a constant to a field of the server config, max_incoming.
  2. Per-incoming buffered data is limited to a new limit stored in the server config, incoming_buffer_size, beyond which subsequent packets are discarded if received in these conditions.
  3. The sum total of all incoming buffered data is limited to a new limit stored in the server config, incoming_buffer_size_total, beyond which subsequent packets are discarded if received in these conditions.
Unfortunately it looks like in #1752 I accidentally introduced a bug. Apologies for not catching it.
Bug description: If a client sends a 0-RTT datagram to the server after the datagram carrying its Initial packet, and that 0-RTT datagram is received and processed by the endpoint before the corresponding Incoming is accept-ed, the 0-RTT packet will not find any associated connection and will be dropped. As a consequence, request/response pairs on connections where this occurs appear to take 3 round trips to complete.

I am working on a fix, and I will see if I can get a PR opened for it tomorrow.
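The race can be modeled minimally as follows. This is a hedged, illustrative sketch of the pre-fix behavior, not quinn's real code: `BuggyEndpoint` and its methods are invented names, and the routing table is reduced to a plain map. The point is that the DCID-to-connection association only exists after accept, so anything arriving earlier matches nothing.

```rust
use std::collections::HashMap;

// Pre-fix model: the endpoint only maps a DCID to a connection once the
// Incoming has been accepted, so a 0-RTT datagram that arrives in between
// matches nothing and is silently dropped.
struct BuggyEndpoint {
    routes: HashMap<Vec<u8>, Vec<Vec<u8>>>, // DCID -> delivered datagrams
}

impl BuggyEndpoint {
    // Returns true if the datagram found a connection, false if it was dropped.
    fn handle_datagram(&mut self, dcid: &[u8], datagram: Vec<u8>) -> bool {
        match self.routes.get_mut(dcid) {
            Some(delivered) => {
                delivered.push(datagram);
                true
            }
            None => false, // no associated connection yet: packet dropped
        }
    }

    // accept() creates the route; only datagrams arriving afterwards survive.
    fn accept(&mut self, dcid: Vec<u8>) {
        self.routes.insert(dcid, Vec::new());
    }
}

fn main() {
    let mut ep = BuggyEndpoint { routes: HashMap::new() };
    // The 0-RTT datagram races ahead of the application's accept() call...
    assert!(!ep.handle_datagram(b"dcid", b"zero-rtt".to_vec())); // lost
    ep.accept(b"dcid".to_vec());
    // ...while anything sent after accept is delivered normally.
    assert!(ep.handle_datagram(b"dcid", b"one-rtt".to_vec()));
    println!("ok");
}
```

The lost 0-RTT data has to be retransmitted after the handshake completes, which is why affected request/response exchanges take extra round trips.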