added a builder option to set the read_chunk_size #1707
Conversation
This seems like a great idea!
I think it may also be worthwhile (in a separate PR, of course) to create an adaptive read buffer strategy to be used by default, which grows and shrinks depending on how much is read from the transport each time, similar in concept to Netty's `AdaptiveRecvByteBufAllocator`.
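To make the idea concrete, here is a minimal sketch of what such an adaptive strategy could look like. All names here are hypothetical (hyper exposes no such type); it only mirrors the concept behind Netty's allocator: grow the buffer when the transport fills it, shrink it after repeated small reads.

```rust
/// Hypothetical adaptive read-buffer sizing, loosely modeled on Netty's
/// AdaptiveRecvByteBufAllocator. Not part of hyper's API.
struct AdaptiveBufSize {
    current: usize,
    min: usize,
    max: usize,
    shrink_streak: u32,
}

impl AdaptiveBufSize {
    fn new(min: usize, start: usize, max: usize) -> Self {
        AdaptiveBufSize { current: start, min, max, shrink_streak: 0 }
    }

    /// Record how many bytes the last read returned and adjust the
    /// size to use for the next read.
    fn record(&mut self, bytes_read: usize) {
        if bytes_read >= self.current {
            // The transport filled the buffer: double it (capped at max).
            self.current = (self.current * 2).min(self.max);
            self.shrink_streak = 0;
        } else if bytes_read <= self.current / 2 {
            // Two consecutive small reads: halve it (floored at min).
            self.shrink_streak += 1;
            if self.shrink_streak >= 2 {
                self.current = (self.current / 2).max(self.min);
                self.shrink_streak = 0;
            }
        } else {
            self.shrink_streak = 0;
        }
    }

    fn next_size(&self) -> usize {
        self.current
    }
}
```

The grow/shrink thresholds (full buffer, half buffer, two-read streak) are arbitrary choices for illustration; a real strategy would tune these.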
src/client/conn.rs (Outdated)

```diff
@@ -496,6 +510,8 @@
 if self.builder.h1_title_case_headers {
     conn.set_title_case_headers();
 }
+println!("Creating connection where chunk size is {:?}", self.builder.read_chunk_size);
```
This println snuck in ;)
```diff
@@ -11,15 +11,15 @@
 use tokio_io::{AsyncRead, AsyncWrite};
 use super::{Http1Transaction, ParseContext, ParsedMessage};

 /// The initial buffer size allocated before trying to read from IO.
-pub(crate) const INIT_BUFFER_SIZE: usize = 8192;
+pub(crate) const INIT_BUFFER_SIZE: usize = 64 * 1024;
```
I'd leave this default at 8kb for now. Requests without a body will likely never need bigger.
```diff
 /// The minimum value that can be set to max buffer size.
 pub const MINIMUM_MAX_BUFFER_SIZE: usize = INIT_BUFFER_SIZE;

 /// The default maximum read buffer size. If the buffer gets this big and
 /// a message is still not complete, a `TooLarge` error is triggered.
 // Note: if this changes, update server::conn::Http::max_buf_size docs.
-pub(crate) const DEFAULT_MAX_BUFFER_SIZE: usize = 8192 + 4096 * 100;
+pub(crate) const DEFAULT_MAX_BUFFER_SIZE: usize = 8192 * 1024;
```
I'd leave this as it was for now as well.
Ok. But we'll need to somehow let the user know that if they set the read_buf/chunk_size, they'll also need to raise the max buffer size, otherwise it will panic.
My inclination is to get rid of the initial check and assume that if the user ups the read_buf_size, they know what they are doing. I'm not sure, though.
Thinking about this a bit more, I think maybe we just set the max size to 4x the initial buffer size if the user updates the initial buffer size.
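That proposal can be sketched in a few lines. This is only an illustration of the suggested rule, not hyper's actual code; `effective_max_buf_size` is a hypothetical helper, and the default constant mirrors the value shown in the diff above.

```rust
// Default from hyper's proto/h1/io.rs at the time of this PR.
const DEFAULT_MAX_BUFFER_SIZE: usize = 8192 + 4096 * 100;

/// Hypothetical: if the user overrides the initial read buffer size,
/// derive the max buffer size as 4x that override; otherwise keep the
/// existing default.
fn effective_max_buf_size(read_buf_override: Option<usize>) -> usize {
    match read_buf_override {
        Some(sz) => sz.saturating_mul(4),
        None => DEFAULT_MAX_BUFFER_SIZE,
    }
}
```

This keeps the "buffer too big" check meaningful while ensuring a user-supplied read size can never immediately trip it.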
Setting the max buffer size is currently only done on the server side (via `Http::max_buf_size`). I think for this new API, this should be about setting an exact (as in, always use exactly and only this size) read buffer size. Thus (assuming `read_buf_exact_size` as the Builder function name), calling `Conn::set_read_buf_exact_size` should also just set `max_buf_size` to the same. WDYT?
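In other words, the two settings would be coupled. A minimal sketch of that coupling, with a hypothetical config struct standing in for the real connection state:

```rust
/// Hypothetical stand-in for the connection's buffer configuration.
struct ConnConfig {
    read_buf_exact_size: Option<usize>,
    max_buf_size: usize,
}

impl ConnConfig {
    /// Pin the read buffer to exactly `sz` bytes: the buffer never grows
    /// past it, so the max size is set to the same value.
    fn set_read_buf_exact_size(&mut self, sz: usize) {
        self.read_buf_exact_size = Some(sz);
        self.max_buf_size = sz;
    }
}
```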
If we take this approach, I think we'll need to go to a two-buffer system, since the read buffer is extended as needed to handle the contents of the `AsyncRead::read`. I am fairly sure that, at this function, the buffer size is compared to the max buf size, even on the client.
IOW, currently we have a single, contiguous buffer that gets extended to handle the entire HTTP message (if it is not chunked). We would need to move away from this. (Might be a good idea anyway.)
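The two-buffer idea could look roughly like this: reads always go through a fixed-size scratch buffer, and bytes are appended to a separate, growable message buffer, so the read size never depends on the message size. This is a sketch only; `read_into_message` and both buffer names are hypothetical, and it uses blocking `std::io::Read` for simplicity rather than hyper's `AsyncRead`.

```rust
/// Hypothetical two-buffer read: the scratch buffer has a fixed size
/// (the configured read size), while the message buffer grows with
/// the accumulated message.
fn read_into_message(
    transport: &mut impl std::io::Read,
    scratch: &mut [u8],     // fixed-size read buffer
    message: &mut Vec<u8>,  // growable accumulation buffer
) -> std::io::Result<usize> {
    let n = transport.read(scratch)?;
    message.extend_from_slice(&scratch[..n]);
    Ok(n)
}
```

With this split, comparing `message.len()` against the max buffer size stays meaningful, while the per-read allocation is bounded by the exact read size.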
```rust
/// Default is 64kb
///
#[inline]
pub fn read_chunk_size(&mut self, sz: usize) -> &mut Self {
```
Since this size is used for all reading, both chunks and headers, it maybe should be named `read_buf_size`.
Though, as I think eventually allowing a read buffer strategy to be adjusted is a good idea, maybe this should state that it's using an exact size always: `read_buf_exact_size`?
Maybe `initial_read_buf()`?
I filed #1708 for the adaptive buffer stuff.
I've taken this branch and made the edits I mentioned, with the beginnings of a
Thanks!
Addresses #1706