
added a builder option to set the read_chunk_size #1707

Closed
wants to merge 3 commits

Conversation

rrichardson
Contributor

Addresses #1706

@rrichardson changed the title from "added a builder option to set the read_chunk_size in http 1 client" to "added a builder option to set the read_chunk_size" on Nov 16, 2018
@seanmonstar (Member) left a comment:

This seems like a great idea!

I think it may also be worthwhile (in a separate PR of course) to create an adaptive read buffer strategy to be used by default, which grows and shrinks depending on how much is read from the transport each time. Similar in concept to Netty's AdaptiveRecvByteBufAllocator.
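
For illustration, a minimal sketch of what such an adaptive strategy could look like, assuming invented names and growth/shrink thresholds (this is not hyper's actual code):

// Hypothetical adaptive read-size tracker: grow the next read when the
// transport fills the buffer, shrink it when reads come back much smaller.
#[derive(Debug)]
struct AdaptiveReadSize {
    next: usize,
    min: usize,
    max: usize,
}

impl AdaptiveReadSize {
    fn new(initial: usize, max: usize) -> Self {
        AdaptiveReadSize { next: initial, min: initial, max }
    }

    // Record how many bytes the last read returned and adjust the size
    // requested for the next read.
    fn record(&mut self, bytes_read: usize) {
        if bytes_read >= self.next {
            self.next = (self.next * 2).min(self.max);
        } else if bytes_read < self.next / 2 {
            self.next = (self.next / 2).max(self.min);
        }
    }

    fn next_read_size(&self) -> usize {
        self.next
    }
}

fn main() {
    let mut strategy = AdaptiveReadSize::new(8 * 1024, 1024 * 1024);
    strategy.record(8 * 1024); // a full read doubles the next request
    assert_eq!(strategy.next_read_size(), 16 * 1024);
    strategy.record(1024); // a small read shrinks it back down
    assert_eq!(strategy.next_read_size(), 8 * 1024);
}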

@@ -496,6 +510,8 @@ where
if self.builder.h1_title_case_headers {
conn.set_title_case_headers();
}
println!("Creating connection where chunk size is {:?}", self.builder.read_chunk_size);

@seanmonstar (Member):

This println snuck in ;)

@@ -11,15 +11,15 @@ use tokio_io::{AsyncRead, AsyncWrite};
use super::{Http1Transaction, ParseContext, ParsedMessage};

/// The initial buffer size allocated before trying to read from IO.
pub(crate) const INIT_BUFFER_SIZE: usize = 8192;
pub(crate) const INIT_BUFFER_SIZE: usize = 64 * 1024;

@seanmonstar (Member):

I'd leave this default at 8kb for now. Requests without a body will likely never need bigger.


/// The minimum value that can be set to max buffer size.
pub const MINIMUM_MAX_BUFFER_SIZE: usize = INIT_BUFFER_SIZE;

/// The default maximum read buffer size. If the buffer gets this big and
/// a message is still not complete, a `TooLarge` error is triggered.
// Note: if this changes, update server::conn::Http::max_buf_size docs.
pub(crate) const DEFAULT_MAX_BUFFER_SIZE: usize = 8192 + 4096 * 100;
pub(crate) const DEFAULT_MAX_BUFFER_SIZE: usize = 8192 * 1024;

@seanmonstar (Member):

I'd leave this as it was for now as well.

@rrichardson (Contributor, Author):

Ok. But we'll need to somehow let the user know that if they set the read_buf/chunk_size, they'll also need to raise the max buffer size, otherwise it will panic.

My inclination is to get rid of the initial check and assume that if the user ups the read_buf_size, they know what they are doing. I'm not sure, though.

@rrichardson (Contributor, Author):

Thinking about this a bit more, I think maybe we just set the max size to 4x the initial buffer size if the user updates the initial buffer size.
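
A rough sketch of that idea, with hypothetical names (not the PR's actual builder API):

// Hypothetical builder sketch: when the caller raises the initial read
// buffer size, grow the max buffer size to 4x that value so the existing
// "buffer over max" check cannot trip.
struct Http1Builder {
    read_buf_size: usize,
    max_buf_size: usize,
}

impl Http1Builder {
    fn set_read_buf_size(&mut self, sz: usize) -> &mut Self {
        self.read_buf_size = sz;
        self.max_buf_size = self.max_buf_size.max(sz * 4);
        self
    }
}

fn main() {
    let mut b = Http1Builder {
        read_buf_size: 8 * 1024,
        max_buf_size: 8192 + 4096 * 100,
    };
    b.set_read_buf_size(256 * 1024);
    assert_eq!(b.max_buf_size, 1024 * 1024);
}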

@seanmonstar (Member):

Setting the max buffer size is currently only done on the server side (via Http::max_buf_size). I think for this new API, this should be about setting an exact (as in, always use exactly and only this size) read buffer size. Thus, (assuming read_buf_exact_size as the Builder function name), calling Conn::set_read_buf_exact_size should also just set max_buf_size to the same. WDYT?
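
Something along these lines, as a sketch of the "exact size" semantics (the Conn plumbing is simplified here and the types are illustrative, not hyper's actual internals):

// Sketch: one knob pins both the read buffer size and the max buffer size,
// so the two settings can never disagree.
struct Conn {
    read_buf_exact_size: Option<usize>,
    max_buf_size: usize,
}

impl Conn {
    fn set_read_buf_exact_size(&mut self, sz: usize) {
        // Always read with exactly this buffer size...
        self.read_buf_exact_size = Some(sz);
        // ...and cap the buffer at the same value, so no separate
        // max_buf_size tuning is needed on the client.
        self.max_buf_size = sz;
    }
}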

@rrichardson (Contributor, Author):

If we take this approach, I think we'll need to go to a two-buffer system, since the read buffer is extended as needed to hold the contents of AsyncRead::read. I am fairly sure that, at this function, the buffer size is compared to the max buf size, even on the client.

@rrichardson (Contributor, Author), Nov 21, 2018:

IOW, currently we have a single, contiguous buffer that gets extended to handle the entire HTTP message (if it is not chunked). We would need to move away from this (might be a good idea anyway).

/// Default is 64kb
///
#[inline]
pub fn read_chunk_size(&mut self, sz: usize) -> &mut Self {

@seanmonstar (Member):

Since this size is used for all reading, both chunks and headers, maybe it should be named read_buf_size.

Though, since I think eventually allowing the read buffer strategy to be adjusted is a good idea, maybe this should state that it always uses an exact size. read_buf_exact_size?

@rrichardson (Contributor, Author):

maybe initial_read_buf()?

@seanmonstar (Member):

I filed #1708 for the adaptive buffer stuff.

@seanmonstar (Member):

I've taken this branch and made the edits I mentioned, with the beginnings of a ReadStrategy enum, and merged it into master in this commit: 2e7250b

Thanks!
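
For reference, a sketch of the shape such a ReadStrategy could take, inferred from this discussion (the variants and fields in the merged commit may differ):

// Sketch: either adapt the read size to observed traffic, or always use
// exactly the size the user configured (which also acts as the max).
#[derive(Clone, Copy, Debug)]
enum ReadStrategy {
    Adaptive { next: usize, max: usize },
    Exact(usize),
}

impl ReadStrategy {
    fn next_read_size(&self) -> usize {
        match *self {
            ReadStrategy::Adaptive { next, .. } => next,
            ReadStrategy::Exact(sz) => sz,
        }
    }
}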
