
lib: introduce .setMaxSendFragment(size) #6900

Closed · indutny wants to merge 4 commits

Conversation

@indutny (Member) commented Jan 17, 2014

fix #6889

@@ -627,6 +627,10 @@ has been established.
ANOTHER NOTE: When running as the server, socket will be destroyed
with an error after `handshakeTimeout` timeout.

### tlsSocket.setMaxSendFragment(size)

Set maximum TLS fragment size (default value is: `16384`).
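A minimal usage sketch of the API as documented above (the key/cert paths and the echo handler are illustrative, not from the patch):

```js
var tls = require('tls');
var fs = require('fs');

var options = {
  key: fs.readFileSync('server-key.pem'),   // illustrative paths
  cert: fs.readFileSync('server-cert.pem')
};

var server = tls.createServer(options, function(socket) {
  // Cap the size of outgoing TLS records on this connection.
  socket.setMaxSendFragment(1400);
  socket.pipe(socket); // echo
});

server.listen(8000);
```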


It would be great to use a lower max size. Ideally, it would be adjusted over the lifetime of the connection, but at a minimum we want to eliminate the CWND overflow case on new connections and the slow-start-after-idle case... Setting the fragment size to ~8k would do the trick (assuming we're on an IW10 server).
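A sketch of the static tweak being suggested here, assuming `server` is a `tls.Server` (the 8 KB figure is the reviewer's ballpark, not a measured optimum):

```js
server.on('secureConnection', function(socket) {
  // IW10 gives roughly 10 segments * ~1460 bytes ≈ 14.6 KB of initial
  // congestion window; an 8 KB record plus TLS framing fits inside it,
  // so the first record does not have to wait out an extra round trip.
  socket.setMaxSendFragment(8 * 1024);
});
```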

indutny (Member Author):


It affects throughput significantly. I benchmarked bud with and without this option enabled and the difference was dramatic. Also, the default value is set by OpenSSL, not by us.


Out of curiosity, do you have your bud benchmark numbers published anywhere?

indutny (Member Author):


Yep, these are with the max send fragment set to nearly the MTU value (~1300 or a bit less): indutny/bud#20 (comment)

And here are results without max send fragment:

indutny/bud#20 (comment)

indutny (Member Author):


There's no bandwidth metric, so you'll just have to believe me and observe the lower RPS :)

@igrigorik

> I don't think that we are going to implement a dynamic strategy in core.

Do you have any particular reason in mind? A config flag is definitely a good start (and kudos for the super-quick turnaround!), but I'd love to see a smarter strategy in node.

@tjfontaine

It's not always in our best interest to implement (and then maintain) optimizations around specific implementations that could be subject to bikeshedding. Ideally, we're working to provide APIs in core that allow external consumers to handle these features in module space.

Simply making this configurable allows someone to publish auto-window management as a module without Node being opinionated about what that means.
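For example, such a userland package might be no more than a thin policy wrapper over the new API (the module name and options below are invented for illustration):

```js
// Hypothetical "tls-auto-fragment" module: core exposes the knob,
// the policy lives out here in module space.
module.exports = function autoFragment(server, opts) {
  opts = opts || {};
  var initial = opts.initial || 1400;

  server.on('secureConnection', function(socket) {
    socket.setMaxSendFragment(initial);
  });

  return server;
};
```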

@igrigorik

@tjfontaine I guess from a framework perspective that approach makes sense. That said, my reservation with this line of thinking is the suboptimal "out-of-the-box" experience: most developers won't know to enable an extra module or add the extra config flag, so the majority of deployed servers will continue to use suboptimal settings and users will incur higher TTFB than necessary.

The advantage of the dynamic strategy is that you can meet both goals: optimize TTFB for interactive traffic, and provide good throughput for bulk transfers. No configuration needed, unless you want to tweak the boost threshold.
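A sketch of what that dynamic strategy could look like on top of the proposed API (the thresholds and the write-wrapping approach are invented for illustration):

```js
var BOOST_AFTER = 1024 * 1024; // assumed boost threshold: 1 MB sent
var IDLE_RESET  = 1000;        // assumed idle window: 1 second

function dynamicFragment(socket) {
  var sent = 0;
  var idleTimer = null;
  var write = socket.write;

  // Start small for low TTFB on interactive traffic.
  socket.setMaxSendFragment(1400);

  socket.write = function(chunk) {
    sent += chunk.length;

    // Once the transfer looks bulky, switch to full-size records.
    if (sent >= BOOST_AFTER)
      socket.setMaxSendFragment(16384);

    // After an idle period, fall back to small records again.
    clearTimeout(idleTimer);
    idleTimer = setTimeout(function() {
      sent = 0;
      socket.setMaxSendFragment(1400);
    }, IDLE_RESET);

    return write.apply(socket, arguments);
  };
}
```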

@tjfontaine

@igrigorik I understand your sentiment, but more often than not deployments are using servers like restify, hapi, or express -- each with their own priorities regarding what they're designed to provide. They are in a much better place to define the policy and mechanism they want to adopt. They could even use the module system to share implementations among each other. This is the ethos of core and of the ecosystem in general.

As far as TTFB considerations for people using `require('https').createServer()` go, that should be noted in our documentation for `setMaxSendFragment`, so users are aware and can be pointed to more in-depth information.

@igrigorik

@indutny @tjfontaine sg.

What do you guys think of the following for the documentation:

> Set maximum TLS fragment size (default value is: `16384`). Smaller fragment size decreases buffering latency on the client: large fragments are buffered by the TLS layer until the entire fragment is received and its integrity is verified; large records can span multiple roundtrips, and their processing can be delayed due to packet loss or reordering. However, smaller record size adds extra TLS framing bytes and CPU overhead, which may decrease overall server throughput.

Not sure what the policy is about linking to external sites in the doc, but if it makes sense:
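To put rough numbers on the framing-overhead half of that tradeoff (assuming ~25 bytes per record for the 5-byte record header plus a 20-byte HMAC-SHA1; the exact figure is cipher-dependent, and padding is not counted):

```js
var OVERHEAD = 25; // assumed per-record overhead in bytes

[512, 1400, 8 * 1024, 16384].forEach(function(size) {
  console.log(size + ' bytes/record: ~' +
              (OVERHEAD / size * 100).toFixed(2) + '% framing overhead');
});
// 512 → ~4.88%, 1400 → ~1.79%, 8192 → ~0.31%, 16384 → ~0.15%
```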

@indutny (Member Author) commented Jan 18, 2014

That is very meaningful documentation, thank you! Though I'll insert it without the links.

@igrigorik

@indutny sg, thanks!

@indutny (Member Author) commented Jan 18, 2014

Pushed update.

@indutny (Member Author) commented Jan 20, 2014

@igrigorik fixed.

@indutny (Member Author) commented Jan 20, 2014

@tjfontaine please review

@tjfontaine

will do

### tlsSocket.setMaxSendFragment(size)

Set maximum TLS fragment size (default and maximum value is: `16384`, minimum
is: `512`). Returns `true` on success, `false` otherwise.
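Since the call now reports success, callers can sanity-check it (a minimal sketch):

```js
// Sizes outside [512, 16384] are rejected and the call returns false.
if (!tlsSocket.setMaxSendFragment(1400)) {
  console.error('could not change the TLS fragment size');
}
```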


Can we add test coverage on this part? Is there an upper bound?

indutny (Member Author):


Yep, 16384. Will add a test tomorrow.

indutny (Member Author):


Added test coverage.

@tjfontaine

lgtm

@indutny (Member Author) commented Jan 20, 2014

Thank you, landed in 7f9b015
