Success rate, throughput, and latency issues with HTTP/1 #1353
After enabling a bunch of logs, I noticed this:
Which is interesting! The "trying to connect" means the error came from the connect phase. Digging deeper...
Huh, is
Well, this error happens while connecting, so we haven't set the option at all yet. But this suggests that a lot of churn is happening, and many connections are sitting in
Turns out the real problem was that every single one of these requests resulted in a new connection. Some optimizations had been added to hyper to reduce the number of operations needed when the size of a body was known, but because of those optimizations, the internal read state wasn't polled to the end, so hyper assumed the rest of the body wasn't wanted and had to close the connection. Fix to hyper merged in hyperium/hyper#1610, new PR for the proxy incoming!
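The failure mode described above can be sketched in plain Rust. This is a minimal illustration, not hyper's actual pool code: the types (`ReadState`, `Connection`, `Pool`) are invented for the example. The key invariant is real, though: an HTTP/1 connection can only go back into the keep-alive pool if the previous response body was read to EOF, otherwise leftover bytes would corrupt the next response, so the connection must be closed.

```rust
// Hypothetical sketch of keep-alive pooling; not hyper's implementation.

#[derive(Debug, PartialEq)]
enum ReadState {
    Partial, // read loop stopped early; EOF was never observed
    Eof,     // body fully consumed; connection is clean for reuse
}

struct Connection {
    state: ReadState,
}

struct Pool {
    idle: Vec<Connection>,
    created: u32, // how many fresh TCP connections we had to open
}

impl Pool {
    fn new() -> Self {
        Pool { idle: Vec::new(), created: 0 }
    }

    fn checkout(&mut self) -> Connection {
        self.idle.pop().unwrap_or_else(|| {
            self.created += 1; // no idle connection: open a new one
            Connection { state: ReadState::Partial }
        })
    }

    fn checkin(&mut self, conn: Connection) {
        if conn.state == ReadState::Eof {
            self.idle.push(conn); // keep-alive: reuse for the next request
        }
        // else: dropped here, i.e. the connection is closed
    }
}

fn main() {
    // Buggy path: the known-length optimization stopped the read loop
    // before EOF, so every request opens a fresh connection.
    let mut pool = Pool::new();
    for _ in 0..3 {
        let mut conn = pool.checkout();
        conn.state = ReadState::Partial;
        pool.checkin(conn);
    }
    println!("without EOF polling: {} connections created", pool.created);

    // Fixed path: polling the body to EOF lets one connection be reused.
    let mut pool = Pool::new();
    for _ in 0..3 {
        let mut conn = pool.checkout();
        conn.state = ReadState::Eof;
        pool.checkin(conn);
    }
    println!("with EOF polling: {} connections created", pool.created);
}
```

Under the bug, three requests create three connections; with the fix, the same three requests reuse one. That connection churn is exactly what shows up as the failed connect attempts and latency in the metrics.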
Confirmed, tested with Linkerd2
With linkerd2-proxy, we observed an 80% success rate and high latency when testing HTTP/1.
Test environment
Proxy metrics:
https://gist.github.com/siggy/2708cdff73c3e25463d80fc10feac45a
Kubernetes config:
https://gist.github.com/siggy/21ecc89162c23f1690baf29ab4cd2b5a
Seeing lots of these in the proxy log:
Steps to reproduce
Deploy
Observe in Grafana