Connection::graceful_shutdown always waits for the first request. #2730
Comments
Yea, I think this behavior is currently on purpose. Whether it should be is a fair question. I think at the time, I assumed that a new connection would usually have a request incoming ASAP, whereas an idle connection might not, and so it was better to allow that request to come in. We could alter that behavior if we document a good reason for making that change. Also, I imagine using the …
I agree that it's more likely for the first request to show up than a follow-up request, but it seems weird to bake that assumption into graceful_shutdown IMO. For reference, I'm using graceful_shutdown in some logic to shut down connections after a certain period of idleness (I can't just terminate the connection future, since I want the TLS session to shut down cleanly for session reuse). Right now, the current behavior will miss connections that are opened and never make a request. Does …
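A minimal sketch of that idle-shutdown pattern, assuming hyper 0.14 (server and http1 features) and tokio. The helper name, timeout, and single-timer design are illustrative; a real version would reset the timer on each request rather than bounding total connection lifetime:

```rust
use std::{convert::Infallible, time::Duration};

use hyper::server::conn::Http;
use hyper::service::service_fn;
use hyper::{Body, Request, Response};
use tokio::net::TcpStream;

// Hypothetical helper: serve one accepted connection, but ask it to shut
// down gracefully if the timer fires first. On hyper 0.14, if no request
// ever arrives, graceful_shutdown() does not end the connection and this
// future never resolves -- the behavior this issue describes.
async fn serve_with_deadline(stream: TcpStream) -> hyper::Result<()> {
    let service = service_fn(|_req: Request<Body>| async {
        Ok::<_, Infallible>(Response::new(Body::from("hello")))
    });

    let conn = Http::new().serve_connection(stream, service);
    tokio::pin!(conn);

    tokio::select! {
        res = conn.as_mut() => res,
        _ = tokio::time::sleep(Duration::from_secs(30)) => {
            conn.as_mut().graceful_shutdown();
            conn.await // hangs here if the client never sent a request
        }
    }
}
```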
Any progress on this? I was trying to figure out why my program wasn't exiting when I expected it to and found this issue. If I send the graceful shutdown signal, the server will process one more request before actually exiting. So a user sending the shutdown command through a web app won't actually be shutting anything down.
From some testing just now, the answer to the first part is "no". If I start up an …
This seems like a thing to slide into RC2 or RC3.
@sfackler Does this mean you'd have the connection write out a 408 response and then finish? Or simply calling …
Yeah it just needs to call shutdown on the underlying IO object, which will handle the SSL shutdown protocol.
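For example, with tokio's AsyncWriteExt, here assuming a tokio-rustls server stream (other TLS wrappers behave similarly):

```rust
use tokio::io::AsyncWriteExt;

// Assuming `tls_stream` is e.g. a tokio_rustls::server::TlsStream<TcpStream>:
// shutdown() drives the TLS close_notify exchange before closing the write
// half of the underlying transport.
tls_stream.shutdown().await?;
```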
I think I can take this one.
@seanmonstar so the issue right now is this: if we try to control this using the keep_alive state (which is what we're currently doing), it introduces a race condition of sorts. If keep_alive is initially Busy, then the connection will wait forever on the read of the initial request. I think we have two options here: either we keep the initial state as Busy and add some sort of read timeout somewhere, or we keep the initial state as Idle and do a peek on the underlying io stream, processing any data found there before closing the connection. edit: …
fix issue in the graceful shutdown logic which causes the connection future to hang when graceful shutdown is called prior to any requests being made. This fix checks to see if the connection is still in its initial state when disable_keep_alive is called, and starts the shutdown process if it is. This addresses issue hyperium#2730
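Illustratively, the logic described in that commit message looks something like this (field and method names are hypothetical, not hyper's actual internals):

```rust
// Hypothetical sketch of the fix described above, not hyper's real code.
fn disable_keep_alive(&mut self) {
    self.wants_keep_alive = false;
    // If no request has been read yet (the connection is still in its
    // initial state), nothing is in flight: start the shutdown process
    // now instead of waiting for a first request that may never arrive.
    if self.is_initial_state() {
        self.start_shutdown();
    }
}
```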
That seems like the desired outcome of this issue, no? Or do you mean the request was already received, and the connection just aborts abruptly?
@seanmonstar yeah, so essentially if an http1 server conn is created with .keep_alive(false), the server shuts down without servicing any requests. Is that the behavior we want? I would think you would want the server to service one request, to see if that request wants to keep the connection alive or not. The following two tests fail when the initial keep-alive state is set to Idle: … The read_to_end in both cases returns an empty array because the server shut down before processing anything.
Version
hyper 0.14.16
Platform
Linux DESKTOP-DHO88R7 4.19.104-microsoft-standard #1 SMP Wed Feb 19 06:37:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Description
If you gracefully shut down a server connection future before the first request, Hyper will not actually shut the connection down until a request is processed:
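A minimal reproduction sketch, assuming hyper 0.14 (server and http1 features) and tokio; the address and the silent client are illustrative:

```rust
use std::convert::Infallible;

use hyper::server::conn::Http;
use hyper::service::service_fn;
use hyper::{Body, Request, Response};
use tokio::net::{TcpListener, TcpStream};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:3000").await?;
    // Connect a client that never sends anything.
    let _client = TcpStream::connect("127.0.0.1:3000").await?;
    let (stream, _) = listener.accept().await?;

    let service = service_fn(|_req: Request<Body>| async {
        Ok::<_, Infallible>(Response::new(Body::from("hello")))
    });

    let conn = Http::new().serve_connection(stream, service);
    tokio::pin!(conn);

    // Request graceful shutdown before any request has been read.
    conn.as_mut().graceful_shutdown();

    // Expected: resolves almost immediately. Actual (hyper 0.14.16): hangs,
    // waiting for the client to send request headers.
    conn.await?;
    Ok(())
}
```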
I would expect this program to exit almost instantly since there is no request being processed when the graceful_shutdown is invoked. However, it instead blocks forever waiting on the client to send headers.
The behavior actually appears to be that the shutdown is processed immediately after the first request is fully handled.