aiohttp.ClientSession reconnect #2867
Comments
I don't understand: a ClientSession != connection, it has a connection pool internally. What is a server disconnect? Some exception on requests? Maybe you should just catch it and retry the request (if that is desirable). If the connection is aborted during a request, we usually cannot retry it automatically, because it might be executed twice on the server side. Users must do this explicitly.
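A minimal sketch of the catch-and-retry approach suggested here, assuming the request is idempotent; the helper name and retry count are illustrative, not part of aiohttp:

```python
import aiohttp

async def fetch_with_retry(session: aiohttp.ClientSession, url: str, retries: int = 2) -> str:
    """Retry a GET a couple of times if the server dropped the pooled connection."""
    for attempt in range(retries + 1):
        try:
            async with session.get(url) as resp:
                return await resp.text()
        except aiohttp.ServerDisconnectedError:
            # Only safe for idempotent requests: the server may already have
            # processed the request before closing the connection.
            if attempt == retries:
                raise
```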
This is an implementation detail. In fact, from a user pov, a ClientSession == a connection
Yes, it can be ServerDisconnectedError, but this is not what we want to catch; that is the responsibility of the developer.
I think you did not understand the use case. It's not a retry. It's the fact that a session will only be able to do a single call and then be closed. See for example https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20forceclose There's a clean close from the server after the response has been sent. So the idea is not about retrying a request when it fails, but about checking that the connection is not closed just before doing a request with it - and if it is closed, reconnecting it. Otherwise, ClientSession is just a single-request-then-throw-away object.
I do not like the idea of having a Client that passively tries to open TCP connections to keep the pool at a certain size. Here the server and the user have the last call: if the server supports keep-alive, the connection will be kept open for a while, and once the server's internal timeout is reached the connection is closed. If the user makes an HTTP request and there are no connections available and open, the Client will open a new connection. This is the normal situation in most cases. Indeed, this is more an INFX issue than a software issue: long-lived connections in low-latency networks that do not have sudden changes in their topology are a good idea - AWS Load Balancers drop all connections to the proxied service that have been idle for 60 seconds. But for other networks, such as mobile ones, this might be a bad idea and a source of other problems that are not easy to solve or need another protocol.
@tarekziade does
@pfreixes I would not add this feature by default, it's a special case. I think it boils down to being able to detect that a socket was closed by the server right after the response was received, and to determine if that is the right behavior from the server. I need to look at what headers HAProxy sends when it does that. Maybe it's just a matter of always closing the socket after each response when it's expected to be closed by the server, as @asvetlov suggests (thanks). So I'll look at HAProxy and see if there's something that can be done with headers. Thanks everyone for the feedback; closing this bug as invalid.
@tarekziade If the server supports HTTP/1.1 and it wants to close the connection after this request, it should return "Connection: close" with the last response. If it returns "Connection: keep-alive" or does not return the Connection header, the client will always assume that the server is accepting more requests. If the server returns "Connection: keep-alive" but immediately closes the connection, that is a bug in the server implementation. It is not always possible to detect a "close" in time. Usually the client has already started the next request and written something to the socket, and then gets a broken pipe. The client cannot tell whether the server closed the connection before or after receiving that data (maybe the request is already executing). Usually an HTTP request is small enough to be sent in a single packet.
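For what it's worth, a tiny sketch (with a placeholder URL) of how one could check what the proxy actually sends in the Connection header of its responses:

```python
import asyncio
import aiohttp

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        # Placeholder URL; point this at the HAProxy-fronted service.
        async with session.get("http://example.com/") as resp:
            # HTTP/1.1 servers that intend to drop the connection should send
            # "Connection: close"; otherwise keep-alive is assumed.
            print(resp.headers.get("Connection"))

asyncio.run(main())
```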
@tarekziade I think you should be able to implement the desired behavior with a custom connector implementation.
@tarekziade For HAProxy, I remember it basically supports only HTTP/1.0, so it has options to force a close after the first response. But that should be combined with a header operation to modify the Connection header.
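A minimal sketch of the connector-level route discussed above; instead of a fully custom connector, aiohttp's built-in TCPConnector(force_close=True) disables keep-alive so every request gets a fresh connection (the Pulp commit referenced below took this approach):

```python
import asyncio
import aiohttp

async def fetch(url: str) -> str:
    # force_close=True turns off client-side keep-alive: each request uses a
    # fresh TCP connection, so a server that closes the socket after every
    # response never invalidates a pooled connection.
    connector = aiohttp.TCPConnector(force_close=True)
    async with aiohttp.ClientSession(connector=connector) as session:
        async with session.get(url) as resp:
            return await resp.text()

# e.g. asyncio.run(fetch("http://example.com/"))
```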
The use of force_close was inspired by: aio-libs/aiohttp#2867 (comment)
- Causes Factory and Downloaders to use force_close, which causes the TCP connection to close with each request.
- Ups the default concurrency to 20.
- Moves the concurrency restriction feature to the Downloaders and updates HttpDownloader and FileDownloader to use it.
- Stops using aiohttp for concurrency restriction because it's done in the downloaders.
- The Factory now configures the concurrency restriction using the value on the remote.
- Creates a _run() method that subclassed Downloaders now need to implement instead of run().

https://pulp.plan.io/issues/4036
https://pulp.plan.io/issues/4075
closes #4036
closes #4075
Sometimes servers will disconnect the client on purpose after each request.
When using aiohttp.ClientSession to do several calls, this means we need to re-create a session for each request, and that defeats the with pattern.
I would like to propose the addition of a "reconnect" option (False by default)
When used, this option would try to reconnect if the previous call disconnected the socket, so code like the following works even if the server disconnects the client after the first call:
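The original snippet from the issue is not preserved in this copy; the following is only a reconstruction of what the proposed usage might look like. Note that reconnect is the option being proposed here, not an existing aiohttp parameter, and the URLs are placeholders:

```python
import aiohttp

async def main() -> None:
    # "reconnect" is the proposed option, not an existing aiohttp argument.
    async with aiohttp.ClientSession(reconnect=True) as session:
        async with session.get("http://example.com/one") as resp1:
            print(resp1.status)
        # Even if the server closed the socket after the first response,
        # the session would transparently open a new connection here.
        async with session.get("http://example.com/two") as resp2:
            print(resp2.status)
```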
What do you think? Happy to try to implement it if you think it's a good idea!