Connection Pooling doesn't work #150
The callback you want is HTTPSession::InfoCallback::onTransactionDetached. In this callback you can check to see if the connection is below the maximum transaction limit and move it into the queue.
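A minimal sketch of that pattern, using stand-in types rather than proxygen's real classes (FakeSession, its fields, and SessionPool are assumptions for illustration; the real callback lives on proxygen::HTTPSession::InfoCallback):

```cpp
#include <cassert>
#include <deque>

// Stand-in for proxygen::HTTPUpstreamSession; field names are
// illustrative, not proxygen's actual API.
struct FakeSession {
  int activeTransactions = 0;
  int maxConcurrent = 1;  // HTTP/1.1 sessions report 1 outgoing stream
  bool reusable = true;
};

struct SessionPool {
  std::deque<FakeSession*> idle;

  // Modeled on HTTPSession::InfoCallback::onTransactionDetached: once
  // the session is below its transaction limit and still reusable,
  // move it into the idle queue so a new transaction can pick it up.
  void onTransactionDetached(FakeSession& s) {
    if (s.reusable && s.activeTransactions < s.maxConcurrent) {
      idle.push_back(&s);
    }
  }
};
```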
I don't think it's true that a single transaction timeout would cause the whole session to be torn down. Can you give me an example of that?
Thanks for the reply. I also see that session_->getMaxConcurrentOutgoingStreams() always returns 1, which I guess makes sense for HTTP/1.1? Regarding the timeout: here is a stack trace generated when the Session is being destroyed on a transactionTimeout (the first line is from my SessionWrapper class, which implements proxygen::HTTPSession::InfoCallback). How can I prevent that from happening? One more question regarding threads: currently I have a separate pool for every HTTP client thread, which is wasteful, but I have more than one thread. Thanks again for the quick reply!
Oops, I guess you actually need to implement both callbacks. It's kind of ugly that HTTPSession gives you different callbacks. I think we should probably deliver onTransactionDetached whenever a transaction detaches, and onConnectionDeactivated when the session goes completely idle. That being said, you probably shouldn't be timing out: either your request or your response is not completing. There are APIs in HTTPUpstreamSession which you can use to detach it from one thread and attach it in another, but I wouldn't recommend using those. They were kind of fragile to implement, and some of the required logic may not be open source yet.
Thank you again for your reply. The first problem I see is that a timeout on one transaction killing the entire connection seems unreasonable to me: a timeout on one transaction doesn't indicate that the next transaction will time out as well, so I would like to keep the connection alive. The second problem is much more severe. Testing with a single timing-out request, I confirmed that I first get the proxygen::HTTPSession::InfoCallback::onDeactivateConnection callback, and it passes this test: Then later the proxygen::HTTPSession::InfoCallback::onDestroy callback is called, after I have already put that session back into the pool, and at that point the session is destroyed. I tried to work around it by wrapping every HTTPUpstreamSession with an object that implements proxygen::HTTPSession::InfoCallback (like the SessionWrapper in proxygen's ProxyServer example) and marking that object as destroyed when I receive the onDestroy callback. The pool will then basically hold a bunch of dead objects, and when I try to get a new session from the pool I need to go through them, popping them out until I find a healthy one. Consider this: I also wanted to ask whether you have implemented your own connection pool that you can share with me, whether as a patch or an example? Thanks again for your help and quick replies!
Regarding a timeout on one transaction killing the connection: for HTTP/1.1, each request/response must complete in order for the connection to be reusable for the next request/response, so if you time out waiting for the response, you have no choice but to abort the connection. For HTTP/2, the timed-out transaction is individually aborted via RST_STREAM, and the connection continues to be reusable. Let me know if you see this behavior with H2. Regarding onDeactivateConnection/onDestroy, I think they should be called in basically the same event loop, so it shouldn't be possible for another intervening request to get the session out of the pool. We also check HTTPSession::isPoolable() before placing it in the pool. We probably should release our connection pool as open source. It had recently been rewritten at the time we released proxygen to the community, so we didn't think it was mature enough. Now that we have several years of operational experience with it, it's probably safe to put out. I'll get the ball rolling internally, but I imagine our security folks will want to take another look at it before it goes out, so it won't be immediate.
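The protocol difference described above can be summarized in a tiny sketch (not proxygen code; the function and return values are made up for illustration):

```cpp
#include <cassert>
#include <string>

enum class Proto { Http1_1, Http2 };

// On a per-transaction timeout: HTTP/1.1 has no way to cancel a single
// in-flight exchange, so the whole connection must be aborted; HTTP/2
// resets just the timed-out stream (RST_STREAM) and the session stays
// reusable for other transactions.
std::string onTransactionTimeout(Proto p) {
  return p == Proto::Http2 ? "reset-stream" : "close-connection";
}
```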
Oh, you're right! Regarding the race condition, I am aware that it all happens in one event loop, but I was worried that the events might not be in that order in the loop, since I'm not sure that onDeactivateConnection and onDestroy are guaranteed to be called in one event loop task. But maybe I am being paranoid here; I will test. About the connection pool: I'm also guessing HTTPSession::isPoolable() is part of the code that hasn't been released, since I don't see that method in the repo. Thanks again for your help!
Oops, I mistyped. I meant isReusable(). And we also check isClosing(). I've started asking the questions, but I don't have a timeline. I'll update this issue when I have more info.
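A hedged sketch of that pooling guard, mirroring the isReusable()/isClosing() checks mentioned in the thread (FakeSession is a stand-in; only the two method names come from the discussion):

```cpp
#include <cassert>

// Stand-in for proxygen::HTTPUpstreamSession exposing just the two
// checks the pool cares about.
struct FakeSession {
  bool reusable = true;
  bool closing = false;
  bool isReusable() const { return reusable; }
  bool isClosing() const { return closing; }
};

// Pool a session only when it is reusable and not already draining.
bool shouldPool(const FakeSession& s) {
  return s.isReusable() && !s.isClosing();
}
```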
Thank you! I'm using cmake to compile my application. However, with the same code and the same request, when I compile in Release I always get session->getNumOutgoingStreams() == 1. This is my code: void ConnectionPool::returnConnectionToPool(const HTTPUpstreamSession* session) { I'm really confused why that would happen. I understand that there may be some kind of wrong optimization that the compiler is doing in Release, but I don't know how to go about figuring it out. I worked around it by using just isReusable() for now, but I think there's something that should be investigated here. Thanks again, I really appreciate your help on this!
I'll try to repro as well |
Sorry, I wasn't able to repro this behavior using our build tools. I took your patch and merged it into the proxy sample SessionWrapper (just the printing part). I always get "Adding session back to pool" both with our internal build and with an ubuntu 14.04 build. Can anyone else repro? What happens if you attach the debugger to the opt build? |
Attaching the debugger to an optimized executable doesn't work: compiling in Release, the debugger doesn't even hit the breakpoint, and when compiling in RelWithDebInfo, I hit the breakpoint but can't watch any variables. So all I can do is print the result, and it's always 1, except in Debug. I'm on Ubuntu 16.04; I don't think it matters though. I'll try to create a small repo with just that issue when I get the chance. It's not blocking me at the moment; I'm facing other big performance issues with Proxygen right now, namely very high CPU usage at fairly low requests/sec. I'm going to investigate it a bit further and create a separate issue.
I've created a repo showing how I reproduce the issue I've described, https://github.com/michaelpog/proxygen_proxyserver, along with the output I'm seeing in both Debug and Release. I hope it helps.
I don't suppose you were able to figure this one out? I really can't see what would cause a simple integer to report wrongly in release builds. Any chance you can try with a different compiler?
Hi. Is there an update on open sourcing the connection pool feature? |
Sadly, not right now. We'll let folks know when we have concrete plans.
I have the same question. I see you have open-sourced SessionPool and SessionHolder.
Connection pooling has been open sourced. See proxygen/lib/http/connpool. I'm not sure of the state of the docs and examples, but we will build those over time. Please migrate to that code and let us know if you have any issues. |
SessionPool can only be used from one thread, but holder->drain() causes multiple threads to read and write the same HTTPSessionBase, which causes a crash.
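The crash described above comes from one session being touched from multiple threads. A common mitigation, sketched here with stand-in types (none of this is proxygen's API), is one pool per event-loop thread, so a session is only ever used on the thread that owns it:

```cpp
#include <cassert>
#include <deque>

struct FakeSession { int id = 0; };  // stand-in for HTTPUpstreamSession

struct ThreadLocalPool {
  // thread_local gives every thread its own idle queue, so no session
  // is ever shared across threads and no locking is needed.
  static std::deque<FakeSession*>& idle() {
    thread_local std::deque<FakeSession*> q;
    return q;
  }
  static void put(FakeSession* s) { idle().push_back(s); }
  static FakeSession* get() {
    auto& q = idle();
    if (q.empty()) return nullptr;  // caller opens a new connection
    FakeSession* s = q.front();
    q.pop_front();
    return s;
  }
};
```

The trade-off, noted earlier in the thread, is that per-thread pools are wasteful: each thread warms up its own connections.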
See a followup in issue #314 |
I'm trying to implement a connection pool for the http client.
I'm attempting to store for every remote address (host:port) a queue of HTTPUpstreamSessions that I can reuse.
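That per-endpoint layout can be sketched as follows; FakeSession stands in for proxygen::HTTPUpstreamSession, and the class shape is an assumption for illustration, including the "pop dead sessions until a healthy one is found" workaround discussed later in this thread:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <unordered_map>

struct FakeSession { bool reusable = true; };

class ConnectionPool {
 public:
  // Key sessions by "host:port"; only pool ones still marked reusable.
  void put(const std::string& endpoint, FakeSession* s) {
    if (s->reusable) pools_[endpoint].push_back(s);
  }

  // Skip over sessions that died while sitting in the queue.
  FakeSession* get(const std::string& endpoint) {
    auto& q = pools_[endpoint];
    while (!q.empty()) {
      FakeSession* s = q.front();
      q.pop_front();
      if (s->reusable) return s;
    }
    return nullptr;  // caller must open a new connection
  }

 private:
  std::unordered_map<std::string, std::deque<FakeSession*>> pools_;
};
```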
I'm having issues reusing those sessions.
Looking through all handlers and callbacks of HTTPTransaction and HTTPUpstreamSession, I can't find a callback in which I can put the HTTPUpstreamSession back into the pool and have it reused, for a new transaction.
Furthermore, if one of the transactions in the session receives a timeout, the session is destroyed.
The performance of creating a new Session and transaction for every outbound request is pretty bad, which defeats the purpose of switching to Proxygen.
Can anyone please point me in the right direction? Or does this require a code change?
PS: Using regular HTTP 1.1, no ssl, simple request/response.
Thanks