Increasing alive socket count, looks like a leak #148
Comments
Hi @omerkirk Great report, thanks for including all the details! This is a known issue, unfortunately, but work has been done towards fixing it - #116 by @gnawux added pruning of unused connections (thanks!) and I'm currently working on a performance tweak to mitigate the cost of that PR (see #142), which I expect to have ready some time today for review. After that we'll cut a new release that fixes this for everyone 👍

As for the number of goroutines: each socket has an event loop running in the background, so I expect this to be solved by the connection PR too. For what it's worth, the event loop goroutine for an unused socket should be blocked on network I/O and wouldn't be eligible for scheduling, but it's still far from ideal.

Hopefully we'll have a release in the next week or so (it takes time to test) - would you be willing to try a pre-release version and confirm it solves the issue for you? Thanks again! Dom
Hi @domodwyer, Sure, I'd love to help test this. Just a suggestion: the Go database/sql driver has a very elegant pool implementation with similar requirements (minPoolSize, etc.) that might guide a solution without the performance hit in #142. Thanks for the quick answer and the hard work.
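For context, a minimal sketch of the database/sql pooling knobs being referenced here, using a hypothetical MySQL DSN; note that database/sql exposes a maximum rather than a minimum pool size, so the analogy to minPoolSize is loose:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	// Any registered driver works; this one is purely illustrative.
	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// Hypothetical DSN - replace with a real data source name.
	db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/mydb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	db.SetMaxOpenConns(100)                // hard cap on concurrently open connections
	db.SetMaxIdleConns(20)                 // keep up to 20 idle connections warm for reuse
	db.SetConnMaxLifetime(5 * time.Minute) // recycle connections so stale sockets don't accumulate
}
```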
Hi @omerkirk The development branch now has all the bits to fix this - if you have the time, please do try it and let us know! Dom
Hi @domodwyer, Will test it tomorrow and let you know the results. Omer
Hi @domodwyer, I just deployed the service using the development branch. Everything seems to be working fine so far; I will keep it in production to see the socket usage under high load and will let you know.
Hi @omerkirk That's great news - we've run the development branch through some tests too and it looks good. I'll cut a release later today 👍 Thanks for your help! Dom
Hi again @domodwyer, The development branch works great. We are 5 days in and the number of sockets alive is in check at 20, which is the minPoolSize I configured. Great work. Closing the issue.
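For readers wanting to reproduce this configuration, a minimal sketch assuming the globalsign/mgo DialInfo fields that shipped with the pruning work (MinPoolSize and MaxIdleTimeMS; the exact field names are an assumption based on that fork's documentation):

```go
package main

import (
	"log"
	"time"

	"github.com/globalsign/mgo"
)

func main() {
	info := &mgo.DialInfo{
		Addrs:         []string{"localhost:27017"}, // placeholder address
		Timeout:       1 * time.Minute,
		MinPoolSize:   20,    // never prune the pool below 20 sockets (assumed field)
		MaxIdleTimeMS: 60000, // prune sockets idle for longer than one minute (assumed field)
	}
	session, err := mgo.DialWithInfo(info)
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
}
```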
Thanks @omerkirk you've been a big help! We confirmed it's all working as intended in our test environment too - we've just cut a new release so everyone can enjoy the changes. Thanks again! Dom
Hi,
I am using the driver with Go 1.10. First, I noticed that the number of goroutines waiting in the socket readLoop is constantly increasing.
To confirm that this is not due to high load, I've checked the mongo stats daily and found that the number of goroutines matches the SocketsAlive stat. The service in which I am using the driver gets traffic of around 400 rps to 4000 rps depending on the time of day. For the past 2 weeks the number of alive sockets has never decreased, even when traffic is very low and the number of sockets in use fluctuates between 1 and 2.
For example, a snapshot of the mongo stats is below; as you can see, the number of sockets needed to handle the traffic is very low, yet the number of alive sockets is still very high.
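A minimal sketch of how such a stats snapshot can be sampled, assuming mgo's built-in instrumentation (SetStats/GetStats, the API that exposes SocketsAlive), logged alongside the goroutine count to correlate the two:

```go
package main

import (
	"log"
	"runtime"
	"time"

	"github.com/globalsign/mgo"
)

func main() {
	mgo.SetStats(true) // enable collection of the driver's internal counters

	session, err := mgo.Dial("localhost") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Periodically correlate the goroutine count with the socket stats.
	for range time.Tick(1 * time.Minute) {
		s := mgo.GetStats()
		log.Printf("goroutines=%d socketsAlive=%d socketsInUse=%d",
			runtime.NumGoroutine(), s.SocketsAlive, s.SocketsInUse)
	}
}
```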
I've checked the code, and what I do is: I dial a session at the beginning of the runtime, then for each mongo request I copy the session and defer session.Close() in every function. The sessions all have a 1 minute timeout. Please let me know if you need any more information on the issue.
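A minimal sketch of that session-per-request pattern, assuming the 1 minute timeout refers to the dial timeout (it could equally be a socket timeout set via session.SetSocketTimeout); db and collection names are placeholders:

```go
package main

import (
	"log"
	"time"

	"github.com/globalsign/mgo"
	"github.com/globalsign/mgo/bson"
)

var rootSession *mgo.Session

func init() {
	var err error
	// Dial once at startup; placeholder address.
	rootSession, err = mgo.DialWithTimeout("localhost", 1*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
}

// handleRequest copies the root session so each request draws its own
// socket from the pool; Close returns the socket when the request ends.
func handleRequest() error {
	s := rootSession.Copy()
	defer s.Close()

	var result bson.M
	// Placeholder query against hypothetical db/collection names.
	return s.DB("mydb").C("mycol").Find(bson.M{}).One(&result)
}

func main() {
	if err := handleRequest(); err != nil {
		log.Println("request failed:", err)
	}
}
```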