Idle connections are not getting released when used with Try-with-resources #2215

Open
VikChauhan20 opened this issue Jul 11, 2024 · 7 comments

Comments

@VikChauhan20

We are facing an issue where a few SELECT statements leak connections even though they are executed inside try-with-resources blocks, while the UPDATE and DELETE statements, which also use try-with-resources, work fine.
We are using a PostgreSQL DB: spring.datasource.driver-class-name=org.postgresql.Driver

Code snippet (attached as a screenshot, with a few values changed for security purposes):
[screenshot: code snippet]
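
Since the snippet is only available as a screenshot, here is a minimal sketch of the pattern described (a SELECT run through try-with-resources against the pooled DataSource); the class, query, and column names are placeholders, not the actual code from the screenshot:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class OrderQuery {
    // Placeholder query; the real statement is in the screenshot above.
    private static final String SQL = "SELECT order_id FROM orders WHERE status = ?";

    public List<String> findOrderIds(DataSource dataSource, String status) throws SQLException {
        List<String> ids = new ArrayList<>();
        // Connection, PreparedStatement and ResultSet are all declared in
        // try-with-resources headers, so they are closed (and the connection
        // returned to the Hikari pool) even if the query throws.
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(SQL)) {
            ps.setString(1, status);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getString("order_id"));
                }
            }
        }
        return ids;
    }
}
```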

Hikari Properties being used:
spring.datasource.hikari.minimumIdle=10
spring.datasource.hikari.maximumPoolSize=50
spring.datasource.hikari.idleTimeout=10000
spring.datasource.hikari.connectionTimeout=240000
spring.datasource.hikari.maxLifetime=300000

Error screenshot (the query in this image is different, but the issue is the same):
[screenshot: error stack trace]

Please help.

@quaff
Contributor

quaff commented Jul 15, 2024

Connections are kept open for reuse; that's the purpose of the pool. What's the problem here?

@VikChauhan20
Author

> Connections are kept open for reuse; that's the purpose of the pool. What's the problem here?

The problem is that they are not being reused either. Those connections are stuck in the idle state and are not used for other transactions, so our total pool connections get exhausted as the number of stuck connections grows.

@quaff
Contributor

quaff commented Jul 16, 2024

> Connections are kept open for reuse; that's the purpose of the pool. What's the problem here?

> The problem is that they are not being reused either. Those connections are stuck in the idle state and are not used for other transactions, so our total pool connections get exhausted as the number of stuck connections grows.

What makes you think they are not reused? Please note the default minIdle is 10.

@VikChauhan20
Author

> Connections are kept open for reuse; that's the purpose of the pool. What's the problem here?

> The problem is that they are not being reused either. Those connections are stuck in the idle state and are not used for other transactions, so our total pool connections get exhausted as the number of stuck connections grows.

> What makes you think they are not reused? Please note the default minIdle is 10.

@quaff The basis of my understanding is this screenshot: we have 4 pods running, each with minimumIdle=10, so the total should be 40. But as highlighted in yellow there are 42 connections, and the state change for those 2 extra connections is more than 46 minutes old. Requesting help.

[screenshot: connection list showing 42 connections, with 2 idle connections whose last state change is over 46 minutes old]
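
The screenshot itself is not reproduced here. Assuming the counts were taken from PostgreSQL's pg_stat_activity view (an assumption; only the output is visible in the screenshot), a query along these lines gives the same kind of breakdown, with the database name as a placeholder:

```sql
-- Count connections per state and show how long ago each group last changed state.
SELECT state,
       count(*)                  AS connections,
       max(now() - state_change) AS oldest_state_change
FROM pg_stat_activity
WHERE datname = 'mydb'           -- placeholder database name
GROUP BY state
ORDER BY connections DESC;
```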

@quaff
Contributor

quaff commented Jul 17, 2024

> so the total should be 40

The total can be anywhere from 40 to 200, because maximumPoolSize is 50 and the pool grows under concurrency; it should come back down to 40 once idleTimeout has elapsed.
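(With 4 pods, that is 4 × minimumIdle 10 = 40 connections at rest, and 4 × maximumPoolSize 50 = 200 connections at peak.)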

@VikChauhan20
Author

> so the total should be 40

> The total can be anywhere from 40 to 200, because maximumPoolSize is 50 and the pool grows under concurrency; it should come back down to 40 once idleTimeout has elapsed.

@quaff
Yes, the problem is that it does not come back down to 40, even after idleTimeout has elapsed. Here, 2 idle connections have exceeded the timeout but have not been released, and they will remain in that idle state until we restart the pods.
These idle connections keep increasing until the pool reaches its 200-connection ceiling, because they are never released, and the application then starts throwing a "Connection Timeout" exception since no more connections are available in the pool for allocation. :(
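
One diagnostic worth noting here (not something raised in the thread, just a sketch in the same property style as the configuration in the original post) is HikariCP's leak-detection threshold, which logs a warning with a stack trace whenever a connection is borrowed and not returned within the configured time; that would distinguish a genuine connection leak from connections merely sitting idle in the pool:

spring.datasource.hikari.leakDetectionThreshold=60000

The 60000 ms value is only an example; HikariCP requires at least 2000 ms for this setting.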

@VikChauhan20
Author

Any suggestion is appreciated, as the issue still persists. :(
