how to configure pooled connection idle timeout #612
I have created a related SO question: how to configure pooled connection idle timeout in reactor-netty.
@kimec Currently this is not possible. We plan to have such functionality for 0.9.x.
@kimec I responded to your SO post with an approach that worked for me in the 0.7.x branch. It leveraged the existing idle state handlers that netty provides. I could not figure out how to do that in 0.8.x, so we lost that functionality when we upgraded.
@jim2paterson @kimec With @jim2paterson
@violetagg Thanks for the suggestion, but I'd already attempted that on 0.8.3/0.8.4 without success. The code compiles but does not behave properly. In 0.7.x, you can see established HTTP socket connections being removed after the specified idle time. I tried it again just now with 0.8.5 and got an exception dump that ultimately ends in a stack overflow. Thanks again for the suggestion, but 0.8.5 is working OK for us with fixed connection pools, so I am content to wait for the official support for idle timeouts in 0.9.x.
@jim2paterson @violetagg thank you both for your replies. Since we are still on 0.7.x, I will try @jim2paterson's advice.
I really want this feature, especially for TcpClient (and HttpClient as well).
Any plans or dates for when to expect this feature?
@kimec I managed to configure an idle timeout for pooled connections. My solution is partially based on the official documentation about IdleStateHandler, extended with my research on how to properly apply it when creating an instance of WebClient. Here is how I did that:

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

public class IdleCleanupHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(final ChannelHandlerContext ctx, final Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            final IdleState state = ((IdleStateEvent) evt).state();
            if (state == IdleState.ALL_IDLE) { // or READER_IDLE / WRITER_IDLE
                // close the idling channel
                ctx.close();
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```
...
```java
public static WebClient createWebClient(final String baseUrl, final int idleTimeoutSec) {
    final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
            .bootstrap(bootstrap -> BootstrapHandlers.updateConfiguration(bootstrap, "idleTimeoutConfig",
                    (connectionObserver, channel) -> {
                        channel.pipeline()
                               .addLast("idleStateHandler", new IdleStateHandler(0, 0, idleTimeoutSec))
                               .addLast("idleCleanupHandler", new IdleCleanupHandler());
                    }));
    return WebClient.builder()
            .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
            .baseUrl(baseUrl)
            .build();
}
```

UPDATE: My further testing has indicated that adding the handlers during the bootstrap configuration does not work reliably. The right way to add the handlers is:

```java
public static WebClient createWebClient(final String baseUrl, final int idleTimeoutSec) {
    final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
            .doOnConnected(conn -> {
                final ChannelPipeline pipeline = conn.channel().pipeline();
                if (pipeline.context("idleStateHandler") == null) {
                    pipeline.addLast("idleStateHandler", new IdleStateHandler(0, 0, idleTimeoutSec))
                            .addLast("idleCleanupHandler", new IdleCleanupHandler());
                }
            });
    return WebClient.builder()
            .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
            .baseUrl(baseUrl)
            .build();
}
```
@darklynx Wow, that is a good alternative solution. I have tried to find a way to avoid this problem, but so far have not.
@creatorKoo I'm sure you can still combine it with ReadTimeoutHandler and WriteTimeoutHandler, adding them before IdleStateHandler. Of course, the configured timeouts of … The main finding of mine from yesterday was that TcpClient.doOnConnected(), as well as …
But you can reconfigure existing handlers on … UPDATE: the Connection object that one gets access to during … I wonder what that "pool release" means in that context.
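A minimal sketch of the combination described above, assuming reactor-netty 0.8.x and the IdleCleanupHandler from earlier in the thread; the handler names and timeout values are illustrative, not from the original comment:

```java
import io.netty.channel.ChannelPipeline;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.handler.timeout.ReadTimeoutHandler;
import reactor.netty.tcp.TcpClient;
import reactor.netty.resources.ConnectionProvider;

public class CombinedTimeoutsSketch {
    public static TcpClient createTcpClient() {
        return TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
                .doOnConnected(conn -> {
                    final ChannelPipeline pipeline = conn.channel().pipeline();
                    // doOnConnected fires on every acquisition from the pool,
                    // so guard against adding the handlers twice
                    if (pipeline.context("readTimeoutHandler") == null) {
                        pipeline.addLast("readTimeoutHandler", new ReadTimeoutHandler(30))   // per-read timeout
                                .addLast("idleStateHandler", new IdleStateHandler(0, 0, 60)) // all-idle timeout
                                .addLast("idleCleanupHandler", new IdleCleanupHandler());
                    }
                });
    }
}
```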
@violetagg Is this what we should expect in reactor 0.9.x from the beginning? Will the pool support configuration to remove idle sockets/connections?
With this, connections whose idle time exceeds the configured value will be removed from the pool on acquire, i.e. if the acquired connection's idle time is above the configured value, the connection will be closed and another one will be acquired.
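For reference, a minimal sketch of configuring this via the ConnectionProvider builder introduced in reactor-netty 0.9.x; the pool name, connection count, and duration here are illustrative:

```java
import java.time.Duration;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class MaxIdleTimeSketch {
    public static HttpClient createHttpClient() {
        // Connections idle longer than maxIdleTime are evicted (checked on acquire)
        ConnectionProvider provider = ConnectionProvider.builder("custom-pool")
                .maxConnections(50)
                .maxIdleTime(Duration.ofSeconds(30))
                .build();
        return HttpClient.create(provider);
    }
}
```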
Just posting this comment to make the solution look more obvious. |
Also in the Reference Guide https://projectreactor.io/docs/netty/release/reference/index.html#_connection_pool |
@violetagg I'm facing an issue setting the idle timeout & LIFO strategy for a Spring Boot application through application.yaml, can you please help here?
@hragarwalee Programmatically you should do it like this. For anything else specific to Spring Boot: use the corresponding support channels for Spring Boot.
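A hedged sketch of the programmatic configuration, assuming reactor-netty 0.9.5+ (where the builder exposes lifo()); the pool name and values are illustrative:

```java
import java.time.Duration;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public class LifoPoolSketch {
    public static WebClient createWebClient() {
        ConnectionProvider provider = ConnectionProvider.builder("custom-pool")
                .maxConnections(50)
                .maxIdleTime(Duration.ofSeconds(20))
                .lifo() // hand out the most recently released connection first
                .build();
        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(HttpClient.create(provider)))
                .build();
    }
}
```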
Hi @violetagg, regarding slide 92 you linked: does one still have to pass the custom provider to the …
@kimec Yep, it is a typo, you have to pass that to …
I am using reactor-netty http client (0.7.X series) with connection pooling and would like to configure pooled connection's idle timeout but don't know where.
More precisely, I need to configure reactor-netty's connection pool in such a way that it will automatically close connections that did not see any activity within configurable timeout. These connections are open but no bytes were transferred in or out for some (configurable) amount of time.
As an example, Jetty's http client has a configuration option with the above semantics branded as connectionIdleTimeout.
Is there an analogous setting in reactor-netty that allows me to set a connection's idle timeout?
How can I configure reactor-netty http client to close idle connections preemptively?
We are getting "Connection prematurely closed" errors described in #413 and #498 because of this.
Expected behavior
Pool automatically closes a connection which was idle for a given time interval
Actual behavior
Don't know how to force the pool to close the inactive connection
Steps to reproduce
N/A
Reactor Netty version
0.7.13
JVM version (e.g. java -version)
N/A
OS version (e.g. uname -a)
N/A