
how to configure pooled connection idle timeout #612

Closed
kimec opened this issue Feb 13, 2019 · 20 comments
Labels
type/enhancement A general enhancement
Milestone

Comments

@kimec

kimec commented Feb 13, 2019

I am using the reactor-netty http client (0.7.x series) with connection pooling and would like to configure the pooled connections' idle timeout, but I don't know where.

More precisely, I need to configure reactor-netty's connection pool in such a way that it will automatically close connections that did not see any activity within a configurable timeout. These connections are open, but no bytes have been transferred in or out for some (configurable) amount of time.

As an example, Jetty's http client has a configuration option with the above semantics, named connectionIdleTimeout.

Is there an analogous setting in reactor-netty that allows me to set a connection's idle timeout?
How can I configure the reactor-netty http client to close idle connections preemptively?

We are getting Connection prematurely closed errors described in #413 and #498 because of this.

Expected behavior

The pool automatically closes a connection that has been idle for a given time interval.

Actual behavior

I don't know how to force the pool to close an inactive connection.

Steps to reproduce

N/A

Reactor Netty version

0.7.13

JVM version (e.g. java -version)

N/A

OS version (e.g. uname -a)

N/A

@kimec kimec changed the title Need to configure pooled connection idle timeout how to configure pooled connection idle timeout Feb 19, 2019
@kimec
Author

kimec commented Feb 19, 2019

I have created a related SO question: how to configure pooled connection idle timeout in reactor-netty

@violetagg
Member

@kimec This is currently not possible. We plan to add such functionality in 0.9.x.

@violetagg violetagg added the type/enhancement A general enhancement label Feb 19, 2019
@violetagg violetagg added this to the 0.9.x Backlog milestone Feb 19, 2019
@jim2paterson

@kimec I responded to your SO post with an approach that worked for me in the 0.7.x branch. It leveraged existing idle state handlers that netty provides. I could not figure out how to do that in 0.8.x, so we lost that functionality when we upgraded.

@violetagg
Member

@jim2paterson @kimec With 0.8.5.RELEASE we switched the connection pool to use the oldest Channel instead of the most recent one - #601

@jim2paterson
You can use the code below with 0.8.x (I just copied the code from SO and adapted it to the 0.8.x API):

import java.util.concurrent.TimeUnit;

import io.netty.handler.timeout.ReadTimeoutHandler;
import io.netty.handler.timeout.WriteTimeoutHandler;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

ConnectionProvider connectionProvider = ConnectionProvider.fixed(connectionPoolName, maxConnections, timeoutPool);
HttpClient.create(connectionProvider)
          .port(endpointUrl.getPort())
          .tcpConfiguration(tcpClient ->
              tcpClient.host(endpointUrl.getHost())
                       .doOnConnected(c ->
                           c.channel()
                            .pipeline()
                            // The write and read timeouts serve as generic socket idle state handlers.
                            .addFirst("write_timeout", new WriteTimeoutHandler(timeoutIdle, TimeUnit.MILLISECONDS))
                            .addFirst("read_timeout", new ReadTimeoutHandler(timeoutIdle, TimeUnit.MILLISECONDS))));

@jim2paterson

@violetagg Thanks for the suggestion, but I'd already attempted that on 0.8.3/0.8.4 without success. The code compiles but does not behave properly. In 0.7.x, you can see established http socket connections being removed after the specified idle time. I tried it again just now with 0.8.5 and got the following exception dump, which ultimately dies with a stack overflow.

Thanks again for the suggestion, but 0.8.5 is working OK for us with fixed connection pools, so I am content to wait for the official support for idle timeouts in 0.9.x.

2019-02-20 23:26:08,582 ERROR [nioEventLoopGroup-3-4] [reactor.core.publisher.Operators] [] Operator called default onErrorDropped
io.netty.handler.proxy.ProxyConnectException: http, none, /10.130.40.8:8081 => example.com:80, disconnected
	at io.netty.handler.proxy.ProxyHandler.channelInactive(ProxyHandler.java:236)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelInactive(CombinedChannelDuplexHandler.java:420)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:390)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355)
	at io.netty.handler.codec.http.HttpClientCodec$Decoder.channelInactive(HttpClientCodec.java:282)
	at io.netty.channel.CombinedChannelDuplexHandler.channelInactive(CombinedChannelDuplexHandler.java:223)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
	at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:277)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1403)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:912)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:826)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:495)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:844)
2019-02-20 23:26:08,582 WARN  [nioEventLoopGroup-3-4] [io.netty.channel.AbstractChannelHandlerContext] [] An exception 'reactor.core.Exceptions$BubblingException: io.netty.handler.proxy.ProxyConnectException: http, none, /10.130.40.8:8081 => example.com:80, disconnected' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
io.netty.handler.proxy.ProxyConnectException: http, none, /10.130.40.8:8081 => example.com:80, disconnected
	at io.netty.handler.proxy.ProxyHandler.channelInactive(ProxyHandler.java:236)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
	at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelInactive(CombinedChannelDuplexHandler.java:420)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:390)
	at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355)
	at io.netty.handler.codec.http.HttpClientCodec$Decoder.channelInactive(HttpClientCodec.java:282)
	at io.netty.channel.CombinedChannelDuplexHandler.channelInactive(CombinedChannelDuplexHandler.java:223)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
	at io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:277)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1403)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231)
	at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:912)
	at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:826)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:495)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:844)
2019-02-20 23:26:10,372 WARN  [nioEventLoopGroup-3-4] [io.netty.channel.AbstractChannelHandlerContext] [] Failed to mark a promise as failure because it has failed already: [DefaultChannelPromise@d3d3618(failure: java.lang.StackOverflowError), java.lang.StackOverflowError
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:542)
	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:120)
	at io.netty.util.internal.PromiseNotificationUtil.tryFailure(PromiseNotificationUtil.java:64)
	at io.netty.channel.AbstractChannelHandlerContext.notifyOutboundHandlerException(AbstractChannelHandlerContext.java:843)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:626)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelOutboundHandlerAdapter.close(ChannelOutboundHandlerAdapter.java:71)
	at io.netty.channel.AbstractChannelHandlerContext.invokeClose(AbstractChannelHandlerContext.java:624)
	at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
	at io.netty.channel.ChannelDuplexHandler.close(ChannelDuplexHandler.java:73)
  
  etc...

@kimec
Author

kimec commented Feb 21, 2019

@jim2paterson @violetagg thank you both for your replies. Since we are still on 0.7.x, I will try @jim2paterson's advice.

@creatorKoo

I really want this feature, especially for TcpClient (and for HttpClient as well).

@darklynx

Are there any plans or dates for when to expect this feature?

@darklynx

darklynx commented Jul 24, 2019

@kimec I managed to configure WebClient (via the underlying TcpClient) to remove idle connections from the connection pool on timeout, in reactor-netty 0.8.9.

My solution is partially based on the official documentation about IdleStateHandler extended with my research on how to properly apply it when creating an instance of HttpClient.

Here is how I did that:

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;

public class IdleCleanupHandler extends ChannelDuplexHandler {
    @Override
    public void userEventTriggered(final ChannelHandlerContext ctx, final Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            final IdleState state = ((IdleStateEvent) evt).state();
            if (state == IdleState.ALL_IDLE) { // or READER_IDLE / WRITER_IDLE
                // close the idling channel
                ctx.close();
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}

...

public static WebClient createWebClient(final String baseUrl, final int idleTimeoutSec) {
    final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
        .bootstrap(bootstrap -> BootstrapHandlers.updateConfiguration(bootstrap, "idleTimeoutConfig",
            (connectionObserver, channel) -> {
                channel.pipeline()
                    .addLast("idleStateHandler", new IdleStateHandler(0, 0, idleTimeoutSec))
                    .addLast("idleCleanupHandler", new IdleCleanupHandler());
            }));

    return WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
        .baseUrl(baseUrl)
        .build();
}

UPDATE:

My further testing has indicated that adding handlers during the bootstrap hook disrupts the pool, and sockets (channels) are not reused by the Connection.

The right way to add the handlers is:

public static WebClient createWebClient(final String baseUrl, final int idleTimeoutSec) {
    final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
        .doOnConnected(conn -> {
            final ChannelPipeline pipeline = conn.channel().pipeline();
            // doOnConnected fires on every acquire from the pool,
            // so guard against adding the handlers twice
            if (pipeline.context("idleStateHandler") == null) {
                pipeline.addLast("idleStateHandler", new IdleStateHandler(0, 0, idleTimeoutSec))
                        .addLast("idleCleanupHandler", new IdleCleanupHandler());
            }
        });

    return WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
        .baseUrl(baseUrl)
        .build();
}
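
For illustration, a factory like the one above could then be used as follows (a minimal usage sketch; the base URL, path, and 30-second timeout are made-up example values):

final WebClient client = createWebClient("http://example.com", 30);
// connections idle in the pool for more than 30 seconds are closed by IdleCleanupHandler
final String body = client.get()
                          .uri("/ping")
                          .retrieve()
                          .bodyToMono(String.class)
                          .block();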

@creatorKoo

@darklynx Wow, that is a good alternative solution.
However, I found a weakness: with this solution, the idle timeout must be greater than the read timeout.
For example, if you set the idle timeout to 5 and the read timeout to 10, the connection closes after 5, not 10.

I have tried to find a way around this problem, but without success so far.

@darklynx

darklynx commented Jul 25, 2019

@creatorKoo I'm sure you can still combine it with ReadTimeoutHandler and WriteTimeoutHandler, adding them before IdleStateHandler. Of course, the configured timeouts of IdleStateHandler should not disrupt the expected read/write timeouts for the external service.
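
An untested sketch of that combination, reusing the IdleCleanupHandler from above (the 10-second read/write and 30-second idle values are arbitrary; how the handlers interact while a connection is parked in the pool still needs to be verified):

final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
    .doOnConnected(conn -> {
        final ChannelPipeline pipeline = conn.channel().pipeline();
        if (pipeline.context("idleStateHandler") == null) {
            // read/write timeouts guard in-flight requests...
            pipeline.addLast("read_timeout", new ReadTimeoutHandler(10, TimeUnit.SECONDS))
                    .addLast("write_timeout", new WriteTimeoutHandler(10, TimeUnit.SECONDS))
                    // ...while the idle handlers evict connections that see no activity at all
                    .addLast("idleStateHandler", new IdleStateHandler(0, 0, 30))
                    .addLast("idleCleanupHandler", new IdleCleanupHandler());
        }
    });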

My main finding from yesterday was that TcpClient.doOnConnected(), as well as TcpClient.doOnConnect() and TcpClient.doOnDisconnected(), are not the right place to configure idle handlers, since they are called every time the socket Channel is picked up from the internal pool to handle a new HTTP request. One should not add new handlers to Connection.channel().pipeline() during TcpClient.doOnConnected(), or there will be an exception like:

2019-07-25 10:15:05.233  WARN 86772 --- [ctor-http-nio-7] i.n.u.concurrent.AbstractEventExecutor   : A task raised an exception. Task: reactor.netty.resources.PooledConnectionProvider$DisposableAcquire@13f8284

java.lang.IllegalArgumentException: Duplicate handler name: idleStateHandler
	at io.netty.channel.DefaultChannelPipeline.checkDuplicateName(DefaultChannelPipeline.java:1066) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.filterName(DefaultChannelPipeline.java:284) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
	at io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:204) ~[netty-transport-4.1.36.Final.jar:4.1.36.Final]
...

But you can reconfigure existing handlers in TcpClient.doOnConnected() and reset the configuration in TcpClient.doOnDisconnected() if you wish to interrupt the logic of IdleStateHandler while the Channel is picked up from the pool and busy with HTTP request processing.

UPDATE: The Connection object that one gets access to during TcpClient.doOnConnected() has alternative methods (addHandler, addHandlerFirst, and addHandlerLast) that are safe to call multiple times, since they ignore attempts to add duplicates. Also, the JavaDoc of Connection.addHandler() states: "If effectively added, the handler will be safely removed when the channel is made inactive (pool release)."

I wonder what that "pool release" means in this context...
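
If that holds, a duplicate-safe variant of the snippet above would presumably look like this (an untested sketch against the Connection API; idleTimeoutSec is the parameter from createWebClient above):

final TcpClient tcpClient = TcpClient.create(ConnectionProvider.fixed("fixed-pool"))
    .doOnConnected(conn ->
        // addHandlerLast ignores duplicates, so no explicit pipeline.context() check is needed
        conn.addHandlerLast("idleStateHandler", new IdleStateHandler(0, 0, idleTimeoutSec))
            .addHandlerLast("idleCleanupHandler", new IdleCleanupHandler()));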

@violetagg
Member

This depends on #717, #723

@violetagg violetagg modified the milestones: 0.9.x Backlog, 0.9.0.M3 Jul 26, 2019
@darklynx

@violetagg is this what we should expect in reactor-netty 0.9.x, from the beginning? Will the pool support configuration to remove idle sockets/connections?

@violetagg
Member

#792

With this change, connections whose idle time exceeds the configured value will be removed from the pool on acquire, i.e. if an acquired connection has been idle longer than the configured time, it will be closed and another one will be acquired.

@Arivanandam

Arivanandam commented Nov 22, 2019

Just posting this comment to make the solution more visible.
A new overloaded version of the method is available now that lets you set the max idle time:
f0729c5#diff-838a952a6538b5173f40436cdfacae56R178
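
For illustration, using that overload might look like this (a sketch; the pool name, size, acquire timeout, and idle time are arbitrary example values):

import java.time.Duration;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

// 0.9.x: the Duration parameter is the maximum idle time of pooled connections
ConnectionProvider provider =
        ConnectionProvider.fixed("fixed-pool", 500, 45_000, Duration.ofSeconds(20));
HttpClient client = HttpClient.create(provider);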

@hragarwalee

hragarwalee commented Nov 9, 2020

@violetagg I'm facing an issue setting the idle timeout & LIFO strategy for a Spring Boot application through application.yaml. Can you please help here?

@violetagg
Member

@hragarwalee Programmatically, you should do it like this:
https://speakerdeck.com/violetagg/how-to-avoid-common-mistakes-when-using-reactor-netty?slide=92

For anything specific to Spring Boot, please use the corresponding Spring Boot support channels.

@kimec
Author

kimec commented Nov 9, 2020

Hi @violetagg, regarding slide 92 that you linked: does one still have to pass the custom provider to the HttpClient? It seems to me that the reference to the provider should be passed to HttpClient.create(ConnectionProvider connectionProvider), and that is not happening on the slide. Is it a typo? I haven't seen the presentation, so it is hard to guess the context from the slide alone (i.e. whether or not it is an intentional example of a common error to avoid).

@violetagg
Member

@kimec Yep, it is a typo; you have to pass the provider to HttpClient.create.
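
For reference, the corrected example presumably looks like this (a sketch against the 0.9.x ConnectionProvider.Builder API; the names and values are illustrative):

import java.time.Duration;

import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

ConnectionProvider provider =
        ConnectionProvider.builder("custom")
                          .maxConnections(50)
                          .maxIdleTime(Duration.ofSeconds(20))
                          .lifo()
                          .build();

// the step missing on the slide: pass the custom provider to the client
HttpClient client = HttpClient.create(provider);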
