max_idle_time specification when using reactor-netty #3414

Closed
yuya17 opened this issue Aug 30, 2024 · 19 comments
Assignees
Labels
status/invalid We don't feel this issue is valid

Comments

@yuya17

yuya17 commented Aug 30, 2024

Because of a firewall on the intermediate route, we have found that we need FIN packets to be sent. We would like to confirm the following two points about the max_idle_time specification:
(1) When the max_idle_time limit is exceeded, is a FIN packet sent?
(2) If a FIN packet is not sent, is there a way to send one?

In addition to the questions above, here is the current behavior we observe.
Current behavior: Even when requests are made to the same destination address, a new port is assigned for each request and a new connection is established. (Connections are not reused.)
Given the behavior above, could you clarify the following points?
①When "reactor.netty.pool.maxConnections" is set, what is the general behavior once that upper limit is reached? (For example, whether a FIN packet is sent when max-idle-time is also configured.)
②As described in the issue below, is it the case that FIN packets are not sent as long as the build method of the ConnectionProvider class is used?
Other issue: #3071

Expected behavior
・Sessions are destroyed according to the lifetime configured in max_idle_time
・Connections are reused

Actual behavior
・The session lifetime is set via max_idle_time, but the session is not destroyed after the lifetime expires.
・A new connection is created for each request and is not reused. (The destination IP address stays the same, but a new port number is assigned.)

Steps to reproduce
Same as Actual behavior.

Minimal yet complete reproducer code (or URL to code)

①Internal implementation of the REST communication wrapper built on reactor-netty
 ※The REST communication executes the methods below in order from the top.

// Create an instance of WebClient.
private void createWebClient() {
  Builder builder = WebClient.builder();
  builder.baseUrl(baseUrl);
  HttpClient httpClient;
  if (maxIdleTime != null) {
    ConnectionProvider provider =
        ConnectionProvider.builder("com.kmp3g.dfw.restclient.RestClient")
            .maxIdleTime(Duration.ofMillis(maxIdleTime))
            .build();
    httpClient = HttpClient.create(provider);
  } else {
    httpClient = HttpClient.create();
  }

  // Connection timeout settings
  httpClient = httpClient.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connectTimeout);

  // Response timeout settings
  if (responseTimeout != null) {
    httpClient = httpClient.responseTimeout(Duration.ofMillis(responseTimeout));
  }

  // Proxy configuration
  if (proxyType != null && proxyHost != null && !proxyHost.isEmpty() && proxyPort != null) {
    httpClient = httpClient.proxy(typeSpec ->
        typeSpec.type(proxyType).host(proxyHost).port(proxyPort));
  }
  builder = builder.clientConnector(new ReactorClientHttpConnector(httpClient));
  if (codecMaxInMemorySize != null) {
    ExchangeStrategies strategies = ExchangeStrategies.builder()
        .codecs(configurer ->
            configurer.defaultCodecs().maxInMemorySize(codecMaxInMemorySize))
        .build();
    builder = builder.exchangeStrategies(strategies);
  }

  // Filter settings
  builder = builder.filter(restRequestIDFilter.filter());
  if (Boolean.TRUE.equals(authInfoLink)) {
    builder = builder.filter(restAuthInfoLinkFilter.filter());
  }
  builder = builder.filter(restRequestLogFilter.filter());
  this.webClient = builder.build();
}

/**
 * Get REST client class.
 * @param accessInfoKey Key that uniquely identifies the connection destination.
 * @return Generated REST client class.
 * @throws IllegalArgumentException if no destination information matches the key.
 * @see RestClient
 */

public static RestClient getRestClient(String accessInfoKey) {
  String baseUrl = RestClientProperties.getBaseUrl(accessInfoKey);
  if (baseUrl == null || baseUrl.isEmpty()) {
    throw new IllegalArgumentException(
        String.format(MESSAGE_ACCESS_INFO_KEY_INVALID, accessInfoKey));
  }
  return (RestClient) RestClientAppConfig.getContext().getBean("dfw.restclient.RestClient",
      accessInfoKey);
}


// Create requester class.
@Override
public RestRequester createPostRequester(String urlPath, Map<String, String> requestHeader,
    Object jsonDto, Class dtoClass) {
  validate(urlPath, dtoClass);
  RequestBodySpec bodySpec = webClient.post().uri(urlPath);
  setRequestHeader(bodySpec, requestHeader);
  String jsonBody = toStringJson(jsonDto);
  setJsonBody(bodySpec, jsonBody);
  ResponseSpec responseSpec = bodySpec.retrieve();
  Mono<ResponseEntity> mono = responseSpec.toEntity(dtoClass);
  mono = setupMono(mono, METHOD_POST, createUrl(urlPath), jsonBody);
  return new RestRequester<>(mono);
}

②Sample source code showing how an application uses the REST communication function from ① above (for reference)

import java.util.HashMap;
import java.util.Map;
import com.kmp3g.dfw.restclient.RestClient;
import com.kmp3g.dfw.restclient.RestClientFactory;
import com.kmp3g.dfw.restclient.RestRequester;
public class Sample {
  public void run() {
    Map<String, String> requestHeader = new HashMap<>();
    Map<String, String> queryParam = new HashMap<>();
    // 1. Get a REST client instance from the REST factory
    RestClient restClient = RestClientFactory.getRestClient("server1");
    // 2. Get a REST requester to execute the GET method and obtain the result as a DTO
    RestRequester<Hello1ResultDTO> requester = restClient.createGetRequester("/api/hello/1", requestHeader, queryParam,
        Hello1ResultDTO.class);
    // 3. Execute the access to the REST API and receive the result
    Hello1ResultDTO result = requester.execute();
  }
}

Reactor-Netty version
1.0.39

※Spring Framework version: 5.3.20

JVM version (e.g. java -version)
Container Image:ibmcom/websphere-liberty:22.0.0.6-full-java17-openj9-ubi

OS version (e.g. uname -a)
Linux/amd64

@yuya17 yuya17 added status/need-triage A new issue that still need to be evaluated as a whole type/bug A general bug labels Aug 30, 2024
@yuya17
Author

yuya17 commented Sep 4, 2024

What is your opinion on the above points?

I also have an additional point to check.
As stated in the issue below, I found that Netty sends a FIN packet when I change ConnectionProvider provider = ConnectionProvider.builder(...).build() to ConnectionProvider provider = ConnectionProvider.newConnection();
Other issue: #3071

However, the following issue is occurring. Do you have any insight into this?
Verification: three requests were sent at 3-minute intervals to the same destination, then the client was left idle for about 10 minutes after the last request.

  • FIN and RST are sent about 12 seconds after receiving HTTP 200 OK for the first and second requests.
  • FIN and RST are not sent for the last (third) request.
     → A FIN is sent each time a request completes, but for some reason no FIN is ever sent for the last request.

@yuya17
Author

yuya17 commented Sep 10, 2024

Deadline for response: 2024/9/18
Is it okay to consider the following two issues observed in the implementation as potential bugs on the Reactor Netty side?

  1. Connections are not reused from the connection pool.
  2. The connection state does not transition from inactive to closed after max-idle-time is exceeded.

I suspect that other issues may also be involved.

#3373
#3318


Builder builder = WebClient.builder();
builder.baseUrl(baseUrl);
HttpClient httpClient;
if (maxIdleTime != null) {
  ConnectionProvider provider =
      ConnectionProvider.builder("com.kmp3g.dfw.restclient.RestClient")
          .maxIdleTime(Duration.ofMillis(maxIdleTime))
          .build();
  httpClient = HttpClient.create(provider);
} else {
  httpClient = HttpClient.create();
}

@violetagg violetagg self-assigned this Sep 10, 2024
@violetagg violetagg removed the status/need-triage A new issue that still need to be evaluated as a whole label Sep 10, 2024
@violetagg
Member

@yuya17 Reactor Netty 1.0.x has reached its EOL and is no longer supported. Is it possible for you to update to a supported version?

@violetagg violetagg added the for/user-attention This issue needs user attention (feedback, rework, etc...) label Sep 10, 2024
@yuya17
Author

yuya17 commented Sep 11, 2024

@violetagg
#3414 (comment)
The above issue occurs even when changing to reactor-netty version 1.1.22.

Also, where can I find the information that version 1.0.x is EOL?
Version 1.0.39 was released on 2023/11/15, and there is no deprecation notice in any of the documents.

@violetagg
Member

violetagg commented Sep 11, 2024 via email

@violetagg
Member

@yuya17 Please provide a reproducible example that shows the issue in Reactor Netty.

@iwamoto703

@violetagg
I'll submit a reproducible example.

Internal implementation of the REST communication wrapper built on reactor-netty
※The REST communication executes the methods below in order from the top.

// Create an instance of WebClient.
private void createWebClient() {
  Builder builder = WebClient.builder();
  builder.baseUrl(baseUrl);
  HttpClient httpClient;
  if (maxIdleTime != null) {
    ConnectionProvider provider =
        ConnectionProvider.builder("com.kmp3g.dfw.restclient.RestClient")
            .maxIdleTime(Duration.ofMillis(maxIdleTime))
            .build();
    httpClient = HttpClient.create(provider);
  } else {
    httpClient = HttpClient.create();
  }

  // Connection timeout settings
  httpClient = httpClient.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, connectTimeout);

  // Response timeout settings
  if (responseTimeout != null) {
    httpClient = httpClient.responseTimeout(Duration.ofMillis(responseTimeout));
  }

  // Proxy configuration
  if (proxyType != null && proxyHost != null && !proxyHost.isEmpty() && proxyPort != null) {
    httpClient = httpClient.proxy(typeSpec ->
        typeSpec.type(proxyType).host(proxyHost).port(proxyPort));
  }
  builder = builder.clientConnector(new ReactorClientHttpConnector(httpClient));
  if (codecMaxInMemorySize != null) {
    ExchangeStrategies strategies = ExchangeStrategies.builder()
        .codecs(configurer ->
            configurer.defaultCodecs().maxInMemorySize(codecMaxInMemorySize))
        .build();
    builder = builder.exchangeStrategies(strategies);
  }

  // Filter settings
  builder = builder.filter(restRequestIDFilter.filter());
  if (Boolean.TRUE.equals(authInfoLink)) {
    builder = builder.filter(restAuthInfoLinkFilter.filter());
  }
  builder = builder.filter(restRequestLogFilter.filter());
  this.webClient = builder.build();
}

/**
 * Get REST client class.
 * @param accessInfoKey Key that uniquely identifies the connection destination.
 * @return Generated REST client class.
 * @throws IllegalArgumentException if no destination information matches the key.
 * @see RestClient
 */

public static RestClient getRestClient(String accessInfoKey) {
  String baseUrl = RestClientProperties.getBaseUrl(accessInfoKey);
  if (baseUrl == null || baseUrl.isEmpty()) {
    throw new IllegalArgumentException(
        String.format(MESSAGE_ACCESS_INFO_KEY_INVALID, accessInfoKey));
  }
  return (RestClient) RestClientAppConfig.getContext().getBean("dfw.restclient.RestClient",
      accessInfoKey);
}


// Create requester class.
@Override
public RestRequester createPostRequester(String urlPath, Map<String, String> requestHeader,
    Object jsonDto, Class dtoClass) {
  validate(urlPath, dtoClass);
  RequestBodySpec bodySpec = webClient.post().uri(urlPath);
  setRequestHeader(bodySpec, requestHeader);
  String jsonBody = toStringJson(jsonDto);
  setJsonBody(bodySpec, jsonBody);
  ResponseSpec responseSpec = bodySpec.retrieve();
  Mono<ResponseEntity> mono = responseSpec.toEntity(dtoClass);
  mono = setupMono(mono, METHOD_POST, createUrl(urlPath), jsonBody);
  return new RestRequester<>(mono);
}

Below are the debug logs from running the above three times.
Even when requests are made to the same destination address, a new port is assigned for each request and a new connection is established (connections are not reused).

①port:34636
Sep 4 18:08:52 ita-restclient DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=25000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [ita-restclient.com/<unresolved>:443]
Sep 4 18:08:52 ita-restclient DEBUG [962db390] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] Registering pool release on close event for channel
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}, [connected])
Sep 4 18:08:52 ita-restclient DEBUG [962db390-1, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=null, connection=PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [configured])
Sep 4 18:08:52 ita-restclient DEBUG [962db390-1, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [request_prepared])
Sep 4 18:08:52 ita-restclient DEBUG [962db390-1, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [request_sent])
Sep 4 18:08:52 ita-restclient DEBUG [962db390-1, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [response_received])
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [response_completed])
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [disconnecting])
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] Releasing channel
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] Channel cleaned, now: 0 active connections, 1 inactive connections and 0 pending acquire requests.
Sep 4 18:09:22 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 ! R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 ! R:ita-restclient.com/aa.aaa.aa.a:443]}, [disconnecting])
②port:43236
Sep 4 18:09:30 ita-restclient DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=25000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [ita-restclient.com/<unresolved>:443]
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] Registering pool release on close event for channel
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443]}, [connected])
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08-1, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=null, connection=PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [configured])
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08-1, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [request_prepared])
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08-1, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [request_sent])
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08-1, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [response_received])
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [response_completed])
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [disconnecting])
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] Releasing channel
Sep 4 18:09:30 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 - R:ita-restclient.com/aa.aaa.aa.a:443] Channel cleaned, now: 0 active connections, 1 inactive connections and 0 pending acquire requests.
Sep 4 18:10:00 ita-restclient DEBUG [a9a99c08, L:/aaa.aa.aa.aa:43236 ! R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(PooledConnection{channel=[id: 0xa9a99c08, L:/aaa.aa.aa.aa:43236 ! R:ita-restclient.com/aa.aaa.aa.a:443]}, [disconnecting])
③port:42682
Sep 4 18:10:11 ita-restclient DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=25000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [ita-restclient.com/<unresolved>:443]
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b] Created a new pooled channel, now: 0 active connections, 0 inactive connections and 0 pending acquire requests.
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] Registering pool release on close event for channel
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] Channel connected, now: 1 active connections, 0 inactive connections and 0 pending acquire requests.
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(PooledConnection{channel=[id: 0x0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443]}, [connected])
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b-1, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=null, connection=PooledConnection{channel=[id: 0x0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [configured])
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b-1, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [request_prepared])
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b-1, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [request_sent])
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b-1, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [response_received])
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [response_completed])
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [disconnecting])
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] Releasing channel
Sep 4 18:10:11 ita-restclient DEBUG [0aad093b, L:/aaa.aa.aa.aa:42682 - R:ita-restclient.com/aa.aaa.aa.a:443] Channel cleaned, now: 0 active connections, 1 inactive connections and 0 pending acquire requests.

@violetagg violetagg removed the for/user-attention This issue needs user attention (feedback, rework, etc...) label Sep 13, 2024
@iwamoto703

@violetagg
Deadline for response: 2024/9/21

(1)Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(GET{uri=/ita-restclient/app/product/1, connection=PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443]}}, [disconnecting])
(2)Sep 4 18:09:22 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 ! R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 ! R:ita-restclient.com/aa.aaa.aa.a:443]}, [disconnecting])

Regarding (1) and (2) in the port 34636 log above, please check whether the following understanding is correct.

(1) The communication information of the PooledConnection (L:/aaa.aa.aa.aa:34636) is discarded to terminate the communication (the state changes from active to inactive).
(2) The PooledConnection (L:/aaa.aa.aa.aa:34636) is destroyed.

@violetagg
Member

@iwamoto703

Deadline for response: 2024/9/21

I don't understand this deadline.

The code snippet that you provided cannot be run, it shows only how the client is configured. Please provide a reproducible example as I requested.

From the provided logs I can say the following:
The connection is successfully returned to the pool:

Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] Releasing channel
Sep 4 18:08:52 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 - R:ita-restclient.com/aa.aaa.aa.a:443] Channel cleaned, now: 0 active connections, 1 inactive connections and 0 pending acquire requests.

And after some time the connection is closed; most probably this is because of the maxIdleTime configuration:

Sep 4 18:09:22 ita-restclient DEBUG [962db390, L:/aaa.aa.aa.aa:34636 ! R:ita-restclient.com/aa.aaa.aa.a:443] onStateChange(PooledConnection{channel=[id: 0x962db390, L:/aaa.aa.aa.aa:34636 ! R:ita-restclient.com/aa.aaa.aa.a:443]}, [disconnecting])

Every time we try to acquire a connection from the pool, we check whether the connection is still usable: whether it is not closed, the max idle time, the max life time, etc. If the connection is not usable, we close it and remove it from the pool.
More info here https://projectreactor.io/docs/netty/release/reference/index.html#connection-pool-timeout
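For reference, the periodic-check behavior described in the linked documentation is enabled on the ConnectionProvider builder. A minimal configuration sketch (the pool name "example" and the durations are placeholders, not the reporter's values):

```java
import java.time.Duration;
import reactor.netty.resources.ConnectionProvider;

// Sketch: idle/lifetime limits are normally checked only on acquire or
// release; evictInBackground adds a periodic check that closes expired
// connections even when the pool is otherwise untouched.
ConnectionProvider provider = ConnectionProvider.builder("example")
        .maxIdleTime(Duration.ofSeconds(25))
        .evictInBackground(Duration.ofSeconds(5))
        .build();
```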

@violetagg violetagg added the for/user-attention This issue needs user attention (feedback, rework, etc...) label Sep 19, 2024
@violetagg
Member

violetagg commented Sep 19, 2024

On the other hand, creating the pool three times is also very suspicious:

Sep 4 18:08:52 ita-restclient DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=25000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [ita-restclient.com/<unresolved>:443]
...
Sep 4 18:09:30 ita-restclient DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=25000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [ita-restclient.com/<unresolved>:443]
...
Sep 4 18:10:11 ita-restclient DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=25000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [ita-restclient.com/<unresolved>:443]

@iwamoto703

@violetagg

Regarding (1) and (2) of the log of port:34636, please check if there is any error in the following recognition.
That was the deadline for responding to the question above. Your answer resolved it, so there is no problem. Thank you.

And after some time the connection is closed, most probably this is because of the configuration maxIdleTime

Sep 4 18:08:52 ita-restclient DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT0S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=25000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [ita-restclient.com/<unresolved>:443]
maxIdleTime is set to 25 seconds, but the connection is closed after 30 seconds.
It is not being closed by maxIdleTime; instead, Liberty is configured to send a FIN 30 seconds after communication.
I don't understand why the connection doesn't close at maxIdleTime.

@violetagg
Member

violetagg commented Sep 19, 2024

@iwamoto703 Please check the link that I provided with my previous comment https://projectreactor.io/docs/netty/release/reference/index.html#connection-pool-timeout

It is written there:

By default, these timeouts are checked on connection release or acquire operations and, if some timeout is reached, the connection is closed and removed from the connection pool. However, you can also configure the connection pool, by setting evictInBackground, to perform periodic checks on connections.

So if there is no acquisition, the connection pool will do nothing. You need to configure evictInBackground if you want periodic checks on connections.
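To make this concrete, here is a small self-contained simulation in plain Java (the names IdlePool and Conn are invented for illustration; this is not Reactor Netty code): an idle connection is only evicted when something actually checks it, either an acquire/release operation or a periodic background task.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model of an idle-timeout policy: connections record when they were
// released; eviction happens only when evictExpired() is invoked, which a
// real pool does on acquire/release or from a background eviction task.
public class IdlePool {
    static class Conn {
        final long releasedAtMillis;
        Conn(long releasedAtMillis) { this.releasedAtMillis = releasedAtMillis; }
    }

    final long maxIdleMillis;
    final List<Conn> idle = new ArrayList<>();

    IdlePool(long maxIdleMillis) { this.maxIdleMillis = maxIdleMillis; }

    void release(long nowMillis) { idle.add(new Conn(nowMillis)); }

    // Called on acquire -- or periodically when evictInBackground is set.
    int evictExpired(long nowMillis) {
        int evicted = 0;
        for (Iterator<Conn> it = idle.iterator(); it.hasNext(); ) {
            if (nowMillis - it.next().releasedAtMillis > maxIdleMillis) {
                it.remove();  // a real pool would also close the channel (FIN)
                evicted++;
            }
        }
        return evicted;
    }

    public static void main(String[] args) {
        IdlePool pool = new IdlePool(10_000);   // maxIdleTime = 10s
        pool.release(0);                        // connection returned at t=0
        // If nothing touches the pool after t=10s, the expired connection
        // just sits there; only a check (acquire or background) evicts it:
        System.out.println(pool.evictExpired(20_000)); // 1
    }
}
```

The point of the sketch: with no further acquisitions and no background eviction, the expired connection is never closed, which matches the behavior described in this thread.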

@yuya17
Author

yuya17 commented Sep 24, 2024

@violetagg cc: @iwamoto703
Even if maxIdleTime is set to 10 seconds and evictInBackground is set to 20 seconds, the problem of the connection not being reused remains.
*For reference, the application server used for the test was WAS Liberty, with its keep-alive timeout set to 30 seconds.

Test procedure
Two requests were sent 9 seconds apart to the same destination server and the same API.

Expected behavior
① The connection is reused for the second request (Shot2).
*Criterion for reuse: the IP address and port number in the application log are the same.

② The Shot1 and Shot2 connections are closed once the max-idle-time of 10 seconds has elapsed.

Test result
① The second request (Shot2) does not reuse the connection.
*The port number in the app log is different.

② The log shows that the connection was not closed when the 10-second max-idle-time elapsed; the Shot1 and Shot2 connections were closed after the 20-second evictInBackground interval.

  • Related source location
ConnectionProvider provider = ConnectionProvider.builder("com.kmp3g.dfw.restclient.RestClient")
    .maxIdleTime(Duration.ofMillis(10000))
    .evictInBackground(Duration.ofMillis(20000))
    .build();

@violetagg
Member

@yuya17 @iwamoto703 As I wrote above, you create the connection pool on every request.

<Shot1>
Sep 24 08:51:35 db01-testing-1-ita-tss-async-deployment-595d5865cd-mmcjr ita-tss-async DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT20S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=10000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [gw.apic.k3g-cm-testing-0-priv.staging.kmp3g.com/<unresolved>:443]
<Shot2>
Sep 24 08:51:44 db01-testing-1-ita-tss-async-deployment-595d5865cd-mmcjr ita-tss-async DEBUG Creating a new [com.kmp3g.dfw.restclient.RestClient] client pool [PoolFactory{evictionInterval=PT20S, leasingStrategy=fifo, maxConnections=16, maxIdleTime=10000, maxLifeTime=-1, metricsEnabled=false, pendingAcquireMaxCount=32, pendingAcquireTimeout=45000}] for [gw.apic.k3g-cm-testing-0-priv.staging.kmp3g.com/<unresolved>:443]

Do you create the WebClient for every request?

Please provide a complete example so that we can investigate why the connection pool is created always.

@yuya17
Author

yuya17 commented Sep 26, 2024

@violetagg cc: @iwamoto703

Do you create the WebClient for every request?
Please provide a complete example so that we can investigate why the connection pool is created always.

My understanding is that the implementation does not create a WebClient (and hence a new connection pool) for each request.
I have attached the source code for the RESTful function that uses Netty.
"RestClientImpl.java" under the "main/java/com/kmp3g/dfw/restclient/impl" folder is the source that creates the WebClient; the other sources are supplementary.
dfw-restclient-devlop-reactor-netty-max-idle-time.zip

@violetagg violetagg removed the for/user-attention This issue needs user attention (feedback, rework, etc...) label Sep 27, 2024
@violetagg
Member

@violetagg cc: @iwamoto703

Do you create the WebClient for every request?
Please provide a complete example so that we can investigate why the connection pool is created always.

My understanding is that the implementation does not create a WebClient (and hence a new connection pool) for each request. I have attached the source code for the RESTful function that uses Netty. "RestClientImpl.java" under the "main/java/com/kmp3g/dfw/restclient/impl" folder is the source that creates the WebClient; the other sources are supplementary. dfw-restclient-devlop-reactor-netty-max-idle-time.zip

There are code paths that do not use the webClient field but instead call some createWebClientInternal method. Can you check?

@violetagg violetagg added the for/user-attention This issue needs user attention (feedback, rework, etc...) label Sep 27, 2024
@violetagg
Member

@yuya17 As you have the source and can reproduce it, please try to debug why the connection pool is always created.
Put a breakpoint on:

PoolKey holder = new PoolKey(remoteAddress, config.channelHash());
PoolFactory<T> poolFactory = poolFactory(remoteAddress);
InstrumentedPool<T> pool = MapUtils.computeIfAbsent(channelPools, holder, poolKey -> {
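The snippet above is from Reactor Netty's pooled connection provider, where the pool for a given PoolKey is created only if absent. As a rough illustration of why a cached provider creates the pool once and then reuses it (and why a "Creating a new ... client pool" log on every request suggests a new provider or key each time), here is a minimal sketch in plain Java; the names PoolCache and poolsCreated are invented for illustration:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of the computeIfAbsent caching pattern: the pool for a
// given remote address is created once and reused on later acquisitions.
public class PoolCache {
    final AtomicInteger poolsCreated = new AtomicInteger();
    final ConcurrentMap<String, Object> pools = new ConcurrentHashMap<>();

    Object acquire(String remoteAddress) {
        // The factory lambda runs only when no pool exists for this key yet,
        // which is when the "Creating a new client pool" log would appear.
        return pools.computeIfAbsent(remoteAddress, key -> {
            poolsCreated.incrementAndGet();
            return new Object(); // stands in for an InstrumentedPool
        });
    }

    public static void main(String[] args) {
        PoolCache cache = new PoolCache();
        Object first = cache.acquire("example.com:443");
        Object second = cache.acquire("example.com:443");
        // Same key, same cache instance: the pool is created exactly once.
        System.out.println(first == second);          // true
        System.out.println(cache.poolsCreated.get()); // 1
    }
}
```

If a new PoolCache (i.e. a new ConnectionProvider) were created per request, the factory would run every time, matching the repeated pool-creation log lines in this thread.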


github-actions bot commented Oct 5, 2024

If you would like us to look at this issue, please provide the requested information. If the information is not provided within the next 7 days this issue will be closed.


Closing due to lack of requested feedback. If you would like us to look at this issue, please provide the requested information and we will re-open.

@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Oct 13, 2024
@violetagg violetagg added status/invalid We don't feel this issue is valid and removed type/bug A general bug for/user-attention This issue needs user attention (feedback, rework, etc...) status/need-feedback labels Oct 13, 2024