Configuring Client resources

Client resources are configuration settings for the client related to performance, concurrency, and events. A large part of the Client resources consists of thread pools (EventLoopGroups and an EventExecutorGroup) which build the infrastructure for the connection workers. In general, it is a good idea to reuse instances of ClientResources across multiple clients.

Client resources are stateful and need to be shut down if they are supplied from outside the client.

Creating Client resources

Client resources are required to be immutable. You can create instances using two different patterns:

The create() factory method

By using the create() method on DefaultClientResources you create ClientResources with default settings:

ClientResources res = DefaultClientResources.create();

This approach fits most needs.

Resources builder

You can build instances of DefaultClientResources by using the embedded builder. It is designed to configure the resources to your needs. The builder accepts the configuration in a fluent fashion and then creates the ClientResources at the end:

ClientResources res = DefaultClientResources.builder()
                        .ioThreadPoolSize(4)
                        .computationThreadPoolSize(4)
                        .build();

Using and reusing ClientResources

A RedisClient and RedisClusterClient can be created without passing ClientResources upon creation. The resources are then exclusive to the client and are managed by the client itself. When you call shutdown() on the client instance, the ClientResources are shut down as well.

RedisClient client = RedisClient.create();
...
client.shutdown();

If you require multiple instances of a client or want to provide existing thread infrastructure, you can create a shared ClientResources instance using the factory method or the builder. The shared Client resources can be passed upon client creation:

ClientResources res = DefaultClientResources.create();
RedisClient client = RedisClient.create(res);
RedisClusterClient clusterClient = RedisClusterClient.create(res, seedUris);
...
client.shutdown();
clusterClient.shutdown();
res.shutdown();

Shared ClientResources are never shut down by the client. The same applies to shared EventLoopGroupProviders, which are an abstraction for providing EventLoopGroups.

Why Runtime.getRuntime().availableProcessors() * 3?

Netty requires different EventLoopGroups for NIO (TCP) and for EPoll (Unix Domain Socket) connections, and one additional EventExecutorGroup is used to perform computation tasks. That makes at most three pools, each defaulting to the number of available processors, which is where the factor of three comes from. EventLoopGroups are started lazily to allocate threads on demand.

Shutdown

Every client instance requires a call to shutdown() to release the resources it uses. Clients with dedicated ClientResources (i.e. no ClientResources passed to the constructor/create-method) will shut down their ClientResources on their own.

Client instances using shared ClientResources (i.e. ClientResources passed to the constructor/create-method) won’t shut down the ClientResources on their own. The ClientResources instance needs to be shut down once it is no longer used.

Configuration settings

The basic configuration options are listed below:

I/O Thread Pool Size

Method: ioThreadPoolSize. Default: number of processors.

The number of threads in the I/O thread pools. It defaults to the number of available processors reported by the runtime (which, notoriously, does not always reflect the actual number of processors). Every thread represents an internal event loop where all I/O tasks run. The setting does not reflect the actual number of I/O threads because the client requires separate thread pools for network (NIO) and Unix Domain Socket (EPoll) connections. The minimum is 3 I/O threads; a pool with fewer threads can cause undefined behavior.

Computation Thread Pool Size

Method: computationThreadPoolSize. Default: number of processors.

The number of threads in the computation thread pool. It defaults to the number of available processors reported by the runtime (which, notoriously, does not always reflect the actual number of processors). Every thread represents an internal event loop where all computation tasks run. The minimum is 3 computation threads; a pool with fewer threads can cause undefined behavior.
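
As a sketch of how the two pool sizes above are typically set together, the guard below keeps both pools at the documented minimum of three threads:

int cores = Runtime.getRuntime().availableProcessors();

ClientResources res = DefaultClientResources.builder()
                        .ioThreadPoolSize(Math.max(3, cores))
                        .computationThreadPoolSize(Math.max(3, cores))
                        .build();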

Advanced settings

The advanced options are listed below; they should not be changed unless there is a truly good reason to do so.

Provider for EventLoopGroup

Method: eventLoopGroupProvider. Default: none.

If you want to reuse existing netty infrastructure or need full control over the thread pools, the EventLoopGroupProvider API provides a way to do so. EventLoopGroups are obtained and managed by an EventLoopGroupProvider. A provided EventLoopGroupProvider is not managed by the client and needs to be shut down once you no longer need the resources.
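
A rough sketch of plugging in a provider through the builder. It assumes Lettuce's built-in DefaultEventLoopGroupProvider and its thread-count constructor; substitute your own implementation if you bring existing netty infrastructure:

EventLoopGroupProvider provider = new DefaultEventLoopGroupProvider(4);

ClientResources res = DefaultClientResources.builder()
                        .eventLoopGroupProvider(provider)
                        .build();

RedisClient client = RedisClient.create(res);
...
client.shutdown();
res.shutdown();
// the provided EventLoopGroupProvider is not managed by the client; shut it down yourself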

Provided EventExecutorGroup

Method: eventExecutorGroup. Default: none.

If you want to reuse existing netty infrastructure or need full control over the thread pools, you can provide an existing EventExecutorGroup to the Client resources. A provided EventExecutorGroup is not managed by the client and needs to be shut down once you no longer need the resources.
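
A minimal sketch, assuming an existing netty DefaultEventExecutorGroup that is shared with other parts of the application:

EventExecutorGroup executors = new DefaultEventExecutorGroup(4); // io.netty.util.concurrent

ClientResources res = DefaultClientResources.builder()
                        .eventExecutorGroup(executors)
                        .build();
...
res.shutdown();
executors.shutdownGracefully(); // not managed by Lettuce; shut it down yourself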

Event bus

Method: eventBus. Default: DefaultEventBus.

The event bus system is used to transport events from the client to subscribers. Events are about connection state changes, metrics, and more. Events are published using a RxJava subject and the default implementation drops events on backpressure. Learn more about the Reactive API. You can also publish your own events. If you wish to do so, make sure that your events implement the Event marker interface.
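
A sketch of both directions, subscribing to events and publishing a custom one. MyCustomEvent is a hypothetical class for illustration, and the subscription assumes the eventBus() accessor on the client's ClientResources:

EventBus bus = client.getResources().eventBus();

// subscribe to events emitted by the client (connection state changes, metrics, ...)
bus.get().subscribe(event -> System.out.println("Received event: " + event));

// publish your own event; it only needs to implement the Event marker interface
class MyCustomEvent implements Event {
}
bus.publish(new MyCustomEvent());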

Command latency collector options

Method: commandLatencyCollectorOptions. Default: DefaultCommandLatencyCollectorOptions.

The client can collect latency metrics while dispatching commands. The options allow configuring the percentiles, the level of metrics (per connection or per server), and whether the metrics are cumulative or are reset after they are obtained. Command latency collection is enabled by default and can be disabled by setting commandLatencyCollectorOptions(…) to DefaultCommandLatencyCollectorOptions.disabled(). The latency collector requires LatencyUtils on your class path.
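
A minimal sketch of switching latency collection off entirely via the builder, using the disabled() factory from the description above:

ClientResources res = DefaultClientResources.builder()
                        .commandLatencyCollectorOptions(DefaultCommandLatencyCollectorOptions.disabled())
                        .build();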

Command latency collector

Method: commandLatencyCollector. Default: DefaultCommandLatencyCollector.

The client can collect latency metrics while dispatching commands. Command latency metrics are collected at the connection or server level. Command latency collection is enabled by default and can be disabled by setting commandLatencyCollectorOptions(…) to DefaultCommandLatencyCollectorOptions.disabled().

Latency event publisher options

Method: commandLatencyPublisherOptions. Default: DefaultEventPublisherOptions.

Command latencies can be published using the event bus. Latency events are emitted by default every 10 minutes. Event publishing can be disabled by setting commandLatencyPublisherOptions(…) to DefaultEventPublisherOptions.disabled().
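
As a sketch, publishing can be switched off through the builder while latency collection itself stays enabled; otherwise the latency events appear periodically on the event bus described above:

ClientResources res = DefaultClientResources.builder()
                        .commandLatencyPublisherOptions(DefaultEventPublisherOptions.disabled())
                        .build();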

DNS Resolver

Method: dnsResolver. Default: DnsResolvers.JVM_DEFAULT (or netty if present). Since: 3.5, 4.2.

Configures a DNS resolver to resolve hostnames to a java.net.InetAddress. Defaults to the JVM DNS resolution, which uses blocking hostname resolution and caches lookup results. Users of DNS-based Redis-HA setups (e.g. AWS ElastiCache) might want to configure a different DNS resolver. Lettuce comes with DirContextDnsResolver, which uses Java’s DnsContextFactory to resolve hostnames. DirContextDnsResolver allows using either the system DNS or custom DNS servers without caching of results, so each hostname lookup results in an actual DNS query.

Since 4.4: Defaults to DnsResolvers.UNRESOLVED to use netty’s AddressResolver that resolves DNS names on Bootstrap.connect() (requires netty 4.1)
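
A sketch of switching to the DirContextDnsResolver mentioned above; the no-argument constructor (which uses the system DNS) is an assumption here:

ClientResources res = DefaultClientResources.builder()
                        .dnsResolver(new DirContextDnsResolver()) // system DNS, no caching of lookups
                        .build();

RedisClient client = RedisClient.create(res);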

Reconnect Delay

Method: reconnectDelay. Default: Delay.exponential(). Since: 4.2.

Configures a reconnect delay used to delay reconnect attempts. Defaults to a binary exponential delay with an upper boundary of 30 seconds. See Delay for more delay implementations.
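
For illustration, the delay can be set explicitly through the builder; the default Delay.exponential() is shown here, and any other Delay implementation can be supplied in the same place:

ClientResources res = DefaultClientResources.builder()
                        .reconnectDelay(Delay.exponential()) // binary exponential backoff, capped at 30 seconds
                        .build();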

Netty Customizer

Method: nettyCustomizer. Default: none. Since: 4.4.

Configures a netty customizer to enhance netty components. It allows customization of the Bootstrap after Lettuce has configured it, and of the Channel after all Lettuce handlers have been added. The customizer can be used for custom SSL configuration (this requires the RedisURI to be in plain-text mode, otherwise Lettuce configures SSL itself), for adding custom handlers, or for setting customized Bootstrap options. Misconfiguring the Bootstrap or Channel can cause connection failures or undesired behavior.
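
A sketch of a customizer that adjusts a socket option and leaves the channel untouched. The afterBootstrapInitialized/afterChannelInitialized callback names correspond to the two customization points described above but are stated here as assumptions:

NettyCustomizer customizer = new NettyCustomizer() {

    @Override
    public void afterBootstrapInitialized(Bootstrap bootstrap) {
        // runs after Lettuce configured the Bootstrap; e.g. tune socket options
        bootstrap.option(ChannelOption.SO_KEEPALIVE, true);
    }

    @Override
    public void afterChannelInitialized(Channel channel) {
        // runs after all Lettuce handlers were added; e.g. add custom handlers here
    }
};

ClientResources res = DefaultClientResources.builder()
                        .nettyCustomizer(customizer)
                        .build();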

Tracing

Method: tracing. Default: disabled. Since: 5.1.

Configures a tracing instance to trace Redis calls. Lettuce wraps Brave data models to support tracing in a vendor-agnostic way if Brave is on the class path. A Brave tracing instance can be created using BraveTracing.create(clientTracing), where clientTracing is a new or existing Brave tracing instance.
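
A sketch, assuming Brave is on the class path and a brave.Tracing instance (clientTracing below) has already been configured elsewhere:

ClientResources res = DefaultClientResources.builder()
                        .tracing(BraveTracing.create(clientTracing))
                        .build();

RedisClient client = RedisClient.create(res);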
