
Getting "org.springframework.dao.QueryTimeoutException: Redis command timed out" sporadically in Springboot Cloud Gateway code. #2984

Open
sanketkeskar opened this issue Sep 10, 2024 · 2 comments
Labels
status: waiting-for-feedback (We need additional information before we can continue)

Comments

@sanketkeskar

Bug Report

Current Behavior

Getting "org.springframework.dao.QueryTimeoutException: Redis command timed out" sporadically in Springboot Cloud Gateway code.
2024-09-09 01:22:11.954 DEBUG 3139744 --- [lettuce-eventExecutorLoop-3-2] o.s.c.g.f.ratelimit.RedisRateLimiter     : Error calling rate limiter lua

org.springframework.dao.QueryTimeoutException: Redis command timed out; nested exception is io.lettuce.core.RedisCommandTimeoutException: Command timed out after 10 second(s)
	at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:70) ~[spring-data-redis-2.7.17.jar!/:2.7.17]
	at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41) ~[spring-data-redis-2.7.17.jar!/:2.7.17]
	at org.springframework.data.redis.connection.lettuce.LettuceReactiveRedisConnection.lambda$translateException$0(LettuceReactiveRedisConnection.java:293) ~[spring-data-redis-2.7.17.jar!/:2.7.17]
	at reactor.core.publisher.Flux.lambda$onErrorMap$28(Flux.java:7070) ~[reactor-core-3.4.33.jar!/:3.4.33]
	at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94) ~[reactor-core-3.4.33.jar!/:3.4.33]
	at reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onError(MonoFlatMapMany.java:255) ~[reactor-core-3.4.33.jar!/:3.4.33]
	at io.lettuce.core.RedisPublisher$ImmediateSubscriber.onError(RedisPublisher.java:891) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.RedisPublisher$State.onError(RedisPublisher.java:712) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.RedisPublisher$RedisSubscription.onError(RedisPublisher.java:357) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.RedisPublisher$SubscriptionCommand.onError(RedisPublisher.java:797) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.RedisPublisher$SubscriptionCommand.doOnError(RedisPublisher.java:793) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.protocol.CommandWrapper.completeExceptionally(CommandWrapper.java:128) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.cluster.ClusterCommand.completeExceptionally(ClusterCommand.java:99) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.protocol.CommandExpiryWriter.lambda$potentiallyExpire$0(CommandExpiryWriter.java:175) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.100.Final.jar!/:4.1.100.Final]
	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:153) ~[netty-common-4.1.100.Final.jar!/:4.1.100.Final]
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173) ~[netty-common-4.1.100.Final.jar!/:4.1.100.Final]
	at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-common-4.1.100.Final.jar!/:4.1.100.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[netty-common-4.1.100.Final.jar!/:4.1.100.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.100.Final.jar!/:4.1.100.Final]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.100.Final.jar!/:4.1.100.Final]
	at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
Caused by: io.lettuce.core.RedisCommandTimeoutException: Command timed out after 10 second(s)
	at io.lettuce.core.internal.ExceptionFactory.createTimeoutException(ExceptionFactory.java:59) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	at io.lettuce.core.protocol.CommandExpiryWriter.lambda$potentiallyExpire$0(CommandExpiryWriter.java:176) ~[lettuce-core-6.1.10.RELEASE.jar!/:6.1.10.RELEASE]
	... 8 common frames omitted

2024-09-09 01:22:11.955 DEBUG 3139744 --- [lettuce-eventExecutorLoop-3-2] o.s.c.g.f.ratelimit.RedisRateLimiter     : response: Response{allowed=true, headers={X-RateLimit-Remaining=-1, X-RateLimit-Requested-Tokens=1, X-RateLimit-Burst-Capacity=100, X-RateLimit-Replenish-Rate=50}, tokensRemaining=-1}

Input Code

I have used the following RateLimiterFilter (a custom KeyResolver) in my Spring Cloud Gateway to implement rate limiting:

package com.lti.api.gateway.filter.pre;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;

import reactor.core.publisher.Mono;

@Component
public class RateLimiterFilter implements KeyResolver {

	private static final Logger logger = LoggerFactory.getLogger(RateLimiterFilter.class);

	@Override
	public Mono<String> resolve(ServerWebExchange exchange) {
		logger.info("In resolve --- client_name: NA");
		// Constant key: every request resolves to the same rate-limit bucket.
		return Mono.just("1");
	}
}
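
For reference, the key resolver above is wired into the RequestRateLimiter filter. The snippet below is only a rough sketch of how that wiring looks, not my exact configuration: the route id, path, and backend URI are placeholders, while the replenish rate (50) and burst capacity (100) are taken from the X-RateLimit-* headers in the log above.

import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.cloud.gateway.filter.ratelimit.RedisRateLimiter;
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RateLimitRouteConfig {

	// replenishRate=50, burstCapacity=100 -- values visible in the logged rate-limit headers
	@Bean
	public RedisRateLimiter redisRateLimiter() {
		return new RedisRateLimiter(50, 100);
	}

	@Bean
	public RouteLocator rateLimitedRoutes(RouteLocatorBuilder builder,
			RedisRateLimiter redisRateLimiter, KeyResolver rateLimiterFilter) {
		return builder.routes()
				.route("rate-limited-route", r -> r.path("/api/**") // placeholder path
						.filters(f -> f.requestRateLimiter(c -> c
								.setRateLimiter(redisRateLimiter)
								.setKeyResolver(rateLimiterFilter)))
						.uri("http://backend:8080")) // placeholder URI
				.build();
	}
}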

Expected behavior/code

Environment

  • Lettuce version(s): 6.1.10.RELEASE

Possible Solution

Additional context

I am connecting to AWS ElastiCache for this operation.
[image attached]
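
The command timeout in the stack trace is 10 seconds, which presumably comes from a configured spring.redis.timeout (Lettuce's own default is 60 seconds). One adjustment I am considering, sketched below with illustrative values (the 20-second command timeout and 5-second connect timeout are assumptions, not settings already in my code), is to raise the command timeout and enable TCP keepalive through a LettuceClientConfigurationBuilderCustomizer, since dropped idle connections to ElastiCache are often reported as a cause of sporadic command timeouts:

import java.time.Duration;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.SocketOptions;
import org.springframework.boot.autoconfigure.data.redis.LettuceClientConfigurationBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LettuceClientConfig {

	// Illustrative values only: a longer command timeout plus TCP keepalive on the socket.
	@Bean
	public LettuceClientConfigurationBuilderCustomizer lettuceCustomizer() {
		return builder -> builder
				.commandTimeout(Duration.ofSeconds(20))
				.clientOptions(ClientOptions.builder()
						.socketOptions(SocketOptions.builder()
								.keepAlive(true)
								.connectTimeout(Duration.ofSeconds(5))
								.build())
						.build());
	}
}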

@sanketkeskar
Author

@mp911de
Hi, I have seen multiple issues reported about this same error, but couldn't find a concrete solution in any of them. Could you please help?
How can it be resolved or handled?

@tishun
Collaborator

tishun commented Sep 16, 2024

tishun added the status: waiting-for-feedback label on Sep 16, 2024