commandBuffer performs expensive queueing in channelInactive? #960
Lettuce retains commands on disconnect to re-trigger them on reconnect. We applied some optimizations to command re-queueing; you might want to check out Lettuce 5 as well. If that's what you're talking about, then everything works as designed; feel free to close this ticket. If you mean something different, then please elaborate.
Branch 3.5.x's channelInactive (sketched below) calls commandBuffer.addAll(queue) without checking the queue size. Because there is no size check, commandBuffer can grow very large if channelInactive is called over and over. In branch 4.5.x, by contrast, channelInactive no longer calls commandBuffer.addAll(queue).
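A minimal sketch of that 3.5.x pattern (an approximation reconstructed from the description above, not the verbatim Lettuce source; the surrounding class and Netty plumbing are simplified):

```java
import java.util.ArrayDeque;
import java.util.Deque;

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;

// Approximate sketch (not the verbatim Lettuce 3.5.x CommandHandler): on every
// disconnect, all commands that were sent but not yet answered are moved back
// into the command buffer for replay on reconnect, with no size check.
public class SketchCommandHandler extends ChannelDuplexHandler {

    private final Deque<Object> queue = new ArrayDeque<>();         // sent, awaiting responses
    private final Deque<Object> commandBuffer = new ArrayDeque<>(); // retained for replay

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        commandBuffer.addAll(queue); // unbounded: no check against requestQueueSize
        queue.clear();
        super.channelInactive(ctx);
    }
}
```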
When Lettuce 4 moved the queue rebuild out of channelInactive, doesn't moving commands between queues on reconnect duplicate them and cost extra memory?
Commands are not duplicated; they are retained until they get canceled or executed. If you run out of memory because you have too many queued commands, then you need to solve that issue first. Moving commands between queues requires additional memory, which is a fraction of the already allocated command memory. What is the actual question? What are you trying to solve?
Sorry, I have to correct my question. I have this issue, and I am wondering why...
And when we stop the traffic, the client server keeps doing full GC until it OOMs and cannot recover. It seems the worker thread stops consuming the transportBuffer.
I suggest limiting the request queue via ClientOptions.requestQueueSize.
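For example (a minimal sketch using the Lettuce 5 API from io.lettuce.core; on 3.x the same option lives on com.lambdaworks.redis.ClientOptions, and the limit of 10,000 is just an illustrative value):

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;

public class BoundedQueueExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Reject new commands once 10,000 commands are queued, instead of
        // buffering them without bound while the connection is down.
        client.setOptions(ClientOptions.builder()
                .requestQueueSize(10_000)
                .build());

        // ... connect and issue commands as usual ...
        client.shutdown();
    }
}
```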
We have configured the requestQueueSize. And we also found that we got:

```
java.util.ConcurrentModificationException: null
	at java.util.ArrayDeque.delete(ArrayDeque.java:554) ~[na:1.8.0_172]
	at java.util.ArrayDeque.removeFirstOccurrence(ArrayDeque.java:375) ~[na:1.8.0_172]
	at java.util.ArrayDeque.remove(ArrayDeque.java:752) ~[na:1.8.0_172]
	at com.lambdaworks.redis.protocol.CommandHandler.queueCommand(CommandHandler.java:428) ~[lettuce-3.5.0.Final.jar:na]
	at com.lambdaworks.redis.protocol.CommandHandler.writeSingleCommand(CommandHandler.java:372) ~[lettuce-3.5.0.Final.jar:na]
	at com.lambdaworks.redis.protocol.CommandHandler.write(CommandHandler.java:355) ~[lettuce-3.5.0.Final.jar:na]
```

We know the problem above is solved in 3.5.1, but I want to figure out the reasons with version 3.5.0.Final.
I have some ideas that I'll share with you, and I hope you'll give feedback.
```java
if (commandBuffer.size() + queue.size() >= clientOptions.getRequestQueueSize()) {
    throw new RedisException("Request queue size exceeded: " + clientOptions.getRequestQueueSize()
            + ". Commands are not accepted until the queue size drops.");
}
```

The precondition is that every command eventually gets removed from transportBuffer, which happens in writeSingleCommand (directly for cancelled commands, and inside queueCommand otherwise):

```java
private void writeSingleCommand(ChannelHandlerContext ctx, RedisCommand<K, V, ?> command, ChannelPromise promise)
        throws Exception {

    if (command.isCancelled()) {
        transportBuffer.remove(command);
        return;
    }

    queueCommand(command, promise);
    ctx.write(command, promise);
}
```

But what if queueCommand fails, for example with the ConcurrentModificationException shown above? Then the command is never removed from transportBuffer, and the queue size check below can't work properly, because it only counts commandBuffer and queue, not transportBuffer. With high load and an unstable network, transportBuffer keeps growing:

```java
if (commandBuffer.size() + queue.size() >= clientOptions.getRequestQueueSize()) {
    throw new RedisException("Request queue size exceeded: " + clientOptions.getRequestQueueSize()
            + ". Commands are not accepted until the queue size drops.");
}
```
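If that analysis is right, the gap is easy to see in a hypothetical variant of the check (illustration only, not a proposed Lettuce patch) that also counts transportBuffer:

```java
// Hypothetical variant (illustration only): counting transportBuffer as well
// would bound all three queues, so commands stranded there could no longer
// escape the requestQueueSize limit.
if (commandBuffer.size() + queue.size() + transportBuffer.size() >= clientOptions.getRequestQueueSize()) {
    throw new RedisException("Request queue size exceeded: " + clientOptions.getRequestQueueSize()
            + ". Commands are not accepted until the queue size drops.");
}
```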
Above is my idea about the question below. As for this question, I think it may have some connection with this one: the worker thread stops consuming the transportBuffer. Am I right? Thank you for your help, and I look forward to your feedback.
Lettuce 3.5 has been out of maintenance for quite a long time and is not supported anymore. Given the number of bugs we fixed around queue limitation in Lettuce 4 and 5, it's likely that these bugs still persist in Lettuce 3.5. The actual problem with Lettuce 3.5 command queueing is that operations happen on different threads and the queues are not thread-safe. I strongly advise upgrading to Lettuce 5.
Closing as there's nothing left to do here. If you would like us to look at this issue, please provide the requested information and we will re-open the issue.
Hi all,
In branches below 4.5.x, channelInactive adds all queued commands to the command buffer. If the network is very unstable and QPS/traffic is very high, the command buffer becomes huge, and that object cannot be reclaimed by GC, so the client server is in full GC all the time. I see that you removed the commandBuffer.addAll call from channelInactive. Is it for the reason I described? Thanks for your attention; I look forward to your reply.