
Log a warning when the size of buffers passes the limit of Netty's maxOrder and the allocation moves from pooled to unpooled #492

Closed · andsel opened this issue Feb 5, 2024 · 0 comments · Fixed by #493
andsel commented Feb 5, 2024

In V2Batch there is an accumulation of raw event payloads, which are later lazily decoded into Logstash events.

```java
if (internalBuffer.writableBytes() < size + (2 * SIZE_OF_INT)) {
    internalBuffer.capacity(internalBuffer.capacity() + size + (2 * SIZE_OF_INT));
}
```
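For context, Netty's `PooledByteBufAllocator` serves requests up to its chunk size (`pageSize << maxOrder`, with 8 KiB pages by default) from pooled arenas; anything larger falls back to an unpooled "huge" allocation. A minimal sketch of that threshold follows; the `pageSize`/`maxOrder` values are illustrative, not this plugin's actual configuration:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class ChunkSizeDemo {
    public static void main(String[] args) {
        // pageSize = 8 KiB, maxOrder = 4 -> chunkSize = 8192 << 4 = 128 KiB
        // (illustrative values, not what the plugin configures)
        PooledByteBufAllocator allocator =
                new PooledByteBufAllocator(false, 1, 1, 8192, 4);
        System.out.println("chunkSize = " + allocator.metric().chunkSize());

        ByteBuf pooled = allocator.heapBuffer(64 * 1024);  // <= chunkSize: served from a pooled arena
        ByteBuf huge = allocator.heapBuffer(256 * 1024);   // > chunkSize: falls back to an unpooled allocation
        pooled.release();
        huge.release();
    }
}
```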

This accumulation is one of the points where unpooled Netty buffers can be requested.
Update the implementation of the method to log a warning when the requested size forces an unpooled allocation. The warning should be logged only the first time such an allocation is requested, and it should also print the maxOrder value that would avoid the problem.

```java
void addMessage(int sequenceNumber, ByteBuf buffer, int size) {
    ...
    if (internalBuffer.writableBytes() < size + (2 * SIZE_OF_INT)) {
        int expectedSize = internalBuffer.capacity() + size + (2 * SIZE_OF_INT);
        int idealMaxOrder = computeIdealMaxOrder(expectedSize);
        // warn only once per idealMaxOrder value, so the log is not flooded on every oversized batch
        if (warnNeverLogged(idealMaxOrder)) {
            LOGGER.warn("Got a batch size of {} bytes, while this instance expects batches up to {}, please bump maxOrder to {}.",
                    size, allocator.chunkSize(), idealMaxOrder);
            this.warningsDone.add(idealMaxOrder);
        }

        internalBuffer.capacity(internalBuffer.capacity() + size + (2 * SIZE_OF_INT));
    }
    ...
}
```
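The snippet above leaves `computeIdealMaxOrder` and `warnNeverLogged` unspecified. A possible sketch of those pieces, assuming Netty's default 8 KiB page size and the relation `chunkSize = pageSize << maxOrder` (the `PAGE_SIZE` constant and the helper bodies here are hypothetical, not the merged implementation):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical helpers: find the smallest maxOrder whose chunk size
// (pageSize << maxOrder) can hold the expected batch size.
private static final int PAGE_SIZE = 8192; // Netty's default page size (assumption)

private final Set<Integer> warningsDone = new HashSet<>();

private int computeIdealMaxOrder(int expectedSize) {
    int maxOrder = 0;
    // Netty caps maxOrder (commonly at 14, i.e. 128 MiB chunks with 8 KiB pages)
    while ((PAGE_SIZE << maxOrder) < expectedSize && maxOrder < 14) {
        maxOrder++;
    }
    return maxOrder;
}

private boolean warnNeverLogged(int idealMaxOrder) {
    return !warningsDone.contains(idealMaxOrder);
}
```

Operators who see the warning can then raise the limit, e.g. through Netty's `io.netty.allocator.maxOrder` system property for the default allocator, so that larger batches stay within a pooled chunk.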