
Logs a warn message on max order overshoot #493

Merged

Conversation

Contributor

@andsel andsel commented Feb 13, 2024

Release notes

Added logging of the suggested Netty maxOrder setting when a big payload triggers an unpooled allocation.

What does this PR do?

Prints a log line suggesting the Netty maxOrder needed to accommodate a buffer whose size exceeds the current chunk size and would therefore trigger an unpooled allocation.
The log is printed just once across all plugin instances for each required maxOrder. So if plugin instance A hits the 4 MB limit (i.e. maxOrder 9) with a desired payload size of 7 MB, it reports a line suggesting maxOrder 10. After that line is printed, no other plugin instance reports it again, unless it needs an even bigger maxOrder that has not yet been reported.
This PR also introduces a test spy of the Logger class to test the behaviour.
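
For context on the numbers above: Netty's pooled allocator hands out buffers from chunks of pageSize << maxOrder bytes, so with the default 8 KiB page size maxOrder 9 gives 4 MB chunks and maxOrder 10 gives 8 MB. The snippet below is only a minimal sketch of the "warn once per maxOrder" idea, not the code added by this PR; the class name, method name, and shared AtomicInteger are illustrative.

// Minimal sketch of the "warn once per maxOrder" idea; names are illustrative,
// this is not the code added by the PR.
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.logging.log4j.Logger;

final class MaxOrderWarning {
    private static final int PAGE_SIZE = 8192; // Netty's default page size
    // Highest maxOrder already reported by any plugin instance in this JVM.
    private static final AtomicInteger HIGHEST_REPORTED = new AtomicInteger(-1);

    // Warns only the first time a given (or higher) maxOrder requirement is seen.
    static void warnIfBigger(Logger logger, long batchSize, long chunkSize) {
        if (batchSize <= chunkSize) {
            return; // fits in a pooled chunk, nothing to report
        }
        // Smallest maxOrder whose chunk (PAGE_SIZE << maxOrder) can hold batchSize.
        int required = 0;
        while (((long) PAGE_SIZE << required) < batchSize) {
            required++;
        }
        int previous = HIGHEST_REPORTED.get();
        while (required > previous) {
            if (HIGHEST_REPORTED.compareAndSet(previous, required)) {
                logger.warn("Got a batch size of {} bytes, while this instance expects batches up to {}, "
                        + "please bump maxOrder to {}.", batchSize, chunkSize, required);
                return;
            }
            previous = HIGHEST_REPORTED.get(); // another instance raced us; re-check
        }
    }
}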

Why is it important/What is the impact to the user?

Helps the user set the right maxOrder for their message flow and avoid slow unpooled allocations.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • [ ] I have made corresponding changes to the documentation
  • [ ] I have made corresponding change to the default configuration files (and/or docker env variables)
  • I have added tests that prove my fix is effective or that my feature works

Author's Checklist

  • add a test verifying that once maxOrder n has been notified, no notification is printed for any maxOrder < n
  • real test run with Logstash

How to test this PR locally

It needs the test tool in the ingest_all_hands_ams branch, contained in PR elastic/logstash#15151.

  1. checkout the Logstash branch from elastic/logstash#15151 (Create a benchmarking client to test TCP and Beats input problems with Netty direct memory)
  2. checkout this PR branch
  3. configure Logstash to use the checked-out Beats plugin, after vendoring it (./gradlew vendor):
# in Gemfile add:
gem "logstash-input-beats", :path => "/Users/andrea/workspace/logstash_plugins/logstash-input-beats"
# install it
bin/logstash-plugin install --no-verify
  4. optionally configure logging; in config/log4j2.properties add:
logger.beats_input.name = org.logstash.beats
logger.beats_input.level = info
  5. run Logstash with a pipeline like:
input {
  beats {
    port => 3333
  }
}
output {
  sink {}
}  
  6. run the benchmark tool to create payloads > 4 MB; from the ingest_all_hands_ams branch run:
bundle exec ruby -J-Xmx16g -J-XX:-MaxFDLimit benchmark_client.rb --test=beats --msg_sizes=2500 -a yes
  7. verify in the Logstash logs that it reports the preferred maxOrder of 10, like:
[2024-02-14T14:18:07,863][WARN ][org.logstash.beats.V2Batch][beats_input][32f207bacda794ed2bf54ea29a1645a3dbb680398856e5cabef16bde58104d8d][defaultEventExecutorGroup-4-5]
 Got a batch size of 4194342 bytes, while this instance expects batches up to 4194304, please bump maxOrder to 10.
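
Note: if this warning shows up in a real deployment, the chunk size can usually be raised through Netty's io.netty.allocator.maxOrder system property, for example by adding -Dio.netty.allocator.maxOrder=10 to Logstash's config/jvm.options (with the default 8 KiB page size this yields 8 MB chunks); check the Logstash documentation for your version before relying on this.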

Related issues

Use cases

Screenshots

Logs

…ded test case to avoid notify a maxOrder minor than last max order notified
@andsel andsel changed the title Feature/log warn on max order overshoot Logs a warn message on max order overshoot Feb 14, 2024
@andsel andsel marked this pull request as ready for review February 14, 2024 13:30
@jsvd jsvd self-requested a review February 14, 2024 13:37
Member

@jsvd jsvd left a comment


LGTM once CI is green

@andsel andsel merged commit bc72add into logstash-plugins:main Feb 22, 2024
2 checks passed
3 participants