
Skip add_kubernetes_metadata processor if kubernetes metadata are already there #27689

Conversation

@MichaelKatsoulis (Contributor) commented Sep 1, 2021

What does this PR do?

This PR prevents the add_kubernetes_metadata processor from running in cases where Kubernetes metadata is already present in an event.

Why is it important?

This change prevents metadata from being overridden by the processor.
Also, in cases where the processor is run by elastic-agent (it is enabled by default), it fails because the matchers are not properly configured (see #27216).

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

How to test this PR locally

  1. Run Filebeat on Kubernetes with the kubernetes autodiscover provider enabled ([doc](https://www.elastic.co/guide/en/beats/filebeat/master/configuration-autodiscover.html); the provider adds the metadata) and with the add_kubernetes_metadata processor enabled; see the config sketch after this list.
    Check in the logs that the processor is being skipped.

  2. Run Filebeat on Kubernetes with an input of type container (doc) and with the add_kubernetes_metadata processor enabled.
    Check in the logs that the processor starts and adds the metadata.
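
A minimal filebeat.yml sketch for the two scenarios above, based on the linked docs; the `${NODE_NAME}` variable and the log paths are illustrative assumptions, not configuration taken from this PR:

```yaml
# Scenario 1: the kubernetes autodiscover provider already enriches events
# with Kubernetes metadata, so add_kubernetes_metadata should be skipped.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
```

```yaml
# Scenario 2: a plain container input adds no Kubernetes metadata,
# so add_kubernetes_metadata should run and enrich the events itself.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
```

In both cases, enable debug logging (for example, `logging.level: debug` in filebeat.yml) so the processor's "Skipping add_kubernetes_metadata processor..." debug message is visible when the skip path fires.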

Related issues

  • #27216 — [Elastic Agent] Error extracting container id in kubernetes

@MichaelKatsoulis added the enhancement, Team:Integrations (Label for the Integrations team), and backport-v7.16.0 (Automated backport with mergify) labels on Sep 1, 2021
@elasticmachine (Collaborator) commented:

Pinging @elastic/integrations (Team:Integrations)

@elasticmachine (Collaborator) commented Sep 1, 2021

💚 Build Succeeded



Build stats

  • Start Time: 2021-09-03T06:52:59.446+0000

  • Duration: 147 min 38 sec

  • Commit: 3955e71

Test stats 🧪

  • Failed: 0

  • Passed: 53773

  • Skipped: 5325

  • Total: 59098

Trends 🧪 (charts of build times and test counts omitted)

💚 Flaky test report

Tests succeeded.


@ChrsMark (Member) left a comment:

It looks good. I left two minor comments.

Also, consider adding a test if possible.

```diff
@@ -251,6 +259,10 @@ func (k *kubernetesAnnotator) Run(event *beat.Event) (*beat.Event, error) {
 	if !k.kubernetesAvailable {
 		return event, nil
 	}
+	if kubernetesMetadataExist(event) {
+		k.log.Debug("Skipping add_kubernetes_metadata processor as kubernetes metadata already exist")
+		return event, nil
+	}
```
Member (reviewer):

This will report the log line for every single event, which means that even at debug level it could become quite overwhelming.

Contributor (author):

Only if the user enables debug mode. This Run method contains other debug messages as well. Do you think we shouldn't log this at all?

Member (reviewer):

OK, since we already have another debug message in the main flow, we can keep this one too. We can revisit the logging in this method in the future if it becomes noisy, but for now it can stay.

Contributor (author):

Also, we log each published event in processing/processors.go:203, so honestly, one extra line saying that the processor is skipped is nothing compared to that.

libbeat/processors/add_kubernetes_metadata/kubernetes.go — outdated review thread (resolved)
@MichaelKatsoulis merged commit 4d068b4 into elastic:master on Sep 3, 2021
mergify bot pushed a commit referencing this pull request on Sep 3, 2021:

Skip add_kubernetes_metadata processor if kubernetes metadata are already there (#27689)

(cherry picked from commit 4d068b4)
MichaelKatsoulis added a commit referencing this pull request on Sep 6, 2021:

Skip add_kubernetes_metadata processor if kubernetes metadata are already there (#27689) (#27738)

(cherry picked from commit 4d068b4)

Co-authored-by: Michael Katsoulis <[email protected]>
Icedroid pushed a commit to Icedroid/beats referencing this pull request on Nov 1, 2021:

Skip add_kubernetes_metadata processor if kubernetes metadata are already there (elastic#27689)
andresrc added two commits to andresrc/beats and two commits to this repository referencing this pull request on Dec 13, 2021.
Labels: backport-v7.16.0 (Automated backport with mergify), enhancement, Team:Integrations (Label for the Integrations team)
Projects: none yet
Development

Successfully merging this pull request may close these issues:

  • [Elastic Agent] Error extracting container id in kubernetes