
Add cluster level add_kubernetes_metadata support for centralized enrichment #24621

Merged: 1 commit merged into elastic:master on Sep 28, 2021

Conversation

@vjsamuel commented Mar 18, 2021

Enhancement

What does this PR do?

Given that we now have the ability to run autodiscover at cluster scope, we should also be able to do metadata enrichment at cluster scope. This PR adds support for that; see the configuration sketch below.
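
For context, cluster-scope autodiscover can already be configured along the following lines, and this PR brings add_kubernetes_metadata to parity with it. This is an illustrative sketch: only scope: cluster is the setting at issue here, and the template and log paths are standard examples rather than anything required by this PR.

  filebeat.autodiscover:
    providers:
      - type: kubernetes
        # Watch pods across the whole cluster rather than only those
        # scheduled on the local node.
        scope: cluster
        templates:
          - config:
              - type: container
                paths:
                  - /var/log/containers/*-${data.kubernetes.container.id}.log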

Why is it important?

Feature parity: autodiscover already supports cluster scope, so add_kubernetes_metadata should as well.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Author's Checklist

How to test this PR locally

Add scope: cluster to an existing add_kubernetes_metadata configuration; the processor should then sync pods from the entire cluster rather than only those on the local node, as in the sketch below.
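
A minimal processor configuration for such a test might look like the following sketch. Everything other than scope: cluster is illustrative: kube_config is a standard add_kubernetes_metadata option shown here with an assumed path for running outside the cluster, and it can be omitted for in-cluster deployments.

  processors:
    - add_kubernetes_metadata:
        # New in this PR: watch pods across the entire cluster instead of
        # only those on the local node (the default node scope).
        scope: cluster
        # Assumed example value: kubeconfig for running outside the cluster;
        # omit it to fall back to in-cluster configuration.
        kube_config: ~/.kube/config

With this in place, events that match pods anywhere in the cluster should be enriched with the corresponding Kubernetes metadata.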

Related issues

Use cases

@botelastic bot added the needs_team label (indicates that the issue/PR needs a Team:* label) Mar 18, 2021
@elasticmachine commented Mar 18, 2021

💚 Build Succeeded


Build stats

  • Start Time: 2021-09-27T19:04:04.895+0000

  • Duration: 141 min 14 sec

  • Commit: 1b6225c

Test stats 🧪

Test Results
  • Failed: 0
  • Passed: 54087
  • Skipped: 5328
  • Total: 59415

💚 Flaky test report

Tests succeeded.

🤖 GitHub comments

To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

  • /package : Generate the packages and run the E2E tests.

  • /beats-tester : Run the installation tests with beats-tester.

@andresrc added the Team:Integrations label (label for the Integrations team) Mar 18, 2021
@elasticmachine

Pinging @elastic/integrations (Team:Integrations)

@botelastic bot removed the needs_team label (indicates that the issue/PR needs a Team:* label) Mar 18, 2021
@ChrsMark self-assigned this Mar 22, 2021
@ChrsMark added the v7.13.0 and needs_backport (PR is waiting to be backported to other branches) labels Mar 22, 2021
@ChrsMark

/test

@ChrsMark left a comment

lgtm, thank you @vjsamuel!

@ChrsMark

@vjsamuel would you mind rebasing this one onto the latest master, please?

@ChrsMark

/test

@ChrsMark

/test

@ChrsMark

The CI failure is unrelated to this change (there is a known issue with the Windows workers), so I will proceed and merge this.

@@ -590,6 +590,18 @@ https://github.com/elastic/beats/compare/v7.0.0-alpha2...master[Check the HEAD d
- Added "add_network_direction" processor for determining perimeter-based network direction. {pull}23076[23076]
- Added new `rate_limit` processor for enforcing rate limits on event throughput. {pull}22883[22883]
- Allow node/namespace metadata to be disabled on kubernetes metagen and ensure add_kubernetes_metadata honors host {pull}23012[23012]
- Honor kube event resysncs to handle missed watch events {pull}22668[22668]

ah @vjsamuel, it seems that the rebase messed up the changelog, could you please give it some care?

@mergify

mergify bot commented Apr 7, 2021

This pull request is now in conflict. Could you fix it? 🙏
To fix up this pull request, you can check it out locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/

git fetch upstream
git checkout -b cluster_metadataa upstream/cluster_metadataa
git merge upstream/master
git push upstream cluster_metadataa

@mergify

mergify bot commented Sep 22, 2021

This pull request does not have a backport label. Could you fix it @vjsamuel? 🙏
To fix up this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-v\d.\d.\d is the label to automatically backport to the 7.\d branch (\d is a digit, e.g. backport-v7.16.0)

NOTE: backport-skip has been added to this pull request.

@mergify bot added the backport-skip label (skip notification from the automated backport with mergify) Sep 22, 2021
@ChrsMark left a comment

lgtm

@ChrsMark added the backport-v7.16.0 label (automated backport with mergify) and removed the v7.13.0 label Sep 28, 2021
@ChrsMark merged commit fc964d8 into elastic:master Sep 28, 2021
mergify bot pushed a commit that referenced this pull request Sep 28, 2021
ChrsMark pushed a commit that referenced this pull request Sep 28, 2021
Add cluster level add_kubernetes_metadata support for centralized enrichment (#24621) (#28147)

(cherry picked from commit fc964d8)

Co-authored-by: Vijay Samuel <[email protected]>
Icedroid pushed a commit to Icedroid/beats that referenced this pull request Nov 1, 2021