[filebeat] add_kubernetes_metadata processor stopped working since v7.16 (under a specific condition) #31171

gpothier opened this issue Apr 6, 2022 · 5 comments
gpothier commented Apr 6, 2022

Since v7.16, the Kubernetes metadata fields are no longer added if a field named kubernetes.cluster.name (or, I suppose, any field that starts with kubernetes) is statically added to all events via fields with fields_under_root in the config. This is presumably caused by this PR: #27689.
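
For illustration, the triggering combination is just this part of the config (an extract from the full config further down):

      fields_under_root: true
      fields:
        # Any pre-existing kubernetes.* field on the event makes the processor skip enrichment entirely.
        kubernetes.cluster.name: "${KUBERNETES_CLUSTER_NAME}"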

I understand the rationale of the PR, but I think there is room for improvement:

  • First and foremost, nothing appears in the log, which makes the issue really difficult to troubleshoot. It would be great if a warning were logged once, the first time the processor decides to skip adding the metadata (as far as I understand, with the logging level set to debug the skip is logged for each event, but one warning at the normal logging level would be nice).

  • Second, in this particular case, add_kubernetes_metadata decided not to add the metadata even though it would not have output the kubernetes.cluster.name field anyway (which is why I add it statically in the first place). Maybe the decision to add the metadata fields should be taken field by field rather than for the whole document, i.e. fields that are already present in the document are not overwritten, while fields that are not present are added. Maybe this could be configurable, e.g. with an option like overwrite_existing_metadata: 'always' | 'never' | 'merge', with the default being never (the current behaviour); see the sketch below.
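
A rough sketch of what that hypothetical option could look like in the processor config (overwrite_existing_metadata does not exist in any current release; the name and values are only the proposal above):

      processors:
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          # Hypothetical setting from the proposal above; not an actual filebeat option.
          overwrite_existing_metadata: merge
          matchers:
          - logs_path:
              logs_path: "/var/log/containers/"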

Here is the filebeat.yml config, just in case:

      filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

      output.elasticsearch:
        protocol: https

      fields_under_root: true
      fields:
        kubernetes.cluster.name: "${KUBERNETES_CLUSTER_NAME}"
        cloud.provider: "o3"
        cloud.availability_zone: "o3"

      processors:
        - add_host_metadata:

      cloud.id: "${ELASTIC_CLOUD_ID}"
      cloud.auth: "${ELASTIC_CLOUD_AUTH}"
ryan-dyer-sp commented

Please fix this. We updated from 7.9.1 to 8.2.2 only to find that all of our Kubernetes metadata had stopped working. After enabling debug logging and not seeing anything wrong, and going through the breaking changes in the release notes without finding anything, I finally decided to check the issues, and here we are.

This is a breaking change which should have been mentioned as such in the release notes, not just as a bug fix. What bug is this fixing? It's not mentioned in the PR. This behavior does not appear to be documented anywhere on this page: https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html

ryan-dyer-sp commented Jun 9, 2022

For those who also stumble across this issue, here is a workaround: remove the kubernetes.* fields from the fields object and add an add_fields processor to your processors.

      - add_fields:
          # We use the add_fields processor instead of the fields object as add_kubernetes_metadata does not work if it finds any existing kubernetes.* fields on the event.
          # https://github.com/elastic/beats/issues/31171
          target: kubernetes
          fields:
            cluster: <cluster> 

I don't know whether you can set sub-fields (cluster.name) this way or not.
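
If nested keys are needed, a nested map under fields might work, assuming add_fields expands nested YAML maps (an untested sketch, not verified against any particular version):

      - add_fields:
          target: kubernetes
          fields:
            cluster:
              # Should end up as kubernetes.cluster.name on the event.
              name: <cluster-name>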

botelastic bot commented Jun 9, 2023

Hi!
We just realized that we haven't looked into this issue in a while. We're sorry!

We're labeling this issue as Stale to make it hit our filters and make sure we get back to it as soon as possible. In the meantime, it'd be extremely helpful if you could take a look at it as well and confirm its relevance. A simple comment with a nice emoji will be enough :+1:.
Thank you for your contribution!

m-standfuss commented Jul 18, 2023

We just spent hours looking for logs to troubleshoot an issue, only to realize that our search parameters included a k8s field that was no longer being populated after our upgrade to 8.8. This is a bad one for us.

100% agree with @ryan-dyer-sp's sentiment:

This is a breaking change which should have been mentioned as such in the release notes, not just as a bug fix. What bug is this fixing? It's not mentioned in the PR. This behavior does not appear to be documented anywhere on this page: elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html

qaiserali commented

I'm experiencing the same issue with filebeat version 8.14.1. Any idea how to resolve it and add k8s metadata using 'add_kubernetes_metadata'? According to the documentation available at https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html#k8s-beat-role-based-access-control-for-beats, it should work.
