[filebeat] add_kubernetes_metadata processor stopped working since v7.16 (under a specific condition) #31171
Please fix this. We updated from 7.9.1 to 8.2.2 only to find that all of our Kubernetes metadata had stopped working. After enabling debug logging and not seeing anything wrong, and going through the breaking changes in the release notes without finding anything, I finally decided to check the issues, and here we are. This is a breaking change and should have been called out as such in the release notes, not just as a bug fix. What bug is this fixing? It's not mentioned in the PR. This behavior does not appear to be documented anywhere on this page: https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html
For those who also stumble across this issue, a workaround: remove the `kubernetes.*` fields from the `fields` object and add them with an `add_fields` processor instead.
I don't know whether you can set nested subfields (e.g. `cluster.name`) this way or not.
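A sketch of that workaround (the cluster name and field layout here are illustrative, not from the reporter's config; verify against your own filebeat.yml):

```yaml
# Instead of setting kubernetes.* statically via fields + fields_under_root
# (which causes add_kubernetes_metadata to skip enrichment since v7.16),
# attach the static field with an add_fields processor.
processors:
  - add_kubernetes_metadata: ~
  - add_fields:
      target: kubernetes.cluster   # illustrative target; any kubernetes.* subkey
      fields:
        name: my-cluster           # hypothetical cluster name
```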
We just spent hours searching for logs to troubleshoot an issue, only to realize that our search parameters included a k8s field that was no longer being populated after our upgrade to 8.8. This is a bad one for us. 100% agree with @ryan-dyer-sp's sentiment.
I'm experiencing the same issue with Filebeat 8.14.1. Any idea how to resolve it and add k8s metadata using `add_kubernetes_metadata`? According to the documentation available at https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html#k8s-beat-role-based-access-control-for-beats, it should work.
The Kubernetes metadata fields are no longer added since v7.16 if a field named `kubernetes.cluster.name` (or, I suppose, any field that starts with `kubernetes`) is statically added to all events with `fields` and `fields_under_root` in the config. It is presumably caused by this PR: #27689.

I understand the rationale of the PR, but I think there is room for improvement:

First and foremost, nothing appears in the log, which makes the issue really difficult to troubleshoot. It would be great if a warning were logged once, the first time the processor decides to skip adding the metadata (as far as I understand, if the logging level is set to `debug`, the skip is logged for each event, but a single warning at the normal logging level would be nice).

Second, in this particular case, `add_kubernetes_metadata` decided not to add the metadata even though it would not have output the `kubernetes.cluster.name` field anyway (which is why I add it statically in the first place). Maybe the decision to add the metadata fields should be made field by field rather than for the whole document, i.e. fields that are already present in the document are not overwritten, but fields that are not present are added. Maybe this could be configurable, e.g. with an option like `overwrite_existing_metadata: 'always' | 'never' | 'merge'`, the default being `never` (the current behaviour).

Here is the filebeat.yml config, just in case: