
[Metricbeat][Kubernetes] Namespace labels missing on kube-state-metrics and container metrics #33108

Closed
bvader opened this issue Sep 17, 2022 · 10 comments
Labels: bug, Team:Cloudnative-Monitoring

Comments

@bvader (Contributor) commented Sep 17, 2022

FYI: @gizas

Use case: As a Kubernetes platform owner/operator, I want to be able to define labels at the namespace level (such as org, environment, product line, etc.) so that I can route, filter, and apply workflows to the kube-state-metrics and container metrics based on these namespace labels.
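
As a minimal sketch of such routing (assuming the namespace labels are mapped to a field like kubernetes.namespace_labels.*, which varies by version; verify how your Metricbeat version maps namespace labels onto events before relying on it):

processors:
  # Sketch: keep only events from namespaces labeled ecosystem=prod.
  # kubernetes.namespace_labels.ecosystem is an assumed field path.
  - drop_event:
      when:
        not:
          equals:
            kubernetes.namespace_labels.ecosystem: "prod"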

The following kube-state-metrics metricsets should support namespace labels.

The request is for a 7.17.x backport and the 8.x release of Metricbeat.

Currently Supported

  • state_pod
  • state_container
  • state_service

Not Currently Supported - But Required

  • state_deployment
  • state_replicaset

EDIT: removed state_node, per @MichaelKatsoulis's explanation below.

Whether container metrics are supported depends on the version in use; see the matrix below.

Matrix of what the tests indicate is currently supported:

Metricbeat Version | KSM Metricsets Supported | Container Metrics Supported | Requires Matching Code
7.15.2 | state_pod, state_container | pod, container | Yes
7.16.3 | none | none | Does not matter
7.17.6 | none | none | Does not matter
8.0.1 | none | none | Does not matter
8.1.3 | state_pod, state_container, state_service | pod, container | No
8.2.3 | state_pod, state_container, state_service | pod, container | No
8.4.1 | state_pod, state_container, state_service | pod, container | No

Steps to reproduce

I used GKE and the Google Online Boutique sample.

  1. Deploy a microservices app such as the Online Boutique.

  2. Create namespaces with custom labels, for example:

$ cat product-catalog.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: product-catalog
  labels:
    ecosystem: prod
    org: product 
  3. Update the kube-state-metrics manifest to include the custom label allowlist and deploy.
    Note: tested with kube-state-metrics 2.4.2 and 2.6.0.

https://github.com/kubernetes/kube-state-metrics/blob/v2.4.2/examples/standard/deployment.yaml#L24

kube-state-metrics/examples/standard/deployment.yml 
    spec:
      automountServiceAccountToken: true
      containers:
      - image: k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.4.2
        args: ['--metric-labels-allowlist=namespaces=[org,ecosystem]']

  4. Deploy Metricbeat with the standard kube-state-metrics configuration in 7.x / 8.x.

  5. Go to Discover and observe which kube-state-metrics events carry the custom namespace labels and which do not.

Not Currently Supported - But Required

  • state_deployment
  • state_replicaset

[Screenshot: 2022-09-16 7:17 PM]

  6. Note: if we exec into a Metricbeat container and curl the kube-state-metrics endpoint directly, we see that the labels are there, so it is unclear why they are not showing up on the state_deployment and state_replicaset metricsets:
root@gke-stephenb-gke-istio-t-default-pool-31633e50-i1y9:/usr/share/metricbeat# curl http://kube-state-metrics:8080/metrics | grep label_ecosystem
kube_namespace_labels{namespace="payment",label_ecosystem="prod",label_org="finance"} 1
kube_namespace_labels{namespace="product-catalog",label_ecosystem="prod",label_org="product"} 1
root@gke-stephenb-gke-istio-t-default-pool-31633e50-i1y9:/usr/share/metricbeat# 

It does appear that in 8.x the custom matcher code for container metrics is not needed; looking at the matrix above, it is unclear whether it is still needed in 7.17.
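
For comparison, in 8.1+ the plain kubelet module configuration (with no add_kubernetes_metadata processor) appears sufficient for pod and container metrics to carry the namespace labels, per the matrix above. A minimal sketch, assuming defaults:

- module: kubernetes
  metricsets:
    - pod
    - container
  period: 10s
  host: ${NODE_NAME}
  hosts: ["https://${NODE_NAME}:10250"]
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  # In-cluster kubelet access; relax TLS verification as in the manifests below.
  ssl.verification_mode: "none"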

For reference, here is the custom matcher code we had to use in 7.15.x to get namespace labels onto the container metrics:

    processors:
      - add_cloud_metadata:
      # Custom Matcher Code
      - add_kubernetes_metadata:
          host: ${NODE_NAME}
          default_indexers.enabled: false
          default_matchers.enabled: false
          annotations.dedot: false
          indexers:
            - pod_uid:
          matchers:
            - fields:
                lookup_fields: ['kubernetes.pod.uid']
          # Why isn't just this enough?
          add_resource_metadata:
            namespace:
              enabled: true
              include_labels: ["org","ecosystem","env","region","costCenter"]
              
@bvader added the bug label Sep 17, 2022
@botelastic bot added the needs_team label Sep 17, 2022
@gizas added the Team:Cloudnative-Monitoring label Sep 19, 2022
@botelastic bot removed the needs_team label Sep 19, 2022
@MichaelKatsoulis (Contributor) commented

Hi @bvader.

Thank you for the detailed description.
When using Metricbeat, we don't recommend the add_kubernetes_metadata processor for enriching events with Kubernetes metadata.
The Kubernetes metricsets themselves have enrichers that add Kubernetes metadata to the events, so the processor is not needed.
Kubernetes metadata is also added when the kubernetes autodiscover provider is used.
I suspect that your configuration is not using the autodiscover provider.

Regarding the inconsistencies between the versions, there is an explanation.

  1. Until version 8.1, the enricher that is part of the metricsets was not configured to collect namespace labels and annotations.
  2. In versions before 7.16, you see the namespace labels because they are added by the add_kubernetes_metadata processor.
  3. In the 7.16 release, we introduced a feature that makes the add_kubernetes_metadata processor skip adding metadata when the event already includes Kubernetes metadata. That metadata would come from either the kubernetes autodiscover provider or the metricset-level enrichers. So the processor does nothing.
  4. So in versions 7.16 through 8.0 you do not see the labels, because
    a. they are not added by the processor
    b. the metricset-level enricher is not yet configured to collect them
    c. you are not using the autodiscover provider, which collects them by default
  5. In 8.1 and later, as mentioned, the metricset-level enricher collects namespace labels by default. Hence you can see them in some (more on this below) metricsets' events.

So the best way to collect the labels before 8.1 would be to use the kubernetes autodiscover provider.
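
A minimal sketch of that approach, assuming the provider-level add_resource_metadata option (the label names are placeholders; a full manifest appears later in this thread):

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      # Provider-level metadata enrichment; include_labels values are placeholders.
      add_resource_metadata:
        namespace:
          enabled: true
          include_labels: ["org", "ecosystem"]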

Now, regarding why only some metricsets include the label metadata: in both the kubernetes provider and the metricset enricher, only pod, container, and service are coded to add the namespace metadata. The rest of the resources are not.

So this is not a bug, but rather a feature request. I can see, and agree, that this request makes sense for a DevOps user. It is something the @elastic/obs-cloudnative-monitoring team could take on as an enhancement.
@gizas, if you agree we can prioritise this accordingly.

@bvader (Contributor, Author) commented Sep 19, 2022

Hi @MichaelKatsoulis

Thanks for the detailed response; it helps greatly.

So, if I understand correctly, in 8.1+:

A) state_pod, state_container, state_service are supported
B) state_node seems to be supported but I did not see it in your list.
C) state_deployment, state_replicaset would be an enhancement request
D) the container and pod labels are picked up by the metricset enricher

I am mostly following, except for the item below, as we would really like to get this working in 7.17.

4. So in versions 7.16 through 8.0 you do not see the labels, because
b. the metricset-level enricher is not yet configured to collect them

Are the metricset-level enrichers something the end user configures? If so, can you point us to the documentation or a how-to? Or are you saying that in 7.17 the labels are not supported at all?

4. So in versions 7.16 through 8.0 you do not see the labels, because
c. you are not using the autodiscover provider, which collects them by default

I am currently testing Metricbeat 7.17.6 with autodiscover (which I think I have had configured all along).

Here is an example I am currently running; the autodiscover provider is configured, and it is basically straight from the reference example, so I am unclear why we are not getting the labels.

Perhaps I am unclear on autodiscover / using it wrong.

If so, can you provide an example? This is all pretty vanilla: simple GKE / kube-state-metrics / container and pod metrics.

Current config.

metricbeat-kubernetes-7.17.6.yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          scope: cluster
          node: ${NODE_NAME}
          # In large Kubernetes clusters consider setting unique to false
          # to avoid using the leader election strategy and
          # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
          unique: true
          templates:
            - config:
                - module: kubernetes
                  hosts: ["kube-state-metrics:8080"]
                  period: 10s
                  add_metadata: true
                  metricsets:
                    - state_node
                    - state_deployment
                    - state_daemonset
                    - state_replicaset
                    - state_pod
                    - state_container
                    - state_job
                    - state_cronjob
                    - state_resourcequota
                    - state_statefulset
                    - state_service
                - module: kubernetes
                  metricsets:
                    - apiserver
                  hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
                  ssl.certificate_authorities:
                    - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                  period: 30s
                # Uncomment this to get k8s events:
                #- module: kubernetes
                #  metricsets:
                #    - event
        # To enable hints based autodiscover uncomment this:
        #- type: kubernetes
        #  node: ${NODE_NAME}
        #  hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${NODE_NAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
      # If there is a CA bundle that contains the issuer of the certificate used in the Kubelet API,
      # remove ssl.verification_mode entry and use the CA, for instance:
      #ssl.certificate_authorities:
        #- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    # Currently `proxy` metricset is not supported on Openshift, comment out section
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.17.6
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          value: changeme
        - name: ELASTIC_CLOUD_ID
          value: "test-test:asdfkjhasdlfkjhasdflkjashdf"
        - name: ELASTIC_CLOUD_AUTH
          value: "elastic:aksjfhasldkfjhasldfkjahsdflkjh"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: data
          mountPath: /usr/share/metricbeat/data
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: config
        configMap:
          defaultMode: 0640
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0640
          name: metricbeat-daemonset-modules
      - name: data
        hostPath:
          # When metricbeat runs as non-root user, this directory needs to be writable by group (g+w)
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metricbeat
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: metricbeat
    namespace: kube-system
roleRef:
  kind: Role
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metricbeat-kubeadm-config
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: metricbeat
    namespace: kube-system
roleRef:
  kind: Role
  name: metricbeat-kubeadm-config
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  - services
  verbs: ["get", "list", "watch"]
# Enable this rule only if planing to use Kubernetes keystore
#- apiGroups: [""]
#  resources:
#  - secrets
#  verbs: ["get"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["batch"]
  resources:
  - jobs
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
- nonResourceURLs:
  - "/metrics"
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: metricbeat
  # should be the namespace where metricbeat is running
  namespace: kube-system
  labels:
    k8s-app: metricbeat
rules:
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: metricbeat-kubeadm-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
rules:
  - apiGroups: [""]
    resources:
      - configmaps
    resourceNames:
      - kubeadm-config
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
---

No namespace labels at all; I am right back to where I was...

Metricbeat Version | KSM Metricsets Supported | Container Metrics Supported | Requires Matching Code
7.17.6 | none | none | none (matcher not configured)

[Screenshot: 2022-09-19 4:29 PM]

Appreciate your help / guidance

@MichaelKatsoulis (Contributor) commented Sep 20, 2022

So if I understand correctly in 8.1+
A) state_pod, state_container, state_service are supported

Correct

B) state_node seems to be supported but I did not see it in your list.

state_node metricset events are not enriched with namespace labels, if that is what you mean. A node does not belong to a namespace, so which namespace's labels would apply? It does not make sense.

C) state_deployment, state_replicaset would be an enhancement request

Correct

D) the container and pod labels are picked up by the metricset enricher

Correct, as are the labels of the namespace the pod/container belongs to.

  4. So in versions 7.16 through 8.0 you do not see the labels, because
    b. the metricset-level enricher is not yet configured to collect them
    Are the metricset-level enrichers something the end user configures? If so, can you point us to the documentation or a how-to? Or are you saying that in 7.17 the labels are not supported at all?

Unfortunately, the enricher only became configurable in version 8.1; from that version on it can be configured to collect the namespace labels, but before that it cannot.
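
For 8.1 and later, a minimal sketch of that module-level configuration (the label names are placeholders; check the kubernetes module docs for the add_resource_metadata options):

- module: kubernetes
  hosts: ["kube-state-metrics:8080"]
  period: 10s
  add_metadata: true
  metricsets:
    - state_pod
    - state_container
    - state_service
  # 8.1+: ask the metricset-level enricher for specific namespace labels.
  add_resource_metadata:
    namespace:
      enabled: true
      include_labels: ["org", "ecosystem"]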

  4. So in versions 7.16 through 8.0 you do not see the labels, because
    c. you are not using the autodiscover provider, which collects them by default
    I am currently testing Metricbeat 7.17.6 with autodiscover (which I think I have had configured all along).

This is my mistake. It seems that although you are using the autodiscover provider, it does not collect the labels. The reason is the unique: true setting. We definitely do not recommend changing that to false in a multi-node cluster.

After all the above analysis, it seems that between versions 7.16 and 8.1 it is not trivial to collect the namespace labels.
But this does not mean that the 7.16 feature of stopping the add_kubernetes_metadata processor was wrong.
The problem is that the metricset-level enricher was not configurable before version 8.1.

I can suggest some ways forward.

  1. We can backport the enricher enhancement to 7.16, 7.17, and 8.0 to cover the blind spot.
  2. As a workaround for those versions, you could try the following config, utilising the kubernetes autodiscover provider. Note that with this approach you cannot see the namespace labels on the kube-state-metrics metricsets.
metricbeat-kubernetes-7.17.6.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    metricbeat.autodiscover:
      providers:
        - type: kubernetes
          scope: cluster
          node: ${NODE_NAME}
          # In large Kubernetes clusters consider setting unique to false
          # to avoid using the leader election strategy and
          # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
          unique: false
          templates:
            - config:
                - module: kubernetes
                  metricsets:
                    - node
                    - system
                    - pod
                    - container
                    - volume
                  period: 10s
                  add_metadata: true
                  host: ${NODE_NAME}
                  hosts: ["https://${NODE_NAME}:10250"]
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
                  ssl.verification_mode: "none"
          add_resource_metadata:
            namespace:
              enabled: true
              include_labels: ["test", "test-2"]
            node:
              enabled: false
        - type: kubernetes
          scope: cluster
          node: ${NODE_NAME}
          # In large Kubernetes clusters consider setting unique to false
          # to avoid using the leader election strategy and
          # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
          unique: true
          templates:
            - config:
                - module: kubernetes
                  hosts: ["kube-state-metrics:8080"]
                  period: 10s
                  add_metadata: true
                  metricsets:
                    - state_node
                    - state_deployment
                    - state_daemonset
                    - state_replicaset
                    - state_pod
                    - state_container
                    - state_job
                    - state_cronjob
                    - state_resourcequota
                    - state_statefulset
                    - state_service
                - module: kubernetes
                  metricsets:
                    - apiserver
                  hosts: ["https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"]
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
                  ssl.certificate_authorities:
                    - /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                  period: 30s
                # Uncomment this to get k8s events:
                #- module: kubernetes
                #  metricsets:
                #    - event
        # To enable hints based autodiscover uncomment this:
        #- type: kubernetes
        #  node: ${NODE_NAME}
        #  hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: kube-system
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib|snap)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
---

This way you rely on the provider to start the metricsets that do not require unique: true, and take advantage of add_resource_metadata.

I would like the team's opinion on this matter. @ChrsMark, @gizas

@ChrsMark (Member) commented

This part looks weird to me:

metricbeat.autodiscover:
      providers:
        - type: kubernetes
          scope: cluster
          node: ${NODE_NAME}
          # In large Kubernetes clusters consider setting unique to false
          # to avoid using the leader election strategy and
          # instead run a dedicated Metricbeat instance using a Deployment in addition to the DaemonSet
          unique: false
          templates:
            - config:
                - module: kubernetes
                  metricsets:
                    - node
                    - system
                    - pod
                    - container
                    - volume
                  period: 10s
                  add_metadata: true
                  host: ${NODE_NAME}
                  hosts: ["https://${NODE_NAME}:10250"]
                  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
                  ssl.verification_mode: "none"
          add_resource_metadata:
            namespace:
              enabled: true
              include_labels: ["test", "test-2"]
            node:
              enabled: false

The kubernetes autodiscover provider is meant for spawning modules based on conditions and the like. I wouldn't use it to tackle this case; it would make things more complicated. Also, I'm not sure what happens under the hood with the provided configuration 🤔.

Now that I'm thinking of it again, how about adding a configurable option to add_kubernetes_metadata to allow overwriting metadata? We have something similar in add_cloud_metadata: https://www.elastic.co/guide/en/beats/metricbeat/current/add-cloud-metadata.html
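
Purely as a sketch of that idea (the overwrite flag below is hypothetical and does not exist in add_kubernetes_metadata today; the name is borrowed from add_cloud_metadata):

processors:
  - add_kubernetes_metadata:
      host: ${NODE_NAME}
      # Hypothetical option, mirroring add_cloud_metadata's `overwrite`:
      # it would let the processor replace kubernetes.* fields already
      # present on the event instead of skipping enrichment.
      overwrite: true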

@MichaelKatsoulis (Contributor) commented

Now that I'm thinking of it again, how about adding a configurable option to add_kubernetes_metadata to allow overwriting metadata? We have something similar in add_cloud_metadata: https://www.elastic.co/guide/en/beats/metricbeat/current/add-cloud-metadata.html

If we are going to add this in upcoming releases, it still needs to be backported to 7.16, 7.17, and 8.0. So why not just backport the enricher feature we already have?

@ChrsMark (Member) commented

Now that I'm thinking of it again, how about adding a configurable option to add_kubernetes_metadata to allow overwriting metadata? We have something similar in add_cloud_metadata: https://www.elastic.co/guide/en/beats/metricbeat/current/add-cloud-metadata.html

If we are going to add this in upcoming releases, it still needs to be backported to 7.16, 7.17, and 8.0. So why not just backport the enricher feature we already have?

I'm +1 on doing this if it's easy enough and worth the effort. On the other hand, having the extra config option could be handy in other cases too (I don't have anything specific in mind at the moment). It's up to you to make the call :).

@MichaelKatsoulis (Contributor) commented

Regardless of the choice, I think it should be backported only to 7.17, as that is the main version that is, and will remain, supported.

For the enhancement request I will open the issue myself.

@gizas (Contributor) commented Sep 20, 2022

@MichaelKatsoulis and I had a sync; thank you for the great explanation here.

state_deployment, state_replicaset would be an enhancement request
OK, this makes sense; a relevant issue will be opened.

For the backport work, I am still not convinced it is needed. @bvader, is this analysis the outcome of a specific customer engagement?
Can they not just update to 8.1+?

@bvader (Contributor, Author) commented Sep 20, 2022

@MichaelKatsoulis Thanks for the clarifications; it mostly makes sense.

B) state_node seems to be supported but I did not see it in your list.

state_node metricset events are not enriched with namespace labels, if that is what you mean. A node does not belong to a namespace, so which namespace's labels would apply? It does not make sense.

Yes that was a mistake on my part... I edited the original post.

I will absorb the other items.

I spoke to @gizas offline about the very large customer impacted by this...

@tetianakravchenko (Contributor) commented Dec 22, 2022

@MichaelKatsoulis does this issue cover anything beyond #33144? I've closed that issue and backported it to 7.17, 8.5, and 8.6.
Can this one be closed?
