
ClusterFlow exclude logs feature is not working properly #1783

Open
sjanorkar opened this issue Jul 18, 2024 · 5 comments
Labels
bug Something isn't working

Comments

@sjanorkar

Bugs should be filed for issues encountered whilst operating logging-operator.
You should first attempt to resolve your issues through the community support
channels, e.g. Slack, in order to rule out individual configuration errors. #logging-operator
Please provide as much detail as possible.

Describe the bug:
The ClusterFlow exclude logs feature is not working properly. Even after excluding logs by labels and container names, I can still see those logs pushed to the output.

Expected behaviour:
Logs matching the labels, namespaces, and container_names listed in match.exclude[] should not be present in the log destination.

Steps to reproduce the bug:
Use default logging Operator config.

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: aicore-logs-flow
  namespace: logging
spec:
  filters:
    - tag_normaliser: {}
    - record_transformer:
        enable_ruby: true
        records:
          - namespace: ${record["kubernetes"]["namespace_name"]}
          - app: ${record["kubernetes"]["labels"]["app"]}
          - cluster_name: {{ .Values.environmentName }}
          - container: ${record["kubernetes"]["container_name"]}
          - pod: ${record["kubernetes"]["pod_name"]}
          - node_name: ${record["kubernetes"]["host"]}
          - ai_sap_com_tenantId: aicore
        remove_keys: kubernetes
  match:
    - exclude:
        labels:
          app: kubernetes # apiserver-proxy, kube-proxy-aalpha-worker, kube-proxy-loki, kube-proxy-node, kube-proxy-prometheus
    - exclude:
        container_names:
          - nginx
          - kube-proxy
    - exclude:
        labels:
          app: <label>
    - exclude:
        labels:
          app: <label>
    - exclude:
        labels: 
          k8s-app: <label>
    - exclude:
        namespaces:
          - prometheus
          - default 
          - tests
  globalOutputRefs:
    - logs-output

Additional context:
Add any other context about the problem here.

Environment details:

  • Kubernetes version (e.g. v1.15.2):
  • Cloud-provider/provisioner (e.g. AKS, GKE, EKS, PKE etc):
  • logging-operator version (e.g. 2.1.1):
  • Install method (e.g. helm or static manifests):
  • Logs from the misbehaving component (and any other relevant logs):
  • Resource definition (possibly in YAML format) that caused the issue, without sensitive data:

/kind bug

@sjanorkar sjanorkar added the bug Something isn't working label Jul 18, 2024
@pepov
Member

pepov commented Jul 18, 2024

Do you have any other ClusterFlows in the system? Based on the ClusterFlow above you should not get any logs at all, since there should be at least a 'select: {}' rule at the end.
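To illustrate the point above, a minimal sketch of what such a match list could look like (the container names and namespaces are taken from the ClusterFlow in the report; the trailing `select: {}` is the catch-all rule being described, not part of the original config):

```yaml
# Sketch: exclude rules are evaluated in order; without a final
# catch-all select, no logs are selected by this match list at all.
match:
  - exclude:
      container_names:
        - nginx
        - kube-proxy
  - exclude:
      namespaces:
        - prometheus
        - default
        - tests
  - select: {}   # catch-all: select everything not excluded above
```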

@sjanorkar
Author

No, this is the only active ClusterFlow. I do see all the logs in the output, so it seems like the exclude feature isn't working properly.

@pepov
Member

pepov commented Jul 18, 2024

Which logging operator version do you use?

@pepov
Member

pepov commented Jul 18, 2024

Also, what happens if you add a select statement at the end that filters for a non-existent label?
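As a debugging sketch of that suggestion (the label key and value here are hypothetical placeholders, chosen only so the select can never match):

```yaml
# Sketch: end the match list with a select that cannot match anything.
# If match rules are evaluated correctly, no logs should reach the
# output at all after adding this.
match:
  - exclude:
      container_names:
        - nginx
        - kube-proxy
  - select:
      labels:
        this-label: does-not-exist   # hypothetical, never matches
```

If logs still arrive at the output with this in place, that would suggest the match rules are not being applied at all rather than the exclude semantics being wrong.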

@pepov
Member

pepov commented Aug 6, 2024

@sjanorkar any update? Which version are you running?
