
After upgraded to 7.6 filebeat doesn’t work as expected with config from 7.5.x #16464

Closed
nerddelphi opened this issue Feb 20, 2020 · 9 comments · Fixed by #16480
Assignees
Labels
autodiscovery bug containers Related to containers use case Team:Platforms Label for the Integrations - Platforms team v7.6.0

Comments

@nerddelphi

Hi there.

I'm using Filebeat 7.5.2 to read pod/container logs on GKE (k8s) using the official Elastic Helm charts. After upgrading to 7.6.0 it doesn't read pod/container logs anymore. There isn't any error in the Filebeat logs (it runs as a DaemonSet on all k8s nodes). No other config was changed anywhere in the environment, and I've double-checked that the log files from pods are mounted correctly inside the Filebeat pods as well.

Is the new autodiscover scope setting needed?

My filebeat config:

filebeat.inputs:

- type: google-pubsub
  project_id: xxxxxx
  topic: xxxxxxxx
  subscription.name: xxxxx
  subscription.create: false
  credentials_file: xxxx.json
  processors:
    - decode_json_fields:
        process_array: true
        max_depth: 20
        target: ""
        overwrite_keys: true
        fields: ["message"]
    - drop_fields:
        fields: ["message"]
    - add_fields:
        target: ''
        fields:
          topic_name: xxxxxx

- type: google-pubsub
  project_id: xxxxxx
  topic: xxx
  subscription.name: xxxx
  subscription.create: false
  credentials_file: xxxxxxx.json
  processors:
    - decode_json_fields:
        process_array: true
        max_depth: 20
        target: ""
        overwrite_keys: true
        fields: ["message"]
    - drop_fields:
        fields: ["message"]
    - add_fields:
        target: ''
        fields:
          topic_name: xxxxxx

- type: google-pubsub
  project_id: xxxxx
  topic: xxxxxx
  subscription.name: xxxxxx
  subscription.create: false
  credentials_file: xxxxxxx.json
  processors:
    - decode_json_fields:
        process_array: true
        max_depth: 20
        target: ""
        overwrite_keys: true
        fields: ["message"]
    - drop_fields:
        fields: ["message"]
    - add_fields:
        target: ''
        fields:
          topic_name: xxxxxxx

- type: google-pubsub
  project_id: xxxxxxx
  topic: xxxxxxxx
  subscription.name: xxxxxxxx
  subscription.create: false
  credentials_file: xxxx.json
  processors:
    - decode_json_fields:
        process_array: true
        max_depth: 20
        target: ""
        overwrite_keys: true
        fields: ["message"]
    - drop_fields:
        fields: ["message"]
    - add_fields:
        target: ''
        fields:
          topic_name: xxxxx

filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.labels.elastic_logs/json: "true"
            # regexp:
            #     kubernetes.container.name: "auth.*|upms.*"
          config:
            - type: container
              stream: stdout
              paths:
                - "/var/lib/docker/containers/${data.kubernetes.container.id}/*.log"
              encoding: utf-8
              symlinks: true
              scan_frequency: 1s
              # multiline.pattern: '^[[:space:]]+(\bat\b|\.{3})|^Caused by:'
              # multiline.negate: false
              # multiline.match: after
              processors:
                - decode_json_fields:
                    process_array: true
                    max_depth: 10
                    target: ""
                    overwrite_keys: true
                    fields: ["message"]
                # - add_cloud_metadata:
                # - add_docker_metadata:
                #     labels.dedot: true
                - add_kubernetes_metadata:
                    labels.dedot: true
                    annotations.dedot: true
            - type: container
              stream: stderr
              paths:
                - "/var/lib/docker/containers/${data.kubernetes.container.id}/*.log"
              encoding: utf-8
              symlinks: true
              scan_frequency: 1s
              multiline.pattern: '^[[:space:]]+(\bat\b|\.{3})|^Caused by:'
              multiline.negate: false
              multiline.match: after
              processors:
                - decode_json_fields:
                    process_array: true
                    max_depth: 10
                    target: ""
                    overwrite_keys: true
                    fields: ["message"]
                # - add_cloud_metadata:
                # - add_docker_metadata:
                #     labels.dedot: true
                - add_kubernetes_metadata:
                    labels.dedot: true
                    annotations.dedot: true
        - condition:
            equals:
              kubernetes.namespace: haproxy
          config:
            - module: haproxy
              log:
                input:
                  type: container
                  paths:
                    - "/var/lib/docker/containers/${data.kubernetes.container.id}/*.log"
                  encoding: utf-8
                  symlinks: true
                  scan_frequency: 1s
                  # multiline.pattern: '^[[:space:]]+(\bat\b|\.{3})|^Caused by:'
                  # multiline.negate: false
                  # multiline.match: after
                  processors:
                    # - decode_json_fields:
                    #     process_array: true
                    #     max_depth: 10
                    #     target: ""
                    #     overwrite_keys: true
                    #     fields: ["message"]
                    # - add_cloud_metadata:
                    # - add_docker_metadata:
                    #     labels.dedot: true
                    - add_kubernetes_metadata:
                        labels.dedot: true
                        annotations.dedot: true

#logging.level: debug
#logging.selectors: ["*"]

monitoring.enabled: "true"
monitoring.elasticsearch.username: ${beats-username}
monitoring.elasticsearch.password: ${beats-password}

queue.mem:
  events: 10000
  flush.min_events: 2048
  flush.timeout: 1s

setup.dashboards.enabled: false
setup.template:
  enabled: true
  overwrite: false
  name: flb-k8s
  pattern: "flb-k8s-*"
  settings.index:
    number_of_shards: 3
    number_of_replicas: 0
    number_of_routing_shards: 30
    refresh_interval: "30s"
    translog.durability: "async"
    routing.allocation.require.node_type: "hot"


setup.ilm:
  enabled: false

#output.console.pretty: true

output.elasticsearch:
  worker: 2
  hosts: http://xxxxxx:9200
  username: ${filebeat-elastic-username}
  password: ${filebeat-elastic-password}
  bulk_max_size: 5000
  indices:
    - index: "flb-k8s-pubsub-%{[topic_name]}"
      when.contains:
        input.type: "google-pubsub"
    - index: "flb-k8s-%{[kubernetes.namespace]}"
      when.contains:
        input.type: "container"

setup.kibana:
  host: "https://xxxxxx:443"
  #username: ${filebeat_kibana_user}
  #password: ${filebeat_kibana_pwd}
@nerddelphi
Author

You have changed the default k8s fields and it isn't in the docs.

Example:
kubernetes.labels.elastic_logs/json: "true"

become:
kubernetes.pod.labels.elastic_logs/json: "true"

So my processor condition would never match.

Is it by design or a bug/typo?
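For anyone stuck on 7.6.0 before a fix lands, a possible workaround is to match the relocated field name in the autodiscover condition. This is a sketch that assumes the labels really did move under `kubernetes.pod` in 7.6.0, as described above; revert it once `kubernetes.labels.*` is restored:

```yaml
# Temporary 7.6.0 workaround sketch: match the label at its new
# (unintended) location under kubernetes.pod instead of kubernetes.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      templates:
        - condition:
            equals:
              kubernetes.pod.labels.elastic_logs/json: "true"
```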

@exekias
Contributor

exekias commented Feb 20, 2020

This is a regression and we need to fix it, sorry for the inconvenience

@exekias exekias added autodiscovery bug containers Related to containers use case Team:Platforms Label for the Integrations - Platforms team v7.6.0 labels Feb 20, 2020
@exekias
Contributor

exekias commented Feb 20, 2020

@ChrsMark could you please have a look at this one?

@ChrsMark ChrsMark self-assigned this Feb 21, 2020
@ChrsMark
Member

ChrsMark commented Feb 21, 2020

I think the problem occurs because of this:

out := p.resource.Generate("pod", obj, opts...)

Maybe we need something similar to

meta = flattenMetadata(meta)

What we need is to fix the metadata accordingly so that the labels are brought back to the first level, like this: dba8f74#diff-15420f06ef66547336cabd3cab40dd04L151

Pushing a patch soon.

@nerddelphi
Author

@ChrsMark and @exekias There's another issue, I guess:

My other condition (HAProxy) in the Kubernetes autodiscover isn't working after the upgrade either.

Could it be related? The field kubernetes.namespace didn't change at all.

@nerddelphi
Author

I've found the errors:

ERROR   fileset/factory.go:105    Error creating input: each processor must have exactly one action, but found 2 actions (add_locale,add_kubernetes_metadata)

ERROR   [autodiscover]    cfgfile/list.go:96    Error creating runner from config: each processor must have exactly one action, but found 2 actions (add_locale,add_kubernetes_metadata)

But I didn't set any add_locale. :(
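A note on this error: `add_locale` here presumably comes from the module's own default config rather than the user's, and the message suggests the user-defined processor was merged into the same list entry as the module's processor instead of being appended as a separate item (consistent with the merge bug referenced below). Each entry in a `processors` list must hold exactly one action; a correctly separated list looks like:

```yaml
# Sketch: one action per list item. add_locale is assumed to be the
# module-supplied default; the error arises when two actions end up
# merged into a single list entry.
processors:
  - add_locale: ~
  - add_kubernetes_metadata:
      labels.dedot: true
      annotations.dedot: true
```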

@ChrsMark
Member

ChrsMark commented Mar 4, 2020

Thanks @nerddelphi ! @exekias does this sound familiar?

@exekias
Contributor

exekias commented Mar 4, 2020

As you guessed @nerddelphi, your issue should be fixed by #16450

@hicham-elbizy

hicham-elbizy commented Aug 26, 2020

Hi there,
I have the same problem when configuring the haproxy module input processors.
My filebeat config is:

- module: haproxy
  log:
    enabled: true
    var.input: "file"
    var.paths:
      - {{ proxy_logs_path }}/proxy*.log*
    input:
      processors:
        - drop_fields:
            fields: ["service.type"]

I've found the errors:
Exiting: Error while initializing input: each processor must have exactly one action, but found 2 actions (add_locale,drop_fields)

Thanks for the help.
