
AWS resource detectors are executed even if not configured #24072

Closed

cforce opened this issue Jul 10, 2023 · 3 comments
Labels
exporter/datadog (Datadog components), question (Further information is requested)

Comments

@cforce

cforce commented Jul 10, 2023

Component(s)

exporter/datadog

What happened?

Description

Although it is not configured, at least the AWS detector (and maybe others as well) is executed anyway.

Steps to Reproduce

Configure the resource detection processor without the aws detector; warnings still appear because the scan of the AWS cloud API is not successful.

Expected Result

Only the configured detectors are executed.

Actual Result

The AWS detector (and maybe others) still runs.

Collector version

0.81.0

Environment information

Environment

OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 1.20.5")

https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor

OpenTelemetry Collector configuration

extensions:
  zpages:
    endpoint: '0.0.0.0:55679'
  health_check:
    endpoint: '0.0.0.0:8081'
  memory_ballast:
    size_mib: 512
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      paging:
        metrics:
          system.paging.utilization:
            enabled: true
      cpu:
        metrics:
          system.cpu.utilization:
            enabled: true
      memory: null
      load:
        cpu_average: true
      network: null
      process:
        mute_process_name_error: false
        mute_process_exe_error: false
        mute_process_io_error: false
  hostmetrics/disk:
    collection_interval: 3m
    scrapers:
      disk: null
      filesystem:
        metrics:
          system.filesystem.utilization:
            enabled: true
  otlp:
    protocols:
      grpc:
        endpoint: '0.0.0.0:4317'
      http:
        endpoint: '0.0.0.0:4318'
  prometheus/otelcol:
    config:
      scrape_configs:
        - job_name: otelcol
          scrape_interval: 10s
          static_configs:
            - targets:
                - '0.0.0.0:8888'
processors:
  resourcedetection:
    detectors:
      - env
      - system
      - docker
      - azure
    timeout: 10s
    override: false
  cumulativetodelta: null
  batch/metrics:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 10s
  batch/traces:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 5s
  batch/logs:
    send_batch_max_size: 1000
    send_batch_size: 100
    timeout: 30s
  attributes:
    actions:
      - key: tags
        value:
          - 'DD_ENV:${env:ENVIRONMENT}'
          - 'geo:${env:GEO}'
        action: upsert
  resource:
    attributes:
      - key: DD_ENV
        value: '${env:ENVIRONMENT}'
        action: insert
      - key: env
        value: '${env:ENVIRONMENT}'
        action: insert
      - key: geo
        value: '${env:GEO}'
        action: insert
      - key: region
        value: '${env:REGION}'
        action: insert
exporters:
  datadog:
    api:
      site: datadoghq.com
      key: '${env:DATADOG_API_KEY}'
    metrics:
      resource_attributes_as_tags: true
    host_metadata:
      enabled: true
      tags:
        - 'DD_ENV:${env:ENVIRONMENT}'
        - 'geo:${env:GEO}'
        - 'region:${env:REGION}'
service:
  extensions:
    - zpages
    - health_check
    - memory_ballast
  telemetry:
    metrics:
      address: '0.0.0.0:8888'
    logs:
      level: ${env:LOG_LEVEL || 'info'}
  pipelines:
    traces:
      receivers:
        - otlp
      processors:
        - batch/traces
      exporters:
        - datadog
    metrics/hostmetrics:
      receivers:
        - otlp
      processors:
        - batch/metrics
      exporters:
        - datadog
    metrics:
      receivers:
        - otlp
      processors:
        - batch/metrics
      exporters:
        - datadog

Log output

2023-07-10T16:37:50.605Z        info    service/telemetry.go:81 Setting up own telemetry...
2023-07-10T16:37:50.605Z        info    service/telemetry.go:104        Serving Prometheus metrics      {"address": "0.0.0.0:8888", "level": "Basic"}
2023/07/10 16:37:50 WARN: failed to get session token, falling back to IMDSv1: 403 connecting to 169.254.169.254:80: connecting to 169.254.169.254:80: dial tcp 169.254.169.254:80: connectex: A socket operation was 
attempted to an unreachable network.: Forbidden
        status code: 403, request id:
caused by: EC2MetadataError: failed to make EC2Metadata request
connecting to 169.254.169.254:80: connecting to 169.254.169.254:80: dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network.
        status code: 403, request id:
2023-07-10T16:37:50.684Z        info    provider/provider.go:30 Resolved source {"kind": "exporter", "data_type": "metrics", "name": "datadog", "provider": "system", "source": {"Kind":"host","Identifier":"b53bd9e04e85"}}

Additional context

No response

@cforce added the bug and needs triage labels on Jul 10, 2023
@github-actions bot added the processor/resourcedetection label on Jul 10, 2023
@github-actions
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@mx-psi added the question and exporter/datadog labels and removed the bug, processor/resourcedetection, and needs triage labels on Jul 11, 2023
@mx-psi
Member

mx-psi commented Jul 11, 2023

This log likely comes from the Datadog exporter and not from the resource detection processor. The Datadog exporter calls the AWS EC2 metadata endpoint to determine the cloud provider the Collector is running on. This is not configurable at the moment; one way to avoid the call is to set the hostname option. See #16442 for more details.
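
For illustration, a minimal sketch of that workaround, assuming the exporter-level hostname option the comment refers to (the hostname value is a placeholder, not taken from this issue):

exporters:
  datadog:
    # Setting an explicit hostname avoids the cloud-provider lookup at startup
    hostname: my-collector-host   # placeholder value
    api:
      site: datadoghq.com
      key: '${env:DATADOG_API_KEY}'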

codeboten pushed a commit that referenced this issue Jul 12, 2023
Make Datadog exporter source providers run in parallel to reduce start
times. With the new `Chain` implementation, we start checking all
sources in parallel instead of waiting for the previous one to fail.
This makes the Datadog exporter call all cloud provider endpoints
regardless of the environment it runs in, so it may increase spurious
logs such as those reported in #24072.

**Link to tracking Issue:** Updates #16442 (at least it should
substantially improve start time in some environments)

---------

Co-authored-by: Yang Song <[email protected]>
Co-authored-by: Alex Boten <[email protected]>
@mx-psi
Member

mx-psi commented Sep 8, 2023

I am going to close this as a duplicate of #22807. Let's continue the discussion over there.

@mx-psi closed this as not planned (duplicate) on Sep 8, 2023