Prometheus receiver starts by default, even when not configured. How do I disable the prometheus receiver to stop collecting resource metrics? #31598

Closed
summahto opened this issue Mar 5, 2024 · 3 comments
Labels
exporter/prometheus, needs triage, receiver/prometheus

Comments

@summahto

summahto commented Mar 5, 2024

Component(s)

exporter/prometheus, receiver/prometheus

Describe the issue you're reporting

I am running the latest version of the OpenTelemetry Collector (0.95.0) through Docker. Here is my docker-compose.yml for starting otel-collector:

services:
  otelcol:
    image: otel/opentelemetry-collector-contrib
    volumes:
      - ./otel-collector-config.yaml:/usr/src/app/otelcol-contrib/config.yaml
    ports:
      - "1888:1888" # pprof extension
      - "13133:13133" # health_check extension
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP http receiver
      - "55679:55679" # zpages extension

and here is my otel-collector-config.yaml:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
exporters:
  # NOTE: Prior to v0.86.0 use `logging` instead of `debug`.
  debug:
    verbosity: basic
  # logging:
  #   loglevel: debug
  azuremonitor:
    connection_string: 
processors:
  batch:
extensions:
  health_check:
  pprof:
  zpages:
  # memory_ballast:
  #   size_mib: 512
service:
  extensions: [health_check, pprof, zpages]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [azuremonitor]
      processors: [batch]
    metrics:
      receivers: [otlp]
      exporters: [azuremonitor]
      processors: [batch]
    logs:
      receivers: [otlp]
      exporters: [azuremonitor]
      processors: [batch]
  telemetry:
    metrics:
      level: none

I am having an issue with the Prometheus receiver, which I have not configured in my receivers section or in my service pipelines. Even so, it keeps scraping the collector's own metrics, which completely fills up my console logs so that I cannot see the traces and logs I am exporting from my app to the collector. The scrape interval also seems quite short. I have attached the collector logs for reference; below is the snippet that keeps repeating. How can I configure my collector to run without the Prometheus receiver?

maas-api-otelcol-1   | 2024-03-05T14:36:41.300Z info    [email protected]/metrics_receiver.go:282      Starting scrape manager {"kind": "receiver", "name": "prometheus", "data_type": "metrics"}
maas-api-otelcol-1   | 2024-03-05T14:36:52.544Z info    MetricsExporter {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 12, "data points": 12}
maas-api-otelcol-1   | 2024-03-05T14:36:52.544Z info    ResourceMetrics #0
maas-api-otelcol-1   | Resource SchemaURL: 
maas-api-otelcol-1   | Resource attributes:
maas-api-otelcol-1   |      -> service.name: Str(otel-collector)
maas-api-otelcol-1   |      -> service.instance.id: Str(0.0.0.0:8888)
maas-api-otelcol-1   |      -> net.host.port: Str(8888)
maas-api-otelcol-1   |      -> http.scheme: Str(http)
maas-api-otelcol-1   |      -> service_instance_id: Str(ddb95075-4288-4ad5-bd2e-452b3120915f)
maas-api-otelcol-1   |      -> service_name: Str(otelcol-contrib)
maas-api-otelcol-1   |      -> service_version: Str(0.95.0)
maas-api-otelcol-1   | ScopeMetrics #0
maas-api-otelcol-1   | ScopeMetrics SchemaURL: 
maas-api-otelcol-1   | InstrumentationScope otelcol/prometheusreceiver 0.95.0
maas-api-otelcol-1   | Metric #0
maas-api-otelcol-1   | Descriptor:
maas-api-otelcol-1   |      -> Name: otelcol_process_runtime_heap_alloc_bytes
maas-api-otelcol-1   |      -> Description: Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc')
maas-api-otelcol-1   |      -> Unit: 
maas-api-otelcol-1   |      -> DataType: Gauge
maas-api-otelcol-1   | NumberDataPoints #0
maas-api-otelcol-1   | Data point attributes:
maas-api-otelcol-1   |      -> service_instance_id: Str(ddb95075-4288-4ad5-bd2e-452b3120915f)
maas-api-otelcol-1   |      -> service_name: Str(otelcol-contrib)
maas-api-otelcol-1   |      -> service_version: Str(0.95.0)
maas-api-otelcol-1   | StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
maas-api-otelcol-1   | Timestamp: 2024-03-05 14:36:52.35 +0000 UTC
maas-api-otelcol-1   | Value: 42193616.000000
maas-api-otelcol-1   | Metric #1
maas-api-otelcol-1   | Descriptor:
maas-api-otelcol-1   |      -> Name: otelcol_process_runtime_total_alloc_bytes
maas-api-otelcol-1   |      -> Description: Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc')
maas-api-otelcol-1   |      -> Unit: 
maas-api-otelcol-1   |      -> DataType: Sum
maas-api-otelcol-1   |      -> IsMonotonic: true
maas-api-otelcol-1   |      -> AggregationTemporality: Cumulative
maas-api-otelcol-1   | NumberDataPoints #0
maas-api-otelcol-1   | Data point attributes:
maas-api-otelcol-1   |      -> service_instance_id: Str(ddb95075-4288-4ad5-bd2e-452b3120915f)
maas-api-otelcol-1   |      -> service_name: Str(otelcol-contrib)
maas-api-otelcol-1   |      -> service_version: Str(0.95.0)
maas-api-otelcol-1   | StartTimestamp: 2024-03-05 14:36:52.35 +0000 UTC
maas-api-otelcol-1   | Timestamp: 2024-03-05 14:36:52.35 +0000 UTC
maas-api-otelcol-1   | Value: 48675096.000000
maas-api-otelcol-1   | Metric #2
maas-api-otelcol-1   | Descriptor:
maas-api-otelcol-1   |      -> Name: otelcol_process_uptime
maas-api-otelcol-1   |      -> Description: Uptime of the process
maas-api-otelcol-1   |      -> Unit: 
maas-api-otelcol-1   |      -> DataType: Sum
maas-api-otelcol-1   |      -> IsMonotonic: true
maas-api-otelcol-1   |      -> AggregationTemporality: Cumulative
maas-api-otelcol-1   | NumberDataPoints #0
[TooManyMetrics.txt](https://github.com/open-telemetry/opentelemetry-collector-contrib/files/14498366/TooManyMetrics.txt)

@summahto added the needs triage label on Mar 5, 2024
Contributor

github-actions bot commented Mar 5, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@dashpole
Contributor

dashpole commented Mar 5, 2024

I suspect the collector is using the default configuration, rather than your configuration.
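
The default configuration bundled with the otel/opentelemetry-collector-contrib image includes a prometheus receiver that scrapes the collector's own metrics on port 8888, which matches the job name and target shown in your debug output. Roughly, it looks like this (a sketch reconstructed from the log snippet above, not the exact file shipped in the image):

receivers:
  prometheus:
    config:
      scrape_configs:
        # "otel-collector" and 0.0.0.0:8888 match the service.name and
        # service.instance.id seen in the debug exporter output above.
        - job_name: otel-collector
          scrape_interval: 10s # assumed; only the job name and target are visible in the logs
          static_configs:
            - targets: ["0.0.0.0:8888"]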

The demo docker compose file has a command field:

command: ["--config=/etc/otel-collector-config.yaml", "${OTELCOL_ARGS}"]

Maybe try adding that, and pointing it at your config volume?
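
For your setup, that would look something like this (a sketch reusing the mount path from your docker-compose.yml; adjust it if you mount the file somewhere else):

services:
  otelcol:
    image: otel/opentelemetry-collector-contrib
    # Point the collector at the mounted file instead of the image's default config.
    command: ["--config=/usr/src/app/otelcol-contrib/config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/usr/src/app/otelcol-contrib/config.yaml

Alternatively, if you know the default config path inside the image, you can mount your file directly over it and skip the command override.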

@summahto
Author

summahto commented Mar 5, 2024

Yes. That was the issue. Thanks for letting me know.
