How would I go about getting internal traces for a collector itself? #2831

Closed
jcleal opened this issue Mar 29, 2021 · 4 comments
Labels
enhancement New feature or request

Comments

jcleal commented Mar 29, 2021

My team and I are looking to monitor some collectors we have running for a project, and we were wondering how to pull the internal traces and send them somewhere (e.g. Jaeger, AWS X-Ray, etc.).

I'm thinking I'd need to instrument the collector with an SDK to forward the traces somewhere, but I noticed that some traces are returned from the zpages extension. I'm just wondering how I'd forward those, if that's currently possible.

I previously asked this question over in https://cloud-native.slack.com/archives/C01N6P7KR6W, and was told to create an issue here 👍🏻
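
For context, the zpages extension mentioned above serves the collector's in-process debug and trace pages over HTTP, and is turned on in the collector config. A minimal sketch, assuming the default endpoint:

extensions:
  zpages:
    endpoint: localhost:55679

service:
  extensions: [zpages]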

@jcleal jcleal changed the title How would I about getting internal traces for an collector tself? How would I go about getting internal traces for an collector itself? Mar 29, 2021
@jcleal jcleal changed the title How would I go about getting internal traces for an collector itself? How would I go about getting internal traces for a collector itself? Mar 29, 2021
@bogdandrutu bogdandrutu added this to the Phase2-GA-Roadmap milestone Mar 29, 2021

alolita commented May 12, 2021

@bogdandrutu this issue does not seem to be a GA must-have. Can we move it to a post-GA Phase 3 backlog?

bogdandrutu commented

I think we need to extend config.Service to allow configuring telemetry support:

  1. Exporters for traces/metrics
  2. Telemetry level (see that in the code).

So users would do this via the service configuration (here is an example, but somebody needs to think carefully about whether this is the right config):

service:
  telemetry:
    defaultlevel: normal
    metrics:
      exporter:
        name: prometheus
        port: 123
    traces:
      exporter:
        name: x-ray
        endpoint: localhost:123
  extensions: [exampleextension/0, exampleextension/1]
  pipelines:
    traces:
      receivers: [examplereceiver]
      processors: [exampleprocessor]
      exporters: [exampleexporter]
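
For comparison, the service telemetry configuration the collector later supports for its own logs and metrics looks roughly like this (a sketch based on the service::telemetry section; exact fields vary by collector version):

service:
  telemetry:
    logs:
      level: debug
    metrics:
      level: detailed
      address: 0.0.0.0:8888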

julealgon commented

Is there a known workaround we can implement until this is in place? Not being able to see the collector itself in Datadog is very bad. The only way for us to see the collector logs is to log into an Azure VM and inspect the file on the system where we redirect stdout/stderr.

This feels incredibly counterintuitive, considering that the whole purpose of having the collector is to centralize and improve monitoring. The fact that the collector itself doesn't push its own logs and traces feels incomplete to me.

Would it be possible to set up a second collector instance that takes the output of the first and sends it back to the main collector as a "file" input of sorts? This sounds incredibly hacky, but maybe it would work?

I just want to be able to see something about the collector in my Datadog instance.
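
That workaround is roughly expressible with the contrib filelog receiver feeding a logs pipeline into the contrib Datadog exporter. A sketch, assuming the collector's stdout is redirected to /var/log/otelcol.log (the path and API key are placeholders):

receivers:
  filelog:
    include: [/var/log/otelcol.log]

exporters:
  datadog:
    api:
      key: ${DD_API_KEY}

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [datadog]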

codeboten commented

I'm closing this issue, as the goal of exporting traces (and metrics and logs) is covered by #7532.

Note that exporting traces is currently an experimental feature supported behind a feature gate: https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/observability.md#how-we-expose-telemetry
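
In the meantime, the collector already exposes its own metrics on a Prometheus endpoint (:8888 by default), which a collector can scrape itself via the prometheus receiver. A minimal sketch, with the exporter left as a placeholder:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otelcol
          scrape_interval: 10s
          static_configs:
            - targets: [localhost:8888]

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [exampleexporter]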
