Propose "ideal" configuration #5
An example proposal:

```yaml
disabled: false # OTEL_SDK_DISABLED
resource:
  attributes: # OTEL_RESOURCE_ATTRIBUTES
    - key1: value1
    - key2: value2
service:
  name: myapp # OTEL_SERVICE_NAME
log:
  level: info # OTEL_LOG_LEVEL
propagators: [tracecontext, baggage] # OTEL_PROPAGATORS
sampler:
  name: parentbased_always_on # OTEL_TRACES_SAMPLER
  argument: "0.25" # OTEL_TRACES_SAMPLER_ARG
processors:
  batch/span:
    delay: 5000 # OTEL_BSP_SCHEDULE_DELAY
    timeout: 30000 # OTEL_BSP_EXPORT_TIMEOUT
    queue_size: 2048 # OTEL_BSP_MAX_QUEUE_SIZE
    export_size: 512 # OTEL_BSP_MAX_EXPORT_BATCH_SIZE
  batch/log:
    delay: 5000 # OTEL_BLRP_SCHEDULE_DELAY
    timeout: 30000 # OTEL_BLRP_EXPORT_TIMEOUT
    queue_size: 2048 # OTEL_BLRP_MAX_QUEUE_SIZE
    export_size: 512 # OTEL_BLRP_MAX_EXPORT_BATCH_SIZE
limits:
  attributes:
    value_length: 0 # OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT
    count: 128 # OTEL_ATTRIBUTE_COUNT_LIMIT
  spans:
    attributes:
      value_length: 0 # OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT
      count: 128 # OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT
    event:
      count: 128 # OTEL_SPAN_EVENT_COUNT_LIMIT
      attributes:
        count: 128 # OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT
    link:
      count: 128 # OTEL_SPAN_LINK_COUNT_LIMIT
      attributes:
        count: 128 # OTEL_LINK_ATTRIBUTE_COUNT_LIMIT
exporters:
  otlp:
    endpoint:
  jaeger:
    protocol: grpc # OTEL_EXPORTER_JAEGER_PROTOCOL
    endpoint: http://localhost:14268/api/traces # OTEL_EXPORTER_JAEGER_ENDPOINT
    timeout: 10000 # OTEL_EXPORTER_JAEGER_TIMEOUT
    user: "" # OTEL_EXPORTER_JAEGER_USER
    password: "" # OTEL_EXPORTER_JAEGER_PASSWORD
  zipkin:
    endpoint: http://localhost:9411/api/v2/spans # OTEL_EXPORTER_ZIPKIN_ENDPOINT
    timeout: 10000 # OTEL_EXPORTER_ZIPKIN_TIMEOUT
  prometheus:
    host: localhost # OTEL_EXPORTER_PROMETHEUS_HOST
    port: 9464 # OTEL_EXPORTER_PROMETHEUS_PORT
  logging:
python: # OTEL_PYTHON_*
pipelines:
  traces:
    processors: [simple]
    exporters: [logging, jaeger] # OTEL_TRACES_EXPORTER
  metrics:
    processors: [batch]
    exporters: [otlp] # OTEL_METRICS_EXPORTER
  logs:
    processors: [batch]
    exporters: [otlp] # OTEL_LOGS_EXPORTER
instrumentations:
  redis:
    package:
```
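The `# OTEL_*` comments in the proposal above map file keys to existing environment variables. A stdlib-only sketch of how such a mapping could be resolved — note that `resolve` is a hypothetical helper, not any SDK's actual API, and the precedence shown (explicit file value wins, then the environment variable, then a default) is one possible choice, not something the proposal settles:

```python
import os

def resolve(file_value, env_var, default=None):
    """Return the file value if set, else the env var, else the default."""
    if file_value is not None:
        return file_value
    return os.environ.get(env_var, default)

# Example: the sampler argument keyed to OTEL_TRACES_SAMPLER_ARG.
os.environ["OTEL_TRACES_SAMPLER_ARG"] = "0.5"
print(resolve(None, "OTEL_TRACES_SAMPLER_ARG", "1.0"))    # env var used
print(resolve("0.25", "OTEL_TRACES_SAMPLER_ARG", "1.0"))  # file value wins
```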
I like yours; it is a mix of what we in Erlang originally had for file configuration:

And what we have now (while supporting the original):

It makes me think there may be even another level of mixture with the kitchen-sink examples I can do ... where you still have to define stuff like tracer providers, but have less nesting by supporting top-level definitions of things like exporters: give each a user-defined name that can be referenced within the tracer provider definition.
Right, my example is strongly influenced by the collector's configuration, where components are defined individually and the telemetry pipelines reference those definitions by the names used as identifiers. This has the bonus that folks familiar with configuring the collector would find configuring SDKs fairly straightforward.
While I'd like to have symmetry with the collector where possible, SDK processors and exporters are conceptually different from collector processors and exporters. Specifically, SDK exporters don't show up in the configuration of a tracerprovider / meterprovider / loggerprovider except as arguments to specific built-in processors. Let me try to illustrate through some examples:
Example 2
Example 3
Additionally, meterprovider is quite different from loggerprovider and tracerprovider - it doesn't have the notion of a processor at all - only metric readers.
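To make that asymmetry concrete, here is a hypothetical shape (the key names are illustrative, not a ratified schema): the tracer provider only sees an exporter as an argument to a built-in processor, while the meter provider has no processors at all, only readers wrapping exporters:

```yaml
tracer_provider:
  processors:
    - batch: # built-in processor; the exporter is its argument
        exporter:
          otlp:
            endpoint: http://localhost:4317
meter_provider:
  readers:
    - periodic: # metrics have no processors, only readers
        interval: 60000
        exporter:
          otlp:
            endpoint: http://localhost:4317
```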
In this case I would omit the processors section (this is what the collector does)
In practice, the SDK would have to configure a separate span processor for each configured exporter
Wouldn't configuring different processors with different exporters effectively be configuring multiple pipelines? I agree that processor configuration doesn't make sense for metrics; I'm not sure if there is a term other than processors that could generalize the configuration.
I think that would effectively be a no-op tracer provider configuration, since exporters are meaningless without an associated processor to feed them data.
Example 3 illustrates the challenges with this.
It's conceptually different from two pipelines. If there were actually two pipelines, each pipeline would have its own set of processors. In the SDK there is only one pipeline, and all processors are invoked. So if you had two batch processors, each with an exporter, and then added an additional processor that did some enrichment (i.e. enriching with baggage), the changes from the additional processor would be seen by both batch processors. There's no way to isolate the additional processor's changes to only a single batch processor.
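A stdlib-only Python sketch of that point — the class and method names here are illustrative, not the real SDK API: a provider holds one flat list of processors, so an enriching processor's mutations are visible to every downstream batch processor.

```python
class EnrichProcessor:
    """Mutates each span; stands in for an enrichment processor."""
    def on_end(self, span):
        span["baggage"] = "tenant=acme"  # visible to all later processors

class CollectingProcessor:
    """Stands in for a batch processor feeding one exporter."""
    def __init__(self):
        self.spans = []
    def on_end(self, span):
        self.spans.append(dict(span))

enrich = EnrichProcessor()
batch_a, batch_b = CollectingProcessor(), CollectingProcessor()
processors = [enrich, batch_a, batch_b]  # one pipeline, invoked in order

span = {"name": "GET /users"}
for p in processors:
    p.on_end(span)

# Both "exporters" see the enrichment; it can't be scoped to just one.
print(batch_a.spans[0]["baggage"], batch_b.spans[0]["baggage"])
```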
Pipelines aren't a concept in the API or SDK. I think configuring providers is confusing enough without adding pipelines :)
I've added a PR to make discussions around the specifics of the config a bit easier.
Closing this issue; there is a working configuration here.
A few things are still missing:
Samplers are the most interesting case because they may delegate to each other.
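For instance, a parent-based sampler delegates to another sampler for root spans. A hypothetical sketch of how that nesting might look in a config file (key names illustrative, not a ratified schema):

```yaml
sampler:
  parent_based:
    # delegate consulted when a span has no parent
    root:
      trace_id_ratio_based:
        ratio: 0.25
```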
This issue is to try to produce the configuration that would be ideal from a user-ergonomics standpoint.