
Most of the OpenTelemetry trace tags are null #9472

Closed
dpiddock opened this issue Dec 30, 2022 · 4 comments · Fixed by #9489
Assignees: esigo
Labels: kind/bug (Categorizes issue or PR as related to a bug), needs-priority, needs-triage (Indicates an issue or PR lacks a `triage/foo` label and requires one)

Comments

@dpiddock commented Dec 30, 2022

What happened:

I was playing around with the in-development OpenTelemetry tracing feature and noticed that most of the trace tags are null. This appears to be because the module is built against gRPC v1.43.2; it only seems to work correctly when built against gRPC v1.36.4 (open-telemetry/opentelemetry-cpp-contrib#251).
[screenshot: tempo-broken — trace attributes shown as null in Grafana]

What you expected to happen:
All the tags should have a value.
[screenshot: tempo-working — trace attributes populated]

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
#9062 rebased onto 3474c33. This has the latest OpenTelemetry-related changes, and the otel container referenced by the Helm chart is compatible with the nginx image that needs to be built.

How to reproduce this issue:
With minikube.

  1. Build the ingress-nginx controller with OpenTelemetry support, from the WIP PR feat: OpenTelemetry module integration #9062

    git clone git@github.com:kubernetes/ingress-nginx.git
    pushd ingress-nginx
    gh pr checkout 9062
    git switch --detach 2ce5273
    git rebase 3474c33 # get recent changes for /images/nginx/ compatible with the ingress-nginx/opentelemetry referenced in the helm chart
    TAG=1.5.2 REGISTRY=local make build image
    minikube image load local/controller:1.5.2
  2. Configure and install nginx-ingress using local helm chart

    echo '
    controller:
      image:
        repository: local/controller
        tag: 1.5.2
        digest:
      opentelemetry:
        enabled: true
      config:
        enable-opentelemetry: "true"
        opentelemetry-operation-name: "HTTP $request_method $service_name $uri"
        otlp-collector-host: "tempo.observability.svc"
        otel-max-queuesize: "2048"
        otel-schedule-delay-millis: "5000"
        otel-max-export-batch-size: "512"
        otel-sampler: "AlwaysOn"
    ' > helm-values.yaml
    helm upgrade --install ingress-nginx ./charts/ingress-nginx --create-namespace -n ingress-nginx -f helm-values.yaml
  3. Set up the test observability stack, stripped down from opentelemetry documentation #9144

    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    helm upgrade --install tempo grafana/tempo --create-namespace -n observability --set tempo.searchEnabled=true
    helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/3617c7eafb34be370163e8b24e1ba369e3643c79/observability/grafana/grafana-values.yaml --install grafana grafana/grafana -n observability
  4. Make some requests

    kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8090:80
    curl localhost:8090
  5. Navigate the Grafana UI. Find a trace in the Explore section. Note that most of the Attributes are null.

    kubectl port-forward --namespace=observability service/grafana 3000:80

    http://localhost:3000/explore
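As an aside for readers unfamiliar with the `opentelemetry-operation-name` setting used in step 2: the template `HTTP $request_method $service_name $uri` is expanded per request from nginx variables, which is what produces span names such as `HTTP GET <service> <path>` in Grafana. A minimal shell sketch of that expansion (the function name and the example values are invented for illustration):

```shell
# Hypothetical sketch of how the opentelemetry-operation-name template
# "HTTP $request_method $service_name $uri" expands for a single request.
# The parameter names mirror the nginx variables; the values are assumed.
operation_name() {
  local request_method=$1 service_name=$2 uri=$3
  echo "HTTP $request_method $service_name $uri"
}

operation_name GET ingress-nginx-controller /
```

For the `curl localhost:8090` request in step 4, this would yield a span name like `HTTP GET ingress-nginx-controller /`.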

Anything else we need to know:

This should be easy to fix by building against gRPC v1.36.4.

The nginx opentelemetry module is technically abandoned; development effort is now focused on the otel-webserver-module in the same repo, per open-telemetry/opentelemetry-cpp-contrib#229.
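One quick way to sanity-check which gRPC version a built module was compiled against is to scan the shared object for an embedded version string. The helper below is hypothetical (not part of ingress-nginx or the otel module), and the exact format of the string gRPC embeds is an assumption; adjust the pattern if nothing matches:

```shell
# Hypothetical helper: scan a binary for an embedded gRPC version string
# (e.g. "grpc 1.43.2"). The string format is an assumption about what the
# gRPC library embeds; tweak the pattern (e.g. "grpc-c++/") if needed.
# grep -a treats the binary as text, so no binutils "strings" is required.
check_grpc_version() {
  grep -m1 -ao 'grpc 1\.[0-9][0-9.]*' "$1"
}

# Example (path is illustrative only):
# check_grpc_version /etc/nginx/modules/otel_ngx_module.so
```

If this printed `grpc 1.43.2` for the module shipped in the controller image, that would corroborate the mismatch described above.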

@dpiddock dpiddock added the kind/bug Categorizes issue or PR as related to a bug. label Dec 30, 2022
@k8s-ci-robot (Contributor) commented:

@dpiddock: This issue is currently awaiting triage.

If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority labels Dec 30, 2022
@esigo (Member) commented Dec 30, 2022

/assign

@strongjz (Member) commented Jan 5, 2023

/triage accepted
/priority backlog

@esigo esigo mentioned this issue Jan 7, 2023
@esigo (Member) commented Jan 7, 2023

Hi @dpiddock,
thanks for testing and for your feedback. I've raised a PR to fix this.
We will switch to the new implementation once it has feature parity with the current one.
Until then, I'll maintain the old implementation.
