failed to unregister all metrics from previous promtail. THIS IS A BUG #2754
Comments
@kuberxy could you please try v0.31.0 (which was released today) and let us know if the issue is gone for you?
This issue is not gone.

$ kubectl -n loki logs loki-logs-sbfp8 grafana-agent
ts=2023-02-02T07:51:04.895353776Z caller=server.go:191 level=info msg="server listening on addresses" http=[::]:8080 grpc=127.0.0.1:12346 http_tls_enabled=false grpc_tls_enabled=false
ts=2023-02-02T07:51:04.896050123Z caller=node.go:85 level=info agent=prometheus component=cluster msg="applying config"
ts=2023-02-02T07:51:04.896177889Z caller=remote.go:180 level=info agent=prometheus component=cluster msg="not watching the KV, none set"
ts=2023-02-02T07:51:04.906062725Z caller=promtail.go:123 level=info component=logs logs_config=loki/loki msg="Reloading configuration file" md5sum=6e7026368fd78b062c8c20e5ce9e9b8a
ts=2023-02-02T07:51:04.907204839Z caller=kubernetes.go:326 level=info component=logs logs_config=loki/loki component=discovery discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-02-02T07:51:04.908589284Z caller=zapadapter.go:78 level=info component=traces msg="Traces Logger Initialized"
ts=2023-02-02T07:51:04.908651471Z caller=integrations.go:156 level=warn msg="integrations-next is enabled. integrations-next is subject to change"
ts=2023-02-02T07:51:04.911183301Z caller=reporter.go:103 level=info msg="running usage stats reporter"
ts=2023-02-02T07:51:08.808643551Z caller=entrypoint.go:337 level=info msg="reload of config file requested"
ts=2023-02-02T07:51:08.846228489Z caller=entrypoint.go:167 level=error msg="failed to update loki" err="failed to unregister all metrics from previous promtail. THIS IS A BUG"
ts=2023-02-02T07:51:08.846331922Z caller=entrypoint.go:348 level=error msg="failed to reload config file" err="changes did not apply successfully"
ts=2023-02-02T07:51:13.805653552Z caller=entrypoint.go:337 level=info msg="reload of config file requested"
$ kubectl -n loki describe pod loki-grafana-agent-operator-5df4cf9c46-ttf6r
...
Containers:
  grafana-agent-operator:
    Container ID:   docker://517a2247c825b95d8e67fb0e79d8f5ceac8b44817c2aba9da4951e9cd1c703c5
    Image:          docker.io/grafana/agent-operator:v0.31.0
    Image ID:       docker-pullable://grafana/agent-operator@sha256:01aace5fa7943dfaf1edc55e6562cd12e6888fa569ebb5a45962d3fef81407ac
    Port:           <none>
    Host Port:      <none>
    Args:
      --kubelet-service=default/kubelet
    State:          Running
      Started:      Thu, 02 Feb 2023 15:30:00 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgzs8 (ro)
Conditions:
...
I had a look at this today. Apparently Prometheus has a feature called "unchecked collectors". It is mentioned in their docs:
The underlying loki library uses unchecked collectors because the Describe method doesn't return any Desc. One problem with unchecked collectors is that …
Even though … I tried to enhance …
This got rid of the error, but unfortunately the metric stopped working properly. It was still visible on …
I suspect the reason unchecked collectors are used here is that, due to a …
I would like to try to get rid of the unchecked collectors in the loki code and instead switch to normal collectors. There are several reasons:
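For context, here is a minimal sketch of the unchecked-collector pattern in client_golang (it is not the actual loki/promtail code): a collector whose Describe method yields no Desc is treated as "unchecked", so it is registered without the usual descriptor checks and, per the Registry.Unregister documentation, cannot be unregistered via its descriptors later, which lines up with the "failed to unregister all metrics" error above.

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// uncheckedWrapper is a minimal sketch of the "unchecked collector" pattern.
// Because Describe yields no Desc, the registry skips its usual
// duplicate/consistency checks at registration time, and the collector
// cannot be unregistered by descriptor afterwards.
type uncheckedWrapper struct {
	inner prometheus.Collector
}

// Describe intentionally sends nothing; this is what makes the collector
// "unchecked" in client_golang terms.
func (u uncheckedWrapper) Describe(chan<- *prometheus.Desc) {}

// Collect simply delegates to the wrapped collector.
func (u uncheckedWrapper) Collect(ch chan<- prometheus.Metric) {
	u.inner.Collect(ch)
}

func main() {
	reg := prometheus.NewRegistry()
	c := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_total",
		Help: "example counter",
	})
	// Registration performs no descriptor checks; any duplicate or
	// inconsistency only surfaces later, when the registry is gathered.
	_ = reg.Register(uncheckedWrapper{inner: c})
}
```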
I tried using normal collectors such as …
This will cause the …
I just need to find a good enough place to call …
We need to persist the metrics between calls to …
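For what it's worth, one common client_golang idiom for keeping a checked metric alive across re-registrations is to catch AlreadyRegisteredError and reuse the existing collector. The sketch below is illustrative only; registerOrReuse is a hypothetical helper, not promtail code, and this only applies to checked collectors whose Describe yields descriptors the registry can compare.

```go
package main

import (
	"errors"

	"github.com/prometheus/client_golang/prometheus"
)

// registerOrReuse registers c on reg. If an equivalent collector is already
// registered (for example, one left behind by a previous promtail instance),
// it returns the existing collector instead of failing.
func registerOrReuse(reg prometheus.Registerer, c prometheus.Collector) (prometheus.Collector, error) {
	if err := reg.Register(c); err != nil {
		var are prometheus.AlreadyRegisteredError
		if errors.As(err, &are) {
			// Reuse the collector that is already registered.
			return are.ExistingCollector, nil
		}
		return nil, err
	}
	return c, nil
}

func main() {
	reg := prometheus.NewRegistry()
	counter := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_error_lines_total",
		Help: "sketch counter",
	})
	// First call registers; a later call with an equivalent counter
	// would hand back the already-registered one instead of erroring.
	if _, err := registerOrReuse(reg, counter); err != nil {
		panic(err)
	}
}
```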
Also, I found another bug with the way these metrics are collected. Apparently, a …
I want to use the metrics pipeline_stages to count the number of error lines in my logs. Below is my configuration:
After the loki-logs pod restarts, I get this error.
When I delete the metrics pipeline_stages, everything works normally. I deployed with Helm.
Please help me with this, thanks.
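For reference, a minimal promtail-style metrics pipeline stage that counts error lines looks roughly like the sketch below. This is illustrative only (the reporter's actual configuration is not shown above); the selector, labels, and the metric name log_error_lines_total are placeholders.

```yaml
pipeline_stages:
  # Only lines containing "error" reach the nested metrics stage.
  - match:
      selector: '{container="app"} |= "error"'
      stages:
        - metrics:
            log_error_lines_total:
              type: Counter
              description: "Number of log lines containing the word error"
              config:
                # Increment once for every line that passed the match stage.
                match_all: true
                action: inc
```

If the agent is managed by the Grafana Agent Operator (as in the describe output above), stages like these are typically supplied through a PodLogs resource's pipelineStages field, though the exact wiring depends on how the Helm chart renders them.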