We are in a testing phase with Grafana Alloy on a very large infrastructure.
We have currently deployed Grafana Alloy on a cluster running 30 nodes (including 3 control planes).
We noticed strange behavior where only 1-3 pods consume noticeably more memory than the other pods.
For example, most pods consume below 600Mi, but a few pods (1-3) use more than 1Gi.
We are unable to find a valid explanation for this.
We also deployed the same setup on a smaller 4-node cluster and observed the same behavior: a single pod consumes noticeably more memory (more than 50%) than the others.
This makes it difficult for us to set requests and limits for the Grafana Alloy DaemonSet, since the memory utilization differs so much between pods.
Clustering appears to be working perfectly fine.
There are no error logs that point to the actual issue.
Is this expected, or a bug?
Only grafana-alloy-cjrb5 is using more memory than the other pods.
Steps to reproduce
Deploy the Grafana Alloy DaemonSet in clustering mode.
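For reference, clustering was enabled through the Helm chart, roughly along the lines of the sketch below. This is a minimal illustrative example assuming the grafana/alloy Helm chart; the field names may differ slightly between chart versions, and the resource numbers are placeholders, not our actual configuration.

```yaml
# values.yaml -- minimal sketch: Alloy as a DaemonSet with clustering enabled.
# Field names follow the grafana/alloy Helm chart; adjust for your chart version.
alloy:
  clustering:
    enabled: true        # pods discover each other and form a cluster
  resources:
    requests:
      memory: 512Mi      # illustrative values only; the memory imbalance makes these hard to pick
    limits:
      memory: 1Gi
controller:
  type: daemonset        # one Alloy pod per node
```

Deployed with something like `helm install alloy grafana/alloy -f values.yaml` (namespace and release name omitted here).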
System information
Kubernetes v1.29
Software version
Grafana Alloy v1.4
Configuration
Logs
None.