[kube-prometheus-stack] additionalRuleLabels shouldn't apply to recording rules #3396
I rolled back to the previous chart version.
I see the original issue #3340 for the problematic PR suggests what I think should be the desired solution: separate additionalRuleLabels for alert and recording rules.
There is something wrong with this version, which I needed to roll back as well. Trying to install the Helm release gives errors.
@scott-grimes Any idea?
Looks like we should split this like @defenestration has suggested: allow separate labels for alerting and recording rules.
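A values sketch of that split might look like the following. Note that the keys `additionalAlertRuleLabels` and `additionalRecordingRuleLabels` are the names proposed in this thread, not options the chart currently supports, and the label key/value are invented for illustration:

```yaml
# Hypothetical values.yaml sketch: these two keys are the split proposed
# in this thread, not existing chart options.
defaultRules:
  # Would apply only to alerting rules (safe for routing labels):
  additionalAlertRuleLabels:
    owner: team-infra
  # Would apply only to recording rules (usually left empty, since extra
  # labels on recorded series change the metric identity):
  additionalRecordingRuleLabels: {}
```

The design point is that alert-routing labels are harmless on alerts but change the label set of recorded series, which is why a single shared knob causes trouble.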
The biggest problem this PR has introduced is the required
@buroa submitted a PR to fix the issue.
@scott-grimes Thanks! Looks great!
The PR that closes this doesn't fix the issue from the title, though: "additionalRuleLabels shouldn't apply to recording rules".
Yes, the issue still persists.
Agreed, @buroa's issue was fixed, but the original issue still exists. Unsure if I can reopen this issue myself, though.
Well, looking at a diff between 45.28.0 and the current version: the current plan is to look at the files changed in the initial PR for 45.28.1 and not add my relabel rules to those groups.
So the rule groups so far have divisions between alert and recording rules, and I just added my label rule to every group but those:

```yaml
additionalRuleGroupLabels:
  alertmanager:
    owner: {{ stuff }}
  ...
```

So it seems the issue can stay closed after all. It's not as fine a line as initially imagined, but so be it.
I believe it shouldn't work like that, because the docs say the label applies only to alerts, but it doesn't. And you can't define a label for just all alerts. It's a pretty annoying issue that breaks previous functionality. I don't think it should be closed until someone fixes the docs or the broken functionality.
I agree! We are hitting the same issue after upgrading. |
Please reopen as we're still experiencing the same issue. |
We are on the most recent chart version (55.11.0) and also hitting the same issue. We are using a very simple config.
Same problem in kube-prometheus-stack version 56.1.0. We have to label groups individually, excluding the recording-rule groups:

```yaml
defaultRules:
  additionalRuleGroupLabels:
    ...
    k8sPodOwner:
      group_dest: infra
    kubeApiserverAvailability: {}
    kubeApiserverBurnrate:
      group_dest: infra
    kubeApiserverHistogram:
    ...
```

If we add the label there or in the global config, the error comes back.
Same here on 56.6.2 |
Still seeing this even on 57.0.2. Does not work as documented. |
We are also seeing this error in chart version 62.7.0, for kube-apiserver-availability.rules.
Describe the bug
It looks like a recent PR (#3351) for kube-prometheus-stack-45.28.1 applies additionalRuleLabels to recording rules, which is a new change. We have an additionalRuleLabels label defined to help us route alert rules based on a namespace annotation (see #1231 (comment) for background on that). In our Helm chart we use something like:
With the recent release, this label also gets applied to recording rules. This caused an error for us: the built-in alert PrometheusRuleFailures fires, and the http://prometheus/rules page shows an error with this recording rule in kube-apiserver-availability.rules.

It seems like it might be good to specify different additionalRuleLabels to apply to alerts and recording rules separately, e.g. add additionalAlertRuleLabels and additionalRecordingRuleLabels parameters to the Helm chart.

What's your helm version?
version.BuildInfo{Version:"v3.11.2", GitCommit:"912ebc1cd10d38d340f048efaf0abda047c3468e", GitTreeState:"clean", GoVersion:"go1.18.10"}
What's your kubectl version?
1.24
Which chart?
kube-prometheus-stack
What's the chart version?
45.28.1
What happened?
The built-in alert PrometheusRuleFailures fires, and the http://prometheus/rules page shows an error with this recording rule in kube-apiserver-availability.rules.

What you expected to happen?
no error
How to reproduce it?
Try to add a similar additionalRuleLabels. This will render correctly on alert rules, but the metric code_verb:apiserver_request_total:increase1h contains the label text verbatim as well, which is undesired.

Enter the changed values of values.yaml?
None from previous version of chart.
Enter the command that you execute and failing/misfunctioning.
N/A - flux auto upgrades to latest version of helm chart.
Anything else we need to know?
No response
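To make the report above concrete, a minimal values fragment of the kind that triggers this might look as follows. The label key and value are invented for illustration; they stand in for the namespace-annotation routing label described in the report:

```yaml
# Illustrative only: the label key/value below are hypothetical.
defaultRules:
  additionalRuleLabels:
    owner: team-infra
```

With 45.28.1, this owner label is rendered onto recording-rule groups such as kube-apiserver-availability.rules as well as onto alerts, which is what causes PrometheusRuleFailures to fire.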