Describe the bug
We had an unpinned version of AWS for Fluent Bit running. Yesterday, when the updated chart was released, it caused the error seen below:
```
Error: UPGRADE FAILED: cannot patch "aws-for-fluent-bit" with kind DaemonSet: DaemonSet.apps "aws-for-fluent-bit" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"aws-for-fluent-bit", "app.kubernetes.io/name":"aws-for-fluent-bit"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```
I believe this is the diff that caused the issue:
```diff
- app.kubernetes.io/instance: {{ include "aws-for-fluent-bit.namespace" . }}
+ app.kubernetes.io/instance: {{ include "aws-for-fluent-bit.fullname" . }}
```
Workarounds:
See the upstream Kubernetes discussion on immutable selectors: kubernetes/kubernetes#50808
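One workaround discussed in that thread is to delete the DaemonSet without cascading to its pods, then re-run the upgrade so Helm recreates the object with the new selector. A sketch, assuming the release name and `kube-system` namespace from this report:

```shell
# Orphan-delete the DaemonSet: the object is removed but its pods keep
# running, so log shipping is not interrupted during the swap.
# (--cascade=orphan requires kubectl >= 1.20; older versions use --cascade=false.)
kubectl delete daemonset aws-for-fluent-bit -n kube-system --cascade=orphan

# Re-run the upgrade; Helm recreates the DaemonSet with the new selector,
# and the new controller replaces the orphaned pods on its next rollout.
helm upgrade --install aws-for-fluent-bit eks/aws-for-fluent-bit \
  --version 0.1.14 -n kube-system
```

Note this does briefly leave the pods unmanaged, so it is best done outside of any concurrent node churn.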
Steps to reproduce
Run the following with chart version 0.1.13 already installed:

```
helm upgrade --install aws-for-fluent-bit eks/aws-for-fluent-bit --version 0.1.14 -n kube-system -f helm/fluent-bit.yml \
  --set 'cloudWatch.region'=us-east-1 \
  --set 'cloudWatch.logGroupName'=/test_flb \
  --set 'cloudWatch.logRetentionDays'=30
```
Expected outcome
The upgrade to 0.1.14 completes successfully, without a failed DaemonSet patch.
Environment
Additional Context:
Ran into this, and also pinned to 0.1.13. If this is intended behavior, the chart's major version should be bumped.
I was wondering whether it's better to move to the upstream Fluent Bit Helm chart instead: https://github.com/fluent/helm-charts/tree/main/charts/fluent-bit