Provide ability to ignore extra application objects (e.g. kustomize nameSuffixHash) #1629
The GC approach cannot be based purely on the running pods in the namespace, because that won't let you roll back your deployment to a previous version. A combination of time or number of non-referenced ConfigMaps, plus verification that they are not in use, seems about right to me.
It will, because an Argo CD rollback goes back to the manifests at a previous commit hash. At that hash, the old ConfigMap would be re-deployed.
I'm considering an emergency situation where, for whatever reason, you do have access to the cluster but not to Argo CD.
Even in that situation, you would do something like:
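A minimal sketch of that manual path, assuming a kustomize-based app directory and standard `git`/`kubectl` tooling (the commit hash, directory, and namespace below are placeholders):

```sh
# Check out the revision you want to roll back to (placeholder hash)
git checkout <previous-commit>

# Render the manifests the same way Argo CD would, and apply them directly,
# bypassing Argo CD entirely
kustomize build ./my-app | kubectl apply -n my-namespace -f -
```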
There's nothing really magical about the way Argo CD deploys, which is why the manual approach above works. But what I'm trying to argue is that it's unnecessary to implement more complicated logic involving N revisions of unreferenced ConfigMaps/Secrets combined with some notion of time. I don't believe that logic can easily be gotten right.
I think this is preferable syntax:

```yaml
metadata:
  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtra
```
I split out the GC request as a separate feature request: #1636. I'll leave this issue as the ability to have Argo CD ignore extra config which has the annotation.
Actually, this sounds similar to #1373. In fact, this would already solve my problem described there (StatefulSets and their PVCs with kustomize).
I actually think #1373 is still useful as a separate feature. I feel it's desirable to have the ability to ignore the OutOfSync condition of a resource as a separate decision from skipping pruning of a resource. To solve your issue where there are extra resources (caused by kustomize's commonLabels), I would propose something like:

```yaml
metadata:
  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtra
    argocd.argoproj.io/sync-options: SkipPrune
```
Agreed, both issues are in fact different. Very much like your proposal! 👍
@jessesuen, is this the Kustomize feature we're talking about, please: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/configGeneration.md
Yes, but the problem applies to other use cases as well (unwanted label propagation). ConfigMap generation with a name suffix hash just highlights the problem in a concrete way.
```yaml
kind: Kustomization
generatorOptions:
  annotations:
    argocd.argoproj.io/compare-options: Ignore
configMapGenerator:
- name: my-map
  literals:
  - foo=bar
```
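Since `generatorOptions.annotations` is applied to every generated resource, the rendered output of that kustomization should look roughly like the following; the hash suffix is illustrative, not a real value:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # kustomize appends a hash of the ConfigMap's contents to the base name
  name: my-map-7g2ff8dk9c
  annotations:
    argocd.argoproj.io/compare-options: Ignore
data:
  foo: bar
```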
Part of this work is in PR:
I have agreement with @alexec on syntax. Resources can be annotated like the following:

```yaml
metadata:
  annotations:
    compare-options: IgnoreExtraneous
    sync-options: Prune=false
```

In the future, these options might be expanded:

```yaml
metadata:
  annotations:
    compare-options: IgnoreExtraneous,IgnoreDifference=/spec/some/field
    sync-options: Prune=false,Validate=false,Force=true,ConfigGC=enabled
```
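For a concrete end-to-end sketch, assuming the shorthand keys above correspond to the fully prefixed `argocd.argoproj.io/` annotation names used earlier in the thread, the agreed options could be attached to every generated ConfigMap via kustomize's `generatorOptions` (names and values are illustrative):

```yaml
kind: Kustomization
generatorOptions:
  annotations:
    # treat leftover hashed ConfigMaps as expected; don't mark the app OutOfSync
    argocd.argoproj.io/compare-options: IgnoreExtraneous
    # never delete these resources when syncing with pruning enabled
    argocd.argoproj.io/sync-options: Prune=false
configMapGenerator:
- name: my-map
  literals:
  - foo=bar
```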
For those coming to this issue, please note that the ability to IgnoreDifference with an annotation (suggested by @jessesuen) was never added; there is still an open issue:
Kustomize has a feature to append a hash of the contents of a ConfigMap/Secret to its name. This feature allows `kubectl apply` to automatically trigger a rolling update of a Deployment whenever the contents of the ConfigMap/Secret change, while remaining a no-op when the contents do not change.
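To make the mechanism concrete, here is a minimal sketch (all names are illustrative): the Deployment refers to the ConfigMap by its base name, and `kustomize build` rewrites that reference to the generated hashed name, so a content change produces a new name and therefore a new pod template:

```yaml
# kustomization.yaml
kind: Kustomization
configMapGenerator:
- name: app-config
  literals:
  - LOG_LEVEL=debug
---
# deployment.yaml (excerpt) -- references the base name "app-config";
# kustomize build rewrites this to e.g. "app-config-<content-hash>"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  template:
    spec:
      containers:
      - name: app
        envFrom:
        - configMapRef:
            name: app-config
```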
This poses some challenges for Argo CD:

1. The nameSuffixHash leaves a bunch of old, unused ConfigMaps/Secrets lying around in the namespace. Argo CD subsequently considers the application OutOfSync (which, in fact, it is), because there are extra resources in the system which are no longer defined in git.
2. Although Argo CD can automatically delete the old ConfigMaps using the `--prune` flag to `argocd app sync`, doing so might be undesirable in some circumstances. For example, during a rolling update, Argo CD would delete the old ConfigMaps referenced by the previous Deployment's pods, even when the Deployment has not yet fully finished the rolling update.

Describe the solution you'd like
Some proposals:

1. Perform `argocd app sync`, but do not error out in the CLI when it detects that there are resources that require pruning. To allow users to describe their intent, I propose a new annotation which can be set on resources; if it is found on an "extra" resource object, we do not error in the CLI. Something like: `argocd.argoproj.io/ignore-sync-condition: Extra`

2. The annotation to ignore extra resources should also affect application sync status. In other words, if we find extra ConfigMaps lying around the namespace, and those extra ConfigMaps have the `argocd.argoproj.io/ignore-sync-condition: Extra` annotation, then we do not count them toward the overall OutOfSync condition of the application.

3. Today, pruning extra objects in Argo CD is all or nothing. As mentioned above, it is problematic to prune all of the extra objects early, because of the issue of deleting the previous ConfigMap out from underneath old Deployments which are still referencing it. Even if we introduce the annotation to ignore extra ConfigMaps, ConfigMaps will accumulate indefinitely until `argocd app sync --prune` is invoked. To mitigate this, Argo CD could implement some sort of ConfigMap/Secret garbage collection. Note that there are proposals, as part of Kubernetes core, to GC ConfigMaps/Secrets: kubernetes/kubernetes#22368. So I'm hesitant to implement anything specific to Argo CD.
Furthermore, whether a ConfigMap or Secret is still "referenced" might be very difficult to detect. One heuristic might be to look at all pods in the namespace and see whether any pod spec still references the ConfigMap/Secret as a volume mount, env var, etc. However, this may be tricky to get right, is subject to timing issues (e.g. a Deployment not yet fully rolled out), and might not work in all cases (e.g. if the ConfigMap was retrieved via a Kubernetes API query).
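A rough sketch of that heuristic using `kubectl` and `jq` (the namespace is a placeholder, and it only covers volume, env, and envFrom references, so it inherits the caveats above):

```sh
# List every ConfigMap name referenced by pod specs in the namespace
# via volumes, env valueFrom, or envFrom. A generated ConfigMap absent
# from this list is *probably* unreferenced -- subject to the timing
# and API-query caveats described above.
kubectl get pods -n my-namespace -o json \
  | jq -r '.items[].spec
      | [.volumes[]?.configMap.name,
         .containers[].env[]?.valueFrom.configMapKeyRef.name,
         .containers[].envFrom[]?.configMapRef.name]
      | .[]
      | select(. != null)' \
  | sort -u
```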