Migrating from client-side apply to server-side prevents field deletion #486

Closed · stefanprodan opened this issue Nov 11, 2021 · 14 comments · Fixed by #527
Labels
blocked/upstream (Blocked by an upstream dependency or issue), bug (Something isn't working)

Comments

@stefanprodan (Member) commented Nov 11, 2021

When migrating from Flux 0.17 (which used kubectl client-side apply) to Flux 0.18 or later (which uses server-side apply), fields removed from Git will not be removed from the cluster. The same issue occurs when migrating from manually applied resources to Flux.

Ref: kubernetes/kubernetes#99003

@stefanprodan added the blocked/upstream (Blocked by an upstream dependency or issue) and bug (Something isn't working) labels on Nov 11, 2021
@stefanprodan (Member, Author) commented Nov 11, 2021

To allow Flux to manage all fields, the current workaround is to delete the previous manager's managedFields entry, e.g.:

kubectl patch cm test --type=json -p='[{"op": "remove", "path": "/metadata/managedFields/1"}]'

After the managedFields are cleared, at the next reconciliation, kustomize-controller will take full ownership.
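
Note that the index in the patch path depends on which managedFields entry belongs to the previous manager; /metadata/managedFields/1 is just an example. A quick way to list the managers and their positions before patching (a hedged example against the same cm test object; recent kubectl versions require --show-managed-fields to display these entries):

kubectl get cm test --show-managed-fields -o jsonpath='{range .metadata.managedFields[*]}{.manager}{"\t"}{.operation}{"\n"}{end}'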

@stefanprodan pinned this issue Nov 11, 2021
@stefanprodan changed the title from "Migrating from client-side apply to server-side leaves stuck fields in the object" to "Migrating from client-side apply to server-side prevents field deletion" on Nov 11, 2021
@monotek commented Nov 24, 2021

Is there a way to tell kustomize-controller to become the sole manager via force, as described in https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts?

@stefanprodan (Member, Author)

@monotek we do that (see https://github.com/fluxcd/pkg/blob/main/ssa/manager_apply.go#L199), but it doesn't help in this case: you can't force ownership of non-existing fields.
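
For illustration, forcing conflicts at apply time looks roughly like this from the CLI (a hedged sketch; deployment.yaml is a placeholder manifest, and kustomize-controller is the field manager name used elsewhere in this thread):

kubectl apply --server-side --field-manager=kustomize-controller --force-conflicts -f deployment.yaml

--force-conflicts only transfers ownership of fields that are present in the applied manifest; fields that were removed from the manifest but are still recorded under another manager's Update entry stay untouched, which is the upstream limitation referenced above.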

@monotek commented Nov 25, 2021

So what's the upgrade path, coming from Flux 0.17.0?

The only way I was able to remove an env var from a pod was to delete the deployment with "--cascade=orphan" so Flux recreates the deployment.

Deleting the managedFields and reconciling the Kustomization had no effect on the deleted fields.

@stefanprodan (Member, Author)

Add the env vars to Git, remove the managedFields, then at the next reconciliation Flux will own them all. Then you can remove/add env vars via Git.
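
A compact version of that procedure, as a hedged sketch (testdeployment, the managedFields index, and the Kustomization name app are placeholders):

kubectl patch deployment testdeployment --type=json -p='[{"op": "remove", "path": "/metadata/managedFields/1"}]'
flux reconcile kustomization app --with-source

After that reconciliation kustomize-controller owns the fields, so removing an env var from Git removes it from the cluster as well.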

@monotek commented Nov 25, 2021

The env var is already in Git and was client-side applied with Flux 0.17.0.
We've updated to Flux 0.23.0 and now I want to remove the env var.
If I do, the env var stays in the deployment,
even if I delete the managedFields.

@monotek commented Nov 30, 2021

We were able to let Flux take ownership like this:

kubectl get deployment testdeployment -o yaml | kubectl apply --server-side -f -
kubectl patch deployment testdeployment --type=json -p='[{"op": "remove", "path": "/metadata/managedFields/1"}]'
kubectl patch deployment testdeployment --type=json -p='[{"op": "replace", "path": "/metadata/managedFields/0/manager", "value":"kustomize-controller"}]'

When we reconciled the Kustomization, the env var was finally removed :)

@monotek commented Dec 3, 2021

Helm controller seems to have the same problem :(

We're currently struggling to find a general solution for updating all resources created by the Flux controllers, so that they change reliably again.

Have you found a general workaround for prod clusters?
Looking at the current version, I know Flux should not be considered production-ready, but this is a bit of a problem for us.

Currently more and more apps need fixes by hand, as changes that remove single parts of a resource no longer change the resources. For example, a PDB that should have been updated by helm-controller kept the old matchLabels after an update; therefore the matchLabels did not match and the PDB did not work.

@stefanprodan (Member, Author)

@monotek helm-controller uses Helm packages, and Helm itself hasn't switched to server-side apply, so I don't see how any of this can affect HelmReleases.

@monotek commented Dec 4, 2021

Thanks for the update.

So the Helm problem might be something else :/
It looked like the same problem, since a check with "helm template . -f values.yaml" produced the right resources, but the PDB was not updated anyway.

How have you handled your clusters after updating Flux?
Did you never run into this issue with kustomize-controller?

@stefanprodan (Member, Author) commented Dec 4, 2021

It looked like the same problem, since a check with "helm template . -f values.yaml" produced the right resources, but the PDB was not updated anyway.

This has nothing to do with server-side apply or with kustomize-controller. Please open an issue on the Kubernetes repo.

@cpressland

We were having the same issue; here is a quick for-loop which seems to mitigate it well enough for us.

Bash / ZSH:

# note: this changes the current kubectl context's namespace as it iterates
for ns in $(kubectl get namespaces -o name | awk -F/ '{print $2}');
    do kubectl config set-context --current --namespace $ns;
    for deploy in $(kubectl get deploy -o name);
        # assumes the entry to drop sits at index 1 of managedFields
        do kubectl patch $deploy --type=json -p='[{"op": "remove", "path": "/metadata/managedFields/1"}]';
    done
done

Fish:

for ns in (kubectl get namespaces -o name | awk -F/ '{print $2}')
    kubectl config set-context --current --namespace $ns
    for deploy in (kubectl get deploy -o name)
        kubectl patch $deploy --type=json -p='[{"op": "remove", "path": "/metadata/managedFields/1"}]'
    end
end

@texasbobs commented Dec 30, 2021

We have some annotations that are not being removed. The cluster has only ever had Flux2 v0.23.0, so there was never a migration from a previous version of Flux. This ConfigMap was created from a file via the kubectl apply -f command.

From the cluster:

kubectl get cm storeconfig -o yaml

apiVersion: v1
data:
  STORE_NUM: SOD
  TIMEZONE: America/New_York
kind: ConfigMap
metadata:
  annotations:
    deploy.kuber.thdks/cr: undefined
    deploy.kuber.thdks/email: [email protected]
    deploy.kuber.thdks/ldapid: rtestuser03
    deploy.kuber.thdks/name: Testuser03, Rachel
    kube.kuber.thdks/name: kube-depot/storeconfig
    kube.kuber.thdks/version: v0.0.2
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"STORE_NUM":"SOD","TIMEZONE":"America/New_York"},"kind":"ConfigMap","metadata":{"annotations":{"deploy.kuber.thdks/cr":"newKuber","deploy.kuber.thdks/email":"Ashley_Jarman@
HomeDepot.com","deploy.kuber.thdks/ldapid":"rtestuser03","deploy.kuber.thdks/name":"Testuser03, Rachel","kube.kuber.thdks/name":"kube-depot/storeconfig","kube.kuber.thdks/version":"v0.0.1","reflector.v1.k8
s.emberstack.com/reflection-allowed":"true","reflector.v1.k8s.emberstack.com/reflection-auto-enabled":"true","thdks/deployed-by":"kuber","thdks/managed-by":"flux"},"name":"storeconfig","namespace":"default
"}}
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
    replicator/replicate: "true"
    thdks/deployed-by: kuber
    thdks/managed-by: flux
  creationTimestamp: "2021-12-30T17:52:58Z"
  labels:
    kustomize.toolkit.fluxcd.io/name: storeconfig
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: storeconfig
  namespace: default
  resourceVersion: "15499"
  uid: df90830b-f4d7-40ac-93cd-335605fd1915

From the repo:

apiVersion: v1
data:
  STORE_NUM: SOD
  TIMEZONE: America/New_York
kind: ConfigMap
metadata:
  annotations:
    deploy.kuber.thdks/cr: undefined
    deploy.kuber.thdks/email: [email protected]
    deploy.kuber.thdks/ldapid: rtestuser03
    deploy.kuber.thdks/name: Testuser03, Rachel
    kube.kuber.thdks/name: kube-depot/storeconfig
    kube.kuber.thdks/version: v0.0.2
    replicator/replicate: "true"
    thdks/deployed-by: kuber
    thdks/managed-by: flux
  name: storeconfig
  namespace: default

The reflector annotations are not being removed, and the patch commands listed above for the managedFields do not work.
It appears that the kustomize-controller apply itself completed:

{"level":"info","ts":"2021-12-30T18:26:07.706Z","logger":"controller.kustomization","msg":"server-side apply completed","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"storeconfig","namespace":"flux-system","output":{"ConfigMap/default/storeconfig":"unchanged"}}

@kingdonb (Member)

@texasbobs That's a known facet of this issue. Using kubectl apply -f [input-file.yaml], the Kubernetes client "takes ownership" of the resource at the top level, blocking removal of any fields by alternate managers like kustomize-controller.

Very sorry for this issue; we are still working on a complete resolution that covers all the varied ways this problem can creep in.
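
To confirm which manager still holds ownership of the stuck annotations, the managedFields can be inspected directly (a hedged example using the storeconfig ConfigMap from the report above; recent kubectl versions hide managedFields unless --show-managed-fields is passed):

kubectl get cm storeconfig --show-managed-fields -o yaml

The entry blocking removal is typically the one with operation: Update and a manager name like kubectl or kubectl-client-side-apply.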

kingdonb pushed a commit to kingdonb/bootstrap-repo that referenced this issue Jan 20, 2022
In preparation to install Flux 0.17.2, ensure that no kustomization
overlay rewrites the version of kustomize-controller or any other Flux
controllers.

I'll be installing this version, which predates SSA, to test that upgrading through the Flux SSA versions to the latest RC can proceed smoothly and no longer falls victim to fluxcd/kustomize-controller#486.
@stefanprodan unpinned this issue Jan 31, 2022