Add e2e & doc for prune order during sync
Signed-off-by: Siddhesh Ghadi <[email protected]>
svghadi committed Jan 4, 2024
1 parent 80ca563 commit e42d8e8
Showing 5 changed files with 139 additions and 1 deletion.
4 changes: 3 additions & 1 deletion docs/user-guide/sync-waves.md
@@ -37,7 +37,7 @@ Hooks and resources are assigned to wave zero by default. The wave can be negati
When Argo CD starts a sync, it orders the resources in the following precedence:
* The phase
* The wave they are in (lower values first)
* The wave they are in (lower values first for creation & update, higher values first for deletion)
* By kind (e.g. [namespaces first and then other Kubernetes resources, followed by custom resources](https://github.com/argoproj/gitops-engine/blob/bc9ce5764fa306f58cf59199a94f6c968c775a2d/pkg/sync/sync_tasks.go#L27-L66))
* By name
@@ -49,6 +49,8 @@ It repeats this process until all phases and waves are in-sync and healthy.
Because an application can have resources that are unhealthy in the first wave, it may be that the app never becomes healthy.
During pruning, resources from higher waves are processed before moving on to lower waves. If, for any reason, a resource in a wave is not removed/pruned, the resources in the subsequent waves are not processed. This ensures proper resource cleanup between waves.
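To make this ordering concrete, here is a minimal illustrative sketch (the resource names and kinds are hypothetical and not part of this commit): on creation, the wave-0 ConfigMap is applied before the wave-1 Job; on prune, the order reverses and the wave-1 Job is removed before the wave-0 ConfigMap.

```yaml
# Illustrative sketch only - names and kinds are hypothetical, not part of this commit.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # wave 0: created first, pruned last
  annotations:
    argocd.argoproj.io/sync-wave: "0"
data:
  key: value
---
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-task         # wave 1: created last, pruned first
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: task
          image: alpine:3.19
          command: ["sh", "-c", "echo done"]
```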
Note that there's currently a delay between each sync wave in order to give other controllers a chance to react to the spec change
that we just applied. This also prevents Argo CD from assessing resource health too quickly (against the stale object), causing
hooks to fire prematurely. The current delay between each sync wave is 2 seconds and can be configured via environment
43 changes: 43 additions & 0 deletions test/e2e/sync_waves_test.go
@@ -9,6 +9,8 @@ import (

"github.com/argoproj/gitops-engine/pkg/health"
. "github.com/argoproj/gitops-engine/pkg/sync/common"

v1 "k8s.io/api/core/v1"
)

func TestFixingDegradedApp(t *testing.T) {
@@ -100,3 +102,44 @@ func TestDegradedDeploymentIsSucceededAndSynced(t *testing.T) {
		Expect(SyncStatusIs(SyncStatusCodeSynced)).
		Expect(ResourceResultNumbering(1))
}

// resources should be pruned in the reverse of the creation order (sync waves order)
func TestSyncPruneOrderWithSyncWaves(t *testing.T) {
	ctx := Given(t).Timeout(60)

	// remove the finalizer to ensure proper cleanup if the test fails at an early stage
	defer RunCli("app", "patch-resource", ctx.AppQualifiedName(),
		"--kind", "Pod",
		"--resource-name", "pod-with-finalizers",
		"--patch", `[{"op": "remove", "path": "/metadata/finalizers"}]`,
		"--patch-type", "application/json-patch+json", "--all",
	)

	ctx.Path("syncwaves-prune-order").
		When().
		CreateApp().
		// creation order: sa & role -> rolebinding -> pod
		Sync().
		Wait().
		Then().
		Expect(SyncStatusIs(SyncStatusCodeSynced)).
		Expect(HealthIs(health.HealthStatusHealthy)).
		When().
		// delete files to remove resources
		DeleteFile("pod.yaml").
		DeleteFile("rbac.yaml").
		Refresh(RefreshTypeHard).
		IgnoreErrors().
		Then().
		Expect(SyncStatusIs(SyncStatusCodeOutOfSync)).
		When().
		// prune order: pod -> rolebinding -> sa & role
		Sync("--prune").
		Wait().
		Then().
		Expect(OperationPhaseIs(OperationSucceeded)).
		Expect(SyncStatusIs(SyncStatusCodeSynced)).
		Expect(HealthIs(health.HealthStatusHealthy)).
		Expect(NotPod(func(p v1.Pod) bool { return p.Name == "pod-with-finalizers" })).
		Expect(ResourceResultNumbering(4))
}
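For manual experimentation outside the e2e framework, roughly the same behaviour can be reproduced with an Application that points at this testdata directory. The sketch below is illustrative only (the repoURL, destination, and application name are placeholders, not part of this commit); with automated pruning enabled, deleting `pod.yaml` and `rbac.yaml` from the repo triggers a prune in which higher-wave resources are removed before lower-wave ones.

```yaml
# Illustrative only: repoURL, destination, and the application name are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: syncwaves-prune-order
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/placeholder-repo.git   # placeholder repo
    targetRevision: HEAD
    path: test/e2e/testdata/syncwaves-prune-order
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true   # prune removed resources; higher waves are pruned before lower ones
```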
15 changes: 15 additions & 0 deletions test/e2e/testdata/syncwaves-prune-order/README.md
@@ -0,0 +1,15 @@
## Test Scenario

This example tests that resources with sync waves are pruned in the reverse order of creation during a sync operation.

Resource creation happens in the following order:
- wave 0: sa & role
- wave 1: rolebinding
- wave 2: pod

They are set up in such a way that the resources will be cleaned up properly only if they are deleted in the reverse order of creation, i.e.
- wave 2: pod
- wave 1: rolebinding
- wave 0: sa & role

If the above delete order is not followed, the pod gets stuck in the Terminating state due to a finalizer that is supposed to be removed by a k8s container preStop lifecycle hook on delete, which only succeeds when the delete order is correct.
41 changes: 41 additions & 0 deletions test/e2e/testdata/syncwaves-prune-order/pod.yaml
@@ -0,0 +1,41 @@
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-finalizers
  annotations:
    argocd.argoproj.io/sync-wave: "2"
  # this finalizer is removed using the container preStop lifecycle hook on delete
  finalizers:
    - example.com/block-delete
spec:
  serviceAccountName: modify-pods-sa # sa with permissions to modify pods
  terminationGracePeriodSeconds: 15
  containers:
    - name: container
      image: nginx:alpine
      command: ["/bin/sh", "-c"]
      args: ["sleep 10h"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      lifecycle:
        # remove the finalizer so the pod can be deleted successfully
        preStop:
          exec:
            command:
              - /bin/sh
              - -c
              - |
                set -e
                SERVICE_ACCOUNT_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
                POD_URL="https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/pods/$POD_NAME"
                PATCH_PAYLOAD='[{"op": "remove", "path": "/metadata/finalizers"}]'
                curl -k -v -H "Authorization: Bearer $SERVICE_ACCOUNT_TOKEN" -H "Content-Type: application/json-patch+json" -X PATCH --data "$PATCH_PAYLOAD" $POD_URL
37 changes: 37 additions & 0 deletions test/e2e/testdata/syncwaves-prune-order/rbac.yaml
@@ -0,0 +1,37 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: modify-pods-sa
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods-role
  annotations:
    argocd.argoproj.io/sync-wave: "0"
rules:
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - delete
      - update
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-rolebinding
  annotations:
    argocd.argoproj.io/sync-wave: "1"
subjects:
  - kind: ServiceAccount
    name: modify-pods-sa
roleRef:
  kind: Role
  name: modify-pods-role
  apiGroup: rbac.authorization.k8s.io
