Maximum memory usages for DAGs settings #584
-
Hello! Upfront, I am relatively new to K8s and Helm, but I have been working to stand up your airflow helm chart and am so close. I feel like I have gotten a handle on how the templates map to the values file, but I am wondering if there is a specific way to set the maximum memory usage for the DAGs. Currently I am faced with this error for all of my DAGs:

> ApiException when attempting to run task, re-queueing. Reason: 'Forbidden'. Message: pods "helloworldhellotask.209f32e0ce204dd9a764f357c4b6fc92" is forbidden: [maximum memory usage per Pod is 16Gi. No limit is specified, maximum cpu usage per Pod is 4.

I believe the solution is to configure the KubernetesPodOperator as discussed in the airflow docs, but I am unable to find where this setting might be in the chart.

Additional notes: we are using the KubernetesExecutor (not Celery) for deployment. Hopeful for any thoughts on this. Thank you!

TL;DR: Our kubernetes policy states we must establish resources for each pod, but we don't know where to put that request.
-
@ddgtz you can set the default resource request/limit in your KubernetesExecutor Pod template, using the `airflow.kubernetesPodTemplate.resources` value, for example:

```yaml
airflow:
  kubernetesPodTemplate:
    ## resource requests/limits for the Pod template "base" container
    ## [SPEC] https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#resourcerequirements-v1-core
    resources:
      requests:
        cpu: "256m"
        memory: "2Gi"
      #limits:
      #  cpu: "256m"
      #  memory: "2Gi"
```
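Note that the error above says "No limit is specified," so the cluster policy likely requires an explicit `limits` block too; uncommenting the `limits` section in the chart values (with figures inside the 16Gi / 4-CPU per-Pod maximums) should satisfy it. If individual tasks need different resources than the chart-wide default, Airflow's KubernetesExecutor also accepts a per-task `executor_config`. A minimal sketch of the Airflow 1.10-style dict form (newer Airflow versions use a `pod_override` V1Pod instead); the values here are illustrative, not taken from this thread:

```python
# Hypothetical per-task resource override for Airflow's KubernetesExecutor
# (Airflow 1.10-style executor_config keys; newer versions use "pod_override").
# In a real DAG this dict is passed to an operator, e.g.:
#   PythonOperator(task_id="hello_task", python_callable=fn,
#                  executor_config=executor_config)
executor_config = {
    "KubernetesExecutor": {
        "request_cpu": "256m",
        "request_memory": "2Gi",
        "limit_cpu": "1",       # stays within the 4-CPU per-Pod maximum
        "limit_memory": "4Gi",  # stays within the 16Gi per-Pod maximum
    }
}

print(executor_config["KubernetesExecutor"]["limit_memory"])  # → 4Gi
```

A per-task override like this only affects the one task's worker Pod; the chart-level `airflow.kubernetesPodTemplate.resources` value remains the default for everything else.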