Does the Spark operator support dynamic resource allocation? #312
-
It is partially supported, in the sense that you can set min/max CPU and a memory limit for the executors defined in your SparkApplication. What we don't support is a dynamic number of executors. We may add that in the future, but there are a number of factors that need to be considered.
So: the k8s "paradigm" moves away from a cluster running a static number of nodes towards pods-on-demand, and resources for these pods can be set on a per-job basis. Is that sufficient for your use cases? If not, what specific scenario are you looking at that could be improved by dynamic allocation?
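For illustration, here is a minimal sketch of what per-job executor resources might look like on a SparkApplication. This assumes the resource schema from the Stackable docs; the name spark-pi and the example jar are placeholders, and field names (e.g. `replicas`) may differ between operator releases:

```yaml
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: spark-pi            # hypothetical name
spec:
  # sparkImage, mode and other required fields omitted for brevity
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: local:///stackable/spark/examples/jars/spark-examples.jar
  executor:
    replicas: 3             # a fixed executor count; this is the part that is NOT dynamic
    config:
      resources:
        cpu:
          min: 500m         # lower bound (maps to a Kubernetes request)
          max: "2"          # upper bound (maps to a Kubernetes limit)
        memory:
          limit: 2Gi
```

The executor count is fixed for the lifetime of the job, which is exactly the limitation discussed above; only the per-pod CPU and memory bounds are tunable per job.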
-
Have a look at the nightly docs, as this should make things a little clearer: https://docs.stackable.tech/home/nightly/spark-k8s/usage-guide/resources
See also https://docs.stackable.tech/home/nightly/concepts/resources and https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes for information on the units used by Kubernetes, which we (try to!) reflect in our resource definitions.
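As a quick illustration of those units, here is a plain Kubernetes container resources stanza (not Stackable-specific), taken in spirit from the linked Kubernetes page:

```yaml
# Plain Kubernetes resource units, as described in the linked docs.
resources:
  requests:
    cpu: 500m      # millicores: 500m = 0.5 CPU
    memory: 512Mi  # mebibytes (binary unit, 2^20 bytes each)
  limits:
    cpu: "2"       # whole cores can be given as plain numbers (quoted in YAML)
    memory: 2Gi    # gibibytes (2^30 bytes each)
```

In Stackable's resource definitions, the `cpu.min`/`cpu.max` pair corresponds roughly to the Kubernetes request/limit shown here.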
-
I haven't seen any mention of dynamic resource allocation for Spark jobs in the Stackable docs. Is it supported? Or is it more of a "use it at your own risk" type of thing?
Thanks.