This repository has been archived by the owner on Jan 9, 2020. It is now read-only.
Taints, tolerations, and node affinities seem to be the way to provide fine-grained scheduling control in a Kubernetes cluster. Maybe I'm blind, but I can't find any way to set pod-spec-level properties when running Spark. The closest I can get is `spark.kubernetes.node.selector.*`, but that doesn't provide the required level of control.
I could spin up another dedicated Kubernetes cluster just to host Spark, but I would prefer a dedicated node pool within the existing cluster, which adding support for tolerations would make possible.
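For context, this is roughly what the requested support would need to inject into the pod spec. The `dedicated=spark` taint key/value and the image name below are made-up examples, not anything Spark exposes today:

```yaml
# Sketch only: after tainting the dedicated pool, e.g.
#   kubectl taint nodes <node-name> dedicated=spark:NoSchedule
# a Spark driver/executor pod would need to carry a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: spark-executor-example
spec:
  tolerations:
    - key: dedicated         # matches the (assumed) taint on the pool
      operator: Equal
      value: spark
      effect: NoSchedule
  nodeSelector:              # combined with the existing
    dedicated: spark         # spark.kubernetes.node.selector.* support
  containers:
    - name: executor
      image: example/spark:latest   # placeholder image
```

The toleration lets the pods onto the tainted pool, and the node selector keeps them off every other pool, so the two together give a truly dedicated node pool.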
NB: per the README, all development work and discussion has moved to the main apache/spark repo.
You can probably do this with the new Pod Templates feature, which is currently only available in master and is targeted for release with Spark 3.0 sometime later in the year.
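For anyone landing here later, a rough sketch of how that might look. The template contents and file name are illustrative, and the `podTemplateFile` config names come from the in-progress Spark 3.0 docs, so they may shift before release:

```yaml
# executor-template.yaml -- illustrative; pass it to spark-submit with
#   --conf spark.kubernetes.executor.podTemplateFile=executor-template.yaml
# (and spark.kubernetes.driver.podTemplateFile for the driver).
apiVersion: v1
kind: Pod
spec:
  tolerations:
    - key: dedicated         # assumed taint key on the dedicated node pool
      operator: Equal
      value: spark
      effect: NoSchedule
```

Spark merges its generated pod spec on top of the template, so the toleration (or any other pod-spec field Spark doesn't manage itself) carries through without Spark needing a config knob per field.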