
Support for worker pods with extra containers #320

Closed
znd4 opened this issue Mar 19, 2021 · 4 comments
znd4 commented Mar 19, 2021

In one of the clusters that I work with, every pod is created with an additional proxy container, which messes up read_namespaced_pod_log because I get a "container name must be provided" error.

Would it be possible to infer the name of the worker container from the worker template and pass container=container_name? (Or, if that's too tricky, allow users to pass that in as a kwarg somewhere?)

Thanks :)

Also, if either of those options is feasible and worth adding, I'm down to try writing a PR.

jacobtomlinson (Member) commented:

Thanks for raising this @zdog234. That sounds like something we should fix.

We set the container name to dask-worker:

name="dask-worker",

So I guess we could add container="dask-worker" to the read_namespaced_pod_log call here:

log = await self.core_api.read_namespaced_pod_log(
    self._pod.metadata.name, self.namespace
)
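
Roughly like this, as a sketch (it assumes the worker container keeps the default dask-worker name, and uses the container keyword argument that read_namespaced_pod_log accepts):

log = await self.core_api.read_namespaced_pod_log(
    self._pod.metadata.name,
    self.namespace,
    container="dask-worker",  # assumption: the default container name hasn't been overridden
)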

I think the scheduler's container is also called dask-worker, so perhaps we should just change that to dask or something.

znd4 (Author) commented Mar 19, 2021

Hmm, I think the quickstart docs set it to "dask" in worker-spec.yaml.

However, because it's configurable, I'd be a bit worried about hard-coding it in case someone changed the container name for some reason. I don't know why someone would, so maybe it is okay to just hard-code it, or hard-code it but let users override it somewhere.

Sorry if that's poorly written; I just woke up. I can try to clarify later.

jacobtomlinson (Member) commented:

Hrm, good point.

Perhaps in dask-kubernetes/dask_kubernetes/core.py we should inspect the pod spec and extract the container name from there, since your sidecar container won't be listed in that spec.
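
Rough sketch of what I mean (illustrative only; it assumes the Pod object keeps a reference to its template as self.pod_template and that the first container in the template is the worker):

# containers from the user-supplied template; injected sidecars won't appear here
containers = self.pod_template.spec.containers
container_name = containers[0].name  # e.g. "dask-worker" or "dask"
log = await self.core_api.read_namespaced_pod_log(
    self._pod.metadata.name,
    self.namespace,
    container=container_name,
)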

jacobtomlinson (Member) commented:

The classic KubeCluster was removed in #890. All users will need to migrate to the Dask Operator. Closing.

jacobtomlinson closed this as not planned on Apr 30, 2024.