Extend KubeSpawner and its UI to handle Persistent Volume Claims #34
I'm not sure if it's intuitive to have the user specify PVCs and mount points, as opposed to just requesting resources (as they currently do). It runs a bit afoul of the broad goal of making infrastructure "disappear" IMO. Maybe we can have a single place to do advanced config, which can be enabled by jupyterhub/kubespawner#111 |
@foxish I'm not sure I understand the distinction you are drawing between resources and PVCs? In this case a PVC seems like a resource in the same regard as CPU/RAM/GPU. |
My point was that PVCs are a much more K8s-specific construct as opposed to CPU/RAM/GPU. As I understood, we wanted to hide the complexity of k8s concepts from the user - but this is still probably useful for advanced users. IIRC, JupyterHub in kubeflow already uses PVCs under the hood for notebook storage right now. Is there a use-case driving exposing arbitrary mounts and PVCs? |
So I think the use case is as follows. People have a bunch of existing NFS servers or other types of storage systems that they want to bind to their Jupyter notebooks so that they can access data on those shares. There are generally two approaches to this:

1. Attach the PVCs in the deployment config, which requires redeploying JupyterHub whenever storage changes.
2. Let users specify PVCs and mount points when they spawn a notebook.

I think I lean towards #2. The advantage of #2 is that you don't have to redeploy to attach a newly created PVC. Furthermore, #2 seems better suited to the case where we allow users to spawn multiple Jupyter notebooks, which is a direction I think we want to move in. My philosophy for Kubeflow is that we should expose K8s concepts/abstractions when it makes sense, rather than introducing new ones. PV/PVC is one of those abstractions that I think we should expose; I don't think we can avoid dealing with storage, and PV/PVC is a great abstraction. @pdmack any interest in trying to tackle it? |
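For concreteness, a PVC of the kind being discussed can be created with the official Python Kubernetes client along these lines; the claim name, size, and namespace below are illustrative, not values from this thread:

```python
from kubernetes import client, config

# Assumes this runs inside a pod with in-cluster credentials,
# e.g. the hub pod itself.
config.load_incluster_config()
v1 = client.CoreV1Api()

# Illustrative claim: name, size, and namespace are placeholders.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="claim-jovyan-workspace"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"},
        ),
    ),
)
v1.create_namespaced_persistent_volume_claim(namespace="kubeflow", body=pvc)
```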
@jlewi I'll have a go. Probably have more questions as I begin the journey. |
One idea I had was to generate the jupyterhub_spawner.py file using an init container. This would allow us to be more dynamic; e.g. we could make a K8s API request to see if there is a default storage class defined or not and adjust the config accordingly. |
Yeah, I mean KubeSpawner embeds the Python k8s client, so I was assuming that we could do some introspection from the hub pod itself. But an init container might be worthwhile as a separation of concerns and a place to generally store "outward"-facing logic. The hub UI will still need to reflect what comes back to it, though. |
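As a sketch of what that introspection could look like from the hub pod, here is one way to check for a default StorageClass with the Python client; the helper name is ours, not an existing API:

```python
from kubernetes import client, config

config.load_incluster_config()

# Annotation keys that mark a StorageClass as the cluster default;
# the beta key is used by older clusters.
DEFAULT_ANNOTATIONS = (
    "storageclass.kubernetes.io/is-default-class",
    "storageclass.beta.kubernetes.io/is-default-class",
)


def has_default_storage_class():
    """Return True if the cluster defines a default StorageClass."""
    storage_api = client.StorageV1Api()
    for sc in storage_api.list_storage_class().items:
        annotations = sc.metadata.annotations or {}
        if any(annotations.get(key) == "true" for key in DEFAULT_ANNOTATIONS):
            return True
    return False


# The generated config could then enable or disable dynamic PVC
# provisioning based on this check.
print(has_default_storage_class())
```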
/assign @pdmack |
Hi all, We would like to contribute to this issue and extend the current JupyterHub Spawner UI with the option of specifying PVCs for a Jupyter Notebook. We suggest that the corresponding UI element described above should be an extra entry in the 'Advanced Options' form, just like the rest of the resources. A user should be able to attach 0-N new, empty volumes to their notebook, so we will implement a dynamic UI component that allows adding and removing volumes prior to spawning the Notebook. These volumes will be able to back both the notebook's workspace and any additional data.
Regarding the default behavior, we suggest that only the default workspace volume be created. We can also have an option for the user to specify an existing PVC, referenced by name, rather than a new, empty one. Please comment if this is something that would be of interest to the community; we believe this would be very helpful, as @pdmack also suggested in a previous community meeting. If this is the case, the user will need to define the name of the existing PVC and the mount point for it.
We also plan to extend the aforementioned solution and concurrently deal with two additional open issues that share a similar context. Finally, the solution will allow for 3rd-party extensions to be implemented, inheriting from the proposed UI, so custom/advanced options provided by 3rd parties to K8s can be exposed to the end users of Kubeflow/JupyterHub. All of the above are related to #56 and aim at making the default JupyterHub Spawner UI more intuitive in terms of configuring persistent storage, which, as stated in the Kubeflow 0.4 Release Planning Document, is a major feature for v0.4. We are looking forward to your comments and feedback on the proposed solution. @jlewi @pdmack @inc0 |
In general the plan sounds good to me. At the Kubeflow contributor summit yesterday, a couple of key points emerged; chief among them, a desire to move towards a more K8s-native solution, which might eventually mean replacing JupyterHub. |
This raises the following question: can we continue to iterate on and improve the user-facing UI without blocking on replacing JupyterHub, which might take some time? @ioandr Any thoughts about this?
/cc @pdmack |
Yes, we believe we can. We agree completely that iterating on the user-facing UI and evolving the user experience should not block on replacing JupyterHub, especially before we have a crystal-clear view of what the shortcomings of the current JupyterHub-based approach are. Having said that, it makes sense to ensure that when we do have a clear view of where we want to go, we don't just have to rewrite everything, but can instead continue making piecewise improvements to the code base. So, a proposed plan of action is:

1. Extend KubeSpawner and its UI to handle PVCs, keeping the changes focused on the KubeSpawner implementation itself.
2. Once we have a clearer view of JupyterHub's shortcomings, replace it with a Kubeflow-specific web app that calls into the same KubeSpawner class.
We can focus on implementing Step 1 for now, which should be enough to considerably improve the offered functionality in an end-to-end workflow and to close this issue. In the following, I try to describe what the goal of our PR will be: extend the default Spawner UI with fields for workspace and data volumes, add the logic in `spawner.py` to provision new PVCs or retrieve existing ones through the K8s API, and make the PVC provisioner (storage class) configurable.
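Since the official Python Kubernetes client is synchronous, asynchronous handling of these API requests presumably has to avoid blocking the hub's event loop. A rough sketch of what that could look like, assuming in-cluster credentials (the helper name is ours, and the actual implementation may differ):

```python
import asyncio
from functools import partial

from kubernetes import client, config

config.load_incluster_config()
v1 = client.CoreV1Api()


async def list_existing_pvcs(namespace):
    # The official Python client is synchronous, so run the call in
    # the default executor to avoid blocking the hub's event loop.
    loop = asyncio.get_event_loop()
    pvcs = await loop.run_in_executor(
        None, partial(v1.list_namespaced_persistent_volume_claim, namespace)
    )
    return [item.metadata.name for item in pvcs.items]

# e.g., inside the spawner: existing = await list_existing_pvcs("kubeflow")
```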
We implemented the biggest part of this proposal for our recent demo. We plan to have a PR submitted for this issue within the next two weeks; I'll let @ioandr elaborate more on this as he makes progress with the implementation.
This is what we are aiming for in the plan laid out above. It is best to treat JupyterHub as a small wrapper around KubeSpawner, focus most of the changes in KubeSpawner, then switch to using a Kubeflow-specific web app which will call into the same KubeSpawner class, ideally with minimal or no changes to its interface. Finally, does it make sense to change the title of the current issue to "Extend KubeSpawner and its UI to handle Persistent Volume Claims", so it better reflects its scope? |
This sounds like a great plan to me! @pdmack any thoughts? |
@vkoukis @jlewi @pdmack @yuvipanda A K8s-native approach will likely need to scrap KubeSpawner entirely, based on my own attempt to work within the KubeSpawner framework to replace KubeSpawner's proxy with Ambassador, as suggested by @yuvipanda in issue #239. I don't make this statement out of hand, and the remainder of my comment is to substantiate this claim. It's entirely possible that JupyterHub/KubeSpawner have evolved since 0.8.1, or that I missed an approach that was obvious. The JupyterHub/KubeSpawner community may have contributed some clean solutions since 0.8.1; hopefully Yuvi has some info in this regard. The solution for #239 (which was merged) was to keep configurable-http-proxy, the default proxy used by KubeSpawner, as I documented in the PR. However, this solution falls short of what we're after, and authentication with JupyterHub remains an issue, since it basically means that client_id and client_secret values must be propagated downstream from Envoy or Ambassador at the point in time when the notebook is spawned by the jupyterhub pod (e.g. when the user selects 'Spawn'). My attempt to extend KubeSpawner by replacing ConfigurableHttpProxy with one that uses Ambassador is documented. I followed the documentation on 'Writing a custom proxy implementation', and the work is preserved in the unmerged PR #817. The custom proxy code can be found here. What made this intractable IMO is that JupyterHub and KubeSpawner use a number of flags to track and mutate the pending state of launching the notebook, which I've noted here. The code is a complex state machine whose transitions are arbitrated over these state flags - mostly in Spawner - including Spawner.pending, Spawner._start_pending, Spawner._stop_pending, Spawner._proxy_pending, and Spawner._spawn_future. These flags may be mutated by other classes - for example in User.py - to drive key state transitions. In the end, one has to weigh the complexity of JupyterHub's state machine against the complexity noted by @yuvipanda. |
BTW my note above agrees with @vkoukis' plan of action (retain the KubeSpawner API but bypass the implementation with a new one). |
Agree with @kkasravi. The Kubespawner implementation is where we need to place much of the new work (PVC, GPU type) and rip out some other stuff (auth, reverse proxy). |
how about we rip out Kubespawner?:) |
@kkasravi Your observations are very useful, and thanks for taking the effort to meticulously document all the problems you encountered. I understand replacing JupyterHub with a K8s-native implementation is not going to be a light task, and the main problem is that JupyterHub/KubeSpawner maintain a lot of mutable state, which is used to track the launch of a new notebook server. That being said, our initial goal is to get a first set of changes into the KubeSpawner implementation and its UI so that it can handle PVCs uniformly. Then, before we can replace JupyterHub with anything else, we will need a much better understanding of the way JupyterHub interacts with KubeSpawner; it will take a deeper discussion, and we will definitely keep you in the loop on any proposed changes. If you agree, it would be best if we focus the discussion in this issue on the "KubeSpawner + support for PVCs" part of the problem, and continue the more open-ended discussion of how JupyterHub maintains local state, and how to potentially replace it, either in #1630 or in a new issue that we can create just for this purpose. |
/area 0.4.0 |
* Extend default Spawner UI with two new elements: Workspace and Data Volumes
* Extend `config.yaml` of the default UI with default values and options for the new Spawner form fields
* Add asynchronous logic for handling K8s-native API requests in `spawner.py` (e.g. provision new PVCs, retrieve existing PVCs)
* Add `storageClass` ksonnet parameter to easily configure which PVC provisioner will be used - defaults to 'none'
* Remove `disks` and `notebookPVCMount` ksonnet parameters, along with related assignments in `jupyterhub_config.py`
* Remove buggy code for attaching multiple PVCs in `jupyterhub_config.py`
* Fix flake8 linting errors in `spawner.py`

Closes kubeflow#34
Closes kubeflow#541

Signed-off-by: Ioannis Androulidakis <[email protected]>
Signed-off-by: Ilias Tsitsimpis <[email protected]>
The Spec will contain only one field, specifying the PVC whose contents to edit. The Status will have two fields: `Conditions`, which will mirror the PodConditions of the underlying Pod, and `Ready`, a boolean that will be true only if the Pod has the Ready and ContainersReady conditions.

Signed-off-by: Kimonas Sotirchos <[email protected]>
Reviewed-by: Yannis Zarkadas <[email protected]>
Github-PR: kubeflow#34
The PVCViewers controller will only create a Pod, a Service, and a VirtualService for each PVCViewer CR.

Signed-off-by: Kimonas Sotirchos <[email protected]>
Reviewed-by: Yannis Zarkadas <[email protected]>
Github-PR: kubeflow#34
The manager will also have a culling controller. This controller will periodically reconcile PVCViewers and delete idle ones from the API Server.

Signed-off-by: Kimonas Sotirchos <[email protected]>
Reviewed-by: Yannis Zarkadas <[email protected]>
Github-PR: kubeflow#34
The controller will also use environment variables to determine: 1. which image to use for the PVCViewer pod, 2. what the endpoint for a PVCViewer will be, 3. which image pull policy to use (we want Always when in dev mode).

Signed-off-by: Kimonas Sotirchos <[email protected]>
Reviewed-by: Yannis Zarkadas <[email protected]>
Github-PR: kubeflow#34
This commit modifies the Makefile to download controller-gen to a folder inside the pvcviewer-controller. The goal is to make sure the specific version of controller-gen is used to generate manifests for the pvcviewer-controller.

Signed-off-by: Kimonas Sotirchos <[email protected]>
Reviewed-by: Yannis Zarkadas <[email protected]>
Github-PR: kubeflow#34
Related-to: arrikto/dev#77
Related-to: arrikto/dev#333
In the JupyterHub spawner you can specify the Docker image and resource requirements. It would be nice if you could also specify volume claims and mount points, so that users can easily attach shared volumes to their notebooks.
According to jupyterhub/jupyterhub/issues/1580 we can do this just by adding some properties to our KubeSpawner.
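For reference, the KubeSpawner properties in question look roughly like this in `jupyterhub_config.py`; the claim name and mount path are placeholders:

```python
# jupyterhub_config.py
# Attach an existing PVC to every spawned notebook pod.
# "shared-nfs-claim" and the mount path are example values.
c.KubeSpawner.volumes = [
    {
        "name": "shared-data",
        "persistentVolumeClaim": {"claimName": "shared-nfs-claim"},
    },
]
c.KubeSpawner.volume_mounts = [
    {"name": "shared-data", "mountPath": "/home/jovyan/shared"},
]
```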