GNIP-58: Add Kubernetes Deployment #3924
Comments
It looks great! Some initial feedback: TODO
I'm going to go through the article myself too, to better understand how that would work
That would be a great improvement!
Part of this issue should be placed in the documentation imho as it is a very good explanation for the users
Apparently the concept of ingress in a kubernetes deployment puts in place a model where the application is always accessed through an ip/hostname which resolves to the docker daemon. For instance, minikube behaves exactly as docker-machine does. There are cases outside the context of kubernetes where the model is different, and where the run-time configuration of GeoNode needs to know the address at which the application will be accessed, for instance in a native docker environment (linux or docker4*). Removing that ip magic stuff here would break docker-compose in those cases.
There is a plan from the last PSC meeting to consolidate the github repos for the images of the stack into a single repository, where maybe this work could also be placed. If you have time, I would invite you to join us in the next meeting of the PSC to discuss it. Many thanks, I love this improvement.
Well, I forgot to say that I second @afabiani's suggestion to add an initial CircleCI configuration for CI/CD. I can obviously help with this, as it was already on our todo list for docker-compose.
Hi @eggshell, I think there is some overlap with this: #3707 (which refers to this implementation of a GeoNode deployment using docker-compose). As said in that other issue, having a stable and officially supported setup to deploy GeoNode is very much needed. It would be great if we could concentrate efforts on this goal to avoid the proliferation of multiple half-baked scripts in the repo (here is a small checklist of what a fully-baked implementation would look like, in my mind). A few questions:
It would be helpful (at least for me, as I'm not familiar with Kubernetes) if you could show us the steps needed to configure and deploy the stack with your setup (like this). I think that due to the nature of GeoNode (mostly used for relatively small deployments, often with relatively low sysadmin skills), we should aim for simplicity before extreme scalability. But anyway, you're suggesting to start by consolidating the docker images, which as Francesco said we were already willing to do. So, as @francbartoli suggested, let's talk about this during the next PSC if that works for you. Thanks for sharing!! Cheers
This is great. I will definitely have a look at it in the next few weeks. Thanks!
@francbartoli @olivierdalang I would love to participate in the next PSC to talk about all of this. How do I get an invite? I don't see a schedule in the contributing.md but maybe I am just missing something. |
@eggshell we'll send it shortly, as soon as it is scheduled
@eggshell I'd move this to a GNIP for now, what do you think?
@capooti I thought this was already considered a GNIP, since it was labelled as a feature, but now I realize there is even a gnip label, so I will add it as well. I've also prefixed the title with GNIP.
@eggshell is this PR abandoned? If so, what a pity! |
If I remember correctly 52°North wanted to work on that. I will ping them. Maybe they will come by and comment on this GNIP and we can reopen it. |
@gannebamm that would be great! There is also #3911, which looks like it was declined because of missing tests?!
I just want to mention our work on this topic: https://github.com/zalf-rdm/geonode-k8s in case somebody stumbles upon this post looking for a Helm chart
@mwallschlaeger great to know you're working on it! I think it's time to resurrect this GNIP. |
Overview
Currently, there is no official path for deploying GeoNode onto a kubernetes cluster, whether managed by a cloud service (IBM Kubernetes Service, Google Kubernetes Engine, Azure Kubernetes Service, etc.) or rolled in-house. Kubernetes has more or less emerged as the de facto container orchestration system, so providing this deployment method would be a good step towards offering more options for running GeoNode in production.
I have gone through the process of converting the docker-compose.yml into some corresponding kubernetes manifest yamls (#3911), which should be platform-agnostic and can serve as a base for maturation into a fully baked production deployment solution. This issue can serve as a place to discuss kubernetes concepts and the architecture of the GeoNode kubernetes deployment as it stands, and to keep track of to-dos.

Architecture
This is the architecture of the GeoNode kubernetes deployment as it stands right now.
Concepts
Pods
A pod in kubernetes is a group of one or more containers. They share storage and network, are co-located and co-scheduled, and run in a shared context (credit: k8s official docs on pods). The list of pods in the GeoNode deployment is:
Each pod in this deployment contains 1 container, with the exception of the django and celery pods, which each have an additional docker-in-docker container. That container should be able to go away with more modification. More on that below.

Services and Service Discovery
Kubernetes services target pods and expose their functionality to other pods, providing a policy which pods use to communicate with one another. This is usually done by exposing a port on a container, the same way you would open a port on traditional infrastructure.
Services are assigned a name, and this combination of service name and port is injected into DNS records cluster-wide so these names can be resolved.
Example service definition:
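The original example was lost in extraction; here is a minimal sketch of what such a service could look like. The name django and port 8000 are illustrative assumptions, not taken from the actual manifests in #3911:

```yaml
# Hypothetical service exposing the django pod on port 8000.
# Other pods can reach it at http://django:8000 via cluster DNS.
apiVersion: v1
kind: Service
metadata:
  name: django
spec:
  selector:
    app: django        # targets pods labelled app=django
  ports:
    - protocol: TCP
      port: 8000       # port exposed by the service
      targetPort: 8000 # port the container listens on
```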
Deployments
Each pod is defined in a kubernetes deployment. A deployment is a resource type in kubernetes which specifies desired state for a pod. The deployment makes sure a pod is always running. Has a pod crashed? Restart it. Want to scale up/down the number of replicated containers in a pod? Go ahead and do it. Has the desired state changed? Terminate the old pod and spin up a new one.
Example deployment definition:
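The original example was lost in extraction; a minimal sketch follows. The deployment name, labels, image tag, and port are illustrative assumptions:

```yaml
# Hypothetical deployment keeping one django pod running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: django
spec:
  replicas: 1                # desired number of pod replicas
  selector:
    matchLabels:
      app: django
  template:                  # pod template: the desired state
    metadata:
      labels:
        app: django
    spec:
      containers:
        - name: django
          image: geonode/django:latest   # illustrative image tag
          ports:
            - containerPort: 8000
```

If a pod created from this template crashes or its spec changes, the deployment controller replaces it to restore the declared state.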
Persistent Volumes
You could provision a container volume in the spec above, but in the case of GeoNode, we want this data to have a lifecycle independent of any individual pod that uses it. Say a pod's spec changes. It would need to be terminated and recreated, and if its volumes are provisioned with the pod, then that data is lost. The PersistentVolume resource type allows us to provision some storage space in a cluster for this use case.
Example PersistentVolume definition:
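The original example was lost in extraction; here is a minimal sketch. The name, size, and hostPath are illustrative assumptions (hostPath suits a single-node setup like minikube; a real cluster would typically use a cloud or network volume):

```yaml
# Hypothetical PersistentVolume backing the geoserver data directory.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: geoserver-data-pv
spec:
  capacity:
    storage: 10Gi            # illustrative size
  accessModes:
    - ReadWriteOnce          # mountable read-write by a single node
  hostPath:
    path: /data/geoserver    # illustrative path on the node
```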
PersistentVolumeClaims are bound to PersistentVolumes and referenced by pods.
Example PersistentVolumeClaim definition:
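The original example was lost in extraction; a minimal sketch, with names and size as illustrative assumptions matching the PersistentVolume above:

```yaml
# Hypothetical claim that binds to a matching PersistentVolume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: geoserver-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # must fit within the PV's capacity
```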
Then, a deployment spec's volumes section would reference this PersistentVolumeClaim:
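The referenced snippet was lost in extraction; it would look something like this (claim and volume names are illustrative assumptions):

```yaml
# Inside the deployment's pod template spec: declare the volume
# and point it at the hypothetical claim defined earlier.
spec:
  volumes:
    - name: geoserver-data
      persistentVolumeClaim:
        claimName: geoserver-data-pvc
```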
Finally, a deployment spec's container section defines where this volume should be mounted.
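That snippet was also lost in extraction; a sketch follows. The image tag and mount path are illustrative assumptions, not taken from the actual manifests:

```yaml
# Inside the same pod template spec: mount the declared volume
# into the container at the geoserver data directory.
containers:
  - name: geoserver
    image: geonode/geoserver:latest   # illustrative image tag
    volumeMounts:
      - name: geoserver-data          # matches the volume name above
        mountPath: /geoserver_data/data  # illustrative data directory
```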
Ingress
Kubernetes Ingress resources manage external access to a service or services deployed to a kubernetes cluster (typically HTTP). Ingress can provide load balancing, SSL termination, and name-based virtual hosting. Right now I just have it set to provide external access, so it is pretty simple.
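The example ingress manifest was lost in extraction; here is a minimal sketch. The hostname, service name, port, and body-size value are illustrative assumptions; the annotation key shown is the one used by the ingress-nginx controller:

```yaml
# Hypothetical ingress routing external HTTP traffic to the django service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: geonode
  annotations:
    # ingress-nginx translation of nginx's client_max_body_size,
    # raised so large layer uploads are not rejected
    nginx.ingress.kubernetes.io/proxy-body-size: "100m"
spec:
  rules:
    - host: geonode.example.com      # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: django         # illustrative backend service
                port:
                  number: 8000
```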
The annotations section here allows the ingress resource to be configured at creation time to correctly set the client_max_body_size nginx parameter so layer uploads in GeoNode are possible.

Hacks Used
Init Containers
Init Containers run before application containers are started, can have their own container image, and typically contain setup scripts or utilities not present in the application container image. I used an Init Container in the geoserver deployment definition to ensure the geoserver_data persistent volume has its initial data. This Init Container runs anytime a geoserver deployment is configured, but the -n flag passed to cp ensures data is not overwritten if a given file is already present. This ensures fault tolerance on the part of the geoserver deployment's persistent volume.

docker-in-docker
The Django and Celery deployments each have a docker-in-docker or dind container. The docker.sock of the dind container is then bound to the other application container present in the pod so that it may run the codenvy/che-ip container and not crash. I really just included this as a stopgap to get GeoNode running in kubernetes. It seems to be endemic to the docker-compose local deployment, but I wanted more information and discussion about this topic before removing all the plumbing. Thoughts?

TODO
- scenarios (local on minikube, managed k8s cluster, different providers, etc)
- dind stuff, as it doesn't appear to communicate with other services or do any real work in kubernetes.
- is piped through.
What am I missing? Thanks for reading!