This repository has been archived by the owner on Feb 5, 2021. It is now read-only.

Existing Kubernetes cluster

The Cellery runtime can be installed onto an existing Kubernetes cluster. It requires a MySQL server to store the control plane state, and the MySQL server can be started within Kubernetes either with or without a persistent volume mounted.

Hence there are two modes: persisted mode and non-persisted mode. The runtime also requires network or local file system access to store its runtime artifacts.

The Cellery installer has been tested on Kubernetes environments with the following component versions:

  • Kubernetes: 1.14.x
  • Docker: 19.03.0-rc2 (Edge channel)
  • MySQL Server: 5.7
  • NFS Server: NFSv3 compatible

Prerequisites

Mandatory

Optional

  • Ballerina 1.0.3. If Ballerina 1.0.3 is not installed, Cellery will execute Ballerina using Docker.

Tested Kubernetes Providers

  1. GCP GKE

  2. Docker for Desktop on macOS

  3. Kube Admin

  4. Minikube

1. GCP GKE

The Cellery system can be installed into an existing GCP setup in both modes: persistent volume and non-persistent volume. An NFS share and a set of MySQL databases are required to proceed with the Cellery installation if users opt for the persistent volume.

Follow the steps below to configure your GCP setup and install the Cellery system onto it.

  1. Create a GKE Kubernetes cluster.
    Note: Follow steps 2, 3 and 4 only if you want to create the Cellery system with a persistent volume.
  2. Create an NFS server and a MySQL server in the same GCP compute region and zone.
  3. Prepare the SQL script: users can find the SQL script file in the Cellery distribution repository. Make sure to replace the database username (DATABASE_USERNAME) and the password (DATABASE_PASSWORD) accordingly.
  4. Import the script prepared in step 3 into the MySQL database.
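Steps 3 and 4 can be sketched as follows. This is a minimal sketch: the script name init.sql and the credentials are assumed stand-ins, so substitute the actual SQL script file from the Cellery distribution repository and your own values.

```shell
# Fill in the credential placeholders in the SQL script from the Cellery
# distribution repository ("init.sql" is an assumed stand-in name).
DB_USER="celleryadmin"   # your chosen database username
DB_PASS="cellerypass"    # your chosen database password
sed -e "s/DATABASE_USERNAME/$DB_USER/g" \
    -e "s/DATABASE_PASSWORD/$DB_PASS/g" \
    init.sql > init-prepared.sql
# Then import the prepared script into the MySQL server created in step 2:
#   mysql -h <MYSQL_HOST> -u root -p < init-prepared.sql
```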

2. Docker for Desktop on macOS

The Cellery system can be installed into the Docker for Desktop setup in both modes: persistent volume and non-persistent volume. Users need to increase the Docker Desktop resources to support the Cellery runtime. The minimum resources required by the Cellery runtime are:

  • CPU: 4 cores or more
  • Memory: 8 GB or more

Note:

  • If users want to install Cellery with a persistent volume, add /var/tmp/cellery to the Docker Desktop file sharing list to support persistent deployments.
  • Docker for Desktop version 2.1.3.0 (edge) does not allow adding /var/tmp/cellery as a Docker file share. Users can overcome this problem by adding the /var/tmp/cellery path to filesharingDirectories in /Users/<username>/Library/Group\ Containers/group.com.docker/settings.json.
  • Users may need to restart Docker Desktop to update the ingress-nginx EXTERNAL-IP after deploying the Cellery runtime. (This is a known issue in Docker Desktop.)

3. Kube Admin

Tested on Ubuntu 18.04. Cellery supports both persistent and non-persistent runtime deployment on kubeadm-based Kubernetes.

Note: To run in persistent mode, follow the instructions below:

  • Set umask to 0000 if it is not already set.
  • Create the folder /var/tmp/cellery and give it full permissions so that the artifacts shared with the Cellery runtime can be created and modified.
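The two preparation steps above can be sketched as follows, run on the kubeadm host:

```shell
# Allow newly created files to be writable by the Cellery runtime.
umask 0000
# Create the shared artifact directory with full permissions.
mkdir -p /var/tmp/cellery
chmod -R 777 /var/tmp/cellery
```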

4. Minikube

Cellery only supports non-persistent mode deployment on Minikube.

Cellery setup with an existing Kubernetes cluster

Interactive Method

In this installation option, the Cellery CLI uses the default Kubernetes cluster configured in the $HOME/.kube/config file. As mentioned above, the runtime can be installed with either a persistent volume or a non-persistent volume.

i. Execute the cellery setup command to configure the Cellery runtime. This will prompt a list of selections. By selecting Create, users can set up the Cellery runtime:

  $ cellery setup
  [Use arrow keys]
  ? Setup Cellery runtime
      Manage
    ➤ Create
      Modify
      Switch
      EXIT

ii. From the selections available for environment type, select Existing cluster to proceed with the installation on the existing cluster:

  $ ✔ Create
  [Use arrow keys]
  ? Select an environment to be installed
      Local
      GCP
    ➤ Existing cluster
      BACK

1. Persistent volume

Cellery needs a persistent volume to keep the MySQL server files and the WSO2 APIM deployable artifacts. This option preserves the state of the Cellery system, so restarting the Cellery runtime will not cause any loss of data.

Once the option Existing cluster is selected, the CLI will prompt to select whether to use Persistent Volume or not:

    $ cellery setup
    ✔ Create
    ✔ Existing cluster
    [Use arrow keys]
    ? Select the type of runtime
     ➤ Persistent volume
       Non persistent volume
       BACK

1.1. Access to NFS

If users have access to an NFS server, they can use it as the persistent volume; otherwise they can proceed with the default file system mount. Based on this, select Yes or No for the Use NFS server option.

Note: If you are trying this on Docker for Desktop and you do not have NFS, you will be required to add /var/tmp/cellery to the Docker Desktop file sharing list as mentioned above.

 $ cellery setup 
  ✔ Create
  ✔ Existing cluster
  ✔ Persistent volume
  Use the arrow keys to navigate: ↓ ↑ → ←
  ? Use NFS server:
   ▸ Yes
     No
     BACK

1.2. Access to MySQL Server

In this step, users can provide the database username/password of the MySQL instance running in their environment, as shown below.

  $ cellery setup
   ✔ Create
   ✔ Existing cluster
   ✔ Persistent volume
   ✔ Yes
   ? NFS server ip:  192.168.2.1
   ? File share name:  data
   ? Database host:  192.168.2.100
   
   ? Mysql credentials required
   Username: mysqlroot
   Password:
   Confirm Password:

Once the above steps are performed, there will be an option to select the Basic or Complete installation package. Then continue to configure host entries to complete the setup.

2. Non-Persistent Volume

This mode allows users to start the Cellery system without any need for access to NFS/file system or MySQL database storage. However, it does not preserve the state of the Cellery system: once the Cellery system is restarted, any changes made during runtime are lost, and observability and APIM changes are not stored either. This mode is ideal for development and quick test environments.

i. Select the option Non persistent volume and continue.

  $ cellery setup
  ✔ Create
  ✔ Existing cluster
  [Use arrow keys]
  ? Select the type of runtime
      Persistent volume
    ➤ Non persistent volume

ii. Then select the setup type and continue.

 $ cellery setup
 ✔ Create
 ✔ Existing cluster
 ✔ Non persistent volume
  [Use arrow keys]
  ? Select the type of runtime
    ➤ Basic
      Complete

3. Selecting ingress mode

Users should select the suitable ingress mode, NodePort or load balancer, based on their runtime (kubeadm, Minikube, GCP, Docker for Desktop).

    ✔ Create
    ✔ Existing cluster
    ✔ Non persistent volume
    ✔ Basic
    [Use arrow keys]
    ? Select ingress mode
      ➤ Node port [kubeadm, minikube]
        Load balancer [gcp, docker for desktop]
        BACK

3.1. NodePort

If the ingress mode is NodePort, users can either use the default NodePort IP address or provide a custom NodePort IP address.

    ✔ Create
    ✔ Existing cluster
    ✔ Non persistent volume
    ✔ Basic
    ✔ Node port [kubeadm, minikube]
    ? NodePort Ip address:  [Press enter to use default NodePort ip address]

Inline Method

Each combination of persistent volume and NFS access maps to a command as follows:

  1. Persistent volume: No; access to NFS storage: N/A
     Command: cellery setup create existing [--complete] [--loadbalancer | --nodePortIp <NODEPORT_IP_ADDRESS>]
     By default the basic setup is created; if the --complete flag is passed, the complete setup is created instead. If the Kubernetes cluster supports a cloud-based load balancer (e.g. GCP, Docker for Mac), users have to pass the --loadbalancer flag. If the ingress type of the runtime is NodePort, users can pass a custom NodePort IP address.

  2. Persistent volume: Yes; access to NFS storage: No
     Command: cellery setup create existing --persistent [--complete] [--loadbalancer | --nodePortIp <NODEPORT_IP_ADDRESS>]
     In this case, the file system should be mounted on or accessible by the Kubernetes cluster. By default the basic setup is created; if the --complete flag is passed, the complete setup is created instead. The --loadbalancer and --nodePortIp flags behave as in case 1.

  3. Persistent volume: Yes; access to NFS storage: Yes
     Command: cellery setup create existing [--complete] [--dbHost <DB_HOST> --dbUsername <DB_USER_NAME> --dbPassword <DB_PASSWORD> --nfsServerIp <IP_ADDRESS> --nfsFileShare <FILE_SHARE>] [--loadbalancer | --nodePortIp <NODEPORT_IP_ADDRESS>]
     In this case, an external database and an NFS server are available, and the Kubernetes cluster can connect to them to provide persistence. This is the recommended mode for production deployments.
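For example, the three cases could be invoked as follows. The IP addresses, hostnames, and credentials shown are placeholders for illustration only.

```shell
# Non-persistent basic setup on a cluster with a cloud load balancer:
cellery setup create existing --loadbalancer

# Persistent complete setup backed by the local file system, NodePort ingress:
cellery setup create existing --persistent --complete --nodePortIp 192.168.2.1

# Persistent setup backed by an external MySQL database and NFS server
# (recommended for production):
cellery setup create existing --complete \
    --dbHost 192.168.2.100 --dbUsername celleryadmin --dbPassword cellerypass \
    --nfsServerIp 192.168.2.1 --nfsFileShare data --loadbalancer
```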

Configure host entries

Once the setup is complete, the Cellery system hostnames should be mapped to the IP address of the ingress.

For this purpose, it is currently assumed that the Kubernetes cluster has a functioning nginx ingress controller.

Run the following kubectl command to get the IP address.

 kubectl get ingress -n cellery-system

Then update the /etc/hosts file with that IP address as follows.

 <IP Address> wso2-apim cellery-dashboard wso2sp-observability-api wso2-apim-gateway cellery-k8s-metrics idp.cellery-system pet-store.com hello-world.com my-hello-world.com

Note:

  1. The IP address for Docker for Desktop is 127.0.0.1, and for Minikube it is 192.168.99.100.

  2. In some pre-configured setups (e.g. setups created with the kubeadm command line tool), it might be required to specifically find the publicly exposed IP address(es) of the nodes and update the externalIPs section of the ingress-nginx Kubernetes service using the kubectl edit command.
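The host entry update can be sketched as follows. The 127.0.0.1 address is the Docker for Desktop default; substitute the address reported by kubectl get ingress -n cellery-system for your cluster.

```shell
# Ingress IP address (127.0.0.1 is the Docker for Desktop default;
# use the ADDRESS column from `kubectl get ingress -n cellery-system`).
INGRESS_IP="127.0.0.1"
HOSTS_ENTRY="$INGRESS_IP wso2-apim cellery-dashboard wso2sp-observability-api wso2-apim-gateway cellery-k8s-metrics idp.cellery-system pet-store.com hello-world.com my-hello-world.com"
# Append the entry to /etc/hosts (needs root):
#   echo "$HOSTS_ENTRY" | sudo tee -a /etc/hosts
echo "$HOSTS_ENTRY"
```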


Trying Out

Once the installation process is complete, you can try out the quick start with Cellery.

Cleaning Up

Please refer to the readme on managing Cellery runtimes for details on how to clean up the setup.

What's Next?