Provides easy deployment of a Kubernetes cluster on ARM architecture (for example, Raspberry Pi). The setup is streamlined and largely automated, giving developers hassle-free management of the cluster.
kube-arm uses Ansible to automate all of the setup tasks required to get the Kubernetes cluster up and running. The core requirements for setting up and managing the cluster are provided by the Ansible playbooks.
- It is recommended to have at least two nodes in your cluster.
- We recommend installing HypriotOS on each node, as it comes pre-installed with Docker and is optimized to run it well.
- If you are using something other than HypriotOS, you must set up docker-engine on each node.
- [Optional] After the OS has been flashed, configure the hostname on each node.
- Set up SSH keys for all nodes:
  - Generate a public/private RSA key pair for each node.
  - Copy the control PC's public SSH key to each node:

        # Copy for all nodes
        ssh-copy-id pirate@<node-ip>
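The key setup above can be sketched as a small script. The node IPs below are hypothetical, and the loop only echoes the `ssh-copy-id` command so the sketch can be run without live nodes — remove the `echo` to actually copy keys:

```shell
#!/bin/sh
# Sketch of the SSH key setup; node IPs are placeholders for your cluster.
NODES="192.168.1.10 192.168.1.11"

# 1. Generate an RSA key pair on the control PC (skipped if one already exists).
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -q -N "" -f "$HOME/.ssh/id_rsa"

# 2. Copy the public key to every node (HypriotOS default user: pirate).
#    "echo" keeps this a dry run; drop it to perform the copy.
for ip in $NODES; do
  echo ssh-copy-id "pirate@$ip"
done
```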
- Clone kube-arm onto a control PC (we will be executing the Ansible playbooks from this PC).
- Install Ansible on the control PC. For installation steps, check the Ansible docs.
- Prepare the hosts.ini file. You can refer to the sample hosts file for help.
- Run the cluster-init.yaml playbook:

      ansible-playbook -i hosts.ini cluster-init.yaml
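For orientation, a minimal `hosts.ini` might look like the sketch below. The node names, addresses, and the `[kube-masters]`/`[kube:children]` groups are assumptions here; `kube-workers` and `[kube:vars]` are the group names referenced elsewhere in this README. Check the project's sample hosts file for the authoritative layout.

```ini
# Sketch only -- verify group names against the project's sample hosts file.
[kube-masters]
kube-01 ansible_host=192.168.1.15 ansible_user=pirate

[kube-workers]
kube-02 ansible_host=192.168.1.16 ansible_user=pirate

[kube:children]
kube-masters
kube-workers

[kube:vars]
ingress_controller_node=kube-02
```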
Most of the playbooks are self-explanatory. This section presents examples for the playbooks that require additional steps or parameters to run successfully.
- Install the ingress controller (Traefik): if the `ingress_controller_node` variable is defined in `hosts.ini` when the `cluster-init` playbook is run, the ingress controller is installed as part of that playbook. It can also be installed separately:
  - Define the `ingress_controller_node` variable in `hosts.ini`, under the `[kube:vars]` section, with the name of the node on which the ingress controller will be installed:

        [kube:vars]
        ingress_controller_node=kube-02

  - Run the playbook:

        ansible-playbook -i hosts.ini install-ingress-controller.yaml
- Add a new worker node to the cluster:
  - Define the new node in `hosts.ini` under the `kube-workers` section:

        kube-05 ansible_host=192.168.1.19 ansible_user=pirate

  - Run the playbook with the newly added node as the target node:

        ansible-playbook -i hosts.ini cluster-scaleup.yaml --extra-vars "TARGET_NODE=kube-05"
- Remove a node from the cluster:
  - Run the playbook with the name of the node that is to be removed:

        ansible-playbook -i hosts.ini cluster-scaledown.yaml --extra-vars "TARGET_NODE=kube-05"

  - Remove the entry for the `TARGET_NODE` from `hosts.ini`.
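The scale-down invocation lends itself to a small wrapper that validates the node name before calling `ansible-playbook`. The `scaledown` function below is purely illustrative (not part of kube-arm) and echoes the command instead of running it, so the sketch is safe to execute:

```shell
#!/bin/sh
# Hypothetical helper, not part of kube-arm: checks that a node name was
# given, then prints the cluster-scaledown invocation. Drop "echo" to run it.
scaledown() {
  node="$1"
  if [ -z "$node" ]; then
    echo "usage: scaledown <node-name>" >&2
    return 1
  fi
  echo ansible-playbook -i hosts.ini cluster-scaledown.yaml \
    --extra-vars "TARGET_NODE=$node"
}

scaledown kube-05
```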
- `master`- or `worker`-specific tasks can be run from a particular playbook using the tags `master` and `workers` respectively:

      # Will only reset worker nodes
      ansible-playbook -i hosts_sample.ini cluster-reset.yaml --tags=workers

      # Initialize only master node
      ansible-playbook -i hosts.ini cluster-init.yaml --tags=master
- Kubernetes - System for automating deployment, scaling, and management of containerized applications
- kubeadm - Tool for Kubernetes administration
- Ansible - IT automation system
- Raspberry Pi 3 Model B - Single-board ARM-based computer
- HypriotOS v1.5.0 - Docker Pirates with ARMed explosives
- Kubernetes client version 1.8.0, server version 1.8.0 - System for automating deployment, scaling, and management of containerized applications
- kubeadm v1.8.0 - Tool for Kubernetes administration
- Ansible 2.4.0.0 - IT automation system
This project is licensed under the MIT License - see the LICENSE file for details