Air-Gapped, Infra-Dedicated Kubernetes Edge Cluster

Ansible playbooks to set up an air-gapped, HA k3s cluster for infrastructure components (MinIO, OpenLDAP, etc.).

VM Setup

  • cd terraform
  • If you haven't already, create a secret.tfvars file with the following content:
sshkey_id = "<ID of the dev machine SSH key>"
token     = "<CIVO API Token>"
  • When setting up the project for the first time, run terraform init. When upgrading the Civo provider version, run terraform init --upgrade instead (the provider version is pinned in the terraform/provider.tf file; the latest version is listed on the provider's Terraform Registry page under "Use Provider"). A consolidated sketch of these steps follows this list.
  • make plan
  • make apply
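
Taken together, a first-time setup looks roughly like this. This is a sketch: it assumes the make plan and make apply targets wrap the corresponding terraform commands; check the Makefile for the exact invocations.

cd terraform
# Create the variables file Terraform uses to authenticate against Civo
# (replace the placeholders with your real values)
cat > secret.tfvars <<'EOF'
sshkey_id = "<ID of the dev machine SSH key>"
token     = "<CIVO API Token>"
EOF
terraform init   # first time only; use 'terraform init --upgrade' after a provider bump
make plan        # assumed to wrap 'terraform plan'
make apply       # assumed to wrap 'terraform apply'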

Kubernetes Setup

Ansible host prerequisites

  1. Install Ansible
  2. Install the community.docker Ansible collection: ansible-galaxy collection install community.docker
  3. Install the ansible.posix Ansible collection: ansible-galaxy collection install ansible.posix
  4. Install the kubernetes.core Ansible collection: ansible-galaxy collection install kubernetes.core
  5. Install the Python PyYAML library: pip3 install pyyaml
  6. Install the Python Kubernetes library: pip3 install kubernetes
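
Steps 2 through 6 can be collapsed into two commands:

# Install the three required Ansible collections in one pass
ansible-galaxy collection install community.docker ansible.posix kubernetes.core
# Install the Python libraries the kubernetes.core collection relies on
pip3 install pyyaml kubernetes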

Install Procedure

  1. Customize the hosts.yaml file to point at your nodes; hosts named "master0*" are assumed to be control plane nodes (a minimal example follows this list).
  2. cd ansible
  3. make pre-requisites
  4. make install
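
A minimal hosts.yaml sketch is shown below. The hostnames, addresses, and layout are hypothetical (192.168.56.x is the VirtualBox host-only default range); adapt them to the inventory structure the playbooks actually expect.

# Hypothetical inventory: hosts named master0* are treated as control plane nodes
cat > hosts.yaml <<'EOF'
all:
  hosts:
    master01:
      ansible_host: 192.168.56.10
    master02:
      ansible_host: 192.168.56.11
    worker01:
      ansible_host: 192.168.56.20
EOF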

Kubernetes Uninstall Procedure

  1. make uninstall

Shutdown all nodes

  1. make shutdown

VM Deletion

  1. cd terraform
  2. make destroy

Full documentation

TBD

Disclaimer

The provided code was tested with a 2020 M1 MacBook Air as the Ansible host and VirtualBox VMs running on a Windows host as the Kubernetes nodes.

Other tips

VM Clone

  • While setting up the VMs that would become the k8s nodes, I started by creating a first machine and cloned it once I was happy with its setup. For the clones to be able to pick up an IP address, I had to delete the /etc/udev/rules.d/70-persistent-net.rules file on each of them, as shown below.
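
On each clone, that amounts to:

# Remove the stale rule that pins interface names to the original VM's MAC address;
# udev regenerates it for the clone's own MAC on the next boot
sudo rm /etc/udev/rules.d/70-persistent-net.rules
sudo reboot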

Rook

  • While setting Rook up, I had to resize the VM disks of the worker nodes. To do this in VirtualBox, go to File -> Virtual Media Manager. Then, on openSUSE, run:
sudo zypper install growpart
sudo growpart /dev/sda 2
sudo btrfs filesystem resize max /mnt
  • Also while setting Rook up, I had to add new, unformatted 10Gi disks to all worker nodes, to be used for Ceph storage; see the check below.
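
To confirm a newly added disk is visible and still unformatted before Ceph claims it (assuming it appears as /dev/sdb; adjust to your device name), run:

# An empty FSTYPE column means the disk carries no filesystem,
# which is what Rook/Ceph expects for a raw OSD device
lsblk -f /dev/sdb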

  • To verify Rook's status, open a shell in the toolbox pod by running sudo kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash, then check the cluster with ceph status.
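
If you only want the health summary, you can also run ceph status through kubectl exec without opening an interactive shell:

# Run ceph status inside the toolbox pod non-interactively
sudo kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status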

Access Argo CD

  1. ssh -L 8080:localhost:8080 master01
  2. sudo kubectl port-forward svc/argocd-server -n argocd 8080:443
  3. Access localhost:8080
  4. Username: admin
  5. Password: sudo kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
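
Spelled out with the machine each command runs on:

# On your workstation: tunnel local port 8080 to master01
ssh -L 8080:localhost:8080 master01
# Inside that SSH session, on master01: forward the tunnel to the argocd-server service
sudo kubectl port-forward svc/argocd-server -n argocd 8080:443
# In a second session on master01: print the initial admin password
sudo kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d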

To Do

  • Deploy the Trivy Operator by default
  • Argo CD
    • deploy MinIO using an Application CRD
  • Rook
    • understand how to recover previously used partitions
