# Deployment on Amazon EKS
You will need to follow the guide here to deploy a cluster with EKS.
Make sure to use node instance sizes that are at least a `t2.large`; anything smaller won't fit a Diego cell properly.

Once you have a running cluster and `kubectl get nodes` shows `Ready` nodes, please continue.
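A quick way to confirm the cluster is usable (a minimal sketch; node names and count will differ in your cluster):

```sh
# List the worker nodes; every node should report STATUS "Ready".
kubectl get nodes

# Or block until all nodes are Ready (gives up after 5 minutes).
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```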
Usually EC2 nodes come with 20GB of disk space. This is insufficient for a CAP deployment. Make sure you increase the disk space for each node after creating the Kubernetes cluster:
- Go to Services -> EC2
- Select Security Groups
- Click on the Node Security Group for your nodes
- Click Actions -> Edit inbound rules
- Click Add Rule
- Choose "SSH" in the Type column and "Anywhere" in the Source column (this is what lets you log in to the nodes to resize their filesystems)
- Click "Save"
- Go to Services -> EC2
- Select Volumes
- Increase the size of all volumes attached to your nodes from 20GB to a larger size (at least 60GB)
- Log in to each node and run `sudo growpart /dev/nvme0n1 1 && sudo xfs_growfs -d /` (these commands might change depending on the actual disk device and/or the filesystem in use; details on resizing are documented here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html). A CLI alternative is sketched after this list.
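If you prefer the AWS CLI over the console, a rough equivalent looks like this (a sketch only; the instance ID, volume ID, and node address are placeholders you must substitute, and the resize step still assumes an NVMe root device with an XFS filesystem):

```sh
# Find the volumes attached to a worker node.
aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=<INSTANCE ID> \
  --query 'Volumes[].[VolumeId,Size]' --output table

# Grow a 20GB volume to 60GB.
aws ec2 modify-volume --volume-id <VOLUME ID> --size 60

# Then, on each node, grow the partition and the filesystem.
ssh ec2-user@<NODE PUBLIC IP> 'sudo growpart /dev/nvme0n1 1 && sudo xfs_growfs -d /'
```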
Use this version of Helm (or newer): https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-rc4-linux-amd64.tar.gz
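For example, to install it (a minimal sketch; it assumes a Linux client and that `/usr/local/bin` is on your `PATH`):

```sh
# Download and unpack the Helm client; the tarball contains linux-amd64/helm.
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.9.0-rc4-linux-amd64.tar.gz
tar -xzf helm-v2.9.0-rc4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --client
```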
In `rbac-config.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
Then:

```sh
kubectl create -f rbac-config.yaml
helm init --service-account tiller
```
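You can verify that Tiller comes up before moving on (a quick check; `tiller-deploy` is the deployment name `helm init` creates by default):

```sh
# Wait for the Tiller deployment to finish rolling out, then check connectivity.
kubectl -n kube-system rollout status deploy/tiller-deploy
helm version
```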
Create the following `storage-class.yaml`:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```
Run:

```sh
kubectl create -f storage-class.yaml
```
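Afterwards, `gp2` should show up as the default storage class (output formatting varies by kubectl version):

```sh
# The gp2 class should be listed and marked "(default)".
kubectl get storageclass
```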
In your EC2 VM List, find one of the EKS nodes you've deployed.
Find its security group, then add the following rules to it:
| Type | Protocol | Port Range | Source | Description |
|---|---|---|---|---|
| HTTP | TCP | 80 | 0.0.0.0/0 | CAP HTTP |
| Custom TCP Rule | TCP | 2793 | 0.0.0.0/0 | CAP UAA |
| Custom TCP Rule | TCP | 2222 | 0.0.0.0/0 | CAP SSH |
| Custom TCP Rule | TCP | 4443 | 0.0.0.0/0 | CAP WSS |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 | CAP HTTPS |
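The same rules can be added with the AWS CLI (a sketch; the security group ID is a placeholder, and the per-rule descriptions from the table are omitted for brevity):

```sh
# Open the CAP ports on the node security group.
for port in 80 2793 2222 4443 443; do
  aws ec2 authorize-security-group-ingress \
    --group-id <NODE SECURITY GROUP ID> \
    --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```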
In your EC2 VM List, find one of the EKS nodes you've deployed.
Find its private IPs and note the one that's also used in its private DNS (which looks like `ip-<THE IP YOU'RE LOOKING FOR>.us-west-2.compute.internal`).
Also note the public IP address; you'll need it for the `DOMAIN` of the cluster.
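Both addresses can also be pulled with the AWS CLI (a sketch; the tag filter assumes your worker nodes carry the usual EKS `kubernetes.io/cluster/<CLUSTER NAME>` tag, so adjust it to however your nodes are labeled):

```sh
# Print private DNS name, private IP, and public IP for each node.
aws ec2 describe-instances \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/<CLUSTER NAME>" \
  --query 'Reservations[].Instances[].[PrivateDnsName,PrivateIpAddress,PublicIpAddress]' \
  --output table
```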
You'll deploy CAP using the usual procedure described here.
Make the following changes in your `values.yaml`:
- Use `overlay-xfs` for `env.GARDEN_ROOTFS_DRIVER`
- Use `""` for `env.GARDEN_APPARMOR_PROFILE`
- The following roles need to have ALL capabilities: `cc_uploader`, `nats`, `routing_api`, `router`, `diego_locket`, `diego_access`, `diego_brain`, `diego_api`
- Set `kube.storage_class.persistent` and `kube.storage_class.shared` to `gp2`
Example `values.yaml`:
```yaml
env:
  # Domain for SCF. DNS for *.DOMAIN must point to a kube node's (not master)
  # external ip address.
  DOMAIN: <PUBLIC IP OF A NODE VM>.nip.io

  #### The UAA hostname is hardcoded to uaa.$DOMAIN, so shouldn't be
  #### specified when deploying
  # UAA host/port that SCF will talk to. If you have a custom UAA
  # provide its host and port here. If you are using the UAA that comes
  # with the SCF distribution, simply use the two values below and
  # substitute the cf-dev.io for your DOMAIN used above.
  UAA_HOST: uaa.<PUBLIC IP OF A NODE VM>.nip.io
  UAA_PORT: 2793

  GARDEN_ROOTFS_DRIVER: overlay-xfs
  GARDEN_APPARMOR_PROFILE: ""

sizing:
  cc_uploader:
    capabilities: ["ALL"]
  nats:
    capabilities: ["ALL"]
  routing_api:
    capabilities: ["ALL"]
  router:
    capabilities: ["ALL"]
  diego_locket:
    capabilities: ["ALL"]
  diego_access:
    capabilities: ["ALL"]
  diego_brain:
    capabilities: ["ALL"]
  diego_api:
    capabilities: ["ALL"]

kube:
  # The IP address assigned to the kube node pointed to by the domain.
  #### the external_ip setting changed to accept a list of IPs, and was
  #### renamed to external_ips
  external_ips:
  - <PRIVATE IP ADDRESS OF THE NODE VM>

  storage_class:
    # Make sure to change the value in here to whatever storage class you use
    persistent: "gp2"
    shared: "gp2"

  # The registry the images will be fetched from. The values below should work for
  # a default installation from the suse registry.
  registry:
    hostname: "registry.suse.com"
    username: ""
    password: ""
    organization: "cap"
    # hostname: "staging.registry.howdoi.website"
    # username: "legituser"
    # password: "" <- fill this out
    # organization: "splatform"

  auth: rbac

secrets:
  # Password for user 'admin' in the cluster
  CLUSTER_ADMIN_PASSWORD: changeme
  # Password for SCF to authenticate with UAA
  UAA_ADMIN_CLIENT_SECRET: uaa-admin-client-secret
```
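With the values file in place, the deployment itself follows the usual Helm-based CAP procedure. As a rough sketch (the chart repository URL, chart names, and release/namespace names below are assumptions based on the standard SUSE CAP instructions; defer to the linked procedure if they differ):

```sh
# Add the SUSE chart repository, then install UAA first and SCF second.
helm repo add suse https://kubernetes-charts.suse.com/
helm install suse/uaa --name susecf-uaa --namespace uaa --values values.yaml

# Once the UAA pods are ready, extract UAA's internal CA certificate and
# pass it to the SCF chart so SCF can trust UAA.
SECRET=$(kubectl get pods --namespace uaa \
  -o jsonpath='{.items[0].spec.containers[?(@.name=="uaa")].env[?(@.name=="INTERNAL_CA_CERT")].valueFrom.secretKeyRef.name}')
CA_CERT=$(kubectl get secret "$SECRET" --namespace uaa \
  -o jsonpath="{.data['internal-ca-cert']}" | base64 --decode)
helm install suse/cf --name susecf-scf --namespace scf \
  --values values.yaml --set "secrets.UAA_CA_CERT=${CA_CERT}"
```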