Add kubernetes with containerd example #339

Merged · 1 commit into lima-vm:master on Oct 18, 2021

Conversation

@afbjorklund (Member) commented Oct 17, 2021

Will set up a single-node (i.e. no workers) Kubernetes "cluster"
with kubeadm, using containerd as the CRI and flannel as the CNI.
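
For reference, a minimal way to try the example (the file name examples/kubernetes.yaml is an assumption here; the instance name in the commands below follows from that filename):

$ limactl start ./examples/kubernetes.yaml
$ limactl shell kubernetes kubectl get nodes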

$ lima kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
$ lima sudo crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.5.7
RuntimeApiVersion:  v1alpha2

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/
(this is still the simplest networking: https://github.com/flannel-io/flannel)
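
For context, a rough sketch of the kubeadm + flannel bootstrap that such an example typically performs (the flags and manifest URL below are assumptions based on the upstream docs, not taken verbatim from this PR):

# assumed sketch: initialize the control plane against the containerd socket,
# using the pod CIDR that flannel expects by default
$ sudo kubeadm init --cri-socket=/run/containerd/containerd.sock --pod-network-cidr=10.244.0.0/16
# install flannel as the CNI (manifest URL is an assumption)
$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# single node: allow regular workloads on the control-plane node
$ kubectl taint nodes --all node-role.kubernetes.io/master-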

$ limactl shell kubernetes kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-b2rn5                  1/1     Running   0          12h
kube-system   coredns-78fcd69978-d5qdg                  1/1     Running   0          12h
kube-system   etcd-lima-kubernetes                      1/1     Running   0          12h
kube-system   kube-apiserver-lima-kubernetes            1/1     Running   0          12h
kube-system   kube-controller-manager-lima-kubernetes   1/1     Running   0          12h
kube-system   kube-flannel-ds-trwr2                     1/1     Running   0          12h
kube-system   kube-proxy-tgggv                          1/1     Running   0          12h
kube-system   kube-scheduler-lima-kubernetes            1/1     Running   0          12h
$ limactl shell kubernetes sudo crictl images
IMAGE                                TAG                 IMAGE ID            SIZE
k8s.gcr.io/coredns/coredns           v1.8.4              8d147537fb7d1       13.7MB
k8s.gcr.io/etcd                      3.5.0-0             0048118155842       99.9MB
k8s.gcr.io/kube-apiserver            v1.22.2             e64579b7d8862       31.3MB
k8s.gcr.io/kube-controller-manager   v1.22.2             5425bcbd23c54       29.8MB
k8s.gcr.io/kube-proxy                v1.22.2             873127efbc8a7       35.9MB
k8s.gcr.io/kube-scheduler            v1.22.2             b51ddc1014b04       15MB
k8s.gcr.io/pause                     3.5                 ed210e3e4a5ba       301kB
quay.io/coreos/flannel               v0.14.0             8522d622299ca       21.1MB

@afbjorklund (Member, Author) commented Oct 17, 2021

Maybe we should move the k3s.yaml to the same place, since it's not really a container engine (it uses a containerd fork)

Container engines:

  • docker.yaml: Docker
  • podman.yaml: Podman
  • ....

Container orchestration:

  • kubernetes.yaml: Kubernetes
  • k3s.yaml: k3s
  • ....

Going with the vulgar k8s.yaml could work, but I think I prefer not to... Seems to be mostly used by a7s (Americans)?

@AkihiroSuda added this to the v0.7.2 milestone on Oct 17, 2021
@AkihiroSuda (Member) commented

Container orchestration:

  • kubernetes.yaml: Kubernetes
  • k3s.yaml: k3s
  • ....

SGTM

@jandubois (Member) commented

Maybe we should move the k3s.yaml to the same place, since it's not really a container engine (it uses a containerd fork)

I agree with moving k3s to the "orchestration" category, but it is not using a fork of containerd afaik; it is just packaging it together with kubernetes (and other components). k3s is also not a fork of kubernetes, but a distribution that includes kubernetes...

Container engines:

  • docker.yaml: Docker
  • podman.yaml: Podman
  • ....

Container orchestration:

  • kubernetes.yaml: Kubernetes
  • k3s.yaml: k3s
  • ....

Maybe use "Kubernetes via kubeadm" and "Kubernetes via k3s" to explain the difference, which is really just the bootstrapping method (and that k3s uses sqlite instead of etcd for the database, to use fewer resources).
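
For illustration, the two bootstrapping paths side by side (a sketch based on the respective upstream installers; these commands are assumptions, not part of this PR):

# kubeadm: packages are installed first, then the control plane is initialized explicitly (etcd by default)
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# k3s: a single installer script sets up kubelet, containerd and the datastore (sqlite by default)
$ curl -sfL https://get.k3s.io | sh -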

Going with the vulgar k8s.yaml could work, but I think I prefer not to... Seems to be mostly used by a7s (Americans)?

I don't care much either way, but I think k8s is pretty pervasive and not restricted to the US. k8s.io is used internally all over the place for namespacing purposes. k8s is also slightly faster to type, but I don't think it matters here.

@afbjorklund (Member, Author) commented Oct 17, 2021

Fair points, and the differences probably don't matter here anyway. Maybe I will go with k8s.yaml after all, since it looks nicer.

Maybe not so much a "fork" as recompiled; it does have the benefit of using fewer resources (compared to running it in containers).

$ limactl shell k3s sudo crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.4.11-k3s1
RuntimeApiVersion:  v1alpha2
$ limactl shell k3s sudo crictl images
IMAGE                                      TAG                    IMAGE ID            SIZE
docker.io/rancher/coredns-coredns          1.8.3                  3885a5b7f138c       12.9MB
docker.io/rancher/klipper-helm             v0.6.4-build20210813   f0b5a8f3a50a8       64.8MB
docker.io/rancher/klipper-lb               v0.2.0                 465db341a9e5b       2.71MB
docker.io/rancher/library-traefik          2.4.8                  deaf4b1027ed4       27.8MB
docker.io/rancher/local-path-provisioner   v0.0.19                148c192562719       13.6MB
docker.io/rancher/metrics-server           v0.3.6                 9dd718864ce61       10.5MB
docker.io/rancher/pause                    3.1                    da86e6ba6ca19       327kB

Looks like /etc/crictl.yaml was missing, but I can add that in a separate PR (the socket is /run/k3s/containerd/containerd.sock).
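
For reference, the missing configuration might look something like this (a sketch only; the follow-up PR may do it differently):

# assumed sketch: point crictl at the k3s-managed containerd socket
$ sudo tee /etc/crictl.yaml >/dev/null <<'EOF'
runtime-endpoint: unix:///run/k3s/containerd/containerd.sock
EOF
$ sudo crictl version    # no --runtime-endpoint flag needed afterwards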

@jandubois (Member) left a comment


LGTM; maybe squash commits?

Will set up a single-node (i.e. no workers) Kubernetes cluster
with kubeadm, using containerd as the CRI and flannel as the CNI.

Signed-off-by: Anders F Björklund <[email protected]>
apt-get update
apt-get install -y apt-transport-https ca-certificates curl
curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
A member commented:

Why xenial (16.04)?

A member commented:

Maybe we should just use a binary tarball if apt.kubernetes.io is not maintained for recent distros.

@afbjorklund (Member, Author) commented:

They ship one static Go binary to all distros; I think "xenial" and "el7" were just the ones that were there in the beginning?

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl

@afbjorklund (Member, Author) commented:

The deb and rpm packages also handle the dependencies and systemd units, unlike e.g. the containerd/buildkit tarballs.
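
For reference, the remaining steps from the linked install-kubeadm page look like this (quoted as a sketch; the example file may pin or arrange them differently):

$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
# the deb packages pull in dependencies such as cri-tools and kubernetes-cni,
# and install the kubelet systemd unit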

@AkihiroSuda merged commit 905f622 into lima-vm:master on Oct 18, 2021