
LXD vm-driver support for Linux users #946

Closed
BrendanBall opened this issue Dec 22, 2016 · 12 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments


BrendanBall commented Dec 22, 2016

FEATURE REQUEST

It seems that only full virtual machines are currently supported for deploying minikube.
As a Linux user (and Kubernetes being native to Linux), using a full VM feels unnecessary when I could just use an LXD container if I don't want to run it directly on my host. LXD would provide the same reproducible environment as a VM. I didn't find any existing issues relating to LXD, so I'd like to know whether there is a reason for that. Would anyone else be interested in this feature?

EDIT: So far I've tried to use kube-deploy inside an LXD container running Ubuntu 16.04.
Docker runs into a problem pulling some of the images. The same images that fail inside LXD pull successfully on the host running LXD. It appears to be a filesystem problem.

Docker images that pulled successfully:

  • gcr.io/google_containers/hyperkube-amd64:v1.5.1
  • gcr.io/google_containers/pause-amd64:3.0

Docker images that failed to pull:

  • gcr.io/google_containers/etcd-amd64:3.0.4
    (extracting last layer: "failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /usr/local/bin/etcd: invalid argument")
  • gcr.io/google_containers/kube-addon-manager-amd64:v6.1
    (extracting last layer: "failed to register layer: ApplyLayer exit status 1 stdout: stderr: lchown /usr: invalid argument")

This fails regardless of whether I pull from the bootstrap Docker or the normal Docker instance inside LXD.
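
For reference, this kind of "lchown ...: invalid argument" error is characteristic of unprivileged containers: the container's uid/gid map covers only a limited range of host ids, and lchown to an id outside that range fails with EINVAL. A quick check from inside the container (a sketch; the exact ranges differ per host):

$ cat /proc/self/uid_map   # e.g. "0 100000 65536" means only ids 0-65535 are mapped
$ cat /proc/self/gid_map   # lchown to an unmapped id fails with "invalid argument"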

@r2d4 added the kind/feature label on Dec 22, 2016
dlorenc (Contributor) commented Jan 5, 2017

I'm not too familiar with LXD; would it be an alternative to KVM? Is there an existing docker-machine driver for LXD? That would be a good first step in figuring out the feasibility of this.

BrendanBall (Author) commented

No, there isn't currently a docker-machine driver for LXD.


bmullan commented Jan 8, 2017

But you can run Docker inside an LXD container...

Here is Stéphane Graber's (one of the LXD devs) blog post on running Docker in LXD:

https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/

CarltonSemple commented

@bmullan I'm also running into this issue. @BrendanBall There's also currently another issue related to running Docker in LXD. See https://github.com/lxc/lxd/issues/2825

CarltonSemple commented

@BrendanBall did you ever make any progress with this?

BrendanBall (Author) commented

@CarltonSemple No, I haven't. For some reason I didn't get a notification for your message. I don't currently use Kubernetes actively in any projects; I only play around with it when I get the time, which isn't very often. You'll see I referenced an issue on docker-machine where someone else has another use case for supporting LXD in docker-machine.

luisfaceira commented

At first sight, developing a docker-machine driver for LXD seems weird and a bad idea (running one container system inside another container system). Although I agree with this issue and would very much like to see its use case go forward, I'm afraid it's probably one of the few use cases where such a driver makes sense, so I'm not optimistic that it will get developed anytime soon...

Which brings me to the question: is such a driver a requirement for moving forward with minikube's LXD support? Couldn't docker-machine's generic driver (which uses a simple SSH connection) somehow be leveraged?

dlorenc (Contributor) commented Oct 19, 2017

Closing as this is stale.

gattytto commented

Running Minikube inside an LXC container
This section describes how to configure an LXC container to run Minikube when the hypervisor uses ZFS/BTRFS/LVM to provision container storage.

Background
The chectl command-line tool requires the ingress addon to be enabled in Minikube; in turn, the Minikube ingress addon requires Docker to run with the overlay filesystem driver.
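
For reference, the addon is enabled with the standard Minikube command (run once the cluster is up):

$ minikube addons enable ingress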

Problem
According to the Docker storage drivers documentation, the overlay2 driver is only supported on ext4 and xfs (with ftype=1).
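
To check what a given setup is using (standard commands; /var/lib/docker assumes Docker's default data root):

$ docker info --format '{{.Driver}}'      # current storage driver, e.g. overlay2
$ xfs_info /var/lib/docker | grep ftype   # on xfs, ftype=1 is required for overlay2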

Solution
The solution is to create a virtual block device backed by a volume; in the case of BTRFS this is not possible, so a file must be used as the virtual block device instead.

Procedure
Note: change the ZFS pool name / LVM volume group name and dockerstorage to match your setup and preferences.

Create a fixed-size ZFS volume (zvol) / LVM volume on the hypervisor side:

$ zfs create -V 50G zfsPool/dockerstorage #USING ZFS
$ lvcreate -L 50G -n dockerstorage volumegroup_name #USING LVM
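To confirm the volume exists before partitioning it (standard ZFS/LVM listing commands):

$ zfs list -t volume zfsPool/dockerstorage #USING ZFS
$ lvs volumegroup_name #USING LVM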
Use a partitioning tool to create a partition inside the virtual block device:

$ parted /dev/zvol/zfsPool/dockerstorage --script mklabel gpt #USING ZFS
$ parted /dev/zvol/zfsPool/dockerstorage --script mkpart primary 1 100% #USING ZFS
$ parted /dev/mapper/volumegroup_name-dockerstorage --script mklabel gpt #USING LVM
$ parted /dev/mapper/volumegroup_name-dockerstorage --script mkpart primary 1 100% #USING LVM
After this there will be a device node called dockerstorage-part1 inside the /dev/zvol/zfsPool folder in the ZFS case, and one called volumegroup_name-dockerstorage1 inside the /dev/mapper folder in the LVM case. This is the virtual block device's partition that will hold /var/lib/docker for the LXC container.
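
A quick way to verify the device node appeared (names as above):

$ ls -l /dev/zvol/zfsPool/ | grep dockerstorage #USING ZFS
$ ls -l /dev/mapper/ | grep dockerstorage #USING LVM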

Format the virtual partition as xfs with the ftype option set to 1:

$ mkfs.xfs -n ftype=1 /dev/zvol/zfsPool/dockerstorage-part1 #FOR ZFS
$ mkfs.xfs -n ftype=1 /dev/mapper/volumegroup_name-dockerstorage1 #FOR LVM
Finally, attach the virtual partition to the container (minikube is the name of the LXC container, dockerstorage is the device name in the LXC configuration):

$ lxc config device add minikube dockerstorage disk path=/var/lib/docker source=/dev/zvol/zfsPool/dockerstorage-part1 #FOR ZFS
$ lxc config device add minikube dockerstorage disk path=/var/lib/docker source=/dev/mapper/volumegroup_name-dockerstorage1 #FOR LVM
You can check the filesystem inside the container with the command 'df -T /var/lib/docker'.
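
The output should report xfs as the filesystem type, along these lines (illustrative; the device name and sizes will vary):

$ df -T /var/lib/docker
Filesystem Type 1K-blocks  Used Available Use% Mounted on
/dev/zd0p1 xfs   52402180 39368  52362812   1% /var/lib/docker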

Use the following LXC profile for the container to allow it to run Minikube:

config:
  linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
    lxc.cgroup.devices.allow=a
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
description: Profile supporting minikube in containers
devices:
  aadisable:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
  aadisable2:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /sys/module/nf_conntrack/parameters/hashsize
    type: disk
  aadisable3:
    path: /dev/kmsg
    source: /dev/kmsg
    type: disk
name: minikube
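
One way to load this profile and launch the container with it (a sketch; the file name minikube-profile.yaml and the ubuntu:18.04 image are assumptions):

$ lxc profile create minikube
$ lxc profile edit minikube < minikube-profile.yaml # the YAML above, saved to a file
$ lxc launch ubuntu:18.04 minikube --profile default --profile minikube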
After starting the container and setting up networking and Docker inside it, start Minikube.

minikube start --vm-driver=none --extra-config kubeadm.ignore-preflight-errors=SystemVerification

gattytto commented

@CarltonSemple

gattytto commented

@luisfaceira @dlorenc


berttejeda commented Nov 25, 2019

This is for anyone running into a somewhat related issue...
In my case, I encountered a similar error when attempting to launch Rancher 2.3 under an LXC container. Specifics of my scenario:

  • Goal: Run Rancher 2.3 inside an LXC container
    Command as per the Rancher installation docs: docker run -d --restart=unless-stopped -p 80:80 -p 443:443 -e CATTLE_SYSTEM_CATALOG=bundled rancher/rancher:latest
  • Error I encountered: lchown /usr/bin/etcd: invalid argument
  • Solution:
    Log out of your Rancher LXC container.
    On the LXC host, run:
lxc config set {{ my_rancher_lxc_container }} security.nesting true
lxc config set {{ my_rancher_lxc_container }} security.privileged true
echo -e """
lxc.cgroup.devices.allow = a
lxc.mount.auto=proc:rw sys:rw
lxc.cap.drop =
""" | lxc config set {{ my_rancher_lxc_container }} raw.lxc - && lxc restart {{ my_rancher_lxc_container }}

Re-attempt the command
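
To confirm the settings took effect before retrying (standard LXD command; container name placeholder as above):

lxc config show {{ my_rancher_lxc_container }} --expanded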
