[FEATURE] Explicit support for running k3s in docker mode. #113
I was able to get a functional solution with some help from the k3s-dind maintainer in unbounded-systems/k3s-dind#4. I've provided some context for my requirements and a rough solution below.

To give some context on why I needed this: I am working on getting Hyperledger Fabric set up within a Kubernetes environment. The way Fabric installs chaincode (smart contracts) specifically requires docker support on "peer" nodes, for the time being. My CI environment is GitLab, which heavily emphasizes running jobs within (docker) containers. I wanted a pattern that easily enables execution in that environment and locally for testing changes to the deployment. When researching local k8s development, k3s + k3d seemed like a natural fit (and was VERY easy to deploy in GitLab without the `--docker` requirement).

The k3d side, e.g.:

```dockerfile
FROM unboundedsystems/k3s-dind:latest

# overwrite upstream start-k3s.sh with your wrapper
COPY ./start-k3s.sh /usr/local/bin/start-k3s.sh
ENTRYPOINT ["/usr/local/bin/start-k3s.sh"]
CMD ["agent"]
```

The replaced `start-k3s.sh`:

```sh
#!/bin/sh
# runDocker (provided by the base image) starts the inner docker daemon,
# then k3s is launched with whatever args were passed to the container
runDocker
k3s "$@"
```

From there, k3d can be used to launch a cluster just by specifying the right server/agent args and your custom image:

```sh
k3d create --workers 2 \
  --server-arg '--docker' --agent-arg '--docker' \
  --image your-k3d-dind:latest
```

I think I am safe to launch my k3s server without docker support, but I'm keeping the environments consistent for now.

This worked great locally, but took some tinkering to get working in CI. I ended up leveraging GitLab's DIND service pattern, but launching another dind container for direct interaction with that docker daemon, as opposed to my GitLab DIND service docker daemon. E.g.:

```yaml
gitlab-job:
  stage: my-stage
  image: docker:stable
  services:
    - docker:stable-dind
  before_script:
    # dind needs --privileged to run its own daemon
    - docker run --rm -d --privileged --name k3d docker:dind
  script:
    - docker exec k3d sh -c "k3d create --workers 2 --server-arg '--docker' --agent-arg '--docker' --image your-k3d-dind:latest"
  after_script:
    - docker stop k3d
```

Using a similar pattern, I've been able to get 3 layers of containerization working for my requirements (gitlab docker executor (1) -> dind (2) -> k3s pods running docker containers (3) -> chaincode running as sibling containers within pods). I would ultimately be OK with closing this as a niche issue. It would be nice to have an upstream k3d-compatible dind image, but wrapping another dind solution was low enough fruit for me.
Thanks a lot for this writeup @rpocase 😃 I'm not sure where yet, but I think, if we have a website for this project at some point, your text (+ some extras) would be a great fit for a "use-cases" section. What do you think? Would it be OK for you to put this there?

Additionally, I'd like to ask you for a bit more info on how you now do testing on the k3d cluster that you create in the pipeline. How do you get test results out of there?

Thanks again :)
@iwilltry42 Apologies for the delay - totally forgot about your questions. I am absolutely all right with using my text as a baseline for use case documentation.

As for test results - I'm not at the point where I am exporting results from the cluster other than exit codes (and stdout in the pipeline/local workflow). Exporting test result files from the cluster will come once Kubernetes is integrated into our devs' primary workflow. This has taken a backseat for various reasons.
Hello, I spent a few hours getting the following setup to work. It's not directly related to k3d, but I ran into similar issues, so my solution might help you here.

I run k3s on my machine in docker (for easier maintainability). I want to run it in `--docker` mode (required when I want to run pipeworks on it, which I need for my a-bit-special home setup), and I want it to use the host's docker. That means my k3s uses the same docker that it runs on. I achieved it using this compose file:
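(A minimal sketch of what such a compose file might look like; the image tag, mounts, and options here are assumptions, not the original file from this comment:)

```yaml
# Sketch only: a k3s server container that drives the *host's* docker daemon
# via the mounted socket, so pods land on the host docker.
version: "2.4"
services:
  k3s:
    image: rancher/k3s:v0.9.1   # assumed tag; any k3s build that still supports --docker
    command: server --docker
    privileged: true
    network_mode: host          # lets k3s reach containers on the host network
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # use the host docker daemon
      - /var/lib/docker:/var/lib/docker             # host container state (resolv.conf etc.)
      - /var/lib/rancher/k3s:/var/lib/rancher/k3s   # persist cluster data
```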
Kind regards,
@micw, thanks for your efforts. I'd like to get this working as well. But I'm getting an error, and I don't quite understand how to set up my volumes correctly:
Sometimes I get the same error for the Docker mount:
What else did you configure to make this work, if I may ask?
Problem
I have been experimenting with k3d as a lightweight method for CI and development workflows. My target deployment environment ultimately has a hard requirement on k3s running with `--docker`, due to lack of support for other container runtimes.

I can start a k3d cluster (with or without worker nodes), the cluster reports 'running', and checking cluster-info shows services, but the k3s server fails to completely launch any pods.
Digging into the server logs, I see a lot of similar error messages.
Per a conversation in Slack, the resolv.conf issue is resolvable by mounting the host `/var/lib/docker/containers` location, but that does not resolve the `/proc` issues.
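For illustration, assuming a k3d version that supports a `--volume` flag for mounting host paths into the node containers (this flag is an assumption, not something confirmed by the thread), that workaround might look like:

```sh
# hypothetical: expose the host's container metadata (incl. per-container
# resolv.conf files) to the k3s nodes running in --docker mode
k3d create --workers 2 \
  --server-arg '--docker' --agent-arg '--docker' \
  --volume /var/lib/docker/containers:/var/lib/docker/containers
```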
Scope of your request
I would like explicit support for running k3s with docker enabled through k3d, including example docs.
Describe the solution you'd like
A new CLI argument should be added to create (e.g. `k3d create --docker`) that automatically injects the correct server/agent args and launches with a valid default image. The user may need to provide additional information (e.g., a docker socket volume or a `DOCKER_HOST`) if relevant.

This seems like it would necessitate an explicit k3s-dind image. The new image would likely need to be based on rancher/k3s.
Describe alternatives you've considered
I've found one implementation available for running k3s with dind, but it does not provide as clean a workflow as k3d. I also have not been able to get it to launch correctly. It suffers similar issues as above if run with the host docker socket mounted (and different issues otherwise).