RFC: Docker-based development support #52
Conversation
Can you articulate why you broke the all-in-one up into so many pieces? What's the value to real use cases of having ten containers?
First, a question in response to yours: I want to do some development or integration-style testing of the scheduler and load balancer against five minions/kubelets and their associated pods. What are my options today other than a cluster of VMs?
There's nothing preventing an "all-in-one" image as well. The point is having a way to compose clusters locally (multi-master, multi-minion, multi-anything) without VMs or host port juggling.
Testing multiple containers in Docker on a single VM is useful, but I don't see how running OpenShift itself as multiple containers adds anything to that. The host port problem is going away soon in favor of testing real networks, and I don't think that test environment requires breaking up the all-in-one.
How do you do live testing of the scheduler, load balancer, replication controller, or any multi-node behavior today without multiple VMs?
Discussed via phone, three concrete use cases:
The Docker image is best for system-level integration testing: spin up a container, run against it, then throw it away to get back to a clean environment.
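As a rough illustration of the throwaway-environment use case (the image name and arguments below are placeholders, not the exact ones from this PR), a disposable all-in-one container might look like:

```sh
# Hypothetical sketch: spin up a disposable all-in-one OpenShift container,
# run tests against it, then delete it to get back to a clean environment.
docker run -d --name origin-test openshift/origin start   # image/command are assumptions
# ... run integration tests against the container's API here ...
docker rm -f origin-test                                   # throw the environment away
```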
For comparison purposes, here's a branch which collapses all the builder images into a single all-in-one builder image which is reused to launch the discrete components in their own containers: https://github.com/ironcladlou/origin/compare/docker-build-support-collapsed?expand=1. It's functionally equivalent, with less stuff overall. All of the linking setup is now expressed explicitly in one place rather than being spread around. EDIT: note that the linked branch is a rough POC to demonstrate the concept.
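To make the "one builder image, discrete linked containers" idea concrete, here is a rough sketch; the image name, container names, and link aliases are placeholders, not the linked branch's actual scripts:

```sh
# Illustrative only: one shared image launched several times, with docker links
# expressing the wiring between components in a single place.
docker run -d --name origin-etcd allinone etcd
docker run -d --name origin-master --link origin-etcd:etcd allinone master
docker run -d --name origin-minion-1 --link origin-master:master allinone kubelet
docker run -d --name origin-minion-2 --link origin-master:master allinone kubelet
```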
I really like this approach because it allows you to have a clean environment. I think this will be much easier to get into than a Vagrant-created setup.
I like this too. I'm guessing the major thing we'd be missing is per-pod IP addresses? I can take a stab Monday at getting that working in upstream Kubernetes by reusing the provision networking pieces, after I get a chance to play with this some more.
Per-pod IP addresses are implemented; containers in a pod still share their networking stack with a networking container as usual. All component and pod containers have IPs on the same subnet by virtue of living on the same Docker bridge, which, combined with links, makes everything flatly routable. From an etcd perspective there's a multi-machine cluster in action, and the kubelet doesn't care that its containers are colocated on the same physical host; Docker abstracts away where the kubelet's containers actually live.
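A small sketch of what "flatly routable on the same bridge" means in practice; the container names here are assumptions:

```sh
# Every container on the default docker0 bridge gets an address in the same
# subnet, so components and pods can reach each other directly by IP.
docker inspect -f '{{ .NetworkSettings.IPAddress }}' master    # e.g. 172.17.0.2
docker inspect -f '{{ .NetworkSettings.IPAddress }}' minion-1  # e.g. 172.17.0.3
# From one container, any other container on the bridge is directly reachable
# (assuming ping is available in the image):
docker exec minion-1 ping -c 1 172.17.0.2
```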
The kubelet will nuke containers from other machines unless you've done something to the kubelet to support that.
Hm, I think you're right. It looks like the kubelet assumes that any container whose name carries its naming prefix belongs to it. Need to think about that some more. Random musings: Is the core assumption flawed? Should container labels express a host affinity, the way the kubelet already does? That would allow logical partitioning of containers by kubelet within Docker, but I have no use case beyond setting up test clusters. Is there any other way to accommodate the existing behavior without using docker-in-docker?
Testing is another big value add. We should prototype e2e tests against this setup.
The latest branch ditches the multiple images in favor of a single build image which also serves as a runner. It also has some really, really rough code to:
Something's still wrong with proxying. To even try this branch, you need to apply this patch to a local clone of the Kubernetes repo.
Talking and thinking about all this more: docker-in-docker sidesteps the kubelet and proxy issues entirely, at the expense of some UX complexity, since to get at the pod containers we'd have to somehow transparently expose the inner Docker daemons. Docker-in-docker also means a loss of image caching on the host, so spinning up tests is going to be drastically slower unless host-level caching is in place.
The latest commit demonstrates how this could work using docker-in-docker. It requires no patching of Kubernetes. To inspect the Docker daemon within the minions, I use a little helper script; a sketch of the kind of command involved follows below.
The guestbook example (with all of its pieces) works in this setup.
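The exact helper script isn't preserved in this thread, but a minimal sketch of inspecting a minion's inner Docker daemon, assuming the minion container is privileged and its inner daemon listens on TCP port 2375, could look like:

```sh
# Hypothetical helper: point the host's docker client at the docker daemon
# running *inside* a minion container.
MINION=${1:-openshift-minion-1}                    # container name is an assumption
MINION_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$MINION")
DOCKER_HOST="tcp://${MINION_IP}:2375" docker ps    # list the pod containers the inner daemon manages
```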
usage: $0 [OPTIONS]

OPTIONS:
  -l    lanch a local cluster following a successful build
Typo: "lanch" should be "launch".
@@ -11,3 +11,37 @@ If you are not updating packages you should not need godep installed.
To update to a new version of a dependency that's not already included in Kubernetes, checkout the correct version in your GOPATH and then run `godep save <pkgname>`. This should create a new version of `Godeps/Godeps.json`, and update `Godeps/workspace/src`. Create a commit that includes both of these changes.

To update the Kubernetes version, checkout the new "master" branch from openshift/kubernetes (within your regular GOPATH directory for Kubernetes), and run `godep restore ./...` from the Kubernetes dir. Then switch to the OpenShift directory and run `godep save ./...`

## Running OpenShift in Docker
Is there any benefit for an end user to try this path? If so, have a section for it in CONTRIBUTING.adoc
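For reference, the godep workflow described in the quoted hunk above might look like the following; the GOPATH layout and repository paths are assumptions, not part of the doc:

```sh
# Sketch of the dependency-update flow from the hacking doc above.
cd "$GOPATH/src/github.com/GoogleCloudPlatform/kubernetes"
git checkout master        # the openshift/kubernetes "master" branch
godep restore ./...        # materialize the pinned dependency versions into GOPATH
cd "$GOPATH/src/github.com/openshift/origin"
godep save ./...           # rewrite Godeps/Godeps.json and Godeps/workspace/src
```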
Origin Action Required: Pull request cannot be automatically merged. Please rebase your branch from the latest HEAD and push again.
Why build or run OpenShift in Docker?
Building and running on bare metal and on VMs each have advantages and disadvantages. Using Docker for builds and development is useful when bare metal is too limited and VMs are too cumbersome. Here are some of the reasons to use Docker when developing OpenShift:
- `hack/build-go.sh` can run inside the `.origin-build` builder container, requiring nothing on the host beyond Docker
- provides `httpie` (and similar tooling) to developers with no host requirements beyond Docker, and in a way which can easily and closely integrate with both the cluster components and the host OS
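As an example of what a Docker-based build could look like (a sketch under stated assumptions: the Go image and mount path are placeholders, not the project's actual wiring):

```sh
# Run the build script inside a throwaway Go container so the host needs
# nothing but Docker itself.
docker run --rm \
  -v "$(pwd)":/go/src/github.com/openshift/origin \
  -w /go/src/github.com/openshift/origin \
  golang \
  ./hack/build-go.sh
```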