k8s e2e tests [149/149] #533
Nice - next target is getting these to 0 failures :)
Alright, trimming down the list above a lot :) Many of the failures are just because local-up-cluster.sh runs with SecurityContextDeny. I'll re-run the tests and post an updated list :) (Also updated the first comment with a way to run e2e for anyone to try out.)
FYI I figured the following test:
will be fixed by using

Not sure that fixes the other tests here; I'll re-run them once I finish the run I started in my previous comment :)
First comment updated with the new result, 137/151; going to re-run the suite as per my previous comment.
```diff
diff --git a/server/container_create.go b/server/container_create.go
index c15985e..5cbab69 100644
--- a/server/container_create.go
+++ b/server/container_create.go
@@ -589,6 +589,14 @@ func (s *Server) createSandboxContainer(ctx context.Context, containerID string,
 	containerImageConfig := containerInfo.Config
 
+	// TODO: volume handling in CRI-O
+	// right now, we just mount tmpfs in order to have images like
+	// gcr.io/k8s-testimages/redis:e2e work with CRI-O
+	for dest := range containerImageConfig.Config.Volumes {
+		destOptions := []string{"mode=1777", "size=" + strconv.Itoa(64*1024*1024), label.FormatMountLabel("", sb.mountLabel)}
+		specgen.AddTmpfsMount(dest, destOptions)
+	}
+
 	processArgs, err := buildOCIProcessArgs(containerConfig, containerImageConfig)
 	if err != nil {
 		return nil, err
```

The patch above makes many other tests pass for images that define a VOLUME.
I think we can add this tmpfs volume as a temporary fix. The downside is more RAM usage for containers with VOLUMEs, so we'd want to move to disk-backed, CRI-O managed volumes.
I'll test with that patch and see how it goes. Will report status here.
For the record, I'm running e2e on Fedora 25, and all currently failing network tests seem to be resolved by just
@runcom

139/151, updated first comment; logs available at https://runcom.red/e2e-1.log
One of them is the projected-volumes failure, which is really a bug in kube, so one less to worry about ;)
The 1st and 2nd tests used to pass with node-e2e though (and if you look at the first run of e2e tests I did, https://runcom.red/e2e.log, you can see them passing); we need to understand why they're failing now.
Fixed by #537
Nice!

> On May 28, 2017, at 4:33 AM, Antonio Murdaca wrote:
>
> [Fail] [k8s.io] Kubectl client [k8s.io] Kubectl run rc [It] should create an rc from an image [Conformance]
> /home/amurdaca/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1165
> [Fail] [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update [It] should support rolling-update to same image [Conformance]
> /home/amurdaca/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:177

fixed as well
Port forwarding tests panic; the fix is in #542. Though it seems our nsenter/socat implementation isn't working correctly (even if it's copy/pasted from the dockershim one; I verified this by running the same tests with docker, and they pass there but fail with CRI-O for some weird network issue, I guess).
I suspect it might be related to iptables: we might need a rule, or something may be blocking it. Can you run the tests with firewalls disabled to rule out the second scenario?
Found the root cause: it's actually a bug in CRI-O itself. Port forwarding tests now pass with #542 plus a fix coming in a moment 😎
I'll re-run the whole e2e suite once those 2 PRs are merged. Note that these 4 pass as well:
We're now at 144/151 🎉. I've updated the result in the first comment, so look there for failing tests. The remaining failures are either network issues (probably because of my laptop) or tests that rely on attach (but @mrunalp is working on that 👍).
@sameo @mrunalp this test works fine only after disabling firewalld and flushing iptables (and also applying #544 to master); otherwise it always fails (maybe more net tests are also blocked by firewalld).
So 3 of the 7 failures are flakes or misconfigurations; we only need to tackle the 4 that are related to attach :)
Awesome 👍😀
Turns out this test:

is failing just because we run the tests without a DNS service when doing local-up-cluster. One way to fix it when running the test is to follow https://github.com/linzichang/kubernetes/blob/master/examples/guestbook/README.md#finding-a-service (in the CI, we'll likely switch to env-based discovery as pointed out in that readme).
Yeah, we could switch to env vars for that test 👍
That test is basically failing because
So the only test failing now is the attach one :) I just confirmed the following test works fine (it was a kubelet misconfiguration wrt kube-dns). We need to enable kube-dns in local-up-cluster to test it, like this:

```shell
$ sudo PATH=$GOPATH/src/k8s.io/kubernetes/third_party/etcd:${PATH} \
    PATH=$PATH GOPATH=$GOPATH \
    ALLOW_PRIVILEGED=1 \
    CONTAINER_RUNTIME=remote \
    CONTAINER_RUNTIME_ENDPOINT='/var/run/crio.sock --runtime-request-timeout=5m' \
    ALLOW_SECURITY_CONTEXT="," \
    DNS_SERVER_IP="192.168.1.5" API_HOST="192.168.1.5" API_HOST_IP="192.168.1.5" \
    KUBE_ENABLE_CLUSTER_DNS=true \
    ./hack/local-up-cluster.sh
```

After setting
So, we are at 148/151 :)
Updated first comment :)
Attach test now passes! 🎉 🎉 🎉 @rajatchopra @dcbw @sameo could you help figure out the network flakiness? Probably some misconfiguration:

the above is never passing for some reason.

The test below passes only if we stop firewalld and flush iptables before running the tests:

```shell
# systemctl stop firewalld
# iptables -F
```

[Fail] [k8s.io] PreStop [It] should call prestop when killing a pod [Conformance]
/home/amurdaca/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:174
@mrunalp Can we close this issue? I think all tests pass now.
Let's leave this open till we figure out the DNS stuff.
FYI, I got all tests running in the CI :) So I've updated the title and am just waiting for #631 to be merged :)
#631 was merged last September. I'm not sure if we want to leave it open as some sort of tracker issue? Personally, I prefer issue labels for that sort of thing. |
I am going to close this issue, since it seems to be fixed.
Not that bad for a very first run of k8s e2e tests: 117/151 😸

Full logs available at:
- https://runcom.red/e2e.log (117/151)
- https://runcom.red/e2e-1.log (139/151)
- 144/151
- 148/151

test yourself with: