
Re-enable oc run --attach verifications once the docker logs api fix is vendored #11240

Open · dcbw opened this issue Oct 6, 2016 · 26 comments
Labels: component/restapi, lifecycle/frozen, priority/P2

dcbw (Contributor) commented Oct 6, 2016

Running test/end-to-end/core.sh:331: executing 'cat '/tmp/openshift/test-end-to-end-docker//logs/cli-with-token.log'' expecting success and text 'In project test'...
SUCCESS after 0.016s: test/end-to-end/core.sh:331: executing 'cat '/tmp/openshift/test-end-to-end-docker//logs/cli-with-token.log'' expecting success and text 'In project test'
Standard output from the command:
Waiting for pod test/cli-with-token to be running, status is Pending, pod ready: false
Waiting for pod test/cli-with-token to be running, status is Pending, pod ready: false
I1005 22:44:47.756240       1 merged_client_builder.go:122] Using in-cluster configuration
I1005 22:44:47.789231       1 merged_client_builder.go:122] Using in-cluster configuration
I1005 22:44:47.792214       1 merged_client_builder.go:122] Using in-cluster configuration
I1005 22:44:47.792372       1 merged_client_builder.go:145] Using in-cluster namespace
In project test on server https://172.30.0.1:443
pod/cli-with-token runs openshift/origin:6cd347a
2 warnings identified, use 'openshift cli status -v' to see details.

There was no error output from the command.
Running test/end-to-end/core.sh:332: executing 'oc delete pod cli-with-token' expecting success...
SUCCESS after 0.289s: test/end-to-end/core.sh:332: executing 'oc delete pod cli-with-token' expecting success
Standard output from the command:
pod "cli-with-token" deleted

There was no error output from the command.
Running test/end-to-end/core.sh:334: executing 'cat '/tmp/openshift/test-end-to-end-docker//logs/cli-with-token2.log'' expecting success and text 'system:serviceaccount:test:default'...
FAILURE after 0.015s: test/end-to-end/core.sh:334: executing 'cat '/tmp/openshift/test-end-to-end-docker//logs/cli-with-token2.log'' expecting success and text 'system:serviceaccount:test:default': the output content test failed
Standard output from the command:
Waiting for pod test/cli-with-token-2 to be running, status is Pending, pod ready: false
Waiting for pod test/cli-with-token-2 to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
Waiting for pod test/cli-with-token-2 to terminate, status is Running

There was no error output from the command.
[ERROR] PID 30321: hack/lib/cmd.sh:229: `return "${return_code}"` exited with status 1.
[INFO]      Stack Trace: 
[INFO]        1: hack/lib/cmd.sh:229: `return "${return_code}"`
[INFO]        2: test/end-to-end/core.sh:334: os::cmd::expect_success_and_text
[INFO]   Exiting with code 1.
[ERROR] PID 29510: hack/test-end-to-end-docker.sh:94: `${OS_ROOT}/test/end-to-end/core.sh` exited with status 1.
[INFO]      Stack Trace: 
[INFO]        1: hack/test-end-to-end-docker.sh:94: `${OS_ROOT}/test/end-to-end/core.sh`
[INFO]   Exiting with code 1.
[INFO] Stacked plot for log subset "iops" written to /tmp/openshift/test-end-to-end-docker//logs/iops.pdf
[INFO] Stacked plot for log subset "paging" written to /tmp/openshift/test-end-to-end-docker//logs/paging.pdf
[INFO] Stacked plot for log subset "cpu" written to /tmp/openshift/test-end-to-end-docker//logs/cpu.pdf
[INFO] Stacked plot for log subset "queue" written to /tmp/openshift/test-end-to-end-docker//logs/queue.pdf
[INFO] Stacked plot for log subset "memory" written to /tmp/openshift/test-end-to-end-docker//logs/memory.pdf

[FAIL] !!!!! Test Failed !!!!
@dcbw dcbw added the kind/test-flake Categorizes issue or PR as related to test flakes. label Oct 6, 2016
marun (Contributor) commented Oct 6, 2016

I'm assuming this is the same failure manifesting slightly differently.

[INFO] Running a CLI command in a container using the service account
Running test/end-to-end/core.sh:328: executing 'oc policy add-role-to-user view -z default' expecting success...
SUCCESS after 0.247s: test/end-to-end/core.sh:328: executing 'oc policy add-role-to-user view -z default' expecting success
There was no output from the command.
There was no error output from the command.
Running test/end-to-end/core.sh:330: executing 'cat '/tmp/openshift/test-end-to-end-docker//logs/cli-with-token.log'' expecting success and text 'Using in-cluster configuration'...
FAILURE after 0.013s: test/end-to-end/core.sh:330: executing 'cat '/tmp/openshift/test-end-to-end-docker//logs/cli-with-token.log'' expecting success and text 'Using in-cluster configuration': the output content test failed
Standard output from the command:
Waiting for pod test/cli-with-token to be running, status is Pending, pod ready: false
Waiting for pod test/cli-with-token to be running, status is Pending, pod ready: false
If you don't see a command prompt, try pressing enter.
Waiting for pod test/cli-with-token to terminate, status is Running

There was no error output from the command.

@bparees bparees assigned mfojtik and unassigned bparees Oct 6, 2016
bparees (Contributor) commented Oct 6, 2016

@mfojtik not sure this belongs to the PM team either, but it's not devex... something is running an image with oc run and then expecting to see some output in the container logs:

echo "[INFO] Running a CLI command in a container using the service account"
os::cmd::expect_success 'oc policy add-role-to-user view -z default'
oc run cli-with-token --attach --image="openshift/origin:${TAG}" --restart=Never -- cli status --loglevel=4 > "${LOG_DIR}/cli-with-token.log" 2>&1
os::cmd::expect_success_and_text "cat '${LOG_DIR}/cli-with-token.log'" 'Using in-cluster configuration'
os::cmd::expect_success_and_text "cat '${LOG_DIR}/cli-with-token.log'" 'In project test'
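
For comparison, a minimal standalone reproduction of that check might look like this (a sketch: the pod name, image tag, and log path below are illustrative assumptions, not taken from core.sh):

# Run a one-shot pod with --attach; everything the attach stream carries
# ends up in the log file.
oc run cli-check --attach --image="openshift/origin:latest" --restart=Never -- cli status --loglevel=4 > /tmp/cli-check.log 2>&1
# The flake: the attach stream can miss output the container produced
# before the attach completed, so this grep intermittently fails.
grep 'Using in-cluster configuration' /tmp/cli-check.log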

liggitt (Contributor) commented Oct 7, 2016

is this the same problem as #9309? @ncdc, you mentioned that was a known issue... is it tracked upstream?

ncdc (Contributor) commented Oct 7, 2016

@liggitt upstream is flaking a ton lately on kubectl exec not returning the expected output. So far I've seen it with GCE (GCI image) and GKE. I obviously can't debug on GKE, but I did spin up a GCE/GCI cluster and ran kubectl exec in an endless loop for close to 2 hours and it never flaked on me. I have no idea what's going on :-(
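
For reference, a soak test of that kind boils down to something like this (a sketch, assuming a running cluster with a pod named busybox that has a shell; the pod name and command are assumptions):

# Hypothetical soak loop: exec repeatedly and stop on the first run
# whose output doesn't match what the container printed.
while true; do
  out="$(kubectl exec busybox -- echo hello)"
  if [ "${out}" != "hello" ]; then
    echo "flake: got '${out}'"
    break
  fi
done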

@ncdc ncdc changed the title Running test/end-to-end/core.sh:334: executing 'cat '/tmp/openshift/test-end-to-end-docker//logs/cli-with-token2.log'' expecting success and text Re-enable oc run --attach verifications once the docker logs api fix is vendored Oct 20, 2016
@liggitt liggitt added this to the 1.4.0 milestone Oct 20, 2016
ncdc (Contributor) commented Oct 28, 2016

The fix for this merged upstream to docker yesterday. If everyone wants, I can do a PR for origin's vendored copies of docker/engine-api and kube to add back support for retrieving logs at the beginning of attach. Let me know if I should do this.

liggitt (Contributor) commented Oct 28, 2016

if we don't, is this a regression in 1.4?

ncdc (Contributor) commented Oct 28, 2016

Not in 1.4, no. It was a regression in 1.3, I think.

soltysh (Contributor) commented Jun 2, 2017

This is tracked as part of #12558, and a fix (although still not fully functional) is in #13669. I'm closing this in favor of #12558.

@soltysh soltysh closed this as completed Jun 2, 2017
miminar commented Jun 20, 2017

Happened again here: https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_integration/3417/consoleFull#-10772849525762928fe4b031e1b25ced2a

soltysh (Contributor) commented Jun 20, 2017

Interesting: the output I got from the command and the output I got from docker logs are different:

If you don't see a command prompt, try pressing enter.
I0619 16:14:25.397369       1 merged_client_builder.go:123] Using in-cluster configuration
I0619 16:14:25.400517       1 merged_client_builder.go:123] Using in-cluster configuration
I0619 16:14:25.440694       1 cached_discovery.go:134] failed to write cache to /root/.kube/172.30.0.1_443/servergroups.json due to mkdir /root/.kube: permission denied
I0619 16:14:25.441363       1 merged_client_builder.go:123] Using in-cluster configuration

vs

I0619 16:14:25.397369       1 merged_client_builder.go:123] Using in-cluster configuration
I0619 16:14:25.400517       1 merged_client_builder.go:123] Using in-cluster configuration
I0619 16:14:25.440694       1 cached_discovery.go:134] failed to write cache to /root/.kube/172.30.0.1_443/servergroups.json due to mkdir /root/.kube: permission denied
I0619 16:14:25.441363       1 merged_client_builder.go:123] Using in-cluster configuration
system:serviceaccount:test:default

We're checking the former, but the latter contains the value we're looking for.
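
The race is easy to express as a check (a sketch: the 'cli whoami' command is an assumption, since the exact core.sh line isn't quoted above, and 'oc logs' stands in for docker logs):

# Capture what the attach stream delivered...
oc run cli-with-token-2 --attach --image="openshift/origin:${TAG}" --restart=Never -- cli whoami --loglevel=4 > "${LOG_DIR}/attach.log" 2>&1
# ...and compare it with what the container actually logged. On a flake,
# the trailing lines (here, the service account name) appear only in the logs.
oc logs cli-with-token-2 > "${LOG_DIR}/pod.log"
diff "${LOG_DIR}/attach.log" "${LOG_DIR}/pod.log"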

openshift-bot (Contributor) commented Feb 12, 2018

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 12, 2018
soltysh (Contributor) commented Feb 12, 2018

/remove-lifecycle stale
/lifecycle frozen

@openshift-ci-robot openshift-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 12, 2018