downward API extended test flake #12629

Closed
smarterclayton opened this issue Jan 23, 2017 · 17 comments
Labels
component/kubernetes, component/storage, kind/test-flake, lifecycle/rotten, priority/P2

Comments

@smarterclayton
Contributor

Uh oh:

Extended.[k8s.io] Downward API volume should update labels on modification [Conformance]

https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_conformance/10550/testReport/junit/(root)/Extended/_k8s_io__Downward_API_volume_should_update_labels_on_modification__Conformance_/

/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124
Timed out after 120.001s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    ...
    content of file "/etc/labels": key1="value1"
    key2="value2"
    
to contain substring
    <string>: key3="value3"
    
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:123
Jan 23 15:18:26.827: INFO: At 2017-01-23 15:16:12.227062384 -0500 EST - event for labelsupdatec8dca6b8-e1a8-11e6-a908-0e53c2054dd0: {default-scheduler } Scheduled: Successfully assigned labelsupdatec8dca6b8-e1a8-11e6-a908-0e53c2054dd0 to 172.18.4.242
Jan 23 15:18:26.827: INFO: At 2017-01-23 15:16:17.219264543 -0500 EST - event for labelsupdatec8dca6b8-e1a8-11e6-a908-0e53c2054dd0: {kubelet 172.18.4.242} Pulled: Container image "gcr.io/google_containers/mounttest:0.7" already present on machine
Jan 23 15:18:26.827: INFO: At 2017-01-23 15:16:17.945694933 -0500 EST - event for labelsupdatec8dca6b8-e1a8-11e6-a908-0e53c2054dd0: {kubelet 172.18.4.242} Created: Created container with docker id eae9aa4c3f70; Security:[seccomp=unconfined]
Jan 23 15:18:26.827: INFO: At 2017-01-23 15:16:19.558858783 -0500 EST - event for labelsupdatec8dca6b8-e1a8-11e6-a908-0e53c2054dd0: {kubelet 172.18.4.242} Started: Started container with docker id eae9aa4c3f70
Jan 23 15:18:26.827: INFO: At 2017-01-23 15:16:26 -0500 EST - event for labelsupdatec8dca6b8-e1a8-11e6-a908-0e53c2054dd0: {kubelet 172.18.4.242} FailedMount: MountVolume.SetUp failed for volume "kubernetes.io/downward-api/c8ddb323-e1a8-11e6-9825-0e53c2054dd0-podinfo" (spec.Name: "podinfo") pod "c8ddb323-e1a8-11e6-9825-0e53c2054dd0" (UID: "c8ddb323-e1a8-11e6-9825-0e53c2054dd0") with: remove /mnt/openshift-xfs-vol-dir/pods/c8ddb323-e1a8-11e6-9825-0e53c2054dd0/volumes/kubernetes.io~downward-api/podinfo/resolv.conf: device or resource busy
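For anyone unfamiliar with the test: it creates a pod whose downward API volume projects metadata.labels into /etc/labels, then adds a key3=value3 label and polls the file for it. A minimal Go sketch of that pod shape (illustrative names and current import paths, not the actual fixture in downwardapi_volume.go):

```go
// Illustrative sketch (not the actual e2e fixture) of the pod shape this
// test exercises: a downward API volume that projects metadata.labels
// into /etc/labels.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelsUpdatePod builds a pod with key1/key2 labels and a downward API
// volume mounted at /etc, so the labels are readable from /etc/labels.
func labelsUpdatePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: map[string]string{"key1": "value1", "key2": "value2"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "gcr.io/google_containers/mounttest:0.7",
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	pod := labelsUpdatePod("labelsupdate-example")
	fmt.Println(pod.Spec.Volumes[0].DownwardAPI.Items[0].FieldRef.FieldPath) // metadata.labels
}
```

The test then updates the pod's labels and waits up to 120s for key3="value3" to appear in /etc/labels. In this run the file never changed, which lines up with the FailedMount / "device or resource busy" event above.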
@smarterclayton
Contributor Author

@pmorie @ncdc I have never seen this before.

smarterclayton added the kind/test-flake and priority/P1 labels on Jan 23, 2017
@smarterclayton
Contributor Author

High priority in case this is a severe issue under the covers

@ncdc
Contributor

ncdc commented Jan 24, 2017

This failed because the kubelet wasn't able to write the updated downward API file after the pod update was issued. The "device or resource busy" error when trying to remove the old data is the problem. Not sure why it hit that error.

@derekwaynecarr
Member

Given Andy's comment, do people agree with a lower priority?

@smarterclayton
Contributor Author

Should the downward API retry? Does it already?

@ncdc
Contributor

ncdc commented Feb 6, 2017

It does retry, but I don't think the retrying is specific to the downward API. It's just the kubelet and the volume code trying to set up a volume mount.

@smarterclayton
Contributor Author

@ncdc
Contributor

ncdc commented Feb 8, 2017

@derekwaynecarr @sjenning @pmorie it's probably worth digging into this when someone has some free time to see what is holding the file open (assuming that's the reason).
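If someone does pick this up, one rough way to check the "something is holding the file open" theory on the node is to scan /proc for open descriptors under the volume directory named in the FailedMount event. A hypothetical helper, not part of the kubelet (and note that EBUSY on remove can also mean the path is itself a mount point, which an open-fd scan will not catch):

```go
// Hypothetical node-side helper: list processes with file descriptors open
// under a given directory, e.g. the kubernetes.io~downward-api/podinfo dir
// from the FailedMount event. Run as root on the node.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: openfds <dir>")
		os.Exit(1)
	}
	// e.g. /mnt/openshift-xfs-vol-dir/pods/<pod-uid>/volumes/kubernetes.io~downward-api/podinfo
	target := os.Args[1]

	fds, _ := filepath.Glob("/proc/[0-9]*/fd/*")
	for _, fd := range fds {
		dest, err := os.Readlink(fd)
		if err != nil {
			continue // fd went away or insufficient permission
		}
		if strings.HasPrefix(dest, target) {
			fmt.Printf("%s -> %s\n", fd, dest)
		}
	}
}
```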

@derekwaynecarr
Member

@pmorie -- can you look at this?

@pmorie
Contributor

pmorie commented Feb 13, 2017

This one flakes occasionally upstream as well, and has for quite a while; moving to P2.

@derekwaynecarr
Member

I assume this is the same issue as kubernetes/kubernetes#37456 (as of 11/26).

@0xmichalis
Contributor

0xmichalis commented Dec 5, 2017

Different failure, same test

/tmp/openshift/init/rpm/BUILD/origin-3.8.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:125
Timed out after 120.000s.
Expected
    <string>: content of file "/etc/labels": key1="value1"
    key2="value2"
    ...
    content of file "/etc/labels": key1="value1"
    key2="value2"
    
to contain substring
    <string>: key3="value3"
    
/tmp/openshift/init/rpm/BUILD/origin-3.8.0/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:124

https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/17589/test_pull_request_origin_extended_conformance_install/3493/

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label on Mar 5, 2018
@sdodson
Member

sdodson commented Mar 5, 2018

/remove-lifecycle stale

openshift-ci-robot removed the lifecycle/stale label on Mar 5, 2018
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci-robot added the lifecycle/stale label on Jun 3, 2018
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 3, 2018
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
