
hack/lib/start.sh uses openshift start #20655

Closed
dmage opened this issue Aug 15, 2018 · 14 comments
Labels: kind/post-rebase, lifecycle/rotten, sig/security

Comments

@dmage (Contributor) commented Aug 15, 2018

openshift start was removed in #20344, but our hack scripts still use it (for example, make test-extended, os::start::all_in_one).
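A quick way to audit for this class of breakage is to search the hack scripts for the removed `openshift start` command and the `os::start::all_in_one` helper built on it. This is a sketch, not part of the original report; the `origin-demo` tree below is a stand-in for a real origin checkout, created only so the example is self-contained.

```shell
# Create a stand-in for an origin checkout with one offending script.
mkdir -p origin-demo/hack/lib
printf '%s\n' 'openshift start --write-config="${SERVER_CONFIG_DIR}"' \
  > origin-demo/hack/lib/start.sh

# List files under hack/ that still reference the removed command
# or the all-in-one helper.
grep -rln -e 'openshift start' -e 'os::start::all_in_one' origin-demo/hack
```

In a real checkout you would run the `grep` against `hack/` (and `test/`) directly; every file it lists needs to be migrated off `openshift start`.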

@dmage (Contributor, author) commented Aug 15, 2018

@bparees (Contributor) commented Aug 15, 2018

/assign @smarterclayton @deads2k

@dmage (Contributor, author) commented Aug 15, 2018

For those who hit the same problem:

$ oc cluster up --tag=latest
$ oc login -u system:admin
$ KUBECONFIG=~/.kube/config TEST_ONLY=1 make test-extended SUITE="core"

@smarterclayton (Contributor) commented:

I think I'll probably just remove the magic of test-extended starting a cluster (making TEST_ONLY=1 the only supported behavior).

@stevekuznetsov (Contributor) commented:

Can we poll for consensus on openshift-dev@ first?

@bparees (Contributor) commented Aug 16, 2018

I haven't used make test-extended in years; I've always started my own cluster and used TEST_ONLY=1.

@enj (Contributor) commented Aug 16, 2018

@openshift/sig-security both the LDAP and GSSAPI extended tests are now broken.

Wrote master config to: /tmp/openshift/ldap_groups/openshift.local.config/master/master-config.yaml
Running hack/lib/start.sh:341: executing 'oc get --raw /healthz --as system:unauthenticated --config='/tmp/openshift/ldap_groups/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
FAILURE after 79.581s: hack/lib/start.sh:341: executing 'oc get --raw /healthz --as system:unauthenticated --config='/tmp/openshift/ldap_groups/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s: the command timed out
Standard output from the command:
Standard error from the command:
The connection to the server 172.18.7.243:8443 was refused - did you specify the right host or port?
... repeated 187 times
[ERROR] PID 28940: hack/lib/cmd.sh:617: `return "${return_code}"` exited with status 1.
[INFO] 		Stack Trace: 
[INFO] 		  1: hack/lib/cmd.sh:617: `return "${return_code}"`
[INFO] 		  2: hack/lib/start.sh:341: os::cmd::try_until_text
[INFO] 		  3: hack/lib/start.sh:227: os::start::all_in_one
[INFO] 		  4: test/extended/ldap_groups.sh:30: os::start::server
[INFO]   Exiting with code 1.
/data/src/github.com/openshift/origin/hack/lib/log/system.sh: line 31: 29037 Terminated              sar -A -o "${binary_logfile}" 1 86400 > /dev/null 2> "${stderr_logfile}"
[INFO] No compiled `junitreport` binary was found. Attempting to build one using:
Wrote master config to: /tmp/openshift/gssapi/openshift.local.config/master/master-config.yaml
Running hack/lib/start.sh:341: executing 'oc get --raw /healthz --as system:unauthenticated --config='/tmp/openshift/gssapi/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s...
FAILURE after 79.701s: hack/lib/start.sh:341: executing 'oc get --raw /healthz --as system:unauthenticated --config='/tmp/openshift/gssapi/openshift.local.config/master/admin.kubeconfig'' expecting any result and text 'ok'; re-trying every 0.25s until completion or 80.000s: the command timed out
Standard output from the command:
Standard error from the command:
The connection to the server 172.18.11.24:8443 was refused - did you specify the right host or port?
... repeated 185 times
[ERROR] PID 28874: hack/lib/cmd.sh:617: `return "${return_code}"` exited with status 1.
[INFO] 		Stack Trace: 
[INFO] 		  1: hack/lib/cmd.sh:617: `return "${return_code}"`
[INFO] 		  2: hack/lib/start.sh:341: os::cmd::try_until_text
[INFO] 		  3: hack/lib/start.sh:227: os::start::all_in_one
[INFO] 		  4: test/extended/gssapi.sh:49: os::start::server
[INFO]   Exiting with code 1.
[ERROR] Expected no test suites to be marked as in-flight at the end of testing, got 1
make: *** [test-extended] Error 1
++ export status=FAILURE
++ status=FAILURE
+ set +o xtrace
########## FINISHED STAGE: FAILURE: RUN EXTENDED TESTS [00h 01m 26s] ##########

@smarterclayton (Contributor) commented:

There is no supported path now without oc cluster up that involves a node. Anyone who needs a node in those tests should be using oc cluster up or the correct extended flow; anyone who doesn't need a node should stop using os::start::all_in_one.

@stevekuznetsov (Contributor) commented:

Whoever made the change should update os::start::all_in_one so their change doesn't break people.

@openshift-bot (Contributor) commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 14, 2018
@enj enj added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 19, 2018
@soltysh soltysh assigned soltysh and unassigned smarterclayton and deads2k Nov 21, 2018
@soltysh soltysh removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Nov 21, 2018
@openshift-bot (Contributor) commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 19, 2019
@openshift-bot (Contributor) commented:

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 21, 2019
@openshift-bot (Contributor) commented:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot commented:

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
