Openshift Authentication Status Unknown #3960
Comments
Hi, what does the cluster operator say about it? Are there any CrashLoopBackOff pods?
Are you able to share those two outputs?
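For anyone gathering the same outputs, these are the usual commands (a plain sketch; the grep filter is just one way to surface crashlooping pods):

$ oc describe co authentication
# the Conditions section explains why the operator reports Unknown
$ oc get pods --all-namespaces | grep -v -E 'Running|Completed'
# anything left over, e.g. CrashLoopBackOff, is worth a closer look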
Hello, please see the output below. No crashlooping pods.
[root@bastion ~]# oc describe co authentication
[root@bastion ~]# oc logs console-55f9c8b8cf-8rnhn
I also saw this issue with an OpenShift 4.5 cluster. Here is my observation: when the cluster is created, the router pods are running on the control-plane nodes, while my DNS is configured to resolve *.apps.example.com to the compute nodes. That is why the console pods were hitting the error above. I restarted the router pods so they would be scheduled on the compute nodes, and after that it started working.
root:~# oc get co
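A minimal sketch of that restart, assuming the default router deployment in openshift-ingress (the namespaces are the standard ones in OCP 4.x, but verify on your cluster):

$ oc get pods -n openshift-ingress -o wide
# the NODE column shows where the routers landed
$ oc delete pods -n openshift-ingress --all
# the deployment recreates them on currently schedulable nodes
$ oc get pods -n openshift-console -o wide
# console pods should go Ready once *.apps resolves to a node running a router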
Thanks for the comment. I will try this and update here.
I checked and ensured that the pods in the openshift-ingress namespace were running on the worker nodes, but the same issue persists.
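If the routers keep landing on the wrong nodes, their placement is driven by the default IngressController; a quick way to inspect it (resource and namespace names are the standard ones, adjust if yours differ):

$ oc get ingresscontroller default -n openshift-ingress-operator -o yaml
# check spec.nodePlacement; when unset, routers target nodes carrying the worker role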
This issue persists in my case since my master nodes have the worker role as well.
I ensured my masters hold only the master role and not the worker role, but the issue still persists.
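For reference, a quick way to check the roles and whether regular workloads can schedule on masters (the schedulers resource name is standard in OCP 4.x):

$ oc get nodes
# the ROLES column shows master/worker per node
$ oc get schedulers.config.openshift.io cluster -o jsonpath='{.spec.mastersSchedulable}'
# false means ordinary pods, routers included, stay off the masters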
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. /lifecycle stale
"ensured my masters hold only master role and not worker role, but the issue still persist"+1 |
Same issue encountered in 4.5.6.
[root@bastion ~]# oc get pods
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hello,
I am deploying OCP 4.5.3 and everything gets installed, but when the installation finishes I notice that in the output of "oc get co" the authentication status is Unknown and the console status is False. The snapshot is attached. Could you please assist me in fixing this issue?
Version 4.5.3
Platform:
I see the following errors when I run the command "oc logs podname -n openshift-console".
The OpenShift console pods are running, but not ready. I have included the snapshots.
2020-07-25T10:07:46Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ocp4.contoso.com/oauth/token failed: Head https://oauth-openshift.apps.ocp4.contoso.com: EOF
2020-07-25T10:07:56Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ocp4.contoso.com/oauth/token failed: Head https://oauth-openshift.apps.ocp4.contoso.com: EOF
2020-07-25T10:08:06Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ocp4.contoso.com/oauth/token failed: Head https://oauth-openshift.apps.ocp4.contoso.com: EOF
2020-07-25T10:08:16Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ocp4.contoso.com/oauth/token failed: Head https://oauth-openshift.apps.oc
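The EOF here means the console never gets a response from the OAuth endpoint, which usually points at DNS or routing for the *.apps domain rather than at the console itself. A few hedged checks (the hostnames match the logs above; the route name and namespace are the standard ones in OCP 4.x):

$ dig +short oauth-openshift.apps.ocp4.contoso.com
# should resolve to the load balancer or nodes actually running the routers
$ oc get route oauth-openshift -n openshift-authentication
# confirms the OAuth route exists and shows its host
$ curl -kI https://oauth-openshift.apps.ocp4.contoso.com/healthz
# a healthy OAuth server answers the HEAD request; EOF or a timeout points back at the routers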