I have an nginx-ingress-controller running in my GKE cluster that, together with cert-manager, manages ingress for my JupyterHub Helm chart.
After an upgrade to the ingress controller, without any change to the hub pod, proxy pod, or autohttps pod, I think I caused a major failure for my JupyterHub users that persisted even after the ingress controller was back online.
Users experienced
On refreshing their browsers, users were served a 503 error page instead of their servers.
Resolving the faulty state
Restarting the hub resolved the issue.
Analysis
The proxy pod runs the configurable-http-proxy (CHP), among other things. So I think that perhaps the CHP's routing state was reset or became invalid, and that the hub didn't know it needed to update it.
Why was the CHP state lost or invalidated? I don't know; unfortunately I don't have logs for this event.
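One way to pin down whether this is what happened would be to inspect CHP's routing table the next time the symptom appears. Below is a rough diagnostic sketch: CHP exposes its routes over a REST API (`GET /api/routes`, authenticated with the proxy token). The service URL, port, and token handling here are assumptions about a typical z2jh-style deployment, not values taken from this cluster.

```python
# Diagnostic sketch (hypothetical deployment values): check whether CHP's
# routing table still contains the default route back to the hub.
import json
import urllib.request


def fetch_routes(api_url: str, token: str) -> dict:
    """Fetch CHP's routing table via its REST API (GET /api/routes)."""
    req = urllib.request.Request(
        api_url, headers={"Authorization": f"token {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def missing_default_route(routes: dict) -> bool:
    """True if the '/' route (traffic to the hub) has been lost."""
    return "/" not in routes


# A healthy table has '/' pointing at the hub service; after a state
# reset the table can come back empty, and every request 404s/503s
# until the hub re-registers its routes.
healthy = {"/": {"target": "http://hub:8081"}}
reset = {}
```

If `missing_default_route` returns `True` on a live proxy, that would confirm the "CHP state got reset" theory without having to guess from user reports.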
Preliminary solution idea
Note that I lack understanding of the CHP project in general. My comprehension can be summarized as: "I think it is something that redirects traffic based on how JupyterHub configures it", and I mostly just assume that configuration happens through an API.
Perhaps we could make the state external to the CHP pod, for example in a Kubernetes ConfigMap. Then if the autohttps pod restarts, its state would not need to be set up again from scratch; it would simply restore itself from the ConfigMap.
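To make the idea above concrete, here is a minimal sketch of the serialization half: packing CHP's routing table into ConfigMap `data` (which must be string-valued) and restoring it on startup. The ConfigMap key and route shapes are invented for illustration; this is not an existing z2jh feature.

```python
# Sketch of externalizing CHP route state into a ConfigMap.
# CONFIGMAP_KEY is a hypothetical key name, not an existing convention.
import json

CONFIGMAP_KEY = "chp-routes.json"


def routes_to_configmap_data(routes: dict) -> dict:
    """Pack the routing table into ConfigMap 'data' (string values only)."""
    return {CONFIGMAP_KEY: json.dumps(routes, sort_keys=True)}


def routes_from_configmap_data(data: dict) -> dict:
    """Restore the routing table from ConfigMap 'data'; empty if absent."""
    raw = data.get(CONFIGMAP_KEY)
    return json.loads(raw) if raw else {}


# On restart, the pod would read the ConfigMap and replay each route
# into CHP (e.g. via POST /api/routes/<path>), instead of waiting for
# the hub to notice the state loss.
routes = {
    "/": {"target": "http://hub:8081"},
    "/user/alice": {"target": "http://10.0.3.7:8888"},
}
restored = routes_from_configmap_data(routes_to_configmap_data(routes))
```

The round trip should be lossless, so `restored == routes`.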
Step one is to pin down the issue, though.
I think this issue's description is too vague for us to ever be sure whether we've fixed it, so I'm closing it in favor of other issues, such as #1364.