Watch stream disconnecting - how to watch forever? #728
I built a workaround that handles the exception and reconnects, but I'm not sure whether this is the right solution or whether the Python Kubernetes client should be taking care of this?
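For reference, a workaround of this kind amounts to wrapping the watch in a reconnect loop. A minimal, library-agnostic sketch (this is not the commenter's actual code; `watch_forever` and `stream_factory` are hypothetical names, not part of the kubernetes client):

```python
import time

def watch_forever(stream_factory, max_backoff=30.0):
    """Yield events from a watch stream, reopening it whenever it ends or fails.

    stream_factory: a callable returning a fresh iterable of events, e.g. a
    closure around watch.Watch().stream(...). It is called again each time
    the previous stream is exhausted or raises.
    """
    backoff = 1.0
    while True:
        try:
            for event in stream_factory():
                backoff = 1.0  # events are flowing again; reset the backoff
                yield event
        except Exception:
            # The server closed the connection (the exception class depends
            # on the client library); wait briefly, then reconnect.
            time.sleep(min(backoff, max_backoff))
            backoff *= 2
```

The broad `except Exception` keeps the sketch short; a real handler should catch only the connection-related exceptions it expects and re-raise anything else.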
Spoke too soon: with the above workaround I'm now running into #701.
Running into the same issue; got anything?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Still an issue.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen - not sure why closing issues based on elapsed time is a thing. |
@richstokes & @govindKAG: facing the same problem at our end. By any chance have you found a solution or workaround?
Has anyone figured out a reliable solution to this problem? |
@logicfox A given Kubernetes API server only preserves a historical list of changes for a limited time (5-15 minutes, depending on configuration). Kubernetes internally uses etcd3, which by default keeps only the last 5 minutes of changes. A watch request fails when the historical version of that resource is no longer available: if the specified resourceVersion is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 Gone error.
Every time a watch event arrives, the pod's resourceVersion needs to be preserved; then, in the watch's exception-handling block, restart the watch, passing in the last preserved resourceVersion.
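That restart-from-the-last-resourceVersion pattern can be sketched generically. In this sketch, `open_watch` is a hypothetical stand-in for `watch.Watch().stream(list_func, resource_version=...)`, events are plain dicts shaped like raw watch responses, and `LookupError` stands in for an `ApiException` with `status == 410`:

```python
def watch_with_resume(open_watch):
    """Yield watch events forever, resuming from the last seen resourceVersion.

    open_watch(resource_version=...) returns an iterable of event dicts; it
    may end normally (server timeout) or raise our stand-in "gone" error when
    the stored version has expired (HTTP 410 from a real API server).
    """
    last_rv = None
    while True:
        try:
            for event in open_watch(resource_version=last_rv):
                # Remember where we are so a reconnect can resume from here.
                last_rv = event["object"]["metadata"]["resourceVersion"]
                yield event
        except LookupError:
            # Stand-in for ApiException(status=410): the stored version has
            # expired, so restart from scratch (a real handler would re-list
            # the resources before watching again).
            last_rv = None
```

When the stream merely ends, the loop reconnects with the saved version and misses nothing; only on a 410 does it have to start over.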
Hi,
Is there a way to keep the watch stream connected forever? I am getting disconnected after approx. 5-10 minutes with this error:
The watch function I am using (which works great up until it gets disconnected):
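The snippet itself was not captured in this copy of the thread; a typical pod-watch function of the kind described looks roughly like this (a sketch only, assuming the `kubernetes` package and cluster credentials are available):

```python
def watch_pods(namespace="default"):
    """Stream pod events until the server drops the connection."""
    # Imported inside the function so the sketch can be read without the
    # package installed; running it requires `pip install kubernetes` and
    # access to a cluster.
    from kubernetes import client, config, watch

    config.load_kube_config()  # use config.load_incluster_config() in-cluster
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace=namespace):
        print(event["type"], event["object"].metadata.name)
```

This is exactly the shape of loop that stops after a few minutes: the API server closes idle watch connections, which is why the reconnect/resourceVersion handling discussed above is needed.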