Describe the bug
If you pass the --request-timeout 30s argument to k9s and open pod logs, the stream disconnects after ~30 seconds with the following message on screen (when using the old k9s version v0.24.15):
stream failed: &http.httpError{err:"context deadline exceeded (Client.Timeout or context cancellation while reading body)", timeout:true}
With the latest released version nothing appears on screen, but the message still shows up in the k9s log file:
5:06AM WRN Stream READ error "XXX_REDACTED_XXX"::"default" error="context deadline exceeded (Client.Timeout or context cancellation while reading body)"
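For context, this is consistent with how Go's net/http handles client timeouts: http.Client.Timeout bounds the entire exchange, including reading the response body, so it also cuts off long-lived streams. Below is a minimal sketch (my assumption, not k9s code) that reproduces the same error outside of k9s, reading a follow-mode log stream through a local kubectl proxy; the pod name and URL are hypothetical:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// http.Client.Timeout bounds the whole exchange, including reading the
	// response body, so a follow-mode log stream is cut off after 30s.
	client := &http.Client{Timeout: 30 * time.Second}

	// Hypothetical endpoint via `kubectl proxy`, standing in for the
	// pod-log request k9s issues.
	resp, err := client.Get("http://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/log?follow=true")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// After ~30 seconds this read fails with:
	//   context deadline exceeded (Client.Timeout or context cancellation while reading body)
	_, err = io.Copy(io.Discard, resp.Body)
	fmt.Println("stream ended:", err)
}
```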
Expected behavior
--request-timeout should affect only requests, not log streams.
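One possible mitigation sketch (an assumption on my part, not the actual k9s fix): apply the request timeout per call via a context deadline, and leave the client-wide timeout unset so streaming requests are unaffected. The pod name and namespace here are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	base, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// Leave the client-wide timeout unset so streaming bodies are never cut off.
	cfg := rest.CopyConfig(base)
	cfg.Timeout = 0

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ordinary (unary) requests still get a 30s bound via a context deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	pods, err := client.CoreV1().Pods("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("pods:", len(pods.Items))

	// The log stream uses a context without a deadline and runs indefinitely.
	req := client.CoreV1().Pods("default").GetLogs("my-pod", &corev1.PodLogOptions{Follow: true})
	stream, err := req.Stream(context.Background())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	_, _ = io.Copy(io.Discard, stream)
}
```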
Versions (please complete the following information):
OS: linux
K9s: v0.24.15 and v0.25.12
K8s: v1.18.10
Additional context
It would also be great if k9s gave some indication that the log stream has disconnected and needs to be reconnected (maybe a red exclamation point next to the since-time indicator).
The old-style error message in the pod logs confused me multiple times, and I had a hard time trying to find out why our services had HTTP disconnects for no reason 😄