Collect logging for exited containers #70
Conversation
For any containers with non-0 restart counts, collect logging for exited containers using `kubectl logs ... --previous`.
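The collection step described above could be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the function names are hypothetical, and it assumes the pod-status shape returned by `kubectl get pods -o json`.

```python
import json
import subprocess


def containers_needing_previous_logs(pod):
    """Return names of containers in a pod whose restartCount is non-zero.

    `pod` is one item from the `kubectl get pods -o json` output.
    """
    statuses = pod.get("status", {}).get("containerStatuses", [])
    return [s["name"] for s in statuses if s.get("restartCount", 0) > 0]


def collect_previous_logs(namespace="default"):
    # Hypothetical driver: for each restarted container, capture the
    # exited container's output with `kubectl logs ... --previous`.
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for pod in json.loads(out)["items"]:
        for name in containers_needing_previous_logs(pod):
            subprocess.run(
                ["kubectl", "logs", pod["metadata"]["name"],
                 "-n", namespace, "-c", name, "--previous"],
                check=False,  # previous logs may have been rotated away
            )
```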
FYI @sivanov-nuodb, I created the branch. The root cause seems to be that the container exits unexpectedly and then takes a long time to acquire the lease on restart. I am not sure what is causing the crashes on 2.5.0, or whether they are still relevant in newer versions.
Thanks for adding this!
Thanks for creating the branch so that we can investigate further! The webhook server is started immediately and does not wait for the leader assignment, so I agree that the webhook error (connection refused) is due to the operator container being restarted. The operator will fail if the
Okay, so it is no longer an issue. I will keep using 2.6.1 rather than explicitly disabling the backup manager, since it gives us more coverage of past product versions. Currently we have coverage of 2.5.0 via KWOK, and both the KWOK and Minikube variants exercise 2.7.0 (latest).