Presence of >1 inactive GSS can cause a rolling update to become stuck until an allocation ends #2574
Is this a similar issue to #2432?

Hey @roberthbailey, I think it is similar to #2432, but that one would have been resolved by #2420. More generally, I think #2420 fixed issues involving 1 inactive GSS with allocated replicas; however, the issue may still occur if there is more than 1.

I think this may have been fixed by #2623 - we should re-test.

RC comes out tomorrow, so that would be a good opportunity 😄

Just touching base again to see if this issue has been resolved for you?

Sorry for the delay in replying; I can confirm that this has been resolved as of
What happened:

Setup:
- Created a Fleet (GameServerSet `v0`) and allocated GameServers from it
- Updated the Fleet (creating GameServerSet `v1`)
- Allocated GameServers from `v1`
- Updated the Fleet again (creating GameServerSet `v2`)

As a result, the second rolling update became stuck in the following state:
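For context, rolling updates like the ones above are driven by the Fleet's standard strategy fields. A minimal sketch of such a Fleet manifest (the name, image tag, and replica count are illustrative, not taken from the original report; bumping the image tag is what triggers each rolling update):

```yaml
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: example-fleet          # illustrative name
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%      # the rolling update "buffer"
  template:
    spec:
      ports:
        - containerPort: 7654
      template:
        spec:
          containers:
            - name: example-server       # illustrative
              image: example/server:v0   # updated to :v1, then :v2
```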
There is a `v1` GameServer `Ready` 10m after `v2` was created. It will remain until one of the allocations from `v1` or `v0` ends. While this is unlikely to cause an issue in an environment with a lot of allocation churn, it can lead to a poor development experience in test environments where updates happen often and GameServers remain allocated for long periods.

What you expected to happen:
The `v1` GameServerSet to be scaled to 0 Desired replicas, and all `Ready` `v1` replicas to be terminated as soon as the buffer size was satisfied.

How to reproduce it (as minimally and precisely as possible):
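The expectation above can be expressed as a small decision rule: once the active GameServerSet satisfies the buffer, every inactive set should keep only its Allocated replicas, no matter how many inactive sets exist. A minimal Go sketch under a deliberately simplified model (the `gss` type, `desiredReplicas` function, and all counts are hypothetical, not Agones code):

```go
package main

import "fmt"

// gss is a simplified, hypothetical stand-in for an Agones
// GameServerSet's replica counts (not the real API type).
type gss struct {
	name      string
	ready     int // Ready (unallocated) replicas
	allocated int // Allocated replicas; these can never be deleted
}

// desiredReplicas sketches the expected scale-down rule: once the
// active GameServerSet has enough Ready replicas to satisfy the
// rolling-update buffer, every inactive set is scaled down to just
// its Allocated replicas (its Ready replicas go to 0), regardless
// of how many inactive sets exist.
func desiredReplicas(inactive []gss, activeReady, buffer int) map[string]int {
	out := make(map[string]int, len(inactive))
	for _, s := range inactive {
		if activeReady >= buffer {
			out[s.name] = s.allocated // drain all Ready replicas
		} else {
			out[s.name] = s.allocated + s.ready // buffer not yet satisfied
		}
	}
	return out
}

func main() {
	// v0 and v1 both still hold allocations; v2 is the active set.
	inactive := []gss{{"v0", 0, 2}, {"v1", 3, 1}}
	fmt.Println(desiredReplicas(inactive, 5, 5)) // → map[v0:2 v1:1]
}
```

In this sketch the `v1` set's Ready replica would be removed as soon as `v2` provides the buffer, which is the behavior the report expected.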
Minimal reproduction as unit test: #2575
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): v1.21.5