
Ignoring Primary desiredReplicas sync from Canary desiredReplicas #1349

Closed · wants to merge 1 commit

Conversation

@hiakki commented Jan 24, 2023

Fixes - #1347

Ref - https://docs.flagger.app/usage/how-it-works

The autoscaler reference is optional, when specified, Flagger will pause the traffic increase while the target and primary deployments are scaled up or down. HPA can help reduce the resource usage during the canary analysis. When the autoscaler reference is specified, any changes made to the autoscaler are only made active in the primary autoscaler when a rollout for the deployment starts and completes successfully. Optionally, you can create two HPAs, one for canary and one for the primary to update the HPA without doing a new rollout. As the canary deployment will be scaled to 0, the HPA on the canary will be inactive.
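For context, wiring an autoscaler into a canary looks roughly like this (a minimal sketch based on the Flagger docs; the `podinfo` names are placeholders, and the `service`/`analysis` sections are omitted):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  # Deployment that Flagger clones into podinfo-primary
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # Optional: Flagger also clones this HPA for the primary
  autoscalerRef:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    name: podinfo
  # service/analysis sections omitted for brevity
```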

Q. Why did we want to use 2 explicit HPAs?
A. Use case - If our HPA had minReplicas = 100, maxReplicas = 200, and currently desiredReplicas = 150, then during a deployment Flagger would set up the canary HPA with minReplicas = 100, and those extra canary pods would throttle the DB connection pool. By using 2 different HPAs for canary and primary, we can define a lower minReplicas for the canary HPA.

Issue - When using 2 explicit HPAs, the primary's desiredReplicas gets synced from the canary's desiredReplicas. Example -

Canary HPA -

  • minReplicas - 1
  • maxReplicas - 10
  • desiredReplicas - 5

Primary HPA -

  • minReplicas - 100
  • maxReplicas - 200
  • desiredReplicas - 150

So, after the canary analysis, the primary HPA's desiredReplicas is set to the canary HPA's desiredReplicas. Our live production app now ends up with desiredReplicas = 5: an app that was being served by 150 pods is suddenly served by only 5 pods. Hence, total production downtime.
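A sketch of the two-HPA setup described above, using the numbers from the example (the `myapp` names are hypothetical; Flagger names the primary deployment `<target>-primary`):

```yaml
# Canary HPA: low minReplicas so the canary doesn't exhaust the DB pool
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
---
# Primary HPA: sized for production traffic
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-primary
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-primary
  minReplicas: 100
  maxReplicas: 200
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```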

@hiakki hiakki changed the title Ignoring Primary replicas sync from Canary Ignoring Primary desiredReplicas sync from Canary desiredReplicas Jan 24, 2023
@stefanprodan (Member)
Please see #1343 which allows you to set a different autoscaling config for primary.
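If I read #1343 correctly, it adds a `primaryScalerReplicas` field under `autoscalerRef`, so a single HPA can be referenced while the generated primary HPA gets its own replica bounds, roughly (a sketch, values taken from the example above):

```yaml
  autoscalerRef:
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    name: podinfo
    # Overrides min/max on the HPA Flagger generates for podinfo-primary
    primaryScalerReplicas:
      minReplicas: 100
      maxReplicas: 200
```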
