feat!: add create_before_destroy=true to node pools #357
Merged
```diff
@@ -11,4 +11,4 @@ variable "location" {
 variable "resource_group_name" {
   type    = string
   default = null
 }
```
At least README notes are required if we "hardcode" this. Node pool names need to be unique, and `create_before_destroy = true` will create another node pool with the same name by default.
Yep, definitely. Since the change is hard-coded (and can only be hard-coded), I think we should also hard-code a random suffix onto the node pool name?
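A minimal sketch of what that suggestion could look like (variable and resource names here are hypothetical, not the module's actual code). Note that a plain `random_string` is generated only once, so on its own it does not yet regenerate the suffix when the pool is re-created:

```hcl
# Hypothetical sketch: append a random suffix so a replacement pool
# created by create_before_destroy does not collide with the old
# pool's name. var.pool_name / var.cluster_id / var.vm_size are
# illustrative inputs.
resource "random_string" "pool_suffix" {
  length  = 4
  lower   = true
  upper   = false
  numeric = false
  special = false
}

resource "azurerm_kubernetes_cluster_node_pool" "pool" {
  # AKS Linux pool names allow at most 12 lowercase alphanumeric
  # characters, so the base name must stay short enough for a
  # 4-character suffix.
  name                  = "${var.pool_name}${random_string.pool_suffix.result}"
  kubernetes_cluster_id = var.cluster_id
  vm_size               = var.vm_size

  lifecycle {
    create_before_destroy = true
  }
}
```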
Yeah, I like this random suffix idea.

Btw, it's possible to make `create_before_destroy = true` dynamic: we can declare two similar resources with a toggle switch, one containing `create_before_destroy = true` and another that doesn't. But in this case we won't need that trick; I agree with you, the node pool should contain `create_before_destroy`.

I have another question @the-technat: even with `create_before_destroy = true`, how can we achieve seamless node pool updates? The old pool would be destroyed immediately once the new pool is ready; would that cause downtime?

Another option is to maintain the node pool in a blue-green pattern: the upgrade would create a blue pool, then drain the green pool, then destroy the green pool. I would consider that plan B if re-creating a node pool with `create_before_destroy` still causes downtime.
So the random suffix is implemented, WDYT?

Regarding the dynamic variant: I think it's not DRY if we have a switch-case for the node pools, since every feature must be implemented on both. In addition, it would just add many more edge cases that I'm not sure we want to test, which is why I'm for hard-coding that thing.

Regarding your other question: that totally depends on how the AzureRM API implements deletion of the old node pool. I assume it starts by cordoning all the nodes and then draining them, which results in workloads being moved to the new nodes. That should give you a seamless update. And to ensure 100% availability of the apps, they have to use PDBs (which they should use anyway) to prevent K8s from draining nodes too fast (draining respects PDBs and waits until the pods are started on the new nodes).
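As a hedged illustration of the PDB point (hypothetical app and label names, expressed through the `hashicorp/kubernetes` provider to stay in Terraform): a PodDisruptionBudget like the one below makes the eviction API keep a minimum number of replicas running while the old pool's nodes are drained:

```hcl
# Sketch: a PodDisruptionBudget for a hypothetical "my-app"
# workload. During the drain of the old node pool, evictions are
# throttled so that at least two replicas stay available while
# pods are rescheduled onto the new nodes.
resource "kubernetes_pod_disruption_budget_v1" "app" {
  metadata {
    name      = "app-pdb"
    namespace = "default"
  }

  spec {
    min_available = 2

    selector {
      match_labels = {
        app = "my-app"
      }
    }
  }
}
```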
I'm wondering: would the current implementation's `random_string` result be refreshed on the node pool's re-creation?
Yeah, we do need sufficient tests for this Terraform circus show 😃
Hi @the-technat, I've done some tests on my side and the results are good: I tried updating the name, force-new attributes, and update-in-place attributes, and all scenarios work as expected. I would encourage you to run the same tests on your side so we can be on the same page.
@lonegunmanb I did some tests with your suggestion and it worked fine. Now I also see why my approach wouldn't work (it recreates the node pool for every non-breaking attribute too...).

I updated the PR with your approach and fixed one test (node pool names can now only be up to 8 characters, since the module itself adds 4). Still not very happy with the `null_resource`, but I guess that's our only option. Maybe it's worth creating a feature request for the AKS API to implement a new field `use_name_prefix=true` on the node pool that would automatically use the given name as a prefix and inject a random suffix if the node pool gets recreated.

Sorry that it took so long to understand why your approach was needed. I definitely underestimated the effort to implement this "feature".
@the-technat Terraform 1.4 has added a new built-in resource, `terraform_data`, to replace `null_resource`, so I'm thinking about whether we should use this new resource or not. Using it could set us free from the `null` provider, but it also requires Terraform Core `>= 1.4`.

This `null_resource` + `md5` + `replace_triggered_by` + `create_before_destroy` pattern can be used in multiple scenarios, and I think Pulumi has faced the very same problem, which is why they introduced auto-naming.
Interesting, `terraform_data` definitely looks like a better approach.

Because this is a breaking change anyway, I guess we could do it. Otherwise I'd be afraid of bumping the required Terraform version to `>= 1.4`, as that affects all parent and sibling modules.

@mkilchhofer WDYT about using `terraform_data`?
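For comparison, a sketch of how the `terraform_data` variant could look (requires Terraform `>= 1.4`; names are illustrative, and the name/suffix logic is elided):

```hcl
# terraform_data is built into Terraform Core, so the null provider
# dependency disappears; otherwise the pattern is unchanged.
resource "terraform_data" "pool_name_keeper" {
  input = var.pool_name
}

resource "azurerm_kubernetes_cluster_node_pool" "pool" {
  # ... name/suffix and other attributes as before ...
  kubernetes_cluster_id = var.cluster_id
  vm_size               = var.vm_size

  lifecycle {
    create_before_destroy = true
    replace_triggered_by  = [terraform_data.pool_name_keeper]
  }
}
```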