Add kubernetes_cluster_agent_pool #4046
Conversation
Does this also fix #3971?

},
"resource_group_name": azure.SchemaResourceGroupName(),

"agent_pool_type": {
Agent Pools are only supported with VMSS. Should we care about this setting?
Setting the default to AvailabilitySet doesn't seem to make sense:
https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#limitations
@djsly is correct, we will not be supporting multiple agent pools with VMAS moving forward. We will still have VMAS clusters with a single pool, but if you want multiple pools you'll need to back it with VMSS.
👍 looks like we can remove/default this
Force-pushed from 48f89ae to 3a369b6
@tombuildsstuff I just rebased this patch on master

@Ant59 Yes!
Force-pushed from 3a369b6 to 30a2afb
@tomasaschan @jluk Tests added!

@titilambert needs a rebase. Eagerly awaiting this 🤤
Force-pushed from 7a87418 to 3796ae0
@worldspawn @tombuildsstuff rebased!

@@ -107,6 +107,7 @@ func resourceArmKubernetesCluster() *schema.Resource {
"agent_pool_profile": {
	Type:     schema.TypeList,
	Required: true,
	MaxItems: 1,
@titilambert firstly sorry for the delay reviewing this!

I've spent the last few days working through trying to reconcile the open AKS PRs (#4676, #4543, #4472, #4256 and this one) to work out how to move forward.

Having spent some time on this, I believe there are a few things needed to fix this/get these in:

1. Introduce a new `default_node_pool` block which is limited to a single element and replaces the existing `agent_pool_profile` block (which can then be removed in 2.0) - this also allows us to resolve #4465 (the planned AKS API default swap to VMSS + SLB, from VMAS + BLB)
2. Support more conditional/delta updates within the AKS resource - allowing `ignore_changes` to properly work for the `count` field (amongst others) - which fixes #4560 (a cluster with multiple agent pools keeps getting re-created), #4075 (an autoscaling-enabled cluster errors on apply even though no changes are planned), #3428 (provider produced an invalid plan when `addon_profile` is in `ignore_changes`) and #2993 (upgrading the cluster via the Azure Portal causes Terraform to try to recreate it)
3. Add more tests covering the open bugs/use-cases
4. Add support for multiple node pools (via this PR) - which fixes #3971 (adding node pools causes cluster replacement) and #3835 (Terraform replaces the cluster when changing the VM count)
5. Add support for the Standard Load Balancer via #4472 (support outbound IPs when using a standard load balancer)

Whilst that's not ideal - since 1-3 will supersede #4256 and #4543 (as the `agent_pool_profile` block is replaced) - this means users only have a breaking behavioural change once, via the new block. Unfortunately, whatever happens it'll cause merge conflicts with #4472, but we can deal with that once the multiple node pools issue is resolved.

I've got the changes for 1-2 locally and I'm working through 3 at the moment - once that's done we should be able to get this merged :)

Since quite a few of those PRs touch this file (`azurerm/resource_arm_kubernetes_cluster.go`) I think it might be worth backing these changes out of this PR (via `git reset HEAD~<numberofcommits>`, rather than `git revert <sha>`) to avoid a series of merge conflicts with this PR - WDYT?
I figured there might be some merge conflicts with my code. I'm also happy to create a new PR after this one gets merged, if that makes things easier. 👍
Thanks for considering it.
hey @titilambert @djsly

Thanks for this PR - as mentioned in the earlier comment I'm going to rebase this on top of #4898 now that that's shipped.

Taking a look through, this is looking good - I've fixed up the issues I commented on during the rebase. There's one thing I've intentionally removed during the rebase for the moment, which is `orchestrator_version` - since this wants to be added to both the `azurerm_kubernetes_cluster` resource and this resource at the same time, I figured this'd be easiest to review in a follow-up PR (since for now the split-out resource features the same fields as the built-in `default_node_pool` block).

As discussed offline, I hope you don't mind but I'm going to push the rebased/fixed-up commits - since the `azurerm_kubernetes_cluster` resource has changed quite substantially in #4898 I've had to `git reset HEAD~N` to resolve the merge conflicts here, but I've added you as a co-author to the commit, so you'll still get the Git credit here 👍

Thanks!
},

// AKS name
// TODO replace by aks ID
👍
	string(containerservice.Linux),
	string(containerservice.Windows),
}, true),
DiffSuppressFunc: suppress.CaseDifference,
we're making all new fields case-sensitive - as such, can we remove this:

DiffSuppressFunc: suppress.CaseDifference,
ValidateFunc: validation.StringInSlice([]string{
	string(containerservice.Linux),
	string(containerservice.Windows),
}, true),
we're making all new fields case-sensitive - as such, can we make this:

- }, true),
+ }, false),
	Optional:     true,
	Computed:     true,
	ValidateFunc: validate.NoEmptyStrings,
},
For the moment I'd suggest we drop this field from the split-out resource; once this is merged we can then add it to both this resource and the `default_node_pool` block in the main resource, adding tests for both. My main question here is whether any locking's required when provisioning/updating agents.

Type: schema.TypeString,
Required: true,
ForceNew: true,
DiffSuppressFunc: suppress.CaseDifference,
we're making all new fields case-sensitive - as such, can we remove this:

DiffSuppressFunc: suppress.CaseDifference,
"node_count": {
	Type:     schema.TypeInt,
	Optional: true,
	Default:  1,
given users are almost always going to override this, I think it'd make more sense to make this Required and remove the default?
Force-pushed from 84ee5e8 to ccb5f4f
@titilambert unfortunately rebasing this has broken the PR in GitHub - I hope you don't mind but I'm going to open another PR which includes these commits, so you'll still get the credit here :(

This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
  version = "~> 1.37.0"
}

# ... other configuration ...
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
Fixes: #4001