
Add kubernetes_cluster_agent_pool #4046

Closed · wants to merge 0 commits

Conversation

titilambert (Contributor)

Fixes: #4001

Ant59 commented Aug 9, 2019

Does this also fix #3971?

},
"resource_group_name": azure.SchemaResourceGroupName(),

"agent_pool_type": {
djsly (Contributor) commented:

Agent Pools are only supported with VMSS. Should we care about this setting?

Setting the default to AvailabilitySet doesn't seem to make sense:

https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#register-multiple-node-pool-feature-provider

https://docs.microsoft.com/en-us/azure/aks/use-multiple-node-pools#limitations


@djsly is correct, we will not be supporting multiple agent pools with VMAS moving forward. We will still have VMAS clusters with a single pool, but if you want multiple pools you'll need to back it with VMSS.


👍 looks like we can remove/default this
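
For illustration, a minimal sketch of what defaulting this to VMSS might look like (assuming the AgentPoolType constants from the containerservice SDK and the plugin SDK's validation helpers; this is not part of this PR's diff):

"agent_pool_type": {
    Type:     schema.TypeString,
    Optional: true,
    ForceNew: true,
    // node pools are only supported on VMSS-backed clusters, so default to
    // VirtualMachineScaleSets rather than AvailabilitySet
    Default: string(containerservice.VirtualMachineScaleSets),
    ValidateFunc: validation.StringInSlice([]string{
        string(containerservice.VirtualMachineScaleSets),
    }, false),
},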

titilambert (Contributor, Author) commented Oct 16, 2019

@tombuildsstuff I just rebased this patch on master.
What do I have to do next?

titilambert (Contributor, Author)

@Ant59 Yes!

titilambert (Contributor, Author)

@tomasaschan @jluk Tests added!

worldspawn

@titilambert needs a rebase.

Eagerly awaiting this 🤤

titilambert (Contributor, Author)

@worldspawn @tombuildsstuff rebased!

@@ -107,6 +107,7 @@ func resourceArmKubernetesCluster() *schema.Resource {
"agent_pool_profile": {
Type: schema.TypeList,
Required: true,
MaxItems: 1,
tombuildsstuff (Contributor) commented:

@titilambert firstly sorry for the delay reviewing this!

I've spent the last few days working through trying to reconcile the open AKS PRs (#4676, #4543, #4472, #4256 and this one) to work out how to move forward.

Having spent some time on this, I believe there are a few things needed to fix this/get these in:

  1. Introduce a new default_node_pool block, limited to a single element, which replaces the existing agent_pool_profile block (which can then be removed in 2.0); this also allows us to resolve "Support planned AKS API default swap to VMSS + SLB (from VMAS + BLB)" (#4465). A sketch of such a block follows at the end of this comment.
  2. Supporting more conditional/delta updates within the AKS resource, allowing ignore_changes to properly work for the count field (amongst others), which fixes "azurerm_kubernetes_cluster with multiple agent pools keeps getting re-created" (#4560), "Autoscaling enabled AKS Cluster leads to error on terraform apply, tough no changes on aks planned" (#4075), "Provider produced invalid plan when addon_profile in ignore_changes" (#3428) and "Upgrading Kubernetes cluster via Azure Portal causes terraform to try to recreate it" (#2993).
  3. Adding more tests covering the open bugs/use-cases.
  4. Adding support for multiple node pools (via this PR), which fixes "azurerm_kubernetes_cluster: Adding node pools causes AKS cluster replacement" (#3971) and "Terraform replacing AKS nodepool cluster when changing VM count" (#3835).
  5. Adding support for Standard Load Balancer via "AKS: Support outbound IPs when using standard loadbalancer" (#4472).

Whilst that's not ideal, since 1-3 will supersede #4256 and #4543 (as the agent_pool_profile block is replaced), this means users only have a breaking behavioural change once, via the new block. Unfortunately, whatever happens it'll cause merge conflicts with #4472, but we can deal with that once the multiple node pools issue is resolved.

I've got the changes for 1-2 locally and I'm working through #3 at the moment - once that's done we should be able to get this merged :)

Since quite a few of those PRs are touching this file (azurerm/resource_arm_kubernetes_cluster.go), I think it might be worth backing these changes out of this PR (via git reset HEAD~<numberofcommits>, rather than git revert <sha>) to avoid a series of merge conflicts with this PR - WDYT?
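
For illustration, a minimal sketch of what a single-element default_node_pool block could look like (assuming the plugin SDK's schema and validation helpers; the field names and bounds shown are illustrative, not the final schema):

"default_node_pool": {
    Type:     schema.TypeList,
    Required: true,
    // limited to a single element; additional pools would be managed via the
    // split-out node pool resource from this PR instead
    MaxItems: 1,
    Elem: &schema.Resource{
        Schema: map[string]*schema.Schema{
            "name": {
                Type:     schema.TypeString,
                Required: true,
                ForceNew: true,
            },
            "vm_size": {
                Type:     schema.TypeString,
                Required: true,
                ForceNew: true,
            },
            "node_count": {
                Type:     schema.TypeInt,
                Required: true,
                // illustrative bounds only
                ValidateFunc: validation.IntBetween(1, 100),
            },
        },
    },
},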

A reply from another contributor:

I figured there might be some merge conflicts with my code. I'm also happy to create a new PR after this one gets merged, if that makes things easier. 👍

Thanks for considering it.

tombuildsstuff (Contributor) left a comment

hey @titilambert @djsly

Thanks for this PR - as mentioned in the earlier comment I'm going to rebase this on top of #4898 now that that's shipped.

Taking a look through, this is looking good - I've fixed up the issues I've commented about during the rebase. There's one thing I've intentionally removed during the rebase for the moment - the orchestrator_version - since this wants to be added to both the azurerm_kubernetes_cluster resource and this resource at the same time, and I figured this'd be easiest to review in a follow-up PR (since for now the split-out resource features the same fields as the built-in default_node_pool block).

As discussed offline, I hope you don't mind but I'm going to push the rebased/fixed up commits - since the azurerm_kubernetes_cluster resource has changed quite substantially in #4898 I've had to git reset HEAD~N to resolve the merge conflicts here, but I've added you as a co-author to the commit, so you'll still get the Git credit here 👍

Thanks!

},

// AKS name
// TODO replace by aks ID

👍

string(containerservice.Linux),
string(containerservice.Windows),
}, true),
DiffSuppressFunc: suppress.CaseDifference,

we're making all new fields case-sensitive - as such can we remove this:

Suggested change
DiffSuppressFunc: suppress.CaseDifference,

ValidateFunc: validation.StringInSlice([]string{
string(containerservice.Linux),
string(containerservice.Windows),
}, true),

we're making all new fields case-sensitive - as such can we make this:

Suggested change
}, true),
}, false),

Optional: true,
Computed: true,
ValidateFunc: validate.NoEmptyStrings,
},

for the moment I'd suggest we drop this field from the split-out resource; once this is merged we can then add it to both this resource and the default_node_pool in the main resource, adding tests for both - my main question here is around whether any locking is required when provisioning/updating agents

Type: schema.TypeString,
Required: true,
ForceNew: true,
DiffSuppressFunc: suppress.CaseDifference,

we're making all new fields case-sensitive - as such can we remove this:

Suggested change
DiffSuppressFunc: suppress.CaseDifference,

"node_count": {
Type: schema.TypeInt,
Optional: true,
Default: 1,

given users are almost always going to override this, I think it'd make more sense to make this Required and remove the default?
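
A minimal sketch of that suggestion (Required with the default removed; illustrative only, not a committed change):

"node_count": {
    Type:     schema.TypeInt,
    // Required rather than Optional with Default: 1, since users almost
    // always set an explicit node count anyway
    Required: true,
},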

tombuildsstuff (Contributor)

@titilambert unfortunately rebasing this has broken the PR in GitHub - I hope you don't mind, but I'm going to open another PR which includes these commits, so you'll still get the credit here :(

ghost commented Nov 26, 2019

This has been released in version 1.37.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
    version = "~> 1.37.0"
}
# ... other configuration ...

ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators Mar 29, 2020

Successfully merging this pull request may close these issues.

New resource - azurerm_kubernetes_cluster_agentpool
8 participants