default_node_pool not upgraded with automatic_channel_upgrade = 'patch' #301
Comments
Thanks @aescrob for opening this issue. The failed check was:

```hcl
automatic_channel_upgrade_check = var.automatic_channel_upgrade == null ? true : var.orchestrator_version == null && (
  (contains(["patch"], var.automatic_channel_upgrade) && can(regex("^[0-9]{1,}\\.[0-9]{1,}$", var.kubernetes_version)))
  || (contains(["rapid", "stable", "node-image"], var.automatic_channel_upgrade) && var.kubernetes_version == null)
)
```

If you assigned […]

We'd like to hear your voice if you have any further question or thought on this.
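To spell out what the check above accepts: whenever `automatic_channel_upgrade` is set, `orchestrator_version` must be null, and with the `patch` channel `kubernetes_version` must be major.minor only. A hypothetical variable assignment that would pass the check (the values are illustrative, not taken from the reporter's setup):

```hcl
# Illustrative values only; any major.minor string satisfies the regex ^[0-9]{1,}\.[0-9]{1,}$.
automatic_channel_upgrade = "patch"
kubernetes_version        = "1.24" # patch releases are then rolled out by the upgrade channel
orchestrator_version      = null   # must stay null under the current check, which is what this issue is about
```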
To give some additional context: as @aescrob described, we deployed multiple clusters with the […].

So we assume that the system node pool (which is the one that has […]).

Upgrade Behavior Docs: https://learn.microsoft.com/en-us/azure/aks/auto-upgrade-cluster#auto-upgrade-limitations

@aescrob is currently testing what happens when we set […].
In this case (`patch`) we need to be able to set `orchestrator_version`. I've just tested a successful upgrade of the `default_node_pool` as well, with the following change to the precondition:

[…]
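The changed precondition itself did not survive the extraction above. As a hypothetical sketch only (not necessarily the change that was merged), one way to relax the check in that direction is to allow `orchestrator_version` under the `patch` channel as long as it is also pinned to major.minor:

```hcl
# Hypothetical sketch of a relaxed precondition; variable names follow the check quoted earlier.
automatic_channel_upgrade_check = var.automatic_channel_upgrade == null ? true : (
  (
    contains(["patch"], var.automatic_channel_upgrade)
    && can(regex("^[0-9]{1,}\\.[0-9]{1,}$", var.kubernetes_version))
    # allow orchestrator_version, but only pinned to major.minor so patch upgrades can still roll
    && (var.orchestrator_version == null || can(regex("^[0-9]{1,}\\.[0-9]{1,}$", var.orchestrator_version)))
  )
  || (
    contains(["rapid", "stable", "node-image"], var.automatic_channel_upgrade)
    && var.kubernetes_version == null
    && var.orchestrator_version == null
  )
)
```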
@aescrob can we close this now that the PR is merged?
Hello folks, I know the issue is closed, but I want to understand if there is any leftover work for the AzureRM provider or for AKS. @the-technat, can you please confirm this sequence of events?

Step 1) Terraform deploy with: […]

Step 2) Terraform deploy with: […]

At this point you noticed that the control plane was upgraded to 1.24.9, but the default node pool was stuck forever at 1.23.15. Is this correct?

I checked the docs about auto-upgrade. My doubt is, when […]
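The exact variable values for the two steps were not captured above. A minimal sketch of what the two deployments plausibly looked like, inferred only from the versions mentioned in this thread (all concrete values are assumptions):

```hcl
# Step 1 (assumed tfvars): major.minor only, so the "patch" channel manages patch releases.
kubernetes_version        = "1.23"
automatic_channel_upgrade = "patch"
```

```hcl
# Step 2 (assumed tfvars): bump to the next minor; the control plane then moved to 1.24.9,
# while the default node pool reportedly stayed on 1.23.15.
kubernetes_version        = "1.24"
automatic_channel_upgrade = "patch"
```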
Yes, that's correct. We had the clusters running on v1.23.15 and upgraded them that way, and the default node pool was stuck. According to @aescrob, the default node pool is updated if you do the upgrade via the Azure CLI, so it might actually be a problem with the provider.
I would expect the provider to call the API equivalent of […]. It seems to me everything works as expected. I am just wondering if we can improve the documentation or the module to make the user experience better. @the-technat, do you agree everything works as expected?
Ah, I see...
Is there an existing issue for this?
Greenfield/Brownfield provisioning
brownfield
Terraform Version
1.3.2
Module Version
6.6.0
AzureRM Provider Version
3.43.0
Affected Resource(s)/Data Source(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
tfvars variables values
Debug Output/Panic Output
Expected Behaviour
'default_node_pool' is upgraded based on 'orchestrator_version'
Actual Behaviour
Upgrading cluster_version from 1.23.x to 1.24.x does not upgrade the orchestrator version of the 'default_node_pool', but it does for all 'azurerm_kubernetes_cluster_node_pool' resources, since we set 'orchestrator_version = cluster_version' for those.
Setting 'orchestrator_version = cluster_version' on the 'default_node_pool' is prevented by the precondition.
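For illustration, a hypothetical module call in the spirit of this report (the registry source, resource group, and prefix are assumptions; only the version-related inputs are taken from the thread):

```hcl
# Hypothetical reproduction sketch, not the reporter's actual configuration.
module "aks" {
  source  = "Azure/aks/azurerm" # assumed registry source for module version 6.6.0
  version = "6.6.0"

  resource_group_name = "rg-example" # assumed
  prefix              = "example"    # assumed

  kubernetes_version        = "1.24"  # major.minor, as required by the "patch" channel
  automatic_channel_upgrade = "patch"

  # Pinning the default node pool like this is what the precondition rejects:
  # orchestrator_version = "1.24"
}
```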
Steps to Reproduce
No response
Important Factoids
No response
References
No response