Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Terraform Version
v.1.120.0
Affected Resource(s)
azurerm_kubernetes_cluster
Terraform Configuration Files
main.tf
resource "azurerm_resource_group" "akc-rg" {
  name     = "${var.resource_group_name}"
  location = "${var.resource_group_location}"
}

# an attempt to keep the aci container group name (and dns label) somewhat unique
resource "random_integer" "random_int" {
  min = 100
  max = 999
}

resource "azurerm_network_security_group" "aks_advanced_network" {
  name                = "akc-${random_integer.random_int.result}-nsg"
  location            = "${var.resource_group_location}"
  resource_group_name = "${azurerm_resource_group.akc-rg.name}"
}

resource "azurerm_virtual_network" "aks_advanced_network" {
  name                = "akc-${random_integer.random_int.result}-vnet"
  location            = "${var.resource_group_location}"
  resource_group_name = "${azurerm_resource_group.akc-rg.name}"
  address_space       = ["10.1.0.0/16"]
}

resource "azurerm_subnet" "aks_subnet" {
  name                      = "akc-${random_integer.random_int.result}-subnet"
  resource_group_name       = "${azurerm_resource_group.akc-rg.name}"
  network_security_group_id = "${azurerm_network_security_group.aks_advanced_network.id}"
  address_prefix            = "10.1.0.0/24"
  virtual_network_name      = "${azurerm_virtual_network.aks_advanced_network.name}"
}

resource "azurerm_kubernetes_cluster" "aks_container" {
  name                = "akc-${random_integer.random_int.result}"
  location            = "${var.resource_group_location}"
  dns_prefix          = "akc-${random_integer.random_int.result}"
  resource_group_name = "${azurerm_resource_group.akc-rg.name}"

  linux_profile {
    admin_username = "${var.linux_admin_username}"

    ssh_key {
      key_data = "${var.linux_admin_ssh_publickey}"
    }
  }

  agent_pool_profile {
    name    = "agentpool"
    count   = "2"
    vm_size = "Standard_DS2_v2"
    os_type = "Linux"

    # Required for advanced networking
    vnet_subnet_id = "${azurerm_subnet.aks_subnet.id}"
  }

  service_principal {
    client_id     = "${var.client_id}"
    client_secret = "${var.client_secret}"
  }

  network_profile {
    network_plugin = "kubenet"
  }
}
variables.tf
variable "name" {
  type        = "string"
  description = "Name of this cluster."
  default     = "akc-example"
}

variable "client_id" {
  type        = "string"
  description = "Client ID."
}

variable "client_secret" {
  type        = "string"
  description = "Client secret."
}

variable "resource_group_name" {
  type        = "string"
  description = "Name of the Azure resource group."
  default     = "akc-rg"
}

variable "resource_group_location" {
  type        = "string"
  description = "Location of the Azure resource group."
  default     = "eastus"
}

variable "linux_admin_username" {
  type        = "string"
  description = "User name for authentication to the Kubernetes Linux agent virtual machines in the cluster."
}

variable "linux_admin_ssh_publickey" {
  type        = "string"
  description = "Configure all the Linux virtual machines in the cluster with the SSH RSA public key string. The key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm'."
}
Debug Output
Panic Output
Expected Behavior
If the user selects either kubenet or azure as the network_plugin, the Azure ARM API will default docker_bridge_cidr, pod_cidr, service_cidr, and dns_service_ip, so the user should not be required to supply values for these fields.
Actual Behavior
Terraform plan fails when kubenet is selected as the network_plugin because the provider's validation logic requires these fields to be non-empty.
Steps to Reproduce
Deploy using the standard example, but do not include values for:
docker_bridge_cidr
pod_cidr
service_cidr
dns_service_ip
Then run terraform apply.
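In other words, the failing configuration reduces to a network_profile block (inside the azurerm_kubernetes_cluster resource above) that sets only the plugin:

```hcl
# Minimal reproduction fragment: no CIDR or DNS values are supplied.
# The ARM API would default them, but the provider currently rejects this.
network_profile {
  network_plugin = "kubenet"
}
```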
Important Factoids
The Azure resource provider will default the following values when they are not supplied:
docker_bridge_cidr to 172.16.0.1/16
pod_cidr to 10.244.0.0/16 when the plugin is kubenet (ignored when it is azure)
service_cidr to 10.0.0.0/16
dns_service_ip to 10.0.0.10
The only logic that should be validated is: if a value is supplied for service_cidr, then dns_service_ip must also have a value, and that value must be within the service_cidr range and higher than x.x.x.5 of that range.
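Under that rule, the one combination that would still need validation looks like the following sketch (the CIDR values here are illustrative, not required):

```hcl
network_profile {
  network_plugin = "kubenet"

  # Only validate when service_cidr is supplied: dns_service_ip must then
  # fall inside service_cidr and sit above x.x.x.5 of the range.
  service_cidr   = "10.0.0.0/16"
  dns_service_ip = "10.0.0.10" # inside 10.0.0.0/16 and above 10.0.0.5
}
```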
I already tested the deployment with this code change, and the cluster creates correctly, except that the route table created by AKS is not assigned to the custom subnet. Users should be warned that an additional resource is needed to assign the AKS-created route table to the custom subnet.
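One possible shape for that additional resource is a subnet/route-table association; this is a hypothetical sketch, since AKS generates the route table's actual name inside the managed node resource group, so its id would have to be looked up after cluster creation (the variable below is a placeholder):

```hcl
# Hypothetical workaround sketch: attach the AKS-created route table to the
# custom subnet so kubenet routing works. The route table id must be
# obtained after the cluster exists, e.g. via a data source or variable.
resource "azurerm_subnet_route_table_association" "aks_routes" {
  subnet_id      = "${azurerm_subnet.aks_subnet.id}"
  route_table_id = "${var.aks_generated_route_table_id}" # placeholder
}
```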
References
#0000