
azurerm_kubernetes_cluster: Add support for virtual machine scale set node pool #3422

Merged: 2 commits merged into hashicorp:master on May 12, 2019

Conversation

@dominik-lekse (Contributor) commented:

This pull request adds support for virtual machine scale set node pools in the resource and data source `azurerm_kubernetes_cluster`.

Tests

=== RUN   TestAccAzureRMKubernetesCluster_virtualMachineScaleSets
=== PAUSE TestAccAzureRMKubernetesCluster_virtualMachineScaleSets
=== CONT  TestAccAzureRMKubernetesCluster_virtualMachineScaleSets
--- PASS: TestAccAzureRMKubernetesCluster_virtualMachineScaleSets (896.94s)
PASS
ok      github.com/terraform-providers/terraform-provider-azurerm/azurerm       896.981s

New or Affected Resource(s)

  • azurerm_kubernetes_cluster

References

@katbyte (Collaborator) left a comment:

Hi @dominik-lekse,

Thanks for the PR. I've left a few comments inline, but overall this is looking good.

Resolved review threads (outdated):

  • azurerm/resource_arm_kubernetes_cluster.go (two threads)
  • website/docs/d/kubernetes_cluster.html.markdown
@dominik-lekse (Contributor, Author) commented:

Hi @katbyte, thanks for the quick review. All suggestions have been applied.

@katbyte (Collaborator) left a comment:

Thanks @dominik-lekse! This LGTM now 🚀

@katbyte katbyte merged commit 77d8dd0 into hashicorp:master May 12, 2019
katbyte added two commits that referenced this pull request on May 12, 2019
@katbyte (Collaborator) commented on May 12, 2019:

I'm really sorry @dominik-lekse, but I was looking at the wrong test results when I merged this 😞

It breaks all the other tests with:

DIFF:
        
        DESTROY/CREATE: azurerm_kubernetes_cluster.test
          addon_profile:                                        "" => "<computed>"
          addon_profile.#:                                      "1" => ""
          addon_profile.0.aci_connector_linux.#:                "0" => ""
          addon_profile.0.http_application_routing.#:           "0" => ""
          addon_profile.0.oms_agent.#:                          "0" => ""
          agent_pool_profile.#:                                 "1" => "1"
          agent_pool_profile.0.count:                           "2" => "2"
          agent_pool_profile.0.dns_prefix:                      "" => "<computed>"
          agent_pool_profile.0.fqdn:                            "acctestaks190512185214794318-7541a2a9.hcp.westeurope.azmk8s.io" => "<computed>"
          agent_pool_profile.0.max_pods:                        "30" => "<computed>"
          agent_pool_profile.0.name:                            "default" => "default"
          agent_pool_profile.0.os_disk_size_gb:                 "100" => "<computed>"
          agent_pool_profile.0.os_type:                         "Linux" => "Linux"
          agent_pool_profile.0.type:                            "AvailabilitySet" => "" (forces new resource)

This is because the API automatically sets it to AvailabilitySet. If that is the default whenever no type is specified, then all we need to do is set the schema default to that value.
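
A minimal sketch of that kind of schema default, assuming the helper/schema and helper/validation packages the provider used at the time; the standalone function name and the exact set of accepted values are illustrative, not taken from this PR:

package azurerm

import (
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/hashicorp/terraform/helper/validation"
)

// agentPoolProfileTypeSchema is a hypothetical helper showing the idea:
// give "type" an explicit default of "AvailabilitySet" (the value the AKS
// API reports when no type is specified) so existing clusters no longer
// plan a ""-vs-"AvailabilitySet" change that forces a new resource.
func agentPoolProfileTypeSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeString,
		Optional: true,
		ForceNew: true,
		Default:  "AvailabilitySet",
		ValidateFunc: validation.StringInSlice([]string{
			"AvailabilitySet",
			"VirtualMachineScaleSets",
		}, false),
	}
}

With a default in place, a configuration that omits type plans to the same value the API returns on refresh, so the forced replacement shown in the diff above goes away.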

@dominik-lekse (Contributor, Author) commented:

Hi @katbyte, apologies for not being strict enough about test execution; I should have run at least one of the existing acceptance tests.

My assumption was that Computed would take care of this case. With your suggested default value in place, the following tests pass, including TestAccAzureRMKubernetesCluster_basic. Due to resource limitations, I am not able to run all of the related acceptance tests.

With the default value, we have also covered upgrading Terraform states produced by previous provider versions, right?

Since the commit is in the original branch, I will reopen the pull request.

=== RUN   TestAccAzureRMKubernetesCluster_virtualMachineScaleSets
=== PAUSE TestAccAzureRMKubernetesCluster_virtualMachineScaleSets
=== CONT  TestAccAzureRMKubernetesCluster_virtualMachineScaleSets
--- PASS: TestAccAzureRMKubernetesCluster_virtualMachineScaleSets (894.93s)
PASS
ok      github.com/terraform-providers/terraform-provider-azurerm/azurerm       894.976s
=== RUN   TestAccAzureRMKubernetesCluster_basic
=== PAUSE TestAccAzureRMKubernetesCluster_basic
=== CONT  TestAccAzureRMKubernetesCluster_basic
--- PASS: TestAccAzureRMKubernetesCluster_basic (894.15s)
PASS
ok      github.com/terraform-providers/terraform-provider-azurerm/azurerm       894.195s

@ghost commented on May 17, 2019:

This has been released in version 1.28.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
	version = "~> 1.28.0"
}
# ... other configuration ...

@epa095 mentioned this pull request on May 19, 2019
@ghost commented on Jun 12, 2019:

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost locked and limited conversation to collaborators on Jun 12, 2019