Ignore Scale Set capacity changes #930

Closed
ghtyrant opened this issue Mar 5, 2018 · 7 comments

Labels
bug, service/vmss (Virtual Machine Scale Sets)

Comments

ghtyrant commented Mar 5, 2018

Terraform Version

AzureRM provider 1.1.2
Terraform v0.11.3

Affected Resource(s)

  • azurerm_virtual_machine_scale_set

Terraform Configuration Files

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet"
  address_space       = ["10.0.0.0/16"]
  location            = "West Europe"
  resource_group_name = "rg"
}

resource "azurerm_subnet" "nodesubnet" {
  name                 = "nodesubnet"

  resource_group_name  = "rg"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_network_security_group" "ssnsg" {
  name                = "ssnsg"

  resource_group_name = "rg"
  location            = "West Europe"

  security_rule {
    name                       = "Custom HTTP"
    description                = "Custom HTTP"
    protocol                   = "tcp"
    source_port_range          = "*"
    destination_port_range     = "8080"

    # We should fix this to the IP of the load balancer
    source_address_prefix      = "10.0.2.0/24"
    destination_address_prefix = "*"

    access                     = "Allow"
    direction                  = "Inbound"
    priority                   = "100"
  }
}

resource "azurerm_virtual_machine_scale_set" "ss" {
  name                = "nodescaleset"
  location            = "West Europe"
  resource_group_name = "rg"
  upgrade_policy_mode = "Manual"

  depends_on = ["azurerm_virtual_machine.mn"]

  overprovision = false

  sku {
    name     = "Standard_NC6"
    tier     = "Standard"
    capacity = 2
  }

  storage_profile_image_reference {
    id = "(...)"
  }

  storage_profile_os_disk {
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }

  os_profile {
    computer_name_prefix = "node"
    admin_username       = "admin"
    admin_password       = "xxx"
  }

  network_profile {
    name    = "scalenetwork"
    primary = true
    network_security_group_id = "${azurerm_network_security_group.ssnsg.id}"

    ip_configuration {
      name                                   = "scaleip"
      subnet_id                              = "${azurerm_subnet.nodesubnet.id}"
    }
  }
}

Actual Behavior

When updating settings like the image id (or anything else not related to the sku) and applying, Terraform will notice a difference in the scale set capacity if it has been scaled up or down in the meantime. It will then reset the capacity back to the number defined in the configuration, destroying and creating a bunch of machines.

Expected Behavior

In our use case, we dynamically change the capacity of our scale set (not using Terraform). When changing the referenced image id, we expect Terraform to ignore the capacity, leaving the machines in the scale set alive.

This is a feature request: we want to be able to tell Terraform to ignore sku.capacity completely. Simply leaving it out leads to an error (Error: azurerm_virtual_machine_scale_set.ss: sku.0.capacity: required field is not set).

Steps to Reproduce

  1. terraform apply
  2. Increase the capacity of the scale set (using az-cli or the portal)
  3. Change the id of the referenced image
  4. terraform apply
@ghtyrant ghtyrant changed the title Update Scale Set image without changing its capacity Ignore Scale Set capacity changes Mar 5, 2018

ghtyrant commented Mar 5, 2018

I just discovered 'ignore_changes', a typical case of write before read.

Sorry about that, I will close this issue.
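
For reference, a minimal sketch of that workaround (assuming Terraform 0.11 syntax and the scale set resource from the configuration above) is to add a lifecycle block inside the azurerm_virtual_machine_scale_set resource; at this point it has to ignore the whole sku block, as discussed below:

  lifecycle {
    # Ignore out-of-band changes to the entire sku block
    # (name, tier and capacity), not just the capacity.
    ignore_changes = ["sku"]
  }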

@ghtyrant ghtyrant closed this as completed Mar 5, 2018

nomis4u commented Jul 8, 2018

@ghtyrant Were you able to ignore just the capacity rather than the entire SKU? If so, could you please share how you did it?

ghtyrant commented

@nomis4u I wasn't able to ignore only the capacity; ignore_changes ignores the whole SKU block. This kind of works for us for now, since we never change the name or tier of the SKU. But since this is still a workaround, let me reopen the issue.

katbyte commented Jul 12, 2018

Hi @ghtyrant,

I'm glad that you managed to find a workaround! @nomis4u, you "should" be able to ignore the capacity property alone by using:

  lifecycle {
    ignore_changes = ["sku.#.capacity"]
  }

As the sku block is stored as a set, you would need to figure out what the hashcode is (visible in the plan) and use it like so: ignore_changes = ["sku.12345678.capacity"]
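
Written out as a full lifecycle block, with 12345678 standing in for whatever hashcode your plan actually shows:

  lifecycle {
    # Replace 12345678 with the hash Terraform prints for the sku set in the plan output.
    ignore_changes = ["sku.12345678.capacity"]
  }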

This is obviously not ideal, so I have opened #1558 to change it to a list so that ignore_changes = ["sku.0.capacity"] will be possible.

@katbyte katbyte added the bug and service/vmss (Virtual Machine Scale Sets) labels Jul 12, 2018
@katbyte katbyte added this to the 1.10.0 milestone Jul 12, 2018
katbyte added a commit that referenced this issue Jul 13, 2018
VMSS: changed sku property from a set to list to help with #930
@tombuildsstuff tombuildsstuff modified the milestones: 1.10.0, Soon Jul 16, 2018

katbyte commented Aug 2, 2018

As #1558 has been merged and released in v1.10.0, it is now possible to easily ignore these changes via the lifecycle property:

  lifecycle {
    ignore_changes = ["sku.0.capacity"]
  }

ghtyrant commented

Works for me now! Thank you!

@tombuildsstuff tombuildsstuff modified the milestones: Soon, Being Sorted Oct 25, 2018

ghost commented Mar 6, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 6, 2019