
provider/azurerm: Fix an issue with azurerm_virtual_machine ssh_keys #6541

Merged: 1 commit merged into master from fix-azurerm-vm-ssh-key on May 8, 2016

Conversation

stack72 commented May 8, 2016

Fixes part of #5793, #6212

ssh_keys were throwing an error similar to this:

```
* azurerm_virtual_machine.test: [DEBUG] Error setting Virtual Machine
* Storage OS Profile Linux Configuration: &errors.errorString{s:"Invalid
* address to set: []string{\"os_profile_linux_config\", \"0\",
* \"ssh_keys\"}"}
```

This was because of nesting of a Set within a Set in the schema. By
changing this to a List within a Set, the schema works as expected. This
means we can now set SSH keys on VMs. This has been tested using a
remote-exec and a connection block with the SSH key:

```
azurerm_virtual_machine.test: Still creating... (2m10s elapsed)
azurerm_virtual_machine.test (remote-exec): Connected!
azurerm_virtual_machine.test (remote-exec): CONNECTED!
```
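
For illustration, a minimal sketch of the configuration shape this change allows (the resource name, variable names and key paths here are placeholders, not taken from this PR):

```
resource "azurerm_virtual_machine" "example" {
  # ... other virtual machine arguments omitted ...

  os_profile_linux_config {
    disable_password_authentication = true

    # ssh_keys can now be set without the "Invalid address to set" error
    ssh_keys {
      path     = "/home/${var.admin_username}/.ssh/authorized_keys"
      key_data = "${file("~/.ssh/id_rsa.pub")}"
    }
  }
}
```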


stack72 commented May 8, 2016

@clstokes / @HX-Rd / @clintonm9 / @jlecren / @gdhagger hopefully this will make you all happy :)

Sorry for the time this has taken to resolve!

stack72 mentioned this pull request May 8, 2016

jen20 commented May 8, 2016

Thanks @stack72 - that's a big improvement!

jen20 merged commit 6ca0e00 into master May 8, 2016
stack72 deleted the fix-azurerm-vm-ssh-key branch May 8, 2016 23:20

heywoodj commented May 9, 2016

Hey,

I'm struggling with the syntax required to fix the ssh_keys error above, specifically where you say "By changing this to a List within a Set, the schema works as expected." Could you provide an example of this, please?

Thanks in advance


stack72 commented May 9, 2016

Hi @heywoodj

There is no difference to the end user - this was a code issue within my original development

P.


heywoodj commented May 9, 2016

Hey,
Thanks for the swift response. Just to clarify: I've just downloaded v0.6.15; should the syntax below work for the ssh keys? I get the "Invalid address to set" error.

```
os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
        path = ""
        key_data = ""
    }
}
```

Thanks again


stack72 commented May 9, 2016

Hi @heywoodj

Yes, that will work, but only once 0.6.16 gets released today/tomorrow.

One thing to note, though (this will also go into the documentation today/tomorrow): the path is fixed to a known location. For example, if the admin username for the machine is foo, then the ssh_keys path needs to be:

```
/home/foo/.ssh/authorized_keys
```
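
As a sketch (var.admin_username is just an illustrative variable name, not from this thread), the path can be built from the admin username:

```
path = "/home/${var.admin_username}/.ssh/authorized_keys"
```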


heywoodj commented May 9, 2016

Ah, I didn't realise it was release-dependent; I just thought I had an issue with my config. Thanks for the heads-up with the path - I have been setting that correctly and it does copy the public key up to the correct place :-) but the Terraform script must fail afterwards.
Thanks again, will download and retry with .16.


stack72 commented May 9, 2016

@heywoodj

Yes, it was failing when reading the information back and setting it in the Terraform state, which caused the resource to fall over - it'll be fixed really soon now :)

P.

@heywoodj

@stack72 Hey, I noticed a copy of .16 was available for download and it has fixed the issue above.
I did have an extra issue with regard to the connection. I am using a provisioner that errors when trying to connect via the private key, with the message "no key found". It's as though it's trying to connect before the VM is created correctly. I see the same behaviour when I just use a password, where the connection simply retries until successful. I noticed above that you tested this OK. Is there a way to ensure the public key is available before the remote script is run?

```
os_profile_linux_config {
    disable_password_authentication = true

    ssh_keys {
        path = "/home/[Admin Username]/.ssh/authorized_keys"
        key_data = "[Public key file path]")}"
    }
}

provisioner "remote-exec" {

    connection {
        host = "[PublicIP]"
        type = "ssh"
        user = "[Admin username]"
        private_key = "[Private key file path]"
    }

    inline = [ ... ]
}
```


stack72 commented May 10, 2016

Hi @heywoodj

this param:

```
ssh_keys {
    path = "/home/[Admin Username]/.ssh/authorized_keys"
    key_data = "[Public key file path]")}"
}
```

What is the key_data that you are trying to use?

P.

@heywoodj

Hey, the path is to the public key file that I am generating via PuTTY. If I do not try to run the remote execution, the server is created correctly and the public key is copied up; I can then access it without a problem via an SSH client using the private key. It seems like it is trying to connect before the VM is provisioned properly and failing, rather than retrying the connection like in the password-only scenario. Thanks, J


stack72 commented May 10, 2016

This is the example I am using for this part:

```
os_profile_linux_config {
  disable_password_authentication = true
  ssh_keys {
    path     = "/home/${var.username}/.ssh/authorized_keys"
    key_data = "${file("~/.ssh/id_rsa.pub")}"   # <- this is the public key on my local machine
  }
}

connection {
  host     = "${azurerm_public_ip.test.ip_address}"
  user     = "${var.username}"
  key_file = "~/.ssh/id_rsa"                    # <- this is the private key on my local machine
}

provisioner "remote-exec" {
  inline = ["echo 'CONNECTED!'"]
}

provisioner "file" {
  source      = "test.txt"
  destination = "/tmp/test.txt"
}
```


stack72 commented May 10, 2016

As you can see, I am doing the connection and provisioner INSIDE the VM resource. Otherwise you will need to put a depends_on on whichever resource holds the provisioner, to stop the provisioner from running before the machine starts - see the sketch below.
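
A minimal sketch of that alternative, assuming the provisioner is moved into a separate null_resource (resource and variable names here are illustrative, not from this PR):

```
resource "null_resource" "provision" {
  # depends_on makes Terraform wait for the VM before the provisioner tries to connect
  depends_on = ["azurerm_virtual_machine.test"]

  connection {
    host     = "${azurerm_public_ip.test.ip_address}"
    user     = "${var.username}"
    key_file = "~/.ssh/id_rsa"
  }

  provisioner "remote-exec" {
    inline = ["echo 'CONNECTED!'"]
  }
}
```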

@heywoodj

@stack72 Hey, sorry, my bad - the error was actually the private key file. I'd forgotten to use the OpenSSH format, hence the 'no key found' error. I switched to the correct format and it is working now. Thanks


stack72 commented May 10, 2016

Excellent - glad it's working!

P.


noam87 commented May 13, 2016

omg guys thanks, this was driving me crazy! (+ thanks to the people posting examples here)

@lmeyemezu

@stack72
Hi,
can you give your full configuration example?
I'm using 0.6.16 and it doesn't work.
The error is:

Regards


stack72 commented May 17, 2016

Hi @lmeyemezu

What part isn't working for you?

Paul


lmeyemezu commented May 17, 2016

Hi,
thanks for replying.
The error is:

```
Error applying plan:

1 error(s) occurred:

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
```

bastion VM module .tf (excerpt):

resource "azurerm_virtual_machine" "bastion" {
    name = "${var.name}"
    location = "${var.location}"
    resource_group_name = "${var.resource_group_name}"
    vm_size = "${var.vm_size}"
    storage_image_reference {
        publisher = "${var.image_publisher}"
        offer = "${var.image_offer}"
        sku = "${var.image_sku}"
        version = "${var.image_version}"
    }
    storage_os_disk {
        name = "${var.os_disk_name}"        
        caching = "ReadWrite"
        create_option = "FromImage"
        vhd_uri = "${var.os_disk_vhd_uri}"        
        os_type = "${var.os_type}"
    }
    os_profile {
        computer_name = "${var.computer_name}"
        admin_username = "${var.admin_username}"
        admin_password = "${var.admin_password}"
    }
    os_profile_linux_config {
        disable_password_authentication = true
        ssh_keys {
          #path = "/home/${var.app.user}/.ssh/authorized_keys"
          path = "${var.ssh_key_path}"
          key_data = "${var.ssh_key_data}"
      }
    }
    network_interface_ids = ["${azurerm_network_interface.bastion_nic.id}"]
    tags {
      project = "${var.project}"
      environment = "${var.envt}"
    }
}

main.tf (excerpt):

```
admin_username = "${var.admin_username}"
admin_password = "${var.admin_password}"
ssh_key_path = "/home/${var.ssh_user}/.ssh/authorized_keys"
ssh_key_data = "${file("../../modules/keys/af_staging/af_staging_id_rsa.pub")}"
```

terraform.tfvars (excerpt):

```
bastion_nic_name = "bastion_nic"
vm_size = "Standard_A0"
image_publisher = "Canonical"
image_offer = "UbuntuServer"
image_sku = "14.04.4-LTS"
image_version = "14.04.201605091"
admin_username = "cio"
admin_password = "Joptimisme!"
ssh_user = "cio"
os_type = "linux"
```

Regards


ghost commented Apr 25, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 25, 2020