
Changes found in EBS Block device where there aren't really changes #4786

Closed
davedash opened this issue Jan 21, 2016 · 8 comments
@davedash
Contributor

    ebs_block_device.#:                                "2" => "1"
    ebs_block_device.2659407853.delete_on_termination: "true" => "1" (forces new resource)
    ebs_block_device.2659407853.device_name:           "/dev/sdf" => "/dev/sdf" (forces new resource)
    ebs_block_device.2659407853.encrypted:             "true" => "1" (forces new resource)
    ebs_block_device.2659407853.iops:                  "150" => "<computed>"
    ebs_block_device.2659407853.snapshot_id:           "" => "<computed>"
    ebs_block_device.2659407853.volume_size:           "50" => "50" (forces new resource)
    ebs_block_device.2659407853.volume_type:           "gp2" => "gp2" (forces new resource)

I keep seeing this on a few instances. apply keeps recreating the instances, and then the instances still show the diff above. These are t2.small instances in EC2.

Here's the original tf:

resource "aws_instance" "instance" {
  ami = "${var.ami}"  # Ubuntu HVM
  instance_type = "${var.instance_type}"

  tags {
    Name = "${var.env}-${var.name}${var.name_suffix}"
    env = "${var.env}"
    role = "${var.env}-${var.name}"
  }

  ebs_block_device {
    encrypted = 1
    device_name = "/dev/sdf"
    volume_type = "gp2"
    volume_size = 50
  }

  security_groups = ["${split(",", var.security_groups)}"]

  subnet_id = "${element(split(",", var.subnets), var.number)}"

  iam_instance_profile = "${var.env}-${var.name}"
  ebs_optimized = "${var.ebs_optimized}"
  user_data = "${file(var.user_data)}"
  key_name = "davedash"

}
@wata727
Contributor

wata727 commented Jan 24, 2016

Hi @davedash. I had the same problem.

ebs_block_device is for additional block devices. If you want to configure the root volume, specify root_block_device instead.
When you only specify ebs_block_device, the device ends up being recorded as the root device: the aws_instance attributes in terraform.tfstate contain root_block_device instead of ebs_block_device.
As a result, apply keeps recreating instances.
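
If I follow that correctly, the fix would look something like this (a minimal sketch in the thread's Terraform 0.6-era syntax; the 8 GB root volume size is my assumption, not from the original config):

```hcl
resource "aws_instance" "instance" {
  ami           = "${var.ami}"
  instance_type = "${var.instance_type}"

  # Declare the root volume explicitly so Terraform's state
  # (which records it as root_block_device) matches the config.
  root_block_device {
    volume_type = "gp2"
    volume_size = 8
  }

  # Keep ebs_block_device only for genuinely additional volumes.
  ebs_block_device {
    encrypted   = true
    device_name = "/dev/sdf"
    volume_type = "gp2"
    volume_size = 50
  }
}
```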

@davedash
Contributor Author

Okay… this makes sense.  

But I feel this is different than before, since almost all of my instances use a root EBS volume. Does this behavior change based on instance size? It happens to be my t2.small instances that trigger this.


@wata727
Contributor

wata727 commented Jan 26, 2016

Sorry, I don't know under what conditions this problem does not happen...
At least I can confirm it on t2.micro and m3.medium instances. The following output occurs on an m3.medium instance:

-/+ aws_instance.server
    ami:                                               "ami-383c1956" => "ami-383c1956"
    associate_public_ip_address:                       "true" => "1"
    availability_zone:                                 "ap-northeast-1a" => "<computed>"
    disable_api_termination:                           "false" => "0"
    ebs_block_device.#:                                "0" => "1"
    ebs_block_device.3935708772.delete_on_termination: "" => "1" (forces new resource)
    ebs_block_device.3935708772.device_name:           "" => "/dev/xvda" (forces new resource)
    ebs_block_device.3935708772.encrypted:             "" => "<computed>" (forces new resource)
    ebs_block_device.3935708772.iops:                  "" => "<computed>" (forces new resource)
    ebs_block_device.3935708772.snapshot_id:           "" => "<computed>" (forces new resource)
    ebs_block_device.3935708772.volume_size:           "" => "8" (forces new resource)
    ebs_block_device.3935708772.volume_type:           "" => "gp2" (forces new resource)
    ephemeral_block_device.#:                          "0" => "<computed>"
    instance_initiated_shutdown_behavior:              "stop" => "stop"
    instance_type:                                     "m3.medium" => "m3.medium"
    ...

If you want to know more about that, you should ask in the AWS developer forum: https://forums.aws.amazon.com/tags/ec2?categoryID=9

@simonluijk

I am seeing this too, but it is not related to root_block_device. In my case (and I suspect the OP's too) it is caused by an aws_volume_attachment to the same instance. Whenever Terraform refreshes, it adds the volume attached via the aws_volume_attachment to ebs_block_device.4278008646.* in the state file. When creating the plan, it then wants to remove this device, as there is no ebs_block_device definition for it.
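
As a hypothetical illustration of that setup (resource names and sizes are mine, not from this thread): a volume attached out-of-band like this gets pulled into the instance's ebs_block_device state on refresh, even though the instance config has no matching block:

```hcl
resource "aws_instance" "server" {
  ami           = "${var.ami}"
  instance_type = "t2.small"
}

resource "aws_ebs_volume" "data" {
  availability_zone = "${aws_instance.server.availability_zone}"
  size              = 50
}

# The attachment lives in a separate resource, so the aws_instance
# config has no ebs_block_device for /dev/sdf; after a refresh the
# state does, and the next plan tries to remove it.
resource "aws_volume_attachment" "data" {
  device_name = "/dev/sdf"
  volume_id   = "${aws_ebs_volume.data.id}"
  instance_id = "${aws_instance.server.id}"
}
```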

@mxs

mxs commented Mar 9, 2016

I am seeing this also; like the OP, I have ebs_block_device in my aws_instance config. I don't have an aws_volume_attachment config.

    ebs_block_device.#:                                "2" => "1"
    ebs_block_device.2250803247.delete_on_termination: "false" => "0" (forces new resource)
    ebs_block_device.2250803247.device_name:           "/dev/sdp" => "/dev/sdp" (forces new resource)
    ebs_block_device.2250803247.encrypted:             "false" => "<computed>"
    ebs_block_device.2250803247.iops:                  "6" => "<computed>"
    ebs_block_device.2250803247.snapshot_id:           "" => "<computed>"
    ebs_block_device.2250803247.volume_size:           "2" => "2" (forces new resource)
    ebs_block_device.2250803247.volume_type:           "gp2" => "gp2" (forces new resource)
    ebs_block_device.504524833.delete_on_termination:  "1" => "0"
    ebs_block_device.504524833.device_name:            "/dev/xvdcz" => ""

@davedash
Contributor Author

Even with

  lifecycle {
    ignore_changes = ["ebs_block_device", "root_block_device"]
  }

I still get issues:

-/+ module.dev.docker-instance.aws_instance.instance
    ami:                        "ami-2b3b6041" => "ami-2b3b6041"
    availability_zone:          "us-east-1a" => "<computed>"
    ebs_optimized:              "false" => "0"
    ephemeral_block_device.#:   "0" => "<computed>"
    iam_instance_profile:       "dev-docker" => "dev-docker"
    instance_state:             "running" => "<computed>"
    instance_type:              "t2.small" => "t2.small"
    key_name:                   "davedash" => "davedash"
    placement_group:            "" => "<computed>"
    public_dns:                 "" => "<computed>"
    public_ip:                  "" => "<computed>"
    security_groups.#:          "2" => "2"
    security_groups.1667421418: "sg-e74eff81" => "sg-e74eff81"
    security_groups.4253194611: "sg-fee58e9a" => "sg-fee58e9a"
    source_dest_check:          "true" => "1"
    subnet_id:                  "subnet-130abe38" => "subnet-130abe38"
    tags.#:                     "3" => "3"
    tags.Name:                  "dev-docker" => "dev-docker"
    tags.env:                   "dev" => "dev"
    tags.role:                  "dev-docker" => "dev-docker"
    tenancy:                    "default" => "<computed>"
    user_data:                  "50d98c37ef90b3a8b5453959d9217578748579e9" => "50d98c37ef90b3a8b5453959d9217578748579e9"
    vpc_security_group_ids.#:   "2" => "<computed>"

@ashb
Contributor

ashb commented Mar 15, 2016

@davedash That last comment is known and being tracked in #5627
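
I haven't verified the scope of #5627 myself, but the plan output above mixes security_groups (names) with a subnet_id, and the commonly suggested configuration for instances in a VPC at the time was to use vpc_security_group_ids (IDs) instead, since security_groups is meant for EC2-Classic. A sketch under that assumption:

```hcl
resource "aws_instance" "instance" {
  ami           = "${var.ami}"
  instance_type = "t2.small"
  subnet_id     = "${element(split(",", var.subnets), var.number)}"

  # For VPC instances, reference security groups by ID rather than
  # by name to avoid spurious diffs between the two attributes.
  vpc_security_group_ids = ["${split(",", var.security_groups)}"]
}
```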

@ghost

ghost commented Apr 11, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 11, 2020