
aws_elasticache_replication_group snapshot_retention_limit value not being set in state file #8433

Closed
danielnbarbosa opened this issue Aug 23, 2016 · 13 comments

@danielnbarbosa

Terraform Version

Terraform v0.7.1

Affected Resource(s)

  • aws_elasticache_replication_group

Terraform Configuration Files

resource "aws_elasticache_replication_group" "mod" {
  replication_group_id          = "${var.elasticache_name}"
  replication_group_description = "${var.elasticache_name}"
  engine                        = "redis"
  node_type                     = "${var.elasticache_node_type}"
  port                          = 6379
  number_cache_clusters         = "${var.elasticache_number_cache_clusters}"
  automatic_failover_enabled    = "${var.elasticache_automatic_failover_enabled}"
  security_group_ids            = ["${aws_security_group.elasticache.id}"]
  parameter_group_name          = "${aws_elasticache_parameter_group.mod.name}"
  subnet_group_name             = "${aws_elasticache_subnet_group.mod.name}"
  snapshot_retention_limit      = "30"
  maintenance_window            = "${var.elasticache_maintenance_window}"
}

Expected Behavior

Setting snapshot_retention_limit to something other than 0 should stick and be reflected in the state file.

Actual Behavior

snapshot_retention_limit always shows as 0 in the state file, forcing a re-converge on every terraform plan. The correct value is reflected in AWS.

Steps to Reproduce

  1. Create an aws_elasticache_replication_group with a snapshot_retention_limit greater than 0.
  2. After creation, run terraform plan again; it will want to re-converge.
~ module.bar.aws_elasticache_replication_group.mod
    snapshot_retention_limit: "0" => "30"
@stack72
Contributor

stack72 commented Aug 24, 2016

Hi @danielnbarbosa

Thanks for bringing this to my attention - I will look into this first thing in the morning and see if we can get it fixed ASAP.

Paul

@sharmaansh21
Contributor

@danielnbarbosa I am unable to reproduce this:

➜  terraform-debug $GOPATH/bin/terraform -version
Terraform v0.7.2-dev (70cc108614b8ef768503b43ecd1b803b383e29ea)
➜  terraform-debug $GOPATH/bin/terraform show
aws_elasticache_replication_group.bar:
  id = tf-asadsa
  automatic_failover_enabled = false
  availability_zones.# = 1
  availability_zones.986537655 = us-east-1c
  engine = redis
  engine_version = 2.8.24
  maintenance_window = wed:08:30-wed:09:30
  node_type = cache.m1.small
  number_cache_clusters = 1
  parameter_group_name = default.redis2.8
  port = 6379
  primary_endpoint_address = tf-asadsa.cy5v6n.ng.0001.use1.cache.amazonaws.com
  replication_group_description = test description
  replication_group_id = tf-asadsa
  security_group_ids.# = 1
  security_group_ids.1731462772 = sg-a01cb0da
  security_group_names.# = 0
  snapshot_retention_limit = 30
  snapshot_window = 06:00-07:00
  subnet_group_name = tf-test-cache-subnet-1234
resource "aws_elasticache_replication_group" "bar" {
    replication_group_id = "tf-asadsa"
    replication_group_description = "test description"
    node_type = "cache.m1.small"
    number_cache_clusters = 1
    port = 6379
    snapshot_retention_limit      = "30"
    subnet_group_name = "${aws_elasticache_subnet_group.bar.name}"
    security_group_ids = ["${aws_security_group.bar.id}"]
    parameter_group_name = "default.redis2.8"
    availability_zones = ["us-east-1c"]
}

@danielnbarbosa
Author

Hmm, that's strange. Could it be because my resource is in a module? What else can I provide to help track this down? Also, I would suggest trying with number_cache_clusters = 2.

@stack72
Contributor

stack72 commented Aug 26, 2016

Hi @danielnbarbosa

Ok, I have been able to recreate this :)

So I have the following config:

resource "aws_elasticache_replication_group" "test2" {
  replication_group_id = "test-rg-2"
  replication_group_description = "test description"
  node_type = "cache.m1.small"
  number_cache_clusters = 2
  port = 6379
  parameter_group_name = "default.redis2.8"
  availability_zones = ["us-west-2a","us-west-2b"]
  snapshot_retention_limit = 20
  snapshot_window = "06:00-07:00"
  automatic_failover_enabled = true
}

I apply that and it works:

[stacko@Pauls-MacBook-Pro:~/Code/terraform-recreations/elasticache-replication-group]
% terraform apply
aws_elasticache_replication_group.test2: Creating...
  apply_immediately:             "" => "<computed>"
  automatic_failover_enabled:    "" => "true"
  availability_zones.#:          "" => "2"
  availability_zones.221770259:  "" => "us-west-2b"
  availability_zones.2487133097: "" => "us-west-2a"
  engine:                        "" => "redis"
  engine_version:                "" => "<computed>"
  maintenance_window:            "" => "<computed>"
  node_type:                     "" => "cache.m1.small"
  number_cache_clusters:         "" => "2"
  parameter_group_name:          "" => "default.redis2.8"
  port:                          "" => "6379"
  primary_endpoint_address:      "" => "<computed>"
  replication_group_description: "" => "test description"
  replication_group_id:          "" => "test-rg-2"
  security_group_ids.#:          "" => "<computed>"
  security_group_names.#:        "" => "<computed>"
  snapshot_retention_limit:      "" => "20"
  snapshot_window:               "" => "06:00-07:00"
  subnet_group_name:             "" => "<computed>"
aws_elasticache_replication_group.test2: Still creating... (10s elapsed)
.........
aws_elasticache_replication_group.test2: Still creating... (15m50s elapsed)
aws_elasticache_replication_group.test2: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

When I then replan, I see the following:

% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

aws_elasticache_replication_group.test2: Refreshing state... (ID: test-rg-2)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

~ aws_elasticache_replication_group.test2
    snapshot_retention_limit: "0" => "20"


Plan: 0 to add, 1 to change, 0 to destroy.

I can then modify as follows:

% terraform apply
aws_elasticache_replication_group.test2: Refreshing state... (ID: test-rg-2)
aws_elasticache_replication_group.test2: Modifying...
  snapshot_retention_limit: "0" => "20"
aws_elasticache_replication_group.test2: Still modifying... (10s elapsed)
aws_elasticache_replication_group.test2: Still modifying... (20s elapsed)
aws_elasticache_replication_group.test2: Still modifying... (30s elapsed)
aws_elasticache_replication_group.test2: Modifications complete

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Then another plan shows as follows:

% terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

aws_elasticache_replication_group.test2: Refreshing state... (ID: test-rg-2)

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

~ aws_elasticache_replication_group.test2
    snapshot_retention_limit: "0" => "20"


Plan: 0 to add, 1 to change, 0 to destroy.

I am looking into this right now

Paul

@danielnbarbosa
Author

Awesome. Thanks @stack72 !

@gregoryguillou

Hi @stack72

AWS does not return the Redis/ElastiCache SnapshotRetentionLimit as part of the ReplicationGroup; the code must:

  • grab SnapshottingClusterId from the ReplicationGroup
  • get SnapshotRetentionLimit from the CacheCluster matching that SnapshottingClusterId

The AWS Console does not display SnapshotRetentionLimit either; it is only available when creating/modifying the ReplicationGroup. I've tested on the console: changing the SnapshotRetentionLimit on the snapshotting CacheCluster does show up on the ReplicationGroup when modifying it.
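For reference, a minimal sketch of that two-step lookup using the aws-sdk-go v1 ElastiCache client (the function name, error handling, and main wiring here are illustrative, not the provider's actual code):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

// lookupSnapshotRetentionLimit resolves the retention limit for a replication
// group by following SnapshottingClusterId to its cache cluster, since
// DescribeReplicationGroups itself does not carry SnapshotRetentionLimit.
func lookupSnapshotRetentionLimit(conn *elasticache.ElastiCache, rgID string) (int64, error) {
	rgOut, err := conn.DescribeReplicationGroups(&elasticache.DescribeReplicationGroupsInput{
		ReplicationGroupId: aws.String(rgID),
	})
	if err != nil {
		return 0, err
	}
	if len(rgOut.ReplicationGroups) == 0 {
		return 0, fmt.Errorf("replication group %q not found", rgID)
	}
	rg := rgOut.ReplicationGroups[0]
	if rg.SnapshottingClusterId == nil {
		// No snapshotting cluster means snapshots are disabled (limit 0).
		return 0, nil
	}

	ccOut, err := conn.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{
		CacheClusterId: rg.SnapshottingClusterId,
	})
	if err != nil {
		return 0, err
	}
	if len(ccOut.CacheClusters) == 0 {
		return 0, fmt.Errorf("snapshotting cache cluster %q not found", aws.StringValue(rg.SnapshottingClusterId))
	}
	return aws.Int64Value(ccOut.CacheClusters[0].SnapshotRetentionLimit), nil
}

func main() {
	sess := session.Must(session.NewSession())
	conn := elasticache.New(sess)

	limit, err := lookupSnapshotRetentionLimit(conn, "test-rg-2")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("snapshot_retention_limit:", limit)
}

The provider would then write that value into state during the resource read, which is what stops the perpetual "0" => "30" diff.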

I hope it helps

Gregory

@danielnbarbosa
Author

Still seeing this in v0.7.5. Anything I can do to help push it across the finish line?

@adamgotterer

adamgotterer commented Oct 27, 2016

I'm also seeing this in 0.7.5.

snapshot_retention_limit = "${var.snapshot_retention_limit}" # Value is 7

tfstate:

 "snapshot_retention_limit": "0",

terraform apply:

~ module.redis.aws_elasticache_replication_group.main
    snapshot_retention_limit: "0" => "7"

@ebgc

ebgc commented Oct 28, 2016

Same for 0.7.4.

@chamindg

chamindg commented Nov 4, 2016

Same for 0.7.7.

@gregoryguillou

Probably fixed by PR #9601 in 0.7.8! At least the issue has disappeared in 0.7.9. Thank you

@stack72
Contributor

stack72 commented Jan 30, 2017

Closed - this is no longer the case as of 0.7.8.

@stack72 stack72 closed this as completed Jan 30, 2017