aws_elasticache_replication_group: Unexpected EOF #9670

Closed

spanktar opened this issue Oct 27, 2016 · 5 comments

Comments

spanktar commented Oct 27, 2016

Terraform Version

0.7.7

Affected Resource(s)

  • aws_elasticache_replication_group

Terraform Configuration Files

resource "aws_elasticache_replication_group" "redis" {
    availability_zones = [
        "${var.aws_region_1}",
        "${var.aws_region_2}",
        "${var.aws_region_3}"
    ]
    automatic_failover_enabled =    true
    engine_version =                "3.2.4"
    node_type =                     "cache.m4.large"
    number_cache_clusters =         3
    parameter_group_name =          "default.redis3.2.cluster.on"
    port =                          6379
    replication_group_description = "Redis RG"
    replication_group_id =          "rg-redis"
    subnet_group_name =             "${aws_elasticache_subnet_group.elasticache_subnet_group.name}"
}

resource "aws_elasticache_subnet_group" "elasticache_subnet_group" {
    description =                   "ElastiCache subnet group"
    name =                          "elasticache-subnet-group"
    subnet_ids = [
        "${aws_subnet.subnet_devops_1.id}",
        "${aws_subnet.subnet_devops_2.id}",
        "${aws_subnet.subnet_devops_3.id}"
    ]
}
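
For context, the variable definitions were not included in the report. Note that availability_zones expects Availability Zone names rather than region names, so the referenced variables would need to hold values along the lines of the sketch below; the variable names reuse the reporter's, but the values are assumptions.

variable "aws_region_1" {
  # Assumed value; availability_zones takes AZ names such as "us-west-2a", not region names.
  default = "us-west-2a"
}

variable "aws_region_2" {
  default = "us-west-2b"
}

variable "aws_region_3" {
  default = "us-west-2c"
}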

Debug Output

https://gist.github.com/spanktar/e540a456a338bc88b10ba2acf6f3efd3 and
https://gist.github.com/spanktar/c848aa43cff7f046e18449e723cd505d
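
For anyone trying to reproduce this, debug output like the gists above can be captured by enabling Terraform's logging environment variables before the apply; a minimal sketch (the log file path here is arbitrary):

TF_LOG=DEBUG TF_LOG_PATH=./terraform-debug.log terraform apply
# On a panic like this one, Terraform also writes a crash.log to the working directory.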

Expected Behavior

The ElastiCache cluster should have been created and recorded in the state file.

Actual Behavior

The ElastiCache cluster was created successfully in AWS, but it was not recorded in the state file and Terraform crashed.

Steps to Reproduce

  1. terraform apply

@kwilczynski
Contributor

@spanktar hi there! I am sorry you are having issues!

I believe this might be related to #9656, and also resolved via #9601. This fix is due to be released as part of the upcoming 0.7.8 release.

@spanktar
Author

Thanks for the quick reply; I'll eagerly await that release and see whether it fixes this. Thanks!

@kwilczynski
Contributor

@spanktar hi there again! I am sorry that you were affected!

If you have the time and ability to do so, I would recommend building the master branch and trying it out to see whether the issue is indeed resolved for you; if not, we can try to fix it for you before the release.

Again, sincere apologies for all the trouble.
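
For reference, building and trying the master branch with a Go 1.7-era toolchain would have looked roughly like the sketch below (this assumes a configured GOPATH; the authoritative build steps are in the repository's README):

# go get fetches the source into $GOPATH/src and installs a terraform binary into $GOPATH/bin
go get github.com/hashicorp/terraform
$GOPATH/bin/terraform version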

@stack72
Contributor

stack72 commented Oct 27, 2016

Hi @spanktar

I am happy to say that this has already been fixed in PR #9601, which was applied to master :)

This will be released as part of TF 0.7.8. I am sorry for the issues here.

Paul

@ghost

ghost commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 21, 2020