
provider/aws: Fix issue with checking for ElastiCache cluster status #2842

Merged: 3 commits merged into master from aws-elasticache-debug on Jul 29, 2015

Conversation

@catsby (Contributor) commented Jul 24, 2015

Currently we're only checking the first Cache Cluster returned when we query the API to describe our clusters. If you have multiple clusters, you can falsely get a status of available.

  • We entered the waitForState method without first setting the cluster id, so we were effectively grabbing all clusters by searching for "", not just the specific one.
  • We then examined the first cluster returned, which would probably have been sufficient had we specified a specific cluster. Instead, we could get a different, existing cluster that happened to be available.

Now we set the id first, then enter the waitForState method. There we also filter the results for the specific cluster; there should be only one returned, but I filter anyway.
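For illustration, here is a minimal sketch of the filtering approach described above. It is a sketch under assumptions, not the PR's actual code: the helper signature mirrors the cacheClusterStateRefreshFunc call visible in the diff below, and field names such as CacheClusterID and CacheClusterStatus follow the aws-sdk-go naming used in that diff.

package aws

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/elasticache"
    "github.com/hashicorp/terraform/helper/resource"
)

// cacheClusterStateRefreshFunc returns a refresh function that looks up the
// one cluster we care about instead of inspecting only the first element of
// the DescribeCacheClusters response. givenState and pending are unused in
// this simplified sketch; they are kept to match the call shown in the diff.
func cacheClusterStateRefreshFunc(conn *elasticache.ElastiCache, clusterID, givenState string, pending []string) resource.StateRefreshFunc {
    return func() (interface{}, string, error) {
        resp, err := conn.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{
            CacheClusterID: aws.String(clusterID),
        })
        if err != nil {
            return nil, "", err
        }

        // Filter for the specific cluster. There should be only one match,
        // but guard against the API handing back others.
        for _, c := range resp.CacheClusters {
            if c.CacheClusterID != nil && *c.CacheClusterID == clusterID {
                return c, *c.CacheClusterStatus, nil
            }
        }

        return nil, "", fmt.Errorf("CacheCluster %q not found", clusterID)
    }
}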

This should fix #2732. Thanks to all in that issue, as well as #2051, for the help and patience.

@phinze (Contributor) commented Jul 24, 2015

Nice find! Totally makes sense why repro was tough in our test accounts - we rarely have multiple cache clusters floating around.

  pending := []string{"creating"}
  stateConf := &resource.StateChangeConf{
      Pending: pending,
      Target:  "available",
-     Refresh: cacheClusterStateRefreshFunc(conn, d.Id(), "available", pending),
+     Refresh: cacheClusterStateRefreshFunc(conn, *resp.CacheCluster.CacheClusterID, "available", pending),
A contributor commented on the diff:
Can leave this as d.Id() since you moved the SetId up, but not a huge deal either way
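The point here is that because SetId now runs before the wait, d.Id() already holds the cluster ID, so either expression works. The hypothetical helper below (not taken from this PR) sketches the wait using the refresh function from the sketch above; it additionally needs the "time" import, the timeout values are placeholders, and Target is a plain string as in the snippet shown in this thread.

// waitForCacheClusterAvailable is a hypothetical helper, not code from this
// PR. Because the resource ID is set before waiting, the caller can pass
// either d.Id() or *resp.CacheCluster.CacheClusterID as clusterID.
func waitForCacheClusterAvailable(conn *elasticache.ElastiCache, clusterID string) error {
    pending := []string{"creating"}
    stateConf := &resource.StateChangeConf{
        Pending:    pending,
        Target:     "available",
        Refresh:    cacheClusterStateRefreshFunc(conn, clusterID, "available", pending),
        Timeout:    10 * time.Minute, // placeholder, not necessarily the PR's value
        Delay:      10 * time.Second, // placeholder
        MinTimeout: 3 * time.Second,  // placeholder
    }

    if _, err := stateConf.WaitForState(); err != nil {
        return fmt.Errorf("error waiting for ElastiCache cluster (%s) to become available: %s", clusterID, err)
    }
    return nil
}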

@phinze (Contributor) commented Jul 24, 2015

LGTM

@mzupan (Contributor) commented Jul 26, 2015

This worked great for me, thanks for the fix.

* master: (33 commits)
  Update CHANGELOG.md
  Update CHANGELOG.md
  scripts: change website_push to push from HEAD
  update analytics
  provider/aws: Update source to comply with upstream breaking change
  Update CHANGELOG.
  provider/aws: Fix issue with IAM Server Certificates and Chains
  Increase timeout, IGM delete can be slow
  Make failure of "basic" test not interfere with success of "update" test
  Update CHANGELOG.md
  Use new autoscaler / instance group manager APIs.
  Compute private ip addresses of ENIs if they are not specified
  Update CHANGELOG.md
  Update CHANGELOG.md
  provider/aws: Error when unable to find a Root Block Device name
  Update CHANGELOG.md
  aws_db_instance: Add mixed-case engine test to ensure StateFunc works.
  aws_db_instance: Only write lowercase engines to the state file.
  Update CHANGELOG.md
  Split AWS provider topics by service.
  ...
catsby added a commit that referenced this pull request Jul 29, 2015
provider/aws: Fix issue with checking for ElastiCache cluster status
catsby merged commit 1043fb7 into master Jul 29, 2015
catsby deleted the aws-elasticache-debug branch July 29, 2015 16:42
@ghost commented May 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators May 1, 2020
Successfully merging this pull request may close these issues.

aws: aws_elasticache_cluster doesn't wait till completed