aws_region doesn't work with terraform_remote_state #2751

Closed
radeksimko opened this issue Jul 16, 2015 · 10 comments

@radeksimko
Member

Steps to reproduce:

/00-common/main.tf:

output "default_region" {
  value = "us-east-1"
}

$ terraform remote config -backend=S3 -backend-config="bucket=terraform-test-bucket" -backend-config="key=terraform.tfstate" -backend-config="region=us-east-1"
$ terraform apply

/main.tf:

resource "terraform_remote_state" "shared" {
  backend = "s3"

  config {
    bucket = "terraform-test-bucket"
    key = "terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "${terraform_remote_state.shared.output.default_region}"
}

resource "aws_vpc" "test" {
    cidr_block = "10.0.0.0/16"
}
$ terraform plan
Refreshing Terraform state prior to plan...

Error refreshing state: 1 error(s) occurred:

* 1 error(s) occurred:

* Not a valid region:

Here's the state file from S3:

{
    "version": 1,
    "serial": 1,
    "remote": {
        "type": "s3",
        "config": {
            "bucket": "terraform-test-bucket",
            "key": "terraform.tfstate",
            "region": "us-east-1"
        }
    },
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {
                "default_region": "us-east-1"
            },
            "resources": {}
        }
    ]
}

May be related to #2256, although that one has been fixed.

@antonosmond

Just looking at this: shouldn't

provider "aws" {
  region = "${terraform_remote_state.shared.default_region}"
}

be

provider "aws" {
  region = "${terraform_remote_state.shared.output.default_region}"
}

As per the docs here: http://www.terraform.io/docs/state/remote.html
Just an observation, by the way; I haven't actually tried it.

@radeksimko
Member Author

@antonosmond Good catch, but it doesn't really help. I just tested & fixed the example.

@mitchellh
Contributor

Reproduced.

@mitchellh
Contributor

Okay, this will be a hard bug to fix.

The crux of the issue is that if a parameter to a provider is computed, then we shouldn't configure that provider. But if a provider isn't configured, we can't refresh anything that uses that provider. And if we don't refresh anything, then when will that refresh ever happen? Think about it and you quickly get into weird territory of multi-convergence, which isn't a place we want to go.

I think the correct solution to this might be a much larger change...
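
To make that chain concrete, here is a condensed sketch of the configuration from the repro above, with the circular dependency annotated (same names as the original report; comments added for illustration):

# The remote state resource itself needs only its s3 backend config.
resource "terraform_remote_state" "shared" {
  backend = "s3"

  config {
    bucket = "terraform-test-bucket"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# This provider argument is computed from a resource attribute, so the
# aws provider can't be configured until "shared" has been refreshed...
provider "aws" {
  region = "${terraform_remote_state.shared.output.default_region}"
}

# ...but refreshing aws_vpc.test requires a configured aws provider,
# which is why "terraform plan" fails with "Not a valid region".
resource "aws_vpc" "test" {
  cidr_block = "10.0.0.0/16"
}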

@steve-jansen
Contributor

Also ran into this problem trying to share the value of aws_region between Atlas environments. Not a huge deal to work around, though; I just need to duplicate the variables between the Atlas environments.
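
For illustration, the duplication looks roughly like this in each environment, instead of reading the region from the shared remote state (the variable name aws_region is an assumption, not from the original report):

# declared identically in every Atlas environment
variable "aws_region" {
  default = "us-east-1"
}

provider "aws" {
  region = "${var.aws_region}"
}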

@apparentlymart
Contributor

I think this is the same issue that motivated me to write up #2976, where I've been prototyping and writing up some ideas for resolving it. As @mitchellh said here, fixing this definitely requires some sort of architectural change, and I'd like to help explore solutions.

@egoldschmidt

👍 I'm running into this as well

@apparentlymart
Contributor

#4169 is the latest generation of the proposal to address this use-case.

FWIW, there is a workaround for this that I've been using everywhere in the meantime: whenever you ask Terraform to change resources, always run with -target=terraform_remote_state.shared first, which lets Terraform create or refresh the remote state in isolation, and then run Terraform again with no explicit target to apply the change.

This is annoying, but at least it's something predictable enough that I can just encode it in our deployment scripts for everyone to run without really understanding exactly why it needs to be done.
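
Sketched as shell steps, using the resource name from the repro above (any flags beyond -target are up to the caller):

# first, create or refresh the remote state resource in isolation
$ terraform apply -target=terraform_remote_state.shared

# then plan and apply everything else with no explicit target
$ terraform plan
$ terraform apply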

In some cases this doesn't seem to work, because even with -target specified Terraform sometimes instantiates the aws provider even though it doesn't need it. However, the error returned in that case doesn't seem to actually block the Terraform state from being created, since the terraform_remote_state resource doesn't itself depend on the aws provider.

After #4169 is implemented, this problem would be resolved by changing the terraform_remote_state resource into a terraform_remote_state data source, which will then let Terraform know that it should fetch that data as part of every plan, rather than waiting for the resource to be "created" first.
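
For reference, the data source form would look roughly like this once available (a sketch only; the exact syntax and output attribute path depend on the Terraform version that ships #4169):

data "terraform_remote_state" "shared" {
  backend = "s3"

  config {
    bucket = "terraform-test-bucket"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  # read at plan time, before any resources are created
  region = "${data.terraform_remote_state.shared.default_region}"
}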

@catsby catsby removed the core label Mar 11, 2016
@mitchellh
Contributor

This should be possible with #4169, as @apparentlymart said above!

@ghost

ghost commented Apr 23, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 23, 2020