
Feature: Conditionally load tfvars/tf file based on Workspace #15966

Open
atkinchris opened this issue Aug 30, 2017 · 102 comments · May be fixed by #33873

Comments

@atkinchris

Feature Request

Terraform to conditionally load a .tfvars or .tf file, based on the current workspace.

Use Case

When working with infrastructure that has multiple environments (e.g. "staging", "production"), workspaces can be used to isolate the state of each environment. Often, different variables are needed per workspace. It would be useful if Terraform could conditionally include or load a variables file, depending on the workspace.

For example:

application/
|-- main.tf // Always included
|-- staging.tfvars // Only included when workspace === staging
|-- production.tfvars // Only included when workspace === production

Other Thoughts

Conditionally loading a file would be flexible, but possibly too magical. Conditionally loading parts of a .tf/.tfvars file based on the workspace, or being able to specify different default values per workspace within a variable, could be more explicit.

@apparentlymart
Contributor

Hi @atkinchris! Thanks for this suggestion.

We have plans to add per-workspace variables as a backend feature. This means that for the local backend it would look for variables at terraform.d/workspace-name.tfvars (alongside the local states) but in the S3 backend (for example) it could look for variable definitions on S3, keeping the record of the variables in the same place as the record of which workspaces exist. This would also allow more advanced, Terraform-aware backends (such as the one for Terraform Enterprise) to support centralized management of variables.

We were planning to prototype this some more before actually implementing it, since we want to make sure the user experience makes sense here. With the variables stored in the backend we'd probably add a local command to update them from the CLI so that it's not necessary to interact directly with the underlying data store.

At this time we are not planning to support separate configuration files per workspace, since that raises some tricky questions about workflow and architecture. Instead, we plan to make the configuration language more expressive so that it can support more flexible dynamic behavior based on variables, which would then allow you to use the variables-per-workspace feature to activate or deactivate certain behaviors without coupling the configuration directly to specific workspaces.

These items are currently in early planning stages and so no implementation work has yet been done and the details may shift along the way, but this is a direction we'd like to go to make it easier to use workspaces to model differences between environments and other similar use-cases.

@atkinchris
Author

Awesome, look forward to seeing how workspaces evolve.

We'll keep loading the workspace specific variables with -var-file=staging.tfvars.

@b-dean

b-dean commented Oct 11, 2017

@apparentlymart is there another github issue that is related to these plans? Something we could subscribe to?

I'm interested in this because we currently have a directory in our repo with env/<short account nickname>-<workspace>.tfvars files, and it's a bit of a pain to have to remember to mention them all the time when doing plans, etc. (Although it's immediately obvious when you forget it on a plan and nothing looks like you expect, it could be dangerous to forget it on an apply.)

If these were kept in some backend-specific location, that would be great!

@et304383

et304383 commented Nov 1, 2017

We just want to reference a different VPC CIDR block based on the current workspace. Is there any other workaround that could get us going today?

@apparentlymart
Contributor

A few common workarounds I've heard about are:

  • Create a map in a named local value whose keys are workspace names and whose values are the values that should vary per workspace. Then use another named local value to index that map with terraform.workspace to get the appropriate value for the current workspace.
  • Place per-workspace settings in some sort of per-workspace configuration store, such as Consul's key/value store, and then use the above technique to select an appropriate Consul server to read from based on the workspace. This way there's only one per-workspace indirection managed directly in Terraform, to find the Consul server, and everything else is obtained from there. Even this map can be avoided with some systematically-created DNS records to help Terraform find a Consul server given the value of terraform.workspace.
  • (For VPCs in particular) Use AWS tags to systematically identify which VPC belongs to which workspace, and use the aws_vpc data source to look one up by tag to obtain the cidr_block attribute. (A minimal sketch follows below.)
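A minimal sketch of that third workaround, assuming each VPC was tagged with a Workspace tag whose value matches the workspace name (the tag key and resource names here are illustrative, not from this thread):

data "aws_vpc" "current" {
  # Assumes every VPC carries a "Workspace" tag matching the workspace it belongs to.
  tags = {
    Workspace = "${terraform.workspace}"
  }
}

locals {
  # The CIDR block is then read from the data source rather than from a per-workspace variable.
  vpc_cidr = "${data.aws_vpc.current.cidr_block}"
}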

@et304383

et304383 commented Nov 1, 2017

@apparentlymart thanks. I think option one is best. Option 3 doesn't work for us, as we create the VPC with Terraform in the same workspace.

@james-lawrence

@apparentlymart what is the estimated timeline for this functionality? Could it be stripped down to just the tfvars loading, without the dynamic behaviour based on variables? It sounds like you have a pretty solid understanding of how loading tfvars for a particular workspace is going to work.

@apparentlymart
Contributor

Hi @james-lawrence,

In general we can't comment on schedules and timelines because we work iteratively, and thus there simply isn't a defined schedule for when things get done beyond our current phase of work.

However, we tend to prefer to split up the work by what subsystem it relates to in order to reduce context-switching, since non-trivial changes to Terraform Core tend to require lots of context. For example, in 0.11 the work was focused on the module and provider configuration subsystems because that allowed the team to reload all the context on how modules are loaded, how providers are inherited between modules, etc and thus produce a holistic design.

The work I described above belongs to the "backends" subsystem, so my guess (though definitely subject to change along the way) is that we'd try to bundle this work up with other planned changes for backends, such as the ability to run certain operations on a remote system, ability to retrieve outputs without disclosing the whole state, etc. Unfortunately all I can say right now is that we're not planning to look at this right now, since our current focus is on the configuration language usability and work is already in progress in that area which we want to finish (or, at least, reach a good stopping point) before switching context to backends.

@non7top

non7top commented Nov 29, 2017

That becomes quite hard to manage when you are dealing with multiple AWS accounts and Terraform workspaces.

@ura718

ura718 commented Dec 19, 2017

Can anyone explain the difference between a terraform.tfvars and a variables.tf file, and when to use one over the other? Do you need both, or is one good enough?

@non7top

non7top commented Dec 19, 2017

A variables.tf file has the variable definitions and default values; a .tfvars file has overriding values where needed.
You can have a single .tf file and several .tfvars files, each defining a different environment (see the sketch below).
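For instance, a minimal sketch (the variable name and values are made up):

# variables.tf — declares the variable and its default value
variable "instance_count" {
  default = 1
}

# staging.tfvars — overrides the default for this environment,
# loaded with: terraform plan -var-file=staging.tfvars
instance_count = 2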

@matti

matti commented Jan 23, 2018

Yet another workaround (based on @apparentlymart's first workaround) that allows you to keep workspace variables in different files (easier to diff). When you add a new workspace you only need to a) add the file and b) add it to the list in the merge. This is horrible, but it works.

workspace1.tf

locals {
  workspace1 = {
    workspace1 = {
      project_name = "project1"
      region_name  = "europe-west1"
    }
  }
}

workspace2.tf

locals {
  workspace2 = {
    workspace2 = {
      project_name = "project2"
      region_name  = "europe-west2"
    }
  }
}

main.tf

locals {
  workspaces = "${merge(local.workspace1, local.workspace2)}"
  workspace  = "${local.workspaces[terraform.workspace]}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

@mhfs

mhfs commented Feb 15, 2018

Taking @matti's strategy a little further, I like having default values and only customizing per workspace as needed. Here's an example:

locals {
  defaults = {
    project_name = "project-default"
    region_name  = "region-default"
  }
}

locals {
  staging = {
    staging = {
      project_name = "project-staging"
    }
  }
}

locals {
  production = {
    production = {
      region_name  = "region-production"
    }
  }
}

locals {
  workspaces = "${merge(local.staging, local.production)}"
  workspace  = "${merge(local.defaults, local.workspaces[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

When in workspace staging it outputs:

project_name = project-staging
region_name = region-default
workspace = staging

When on workspace production it outputs:

project_name = project-default
region_name = region-production
workspace = production

@tilgovi

tilgovi commented Feb 15, 2018

I've been thinking about using Terraform in automation and doing something like -var-file $TF_WORKSPACE.tfvars.

@farman022

Can someone please give an example/template of "Terraform conditionally loading a .tfvars or .tf file based on the current workspace"? Even the old way works for me; I just want to run multiple infrastructures from a single directory.

@landon9720

@farman022 Just use the -var-file command line option to point to your workspace-specific vars file.

@bborysenko

Like @mhfs's strategy, but with a single merge:

locals {

  env = {
    defaults = {
      project_name = "project_default"
      region_name = "region-default"
    }

    staging = {
      project_name = "project-staging"
    }

    production = {
      region_name = "region-production"
    }
  }

  workspace = "${merge(local.env["defaults"], local.env[terraform.workspace])}"
}

output "workspace" {
  value = "${terraform.workspace}"
}

output "project_name" {
  value = "${local.workspace["project_name"]}"
}

output "region_name" {
  value = "${local.workspace["region_name"]}"
}

@menego

menego commented Apr 17, 2018

locals {
  context_variables = {
    dev = {
      pippo = "pippo-123"
    }
    prod = {
      pippo = "pippo-456"
    }
  }

  pippo = "${lookup(local.context_variables[terraform.workspace], "pippo")}"
}

output "LOCALS" {
  value = "${local.pippo}"
}

@ahsannaseem

Is this feature added in v0.11.7? I tried creating terraform.d with qa.tfvars and prod.tfvars, then selected workspace qa. On plan/apply it seems that it is not detecting qa.tfvars.

@mildwonkey
Contributor

No, this hasn't been added yet (current version is v0.11.8).

While we try to follow up with issues like this on GitHub, sometimes things get lost in the shuffle - you can always check the Changelog for updates.

@hussfelt

This is a resource that I have used a couple of times as a reference to set up a Makefile wrapping Terraform; maybe some of you will find it useful:
https://github.com/pgporada/terraform-makefile

@edantes-1845

I use the following: terraform plan -var-file "$(terraform workspace show).tfvars". This means I can use the same command, no matter which workspace is selected.

But it is kinda sad that after 4 years this is still not implemented. I mean, "How hard can it be?" 🙂

I use it too. It is a good decision

@infogulch

> terraform plan -var-file "$(terraform workspace show).tfvars"

This is a great idea for most. Ironically this is the one solution that can't work for Terraform Enterprise customers because the terraform cli is invoked by the TFE node. Funny when you pay only for it to be worse 🤦

@rauerhans

> terraform plan -var-file "$(terraform workspace show).tfvars"

Very clever, but it's possible to get it wrong if you don't automatically pick up the current workspace. Still, after going through the whole thread here, I'll roll with this, thanks!

@pecigonzalo

> terraform plan -var-file "$(terraform workspace show).tfvars"
>
> This is a great idea for most. Ironically this is the one solution that can't work for Terraform Enterprise customers because the terraform cli is invoked by the TFE node. Funny when you pay only for it to be worse 🤦

@infogulch What if you do -var-file "foo.tfvars" via TF_CLI_ARGS for each workspace?

> terraform plan -var-file "$(terraform workspace show).tfvars"
>
> Very clever, but it's possible to get it wrong if you don't automatically pick up the current workspace. Still, after going through the whole thread here, I'll roll with this, thanks!

@rauerhans could you elaborate?

@rokcarl

rokcarl commented Nov 22, 2021

It could be dangerous if you think you're on dev, run this command, but you're actually on the prod workspace so you apply to production. That's why I have Oh My Zsh and it always shows me which workspace I'm on before running any Terraform command.

@raman-nbg

This is my second day of writing TF scripts for a multi-stage setup, and I think I should switch to a different tool. It looks like there is no clean solution for using different tfvars files per workspace (with Terraform Cloud). The workarounds described here only apply to running/applying TF locally.

Why isn't there any option in the TF Cloud UI where I can specify which tfvars files should be used? This seems so simple...

@matti

matti commented Mar 16, 2022

@raman-nbg yes, do it before you have a massive set of Terraform written. I wish I were you.

@thomas-riccardi

@raman-nbg in TFC we ended up using -var-file (from #15966 (comment)) with the TF_CLI_ARGS env var (https://www.terraform.io/cli/config/environment-variables#tf_cli_args-and-tf_cli_args_name): TF_CLI_ARGS_plan=-var-file=staging.env.tfvars
It works well enough.

(TF_CLI_ARGS_plan instead of TF_CLI_ARGS for TFC, as it does a plan saved to a file, then an apply from that file.)

Something similar is documented here: https://support.hashicorp.com/hc/en-us/articles/4416764686611-Using-Terraform-Variable-Definition-Files-in-Terraform-Cloud-Enterprise

@paololazzari

Will this feature ever be added? It's been almost 5 years since this issue was opened

@ghost

ghost commented May 26, 2022

> Will this feature ever be added? It's been almost 5 years since this issue was opened

I don't think so, haha

@yukari1414

still waiting... 😂

@crw
Collaborator

crw commented Aug 8, 2022

This issue is not currently prioritized. It does rank highly on our list of most requested issues, however that does not guarantee it will be addressed in the near future. Thanks for your interest and your patience.

@oniGino

oniGino commented Aug 22, 2022

Here is yet another workaround structure:

locals {
  workspaces = {
    workspace1 = {
      key1 = "value1"
      key2 = "value2"
    }
    workspace2 = {
      key1 = "foo1"
      key2 = "foo2"
    }
  }
  ws = local.workspaces[terraform.workspace]
}

Now all workspace-specific values can be referenced as local.ws.key1 and local.ws.key2, or local.ws["key1"].

As an added bonus, you get an error when trying to run in a workspace that isn't defined in the locals.

@github-usr-name

github-usr-name commented Aug 24, 2022

> This issue is not currently prioritized. It does rank highly on our list of most requested issues
                              ^^^ I do not think this word means what you think.
                                  The issue is _clearly_ a high priority for your customers.

@github-usr-name

@oniGino Reasonable approach, though without a bit of juggling it has the disadvantage of coupling the settings for all possible environments into a single file. I tend to use this pattern quite a lot in various languages - it's essentially a poor man's DI ;)

@philomory
Contributor

@github-usr-name Although I do not work for Hashicorp, I can almost guarantee you that they know exactly what "prioritized" means; in this context, "not currently prioritized", as in, "we have not assigned this issue a priority in our backlog/work queue".

@briceburg

Somehow inventing a whole new language (HCL v1) over the adoption of YAML/JSON, CUE, or Jsonnet took "priority" over sensible features like this. I find it strange that golang-friendly devs would not want to create conventions around such a common feature; the language itself preaches idiomatic style and "readability"... my sad $0.02

@matti

matti commented Aug 25, 2022 via email

@Bessonov

> Somehow inventing a whole new language (HCL v1) over the adoption of YAML/JSON, CUE, or Jsonnet took "priority" over sensible features like this. I find it strange that golang-friendly devs would not want to create conventions around such a common feature; the language itself preaches idiomatic style and "readability"... my sad $0.02

Your comment is off-topic, because this issue has nothing to do with the configuration language, but with how per-workspace differences can be introduced.

AFAIK HCL is used in multiple HashiCorp products, and therefore on its own it makes perfect sense. But I'm in the cohort that says a declarative language is a bad idea for infrastructure management, or for any dynamic task. Of course, there have been changes since Terraform 0.12 that made Terraform usable for most use cases.

Back to your comment: your suggestions would make it even worse. YAML/JSON/Jsonnet are more dysfunctional than HCL. And CUE was introduced at the end of 2018, long after HCL was already used in production worldwide. I have never used CUE (and don't plan to), but at first glance there is no real benefit for HashiCorp and(!) the community, just a bunch of disadvantages.

Therefore, if switching to another language at all, the best choice would probably be a general-purpose language, as Pulumi did.

@nitrocode

nitrocode commented Oct 19, 2022

It would be very nice if this were built into Terraform.

NOTE: terraform.workspace is unavailable in variable validation blocks, so those cannot be used for this.

Assumptions


If there are consistent workspace names such as ue1-prod, ue1-dev, etc., and inputs such as the following:

# ue1-prod.tfvars
short_region = "ue1"
env          = "prod"

# ue1-dev.tfvars
short_region = "ue1"
env          = "dev"

terraform workspace new ue1-dev
terraform workspace new ue1-prod
terraform workspace select ue1-dev

Option 1: consistent workspaces with a local check


main.tf

variable "short_region" {
  type = string
}

variable "env" {
  type = string
}

locals {
  check_workspace = {
    (terraform.workspace) = "some-good-value-doesn't-matter"
  }["${var.short_region}-${var.env}"]
}

If you select the ue1-dev workspace but pass ue1-prod.tfvars by mistake, env will be set to prod, the check_workspace map will only contain the key ue1-dev, and the lookup for ue1-prod will fail. It only succeeds (as an otherwise unused local) when the selected workspace matches the naming convention implied by the inputs.

Returns

$ terraform plan -var-file="ue1-prod.tfvars"

│ Error: Invalid index
│
│   on main.tf line 12, in locals:
│   12:   }["${var.short_region}-${var.env}"]
│     ├────────────────
│     │ terraform.workspace is "ue1-dev"
│     │ var.env is "prod"
│     │ var.short_region is "ue1"

│ The given key does not identify an element in this collection value.

Option 2: consistent workspaces with a null_resource check

resource "null_resource" "workspace_check" {
  lifecycle {
    precondition {
      condition     = contains(split("-", terraform.workspace), var.short_region)
      error_message = "The selected workspace \"${terraform.workspace}\" does not have the correct short_region \"${var.short_region}\""
    }
    precondition {
      condition     = contains(split("-", terraform.workspace), var.env)
      error_message = "The selected workspace \"${terraform.workspace}\" does not have the correct env \"${var.env}\""
    }
  }
}

Returns

$ terraform plan -var-file="ue1-prod.tfvars"

│ Error: Resource precondition failed
│
│   on main.tf line 16, in resource "null_resource" "workspace_check":
│   16:       condition     = contains(split("-", terraform.workspace), var.env)
│     ├────────────────
│     │ terraform.workspace is "ue1-dev"
│     │ var.env is "prod"
│
│ The selected workspace "ue1-dev" does not have the correct env "prod"

Option 3: terraform wrapper (shell script or atmos)


We hit a similar problem with clients and developed a tool called atmos to get around this limitation.

  1. define tfvars via yaml (we call it a stack)
  2. define a root terraform module (we call it a component)
  3. run atmos terraform plan example --stack uw2-dev
  4. deep merge uw2-dev.yaml and then generate tfvars file
  5. create or select a workspace (which is derived from the yaml stack) i.e. uw2-dev
  6. run the terraform plan
# stacks/uw2-dev.yaml
components:
  terraform:
    example:
      vars:
        # override the value of var.hello
        hello: world

# components/terraform/example/main.tf
variable "hello" {
  default = "hello"
}

output "hello" {
  value = var.hello
}

$ brew install atmos
$ wget https://raw.githubusercontent.com/cloudposse/atmos/master/atmos.yaml
$ atmos terraform plan example --stack uw2-dev

The atmos command will then create the tfvars JSON at components/terraform/example/uw2-dev-example.tfvars.json

{
  "hello": "world"
}

The atmos command will then run the following

cd components/terraform/example
terraform init
# if the workspace doesn't exist
terraform workspace new uw2-dev
# if the workspace exists
terraform workspace select uw2-dev
# finally
terraform plan -var-file uw2-dev-example.tfvars.json

This returns no error, since the mistake is prevented as long as the Terraform wrapper is used exclusively.

@iateadonut

Can I +1 this feature request?

I have a workspace 'production' and I was really surprised when I dropped a terraform.tfvars file into terraform.tfstate.d/production/ and it didn't automatically read from it.

@greyvugrin

This doesn't solve the direct terraform integration part, but this script makes it easier to not schlep around -var-file=$(terraform workspace show).tfvars for every command in the meantime.

https://gist.github.com/greyvugrin/d7e43b4834796101c6c328718a1b7250

# Replaces stuff like:
# - terraform plan -var-file=$(terraform workspace show).tfvars
# - terraform import -var-file=$(terraform workspace show).tfvars aws_s3_bucket.bucket MY_BUCKET
# with:
# - ./tf.sh dev plan
# - ./tf.sh dev import aws_s3_bucket.bucket MY_BUCKET

@michael-mcmasters

michael-mcmasters commented Dec 25, 2023

I was able to do this with Terraform Cloud by adding an environment variable to the workspace:
Key: TF_CLI_ARGS
Value: -var-file "dev.tfvars"

Now when I run terraform apply it turns into terraform apply -var-file dev.tfvars. It will do this for all terraform commands. (See here: https://developer.hashicorp.com/terraform/cli/config/environment-variables#tf_cli_args-and-tf_cli_args_name)

It would be better if we could apply this in the code itself though.

@DavidGamba

One hard reason to support conditionally loading .tf files is that moved blocks don't allow variables.
That means one can't do refactors in a CI environment, only locally, and the moved blocks can't be committed to version control (see the sketch below).
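For context, a minimal sketch of a moved block; the resource addresses here are hypothetical, and the point is that both from and to must be static addresses, so they cannot be parameterized per workspace:

moved {
  # Both addresses must be literal; expressions that reference variables
  # (e.g. an address built from var.env or terraform.workspace) are not allowed here.
  from = aws_instance.legacy_name
  to   = aws_instance.server
}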

As for working with multiple workspaces, I have a tool, bt, that automatically sets TF_DATA_DIR and TF_WORKSPACE to allow you to work with multiple workspaces in different terminals.

@TheShahin

TheShahin commented Mar 7, 2024

Here's my hacky workaround for the issue in the meantime:

Consider the directory structure:

./
  |-- main.tf
  |-- staging.tfvars.json
  |-- production.tfvars.json
  |-- terraform.tfvars

In order to load either staging.tfvars.json or production.tfvars.json based on current workspace I have the following code in main.tf:

locals {
  workspace_vars = jsondecode(file("${terraform.workspace}.tfvars.json"))
  ...
}

module "example" {
  ...
  variable_a = local.workspace_vars.variable_a
  variable_b = local.workspace_vars.variable_b
  // etc
}

Obviously this only works if the root module is not expecting these values as input variables; instead we load them as local values that we can then pass as input variables to a module. It's not exactly what's being asked for, but hopefully it's good enough for some in the meantime.

@svengreb

svengreb commented Jun 5, 2024

As of Terraform version 1.8, the provider-defined function provider::terraform::decode_tfvars can be used in combination with the built-in file function to load all workspace-specific variables from a file, using ${terraform.workspace}.tfvars as the path.

terraform {
  required_providers {
    terraform = {
      source = "terraform.io/builtin/terraform"
    }
  }
}

locals {
  # All workspace specific variables that are loaded based on the active workspace.
  ws  = provider::terraform::decode_tfvars(file("${terraform.workspace}.tfvars"))
}

module "example" {
  #
  variable_a = local.ws.variable_a
  #
}

It works great. Unfortunately, the big disadvantage is that this breaks auto-completion in most editors, as well as "type safety", because the loaded variables cannot be evaluated beforehand. It also requires explicitly declaring the terraform.io/builtin/terraform provider, because this is not a built-in function but a provider-defined one, which is possible as of Terraform version 1.8.

This is basically @TheShahin's setup, but simplified even further, which also allows using HCL instead of JSON.
