Incorrect behavior of terraform init when TF_WORKSPACE is set #26127
Comments
Hi @ALutchko, I'm sorry you've experienced this unexpected behavior! Perhaps there's an opportunity for a clearer error message, as you said. I'm having trouble following the sequence of commands you are running when you get this message. I can see that …
Hi @mildwonkey, the sequence: one should run … Thank you.
I just tried the following steps with both terraform v0.12.29 and v0.13 (deleting the workspaces between each run)
Terraform created and selected the workspace without me having to do it manually. Is it possible that you had a different issue? Perhaps your credentials or backend configuration weren't working correctly, and that's why you saw the S3 error? I do believe that you had an issue, but I don't think it's with Terraform's workspace mechanism specifically.
The error was related to DynamoDB, so maybe if you have a backend without that feature there won't be an error. On the other hand, the folks from the AWS provider repo sent me here; please see the link in the references.
I suspect that there are two problems going on here that aren't actually related, just coincidentally triggered by the same commands - let's see what we can figure out. The issue you've linked refers to a different workspace select error than the one you have in this issue, and that's why the AWS provider team pointed you here. Now that we've confirmed that the odd init/workspace behavior is fixed in v0.13, we can see whether the DynamoDB error is related to the workspace or separate. The first step is to confirm that the credentials you are using have the necessary permissions. Do those same credentials work in other workspaces, or do you have this problem with every configuration using these creds? Can you double-check that your permissions match what's required by the S3 backend?
I used an admin role to run that, and it was not related to the workspace name. Also, …
This reproduces on … The problem remains if we have …
I also observe this behavior when using the …
I get this issue with terraform …
I am getting the same issue using Terraform 1.0.0 with S3 as a backend.
I'm getting the same issue using Terraform 1.0.5 with S3 as a backend.
Same issue with Terraform 1.0.4 and a GCS backend. What are the next steps to hopefully get this fixed?
Any workarounds found? I am running Terraform in automation through a Jenkins pipeline, and it fails at the init stage asking for a workspace. If I try to either set TF_WORKSPACE or create/manually select a workspace, it fails asking for init. I am chasing my tail without any sign of results...
You are correct, I deleted the comment to avoid misunderstandings. If you wish, delete that reference as well; thanks for pointing it out!
The scenario that bothers me: I'm using my pipelines to create a new workspace for each environment (prod, test, dev, etc.). I end up in a catch-22 where, if I specify the workspace name before the init, the init won't run because it can't find the workspace,
but I also cannot create a new workspace before the init is run.
I therefore do not specify the workspace before the init and allow the init to run under the default workspace. Then, when it gets to the build stage, I specify the workspace with … Using Terraform 1.1.4.
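The workaround described above can be sketched as a small POSIX shell helper (the function name is hypothetical, and the select-or-create fallback is an assumed convenience, not taken verbatim from the pipeline above): run init under the default workspace first, then switch to the target workspace.

```shell
# Hypothetical helper sketching the workaround above: run `terraform init`
# under the default workspace, then select the target workspace, creating
# it if it does not exist yet.
tf_init_then_select() {
  ws="$1"
  unset TF_WORKSPACE                      # keep init on the "default" workspace
  terraform init -input=false || return 1
  # Select the workspace; fall back to creating it on first run.
  terraform workspace select "$ws" || terraform workspace new "$ws"
}
```

A build stage could then call `tf_init_then_select prod` before running `terraform plan` or `terraform apply`.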
I faced the same issue. It does feel like a bug; I don't think it's reasonable to require workarounds for something that worked fine in older versions.
This is how I've worked around the issue. I use S3 as my backend and have it configured as an "empty" backend.
In my CI/CD workflows I do these steps:
Here's the generic GitHub Actions code. I use a lot of variables to avoid repeating myself across the multiple …
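The Actions code itself is elided above, but the "empty" backend half of this workaround can be sketched as a partial backend configuration (a sketch under assumed names, not the commenter's actual files):

```hcl
# Partial ("empty") S3 backend: the block declares only the backend type;
# bucket, key, region, etc. are supplied at `terraform init` time.
terraform {
  backend "s3" {}
}
```

The CI steps then inject the real settings, for example `terraform init -backend-config="bucket=my-state-bucket" -backend-config="key=envs/dev/terraform.tfstate" -backend-config="region=us-east-1"` (the bucket and key shown here are hypothetical).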
Terraform CLI and Terraform AWS Provider Version
Terraform CLI: 0.12.29
AWS provider: 3.3
Affected Resource(s)
backend initialization
Terraform Configuration Files
Debug Output
on pastebin
Expected Behavior
new workspace created
Actual Behavior
It fails with a (quite misleading) error:
If I turn on TF_LOG=DEBUG I see 400 Bad Request; details are at the pastebin link above.
Steps to Reproduce
Run `terraform workspace new test`
Important Factoids
backend is not on the same account as the target environment
I use the TF_WORKSPACE variable, and if I just run
terraform init
it fails because the workspace does not exist yet, and the value cannot be provided interactively because the process runs in a pipeline.
References
hashicorp/terraform-provider-aws#14896
I found the reason, but it still looks like misbehavior, or at least a proper error message is needed: `terraform workspace` (whatsoever) should run only AFTER `terraform init`. If you have `TF_WORKSPACE` set up, you may get an error during `tf init` saying that the workspace does not exist yet, so you may be tempted to run `tf workspace new` before `tf init`. Don't do it; just set up `TF_WORKSPACE` only after `tf init`.
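The ordering rule above can be sketched as a pair of shell snippets (the workspace name and helper name are hypothetical; this assumes a backend that supports workspaces):

```shell
# Wrong order -- with TF_WORKSPACE exported, `terraform init` looks for a
# workspace the backend does not know about yet, and `terraform workspace new`
# itself complains that init has not been run:
#
#   export TF_WORKSPACE=test
#   terraform workspace new test   # fails: backend not initialized
#   terraform init                 # fails: workspace "test" does not exist
#
# Right order -- initialize first, then create the workspace and set the variable:
tf_workspace_after_init() {
  ws="$1"
  unset TF_WORKSPACE
  terraform init -input=false || return 1
  terraform workspace new "$ws" && export TF_WORKSPACE="$ws"
}
```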