
Terraform crashes with nil context when making requests #6877

Closed
longwuyuan opened this issue Jul 29, 2020 · 14 comments

@longwuyuan

longwuyuan commented Jul 29, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

$ terraform -version
Terraform v0.12.29

  • provider.google v3.32.0

Affected Resource(s)

Data source: google_compute_network

Terraform Configuration Files

$ cat *
resource "google_compute_instance" "master" {
  name         = var.master0_name
  machine_type = var.machine_type
  zone         = var.zone

  tags = ["role", "master"]

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }

  metadata = {
    role = "master"
  }
}
                                                                                                                                                                                                                   
resource "google_compute_firewall" "k8s_api_server_port" {
  name    = "k8s-apiserver-port-8443"
  network = "default"

  allow {
    protocol = "https"
    ports    = ["8443"]
  }
}
provider "google" {
  credentials = var.credentials
  project     = var.project
  region      = var.region
  zone        = var.zone
}
variable "credentials" {
  type    = string
  default = "/home/me/.config/gcloud/legacy/...../adc.json"
}

variable "project" {
  type    = string
  default = "project-222222"
}
variable "region" {
  type    = string
  default = "us-east1"
}

variable "zone" {
  type    = string
  default = "us-east1-b"
}

variable "master0_name" {
  type    = string
  default = "master0"
}

variable "machine_type" {
  type    = string
  default = "n1-standard-1"
}
terraform {
  required_version = "~> 0.12.18"
}
data "google_compute_network" "default_vpc" {
  name = "default"
}


Panic Output

https://gist.github.com/longwuyuan/d764ed0f671f55b1239138fc915c0e11

Expected Behavior

A Terraform plan should be produced.

Actual Behavior

Terraform crashes with the error:

Error: rpc error: code = Unavailable desc = transport is closing

Steps to Reproduce

  • Copy the .tf files shown above
  • Edit the values of the "project" and "credentials" variables to match your project and credentials
  • Run terraform init
  • Run terraform plan

Important Factoids

References

@ghost ghost added the bug label Jul 29, 2020
@longwuyuan longwuyuan changed the title Terraform crashes using data source google_compute_network or the default VPC Terraform crashes when default VPC is used as data-source for google_compute_network Jul 29, 2020
@rileykarson
Collaborator

I'm unable to reproduce this using the datasource alone or in your provided config. Do other datasources and resources work? Does it work differently if you specify a different network?

The panic appears to be related to a context that's set to nil. However we don't actually set a context on a per-request basis. If this datasource fails, I'd expect all requests to fail.

@longwuyuan
Author

longwuyuan commented Jul 30, 2020

  • Other resources (the .tf files I have pasted in this issue) work only if I remove that data source
  • If I delete the file vpc.tf (it contains just that data source), then there is no crash
  • I think if I create a resource and then use it as a data source, it will not crash
  • But I want to use the default VPC, not create a new VPC
  • My need is to put a VM in that default VPC, so I am not creating a network
  • What is a "context" here? Is it the description of the default network? That is empty

[screenshot]

  • This was working fine on Ubuntu 20.04 LTS
  • The crash started on MX Linux, which is Debian-based
  • Is it crashing because the default VPC does not have a description?

What other information would be helpful?

@ghost ghost removed the waiting-response label Jul 30, 2020
@longwuyuan
Author

longwuyuan commented Jul 30, 2020

main.tf.txt

Now it crashed on apply:

                                                                                                                                                                                                                   
google_compute_firewall.k8s_api_server_port: Creating...

Error: rpc error: code = Unavailable desc = transport is closing

Error: rpc error: code = Unavailable desc = transport is closing

crash.log

@rileykarson
Collaborator

Thanks! That second log matches my theory that it wasn't an error specific to the datasource itself, but an issue at the provider level. This has nothing to do with the description on the resource; I essentially never include one personally.

What is a "context" here ? Is it description of the default network. It is empty

A context is a Go object that's used to communicate information about an event, often an asynchronous one like a web request. The information passed along most often is a deadline for a request, or whether the request / series of requests has been cancelled by the sender.

The Terraform provider receives a context from Terraform Core (the terraform binary) that it then links to each request it makes. We're failing because the context we're sending to the request is nil. That shouldn't happen, and my current best guess is we're receiving an invalid context from Terraform Core.

In case it's something else, are you running Terraform in an atypical environment? I saw a warning in the logs I didn't recognise and one of the only other issues I found when searching for it is hashicorp/terraform-provider-azurerm#7289. The specific message in the logs is unknown service plugin.GRPCStdio.

@longwuyuan
Author

I am running the terraform binary on my Linux laptop.
Can you hint at what would count as an atypical environment on a laptop?
I deleted the .terraform directory and ran init again, but GRPCStdio still shows up on apply.
Does my Go environment matter?
[screenshot]

@rileykarson
Collaborator

rileykarson commented Jul 30, 2020

Thanks! The linked Azure issue mentioned an atypical environment, so I was wondering if that was the case here. Turns out we also recently got another issue with the same message and error: #6887

Would you mind trying an older provider version to see if it's affected as well?

@rileykarson rileykarson changed the title Terraform crashes when default VPC is used as data-source for google_compute_network Terraform crashes with nil context Jul 30, 2020
@rileykarson rileykarson changed the title Terraform crashes with nil context Terraform crashes with nil context when making requests Jul 30, 2020
@longwuyuan
Author

3.31 also crashed.
3.23 did not crash.
With 3.23 I saw a normal error in my .tf file:

google_compute_firewall.k8s_api_server_port: Creating...
google_compute_instance.master: Creating...
google_compute_instance.master: Still creating... [10s elapsed]
google_compute_instance.master: Still creating... [20s elapsed]
google_compute_instance.master: Creation complete after 21s [id=projects/longproject-224313/zones/us-east1-b/instances/master0]                                                                                   

Error: Error creating Firewall: googleapi: Error 400: Invalid value for field 'resource.allowed[0].IPProtocol': 'https'. Invalid IP protocol specification., invalid                                              

  on main.tf line 31, in resource "google_compute_firewall" "k8s_api_server_port":
  31: resource "google_compute_firewall" "k8s_api_server_port" {
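That 400 error is a genuine mistake in the original config rather than a provider bug: the `protocol` field of a GCP firewall `allow` block takes an IP protocol (such as tcp, udp, or icmp), not an application-layer protocol like "https". A corrected version of the firewall resource from the config above might look like this:

```hcl
resource "google_compute_firewall" "k8s_api_server_port" {
  name    = "k8s-apiserver-port-8443"
  network = "default"

  allow {
    protocol = "tcp" # IP protocol, not "https"; HTTPS traffic is TCP on the chosen port
    ports    = ["8443"]
  }
}
```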

@longwuyuan
Author

longwuyuan commented Jul 30, 2020

[screenshot]

@rileykarson
Collaborator

Thanks for all the context! We've identified a potential fix, although nobody on my team has been able to reproduce the issue. We'll merge it (GoogleCloudPlatform/magic-modules#3800) and push it through to the next release. Even if it isn't ultimately the fix, it's harmless to add either way.

That should release as part of 3.34.0. In the meantime, versions 3.30.0 and earlier should work. Once 3.34.0 has been released (likely August 10th), would you mind trying it out and reporting back here whether the issue is resolved?
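Until the fixed release ships, the provider can be pinned to an unaffected version. With Terraform 0.12, one way (a sketch based on the versions discussed above) is a constraint in the existing provider block:

```hcl
provider "google" {
  # Pin to 3.30.x until the fix lands in 3.34.0
  version     = "~> 3.30.0"
  credentials = var.credentials
  project     = var.project
  region      = var.region
  zone        = var.zone
}
```

After changing the constraint, terraform init must be re-run so the pinned provider version is downloaded.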

@longwuyuan
Author

Thanks.
I will test 3.34 and report back.
I just need to change the version string, so it's not complicated at all.

@ghost ghost removed the waiting-response label Jul 31, 2020
@kaisellgren

kaisellgren commented Aug 1, 2020

I can confirm that version 3.30.0 worked for me, so there's something wrong with 3.32.0. I reported this issue previously in #6887.

@SoundaryaChinnu144

SoundaryaChinnu144 commented Aug 4, 2020

I got the same panic with provider version 3.32.0; after downgrading to 3.30.0 it looks fine.
Thanks for the update.

@longwuyuan
Author

@rileykarson 3.34 did not crash. I will close this issue.

@ghost

ghost commented Sep 15, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Sep 15, 2020