
terraform apply not deploying all the resources specified #5249

Closed

gowrisankar22 opened this issue Dec 21, 2019 · 4 comments

gowrisankar22 commented Dec 21, 2019

Hello Colleagues,

I am trying to deploy a GKE cluster with a custom node pool. On the first terraform apply, only the cluster is created and the run fails; on the second attempt the node pool is created and the apply succeeds. Ideally a single apply should deploy all the resources.

Please find the Terraform configuration below:

provider "google" {
  project     = "xx"
  region      = "xx"
  credentials = <<CREDENTIALS_JSON
{CREDENTIALS_JSON}
CREDENTIALS_JSON
}

resource "google_container_cluster" "gke-cluster" {
  name                      = "gke-cluster"
  network                   = "abc-net"
  subnetwork                = "abc-net-subnet"
  location                  = "europe"
  remove_default_node_pool  = true
  initial_node_count        = "1"
  min_master_version        = "latest"
  logging_service           = "logging.googleapis.com/kubernetes"
  monitoring_service        = "monitoring.googleapis.com/kubernetes"
  maintenance_policy {
    daily_maintenance_window {
      start_time = "03:00"
    }
  }
  private_cluster_config {
    enable_private_endpoint = true
    enable_private_nodes = true
    master_ipv4_cidr_block = "172.16.0.0/28"
  }
  master_authorized_networks_config {}
  # Configuration options for the NetworkPolicy feature.
  network_policy {
    # Whether network policy is enabled on the cluster. Defaults to false.
    # In GKE this also enables the ip masquerade agent
    # https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent
    enabled = true

    # The selected network policy provider. Defaults to PROVIDER_UNSPECIFIED.
    provider = "CALICO"
  }
  master_auth {
    # Setting an empty username and password explicitly disables basic auth
    username = ""
    password = ""

    # Whether client certificate authorization is enabled for this cluster.
    client_certificate_config {
      issue_client_certificate = false
    }
  }
  # The configuration for addons supported by GKE.
  addons_config {
    # The status of the Kubernetes Dashboard add-on, which controls whether
    # the Kubernetes Dashboard is enabled for this cluster. It is enabled by default.
    kubernetes_dashboard {
      disabled = true
    }
    horizontal_pod_autoscaling {
      disabled = true
    }
    http_load_balancing {
      disabled = true
    }
  }
  ip_allocation_policy {
    # Choose the range, but let GCP pick the IPs within the range
    # cluster_ipv4_cidr_block  = ""
    # services_ipv4_cidr_block = ""
    cluster_ipv4_cidr_block  = "10.96.0.0/14"
    services_ipv4_cidr_block = "10.94.0.0/18"
  }
}
resource "google_container_node_pool" "gke-node-pool" {
  name                  = "gke-node-pool"
  location              = "europe"
  cluster               = "gke-cluster"
  node_count            = 3
  node_config {
    image_type   = "COS"
    machine_type = "n1-standard-2"
    metadata = {
      disable-legacy-endpoints = "true"
    }
    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/trace.append"
    ]
  }
  autoscaling {
    min_node_count = "3"
    max_node_count = "10"
  }
  management {
    auto_repair  = "true"
    auto_upgrade = "true"
  }
  timeouts {
    create = "30m"
    update = "30m"
    delete = "30m"
  }
}
gowrisankar22 (Author) commented

@edwardmedia can you help?

emilymye (Contributor) commented Dec 23, 2019

@gowrisankar22 My guess is that, if this is your exact config, you're running into an issue because the node pool has no dependency on the cluster, so Terraform may try to create both at the same time. The first apply fails because the cluster doesn't exist yet, but the second apply works because the cluster was created on the first run. You need to either interpolate the cluster name into the node pool config or add an explicit depends_on.

i.e.

resource "google_container_node_pool" "gke-node-pool" {
  cluster               = google_container_cluster.gke-cluster.name
  ...
}

or

resource "google_container_node_pool" "gke-node-pool" {
  depends_on = [google_container_cluster.gke-cluster]
  ...
}

To confirm, what is the error you are getting initially?
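
Applied to the configuration from the original report, the interpolation approach would look roughly like the following sketch (an assumption based on the resource names in the issue; only the cluster argument changes, everything else stays as posted above):

# Sketch: referencing the cluster resource instead of the literal string
# "gke-cluster" creates an implicit dependency, so Terraform creates the
# cluster before the node pool.
resource "google_container_node_pool" "gke-node-pool" {
  name       = "gke-node-pool"
  location   = "europe"
  cluster    = google_container_cluster.gke-cluster.name
  node_count = 3

  # ... node_config, autoscaling, management and timeouts unchanged ...
}

With the reference in place, a single terraform apply creates the cluster first and then the node pool, without needing an explicit depends_on.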

gowrisankar22 (Author) commented

Hi @emilymye, I didn't get any error, but after adding the depends_on field it worked like a charm.
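
For completeness, the change that presumably made the run succeed would amount to adding the explicit dependency to the node pool from the original config (a sketch; the final working config is not shown in the issue):

resource "google_container_node_pool" "gke-node-pool" {
  # Explicit dependency: Terraform waits for the cluster before creating the pool.
  depends_on = [google_container_cluster.gke-cluster]

  name       = "gke-node-pool"
  location   = "europe"
  cluster    = "gke-cluster"
  node_count = 3

  # ... node_config, autoscaling, management and timeouts unchanged ...
}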

ghost commented Jan 23, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators Jan 23, 2020