From c3b741d4a6b9462667bc2e10001513709dac4b29 Mon Sep 17 00:00:00 2001
From: The Magician
Date: Wed, 8 Jan 2020 11:55:58 -0800
Subject: [PATCH] set master_authorized_networks_config.enabled=true ... (#5343)

* Use more markdown for Bug

* Consistently use sentences for each bullet

* Rewrite bug reproduction block

* Allow domain mapping to succeed if DNS is pending

Signed-off-by: Modular Magician

* Updated google_folder.html (#4149)

* Updated google_folder.html

The page in the first example shows that you should use organization_id with a value of 1234567. In the Import example, it's not clear whether organization_id or folder_id is used. The API call behind this import command accepts only folder_id (this can be checked by setting TF_LOG to trace and viewing the API call).

* Update website/docs/r/google_folder.html.markdown

Co-Authored-By: Dana Hoffman
Co-authored-by: Dana Hoffman

* add google_kms_secret_ciphertext resource, deprecate datasource (#5314)

Signed-off-by: Modular Magician
Co-authored-by: Dana Hoffman

* Allow add/removing Bigtable clusters (#5318)

Signed-off-by: Modular Magician
Co-authored-by: Riley Karson

* Add bootstrapped test networks for service networking tests (#5316)

Signed-off-by: Modular Magician
Co-authored-by: emily

* Update CHANGELOG.md

* fix docs for google_bigquery_default_service_account (#5329)

Signed-off-by: Modular Magician
Co-authored-by: Martin Nowak

* Nil return for absent Bigtable resources (#5331)

Signed-off-by: Modular Magician
Co-authored-by: Brian Hildebrandt

* add lifecycle_config to dataproc_cluster.cluster_config

Signed-off-by: Modular Magician

* set master_authorized_networks_config.enabled=true when there is a master_authorized_networks_config block (#2939)

Merged PR #2939.

Co-authored-by: Josh Soref
Co-authored-by: Chris Stephens
Co-authored-by: Petar Marinkovic <13387474+marinkovicpetar@users.noreply.github.com>
Co-authored-by: Dana Hoffman
Co-authored-by: megan07
Co-authored-by: Riley Karson
Co-authored-by: emily
Co-authored-by: Paddy
Co-authored-by: Martin Nowak
Co-authored-by: Brian Hildebrandt
---
 .changelog/2939.txt                           |   3 +
 CHANGELOG.md                                  |  46 +++++-
 google/resource_bigtable_gc_policy.go         |   2 +-
 google/resource_bigtable_instance.go          | 148 +++++++++++++-----
 google/resource_bigtable_instance_test.go     |  46 +++++-
 google/resource_bigtable_table.go             |   2 +-
 google/resource_dataproc_cluster.go           |   1 -
 google/resource_sql_database_instance_test.go |   1 +
 ...ery_default_service_account.html.markdown} |   1 +
 .../docs/r/bigtable_instance.html.markdown    |  16 +-
 10 files changed, 217 insertions(+), 49 deletions(-)
 create mode 100644 .changelog/2939.txt
 rename website/docs/d/{google_bigquery_default_service_account.html => google_bigquery_default_service_account.html.markdown} (97%)

diff --git a/.changelog/2939.txt b/.changelog/2939.txt
new file mode 100644
index 00000000000..f3edc4698eb
--- /dev/null
+++ b/.changelog/2939.txt
@@ -0,0 +1,3 @@
+```release-note:REPLACEME
+terraform-google-conversion only: set master_authorized_networks_config.enabled to true when there is a master_authorized_networks_config block in Terraform configuration.
+```
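
Editor's note: the release note above describes behavior in the generated terraform-google-conversion code rather than in the provider itself. Below is a minimal sketch of the intended behavior, assuming the usual expander pattern and the `google.golang.org/api/container/v1` types; the function name and surrounding code are illustrative assumptions, not the merged implementation.

```go
package conversion

import container "google.golang.org/api/container/v1"

// Hypothetical expander illustrating the change described in the release
// note: the mere presence of a master_authorized_networks_config block in
// configuration now implies Enabled: true, even when the block is empty.
func expandMasterAuthorizedNetworksConfig(configured []interface{}) *container.MasterAuthorizedNetworksConfig {
	if len(configured) == 0 {
		// No block in configuration: leave the API field unset.
		return nil
	}

	// A block exists, so the feature is enabled regardless of its contents.
	result := &container.MasterAuthorizedNetworksConfig{Enabled: true}

	raw, ok := configured[0].(map[string]interface{})
	if !ok || raw == nil {
		// An empty block still means "enabled, with no allowed networks".
		return result
	}

	// Copy any cidr_blocks entries into the API object.
	if blocks, ok := raw["cidr_blocks"].([]interface{}); ok {
		for _, b := range blocks {
			block := b.(map[string]interface{})
			result.CidrBlocks = append(result.CidrBlocks, &container.CidrBlock{
				CidrBlock:   block["cidr_block"].(string),
				DisplayName: block["display_name"].(string),
			})
		}
	}
	return result
}
```

In HCL terms, an empty `master_authorized_networks_config {}` block now converts to an enabled config instead of silently dropping the setting.
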
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9b3c4a32114..d49dadb10a9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,48 @@
-## 3.3.1 (Unreleased)
+## 3.4.1 (Unreleased)
+## 3.4.0 (January 07, 2020)
+
+DEPRECATIONS:
+* kms: deprecated `data.google_kms_secret_ciphertext` as there was no way to make it idempotent. Instead, use the `google_kms_secret_ciphertext` resource. ([#5314](https://github.com/terraform-providers/terraform-provider-google/pull/5314))
+
+BREAKING CHANGES:
+* cloudrun: Changed `google_cloud_run_domain_mapping` to correctly match the Cloud Run API's expected format for `spec.route_name`, `{serviceName}`, instead of the invalid `projects/{project}/global/services/{serviceName}` ([#5264](https://github.com/terraform-providers/terraform-provider-google/pull/5264))
+* compute: Added back ConflictsWith restrictions for ExactlyOneOf restrictions that were removed in v3.3.0 for `google_compute_firewall`, `google_compute_health_check`, and `google_compute_region_health_check`. This effectively changes an API-side failure that was only accessible in v3.3.0 to a plan-time one. ([#5220](https://github.com/terraform-providers/terraform-provider-google/pull/5220))
+* logging: Changed `google_logging_metric.metric_descriptors.labels` from a list to a set ([#5258](https://github.com/terraform-providers/terraform-provider-google/pull/5258))
+* resourcemanager: Added back ConflictsWith restrictions for ExactlyOneOf restrictions that were removed in v3.3.0 for `google_organization_policy`, `google_folder_organization_policy`, and `google_project_organization_policy`. This effectively changes an API-side failure that was only accessible in v3.3.0 to a plan-time one. ([#5220](https://github.com/terraform-providers/terraform-provider-google/pull/5220))
+
+FEATURES:
+* **New Data Source:** `google_sql_ca_certs` ([#5306](https://github.com/terraform-providers/terraform-provider-google/pull/5306))
+* **New Resource:** `google_identity_platform_default_supported_idp_config` ([#5199](https://github.com/terraform-providers/terraform-provider-google/pull/5199))
+* **New Resource:** `google_identity_platform_inbound_saml_config` ([#5199](https://github.com/terraform-providers/terraform-provider-google/pull/5199))
+* **New Resource:** `google_identity_platform_oauth_idp_config` ([#5199](https://github.com/terraform-providers/terraform-provider-google/pull/5199))
+* **New Resource:** `google_identity_platform_tenant_default_supported_idp_config` ([#5199](https://github.com/terraform-providers/terraform-provider-google/pull/5199))
+* **New Resource:** `google_identity_platform_tenant_inbound_saml_config` ([#5199](https://github.com/terraform-providers/terraform-provider-google/pull/5199))
+* **New Resource:** `google_identity_platform_tenant_oauth_idp_config` ([#5199](https://github.com/terraform-providers/terraform-provider-google/pull/5199))
+* **New Resource:** `google_identity_platform_tenant` ([#5199](https://github.com/terraform-providers/terraform-provider-google/pull/5199))
+* **New Resource:** `google_kms_crypto_key_iam_policy` ([#5247](https://github.com/terraform-providers/terraform-provider-google/pull/5247))
+* **New Resource:** `google_kms_secret_ciphertext` ([#5314](https://github.com/terraform-providers/terraform-provider-google/pull/5314))
+
+IMPROVEMENTS:
+* composer: Increased default timeouts for `google_composer_environment` ([#5223](https://github.com/terraform-providers/terraform-provider-google/pull/5223))
+* compute: Added graceful termination to `container_cluster` create calls so that partially created clusters will resume the original operation if the Terraform process is killed mid-create. ([#5217](https://github.com/terraform-providers/terraform-provider-google/pull/5217))
+* compute: Fixed `google_compute_disk_resource_policy_attachment` parsing of region from zone to allow for provider-level zone and to make the error message more accurate ([#5257](https://github.com/terraform-providers/terraform-provider-google/pull/5257))
+* provider: Reduced the default `send_after`, which controls the time interval after which a batched request sends. ([#5268](https://github.com/terraform-providers/terraform-provider-google/pull/5268))
+
+BUG FIXES:
+* all: fixed issue where many fields that were removed in 3.0.0 would show a diff when they were removed from config ([#5313](https://github.com/terraform-providers/terraform-provider-google/pull/5313))
+* bigquery: fixed `bigquery_table.encryption_configuration` to correctly recreate the table when modified ([#5321](https://github.com/terraform-providers/terraform-provider-google/pull/5321))
+* cloudrun: Changed `google_cloud_run_domain_mapping` to correctly match the Cloud Run API's expected format for `spec.route_name`, `{serviceName}`, instead of the invalid `projects/{project}/global/services/{serviceName}` ([#5264](https://github.com/terraform-providers/terraform-provider-google/pull/5264))
+* cloudrun: Changed `cloud_run_domain_mapping` to poll for success or failure and throw an appropriate error when ready status returns as false. ([#5267](https://github.com/terraform-providers/terraform-provider-google/pull/5267))
+* cloudrun: Fixed `google_cloudrun_service` to allow updates instead of force-recreation for changes in the `spec` `env` and `command` fields ([#5269](https://github.com/terraform-providers/terraform-provider-google/pull/5269))
+* cloudrun: Removed unsupported update for `google_cloud_run_domain_mapping` to allow force-recreation. ([#5253](https://github.com/terraform-providers/terraform-provider-google/pull/5253))
+* cloudrun: Stopped returning an error when a `cloud_run_domain_mapping` was waiting on DNS verification. ([#5315](https://github.com/terraform-providers/terraform-provider-google/pull/5315))
+* compute: Fixed `google_compute_backend_service` to allow updating `cdn_policy.cache_key_policy.*` fields to false or empty. ([#5276](https://github.com/terraform-providers/terraform-provider-google/pull/5276))
+* compute: Fixed behaviour where `google_compute_subnetwork` did not record a value for `name` when `self_link` was specified. ([#5288](https://github.com/terraform-providers/terraform-provider-google/pull/5288))
+* container: fixed issue where an empty variable in `tags` would cause a crash ([#5226](https://github.com/terraform-providers/terraform-provider-google/pull/5226))
+* endpoints: Added operation wait for `google_endpoints_service` to fix 403 "Service not found" errors during initial creation ([#5259](https://github.com/terraform-providers/terraform-provider-google/pull/5259))
+* logging: Made `google_logging_metric.metric_descriptors.labels` a set to prevent diffs caused by ordering ([#5258](https://github.com/terraform-providers/terraform-provider-google/pull/5258))
+* resourcemanager: added retries for `data.google_organization` ([#5246](https://github.com/terraform-providers/terraform-provider-google/pull/5246))
+
 ## 3.3.0 (December 17, 2019)
 
 FEATURES:
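
The three Bigtable file diffs that follow implement the "Nil return for absent Bigtable resources" entry above: when the read call fails because the object is gone, the resource is removed from state and the read returns nil instead of an error. A condensed, self-contained sketch of the pattern (the lookup helper is hypothetical; the SDK import matches the terraform-plugin-sdk already used elsewhere in this patch):

```go
package example

import (
	"errors"
	"log"

	"github.com/hashicorp/terraform-plugin-sdk/helper/schema"
)

// lookupRemoteObject stands in for the real admin API read call.
func lookupRemoteObject(id string) (interface{}, error) {
	return nil, errors.New("not found")
}

// resourceExampleRead sketches the pattern: on a failed lookup, clear the ID
// and return nil so Terraform treats the object as deleted and plans a
// recreate, instead of aborting the whole refresh with an error.
func resourceExampleRead(d *schema.ResourceData, meta interface{}) error {
	obj, err := lookupRemoteObject(d.Id())
	if err != nil {
		log.Printf("[WARN] Removing %s because it's gone", d.Id())
		d.SetId("")
		return nil // not an error: the object simply no longer exists
	}
	_ = obj // the real read functions write obj's fields into state here
	return nil
}
```

Note the trade-off the patch accepts: any read error, not only a not-found, now removes the resource from state.
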
diff --git a/google/resource_bigtable_gc_policy.go b/google/resource_bigtable_gc_policy.go
index d77f9ea1cac..aa40d98ccfc 100644
--- a/google/resource_bigtable_gc_policy.go
+++ b/google/resource_bigtable_gc_policy.go
@@ -151,7 +151,7 @@ func resourceBigtableGCPolicyRead(d *schema.ResourceData, meta interface{}) erro
 	if err != nil {
 		log.Printf("[WARN] Removing %s because it's gone", name)
 		d.SetId("")
-		return fmt.Errorf("Error retrieving table. Could not find %s in %s. %s", name, instanceName, err)
+		return nil
 	}
 
 	for _, fi := range ti.FamilyInfos {
diff --git a/google/resource_bigtable_instance.go b/google/resource_bigtable_instance.go
index 5b22f30c832..e6212ffbbf0 100644
--- a/google/resource_bigtable_instance.go
+++ b/google/resource_bigtable_instance.go
@@ -44,12 +44,10 @@ func resourceBigtableInstance() *schema.Resource {
 			"cluster_id": {
 				Type:     schema.TypeString,
 				Required: true,
-				ForceNew: true,
 			},
 			"zone": {
 				Type:     schema.TypeString,
 				Required: true,
-				ForceNew: true,
 			},
 			"num_nodes": {
 				Type:     schema.TypeInt,
@@ -60,7 +58,6 @@
 				Type:     schema.TypeString,
 				Optional: true,
 				Default:  "SSD",
-				ForceNew: true,
 				ValidateFunc: validation.StringInSlice([]string{"SSD", "HDD"}, false),
 			},
 		},
@@ -162,7 +159,7 @@ func resourceBigtableInstanceRead(d *schema.ResourceData, meta interface{}) erro
 	if err != nil {
 		log.Printf("[WARN] Removing %s because it's gone", instanceName)
 		d.SetId("")
-		return fmt.Errorf("Error retrieving instance. Could not find %s. %s", instanceName, err)
+		return nil
 	}
 
 	d.Set("project", project)
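
The update hunk below replaces the old per-cluster `UpdateCluster` loop with the Go client's declarative helper. A standalone sketch of that flow, assuming the `cloud.google.com/go/bigtable` admin API as used in this patch (project, instance, and cluster names are placeholders):

```go
package example

import (
	"context"

	"cloud.google.com/go/bigtable"
)

// syncInstance describes the full desired state (instance plus every
// cluster) and lets UpdateInstanceAndSyncClusters create, delete, and
// resize clusters to match it.
func syncInstance(ctx context.Context) error {
	c, err := bigtable.NewInstanceAdminClient(ctx, "my-project") // placeholder project
	if err != nil {
		return err
	}
	defer c.Close()

	conf := &bigtable.InstanceWithClustersConfig{
		InstanceID:   "my-instance", // placeholder name
		DisplayName:  "my-instance",
		InstanceType: bigtable.PRODUCTION,
		Clusters: []bigtable.ClusterConfig{
			{ClusterID: "my-instance-a", Zone: "us-central1-a", NumNodes: 3, StorageType: bigtable.SSD},
			{ClusterID: "my-instance-b", Zone: "us-central1-b", NumNodes: 3, StorageType: bigtable.SSD},
		},
	}

	// Diffs conf against the live instance and applies whatever
	// create/update/delete calls are needed.
	_, err = bigtable.UpdateInstanceAndSyncClusters(ctx, c, conf)
	return err
}
```

This sync-style update is what makes adding and removing clusters possible at all; the old loop could only resize clusters that already existed.
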
@@ -212,27 +209,28 @@ func resourceBigtableInstanceUpdate(d *schema.ResourceData, meta interface{}) er
 	}
 	defer c.Close()
 
-	clusters, err := c.Clusters(ctx, d.Get("name").(string))
-	if err != nil {
-		return fmt.Errorf("Error retrieving clusters for instance %s", err.Error())
+	conf := &bigtable.InstanceWithClustersConfig{
+		InstanceID: d.Get("name").(string),
 	}
-	clusterMap := make(map[string]*bigtable.ClusterInfo, len(clusters))
-	for _, cluster := range clusters {
-		clusterMap[cluster.Name] = cluster
-	}
-
-	for _, cluster := range d.Get("cluster").([]interface{}) {
-		config := cluster.(map[string]interface{})
-		cluster_id := config["cluster_id"].(string)
-		if cluster, ok := clusterMap[cluster_id]; ok {
-			if cluster.ServeNodes != config["num_nodes"].(int) {
-				err = c.UpdateCluster(ctx, d.Get("name").(string), cluster.Name, int32(config["num_nodes"].(int)))
-				if err != nil {
-					return fmt.Errorf("Error updating cluster %s for instance %s", cluster.Name, d.Get("name").(string))
-				}
-			}
-		}
+
+	displayName, ok := d.GetOk("display_name")
+	if !ok {
+		displayName = conf.InstanceID
+	}
+	conf.DisplayName = displayName.(string)
+
+	switch d.Get("instance_type").(string) {
+	case "DEVELOPMENT":
+		conf.InstanceType = bigtable.DEVELOPMENT
+	case "PRODUCTION":
+		conf.InstanceType = bigtable.PRODUCTION
+	}
+
+	conf.Clusters = expandBigtableClusters(d.Get("cluster").([]interface{}), conf.InstanceID)
+
+	_, err = bigtable.UpdateInstanceAndSyncClusters(ctx, c, conf)
+	if err != nil {
+		return fmt.Errorf("Error updating instance. %s", err)
 	}
 
 	return resourceBigtableInstanceRead(d, meta)
@@ -305,6 +303,7 @@ func expandBigtableClusters(clusters []interface{}, instanceID string) []bigtabl
 	return results
 }
 
+// resourceBigtableInstanceValidateDevelopment validates restrictions specific to DEVELOPMENT clusters
 func resourceBigtableInstanceValidateDevelopment(diff *schema.ResourceDiff, meta interface{}) error {
 	if diff.Get("instance_type").(string) != "DEVELOPMENT" {
 		return nil
@@ -318,46 +317,115 @@ func resourceBigtableInstanceValidateDevelopment(diff *schema.ResourceDiff, meta
 	return nil
 }
 
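
Before the function itself, a self-contained toy version of the reordering strategy may help. It reproduces the `[a, b, c, e] -> [c, a, d]` example from the comments in the function added below, simplified to plain strings (an illustration, not the provider code):

```go
package main

import "fmt"

// reorder keeps each new ID at its old index when it already existed, then
// fills the remaining slots with the genuinely new IDs.
func reorder(oldIDs, newIDs []string) []string {
	remaining := make(map[string]bool, len(newIDs))
	for _, id := range newIDs {
		remaining[id] = true
	}

	// Pass 1: pin IDs that existed before to their old positions.
	ordered := make([]string, len(newIDs))
	for i := range ordered {
		if i < len(oldIDs) && remaining[oldIDs[i]] {
			ordered[i] = oldIDs[i]
			delete(remaining, oldIDs[i])
		}
	}

	// Pass 2: drop the leftover IDs into the empty slots.
	for id := range remaining {
		for i, e := range ordered {
			if e == "" {
				ordered[i] = id
				break
			}
		}
	}
	return ordered
}

func main() {
	fmt.Println(reorder([]string{"a", "b", "c", "e"}, []string{"c", "a", "d"}))
	// Prints: [a d c] — "a" and "c" keep their old slots, "d" fills the gap.
}
```

Minimizing reordering like this keeps plan diffs small and lets the ForceNew checks later in the function compare like with like at each index.
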
+// resourceBigtableInstanceClusterReorderTypeList causes the cluster block to
+// act like a TypeSet while it's a TypeList underneath. It preserves state
+// ordering on updates, and causes the resource to get recreated if it would
+// attempt to perform an impossible change.
+// This doesn't use the standard unordered list utility (https://github.com/GoogleCloudPlatform/magic-modules/blob/master/templates/terraform/unordered_list_customize_diff.erb)
+// because some fields can't be modified using the API and we recreate the instance
+// when they're changed.
 func resourceBigtableInstanceClusterReorderTypeList(diff *schema.ResourceDiff, meta interface{}) error {
-	old_count, new_count := diff.GetChange("cluster.#")
+	oldCount, newCount := diff.GetChange("cluster.#")
 
 	// simulate Required:true, MinItems:1, MaxItems:4 for "cluster"
-	if new_count.(int) < 1 {
+	if newCount.(int) < 1 {
 		return fmt.Errorf("config is invalid: Too few cluster blocks: Should have at least 1 \"cluster\" block")
 	}
-	if new_count.(int) > 4 {
+	if newCount.(int) > 4 {
 		return fmt.Errorf("config is invalid: Too many cluster blocks: No more than 4 \"cluster\" blocks are allowed")
 	}
 
-	if old_count.(int) != new_count.(int) {
+	// exit early if we're in create (name's old value is nil)
+	n, _ := diff.GetChange("name")
+	if n == nil || n == "" {
 		return nil
 	}
 
-	var old_ids []string
-	clusters := make(map[string]interface{}, new_count.(int))
+	oldIds := []string{}
+	clusters := make(map[string]interface{}, newCount.(int))
 
-	for i := 0; i < new_count.(int); i++ {
-		old_id, new_id := diff.GetChange(fmt.Sprintf("cluster.%d.cluster_id", i))
-		if old_id != nil && old_id != "" {
-			old_ids = append(old_ids, old_id.(string))
+	for i := 0; i < oldCount.(int); i++ {
+		oldId, _ := diff.GetChange(fmt.Sprintf("cluster.%d.cluster_id", i))
+		if oldId != nil && oldId != "" {
+			oldIds = append(oldIds, oldId.(string))
 		}
+	}
+	log.Printf("[DEBUG] Saw old ids: %#v", oldIds)
+
+	for i := 0; i < newCount.(int); i++ {
+		_, newId := diff.GetChange(fmt.Sprintf("cluster.%d.cluster_id", i))
 		_, c := diff.GetChange(fmt.Sprintf("cluster.%d", i))
-		clusters[new_id.(string)] = c
+		clusters[newId.(string)] = c
+	}
+
+	// create a list of clusters using the old order when possible to minimise
+	// diffs
+	// initially, add matching clusters to their index by id (nil otherwise)
+	// then, fill in nils with new clusters.
+	// [a, b, c, e] -> [c, a, d] becomes [a, nil, c] followed by [a, d, c]
+	var orderedClusters []interface{}
+	for i := 0; i < newCount.(int); i++ {
+		// when i is out of range of old, all values are nil
+		if i >= len(oldIds) {
+			orderedClusters = append(orderedClusters, nil)
+			continue
+		}
+
+		oldId := oldIds[i]
+		if c, ok := clusters[oldId]; ok {
+			log.Printf("[DEBUG] Matched: %#v", oldId)
+			orderedClusters = append(orderedClusters, c)
+			delete(clusters, oldId)
+		} else {
+			orderedClusters = append(orderedClusters, nil)
+		}
 	}
 
-	// reorder clusters according to the old cluster order
-	var old_cluster_order []interface{}
-	for _, id := range old_ids {
-		if c, ok := clusters[id]; ok {
-			old_cluster_order = append(old_cluster_order, c)
-		}
-	}
+	log.Printf("[DEBUG] Remaining clusters: %#v", clusters)
+	for _, elem := range clusters {
+		for i, e := range orderedClusters {
+			if e == nil {
+				orderedClusters[i] = elem
+				break // fill each empty slot with one leftover cluster only
+			}
+		}
+	}
 
-	err := diff.SetNew("cluster", old_cluster_order)
+	err := diff.SetNew("cluster", orderedClusters)
 	if err != nil {
 		return fmt.Errorf("Error setting cluster diff: %s", err)
 	}
 
+	// Clusters can't have their zone / storage_type updated, ForceNew if it's
+	// changed. This will show a diff with the old state on the left side and
+	// the unmodified new state on the right and the ForceNew attributed to the
+	// _old state index_ even if the diff appears to have moved.
+	// This depends on the clusters having been reordered already by the prior
+	// SetNew call.
+	// We've implemented it here because it doesn't return an error in the
+	// client and silently fails.
+	for i := 0; i < newCount.(int); i++ {
+		oldId, newId := diff.GetChange(fmt.Sprintf("cluster.%d.cluster_id", i))
+		if oldId != newId {
+			continue
+		}
+
+		oZone, nZone := diff.GetChange(fmt.Sprintf("cluster.%d.zone", i))
+		if oZone != nZone {
+			err := diff.ForceNew(fmt.Sprintf("cluster.%d.zone", i))
+			if err != nil {
+				return fmt.Errorf("Error setting cluster diff: %s", err)
+			}
+		}
+
+		oST, nST := diff.GetChange(fmt.Sprintf("cluster.%d.storage_type", i))
+		if oST != nST {
+			err := diff.ForceNew(fmt.Sprintf("cluster.%d.storage_type", i))
+			if err != nil {
+				return fmt.Errorf("Error setting cluster diff: %s", err)
+			}
+		}
+	}
+
 	return nil
 }
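
Not visible in this diff: the two CustomizeDiff helpers only run if they are registered on the resource. A sketch of what that wiring presumably looks like in `resourceBigtableInstance` (the exact composition and ordering in the real file may differ):

```go
// customdiff is the plugin SDK's
// "github.com/hashicorp/terraform-plugin-sdk/helper/customdiff" package;
// both functions referenced here are the ones defined in the hunk above.
func resourceBigtableInstanceSketch() *schema.Resource {
	return &schema.Resource{
		// customdiff.All runs each CustomizeDiffFunc in order at plan time,
		// before Terraform decides whether the resource must be recreated.
		CustomizeDiff: customdiff.All(
			resourceBigtableInstanceValidateDevelopment,
			resourceBigtableInstanceClusterReorderTypeList,
		),
		// Create/Read/Update/Delete functions and Schema omitted for brevity.
	}
}
```
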
diff --git a/google/resource_bigtable_instance_test.go b/google/resource_bigtable_instance_test.go
index e03f962c279..efef6cfbfd1 100644
--- a/google/resource_bigtable_instance_test.go
+++ b/google/resource_bigtable_instance_test.go
@@ -68,7 +68,23 @@ func TestAccBigtableInstance_cluster(t *testing.T) {
 				ImportStateVerify: true,
 			},
 			{
-				Config: testAccBigtableInstance_cluster_reordered(instanceName, 5),
+				Config: testAccBigtableInstance_clusterReordered(instanceName, 5),
+			},
+			{
+				ResourceName:      "google_bigtable_instance.instance",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccBigtableInstance_clusterModified(instanceName, 5),
+			},
+			{
+				ResourceName:      "google_bigtable_instance.instance",
+				ImportState:       true,
+				ImportStateVerify: true,
+			},
+			{
+				Config: testAccBigtableInstance_clusterReordered(instanceName, 5),
 			},
 			{
 				ResourceName:      "google_bigtable_instance.instance",
 				ImportState:       true,
 				ImportStateVerify: true,
@@ -225,7 +241,7 @@ resource "google_bigtable_instance" "instance" {
 `, instanceName, instanceName, instanceName, instanceName, instanceName, instanceName)
 }
 
-func testAccBigtableInstance_cluster_reordered(instanceName string, numNodes int) string {
+func testAccBigtableInstance_clusterReordered(instanceName string, numNodes int) string {
 	return fmt.Sprintf(`
 resource "google_bigtable_instance" "instance" {
   name = "%s"
@@ -257,6 +273,32 @@ resource "google_bigtable_instance" "instance" {
 `, instanceName, instanceName, numNodes, instanceName, numNodes, instanceName, numNodes, instanceName, numNodes)
 }
 
+func testAccBigtableInstance_clusterModified(instanceName string, numNodes int) string {
+	return fmt.Sprintf(`
+resource "google_bigtable_instance" "instance" {
+  name = "%s"
+  cluster {
+    cluster_id   = "%s-c"
+    zone         = "us-central1-c"
+    num_nodes    = %d
+    storage_type = "HDD"
+  }
+  cluster {
+    cluster_id   = "%s-a"
+    zone         = "us-central1-a"
+    num_nodes    = %d
+    storage_type = "HDD"
+  }
+  cluster {
+    cluster_id   = "%s-b"
+    zone         = "us-central1-b"
+    num_nodes    = %d
+    storage_type = "HDD"
+  }
+}
+`, instanceName, instanceName, numNodes, instanceName, numNodes, instanceName, numNodes)
+}
+
 func testAccBigtableInstance_development(instanceName string) string {
 	return fmt.Sprintf(`
 resource "google_bigtable_instance" "instance" {
diff --git a/google/resource_bigtable_table.go b/google/resource_bigtable_table.go
index 5a24e522402..4c63a6dbf19 100644
--- a/google/resource_bigtable_table.go
+++ b/google/resource_bigtable_table.go
@@ -142,7 +142,7 @@ func resourceBigtableTableRead(d *schema.ResourceData, meta interface{}) error {
 	if err != nil {
 		log.Printf("[WARN] Removing %s because it's gone", name)
 		d.SetId("")
-		return fmt.Errorf("Error retrieving table. Could not find %s in %s. %s", name, instanceName, err)
+		return nil
 	}
 
 	d.Set("project", project)
diff --git a/google/resource_dataproc_cluster.go b/google/resource_dataproc_cluster.go
index ffb24ea2a68..6b899b24eac 100644
--- a/google/resource_dataproc_cluster.go
+++ b/google/resource_dataproc_cluster.go
@@ -701,7 +701,6 @@ func resourceDataprocClusterCreate(d *schema.ResourceData, meta interface{}) err
 	log.Printf("[INFO] Dataproc cluster %s has been created", cluster.ClusterName)
 
 	return resourceDataprocClusterRead(d, meta)
-
 }
 
 func expandClusterConfig(d *schema.ResourceData, config *Config) (*dataproc.ClusterConfig, error) {
diff --git a/google/resource_sql_database_instance_test.go b/google/resource_sql_database_instance_test.go
index 11c939db0d2..069037b74f2 100644
--- a/google/resource_sql_database_instance_test.go
+++ b/google/resource_sql_database_instance_test.go
@@ -10,6 +10,7 @@ import (
 	"github.com/hashicorp/terraform-plugin-sdk/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/helper/resource"
 	"github.com/hashicorp/terraform-plugin-sdk/terraform"
+
 	sqladmin "google.golang.org/api/sqladmin/v1beta4"
 )
diff --git a/website/docs/d/google_bigquery_default_service_account.html b/website/docs/d/google_bigquery_default_service_account.html.markdown
similarity index 97%
rename from website/docs/d/google_bigquery_default_service_account.html
rename to website/docs/d/google_bigquery_default_service_account.html.markdown
index 4fb7ea51756..c9717c7ef3a 100644
--- a/website/docs/d/google_bigquery_default_service_account.html
+++ b/website/docs/d/google_bigquery_default_service_account.html.markdown
@@ -1,4 +1,5 @@
 ---
+subcategory: "BigQuery"
 layout: "google"
 page_title: "Google: google_bigquery_default_service_account"
 sidebar_current: "docs-google-datasource-bigquery-default-service-account"
diff --git a/website/docs/r/bigtable_instance.html.markdown b/website/docs/r/bigtable_instance.html.markdown
index 083db724b39..47205456ba7 100644
--- a/website/docs/r/bigtable_instance.html.markdown
+++ b/website/docs/r/bigtable_instance.html.markdown
@@ -68,11 +68,21 @@ The `cluster` block supports the following arguments:
 
 * `cluster_id` - (Required) The ID of the Cloud Bigtable cluster.
 
-* `zone` - (Required) The zone to create the Cloud Bigtable cluster in. Each cluster must have a different zone in the same region. Zones that support Bigtable instances are noted on the [Cloud Bigtable locations page](https://cloud.google.com/bigtable/docs/locations).
+* `zone` - (Required) The zone to create the Cloud Bigtable cluster in. Each
+cluster must have a different zone in the same region. Zones that support
+Bigtable instances are noted on the [Cloud Bigtable locations page](https://cloud.google.com/bigtable/docs/locations).
 
-* `num_nodes` - (Optional) The number of nodes in your Cloud Bigtable cluster. Required, with a minimum of `3` for a `PRODUCTION` instance. Must be left unset for a `DEVELOPMENT` instance.
+* `num_nodes` - (Optional) The number of nodes in your Cloud Bigtable cluster.
+Required, with a minimum of `3` for a `PRODUCTION` instance. Must be left unset
+for a `DEVELOPMENT` instance.
 
-* `storage_type` - (Optional) The storage type to use. One of `"SSD"` or `"HDD"`. Defaults to `"SSD"`.
+* `storage_type` - (Optional) The storage type to use. One of `"SSD"` or
+`"HDD"`. Defaults to `"SSD"`.
+
+!> **Warning:** Modifying the `storage_type` or `zone` of an existing cluster (by
+`cluster_id`) will cause Terraform to delete/recreate the entire
+`google_bigtable_instance` resource. If these values are changing, use a new
+`cluster_id`.
 
 ## Attributes Reference