[General Usage]: How to stop replacing schema while migrating #3015

Open
cyclonstep opened this issue Aug 26, 2024 · 6 comments · Fixed by #2977

@cyclonstep

cyclonstep commented Aug 26, 2024

Terraform CLI Version

1.9.5

Terraform Provider Version

0.94.1

Terraform Configuration

resource "snowflake_schema" "test" {
  database                    = snowflake_database.test_db.name
  name                        = "TEST"
  data_retention_time_in_days = 1
  depends_on = [
    snowflake_database.test_db
  ]

  lifecycle {
    prevent_destroy = true
  }
}

Category

category:resource

Object type(s)

data_source:schemas

Expected Behavior

The existing schema should not be replaced when applying the configuration with the upgraded provider.

Actual Behavior

The schema is replaced and all of its data is lost.

Steps to Reproduce

  1. Upgrade the provider version
  2. Run terraform plan / apply
  3. The plan shows that the schema will be replaced.

How much impact is this issue causing?

High

Logs

No response

Additional Information

Hello, thank you very much for all the hard work maintaining this Terraform provider.

Over the past few days I've been trying to migrate our Terraform configuration (especially grant roles) from the old 0.71.0 to the latest 0.94.1.

But when I tried to apply the code, it looks like all of the schemas need to be replaced. I read in the migration guide that there were some breaking changes and refactoring in the schema resource itself, but is there anything we can do to prevent the replacement of the object while applying? Maybe I'm missing something, but I cannot find any way to mitigate this situation.

What we have tested:

  • following the resource migration guide -> replacement still required
  • using terraform import -> replacement still required (a sketch of the import approach follows this list)
  • prevent_destroy in the lifecycle block -> error, replacement still required
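
For reference, a rough sketch of the import approach using a Terraform 1.5+ import block; the ID format here is an assumption based on the pipe-delimited IDs shown in the plans later in this thread, so verify it against the provider's import documentation for your version.

import {
  to = snowflake_schema.test
  # Assumed ID format "<database>|<schema>"; adjust to match the provider version in use.
  id = "TEST_DB|TEST"
}

Note that import only adopts the existing object into state; if the configuration still differs on an attribute that forces replacement (like is_transient here), the next plan will still propose replacing the schema.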

I've been testing out the behaviour, and if we apply this, it will destroy all of the data in the schema. We understand that Snowflake's UNDROP and CLONE could mitigate this situation, but in our current state we cannot afford any downtime or complicated side effects, because our Snowflake environment is used by a myriad of processes in our business.

As always, thank you very much.

@cyclonstep cyclonstep added the general-usage General help/usage questions label Aug 26, 2024
@sfc-gh-jcieslak
Collaborator

sfc-gh-jcieslak commented Aug 26, 2024

Hey @cyclonstep 👋
I tried to migrate the following configuration from v0.71.0 to v0.94.1

resource "snowflake_database" "test_db" {
  name = "test_db_schema_upgrade"
}

resource "snowflake_schema" "test" {
  database                    = snowflake_database.test_db.name
  name                        = "TEST"
}

and got this plan (a little trimmed, because it's big):

Terraform will perform the following actions:

  # snowflake_database.test_db will be updated in-place
  ~ resource "snowflake_database" "test_db" {
      ~ data_retention_time_in_days                   = 1 -> (known after apply)
        id                                            = "test_db_schema_upgrade"
        name                                          = "test_db_schema_upgrade"
        # (17 unchanged attributes hidden)
    }

  # snowflake_schema.test must be replaced
-/+ resource "snowflake_schema" "test" {
      + catalog                                       = (known after apply)
      ~ data_retention_time_in_days                   = 1 -> (known after apply)
      + default_ddl_collation                         = (known after apply)
      ~ describe_output                               = [] -> (known after apply)
      ~ enable_console_output                         = false -> (known after apply)
      + external_volume                               = (known after apply)
      ~ id                                            = "test_db_schema_upgrade|TEST" -> (known after apply)
      ~ is_transient                                  = "false" -> "default" # forces replacement
      ~ log_level                                     = "OFF" -> (known after apply)
      ~ max_data_extension_time_in_days               = 14 -> (known after apply)
        name                                          = "TEST"
      ~ parameters                                    = []
}

It seems like is_transient is causing the schema to be replaced due to our new way of handling Snowflake defaults. I performed the same upgrade, but this time with the is_transient field set explicitly:

resource "snowflake_database" "test_db" {
  name = "test_db_schema_upgrade"
}

resource "snowflake_schema" "test" {
  database                    = snowflake_database.test_db.name
  name                        = "TEST"
  is_transient = false
}

Now, after the upgrade the plan looked like this:

Terraform will perform the following actions:

  # snowflake_database.test_db will be updated in-place
  ~ resource "snowflake_database" "test_db" {
      ~ data_retention_time_in_days                   = 1 -> (known after apply)
        id                                            = "test_db_schema_upgrade"
        name                                          = "test_db_schema_upgrade"
        # (17 unchanged attributes hidden)
    }

  # snowflake_schema.test will be updated in-place
  ~ resource "snowflake_schema" "test" {
      ~ data_retention_time_in_days                   = 1 -> (known after apply)
        id                                            = "test_db_schema_upgrade|TEST"
        name                                          = "TEST"
      ~ show_output                                   = [
          - {
              - created_on      = "2024-08-26 02:03:47.515 -0700 PDT"
              - database_name   = "test_db_schema_upgrade"
              - dropped_on      = "0001-01-01 00:00:00 +0000 UTC"
              - is_current      = false
              - is_default      = false
              - name            = "TEST"
              - owner           = "ACCOUNTADMIN"
              - owner_role_type = "ROLE"
              - retention_time  = "1"
                # (2 unchanged attributes hidden)
            },
        ] -> (known after apply)
      ~ with_managed_access                           = "false" -> "default"
        # (21 unchanged attributes hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.

So what I propose is to specify is_transient = false in the older version and then upgrade to the new one. That's also a mistake on our end, because we should provide a state upgrader that automatically changes "false" to "default" when nothing is specified in the configuration. We'll try to add that in v0.95.0, so no manual work would be necessary on your end, but that version may also introduce breaking changes in other resources that you will need to adapt to before upgrading. Will either of these solutions work for you?
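
Applied to the configuration from the issue description, the suggested workaround is roughly the following sketch (attribute values are copied from the original config; the explicit depends_on is omitted since the database reference already implies that dependency):

resource "snowflake_schema" "test" {
  database                    = snowflake_database.test_db.name
  name                        = "TEST"
  data_retention_time_in_days = 1
  # Set explicitly while still on the old provider version, so the upgraded provider
  # no longer sees an is_transient change that forces replacement.
  is_transient                = false

  lifecycle {
    prevent_destroy = true
  }
}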

@sfc-gh-jcieslak sfc-gh-jcieslak self-assigned this Aug 26, 2024
@sfc-gh-jcieslak sfc-gh-jcieslak added bug Used to mark issues with provider's incorrect behavior and removed general-usage General help/usage questions labels Aug 26, 2024
@cyclonstep
Author

cyclonstep commented Aug 26, 2024

@sfc-gh-jcieslak Thank you for your answer!
Indeed, after adding is_transient = false to the configuration, the plan correctly stops replacing the schema.

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # snowflake_schema.test will be updated in-place
  ~ resource "snowflake_schema" "test" {
        id                                            = "TEST_DB|TEST"
        name                                          = "TEST"
      ~ show_output                                   = [
          - {
              - created_on      = "2022-05-23 15:58:32.497 +0900 JST"
              - database_name   = "TEST_DB"
              - dropped_on      = "0001-01-01 00:00:00 +0000 UTC"
              - is_current      = false
              - is_default      = false
              - name            = "TEST"
              - owner           = "SYSADMIN"
              - owner_role_type = "ROLE"
              - retention_time  = "1"
                # (2 unchanged attributes hidden)
            },
        ] -> (known after apply)
      ~ with_managed_access                           = "false" -> "default"
        # (22 unchanged attributes hidden)
    }

Let me check further whether there are any other side effects going on.
It would be helpful if you could implement the state upgrader in 0.95.0. Of course I will check which breaking changes will occur on my end, but for now I will go with your recommendation for the current version.

@sfc-gh-jcieslak
Collaborator

Also, keep in mind that before v1.0, we're introducing many breaking changes, so it would be better to upgrade version by version with the migration guide in mind. Making such a big version jump may break many other resources without proper adjustments.
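
As a rough sketch of that incremental approach (the registry source shown is an assumption; keep whatever source your configuration already declares), pin the provider version and bump it one release at a time:

terraform {
  required_providers {
    snowflake = {
      # Source address assumed here; match it to your existing configuration.
      source  = "Snowflake-Labs/snowflake"
      # Bump one release at a time and read the migration guide entry for each version before applying.
      version = "0.95.0"
    }
  }
}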

@sfc-gh-jcieslak
Collaborator

Hey, actually, I went through this problem with the team. Because of the limited Terraform SDK v2 capabilities, we won't be able to guess and fill in the default during the upgrade. For now, filling in the value explicitly is the only way to upgrade a schema that had is_transient unset in previous versions.

sfc-gh-jcieslak added a commit that referenced this issue Sep 3, 2024
## Addressed issues
* #3016 `SHOW ORGANIZATION ACCOUNTS` -> `SHOW ACCOUNTS` BCR
* #3015
* #2807
* #3025
sfc-gh-jcieslak pushed a commit that referenced this issue Sep 4, 2024
## [0.95.0](v0.94.1...v0.95.0) (2024-09-04)


### 🎉 **What's new:**

* Add change_tracking, row access policy and aggregation policy to views
([#2988](#2988))
([1f88bb1](1f88bb1))
* Add fully_qualified_name to all resources
([#2990](#2990))
([1b0462f](1b0462f))
* Add identifier parsers
([#2957](#2957))
([824ec52](824ec52))
* Add identifier with arguments
([#2979](#2979))
([00ae1c5](00ae1c5))
* Add timeouts block to cortex
([#3004](#3004))
([34d764b](34d764b))
* Add user parameters to resource
([#2968](#2968))
([f4ae380](f4ae380))
* Conclude user rework
([#3036](#3036))
([23e4625](23e4625))
* database role v1 readiness
([#3014](#3014))
([c4db255](c4db255))
* Identifier with arguments for procedure and external function
([#2987](#2987))
([f13cc5c](f13cc5c))
* Rework user resource
([#3026](#3026))
([bde2638](bde2638)),
closes
[#1572](#1572)
* Rework users datasource
([#3030](#3030))
([751239b](751239b)),
closes
[#2902](#2902)
* Upgrade view sdk
([#2969](#2969))
([ef2d50a](ef2d50a))
* View rework part 2
([#3021](#3021))
([e05377d](e05377d))
* View rework part 3
([#3023](#3023))
([195b41c](195b41c))


### 🔧 **Misc**

* Add annotation about fully_qualified_name and fix handling granteeName
([#3009](#3009))
([94e6345](94e6345))
* Apply identifier conventions
([#2996](#2996))
([5cbea84](5cbea84))
* apply identifier conventions to grants
([#3008](#3008))
([d7780ae](d7780ae))
* Clean collection utils
([#3028](#3028))
([426ddb1](426ddb1))
* Clean old assertions
([#3029](#3029))
([ad657eb](ad657eb))
* Conclude identifiers rework
([#3011](#3011))
([c1b53f3](c1b53f3))
* Improve user test and add manual test for user default database and
role
([#3035](#3035))
([6cb0b4e](6cb0b4e))
* Use new identifier with arguments in function, external function and
procedure grants
([#3002](#3002))
([5053f8b](5053f8b))
* User improvements
([#3034](#3034))
([65b64d7](65b64d7))


### 🐛 **Bug fixes:**

* database tests and introduce a new parameter
([#2981](#2981))
([3bae7f6](3bae7f6))
* Fix custom diffs for fields with diff supression
([#3032](#3032))
([2499602](2499602))
* Fix default secondary roles after BCR 2024_07
([#3040](#3040))
([2ca465a](2ca465a)),
closes
[#3038](#3038)
* Fix issues 2972 and 3007
([#3020](#3020))
([1772387](1772387))
* Fix known user resource issues
([#3013](#3013))
([a5dfeac](a5dfeac))
* identifier issues
([#2998](#2998))
([6fb76b7](6fb76b7))
* minor issues
([#3027](#3027))
([467b06e](467b06e)),
closes
[#3015](#3015)
[#2807](#2807)
[#3025](#3025)
* Nuke users
([#2971](#2971))
([0d90cc9](0d90cc9))

@cyclonstep
Author

cyclonstep commented Sep 5, 2024

@sfc-gh-jcieslak Hello Jan, sorry for replying so late. I see, so there is a limitation on the SDK side. I saw your documentation commits, thank you for the assistance 👍. I think this will remain quite an issue, since one careless terraform apply can easily replace everything. If there is anything I can help with, please let me know.

To anyone reading this: please review the plan before applying.
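
A cautious pattern, sketched here with standard Terraform CLI commands, is to save the plan to a file, review it for any "must be replaced" actions, and only then apply that exact plan:

terraform plan -out=tfplan   # write the plan to a file
terraform show tfplan        # review it; look for resources marked "must be replaced"
terraform apply tfplan       # apply only the plan that was reviewed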

@sfc-gh-jcieslak
Collaborator

Yeah, in the future we plan to migrate to the newer Terraform Plugin Framework, which should allow for more flexibility, but for now we are very limited and have to make a lot of workarounds. Thank you for your understanding 🙏. If you have any questions, feel free to ask. Otherwise, I guess we can close this thread (it was previously closed accidentally by the GitHub bot).
