Commit ddc0d9c
Merge remote-tracking branch 'origin/main' into jmichalak-security-integrations
sfc-gh-jmichalak committed May 29, 2024
2 parents bf63d0f + 64518a3 commit ddc0d9c
Showing 135 changed files with 3,679 additions and 1,176 deletions.
13 changes: 12 additions & 1 deletion MIGRATION_GUIDE.md
@@ -4,6 +4,17 @@ This document is meant to help you migrate your Terraform config to the newer version. It will
describe deprecations or breaking changes and help you to change your configuration to keep the same (or similar) behavior
across different versions.

## v0.91.0 ➞ v0.92.0
### snowflake_database new alternatives
As part of the [preparation for v1](https://github.com/Snowflake-Labs/terraform-provider-snowflake/blob/main/ROADMAP.md#preparing-essential-ga-objects-for-the-provider-v1), we split up the database resource into multiple ones:
- Standard database (in progress)
- Shared database - can be used as `snowflake_shared_database` (used to create databases from externally defined shares)
- Secondary database - can be used as `snowflake_secondary_database` (used to create replicas of databases from external sources)
From now on, please migrate to the new database resources for their respective use cases. For more information, see the documentation for those resources on the [Terraform Registry](https://registry.terraform.io/providers/Snowflake-Labs/snowflake/latest/docs).

The split was done (and will be done for several more objects during the refactor) to make each resource simpler to maintain and use.
Its purpose was also to divide the resources by their specific purpose rather than cramming every use case of an object into one resource.
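
A hedged before/after sketch (assuming the pre-0.92 config used the old `snowflake_database` resource's `from_replica` argument; all identifiers are placeholders):

```terraform
# Before: one resource covering every database variant
resource "snowflake_database" "replica" {
  name         = "database_name"
  from_replica = "<organization_name>.<account_name>.<primary_database_name>"
}

# After: a dedicated resource for the replica use case
resource "snowflake_secondary_database" "replica" {
  name          = "database_name"
  as_replica_of = "<organization_name>.<account_name>.<primary_database_name>"
}
```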

## v0.89.0 ➞ v0.90.0
### snowflake_table resource changes
#### *(behavior change)* Validation to column type added
@@ -23,7 +34,7 @@ resource "snowflake_tag_masking_policy_association" "name" {
  masking_policy_id = snowflake_masking_policy.example_masking_policy.id
}
```

After
```terraform
resource "snowflake_tag_masking_policy_association" "name" {
  # ...
}
```
98 changes: 98 additions & 0 deletions docs/resources/secondary_database.md
@@ -0,0 +1,98 @@
---
page_title: "snowflake_secondary_database Resource - terraform-provider-snowflake"
subcategory: ""
description: |-
  This resource creates a replica of an existing primary database (i.e. a secondary database). For more information about database replication, see Introduction to database replication across multiple accounts https://docs.snowflake.com/en/user-guide/db-replication-intro.
---

# snowflake_secondary_database (Resource)

This resource creates a replica of an existing primary database (i.e. a secondary database). For more information about database replication, see [Introduction to database replication across multiple accounts](https://docs.snowflake.com/en/user-guide/db-replication-intro).

## Example Usage

```terraform
# 1. Preparing primary database
resource "snowflake_database" "primary" {
  provider = primary_account # notice the provider fields
  name     = "database_name"
  replication_configuration {
    accounts             = ["<secondary_account_organization_name>.<secondary_account_name>"]
    ignore_edition_check = true
  }
}

# 2. Creating secondary database
resource "snowflake_secondary_database" "test" {
  provider      = secondary_account
  name          = snowflake_database.primary.name # It's recommended to give a secondary database the same name as its primary database
  as_replica_of = "<primary_account_organization_name>.<primary_account_name>.${snowflake_database.primary.name}"
  is_transient  = false

  data_retention_time_in_days {
    value = 10
  }

  max_data_extension_time_in_days {
    value = 20
  }

  external_volume              = "external_volume_name"
  catalog                      = "catalog_name"
  replace_invalid_characters   = false
  default_ddl_collation        = "en_US"
  storage_serialization_policy = "OPTIMIZED"
  log_level                    = "OFF"
  trace_level                  = "OFF"
  comment                      = "A secondary database"
}
```
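
The example above references two aliased provider configurations. A minimal sketch of how they could be declared (the alias names follow the example; the `account` value and credentials are placeholders, not part of this commit):

```terraform
provider "snowflake" {
  alias   = "primary_account"
  account = "<primary_account_identifier>" # hypothetical; plus credentials for the primary account
}

provider "snowflake" {
  alias   = "secondary_account"
  account = "<secondary_account_identifier>" # hypothetical; plus credentials for the secondary account
}
```

Note that Terraform selects an aliased provider with `provider = snowflake.<alias>` (e.g. `snowflake.primary_account`); the bare names in the example above are shorthand placeholders.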

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `as_replica_of` (String) A fully qualified path to a database to create a replica from. A fully qualified path follows the format of `"<organization_name>"."<account_name>"."<database_name>"`.
- `name` (String) Specifies the identifier for the database; must be unique for your account. As a best practice for [Database Replication and Failover](https://docs.snowflake.com/en/user-guide/db-replication-intro), it is recommended to give each secondary database the same name as its primary database. This practice supports referencing fully-qualified objects (i.e. '<db>.<schema>.<object>') by other objects in the same database, such as querying a fully-qualified table name in a view. If a secondary database has a different name from the primary database, then these object references would break in the secondary database.

### Optional

- `catalog` (String) The database parameter that specifies the default catalog to use for Iceberg tables.
- `comment` (String) Specifies a comment for the database.
- `data_retention_time_in_days` (Block List, Max: 1) Specifies the number of days for which Time Travel actions (CLONE and UNDROP) can be performed on the database, as well as specifying the default Time Travel retention time for all schemas created in the database. For more details, see [Understanding & Using Time Travel](https://docs.snowflake.com/en/user-guide/data-time-travel). (see [below for nested schema](#nestedblock--data_retention_time_in_days))
- `default_ddl_collation` (String) Specifies a default collation specification for all schemas and tables added to the database. It can be overridden on schema or table level. For more information, see [collation specification](https://docs.snowflake.com/en/sql-reference/collation#label-collation-specification).
- `external_volume` (String) The database parameter that specifies the default external volume to use for Iceberg tables.
- `is_transient` (Boolean) Specifies the database as transient. Transient databases do not have a Fail-safe period so they do not incur additional storage costs once they leave Time Travel; however, this means they are also not protected by Fail-safe in the event of a data loss.
- `log_level` (String) Specifies the severity level of messages that should be ingested and made available in the active event table. Valid options are: [TRACE DEBUG INFO WARN ERROR FATAL OFF]. Messages at the specified level (and at more severe levels) are ingested. For more information, see [LOG_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-log-level).
- `max_data_extension_time_in_days` (Block List, Max: 1) Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for tables in the database to prevent streams on the tables from becoming stale. For a detailed description of this parameter, see [MAX_DATA_EXTENSION_TIME_IN_DAYS](https://docs.snowflake.com/en/sql-reference/parameters.html#label-max-data-extension-time-in-days). (see [below for nested schema](#nestedblock--max_data_extension_time_in_days))
- `replace_invalid_characters` (Boolean) Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table. You can only set this parameter for tables that use an external Iceberg catalog.
- `storage_serialization_policy` (String) Specifies the storage serialization policy for Iceberg tables that use Snowflake as the catalog. Valid options are: [COMPATIBLE OPTIMIZED]. COMPATIBLE: Snowflake performs encoding and compression of data files that ensures interoperability with third-party compute engines. OPTIMIZED: Snowflake performs encoding and compression of data files that ensures the best table performance within Snowflake.
- `trace_level` (String) Controls how trace events are ingested into the event table. Valid options are: [ALWAYS ON_EVENT OFF]. For information about levels, see [TRACE_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-trace-level).

### Read-Only

- `id` (String) The ID of this resource.

<a id="nestedblock--data_retention_time_in_days"></a>
### Nested Schema for `data_retention_time_in_days`

Required:

- `value` (Number)


<a id="nestedblock--max_data_extension_time_in_days"></a>
### Nested Schema for `max_data_extension_time_in_days`

Required:

- `value` (Number)

## Import

Import is supported using the following syntax:

```shell
terraform import snowflake_secondary_database.example 'secondary_database_name'
```
81 changes: 81 additions & 0 deletions docs/resources/shared_database.md
@@ -0,0 +1,81 @@
---
page_title: "snowflake_shared_database Resource - terraform-provider-snowflake"
subcategory: ""
description: |-
  This resource creates a database from a share provided by another Snowflake account (i.e. a shared database). For more information about shares, see Introduction to Secure Data Sharing https://docs.snowflake.com/en/user-guide/data-sharing-intro.
---

# snowflake_shared_database (Resource)

This resource creates a database from a share provided by another Snowflake account (i.e. a shared database). For more information about shares, see [Introduction to Secure Data Sharing](https://docs.snowflake.com/en/user-guide/data-sharing-intro).

## Example Usage

```terraform
# 1. Preparing database to share
resource "snowflake_share" "test" {
  provider = primary_account # notice the provider fields
  name     = "share_name"
  accounts = ["<secondary_account_organization_name>.<secondary_account_name>"]
}

resource "snowflake_database" "test" {
  provider = primary_account
  name     = "shared_database"
}

resource "snowflake_grant_privileges_to_share" "test" {
  provider    = primary_account
  to_share    = snowflake_share.test.name
  privileges  = ["USAGE"]
  on_database = snowflake_database.test.name
}

# 2. Creating shared database
resource "snowflake_shared_database" "test" {
  provider   = secondary_account
  depends_on = [snowflake_grant_privileges_to_share.test]
  name       = snowflake_database.test.name # the shared database should have the same name as the "imported" one
  from_share = "<primary_account_organization_name>.<primary_account_name>.${snowflake_share.test.name}"

  is_transient                 = false
  external_volume              = "external_volume_name"
  catalog                      = "catalog_name"
  replace_invalid_characters   = false
  default_ddl_collation        = "en_US"
  storage_serialization_policy = "OPTIMIZED"
  log_level                    = "OFF"
  trace_level                  = "OFF"
  comment                      = "A shared database"
}
```
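
Once the shared database exists, consuming roles typically need `IMPORTED PRIVILEGES` on it to query the shared data. A hedged sketch using the provider's grant resource (the role name is a hypothetical placeholder):

```terraform
resource "snowflake_grant_privileges_to_account_role" "imported" {
  provider          = secondary_account
  account_role_name = "ANALYST" # hypothetical consumer role
  privileges        = ["IMPORTED PRIVILEGES"]
  on_account_object {
    object_type = "DATABASE"
    object_name = snowflake_shared_database.test.name
  }
}
```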

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `from_share` (String) A fully qualified path to a share from which the database will be created. A fully qualified path follows the format of `"<provider_account>"."<share_name>"`.
- `name` (String) Specifies the identifier for the database; must be unique for your account.

### Optional

- `catalog` (String) The database parameter that specifies the default catalog to use for Iceberg tables.
- `comment` (String) Specifies a comment for the database.
- `default_ddl_collation` (String) Specifies a default collation specification for all schemas and tables added to the database. It can be overridden on schema or table level. For more information, see [collation specification](https://docs.snowflake.com/en/sql-reference/collation#label-collation-specification).
- `external_volume` (String) The database parameter that specifies the default external volume to use for Iceberg tables.
- `log_level` (String) Specifies the severity level of messages that should be ingested and made available in the active event table. Valid options are: [TRACE DEBUG INFO WARN ERROR FATAL OFF]. Messages at the specified level (and at more severe levels) are ingested. For more information, see [LOG_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-log-level).
- `replace_invalid_characters` (Boolean) Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table. You can only set this parameter for tables that use an external Iceberg catalog.
- `storage_serialization_policy` (String) Specifies the storage serialization policy for Iceberg tables that use Snowflake as the catalog. Valid options are: [COMPATIBLE OPTIMIZED]. COMPATIBLE: Snowflake performs encoding and compression of data files that ensures interoperability with third-party compute engines. OPTIMIZED: Snowflake performs encoding and compression of data files that ensures the best table performance within Snowflake.
- `trace_level` (String) Controls how trace events are ingested into the event table. Valid options are: [ALWAYS ON_EVENT OFF]. For information about levels, see [TRACE_LEVEL](https://docs.snowflake.com/en/sql-reference/parameters.html#label-trace-level).

### Read-Only

- `id` (String) The ID of this resource.

## Import

Import is supported using the following syntax:

```shell
terraform import snowflake_shared_database.example 'shared_database_name'
```
1 change: 1 addition & 0 deletions examples/resources/snowflake_secondary_database/import.sh
@@ -0,0 +1 @@
terraform import snowflake_secondary_database.example 'secondary_database_name'
34 changes: 34 additions & 0 deletions examples/resources/snowflake_secondary_database/resource.tf
@@ -0,0 +1,34 @@
# 1. Preparing primary database
resource "snowflake_database" "primary" {
  provider = primary_account # notice the provider fields
  name     = "database_name"
  replication_configuration {
    accounts             = ["<secondary_account_organization_name>.<secondary_account_name>"]
    ignore_edition_check = true
  }
}

# 2. Creating secondary database
resource "snowflake_secondary_database" "test" {
  provider      = secondary_account
  name          = snowflake_database.primary.name # It's recommended to give a secondary database the same name as its primary database
  as_replica_of = "<primary_account_organization_name>.<primary_account_name>.${snowflake_database.primary.name}"
  is_transient  = false

  data_retention_time_in_days {
    value = 10
  }

  max_data_extension_time_in_days {
    value = 20
  }

  external_volume              = "external_volume_name"
  catalog                      = "catalog_name"
  replace_invalid_characters   = false
  default_ddl_collation        = "en_US"
  storage_serialization_policy = "OPTIMIZED"
  log_level                    = "OFF"
  trace_level                  = "OFF"
  comment                      = "A secondary database"
}
1 change: 1 addition & 0 deletions examples/resources/snowflake_shared_database/import.sh
@@ -0,0 +1 @@
terraform import snowflake_shared_database.example 'shared_database_name'
35 changes: 35 additions & 0 deletions examples/resources/snowflake_shared_database/resource.tf
@@ -0,0 +1,35 @@
# 1. Preparing database to share
resource "snowflake_share" "test" {
  provider = primary_account # notice the provider fields
  name     = "share_name"
  accounts = ["<secondary_account_organization_name>.<secondary_account_name>"]
}

resource "snowflake_database" "test" {
  provider = primary_account
  name     = "shared_database"
}

resource "snowflake_grant_privileges_to_share" "test" {
  provider    = primary_account
  to_share    = snowflake_share.test.name
  privileges  = ["USAGE"]
  on_database = snowflake_database.test.name
}

# 2. Creating shared database
resource "snowflake_shared_database" "test" {
  provider   = secondary_account
  depends_on = [snowflake_grant_privileges_to_share.test]
  name       = snowflake_database.test.name # the shared database should have the same name as the "imported" one
  from_share = "<primary_account_organization_name>.<primary_account_name>.${snowflake_share.test.name}"

  is_transient                 = false
  external_volume              = "external_volume_name"
  catalog                      = "catalog_name"
  replace_invalid_characters   = false
  default_ddl_collation        = "en_US"
  storage_serialization_policy = "OPTIMIZED"
  log_level                    = "OFF"
  trace_level                  = "OFF"
  comment                      = "A shared database"
}
6 changes: 6 additions & 0 deletions pkg/acceptance/check_destroy.go
@@ -136,12 +136,18 @@ var showByIdFunctions = map[resources.Resource]showByIdFunc{
	resources.Schema: func(ctx context.Context, client *sdk.Client, id sdk.ObjectIdentifier) error {
		return runShowById(ctx, id, client.Schemas.ShowByID)
	},
	resources.SecondaryDatabase: func(ctx context.Context, client *sdk.Client, id sdk.ObjectIdentifier) error {
		return runShowById(ctx, id, client.Databases.ShowByID)
	},
	resources.Sequence: func(ctx context.Context, client *sdk.Client, id sdk.ObjectIdentifier) error {
		return runShowById(ctx, id, client.Sequences.ShowByID)
	},
	resources.Share: func(ctx context.Context, client *sdk.Client, id sdk.ObjectIdentifier) error {
		return runShowById(ctx, id, client.Shares.ShowByID)
	},
	resources.SharedDatabase: func(ctx context.Context, client *sdk.Client, id sdk.ObjectIdentifier) error {
		return runShowById(ctx, id, client.Databases.ShowByID)
	},
	resources.Stage: func(ctx context.Context, client *sdk.Client, id sdk.ObjectIdentifier) error {
		return runShowById(ctx, id, client.Stages.ShowByID)
	},
24 changes: 24 additions & 0 deletions pkg/acceptance/helpers/database_client.go
@@ -25,6 +25,30 @@ func (c *DatabaseClient) client() sdk.Databases {
	return c.context.client.Databases
}

func (c *DatabaseClient) CreatePrimaryDatabase(t *testing.T, enableReplicationTo []sdk.AccountIdentifier) (*sdk.Database, sdk.ExternalObjectIdentifier, func()) {
	t.Helper()
	ctx := context.Background()

	primaryDatabase, primaryDatabaseCleanup := c.CreateDatabase(t)

	err := c.client().AlterReplication(ctx, primaryDatabase.ID(), &sdk.AlterDatabaseReplicationOptions{
		EnableReplication: &sdk.EnableReplication{
			ToAccounts:         enableReplicationTo,
			IgnoreEditionCheck: sdk.Bool(true),
		},
	})
	require.NoError(t, err)

	organizationName, err := c.context.client.ContextFunctions.CurrentOrganizationName(ctx)
	require.NoError(t, err)

	accountName, err := c.context.client.ContextFunctions.CurrentAccountName(ctx)
	require.NoError(t, err)

	externalPrimaryId := sdk.NewExternalObjectIdentifier(sdk.NewAccountIdentifier(organizationName, accountName), primaryDatabase.ID())
	return primaryDatabase, externalPrimaryId, primaryDatabaseCleanup
}

func (c *DatabaseClient) CreateDatabase(t *testing.T) (*sdk.Database, func()) {
	t.Helper()
	return c.CreateDatabaseWithOptions(t, c.ids.RandomAccountObjectIdentifier(), &sdk.CreateDatabaseOptions{})
2 changes: 1 addition & 1 deletion pkg/acceptance/helpers/table_client.go
@@ -76,7 +76,7 @@ func (c *TableClient) DropTableFunc(t *testing.T, id sdk.SchemaObjectIdentifier)

	return func() {
		// to prevent error when schema was removed before the table
		_, err := c.context.client.Schemas.ShowByID(ctx, sdk.NewDatabaseObjectIdentifier(id.DatabaseName(), id.SchemaName()))
		_, err := c.context.client.Schemas.ShowByID(ctx, id.SchemaId())
		if errors.Is(err, sdk.ErrObjectNotExistOrAuthorized) {
			return
		}