# Data sharing #17

Closed · wants to merge 5 commits
128 changes: 128 additions & 0 deletions docs/resources/datashare.md
@@ -0,0 +1,128 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "redshift_datashare Resource - terraform-provider-redshift"
subcategory: ""
description: |-
  Defines a Redshift datashare. Datashares allow a Redshift cluster (the "consumer") to
  read data stored in another Redshift cluster (the "producer"). For more information, see
  https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html
  The redshift_datashare resource should be defined on the producer cluster.
  Note: Data sharing is only supported on certain Redshift instance families,
  such as RA3.
---

# redshift_datashare (Resource)

Defines a Redshift datashare. Datashares allow a Redshift cluster (the "consumer") to
read data stored in another Redshift cluster (the "producer"). For more information, see
https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html

The redshift_datashare resource should be defined on the producer cluster.

Note: Data sharing is only supported on certain Redshift instance families,
such as RA3.

## Example Usage

```terraform
# Example: Datashare that can only be consumed by a non-public Redshift cluster.
resource "redshift_datashare" "private_datashare" {
  name  = "my_private_datashare" # Required
  owner = "my_user"              # Optional

  # Example of adding a schema to a data share in "auto" mode.
  # All tables/views and functions in the schema are added to the datashare,
  # and Redshift will automatically add newly-created tables/views and functions
  # to the datashare without needing to re-run terraform.
  schema {
    name = "public" # Required
    mode = "auto"   # Required
  }

  # Example of adding a schema to a data share in "manual" mode.
  # Only the specified tables/views and functions will be added to the data share.
  schema {
    name = "other"  # Required
    mode = "manual" # Required
    tables = [ # Optional. If unspecified then no tables/views will be added.
      "my_table",
      "my_view",
      "my_late_binding_view",
      "my_materialized_view",
    ]
    functions = [ # Optional. If unspecified then no functions will be added.
      "my_sql_udf",
    ]
  }
}

# Example: Datashare that can be shared with publicly available consumer clusters.
resource "redshift_datashare" "publicly_accessible_datashare" {
  name                = "my_public_datashare" # Required
  publicly_accessible = true                  # Optional. Default is `false`

  # Example of adding a schema to a data share in "auto" mode.
  # All tables/views and functions in the schema are added to the datashare,
  # and Redshift will automatically add newly-created tables/views and functions
  # to the datashare without needing to re-run terraform.
  schema {
    name = "public" # Required
    mode = "auto"   # Required
  }

  # Example of adding a schema to a data share in "manual" mode.
  # Only the specified tables/views and functions will be added to the data share.
  schema {
    name = "other"  # Required
    mode = "manual" # Required
    tables = [ # Optional. If unspecified then no tables/views will be added.
      "my_table",
      "my_view",
      "my_late_binding_view",
      "my_materialized_view",
    ]
    functions = [ # Optional. If unspecified then no functions will be added.
      "my_sql_udf",
    ]
  }
}
```

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- **name** (String) The name of the datashare.

### Optional

- **id** (String) The ID of this resource.
- **owner** (String) The user who owns the datashare.
- **publicly_accessible** (Boolean) Specifies whether the datashare can be shared to clusters that are publicly accessible. Default is `false`.
- **schema** (Block Set) Defines which objects in the specified schema are exposed to the data share (see [below for nested schema](#nestedblock--schema))

### Read-Only

- **created** (String) The date when the datashare was created.
- **producer_account** (String) The ID for the datashare producer account.
- **producer_namespace** (String) The unique cluster identifier for the datashare producer cluster.

<a id="nestedblock--schema"></a>
### Nested Schema for `schema`

Required:

- **mode** (String) Configures how schema objects will be exposed to the datashare. Must be either `auto` or `manual`.

In `auto` mode, all tables, views, and UDFs will be exposed to the datashare, and Redshift will automatically expose new tables, views, and functions in the schema to the datashare (without requiring `terraform apply` to be run again).

In `manual` mode, only the `tables` and `functions` explicitly declared in the `schema` block will be exposed to the datashare.
- **name** (String) The name of the schema.

Optional:

- **functions** (Set of String) UDFs that are exposed to the datashare. You should configure this attribute explicitly when using `manual` mode. When using `auto` mode, this is treated as a computed attribute and you should not explicitly declare it.
- **tables** (Set of String) Tables and views that are exposed to the datashare. You should configure this attribute explicitly when using `manual` mode. When using `auto` mode, this is treated as a computed attribute and you should not explicitly declare it.


61 changes: 61 additions & 0 deletions examples/resources/redshift_datashare/resource.tf
@@ -0,0 +1,61 @@
# Example: Datashare that can only be consumed by a non-public Redshift cluster.
resource "redshift_datashare" "private_datashare" {
  name  = "my_private_datashare" # Required
  owner = "my_user"              # Optional

  # Example of adding a schema to a data share in "auto" mode.
  # All tables/views and functions in the schema are added to the datashare,
  # and Redshift will automatically add newly-created tables/views and functions
  # to the datashare without needing to re-run terraform.
  schema {
    name = "public" # Required
    mode = "auto"   # Required
  }

  # Example of adding a schema to a data share in "manual" mode.
  # Only the specified tables/views and functions will be added to the data share.
  schema {
    name = "other"  # Required
    mode = "manual" # Required
    tables = [ # Optional. If unspecified then no tables/views will be added.
      "my_table",
      "my_view",
      "my_late_binding_view",
      "my_materialized_view",
    ]
    functions = [ # Optional. If unspecified then no functions will be added.
      "my_sql_udf",
    ]
  }
}

# Example: Datashare that can be shared with publicly available consumer clusters.
resource "redshift_datashare" "publicly_accessible_datashare" {
  name                = "my_public_datashare" # Required
  publicly_accessible = true                  # Optional. Default is `false`

  # Example of adding a schema to a data share in "auto" mode.
  # All tables/views and functions in the schema are added to the datashare,
  # and Redshift will automatically add newly-created tables/views and functions
  # to the datashare without needing to re-run terraform.
  schema {
    name = "public" # Required
    mode = "auto"   # Required
  }

  # Example of adding a schema to a data share in "manual" mode.
  # Only the specified tables/views and functions will be added to the data share.
  schema {
    name = "other"  # Required
    mode = "manual" # Required
    tables = [ # Optional. If unspecified then no tables/views will be added.
      "my_table",
      "my_view",
      "my_late_binding_view",
      "my_materialized_view",
    ]
    functions = [ # Optional. If unspecified then no functions will be added.
      "my_sql_udf",
    ]
  }
}
21 changes: 20 additions & 1 deletion redshift/helpers.go
@@ -128,11 +128,12 @@ func isRetryablePQError(code string) bool {
	return ok
}

func splitCsvAndTrim(raw string) ([]string, error) {
func splitCsvAndTrim(raw string, delimiter rune) ([]string, error) {
	if raw == "" {
		return []string{}, nil
	}
	reader := csv.NewReader(strings.NewReader(raw))
	reader.Comma = delimiter
	rawSlice, err := reader.Read()
	if err != nil {
		return nil, err
@@ -146,3 +147,21 @@ func splitCsvAndTrim(raw string) ([]string, error) {
	}
	return result, nil
}
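The reworked helper now takes the delimiter as a parameter instead of assuming a comma. A standalone sketch of its behavior is below; note that the trimming loop is elided in the diff above, so its body here (trimming whitespace and dropping empty fields) is an assumption based on the function's name and the surrounding code:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// splitCsvAndTrim splits a delimited string into fields, trims
// whitespace from each field, and drops empty fields. Reproduced
// from the diff for illustration; the loop body is an assumption.
func splitCsvAndTrim(raw string, delimiter rune) ([]string, error) {
	if raw == "" {
		return []string{}, nil
	}
	reader := csv.NewReader(strings.NewReader(raw))
	reader.Comma = delimiter
	rawSlice, err := reader.Read()
	if err != nil {
		return nil, err
	}
	result := []string{}
	for _, s := range rawSlice {
		trimmed := strings.TrimSpace(s)
		if trimmed != "" {
			result = append(result, trimmed)
		}
	}
	return result, nil
}

func main() {
	fields, err := splitCsvAndTrim("my_table, my_view ,my_udf", ',')
	if err != nil {
		panic(err)
	}
	fmt.Println(fields) // [my_table my_view my_udf]
}
```

Making the delimiter a parameter lets the same helper split strings that use separators other than a comma.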

func setToMap(set *schema.Set, key string) map[string]map[string]interface{} {
	result := make(map[string]map[string]interface{})
	for _, s := range set.List() {
		m := s.(map[string]interface{})
		id := m[key].(string)
		result[id] = m
	}
	return result
}

func mapToSet(mapOfMaps map[string]map[string]interface{}, hashFunction schema.SchemaSetFunc) *schema.Set {
	result := schema.NewSet(hashFunction, nil)
	for _, m := range mapOfMaps {
		result.Add(m)
	}
	return result
}
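The new `setToMap`/`mapToSet` helpers convert between a `schema.Set` of nested blocks and a map keyed by one attribute, which makes it easy to look up a block (such as a datashare `schema` block) by name. A rough standalone illustration of the indexing idea, with a plain `[]map[string]interface{}` standing in for the SDK's `*schema.Set` (a hypothetical simplification so the sketch runs without terraform-plugin-sdk):

```go
package main

import "fmt"

// indexByKey mirrors the setToMap helper above, but takes a plain
// slice instead of *schema.Set so this sketch is self-contained.
func indexByKey(blocks []map[string]interface{}, key string) map[string]map[string]interface{} {
	result := make(map[string]map[string]interface{})
	for _, m := range blocks {
		// Index each block by the string value of the chosen attribute.
		result[m[key].(string)] = m
	}
	return result
}

func main() {
	schemas := []map[string]interface{}{
		{"name": "public", "mode": "auto"},
		{"name": "other", "mode": "manual"},
	}
	byName := indexByKey(schemas, "name")
	fmt.Println(byName["other"]["mode"]) // manual
}
```

Keying the blocks this way lets the resource compare desired and actual `schema` blocks by name when computing updates, rather than scanning the set for each lookup.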
1 change: 1 addition & 0 deletions redshift/provider.go
@@ -121,6 +121,7 @@ func Provider() *schema.Provider {
			"redshift_schema":    redshiftSchema(),
			"redshift_privilege": redshiftPrivilege(),
			"redshift_database":  redshiftDatabase(),
			"redshift_datashare": redshiftDatashare(),
		},
		DataSourcesMap: map[string]*schema.Resource{
			"redshift_user": dataSourceRedshiftUser(),