update hand written docs to reflect newly required fields, a few last fields
megan07 committed Nov 13, 2019
1 parent 2383492 commit 6a3aaa4
Showing 14 changed files with 104 additions and 57 deletions.
@@ -36,13 +36,23 @@ var composerEnvironmentReservedEnvVar = map[string]struct{}{
"SQL_USER": {},
}

-var composerSoftwareConfigKeys = []string{
-"config.0.software_config.0.airflow_config_overrides",
-"config.0.software_config.0.pypi_packages",
-"config.0.software_config.0.env_variables",
-"config.0.software_config.0.image_version",
-"config.0.software_config.0.python_version",
-}
+var (
+composerSoftwareConfigKeys = []string{
+"config.0.software_config.0.airflow_config_overrides",
+"config.0.software_config.0.pypi_packages",
+"config.0.software_config.0.env_variables",
+"config.0.software_config.0.image_version",
+"config.0.software_config.0.python_version",
+}
+
+composerConfigKeys = []string{
+"config.0.node_count",
+"config.0.node_config",
+"config.0.software_config",
+"config.0.private_environment_config",
+}
+
+)

func resourceComposerEnvironment() *schema.Resource {
return &schema.Resource{
@@ -91,13 +101,15 @@ func resourceComposerEnvironment() *schema.Resource {
Type: schema.TypeInt,
Computed: true,
Optional: true,
+AtLeastOneOf: composerConfigKeys,
ValidateFunc: validation.IntAtLeast(3),
},
"node_config": {
-Type: schema.TypeList,
-Computed: true,
-Optional: true,
-MaxItems: 1,
+Type: schema.TypeList,
+Computed: true,
+Optional: true,
+AtLeastOneOf: composerConfigKeys,
+MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"zone": {
@@ -206,10 +218,11 @@ func resourceComposerEnvironment() *schema.Resource {
},
},
"software_config": {
-Type: schema.TypeList,
-Optional: true,
-Computed: true,
-MaxItems: 1,
+Type: schema.TypeList,
+Optional: true,
+Computed: true,
+AtLeastOneOf: composerConfigKeys,
+MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"airflow_config_overrides": {
@@ -251,23 +264,32 @@ func resourceComposerEnvironment() *schema.Resource {
},
},
"private_environment_config": {
-Type: schema.TypeList,
-Optional: true,
-Computed: true,
-MaxItems: 1,
-ForceNew: true,
+Type: schema.TypeList,
+Optional: true,
+Computed: true,
+AtLeastOneOf: composerConfigKeys,
+MaxItems: 1,
+ForceNew: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"enable_private_endpoint": {
-Type: schema.TypeBool,
-Required: true,
-ForceNew: true,
+Type: schema.TypeBool,
+Optional: true,
+AtLeastOneOf: []string{
+"config.0.private_environment_config.0.enable_private_endpoint",
+"config.0.private_environment_config.0.master_ipv4_cidr_block",
+},
+ForceNew: true,
},
"master_ipv4_cidr_block": {
-Type: schema.TypeString,
-Optional: true,
+Type: schema.TypeString,
+Optional: true,
+AtLeastOneOf: []string{
+"config.0.private_environment_config.0.enable_private_endpoint",
+"config.0.private_environment_config.0.master_ipv4_cidr_block",
+},
ForceNew: true,
-Default: "172.16.0.0/28",
+Default: "172.16.0.0/28",
},
},
},
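For illustration, a minimal sketch of a configuration that satisfies the new `AtLeastOneOf` constraint on `config` by setting just `node_count`; the environment name, region, and count below are placeholders:

```hcl
resource "google_composer_environment" "example" {
  name   = "example-environment"
  region = "us-central1"

  config {
    # Any one of node_count, node_config, software_config, or
    # private_environment_config satisfies the new constraint.
    node_count = 3
  }
}
```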
@@ -38,10 +38,10 @@ var schemaOrganizationPolicy = map[string]*schema.Schema{
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"allow": {
-Type: schema.TypeList,
-Optional: true,
-MaxItems: 1,
-ConflictsWith: []string{"list_policy.0.deny"},
+Type: schema.TypeList,
+Optional: true,
+MaxItems: 1,
+ExactlyOneOf: []string{"list_policy.0.allow", "list_policy.0.deny"},
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"all": {
@@ -61,10 +61,10 @@ var schemaOrganizationPolicy = map[string]*schema.Schema{
},
},
"deny": {
-Type: schema.TypeList,
-Optional: true,
-MaxItems: 1,
-ConflictsWith: []string{"list_policy.0.allow"},
+Type: schema.TypeList,
+Optional: true,
+MaxItems: 1,
+ExactlyOneOf: []string{"list_policy.0.allow", "list_policy.0.deny"},
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"all": {
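A hedged sketch of the new `ExactlyOneOf` behavior: a `list_policy` now carries exactly one of `allow` or `deny`. The organization ID and constraint below are placeholders:

```hcl
resource "google_organization_policy" "example" {
  org_id     = "123456789"
  constraint = "serviceuser.services"

  list_policy {
    # Exactly one of `allow` or `deny` may be set.
    allow {
      all = true
    }
  }
}
```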
@@ -404,7 +404,7 @@ In an attempt to avoid allowing empty blocks in config files, at least one of `s

## Resource: `google_composer_environment`

-### At least one of `airflow_config_overrides`, `pypi_packages`, `env_variables`, `image_version`, or `python_version` are now required on `google_composer_environment.config.software_config`
+### At least one of `airflow_config_overrides`, `pypi_packages`, `env_variables`, `image_version`, or `python_version` is now required on `google_composer_environment.config.software_config`

In an attempt to avoid allowing empty blocks in config files, at least one of `airflow_config_overrides`,
`pypi_packages`, `env_variables`, `image_version`, or `python_version` is now required on the
@@ -570,7 +570,7 @@ In an attempt to avoid allowing empty blocks in config files, at least one of `a
`disk_encryption_key_raw`, `kms_key_self_link`, `initialize_params`, `mode` or `source` is now required on the
`boot_disk` block.

-### At least one of `size`, `type`, `image`, or `labels` are now required on `google_compute_instance.boot_disk.initialize_params`
+### At least one of `size`, `type`, `image`, or `labels` is now required on `google_compute_instance.boot_disk.initialize_params`

In an attempt to avoid allowing empty blocks in config files, at least one of `size`, `type`, `image`, or `labels`
is now required on the `initialize_params` block.
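A minimal sketch of a `boot_disk` whose `initialize_params` sets at least one of the listed fields; the instance name, machine type, zone, and image are illustrative values:

```hcl
resource "google_compute_instance" "example" {
  name         = "example-instance"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      # `image` alone satisfies the at-least-one-of requirement.
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }
}
```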
@@ -48,7 +48,7 @@ The following arguments are supported:

* `feature_settings` - (Optional) A block of optional settings to configure specific App Engine features:

-* `split_health_checks` - (Optional) Set to false to use the legacy health check instead of the readiness
+* `split_health_checks` - (Required) Set to false to use the legacy health check instead of the readiness
and liveness checks.

## Attributes Reference
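A minimal sketch of a `feature_settings` block with the now-required `split_health_checks`; the project and location are placeholders:

```hcl
resource "google_app_engine_application" "example" {
  project     = "example-project"
  location_id = "us-central"

  feature_settings {
    # Required once the block is present.
    split_health_checks = true
  }
}
```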
@@ -185,13 +185,14 @@ The `csv_options` block supports:

The `google_sheets_options` block supports:

-* `range` (Optional, Beta) - Range of a sheet to query from. Only used when
-non-empty.
+* `range` (Optional) - Range of a sheet to query from. Only used when
+non-empty. At least one of `range` or `skip_leading_rows` must be set.
Typical format: "sheet_name!top_left_cell_id:bottom_right_cell_id"
For example: "sheet1!A1:B20"

* `skip_leading_rows` (Optional) - The number of rows at the top of the sheet
-that BigQuery will skip when reading the data.
+that BigQuery will skip when reading the data. At least one of `range` or
+`skip_leading_rows` must be set.

The `time_partitioning` block supports:

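A sketch of a Sheets-backed table that sets at least one of `range` or `skip_leading_rows`; the dataset, table, and spreadsheet URI are placeholders:

```hcl
resource "google_bigquery_table" "example" {
  dataset_id = "example_dataset"
  table_id   = "example_table"

  external_data_configuration {
    autodetect    = true
    source_format = "GOOGLE_SHEETS"
    source_uris   = ["https://docs.google.com/spreadsheets/d/EXAMPLE_SHEET_ID"]

    google_sheets_options {
      # Either field on its own satisfies the requirement.
      skip_leading_rows = 1
      range             = "sheet1!A1:B20"
    }
  }
}
```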
@@ -305,7 +305,7 @@ The `private_environment_config` block supports:
The `ip_allocation_policy` block supports:

* `use_ip_aliases` -
-(Optional)
+(Required)
Whether or not to enable Alias IPs in the GKE cluster. If true, a VPC-native cluster is created.
Defaults to true if the `ip_allocation_block` is present in config.

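Assuming `ip_allocation_policy` is nested under `config.node_config` as in the current resource, a sketch of the now-required `use_ip_aliases` might look like this (name and region are placeholders):

```hcl
resource "google_composer_environment" "example" {
  name   = "example-environment"
  region = "us-central1"

  config {
    node_config {
      ip_allocation_policy {
        # Required whenever the block is declared; true creates a VPC-native cluster.
        use_ip_aliases = true
      }
    }
  }
}
```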
@@ -33,6 +33,7 @@ resource "google_compute_instance" "default" {
// Local SSD disk
scratch_disk {
+interface = "SCSI"
}
network_interface {
@@ -193,8 +194,7 @@ The `initialize_params` block supports:

The `scratch_disk` block supports:

-* `interface` - (Optional) The disk interface to use for attaching this disk; either SCSI or NVME.
-Defaults to SCSI.
+* `interface` - (Required) The disk interface to use for attaching this disk; either SCSI or NVME.

The `attached_disk` block supports:

@@ -300,7 +300,7 @@ The `disk` block supports:

The `disk_encryption_key` block supports:

-* `kms_key_self_link` - (Optional) The self link of the encryption key that is stored in Google Cloud KMS
+* `kms_key_self_link` - (Required) The self link of the encryption key that is stored in Google Cloud KMS

The `network_interface` block supports:

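A sketch of a `disk` block with the now-required `kms_key_self_link`, assuming a `google_compute_instance_template` resource and a placeholder Cloud KMS key path:

```hcl
resource "google_compute_instance_template" "example" {
  name         = "example-template"
  machine_type = "n1-standard-1"

  disk {
    source_image = "debian-cloud/debian-9"

    disk_encryption_key {
      # Placeholder; point this at a real Cloud KMS crypto key.
      kms_key_self_link = "projects/example-project/locations/us-central1/keyRings/example-ring/cryptoKeys/example-key"
    }
  }

  network_interface {
    network = "default"
  }
}
```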
@@ -85,7 +85,7 @@ The `advertised_ip_ranges` block supports:
(Optional) User-specified description for the IP range.

* `range` -
-(Optional) The IP range to advertise. The value must be a CIDR-formatted string.
+(Required) The IP range to advertise. The value must be a CIDR-formatted string.


## Attributes Reference
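A sketch of a router advertising a custom range with the now-required `range` set; the router name, ASN, network, and CIDR are placeholders:

```hcl
resource "google_compute_router" "example" {
  name    = "example-router"
  network = "default"

  bgp {
    asn            = 64514
    advertise_mode = "CUSTOM"

    advertised_ip_ranges {
      # Required for every advertised_ip_ranges block.
      range = "10.0.0.0/24"
    }
  }
}
```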
@@ -470,7 +470,7 @@ The `network_policy` block supports:

* `provider` - (Optional) The selected network policy provider. Defaults to PROVIDER_UNSPECIFIED.

-* `enabled` - (Optional) Whether network policy is enabled on the cluster. Defaults to false.
+* `enabled` - (Required) Whether network policy is enabled on the cluster.

The `node_config` block supports:

@@ -614,7 +614,7 @@ The `sandbox_type` block supports:

The `release_channel` block supports:

-* `channel` - (Optional) The selected release channel. Defaults to `UNSPECIFIED`.
+* `channel` - (Required) The selected release channel.
Accepted values are:
* UNSPECIFIED: Not set.
* RAPID: Weekly upgrade cadence; Early testers and developers who requires new features.
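A sketch of a cluster that declares `network_policy` with the now-required `enabled` field; the cluster name, location, and node count are placeholders:

```hcl
resource "google_container_cluster" "example" {
  name               = "example-cluster"
  location           = "us-central1"
  initial_node_count = 1

  network_policy {
    # Required once the block is present.
    enabled = true
  }
}
```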
@@ -172,6 +172,9 @@ The `cluster_config` block supports:
* `software_config` (Optional) The config settings for software inside the cluster.
Structure defined below.

+* `autoscaling_config` (Optional) The autoscaling policy config associated with the cluster.
+Structure defined below.

* `initialization_action` (Optional) Commands to execute on each node after config is completed.
You can specify multiple versions of these. Structure defined below.

@@ -424,6 +427,27 @@ cluster_config {

- - -

+The `cluster_config.autoscaling_config` block supports:
+
+```hcl
+cluster_config {
+# Override or set some custom properties
+autoscaling_config {
+policy_uri = "projects/projectId/locations/region/autoscalingPolicies/policyId"
+}
+}
+```
+
+* `policy_uri` - (Required) The autoscaling policy used by the cluster.
+
+Only resource names including projectid and location (region) are valid. Examples:
+
+`https://www.googleapis.com/compute/v1/projects/[projectId]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]`
+`projects/[projectId]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]`
+Note that the policy must be in the same project and Cloud Dataproc region.
+
+- - -
+
The `initialization_action` block (Optional) can be specified multiple times and supports:

```hcl
12 changes: 6 additions & 6 deletions third_party/terraform/website/docs/r/dataproc_job.html.markdown
@@ -105,7 +105,7 @@ output "pyspark_status" {

* `labels` - (Optional) The list of labels (key/value pairs) to add to the job.

-* `scheduling.max_failures_per_hour` - (Optional) Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.
+* `scheduling.max_failures_per_hour` - (Required) Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.

The `pyspark_config` block supports:

@@ -145,7 +145,7 @@ are generally applicable:

* `properties` - (Optional) A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in `/etc/spark/conf/spark-defaults.conf` and classes in user code.

-* `logging_config.driver_log_levels`- (Optional) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

The `spark_config` block supports:

@@ -187,7 +187,7 @@ resource "google_dataproc_job" "spark" {

* `properties` - (Optional) A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in `/etc/spark/conf/spark-defaults.conf` and classes in user code.

-* `logging_config.driver_log_levels`- (Optional) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'


The `hadoop_config` block supports:
@@ -221,7 +221,7 @@ resource "google_dataproc_job" "hadoop" {

* `properties` - (Optional) A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in `/etc/hadoop/conf/*-site` and classes in user code..

-* `logging_config.driver_log_levels`- (Optional) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'

The `hive_config` block supports:

@@ -285,7 +285,7 @@ resource "google_dataproc_job" "pig" {

* `jar_file_uris` - (Optional) HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.

-* `logging_config.driver_log_levels`- (Optional) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'


The `sparksql_config` block supports:
@@ -316,7 +316,7 @@ resource "google_dataproc_job" "sparksql" {

* `jar_file_uris` - (Optional) HCFS URIs of jar files to be added to the Spark CLASSPATH.

-* `logging_config.driver_log_levels`- (Optional) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+* `logging_config.driver_log_levels`- (Required) The per-package log levels for the driver. This may include 'root' package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'


## Attributes Reference
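A sketch of a PySpark job whose `logging_config` sets the now-required `driver_log_levels`; the region, cluster name, and script URI are placeholders:

```hcl
resource "google_dataproc_job" "example" {
  region = "us-central1"

  placement {
    cluster_name = "example-cluster"
  }

  pyspark_config {
    main_python_file_uri = "gs://example-bucket/wordcount.py"

    logging_config {
      # Required whenever logging_config is declared.
      driver_log_levels = {
        "root" = "INFO"
      }
    }
  }
}
```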
@@ -250,9 +250,9 @@ The required `settings` block supports:

The optional `settings.database_flags` sublist supports:

-* `name` - (Optional) Name of the flag.
+* `name` - (Required) Name of the flag.

-* `value` - (Optional) Value of the flag.
+* `value` - (Required) Value of the flag.

The optional `settings.backup_configuration` subblock supports:

@@ -287,7 +287,7 @@ The optional `settings.ip_configuration.authorized_networks[]` sublist supports:

* `name` - (Optional) A name for this whitelist entry.

-* `value` - (Optional) A CIDR notation IPv4 or IPv6 address that is allowed to
+* `value` - (Required) A CIDR notation IPv4 or IPv6 address that is allowed to
access this instance. Must be set even if other two attributes are not for
the whitelist to become active.

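A sketch covering both changes: a `database_flags` entry with `name` and `value`, and an `authorized_networks` entry with `value`; the instance name, flag, tier, and CIDR are placeholders:

```hcl
resource "google_sql_database_instance" "example" {
  name             = "example-instance"
  database_version = "POSTGRES_9_6"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"

    database_flags {
      # Both fields are now required on each database_flags entry.
      name  = "log_min_duration_statement"
      value = "300"
    }

    ip_configuration {
      authorized_networks {
        name = "office"
        # `value` is now required; `name` stays optional.
        value = "203.0.113.0/24"
      }
    }
  }
}
```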
@@ -104,7 +104,7 @@ The `condition` block supports the following elements, and requires at least one

The `versioning` block supports:

-* `enabled` - (Optional) While set to `true`, versioning is fully enabled for this bucket.
+* `enabled` - (Required) While set to `true`, versioning is fully enabled for this bucket.

The `website` block supports:

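A sketch of a bucket that declares `versioning` with the now-required `enabled` flag; the bucket name is a placeholder and must be globally unique:

```hcl
resource "google_storage_bucket" "example" {
  name     = "example-unique-bucket-name"
  location = "US"

  versioning {
    # Required once the block is present.
    enabled = true
  }
}
```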
