diff --git a/website/docs/cdktf/python/guides/continuous-validation-examples.html.md b/website/docs/cdktf/python/guides/continuous-validation-examples.html.md deleted file mode 100644 index aeb00cf6220..00000000000 --- a/website/docs/cdktf/python/guides/continuous-validation-examples.html.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Using Terraform Cloud's Continuous Validation feature with the AWS Provider" -description: |- - Using Terraform Cloud's Continuous Validation feature with the AWS Provider ---- - - - -# Using Terraform Cloud's Continuous Validation feature with the AWS Provider - -## Continuous Validation in Terraform Cloud - -The Continuous Validation feature in Terraform Cloud (TFC) allows users to make assertions about their infrastructure between applied runs. This helps users to identify issues at the time they first appear and avoid situations where a change is only identified once it causes a customer-facing problem. - -Users can add checks to their Terraform configuration using check blocks. Check blocks contain assertions that are defined with a custom condition expression and an error message. When the condition expression evaluates to true the check passes, but when the expression evaluates to false Terraform will show a warning message that includes the user-defined error message. - -Custom conditions can be created using data from Terraform providers’ resources and data sources. Data can also be combined from multiple sources; for example, you can use checks to monitor expirable resources by comparing a resource’s expiration date attribute to the current time returned by Terraform’s built-in time functions. - -Below, this guide shows examples of how data returned by the AWS provider can be used to define checks in your Terraform configuration. - -## Example - Ensure your AWS account is within budget (aws_budgets_budget) - -AWS Budgets allows you to track and take action on your AWS costs and usage. 
You can use AWS Budgets to monitor your aggregate utilization and coverage metrics for your Reserved Instances (RIs) or Savings Plans.

You can use AWS Budgets to enable simple-to-complex cost and usage tracking. Some examples include:

- Setting a monthly cost budget with a fixed target amount to track all costs associated with your account.

- Setting a monthly cost budget with a variable target amount, with each subsequent month growing the budget target by 5 percent.

- Setting a monthly usage budget with a fixed usage amount and forecasted notifications to help ensure that you are staying within the service limits for a specific service.

- Setting a daily utilization or coverage budget to track your RI or Savings Plans.

The example below shows how a check block can be used to assert that you remain in compliance for the budgets that have been established.

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# The generated body was truncated in this copy. Reading the budget that the
# check's condition refers to would look similar to the following; the import
# path and argument values are illustrative.
from imports.aws.data_aws_budgets_budget import DataAwsBudgetsBudget
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        DataAwsBudgetsBudget(self, "example",
            name="example"
        )
```

If the budget exceeds the set limit, the check block assertion will return a warning similar to the following:

```
│ Warning: Check block assertion failed
│
│   on main.tf line 43, in check "check_budget_exceeded":
│   43:     condition = !data.aws_budgets_budget.example.budget_exceeded
│     ├────────────────
│     │ data.aws_budgets_budget.example.budget_exceeded is true
│
│ AWS budget has been exceeded! Calculated spend: '1550.0' and budget limit: '1200.0'
```

## Example - Check GuardDuty for Threats (aws_guardduty_finding_ids)

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your Amazon Web Services accounts, workloads, and data stored in Amazon S3.
With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in Amazon Web Services Cloud. - -The following example outlines how a check block can be utilized to assert that no threats have been identified from AWS GuardDuty. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_guardduty_detector import DataAwsGuarddutyDetector -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsGuarddutyDetector(self, "example") -``` - -If findings are present, the check block assertion will return a warning similar to the following: - -``` -│ Warning: Check block assertion failed -│ -│ on main.tf line 24, in check "check_guardduty_findings": -│ 24: condition = !data.aws_guardduty_finding_ids.example.has_findings -│ ├──────────────── -│ │ data.aws_guardduty_finding_ids.example.has_findings is true -│ -│ AWS GuardDuty detector 'abcdef123456' has 9 open findings! -``` - -## Example - Check for unused IAM roles (aws_iam_role) - -AWS IAM tracks role usage, including the [last used date and region](https://docs.aws.amazon.com/IAM/latest/APIReference/API_RoleLastUsed.html). This information is returned with the [`aws_iam_role`](../d/iam_role.html.markdown) data source, and can be used in continuous validation to check for unused roles. AWS reports activity for the trailing 400 days. If a role is unused within that period, the `last_used_date` will be an empty string (`""`). 
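
The cutoff logic behind this check can be sketched outside Terraform in plain Python (an illustration of the `timecmp`/`coalesce` pattern described next, not provider code; names are assumed):

```python
from datetime import datetime, timedelta, timezone

def role_is_stale(last_used_date: str, unused_limit_days: int = 30) -> bool:
    """Return True when a role's last_used_date is at or beyond the cutoff.

    Mirrors the guide's coalesce() trick: an empty last_used_date (role
    unused for the trailing 400 days) falls back to the cutoff itself,
    which always fails the "more recent than" test.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=unused_limit_days)
    # coalesce("", cutoff) -> cutoff, so a never-used role counts as stale.
    effective = last_used_date or cutoff.isoformat()
    return datetime.fromisoformat(effective) <= cutoff

recent = (datetime.now(timezone.utc) - timedelta(days=1)).isoformat()
old = (datetime.now(timezone.utc) - timedelta(days=90)).isoformat()
print(role_is_stale(recent))  # False: used within the 30-day window
print(role_is_stale(old))     # True: last use predates the cutoff
print(role_is_stale(""))      # True: no recorded use at all
```

In the actual check, `timecmp` plays the role of the comparison, returning `-1`, `0`, or `1` depending on whether the first timestamp is earlier than, equal to, or later than the second.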
- -In the example below, the [`timecmp`](https://developer.hashicorp.com/terraform/language/functions/timecmp) function checks for a `last_used_date` more recent than the `unused_limit` local variable (30 days ago). The [`coalesce`](https://developer.hashicorp.com/terraform/language/functions/coalesce) function handles empty (`""`) `last_used_date` values safely, falling back to the `unused_limit` local, and automatically triggering a failed condition. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/python/guides/custom-service-endpoints.html.md b/website/docs/cdktf/python/guides/custom-service-endpoints.html.md deleted file mode 100644 index 236660f7eea..00000000000 --- a/website/docs/cdktf/python/guides/custom-service-endpoints.html.md +++ /dev/null @@ -1,400 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Custom Service Endpoint Configuration" -description: |- - Configuring the Terraform AWS Provider to connect to custom AWS service endpoints and AWS compatible solutions. ---- - - - - - -# Custom Service Endpoint Configuration - -The Terraform AWS Provider configuration can be customized to connect to non-default AWS service endpoints and AWS compatible solutions. This may be useful for environments with specific compliance requirements, such as using [AWS FIPS 140-2 endpoints](https://aws.amazon.com/compliance/fips/), connecting to AWS Snowball, SC2S, or C2S environments, or local testing. - -This guide outlines how to get started with customizing endpoints, the available endpoint configurations, and offers example configurations for working with certain local development and testing solutions. 

~> **NOTE:** Support for connecting the Terraform AWS Provider with custom endpoints and AWS compatible solutions is offered as best effort. Individual Terraform resources may require compatibility updates to work in certain environments. Integration testing by HashiCorp during provider changes is exclusively done against default AWS endpoints at this time.

- [Getting Started with Custom Endpoints](#getting-started-with-custom-endpoints)
- [Available Endpoint Customizations](#available-endpoint-customizations)
- [Connecting to Local AWS Compatible Solutions](#connecting-to-local-aws-compatible-solutions)
    - [DynamoDB Local](#dynamodb-local)
    - [LocalStack](#localstack)

## Getting Started with Custom Endpoints

To configure the Terraform AWS Provider to use customized endpoints, add the `endpoints` configuration block within your `provider` declarations, e.g.,

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.provider import AwsProvider, AwsProviderEndpoints
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        AwsProvider(self, "aws",
            endpoints=[AwsProviderEndpoints(
                dynamodb="http://localhost:4569",
                s3="http://localhost:4572"
            )
            ]
        )
```

If multiple, different Terraform AWS Provider configurations are required, see the [Terraform documentation on multiple provider instances](https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-instances) for additional information about the `alias` provider configuration and its usage.

## Available Endpoint Customizations

The Terraform AWS Provider allows the following endpoints to be customized.
- -**Note:** The Provider allows some service endpoints to be customized despite not supporting those services. - -**Note:** For backward compatibility, some endpoints can be assigned using multiple service "keys" (_e.g._, `dms`, `databasemigration`, or `databasemigrationservice`). If you use more than one equivalent service key in your configuration, the provider will use the _first_ endpoint value set. For example, in the configuration below we have set the DMS service endpoints using both `dms` and `databasemigration`. The provider will set the endpoint to whichever appears first. Subsequent values are ignored. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - endpoints=[AwsProviderEndpoints( - databasemigration="http://this.value.will.be.ignored.com", - dms="http://this.value.will.be.used.com" - ) - ] - ) -``` - - - - -
- -
- - -As a convenience, for compatibility with the [Terraform S3 Backend](https://www.terraform.io/language/settings/backends/s3), -the following service endpoints can be configured using environment variables: - -* DynamoDB: `TF_AWS_DYNAMODB_ENDPOINT` (or **Deprecated** `AWS_DYNAMODB_ENDPOINT`) -* IAM: `TF_AWS_IAM_ENDPOINT` (or **Deprecated** `AWS_IAM_ENDPOINT`) -* S3: `TF_AWS_S3_ENDPOINT` (or **Deprecated** `AWS_S3_ENDPOINT`) -* STS: `TF_AWS_STS_ENDPOINT` (or **Deprecated** `AWS_STS_ENDPOINT`) - -## Connecting to Local AWS Compatible Solutions - -~> **NOTE:** This information is not intended to be exhaustive for all local AWS compatible solutions or necessarily authoritative configurations for those documented. Check the documentation for each of these solutions for the most up to date information. - -### DynamoDB Local - -The Amazon DynamoDB service offers a downloadable version for writing and testing applications without accessing the DynamoDB web service. For more information about this solution, see the [DynamoDB Local documentation in the Amazon DynamoDB Developer Guide](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). - -An example provider configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - access_key="mock_access_key", - endpoints=[AwsProviderEndpoints( - dynamodb="http://localhost:8000" - ) - ], - region="us-east-1", - secret_key="mock_secret_key", - skip_credentials_validation=True, - skip_metadata_api_check=Token.as_string(True), - skip_requesting_account_id=True - ) -``` - -### LocalStack - -[LocalStack](https://localstack.cloud/) provides an easy-to-use test/mocking framework for developing Cloud applications. - -An example provider configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - access_key="mock_access_key", - endpoints=[AwsProviderEndpoints( - apigateway="http://localhost:4566", - cloudformation="http://localhost:4566", - cloudwatch="http://localhost:4566", - dynamodb="http://localhost:4566", - es="http://localhost:4566", - firehose="http://localhost:4566", - iam="http://localhost:4566", - kinesis="http://localhost:4566", - lambda_="http://localhost:4566", - redshift="http://localhost:4566", - route53="http://localhost:4566", - s3="http://localhost:4566", - secretsmanager="http://localhost:4566", - ses="http://localhost:4566", - sns="http://localhost:4566", - sqs="http://localhost:4566", - ssm="http://localhost:4566", - stepfunctions="http://localhost:4566", - sts="http://localhost:4566" - ) - ], - region="us-east-1", - s3_use_path_style=True, - secret_key="mock_secret_key", - skip_credentials_validation=True, - 
skip_metadata_api_check=Token.as_string(True), - skip_requesting_account_id=True - ) -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/python/guides/resource-tagging.html.md b/website/docs/cdktf/python/guides/resource-tagging.html.md deleted file mode 100644 index e619e8fa02a..00000000000 --- a/website/docs/cdktf/python/guides/resource-tagging.html.md +++ /dev/null @@ -1,288 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Resource Tagging" -description: |- - Managing resource tags with the Terraform AWS Provider. ---- - - - -# Resource Tagging - -Many AWS services implement [resource tags](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) as an essential part of managing components. These arbitrary key-value pairs can be utilized for billing, ownership, automation, [access control](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html), and many other use cases. Given that these tags are an important aspect of successfully managing an AWS environment, the Terraform AWS Provider implements additional functionality beyond the typical one-to-one resource lifecycle management for easier and more customized implementations. - --> Not all AWS resources support tagging, which can differ across AWS services and even across resources within the same service. Browse the individual Terraform AWS Provider resource documentation pages for the `tags` argument, to see which support resource tagging. If the AWS API implements tagging support for a resource and it is missing from the Terraform AWS Provider resource, a [feature request](https://github.com/hashicorp/terraform-provider-aws/issues/new?labels=enhancement&template=Feature_Request.md) can be submitted. 
- - - -- [Getting Started with Resource Tags](#getting-started-with-resource-tags) -- [Ignoring Changes to Specific Tags](#ignoring-changes-to-specific-tags) - - [Ignoring Changes in Individual Resources](#ignoring-changes-in-individual-resources) - - [Ignoring Changes in All Resources](#ignoring-changes-in-all-resources) -- [Managing Individual Resource Tags](#managing-individual-resource-tags) -- [Propagating Tags to All Resources](#propagating-tags-to-all-resources) - - - -## Getting Started with Resource Tags - -Terraform AWS Provider resources that support resource tags implement a consistent argument named `tags` which accepts a key-value map, e.g., - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.vpc import Vpc -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - Vpc(self, "example", - tags={ - "Name": "MyVPC" - } - ) -``` - -The tags for the resource are wholly managed by Terraform except tag keys beginning with `aws:` as these are managed by AWS services and cannot typically be edited or deleted. Any non-AWS tags added to the VPC outside of Terraform will be proposed for removal on the next Terraform execution. Missing tags or those with incorrect values from the Terraform configuration will be proposed for addition or update on the next Terraform execution. Advanced patterns that can adjust these behaviors for special use cases, such as Terraform AWS Provider configurations that affect all resources and the ability to manage resource tags for resources not managed by Terraform, can be found later in this guide. 
- -For most environments and use cases, this is the typical implementation pattern, whether it be in a standalone Terraform configuration or within a [Terraform Module](https://www.terraform.io/docs/modules/). The Terraform configuration language also enables less repetitive configurations via [variables](https://www.terraform.io/docs/configuration/variables.html), [locals](https://www.terraform.io/docs/configuration/locals.html), or potentially a combination of these, e.g., - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import VariableType, TerraformVariable, Fn, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.vpc import Vpc -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - # Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. - # You can read more about this at https://cdk.tf/variables - additional_tags = TerraformVariable(self, "additional_tags", - default=[{}], - description="Additional resource tags", - type=VariableType.map(VariableType.STRING) - ) - Vpc(self, "example", - tags=Token.as_string_map( - Fn.merge([additional_tags.value, { - "Name": "MyVPC" - } - ])) - ) -``` - -## Ignoring Changes to Specific Tags - -Systems outside of Terraform may automatically interact with the tagging associated with AWS resources. These external systems may be for administrative purposes, such as a Configuration Management Database, or the tagging may be required functionality for those systems, such as Kubernetes. This section shows methods to prevent Terraform from showing differences for specific tags. 
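
Conceptually, each of these methods removes certain tag keys before Terraform compares the configured tags with those found on the resource. A plain-Python sketch of that filtering (a hypothetical helper, not provider code):

```python
def filter_ignored(tags, ignored_keys=(), ignored_prefixes=()):
    """Drop tag keys that should be invisible to the diff.

    Models both styles of ignoring: exact keys (like ignoring a specific
    tag) and key prefixes (like a provider-level prefix rule).
    """
    return {
        key: value
        for key, value in tags.items()
        if key not in ignored_keys
        and not any(key.startswith(prefix) for prefix in ignored_prefixes)
    }

actual_tags = {
    "Name": "MyVPC",
    "LastScanned": "2023-01-01",
    "kubernetes.io/role/elb": "1",
}
print(filter_ignored(actual_tags,
                     ignored_keys=("LastScanned",),
                     ignored_prefixes=("kubernetes.io/",)))
# {'Name': 'MyVPC'}
```

Only the keys that survive the filter participate in the plan, so externally applied tags stop producing perpetual differences.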

### Ignoring Changes in Individual Resources

All Terraform resources support the [`lifecycle` configuration block `ignore_changes` argument](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html#ignore_changes), which can be used to explicitly ignore all tag changes on a resource beyond an initial configuration, or to ignore individual tag values.

In this example, the `Name` tag will be added to the VPC on resource creation; however, any external changes to the `Name` tag value or the addition/removal of any tag (including the `Name` tag) will be ignored:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from cdktf import TerraformResourceLifecycle
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.vpc import Vpc
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        Vpc(self, "example",
            lifecycle=TerraformResourceLifecycle(
                ignore_changes=["tags"]
            ),
            tags={
                "Name": "MyVPC"
            }
        )
```

In this example, the `Name` and `Owner` tags will be added to the VPC on resource creation; however, any external changes to the value of the `Name` tag will be ignored while any changes to other tags (including the `Owner` tag and any additions) will still be proposed:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from cdktf import TerraformResourceLifecycle
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.vpc import Vpc
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        Vpc(self, "example",
            lifecycle=TerraformResourceLifecycle(
                ignore_changes=['tags["Name"]']
            ),
            tags={
                "Name": "MyVPC",
                "Owner": "Operations"
            }
        )
```

### Ignoring Changes in All Resources

As of version 2.60.0 of the Terraform AWS Provider, there is support for ignoring tag changes across all resources under a provider. This simplifies situations where certain tags may be externally applied more globally and enhances functionality beyond `ignore_changes` to support cases such as tag key prefixes.

In this example, all resources will ignore any addition of the `LastScanned` tag:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.provider import AwsProvider, AwsProviderIgnoreTags
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        AwsProvider(self, "aws",
            ignore_tags=[AwsProviderIgnoreTags(
                keys=["LastScanned"]
            )
            ]
        )
```

In this example, all resources will ignore any addition of tags with the `kubernetes.io/` prefix, such as `kubernetes.io/cluster/name` or `kubernetes.io/role/elb`:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.provider import AwsProvider, AwsProviderIgnoreTags
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        AwsProvider(self, "aws",
            ignore_tags=[AwsProviderIgnoreTags(
                key_prefixes=["kubernetes.io/"]
            )
            ]
        )
```

Any of the `ignore_tags` configurations can be combined as needed.

The provider ignore tags configuration applies to all Terraform AWS Provider resources under that particular instance (the `default` provider instance in the above cases). If multiple, different Terraform AWS Provider configurations are being used (e.g., [multiple provider instances](https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-instances)), the ignore tags configuration must be added to all applicable provider configurations.

## Managing Individual Resource Tags

Certain Terraform AWS Provider services support a special resource for managing an individual tag on a resource without managing the resource itself. One example is the [`aws_ec2_tag` resource](/docs/providers/aws/r/ec2_tag.html). These resources enable tagging where resources are created outside Terraform such as EC2 Images (AMIs), shared across accounts via Resource Access Manager (RAM), or implicitly created by other means such as EC2 VPN Connections implicitly creating a taggable EC2 Transit Gateway VPN Attachment.

~> **NOTE:** This is an advanced use case and can cause conflicting management issues when improperly implemented. These individual tag resources should not be combined with the Terraform resource for managing the parent resource. For example, using `aws_vpc` and `aws_ec2_tag` to manage tags of the same VPC will cause a perpetual difference where the `aws_vpc` resource will try to remove the tag being added by the `aws_ec2_tag` resource.

-> Not all services supported by the Terraform AWS Provider implement these resources.
Browse the Terraform AWS Provider resource documentation pages for a resource with a type ending in `_tag`. If there is a use case where this type of resource is missing, a [feature request](https://github.com/hashicorp/terraform-provider-aws/issues/new?labels=enhancement&template=Feature_Request.md) can be submitted. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.ec2_tag import Ec2Tag -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - Ec2Tag(self, "example", - key="Owner", - resource_id=Token.as_string(aws_vpn_connection_example.transit_gateway_attachment_id), - value="Operations" - ) -``` - -To manage multiple tags for a resource in this scenario, [`for_each`](https://www.terraform.io/docs/configuration/meta-arguments/for_each.html) can be used: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformIterator, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.ec2_tag import Ec2Tag -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - # In most cases loops should be handled in the programming language context and - # not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - # you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source - # you need to keep this like it is. 
        # An inline map is passed to for_each here; 'cdktf convert' rendered it
        # as "[object Object]", so representative tag key/value pairs are shown
        # instead (values are illustrative):
        example_for_each_iterator = TerraformIterator.from_map({
            "Name": "MyAttachment",
            "Owner": "Operations"
        })
        Ec2Tag(self, "example",
            key=example_for_each_iterator.key,
            resource_id=Token.as_string(aws_vpn_connection_example.transit_gateway_attachment_id),
            value=Token.as_string(example_for_each_iterator.value),
            for_each=example_for_each_iterator
        )
```

The inline map provided to `for_each` in the example above is used for brevity, but other Terraform configuration language features similar to those noted at the beginning of this guide can be used to make the example more extensible.

## Propagating Tags to All Resources

As of version 3.38.0 of the Terraform AWS Provider, the Terraform configuration language also enables provider-level tagging as an alternative to the methods described in the [Getting Started with Resource Tags](#getting-started-with-resource-tags) section above.
This functionality is available for all Terraform AWS Provider resources that currently support `tags`, with the exception of the [`aws_autoscaling_group`](/docs/providers/aws/r/autoscaling_group.html.markdown) resource. Refactoring the use of [variables](https://www.terraform.io/docs/configuration/variables.html) or [locals](https://www.terraform.io/docs/configuration/locals.html) may look like:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.provider import AwsProvider, AwsProviderDefaultTags
from imports.aws.vpc import Vpc
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        AwsProvider(self, "aws",
            default_tags=[AwsProviderDefaultTags(
                tags={
                    "Environment": "Production",
                    "Owner": "Ops"
                }
            )
            ]
        )
        Vpc(self, "example",
            tags={
                "Name": "MyVPC"
            }
        )
```

In this example, the `Environment` and `Owner` tags defined within the provider configuration block will be added to the VPC on resource creation, in addition to the `Name` tag defined within the VPC resource configuration.
To access all the tags applied to the VPC resource, use the read-only attribute `tags_all`, e.g., `aws_vpc.example.tags_all`.

 \ No newline at end of file
diff --git a/website/docs/cdktf/python/guides/using-aws-with-awscc-provider.html.md b/website/docs/cdktf/python/guides/using-aws-with-awscc-provider.html.md
deleted file mode 100644
index 24b23cdf909..00000000000
--- a/website/docs/cdktf/python/guides/using-aws-with-awscc-provider.html.md
+++ /dev/null
@@ -1,169 +0,0 @@
---
subcategory: ""
layout: "aws"
page_title: "Using the Terraform awscc provider with aws provider"
description: |-
  Using the Terraform AWS and AWSCC providers together.
---

# Using AWS & AWSCC Provider Together

~> **NOTE:** The `awscc` provider is currently in technical preview. This means some aspects of its design and implementation are not yet considered stable for production use. We are actively looking for community feedback in order to identify needed improvements.

The [HashiCorp Terraform AWS Cloud Control Provider](https://registry.terraform.io/providers/hashicorp/awscc/latest) aims to bring Amazon Web Services (AWS) resources to Terraform users faster. The new provider is automatically generated, which means new features and services on AWS can be supported right away.
The AWS Cloud Control provider supports hundreds of AWS resources, with more support being added as AWS service teams adopt the Cloud Control API standard.

For Terraform users managing infrastructure on AWS, we expect the AWSCC provider will be used alongside the existing AWS provider. This guide shows an example of using the providers together to deploy an AWS Cloud WAN Core Network.

For more information about the AWSCC provider, please see the provider documentation in the [Terraform Registry](https://registry.terraform.io/providers/hashicorp/awscc/latest).

- [AWS CloudWAN Overview](#aws-cloud-wan)
- [Specifying Multiple Providers](#specifying-multiple-providers)
    - [First Look at AWSCC Resources](#first-look-at-awscc-resources)
    - [Using AWS and AWSCC Providers Together](#using-aws-and-awscc-providers-together)

## AWS Cloud WAN

In this guide we will deploy [AWS Cloud WAN](https://aws.amazon.com/cloud-wan/) to demonstrate how both AWS & AWSCC can work together. Cloud WAN is a wide area networking (WAN) service that helps you build, manage, and monitor a unified global network that manages traffic running between resources in your cloud and on-premises environments.

With Cloud WAN, you define network policies that are used to create a global network that spans multiple locations and networks—eliminating the need to configure and manage different networks individually using different technologies. Your network policies can be used to specify which of your Amazon Virtual Private Clouds (VPCs) and on-premises locations you wish to connect through AWS VPN or third-party software-defined WAN (SD-WAN) products, and the Cloud WAN central dashboard generates a complete view of the network to monitor network health, security, and performance. Cloud WAN automatically creates a global network across AWS Regions using Border Gateway Protocol (BGP), so you can easily exchange routes around the world.
- -For more information on AWS Cloud WAN, see [the documentation](https://docs.aws.amazon.com/vpc/latest/cloudwan/what-is-cloudwan.html). - -## Specifying Multiple Providers - -Terraform can use many providers at once, as long as they are specified in your `terraform` configuration block: - -```terraform -terraform { - required_version = ">= 1.0.7" - required_providers { - aws = { - source = "hashicorp/aws" - version = ">= 4.9.0" - } - awscc = { - source = "hashicorp/awscc" - version = ">= 0.25.0" - } - } -} -``` - -The code snippet above tells Terraform to download two providers as plugins for the current root module: the AWS and AWSCC providers. You can tell which provider is being used by looking at the resource or data source name prefix. Resources that start with `aws_` use the AWS provider; resources that start with `awscc_` use the AWSCC provider. - -### First look at AWSCC resources - -Let's start by building our [global network](https://aws.amazon.com/about-aws/global-infrastructure/global_network/), which will house our core network. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Fn, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.awscc.networkmanager_global_network import NetworkmanagerGlobalNetwork -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - # The following providers are missing schema information and might need manual adjustments to synthesize correctly: awscc. - # For a more precise conversion please use the --provider flag in convert. 
- terraform_tag = [{ - "key": "terraform", - "value": "true" - } - ] - NetworkmanagerGlobalNetwork(self, "main", - description="My Global Network", - tags=Fn.concat([terraform_tag, [{ - "key": "Name", - "value": "My Global Network" - } - ] - ]) - ) -``` - -Above, we define an `awscc_networkmanager_global_network` with two tags and a description. AWSCC resources use the [standard AWS tag format](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html), which is expressed in HCL as a list of maps with two keys. We want to reuse the `terraform = true` tag, so we define it as a `local`, then use [concat](https://www.terraform.io/language/functions/concat) to join the lists of tags together. - -### Using AWS and AWSCC providers together - -Next, we will create a [core network](https://docs.aws.amazon.com/vpc/latest/cloudwan/cloudwan-core-network-policy.html) using the AWSCC resource `awscc_networkmanager_core_network` and the AWS data source `data.aws_networkmanager_core_network_policy_document`, which lets users write HCL to generate the JSON policy used as the [core network policy](https://docs.aws.amazon.com/vpc/latest/cloudwan/cloudwan-policies-json.html). - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, Fn, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_networkmanager_core_network_policy_document import DataAwsNetworkmanagerCoreNetworkPolicyDocument -from imports.awscc.networkmanager_core_network import NetworkmanagerCoreNetwork -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - # The following providers are missing schema information and might need manual adjustments to synthesize correctly: awscc. 
- # For a more precise conversion please use the --provider flag in convert. - main = DataAwsNetworkmanagerCoreNetworkPolicyDocument(self, "main", - attachment_policies=[DataAwsNetworkmanagerCoreNetworkPolicyDocumentAttachmentPolicies( - action=DataAwsNetworkmanagerCoreNetworkPolicyDocumentAttachmentPoliciesAction( - association_method="constant", - segment="shared" - ), - condition_logic="or", - conditions=[DataAwsNetworkmanagerCoreNetworkPolicyDocumentAttachmentPoliciesConditions( - key="segment", - operator="equals", - type="tag-value", - value="shared" - ) - ], - rule_number=1 - ) - ], - core_network_configuration=[DataAwsNetworkmanagerCoreNetworkPolicyDocumentCoreNetworkConfiguration( - asn_ranges=["64512-64555"], - edge_locations=[DataAwsNetworkmanagerCoreNetworkPolicyDocumentCoreNetworkConfigurationEdgeLocations( - asn=Token.as_string(64512), - location="us-east-1" - ) - ], - vpn_ecmp_support=False - ) - ], - segment_actions=[DataAwsNetworkmanagerCoreNetworkPolicyDocumentSegmentActions( - action="share", - mode="attachment-route", - segment="shared", - share_with=["*"] - ) - ], - segments=[DataAwsNetworkmanagerCoreNetworkPolicyDocumentSegments( - description="SegmentForSharedServices", - name="shared", - require_attachment_acceptance=True - ) - ] - ) - awscc_networkmanager_core_network_main = NetworkmanagerCoreNetwork(self, "main_1", - description="My Core Network", - global_network_id=awscc_networkmanager_global_network_main.id, - policy_document=Fn.jsonencode( - Fn.jsondecode(Token.as_string(main.json))), - tags=terraform_tag - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - awscc_networkmanager_core_network_main.override_logical_id("main") -``` - -Thanks to Terraform's plugin design, the providers work together seamlessly! 
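The tag handling in this guide relies on AWS-format tags being a plain list of `{key, value}` maps that `concat` can join. That merge can be sketched in plain Python (illustrative only; `concat_tags` is our own name, not a cdktf or provider API):

```python
# Sketch of how Terraform's concat() joins AWS-format tag lists:
# each tag is a {"key": ..., "value": ...} map, and merging is just
# list concatenation in order. Illustrative only -- not provider code.

def concat_tags(*tag_lists):
    """Join any number of AWS-format tag lists, preserving order."""
    merged = []
    for tags in tag_lists:
        merged.extend(tags)
    return merged

# The shared tag reused across resources, plus a resource-specific one.
terraform_tag = [{"key": "terraform", "value": "true"}]
name_tag = [{"key": "Name", "value": "My Global Network"}]

print(concat_tags(terraform_tag, name_tag))  # both tags, shared one first
```

Because merging is pure list concatenation, duplicate keys are not deduplicated; order in the merged list follows the order of the arguments, mirroring `concat` in HCL.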
- - \ No newline at end of file diff --git a/website/docs/cdktf/python/guides/version-2-upgrade.html.md b/website/docs/cdktf/python/guides/version-2-upgrade.html.md deleted file mode 100644 index e5f165faac3..00000000000 --- a/website/docs/cdktf/python/guides/version-2-upgrade.html.md +++ /dev/null @@ -1,1256 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 2 Upgrade Guide" -description: |- - Terraform AWS Provider Version 2 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 2 Upgrade Guide - -Version 2.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. This guide is intended to help with that process and focuses only on changes from version 1.60.0 to version 2.0.0. - -Most of the changes outlined in this guide have been previously marked as deprecated in the Terraform plan/apply output throughout previous provider releases. These changes, such as deprecation notices, can always be found in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md). 
- -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Provider: Configuration](#provider-configuration) -- [Data Source: aws_ami](#data-source-aws_ami) -- [Data Source: aws_ami_ids](#data-source-aws_ami_ids) -- [Data Source: aws_iam_role](#data-source-aws_iam_role) -- [Data Source: aws_kms_secret](#data-source-aws_kms_secret) -- [Data Source: aws_lambda_function](#data-source-aws_lambda_function) -- [Data Source: aws_region](#data-source-aws_region) -- [Resource: aws_api_gateway_api_key](#resource-aws_api_gateway_api_key) -- [Resource: aws_api_gateway_integration](#resource-aws_api_gateway_integration) -- [Resource: aws_api_gateway_integration_response](#resource-aws_api_gateway_integration_response) -- [Resource: aws_api_gateway_method](#resource-aws_api_gateway_method) -- [Resource: aws_api_gateway_method_response](#resource-aws_api_gateway_method_response) -- [Resource: aws_appautoscaling_policy](#resource-aws_appautoscaling_policy) -- [Resource: aws_autoscaling_policy](#resource-aws_autoscaling_policy) -- [Resource: aws_batch_compute_environment](#resource-aws_batch_compute_environment) -- [Resource: aws_cloudfront_distribution](#resource-aws_cloudfront_distribution) -- [Resource: aws_cognito_user_pool](#resource-aws_cognito_user_pool) -- [Resource: aws_dx_lag](#resource-aws_dx_lag) -- [Resource: aws_ecs_service](#resource-aws_ecs_service) -- [Resource: aws_efs_file_system](#resource-aws_efs_file_system) -- [Resource: aws_elasticache_cluster](#resource-aws_elasticache_cluster) -- [Resource: aws_iam_user_login_profile](#resource-aws_iam_user_login_profile) -- [Resource: aws_instance](#resource-aws_instance) -- [Resource: aws_lambda_function](#resource-aws_lambda_function) -- [Resource: aws_lambda_layer_version](#resource-aws_lambda_layer_version) -- [Resource: aws_network_acl](#resource-aws_network_acl) -- [Resource: aws_redshift_cluster](#resource-aws_redshift_cluster) -- [Resource: 
aws_route_table](#resource-aws_route_table) -- [Resource: aws_route53_record](#resource-aws_route53_record) -- [Resource: aws_route53_zone](#resource-aws_route53_zone) -- [Resource: aws_wafregional_byte_match_set](#resource-aws_wafregional_byte_match_set) - - - -## Provider Version Configuration - --> Before upgrading to version 2.0.0 or later, it is recommended to upgrade to the most recent 1.X version of the provider (version 1.60.0) and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html) without unexpected changes or deprecation notices. - -We recommend using [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init`](https://www.terraform.io/docs/commands/init.html) to download the new version. - -Update to latest 1.X version: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws") -``` - -Update to latest 2.X version: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws") -``` - -## Provider: Configuration - -### skip_requesting_account_id Argument Now Required to Skip Account ID Lookup Errors - -If the provider is unable to determine the AWS account ID from a provider assume role configuration or the STS GetCallerIdentity call used to verify the credentials (if `skip_credentials_validation = false`), it will attempt to look up the AWS account ID via EC2 metadata, IAM GetUser, IAM ListRoles, and STS GetCallerIdentity. Previously, the provider would silently allow all of the above methods to fail. - -The provider will now return an error to ensure operators understand the implications of a missing AWS account ID in the provider. - -If necessary, the AWS account ID lookup logic can be skipped via: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - skip_requesting_account_id=True - ) -``` - -## Data Source: aws_ami - -### owners Argument Now Required - -The `owners` argument is now required. Specifying `owner-id` or `owner-alias` under `filter` does not satisfy this requirement. - -## Data Source: aws_ami_ids - -### owners Argument Now Required - -The `owners` argument is now required. Specifying `owner-id` or `owner-alias` under `filter` does not satisfy this requirement. 
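Stepping back to the `skip_requesting_account_id` behavior described above: the lookup-then-error logic amounts to a fallback chain over the available lookup methods. A hypothetical plain-Python sketch (function names are ours, not provider source):

```python
# Hypothetical sketch of the account-ID resolution described above:
# try each lookup method in order (EC2 metadata, IAM GetUser,
# IAM ListRoles, STS GetCallerIdentity) and, unlike pre-2.0.0
# releases, fail loudly when every method fails.

def resolve_account_id(lookups, skip_requesting_account_id=False):
    if skip_requesting_account_id:
        return None  # operator opted out of the lookup entirely
    for lookup in lookups:
        try:
            return lookup()
        except Exception:
            continue  # fall through to the next method
    raise RuntimeError(
        "unable to determine AWS account ID; "
        "set skip_requesting_account_id = true to skip the lookup"
    )

def metadata_lookup():  # stands in for the EC2 metadata call
    raise OSError("no instance metadata available")

def sts_lookup():  # stands in for STS GetCallerIdentity
    return "123456789012"

print(resolve_account_id([metadata_lookup, sts_lookup]))  # -> 123456789012
```

The 1.x behavior corresponds to returning `None` instead of raising at the end of the chain; 2.0.0 makes the all-methods-failed case an explicit error unless skipping is requested.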
- -## Data Source: aws_iam_role - -### assume_role_policy_document Attribute Removal - -Switch your attribute references to the `assume_role_policy` attribute instead. - -### role_id Attribute Removal - -Switch your attribute references to the `unique_id` attribute instead. - -### role_name Argument Removal - -Switch your Terraform configuration to the `name` argument instead. - -## Data Source: aws_kms_secret - -### Data Source Removal and Migrating to aws_kms_secrets Data Source - -The implementation of the `aws_kms_secret` data source, prior to Terraform AWS provider version 2.0.0, used dynamic attribute behavior, which is not supported with Terraform 0.12 and beyond (full details available in [this GitHub issue](https://github.com/hashicorp/terraform-provider-aws/issues/5144)). - -Terraform configuration migration steps: - -* Change the data source type from `aws_kms_secret` to `aws_kms_secrets` -* Change any attribute reference (e.g., `"${data.aws_kms_secret.example.ATTRIBUTE}"`) from `.ATTRIBUTE` to `.plaintext["ATTRIBUTE"]` - -As an example, let's take the sample configuration below and migrate it. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_kms_secret import DataAwsKmsSecret -from imports.aws.rds_cluster import RdsCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, engine): - super().__init__(scope, name) - example = DataAwsKmsSecret(self, "example", - secret=[DataAwsKmsSecretSecret( - name="master_password", - payload="AQEC..." - ), DataAwsKmsSecretSecret( - name="master_username", - payload="AQEC..." 
- ) - ] - ) - aws_rds_cluster_example = RdsCluster(self, "example_1", - master_password=Token.as_string(example.master_password), - master_username=Token.as_string(example.master_username), - engine=engine - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_rds_cluster_example.override_logical_id("example") -``` - -Notice that the `aws_kms_secret` data source previously was taking the two `secret` configuration block `name` arguments and generating those as attribute names (`master_password` and `master_username` in this case). To remove the incompatible behavior, this updated version of the data source provides the decrypted value of each of those `secret` configuration block `name` arguments within a map attribute named `plaintext`. - -Updating the sample configuration from above: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Fn, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_kms_secrets import DataAwsKmsSecrets -from imports.aws.rds_cluster import RdsCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, engine): - super().__init__(scope, name) - example = DataAwsKmsSecrets(self, "example", - secret=[DataAwsKmsSecretsSecret( - name="master_password", - payload="AQEC..." - ), DataAwsKmsSecretsSecret( - name="master_username", - payload="AQEC..." - ) - ] - ) - aws_rds_cluster_example = RdsCluster(self, "example_1", - master_password=Token.as_string( - Fn.lookup_nested(example.plaintext, ["\"master_password\""])), - master_username=Token.as_string( - Fn.lookup_nested(example.plaintext, ["\"master_username\""])), - engine=engine - ) - # This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match. - aws_rds_cluster_example.override_logical_id("example") -``` - -## Data Source: aws_lambda_function - -### arn and qualified_arn Attribute Behavior Changes - -The `arn` attribute now always returns the unqualified (no `:QUALIFIER` or `:VERSION` suffix) ARN value and the `qualified_arn` attribute now always returns the qualified (includes `:QUALIFIER` or `:VERSION` suffix) ARN value. Previously by default, the `arn` attribute included `:$LATEST` suffix when not setting the optional `qualifier` argument, which was not compatible with many other resources. To restore the previous default behavior, set the `qualifier` argument to `$LATEST` and reference the `qualified_arn` attribute. - -## Data Source: aws_region - -### current Argument Removal - -Simply remove `current = true` from your Terraform configuration. The data source defaults to the current provider region if no other filtering is enabled. - -## Resource: aws_api_gateway_api_key - -### stage_key Argument Removal - -Since the API Gateway usage plans feature was launched on August 11, 2016, usage plans are now required to associate an API key with an API stage. To migrate your Terraform configuration, the AWS provider implements support for usage plans with the following resources: - -* [`aws_api_gateway_usage_plan`](/docs/providers/aws/r/api_gateway_usage_plan.html) -* [`aws_api_gateway_usage_plan_key`](/docs/providers/aws/r/api_gateway_usage_plan_key.html) - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.api_gateway_api_key import ApiGatewayApiKey -from imports.aws.api_gateway_deployment import ApiGatewayDeployment -from imports.aws.api_gateway_rest_api import ApiGatewayRestApi -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = ApiGatewayRestApi(self, "example", - name="example" - ) - aws_api_gateway_deployment_example = ApiGatewayDeployment(self, "example_1", - rest_api_id=example.id, - stage_name="example" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_api_gateway_deployment_example.override_logical_id("example") - aws_api_gateway_api_key_example = ApiGatewayApiKey(self, "example_2", - name="example", - stage_key=[{ - "rest_api_id": example.id, - "stage_name": aws_api_gateway_deployment_example.stage_name - } - ] - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_api_gateway_api_key_example.override_logical_id("example") -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.api_gateway_api_key import ApiGatewayApiKey -from imports.aws.api_gateway_deployment import ApiGatewayDeployment -from imports.aws.api_gateway_rest_api import ApiGatewayRestApi -from imports.aws.api_gateway_usage_plan import ApiGatewayUsagePlan -from imports.aws.api_gateway_usage_plan_key import ApiGatewayUsagePlanKey -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = ApiGatewayApiKey(self, "example", - name="example" - ) - aws_api_gateway_rest_api_example = ApiGatewayRestApi(self, "example_1", - name="example" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_api_gateway_rest_api_example.override_logical_id("example") - aws_api_gateway_deployment_example = ApiGatewayDeployment(self, "example_2", - rest_api_id=Token.as_string(aws_api_gateway_rest_api_example.id), - stage_name="example" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_api_gateway_deployment_example.override_logical_id("example") - aws_api_gateway_usage_plan_example = ApiGatewayUsagePlan(self, "example_3", - api_stages=[ApiGatewayUsagePlanApiStages( - api_id=Token.as_string(aws_api_gateway_rest_api_example.id), - stage=Token.as_string(aws_api_gateway_deployment_example.stage_name) - ) - ], - name="example" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_api_gateway_usage_plan_example.override_logical_id("example") - aws_api_gateway_usage_plan_key_example = ApiGatewayUsagePlanKey(self, "example_4", - key_id=example.id, - key_type="API_KEY", - usage_plan_id=Token.as_string(aws_api_gateway_usage_plan_example.id) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. 
- aws_api_gateway_usage_plan_key_example.override_logical_id("example") -``` - -## Resource: aws_api_gateway_integration - -### request_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `request_parameters` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.api_gateway_integration import ApiGatewayIntegration -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, httpMethod, resourceId, restApiId, type): - super().__init__(scope, name) - ApiGatewayIntegration(self, "example", - request_parameters_in_json="{\n \"integration.request.header.X-Authorization\": \"'static'\"\n}\n\n", - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id, - type=type - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.api_gateway_integration import ApiGatewayIntegration -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, httpMethod, resourceId, restApiId, type): - super().__init__(scope, name) - ApiGatewayIntegration(self, "example", - request_parameters={ - "integration.request.header.X-Authorization": "'static'" - }, - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id, - type=type - ) -``` - -## Resource: aws_api_gateway_integration_response - -### response_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `response_parameters` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.api_gateway_integration_response import ApiGatewayIntegrationResponse -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, httpMethod, resourceId, restApiId, statusCode): - super().__init__(scope, name) - ApiGatewayIntegrationResponse(self, "example", - response_parameters_in_json="{\n \"method.response.header.Content-Type\": \"integration.response.body.type\"\n}\n\n", - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id, - status_code=status_code - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.api_gateway_integration_response import ApiGatewayIntegrationResponse -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, httpMethod, resourceId, restApiId, statusCode): - super().__init__(scope, name) - ApiGatewayIntegrationResponse(self, "example", - response_parameters={ - "method.response.header.Content-Type": "integration.response.body.type" - }, - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id, - status_code=status_code - ) -``` - -## Resource: aws_api_gateway_method - -### request_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `request_parameters` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.api_gateway_method import ApiGatewayMethod -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, authorization, httpMethod, resourceId, restApiId): - super().__init__(scope, name) - ApiGatewayMethod(self, "example", - request_parameters_in_json="{\n \"method.request.header.Content-Type\": false,\n \"method.request.querystring.page\": true\n}\n\n", - authorization=authorization, - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.api_gateway_method import ApiGatewayMethod -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, authorization, httpMethod, resourceId, restApiId): - super().__init__(scope, name) - ApiGatewayMethod(self, "example", - request_parameters={ - "method.request.header.Content-Type": False, - "method.request.querystring.page": True - }, - authorization=authorization, - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id - ) -``` - -## Resource: aws_api_gateway_method_response - -### response_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `response_parameters` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.api_gateway_method_response import ApiGatewayMethodResponse -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, httpMethod, resourceId, restApiId, statusCode): - super().__init__(scope, name) - ApiGatewayMethodResponse(self, "example", - response_parameters_in_json="{\n \"method.response.header.Content-Type\": true\n}\n\n", - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id, - status_code=status_code - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.api_gateway_method_response import ApiGatewayMethodResponse -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, httpMethod, resourceId, restApiId, statusCode): - super().__init__(scope, name) - ApiGatewayMethodResponse(self, "example", - response_parameters={ - "method.response.header.Content-Type": True - }, - http_method=http_method, - resource_id=resource_id, - rest_api_id=rest_api_id, - status_code=status_code - ) -``` - -## Resource: aws_appautoscaling_policy - -### Argument Removals - -The following arguments have been moved into a nested argument named `step_scaling_policy_configuration`: - -* `adjustment_type` -* `cooldown` -* `metric_aggregation_type` -* `min_adjustment_magnitude` -* `step_adjustment` - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Op, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.appautoscaling_policy import AppautoscalingPolicy -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, resourceId, scalableDimension, serviceNamespace): - super().__init__(scope, name) - AppautoscalingPolicy(self, "example", - adjustment_type="ChangeInCapacity", - cooldown=60, - metric_aggregation_type="Maximum", - step_adjustment=[{ - "metric_interval_upper_bound": 0, - "scaling_adjustment": Op.negate(1) - } - ], - name=name, - resource_id=resource_id, - scalable_dimension=scalable_dimension, - service_namespace=service_namespace - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, Op, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. 
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.appautoscaling_policy import AppautoscalingPolicy -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, resourceId, scalableDimension, serviceNamespace): - super().__init__(scope, name) - AppautoscalingPolicy(self, "example", - step_scaling_policy_configuration=AppautoscalingPolicyStepScalingPolicyConfiguration( - adjustment_type="ChangeInCapacity", - cooldown=60, - metric_aggregation_type="Maximum", - step_adjustment=[AppautoscalingPolicyStepScalingPolicyConfigurationStepAdjustment( - metric_interval_upper_bound=Token.as_string(0), - scaling_adjustment=Token.as_number(Op.negate(1)) - ) - ] - ), - name=name, - resource_id=resource_id, - scalable_dimension=scalable_dimension, - service_namespace=service_namespace - ) -``` - -## Resource: aws_autoscaling_policy - -### min_adjustment_step Argument Removal - -Switch your Terraform configuration to the `min_adjustment_magnitude` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.autoscaling_policy import AutoscalingPolicy -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, autoscalingGroupName, name): - super().__init__(scope, name) - AutoscalingPolicy(self, "example", - min_adjustment_step=2, - autoscaling_group_name=autoscaling_group_name, - name=name - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. 
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.autoscaling_policy import AutoscalingPolicy -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, autoscalingGroupName, name): - super().__init__(scope, name) - AutoscalingPolicy(self, "example", - min_adjustment_magnitude=2, - autoscaling_group_name=autoscaling_group_name, - name=name - ) -``` - -## Resource: aws_batch_compute_environment - -### ecc_cluster_arn Attribute Removal - -Switch your attribute references to the `ecs_cluster_arn` attribute instead. - -## Resource: aws_cloudfront_distribution - -### cache_behavior Argument Removal - -Switch your Terraform configuration to the `ordered_cache_behavior` argument instead. It behaves similarly to the previous `cache_behavior` argument; however, the ordering of the configurations in Terraform is now reflected in the distribution, where previously it was indeterminate. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.cloudfront_distribution import CloudfrontDistribution -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, defaultCacheBehavior, enabled, origin, restrictions, viewerCertificate): - super().__init__(scope, name) - CloudfrontDistribution(self, "example", - cache_behavior=[{}, {}], - default_cache_behavior=default_cache_behavior, - enabled=enabled, - origin=origin, - restrictions=restrictions, - viewer_certificate=viewer_certificate - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.cloudfront_distribution import CloudfrontDistribution -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, allowedMethods, cachedMethods, pathPattern, targetOriginId, viewerProtocolPolicy, allowedMethods1, cachedMethods1, pathPattern1, targetOriginId1, viewerProtocolPolicy1, defaultCacheBehavior, enabled, origin, restrictions, viewerCertificate): - super().__init__(scope, name) - CloudfrontDistribution(self, "example", - ordered_cache_behavior=[CloudfrontDistributionOrderedCacheBehavior( - allowed_methods=allowed_methods, - cached_methods=cached_methods, - path_pattern=path_pattern, - target_origin_id=target_origin_id, - viewer_protocol_policy=viewer_protocol_policy - ), CloudfrontDistributionOrderedCacheBehavior( - allowed_methods=allowed_methods1, - cached_methods=cached_methods1, - path_pattern=path_pattern1, - target_origin_id=target_origin_id1, - viewer_protocol_policy=viewer_protocol_policy1 - ) - ], - default_cache_behavior=default_cache_behavior, - enabled=enabled, - origin=origin, - restrictions=restrictions, - viewer_certificate=viewer_certificate - ) -``` - -## Resource: aws_cognito_user_pool - -### email_verification_subject Argument Now Conflicts With verification_message_template Configuration Block email_subject Argument - -Choose one argument or the other. These arguments update the same underlying information in Cognito and the selection is indeterminate if differing values are provided. - -### email_verification_message Argument Now Conflicts With verification_message_template Configuration Block email_message Argument - -Choose one argument or the other. 
These arguments update the same underlying information in Cognito and the selection is indeterminate if differing values are provided. - -### sms_verification_message Argument Now Conflicts With verification_message_template Configuration Block sms_message Argument - -Choose one argument or the other. These arguments update the same underlying information in Cognito and the selection is indeterminate if differing values are provided. - -## Resource: aws_dx_lag - -### number_of_connections Argument Removal - -Default connections have been removed as part of LAG creation. To migrate your Terraform configuration, the AWS provider implements the following resources: - -* [`aws_dx_connection`](/docs/providers/aws/r/dx_connection.html) -* [`aws_dx_connection_association`](/docs/providers/aws/r/dx_connection_association.html) - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.dx_lag import DxLag -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DxLag(self, "example", - connections_bandwidth="1Gbps", - location="EqSe2-EQ", - name="example", - number_of_connections=1 - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.dx_connection import DxConnection -from imports.aws.dx_connection_association import DxConnectionAssociation -from imports.aws.dx_lag import DxLag -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = DxConnection(self, "example", - bandwidth="1Gbps", - location="EqSe2-EQ", - name="example" - ) - aws_dx_lag_example = DxLag(self, "example_1", - connections_bandwidth="1Gbps", - location="EqSe2-EQ", - name="example" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_dx_lag_example.override_logical_id("example") - aws_dx_connection_association_example = DxConnectionAssociation(self, "example_2", - connection_id=example.id, - lag_id=Token.as_string(aws_dx_lag_example.id) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_dx_connection_association_example.override_logical_id("example") -``` - -## Resource: aws_ecs_service - -### placement_strategy Argument Removal - -Switch your Terraform configuration to the `ordered_placement_strategy` argument instead. It behaves similarly to the previous `placement_strategy` argument; however, the ordering of the configurations in Terraform is now reflected in the service, where previously it was indeterminate. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.ecs_service import EcsService -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name): - super().__init__(scope, name) - EcsService(self, "example", - placement_strategy=[{}, {}], - name=name - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.ecs_service import EcsService -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, type, type1, name): - super().__init__(scope, name) - EcsService(self, "example", - ordered_placement_strategy=[EcsServiceOrderedPlacementStrategy( - type=type - ), EcsServiceOrderedPlacementStrategy( - type=type1 - ) - ], - name=name - ) -``` - -## Resource: aws_efs_file_system - -### reference_name Argument Removal - -Switch your Terraform configuration to the `creation_token` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.efs_file_system import EfsFileSystem -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - EfsFileSystem(self, "example", - reference_name="example" - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. 
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.efs_file_system import EfsFileSystem -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - EfsFileSystem(self, "example", - creation_token="example" - ) -``` - -## Resource: aws_elasticache_cluster - -### availability_zones Argument Removal - -Switch your Terraform configuration to the `preferred_availability_zones` argument instead. The argument is still optional and the API will continue to automatically choose Availability Zones for nodes if not specified. The new argument will also continue to match the API's required behavior that the length of the list must equal `num_cache_nodes`. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.elasticache_cluster import ElasticacheCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, clusterId): - super().__init__(scope, name) - ElasticacheCluster(self, "example", - availability_zones=["us-west-2a", "us-west-2b"], - cluster_id=cluster_id - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.elasticache_cluster import ElasticacheCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, clusterId): - super().__init__(scope, name) - ElasticacheCluster(self, "example", - preferred_availability_zones=["us-west-2a", "us-west-2b"], - cluster_id=cluster_id - ) -``` - -## Resource: aws_iam_user_login_profile - -### Import Now Required For Existing Infrastructure - -When attempting to bring existing IAM User Login Profiles under Terraform management, `terraform import` is now required. See the [`aws_iam_user_login_profile` resource documentation](https://www.terraform.io/docs/providers/aws/r/iam_user_login_profile.html) for more information. - -## Resource: aws_instance - -### network_interface_id Attribute Removal - -Switch your attribute references to the `primary_network_interface_id` attribute instead. - -## Resource: aws_lambda_function - -### reserved_concurrent_executions Argument Behavior Change - -Setting `reserved_concurrent_executions` to `0` will now disable Lambda Function invocations, causing downtime for the Lambda Function. - -Previously `reserved_concurrent_executions` accepted `0` and below for unreserved concurrency, which means it was not previously possible to disable invocations. The argument now differentiates between a new value for unreserved concurrency (`-1`) and disabling Lambda invocations (`0`). If previously configuring this value to `0` for unreserved concurrency, update the configured value to `-1` or the resource will disable Lambda Function invocations on update. If previously unconfigured, the argument does not require any changes. - -See the [Lambda User Guide](https://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html) for more information about concurrency. - -## Resource: aws_lambda_layer_version - -### arn and layer_arn Attribute Value Swap - -Switch your `arn` attribute references to the `layer_arn` attribute instead and vice-versa. 
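
The `reserved_concurrent_executions` value semantics described above for `aws_lambda_function` can be summarized with a small plain-Python helper. This is a hypothetical illustration of the version 2.0.0 meaning of each value, not provider code:

```python
def describe_reserved_concurrency(value: int) -> str:
    """Illustrative sketch of the version 2.0.0 semantics of the
    aws_lambda_function reserved_concurrent_executions argument."""
    if value == -1:
        return "unreserved account concurrency"  # the new sentinel value
    if value == 0:
        return "invocations disabled"  # previously this meant unreserved
    return f"{value} reserved concurrent executions"

print(describe_reserved_concurrency(-1))  # unreserved account concurrency
print(describe_reserved_concurrency(0))   # invocations disabled
```

If your configuration set this argument to `0` under the old semantics, the helper makes the migration visible: the same value now maps to "invocations disabled", so the configured value must be updated to `-1`.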
- -## Resource: aws_network_acl - -### subnet_id Argument Removal - -Switch your Terraform configuration to the `subnet_ids` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.network_acl import NetworkAcl -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, vpcId): - super().__init__(scope, name) - NetworkAcl(self, "example", - subnet_id="subnet-12345678", - vpc_id=vpc_id - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.network_acl import NetworkAcl -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, vpcId): - super().__init__(scope, name) - NetworkAcl(self, "example", - subnet_ids=["subnet-12345678"], - vpc_id=vpc_id - ) -``` - -## Resource: aws_redshift_cluster - -### Argument Removals - -The following arguments have been moved into a nested argument named `logging`: - -* `bucket_name` -* `enable_logging` (also renamed to just `enable`) -* `s3_key_prefix` - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.redshift_cluster import RedshiftCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, clusterIdentifier, nodeType): - super().__init__(scope, name) - RedshiftCluster(self, "example", - bucket_name="example", - enable_logging=True, - s3_key_prefix="example", - cluster_identifier=cluster_identifier, - node_type=node_type - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.redshift_cluster import RedshiftCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, clusterIdentifier, nodeType): - super().__init__(scope, name) - RedshiftCluster(self, "example", - logging=RedshiftClusterLogging( - bucket_name="example", - enable=True, - s3_key_prefix="example" - ), - cluster_identifier=cluster_identifier, - node_type=node_type - ) -``` - -## Resource: aws_route_table - -### Import Change - -Previously, importing this resource resulted in an `aws_route` resource for each route, in -addition to the `aws_route_table`, in the Terraform state. Support for importing `aws_route` resources has been added and importing this resource only adds the `aws_route_table` -resource, with in-line routes, to the state. - -## Resource: aws_route53_record - -### allow_overwrite Default Value Change - -The resource now requires existing Route 53 Records to be imported into the Terraform state for management unless the `allow_overwrite` argument is enabled. - -For example, if the `www.example.com` Route 53 Record in the `example.com` Route 53 Hosted Zone existed previously and this new Terraform configuration was introduced: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.route53_record import Route53Record -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, type, zoneId): - super().__init__(scope, name) - Route53Record(self, "www", - name="www.example.com", - type=type, - zone_id=zone_id - ) -``` - -During resource creation in version 1.X and prior, it would silently perform an `UPSERT` changeset to the existing Route 53 Record and not report back an error. In version 2.0.0 of the Terraform AWS Provider, the resource now performs a `CREATE` changeset, which will error for existing Route 53 Records. - -The `allow_overwrite` argument provides a workaround to keep the old behavior, but most existing workflows should be updated to perform a `terraform import` command like the following instead: - -```console -$ terraform import aws_route53_record.www ZONEID_www.example.com_TYPE -``` - -More information can be found in the [`aws_route53_record` resource documentation](https://www.terraform.io/docs/providers/aws/r/route53_record.html#import). - -## Resource: aws_route53_zone - -### vpc_id and vpc_region Argument Removal - -Switch your Terraform configuration to `vpc` configuration block(s) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.route53_zone import Route53Zone -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name): - super().__init__(scope, name) - Route53Zone(self, "example", - vpc_id="...", - name=name - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.route53_zone import Route53Zone -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name): - super().__init__(scope, name) - Route53Zone(self, "example", - vpc=[Route53ZoneVpc( - vpc_id="..." - ) - ], - name=name - ) -``` - -## Resource: aws_wafregional_byte_match_set - -### byte_match_tuple Argument Removal - -Switch your Terraform configuration to the `byte_match_tuples` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.wafregional_byte_match_set import WafregionalByteMatchSet -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name): - super().__init__(scope, name) - WafregionalByteMatchSet(self, "example", - byte_match_tuple=[{}, {}], - name=name - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.wafregional_byte_match_set import WafregionalByteMatchSet -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, fieldToMatch, positionalConstraint, textTransformation, fieldToMatch1, positionalConstraint1, textTransformation1, name): - super().__init__(scope, name) - WafregionalByteMatchSet(self, "example", - byte_match_tuples=[WafregionalByteMatchSetByteMatchTuples( - field_to_match=field_to_match, - positional_constraint=positional_constraint, - text_transformation=text_transformation - ), WafregionalByteMatchSetByteMatchTuples( - field_to_match=field_to_match1, - positional_constraint=positional_constraint1, - text_transformation=text_transformation1 - ) - ], - name=name - ) -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/python/guides/version-3-upgrade.html.md b/website/docs/cdktf/python/guides/version-3-upgrade.html.md deleted file mode 100644 index 2a0a27a5a81..00000000000 --- a/website/docs/cdktf/python/guides/version-3-upgrade.html.md +++ /dev/null @@ -1,2104 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 3 Upgrade Guide" -description: |- - Terraform AWS Provider Version 3 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 3 Upgrade Guide - -Version 3.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. This guide is intended to help with that process and focuses only on changes from version 2.X to version 3.0.0. See the [Version 2 Upgrade Guide](/docs/providers/aws/guides/version-2-upgrade.html) for information about upgrading from 1.X to version 2.0.0. - -Most of the changes outlined in this guide have been previously marked as deprecated in the Terraform plan/apply output throughout previous provider releases. 
These changes, such as deprecation notices, can always be found in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md). - -~> **NOTE:** Version 3.0.0 and later of the AWS Provider can only be automatically installed on Terraform 0.12 and later. - -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Provider Authentication Updates](#provider-authentication-updates) -- [Provider Custom Service Endpoint Updates](#provider-custom-service-endpoint-updates) -- [Data Source: aws_availability_zones](#data-source-aws_availability_zones) -- [Data Source: aws_lambda_invocation](#data-source-aws_lambda_invocation) -- [Data Source: aws_launch_template](#data-source-aws_launch_template) -- [Data Source: aws_route53_resolver_rule](#data-source-aws_route53_resolver_rule) -- [Data Source: aws_route53_zone](#data-source-aws_route53_zone) -- [Resource: aws_acm_certificate](#resource-aws_acm_certificate) -- [Resource: aws_api_gateway_method_settings](#resource-aws_api_gateway_method_settings) -- [Resource: aws_autoscaling_group](#resource-aws_autoscaling_group) -- [Resource: aws_cloudfront_distribution](#resource-aws_cloudfront_distribution) -- [Resource: aws_cloudwatch_log_group](#resource-aws_cloudwatch_log_group) -- [Resource: aws_codepipeline](#resource-aws_codepipeline) -- [Resource: aws_cognito_user_pool](#resource-aws_cognito_user_pool) -- [Resource: aws_dx_gateway](#resource-aws_dx_gateway) -- [Resource: aws_dx_gateway_association](#resource-aws_dx_gateway_association) -- [Resource: aws_dx_gateway_association_proposal](#resource-aws_dx_gateway_association_proposal) -- [Resource: aws_ebs_volume](#resource-aws_ebs_volume) -- [Resource: aws_elastic_transcoder_preset](#resource-aws_elastic_transcoder_preset) -- [Resource: aws_emr_cluster](#resource-aws_emr_cluster) -- [Resource: aws_glue_job](#resource-aws_glue_job) -- [Resource: 
aws_iam_access_key](#resource-aws_iam_access_key) -- [Resource: aws_iam_instance_profile](#resource-aws_iam_instance_profile) -- [Resource: aws_iam_server_certificate](#resource-aws_iam_server_certificate) -- [Resource: aws_instance](#resource-aws_instance) -- [Resource: aws_lambda_alias](#resource-aws_lambda_alias) -- [Resource: aws_launch_template](#resource-aws_launch_template) -- [Resource: aws_lb_listener_rule](#resource-aws_lb_listener_rule) -- [Resource: aws_msk_cluster](#resource-aws_msk_cluster) -- [Resource: aws_rds_cluster](#resource-aws_rds_cluster) -- [Resource: aws_route53_resolver_rule](#resource-aws_route53_resolver_rule) -- [Resource: aws_route53_zone](#resource-aws_route53_zone) -- [Resource: aws_s3_bucket](#resource-aws_s3_bucket) -- [Resource: aws_s3_bucket_metric](#resource-aws_s3_bucket_metric) -- [Resource: aws_security_group](#resource-aws_security_group) -- [Resource: aws_sns_platform_application](#resource-aws_sns_platform_application) -- [Resource: aws_spot_fleet_request](#resource-aws_spot_fleet_request) - - - -## Provider Version Configuration - --> Before upgrading to version 3.0.0, it is recommended to upgrade to the most recent 2.X version of the provider and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html) without unexpected changes or deprecation notices. - -We recommend using [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init`](https://www.terraform.io/docs/commands/init.html) to download the new version. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws") -``` - -Update to latest 3.X version: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws") -``` - -## Provider Authentication Updates - -### Authentication Ordering - -Previously, the provider preferred credentials in the following order: - -- Static credentials (those defined in the Terraform configuration) -- Environment variables (e.g., `AWS_ACCESS_KEY_ID` or `AWS_PROFILE`) -- Shared credentials file (e.g., `~/.aws/credentials`) -- EC2 Instance Metadata Service -- Default AWS Go SDK handling (shared configuration, CodeBuild/ECS/EKS) - -The provider now prefers the following credential ordering: - -- Static credentials (those defined in the Terraform configuration) -- Environment variables (e.g., `AWS_ACCESS_KEY_ID` or `AWS_PROFILE`) -- Shared credentials and/or configuration file (e.g., `~/.aws/credentials` and `~/.aws/config`) -- Default AWS Go SDK handling (shared configuration, CodeBuild/ECS/EKS, EC2 Instance Metadata Service) - -This means workarounds of disabling the EC2 Instance Metadata Service handling to enable CodeBuild/ECS/EKS credentials or to enable other credential methods such as 
`credential_process` in the AWS shared configuration are no longer necessary. - -### Shared Configuration File Automatically Enabled - -The `AWS_SDK_LOAD_CONFIG` environment variable is no longer necessary for the provider to automatically load the AWS shared configuration file (e.g., `~/.aws/config`). - -### Removal of AWS_METADATA_TIMEOUT Environment Variable Usage - -The provider now relies on the default AWS Go SDK timeouts for interacting with the EC2 Instance Metadata Service. - -## Provider Custom Service Endpoint Updates - -### Removal of kinesis_analytics and r53 Arguments - -The [custom service endpoints](custom-service-endpoints.html) for Kinesis Analytics and Route 53 now use the `kinesisanalytics` and `route53` argument names in the provider configuration. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - endpoints=[AwsProviderEndpoints( - kinesis_analytics="https://example.com", - r53="https://example.com" - ) - ] - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - endpoints=[AwsProviderEndpoints( - kinesisanalytics="https://example.com", - route53="https://example.com" - ) - ] - ) -``` - -## Data Source: aws_availability_zones - -### blacklisted_names Attribute Removal - -Switch your Terraform configuration to the `exclude_names` attribute instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_availability_zones import DataAwsAvailabilityZones -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsAvailabilityZones(self, "example", - blacklisted_names=["us-west-2d"] - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_availability_zones import DataAwsAvailabilityZones -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsAvailabilityZones(self, "example", - exclude_names=["us-west-2d"] - ) -``` - -### blacklisted_zone_ids Attribute Removal - -Switch your Terraform configuration to the `exclude_zone_ids` attribute instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_availability_zones import DataAwsAvailabilityZones -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsAvailabilityZones(self, "example", - blacklisted_zone_ids=["usw2-az4"] - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_availability_zones import DataAwsAvailabilityZones -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsAvailabilityZones(self, "example", - exclude_zone_ids=["usw2-az4"] - ) -``` - -## Data Source: aws_lambda_invocation - -### result_map Attribute Removal - -Switch your Terraform configuration to the `result` attribute with the [`jsondecode()` function](https://www.terraform.io/docs/configuration/functions/jsondecode.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformOutput, Fn, TerraformStack -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - TerraformOutput(self, "lambda_result", - value=Fn.lookup_nested(example.result_map, ["\"key1\""]) - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformOutput, Fn, Token, TerraformStack -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - TerraformOutput(self, "lambda_result", - value=Fn.lookup_nested(Fn.jsondecode(Token.as_string(example.result)), ["\"key1\"" - ]) - ) -``` - -## Data Source: aws_launch_template - -### Error raised if no matching launch template is found - -Previously, when a launch template matching the criteria was not found, the data source result would have been `null`. -Now this situation produces an error similar to the following: - -``` -data.aws_launch_template.current: Refreshing state... - -Error: error reading launch template: empty output -``` - -Configurations that depend on the previous behavior will need to be updated. - -## Data Source: aws_route53_resolver_rule - -### Removal of trailing period in domain_name argument - -Previously, the data source returned the Resolver Rule Domain Name directly from the API, which included a `.` suffix. This proved difficult to work with because many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. For example, where the attribute previously returned a Resolver Rule Domain Name such as `example.com.`, the attribute will now return `example.com`. -While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Data Source: aws_route53_zone - -### Removal of trailing period in name argument - -Previously, the data source returned the Hosted Zone Domain Name directly from the API, which included a `.` suffix. This proved difficult to work with because many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. 
For example, where the attribute previously returned a Hosted Zone Domain Name such as `example.com.`, the attribute will now return `example.com`. -While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Resource: aws_acm_certificate - -### domain_validation_options Changed from List to Set - -Previously, the `domain_validation_options` attribute was a list type and completely unknown until after an initial `terraform apply`. This generally required complicated configuration workarounds to properly create DNS validation records, since referencing this attribute directly could produce errors similar to the following: - -``` -Error: Invalid for_each argument - - on main.tf line 16, in resource "aws_route53_record" "existing": - 16: for_each = aws_acm_certificate.existing.domain_validation_options - -The `for_each` value depends on resource attributes that cannot be determined -until apply, so Terraform cannot predict how many instances will be created. -To work around this, use the -target argument to first apply only the -resources that the for_each depends on. -``` - -The `domain_validation_options` attribute is now a set type and the resource will attempt to populate the information necessary during the planning phase to handle the above situation in most environments without workarounds. This change also prevents Terraform from showing unexpected differences if the API returns the results in varying order. - -Configuration references to this attribute will likely require updates since sets cannot be indexed (e.g., `domain_validation_options[0]` or the older `domain_validation_options.0.` syntax will return errors). 
-If the `domain_validation_options` list previously contained only a single element like the two examples just shown, -it may be possible to wrap these references using the [`tolist()` function](https://www.terraform.io/docs/configuration/functions/tolist.html) -(e.g., `tolist(aws_acm_certificate.example.domain_validation_options)[0]`) as a quick configuration update. -However, given the complexity and workarounds required with the previous `domain_validation_options` attribute implementation, -different environments will require different configuration updates and migration steps. -Below is a more advanced example. -Further questions on potential update steps can be submitted to the [community forums](https://discuss.hashicorp.com/c/terraform-providers/tf-aws/33). - -For example, given this previous configuration using a `count` based resource approach that may have been used in certain environments: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Fn, Op, Token, TerraformCount, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.acm_certificate import AcmCertificate -from imports.aws.acm_certificate_validation import AcmCertificateValidation -from imports.aws.data_aws_route53_zone import DataAwsRoute53Zone -from imports.aws.route53_record import Route53Record -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - existing = AcmCertificate(self, "existing", - domain_name="existing.${" + public_root_domain.value + "}", - subject_alternative_names=["existing1.${" + public_root_domain.value + "}", "existing2.${" + public_root_domain.value + "}", "existing3.${" + public_root_domain.value + "}" - ], - validation_method="DNS" - ) - data_aws_route53_zone_public_root_domain = DataAwsRoute53Zone(self, "public_root_domain", - name=public_root_domain.string_value - ) - # In most cases loops should be handled in the programming language context and - # not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - # you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source - # you need to keep this like it is. 
- existing_count = TerraformCount.of( - Token.as_number(Op.add(Fn.length_of(existing.subject_alternative_names), 1))) - aws_route53_record_existing = Route53Record(self, "existing_2", - allow_overwrite=True, - name=Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing.domain_validation_options, [existing_count.index - ]), ["resource_record_name"])), - records=[ - Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing.domain_validation_options, [existing_count.index - ]), ["resource_record_value"])) - ], - ttl=60, - type=Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing.domain_validation_options, [existing_count.index - ]), ["resource_record_type"])), - zone_id=Token.as_string(data_aws_route53_zone_public_root_domain.zone_id), - count=existing_count - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_route53_record_existing.override_logical_id("existing") - aws_acm_certificate_validation_existing = AcmCertificateValidation(self, "existing_3", - certificate_arn=existing.arn, - validation_record_fqdns=Token.as_list( - Fn.lookup_nested(aws_route53_record_existing, ["*", "fqdn"])) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_acm_certificate_validation_existing.override_logical_id("existing") -``` - -It will receive errors like the below after upgrading: - -``` -Error: Invalid index - - on main.tf line 14, in resource "aws_route53_record" "existing": - 14: name = aws_acm_certificate.existing.domain_validation_options[count.index].resource_record_name - |---------------- - | aws_acm_certificate.existing.domain_validation_options is set of object with 4 elements - | count.index is 1 - -This value does not have any indices. 
-``` - -Since the `domain_validation_options` attribute changed from a list to a set and sets cannot be indexed in Terraform, the recommendation is to update the configuration to use the more stable [resource `for_each` support](https://www.terraform.io/docs/configuration/meta-arguments/for_each.html) instead of [`count`](https://www.terraform.io/docs/configuration/meta-arguments/count.html). Note the slight change in the `validation_record_fqdns` syntax as well. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformIterator, Fn, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.acm_certificate_validation import AcmCertificateValidation -from imports.aws.route53_record import Route53Record -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - # In most cases loops should be handled in the programming language context and - # not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - # you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source - # you need to keep this like it is. 
- existing_for_each_iterator = TerraformIterator.from_list( - Token.as_any("${{ for dvo in ${" + aws_acm_certificate_existing.domain_validation_options + "} : dvo.domain_name => {\n name = dvo.resource_record_name\n record = dvo.resource_record_value\n type = dvo.resource_record_type\n }}}")) - existing = Route53Record(self, "existing", - allow_overwrite=True, - name=Token.as_string( - Fn.lookup_nested(existing_for_each_iterator.value, ["name"])), - records=[ - Token.as_string( - Fn.lookup_nested(existing_for_each_iterator.value, ["record"])) - ], - ttl=60, - type=Token.as_string( - Fn.lookup_nested(existing_for_each_iterator.value, ["type"])), - zone_id=Token.as_string(public_root_domain.zone_id), - for_each=existing_for_each_iterator - ) - aws_acm_certificate_validation_existing = AcmCertificateValidation(self, "existing_1", - certificate_arn=Token.as_string(aws_acm_certificate_existing.arn), - validation_record_fqdns=Token.as_list("${[ for record in ${" + existing.fqn + "} : record.fqdn]}") - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_acm_certificate_validation_existing.override_logical_id("existing") -``` - -After the configuration has been updated, a plan should no longer error and may look like the following: - -``` ------------------------------------------------------------------------- - -An execution plan has been generated and is shown below. 
-Resource actions are indicated with the following symbols: - + create - - destroy --/+ destroy and then create replacement - -Terraform will perform the following actions: - - # aws_acm_certificate_validation.existing must be replaced --/+ resource "aws_acm_certificate_validation" "existing" { - certificate_arn = "arn:aws:acm:us-east-2:123456789012:certificate/ccbc58e8-061d-4443-9035-d3af0512e863" - ~ id = "2020-07-16 00:01:19 +0000 UTC" -> (known after apply) - ~ validation_record_fqdns = [ - - "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com", - - "_812ddf11b781af1eec1643ec58f102d2.existing.example.com", - - "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com", - - "_d7112da809a40e848207c04399babcec.existing1.example.com", - ] -> (known after apply) # forces replacement - } - - # aws_route53_record.existing will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" -> null - - id = "Z123456789012__812ddf11b781af1eec1643ec58f102d2.existing.example.com._CNAME" -> null - - name = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" -> null - - records = [ - - "_bdeba72164eec216c55a32374bcceafd.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing[1] will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" -> null - - id = "Z123456789012__40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com._CNAME" -> null - - name = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" -> null - - records = [ - - "_638532db1fa6a1b71aaf063c8ea29d52.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing[2] will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = 
"_d7112da809a40e848207c04399babcec.existing1.example.com" -> null - - id = "Z123456789012__d7112da809a40e848207c04399babcec.existing1.example.com._CNAME" -> null - - name = "_d7112da809a40e848207c04399babcec.existing1.example.com" -> null - - records = [ - - "_6e1da5574ab46a6c782ed73438274181.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing[3] will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" -> null - - id = "Z123456789012__8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com._CNAME" -> null - - name = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" -> null - - records = [ - - "_a419f8410d2e0720528a96c3506f3841.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing["existing.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" - + records = [ - + "_bdeba72164eec216c55a32374bcceafd.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing1.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_d7112da809a40e848207c04399babcec.existing1.example.com" - + records = [ - + "_6e1da5574ab46a6c782ed73438274181.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing2.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + 
fqdn = (known after apply) - + id = (known after apply) - + name = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" - + records = [ - + "_638532db1fa6a1b71aaf063c8ea29d52.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing3.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" - + records = [ - + "_a419f8410d2e0720528a96c3506f3841.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - -Plan: 5 to add, 0 to change, 5 to destroy. -``` - -Due to the type of configuration change, Terraform does not know that the previous `aws_route53_record` resources (indexed by number in the existing state) and the new resources (indexed by domain names in the updated configuration) are equivalent. Typically in this situation, the [`terraform state mv` command](https://www.terraform.io/docs/commands/state/mv.html) can be used to reduce the plan to show no changes. This is done by associating the count index (e.g., `[1]`) with the equivalent domain name index (e.g., `["existing2.example.com"]`), making one of the four commands to fix the above example: `terraform state mv 'aws_route53_record.existing[1]' 'aws_route53_record.existing["existing2.example.com"]'`. We recommend using this `terraform state mv` update process where possible to reduce chances of unexpected behaviors or changes in an environment. - -If using `terraform state mv` to reduce the plan to show no changes, no additional steps are required. - -In larger or more complex environments, though, matching each old resource address to its new address and running all the necessary `terraform state mv` commands can be tedious. 
Instead, since the `aws_route53_record` resource implements the `allow_overwrite = true` argument, it is possible to simply remove the old `aws_route53_record` resources from the Terraform state using the [`terraform state rm` command](https://www.terraform.io/docs/commands/state/rm.html). In this case, Terraform will leave the existing records in Route 53 and plan to overwrite the existing validation records with the exact same (previous) values. - --> This guide shows the simpler `terraform state rm` option below as a potential shortcut in this specific situation. In most other cases, however, `terraform state mv` is required to change from `count` based resources to `for_each` based resources and properly match the existing Terraform state to the updated Terraform configuration. - -```console -$ terraform state rm aws_route53_record.existing -Removed aws_route53_record.existing[0] -Removed aws_route53_record.existing[1] -Removed aws_route53_record.existing[2] -Removed aws_route53_record.existing[3] -Successfully removed 4 resource instance(s). -``` - -Now the Terraform plan will show only the additions of new Route 53 records (which are exactly the same as before the upgrade) and the proposed recreation of the `aws_acm_certificate_validation` resource. The `aws_acm_certificate_validation` resource recreation will have no effect, as the certificate is already validated and issued. - -``` -An execution plan has been generated and is shown below. 
-Resource actions are indicated with the following symbols: - + create --/+ destroy and then create replacement - -Terraform will perform the following actions: - - # aws_acm_certificate_validation.existing must be replaced --/+ resource "aws_acm_certificate_validation" "existing" { - certificate_arn = "arn:aws:acm:us-east-2:123456789012:certificate/ccbc58e8-061d-4443-9035-d3af0512e863" - ~ id = "2020-07-16 00:01:19 +0000 UTC" -> (known after apply) - ~ validation_record_fqdns = [ - - "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com", - - "_812ddf11b781af1eec1643ec58f102d2.existing.example.com", - - "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com", - - "_d7112da809a40e848207c04399babcec.existing1.example.com", - ] -> (known after apply) # forces replacement - } - - # aws_route53_record.existing["existing.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" - + records = [ - + "_bdeba72164eec216c55a32374bcceafd.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing1.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_d7112da809a40e848207c04399babcec.existing1.example.com" - + records = [ - + "_6e1da5574ab46a6c782ed73438274181.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing2.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" - + records = [ - + 
"_638532db1fa6a1b71aaf063c8ea29d52.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing3.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" - + records = [ - + "_a419f8410d2e0720528a96c3506f3841.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - -Plan: 5 to add, 0 to change, 1 to destroy. -``` - -Once applied, no differences should be shown and no additional steps should be necessary. - -Alternatively, if you are referencing a subset of `domain_validation_options`, there is another method of upgrading from v2 to v3 without having to move state. Given the scenario below... - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Fn, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.acm_certificate import AcmCertificate -from imports.aws.acm_certificate_validation import AcmCertificateValidation -from imports.aws.data_aws_route53_zone import DataAwsRoute53Zone -from imports.aws.route53_record import Route53Record -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - existing = AcmCertificate(self, "existing", - domain_name="existing.${" + public_root_domain.value + "}", - subject_alternative_names=["existing1.${" + public_root_domain.value + "}", "existing2.${" + public_root_domain.value + "}", "existing3.${" + public_root_domain.value + "}" - ], - validation_method="DNS" - ) - data_aws_route53_zone_public_root_domain = DataAwsRoute53Zone(self, "public_root_domain", - name=public_root_domain.string_value - ) - existing1 = Route53Record(self, "existing_1", - allow_overwrite=True, - name=Token.as_string( - Fn.lookup_nested(existing.domain_validation_options, ["0", "resource_record_name" - ])), - records=[ - Token.as_string( - Fn.lookup_nested(existing.domain_validation_options, ["0", "resource_record_value" - ])) - ], - ttl=60, - type=Token.as_string( - Fn.lookup_nested(existing.domain_validation_options, ["0", "resource_record_type" - ])), - zone_id=Token.as_string(data_aws_route53_zone_public_root_domain.zone_id) - ) - existing3 = Route53Record(self, "existing_3", - allow_overwrite=True, - name=Token.as_string( - Fn.lookup_nested(existing.domain_validation_options, ["2", "resource_record_name" - ])), - records=[ - Token.as_string( - Fn.lookup_nested(existing.domain_validation_options, ["2", "resource_record_value" - ])) - ], - ttl=60, - type=Token.as_string( - Fn.lookup_nested(existing.domain_validation_options, ["2", "resource_record_type" - ])), - zone_id=Token.as_string(data_aws_route53_zone_public_root_domain.zone_id) - ) - aws_acm_certificate_validation_existing1 = AcmCertificateValidation(self, "existing_1_4", - certificate_arn=existing.arn, - 
validation_record_fqdns=Token.as_list(existing1.fqdn) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_acm_certificate_validation_existing1.override_logical_id("existing_1") - aws_acm_certificate_validation_existing3 = AcmCertificateValidation(self, "existing_3_5", - certificate_arn=existing.arn, - validation_record_fqdns=Token.as_list(existing3.fqdn) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_acm_certificate_validation_existing3.override_logical_id("existing_3") -``` - -You can perform a conversion of the new `domain_validation_options` object into a map, to allow you to perform a lookup by the domain name in place of an index number. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Fn, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.acm_certificate_validation import AcmCertificateValidation -from imports.aws.route53_record import Route53Record -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - existing_domain_validation_options = "${{ for dvo in ${" + cloudfront_cert.domain_validation_options + "} : dvo.domain_name => {\n name = dvo.resource_record_name\n record = dvo.resource_record_value\n type = dvo.resource_record_type\n }}}" - existing1 = Route53Record(self, "existing_1", - allow_overwrite=True, - name=Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing_domain_validation_options, ["existing1.${" + public_root_domain.value + "}" - ]), ["name"])), - records=[ - Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing_domain_validation_options, ["existing1.${" + public_root_domain.value + "}" - ]), ["record"])) - ], - ttl=60, - type=Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing_domain_validation_options, ["existing1.${" + public_root_domain.value + "}" - ]), ["type"])), - zone_id=Token.as_string(data_aws_route53_zone_public_root_domain.zone_id) - ) - existing3 = Route53Record(self, "existing_3", - allow_overwrite=True, - name=Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing_domain_validation_options, ["existing3.${" + public_root_domain.value + "}" - ]), ["name"])), - records=[ - Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing_domain_validation_options, ["existing3.${" + public_root_domain.value + "}" - ]), ["record"])) - ], - ttl=60, - type=Token.as_string( - Fn.lookup_nested( - Fn.lookup_nested(existing_domain_validation_options, ["existing3.${" + public_root_domain.value + "}" - ]), ["type"])), - zone_id=Token.as_string(data_aws_route53_zone_public_root_domain.zone_id) - ) - aws_acm_certificate_validation_existing1 = AcmCertificateValidation(self, "existing_1_2", - certificate_arn=existing.arn, - 
validation_record_fqdns=Token.as_list(existing1.fqdn) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_acm_certificate_validation_existing1.override_logical_id("existing_1") - aws_acm_certificate_validation_existing3 = AcmCertificateValidation(self, "existing_3_3", - certificate_arn=existing.arn, - validation_record_fqdns=Token.as_list(existing3.fqdn) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_acm_certificate_validation_existing3.override_logical_id("existing_3") -``` - -Performing a plan against these resources will not cause any change in state, since underlying resources have not changed. - -### subject_alternative_names Changed from List to Set - -Previously the `subject_alternative_names` argument was stored in the Terraform state as an ordered list while the API returned information in an unordered manner. The attribute is now configured as a set instead of a list. Certain Terraform configuration language features distinguish between these two attribute types such as not being able to index a set (e.g., `aws_acm_certificate.example.subject_alternative_names[0]` is no longer a valid reference). Depending on the implementation details of a particular configuration using `subject_alternative_names` as a reference, possible solutions include changing references to using `for`/`for_each` or using the `tolist()` function as a temporary workaround to keep the previous behavior until an appropriate configuration (properly using the unordered set) can be determined. Usage questions can be submitted to the [community forums](https://discuss.hashicorp.com/c/terraform-providers/tf-aws/33). 
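
The practical effect of this list-to-set change can be sketched in plain Python (an illustration only, not provider or cdktf code; the domain names below are hypothetical): a set cannot be indexed positionally, and a `tolist()`-style conversion restores indexing, but with an unspecified element order unless it is made deterministic.

```python
# Illustrative sketch: why a set-typed attribute can no longer be indexed,
# and what a tolist()-style workaround does. Domain names are hypothetical.
sans = {"existing2.example.com", "existing1.example.com"}

# Sets have no positional access; sans[0] raises TypeError.
try:
    sans[0]
except TypeError:
    indexable = False

# A tolist()-style conversion restores indexing, but the resulting order is
# not guaranteed to match the old list order -- sorting makes it deterministic.
as_list = sorted(sans)
first = as_list[0]
```

This is why references such as `subject_alternative_names[0]` need either a `tolist()` wrapper as a stopgap or, preferably, a `for`/`for_each` construct that does not rely on element order at all.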
 - -### certificate_body, certificate_chain, and private_key Arguments No Longer Stored as Hash - -Previously, when the `certificate_body`, `certificate_chain`, and `private_key` arguments were stored in state, they were stored as a hash of the actual value. This prevented Terraform from properly updating the resource when necessary, so the hashing has been removed. The Terraform AWS Provider will show an update to these arguments on the first apply after upgrading to version 3.0.0, which fixes the Terraform state by removing the hash. Since the `private_key` attribute is marked as sensitive, the values in the update will not be visible in the Terraform output. If the non-hashed values have not changed, then no update is occurring other than the Terraform state update. If these arguments are the only updates and they all match the hash removal, the apply will occur without submitting API calls. - -## Resource: aws_api_gateway_method_settings - -### throttling_burst_limit and throttling_rate_limit Arguments Now Default to -1 - -Previously, when the `throttling_burst_limit` or `throttling_rate_limit` argument was not configured, the resource would enable throttling and set the limit value to the AWS API Gateway default. In addition, as these arguments were marked as `Computed`, Terraform ignored any subsequent changes made to these arguments in the resource. These behaviors have been removed and, by default, the `throttling_burst_limit` and `throttling_rate_limit` arguments will be disabled in the resource with a value of `-1`. - -## Resource: aws_autoscaling_group - -### availability_zones and vpc_zone_identifier Arguments Now Report Plan-Time Conflict - -Specifying both the `availability_zones` and `vpc_zone_identifier` arguments previously led to confusing behavior and errors. Now this issue is reported at plan time. Use the `null` value instead of `[]` (empty list) in conditionals to ensure this validation does not unexpectedly trigger. 
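
The guidance on using `null` rather than `[]` can be illustrated with a minimal plain-Python sketch of a conflict check (hypothetical code, not the provider's actual validation; the subnet ID is made up): an empty list still counts as a configured value, so only a null value (`None` here) avoids triggering the conflict.

```python
# Hypothetical sketch of a plan-time conflict check: an empty list is still
# "configured", so only None (Terraform's null) escapes the validation.
def check_zone_conflict(availability_zones, vpc_zone_identifier):
    """Raise if both arguments are configured. None means "not configured";
    an empty list does not."""
    if availability_zones is not None and vpc_zone_identifier is not None:
        raise ValueError(
            "availability_zones conflicts with vpc_zone_identifier")

# None alongside a configured vpc_zone_identifier passes the check...
check_zone_conflict(None, ["subnet-12345678"])

# ...but an empty list still triggers the conflict.
try:
    check_zone_conflict([], ["subnet-12345678"])
    conflicted = False
except ValueError:
    conflicted = True
```

This mirrors why a conditional that falls back to `[]` must be rewritten to fall back to `null` after the upgrade.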
- -### Drift detection enabled for `load_balancers` and `target_group_arns` arguments - -If you previously set one of these arguments to an empty list to enable drift detection (e.g., when migrating an ASG from ELB to ALB), this can be updated as follows. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.autoscaling_group import AutoscalingGroup -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, maxSize, minSize): - super().__init__(scope, name) - AutoscalingGroup(self, "example", - load_balancers=[], - target_group_arns=[Token.as_string(aws_lb_target_group_example.arn)], - max_size=max_size, - min_size=min_size - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.autoscaling_group import AutoscalingGroup -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, maxSize, minSize): - super().__init__(scope, name) - AutoscalingGroup(self, "example", - target_group_arns=[Token.as_string(aws_lb_target_group_example.arn)], - max_size=max_size, - min_size=min_size - ) -``` - -If `aws_autoscaling_attachment` resources reference your ASG configurations, you will need to add the [`lifecycle` configuration block](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html) with an `ignore_changes` argument to prevent Terraform from producing non-empty plans (i.e., forcing a resource update) during the next state refresh. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.autoscaling_attachment import AutoscalingAttachment -from imports.aws.autoscaling_group import AutoscalingGroup -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, maxSize, minSize): - super().__init__(scope, name) - example = AutoscalingGroup(self, "example", - max_size=max_size, - min_size=min_size - ) - aws_autoscaling_attachment_example = AutoscalingAttachment(self, "example_1", - autoscaling_group_name=example.id, - elb=Token.as_string(aws_elb_example.id) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_autoscaling_attachment_example.override_logical_id("example") -``` - -An updated configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from cdktf import TerraformResourceLifecycle -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.autoscaling_attachment import AutoscalingAttachment -from imports.aws.autoscaling_group import AutoscalingGroup -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, maxSize, minSize): - super().__init__(scope, name) - example = AutoscalingGroup(self, "example", - lifecycle=TerraformResourceLifecycle( - ignore_changes=[load_balancers, target_group_arns] - ), - max_size=max_size, - min_size=min_size - ) - aws_autoscaling_attachment_example = AutoscalingAttachment(self, "example_1", - autoscaling_group_name=example.id, - elb=Token.as_string(aws_elb_example.id) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_autoscaling_attachment_example.override_logical_id("example") -``` - -## Resource: aws_cloudfront_distribution - -### active_trusted_signers Attribute Name and Type Change - -Previously, the `active_trusted_signers` computed attribute was implemented with a Map that did not support accessing its computed `items` attribute in Terraform 0.12 correctly. -To address this, the `active_trusted_signers` attribute has been renamed to `trusted_signers` and is now implemented as a List with a computed `items` List attribute and computed `enabled` boolean attribute. -The nested `items` attribute includes computed `aws_account_number` and `key_pair_ids` sub-fields, with the latter implemented as a List. -Thus, user configurations referencing the `active_trusted_signers` attribute and its sub-fields will need to be changed as follows. 
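Conceptually, the attribute's shape changes from a single map-like object to a single-element list, so references gain a `[0]` index. A plain-Python sketch of the shape change (illustrative data only; the account number and key pair ID are hypothetical, not provider output):

```python
# Illustrative data only -- hypothetical values, not real provider output.
# Old: active_trusted_signers was a single map-like object.
old_active_trusted_signers = {
    "enabled": True,
    "items": [
        {"aws_account_number": "123456789012", "key_pair_ids": ["APKAEXAMPLE"]},
    ],
}

# New: trusted_signers is a list with one element, so references gain a [0]
# index, and each nested item exposes aws_account_number and key_pair_ids.
trusted_signers = [
    {
        "enabled": True,
        "items": [
            {"aws_account_number": "123456789012", "key_pair_ids": ["APKAEXAMPLE"]},
        ],
    }
]

# Old reference style: ...active_trusted_signers.enabled
assert old_active_trusted_signers["enabled"] is True
# New reference style: ...trusted_signers[0].enabled
assert trusted_signers[0]["enabled"] is True
assert trusted_signers[0]["items"][0]["key_pair_ids"] == ["APKAEXAMPLE"]
```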
- -Given these previous references: - -``` -aws_cloudfront_distribution.example.active_trusted_signers.enabled -aws_cloudfront_distribution.example.active_trusted_signers.items -``` - -Updated references: - -``` -aws_cloudfront_distribution.example.trusted_signers[0].enabled -aws_cloudfront_distribution.example.trusted_signers[0].items -``` - -## Resource: aws_cloudwatch_log_group - -### Removal of arn Wildcard Suffix - -Previously, the resource returned the ARN directly from the API, which included a `:*` suffix to denote all CloudWatch Log Streams under the CloudWatch Log Group. Most other AWS resources that return ARNs and many other AWS services do not use the `:*` suffix. The suffix is now automatically removed. For example, the resource previously returned an ARN such as `arn:aws:logs:us-east-1:123456789012:log-group:/example:*` but will now return `arn:aws:logs:us-east-1:123456789012:log-group:/example`. - -Workarounds, such as using `replace()` as shown below, should be removed: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Fn, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.cloudwatch_log_group import CloudwatchLogGroup -from imports.aws.datasync_task import DatasyncTask -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, destinationLocationArn, sourceLocationArn): - super().__init__(scope, name) - example = CloudwatchLogGroup(self, "example", - name="example" - ) - aws_datasync_task_example = DatasyncTask(self, "example_1", - cloudwatch_log_group_arn=Token.as_string(Fn.replace(example.arn, ":*", "")), - destination_location_arn=destination_location_arn, - source_location_arn=source_location_arn - ) - # This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match. - aws_datasync_task_example.override_logical_id("example") -``` - -Removing the `:*` suffix is a breaking change for some configurations. Fix these configurations using string interpolations as demonstrated below. For example, this configuration is now broken: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_iam_policy_document import DataAwsIamPolicyDocument -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsIamPolicyDocument(self, "ad-log-policy", - statement=[DataAwsIamPolicyDocumentStatement( - actions=["logs:CreateLogStream", "logs:PutLogEvents"], - effect="Allow", - principals=[DataAwsIamPolicyDocumentStatementPrincipals( - identifiers=["ds.amazonaws.com"], - type="Service" - ) - ], - resources=[example.arn] - ) - ] - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.data_aws_iam_policy_document import DataAwsIamPolicyDocument -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsIamPolicyDocument(self, "ad-log-policy", - statement=[DataAwsIamPolicyDocumentStatement( - actions=["logs:CreateLogStream", "logs:PutLogEvents"], - effect="Allow", - principals=[DataAwsIamPolicyDocumentStatementPrincipals( - identifiers=["ds.amazonaws.com"], - type="Service" - ) - ], - resources=["${" + example.arn + "}:*"] - ) - ] - ) -``` - -## Resource: aws_codepipeline - -### GITHUB_TOKEN environment variable removal - -Switch your Terraform configuration to the `OAuthToken` element in the `action` `configuration` map instead. - -For example, given this previous configuration: - -```console -$ GITHUB_TOKEN= terraform apply -``` - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.codepipeline import Codepipeline -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, artifactStore, name, roleArn): - super().__init__(scope, name) - Codepipeline(self, "example", - stage=[CodepipelineStage( - action=[CodepipelineStageAction( - category="Source", - configuration={ - "Branch": "main", - "Owner": "lifesum-terraform", - "Repo": "example" - }, - name="Source", - output_artifacts=["example"], - owner="ThirdParty", - provider="GitHub", - version="1" - ) - ], - name="Source" - ) - ], - artifact_store=artifact_store, - name=name, - role_arn=role_arn - ) -``` - -The configuration could be updated as follows: - -```console -$ TF_VAR_github_token= terraform apply -``` - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformVariable, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.codepipeline import Codepipeline -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, artifactStore, name, roleArn): - super().__init__(scope, name) - # Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. - # You can read more about this at https://cdk.tf/variables - github_token = TerraformVariable(self, "github_token") - Codepipeline(self, "example", - stage=[CodepipelineStage( - action=[CodepipelineStageAction( - category="Source", - configuration={ - "Branch": "main", - "OAuthToken": github_token.string_value, - "Owner": "lifesum-terraform", - "Repo": "example" - }, - name="Source", - output_artifacts=["example"], - owner="ThirdParty", - provider="GitHub", - version="1" - ) - ], - name="Source" - ) - ], - artifact_store=artifact_store, - name=name, - role_arn=role_arn - ) -``` - -## Resource: aws_cognito_user_pool - -### Removal of admin_create_user_config.unused_account_validity_days Argument - -The Cognito API previously deprecated the `admin_create_user_config` configuration block `unused_account_validity_days` argument in preference of the `password_policy` configuration block `temporary_password_validity_days` argument. Configurations will need to be updated to use the API supported configuration. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.cognito_user_pool import CognitoUserPool -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name): - super().__init__(scope, name) - CognitoUserPool(self, "example", - admin_create_user_config=CognitoUserPoolAdminCreateUserConfig( - unused_account_validity_days=7 - ), - name=name - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.cognito_user_pool import CognitoUserPool -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name): - super().__init__(scope, name) - CognitoUserPool(self, "example", - password_policy=CognitoUserPoolPasswordPolicy( - temporary_password_validity_days=7 - ), - name=name - ) -``` - -## Resource: aws_dx_gateway - -### Removal of Automatic aws_dx_gateway_association Import - -Previously when importing the `aws_dx_gateway` resource with the [`terraform import` command](https://www.terraform.io/docs/commands/import.html), the Terraform AWS Provider would automatically attempt to import an associated `aws_dx_gateway_association` resource(s) as well. This automatic resource import has been removed. Use the [`aws_dx_gateway_association` resource import](/docs/providers/aws/r/dx_gateway_association.html#import) to import those resources separately. - -## Resource: aws_dx_gateway_association - -### vpn_gateway_id Argument Removal - -Switch your Terraform configuration to the `associated_gateway_id` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.dx_gateway_association import DxGatewayAssociation -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, dxGatewayId): - super().__init__(scope, name) - DxGatewayAssociation(self, "example", - vpn_gateway_id=Token.as_string(aws_vpn_gateway_example.id), - dx_gateway_id=dx_gateway_id - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.dx_gateway_association import DxGatewayAssociation -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, dxGatewayId): - super().__init__(scope, name) - DxGatewayAssociation(self, "example", - associated_gateway_id=Token.as_string(aws_vpn_gateway_example.id), - dx_gateway_id=dx_gateway_id - ) -``` - -## Resource: aws_dx_gateway_association_proposal - -### vpn_gateway_id Argument Removal - -Switch your Terraform configuration to the `associated_gateway_id` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.dx_gateway_association_proposal import DxGatewayAssociationProposal -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, associatedGatewayId, dxGatewayId, dxGatewayOwnerAccountId): - super().__init__(scope, name) - DxGatewayAssociationProposal(self, "example", - vpn_gateway_id=aws_vpn_gateway_example.id, - associated_gateway_id=associated_gateway_id, - dx_gateway_id=dx_gateway_id, - dx_gateway_owner_account_id=dx_gateway_owner_account_id - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.dx_gateway_association_proposal import DxGatewayAssociationProposal -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, dxGatewayId, dxGatewayOwnerAccountId): - super().__init__(scope, name) - DxGatewayAssociationProposal(self, "example", - associated_gateway_id=Token.as_string(aws_vpn_gateway_example.id), - dx_gateway_id=dx_gateway_id, - dx_gateway_owner_account_id=dx_gateway_owner_account_id - ) -``` - -## Resource: aws_ebs_volume - -### iops Argument Apply-Time Validation - -Previously when the `iops` argument was configured with a `type` other than `io1` (either explicitly or omitted, indicating the default type `gp2`), the Terraform AWS Provider would automatically disregard the value provided to `iops` as it is only configurable for the `io1` volume type per the AWS EC2 API. This behavior has changed such that the Terraform AWS Provider will instead return an error at apply time indicating an `iops` value is invalid for types other than `io1`. 
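The new apply-time behavior, including the exception for `null`/`0` values noted in this guide, can be sketched in plain Python. This is an illustrative model of the rule, not the provider's actual implementation:

```python
# Hypothetical sketch of the new apply-time iops check -- not provider code.
def validate_iops(volume_type, iops):
    """Raise if a custom iops value is supplied for a non-io1 volume type."""
    # Unset (None) or 0 iops values are always accepted, regardless of type.
    if iops in (None, 0):
        return True
    # Any other iops value is only valid for the io1 volume type.
    if volume_type != "io1":
        raise ValueError(
            "iops is only configurable for the io1 volume type, "
            "got type %r" % volume_type
        )
    return True

assert validate_iops("io1", 4000)  # custom iops on io1: accepted
assert validate_iops("gp2", None)  # unset iops on gp2: accepted
assert validate_iops("gp2", 0)     # zero iops on gp2: accepted
# validate_iops("gp2", 100) would raise ValueError at apply time
```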
-Exceptions to this are in cases where `iops` is set to `null` or `0` such that the Terraform AWS Provider will continue to accept the value regardless of `type`. - -## Resource: aws_elastic_transcoder_preset - -### video Configuration Block max_frame_rate Argument No Longer Uses 30 Default - -Previously when the `max_frame_rate` argument was not configured, the resource would default to 30. This behavior has been removed and allows for auto frame rate presets to automatically set the appropriate value. - -## Resource: aws_emr_cluster - -### core_instance_count Argument Removal - -Switch your Terraform configuration to the `core_instance_group` configuration block instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.emr_cluster import EmrCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, releaseLabel, serviceRole): - super().__init__(scope, name) - EmrCluster(self, "example", - core_instance_count=2, - name=name, - release_label=release_label, - service_role=service_role - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.emr_cluster import EmrCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, instanceType, name, releaseLabel, serviceRole): - super().__init__(scope, name) - EmrCluster(self, "example", - core_instance_group=EmrClusterCoreInstanceGroup( - instance_count=2, - instance_type=instance_type - ), - name=name, - release_label=release_label, - service_role=service_role - ) -``` - -### core_instance_type Argument Removal - -Switch your Terraform configuration to the `core_instance_group` configuration block instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.emr_cluster import EmrCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, releaseLabel, serviceRole): - super().__init__(scope, name) - EmrCluster(self, "example", - core_instance_type="m4.large", - name=name, - release_label=release_label, - service_role=service_role - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.emr_cluster import EmrCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, releaseLabel, serviceRole): - super().__init__(scope, name) - EmrCluster(self, "example", - core_instance_group=EmrClusterCoreInstanceGroup( - instance_type="m4.large" - ), - name=name, - release_label=release_label, - service_role=service_role - ) -``` - -### instance_group Configuration Block Removal - -Switch your Terraform configuration to the `master_instance_group` and `core_instance_group` configuration blocks instead. For any task instance groups, use the `aws_emr_instance_group` resource. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.emr_cluster import EmrCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, releaseLabel, serviceRole): - super().__init__(scope, name) - EmrCluster(self, "example", - instance_group=[{ - "instance_role": "MASTER", - "instance_type": "m4.large" - }, { - "instance_count": 1, - "instance_role": "CORE", - "instance_type": "c4.large" - }, { - "instance_count": 2, - "instance_role": "TASK", - "instance_type": "c4.xlarge" - } - ], - name=name, - release_label=release_label, - service_role=service_role - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.emr_cluster import EmrCluster -from imports.aws.emr_instance_group import EmrInstanceGroup -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, releaseLabel, serviceRole): - super().__init__(scope, name) - example = EmrCluster(self, "example", - core_instance_group=EmrClusterCoreInstanceGroup( - instance_count=1, - instance_type="c4.large" - ), - master_instance_group=EmrClusterMasterInstanceGroup( - instance_type="m4.large" - ), - name=name, - release_label=release_label, - service_role=service_role - ) - aws_emr_instance_group_example = EmrInstanceGroup(self, "example_1", - cluster_id=example.id, - instance_count=2, - instance_type="c4.xlarge" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_emr_instance_group_example.override_logical_id("example") -``` - -### master_instance_type Argument Removal - -Switch your Terraform configuration to the `master_instance_group` configuration block instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.emr_cluster import EmrCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, releaseLabel, serviceRole): - super().__init__(scope, name) - EmrCluster(self, "example", - master_instance_type="m4.large", - name=name, - release_label=release_label, - service_role=service_role - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.emr_cluster import EmrCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, name, releaseLabel, serviceRole): - super().__init__(scope, name) - EmrCluster(self, "example", - master_instance_group=EmrClusterMasterInstanceGroup( - instance_type="m4.large" - ), - name=name, - release_label=release_label, - service_role=service_role - ) -``` - -## Resource: aws_glue_job - -### allocated_capacity Argument Removal - -The Glue API has deprecated the `allocated_capacity` argument. Switch your Terraform configuration to the `max_capacity` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.glue_job import GlueJob -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, command, name, roleArn): - super().__init__(scope, name) - GlueJob(self, "example", - allocated_capacity=2, - command=command, - name=name, - role_arn=role_arn - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.glue_job import GlueJob -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, command, name, roleArn): - super().__init__(scope, name) - GlueJob(self, "example", - max_capacity=2, - command=command, - name=name, - role_arn=role_arn - ) -``` - -## Resource: aws_iam_access_key - -### ses_smtp_password Attribute Removal - -In many regions today and in all regions after October 1, 2020, the [SES API will only accept version 4 signatures](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html). If referencing the `ses_smtp_password` attribute, switch your Terraform configuration to the `ses_smtp_password_v4` attribute instead. Please note that this signature is based on the region of the Terraform AWS Provider. If you need the SES v4 password in multiple regions, it may require using [multiple provider instances](https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-configurations). - -Depending on when the `aws_iam_access_key` resource was created, it may not have a `ses_smtp_password_v4` attribute for you to use. If this is the case you will need to [taint](/docs/commands/taint.html) the resource so that it can be recreated with the new value. - -Alternatively, you can stage the change by creating a new `aws_iam_access_key` resource and change any downstream dependencies to use the new `ses_smtp_password_v4` attribute. Once dependents have been updated with the new resource you can remove the old one. - -## Resource: aws_iam_instance_profile - -### roles Argument Removal - -Switch your Terraform configuration to the `role` argument instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. 
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.iam_instance_profile import IamInstanceProfile -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - IamInstanceProfile(self, "example", - roles=[aws_iam_role_example.id] - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.iam_instance_profile import IamInstanceProfile -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - IamInstanceProfile(self, "example", - role=Token.as_string(aws_iam_role_example.id) - ) -``` - -## Resource: aws_iam_server_certificate - -### certificate_body, certificate_chain, and private_key Arguments No Longer Stored as Hash - -Previously when the `certificate_body`, `certificate_chain`, and `private_key` arguments were stored in state, they were stored as a hash of the actual value. This hashing has been removed for new or recreated resources to prevent lifecycle issues. - -## Resource: aws_instance - -### ebs_block_device.iops and root_block_device.iops Argument Apply-Time Validations - -Previously when the `iops` argument was configured in either the `ebs_block_device` or `root_block_device` configuration block, the Terraform AWS Provider would automatically disregard the value provided to `iops` if the `type` argument was also configured with a value other than `io1` (either explicitly or omitted, indicating the default type `gp2`) as `iops` are only configurable for the `io1` volume type per the AWS EC2 API. 
This behavior has changed such that the Terraform AWS Provider will instead return an error at apply time indicating an `iops` value is invalid for volume types other than `io1`.
-Exceptions to this are in cases where `iops` is set to `null` or `0` such that the Terraform AWS Provider will continue to accept the value regardless of `type`.
-
-## Resource: aws_lambda_alias
-
-### Import No Longer Converts Function Name to ARN
-
-Previously the resource import would always convert the `function_name` portion of the import identifier into the ARN format. Configurations using the Lambda Function name would show this as an unexpected difference after import. Now the given value is passed through on import unchanged, whether it is a Lambda Function name or an ARN.
-
-## Resource: aws_launch_template
-
-### network_interfaces.delete_on_termination Argument Type Change
-
-The `network_interfaces.delete_on_termination` argument is now of type `string`, allowing the value to be left unspecified; the previous `bool` type only allowed `true`/`false` and defaulted to `false` when no value was set. To enforce `delete_on_termination` of `false`, the string `"false"` or a bare `false` value must now be used.
-
-For example, given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.launch_template import LaunchTemplate
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        LaunchTemplate(self, "example",
-            network_interfaces=[LaunchTemplateNetworkInterfaces(
-                delete_on_termination=None
-            )
-            ]
-        )
-```
-
-An updated configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import Token, TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.launch_template import LaunchTemplate
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        LaunchTemplate(self, "example",
-            network_interfaces=[LaunchTemplateNetworkInterfaces(
-                delete_on_termination=Token.as_string(False)
-            )
-            ]
-        )
-```
-
-## Resource: aws_lb_listener_rule
-
-### condition.field and condition.values Arguments Removal
-
-Switch your Terraform configuration to use the `host_header` or `path_pattern` configuration block instead.
-
-For example, given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.lb_listener_rule import LbListenerRule
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, action, listenerArn):
-        super().__init__(scope, name)
-        LbListenerRule(self, "example",
-            condition=[LbListenerRuleCondition(
-                field="path-pattern",
-                values=["/static/*"]
-            )
-            ],
-            action=action,
-            listener_arn=listener_arn
-        )
-```
-
-An updated configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.lb_listener_rule import LbListenerRule -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, action, listenerArn): - super().__init__(scope, name) - LbListenerRule(self, "example", - condition=[LbListenerRuleCondition( - path_pattern=LbListenerRuleConditionPathPattern( - values=["/static/*"] - ) - ) - ], - action=action, - listener_arn=listener_arn - ) -``` - -## Resource: aws_msk_cluster - -### encryption_info.encryption_in_transit.client_broker Default Updated to Match API - -A few weeks after general availability launch and initial release of the `aws_msk_cluster` resource, the MSK API default for client broker encryption switched from `TLS_PLAINTEXT` to `TLS`. The attribute default has now been updated to match the more secure API default, however existing Terraform configurations may show a difference if this setting is not configured. - -To continue using the old default when it was previously not configured, add or modify this configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.msk_cluster import MskCluster -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, brokerNodeGroupInfo, clusterName, kafkaVersion, numberOfBrokerNodes): - super().__init__(scope, name) - MskCluster(self, "example", - encryption_info=MskClusterEncryptionInfo( - encryption_in_transit=MskClusterEncryptionInfoEncryptionInTransit( - client_broker="TLS_PLAINTEXT" - ) - ), - broker_node_group_info=broker_node_group_info, - cluster_name=cluster_name, - kafka_version=kafka_version, - number_of_broker_nodes=number_of_broker_nodes - ) -``` - -## Resource: aws_rds_cluster - -### scaling_configuration.min_capacity Now Defaults to 1 - -Previously, when the `min_capacity` argument in a `scaling_configuration` block was not configured, the resource would default to 2. This behavior has been updated to align with the AWS RDS Cluster API default of 1. - -## Resource: aws_route53_resolver_rule - -### Removal of trailing period in domain_name argument - -Previously, the resource returned the Resolver Rule Domain Name directly from the API, which included a `.` suffix. This proved difficult because many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. For example, when the attribute would previously return a Resolver Rule Domain Name such as `example.com.`, the attribute will now be returned as `example.com`. -While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Resource: aws_route53_zone - -### Removal of trailing period in name argument - -Previously, the resource returned the Hosted Zone Domain Name directly from the API, which included a `.` suffix. This proved difficult because many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. 
For example, when the attribute would previously return a Hosted Zone Domain Name such as `example.com.`, the attribute will now be returned as `example.com`. -While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Resource: aws_s3_bucket - -### Removal of Automatic aws_s3_bucket_policy Import - -Previously, when importing the `aws_s3_bucket` resource with the [`terraform import` command](https://www.terraform.io/docs/commands/import.html), the Terraform AWS Provider would automatically attempt to import an associated `aws_s3_bucket_policy` resource as well. This automatic resource import has been removed. Use the [`aws_s3_bucket_policy` resource import](/docs/providers/aws/r/s3_bucket_policy.html#import) to import that resource separately. - -### region Attribute Is Now Read-Only - -The `region` attribute is no longer configurable, but it remains a read-only attribute. The region of the `aws_s3_bucket` resource is determined by the region of the Terraform AWS Provider, similar to all other resources. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example") -``` - -## Resource: aws_s3_bucket_metric - -### filter configuration block Plan-Time Validation Change - -The `filter` configuration block no longer supports the empty block `{}` and requires at least one of the `prefix` or `tags` attributes to be specified. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket_metric import S3BucketMetric -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, bucket): - super().__init__(scope, name) - S3BucketMetric(self, "example", - filter=S3BucketMetricFilter(), - bucket=bucket, - name=name - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket_metric import S3BucketMetric -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, bucket): - super().__init__(scope, name) - S3BucketMetric(self, "example", - bucket=bucket, - name=name - ) -``` - -## Resource: aws_security_group - -### Removal of Automatic aws_security_group_rule Import - -Previously, when importing the `aws_security_group` resource with the [`terraform import` command](https://www.terraform.io/docs/commands/import.html), the Terraform AWS Provider would automatically attempt to import any associated `aws_security_group_rule` resources as well. This automatic resource import has been removed. Use the [`aws_security_group_rule` resource import](/docs/providers/aws/r/security_group_rule.html#import) to import those resources separately. - -## Resource: aws_sns_platform_application - -### platform_credential and platform_principal Arguments No Longer Stored as SHA256 Hash - -Previously, the `platform_credential` and `platform_principal` arguments were stored in state as a SHA256 hash of the actual value. This prevented Terraform from properly updating the resource when necessary, so the hashing has been removed. The Terraform AWS Provider will show an update to these arguments on the first apply after upgrading to version 3.0.0, which fixes the Terraform state by removing the hash. Since the attributes are marked as sensitive, the values in the update will not be visible in the Terraform output. If the non-hashed values have not changed, no update is occurring other than the Terraform state update. If these arguments are the only two updates and they both match the SHA256 removal, the apply will occur without submitting an actual `SetPlatformApplicationAttributes` API call. 
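To illustrate why the old hashing behavior made updates unreliable, here is a short plain-Python sketch (illustrative only — the credential value is made up, and this is not provider code):

```python
import hashlib

def stored_state_value(platform_credential: str) -> str:
    # Pre-upgrade behavior: state kept only the SHA256 digest of the value.
    return hashlib.sha256(platform_credential.encode("utf-8")).hexdigest()

configured = "example-api-key"  # hypothetical plaintext credential
state = stored_state_value(configured)

# The digest is one-way: the plaintext in configuration never matches the
# hash in state, so the provider could not reliably tell whether the real
# credential had changed.
print(configured == state)  # False
print(len(state))           # 64 hex characters
```

After the upgrade, the plaintext (still marked sensitive) is kept in state directly, so the comparison above becomes meaningful again.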
- -## Resource: aws_spot_fleet_request - -### valid_until Argument No Longer Uses 24 Hour Default - -Previously, when the `valid_until` argument was not configured, the resource would default to a 24-hour request. This behavior has been removed, allowing for non-expiring requests. To recreate the old behavior, the [`time_offset` resource](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/offset) can potentially be used. - -## Resource: aws_ssm_maintenance_window_task - -### logging_info Configuration Block Removal - -Switch your Terraform configuration to the `output_s3_bucket` and `output_s3_key_prefix` arguments of the `run_command_parameters` configuration block, nested within the `task_invocation_parameters` configuration block, instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.ssm_maintenance_window_task import SsmMaintenanceWindowTask -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, taskArn, taskType, windowId): - super().__init__(scope, name) - SsmMaintenanceWindowTask(self, "example", - logging_info=[{ - "s3_bucket_key_prefix": "example", - "s3_bucket_name": aws_s3_bucket_example.id - } - ], - task_arn=task_arn, - task_type=task_type, - window_id=window_id - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.ssm_maintenance_window_task import SsmMaintenanceWindowTask -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, taskArn, taskType, windowId): - super().__init__(scope, name) - SsmMaintenanceWindowTask(self, "example", - task_invocation_parameters=SsmMaintenanceWindowTaskTaskInvocationParameters( - run_command_parameters=SsmMaintenanceWindowTaskTaskInvocationParametersRunCommandParameters( - output_s3_bucket=Token.as_string(aws_s3_bucket_example.id), - output_s3_key_prefix="example" - ) - ), - task_arn=task_arn, - task_type=task_type, - window_id=window_id - ) -``` - -### task_parameters Configuration Block Removal - -Switch your Terraform configuration to `parameter` configuration blocks within the `run_command_parameters` configuration block, nested within the `task_invocation_parameters` configuration block, instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.ssm_maintenance_window_task import SsmMaintenanceWindowTask -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, taskArn, taskType, windowId): - super().__init__(scope, name) - SsmMaintenanceWindowTask(self, "example", - task_parameters=[{ - "name": "commands", - "values": ["date"] - } - ], - task_arn=task_arn, - task_type=task_type, - window_id=window_id - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.ssm_maintenance_window_task import SsmMaintenanceWindowTask -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, taskArn, taskType, windowId): - super().__init__(scope, name) - SsmMaintenanceWindowTask(self, "example", - task_invocation_parameters=SsmMaintenanceWindowTaskTaskInvocationParameters( - run_command_parameters=SsmMaintenanceWindowTaskTaskInvocationParametersRunCommandParameters( - parameter=[SsmMaintenanceWindowTaskTaskInvocationParametersRunCommandParametersParameter( - name="commands", - values=["date"] - ) - ] - ) - ), - task_arn=task_arn, - task_type=task_type, - window_id=window_id - ) -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/python/guides/version-4-upgrade.html.md b/website/docs/cdktf/python/guides/version-4-upgrade.html.md deleted file mode 100644 index 593b4cddb62..00000000000 --- a/website/docs/cdktf/python/guides/version-4-upgrade.html.md +++ /dev/null @@ -1,4657 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 4 Upgrade Guide" -description: |- - Terraform AWS Provider Version 4 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 4 Upgrade Guide - -Version 4.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. We intend this guide to help with that process and focus only on changes from version 3.X to version 4.0.0. See the [Version 3 Upgrade Guide](/docs/providers/aws/guides/version-3-upgrade.html) for information about upgrading from 2.X to version 3.0.0. - -We previously marked most of the changes we outline in this guide as deprecated in the Terraform plan/apply output throughout previous provider releases. You can find these changes, including deprecation notices, in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md). 
- -~> **NOTE:** Versions 4.0.0 through v4.8.0 of the AWS Provider introduce significant breaking changes to the `aws_s3_bucket` resource. See [S3 Bucket Refactor](#s3-bucket-refactor) for more details. -We recommend upgrading to v4.9.0 or later of the AWS Provider instead, where only non-breaking changes and deprecation notices are introduced to the `aws_s3_bucket`. See [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection) for additional considerations when upgrading to v4.9.0 or later. - -~> **NOTE:** Version 4.0.0 of the AWS Provider introduces changes to the precedence of some authentication and configuration parameters. -These changes bring the provider in line with the AWS CLI and SDKs. -See [Changes to Authentication](#changes-to-authentication) for more details. - -~> **NOTE:** Version 4.0.0 of the AWS Provider will be the last major version to support [EC2-Classic resources](#ec2-classic-resource-and-data-source-support) as AWS plans to fully retire EC2-Classic Networking. See the [AWS News Blog](https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/) for additional details. - -~> **NOTE:** Version 4.0.0 of the AWS Provider will be the last major version to support [Macie Classic resources](#macie-classic-resource-support) as AWS plans to fully retire Macie Classic. See the [Amazon Macie Classic FAQs](https://aws.amazon.com/macie/classic-faqs/) for additional details. 
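The authentication precedence change called out in the note above can be sketched as an ordered lookup (a hypothetical illustration in plain Python, not the provider's actual implementation):

```python
import os
from typing import Optional

def resolve_profile(provider_config: dict, env: Optional[dict] = None) -> Optional[str]:
    """Hypothetical model of the v4 precedence: provider configuration,
    then environment variables, then shared config/credentials files."""
    env = dict(os.environ) if env is None else env
    if provider_config.get("profile"):  # 1. provider configuration wins outright
        return provider_config["profile"]
    if env.get("AWS_PROFILE"):          # 2. environment variables
        return env["AWS_PROFILE"]
    return None                         # 3. fall back to the shared files' defaults

# An explicitly set profile is used even when AWS_PROFILE is present; if that
# profile lacks valid credentials, authentication fails rather than falling back.
print(resolve_profile({"profile": "customprofile"}, {"AWS_PROFILE": "envprofile"}))  # customprofile
print(resolve_profile({}, {"AWS_PROFILE": "envprofile"}))  # envprofile
```

This mirrors the AWS CLI and SDKs; see [Changes to Authentication](#changes-to-authentication) for the full details.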
- -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Changes to Authentication](#changes-to-authentication) -- [New Provider Arguments](#new-provider-arguments) -- [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection) (**Applicable to v4.9.0 and later of the AWS Provider**) -- [S3 Bucket Refactor](#s3-bucket-refactor) (**Only applicable to v4.0.0 through v4.8.0 of the AWS Provider**) - - [`acceleration_status` Argument](#acceleration_status-argument) - - [`acl` Argument](#acl-argument) - - [`cors_rule` Argument](#cors_rule-argument) - - [`grant` Argument](#grant-argument) - - [`lifecycle_rule` Argument](#lifecycle_rule-argument) - - [`logging` Argument](#logging-argument) - - [`object_lock_configuration` `rule` Argument](#object_lock_configuration-rule-argument) - - [`policy` Argument](#policy-argument) - - [`replication_configuration` Argument](#replication_configuration-argument) - - [`request_payer` Argument](#request_payer-argument) - - [`server_side_encryption_configuration` Argument](#server_side_encryption_configuration-argument) - - [`versioning` Argument](#versioning-argument) - - [`website`, `website_domain`, and `website_endpoint` Arguments](#website-website_domain-and-website_endpoint-arguments) -- [Full Resource Lifecycle of Default Resources](#full-resource-lifecycle-of-default-resources) - - [Resource: aws_default_subnet](#resource-aws_default_subnet) - - [Resource: aws_default_vpc](#resource-aws_default_vpc) -- [Plural Data Source Behavior](#plural-data-source-behavior) -- [Empty Strings Not Valid For Certain Resources](#empty-strings-not-valid-for-certain-resources) - - [Resource: aws_cloudwatch_event_target (Empty String)](#resource-aws_cloudwatch_event_target-empty-string) - - [Resource: aws_customer_gateway](#resource-aws_customer_gateway) - - [Resource: aws_default_network_acl](#resource-aws_default_network_acl) - - [Resource: 
aws_default_route_table](#resource-aws_default_route_table) - - [Resource: aws_default_vpc (Empty String)](#resource-aws_default_vpc-empty-string) - - [Resource: aws_efs_mount_target](#resource-aws_efs_mount_target) - - [Resource: aws_elasticsearch_domain](#resource-aws_elasticsearch_domain) - - [Resource: aws_instance](#resource-aws_instance) - - [Resource: aws_network_acl](#resource-aws_network_acl) - - [Resource: aws_route](#resource-aws_route) - - [Resource: aws_route_table](#resource-aws_route_table) - - [Resource: aws_vpc](#resource-aws_vpc) - - [Resource: aws_vpc_ipv6_cidr_block_association](#resource-aws_vpc_ipv6_cidr_block_association) -- [Data Source: aws_cloudwatch_log_group](#data-source-aws_cloudwatch_log_group) -- [Data Source: aws_subnet_ids](#data-source-aws_subnet_ids) -- [Data Source: aws_s3_bucket_object](#data-source-aws_s3_bucket_object) -- [Data Source: aws_s3_bucket_objects](#data-source-aws_s3_bucket_objects) -- [Resource: aws_batch_compute_environment](#resource-aws_batch_compute_environment) -- [Resource: aws_cloudwatch_event_target](#resource-aws_cloudwatch_event_target) -- [Resource: aws_elasticache_cluster](#resource-aws_elasticache_cluster) -- [Resource: aws_elasticache_global_replication_group](#resource-aws_elasticache_global_replication_group) -- [Resource: aws_fsx_ontap_storage_virtual_machine](#resource-aws_fsx_ontap_storage_virtual_machine) -- [Resource: aws_lb_target_group](#resource-aws_lb_target_group) -- [Resource: aws_s3_bucket_object](#resource-aws_s3_bucket_object) - - - -Additional Topics: - - - -- [EC2-Classic resource and data source support](#ec2-classic-resource-and-data-source-support) -- [Macie Classic resource support](#macie-classic-resource-support) - - - -## Provider Version Configuration - --> Before upgrading to version 4.0.0, upgrade to the most recent 3.X version of the provider and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html). 
You should not see changes you don't expect or deprecation notices. - -Use [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init -upgrade`](https://www.terraform.io/docs/commands/init.html) to download the new version. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws") -``` - -Update to the latest 4.X version: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws") -``` - -## Changes to Authentication - -The authentication configuration for the AWS Provider has changed in this version to match the behavior of other AWS products, including the AWS SDK and AWS CLI. 
_This will cause authentication failures in AWS provider configurations where you set a non-empty `profile` in the `provider` configuration but the profile does not correspond to an AWS profile with valid credentials._ - -Precedence for authentication settings is as follows: - -* `provider` configuration -* Environment variables -* Shared credentials and configuration files (_e.g._, `~/.aws/credentials` and `~/.aws/config`) - -In previous versions of the provider, you could explicitly set `profile` in the `provider`, and if the profile did not correspond to valid credentials, the provider would use credentials from environment variables. Starting in v4.0, the Terraform AWS provider enforces the precedence shown above, similarly to how the AWS SDK and AWS CLI behave. - -In other words, when you explicitly set `profile` in `provider`, the AWS provider will not use environment variables per the precedence shown above. Before v4.0, if `profile` was configured in the `provider` configuration but did not correspond to an AWS profile or valid credentials, the provider would attempt to use environment variables. **This is no longer the case.** An explicitly set profile that does not have valid credentials will cause an authentication error. - -For example, with the following, the environment variables will not be used: - -```console -$ export AWS_ACCESS_KEY_ID="anaccesskey" -$ export AWS_SECRET_ACCESS_KEY="asecretkey" -``` - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - profile="customprofile", - region="us-west-2" - ) -``` - -## New Provider Arguments - -Version 4.x adds these new `provider` arguments: - -* `assume_role.duration` - Assume role duration as a string, _e.g._, `"1h"` or `"1h30s"`. Terraform AWS Provider v4.0.0 deprecates `assume_role.duration_seconds` and a future version will remove it. -* `custom_ca_bundle` - File containing custom root and intermediate certificates. Can also be configured using the `AWS_CA_BUNDLE` environment variable. (Setting `ca_bundle` in the shared config file is not supported.) -* `ec2_metadata_service_endpoint` - Address of the EC2 metadata service (IMDS) endpoint to use. Can also be set with the `AWS_EC2_METADATA_SERVICE_ENDPOINT` environment variable. -* `ec2_metadata_service_endpoint_mode` - Mode to use in communicating with the metadata service. Valid values are `IPv4` and `IPv6`. Can also be set with the `AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE` environment variable. -* `s3_use_path_style` - Replaces `s3_force_path_style`, which has been deprecated in Terraform AWS Provider v4.0.0 and support will be removed in a future version. -* `shared_config_files` - List of paths to AWS shared config files. If not set, the default is `[~/.aws/config]`. A single value can also be set with the `AWS_CONFIG_FILE` environment variable. -* `shared_credentials_files` - List of paths to the shared credentials file. If not set, the default is `[~/.aws/credentials]`. A single value can also be set with the `AWS_SHARED_CREDENTIALS_FILE` environment variable. Replaces `shared_credentials_file`, which has been deprecated in Terraform AWS Provider v4.0.0 and support will be removed in a future version. -* `sts_region` - Region where AWS STS operations will take place. For example, `us-east-1` and `us-west-2`. 
-* `use_dualstack_endpoint` - Force the provider to resolve endpoints with DualStack capability. Can also be set with the `AWS_USE_DUALSTACK_ENDPOINT` environment variable or in a shared config file (`use_dualstack_endpoint`). -* `use_fips_endpoint` - Force the provider to resolve endpoints with FIPS capability. Can also be set with the `AWS_USE_FIPS_ENDPOINT` environment variable or in a shared config file (`use_fips_endpoint`). - -~> **NOTE:** Using the `AWS_METADATA_URL` environment variable has been deprecated in Terraform AWS Provider v4.0.0 and support will be removed in a future version. Change any scripts or environments using `AWS_METADATA_URL` to instead use `AWS_EC2_METADATA_SERVICE_ENDPOINT`. - -For example, in previous versions, to use FIPS endpoints, you would need to provide all the FIPS endpoints that you wanted to use in the `endpoints` configuration block: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - endpoints=[AwsProviderEndpoints( - ec2="https://ec2-fips.us-west-2.amazonaws.com", - s3="https://s3-fips.us-west-2.amazonaws.com", - sts="https://sts-fips.us-west-2.amazonaws.com" - ) - ] - ) -``` - -In v4.0.0, you can still set endpoints in the same way. However, you can instead use the `use_fips_endpoint` argument to have the provider automatically resolve FIPS endpoints for all supported services: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.provider import AwsProvider -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - AwsProvider(self, "aws", - use_fips_endpoint=True - ) -``` - -Note that the provider can only resolve FIPS endpoints where AWS provides FIPS support. Support depends on the service and may include `us-east-1`, `us-east-2`, `us-west-1`, `us-west-2`, `us-gov-east-1`, `us-gov-west-1`, and `ca-central-1`. For more information, see [Federal Information Processing Standard (FIPS) 140-2](https://aws.amazon.com/compliance/fips/). - -## Changes to S3 Bucket Drift Detection - -~> **NOTE:** This only applies to v4.9.0 and later of the AWS Provider. - -~> **NOTE:** If you are migrating from v3.75.x of the AWS Provider and you have already adopted the standalone S3 bucket resources (e.g. `aws_s3_bucket_lifecycle_configuration`), -a [`lifecycle` configuration block to ignore changes](https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes) to the internal parameters of the source `aws_s3_bucket` resources will no longer be necessary and can be removed upon upgrade. - -~> **NOTE:** In the next major version, v5.0, the parameters listed below will be removed entirely from the `aws_s3_bucket` resource. -For this reason, a deprecation notice is printed in the Terraform CLI for each of the parameters when used in a configuration. 
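The conditional drift detection this section describes can be modeled abstractly (a hypothetical sketch in plain Python, not the provider's code):

```python
from typing import Optional

def reports_drift(configured: Optional[str], remote: str) -> bool:
    # Hypothetical model of the v4.9.0+ rule: one of the listed bucket
    # parameters is compared against the remote object only when a value
    # is actually set in configuration.
    if configured is None:       # parameter omitted from configuration
        return False             # no drift reported, even if remote differs
    return configured != remote  # otherwise, a normal comparison

print(reports_drift(None, "Enabled"))         # False: unset, so ignored
print(reports_drift("Suspended", "Enabled"))  # True: set and different
print(reports_drift("Enabled", "Enabled"))    # False: set and matching
```

The standalone resources described below restore unconditional drift detection for each of these parameters.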
- -To remediate the breaking changes introduced to the `aws_s3_bucket` resource in v4.0.0 of the AWS Provider, -v4.9.0 and later retain the same configuration parameters of the `aws_s3_bucket` resource as in v3.x. The functionality of the `aws_s3_bucket` resource differs from v3.x -only in that Terraform will perform drift detection for each of the following parameters only if a configuration value is provided: - -* `acceleration_status` -* `acl` -* `cors_rule` -* `grant` -* `lifecycle_rule` -* `logging` -* `object_lock_configuration` -* `policy` -* `replication_configuration` -* `request_payer` -* `server_side_encryption_configuration` -* `versioning` -* `website` - -Thus, if one of these parameters was once configured and is then entirely removed from an `aws_s3_bucket` resource configuration, -Terraform will not pick up on these changes on a subsequent `terraform plan` or `terraform apply`. - -For example, given the following configuration with a single `cors_rule`: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - cors_rule=[S3BucketCorsRule( - allowed_headers=["*"], - allowed_methods=["PUT", "POST"], - allowed_origins=["https://s3-website-test.hashicorp.com"], - expose_headers=["ETag"], - max_age_seconds=3000 - ) - ] - ) -``` - -When updated to the following configuration without a `cors_rule`: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere" - ) -``` - -Terraform CLI with v4.9.0 of the AWS Provider will report back: - -```console -aws_s3_bucket.example: Refreshing state... [id=yournamehere] -... -No changes. Your infrastructure matches the configuration. -``` - -With that said, to manage changes to these parameters in the `aws_s3_bucket` resource, practitioners should configure each parameter's respective standalone resource -and perform updates directly on those new configurations. The parameters are mapped to the standalone resources as follows: - -| `aws_s3_bucket` Parameter | Standalone Resource | -|----------------------------------------|------------------------------------------------------| -| `acceleration_status` | `aws_s3_bucket_accelerate_configuration` | -| `acl` | `aws_s3_bucket_acl` | -| `cors_rule` | `aws_s3_bucket_cors_configuration` | -| `grant` | `aws_s3_bucket_acl` | -| `lifecycle_rule` | `aws_s3_bucket_lifecycle_configuration` | -| `logging` | `aws_s3_bucket_logging` | -| `object_lock_configuration` | `aws_s3_bucket_object_lock_configuration` | -| `policy` | `aws_s3_bucket_policy` | -| `replication_configuration` | `aws_s3_bucket_replication_configuration` | -| `request_payer` | `aws_s3_bucket_request_payment_configuration` | -| `server_side_encryption_configuration` | `aws_s3_bucket_server_side_encryption_configuration` | -| `versioning` | `aws_s3_bucket_versioning` | -| `website` | `aws_s3_bucket_website_configuration` | - -Going back to the earlier example, given the following configuration: - -```python -# DO NOT 
EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - cors_rule=[S3BucketCorsRule( - allowed_headers=["*"], - allowed_methods=["PUT", "POST"], - allowed_origins=["https://s3-website-test.hashicorp.com"], - expose_headers=["ETag"], - max_age_seconds=3000 - ) - ] - ) -``` - -Practitioners can upgrade to v4.9.0 and then introduce the standalone `aws_s3_bucket_cors_configuration` resource, e.g. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_cors_configuration import S3BucketCorsConfiguration -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_cors_configuration_example = S3BucketCorsConfiguration(self, "example_1", - bucket=example.id, - cors_rule=[S3BucketCorsConfigurationCorsRule( - allowed_headers=["*"], - allowed_methods=["PUT", "POST"], - allowed_origins=["https://s3-website-test.hashicorp.com"], - expose_headers=["ETag"], - max_age_seconds=3000 - ) - ] - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. 
        aws_s3_bucket_cors_configuration_example.override_logical_id("example")
```

Depending on the tools available to you, the above configuration can either be applied directly with Terraform, or the standalone resource can be imported into Terraform state. Refer to each standalone resource's _Import_ documentation for the proper syntax.

Once the standalone resources are managed by Terraform, updates and removal can be performed as needed.

The following sections depict standalone resource adoption per individual parameter. Standalone resource adoption is not required to upgrade but is recommended to ensure drift is detected by Terraform.
The examples below are not exhaustive; they aim to illustrate the important concepts involved when migrating to a standalone resource whose parameters may not entirely align with the corresponding parameter in the `aws_s3_bucket` resource.

### Migrating to `aws_s3_bucket_accelerate_configuration`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            acceleration_status="Enabled",
            bucket="yournamehere"
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_accelerate_configuration import S3BucketAccelerateConfiguration
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_accelerate_configuration_example = S3BucketAccelerateConfiguration(self, "example_1",
            bucket=example.id,
            status="Enabled"
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_accelerate_configuration_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_acl`

#### With `acl`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            acl="private",
            bucket="yournamehere"
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_acl import S3BucketAcl
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_acl_example = S3BucketAcl(self, "example_1",
            acl="private",
            bucket=example.id
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_acl_example.override_logical_id("example")
```

#### With `grant`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import Token, TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket, S3BucketGrant
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            grant=[S3BucketGrant(
                id=Token.as_string(current_user.id),
                permissions=["FULL_CONTROL"],
                type="CanonicalUser"
            ), S3BucketGrant(
                permissions=["READ_ACP", "WRITE"],
                type="Group",
                uri="http://acs.amazonaws.com/groups/s3/LogDelivery"
            )
            ]
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import Token, TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_acl import (
    S3BucketAcl,
    S3BucketAclAccessControlPolicy,
    S3BucketAclAccessControlPolicyGrant,
    S3BucketAclAccessControlPolicyGrantGrantee,
    S3BucketAclAccessControlPolicyOwner,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_acl_example = S3BucketAcl(self, "example_1",
            access_control_policy=S3BucketAclAccessControlPolicy(
                grant=[S3BucketAclAccessControlPolicyGrant(
                    grantee=S3BucketAclAccessControlPolicyGrantGrantee(
                        id=Token.as_string(current_user.id),
                        type="CanonicalUser"
                    ),
                    permission="FULL_CONTROL"
                ), S3BucketAclAccessControlPolicyGrant(
                    grantee=S3BucketAclAccessControlPolicyGrantGrantee(
                        type="Group",
                        uri="http://acs.amazonaws.com/groups/s3/LogDelivery"
                    ),
                    permission="READ_ACP"
                ), S3BucketAclAccessControlPolicyGrant(
                    grantee=S3BucketAclAccessControlPolicyGrantGrantee(
                        type="Group",
                        uri="http://acs.amazonaws.com/groups/s3/LogDelivery"
                    ),
                    permission="WRITE"
                )
                ],
                owner=S3BucketAclAccessControlPolicyOwner(
                    id=Token.as_string(current_user.id)
                )
            ),
            bucket=example.id
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_acl_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_cors_configuration`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket, S3BucketCorsRule
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            cors_rule=[S3BucketCorsRule(
                allowed_headers=["*"],
                allowed_methods=["PUT", "POST"],
                allowed_origins=["https://s3-website-test.hashicorp.com"],
                expose_headers=["ETag"],
                max_age_seconds=3000
            )
            ]
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_cors_configuration import S3BucketCorsConfiguration, S3BucketCorsConfigurationCorsRule
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_cors_configuration_example = S3BucketCorsConfiguration(self, "example_1",
            bucket=example.id,
            cors_rule=[S3BucketCorsConfigurationCorsRule(
                allowed_headers=["*"],
                allowed_methods=["PUT", "POST"],
                allowed_origins=["https://s3-website-test.hashicorp.com"],
                expose_headers=["ETag"],
                max_age_seconds=3000
            )
            ]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_cors_configuration_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_lifecycle_configuration`

~> **Note:** In version `3.x` of the provider, the `lifecycle_rule.id` argument was optional, while in version `4.x`, the `aws_s3_bucket_lifecycle_configuration.rule.id` argument is required.
Use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html) to retrieve the source bucket's lifecycle configuration and determine the ID.

#### For Lifecycle Rules with no `prefix` previously configured

~> **Note:** When configuring the `rule.filter` configuration block in the new `aws_s3_bucket_lifecycle_configuration` resource, use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html)
to get the source bucket's lifecycle configuration and determine whether the `Filter` is configured as `"Filter" : {}` or `"Filter" : { "Prefix": "" }`.
If AWS returns the former, configure `rule.filter` as `filter {}`. Otherwise, configure neither a `rule.filter` nor a `rule.prefix` parameter, as shown here:

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import (
    S3Bucket,
    S3BucketLifecycleRule,
    S3BucketLifecycleRuleNoncurrentVersionExpiration,
    S3BucketLifecycleRuleNoncurrentVersionTransition,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            lifecycle_rule=[S3BucketLifecycleRule(
                enabled=True,
                id="Keep previous version 30 days, then in Glacier another 60",
                noncurrent_version_expiration=S3BucketLifecycleRuleNoncurrentVersionExpiration(
                    days=90
                ),
                noncurrent_version_transition=[S3BucketLifecycleRuleNoncurrentVersionTransition(
                    days=30,
                    storage_class="GLACIER"
                )
                ]
            ), S3BucketLifecycleRule(
                abort_incomplete_multipart_upload_days=7,
                enabled=True,
                id="Delete old incomplete multi-part uploads"
            )
            ]
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
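# Mapping notes (editor's addition, not part of the generated output): in the
# v4-style resource below, `enabled=True` becomes `status="Enabled"`, and the
# `days` argument of noncurrent-version expirations and transitions becomes
# `noncurrent_days`.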
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_lifecycle_configuration import (
    S3BucketLifecycleConfiguration,
    S3BucketLifecycleConfigurationRule,
    S3BucketLifecycleConfigurationRuleAbortIncompleteMultipartUpload,
    S3BucketLifecycleConfigurationRuleNoncurrentVersionExpiration,
    S3BucketLifecycleConfigurationRuleNoncurrentVersionTransition,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
            bucket=example.id,
            rule=[S3BucketLifecycleConfigurationRule(
                id="Keep previous version 30 days, then in Glacier another 60",
                noncurrent_version_expiration=S3BucketLifecycleConfigurationRuleNoncurrentVersionExpiration(
                    noncurrent_days=90
                ),
                noncurrent_version_transition=[S3BucketLifecycleConfigurationRuleNoncurrentVersionTransition(
                    noncurrent_days=30,
                    storage_class="GLACIER"
                )
                ],
                status="Enabled"
            ), S3BucketLifecycleConfigurationRule(
                abort_incomplete_multipart_upload=S3BucketLifecycleConfigurationRuleAbortIncompleteMultipartUpload(
                    days_after_initiation=7
                ),
                id="Delete old incomplete multi-part uploads",
                status="Enabled"
            )
            ]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example")
```

#### For Lifecycle Rules with `prefix` previously configured as an empty string

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket, S3BucketLifecycleRule, S3BucketLifecycleRuleTransition
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            lifecycle_rule=[S3BucketLifecycleRule(
                enabled=True,
                id="log-expiration",
                prefix="",
                transition=[S3BucketLifecycleRuleTransition(
                    days=30,
                    storage_class="STANDARD_IA"
                ), S3BucketLifecycleRuleTransition(
                    days=180,
                    storage_class="GLACIER"
                )
                ]
            )
            ]
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_lifecycle_configuration import (
    S3BucketLifecycleConfiguration,
    S3BucketLifecycleConfigurationRule,
    S3BucketLifecycleConfigurationRuleTransition,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
            bucket=example.id,
            rule=[S3BucketLifecycleConfigurationRule(
                id="log-expiration",
                status="Enabled",
                transition=[S3BucketLifecycleConfigurationRuleTransition(
                    days=30,
                    storage_class="STANDARD_IA"
                ), S3BucketLifecycleConfigurationRuleTransition(
                    days=180,
                    storage_class="GLACIER"
                )
                ]
            )
            ]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example")
```

#### For Lifecycle Rules with `prefix`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket, S3BucketLifecycleRule, S3BucketLifecycleRuleTransition
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            lifecycle_rule=[S3BucketLifecycleRule(
                enabled=True,
                id="log-expiration",
                prefix="foobar",
                transition=[S3BucketLifecycleRuleTransition(
                    days=30,
                    storage_class="STANDARD_IA"
                ), S3BucketLifecycleRuleTransition(
                    days=180,
                    storage_class="GLACIER"
                )
                ]
            )
            ]
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_lifecycle_configuration import (
    S3BucketLifecycleConfiguration,
    S3BucketLifecycleConfigurationRule,
    S3BucketLifecycleConfigurationRuleFilter,
    S3BucketLifecycleConfigurationRuleTransition,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
            bucket=example.id,
            rule=[S3BucketLifecycleConfigurationRule(
                filter=S3BucketLifecycleConfigurationRuleFilter(
                    prefix="foobar"
                ),
                id="log-expiration",
                status="Enabled",
                transition=[S3BucketLifecycleConfigurationRuleTransition(
                    days=30,
                    storage_class="STANDARD_IA"
                ), S3BucketLifecycleConfigurationRuleTransition(
                    days=180,
                    storage_class="GLACIER"
                )
                ]
            )
            ]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example")
```

#### For Lifecycle Rules with `prefix` and `tags`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import (
    S3Bucket,
    S3BucketLifecycleRule,
    S3BucketLifecycleRuleExpiration,
    S3BucketLifecycleRuleTransition,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            lifecycle_rule=[S3BucketLifecycleRule(
                enabled=True,
                expiration=S3BucketLifecycleRuleExpiration(
                    days=90
                ),
                id="log",
                prefix="log/",
                tags={
                    "autoclean": "true",
                    "rule": "log"
                },
                transition=[S3BucketLifecycleRuleTransition(
                    days=30,
                    storage_class="STANDARD_IA"
                ), S3BucketLifecycleRuleTransition(
                    days=60,
                    storage_class="GLACIER"
                )
                ]
            ), S3BucketLifecycleRule(
                enabled=True,
                expiration=S3BucketLifecycleRuleExpiration(
                    date="2022-12-31"
                ),
                id="tmp",
                prefix="tmp/"
            )
            ]
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
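# Mapping note (editor's addition, not part of the generated output): a v3 rule
# combining `prefix` and `tags` maps to a v4 `filter` with an `and` block,
# while a rule with only `prefix` maps to `filter.prefix`, as the converted
# rules below illustrate.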
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_lifecycle_configuration import (
    S3BucketLifecycleConfiguration,
    S3BucketLifecycleConfigurationRule,
    S3BucketLifecycleConfigurationRuleExpiration,
    S3BucketLifecycleConfigurationRuleFilter,
    S3BucketLifecycleConfigurationRuleFilterAnd,
    S3BucketLifecycleConfigurationRuleTransition,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
            bucket=example.id,
            rule=[S3BucketLifecycleConfigurationRule(
                expiration=S3BucketLifecycleConfigurationRuleExpiration(
                    days=90
                ),
                filter=S3BucketLifecycleConfigurationRuleFilter(
                    and_=S3BucketLifecycleConfigurationRuleFilterAnd(
                        prefix="log/",
                        tags={
                            "autoclean": "true",
                            "rule": "log"
                        }
                    )
                ),
                id="log",
                status="Enabled",
                transition=[S3BucketLifecycleConfigurationRuleTransition(
                    days=30,
                    storage_class="STANDARD_IA"
                ), S3BucketLifecycleConfigurationRuleTransition(
                    days=60,
                    storage_class="GLACIER"
                )
                ]
            ), S3BucketLifecycleConfigurationRule(
                expiration=S3BucketLifecycleConfigurationRuleExpiration(
                    date="2022-12-31T00:00:00Z"
                ),
                filter=S3BucketLifecycleConfigurationRuleFilter(
                    prefix="tmp/"
                ),
                id="tmp",
                status="Enabled"
            )
            ]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_logging`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket, S3BucketLogging
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        log_bucket = S3Bucket(self, "log_bucket",
            bucket="example-log-bucket"
        )
        S3Bucket(self, "example",
            bucket="yournamehere",
            logging=S3BucketLogging(
                target_bucket=log_bucket.id,
                target_prefix="log/"
            )
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_logging import S3BucketLoggingA
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        log_bucket = S3Bucket(self, "log_bucket",
            bucket="example-log-bucket"
        )
        aws_s3_bucket_logging_example = S3BucketLoggingA(self, "example_2",
            bucket=example.id,
            target_bucket=log_bucket.id,
            target_prefix="log/"
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_logging_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_object_lock_configuration`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
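# Note (editor's addition, not part of the generated output): in this v3-style
# configuration, Object Lock is enabled via the nested
# `object_lock_configuration.object_lock_enabled` argument; the v4 update that
# follows moves this to the bucket's top-level `object_lock_enabled` flag.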
#
from imports.aws.s3_bucket import (
    S3Bucket,
    S3BucketObjectLockConfiguration,
    S3BucketObjectLockConfigurationRule,
    S3BucketObjectLockConfigurationRuleDefaultRetention,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            object_lock_configuration=S3BucketObjectLockConfiguration(
                object_lock_enabled="Enabled",
                rule=S3BucketObjectLockConfigurationRule(
                    default_retention=S3BucketObjectLockConfigurationRuleDefaultRetention(
                        days=3,
                        mode="COMPLIANCE"
                    )
                )
            )
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_object_lock_configuration import (
    S3BucketObjectLockConfigurationA,
    S3BucketObjectLockConfigurationRuleA,
    S3BucketObjectLockConfigurationRuleDefaultRetentionA,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere",
            object_lock_enabled=True
        )
        aws_s3_bucket_object_lock_configuration_example = S3BucketObjectLockConfigurationA(self, "example_1",
            bucket=example.id,
            rule=S3BucketObjectLockConfigurationRuleA(
                default_retention=S3BucketObjectLockConfigurationRuleDefaultRetentionA(
                    days=3,
                    mode="COMPLIANCE"
                )
            )
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_object_lock_configuration_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_policy`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            policy="{\n \"Id\": \"Policy1446577137248\",\n \"Statement\": [\n {\n \"Action\": \"s3:PutObject\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"${" + current.arn + "}\"\n },\n \"Resource\": \"arn:${" + data_aws_partition_current.partition + "}:s3:::yournamehere/*\",\n \"Sid\": \"Stmt1446575236270\"\n }\n ],\n \"Version\": \"2012-10-17\"\n}\n\n"
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_policy import S3BucketPolicy
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_policy_example = S3BucketPolicy(self, "example_1",
            bucket=example.id,
            policy="{\n \"Id\": \"Policy1446577137248\",\n \"Statement\": [\n {\n \"Action\": \"s3:PutObject\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"${" + current.arn + "}\"\n },\n \"Resource\": \"${" + example.arn + "}/*\",\n \"Sid\": \"Stmt1446575236270\"\n }\n ],\n \"Version\": \"2012-10-17\"\n}\n\n"
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_policy_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_replication_configuration`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import (
    S3Bucket,
    S3BucketReplicationConfiguration,
    S3BucketReplicationConfigurationRules,
    S3BucketReplicationConfigurationRulesDestination,
    S3BucketReplicationConfigurationRulesDestinationMetrics,
    S3BucketReplicationConfigurationRulesDestinationReplicationTime,
    S3BucketReplicationConfigurationRulesFilter,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            provider=central,
            replication_configuration=S3BucketReplicationConfiguration(
                role=replication.arn,
                rules=[S3BucketReplicationConfigurationRules(
                    destination=S3BucketReplicationConfigurationRulesDestination(
                        bucket=destination.arn,
                        metrics=S3BucketReplicationConfigurationRulesDestinationMetrics(
                            minutes=15,
                            status="Enabled"
                        ),
                        replication_time=S3BucketReplicationConfigurationRulesDestinationReplicationTime(
                            minutes=15,
                            status="Enabled"
                        ),
                        storage_class="STANDARD"
                    ),
                    filter=S3BucketReplicationConfigurationRulesFilter(
                        tags={}
                    ),
                    id="foobar",
                    status="Enabled"
                )
                ]
            )
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
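# Assumption (editor's addition, not part of the generated output): when a
# `filter` is configured, the v4 `aws_s3_bucket_replication_configuration`
# schema expects an explicit `delete_marker_replication` status, which is why
# the converted rule below includes that block.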
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_replication_configuration import (
    S3BucketReplicationConfigurationA,
    S3BucketReplicationConfigurationRule,
    S3BucketReplicationConfigurationRuleDeleteMarkerReplication,
    S3BucketReplicationConfigurationRuleDestination,
    S3BucketReplicationConfigurationRuleDestinationMetrics,
    S3BucketReplicationConfigurationRuleDestinationMetricsEventThreshold,
    S3BucketReplicationConfigurationRuleDestinationReplicationTime,
    S3BucketReplicationConfigurationRuleDestinationReplicationTimeTime,
    S3BucketReplicationConfigurationRuleFilter,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            provider=central
        )
        aws_s3_bucket_replication_configuration_example = S3BucketReplicationConfigurationA(self, "example_1",
            bucket=source.id,
            role=replication.arn,
            rule=[S3BucketReplicationConfigurationRule(
                delete_marker_replication=S3BucketReplicationConfigurationRuleDeleteMarkerReplication(
                    status="Enabled"
                ),
                destination=S3BucketReplicationConfigurationRuleDestination(
                    bucket=destination.arn,
                    metrics=S3BucketReplicationConfigurationRuleDestinationMetrics(
                        event_threshold=S3BucketReplicationConfigurationRuleDestinationMetricsEventThreshold(
                            minutes=15
                        ),
                        status="Enabled"
                    ),
                    replication_time=S3BucketReplicationConfigurationRuleDestinationReplicationTime(
                        status="Enabled",
                        time=S3BucketReplicationConfigurationRuleDestinationReplicationTimeTime(
                            minutes=15
                        )
                    ),
                    storage_class="STANDARD"
                ),
                filter=S3BucketReplicationConfigurationRuleFilter(),
                id="foobar",
                status="Enabled"
            )
            ]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_replication_configuration_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_request_payment_configuration`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            request_payer="Requester"
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_request_payment_configuration import S3BucketRequestPaymentConfiguration
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_request_payment_configuration_example = S3BucketRequestPaymentConfiguration(self, "example_1",
            bucket=example.id,
            payer="Requester"
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_s3_bucket_request_payment_configuration_example.override_logical_id("example")
```

### Migrating to `aws_s3_bucket_server_side_encryption_configuration`

Given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import (
    S3Bucket,
    S3BucketServerSideEncryptionConfiguration,
    S3BucketServerSideEncryptionConfigurationRule,
    S3BucketServerSideEncryptionConfigurationRuleApplyServerSideEncryptionByDefault,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        S3Bucket(self, "example",
            bucket="yournamehere",
            server_side_encryption_configuration=S3BucketServerSideEncryptionConfiguration(
                rule=S3BucketServerSideEncryptionConfigurationRule(
                    apply_server_side_encryption_by_default=S3BucketServerSideEncryptionConfigurationRuleApplyServerSideEncryptionByDefault(
                        kms_master_key_id=mykey.arn,
                        sse_algorithm="aws:kms"
                    )
                )
            )
        )
```

Update the configuration to:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.s3_bucket import S3Bucket
from imports.aws.s3_bucket_server_side_encryption_configuration import (
    S3BucketServerSideEncryptionConfigurationA,
    S3BucketServerSideEncryptionConfigurationRuleA,
    S3BucketServerSideEncryptionConfigurationRuleApplyServerSideEncryptionByDefaultA,
)
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        example = S3Bucket(self, "example",
            bucket="yournamehere"
        )
        aws_s3_bucket_server_side_encryption_configuration_example = S3BucketServerSideEncryptionConfigurationA(self, "example_1",
            bucket=example.id,
            rule=[S3BucketServerSideEncryptionConfigurationRuleA(
                apply_server_side_encryption_by_default=S3BucketServerSideEncryptionConfigurationRuleApplyServerSideEncryptionByDefaultA(
                    kms_master_key_id=mykey.arn,
                    sse_algorithm="aws:kms"
                )
            )
            ]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
- aws_s3_bucket_server_side_encryption_configuration_example.override_logical_id("example") -``` - -### Migrating to `aws_s3_bucket_versioning` - -~> **NOTE:** As `aws_s3_bucket_versioning` is a separate resource, any S3 objects for which versioning is important (_e.g._, a truststore for mutual TLS authentication) must implicitly or explicitly depend on the `aws_s3_bucket_versioning` resource. Otherwise, the S3 objects may be created before versioning has been set. [See below](#ensure-objects-depend-on-versioning) for an example. Also note that AWS recommends waiting 15 minutes after enabling versioning on a bucket before putting or deleting objects in/from the bucket. - -#### Buckets With Versioning Enabled - -Given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - versioning=S3BucketVersioning( - enabled=True - ) - ) -``` - -Update the configuration to: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_versioning import S3BucketVersioningA
-class MyConvertedCode(TerraformStack):
- def __init__(self, scope, name):
- super().__init__(scope, name)
- example = S3Bucket(self, "example",
- bucket="yournamehere"
- )
- aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1",
- bucket=example.id,
- versioning_configuration=S3BucketVersioningVersioningConfiguration(
- status="Enabled"
- )
- )
- # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
- aws_s3_bucket_versioning_example.override_logical_id("example")
-```
-
-#### Buckets With Versioning Disabled or Suspended
-
-Depending on the version of the Terraform AWS Provider you are migrating from, the interpretation of `versioning.enabled = false`
-in your `aws_s3_bucket` resource differs, and thus so does the migration to the `aws_s3_bucket_versioning` resource, as follows.
-
-If you are migrating from the Terraform AWS Provider `v3.70.0` or later:
-
-* For new S3 buckets, `enabled = false` is synonymous with `Disabled`.
-* For existing S3 buckets, `enabled = false` is synonymous with `Suspended`.
-
-If you are migrating from an earlier version of the Terraform AWS Provider:
-
-* For both new and existing S3 buckets, `enabled = false` is synonymous with `Suspended`.
-
-Given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - versioning=S3BucketVersioning( - enabled=False - ) - ) -``` - -Update the configuration to one of the following: - -* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was never enabled: - - ```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Disabled" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_versioning_example.override_logical_id("example") -``` - -* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was enabled at one point: - - ```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Suspended" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_versioning_example.override_logical_id("example") -``` - -* If migrating from an earlier version of Terraform AWS Provider: - - ```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Suspended" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. 
- aws_s3_bucket_versioning_example.override_logical_id("example") -``` - -#### Ensure Objects Depend on Versioning - -When you create an object whose `version_id` you need and an `aws_s3_bucket_versioning` resource in the same configuration, you are more likely to have success by ensuring the `s3_object` depends either implicitly (see below) or explicitly (i.e., using `depends_on = [aws_s3_bucket_versioning.example]`) on the `aws_s3_bucket_versioning` resource. - -~> **NOTE:** For critical and/or production S3 objects, do not create a bucket, enable versioning, and create an object in the bucket within the same configuration. Doing so will not allow the AWS-recommended 15 minutes between enabling versioning and writing to the bucket. - -This example shows the `aws_s3_object.example` depending implicitly on the versioning resource through the reference to `aws_s3_bucket_versioning.example.id` to define `bucket`: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -from imports.aws.s3_object import S3Object -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yotto" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Enabled" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. 
- aws_s3_bucket_versioning_example.override_logical_id("example") - aws_s3_object_example = S3Object(self, "example_2", - bucket=Token.as_string(aws_s3_bucket_versioning_example.id), - key="droeloe", - source="example.txt" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_object_example.override_logical_id("example") -``` - -### Migrating to `aws_s3_bucket_website_configuration` - -Given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - website=S3BucketWebsite( - error_document="error.html", - index_document="index.html" - ) - ) -``` - -Update the configuration to: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_website_configuration import S3BucketWebsiteConfiguration
-class MyConvertedCode(TerraformStack):
- def __init__(self, scope, name):
- super().__init__(scope, name)
- example = S3Bucket(self, "example",
- bucket="yournamehere"
- )
- aws_s3_bucket_website_configuration_example = S3BucketWebsiteConfiguration(self, "example_1",
- bucket=example.id,
- error_document=S3BucketWebsiteConfigurationErrorDocument(
- key="error.html"
- ),
- index_document=S3BucketWebsiteConfigurationIndexDocument(
- suffix="index.html"
- )
- )
- # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
- aws_s3_bucket_website_configuration_example.override_logical_id("example")
-```
-
-Given this previous configuration that uses the `aws_s3_bucket` parameter `website_domain` with `aws_route53_record`:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.route53_record import Route53Record -from imports.aws.route53_zone import Route53Zone -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - main = Route53Zone(self, "main", - name="domain.test" - ) - website = S3Bucket(self, "website", - website=S3BucketWebsite( - error_document="error.html", - index_document="index.html" - ) - ) - Route53Record(self, "alias", - alias=Route53RecordAlias( - evaluate_target_health=True, - name=website.website_domain, - zone_id=website.hosted_zone_id - ), - name="www", - type="A", - zone_id=main.zone_id - ) -``` - -Update the configuration to use the `aws_s3_bucket_website_configuration` resource and its `website_domain` parameter: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.route53_record import Route53Record -from imports.aws.route53_zone import Route53Zone -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_website_configuration import S3BucketWebsiteConfiguration -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - main = Route53Zone(self, "main", - name="domain.test" - ) - website = S3Bucket(self, "website") - example = S3BucketWebsiteConfiguration(self, "example", - bucket=website.id, - index_document=S3BucketWebsiteConfigurationIndexDocument( - suffix="index.html" - ) - ) - Route53Record(self, "alias", - alias=Route53RecordAlias( - evaluate_target_health=True, - name=example.website_domain, - zone_id=website.hosted_zone_id - ), - name="www", - type="A", - zone_id=main.zone_id - ) -``` - -## S3 Bucket Refactor - -~> **NOTE:** This only applies to v4.0.0 through v4.8.0 of the AWS Provider, which introduce significant breaking -changes to the `aws_s3_bucket` resource. We recommend upgrading to v4.9.0 of the AWS Provider instead. See the section above, [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection), for additional upgrade considerations. - -To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the `aws_s3_bucket` resource have become **read-only**. - -Configurations dependent on these arguments should be updated to use the corresponding `aws_s3_bucket_*` resource in order to prevent Terraform from reporting “unconfigurable attribute” errors for read-only arguments. Once updated, it is recommended to import new `aws_s3_bucket_*` resources into Terraform state. 
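
The import ID for each split-out resource is generally the S3 bucket name; some resources, such as `aws_s3_bucket_acl`, append a comma-separated qualifier (for example, `yournamehere,private`). As a rough sketch of this pattern (the helper below is hypothetical and not part of any tooling; `yournamehere` is a placeholder bucket name):

```python
def import_command(resource_type, resource_name, bucket, extra=None):
    """Build a `terraform import` command for a split-out aws_s3_bucket_* resource.

    The import ID is the bucket name, optionally followed by a comma-separated
    qualifier (e.g., the canned ACL for aws_s3_bucket_acl).
    """
    import_id = bucket if extra is None else f"{bucket},{extra}"
    return f"terraform import {resource_type}.{resource_name} {import_id}"

# Import IDs match the examples shown in the sections below.
print(import_command("aws_s3_bucket_versioning", "example", "yournamehere"))
print(import_command("aws_s3_bucket_acl", "example", "yournamehere", "private"))
```

See each section below for the exact import ID expected by the corresponding resource.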
-
-If you do not anticipate future modifications to the S3 bucket settings associated with these read-only arguments, or do not need drift detection for them, remove these arguments from your `aws_s3_bucket` resource configurations to prevent Terraform from reporting “unconfigurable attribute” errors; the state of these arguments is preserved but is subject to change by modifications made outside Terraform.
-
-~> **NOTE:** Each of the new `aws_s3_bucket_*` resources relies on S3 API calls that use a `PUT` action to modify the target S3 bucket. Because these calls follow standard HTTP methods for REST APIs, they **should** handle situations where the target configuration already exists. While it is not strictly necessary to import new `aws_s3_bucket_*` resources when the updated configuration matches the configuration used in previous versions of the AWS provider, skipping this step causes the first plan after a configuration change to report that the new `aws_s3_bucket_*` resources will be created, making it harder to determine whether the appropriate actions will be taken.
-
-### `acceleration_status` Argument
-
-Switch your Terraform configuration to the [`aws_s3_bucket_accelerate_configuration` resource](/docs/providers/aws/r/s3_bucket_accelerate_configuration.html) instead.
-
-For example, given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.s3_bucket import S3Bucket
-class MyConvertedCode(TerraformStack):
- def __init__(self, scope, name):
- super().__init__(scope, name)
- S3Bucket(self, "example",
- acceleration_status="Enabled",
- bucket="yournamehere"
- )
-```
-
-You will get the following error after upgrading:
-
-```
-│ Error: Value for unconfigurable attribute
-│
-│ with aws_s3_bucket.example,
-│ on main.tf line 1, in resource "aws_s3_bucket" "example":
-│ 1: resource "aws_s3_bucket" "example" {
-│
-│ Can't configure a value for "acceleration_status": its value will be decided automatically based on the result of applying this configuration.
-```
-
-Since `acceleration_status` is now read only, update your configuration to use the `aws_s3_bucket_accelerate_configuration`
-resource and remove `acceleration_status` in the `aws_s3_bucket` resource:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_accelerate_configuration import S3BucketAccelerateConfiguration
-class MyConvertedCode(TerraformStack):
- def __init__(self, scope, name):
- super().__init__(scope, name)
- example = S3Bucket(self, "example",
- bucket="yournamehere"
- )
- aws_s3_bucket_accelerate_configuration_example = S3BucketAccelerateConfiguration(self, "example_1",
- bucket=example.id,
- status="Enabled"
- )
- # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
- aws_s3_bucket_accelerate_configuration_example.override_logical_id("example") -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_accelerate_configuration.example yournamehere -aws_s3_bucket_accelerate_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_accelerate_configuration.example: Import prepared! - Prepared aws_s3_bucket_accelerate_configuration for import -aws_s3_bucket_accelerate_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `acl` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_acl` resource](/docs/providers/aws/r/s3_bucket_acl.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - acl="private", - bucket="yournamehere" - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "acl": its value will be decided automatically based on the result of applying this configuration. 
-``` - -Since `acl` is now read only, update your configuration to use the `aws_s3_bucket_acl` -resource and remove the `acl` argument in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_acl import S3BucketAcl -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_acl_example = S3BucketAcl(self, "example_1", - acl="private", - bucket=example.id - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_acl_example.override_logical_id("example") -``` - -~> **NOTE:** When importing into `aws_s3_bucket_acl`, make sure you use the S3 bucket name (_e.g._, `yournamehere` in the example above) as part of the ID, and _not_ the Terraform bucket configuration name (_e.g._, `example` in the example above). - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_acl.example yournamehere,private -aws_s3_bucket_acl.example: Importing from ID "yournamehere,private"... -aws_s3_bucket_acl.example: Import prepared! - Prepared aws_s3_bucket_acl for import -aws_s3_bucket_acl.example: Refreshing state... [id=yournamehere,private] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. 
-``` - -### `cors_rule` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_cors_configuration` resource](/docs/providers/aws/r/s3_bucket_cors_configuration.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - cors_rule=[S3BucketCorsRule( - allowed_headers=["*"], - allowed_methods=["PUT", "POST"], - allowed_origins=["https://s3-website-test.hashicorp.com"], - expose_headers=["ETag"], - max_age_seconds=3000 - ) - ] - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "cors_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `cors_rule` is now read only, update your configuration to use the `aws_s3_bucket_cors_configuration` -resource and remove `cors_rule` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_cors_configuration import S3BucketCorsConfiguration -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_cors_configuration_example = S3BucketCorsConfiguration(self, "example_1", - bucket=example.id, - cors_rule=[S3BucketCorsConfigurationCorsRule( - allowed_headers=["*"], - allowed_methods=["PUT", "POST"], - allowed_origins=["https://s3-website-test.hashicorp.com"], - expose_headers=["ETag"], - max_age_seconds=3000 - ) - ] - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_cors_configuration_example.override_logical_id("example") -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_cors_configuration.example yournamehere -aws_s3_bucket_cors_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_cors_configuration.example: Import prepared! - Prepared aws_s3_bucket_cors_configuration for import -aws_s3_bucket_cors_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `grant` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_acl` resource](/docs/providers/aws/r/s3_bucket_acl.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - grant=[S3BucketGrant( - id=Token.as_string(current_user.id), - permissions=["FULL_CONTROL"], - type="CanonicalUser" - ), S3BucketGrant( - permissions=["READ_ACP", "WRITE"], - type="Group", - uri="http://acs.amazonaws.com/groups/s3/LogDelivery" - ) - ] - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "grant": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `grant` is now read only, update your configuration to use the `aws_s3_bucket_acl` -resource and remove `grant` in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_acl import S3BucketAcl -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_acl_example = S3BucketAcl(self, "example_1", - access_control_policy=S3BucketAclAccessControlPolicy( - grant=[S3BucketAclAccessControlPolicyGrant( - grantee=S3BucketAclAccessControlPolicyGrantGrantee( - id=Token.as_string(current_user.id), - type="CanonicalUser" - ), - permission="FULL_CONTROL" - ), S3BucketAclAccessControlPolicyGrant( - grantee=S3BucketAclAccessControlPolicyGrantGrantee( - type="Group", - uri="http://acs.amazonaws.com/groups/s3/LogDelivery" - ), - permission="READ_ACP" - ), S3BucketAclAccessControlPolicyGrant( - grantee=S3BucketAclAccessControlPolicyGrantGrantee( - type="Group", - uri="http://acs.amazonaws.com/groups/s3/LogDelivery" - ), - permission="WRITE" - ) - ], - owner=S3BucketAclAccessControlPolicyOwner( - id=Token.as_string(current_user.id) - ) - ), - bucket=example.id - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_acl_example.override_logical_id("example") -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_acl.example yournamehere -aws_s3_bucket_acl.example: Importing from ID "yournamehere"... -aws_s3_bucket_acl.example: Import prepared! - Prepared aws_s3_bucket_acl for import -aws_s3_bucket_acl.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. 
-``` - -### `lifecycle_rule` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_lifecycle_configuration` resource](/docs/providers/aws/r/s3_bucket_lifecycle_configuration.html) instead. - -#### For Lifecycle Rules with no `prefix` previously configured - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - lifecycle_rule=[S3BucketLifecycleRule( - enabled=True, - id="Keep previous version 30 days, then in Glacier another 60", - noncurrent_version_expiration=S3BucketLifecycleRuleNoncurrentVersionExpiration( - days=90 - ), - noncurrent_version_transition=[S3BucketLifecycleRuleNoncurrentVersionTransition( - days=30, - storage_class="GLACIER" - ) - ] - ), S3BucketLifecycleRule( - abort_incomplete_multipart_upload_days=7, - enabled=True, - id="Delete old incomplete multi-part uploads" - ) - ] - ) -``` - -You will receive the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `lifecycle_rule` argument changed to read-only, update the configuration to use the `aws_s3_bucket_lifecycle_configuration` -resource and remove `lifecycle_rule` and its nested arguments in the `aws_s3_bucket` resource. 
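
Before rewriting the rules, it can help to inspect the source bucket's existing lifecycle configuration, because AWS may report a rule's filter either as an empty `Filter` or as a `Filter` with an empty `Prefix`, and the two are migrated differently. A minimal sketch of telling them apart, assuming a hypothetical response payload (in practice this would come from the AWS CLI or an SDK call):

```python
import json

# Hypothetical response from:
#   aws s3api get-bucket-lifecycle-configuration --bucket yournamehere
response = json.loads("""
{"Rules": [
  {"ID": "empty-filter", "Status": "Enabled", "Filter": {}},
  {"ID": "empty-prefix", "Status": "Enabled", "Filter": {"Prefix": ""}}
]}
""")

for rule in response["Rules"]:
    fltr = rule.get("Filter", {})
    if fltr == {}:
        # AWS returned "Filter" : {} -- configure an empty `filter {}` block.
        print(f'{rule["ID"]}: use an empty filter block')
    elif fltr == {"Prefix": ""}:
        # AWS returned "Filter" : { "Prefix": "" } -- omit both filter and prefix.
        print(f'{rule["ID"]}: omit filter and prefix')
```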
-
-~> **Note:** When configuring the `rule.filter` configuration block in the new `aws_s3_bucket_lifecycle_configuration` resource, use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html)
-to get the source bucket's lifecycle configuration and determine if the `Filter` is configured as `"Filter" : {}` or `"Filter" : { "Prefix": "" }`.
-If AWS returns the former, configure `rule.filter` as `filter {}`. Otherwise, configure neither a `rule.filter` nor a `rule.prefix` parameter, as shown here:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_lifecycle_configuration import S3BucketLifecycleConfiguration
-class MyConvertedCode(TerraformStack):
- def __init__(self, scope, name):
- super().__init__(scope, name)
- example = S3Bucket(self, "example",
- bucket="yournamehere"
- )
- aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
- bucket=example.id,
- rule=[S3BucketLifecycleConfigurationRule(
- id="Keep previous version 30 days, then in Glacier another 60",
- noncurrent_version_expiration=S3BucketLifecycleConfigurationRuleNoncurrentVersionExpiration(
- noncurrent_days=90
- ),
- noncurrent_version_transition=[S3BucketLifecycleConfigurationRuleNoncurrentVersionTransition(
- noncurrent_days=30,
- storage_class="GLACIER"
- )
- ],
- status="Enabled"
- ), S3BucketLifecycleConfigurationRule(
- abort_incomplete_multipart_upload=S3BucketLifecycleConfigurationRuleAbortIncompleteMultipartUpload(
- days_after_initiation=7
- ),
- id="Delete old incomplete multi-part uploads",
- status="Enabled"
- )
- ] - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example") -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere -aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_lifecycle_configuration.example: Import prepared! - Prepared aws_s3_bucket_lifecycle_configuration for import -aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -#### For Lifecycle Rules with `prefix` previously configured as an empty string - -For example, given this configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - lifecycle_rule=[S3BucketLifecycleRule( - enabled=True, - id="log-expiration", - prefix="", - transition=[S3BucketLifecycleRuleTransition( - days=30, - storage_class="STANDARD_IA" - ), S3BucketLifecycleRuleTransition( - days=180, - storage_class="GLACIER" - ) - ] - ) - ] - ) -``` - -You will receive the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `lifecycle_rule` argument changed to read-only, update the configuration to use the `aws_s3_bucket_lifecycle_configuration` -resource and remove `lifecycle_rule` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_lifecycle_configuration import S3BucketLifecycleConfiguration
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere"
-        )
-        aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
-            bucket=example.id,
-            rule=[S3BucketLifecycleConfigurationRule(
-                id="log-expiration",
-                status="Enabled",
-                transition=[S3BucketLifecycleConfigurationRuleTransition(
-                    days=30,
-                    storage_class="STANDARD_IA"
-                ), S3BucketLifecycleConfigurationRuleTransition(
-                    days=180,
-                    storage_class="GLACIER"
-                )
-                ]
-            )
-            ]
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere
-aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_lifecycle_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_lifecycle_configuration for import
-aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above. These resources are now in
-your Terraform state and will henceforth be managed by Terraform.
-```
-
-#### For Lifecycle Rules with `prefix`
-
-For example, given this configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - lifecycle_rule=[S3BucketLifecycleRule( - enabled=True, - id="log-expiration", - prefix="foobar", - transition=[S3BucketLifecycleRuleTransition( - days=30, - storage_class="STANDARD_IA" - ), S3BucketLifecycleRuleTransition( - days=180, - storage_class="GLACIER" - ) - ] - ) - ] - ) -``` - -You will receive the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `lifecycle_rule` argument changed to read-only, update the configuration to use the `aws_s3_bucket_lifecycle_configuration` -resource and remove `lifecycle_rule` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_lifecycle_configuration import S3BucketLifecycleConfiguration
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere"
-        )
-        aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
-            bucket=example.id,
-            rule=[S3BucketLifecycleConfigurationRule(
-                filter=S3BucketLifecycleConfigurationRuleFilter(
-                    prefix="foobar"
-                ),
-                id="log-expiration",
-                status="Enabled",
-                transition=[S3BucketLifecycleConfigurationRuleTransition(
-                    days=30,
-                    storage_class="STANDARD_IA"
-                ), S3BucketLifecycleConfigurationRuleTransition(
-                    days=180,
-                    storage_class="GLACIER"
-                )
-                ]
-            )
-            ]
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere
-aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_lifecycle_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_lifecycle_configuration for import
-aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above. These resources are now in
-your Terraform state and will henceforth be managed by Terraform.
-```
-
-#### For Lifecycle Rules with `prefix` and `tags`
-
-For example, given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - lifecycle_rule=[S3BucketLifecycleRule( - enabled=True, - expiration=S3BucketLifecycleRuleExpiration( - days=90 - ), - id="log", - prefix="log/", - tags={ - "autoclean": "true", - "rule": "log" - }, - transition=[S3BucketLifecycleRuleTransition( - days=30, - storage_class="STANDARD_IA" - ), S3BucketLifecycleRuleTransition( - days=60, - storage_class="GLACIER" - ) - ] - ), S3BucketLifecycleRule( - enabled=True, - expiration=S3BucketLifecycleRuleExpiration( - date="2022-12-31" - ), - id="tmp", - prefix="tmp/" - ) - ] - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `lifecycle_rule` is now read only, update your configuration to use the `aws_s3_bucket_lifecycle_configuration` -resource and remove `lifecycle_rule` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_lifecycle_configuration import S3BucketLifecycleConfiguration
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere"
-        )
-        aws_s3_bucket_lifecycle_configuration_example = S3BucketLifecycleConfiguration(self, "example_1",
-            bucket=example.id,
-            rule=[S3BucketLifecycleConfigurationRule(
-                expiration=S3BucketLifecycleConfigurationRuleExpiration(
-                    days=90
-                ),
-                filter=S3BucketLifecycleConfigurationRuleFilter(
-                    and_=S3BucketLifecycleConfigurationRuleFilterAnd(
-                        prefix="log/",
-                        tags={
-                            "autoclean": "true",
-                            "rule": "log"
-                        }
-                    )
-                ),
-                id="log",
-                status="Enabled",
-                transition=[S3BucketLifecycleConfigurationRuleTransition(
-                    days=30,
-                    storage_class="STANDARD_IA"
-                ), S3BucketLifecycleConfigurationRuleTransition(
-                    days=60,
-                    storage_class="GLACIER"
-                )
-                ]
-            ), S3BucketLifecycleConfigurationRule(
-                expiration=S3BucketLifecycleConfigurationRuleExpiration(
-                    date="2022-12-31T00:00:00Z"
-                ),
-                filter=S3BucketLifecycleConfigurationRuleFilter(
-                    prefix="tmp/"
-                ),
-                id="tmp",
-                status="Enabled"
-            )
-            ]
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_lifecycle_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere
-aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_lifecycle_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_lifecycle_configuration for import
-aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above.
These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `logging` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_logging` resource](/docs/providers/aws/r/s3_bucket_logging.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - log_bucket = S3Bucket(self, "log_bucket", - bucket="example-log-bucket" - ) - S3Bucket(self, "example", - bucket="yournamehere", - logging=S3BucketLogging( - target_bucket=log_bucket.id, - target_prefix="log/" - ) - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "logging": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `logging` is now read only, update your configuration to use the `aws_s3_bucket_logging` -resource and remove `logging` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_logging import S3BucketLoggingA -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - log_bucket = S3Bucket(self, "log_bucket", - bucket="example-log-bucket" - ) - aws_s3_bucket_logging_example = S3BucketLoggingA(self, "example_2", - bucket=example.id, - target_bucket=log_bucket.id, - target_prefix="log/" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_logging_example.override_logical_id("example") -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_logging.example yournamehere -aws_s3_bucket_logging.example: Importing from ID "yournamehere"... -aws_s3_bucket_logging.example: Import prepared! - Prepared aws_s3_bucket_logging for import -aws_s3_bucket_logging.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `object_lock_configuration` `rule` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_object_lock_configuration` resource](/docs/providers/aws/r/s3_bucket_object_lock_configuration.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - object_lock_configuration=S3BucketObjectLockConfiguration( - object_lock_enabled="Enabled", - rule=S3BucketObjectLockConfigurationRule( - default_retention=S3BucketObjectLockConfigurationRuleDefaultRetention( - days=3, - mode="COMPLIANCE" - ) - ) - ) - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "object_lock_configuration.0.rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `rule` argument of the `object_lock_configuration` configuration block changed to read-only, update your configuration to use the `aws_s3_bucket_object_lock_configuration` -resource and remove `rule` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_object_lock_configuration import S3BucketObjectLockConfigurationA
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere",
-            object_lock_enabled=True
-        )
-        aws_s3_bucket_object_lock_configuration_example = S3BucketObjectLockConfigurationA(self, "example_1",
-            bucket=example.id,
-            rule=S3BucketObjectLockConfigurationRuleA(
-                default_retention=S3BucketObjectLockConfigurationRuleDefaultRetentionA(
-                    days=3,
-                    mode="COMPLIANCE"
-                )
-            )
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_object_lock_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_object_lock_configuration.example yournamehere
-aws_s3_bucket_object_lock_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_object_lock_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_object_lock_configuration for import
-aws_s3_bucket_object_lock_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above. These resources are now in
-your Terraform state and will henceforth be managed by Terraform.
-```
-
-### `policy` Argument
-
-Switch your Terraform configuration to the [`aws_s3_bucket_policy` resource](/docs/providers/aws/r/s3_bucket_policy.html) instead.
-
-For example, given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - policy="{\n \"Id\": \"Policy1446577137248\",\n \"Statement\": [\n {\n \"Action\": \"s3:PutObject\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"${" + current.arn + "}\"\n },\n \"Resource\": \"arn:${" + data_aws_partition_current.partition + "}:s3:::yournamehere/*\",\n \"Sid\": \"Stmt1446575236270\"\n }\n ],\n \"Version\": \"2012-10-17\"\n}\n\n" - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "policy": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `policy` is now read only, update your configuration to use the `aws_s3_bucket_policy` -resource and remove `policy` in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_policy import S3BucketPolicy -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_policy_example = S3BucketPolicy(self, "example_1", - bucket=example.id, - policy="{\n \"Id\": \"Policy1446577137248\",\n \"Statement\": [\n {\n \"Action\": \"s3:PutObject\",\n \"Effect\": \"Allow\",\n \"Principal\": {\n \"AWS\": \"${" + current.arn + "}\"\n },\n \"Resource\": \"${" + example.arn + "}/*\",\n \"Sid\": \"Stmt1446575236270\"\n }\n ],\n \"Version\": \"2012-10-17\"\n}\n\n" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_policy_example.override_logical_id("example") -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_policy.example yournamehere -aws_s3_bucket_policy.example: Importing from ID "yournamehere"... -aws_s3_bucket_policy.example: Import prepared! - Prepared aws_s3_bucket_policy for import -aws_s3_bucket_policy.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `replication_configuration` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_replication_configuration` resource](/docs/providers/aws/r/s3_bucket_replication_configuration.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - provider=central, - replication_configuration=S3BucketReplicationConfiguration( - role=replication.arn, - rules=[S3BucketReplicationConfigurationRules( - destination=S3BucketReplicationConfigurationRulesDestination( - bucket=destination.arn, - metrics=S3BucketReplicationConfigurationRulesDestinationMetrics( - minutes=15, - status="Enabled" - ), - replication_time=S3BucketReplicationConfigurationRulesDestinationReplicationTime( - minutes=15, - status="Enabled" - ), - storage_class="STANDARD" - ), - filter=S3BucketReplicationConfigurationRulesFilter( - tags={} - ), - id="foobar", - status="Enabled" - ) - ] - ) - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "replication_configuration": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `replication_configuration` is now read only, update your configuration to use the `aws_s3_bucket_replication_configuration` -resource and remove `replication_configuration` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_replication_configuration import S3BucketReplicationConfigurationA
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere",
-            provider=central
-        )
-        aws_s3_bucket_replication_configuration_example = S3BucketReplicationConfigurationA(self, "example_1",
-            bucket=example.id,
-            role=replication.arn,
-            rule=[S3BucketReplicationConfigurationRule(
-                delete_marker_replication=S3BucketReplicationConfigurationRuleDeleteMarkerReplication(
-                    status="Enabled"
-                ),
-                destination=S3BucketReplicationConfigurationRuleDestination(
-                    bucket=destination.arn,
-                    metrics=S3BucketReplicationConfigurationRuleDestinationMetrics(
-                        event_threshold=S3BucketReplicationConfigurationRuleDestinationMetricsEventThreshold(
-                            minutes=15
-                        ),
-                        status="Enabled"
-                    ),
-                    replication_time=S3BucketReplicationConfigurationRuleDestinationReplicationTime(
-                        status="Enabled",
-                        time=S3BucketReplicationConfigurationRuleDestinationReplicationTimeTime(
-                            minutes=15
-                        )
-                    ),
-                    storage_class="STANDARD"
-                ),
-                filter=S3BucketReplicationConfigurationRuleFilter(),
-                id="foobar",
-                status="Enabled"
-            )
-            ]
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_replication_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_replication_configuration.example yournamehere
-aws_s3_bucket_replication_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_replication_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_replication_configuration for import
-aws_s3_bucket_replication_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
- -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `request_payer` Argument - -Switch your Terraform configuration to the [`aws_s3_bucket_request_payment_configuration` resource](/docs/providers/aws/r/s3_bucket_request_payment_configuration.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - request_payer="Requester" - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "request_payer": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `request_payer` is now read only, update your configuration to use the `aws_s3_bucket_request_payment_configuration` -resource and remove `request_payer` in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_request_payment_configuration import S3BucketRequestPaymentConfiguration
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere"
-        )
-        aws_s3_bucket_request_payment_configuration_example = S3BucketRequestPaymentConfiguration(self, "example_1",
-            bucket=example.id,
-            payer="Requester"
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_request_payment_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_request_payment_configuration.example yournamehere
-aws_s3_bucket_request_payment_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_request_payment_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_request_payment_configuration for import
-aws_s3_bucket_request_payment_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above. These resources are now in
-your Terraform state and will henceforth be managed by Terraform.
-```
-
-### `server_side_encryption_configuration` Argument
-
-Switch your Terraform configuration to the [`aws_s3_bucket_server_side_encryption_configuration` resource](/docs/providers/aws/r/s3_bucket_server_side_encryption_configuration.html) instead.
-
-For example, given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - server_side_encryption_configuration=S3BucketServerSideEncryptionConfiguration( - rule=S3BucketServerSideEncryptionConfigurationRule( - apply_server_side_encryption_by_default=S3BucketServerSideEncryptionConfigurationRuleApplyServerSideEncryptionByDefault( - kms_master_key_id=mykey.arn, - sse_algorithm="aws:kms" - ) - ) - ) - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "server_side_encryption_configuration": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `server_side_encryption_configuration` is now read only, update your configuration to use the `aws_s3_bucket_server_side_encryption_configuration` -resource and remove `server_side_encryption_configuration` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_server_side_encryption_configuration import S3BucketServerSideEncryptionConfigurationA
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere"
-        )
-        aws_s3_bucket_server_side_encryption_configuration_example = S3BucketServerSideEncryptionConfigurationA(self, "example_1",
-            bucket=example.id,
-            rule=[S3BucketServerSideEncryptionConfigurationRuleA(
-                apply_server_side_encryption_by_default=S3BucketServerSideEncryptionConfigurationRuleApplyServerSideEncryptionByDefaultA(
-                    kms_master_key_id=mykey.arn,
-                    sse_algorithm="aws:kms"
-                )
-            )
-            ]
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_server_side_encryption_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_server_side_encryption_configuration.example yournamehere
-aws_s3_bucket_server_side_encryption_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_server_side_encryption_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_server_side_encryption_configuration for import
-aws_s3_bucket_server_side_encryption_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above. These resources are now in
-your Terraform state and will henceforth be managed by Terraform.
-```
-
-### `versioning` Argument
-
-Switch your Terraform configuration to the [`aws_s3_bucket_versioning` resource](/docs/providers/aws/r/s3_bucket_versioning.html) instead.
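
Each split-out resource in the sections above is adopted into state with its own `terraform import` command. If you are migrating several of these settings at once, the repeated import step can be scripted. The sketch below only builds the command strings; the resource list, the Terraform resource name `example`, and the bucket `yournamehere` are illustrative placeholders taken from this guide, not an exhaustive or required set:

```python
# Sketch only: generate the repeated `terraform import` commands for the new
# split-out S3 bucket resources. The resource types listed, the Terraform
# resource name ("example"), and the bucket name are placeholders -- adjust
# them to match the resources you actually added to your configuration.
RESOURCE_TYPES = [
    "aws_s3_bucket_lifecycle_configuration",
    "aws_s3_bucket_logging",
    "aws_s3_bucket_policy",
    "aws_s3_bucket_request_payment_configuration",
    "aws_s3_bucket_server_side_encryption_configuration",
    "aws_s3_bucket_versioning",
]

def import_commands(bucket, name="example"):
    # Each of these resources imports using the bucket name as its ID,
    # e.g. `terraform import aws_s3_bucket_versioning.example yournamehere`.
    return [f"terraform import {t}.{name} {bucket}" for t in RESOURCE_TYPES]

for cmd in import_commands("yournamehere"):
    print(cmd)
```

Review the generated commands before running them (for example, via a shell loop or `subprocess`), since some resources in your configuration may use different import IDs.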
- -~> **NOTE:** As `aws_s3_bucket_versioning` is a separate resource, any S3 objects for which versioning is important (_e.g._, a truststore for mutual TLS authentication) must implicitly or explicitly depend on the `aws_s3_bucket_versioning` resource. Otherwise, the S3 objects may be created before versioning has been set. [See below](#ensure-objects-depend-on-versioning) for an example. Also note that AWS recommends waiting 15 minutes after enabling versioning on a bucket before putting or deleting objects in/from the bucket. - -#### Buckets With Versioning Enabled - -Given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - versioning=S3BucketVersioning( - enabled=True - ) - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `versioning` is now read only, update your configuration to use the `aws_s3_bucket_versioning` -resource and remove `versioning` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. 
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_versioning import S3BucketVersioningA, S3BucketVersioningVersioningConfiguration
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere"
-        )
-        aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1",
-            bucket=example.id,
-            versioning_configuration=S3BucketVersioningVersioningConfiguration(
-                status="Enabled"
-            )
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_versioning_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_versioning.example yournamehere
-aws_s3_bucket_versioning.example: Importing from ID "yournamehere"...
-aws_s3_bucket_versioning.example: Import prepared!
-  Prepared aws_s3_bucket_versioning for import
-aws_s3_bucket_versioning.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above. These resources are now in
-your Terraform state and will henceforth be managed by Terraform.
-```
-
-#### Buckets With Versioning Disabled or Suspended
-
-Depending on the version of the Terraform AWS Provider you are migrating from, the interpretation of `versioning.enabled = false`
-in your `aws_s3_bucket` resource will differ, and thus the migration to the `aws_s3_bucket_versioning` resource will also differ, as follows.
-
-If you are migrating from the Terraform AWS Provider `v3.70.0` or later:
-
-* For new S3 buckets, `enabled = false` is synonymous with `Disabled`.
-* For existing S3 buckets, `enabled = false` is synonymous with `Suspended`.
-
-If you are migrating from an earlier version of the Terraform AWS Provider:
-
-* For both new and existing S3 buckets, `enabled = false` is synonymous with `Suspended`.
-
-Given this previous configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.s3_bucket import S3Bucket, S3BucketVersioning
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        S3Bucket(self, "example",
-            bucket="yournamehere",
-            versioning=S3BucketVersioning(
-                enabled=False
-            )
-        )
-```
-
-You will get the following error after upgrading:
-
-```
-│ Error: Value for unconfigurable attribute
-│
-│   with aws_s3_bucket.example,
-│   on main.tf line 1, in resource "aws_s3_bucket" "example":
-│    1: resource "aws_s3_bucket" "example" {
-│
-│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration.
-```
-
-Since `versioning` is now read only, update your configuration to use the `aws_s3_bucket_versioning`
-resource and remove `versioning` and its nested arguments in the `aws_s3_bucket` resource.
-
-* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was never enabled:
-
-  ```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Disabled" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_versioning_example.override_logical_id("example") -``` - -* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was enabled at one point: - - ```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Suspended" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_versioning_example.override_logical_id("example") -``` - -* If migrating from an earlier version of Terraform AWS Provider: - - ```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yournamehere" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Suspended" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_versioning_example.override_logical_id("example") -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_versioning.example yournamehere -aws_s3_bucket_versioning.example: Importing from ID "yournamehere"... -aws_s3_bucket_versioning.example: Import prepared! - Prepared aws_s3_bucket_versioning for import -aws_s3_bucket_versioning.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -#### Ensure Objects Depend on Versioning - -When you create an object whose `version_id` you need and an `aws_s3_bucket_versioning` resource in the same configuration, you are more likely to have success by ensuring the `s3_object` depends either implicitly (see below) or explicitly (i.e., using `depends_on = [aws_s3_bucket_versioning.example]`) on the `aws_s3_bucket_versioning` resource. 
- -~> **NOTE:** For critical and/or production S3 objects, do not create a bucket, enable versioning, and create an object in the bucket within the same configuration. Doing so will not allow the AWS-recommended 15 minutes between enabling versioning and writing to the bucket. - -This example shows the `aws_s3_object.example` depending implicitly on the versioning resource through the reference to `aws_s3_bucket_versioning.example.bucket` to define `bucket`: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -from imports.aws.s3_bucket_versioning import S3BucketVersioningA -from imports.aws.s3_object import S3Object -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = S3Bucket(self, "example", - bucket="yotto" - ) - aws_s3_bucket_versioning_example = S3BucketVersioningA(self, "example_1", - bucket=example.id, - versioning_configuration=S3BucketVersioningVersioningConfiguration( - status="Enabled" - ) - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_s3_bucket_versioning_example.override_logical_id("example") - aws_s3_object_example = S3Object(self, "example_2", - bucket=Token.as_string(aws_s3_bucket_versioning_example.id), - key="droeloe", - source="example.txt" - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. 
- aws_s3_object_example.override_logical_id("example") -``` - -### `website`, `website_domain`, and `website_endpoint` Arguments - -Switch your Terraform configuration to the [`aws_s3_bucket_website_configuration` resource](/docs/providers/aws/r/s3_bucket_website_configuration.html) instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - S3Bucket(self, "example", - bucket="yournamehere", - website=S3BucketWebsite( - error_document="error.html", - index_document="index.html" - ) - ) -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "website": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `website` is now read only, update your configuration to use the `aws_s3_bucket_website_configuration` -resource and remove `website` and its nested arguments in the `aws_s3_bucket` resource: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_website_configuration import S3BucketWebsiteConfiguration, S3BucketWebsiteConfigurationErrorDocument, S3BucketWebsiteConfigurationIndexDocument
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        example = S3Bucket(self, "example",
-            bucket="yournamehere"
-        )
-        aws_s3_bucket_website_configuration_example = S3BucketWebsiteConfiguration(self, "example_1",
-            bucket=example.id,
-            error_document=S3BucketWebsiteConfigurationErrorDocument(
-                key="error.html"
-            ),
-            index_document=S3BucketWebsiteConfigurationIndexDocument(
-                suffix="index.html"
-            )
-        )
-        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
-        aws_s3_bucket_website_configuration_example.override_logical_id("example")
-```
-
-Run `terraform import` on each new resource, _e.g._,
-
-```console
-$ terraform import aws_s3_bucket_website_configuration.example yournamehere
-aws_s3_bucket_website_configuration.example: Importing from ID "yournamehere"...
-aws_s3_bucket_website_configuration.example: Import prepared!
-  Prepared aws_s3_bucket_website_configuration for import
-aws_s3_bucket_website_configuration.example: Refreshing state... [id=yournamehere]
-
-Import successful!
-
-The resources that were imported are shown above. These resources are now in
-your Terraform state and will henceforth be managed by Terraform.
-```
-
-For example, if you use the `aws_s3_bucket` attribute `website_domain` with `aws_route53_record`, as shown below, you will need to update your configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-# -from imports.aws.route53_record import Route53Record -from imports.aws.route53_zone import Route53Zone -from imports.aws.s3_bucket import S3Bucket -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - main = Route53Zone(self, "main", - name="domain.test" - ) - website = S3Bucket(self, "website", - website=S3BucketWebsite( - error_document="error.html", - index_document="index.html" - ) - ) - Route53Record(self, "alias", - alias=Route53RecordAlias( - evaluate_target_health=True, - name=website.website_domain, - zone_id=website.hosted_zone_id - ), - name="www", - type="A", - zone_id=main.zone_id - ) -``` - -Instead, you will now use the `aws_s3_bucket_website_configuration` resource and its `website_domain` attribute: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-#
-from imports.aws.route53_record import Route53Record, Route53RecordAlias
-from imports.aws.route53_zone import Route53Zone
-from imports.aws.s3_bucket import S3Bucket
-from imports.aws.s3_bucket_website_configuration import S3BucketWebsiteConfiguration, S3BucketWebsiteConfigurationIndexDocument
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        main = Route53Zone(self, "main",
-            name="domain.test"
-        )
-        website = S3Bucket(self, "website")
-        example = S3BucketWebsiteConfiguration(self, "example",
-            bucket=website.id,
-            index_document=S3BucketWebsiteConfigurationIndexDocument(
-                suffix="index.html"
-            )
-        )
-        Route53Record(self, "alias",
-            alias=Route53RecordAlias(
-                evaluate_target_health=True,
-                name=example.website_domain,
-                zone_id=website.hosted_zone_id
-            ),
-            name="www",
-            type="A",
-            zone_id=main.zone_id
-        )
-```
-
-## Full Resource Lifecycle of Default Resources
-
-Default subnets and VPCs now support the full resource lifecycle, including resource
-creation and deletion.
-
-### Resource: aws_default_subnet
-
-The `aws_default_subnet` resource behaves differently from normal resources in that if a default subnet exists in the specified Availability Zone, Terraform does not _create_ this resource, but instead "adopts" it into management.
-If no default subnet exists, Terraform creates a new default subnet.
-By default, `terraform destroy` does not delete the default subnet but does remove the resource from Terraform state.
-Set the `force_destroy` argument to `true` to delete the default subnet.
-
-For example, given this previous configuration with no existing default subnet:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.default_subnet import DefaultSubnet
-from imports.aws.provider import AwsProvider
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, availability_zone):
-        super().__init__(scope, name)
-        AwsProvider(self, "aws",
-            region="eu-west-2"
-        )
-        DefaultSubnet(self, "default",
-            availability_zone=availability_zone
-        )
-```
-
-The following error was thrown on `terraform apply`:
-
-```
-│ Error: Default subnet not found.
-│
-│   with aws_default_subnet.default,
-│   on main.tf line 5, in resource "aws_default_subnet" "default":
-│    5: resource "aws_default_subnet" "default" {}
-```
-
-Now after upgrading, the above configuration will apply successfully.
-
-To delete the default subnet, the above configuration should be updated as follows:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.default_subnet import DefaultSubnet
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, availability_zone):
-        super().__init__(scope, name)
-        DefaultSubnet(self, "default",
-            force_destroy=True,
-            availability_zone=availability_zone
-        )
-```
-
-### Resource: aws_default_vpc
-
-The `aws_default_vpc` resource behaves differently from normal resources in that if a default VPC exists, Terraform does not _create_ this resource, but instead "adopts" it into management.
-If no default VPC exists, Terraform creates a new default VPC, which leads to the implicit creation of [other resources](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#default-vpc-components).
-By default, `terraform destroy` does not delete the default VPC but does remove the resource from Terraform state.
-Set the `force_destroy` argument to `true` to delete the default VPC.
- -For example, given this previous configuration with no existing default VPC: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.default_vpc import DefaultVpc -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DefaultVpc(self, "default") -``` - -The following error was thrown on `terraform apply`: - -``` -│ Error: No default VPC found in this region. -│ -│ with aws_default_vpc.default, -│ on main.tf line 5, in resource "aws_default_vpc" "default": -│ 5: resource "aws_default_vpc" "default" {} -``` - -Now after upgrading, the above configuration will apply successfully. - -To delete the default VPC, the above configuration should be updated to: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.default_vpc import DefaultVpc -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DefaultVpc(self, "default", - force_destroy=True - ) -``` - -## Plural Data Source Behavior - -The following plural data sources are now consistent with [Provider Design](https://hashicorp.github.io/terraform-provider-aws/provider-design/#plural-data-sources) -such that they no longer return an error if zero results are found. 
-
-* [aws_cognito_user_pools](/docs/providers/aws/d/cognito_user_pools.html)
-* [aws_db_event_categories](/docs/providers/aws/d/db_event_categories.html)
-* [aws_ebs_volumes](/docs/providers/aws/d/ebs_volumes.html)
-* [aws_ec2_coip_pools](/docs/providers/aws/d/ec2_coip_pools.html)
-* [aws_ec2_local_gateway_route_tables](/docs/providers/aws/d/ec2_local_gateway_route_tables.html)
-* [aws_ec2_local_gateway_virtual_interface_groups](/docs/providers/aws/d/ec2_local_gateway_virtual_interface_groups.html)
-* [aws_ec2_local_gateways](/docs/providers/aws/d/ec2_local_gateways.html)
-* [aws_ec2_transit_gateway_route_tables](/docs/providers/aws/d/ec2_transit_gateway_route_tables.html)
-* [aws_efs_access_points](/docs/providers/aws/d/efs_access_points.html)
-* [aws_emr_release_labels](/docs/providers/aws/d/emr_release_labels.html)
-* [aws_inspector_rules_packages](/docs/providers/aws/d/inspector_rules_packages.html)
-* [aws_ip_ranges](/docs/providers/aws/d/ip_ranges.html)
-* [aws_network_acls](/docs/providers/aws/d/network_acls.html)
-* [aws_route_tables](/docs/providers/aws/d/route_tables.html)
-* [aws_security_groups](/docs/providers/aws/d/security_groups.html)
-* [aws_ssoadmin_instances](/docs/providers/aws/d/ssoadmin_instances.html)
-* [aws_vpcs](/docs/providers/aws/d/vpcs.html)
-* [aws_vpc_peering_connections](/docs/providers/aws/d/vpc_peering_connections.html)
-
-## Empty Strings Not Valid For Certain Resources
-
-This is a breaking change, although it should affect very few configurations.
-
-Previously, you might set an argument to `""` to explicitly convey that it is empty. However, with the introduction of `null` in Terraform 0.12, and to prepare for enhancements that distinguish between unset arguments and those with a value (including an empty string, `""`), we are moving away from this use of zero values. Either use `null` instead or remove the arguments that are set to `""`.
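The migration rule for optional arguments is mechanical: anywhere a configuration passes `""`, either drop the argument or pass `null` (`None` in CDKTF Python). As a minimal sketch, a hypothetical helper like the following (the name `none_if_empty` is illustrative and not part of the provider or CDKTF) could normalize values carried over from an older configuration:

```python
def none_if_empty(value):
    """Map "" to None so an optional argument is treated as unset.

    In CDKTF Python, None corresponds to Terraform's null: the argument is
    omitted from the synthesized configuration, which is what the provider
    now requires instead of the empty string.
    """
    return None if value == "" else value


# Values as they might appear in a pre-upgrade configuration:
normalized_launch_type = none_if_empty("")       # empty string becomes None (unset)
normalized_cidr = none_if_empty("::/0")          # real values pass through unchanged
```

The sections below apply this same rule resource by resource.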
-
-### Resource: aws_cloudwatch_event_target (Empty String)
-
-Previously, you could set `ecs_target.0.launch_type` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `launch_type = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.cloudwatch_event_target import CloudwatchEventTarget, CloudwatchEventTargetEcsTarget
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, arn, rule):
-        super().__init__(scope, name)
-        CloudwatchEventTarget(self, "example",
-            ecs_target=CloudwatchEventTargetEcsTarget(
-                launch_type="",
-                task_count=1,
-                task_definition_arn=task.arn
-            ),
-            arn=arn,
-            rule=rule
-        )
-```
-
-We fix this configuration by setting `launch_type` to `null` (`None` in CDKTF Python):
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.cloudwatch_event_target import CloudwatchEventTarget, CloudwatchEventTargetEcsTarget
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, arn, rule):
-        super().__init__(scope, name)
-        CloudwatchEventTarget(self, "example",
-            ecs_target=CloudwatchEventTargetEcsTarget(
-                launch_type=None,
-                task_count=1,
-                task_definition_arn=task.arn
-            ),
-            arn=arn,
-            rule=rule
-        )
-```
-
-### Resource: aws_customer_gateway
-
-Previously, setting `ip_address` to `""` resulted in an error from AWS. Now, the provider itself returns an error for the empty string.
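For a required argument like `ip_address`, there is no `null` fallback; the empty string simply has to be replaced with a real value. A rough sketch of the new behavior, using a hypothetical stand-in for the provider's validation (nothing here is provider API):

```python
def validate_ip_address(value):
    """Reject "" for a required argument, roughly as the provider now does
    during planning (previously the empty string was only rejected by AWS)."""
    if value == "":
        raise ValueError('"ip_address" must not be an empty string')
    return value


validate_ip_address("172.83.124.10")  # a real address passes through unchanged
```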
-
-### Resource: aws_default_network_acl
-
-Previously, you could set `egress.*.cidr_block`, `egress.*.ipv6_cidr_block`, `ingress.*.cidr_block`, or `ingress.*.ipv6_cidr_block` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.default_network_acl import DefaultNetworkAcl, DefaultNetworkAclEgress
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, action, from_port, protocol, rule_no, to_port, default_network_acl_id):
-        super().__init__(scope, name)
-        DefaultNetworkAcl(self, "example",
-            egress=[DefaultNetworkAclEgress(
-                cidr_block="0.0.0.0/0",
-                ipv6_cidr_block="",
-                action=action,
-                from_port=from_port,
-                protocol=protocol,
-                rule_no=rule_no,
-                to_port=to_port
-            )
-            ],
-            default_network_acl_id=default_network_acl_id
-        )
-```
-
-To fix this configuration, we remove the empty-string configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.default_network_acl import DefaultNetworkAcl, DefaultNetworkAclEgress
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, action, from_port, protocol, rule_no, to_port, default_network_acl_id):
-        super().__init__(scope, name)
-        DefaultNetworkAcl(self, "example",
-            egress=[DefaultNetworkAclEgress(
-                cidr_block="0.0.0.0/0",
-                action=action,
-                from_port=from_port,
-                protocol=protocol,
-                rule_no=rule_no,
-                to_port=to_port
-            )
-            ],
-            default_network_acl_id=default_network_acl_id
-        )
-```
-
-### Resource: aws_default_route_table
-
-Previously, you could set `route.*.cidr_block` or `route.*.ipv6_cidr_block` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import conditional, Token, TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.default_route_table import DefaultRouteTable, DefaultRouteTableRoute
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, default_route_table_id):
-        super().__init__(scope, name)
-        DefaultRouteTable(self, "example",
-            route=[DefaultRouteTableRoute(
-                cidr_block=Token.as_string(conditional(ipv6, "", destination)),
-                ipv6_cidr_block=Token.as_string(conditional(ipv6, destination_ipv6, ""))
-            )
-            ],
-            default_route_table_id=default_route_table_id
-        )
-```
-
-We fix this configuration by using `null` instead of an empty string (`""`):
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import conditional, Token, TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.default_route_table import DefaultRouteTable, DefaultRouteTableRoute
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, default_route_table_id):
-        super().__init__(scope, name)
-        DefaultRouteTable(self, "example",
-            route=[DefaultRouteTableRoute(
-                cidr_block=Token.as_string(conditional(ipv6, "null", destination)),
-                ipv6_cidr_block=Token.as_string(
-                    conditional(ipv6, destination_ipv6, "null"))
-            )
-            ],
-            default_route_table_id=default_route_table_id
-        )
-```
-
-### Resource: aws_default_vpc (Empty String)
-
-Previously, you could set `ipv6_cidr_block` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration.
-
-### Resource: aws_instance
-
-Previously, you could set `private_ip` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `private_ip = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.instance import Instance
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        Instance(self, "example",
-            instance_type="t2.micro",
-            private_ip=""
-        )
-```
-
-We fix this configuration by removing the empty-string configuration:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.instance import Instance
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name):
-        super().__init__(scope, name)
-        Instance(self, "example",
-            instance_type="t2.micro"
-        )
-```
-
-### Resource: aws_efs_mount_target
-
-Previously, you could set `ip_address` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ip_address = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid: `ip_address = ""`.
-
-### Resource: aws_elasticsearch_domain
-
-Previously, you could set `ebs_options.0.volume_type` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `volume_type = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import Op, conditional, Token, TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details.
-#
-from imports.aws.elasticsearch_domain import ElasticsearchDomain, ElasticsearchDomainEbsOptions
-class MyConvertedCode(TerraformStack):
-    def __init__(self, scope, name, *, domain_name):
-        super().__init__(scope, name)
-        ElasticsearchDomain(self, "example",
-            ebs_options=ElasticsearchDomainEbsOptions(
-                ebs_enabled=True,
-                volume_size=volume_size.number_value,
-                volume_type=Token.as_string(
-                    conditional(Op.gt(volume_size.value, 0), volume_type, ""))
-            ),
-            domain_name=domain_name
-        )
-```
-
-We fix this configuration by using `null` instead of `""`:
-
-```python
-# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-from constructs import Construct
-from cdktf import Op, conditional, Token, TerraformStack
-#
-# Provider bindings are generated by running `cdktf get`.
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.elasticsearch_domain import ElasticsearchDomain -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, domainName): - super().__init__(scope, name) - ElasticsearchDomain(self, "example", - ebs_options=ElasticsearchDomainEbsOptions( - ebs_enabled=True, - volume_size=volume_size.number_value, - volume_type=Token.as_string( - conditional(Op.gt(volume_size.value, 0), volume_type, "null")) - ), - domain_name=domain_name - ) -``` - -### Resource: aws_network_acl - -Previously, `egress.*.cidr_block`, `egress.*.ipv6_cidr_block`, `ingress.*.cidr_block`, and `ingress.*.ipv6_cidr_block` could be set to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.network_acl import NetworkAcl -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, vpcId): - super().__init__(scope, name) - NetworkAcl(self, "example", - egress=[NetworkAclEgress( - cidr_block="0.0.0.0/0", - ipv6_cidr_block="" - ) - ], - vpc_id=vpc_id - ) -``` - -We fix this configuration by removing the empty-string configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.network_acl import NetworkAcl -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, vpcId): - super().__init__(scope, name) - NetworkAcl(self, "example", - egress=[NetworkAclEgress( - cidr_block="0.0.0.0/0" - ) - ], - vpc_id=vpc_id - ) -``` - -### Resource: aws_route - -Previously, `destination_cidr_block` and `destination_ipv6_cidr_block` could be set to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `destination_ipv6_cidr_block = null`) or remove the empty-string configuration. - -In addition, now exactly one of `destination_cidr_block`, `destination_ipv6_cidr_block`, and `destination_prefix_list_id` can be set. - -For example, this type of configuration for `aws_route` is now not valid: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import conditional, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.route import Route -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - Route(self, "example", - destination_cidr_block=Token.as_string(conditional(ipv6, "", destination)), - destination_ipv6_cidr_block=Token.as_string( - conditional(ipv6, destination_ipv6, "")), - gateway_id=Token.as_string(aws_internet_gateway_example.id), - route_table_id=Token.as_string(aws_route_table_example.id) - ) -``` - -We fix this configuration by using `null` instead of an empty-string (`""`): - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import conditional, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.route import Route -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - Route(self, "example", - destination_cidr_block=Token.as_string( - conditional(ipv6, "null", destination)), - destination_ipv6_cidr_block=Token.as_string( - conditional(ipv6, destination_ipv6, "null")), - gateway_id=Token.as_string(aws_internet_gateway_example.id), - route_table_id=Token.as_string(aws_route_table_example.id) - ) -``` - -### Resource: aws_route_table - -Previously, `route.*.cidr_block` and `route.*.ipv6_cidr_block` could be set to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import conditional, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.route_table import RouteTable -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, vpcId): - super().__init__(scope, name) - RouteTable(self, "example", - route=[RouteTableRoute( - cidr_block=Token.as_string(conditional(ipv6, "", destination)), - ipv6_cidr_block=Token.as_string(conditional(ipv6, destination_ipv6, "")) - ) - ], - vpc_id=vpc_id - ) -``` - -We fix this configuration by using `null` instead of an empty-string (`""`): - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import conditional, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.route_table import RouteTable -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, vpcId): - super().__init__(scope, name) - RouteTable(self, "example", - route=[RouteTableRoute( - cidr_block=Token.as_string(conditional(ipv6, "null", destination)), - ipv6_cidr_block=Token.as_string( - conditional(ipv6, destination_ipv6, "null")) - ) - ], - vpc_id=vpc_id - ) -``` - -### Resource: aws_vpc - -Previously, `ipv6_cidr_block` could be set to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.vpc import Vpc -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - Vpc(self, "example", - cidr_block="10.1.0.0/16", - ipv6_cidr_block="" - ) -``` - -We fix this configuration by removing `ipv6_cidr_block`: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.vpc import Vpc -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - Vpc(self, "example", - cidr_block="10.1.0.0/16" - ) -``` - -### Resource: aws_vpc_ipv6_cidr_block_association - -Previously, `ipv6_cidr_block` could be set to `""`. However, the value `""` is no longer valid. 
Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -## Data Source: aws_cloudwatch_log_group - -### Removal of arn Wildcard Suffix - -Previously, the data source returned the ARN directly from the API, which included a `:*` suffix to denote all CloudWatch Log Streams under the CloudWatch Log Group. Most other AWS resources that return ARNs and many other AWS services do not use the `:*` suffix. The suffix is now automatically removed. For example, the data source previously returned an ARN such as `arn:aws:logs:us-east-1:123456789012:log-group:/example:*` but will now return `arn:aws:logs:us-east-1:123456789012:log-group:/example`. - -Workarounds, such as using `replace()` as shown below, should be removed: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Fn, Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_cloudwatch_log_group import DataAwsCloudwatchLogGroup -from imports.aws.datasync_task import DatasyncTask -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name, *, destinationLocationArn, sourceLocationArn): - super().__init__(scope, name) - example = DataAwsCloudwatchLogGroup(self, "example", - name="example" - ) - aws_datasync_task_example = DatasyncTask(self, "example_1", - cloudwatch_log_group_arn=Token.as_string( - Fn.replace(Token.as_string(example.arn), ":*", "")), - destination_location_arn=destination_location_arn, - source_location_arn=source_location_arn - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - aws_datasync_task_example.override_logical_id("example") -``` - -Removing the `:*` suffix is a breaking change for some configurations. 
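The ARN change in this section is a literal string operation; as a plain-Python sketch (illustrative only, not provider code), the data source's v5 behavior is equivalent to stripping a trailing `:*`:

```python
# Illustrative sketch only: models the v5 behavior of the
# aws_cloudwatch_log_group data source, which now strips the ":*"
# suffix that the CloudWatch Logs API appends to log group ARNs.
api_arn = "arn:aws:logs:us-east-1:123456789012:log-group:/example:*"

data_source_arn = api_arn.removesuffix(":*")  # Python 3.9+
print(data_source_arn)  # arn:aws:logs:us-east-1:123456789012:log-group:/example
```

Configurations that still need the wildcard form (for example, IAM policy statements covering all log streams under a group) can append `:*` back in the configuration itself.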
Fix these configurations using string interpolations as demonstrated below. For example, this configuration is now broken: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_iam_policy_document import DataAwsIamPolicyDocument -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsIamPolicyDocument(self, "ad-log-policy", - statement=[DataAwsIamPolicyDocumentStatement( - actions=["logs:CreateLogStream", "logs:PutLogEvents"], - effect="Allow", - principals=[DataAwsIamPolicyDocumentStatementPrincipals( - identifiers=["ds.amazonaws.com"], - type="Service" - ) - ], - resources=[Token.as_string(example.arn)] - ) - ] - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_iam_policy_document import DataAwsIamPolicyDocument -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - DataAwsIamPolicyDocument(self, "ad-log-policy", - statement=[DataAwsIamPolicyDocumentStatement( - actions=["logs:CreateLogStream", "logs:PutLogEvents"], - effect="Allow", - principals=[DataAwsIamPolicyDocumentStatementPrincipals( - identifiers=["ds.amazonaws.com"], - type="Service" - ) - ], - resources=["${" + example.arn + "}:*"] - ) - ] - ) -``` - -## Data Source: aws_subnet_ids - -The `aws_subnet_ids` data source has been deprecated and will be removed in a future version. 
Use the `aws_subnets` data source instead. - -For example, change a configuration such as - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformIterator, TerraformOutput, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.data_aws_subnet_ids import DataAwsSubnetIds -from imports.aws.data_aws_subnet import DataAwsSubnet -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = DataAwsSubnetIds(self, "example", - vpc_id=vpc_id.value - ) - # In most cases loops should be handled in the programming language context and - # not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - # you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source - # you need to keep this like it is. - example_for_each_iterator = TerraformIterator.from_list( - Token.as_any(example.ids)) - data_aws_subnet_example = DataAwsSubnet(self, "example_1", - id=Token.as_string(example_for_each_iterator.value), - for_each=example_for_each_iterator - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - data_aws_subnet_example.override_logical_id("example") - TerraformOutput(self, "subnet_cidr_blocks", - value="${[ for s in ${" + data_aws_subnet_example.fqn + "} : s.cidr_block]}" - ) -``` - -to - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformIterator, TerraformOutput, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.data_aws_subnet import DataAwsSubnet -from imports.aws.data_aws_subnets import DataAwsSubnets -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - example = DataAwsSubnets(self, "example", - filter=[DataAwsSubnetsFilter( - name="vpc-id", - values=[vpc_id.string_value] - ) - ] - ) - # In most cases loops should be handled in the programming language context and - # not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - # you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source - # you need to keep this like it is. - example_for_each_iterator = TerraformIterator.from_list( - Token.as_any(example.ids)) - data_aws_subnet_example = DataAwsSubnet(self, "example_1", - id=Token.as_string(example_for_each_iterator.value), - for_each=example_for_each_iterator - ) - # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match. - data_aws_subnet_example.override_logical_id("example") - TerraformOutput(self, "subnet_cidr_blocks", - value="${[ for s in ${" + data_aws_subnet_example.fqn + "} : s.cidr_block]}" - ) -``` - -## Data Source: aws_s3_bucket_object - -Version 4.x deprecates the `aws_s3_bucket_object` data source. Maintainers will remove it in a future version. Use `aws_s3_object` instead, where new features and fixes will be added. - -## Data Source: aws_s3_bucket_objects - -Version 4.x deprecates the `aws_s3_bucket_objects` data source. Maintainers will remove it in a future version. Use `aws_s3_objects` instead, where new features and fixes will be added. - -## Resource: aws_batch_compute_environment - -You can no longer specify `compute_resources` when `type` is `UNMANAGED`. 
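As a rough sketch of this new plan-time rule (a hypothetical helper for illustration, not the provider's actual validation code):

```python
# Hypothetical helper illustrating the v5 rule: an UNMANAGED Batch
# compute environment may no longer declare compute_resources.
def check_compute_environment(env_type: str, compute_resources=None):
    if env_type == "UNMANAGED" and compute_resources is not None:
        raise ValueError(
            "compute_resources cannot be specified when type is UNMANAGED"
        )

check_compute_environment("UNMANAGED")                   # valid in v5
check_compute_environment("MANAGED", {"max_vcpus": 16})  # valid in v5
```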
- -Previously, you could apply this configuration and the provider would ignore any compute resources: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.batch_compute_environment import BatchComputeEnvironment -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - BatchComputeEnvironment(self, "test", - compute_environment_name="test", - compute_resources=BatchComputeEnvironmentComputeResources( - instance_role=ecs_instance.arn, - instance_type=["c4.large"], - max_vcpus=16, - min_vcpus=0, - security_group_ids=[Token.as_string(aws_security_group_test.id)], - subnets=[Token.as_string(aws_subnet_test.id)], - type="EC2" - ), - service_role=batch_service.arn, - type="UNMANAGED" - ) -``` - -Now, this configuration is invalid and will result in an error during plan. - -To resolve this error, simply remove or comment out the `compute_resources` configuration block. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. 
-# -from imports.aws.batch_compute_environment import BatchComputeEnvironment -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - BatchComputeEnvironment(self, "test", - compute_environment_name="test", - service_role=batch_service.arn, - type="UNMANAGED" - ) -``` - -## Resource: aws_cloudwatch_event_target - -### Removal of `ecs_target` `launch_type` default value - -Previously, the provider assigned `ecs_target` `launch_type` the default value of `EC2` if you did not configure a value. However, the provider no longer assigns a default value. - -For example, previously you could work around the default value by using an empty string (`""`), as shown: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.cloudwatch_event_target import CloudwatchEventTarget -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - CloudwatchEventTarget(self, "test", - arn=Token.as_string(aws_ecs_cluster_test.id), - ecs_target=CloudwatchEventTargetEcsTarget( - launch_type="", - network_configuration=CloudwatchEventTargetEcsTargetNetworkConfiguration( - subnets=[subnet.id] - ), - task_count=1, - task_definition_arn=task.arn - ), - role_arn=Token.as_string(aws_iam_role_test.arn), - rule=Token.as_string(aws_cloudwatch_event_rule_test.id) - ) -``` - -This is no longer necessary. We fix the configuration by removing the empty string assignment: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. 
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.cloudwatch_event_target import CloudwatchEventTarget -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - CloudwatchEventTarget(self, "test", - arn=Token.as_string(aws_ecs_cluster_test.id), - ecs_target=CloudwatchEventTargetEcsTarget( - network_configuration=CloudwatchEventTargetEcsTargetNetworkConfiguration( - subnets=[subnet.id] - ), - task_count=1, - task_definition_arn=task.arn - ), - role_arn=Token.as_string(aws_iam_role_test.arn), - rule=Token.as_string(aws_cloudwatch_event_rule_test.id) - ) -``` - -## Resource: aws_elasticache_cluster - -### Error raised if neither `engine` nor `replication_group_id` is specified - -Previously, when you did not specify either `engine` or `replication_group_id`, Terraform would not prevent you from applying the invalid configuration. -Now, this will produce an error similar to the one below: - -``` -Error: Invalid combination of arguments - - with aws_elasticache_cluster.example, - on terraform_plugin_test.tf line 2, in resource "aws_elasticache_cluster" "example": - 2: resource "aws_elasticache_cluster" "example" { - - "replication_group_id": one of `engine,replication_group_id` must be - specified - - Error: Invalid combination of arguments - - with aws_elasticache_cluster.example, - on terraform_plugin_test.tf line 2, in resource "aws_elasticache_cluster" "example": - 2: resource "aws_elasticache_cluster" "example" { - - "engine": one of `engine,replication_group_id` must be specified -``` - -Update your configuration to supply one of `engine` or `replication_group_id`. - -## Resource: aws_elasticache_global_replication_group - -### actual_engine_version Attribute removal - -Switch your Terraform configuration from using `actual_engine_version` to use the `engine_version_actual` attribute instead. - -For example, given this previous configuration: - -```python -# DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformOutput, TerraformStack -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - TerraformOutput(self, "elasticache_global_replication_group_version_result", - value=example.actual_engine_version - ) -``` - -An updated configuration: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import TerraformOutput, TerraformStack -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - TerraformOutput(self, "elasticache_global_replication_group_version_result", - value=example.engine_version_actual - ) -``` - -## Resource: aws_fsx_ontap_storage_virtual_machine - -We removed the misspelled argument `active_directory_configuration.0.self_managed_active_directory_configuration.0.organizational_unit_distinguidshed_name` that we previously deprecated. Use `active_directory_configuration.0.self_managed_active_directory_configuration.0.organizational_unit_distinguished_name` now instead. Terraform will automatically migrate the state to `active_directory_configuration.0.self_managed_active_directory_configuration.0.organizational_unit_distinguished_name` during planning. - -## Resource: aws_lb_target_group - -For `protocol = "TCP"`, you can no longer set `stickiness.type` to `lb_cookie` even when `enabled = false`. Instead, either change the `protocol` to `"HTTP"` or `"HTTPS"`, or change `stickiness.type` to `"source_ip"`. - -For example, this configuration is no longer valid: - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. 
-# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.lb_target_group import LbTargetGroup -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - LbTargetGroup(self, "test", - port=25, - protocol="TCP", - stickiness=LbTargetGroupStickiness( - enabled=False, - type="lb_cookie" - ), - vpc_id=Token.as_string(aws_vpc_test.id) - ) -``` - -To fix this, we change the `stickiness.type` to `"source_ip"`. - -```python -# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -from constructs import Construct -from cdktf import Token, TerraformStack -# -# Provider bindings are generated by running `cdktf get`. -# See https://cdk.tf/provider-generation for more details. -# -from imports.aws.lb_target_group import LbTargetGroup -class MyConvertedCode(TerraformStack): - def __init__(self, scope, name): - super().__init__(scope, name) - LbTargetGroup(self, "test", - port=25, - protocol="TCP", - stickiness=LbTargetGroupStickiness( - enabled=False, - type="source_ip" - ), - vpc_id=Token.as_string(aws_vpc_test.id) - ) -``` - -## Resource: aws_s3_bucket_object - -Version 4.x deprecates the `aws_s3_bucket_object` resource and maintainers will remove it in a future version. Use `aws_s3_object` instead, where new features and fixes will be added. - -When replacing `aws_s3_bucket_object` with `aws_s3_object` in your configuration, on the next apply, Terraform will recreate the object. If you prefer not to have Terraform recreate the object, import the object using `aws_s3_object`. - -For example, the following will import an S3 object into state, assuming the configuration exists, as `aws_s3_object.example`: - -```console -% terraform import aws_s3_object.example s3://some-bucket-name/some/key.txt -``` - -~> **CAUTION:** We do not recommend modifying the state file manually. If you do, you can make it unusable. 
However, if you accept that risk, some community members have upgraded to the new resource by searching and replacing `"type": "aws_s3_bucket_object",` with `"type": "aws_s3_object",` in the state file, and then running `terraform apply -refresh-only`. - -## EC2-Classic Resource and Data Source Support - -While an upgrade to this major version will not directly impact EC2-Classic resources configured with Terraform, -it is important to keep in mind that the following AWS Provider resources will eventually no longer -be compatible with EC2-Classic as AWS completes their EC2-Classic networking retirement (expected around August 15, 2022). - -* Running or stopped [EC2 instances](/docs/providers/aws/r/instance.html) -* Running or stopped [RDS database instances](/docs/providers/aws/r/db_instance.html) -* [Elastic IP addresses](/docs/providers/aws/r/eip.html) -* [Classic Load Balancers](/docs/providers/aws/r/lb.html) -* [Redshift clusters](/docs/providers/aws/r/redshift_cluster.html) -* [Elastic Beanstalk environments](/docs/providers/aws/r/elastic_beanstalk_environment.html) -* [EMR clusters](/docs/providers/aws/r/emr_cluster.html) -* [AWS Data Pipelines pipelines](/docs/providers/aws/r/datapipeline_pipeline.html) -* [ElastiCache clusters](/docs/providers/aws/r/elasticache_cluster.html) -* [Spot Requests](/docs/providers/aws/r/spot_instance_request.html) -* [Capacity Reservations](/docs/providers/aws/r/ec2_capacity_reservation.html) - -## Macie Classic Resource Support - -These resources should be considered deprecated and will be removed in version 5.0.0. 
- -* Macie Member Account Association -* Macie S3 Bucket Association - - \ No newline at end of file diff --git a/website/docs/cdktf/python/guides/version-5-upgrade.html.md b/website/docs/cdktf/python/guides/version-5-upgrade.html.md deleted file mode 100644 index 046d95dae95..00000000000 --- a/website/docs/cdktf/python/guides/version-5-upgrade.html.md +++ /dev/null @@ -1,751 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 5 Upgrade Guide" -description: |- - Terraform AWS Provider Version 5 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 5 Upgrade Guide - -Version 5.0.0 of the AWS provider for Terraform is a major release and includes changes that you need to consider when upgrading. This guide will help with that process and focuses only on changes from version 4.x to version 5.0.0. See the [Version 4 Upgrade Guide](/docs/providers/aws/guides/version-4-upgrade.html) for information on upgrading from 3.x to version 4.0.0. - -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Provider Arguments](#provider-arguments) -- [Default Tags](#default-tags) -- [EC2-Classic Retirement](#ec2-classic-retirement) -- [Macie Classic Retirement](#macie-classic-retirement) -- [resource/aws_acmpca_certificate_authority](#resourceaws_acmpca_certificate_authority) -- [resource/aws_api_gateway_rest_api](#resourceaws_api_gateway_rest_api) -- [resource/aws_autoscaling_attachment](#resourceaws_autoscaling_attachment) -- [resource/aws_autoscaling_group](#resourceaws_autoscaling_group) -- [resource/aws_budgets_budget](#resourceaws_budgets_budget) -- [resource/aws_ce_anomaly_subscription](#resourceaws_ce_anomaly_subscription) -- [resource/aws_cloudwatch_event_target](#resourceaws_cloudwatch_event_target) -- [resource/aws_codebuild_project](#resourceaws_codebuild_project) -- [resource/aws_connect_hours_of_operation](#resourceaws_connect_hours_of_operation) -- 
- [resource/aws_connect_queue](#resourceaws_connect_queue)
- [resource/aws_connect_routing_profile](#resourceaws_connect_routing_profile)
- [resource/aws_db_event_subscription](#resourceaws_db_event_subscription)
- [resource/aws_db_instance_role_association](#resourceaws_db_instance_role_association)
- [resource/aws_db_instance](#resourceaws_db_instance)
- [resource/aws_db_proxy_target](#resourceaws_db_proxy_target)
- [resource/aws_db_security_group](#resourceaws_db_security_group)
- [resource/aws_db_snapshot](#resourceaws_db_snapshot)
- [resource/aws_default_vpc](#resourceaws_default_vpc)
- [resource/aws_dms_endpoint](#resourceaws_dms_endpoint)
- [resource/aws_docdb_cluster](#resourceaws_docdb_cluster)
- [resource/aws_dx_gateway_association](#resourceaws_dx_gateway_association)
- [resource/aws_ec2_client_vpn_endpoint](#resourceaws_ec2_client_vpn_endpoint)
- [resource/aws_ec2_client_vpn_network_association](#resourceaws_ec2_client_vpn_network_association)
- [resource/aws_ecs_cluster](#resourceaws_ecs_cluster)
- [resource/aws_eip](#resourceaws_eip)
- [resource/aws_eip_association](#resourceaws_eip_association)
- [resource/aws_eks_addon](#resourceaws_eks_addon)
- [resource/aws_elasticache_cluster](#resourceaws_elasticache_cluster)
- [resource/aws_elasticache_replication_group](#resourceaws_elasticache_replication_group)
- [resource/aws_elasticache_security_group](#resourceaws_elasticache_security_group)
- [resource/aws_flow_log](#resourceaws_flow_log)
- [resource/aws_guardduty_organization_configuration](#resourceaws_guardduty_organization_configuration)
- [resource/aws_kinesis_firehose_delivery_stream](#resourceaws_kinesis_firehose_delivery_stream)
- [resource/aws_launch_configuration](#resourceaws_launch_configuration)
- [resource/aws_launch_template](#resourceaws_launch_template)
- [resource/aws_lightsail_instance](#resourceaws_lightsail_instance)
- [resource/aws_macie_member_account_association](#resourceaws_macie_member_account_association)
- [resource/aws_macie_s3_bucket_association](#resourceaws_macie_s3_bucket_association)
- [resource/aws_medialive_multiplex_program](#resourceaws_medialive_multiplex_program)
- [resource/aws_msk_cluster](#resourceaws_msk_cluster)
- [resource/aws_neptune_cluster](#resourceaws_neptune_cluster)
- [resource/aws_networkmanager_core_network](#resourceaws_networkmanager_core_network)
- [resource/aws_opensearch_domain](#resourceaws_opensearch_domain)
- [resource/aws_rds_cluster](#resourceaws_rds_cluster)
- [resource/aws_rds_cluster_instance](#resourceaws_rds_cluster_instance)
- [resource/aws_redshift_cluster](#resourceaws_redshift_cluster)
- [resource/aws_redshift_security_group](#resourceaws_redshift_security_group)
- [resource/aws_route](#resourceaws_route)
- [resource/aws_route_table](#resourceaws_route_table)
- [resource/aws_s3_object](#resourceaws_s3_object)
- [resource/aws_s3_object_copy](#resourceaws_s3_object_copy)
- [resource/aws_secretsmanager_secret](#resourceaws_secretsmanager_secret)
- [resource/aws_security_group](#resourceaws_security_group)
- [resource/aws_security_group_rule](#resourceaws_security_group_rule)
- [resource/aws_servicecatalog_product](#resourceaws_servicecatalog_product)
- [resource/aws_ssm_association](#resourceaws_ssm_association)
- [resource/aws_ssm_parameter](#resourceaws_ssm_parameter)
- [resource/aws_vpc](#resourceaws_vpc)
- [resource/aws_vpc_peering_connection](#resourceaws_vpc_peering_connection)
- [resource/aws_vpc_peering_connection_accepter](#resourceaws_vpc_peering_connection_accepter)
- [resource/aws_vpc_peering_connection_options](#resourceaws_vpc_peering_connection_options)
- [resource/aws_wafv2_web_acl](#resourceaws_wafv2_web_acl)
- [resource/aws_wafv2_web_acl_logging_configuration](#resourceaws_wafv2_web_acl_logging_configuration)
- [data-source/aws_api_gateway_rest_api](#data-sourceaws_api_gateway_rest_api)
- [data-source/aws_connect_hours_of_operation](#data-sourceaws_connect_hours_of_operation)
- [data-source/aws_db_instance](#data-sourceaws_db_instance)
- [data-source/aws_elasticache_cluster](#data-sourceaws_elasticache_cluster)
- [data-source/aws_elasticache_replication_group](#data-sourceaws_elasticache_replication_group)
- [data-source/aws_iam_policy_document](#data-sourceaws_iam_policy_document)
- [data-source/aws_identitystore_group](#data-sourceaws_identitystore_group)
- [data-source/aws_identitystore_user](#data-sourceaws_identitystore_user)
- [data-source/aws_launch_configuration](#data-sourceaws_launch_configuration)
- [data-source/aws_opensearch_domain](#data-sourceaws_opensearch_domain)
- [data-source/aws_quicksight_data_set](#data-sourceaws_quicksight_data_set)
- [data-source/aws_redshift_cluster](#data-sourceaws_redshift_cluster)
- [data-source/aws_redshift_service_account](#data-sourceaws_redshift_service_account)
- [data-source/aws_secretsmanager_secret](#data-sourceaws_secretsmanager_secret)
- [data-source/aws_service_discovery_service](#data-sourceaws_service_discovery_service)
- [data-source/aws_subnet_ids](#data-sourceaws_subnet_ids)
- [data-source/aws_vpc_peering_connection](#data-sourceaws_vpc_peering_connection)

## Provider Version Configuration

-> Before upgrading to version 5.0.0, upgrade to the most recent 4.X version of the provider and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html). You should not see changes you don't expect or deprecation notices.

Use [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init -upgrade`](https://www.terraform.io/docs/commands/init.html) to download the new version.

For example, given this previous configuration:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.provider import AwsProvider
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        AwsProvider(self, "aws")
```

Update to the latest 5.X version:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.provider import AwsProvider
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name):
        super().__init__(scope, name)
        AwsProvider(self, "aws")
```

## Provider Arguments

Version 5.0.0 removes these `provider` arguments:

* `assume_role.duration_seconds` - Use `assume_role.duration` instead
* `assume_role_with_web_identity.duration_seconds` - Use `assume_role_with_web_identity.duration` instead
* `s3_force_path_style` - Use `s3_use_path_style` instead
* `shared_credentials_file` - Use `shared_credentials_files` instead
* `skip_get_ec2_platforms` - Removed following the retirement of EC2-Classic

## Default Tags

The following enhancements are included:

* Duplicate `default_tags` can now be included and will be overwritten by resource `tags`.
* Zero value tags, `""`, can now be included in both `default_tags` and resource `tags`.
* Tags can now be `computed`.

## EC2-Classic Retirement

Following the retirement of EC2-Classic, we removed a number of resources, arguments, and attributes. This list summarizes what we _removed_:

* `aws_db_security_group` resource
* `aws_elasticache_security_group` resource
* `aws_redshift_security_group` resource
* [`aws_db_instance`](/docs/providers/aws/r/db_instance.html) resource's `security_group_names` argument
* [`aws_elasticache_cluster`](/docs/providers/aws/r/elasticache_cluster.html) resource's `security_group_names` argument
* [`aws_redshift_cluster`](/docs/providers/aws/r/redshift_cluster.html) resource's `cluster_security_groups` argument
* [`aws_launch_configuration`](/docs/providers/aws/r/launch_configuration.html) resource's `vpc_classic_link_id` and `vpc_classic_link_security_groups` arguments
* [`aws_vpc`](/docs/providers/aws/r/vpc.html) resource's `enable_classiclink` and `enable_classiclink_dns_support` arguments
* [`aws_default_vpc`](/docs/providers/aws/r/default_vpc.html) resource's `enable_classiclink` and `enable_classiclink_dns_support` arguments
* [`aws_vpc_peering_connection`](/docs/providers/aws/r/vpc_peering_connection.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments
* [`aws_vpc_peering_connection_accepter`](/docs/providers/aws/r/vpc_peering_connection_accepter.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments
* [`aws_vpc_peering_connection_options`](/docs/providers/aws/r/vpc_peering_connection_options.html) resource's `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` arguments
* [`aws_db_instance`](/docs/providers/aws/d/db_instance.html) data source's `db_security_groups` attribute
* [`aws_elasticache_cluster`](/docs/providers/aws/d/elasticache_cluster.html) data source's `security_group_names` attribute
* [`aws_redshift_cluster`](/docs/providers/aws/d/redshift_cluster.html) data source's `cluster_security_groups` attribute
* [`aws_launch_configuration`](/docs/providers/aws/d/launch_configuration.html) data source's `vpc_classic_link_id` and `vpc_classic_link_security_groups` attributes

## Macie Classic Retirement

Following the retirement of Amazon Macie Classic, we removed these resources:

* `aws_macie_member_account_association`
* `aws_macie_s3_bucket_association`

## resource/aws_acmpca_certificate_authority

Remove `status` from configurations as it no longer exists.

## resource/aws_api_gateway_rest_api

The `minimum_compression_size` attribute is now a String type, allowing it to be computed when set via the `body` attribute. Valid values remain the same.

## resource/aws_autoscaling_attachment

Change `alb_target_group_arn`, which no longer exists, to `lb_target_group_arn` in configurations.

## resource/aws_autoscaling_group

Remove `tags` from configurations as it no longer exists. Use the `tag` attribute instead. For use cases requiring dynamic tags, see the [Dynamic Tagging example](../r/autoscaling_group.html.markdown#dynamic-tagging).

## resource/aws_budgets_budget

Remove `cost_filters` from configurations as it no longer exists.

## resource/aws_ce_anomaly_subscription

Remove `threshold` from configurations as it no longer exists.

## resource/aws_cloudwatch_event_target

The `ecs_target.propagate_tags` attribute now has no default value. If no value is specified, the tags are not propagated.

## resource/aws_codebuild_project

Remove `secondary_sources.auth` and `source.auth` from configurations as they no longer exist.

## resource/aws_connect_hours_of_operation

Remove `hours_of_operation_arn` from configurations as it no longer exists.

## resource/aws_connect_queue

Remove `quick_connect_ids_associated` from configurations as it no longer exists.

## resource/aws_connect_routing_profile

Remove `queue_configs_associated` from configurations as it no longer exists.
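As an aside on the Default Tags behavior described earlier: the effective tag set on a resource can be thought of as the provider-level `default_tags` merged with the resource's own `tags`, with the resource's tags winning on duplicates and zero-value (`""`) tags preserved. The following plain-Python sketch (illustrative only, not provider code) shows that precedence:

```python
def effective_tags(default_tags: dict, resource_tags: dict) -> dict:
    """Illustrative merge: resource-level tags override duplicate
    provider-level default_tags; zero-value ("") tags are kept."""
    return {**default_tags, **resource_tags}

# "Team" is duplicated and overridden; "Scratch" is a zero-value tag.
merged = effective_tags(
    {"Environment": "prod", "Team": "platform"},
    {"Team": "data", "Scratch": ""},
)
print(merged)  # {'Environment': 'prod', 'Team': 'data', 'Scratch': ''}
```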
## resource/aws_db_event_subscription

Configurations that define `source_ids` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. For example, `source_ids = [aws_db_instance.example.id]` must be updated to `source_ids = [aws_db_instance.example.identifier]`.

## resource/aws_db_instance

`aws_db_instance` has had a number of changes:

1. [`id` is no longer the identifier](#aws_db_instanceid-is-no-longer-the-identifier)
2. [Use `db_name` instead of `name`](#use-db_name-instead-of-name)
3. [Remove `db_security_groups`](#remove-db_security_groups)

### aws_db_instance.id is no longer the identifier

**What `id` _is_ has changed and can have far-reaching consequences.** Fortunately, fixing configurations is straightforward.

`id` is _now_ the DBI Resource ID (_i.e._, `dbi-resource-id`), an immutable "identifier" for an instance. `id` is now the same as `resource_id`. (We recommend using `resource_id` rather than `id` when you need to refer to the DBI Resource ID.) _Previously_, `id` was the DB Identifier. Now, when you need to refer to the _DB Identifier_, use `identifier`.

Fixing configurations involves changing any `id` references to `identifier` where the reference expects the DB Identifier. For example, if you're replicating an `aws_db_instance`, you can no longer use `id` to define the `replicate_source_db`.

This configuration will now result in an error since `replicate_source_db` expects a _DB Identifier_:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.db_instance import DbInstance
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, instanceClass):
        super().__init__(scope, name)
        DbInstance(self, "test",
            replicate_source_db=source.id,
            instance_class=instance_class
        )
```

You can fix the configuration like this:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.db_instance import DbInstance
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, instanceClass):
        super().__init__(scope, name)
        DbInstance(self, "test",
            replicate_source_db=source.identifier,
            instance_class=instance_class
        )
```

### Use `db_name` instead of `name`

Change `name` to `db_name` in configurations as `name` no longer exists.

### Remove `db_security_groups`

Remove `db_security_groups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement.

## resource/aws_db_instance_role_association

Configurations that define `db_instance_identifier` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`.

## resource/aws_db_proxy_target

Configurations that define `db_instance_identifier` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`.

## resource/aws_db_security_group

We removed this resource as part of the EC2-Classic retirement.
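The `id` → `identifier` updates called for in the sections above are mechanical, so a rough one-off rewrite of HCL configurations can be scripted. This is an illustrative plain-Python sketch, not an official migration tool; review its output rather than trusting it blindly:

```python
import re

def rewrite_db_instance_refs(hcl: str) -> str:
    # Rewrite references like aws_db_instance.example.id to use
    # .identifier, which arguments expecting a DB Identifier now require.
    # Existing .identifier references are left untouched.
    return re.sub(
        r"\b(aws_db_instance\.[A-Za-z0-9_-]+)\.id\b",
        r"\1.identifier",
        hcl,
    )

print(rewrite_db_instance_refs("source_ids = [aws_db_instance.example.id]"))
# source_ids = [aws_db_instance.example.identifier]
```

Note this deliberately leaves any reference that genuinely wants the new DBI Resource ID alone only if it already uses `resource_id`; double-check places where the old `id` value was intentional.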
## resource/aws_db_snapshot

Configurations that define `db_instance_identifier` using the `id` attribute of `aws_db_instance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`.

## resource/aws_default_vpc

Remove `enable_classiclink` and `enable_classiclink_dns_support` from configurations as they no longer exist. They were part of the EC2-Classic retirement.

## resource/aws_dms_endpoint

Remove `s3_settings.ignore_headers_row` from configurations as it no longer exists. **Be careful not to confuse `ignore_headers_row`, which no longer exists, with `ignore_header_rows`, which still exists.**

## resource/aws_docdb_cluster

Changes to the `snapshot_identifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings the behavior of the cluster's `snapshot_identifier` attribute into alignment with other RDS resources, such as `aws_db_instance`.

Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced.

## resource/aws_dx_gateway_association

The `vpn_gateway_id` attribute has been deprecated. All configurations using `vpn_gateway_id` should be updated to use the `associated_gateway_id` attribute instead.

## resource/aws_ec2_client_vpn_endpoint

Remove `status` from configurations as it no longer exists.

## resource/aws_ec2_client_vpn_network_association

Remove `security_groups` and `status` from configurations as they no longer exist.

## resource/aws_ecs_cluster

Remove `capacity_providers` and `default_capacity_provider_strategy` from configurations as they no longer exist.
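Many of the changes in this release are arguments disappearing outright, as in the sections above. A pre-upgrade audit of your configurations can be sketched as a simple lookup; the removal table below covers only a few examples from this guide, so extend it for the resources you actually use (illustrative plain Python, not a provider feature):

```python
# Partial table of arguments removed in v5, keyed by resource type.
# Populate from this guide; the entries here are just examples.
REMOVED_ARGS = {
    "aws_ecs_cluster": {"capacity_providers", "default_capacity_provider_strategy"},
    "aws_ec2_client_vpn_endpoint": {"status"},
    "aws_default_vpc": {"enable_classiclink", "enable_classiclink_dns_support"},
}

def removed_arguments(resource_type: str, config: dict) -> set:
    """Return the configured argument names that no longer exist in v5."""
    return set(config) & REMOVED_ARGS.get(resource_type, set())

flagged = removed_arguments(
    "aws_ecs_cluster",
    {"name": "example", "capacity_providers": ["FARGATE"]},
)
print(flagged)  # {'capacity_providers'}
```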
## resource/aws_eip

* With the retirement of EC2-Classic, the `standard` domain is no longer supported.
* The `vpc` argument has been deprecated. Use the `domain` argument instead.

## resource/aws_eip_association

With the retirement of EC2-Classic, the `standard` domain is no longer supported.

## resource/aws_eks_addon

The `resolve_conflicts` argument has been deprecated. Use the `resolve_conflicts_on_create` and/or `resolve_conflicts_on_update` arguments instead.

## resource/aws_elasticache_cluster

Remove `security_group_names` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement.

## resource/aws_elasticache_replication_group

* Remove the `cluster_mode` configuration block. Use top-level `num_node_groups` and `replicas_per_node_group` instead.
* Remove the `availability_zones`, `number_cache_clusters`, and `replication_group_description` arguments from configurations as they no longer exist. Use `preferred_cache_cluster_azs`, `num_cache_clusters`, and `description`, respectively, instead.

## resource/aws_elasticache_security_group

We removed this resource as part of the EC2-Classic retirement.

## resource/aws_flow_log

The `log_group_name` attribute has been deprecated. All configurations using `log_group_name` should be updated to use the `log_destination` attribute instead.

## resource/aws_guardduty_organization_configuration

The `auto_enable` argument has been deprecated. Use the `auto_enable_organization_members` argument instead.

## resource/aws_kinesis_firehose_delivery_stream

* Remove the `s3_configuration` attribute from the root of the resource. `s3_configuration` is now a part of the following blocks: `elasticsearch_configuration`, `opensearch_configuration`, `redshift_configuration`, `splunk_configuration`, and `http_endpoint_configuration`.
* Remove `s3` as an option for `destination`. Use `extended_s3` instead.
* Rename `extended_s3_configuration.0.s3_backup_configuration.0.buffer_size` and `extended_s3_configuration.0.s3_backup_configuration.0.buffer_interval` to `extended_s3_configuration.0.s3_backup_configuration.0.buffering_size` and `extended_s3_configuration.0.s3_backup_configuration.0.buffering_interval`, respectively.
* Rename `redshift_configuration.0.s3_backup_configuration.0.buffer_size` and `redshift_configuration.0.s3_backup_configuration.0.buffer_interval` to `redshift_configuration.0.s3_backup_configuration.0.buffering_size` and `redshift_configuration.0.s3_backup_configuration.0.buffering_interval`, respectively.
* Rename `s3_configuration.0.buffer_size` and `s3_configuration.0.buffer_interval` to `s3_configuration.0.buffering_size` and `s3_configuration.0.buffering_interval`, respectively.

## resource/aws_launch_configuration

Remove `vpc_classic_link_id` and `vpc_classic_link_security_groups` from configurations as they no longer exist. We removed them as part of the EC2-Classic retirement.

## resource/aws_launch_template

We removed defaults from `metadata_options`. Launch template metadata options will now default to unset values, which is the AWS default behavior.

## resource/aws_lightsail_instance

Remove `ipv6_address` from configurations as it no longer exists.

## resource/aws_macie_member_account_association

We removed this resource as part of the Macie Classic retirement.

## resource/aws_macie_s3_bucket_association

We removed this resource as part of the Macie Classic retirement.

## resource/aws_medialive_multiplex_program

Change `statemux_settings`, which no longer exists, to `statmux_settings` in configurations.

## resource/aws_msk_cluster

Remove `broker_node_group_info.ebs_volume_size` from configurations as it no longer exists.

## resource/aws_neptune_cluster

Changes to the `snapshot_identifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings the behavior of the cluster's `snapshot_identifier` attribute into alignment with other RDS resources, such as `aws_db_instance`.

Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced.

## resource/aws_networkmanager_core_network

Remove `policy_document` from configurations as it no longer exists. Use the `aws_networkmanager_core_network_policy_attachment` resource instead.

## resource/aws_opensearch_domain

* The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead.
* The `engine_version` attribute no longer has a default value. Omitting this attribute will now create a domain with the latest OpenSearch version, consistent with the behavior of the AWS API.

## resource/aws_rds_cluster

* Update configurations to always include `engine` since it is now required and has no default. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster.
* Changes to the `snapshot_identifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings the behavior of the cluster's `snapshot_identifier` attribute into alignment with other RDS resources, such as `aws_db_instance`. **NOTE:** Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced.
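Because `engine` no longer defaults to `"aurora"`, a cluster configuration that silently relied on the old default will now fail. A pre-flight check along these lines can surface that before an apply (illustrative plain Python, not provider behavior):

```python
def check_engine(cluster_config: dict) -> str:
    """Fail fast on cluster configs that relied on the old implicit
    engine = "aurora" (MySQL-5.6-compatible) default, removed in v5."""
    engine = cluster_config.get("engine")
    if not engine:
        raise ValueError(
            "engine is now required; previously an omitted engine "
            "defaulted to 'aurora'"
        )
    return engine

print(check_engine({"cluster_identifier": "example", "engine": "aurora-mysql"}))
# aurora-mysql
```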
## resource/aws_rds_cluster_instance

Update configurations to always include `engine` since it is now required and has no default. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster.

## resource/aws_redshift_cluster

Remove `cluster_security_groups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement.

## resource/aws_redshift_security_group

We removed this resource as part of the EC2-Classic retirement.

## resource/aws_route

Update configurations to use `network_interface_id` rather than `instance_id`, which no longer exists.

For example, this configuration is _no longer valid_:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import Token, TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.route import Route
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, routeTableId):
        super().__init__(scope, name)
        Route(self, "example",
            instance_id=Token.as_string(aws_instance_example.id),
            route_table_id=route_table_id
        )
```

One possible way to fix this configuration involves referring to the `primary_network_interface_id` of an instance:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import Token, TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.route import Route
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, routeTableId):
        super().__init__(scope, name)
        Route(self, "example",
            network_interface_id=Token.as_string(aws_instance_example.primary_network_interface_id),
            route_table_id=route_table_id
        )
```

Another fix is to use an ENI:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.instance import Instance
from imports.aws.network_interface import NetworkInterface
from imports.aws.route import Route
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, subnetId, deviceIndex, routeTableId):
        super().__init__(scope, name)
        example = NetworkInterface(self, "example",
            subnet_id=subnet_id
        )
        aws_instance_example = Instance(self, "example_1",
            network_interface=[InstanceNetworkInterface(
                network_interface_id=example.id,
                device_index=device_index
            )]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_instance_example.override_logical_id("example")
        aws_route_example = Route(self, "example_2",
            depends_on=[aws_instance_example],
            network_interface_id=example.id,
            route_table_id=route_table_id
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_route_example.override_logical_id("example")
```

## resource/aws_route_table

Update configurations to use `route.*.network_interface_id` rather than `route.*.instance_id`, which no longer exists.

For example, this configuration is _no longer valid_:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.route_table import RouteTable
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, vpcId):
        super().__init__(scope, name)
        RouteTable(self, "example",
            route=[RouteTableRoute(
                instance_id=aws_instance_example.id
            )],
            vpc_id=vpc_id
        )
```

One possible way to fix this configuration involves referring to the `primary_network_interface_id` of an instance:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import Token, TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.route_table import RouteTable
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, vpcId):
        super().__init__(scope, name)
        RouteTable(self, "example",
            route=[RouteTableRoute(
                network_interface_id=Token.as_string(aws_instance_example.primary_network_interface_id)
            )],
            vpc_id=vpc_id
        )
```

Another fix is to use an ENI:

```python
# DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
from constructs import Construct
from cdktf import TerraformStack
#
# Provider bindings are generated by running `cdktf get`.
# See https://cdk.tf/provider-generation for more details.
#
from imports.aws.instance import Instance
from imports.aws.network_interface import NetworkInterface
from imports.aws.route_table import RouteTable
class MyConvertedCode(TerraformStack):
    def __init__(self, scope, name, *, subnetId, deviceIndex, vpcId):
        super().__init__(scope, name)
        example = NetworkInterface(self, "example",
            subnet_id=subnet_id
        )
        aws_instance_example = Instance(self, "example_1",
            network_interface=[InstanceNetworkInterface(
                network_interface_id=example.id,
                device_index=device_index
            )]
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_instance_example.override_logical_id("example")
        aws_route_table_example = RouteTable(self, "example_2",
            depends_on=[aws_instance_example],
            route=[RouteTableRoute(
                network_interface_id=example.id
            )],
            vpc_id=vpc_id
        )
        # This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.
        aws_route_table_example.override_logical_id("example")
```

## resource/aws_s3_object

The `acl` attribute no longer has a default value. Previously this was set to `private` when omitted. Objects requiring a private ACL should now explicitly set this attribute.

## resource/aws_s3_object_copy

The `acl` attribute no longer has a default value. Previously this was set to `private` when omitted. Object copies requiring a private ACL should now explicitly set this attribute.

## resource/aws_secretsmanager_secret

Remove `rotation_enabled`, `rotation_lambda_arn`, and `rotation_rules` from configurations as they no longer exist.

## resource/aws_security_group

With the retirement of EC2-Classic, non-VPC security groups are no longer supported.

## resource/aws_security_group_rule

With the retirement of EC2-Classic, non-VPC security groups are no longer supported.
## resource/aws_servicecatalog_product

Changes to any `provisioning_artifact_parameters` arguments now properly trigger a replacement. This fixes incorrect behavior, but may technically be breaking for configurations expecting non-functional in-place updates.

## resource/aws_ssm_association

The `instance_id` attribute has been deprecated. All configurations using `instance_id` should be updated to use the `targets` attribute instead.

## resource/aws_ssm_parameter

The `overwrite` attribute has been deprecated. Existing parameters should be explicitly imported rather than relying on the "import on create" behavior previously enabled by setting `overwrite = true`. In a future major version the `overwrite` attribute will be removed and attempting to create a parameter that already exists will fail.

## resource/aws_vpc

Remove `enable_classiclink` and `enable_classiclink_dns_support` from configurations as they no longer exist. They were part of the EC2-Classic retirement.

## resource/aws_vpc_peering_connection

Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement.

## resource/aws_vpc_peering_connection_accepter

Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement.

## resource/aws_vpc_peering_connection_options

Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement.

## resource/aws_wafv2_web_acl

* Remove `statement.managed_rule_group_statement.excluded_rule` and `statement.rule_group_reference_statement.excluded_rule` from configurations as they no longer exist.
* The `statement.rule_group_reference_statement.rule_action_override` attribute has been added.
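In WAFv2, "excluding" a rule effectively overrode its action to count, so the removed `excluded_rule` entries map naturally onto `rule_action_override` blocks with a count action. The sketch below illustrates that mapping as plain data; it is an assumption-laden illustration, not the provider schema, so check the `aws_wafv2_web_acl` resource documentation for the exact block shape:

```python
def excluded_to_overrides(excluded_rule_names: list) -> list:
    # Each formerly "excluded" rule becomes an override whose action is
    # count, which is what exclusion effectively did.
    return [
        {"name": name, "action_to_use": {"count": {}}}
        for name in excluded_rule_names
    ]

print(excluded_to_overrides(["SizeRestrictions_BODY"]))
# [{'name': 'SizeRestrictions_BODY', 'action_to_use': {'count': {}}}]
```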
## resource/aws_wafv2_web_acl_logging_configuration

Remove `redacted_fields.all_query_arguments`, `redacted_fields.body`, and `redacted_fields.single_query_argument` from configurations as they no longer exist.

## data-source/aws_api_gateway_rest_api

The `minimum_compression_size` attribute is now a String type, allowing it to be computed when set via the `body` attribute.

## data-source/aws_connect_hours_of_operation

Remove `hours_of_operation_arn` from configurations as it no longer exists.

## data-source/aws_db_instance

Remove `db_security_groups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement.

## data-source/aws_elasticache_cluster

Remove `security_group_names` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement.

## data-source/aws_elasticache_replication_group

Rename `number_cache_clusters` and `replication_group_description`, which no longer exist, to `num_cache_clusters` and `description`, respectively.

## data-source/aws_iam_policy_document

* Remove `source_json` and `override_json` from configurations. Use `source_policy_documents` and `override_policy_documents`, respectively, instead.
* Empty `statement.sid` values are no longer added to the `json` attribute value.

## data-source/aws_identitystore_group

Remove `filter` from configurations as it no longer exists.

## data-source/aws_identitystore_user

Remove `filter` from configurations as it no longer exists.

## data-source/aws_launch_configuration

Remove `vpc_classic_link_id` and `vpc_classic_link_security_groups` from configurations as they no longer exist. They were part of the EC2-Classic retirement.

## data-source/aws_opensearch_domain

The `kibana_endpoint` attribute has been deprecated. All configurations using `kibana_endpoint` should be updated to use the `dashboard_endpoint` attribute instead.
## data-source/aws_quicksight_data_set

The `tags_all` attribute has been deprecated and will be removed in a future version.

## data-source/aws_redshift_cluster

Remove `cluster_security_groups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement.

## data-source/aws_redshift_service_account

AWS [documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) now recommends that [a service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) be used instead of the AWS account ID in any relevant IAM policy.
The [`aws_redshift_service_account`](/docs/providers/aws/d/redshift_service_account.html) data source should now be considered deprecated and will be removed in a future version.

## data-source/aws_service_discovery_service

The `tags_all` attribute has been deprecated and will be removed in a future version.

## data-source/aws_secretsmanager_secret

Remove `rotation_enabled`, `rotation_lambda_arn`, and `rotation_rules` from configurations as they no longer exist.

## data-source/aws_subnet_ids

We removed the `aws_subnet_ids` data source. Use the [`aws_subnets`](/docs/providers/aws/d/subnets.html) data source instead.

## data-source/aws_vpc_peering_connection

Remove `allow_classic_link_to_remote_vpc` and `allow_vpc_to_remote_classic_link` from configurations as they no longer exist. They were part of the EC2-Classic retirement.
- - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/continuous-validation-examples.html.md b/website/docs/cdktf/typescript/guides/continuous-validation-examples.html.md deleted file mode 100644 index 16e4493816d..00000000000 --- a/website/docs/cdktf/typescript/guides/continuous-validation-examples.html.md +++ /dev/null @@ -1,119 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Using Terraform Cloud's Continuous Validation feature with the AWS Provider" -description: |- - Using Terraform Cloud's Continuous Validation feature with the AWS Provider ---- - - - -# Using Terraform Cloud's Continuous Validation feature with the AWS Provider - -## Continuous Validation in Terraform Cloud - -The Continuous Validation feature in Terraform Cloud (TFC) allows users to make assertions about their infrastructure between applied runs. This helps users to identify issues at the time they first appear and avoid situations where a change is only identified once it causes a customer-facing problem. - -Users can add checks to their Terraform configuration using check blocks. Check blocks contain assertions that are defined with a custom condition expression and an error message. When the condition expression evaluates to true the check passes, but when the expression evaluates to false Terraform will show a warning message that includes the user-defined error message. - -Custom conditions can be created using data from Terraform providers’ resources and data sources. Data can also be combined from multiple sources; for example, you can use checks to monitor expirable resources by comparing a resource’s expiration date attribute to the current time returned by Terraform’s built-in time functions. - -Below, this guide shows examples of how data returned by the AWS provider can be used to define checks in your Terraform configuration. 
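The expirable-resources pattern mentioned above boils down to a time comparison. A minimal sketch of such a condition, with illustrative names (a real check block would compare a resource's expiration attribute against Terraform's built-in time functions, not call TypeScript):

```typescript
// Sketch of the condition a check block evaluates for an expirable
// resource: the assertion passes while the expiration date is still in
// the future. Function and parameter names are illustrative.
function resourceStillValid(
  expirationDate: string, // RFC 3339 timestamp from a resource attribute
  now: Date = new Date()
): boolean {
  return new Date(expirationDate).getTime() > now.getTime();
}
```

When the comparison flips to false, Terraform surfaces the check's user-defined error message as a warning on the next run, which is exactly the behavior the examples below demonstrate.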
- -## Example - Ensure your AWS account is within budget (aws_budgets_budget) - -AWS Budgets allows you to track and take action on your AWS costs and usage. You can use AWS Budgets to monitor your aggregate utilization and coverage metrics for your Reserved Instances (RIs) or Savings Plans. - -- You can use AWS Budgets to enable simple-to-complex cost and usage tracking. Some examples include: - -- Setting a monthly cost budget with a fixed target amount to track all costs associated with your account. - -- Setting a monthly cost budget with a variable target amount, with each subsequent month growing the budget target by 5 percent. - -- Setting a monthly usage budget with a fixed usage amount and forecasted notifications to help ensure that you are staying within the service limits for a specific service. - -- Setting a daily utilization or coverage budget to track your RI or Savings Plans. - -The example below shows how a check block can be used to assert that you remain in compliance for the budgets that have been established. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - } -} - -``` - -If the budget exceeds the set limit, the check block assertion will return a warning similar to the following: - -``` -│ Warning: Check block assertion failed -│ -│ on main.tf line 43, in check "check_budget_exceeded": -│ 43: condition = !data.aws_budgets_budget.example.budget_exceeded -│ ├──────────────── -│ │ data.aws_budgets_budget.example.budget_exceeded is true -│ -│ AWS budget has been exceeded! 
Calculated spend: '1550.0' and budget limit: '1200.0' -``` - -## Example - Check GuardDuty for Threats (aws_guardduty_finding_ids) - -Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your Amazon Web Services accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time consuming for security teams to continuously analyze event log data for potential threats. With GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in Amazon Web Services Cloud. - -The following example outlines how a check block can be utilized to assert that no threats have been identified from AWS GuardDuty. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsGuarddutyDetector } from "./.gen/providers/aws/data-aws-guardduty-detector"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsGuarddutyDetector(this, "example", {}); - } -} - -``` - -If findings are present, the check block assertion will return a warning similar to the following: - -``` -│ Warning: Check block assertion failed -│ -│ on main.tf line 24, in check "check_guardduty_findings": -│ 24: condition = !data.aws_guardduty_finding_ids.example.has_findings -│ ├──────────────── -│ │ data.aws_guardduty_finding_ids.example.has_findings is true -│ -│ AWS GuardDuty detector 'abcdef123456' has 9 open findings! 
-``` - -## Example - Check for unused IAM roles (aws_iam_role) - -AWS IAM tracks role usage, including the [last used date and region](https://docs.aws.amazon.com/IAM/latest/APIReference/API_RoleLastUsed.html). This information is returned with the [`awsIamRole`](../d/iam_role.html.markdown) data source, and can be used in continuous validation to check for unused roles. AWS reports activity for the trailing 400 days. If a role is unused within that period, the `lastUsedDate` will be an empty string (`""`). - -In the example below, the [`timecmp`](https://developer.hashicorp.com/terraform/language/functions/timecmp) function checks for a `lastUsedDate` more recent than the `unusedLimit` local variable (30 days ago). The [`coalesce`](https://developer.hashicorp.com/terraform/language/functions/coalesce) function handles empty (`""`) `lastUsedDate` values safely, falling back to the `unusedLimit` local, and automatically triggering a failed condition. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - } -} - -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/custom-service-endpoints.html.md b/website/docs/cdktf/typescript/guides/custom-service-endpoints.html.md deleted file mode 100644 index cab8bc51174..00000000000 --- a/website/docs/cdktf/typescript/guides/custom-service-endpoints.html.md +++ /dev/null @@ -1,416 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Custom Service Endpoint Configuration" -description: |- - Configuring the Terraform AWS Provider to connect to custom AWS service endpoints and AWS compatible solutions. 
---- - - - - -# Custom Service Endpoint Configuration - -The Terraform AWS Provider configuration can be customized to connect to non-default AWS service endpoints and AWS compatible solutions. This may be useful for environments with specific compliance requirements, such as using [AWS FIPS 140-2 endpoints](https://aws.amazon.com/compliance/fips/), connecting to AWS Snowball, SC2S, or C2S environments, or local testing. - -This guide outlines how to get started with customizing endpoints, describes the available endpoint configurations, and offers example configurations for working with certain local development and testing solutions. - -~> **NOTE:** Support for connecting the Terraform AWS Provider with custom endpoints and AWS compatible solutions is offered as best effort. Individual Terraform resources may require compatibility updates to work in certain environments. Integration testing by HashiCorp during provider changes is exclusively done against default AWS endpoints at this time. - - - -- [Getting Started with Custom Endpoints](#getting-started-with-custom-endpoints) -- [Available Endpoint Customizations](#available-endpoint-customizations) -- [Connecting to Local AWS Compatible Solutions](#connecting-to-local-aws-compatible-solutions) - - [DynamoDB Local](#dynamodb-local) - - [LocalStack](#localstack) - - - -## Getting Started with Custom Endpoints - -Customized endpoints are configured within `provider` declarations using the `endpoints` configuration block, e.g., - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - endpoints: [ - { - dynamodb: "http://localhost:4569", - s3: "http://localhost:4572", - }, - ], - }); - } -} - -``` - -If multiple, different Terraform AWS Provider configurations are required, see the [Terraform documentation on multiple provider instances](https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-instances) for additional information about the `alias` provider configuration and its usage. - -## Available Endpoint Customizations - -The Terraform AWS Provider allows the following endpoints to be customized. - -**Note:** The Provider allows some service endpoints to be customized despite not supporting those services. - -**Note:** For backward compatibility, some endpoints can be assigned using multiple service "keys" (_e.g._, `dms`, `databasemigration`, or `databasemigrationservice`). If you use more than one equivalent service key in your configuration, the provider will use the _first_ endpoint value set. For example, in the configuration below we have set the DMS service endpoints using both `dms` and `databasemigration`. The provider will set the endpoint to whichever appears first. Subsequent values are ignored. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - endpoints: [ - { - databasemigration: "http://this.value.will.be.ignored.com", - dms: "http://this.value.will.be.used.com", - }, - ], - }); - } -} - -``` - - - - -
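The "first equivalent key wins" rule described above can be modeled roughly as follows. This is an illustrative sketch only (the provider's actual resolution logic lives in its Go source); the alias group mirrors the `dms` example in this guide:

```typescript
// A plausible model of endpoint resolution across equivalent keys:
// the first alias with a configured value wins, later ones are ignored.
type EndpointConfig = Record<string, string | undefined>;

const dmsAliases = ["dms", "databasemigration", "databasemigrationservice"];

function resolveEndpoint(
  aliases: string[],
  endpoints: EndpointConfig
): string | undefined {
  for (const key of aliases) {
    const value = endpoints[key];
    if (value !== undefined) {
      return value; // subsequent equivalent keys are ignored
    }
  }
  return undefined; // fall back to the default AWS endpoint
}
```

With both `dms` and `databasemigration` set, only the `dms` value is used, matching the behavior shown in the configuration above.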
- accessanalyzer
- account
- acm
- acmpca
- amp (or prometheus or prometheusservice)
- amplify
- apigateway
- apigatewayv2
- appautoscaling (or applicationautoscaling)
- appconfig
- appflow
- appintegrations (or appintegrationsservice)
- applicationinsights
- appmesh
- apprunner
- appstream
- appsync
- athena
- auditmanager
- autoscaling
- autoscalingplans
- backup
- batch
- budgets
- ce (or costexplorer)
- chime
- chimesdkmediapipelines
- chimesdkvoice
- cleanrooms
- cloud9
- cloudcontrol (or cloudcontrolapi)
- cloudformation
- cloudfront
- cloudhsmv2 (or cloudhsm)
- cloudsearch
- cloudtrail
- cloudwatch
- codeartifact
- codebuild
- codecatalyst
- codecommit
- codegurureviewer
- codepipeline
- codestarconnections
- codestarnotifications
- cognitoidentity
- cognitoidp (or cognitoidentityprovider)
- comprehend
- computeoptimizer
- configservice (or config)
- connect
- controltower
- cur (or costandusagereportservice)
- dataexchange
- datapipeline
- datasync
- dax
- deploy (or codedeploy)
- detective
- devicefarm
- directconnect
- dlm
- dms (or databasemigration or databasemigrationservice)
- docdb
- docdbelastic
- ds (or directoryservice)
- dynamodb
- ec2
- ecr
- ecrpublic
- ecs
- efs
- eks
- elasticache
- elasticbeanstalk (or beanstalk)
- elasticsearch (or es or elasticsearchservice)
- elastictranscoder
- elb (or elasticloadbalancing)
- elbv2 (or elasticloadbalancingv2)
- emr
- emrcontainers
- emrserverless
- events (or eventbridge or cloudwatchevents)
- evidently (or cloudwatchevidently)
- finspace
- firehose
- fis
- fms
- fsx
- gamelift
- glacier
- globalaccelerator
- glue
- grafana (or managedgrafana or amg)
- greengrass
- guardduty
- healthlake
- iam
- identitystore
- imagebuilder
- inspector
- inspector2 (or inspectorv2)
- internetmonitor
- iot
- iotanalytics
- iotevents
- ivs
- ivschat
- kafka (or msk)
- kafkaconnect
- kendra
- keyspaces
- kinesis
- kinesisanalytics
- kinesisanalyticsv2
- kinesisvideo
- kms
- lakeformation
- lambda
- lexmodels (or lexmodelbuilding or lexmodelbuildingservice or lex)
- lexmodelsv2 (or lexv2models)
- licensemanager
- lightsail
- location (or locationservice)
- logs (or cloudwatchlog or cloudwatchlogs)
- macie2
- mediaconnect
- mediaconvert
- medialive
- mediapackage
- mediastore
- memorydb
- mq
- mwaa
- neptune
- networkfirewall
- networkmanager
- oam (or cloudwatchobservabilityaccessmanager)
- opensearch (or opensearchservice)
- opensearchserverless
- opsworks
- organizations
- outposts
- pinpoint
- pipes
- pricing
- qldb
- quicksight
- ram
- rbin (or recyclebin)
- rds
- redshift
- redshiftdata (or redshiftdataapiservice)
- redshiftserverless
- resourceexplorer2
- resourcegroups
- resourcegroupstaggingapi (or resourcegroupstagging)
- rolesanywhere
- route53
- route53domains
- route53recoverycontrolconfig
- route53recoveryreadiness
- route53resolver
- rum (or cloudwatchrum)
- s3 (or s3api)
- s3control
- s3outposts
- sagemaker
- scheduler
- schemas
- secretsmanager
- securityhub
- securitylake
- serverlessrepo (or serverlessapprepo or serverlessapplicationrepository)
- servicecatalog
- servicediscovery
- servicequotas
- ses
- sesv2
- sfn (or stepfunctions)
- shield
- signer
- simpledb (or sdb)
- sns
- sqs
- ssm
- ssmcontacts
- ssmincidents
- ssoadmin
- storagegateway
- sts
- swf
- synthetics
- timestreamwrite
- transcribe (or transcribeservice)
- transfer
- verifiedpermissions
- vpclattice
- waf
- wafregional
- wafv2
- worklink
- workspaces
- xray
- - -As a convenience, for compatibility with the [Terraform S3 Backend](https://www.terraform.io/language/settings/backends/s3), -the following service endpoints can be configured using environment variables: - -* DynamoDB: `tfAwsDynamodbEndpoint` (or **Deprecated** `awsDynamodbEndpoint`) -* IAM: `tfAwsIamEndpoint` (or **Deprecated** `awsIamEndpoint`) -* S3: `tfAwsS3Endpoint` (or **Deprecated** `awsS3Endpoint`) -* STS: `tfAwsStsEndpoint` (or **Deprecated** `awsStsEndpoint`) - -## Connecting to Local AWS Compatible Solutions - -~> **NOTE:** This information is not intended to be exhaustive for all local AWS compatible solutions or necessarily authoritative configurations for those documented. Check the documentation for each of these solutions for the most up to date information. - -### DynamoDB Local - -The Amazon DynamoDB service offers a downloadable version for writing and testing applications without accessing the DynamoDB web service. For more information about this solution, see the [DynamoDB Local documentation in the Amazon DynamoDB Developer Guide](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html). - -An example provider configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - accessKey: "mock_access_key", - endpoints: [ - { - dynamodb: "http://localhost:8000", - }, - ], - region: "us-east-1", - secretKey: "mock_secret_key", - skipCredentialsValidation: true, - skipMetadataApiCheck: Token.asString(true), - skipRequestingAccountId: true, - }); - } -} - -``` - -### LocalStack - -[LocalStack](https://localstack.cloud/) provides an easy-to-use test/mocking framework for developing Cloud applications. - -An example provider configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - accessKey: "mock_access_key", - endpoints: [ - { - apigateway: "http://localhost:4566", - cloudformation: "http://localhost:4566", - cloudwatch: "http://localhost:4566", - dynamodb: "http://localhost:4566", - es: "http://localhost:4566", - firehose: "http://localhost:4566", - iam: "http://localhost:4566", - kinesis: "http://localhost:4566", - lambda: "http://localhost:4566", - redshift: "http://localhost:4566", - route53: "http://localhost:4566", - s3: "http://localhost:4566", - secretsmanager: "http://localhost:4566", - ses: "http://localhost:4566", - sns: "http://localhost:4566", - sqs: "http://localhost:4566", - ssm: "http://localhost:4566", - stepfunctions: "http://localhost:4566", - sts: "http://localhost:4566", - }, - ], - region: "us-east-1", - s3UsePathStyle: true, - secretKey: "mock_secret_key", - skipCredentialsValidation: true, - skipMetadataApiCheck: Token.asString(true), - skipRequestingAccountId: true, - }); - } -} - -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/resource-tagging.html.md b/website/docs/cdktf/typescript/guides/resource-tagging.html.md deleted file mode 100644 index d62169f3634..00000000000 --- a/website/docs/cdktf/typescript/guides/resource-tagging.html.md +++ /dev/null @@ -1,330 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Resource Tagging" -description: |- - Managing resource tags with the Terraform AWS Provider. ---- - - - -# Resource Tagging - -Many AWS services implement [resource tags](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) as an essential part of managing components. 
These arbitrary key-value pairs can be utilized for billing, ownership, automation, [access control](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html), and many other use cases. Given that these tags are an important aspect of successfully managing an AWS environment, the Terraform AWS Provider implements additional functionality beyond the typical one-to-one resource lifecycle management for easier and more customized implementations. - --> Not all AWS resources support tagging, which can differ across AWS services and even across resources within the same service. Browse the individual Terraform AWS Provider resource documentation pages for the `tags` argument, to see which support resource tagging. If the AWS API implements tagging support for a resource and it is missing from the Terraform AWS Provider resource, a [feature request](https://github.com/hashicorp/terraform-provider-aws/issues/new?labels=enhancement&template=Feature_Request.md) can be submitted. - - - -- [Getting Started with Resource Tags](#getting-started-with-resource-tags) -- [Ignoring Changes to Specific Tags](#ignoring-changes-to-specific-tags) - - [Ignoring Changes in Individual Resources](#ignoring-changes-in-individual-resources) - - [Ignoring Changes in All Resources](#ignoring-changes-in-all-resources) -- [Managing Individual Resource Tags](#managing-individual-resource-tags) -- [Propagating Tags to All Resources](#propagating-tags-to-all-resources) - - - -## Getting Started with Resource Tags - -Terraform AWS Provider resources that support resource tags implement a consistent argument named `tags` which accepts a key-value map, e.g., - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { Vpc } from "./.gen/providers/aws/vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Vpc(this, "example", { - tags: { - Name: "MyVPC", - }, - }); - } -} - -``` - -The tags for the resource are wholly managed by Terraform except tag keys beginning with `aws:` as these are managed by AWS services and cannot typically be edited or deleted. Any non-AWS tags added to the VPC outside of Terraform will be proposed for removal on the next Terraform execution. Missing tags or those with incorrect values from the Terraform configuration will be proposed for addition or update on the next Terraform execution. Advanced patterns that can adjust these behaviors for special use cases, such as Terraform AWS Provider configurations that affect all resources and the ability to manage resource tags for resources not managed by Terraform, can be found later in this guide. - -For most environments and use cases, this is the typical implementation pattern, whether it be in a standalone Terraform configuration or within a [Terraform Module](https://www.terraform.io/docs/modules/). The Terraform configuration language also enables less repetitive configurations via [variables](https://www.terraform.io/docs/configuration/variables.html), [locals](https://www.terraform.io/docs/configuration/locals.html), or potentially a combination of these, e.g., - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { - VariableType, - TerraformVariable, - Fn, - Token, - TerraformStack, -} from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { Vpc } from "./.gen/providers/aws/vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - /*Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. - You can read more about this at https://cdk.tf/variables*/ - const additionalTags = new TerraformVariable(this, "additional_tags", { - default: [{}], - description: "Additional resource tags", - type: VariableType.map(VariableType.STRING), - }); - new Vpc(this, "example", { - tags: Token.asStringMap( - Fn.merge([ - additionalTags.value, - { - Name: "MyVPC", - }, - ]) - ), - }); - } -} - -``` - -## Ignoring Changes to Specific Tags - -Systems outside of Terraform may automatically interact with the tagging associated with AWS resources. These external systems may be for administrative purposes, such as a Configuration Management Database, or the tagging may be required functionality for those systems, such as Kubernetes. This section shows methods to prevent Terraform from showing differences for specific tags. - -### Ignoring Changes in Individual Resources - -All Terraform resources support the [`lifecycle` configuration block `ignoreChanges` argument](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html#ignore_changes), which can be used to explicitly ignore all tags changes on a resource beyond an initial configuration or individual tag values. - -In this example, the `name` tag will be added to the VPC on resource creation, however any external changes to the `name` tag value or the addition/removal of any tag (including the `name` tag) will be ignored: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { Vpc } from "./.gen/providers/aws/vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Vpc(this, "example", { - lifecycle: { - ignoreChanges: [tags], - }, - tags: { - Name: "MyVPC", - }, - }); - } -} - -``` - -In this example, the `name` and `owner` tags will be added to the VPC on resource creation, however any external changes to the value of the `name` tag will be ignored while any changes to other tags (including the `owner` tag and any additions) will still be proposed: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Vpc } from "./.gen/providers/aws/vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Vpc(this, "example", { - lifecycle: { - ignoreChanges: [name], - }, - tags: { - Name: "MyVPC", - Owner: "Operations", - }, - }); - } -} - -``` - -### Ignoring Changes in All Resources - -As of version 2.60.0 of the Terraform AWS Provider, there is support for ignoring tag changes across all resources under a provider. This simplifies situations where certain tags may be externally applied more globally and enhances functionality beyond `ignoreChanges` to support cases such as tag key prefixes. - -In this example, all resources will ignore any addition of the `lastScanned` tag: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - ignoreTags: [ - { - keys: ["LastScanned"], - }, - ], - }); - } -} - -``` - -In this example, all resources will ignore any addition of tags with the `kubernetesIo/` prefix, such as `kubernetesIo/cluster/name` or `kubernetesIo/role/elb`: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - ignoreTags: [ - { - keyPrefixes: ["kubernetes.io/"], - }, - ], - }); - } -} - -``` - -Any of the `ignoreTags` configurations can be combined as needed. - -The provider ignore tags configuration applies to all Terraform AWS Provider resources under that particular instance (the `default` provider instance in the above cases). If multiple, different Terraform AWS Provider configurations are being used (e.g., [multiple provider instances](https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-instances)), the ignore tags configuration must be added to all applicable provider configurations. - -## Managing Individual Resource Tags - -Certain Terraform AWS Provider services support a special resource for managing an individual tag on a resource without managing the resource itself. One example is the [`awsEc2Tag` resource](/docs/providers/aws/r/ec2_tag.html). 
These resources enable tagging where resources are created outside Terraform such as EC2 Images (AMIs), shared across accounts via Resource Access Manager (RAM), or implicitly created by other means such as EC2 VPN Connections implicitly creating a taggable EC2 Transit Gateway VPN Attachment. - -~> **NOTE:** This is an advanced use case and can cause conflicting management issues when improperly implemented. These individual tag resources should not be combined with the Terraform resource for managing the parent resource. For example, using `awsVpc` and `awsEc2Tag` to manage tags of the same VPC will cause a perpetual difference where the `awsVpc` resource will try to remove the tag being added by the `awsEc2Tag` resource. - --> Not all services supported by the Terraform AWS Provider implement these resources. Browse the Terraform AWS Provider resource documentation pages for a resource with a type ending in `tag`. If there is a use case where this type of resource is missing, a [feature request](https://github.com/hashicorp/terraform-provider-aws/issues/new?labels=enhancement&template=Feature_Request.md) can be submitted. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { Ec2Tag } from "./.gen/providers/aws/ec2-tag"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Ec2Tag(this, "example", { - key: "Owner", - resourceId: Token.asString( - awsVpnConnectionExample.transitGatewayAttachmentId - ), - value: "Operations", - }); - } -} - -``` - -To manage multiple tags for a resource in this scenario, [`forEach`](https://www.terraform.io/docs/configuration/meta-arguments/for_each.html) can be used: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformIterator, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Ec2Tag } from "./.gen/providers/aws/ec2-tag"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - /*In most cases loops should be handled in the programming language context and - not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source - you need to keep this like it is.*/ - const exampleForEachIterator = TerraformIterator.fromList( - Token.asAny("[object Object]") - ); - new Ec2Tag(this, "example", { - key: exampleForEachIterator.key, - resourceId: Token.asString( - awsVpnConnectionExample.transitGatewayAttachmentId - ), - value: exampleForEachIterator.value, - forEach: exampleForEachIterator, - }); - } -} - -``` - -The inline map provided to `forEach` in the example above is used for brevity, but other Terraform configuration language features similar to those noted at the beginning of this guide can be used to make the example more extensible. 
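As a rough sketch of what the `forEach` expansion above produces, assuming a plain tag map as input (the attachment ID and tag values here are placeholders, not values from the original example):

```typescript
// Input map, analogous to the inline map passed to for_each above.
const tags: Record<string, string> = {
  Owner: "Operations",
  Team: "Platform", // hypothetical second tag
};

// One tag resource per key/value pair, all targeting the same resource.
const tagResources = Object.entries(tags).map(([key, value]) => ({
  resourceId: "tgw-attach-12345", // hypothetical attachment ID
  key,
  value,
}));
```

Each entry corresponds to one `awsEc2Tag` instance, keyed by the tag name, which is why adding or removing a map entry adds or removes exactly one tag resource.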
 - -### Propagating Tags to All Resources - -As of version 3.38.0 of the Terraform AWS Provider, the provider itself also supports default tags, enabling provider-level tagging as an alternative to the methods described in the [Getting Started with Resource Tags](#getting-started-with-resource-tags) section above. -This functionality is available for all Terraform AWS Provider resources that currently support `tags`, with the exception of the [`awsAutoscalingGroup`](/docs/providers/aws/r/autoscaling_group.html.markdown) resource. Refactoring the use of [variables](https://www.terraform.io/docs/configuration/variables.html) or [locals](https://www.terraform.io/docs/configuration/locals.html) may look like: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -import { Vpc } from "./.gen/providers/aws/vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - defaultTags: [ - { - tags: { - Environment: "Production", - Owner: "Ops", - }, - }, - ], - }); - new Vpc(this, "example", { - tags: { - Name: "MyVPC", - }, - }); - } -} - -``` - -In this example, the `Environment` and `Owner` tags defined within the provider configuration block will be added to the VPC on resource creation, in addition to the `Name` tag defined within the VPC resource configuration. -To access all the tags applied to the VPC resource, use the read-only attribute `tagsAll`, e.g., `awsVpcExampleTagsAll`. 
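
The merge behavior behind `tagsAll` can be sketched in plain TypeScript: the provider's default tags and the resource's own tags are combined, with resource-level tags taking precedence on duplicate keys (a sketch of the documented behavior, not the provider's implementation):

```typescript
// Sketch of how provider-level default tags combine with resource tags.
// tagsAll is the union of both maps; resource-level tags win on duplicate keys.
type Tags = Record<string, string>;

function tagsAll(defaultTags: Tags, resourceTags: Tags): Tags {
  return { ...defaultTags, ...resourceTags };
}

const merged = tagsAll(
  { Environment: "Production", Owner: "Ops" }, // provider defaultTags
  { Name: "MyVPC" } // VPC resource tags
);
// merged now contains Environment, Owner, and Name.
```

A resource tag with the same key as a default tag (e.g., its own `Owner`) would override the provider-level value in the merged result.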
 - - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/using-aws-with-awscc-provider.html.md b/website/docs/cdktf/typescript/guides/using-aws-with-awscc-provider.html.md deleted file mode 100644 index b065afdf269..00000000000 --- a/website/docs/cdktf/typescript/guides/using-aws-with-awscc-provider.html.md +++ /dev/null @@ -1,194 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Using the Terraform awscc provider with aws provider" -description: |- - Using the AWS and AWSCC providers together. ---- - - - -# Using AWS & AWSCC Provider Together - -~> **NOTE:** The `awscc` provider is currently in technical preview. This means some aspects of its design and implementation are not yet considered stable for production use. We are actively looking for community feedback in order to identify needed improvements. - -The [HashiCorp Terraform AWS Cloud Control Provider](https://registry.terraform.io/providers/hashicorp/awscc/latest) aims to bring Amazon Web Services (AWS) resources to Terraform users faster. The new provider is automatically generated, which means new features and services on AWS can be supported right away. The AWS Cloud Control provider supports hundreds of AWS resources, with more support being added as AWS service teams adopt the Cloud Control API standard. - -For Terraform users managing infrastructure on AWS, we expect the AWSCC provider will be used alongside the existing AWS provider. This guide provides an example of using the two providers together to deploy an AWS Cloud WAN Core Network. 
 - -For more information about the AWSCC provider, please see the provider documentation in the [Terraform Registry](https://registry.terraform.io/providers/hashicorp/awscc/latest). - - - -- [AWS CloudWAN Overview](#aws-cloud-wan) -- [Specifying Multiple Providers](#specifying-multiple-providers) - - [First Look at AWSCC Resources](#first-look-at-awscc-resources) - - [Using AWS and AWSCC Providers Together](#using-aws-and-awscc-providers-together) - - - -## AWS Cloud WAN - -In this guide we will deploy [AWS Cloud WAN](https://aws.amazon.com/cloud-wan/) to demonstrate how both AWS & AWSCC can work together. Cloud WAN is a wide area networking (WAN) service that helps you build, manage, and monitor a unified global network that manages traffic running between resources in your cloud and on-premises environments. - -With Cloud WAN, you define network policies that are used to create a global network that spans multiple locations and networks—eliminating the need to configure and manage different networks individually using different technologies. Your network policies can be used to specify which of your Amazon Virtual Private Clouds (VPCs) and on-premises locations you wish to connect through AWS VPN or third-party software-defined WAN (SD-WAN) products, and the Cloud WAN central dashboard generates a complete view of the network to monitor network health, security, and performance. Cloud WAN automatically creates a global network across AWS Regions using Border Gateway Protocol (BGP), so you can easily exchange routes around the world. 
 - -For more information on AWS Cloud WAN see [the documentation](https://docs.aws.amazon.com/vpc/latest/cloudwan/what-is-cloudwan.html). - -## Specifying Multiple Providers - -Terraform can use many providers at once, as long as they are specified in your `terraform` configuration block: - -```terraform -terraform { - required_version = ">= 1.0.7" - required_providers { - aws = { - source = "hashicorp/aws" - version = ">= 4.9.0" - } - awscc = { - source = "hashicorp/awscc" - version = ">= 0.25.0" - } - } -} -``` - -The code snippet above instructs Terraform to download two providers as plugins for the current root module: the AWS and AWSCC providers. You can tell which provider is in use by looking at the resource or data source name prefix. Resources that start with `aws` use the AWS provider, while resources that start with `awscc` use the AWSCC provider. - -### First look at AWSCC resources - -Let's start by building our [global network](https://aws.amazon.com/about-aws/global-infrastructure/global_network/) which will house our core network. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Fn, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { NetworkmanagerGlobalNetwork } from "./.gen/providers/awscc/networkmanager-global-network"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - /*The following providers are missing schema information and might need manual adjustments to synthesize correctly: awscc. 
 - For a more precise conversion please use the --provider flag in convert.*/ - const terraformTag = [ - { - key: "terraform", - value: "true", - }, - ]; - new NetworkmanagerGlobalNetwork(this, "main", { - description: "My Global Network", - tags: Fn.concat([ - terraformTag, - [ - { - key: "Name", - value: "My Global Network", - }, - ], - ]), - }); - } -} - -``` - -Above, we define an `awsccNetworkmanagerGlobalNetwork` with 2 tags and a description. AWSCC resources use the [standard AWS tag format](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html), which is expressed in HCL as a list of maps with 2 keys. We want to reuse the `terraform = true` tag, so we define it as a `local` and then use [concat](https://www.terraform.io/language/functions/concat) to join the lists of tags together. - -### Using AWS and AWSCC providers together - -Next we will create a [core network](https://docs.aws.amazon.com/vpc/latest/cloudwan/cloudwan-core-network-policy.html) using an AWSCC resource `awsccNetworkmanagerCoreNetwork` and an AWS data source `dataAwsNetworkmanagerCoreNetworkPolicyDocument`, which allows users to write HCL to generate the JSON policy used as the [core policy network](https://docs.aws.amazon.com/vpc/latest/cloudwan/cloudwan-policies-json.html). - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, Fn, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
 - */ -import { DataAwsNetworkmanagerCoreNetworkPolicyDocument } from "./.gen/providers/aws/data-aws-networkmanager-core-network-policy-document"; -import { NetworkmanagerCoreNetwork } from "./.gen/providers/awscc/networkmanager-core-network"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - /*The following providers are missing schema information and might need manual adjustments to synthesize correctly: awscc. - For a more precise conversion please use the --provider flag in convert.*/ - const main = new DataAwsNetworkmanagerCoreNetworkPolicyDocument( - this, - "main", - { - attachmentPolicies: [ - { - action: { - associationMethod: "constant", - segment: "shared", - }, - conditionLogic: "or", - conditions: [ - { - key: "segment", - operator: "equals", - type: "tag-value", - value: "shared", - }, - ], - ruleNumber: 1, - }, - ], - coreNetworkConfiguration: [ - { - asnRanges: ["64512-64555"], - edgeLocations: [ - { - asn: Token.asString(64512), - location: "us-east-1", - }, - ], - vpnEcmpSupport: false, - }, - ], - segmentActions: [ - { - action: "share", - mode: "attachment-route", - segment: "shared", - shareWith: ["*"], - }, - ], - segments: [ - { - description: "SegmentForSharedServices", - name: "shared", - requireAttachmentAcceptance: true, - }, - ], - } - ); - const awsccNetworkmanagerCoreNetworkMain = new NetworkmanagerCoreNetwork( - this, - "main_1", - { - description: "My Core Network", - global_network_id: awsccNetworkmanagerGlobalNetworkMain.id, - policy_document: Fn.jsonencode( - Fn.jsondecode(Token.asString(main.json)) - ), - tags: terraformTag, - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsccNetworkmanagerCoreNetworkMain.overrideLogicalId("main"); - } -} - -``` - -Thanks to Terraform's plugin design, the providers work together seamlessly! 
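
The `Fn.jsonencode(Fn.jsondecode(...))` round-trip applied to the policy document above serves to normalize the JSON so that purely cosmetic differences (whitespace, indentation) do not register as changes. The same idea can be shown in plain TypeScript (an illustration of the normalization, not the provider's implementation; the policy fields are made up):

```typescript
// Round-tripping a JSON document through parse/stringify normalizes its
// whitespace, so two cosmetically different documents compare equal.
function normalizeJson(doc: string): string {
  return JSON.stringify(JSON.parse(doc));
}

const pretty = `{
  "version": "2021.12",
  "segments": [ { "name": "shared" } ]
}`;
const compact = '{"version":"2021.12","segments":[{"name":"shared"}]}';

console.log(normalizeJson(pretty) === normalizeJson(compact)); // true
```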
- - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/version-2-upgrade.html.md b/website/docs/cdktf/typescript/guides/version-2-upgrade.html.md deleted file mode 100644 index a46c8b36045..00000000000 --- a/website/docs/cdktf/typescript/guides/version-2-upgrade.html.md +++ /dev/null @@ -1,1563 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 2 Upgrade Guide" -description: |- - Terraform AWS Provider Version 2 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 2 Upgrade Guide - -Version 2.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. This guide is intended to help with that process and focuses only on changes from version 1.60.0 to version 2.0.0. - -Most of the changes outlined in this guide have been previously marked as deprecated in the Terraform plan/apply output throughout previous provider releases. These changes, such as deprecation notices, can always be found in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md). 
- -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Provider: Configuration](#provider-configuration) -- [Data Source: aws_ami](#data-source-aws_ami) -- [Data Source: aws_ami_ids](#data-source-aws_ami_ids) -- [Data Source: aws_iam_role](#data-source-aws_iam_role) -- [Data Source: aws_kms_secret](#data-source-aws_kms_secret) -- [Data Source: aws_lambda_function](#data-source-aws_lambda_function) -- [Data Source: aws_region](#data-source-aws_region) -- [Resource: aws_api_gateway_api_key](#resource-aws_api_gateway_api_key) -- [Resource: aws_api_gateway_integration](#resource-aws_api_gateway_integration) -- [Resource: aws_api_gateway_integration_response](#resource-aws_api_gateway_integration_response) -- [Resource: aws_api_gateway_method](#resource-aws_api_gateway_method) -- [Resource: aws_api_gateway_method_response](#resource-aws_api_gateway_method_response) -- [Resource: aws_appautoscaling_policy](#resource-aws_appautoscaling_policy) -- [Resource: aws_autoscaling_policy](#resource-aws_autoscaling_policy) -- [Resource: aws_batch_compute_environment](#resource-aws_batch_compute_environment) -- [Resource: aws_cloudfront_distribution](#resource-aws_cloudfront_distribution) -- [Resource: aws_cognito_user_pool](#resource-aws_cognito_user_pool) -- [Resource: aws_dx_lag](#resource-aws_dx_lag) -- [Resource: aws_ecs_service](#resource-aws_ecs_service) -- [Resource: aws_efs_file_system](#resource-aws_efs_file_system) -- [Resource: aws_elasticache_cluster](#resource-aws_elasticache_cluster) -- [Resource: aws_iam_user_login_profile](#resource-aws_iam_user_login_profile) -- [Resource: aws_instance](#resource-aws_instance) -- [Resource: aws_lambda_function](#resource-aws_lambda_function) -- [Resource: aws_lambda_layer_version](#resource-aws_lambda_layer_version) -- [Resource: aws_network_acl](#resource-aws_network_acl) -- [Resource: aws_redshift_cluster](#resource-aws_redshift_cluster) -- [Resource: 
aws_route_table](#resource-aws_route_table) -- [Resource: aws_route53_record](#resource-aws_route53_record) -- [Resource: aws_route53_zone](#resource-aws_route53_zone) -- [Resource: aws_wafregional_byte_match_set](#resource-aws_wafregional_byte_match_set) - - - -## Provider Version Configuration - --> Before upgrading to version 2.0.0 or later, it is recommended to upgrade to the most recent 1.X version of the provider (version 1.60.0) and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html) without unexpected changes or deprecation notices. - -We recommend using [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init`](https://www.terraform.io/docs/commands/init.html) to download the new version. - -Update to latest 1.X version: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -Update to latest 2.X version: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
 - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -## Provider: Configuration - -### skip_requesting_account_id Argument Now Required to Skip Account ID Lookup Errors - -If the provider is unable to determine the AWS account ID from a provider assume role configuration or the STS GetCallerIdentity call used to verify the credentials (if `skip_credentials_validation = false`), it will attempt to look up the AWS account ID via EC2 metadata, IAM GetUser, IAM ListRoles, and STS GetCallerIdentity. Previously, the provider would silently allow the failure of all the above methods. - -The provider will now return an error to ensure operators understand the implications of the missing AWS account ID in the provider. - -If necessary, the AWS account ID lookup logic can be skipped via: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - skipRequestingAccountId: true, - }); - } -} - -``` - -## Data Source: aws_ami - -### owners Argument Now Required - -The `owners` argument is now required. Specifying `ownerId` or `ownerAlias` under `filter` does not satisfy this requirement. - -## Data Source: aws_ami_ids - -### owners Argument Now Required - -The `owners` argument is now required. Specifying `ownerId` or `ownerAlias` under `filter` does not satisfy this requirement. 
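
A minimal `awsAmi` data source lookup that satisfies the new `owners` requirement might look like the following sketch. This is a non-runnable configuration fragment: the import path follows the generated-provider convention used throughout this guide, and the filter values are illustrative.

```typescript
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
// Generated by `cdktf get`; path follows the convention used in this guide.
import { DataAwsAmi } from "./.gen/providers/aws/data-aws-ami";

class AmiLookupStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new DataAwsAmi(this, "example", {
      mostRecent: true,
      // owners is now required; an owner filter alone no longer satisfies it.
      owners: ["self"],
      filter: [
        {
          name: "name",
          values: ["my-ami-*"], // illustrative AMI name pattern
        },
      ],
    });
  }
}
```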
 - -## Data Source: aws_iam_role - -### assume_role_policy_document Attribute Removal - -Switch your attribute references to the `assumeRolePolicy` attribute instead. - -### role_id Attribute Removal - -Switch your attribute references to the `uniqueId` attribute instead. - -### role_name Argument Removal - -Switch your Terraform configuration to the `name` argument instead. - -## Data Source: aws_kms_secret - -### Data Source Removal and Migrating to aws_kms_secrets Data Source - -The implementation of the `awsKmsSecret` data source, prior to Terraform AWS provider version 2.0.0, used dynamic attribute behavior which is not supported with Terraform 0.12 and beyond (full details available in [this GitHub issue](https://github.com/hashicorp/terraform-provider-aws/issues/5144)). - -Terraform configuration migration steps: - -* Change the data source type from `awsKmsSecret` to `awsKmsSecrets` -* Change any attribute reference (e.g., `"${dataAwsKmsSecretExampleAttribute}"`) from `attribute` to `plaintext["attribute"]` - -As an example, let's take the sample configuration below and migrate it. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
 - */ -import { DataAwsKmsSecret } from "./.gen/providers/aws/data-aws-kms-secret"; -import { RdsCluster } from "./.gen/providers/aws/rds-cluster"; -interface MyConfig { - engine: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new DataAwsKmsSecret(this, "example", { - secret: [ - { - name: "master_password", - payload: "AQEC...", - }, - { - name: "master_username", - payload: "AQEC...", - }, - ], - }); - const awsRdsClusterExample = new RdsCluster(this, "example_1", { - masterPassword: Token.asString(example.masterPassword), - masterUsername: Token.asString(example.masterUsername), - engine: config.engine, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsRdsClusterExample.overrideLogicalId("example"); - } -} - -``` - -Notice that the `awsKmsSecret` data source previously took the two `secret` configuration block `name` arguments and generated them as attribute names (`masterPassword` and `masterUsername` in this case). To remove the incompatible behavior, this updated version of the data source provides the decrypted value of each of those `secret` configuration block `name` arguments within a map attribute named `plaintext`. - -Updating the sample configuration from above: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Fn, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
 - */ -import { DataAwsKmsSecrets } from "./.gen/providers/aws/data-aws-kms-secrets"; -import { RdsCluster } from "./.gen/providers/aws/rds-cluster"; -interface MyConfig { - engine: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new DataAwsKmsSecrets(this, "example", { - secret: [ - { - name: "master_password", - payload: "AQEC...", - }, - { - name: "master_username", - payload: "AQEC...", - }, - ], - }); - const awsRdsClusterExample = new RdsCluster(this, "example_1", { - masterPassword: Token.asString( - Fn.lookupNested(example.plaintext, ['"master_password"']) - ), - masterUsername: Token.asString( - Fn.lookupNested(example.plaintext, ['"master_username"']) - ), - engine: config.engine, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsRdsClusterExample.overrideLogicalId("example"); - } -} - -``` - -## Data Source: aws_lambda_function - -### arn and qualified_arn Attribute Behavior Changes - -The `arn` attribute now always returns the unqualified (no `:qualifier` or `:version` suffix) ARN value and the `qualifiedArn` attribute now always returns the qualified (includes `:qualifier` or `:version` suffix) ARN value. Previously by default, the `arn` attribute included the `:$latest` suffix when not setting the optional `qualifier` argument, which was not compatible with many other resources. To restore the previous default behavior, set the `qualifier` argument to `$latest` and reference the `qualifiedArn` attribute. - -## Data Source: aws_region - -### current Argument Removal - -Simply remove `current = true` from your Terraform configuration. The data source defaults to the current provider region if no other filtering is enabled. 
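
The `arn`/`qualifiedArn` distinction for the `awsLambdaFunction` data source above can be illustrated in plain TypeScript: a qualified Lambda function ARN carries a trailing `:qualifier` (or `:version`) segment that the unqualified value omits (a sketch; the ARN below is a made-up example):

```typescript
// A qualified Lambda function ARN has 8 colon-separated fields, the last
// being the version or alias; an unqualified ARN has only 7.
function unqualify(arn: string): string {
  const parts = arn.split(":");
  // arn:aws:lambda:region:account-id:function:name[:qualifier]
  return parts.length > 7 ? parts.slice(0, 7).join(":") : arn;
}

const qualifiedArn =
  "arn:aws:lambda:us-east-1:123456789012:function:example:$LATEST";
console.log(unqualify(qualifiedArn));
// arn:aws:lambda:us-east-1:123456789012:function:example
```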
- -## Resource: aws_api_gateway_api_key - -### stage_key Argument Removal - -Since the API Gateway usage plans feature was launched on August 11, 2016, usage plans are now required to associate an API key with an API stage. To migrate your Terraform configuration, the AWS provider implements support for usage plans with the following resources: - -* [`awsApiGatewayUsagePlan`](/docs/providers/aws/r/api_gateway_usage_plan.html) -* [`awsApiGatewayUsagePlanKey`](/docs/providers/aws/r/api_gateway_usage_plan_key.html) - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ApiGatewayApiKey } from "./.gen/providers/aws/api-gateway-api-key"; -import { ApiGatewayDeployment } from "./.gen/providers/aws/api-gateway-deployment"; -import { ApiGatewayRestApi } from "./.gen/providers/aws/api-gateway-rest-api"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new ApiGatewayRestApi(this, "example", { - name: "example", - }); - const awsApiGatewayDeploymentExample = new ApiGatewayDeployment( - this, - "example_1", - { - restApiId: example.id, - stageName: "example", - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsApiGatewayDeploymentExample.overrideLogicalId("example"); - const awsApiGatewayApiKeyExample = new ApiGatewayApiKey(this, "example_2", { - name: "example", - stage_key: [ - { - rest_api_id: example.id, - stage_name: awsApiGatewayDeploymentExample.stageName, - }, - ], - }); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsApiGatewayApiKeyExample.overrideLogicalId("example"); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ApiGatewayApiKey } from "./.gen/providers/aws/api-gateway-api-key"; -import { ApiGatewayDeployment } from "./.gen/providers/aws/api-gateway-deployment"; -import { ApiGatewayRestApi } from "./.gen/providers/aws/api-gateway-rest-api"; -import { ApiGatewayUsagePlan } from "./.gen/providers/aws/api-gateway-usage-plan"; -import { ApiGatewayUsagePlanKey } from "./.gen/providers/aws/api-gateway-usage-plan-key"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new ApiGatewayApiKey(this, "example", { - name: "example", - }); - const awsApiGatewayRestApiExample = new ApiGatewayRestApi( - this, - "example_1", - { - name: "example", - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsApiGatewayRestApiExample.overrideLogicalId("example"); - const awsApiGatewayDeploymentExample = new ApiGatewayDeployment( - this, - "example_2", - { - restApiId: Token.asString(awsApiGatewayRestApiExample.id), - stageName: "example", - } - ); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsApiGatewayDeploymentExample.overrideLogicalId("example"); - const awsApiGatewayUsagePlanExample = new ApiGatewayUsagePlan( - this, - "example_3", - { - apiStages: [ - { - apiId: Token.asString(awsApiGatewayRestApiExample.id), - stage: Token.asString(awsApiGatewayDeploymentExample.stageName), - }, - ], - name: "example", - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsApiGatewayUsagePlanExample.overrideLogicalId("example"); - const awsApiGatewayUsagePlanKeyExample = new ApiGatewayUsagePlanKey( - this, - "example_4", - { - keyId: example.id, - keyType: "API_KEY", - usagePlanId: Token.asString(awsApiGatewayUsagePlanExample.id), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsApiGatewayUsagePlanKeyExample.overrideLogicalId("example"); - } -} - -``` - -## Resource: aws_api_gateway_integration - -### request_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `requestParameters` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { ApiGatewayIntegration } from "./.gen/providers/aws/api-gateway-integration"; -interface MyConfig { - httpMethod: any; - resourceId: any; - restApiId: any; - type: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayIntegration(this, "example", { - request_parameters_in_json: - '{\n "integration.request.header.X-Authorization": "\'static\'"\n}\n\n', - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - type: config.type, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ApiGatewayIntegration } from "./.gen/providers/aws/api-gateway-integration"; -interface MyConfig { - httpMethod: any; - resourceId: any; - restApiId: any; - type: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayIntegration(this, "example", { - requestParameters: { - "integration.request.header.X-Authorization": "'static'", - }, - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - type: config.type, - }); - } -} - -``` - -## Resource: aws_api_gateway_integration_response - -### response_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `responseParameters` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ApiGatewayIntegrationResponse } from "./.gen/providers/aws/api-gateway-integration-response"; -interface MyConfig { - httpMethod: any; - resourceId: any; - restApiId: any; - statusCode: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayIntegrationResponse(this, "example", { - response_parameters_in_json: - '{\n "method.response.header.Content-Type": "integration.response.body.type"\n}\n\n', - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - statusCode: config.statusCode, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { ApiGatewayIntegrationResponse } from "./.gen/providers/aws/api-gateway-integration-response"; -interface MyConfig { - httpMethod: any; - resourceId: any; - restApiId: any; - statusCode: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayIntegrationResponse(this, "example", { - responseParameters: { - "method.response.header.Content-Type": "integration.response.body.type", - }, - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - statusCode: config.statusCode, - }); - } -} - -``` - -## Resource: aws_api_gateway_method - -### request_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `requestParameters` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ApiGatewayMethod } from "./.gen/providers/aws/api-gateway-method"; -interface MyConfig { - authorization: any; - httpMethod: any; - resourceId: any; - restApiId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayMethod(this, "example", { - request_parameters_in_json: - '{\n "method.request.header.Content-Type": false,\n "method.request.querystring.page": true\n}\n\n', - authorization: config.authorization, - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ApiGatewayMethod } from "./.gen/providers/aws/api-gateway-method"; -interface MyConfig { - authorization: any; - httpMethod: any; - resourceId: any; - restApiId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayMethod(this, "example", { - requestParameters: { - "method.request.header.Content-Type": false, - "method.request.querystring.page": true, - }, - authorization: config.authorization, - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - }); - } -} - -``` - -## Resource: aws_api_gateway_method_response - -### response_parameters_in_json Argument Removal - -Switch your Terraform configuration to the `responseParameters` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { ApiGatewayMethodResponse } from "./.gen/providers/aws/api-gateway-method-response"; -interface MyConfig { - httpMethod: any; - resourceId: any; - restApiId: any; - statusCode: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayMethodResponse(this, "example", { - response_parameters_in_json: - '{\n "method.response.header.Content-Type": true\n}\n\n', - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - statusCode: config.statusCode, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ApiGatewayMethodResponse } from "./.gen/providers/aws/api-gateway-method-response"; -interface MyConfig { - httpMethod: any; - resourceId: any; - restApiId: any; - statusCode: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ApiGatewayMethodResponse(this, "example", { - responseParameters: { - "method.response.header.Content-Type": true, - }, - httpMethod: config.httpMethod, - resourceId: config.resourceId, - restApiId: config.restApiId, - statusCode: config.statusCode, - }); - } -} - -``` - -## Resource: aws_appautoscaling_policy - -### Argument Removals - -The following arguments have been moved into a nested argument named `stepScalingPolicyConfiguration`: - -* `adjustmentType` -* `cooldown` -* `metricAggregationType` -* `minAdjustmentMagnitude` -* `stepAdjustment` - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Op, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AppautoscalingPolicy } from "./.gen/providers/aws/appautoscaling-policy"; -interface MyConfig { - name: any; - resourceId: any; - scalableDimension: any; - serviceNamespace: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new AppautoscalingPolicy(this, "example", { - adjustment_type: "ChangeInCapacity", - cooldown: 60, - metric_aggregation_type: "Maximum", - step_adjustment: [ - { - metric_interval_upper_bound: 0, - scaling_adjustment: Op.negate(1), - }, - ], - name: config.name, - resourceId: config.resourceId, - scalableDimension: config.scalableDimension, - serviceNamespace: config.serviceNamespace, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, Op, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AppautoscalingPolicy } from "./.gen/providers/aws/appautoscaling-policy"; -interface MyConfig { - name: any; - resourceId: any; - scalableDimension: any; - serviceNamespace: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new AppautoscalingPolicy(this, "example", { - stepScalingPolicyConfiguration: { - adjustmentType: "ChangeInCapacity", - cooldown: 60, - metricAggregationType: "Maximum", - stepAdjustment: [ - { - metricIntervalUpperBound: Token.asString(0), - scalingAdjustment: Token.asNumber(Op.negate(1)), - }, - ], - }, - name: config.name, - resourceId: config.resourceId, - scalableDimension: config.scalableDimension, - serviceNamespace: config.serviceNamespace, - }); - } -} - -``` - -## Resource: aws_autoscaling_policy - -### min_adjustment_step Argument Removal - -Switch your Terraform configuration to the `minAdjustmentMagnitude` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AutoscalingPolicy } from "./.gen/providers/aws/autoscaling-policy"; -interface MyConfig { - autoscalingGroupName: any; - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new AutoscalingPolicy(this, "example", { - min_adjustment_step: 2, - autoscalingGroupName: config.autoscalingGroupName, - name: config.name, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AutoscalingPolicy } from "./.gen/providers/aws/autoscaling-policy"; -interface MyConfig { - autoscalingGroupName: any; - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new AutoscalingPolicy(this, "example", { - minAdjustmentMagnitude: 2, - autoscalingGroupName: config.autoscalingGroupName, - name: config.name, - }); - } -} - -``` - -## Resource: aws_batch_compute_environment - -### ecc_cluster_arn Attribute Removal - -Switch your attribute references to the `ecsClusterArn` attribute instead. - -## Resource: aws_cloudfront_distribution - -### cache_behavior Argument Removal - -Switch your Terraform configuration to the `orderedCacheBehavior` argument instead. It behaves similarly to the previous `cacheBehavior` argument; however, the ordering of the configurations in Terraform is now reflected in the distribution, where previously it was indeterminate. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { CloudfrontDistribution } from "./.gen/providers/aws/cloudfront-distribution"; -interface MyConfig { - defaultCacheBehavior: any; - enabled: any; - origin: any; - restrictions: any; - viewerCertificate: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new CloudfrontDistribution(this, "example", { - cache_behavior: [{}, {}], - defaultCacheBehavior: config.defaultCacheBehavior, - enabled: config.enabled, - origin: config.origin, - restrictions: config.restrictions, - viewerCertificate: config.viewerCertificate, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { CloudfrontDistribution } from "./.gen/providers/aws/cloudfront-distribution"; -interface MyConfig { - allowedMethods: any; - cachedMethods: any; - pathPattern: any; - targetOriginId: any; - viewerProtocolPolicy: any; - allowedMethods1: any; - cachedMethods1: any; - pathPattern1: any; - targetOriginId1: any; - viewerProtocolPolicy1: any; - defaultCacheBehavior: any; - enabled: any; - origin: any; - restrictions: any; - viewerCertificate: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new CloudfrontDistribution(this, "example", { - orderedCacheBehavior: [ - { - allowedMethods: config.allowedMethods, - cachedMethods: config.cachedMethods, - pathPattern: config.pathPattern, - targetOriginId: config.targetOriginId, - viewerProtocolPolicy: config.viewerProtocolPolicy, - }, - { - allowedMethods: config.allowedMethods1, - cachedMethods: config.cachedMethods1, - pathPattern: 
config.pathPattern1, - targetOriginId: config.targetOriginId1, - viewerProtocolPolicy: config.viewerProtocolPolicy1, - }, - ], - defaultCacheBehavior: config.defaultCacheBehavior, - enabled: config.enabled, - origin: config.origin, - restrictions: config.restrictions, - viewerCertificate: config.viewerCertificate, - }); - } -} - -``` - -## Resource: aws_cognito_user_pool - -### email_verification_subject Argument Now Conflicts With verification_message_template Configuration Block email_subject Argument - -Choose one argument or the other. These arguments update the same underlying information in Cognito and the selection is indeterminate if differing values are provided. - -### email_verification_message Argument Now Conflicts With verification_message_template Configuration Block email_message Argument - -Choose one argument or the other. These arguments update the same underlying information in Cognito and the selection is indeterminate if differing values are provided. - -### sms_verification_message Argument Now Conflicts With verification_message_template Configuration Block sms_message Argument - -Choose one argument or the other. These arguments update the same underlying information in Cognito and the selection is indeterminate if differing values are provided. - -## Resource: aws_dx_lag - -### number_of_connections Argument Removal - -Default connections have been removed as part of LAG creation. To migrate your Terraform configuration, the AWS provider implements the following resources: - -* [`awsDxConnection`](/docs/providers/aws/r/dx_connection.html) -* [`awsDxConnectionAssociation`](/docs/providers/aws/r/dx_connection_association.html) - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { DxLag } from "./.gen/providers/aws/dx-lag"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DxLag(this, "example", { - connectionsBandwidth: "1Gbps", - location: "EqSe2-EQ", - name: "example", - number_of_connections: 1, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DxConnection } from "./.gen/providers/aws/dx-connection"; -import { DxConnectionAssociation } from "./.gen/providers/aws/dx-connection-association"; -import { DxLag } from "./.gen/providers/aws/dx-lag"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new DxConnection(this, "example", { - bandwidth: "1Gbps", - location: "EqSe2-EQ", - name: "example", - }); - const awsDxLagExample = new DxLag(this, "example_1", { - connectionsBandwidth: "1Gbps", - location: "EqSe2-EQ", - name: "example", - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsDxLagExample.overrideLogicalId("example"); - const awsDxConnectionAssociationExample = new DxConnectionAssociation( - this, - "example_2", - { - connectionId: example.id, - lagId: Token.asString(awsDxLagExample.id), - } - ); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ -awsDxConnectionAssociationExample.overrideLogicalId("example"); - } -} - -``` - -## Resource: aws_ecs_service - -### placement_strategy Argument Removal - -Switch your Terraform configuration to the `orderedPlacementStrategy` argument instead. It behaves similarly to the previous `placementStrategy` argument; however, the ordering of the configurations in Terraform is now reflected in the service, where previously it was indeterminate. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { EcsService } from "./.gen/providers/aws/ecs-service"; -interface MyConfig { - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EcsService(this, "example", { - placement_strategy: [{}, {}], - name: config.name, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { EcsService } from "./.gen/providers/aws/ecs-service"; -interface MyConfig { - type: any; - type1: any; - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EcsService(this, "example", { - orderedPlacementStrategy: [ - { - type: config.type, - }, - { - type: config.type1, - }, - ], - name: config.name, - }); - } -} - -``` - -## Resource: aws_efs_file_system - -### reference_name Argument Removal - -Switch your Terraform configuration to the `creationToken` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { EfsFileSystem } from "./.gen/providers/aws/efs-file-system"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new EfsFileSystem(this, "example", { - reference_name: "example", - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { EfsFileSystem } from "./.gen/providers/aws/efs-file-system"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new EfsFileSystem(this, "example", { - creationToken: "example", - }); - } -} - -``` - -## Resource: aws_elasticache_cluster - -### availability_zones Argument Removal - -Switch your Terraform configuration to the `preferredAvailabilityZones` argument instead. The argument is still optional and the API will continue to automatically choose Availability Zones for nodes if not specified. The new argument will also continue to match the API's required behavior that the length of the list must be the same as `numCacheNodes`. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { ElasticacheCluster } from "./.gen/providers/aws/elasticache-cluster"; -interface MyConfig { - clusterId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ElasticacheCluster(this, "example", { - preferredAvailabilityZones: ["us-west-2a", "us-west-2b"], - clusterId: config.clusterId, - }); - } -} - -``` - -## Resource: aws_iam_user_login_profile - -### Import Now Required For Existing Infrastructure - -When attempting to bring existing IAM User Login Profiles under Terraform management, `terraform import` is now required. See the [`awsIamUserLoginProfile` resource documentation](https://www.terraform.io/docs/providers/aws/r/iam_user_login_profile.html) for more information. - -## Resource: aws_instance - -### network_interface_id Attribute Removal - -Switch your attribute references to the `primaryNetworkInterfaceId` attribute instead. - -## Resource: aws_lambda_function - -### reserved_concurrent_executions Argument Behavior Change - -Setting `reservedConcurrentExecutions` to `0` will now disable Lambda Function invocations, causing downtime for the Lambda Function. - -Previously, `reservedConcurrentExecutions` accepted `0` and below for unreserved concurrency, which means it was not previously possible to disable invocations. The argument now differentiates between a new value for unreserved concurrency (`-1`) and disabling Lambda invocations (`0`). If previously configuring this value to `0` for unreserved concurrency, update the configured value to `-1` or the resource will disable Lambda Function invocations on update. If previously unconfigured, the argument does not require any changes. - -See the [Lambda User Guide](https://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html) for more information about concurrency. 
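 -The old-to-new value mapping can be sketched as a small helper. This is illustrative only — `migrateReservedConcurrency` is not part of the provider or this guide — but it captures the rule: any value that previously meant "unreserved" must become `-1`, while explicit reservations carry over unchanged. - -```typescript -// Illustrative mapping from the old semantics, where any value of 0 or below -// meant "unreserved concurrency", to the new semantics, where -1 means -// unreserved and 0 explicitly disables Lambda Function invocations. -function migrateReservedConcurrency(oldValue: number): number { -  // A 0 (or negative value) left in the configuration would now disable -  // invocations on update, so it must be rewritten to -1. -  return oldValue <= 0 ? -1 : oldValue; -} - -console.log(migrateReservedConcurrency(0)); // -1: was unreserved, stays unreserved -console.log(migrateReservedConcurrency(100)); // 100: explicit reservations are unchanged -``` -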
- -## Resource: aws_lambda_layer_version - -### arn and layer_arn Attribute Value Swap - -Switch your `arn` attribute references to the `layerArn` attribute instead and vice-versa. - -## Resource: aws_network_acl - -### subnet_id Argument Removal - -Switch your Terraform configuration to the `subnetIds` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { NetworkAcl } from "./.gen/providers/aws/network-acl"; -interface MyConfig { - vpcId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new NetworkAcl(this, "example", { - subnet_id: "subnet-12345678", - vpcId: config.vpcId, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { NetworkAcl } from "./.gen/providers/aws/network-acl"; -interface MyConfig { - vpcId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new NetworkAcl(this, "example", { - subnetIds: ["subnet-12345678"], - vpcId: config.vpcId, - }); - } -} - -``` - -## Resource: aws_redshift_cluster - -### Argument Removals - -The following arguments have been moved into a nested argument named `logging`: - -* `bucketName` -* `enableLogging` (also renamed to just `enable`) -* `s3KeyPrefix` - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { RedshiftCluster } from "./.gen/providers/aws/redshift-cluster"; -interface MyConfig { - clusterIdentifier: any; - nodeType: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new RedshiftCluster(this, "example", { - bucket_name: "example", - enable_logging: true, - s3_key_prefix: "example", - clusterIdentifier: config.clusterIdentifier, - nodeType: config.nodeType, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { RedshiftCluster } from "./.gen/providers/aws/redshift-cluster"; -interface MyConfig { - clusterIdentifier: any; - nodeType: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new RedshiftCluster(this, "example", { - logging: { - bucketName: "example", - enable: true, - s3KeyPrefix: "example", - }, - clusterIdentifier: config.clusterIdentifier, - nodeType: config.nodeType, - }); - } -} - -``` - -## Resource: aws_route_table - -### Import Change - -Previously, importing this resource resulted in an `awsRoute` resource for each route, in -addition to the `awsRouteTable`, in the Terraform state. Support for importing `awsRoute` resources has been added and importing this resource only adds the `awsRouteTable` -resource, with in-line routes, to the state. - -## Resource: aws_route53_record - -### allow_overwrite Default Value Change - -The resource now requires existing Route 53 Records to be imported into the Terraform state for management unless the `allowOverwrite` argument is enabled. - -For example, if the `wwwExampleCom` Route 53 Record in the `exampleCom` Route 53 Hosted Zone existed previously and this new Terraform configuration was introduced: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { Route53Record } from "./.gen/providers/aws/route53-record"; -interface MyConfig { - type: any; - zoneId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new Route53Record(this, "www", { - name: "www.example.com", - type: config.type, - zoneId: config.zoneId, - }); - } -} - -``` - -During resource creation in version 1.X and prior, it would silently perform an `upsert` changeset to the existing Route 53 Record and not report back an error. In version 2.0.0 of the Terraform AWS Provider, the resource now performs a `create` changeset, which will error for existing Route 53 Records. - -The `allowOverwrite` argument provides a workaround to keep the old behavior, but most existing workflows should be updated to perform a `terraform import` command like the following instead: - -```console -$ terraform import aws_route53_record.www ZONEID_www.example.com_TYPE -``` - -More information can be found in the [`awsRoute53Record` resource documentation](https://www.terraform.io/docs/providers/aws/r/route53_record.html#import). - -## Resource: aws_route53_zone - -### vpc_id and vpc_region Argument Removal - -Switch your Terraform configuration to `vpc` configuration block(s) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { Route53Zone } from "./.gen/providers/aws/route53-zone"; -interface MyConfig { - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new Route53Zone(this, "example", { - vpc_id: "...", - name: config.name, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Route53Zone } from "./.gen/providers/aws/route53-zone"; -interface MyConfig { - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new Route53Zone(this, "example", { - vpc: [ - { - vpcId: "...", - }, - ], - name: config.name, - }); - } -} - -``` - -## Resource: aws_wafregional_byte_match_set - -### byte_match_tuple Argument Removal - -Switch your Terraform configuration to the `byteMatchTuples` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { WafregionalByteMatchSet } from "./.gen/providers/aws/wafregional-byte-match-set"; -interface MyConfig { - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new WafregionalByteMatchSet(this, "example", { - byte_match_tuple: [{}, {}], - name: config.name, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { WafregionalByteMatchSet } from "./.gen/providers/aws/wafregional-byte-match-set"; -interface MyConfig { - fieldToMatch: any; - positionalConstraint: any; - textTransformation: any; - fieldToMatch1: any; - positionalConstraint1: any; - textTransformation1: any; - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new WafregionalByteMatchSet(this, "example", { - byteMatchTuples: [ - { - fieldToMatch: config.fieldToMatch, - positionalConstraint: config.positionalConstraint, - textTransformation: config.textTransformation, - }, - { - fieldToMatch: config.fieldToMatch1, - positionalConstraint: config.positionalConstraint1, - textTransformation: config.textTransformation1, - }, - ], - name: config.name, - }); - } -} - -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/version-3-upgrade.html.md b/website/docs/cdktf/typescript/guides/version-3-upgrade.html.md deleted file mode 100644 index 9ce8e4f4e97..00000000000 --- a/website/docs/cdktf/typescript/guides/version-3-upgrade.html.md +++ /dev/null @@ -1,2545 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 3 
Upgrade Guide" -description: |- - Terraform AWS Provider Version 3 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 3 Upgrade Guide - -Version 3.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. This guide is intended to help with that process and focuses only on changes from version 2.X to version 3.0.0. See the [Version 2 Upgrade Guide](/docs/providers/aws/guides/version-2-upgrade.html) for information about upgrading from 1.X to version 2.0.0. - -Most of the changes outlined in this guide have been previously marked as deprecated in the Terraform plan/apply output throughout previous provider releases. These changes, such as deprecation notices, can always be found in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md). - -~> **NOTE:** Version 3.0.0 and later of the AWS Provider can only be automatically installed on Terraform 0.12 and later. 
- -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Provider Authentication Updates](#provider-authentication-updates) -- [Provider Custom Service Endpoint Updates](#provider-custom-service-endpoint-updates) -- [Data Source: aws_availability_zones](#data-source-aws_availability_zones) -- [Data Source: aws_lambda_invocation](#data-source-aws_lambda_invocation) -- [Data Source: aws_launch_template](#data-source-aws_launch_template) -- [Data Source: aws_route53_resolver_rule](#data-source-aws_route53_resolver_rule) -- [Data Source: aws_route53_zone](#data-source-aws_route53_zone) -- [Resource: aws_acm_certificate](#resource-aws_acm_certificate) -- [Resource: aws_api_gateway_method_settings](#resource-aws_api_gateway_method_settings) -- [Resource: aws_autoscaling_group](#resource-aws_autoscaling_group) -- [Resource: aws_cloudfront_distribution](#resource-aws_cloudfront_distribution) -- [Resource: aws_cloudwatch_log_group](#resource-aws_cloudwatch_log_group) -- [Resource: aws_codepipeline](#resource-aws_codepipeline) -- [Resource: aws_cognito_user_pool](#resource-aws_cognito_user_pool) -- [Resource: aws_dx_gateway](#resource-aws_dx_gateway) -- [Resource: aws_dx_gateway_association](#resource-aws_dx_gateway_association) -- [Resource: aws_dx_gateway_association_proposal](#resource-aws_dx_gateway_association_proposal) -- [Resource: aws_ebs_volume](#resource-aws_ebs_volume) -- [Resource: aws_elastic_transcoder_preset](#resource-aws_elastic_transcoder_preset) -- [Resource: aws_emr_cluster](#resource-aws_emr_cluster) -- [Resource: aws_glue_job](#resource-aws_glue_job) -- [Resource: aws_iam_access_key](#resource-aws_iam_access_key) -- [Resource: aws_iam_instance_profile](#resource-aws_iam_instance_profile) -- [Resource: aws_iam_server_certificate](#resource-aws_iam_server_certificate) -- [Resource: aws_instance](#resource-aws_instance) -- [Resource: aws_lambda_alias](#resource-aws_lambda_alias) -- [Resource: 
aws_launch_template](#resource-aws_launch_template) -- [Resource: aws_lb_listener_rule](#resource-aws_lb_listener_rule) -- [Resource: aws_msk_cluster](#resource-aws_msk_cluster) -- [Resource: aws_rds_cluster](#resource-aws_rds_cluster) -- [Resource: aws_route53_resolver_rule](#resource-aws_route53_resolver_rule) -- [Resource: aws_route53_zone](#resource-aws_route53_zone) -- [Resource: aws_s3_bucket](#resource-aws_s3_bucket) -- [Resource: aws_s3_bucket_metric](#resource-aws_s3_bucket_metric) -- [Resource: aws_security_group](#resource-aws_security_group) -- [Resource: aws_sns_platform_application](#resource-aws_sns_platform_application) -- [Resource: aws_spot_fleet_request](#resource-aws_spot_fleet_request) - - - -## Provider Version Configuration - --> Before upgrading to version 3.0.0, it is recommended to upgrade to the most recent 2.X version of the provider and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html) without unexpected changes or deprecation notices. - -We recommend using [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init`](https://www.terraform.io/docs/commands/init.html) to download the new version. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -Update to latest 3.X version: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -## Provider Authentication Updates - -### Authentication Ordering - -Previously, the provider preferred credentials in the following order: - -- Static credentials (those defined in the Terraform configuration) -- Environment variables (e.g., `AWS_ACCESS_KEY_ID` or `AWS_PROFILE`) -- Shared credentials file (e.g., `~/.aws/credentials`) -- EC2 Instance Metadata Service -- Default AWS Go SDK handling (shared configuration, CodeBuild/ECS/EKS) - -The provider now prefers the following credential ordering: - -- Static credentials (those defined in the Terraform configuration) -- Environment variables (e.g., `AWS_ACCESS_KEY_ID` or `AWS_PROFILE`) -- Shared credentials and/or configuration file (e.g., `~/.aws/credentials` and `~/.aws/config`) -- Default AWS Go SDK handling (shared configuration, CodeBuild/ECS/EKS, EC2 Instance Metadata Service) - -This means workarounds that disabled the EC2 Instance Metadata Service handling in order to enable CodeBuild/ECS/EKS credentials, or other credential methods such as `credential_process` in the AWS shared configuration, are no longer necessary. 
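The first-match behavior described above can be sketched outside the provider. The source list and resolver below are illustrative stand-ins for the real SDK machinery, not the AWS SDK's actual implementation:

```typescript
// Illustrative sketch of first-match credential resolution (not real SDK code).
type Credentials = { accessKeyId: string; secretAccessKey: string };
type CredentialSource = { name: string; resolve: () => Credentials | undefined };

// Walk the sources in preference order and stop at the first that yields credentials.
function resolveCredentials(
  sources: CredentialSource[]
): { from: string; creds: Credentials } | undefined {
  for (const s of sources) {
    const creds = s.resolve();
    if (creds) {
      return { from: s.name, creds };
    }
  }
  return undefined;
}

// v3 ordering: static -> environment -> shared files -> SDK default handling
const credentialSources: CredentialSource[] = [
  { name: "static", resolve: () => undefined }, // none configured
  {
    name: "environment",
    resolve: () => ({ accessKeyId: "AKIAEXAMPLE", secretAccessKey: "example" }),
  },
  { name: "shared-files", resolve: () => undefined },
  { name: "sdk-default", resolve: () => undefined },
];

console.log(resolveCredentials(credentialSources)?.from); // logs "environment"
```

Because the EC2 Instance Metadata Service now sits inside the final SDK-default step rather than ahead of it, sources such as `credential_process` win without any workaround.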
- -### Shared Configuration File Automatically Enabled - -The `AWS_SDK_LOAD_CONFIG` environment variable is no longer necessary for the provider to automatically load the AWS shared configuration file (e.g., `~/.aws/config`). - -### Removal of AWS_METADATA_TIMEOUT Environment Variable Usage - -The provider now relies on the default AWS Go SDK timeouts for interacting with the EC2 Instance Metadata Service. - -## Provider Custom Service Endpoint Updates - -### Removal of kinesis_analytics and r53 Arguments - -The [custom service endpoints](custom-service-endpoints.html) for Kinesis Analytics and Route 53 now use the `kinesisanalytics` and `route53` argument names in the provider configuration. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - endpoints: [ - { - kinesis_analytics: "https://example.com", - r53: "https://example.com", - }, - ], - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", { - endpoints: [ - { - kinesisanalytics: "https://example.com", - route53: "https://example.com", - }, - ], - }); - } -} - -``` - -## Data Source: aws_availability_zones - -### blacklisted_names Attribute Removal - -Switch your Terraform configuration to the `excludeNames` attribute instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsAvailabilityZones } from "./.gen/providers/aws/data-aws-availability-zones"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsAvailabilityZones(this, "example", { - blacklisted_names: ["us-west-2d"], - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsAvailabilityZones } from "./.gen/providers/aws/data-aws-availability-zones"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsAvailabilityZones(this, "example", { - excludeNames: ["us-west-2d"], - }); - } -} - -``` - -### blacklisted_zone_ids Attribute Removal - -Switch your Terraform configuration to the `excludeZoneIds` attribute instead. 
- -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsAvailabilityZones } from "./.gen/providers/aws/data-aws-availability-zones"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsAvailabilityZones(this, "example", { - blacklisted_zone_ids: ["usw2-az4"], - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsAvailabilityZones } from "./.gen/providers/aws/data-aws-availability-zones"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsAvailabilityZones(this, "example", { - excludeZoneIds: ["usw2-az4"], - }); - } -} - -``` - -## Data Source: aws_lambda_invocation - -### result_map Attribute Removal - -Switch your Terraform configuration to the `result` attribute with the [`jsondecode()` function](https://www.terraform.io/docs/configuration/functions/jsondecode.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformOutput, Fn, TerraformStack } from "cdktf"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new TerraformOutput(this, "lambda_result", { - value: Fn.lookupNested(example.resultMap, ['"key1"']), - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformOutput, Fn, Token, TerraformStack } from "cdktf"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new TerraformOutput(this, "lambda_result", { - value: Fn.lookupNested(Fn.jsondecode(Token.asString(example.result)), [ - '"key1"', - ]), - }); - } -} - -``` - -## Data Source: aws_launch_template - -### Error raised if no matching launch template is found - -Previously, when a launch template matching the criteria was not found, the data source result would have been `null`. -Now this could produce errors similar to the below: - -``` -data.aws_launch_template.current: Refreshing state... - -Error: error reading launch template: empty output -``` - -Configurations that depend on the previous behavior will need to be updated. - -## Data Source: aws_route53_resolver_rule - -### Removal of trailing period in domain_name argument - -Previously the data source returned the Resolver Rule Domain Name directly from the API, which included a `.` suffix. This proves difficult when many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. For example, when the attribute would previously return a Resolver Rule Domain Name such as `example.com.`, the attribute will now be returned as `example.com`. 
-While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Data Source: aws_route53_zone - -### Removal of trailing period in name argument - -Previously the data source returned the Hosted Zone Domain Name directly from the API, which included a `.` suffix. This proves difficult when many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. For example, when the attribute would previously return a Hosted Zone Domain Name such as `example.com.`, the attribute will now be returned as `example.com`. -While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Resource: aws_acm_certificate - -### domain_validation_options Changed from List to Set - -Previously, the `domainValidationOptions` attribute was a list type and completely unknown until after an initial `terraform apply`. This generally required complicated configuration workarounds to properly create DNS validation records since referencing this attribute directly could produce errors similar to the below: - -``` -Error: Invalid for_each argument - - on main.tf line 16, in resource "aws_route53_record" "existing": - 16: for_each = aws_acm_certificate.existing.domain_validation_options - -The `for_each` value depends on resource attributes that cannot be determined -until apply, so Terraform cannot predict how many instances will be created. -To work around this, use the -target argument to first apply only the -resources that the for_each depends on. -``` - -The `domainValidationOptions` attribute is now a set type and the resource will attempt to populate the information necessary during the planning phase to handle the above situation in most environments without workarounds. This change also prevents Terraform from showing unexpected differences if the API returns the results in varying order. 
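The practical effect of the list-to-set change can be sketched outside Terraform: position-based access is replaced by key-based access, mirroring the `for_each` pattern shown later in this section. The option objects below are illustrative stand-ins for real `domainValidationOptions` entries:

```typescript
// Illustrative model of the domain_validation_options change; not provider code.
interface ValidationOption {
  domainName: string;
  resourceRecordName: string;
  resourceRecordType: string;
}

const options: ValidationOption[] = [
  {
    domainName: "existing.example.com",
    resourceRecordName: "_812d.existing.example.com.",
    resourceRecordType: "CNAME",
  },
  {
    domainName: "existing1.example.com",
    resourceRecordName: "_d711.existing1.example.com.",
    resourceRecordType: "CNAME",
  },
];

// v2 (list): order was defined, so indexing by position worked
const v2First = options[0].resourceRecordName;

// v3 (set): treat the collection as unordered and key it by domain name,
// as in `{ for dvo in ... : dvo.domain_name => { ... } }`
const byDomain = new Map(options.map((o) => [o.domainName, o]));
const v3Lookup = byDomain.get("existing1.example.com")?.resourceRecordName;
```

Keying by domain name is what makes the result insensitive to the ordering the API happens to return.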
- -Configuration references to this attribute will likely require updates since sets cannot be indexed (e.g., `domainValidationOptions[0]` or the older `domainValidationOptions.0` syntax will return errors). -If the `domainValidationOptions` list previously contained only a single element like the two examples just shown, -it may be possible to wrap these references using the [`tolist()` function](https://www.terraform.io/docs/configuration/functions/tolist.html) -(e.g., `tolist(awsAcmCertificateExampleDomainValidationOptions)[0]`) as a quick configuration update. -However, given the complexity and workarounds required with the previous `domainValidationOptions` attribute implementation, -different environments will require different configuration updates and migration steps. -Below is a more advanced example. -Further questions on potential update steps can be submitted to the [community forums](https://discuss.hashicorp.com/c/terraform-providers/tf-aws/33). - -For example, given this previous configuration using a `count` based resource approach that may have been used in certain environments: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Fn, Op, Token, TerraformCount, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AcmCertificate } from "./.gen/providers/aws/acm-certificate"; -import { AcmCertificateValidation } from "./.gen/providers/aws/acm-certificate-validation"; -import { DataAwsRoute53Zone } from "./.gen/providers/aws/data-aws-route53-zone"; -import { Route53Record } from "./.gen/providers/aws/route53-record"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const existing = new AcmCertificate(this, "existing", { - domainName: "existing.${" + publicRootDomain.value + "}", - subjectAlternativeNames: [ - "existing1.${" + publicRootDomain.value + "}", - "existing2.${" + publicRootDomain.value + "}", - "existing3.${" + publicRootDomain.value + "}", - ], - validationMethod: "DNS", - }); - const dataAwsRoute53ZonePublicRootDomain = new DataAwsRoute53Zone( - this, - "public_root_domain", - { - name: publicRootDomain.stringValue, - } - ); - /*In most cases loops should be handled in the programming language context and - not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - you should consider using a for loop. If you are looping over something only known to Terraform, e.g. 
a result of a data source - you need to keep this like it is.*/ - const existingCount = TerraformCount.of( - Token.asNumber(Op.add(Fn.lengthOf(existing.subjectAlternativeNames), 1)) - ); - const awsRoute53RecordExisting = new Route53Record(this, "existing_2", { - allowOverwrite: true, - name: Token.asString( - Fn.lookupNested( - Fn.lookupNested(existing.domainValidationOptions, [ - existingCount.index, - ]), - ["resource_record_name"] - ) - ), - records: [ - Token.asString( - Fn.lookupNested( - Fn.lookupNested(existing.domainValidationOptions, [ - existingCount.index, - ]), - ["resource_record_value"] - ) - ), - ], - ttl: 60, - type: Token.asString( - Fn.lookupNested( - Fn.lookupNested(existing.domainValidationOptions, [ - existingCount.index, - ]), - ["resource_record_type"] - ) - ), - zoneId: Token.asString(dataAwsRoute53ZonePublicRootDomain.zoneId), - count: existingCount, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsRoute53RecordExisting.overrideLogicalId("existing"); - const awsAcmCertificateValidationExisting = new AcmCertificateValidation( - this, - "existing_3", - { - certificateArn: existing.arn, - validationRecordFqdns: Token.asList( - Fn.lookupNested(awsRoute53RecordExisting, ["*", "fqdn"]) - ), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsAcmCertificateValidationExisting.overrideLogicalId("existing"); - } -} - -``` - -It will receive errors like the below after upgrading: - -``` -Error: Invalid index - - on main.tf line 14, in resource "aws_route53_record" "existing": - 14: name = aws_acm_certificate.existing.domain_validation_options[count.index].resource_record_name - |---------------- - | aws_acm_certificate.existing.domain_validation_options is set of object with 4 elements - | count.index is 1 - -This value does not have any indices. 
-``` - -Since the `domainValidationOptions` attribute changed from a list to a set and sets cannot be indexed in Terraform, the recommendation is to update the configuration to use the more stable [resource `forEach` support](https://www.terraform.io/docs/configuration/meta-arguments/for_each.html) instead of [`count`](https://www.terraform.io/docs/configuration/meta-arguments/count.html). Note the slight change in the `validationRecordFqdns` syntax as well. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformIterator, Fn, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AcmCertificateValidation } from "./.gen/providers/aws/acm-certificate-validation"; -import { Route53Record } from "./.gen/providers/aws/route53-record"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - /*In most cases loops should be handled in the programming language context and - not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - you should consider using a for loop. If you are looping over something only known to Terraform, e.g. 
a result of a data source - you need to keep this like it is.*/ - const existingForEachIterator = TerraformIterator.fromList( - Token.asAny( - "${{ for dvo in ${" + - awsAcmCertificateExisting.domainValidationOptions + - "} : dvo.domain_name => {\n name = dvo.resource_record_name\n record = dvo.resource_record_value\n type = dvo.resource_record_type\n }}}" - ) - ); - const existing = new Route53Record(this, "existing", { - allowOverwrite: true, - name: Token.asString( - Fn.lookupNested(existingForEachIterator.value, ["name"]) - ), - records: [ - Token.asString( - Fn.lookupNested(existingForEachIterator.value, ["record"]) - ), - ], - ttl: 60, - type: Token.asString( - Fn.lookupNested(existingForEachIterator.value, ["type"]) - ), - zoneId: Token.asString(publicRootDomain.zoneId), - forEach: existingForEachIterator, - }); - const awsAcmCertificateValidationExisting = new AcmCertificateValidation( - this, - "existing_1", - { - certificateArn: Token.asString(awsAcmCertificateExisting.arn), - validationRecordFqdns: Token.asList( - "${[ for record in ${" + existing.fqn + "} : record.fqdn]}" - ), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsAcmCertificateValidationExisting.overrideLogicalId("existing"); - } -} - -``` - -After the configuration has been updated, a plan should no longer error and may look like the following: - -``` ------------------------------------------------------------------------- - -An execution plan has been generated and is shown below. 
-Resource actions are indicated with the following symbols: - + create - - destroy --/+ destroy and then create replacement - -Terraform will perform the following actions: - - # aws_acm_certificate_validation.existing must be replaced --/+ resource "aws_acm_certificate_validation" "existing" { - certificate_arn = "arn:aws:acm:us-east-2:123456789012:certificate/ccbc58e8-061d-4443-9035-d3af0512e863" - ~ id = "2020-07-16 00:01:19 +0000 UTC" -> (known after apply) - ~ validation_record_fqdns = [ - - "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com", - - "_812ddf11b781af1eec1643ec58f102d2.existing.example.com", - - "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com", - - "_d7112da809a40e848207c04399babcec.existing1.example.com", - ] -> (known after apply) # forces replacement - } - - # aws_route53_record.existing will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" -> null - - id = "Z123456789012__812ddf11b781af1eec1643ec58f102d2.existing.example.com._CNAME" -> null - - name = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" -> null - - records = [ - - "_bdeba72164eec216c55a32374bcceafd.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing[1] will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" -> null - - id = "Z123456789012__40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com._CNAME" -> null - - name = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" -> null - - records = [ - - "_638532db1fa6a1b71aaf063c8ea29d52.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing[2] will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = 
"_d7112da809a40e848207c04399babcec.existing1.example.com" -> null - - id = "Z123456789012__d7112da809a40e848207c04399babcec.existing1.example.com._CNAME" -> null - - name = "_d7112da809a40e848207c04399babcec.existing1.example.com" -> null - - records = [ - - "_6e1da5574ab46a6c782ed73438274181.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing[3] will be destroyed - - resource "aws_route53_record" "existing" { - - fqdn = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" -> null - - id = "Z123456789012__8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com._CNAME" -> null - - name = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" -> null - - records = [ - - "_a419f8410d2e0720528a96c3506f3841.jfrzftwwjs.acm-validations.aws.", - ] -> null - - ttl = 60 -> null - - type = "CNAME" -> null - - zone_id = "Z123456789012" -> null - } - - # aws_route53_record.existing["existing.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" - + records = [ - + "_bdeba72164eec216c55a32374bcceafd.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing1.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_d7112da809a40e848207c04399babcec.existing1.example.com" - + records = [ - + "_6e1da5574ab46a6c782ed73438274181.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing2.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + 
fqdn = (known after apply) - + id = (known after apply) - + name = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" - + records = [ - + "_638532db1fa6a1b71aaf063c8ea29d52.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing3.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" - + records = [ - + "_a419f8410d2e0720528a96c3506f3841.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - -Plan: 5 to add, 0 to change, 5 to destroy. -``` - -Due to the type of configuration change, Terraform does not know that the previous `awsRoute53Record` resources (indexed by number in the existing state) and the new resources (indexed by domain names in the updated configuration) are equivalent. Typically in this situation, the [`terraform state mv` command](https://www.terraform.io/docs/commands/state/mv.html) can be used to reduce the plan to show no changes. This is done by associating the count index (e.g., `[1]`) with the equivalent domain name index (e.g., `["existing2.example.com"]`), making one of the four commands needed to fix the above example: `terraform state mv 'aws_route53_record.existing[1]' 'aws_route53_record.existing["existing2.example.com"]'`. We recommend using this `terraform state mv` update process where possible to reduce chances of unexpected behaviors or changes in an environment. - -If using `terraform state mv` to reduce the plan to show no changes, no additional steps are required. - -In larger or more complex environments though, matching each old resource address to its new resource address and running all the necessary `terraform state mv` commands can be tedious. 
Instead, since the `awsRoute53Record` resource implements the `allow_overwrite = true` argument, it is possible to just remove the old `awsRoute53Record` resources from the Terraform state using the [`terraform state rm` command](https://www.terraform.io/docs/commands/state/rm.html). In this case, Terraform will leave the existing records in Route 53 and plan to just overwrite the existing validation records with the same exact (previous) values. - --> This guide is showing the simpler `terraform state rm` option below as a potential shortcut in this specific situation, however in most other cases `terraform state mv` is required to change from `count` based resources to `forEach` based resources and properly match the existing Terraform state to the updated Terraform configuration. - -```console -$ terraform state rm aws_route53_record.existing -Removed aws_route53_record.existing[0] -Removed aws_route53_record.existing[1] -Removed aws_route53_record.existing[2] -Removed aws_route53_record.existing[3] -Successfully removed 4 resource instance(s). -``` - -Now the Terraform plan will show only the additions of new Route 53 records (which are exactly the same as before the upgrade) and the proposed recreation of the `awsAcmCertificateValidation` resource. The `awsAcmCertificateValidation` resource recreation will have no effect as the certificate is already validated and issued. - -``` -An execution plan has been generated and is shown below. 
-Resource actions are indicated with the following symbols: - + create --/+ destroy and then create replacement - -Terraform will perform the following actions: - - # aws_acm_certificate_validation.existing must be replaced --/+ resource "aws_acm_certificate_validation" "existing" { - certificate_arn = "arn:aws:acm:us-east-2:123456789012:certificate/ccbc58e8-061d-4443-9035-d3af0512e863" - ~ id = "2020-07-16 00:01:19 +0000 UTC" -> (known after apply) - ~ validation_record_fqdns = [ - - "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com", - - "_812ddf11b781af1eec1643ec58f102d2.existing.example.com", - - "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com", - - "_d7112da809a40e848207c04399babcec.existing1.example.com", - ] -> (known after apply) # forces replacement - } - - # aws_route53_record.existing["existing.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_812ddf11b781af1eec1643ec58f102d2.existing.example.com" - + records = [ - + "_bdeba72164eec216c55a32374bcceafd.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing1.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_d7112da809a40e848207c04399babcec.existing1.example.com" - + records = [ - + "_6e1da5574ab46a6c782ed73438274181.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing2.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_40b71647a8d88eb82d53fe988e8a3cc1.existing2.example.com" - + records = [ - + 
"_638532db1fa6a1b71aaf063c8ea29d52.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - - # aws_route53_record.existing["existing3.example.com"] will be created - + resource "aws_route53_record" "existing" { - + allow_overwrite = true - + fqdn = (known after apply) - + id = (known after apply) - + name = "_8dc56b6e35f699b8754afcdd79e9748d.existing3.example.com" - + records = [ - + "_a419f8410d2e0720528a96c3506f3841.jfrzftwwjs.acm-validations.aws.", - ] - + ttl = 60 - + type = "CNAME" - + zone_id = "Z123456789012" - } - -Plan: 5 to add, 0 to change, 1 to destroy. -``` - -Once applied, no differences should be shown and no additional steps should be necessary. - -Alternatively, if you are referencing a subset of `domainValidationOptions`, there is another method of upgrading from v2 to v3 without having to move state. Given the scenario below... - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Fn, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AcmCertificate } from "./.gen/providers/aws/acm-certificate"; -import { AcmCertificateValidation } from "./.gen/providers/aws/acm-certificate-validation"; -import { DataAwsRoute53Zone } from "./.gen/providers/aws/data-aws-route53-zone"; -import { Route53Record } from "./.gen/providers/aws/route53-record"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const existing = new AcmCertificate(this, "existing", { - domainName: "existing.${" + publicRootDomain.value + "}", - subjectAlternativeNames: [ - "existing1.${" + publicRootDomain.value + "}", - "existing2.${" + publicRootDomain.value + "}", - "existing3.${" + publicRootDomain.value + "}", - ], - validationMethod: "DNS", - }); - const dataAwsRoute53ZonePublicRootDomain = new DataAwsRoute53Zone( - this, - "public_root_domain", - { - name: publicRootDomain.stringValue, - } - ); - const existing1 = new Route53Record(this, "existing_1", { - allowOverwrite: true, - name: Token.asString( - Fn.lookupNested(existing.domainValidationOptions, [ - "0", - "resource_record_name", - ]) - ), - records: [ - Token.asString( - Fn.lookupNested(existing.domainValidationOptions, [ - "0", - "resource_record_value", - ]) - ), - ], - ttl: 60, - type: Token.asString( - Fn.lookupNested(existing.domainValidationOptions, [ - "0", - "resource_record_type", - ]) - ), - zoneId: Token.asString(dataAwsRoute53ZonePublicRootDomain.zoneId), - }); - const existing3 = new Route53Record(this, "existing_3", { - allowOverwrite: true, - name: Token.asString( - Fn.lookupNested(existing.domainValidationOptions, [ - "2", - "resource_record_name", - ]) - ), - records: [ - Token.asString( - Fn.lookupNested(existing.domainValidationOptions, [ - "2", - "resource_record_value", - ]) - ), - ], - ttl: 60, - type: Token.asString( - Fn.lookupNested(existing.domainValidationOptions, [ - "2", - "resource_record_type", - ]) - ), - zoneId: 
Token.asString(dataAwsRoute53ZonePublicRootDomain.zoneId), - }); - const awsAcmCertificateValidationExisting1 = new AcmCertificateValidation( - this, - "existing_1_4", - { - certificateArn: existing.arn, - validationRecordFqdns: Token.asList(existing1.fqdn), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsAcmCertificateValidationExisting1.overrideLogicalId("existing_1"); - const awsAcmCertificateValidationExisting3 = new AcmCertificateValidation( - this, - "existing_3_5", - { - certificateArn: existing.arn, - validationRecordFqdns: Token.asList(existing3.fqdn), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsAcmCertificateValidationExisting3.overrideLogicalId("existing_3"); - } -} - -``` - -You can perform a conversion of the new `domainValidationOptions` object into a map, to allow you to perform a lookup by the domain name in place of an index number. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Fn, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AcmCertificateValidation } from "./.gen/providers/aws/acm-certificate-validation"; -import { Route53Record } from "./.gen/providers/aws/route53-record"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const existingDomainValidationOptions = - "${{ for dvo in ${" + - cloudfrontCert.domainValidationOptions + - "} : dvo.domain_name => {\n name = dvo.resource_record_name\n record = dvo.resource_record_value\n type = dvo.resource_record_type\n }}}"; - const existing1 = new Route53Record(this, "existing_1", { - allowOverwrite: true, - name: Token.asString( - Fn.lookupNested( - Fn.lookupNested(existingDomainValidationOptions, [ - "existing1.${" + publicRootDomain.value + "}", - ]), - ["name"] - ) - ), - records: [ - Token.asString( - Fn.lookupNested( - Fn.lookupNested(existingDomainValidationOptions, [ - "existing1.${" + publicRootDomain.value + "}", - ]), - ["record"] - ) - ), - ], - ttl: 60, - type: Token.asString( - Fn.lookupNested( - Fn.lookupNested(existingDomainValidationOptions, [ - "existing1.${" + publicRootDomain.value + "}", - ]), - ["type"] - ) - ), - zoneId: Token.asString(dataAwsRoute53ZonePublicRootDomain.zoneId), - }); - const existing3 = new Route53Record(this, "existing_3", { - allowOverwrite: true, - name: Token.asString( - Fn.lookupNested( - Fn.lookupNested(existingDomainValidationOptions, [ - "existing3.${" + publicRootDomain.value + "}", - ]), - ["name"] - ) - ), - records: [ - Token.asString( - Fn.lookupNested( - Fn.lookupNested(existingDomainValidationOptions, [ - "existing3.${" + publicRootDomain.value + "}", - ]), - ["record"] - ) - ), - ], - ttl: 60, - type: Token.asString( - Fn.lookupNested( - Fn.lookupNested(existingDomainValidationOptions, [ - "existing3.${" + publicRootDomain.value + "}", - ]), - ["type"] - ) - ), - zoneId: Token.asString(dataAwsRoute53ZonePublicRootDomain.zoneId), - }); - const awsAcmCertificateValidationExisting1 = new 
AcmCertificateValidation( - this, - "existing_1_2", - { - certificateArn: existing.arn, - validationRecordFqdns: Token.asList(existing1.fqdn), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsAcmCertificateValidationExisting1.overrideLogicalId("existing_1"); - const awsAcmCertificateValidationExisting3 = new AcmCertificateValidation( - this, - "existing_3_3", - { - certificateArn: existing.arn, - validationRecordFqdns: Token.asList(existing3.fqdn), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsAcmCertificateValidationExisting3.overrideLogicalId("existing_3"); - } -} - -``` - -Performing a plan against these resources will not cause any change in state, since underlying resources have not changed. - -### subject_alternative_names Changed from List to Set - -Previously the `subjectAlternativeNames` argument was stored in the Terraform state as an ordered list while the API returned information in an unordered manner. The attribute is now configured as a set instead of a list. Certain Terraform configuration language features distinguish between these two attribute types such as not being able to index a set (e.g., `awsAcmCertificateExampleSubjectAlternativeNames[0]` is no longer a valid reference). Depending on the implementation details of a particular configuration using `subjectAlternativeNames` as a reference, possible solutions include changing references to using `for`/`forEach` or using the `tolist()` function as a temporary workaround to keep the previous behavior until an appropriate configuration (properly using the unordered set) can be determined. Usage questions can be submitted to the [community forums](https://discuss.hashicorp.com/c/terraform-providers/tf-aws/33). 
- -### certificate_body, certificate_chain, and private_key Arguments No Longer Stored as Hash - -Previously, when the `certificateBody`, `certificateChain`, and `privateKey` arguments were stored in state, they were stored as a hash of the actual value. This prevented Terraform from properly updating the resource when necessary, so the hashing has been removed. The Terraform AWS Provider will show an update to these arguments on the first apply after upgrading to version 3.0.0, which fixes the Terraform state by removing the hash. Since the `privateKey` attribute is marked as sensitive, the values in the update will not be visible in the Terraform output. If the non-hashed values have not changed, then the only update occurring is to the Terraform state. If these arguments are the only updates and they all match the hash removal, the apply will occur without submitting API calls. - -## Resource: aws_api_gateway_method_settings - -### throttling_burst_limit and throttling_rate_limit Arguments Now Default to -1 - -Previously, when the `throttlingBurstLimit` or `throttlingRateLimit` argument was not configured, the resource would enable throttling and set the limit value to the AWS API Gateway default. In addition, as these arguments were marked as `computed`, Terraform ignored any subsequent changes made to these arguments in the resource. These behaviors have been removed and, by default, the `throttlingBurstLimit` and `throttlingRateLimit` arguments will be disabled in the resource with a value of `-1`. - -## Resource: aws_autoscaling_group - -### availability_zones and vpc_zone_identifier Arguments Now Report Plan-Time Conflict - -Specifying both the `availabilityZones` and `vpcZoneIdentifier` arguments previously led to confusing behavior and errors. Now this issue is reported at plan time. Use the `null` value instead of `[]` (empty list) in conditionals to ensure this validation does not unexpectedly trigger. 
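As a sketch of the `null`-versus-`[]` advice above (plain TypeScript with hypothetical variable names): in cdktf, leaving a property `undefined` omits the argument from the synthesized configuration, which corresponds to Terraform's `null`, while an empty list still counts as a configured value and can trigger the conflict:

```typescript
// Hypothetical flag deciding whether this ASG pins explicit AZs.
const useAvailabilityZones = false;
const zones: string[] = ["us-east-2a", "us-east-2b"];

// Anti-pattern: `useAvailabilityZones ? zones : []`, because an empty
// list is still treated as a configured argument.
// Preferred: fall back to undefined so the argument is omitted entirely,
// matching Terraform's `null`.
const availabilityZones: string[] | undefined = useAvailabilityZones
  ? zones
  : undefined;
```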
- -### Drift detection enabled for `loadBalancers` and `targetGroupArns` arguments - -If you previously set one of these arguments to an empty list to enable drift detection (e.g., when migrating an ASG from ELB to ALB), this can be updated as follows. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AutoscalingGroup } from "./.gen/providers/aws/autoscaling-group"; -interface MyConfig { - maxSize: any; - minSize: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new AutoscalingGroup(this, "example", { - loadBalancers: [], - targetGroupArns: [Token.asString(awsLbTargetGroupExample.arn)], - maxSize: config.maxSize, - minSize: config.minSize, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AutoscalingGroup } from "./.gen/providers/aws/autoscaling-group"; -interface MyConfig { - maxSize: any; - minSize: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new AutoscalingGroup(this, "example", { - targetGroupArns: [Token.asString(awsLbTargetGroupExample.arn)], - maxSize: config.maxSize, - minSize: config.minSize, - }); - } -} - -``` - -If `awsAutoscalingAttachment` resources reference your ASG configurations, you will need to add the [`lifecycle` configuration block](https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html) with an `ignoreChanges` argument to prevent Terraform from reporting a non-empty plan (i.e., proposing a resource update) during the next state refresh. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AutoscalingAttachment } from "./.gen/providers/aws/autoscaling-attachment"; -import { AutoscalingGroup } from "./.gen/providers/aws/autoscaling-group"; -interface MyConfig { - maxSize: any; - minSize: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new AutoscalingGroup(this, "example", { - maxSize: config.maxSize, - minSize: config.minSize, - }); - const awsAutoscalingAttachmentExample = new AutoscalingAttachment( - this, - "example_1", - { - autoscalingGroupName: example.id, - elb: Token.asString(awsElbExample.id), - } - ); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsAutoscalingAttachmentExample.overrideLogicalId("example"); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AutoscalingAttachment } from "./.gen/providers/aws/autoscaling-attachment"; -import { AutoscalingGroup } from "./.gen/providers/aws/autoscaling-group"; -interface MyConfig { - maxSize: any; - minSize: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new AutoscalingGroup(this, "example", { - lifecycle: { - ignoreChanges: ["load_balancers", "target_group_arns"], - }, - maxSize: config.maxSize, - minSize: config.minSize, - }); - const awsAutoscalingAttachmentExample = new AutoscalingAttachment( - this, - "example_1", - { - autoscalingGroupName: example.id, - elb: Token.asString(awsElbExample.id), - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsAutoscalingAttachmentExample.overrideLogicalId("example"); - } -} - -``` - -## Resource: aws_cloudfront_distribution - -### active_trusted_signers Attribute Name and Type Change - -Previously, the `activeTrustedSigners` computed attribute was implemented with a Map that did not correctly support accessing its computed `items` attribute in Terraform 0.12. -To address this, the `activeTrustedSigners` attribute has been renamed to `trustedSigners` and is now implemented as a List with a computed `items` List attribute and computed `enabled` boolean attribute. 
-The nested `items` attribute includes computed `awsAccountNumber` and `keyPairIds` sub-fields, with the latter implemented as a List. -Thus, user configurations referencing the `activeTrustedSigners` attribute and its sub-fields will need to be changed as follows. - -Given these previous references: - -``` -aws_cloudfront_distribution.example.active_trusted_signers.enabled -aws_cloudfront_distribution.example.active_trusted_signers.items -``` - -Updated references: - -``` -aws_cloudfront_distribution.example.trusted_signers[0].enabled -aws_cloudfront_distribution.example.trusted_signers[0].items -``` - -## Resource: aws_cloudwatch_log_group - -### Removal of arn Wildcard Suffix - -Previously, the resource returned the ARN directly from the API, which included a `:*` suffix to denote all CloudWatch Log Streams under the CloudWatch Log Group. Most other AWS resources that return ARNs, and many other AWS services, do not use the `:*` suffix. The suffix is now automatically removed. For example, the resource previously returned an ARN such as `arn:aws:logs:us-east-1:123456789012:log-group:/example:*` but will now return `arn:aws:logs:us-east-1:123456789012:log-group:/example`. - -Workarounds, such as using `replace()` as shown below, should be removed: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Fn, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { CloudwatchLogGroup } from "./.gen/providers/aws/cloudwatch-log-group"; -import { DatasyncTask } from "./.gen/providers/aws/datasync-task"; -interface MyConfig { - destinationLocationArn: any; - sourceLocationArn: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new CloudwatchLogGroup(this, "example", { - name: "example", - }); - const awsDatasyncTaskExample = new DatasyncTask(this, "example_1", { - cloudwatchLogGroupArn: Token.asString(Fn.replace(example.arn, ":*", "")), - destinationLocationArn: config.destinationLocationArn, - sourceLocationArn: config.sourceLocationArn, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsDatasyncTaskExample.overrideLogicalId("example"); - } -} - -``` - -Removing the `:*` suffix is a breaking change for some configurations. Fix these configurations using string interpolations as demonstrated below. For example, this configuration is now broken: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { DataAwsIamPolicyDocument } from "./.gen/providers/aws/data-aws-iam-policy-document"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsIamPolicyDocument(this, "ad-log-policy", { - statement: [ - { - actions: ["logs:CreateLogStream", "logs:PutLogEvents"], - effect: "Allow", - principals: [ - { - identifiers: ["ds.amazonaws.com"], - type: "Service", - }, - ], - resources: [example.arn], - }, - ], - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsIamPolicyDocument } from "./.gen/providers/aws/data-aws-iam-policy-document"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsIamPolicyDocument(this, "ad-log-policy", { - statement: [ - { - actions: ["logs:CreateLogStream", "logs:PutLogEvents"], - effect: "Allow", - principals: [ - { - identifiers: ["ds.amazonaws.com"], - type: "Service", - }, - ], - resources: ["${" + example.arn + "}:*"], - }, - ], - }); - } -} - -``` - -## Resource: aws_codepipeline - -### GITHUB_TOKEN environment variable removal - -Switch your Terraform configuration to the `oAuthToken` element in the `action` `configuration` map instead. - -For example, given this previous configuration: - -```console -$ GITHUB_TOKEN= terraform apply -``` - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { Codepipeline } from "./.gen/providers/aws/codepipeline"; -interface MyConfig { - artifactStore: any; - name: any; - roleArn: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new Codepipeline(this, "example", { - stage: [ - { - action: [ - { - category: "Source", - configuration: { - Branch: "main", - Owner: "lifesum-terraform", - Repo: "example", - }, - name: "Source", - outputArtifacts: ["example"], - owner: "ThirdParty", - provider: "GitHub", - version: "1", - }, - ], - name: "Source", - }, - ], - artifactStore: config.artifactStore, - name: config.name, - roleArn: config.roleArn, - }); - } -} - -``` - -The configuration could be updated as follows: - -```console -$ TF_VAR_github_token= terraform apply -``` - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformVariable, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Codepipeline } from "./.gen/providers/aws/codepipeline"; -interface MyConfig { - artifactStore: any; - name: any; - roleArn: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - /*Terraform Variables are not always the best fit for getting inputs in the context of Terraform CDK. 
- You can read more about this at https://cdk.tf/variables*/ - const githubToken = new TerraformVariable(this, "github_token", {}); - new Codepipeline(this, "example", { - stage: [ - { - action: [ - { - category: "Source", - configuration: { - Branch: "main", - OAuthToken: githubToken.stringValue, - Owner: "lifesum-terraform", - Repo: "example", - }, - name: "Source", - outputArtifacts: ["example"], - owner: "ThirdParty", - provider: "GitHub", - version: "1", - }, - ], - name: "Source", - }, - ], - artifactStore: config.artifactStore, - name: config.name, - roleArn: config.roleArn, - }); - } -} - -``` - -## Resource: aws_cognito_user_pool - -### Removal of admin_create_user_config.unused_account_validity_days Argument - -The Cognito API previously deprecated the `adminCreateUserConfig` configuration block `unusedAccountValidityDays` argument in preference of the `passwordPolicy` configuration block `temporaryPasswordValidityDays` argument. Configurations will need to be updated to use the API supported configuration. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { CognitoUserPool } from "./.gen/providers/aws/cognito-user-pool"; -interface MyConfig { - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new CognitoUserPool(this, "example", { - adminCreateUserConfig: { - unused_account_validity_days: 7, - }, - name: config.name, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { CognitoUserPool } from "./.gen/providers/aws/cognito-user-pool"; -interface MyConfig { - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new CognitoUserPool(this, "example", { - passwordPolicy: { - temporaryPasswordValidityDays: 7, - }, - name: config.name, - }); - } -} - -``` - -## Resource: aws_dx_gateway - -### Removal of Automatic aws_dx_gateway_association Import - -Previously when importing the `awsDxGateway` resource with the [`terraform import` command](https://www.terraform.io/docs/commands/import.html), the Terraform AWS Provider would automatically attempt to import an associated `awsDxGatewayAssociation` resource(s) as well. This automatic resource import has been removed. Use the [`awsDxGatewayAssociation` resource import](/docs/providers/aws/r/dx_gateway_association.html#import) to import those resources separately. - -## Resource: aws_dx_gateway_association - -### vpn_gateway_id Argument Removal - -Switch your Terraform configuration to the `associatedGatewayId` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { DxGatewayAssociation } from "./.gen/providers/aws/dx-gateway-association"; -interface MyConfig { - dxGatewayId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DxGatewayAssociation(this, "example", { - vpnGatewayId: Token.asString(awsVpnGatewayExample.id), - dxGatewayId: config.dxGatewayId, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DxGatewayAssociation } from "./.gen/providers/aws/dx-gateway-association"; -interface MyConfig { - dxGatewayId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DxGatewayAssociation(this, "example", { - associatedGatewayId: Token.asString(awsVpnGatewayExample.id), - dxGatewayId: config.dxGatewayId, - }); - } -} - -``` - -## Resource: aws_dx_gateway_association_proposal - -### vpn_gateway_id Argument Removal - -Switch your Terraform configuration to the `associatedGatewayId` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { DxGatewayAssociationProposal } from "./.gen/providers/aws/dx-gateway-association-proposal"; -interface MyConfig { - associatedGatewayId: any; - dxGatewayId: any; - dxGatewayOwnerAccountId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DxGatewayAssociationProposal(this, "example", { - vpn_gateway_id: awsVpnGatewayExample.id, - associatedGatewayId: config.associatedGatewayId, - dxGatewayId: config.dxGatewayId, - dxGatewayOwnerAccountId: config.dxGatewayOwnerAccountId, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DxGatewayAssociationProposal } from "./.gen/providers/aws/dx-gateway-association-proposal"; -interface MyConfig { - dxGatewayId: any; - dxGatewayOwnerAccountId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DxGatewayAssociationProposal(this, "example", { - associatedGatewayId: Token.asString(awsVpnGatewayExample.id), - dxGatewayId: config.dxGatewayId, - dxGatewayOwnerAccountId: config.dxGatewayOwnerAccountId, - }); - } -} - -``` - -## Resource: aws_ebs_volume - -### iops Argument Apply-Time Validation - -Previously when the `iops` argument was configured with a `type` other than `io1` (either explicitly or omitted, indicating the default type `gp2`), the Terraform AWS Provider would automatically disregard the value provided to `iops` as it is only configurable for the `io1` volume type per the AWS EC2 API. 
This behavior has changed such that the Terraform AWS Provider will instead return an error at apply time indicating an `iops` value is invalid for types other than `io1`. -Exceptions to this are in cases where `iops` is set to `null` or `0` such that the Terraform AWS Provider will continue to accept the value regardless of `type`. - -## Resource: aws_elastic_transcoder_preset - -### video Configuration Block max_frame_rate Argument No Longer Uses 30 Default - -Previously when the `maxFrameRate` argument was not configured, the resource would default to 30. This behavior has been removed and allows for auto frame rate presets to automatically set the appropriate value. - -## Resource: aws_emr_cluster - -### core_instance_count Argument Removal - -Switch your Terraform configuration to the `coreInstanceGroup` configuration block instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -interface MyConfig { - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EmrCluster(this, "example", { - core_instance_count: 2, - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -interface MyConfig { - instanceType: any; - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EmrCluster(this, "example", { - coreInstanceGroup: { - instanceCount: 2, - instanceType: config.instanceType, - }, - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - } -} - -``` - -### core_instance_type Argument Removal - -Switch your Terraform configuration to the `coreInstanceGroup` configuration block instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -interface MyConfig { - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EmrCluster(this, "example", { - core_instance_type: "m4.large", - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -interface MyConfig { - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EmrCluster(this, "example", { - coreInstanceGroup: { - instanceType: "m4.large", - }, - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - } -} - -``` - -### instance_group Configuration Block Removal - -Switch your Terraform configuration to the `masterInstanceGroup` and `coreInstanceGroup` configuration blocks instead. For any task instance groups, use the `awsEmrInstanceGroup` resource. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -interface MyConfig { - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EmrCluster(this, "example", { - instance_group: [ - { - instance_role: "MASTER", - instance_type: "m4.large", - }, - { - instance_count: 1, - instance_role: "CORE", - instance_type: "c4.large", - }, - { - instance_count: 2, - instance_role: "TASK", - instance_type: "c4.xlarge", - }, - ], - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -import { EmrInstanceGroup } from "./.gen/providers/aws/emr-instance-group"; -interface MyConfig { - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new EmrCluster(this, "example", { - coreInstanceGroup: { - instanceCount: 1, - instanceType: "c4.large", - }, - masterInstanceGroup: { - instanceType: "m4.large", - }, - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - const awsEmrInstanceGroupExample = new EmrInstanceGroup(this, "example_1", { - clusterId: example.id, - instanceCount: 2, - instanceType: "c4.xlarge", - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsEmrInstanceGroupExample.overrideLogicalId("example"); - } -} - -``` - -### master_instance_type Argument Removal - -Switch your Terraform configuration to the `masterInstanceGroup` configuration block instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -interface MyConfig { - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EmrCluster(this, "example", { - master_instance_type: "m4.large", - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { EmrCluster } from "./.gen/providers/aws/emr-cluster"; -interface MyConfig { - name: any; - releaseLabel: any; - serviceRole: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new EmrCluster(this, "example", { - masterInstanceGroup: { - instanceType: "m4.large", - }, - name: config.name, - releaseLabel: config.releaseLabel, - serviceRole: config.serviceRole, - }); - } -} - -``` - -## Resource: aws_glue_job - -### allocated_capacity Argument Removal - -The Glue API has deprecated the `allocatedCapacity` argument. Switch your Terraform configuration to the `maxCapacity` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { GlueJob } from "./.gen/providers/aws/glue-job"; -interface MyConfig { - command: any; - name: any; - roleArn: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new GlueJob(this, "example", { - allocated_capacity: 2, - command: config.command, - name: config.name, - roleArn: config.roleArn, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { GlueJob } from "./.gen/providers/aws/glue-job"; -interface MyConfig { - command: any; - name: any; - roleArn: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new GlueJob(this, "example", { - maxCapacity: 2, - command: config.command, - name: config.name, - roleArn: config.roleArn, - }); - } -} - -``` - -## Resource: aws_iam_access_key - -### ses_smtp_password Attribute Removal - -In many regions today and in all regions after October 1, 2020, the [SES API will only accept version 4 signatures](https://docs.aws.amazon.com/ses/latest/DeveloperGuide/using-ses-api-authentication.html). If referencing the `sesSmtpPassword` attribute, switch your Terraform configuration to the `sesSmtpPasswordV4` attribute instead. Please note that this signature is based on the region of the Terraform AWS Provider. If you need the SES v4 password in multiple regions, it may require using [multiple provider instances](https://www.terraform.io/docs/configuration/providers.html#alias-multiple-provider-configurations). 
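The region dependence of `sesSmtpPasswordV4` follows from AWS's published conversion algorithm, which derives the SMTP password from the IAM secret access key through a chain of HMAC-SHA256 signing steps that includes the region. The sketch below is illustrative only (the sample key is AWS's documentation placeholder); in real configurations, read the `sesSmtpPasswordV4` attribute rather than deriving the password yourself:

```typescript
import { createHmac } from "node:crypto";

// Region-scoped derivation of an SES SMTP v4 password from an IAM secret
// access key, following AWS's published conversion algorithm.
function sesSmtpPasswordV4(secretAccessKey: string, region: string): string {
  const sign = (key: Buffer, msg: string): Buffer =>
    createHmac("sha256", key).update(msg, "utf8").digest();
  // Fixed constants from the documented algorithm: date, service, terminal,
  // and the SendRawEmail message.
  let sig = sign(Buffer.from("AWS4" + secretAccessKey, "utf8"), "11111111");
  sig = sign(sig, region); // the region participates in the signing chain
  sig = sign(sig, "ses");
  sig = sign(sig, "aws4_request");
  sig = sign(sig, "SendRawEmail");
  // Prepend the version byte (0x04), then base64-encode.
  return Buffer.concat([Buffer.from([0x04]), sig]).toString("base64");
}

// AWS documentation placeholder secret key, not a real credential.
const placeholderSecret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
const east = sesSmtpPasswordV4(placeholderSecret, "us-east-1");
const west = sesSmtpPasswordV4(placeholderSecret, "us-west-2");
```

Because the region enters the signing chain, the same secret key yields a different SMTP password in every region, which is why the attribute is tied to the provider's region and multiple regions require multiple provider instances.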
- -Depending on when the `awsIamAccessKey` resource was created, it may not have a `sesSmtpPasswordV4` attribute for you to use. If this is the case you will need to [taint](/docs/commands/taint.html) the resource so that it can be recreated with the new value. - -Alternatively, you can stage the change by creating a new `awsIamAccessKey` resource and change any downstream dependencies to use the new `sesSmtpPasswordV4` attribute. Once dependents have been updated with the new resource you can remove the old one. - -## Resource: aws_iam_instance_profile - -### roles Argument Removal - -Switch your Terraform configuration to the `role` argument instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { IamInstanceProfile } from "./.gen/providers/aws/iam-instance-profile"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new IamInstanceProfile(this, "example", { - roles: [awsIamRoleExample.id], - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { IamInstanceProfile } from "./.gen/providers/aws/iam-instance-profile"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new IamInstanceProfile(this, "example", { - role: Token.asString(awsIamRoleExample.id), - }); - } -} - -``` - -## Resource: aws_iam_server_certificate - -### certificate_body, certificate_chain, and private_key Arguments No Longer Stored as Hash - -Previously when the `certificateBody`, `certificateChain`, and `privateKey` arguments were stored in state, they were stored as a hash of the actual value. This hashing has been removed for new or recreated resources to prevent lifecycle issues. - -## Resource: aws_instance - -### ebs_block_device.iops and root_block_device.iops Argument Apply-Time Validations - -Previously when the `iops` argument was configured in either the `ebsBlockDevice` or `rootBlockDevice` configuration block, the Terraform AWS Provider would automatically disregard the value provided to `iops` if the `type` argument was also configured with a value other than `io1` (either explicitly or omitted, indicating the default type `gp2`) as `iops` are only configurable for the `io1` volume type per the AWS EC2 API. This behavior has changed such that the Terraform AWS Provider will instead return an error at apply time indicating an `iops` value is invalid for volume types other than `io1`. -Exceptions to this are in cases where `iops` is set to `null` or `0` such that the Terraform AWS Provider will continue to accept the value regardless of `type`. - -## Resource: aws_lambda_alias - -### Import No Longer Converts Function Name to ARN - -Previously the resource import would always convert the `functionName` portion of the import identifier into the ARN format. Configurations using the Lambda Function name would show this as an unexpected difference after import. 
Now the given value is passed through on import, whether it is a Lambda Function name or an ARN. - -## Resource: aws_launch_template - -### network_interfaces.delete_on_termination Argument type change - -The `networkInterfaces.deleteOnTermination` argument is now of type `string`, allowing an unspecified value for the argument, since the previous `bool` type only allowed `true` or `false` and defaulted to `false` when no value was set. Now, to enforce `deleteOnTermination` to `false`, the string `"false"` (for example, `Token.asString(false)`) must be used. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { LaunchTemplate } from "./.gen/providers/aws/launch-template"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new LaunchTemplate(this, "example", { - networkInterfaces: [ - { - deleteOnTermination: [null], - }, - ], - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { LaunchTemplate } from "./.gen/providers/aws/launch-template"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new LaunchTemplate(this, "example", { - networkInterfaces: [ - { - deleteOnTermination: Token.asString(false), - }, - ], - }); - } -} - -``` - -## Resource: aws_lb_listener_rule - -### condition.field and condition.values Arguments Removal - -Switch your Terraform configuration to use the `hostHeader` or `pathPattern` configuration block instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { LbListenerRule } from "./.gen/providers/aws/lb-listener-rule"; -interface MyConfig { - action: any; - listenerArn: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new LbListenerRule(this, "example", { - condition: [ - { - field: "path-pattern", - values: ["/static/*"], - }, - ], - action: config.action, - listenerArn: config.listenerArn, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { LbListenerRule } from "./.gen/providers/aws/lb-listener-rule"; -interface MyConfig { - action: any; - listenerArn: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new LbListenerRule(this, "example", { - condition: [ - { - pathPattern: { - values: ["/static/*"], - }, - }, - ], - action: config.action, - listenerArn: config.listenerArn, - }); - } -} - -``` - -## Resource: aws_msk_cluster - -### encryption_info.encryption_in_transit.client_broker Default Updated to Match API - -A few weeks after the general availability launch and initial release of the `awsMskCluster` resource, the MSK API default for client broker encryption switched from `TLS_PLAINTEXT` to `TLS`. The attribute default has now been updated to match the more secure API default; however, existing Terraform configurations may show a difference if this setting is not configured. - -To continue using the old default when it was previously not configured, add or modify this configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { MskCluster } from "./.gen/providers/aws/msk-cluster"; -interface MyConfig { - brokerNodeGroupInfo: any; - clusterName: any; - kafkaVersion: any; - numberOfBrokerNodes: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new MskCluster(this, "example", { - encryptionInfo: { - encryptionInTransit: { - clientBroker: "TLS_PLAINTEXT", - }, - }, - brokerNodeGroupInfo: config.brokerNodeGroupInfo, - clusterName: config.clusterName, - kafkaVersion: config.kafkaVersion, - numberOfBrokerNodes: config.numberOfBrokerNodes, - }); - } -} - -``` - -## Resource: aws_rds_cluster - -### scaling_configuration.min_capacity Now Defaults to 1 - -Previously when the `minCapacity` argument in a `scalingConfiguration` block was not configured, the resource would default to 2. This behavior has been updated to align with the AWS RDS Cluster API default of 1. - -## Resource: aws_route53_resolver_rule - -### Removal of trailing period in domain_name argument - -Previously the resource returned the Resolver Rule Domain Name directly from the API, which included a `.` suffix. This proves difficult when many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. For example, when the attribute would previously return a Resolver Rule Domain Name such as `example.com.`, the attribute will now be returned as `example.com`. -While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Resource: aws_route53_zone - -### Removal of trailing period in name argument - -Previously the resource returned the Hosted Zone Domain Name directly from the API, which included a `.` suffix. This proves difficult when many other AWS services do not accept this trailing period (e.g., ACM Certificate). This period is now automatically removed. 
For example, when the attribute would previously return a Hosted Zone Domain Name such as `example.com.`, the attribute will now be returned as `example.com`. -While the returned value will omit the trailing period, use of configurations with trailing periods will not be interrupted. - -## Resource: aws_s3_bucket - -### Removal of Automatic aws_s3_bucket_policy Import - -Previously when importing the `awsS3Bucket` resource with the [`terraform import` command](https://www.terraform.io/docs/commands/import.html), the Terraform AWS Provider would automatically attempt to import an associated `awsS3BucketPolicy` resource as well. This automatic resource import has been removed. Use the [`awsS3BucketPolicy` resource import](/docs/providers/aws/r/s3_bucket_policy.html#import) to import that resource separately. - -### region Attribute Is Now Read-Only - -The `region` attribute is no longer configurable, but it remains as a read-only attribute. The region of the `awsS3Bucket` resource is determined by the region of the Terraform AWS Provider, similar to all other resources. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - region: "us-west-2", - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", {}); - } -} - -``` - -## Resource: aws_s3_bucket_metric - -### filter configuration block Plan-Time Validation Change - -The `filter` configuration block no longer supports the empty block `{}` and requires at least one of the `prefix` or `tags` attributes to be specified. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3BucketMetric } from "./.gen/providers/aws/s3-bucket-metric"; -interface MyConfig { - bucket: any; - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new S3BucketMetric(this, "example", { - filter: {}, - bucket: config.bucket, - name: config.name, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3BucketMetric } from "./.gen/providers/aws/s3-bucket-metric"; -interface MyConfig { - bucket: any; - name: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new S3BucketMetric(this, "example", { - bucket: config.bucket, - name: config.name, - }); - } -} - -``` - -## Resource: aws_security_group - -### Removal of Automatic aws_security_group_rule Import - -Previously when importing the `awsSecurityGroup` resource with the [`terraform import` command](https://www.terraform.io/docs/commands/import.html), the Terraform AWS Provider would automatically attempt to import any associated `awsSecurityGroupRule` resources as well. This automatic resource import has been removed. Use the [`awsSecurityGroupRule` resource import](/docs/providers/aws/r/security_group_rule.html#import) to import those resources separately. - -## Resource: aws_sns_platform_application - -### platform_credential and platform_principal Arguments No Longer Stored as SHA256 Hash - -Previously when the `platformCredential` and `platformPrincipal` arguments were stored in state, they were stored as a SHA256 hash of the actual value. This prevented Terraform from properly updating the resource when necessary, so the hashing has been removed. The Terraform AWS Provider will show an update to these arguments on the first apply after upgrading to version 3.0.0, which fixes the Terraform state by removing the hash. Since the attributes are marked as sensitive, the values in the update will not be visible in the Terraform output. If the non-hashed values have not changed, then no update is occurring other than the Terraform state update. If these arguments are the only two updates and they both match the SHA256 removal, the apply will occur without submitting an actual `SetPlatformApplicationAttributes` API call. 
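To illustrate why the first post-upgrade apply shows an update: the old state held a digest of the credential rather than the credential itself, so Terraform sees a difference between the stored digest and the configured value. The snippet below is a hypothetical sketch (the placeholder value and hex encoding are assumptions for illustration, not the provider's exact implementation):

```typescript
import { createHash } from "node:crypto";

// Pre-upgrade, state stored a SHA256 digest of platform_credential instead
// of the raw value, so the first plan after upgrading shows a (sensitive)
// in-place update from digest to raw value.
// "example-apns-private-key" is a placeholder, not a real credential.
const platformCredential = "example-apns-private-key";
const storedInOldState = createHash("sha256")
  .update(platformCredential, "utf8")
  .digest("hex");
const storedInNewState = platformCredential; // raw value, marked sensitive
```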
- -## Resource: aws_spot_fleet_request - -### valid_until Argument No Longer Uses 24 Hour Default - -Previously when the `validUntil` argument was not configured, the resource would default to a 24-hour request. This behavior has been removed and allows for non-expiring requests. To recreate the old behavior, the [`timeOffset` resource](https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/offset) can potentially be used. - -## Resource: aws_ssm_maintenance_window_task - -### logging_info Configuration Block Removal - -Switch your Terraform configuration to the `outputS3Bucket` and `outputS3KeyPrefix` arguments of the `runCommandParameters` configuration block, nested within the `taskInvocationParameters` configuration block, instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { SsmMaintenanceWindowTask } from "./.gen/providers/aws/ssm-maintenance-window-task"; -interface MyConfig { - taskArn: any; - taskType: any; - windowId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new SsmMaintenanceWindowTask(this, "example", { - logging_info: [ - { - s3_bucket_key_prefix: "example", - s3_bucket_name: awsS3BucketExample.id, - }, - ], - taskArn: config.taskArn, - taskType: config.taskType, - windowId: config.windowId, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { SsmMaintenanceWindowTask } from "./.gen/providers/aws/ssm-maintenance-window-task"; -interface MyConfig { - taskArn: any; - taskType: any; - windowId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new SsmMaintenanceWindowTask(this, "example", { - taskInvocationParameters: { - runCommandParameters: { - outputS3Bucket: Token.asString(awsS3BucketExample.id), - outputS3KeyPrefix: "example", - }, - }, - taskArn: config.taskArn, - taskType: config.taskType, - windowId: config.windowId, - }); - } -} - -``` - -### task_parameters Configuration Block Removal - -Switch your Terraform configuration to `parameter` configuration blocks within the `runCommandParameters` configuration block, nested within the `taskInvocationParameters` configuration block, instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { SsmMaintenanceWindowTask } from "./.gen/providers/aws/ssm-maintenance-window-task"; -interface MyConfig { - taskArn: any; - taskType: any; - windowId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new SsmMaintenanceWindowTask(this, "example", { - task_parameters: [ - { - name: "commands", - values: ["date"], - }, - ], - taskArn: config.taskArn, - taskType: config.taskType, - windowId: config.windowId, - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { SsmMaintenanceWindowTask } from "./.gen/providers/aws/ssm-maintenance-window-task"; -interface MyConfig { - taskArn: any; - taskType: any; - windowId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new SsmMaintenanceWindowTask(this, "example", { - taskInvocationParameters: { - runCommandParameters: { - parameter: [ - { - name: "commands", - values: ["date"], - }, - ], - }, - }, - taskArn: config.taskArn, - taskType: config.taskType, - windowId: config.windowId, - }); - } -} - -``` - - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/version-4-upgrade.html.md b/website/docs/cdktf/typescript/guides/version-4-upgrade.html.md deleted file mode 100644 index 3d703fea736..00000000000 --- a/website/docs/cdktf/typescript/guides/version-4-upgrade.html.md +++ /dev/null @@ -1,5276 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 4 Upgrade Guide" -description: |- - Terraform AWS Provider Version 4 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 4 Upgrade Guide - -Version 4.0.0 of the AWS provider for Terraform is a major release and includes some changes that you will need to consider when upgrading. We intend this guide to help with that process and focus only on changes from version 3.X to version 4.0.0. See the [Version 3 Upgrade Guide](/docs/providers/aws/guides/version-3-upgrade.html) for information about upgrading from 2.X to version 3.0.0. - -We previously marked most of the changes we outline in this guide as deprecated in the Terraform plan/apply output throughout previous provider releases. 
You can find these changes, including deprecation notices, in the [Terraform AWS Provider CHANGELOG](https://github.com/hashicorp/terraform-provider-aws/blob/main/CHANGELOG.md). - -~> **NOTE:** Versions 4.0.0 through v4.8.0 of the AWS Provider introduce significant breaking changes to the `awsS3Bucket` resource. See [S3 Bucket Refactor](#s3-bucket-refactor) for more details. -We recommend upgrading to v4.9.0 or later of the AWS Provider instead, where only non-breaking changes and deprecation notices are introduced to the `awsS3Bucket`. See [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection) for additional considerations when upgrading to v4.9.0 or later. - -~> **NOTE:** Version 4.0.0 of the AWS Provider introduces changes to the precedence of some authentication and configuration parameters. -These changes bring the provider in line with the AWS CLI and SDKs. -See [Changes to Authentication](#changes-to-authentication) for more details. - -~> **NOTE:** Version 4.0.0 of the AWS Provider will be the last major version to support [EC2-Classic resources](#ec2-classic-resource-and-data-source-support) as AWS plans to fully retire EC2-Classic Networking. See the [AWS News Blog](https://aws.amazon.com/blogs/aws/ec2-classic-is-retiring-heres-how-to-prepare/) for additional details. - -~> **NOTE:** Version 4.0.0 of the AWS Provider will be the last major version to support [Macie Classic resources](#macie-classic-resource-support) as AWS plans to fully retire Macie Classic. See the [Amazon Macie Classic FAQs](https://aws.amazon.com/macie/classic-faqs/) for additional details. 
- -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Changes to Authentication](#changes-to-authentication) -- [New Provider Arguments](#new-provider-arguments) -- [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection) (**Applicable to v4.9.0 and later of the AWS Provider**) -- [S3 Bucket Refactor](#s3-bucket-refactor) (**Only applicable to v4.0.0 through v4.8.0 of the AWS Provider**) - - [`accelerationStatus` Argument](#acceleration_status-argument) - - [`acl` Argument](#acl-argument) - - [`corsRule` Argument](#cors_rule-argument) - - [`grant` Argument](#grant-argument) - - [`lifecycleRule` Argument](#lifecycle_rule-argument) - - [`logging` Argument](#logging-argument) - - [`objectLockConfiguration` `rule` Argument](#object_lock_configuration-rule-argument) - - [`policy` Argument](#policy-argument) - - [`replicationConfiguration` Argument](#replication_configuration-argument) - - [`requestPayer` Argument](#request_payer-argument) - - [`serverSideEncryptionConfiguration` Argument](#server_side_encryption_configuration-argument) - - [`versioning` Argument](#versioning-argument) - - [`website`, `websiteDomain`, and `websiteEndpoint` Arguments](#website-website_domain-and-website_endpoint-arguments) -- [Full Resource Lifecycle of Default Resources](#full-resource-lifecycle-of-default-resources) - - [Resource: aws_default_subnet](#resource-aws_default_subnet) - - [Resource: aws_default_vpc](#resource-aws_default_vpc) -- [Plural Data Source Behavior](#plural-data-source-behavior) -- [Empty Strings Not Valid For Certain Resources](#empty-strings-not-valid-for-certain-resources) - - [Resource: aws_cloudwatch_event_target (Empty String)](#resource-aws_cloudwatch_event_target-empty-string) - - [Resource: aws_customer_gateway](#resource-aws_customer_gateway) - - [Resource: aws_default_network_acl](#resource-aws_default_network_acl) - - [Resource: aws_default_route_table](#resource-aws_default_route_table) - 
- [Resource: aws_default_vpc (Empty String)](#resource-aws_default_vpc-empty-string) - - [Resource: aws_efs_mount_target](#resource-aws_efs_mount_target) - - [Resource: aws_elasticsearch_domain](#resource-aws_elasticsearch_domain) - - [Resource: aws_instance](#resource-aws_instance) - - [Resource: aws_network_acl](#resource-aws_network_acl) - - [Resource: aws_route](#resource-aws_route) - - [Resource: aws_route_table](#resource-aws_route_table) - - [Resource: aws_vpc](#resource-aws_vpc) - - [Resource: aws_vpc_ipv6_cidr_block_association](#resource-aws_vpc_ipv6_cidr_block_association) -- [Data Source: aws_cloudwatch_log_group](#data-source-aws_cloudwatch_log_group) -- [Data Source: aws_subnet_ids](#data-source-aws_subnet_ids) -- [Data Source: aws_s3_bucket_object](#data-source-aws_s3_bucket_object) -- [Data Source: aws_s3_bucket_objects](#data-source-aws_s3_bucket_objects) -- [Resource: aws_batch_compute_environment](#resource-aws_batch_compute_environment) -- [Resource: aws_cloudwatch_event_target](#resource-aws_cloudwatch_event_target) -- [Resource: aws_elasticache_cluster](#resource-aws_elasticache_cluster) -- [Resource: aws_elasticache_global_replication_group](#resource-aws_elasticache_global_replication_group) -- [Resource: aws_fsx_ontap_storage_virtual_machine](#resource-aws_fsx_ontap_storage_virtual_machine) -- [Resource: aws_lb_target_group](#resource-aws_lb_target_group) -- [Resource: aws_s3_bucket_object](#resource-aws_s3_bucket_object) - - - -Additional Topics: - - - -- [EC2-Classic resource and data source support](#ec2-classic-resource-and-data-source-support) -- [Macie Classic resource support](#macie-classic-resource-support) - - - -## Provider Version Configuration - --> Before upgrading to version 4.0.0, upgrade to the most recent 3.X version of the provider and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html). You should not see changes you don't expect or deprecation notices. 
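In a CDKTF project, the provider version constraint is normally declared in `cdktf.json` rather than in the stack code itself. A minimal sketch, assuming a TypeScript project (the `app` command and file name are illustrative):

```json
{
  "language": "typescript",
  "app": "npx ts-node main.ts",
  "terraformProviders": ["hashicorp/aws@~> 4.0"]
}
```

After changing the constraint, run `cdktf get` to regenerate the provider bindings against the new major version.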
- -Use [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init -upgrade`](https://www.terraform.io/docs/commands/init.html) to download the new version. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -Update to the latest 4.X version: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -## Changes to Authentication - -The authentication configuration for the AWS Provider has changed in this version to match the behavior of other AWS products, including the AWS SDK and AWS CLI. 
_This will cause authentication failures in AWS provider configurations where you set a non-empty `profile` in the `provider` configuration but the profile does not correspond to an AWS profile with valid credentials._
-
-Precedence for authentication settings is as follows:
-
-* `provider` configuration
-* Environment variables
-* Shared credentials and configuration files (_e.g._, `~/.aws/credentials` and `~/.aws/config`)
-
-In previous versions of the provider, you could explicitly set `profile` in the `provider`, and if the profile did not correspond to valid credentials, the provider would use credentials from environment variables. Starting in v4.0, the Terraform AWS provider enforces the precedence shown above, similarly to how the AWS SDK and AWS CLI behave.
-
-In other words, when you explicitly set `profile` in `provider`, the AWS provider will not use environment variables per the precedence shown above. Before v4.0, if `profile` was configured in the `provider` configuration but did not correspond to an AWS profile or valid credentials, the provider would attempt to use environment variables. **This is no longer the case.** An explicitly set profile that does not have valid credentials will cause an authentication error.
-
-For example, with the following, the environment variables will not be used:
-
-```console
-$ export AWS_ACCESS_KEY_ID="anaccesskey"
-$ export AWS_SECRET_ACCESS_KEY="asecretkey"
-```
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { AwsProvider } from "./.gen/providers/aws/provider";
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string) {
-    super(scope, name);
-    new AwsProvider(this, "aws", {
-      profile: "customprofile",
-      region: "us-west-2",
-    });
-  }
-}
-
-```
-
-## New Provider Arguments
-
-Version 4.x adds these new `provider` arguments:
-
-* `assumeRoleDuration` - Assume role duration as a string, _e.g._, `"1h"` or `"1h30s"`. Terraform AWS Provider v4.0.0 deprecates `assumeRoleDurationSeconds` and a future version will remove it.
-* `customCaBundle` - File containing custom root and intermediate certificates. Can also be configured using the `AWS_CA_BUNDLE` environment variable. (Setting `ca_bundle` in the shared config file is not supported.)
-* `ec2MetadataServiceEndpoint` - Address of the EC2 metadata service (IMDS) endpoint to use. Can also be set with the `AWS_EC2_METADATA_SERVICE_ENDPOINT` environment variable.
-* `ec2MetadataServiceEndpointMode` - Mode to use in communicating with the metadata service. Valid values are `IPv4` and `IPv6`. Can also be set with the `AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE` environment variable.
-* `s3UsePathStyle` - Replaces `s3ForcePathStyle`, which has been deprecated in Terraform AWS Provider v4.0.0 and support will be removed in a future version.
-* `sharedConfigFiles` - List of paths to AWS shared config files. If not set, the default is `[~/.aws/config]`. A single value can also be set with the `AWS_CONFIG_FILE` environment variable.
-* `sharedCredentialsFiles` - List of paths to the shared credentials file. If not set, the default is `[~/.aws/credentials]`. A single value can also be set with the `AWS_SHARED_CREDENTIALS_FILE` environment variable. Replaces `sharedCredentialsFile`, which has been deprecated in Terraform AWS Provider v4.0.0 and support will be removed in a future version.
-* `stsRegion` - Region where AWS STS operations will take place. For example, `us-east-1` and `us-west-2`.
-* `useDualstackEndpoint` - Force the provider to resolve endpoints with DualStack capability. Can also be set with the `AWS_USE_DUALSTACK_ENDPOINT` environment variable or in a shared config file (`use_dualstack_endpoint`).
-* `useFipsEndpoint` - Force the provider to resolve endpoints with FIPS capability. Can also be set with the `AWS_USE_FIPS_ENDPOINT` environment variable or in a shared config file (`use_fips_endpoint`).
-
-~> **NOTE:** Using the `AWS_METADATA_URL` environment variable has been deprecated in Terraform AWS Provider v4.0.0 and support will be removed in a future version. Change any scripts or environments using `AWS_METADATA_URL` to instead use `AWS_EC2_METADATA_SERVICE_ENDPOINT`.
-
-For example, in previous versions, to use FIPS endpoints, you would need to provide all the FIPS endpoints that you wanted to use in the `endpoints` configuration block:
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { AwsProvider } from "./.gen/providers/aws/provider";
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string) {
-    super(scope, name);
-    new AwsProvider(this, "aws", {
-      endpoints: [
-        {
-          ec2: "https://ec2-fips.us-west-2.amazonaws.com",
-          s3: "https://s3-fips.us-west-2.amazonaws.com",
-          sts: "https://sts-fips.us-west-2.amazonaws.com",
-        },
-      ],
-    });
-  }
-}
-
-```
-
-In v4.0.0, you can still set endpoints in the same way. However, you can instead use the `useFipsEndpoint` argument to have the provider automatically resolve FIPS endpoints for all supported services:
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { AwsProvider } from "./.gen/providers/aws/provider";
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string) {
-    super(scope, name);
-    new AwsProvider(this, "aws", {
-      useFipsEndpoint: true,
-    });
-  }
-}
-
-```
-
-Note that the provider can only resolve FIPS endpoints where AWS provides FIPS support. Support depends on the service and may include `us-east-1`, `us-east-2`, `us-west-1`, `us-west-2`, `us-gov-east-1`, `us-gov-west-1`, and `ca-central-1`. For more information, see [Federal Information Processing Standard (FIPS) 140-2](https://aws.amazon.com/compliance/fips/).
-
-## Changes to S3 Bucket Drift Detection
-
-~> **NOTE:** This only applies to v4.9.0 and later of the AWS Provider.
-
-~> **NOTE:** If you are migrating from v3.75.x of the AWS Provider and you have already adopted the standalone S3 bucket resources (e.g. `awsS3BucketLifecycleConfiguration`),
-a [`lifecycle` configuration block to ignore changes](https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes) to the internal parameters of the source `awsS3Bucket` resources will no longer be necessary and can be removed upon upgrade.
-
-~> **NOTE:** In the next major version, v5.0, the parameters listed below will be removed entirely from the `awsS3Bucket` resource.
-For this reason, a deprecation notice is printed in the Terraform CLI for each of the parameters when used in a configuration.
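To illustrate the second note above: under v3.75.x, configurations that had already adopted the standalone resources commonly suppressed drift on the source bucket with an `ignoreChanges` meta-argument. A hedged sketch (bucket name and ignored parameters assumed); after upgrading to v4.9.0 or later, the `lifecycle` block can simply be deleted:

```typescript
// Sketch only: assumes provider bindings generated by `cdktf get`.
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";

class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      // v3.75.x workaround: ignore drift on parameters now managed by
      // standalone resources. Unnecessary on v4.9.0 and later.
      lifecycle: {
        ignoreChanges: ["lifecycle_rule", "cors_rule"],
      },
    });
  }
}
```

Removing the block is safe on v4.9.0+ because the provider only performs drift detection on these parameters when they are explicitly configured.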
- -To remediate the breaking changes introduced to the `awsS3Bucket` resource in v4.0.0 of the AWS Provider, -v4.9.0 and later retain the same configuration parameters of the `awsS3Bucket` resource as in v3.x and functionality of the `awsS3Bucket` resource only differs from v3.x -in that Terraform will only perform drift detection for each of the following parameters if a configuration value is provided: - -* `accelerationStatus` -* `acl` -* `corsRule` -* `grant` -* `lifecycleRule` -* `logging` -* `objectLockConfiguration` -* `policy` -* `replicationConfiguration` -* `requestPayer` -* `serverSideEncryptionConfiguration` -* `versioning` -* `website` - -Thus, if one of these parameters was once configured and then is entirely removed from an `awsS3Bucket` resource configuration, -Terraform will not pick up on these changes on a subsequent `terraform plan` or `terraform apply`. - -For example, given the following configuration with a single `corsRule`: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - corsRule: [ - { - allowedHeaders: ["*"], - allowedMethods: ["PUT", "POST"], - allowedOrigins: ["https://s3-website-test.hashicorp.com"], - exposeHeaders: ["ETag"], - maxAgeSeconds: 3000, - }, - ], - }); - } -} - -``` - -When updated to the following configuration without a `corsRule`: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - } -} - -``` - -Terraform CLI with v4.9.0 of the AWS Provider will report back: - -```console -aws_s3_bucket.example: Refreshing state... [id=yournamehere] -... -No changes. Your infrastructure matches the configuration. -``` - -With that said, to manage changes to these parameters in the `awsS3Bucket` resource, practitioners should configure each parameter's respective standalone resource -and perform updates directly on those new configurations. The parameters are mapped to the standalone resources as follows: - -| `awsS3Bucket` Parameter | Standalone Resource | -|----------------------------------------|------------------------------------------------------| -| `accelerationStatus` | `awsS3BucketAccelerateConfiguration` | -| `acl` | `awsS3BucketAcl` | -| `corsRule` | `awsS3BucketCorsConfiguration` | -| `grant` | `awsS3BucketAcl` | -| `lifecycleRule` | `awsS3BucketLifecycleConfiguration` | -| `logging` | `awsS3BucketLogging` | -| `objectLockConfiguration` | `awsS3BucketObjectLockConfiguration` | -| `policy` | `awsS3BucketPolicy` | -| `replicationConfiguration` | `awsS3BucketReplicationConfiguration` | -| `requestPayer` | `awsS3BucketRequestPaymentConfiguration` | -| `serverSideEncryptionConfiguration` | `awsS3BucketServerSideEncryptionConfiguration` | -| `versioning` | `awsS3BucketVersioning` | -| `website` | `awsS3BucketWebsiteConfiguration` | - -Going back to the earlier example, given the following configuration: - -```typescript -// DO 
NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - corsRule: [ - { - allowedHeaders: ["*"], - allowedMethods: ["PUT", "POST"], - allowedOrigins: ["https://s3-website-test.hashicorp.com"], - exposeHeaders: ["ETag"], - maxAgeSeconds: 3000, - }, - ], - }); - } -} - -``` - -Practitioners can upgrade to v4.9.0 and then introduce the standalone `awsS3BucketCorsConfiguration` resource, e.g. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketCorsConfiguration } from "./.gen/providers/aws/s3-bucket-cors-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketCorsConfigurationExample = new S3BucketCorsConfiguration( - this, - "example_1", - { - bucket: example.id, - corsRule: [ - { - allowedHeaders: ["*"], - allowedMethods: ["PUT", "POST"], - allowedOrigins: ["https://s3-website-test.hashicorp.com"], - exposeHeaders: ["ETag"], - maxAgeSeconds: 3000, - }, - ], - } - ); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsS3BucketCorsConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Depending on the tools available to you, the above configuration can either be directly applied with Terraform or the standalone resource -can be imported into Terraform state. Please refer to each standalone resource's _Import_ documentation for the proper syntax. - -Once the standalone resources are managed by Terraform, updates and removal can be performed as needed. - -The following sections depict standalone resource adoption per individual parameter. Standalone resource adoption is not required to upgrade but is recommended to ensure drift is detected by Terraform. -The examples below are by no means exhaustive. The aim is to provide important concepts when migrating to a standalone resource whose parameters may not entirely align with the corresponding parameter in the `awsS3Bucket` resource. - -### Migrating to `awsS3BucketAccelerateConfiguration` - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - accelerationStatus: "Enabled", - bucket: "yournamehere", - }); - } -} - -``` - -Update the configuration to: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketAccelerateConfiguration } from "./.gen/providers/aws/s3-bucket-accelerate-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketAccelerateConfigurationExample = - new S3BucketAccelerateConfiguration(this, "example_1", { - bucket: example.id, - status: "Enabled", - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketAccelerateConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -### Migrating to `awsS3BucketAcl` - -#### With `acl` - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - acl: "private", - bucket: "yournamehere", - }); - } -} - -``` - -Update the configuration to: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketAcl } from "./.gen/providers/aws/s3-bucket-acl"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketAclExample = new S3BucketAcl(this, "example_1", { - acl: "private", - bucket: example.id, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketAclExample.overrideLogicalId("example"); - } -} - -``` - -#### With `grant` - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - grant: [ - { - id: Token.asString(currentUser.id), - permissions: ["FULL_CONTROL"], - type: "CanonicalUser", - }, - { - permissions: ["READ_ACP", "WRITE"], - type: "Group", - uri: "http://acs.amazonaws.com/groups/s3/LogDelivery", - }, - ], - }); - } -} - -``` - -Update the configuration to: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketAcl } from "./.gen/providers/aws/s3-bucket-acl"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketAclExample = new S3BucketAcl(this, "example_1", { - accessControlPolicy: { - grant: [ - { - grantee: { - id: Token.asString(currentUser.id), - type: "CanonicalUser", - }, - permission: "FULL_CONTROL", - }, - { - grantee: { - type: "Group", - uri: "http://acs.amazonaws.com/groups/s3/LogDelivery", - }, - permission: "READ_ACP", - }, - { - grantee: { - type: "Group", - uri: "http://acs.amazonaws.com/groups/s3/LogDelivery", - }, - permission: "WRITE", - }, - ], - owner: { - id: Token.asString(currentUser.id), - }, - }, - bucket: example.id, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketAclExample.overrideLogicalId("example"); - } -} - -``` - -### Migrating to `awsS3BucketCorsConfiguration` - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */
-import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string) {
-    super(scope, name);
-    new S3Bucket(this, "example", {
-      bucket: "yournamehere",
-      corsRule: [
-        {
-          allowedHeaders: ["*"],
-          allowedMethods: ["PUT", "POST"],
-          allowedOrigins: ["https://s3-website-test.hashicorp.com"],
-          exposeHeaders: ["ETag"],
-          maxAgeSeconds: 3000,
-        },
-      ],
-    });
-  }
-}
-
-```
-
-Update the configuration to:
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
-import { S3BucketCorsConfiguration } from "./.gen/providers/aws/s3-bucket-cors-configuration";
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string) {
-    super(scope, name);
-    const example = new S3Bucket(this, "example", {
-      bucket: "yournamehere",
-    });
-    const awsS3BucketCorsConfigurationExample = new S3BucketCorsConfiguration(
-      this,
-      "example_1",
-      {
-        bucket: example.id,
-        corsRule: [
-          {
-            allowedHeaders: ["*"],
-            allowedMethods: ["PUT", "POST"],
-            allowedOrigins: ["https://s3-website-test.hashicorp.com"],
-            exposeHeaders: ["ETag"],
-            maxAgeSeconds: 3000,
-          },
-        ],
-      }
-    );
-    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
-    awsS3BucketCorsConfigurationExample.overrideLogicalId("example");
-  }
-}
-
-```
-
-### Migrating to `awsS3BucketLifecycleConfiguration`
-
-~> **Note:** In version `3.x` of the provider, the `id` argument of the `lifecycleRule` block was optional, while in version `4.x` the `id` argument of the `rule` block in the `awsS3BucketLifecycleConfiguration` resource is required.
Use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html) to get the source bucket's lifecycle configuration to determine the ID. - -#### For Lifecycle Rules with no `prefix` previously configured - -~> **Note:** When configuring the `ruleFilter` configuration block in the new `awsS3BucketLifecycleConfiguration` resource, use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html) -to get the source bucket's lifecycle configuration and determine if the `filter` is configured as `"Filter" : {}` or `"Filter" : { "Prefix": "" }`. -If AWS returns the former, configure `ruleFilter` as `filter {}`. Otherwise, neither a `ruleFilter` nor `rulePrefix` parameter should be configured as shown here: - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - lifecycleRule: [ - { - enabled: true, - id: "Keep previous version 30 days, then in Glacier another 60", - noncurrentVersionExpiration: { - days: 90, - }, - noncurrentVersionTransition: [ - { - days: 30, - storageClass: "GLACIER", - }, - ], - }, - { - abortIncompleteMultipartUploadDays: 7, - enabled: true, - id: "Delete old incomplete multi-part uploads", - }, - ], - }); - } -} - -``` - -Update the configuration to: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketLifecycleConfigurationExample = - new S3BucketLifecycleConfiguration(this, "example_1", { - bucket: example.id, - rule: [ - { - id: "Keep previous version 30 days, then in Glacier another 60", - noncurrentVersionExpiration: { - noncurrentDays: 90, - }, - noncurrentVersionTransition: [ - { - noncurrentDays: 30, - storageClass: "GLACIER", - }, - ], - status: "Enabled", - }, - { - abortIncompleteMultipartUpload: { - daysAfterInitiation: 7, - }, - id: "Delete old incomplete multi-part uploads", - status: "Enabled", - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -#### For Lifecycle Rules with `prefix` previously configured as an empty string - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      lifecycleRule: [
        {
          enabled: true,
          id: "log-expiration",
          prefix: "",
          transition: [
            {
              days: 30,
              storageClass: "STANDARD_IA",
            },
            {
              days: 180,
              storageClass: "GLACIER",
            },
          ],
        },
      ],
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketLifecycleConfigurationExample =
      new S3BucketLifecycleConfiguration(this, "example_1", {
        bucket: example.id,
        rule: [
          {
            id: "log-expiration",
            status: "Enabled",
            transition: [
              {
                days: 30,
                storageClass: "STANDARD_IA",
              },
              {
                days: 180,
                storageClass: "GLACIER",
              },
            ],
          },
        ],
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example");
  }
}

```

#### For Lifecycle Rules with `prefix`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      lifecycleRule: [
        {
          enabled: true,
          id: "log-expiration",
          prefix: "foobar",
          transition: [
            {
              days: 30,
              storageClass: "STANDARD_IA",
            },
            {
              days: 180,
              storageClass: "GLACIER",
            },
          ],
        },
      ],
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketLifecycleConfigurationExample =
      new S3BucketLifecycleConfiguration(this, "example_1", {
        bucket: example.id,
        rule: [
          {
            filter: {
              prefix: "foobar",
            },
            id: "log-expiration",
            status: "Enabled",
            transition: [
              {
                days: 30,
                storageClass: "STANDARD_IA",
              },
              {
                days: 180,
                storageClass: "GLACIER",
              },
            ],
          },
        ],
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example");
  }
}

```

#### For Lifecycle Rules with `prefix` and `tags`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      lifecycleRule: [
        {
          enabled: true,
          expiration: {
            days: 90,
          },
          id: "log",
          prefix: "log/",
          tags: {
            autoclean: "true",
            rule: "log",
          },
          transition: [
            {
              days: 30,
              storageClass: "STANDARD_IA",
            },
            {
              days: 60,
              storageClass: "GLACIER",
            },
          ],
        },
        {
          enabled: true,
          expiration: {
            date: "2022-12-31",
          },
          id: "tmp",
          prefix: "tmp/",
        },
      ],
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketLifecycleConfigurationExample =
      new S3BucketLifecycleConfiguration(this, "example_1", {
        bucket: example.id,
        rule: [
          {
            expiration: {
              days: 90,
            },
            filter: {
              and: {
                prefix: "log/",
                tags: {
                  autoclean: "true",
                  rule: "log",
                },
              },
            },
            id: "log",
            status: "Enabled",
            transition: [
              {
                days: 30,
                storageClass: "STANDARD_IA",
              },
              {
                days: 60,
                storageClass: "GLACIER",
              },
            ],
          },
          {
            expiration: {
              date: "2022-12-31T00:00:00Z",
            },
            filter: {
              prefix: "tmp/",
            },
            id: "tmp",
            status: "Enabled",
          },
        ],
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example");
  }
}

```

### Migrating to `awsS3BucketLogging`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const logBucket = new S3Bucket(this, "log_bucket", {
      bucket: "example-log-bucket",
    });
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      logging: {
        targetBucket: logBucket.id,
        targetPrefix: "log/",
      },
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketLoggingA } from "./.gen/providers/aws/s3-bucket-logging";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const logBucket = new S3Bucket(this, "log_bucket", {
      bucket: "example-log-bucket",
    });
    const awsS3BucketLoggingExample = new S3BucketLoggingA(this, "example_2", {
      bucket: example.id,
      targetBucket: logBucket.id,
      targetPrefix: "log/",
    });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketLoggingExample.overrideLogicalId("example");
  }
}

```

### Migrating to `awsS3BucketObjectLockConfiguration`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      objectLockConfiguration: {
        objectLockEnabled: "Enabled",
        rule: {
          defaultRetention: {
            days: 3,
            mode: "COMPLIANCE",
          },
        },
      },
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketObjectLockConfigurationA } from "./.gen/providers/aws/s3-bucket-object-lock-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
      objectLockEnabled: true,
    });
    const awsS3BucketObjectLockConfigurationExample =
      new S3BucketObjectLockConfigurationA(this, "example_1", {
        bucket: example.id,
        rule: {
          defaultRetention: {
            days: 3,
            mode: "COMPLIANCE",
          },
        },
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketObjectLockConfigurationExample.overrideLogicalId("example");
  }
}

```

### Migrating to `awsS3BucketPolicy`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      policy:
        '{\n "Id": "Policy1446577137248",\n "Statement": [\n {\n "Action": "s3:PutObject",\n "Effect": "Allow",\n "Principal": {\n "AWS": "${' +
        current.arn +
        '}"\n },\n "Resource": "arn:${' +
        dataAwsPartitionCurrent.partition +
        '}:s3:::yournamehere/*",\n "Sid": "Stmt1446575236270"\n }\n ],\n "Version": "2012-10-17"\n}\n\n',
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketPolicy } from "./.gen/providers/aws/s3-bucket-policy";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketPolicyExample = new S3BucketPolicy(this, "example_1", {
      bucket: example.id,
      policy:
        '{\n "Id": "Policy1446577137248",\n "Statement": [\n {\n "Action": "s3:PutObject",\n "Effect": "Allow",\n "Principal": {\n "AWS": "${' +
        current.arn +
        '}"\n },\n "Resource": "${' +
        example.arn +
        '}/*",\n "Sid": "Stmt1446575236270"\n }\n ],\n "Version": "2012-10-17"\n}\n\n',
    });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketPolicyExample.overrideLogicalId("example");
  }
}

```

### Migrating to `awsS3BucketReplicationConfiguration`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      provider: central,
      replicationConfiguration: {
        role: replication.arn,
        rules: [
          {
            destination: {
              bucket: destination.arn,
              metrics: {
                minutes: 15,
                status: "Enabled",
              },
              replicationTime: {
                minutes: 15,
                status: "Enabled",
              },
              storageClass: "STANDARD",
            },
            filter: {
              tags: {},
            },
            id: "foobar",
            status: "Enabled",
          },
        ],
      },
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketReplicationConfigurationA } from "./.gen/providers/aws/s3-bucket-replication-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      provider: central,
    });
    const awsS3BucketReplicationConfigurationExample =
      new S3BucketReplicationConfigurationA(this, "example_1", {
        bucket: source.id,
        role: replication.arn,
        rule: [
          {
            deleteMarkerReplication: {
              status: "Enabled",
            },
            destination: {
              bucket: destination.arn,
              metrics: {
                eventThreshold: {
                  minutes: 15,
                },
                status: "Enabled",
              },
              replicationTime: {
                status: "Enabled",
                time: {
                  minutes: 15,
                },
              },
              storageClass: "STANDARD",
            },
            filter: {},
            id: "foobar",
            status: "Enabled",
          },
        ],
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketReplicationConfigurationExample.overrideLogicalId("example");
  }
}

```

### Migrating to `awsS3BucketRequestPaymentConfiguration`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      requestPayer: "Requester",
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketRequestPaymentConfiguration } from "./.gen/providers/aws/s3-bucket-request-payment-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketRequestPaymentConfigurationExample =
      new S3BucketRequestPaymentConfiguration(this, "example_1", {
        bucket: example.id,
        payer: "Requester",
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketRequestPaymentConfigurationExample.overrideLogicalId("example");
  }
}

```

### Migrating to `awsS3BucketServerSideEncryptionConfiguration`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      serverSideEncryptionConfiguration: {
        rule: {
          applyServerSideEncryptionByDefault: {
            kmsMasterKeyId: mykey.arn,
            sseAlgorithm: "aws:kms",
          },
        },
      },
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketServerSideEncryptionConfigurationA } from "./.gen/providers/aws/s3-bucket-server-side-encryption-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketServerSideEncryptionConfigurationExample =
      new S3BucketServerSideEncryptionConfigurationA(this, "example_1", {
        bucket: example.id,
        rule: [
          {
            applyServerSideEncryptionByDefault: {
              kmsMasterKeyId: mykey.arn,
              sseAlgorithm: "aws:kms",
            },
          },
        ],
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketServerSideEncryptionConfigurationExample.overrideLogicalId(
      "example"
    );
  }
}

```

### Migrating to `awsS3BucketVersioning`

~> **NOTE:** As `awsS3BucketVersioning` is a separate resource, any S3 objects for which versioning is important (_e.g._, a truststore for mutual TLS authentication) must implicitly or explicitly depend on the `awsS3BucketVersioning` resource. Otherwise, the S3 objects may be created before versioning has been set. [See below](#ensure-objects-depend-on-versioning) for an example. Also note that AWS recommends waiting 15 minutes after enabling versioning on a bucket before putting or deleting objects in/from the bucket.

#### Buckets With Versioning Enabled

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      versioning: {
        enabled: true,
      },
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketVersioningExample = new S3BucketVersioningA(
      this,
      "example_1",
      {
        bucket: example.id,
        versioningConfiguration: {
          status: "Enabled",
        },
      }
    );
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketVersioningExample.overrideLogicalId("example");
  }
}

```

#### Buckets With Versioning Disabled or Suspended

Depending on the version of the Terraform AWS Provider you are migrating from, the interpretation of `versioning.enabled = false` in your `awsS3Bucket` resource will differ, and thus the migration to the `awsS3BucketVersioning` resource will also differ as follows.

If you are migrating from the Terraform AWS Provider `v3.70.0` or later:

* For new S3 buckets, `enabled = false` is synonymous with `Disabled`.

* For existing S3 buckets, `enabled = false` is synonymous with `Suspended`.

If you are migrating from an earlier version of the Terraform AWS Provider:

* For both new and existing S3 buckets, `enabled = false` is synonymous with `Suspended`.

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      versioning: {
        enabled: false,
      },
    });
  }
}

```

Update the configuration to one of the following:

* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was never enabled:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketVersioningExample = new S3BucketVersioningA(
      this,
      "example_1",
      {
        bucket: example.id,
        versioningConfiguration: {
          status: "Disabled",
        },
      }
    );
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketVersioningExample.overrideLogicalId("example");
  }
}

```

* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was enabled at one point:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketVersioningExample = new S3BucketVersioningA(
      this,
      "example_1",
      {
        bucket: example.id,
        versioningConfiguration: {
          status: "Suspended",
        },
      }
    );
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketVersioningExample.overrideLogicalId("example");
  }
}

```

* If migrating from an earlier version of the Terraform AWS Provider:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketVersioningExample = new S3BucketVersioningA(
      this,
      "example_1",
      {
        bucket: example.id,
        versioningConfiguration: {
          status: "Suspended",
        },
      }
    );
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketVersioningExample.overrideLogicalId("example");
  }
}

```

#### Ensure Objects Depend on Versioning

When you create an object whose `versionId` you need and an `awsS3BucketVersioning` resource in the same configuration, you are more likely to have success by ensuring the `s3Object` depends either implicitly (see below) or explicitly (i.e., using `depends_on = [aws_s3_bucket_versioning.example]`) on the `awsS3BucketVersioning` resource.

~> **NOTE:** For critical and/or production S3 objects, do not create a bucket, enable versioning, and create an object in the bucket within the same configuration. Doing so will not allow the AWS-recommended 15 minutes between enabling versioning and writing to the bucket.

This example shows the `awsS3ObjectExample` depending implicitly on the versioning resource through the reference to `awsS3BucketVersioningExampleId` to define `bucket`:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { Token, TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning";
import { S3Object } from "./.gen/providers/aws/s3-object";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yotto",
    });
    const awsS3BucketVersioningExample = new S3BucketVersioningA(
      this,
      "example_1",
      {
        bucket: example.id,
        versioningConfiguration: {
          status: "Enabled",
        },
      }
    );
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketVersioningExample.overrideLogicalId("example");
    const awsS3ObjectExample = new S3Object(this, "example_2", {
      bucket: Token.asString(awsS3BucketVersioningExample.id),
      key: "droeloe",
      source: "example.txt",
    });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3ObjectExample.overrideLogicalId("example");
  }
}

```

### Migrating to `awsS3BucketWebsiteConfiguration`

Given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new S3Bucket(this, "example", {
      bucket: "yournamehere",
      website: {
        errorDocument: "error.html",
        indexDocument: "index.html",
      },
    });
  }
}

```

Update the configuration to:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketWebsiteConfiguration } from "./.gen/providers/aws/s3-bucket-website-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new S3Bucket(this, "example", {
      bucket: "yournamehere",
    });
    const awsS3BucketWebsiteConfigurationExample =
      new S3BucketWebsiteConfiguration(this, "example_1", {
        bucket: example.id,
        errorDocument: {
          key: "error.html",
        },
        indexDocument: {
          suffix: "index.html",
        },
      });
    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
    awsS3BucketWebsiteConfigurationExample.overrideLogicalId("example");
  }
}

```

Given this previous configuration that uses the `awsS3Bucket` parameter `websiteDomain` with `awsRoute53Record`:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { Route53Record } from "./.gen/providers/aws/route53-record";
import { Route53Zone } from "./.gen/providers/aws/route53-zone";
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const main = new Route53Zone(this, "main", {
      name: "domain.test",
    });
    const website = new S3Bucket(this, "website", {
      website: {
        errorDocument: "error.html",
        indexDocument: "index.html",
      },
    });
    new Route53Record(this, "alias", {
      alias: {
        evaluateTargetHealth: true,
        name: website.websiteDomain,
        zoneId: website.hostedZoneId,
      },
      name: "www",
      type: "A",
      zoneId: main.zoneId,
    });
  }
}

```

Update the configuration to use the `awsS3BucketWebsiteConfiguration` resource and its `websiteDomain` parameter:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
 */
import { Route53Record } from "./.gen/providers/aws/route53-record";
import { Route53Zone } from "./.gen/providers/aws/route53-zone";
import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
import { S3BucketWebsiteConfiguration } from "./.gen/providers/aws/s3-bucket-website-configuration";
class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const main = new Route53Zone(this, "main", {
      name: "domain.test",
    });
    const website = new S3Bucket(this, "website", {});
    const example = new S3BucketWebsiteConfiguration(this, "example", {
      bucket: website.id,
      indexDocument: {
        suffix: "index.html",
      },
    });
    new Route53Record(this, "alias", {
      alias: {
        evaluateTargetHealth: true,
        name: example.websiteDomain,
        zoneId: website.hostedZoneId,
      },
      name: "www",
      type: "A",
      zoneId: main.zoneId,
    });
  }
}

```

## S3 Bucket Refactor

~> **NOTE:** This only applies to v4.0.0 through v4.8.0 of the AWS Provider, which introduce significant breaking changes to the `awsS3Bucket` resource. We recommend upgrading to v4.9.0 of the AWS Provider instead. See the section above, [Changes to S3 Bucket Drift Detection](#changes-to-s3-bucket-drift-detection), for additional upgrade considerations.

To help distribute the management of S3 bucket settings via independent resources, various arguments and attributes in the `awsS3Bucket` resource have become **read-only**.

Configurations dependent on these arguments should be updated to use the corresponding `awsS3Bucket_*` resource in order to prevent Terraform from reporting “unconfigurable attribute” errors for read-only arguments. Once updated, it is recommended to import new `awsS3Bucket_*` resources into Terraform state.
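
As a sketch of that import step, suppose the configuration now contains an `aws_s3_bucket_versioning` resource named `example` for a bucket named `yournamehere` (both names are placeholders for your own resource and bucket names). The import follows the usual pattern of passing the bucket name as the import ID:

```console
$ terraform import aws_s3_bucket_versioning.example yournamehere
```

Repeat this for each new `awsS3Bucket_*` resource you add, using its own resource address and the same bucket name as the ID.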

In the event practitioners do not anticipate future modifications to the S3 bucket settings associated with these read-only arguments, or drift detection is not needed, these read-only arguments should be removed from `awsS3Bucket` resource configurations in order to prevent Terraform from reporting “unconfigurable attribute” errors; the states of these arguments will be preserved but are subject to change with modifications made outside Terraform.

~> **NOTE:** Each of the new `awsS3Bucket_*` resources relies on S3 API calls that utilize a `put` action in order to modify the target S3 bucket. These calls follow standard HTTP methods for REST APIs, and therefore **should** handle situations where the target configuration already exists. While it is not strictly necessary to import new `awsS3Bucket_*` resources where the updated configuration matches the configuration used in previous versions of the AWS provider, skipping this step will lead to a diff in the first plan after a configuration change, indicating that any new `awsS3Bucket_*` resources will be created, making it more difficult to determine whether the appropriate actions will be taken.

### `accelerationStatus` Argument

Switch your Terraform configuration to the [`awsS3BucketAccelerateConfiguration` resource](/docs/providers/aws/r/s3_bucket_accelerate_configuration.html) instead.

For example, given this previous configuration:

```typescript
// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
/*
 * Provider bindings are generated by running `cdktf get`.
 * See https://cdk.tf/provider-generation for more details.
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - accelerationStatus: "Enabled", - bucket: "yournamehere", - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "acceleration_status": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `accelerationStatus` is now read only, update your configuration to use the `awsS3BucketAccelerateConfiguration` -resource and remove `accelerationStatus` in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketAccelerateConfiguration } from "./.gen/providers/aws/s3-bucket-accelerate-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketAccelerateConfigurationExample = - new S3BucketAccelerateConfiguration(this, "example_1", { - bucket: example.id, - status: "Enabled", - }); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsS3BucketAccelerateConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_accelerate_configuration.example yournamehere -aws_s3_bucket_accelerate_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_accelerate_configuration.example: Import prepared! - Prepared aws_s3_bucket_accelerate_configuration for import -aws_s3_bucket_accelerate_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `acl` Argument - -Switch your Terraform configuration to the [`awsS3BucketAcl` resource](/docs/providers/aws/r/s3_bucket_acl.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - acl: "private", - bucket: "yournamehere", - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "acl": its value will be decided automatically based on the result of applying this configuration. 
-``` - -Since `acl` is now read only, update your configuration to use the `awsS3BucketAcl` -resource and remove the `acl` argument in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketAcl } from "./.gen/providers/aws/s3-bucket-acl"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketAclExample = new S3BucketAcl(this, "example_1", { - acl: "private", - bucket: example.id, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketAclExample.overrideLogicalId("example"); - } -} - -``` - -~> **NOTE:** When importing into `awsS3BucketAcl`, make sure you use the S3 bucket name (_e.g._, `yournamehere` in the example above) as part of the ID, and _not_ the Terraform bucket configuration name (_e.g._, `example` in the example above). - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_acl.example yournamehere,private -aws_s3_bucket_acl.example: Importing from ID "yournamehere,private"... -aws_s3_bucket_acl.example: Import prepared! - Prepared aws_s3_bucket_acl for import -aws_s3_bucket_acl.example: Refreshing state... [id=yournamehere,private] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. 
-``` - -### `corsRule` Argument - -Switch your Terraform configuration to the [`awsS3BucketCorsConfiguration` resource](/docs/providers/aws/r/s3_bucket_cors_configuration.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - corsRule: [ - { - allowedHeaders: ["*"], - allowedMethods: ["PUT", "POST"], - allowedOrigins: ["https://s3-website-test.hashicorp.com"], - exposeHeaders: ["ETag"], - maxAgeSeconds: 3000, - }, - ], - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "cors_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `corsRule` is now read only, update your configuration to use the `awsS3BucketCorsConfiguration` -resource and remove `corsRule` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketCorsConfiguration } from "./.gen/providers/aws/s3-bucket-cors-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketCorsConfigurationExample = new S3BucketCorsConfiguration( - this, - "example_1", - { - bucket: example.id, - corsRule: [ - { - allowedHeaders: ["*"], - allowedMethods: ["PUT", "POST"], - allowedOrigins: ["https://s3-website-test.hashicorp.com"], - exposeHeaders: ["ETag"], - maxAgeSeconds: 3000, - }, - ], - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketCorsConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_cors_configuration.example yournamehere -aws_s3_bucket_cors_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_cors_configuration.example: Import prepared! - Prepared aws_s3_bucket_cors_configuration for import -aws_s3_bucket_cors_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `grant` Argument - -Switch your Terraform configuration to the [`awsS3BucketAcl` resource](/docs/providers/aws/r/s3_bucket_acl.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - grant: [ - { - id: Token.asString(currentUser.id), - permissions: ["FULL_CONTROL"], - type: "CanonicalUser", - }, - { - permissions: ["READ_ACP", "WRITE"], - type: "Group", - uri: "http://acs.amazonaws.com/groups/s3/LogDelivery", - }, - ], - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "grant": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `grant` is now read only, update your configuration to use the `awsS3BucketAcl` -resource and remove `grant` in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketAcl } from "./.gen/providers/aws/s3-bucket-acl"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketAclExample = new S3BucketAcl(this, "example_1", { - accessControlPolicy: { - grant: [ - { - grantee: { - id: Token.asString(currentUser.id), - type: "CanonicalUser", - }, - permission: "FULL_CONTROL", - }, - { - grantee: { - type: "Group", - uri: "http://acs.amazonaws.com/groups/s3/LogDelivery", - }, - permission: "READ_ACP", - }, - { - grantee: { - type: "Group", - uri: "http://acs.amazonaws.com/groups/s3/LogDelivery", - }, - permission: "WRITE", - }, - ], - owner: { - id: Token.asString(currentUser.id), - }, - }, - bucket: example.id, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketAclExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_acl.example yournamehere -aws_s3_bucket_acl.example: Importing from ID "yournamehere"... -aws_s3_bucket_acl.example: Import prepared! - Prepared aws_s3_bucket_acl for import -aws_s3_bucket_acl.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `lifecycleRule` Argument - -Switch your Terraform configuration to the [`awsS3BucketLifecycleConfiguration` resource](/docs/providers/aws/r/s3_bucket_lifecycle_configuration.html) instead. - -#### For Lifecycle Rules with no `prefix` previously configured - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - lifecycleRule: [ - { - enabled: true, - id: "Keep previous version 30 days, then in Glacier another 60", - noncurrentVersionExpiration: { - days: 90, - }, - noncurrentVersionTransition: [ - { - days: 30, - storageClass: "GLACIER", - }, - ], - }, - { - abortIncompleteMultipartUploadDays: 7, - enabled: true, - id: "Delete old incomplete multi-part uploads", - }, - ], - }); - } -} - -``` - -You will receive the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `lifecycleRule` argument changed to read-only, update the configuration to use the `awsS3BucketLifecycleConfiguration` -resource and remove `lifecycleRule` and its nested arguments in the `awsS3Bucket` resource. - -~> **Note:** When configuring the `ruleFilter` configuration block in the new `awsS3BucketLifecycleConfiguration` resource, use the AWS CLI s3api [get-bucket-lifecycle-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-lifecycle-configuration.html) -to get the source bucket's lifecycle configuration and determine if the `filter` is configured as `"Filter" : {}` or `"Filter" : { "Prefix": "" }`. 
-If AWS returns the former, configure `ruleFilter` as `filter {}`. Otherwise, neither a `ruleFilter` nor `rulePrefix` parameter should be configured as shown here: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketLifecycleConfigurationExample = - new S3BucketLifecycleConfiguration(this, "example_1", { - bucket: example.id, - rule: [ - { - id: "Keep previous version 30 days, then in Glacier another 60", - noncurrentVersionExpiration: { - noncurrentDays: 90, - }, - noncurrentVersionTransition: [ - { - noncurrentDays: 30, - storageClass: "GLACIER", - }, - ], - status: "Enabled", - }, - { - abortIncompleteMultipartUpload: { - daysAfterInitiation: 7, - }, - id: "Delete old incomplete multi-part uploads", - status: "Enabled", - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere -aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_lifecycle_configuration.example: Import prepared! 
- Prepared aws_s3_bucket_lifecycle_configuration for import -aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -#### For Lifecycle Rules with `prefix` previously configured as an empty string - -For example, given this configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - lifecycleRule: [ - { - enabled: true, - id: "log-expiration", - prefix: "", - transition: [ - { - days: 30, - storageClass: "STANDARD_IA", - }, - { - days: 180, - storageClass: "GLACIER", - }, - ], - }, - ], - }); - } -} - -``` - -You will receive the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `lifecycleRule` argument changed to read-only, update the configuration to use the `awsS3BucketLifecycleConfiguration` -resource and remove `lifecycleRule` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketLifecycleConfigurationExample = - new S3BucketLifecycleConfiguration(this, "example_1", { - bucket: example.id, - rule: [ - { - id: "log-expiration", - status: "Enabled", - transition: [ - { - days: 30, - storageClass: "STANDARD_IA", - }, - { - days: 180, - storageClass: "GLACIER", - }, - ], - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere -aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_lifecycle_configuration.example: Import prepared! - Prepared aws_s3_bucket_lifecycle_configuration for import -aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -#### For Lifecycle Rules with `prefix` - -For example, given this configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - lifecycleRule: [ - { - enabled: true, - id: "log-expiration", - prefix: "foobar", - transition: [ - { - days: 30, - storageClass: "STANDARD_IA", - }, - { - days: 180, - storageClass: "GLACIER", - }, - ], - }, - ], - }); - } -} - -``` - -You will receive the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `lifecycleRule` argument changed to read-only, update the configuration to use the `awsS3BucketLifecycleConfiguration` -resource and remove `lifecycleRule` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketLifecycleConfigurationExample = - new S3BucketLifecycleConfiguration(this, "example_1", { - bucket: example.id, - rule: [ - { - filter: { - prefix: "foobar", - }, - id: "log-expiration", - status: "Enabled", - transition: [ - { - days: 30, - storageClass: "STANDARD_IA", - }, - { - days: 180, - storageClass: "GLACIER", - }, - ], - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere -aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_lifecycle_configuration.example: Import prepared! - Prepared aws_s3_bucket_lifecycle_configuration for import -aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -#### For Lifecycle Rules with `prefix` and `tags` - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - lifecycleRule: [ - { - enabled: true, - expiration: { - days: 90, - }, - id: "log", - prefix: "log/", - tags: { - autoclean: "true", - rule: "log", - }, - transition: [ - { - days: 30, - storageClass: "STANDARD_IA", - }, - { - days: 60, - storageClass: "GLACIER", - }, - ], - }, - { - enabled: true, - expiration: { - date: "2022-12-31", - }, - id: "tmp", - prefix: "tmp/", - }, - ], - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "lifecycle_rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `lifecycleRule` is now read only, update your configuration to use the `awsS3BucketLifecycleConfiguration` -resource and remove `lifecycleRule` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketLifecycleConfiguration } from "./.gen/providers/aws/s3-bucket-lifecycle-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketLifecycleConfigurationExample = - new S3BucketLifecycleConfiguration(this, "example_1", { - bucket: example.id, - rule: [ - { - expiration: { - days: 90, - }, - filter: { - and: { - prefix: "log/", - tags: { - autoclean: "true", - rule: "log", - }, - }, - }, - id: "log", - status: "Enabled", - transition: [ - { - days: 30, - storageClass: "STANDARD_IA", - }, - { - days: 60, - storageClass: "GLACIER", - }, - ], - }, - { - expiration: { - date: "2022-12-31T00:00:00Z", - }, - filter: { - prefix: "tmp/", - }, - id: "tmp", - status: "Enabled", - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketLifecycleConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_lifecycle_configuration.example yournamehere -aws_s3_bucket_lifecycle_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_lifecycle_configuration.example: Import prepared! - Prepared aws_s3_bucket_lifecycle_configuration for import -aws_s3_bucket_lifecycle_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `logging` Argument - -Switch your Terraform configuration to the [`awsS3BucketLogging` resource](/docs/providers/aws/r/s3_bucket_logging.html) instead. 
- -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const logBucket = new S3Bucket(this, "log_bucket", { - bucket: "example-log-bucket", - }); - new S3Bucket(this, "example", { - bucket: "yournamehere", - logging: { - targetBucket: logBucket.id, - targetPrefix: "log/", - }, - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "logging": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `logging` is now read only, update your configuration to use the `awsS3BucketLogging` -resource and remove `logging` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketLoggingA } from "./.gen/providers/aws/s3-bucket-logging"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const logBucket = new S3Bucket(this, "log_bucket", { - bucket: "example-log-bucket", - }); - const awsS3BucketLoggingExample = new S3BucketLoggingA(this, "example_2", { - bucket: example.id, - targetBucket: logBucket.id, - targetPrefix: "log/", - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketLoggingExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_logging.example yournamehere -aws_s3_bucket_logging.example: Importing from ID "yournamehere"... -aws_s3_bucket_logging.example: Import prepared! - Prepared aws_s3_bucket_logging for import -aws_s3_bucket_logging.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `objectLockConfiguration` `rule` Argument - -Switch your Terraform configuration to the [`awsS3BucketObjectLockConfiguration` resource](/docs/providers/aws/r/s3_bucket_object_lock_configuration.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - objectLockConfiguration: { - objectLockEnabled: "Enabled", - rule: { - defaultRetention: { - days: 3, - mode: "COMPLIANCE", - }, - }, - }, - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "object_lock_configuration.0.rule": its value will be decided automatically based on the result of applying this configuration. -``` - -Since the `rule` argument of the `objectLockConfiguration` configuration block changed to read-only, update your configuration to use the `awsS3BucketObjectLockConfiguration` -resource and remove `rule` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketObjectLockConfigurationA } from "./.gen/providers/aws/s3-bucket-object-lock-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - objectLockEnabled: true, - }); - const awsS3BucketObjectLockConfigurationExample = - new S3BucketObjectLockConfigurationA(this, "example_1", { - bucket: example.id, - rule: { - defaultRetention: { - days: 3, - mode: "COMPLIANCE", - }, - }, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketObjectLockConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_object_lock_configuration.example yournamehere -aws_s3_bucket_object_lock_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_object_lock_configuration.example: Import prepared! - Prepared aws_s3_bucket_object_lock_configuration for import -aws_s3_bucket_object_lock_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `policy` Argument - -Switch your Terraform configuration to the [`awsS3BucketPolicy` resource](/docs/providers/aws/r/s3_bucket_policy.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - policy: - '{\n "Id": "Policy1446577137248",\n "Statement": [\n {\n "Action": "s3:PutObject",\n "Effect": "Allow",\n "Principal": {\n "AWS": "${' + - current.arn + - '}"\n },\n "Resource": "arn:${' + - dataAwsPartitionCurrent.partition + - '}:s3:::yournamehere/*",\n "Sid": "Stmt1446575236270"\n }\n ],\n "Version": "2012-10-17"\n}\n\n', - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "policy": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `policy` is now read only, update your configuration to use the `awsS3BucketPolicy` -resource and remove `policy` in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketPolicy } from "./.gen/providers/aws/s3-bucket-policy"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketPolicyExample = new S3BucketPolicy(this, "example_1", { - bucket: example.id, - policy: - '{\n "Id": "Policy1446577137248",\n "Statement": [\n {\n "Action": "s3:PutObject",\n "Effect": "Allow",\n "Principal": {\n "AWS": "${' + - current.arn + - '}"\n },\n "Resource": "${' + - example.arn + - '}/*",\n "Sid": "Stmt1446575236270"\n }\n ],\n "Version": "2012-10-17"\n}\n\n', - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketPolicyExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_policy.example yournamehere -aws_s3_bucket_policy.example: Importing from ID "yournamehere"... -aws_s3_bucket_policy.example: Import prepared! - Prepared aws_s3_bucket_policy for import -aws_s3_bucket_policy.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `replicationConfiguration` Argument - -Switch your Terraform configuration to the [`awsS3BucketReplicationConfiguration` resource](/docs/providers/aws/r/s3_bucket_replication_configuration.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - provider: central, - replicationConfiguration: { - role: replication.arn, - rules: [ - { - destination: { - bucket: destination.arn, - metrics: { - minutes: 15, - status: "Enabled", - }, - replicationTime: { - minutes: 15, - status: "Enabled", - }, - storageClass: "STANDARD", - }, - filter: { - tags: {}, - }, - id: "foobar", - status: "Enabled", - }, - ], - }, - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "replication_configuration": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `replicationConfiguration` is now read only, update your configuration to use the `awsS3BucketReplicationConfiguration` -resource and remove `replicationConfiguration` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketReplicationConfigurationA } from "./.gen/providers/aws/s3-bucket-replication-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - provider: central, - }); - const awsS3BucketReplicationConfigurationExample = - new S3BucketReplicationConfigurationA(this, "example_1", { - bucket: example.id, - role: replication.arn, - rule: [ - { - deleteMarkerReplication: { - status: "Enabled", - }, - destination: { - bucket: destination.arn, - metrics: { - eventThreshold: { - minutes: 15, - }, - status: "Enabled", - }, - replicationTime: { - status: "Enabled", - time: { - minutes: 15, - }, - }, - storageClass: "STANDARD", - }, - filter: {}, - id: "foobar", - status: "Enabled", - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketReplicationConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_replication_configuration.example yournamehere -aws_s3_bucket_replication_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_replication_configuration.example: Import prepared! - Prepared aws_s3_bucket_replication_configuration for import -aws_s3_bucket_replication_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `requestPayer` Argument - -Switch your Terraform configuration to the [`awsS3BucketRequestPaymentConfiguration` resource](/docs/providers/aws/r/s3_bucket_request_payment_configuration.html) instead. 
- -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - requestPayer: "Requester", - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "request_payer": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `requestPayer` is now read only, update your configuration to use the `awsS3BucketRequestPaymentConfiguration` -resource and remove `requestPayer` in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketRequestPaymentConfiguration } from "./.gen/providers/aws/s3-bucket-request-payment-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketRequestPaymentConfigurationExample = - new S3BucketRequestPaymentConfiguration(this, "example_1", { - bucket: example.id, - payer: "Requester", - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketRequestPaymentConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_request_payment_configuration.example yournamehere -aws_s3_bucket_request_payment_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_request_payment_configuration.example: Import prepared! - Prepared aws_s3_bucket_request_payment_configuration for import -aws_s3_bucket_request_payment_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `serverSideEncryptionConfiguration` Argument - -Switch your Terraform configuration to the [`awsS3BucketServerSideEncryptionConfiguration` resource](/docs/providers/aws/r/s3_bucket_server_side_encryption_configuration.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - serverSideEncryptionConfiguration: { - rule: { - applyServerSideEncryptionByDefault: { - kmsMasterKeyId: mykey.arn, - sseAlgorithm: "aws:kms", - }, - }, - }, - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "server_side_encryption_configuration": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `serverSideEncryptionConfiguration` is now read only, update your configuration to use the `awsS3BucketServerSideEncryptionConfiguration` -resource and remove `serverSideEncryptionConfiguration` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketServerSideEncryptionConfigurationA } from "./.gen/providers/aws/s3-bucket-server-side-encryption-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketServerSideEncryptionConfigurationExample = - new S3BucketServerSideEncryptionConfigurationA(this, "example_1", { - bucket: example.id, - rule: [ - { - applyServerSideEncryptionByDefault: { - kmsMasterKeyId: mykey.arn, - sseAlgorithm: "aws:kms", - }, - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketServerSideEncryptionConfigurationExample.overrideLogicalId( - "example" - ); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_server_side_encryption_configuration.example yournamehere -aws_s3_bucket_server_side_encryption_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_server_side_encryption_configuration.example: Import prepared! - Prepared aws_s3_bucket_server_side_encryption_configuration for import -aws_s3_bucket_server_side_encryption_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -### `versioning` Argument - -Switch your Terraform configuration to the [`awsS3BucketVersioning` resource](/docs/providers/aws/r/s3_bucket_versioning.html) instead. 
- -~> **NOTE:** As `awsS3BucketVersioning` is a separate resource, any S3 objects for which versioning is important (_e.g._, a truststore for mutual TLS authentication) must implicitly or explicitly depend on the `awsS3BucketVersioning` resource. Otherwise, the S3 objects may be created before versioning has been set. [See below](#ensure-objects-depend-on-versioning) for an example. Also note that AWS recommends waiting 15 minutes after enabling versioning on a bucket before putting or deleting objects in/from the bucket. - -#### Buckets With Versioning Enabled - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - versioning: { - enabled: true, - }, - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `versioning` is now read only, update your configuration to use the `awsS3BucketVersioning` -resource and remove `versioning` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketVersioningExample = new S3BucketVersioningA( - this, - "example_1", - { - bucket: example.id, - versioningConfiguration: { - status: "Enabled", - }, - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketVersioningExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_versioning.example yournamehere -aws_s3_bucket_versioning.example: Importing from ID "yournamehere"... -aws_s3_bucket_versioning.example: Import prepared! - Prepared aws_s3_bucket_versioning for import -aws_s3_bucket_versioning.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. -``` - -#### Buckets With Versioning Disabled or Suspended - -Depending on the version of the Terraform AWS Provider you are migrating from, the interpretation of `versioning.enabled = false` -in your `awsS3Bucket` resource will differ and thus the migration to the `awsS3BucketVersioning` resource will also differ as follows. 
- -If you are migrating from the Terraform AWS Provider `v3.70.0` or later: - -* For new S3 buckets, `enabled = false` is synonymous with `Disabled`. -* For existing S3 buckets, `enabled = false` is synonymous with `Suspended`. - -If you are migrating from an earlier version of the Terraform AWS Provider: - -* For both new and existing S3 buckets, `enabled = false` is synonymous with `Suspended`. - -Given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - versioning: { - enabled: false, - }, - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "versioning": its value will be decided automatically based on the result of applying this configuration. -``` - -Since `versioning` is now read only, update your configuration to use the `awsS3BucketVersioning` -resource and remove `versioning` and its nested arguments in the `awsS3Bucket` resource. - -* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was never enabled: - - ```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketVersioningExample = new S3BucketVersioningA( - this, - "example_1", - { - bucket: example.id, - versioningConfiguration: { - status: "Disabled", - }, - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketVersioningExample.overrideLogicalId("example"); - } -} - -``` - -* If migrating from Terraform AWS Provider `v3.70.0` or later and bucket versioning was enabled at one point: - - ```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketVersioningExample = new S3BucketVersioningA( - this, - "example_1", - { - bucket: example.id, - versioningConfiguration: { - status: "Suspended", - }, - } - ); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsS3BucketVersioningExample.overrideLogicalId("example"); - } -} - -``` - -* If migrating from an earlier version of Terraform AWS Provider: - - ```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketVersioningExample = new S3BucketVersioningA( - this, - "example_1", - { - bucket: example.id, - versioningConfiguration: { - status: "Suspended", - }, - } - ); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketVersioningExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_versioning.example yournamehere -aws_s3_bucket_versioning.example: Importing from ID "yournamehere"... -aws_s3_bucket_versioning.example: Import prepared! - Prepared aws_s3_bucket_versioning for import -aws_s3_bucket_versioning.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. 
-``` - -#### Ensure Objects Depend on Versioning - -When you create an object whose `versionId` you need and an `awsS3BucketVersioning` resource in the same configuration, you are more likely to have success by ensuring the `awsS3Object` depends either implicitly (see below) or explicitly (i.e., using `depends_on = [aws_s3_bucket_versioning.example]`) on the `awsS3BucketVersioning` resource. - -~> **NOTE:** For critical and/or production S3 objects, do not create a bucket, enable versioning, and create an object in the bucket within the same configuration. Doing so will not allow the AWS-recommended 15 minutes between enabling versioning and writing to the bucket. - -This example shows the `awsS3ObjectExample` depending implicitly on the versioning resource through the reference to `awsS3BucketVersioningExample.id` to define `bucket`: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketVersioningA } from "./.gen/providers/aws/s3-bucket-versioning"; -import { S3Object } from "./.gen/providers/aws/s3-object"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yotto", - }); - const awsS3BucketVersioningExample = new S3BucketVersioningA( - this, - "example_1", - { - bucket: example.id, - versioningConfiguration: { - status: "Enabled", - }, - } - ); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsS3BucketVersioningExample.overrideLogicalId("example"); - const awsS3ObjectExample = new S3Object(this, "example_2", { - bucket: Token.asString(awsS3BucketVersioningExample.id), - key: "droeloe", - source: "example.txt", - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3ObjectExample.overrideLogicalId("example"); - } -} - -``` - -### `website`, `websiteDomain`, and `websiteEndpoint` Arguments - -Switch your Terraform configuration to the [`awsS3BucketWebsiteConfiguration` resource](/docs/providers/aws/r/s3_bucket_website_configuration.html) instead. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new S3Bucket(this, "example", { - bucket: "yournamehere", - website: { - errorDocument: "error.html", - indexDocument: "index.html", - }, - }); - } -} - -``` - -You will get the following error after upgrading: - -``` -│ Error: Value for unconfigurable attribute -│ -│ with aws_s3_bucket.example, -│ on main.tf line 1, in resource "aws_s3_bucket" "example": -│ 1: resource "aws_s3_bucket" "example" { -│ -│ Can't configure a value for "website": its value will be decided automatically based on the result of applying this configuration. 
-``` - -Since `website` is now read only, update your configuration to use the `awsS3BucketWebsiteConfiguration` -resource and remove `website` and its nested arguments in the `awsS3Bucket` resource: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -import { S3BucketWebsiteConfiguration } from "./.gen/providers/aws/s3-bucket-website-configuration"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new S3Bucket(this, "example", { - bucket: "yournamehere", - }); - const awsS3BucketWebsiteConfigurationExample = - new S3BucketWebsiteConfiguration(this, "example_1", { - bucket: example.id, - errorDocument: { - key: "error.html", - }, - indexDocument: { - suffix: "index.html", - }, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsS3BucketWebsiteConfigurationExample.overrideLogicalId("example"); - } -} - -``` - -Run `terraform import` on each new resource, _e.g._, - -```console -$ terraform import aws_s3_bucket_website_configuration.example yournamehere -aws_s3_bucket_website_configuration.example: Importing from ID "yournamehere"... -aws_s3_bucket_website_configuration.example: Import prepared! - Prepared aws_s3_bucket_website_configuration for import -aws_s3_bucket_website_configuration.example: Refreshing state... [id=yournamehere] - -Import successful! - -The resources that were imported are shown above. These resources are now in -your Terraform state and will henceforth be managed by Terraform. 
-``` - -For example, if you use the `awsS3Bucket` attribute `websiteDomain` with `awsRoute53Record`, as shown below, you will need to update your configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Route53Record } from "./.gen/providers/aws/route53-record"; -import { Route53Zone } from "./.gen/providers/aws/route53-zone"; -import { S3Bucket } from "./.gen/providers/aws/s3-bucket"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const main = new Route53Zone(this, "main", { - name: "domain.test", - }); - const website = new S3Bucket(this, "website", { - website: { - errorDocument: "error.html", - indexDocument: "index.html", - }, - }); - new Route53Record(this, "alias", { - alias: { - evaluateTargetHealth: true, - name: website.websiteDomain, - zoneId: website.hostedZoneId, - }, - name: "www", - type: "A", - zoneId: main.zoneId, - }); - } -} - -``` - -Instead, you will now use the `awsS3BucketWebsiteConfiguration` resource and its `websiteDomain` attribute: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */
-import { Route53Record } from "./.gen/providers/aws/route53-record";
-import { Route53Zone } from "./.gen/providers/aws/route53-zone";
-import { S3Bucket } from "./.gen/providers/aws/s3-bucket";
-import { S3BucketWebsiteConfiguration } from "./.gen/providers/aws/s3-bucket-website-configuration";
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string) {
-    super(scope, name);
-    const main = new Route53Zone(this, "main", {
-      name: "domain.test",
-    });
-    const website = new S3Bucket(this, "website", {});
-    const example = new S3BucketWebsiteConfiguration(this, "example", {
-      bucket: website.id,
-      indexDocument: {
-        suffix: "index.html",
-      },
-    });
-    new Route53Record(this, "alias", {
-      alias: {
-        evaluateTargetHealth: true,
-        name: example.websiteDomain,
-        zoneId: website.hostedZoneId,
-      },
-      name: "www",
-      type: "A",
-      zoneId: main.zoneId,
-    });
-  }
-}
-
-```
-
-## Full Resource Lifecycle of Default Resources
-
-Default subnets and VPCs now support the full resource lifecycle, including resource creation and deletion.
-
-### Resource: aws_default_subnet
-
-The `awsDefaultSubnet` resource behaves differently from normal resources in that if a default subnet exists in the specified Availability Zone, Terraform does not _create_ this resource, but instead "adopts" it into management.
-If no default subnet exists, Terraform creates a new default subnet.
-By default, `terraform destroy` does not delete the default subnet but does remove the resource from Terraform state.
-Set the `forceDestroy` argument to `true` to delete the default subnet.
-
-For example, given this previous configuration with no existing default subnet:
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details. - */ -import { DefaultSubnet } from "./.gen/providers/aws/default-subnet"; -import { AwsProvider } from "./.gen/providers/aws/provider"; -interface MyConfig { - availabilityZone: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new AwsProvider(this, "aws", { - region: "eu-west-2", - }); - new DefaultSubnet(this, "default", { - availabilityZone: config.availabilityZone, - }); - } -} - -``` - -The following error was thrown on `terraform apply`: - -``` -│ Error: Default subnet not found. -│ -│ with aws_default_subnet.default, -│ on main.tf line 5, in resource "aws_default_subnet" "default": -│ 5: resource "aws_default_subnet" "default" {} -``` - -Now after upgrading, the above configuration will apply successfully. - -To delete the default subnet, the above configuration should be updated as follows: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DefaultSubnet } from "./.gen/providers/aws/default-subnet"; -interface MyConfig { - availabilityZone: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DefaultSubnet(this, "default", { - forceDestroy: true, - availabilityZone: config.availabilityZone, - }); - } -} - -``` - -### Resource: aws_default_vpc - -The `awsDefaultVpc` resource behaves differently from normal resources in that if a default VPC exists, Terraform does not _create_ this resource, but instead "adopts" it into management. 
-If no default VPC exists, Terraform creates a new default VPC, which leads to the implicit creation of [other resources](https://docs.aws.amazon.com/vpc/latest/userguide/default-vpc.html#default-vpc-components). -By default, `terraform destroy` does not delete the default VPC but does remove the resource from Terraform state. -Set the `forceDestroy` argument to `true` to delete the default VPC. - -For example, given this previous configuration with no existing default VPC: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DefaultVpc } from "./.gen/providers/aws/default-vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DefaultVpc(this, "default", {}); - } -} - -``` - -The following error was thrown on `terraform apply`: - -``` -│ Error: No default VPC found in this region. -│ -│ with aws_default_vpc.default, -│ on main.tf line 5, in resource "aws_default_vpc" "default": -│ 5: resource "aws_default_vpc" "default" {} -``` - -Now after upgrading, the above configuration will apply successfully. - -To delete the default VPC, the above configuration should be updated to: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { DefaultVpc } from "./.gen/providers/aws/default-vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DefaultVpc(this, "default", { - forceDestroy: true, - }); - } -} - -``` - -## Plural Data Source Behavior - -The following plural data sources are now consistent with [Provider Design](https://hashicorp.github.io/terraform-provider-aws/provider-design/#plural-data-sources) -such that they no longer return an error if zero results are found. - -* [aws_cognito_user_pools](/docs/providers/aws/d/cognito_user_pools.html) -* [aws_db_event_categories](/docs/providers/aws/d/db_event_categories.html) -* [aws_ebs_volumes](/docs/providers/aws/d/ebs_volumes.html) -* [aws_ec2_coip_pools](/docs/providers/aws/d/ec2_coip_pools.html) -* [aws_ec2_local_gateway_route_tables](/docs/providers/aws/d/ec2_local_gateway_route_tables.html) -* [aws_ec2_local_gateway_virtual_interface_groups](/docs/providers/aws/d/ec2_local_gateway_virtual_interface_groups.html) -* [aws_ec2_local_gateways](/docs/providers/aws/d/ec2_local_gateways.html) -* [aws_ec2_transit_gateway_route_tables](/docs/providers/aws/d/ec2_transit_gateway_route_tables.html) -* [aws_efs_access_points](/docs/providers/aws/d/efs_access_points.html) -* [aws_emr_release_labels](/docs/providers/aws/d/emr_release_labels.markdown) -* [aws_inspector_rules_packages](/docs/providers/aws/d/inspector_rules_packages.html) -* [aws_ip_ranges](/docs/providers/aws/d/ip_ranges.html) -* [aws_network_acls](/docs/providers/aws/d/network_acls.html) -* [aws_route_tables](/docs/providers/aws/d/route_tables.html) -* [aws_security_groups](/docs/providers/aws/d/security_groups.html) -* [aws_ssoadmin_instances](/docs/providers/aws/d/ssoadmin_instances.html) -* [aws_vpcs](/docs/providers/aws/d/vpcs.html) -* [aws_vpc_peering_connections](/docs/providers/aws/d/vpc_peering_connections.html) - -## Empty Strings Not Valid For Certain Resources - -First, this is a breaking 
change but should affect very few configurations. - -Second, the motivation behind this change is that previously, you might set an argument to `""` to explicitly convey it is empty. However, with the introduction of `null` in Terraform 0.12 and to prepare for continuing enhancements that distinguish between unset arguments and those that have a value, including an empty string (`""`), we are moving away from this use of zero values. We ask practitioners to either use `null` instead or remove the arguments that are set to `""`. - -### Resource: aws_cloudwatch_event_target (Empty String) - -Previously, you could set `ecsTarget0LaunchType` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `launch_type = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { CloudwatchEventTarget } from "./.gen/providers/aws/cloudwatch-event-target"; -interface MyConfig { - arn: any; - rule: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new CloudwatchEventTarget(this, "example", { - ecsTarget: { - launchType: "", - taskCount: 1, - taskDefinitionArn: task.arn, - }, - arn: config.arn, - rule: config.rule, - }); - } -} - -``` - -We fix this configuration by setting `launchType` to `null`: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details.
- */
-import { CloudwatchEventTarget } from "./.gen/providers/aws/cloudwatch-event-target";
-interface MyConfig {
-  arn: any;
-  rule: any;
-}
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string, config: MyConfig) {
-    super(scope, name);
-    new CloudwatchEventTarget(this, "example", {
-      ecsTarget: {
-        launchType: null,
-        taskCount: 1,
-        taskDefinitionArn: task.arn,
-      },
-      arn: config.arn,
-      rule: config.rule,
-    });
-  }
-}
-
-```
-
-### Resource: aws_customer_gateway
-
-Previously, you could set `ipAddress` to `""`, and the error would come back from AWS. Now, the provider itself rejects the empty string.
-
-### Resource: aws_default_network_acl
-
-Previously, you could set `egress.*CidrBlock`, `egress.*Ipv6CidrBlock`, `ingress.*CidrBlock`, or `ingress.*Ipv6CidrBlock` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid:
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */ -import { DefaultNetworkAcl } from "./.gen/providers/aws/default-network-acl"; -interface MyConfig { - action: any; - fromPort: any; - protocol: any; - ruleNo: any; - toPort: any; - defaultNetworkAclId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DefaultNetworkAcl(this, "example", { - egress: [ - { - cidrBlock: "0.0.0.0/0", - ipv6CidrBlock: "", - action: config.action, - fromPort: config.fromPort, - protocol: config.protocol, - ruleNo: config.ruleNo, - toPort: config.toPort, - }, - ], - defaultNetworkAclId: config.defaultNetworkAclId, - }); - } -} - -``` - -To fix this configuration, we remove the empty-string configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DefaultNetworkAcl } from "./.gen/providers/aws/default-network-acl"; -interface MyConfig { - action: any; - fromPort: any; - protocol: any; - ruleNo: any; - toPort: any; - defaultNetworkAclId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DefaultNetworkAcl(this, "example", { - egress: [ - { - cidrBlock: "0.0.0.0/0", - action: config.action, - fromPort: config.fromPort, - protocol: config.protocol, - ruleNo: config.ruleNo, - toPort: config.toPort, - }, - ], - defaultNetworkAclId: config.defaultNetworkAclId, - }); - } -} - -``` - -### Resource: aws_default_route_table - -Previously, you could set `route.*CidrBlock` or `route.*Ipv6CidrBlock` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. 
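
The migration in these empty-string sections is mechanical, so it can help to normalize values once before handing them to cdktf constructs. The sketch below is illustrative only (plain TypeScript, not `cdktf convert` output; the helper name is ours): it maps the old `""`-means-unset convention to `undefined`, which cdktf omits from the synthesized configuration (the HCL equivalent of `null`).

```typescript
// Illustrative helper (not part of the AWS provider or cdktf): maps the
// pre-v4 convention of "" meaning "not set" to an actual unset value,
// which cdktf omits from the synthesized HCL (i.e., null).
function emptyToUnset(value: string | undefined): string | undefined {
  return value === "" ? undefined : value;
}

console.log(emptyToUnset(""));            // prints: undefined
console.log(emptyToUnset("10.0.0.0/16")); // prints: 10.0.0.0/16
```

Applying such a helper to optional CIDR and address arguments captures the "set the argument to `null` or remove it" guidance in one place.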
- -For example, this type of configuration is now not valid: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { conditional, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DefaultRouteTable } from "./.gen/providers/aws/default-route-table"; -interface MyConfig { - defaultRouteTableId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DefaultRouteTable(this, "example", { - route: [ - { - cidrBlock: Token.asString(conditional(ipv6, "", destination)), - ipv6CidrBlock: Token.asString(conditional(ipv6, destinationIpv6, "")), - }, - ], - defaultRouteTableId: config.defaultRouteTableId, - }); - } -} - -``` - -We fix this configuration by using `null` instead of an empty string (`""`): - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { conditional, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { DefaultRouteTable } from "./.gen/providers/aws/default-route-table"; -interface MyConfig { - defaultRouteTableId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DefaultRouteTable(this, "example", { - route: [ - { - cidrBlock: Token.asString(conditional(ipv6, "null", destination)), - ipv6CidrBlock: Token.asString( - conditional(ipv6, destinationIpv6, "null") - ), - }, - ], - defaultRouteTableId: config.defaultRouteTableId, - }); - } -} - -``` - -### Resource: aws_default_vpc (Empty String) - -Previously, you could set `ipv6CidrBlock` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -### Resource: aws_instance - -Previously, you could set `privateIp` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `private_ip = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Instance } from "./.gen/providers/aws/instance"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Instance(this, "example", { - instanceType: "t2.micro", - privateIp: "", - }); - } -} - -``` - -We fix this configuration by removing the empty-string configuration: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Instance } from "./.gen/providers/aws/instance"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Instance(this, "example", { - instanceType: "t2.micro", - }); - } -} - -``` - -### Resource: aws_efs_mount_target - -Previously, you could set `ipAddress` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ip_address = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: `ip_address = ""`. - -### Resource: aws_elasticsearch_domain - -Previously, you could set `ebsOptions0VolumeType` to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `volume_type = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Op, conditional, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { ElasticsearchDomain } from "./.gen/providers/aws/elasticsearch-domain"; -interface MyConfig { - domainName: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ElasticsearchDomain(this, "example", { - ebsOptions: { - ebsEnabled: true, - volumeSize: volumeSize.numberValue, - volumeType: Token.asString( - conditional(Op.gt(volumeSize.value, 0), volumeType, "") - ), - }, - domainName: config.domainName, - }); - } -} - -``` - -We fix this configuration by using `null` instead of `""`: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Op, conditional, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { ElasticsearchDomain } from "./.gen/providers/aws/elasticsearch-domain"; -interface MyConfig { - domainName: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new ElasticsearchDomain(this, "example", { - ebsOptions: { - ebsEnabled: true, - volumeSize: volumeSize.numberValue, - volumeType: Token.asString( - conditional(Op.gt(volumeSize.value, 0), volumeType, "null") - ), - }, - domainName: config.domainName, - }); - } -} - -``` - -### Resource: aws_network_acl - -Previously, `egress.*CidrBlock`, `egress.*Ipv6CidrBlock`, `ingress.*CidrBlock`, and `ingress.*Ipv6CidrBlock` could be set to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { NetworkAcl } from "./.gen/providers/aws/network-acl"; -interface MyConfig { - vpcId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new NetworkAcl(this, "example", { - egress: [ - { - cidrBlock: "0.0.0.0/0", - ipv6CidrBlock: "", - }, - ], - vpcId: config.vpcId, - }); - } -} - -``` - -We fix this configuration by removing the empty-string configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { NetworkAcl } from "./.gen/providers/aws/network-acl"; -interface MyConfig { - vpcId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new NetworkAcl(this, "example", { - egress: [ - { - cidrBlock: "0.0.0.0/0", - }, - ], - vpcId: config.vpcId, - }); - } -} - -``` - -### Resource: aws_route - -Previously, `destinationCidrBlock` and `destinationIpv6CidrBlock` could be set to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `destination_ipv6_cidr_block = null`) or remove the empty-string configuration. - -In addition, now exactly one of `destinationCidrBlock`, `destinationIpv6CidrBlock`, and `destinationPrefixListId` can be set. - -For example, this type of configuration for `awsRoute` is now not valid: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { conditional, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Route } from "./.gen/providers/aws/route"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Route(this, "example", { - destinationCidrBlock: Token.asString(conditional(ipv6, "", destination)), - destinationIpv6CidrBlock: Token.asString( - conditional(ipv6, destinationIpv6, "") - ), - gatewayId: Token.asString(awsInternetGatewayExample.id), - routeTableId: Token.asString(awsRouteTableExample.id), - }); - } -} - -``` - -We fix this configuration by using `null` instead of an empty-string (`""`): - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { conditional, Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Route } from "./.gen/providers/aws/route"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Route(this, "example", { - destinationCidrBlock: Token.asString( - conditional(ipv6, "null", destination) - ), - destinationIpv6CidrBlock: Token.asString( - conditional(ipv6, destinationIpv6, "null") - ), - gatewayId: Token.asString(awsInternetGatewayExample.id), - routeTableId: Token.asString(awsRouteTableExample.id), - }); - } -} - -``` - -### Resource: aws_route_table - -Previously, `route.*CidrBlock` and `route.*Ipv6CidrBlock` could be set to `""`. However, the value `""` is no longer valid. 
Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration.
-
-For example, this type of configuration is now not valid:
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { conditional, Token, TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { RouteTable } from "./.gen/providers/aws/route-table";
-interface MyConfig {
-  vpcId: any;
-}
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string, config: MyConfig) {
-    super(scope, name);
-    new RouteTable(this, "example", {
-      route: [
-        {
-          cidrBlock: Token.asString(conditional(ipv6, "", destination)),
-          ipv6CidrBlock: Token.asString(conditional(ipv6, destinationIpv6, "")),
-        },
-      ],
-      vpcId: config.vpcId,
-    });
-  }
-}
-
-```
-
-We fix this configuration by using `null` instead of an empty string (`""`):
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { conditional, Token, TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { RouteTable } from "./.gen/providers/aws/route-table";
-interface MyConfig {
-  vpcId: any;
-}
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string, config: MyConfig) {
-    super(scope, name);
-    new RouteTable(this, "example", {
-      route: [
-        {
-          cidrBlock: Token.asString(conditional(ipv6, "null", destination)),
-          ipv6CidrBlock: Token.asString(
-            conditional(ipv6, destinationIpv6, "null")
-          ),
-        },
-      ],
-      vpcId: config.vpcId,
-    });
-  }
-}
-
-```
-
-### Resource: aws_vpc
-
-Previously, `ipv6CidrBlock` could be set to `""`. 
However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -For example, this type of configuration is now not valid: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Vpc } from "./.gen/providers/aws/vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Vpc(this, "example", { - cidrBlock: "10.1.0.0/16", - ipv6CidrBlock: "", - }); - } -} - -``` - -We fix this configuration by removing `ipv6CidrBlock`: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Vpc } from "./.gen/providers/aws/vpc"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new Vpc(this, "example", { - cidrBlock: "10.1.0.0/16", - }); - } -} - -``` - -### Resource: aws_vpc_ipv6_cidr_block_association - -Previously, `ipv6CidrBlock` could be set to `""`. However, the value `""` is no longer valid. Now, set the argument to `null` (_e.g._, `ipv6_cidr_block = null`) or remove the empty-string configuration. - -## Data Source: aws_cloudwatch_log_group - -### Removal of arn Wildcard Suffix - -Previously, the data source returned the ARN directly from the API, which included a `:*` suffix to denote all CloudWatch Log Streams under the CloudWatch Log Group. 
Most other AWS resources and services that return ARNs do not use the `:*` suffix. The suffix is now automatically removed. For example, the data source previously returned an ARN such as `arn:aws:logs:us-east-1:123456789012:log-group:/example:*` but will now return `arn:aws:logs:us-east-1:123456789012:log-group:/example`.
-
-Workarounds, such as using `replace()` as shown below, should be removed:
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import { Fn, Token, TerraformStack } from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { DataAwsCloudwatchLogGroup } from "./.gen/providers/aws/data-aws-cloudwatch-log-group";
-import { DatasyncTask } from "./.gen/providers/aws/datasync-task";
-interface MyConfig {
-  destinationLocationArn: any;
-  sourceLocationArn: any;
-}
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string, config: MyConfig) {
-    super(scope, name);
-    const example = new DataAwsCloudwatchLogGroup(this, "example", {
-      name: "example",
-    });
-    const awsDatasyncTaskExample = new DatasyncTask(this, "example_1", {
-      cloudwatchLogGroupArn: Token.asString(
-        Fn.replace(Token.asString(example.arn), ":*", "")
-      ),
-      destinationLocationArn: config.destinationLocationArn,
-      sourceLocationArn: config.sourceLocationArn,
-    });
-    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
-    awsDatasyncTaskExample.overrideLogicalId("example");
-  }
-}
-
-```
-
-Removing the `:*` suffix is a breaking change for some configurations. Fix these configurations using string interpolations as demonstrated below. For example, this configuration is now broken:
-
-```typescript
-// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsIamPolicyDocument } from "./.gen/providers/aws/data-aws-iam-policy-document"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsIamPolicyDocument(this, "ad-log-policy", { - statement: [ - { - actions: ["logs:CreateLogStream", "logs:PutLogEvents"], - effect: "Allow", - principals: [ - { - identifiers: ["ds.amazonaws.com"], - type: "Service", - }, - ], - resources: [Token.asString(example.arn)], - }, - ], - }); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsIamPolicyDocument } from "./.gen/providers/aws/data-aws-iam-policy-document"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new DataAwsIamPolicyDocument(this, "ad-log-policy", { - statement: [ - { - actions: ["logs:CreateLogStream", "logs:PutLogEvents"], - effect: "Allow", - principals: [ - { - identifiers: ["ds.amazonaws.com"], - type: "Service", - }, - ], - resources: ["${" + example.arn + "}:*"], - }, - ], - }); - } -} - -``` - -## Data Source: aws_subnet_ids - -The `awsSubnetIds` data source has been deprecated and will be removed in a future version. Use the `awsSubnets` data source instead. - -For example, change a configuration such as - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import {
-  Token,
-  TerraformIterator,
-  TerraformOutput,
-  TerraformStack,
-} from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`.
- * See https://cdk.tf/provider-generation for more details.
- */
-import { DataAwsSubnetIds } from "./.gen/providers/aws/data-aws-subnet-ids";
-import { DataAwsSubnet } from "./.gen/providers/aws/data-aws-subnet";
-class MyConvertedCode extends TerraformStack {
-  constructor(scope: Construct, name: string) {
-    super(scope, name);
-    const example = new DataAwsSubnetIds(this, "example", {
-      vpcId: vpcId.value,
-    });
-    /*In most cases loops should be handled in the programming language context and
-    not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input
-    you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source
-    you need to keep this like it is.*/
-    const exampleForEachIterator = TerraformIterator.fromList(
-      Token.asAny(example.ids)
-    );
-    const dataAwsSubnetExample = new DataAwsSubnet(this, "example_1", {
-      id: Token.asString(exampleForEachIterator.value),
-      forEach: exampleForEachIterator,
-    });
-    /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/
-    dataAwsSubnetExample.overrideLogicalId("example");
-    new TerraformOutput(this, "subnet_cidr_blocks", {
-      value:
-        "${[ for s in ${" + dataAwsSubnetExample.fqn + "} : s.cidr_block]}",
-    });
-  }
-}
-
-```
-
-to
-
-```typescript
-// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug
-import { Construct } from "constructs";
-import {
-  Token,
-  TerraformIterator,
-  TerraformOutput,
-  TerraformStack,
-} from "cdktf";
-/*
- * Provider bindings are generated by running `cdktf get`. 
- * See https://cdk.tf/provider-generation for more details. - */ -import { DataAwsSubnet } from "./.gen/providers/aws/data-aws-subnet"; -import { DataAwsSubnets } from "./.gen/providers/aws/data-aws-subnets"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - const example = new DataAwsSubnets(this, "example", { - filter: [ - { - name: "vpc-id", - values: [vpcId.stringValue], - }, - ], - }); - /*In most cases loops should be handled in the programming language context and - not inside of the Terraform context. If you are looping over something external, e.g. a variable or a file input - you should consider using a for loop. If you are looping over something only known to Terraform, e.g. a result of a data source - you need to keep this like it is.*/ - const exampleForEachIterator = TerraformIterator.fromList( - Token.asAny(example.ids) - ); - const dataAwsSubnetExample = new DataAwsSubnet(this, "example_1", { - id: Token.asString(exampleForEachIterator.value), - forEach: exampleForEachIterator, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - dataAwsSubnetExample.overrideLogicalId("example"); - new TerraformOutput(this, "subnet_cidr_blocks", { - value: - "${[ for s in ${" + dataAwsSubnetExample.fqn + "} : s.cidr_block]}", - }); - } -} - -``` - -## Data Source: aws_s3_bucket_object - -Version 4.x deprecates the `awsS3BucketObject` data source. Maintainers will remove it in a future version. Use `awsS3Object` instead, where new features and fixes will be added. - -## Data Source: aws_s3_bucket_objects - -Version 4.x deprecates the `awsS3BucketObjects` data source. Maintainers will remove it in a future version. Use `awsS3Objects` instead, where new features and fixes will be added. - -## Resource: aws_batch_compute_environment - -You can no longer specify `computeResources` when `type` is `unmanaged`. 
- -Previously, you could apply this configuration and the provider would ignore any compute resources: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { BatchComputeEnvironment } from "./.gen/providers/aws/batch-compute-environment"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new BatchComputeEnvironment(this, "test", { - computeEnvironmentName: "test", - computeResources: { - instanceRole: ecsInstance.arn, - instanceType: ["c4.large"], - maxVcpus: 16, - minVcpus: 0, - securityGroupIds: [Token.asString(awsSecurityGroupTest.id)], - subnets: [Token.asString(awsSubnetTest.id)], - type: "EC2", - }, - serviceRole: batchService.arn, - type: "UNMANAGED", - }); - } -} - -``` - -Now, this configuration is invalid and will result in an error during plan. - -To resolve this error, simply remove or comment out the `computeResources` configuration block. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { BatchComputeEnvironment } from "./.gen/providers/aws/batch-compute-environment"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new BatchComputeEnvironment(this, "test", { - computeEnvironmentName: "test", - serviceRole: batchService.arn, - type: "UNMANAGED", - }); - } -} - -``` - -## Resource: aws_cloudwatch_event_target - -### Removal of `ecsTarget` `launchType` default value - -Previously, the provider assigned `ecsTarget` `launchType` the default value of `ec2` if you did not configure a value. However, the provider no longer assigns a default value. - -For example, previously you could workaround the default value by using an empty string (`""`), as shown: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { CloudwatchEventTarget } from "./.gen/providers/aws/cloudwatch-event-target"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new CloudwatchEventTarget(this, "test", { - arn: Token.asString(awsEcsClusterTest.id), - ecsTarget: { - launchType: "", - networkConfiguration: { - subnets: [subnet.id], - }, - taskCount: 1, - taskDefinitionArn: task.arn, - }, - roleArn: Token.asString(awsIamRoleTest.arn), - rule: Token.asString(awsCloudwatchEventRuleTest.id), - }); - } -} - -``` - -This is no longer necessary. We fix the configuration by removing the empty string assignment: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { CloudwatchEventTarget } from "./.gen/providers/aws/cloudwatch-event-target"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new CloudwatchEventTarget(this, "test", { - arn: Token.asString(awsEcsClusterTest.id), - ecsTarget: { - networkConfiguration: { - subnets: [subnet.id], - }, - taskCount: 1, - taskDefinitionArn: task.arn, - }, - roleArn: Token.asString(awsIamRoleTest.arn), - rule: Token.asString(awsCloudwatchEventRuleTest.id), - }); - } -} - -``` - -## Resource: aws_elasticache_cluster - -### Error raised if neither `engine` nor `replicationGroupId` is specified - -Previously, when you did not specify either `engine` or `replicationGroupId`, Terraform would not prevent you from applying the invalid configuration. -Now, this will produce an error similar to the one below: - -``` -Error: Invalid combination of arguments - - with aws_elasticache_cluster.example, - on terraform_plugin_test.tf line 2, in resource "aws_elasticache_cluster" "example": - 2: resource "aws_elasticache_cluster" "example" { - - "replication_group_id": one of `engine,replication_group_id` must be - specified - - Error: Invalid combination of arguments - - with aws_elasticache_cluster.example, - on terraform_plugin_test.tf line 2, in resource "aws_elasticache_cluster" "example": - 2: resource "aws_elasticache_cluster" "example" { - - "engine": one of `engine,replication_group_id` must be specified -``` - -Update your configuration to supply one of `engine` or `replicationGroupId`. 
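As an illustrative sketch only (the cluster name, node type, and values are hypothetical, and provider bindings are assumed to have been generated with `cdktf get`), a standalone cluster that satisfies the new validation by supplying `engine` might look like:

```typescript
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
// Assumed path to the generated AWS provider bindings.
import { ElasticacheCluster } from "./.gen/providers/aws/elasticache-cluster";

class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new ElasticacheCluster(this, "example", {
      clusterId: "example",
      // One of `engine` or `replicationGroupId` must now be specified.
      engine: "redis",
      nodeType: "cache.t3.micro",
      numCacheNodes: 1,
    });
  }
}
```

Clusters that are members of a replication group would instead set `replicationGroupId` and omit `engine`.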
- -## Resource: aws_elasticache_global_replication_group - -### Removal of the `actual_engine_version` attribute - -Update your Terraform configuration to use the `engineVersionActual` attribute in place of `actualEngineVersion`. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformOutput, TerraformStack } from "cdktf"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new TerraformOutput( - this, - "elasticache_global_replication_group_version_result", - { - value: example.actualEngineVersion, - } - ); - } -} - -``` - -An updated configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformOutput, TerraformStack } from "cdktf"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new TerraformOutput( - this, - "elasticache_global_replication_group_version_result", - { - value: example.engineVersionActual, - } - ); - } -} - -``` - -## Resource: aws_fsx_ontap_storage_virtual_machine - -We removed the misspelled argument `activeDirectoryConfiguration0SelfManagedActiveDirectoryConfiguration0OrganizationalUnitDistinguidshedName` that we previously deprecated. Use `activeDirectoryConfiguration0SelfManagedActiveDirectoryConfiguration0OrganizationalUnitDistinguishedName` instead. Terraform will automatically migrate the state to `activeDirectoryConfiguration0SelfManagedActiveDirectoryConfiguration0OrganizationalUnitDistinguishedName` during planning. - -## Resource: aws_lb_target_group - -For `protocol = "TCP"`, you can no longer set `stickinessType` to `lbCookie` even when `enabled = false`. 
Instead, either change the `protocol` to `"http"` or `"https"`, or change `stickinessType` to `"sourceIp"`. - -For example, this configuration is no longer valid: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { LbTargetGroup } from "./.gen/providers/aws/lb-target-group"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new LbTargetGroup(this, "test", { - port: 25, - protocol: "TCP", - stickiness: { - enabled: false, - type: "lb_cookie", - }, - vpcId: Token.asString(awsVpcTest.id), - }); - } -} - -``` - -To fix this, we change the `stickinessType` to `"sourceIp"`. - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { LbTargetGroup } from "./.gen/providers/aws/lb-target-group"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new LbTargetGroup(this, "test", { - port: 25, - protocol: "TCP", - stickiness: { - enabled: false, - type: "source_ip", - }, - vpcId: Token.asString(awsVpcTest.id), - }); - } -} - -``` - -## Resource: aws_s3_bucket_object - -Version 4.x deprecates the `awsS3BucketObject` and maintainers will remove it in a future version. Use `awsS3Object` instead, where new features and fixes will be added. - -When replacing `awsS3BucketObject` with `awsS3Object` in your configuration, on the next apply, Terraform will recreate the object. 
If you prefer not to have Terraform recreate the object, import the object using `awsS3Object`. - -For example, the following will import an S3 object into state, assuming the configuration exists, as `awsS3ObjectExample`: - -```console -% terraform import aws_s3_object.example s3://some-bucket-name/some/key.txt -``` - -~> **CAUTION:** We do not recommend modifying the state file manually. If you do, you can make it unusable. However, if you accept that risk, some community members have upgraded to the new resource by searching and replacing `"type": "aws_s3_bucket_object",` with `"type": "aws_s3_object",` in the state file, and then running `terraform apply -refresh-only`. - -## EC2-Classic Resource and Data Source Support - -While an upgrade to this major version will not directly impact EC2-Classic resources configured with Terraform, -it is important to keep in mind that the following AWS Provider resources will eventually no longer -be compatible with EC2-Classic as AWS completes their EC2-Classic networking retirement (expected around August 15, 2022). 
- -* Running or stopped [EC2 instances](/docs/providers/aws/r/instance.html) -* Running or stopped [RDS database instances](/docs/providers/aws/r/db_instance.html) -* [Elastic IP addresses](/docs/providers/aws/r/eip.html) -* [Classic Load Balancers](/docs/providers/aws/r/lb.html) -* [Redshift clusters](/docs/providers/aws/r/redshift_cluster.html) -* [Elastic Beanstalk environments](/docs/providers/aws/r/elastic_beanstalk_environment.html) -* [EMR clusters](/docs/providers/aws/r/emr_cluster.html) -* [AWS Data Pipelines pipelines](/docs/providers/aws/r/datapipeline_pipeline.html) -* [ElastiCache clusters](/docs/providers/aws/r/elasticache_cluster.html) -* [Spot Requests](/docs/providers/aws/r/spot_instance_request.html) -* [Capacity Reservations](/docs/providers/aws/r/ec2_capacity_reservation.html) - -## Macie Classic Resource Support - -These resources should be considered deprecated and will be removed in version 5.0.0. - -* Macie Member Account Association -* Macie S3 Bucket Association - - \ No newline at end of file diff --git a/website/docs/cdktf/typescript/guides/version-5-upgrade.html.md b/website/docs/cdktf/typescript/guides/version-5-upgrade.html.md deleted file mode 100644 index 1101b15c4a6..00000000000 --- a/website/docs/cdktf/typescript/guides/version-5-upgrade.html.md +++ /dev/null @@ -1,818 +0,0 @@ ---- -subcategory: "" -layout: "aws" -page_title: "Terraform AWS Provider Version 5 Upgrade Guide" -description: |- - Terraform AWS Provider Version 5 Upgrade Guide ---- - - - -# Terraform AWS Provider Version 5 Upgrade Guide - -Version 5.0.0 of the AWS provider for Terraform is a major release and includes changes that you need to consider when upgrading. This guide will help with that process and focuses only on changes from version 4.x to version 5.0.0. See the [Version 4 Upgrade Guide](/docs/providers/aws/guides/version-4-upgrade.html) for information on upgrading from 3.x to version 4.0.0. 
- -Upgrade topics: - - - -- [Provider Version Configuration](#provider-version-configuration) -- [Provider Arguments](#provider-arguments) -- [Default Tags](#default-tags) -- [EC2-Classic Retirement](#ec2-classic-retirement) -- [Macie Classic Retirement](#macie-classic-retirement) -- [resource/aws_acmpca_certificate_authority](#resourceaws_acmpca_certificate_authority) -- [resource/aws_api_gateway_rest_api](#resourceaws_api_gateway_rest_api) -- [resource/aws_autoscaling_attachment](#resourceaws_autoscaling_attachment) -- [resource/aws_autoscaling_group](#resourceaws_autoscaling_group) -- [resource/aws_budgets_budget](#resourceaws_budgets_budget) -- [resource/aws_ce_anomaly_subscription](#resourceaws_ce_anomaly_subscription) -- [resource/aws_cloudwatch_event_target](#resourceaws_cloudwatch_event_target) -- [resource/aws_codebuild_project](#resourceaws_codebuild_project) -- [resource/aws_connect_hours_of_operation](#resourceaws_connect_hours_of_operation) -- [resource/aws_connect_queue](#resourceaws_connect_queue) -- [resource/aws_connect_routing_profile](#resourceaws_connect_routing_profile) -- [resource/aws_db_event_subscription](#resourceaws_db_event_subscription) -- [resource/aws_db_instance_role_association](#resourceaws_db_instance_role_association) -- [resource/aws_db_instance](#resourceaws_db_instance) -- [resource/aws_db_proxy_target](#resourceaws_db_proxy_target) -- [resource/aws_db_security_group](#resourceaws_db_security_group) -- [resource/aws_db_snapshot](#resourceaws_db_snapshot) -- [resource/aws_default_vpc](#resourceaws_default_vpc) -- [resource/aws_dms_endpoint](#resourceaws_dms_endpoint) -- [resource/aws_docdb_cluster](#resourceaws_docdb_cluster) -- [resource/aws_dx_gateway_association](#resourceaws_dx_gateway_association) -- [resource/aws_ec2_client_vpn_endpoint](#resourceaws_ec2_client_vpn_endpoint) -- [resource/aws_ec2_client_vpn_network_association](#resourceaws_ec2_client_vpn_network_association) -- 
[resource/aws_ecs_cluster](#resourceaws_ecs_cluster) -- [resource/aws_eip](#resourceaws_eip) -- [resource/aws_eip_association](#resourceaws_eip_association) -- [resource/aws_eks_addon](#resourceaws_eks_addon) -- [resource/aws_elasticache_cluster](#resourceaws_elasticache_cluster) -- [resource/aws_elasticache_replication_group](#resourceaws_elasticache_replication_group) -- [resource/aws_elasticache_security_group](#resourceaws_elasticache_security_group) -- [resource/aws_flow_log](#resourceaws_flow_log) -- [resource/aws_guardduty_organization_configuration](#resourceaws_guardduty_organization_configuration) -- [resource/aws_kinesis_firehose_delivery_stream](#resourceaws_kinesis_firehose_delivery_stream) -- [resource/aws_launch_configuration](#resourceaws_launch_configuration) -- [resource/aws_launch_template](#resourceaws_launch_template) -- [resource/aws_lightsail_instance](#resourceaws_lightsail_instance) -- [resource/aws_macie_member_account_association](#resourceaws_macie_member_account_association) -- [resource/aws_macie_s3_bucket_association](#resourceaws_macie_s3_bucket_association) -- [resource/aws_medialive_multiplex_program](#resourceaws_medialive_multiplex_program) -- [resource/aws_msk_cluster](#resourceaws_msk_cluster) -- [resource/aws_neptune_cluster](#resourceaws_neptune_cluster) -- [resource/aws_networkmanager_core_network](#resourceaws_networkmanager_core_network) -- [resource/aws_opensearch_domain](#resourceaws_opensearch_domain) -- [resource/aws_rds_cluster](#resourceaws_rds_cluster) -- [resource/aws_rds_cluster_instance](#resourceaws_rds_cluster_instance) -- [resource/aws_redshift_cluster](#resourceaws_redshift_cluster) -- [resource/aws_redshift_security_group](#resourceaws_redshift_security_group) -- [resource/aws_route](#resourceaws_route) -- [resource/aws_route_table](#resourceaws_route_table) -- [resource/aws_s3_object](#resourceaws_s3_object) -- [resource/aws_s3_object_copy](#resourceaws_s3_object_copy) -- 
[resource/aws_secretsmanager_secret](#resourceaws_secretsmanager_secret) -- [resource/aws_security_group](#resourceaws_security_group) -- [resource/aws_security_group_rule](#resourceaws_security_group_rule) -- [resource/aws_servicecatalog_product](#resourceaws_servicecatalog_product) -- [resource/aws_ssm_association](#resourceaws_ssm_association) -- [resource/aws_ssm_parameter](#resourceaws_ssm_parameter) -- [resource/aws_vpc](#resourceaws_vpc) -- [resource/aws_vpc_peering_connection](#resourceaws_vpc_peering_connection) -- [resource/aws_vpc_peering_connection_accepter](#resourceaws_vpc_peering_connection_accepter) -- [resource/aws_vpc_peering_connection_options](#resourceaws_vpc_peering_connection_options) -- [resource/aws_wafv2_web_acl](#resourceaws_wafv2_web_acl) -- [resource/aws_wafv2_web_acl_logging_configuration](#resourceaws_wafv2_web_acl_logging_configuration) -- [data-source/aws_api_gateway_rest_api](#data-sourceaws_api_gateway_rest_api) -- [data-source/aws_connect_hours_of_operation](#data-sourceaws_connect_hours_of_operation) -- [data-source/aws_db_instance](#data-sourceaws_db_instance) -- [data-source/aws_elasticache_cluster](#data-sourceaws_elasticache_cluster) -- [data-source/aws_elasticache_replication_group](#data-sourceaws_elasticache_replication_group) -- [data-source/aws_iam_policy_document](#data-sourceaws_iam_policy_document) -- [data-source/aws_identitystore_group](#data-sourceaws_identitystore_group) -- [data-source/aws_identitystore_user](#data-sourceaws_identitystore_user) -- [data-source/aws_launch_configuration](#data-sourceaws_launch_configuration) -- [data-source/aws_opensearch_domain](#data-sourceaws_opensearch_domain) -- [data-source/aws_quicksight_data_set](#data-sourceaws_quicksight_data_set) -- [data-source/aws_redshift_cluster](#data-sourceaws_redshift_cluster) -- [data-source/aws_redshift_service_account](#data-sourceaws_redshift_service_account) -- [data-source/aws_secretsmanager_secret](#data-sourceaws_secretsmanager_secret) -- 
[data-source/aws_service_discovery_service](#data-sourceaws_service_discovery_service) -- [data-source/aws_subnet_ids](#data-sourceaws_subnet_ids) -- [data-source/aws_vpc_peering_connection](#data-sourceaws_vpc_peering_connection) - - - -## Provider Version Configuration - --> Before upgrading to version 5.0.0, upgrade to the most recent 4.X version of the provider and ensure that your environment successfully runs [`terraform plan`](https://www.terraform.io/docs/commands/plan.html). You should not see changes you don't expect or deprecation notices. - -Use [version constraints when configuring Terraform providers](https://www.terraform.io/docs/configuration/providers.html#provider-versions). If you are following that recommendation, update the version constraints in your Terraform configuration and run [`terraform init -upgrade`](https://www.terraform.io/docs/commands/init.html) to download the new version. - -For example, given this previous configuration: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -Update to the latest 5.X version: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { AwsProvider } from "./.gen/providers/aws/provider"; -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string) { - super(scope, name); - new AwsProvider(this, "aws", {}); - } -} - -``` - -## Provider Arguments - -Version 5.0.0 removes these `provider` arguments: - -* `assumeRoleDurationSeconds` - Use `assumeRoleDuration` instead -* `assumeRoleWithWebIdentityDurationSeconds` - Use `assumeRoleWithWebIdentityDuration` instead -* `s3ForcePathStyle` - Use `s3UsePathStyle` instead -* `sharedCredentialsFile` - Use `sharedCredentialsFiles` instead -* `skipGetEc2Platforms` - Removed following the retirement of EC2-Classic - -## Default Tags - -The following enhancements are included: - -* Duplicate `defaultTags` can now be included and will be overwritten by resource `tags`. -* Zero value tags, `""`, can now be included in both `defaultTags` and resource `tags`. -* Tags can now be `computed`. - -## EC2-Classic Retirement - -Following the retirement of EC2-Classic, we removed a number of resources, arguments, and attributes. 
This list summarizes what we _removed_: - -* `awsDbSecurityGroup` resource -* `awsElasticacheSecurityGroup` resource -* `awsRedshiftSecurityGroup` resource -* [`awsDbInstance`](/docs/providers/aws/r/db_instance.html) resource's `securityGroupNames` argument -* [`awsElasticacheCluster`](/docs/providers/aws/r/elasticache_cluster.html) resource's `securityGroupNames` argument -* [`awsRedshiftCluster`](/docs/providers/aws/r/redshift_cluster.html) resource's `clusterSecurityGroups` argument -* [`awsLaunchConfiguration`](/docs/providers/aws/r/launch_configuration.html) resource's `vpcClassicLinkId` and `vpcClassicLinkSecurityGroups` arguments -* [`awsVpc`](/docs/providers/aws/r/vpc.html) resource's `enableClassiclink` and `enableClassiclinkDnsSupport` arguments -* [`awsDefaultVpc`](/docs/providers/aws/r/default_vpc.html) resource's `enableClassiclink` and `enableClassiclinkDnsSupport` arguments -* [`awsVpcPeeringConnection`](/docs/providers/aws/r/vpc_peering_connection.html) resource's `allowClassicLinkToRemoteVpc` and `allowVpcToRemoteClassicLink` arguments -* [`awsVpcPeeringConnectionAccepter`](/docs/providers/aws/r/vpc_peering_connection_accepter.html) resource's `allowClassicLinkToRemoteVpc` and `allowVpcToRemoteClassicLink` arguments -* [`awsVpcPeeringConnectionOptions`](/docs/providers/aws/r/vpc_peering_connection_options.html) resource's `allowClassicLinkToRemoteVpc` and `allowVpcToRemoteClassicLink` arguments -* [`awsDbInstance`](/docs/providers/aws/d/db_instance.html) data source's `dbSecurityGroups` attribute -* [`awsElasticacheCluster`](/docs/providers/aws/d/elasticache_cluster.html) data source's `securityGroupNames` attribute -* [`awsRedshiftCluster`](/docs/providers/aws/d/redshift_cluster.html) data source's `clusterSecurityGroups` attribute -* [`awsLaunchConfiguration`](/docs/providers/aws/d/launch_configuration.html) data source's `vpcClassicLinkId` and `vpcClassicLinkSecurityGroups` attributes - -## Macie Classic Retirement - -Following the retirement of 
Amazon Macie Classic, we removed these resources: - -* `awsMacieMemberAccountAssociation` -* `awsMacieS3BucketAssociation` - -## resource/aws_acmpca_certificate_authority - -Remove `status` from configurations as it no longer exists. - -## resource/aws_api_gateway_rest_api - -The `minimumCompressionSize` attribute is now a String type, allowing it to be computed when set via the `body` attribute. Valid values remain the same. - -## resource/aws_autoscaling_attachment - -Change `albTargetGroupArn`, which no longer exists, to `lbTargetGroupArn` in configurations. - -## resource/aws_autoscaling_group - -Remove `tags` from configurations as it no longer exists. Use the `tag` attribute instead. For use cases requiring dynamic tags, see the [Dynamic Tagging example](../r/autoscaling_group.html.markdown#dynamic-tagging). - -## resource/aws_budgets_budget - -Remove `costFilters` from configurations as it no longer exists. - -## resource/aws_ce_anomaly_subscription - -Remove `threshold` from configurations as it no longer exists. - -## resource/aws_cloudwatch_event_target - -The `ecsTargetPropagateTags` attribute now has no default value. If no value is specified, the tags are not propagated. - -## resource/aws_codebuild_project - -Remove `secondarySourcesAuth` and `sourceAuth` from configurations as they no longer exist. - -## resource/aws_connect_hours_of_operation - -Remove `hoursOfOperationArn` from configurations as it no longer exists. - -## resource/aws_connect_queue - -Remove `quickConnectIdsAssociated` from configurations as it no longer exists. - -## resource/aws_connect_routing_profile - -Remove `queueConfigsAssociated` from configurations as it no longer exists. - -## resource/aws_db_event_subscription - -Configurations that define `sourceIds` using the `id` attribute of `awsDbInstance` must be updated to use `identifier` instead. For example, `source_ids = [aws_db_instance.example.id]` must be updated to `source_ids = [aws_db_instance.example.identifier]`. 
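Sketched in cdktf terms (resource names and the SNS topic ARN below are illustrative, and the generated provider bindings are assumed), the fix is to reference `identifier` rather than `id` when building `sourceIds`:

```typescript
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
// Assumed paths to the generated AWS provider bindings.
import { DbInstance } from "./.gen/providers/aws/db-instance";
import { DbEventSubscription } from "./.gen/providers/aws/db-event-subscription";

class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    const example = new DbInstance(this, "example", {
      allocatedStorage: 10,
      engine: "mysql",
      identifier: "example",
      instanceClass: "db.t3.micro",
      username: "admin",
      password: "avoid-plaintext-passwords", // hypothetical; prefer a secret store
    });
    new DbEventSubscription(this, "example_1", {
      name: "example",
      snsTopic: "arn:aws:sns:us-east-1:123456789012:example", // hypothetical ARN
      sourceType: "db-instance",
      // Previously: [example.id]. `id` is now the DBI Resource ID.
      sourceIds: [example.identifier],
    });
  }
}
```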
- -## resource/aws_db_instance - -`awsDbInstance` has had a number of changes: - -1. [`id` is no longer the identifier](#aws_db_instanceid-is-no-longer-the-identifier) -2. [Use `dbName` instead of `name`](#use-db_name-instead-of-name) -3. [Remove `dbSecurityGroups`](#remove-db_security_groups) - -### aws_db_instance.id is no longer the identifier - -**What `id` _is_ has changed and can have far-reaching consequences.** Fortunately, fixing configurations is straightforward. - -`id` is _now_ the DBI Resource ID (_i.e._, `dbiResourceId`), an immutable "identifier" for an instance. `id` is now the same as the `resourceId`. (We recommend using `resourceId` rather than `id` when you need to refer to the DBI Resource ID.) _Previously_, `id` was the DB Identifier. Now when you need to refer to the _DB Identifier_, use `identifier`. - -Fixing configurations involves changing any `id` references to `identifier`, where the reference expects the DB Identifier. For example, if you're replicating an `awsDbInstance`, you can no longer use `id` to define the `replicateSourceDb`. - -This configuration will now result in an error since `replicateSourceDb` expects a _DB Identifier_: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DbInstance } from "./.gen/providers/aws/db-instance"; -interface MyConfig { - instanceClass: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DbInstance(this, "test", { - replicateSourceDb: source.id, - instanceClass: config.instanceClass, - }); - } -} - -``` - -You can fix the configuration like this: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { DbInstance } from "./.gen/providers/aws/db-instance"; -interface MyConfig { - instanceClass: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new DbInstance(this, "test", { - replicateSourceDb: source.identifier, - instanceClass: config.instanceClass, - }); - } -} - -``` - -### Use `dbName` instead of `name` - -Change `name` to `dbName` in configurations as `name` no longer exists. - -### Remove `dbSecurityGroups` - -Remove `dbSecurityGroups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. - -## resource/aws_db_instance_role_association - -Configurations that define `dbInstanceIdentifier` using the `id` attribute of `awsDbInstance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`. - -## resource/aws_db_proxy_target - -Configurations that define `dbInstanceIdentifier` using the `id` attribute of `awsDbInstance` must be updated to use `identifier` instead. For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`. - -## resource/aws_db_security_group - -We removed this resource as part of the EC2-Classic retirement. - -## resource/aws_db_snapshot - -Configurations that define `dbInstanceIdentifier` using the `id` attribute of `awsDbInstance` must be updated to use `identifier` instead. 
For example, `db_instance_identifier = aws_db_instance.example.id` must be updated to `db_instance_identifier = aws_db_instance.example.identifier`. - -## resource/aws_default_vpc - -Remove `enableClassiclink` and `enableClassiclinkDnsSupport` from configurations as they no longer exist. They were part of the EC2-Classic retirement. - -## resource/aws_dms_endpoint - -Remove `s3SettingsIgnoreHeadersRow` from configurations as it no longer exists. **Be careful not to confuse `ignoreHeadersRow`, which no longer exists, with `ignoreHeaderRows`, which still exists.** - -## resource/aws_docdb_cluster - -Changes to the `snapshotIdentifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings the behavior of the cluster `snapshotIdentifier` attribute into alignment with other RDS resources, such as `awsDbInstance`. - -Automated snapshots **should not** be used for this attribute, unless from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced. - -## resource/aws_dx_gateway_association - -The `vpnGatewayId` attribute has been deprecated. All configurations using `vpnGatewayId` should be updated to use the `associatedGatewayId` attribute instead. - -## resource/aws_ec2_client_vpn_endpoint - -Remove `status` from configurations as it no longer exists. - -## resource/aws_ec2_client_vpn_network_association - -Remove `securityGroups` and `status` from configurations as they no longer exist. - -## resource/aws_ecs_cluster - -Remove `capacityProviders` and `defaultCapacityProviderStrategy` from configurations as they no longer exist. - -## resource/aws_eip - -* With the retirement of EC2-Classic, the `standard` domain is no longer supported. -* The `vpc` argument has been deprecated. Use the `domain` argument instead. 
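For instance, a configuration using the deprecated `vpc` flag can swap it for `domain: "vpc"`. A minimal sketch, assuming generated provider bindings:

```typescript
import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
// Assumed path to the generated AWS provider bindings.
import { Eip } from "./.gen/providers/aws/eip";

class MyConvertedCode extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);
    new Eip(this, "example", {
      // Previously: vpc: true (now deprecated).
      domain: "vpc",
    });
  }
}
```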
- -## resource/aws_eip_association - -With the retirement of EC2-Classic, the `standard` domain is no longer supported. - -## resource/aws_eks_addon - -The `resolveConflicts` argument has been deprecated. Use the `resolveConflictsOnCreate` and/or `resolveConflictsOnUpdate` arguments instead. - -## resource/aws_elasticache_cluster - -Remove `securityGroupNames` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. - -## resource/aws_elasticache_replication_group - -* Remove the `clusterMode` configuration block. Use the top-level `numNodeGroups` and `replicasPerNodeGroup` arguments instead. -* Remove the `availabilityZones`, `numberCacheClusters`, and `replicationGroupDescription` arguments from configurations as they no longer exist. Use `preferredCacheClusterAzs`, `numCacheClusters`, and `description`, respectively, instead. - -## resource/aws_elasticache_security_group - -We removed this resource as part of the EC2-Classic retirement. - -## resource/aws_flow_log - -The `logGroupName` attribute has been deprecated. All configurations using `logGroupName` should be updated to use the `logDestination` attribute instead. - -## resource/aws_guardduty_organization_configuration - -The `autoEnable` argument has been deprecated. Use the `autoEnableOrganizationMembers` argument instead. - -## resource/aws_kinesis_firehose_delivery_stream - -* Remove the `s3Configuration` attribute from the root of the resource. `s3Configuration` is now a part of the following blocks: `elasticsearchConfiguration`, `opensearchConfiguration`, `redshiftConfiguration`, `splunkConfiguration`, and `httpEndpointConfiguration`. -* Remove `s3` as an option for `destination`. 
Use `extendedS3` instead. -* Rename `extendedS3Configuration0S3BackupConfiguration0BufferSize` and `extendedS3Configuration0S3BackupConfiguration0BufferInterval` to `extendedS3Configuration0S3BackupConfiguration0BufferingSize` and `extendedS3Configuration0S3BackupConfiguration0BufferingInterval`, respectively. -* Rename `redshiftConfiguration0S3BackupConfiguration0BufferSize` and `redshiftConfiguration0S3BackupConfiguration0BufferInterval` to `redshiftConfiguration0S3BackupConfiguration0BufferingSize` and `redshiftConfiguration0S3BackupConfiguration0BufferingInterval`, respectively. -* Rename `s3Configuration0BufferSize` and `s3Configuration0BufferInterval` to `s3Configuration0BufferingSize` and `s3Configuration0BufferingInterval`, respectively. - -## resource/aws_launch_configuration - -Remove `vpcClassicLinkId` and `vpcClassicLinkSecurityGroups` from configurations as they no longer exist. We removed them as part of the EC2-Classic retirement. - -## resource/aws_launch_template - -We removed defaults from `metadataOptions`. Launch template metadata options will now default to unset values, which is the AWS default behavior. - -## resource/aws_lightsail_instance - -Remove `ipv6Address` from configurations as it no longer exists. - -## resource/aws_macie_member_account_association - -We removed this resource as part of the Macie Classic retirement. - -## resource/aws_macie_s3_bucket_association - -We removed this resource as part of the Macie Classic retirement. - -## resource/aws_medialive_multiplex_program - -Change `statemuxSettings`, which no longer exists, to `statmuxSettings` in configurations. - -## resource/aws_msk_cluster - -Remove `brokerNodeGroupInfoEbsVolumeSize` from configurations as it no longer exists. - -## resource/aws_neptune_cluster - -Changes to the `snapshotIdentifier` attribute will now correctly force re-creation of the resource. 
Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings the behavior of the cluster's `snapshotIdentifier` attribute into alignment with other RDS resources, such as `awsDbInstance`. - -Automated snapshots **should not** be used for this attribute, unless they are from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced. - -## resource/aws_networkmanager_core_network - -Remove `policyDocument` from configurations as it no longer exists. Use the `awsNetworkmanagerCoreNetworkPolicyAttachment` resource instead. - -## resource/aws_opensearch_domain - -* The `kibanaEndpoint` attribute has been deprecated. All configurations using `kibanaEndpoint` should be updated to use the `dashboardEndpoint` attribute instead. -* The `engineVersion` attribute no longer has a default value. Omitting this attribute will now create a domain with the latest OpenSearch version, consistent with the behavior of the AWS API. - -## resource/aws_rds_cluster - -* Update configurations to always include `engine` since it is now required and has no default. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster. -* Changes to the `snapshotIdentifier` attribute will now correctly force re-creation of the resource. Previously, changing this attribute would result in a successful apply, but without the cluster being restored (only the resource state was changed). This change brings the behavior of the cluster's `snapshotIdentifier` attribute into alignment with other RDS resources, such as `awsDbInstance`. **NOTE:** Automated snapshots **should not** be used for this attribute, unless they are from a different cluster. Automated snapshots are deleted as part of cluster destruction when the resource is replaced. 
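
Because `engine` no longer defaults, a pre-apply check can catch configurations that still rely on the old implicit `aurora` default. A minimal sketch in plain TypeScript (the `validateRdsClusterArgs` helper is hypothetical, not provider code):

```typescript
// Sketch only: under v5, aws_rds_cluster requires an explicit `engine`;
// omitting it is no longer equivalent to engine = "aurora".
interface RdsClusterArgs {
  clusterIdentifier: string;
  engine?: string;
}

// Hypothetical helper returning human-readable validation errors.
function validateRdsClusterArgs(args: RdsClusterArgs): string[] {
  const errors: string[] = [];
  if (!args.engine) {
    errors.push('`engine` is required in v5; previously it defaulted to "aurora"');
  }
  return errors;
}

console.log(validateRdsClusterArgs({ clusterIdentifier: "example" }).length); // 1
console.log(
  validateRdsClusterArgs({ clusterIdentifier: "example", engine: "aurora-mysql" }).length
); // 0
```

The same check applies to `aws_rds_cluster_instance`, which also requires `engine` in v5.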
- -## resource/aws_rds_cluster_instance - -Update configurations to always include `engine` since it is now required and has no default. Previously, not including `engine` was equivalent to `engine = "aurora"` and created a MySQL-5.6-compatible cluster. - -## resource/aws_redshift_cluster - -Remove `clusterSecurityGroups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. - -## resource/aws_redshift_security_group - -We removed this resource as part of the EC2-Classic retirement. - -## resource/aws_route - -Update configurations to use `networkInterfaceId` rather than `instanceId`, which no longer exists. - -For example, this configuration is _no longer valid_: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Route } from "./.gen/providers/aws/route"; -interface MyConfig { - routeTableId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new Route(this, "example", { - instanceId: Token.asString(awsInstanceExample.id), - routeTableId: config.routeTableId, - }); - } -} - -``` - -One possible way to fix this configuration involves referring to the `primaryNetworkInterfaceId` of an instance: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. 
- */ -import { Route } from "./.gen/providers/aws/route"; -interface MyConfig { - routeTableId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new Route(this, "example", { - networkInterfaceId: Token.asString( - awsInstanceExample.primaryNetworkInterfaceId - ), - routeTableId: config.routeTableId, - }); - } -} - -``` - -Another fix is to use an ENI: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Instance } from "./.gen/providers/aws/instance"; -import { NetworkInterface } from "./.gen/providers/aws/network-interface"; -import { Route } from "./.gen/providers/aws/route"; -interface MyConfig { - subnetId: any; - deviceIndex: any; - routeTableId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new NetworkInterface(this, "example", { - subnetId: config.subnetId, - }); - const awsInstanceExample = new Instance(this, "example_1", { - networkInterface: [ - { - networkInterfaceId: example.id, - deviceIndex: config.deviceIndex, - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsInstanceExample.overrideLogicalId("example"); - const awsRouteExample = new Route(this, "example_2", { - dependsOn: [awsInstanceExample], - networkInterfaceId: example.id, - routeTableId: config.routeTableId, - }); - /*This allows the Terraform resource name to match the original name. 
You can remove the call if you don't need them to match.*/ - awsRouteExample.overrideLogicalId("example"); - } -} - -``` - -## resource/aws_route_table - -Update configurations to use `route.*NetworkInterfaceId` rather than `route.*InstanceId`, which no longer exists. - -For example, this configuration is _no longer valid_: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { RouteTable } from "./.gen/providers/aws/route-table"; -interface MyConfig { - vpcId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new RouteTable(this, "example", { - route: [ - { - instance_id: awsInstanceExample.id, - }, - ], - vpcId: config.vpcId, - }); - } -} - -``` - -One possible way to fix this configuration involves referring to the `primaryNetworkInterfaceId` of an instance: - -```typescript -// DO NOT EDIT. Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { Token, TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { RouteTable } from "./.gen/providers/aws/route-table"; -interface MyConfig { - vpcId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - new RouteTable(this, "example", { - route: [ - { - networkInterfaceId: Token.asString( - awsInstanceExample.primaryNetworkInterfaceId - ), - }, - ], - vpcId: config.vpcId, - }); - } -} - -``` - -Another fix is to use an ENI: - -```typescript -// DO NOT EDIT. 
Code generated by 'cdktf convert' - Please report bugs at https://cdk.tf/bug -import { Construct } from "constructs"; -import { TerraformStack } from "cdktf"; -/* - * Provider bindings are generated by running `cdktf get`. - * See https://cdk.tf/provider-generation for more details. - */ -import { Instance } from "./.gen/providers/aws/instance"; -import { NetworkInterface } from "./.gen/providers/aws/network-interface"; -import { RouteTable } from "./.gen/providers/aws/route-table"; -interface MyConfig { - subnetId: any; - deviceIndex: any; - vpcId: any; -} -class MyConvertedCode extends TerraformStack { - constructor(scope: Construct, name: string, config: MyConfig) { - super(scope, name); - const example = new NetworkInterface(this, "example", { - subnetId: config.subnetId, - }); - const awsInstanceExample = new Instance(this, "example_1", { - networkInterface: [ - { - networkInterfaceId: example.id, - deviceIndex: config.deviceIndex, - }, - ], - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsInstanceExample.overrideLogicalId("example"); - const awsRouteTableExample = new RouteTable(this, "example_2", { - dependsOn: [awsInstanceExample], - route: [ - { - networkInterfaceId: example.id, - }, - ], - vpcId: config.vpcId, - }); - /*This allows the Terraform resource name to match the original name. You can remove the call if you don't need them to match.*/ - awsRouteTableExample.overrideLogicalId("example"); - } -} - -``` - -## resource/aws_s3_object - -The `acl` attribute no longer has a default value. Previously this was set to `private` when omitted. Objects requiring a private ACL should now explicitly set this attribute. - -## resource/aws_s3_object_copy - -The `acl` attribute no longer has a default value. Previously this was set to `private` when omitted. Object copies requiring a private ACL should now explicitly set this attribute. 
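
Configurations that depended on the old implicit `private` ACL for `aws_s3_object` or `aws_s3_object_copy` must now set it explicitly. A sketch in plain TypeScript, using a hypothetical `pinPrivateAcl` helper to pin the previous implicit default:

```typescript
// Sketch only: aws_s3_object no longer defaults `acl` to "private".
// `pinPrivateAcl` is a hypothetical migration helper, not provider code.
interface S3ObjectArgs {
  bucket: string;
  key: string;
  acl?: string;
}

function pinPrivateAcl(args: S3ObjectArgs): S3ObjectArgs {
  // Spread order lets an explicitly set `acl` win over the pinned default.
  return { acl: "private", ...args };
}

const migrated = pinPrivateAcl({ bucket: "example-bucket", key: "example.txt" });
console.log(migrated.acl); // "private"
```

An object that already sets `acl` explicitly is left unchanged by the helper.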
- -## resource/aws_secretsmanager_secret - -Remove `rotationEnabled`, `rotationLambdaArn` and `rotationRules` from configurations as they no longer exist. - -## resource/aws_security_group - -With the retirement of EC2-Classic, non-VPC security groups are no longer supported. - -## resource/aws_security_group_rule - -With the retirement of EC2-Classic, non-VPC security groups are no longer supported. - -## resource/aws_servicecatalog_product - -Changes to any `provisioningArtifactParameters` arguments now properly trigger a replacement. This fixes incorrect behavior, but may technically be breaking for configurations expecting non-functional in-place updates. - -## resource/aws_ssm_association - -The `instanceId` attribute has been deprecated. All configurations using `instanceId` should be updated to use the `targets` attribute instead. - -## resource/aws_ssm_parameter - -The `overwrite` attribute has been deprecated. Existing parameters should be explicitly imported rather than relying on the "import on create" behavior previously enabled by setting `overwrite = true`. In a future major version the `overwrite` attribute will be removed and attempting to create a parameter that already exists will fail. - -## resource/aws_vpc - -Remove `enableClassiclink` and `enableClassiclinkDnsSupport` from configurations as they no longer exist. They were part of the EC2-Classic retirement. - -## resource/aws_vpc_peering_connection - -Remove `allowClassicLinkToRemoteVpc` and `allowVpcToRemoteClassicLink` from configurations as they no longer exist. They were part of the EC2-Classic retirement. - -## resource/aws_vpc_peering_connection_accepter - -Remove `allowClassicLinkToRemoteVpc` and `allowVpcToRemoteClassicLink` from configurations as they no longer exist. They were part of the EC2-Classic retirement. - -## resource/aws_vpc_peering_connection_options - -Remove `allowClassicLinkToRemoteVpc` and `allowVpcToRemoteClassicLink` from configurations as they no longer exist. 
They were part of the EC2-Classic retirement. - -## resource/aws_wafv2_web_acl - -* Remove `statementManagedRuleGroupStatementExcludedRule` and `statementRuleGroupReferenceStatementExcludedRule` from configurations as they no longer exist. -* The `statementRuleGroupReferenceStatementRuleActionOverride` attribute has been added. - -## resource/aws_wafv2_web_acl_logging_configuration - -Remove `redactedFieldsAllQueryArguments`, `redactedFieldsBody`, and `redactedFieldsSingleQueryArgument` from configurations as they no longer exist. - -## data-source/aws_api_gateway_rest_api - -The `minimumCompressionSize` attribute is now a String type, allowing it to be computed when set via the `body` attribute. - -## data-source/aws_connect_hours_of_operation - -Remove `hoursOfOperationArn` from configurations as it no longer exists. - -## data-source/aws_db_instance - -Remove `dbSecurityGroups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. - -## data-source/aws_elasticache_cluster - -Remove `securityGroupNames` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. - -## data-source/aws_elasticache_replication_group - -Rename `numberCacheClusters` and `replicationGroupDescription`, which no longer exist, to `numCacheClusters` and `description`, respectively. - -## data-source/aws_iam_policy_document - -* Remove `sourceJson` and `overrideJson` from configurations. Use `sourcePolicyDocuments` and `overridePolicyDocuments`, respectively, instead. -* Empty `statementSid` values are no longer added to the `json` attribute value. - -## data-source/aws_identitystore_group - -Remove `filter` from configurations as it no longer exists. - -## data-source/aws_identitystore_user - -Remove `filter` from configurations as it no longer exists. - -## data-source/aws_launch_configuration - -Remove `vpcClassicLinkId` and `vpcClassicLinkSecurityGroups` from configurations as they no longer exist. 
They were part of the EC2-Classic retirement. - -## data-source/aws_opensearch_domain - -The `kibanaEndpoint` attribute has been deprecated. All configurations using `kibanaEndpoint` should be updated to use the `dashboardEndpoint` attribute instead. - -## data-source/aws_quicksight_data_set - -The `tagsAll` attribute has been deprecated and will be removed in a future version. - -## data-source/aws_redshift_cluster - -Remove `clusterSecurityGroups` from configurations as it no longer exists. We removed it as part of the EC2-Classic retirement. - -## data-source/aws_redshift_service_account - -AWS [documentation](https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html#db-auditing-bucket-permissions) recommends that [a service principal name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html#principal-services) be used instead of an AWS account ID in any relevant IAM policy. -The [`awsRedshiftServiceAccount`](/docs/providers/aws/d/redshift_service_account.html) data source should now be considered deprecated and will be removed in a future version. - -## data-source/aws_service_discovery_service - -The `tagsAll` attribute has been deprecated and will be removed in a future version. - -## data-source/aws_secretsmanager_secret - -Remove `rotationEnabled`, `rotationLambdaArn`, and `rotationRules` from configurations as they no longer exist. - -## data-source/aws_subnet_ids - -We removed the `awsSubnetIds` data source. Use the [`awsSubnets`](/docs/providers/aws/d/subnets.html) data source instead. - -## data-source/aws_vpc_peering_connection - -Remove `allowClassicLinkToRemoteVpc` and `allowVpcToRemoteClassicLink` from configurations as they no longer exist. They were part of the EC2-Classic retirement.
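
Many of the changes in this guide are mechanical attribute renames (for example, `numberCacheClusters` → `numCacheClusters` and `replicationGroupDescription` → `description`). A hypothetical helper sketching that mechanical step for plain argument objects; the rename table below is an assumption drawn from the ElastiCache entries above and can be extended for other resources:

```typescript
// Sketch only: mechanically rename deprecated keys in an argument object.
// The table covers the aws_elasticache_replication_group renames described
// in this guide; it is illustrative, not an exhaustive migration tool.
const renames: Record<string, string> = {
  numberCacheClusters: "numCacheClusters",
  replicationGroupDescription: "description",
  availabilityZones: "preferredCacheClusterAzs",
};

function applyRenames(
  args: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(args)) {
    // Keep the value, swapping the key if a rename applies.
    out[renames[key] ?? key] = value;
  }
  return out;
}

console.log(
  applyRenames({ numberCacheClusters: 2, replicationGroupDescription: "demo" })
);
// { numCacheClusters: 2, description: 'demo' }
```

Removed-with-no-replacement attributes (such as the EC2-Classic arguments) must still be deleted by hand, since they have no rename target.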