
[BUG] func NewWorkspace does not create deep copy of terraformTemplates #1101

Open
kanngiesser opened this issue Sep 26, 2024 · 1 comment
Labels
bug Something isn't working


Description

The function NewWorkspace does not create a deep copy of the Terraform templates that are read from the respective service definition.

Expected Behavior

The function should create a deep copy.
Changes to the Terraform templates saved for a distinct workspace object must not affect the Terraform templates stored for the serviceDefinition object.

Actual Behavior

Changes to the Terraform templates stored for a workspace object overwrite the serviceDefinition:

pkg/providers/tf/provision.go, line 185:

workspace.Modules[0].Definitions["main"] = tf
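The aliasing can be reproduced in isolation. The sketch below uses an illustrative Module type (the real broker types differ) to show that copying a struct by value copies only the map header, so a write through the "copy" is visible through the original:

```go
package main

import "fmt"

// Module loosely mirrors a Terraform module definition: a map from
// file name to file contents. The type is illustrative, not the
// broker's actual type.
type Module struct {
	Definitions map[string]string
}

func main() {
	// Template as read from the service definition.
	original := Module{Definitions: map[string]string{"main": "brokerpak template"}}

	// A shallow copy of the struct copies the map *header*, not its
	// contents: both structs now reference the same underlying map.
	workspaceModule := original

	// Mutating through the copy (as provision.go line 185 does) ...
	workspaceModule.Definitions["main"] = "generated import template"

	// ... is visible through the original service definition.
	fmt.Println(original.Definitions["main"]) // prints "generated import template"
}
```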

Possible Fix

Steps to Reproduce

Context

We created a Brokerpak that allows us to provision new S3 buckets or to import existing ones. The brokerpak manifest looks similar to this (excerpt):

plans:
  - name: standard
    # (...)

provision:
  import_inputs:
    - field_name: s3_bucket_name_subsume 
      type: string
      details: Existing AWS S3 bucket to subsume
      tf_resource: aws_s3_bucket.this
  import_parameters_to_delete:
    - aws_s3_bucket.this.id
    - aws_s3_bucket.this.arn
    - aws_s3_bucket.this.bucket_domain_name
    - aws_s3_bucket.this.bucket_regional_domain_name
    - aws_s3_bucket.this.hosted_zone_id
    - aws_s3_bucket.this.region
  plan_inputs:
    - field_name: subsume
      type: boolean
      details: Subsume existing
      default: false
  user_inputs: 
    - required: false
      field_name: subsume
      type: boolean
      details: "Determines whether existing bucket is imported"
      default: false
    - required: false
      field_name: s3_bucket_name_subsume
      type: string
      details: "Name of the imported bucket"
      prohibit_update: true   # bucket name cannot be changed since this would trigger a recreate
  template_refs:
    data: terraform/s3/provision/data.tf
    main: terraform/s3/provision/main.tf
    outputs: terraform/s3/provision/outputs.tf
    provider: terraform/s3/provision/provider.tf
    variables: terraform/s3/provision/variables.tf
    versions: terraform/s3/provision/versions.tf

The Brokerpak allows us to:

  • create a new bucket with cf create-service <service> <plan> <instance> -c '{}'
  • import an existing bucket with cf create-service <service> <plan> <instance> -c '{"subsume": true, "s3_bucket_name_subsume": <bucket-name>}'
  • migrate the settings of an imported bucket to what is actually defined in the brokerpak: cf update-service <service> -c '{"subsume": false}'. This allows us to patch the settings of imported buckets.

The service and plan ids do not change for these three operations.

We observe that the broker generates a new main.tf module when importing an existing bucket. After the import, that same generated main.tf module is used for all newly provisioned buckets (without import) and for all service updates (if the feature flag TERRAFORM_UPGRADES_ENABLED: true is set), instead of the main.tf that is actually part of the Terraform template defined in the brokerpak.

We believe that this is an effect of NewWorkspace not creating a deep copy of terraformTemplates.

The wrong main.tf module is persisted to the database each time the Terraform workspace is written for a service instance. As a consequence, once a single bucket has been imported, an update operation on any other service instance will break the stored workspace for that service instance.

Oftentimes, hard-coded resource IDs are written to the main.tf module that is generated after the import operation. As a consequence, workflows like this are possible:

  1. Provision service instance 1
  2. Import an existing bucket into service instance 2
    • generates main.tf with a hard-coded S3 bucket ID and the settings of the imported bucket
    • tofu apply has no effect since the Terraform modules and state match
  3. Update Service Instance 1
    • tries to apply the main.tf from the imported bucket to service instance 1
    • the update might try to drop the current bucket (!) and create the bucket which was imported in step 2 (since the bucket name is hard-coded in main.tf)
    • the update fails since the bucket to create already exists
  4. Deprovision Service Instance 1
    • might remove settings from the bucket which was imported into service instance 2, due to hard-coded resource IDs
    • might destroy the bucket which was imported into service instance 2

Your Environment

  • Version used: d1a7c75
  • Operating System and version (desktop): macOS Sonoma 14.6.1
  • Link to your project (if public):
  • Platform (Azure/AWS/GCP): AWS
  • Applicable Services:
@kanngiesser (Contributor, Author)

For our team this is a serious issue which blocks further rollout to production.
I would be happy to receive some feedback on the PR (or any input on alternative solutions or workarounds).
