
Warnings when importing a freshly created GKE cluster with default parameters #844

Closed

alamothe opened this issue Jul 2, 2022 · 7 comments

Labels: kind/bug (Some behavior is incorrect or out of spec), resolution/fixed (This issue was fixed)

alamothe commented Jul 2, 2022

What happened?

Pulumi outputs warnings when importing a freshly created GKE cluster from Google Cloud console.

Steps to reproduce

pulumi import gcp:container/cluster:Cluster carsync-prod us-central1-a/carsync-prod

Expected Behavior

No warnings

Actual Behavior

Diagnostics:
  gcp:container:Cluster (carsync-prod):
    warning: One or more imported inputs failed to validate. This is almost certainly a bug in the `gcp` provider. The import will still proceed, but you will need to edit the generated code after copying it into your program.
    warning: gcp:container/cluster:Cluster resource 'carsync-prod' has a problem: Conflicting configuration arguments: "cluster_ipv4_cidr": conflicts with ip_allocation_policy. Examine values at 'Cluster.ClusterIpv4Cidr'.
    warning: gcp:container/cluster:Cluster resource 'carsync-prod' has a problem: Conflicting configuration arguments: "ip_allocation_policy": conflicts with cluster_ipv4_cidr. Examine values at 'Cluster.IpAllocationPolicy'.
    warning: gcp:container/cluster:Cluster resource 'carsync-prod' has a problem: Conflicting configuration arguments: "monitoring_service": conflicts with cluster_telemetry. Examine values at 'Cluster.MonitoringService'.
    warning: gcp:container/cluster:Cluster resource 'carsync-prod' has a problem: Conflicting configuration arguments: "logging_service": conflicts with cluster_telemetry. Examine values at 'Cluster.LoggingService'.

Versions used

CLI          
Version      3.35.3
Go Version   go1.18.3
Go Compiler  gc



alamothe added the kind/bug and needs-triage labels on Jul 2, 2022
stack72 (Contributor) commented Jul 5, 2022

@Frassle I'd love your thoughts on this...

alamothe (Author) commented Jul 6, 2022

Even though these were just warnings and the import itself succeeded, I had to remove a bunch of conflicting fields before `pulumi up` would work (it was giving an error).

I still have no idea how the cluster was actually configured on Google Cloud. Did it use one setting or the other? Pulumi said it couldn't use both, yet that is precisely what it imported.

Frassle (Member) commented Jul 6, 2022

@stack72 This will be based on whatever the Read method returned from the provider. The engine data flow is very simple: call Read, run the result through Check, warn about any check failures, but then save the state and code as returned.

I'd try making a GKE cluster and then seeing what the provider's Read returns for it.
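
As a rough illustration of that flow, a minimal sketch in TypeScript (the engine itself is written in Go; Provider and importResource here are hypothetical names, not a real Pulumi API):

// Hypothetical sketch of the import data flow described above.
interface Provider {
    read(urn: string, id: string): Promise<Record<string, unknown>>;
    check(urn: string, inputs: Record<string, unknown>): Promise<{ failures: string[] }>;
}

async function importResource(p: Provider, urn: string, id: string) {
    const inputs = await p.read(urn, id);            // 1. Read the live resource
    const { failures } = await p.check(urn, inputs); // 2. Validate the result with Check
    for (const f of failures) {
        console.warn(`warning: ${f}`);               // 3. Warn about any check failures...
    }
    return inputs;                                   // 4. ...but save the state and code as returned
}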

jkodroff removed the needs-triage label on Jul 7, 2022
jondkelley commented Jan 11, 2023

I also get these conflicts when importing an existing Google GKE cluster.

pulumi import gcp:container/cluster:Cluster my-cluster com-my-dev-760a2504/us-central1/com-my-us-gke-dev

Then I execute `pulumi up` with my code:

    error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "logging_service": conflicts with cluster_telemetry. Examine values at 'Cluster.LoggingService'.
    error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "ip_allocation_policy": conflicts with cluster_ipv4_cidr. Examine values at 'Cluster.IpAllocationPolicy'.
    error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "monitoring_service": conflicts with cluster_telemetry. Examine values at 'Cluster.MonitoringService'.
    error: gcp:container/cluster:Cluster resource 'my-cluster' has a problem: Conflicting configuration arguments: "cluster_ipv4_cidr": conflicts with ip_allocation_policy. Examine values at 'Cluster.ClusterIpv4Cidr'.

The only way to resolve these errors is to edit the imported YAML to address the conflicts.

For instance, ipAllocationPolicy has conflicting entries, so I remove the CIDR blocks from it:

    clusterIpv4CidrBlock: 10.162.0.0/18
    clusterSecondaryRangeName: gke-com-my-us-gke-dev-pods-5c803d01
    servicesIpv4CidrBlock: 10.162.96.0/19
    servicesSecondaryRangeName: gke-com-my-us-gke-dev-services-5c803d01

Then I have to remove:

  monitoringService: monitoring.googleapis.com/kubernetes
  loggingService: logging.googleapis.com/kubernetes

This fixes the conflicts, but it causes state drift against the remote API, as indicated below (one possible mitigation is sketched after the preview):

     Type                      Name               Plan     Info
     pulumi:pulumi:Stack       k8s-us-dev
     └─ gcp:container:Cluster  my-cluster           [diff: +enableKubernetesAlpha,enableL4IlbSubsetting,enableLegacyAbac,enableShieldedNodes-clusterIpv4Cidr,loggingService,monitoringService,project~__defaults,ipAllocationPol
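
One way to quiet this drift is the ignoreChanges resource option, which tells Pulumi to leave the listed properties alone on subsequent updates. A sketch in TypeScript (rather than the YAML above; the resource and property names follow this report, and the rest of the imported configuration is elided), not a fix verified in this thread:

import * as gcp from "@pulumi/gcp";

const cluster = new gcp.container.Cluster("my-cluster", {
    // ...the imported configuration, minus the conflicting fields removed above...
    location: "us-central1",
    initialNodeCount: 1,
}, {
    protect: true,
    // Ignore drift on the fields that were removed to satisfy the
    // ConflictsWith checks; GCP populates them server-side.
    ignoreChanges: ["clusterIpv4Cidr", "loggingService", "monitoringService"],
});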

jondkelley commented
This issue needs ownership.

Shrooblord commented
I used @jondkelley's fixes and do not experience state drift with the remote API; however, it still seems odd that I'd have to manually edit an import of the actual state of the GCP cluster. How can the current state be in conflict with itself...?

t0yv0 added a commit to pulumi/pulumi-terraform-bridge that referenced this issue May 9, 2024
Toward #1225: this fixes the special case of ConflictsWith warnings, removing spurious warnings on `pulumi import` in popular bugs such as:

- pulumi/pulumi-aws#2318 
- pulumi/pulumi-aws#3670
- pulumi/pulumi-gitlab#293
- pulumi/pulumi-gcp#844
- pulumi/pulumi-linode#373

TF does not guarantee that Read results are compatible with calling Check on them; in particular, Read can return results that run afoul of ConflictsWith constraints. This change compensates by arbitrarily dropping data from the Read result until it passes the ConflictsWith checks.

This affects `pulumi refresh` as well, as I think it should, although I have not seen cases in the wild where refresh is affected: refresh typically will not copy these properties to the input bag unless they are present in the old inputs, which are usually correct with respect to ConflictsWith.
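
A minimal sketch of that compensation in TypeScript (the actual fix lives in pulumi-terraform-bridge and is written in Go; the names below are hypothetical):

type Inputs = Record<string, unknown>;
type Conflict = { key: string; conflictsWith: string[] };

function dropConflicts(inputs: Inputs, conflicts: Conflict[]): Inputs {
    const result = { ...inputs };
    for (const { key, conflictsWith } of conflicts) {
        // If both sides of a ConflictsWith pair came back from Read,
        // arbitrarily drop one side so that Check no longer fails.
        if (key in result && conflictsWith.some((k) => k in result)) {
            delete result[key];
        }
    }
    return result;
}

// Example: Read returned both cluster_ipv4_cidr and ip_allocation_policy.
const cleaned = dropConflicts(
    { clusterIpv4Cidr: "10.80.0.0/14", ipAllocationPolicy: {} },
    [{ key: "clusterIpv4Cidr", conflictsWith: ["ipAllocationPolicy"] }],
);
// cleaned now contains only ipAllocationPolicy, so validation passes.
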
t0yv0 (Member) commented Jun 13, 2024

Given this cluster:

import * as gcp from "@pulumi/gcp";

const cluster = new gcp.container.Cluster("primary", {
    name: "my-gke-cluster",
    location: "us-central1",
    removeDefaultNodePool: true,
    initialNodeCount: 1,
});

export const clusterId = cluster.id;

I can now do an import without warnings:
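
Using an import command of the same shape as the earlier reports (the exact invocation is an assumption based on the output below):

pulumi import gcp:container/cluster:Cluster c2 pulumi-development/us-central1/my-gke-cluster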

Importing (dev2)

View in Browser (Ctrl+O): https://app.pulumi.com/anton-pulumi-corp/pulumi-gcp-844/dev2/updates/2

     Type                      Name                 Status               
     pulumi:pulumi:Stack       pulumi-gcp-844-dev2                       
 =   └─ gcp:container:Cluster  c2                   imported (0.95s)     

Outputs:
    clusterId: "projects/pulumi-development/locations/us-central1/clusters/my-gke-cluster"

Resources:
    = 1 imported
    2 unchanged

Duration: 2s

Please copy the following code into your Pulumi application. Not doing so
will cause Pulumi to report that an update will happen on the next update command.

Please note that the imported resources are marked as protected. To destroy them
you will need to remove the `protect` option and run `pulumi update` *before*
the destroy will take effect.

import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

const c2 = new gcp.container.Cluster("c2", {
    addonsConfig: {
        gcePersistentDiskCsiDriverConfig: {
            enabled: true,
        },
        networkPolicyConfig: {
            disabled: true,
        },
    },
    clusterIpv4Cidr: "10.80.0.0/14",
    clusterTelemetry: {
        type: "ENABLED",
    },
    databaseEncryption: {
        state: "DECRYPTED",
    },
    defaultMaxPodsPerNode: 110,
    defaultSnatStatus: {
        disabled: false,
    },
    initialNodeCount: 1,
    location: "us-central1",
    loggingConfig: {
        enableComponents: [
            "SYSTEM_COMPONENTS",
            "WORKLOADS",
        ],
    },
    masterAuth: {
        clientCertificateConfig: {
            issueClientCertificate: false,
        },
    },
    monitoringConfig: {
        advancedDatapathObservabilityConfigs: [{
            enableMetrics: false,
            enableRelay: false,
        }],
        enableComponents: ["SYSTEM_COMPONENTS"],
        managedPrometheus: {
            enabled: true,
        },
    },
    name: "my-gke-cluster",
    network: "projects/pulumi-development/global/networks/default",
    networkPolicy: {
        enabled: false,
        provider: "PROVIDER_UNSPECIFIED",
    },
    networkingMode: "VPC_NATIVE",
    nodeLocations: [
        "us-central1-b",
        "us-central1-c",
        "us-central1-a",
    ],
    nodePoolDefaults: {
        nodeConfigDefaults: {
            loggingVariant: "DEFAULT",
        },
    },
    nodeVersion: "1.29.4-gke.1043002",
    notificationConfig: {
        pubsub: {
            enabled: false,
        },
    },
    podSecurityPolicyConfig: {
        enabled: false,
    },
    privateClusterConfig: {
        masterGlobalAccessConfig: {
            enabled: false,
        },
    },
    project: "pulumi-development",
    protectConfig: {
        workloadConfig: {
            auditMode: "BASIC",
        },
        workloadVulnerabilityMode: "WORKLOAD_VULNERABILITY_MODE_UNSPECIFIED",
    },
    releaseChannel: {
        channel: "REGULAR",
    },
    securityPostureConfig: {
        mode: "BASIC",
        vulnerabilityMode: "VULNERABILITY_MODE_UNSPECIFIED",
    },
    serviceExternalIpsConfig: {
        enabled: false,
    },
    subnetwork: "projects/pulumi-development/regions/us-central1/subnetworks/def}, {
    protect: true,
});

This is accomplished by dropping conflicting properties in pulumi-terraform-bridge during import. The dropping is not very intelligent, but it attempts to resolve the conflicts.

Versions:

CLI          
Version      3.117.0
Go Version   go1.22.3
Go Compiler  gc

Plugins
KIND      NAME    VERSION
resource  gcp     7.26.0
language  nodejs  unknown

Host     
OS       darwin
Version  14.5
Arch     arm64

This project is written in nodejs: executable='/Users/anton/bin/node' version='v18.18.2'

Current Stack: anton-pulumi-corp/pulumi-gcp-844/dev2

TYPE                           URN
pulumi:pulumi:Stack            urn:pulumi:dev2::pulumi-gcp-844::pulumi:pulumi:Stack::pulumi-gcp-844-dev2
pulumi:providers:gcp           urn:pulumi:dev2::pulumi-gcp-844::pulumi:providers:gcp::default_7_26_0
gcp:container/cluster:Cluster  urn:pulumi:dev2::pulumi-gcp-844::gcp:container/cluster:Cluster::primary
gcp:container/cluster:Cluster  urn:pulumi:dev2::pulumi-gcp-844::gcp:container/cluster:Cluster::c2


Found no pending operations associated with dev2

Backend        
Name           pulumi.com
URL            https://app.pulumi.com/anton-pulumi-corp
User           anton-pulumi-corp
Organizations  anton-pulumi-corp, moolumi, pulumi
Token type     personal

Dependencies:
NAME            VERSION
@pulumi/pulumi  3.120.0
@types/node     18.19.34
typescript      5.4.5
@pulumi/gcp     7.26.0

Pulumi locates its logs in /var/folders/gd/3ncjb1lj5ljgk8xl5ssn_gvc0000gn/T/com.apple.shortcuts.mac-helper// by default

I will close this as fixed but please feel free to open another issue if something is not working as expected.

t0yv0 self-assigned this on Jun 13, 2024
t0yv0 added this to the 0.106 milestone on Jun 13, 2024
t0yv0 added the resolution/fixed label on Jun 13, 2024
t0yv0 closed this as completed on Jun 13, 2024