
Existing secret config.env doesn't work #2328

Closed
D1StrX opened this issue Oct 3, 2024 · 12 comments


D1StrX commented Oct 3, 2024

Expected Behavior

When tenant.configSecret.existingSecret: true, only the existing secret should exist; the chart should not create one of its own.

Current Behavior

When tenant.configSecret.existingSecret: true and tenant.configuration.name: <secret name>, there shouldn't be a secret created by MinIO. Yet in ArgoCD I can see that configSecret creates the default secret, despite existingSecret = true. The other secret is deployed by a SealedSecret, which should be the only one existing.

Possible Solution

The Helm values handling should probably be reviewed; I'm not sure where exactly the problem lies.

Steps to Reproduce (for bugs)

Create a tenant deployed by ArgoCD with exactly this config. The existing secret is, for example, deployed with SealedSecrets:

  configuration:
    name: <secret>
  configSecret:
    existingSecret: true

Then you can clearly see 2 secrets are created. One by MinIO, one by SealedSecrets.
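For context, the SealedSecret in this setup decrypts into a plain Secret carrying the tenant's config.env. A minimal sketch of that target Secret (all names here are placeholders, not taken from the actual deployment):

```yaml
# Hypothetical target Secret that the SealedSecret decrypts into.
# The tenant consumes it via tenant.configuration.name; the only
# required key is config.env with the root credentials.
apiVersion: v1
kind: Secret
metadata:
  name: my-tenant-env-configuration   # must match tenant.configuration.name
  namespace: my-tenant-namespace
type: Opaque
stringData:
  config.env: |-
    export MINIO_ROOT_USER=CHANGE-ME
    export MINIO_ROOT_PASSWORD=CHANGE-ME
```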

Context

This locks you out of your MinIO instance, and bucket creation fails with the error: buckets creation failed: The Access Key Id you provided does not exist in our records.

Regression

Yes, since 6.0.0 due to secret deprecation.

Your Environment

  • Version used (minio-operator): 6.0.3
  • Environment name and version (e.g. kubernetes v1.17.2): K8s 1.30
ravindk89 (Contributor)

@D1StrX this may be resolved in 6.0.4 based on another user report - @harshavardhana @pjuarezd we need to cut that release fully


D1StrX commented Oct 11, 2024

@ravindk89 Updated to v6.0.4. Another issue arises: ...failed exit status 1: Error: execution error at (tenant/templates/tenant-configuration.yaml:21:4): # ERROR: cannot set access-key when an existing secret is used Use --debug flag to render out invalid YAML
I tried this:

  configuration:
    name: <secret>
  configSecret:
    existingSecret: true

and tried

  configuration:
    name: <secret>
  configSecret:
    name: <secret>
    existingSecret: true

and tried

  configSecret:
    name: <secret>
    existingSecret: true


D1StrX commented Oct 11, 2024

Apparently tenant/templates/tenant-configuration.yaml expects empty accessKey and secretKey, like this:

  configuration:
    name: <secret>
  configSecret:
    name: <secret>
    accessKey:
    secretKey:
    existingSecret: true

That makes sense given the default values.yaml... it ships with minio and myminio123 as defaults.
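The render error above is consistent with a guard in the template along these lines (a rough sketch of the likely logic, not the chart's actual source):

```yaml
{{- /* Sketch: tenant-configuration.yaml fails the render when an
       existing secret is combined with a non-empty accessKey, so the
       chart's non-empty defaults have to be blanked out explicitly. */}}
{{- if .Values.tenant.configSecret.existingSecret }}
  {{- if or .Values.tenant.configSecret.accessKey .Values.tenant.configSecret.secretKey }}
    {{- fail "# ERROR: cannot set access-key when an existing secret is used" }}
  {{- end }}
{{- end }}
```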


D1StrX commented Oct 11, 2024

Another issue arises:

host <pool>.<tenant>-hl.<namespace>.svc.cluster.local:9000: server update failed with: open /usr/bin/.minio.check-perm: permission denied, do not restart the servers yet

ravindk89 (Contributor)

Can you provide your full values.yaml with PII redacted?


D1StrX commented Oct 11, 2024

Here we go:

# Root key for MinIO Tenant Chart
tenant:
  name: example
  image:
    repository: quay.io/minio/minio
    tag: RELEASE.2024-10-02T17-50-41Z
    pullPolicy: IfNotPresent
  imagePullSecret: {}
  scheduler: {}
  configuration:
    name: minio-example-env-configuration
  configSecret:
    accessKey:
    secretKey:
    name: minio-example-env-configuration
    existingSecret: true
  pools:
    - servers: 4
      name: pool-0
      volumesPerServer: 2
      size: 15Gi
      storageClassName: standard
      storageAnnotations: {}
      annotations: {}
      labels: {}
      tolerations: []
      nodeSelector: []
      affinity: {}
      resources: {}
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        fsGroupChangePolicy: "OnRootMismatch"
        runAsNonRoot: true
      containerSecurityContext:
        runAsUser: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: RuntimeDefault
      topologySpreadConstraints: []
  mountPath: /export
  subPath: /data
  metrics:
    enabled: false
    port: 9000
    protocol: http
  certificate:
    externalCaCertSecret: []
    externalCertSecret:
      - name: external-minio-certificate
        type: kubernetes.io/tls
    requestAutoCert: false
    certConfig: {}
  features:
    bucketDNS: false
    domains: {}
    enableSFTP: false
  buckets:
    - name: example-bucket
  users:
    - name: example-user

  podManagementPolicy: Parallel
  liveness: {}
  readiness: {}
  startup: {}
  lifecycle: {}
  exposeServices: {}
  serviceAccountName: ""
  prometheusOperator: false
  logging: {}
  serviceMetadata: {}
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: EC:2
    - name: MINIO_SITE_REGION
      value: <region>
  priorityClassName: ""
  additionalVolumes: []
  additionalVolumeMounts: []
# Set the keys to conform to the Ingress controller and configuration of your choice.
ingress:
  api:
    enabled: false
  console:
    enabled: true
    ingressClassName: "controller"
    labels:
      service: minio-console
    annotations:
      cert-manager.io/cluster-issuer: <issuer>
      nginx.org/ssl-services: "example-console"
      nginx.org/websocket-services: "example-console"
      nginx.ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/proxy-body-size: "5t"
      nginx.org/client-max-body-size: "0"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        chunked_transfer_encoding off;
    tls:
      - hosts:
          - console.domain.tld
        secretName: minio-console-tls-secret
    host: console.domain.tld
    path: /
    pathType: Prefix

Could this coincidentally be related to #2319 and #2305?

ravindk89 (Contributor)

Possibly, I will defer to engineering on this cc/ @jiuker in case this rings a bell


jiuker commented Oct 14, 2024

I'm sorry, I can't reproduce this locally. Using a kind cluster and following the steps you gave, only the manually created secret exists; there is no default secret as you described:

root@jiuker:/mnt/d/workspace/go/src/eos# kubectl get secrets -A
NAMESPACE        NAME                                   TYPE                            DATA   AGE
kube-system      bootstrap-token-abcdef                 bootstrap.kubernetes.io/token   6      8m14s
minio-operator   minio-example-env-configuration        Opaque                          1      5m41s
minio-operator   sh.helm.release.v1.minio-operator.v1   helm.sh/release.v1              1      6m7s
minio-operator   sh.helm.release.v1.mytest-minio.v1     helm.sh/release.v1              1      5m56s
minio-operator   sts-tls                                Opaque                          2      5m42s
root@jiuker:/mnt/d/workspace/go/src/eos#

minio-example-env-configuration is created by kubectl apply -f test.secret.yaml
test.secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: minio-example-env-configuration
  namespace: minio-operator
type: Opaque
stringData:
  config.env: |-
    export MINIO_ROOT_USER=myminio123
    export MINIO_ROOT_PASSWORD=myminio321

If you can reproduce it locally, please write detailed steps: which commands, which scripts.


D1StrX commented Oct 15, 2024

It seems the error resolved itself... unfortunately I'm not able to pinpoint what exactly caused it. My feeling is that this was network related, not the application.

D1StrX closed this as completed Oct 15, 2024

BapRx commented Oct 16, 2024

(quoting the earlier workaround: empty accessKey and secretKey alongside existingSecret: true)

That's correct, but when using this chart as a dependency (subchart), we cannot override these values, since an empty string or null gets replaced by the chart's defaults.

Currently the only way to fix this is to remove the default values from the chart, which shouldn't be an issue since these keys need to be customized anyway.
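To illustrate the subchart problem: a parent chart pulling the tenant chart in as a dependency would attempt an override like the one below (names hypothetical), and because Helm coalesces null or empty parent values with the subchart's non-empty defaults (minio / myminio123), the template guard still trips:

```yaml
# Parent chart values.yaml (hypothetical), with the tenant chart
# declared as a dependency under the alias "tenant". Setting the keys
# to null is meant to blank the subchart defaults, but Helm's value
# coalescing brings the subchart's non-empty defaults back, which is
# why removing the defaults from the chart itself is the reliable fix.
tenant:                         # dependency alias
  tenant:
    configuration:
      name: my-existing-secret
    configSecret:
      name: my-existing-secret
      accessKey: null
      secretKey: null
      existingSecret: true
```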

@ravindk89 should I create another issue or is it clear enough?


D1StrX commented Oct 16, 2024

IMHO the documentation in values.yaml could be improved. Use as little text as possible, so it's clear what to do in each scenario.

ravindk89 (Contributor)

https://github.com/minio/operator/blob/master/helm/tenant/values.yaml#L54-L57

I actually wonder what the value of having configSecret is at all, given that we have configuration.name. The fact that an overlap and collision exists at all adds confusion, which, while solvable via documentation, avoids treating the root problem.

@pjuarezd @jiuker what do y'all think? Do we really benefit from keeping configSecret at all? Would it not be better to always direct users to create or provide a secret via configuration.name ?
