Unable to attach or mount volumes: unmounted volumes=[job-logs] #1347

Closed
lxh-015 opened this issue Nov 20, 2022 · 17 comments
lxh-015 commented Nov 20, 2022

When installing with harbor-helm 1.10.2, harbor-jobservice is always stuck in ContainerCreating:

QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                   From     Message
  ----     ------       ----                  ----     -------
  Warning  FailedMount  31m (x58 over 10h)    kubelet  Unable to attach or mount volumes: unmounted volumes=[job-logs], unattached volumes=[job-logs job-scandata-exports jobservice-config]: timed out waiting for the condition
  Warning  FailedMount  6m3s (x173 over 10h)  kubelet  Unable to attach or mount volumes: unmounted volumes=[job-logs], unattached volumes=[jobservice-config job-logs job-scandata-exports]: timed out waiting for the condition
  Warning  FailedMount  90s (x51 over 10h)    kubelet  Unable to attach or mount volumes: unmounted volumes=[job-logs], unattached volumes=[job-scandata-exports jobservice-config job-logs]: timed out waiting for the condition
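For context, the unmounted job-logs volume here is the jobservice job log volume; with the values posted below it resolves to the pre-existing claim harbor-pvc (mounted under the jobLog subPath). Roughly, the jobservice pod's volumes under these values would look like the sketch below (a hand-written approximation, not copied from the rendered chart; the configMap name is an assumption):

volumes:
  - name: job-logs
    persistentVolumeClaim:
      claimName: harbor-pvc   # persistence.persistentVolumeClaim.jobservice.jobLog.existingClaim
  - name: job-scandata-exports
    persistentVolumeClaim:
      claimName: harbor-pvc   # persistence.persistentVolumeClaim.jobservice.scanDataExports.existingClaim
  - name: jobservice-config
    configMap:
      name: harbor-jobservice  # assumption: the jobservice config map rendered by the chart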
lxh-015 commented Nov 20, 2022

expose:
  # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
  # and fill the information in the corresponding section
  type: nodePort
  tls:
    # Enable TLS or not.
    # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
    # Note: if the "expose.type" is "ingress" and TLS is disabled,
    # the port must be included in the command when pulling/pushing images.
    # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
    enabled: true
    # The source of the tls certificate. Set as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: auto
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: hub.l-xh.com
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: ""
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      # Only needed when the "expose.type" is "ingress".
      notarySecretName: ""
  ingress:
    hosts:
      core: hub.l-xh.com
      notary: notary.harbor.domain
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    # set to `alb` if using the ALB ingress controller
    controller: default
    ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
    kubeVersionOverride: ""
    className: ""
    annotations:
      # note different ingress controllers may require a different ssl-redirect annotation
      # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    notary:
      # notary ingress-specific annotations
      annotations: {}
      # notary ingress-specific labels
      labels: {}
    harbor:
      # harbor ingress-specific annotations
      annotations: {}
      # harbor ingress-specific labels
      labels: {}
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    # Annotations on the ClusterIP service
    annotations: {}
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving HTTP
        port: 80
        # The node port Harbor listens on when serving HTTP
        nodePort: 30002
      https:
        # The service port Harbor listens on when serving HTTPS
        port: 443
        # The node port Harbor listens on when serving HTTPS
        nodePort: 30003
      # Only needed when notary.enabled is set to true
      notary:
        # The service port Notary listens on
        port: 4443
        # The node port Notary listens on
        nodePort: 30004
  loadBalancer:
    # The name of LoadBalancer service
    name: harbor
    # Set the IP if the LoadBalancer supports assigning IP
    IP: ""
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
    annotations: {}
    sourceRanges: []

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands shown on the portal
# 2) populate the token service URL returned to docker/notary client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://hub.l-xh.com:30003
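# For example, with "expose.type" set to "nodePort" as above, the format described in the
# comment would usually point at a node address plus the https node port (the IP below is
# a placeholder, not taken from this cluster):
# externalURL: https://192.168.1.100:30003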

# The internal TLS used for secure communication between Harbor components. In order to enable https
# for each component, tls cert files need to be provided in advance.
internalTLS:
  # If internal TLS enabled
  enabled: false
  # There are three ways to provide tls
  # 1) "auto" will generate cert automatically
  # 2) "manual" need provide cert file manually in following value
  # 3) "secret" internal certificates from secret
  certSource: "auto"
  # The content of trust ca, only available when `certSource` is "manual"
  trustCa: ""
  # core related cert configuration
  core:
    # secret name for core's tls certs
    secretName: ""
    # Content of core's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of core's TLS key file, only available when `certSource` is "manual"
    key: ""
  # jobservice related cert configuration
  jobservice:
    # secret name for jobservice's tls certs
    secretName: ""
    # Content of jobservice's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of jobservice's TLS key file, only available when `certSource` is "manual"
    key: ""
  # registry related cert configuration
  registry:
    # secret name for registry's tls certs
    secretName: ""
    # Content of registry's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of registry's TLS key file, only available when `certSource` is "manual"
    key: ""
  # portal related cert configuration
  portal:
    # secret name for portal's tls certs
    secretName: ""
    # Content of portal's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of portal's TLS key file, only available when `certSource` is "manual"
    key: ""
  # chartmuseum related cert configuration
  chartmuseum:
    # secret name for chartmuseum's tls certs
    secretName: ""
    # Content of chartmuseum's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
    key: ""
  # trivy related cert configuration
  trivy:
    # secret name for trivy's tls certs
    secretName: ""
    # Content of trivy's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of trivy's TLS key file, only available when `certSource` is "manual"
    key: ""

ipFamily:
  # ipv6Enabled set to true if ipv6 is enabled in the cluster; currently it affects the nginx related components
  ipv6:
    enabled: true
  # ipv4Enabled set to true if ipv4 is enabled in the cluster; currently it affects the nginx related components
  ipv4:
    enabled: true

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you already have existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  # (this does not apply for PVCs that are created for internal database
  # and redis components, i.e. they are never deleted automatically)
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use an existing PVC, which must be created manually before it can be bound,
      # and specify the "subPath" if the PVC is shared with other components
      # (a rough sketch of such a claim follows this registry block)
      existingClaim: harbor-pvc
      # Specify the "storageClass" used to provision the volume, otherwise the
      # default StorageClass will be used (the default behaviour).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "-"
      subPath: "registry"
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
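    # A rough sketch of a manually pre-created claim such as the "harbor-pvc" referenced
    # above (the size is an assumption and the storageClassName is omitted; neither is
    # taken from this cluster):
    #
    #   apiVersion: v1
    #   kind: PersistentVolumeClaim
    #   metadata:
    #     name: harbor-pvc
    #   spec:
    #     accessModes:
    #       - ReadWriteOnce
    #     resources:
    #       requests:
    #         storage: 20Gi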
    chartmuseum:
      existingClaim: harbor-pvc
      storageClass: "-"
      subPath: "chartmuseum"
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
    jobservice:
      jobLog:
        existingClaim: harbor-pvc
        storageClass: "-"
        subPath: "jobLog"
        accessMode: ReadWriteOnce
        size: 1Gi
        annotations: {}
      scanDataExports:
        existingClaim: harbor-pvc
        storageClass: "-"
        subPath: "scanDataExports"
        accessMode: ReadWriteOnce
        size: 1Gi
        annotations: {}
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: harbor-pvc
      storageClass: "-"
      subPath: "database"
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: harbor-pvc
      storageClass: "-"
      subPath: "redis"
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    trivy:
      existingClaim: harbor-pvc
      storageClass: "-"
      subPath: "trivy"
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
  # Define which storage backend is used for registry and chartmuseum to store
  # images and charts. Refer to
  # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  # for the detail.
  imageChartStorage:
    # Specify whether to disable `redirect` for images and chart storage. For
    # backends which do not support it (such as using minio for the `s3` storage type),
    # disable it by setting `disableredirect` to `true`.
    # Refer to
    # https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
    # for the detail.
    disableredirect: false
    # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
    # The secret must contain keys named "ca.crt" which will be injected into the trust store
    # of registry's and chartmuseum's containers.
    # caBundleSecretName:

    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
      # To use existing secret, the key must be AZURE_STORAGE_ACCESS_KEY
      existingSecret: ""
    gcs:
      bucket: bucketname
      # The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
      # To use existing secret, the key must be gcs-key.json
      existingSecret: ""
      useWorkloadIdentity: false
    s3:
      # Set an existing secret for S3 accesskey and secretkey
      # keys in the secret should be AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for chartmuseum
      # keys in the secret should be REGISTRY_STORAGE_S3_ACCESSKEY and REGISTRY_STORAGE_S3_SECRETKEY for registry
      #existingSecret: ""
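      # A sketch of such a secret, using the key names from the comment above
      # (the secret name and credential values are placeholders):
      #
      #   apiVersion: v1
      #   kind: Secret
      #   metadata:
      #     name: harbor-s3-credentials
      #   type: Opaque
      #   stringData:
      #     AWS_ACCESS_KEY_ID: awsaccesskey
      #     AWS_SECRET_ACCESS_KEY: awssecretkey
      #     REGISTRY_STORAGE_S3_ACCESSKEY: awsaccesskey
      #     REGISTRY_STORAGE_S3_SECRETKEY: awssecretkey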
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #skipverify: false
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
      #multipartcopychunksize: "33554432"
      #multipartcopymaxconcurrency: 100
      #multipartcopythresholdsize: "33554432"
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory

imagePullPolicy: IfNotPresent

# Use this setting to assign a list of default pullSecrets
imagePullSecrets:
#  - name: docker-registry-secret
#  - name: internal-registry-secret

# The update strategy for deployments with persistent volumes (jobservice, registry
# and chartmuseum): "RollingUpdate" or "Recreate"
# Set it to "Recreate" when "RWM" (ReadWriteMany) for volumes isn't supported
updateStrategy:
  type: RollingUpdate
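# For example, if the volumes behind jobservice/registry only support ReadWriteOnce,
# the comment above suggests switching to:
# updateStrategy:
#   type: Recreate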

# debug, info, warning, error or fatal
logLevel: info

# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "Harbor12345"

# The name of the secret which contains key named "ca.crt". Setting this enables the
# download link on portal to download the CA certificate when the certificate isn't
# generated automatically
caSecretName: ""

# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"
# If using existingSecretSecretKey, the key must be sercretKey
existingSecretSecretKey: ""

# The proxy settings for updating trivy vulnerabilities from the Internet and replicating
# artifacts from/to the registries that cannot be reached directly
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
  components:
    - core
    - jobservice
    - trivy

# Run the migration job via helm hook
enableMigrateHelmHook: false

# The custom ca bundle secret, the secret must contain key named "ca.crt"
# which will be injected into the trust store for chartmuseum, core, jobservice, registry, trivy components
# caBundleSecretName: ""

## UAA Authentication Options
# If you're using UAA for authentication behind a self-signed
# certificate you will need to provide the CA Cert.
# Set uaaSecretName below to provide a pre-created secret that
# contains a base64 encoded CA Certificate named `ca.crt`.
# uaaSecretName:

# If the service is exposed via "ingress", Nginx will not be used
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v2.6.2
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v2.6.2
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:

core:
  image:
    repository: goharbor/harbor-core
    tag: v2.6.2
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  ## Startup probe values
  startupProbe:
    enabled: true
    initialDelaySeconds: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when core server communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate and private key for token encryption/decryption.
  # The secret must contain keys named:
  # "tls.crt" - the certificate
  # "tls.key" - the private key
  # The default key pair will be used if it isn't set
  secretName: ""
  # The XSRF key. Will be generated automatically if it isn't specified
  xsrfKey: ""
  ## The priority class to run the pod as
  priorityClassName:
  # The time duration, in seconds, for asynchronously updating artifact pull_time and
  # repository pull_count. Defaults to 10 seconds if it isn't set.
  # eg. artifactPullAsyncFlushDuration: 10
  artifactPullAsyncFlushDuration:
  gdpr:
    deleteUser: false

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v2.6.2
  replicas: 1
  revisionHistoryLimit: 10
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLoggers:
    - file
    # - database
    # - stdout
  # The job logger sweeper duration (ignored if `jobLoggers` is `stdout`)
  loggerSweeperDuration: 14 #days

  # resources:
  #   requests:
  #     memory: 256Mi
  #     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when job service communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  ## The priority class to run the pod as
  priorityClassName:

registry:
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.6.2
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v2.6.2

    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  replicas: 1
  revisionHistoryLimit: 10
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:
  # Secret is used to secure the upload state from client
  # and registry storage backend.
  # See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
  relativeurls: false
  credentials:
    username: "harbor_registry_user"
    password: "harbor_registry_password"
    # If using existingSecret, the key must be REGISTRY_PASSWD and REGISTRY_HTPASSWD
    existingSecret: ""
    # Login and password in htpasswd string format. Excludes `registry.credentials.username`  and `registry.credentials.password`. May come in handy when integrating with tools like argocd or flux. This allows the same line to be generated each time the template is rendered, instead of the `htpasswd` function from helm, which generates different lines each time because of the salt.
    # htpasswdString: $apr1$XLefHzeG$Xl4.s00sMSCCcMyJljSZb0 # example string
  middleware:
    enabled: false
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
      # that allows access to CloudFront
      privateKeySecret: "my-secret"
  # enable purge _upload directories
  upload_purging:
    enabled: true
    # remove files in _upload directories which exist for a period of time, default is one week.
    age: 168h
    # the interval of the purge operations
    interval: 24h
    dryrun: false

chartmuseum:
  enabled: true
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  # Harbor defaults ChartMuseum to returning relative urls; if you want to use absolute urls, enable it by changing the following value to 'true'
  absoluteUrl: false
  image:
    repository: goharbor/chartmuseum-photon
    tag: v2.6.2
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:
  ## limit the number of parallel indexers
  indexLimit: 0

trivy:
  # enabled the flag to enable Trivy scanner
  enabled: true
  image:
    # repository the repository for Trivy adapter image
    repository: goharbor/trivy-adapter-photon
    # tag the tag for Trivy adapter image
    tag: v2.6.2
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  # replicas the number of Pod replicas
  replicas: 1
  # debugMode the flag to enable Trivy debug mode with more verbose scanning log
  debugMode: false
  # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
  vulnType: "os,library"
  # severity a comma-separated list of severities to be checked
  severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
  # ignoreUnfixed the flag to display only fixed vulnerabilities
  ignoreUnfixed: false
  # insecure the flag to skip verifying registry certificate
  insecure: false
  # gitHubToken the GitHub access token to download Trivy DB
  #
  # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
  # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
  # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
  # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
  # Currently, the database is updated every 12 hours and published as a new release to GitHub.
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://developer.github.com/v3/#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  gitHubToken: ""
  # skipUpdate the flag to disable Trivy DB downloads from GitHub
  #
  # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  # `/home/scanner/.cache/trivy/db/trivy.db` path.
  skipUpdate: false
  # The offlineScan option prevents Trivy from sending API requests to identify dependencies.
  #
  # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  # exist in the local repositories. This means the number of detected vulnerabilities might be lower in offline mode.
  # It works as expected if all the dependencies are available locally.
  # This option doesn’t affect DB download. You need to specify skipUpdate as well as offlineScan in an air-gapped environment.
  offlineScan: false
  # Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
  securityCheck: "vuln"
  # The duration to wait for scan completion
  timeout: 5m0s
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1Gi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  ## The priority class to run the pod as
  priorityClassName:

notary:
  enabled: true
  server:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/notary-server-photon
      tag: v2.6.2
    replicas: 1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## Additional deployment annotations
    podAnnotations: {}
    ## The priority class to run the pod as
    priorityClassName:
  signer:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/notary-signer-photon
      tag: v2.6.2
    replicas: 1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## Additional deployment annotations
    podAnnotations: {}
    ## The priority class to run the pod as
    priorityClassName:
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate authority, certificate and private key for notary
  # communications.
  # The secret must contain keys named ca.crt, tls.crt and tls.key that
  # contain the CA, certificate and private key.
  # They will be generated if not set.
  secretName: ""

database:
  # if an external database is used, set "type" to "external"
  # and fill the connection information in the "external" section
  type: internal
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/harbor-db
      tag: v2.6.2
    # The initial superuser password for internal database
    password: "changeit"
    # The size limit for shared memory, which PostgreSQL uses for shared_buffers
    # More details see:
    # https://github.com/goharbor/harbor/issues/15034
    shmSizeLimit: 512Mi
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
    initContainer:
      migrator: {}
      # resources:
      #  requests:
      #    memory: 128Mi
      #    cpu: 100m
      permissions: {}
      # resources:
      #  requests:
      #    memory: 128Mi
      #    cpu: 100m
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    # if using existing secret, the key must be "password"
    existingSecret: ""
    # "disable" - No SSL
    # "require" - Always SSL (skip verification)
    # "verify-ca" - Always SSL (verify that the certificate presented by the
    # server was signed by a trusted CA)
    # "verify-full" - Always SSL (verify that the certification presented by the
    # server was signed by a trusted CA and the server host name matches the one
    # in the certificate)
    sslmode: "disable"
  # The maximum number of connections in the idle connection pool per pod (core+exporter).
  # If it <=0, no idle connections are retained.
  maxIdleConns: 100
  # The maximum number of open connections to the database per pod (core+exporter).
  # If it <= 0, then there is no limit on the number of open connections.
  # Note: the default maximum number of connections is 1024 for Harbor's internal postgres.
  maxOpenConns: 900
  ## Additional deployment annotations
  podAnnotations: {}

redis:
  # if external Redis is used, set "type" to "external"
  # and fill the connection information in the "external" section
  type: internal
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    image:
      repository: goharbor/redis-photon
      tag: v2.6.2
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
  external:
    # support redis, redis+sentinel
    # addr for redis: <host_redis>:<port_redis>
    # addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
    addr: "192.168.0.2:6379"
    # The name of the set of Redis instances to monitor, it must be set to support redis+sentinel
    sentinelMasterSet: ""
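    # For example, a redis+sentinel setup following the format above would look roughly
    # like this (the hostnames and master set name are placeholders):
    # addr: "sentinel-0:26379,sentinel-1:26379,sentinel-2:26379"
    # sentinelMasterSet: "mymaster"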
    # The "coreDatabaseIndex" must be "0" as the library Harbor
    # used doesn't support configuring it
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    trivyAdapterIndex: "5"
    password: ""
    # If using existingSecret, the key must be REDIS_PASSWORD
    existingSecret: ""
  ## Additional deployment annotations
  podAnnotations: {}

exporter:
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  podAnnotations: {}
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  image:
    repository: goharbor/harbor-exporter
    tag: v2.6.2
  nodeSelector: {}
  tolerations: []
  affinity: {}
  cacheDuration: 23
  cacheCleanInterval: 14400
  ## The priority class to run the pod as
  priorityClassName:

metrics:
  enabled: false
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
  ## Create prometheus serviceMonitor to scrape harbor metrics.
  ## This requires the monitoring.coreos.com/v1 CRD. Please see
  ## https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
  ##
  serviceMonitor:
    enabled: false
    additionalLabels: {}
    # Scrape interval. If not set, the Prometheus default scrape interval is used.
    interval: ""
    # Metric relabel configs to apply to samples before ingestion.
    metricRelabelings:
      []
      # - action: keep
      #   regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
      #   sourceLabels: [__name__]
    # Relabel configs to apply to samples before ingestion.
    relabelings:
      []
      # - sourceLabels: [__meta_kubernetes_pod_node_name]
      #   separator: ;
      #   regex: ^(.*)$
      #   targetLabel: nodename
      #   replacement: $1
      #   action: replace

trace:
  enabled: false
  # trace provider: jaeger or otel
  # jaeger should be 1.26+
  provider: jaeger
  # set sample_rate to 1 to sample 100% of trace data; set it to 0.5 to sample 50% of trace data, and so forth
  sample_rate: 1
  # namespace used to differentiate different harbor services
  # namespace:
  # attributes is a key value dict contains user defined attributes used to initialize trace provider
  # attributes:
  #   application: harbor
  jaeger:
    # jaeger supports two modes:
    #   collector mode(uncomment endpoint and uncomment username, password if needed)
    #   agent mode(uncomment agent_host and agent_port)
    endpoint: http://hostname:14268/api/traces
    # username:
    # password:
    # agent_host: hostname
    # export trace data by jaeger.thrift in compact mode
    # agent_port: 6831
  otel:
    endpoint: hostname:4318
    url_path: /v1/traces
    compression: false
    insecure: true
    timeout: 10s

# cache layer configuration
# if this feature is enabled, harbor will cache the resources
# `project/project_metadata/repository/artifact/manifest` in redis,
# which helps to improve performance under highly concurrent manifest pulls.
cache:
  # default is not enabled.
  enabled: false
  # default keep cache for one day.
  expireHours: 24

lxh-015 commented Nov 20, 2022

---
# Source: harbor/templates/chartmuseum/chartmuseum-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-chartmuseum"
  labels:
    app: "harbor"
type: Opaque
data:
  CACHE_REDIS_PASSWORD: ""
---
# Source: harbor/templates/core/core-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-core
  labels:
    app: "harbor"
type: Opaque
data:
  secretKey: "bm90LWEtc2VjdXJlLWtleQ=="
  secret: "ckE3TUZYR2ZJRGRoeE93OQ=="
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUUwekNDQXJ1Z0F3SUJBZ0lKQVBZL096TE1lVnEyTUEwR0NTcUdTSWIzRFFFQkN3VUFNQUF3SGhjTk1Ua3cKTkRFNE1ESXlOek0zV2hjTk1qa3dOREUxTURJeU56TTNXakFBTUlJQ0lqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQwpBZzhBTUlJQ0NnS0NBZ0VBM3hsVUpzMmIvYUkyTkxveTRPSVErZG4veU1iL085OWlLRFJ5WktwSDhyU09tUytvCkY5dW5tU0F6TDY1WEEvdjZuWTBPTEkvZEFTRGprcWtCcElkVEd6b2dSNWY4VWlCNm9zdUVZN1Y3MVhaZHpXTHIKUGpuSnE2WkxBYW9LbXdHODBXNStXZDZWOFB5Z094NTJta3IxdzdJV0t6KzFaTEk1aXpicHBvbjdYVkdWUmFBVApSdk5aRGlKNkNlSnBjSjVINzIzbGtmNVJ2SldhdFpMQ1lJWURiUmZUaUtzeVEvU2xSY3Y1QlZmSGcvTEpTSDlRCkxHUmhQTUFSbGRsOXd5WkN3WlpESHhoZUk0YSsyNmFhOE1ZM3U5c3QvbDAvT282VkNUR3BNaUVoaUdGMkxWanAKVVdxLytCUDRTRkV2SmZxL0R1aW5JMTM5Vy81YVpaNy9Id1JQbG1ZVTZwWFRSTHlJZzdqZCsxOWZKd1I3WDM3cQp3MG84dDA2RmhqbXJDemFZQ1Vqb1JlcURtSGFObVpOL2Rkdkc3alpXQnUrak5oMFlhdnN5UXlDSVZtdjZ5cVNjCmpQaUQ5dWl2eHFUd2pKaWRJQlJmdVVyejNhRVJRN2NRZ2YwcWhxakl6Zmx6SGJGS2hJTG9jQldxN3p5Tmw5aHIKdlVHVC9XWmN3MHQvT3RNNzJTUGFwbG1UZ1ZiYlFSeGYyVkh6eXB0R0l2dHlkbFhLOHRoeE9NcFhvNGUrU2w4ZAoxZ2RRY0M0b2lzTjlGMjlvTnM4UDV5RlFQLy94WXV2OEM2MDduQ2oxRHpySWQ1YXZHL05WZktCL2ZiREtFRmdOCjJXaEhJblR6UExFY2pGNGZFcmNVQUV1V1cwYnVYLzZGSENHM2lUdHJxeUQ5MktUVkRmTjFKNTZycmNzQ0F3RUEKQWFOUU1FNHdIUVlEVlIwT0JCWUVGRmhOaFRvNFVBQzJQVXNmOGpZYVdqMTYwdkdFTUI4R0ExVWRJd1FZTUJhQQpGRmhOaFRvNFVBQzJQVXNmOGpZYVdqMTYwdkdFTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSWh2Y05BUUVMCkJRQURnZ0lCQU1Bc0V0VmxFTE13ZHRjaWZIZU9UMGtPbWY1d285SW4vZUZTZ3NjQ3pCTURhUngyQjNxMzZBb1MKSWw3WFdBWnBldmFSN1c3eWVBUkthQXNoQkxoeWdVcUxEMHpXYktsU045SHByZDF3ZHBNMGZmeVBwTjVkeE9ZQQplcjA0eTEyR1JuQ2JNWXFpNGN2enRQNFRpblhxcTJ5SFNZaExiTzlxa0k1Z2JXVnhrUnVJY01Ldml4ZGRsbE5ZClEzb2JKYURESG1vdk0zK2cvRysxWUZndDRxRVMzOFhuSjdCclNzaEhubjVFSVFoMjg2eGZKcml5cksyaEhiTEoKcXowWXVGNkczRFhQZVdHZ1h2ajBIaXBjMGY4VURaa0tray9lR0VJNnZFa3l0eXZvZXBvWkkyWGJBZi9aTXk1bgpLd3VoRW40aGhrRk13V2FTV3AvaDBRZE1DYXhrNEJWU09xbU5WYUxTQjcrRmpzSWo0Q2FzRm90WWl5SjJncFJCCk5mOFFhUzRiejBUbjFlQmJDOGtzaitlM1pXZVgyYjV3Vk1qcWw5alR0MlgxSUNzOEtLZTN2RUJranFUMkFVaTIKNTJUdEt6bTczYVdyei9HUHkvUTJMQ29yM0ZoOUZHVlNCT0JCRFhHeTZNSnBOSEpuWVZIOUVFTkZHT2g4NW9sMQoycEFET0JCNXZBVS9rTEI1TEhQajJrdWUvRk1pSGFObnJTWUlHck1sQlNYMmpqOUVZYTF1dVVIK3BkNE1CajFGCjV1SDhPUmlhUTZodDIrV0hrbHhpYzFSajV5VFlRd1ZsSDcwQ0JPbitxVkVkbzYzeVF3ekFNSktGSXdsR1VRRVgKamlsamdjODZxNGNadFVURnJjd01pZGJrKzhRNitKYkRWZzdIVi8rcG5DK3dudjE5N2t3ZQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS1FJQkFBS0NBZ0VBM3hsVUpzMmIvYUkyTkxveTRPSVErZG4veU1iL085OWlLRFJ5WktwSDhyU09tUytvCkY5dW5tU0F6TDY1WEEvdjZuWTBPTEkvZEFTRGprcWtCcElkVEd6b2dSNWY4VWlCNm9zdUVZN1Y3MVhaZHpXTHIKUGpuSnE2WkxBYW9LbXdHODBXNStXZDZWOFB5Z094NTJta3IxdzdJV0t6KzFaTEk1aXpicHBvbjdYVkdWUmFBVApSdk5aRGlKNkNlSnBjSjVINzIzbGtmNVJ2SldhdFpMQ1lJWURiUmZUaUtzeVEvU2xSY3Y1QlZmSGcvTEpTSDlRCkxHUmhQTUFSbGRsOXd5WkN3WlpESHhoZUk0YSsyNmFhOE1ZM3U5c3QvbDAvT282VkNUR3BNaUVoaUdGMkxWanAKVVdxLytCUDRTRkV2SmZxL0R1aW5JMTM5Vy81YVpaNy9Id1JQbG1ZVTZwWFRSTHlJZzdqZCsxOWZKd1I3WDM3cQp3MG84dDA2RmhqbXJDemFZQ1Vqb1JlcURtSGFObVpOL2Rkdkc3alpXQnUrak5oMFlhdnN5UXlDSVZtdjZ5cVNjCmpQaUQ5dWl2eHFUd2pKaWRJQlJmdVVyejNhRVJRN2NRZ2YwcWhxakl6Zmx6SGJGS2hJTG9jQldxN3p5Tmw5aHIKdlVHVC9XWmN3MHQvT3RNNzJTUGFwbG1UZ1ZiYlFSeGYyVkh6eXB0R0l2dHlkbFhLOHRoeE9NcFhvNGUrU2w4ZAoxZ2RRY0M0b2lzTjlGMjlvTnM4UDV5RlFQLy94WXV2OEM2MDduQ2oxRHpySWQ1YXZHL05WZktCL2ZiREtFRmdOCjJXaEhJblR6UExFY2pGNGZFcmNVQUV1V1cwYnVYLzZGSENHM2lUdHJxeUQ5MktUVkRmTjFKNTZycmNzQ0F3RUEKQVFLQ0FnRUFrOHE4czRQcnZZYnk3OVVWbFdKTktxY2V5a3dCa3hFMWZqcllPUldRMmhpQWlyeEdWNSs4bERULwprNnVqbTFFV3diNUswSHh4UktrYitQRWExSHFOTkhFNkp4TnBKS0s5ZXhEbFlBUSt4N2RGQnFWci8ybmF6bW80Ck1COE1MWWxtSXp0V1dvU1l3ZThvMm1FZzRxK2J4WXM1SW1kdTdBa2hFN2RKNjNobTIzZ0xNZmVNTGFsUnFvcHUKWEJQd0U1blhQNmFHdVVOSHRHMUs4dFFKRGxaWStMRWJBZU9mUmVOUWhUOU5kUnVrWVNXNTc5dmZLYmxKclN2egp1bGc4OXNWbTNjV0VLNXBCNnJqOXdKYks5NHZvS2Z0VnFiYnVCd1dqZDFhOXBpYktod1ZCZTJMMkZXaHBTWmM1CkYvY29DN25qVGFZVDZ0cjkxeTVWaGhKaElaUUNmL3Z2NFpsNVhoRkhzNVZUWk5iTS9PZnF5RlFMWVhWSk80OEsKRjd0bWF6QUVRUUJRd1ZacUg5QzlOUWR6UEhXbWMzOE9raHRjMXd6YXFuL3JnOSsxc2dBTUQ4aFdDdFFKVWU5NwpiOXltaDVBMFo0UVhLcHlGVDBiK3BYY0QxalJoYTA3VXRrWCsvekxKOUhwQVhjVW16a0crajVDWE5wbnhzSXE1CmZKRmVxM2hCajl3Nm40aCs1ME00VzBGc2U1WW9FVXNjM0IwZno4QmxRQmIrWUpMRkxOSDM0TUg4cDFsMFpEWUoKeWFlMHBzeGxCaWpnNE9QWitXQ0JhK2p0Rlc0TGlXZ0VjeHdnejh3K2hFT0FRcjJhMURjN3c4amQrWTRJSzhVbQpsVFZzNWRicDRtT21QTWxSdi9HTTdrRHVkRnFiTWczWUZ3WGczUWJxdVZxTFp6RXpqVmtDZ2dFQkFQSktaYkNXCllmTGVqa1MvZmtSeVYzVkliNTRtS3dRSG9NV3ViODh0UGdHdVh6anNKeWQ1UVRRNThQcFVqWHJMSG1uOGxTMisKdmlFOEdKeWxLd04xeU1sWnc0MCtrWmhwSFVwQ1d4LzJaS2pBcXZxQTlPT0tvMmZ2NkhkL3dPQW5VNEN0aW9DMQpwcmk3bEtGWVhvUDhEdFFWd0hZdkl6Q1JxRG5oYzRtd0pEcXpUQzl4ZHVJK3N2eHpsNHhIODJmeDBqclBpRlkrCi93T2RYanlmSVBqeWhIQzRqUFRXYmFpcndYUzlkQmpTbDEyOGFJUlQ1ODAveVhFL1NZQXVnZzA1akt0ZzV6UUEKU28xM01UZXpYUkhYZE8wZGkzdEVNSEdSRUVrRnBlVm5uUFF2Q0NlZEswRFYzNmlOd2lXYzhwd2RmTE1WbmVUdApES3daZWRDeCtvLzdldjBDZ2dFQkFPdTQ4REdFSkpKekh4VlI1bVkxSzJBbFp5WXRwVE9XZWhLMXpYNzRKdk0zCll4TjRuZCtaeDVuOXVTUG1tS3pxRjNUVSs0NFJWdGRKSzZlam9GRThkTURUTldhU0xXL1pEbU4xblQwbmp2T24KSVdKbjU5eW5PQ2hXV0taZ1haLzlVcUdSN1B0Nk94U2trZXg5Yy9mWUJzTVgveHVzZFhRaWdlb2dsMGlPWVZGVwpnWElpaUxSTEhwSEpzSy91TnhJaXpqMGhUWVluN3VEN1BSRU53RlJjQ1lmOEoxZVVGYmQ2RHVDVldlUUNLV2dmCk5kMnRTV29pMFZ5bGo0dVVYOEl3MHRqTE5NRDVDUkVKRWs0R1N2NEVEU212VWR2MUxpQktKQ0wybEVjZ29QZUMKb09EMmlDYzVLcWdubVFyYVJpbEZGazhSVlhBOVBXWkdZM0MwYjZUVm1tY0NnZ0VBTlpPMkFPS0FMbENBYlR0YgpGSStrUDA4UlA0dDVINThBTWpac2l3ZWFHbzBRaVduUERxK0ZkNk1JWXBLbjVtdGNBbHZVTVJWb3ZiaW9TSnROCmM2cHNCL3BOZjhKQ044Mm1xSEViN1dseXdNNDZBTUxiWkNXWUZMZThWQkJ2K2lFNEdkQkdQRWZ1NGhLNHZ5VG4KWVpBdlJ6NjRIR280QWRsenRiamc3NlYvbld0Z2dXMDV1TFhjcG01NUtKQVFodisyV1VMakJ3OVBIT0dEb1N3ZgpBbTIrVTU2N3JMaHQ3MHByc1FEajEwbGFKMlF1U0hTMVlYR2xmZUZjdzNlRlVwOVROK0pwdmRvQ29sMmxDSWdsCklIamdaajZPUldmQ3Zwb3hXN1JnQnVadWtxQ0QwUjYwSGRZdGF2eE4zanRpZXBzYXBBODNweE8wSmFwTWdaV1oKcnBVUmtRS0NBUUJPY0V2OUxpdTlUL0dYOXBqa2llelZJWjBoWnk4QjY2RFRlUXZZcEZyUnRDeVQzaDhxdU5GaQp2THRPNXYwSERSNmhFZjVqV0FHOXdldDA3VTM3dWxKZmwraTlLUWRWb0xUWkE5bys3MXJ5V1RzU3MrREQzQ0VqCnl4ZlV4VnhpVUxtZWFpQ2h6aHE2MDhoN0dZ
UHRoVVU2eGxGdHRBV2hqNW9MZnF6WXlBZzZPTDc2YStOeG0wMmcKMWF5bDNtOFU2ZUFYRjIza3BvVW0rSE5wcVZuR3VKbXpWb1VBNzVZS1orTnJlRWRoU0JiZlB3TjlzSnd0WlVpbAp1N0g0a0hjTTk1SXg4ZXlzQ2pLcUtJcWV6QmxJVGJEVG5qTnZMamNiSjVDKzBhNmx2SVhUMXZRUjUvZUdsYzlNCkJXRTM2MHBOa1YvTEQ4bU9mOUplcGkyUTQzb0RMOUVoQW9JQkFRRFRXSW1meTBLOWdHekEyclB5MTY5bVdZUUsKT2xjbkQzK2hRcTZ4NTFabjFlL3RleEZlVmxoSG40cnJuUmRDRk9BcDQ3dUZrSjJtNzJHQ1ZENzRFd1F1Y0s5eQpBRDVqb3JxZ1ZIcUNLWmRrSGpiMlY2ME16bTZnM3J0TDlXSlhGVkx2TkJiL1FHQjJ2Z0hWT08wenFpcUdaajRlCkV4N2wybS8vNVNFNERMdG43MEo5Q2dHMUh0WENTOGRXckdQTDFwekRuazhWWHRub1h6YjBMQ2hMVUZFZ1pSbWgKY1Y2QUZIRUsySDh3Qkh2aU55ZWhzUlFpRGtsMkFpV09jSk52a3pXNjhjazJuSmpSV3lQWUsxSkwzTkNLcEIzUQpPb2hyUDBmSGNXQVhNVzk3d0ZYWmhSZm5RZkR4eElPbGozTWNZVDBBbGFuWGQwRjROR2MyTnZtcGh4MDQKLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
  HARBOR_ADMIN_PASSWORD: "SGFyYm9yMTIzNDU="
  POSTGRESQL_PASSWORD: "Y2hhbmdlaXQ="
  REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk"
  CSRF_KEY: "YnhxcTd0dzRPNkVrUUpFdFViQzRGc2JHNTNwQ2ZHZmk="
---
# Source: harbor/templates/database/database-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-database"
  labels:
    app: "harbor"
type: Opaque
data:
  POSTGRES_PASSWORD: "Y2hhbmdlaXQ="
---
# Source: harbor/templates/jobservice/jobservice-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-jobservice"
  labels:
    app: "harbor"
type: Opaque
data:
  JOBSERVICE_SECRET: "aU10WlBmcUwxd0MxVldSVw=="
  REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk"
---
# Source: harbor/templates/nginx/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-nginx
  labels:
    app: "harbor"
type: Opaque
data:
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMakNDQWhhZ0F3SUJBZ0lRVnVUenRIRkNDdmtNbWF3RVFxcjhRVEFOQmdrcWhraUc5dzBCQVFzRkFEQVUKTVJJd0VBWURWUVFERXdsb1lYSmliM0l0WTJFd0hoY05Nakl4TVRFNU1UTTBORE14V2hjTk1qTXhNVEU1TVRNMApORE14V2pBWE1SVXdFd1lEVlFRREV3eG9kV0l1YkMxNGFDNWpiMjB3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURxS1MvSytxTDlhYmVJc0hmSXhVMnFqKy9lUzNQNDI3QnVXR1V6RnoxOFlpbHUKbENkR2Z6b3ZxWWNudVVSaklhSUVPaEI4b3Jsb05kZXFBR0lNRmdpcEVmSk5EUi9md1UyZVJrbGNPNk9CZzZwMgpwQmdGV05IZ2JGTTV2M01Rbzl4Z1NrVDZvU3RDZzNRRmRwUmJUQ2F5MWFPalY1YXo1WFFsVWovRklnRnZRT2xsCjcra1hxT1VOTHVnRlEwK0xyRS9hM3pVYzRkbWtJRUFHaGZWNXJ5QTJtdGRPMDZCbGNhUWp3Z1BoU3VSNHJ4MzIKQ0l2UFVXeGgzTEtJODVBR09jTjFqRmlpcFZ0cTdzalo4aFV0WEFxalU4RHZtTk5mT3h0UXJPbDNRL25EUDhFcQorYXFpcDhXbFRTRU5IYTUrdE5zWk82aUUyWDZDbytKZGNhQWJzdEhwQWdNQkFBR2plVEIzTUE0R0ExVWREd0VCCi93UUVBd0lGb0RBZEJnTlZIU1VFRmpBVUJnZ3JCZ0VGQlFjREFRWUlLd1lCQlFVSEF3SXdEQVlEVlIwVEFRSC8KQkFJd0FEQWZCZ05WSFNNRUdEQVdnQlEyWlJqenJuZGp0WkZNQVhJYTFVOWpQOGxXcURBWEJnTlZIUkVFRURBTwpnZ3hvZFdJdWJDMTRhQzVqYjIwd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLdFR4MnE2Q3E1bFVxa3hoakVkCi93Wk00QmFDVmpTYVJSZTlURFNONzBTdlNjZ1FjZTJoTXk4a0ttbjZ4cjRxazNFZ3ZRRzhwSzBrWEZqZ2FSL0EKQnVYZnZJVEdkM0k5WlpqbFpuU3hnUXNVT0JEMGtVSDVWdmtpWVFZcm1xczh5NVBtbVRKVTU1NXZqNTA1emxlcgpBaDdrTm40b2dVVTNwVThlUjZHREpmaFBUWVRrakJSdlNmMFdsdTRxMlZndVZZSWlERGhUL3NoajQ1OE8yZlY1CllBMTR6RGhQLzRKMSsrckljbngzVmJtS1l2bjh6R05pK09TbGtNQUdqWDlLdFJvaGJtcVBMN2J0ZGNHZmJicEUKRzJ1a3djN1pLKzZ3Rm16T1JDUTdWL1RpRXYrRmVyWDBiRnZIaW52OUxpd25oWGJ6WVFaa20rd3pkUkRRYk45Rwpac1E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
  tls.key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBNmlrdnl2cWkvV20zaUxCM3lNVk5xby92M2t0eitOdXdibGhsTXhjOWZHSXBicFFuClJuODZMNm1ISjdsRVl5R2lCRG9RZktLNWFEWFhxZ0JpREJZSXFSSHlUUTBmMzhGTm5rWkpYRHVqZ1lPcWRxUVkKQlZqUjRHeFRPYjl6RUtQY1lFcEUrcUVyUW9OMEJYYVVXMHdtc3RXam8xZVdzK1YwSlZJL3hTSUJiMERwWmUvcApGNmpsRFM3b0JVTlBpNnhQMnQ4MUhPSFpwQ0JBQm9YMWVhOGdOcHJYVHRPZ1pYR2tJOElENFVya2VLOGQ5Z2lMCnoxRnNZZHl5aVBPUUJqbkRkWXhZb3FWYmF1N0kyZklWTFZ3S28xUEE3NWpUWHpzYlVLenBkMFA1d3ovQkt2bXEKb3FmRnBVMGhEUjJ1ZnJUYkdUdW9oTmwrZ3FQaVhYR2dHN0xSNlFJREFRQUJBb0lCQVFEa2R6RUpZNGY1cHRjSwp4OXJuaWhKZ016WC9kekQ5QXlSZVZXTFArWUhsUlVWVmZibmdYNndlNnRKUzRNR24weDNuNzlzOEwreWtqN3dQClEzWm1nbTBLd05iZmRNbS9vRFFpRkVQcGVNcnNSOUhmRmZYVjZiWkV1SXh2VUt0czllUEVFMUxBRXRaSmZFYU0KT2dsQUZzbS9QUkQvQXl1bjlGMWhPS0c5cnBNbHZwcWpQdVZqTHlVUVk2aXJZeDJUVW5SVWN5MzlobThkaGJ3WgpQMEoxdm1ZV1BsMWpENG91MzJjUml0VHNKdjQ1aDhiVStOb25wTE03NzRDTFNsOW5XZk0zcGRleWlsM1EwSWNNCnZ2TnhzQ0JCU0pXYTNCTFpucGx6WlBvQUlOeUNuRkFRZnUzUmhDYWlPWkFUeGZlV1o5Vm9iK2hSa2g2cXUvRjgKZEx1NnJidWRBb0dCQU93RnN5Tzh0b0hmSVZYSVVIeHNLUHAwOW1BbnFMOE1ZUGtEanQ0dEJQZzlvL3pRSlE4ZwpsU3hBMGpSV1lFWEZaMWYrSWQrU0JxSytER280OHJ0NEZzNGpORlFNQ0xWQmNTV0ovU0xadDBLN2FMSkZFNzlkCkVCeCtodEoxNEcyalpBYXV6V2srNnVFSmxzNmdGQ2hVMk5OZHVqM2Z3MTNhUGlSUDdKclpoOWhiQW9HQkFQMzcKSnptbWlESWpXVDBIcnBSL1lOY2JJZ05XSFp2Y0xUMm53elhCTEVucGhFYmt5UDUzOWRyclBJNmxHSnlESkRoZgpDc3pzTENjaDgzUzBjdlFDSW10bWduRThUYUFSWTArZjdJM3FMT3NaZ2NxL2Nab2E2WVZUSUQ0NldpWC81Z2ZyCk5UR3lwZlM2cHFXVktDRkFqTTNzZ3cwUFFsMnpaSTF0aTFSVXpISUxBb0dBVis4enMrL2VTM3EzYjc5bkJVUkMKN0RWaGlhZWMvamo5ZENrNE5GeXZpbEZrNUk4a0Q0UERNQzZpWW0rQUJONnNSeU5ldVc0dFhMclQ0MHQrWEF3OAo2dVpBMjhOMEJ4dnZyTjNCV3hla2FJOXNUV2JoR1ozeG43d2dQUTgvNDNsSmpoZllYZ3JiOWFYZDh2Ty9MdHVWCndRSWRubW5jM3YydHcxZW52blduWUJzQ2dZRUF5MWhsUEdRWDBVUzU4d0lPaGJoQ1daYlFzYWttQlo3UDNmaE8KYytCWXpFaVpib1ZJWTJ5Nk8rOXhTYWZuUDVCRDFKcEx0R3cyb1pJdU1MYzAvaVlqVnFmNU1oNGM3RlpmaytXdgo2S09nR1E4UCtyc3lnamFmMXU0Sjk2aVNlckFhaFNhTkxXQWlQUUdmNFJ5OXgzbStBalVsYTdSVzUxeDAya0xsCmRieDYvNmNDZ1lFQTRHTEdQc1ZCci9CZ0RPaUN0dzRuTEtZciszdGE4TzZHRnhOREoyQm85TkM0d2wyUGhrQ00KcVhzaXNCNTV2OFpaRVNEOStwZE93Sm42bkk0QmlTTUZsdDRFL0tWakZ1eFRYR2J4MGRGQ0xkT3EvU2hra3pacgpqSU1zUmYvem5SMVMwdW81LzRZUGt1OFpzSW5qUlRZU3JTaUNHSDN2a1NnZVVNRi9JdWVpY3lRPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo="
  ca.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGRENDQWZ5Z0F3SUJBZ0lSQUs5bUxxV2QySDg5VGJKb1R3czV4M0F3RFFZSktvWklodmNOQVFFTEJRQXcKRkRFU01CQUdBMVVFQXhNSmFHRnlZbTl5TFdOaE1CNFhEVEl5TVRFeE9URXpORFF6TUZvWERUSXpNVEV4T1RFegpORFF6TUZvd0ZERVNNQkFHQTFVRUF4TUphR0Z5WW05eUxXTmhNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUFyU0FmNnVybnRLSVNoK1UvM2g1SThmT0x0NGJhUmh6c05iKzRHRE4yZTdwV0VlTE0KeVBSd3JHazRCZlhDUTljZzJHWmVkWHF2U3lIWDZIR0I0RUJwWElCNXlOT2ZXZFBBMzdmNnRjMGw4WFh4ZzhBMApQQ21HazRJNnpnNm15UWNUR0VzRVJMVExWcEY1cFFia29rNENQOFFmdjZzNGJ2ejFCV25WNThteW1JbkIydXFXClpJaTFoQWR5TTRldnZwSEZERjhtRW03RkdEQjA0eHk3ZU9uQmRKeUNtNDhId2RlSmFna1did3lzMGZ5K3IwVWEKcE1zZ2gvT2lmcm14TC9yUG15STVHeGkyUEtwbkFxbzNIUlJuQWppMkx3Q0ZNV2xLS1E0dkErK216aDY4M0lqaQovL0R4cWdkTXBLaitFMUdiNisrWSttN3N4SFdJRVA1SWkwUE1VUUlEQVFBQm8yRXdYekFPQmdOVkhROEJBZjhFCkJBTUNBcVF3SFFZRFZSMGxCQll3RkFZSUt3WUJCUVVIQXdFR0NDc0dBUVVGQndNQ01BOEdBMVVkRXdFQi93UUYKTUFNQkFmOHdIUVlEVlIwT0JCWUVGRFpsR1BPdWQyTzFrVXdCY2hyVlQyTS95VmFvTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQkJMT2pTb0lkTVIrZUlXb3NlcDZNN3grT1ZFcERKU3VjcmdpaStVZlg4WU42VWlpL2krdkVICmJsU0FjeDhGUGJod2l2NFl4cmJZZENsZ2VjQzR5czY0dWpZYlFsY2k2TmVnRFV6dExXdy90M2RjTGREV2tIMjcKU1NXTmFzMS8yT0tOQU16WVQxczJGbmIvbTBmaTZJMXFnMFIvckdXaG1BZjl2b05Ga1o5UytXNkt0ZVB3UWxsKwpQTi9KUUlrcVhiZlFGM3QzQzZFMXFZNmg5dUg5UU5MZE5BMVJ6bkQwUm44c3IxUnd5UVY2azBFRi85blpPcGtWCnVqdWhVSG15NXBhOWRNTmNSZUNzSWV1ZllocGpONVEraVltclh6ZW5sZmlSMDJzNlFKZFo0NzV6S1BqZmx5LzIKRGVvbHlWc3pzN2tJWnVDMXNySW9MVmlVUHcxcGo3dU8KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
---
# Source: harbor/templates/notary/notary-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-notary-server
  labels:
    app: "harbor"
    component: notary
type: Opaque
data:
  ca.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJakNDQWdxZ0F3SUJBZ0lSQVBPT1JVUXdyL2Rpb3FKS3RaNzhJRzB3RFFZSktvWklodmNOQVFFTEJRQXcKR3pFWk1CY0dBMVVFQXhNUWFHRnlZbTl5TFc1dmRHRnllUzFqWVRBZUZ3MHlNakV4TVRreE16UTBNekJhRncweQpNekV4TVRreE16UTBNekJhTUJzeEdUQVhCZ05WQkFNVEVHaGhjbUp2Y2kxdWIzUmhjbmt0WTJFd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFERWtoZElKZlpyL0tiMndMUjl3MG1iZ3p5dGpFNnkKNVY5Vnc3ZFBjNnRVY2V6bXFWVUQ3cWlkQXAyNlgrYW9PTm1QNSs1MU9TdmJGY0VuajJtVkM1YS9wSWVHMlM5aQpiZXZPQ3YvdG5kQzJ1YVFsTTRHYVAwRm9tcmpVRmRsWU1ZdmZiVHZIV05JV0JqRkp6bVMydjBJRnhNTWNyUEo1CjRaRzY2VGtpUDQyWkFyZmt3WTZ0N2NVVS9RZFJsR2pDUXdzbkNveDYrNXlISWxPU1VIdkdsTXlnYjRacGZjbjgKNjBOc2drZmFYUEx6OFVjTi9rR3hmcXQyVmJteFB5Tm9CVjkxSjg0NmlsYk1kMitCdFlZL0J4NUZETnRzNmdSVAp2U0dYSHBaY0IxVGxDMmZIdUhQMytHbldZL1VtdTNJNWtSamk1U3FUSjRUWW0zVkNWQWVzSkRCZkFnTUJBQUdqCllUQmZNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQWRCZ05WSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUgKQXdJd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWJOL0RtUFB4OFVnZzhpNjljUDAvZlhyVwpZKzR3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUVVcklSaGxmRnk5eHYwd0QyNlpSUjFPN0RBcGthWTlEN1lmCjd4eG1iaDYwcHhITVlldGJVT1lnVEJ3bldud3lpd05oSm1OL2hWZ2I1cXdKWVAydVRBa1pRNVFGUmcvYXVINGMKQ25yTXVFd2JyY2xWd01FWnJlRWpMeTBjejdGbVFrQk4zSnhJeDVpRTJWT3V0aDg1Ujk5M0RTRlhrcjNCcDk4TgpqNkQ1bDVPcURQOGJDM2hIRHRhM3ZPcTgyMTFwUVA1ZkUrc1cyTUkrOFFSMDdDOFZkUGEwSm5qUHFWdnpzWnJiClJ0a3c4YVQ3Y0t4MFBiRmlyckpiUlc1eHRuZnVJRTFRbENJWmVQeVNhMlFMNUNhVm0rWWN4LzZxNE9WdWJEa2YKQ1pGLzltQmx5UWVETmUvWGhzK1lMT1JNVlIzVWRvU0lOMUVValRFdDNhSGZVOFRpZWQwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURWVENDQWoyZ0F3SUJBZ0lRSFdZNk5MUGhqdDFUUzRDR2wvanF0VEFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWUVFERXhCb1lYSmliM0l0Ym05MFlYSjVMV05oTUI0WERUSXlNVEV4T1RFek5EUXpNRm9YRFRJegpNVEV4T1RFek5EUXpNRm93SmpFa01DSUdBMVVFQXhNYlkzVnpkRzl0TFdoaGNtSnZjaTF1YjNSaGNua3RjMmxuCmJtVnlNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTI3WlJmWit2T0Q1QlBwd1UKSy9SODBVV25NMnQrU0M5ck5ncE9LYjd6N1d5cy9NSjR0dFoybHRqZmJHSTBYSlJ3UkJzNHAyeEd4T0ZlVVBjOQpCVGIvYS8yV2lLQVNPMXdEL1JNbTIzTlUrR2ZDNTlMc2dkaEFKTVFtZENFQjh4STMrc1k2ZmpHaUtpa3paOGM0CnNvRXVieWh2MXdrUHJFdkMwYXhBTzdnRGpNUzk4QkJ5eForMWpMMS9ZV2xZS2xOb0licmY5Ym9xMWxMandMRFgKVjlERnZrQWc4akhFTExTUVl5NU81SHF4dFlrYXQvaGRSWG5HcHNEb3J1akF6dWJtbWFmVmk4d3lEY2xjOG85aApEUHZvN1pQVkx1WXhCRVp0eXdWYVFGenlvRlJwOVRpdkhWdmp0bTM4NlBaQVlCRlVnRHl0elFIYjc1bzNuVXdGCkQ1aTFzUUlEQVFBQm80R0pNSUdHTUE0R0ExVWREd0VCL3dRRUF3SUZvREFkQmdOVkhTVUVGakFVQmdnckJnRUYKQlFjREFRWUlLd1lCQlFVSEF3SXdEQVlEVlIwVEFRSC9CQUl3QURBZkJnTlZIU01FR0RBV2dCUnMzOE9ZOC9IeApTQ0R5THIxdy9UOTlldFpqN2pBbUJnTlZIUkVFSHpBZGdodGpkWE4wYjIwdGFHRnlZbTl5TFc1dmRHRnllUzF6CmFXZHVaWEl3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUUwYkxOSmF2VDdRTHhKcFUveGNMbHB1S0plamVTMzYKY2hqQktRTFhVVFZ1aUpJRWV1b29HUyt4bHltaFlpVktYTmE4dnJBdks0NzlwaStXWEhySTJtT056S0YvSStZdApYQmsyTHdHeUgrWG5BSUd2ai9jcVpYUk5VaExtd0tjM3dFRkxURHVmaVZad3FzSHVGcjZ6TVBQUk9kMGtOYVZKCjMyYXZPbDB6RlMvTUwwM2xGWGxORU9yc0QxK1FEdm41YnMyeFJEUEcrY3g5WlQ0U1JmQjduVmxZbldMZWNkK24KQWpkWC9NWjhGOGR4N2JWRzNyTHdCdzZqdHpVSUsxQlNBRU9QZU5xQm5xMGRneTB3TklVMUFYZ0pRRTJ6bDJaNApuNExEZ3ZIZmI4ZEpUK0c0L2RjU0VVZ1Z6ZTR6c2hFZmExSEFTWFJ1UWhQY1hKSEhKWGs2S25NPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg=="
  tls.key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBMjdaUmZaK3ZPRDVCUHB3VUsvUjgwVVduTTJ0K1NDOXJOZ3BPS2I3ejdXeXMvTUo0CnR0WjJsdGpmYkdJMFhKUndSQnM0cDJ4R3hPRmVVUGM5QlRiL2EvMldpS0FTTzF3RC9STW0yM05VK0dmQzU5THMKZ2RoQUpNUW1kQ0VCOHhJMytzWTZmakdpS2lrelo4YzRzb0V1YnlodjF3a1ByRXZDMGF4QU83Z0RqTVM5OEJCeQp4WisxakwxL1lXbFlLbE5vSWJyZjlib3ExbExqd0xEWFY5REZ2a0FnOGpIRUxMU1FZeTVPNUhxeHRZa2F0L2hkClJYbkdwc0RvcnVqQXp1Ym1tYWZWaTh3eURjbGM4bzloRFB2bzdaUFZMdVl4QkVadHl3VmFRRnp5b0ZScDlUaXYKSFZ2anRtMzg2UFpBWUJGVWdEeXR6UUhiNzVvM25Vd0ZENWkxc1FJREFRQUJBb0lCQVFDVGFMVXY5YTBYclIwVgpKcDZhQndTZlczNGNqNmhBUWlCS040dk5rbUlTRDNIWWU1bUdBa202VjNNL1Fud2pDU3h4WlZvVGFoL3BBOVBGCnVydGova3hNWmUxVGFTSVNWV0FQc1lQR2hKQ2p3T29SbzRBdWRhcERnSjdVRnprNW5pV1V5TjZCd0VjdXhrbkkKL083YlJMU2t1NXFkeVdTTWtwLzVFUHBmbHN3NjZwNVlPRy9TS29wd1owbHczZk9rV0dKWFN0M0pmTXpzSEpBYQpHQmRVWTZ0NkpzV2FLWU00bXJzaW95WUxkQWhPajV2eDZ6UGpsYit3VnQrcmRkRHErVUh0WEhRY0hqL1lRb1pwCjdLeTNWN2IrTGlWc1JKbTdWeEdNWklDSG5CWnNqN0ttR1habE9xejlqQjd0NXNZZUdlb1JaMlVaa0k5TlFBdmQKNStydVpEQ0JBb0dCQVBwWW1uVTZqZWNUaTR1SE0ySGR5N1g2c2dPalRoUEpFMjJjMXdLcy94ZU15dHFLYmFtQQpVbm5RL0QzUnF5cUdSWFlEMzFpZHY2T2ZQc2lGbzV5aDE5RDdCSXI3ekJRSDVOWHVVNFNvZ2NxRGM3M2RnNEpJCkt0TS96aEd0NWVBeWx2N0NuajJqeko0QmhGV1NUT0lFdWxBZ2J6WVQzc2NncGNhd1IzcVpuWDNaQW9HQkFPQ3MKbWpjd3ovMUtKMXR0ZE4rUlNoM3BKbFowL2FFNnVNNDJ0cXNWS1NGRFk0M0hpNnI0Q1ZZNDlyQThaMVlUdjgyNgpxSThDR2NPYWo1a1kxTkZ6UmVhTXYvOWZOVy9QUkdFZmxIaG5HeXRSS2p4UE9ERkhpRFIyR3NMZklLdkt3UCtHCkk0MVM4TitocFlVam1tSGJUSWxHVGdHV0RnTCtaY1REM2pidk9oZVpBb0dBZHhBbXZiUlFndUx2empkSi83U2QKeXZubEtnZkQvWEwzVTMxeFlPdG9Fd281L0FBME1aWS9JNEo2Uk9od0VMUjFXckJ5eTlHU3NEM3ZmU3paNkllcwpmYzYwbFdrMTRSenovNjd1ZDd3d3BtRW9iZGRwVVZBRFZoOFZZYUVrSUNIUFlIQ3RFOEhRY2lGa2o5SVowTERRCjc0VE5mNW1wcldqZ2p3T05xMGhkOTlrQ2dZRUFoRE9zRHR1dkZ5aWlIZjg3TFM1TndXbm1nQ3NZN3QwaHoxMk0KZ1FEVGtkb1lZMTNPaGt5ckdjd3RCQ2lXMmFTSFVTUUxlUkxRWERPL1dXT3VIb3pJemM5MlRtc1VnM1VmbEZMRAp4MUZNUUdIakMxZkdCZUZFZVRJaHJ4Y2lIMVFQMm90d3NnRGk4WVFwWkQwOTQyVHFGVUNFT0JTMnEvWkxwY3RuCkgwWUhGY2tDZ1lFQXlnZ2w1eU5jbDhaN0ZCRG05Yi8xS0pZTDlvb2JrY1I3TnIwa1lUOEFPQVRsY2wzK1RUVXUKR2dsNjZhbGFTVTdmWnJQNTIyVEYxMmRaKzdraFdCUGM2SGcxRG1yMi9qMzVKSjl1ZGNXdGp5N3lzb0FVUzE5TQpWcHYxQUR1RGdndGlyK0hHSFRDRDhxSTlCQnpwNUlCaWR6ZzhJR1JLMXpFNllWUEF0SGgvRHhZPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo="
  server.json: ewogICJzZXJ2ZXIiOiB7CiAgICAiaHR0cF9hZGRyIjogIjo0NDQzIgogIH0sCiAgInRydXN0X3NlcnZpY2UiOiB7CiAgICAidHlwZSI6ICJyZW1vdGUiLAogICAgImhvc3RuYW1lIjogImN1c3RvbS1oYXJib3Itbm90YXJ5LXNpZ25lciIsCiAgICAicG9ydCI6ICI3ODk5IiwKICAgICJ0bHNfY2FfZmlsZSI6ICIvZXRjL3NzbC9ub3RhcnkvY2EuY3J0IiwKICAgICJrZXlfYWxnb3JpdGhtIjogImVjZHNhIgogIH0sCiAgImxvZ2dpbmciOiB7CiAgICAibGV2ZWwiOiAiaW5mbyIKICB9LAogICJzdG9yYWdlIjogewogICAgImJhY2tlbmQiOiAicG9zdGdyZXMiLAogICAgImRiX3VybCI6ICJwb3N0Z3JlczovL3Bvc3RncmVzOmNoYW5nZWl0QGN1c3RvbS1oYXJib3ItZGF0YWJhc2U6NTQzMi9ub3RhcnlzZXJ2ZXI/c3NsbW9kZT1kaXNhYmxlIgogIH0sCiAgImF1dGgiOiB7CiAgICAidHlwZSI6ICJ0b2tlbiIsCiAgICAib3B0aW9ucyI6IHsKICAgICAgInJlYWxtIjogImh0dHBzOi8vaHViLmwteGguY29tOjMwMDAzL3NlcnZpY2UvdG9rZW4iLAogICAgICAic2VydmljZSI6ICJoYXJib3Itbm90YXJ5IiwKICAgICAgImlzc3VlciI6ICJoYXJib3ItdG9rZW4taXNzdWVyIiwKICAgICAgInJvb3RjZXJ0YnVuZGxlIjogIi9yb290LmNydCIKICAgIH0KICB9Cn0=
  signer.json: ewogICJzZXJ2ZXIiOiB7CiAgICAiZ3JwY19hZGRyIjogIjo3ODk5IiwKICAgICJ0bHNfY2VydF9maWxlIjogIi9ldGMvc3NsL25vdGFyeS90bHMuY3J0IiwKICAgICJ0bHNfa2V5X2ZpbGUiOiAiL2V0Yy9zc2wvbm90YXJ5L3Rscy5rZXkiCiAgfSwKICAibG9nZ2luZyI6IHsKICAgICJsZXZlbCI6ICJpbmZvIgogIH0sCiAgInN0b3JhZ2UiOiB7CiAgICAiYmFja2VuZCI6ICJwb3N0Z3JlcyIsCiAgICAiZGJfdXJsIjogInBvc3RncmVzOi8vcG9zdGdyZXM6Y2hhbmdlaXRAY3VzdG9tLWhhcmJvci1kYXRhYmFzZTo1NDMyL25vdGFyeXNpZ25lcj9zc2xtb2RlPWRpc2FibGUiLAogICAgImRlZmF1bHRfYWxpYXMiOiAiZGVmYXVsdGFsaWFzIgogIH0KfQ==
  NOTARY_SERVER_DB_URL: cG9zdGdyZXM6Ly9wb3N0Z3JlczpjaGFuZ2VpdEBjdXN0b20taGFyYm9yLWRhdGFiYXNlOjU0MzIvbm90YXJ5c2VydmVyP3NzbG1vZGU9ZGlzYWJsZQ==
  NOTARY_SIGNER_DB_URL: cG9zdGdyZXM6Ly9wb3N0Z3JlczpjaGFuZ2VpdEBjdXN0b20taGFyYm9yLWRhdGFiYXNlOjU0MzIvbm90YXJ5c2lnbmVyP3NzbG1vZGU9ZGlzYWJsZQ==
---
# Source: harbor/templates/registry/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-registry"
  labels:
    app: "harbor"
type: Opaque
data:
  REGISTRY_HTTP_SECRET: "aWFZWjdRUGJ4TEg5eTNWNw=="
  REGISTRY_REDIS_PASSWORD: ""
---
# Source: harbor/templates/registry/registry-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-registry-htpasswd"
  labels:
    app: "harbor"
type: Opaque
data:
  REGISTRY_HTPASSWD: "aGFyYm9yX3JlZ2lzdHJ5X3VzZXI6JDJhJDEwJDk5dXppSEhNQ3dIc0FyZFZDQlpxbC5kYjZoaG8yR1BTQzRmQU9ZWEZHMS83U0dtc3dCZFM2"
---
# Source: harbor/templates/registry/registryctl-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: "harbor-registryctl"
  labels:
    app: "harbor"
type: Opaque
data:
---
# Source: harbor/templates/trivy/trivy-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-trivy
  labels:
    app: "harbor"
type: Opaque
data:
  redisURL: cmVkaXM6Ly9jdXN0b20taGFyYm9yLXJlZGlzOjYzNzkvNT9pZGxlX3RpbWVvdXRfc2Vjb25kcz0zMA==
  gitHubToken: ""
---
# Source: harbor/templates/chartmuseum/chartmuseum-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-chartmuseum"
  labels:
    app: "harbor"
data:
  PORT: "9999"
  CACHE: "redis"
  CACHE_REDIS_ADDR: "harbor-redis:6379"
  CACHE_REDIS_DB: "3"
  BASIC_AUTH_USER: "chart_controller"
  DEPTH: "1"
  DEBUG: "false"
  LOG_JSON: "true"
  DISABLE_METRICS: "false"
  DISABLE_API: "false"
  DISABLE_STATEFILES: "false"
  ALLOW_OVERWRITE: "true"
  AUTH_ANONYMOUS_GET: "false"
  CONTEXT_PATH: ""
  INDEX_LIMIT: "0"
  MAX_STORAGE_OBJECTS: "0"
  MAX_UPLOAD_SIZE: "20971520"
  CHART_POST_FORM_FIELD_NAME: "chart"
  PROV_POST_FORM_FIELD_NAME: "prov"
  STORAGE: "local"
  STORAGE_LOCAL_ROOTDIR: "/chart_storage"
  STORAGE_TIMESTAMP_TOLERANCE: 1s
---
# Source: harbor/templates/core/core-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-core
  labels:
    app: "harbor"
data:
  app.conf: |+
    appname = Harbor
    runmode = prod
    enablegzip = true

    [prod]
    httpport = 8080
  PORT: "8080"
  DATABASE_TYPE: "postgresql"
  POSTGRESQL_HOST: "harbor-database"
  POSTGRESQL_PORT: "5432"
  POSTGRESQL_USERNAME: "postgres"
  POSTGRESQL_DATABASE: "registry"
  POSTGRESQL_SSLMODE: "disable"
  POSTGRESQL_MAX_IDLE_CONNS: "100"
  POSTGRESQL_MAX_OPEN_CONNS: "900"
  EXT_ENDPOINT: "https://hub.l-xh.com:30003"
  CORE_URL: "http://harbor-core:80"
  JOBSERVICE_URL: "http://harbor-jobservice"
  REGISTRY_URL: "http://harbor-registry:5000"
  TOKEN_SERVICE_URL: "http://harbor-core:80/service/token"
  WITH_NOTARY: "true"
  NOTARY_URL: "http://harbor-notary-server:4443"
  CORE_LOCAL_URL: "http://127.0.0.1:8080"
  WITH_TRIVY: "true"
  TRIVY_ADAPTER_URL: "http://harbor-trivy:8080"
  REGISTRY_STORAGE_PROVIDER_NAME: "filesystem"
  WITH_CHARTMUSEUM: "true"
  CHART_REPOSITORY_URL: "http://harbor-chartmuseum"
  LOG_LEVEL: "info"
  CONFIG_PATH: "/etc/core/app.conf"
  CHART_CACHE_DRIVER: "redis"
  _REDIS_URL_CORE: "redis://harbor-redis:6379/0?idle_timeout_seconds=30"
  _REDIS_URL_REG: "redis://harbor-redis:6379/2?idle_timeout_seconds=30"
  PORTAL_URL: "http://harbor-portal"
  REGISTRY_CONTROLLER_URL: "http://harbor-registry:8080"
  REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user"
  HTTP_PROXY: ""
  HTTPS_PROXY: ""
  NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-chartmuseum,harbor-notary-server,harbor-notary-signer,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
  PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE: "docker-hub,harbor,azure-acr,aws-ecr,google-gcr,quay,docker-registry"
---
# Source: harbor/templates/jobservice/jobservice-cm-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-jobservice-env"
  labels:
    app: "harbor"
data:
  CORE_URL: "http://harbor-core:80"
  TOKEN_SERVICE_URL: "http://harbor-core:80/service/token"
  REGISTRY_URL: "http://harbor-registry:5000"
  REGISTRY_CONTROLLER_URL: "http://harbor-registry:8080"
  REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user"
  HTTP_PROXY: ""
  HTTPS_PROXY: ""
  NO_PROXY: "harbor-core,harbor-jobservice,harbor-database,harbor-chartmuseum,harbor-notary-server,harbor-notary-signer,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
---
# Source: harbor/templates/jobservice/jobservice-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-jobservice"
  labels:
    app: "harbor"
data:
  config.yml: |+
    #Server listening port
    protocol: "http"
    port: 8080
    worker_pool:
      workers: 10
      backend: "redis"
      redis_pool:
        redis_url: "redis://harbor-redis:6379/1"
        namespace: "harbor_job_service_namespace"
        idle_timeout_second: 3600
    job_loggers:
      - name: "FILE"
        level: INFO
        settings: # Customized settings of logger
          base_dir: "/var/log/jobs"
        sweeper:
          duration: 14 #days
          settings: # Customized settings of sweeper
            work_dir: "/var/log/jobs"
    metric:
      enabled: false
      path: /metrics
      port: 8001
    #Loggers for the job service
    loggers:
      - name: "STD_OUTPUT"
        level: INFO
---
# Source: harbor/templates/nginx/configmap-https.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-nginx
  labels:
    app: "harbor"
data:
  nginx.conf: |+
    worker_processes auto;
    pid /tmp/nginx.pid;

    events {
      worker_connections 3096;
      use epoll;
      multi_accept on;
    }

    http {
      client_body_temp_path /tmp/client_body_temp;
      proxy_temp_path /tmp/proxy_temp;
      fastcgi_temp_path /tmp/fastcgi_temp;
      uwsgi_temp_path /tmp/uwsgi_temp;
      scgi_temp_path /tmp/scgi_temp;
      tcp_nodelay on;

      # this is necessary for us to be able to disable request buffering in all cases
      proxy_http_version 1.1;

      upstream core {
        server "harbor-core:80";
      }

      upstream portal {
        server "harbor-portal:80";
      }
      upstream notary-server {
        server harbor-notary-server:4443;
      }

      log_format timed_combined '[$time_local]:$remote_addr - '
        '"$request" $status $body_bytes_sent '
        '"$http_referer" "$http_user_agent" '
        '$request_time $upstream_response_time $pipe';

      access_log /dev/stdout timed_combined;

      map $http_x_forwarded_proto $x_forwarded_proto {
        default $http_x_forwarded_proto;
        ""      $scheme;
      }
      server {
        listen 4443 ssl;
        listen [::]:4443 ssl;
        server_tokens off;
        # ssl
        ssl_certificate /etc/nginx/cert/tls.crt;
        ssl_certificate_key /etc/nginx/cert/tls.key;

        # recommendations from https://raymii.org/s/tutorials/strong_ssl_security_on_nginx.html
        ssl_protocols TLSv1.2;
        ssl_ciphers '!aNULL:kECDH+AESGCM:ECDH+AESGCM:RSA+AESGCM:kECDH+AES:ECDH+AES:RSA+AES:';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;

        # disable any limits to avoid http 413 for large image uploads
        client_max_body_size 0;

        # required to avoid http 411: see issue #1486 (https://github.com/docker/docker/issues/1486)
        chunked_transfer_encoding on;

        location /v2/ {
          proxy_pass http://notary-server/v2/;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

          proxy_buffering off;
          proxy_request_buffering off;
        }
      }

      server {
        listen 8443 ssl;
        listen [::]:8443 ssl;
    #    server_name harbordomain.com;
        server_tokens off;
        # SSL
        ssl_certificate /etc/nginx/cert/tls.crt;
        ssl_certificate_key /etc/nginx/cert/tls.key;

        # Recommendations from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
        ssl_protocols TLSv1.2;
        ssl_ciphers '!aNULL:kECDH+AESGCM:ECDH+AESGCM:RSA+AESGCM:kECDH+AES:ECDH+AES:RSA+AES:';
        ssl_prefer_server_ciphers on;
        ssl_session_cache shared:SSL:10m;

        # disable any limits to avoid HTTP 413 for large image uploads
        client_max_body_size 0;

        # required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
        chunked_transfer_encoding on;

        # Add extra headers
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
        add_header X-Frame-Options DENY;
        add_header Content-Security-Policy "frame-ancestors 'none'";

        location / {
          proxy_pass http://portal/;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

          proxy_cookie_path / "/; HttpOnly; Secure";

          proxy_buffering off;
          proxy_request_buffering off;
        }

        location /api/ {
          proxy_pass http://core/api/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

          proxy_cookie_path / "/; Secure";

          proxy_buffering off;
          proxy_request_buffering off;
        }

        location /chartrepo/ {
          proxy_pass http://core/chartrepo/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

          proxy_cookie_path / "/; Secure";

          proxy_buffering off;
          proxy_request_buffering off;
        }

        location /c/ {
          proxy_pass http://core/c/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

          proxy_cookie_path / "/; Secure";

          proxy_buffering off;
          proxy_request_buffering off;
        }

        location /v1/ {
          return 404;
        }

        location /v2/ {
          proxy_pass http://core/v2/;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $x_forwarded_proto;
          proxy_buffering off;
          proxy_request_buffering off;
        }

        location /service/ {
          proxy_pass http://core/service/;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto $x_forwarded_proto;

          proxy_cookie_path / "/; Secure";

          proxy_buffering off;
          proxy_request_buffering off;
        }

      location /service/notifications {
          return 404;
        }
      }
        server {
          listen 8080;
          listen [::]:8080;
          #server_name harbordomain.com;
          return 301 https://$host$request_uri;
      }
    }
---
# Source: harbor/templates/portal/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-portal"
  labels:
    app: "harbor"
data:
  nginx.conf: |+
    worker_processes auto;
    pid /tmp/nginx.pid;
    events {
        worker_connections  1024;
    }
    http {
        client_body_temp_path /tmp/client_body_temp;
        proxy_temp_path /tmp/proxy_temp;
        fastcgi_temp_path /tmp/fastcgi_temp;
        uwsgi_temp_path /tmp/uwsgi_temp;
        scgi_temp_path /tmp/scgi_temp;
        server {
            listen 8080;
            listen [::]:8080;
            server_name  localhost;
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            include /etc/nginx/mime.types;
            gzip on;
            gzip_min_length 1000;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            location / {
                try_files $uri $uri/ /index.html;
            }
            location = /index.html {
                add_header Cache-Control "no-store, no-cache, must-revalidate";
            }
        }
    }
---
# Source: harbor/templates/registry/registry-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-registry"
  labels:
    app: "harbor"
data:
  config.yml: |+
    version: 0.1
    log:
      level: info
      fields:
        service: registry
    storage:
      filesystem:
        rootdirectory: /storage
      cache:
        layerinfo: redis
      maintenance:
        uploadpurging:
          enabled: true
          age: 168h
          interval: 24h
          dryrun: false
      delete:
        enabled: true
      redirect:
        disable: false
    redis:
      addr: harbor-redis:6379
      db: 2
      readtimeout: 10s
      writetimeout: 10s
      dialtimeout: 10s
      pool:
        maxidle: 100
        maxactive: 500
        idletimeout: 60s
    http:
      addr: :5000
      relativeurls: false
      # set via environment variable
      # secret: placeholder
      debug:
        addr: localhost:5001
    auth:
      htpasswd:
        realm: harbor-registry-basic-realm
        path: /etc/registry/passwd
    validation:
      disabled: true
    compatibility:
      schema1:
        enabled: true
  ctl-config.yml: |+
    ---
    protocol: "http"
    port: 8080
    log_level: info
    registry_config: "/etc/registry/config.yml"
---
# Source: harbor/templates/registry/registryctl-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "harbor-registryctl"
  labels:
    app: "harbor"
data:
---
# Source: harbor/templates/chartmuseum/chartmuseum-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-chartmuseum"
  labels:
    app: "harbor"
spec:
  ports:
    - port: 80
      targetPort: 9999
  selector:
    app: "harbor"
    component: chartmuseum
---
# Source: harbor/templates/core/core-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-core
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-web
      port: 80
      targetPort: 8080
  selector:
    app: "harbor"
    component: core
---
# Source: harbor/templates/database/database-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-database"
  labels:
    app: "harbor"
spec:
  ports:
    - port: 5432
  selector:
    app: "harbor"
    component: database
---
# Source: harbor/templates/jobservice/jobservice-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-jobservice"
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-jobservice
      port: 80
      targetPort: 8080
  selector:
    app: "harbor"
    component: jobservice
---
# Source: harbor/templates/nginx/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor
  labels:
    app: "harbor"
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30002
    - name: https
      port: 443
      targetPort: 8443
      nodePort: 30003
    - name: notary
      port: 4443
      targetPort: 4443
      nodePort: 30004
  selector:
    app: "harbor"
    component: nginx
---
# Source: harbor/templates/notary/notary-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-notary-server
  labels:
    app: "harbor"
spec:
  ports:
  - port: 4443
  selector:
    app: "harbor"
    component: notary-server
---
# Source: harbor/templates/notary/notary-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-notary-signer
  labels:
    app: "harbor"
spec:
  ports:
  - port: 7899
  selector:
    app: "harbor"
    component: notary-signer
---
# Source: harbor/templates/portal/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-portal"
  labels:
    app: "harbor"
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: "harbor"
    component: portal
---
# Source: harbor/templates/redis/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: harbor-redis
  labels:
    app: "harbor"
spec:
  ports:
    - port: 6379
  selector:
    app: "harbor"
    component: redis
---
# Source: harbor/templates/registry/registry-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-registry"
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-registry
      port: 5000

    - name: http-controller
      port: 8080
  selector:
    app: "harbor"
    component: registry
---
# Source: harbor/templates/trivy/trivy-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: "harbor-trivy"
  labels:
    app: "harbor"
spec:
  ports:
    - name: http-trivy
      protocol: TCP
      port: 8080
  selector:
    app: "harbor"
    component: trivy
---
# Source: harbor/templates/chartmuseum/chartmuseum-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "harbor-chartmuseum"
  labels:
    app: "harbor"
    component: chartmuseum
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "harbor"
      component: chartmuseum
  template:
    metadata:
      labels:
        app: "harbor"
        component: chartmuseum
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
      - name: chartmuseum
        image: goharbor/chartmuseum-photon:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /health
            scheme: HTTP
            port: 9999
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            scheme: HTTP
            port: 9999
          initialDelaySeconds: 1
          periodSeconds: 10
        envFrom:
        - configMapRef:
            name: "harbor-chartmuseum"
        - secretRef:
            name: "harbor-chartmuseum"
        env:
          - name: BASIC_AUTH_PASS
            valueFrom:
              secretKeyRef:
                name: harbor-core
                key: secret
          - # Needed to make AWS' client connect correctly (see https://github.com/helm/chartmuseum/issues/280)
            name: AWS_SDK_LOAD_CONFIG
            value: "1"
        ports:
        - containerPort: 9999
        volumeMounts:
        - name: chartmuseum-data
          mountPath: /chart_storage
          subPath: chartmuseum
      volumes:
      - name: chartmuseum-data
        persistentVolumeClaim:
          claimName: harbor-pvc
---
# Source: harbor/templates/core/core-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-core
  labels:
    app: "harbor"
    component: core
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: "harbor"
      component: core
  template:
    metadata:
      labels:
        app: "harbor"
        component: core
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      containers:
      - name: core
        image: goharbor/harbor-core:v2.6.2
        imagePullPolicy: IfNotPresent
        startupProbe:
          httpGet:
            path: /api/v2.0/ping
            scheme: HTTP
            port: 8080
          failureThreshold: 360
          initialDelaySeconds: 10
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /api/v2.0/ping
            scheme: HTTP
            port: 8080
          failureThreshold: 2
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/v2.0/ping
            scheme: HTTP
            port: 8080
          failureThreshold: 2
          periodSeconds: 10
        envFrom:
        - configMapRef:
            name: "harbor-core"
        - secretRef:
            name: "harbor-core"
        env:
          - name: CORE_SECRET
            valueFrom:
              secretKeyRef:
                name: harbor-core
                key: secret
          - name: JOBSERVICE_SECRET
            valueFrom:
              secretKeyRef:
                name: "harbor-jobservice"
                key: JOBSERVICE_SECRET
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: config
          mountPath: /etc/core/app.conf
          subPath: app.conf
        - name: secret-key
          mountPath: /etc/core/key
          subPath: key
        - name: token-service-private-key
          mountPath: /etc/core/private_key.pem
          subPath: tls.key
        - name: ca-download
          mountPath: /etc/core/ca
        - name: psc
          mountPath: /etc/core/token
      volumes:
      - name: config
        configMap:
          name: harbor-core
          items:
            - key: app.conf
              path: app.conf
      - name: secret-key
        secret:
          secretName: harbor-core
          items:
            - key: secretKey
              path: key
      - name: token-service-private-key
        secret:
          secretName: harbor-core
      - name: ca-download
        secret:
          secretName: harbor-nginx
      - name: psc
        emptyDir: {}
---
# Source: harbor/templates/jobservice/jobservice-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "harbor-jobservice"
  labels:
    app: "harbor"
    component: jobservice
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "harbor"
      component: jobservice
  template:
    metadata:
      labels:
        app: "harbor"
        component: jobservice
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      containers:
      - name: jobservice
        image: goharbor/harbor-jobservice:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api/v1/stats
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/v1/stats
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 10
        env:
          - name: CORE_SECRET
            valueFrom:
              secretKeyRef:
                name: harbor-core
                key: secret
        envFrom:
        - configMapRef:
            name: "harbor-jobservice-env"
        - secretRef:
            name: "harbor-jobservice"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: jobservice-config
          mountPath: /etc/jobservice/config.yml
          subPath: config.yml
        - name: job-logs
          mountPath: /var/log/jobs
          subPath: jobLog
        - name: job-scandata-exports
          mountPath: /var/scandata_exports
          subPath: scanDataExports
      volumes:
      - name: jobservice-config
        configMap:
          name: "harbor-jobservice"
      - name: job-logs
        persistentVolumeClaim:
          claimName: harbor-pvc
      - name: job-scandata-exports
        persistentVolumeClaim:
          claimName: harbor-pvc
---
# Source: harbor/templates/nginx/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-nginx
  labels:
    app: "harbor"
    component: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: "harbor"
      component: nginx
  template:
    metadata:
      labels:
        app: "harbor"
        component: nginx
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
      - name: nginx
        image: "goharbor/nginx-photon:v2.6.2"
        imagePullPolicy: "IfNotPresent"
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 1
          periodSeconds: 10
        ports:
        - containerPort: 8080
        - containerPort: 8443
        - containerPort: 4443
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: certificate
          mountPath: /etc/nginx/cert
      volumes:
      - name: config
        configMap:
          name: harbor-nginx
      - name: certificate
        secret:
          secretName: harbor-nginx
---
# Source: harbor/templates/notary/notary-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-notary-server
  labels:
    app: "harbor"
    component: notary-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "harbor"
      component: notary-server
  template:
    metadata:
      labels:
        app: "harbor"
        component: notary-server
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
      - name: notary-server
        image: goharbor/notary-server-photon:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /_notary_server/health
            scheme: "HTTP"
            port: 4443
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /_notary_server/health
            scheme: "HTTP"
            port: 4443
          initialDelaySeconds: 20
          periodSeconds: 10
        env:
        - name: MIGRATIONS_PATH
          value: migrations/server/postgresql
        - name: DB_URL
          valueFrom:
            secretKeyRef:
              name: harbor-notary-server
              key: NOTARY_SERVER_DB_URL
        volumeMounts:
        - name: config
          mountPath: /etc/notary/server-config.postgres.json
          subPath: server.json
        - name: token-service-certificate
          mountPath: /root.crt
          subPath: tls.crt
        - name: signer-certificate
          mountPath: /etc/ssl/notary/ca.crt
          subPath: ca.crt
      volumes:
      - name: config
        secret:
          secretName: "harbor-notary-server"
      - name: token-service-certificate
        secret:
          secretName: harbor-core
      - name: signer-certificate
        secret:
          secretName: harbor-notary-server
---
# Source: harbor/templates/notary/notary-signer.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: harbor-notary-signer
  labels:
    app: "harbor"
    component: notary-signer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "harbor"
      component: notary-signer
  template:
    metadata:
      labels:
        app: "harbor"
        component: notary-signer
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
      - name: notary-signer
        image: goharbor/notary-signer-photon:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /
            scheme: "HTTPS"
            port: 7899
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            scheme: "HTTPS"
            port: 7899
          initialDelaySeconds: 20
          periodSeconds: 10
        env:
        - name: MIGRATIONS_PATH
          value: migrations/signer/postgresql
        - name: DB_URL
          valueFrom:
            secretKeyRef:
              name: harbor-notary-server
              key: NOTARY_SIGNER_DB_URL
        - name: NOTARY_SIGNER_DEFAULTALIAS
          value: defaultalias
        volumeMounts:
        - name: config
          mountPath: /etc/notary/signer-config.postgres.json
          subPath: signer.json
        - name: signer-certificate
          mountPath: /etc/ssl/notary/tls.crt
          subPath: tls.crt
        - name: signer-certificate
          mountPath: /etc/ssl/notary/tls.key
          subPath: tls.key
      volumes:
      - name: config
        secret:
          secretName: "harbor-notary-server"
      - name: signer-certificate
        secret:
          secretName: harbor-notary-server
---
# Source: harbor/templates/portal/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "harbor-portal"
  labels:
    app: "harbor"
    component: portal
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: "harbor"
      component: portal
  template:
    metadata:
      labels:
        app: "harbor"
        component: portal
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
      - name: portal
        image: goharbor/harbor-portal:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 1
          periodSeconds: 10
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: portal-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: portal-config
        configMap:
          name: "harbor-portal"
---
# Source: harbor/templates/registry/registry-dpl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "harbor-registry"
  labels:
    app: "harbor"
    component: registry
spec:
  replicas: 1
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: "harbor"
      component: registry
  template:
    metadata:
      labels:
        app: "harbor"
        component: registry
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
        fsGroupChangePolicy: OnRootMismatch
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      containers:
      - name: registry
        image: goharbor/registry-photon:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 5000
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            scheme: HTTP
            port: 5000
          initialDelaySeconds: 1
          periodSeconds: 10
        args: ["serve", "/etc/registry/config.yml"]
        envFrom:
        - secretRef:
            name: "harbor-registry"
        env:
        ports:
        - containerPort: 5000
        - containerPort: 5001
        volumeMounts:
        - name: registry-data
          mountPath: /storage
          subPath: registry
        - name: registry-htpasswd
          mountPath: /etc/registry/passwd
          subPath: passwd
        - name: registry-config
          mountPath: /etc/registry/config.yml
          subPath: config.yml
      - name: registryctl
        image: goharbor/harbor-registryctl:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /api/health
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/health
            scheme: HTTP
            port: 8080
          initialDelaySeconds: 1
          periodSeconds: 10
        envFrom:
        - configMapRef:
            name: "harbor-registryctl"
        - secretRef:
            name: "harbor-registry"
        - secretRef:
            name: "harbor-registryctl"
        env:
        - name: CORE_SECRET
          valueFrom:
            secretKeyRef:
              name: harbor-core
              key: secret
        - name: JOBSERVICE_SECRET
          valueFrom:
            secretKeyRef:
              name: harbor-jobservice
              key: JOBSERVICE_SECRET
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: registry-data
          mountPath: /storage
          subPath: registry
        - name: registry-config
          mountPath: /etc/registry/config.yml
          subPath: config.yml
        - name: registry-config
          mountPath: /etc/registryctl/config.yml
          subPath: ctl-config.yml
      volumes:
      - name: registry-htpasswd
        secret:
          secretName: harbor-registry-htpasswd
          
          items:
            - key: REGISTRY_HTPASSWD
              path: passwd
      - name: registry-config
        configMap:
          name: "harbor-registry"
      - name: registry-data
        persistentVolumeClaim:
          claimName: harbor-pvc
---
# Source: harbor/templates/database/database-ss.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "harbor-database"
  labels:
    app: "harbor"
    component: database
spec:
  replicas: 1
  serviceName: "harbor-database"
  selector:
    matchLabels:
      app: "harbor"
      component: database
  template:
    metadata:
      labels:
        app: "harbor"
        component: database
      annotations:
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 999
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      initContainers:
      # as we change the data directory to a sub folder to support psp, the init container here
      # is used to migrate the existing data. See https://github.com/goharbor/harbor-helm/issues/756
      # for more detail.
      # we may remove it after several releases
      - name: "data-migrator"
        image: goharbor/harbor-db:v2.6.2
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh"]
        args: ["-c", "[ -e /var/lib/postgresql/data/postgresql.conf ] && [ ! -d /var/lib/postgresql/data/pgdata ] && mkdir -m 0700 /var/lib/postgresql/data/pgdata && mv /var/lib/postgresql/data/* /var/lib/postgresql/data/pgdata/ || true"]
        volumeMounts:
          - name: database-data
            mountPath: /var/lib/postgresql/data
            subPath: database
      # with "fsGroup" set, each time a volume is mounted, Kubernetes must recursively chown() and chmod() all the files and directories inside the volume
      # this causes the postgresql reports the "data directory /var/lib/postgresql/data/pgdata has group or world access" issue when using some CSIs e.g. Ceph
      # use this init container to correct the permission
      # as "fsGroup" applied before the init container running, the container has enough permission to execute the command
      - name: "data-permissions-ensurer"
        image: goharbor/harbor-db:v2.6.2
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh"]
        args: ["-c", "chmod -R 700 /var/lib/postgresql/data/pgdata || true"]
        volumeMounts:
          - name: database-data
            mountPath: /var/lib/postgresql/data
            subPath: database
      containers:
      - name: database
        image: goharbor/harbor-db:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /docker-healthcheck.sh
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          exec:
            command:
            - /docker-healthcheck.sh
          initialDelaySeconds: 1
          periodSeconds: 10
        envFrom:
          - secretRef:
              name: "harbor-database"
        env:
          # put the data into a sub directory to avoid the permission issue in k8s with restricted psp enabled
          # more detail refer to https://github.com/goharbor/harbor-helm/issues/756
          - name: PGDATA
            value: "/var/lib/postgresql/data/pgdata"
        volumeMounts:
        - name: database-data
          mountPath: /var/lib/postgresql/data
          subPath: database
        - name: shm-volume
          mountPath: /dev/shm
      volumes:
      - name: shm-volume
        emptyDir:
          medium: Memory
          sizeLimit: 512Mi
      - name: "database-data"
        persistentVolumeClaim:
          claimName: harbor-pvc
---
# Source: harbor/templates/redis/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: harbor-redis
  labels:
    app: "harbor"
    component: redis
spec:
  replicas: 1
  serviceName: harbor-redis
  selector:
    matchLabels:
      app: "harbor"
      component: redis
  template:
    metadata:
      labels:
        app: "harbor"
        component: redis
    spec:
      securityContext:
        runAsUser: 999
        fsGroup: 999
      automountServiceAccountToken: false
      terminationGracePeriodSeconds: 120
      containers:
      - name: redis
        image: goharbor/redis-photon:v2.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 300
          periodSeconds: 10
        readinessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 1
          periodSeconds: 10
        volumeMounts:
        - name: data
          mountPath: /var/lib/redis
          subPath: redis
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: harbor-pvc
---
# Source: harbor/templates/trivy/trivy-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: harbor-trivy
  labels:
    app: "harbor"
    component: trivy
spec:
  replicas: 1
  serviceName: harbor-trivy
  selector:
    matchLabels:
      app: "harbor"
      component: trivy
  template:
    metadata:
      labels:
        app: "harbor"
        component: trivy
      annotations:
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      automountServiceAccountToken: false
      containers:
        - name: trivy
          image: goharbor/trivy-adapter-photon:v2.6.2
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
            allowPrivilegeEscalation: false
          env:
            - name: HTTP_PROXY
              value: ""
            - name: HTTPS_PROXY
              value: ""
            - name: NO_PROXY
              value: "harbor-core,harbor-jobservice,harbor-database,harbor-chartmuseum,harbor-notary-server,harbor-notary-signer,harbor-registry,harbor-portal,harbor-trivy,harbor-exporter,127.0.0.1,localhost,.local,.internal"
            - name: "SCANNER_LOG_LEVEL"
              value: "info"
            - name: "SCANNER_TRIVY_CACHE_DIR"
              value: "/home/scanner/.cache/trivy"
            - name: "SCANNER_TRIVY_REPORTS_DIR"
              value: "/home/scanner/.cache/reports"
            - name: "SCANNER_TRIVY_DEBUG_MODE"
              value: "false"
            - name: "SCANNER_TRIVY_VULN_TYPE"
              value: "os,library"
            - name: "SCANNER_TRIVY_TIMEOUT"
              value: "5m0s"
            - name: "SCANNER_TRIVY_GITHUB_TOKEN"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: gitHubToken
            - name: "SCANNER_TRIVY_SEVERITY"
              value: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
            - name: "SCANNER_TRIVY_IGNORE_UNFIXED"
              value: "false"
            - name: "SCANNER_TRIVY_SKIP_UPDATE"
              value: "false"
            - name: "SCANNER_TRIVY_OFFLINE_SCAN"
              value: "false"
            - name: "SCANNER_TRIVY_SECURITY_CHECKS"
              value: "vuln"
            - name: "SCANNER_TRIVY_INSECURE"
              value: "false"
            - name: SCANNER_API_SERVER_ADDR
              value: ":8080"
            - name: "SCANNER_REDIS_URL"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: redisURL
            - name: "SCANNER_STORE_REDIS_URL"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: redisURL
            - name: "SCANNER_JOB_QUEUE_REDIS_URL"
              valueFrom:
                secretKeyRef:
                  name: harbor-trivy
                  key: redisURL
          ports:
            - name: api-server
              containerPort: 8080
          volumeMounts:
          - name: data
            mountPath: /home/scanner/.cache
            subPath: trivy
            readOnly: false
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /probe/healthy
              port: api-server
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 10
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /probe/ready
              port: api-server
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 200m
              memory: 512Mi
      volumes:
      - name: "data"
        persistentVolumeClaim:
          claimName: harbor-pvc


@thangamani-arun

@vlinxh Did you fix this mount issue? Can anyone suggest how to overcome it?

@lxh-015
Author

lxh-015 commented Nov 28, 2022

@vlinxh Did you fix this mount issue? Can anyone suggest how to overcome it?

no solution found

@zyyw
Collaborator

zyyw commented Nov 30, 2022

@vlinxh, do you have a default StorageClass? Please paste the output of the command kubectl get sc.
It might be that there is no default StorageClass.

@lxh-015
Author

lxh-015 commented Dec 3, 2022

@vlinxh, do you have a default StorageClass? Please paste the output of the command kubectl get sc. It might be that there is no default StorageClass.

root@debian:~# kubectl get sc
NAME   PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs    k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           true                   50d
root@debian:~# kubectl get po -n common | grep harbor
harbor-chartmuseum-bdcd55744-tg6lc      1/1     Running             1 (7d19h ago)   13d
harbor-core-bf5bb767c-7mp2j             1/1     Running             3 (7d19h ago)   13d
harbor-database-0                       1/1     Running             7 (5d2h ago)    13d
harbor-jobservice-6dcd8fd86d-99zfm      0/1     ContainerCreating   0               13d
harbor-nginx-d9d7dfd57-6nvq2            1/1     Running             2 (7d19h ago)   13d
harbor-notary-server-74bc56d784-xkbqd   1/1     Running             9 (5d2h ago)    13d
harbor-notary-signer-db7d87cc4-f6x4r    1/1     Running             3 (7d19h ago)   13d
harbor-portal-dd9d9484f-nqgcd           1/1     Running             1 (7d19h ago)   13d
harbor-redis-0                          1/1     Running             1 (7d19h ago)   13d
harbor-registry-7b666d46cf-248kd        2/2     Running             2 (7d19h ago)   13d
harbor-trivy-0                          1/1     Running             1 (7d19h ago)   13d
root@debian:~#

@lxh-015
Author

lxh-015 commented Dec 3, 2022

@zyyw The other services and jobservice use the same volume, just with different subPaths.

@zyyw
Collaborator

zyyw commented Dec 14, 2022

@vlinxh from the following content you posted

root@debian:~# kubectl get sc
NAME   PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs    k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           true                   50d

the nfs StorageClass is not marked as default, which could be why it errors out with the following message from the issue description:
Unable to attach or mount volumes
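
If that is the cause, one way to address it is to mark the existing class as the cluster default with the standard storageclass.kubernetes.io/is-default-class annotation, so PVCs created without an explicit storageClassName can still bind. A minimal sketch (not from this thread), assuming the nfs class shown above:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
  annotations:
    # marks this class as the default for PVCs that omit storageClassName
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true

Alternatively, the chart's persistence.persistentVolumeClaim.*.storageClass values can be pointed at the nfs class explicitly.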

@xuyanzhuo

encountered the same problem

@czysl

czysl commented Dec 30, 2022

In my case, I just created two different PVCs for the jobservice's two volumes, and that solved the problem. You can try it like this:

jobservice:
  jobLog:
    existingClaim: "harbor1"
    storageClass: "xxx-storage"
    subPath: "jobLog"
    accessMode: ReadWriteOnce
    size: 1Gi
    annotations: {}
  scanDataExports:
    existingClaim: "harbor2"
    storageClass: "xxx-storage"
    subPath: "scanDataExports"
    accessMode: ReadWriteOnce
    size: 1Gi
    annotations: {}
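
For reference, in the chart's values.yaml these keys sit under persistence.persistentVolumeClaim, so the full override would look roughly like the sketch below; the claim and storage class names are just the placeholders from the snippet above:

persistence:
  persistentVolumeClaim:
    jobservice:
      jobLog:
        existingClaim: "harbor1"
        storageClass: "xxx-storage"
        subPath: "jobLog"
        accessMode: ReadWriteOnce
        size: 1Gi
      scanDataExports:
        existingClaim: "harbor2"
        storageClass: "xxx-storage"
        subPath: "scanDataExports"
        accessMode: ReadWriteOnce
        size: 1Gi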

@disconn3ct

This shouldn't be a Deployment at all. This specific use case is exactly what a StatefulSet is for: one volume per replica.

@FeiYuJun

FeiYuJun commented Mar 1, 2023

I also ran into this. Has the problem been solved?

@bodgit

bodgit commented Apr 27, 2023

I think I'm bumping into this issue or something related. Every time the ReplicaSets get a new revision and the Pods are rolled, the new Pod gets stuck waiting for the volume(s) that are still attached and mounted by the old Pod.

For example with harbor-registry, I get the error:

Unable to attach or mount volumes: unmounted volumes=[registry-data], unattached volumes=[registry-data registry-htpasswd registry-config]: timed out waiting for the condition

I have a default StorageClass (and the volumes are being created anyway):

$ kubectl get sc
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2             kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  693d
gp3 (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   247d

I'm guessing this is because the PVC has the access mode ReadWriteOnce. Internet wisdom seems to suggest these should be StatefulSets.

@thangamani-arun

@bodgit: If your PVs are RWO, it may be better to change the updateStrategy type from RollingUpdate to Recreate. It might help. Give it a try, and note that the Harbor services will be down for 1-2 minutes.

updateStrategy:
  type: Recreate
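
updateStrategy is a top-level chart value, and with Recreate the old Pod is deleted before its replacement is created, so a ReadWriteOnce volume can detach and re-attach cleanly. A minimal override-file sketch (the file name is just an example) that could be passed to helm upgrade with -f:

# recreate-values.yaml -- example override, applied on top of the existing values
updateStrategy:
  # Recreate tears down the running Pod first, which frees the RWO volume
  # before the new Pod tries to mount it; expect a short outage during upgrades.
  type: Recreate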

@bodgit

bodgit commented Apr 28, 2023

So after reading through #1137 it turns out the StorageClass used for the jobservice and registry Deployments should ideally be one that permits the ReadWriteMany access mode, e.g. NFS/EFS, etc. Therefore having them as a Deployment rather than a StatefulSet is desired, so that multiple Pods share the same filesystem rather than each getting their own.

For now, as I'm just experimenting with Harbor, I've applied the change suggested by @thangamani-arun, which will suffice; however, I wouldn't deploy it this way in a production environment.
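
For anyone who wants to keep the default RollingUpdate strategy instead, the shared volumes would need a ReadWriteMany-capable StorageClass. A rough values sketch, assuming a hypothetical efs-sc class (not something from this thread):

persistence:
  persistentVolumeClaim:
    registry:
      storageClass: "efs-sc"      # assumption: an RWX-capable class (NFS/EFS, etc.)
      accessMode: ReadWriteMany
    jobservice:
      jobLog:
        storageClass: "efs-sc"
        accessMode: ReadWriteMany
      scanDataExports:
        storageClass: "efs-sc"
        accessMode: ReadWriteMany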


github-actions bot commented Feb 8, 2024

This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.

@github-actions github-actions bot added the Stale label Feb 8, 2024

This issue was closed because it has been stalled for 30 days with no activity. If this issue is still relevant, please re-open a new issue.

@github-actions github-actions bot closed this as not planned Mar 10, 2024