Failed to authenticate user, due to error 'Invalid credentials' #752

Closed
JokerDevops opened this issue Sep 28, 2020 · 15 comments

@JokerDevops

I'm using Harbor version 2.1.0. When I logged in, I hit a user password error. I checked the log output of the core pod:

2020-09-28T02:37:09Z [ERROR] [/core/controllers/base.go:109]: Error occurred in UserLogin: Failed to authenticate user, due to error 'Invalid credentials'

The contents of my values.yaml file are as follows:

expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP", "nodePort" or "loadBalancer" and fill the information
  # in the corresponding section
  type: ingress
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291
    # for the detail.
    enabled: true
    # The source of the tls certificate. Set it as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: secret
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: ""
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: "core-tls"
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      # Only needed when the "expose.type" is "ingress".
      notarySecretName: ""
  ingress:
    hosts:
      core: core.harbor.foo.com
      notary: notary.harbor.foo.com
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    controller: default
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    ports:
      # The service port Harbor listens on when serving with HTTP
      httpPort: 80
      # The service port Harbor listens on when serving with HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving with HTTP
        port: 80
        # The node port Harbor listens on when serving with HTTP
        nodePort: 30002
      https:
        # The service port Harbor listens on when serving with HTTPS
        port: 443
        # The node port Harbor listens on when serving with HTTPS
        nodePort: 30003
      # Only needed when notary.enabled is set to true
      notary:
        # The service port Notary listens on
        port: 4443
        # The node port Notary listens on
        nodePort: 30004
  loadBalancer:
    # The name of LoadBalancer service
    name: harbor
    # Set the IP if the LoadBalancer supports assigning IP
    IP: ""
    ports:
      # The service port Harbor listens on when serving with HTTP
      httpPort: 80
      # The service port Harbor listens on when serving with HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
    annotations: {}
    sourceRanges: []

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands showed on portal
# 2) populate the token service URL returned to docker/notary client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://core.harbor.foo.com

# The internal TLS used for secure communication between Harbor components. In order to enable HTTPS
# in each component, the TLS cert files need to be provided in advance.
internalTLS:
  # If internal TLS enabled
  enabled: false
  # There are three ways to provide the TLS certificates:
  # 1) "auto": generate the certs automatically
  # 2) "manual": provide the cert files manually in the following values
  # 3) "secret": read the internal certificates from a secret
  certSource: "auto"
  # The content of trust ca, only available when `certSource` is "manual"
  trustCa: ""
  # core related cert configuration
  core:
    # secret name for core's tls certs
    secretName: ""
    # Content of core's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of core's TLS key file, only available when `certSource` is "manual"
    key: ""
  # jobservice related cert configuration
  jobservice:
    # secret name for jobservice's tls certs
    secretName: ""
    # Content of jobservice's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of jobservice's TLS key file, only available when `certSource` is "manual"
    key: ""
  # registry related cert configuration
  registry:
    # secret name for registry's tls certs
    secretName: ""
    # Content of registry's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of registry's TLS key file, only available when `certSource` is "manual"
    key: ""
  # portal related cert configuration
  portal:
    # secret name for portal's tls certs
    secretName: ""
    # Content of portal's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of portal's TLS key file, only available when `certSource` is "manual"
    key: ""
  # chartmuseum related cert configuration
  chartmuseum:
    # secret name for chartmuseum's tls certs
    secretName: ""
    # Content of chartmuseum's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of chartmuseum's TLS key file, only available when `certSource` is "manual"
    key: ""
  # clair related cert configuration
  clair:
    # secret name for clair's tls certs
    secretName: ""
    # Content of clair's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of clair's TLS key file, only available when `certSource` is "manual"
    key: ""
  # trivy related cert configuration
  trivy:
    # secret name for trivy's tls certs
    secretName: ""
    # Content of trivy's TLS cert file, only available when `certSource` is "manual"
    crt: ""
    # Content of trivy's TLS key file, only available when `certSource` is "manual"
    key: ""

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you have already existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Set it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart is deleted.
  # (This does not apply to PVCs that are created for the internal database
  # and Redis components, i.e. they are never deleted automatically.)
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use an existing PVC, which must be created manually before binding,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: "registry-pvc"
      # Specify the "storageClass" used to provision the volume, otherwise the
      # default StorageClass will be used.
      # Set it to "-" to disable dynamic provisioning
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: "chartmuseum-pvc"
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "jobservice-pvc"
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    trivy:
      existingClaim: "trivy-pvc"
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
  # Define which storage backend is used for registry and chartmuseum to store
  # images and charts. Refer to
  # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  # for the detail.
  imageChartStorage:
    # Specify whether to disable `redirect` for image and chart storage. For
    # backends that don't support it (such as MinIO used as the `s3` storage
    # type), set `disableredirect` to `true`.
    # Refer to
    # https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
    # for the detail.
    disableredirect: false
    # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
    # The secret must contain a key named "ca.crt", which will be injected into the trust store
    # of the registry's and chartmuseum's containers.
    # caBundleSecretName:

    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
    gcs:
      bucket: bucketname
      # The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
    s3:
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #skipverify: false
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
      #multipartcopychunksize: "33554432"
      #multipartcopymaxconcurrency: 100
      #multipartcopythresholdsize: "33554432"
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory

imagePullPolicy: IfNotPresent

# Use this to assign a list of default imagePullSecrets
imagePullSecrets:
#  - name: docker-registry-secret
#  - name: internal-registry-secret

# The update strategy for deployments with persistent volumes (jobservice, registry
# and chartmuseum): "RollingUpdate" or "Recreate".
# Set it to "Recreate" when "ReadWriteMany" for volumes isn't supported
updateStrategy:
  type: RollingUpdate

# debug, info, warning, error or fatal
logLevel: info

# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "1q2w3e4r"

# The name of the secret which contains key named "ca.crt". Setting this enables the
# download link on portal to download the certificate of CA when the certificate isn't
# generated automatically
caSecretName: ""

# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"

# The proxy settings for updating clair vulnerabilities from the Internet and replicating
# artifacts from/to the registries that cannot be reached directly
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
  components:
    - core
    - jobservice
    - clair
    - trivy

# The custom CA bundle secret; the secret must contain a key named "ca.crt",
# which will be injected into the trust store for the chartmuseum, clair, core, jobservice, registry, and trivy components
# caBundleSecretName: ""

## UAA Authentication Options
# If you're using UAA for authentication behind a self-signed
# certificate you will need to provide the CA Cert.
# Set uaaSecretName below to provide a pre-created secret that
# contains a base64 encoded CA Certificate named `ca.crt`.
# uaaSecretName:

# If the service is exposed via "ingress", Nginx will not be used
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v2.1.0
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  replicas: 1
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v2.1.0
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  replicas: 1
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

core:
  image:
    repository: goharbor/harbor-core
    tag: v2.1.0
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  replicas: 1
  ## Startup probe values
  startupProbe:
    initialDelaySeconds: 10
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when core server communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate and private key for token encryption/decryption.
  # The secret must contain keys named:
  # "tls.crt" - the certificate
  # "tls.key" - the private key
  # The default key pair will be used if it isn't set
  secretName: ""
  # The XSRF key. Will be generated automatically if it isn't specified
  xsrfKey: ""

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v2.1.0
  replicas: 1
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLogger: file
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when job service communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""

registry:
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.1.0
    resources:
      requests:
        memory: 256Mi
        cpu: 100m
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v2.1.0

    resources:
     requests:
       memory: 256Mi
       cpu: 100m
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used to secure the upload state from client
  # and registry storage backend.
  # See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
  relativeurls: false
  credentials:
    username: "harbor_registry_user"
    password: "harbor_registry_password"
    # If you update the username or password of the registry, make sure to use the
    # CLI tool htpasswd to generate the bcrypt hash, e.g. "htpasswd -nbBC10 $username $password"
    htpasswd: "harbor_registry_user:$2y$10$9L4Tc0DJbFFMB6RdSCunrOpTHdwhid4ktBJmLD00bYgqkkGOvll3m"

  middleware:
    enabled: false
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
      # that allows access to CloudFront
      privateKeySecret: "my-secret"

chartmuseum:
  enabled: true
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # Harbor defaults ChartMuseum to returning relative URLs; if you want absolute URLs, enable it by changing the following value to 'true'
  absoluteUrl: false
  image:
    repository: goharbor/chartmuseum-photon
    tag: v2.1.0
  replicas: 1
  resources:
   requests:
     memory: 256Mi
     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

clair:
  enabled: true
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  clair:
    image:
      repository: goharbor/clair-photon
      tag: v2.1.0
    resources:
     requests:
       memory: 256Mi
       cpu: 100m
  adapter:
    image:
      repository: goharbor/clair-adapter-photon
      tag: v2.1.0
    resources:
     requests:
       memory: 256Mi
       cpu: 100m
  replicas: 1
  # The interval of the Clair updaters in hours; set to 0 to
  # disable the updaters
  updatersInterval: 12
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

trivy:
  # enabled the flag to enable Trivy scanner
  enabled: true
  image:
    # repository the repository for Trivy adapter image
    repository: goharbor/trivy-adapter-photon
    # tag the tag for Trivy adapter image
    tag: v2.1.0
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # replicas the number of Pod replicas
  replicas: 1
  # debugMode the flag to enable Trivy debug mode with more verbose scanning log
  debugMode: false
  # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
  vulnType: "os,library"
  # severity a comma-separated list of severities to be checked
  severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
  # ignoreUnfixed the flag to display only fixed vulnerabilities
  ignoreUnfixed: false
  # insecure the flag to skip verifying registry certificate
  insecure: false
  # gitHubToken the GitHub access token to download Trivy DB
  #
  # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
  # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
  # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
  # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
  # Currently, the database is updated every 12 hours and published as a new release to GitHub.
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://developer.github.com/v3/#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  gitHubToken: ""
  # skipUpdate the flag to disable Trivy DB downloads from GitHub
  #
  # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  # `/home/scanner/.cache/trivy/db/trivy.db` path.
  skipUpdate: false
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1Gi
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

notary:
  enabled: true
  server:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    image:
      repository: goharbor/notary-server-photon
      tag: v2.1.0
    replicas: 1
    resources:
     requests:
       memory: 256Mi
       cpu: 100m
  signer:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    image:
      repository: goharbor/notary-signer-photon
      tag: v2.1.0
    replicas: 1
    resources:
     requests:
       memory: 256Mi
       cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate authority, certificate and private key for notary
  # communications.
  # The secret must contain keys named ca.crt, tls.crt and tls.key that
  # contain the CA, certificate and private key.
  # They will be generated if not set.
  secretName: ""

database:
  # if an external database is used, set "type" to "external"
  # and fill in the connection information in the "external" section
  type: external
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    image:
      repository: goharbor/harbor-db
      tag: v2.1.0
    # The initial superuser password for internal database
    password: "changeit"
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "postgres.harbor"
    port: "5432"
    username: "postgres"
    password: "password"
    coreDatabase: "registry"
    clairDatabase: "clair"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    # "disable" - No SSL
    # "require" - Always SSL (skip verification)
    # "verify-ca" - Always SSL (verify that the certificate presented by the
    # server was signed by a trusted CA)
    # "verify-full" - Always SSL (verify that the certification presented by the
    # server was signed by a trusted CA and the server host name matches the one
    # in the certificate)
    sslmode: "disable"
  # The maximum number of connections in the idle connection pool.
  # If it <=0, no idle connections are retained.
  maxIdleConns: 50
  # The maximum number of open connections to the database.
  # If it <= 0, then there is no limit on the number of open connections.
  # Note: the default maximum number of connections is 1024 for Harbor's PostgreSQL.
  maxOpenConns: 1000
  ## Additional deployment annotations
  podAnnotations: {}

redis:
  # if external Redis is used, set "type" to "external"
  # and fill in the connection information in the "external" section
  type: external
  internal:
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    image:
      repository: goharbor/redis-photon
      tag: v2.1.0
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    # support redis, redis+sentinel
    # addr for redis: <host_redis>:<port_redis>
    # addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
    addr: "redis.harbor:6379"
    # The name of the set of Redis instances to monitor, it must be set to support redis+sentinel
    sentinelMasterSet: ""
    # The "coreDatabaseIndex" must be "0", as the library Harbor
    # uses doesn't support configuring it
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    clairAdapterIndex: "4"
    trivyAdapterIndex: "5"
    password: ""
  ## Additional deployment annotations
  podAnnotations: {}
@lfdominguez

Me too. I cleared the password in the database and restarted the core; the password was updated in the database, but I still always get invalid credentials...

@linuxzha

I met the same issue, and I checked the Redis logs:

1:M 16 Oct 01:39:53.026 * 1 changes in 900 seconds. Saving...
1:M 16 Oct 01:39:53.026 * Background saving started by pid 9213
9213:C 16 Oct 01:39:53.027 # Failed opening the RDB file dump.rdb (in server root dir /var/lib/redis) for saving: Permission denied
1:M 16 Oct 01:39:53.126 # Background saving error

The problem was solved after I changed the access permissions of the Redis data directory. Try this:
sudo chmod 777 /pathToRedis/data

@lfdominguez

lfdominguez commented Oct 16, 2020

Hmm, I don't have that error. I extracted the Go code that generates the password hash and verified that my password produces the same hash that's stored in the database... so where can I find out what the actual error is?
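For anyone who wants to repeat this check without extracting the Go code: a minimal Python sketch, under the assumption that Harbor's utils.Encrypt derives the stored hash with PBKDF2-HMAC (4096 iterations, 16-byte derived key, hex-encoded) over the password and the per-user salt; verify those parameters against the Harbor version you run. The stored_salt below is a hypothetical placeholder; pull the real salt, password and password_version columns from the harbor_user row.

```python
import hashlib

def harbor_hash(password: str, salt: str, version: str = "sha256") -> str:
    """Recompute the digest Harbor stores in harbor_user.password.

    Assumption: Harbor's utils.Encrypt is PBKDF2-HMAC with 4096
    iterations and a 16-byte derived key, hex-encoded; `version`
    matches the harbor_user.password_version column ("sha1"/"sha256").
    """
    dk = hashlib.pbkdf2_hmac(version, password.encode(), salt.encode(),
                             4096, dklen=16)
    return dk.hex()

# Hypothetical values for illustration; fetch the real ones with:
#   SELECT salt, password, password_version FROM harbor_user WHERE user_id = 1;
stored_salt = "example-salt"
print(harbor_hash("Harbor12345", stored_salt, "sha256"))
```

If the recomputed digest doesn't match the password column, check that the version argument matches the row's password_version; the two algorithms never produce the same digest for the same input.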


@stepan111

Unfortunately, it seems this failed-login error shows up for most deployment/misconfiguration cases.

At first I tried to deploy this Helm chart with expose type clusterIP and without SSL. But it seems that is not possible now due to CSRF protection: goharbor/harbor#12348

Then I configured ingress, and it almost worked with the following settings (note that externalURL has to be set properly):

helm install harbor harbor/harbor --set expose.type=ingress \
  --set expose.ingress.hosts.core=harbor.my.domain \
  --set expose.ingress.annotations."kubernetes\.io\/ingress\.class"=istio \
  --set expose.tls.certSource=secret \
  --set expose.tls.secret.secretName=harbor-core-cert \
  --set harborAdminPassword=admin \
  --set logLevel=debug \
  --set  externalURL=https://harbor.my.domain \
  --version=v1.5.0 \
  -n harbor

I said almost because I use the Istio ingress, and I had to edit the Ingress resource itself to make it work with Istio:

kubectl edit ingress harbor-harbor-ingress

I had to change the order of the backends and add * to the last rule:

    http:
      paths:
      - backend:
          serviceName: harbor-harbor-core
          servicePort: 80
        path: /api/*
        pathType: ImplementationSpecific
      - backend:
          serviceName: harbor-harbor-core
          servicePort: 80
        path: /service/*
        pathType: ImplementationSpecific
      - backend:
          serviceName: harbor-harbor-core
          servicePort: 80
        path: /v2/*
        pathType: ImplementationSpecific
      - backend:
          serviceName: harbor-harbor-core
          servicePort: 80
        path: /chartrepo/*
        pathType: ImplementationSpecific
      - backend:
          serviceName: harbor-harbor-core
          servicePort: 80
        path: /c/*
        pathType: ImplementationSpecific
      - backend:
          serviceName: harbor-harbor-portal
          servicePort: 80
        path: /*
        pathType: ImplementationSpecific

I hope this info is useful for others who hit the 'Invalid credentials' issue.

@aongwen

aongwen commented Dec 10, 2020

Configure TLS with expose type "ingress" and it will work:

expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP", "nodePort" or "loadBalancer" and fill the information
  # in the corresponding section
  type: ingress
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to goharbor/harbor#5291
    # for the detail.
    enabled: true
    # The source of the tls certificate. Set it as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: secret
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: "abc.com"
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: "core-tls"
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      # Only needed when the "expose.type" is "ingress".
      notarySecretName: "notary.tls"
  ingress:
    hosts:
      core: core.harbor.foo.com
      notary: notary.harbor.foo.com
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    controller: default
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"

@ywk253100 ywk253100 self-assigned this Dec 16, 2020
@LinuxSuRen

LinuxSuRen commented Jan 15, 2021

I've checked almost everything; no relevant error message can be found, but it still isn't working.

Anyway, I resolved it by changing the password hashing method. See the following SQL command:

update harbor_user set password_version='sha1' where user_id=1;

It worked after I ran this SQL in the database pod. It should be a bug, if I understand it correctly.
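A plausible explanation for why this helps, sketched in Python under the assumption that Harbor derives the hash with PBKDF2-HMAC (4096 iterations, 16-byte key) and that password_version selects the underlying hash function: a digest written with one algorithm can never validate under the other, so if the stored hash and the configured version disagree, login fails with 'Invalid credentials' even for the correct password.

```python
import hashlib

# Hypothetical credential pair; the 4096 iterations and 16-byte key
# mirror what Harbor's utils.Encrypt appears to use (verify for your version).
pw, salt = b"Harbor12345", b"some-salt"

h_sha1 = hashlib.pbkdf2_hmac("sha1", pw, salt, 4096, dklen=16).hex()
h_sha256 = hashlib.pbkdf2_hmac("sha256", pw, salt, 4096, dklen=16).hex()

# The two digests never match, so core must verify with the same
# algorithm that produced the stored hash; password_version records which one.
print(h_sha1 == h_sha256)  # False
```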

@DandyDeveloper

I have a similar problem but I am trying to authenticate THROUGH harbor-core to the Notary server.

@haowaiwai

update harbor_user set password_version='sha1' where user_id=1;

Actually, why does this work?

@LinuxSuRen

I didn't go deep into the code base. But I guess that there's a change in the password encryption method.

@haowaiwai

I didn't go deep into the code base. But I guess that there's a change in the password encryption method.

Thank you.

@yogit2020

It works after updating the external URL, but has anyone tried updating both the username and the password?

@kelonsen

kelonsen commented Feb 28, 2023

Harbor v2.1.0: I changed the admin password in the web UI, but something strange happened. Successful and unauthorized logins appear alternately:

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://harbor.miduchina.com/v2/: unauthorized: authentication required

@kurollo

kurollo commented Apr 15, 2023

I had the same problem and resolved it with the steps below, using Helm. See how to install Harbor with the Bitnami Helm chart: https://bitnami.com/stack/harbor/helm

  1. Check your TLS certificate configuration (if TLS is enabled).
  2. Create a strong password like k84D)4#9q^2Y)$M in values.yaml, or via Helm: helm install registry-harbor registry/harbor -n harbor-system --set harborAdminPassword='k84D)4#9q^2Y)$M'
  3. Forward the DB port: kubectl port-forward pod/registry-harbor-postgresql-0 5432:5432 -n harbor-system
  4. Log in to your Postgres DB (password: not-secure-database-password by default in your values.yaml): psql --host localhost --username postgres
  5. Switch to the registry DB with \c registry and run update harbor_user set salt='', password='' where user_id = 1;
  6. Uninstall: helm uninstall registry-harbor -n harbor-system
  7. Install again: helm install registry-harbor registry/harbor -n harbor-system --set harborAdminPassword='k84D)4#9q^2Y)$M'
  8. Check the password: $(kubectl get secret --namespace harbor-system registry-harbor-core-envvars -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}" | base64 -d)
  9. The default user is admin, and the password is our custom one: k84D)4#9q^2Y)$M
  10. Open your URL and have fun with the Harbor registry 👍🏼
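Step 8 above simply base64-decodes the HARBOR_ADMIN_PASSWORD value stored in the core secret. An offline illustration of just that decoding step, using a hard-coded sample value in place of the kubectl output (the encoded string below is the example password from these steps, pre-encoded for demonstration):

```shell
# Sample value standing in for the output of:
#   kubectl get secret -n harbor-system registry-harbor-core-envvars \
#     -o jsonpath="{.data.HARBOR_ADMIN_PASSWORD}"
encoded='azg0RCk0IzlxXjJZKSRN'

# Kubernetes secrets store values base64-encoded; decode to recover
# the plaintext password.
echo "$encoded" | base64 -d
# -> k84D)4#9q^2Y)$M
```

If the decoded value does not match what you set with --set harborAdminPassword=..., the release was likely installed before the value changed.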

@HowHsu

HowHsu commented May 16, 2024

I've checked almost everything and found no relevant error message, but it still isn't working.

Anyway, I resolved it by changing the password encryption method. Please see the following SQL command:

update harbor_user set password_version='sha1' where user_id=1;

It's working after I ran this SQL in the database pod. It should be a bug, if I understand it correctly.

Has this bug been addressed? I deployed Harbor from the Google marketplace on GKE and hit the same problem; updating password_version doesn't work for me. I even found that the records in the harbor_user table change after I delete (and thus re-create) the harbor-core pod. Any tips?

@Xun66

Xun66 commented Jun 18, 2024

Facing the same issue with v2.11.0, deployed with Docker. The related Docker Compose logs show:

nginx              | 172.26.0.1 - "GET /v2/ HTTP/1.1" 401 76 "-" "docker/25.0.3 go/go1.21.6 git-commit/f417435 kernel/5.15.0-97-generic os/linux arch/amd64 UpstreamClient(Docker-Client/25.0.3 \x5C(linux\x5C))" 0.002 0.002 .
harbor-core        | 2024-06-18T02:47:26Z [WARNING] [/core/auth/authenticator.go:158]: Login failed, locking robot$gitlab_bot, and sleep for 1.5s
harbor-core        | 2024-06-18T02:47:28Z [ERROR] [/server/middleware/security/basic_auth.go:72][client IP="172.28.14.148" requestID="5933e526-3bcd-4b70-8808-144cfe9f1e30" user agent="docker/25.0.3 go/go1.21.6 git-commit/f417435 kernel/5.15.0-97-generic os/linux arch/amd64 UpstreamClient(Docker-Client/25.0.3 \(linux\))"]: failed to authenticate user:robot$gitlab_bot, error:Failed to authenticate user, due to error 'Invalid credentials'

login command:

docker login registry.xxxxxx.com --username 'robot$gitlab_bot' --password-stdin <<< 'BotTokenMasked'
Error response from daemon: Get "https://registry.xxxxxx.com/v2/": unauthorized:

Finally solved it by deleting gitlab_bot and recreating a robot with the same name; after that, login succeeded. 👍

Not sure if this is an issue with the brute-force protection (the account was locked, so authentication always failed).
