Default password and username #589

Closed
itsecforu opened this issue May 20, 2020 · 44 comments

Comments

@itsecforu

Hello everyone. I'm unable to log in with the default credentials.
Which pod should I describe?
Where can I find error information?

@thedewpoint

thedewpoint commented May 21, 2020

@itsecforu I just spent several hours debugging the same issue. If it's the same one I had, I can help you. Can you do a test for me and check whether you get a 405 in the network response when you click login? (in your browser's developer tools)
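
If the browser devtools aren't available, you can reproduce the same check from a shell. This is only a sketch: it assumes Harbor is reachable at http://<harbor-host> and that the portal posts the login form to /c/login, as recent Harbor releases do.

# Send a login request and print only the HTTP status code.
# A 405 here usually means the request is hitting the static portal instead of harbor-core.
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "http://<harbor-host>/c/login" \
  --data-urlencode "principal=admin" \
  --data-urlencode "password=Harbor12345"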

@itsecforu
Author

@thedewpoint How can I test it? Do you mean Chrome? I have limited rights in the working environment.

@thedewpoint

@itsecforu Yes, in Chrome. Just open the developer tools (F12) and click on the Network tab. Then try to log in and check the response code. If it's 405, you have the same issue I did and I can walk you through the solution.

@thedewpoint

I'm going away for Memorial Day, so I'll post what I did in case you have the same issue. After installing the Helm chart I (incorrectly) changed the service type of harbor-portal from ClusterIP to LoadBalancer to expose it. Even though I was able to access the web app, login requests were routed to the portal service when they should have been routed to the core service. I changed the portal service back to ClusterIP and made sure it was running on port 80.

This chart sets up the ingress for Harbor: if you run kubectl describe ingress my-release-harbor you will see it has URI-based routing to the respective services, keyed on the host. By default the host is something like core.harbor.domain. I created a public DNS record using DuckDNS (free) and changed the host to reflect my new domain. I also didn't install any certs, so I flipped the two ssl-redirect flags at the top of the values file to false so I could hit it over HTTP. By default this runs on port 80, so make sure nothing else is using that port. Now you should be able to hit it with your domain and see login requests succeed. Good luck.
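
A rough sketch of those steps as commands, assuming the release is called my-release, the chart's default service names, and the default namespace (adjust to your install):

# Inspect the host- and URI-based routing the chart created
kubectl describe ingress my-release-harbor

# If the portal service type was changed, put it back to ClusterIP
kubectl patch svc my-release-harbor-portal -p '{"spec":{"type":"ClusterIP"}}'

# After flipping the two ssl-redirect annotations to "false" and setting your
# own host under expose.ingress.hosts.core in values.yaml, roll the change out:
helm upgrade my-release harbor/harbor -f values.yaml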

@0pendev

0pendev commented May 23, 2020

I'm having this issue when deploying with an Ingress. I get a 405 from an nginx server (I use haproxy-ingress so it's definitely one from the harbor deployment).

@itsecforu
Author

@thedewpoint Thanks for the reply. I got a 502 error.

@thedewpoint

@0pendev Sounds like you are having the same problem as me. You should be able to troubleshoot with my steps above: 1. Check that the domain you are accessing Harbor on matches the one in your ingress. 2. Check that your pods are listening on the correct ports. 3. Check the access logs inside the core pod and make sure the logon request is being routed to core.
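
Roughly, with kubectl (a sketch; substitute your own release name and namespace):

# 1. The host in the ingress rule must match the hostname you browse to
kubectl get ingress
# 2. Each service should have endpoints on the expected container ports
kubectl get endpoints my-release-harbor-core my-release-harbor-portal
# 3. Watch the core pod while attempting a login; the POST should show up here
kubectl logs -f deploy/my-release-harbor-core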

@itsecforu Unfortunately that's a different problem. I need more detail from the logs to figure out what's going on.

@itsecforu
Author

itsecforu commented May 27, 2020

@thedewpoint Seems so, because I used type NodePort.
Which logs are best for clarifying the situation?
I have this issue about that: #585
I don't know how to fix it. Maybe somebody has a working values file? :-(

I also tried changing the type to loadBalancer and got this:

# helm status harbor
LAST DEPLOYED: Wed May 27 16:58:14 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                       DATA  AGE
harbor-harbor-chartmuseum  23    27m
harbor-harbor-core         41    27m
harbor-harbor-jobservice   1     27m
harbor-harbor-nginx        1     27m
harbor-harbor-registry     2     27m

==> v1/Deployment
NAME                         READY  UP-TO-DATE  AVAILABLE  AGE
harbor-harbor-chartmuseum    1/1    1           1          27m
harbor-harbor-clair          0/1    1           0          27m
harbor-harbor-core           0/1    1           0          27m
harbor-harbor-jobservice     0/1    1           0          27m
harbor-harbor-nginx          0/1    1           0          27m
harbor-harbor-notary-server  0/1    1           0          27m
harbor-harbor-notary-signer  0/1    1           0          27m
harbor-harbor-portal         0/1    1           0          27m
harbor-harbor-registry       1/1    1           1          27m

==> v1/PersistentVolumeClaim
NAME                       STATUS  VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS   AGE
harbor-harbor-chartmuseum  Bound   pv5     10Gi      RWO           local-storage  27m
harbor-harbor-jobservice   Bound   pv7     10Gi      RWO           local-storage  27m
harbor-harbor-registry     Bound   pv9     10Gi      RWO           local-storage  27m

==> v1/Pod(related)
NAME                                          READY  STATUS            RESTARTS  AGE
harbor-harbor-chartmuseum-5f9f5f4bb6-xwthw    1/1    Running           0         4m10s
harbor-harbor-clair-7887988b6d-qc68n          1/2    CrashLoopBackOff  8         27m
harbor-harbor-core-66db88cd5b-fcb2r           0/1    CrashLoopBackOff  8         27m
harbor-harbor-core-688fb8759b-6xxn2           0/1    CrashLoopBackOff  2         4m10s
harbor-harbor-database-0                      1/1    Running           0         27m
harbor-harbor-jobservice-694cbffc84-6dmbh     0/1    CrashLoopBackOff  5         27m
harbor-harbor-jobservice-86b68b878-4qd9h      0/1    Running           2         4m10s
harbor-harbor-nginx-5bcbc5fc5f-xn7w2          0/1    CrashLoopBackOff  9         27m
harbor-harbor-notary-server-c4886678-nfx2j    0/1    Error             4         4m10s
harbor-harbor-notary-signer-6c577bd759-x6t64  0/1    Error             4         4m10s
harbor-harbor-portal-5b5d5485d5-59zbr         0/1    Running           4         27m
harbor-harbor-redis-0                         1/1    Running           0         27m
harbor-harbor-registry-7968db596b-t92cr       2/2    Running           0         4m10s

==> v1/Secret
NAME                         TYPE    DATA  AGE
harbor-harbor-chartmuseum    Opaque  1     27m
harbor-harbor-clair          Opaque  3     27m
harbor-harbor-core           Opaque  7     27m
harbor-harbor-database       Opaque  1     27m
harbor-harbor-jobservice     Opaque  1     27m
harbor-harbor-notary-server  Opaque  5     27m
harbor-harbor-registry       Opaque  2     27m

==> v1/Service
NAME                         TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                      AGE
harbor                       LoadBalancer  10.233.35.100  <pending>    80:31071/TCP,4443:32007/TCP  27m
harbor-harbor-chartmuseum    ClusterIP     10.233.39.210  <none>       80/TCP                       27m
harbor-harbor-clair          ClusterIP     10.233.27.204  <none>       8080/TCP                     27m
harbor-harbor-core           ClusterIP     10.233.35.182  <none>       80/TCP                       27m
harbor-harbor-database       ClusterIP     10.233.57.6    <none>       5432/TCP                     27m
harbor-harbor-jobservice     ClusterIP     10.233.33.178  <none>       80/TCP                       27m
harbor-harbor-notary-server  ClusterIP     10.233.30.182  <none>       4443/TCP                     27m
harbor-harbor-notary-signer  ClusterIP     10.233.20.160  <none>       7899/TCP                     27m
harbor-harbor-portal         ClusterIP     10.233.10.92   <none>       80/TCP                       27m
harbor-harbor-redis          ClusterIP     10.233.9.122   <none>       6379/TCP                     27m
harbor-harbor-registry       ClusterIP     10.233.23.82   <none>       5000/TCP,8080/TCP            27m

==> v1/StatefulSet
NAME                    READY  AGE
harbor-harbor-database  1/1    27m
harbor-harbor-redis     1/1    27m


NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://10.2.67.201.
For more details, please visit https://github.com/goharbor/harbor.

[root@master ~]# helm status harbor
LAST DEPLOYED: Wed May 27 16:58:14 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                       DATA  AGE
harbor-harbor-chartmuseum  23    35m
harbor-harbor-core         41    35m
harbor-harbor-jobservice   1     35m
harbor-harbor-nginx        1     35m
harbor-harbor-registry     2     35m

==> v1/Deployment
NAME                         READY  UP-TO-DATE  AVAILABLE  AGE
harbor-harbor-chartmuseum    1/1    1           1          35m
harbor-harbor-clair          0/1    1           0          35m
harbor-harbor-core           0/1    1           0          35m
harbor-harbor-jobservice     0/1    1           0          35m
harbor-harbor-nginx          0/1    1           0          35m
harbor-harbor-notary-server  0/1    1           0          35m
harbor-harbor-notary-signer  0/1    1           0          35m
harbor-harbor-portal         0/1    1           0          35m
harbor-harbor-registry       1/1    1           1          35m

==> v1/PersistentVolumeClaim
NAME                       STATUS  VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS   AGE
harbor-harbor-chartmuseum  Bound   pv5     10Gi      RWO           local-storage  35m
harbor-harbor-jobservice   Bound   pv7     10Gi      RWO           local-storage  35m
harbor-harbor-registry     Bound   pv9     10Gi      RWO           local-storage  35m

==> v1/Pod(related)
NAME                                          READY  STATUS            RESTARTS  AGE
harbor-harbor-chartmuseum-5f9f5f4bb6-xwthw    1/1    Running           0         12m
harbor-harbor-clair-7887988b6d-qc68n          1/2    CrashLoopBackOff  10        35m
harbor-harbor-core-66db88cd5b-fcb2r           0/1    CrashLoopBackOff  9         35m
harbor-harbor-core-688fb8759b-6xxn2           0/1    CrashLoopBackOff  5         12m
harbor-harbor-database-0                      1/1    Running           0         35m
harbor-harbor-jobservice-694cbffc84-6dmbh     0/1    CrashLoopBackOff  6         35m
harbor-harbor-jobservice-86b68b878-4qd9h      0/1    CrashLoopBackOff  5         12m
harbor-harbor-nginx-5bcbc5fc5f-xn7w2          0/1    CrashLoopBackOff  11        35m
harbor-harbor-notary-server-c4886678-nfx2j    0/1    CrashLoopBackOff  6         12m
harbor-harbor-notary-signer-6c577bd759-x6t64  0/1    CrashLoopBackOff  6         12m
harbor-harbor-portal-5b5d5485d5-59zbr         0/1    Running           6         35m
harbor-harbor-redis-0                         1/1    Running           0         35m
harbor-harbor-registry-7968db596b-t92cr       2/2    Running           0         12m

==> v1/Secret
NAME                         TYPE    DATA  AGE
harbor-harbor-chartmuseum    Opaque  1     35m
harbor-harbor-clair          Opaque  3     35m
harbor-harbor-core           Opaque  7     35m
harbor-harbor-database       Opaque  1     35m
harbor-harbor-jobservice     Opaque  1     35m
harbor-harbor-notary-server  Opaque  5     35m
harbor-harbor-registry       Opaque  2     35m

==> v1/Service
NAME                         TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                      AGE
harbor                       LoadBalancer  10.233.35.100  <pending>    80:31071/TCP,4443:32007/TCP  35m
harbor-harbor-chartmuseum    ClusterIP     10.233.39.210  <none>       80/TCP                       35m
harbor-harbor-clair          ClusterIP     10.233.27.204  <none>       8080/TCP                     35m
harbor-harbor-core           ClusterIP     10.233.35.182  <none>       80/TCP                       35m
harbor-harbor-database       ClusterIP     10.233.57.6    <none>       5432/TCP                     35m
harbor-harbor-jobservice     ClusterIP     10.233.33.178  <none>       80/TCP                       35m
harbor-harbor-notary-server  ClusterIP     10.233.30.182  <none>       4443/TCP                     35m
harbor-harbor-notary-signer  ClusterIP     10.233.20.160  <none>       7899/TCP                     35m
harbor-harbor-portal         ClusterIP     10.233.10.92   <none>       80/TCP                       35m
harbor-harbor-redis          ClusterIP     10.233.9.122   <none>       6379/TCP                     35m
harbor-harbor-registry       ClusterIP     10.233.23.82   <none>       5000/TCP,8080/TCP            35m

==> v1/StatefulSet
NAME                    READY  AGE
harbor-harbor-database  1/1    35m
harbor-harbor-redis     1/1    35m


NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://10.2.67.201.
For more details, please visit https://github.com/goharbor/harbor.

@thedewpoint

@itsecforu You can see that the Harbor core pod is down, among some other pods; they have crashed. I would look at the logs to determine why, for that pod and the others. Also, can you inspect your ingress for Harbor and post it here?
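
For example, something like this, using the names from your helm status output (a sketch only):

# Why is the core pod crashing?
kubectl describe pod harbor-harbor-core-66db88cd5b-fcb2r
kubectl logs harbor-harbor-core-66db88cd5b-fcb2r --previous
# And the ingress, if the chart created one
kubectl get ingress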

@itsecforu
Author

I don't have an ingress, as you can see.

@thedewpoint

Can you try running kubectl get ingress?

@thedewpoint

Just read your other issue. Can you get logs from the Postgres container?

@itsecforu
Author

ingress:

 kubectl get ing
No resources found.

Postgres doesn't seem OK:

 kubectl  logs  harbor-harbor-database-0
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: C
  MONETARY: C
  NUMERIC:  C
  TIME:     C
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

    pg_ctl -D /var/lib/postgresql/data -l logfile start

postgres
waiting for server to start....LOG:  database system was shut down at 2020-05-27 13:35:22 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started
 done
server started
ALTER ROLE


/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initial-notaryserver.sql
CREATE DATABASE
CREATE ROLE
ALTER ROLE
GRANT


/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initial-notarysigner.sql
CREATE DATABASE
CREATE ROLE
ALTER ROLE
GRANT


/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/initial-registry.sql
CREATE DATABASE
You are now connected to database "registry" as user "postgres".
CREATE TABLE


waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

LOG:  database system was shut down at 2020-05-27 13:35:24 UTC
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

@thedewpoint

It looks OK. I have to double-check, but I'm pretty sure my logs look like that as well.

@thedewpoint

@itsecforu Can you get the pod logs for the "core" Harbor pod?

@itsecforu
Author

itsecforu commented May 27, 2020

Hmm, I have two core pods:

1st - nothing in the logs

2nd:

kubectl logs harbor-harbor-core-688fb8759b-c6mq7

ls: /harbor_cust_cert: No such file or directory
2020-05-27T16:49:07Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.index.v1+json registered
2020-05-27T16:49:07Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.list.v2+json registered
2020-05-27T16:49:07Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.v1+prettyjws registered
2020-05-27T16:49:07Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.config.v1+json registered
2020-05-27T16:49:07Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.container.image.v1+json registered
2020-05-27T16:49:07Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cncf.helm.config.v1+json registered
2020-05-27T16:49:07Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cnab.manifest.v1 registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/native/adapter.go:36]: the factory for adapter docker-registry registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/harbor/adaper.go:31]: the factory for adapter harbor registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/dockerhub/adapter.go:25]: Factory for adapter docker-hub registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/huawei/huawei_adapter.go:27]: the factory of Huawei adapter was registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/googlegcr/adapter.go:29]: the factory for adapter google-gcr registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/awsecr/adapter.go:47]: the factory for adapter aws-ecr registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/azurecr/adapter.go:15]: Factory for adapter azure-acr registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/aliacr/adapter.go:31]: the factory for adapter ali-acr registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/jfrog/adapter.go:30]: the factory of jfrog artifactory adapter was registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/quayio/adapter.go:38]: the factory of Quay.io adapter was registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered
2020-05-27T16:49:07Z [INFO] [/replication/adapter/gitlab/adapter.go:17]: the factory for adapter gitlab registered
2020-05-27T16:49:07Z [INFO] [/core/controllers/base.go:299]: Config path: /etc/core/app.conf
2020-05-27T16:49:07Z [INFO] [/core/main.go:111]: initializing configurations...
2020-05-27T16:49:07Z [INFO] [/core/config/config.go:83]: key path: /etc/core/key
2020-05-27T16:49:07Z [INFO] [/core/config/config.go:60]: init secret store
2020-05-27T16:49:07Z [INFO] [/core/config/config.go:63]: init project manager
2020-05-27T16:49:07Z [INFO] [/core/config/config.go:95]: initializing the project manager based on local database...
2020-05-27T16:49:07Z [INFO] [/core/main.go:113]: configurations initialization completed
2020-05-27T16:49:07Z [INFO] [/common/dao/base.go:84]: Registering database: type-PostgreSQL host-harbor-harbor-database port-5432 databse-registry sslmode-"disable"
2020-05-27T16:49:08Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2020-05-27T16:49:12Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2020-05-27T16:49:18Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2020-05-27T16:49:28Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2020-05-27T16:49:46Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2020-05-27T16:50:20Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout
2020-05-27T16:50:22Z [FATAL] [/core/main.go:120]: failed to initialize database: failed to connect to tcp:harbor-harbor-database:5432 after 60 seconds

@thedewpoint

That's problem #1: we need to figure out why it can't connect to the database. Are there no database logs corresponding to that same timestamp? Also, you shouldn't have two core pods; can you delete the one that has no logs? Make sure your service for core is correctly routing to the pod that is running.
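
A quick way to narrow that down, assuming in-cluster DNS and the service names from your helm status output (a sketch only):

# Does the database service have an endpoint behind it?
kubectl get endpoints harbor-harbor-database
# Can a throwaway pod resolve the service name at all?
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup harbor-harbor-database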

@itsecforu
Author

itsecforu commented May 27, 2020

How do I solve problem #1?
If I delete the pod, it will come up again :-)
The duplication occurred after running helm upgrade -f values when I changed the config.
How can I check this routing?

@itsecforu
Author

itsecforu commented May 28, 2020

I notice my Helm version is 2.14.3; maybe that's the root of the problem?

helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

logs

harbor-registry:

ls: /harbor_cust_cert: No such file or directory 
time="2020-05-28T07:58:58.551873799Z" level=info msg="debug server listening localhost:5001"  
time="2020-05-28T07:58:58.554012799Z" level=info msg="configuring endpoint harbor (http://harbor-harbor-core/service/notifications), timeout=3s, headers=map[]" go.version=go1.13.8 instance.id=54c6a687-7be2-4b33-996c-72aa782169f7 service=registry version=v2.7.1.m  
time="2020-05-28T07:58:58.564441649Z" level=info msg="using redis blob descriptor cache" go.version=go1.13.8 instance.id=54c6a687-7be2-4b33-996c-72aa782169f7 service=registry version=v2.7.1.m  
time="2020-05-28T07:58:58.564755199Z" level=info msg="listening on [::]:5000" go.version=go1.13.8 instance.id=54c6a687-7be2-4b33-996c-72aa782169f7 service=registry version=v2.7.1.m  

harbor-clair:

ls: /harbor_cust_cert: No such file or directory 
{"Event":"pgsql: could not open database: dial tcp: lookup harbor-harbor-database on 10.233.0.10:53: read udp 10.233.70.180:43769-\u003e10.233.0.10:53: i/o timeout","Level":"fatal","Location":"main.go:97","Time":"2020-05-28 07:58:02.386700"}

harbor-core:

ls: /harbor_cust_cert: No such file or directory 
2020-05-28T07:59:49Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.index.v1+json registered 
2020-05-28T07:59:49Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.list.v2+json registered 
2020-05-28T07:59:49Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.distribution.manifest.v1+prettyjws registered 
2020-05-28T07:59:49Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.oci.image.config.v1+json registered 
2020-05-28T07:59:49Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.docker.container.image.v1+json registered 
2020-05-28T07:59:49Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cncf.helm.config.v1+json registered 
2020-05-28T07:59:49Z [INFO] [/controller/artifact/processor/processor.go:58]: the processor to process media type application/vnd.cnab.manifest.v1 registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/native/adapter.go:36]: the factory for adapter docker-registry registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/harbor/adaper.go:31]: the factory for adapter harbor registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/dockerhub/adapter.go:25]: Factory for adapter docker-hub registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/huawei/huawei_adapter.go:27]: the factory of Huawei adapter was registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/googlegcr/adapter.go:29]: the factory for adapter google-gcr registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/awsecr/adapter.go:47]: the factory for adapter aws-ecr registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/azurecr/adapter.go:15]: Factory for adapter azure-acr registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/aliacr/adapter.go:31]: the factory for adapter ali-acr registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/jfrog/adapter.go:30]: the factory of jfrog artifactory adapter was registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/quayio/adapter.go:38]: the factory of Quay.io adapter was registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/helmhub/adapter.go:30]: the factory for adapter helm-hub registered 
2020-05-28T07:59:49Z [INFO] [/replication/adapter/gitlab/adapter.go:17]: the factory for adapter gitlab registered 
2020-05-28T07:59:49Z [INFO] [/core/controllers/base.go:299]: Config path: /etc/core/app.conf 
2020-05-28T07:59:49Z [INFO] [/core/main.go:111]: initializing configurations... 
2020-05-28T07:59:49Z [INFO] [/core/config/config.go:83]: key path: /etc/core/key 
2020-05-28T07:59:49Z [INFO] [/core/config/config.go:60]: init secret store 
2020-05-28T07:59:49Z [INFO] [/core/config/config.go:63]: init project manager 
2020-05-28T07:59:49Z [INFO] [/core/config/config.go:95]: initializing the project manager based on local database... 
2020-05-28T07:59:49Z [INFO] [/core/main.go:113]: configurations initialization completed 
2020-05-28T07:59:49Z [INFO] [/common/dao/base.go:84]: Registering database: type-PostgreSQL host-harbor-harbor-database port-5432 databse-registry sslmode-"disable" 
2020-05-28T07:59:50Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout 
2020-05-28T07:59:54Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout 
2020-05-28T08:00:00Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout 
2020-05-28T08:00:10Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout 
2020-05-28T08:00:28Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout 
2020-05-28T08:01:02Z [ERROR] [/common/utils/utils.go:102]: failed to connect to tcp://harbor-harbor-database:5432, retry after 2 seconds :dial tcp: i/o timeout 
2020-05-28T08:01:04Z [FATAL] [/core/main.go:120]: failed to initialize database: failed to connect to tcp:harbor-harbor-database:5432 after 60 seconds

notary-signer:

2020/05/28 08:00:26 Updating database. 
2020/05/28 08:00:56 Failed to connect DB after 30 seconds, time out.  

harbor-jobservice:

2020-05-28T08:02:12Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-harbor-core/api/internal/configurations: dial tcp: i/o timeout, url:http://harbor-harbor-core/api/internal/configurations 
2020-05-28T08:02:12Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config 
2020-05-28T08:02:12Z [INFO] [/jobservice/job/impl/context.go:78]: Retry in 9 seconds 
2020-05-28T08:02:21Z [INFO] [/common/config/store/driver/rest.go:31]: get configuration from url: http://harbor-harbor-core/api/internal/configurations 
2020-05-28T08:02:51Z [ERROR] [/common/config/store/driver/rest.go:34]: Failed on load rest config err:Get http://harbor-harbor-core/api/internal/configurations: dial tcp: i/o timeout, url:http://harbor-harbor-core/api/internal/configurations 
2020-05-28T08:02:51Z [ERROR] [/jobservice/job/impl/context.go:75]: Job context initialization error: failed to load rest config

harbor-nginx:

2020/05/28 08:04:20 [emerg] 1#0: host not found in upstream "harbor-harbor-core" in /etc/nginx/nginx.conf:22 
nginx: [emerg] host not found in upstream "harbor-harbor-core" in /etc/nginx/nginx.conf:22

harbor-notary-server:

2020/05/28 08:00:09 Updating database. 
2020/05/28 08:00:39 Failed to connect DB after 30 seconds, time out.  

harbor-database:

The files belonging to this database system will be owned by user "postgres". 
This user must also own the server process. 

The database cluster will be initialized with locales 
  COLLATE:  en_US.UTF-8 
  CTYPE:    en_US.UTF-8 
  MESSAGES: C 
  MONETARY: C 
  NUMERIC:  C 
  TIME:     C 
The default text search configuration will be set to "english". 

Data page checksums are disabled. 

fixing permissions on existing directory /var/lib/postgresql/data ... ok 
creating subdirectories ... ok 
selecting default max_connections ... 100 
selecting default shared_buffers ... 128MB 
selecting default timezone ... UTC 
selecting dynamic shared memory implementation ... posix 
creating configuration files ... ok 
running bootstrap script ... ok 
performing post-bootstrap initialization ... ok 
syncing data to disk ... ok 

Success. You can now start the database server using: 

    pg_ctl -D /var/lib/postgresql/data -l logfile start 


WARNING: enabling "trust" authentication for local connections 
You can change this by editing pg_hba.conf or using the option -A, or 
--auth-local and --auth-host, the next time you run initdb. 
postgres 
waiting for server to start....LOG:  database system was shut down at 2020-05-28 07:40:36 UTC

@itsecforu
Author

itsecforu commented May 28, 2020

I've highlighted in bold exactly what changed.

values.yaml:

expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP", "nodePort" or "loadBalancer" and fill the information
  # in the corresponding section
  **type: nodePort**
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291
    # for the detail.
    enabled: false
    # Fill the name of secret if you want to use your own TLS certificate.
    # The secret contains keys named:
    # "tls.crt" - the certificate (required)
    # "tls.key" - the private key (required)
    # "ca.crt" - the certificate of CA (optional), this enables the download
    # link on portal to download the certificate of CA
    # These files will be generated automatically if the "secretName" is not set
    secretName: ""
    # By default, the Notary service will use the same cert and key as
    # described above. Fill the name of secret if you want to use a
    # separated one. Only needed when the type is "ingress".
    notarySecretName: ""
    # The common name used to generate the certificate, it's necessary
    # when the type isn't "ingress" and "secretName" is null
    commonName: ""
  ingress:
    hosts:
      core: core.harbor.domain
      notary: notary.harbor.domain
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    controller: default
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    ports:
      # The service port Harbor listens on when serving with HTTP
      httpPort: 80
      # The service port Harbor listens on when serving with HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving with HTTP
        port: 80
        # The node port Harbor listens on when serving with HTTP
        **nodePort: 30088**
      https:
        # The service port Harbor listens on when serving with HTTPS
        port: 443
        # The node port Harbor listens on when serving with HTTPS
        nodePort: 30003
      # Only needed when notary.enabled is set to true
      notary:
        # The service port Notary listens on
        port: 4443
        # The node port Notary listens on
        nodePort: 30004
  loadBalancer:
    # The name of LoadBalancer service
    name: harbor
    # Set the IP if the LoadBalancer supports assigning IP
    IP: ""
    ports:
      # The service port Harbor listens on when serving with HTTP
      httpPort: 80
      # The service port Harbor listens on when serving with HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
    annotations: {}
    sourceRanges: []

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands showed on portal
# 2) populate the token service URL returned to docker/notary client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
**externalURL: https://10.2.67.201**

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamicly.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you have already existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used(the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "local-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: ""
      storageClass: "local-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: ""
      storageClass: "local-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: "local-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "local-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    trivy:
      existingClaim: ""
      storageClass: "local-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
  # Define which storage backend is used for registry and chartmuseum to store
  # images and charts. Refer to
  # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  # for the detail.
  imageChartStorage:
    # Specify whether to disable `redirect` for images and chart storage, for
    # backends which not supported it (such as using minio for `s3` storage type), please disable
    # it. To disable redirects, simply set `disableredirect` to `true` instead.
    # Refer to
    # https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
    # for the detail.
    disableredirect: false
    # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
    # The secret must contain keys named "ca.crt" which will be injected into the trust store
    # of registry's and chartmuseum's containers.
    # caBundleSecretName:

    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
    gcs:
      bucket: bucketname
      # The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
    s3:
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory

imagePullPolicy: IfNotPresent

# Use this set to assign a list of default pullSecrets
imagePullSecrets:
#  - name: docker-registry-secret
#  - name: internal-registry-secret

# The update strategy for deployments with persistent volumes(jobservice, registry 
# and chartmuseum): "RollingUpdate" or "Recreate"
# Set it as "Recreate" when "RWM" for volumes isn't supported 
updateStrategy:
  type: RollingUpdate

# debug, info, warning, error or fatal
logLevel: info

# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "Harbor12345"
# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"

# The proxy settings for updating clair vulnerabilities from the Internet and replicating
# artifacts from/to the registries that cannot be reached directly
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
  components:
    - core
    - jobservice
    - clair

## UAA Authentication Options
# If you're using UAA for authentication behind a self-signed
# certificate you will need to provide the CA Cert.
# Set uaaSecretName below to provide a pre-created secret that
# contains a base64 encoded CA Certificate named `ca.crt`.
# uaaSecretName:

# If expose the service via "ingress", the Nginx will not be used
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: dev
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

portal:
  image:
    repository: goharbor/harbor-portal
    tag: dev
  replicas: 1
# resources:
#  requests:
#    memory: 256Mi
#    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

core:
  image:
    repository: goharbor/harbor-core
    tag: dev
  replicas: 1
  ## Liveness probe values
  livenessProbe:
    **initialDelaySeconds: 300000**
# resources:
#  requests:
#    memory: 256Mi
#    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when core server communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate and private key for token encryption/decryption.
  # The secret must contain keys named:
  # "tls.crt" - the certificate
  # "tls.key" - the private key
  # The default key pair will be used if it isn't set
  secretName: ""
  # The XSRF key. Will be generated automatically if it isn't specified
  xsrfKey: ""

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: dev
  replicas: 1
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLogger: file
# resources:
#   requests:
#     memory: 256Mi
#     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used when job service communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""

registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: dev

    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: dev

    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Secret is used to secure the upload state from client
  # and registry storage backend.
  # See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
  relativeurls: false
  credentials:
    username: "harbor_registry_user"
    password: "harbor_registry_password"
    # If you update the username or password of registry, make sure use cli tool htpasswd to generate the bcrypt hash
    # e.g. "htpasswd -nbBC10 $username $password"
    htpasswd: "harbor_registry_user:$2y$10$9L4Tc0DJbFFMB6RdSCunrOpTHdwhid4ktBJmLD00bYgqkkGOvll3m"

  middleware:
    enabled: false
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
      # that allows access to CloudFront
      privateKeySecret: "my-secret"

chartmuseum:
  enabled: true
  # Harbor defaults ChartMuseum to returning relative urls, if you want using absolute url you should enable it by change the following value to 'true'
  absoluteUrl: false
  image:
    repository: goharbor/chartmuseum-photon
    tag: dev
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

clair:
  enabled: true
  clair:
    image:
      repository: goharbor/clair-photon
      tag: dev
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  adapter:
    image:
      repository: goharbor/clair-adapter-photon
      tag: dev
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  replicas: 1
  # The interval of clair updaters, the unit is hour, set to 0 to
  # disable the updaters
  updatersInterval: 12
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

trivy:
  # enabled the flag to enable Trivy scanner
  enabled: true
  image:
    # repository the repository for Trivy adapter image
    repository: goharbor/trivy-adapter-photon
    # tag the tag for Trivy adapter image
    tag: dev
  # replicas the number of Pod replicas
  replicas: 1
  # debugMode the flag to enable Trivy debug mode with more verbose scanning log
  debugMode: false
  # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
  vulnType: "os,library"
  # severity a comma-separated list of severities to be checked
  severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
  # ignoreUnfixed the flag to display only fixed vulnerabilities
  ignoreUnfixed: false
  # gitHubToken the GitHub access token to download Trivy DB
  #
  # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
  # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
  # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
  # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
  # Currently, the database is updated every 12 hours and published as a new release to GitHub.
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult
  # https://developer.github.com/v3/#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  gitHubToken: ""
  # skipUpdate the flag to disable Trivy DB downloads from GitHub
  #
  # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  # `/home/scanner/.cache/trivy/db/trivy.db` path.
  skipUpdate: false
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1Gi

notary:
  enabled: true
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: dev
    replicas: 1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: dev
    replicas: 1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}
  # Fill the name of a kubernetes secret if you want to use your own
  # TLS certificate authority, certificate and private key for notary
  # communications.
  # The secret must contain keys named ca.crt, tls.crt and tls.key that
  # contain the CA, certificate and private key.
  # They will be generated if not set.
  secretName: ""

database:
  # if external database is used, set "type" to "external"
  # and fill the connection informations in "external" section
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: dev
    # the image used by the init container
    initContainerImage:
      repository: busybox
      tag: latest
    # The initial superuser password for internal database
    password: "changeit"
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    clairDatabase: "clair"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    # "disable" - No SSL
    # "require" - Always SSL (skip verification)
    # "verify-ca" - Always SSL (verify that the certificate presented by the
    # server was signed by a trusted CA)
    # "verify-full" - Always SSL (verify that the certification presented by the
    # server was signed by a trusted CA and the server host name matches the one
    # in the certificate)
    sslmode: "disable"
  # The maximum number of connections in the idle connection pool.
  # If it <=0, no idle connections are retained.
  maxIdleConns: 50
  # The maximum number of open connections to the database.
  # If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 100 for postgre.
  maxOpenConns: 100
  ## Additional deployment annotations
  podAnnotations: {}

redis:
  # if external Redis is used, set "type" to "external"
  # and fill the connection informations in "external" section
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: dev
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.2"
    port: "6379"
    # The "coreDatabaseIndex" must be "0" as the library Harbor
    # used doesn't support configuring it
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    clairAdapterIndex: "4"
    trivyAdapterIndex: "5"
    password: ""
  ## Additional deployment annotations
  podAnnotations: {}

@itsecforu
Author

Can somebody help? :-(

@thedewpoint

Did you already try upgrading Helm? This is my helm version output:
sudo helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

@itsecforu
Author

I updated to 2.8

@thedewpoint

Why not 3.0.2, just so we are on the same version?

@itsecforu
Author

I can't upgrade that way; I would need to install version 3 in parallel.

@itsecforu
Author

I get the same "Readiness probe failed" errors with Helm 3.0.2 :-(

@itsecforu
Author

From my database pod:

CREATE TABLE 
waiting for server to shut down....LOG:  received fast shutdown request 
LOG:  aborting any active transactions 
LOG:  autovacuum launcher shutting down 
LOG:  shutting down 
LOG:  database system is shut down 
 done 
server stopped 

PostgreSQL init process complete; ready for start up. 

LOG:  database system was shut down at 2020-06-08 12:04:01 UTC 
LOG:  MultiXact member wraparound protections are now enabled 
LOG:  database system is ready to accept connections 
LOG:  autovacuum launcher started

Maybe I need to start the DB inside the pod?

@itsecforu
Author

I was trying to install with:
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}

and:

helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}

I put together a minimal values file:

expose:
  type: nodePort
  tls:
    enabled: false
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving with HTTP
        port: 80
        # The node port Harbor listens on when serving with HTTP
        nodePort: 30088
      https:
        # The service port Harbor listens on when serving with HTTPS
        port: 443
        # The node port Harbor listens on when serving with HTTPS
        nodePort: 30003
      # Only needed when notary.enabled is set to true
      notary:
        # The service port Notary listens on
        port: 4443
        # The node port Notary listens on
        nodePort: 30004

externalURL: http://10.7.29.147:30088

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "local-storage"
    chartmuseum:
      storageClass: "local-storage"
    jobservice:
      storageClass: "local-storage"
    database:
      storageClass: "local-storage"
    redis:
      storageClass: "local-storage"


but the situation is similar.

"Readiness probe failed" on many pods.

Help, folks! :-(

@reasonerjt
Contributor

Same root cause as #585: the network is not working as expected in your k8s cluster and the name of the service can't be resolved.
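
If that is the case, the cluster DNS add-on is usually the first thing to check. A sketch, assuming a CoreDNS/kube-dns setup labeled k8s-app=kube-dns (labels vary by distribution):

# Is the DNS add-on healthy, and what do its logs say?
kubectl -n kube-system get pods -l k8s-app=kube-dns
kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50
# Can an arbitrary pod resolve a cluster-internal name?
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default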

@w564791

w564791 commented Oct 27, 2020

Helm (2.17) with the same problem: type nodePort, persistence enabled: false.

@patsevanton
Contributor

patsevanton commented Mar 10, 2021

Same issue

Add repo
$ helm repo add harbor https://helm.goharbor.io
Install
$ helm install --set expose.ingress.hosts.core=harbor.192.168.22.7.xip.io --set expose.ingress.hosts.notary=notary.192.168.22.7.xip.io --set persistence.enabled=true --set externalURL=http://harbor.192.168.22.7.xip.io --set harborAdminPassword=admin harbor harbor/harbor

Get Logs

$ kubectl logs  harbor-harbor-core-d666d9fdd-77krr

Logs

2021-03-10T04:35:30Z [ERROR] [/core/controllers/base.go:109]: Error occurred in UserLogin: Failed to authenticate user, due to error 'Invalid credentials'

2021-03-10T04:37:05Z [ERROR] [/core/controllers/base.go:109]: Error occurred in UserLogin: Failed to authenticate user, due to error 'Invalid credentials'

Tried to log in with username/password admin:admin.

@inductor

I'm facing this issue and --set harborAdminPassword=admin is not helping me :(

@klzsysy

klzsysy commented Jun 9, 2021

Same issue

  • helm repo add harbor https://helm.goharbor.io
  • helm install my-release harbor/harbor --set expose.type=nodePort --set expose.tls.enabled=false --set persistence.enabled=false
❯ kubectl --kubeconfig /var/tmp/default.kubeconfig get pod
NAME                                               READY   STATUS    RESTARTS   AGE
my-release-harbor-chartmuseum-59c694665-sgbnk      1/1     Running   0          7m33s
my-release-harbor-core-565b9db589-c52qw            1/1     Running   0          7m33s
my-release-harbor-database-0                       1/1     Running   0          7m33s
my-release-harbor-jobservice-588578f86f-2l6dx      1/1     Running   0          7m33s
my-release-harbor-nginx-6cbcdbd4db-dp9qt           1/1     Running   0          7m33s
my-release-harbor-notary-server-6bc6b9bfbf-2bww2   1/1     Running   0          7m33s
my-release-harbor-notary-signer-5bf68b9455-n5sfn   1/1     Running   0          7m33s
my-release-harbor-portal-7fb85d5598-fpvgk          1/1     Running   0          7m33s
my-release-harbor-redis-0                          1/1     Running   0          7m33s
my-release-harbor-registry-67d799947-h94b9         2/2     Running   0          7m33s
my-release-harbor-trivy-0                          1/1     Running   0          7m33s
❯ kubectl --kubeconfig /var/tmp/default.kubeconfig get svc
NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
harbor                            NodePort    10.97.236.126    <none>        80:30002/TCP,4443:30004/TCP   7m34s
kubernetes                        ClusterIP   10.96.0.1        <none>        443/TCP                       38m
my-release-harbor-chartmuseum     ClusterIP   10.97.32.163     <none>        80/TCP                        7m35s
my-release-harbor-core            ClusterIP   10.98.69.192     <none>        80/TCP                        7m35s
my-release-harbor-database        ClusterIP   10.110.48.139    <none>        5432/TCP                      7m35s
my-release-harbor-jobservice      ClusterIP   10.111.175.111   <none>        80/TCP                        7m35s
my-release-harbor-notary-server   ClusterIP   10.96.75.175     <none>        4443/TCP                      7m34s
my-release-harbor-notary-signer   ClusterIP   10.109.127.153   <none>        7899/TCP                      7m34s
my-release-harbor-portal          ClusterIP   10.104.87.126    <none>        80/TCP                        7m34s
my-release-harbor-redis           ClusterIP   10.96.229.212    <none>        6379/TCP                      7m34s
my-release-harbor-registry        ClusterIP   10.104.250.250   <none>        5000/TCP,8080/TCP             7m34s
my-release-harbor-trivy           ClusterIP   10.103.124.203   <none>        8080/TCP                      7m34s

Using admin/Harbor12345, the login still fails.


@inductor

I actually found that it happens when you disable TLS in the Harbor Helm chart.

@kkonovodoff

Hi there, faced the same credential issue and answered here: #485 (comment)

Hope it helps

@dhiru-byte

dhiru-byte commented Feb 1, 2022

I am facing the same issue when trying to log in to Harbor after deploying it with the Helm chart and exposing it as a NodePort.

Solution for me: enabling TLS in values.yaml.
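
For reference, the equivalent chart override would look something like this (a sketch; the release name is an example, the auto-generated certificate is self-signed so the browser will warn about it, and the host/port placeholders need to match your NodePort setup):

helm upgrade my-release harbor/harbor \
  --set expose.tls.enabled=true \
  --set externalURL=https://<node-ip>:<https-node-port>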


@inductor

inductor commented Feb 1, 2022

[screenshot]

@deepio

deepio commented Feb 27, 2022

The default password is Harbor12345. But if you took the image from Bitnami and ran it with their recommended docker-compose file, you will see that HARBOR_ADMIN_PASSWORD is set to bitnami. Look for this environment variable and try whatever it is set to when Harbor12345 does not work.
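
One way to check, assuming a docker-compose deployment (the service and container names below are assumptions; adjust them to your compose file):

# What does the rendered compose config set?
docker compose config | grep HARBOR_ADMIN_PASSWORD
# Or inspect the running core container directly
docker inspect harbor-core --format '{{.Config.Env}}' | tr ' ' '\n' | grep HARBOR_ADMIN_PASSWORD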

@pcgeek86

Same problem with Kubernetes 1.22.8 on DigitalOcean managed k8s. I installed the Helm chart for Harbor (not the Bitnami one), used kubectl port-forward to connect to the Harbor web front end, and it won't let me log in with admin / Harbor12345.

@kislow

kislow commented Sep 6, 2022

#485 (comment)

This also happens when I enable TLS.

@cbelloc

cbelloc commented Sep 7, 2022

In my case the problem came from externalURL, which was not filled in (service exposed as clusterIP).
No need to enable TLS.

  expose:
    type: clusterIP
    tls:
      enabled: no
      auto:
        commonName: "harbor"
  externalURL: http://harbor

Note that in my setup Harbor sits behind Traefik via an IngressRoute.
Hope this helps!
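
The same configuration expressed as install flags, in case that is easier to compare against (a sketch reusing the values above; the release and chart names are examples):

helm upgrade my-release harbor/harbor \
  --set expose.type=clusterIP \
  --set expose.tls.enabled=false \
  --set expose.tls.auto.commonName=harbor \
  --set externalURL=http://harbor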

@kshaharrajuan

kshaharrajuan commented Sep 22, 2023

I solved it by port-forwarding the nginx pod rather than the harbor-portal service or pod:

kubectl port-forward pod/harbor-nginx-795cc6d684-gldv7 -n harbor 8088:8080
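
If you'd rather not hard-code the pod hash, kubectl can also port-forward a deployment or service (a sketch; the names and namespace follow the example above):

kubectl port-forward -n harbor deploy/harbor-nginx 8088:8080
# or go through the exposed service
kubectl port-forward -n harbor svc/harbor 8088:80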

@mrclrchtr

This comment helped me fix it: #589 (comment)

@pantmal

pantmal commented Jul 4, 2024

Can someone please help? I'm also facing the same issue but none of the above solutions worked. I want to deploy harbor using NodePort as follows: helm install my-release harbor/harbor --set expose.type=nodePort --set expose.tls.enabled=false --set expose.nodePort.ports.http.nodePort=30087 --set externalURL=<VM-IP>:30087

However, I'm unable to log in at <VM-IP>:30087 using admin and Harbor12345.

EDIT: OK, I fixed it. Thought I should let everyone know you also need to add the protocol in the externalURL value:
helm install my-release harbor/harbor --set expose.type=nodePort --set expose.tls.enabled=false --set expose.nodePort.ports.http.nodePort=30087 --set externalURL=http://<VM-IP>:30087
