405 error when logging into portal #485

Closed
darthguinea opened this issue Jan 9, 2020 · 38 comments · Fixed by #1132
darthguinea commented Jan 9, 2020

I've deployed to microk8s and cannot log in at all. There doesn't appear to be much useful in the log files. Any ideas as to what could be causing this issue?

[screenshot of the failed login]

NAME                                        READY   STATUS    RESTARTS   AGE
hub-harbor-chartmuseum-6997cc4488-2vzmx     1/1     Running   1          28m
hub-harbor-clair-856994f455-7zr4j           2/2     Running   6          55m
hub-harbor-core-8499cb754b-5h8x2            1/1     Running   1          28m
hub-harbor-database-0                       1/1     Running   1          55m
hub-harbor-jobservice-6dffccd557-4x7rl      1/1     Running   1          28m
hub-harbor-notary-server-84bf74c77d-jr2lg   1/1     Running   1          28m
hub-harbor-notary-signer-5b4fc4d5cd-t5rc7   1/1     Running   1          28m
hub-harbor-portal-5dc595c6bd-46vcr          1/1     Running   1          55m
hub-harbor-redis-0                          1/1     Running   1          55m
hub-harbor-registry-7d57dff68-t5zw9         2/2     Running   2          28m

portal logfile:

10.1.71.1 - - [09/Jan/2020:11:30:13 +0000] "GET /api/systeminfo HTTP/1.1" 200 856 "http://10.152.183.144/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
10.1.71.1 - - [09/Jan/2020:11:30:13 +0000] "GET /images/harbor-logo.svg HTTP/1.1" 304 0 "http://10.152.183.144/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
10.1.71.1 - - [09/Jan/2020:11:30:16 +0000] "GET / HTTP/1.1" 200 856 "-" "kube-probe/1.17"
10.1.71.1 - - [09/Jan/2020:11:30:18 +0000] "POST /c/login HTTP/1.1" 405 157 "http://10.152.183.144/harbor/sign-in?redirect_url=%2Fharbor%2Fprojects" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
127.0.0.1 - - [09/Jan/2020:11:40:16 +0000] "POST /c/login HTTP/1.1" 405 157 "-" "curl/7.59.0"

curling from the box:

nginx [ / ]$ curl 'http://127.0.0.1:8080/c/login' --data 'principal=admin&password=Harbor12345'  -svo /dev/null
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> POST /c/login HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.59.0
> Accept: */*
> Content-Length: 36
> Content-Type: application/x-www-form-urlencoded
> 
} [36 bytes data]
* upload completely sent off: 36 out of 36 bytes
< HTTP/1.1 405 Not Allowed
< Server: nginx/1.16.1
< Date: Thu, 09 Jan 2020 11:40:16 GMT
< Content-Type: text/html
< Content-Length: 157
< Connection: keep-alive
< 
{ [157 bytes data]
* Connection #0 to host 127.0.0.1 left intact

Please help!

@darthguinea

darthguinea commented Jan 10, 2020

I also get this same issue after deploying to k8s, and I've tried multiple versions. Any idea what I am missing or where I can look?

@darthguinea

OK, after further investigation I think this issue may actually be partly due to kubectl proxy not handling HTTP PATCH requests. If you expose a public IP address and use that, you can log in successfully. Could you add an option to sign in via kubectl proxy?

@reasonerjt
Contributor

I don't think the portal issues a PATCH when you log in.

Could you let me know what version you are using?

@laszlocph

Same issue :)

@ksummersill2

Same issue as well. This is something that has taken many hours of my life to figure out. Any help would be much appreciated.

@laszlocph

Same issue :)

What I can say is that it happens with kubectl proxy. Once I set up a proper ingress endpoint, it all works.
I believe this hurts the first-user experience a lot.

@ksummersill2

ksummersill2 commented Jul 22, 2020

I figured it out. So the Registry does not tell you that the nginx server is the place you need to connect to, and I saw no nginx pods in the harbor namespace. But all I had to do was port-forward that service and everything worked like a charm. 👍 You cannot connect straight to the portal; that will not work.
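For context on why hitting the portal directly fails: the portal container is a static nginx file server, so a POST to /c/login has no handler there and nginx answers 405 Not Allowed (matching the `Server: nginx` header in the curl output above). A hypothetical minimal config illustrating the behavior, not Harbor's actual portal config:

```nginx
server {
    listen 8080;
    root /usr/share/nginx/html;

    location / {
        # Static content: nginx only serves files for GET/HEAD here.
        # A POST to /c/login falls into this block and nginx replies
        # "405 Not Allowed" because there is nothing to handle the body.
        try_files $uri $uri/ /index.html;
    }

    # The top-level "harbor" nginx proxy (or the ingress) is what adds
    # a route like the following, sending logins to the core service:
    # location /c/ { proxy_pass http://harbor-core; }
}
```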

@jiangxiaoqiang

Same issue :)

@jiangxiaoqiang

> I figured it out. So the Registry does not tell you that the nginx server is the place that you need to connect to. I saw no nginx pods in the harbor namespace. But all I had to do is port-forward the service and everything worked like a charm. 👍 You cannot connect straight to the portal as this will not work.

I still don't understand how you solved the problem.

@fcrespofastly

Same issue here

@socotra69

socotra69 commented Aug 24, 2020

Hi,
It's a simple matter of service exposure.
When you use the ingress exposure type, the default configures multiple paths:

  http:
    paths:
    - backend:
        serviceName: harbor-harbor-portal
        servicePort: 80
      path: /
      pathType: ImplementationSpecific
    - backend:
        serviceName: harbor-harbor-core
        servicePort: 80
      path: /api/
      pathType: ImplementationSpecific
    - backend:
        serviceName: harbor-harbor-core
        servicePort: 80
      path: /service/
      pathType: ImplementationSpecific
    - backend:
        serviceName: harbor-harbor-core
        servicePort: 80
      path: /v2/
      pathType: ImplementationSpecific
    - backend:
        serviceName: harbor-harbor-core
        servicePort: 80
      path: /chartrepo/
      pathType: ImplementationSpecific
    - backend:
        serviceName: harbor-harbor-core
        servicePort: 80
      path: /c/
      pathType: ImplementationSpecific

The last one handles the login function; if no such path is configured, the request falls through to the portal service, which does not know how to handle it. (Though a 404 response would be better than the 405.)

To correct it, you should always put a reverse proxy in front of your UI: a simple ingress, a list of mappings for Ambassador, or another product such as ORY Oathkeeper.

@dkulchinsky

I had been on this for a few hours when I noticed this DEBUG message kept appearing in the core service logs while trying to log in:

2020-08-26T20:06:48Z [DEBUG] [/server/middleware/security/unauthorized.go:29][requestID="326b3ee5-1586-4522-8d0c-51f5d4841a14"]: an unauthorized security context generated for request POST /

The above is clearly wrong, since the path should be /c/login and not /. I started looking into my ingress configuration, which is based on Ambassador Ingress Controller Mappings, and found this:

> By default, the prefix is rewritten to /

https://www.getambassador.io/docs/latest/topics/using/rewrites/#rewrite

Indeed, I had two Mappings:

1. Portal

   host: harbor.<domain>
   prefix: /
   service: harbor-portal.platform-harbor

2. Core

   host: harbor.<domain>
   prefix: /(api|service|v2|chartrepo|c)/.*
   prefix_regex: true
   service: harbor-core.platform-harbor

The problem is with the Core mapping: since Ambassador automatically rewrites the matched prefix to /, the request hits the core service with the wrong path.

The answer was hidden in the docs:

> To prevent Ambassador from rewriting the matched prefix to / by default, it can be configured to not change the prefix as it forwards a request to the upstream service. To do that, specify an empty rewrite directive:

rewrite: ""

So changing the core mapping to the following solved the problem:

  host: harbor.<domain>
  prefix: /(api|service|v2|chartrepo|c)/.*
  prefix_regex: true
  rewrite: ""
  service: harbor-core.platform-harbor

Not sure what kind of setup other folks here have, but it sure looks like a similar issue to me.
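For reference, the two Mappings above written out as full Ambassador CRDs. This is a sketch: the apiVersion, resource names, and namespace placement are assumptions based on the comment, not verified against this deployment.

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: harbor-portal
spec:
  host: harbor.<domain>
  prefix: /
  service: harbor-portal.platform-harbor
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: harbor-core
spec:
  host: harbor.<domain>
  prefix: /(api|service|v2|chartrepo|c)/.*
  prefix_regex: true
  rewrite: ""          # keep the matched prefix instead of rewriting it to /
  service: harbor-core.platform-harbor
```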

@tangx

tangx commented Sep 3, 2020

405 Not Allowed

The Ingress rule match order causes this problem: if the portal rule comes first, every request matches it and the core rules are never reached, hence the 405 error. Moving the portal rule to the end solves it:

  http:
    paths:
    - backend:
        serviceName: harbor-harbor-core
        servicePort: 80
      path: /api/
      pathType: ImplementationSpecific

     # ... some other rules

    - backend:
        serviceName: harbor-harbor-portal
        servicePort: 80
      path: /
      pathType: ImplementationSpecific

I solved this with istio ingress: https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPMatchRequest
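The order sensitivity described above can be sketched in a few lines of Python (paths and service names are illustrative only): per the Kubernetes Ingress docs the longest matching path should win regardless of order, whereas Istio uses the first matching rule, so rule order matters there.

```python
def first_match(path, rules):
    """Istio-style routing: the first rule whose prefix matches wins."""
    for prefix, service in rules:
        if path.startswith(prefix):
            return service
    return None

def longest_match(path, rules):
    """Ingress-spec routing: the longest matching prefix wins, order-independent."""
    hits = [(prefix, service) for prefix, service in rules if path.startswith(prefix)]
    return max(hits, key=lambda h: len(h[0]))[1] if hits else None

rules_portal_first = [("/", "portal"), ("/c/", "core"), ("/api/", "core")]

# First-match routing with "/" first sends the login POST to the portal,
# whose static nginx answers 405:
assert first_match("/c/login", rules_portal_first) == "portal"

# Longest-prefix routing picks core regardless of rule order:
assert longest_match("/c/login", rules_portal_first) == "core"

# Moving "/" to the end fixes first-match routing as well:
rules_portal_last = [("/c/", "core"), ("/api/", "core"), ("/", "portal")]
assert first_match("/c/login", rules_portal_last) == "core"
```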

@texanraj

texanraj commented Oct 3, 2020

Still the same issue. I tried on EKS and minikube. Worst first-user experience! Cannot log in at all; still get the 405 error.

On EKS: all pods are running but I cannot log in with the default admin/Harbor12345 credentials.

NAME READY STATUS RESTARTS AGE
pod/harbor-harbor-chartmuseum-8b56b68f7-8tz2n 1/1 Running 0 29h
pod/harbor-harbor-clair-5899c555c5-w5hwd 2/2 Running 3 29h
pod/harbor-harbor-core-5c8f6779cd-lm9k2 1/1 Running 1 29h
pod/harbor-harbor-database-0 1/1 Running 0 29h
pod/harbor-harbor-jobservice-ccbf8689b-c6msv 1/1 Running 0 29h
pod/harbor-harbor-notary-server-76c7b757bd-28ptq 1/1 Running 1 29h
pod/harbor-harbor-notary-signer-66bfd76949-2tlnz 1/1 Running 1 29h
pod/harbor-harbor-portal-559d4dfc84-65zz4 1/1 Running 0 29h
pod/harbor-harbor-redis-0 1/1 Running 0 29h
pod/harbor-harbor-registry-68d864c757-x95nm 2/2 Running 0 29h
pod/harbor-harbor-trivy-0 1/1 Running 0 29h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/harbor-harbor-chartmuseum ClusterIP 172.20.161.103 80/TCP 29h
service/harbor-harbor-clair ClusterIP 172.20.171.253 8080/TCP 29h
service/harbor-harbor-core ClusterIP 172.20.127.46 80/TCP 29h
service/harbor-harbor-database ClusterIP 172.20.105.28 5432/TCP 29h
service/harbor-harbor-jobservice ClusterIP 172.20.248.91 80/TCP 29h
service/harbor-harbor-notary-server ClusterIP 172.20.22.84 4443/TCP 29h
service/harbor-harbor-notary-signer ClusterIP 172.20.119.215 7899/TCP 29h
service/harbor-harbor-portal NodePort 172.20.237.159 80:32678/TCP 29h
service/harbor-harbor-redis ClusterIP 172.20.100.180 6379/TCP 29h
service/harbor-harbor-registry ClusterIP 172.20.123.170 5000/TCP,8080/TCP 29h
service/harbor-harbor-trivy ClusterIP 172.20.212.137 8080/TCP 29h
service/kubernetes ClusterIP 172.20.0.1 443/TCP 9d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/harbor-harbor-chartmuseum 1/1 1 1 29h
deployment.apps/harbor-harbor-clair 1/1 1 1 29h
deployment.apps/harbor-harbor-core 1/1 1 1 29h
deployment.apps/harbor-harbor-jobservice 1/1 1 1 29h
deployment.apps/harbor-harbor-notary-server 1/1 1 1 29h
deployment.apps/harbor-harbor-notary-signer 1/1 1 1 29h
deployment.apps/harbor-harbor-portal 1/1 1 1 29h
deployment.apps/harbor-harbor-registry 1/1 1 1 29h

NAME DESIRED CURRENT READY AGE
replicaset.apps/harbor-harbor-chartmuseum-8b56b68f7 1 1 1 29h
replicaset.apps/harbor-harbor-clair-5899c555c5 1 1 1 29h
replicaset.apps/harbor-harbor-core-5c8f6779cd 1 1 1 29h
replicaset.apps/harbor-harbor-jobservice-ccbf8689b 1 1 1 29h
replicaset.apps/harbor-harbor-notary-server-76c7b757bd 1 1 1 29h
replicaset.apps/harbor-harbor-notary-signer-66bfd76949 1 1 1 29h
replicaset.apps/harbor-harbor-portal-559d4dfc84 1 1 1 29h
replicaset.apps/harbor-harbor-registry-68d864c757 1 1 1 29h

NAME READY AGE
statefulset.apps/harbor-harbor-database 1/1 29h
statefulset.apps/harbor-harbor-redis 1/1 29h
statefulset.apps/harbor-harbor-trivy 1/1 29h

@ksummersill2

Note: you must change expose.type to "clusterIP". This creates a service called harbor, which works like a reverse proxy in front of the portal. Point your endpoint, either an ingress controller or an API gateway, at this service.

@texanraj

texanraj commented Oct 4, 2020

The default is "ClusterIP" and that does not work either. I tried NodePort and LoadBalancer, which work, but the default username/password doesn't work at the UI. Can you elaborate on the steps you took to get this working? Did you use the nginx ingress controller?

@ksummersill2

ksummersill2 commented Oct 4, 2020

The default is not ClusterIP.

expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP", "nodePort" or "loadBalancer" and fill the information
  # in the corresponding section
  type: clusterIP
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291
    # for the detail.
    enabled: true
    # Fill the name of secret if you want to use your own TLS certificate.
    # The secret contains keys named:
    # "tls.crt" - the certificate (required)
    # "tls.key" - the private key (required)
    # "ca.crt" - the certificate of CA (optional), this enables the download
    # link on portal to download the certificate of CA
    # These files will be generated automatically if the "secretName" is not set
    secretName: ""
    # By default, the Notary service will use the same cert and key as
    # described above. Fill the name of secret if you want to use a
    # separated one. Only needed when the type is "ingress".
    notarySecretName: ""
    # The common name used to generate the certificate, it's necessary
    # when the type isn't "ingress" and "secretName" is null

I installed using Helm and changed the expose type to clusterIP in the values. This sets up the service.

@ksummersill2

harbor                      ClusterIP   100.65.196.254   <none>        80/TCP,443/TCP,4443/TCP   43h
hulk-harbor-chartmuseum     ClusterIP   100.64.83.142    <none>        80/TCP                    43h
hulk-harbor-clair           ClusterIP   100.70.122.61    <none>        8080/TCP                  43h
hulk-harbor-core            ClusterIP   100.69.242.183   <none>        80/TCP                    43h
hulk-harbor-database        ClusterIP   100.64.245.205   <none>        5432/TCP                  43h
hulk-harbor-jobservice      ClusterIP   100.67.255.21    <none>        80/TCP                    43h
hulk-harbor-notary-server   ClusterIP   100.67.166.244   <none>        4443/TCP                  43h
hulk-harbor-notary-signer   ClusterIP   100.66.152.46    <none>        7899/TCP                  43h
hulk-harbor-portal          ClusterIP   100.68.215.138   <none>        80/TCP                    43h
hulk-harbor-redis           ClusterIP   100.64.69.195    <none>        6379/TCP                  43h
hulk-harbor-registry        ClusterIP   100.66.60.158    <none>        5000/TCP,8080/TCP         43h
hulk-harbor-trivy           ClusterIP   100.69.198.91    <none>        8080/TCP                  43h

Run kubectl get services and you will see the nginx service ("harbor") used as a reverse proxy. This is what you tie your endpoint to.

@texanraj

texanraj commented Oct 4, 2020

Where did you change the network to use ClusterIP? I do not see a "harbor" service running on my cluster. I did install with Helm: helm install harbor harbor/harbor

> NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
> harbor-harbor-chartmuseum     ClusterIP   172.20.236.62    <none>        80/TCP              14m
> harbor-harbor-clair           ClusterIP   172.20.123.116   <none>        8080/TCP            14m
> harbor-harbor-core            ClusterIP   172.20.10.253    <none>        80/TCP              14m
> harbor-harbor-database        ClusterIP   172.20.77.63     <none>        5432/TCP            14m
> harbor-harbor-jobservice      ClusterIP   172.20.157.75    <none>        80/TCP              14m
> harbor-harbor-notary-server   ClusterIP   172.20.206.174   <none>        4443/TCP            14m
> harbor-harbor-notary-signer   ClusterIP   172.20.234.81    <none>        7899/TCP            14m
> harbor-harbor-portal          ClusterIP   172.20.60.45     <none>        80/TCP              14m
> harbor-harbor-redis           ClusterIP   172.20.21.63     <none>        6379/TCP            14m
> harbor-harbor-registry        ClusterIP   172.20.227.103   <none>        5000/TCP,8080/TCP   14m
> harbor-harbor-trivy           ClusterIP   172.20.224.181   <none>        8080/TCP            14m
> kubernetes                    ClusterIP   172.20.0.1       <none>        443/TCP             10d
> 

@ksummersill2

You have to change it via the values.yaml file when you run Helm.

@texanraj

texanraj commented Oct 4, 2020

I did the install with values.yaml and I now see the "harbor" service. I did a port-forward to this service and I get the UI, but the default username/password still does not work.

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
harbor                        ClusterIP   172.20.54.198    <none>        80/TCP,4443/TCP     12m
harbor-harbor-chartmuseum     ClusterIP   172.20.245.43    <none>        80/TCP              12m
harbor-harbor-clair           ClusterIP   172.20.209.64    <none>        8080/TCP            12m
harbor-harbor-core            ClusterIP   172.20.43.11     <none>        80/TCP              12m
harbor-harbor-database        ClusterIP   172.20.242.86    <none>        5432/TCP            12m
harbor-harbor-jobservice      ClusterIP   172.20.65.152    <none>        80/TCP              12m
harbor-harbor-notary-server   ClusterIP   172.20.113.197   <none>        4443/TCP            12m
harbor-harbor-notary-signer   ClusterIP   172.20.233.60    <none>        7899/TCP            12m
harbor-harbor-portal          ClusterIP   172.20.222.28    <none>        80/TCP              12m
harbor-harbor-redis           ClusterIP   172.20.144.31    <none>        6379/TCP            12m
harbor-harbor-registry        ClusterIP   172.20.14.118    <none>        5000/TCP,8080/TCP   12m
harbor-harbor-trivy           ClusterIP   172.20.194.102   <none>        8080/TCP            12m
kubernetes                    ClusterIP   172.20.0.1       <none>        443/TCP             10d

@ksummersill2

You set the default password in the values. It should be admin / Harbor12345.

@texanraj

texanraj commented Oct 4, 2020

Correct. I see that in values.yaml but it does not work with admin/Harbor12345. I'm doing the port-forward: kubectl port-forward service/harbor 8080:80. Getting a 403 error!

Failed to load resource: the server responded with a status of 403 (Forbidden)

@ksummersill2

It will not work over a port-forward; you need to come in via the domain name specified in the values when you ran the Helm install.

@quguanwen

quguanwen commented Nov 21, 2020

I also got a 403 at first. It worked after I used the parameters below; 30005 is the NodePort associated with the nginx pod.
expose.type=clusterIP
expose.tls.enabled=false
externalURL: http://10.0.1.1:30005

@SataQiu

SataQiu commented Apr 21, 2021

I solved the login problem by the following steps:

# helm repo add harbor https://helm.goharbor.io
# helm fetch harbor/harbor --untar
# cd harbor
# sed -i 's/  type: ingress/  type: clusterIP/g' values.yaml
# sed -i 's/      commonName: ""/      commonName: "harbor"/g' values.yaml
# kubectl create ns harbor
# helm install harbor . -n harbor

Then get the harbor service ClusterIP:

# kubectl get svc -n harbor harbor
NAME     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                   AGE
harbor   ClusterIP   10.96.42.70   <none>        80/TCP,443/TCP,4443/TCP   15m

Use a browser to access the ClusterIP (10.96.42.70); you can log in with admin/Harbor12345.
Note that you must access the ClusterIP from within the cluster; otherwise you may need to set up a proxy.

@evindunn

This happens to me on chart versions 1.6.3 and 1.7.0 on EKS.

Here are my values:

externalUrl: https://harbor.my.url.com
expose:
  type: ingress
  tls:
    enabled: true
    certSource: none
  ingress:
    hosts:
      core: harbor.my.url.com
      notary: notary.my.url.com
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/security-groups: sg1, sg2
      alb.ingress.kubernetes.io/ssl-redirect: '443'
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'

persistence:
  resourcePolicy: ""
  persistentVolumeClaim:
    registry:
      storageClass: st1
      size: 128Gi

updateStrategy:
  type: Recreate

secretKey: super-secret-key

@ksummersill2

@evindunn so what is your issue? The nginx service is what your ingress should be connecting to, as it works as a reverse proxy.

@evindunn

evindunn commented Jul 23, 2021

I get a 405 when attempting to log in with the default credentials (when the login page POSTs to /c/login). Are you saying that only clusterIP, the reverse-proxy service, will work, and I shouldn't be using ingress?

@iron-rain

Hey @evindunn, the chart's ingress isn't compatible with AWS ALBs, as I've just worked out.

To make it work you need to:

  • Add a wildcard '*' to the end of each path, i.e. /*, /v2/*, /api/*
  • Move the /* rule to the bottom of the paths list in the Ingress itself.

Sorry, I haven't got an example; my work is on another system I can't copy and paste from.

@kkonovodoff

Had the exact same issue on a fresh cluster (v1.22.1) for testing purposes, and this answer helped me, thank you!

Just note that in my case I needed another step: using Chrome/Firefox instead of Safari.
It seems quite ridiculous, but somehow I can only log in successfully from Chrome or Firefox.

Hope this helps someone

@tgeci

tgeci commented Oct 13, 2021

Had the same issue with a fresh cluster on Kubernetes 1.21.5 and Harbor helm chart version 1.7.4. It is a config issue in the ingress resource.
The first entry within paths has path: / with pathType: Prefix. This means every request is routed to harbor-portal, which is wrong: requests with paths like /api, /service or /v2 should be routed to harbor-core.

IMHO there are two possible workarounds: move the config for path: / to the end, or set the pathType for path: / to ImplementationSpecific.
The following solution works for me:

...

spec:
  rules:
  - host: [HARBOR HOSTNAME]
    http:
      paths:
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /api/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /service/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /v2
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /chartrepo/
        pathType: Prefix
      - backend:
          service:
            name: harbor-core
            port:
              number: 80
        path: /c/
        pathType: Prefix
      - backend:
          service:
            name: harbor-portal
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
  
  ...

In my setup, unlike most others, an istio-ingressgateway (Envoy) is used instead of the nginx ingress.

Perhaps this helps someone :)
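For reference, the second workaround mentioned above (leaving path: / first but demoting its pathType) would look roughly like the fragment below. How ImplementationSpecific paths are prioritized depends on the ingress controller, so treat this as a sketch rather than a verified fix:

```yaml
      - backend:
          service:
            name: harbor-portal
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific  # instead of Prefix, so /api/, /c/ etc. can win
```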

@queil

queil commented Oct 19, 2021

If you are just trying it out and want to run it locally with a port-forward, the following values.yaml worked:

externalURL: https://127.0.0.1:8443
expose:
  type: clusterIP
  tls:
    auto:
      commonName: harbor

Then run kubectl -n default port-forward svc/harbor 8443:443 (assuming it is in the default namespace) and access it via https://127.0.0.1:8443/ (ignore the certificate error).

sathieu added a commit to sathieu/harbor-helm that referenced this issue Jan 12, 2022
According to the ingress specification [1], the longest matching path should
be used. But for Istio, "The first rule matching an incoming request is
used" [2,3].

[1]: https://kubernetes.io/docs/concepts/services-networking/ingress/#multiple-matches
[2]: https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService
[3]: istio/istio#35033

Fixes: goharbor#485
Signed-off-by: Mathieu Parent <[email protected]>
@sathieu
Contributor

sathieu commented Jan 12, 2022

Proposed fix in #1132 (at least for Istio).

@Victorion

Victorion commented Feb 6, 2022

I bumped into the same issue, but it was mostly due to not reading the default values properly.
I just wanted to use Harbor locally, without ingress and without TLS.
What did I do wrong? I port-forwarded the wrong service and provided the wrong values (expose.type is set to ingress by default, so the nginx container won't be created).

# wrong, as it has to be nginx/proxy container instead ("harbor" service):
kubectl port-forward -n harbor svc/harbor-portal 8080:80

All the "fixes" already mentioned above, summarized in values.yaml:

harborAdminPassword: "initialPasswordHere"
expose:
  type: clusterIP
  tls:
    enabled: false
helm upgrade --install --create-namespace -n harbor harbor harbor/harbor -f values.yaml
kubectl port-forward -n harbor svc/harbor 8080:80

This is only for local, quick, non-secure deployments; better to use ingress and TLS for non-local deployments.

@vin4git

vin4git commented Feb 14, 2022


@Victorion I had a quick try with this values.yaml file and helm install fails with the error below:

Error: INSTALLATION FAILED: template: harbor/templates/trivy/trivy-tls.yaml:1:18: executing "harbor/templates/trivy/trivy-tls.yaml" at <.Values.trivy.enabled>: nil pointer evaluating interface {}.enabled

Is anything else missing in the values.yaml file?

@tdeheurles

tdeheurles commented Jul 3, 2023

@darthguinea thank you, your answer about not using port-forward with the 405 error was the trick.

So, a quick summary for anyone using the helm chart on localhost:
⚠️ You need to access it without port-forward ⚠️. In my case I fixed it by using service type loadBalancer; I didn't have to change externalURL.

Here is the helm configuration I used:

expose:
  type: loadBalancer
  ports:
    httpPort: 80
  tls:
    enabled: false

Then go to http://localhost:80

A quick comment for the Harbor team: you could add a note about this issue in the documentation; spending a few hours just to get into the UI can be a bit frustrating 😄

@shiveshabhishek

My use case was nodePort. I was getting error 403 / method not allowed. I updated the externalURL to use http instead of https and it worked; the error was gone.

Changed:
externalURL: https://103.240.11.10:31002/
to:
externalURL: http://103.240.11.10:31002/
