connect() failed (111: Connection refused) while connecting to upstream #41
I've just gone back to the previous nginx controller I was using, based on git sha 3138c71, and this works (although now I can't override the server_names_hash_bucket_size, as it's an old version, which causes a different issue). However, I can see that the generated nginx config differs, so I'm thinking this is a bug in the nginx controller?
I've just tested the last two versioned nginx-ingress images and checked the upstream sections. They do in fact differ, and 0.3 doesn't have the correct Kubernetes service IP, so I suspect this is a recent regression. Both were tested using the exact same ingress rule:
nginxdemos/nginx-ingress:0.3
nginxdemos/nginx-ingress:0.2
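The config dumps themselves didn't survive here; a hedged sketch of the kind of difference being described, with the upstream name and pod IP purely illustrative (the 127.0.0.1:8181 address is the one from the error log later in this thread):

```
# 0.2: upstream built from the service's endpoints (pod IP + targetPort)
upstream default-gogs {
    server 10.244.1.5:3000;
}

# 0.3: no matching port is found, so nginx is left with an unreachable upstream
upstream default-gogs {
    server 127.0.0.1:8181;
}
```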
I tracked it down to this code: https://github.com/nginxinc/kubernetes-ingress/blob/master/nginx-controller/nginx/configurator.go#L210. The port.Port=3000, which is reflected in the gogs service.
It looks like the nginx-controller code gets port.Port from the Kubernetes endpoints, and when I check those from the CLI they do in fact use the service targetPort and not the port.
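For reference, a minimal sketch of a service like gogs (the port values are taken from this thread; the selector is assumed). The endpoints generated for it carry the targetPort (3000), not the port (80):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gogs
spec:
  selector:
    app: gogs          # label assumed for illustration
  ports:
  - port: 80           # the service port, which clients (and the Ingress) reference
    targetPort: 3000   # the container port, which the endpoints expose
```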
So it's definitely an issue where the nginx-controller is using the Kubernetes endpoints port rather than the service port. So either:
- the endpoints on my cluster are reporting an unexpected port, or
- the controller is wrong to match the ingress port against the endpoints port.
Could someone take a look to confirm whether these are indeed the two options please? If it's the latter I'd expect everyone to be affected by it. Thanks in advance.
Thanks for the thorough investigation!
An endpoint must contain the IP address of a pod + the targetPort, so the output from your CLI check above is expected.
This is a bug in the controller; it should've used the target port instead of the service port. As a temporary workaround I suggest changing the port in your Ingress resource from the service port 80 to the targetPort 3000. Does it help? In the meantime I'll provide a bug fix.
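A sketch of that workaround against the gogs setup in this thread (API version and metadata assumed; the host is taken from the error log later in the thread):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gogs
spec:
  rules:
  - host: gogs.default.beast.fabric8.io
    http:
      paths:
      - path: /
        backend:
          serviceName: gogs
          servicePort: 3000   # workaround: the targetPort, not the service port 80
```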
Yes, changing the Ingress servicePort to match the targetPort works; this is a decent workaround, thanks.
I think I'm seeing this same issue, but noticed it via another symptom: because requests aren't forwarded to the Service IP:port, they are not load-balanced across pods. All requests into the load balancer are forwarded to a single pod. (Also, how do I configure the controller to output the more detailed logs shown in the comments above?)
The port mapping issue was fixed in #54.
Thanks @pleshakov. Can I just check something please? Has that fix been released to Docker Hub? It's just that our GKE instances have started to fail for new deployments, and I'm wondering if this fix has been pushed and has replaced the 0.3 tag on Docker Hub.
Yes, it did get pushed to Docker Hub. Is the failure related to the port mapping? I will revert the 0.3 tag to the previous version and push the new one with the 0.3.1 tag, since the update may affect some users because of the port mapping changes. Thanks for pointing that out!
The issue with the incorrect upstream is still present.
I'm using image v0.3.1. |
@comeanother could you provide the ingress resource for fabric8/fabric8-docker-registry or fabric8/jenkinshift? The correct behaviour is the following: the port specified in an Ingress resource (by name or by port number) must match the port declared in the corresponding service file.
Sorry, these are the correct ones:
Am I right that the port declared in the ingress must match the port (not the target port) declared in the service?
You're right. Version 0.3 and before (incorrect behavior): the port declared in the ingress must match the target port declared in the service. Version 0.3.1 (correct behavior): the port declared in the ingress must match the port declared in the service.
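Concretely, against a service with port 80 and targetPort 3000, the ingress backend would look like this under each version (a sketch; the service name is from this thread):

```yaml
# 0.3 and before (incorrect behavior):
backend:
  serviceName: gogs
  servicePort: 3000   # had to match the service's targetPort

# 0.3.1 (correct behavior):
backend:
  serviceName: gogs
  servicePort: 80     # must match the service's port (by name or number)
```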
@pleshakov @comeanother I'm one of the fabric8 committers and created the exposecontroller that generates the ingress rules in the sample above. We originally had to use the target port until this current issue was fixed, as the code explains here: https://github.com/fabric8io/exposecontroller/blob/master/exposestrategy/ingress.go#L86. But my guess is that this fix was originally pushed to the 0.3 tag, which has since been reverted, and moved to 0.3.1 (I think that's right). On the fabric8 project we will need to update the exposecontroller to work with this fix and version 0.3.1 of the nginx ingress controller.
@rawlingsj I installed two fabric8 environments. |
@rawlingsj correct, the fix was applied to 0.3 initially, but then the 0.3 image was reverted to the previous version. The fix was applied to the 0.3.1 image.
@comeanother until we update exposecontroller and perform a fabric8 release which includes the upgrade to nginxdemos/nginx-ingress:0.3.1, we will need to stick to using targetPorts and 0.3. As @pleshakov just confirmed, the issue here is that the 0.3 tag has been re-pushed, so a node can still be running the stale, mistakenly-fixed image. I can try to update fabric8 to use the 0.3.1 tag, but in the interest of time the easiest approach I think would be to force a fresh pull of the 0.3 image on the node that's running the pod, or, if there's no ssh access, change the pull policy of the ingress controller deployment in the fabric8-system namespace to Always, as sketched below.
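A sketch of the pull-policy option, with the container name assumed rather than taken from fabric8:

```yaml
# fragment of the ingress controller deployment in the fabric8-system namespace
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress                 # container name assumed
        image: nginxdemos/nginx-ingress:0.3
        imagePullPolicy: Always             # forces a fresh pull of the re-pushed 0.3 tag
```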
@rawlingsj I'm trying now |
@rawlingsj Still no luck; I still see 'server 127.0.0.1:8181' in the upstreams.
@pleshakov thanks for the quick and great responses BTW! I had a quick look at updating the ingress rule in our exposecontroller, but I wasn't 100% sure what I needed to change it to with this port mapping fix, given where I was using the target port: https://github.com/fabric8io/exposecontroller/blob/master/exposestrategy/ingress.go#L85-L86
@rawlingsj Yes, with the fix the ingress rule should reference the service port (by name or number) rather than the target port.
@comeanother fabric8 can update that mapping and reference the new version nginxdemos/nginx-ingress:0.3.1, but that's going to require a new fabric8 release, which can't happen until tomorrow, providing everything works (it's gone 11 pm here in the UK). Regardless of the changes needed, you shouldn't be affected by this if you've been able to pull the 0.3 tag again, now that the change has been reverted.
@comeanother please redeploy the Ingress controller. The 0.3 tag was fixed by mistake and has since been reverted.
Thanks @pleshakov. Now 0.3 is working.
0.3.1 tested successfully - thanks @pleshakov! |
[nginx server blocks and error log omitted]
Have you been able to make it work? I am still unable to (#8207).
@vikomte the issue you link to is on the K8s community nginx ingress controller project, which is distinct in function and notation from this one.
After upgrading and building from master (using config to increase server_names_hash_bucket_size + client_max_body_size), I'm not able to access my services via ingress routes.
The error I'm getting is:
[error] 26#26: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.36.5.79, server: gogs.default.beast.fabric8.io, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8181/", host: "gogs.default.beast.fabric8.io"
My Ingress looks like:
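(The resource itself didn't survive; a minimal sketch consistent with the rest of the thread, with the host and service name from the error log and everything else assumed:)

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gogs
spec:
  rules:
  - host: gogs.default.beast.fabric8.io
    http:
      paths:
      - path: /
        backend:
          serviceName: gogs
          servicePort: 80   # the service port; the service's targetPort is 3000
```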
I can successfully access the gogs service from within the nginx ingress controller pod if I install curl and use the kubernetes service (curl http://gogs), so the cluster DNS all seems to work fine. By no means am I ruling out something I've done, but I've checked a number of things and am now out of ideas, so I'm wondering if the upstream section in the logs is correct. The full log, with the nginx config used and the error at the bottom: