net/http: go server behind nginx require read entire body before writing response #22209
Looks like the server already tries to do this: see lines 1611 to 1623 of the server code at commit 6013052.
I too am experiencing this issue running a Go server behind nginx. Any time my server receives a large enough POST payload (8K?) that fails Authorization via its headers, I felt I should respond with an error right away. I've read various relevant issue threads, including https://golang.org/issue/3595, but haven't found a resolution other than reading and discarding the request body, and that seems like an exploitable situation to me.
Have you tried disabling proxy request buffering in your nginx config?
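For reference, a minimal sketch of what that nginx change looks like (the directive name is from the nginx documentation; the location path and upstream address are placeholders, not from this thread):

```nginx
location / {
    # Stream the request body to the upstream instead of buffering it,
    # so the Go server can reject the request before the full body arrives.
    proxy_request_buffering off;
    proxy_http_version 1.1;
    proxy_pass http://127.0.0.1:8080;
}
```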
@huguesb that is a really helpful tip. I'm going to give it a try and report back. Thanks!
I added the config below to my nginx config and it works; I am confused about why this issue is related to TCP keepalive.
If someone can distill this to a pure-Go test, we can take a closer look, but as of now there's no clear bug for us to fix. As @nvartolomei mentioned earlier, we already try to CloseWrite the connection to prevent sending an early RST. Perhaps we're not doing that in the right way, or not in all cases, but at the moment I can't tell if the problem is in Go or nginx.
I think this is due to this part of the server code:

```go
// Per RFC 2616, we should consume the request body before
// replying, if the handler hasn't already done so. But we
// don't want to do an unbounded amount of reading here for
// DoS reasons, so we only try up to a threshold.
// TODO(bradfitz): where does RFC 2616 say that? See Issue 15527
// about HTTP/1.x Handlers concurrently reading and writing, like
// HTTP/2 handlers can do. Maybe this code should be relaxed?
```

And it references #15527. The threshold it uses is 256k:

```go
// maxPostHandlerReadBytes is the max number of Request.Body bytes not
// consumed by a handler that the server will read from the client
// in order to keep a connection alive. If there are more bytes than
// this then the server to be paranoid instead sends a "Connection:
// close" response.
//
// This number is approximately what a typical machine's TCP buffer
// size is anyway. (if we have the bytes on the machine, we might as
// well read them)
const maxPostHandlerReadBytes = 256 << 10
```

But it's not clear that changing this behavior would make nginx happy anyway. I think we have enough information in this bug to repro (with nginx), though, and then make a decision on whether we need to make changes. I'll flag this as HelpWanted and NeedsInvestigation. Maybe somebody can play around.
Trying to reproduce this example in pure Go led to the correct behavior. (Attached: backend.go, proxy.go, and the output.)
I also have an error similar to this one.
I am experiencing a similar issue as well. The following code reproduces the issue in pure Go. @tombergan For HTTP/1.1 with Transfer-Encoding: chunked (Content-Length: -1), the request body is closed if we attempt to write part of the response immediately after receiving the first chunk, which is a big problem if you are doing some kind of streaming operation. @bradfitz Looking at the source code you mentioned above, a possible solution is to change the if condition to check for Content-Length > 0 instead of Content-Length != 0 (line 1282 at commit 5a4e098). This minor change fixes the issue for the code I posted, at least for my use case. Please take a look at it, since this is a very big issue for us, affecting many users.
I am having the same annoying problem. The workaround @herrberk posted seems to fix the issue locally. @bradfitz, @tombergan, is it possible to add this fix to the repo?
I had a similar issue while running a Go web server behind nginx. The error was resolved after restarting nginx.
@DineshBhosale were you using Transfer-Encoding: chunked? Asking because the example I gave above fails to receive all the chunks, and I am not using any reverse proxy at all.
This is the response header for some static files that are being served by
Request header
It works fine almost all the time, but once it fails it is never able to recover and serve any request, and I had to restart nginx to make it work again. Also I noticed that
@DineshBhosale You are sending a
Ok, thanks for the info. I don't know why I get error 503 for HTTPS traffic; it happened again on my end. Sometimes it works, sometimes it doesn't. It happens for GET requests as well.
I previously posted that I had a similar error.
It seems that it doesn't work. Did you use tcpdump to verify?
I'm having a similar problem. I have a Go server behind an nginx reverse proxy via nginx-ingress in Kubernetes. My POST request body is empty, and restarting nginx didn't solve it. If I query my server without going through nginx, it works.
Hi @riiiiizzzzzohmmmmm, I faced this problem recently. There is an nginx proxy in front of my server, and I found that the client side can get the server response, but nginx will record a "connection reset by peer" error.
I think the behavior described in #22209 (comment) is reasonable. The solution is to configure nginx so that it only passes big requests to the upstream on the path that accepts big requests (meaning the handler reads the whole request body), and rejects all other big requests. client_max_body_size can be applied per location:

```nginx
upstream nodes {
    server 127.0.0.1:8080 max_fails=0;
}
server {
    listen 80;
    server_name ~.*;
    access_log logs/access.log main;
    error_log logs/error.log;
-   client_max_body_size 1024m;
+   client_max_body_size 256K;
    client_body_buffer_size 512K;
    proxy_next_upstream error timeout non_idempotent;
    client_body_temp_path client_body_temp_path 3 2;
    location / {
        proxy_http_version 1.1;
        proxy_pass http://nodes;
        break;
    }
+   # the upstream is expected to accept big requests on this path
+   location /paths/accepts/big/request {
+       client_max_body_size 1024m;
+       proxy_http_version 1.1;
+       proxy_pass http://nodes;
+       break;
+   }
}
```
What version of Go are you using (`go version`)?
go version go1.9.1 windows/amd64
Does this issue reproduce with the latest release?
YES
What operating system and processor architecture are you using (`go env`)?
set GOARCH=amd64
set GOBIN=
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=E:\Documents\Develop\WorkSpace\Go
set GORACE=
set GOROOT=E:\Documents\Develop\RunTime\Go\x64
set GOTOOLDIR=E:\Documents\Develop\RunTime\Go\x64\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0 -fdebug-prefix-map=D:\CommonSoft\MSYS2\tmp\go-build716063943=/tmp/go-build -gno-record-gcc-switches
set CXX=g++
set CGO_ENABLED=1
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2
set PKG_CONFIG=pkg-config
What did you do?
nginx.conf
What did you expect to see?
Response:
What did you see instead?
Resp:
Nginx Error:
If I send a request without a body, or the server side reads the entire body, the response is as expected.
It seems Go should ensure that a FIN packet is sent before any RST packet.
https://trac.nginx.org/nginx/ticket/1037