Invalid handling of chunked requests: Transfer-encoding "chunked" changed automatically to content-length #4838
@OscarNeira if you can provide a self-contained example that reproduces the issue, I can take a look.
Yes, I will.
Hi @aledbf, I managed to reproduce the issue in a way that you can also try, and added some steps:
tcpdump-ingress-chunk-1.pcap.zip
Here is the ingress config:
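A minimal sketch of an Ingress exercising the relevant annotation (the name, host, and backend here are hypothetical placeholders, not the actual config):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mockserver                      # hypothetical
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Disable request-body buffering so chunked uploads are streamed upstream.
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
    - host: mockserver.local            # hypothetical
      http:
        paths:
          - path: /
            backend:
              serviceName: mockserver   # hypothetical backend service
              servicePort: 1080
```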
@OscarNeira please update to 0.28.0 and test again.
Today I will try to reproduce this issue. Thank you for the procedure to reproduce it.
Same results with 0.28.0.
@OscarNeira can you try to provide a simpler way to reproduce this? Something like https://gist.github.com/aledbf/266940de7569a1163b9e1c085aa4e771
Yep, I will try.
Hi @aledbf, I added more details to the repo I just created. Here is the link to the script; the YAML files are in the same folder. Let me know if you need anything else. https://github.com/OscarNeira/simple-java-client/blob/master/k8s/start.
Could you try to run this:
Could you check that:
OK, could you try to open a new command line/PowerShell/terminal and try again?
OK, for some reason it cannot find the host. I will delete my minikube and run it clean.
@OscarNeira not sure what you mean. This is the log of the mockserver pod:
Could you try to run the image again?
Deleted all the objects and ran the script again:
Could you try to delete and pull the latest image?
Same result. I started a fresh minikube cluster
Updated the project a bit to get more info. I suspect we are using an old image, so I created a new tag for this test. I also updated the script a bit. https://github.com/OscarNeira/simple-java-client/blob/master/k8s/start.sh
Same.
Also, reading the ingress controller pod logs, I see the 302 is returned by the app.
OK, it seems that the Java client cannot create expectation rules. I will test using the minikube IP and let you know.
It works with the minikube IP as well. Could you share your minikube IP? I will create a new image tag with that change. After I push the change, could you update the script?
Here is plan B: I created the expectations using curl. You can update the project to the latest version and run this one: https://github.com/OscarNeira/simple-java-client/blob/master/k8s/start-curl.sh
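For reference, creating an expectation with curl looks roughly like this (the host and path are placeholders; this assumes MockServer's REST API for registering expectations):

```sh
# Tell MockServer to answer any POST to /echo with 200 and a small body.
curl -sS -X PUT "http://mockserver.local/mockserver/expectation" \
  -d '{
        "httpRequest":  { "method": "POST", "path": "/echo" },
        "httpResponse": { "statusCode": 200, "body": "ok" }
      }'
```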
Maybe you should read it from an environment variable? Hardcoding an IP is not very flexible.
Same result.
Did you get errors with these calls?
@aledbf Yep, it already had this metadata:
@OscarNeira OK, then remove all the annotations, run the tests and post the log, please.
OK.
@aledbf Same results:
@aledbf Any luck?
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
This is still a problem.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Never fixed, still an issue.
Hi @OscarNeira, did you manage to work around the issue? I think I'm having the same issue, where the Transfer-Encoding is replaced by Content-Length, causing incorrect behaviour in the client.
@herman-d No, we still have the same issue; we disabled chunked transfer for now.
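For anyone needing the same workaround in a Java client: switching java.net.HttpURLConnection from chunked to fixed-length streaming makes it send a Content-Length header up front instead of Transfer-Encoding: chunked. A minimal sketch (the URL is hypothetical):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class FixedLengthPost {
    public static void main(String[] args) throws Exception {
        byte[] payload = "example body".getBytes("UTF-8");

        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://mockserver.local/echo").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Fixed-length streaming: the request carries Content-Length
        // instead of Transfer-Encoding: chunked.
        conn.setFixedLengthStreamingMode(payload.length);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload);
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```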
Hi everyone, is there any solution now?
This issue was reported in Dec 2019, so is this the right place to track the problem?
Do you have any solution? I don't want nginx to change the chunked encoding when reverse proxying.
Has the issue been fixed in the latest version of the nginx ingress controller?
Chunking is on by default in the version of Nginx used in the controller, so if there is a problem, it's in the app code and not in the controller.
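One way to narrow that down is to send the same chunked upload directly to the service and then through the ingress, and compare the headers the backend logs. A rough sketch (hosts, ports, and path are placeholders; passing the Transfer-Encoding header makes curl chunk the upload):

```sh
# Payload in one of the failing size ranges from this report.
head -c 500k /dev/urandom > payload.bin

# 1) Directly to the service (bypassing the ingress):
curl -sS -o /dev/null -w '%{http_code}\n' \
  -H 'Transfer-Encoding: chunked' \
  --data-binary @payload.bin \
  'http://<node-ip>:<service-port>/echo'

# 2) Through the ingress; if the backend now logs a Content-Length header
#    instead of Transfer-Encoding: chunked, the controller re-buffered it:
curl -sS -o /dev/null -w '%{http_code}\n' \
  -H 'Transfer-Encoding: chunked' \
  --data-binary @payload.bin \
  'http://mockserver.local/echo'
```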
Hi @longwuyuan,
@jasonhwang-max it's not easy to generalize, so:
Thanks, @longwuyuan
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
NGINX Ingress controller version:
Release: 0.26.1
Build: git-2de5a893a
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: openresty/1.15.8.2
Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kernel (e.g. uname -a):
What happened:
I have a Java client that sends chunked HTTP POST requests, and it works fine when connected directly to my service using the node IP and service port. The problem is when the client sends requests through the Ingress: requests with chunked data smaller than 2m are failing, with one or two passing, seemingly at random. After checking the logs, I found that the nginx ingress controller is changing the header "Transfer-Encoding: chunked" to "Content-Length".
What you expected to happen:
The Ingress controller should not buffer incoming requests if proxy_request_buffering is off.
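For context, buffering can be disabled either per-Ingress via the nginx.ingress.kubernetes.io/proxy-request-buffering annotation or controller-wide in the controller's ConfigMap; a sketch, assuming the default ConfigMap name and namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed; must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  # Renders "proxy_request_buffering off;" into the generated nginx.conf.
  proxy-request-buffering: "off"
  # Raises nginx log verbosity (useful for repro step 1 below).
  error-log-level: "debug"
```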
How to reproduce it (as minimally and precisely as possible):
Set the Ingress controller in debug mode.
Send a POST HTTP request with chunked data. In my case, the Java client sends the same request with different payload sizes: 3k, 33k, 500k, 1m, 2m, 3m, 5m, 7m, ... (see the sketch below).
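A sketch of a client that sends such requests (the URL is hypothetical; setChunkedStreamingMode is what forces Transfer-Encoding: chunked instead of Content-Length):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedPost {
    public static void main(String[] args) throws Exception {
        int payloadSize = 500 * 1024; // e.g. the 500k case above

        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://mockserver.local/echo").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Stream the body in 8 KiB chunks; the request goes out with
        // "Transfer-Encoding: chunked" and no Content-Length header.
        conn.setChunkedStreamingMode(8192);

        byte[] chunk = new byte[8192];
        try (OutputStream out = conn.getOutputStream()) {
            for (int sent = 0; sent < payloadSize; sent += chunk.length) {
                out.write(chunk, 0, Math.min(chunk.length, payloadSize - sent));
            }
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```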
Here is one of the failing requests. The ingress controller responds with 200 OK and zero content. The Nginx ingress controller receives the chunked data from the client and changes the request to use Content-Length: 38. (Including an example with logs from the same request with the correct headers.)
Anything else we need to know:
In this particular test, the tcpdumps show that the client sends 5 segments of data to nginx and the ingress controller resends the chunked data in 2 segments. Here are the logs of the same request where the ingress controller does nothing to the buffers or headers.
Here is the nginx config:
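A sketch of the relevant part of the generated location block, assuming the buffering settings above (the controller generates far more than this):

```nginx
location / {
    proxy_http_version        1.1;   # chunked upstream requests need HTTP/1.1
    proxy_request_buffering   off;   # stream the request body upstream
    proxy_buffering           off;   # stream the response body back
    client_max_body_size      10m;

    proxy_pass http://upstream_balancer;
}
```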