varnish does not relay POST request bodies to BEs under certain circumstances #1927
What is the status of this?
Still being discussed and not before 5.0.
Only available in vcl_backend_fetch{}. If it is unset varnish will not pass the request body to the backend. Related to varnishcache#1927.
Before this commit it was only sent on pass'd requests, making it impossible to cache e.g. POST requests. To keep the previous behaviour, unset bereq.body if the method is GET, which is normally true for misses. Fixes varnishcache#1927.
Only available in vcl_backend_fetch{}. If it is unset varnish will not pass the request body to the backend. This is in preparation for addressing #1927.
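The commit messages above describe the new bereq.body variable; the compatibility behaviour they mention can be sketched in VCL as follows (a sketch, assuming Varnish 5.0 or later):

```vcl
sub vcl_backend_fetch {
    # bereq.body is only available in this subroutine. Unsetting it
    # means Varnish will not send the request body to the backend,
    # which restores the pre-5.0 behaviour for GET fetches.
    if (bereq.method == "GET") {
        unset bereq.body;
    }
}
```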
Backport review: not for 4.0/4.1.
From which version on has this patch been included, exactly?
@onelharrison @frko This is available from 5.0 upwards.
Great news @fgsch, I appreciate the fast feedback a lot! We will be upgrading to Varnish 5 shortly. Any breaking VCL changes to be aware of? I remember some difficulties when we moved from 3.x to 4.0. Regards, Frank.
@frko Not as many as between 3 and 4. You can see the changes and how to upgrade here:
Is there anything special I need to do on Varnish 5 to have it send the request body to the backends? I have some VCL in place that hashes on an HTTP header carrying a content hash, but I see empty bodies arriving at the backends when using tcpdump.
Shameless plug: Did you read https://docs.varnish-software.com/tutorials/caching-post-requests/ ? |
@hermunn Yes, I did. I think I am a bit confused about the bodyaccess vmod: is it strictly required? The example shows it being used primarily for hashing the body (which we already do at the client, passing the hash in an HTTP header). I therefore presumed I would not need it, which may very well be wrong.
If you trust the client to hash correctly and never
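For reference, the tutorial linked above keys the cache on a server-side body hash via vmod_bodyaccess; a condensed sketch of that approach (buffer size and details assumed, not taken from the tutorial verbatim) is roughly:

```vcl
vcl 4.0;

import std;
import bodyaccess;

sub vcl_recv {
    if (req.method == "POST") {
        # Buffer up to 100KB of the request body so it can be hashed
        # and later replayed to the backend.
        std.cache_req_body(100KB);
        return (hash);
    }
}

sub vcl_hash {
    # Mix the buffered request body into the cache key.
    bodyaccess.hash_req_body();
}

sub vcl_backend_fetch {
    # On a miss the method is rewritten to GET; restore POST so the
    # backend receives the original request.
    set bereq.method = "POST";
}
```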
I'm trying to cache responses to a certain subset of POST requests that have no side effects, in order to save a HTTP-based search appliance some work on repeated queries. For that, I have the client hash the POST request body into an HTTP request header, and use the following VCL definitions to implement the cache key calculation and cache lookup in Varnish:
What I see both on the backend http service and via tcpdump is that varnish seems to throw away the original request body, which results in a query the backends still find well-formed and generates a response for. This response is, of course, semantically wrong, but does get cached by Varnish. Subsequent requests with the same GH-Search-Hash request header get served from the cache.
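The reporter's actual VCL definitions are not preserved in this excerpt; a minimal sketch of the described setup (the GH-Search-Hash header name comes from the report, everything else is assumed) might look like:

```vcl
sub vcl_recv {
    # Assumed sketch: allow a cache lookup for side-effect-free POST
    # searches that carry a client-computed body hash.
    if (req.method == "POST" && req.http.GH-Search-Hash) {
        return (hash);
    }
}

sub vcl_hash {
    # Key on the URL plus the client-supplied body hash.
    hash_data(req.url);
    if (req.http.GH-Search-Hash) {
        hash_data(req.http.GH-Search-Hash);
    }
}
```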
Expected Behavior
Varnish should pass the original request body to the backend during vcl_backend_fetch. Additionally, I don't think it's a good idea to automagically and silently rewrite the request method from POST (and possibly other methods?) to GET there.

Current Behavior
Varnish truncates the POST request body while relaying the request to the backend in vcl_backend_fetch, so that it arrives empty. From what I gathered from both tcpdump and a chatty BE HTTP server that I set up for debugging purposes, the Content-Length request header is also stripped from the relayed request.

Possible Solution
I had a peek at cache_http1_fetch.c:V1F_SendReq, and it does mention relaying request bodies to backends there, but I'm not familiar enough with the Varnish codebase to spot anything obvious that would explain the behaviour I described above. That said, I unfortunately don't have any concrete suggestions on how to address this.

Steps to Reproduce (for bugs)
Your Environment
varnishlog excerpt produced by varnishd with the VCL from above for two requests (one triggering a backend fetch, the second delivering a response from cache):