Replies: 14 comments
-
As much as I intuitively agree with the relevance of supporting this in Starlette core, my two cents here...
Would this be a kwarg on the application constructor? So personally I'm not sure whether we want a new config option, or just a middleware that's documented as "you should really turn this on for production" (both in the middleware docs and any "deployment" docs).
-
What about a middleware that's used by default? For example, Starlette already wraps every app in `ServerErrorMiddleware` and `ExceptionMiddleware`. A request size middleware could be added to that list. Another option is for the `middleware` parameter to have a default value that includes it.
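For context, this is how middleware is passed today via the existing `middleware` parameter (`TrustedHostMiddleware` is a real Starlette middleware, used here just as an example; a request-size middleware could slot into the same list or ship as a default):

```python
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.trustedhost import TrustedHostMiddleware

# User middleware is listed explicitly; Starlette itself only adds
# ServerErrorMiddleware and ExceptionMiddleware around this stack.
app = Starlette(
    routes=[],
    middleware=[Middleware(TrustedHostMiddleware, allowed_hosts=["example.com"])],
)
```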
-
Including the middleware by default leads to the question of how to leave it out in case users want to specify it explicitly… I don't think it'd be terribly awful for Starlette to not have everything turned-on-and-secure-by-default, i.e. if there are small steps required for the developer to make their app production-ready. (The philosophy is that of an ASGI toolkit and lightweight framework, so that's how I interpret it.)

There's been related discussion on FastAPI too, e.g. fastapi/fastapi#362. That conversation was resolved as "you can implement this as middleware", and the feature was not provided in core. (It's always a good idea to go see how things are going there, as a lot of FastAPI users want to see stuff in FastAPI when it would really benefit from being in Starlette or separate ASGI components…)

So I'm still inclined to think that "here's an upload limit ASGI middleware, please turn it on for prod" (i.e. no default, but easy to add) would be an okay approach for Starlette and the wider ASGI ecosystem.
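A rough sketch of what such an opt-in limit middleware could look like, written as pure ASGI (the class name `LimitUploadSizeMiddleware`, the default size, and the 413 handling are all illustrative, not an existing Starlette API):

```python
from starlette.responses import PlainTextResponse

class LimitUploadSizeMiddleware:
    """Reject HTTP requests whose body exceeds max_body_size bytes."""

    def __init__(self, app, max_body_size: int = 10 * 1024 * 1024):
        self.app = app
        self.max_body_size = max_body_size

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return

        # Fast path: reject declared-too-large bodies before reading anything.
        headers = dict(scope["headers"])
        declared = headers.get(b"content-length")
        if declared is not None and int(declared) > self.max_body_size:
            response = PlainTextResponse("Payload too large", status_code=413)
            await response(scope, receive, send)
            return

        received = 0

        async def limited_receive():
            # Also count bytes actually received, so chunked uploads
            # without a Content-Length header are limited too.
            nonlocal received
            message = await receive()
            if message["type"] == "http.request":
                received += len(message.get("body", b""))
                if received > self.max_body_size:
                    # A fuller implementation would send a 413 here instead.
                    raise RuntimeError("request body exceeded max_body_size")
            return message

        await self.app(scope, limited_receive, send)
```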
-
What about an optional argument to the `json()`, `body()` and `form()` methods, with default values?
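Those methods don't take such an argument today, but the idea can be approximated in user code with Starlette's existing `request.stream()` API. A hedged sketch (the helper name `bounded_json` and its `max_size` default are made up for illustration):

```python
import json

from starlette.requests import Request

async def bounded_json(request: Request, max_size: int = 1_000_000):
    """Parse the JSON body, aborting once more than max_size bytes arrive."""
    chunks = []
    total = 0
    async for chunk in request.stream():
        total += len(chunk)
        if total > max_size:
            raise ValueError("request body too large")
        chunks.append(chunk)
    return json.loads(b"".join(chunks))
```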
-
Wouldn't it be more reliable for this limit to live on an edge server, such as NGINX?
-
It is possible to limit it using NGINX, but you don't want to couple Starlette's resilience to an edge server.
-
I always have these limits in Nginx. It would not occur to me to have these limits in the framework, but others may need this if they're exposing services without a reverse proxy in front. I'd just want a way to turn it off, or for it not to be enabled by default.
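For reference, the relevant directive in nginx is `client_max_body_size` (the 10 MB value below is only an example; nginx returns 413 when the limit is exceeded):

```nginx
server {
    # Reject request bodies larger than 10 MB at the proxy,
    # before they ever reach the application server.
    client_max_body_size 10m;
}
```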
-
Because if Starlette doesn't need a reverse proxy, why add one? If Starlette relies on a reverse proxy for protection, that needs to be well documented, and IMO it's a bad decision, as each layer needs to protect itself.
-
I deploy everything behind a reverse proxy because my services require load balancing and redundancy. In fairness, however, I was trying to remember what Django does for this, and so I went looking for it: Django has the `DATA_UPLOAD_MAX_MEMORY_SIZE` setting, though I honestly can't ever remember setting it myself.

There's a similar discussion on this issue in Gunicorn where, in addition to talking about DDoS attacks, they also talk about what Werkzeug (the library underpinning Flask) does. Further, it appears that Tornado also offers a `max_body_size` option on its HTTP server.

In summary, it seems like lots of Python web frameworks (although I tend to think of Tornado as more of a server) implement something like this, but there's some concern about trending too far into full-blown server territory, because there could be many strategies to mitigate DDoS attacks that belong elsewhere than the web framework. Thus, it's probably not a controversial thing to add if all the established libraries already offer it.
-
IMO there really should be a configurable limit and a reasonable default, if it's not too expensive to do a simple check. This is an attack vector everyone has by default. Right now I feel stuff like this keeps Starlette as more of a "pro only" framework rather than something usable by any level of developer.
-
Has there been any progress so far on this? Or should I configure my web server or Uvicorn to do this?
-
For those following this: even behind nginx, I've realised there are a number of ways to run a denial of service against Starlette which can be mitigated with a simple timeout middleware. I recommend a dual solution: set request limits in nginx, and add a timeout middleware such as the following:

```python
import asyncio

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import HTMLResponse

class TimeoutMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        try:
            # Cancel any request that takes longer than 30 seconds to complete.
            response = await asyncio.wait_for(call_next(request), timeout=30)
        except asyncio.TimeoutError:
            return HTMLResponse("Request reached timeout.", status_code=504)
        return response
```

The timeout middleware seems to free the memory used by the attempted file upload as well. Any additional thoughts and suggestions are appreciated! We all want to achieve the goal of making our Starlette apps more robust.
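To register it, the standard pattern applies (using Starlette's real `add_middleware` API):

```python
from starlette.applications import Starlette

app = Starlette(routes=[])
# Apply the timeout to every request handled by the app.
app.add_middleware(TimeoutMiddleware)
```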
-
Note: since version 4.0, PHP has set a default request time limit of 30 seconds, which undoubtedly handles a lot of DoS side cases.
-
2023 note: Sanic Server has these safeties out of the box, if you're looking for something that just works. There's still no sane default in the Uvicorn / Starlette stack, even with nginx in front: you must use the middleware or risk a DoS.
-
As discussed on Gitter, my opinion is that Starlette should provide a default limit on request size.
The main reason is that without it, any Starlette application is vulnerable to a very easy DoS.
For example, newbie me can write a program as follows:
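Something along these lines, presumably: a single route that reads the whole body with `request.json()` (the route and names here are illustrative):

```python
import uvicorn
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route

async def create_item(request: Request):
    # request.json() buffers the entire body in memory before parsing,
    # with no size limit applied anywhere by default.
    data = await request.json()
    return JSONResponse({"received": len(data)})

app = Starlette(routes=[Route("/items", create_item, methods=["POST"])])

if __name__ == "__main__":
    uvicorn.run(app)
```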
As a malicious user, I could send a 30 GB JSON payload and cause the process to go OOM.
Other frameworks, such as Django and Quart, support this as well.
My proposal is to add a default limit which can be overridden in the app configuration.