Proposal: Log SystemExit in wsgi wrapper #675
this sounds like noise to me -- how is it different than things like the client disconnecting before the response finishes streaming?

That shouldn't trigger a `SystemExit`. This timeout is specifically when gunicorn kills the worker because it was taking too long to respond, maybe a busy loop or something like that, so this goes unnoticed. If the client disconnects, it should raise an ordinary exception.

The problem is that `SystemExit` is not a generic error. It'd be akin to us logging an error whenever you called `sys.exit(1)`, which doesn't make much sense. We have the syshook handlers which capture things that get bubbled up (and a raised `SystemExit` should), so I'm not sure what case this is specifically trying to cater to.
For posterity:

```python
# timeout.py
from time import sleep

def application(env, start_response):
    try:
        sleep(5)
    except:
        import traceback
        traceback.print_exc()
        raise
```

Run with gunicorn and a timeout shorter than 5 seconds.
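The mechanism behind that reproduction can be sketched without gunicorn at all. Gunicorn's worker installs a `SIGABRT` handler that (roughly) calls `sys.exit(1)`, which raises `SystemExit` in whatever frame was executing. This is an illustrative standalone sketch, not gunicorn's actual code:

```python
import os
import signal
import sys
import time

caught = None

def abort_handler(signum, frame):
    # gunicorn's worker does roughly this when the arbiter aborts it
    sys.exit(1)

signal.signal(signal.SIGABRT, abort_handler)

try:
    os.kill(os.getpid(), signal.SIGABRT)  # stand-in for the arbiter's timeout kill
    time.sleep(1)  # the handler fires at the next bytecode boundary
except SystemExit as exc:
    caught = exc.code
    print("caught SystemExit with code", caught)
```

So from the view's perspective, a worker timeout looks exactly like `sys.exit(1)` being called mid-request.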
I think this is different since it's happening within the wsgi middleware, so the syshook handlers may not see it. I'll investigate this and see if that's triggered; I forgot about that.
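One quick way to check (a standalone sketch, not raven code): an uncaught `SystemExit` never reaches `sys.excepthook`, because CPython handles `SystemExit` specially and exits before the hook is consulted:

```python
import subprocess
import sys

# Child program: installs a custom excepthook, then lets SystemExit escape.
prog = (
    "import sys\n"
    "sys.excepthook = lambda *a: print('hook saw', a[0].__name__)\n"
    "raise SystemExit(1)\n"
)
result = subprocess.run([sys.executable, "-c", prog],
                        capture_output=True, text=True)
print("stdout:", repr(result.stdout))   # empty: the hook never fired
print("exit code:", result.returncode)
```

Which suggests a syshook-based handler can't be relied on here, and the middleware is the right place to catch it.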
The specific case is a view in your app that is timing out: the process is getting killed without you knowing about it, since nothing is logged. This would log it, and log where it was killed from. For example, the stacktrace looks like this:
Oh, is this specifically because we ignore that error today? I personally don't like catering to gunicorn, but it's probably fine to capture all in the middleware.
We don't ignore it specifically, but it's not caught by `except Exception`. We specifically miss it here: https://github.com/getsentry/raven-python/blob/master/raven/middleware.py#L36, https://github.com/getsentry/raven-python/blob/master/raven/middleware.py#L43, and https://github.com/getsentry/raven-python/blob/master/raven/middleware.py#L52, since it's not an `Exception` subclass. My proposal would be to basically add:

```python
except Exception:
    self.handle_exception(environ)
except SystemExit:
    self.handle_exception(environ)
    raise
```
Could just be `except (Exception, SystemExit)`, but that's a minor detail. You're sure other processes don't cause it to raise `SystemExit` when shutting down? Otherwise seems fine.
Or actually, probably just:

```python
except (Exception, SystemExit):
    self.handle_exception(environ)
    raise
```
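The key detail is that `SystemExit` derives from `BaseException`, not `Exception`, so it must be listed explicitly. A minimal runnable sketch of the proposed middleware shape (class and app names are illustrative; `handle_exception` just records the event instead of sending it to Sentry):

```python
import sys

class SentryLikeMiddleware:
    """Hypothetical stand-in for raven.middleware.Sentry; only the
    exception-handling shape matters here."""

    def __init__(self, app):
        self.app = app
        self.captured = []  # stands in for events sent to Sentry

    def __call__(self, environ, start_response):
        try:
            return self.app(environ, start_response)
        # SystemExit is a BaseException, so a plain `except Exception`
        # never sees it; it has to be listed explicitly.
        except (Exception, SystemExit):
            self.handle_exception(environ)
            raise  # re-raise so the process can still exit as intended

    def handle_exception(self, environ):
        exc_type = sys.exc_info()[0]
        self.captured.append((exc_type.__name__, environ.get("PATH_INFO")))

def timing_out_app(environ, start_response):
    # pretend gunicorn's SIGABRT handler fired mid-request
    raise SystemExit(1)

app = SentryLikeMiddleware(timing_out_app)
reraised = False
try:
    app({"PATH_INFO": "/slow-view"}, lambda status, headers: None)
except SystemExit:
    reraised = True

print(app.captured, reraised)
```

The `raise` after `handle_exception` matters: the middleware only observes the shutdown, it must not swallow it.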
Ah, so on further investigation, we can narrow this down a bit: a normal shutdown of gunicorn via `SIGTERM` doesn't trigger this; only the `SIGABRT` sent on a worker timeout does. Gonna test against uwsgi to see what this does.
I'm having this exact issue: I can't trace a problem with Sentry where I'm getting `SystemExit(1)` errors, and I can't really see what is going on.
When using gunicorn, at least, and a worker times out, gunicorn sends a `SIGABRT` signal to the worker process. This manifests itself as a `SystemExit` exception that gets raised in the worker.

When this happens in Django, Django completely ignores `SystemExit` and lets it re-raise so the process can exit, without ever calling the `got_request_exception` signal, so in our case, we never see the exception and nobody knows that anything happened since it happily exited without a fuss.

My proposal would be to explicitly catch `SystemExit`, log, then reraise.

imo this seems like a reasonable thing to do within a middleware. If something is trying to shut down the process while in the middle of the request, it's probably worth logging.

In gunicorn, this wouldn't log for a normal shutdown, since that sends a `SIGTERM` instead of `SIGABRT`.

Would need to be tested against other wsgi runners, specifically uwsgi.

Refs:
https://github.com/django/django/blob/37ea3cb03e80de80380009a7a7939bc48d75abe9/django/core/handlers/base.py#L223-L225
https://github.com/benoitc/gunicorn/blob/e1912db59a2f1df36387dd28bb3515202cc03098/gunicorn/arbiter.py#L443