
Should we add a stdlib logging handler which writes entries to the logging API? #1497

Closed
tseaver opened this issue Feb 19, 2016 · 10 comments
Labels: api: logging (Issues related to the Cloud Logging API), priority: p2 (Moderately-important priority; fix may not be included in next release), type: question (Request for information or clarification; not an issue)

Comments

@tseaver
Contributor

tseaver commented Feb 19, 2016

The handler would be similar to the syslog handler in the stdlib. It could be configured via an INI file, e.g.:

[handler_gcloud]
class=gcloud.logging.handler.APIHandler
level=INFO
log_name=my-log

On a GCE / GAE host, the stdlib's SyslogHandler would work without this feature.
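
For illustration, a minimal sketch of what such a handler might look like. The client surface assumed here (a Client whose logger(name) returns an object with log_text(text, severity=...)) and the module path are assumptions for the sketch, not the final design:

import logging

from gcloud import logging as gcloud_logging  # assumed import path for this sketch


class APIHandler(logging.Handler):
    """Forward stdlib logging records to the Cloud Logging API."""

    def __init__(self, log_name='python', client=None, level=logging.NOTSET):
        super(APIHandler, self).__init__(level)
        # Assumed client surface: Client().logger(name).log_text(text, severity=...)
        self.client = client or gcloud_logging.Client()
        self.cloud_logger = self.client.logger(log_name)

    def emit(self, record):
        try:
            message = self.format(record)
            self.cloud_logger.log_text(message, severity=record.levelname)
        except Exception:
            self.handleError(record)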

tseaver added the type: question and api: logging labels on Feb 19, 2016
@theacodes
Contributor

/cc @andrewsg

This is probably a yes, as we'd like to have this for the Python runtimes.

@dhermes
Contributor

dhermes commented Feb 20, 2016

👍

@jgeewax
Contributor

jgeewax commented Feb 21, 2016

+1 from me too. I believe we do this with Node also.

@txomon

txomon commented May 26, 2016

Sorry, I didn't find this issue earlier. I was saying that I found a working version here:

https://medium.com/google-cloud/cloud-logging-though-a-python-log-handler-a3fbeaf14704#.mcrqgtkhh

@theacodes
Contributor

As mentioned in that article:

I have to point out that for each log call an API call is made, which can impact performance. But as a lot of Python scripts are written for automation tasks that are not performance-critical, this should not be a problem for them.

If we decide to do this here (probably something to consider in Node as well), we should be really careful about every logging statement making an API call.

Raven uses a separate thread to send logs asynchronously, and also has other transports that play well with things like gevent.
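
One way to avoid an API call on the caller's thread for every statement, roughly what Raven's threaded transport does, is to buffer records on a queue and drain them from a worker thread. A minimal sketch (QueueHandler/QueueListener are in the stdlib as of Python 3.2; APIHandler refers to the hypothetical synchronous handler sketched above):

import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)  # unbounded buffer between callers and the worker

# The synchronous handler still makes the API calls, but only on the
# listener's background thread.
api_handler = APIHandler(log_name='my-log')
listener = logging.handlers.QueueListener(log_queue, api_handler)
listener.start()

root = logging.getLogger()
root.addHandler(logging.handlers.QueueHandler(log_queue))
root.setLevel(logging.INFO)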

@txomon

txomon commented May 26, 2016

Because of that, I would propose making it a handler configured to the logging.ERROR level by default.

If we want to do more, we can implement another handler, a threaded one, where the handle() method launches a thread instead of doing the work itself. I would see this as a different piece of functionality, though.

I would start by creating the proposed handler, and then maybe implement a ThreadedHandler or something like that which spins up threads for logging requests.
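
As an illustration of that second idea, a ThreadedHandler (hypothetical name, building on the APIHandler sketch above) could hand each record to a short-lived thread. This is simple, though a shared queue plus a single worker thread is usually cheaper than one thread per record:

import threading


class ThreadedHandler(APIHandler):
    """Variant whose handle() performs the API call on a separate thread."""

    def handle(self, record):
        worker = threading.Thread(
            target=super(ThreadedHandler, self).handle, args=(record,))
        worker.daemon = True  # don't block interpreter shutdown
        worker.start()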

@txomon

txomon commented May 26, 2016

Of course, if we go that way, we should also create an asyncio-based handler. But I see the basic one as a first step, with the possible performance problem properly documented in case it is misused.
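
If an asyncio-based variant is wanted later, one sketch (again building on the hypothetical APIHandler above, and assuming records are emitted from the thread running the event loop) is to push the blocking API call onto the loop's default executor:

import asyncio
import functools


class AsyncioHandler(APIHandler):
    """Variant that schedules the blocking API call on the loop's executor."""

    def __init__(self, loop=None, **kwargs):
        super(AsyncioHandler, self).__init__(**kwargs)
        self.loop = loop or asyncio.get_event_loop()

    def emit(self, record):
        message = self.format(record)
        # Run the synchronous log_text() call in the default thread-pool
        # executor so the event loop is not blocked.
        self.loop.run_in_executor(
            None,
            functools.partial(self.cloud_logger.log_text, message,
                              severity=record.levelname))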

@waprin
Contributor

waprin commented Jun 20, 2016

I'd like to see this get done, because right now logging on flex doesn't capture log levels, etc., and it's really pretty easy to fix.

@andrewsg showed me the compat handler, which, with almost no modifications, at least gets the log level right.

I think what we want is:

  1. Auto-detection of whether you're in a flex environment, writing to the expected log file for fluentd to pick up.
  2. A config option (maybe auto-detect?) for people who have installed the google-fluentd handler on a VM.
  3. Fallback to writing to the Logging API, similar to the Medium post (except using this lib instead of the google api client). Start with a synchronous handler but leave it easy to swap in asynchronous transports.
  4. For messages of severity error or greater, format the log entry in such a way that it gets properly picked up by Cloud Error Reporting.
  5. An optional hook that installs a global exception handler that will log stack traces to Cloud Error Reporting (see the sketch below).

I would really like to see this done as we fill out the Stackdriver section of the Python page; I'll work on a PR if nobody else is working on it and people are OK with the plan.

@jonparrott
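
For item 5, one sketch of a global hook is to install sys.excepthook so uncaught exceptions are logged with their stack traces and flow through whatever handler is configured (the function and logger names here are illustrative only):

import logging
import sys


def install_excepthook(logger_name='uncaught'):
    logger = logging.getLogger(logger_name)

    def handle_exception(exc_type, exc_value, exc_traceback):
        if issubclass(exc_type, KeyboardInterrupt):
            # Let Ctrl-C behave normally.
            sys.__excepthook__(exc_type, exc_value, exc_traceback)
            return
        # exc_info attaches the stack trace, which the handler can format so
        # Error Reporting picks it up.
        logger.error('Uncaught exception',
                     exc_info=(exc_type, exc_value, exc_traceback))

    sys.excepthook = handle_exception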

lukesneeringer added the priority: p2 label on Apr 19, 2017
@lukesneeringer
Contributor

@jonparrott Can you give me an idea of where this stands (in importance, progress, and difficulty)?

@theacodes
Contributor

This is done.
