
Hook up GRPC logger to jaeger's Zap logger #1330

Closed
wants to merge 2 commits

Conversation

@benley (Contributor) commented Feb 12, 2019

Having gRPC log messages visible makes debugging much easier when setting things up.

Log output from jaeger-agent before this change, to illustrate:

```
{"level":"info","ts":1549952413.8778048,"caller":"agent/main.go:89","msg":"Registering metrics handler with HTTP server","route":"/metrics"}
{"level":"info","ts":1549952413.8788803,"caller":"agent/main.go:93","msg":"Starting agent"}
{"level":"info","ts":1549952413.8791533,"caller":"app/agent.go:68","msg":"Starting jaeger-agent HTTP server","http-port":5778}
```

(that's all it prints, generally)

And after:

```
{"level":"info","ts":1549952947.2143118,"caller":"zap/grpclogger.go:41","msg":"parsed scheme: \"\"","system":"grpc","grpc_log":true}
{"level":"info","ts":1549952947.2150836,"caller":"zap/grpclogger.go:41","msg":"scheme \"\" not registered, fallback to default scheme","system":"grpc","grpc_log":true}
{"level":"info","ts":1549952947.215112,"caller":"zap/grpclogger.go:41","msg":"ccResolverWrapper: sending new addresses to cc: [{collector.jaeger.svc.prod.us-west-1.aws.k8s:14250 0  <nil>}]","system":"grpc","grpc_log":true}
{"level":"info","ts":1549952947.2151368,"caller":"zap/grpclogger.go:41","msg":"ClientConn switching balancer to \"pick_first\"","system":"grpc","grpc_log":true}
{"level":"info","ts":1549952947.2161894,"caller":"zap/grpclogger.go:41","msg":"Subchannel Connectivity change to CONNECTING","system":"grpc","grpc_log":true}
{"level":"info","ts":1549952947.2162235,"caller":"zap/grpclogger.go:41","msg":"pickfirstBalancer: HandleSubConnStateChange: 0xc000440010, CONNECTING","system":"grpc","grpc_log":true}
{"level":"info","ts":1549952947.216558,"caller":"agent/main.go:89","msg":"Registering metrics handler with HTTP server","route":"/metrics"}
{"level":"info","ts":1549952947.2165728,"caller":"agent/main.go:93","msg":"Starting agent"}
{"level":"info","ts":1549952947.2166758,"caller":"app/agent.go:68","msg":"Starting jaeger-agent HTTP server","http-port":5778}
{"level":"info","ts":1549952947.815286,"caller":"zap/grpclogger.go:41","msg":"Subchannel Connectivity change to READY","system":"grpc","grpc_log":true}
{"level":"info","ts":1549952947.8154342,"caller":"zap/grpclogger.go:41","msg":"pickfirstBalancer: HandleSubConnStateChange: 0xc000440010, READY","system":"grpc","grpc_log":true}
```

... and then it prints messages when collector backends appear or disappear. It's extremely helpful.
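The wiring described above can be sketched roughly as follows. This is an assumption based on the sample output: the `"system":"grpc","grpc_log":true` fields and the `zap/grpclogger.go` caller suggest the `grpc_zap` adapter from `grpc-ecosystem/go-grpc-middleware`; exact import paths and signatures depend on the middleware version, so treat this as a sketch rather than the PR's actual diff:

```go
package main

import (
	grpc_zap "github.com/grpc-ecosystem/go-grpc-middleware/logging/zap"
	"go.uber.org/zap"
)

func main() {
	logger, _ := zap.NewProduction()
	defer logger.Sync()

	// Route gRPC's internal log messages through the zap logger so they
	// show up alongside the agent's own structured output.
	grpc_zap.ReplaceGrpcLogger(logger)

	// ... normal agent startup continues here ...
}
```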

Signed-off-by: Benjamin Staffin <[email protected]>
@jpkrohling (Contributor)

Are there log entries on a per-span basis as well (req/res)? Their info level seems a bit verbose to my taste, but if it's only for the bootstrap, I think it might be OK.

@benley (Contributor, Author) commented Feb 12, 2019

> Are there log entries on a per-span basis as well (req/res)? Their info level seems a bit verbose to my taste, but if it's only for the bootstrap, I think it might be OK.

Not unless you crank the verbosity way up by setting environment variables like `GODEBUG=http2debug=2`. On the server side this hardly adds any noise; the only notable new messages appear when clients disconnect uncleanly. On the client side there are info-level messages when backends come and go, and at startup to describe the balancer config.
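For reference, the knobs involved can be set as below. The `GODEBUG` value is the one mentioned above; the two `GRPC_GO_LOG_*` variables are grpc-go's own severity/verbosity switches, which control what the installed logger is fed:

```shell
# Very chatty HTTP/2 framing output from Go's http2 package:
export GODEBUG=http2debug=2

# grpc-go's own logging thresholds (read at init by its LoggerV2):
export GRPC_GO_LOG_SEVERITY_LEVEL=info
export GRPC_GO_LOG_VERBOSITY_LEVEL=2
```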

@yurishkuro (Member)

@benley nice change. The initialization of the gRPC logger has been moved to a single place, `cmd/flags/service.go`, so this PR needs to be synced up with master.

There also seems to be a bug in the implementation of that zap adapter: according to your sample (`"caller":"zap/grpclogger.go:41"`), the caller is now always listed as the wrapper instead of the original call site. I believe zap has a way to address that by changing the offset in the stack. It's worth creating a ticket in grpc-ecosystem/go-grpc-middleware, but it doesn't need to block this PR.

@yurishkuro (Member)

Superseded by #1606

@yurishkuro closed this Jun 14, 2019