Requirement - what kind of business use case are you trying to solve?
Forward traces from Kafka storage to Elasticsearch storage with jaeger-ingester deployed to a Kubernetes cluster. There are periods when the traced system is not in use and no traces are recorded.
Problem - what in Jaeger blocks you from solving the requirement?
When the ingester does not process any message for time.Minute, the process dies with:
{"level":"panic","ts":1539766162.6593273,"caller":"consumer/deadlock_detector.go:69","msg":"No messages processed in the last check interval"
That stops the Docker container. Kubernetes brings it back, but with exponential backoff, so each restart makes the pause longer.
Without Kubernetes or some supervisor (e.g. systemd) that keeps restarting the ingester, it would just die and stay down.
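For context, the check behind that panic is a simple watchdog: a ticker fires on each check interval and panics if the message counter has not advanced since the last tick. A minimal sketch of the pattern follows; the names (deadlockDetector, msgCount, interval) are illustrative, not Jaeger's actual code:

```go
package main

import (
	"sync/atomic"
	"time"
)

type deadlockDetector struct {
	msgCount int64         // incremented by the consumer loop on every message
	interval time.Duration // check interval, e.g. time.Minute
	done     chan struct{}
}

func (d *deadlockDetector) incrementMsgCount() {
	atomic.AddInt64(&d.msgCount, 1)
}

// start panics if no messages arrived during a full tick interval.
// A legitimately idle topic is indistinguishable from a stuck consumer here.
func (d *deadlockDetector) start() {
	ticker := time.NewTicker(d.interval)
	go func() {
		defer ticker.Stop()
		for {
			select {
			case <-d.done:
				return
			case <-ticker.C:
				// Swap the counter to zero; if it was already zero,
				// nothing was processed since the last tick.
				if atomic.SwapInt64(&d.msgCount, 0) == 0 {
					panic("No messages processed in the last check interval")
				}
			}
		}
	}()
}

func main() {
	d := &deadlockDetector{interval: time.Second, done: make(chan struct{})}
	d.start()
	d.incrementMsgCount()       // the consumer loop would call this per message
	time.Sleep(2 * time.Second) // no further messages: the ticker panics
}
```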
Proposal - what do you suggest to solve the problem or improve the existing situation?
1. Trust the Kafka client's (Sarama) built-in failure-detection mechanisms, possibly in combination with exposing Kafka consumer options (such as read-message timeouts) as configuration.
2. Restart the consumer within the running process (build a new consumer and re-bootstrap without exiting); see the sketch after this list.
3. Expose configuration for the deadlock_detector tick duration. This does not solve the problem, but helps manage the impact on the business.
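A rough sketch of option 2, using Sarama's consumer-group API purely for illustration; the handler, broker list, retry delay, and package name are placeholders, not Jaeger's actual wiring:

```go
package ingester

import (
	"context"
	"log"
	"time"

	"github.com/Shopify/sarama"
)

// runConsumerLoop rebuilds the consumer group inside the running process
// instead of panicking: on any consume error the group is closed and a
// fresh one is bootstrapped, so the pod never enters crash-loop backoff.
func runConsumerLoop(ctx context.Context, brokers []string, group string,
	topics []string, handler sarama.ConsumerGroupHandler) {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V1_0_0_0 // consumer groups require Kafka >= 0.10.2

	for ctx.Err() == nil {
		cg, err := sarama.NewConsumerGroup(brokers, group, cfg)
		if err != nil {
			log.Printf("building consumer failed: %v; retrying", err)
			time.Sleep(5 * time.Second)
			continue
		}
		// Consume blocks until a rebalance or error; on error, close the
		// group and loop around to build a new one without exiting.
		for ctx.Err() == nil {
			if err := cg.Consume(ctx, topics, handler); err != nil {
				log.Printf("consume error: %v; rebuilding consumer", err)
				break
			}
		}
		cg.Close()
	}
}
```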
Any open questions to address
Which of the proposed solutions is best?