[filebeat] Sometimes Pod logs are not collected by Filebeat #17396
Pinging @elastic/integrations-platforms (Team:Platforms)
We eventually just put a buffer timer in our app: after the app has completed all processing upon receiving the Kubernetes SIGTERM, it waits a configurable number of seconds (currently set to 10) before exiting. This gives the harvester time to finish before the pod is terminated, while still letting the app exit within the default 30-second Kubernetes grace period. The only log line we still miss is the very last one, an INFO message acknowledging "Shutting down". Not the most ideal solution, but it works for our current needs; going forward this is a much larger problem for high-transaction apps deployed to Kubernetes.
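(For anyone considering a similar workaround: a comparable delay can also be approximated at the pod level with a preStop hook instead of inside the application. The sketch below is only that alternative, not what was described above; the pod name, image, and 10-second sleep are placeholders. Note that a preStop hook delays delivery of SIGTERM rather than the app's final exit, so it is not an exact equivalent of the in-app timer.)

```yaml
# Sketch of a pod-level shutdown buffer (illustrative values only):
# the preStop hook keeps the container running for a few extra seconds
# before Kubernetes sends SIGTERM, giving the log collector time to catch up.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-shutdown-buffer      # hypothetical name
spec:
  terminationGracePeriodSeconds: 30   # default; must be longer than the preStop sleep
  containers:
    - name: app
      image: my-app:latest            # placeholder image
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]
```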
Is there an update on this topic?
We're having the same issue; in our case Filebeat is not collecting the stack traces that led to a pod restart. Despite seeing the stack trace in my terminal (from tailing the pod logs), I can't find the same message anywhere in Kibana. We're at a point now where we don't know how to reproduce bugs, because the stack traces that document them simply don't exist in our logs. Until this issue is resolved I really can't recommend using Filebeat for forwarding important logs; I'll likely write a new fluentd config and switch back to using it.
Hey folks! Sorry for the delay here. I just opened a patch PR for this which hopefully fixes the issue: #20084 |
As @mkirkevo already observed, Filebeat sometimes fails to collect some of a pod's logs when the pod goes into the Terminating state. I tried to reproduce it and I think I found at least one case in which this can happen: a pod handles the SIGTERM it receives when someone deletes it, sleeps for some time, and writes one more log entry at the end. As you can see in my findings, the logs written after SIGTERM was triggered never reach our backend (Elasticsearch in this case), but we can still see them using `kubectl`. As far as I can tell, the harvester is not deleted prematurely. It might be related to #14259, but I'm not sure as I don't know the codebase.
Running configuration:
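The actual configuration block is not included in this extract. Purely for illustration, a hints-based Kubernetes autodiscover setup of the kind commonly used in this scenario looks roughly like the following; every name, path, and host here is an assumption, not the reporter's real config:

```yaml
# Illustrative Filebeat autodiscover config for Kubernetes
# (not the actual configuration used in this report).
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log

output.elasticsearch:
  hosts: ["${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}"]
```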
Steps to reproduce:

To reproduce, you need a running Kubernetes cluster with Filebeat up and running using the configuration above (please also create an output; we use Elasticsearch as the backend, as already mentioned).

1. Create the test pod (see the manifest sketch after this list). It is just a long-running pod that will not be killed immediately after you run `kubectl delete pod test-pod`.
2. Check the logs of this pod using `kubectl logs -f test-pod` in a new shell.
3. Kill the pod by running `kubectl delete pod test-pod`.
4. Check your backend (depending on what you have set in the Filebeat configuration).
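The manifest for `test-pod` is not reproduced above, so the following is only a reconstruction based on the description (a trap on TERM that sleeps and then writes additional log lines, roughly matching the 5+2 seconds mentioned below); the image and exact sleeps are assumptions:

```yaml
# Hypothetical reconstruction of test-pod: prints a line every second and,
# on SIGTERM, sleeps and writes a few more lines before exiting.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c"]
      args:
        - |
          trap 'echo "SIGTERM received"; sleep 5; echo "still logging after SIGTERM"; sleep 2; echo "exiting"; exit 0' TERM
          while true; do
            echo "tick $(date)"
            sleep 1
          done
```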
You are going to see something like this (please ignore the wrong timestamps after SIGTERM is triggered; I didn't invest much time in figuring out how to provide a good date to the trap):
In the meantime, I've checked the Filebeat logs, and it seems that the harvester is terminated only after the pod has finished completely. You can see that the last message arrived at 14:03:50.853, and if you add the 5+2 seconds from the trap mechanism you end up close to the timestamp at which the file was removed.
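For anyone digging into the harvester side, the input options that govern when a harvester gives up on a file that has been deleted are `close_removed` and `clean_removed`, with `close_timeout` as a harder cutoff. This is general context, not a confirmed cause of the issue; the snippet below just lists the documented defaults, and the paths are illustrative:

```yaml
# Filebeat container input options related to removed files; the values shown
# are the documented defaults, listed here only as context on harvester lifecycle.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log
    close_removed: true   # close the harvester when the file is removed
    clean_removed: true   # drop registry state for files deleted on disk
    close_timeout: 0      # 0 disables a hard per-harvester lifetime limit
```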