Improve some logging messages for add_kubernetes_metadata processor #16866
Conversation
Force-pushed from 7d58101 to a41cb96.
Pinging @elastic/integrations-platforms (Team:Platforms)
@flaper87 Change looks good. Need to add a changelog entry.
Switch from Debug to Error when unrecoverable events happen and add extra debug messages when indexing and matching pods.
Force-pushed from a41cb96 to 3715681.
Done, thanks for the review :)
Thanks, looks good.
@@ -86,7 +86,7 @@ func (f *LogPathMatcher) MetadataIndex(event common.MapStr) string {
 	logp.Debug("kubernetes", "Incoming log.file.path value: %s", source)

 	if !strings.Contains(source, f.LogsPath) {
-		logp.Debug("kubernetes", "Error extracting container id - source value does not contain matcher's logs_path '%s'.", f.LogsPath)
+		logp.Err("Error extracting container id - source value does not contain matcher's logs_path '%s'.", f.LogsPath)
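For context, here is a self-contained sketch of how the changed check sits inside the matcher. The type stub, import paths, and the value handling around the check are assumptions for illustration, not the exact upstream code:

package matchers

import (
	"strings"

	// Import paths are assumed for illustration; they may differ by Beats version.
	"github.com/elastic/beats/libbeat/common"
	"github.com/elastic/beats/libbeat/logp"
)

// Minimal stand-in for the real matcher type: only the field used below.
type LogPathMatcher struct {
	LogsPath string
}

// Sketch of the check under discussion: the incoming path is logged at debug
// level, and failing to find the configured logs_path in it is now reported
// via logp.Err instead of being hidden behind the debug selector.
func (f *LogPathMatcher) MetadataIndex(event common.MapStr) string {
	value, err := event.GetValue("log.file.path")
	if err != nil {
		return ""
	}
	source, _ := value.(string)
	logp.Debug("kubernetes", "Incoming log.file.path value: %s", source)

	if !strings.Contains(source, f.LogsPath) {
		logp.Err("Error extracting container id - source value does not contain matcher's logs_path '%s'.", f.LogsPath)
		return ""
	}

	// ...container id extraction from the rest of the path would continue here...
	return ""
}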
We are in the process of refactoring log-related calls like logp.Err in #15699. I can create a separate PR for this after this PR gets merged.
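As a rough illustration of the direction that refactor could take (the helper name and exact logger usage here are assumptions, not the actual change in #15699), a per-component structured logger would replace the package-level calls:

package matchers

import (
	"strings"

	// Import path assumed for illustration; it may differ by Beats version.
	"github.com/elastic/beats/libbeat/logp"
)

// checkLogsPath is a hypothetical helper used only to show the logging style:
// a named logger ("kubernetes") carries the selector, so Errorf/Debugf replace
// the package-level logp.Err/logp.Debug calls.
func checkLogsPath(source, logsPath string) bool {
	log := logp.NewLogger("kubernetes")

	log.Debugf("Incoming log.file.path value: %s", source)
	if !strings.Contains(source, logsPath) {
		log.Errorf("source value does not contain matcher's logs_path '%s'", logsPath)
		return false
	}
	return true
}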
thank you! 😊
I can't check the failures in great detail right now. Are these related to the PR or can I send the PR through?
@flaper87 they seem unrelated
Improve some logging messages for add_kubernetes_metadata processor (elastic#16866): Switch from Debug to Error when unrecoverable events happen and add extra debug messages when indexing and matching pods. (cherry picked from commit 1d6323f)
Backport was missing, I've opened it: #16893
Improve some logging messages for add_kubernetes_metadata processor (elastic#16866) (#16893): Switch from Debug to Error when unrecoverable events happen and add extra debug messages when indexing and matching pods. (cherry picked from commit 1d6323f) Co-authored-by: Flavio Percoco <[email protected]>
This was happening due to error-level logging when the log path matcher detected a `log.file.path` that does not start with the standard Docker container log folder `/var/lib/docker/containers`, because AKS dropped support for Docker in September 2022 and switched to containerd. It looks like this message was not supposed to be at the error level in the first place, since it just means that the matcher didn't match, which is not an error. But it was mistakenly promoted from the debug level in elastic#16866, most likely because the message started with `Error` and looked confusing. This is a partial fix to unblock our customers, but we still need to come up with full AKS/containerd support in a follow-up change.
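To make the mismatch concrete, a small standalone sketch (the containerd-style path below is made up for illustration) shows why such events are a non-match rather than an error:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical containerd-style path, as seen on AKS after the move away from Docker.
	source := "/var/log/containers/mypod_default_app-0123456789abcdef.log"
	// Docker-oriented default used by the log path matcher.
	logsPath := "/var/lib/docker/containers/"

	// The matcher's check: for containerd paths this is simply "no match",
	// which is why the message belongs at debug level rather than error.
	if !strings.Contains(source, logsPath) {
		fmt.Printf("no container id extracted: %q does not contain logs_path %q\n", source, logsPath)
	}
}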
What does this PR do?
Switch from Debug to Error when unrecoverable events happen and add extra debug messages when indexing and matching pods.
I was trying to debug why my configs wouldn't work, and I was forced to enable debug logging to notice that the add_kubernetes_metadata processor was failing because my configs were wrong. Sadly, my configs still weren't working even after I fixed the above, and it was a bit painful to figure out what was going on because there weren't enough debug messages to understand which index keys, matches, and metadata were being processed by filebeat.
Why is it important?
It makes operations and debugging easier.
Checklist