
Containerd metricbeat module #29247

Merged

Conversation

MichaelKatsoulis (Contributor) commented Dec 2, 2021

What does this PR do?

This PR creates the containerd Metricbeat module, adding the cpu, memory and blkio metricsets.

Why is it important?

Containerd is a container runtime that implements the Container Runtime Interface (CRI).
It is one of the runtimes Kubernetes can use now that Docker support is deprecated as of v1.20.
When containerd is configured to expose metrics, it provides useful information about cpu, memory and blkio.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

How to test this PR locally

  1. Create a kind Kubernetes cluster of a version higher than 1.20 (a minimal kind-cluster.yaml is sketched after this list):
     kind create cluster --image 'kindest/node:v1.21.1' --config kind-cluster.yaml
  2. docker exec into the kind docker container that was created.
  3. Edit /etc/containerd/config.toml and add:
     [metrics]
         address = "127.0.0.1:1338"
  4. Restart the containerd service: systemctl restart containerd
  5. Before deploying Metricbeat, add the following data to the metricbeat-daemonset-modules ConfigMap:
     containerd.yml: |-
         - module: containerd
           metricsets:
             - cpu
             - memory
             - blkio
           enabled: true
           period: 10s
           hosts: ["localhost:1338"]
           calcpct: true
  6. Run Metricbeat and watch the containerd fields get populated.
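
The contents of kind-cluster.yaml are not shown above; a minimal single-node config along these lines should work (illustrative, not part of the PR):

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      # a single control-plane node is enough to exercise containerd metrics
      - role: control-plane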

Use cases

Screenshots

  • containerd_cpu
  • containerd_memory
  • containerd_blkio

@botelastic botelastic bot added the needs_team Indicates that the issue/PR needs a Team:* label label Dec 2, 2021
@MichaelKatsoulis MichaelKatsoulis marked this pull request as draft December 2, 2021 11:56
mergify bot (Contributor) commented Dec 2, 2021

This pull request does not have a backport label. Could you fix it @MichaelKatsoulis? 🙏
To fix up this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-v\d.\d.\d is the label to automatically backport to the 7.\d branch (\d is a digit)

NOTE: backport-skip has been added to this pull request.

@mergify mergify bot added the backport-skip Skip notification from the automated backport with mergify label Dec 2, 2021
@MichaelKatsoulis MichaelKatsoulis added the Team:Integrations Label for the Integrations team label Dec 2, 2021
@botelastic botelastic bot removed the needs_team Indicates that the issue/PR needs a Team:* label label Dec 2, 2021
elasticmachine (Collaborator) commented Dec 2, 2021

💚 Build Succeeded


Build stats

  • Start Time: 2022-01-11T11:20:58.847+0000

  • Duration: 107 min 55 sec

  • Commit: 34badbe

Test stats 🧪

Test Results: Failed 0, Passed 9718, Skipped 2528, Total 12246

💚 Flaky test report

Tests succeeded.

🤖 GitHub comments

To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

  • /package : Generate the packages and run the E2E tests.

  • /beats-tester : Run the installation tests with beats-tester.

  • run elasticsearch-ci/docs : Re-trigger the docs validation. (use unformatted text in the comment!)

"process_cpu_seconds_total": prometheus.Metric("system.total"),
},
Labels: map[string]prometheus.LabelMap{
"container_id": p.KeyLabel("id"),
Review comment (Member):

You are using openmetrics library here, is this intentional? Why not prometheus.KeyLabel()?

Reply (Contributor Author):

By mistake


// init registers the MetricSet with the central registry.
// The New method will be called after the setup of the module and before starting to fetch data
func init() {
Review comment (Member):

This is a new module so I guess it would better fit under xpack.

if err == nil {
cpuUsageTotalPct := calcCpuTotalUsagePct(cpuUsageTotal.(float64), systemUsageDelta,
float64(contCpus), cID, m.preContainerCpuTotalUsage)
m.Logger().Infof("cpuUsageTotalPct for %+v is %+v", cID, cpuUsageTotalPct)
Review comment (Member):

Maybe that's too noisy?

Reply (Contributor Author):

Remember, it is a draft. I have no intention of keeping it.

var systemTotalNs int64
perContainerCpus := make(map[string]int)
elToDel := -1
for i, event := range events {
Review comment (Member):

The additional percentage calculations can be made configurable (enabled/disabled on demand), like in the docker module, so as to avoid overloading the system in case we have too many containers/metrics, and only do them when we are interested in that level of detail.

Reply (Contributor Author):

Good idea

systemUsageDelta := float64(systemTotalNs) - m.preSystemCpuUsage

// Calculate cpu total usage percentage
cpuUsageTotal, err := event.GetValue("usage.total.ns")
Review comment (Member):

Can we put the rationale of these calculations into the PR's description and into the docs, so that they are clearly documented? I foresee getting questions about how these are calculated, and otherwise we will have to dig into the code and struggle to understand the calculations.

Reply (Contributor Author):

Yes of course!

MichaelKatsoulis (Contributor Author) commented:

Regarding the calculation of cpu usage percentage I followed this approach:

Containerd provides us with container_cpu_total_nanoseconds, container_cpu_user_nanoseconds and container_cpu_kernel_nanoseconds metrics per container id, and also container_per_cpu_nanoseconds for each cpu of each container.

container_cpu_total_nanoseconds is the sum of container_per_cpu_nanoseconds over all cpus.

Containerd also provides process_cpu_seconds_total, which is the total user and system CPU time spent, in seconds.

So in order to get the cpu usage percentage of each container I followed the approach we have in docker.

For each container:
(container_cpu_total_nanoseconds - pre_container_cpu_total_nanoseconds) / (process_cpu_seconds_total - pre_process_cpu_seconds_total)

To set pre_container_cpu_total_nanoseconds and pre_process_cpu_seconds_total, those values are updated with the latest values received every time new events are fetched.
pre_container_cpu_total_nanoseconds is tracked per container id.

Basically, I take a point of reference and then look at the difference in the next batch of events. That way you can tell how much of that time was used by the container.

Also, as container_cpu_total_nanoseconds is the sum across all cpus, the percentage is then divided by the number of cpus used by each container in order to normalise it.

@fearful-symmetry as you have worked with docker module in similar calculations, what do you think about this approach?
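
In code, the per-container calculation boils down to something like this (an illustrative sketch, not the module's actual function or signature):

    package main

    import "fmt"

    // cpuTotalUsagePct sketches the delta-based calculation described above:
    // the cpu time the container consumed since the previous fetch, divided by
    // the system cpu time elapsed in the same window, normalised by the number
    // of cpus the container uses.
    func cpuTotalUsagePct(containerTotalNs, preContainerTotalNs,
        systemUsage, preSystemUsage, containerCpus float64) float64 {
        containerDelta := containerTotalNs - preContainerTotalNs
        systemDelta := systemUsage - preSystemUsage
        if systemDelta <= 0 || containerCpus == 0 {
            return 0
        }
        return (containerDelta / systemDelta) / containerCpus
    }

    func main() {
        // Example: the container consumed 2e9 ns while the system total advanced
        // by 4e9, and the container uses 2 cpus -> 0.25 (25%).
        fmt.Println(cpuTotalUsagePct(6e9, 4e9, 10e9, 6e9, 2))
    }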


var (
// HostParser validates Prometheus URLs
hostParser = parse.URLHostParserBuilder{
Review comment (Contributor):

Why is this declared globally? I only see it being used in the init function?

// The New method will be called after the setup of the module and before starting to fetch data
func init() {
// Mapping of state metrics
mapping := &prometheus.MetricsMapping{
Review comment (Contributor):

Is there a reason why this map is declared here? I don't think that getMetricsetFactory function is getting reused, so we could just put it in one place?

systemUsageDelta := float64(systemTotalNs) - m.preSystemCpuUsage

// Calculate cpu total usage percentage
cpuUsageTotal, err := event.GetValue("usage.total.ns")
Review comment (Contributor):

This nearly identical logic block gets duplicated 3 times, we may want to try to abstract this away to the calcCpuTotalUsagePct function so it's a bit easier to wrangle.

//}
}

func calcCpuTotalUsagePct(cpuUsageTotal, systemUsageDelta, contCpus float64,
Review comment (Contributor):

I'm not clear on why these are three separate functions. They look really similar?

containerFields.Put("id", cID)
event.Delete("id")
}
e, err := util.CreateEvent(event, "containerd.cpu")
Review comment (Contributor):

If we're gonna borrow functions from other modules, we should move that code outside the module to somewhere generic.

fearful-symmetry (Contributor) commented:

As far as calculating the CPU percents, your logic seems sound? Generally, we track a previous value, calculate the delta between the previous total and the current, and divide that by the time between the deltas. Pay attention to the values that are being reported upstream, as not all platforms will have a particularly clear-cut idea of what a "total" is.

Normalized CPU usage should be thought of as the total per-CPU. Put another way, the maximum value for norm.pct should be 100%, and the maximum value for pct should usually be 100% * Number_of_cpus

For example, check out https://github.com/elastic/beats/blob/master/metricbeat/module/docker/cpu/helper.go

and https://github.com/elastic/beats/blob/master/metricbeat/internal/metrics/cpu/metrics.go
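
As a tiny numeric illustration of that pct vs. norm.pct relationship (hypothetical values, not output from the module):

    package main

    import "fmt"

    func main() {
        numCPUs := 4.0
        pct := 2.5               // 250%: the container is saturating 2.5 of the 4 cpus
        normPct := pct / numCPUs // 0.625 -> 62.5% of the machine as a whole
        fmt.Printf("pct=%.2f norm.pct=%.3f\n", pct, normPct)
    }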

MichaelKatsoulis (Contributor Author) commented:

@ChrsMark I updated the PR based on your comments. Could you take another look?

ChrsMark (Member) left a review comment:

lgtm

and more specifically fields `containerd.cpu.usage.total.pct`, `containerd.cpu.usage.kernel.pct`, `containerd.cpu.usage.user.pct`.
Default value is true.

For memory metricset if `calcpct.memory` setting is set to true, memory usage percentages will be calculated
Review comment (Contributor):

Why were calcpct.cpu and calcpct.memory introduced? Do we have some reason to make them configurable?

Reply (Contributor Author):

The thought was initiated by @ChrsMark's comment. In general I also agree, because those extra calculations (as well as the extra iteration over the events they require) may overload the system in case of too many containers, so it is safer to have it configurable. By default it is true anyway.
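
For reference, a module configuration using these settings would look roughly like this (a sketch combining the setting names from the docs above with the host/period used in the test instructions):

    - module: containerd
      metricsets: ["cpu", "memory", "blkio"]
      period: 10s
      hosts: ["localhost:1338"]
      # both default to true; set to false to skip the extra percentage
      # calculations when running with very many containers
      calcpct.cpu: true
      calcpct.memory: true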

Containerd module collects cpu, memory and blkio statistics about
running containers controlled by containerd runtime.

The current metricsets are: `cpu`, `blkio` and `memory`. They are not enabled by default.
Review comment (Contributor):

But in blkio.asciidoc it says:

This is a default metricset. If the host module is unconfigured, this metricset is enabled by default.

Don't they contradict each other? The same goes for cpu and memory.

Reply (Contributor Author):

I am trying to understand how this text in blkio.asciidoc is generated

Reply (Contributor Author):

Ok, this comes from this line.
I believe that all these metricsets should be enabled by default when a user enables the containerd module; we do that in most of the modules. So I made an update in 69f4b5f.

}
// Calculate cpu total usage percentage
cpuUsageTotal, err := event.GetValue("usage.total.ns")
if err == nil {
Review comment (Contributor):

Do you think it could be helpful to add some debug logs if err != nil?

Reply (Contributor Author):

In this case we iterate over a batch of events and check whether the usage.total.ns field is present. There is always one event in the batch where this field is missing: the one that carries the system.total field. The reason is that the process_cpu_seconds_total containerd metric, which is mapped to the system.total field, does not include any container id (it is a system-wide metric, not a container-specific one). That leads to an event that has only the system.total field, while the rest of the events have their fields grouped together mainly by container_id. So with this if I just want to make sure we skip calculating percentages for the event that only has system.total (it has no other fields and is not container specific). It is not an actual error worth logging, but I will add a comment.
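
Roughly, the intent of that check is the following (an illustrative sketch with plain maps standing in for the fetched events, not the actual module code):

    package main

    import "fmt"

    func main() {
        // One event in every batch carries only the system-wide system.total
        // field (mapped from process_cpu_seconds_total); the rest are grouped
        // per container id.
        events := []map[string]interface{}{
            {"system.total": 123.45},
            {"id": "abc123", "usage.total.ns": 4.2e9},
        }
        for _, event := range events {
            usage, ok := event["usage.total.ns"]
            if !ok {
                // Not an error: this is the system-wide event, so there is
                // nothing container-specific to calculate for it.
                continue
            }
            fmt.Println("calculate percentages for", event["id"], "from", usage)
        }
    }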

if m.calcPct {
inactiveFiles, err := event.GetValue("inactiveFiles")
if err != nil {
continue
Review comment (Contributor):

In which cases can inactiveFiles be missing from the event? Should a debug log be added here to explain why the usage percentage calculation was skipped?

Reply (Contributor Author):

In no case, only if there is an error. I will add a debug log.

MichaelKatsoulis (Contributor Author) commented:

@fearful-symmetry you have some requested changes that are blocking the merging of the PR. I can still merge it though, unless you want to make a final review.

@MichaelKatsoulis MichaelKatsoulis added backport-v8.1.0 Automated backport with mergify and removed backport-skip Skip notification from the automated backport with mergify labels Jan 17, 2022
@MichaelKatsoulis MichaelKatsoulis merged commit 181b83a into elastic:master Jan 17, 2022
fearful-symmetry (Contributor) commented:

@MichaelKatsoulis Sorry about that! No idea how I missed your ping last week.

MichaelKatsoulis (Contributor Author) commented:

> @MichaelKatsoulis Sorry about that! No idea how I missed your ping last week.

No problem @fearful-symmetry. You can still give it a look, and if there is something you think is important I can open a follow-up PR to fix it.

Labels: backport-v8.1.0 (Automated backport with mergify), enhancement, Team:Integrations (Label for the Integrations team)