Define evaluation metrics #3688

Merged · 3 commits into main from eval-metrics · Jun 24, 2024
Conversation

@dmjb (Contributor) commented on Jun 21, 2024

Add some basic metrics to the engine to track rule/entity evaluations. Note that the metrics for alerts and remediations are not wired in yet, since they depend on changes which will be added in the next PR.

I have taken a different approach to handling noop metrics than the one used in the codebase so far: instead of defining an interface with separate real and noop implementations, I make use of the noop metrics support built into otel. There is now a metrics meter factory with alternative implementations for noop and "real" metrics; any part of the code which needs to create metrics can use the factory without caring whether the metrics are actually exported.
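For illustration, a minimal sketch of that factory pattern, assuming otel-go's `metric` and `metric/noop` packages; the type and function names here are hypothetical, not the PR's actual identifiers:

```go
package metrics

import (
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/metric"
	"go.opentelemetry.io/otel/metric/noop"
)

// Meters hands out otel meters. It is backed either by the globally
// configured provider or by otel's built-in noop provider.
type Meters struct {
	provider metric.MeterProvider
}

// NewMeters returns a factory backed by the configured ("real") provider.
func NewMeters() *Meters {
	return &Meters{provider: otel.GetMeterProvider()}
}

// NewNoopMeters returns a factory whose meters record nothing.
func NewNoopMeters() *Meters {
	return &Meters{provider: noop.NewMeterProvider()}
}

// Meter returns a named meter; instruments created from it are either
// exported or silently discarded, depending on how the factory was built.
func (m *Meters) Meter(name string) metric.Meter {
	return m.provider.Meter(name)
}
```

The payoff is that instrument-creation code (counters, histograms, and so on) is identical in both modes; tests and telemetry-disabled deployments simply construct the noop variant.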

Relates to: #3556


@dmjb requested a review from a team as a code owner on June 21, 2024 at 12:15
@coveralls (Coverage Status)

coverage: 52.777% (+0.01%) from 52.766% when pulling 1209d30 on eval-metrics into 2c0c6b2 on main.

@coveralls (Coverage Status)

coverage: 52.772% (+0.006%) from 52.766% when pulling 1209d30 on eval-metrics into 2c0c6b2 on main.

@jhrozek (Contributor) left a comment

Looking good, one question inline.

// EntityFromIDs takes the IDs of the three known entity types and
// returns a single ID along with the type of the entity.
// This assumes that exactly one of the IDs is not equal to uuid.Nil
func EntityFromIDs(
@jhrozek (Contributor):
This is smart, but I worry that the code might get fragile in the future. Since the caller already passes attributes of interfaces.EvalStatusParams, why not pass its EntityType attribute directly?

@dmjb (Contributor, Author):

The problem is that the way we refer to entities right now requires passing around a UUID field for each type of entity, so if another entity type gets added we will need to add another parameter anyway.

I would love to come up with a more elegant way of managing entities in the codebase, but it's definitely a future problem.
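For readers following along without the diff open, a self-contained sketch of the shape being discussed; the parameter names and the EntityType values below are stand-ins, not the codebase's actual types:

```go
package example

import (
	"errors"

	"github.com/google/uuid"
)

// EntityType is a stand-in for the codebase's entity-type enum.
type EntityType int

const (
	EntityUnknown EntityType = iota
	EntityRepository
	EntityArtifact
	EntityPullRequest
)

// EntityFromIDs mirrors the function under review: given one ID slot per
// known entity type, it returns whichever ID is set along with its type.
// It assumes exactly one of the IDs is not uuid.Nil; if several are set,
// the first match in the switch wins.
func EntityFromIDs(repoID, artifactID, prID uuid.UUID) (uuid.UUID, EntityType, error) {
	switch {
	case repoID != uuid.Nil:
		return repoID, EntityRepository, nil
	case artifactID != uuid.Nil:
		return artifactID, EntityArtifact, nil
	case prID != uuid.Nil:
		return prID, EntityPullRequest, nil
	default:
		return uuid.Nil, EntityUnknown, errors.New("no entity ID was set")
	}
}
```

This makes the fragility concern concrete: each new entity type means a new parameter and a new switch arm at every call site of this shape.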

@coveralls (Coverage Status)

coverage: 52.768% (-0.007%) from 52.775% when pulling 03c021a on eval-metrics into 0287d08 on main.

@dmjb merged commit 85f3123 into main on Jun 24, 2024 (21 checks passed).
@dmjb deleted the eval-metrics branch on June 24, 2024 at 08:58.