feat: improve metric reporting #376
Conversation
Force-pushed from 87940a3 to 9beb23c.
Codecov Report
```diff
@@            Coverage Diff            @@
##           master    #376      +/-   ##
==========================================
- Coverage    43.1%    42.9%     -0.2%
==========================================
  Files          25       25
  Lines        1932     1941        +9
==========================================
  Hits          833      833
- Misses        972      981        +9
  Partials      127      127
```
Force-pushed from 8e53c3f to 42eee3e.
Force-pushed from 42eee3e to 7c45ed2.
One question about metric capturing around batches vs individual model persistence. Approval is sticky. 🤝
```diff
@@ -34,6 +41,10 @@ func (actors ActorList) Persist(ctx context.Context, s model.StorageBatch) error
 	ctx, span := global.Tracer("").Start(ctx, "ActorList.Persist", trace.WithAttributes(label.Int("count", len(actors))))
 	defer span.End()

+	ctx, _ = tag.New(ctx, tag.Upsert(metrics.Table, "actors"))
```
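For context, here is a minimal sketch of how a table tag key and a persist-duration measure could be wired up with OpenCensus so that the `tag.Upsert` above ends up attached to recorded measurements. The `metrics.Table` key mirrors the diff; the measure name, unit, and `Timer` helper are assumptions for illustration, not the project's actual metrics package.

```go
// Hypothetical sketch of a metrics package exposing a table tag key and a
// persist-duration measure; names other than Table are illustrative only.
package metrics

import (
	"context"
	"time"

	"go.opencensus.io/stats"
	"go.opencensus.io/tag"
)

// Table is the tag key upserted onto the context before persisting,
// e.g. tag.Upsert(metrics.Table, "actors") in the diff above.
var Table = tag.MustNewKey("table")

// PersistDuration records how long a Persist call took, in milliseconds.
// The measure name and unit are assumptions for this sketch.
var PersistDuration = stats.Float64(
	"visor/persist_duration_ms",
	"Time taken to persist a model or a batch of models",
	stats.UnitMilliseconds,
)

// Timer returns a function that, when called, records the elapsed time
// against PersistDuration using whatever tags are present on ctx.
func Timer(ctx context.Context) func() {
	start := time.Now()
	return func() {
		stats.Record(ctx, PersistDuration.M(float64(time.Since(start).Milliseconds())))
	}
}
```

Under this sketch, the Persist method in the diff would only need a `defer metrics.Timer(ctx)()` after the `tag.Upsert` call to capture the batch's total persist time.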
Should we be tagging metrics for list persistence the same way as individual persistence? It seems this would record the duration of a batch and of an individual item under the same tag, which might make the data unreliable. (If you do change this, note that the same pattern appears in all the models.)
I don't think it matters. The list could be a single item, and we would want that to be treated identically to the individual case. We're aiming to measure how long visor takes to persist all the data it extracts, not necessarily per-item timings.
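To make that aggregation point concrete: continuing the hypothetical metrics package sketched above, a view grouped by the table tag would fold batch and single-item persists into the same distribution, so each Persist call contributes one sample regardless of how many models it writes. The view name and bucket bounds below are assumptions, not the project's actual configuration.

```go
// Continuation of the hypothetical metrics package sketched above.
package metrics

import (
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

// PersistDurationView aggregates PersistDuration into a latency histogram,
// grouped by the table tag. A batch persist and a single-item persist each
// contribute one sample; the view answers "how long do Persist calls take
// per table", not "how long does each item take".
var PersistDurationView = &view.View{
	Name:        "visor/persist_duration_ms", // assumed name
	Measure:     PersistDuration,
	Description: "Distribution of Persist call durations, by table",
	TagKeys:     []tag.Key{Table},
	Aggregation: view.Distribution(10, 50, 100, 500, 1000, 5000, 10000), // assumed bounds (ms)
}

func init() {
	// Register the view so recorded measurements are exported.
	if err := view.Register(PersistDurationView); err != nil {
		panic(err)
	}
}
```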
Various cleanups and improvements: