Support raw Prometheus metrics from models #1651
Comments
Hi, I'm not sure I understand what you want to achieve. Do you want to bypass the logic that transforms the output of
Correct.
So basically you'd like to add your own custom endpoint for metrics and configure Prometheus to scrape it? And you're using the python wrapper?
Correct, we're using the python wrapper (currently 1.0.2). However, we're trying to capture measures (internally within our model) using OpenCensus. We then try to expose them using the metrics() method, but Seldon assumes that each COUNTER metric is additive to the previous one that was captured (i.e. on each call it effectively performs a sum).

As an aside, maybe there's an easier way of handling this? My goal is to measure failures and latencies internal to my code, classified with tags etc. Given that the model class is instantiated at service startup, and that predict() can be called separately from metrics(), we would need to keep our own aggregations that are then returned by metrics(). The way I see it:

This is part of the reason we're using OpenCensus to return aggregates. But OpenCensus typically doesn't provide a way to reset a count, so the value we report keeps increasing every time metrics() is called. My workaround for now is to keep a counter metric internal to my code but expose it within metrics() as a GAUGE instead. If I've misunderstood how this works and you can suggest a better way of handling this than exposing our aggregates to Prometheus directly, please advise.
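A minimal sketch of that GAUGE workaround, assuming the Seldon python wrapper's list-of-dicts metrics format (`type`/`key`/`value`); the class name and metric key here are hypothetical:

```python
# Hypothetical Seldon-python-wrapper-style model. The metrics() return
# shape (a list of {"type", "key", "value"} dicts) follows the wrapper's
# custom-metrics convention.
class MyModel:
    def __init__(self):
        # Cumulative count aggregated inside the model, not by the wrapper.
        self.failure_count = 0

    def predict(self, X, names=None):
        try:
            return X  # real inference would go here
        except Exception:
            self.failure_count += 1
            raise

    def metrics(self):
        # Report the cumulative value as a GAUGE so the wrapper overwrites
        # it on each scrape instead of adding it to the previously
        # reported value (the additive COUNTER behavior described above).
        return [{"type": "GAUGE",
                 "key": "my_model_failures",
                 "value": self.failure_count}]
```

The trade-off is that Prometheus sees the metric as a gauge even though it is semantically a counter, so counter-specific functions like `rate()` need care on the query side.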
@haykinson Would for example having
Yeah, I suppose that could work as well as a
Issues go stale after 30d of inactivity.

Will close this. Can you reopen if still a requirement?
Currently in our model code we gather metrics using OpenCensus, then have to marshal that data into the format that the /metrics endpoint uses. It would be great if we could instead just use a Prometheus exporter for OpenCensus and publish those metrics in their native format, to be scraped as per the implementation in #1507.