
Monitoring using both the input and the output of the model #719

Closed · bbarn3y opened this issue Jul 23, 2019 · 7 comments

bbarn3y commented Jul 23, 2019

We have custom monitoring logic written in Python that works similarly to the Outlier Detector example, except that we need access to both the input AND the matching output of the model in our code.

We started experimenting with the five available deployment types and the runtime inference graphs, but so far we couldn't find a way to get access to the input and the output of the model in the same place.

We also looked at the send_feedback functionality to see whether we could configure Seldon or the model to automatically call back to a previous point in the graph, but couldn't find a way to do so. Even if we could, we'd need to keep state in that deployment and match the input to the output somehow, and we currently don't see how that would work.

Putting our logic into the model itself would be a possible solution; however, that would involve modifying the already existing models, which is currently not acceptable for our use case. It also doesn't seem like a clean solution: if possible, we'd like to keep logic that isn't related to the model separate.

Is it possible to implement custom monitoring using both the input and the output of the model with Seldon?

axsaucedo (Contributor) commented Jul 23, 2019

What is your ultimate objective with this custom monitoring? This is important to understand, as the solution would be significantly different depending on what your objective is. If your objective is just to be able to store the inputs and outputs of the model, then you can do this with the latest functionality we released in 0.3.1, which basically allows you to collect and visualise the inputs and outputs of all your Seldon graphs through logs. You can try it out by updating to 0.3.1 and enabling log collection by following these instructions: https://github.com/SeldonIO/seldon-core/blob/master/examples/centralised-logging/README.md

The logs collected basically include the stdout of the containers, which is parsed whenever it is printed in correct JSON format. This should give you enough flexibility to store what you need on a per-model-container basis, and by default we store the whole input and output.

This should be enough for more complex use cases, as you are able to perform preprocessing of the output logs on a per-container basis using the sidecar that collects the logs. All of this functionality is open source (i.e. Fluentd, Kibana and Elasticsearch), so you should be able to extend it accordingly.
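For illustration, emitting the pair from inside a model class could look roughly like the sketch below. Only the `predict(self, X, features_names)` convention comes from the Seldon Python wrapper; `MyModel` and the JSON field names are assumptions for this example.

```python
import json
import sys


class MyModel:
    def __init__(self):
        # Load or initialise the real model here.
        self.model = None

    def predict(self, X, features_names=None):
        y = X  # placeholder: replace with self.model.predict(X)
        # One JSON object per line on stdout; the log-collection sidecar
        # parses lines that are valid JSON, so the input/output pair
        # ends up in the centralised logs.
        print(json.dumps({
            "input": X.tolist() if hasattr(X, "tolist") else X,
            "output": y.tolist() if hasattr(y, "tolist") else y,
        }))
        sys.stdout.flush()
        return y
```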

bbarn3y (Author) commented Jul 23, 2019

Our goal is to detect issues that occur with the model over time (e.g. degradation) and to alert the model's creator when that happens. What we want is really similar to the Anomaly Detection example implemented as an Input Transformer, but our algorithm needs input-output pairs.
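For context, the input-transformer contract looks roughly like the sketch below (assuming the Seldon Python wrapper's `transform_input` hook; `DriftMonitor` is an illustrative name), which shows why the model's output is out of reach at that point in the graph.

```python
class DriftMonitor:
    def transform_input(self, X, features_names=None):
        # X is only the incoming request; the model's response is never
        # passed to this hook, so input/output-pair logic cannot live here.
        return X
```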

Thanks for the tip, we'll look into logging. Sadly, Elasticsearch and Kibana won't be useful in this case, but maybe we could build a data processor on top of the logs. It wasn't really the kind of solution we were looking for, but we'll check whether it could work instead of an inference graph.

jklaise (Contributor) commented Jul 23, 2019

@bbarn3y another option would be an architecture similar to model explanations. You could have two deployments: the original model and the concept drift (CD) monitor. Every incoming request is sent to both, and the CD monitor would internally make another predict call to the model to get the prediction, then use the (input, output) pair to update its drift state and return an alert or no alert depending on the logic.
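A minimal sketch of that pattern, assuming the model deployment is reachable at a cluster-internal URL and speaks the standard Seldon REST payload (`{"data": {"ndarray": ...}}`); the URL, the drift test, and the alert encoding are illustrative assumptions.

```python
import requests

# Assumed cluster-internal address of the model deployment.
MODEL_URL = "http://model-deployment:8000/api/v0.1/predictions"


class ConceptDriftMonitor:
    def __init__(self):
        self.state = []  # accumulated (input, output) history

    def predict(self, X, features_names=None):
        # Internal predict call to the model deployment to obtain its output.
        resp = requests.post(
            MODEL_URL,
            json={"data": {"ndarray": X.tolist()}},
        )
        y = resp.json()["data"]["ndarray"]
        # Update drift state from the (input, output) pair.
        self.state.append((X.tolist(), y))
        drifted = self._check_drift()
        # Return 1.0 for "alert", 0.0 for "no alert".
        return [[1.0 if drifted else 0.0]]

    def _check_drift(self):
        # Placeholder for the real drift test over self.state.
        return False
```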

That being said, we definitely have work to do in this area. A lot of the use cases require stateful components for online computation, and we need to investigate how best to support them.

ukclivecox (Contributor) commented

I think this fits into the https://github.com/SeldonIO/mlgraph roadmap.

ukclivecox added this to the 2.0.x milestone on Aug 23, 2019
lukacsg commented Aug 29, 2019

Hi!
Could you check and merge PR #832?
We are trying to use Jaeger tracing (https://docs.seldon.io/projects/seldon-core/en/latest/graph/distributed-tracing.html), and it would be great to have control over the fields of the tracing message.
This affects only microservice.py.
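For reference, setting custom fields on spans with the jaeger-client library would look roughly like this; whether microservice.py exposes its tracer for this is exactly the kind of control being asked for here, and the service and tag names are illustrative assumptions.

```python
from jaeger_client import Config


def init_tracer(service_name="my-model"):
    # Constant sampler so every span is reported; fine for a sketch.
    config = Config(
        config={"sampler": {"type": "const", "param": 1}},
        service_name=service_name,
        validate=True,
    )
    return config.initialize_tracer()


tracer = init_tracer()
with tracer.start_span("predict") as span:
    span.set_tag("payload.size", 42)  # custom field on the tracing span
```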

Thanks

seldondev (Collaborator) commented
Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale

seldondev added the lifecycle/stale label on Apr 17, 2020
ukclivecox removed this from the 2.0.x milestone on Apr 23, 2020
axsaucedo changed the title to "OSS-10: Monitoring using both the input and the output of the model" on Apr 26, 2021, then back to "Monitoring using both the input and the output of the model" on Apr 28, 2021
ukclivecox added the v2 label and removed the lifecycle/stale label on Jan 9, 2022
ukclivecox (Contributor) commented

This should be possible with v2.
