Monitoring using both the input and the output of the model #719
Comments
What is your ultimate objective with this custom monitoring? It's important to understand this, as the solution would differ significantly depending on what your objective is. If your objective is just to be able to store the inputs and outputs of the model, then you can do this with the latest functionality we released in 0.3.1, which allows you to collect and visualise the inputs and outputs of all your Seldon graphs through logs. You can try it out by updating to 0.3.1 and enabling log collection by following these instructions: https://github.com/SeldonIO/seldon-core/blob/master/examples/centralised-logging/README.md

The collected logs are basically the stdout of the containers, which is parsed when it is printed as valid JSON. This should give you enough flexibility to store what you need on a per-model-container basis, and by default we store the whole input and output. It should also cover more complex use cases, as you can preprocess the output logs on a per-container basis using the sidecar that collects them. All of this functionality is open source (i.e. fluentd, Kibana and Elasticsearch), so you should be able to extend it accordingly.
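To make the idea concrete, here is a minimal sketch of a Seldon Python model wrapper that emits one JSON line per prediction to stdout, which the log-collection sidecar from the centralised-logging example could then parse. The class name and logged fields are illustrative assumptions, not a fixed Seldon contract:

```python
# Sketch: print one JSON object per prediction so the stdout-collecting
# sidecar (fluentd) can parse input/output into structured log fields.
import json
import sys


class MonitoredModel:
    def __init__(self):
        # Load or initialise the real model here.
        self.model = None

    def predict(self, X, features_names=None):
        # Replace with the real model call; identity used only for the sketch.
        y = X

        # One JSON object per line; lines that are valid JSON get parsed
        # by the log collector, so input and output are stored together.
        print(json.dumps({
            "input": X.tolist() if hasattr(X, "tolist") else X,
            "output": y.tolist() if hasattr(y, "tolist") else y,
        }))
        sys.stdout.flush()

        return y
```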
Our goal is to detect issues that occur with the model over time (e.g. degradation) and alert the model's creator when that happens. What we want is very similar to the Anomaly Detection example that is implemented as an Input Transformer, but our algorithm needs input-output pairs. Thanks for the tip, we'll look into logging. Elasticsearch and Kibana won't be useful in this case sadly, but maybe we could build a data processor on top of the logs. It isn't really the kind of solution we were looking for, but we'll check whether it could work instead of an inference graph.
@bbarn3y another option would be an architecture similar to model explanations. You could have two deployments: the original model and the concept drift (CD) monitor. Every incoming request is sent to both, and the CD monitor would internally make another predict call to the model to get the predictions, then use the (input, output) pair to update its drift state and return an alert or no alert depending on the logic. That being said, we definitely have work to do in this area. A lot of the use cases require stateful components for online computation, and we need to investigate how best to support them.
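A rough sketch of what that second deployment could look like, assuming the model is exposed over a REST prediction endpoint; the endpoint URL, payload shape and drift logic below are placeholders, not a prescribed API:

```python
# Sketch: a concept-drift "monitor" deployment that, for each incoming
# request, calls the model deployment to obtain the prediction and then
# updates its drift state from the (input, output) pair.
import requests

MODEL_URL = "http://model-deployment:8000/api/v0.1/predictions"  # placeholder


class DriftMonitor:
    def __init__(self):
        self.pairs_seen = 0  # stand-in for real online drift state

    def _get_prediction(self, X):
        payload = {"data": {"ndarray": X if isinstance(X, list) else X.tolist()}}
        resp = requests.post(MODEL_URL, json=payload, timeout=5.0)
        resp.raise_for_status()
        # Response shape is an assumption; adapt to the actual prediction API.
        return resp.json()["data"]["ndarray"]

    def _update_drift_state(self, X, y):
        # Replace with the real online drift statistic; here we only count.
        self.pairs_seen += 1
        return False  # no alert in this placeholder logic

    def predict(self, X, features_names=None):
        y = self._get_prediction(X)
        alert = self._update_drift_state(X, y)
        # Return whatever the alerting contract expects, e.g. a flag.
        return [[1 if alert else 0]]
```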
I think this fits into the https://github.com/SeldonIO/mlgraph roadmap
Hi! Thanks
Issues go stale after 30d of inactivity. |
This should be possible with v2 |
We have custom monitoring logic written in Python that works similarly to the Outlier Detector example, except that our code needs access to both the input AND the matching output of the model.
We started experimenting with the 5 available deployment types and the runtime inference graphs, but so far we couldn't find a way to get access to the input and the output of the model in the same place.
We also looked at the send_feedback functionality, to see whether we could configure Seldon or the model to automatically call back to a previous point in the graph, but we couldn't find a way to do so. Even if we did, we'd need to keep state in that deployment and match the input to the output somehow, and we currently don't see how that would work.
Putting our logic into the model itself would be a possible solution; however, that would involve modifying the already existing models, which is currently not acceptable for our use case. It also doesn't seem like a clean solution: if possible, we'd like to keep logic that isn't related to the model separate from it.
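For reference, this is roughly what that embedding would look like; it is only a sketch, and the class and method names (including the monitor's `observe` call) are placeholders:

```python
# Sketch of the "embed monitoring in the model" option we would rather avoid:
# the existing model gets wrapped so predict() also feeds the (input, output)
# pair into the monitoring logic. All names are placeholders.
class MonitoringWrappedModel:
    def __init__(self, wrapped_model, monitor):
        self.wrapped_model = wrapped_model  # the pre-existing model object
        self.monitor = monitor              # our Python monitoring logic

    def predict(self, X, features_names=None):
        y = self.wrapped_model.predict(X, features_names)
        # The monitoring code sees both sides of the call in one place,
        # which is exactly the coupling we would prefer to avoid.
        self.monitor.observe(X, y)
        return y
```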
Is it possible to implement custom monitoring using both the input and the output of the model with Seldon?