We have created the following inference graph:
router -> transform-input => 10 X models -> combiner -> transform-output
We started running some benchmarks and noticed that, fairly early on, the deployment becomes unavailable with no apparent reason in the logs.
We have changed the log level to DEBUG, and what we see is that the seldon-container-engine simply stops working. No error message appears.
Following is our deployment definition.
All components contain minimal logic: the router always directs to one of the preprocessors, the preprocessor does nothing, the combiner simply takes the first model response, and the postprocessor also does nothing.
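For reference, the "minimal logic" components could look roughly like this sketch. The class names are illustrative (not our actual file names); only the method signatures follow the seldon-core Python wrapper conventions (`route`, `transform_input`, `aggregate`, `transform_output`):

```python
# Sketch of minimal Seldon Core Python components, assuming the standard
# Python wrapper method names. Class names are hypothetical.

class Router:
    def route(self, features, feature_names=None):
        # Always direct traffic to the same child (index 0).
        return 0

class InputTransformer:
    def transform_input(self, features, feature_names=None):
        # Pass-through: the preprocess step does nothing.
        return features

class Combiner:
    def aggregate(self, features_list, feature_names_list=None):
        # Take the first model's response and discard the rest.
        return features_list[0]

class OutputTransformer:
    def transform_output(self, features, feature_names=None):
        # Pass-through: the postprocess step does nothing.
        return features
```

The point is that none of these do any real work, so the hang is unlikely to originate in component logic.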
Any direction would be highly appreciated.