
Seldon-batch-processor Issue #2173

Closed
yashcodecollab opened this issue Jul 20, 2020 · 8 comments
Labels
bug triage Needs to be triaged and prioritised accordingly

Comments

@yashcodecollab

I am trying to use the seldon-batch-processor. Below is the command I am running, together with the input.txt I am feeding to the iris classifier model (the same iris classifier sample code from your website).
However, I get no output: the output.txt file is empty. Am I doing anything wrong here? Is the input file in the wrong format for batch input?

sh-4.2$ seldon-batch-processor -d "irisclassifier" -g seldon -n air9 -h 0.0.0.0 -t rest -a json -p ndarray -w 100 -r 3 -i "/app/input.txt" -o "/app/output.txt" -m predict --benchmark
Elapsed time: 0.014407873153686523

@cliveseldon @axsaucedo

sh-4.2$ cat input.txt
[[5.964,4.006,2.081,1.031]]
[[1.964,5.006,22.081,1.031]]
[[2.964,6.006,21.081,1.031]]
[[3.964,7.006,22.081,1.031]]
[[4.964,8.006,24.081,1.031]]
[[5.964,9.006,2.081,1.031]]
[[6.964,10.006,2.081,1.031]]
[[7.964,11.006,2.081,1.031]]
[[8.964,12.006,2.081,1.031]]
[[9.964,13.006,20.081,1.031]]
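
As a side note, each line of input.txt above is a standalone JSON array, so a quick local sanity check can confirm the file parses as one request per line. A minimal sketch using only the stdlib json module (the two sample lines are copied from the file above):

```python
import json

# Each line of input.txt holds one request payload: a 1x4 nested JSON array.
lines = [
    "[[5.964,4.006,2.081,1.031]]",
    "[[9.964,13.006,20.081,1.031]]",
]

rows = [json.loads(line) for line in lines]

# Every parsed row should be one sample with four features.
for row in rows:
    assert len(row) == 1 and len(row[0]) == 4

print(rows[0][0])  # → [5.964, 4.006, 2.081, 1.031]
```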

@yashcodecollab yashcodecollab added bug triage Needs to be triaged and prioritised accordingly labels Jul 20, 2020
@yashcodecollab
Author

@cliveseldon any help on the above would be much appreciated. I am running the seldon-batch-processor command in the terminal of the same pod that is serving my irisclassifier model. Is there something I am doing wrong?

@ukclivecox
Contributor

This is how the code sets up the Seldon client:

sc = SeldonClient(
    gateway=gateway_type,
    transport=transport,
    deployment_name=deployment_name,
    payload_type=payload_type,
    gateway_endpoint=host,
    namespace=namespace,
    client_return_type="dict",
)

Maybe test this locally in a python shell to check it works for your settings.
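
If running SeldonClient itself is not convenient, the request body it produces for an ndarray payload can also be sketched with the stdlib alone. This is a sketch of the Seldon v1 REST envelope (the sample row is taken from the input.txt above; "data" → "ndarray" is the standard SeldonMessage nesting):

```python
import json

# One row from the input.txt above.
row = [[5.964, 4.006, 2.081, 1.031]]

# For an ndarray payload the Seldon v1 REST protocol nests the array
# under "data" -> "ndarray"; this is the body shape the client POSTs.
body = {"data": {"ndarray": row}}
payload = json.dumps(body)

print(payload)
```

Comparing this body against what your curl tests send can confirm whether the flags you pass produce the payload the model expects.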

@axsaucedo

@axsaucedo
Contributor

Hi @yashcodecollab, as Clive pointed out, that should give more insight into how the flags are interpreted. Here is a working command from an example:

seldon-batch-processor -d sklearn -n default -h localhost:80 -t rest -p data -a ndarray -w 100 -r 3 -i ./assets/input-data.txt -o ./assets/output-data.txt -m predict -l debug

It seems that you need to specify -p as "data" and -a as "ndarray" if you want to send an ndarray payload.

@yashcodecollab
Author

Based on the suggestion I tried the command below. Now I am able to reach port 5000, but I am getting an HTTP 405 response and the output file is still empty.

seldon-batch-processor -d irisclassifier -n default -h localhost:5000 -t rest -p ndarray -a data -w 100 -r 3 -i /app/ip.txt -o /app/output-data.txt -m predict -l debug

2020-07-21 18:14:41,793 - seldon_core.microservice:main:205 - INFO: Starting microservice.py:main
2020-07-21 18:14:41,793 - seldon_core.microservice:main:206 - INFO: Seldon Core version: 1.2.1
2020-07-21 18:14:41,794 - seldon_core.microservice:main:268 - INFO: Parse JAEGER_EXTRA_TAGS []
2020-07-21 18:14:41,794 - seldon_core.microservice:main:279 - INFO: Annotations: {}
2020-07-21 18:14:41,794 - seldon_core.microservice:main:283 - INFO: Importing irisclassifier
/opt/anaconda/lib/python3.7/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator LogisticRegression from version 0.22.1 when using version 0.23.1. This might lead to breaking code or invalid results. Use at your own risk.
  UserWarning)
/opt/anaconda/lib/python3.7/site-packages/sklearn/base.py:334: UserWarning: Trying to unpickle estimator Pipeline from version 0.22.1 when using version 0.23.1. This might lead to breaking code or invalid results. Use at your own risk.
  UserWarning)
2020-07-21 18:14:42,126 - seldon_core.microservice:main:362 - INFO: REST microservice running on port 5000 single-threaded=0
2020-07-21 18:14:42,126 - seldon_core.microservice:main:410 - INFO: REST metrics microservice running on port 6000
2020-07-21 18:14:42,127 - seldon_core.microservice:main:420 - INFO: Starting servers
 * Serving Flask app "seldon_core.wrapper" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
2020-07-21 18:14:42,142 - werkzeug:_log:122 - INFO: * Running on http://0.0.0.0:6000/ (Press CTRL+C to quit)
 * Serving Flask app "seldon_core.wrapper" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
2020-07-21 18:14:42,145 - werkzeug:_log:122 - INFO: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
2020-07-22 07:04:48,774 - werkzeug:_log:122 - INFO: 127.0.0.1 - - [22/Jul/2020 07:04:48] "POST /seldon/default/irisclassifier/api/v1.0/predictions HTTP/1.1" 405 -
2020-07-22 07:04:48,774 - werkzeug:_log:122 - INFO: 127.0.0.1 - - [22/Jul/2020 07:04:48] "POST /seldon/default/irisclassifier/api/v1.0/predictions HTTP/1.1" 405 -
2020-07-22 07:04:48,775 - werkzeug:_log:122 - INFO: 127.0.0.1 - - [22/Jul/2020 07:04:48] "POST /seldon/default/irisclassifier/api/v1.0/predictions HTTP/1.1" 405 -

@axsaucedo @cliveseldon

@axsaucedo
Contributor

Great, so it seems it was just incorrect flags; you should have seen an error when passing "json". Regarding your question: it seems you are running the microservice locally with the python command, as opposed to as a deployed microservice. That means the POST request will not work; the batch processor is currently configured to work behind a gateway, so you will have to deploy the model with Seldon. Could you give that a try?
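
The 405 responses in the logs fit this explanation: the batch processor POSTs to the gateway-style path, while a wrapper started locally with the python command serves the bare microservice routes. A sketch of the two URL shapes (the namespace, deployment name, and /predict route are taken from, or assumed consistent with, the thread above):

```python
# Gateway-style path the batch processor POSTs to (visible in the 405 log
# lines above); namespace and deployment name are taken from the thread.
namespace, deployment = "default", "irisclassifier"
gateway_path = f"/seldon/{namespace}/{deployment}/api/v1.0/predictions"

# A locally run Python wrapper serves bare microservice routes such as
# /predict, without the /seldon/<ns>/<name> prefix that the ingress adds,
# so POSTs to the prefixed path fail.
microservice_path = "/predict"

print(gateway_path)
```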

In the meantime we'll open an issue to explore adding support for the local python microservice.

@yashcodecollab
Author

Hi @axsaucedo, we are currently using OpenShift 3.11, so we were not able to install the Seldon Core operator with Helm charts.

As a workaround to test out the capabilities of Seldon Core, we are building a Docker image from the seldon-core package together with the sample Python script and hosting it on a pod that serves the predict function as an endpoint. For the above example we are able to send streaming input using curl, but we wanted to try batch mode as well. It looks like we might have to wait for the OpenShift 4.x upgrade at our end.

@axsaucedo
Contributor

Oh right @yashcodecollab, thanks for the insight. We would recommend upgrading to OpenShift 4.x, as Seldon officially supports Kubernetes 1.12+. In the meantime you could install Istio or Ambassador in your cluster to provide the ingress on top, but the recommendation is definitely 4.x: we are working closely with the Red Hat OpenShift team and have launched our Certified Seldon Operator (and are moving into the OpenShift marketplace), which should make it much easier to install and set up.

@axsaucedo
Contributor

Closing, as the initial issue posted has been resolved.
