
unable to use GRPC APIs via Python - torchserve_grpc_client.py not generated on Windows #907

Closed
jeffxtang opened this issue Dec 15, 2020 · 8 comments · Fixed by #908
bug Something isn't working documentation Improvements or additions to documentation triaged_wait Waiting for the Reporter's resp

Comments

@jeffxtang
Contributor

  • torch 1.7.1+cu110
  • torch-model-archiver 0.2.1b20201214
  • torchaudio 0.7.2
  • torchserve 0.3.0b20201214
  • torchtext 0.8.1
  • torchvision 0.8.2+cu110
  • java version: openjdk version "11.0.2" 2019-01-15
  • Operating System and version: Windows Pro 10

Your Environment

  • Installed using source? [yes/no]: yes
  • Are you planning to deploy it using docker container? [yes/no]:
  • Is it a CPU or GPU environment?: GPU
  • Using a default/custom handler? [If possible upload/share custom handler/model]:
  • What kind of model is it e.g. vision, text, audio?:
  • Are you planning to use local models from model-store or public url being used e.g. from S3 bucket etc.?
    [If public url then provide link.]:
  • Provide config.properties, logs [ts.log] and parameters used for model registration/update APIs:
  • Link to your project [if any]:

Expected Behavior

Get a prediction from a model using the gRPC APIs through the Python client.

Current Behavior

'scripts/torchserve_grpc_client.py' not generated

Possible Solution

Steps to Reproduce

See Logs below.

Failure Logs [if any]

(py38) PS C:\Users\Warrior\repos\serve> python -m grpc_tools.protoc --proto_path=frontend/server/src/main/resources/proto/ --python_out=scripts --grpc_python_out=scripts frontend/server/src/main/resources/proto/inference.proto frontend/server/src/main/resources/proto/management.proto
(py38) PS C:\Users\Warrior\repos\serve> dir .\scripts\

-a---- 12/15/2020 5:41 PM 9910 inference_pb2.py
-a---- 12/15/2020 5:41 PM 4307 inference_pb2_grpc.py
-a---- 12/15/2020 5:41 PM 23178 management_pb2.py
-a---- 12/15/2020 5:41 PM 11045 management_pb2_grpc.py

(py38) PS C:\Users\Warrior\repos\serve> python scripts/torchserve_grpc_client.py infer densenet161 examples/image_classifier/kitten.jpg
C:\Users\Warrior\anaconda3\envs\py38\python.exe: can't open file 'scripts/torchserve_grpc_client.py': [Errno 2] No such file or directory

@harshbafna
Contributor

harshbafna commented Dec 15, 2020

@jeffxtang: This is a documentation issue. The path has changed from scripts to ts_scripts, and the docs were not updated; I will put up a PR for this. Note that torchserve_grpc_client.py already exists in the ts_scripts directory and is not generated at runtime. Only the gRPC client stubs for the inference and management APIs are generated by the first python command.

Also, this is not a Windows-specific problem.

@harshbafna harshbafna self-assigned this Dec 15, 2020
@harshbafna harshbafna added documentation Improvements or additions to documentation bug Something isn't working labels Dec 15, 2020
@harshbafna harshbafna linked a pull request Dec 15, 2020 that will close this issue
10 tasks
@maaquib
Collaborator

maaquib commented Dec 15, 2020

@jeffxtang Can you test with the updated instructions in #908 and let me know if it works? Thanks

@jeffxtang
Contributor Author

(py38) PS C:\Users\Warrior\repos\serve> python ts_scripts/torchserve_grpc_client.py infer densenet161 examples/image_classifier/kitten.jpg
Traceback (most recent call last):
File "ts_scripts/torchserve_grpc_client.py", line 2, in
import inference_pb2
ModuleNotFoundError: No module named 'inference_pb2'

@harshbafna
Contributor

harshbafna commented Dec 15, 2020

@jeffxtang: You will need to generate the client stubs as well using the updated command.

python -m grpc_tools.protoc --proto_path=frontend/server/src/main/resources/proto/ --python_out=ts_scripts --grpc_python_out=ts_scripts frontend/server/src/main/resources/proto/inference.proto frontend/server/src/main/resources/proto/management.proto

Also, note that the command in README.md to run inference on the densenet161 model through gRPC API expects the model to be already registered.
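Putting the two steps together, here is a minimal sketch of a direct gRPC inference call, equivalent to what ts_scripts/torchserve_grpc_client.py does. It assumes the stubs (inference_pb2, inference_pb2_grpc) were generated into ts_scripts as above, TorchServe listens on its default gRPC inference port 7070, and densenet161 is already registered:

```python
# Sketch of a gRPC Predictions call against TorchServe,
# assuming generated stubs and the default inference port 7070.

def build_input(path):
    # PredictionsRequest carries a map<string, bytes>; the image
    # bytes go under the "data" key, as the bundled client does.
    with open(path, "rb") as f:
        return {"data": f.read()}

def predict(stub, request):
    # Invoke the Predictions RPC and decode the returned bytes.
    response = stub.Predictions(request)
    return response.prediction.decode("utf-8")

def main():
    import grpc
    import inference_pb2
    import inference_pb2_grpc

    channel = grpc.insecure_channel("localhost:7070")
    stub = inference_pb2_grpc.InferenceAPIsServiceStub(channel)
    request = inference_pb2.PredictionsRequest(
        model_name="densenet161",
        input=build_input("examples/image_classifier/kitten.jpg"))
    print(predict(stub, request))

if __name__ == "__main__":
    main()
```

Run it with ts_scripts on PYTHONPATH (or from inside ts_scripts) so the generated inference_pb2* modules resolve.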

@harshbafna harshbafna added the triaged_wait Waiting for the Reporter's resp label Dec 15, 2020
@jeffxtang
Contributor Author

@harshbafna by "expects the model to be already registered" you mean run torchserve like torchserve --start --ncs --model-store model_store --models densenet161.mar? I did that and also generated the stubs using the updated command. Now python ts_scripts/torchserve_grpc_client.py infer densenet161 examples/image_classifier/kitten.jpg returns no errors, but it actually prints nothing. Meanwhile torchserve logs:

(py38) PS C:\Users\Warrior\repos>
W-9000-densenet161_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted
2020-12-15 10:44:58,589 [INFO ] W-9000-densenet161_1.0 org.pytorch.serve.wlm.WorkerThread - Backend response time: 37
main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel
2020-12-15 10:44:58,589 [INFO ] W-9000-densenet161_1.0-stdout MODEL_METRICS - HandlerTime.Milliseconds:36.9|#ModelName:densenet161,Level:Model|#hostname:Warrior,requestID:53d95a54-e65a-47a0-b667-674cc0ce849b,timestamp:1608057898082
2020-12-15 10:44:58,590 [INFO ] W-9000-densenet161_1.0 ACCESS_LOG - /0:0:0:0:0:0:0:1:51033 "gRPC org.pytorch.serve.grpc.inference.InferenceAPIsService/Predictions HTTP/2.0" 0 40S - CPUUtilization.Percent:0.0

So it seems to be working. But it'd be nice if torchserve_grpc_client.py also returned results similar to using curl with the REST API.

@harshbafna
Contributor

@harshbafna by "expects the model to be already registered" you mean run torchserve like torchserve --start --ncs --model-store model_store --models densenet161.mar

Yes, that will register the densenet model at startup.

So it seems to be working. But it'd be nice if torchserve_grpc_client.py also returned results similar to using curl with the REST API.

The print statement was missing in the client's infer function. I have fixed that as well in the same PR.

@maaquib
Collaborator

maaquib commented Dec 15, 2020

@jeffxtang Let me know if this resolves the issue. The fix has been pushed to the same PR #908

@jeffxtang
Contributor Author

Cool, it works.
