[WIP] Documentation fixes and enhancements (#584)
* added torch-model-archiver to bug template

* fixed broken links in main README

* refactored image_classifier example readme

* minor fixes in docker documentation

* refactored examples main readme

* fixed broken link issue in batch inference documentation

* updated model archiver documentation with details for requirements file

* added markdown link check in sanity script

* install npm markdown package in buildspec.yml

* fixed broken links

* link checker script fixes and doc fixes

* Updated squeezenet readme
- updated buildspec node installation steps

* adds comment for pytest failure check

* install nodejs

* disabled link check for localhost URLs

* uncommented link checker

* incorporated doc review comments

Co-authored-by: Aaqib <[email protected]>

* updated path in instructions

* fixed broken links

* fixed link checker issues

* link fixes

* updated ubuntu regression log links

* updated links

Co-authored-by: Shivam Shriwas <[email protected]>
Co-authored-by: dhaniram-kshirsagar <[email protected]>
Co-authored-by: Aaqib <[email protected]>
Co-authored-by: dhanainme <[email protected]>
5 people authored Dec 11, 2020
1 parent 8ecd581 commit bacdf0a
Showing 27 changed files with 336 additions and 194 deletions.
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/bug_template.md
@@ -12,6 +12,7 @@ Please search on the [issue tracker](https://github.com/pytorch/serve/issues) be
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
* torchserve version:
* torch-model-archiver version:
* torch version:
* torchvision version [if any]:
* torchtext version [if any]:
2 changes: 1 addition & 1 deletion CODE_OF_CONDUCT.md
@@ -55,7 +55,7 @@ a project may be further defined and clarified by project maintainers.
## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at <[email protected]>. All
reported by contacting the project team at \<[email protected]\>. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
8 changes: 4 additions & 4 deletions README.md
@@ -16,14 +16,14 @@ TorchServe is a flexible and easy to use tool for serving PyTorch models.

## Contents of this Document

* [Install TorchServe](#install-torchserve)
* [Install TorchServe](#install-torchserve-and-torch-model-archiver)
* [Install TorchServe on Windows](docs/torchserve_on_win_native.md)
* [Install TorchServe on Windows Subsystem for Linux](docs/torchserve_on_wsl.md)
* [Serve a Model](#serve-a-model)
* [Quick start with docker](#quick-start-with-docker)
* [Contributing](#contributing)

## Install TorchServe
## Install TorchServe and torch-model-archiver

1. Install dependencies

@@ -90,7 +90,7 @@ For information about the model archiver, see [detailed documentation](model-arc

## Serve a model

This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already [installed TorchServe and the model archiver](#install-with-pip).
This section shows a simple example of serving a model with TorchServe. To complete this example, you must have already [installed TorchServe and the model archiver](#install-torchserve-and-torch-model-archiver).

To run this example, clone the TorchServe repository:

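The clone command itself is elided in this view; it is presumably the standard one:

```bash
git clone https://github.com/pytorch/serve.git
cd serve
```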
@@ -156,7 +156,7 @@
```bash
pip install -U grpcio protobuf grpcio-tools
python -m grpc_tools.protoc --proto_path=frontend/server/src/main/resources/proto/ --python_out=scripts --grpc_python_out=scripts frontend/server/src/main/resources/proto/inference.proto frontend/server/src/main/resources/proto/management.proto
```

- Run inference using a sample client [gRPC python client](scripts/torchserve_grpc_client.py)
- Run inference using a sample client [gRPC python client](ts_scripts/torchserve_grpc_client.py)

```bash
python ts_scripts/torchserve_grpc_client.py infer densenet161 examples/image_classifier/kitten.jpg
```
20 changes: 10 additions & 10 deletions benchmarks/README.md
@@ -12,7 +12,7 @@ We currently support benchmarking with JMeter & Apache Bench. One can also profi

## Installation

It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](https://github.com/pytorch/serve/blob/master/README.md) for setup.
It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](../README.md) for setup.

### Ubuntu

@@ -44,7 +44,7 @@ python3 windows_install_dependencies.py "C:\\Program Files"

## Models

The pre-trained models for the benchmark can be mostly found in the [TorchServe model zoo](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md). We currently support the following:
The pre-trained models for the benchmark can be mostly found in the [TorchServe model zoo](../docs/model_zoo.md). We currently support the following:
- [resnet: ResNet-18 (Default)](https://torchserve.pytorch.org/mar_files/resnet-18.mar)
- [squeezenet: SqueezeNet V1.1](https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar)

@@ -63,7 +63,7 @@ We also support compound benchmarks:

#### Using pre-built docker image

* You can specify, docker image using --docker option. You must create docker by following steps given [here](https://github.com/pytorch/serve/tree/master/docker).
* You can specify, docker image using --docker option. You must create docker by following steps given [here](../docker/README.md).

```bash
cd serve/benchmarks
@@ -81,7 +81,7 @@ NOTE - '--docker' and '--ts' are mutually exclusive options

#### Using local TorchServe instance:

* Install TorchServe using the [install guide](../README.md#install-torchserve)
* Install TorchServe using the [install guide](../README.md#install-torchserve-and-torch-model-archiver)
* Start TorchServe using the following command:

@@ -166,13 +166,13 @@ Using ```https``` instead of ```http``` as the choice of protocol might not work
The full list of options can be found by running with the -h or --help flags.

## Adding test plans
Refer [adding a new jmeter](NewTestPlan.md) test plan for torchserve.
Refer [adding a new jmeter](add_jmeter_test.md) test plan for torchserve.

# Benchmarking with Apache Bench

## Installation

It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](https://github.com/pytorch/serve/blob/master/README.md) for setup.
It assumes that you have followed quick start/installation section and have required pre-requisites i.e. python3, java and docker [if needed]. If not then please refer [quick start](../README.md) for setup.

### pip dependencies

@@ -204,7 +204,7 @@ Refer [parameters section](#benchmark-parameters) for more details on configurab
`python benchmark-ab.py`

### Run benchmark with a test plan
The benchmark comes with pre-configured test plans which can be used directly to set parameters. Refer available [test plans](#test-plans ) for more details.
The benchmark comes with pre-configured test plans which can be used directly to set parameters. Refer available [test plans](#test-plans) for more details.
`python benchmark-ab.py <test plan>`

### Run benchmark with a customized test plan
@@ -238,7 +238,7 @@ This command will use all the configuration parameters given in config.json file
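For illustration, such a config file might look like the sketch below; the exact schema and the `--config` flag are assumptions inferred from the surrounding text, not verified against the benchmark script:

```bash
# hypothetical config.json using the parameters listed in the next section
cat > config.json <<'EOF'
{
  "url": "https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar",
  "device": "cpu",
  "exec_env": "docker",
  "concurrency": 10
}
EOF
python benchmark-ab.py --config config.json
```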
### Benchmark parameters
The following parameters can be used to run the AB benchmark suite.
- url: Input model URL. Default: "https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar"
- url: Input model URL. Default: `https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar`
- device: Execution device type. Default: cpu
- exec_env: Execution environment. Default: docker
- concurrency: Concurrency of requests. Default: 10
@@ -275,7 +275,7 @@ The reports are generated at location "/tmp/benchmark/"
### Sample output CSV
| Benchmark | Model | Concurrency | Requests | TS failed requests | TS throughput | TS latency P50 | TS latency P90 | TS latency P99 | TS latency mean | TS error rate | Model_p50 | Model_p90 | Model_p99 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AB | https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar | 10 | 100 | 0 | 15.66 | 512 | 1191 | 2024 | 638.695 | 0 | 196.57 | 270.9 | 106.53|
| AB | [squeezenet1_1](https://torchserve.pytorch.org/mar_files/squeezenet1_1.mar) | 10 | 100 | 0 | 15.66 | 512 | 1191 | 2024 | 638.695 | 0 | 196.57 | 270.9 | 106.53|

### Sample latency graph
![](predict_latency.png)
@@ -301,7 +301,7 @@ The benchmarks can also be used to analyze the backend performance using cProfil

Using local TorchServe instance:

* Install TorchServe using the [install guide](../README.md#install-torchserve)
* Install TorchServe using the [install guide](../README.md#install-torchserve-and-torch-model-archiver)

By using an external docker container for TorchServe:

8 changes: 4 additions & 4 deletions docker/README.md
@@ -10,17 +10,17 @@
* docker - Refer to the [official docker installation guide](https://docs.docker.com/install/)
* git - Refer to the [official git set-up guide](https://help.github.com/en/github/getting-started-with-github/set-up-git)
* For base Ubuntu with GPU, install the following nvidia container toolkit and driver:
* [Nvidia container toolkit](https://github.com/NVIDIA/nvidia-docker#ubuntu-160418042004-debian-jessiestretchbuster)
* [Nvidia container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian)
* [Nvidia driver](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-nvidia-driver.html)

* NOTE - Dockerfiles have not been tested on Windows native platform.

## First things first

If you have not cloned TorchServe source then:
```bash
1. If you have not clone torchserve source then:
git clone https://github.com/pytorch/serve.git
2. cd serve/docker
cd serve/docker
```

# Create TorchServe docker image
@@ -199,7 +199,7 @@ curl http://localhost:8080/ping

# Create torch-model-archiver from container

To create mar [model archive] file for torchserve deployment, you can use following steps
To create mar [model archive] file for TorchServe deployment, you can use following steps

1. Start container by sharing your local model-store/any directory containing custom/example mar contents as well as model-store directory (if not there, create it); see the sketch below

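A hedged sketch of these steps (the image tag, paths, and model file names are illustrative, not taken from this documentation):

```bash
# start a container with a local model-store directory mounted inside it
mkdir -p model-store
docker run --rm -it \
    -v $(pwd)/model-store:/home/model-server/model-store \
    pytorch/torchserve:latest bash

# inside the container: package a model into a .mar file
torch-model-archiver --model-name my_model --version 1.0 \
    --serialized-file /home/model-server/model-store/my_model.pt \
    --handler image_classifier \
    --export-path /home/model-server/model-store
```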
42 changes: 21 additions & 21 deletions docs/FAQs.md
@@ -15,8 +15,8 @@ Torchserve API's are compliant with the [OpenAPI specification 3.0](https://swag

### How to use Torchserve in production?
Depending on your use case, you will be able to deploy torchserve in production using following mechanisms.
> Standalone deployment. Refer https://github.com/pytorch/serve/docker or https://github.com/pytorch/serve/docs/README.md
> Cloud based deployment. Refer https://github.com/pytorch/serve/kubernetes https://github.com/pytorch/serve/cloudformation
> Standalone deployment. Refer [TorchServe docker documentation](../docker/README.md) or [TorchServe documentation](../docs/README.md)
> Cloud based deployment. Refer [TorchServe kubernetes documentation](../kubernetes/README.md) or [TorchServe cloudformation documentation](../cloudformation/README.md)

### What's difference between Torchserve and a python web app using web frameworks like Flask, Django?
@@ -38,36 +38,36 @@ Relevant documents.
- [Torchserve configuration](https://github.com/pytorch/serve/blob/master/docs/configuration.md)
- [Model zoo](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md#model-zoo)
- [Snapshot](https://github.com/pytorch/serve/blob/master/docs/snapshot.md)
- [Docker]([https://github.com/pytorch/serve/blob/master/docker/README.md](https://github.com/pytorch/serve/blob/master/docker/README.md))
- [Docker](../docker/README.md)

### Can I run Torchserve APIs on ports other than the default 8080 & 8081?
Yes, Torchserve API ports are configurable using a properties file or environment variable.
Refer [configuration.md](https://github.com/pytorch/serve/blob/master/docs/configuration.md) for more details.
Refer [configuration.md](configuration.md) for more details.
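For example, a minimal sketch (the `inference_address`/`management_address` keys come from the configuration docs; the port values here are arbitrary):

```bash
# move the inference and management APIs off the default 8080/8081 ports
cat > config.properties <<'EOF'
inference_address=http://0.0.0.0:8085
management_address=http://0.0.0.0:8086
EOF
torchserve --start --model-store model_store --ts-config config.properties
```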


### How can I resolve model specific python dependency?
You can provide a requirements.txt while creating a mar file using the `--requirements-file`/`-r` flag. Also, you can add dependency files using the `--extra-files` flag.
Refer [configuration.md](https://github.com/pytorch/serve/blob/master/docs/configuration.md) for more details.
Refer [configuration.md](configuration.md) for more details.
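A minimal sketch (the model, handler, and file names are illustrative):

```bash
torch-model-archiver --model-name my_model --version 1.0 \
    --serialized-file model.pt \
    --handler my_handler.py \
    --extra-files utils.py \
    --requirements-file requirements.txt
```

Note that TorchServe installs per-model requirements only when `install_py_dep_per_model` is enabled in its configuration.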

### Can I deploy Torchserve in Kubernetes?
Yes, you can deploy Torchserve in Kubernetes using Helm charts.
Refer [Kubernetes deployment ](https://github.com/pytorch/serve/blob/master/kubernetes/README.md) for more details.
Refer [Kubernetes deployment ](../kubernetes/README.md) for more details.

### Can I deploy Torchserve with AWS ELB and AWS ASG?
Yes, you can deploy Torchserve on a multinode ASG AWS EC2 cluster. There is a cloud formation template available [here](https://github.com/pytorch/serve/blob/master/cloudformation/ec2-asg.yaml) for this type of deployment. Refer [ Multi-node EC2 deployment behind Elastic LoadBalancer (ELB)](https://github.com/pytorch/serve/tree/master/cloudformation#multi-node-ec2-deployment-behind-elastic-loadbalancer-elb) for more details.

### How can I backup and restore Torchserve state?
TorchServe preserves server runtime configuration across sessions such that a TorchServe instance experiencing either a planned or unplanned service stop can restore its state upon restart. These saved runtime configuration files can be used for backup and restore.
Refer [TorchServe model snapshot](https://github.com/pytorch/serve/blob/master/docs/snapshot.md#torchserve-model-snapshot) for more details.
Refer [TorchServe model snapshot](snapshot.md#torchserve-model-snapshot) for more details.
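For instance, a sketch of a restore (the snapshot file name is illustrative; snapshots are written under `logs/config/` by default):

```bash
# restart TorchServe from a previously saved snapshot
torchserve --start --model-store model_store \
    --ts-config logs/config/20201211103000000-startup.cfg
```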

### How can I build a Torchserve image from source?
Torchserve has a utility [script]([https://github.com/pytorch/serve/blob/master/docker/build_image.sh](https://github.com/pytorch/serve/blob/master/docker/build_image.sh)) for creating docker images, the docker image can be hardware-based CPU or GPU compatible. A Torchserve docker image could be CUDA version specific as well.
Torchserve has a utility [script](../docker/build_image.sh) for creating docker images, the docker image can be hardware-based CPU or GPU compatible. A Torchserve docker image could be CUDA version specific as well.

All these docker images can be created using `build_image.sh` with appropriate options.

Run `./build_image.sh --help` for all available options.

Refer [Create Torchserve docker image from source](../docker/README.md#create-torchserve-docker-image-from-source) for more details.
Refer [Create Torchserve docker image from source](../docker/README.md#create-torchserve-docker-image) for more details.

### How to build a Torchserve image for a specific branch or commit id?
To create a Docker image for a specific branch, use the following command:
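The command itself is elided in this view; based on the build script's documented options, it is presumably along these lines (the branch name is a placeholder):

```bash
./build_image.sh -b <branch_name>
```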
@@ -84,50 +84,50 @@ The image created using Dockerfile.dev has Torchserve installed from source wher

## API
Relevant documents
- [Torchserve Rest API](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md#model-zoo)
- [Torchserve Rest API](../docs/model_zoo.md#model-zoo)

### What can I use other than *curl* to make requests to Torchserve?
You can use any tool like Postman, Insomnia or even use a python script to do so. Find sample python script [here](https://github.com/pytorch/serve/blob/master/docs/default_handlers.md#torchserve-default-inference-handlers).

### How can I add a custom API to an existing framework?
You can add a custom API using **plugins SDK** available in Torchserve.
Refer to [serving sdk](https://github.com/pytorch/serve/blob/master/serving-sdk) and [plugins](https://github.com/pytorch/serve/blob/master/plugins) for more details.
Refer to [serving sdk](../serving-sdk) and [plugins](../plugins) for more details.

### How can I pass multiple images in an inference request call to my model?
You can provide multiple data in a single inference request to your custom handler as a key-value pair in the `data` object.
Refer [this](https://github.com/pytorch/serve/issues/529#issuecomment-658012913) for more details.
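As an illustration (the endpoint, model name, and file names are assumed), a multipart request can carry several images, and each arrives in the handler's `data` object under its form key:

```bash
curl -X POST http://localhost:8080/predictions/my_model \
    -F "img1=@kitten.jpg" \
    -F "img2=@puppy.jpg"
```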

## Handler
Relevant documents
- [Default handlers](https://github.com/pytorch/serve/blob/master/docs/model_zoo.md#model-zoo)
- [Custom Handlers](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers)
- [Default handlers](default_handlers.md#torchserve-default-inference-handlers)
- [Custom Handlers](custom_service.md#custom-handlers)

### How do I return an image output for a model?
You would have to write a custom handler with the post processing to return image.
Refer [custom service documentation](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers) for more details.
Refer [custom service documentation](custom_service.md#custom-handlers) for more details.

### How to enhance the default handlers?
Write a custom handler that extends the default handler and just override the methods to be tuned.
Refer [custom service documentation](https://github.com/pytorch/serve/blob/master/docs/custom_service.md#custom-handlers) for more details.
Refer [custom service documentation](custom_service.md#custom-handlers) for more details.

### Do I always have to write a custom handler or are there default ones that I can use?
Yes, you can deploy your model with no code by using built-in default handlers.
Refer [default handlers](https://github.com/pytorch/serve/blob/master/docs/default_handlers.md#torchserve-default-inference-handlers) for more details.
Refer [default handlers](default_handlers.md#torchserve-default-inference-handlers) for more details.
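For example, a model can be packaged against the built-in `image_classifier` handler with no handler code at all (the file names are illustrative):

```bash
torch-model-archiver --model-name resnet18 --version 1.0 \
    --model-file model.py \
    --serialized-file resnet18.pth \
    --handler image_classifier \
    --extra-files index_to_name.json
```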

### Is it possible to deploy Hugging Face models?
Yes, you can deploy Hugging Face models using a custom handler.
Refer [Huggingface_Transformers](https://github.com/pytorch/serve/blob/master/examples/Huggingface_Transformers/README.md) for example.
Refer [Huggingface_Transformers](../examples/Huggingface_Transformers/README.md) for example.

## Model-archiver
Relevant documents
- [Model-archiver ](https://github.com/pytorch/serve/tree/master/model-archiver#torch-model-archiver-for-torchserve)
- [Docker Readme](https://github.com/pytorch/serve/blob/master/docker/README.md)
- [Model-archiver ](../model-archiver/README.md#torch-model-archiver-for-torchserve)
- [Docker Readme](../docker/README.md)

### What is a mar file?
A mar file is a zip file consisting of all model artifacts with the ".mar" extension. The cmd-line utility *torch-model-archiver* is used to create a mar file.

### How can I create a mar file using a Torchserve docker container?
Yes, you can create your mar file using a Torchserve container. Follow the steps given [here](https://github.com/pytorch/serve/blob/master/docker/README.md#create-torch-model-archiver-from-container).
Yes, you can create your mar file using a Torchserve container. Follow the steps given [here](../docker/README.md#create-torch-model-archiver-from-container).

### Can I add multiple serialized files in single mar file?
Currently `TorchModelArchiver` allows supplying only one serialized file with `--serialized-file` parameter while creating the mar. However, you can supply any number and any type of file with `--extra-files` flag. All the files supplied in the mar file are available in `model_dir` location which can be accessed through the context object supplied to the handler's entry point.
Expand All @@ -137,7 +137,7 @@ Sample code snippet:
properties = context.system_properties
model_dir = properties.get("model_dir")
```
Refer [Torch model archiver cli](https://github.com/pytorch/serve/blob/master/model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.
Refer [Torch model archiver cli](../model-archiver/README.md#torch-model-archiver-command-line-interface) for more details.
Relevant issues: [[#633](https://github.com/pytorch/serve/issues/633)]

### Can I download and register a model using an S3 presigned v4 URL?