GettingStarted
This guide provides the information you need to start using the OpenVINO™ Toolkit on Linux. With this guide, you will learn how to:
- Configure the Model Optimizer
- Prepare a model for sample inference
- Run the Image Classification Sample Application with the model
- This guide assumes that you have already cloned the openvino repository and successfully built the Inference Engine and samples using the build instructions.
- The original structure of the repository directories remains unchanged.
NOTE: Below, the directory to which the openvino repository is cloned is referred to as <OPENVINO_DIR>.
The Model Optimizer is a Python*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, Apache MXNet*, ONNX* and Kaldi*.
You cannot perform inference on your trained model without first running it through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, it produces an Intermediate Representation (IR) of the network, a pair of files that describes the whole model:
- .xml: Describes the network topology
- .bin: Contains the weights and biases binary data
For more information about the Model Optimizer, refer to the Model Optimizer Developer Guide.
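Downstream in this guide, this file pair is what the Inference Engine consumes. As a quick illustration, the following minimal sketch (assuming the openvino.inference_engine Python bindings that ship with this version of the toolkit are on your PYTHONPATH) reads an IR pair and lists its inputs and outputs; the file names are placeholders for the IR you will generate later:

```python
# Minimal sketch: load an IR pair produced by the Model Optimizer.
# Assumes the openvino.inference_engine Python bindings are available.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="squeezenet1.1.xml",    # network topology
                      weights="squeezenet1.1.bin")  # weights and biases
print("Inputs: ", list(net.input_info.keys()))
print("Outputs:", list(net.outputs.keys()))
```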
You can configure all supported frameworks at once OR configure one framework at a time. Choose the option that best suits your needs. If you see error messages, check for any missing dependencies (a quick import check is sketched after the framework scripts below).
NOTE: The TensorFlow* framework is not officially supported on CentOS*, so the Model Optimizer for TensorFlow cannot be configured on, or run with CentOS.
IMPORTANT: Internet access is required to complete the following steps successfully. If you access the Internet only through a proxy server, make sure that the proxy is also configured in your OS environment.
Option 1: Configure all supported frameworks at the same time
- Go to the Model Optimizer prerequisites directory:
cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
- Run the script to configure the Model Optimizer for Caffe, TensorFlow 1.x, MXNet, Kaldi*, and ONNX:
sudo ./install_prerequisites.sh
Option 2: Configure each framework separately
Configure individual frameworks separately ONLY if you did not select Option 1 above.
- Go to the Model Optimizer prerequisites directory:
cd <OPENVINO_DIR>/model_optimizer/install_prerequisites
- Run the script for your model framework. You can run more than one script:
- For Caffe:
sudo ./install_prerequisites_caffe.sh
- For TensorFlow 1.x:
sudo ./install_prerequisites_tf.sh
- For TensorFlow 2.x:
sudo ./install_prerequisites_tf2.sh
- For MXNet:
sudo ./install_prerequisites_mxnet.sh
- For ONNX:
sudo ./install_prerequisites_onnx.sh
- For Kaldi:
sudo ./install_prerequisites_kaldi.sh
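After running the script(s), you can roughly verify that the framework packages were installed into your Python environment. This is an informal sanity check, not an official verification step, and the package names below are only illustrative examples of Model Optimizer and framework dependencies:

```python
# Informal sanity check: try importing a few packages that the prerequisite
# scripts are expected to install. The package names are illustrative.
import importlib

for pkg in ("numpy", "networkx", "onnx"):
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError as err:
        print(f"{pkg}: missing ({err})")
```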
The Model Optimizer is configured for one or more frameworks. Continue to the next section to download and prepare a model for running a sample inference.
This section describes how to get a pre-trained model for sample inference and how to prepare the optimized Intermediate Representation (IR) that the Inference Engine uses.
To run the Image Classification Sample, you need a pre-trained model to run inference on. This guide uses the public SqueezeNet 1.1 Caffe* model. You can find and download this model manually, or use the OpenVINO™ Model Downloader.
With the Model Downloader, you can download other popular public deep learning topologies and OpenVINO™ pre-trained models, which are already prepared for a wide range of inference scenarios:
- object detection,
- object recognition,
- object re-identification,
- human pose estimation,
- action recognition, and others.
To download the SqueezeNet 1.1 Caffe* model to a models folder (referred to as <models_dir> below) with the Model Downloader:
- Install the prerequisites.
- Run the downloader.py script, specifying the topology name and the path to your <models_dir>. For example, to download the model to a directory named ~/public_models, run:
./downloader.py --name squeezenet1.1 --output_dir ~/public_models
When the model files are successfully downloaded, output similar to the following is printed:
################|| Downloading squeezenet1.1 ||################

========== Downloading /home/user/public_models/public/squeezenet1.1/squeezenet1.1.prototxt
... 100%, 9 KB, 19621 KB/s, 0 seconds passed

========== Downloading /home/user/public_models/public/squeezenet1.1/squeezenet1.1.caffemodel
... 100%, 4834 KB, 5159 KB/s, 0 seconds passed

========== Replacing text in /home/user/public_models/public/squeezenet1.1/squeezenet1.1.prototxt
NOTE: This section assumes that you have configured the Model Optimizer using the instructions from the Configure the Model Optimizer section.
- Create an <ir_dir> directory to contain the Intermediate Representation (IR) of the model.
- The Inference Engine can perform inference on a list of supported devices using specific device plugins. Different plugins support models of different precision formats, such as FP32, FP16, and INT8. To prepare an IR to run inference on particular hardware, run the Model Optimizer with the appropriate --data_type option.
For CPU (FP32):
python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
For GPU and MYRIAD (FP16):
python3 <OPENVINO_DIR>/model_optimizer/mo.py --input_model <models_dir>/public/squeezenet1.1/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
After the Model Optimizer script completes, the produced IR files (squeezenet1.1.xml, squeezenet1.1.bin) are in the specified <ir_dir> directory.
- Copy the squeezenet1.1.labels file from the <OPENVINO_DIR>/scripts/demo/ folder to the model IR directory. This file contains the classes that ImageNet uses, so that the inference results show text labels instead of class numbers:
cp <OPENVINO_DIR>/scripts/demo/squeezenet1.1.labels <ir_dir>
Now you are ready to run the Image Classification Sample Application.
The Inference Engine sample applications are built automatically when you build the Inference Engine using the build instructions. The binary files are located in the <OPENVINO_DIR>/bin/intel64/Release directory.
To run the Image Classification sample application with an input image on the prepared IR:
- Go to the samples build directory:
cd <OPENVINO_DIR>/bin/intel64/Release
- Run the sample executable, specifying the car.png file from the <OPENVINO_DIR>/scripts/demo/ directory as the input image, the IR of your model, and a plugin for the hardware device to perform inference on.
For CPU:
./classification_sample_async -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU
For GPU:
./classification_sample_async -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d GPU
For MYRIAD:
NOTE: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires performing additional hardware configuration steps.
./classification_sample_async -i <OPENVINO_DIR>/scripts/demo/car.png -m <ir_dir>/squeezenet1.1.xml -d MYRIAD
When the sample application completes, the label and confidence for the top 10 categories are printed on the screen. Below is a sample output with inference results on CPU:
Top 10 results:
Image ../../../scripts/demo/car.png
classid probability label
------- ----------- -----
817 0.8363342 sports car, sport car
511 0.0946487 convertible
479 0.0419130 car wheel
751 0.0091071 racer, race car, racing car
436 0.0068161 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0037564 minivan
586 0.0025741 half track
717 0.0016069 pickup, pickup truck
864 0.0012027 tow truck, tow car, wrecker
581 0.0005882 grille, radiator grille
[ INFO ] Execution successful
[ INFO ] This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool
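For a quick cross-check from Python, the following sketch reproduces the same classification on CPU with the Inference Engine Python API. It is a minimal sketch, not the official sample: it assumes the openvino.inference_engine bindings, numpy, and OpenCV (cv2) are available in your environment, and it reuses the <OPENVINO_DIR> and <ir_dir> placeholders from above:

```python
# Minimal sketch: classify car.png on CPU using the IR prepared above.
# Assumes openvino.inference_engine, numpy, and cv2 are installed.
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="<ir_dir>/squeezenet1.1.xml",
                      weights="<ir_dir>/squeezenet1.1.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))
n, c, h, w = net.input_info[input_name].input_data.shape

# Read and preprocess the input image: resize to the network input size,
# convert HWC -> CHW, and add a batch dimension.
image = cv2.imread("<OPENVINO_DIR>/scripts/demo/car.png")
blob = cv2.resize(image, (w, h)).transpose((2, 0, 1))[np.newaxis, ...].astype(np.float32)

# Run inference and print the top-10 classes with their probabilities.
with open("<ir_dir>/squeezenet1.1.labels") as f:
    labels = [line.strip() for line in f]
probs = exec_net.infer(inputs={input_name: blob})[output_name].squeeze()
for class_id in probs.argsort()[::-1][:10]:
    print(f"{class_id:7d} {probs[class_id]:.7f} {labels[class_id]}")
```

The predicted classes should match the sample output above; small differences in the probability values are expected because this sketch may preprocess the image slightly differently than the C++ sample does.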