ReQuEST Artifact Installation Guide
Intel Caffe enabled 8-bit inference of Convolutional Neural Networks in its 1.1.0 release, and we submitted a corresponding paper to ReQuEST 2018. This step-by-step tutorial reproduces the paper's results on the Amazon AWS cloud.
- Select an AWS cloud instance that contains the pre-built Caffe. We used an AWS C5.18xlarge instance with AMI ami-96f9c9ec; this link has the detailed information;
- Install the latest Intel C++ compiler on the AWS cloud instance. We tested the scripts with icc (ICC) 18.0.1 20171018;
- Run
source <compiler root>/bin/compilervars.sh {ia32 OR intel64}
or
source <compiler root>/bin/compilervars.csh {ia32 OR intel64}
e.g.
source /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh intel64
- Download the benchmark zip file from Dropbox;
- Unzip it and change the working directory to the benchmark folder.
- Run the command
python benchmark.py -m throughput
to measure throughput.
- Run the command
python benchmark.py -m latency
to measure latency.
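For context, the two modes report complementary numbers: throughput is images processed per second at a given batch size, while latency is the wall-clock time per batch. A minimal sketch of how such metrics are typically derived from per-batch timings (the function and names here are illustrative, not benchmark.py's actual internals):

```python
def summarize(batch_times_s, batch_size):
    """Reduce per-batch wall-clock times (in seconds) to the two headline metrics."""
    avg = sum(batch_times_s) / len(batch_times_s)
    throughput = batch_size / avg      # images per second
    latency_ms = avg * 1000.0          # milliseconds per batch
    return throughput, latency_ms

# Four batches of 32 images at 50 ms each: ~640 img/s, ~50 ms per batch.
tput, lat = summarize([0.05, 0.05, 0.05, 0.05], batch_size=32)
```

Throughput runs usually use a large batch to saturate all cores, while latency runs use a small batch, which is why the tool exposes them as separate modes.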
- Use the calibration tool to generate the quantized prototxt from the pre-trained FP32 weights, which can be downloaded from this link.
- Copy the weights, FP32 prototxt, and quantized prototxt into the /path/to/benchmark/accuracy folder, and rename each file to match the corresponding pre-existing example.
- We strongly suggest you check the file path definitions in the prototxt; absolute paths are safer than relative paths.
- Run the command
python benchmark.py -m accuracy
to measure accuracy.
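The accuracy run compares the INT8 (quantized) model against the FP32 baseline. As a point of reference, top-1 accuracy is simply the fraction of images whose highest-scoring prediction matches the label; a minimal sketch (a hypothetical helper, not part of benchmark.py):

```python
def top1_accuracy(predictions, labels):
    """Fraction of samples where the predicted class equals the true label."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

# Comparing FP32 and INT8 predictions on the same labels shows the
# accuracy drop introduced by quantization (all values here are made up).
labels = [3, 1, 4, 1, 5]
fp32_preds = [3, 1, 4, 1, 5]   # 5/5 correct
int8_preds = [3, 1, 4, 2, 5]   # 4/5 correct
drop = top1_accuracy(fp32_preds, labels) - top1_accuracy(int8_preds, labels)
```

A small drop (typically well under one percentage point) is the expected outcome of a well-calibrated INT8 model.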