Merge pull request #58 from JDAI-CV/update_readme
Update README
Showing 5 changed files with 77 additions and 34 deletions.
## Benchmark and Comparison

Benchmark result on Google Pixel 1 (single thread):

```
2019-05-06 10:36:48
Running data/local/tmp/dabnn_benchmark
Run on (4 X 1593.6 MHz CPU s)
***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
--------------------------------------------------------------------
Benchmark                           Time           CPU   Iterations
--------------------------------------------------------------------
dabnn_5x5_256                 3661928 ns    3638192 ns          191   <--- input: 14*14*256, kernel: 256*5*5*256, output: 14*14*256, padding: 2
dabnn_3x3_64                  1306391 ns    1281553 ns          546   <--- input: 56*56*64,  kernel: 64*3*3*64,   output: 56*56*64,  padding: 1
dabnn_3x3_128                  958388 ns     954754 ns          735   <--- input: 28*28*128, kernel: 128*3*3*128, output: 28*28*128, padding: 1
dabnn_3x3_256                  975123 ns     969810 ns          691   <--- input: 14*14*256, kernel: 256*3*3*256, output: 14*14*256, padding: 1
dabnn_3x3_256_s2               268310 ns     267712 ns         2618   <--- input: 14*14*256, kernel: 256*3*3*256, output: 7*7*256,   padding: 1, stride: 2
dabnn_3x3_512                 1281832 ns    1253921 ns          588   <--- input: 7*7*512,   kernel: 512*3*3*512, output: 7*7*512,   padding: 1
dabnn_bireal18_imagenet      61920154 ns   61339185 ns           10   <--- Bi-Real Net 18, 56.4% top-1 on ImageNet
dabnn_bireal18_imagenet_stem 43294019 ns   41401923 ns           14   <--- Bi-Real Net 18 with stem module (the network structure is described in detail in [our paper](https://arxiv.org/abs/1908.05858)), 56.4% top-1 on ImageNet
```

The following is a comparison between our dabnn and [Caffe](http://caffe.berkeleyvision.org) (full precision), [TensorFlow Lite](https://www.tensorflow.org/lite) (full precision), and [BMXNet](https://github.com/hpi-xnor/BMXNet) (binary). We were surprised to observe that BMXNet is even slower than the full-precision TensorFlow Lite, which suggests that the potential of binary neural networks was far from fully exploited before dabnn was published.

![Comparison](/images/comparison_en.png)
## If you want to benchmark an existing full-precision network structure

If you just want to benchmark the latency of a BNN instead of deploying it, you can specify which convolutions in the input ONNX model are binary by passing the "--binary-list filename" command-line argument. Each line of the file is the **output name** of a convolution.

For example, suppose you have a full-precision model named "model.onnx", and you want three convolutions, whose outputs are "34", "36", and "55" respectively, to be binary convolutions, in order to test how fast the model will be with these three binary convolutions. In this case, you should first create a text file containing

> 34
>
> 36
>
> 55

After creating the text file (let's assume it is named "my_binary_convs"), you can convert the model with

```bash
./onnx2bnn model.onnx model.dab --binary-list my_binary_convs
```

Once the command finishes, you will get a BNN model named model.dab.
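
The two steps above (write one output name per line, then run the converter) can also be done programmatically. A minimal Python sketch, using the hypothetical output names from the example:

```python
# Output names of the convolutions to binarize (from the example above).
conv_outputs = ["34", "36", "55"]

# Write one output name per line, the format --binary-list expects.
with open("my_binary_convs", "w") as f:
    f.write("\n".join(conv_outputs) + "\n")

# Then run (requires a built onnx2bnn and a real model.onnx):
#   ./onnx2bnn model.onnx model.dab --binary-list my_binary_convs
```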

## If you want to train and export a dabnn-compatible ONNX model

If you want to train and deploy a BNN on a real device, the following instructions are what you need.

Binary convolutions are not natively supported by training frameworks (e.g., TensorFlow, PyTorch, MXNet). To implement correct and dabnn-compatible binary convolutions yourself, the following points need attention:

1. The input of a binary convolution should consist only of +1/-1, but the default padding value of convolution is 0.

2. PyTorch does not support exporting the ONNX Sign operator until PyTorch 1.2.

Therefore, we provide a ["standard" PyTorch implementation](https://gist.github.com/daquexian/7db1e7f1e0a92ab13ac1ad028233a9eb) which is compatible with dabnn and produces correct results. The implementations for TensorFlow, MXNet, and other training frameworks should be similar.

#### How dabnn recognizes binary convolutions in an ONNX model

The converter `onnx2bnn` has three modes for recognizing binary convolutions:

* Aggressive (default). In this mode, onnx2bnn will mark all convolutions whose weights consist only of +1 and -1 as binary convolutions. The aggressive mode is for existing BNN models which do not use the correct padding value (-1 rather than 0). Note: the output of the generated dabnn model will differ from that of the ONNX model, since the padding value is 0 instead of -1.
* Moderate. This mode is for our "standard" implementation: a Conv operator with binary weights that follows a Pad operator with padding value -1.
* Strict. In this mode, onnx2bnn only recognizes the following natural and correct pattern of binary convolutions: a Conv operator whose input comes from a Sign op and a Pad op (the order doesn't matter), and whose weights come from a Sign op.

For now, "Aggressive" is the default mode. To enable the moderate or strict mode, pass the "--moderate" or "--strict" command-line argument to onnx2bnn.
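
As a rough illustration of the Aggressive-mode weight check (a sketch of the idea only, not onnx2bnn's actual C++ code), a weight tensor qualifies when every element is exactly +1 or -1:

```python
import numpy as np

def weights_look_binary(weights):
    # Aggressive mode marks a Conv as binary when all weights are +1 or -1.
    return bool(np.isin(weights, (-1.0, 1.0)).all())

print(weights_look_binary(np.array([[1.0, -1.0], [-1.0, 1.0]])))  # True
print(weights_look_binary(np.array([[0.5, -1.0], [1.0, 1.0]])))   # False
```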