diff --git a/README.md b/README.md
index 81762bd..9b9ef90 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ dabnn_bireal18_imagenet 61809506 ns 61056865 ns 10 <--- Bi-
 dabnn_bireal18_imagenet_stem 43279353 ns 41533009 ns 14 <--- Bi-Real Net 18 with stem module (The network structure will be described in detail in the coming paper), 56.4% top-1 on ImageNet
 ```
 
-The following is the comparison between our dabnn and [Caffe](http://caffe.berkeleyvision.org) (full precision), [TensorFlow Lite](https://www.tensorflow.org/lite) (full precision) and [BMXNet](https://github.com/hpi-xnor/BMXNet) (binary). We surprisingly observe that BMXNet is even slower than the full precision TensorFlow Lite. It suggests that the potential of binary neural networks is far from exploited until our dabnn is published.
+The following is the comparison between our dabnn and [Caffe](http://caffe.berkeleyvision.org) (full precision), [TensorFlow Lite](https://www.tensorflow.org/lite) (full precision) and [BMXNet](https://github.com/hpi-xnor/BMXNet) (binary) (the result is slightly different from the above benchmark since they are measured from different runs). We surprisingly observe that BMXNet is even slower than the full precision TensorFlow Lite. It suggests that the potential of binary neural networks is far from exploited until our dabnn is published.
 
 ![Comparison](images/comparison_en.png)
 
diff --git a/README_CN.md b/README_CN.md
index 02d3732..6e4224d 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -42,7 +42,7 @@ dabnn_bireal18_imagenet 61809506 ns 61056865 ns 10 <--- B
 dabnn_bireal18_imagenet_stem 43279353 ns 41533009 ns 14 <--- 带有 stem 模块的 Bi-Real Net 18 (将在论文中描述), ImageNet top-1 为 56.4%
 ```
 
-在 Google Pixel 1 上与 [Caffe](http://caffe.berkeleyvision.org)(全精度), [TensorFlow Lite](https://www.tensorflow.org/lite)(全精度)和 [BMXNet](https://github.com/hpi-xnor/BMXNet)(二值)的对比如下。我们很惊讶的发现现有的二值 inference 框架 BMXNet 甚至比全精度的 TensorFlow Lite 还要慢,这表明,直到 dabnn 推出之前,二值网络的潜力都远远没有被挖掘出来。
+在 Google Pixel 1 上与 [Caffe](http://caffe.berkeleyvision.org)(全精度), [TensorFlow Lite](https://www.tensorflow.org/lite)(全精度)和 [BMXNet](https://github.com/hpi-xnor/BMXNet)(二值)的对比如下(这里的数据和上面 benchmark 里的数据有轻微差异,因为它们不是一次测出来的)。我们很惊讶的发现现有的二值 inference 框架 BMXNet 甚至比全精度的 TensorFlow Lite 还要慢,这表明,直到 dabnn 推出之前,二值网络的潜力都远远没有被挖掘出来。
 
 ![Comparison](images/comparison_cn.png)