Channel Pruning for Accelerating Very Deep Neural Networks
ICCV 2017, by Yihui He, Xiangyu Zhang and Jian Sun
Please have a look at our new work on compressing deep models:
- AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV'18), which combines channel pruning and reinforcement learning to further accelerate CNNs. Code and models are available!
- AddressNet: Shift-Based Primitives for Efficient Convolutional Neural Networks (WACV'19), which proposes a family of efficient networks based on the shift operation.
- MoBiNet: A Mobile Binary Network for Image Classification (WACV'20), binarized MobileNets.
In this repository, we release code for the following models:
| Model | Speed-up | Accuracy |
|---|---|---|
| [channel_pruning_5x](https://github.com/yihui-he/channel-pruning/releases/tag/channel_pruning_5x) | 5x | 88.1 (Top-5), 67.8 (Top-1) |
| [VGG-16_3C4x](https://github.com/yihui-he/channel-pruning/releases/tag/VGG-16_3C4x) | 4x | 89.9 (Top-5), 70.6 (Top-1) |
| [ResNet-50-2X](https://github.com/yihui-he/channel-pruning/releases/tag/ResNet-50-2X) | 2x | 90.8 (Top-5), 72.3 (Top-1) |
| [faster-RCNN-2X4X](https://github.com/yihui-he/channel-pruning/releases/tag/faster-RCNN-2X4X) | 2x | 36.7 (mAP@.5:.05:.95) |
| [faster-RCNN-2X4X](https://github.com/yihui-he/channel-pruning/releases/tag/faster-RCNN-2X4X) | 4x | 35.1 (mAP@.5:.05:.95) |
The 3C method combines spatial decomposition (Speeding up Convolutional Neural Networks with Low Rank Expansions) and channel decomposition (Accelerating Very Deep Convolutional Networks for Classification and Detection), as mentioned in Section 4.1.2 of our paper.
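As a rough illustration of the channel-decomposition half of 3C, a convolutional layer's weight tensor can be split into two thinner layers with a truncated SVD. This is a simplified NumPy sketch (the published method additionally uses layer responses; the function name and shapes here are illustrative, not the repo's actual API):

```python
import numpy as np

def channel_decompose(W, rank):
    """Split a conv weight W of shape (n, c, kh, kw) into two layers:
    a (rank, c, kh, kw) conv followed by an (n, rank, 1, 1) conv,
    via a truncated SVD of the flattened weight matrix."""
    n, c, kh, kw = W.shape
    U, S, Vt = np.linalg.svd(W.reshape(n, c * kh * kw), full_matrices=False)
    # first layer: `rank` filters spanning the original spatial support
    W_low = (np.diag(S[:rank]) @ Vt[:rank]).reshape(rank, c, kh, kw)
    # second layer: 1x1 conv recombining the `rank` responses into n channels
    P = U[:, :rank].reshape(n, rank, 1, 1)
    return W_low, P
```

Choosing `rank` below `n` trades a small approximation error for fewer multiply-accumulates, since the two thin layers together are cheaper than the original one.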
If you find the code useful in your research, please consider citing:
```
@InProceedings{He_2017_ICCV,
  author    = {He, Yihui and Zhang, Xiangyu and Sun, Jian},
  title     = {Channel Pruning for Accelerating Very Deep Neural Networks},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2017}
}
```
- Python 3 packages you might not have: `scipy`, `sklearn`, `easydict`; use `sudo pip3 install` to install them.
- For finetuning with batch size 128: 4 GPUs (~11 GB of memory each)
- Clone the repository

  ```shell
  # Make sure to clone with --recursive
  git clone --recursive https://github.com/yihui-he/channel-pruning.git
  ```
- Build my Caffe fork (which supports bicubic interpolation and resizing the image's shorter side to 256, then cropping to 224x224)

  ```shell
  cd caffe
  # If you're experienced with Caffe and have all of the requirements installed,
  # then simply do:
  make all -j8 && make pycaffe
  # Or follow the Caffe installation instructions here:
  #   http://caffe.berkeleyvision.org/installation.html
  # You might need to add pycaffe to PYTHONPATH if you already have another Caffe installed.
  ```
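For reference, the fork's preprocessing (resize the shorter side to 256, then take a centered 224x224 crop) amounts to the following geometry. This is a minimal sketch of the arithmetic only, not the fork's actual code; the function name is made up for illustration:

```python
def resize_then_crop_box(h, w, short=256, crop=224):
    """Return the target size for a shorter-side resize, and the
    centered crop box (top, left, bottom, right) taken afterwards."""
    scale = short / min(h, w)
    nh, nw = round(h * scale), round(w * scale)
    top, left = (nh - crop) // 2, (nw - crop) // 2
    return (nh, nw), (top, left, top + crop, left + crop)
```

For a 480x640 image this resizes to 256x341, then crops the central 224x224 region.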
- Download the ImageNet classification dataset: http://www.image-net.org/download-images
- Specify the ImageNet `source` path in `temp/vgg.prototxt` (lines 12 and 36)
For fast testing, you can directly download a pruned model (see the next section).

- Download the original VGG-16 model from http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel and move it to `temp/vgg.caffemodel` (or create a softlink instead)
- Start Channel Pruning

  ```shell
  python3 train.py -action c3 -caffe [GPU0]
  # or log it with ./run.sh:
  ./run.sh python3 train.py -action c3 -caffe [GPU0]
  # replace [GPU0] with an actual GPU device like 0, 1 or 2
  ```
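Under the hood, the pruning step follows the paper's two-part scheme per layer: LASSO regression selects which input channels to keep, then least squares reconstructs the remaining weights from the original responses. A minimal NumPy/scikit-learn sketch of that idea (shapes and the function name are illustrative; this is not the repo's actual implementation, which solves the LASSO along a regularization path):

```python
import numpy as np
from sklearn.linear_model import Lasso

def prune_channels(X, W, Y, n_keep, alpha=1e-4):
    """X: (N, c, khkw) sampled input patches, W: (n, c, khkw) conv weights,
    Y: (N, n) original responses.  Returns kept channel indices and
    reconstructed weights of shape (n, n_keep, khkw)."""
    N, c, khkw = X.shape
    n = W.shape[0]
    # per-channel contributions to the output: Z[:, :, i] = X_i @ W_i^T
    Z = np.stack([X[:, i] @ W[:, i].T for i in range(c)], axis=2)  # (N, n, c)
    # LASSO on the channel coefficients: zeroed coefficients mark pruned channels
    lasso = Lasso(alpha=alpha, fit_intercept=False, positive=True)
    lasso.fit(Z.reshape(N * n, c), Y.reshape(-1))
    keep = np.argsort(-lasso.coef_)[:n_keep]
    # least-squares reconstruction of the weights on the kept channels
    Xk = X[:, keep].reshape(N, n_keep * khkw)
    W_new, *_ = np.linalg.lstsq(Xk, Y, rcond=None)
    return keep, W_new.T.reshape(n, n_keep, khkw)
```

The reconstruction step is what lets the pruned layer keep approximating the original layer's output feature maps before any finetuning.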
- Combine some factorized layers for further compression, and calculate the acceleration ratio. Replace the ImageData layer of `temp/cb_3c_3C4x_mem_bn_vgg.prototxt` with [`temp/vgg.prototxt`'s](https://github.com/yihui-he/channel-pruning/blob/master/temp/vgg.prototxt#L1-L49)

  ```shell
  ./combine.sh | xargs ./calflop.sh
  ```
- Finetuning

  ```shell
  caffe train -solver temp/solver.prototxt -weights temp/cb_3c_vgg.caffemodel -gpu [GPU0,GPU1,GPU2,GPU3]
  # replace [GPU0,GPU1,GPU2,GPU3] with actual GPU devices like 0,1,2,3
  ```
- Testing

  Though testing is done while finetuning, you can test anytime with:

  ```shell
  caffe test -model path/to/prototxt -weights path/to/caffemodel -iterations 5000 -gpu [GPU0]
  # replace [GPU0] with an actual GPU device like 0, 1 or 2
  ```
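The Top-1 and Top-5 numbers reported in the table above are the standard ImageNet metrics: the true label must be the highest-scored class (Top-1), or among the five highest (Top-5). A minimal NumPy sketch of the computation (illustrative only; Caffe's Accuracy layer does this internally):

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scored
    classes.  scores: (N, num_classes), labels: length-N class indices."""
    topk = np.argsort(-scores, axis=1)[:, :k]  # indices of the k best classes
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))
```

For example, with scores `[[0.1, 0.5, 0.4], [0.7, 0.2, 0.1]]` and labels `[1, 1]`, Top-1 accuracy is 0.5 and Top-2 accuracy is 1.0.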
Pruned models (for download)
For fast testing, you can directly download pruned models from the releases: VGG-16 3C 4X, VGG-16 5X, ResNet-50 2X. Or follow the Baidu Yun download link.

Test with:

```shell
caffe test -model channel_pruning_VGG-16_3C4x.prototxt -weights channel_pruning_VGG-16_3C4x.caffemodel -iterations 5000 -gpu [GPU0]
# replace [GPU0] with an actual GPU device like 0, 1 or 2
```
For fast testing, you can directly download pruned models from the release. Or you can:
1. Clone my py-faster-rcnn repo: https://github.com/yihui-he/py-faster-rcnn
2. Use the pruned models from this repo to train Faster R-CNN 2X and 4X; solver prototxts are in https://github.com/yihui-he/py-faster-rcnn/tree/master/models/pascal_voc
You can find answers to some commonly asked questions in our GitHub wiki, or just create a new issue.