Refining activation downsampling with SoftPool



Update 10/2021:

We have extended this work in our paper: AdaPool: Exponential Adaptive Pooling for Information-Retaining Downsampling. Info, code and resources are available at alexandrosstergiou/adaPool

Abstract

Convolutional Neural Networks (CNNs) use pooling to decrease the size of activation maps. This process is crucial to increase the receptive fields and to reduce computational requirements of subsequent convolutions. An important feature of the pooling operation is the minimization of information loss, with respect to the initial activation maps, without a significant impact on the computation and memory overhead. To meet these requirements, we propose SoftPool: a fast and efficient method for exponentially weighted activation downsampling. Through experiments across a range of architectures and pooling methods, we demonstrate that SoftPool can retain more information in the reduced activation maps. This refined downsampling leads to improvements in a CNN's classification accuracy. Experiments with pooling layer substitutions on ImageNet1K show an increase in accuracy over both original architectures and other pooling methods. We also test SoftPool on video datasets for action recognition. Again, through the direct replacement of pooling layers, we observe consistent performance improvements while computational loads and memory requirements remain limited.
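In symbols, SoftPool weights each activation within a kernel region R by its softmax over that region, so larger activations contribute more to the pooled output than under average pooling, while smaller ones are not discarded as under max pooling:

```latex
w_i = \frac{e^{a_i}}{\sum_{j \in \mathbf{R}} e^{a_j}}, \qquad
\tilde{a} = \sum_{i \in \mathbf{R}} w_i \, a_i
```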


To appear in IEEE International Conference on Computer Vision (ICCV) 2021

[arXiv preprint]     [CVF open access]     [video presentation]

[Figure] Image-based pooling: images are sub-sampled in both height and width by half (original vs. SoftPool).

[Figure] Video-based pooling: videos are sub-sampled in time, height and width by half (original vs. SoftPool).

Dependencies

All parts of the code assume torch version 1.4 or higher. Earlier versions may exhibit instability issues.

! Disclaimer: This repository's structure is heavily influenced by Ziteng Gao's LIP repo https://github.com/sebgao/LIP

Installation

You can build the repo through the following commands:

$ git clone https://github.com/alexandrosstergiou/SoftPool.git
$ cd SoftPool-master/pytorch
$ make install
--- (optional) ---
$ make test

Usage

You can load any of the 1D, 2D or 3D variants after the installation with:

import softpool_cuda
from SoftPool import soft_pool1d, SoftPool1d
from SoftPool import soft_pool2d, SoftPool2d
from SoftPool import soft_pool3d, SoftPool3d
  • soft_poolxd: the functional interface for SoftPool.
  • SoftPoolxd: the class-based version, which creates an object that can be referenced later in the code.
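To illustrate the computation itself, here is a minimal pure-NumPy sketch of 2D SoftPool with a k×k kernel and stride k. This is a reference re-implementation for clarity, not the repo's CUDA code; the function name and loop-based layout are my own, and the real SoftPool2d additionally supports strides, padding and batched tensors.

```python
import numpy as np

def soft_pool2d_ref(x, k=2):
    """Reference SoftPool over non-overlapping k x k windows.

    Each window is reduced to sum(w_i * a_i), where the weights
    w_i = exp(a_i) / sum_j exp(a_j) form a softmax over the window,
    so larger activations dominate without discarding the rest.
    """
    H, W = x.shape
    out = np.empty((H // k, W // k))
    for i in range(H // k):
        for j in range(W // k):
            window = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            w = np.exp(window)               # exponential weights
            out[i, j] = (w * window).sum() / w.sum()
    return out

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
pooled = soft_pool2d_ref(a)  # single value between mean (2.5) and max (4.0)
```

The pooled value always lies between the window's average and its maximum, which is the "information-retaining" middle ground the paper argues for.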

ImageNet models

ImageNet weights can be downloaded from the following links:

| Network | Weights |
|---|---|
| ResNet-18 | link |
| ResNet-34 | link |
| ResNet-50 | link |
| ResNet-101 | link |
| ResNet-152 | link |
| DenseNet-121 | link |
| DenseNet-161 | link |
| DenseNet-169 | link |
| ResNeXt-50_32x4d | link |
| ResNeXt-101_32x4d | link |
| wide-ResNet50 | link |

Citation

@inproceedings{stergiou2021refining,
  title={Refining activation downsampling with SoftPool},
  author={Stergiou, Alexandros and Poppe, Ronald and Kalliatakis, Grigorios},
  booktitle={International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={10357-10366},
  organization={IEEE}
}

Licence

MIT

Additional resources

A great related project is Ren Tianhe's pytorch-pooling repo, which provides an overview of different pooling strategies.