PyTorch implementation of "Stacked Pooling for Boosting Scale Invariance of Crowd Counting" [ICASSP 2020].
    @inproceedings{huang2020stacked,
      title={Stacked Pooling for Boosting Scale Invariance of Crowd Counting},
      author={Huang, Siyu and Li, Xi and Cheng, Zhi-Qi and Zhang, Zhongfei and Hauptmann, Alexander},
      booktitle={IEEE International Conference on Acoustics, Speech and Signal Processing},
      pages={2578--2582},
      year={2020},
    }
This code is implemented on top of https://github.com/svishwa/crowdcount-mcnn.
Results on three benchmarks:

| Method | ShanghaiTech-A | ShanghaiTech-B | WorldExpo'10 |
| --- | --- | --- | --- |
| Vanilla Pooling | 97.63 | 21.17 | 14.74 |
| Stacked Pooling | 93.98 | 18.73 | 12.92 |
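The two rows above correspond to the pooling variants sketched below. This is a minimal PyTorch illustration of the idea only, assuming 2x2 vanilla pooling and a chain of small max pools whose intermediate maps are averaged; the kernel sizes, padding, and merging by averaging are assumptions, not the layers actually used in this repository.

```python
# Minimal sketch of the two pooling variants compared above. Kernel sizes,
# padding, and merging by averaging are assumptions for illustration, not the
# repository's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaPool(nn.Module):
    """Plain 2x2 max pooling with stride 2 (the usual downsampling layer)."""
    def forward(self, x):
        return F.max_pool2d(x, kernel_size=2, stride=2)

class StackedPool(nn.Module):
    """Chain small max pools so the effective receptive field grows at each
    step, then average the intermediate maps at the half-resolution size."""
    def __init__(self, kernels=(2, 2, 3)):           # placeholder kernel sizes
        super(StackedPool, self).__init__()
        self.kernels = kernels

    def forward(self, x):
        # The first pool halves the resolution, exactly like vanilla pooling.
        y = F.max_pool2d(x, kernel_size=self.kernels[0], stride=2)
        h, w = y.shape[2], y.shape[3]
        feats = [y]
        # Later pools keep stride 1 but enlarge the receptive field over the input.
        for k in self.kernels[1:]:
            y = F.max_pool2d(y, kernel_size=k, stride=1, padding=k // 2)
            y = y[:, :, :h, :w]                      # crop the padding overshoot
            feats.append(y)
        # Average so small and large receptive fields both contribute.
        return torch.stack(feats, dim=0).mean(dim=0)

if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)
    print(VanillaPool()(x).shape)   # torch.Size([1, 16, 32, 32])
    print(StackedPool()(x).shape)   # torch.Size([1, 16, 32, 32])
```

Because the downsampling factor is unchanged, such a module can stand in for a vanilla pooling layer without touching the rest of the backbone.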
Prerequisites:

- Python 2.7
- PyTorch 0.4.0
Data setup:

- Download the ShanghaiTech dataset from
  Dropbox: https://www.dropbox.com/s/fipgjqxl7uj8hd5/ShanghaiTech.zip?dl=0 or
  Baidu Disk: http://pan.baidu.com/s/1nuAYslz
- Create the data directory: `mkdir ./data/original/shanghaitech/`
- Save "part_A_final" under ./data/original/shanghaitech/
- Save "part_B_final" under ./data/original/shanghaitech/
- `cd ./data_preparation/`
- Run `create_gt_test_set_shtech.m` in MATLAB to create the ground-truth files for the test data
- Run `create_training_set_shtech.m` in MATLAB to create the training and validation sets along with their ground-truth files
Train:

- To train Deep Net + vanilla pooling on ShanghaiTech-A, edit the configuration in `train.py`: `pool = pools[0]`
- To train Deep Net + stacked pooling on ShanghaiTech-A, edit the configuration in `train.py`: `pool = pools[1]`
- Run `python train.py` for each configuration to start training (see the hedged configuration sketch after this list).
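For orientation, the `pool` switch edited above might look roughly like the following in `train.py`; only the `pools[0]` / `pools[1]` indexing comes from this README, and the option strings are placeholders rather than the repository's actual names.

```python
# Hypothetical illustration of the pooling switch in train.py; the strings in
# `pools` are placeholders, only the pools[0] / pools[1] indexing is from the README.
pools = ['vanilla_pool', 'stacked_pool', 'multi_kernel_pool']  # placeholder names

pool = pools[0]    # Deep Net + vanilla pooling
# pool = pools[1]  # Deep Net + stacked pooling
```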
Test:

- Follow step 1 of Train to set the corresponding `pool` in `test.py`.
- Edit `model_path` in `test.py` to the best checkpoint on the validation set (output by the training process).
- Run `python test.py` for each configuration to compare them (see the evaluation sketch after this list).
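The numbers in the results table are counting errors; assuming the standard mean absolute error (MAE) over predicted counts, a generic evaluation sketch is shown below. The names `net`, `test_loader`, the checkpoint path, and its format are assumptions, not the actual contents of `test.py`.

```python
# Generic MAE evaluation sketch; `net`, `test_loader`, and the checkpoint
# format are assumptions, not this repository's test.py.
import torch

def evaluate_mae(net, test_loader, device='cpu'):
    """Sum each predicted density map into a count and average the absolute
    error against the ground-truth count."""
    net.eval()
    total_err, n = 0.0, 0
    with torch.no_grad():
        for image, gt_density in test_loader:
            pred_density = net(image.to(device))
            pred_count = pred_density.sum().item()
            gt_count = gt_density.sum().item()
            total_err += abs(pred_count - gt_count)
            n += 1
    return total_err / max(n, 1)

# Usage (paths and names are placeholders):
# net.load_state_dict(torch.load('./saved_models/best_on_val.pth'))
# print('MAE: %.2f' % evaluate_mae(net, test_loader))
```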
Other experiments:

- To try the pooling methods described in our paper (vanilla pooling, stacked pooling, and multi-kernel pooling), edit `pool` in `train.py` and `test.py`. A hedged sketch of multi-kernel pooling appears after this list.
- To evaluate on the datasets (ShanghaiTech-A, ShanghaiTech-B) or backbone models (Base Net, Wide Net, Deep Net) described in our paper, edit `dataset_name` or `model` in `train.py` and `test.py`.
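Multi-kernel pooling, the third option named above, pools the same feature map with several kernel sizes and merges the results at a common resolution; a hedged sketch under that reading follows. The kernel sizes, strides, and nearest-neighbour upsampling are assumptions, not the repository's implementation.

```python
# Hedged sketch of multi-kernel pooling: pool with several kernel sizes,
# resize to a common half-resolution grid, and average. Kernel sizes and the
# nearest-neighbour upsampling are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelPool(nn.Module):
    def __init__(self, kernels=(2, 4, 8)):           # placeholder kernel sizes
        super(MultiKernelPool, self).__init__()
        self.kernels = kernels

    def forward(self, x):
        out_size = (x.shape[2] // 2, x.shape[3] // 2)    # match a stride-2 pool
        feats = []
        for k in self.kernels:
            y = F.max_pool2d(x, kernel_size=k, stride=k)          # coarser map per kernel
            y = F.interpolate(y, size=out_size, mode='nearest')   # common resolution
            feats.append(y)
        # Average so every pooling scale contributes equally.
        return torch.stack(feats, dim=0).mean(dim=0)

if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)
    print(MultiKernelPool()(x).shape)   # torch.Size([1, 16, 32, 32])
```

Stacked pooling (sketched earlier) reaches similarly large receptive fields by chaining small kernels instead of pooling the full-resolution map with large windows.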