
SOAR: Scene-debiasing Open-set Action Recognition


This repo contains the original PyTorch implementation of our paper:

SOAR: Scene-debiasing Open-set Action Recognition

Yuanhao Zhai, Ziyi Liu, Zhenyu Wu, Yi Wu, Chunluan Zhou, David Doermann, Junsong Yuan, and Gang Hua

University at Buffalo, Wormpex AI Research

ICCV 2023

1. Environment setup

Our project is developed upon MMAction2 v0.24.1; please follow their installation instructions to set up the environment.
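For reference, a minimal setup might look like the sketch below. The Python, PyTorch, and mmcv-full versions are assumptions, so defer to the MMAction2 v0.24.1 installation guide and this repo's requirements.

```bash
# Create an isolated environment (Python version is an assumption; check the MMAction2 docs)
conda create -n soar python=3.8 -y
conda activate soar

# Install PyTorch and mmcv-full matching your CUDA version (left unpinned here on purpose)
pip install torch torchvision
pip install mmcv-full

# Install this MMAction2-based codebase in editable mode (assumes the repo root contains setup.py)
pip install -e .
```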

2. Dataset preparation

Follow these instructions to set up the datasets.

We provide pre-extracted scene features and labels, as well as scene-distance-split subsets, for the three datasets here (coming soon). Please place them in the data folder.
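For reference only, one possible layout of the data folder is sketched below; every directory name here is a placeholder (hypothetical), since the official archives have not been released yet.

```text
data/
├── ucf101/                    # videos/rawframes prepared following MMAction2 (hypothetical)
├── scene_features/            # pre-extracted scene features (placeholder name)
├── scene_labels/              # pre-extracted scene labels (placeholder name)
└── scene_distance_splits/     # scene-distance-split subsets (placeholder name)
```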

3. Training

On top of the original MMAction2 training and evaluation scripts, we provide a simple script, tools/run.py, that combines training and evaluation.

For training and evaluating the full SOAR model (requires the pre-extracted scene labels):

python tools/run.py configs/recognition/i3d/i3d_r50_dense_32x2x1_50e_ucf101_rgb_weighted_ae_edl_dis.py --gpus 0,1,2,3

For the unsupervised version, which does not require scene labels:

python tools/run.py configs/recognition/i3d/i3d_r50_dense_32x2x1_50e_ucf101_rgb_ae_edl.py --gpus 0,1,2,3
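Both commands above assume a 4-GPU machine. Assuming the --gpus flag of tools/run.py simply takes a comma-separated list of device IDs (an assumption about the script's interface, not documented behavior), a single-GPU run would look like:

```bash
# Single-GPU run (assumes --gpus accepts any comma-separated list of device IDs)
python tools/run.py configs/recognition/i3d/i3d_r50_dense_32x2x1_50e_ucf101_rgb_ae_edl.py --gpus 0
```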

4. Evaluation

Coming soon

Citation

If you find our work helpful, please consider citing it.

@inproceedings{zhai2023soar,
  title={SOAR: Scene-debiasing Open-set Action Recognition},
  author={Zhai, Yuanhao and Liu, Ziyi and Wu, Zhenyu and Wu, Yi and Zhou, Chunluan and Doermann, David and Yuan, Junsong and Hua, Gang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={10244--10254},
  year={2023}
}

TODO list

  • Upload pre-extracted scene features and scene labels.
  • Update scene-bias evaluation code and tutorial.

Acknowledgement

This project is built heavily upon DEAR and MMAction2. We thank Wentao Bao (@Cogito2012) for the valuable discussions.