
PersonGONE: Image Inpainting for Automated Checkout Solution

Official implementation by the authors. Team 117 - Graph@FIT

Proposed solution for AI City Challenge 2022 Track4: Multi-Class Product Counting & Recognition for Automated Retail Checkout


Paper to download

Tested environment

  • Ubuntu 20.04
  • Python 3.8
  • CUDA 11.3
  • CuDNN 8.2.1
  • PyTorch 1.11
  • Nvidia GeForce RTX 3090

Environment setup

  1. Install CUDA 11.3 and CuDNN

  2. Clone this repo:

git clone https://github.com/BUT-GRAPH-at-FIT/PersonGONE.git
  3. Create virtual environment and activate it (optional)

  4. Install dependencies

cd PersonGONE
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
pip install mmcv-full==1.4.6 -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10.0/index.html
pip install -r requirements.txt

  5. Set environment variables

export PERSON_GONE_DIR=$(pwd)

Reproduce results with pre-trained detector (preferred)

Prepare testing dataset

If you do not want to train the detector, the AIC22_Track4_TestA.zip (or TestB) archive alone is sufficient.

export TRACK_4_DATA_ROOT={/path/to/track_4/root_dir}

For example: export TRACK_4_DATA_ROOT=/mnt/data/AIC22_Track4_TestA/Test_A
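Before running the pipeline, it can be useful to sanity-check the path. A minimal sketch (the `video_id.txt` file name is taken from the steps below; the helper itself is illustrative, not part of the repo):

```python
import os

def check_track4_root(root: str) -> bool:
    """Return True if the Track 4 root directory looks usable.

    Checks only that the directory exists and contains the
    video_id.txt index file that the scripts in this repo expect.
    """
    return os.path.isdir(root) and os.path.isfile(os.path.join(root, "video_id.txt"))
```

For example, `check_track4_root(os.environ["TRACK_4_DATA_ROOT"])` should return `True` before you proceed.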

Download pre-trained model

cd $PERSON_GONE_DIR
python download_pretrained_models.py --detector

Alternatively, you may train the detector on your own (see below).

Inpainting process

Run:

python inpainting_process.py --video_id $TRACK_4_DATA_ROOT/video_id.txt

The video_id.txt file is available in AIC22_Track4_TestA and contains video IDs and video file names (located in the same directory).
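Based on the description above, the index maps video IDs to file names. A hedged parsing sketch, assuming one `<id> <filename>` pair per line (adjust the split if your file differs):

```python
def parse_video_id_file(path):
    """Parse a video_id.txt-style index into {id: filename}.

    Assumes one "<id> <filename>" pair per whitespace-separated line;
    this format is inferred from the README, not taken from the repo.
    """
    videos = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            vid, name = line.split(maxsplit=1)
            videos[int(vid)] = name
    return videos
```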

Detect ROI

Run:

python detect_ROI.py --video_id $TRACK_4_DATA_ROOT/video_id.txt

The argument --roi_seed can be set (two values); it specifies the seed position for ROI detection (the white tray) in the format x y.
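The seed marks a pixel inside the white tray, from which the ROI can be grown. A toy region-growing sketch of that idea (illustrative only; detect_ROI.py's actual method may differ):

```python
from collections import deque

def grow_roi(image, seed, tol=10):
    """Toy region growing: collect 4-connected pixels whose grayscale value
    is within `tol` of the seed pixel's value.

    `image` is a 2D list of intensities, `seed` is an (x, y) tuple, matching
    the x y order of --roi_seed. Returns the set of (x, y) pixels in the ROI.
    """
    h, w = len(image), len(image[0])
    sx, sy = seed
    target = image[sy][sx]
    seen = {(sx, sy)}
    queue = deque([(sx, sy)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen
                    and abs(image[ny][nx] - target) <= tol):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return seen
```

Seeding inside the bright tray makes the grown region stop at the darker background, which is why a reasonable default seed suffices for the challenge videos.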

Detect products and create submission

Run:

python detect_and_create_submission.py --video_id $TRACK_4_DATA_ROOT/video_id.txt

The parameters --tracker and --img_size can be set; they default to tracker = BYTE and img_size = 640.
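The interface described above can be sketched with argparse as follows (a hedged reconstruction of the CLI, not the script's actual code):

```python
import argparse

def build_parser():
    """Sketch of the detect_and_create_submission.py CLI described above.

    Defaults match the README (tracker = BYTE, img_size = 640); only
    --video_id, --tracker and --img_size are documented, so any other
    option the real script accepts is outside this sketch.
    """
    parser = argparse.ArgumentParser(
        description="Detect products and create submission")
    parser.add_argument("--video_id", required=True,
                        help="path to video_id.txt")
    parser.add_argument("--tracker", default="BYTE")
    parser.add_argument("--img_size", type=int, default=640)
    return parser
```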

Hint

All scripts default to the configuration used for the results reported to the AI City Challenge, so no arguments other than --video_id need to be set.

Train object detector for store checkout

Prepare AI City Challenge dataset

If you want to train the detector, prepare data at least from Track 1, Track 3, and Track 4 (AI City Challenge 2022).

Transform the data structure: separate the data by class

cd $PERSON_GONE_DIR
cp split_data.sh Track4/Train_SynData/segmentation_labels/split_data.sh
cd Track4/Train_SynData/segmentation_labels
bash split_data.sh

cd $PERSON_GONE_DIR
cp split_data.sh Track4/Train_SynData/syn_image_train/split_data.sh
cd Track4/Train_SynData/syn_image_train
bash split_data.sh
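The split_data.sh steps above separate the files into per-class directories. A Python equivalent of that idea, assuming the class ID is the leading underscore-separated token of each file name (an assumption; the real rule lives in split_data.sh and may differ):

```python
import os
import shutil

def split_by_class(src_dir, sep="_"):
    """Move files in `src_dir` into per-class subdirectories.

    Assumes each file name encodes its class ID as the first `sep`-separated
    token (e.g. "00001_12.jpg" -> class "00001"). This naming convention is
    a guess for illustration; check split_data.sh for the actual rule.
    """
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        if not os.path.isfile(path):
            continue  # skip already-created class directories
        cls = os.path.splitext(name)[0].split(sep)[0]
        dest = os.path.join(src_dir, cls)
        os.makedirs(dest, exist_ok=True)
        shutil.move(path, os.path.join(dest, name))
```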

Train detector (can take many hours/several days)

  1. Download pre-trained models (without the detector)

python download_pretrained_models.py

  2. Prepare the AI City Challenge dataset as described above

  3. Create the dataset

python create_dataset.py --t_1_path {/path/to/AIC22_Track_1_MTMC_Tracking} --t_3_path {/path/to/AIC22_Track3_ActionRecognition} --t_4_track {/path/to/AIC_Track4/Train_SynData}

  4. Train the detector

python train_detector.py

The arguments --batch_size and --epochs can be set; they default to batch_size = 16 and epochs = 75.

Acknowledgements

Citation

@InProceedings{Bartl_2022_CVPR,
    author    = {Bartl, Vojt\v{e}ch and \v{S}pa\v{n}hel, Jakub and Herout, Adam},
    title     = {PersonGONE: Image Inpainting for Automated Checkout Solution},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {3115-3123}
}
