# CoADNet-CoSOD

CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection (NeurIPS 2020)

## Datasets

We employ COCO-SEG as our training dataset, which covers 78 object categories and contains 200k labeled images in total. We also use the training split of DUTS, a popular benchmark dataset for (single-image) salient object detection, as an auxiliary dataset.

We employ four datasets for performance evaluation, as listed below:

  1. Cosal2015: 50 categories, 2015 images.
  2. iCoseg: 38 categories, 643 images.
  3. MSRC: 7 categories, 210 images.
  4. CoSOD3k: 160 categories, 3316 images.

Place all of the above datasets, together with the corresponding info files, under the `../data` folder. A quick sanity check of the layout is sketched below.
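The following minimal Python sketch verifies that the expected dataset folders exist under `../data`. The exact folder names are assumptions; match them to your local copies and the corresponding info files.

```python
# Sanity-check the expected data layout (a sketch; the folder names below
# are assumptions -- adjust them to match your local dataset copies).
from pathlib import Path

data_root = Path("../data")
for name in ["COCO-SEG", "DUTS", "Cosal2015", "iCoseg", "MSRC", "CoSOD3k"]:
    if not (data_root / name).is_dir():
        print(f"missing dataset folder: {data_root / name}")
```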

## Training

  1. Download the backbone networks and put them under `./ckpt/pretrained`.
  2. Run `Pretrain.py` to pretrain the whole network, which helps it learn saliency cues and speeds up convergence.
  3. Run `Train-COCO-SEG-S1.py` to train the whole network on the COCO-SEG dataset. Note that, since COCO-SEG is derived from a generic semantic segmentation dataset (MS-COCO) and may therefore miss crucial saliency patterns, a post-refinement stage is required, as implemented in `Train-COCO-SEG-S2.py`. When training on more saliency-oriented datasets such as CoSOD3k, this stage can be skipped. The full three-stage sequence is sketched after this list.
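For convenience, here is a minimal sketch of running the three stages in order. It assumes each script accepts no required command-line arguments; check each script's argument parser before running.

```python
# A minimal sketch of the three-stage training pipeline, run in order.
# Assumption: the scripts take no required command-line arguments.
import subprocess

for script in ["Pretrain.py", "Train-COCO-SEG-S1.py", "Train-COCO-SEG-S2.py"]:
    subprocess.run(["python", script], check=True)  # abort on the first failure
```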

## Testing

The testing code is organized in a Jupyter notebook, `test.ipynb`, which runs inference on all four evaluation datasets. Note that the notebook exposes an `is_shuffle` option, which enables multiple trials whose predictions can be combined into a more robust final output.
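One way such trials could be aggregated is to average per-image predictions across shuffled runs. The sketch below illustrates the idea only; `model_forward`, the 224x224 map size, and the averaging rule are assumptions for illustration, not the notebook's actual API.

```python
# A sketch of aggregating predictions over shuffled trials.
# model_forward is a HYPOTHETICAL stand-in for the notebook's inference;
# the 224x224 saliency-map size is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def model_forward(images):
    # Hypothetical stand-in: returns one saliency map per input image.
    return np.stack([rng.random((224, 224)) for _ in images])

def robust_predict(images, num_trials=3):
    """Average predictions over shuffled trials, restoring original order."""
    n = len(images)
    acc = np.zeros((n, 224, 224))
    for _ in range(num_trials):
        perm = rng.permutation(n)                     # shuffle the group's image order
        preds = model_forward([images[i] for i in perm])
        acc[perm] += preds                            # preds[k] belongs to image perm[k]
    return acc / num_trials

print(robust_predict([object()] * 4).shape)  # (4, 224, 224)
```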