- [News!] 24-07-01: Our work is accepted by ECCV24. The arXiv paper can be found here. 🎉
- [News!] 24-07-12: We built our Project Page, which includes a brief summary of our work. 🔥
- [News!] 24-07-13: We released the datasets and code. Welcome to use this benchmark and try our proposed method! 🌟
- [News!] 24-07-16: We built the leaderboards on Papers With Code: Cross-Domain Few-Shot Object Detection. 🥂
- [News!] 24-09-16: We uploaded the presentation videos: Bilibili: English Pre, Bilibili: Chinese explanation, Youtube: English Pre. 😊
In this paper, we:
- reorganize a benchmark for Cross-Domain Few-Shot Object Detection (CD-FSOD);
- conduct an extensive study of several different kinds of detectors (Tab. 1 in the paper);
- propose a novel CD-ViTO method by enhancing an existing open-set detector (DE-ViT).

In this repo, we provide:
- links and splits for the target datasets;
- code for our CD-ViTO method;
- code for the DE-ViT-FT baseline (in case you would like to build new methods on top of it).
We take COCO as the source training data and ArTaxOr, Clipart1k, DIOR, DeepFish, NEU-DET, and UODD as targets.
As stated in the paper, we adopt the "pretrain, finetune, and test" pipeline; the pretraining stage on COCO is taken directly from DE-ViT, so in practice only the target datasets are needed to run our experiments.
The target datasets can be downloaded via the following links. (If you use the datasets, please cite them properly, thanks.)
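After downloading a target dataset, it can be handy to sanity-check its annotation file before training. The sketch below assumes the split annotations are COCO-format JSON files (an assumption on our part; the path `annotations/1_shot.json` is purely illustrative) and uses only the standard library to summarize images, boxes, and classes:

```python
import json
from collections import Counter

def summarize_coco_split(ann_path):
    """Print basic statistics for a COCO-format annotation JSON file."""
    with open(ann_path) as f:
        ann = json.load(f)
    # Map category id -> name, and count boxes per category.
    cats = {c["id"]: c["name"] for c in ann.get("categories", [])}
    per_cat = Counter(a["category_id"] for a in ann.get("annotations", []))
    print(f"{len(ann.get('images', []))} images, "
          f"{len(ann.get('annotations', []))} boxes, {len(cats)} classes")
    for cid, n in sorted(per_cat.items()):
        print(f"  {cats.get(cid, cid)}: {n} instances")
    return len(cats)

# Hypothetical path; adjust to wherever your split files actually live.
# summarize_coco_split("datasets/UODD/annotations/1_shot.json")
```

A quick check like this catches missing categories or empty few-shot splits before a full finetuning run.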
To train CD-ViTO on a custom dataset, please refer to DATASETS.md for detailed instructions.
An anaconda environment is suggested; take the name "cdfsod" as an example:

```shell
git clone [email protected]:lovelyqian/CDFSOD-benchmark.git
conda create -n cdfsod python=3.9
conda activate cdfsod
pip install -r CDFSOD-benchmark/requirements.txt
pip install -e ./CDFSOD-benchmark
cd CDFSOD-benchmark
```
- Download weights: download the pretrained model from DE-ViT.
- Run the script:

```shell
bash main_results.sh
```

To enable the controller, add --controller to main_results.sh, then run:

```shell
bash main_results.sh
```
Our work is built upon DE-ViT, and we also use the code of ViTDet and Detic to test them under this new benchmark. Thanks for their work.
If you find our paper or this code useful for your research, please consider citing us (●°u°●)」:
```bibtex
@article{fu2024cross,
  title={Cross-Domain Few-Shot Object Detection via Enhanced Open-Set Object Detector},
  author={Fu, Yuqian and Wang, Yu and Pan, Yixuan and Huai, Lian and Qiu, Xingyu and Shangguan, Zeyu and Liu, Tong and Kong, Lingjie and Fu, Yanwei and Van Gool, Luc and others},
  journal={arXiv preprint arXiv:2402.03094},
  year={2024}
}
```