(ECCV 2020) PyTorch implementation of the paper "Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild"
[PDF] [Project webpage] [Code (Detection)]
If our project is helpful for your research, please consider citing:
@INPROCEEDINGS{Xiao2020FSDetView,
author = {Yang Xiao and Renaud Marlet},
title = {Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2020}}
Code built on top of PoseFromShape.
Requirements
- Python=3.6
- PyTorch>=0.4.1
- torchvision matching your PyTorch version
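A quick way to check that your environment matches these requirements (a hypothetical snippet, not part of the repository):
# Python: verify interpreter and library versions
import sys
import torch
import torchvision
print(sys.version)             # should report 3.6.x
print(torch.__version__)       # should be >= 0.4.1
print(torchvision.__version__) # should match the installed PyTorch release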
Build
Create conda env:
## Create conda env
conda create --name FSviewpoint --file spec-file.txt
conda activate FSviewpoint
conda install -c conda-forge matplotlib
Install Blender for visualizing the estimated 3D poses:
## Install blender as a python module
conda install auxiliary/python-blender-2.77-py36_0.tar.bz2
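If the install succeeded, the module should be importable from Python (a quick sanity check, not part of the repository):
# Python: confirm the blender module is available
import bpy
print(bpy.app.version_string)  # expected to report a 2.77 build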
We evaluate our method in two commonly used settings:
Intra-dataset (ObjectNet3D): we use the train set of ObjectNet3D for training and the val set for evaluation. Following StarMap, we split the 100 object classes into 80 base classes and 20 novel classes.
Download ObjectNet3D:
cd ./data/ObjectNet3D
bash download_object3d.sh
Data structure should look like:
data/ObjectNet3D
Annotations/
Images/
ImageSets/
Pointclouds/
...
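Before training, you may want to confirm the layout on disk. Below is a hypothetical helper script (check_data.py, not part of the repository) that verifies the expected sub-directories exist:
# check_data.py -- verify the expected dataset sub-directories exist
import os
import sys

root = sys.argv[1]  # e.g. data/ObjectNet3D
for sub in ("Annotations", "Images", "ImageSets", "Pointclouds"):
    path = os.path.join(root, sub)
    print("{}: {}".format(path, "ok" if os.path.isdir(path) else "MISSING"))
Run it as: python check_data.py data/ObjectNet3D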
Inter-dataset (ObjectNet3D → Pascal3D): we use the train set of ObjectNet3D for training and the val set of Pascal3D for evaluation. Following MetaView, we use the 12 object classes shared with Pascal3D as novel classes and the remaining 88 as base classes.
Download Pascal3D:
cd ./data/Pascal3D
bash download_pascal3d.sh
Data structure should look like:
data/Pascal3D
Annotations/
Images/
ImageSets/
Pointclouds/
...
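The same hypothetical check script shown above for ObjectNet3D can be pointed at the Pascal3D root:
python check_data.py data/Pascal3D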
We provide pre-trained models from the base-class training:
# download from Dropbox
bash download_models.sh
We also provide the .pth files on BaiduPan (extraction code: no8t).
You will get a directory like:
save_models/
IntraDataset/checkpoint.pth
InterDataset/checkpoint.pth
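To inspect what a downloaded checkpoint contains, you can load it on the CPU (a minimal sketch; the exact keys depend on the training code):
# Python: peek at the checkpoint contents
import torch
ckpt = torch.load("save_models/IntraDataset/checkpoint.pth", map_location="cpu")
print(list(ckpt.keys()))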
You can also train the network yourself by running:
# Intra-Dataset
bash run/train_intra.sh
# Inter-Dataset
bash run/train_inter.sh
Fine-tune the base-training models on balanced training data that includes both base and novel classes:
bash run/finetune_intra.sh
bash run/finetune_inter.sh
In the intra-dataset setting, we test on the 20 novel classes of ObjectNet3D:
bash run/test_intra.sh
In the inter-dataset setting, we test on the 12 novel classes of Pascal3D:
bash run/test_inter.sh
For quick re-use of our model, we provide pre-trained model weights and extracted mean-class data:
# download from Dropbox
bash download_models.sh
You will get two folders:
IntraDataset_shot10/
checkpoint.pth
mean_class_attentions.pkl
InterDataset_shot10/
checkpoint.pth
mean_class_attentions.pkl
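The pickled mean-class attentions can be inspected in a similar way (a minimal sketch; the exact structure depends on the repository code):
# Python: peek at the extracted mean-class data
import pickle
with open("save_models/IntraDataset_shot10/mean_class_attentions.pkl", "rb") as f:
    attentions = pickle.load(f)
print(type(attentions))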
Once the base-class training is done, you can run few-shot fine-tuning and testing 10 times, with the few-shot training data randomly selected for each run:
bash run/multiple_times_intra.sh
bash run/multiple_times_inter.sh
To get the performance averaged over multiple runs:
python mean_metrics.py save_models/IntraDataset_shot10
python mean_metrics.py save_models/InterDataset_shot10
To test the pre-trained model on a single object-centered image, run the following command:
python demo.py \
--model {model_weight.pth} \
--class_data {mean_class_attention.pkl} \
--test_cls {test_class_name} \
--test_img {test_image_path}
The estimated viewpoint will be printed in the form of Euler angles.
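For example, with the 10-shot intra-dataset weights downloaded above (the class name and image path below are placeholders; pick a class that appears in the test split):
python demo.py \
    --model save_models/IntraDataset_shot10/checkpoint.pth \
    --class_data save_models/IntraDataset_shot10/mean_class_attentions.pkl \
    --test_cls bed \
    --test_img path/to/your/image.jpg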