In this work, we propose CREST, a bidirectional cross-modal zero-shot learning (ZSL) approach: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning. CREST first extracts representations for attribute and visual localization, then employs Evidential Deep Learning (EDL) to measure the underlying epistemic uncertainty. It incorporates dual learning pathways, focusing on both visual-category and attribute-category alignments, to ensure robust correlation between latent and observable spaces. Moreover, we introduce an uncertainty-informed cross-modal fusion technique to refine visual-attribute inference. Extensive experiments on multiple datasets demonstrate our model's effectiveness and unique explainability.
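Concretely, EDL replaces a softmax output with a Dirichlet distribution over class probabilities, so the network's total collected evidence directly yields an epistemic-uncertainty score. Below is a minimal NumPy sketch of this standard Dirichlet-based formulation; the function name and the softplus evidence mapping are illustrative choices, not code from our implementation:

```python
import numpy as np

def edl_uncertainty(logits):
    """Dirichlet-based evidential uncertainty (standard EDL formulation).

    Non-negative evidence is obtained via softplus; the Dirichlet
    concentrations are alpha = evidence + 1, the total strength is
    S = sum(alpha), and the epistemic ("vacuity") uncertainty is
    u = K / S for K classes.
    """
    evidence = np.logaddexp(0.0, logits)          # softplus -> evidence >= 0
    alpha = evidence + 1.0                        # Dirichlet concentrations
    strength = alpha.sum(axis=-1, keepdims=True)  # total evidence mass S
    num_classes = logits.shape[-1]
    probs = alpha / strength                      # expected class probabilities
    uncertainty = num_classes / strength          # high when evidence is scarce
    return probs, uncertainty


# Flat logits carry little evidence; a confident logit lowers uncertainty.
p_flat, u_flat = edl_uncertainty(np.zeros((1, 10)))
p_conf, u_conf = edl_uncertainty(np.array([[12.0] + [0.0] * 9]))
```

The uncertainty score is what the fusion step can condition on: samples with scarce evidence (large `u`) get less weight in visual-attribute inference.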
- **[2024-04]** Our paper is released on arXiv.
- **[2024-04]** The code for pre-processing is available now!
```shell
$ pip install -r requirements.txt
```
- Python==3.9.18
- numpy==1.26.1
- scikit_learn==1.2.2
- torch==2.0.1
- torchvision==0.15.2
- tqdm==4.65.0
- transformers==4.31.0
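If you want to confirm the pinned packages resolve before touching the data, a small sanity check like the following reports what is importable; the helper name and report format are our own and not part of the repo:

```python
import importlib

def check_env(required):
    """Return {module: version-or-None} for each required import.

    None means the module is not installed; "unknown" means it imports
    but exposes no __version__ attribute.
    """
    report = {}
    for module_name in required:
        try:
            module = importlib.import_module(module_name)
            report[module_name] = getattr(module, "__version__", "unknown")
        except ImportError:
            report[module_name] = None
    return report


# Note: scikit-learn is imported as "sklearn", not "scikit_learn".
report = check_env(["numpy", "sklearn", "torch", "torchvision", "tqdm", "transformers"])
```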
Before your model can start flexing its muscles, you need to gather the superhero team of datasets: CUB, SUN, and AWA2. Just like assembling the Avengers, make sure you've got the right versions:
- CUB - Caltech-UCSD Birds-200-2011
- SUN - SUN Attribute Database: Discovering, Annotating, and Recognizing Scene Attributes
- AWA2 - A free dataset for Animal Attribute-Based Classification and Zero-Shot Learning
Oh, and don't forget the rookie of the year, xlsa17. You'll find them hanging out here.
Once you've got them all, decompress them in a folder that looks like this:
```
.
├── data
│   ├── CUB/CUB_200_2011/...
│   ├── SUN/images/...
│   ├── AWA2/Animals_with_Attributes2/...
│   └── xlsa17/data/...
└── ···
```
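To verify the layout before preprocessing, a quick check with `pathlib` can list anything missing. The expected paths are exactly those in the tree above; the helper function itself is hypothetical:

```python
from pathlib import Path

# Expected dataset directories, as laid out in the tree above.
EXPECTED = [
    "data/CUB/CUB_200_2011",
    "data/SUN/images",
    "data/AWA2/Animals_with_Attributes2",
    "data/xlsa17/data",
]

def missing_datasets(root="."):
    """Return the expected dataset paths that do not exist under root."""
    root = Path(root)
    return [rel for rel in EXPECTED if not (root / rel).is_dir()]
```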
Now, let's turn the heat up and cook those raw features until they're golden! Open your terminal and let the magic begin:
```shell
$ python preprocessing.py --dataset CUB --compression --device cuda:0
$ python preprocessing.py --dataset SUN --compression --device cuda:0
$ python preprocessing.py --dataset AWA2 --compression --device cuda:0
```
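The three commands differ only in the dataset name, so a small wrapper can run them in one go. The script name and flags come from the commands above; the wrapper itself (and its `dry_run` switch) is just a convenience sketch:

```python
import subprocess

def preprocess_all(datasets=("CUB", "SUN", "AWA2"), device="cuda:0", dry_run=False):
    """Run preprocessing.py for each dataset; with dry_run, only build the commands."""
    commands = []
    for name in datasets:
        cmd = ["python", "preprocessing.py",
               "--dataset", name, "--compression", "--device", device]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # raise if preprocessing fails
    return commands
```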
TeleAI takes data confidentiality seriously. Our source code is undergoing a thorough review process and will be shared with the community once approved. Your understanding is appreciated; stay tuned!
```bibtex
@inproceedings{huang2024crest,
  title={{CREST}: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning},
  author={Haojian Huang and Xiaozhennn Qiao and Zhuo Chen and Haodong Chen and Binyu Li and Zhe Sun and Mulin Chen and Xuelong Li},
  booktitle={ACM Multimedia 2024},
  year={2024},
  url={https://openreview.net/forum?id=RAUOcGo3Qt}
}
```