ControlCap: Controllable Region-level Captioning

This is the official implementation of the paper ControlCap: Controllable Region-level Captioning, accepted at ECCV 2024. This repository contains PyTorch training code, evaluation code, pre-trained models, and a visualization method. Based on ControlCap, we build DynRefer, which supports more tasks and achieves better performance.

arXiv preprint · Python 3.8 · PyTorch 1.11 · LICENSE

1. Contents

  • Introduction
  • Results
  • Code Usage
  • Contacts
  • Acknowledgment
  • Citation

2. Introduction

Region-level captioning is challenged by the caption degeneration issue, i.e., pre-trained multimodal models tend to predict the most frequent captions while missing the less frequent ones. In this study, we propose a controllable region-level captioning (ControlCap) approach, which introduces control words to a multimodal model to address the caption degeneration issue. Specifically, ControlCap leverages a discriminative module to generate control words within the caption space, partitioning it into multiple sub-spaces. The multimodal model is constrained to generate captions within the few sub-spaces containing the control words, which increases the chance of hitting less frequent captions and alleviates the caption degeneration issue. Furthermore, interactive control words can be given by either a human or an expert model, which enables captioning beyond the training caption space and enhances the model’s generalization ability. Extensive experiments on the Visual Genome and RefCOCOg datasets show that ControlCap improves the CIDEr score by 21.6 and 2.2 respectively, outperforming the state-of-the-art by significant margins.
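
For intuition, the sketch below illustrates the control-word idea in PyTorch: a discriminative head predicts control words from region features, and the (predicted or user-given) control words are embedded and concatenated with the region features before caption decoding, steering generation toward the corresponding caption sub-spaces. This is a minimal sketch, not the code in this repository; all module and variable names (ControlWordCaptioner, control_head, etc.) are illustrative assumptions.

import torch
import torch.nn as nn

class ControlWordCaptioner(nn.Module):
    def __init__(self, feat_dim=768, vocab_size=4096, num_control_words=512):
        super().__init__()
        # Discriminative module: scores every control word for a given region.
        self.control_head = nn.Linear(feat_dim, num_control_words)
        # Embeddings that turn control words into tokens the decoder can attend to.
        self.control_embed = nn.Embedding(num_control_words, feat_dim)
        # Stand-in for the multimodal caption decoder.
        self.decoder = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.lm_head = nn.Linear(feat_dim, vocab_size)

    def forward(self, region_feats, caption_embeds, control_ids=None, topk=3):
        # region_feats: (B, N, D) region-level visual features
        # caption_embeds: (B, T, D) embedded caption tokens (teacher forcing)
        # control_ids: (B, K) optional interactive control words from a human or expert model
        if control_ids is None:
            scores = self.control_head(region_feats.mean(dim=1))   # (B, num_control_words)
            control_ids = scores.topk(topk, dim=-1).indices        # (B, topk)
        # Concatenate control-word embeddings with region features so the decoder
        # is conditioned on the selected caption sub-spaces.
        memory = torch.cat([region_feats, self.control_embed(control_ids)], dim=1)
        hidden = self.decoder(caption_embeds, memory)              # (B, T, D)
        return self.lm_head(hidden)                                # (B, T, vocab_size)

Passing control_ids explicitly corresponds to the interactive setting, where a human or an expert model supplies the control words instead of the discriminative module.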

3. Results

Region-level captioning performance on Visual Genome (VG) and RefCOCOg.
Gradio demo of ControlCap.

4. Code Usage

5. Contacts

If you have any questions about our work or this repository, please don't hesitate to contact us by email or open an issue in this project.

6. Acknowledgment

  • Part of the code is borrowed from LAVIS, GlaMM, Osprey, and RAM; we sincerely thank them for their contributions to the community.

7. Citation

@article{zhao2024controllable,
  title={Controllable Dense Captioner with Multimodal Embedding Bridging},
  author={Zhao, Yuzhong and Liu, Yue and Guo, Zonghao and Wu, Weijia and Gong, Chen and Ye, Qixiang and Wan, Fang},
  journal={arXiv preprint arXiv:2401.17910},
  year={2024}
}
