# DRCap_Zeroshot_Audio_Captioning

## Introduction

DRCap is a data-efficient and flexible audio captioning system that requires only textual data for training and can quickly adapt to new domains without additional fine-tuning.

## Pretrained models

You can download our pretrained CLAP model and linear mapping network from Google Drive:

## Inference

Modify the variables `run_dir`, `audio_encoder_dir`, `output_dir`, and `llm_path` in `scripts/inference_drcap.sh` to match the paths of the downloaded checkpoints. Additionally, update the `source` field in `data/audiocaps_test.jsonl` so that the audio paths point to your audio files, then run:

```bash
bash scripts/inference_drcap.sh
```
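
For reference, a minimal sketch of what those variable assignments might look like inside `scripts/inference_drcap.sh` (the paths below are placeholders, not repository defaults; check the script for the exact meaning of each variable):

```bash
# Placeholder paths -- point these at your own checkpoints and repository location.
run_dir=/path/to/DRCap_Zeroshot_Audio_Captioning   # repository root
audio_encoder_dir=/path/to/pretrained_clap         # downloaded CLAP checkpoint
output_dir=/path/to/pretrained_linear_mapping      # downloaded linear mapping network
llm_path=/path/to/llm_checkpoint                   # local LLM weights
```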

## Data preparation

Prepare your jsonl data file in the following format:

{"key": "Y7fmOlUlwoNg_1", "target": "Constant rattling noise and sharp vibrations", "text": "Constant rattling noise and sharp vibrations"}
{"key": "Y6BJ455B1aAs_1", "target": "A rocket flies by followed by a loud explosion and fire crackling as a truck engine runs idle", "text": "A rocket flies by followed by a loud explosion and fire crackling as a truck engine runs idle"}

Please note that only textual data is required for training. However, for zero-shot inference, audio files are also necessary. You can find an example of the jsonl file in `data/audiocaps_test.jsonl`.
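
As a minimal sketch (not part of the repository; `data/my_train.jsonl` is a hypothetical path), a text-only training file in this format can be written directly, for example:

```bash
# Write two text-only training entries; "key" is an arbitrary identifier and
# "target"/"text" both carry the caption, matching the format above.
cat > data/my_train.jsonl << 'EOF'
{"key": "example_0", "target": "A dog barks in the distance", "text": "A dog barks in the distance"}
{"key": "example_1", "target": "Rain falls on a metal roof", "text": "Rain falls on a metal roof"}
EOF
```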

Run the following command to perform retrieval augmentation and create the text embedding support for evaluation:

```bash
bash scripts/data_preprocess.sh
```

## Model Training

You can run the following command to train the model:

```bash
bash scripts/finetune_drcap.sh
```

To train only the linear layer (without LoRA or other PEFT methods), set `use_peft=false` and `freeze_llm=true`. To turn off RAG, set `use_rag=false` and `rag_first=false`.
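
A sketch of how these settings might look in `scripts/finetune_drcap.sh` (verify the exact variable names and how they are consumed against the script):

```bash
# Linear-layer-only training: no PEFT/LoRA, LLM kept frozen.
use_peft=false
freeze_llm=true
# Disable retrieval augmentation (RAG).
use_rag=false
rag_first=false
```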

## Acknowledgement

The code for training the CLAP model is based on the WavCaps repository; we thank the contributors for open-sourcing their work.