
Audio Retrieval with Natural Language Queries

This repository is the implementation of Audio Retrieval with Natural Language Queries and is based on the Use What You Have: Video retrieval using representations from collaborative experts repo. The datasets used in this paper are AudioCaps, CLOTHO, Activity-Net, and QuerYD.

More information can be found at our project page: https://www.robots.ox.ac.uk/~vgg/research/audio-retrieval/


❗ An extension of this work along with the new SoundDescs dataset for audio retrieval can be found here. ❗

Requirements

We used PyTorch 1.7.1, CUDA 10.1, and Python 3.7 to generate results and models. The libraries required to run this code are listed in requirements/requirements.txt.

conda create --name audio-retrieval python=3.7
conda activate audio-retrieval
pip install -r requirements/requirements.txt

To run the code below, features extracted from various datasets need to be downloaded. If there is not enough space in your working location to store some of these features (the AudioCaps file is 6GB, while the others are under 1GB), create a folder called data inside this repository as a symlink to a location with enough space. For example, run the following from the audio-experts code-base:

ln -s <path-where-data-can-be-saved> data

To download the features for each dataset, follow the steps here.

Evaluating a pretrained model on multiple seeds and reproducing results

To reproduce the results in the tables below, multiple models trained with different seeds need to be downloaded and evaluated on the test sets.

The steps needed to reproduce the results are:

  1. Select the experiment to be reproduced, which is of the form <dataset-name>-<config-file-name>. Tables with experiment names and their corresponding forms can be found in misc/exps-names.md.
  2. Download the features and splits corresponding to the dataset for which the experiment is run. For example, for AudioCaps run:
# fetch the pretrained experts for AudioCaps 
python3 misc/sync_experts.py --dataset AudioCaps

Additional examples for the datasets used in this paper can be found in misc/exps-names.md.

  3. Run the eval.py script.

For example, to reproduce the experiments for AudioCaps with all visual and audio experts, run the following line:

python eval.py --experiment audiocaps-train-full-ce-r2p1d-inst-vggish-vggsound

If the --experiment flag is not provided, the eval.py script will download and evaluate all models on the test set.
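For example, to download and evaluate every pretrained model in turn, invoke the script without the flag:

python eval.py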

Training a new model

Training a new audio-text embedding requires:

  1. The pretrained experts for the dataset used for training, which should be located in <root>/data/<dataset-name>/symlinked-feats (this will be done automatically by the utility script, or can be done manually). Examples can be found in misc/exps-names.md.
  2. A config.json file. You can define your own, or use one of the provided configs in the configs directory; an easy way to start is sketched below.
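Rather than writing a config from scratch, one simple approach is to copy a provided config and adapt the copy (the new file name below is just an illustration):

# start from a provided config and edit the copy to define your experiment
cp configs/clotho/train-vggish-vggsound.json configs/clotho/my-experiment.json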

Training is then performed with the following command:

python3 train.py --config <path-to-config.json> --device <gpu-id>

where <gpu-id> is the index of the GPU to train on. This option can be omitted to train on the CPU.
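For instance, the CLOTHO training run below can be launched on the CPU by simply dropping the flag:

python3 train.py --config configs/clotho/train-vggish-vggsound.json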

For example, to train a new embedding for the CLOTHO dataset, run the following sequence of commands:

# fetch the pretrained experts for CLOTHO 
python3 misc/sync_experts.py --dataset CLOTHO

# Train the model
python3 train.py --config configs/clotho/train-vggish-vggsound.json --device 0

AudioCaps

These are the retrieval results obtained for the AudioCaps dataset when using only audio experts:

| Experts | Task | R@1 | R@5 | R@10 | R@50 | MdR | MnR | Geom | Params | Links |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE - VGGish | t2v | 18.0(0.2) | 46.8(0.2) | 62.0(0.5) | 88.5(0.2) | 6.0(0.0) | 23.6(1.3) | 37.4(0.2) | 7.39M | config, model, log |
| CE - VGGish | v2t | 21.0(0.8) | 48.3(1.8) | 62.7(1.6) | 87.3(0.4) | 6.0(0.0) | 27.4(1.2) | 39.9(0.6) | 7.39M | config, model, log |
| CE - VGGSound | t2v | 20.5(0.6) | 52.1(0.4) | 67.0(1.0) | 91.1(1.6) | 5.0(0.0) | 20.6(2.8) | 41.5(0.7) | 12.12M | config, model, log |
| CE - VGGSound | v2t | 24.6(0.9) | 55.9(0.3) | 70.4(0.4) | 92.4(0.6) | 4.3(0.6) | 19.9(1.4) | 45.9(0.6) | 12.12M | config, model, log |
| CE - VGGish + VGGSound | t2v | 23.1(0.8) | 55.1(0.9) | 70.7(0.7) | 92.9(0.5) | 4.7(0.6) | 16.5(0.6) | 44.8(0.8) | 21.86M | config, model, log |
| CE - VGGish + VGGSound | v2t | 25.1(0.9) | 57.1(1.0) | 73.2(1.6) | 92.5(0.2) | 4.0(0.0) | 17.0(0.1) | 47.2(1.1) | 21.86M | config, model, log |
| MoEE - VGGish + VGGSound | t2v | 22.5(0.3) | 54.4(0.6) | 69.5(0.9) | 92.4(0.4) | 5.0(0.0) | 17.8(1.1) | 44.0(0.4) | 8.9M | config, model, log |
| MoEE - VGGish + VGGSound | v2t | 25.1(0.8) | 57.5(1.4) | 72.9(1.2) | 93.2(0.8) | 4.0(0.0) | 15.6(0.5) | 47.2(1.0) | 8.9M | config, model, log |
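For reference, the metric columns report recall at rank K (R@K, in %), median rank (MdR), mean rank (MnR), and a geometric-mean summary (Geom). The sketch below shows how such metrics are typically computed from a query-by-item similarity matrix. It is a minimal NumPy illustration, not the repository's evaluation code, and it assumes Geom denotes the geometric mean of R@1, R@5, and R@10:

import numpy as np

def retrieval_metrics(sims):
    # sims[i, j] is the similarity of query i to item j;
    # item i is the ground-truth match for query i.
    n = sims.shape[0]
    order = np.argsort(-sims, axis=1)  # item indices ranked best-first per query
    ranks = np.argmax(order == np.arange(n)[:, None], axis=1) + 1  # 1-indexed rank of the match
    recall = {k: 100.0 * np.mean(ranks <= k) for k in (1, 5, 10, 50)}
    geom = np.exp(np.mean(np.log([recall[1], recall[5], recall[10]])))
    return {**{f"R@{k}": v for k, v in recall.items()},
            "MdR": float(np.median(ranks)),
            "MnR": float(np.mean(ranks)),
            "Geom": float(geom)}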

Using only visual experts for AudioCaps:

| Experts | Task | R@1 | R@5 | R@10 | R@50 | MdR | MnR | Geom | Params | Links |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE - Scene | t2v | 6.1(0.4) | 22.6(0.9) | 35.8(0.6) | 69.8(0.4) | 19.3(0.6) | 69.3(5.7) | 17.0(0.5) | 7.51M | config, model, log |
| CE - Scene | v2t | 6.5(0.8) | 21.8(1.2) | 31.3(1.6) | 63.5(2.1) | 26.1(2.6) | 121.1(3.1) | 16.4(1.0) | 7.51M | config, model, log |
| CE - R2P1D | t2v | 8.2(0.5) | 28.9(0.8) | 44.7(0.9) | 76.6(1.3) | 12.7(0.6) | 58.3(9.2) | 22.0(0.8) | 6.21M | config, model, log |
| CE - R2P1D | v2t | 10.3(0.4) | 28.7(1.5) | 41.8(3.1) | 75.6(1.3) | 15.4(1.5) | 82.0(7.9) | 23.1(0.9) | 6.21M | config, model, log |
| CE - Inst | t2v | 7.7(0.2) | 29.4(1.3) | 46.7(1.3) | 79.3(0.6) | 11.7(0.6) | 50.8(3.2) | 21.9(0.7) | 7.38M | config, model, log |
| CE - Inst | v2t | 9.8(0.9) | 28.0(0.7) | 40.6(0.7) | 74.2(2.1) | 16.3(0.6) | 89.4(3.4) | 22.3(0.7) | 7.38M | config, model, log |
| CE - Scene + R2P1D | t2v | 8.8(0.1) | 31.5(0.5) | 46.8(0.1) | 77.1(2.4) | 12.0(0.0) | 57.8(8.5) | 23.5(0.2) | 16.07M | config, model, log |
| CE - Scene + R2P1D | v2t | 11.0(0.6) | 31.3(1.7) | 45.1(1.7) | 75.9(0.9) | 13.0(1.0) | 73.0(5.2) | 25.0(1.2) | 16.07M | config, model, log |
| CE - Scene + Inst | t2v | 8.7(0.5) | 30.4(0.9) | 47.4(0.5) | 78.8(1.4) | 11.7(0.6) | 53.0(6.4) | 23.2(0.7) | 17.25M | config, model, log |
| CE - Scene + Inst | v2t | 10.6(0.6) | 28.0(1.6) | 41.4(1.5) | 74.6(1.0) | 15.3(1.2) | 85.1(0.6) | 23.1(1.2) | 17.25M | config, model, log |
| CE - R2P1D + Inst | t2v | 10.1(0.2) | 33.2(0.7) | 49.6(1.1) | 77.9(2.3) | 10.7(0.6) | 57.8(8.1) | 25.5(0.2) | 15.95M | config, model, log |
| CE - R2P1D + Inst | v2t | 12.1(0.4) | 32.2(0.7) | 46.1(1.3) | 78.0(0.8) | 12.8(0.7) | 71.8(4.5) | 26.2(0.5) | 15.95M | config, model, log |

Visual and audio experts for AudioCaps:

| Experts | Task | R@1 | R@5 | R@10 | R@50 | MdR | MnR | Geom | Params | Links |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE - R2P1D + Inst + VGGish | t2v | 23.9(0.7) | 58.8(0.2) | 74.4(0.2) | 94.5(0.2) | 4.0(0.0) | 14.0(0.7) | 47.1(0.5) | 23.32M | config, model, log |
| CE - R2P1D + Inst + VGGish | v2t | 29.0(2.0) | 63.5(2.5) | 77.2(1.9) | 95.0(0.1) | 3.0(0.0) | 12.7(0.1) | 52.2(2.2) | 23.32M | config, model, log |
| CE - R2P1D + Inst + VGGSound | t2v | 27.4(0.7) | 62.8(0.7) | 78.2(0.3) | 94.9(0.3) | 3.0(0.0) | 13.1(0.6) | 51.3(0.5) | 28.05M | config, model, log |
| CE - R2P1D + Inst + VGGSound | v2t | 34.0(1.5) | 68.5(1.3) | 82.5(1.2) | 97.3(0.4) | 2.7(0.6) | 9.1(0.3) | 57.7(1.3) | 28.05M | config, model, log |
| CE - R2P1D + Inst + VGGish + VGGSound | t2v | 28.1(0.6) | 64.0(0.5) | 79.0(0.5) | 95.4(0.6) | 3.0(0.0) | 12.1(1.1) | 52.2(0.4) | 35.43M | config, model, log |
| CE - R2P1D + Inst + VGGish + VGGSound | v2t | 33.7(1.6) | 70.2(0.8) | 83.7(0.4) | 97.5(0.1) | 2.7(0.3) | 8.1(0.4) | 58.3(1.2) | 35.43M | config, model, log |

CLOTHO

| Experts | Task | R@1 | R@5 | R@10 | R@50 | MdR | MnR | Geom | Params | Links |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE - VGGish | t2v | 4.0(0.2) | 15.0(0.9) | 25.4(0.5) | 61.4(1.1) | 31.7(1.5) | 78.2(2.2) | 11.5(0.5) | 7.39M | config, model, log |
| CE - VGGish | v2t | 4.8(0.4) | 15.9(1.8) | 25.8(1.7) | 57.5(2.5) | 35.7(2.5) | 106.6(5.7) | 12.5(1.0) | 7.39M | config, model, log |
| CE - VGGish + VGGSound | t2v | 6.7(0.4) | 21.6(0.6) | 33.2(0.3) | 69.8(0.3) | 22.3(0.6) | 58.3(1.1) | 16.9(0.2) | 21.86M | config, model, log |
| CE - VGGish + VGGSound | v2t | 7.1(0.3) | 22.7(0.6) | 34.6(0.5) | 67.9(2.3) | 21.3(0.6) | 72.6(3.4) | 17.7(0.4) | 21.86M | config, model, log |
| MoEE - VGGish + VGGSound | t2v | 6.0(0.1) | 20.8(0.7) | 32.3(0.3) | 68.5(0.5) | 23.0(0.0) | 60.2(0.8) | 16.0(0.3) | 8.9M | config, model, log |
| MoEE - VGGish + VGGSound | v2t | 7.2(0.5) | 22.1(0.7) | 33.2(1.1) | 67.4(0.3) | 22.7(0.6) | 71.8(2.3) | 17.4(0.7) | 8.9M | config, model, log |

Pretraining on AudioCaps, finetuning on CLOTHO

| Experts | Task | R@1 | R@5 | R@10 | R@50 | MdR | MnR | Geom | Params | Links |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE - VGGish + VGGSound | t2v | 9.6(0.3) | 27.7(0.5) | 40.1(0.7) | 75.0(0.8) | 17.0(1.0) | 48.4(0.7) | 22.0(0.3) | 21.86M | config, model, log |
| CE - VGGish + VGGSound | v2t | 10.7(0.6) | 29.0(1.9) | 40.8(1.4) | 73.5(2.5) | 16.0(1.7) | 58.9(3.8) | 23.3(1.1) | 21.86M | config, model, log |
| MoEE - VGGish + VGGSound | t2v | 8.6(0.4) | 27.0(0.5) | 39.3(0.7) | 74.4(0.5) | 17.3(0.6) | 49.0(1.0) | 20.9(0.5) | 8.9M | config, model, log |
| MoEE - VGGish + VGGSound | v2t | 10.0(0.3) | 27.7(0.9) | 40.1(1.3) | 73.5(1.0) | 16.0(1.0) | 55.9(1.8) | 22.3(0.0) | 8.9M | config, model, log |
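This README does not give a single command for the pretrain-then-finetune setting; the sequence below is a hypothetical sketch of how such a run could look. The config file names are illustrative assumptions, so check the configs directory (and misc/exps-names.md) for the actual files:

# fetch the pretrained experts for both datasets
python3 misc/sync_experts.py --dataset AudioCaps
python3 misc/sync_experts.py --dataset CLOTHO

# pretrain on AudioCaps, then finetune on CLOTHO
# (both config paths are illustrative placeholders)
python3 train.py --config configs/audiocaps/train-vggish-vggsound.json --device 0
python3 train.py --config configs/clotho/train-vggish-vggsound-finetune.json --device 0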

Visual-centric datasets

| Experts | Dataset | Task | R@1 | R@5 | R@10 | R@50 | MdR | MnR | Geom | Params | Links |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE - VGGish | QuerYD | t2v | 3.7(0.2) | 11.7(0.4) | 17.3(0.6) | 36.3(0.3) | 115.5(5.2) | 273.5(6.7) | 9.0(0.0) | 7.39M | config, model, log |
| CE - VGGish | QuerYD | v2t | 3.8(0.2) | 11.5(0.4) | 16.8(0.2) | 35.2(0.4) | 116.3(2.1) | 271.9(5.8) | 9.0(0.3) | 7.39M | config, model, log |
| CE - VGGish | Activity-Net | t2v | 1.5(0.1) | 5.6(0.2) | 9.2(0.3) | 22.1(1.2) | 373.0(46.5) | 907.8(56.2) | 4.0(0.1) | 7.39M | config, model, log |
| CE - VGGish | Activity-Net | v2t | 1.4(0.1) | 5.3(0.1) | 8.5(0.3) | 21.9(1.3) | 370.0(40.5) | 912.1(51.6) | 4.3(0.1) | 7.39M | config, model, log |


References

[1] If you find this code useful, please consider citing:

@inproceedings{Oncescu21a,
               author       = "Oncescu, A.-M. and Koepke, A.S. and Henriques, J. and Akata, Z. and Albanie, S.",
               title        = "Audio Retrieval with Natural Language Queries",
               booktitle    = "INTERSPEECH",
               year         = "2021"
             }

[2] If you find this code useful, please also consider citing:

@inproceedings{Liu2019a,
  author    = {Liu, Y. and Albanie, S. and Nagrani, A. and Zisserman, A.},
  booktitle = {arXiv preprint arXiv:1907.13487},
  title     = {Use What You Have: Video retrieval using representations from collaborative experts},
  year      = {2019},
}
