PriSA

PyTorch implementation for Multimodal Sentiment Analysis with Preferential Fusion and Distance-aware Contrastive Learning (ICME 2023 Oral).

Dependencies

Environments

We provide an Anaconda environment file to help you build a runnable environment.

conda env create -f environments.yml
conda activate prisa

Datasets

Please download the MOSEI dataset into ./data for training and evaluation.
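As a minimal sketch, the expected directory can be prepared like this (the download source and the exact feature file names are not specified by the README and are left to you):

```shell
# Create the data directory the training script expects (per the README);
# place the downloaded MOSEI files inside it.
mkdir -p ./data
```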

Pretrained Models

Please download bert-base-uncased from Hugging Face into ./pretrained_models.
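A minimal sketch of one way to fetch the model, assuming the `huggingface-cli` tool from the `huggingface_hub` package is installed (the download itself is shown as a comment since it requires network access):

```shell
# Create the directory the code expects.
mkdir -p ./pretrained_models

# One way to download the model (requires network access and the
# huggingface_hub CLI; cloning https://huggingface.co/bert-base-uncased
# with git-lfs also works):
# huggingface-cli download bert-base-uncased --local-dir ./pretrained_models/bert-base-uncased
```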

Training

You can train the model using the following command. The output will be saved to /tmp/log/; you can modify .runx to change the path of the training log.

python -m runx.runx mosei.yml -i

Or you can directly run the following command without runx; the output will be saved to ./log.

python main.py

Feel free to contact us ([email protected]) if you have any problems.

Citation

@inproceedings{ma2023multimodal,
  title={Multimodal Sentiment Analysis with Preferential Fusion and Distance-aware Contrastive Learning},
  author={Ma, Feipeng and Zhang, Yueyi and Sun, Xiaoyan},
  booktitle={2023 IEEE International Conference on Multimedia and Expo (ICME)},
  pages={1367--1372},
  year={2023},
  organization={IEEE}
}
