Official pytorch code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation"
```bibtex
@misc{kim2024deeptalkdynamicemotionembedding,
      title={DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation},
      author={Jisoo Kim and Jungbin Cho and Joonho Park and Soonmin Hwang and Da Eun Kim and Geon Kim and Youngjae Yu},
      year={2024},
      eprint={2408.06010},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2408.06010},
}
```
REPOSITORY UNDER CONSTRUCTION
Download the DEE, FER, TH-VQVAE, and DEEPTalk checkpoints from here. Place each file in `./DEE/checkpoint`, `./FER/checkpoint`, `./DEEPTalk/checkpoint/TH-VQVAE`, and `./DEEPTalk/checkpoint/DEEPTalk`, respectively.
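As a convenience, the directory layout above can be prepared and checked with a short script. This is only a sketch: the directory paths come from the instructions above, and the assumption that every checkpoint is a `.pth` file is inferred from the demo command.

```python
import os

# Checkpoint directories listed in the setup instructions above.
CKPT_DIRS = [
    "DEE/checkpoint",
    "FER/checkpoint",
    "DEEPTalk/checkpoint/TH-VQVAE",
    "DEEPTalk/checkpoint/DEEPTalk",
]

def prepare_layout(root="."):
    """Create the expected checkpoint directories and return those
    that do not yet contain a .pth file (assumed checkpoint format)."""
    missing = []
    for d in CKPT_DIRS:
        path = os.path.join(root, d)
        os.makedirs(path, exist_ok=True)
        if not any(f.endswith(".pth") for f in os.listdir(path)):
            missing.append(path)
    return missing

if __name__ == "__main__":
    for path in prepare_layout():
        print(f"checkpoint still missing in: {path}")
```

Running it from the repository root creates any missing directories and lists the ones that still need a downloaded checkpoint.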
```sh
cd DEEPTalk
python demo.py \
    --DEMOTE_ckpt_path ./checkpoint/DEEPTalk/DEEPTalk.pth \
    --DEE_ckpt_path ../DEE/checkpoint/DEE.pth \
    --audio_path ../demo/sample_audio.wav
```
We gratefully acknowledge the open-source projects that served as the foundation for our work.
This code is released under the MIT License.
Please note that our project relies on various other libraries, including FLAME, PyTorch3D, and Spectre, as well as several datasets.