TIFace: Improving Facial Reconstruction through Tensorial Radiance Fields and Implicit Surfaces

Ruijie Zhu, Jiahao Chang, Ziyang Song, Jiahuan Yu, Tianzhu Zhang
University of Science and Technology of China
1st place solution in View Synthesis Challenge for Human Heads @ ICCV2023

This video describes our solution, which secured first place in the "View Synthesis Challenge for Human Heads (VSCHH)" at the ICCV 2023 workshop. Watch our talk on YouTube!

News

Installation

Data Config

We provide an example scene of the ILSH dataset on Google Drive. To download the full dataset, please refer to the instructions on CodaLab.

Note that you need to complete the access process (sign and email the EULA form) to obtain the full ILSH dataset.

Put the example data into ./data; the files should be organized as follows:

data/nerf_datasets/ILSH/chaPhase/002_00
├── images
├── images_4x
├── masks
├── poses_bounds_test.npy
├── poses_bounds_train.npy
├── sam_mask
├── transforms_test.json
├── transforms_train.json
└── vit_mask
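
As a rough sketch (the Google Drive file ID and archive name below are placeholders, not the real ones), downloading and unpacking the example scene might look like this:

# Sketch only: replace <FILE_ID> with the ID from the Google Drive link above,
# and adjust the archive name to match the actual download.
pip install gdown
mkdir -p data/nerf_datasets/ILSH/chaPhase
gdown "https://drive.google.com/uc?id=<FILE_ID>" -O example_scene.zip
unzip example_scene.zip -d data/nerf_datasets/ILSH/chaPhase/
ls data/nerf_datasets/ILSH/chaPhase/002_00   # should show images/, masks/, poses_bounds_*.npy, ...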

Environment Config

Please follow the environment setup instructions for T-Face and I-Face.
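
If you only need a starting point, a minimal environment sketch is shown below. The package names and the assumption that each subdirectory ships a requirements.txt are ours (based on the upstream TensoRF and instant-nsr-pl codebases acknowledged at the end of this README); the per-subdirectory instructions remain the authoritative reference.

# Sketch only: environment name, Python version, and requirements files are assumptions.
conda create -n tiface python=3.9 -y
conda activate tiface
# Install a PyTorch build that matches your CUDA version
pip install torch torchvision
# Per-subproject dependencies (assuming each ships a requirements.txt like its upstream codebase)
pip install -r T-Face/requirements.txt
pip install -r I-Face/requirements.txt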

Running

We provide example bash commands for training and testing. Please adapt these scripts to your own configuration (paths, GPU IDs, scene names) before running them.

Training

T-Face:

# single scene
cd T-Face
python train.py --config configs/islh_mask.txt \
    --datadir ./data/nerf_datasets/ILSH/chaPhase/002_00 \
    --expname tensorf_ILSH_VM_002_00_vit_mask
# multiple scenes
cd T-Face
bash train_all.sh

I-Face:

# single scene
cd I-Face
python launch.py --config configs/neus-blender_ilsh.yaml \
    --gpu 0 \
    --train dataset.scene=\'002_00\' \
    tag=new 
# multiple scenes
cd I-Face
bash train_all_neus.sh

Validation

Pack your validation results and submit them to CodaLab for validation.

Testing

Pack your testing results and submit them to CodaLab for evaluation.
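
The required archive layout and file naming are defined by the challenge page on CodaLab, not by this repository; purely as an assumed sketch, packing rendered images into a single zip could look like:

# Sketch only: the output directory and submission layout below are hypothetical;
# follow the CodaLab submission instructions for the exact format.
cd results/test_renders        # wherever your rendered test views were written
zip -r ../submission.zip .     # then upload submission.zip on the CodaLab submission page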

BibTeX

If you find our work useful in your research, please consider citing:

@article{zhu2023tiface,
    title={TIFace: Improving Facial Reconstruction through Tensorial Radiance Fields and Implicit Surfaces}, 
    author={Zhu, Ruijie and Chang, Jiahao and Song, Ziyang and Yu, Jiahuan and Zhang, Tianzhu},
    journal={arXiv preprint arXiv:2312.09527},
    year={2023}
}

and

@InProceedings{Jang_2023_VSCHH,
    author    = {Jang, Youngkyoon and Zheng, Jiali and Song, Jifei and Dhamo, Helisa and P\'erez-Pellitero, Eduardo and Tanay, Thomas and Maggioni, Matteo and Shaw, Richard and Catley-Chandar, Sibi and Zhou, Yiren and Deng, Jiankang and Zhu, Ruijie and Chang, Jiahao and Song, Ziyang and Yu, Jiahuan and Zhang, Tianzhu and Nguyen, Khanh-Binh and Yang, Joon-Sung and Dogaru, Andreea and Egger, Bernhard and Yu, Heng and Gupta, Aarush and Julin, Joel and Jeni, L\'aszl\'o A. and Kim, Hyeseong and Cho, Jungbin and Hwang, Dosik and Lee, Deukhee and Kim, Doyeon and Seo, Dongseong and Jeon, SeungJin and Choi, YoungDon and Kang, Jun Seok and Seker, Ahmet Cagatay and Ahn, Sang Chul and Leonardis, Ales and Zafeiriou, Stefanos},
    title     = {VSCHH 2023: A Benchmark for the View Synthesis Challenge of Human Heads},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {1121-1128}
}

Acknowledgements

The code is based on TensoRF and instant-nsr-pl. The mask generation uses ViTMatte.