This is an official implementation of "3D Visibility-aware Generalizable Neural Radiance Fields for Interacting Hands".

- Install the Python dependencies specified in `requirements.txt`:

  ```
  conda create -n vanerf python=3.9
  conda activate vanerf
  pip install -r requirements.txt
  ```
- Register and download the MANO data. Put `MANO_LEFT.pkl` and `MANO_RIGHT.pkl` in the folder `$ROOT/smplx/models/mano`.
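
  To confirm the files are in place, you can try loading them with the `smplx` package (a minimal sketch, assuming `smplx` is installed from the requirements and the script is run from `$ROOT`; original MANO pickles may additionally require `chumpy`):

  ```python
  # Sanity check: load both MANO hand models from the expected folder.
  import smplx

  for is_rhand in (False, True):
      mano = smplx.create("smplx/models", model_type="mano",
                          is_rhand=is_rhand, use_pca=False)
      side = "right" if is_rhand else "left"
      # MANO's template mesh has 778 vertices.
      print(f"{side} MANO loaded, vertices: {mano.v_template.shape[0]}")
  ```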
- Download the InterHand2.6M dataset and unzip it. (Note: we use the `v1.0_5fps` version and the `H+M` subset.) The expected directory structure of `$ROOT/InterHand2.6M` is as follows:

  ```
  InterHand2.6M
  ├── annotations
  │   ├── skeleton.txt
  │   ├── subject.txt
  │   ├── test
  │   ├── train
  │   └── val
  └── images
      ├── test
      ├── train
      └── val
  ```
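
  A quick way to verify the layout before processing (a sketch that only checks the paths shown in the tree above, run from `$ROOT`):

  ```python
  # Check that the expected InterHand2.6M entries exist.
  from pathlib import Path

  root = Path("InterHand2.6M")
  expected = ["annotations/skeleton.txt", "annotations/subject.txt"]
  expected += [f"{top}/{split}" for top in ("annotations", "images")
               for split in ("train", "val", "test")]
  missing = [p for p in expected if not (root / p).exists()]
  print("missing entries:", missing if missing else "none")
  ```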
- Process the dataset:

  ```
  python data_process/dataset_process.py
  ```
- Download the pretrained model and put it at `$ROOT/EXPERIMENTS/vanerf/ckpts/model.ckpt`.
- Run the evaluation:

  ```
  # small view variation
  python train.py --config ./configs/vanerf.json --run_val --model_ckpt ./EXPERIMENTS/vanerf/ckpts/model.ckpt

  # big view variation
  python train.py --config ./configs/vanerf_bvv.json --run_val --model_ckpt ./EXPERIMENTS/vanerf/ckpts/model.ckpt
  ```

  Results will be stored in the folder `$ROOT/EXPERIMENTS/vanerf/`.
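
  For a quick quantitative summary of the renderings, a minimal PSNR sketch is below; the file naming under `EXPERIMENTS/vanerf/` is an assumption, so adjust the globs to the actual evaluation output:

  ```python
  # Mean PSNR over prediction/ground-truth image pairs.
  # NOTE: the "*pred*.png" / "*gt*.png" patterns are hypothetical.
  import glob
  import imageio.v2 as imageio
  import numpy as np

  preds = sorted(glob.glob("EXPERIMENTS/vanerf/**/*pred*.png", recursive=True))
  gts = sorted(glob.glob("EXPERIMENTS/vanerf/**/*gt*.png", recursive=True))
  psnrs = []
  for p, g in zip(preds, gts):
      pred = imageio.imread(p).astype(np.float64)
      gt = imageio.imread(g).astype(np.float64)
      mse = np.mean((pred - gt) ** 2)
      psnrs.append(10.0 * np.log10(255.0 ** 2 / mse))
  print(f"mean PSNR over {len(psnrs)} pairs: {np.mean(psnrs):.2f} dB")
  ```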
- Visualize the dynamic results:

  ```
  python render_dynamic.py --config ./configs/vanerf.json --model_ckpt ./EXPERIMENTS/vanerf/ckpts/model.ckpt
  ```
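
  To turn the rendered frames into a video, something like the following works (a sketch: the frame folder and naming pattern are assumptions, so point the glob at the actual output of `render_dynamic.py`; writing MP4 with `imageio` requires the `imageio-ffmpeg` backend):

  ```python
  # Assemble rendered frames into an MP4.
  import glob
  import imageio.v2 as imageio

  frames = sorted(glob.glob("EXPERIMENTS/vanerf/dynamic/*.png"))  # hypothetical path
  with imageio.get_writer("dynamic.mp4", fps=30) as writer:
      for f in frames:
          writer.append_data(imageio.imread(f))
  print(f"wrote dynamic.mp4 from {len(frames)} frames")
  ```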

Execute the `train.py` script to train the model on the InterHand2.6M dataset:

```
python train.py --config ./configs/vanerf.json --num_gpus 4
```

The output checkpoints will be stored in `$ROOT/EXPERIMENTS/vanerf/ckpts`.
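
Before launching a multi-GPU run, it can help to confirm that PyTorch sees enough devices (a minimal check, assuming PyTorch is installed from the requirements):

```python
# Confirm enough GPUs are visible before requesting --num_gpus 4.
import torch

print("CUDA available:", torch.cuda.is_available())
print("visible GPUs:", torch.cuda.device_count())
```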

If you find our code or paper useful, please consider citing:

```
@article{huang20243d,
  title={3D Visibility-aware Generalizable Neural Radiance Fields for Interacting Hands},
  author={Huang, Xuan and Li, Hanhui and Yang, Zejun and Wang, Zhisheng and Liang, Xiaodan},
  journal={arXiv preprint arXiv:2401.00979},
  year={2024}
}
```