- Set up dataset files.

  Download the tarballs from Hugging Face. You will need the data tarball and the preview-version annotation tarball for at least one sequence, plus the object_preview tarball and the program tarball. Organize these files as follows:

  ```
  data
  |-- data
  |   `-- scene_0x__y00z++00000000000000000000__YYYY-mm-dd-HH-MM-SS
  |-- anno_preview
  |   `-- scene_0x__y00z++00000000000000000000__YYYY-mm-dd-HH-MM-SS.pkl
  |-- object_preview
  `-- program
  ```
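  As an example of fetching and unpacking the tarballs, here is a minimal shell sketch. The repository id and tarball names are placeholders (take the actual ones from the dataset page), and it assumes each tarball already contains its top-level folder (data/, anno_preview/, object_preview/, program/):

  ```
  # Placeholder repo id -- replace with the actual dataset repository.
  huggingface-cli download <dataset_repo_id> --repo-type dataset --local-dir downloads

  # Unpack everything into the layout shown above.
  mkdir -p data
  for f in downloads/*.tar*; do tar -xf "$f" -C data/; done
  ```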
- Set up the environment.
  - Create a virtual environment of Python 3.10. This can be done with either `conda` or the Python `venv` package.
    - `conda` approach:

      ```
      conda create -p ./.conda python=3.10
      conda activate ./.conda
      ```
    - `venv` approach: first use `pyenv` or another tool to install a Python interpreter of version 3.10. Here 3.10.14 is used as an example:

      ```
      pyenv install 3.10.14
      pyenv shell 3.10.14
      ```

      Then create a virtual environment:

      ```
      python -m venv .venv --prompt mocap_blender
      . .venv/bin/activate
      ```
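    Whichever approach you use, a quick check confirms that the right interpreter is active in the environment:

    ```
    python --version   # should print Python 3.10.x
    ```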
  - Install the dependencies.

    Make sure all bundled dependencies are there:

    ```
    git submodule update --init --recursive --progress
    ```

    Use `pip` to install the packages:

    ```
    pip install -r requirements.txt
    ```
- Download the SMPL-X model (version v1.1) and place the files at `asset/smplx_v1_1`. The directory structure should look like:

  ```
  asset
  `-- smplx_v1_1
      `-- models
          |-- SMPLX_NEUTRAL.npz
          `-- SMPLX_NEUTRAL.pkl
  ```
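  As an optional sanity check (assuming a POSIX shell; the paths come from the tree above), confirm both model files are in place before launching the tool:

  ```
  ls asset/smplx_v1_1/models/SMPLX_NEUTRAL.npz \
     asset/smplx_v1_1/models/SMPLX_NEUTRAL.pkl
  ```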
- Launch the preview tool:

  ```
  python -m launch.viz.gui --cfg config/gui__preview.yml
  ```
- (Optional) View the introductory video on YouTube.