
Deep Hand Pose

This repository provides the source code for the deep learning components described in "Depth-based hand pose estimation: methods, data, and challenges". The script examples/deep_hand_pose/train.sh runs my implementation of [1] on the NYU dataset and can be configured to use a variety of datasets.

[1] M. Oberweger, P. Wohlhart, and V. Lepetit. Hands Deep in Deep Learning for Hand Pose Estimation. CVWW, 2015.
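
Which dataset the script points at is decided inside train.sh and the prototxt files it references; as a minimal sketch of how to locate those knobs (the search strings and directory layout below are my assumptions, not a documented interface):

# Hypothetical: search the example scripts and prototxt files for dataset paths to edit
grep -rn -i -e 'nyu' -e '/mnt/data' ~/deep_hand_pose/examples/deep_hand_pose/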

I've added three new layers to Caffe, in the src/caffe/layers directory:

  • HandData, which loads a variety of hand image/annotation formats
  • PCA, which implements Oberweger's PCA bottleneck initialization
  • MVRegLoss, which adds visualization to the Euclidean loss
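
As a rough sketch for exploring these layers (the file-name patterns and layer-type strings below are my guesses; the directory listing and prototxt files themselves are authoritative):

# List the added layer sources and find where the example network definitions use them
ls ~/deep_hand_pose/src/caffe/layers/ | grep -iE 'hand|pca|reg'
grep -rn -E 'HandData|PCA|MVRegLoss' ~/deep_hand_pose/examples/deep_hand_pose/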

Step-by-step instructions

  • Clone the repository:

git clone git@github.com:jsupancic/deep_hand_pose.git ~/deep_hand_pose

  • I've converted the annotations from .mat to .csv for you; copy them into your NYU dataset directory (the commands below assume it is installed under /mnt/data/NYU-Hands-v2/):

cd ~/deep_hand_pose/

cp nyu_csv_annotations/test/*.csv /mnt/data/NYU-Hands-v2/test/

cp nyu_csv_annotations/train/*.csv /mnt/data/NYU-Hands-v2/train/
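
A quick sanity check that the annotations now sit next to the depth images (assuming the dataset root used above):

ls /mnt/data/NYU-Hands-v2/test/*.csv | head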

  • Compile Caffe and deep_hand_pose:

cd ~/deep_hand_pose/ && mkdir build && pushd build && cmake .. && make -j16 && popd
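
If you don't have a CUDA-capable GPU, upstream Caffe's CMake build has a CPU-only switch; assuming this fork keeps it, the configure step would look like this instead:

# Assumption: the fork retains upstream Caffe's CPU_ONLY CMake option
cd ~/deep_hand_pose/build && cmake -DCPU_ONLY=ON .. && make -j16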

  • Create the directory where results will be stored:

mkdir out

  • Now run the pre-trained model on the NYU dataset:

examples/deep_hand_pose/train.sh
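
Results land in the out/ directory created above; exactly what gets written there depends on the layer and visualization settings, so inspect it after a run:

ls -lh ~/deep_hand_pose/out/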

Deep Hand Pose License and Citation

The license is the same as that of Caffe, from which this code is derived; please see below.

If you find this useful, the relevant citations are:

@article{supancic2015depth,
  title={Depth-based hand pose estimation: methods, data, and challenges},
  author={Supancic III, James Steven and Rogez, Gregory and Yang, Yi and Shotton, Jamie and Ramanan, Deva},
  journal={arXiv preprint arXiv:1504.06378},
  year={2015}
}

and

@article{oberweger2015hands,
  title={Hands Deep in Deep Learning for Hand Pose Estimation},
  author={Oberweger, Markus and Wohlhart, Paul and Lepetit, Vincent},
  journal={arXiv preprint arXiv:1502.06807},
  year={2015}
}

Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.

Check out the project site for all the details and step-by-step examples.


Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.

Happy brewing!

Caffe License and Citation

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}
