A curated list of the most cited deep learning papers (since 2010)
I believe there exist classic deep learning papers that are worth reading regardless of their application domain. Rather than providing an overwhelming number of papers, I would like to offer a curated list of classic deep learning papers that can be considered must-reads in their respective areas.
- 2016 : +30 citations (:sparkles: +50)
- 2015 : +100 citations (:sparkles: +200)
- 2014 : +200 citations (:sparkles: +400)
- 2013 : +300 citations (:sparkles: +600)
- 2012 : +400 citations (:sparkles: +800)
- 2011 : +500 citations (:sparkles: +1000)
- 2010 : +600 citations (:sparkles: +1200)
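For contributors checking whether a paper meets the bar above, the thresholds can be applied mechanically. Below is a minimal Python sketch (a hypothetical helper, not part of this repository) that encodes the per-year citation thresholds and reports whether a paper qualifies for the list and for a :sparkles: mark.

```python
# Hypothetical helper: per-year citation thresholds from the criteria above.
# Maps publication year -> (minimum citations for the list, minimum for a :sparkles: mark).
THRESHOLDS = {
    2016: (30, 50),
    2015: (100, 200),
    2014: (200, 400),
    2013: (300, 600),
    2012: (400, 800),
    2011: (500, 1000),
    2010: (600, 1200),
}

def check_paper(year: int, citations: int) -> str:
    """Classify a paper as 'sparkles', 'listed', or 'not eligible' by year and citation count."""
    if year not in THRESHOLDS:
        return "outside the 2010-2016 range"
    listed_min, sparkles_min = THRESHOLDS[year]
    if citations >= sparkles_min:
        return "sparkles"
    if citations >= listed_min:
        return "listed"
    return "not eligible"

# Example: a 2014 paper with 450 citations clears the :sparkles: bar (400).
print(check_paper(2014, 450))  # -> sparkles
```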
I need your contributions! Please read the contributing guide before you make a pull request.
- Survey / Review
- Theory / Future
- Optimization / Regularization
- Network Models
- Image
- Caption
- Video / Human Activity
- Word Embedding
- Machine Translation / QnA
- Speech / Etc.
- RL / Robotics
- Unsupervised
- Hardware / Software
- Papers Worth Reading
- Distinguished Researchers
A total of 85 papers, not counting those in the Hardware / Software and Papers Worth Reading sections.
- Deep learning (Book, 2016), I. Goodfellow et al. (Bengio) [html]
- Deep learning (2015), Y. LeCun, Y. Bengio and G. Hinton [html] ✨
- Deep learning in neural networks: An overview (2015), J. Schmidhuber [pdf] ✨
- Representation learning: A review and new perspectives (2013), Y. Bengio et al. [pdf] ✨
- Distilling the knowledge in a neural network (2015), G. Hinton et al. [pdf]
- Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al. [pdf]
- How transferable are features in deep neural networks? (2014), J. Yosinski et al. (Bengio) [pdf]
- Return of the Devil in the Details: Delving Deep into Convolutional Nets (2014), K. Chatfield et al. [pdf] ✨
- Why does unsupervised pre-training help deep learning? (2010), D. Erhan et al. (Bengio) [pdf]
- Understanding the difficulty of training deep feedforward neural networks (2010), X. Glorot and Y. Bengio [pdf]
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015), S. Ioffe and C. Szegedy (Google) [pdf] ✨
- Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al. (Microsoft) [pdf] ✨
- Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al. (Hinton) [pdf] ✨
- Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba [pdf]
- Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al. [pdf]
- On the importance of initialization and momentum in deep learning (2013), I. Sutskever et al. (Hinton) [pdf]
- Regularization of neural networks using dropconnect (2013), L. Wan et al. (LeCun) [pdf]
- Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al. [pdf] ✨
- Random search for hyper-parameter optimization (2012), J. Bergstra and Y. Bengio [pdf]
- Deep residual learning for image recognition (2016), K. He et al. (Microsoft) [pdf] ✨
- Region-based convolutional networks for accurate object detection and segmentation (2016), R. Girshick et al. (Microsoft) [pdf]
- Going deeper with convolutions (2015), C. Szegedy et al. (Google) [pdf] ✨
- Fast R-CNN (2015), R. Girshick (Microsoft) [pdf] ✨
- Fully convolutional networks for semantic segmentation (2015), J. Long et al. [pdf] ✨
- Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman [pdf] ✨
- OverFeat: Integrated recognition, localization and detection using convolutional networks (2014), P. Sermanet et al. (LeCun) [pdf]
- Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus [pdf] ✨
- Maxout networks (2013), I. Goodfellow et al. (Bengio) [pdf]
- Network in network (2013), M. Lin et al. [pdf]
- ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al. (Hinton) [pdf] ✨
- Large scale distributed deep networks (2012), J. Dean et al. [pdf] ✨
- Deep sparse rectifier neural networks (2011), X. Glorot et al. (Bengio) [pdf]
- Reading text in the wild with convolutional neural networks (2016), M. Jaderberg et al. (DeepMind) [pdf]
- Imagenet large scale visual recognition challenge (2015), O. Russakovsky et al. [pdf] ✨
- Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al. [pdf] ✨
- DRAW: A recurrent neural network for image generation (2015), K. Gregor et al. [pdf]
- Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al. [pdf] ✨
- Learning and transferring mid-level image representations using convolutional neural networks (2014), M. Oquab et al. [pdf]
- DeepFace: Closing the Gap to Human-Level Performance in Face Verification (2014), Y. Taigman et al. (Facebook) [pdf] ✨
- Decaf: A deep convolutional activation feature for generic visual recognition (2013), J. Donahue et al. [pdf] ✨
- Learning Hierarchical Features for Scene Labeling (2013), C. Farabet et al. (LeCun) [pdf]
- Learning mid-level features for recognition (2010), Y. Boureau et al. (LeCun) [pdf]
- Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al. (Bengio) [pdf] ✨
- Show and tell: A neural image caption generator (2015), O. Vinyals et al. [pdf] ✨
- Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al. [pdf] ✨
- Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei [pdf] ✨
- Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al. (Fei-Fei) [pdf] ✨
- DeepPose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy (Google) [pdf]
- Two-stream convolutional networks for action recognition in videos (2014), K. Simonyan et al. [pdf]
- A survey on human activity recognition using wearable sensors (2013), O. Lara and M. Labrador [pdf]
- 3D convolutional neural networks for human action recognition (2013), S. Ji et al. [pdf]
- Action recognition with improved trajectories (2013), H. Wang and C. Schmid [pdf]
- Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis (2011), Q. Le et al. [pdf]
- Glove: Global vectors for word representation (2014), J. Pennington et al. [pdf] ✨
- Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov (Google) [pdf] ✨
- Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al. (Google) [pdf] ✨
- Efficient estimation of word representations in vector space (2013), T. Mikolov et al. (Google) [pdf] ✨
- Word representations: a simple and general method for semi-supervised learning (2010), J. Turian et al. (Bengio) [pdf]
- Towards AI-complete question answering: A set of prerequisite toy tasks (2015), J. Weston et al. [pdf]
- Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al. (Bengio) [pdf] ✨
- Sequence to sequence learning with neural networks (2014), I. Sutskever et al. [pdf] ✨
- Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al. (Bengio) [pdf]
- A convolutional neural network for modelling sentences (2014), N. Kalchbrenner et al. [pdf]
- Convolutional neural networks for sentence classification (2014), Y. Kim [pdf]
- The Stanford CoreNLP natural language processing toolkit (2014), C. Manning et al. [pdf] ✨
- Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al. [pdf] ✨
- Natural language processing (almost) from scratch (2011), R. Collobert et al. [pdf] ✨
- Recurrent neural network based language model (2010), T. Mikolov et al. [pdf]
- Automatic Speech Recognition - A Deep Learning Approach (Book, 2015), D. Yu and L. Deng (Microsoft) [html]
- Speech recognition with deep recurrent neural networks (2013), A. Graves et al. (Hinton) [pdf]
- Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al. [pdf] ✨
- Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012), G. Dahl et al. [pdf] ✨
- Acoustic modeling using deep belief networks (2012), A. Mohamed et al. (Hinton) [pdf]
- Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al. (DeepMind) [pdf] ✨
- Human-level control through deep reinforcement learning (2015), V. Mnih et al. (DeepMind) [pdf] ✨
- Deep learning for detecting robotic grasps (2015), I. Lenz et al. [pdf]
- Playing Atari with deep reinforcement learning (2013), V. Mnih et al. (DeepMind) [pdf]
- Generative adversarial nets (2014), I. Goodfellow et al. (Bengio) [pdf]
- Auto-Encoding Variational Bayes (2013), D. Kingma and M. Welling [pdf]
- Building high-level features using large scale unsupervised learning (2013), Q. Le et al. [pdf] ✨
- Contractive auto-encoders: Explicit invariance during feature extraction (2011), S. Rifai et al. (Bengio) [pdf]
- An analysis of single-layer networks in unsupervised feature learning (2011), A. Coates et al. [pdf]
- Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion (2010), P. Vincent et al. (Bengio) [pdf]
- A practical guide to training restricted boltzmann machines (2010), G. Hinton [pdf]
- TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems (2016), M. Abadi et al. (Google) [pdf]
- Theano: A Python framework for fast computation of mathematical expressions (2016), R. Al-Rfou et al. (Bengio) [pdf]
- MatConvNet: Convolutional neural networks for matlab (2015), A. Vedaldi and K. Lenc [pdf]
- Caffe: Convolutional architecture for fast feature embedding (2014), Y. Jia et al. [pdf] ✨
Newly released papers which do not meet the criteria but are worth reading
- Understanding Convolutional Neural Networks (2016), J. Koushik [pdf]
- SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size (2016), F. Iandola et al. [pdf]
- Learning to Compose Neural Networks for Question Answering (2016), J. Andreas et al. [pdf]
- Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection (2016), S. Levine et al. (Google) [pdf]
- Taking the human out of the loop: A review of Bayesian optimization (2016), B. Shahriari et al. [pdf]
- EIE: Efficient inference engine on compressed deep neural network (2016), S. Han et al. [pdf]
- Adaptive Computation Time for Recurrent Neural Networks (2016), A. Graves [pdf]
- Pixel Recurrent Neural Networks (2016), A. van den Oord et al. (DeepMind) [pdf]
- Recent Advances in Convolutional Neural Networks (2015), J. Gu et al. [pdf](http://arxiv.org/pdf/1512.07108)
- LSTM: A search space odyssey (2015), K. Greff et al. [pdf]
Distinguished deep learning researchers who have published +3 (:sparkles: +6) papers that appear on the awesome list
- Jian Sun, Microsoft Research ✨
- Geoffrey Hinton, Google, University of Toronto ✨
- Quoc Le, Google ✨
- Yann LeCun, Facebook, New York University ✨
- Yoshua Bengio, University of Montreal ✨
- Aaron Courville, University of Montreal
- Alex Graves, Google DeepMind
- Andrej Karpathy, Stanford University
- Andrew Ng, Baidu
- Andrew Zisserman, University of Oxford
- Christopher Manning, Stanford University
- David Silver, Google DeepMind
- Dong Yu, Microsoft Research
- Ross Girshick, Facebook
- Kaiming He, Microsoft Research
- Karen Simonyan, Google DeepMind
- Kyunghyun Cho, New York University
- Honglak Lee, University of Michigan
- Ian Goodfellow, Google
- Ilya Sutskever, OpenAI
- Jeff Dean, Google
- Jeff Donahue, U.C. Berkeley
- Juergen Schmidhuber, Swiss AI Lab IDSIA
- Li Fei-Fei, Stanford University
- Oriol Vinyals, Google DeepMind
- Pascal Vincent, University of Montreal
- Rob Fergus, Facebook, New York University
- Ruslan Salakhutdinov, CMU
- Tomas Mikolov, Facebook
- Trevor Darrell, U.C. Berkeley
Thank you for all your contributions. Please make sure to read the contributing guide before you make a pull request.
You can follow my Facebook page or Google+ to get useful information about machine learning and robotics. If you would like to talk with me, please send a message to my Facebook page.
You can also check out my blog, where I share my thoughts on my research area (deep learning for human/robot motions). I gathered some thoughts while making this list and summarized them in a blog post, "Some trends of deep learning researches".
To the extent possible under law, Terry T. Um has waived all copyright and related or neighboring rights to this work.