Yann LeCun
Researcher at Facebook
Publications - 418
Citations - 219,646
Yann LeCun is an academic researcher at Facebook. He has contributed to research on deep learning and artificial neural networks, has an h-index of 121, and has co-authored 369 publications receiving 171,211 citations. His previous affiliations include New York University and Bell Labs.
Papers
Journal ArticleDOI
Deep learning
TL;DR: Deep learning is making major advances in solving problems that have long resisted the best attempts of the artificial intelligence community. Because it requires very little engineering by hand and can easily take advantage of increases in available computation and data, it will have many more successes in the near future.
Journal ArticleDOI
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Journal ArticleDOI
Backpropagation applied to handwritten zip code recognition
Yann LeCun, Bernhard E. Boser, John S. Denker, D. Henderson, Richard Howard, W. Hubbard, Lawrence D. Jackel +6 more
TL;DR: This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the architecture of the network, successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service.
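The key architectural constraint the paper builds in is local connectivity with weight sharing, i.e. convolution: one small set of weights is reused at every position of the input. A minimal NumPy sketch of that idea (the function name and 'valid' cross-correlation convention are illustrative assumptions, not the paper's code):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Apply one shared kernel at every position ('valid' cross-correlation).

    Weight sharing is the task-domain constraint wired into the
    architecture: every output unit reuses the same few weights, so the
    network must detect the same local feature everywhere in the image.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Same kernel weights at every (i, j): far fewer free
            # parameters than a fully connected layer of the same size.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Because the kernel is shared, a layer over an H×W input needs only kh×kw weights instead of one weight per input-output pair, which is what makes backpropagation tractable on raw pixel data.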
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task, and Convolutional neural networks are shown to outperform all other techniques.
Proceedings ArticleDOI
Dimensionality Reduction by Learning an Invariant Mapping
TL;DR: This work presents a method - called Dimensionality Reduction by Learning an Invariant Mapping (DrLIM) - for learning a globally coherent nonlinear function that maps the data evenly to the output manifold.