Who Invented Backpropagation?
- #deep learning
- #neural networks
- #backpropagation
- Efficient backpropagation (BP) was first published in 1970 by Seppo Linnainmaa, in what is now known as the reverse mode of automatic differentiation (see the sketch after this list).
- Precursors to BP were developed by Henry J. Kelley (1960) and others in the 1960s, focusing on gradient descent in multi-stage systems.
- BP was explicitly used for minimizing cost functions by Dreyfus (1973) and later applied to neural networks by Werbos (1982).
- Amari (1967) suggested training deep multilayer perceptrons (MLPs) with stochastic gradient descent (SGD), a method proposed by Robbins and Monro in 1951.
- The first deep learning MLPs, called GMDH networks, were developed by Ivakhnenko and Lapa in 1965; their layers were trained incrementally, one at a time.
- Rumelhart et al. (1985) demonstrated experimentally that BP can learn useful internal representations in the hidden layers of neural networks.
- By 2010, plain GPU-accelerated BP was shown to train deep neural networks to record accuracy, demonstrating that unsupervised pre-training, until then widely believed necessary, could be dispensed with.
- The history of BP includes misleading accounts, with key contributions often not credited properly in later surveys.
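
To make the 1970 result concrete, here is a minimal sketch of reverse-mode automatic differentiation, the general technique that BP specializes to neural network weights: run the computation forward while recording the graph, then sweep backward once to get the derivative of the output with respect to every input. The `Value` class and all names below are illustrative, not taken from Linnainmaa's thesis or any library.

```python
# Minimal scalar reverse-mode automatic differentiation (illustrative sketch).
# Each Value remembers its parents and the local derivatives w.r.t. them;
# backward() replays the graph in reverse, accumulating d(output)/d(input).

class Value:
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data            # forward value
        self.grad = 0.0             # accumulated gradient, filled by backward()
        self._parents = parents     # Values this one was computed from
        self._grad_fns = grad_fns   # local derivative of self w.r.t. each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a + b)/da = 1 and d(a + b)/db = 1, so gradients pass through unchanged
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a * b)/da = b and d(a * b)/db = a
        return Value(self.data * other.data, (self, other),
                     (lambda g, b=other.data: g * b,
                      lambda g, a=self.data: g * a))

    def backward(self):
        # Topologically order the graph, then propagate gradients in reverse:
        # one backward sweep yields d(self)/d(every input).
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0  # d(output)/d(output)
        for v in reversed(order):
            for parent, grad_fn in zip(v._parents, v._grad_fns):
                parent.grad += grad_fn(v.grad)

# Example: f(x, y) = x * y + x  =>  df/dx = y + 1, df/dy = x
x, y = Value(3.0), Value(4.0)
f = x * y + x
f.backward()
print(f.data, x.grad, y.grad)  # 15.0 5.0 3.0
```

BP as used for neural networks is this same reverse sweep applied to a loss function, with the inputs being the network's weights; the cost of one backward pass is proportional to one forward pass, which is what makes the 1970 formulation "efficient."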