NoProp: Training Neural Networks without Back-propagation or Forward-propagation
- #Neural Networks
- #Gradient-Free Learning
- #Machine Learning
- Introduces NoProp, a new learning method for neural networks that doesn't rely on forward or backward propagation.
- NoProp is inspired by diffusion and flow matching methods: each layer independently learns to denoise a noisy version of the target label (see the sketch after this list).
- By departing from the traditional gradient-based learning framework, NoProp changes how credit assignment is done within the network, a step towards gradient-free learning.
- Unlike methods that learn hierarchical representations, NoProp fixes each layer's representation beforehand to a noised version of the target, so each layer only learns a local denoising step.
- Demonstrated effectiveness on MNIST, CIFAR-10, and CIFAR-100, with superior accuracy and computational efficiency compared to existing back-propagation-free methods.
- Potential impacts include more efficient distributed learning (layers could in principle be trained in parallel, with no global backward pass) and qualitatively different learning dynamics.
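A minimal PyTorch sketch of the idea, assuming a small MLP block per layer, a simple linear noise schedule, and fixed label embeddings. The block architecture, schedule, hyperparameters, and the `train_step`/`predict` helpers are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

# Illustrative hyperparameters (not the paper's): 10 classes, flattened
# 28x28 inputs, 10 denoising "layers" (timesteps).
NUM_CLASSES, EMBED_DIM, IMG_DIM, T = 10, 32, 784, 10

# Label embeddings act as the clean targets; kept fixed in this sketch.
label_embed = nn.Embedding(NUM_CLASSES, EMBED_DIM)
label_embed.weight.requires_grad_(False)

# One independent denoising block per layer; no gradients flow between blocks.
blocks = nn.ModuleList(
    nn.Sequential(nn.Linear(IMG_DIM + EMBED_DIM, 256), nn.ReLU(),
                  nn.Linear(256, EMBED_DIM))
    for _ in range(T)
)
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]

# Simple linear schedule for the fraction of signal retained at each step.
alphas = torch.linspace(0.1, 0.9, T)

def train_step(x, y):
    """Train every block independently on a noised version of the target
    embedding; no forward or backward pass spans the whole stack."""
    u_y = label_embed(y)  # clean target embedding, shape (batch, EMBED_DIM)
    for t, (block, opt) in enumerate(zip(blocks, opts)):
        a = alphas[t]
        # Fixed input representation: a noised target, not the previous
        # layer's output, so this block needs nothing from other blocks.
        z_noisy = a.sqrt() * u_y + (1 - a).sqrt() * torch.randn_like(u_y)
        pred = block(torch.cat([x, z_noisy], dim=-1))
        loss = ((pred - u_y) ** 2).mean()  # local denoising objective
        opt.zero_grad()
        loss.backward()  # gradient stays inside this one block
        opt.step()

@torch.no_grad()
def predict(x):
    """Inference chains the blocks: start from pure noise and denoise
    step by step, then classify by the nearest label embedding."""
    z = torch.randn(x.shape[0], EMBED_DIM)
    for t, block in enumerate(blocks):
        u_hat = block(torch.cat([x, z], dim=-1))
        if t < T - 1:
            a = alphas[t + 1]
            z = a.sqrt() * u_hat + (1 - a).sqrt() * torch.randn_like(u_hat)
        else:
            z = u_hat
    return torch.cdist(z, label_embed.weight).argmin(dim=-1)
```

Note that only `predict` ever chains the blocks; during training each block sees a freshly noised target, so nothing propagates through the network in either direction.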