Backpropagation everywhere
Can the brain do backpropagation? - Hinton, 2016

In this talk, Hinton rebuts four arguments that neuroscientists have used to claim that the brain cannot be learning by backpropagation.

Most human learning is unsupervised, without the explicit loss functions usually used in backpropagation. Hinton argues that error signals can be derived in unsupervised settings in many different ways: reconstructing the input signal (as autoencoders do); comparing local predictions with contextual predictions; learning a generative model (the wake-sleep algorithm); using a variational autoencoder; or using generative adversarial learning.

Neurons don't send real numbers, but rather binary spikes. Hinton argues that this is a form of regularisation which actually makes the brain more effective: any real number can be converted into the probability of a binary firing signal, and the effects of doing so are similar to those of using dropout. Such strong regularisation is necessary because the brain has arou
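The first rebuttal above (reconstruction as an unsupervised error signal) can be sketched with a tiny autoencoder: the network gets no labels, yet the mismatch between its input and its reconstruction supplies a loss that backpropagation can minimise. All sizes, the learning rate, and the data are illustrative assumptions, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))              # unlabelled data: no targets anywhere

W1 = rng.normal(scale=0.1, size=(8, 4))   # encoder weights (8 -> 4 bottleneck)
W2 = rng.normal(scale=0.1, size=(4, 8))   # decoder weights (4 -> 8)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1)                   # hidden code
    Xhat = H @ W2                         # reconstruction of the input
    return H, Xhat

_, Xhat0 = forward(X)
loss0 = np.mean((Xhat0 - X) ** 2)         # reconstruction error is the loss

for _ in range(200):
    H, Xhat = forward(X)
    err = (Xhat - X) / len(X)             # error signal derived from the input itself
    dW2 = H.T @ err                       # backprop into the decoder
    dH = err @ W2.T * (1 - H ** 2)        # backprop through tanh
    dW1 = X.T @ dH                        # backprop into the encoder
    W1 -= lr * dW1
    W2 -= lr * dW2

_, Xhat1 = forward(X)
loss1 = np.mean((Xhat1 - X) ** 2)
print(loss1 < loss0)                      # reconstruction improved without any labels
```

The point of the sketch is only that a perfectly ordinary backpropagation update runs here even though no external teacher supplied a loss: the input doubles as its own target.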
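The second rebuttal (binary spikes as dropout-like regularisation) can also be sketched numerically: a real-valued activation in [0, 1] is sent as a stream of Bernoulli spikes whose firing probability equals that value, so the real number is recovered on average while each individual transmission is noisy, much as dropout randomly zeroes units while preserving the expected activation. The particular rate and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

rate = 0.7                                  # real-valued activation, read as P(spike)
spikes = rng.random(10000) < rate           # binary spike train: each message is 0 or 1
estimate = spikes.mean()                    # downstream average recovers the real number

# Dropout does the analogous thing to a unit's output: multiply by a Bernoulli
# mask and rescale, so the expectation is unchanged but each pass is noisy.
keep_prob = 0.7
activations = np.full(10000, 0.5)
dropped = activations * (rng.random(10000) < keep_prob) / keep_prob

print(abs(estimate - rate) < 0.02)          # spikes transmit the rate on average
print(abs(dropped.mean() - 0.5) < 0.02)     # dropout preserves the mean activation
```

In both cases the signal a downstream unit receives is a noisy binary (or zeroed) version of a real number whose expectation is correct, which is why Hinton treats spiking as a built-in regulariser rather than a limitation.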