Posts

Showing posts from May, 2018

Backpropagation everywhere

Can the brain do backpropagation? - Hinton, 2016 In this talk, Hinton rebuts four arguments which neuroscientists have used to claim that the brain does not learn via backpropagation. First: most human learning is unsupervised, without the explicit loss functions usually used in backpropagation. Hinton argues that error signals can be derived in unsupervised contexts in many different ways: reconstructing the input signal (as autoencoders do); comparing local predictions with contextual predictions; learning a generative model (the wake-sleep algorithm); using a variational autoencoder; or generative adversarial learning. Second: neurons don't send real numbers, but rather binary spikes. Hinton argues that this is a form of regularisation which actually makes the brain more effective: any real number can be converted into the probability of a binary firing signal, and the effects of doing so are similar to those of using dropout. Such strong regularisation is necessary because the brain has arou…
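The spikes-as-regularisation point can be sketched numerically: treat each activation as a firing probability and sample binary spikes, which injects noise in much the same way as dropout. This is a minimal illustration of the analogy, not Hinton's actual formulation; the function names and the sigmoid squashing are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike(activations):
    """Sample binary spikes: each unit fires with probability equal to
    its sigmoid-squashed activation -- an unbiased but noisy code."""
    p = 1.0 / (1.0 + np.exp(-activations))  # map reals into [0, 1]
    return (rng.random(p.shape) < p).astype(float)

def dropout(activations, keep=0.5):
    """Standard dropout, for comparison: randomly zero units, then
    rescale so the expected value is unchanged."""
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

a = rng.normal(size=100_000)
# Each spike is 0 or 1, yet averaged over many units the firing rate
# tracks the underlying real-valued signal -- noise around the mean,
# just as dropout adds noise around the kept activations.
print(np.unique(spike(a)))            # only 0.0 and 1.0
print(spike(a).mean(), (1 / (1 + np.exp(-a))).mean())
```

The design point is that both transforms are unbiased in expectation but stochastic per unit, which is why sampling spikes can plausibly act as a dropout-like regulariser.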

The age of superstimuli

There are many lenses through which you can analyse people's behaviour. One lens sees everyone as "economic agents making broadly rational choices". Others include "players of a complex, hierarchical social game"; "products of cultural conditioning"; or "individuals with unique talents and flaws and hopes and fears". Each is useful in some ways, and misleading in others. I'd like to discuss another lens which I think should be more standard: "primates constantly faced with superstimuli". I'm using "superstimulus" to mean, roughly speaking, a set of sensory inputs which has a large effect on our thoughts or behaviour, usually because it is a much-amplified version of something we evolved to respond to in the ancestral environment. Consider how much superstimuli in one form or another pervade your life: you wake up and check your social media notifications, on websites where every tiny detail has been…