
Self-Assembly of a Biologically Plausible Learning Circuit

Qianli Liao†,1    Liu Ziyin†,3,4    Yulu Gan†,1,2    Brian Cheung1,2    Mark Harnett5,6    Tomaso Poggio1,2,5,6
Abstract

Over the last four decades, the remarkable success of deep learning has been driven by Stochastic Gradient Descent (SGD) as the main optimization technique. The default method for computing the gradient in SGD is backpropagation, which, with its variations, is used to this day in almost all computer implementations. Among neuroscientists, however, the consensus is that backpropagation is unlikely to be used by the brain. Although several alternatives have been proposed, none is so far supported by experimental evidence. Here we propose a circuit for updating the weights in a network that is biologically plausible, works as well as backpropagation, and leads to verifiable predictions about the anatomy and physiology of a characteristic motif of four plastic synapses connecting ascending and descending cortical streams. A key prediction of our proposal is a surprising self-assembly property: the basic circuit emerges from initially random connectivity under heterosynaptic plasticity rules.

Memo No. 152

† Equal contribution
1Center for Brains, Minds, and Machines, MIT
2CSAIL, MIT
3Research Laboratory of Electronics, MIT
4Physics & Informatics Laboratories, NTT Research
5McGovern Institute, MIT
6Department of Brain and Cognitive Sciences, MIT