Search for: All records
Total Resources: 3
- Author / Contributor
  - Scellier, Benjamin (3)
  - Anisetti, Vidyesh Rao (1)
  - Beaudoin, Philippe (1)
  - Bengio, Yoshua (1)
  - Bogacz, Rafal (1)
  - Christensen, Amelia (1)
  - Clopath, Claudia (1)
  - Costa, Rui Ponte (1)
  - Falk, Martin J. (1)
  - Ganguli, Surya (1)
  - Gillon, Colleen J. (1)
  - Hafner, Danijar (1)
  - Kandala, Ananth (1)
  - Kepecs, Adam (1)
  - Kording, Konrad P. (1)
  - Kriegeskorte, Nikolaus (1)
  - Latham, Peter (1)
  - Lillicrap, Timothy P. (1)
  - Lindsay, Grace W. (1)
  - Miller, Kenneth D. (1)
Abstract: The backpropagation method has enabled transformative uses of neural networks. Alternatively, for energy-based models, local learning methods involving only nearby neurons offer benefits in terms of decentralized training and allow for the possibility of learning in computationally constrained substrates. One class of local learning methods contrasts the desired, clamped behavior with spontaneous, free behavior. However, directly contrasting free and clamped behaviors requires explicit memory. Here, we introduce 'Temporal Contrastive Learning', an approach that uses integral feedback in each learning degree of freedom to provide a simple form of implicit non-equilibrium memory. During training, free and clamped behaviors are shown in a sawtooth-like protocol over time. When combined with integral feedback dynamics, these alternating temporal protocols generate the implicit memory needed to compare free and clamped behaviors, broadening the range of physical and biological systems capable of contrastive learning. Finally, we show that non-equilibrium dissipation improves learning quality and determine a Landauer-like energy cost of contrastive learning through physical dynamics.
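The mechanism described above can be illustrated with a minimal sketch, assuming a toy setup that is not the paper's exact dynamics: a single learning degree of freedom sees a local scalar signal alternating between a free value and a clamped value on a sawtooth-like schedule, and a slow integral-feedback variable serves as the implicit memory. All names, timescales, and the choice to gate updates to the clamped phase are illustrative assumptions.

```python
import numpy as np

# Toy sketch (assumed dynamics, not the paper's equations): the slow
# integral-feedback variable m tracks a running average of the local
# signal s(t); gating the weight update to the clamped half of each
# cycle makes the accumulated update carry the sign of the contrast
# (s_clamped - s_free) without ever storing the free phase explicitly.

dt = 0.01                      # integration step
tau_m = 2.0                    # slow memory timescale (assumption)
eta = 0.05                     # learning rate (assumption)
period = 1.0                   # free/clamped alternation period
s_free, s_clamped = 0.3, 0.8   # illustrative local signal values

w, m, t = 0.0, s_free, 0.0
for _ in range(50_000):
    t += dt
    clamped = (t % period) > period / 2       # sawtooth-like alternation
    s = s_clamped if clamped else s_free
    m += dt * (s - m) / tau_m                 # integral feedback = implicit memory
    if clamped:                               # update gated to the clamped phase
        w += dt * eta * (s - m)               # implicit contrast: s minus memory

# m settles between s_free and s_clamped, so the gated update has the
# sign of (s_clamped - s_free), i.e. a contrastive update.
print(f"memory m = {m:.3f}, accumulated update w = {w:.3f}")
```

For the implicit memory to work, the feedback timescale `tau_m` must be slower than the alternation period, so that `m` straddles the free and clamped values rather than tracking either one; a fast `m` would cancel the contrast.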
Anisetti, Vidyesh Rao; Kandala, Ananth; Scellier, Benjamin; Schwarz, J. M. (Neural Computation)

Abstract: We introduce frequency propagation, a learning algorithm for nonlinear physical networks. In a resistive electrical circuit with variable resistors, an activation current is applied at a set of input nodes at one frequency, and an error current is applied at a set of output nodes at another frequency. The voltage response of the circuit to these boundary currents is the superposition of an activation signal and an error signal, whose coefficients can be read out at different frequencies in the frequency domain. Each conductance is updated proportionally to the product of the two coefficients. The learning rule is local and is proved to perform gradient descent on a loss function. We argue that frequency propagation is an instance of a multimechanism learning strategy for physical networks, be they resistive, elastic, or flow networks. Multimechanism learning strategies incorporate at least two physical quantities, potentially governed by independent physical mechanisms, to act as activation and error signals in the training process. Locally available information about these two signals is then used to update the trainable parameters to perform gradient descent. We demonstrate how earlier work implementing learning via chemical signaling in flow networks (Anisetti, Scellier, et al., 2023) also falls under the rubric of multimechanism learning.
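For the simplest case of a linear resistive network, where superposition holds exactly, the rule in this abstract can be sketched as follows. The topology, frequencies, grounding scheme, learning rate, and update sign below are illustrative assumptions, not the paper's setup: node voltages obey the grounded Laplacian equation at every instant, and lock-in demodulation separates each branch's activation and error voltage-drop coefficients.

```python
import numpy as np

# Hedged sketch of frequency propagation on a small linear resistive
# network. An activation current is injected at frequency w1 and an
# error current at frequency w2; the superposed branch voltage drops
# are demodulated per frequency, and each conductance is updated in
# proportion to the product of its two coefficients.

rng = np.random.default_rng(1)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]   # assumed toy topology
n = 4
g = rng.uniform(0.5, 1.5, len(edges))              # edge conductances

def grounded_laplacian(g):
    L = np.zeros((n, n))
    for (a, b), gk in zip(edges, g):
        L[a, a] += gk; L[b, b] += gk
        L[a, b] -= gk; L[b, a] -= gk
    L[0, :] = 0.0
    L[0, 0] = 1.0                                  # pin node 0 to ground
    return L

i_act = np.array([0.0, 1.0, 0.0, 0.0])             # activation current at node 1
i_err = np.array([0.0, 0.0, 0.0, 0.5])             # error current at node 3
w1, w2 = 2 * np.pi * 3.0, 2 * np.pi * 7.0          # two distinct frequencies

t = np.linspace(0.0, 1.0, 4001)                    # window holds whole periods
L = grounded_laplacian(g)
v1 = np.linalg.solve(L, i_act)                     # response coefficients at w1
v2 = np.linalg.solve(L, i_err)                     # response coefficients at w2
v = np.outer(v1, np.sin(w1 * t)) + np.outer(v2, np.sin(w2 * t))  # superposition

eta = 0.1                                          # learning rate (assumption)
for k, (a, b) in enumerate(edges):
    drop = v[a] - v[b]                             # branch voltage drop over time
    c1 = 2.0 * np.mean(drop * np.sin(w1 * t))      # lock-in at w1: activation coeff
    c2 = 2.0 * np.mean(drop * np.sin(w2 * t))      # lock-in at w2: error coeff
    g[k] -= eta * c1 * c2                          # local product update (sign assumed)

print("updated conductances:", np.round(g, 3))
```

Because an integer number of periods of both frequencies fits the readout window, the two lock-in averages separate cleanly; the update for each edge then uses only quantities measurable locally at that edge.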
Richards, Blake A.; Lillicrap, Timothy P.; Beaudoin, Philippe; Bengio, Yoshua; Bogacz, Rafal; Christensen, Amelia; Clopath, Claudia; Costa, Rui Ponte; de Berker, Archy; Ganguli, Surya; et al. (Nature Neuroscience)