Search Results: 1 record
Author / Contributor: Baldi, P.; Vershynin, R.


A long-standing open problem in the theory of neural networks is the development of quantitative methods to estimate and compare the capabilities of different architectures. Here we define the capacity of an architecture as the binary logarithm of the number of functions it can compute as the synaptic weights are varied. The capacity provides an upper bound on the number of bits that can be extracted from the training data and stored in the architecture during learning. We study the capacity of layered, fully connected architectures of linear threshold neurons with L layers and show that, in essence, the capacity is given by a cubic polynomial in the layer sizes. In proving the main result, we also develop new techniques (multiplexing, enrichment, and stacking) as well as new bounds on the capacity of finite sets. We use the main result to identify architectures with maximal or minimal capacity under a number of natural constraints. This leads to the notion of structural regularization for deep architectures. While in general, everything else being equal, shallow networks compute more functions than deep networks, the functions computed by deep networks are more regular and “interesting.”
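The capacity definition above can be made concrete for the smallest case: a single linear threshold neuron on two Boolean inputs. The sketch below (an illustration of the definition, not the paper's method; the grid of weights and biases is an assumption chosen so that every realizable threshold function appears) enumerates the distinct Boolean functions computed as the weights vary and takes the binary logarithm of their count.

```python
from itertools import product
import math

# The four points of {0,1}^2 on which each Boolean function is evaluated.
inputs = list(product([0, 1], repeat=2))

# Small integer weight grid; half-integer biases avoid ties at the threshold.
weights = [-2, -1, 0, 1, 2]
biases = [b + 0.5 for b in range(-3, 3)]  # -2.5, -1.5, ..., 2.5

# Each distinct truth table is one function the architecture can compute.
functions = set()
for w1, w2, b in product(weights, weights, biases):
    table = tuple(int(w1 * x1 + w2 * x2 + b > 0) for x1, x2 in inputs)
    functions.add(table)

print(len(functions))                       # 14: all Boolean functions of 2 inputs except XOR and XNOR
print(round(math.log2(len(functions)), 3))  # capacity ≈ 3.807 bits
```

The count 14 matches the classical fact that 14 of the 16 Boolean functions of two variables are linearly separable, so this single neuron's capacity is log2(14) ≈ 3.81 bits; exhaustive enumeration of this kind becomes infeasible quickly, which is why the paper develops analytic bounds instead.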