Search Results
Search for: All records
Total Resources: 4
Author / Contributor


Ergen, T. (4)

Pauly, J. (4)

Pilanci, M. (4)

Sahiner, A. (4)

Mardani, M. (3)

Ozturkler, B. (3)

Bartan, B. (1)


Sahiner, A.; Ergen, T.; Ozturkler, B.; Bartan, B.; Pauly, J.; Mardani, M.; Pilanci, M. (International Conference on Learning Representations)

Sahiner, A.; Ergen, T.; Ozturkler, B.; Pauly, J.; Mardani, M.; Pilanci, M. (International Conference on Machine Learning)

Sahiner, A.; Ergen, T.; Pauly, J.; Pilanci, M. (International Conference on Learning Representations (ICLR))
We describe the convex semi-infinite dual of the two-layer vector-output ReLU neural network training problem. This semi-infinite dual admits a finite-dimensional representation, but its support is over a convex set which is difficult to characterize. In particular, we demonstrate that the non-convex neural network training problem is equivalent to a finite-dimensional convex copositive program. Our work is the first to identify this strong connection between the global optima of neural networks and those of copositive programs. We thus demonstrate how neural networks implicitly attempt to solve copositive programs via semi-nonnegative matrix factorization, and draw key insights from this formulation. We describe the first algorithms for provably finding the global minimum of the vector-output neural network training problem, which are polynomial in the number of samples for a fixed data rank, yet exponential in the dimension. However, in the case of convolutional architectures, the computational complexity is exponential in only the filter size and polynomial in all other parameters. We describe the circumstances in which we can find the global optimum of this neural network training problem exactly with soft-thresholded SVD, and provide a copositive relaxation which is guaranteed to be exact for certain classes of problems, and which corresponds with the solution of Stochastic Gradient Descent in practice.
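
The soft-thresholded SVD mentioned in the abstract refers to singular-value soft-thresholding: take the SVD of a matrix, shrink each singular value toward zero by a fixed amount, and reassemble. A minimal sketch, assuming a regularization weight `lam` and toy data (the function name and example values are illustrative assumptions, not the paper's code):

```python
import numpy as np

def soft_thresholded_svd(Y, lam):
    """Shrink each singular value of Y by lam, truncating at zero."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # soft-thresholding of the spectrum
    return U @ np.diag(s_shrunk) @ Vt

# Example: a diagonal matrix with singular values 3.0 and 0.5;
# a threshold of 1.0 keeps only the strong direction, shrunk to 2.0.
Y = np.diag([3.0, 0.5])
Z = soft_thresholded_svd(Y, lam=1.0)
print(np.round(Z, 3))
```

Note how the weaker singular direction is eliminated entirely while the stronger one survives with reduced magnitude; this shrink-and-truncate behavior is what makes the operation a closed-form solution for certain nuclear-norm-regularized problems.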