Sulam, Jeremias; Muthukumar, Ramchandran; Arora, Raman (Advances in Neural Information Processing Systems)

Several recent results provide theoretical insights into the phenomena of adversarial examples. Existing results, however, are often limited due to a gap between the simplicity of the models studied and the complexity of those deployed in practice. In this work, we strike a better balance by considering a model that involves learning a representation while at the same time giving a precise generalization bound and a robustness certificate. We focus on the hypothesis class obtained by combining a sparsity-promoting encoder coupled with a linear classifier, and show an interesting interplay between the expressivity and stability of the (supervised) representation map and a notion of margin in the feature space. We bound the robust risk (to $\ell_2$-bounded perturbations) of hypotheses parameterized by dictionaries that achieve a mild encoder gap on training data. Furthermore, we provide a robustness certificate for end-to-end classification. We demonstrate the applicability of our analysis by computing certified accuracy on real data, and compare with other alternatives for certified robustness.
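For readers unfamiliar with the hypothesis class the abstract describes, the following is a minimal illustrative sketch, not the authors' implementation: a sparsity-promoting encoder realized here as plain ISTA iterations for a Lasso problem over a dictionary, followed by a linear classifier on the resulting sparse code. All names and parameters (D, W, lam, n_steps) are assumptions chosen for the example; the paper's precise encoder, margin notion, and certificate are not reproduced here.

```python
# Sketch of a sparse-coding hypothesis: encoder z(x) = argmin_z 0.5*||x - D z||^2 + lam*||z||_1
# (approximated by ISTA), followed by a linear classifier W on the code z.
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_encode(x, D, lam=0.1, n_steps=100):
    """Run ISTA iterations for the Lasso over dictionary D."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # step size 1/L with L = ||D||_2^2
    z = np.zeros(D.shape[1])
    for _ in range(n_steps):
        grad = D.T @ (D @ z - x)             # gradient of the quadratic term
        z = soft_threshold(z - step * grad, step * lam)
    return z

def classify(x, D, W, lam=0.1):
    """Linear classifier applied to the sparse code: predicted class index."""
    z = sparse_encode(x, D, lam)
    return int(np.argmax(W @ z))

# Toy usage with random data; in practice D and W would be learned from data.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50)) / np.sqrt(20)   # dictionary: 20-dim signals, 50 atoms
W = rng.standard_normal((3, 50))                  # 3-class linear classifier
x = rng.standard_normal(20)
print(classify(x, D, W))
```

Under this kind of construction, the robust risk mentioned in the abstract is the standard worst-case quantity over $\ell_2$-bounded perturbations, $\mathbb{E}_{(x,y)}\big[\sup_{\|\delta\|_2 \le \rho} \ell(h(x+\delta), y)\big]$; the paper's contribution is bounding it in terms of properties of the learned dictionary, including the encoder gap on training data.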