Search for: All records
Total Resources: 4
- Author / Contributor
  - Goldstein, Tom (4)
  - Schwarzschild, Avi (4)
  - Goldblum, Micah (3)
  - Bansal, Arpit (2)
  - Geiping, Jonas (2)
  - Bartoldson, Brian R. (1)
  - Bhatele, Abhinav (1)
  - Borgnia, Eitan (1)
  - Dickerson, John P. (1)
  - Emam, Zeyad (1)
  - Gupta, Arjun (1)
  - Huang, Furong (1)
  - Jain, Neel (1)
  - Kailkhura, Bhavya (1)
  - Kirchenbauer, John (1)
  - McLeish, Sean (1)
  - Moeller, Michael (1)
  - Stein, Alex (1)
Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available without charge during the embargo (the administrative interval before free release).
- Free, publicly-accessible full text available December 23, 2025
- Bansal, Arpit; Schwarzschild, Avi; Borgnia, Eitan; Emam, Zeyad; Huang, Furong; Goldblum, Micah; Goldstein, Tom (36th Conference on Neural Information Processing Systems (NeurIPS 2022))
- Schwarzschild, Avi; Goldblum, Micah; Gupta, Arjun; Dickerson, John P.; Goldstein, Tom (Proceedings of the 38th International Conference on Machine Learning)
  Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, it remains unclear exactly how dangerous poisoning methods are, and which are most effective, because these methods, even ones with identical objectives, have not been tested in consistent or realistic settings. We observe that data poisoning and backdoor attacks are highly sensitive to variations in the testing setup, and we find that existing methods may not generalize to realistic settings. While these existing works serve as valuable prototypes for data poisoning, we apply rigorous tests to determine the extent to which we should fear them. To promote fair comparison in future work, we develop standardized benchmarks for data poisoning and backdoor attacks.
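A minimal sketch of the kind of attack under test here, a BadNets-style patch backdoor, assuming NumPy and an illustrative helper name `poison_badnets` (a generic toy example, not the paper's benchmark code):

```python
import numpy as np

def poison_badnets(images, labels, target_class, poison_frac=0.05, rng=None):
    """Stamp a small white patch on a random subset of training images
    and flip their labels to `target_class`. A model trained on this
    data tends to predict `target_class` for any test image carrying
    the same patch, while accuracy on clean images is largely unchanged."""
    rng = rng or np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 trigger in the bottom-right corner
    labels[idx] = target_class    # relabel to the attacker's target
    return images, labels, idx

# Toy usage: 100 fake 28x28 grayscale "images" in [0, 1], 10 classes.
x = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
x_p, y_p, idx = poison_badnets(x, y, target_class=7)
print(f"poisoned {len(idx)} of {len(x)} images; new labels: {set(y_p[idx].tolist())}")
```

The benchmarking point in the abstract is that details like `poison_frac`, trigger size, and trigger placement strongly affect reported success rates, which is why the authors argue for standardized settings.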
- Goldblum, Micah; Geiping, Jonas; Schwarzschild, Avi; Moeller, Michael; Goldstein, Tom (International Conference on Learning Representations)
  We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike. In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and use our theory to find examples; (2) show that small-norm parameters are not optimal for generalization; (3) demonstrate that ResNets do not conform to wide-network theories, such as the neural tangent kernel, and that the interaction between skip connections and batch normalization plays a role; (4) find that rank does not correlate with generalization or robustness in a practical setting.
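Claim (1) can be seen in miniature with a single "dead" ReLU unit: once every pre-activation is negative, the loss is locally constant and the gradient is exactly zero, even though a far better fit exists. The sketch below is a generic textbook construction for intuition, not the examples produced by the paper's theory:

```python
import numpy as np

# Model: f(x) = relu(w*x + b) with squared loss on positive targets.
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])            # fit exactly by w=1, b=0

def loss_and_grad(w, b):
    pre = w * x + b
    resid = np.maximum(pre, 0.0) - y     # ReLU output minus targets
    gate = (pre > 0).astype(float)       # ReLU derivative: 0 on the dead side
    return (np.mean(resid ** 2),
            np.mean(2 * resid * gate * x),   # d(loss)/dw
            np.mean(2 * resid * gate))       # d(loss)/db

# All pre-activations negative: the unit is dead, the gradient vanishes,
# and gradient descent stays put, yet the loss is far from optimal.
loss, dw, db = loss_and_grad(-1.0, -0.5)
print(f"dead unit:  loss={loss:.3f}, grad=({dw:.1f}, {db:.1f})")  # loss=4.667, grad=(0.0, 0.0)
print(f"global fit: loss={loss_and_grad(1.0, 0.0)[0]:.3f}")       # loss=0.000
```

Because the loss is constant on a neighborhood of (w, b) = (-1.0, -0.5), that point is a genuine (non-strict) suboptimal local minimum.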