Search Results
Search for: All records
Total Resources: 4
- Vu, Minh N.; Jeter, Tre’ R.; Alharbi, Raed; Thai, My T. (IEEE International Conference on Big Data)
- Alharbi, Raed; Vu, Minh N.; Thai, My T. (IEEE International Conference on Communications)
- Phan, NhatHai; Vu, Minh N.; Liu, Yang; Jin, Ruoming; Dou, Dejing; Wu, Xintao; Thai, My T. (Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence)
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint of the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound of the noise scale to preserve differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, our HGM is applied to inject Gaussian noise into the first hidden layer. Then, a tighter robustness bound is proposed. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks, compared with baseline approaches, under a variety of model attacks.
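The abstract describes injecting calibrated Gaussian noise into a network's first hidden layer to obtain differential privacy. As a rough illustration only, the Python sketch below applies the classical Gaussian Mechanism, which requires a privacy budget ε in (0, 1] (the very constraint the HGM relaxes), to a first-layer representation. The function names, the norm-clipping step, and all parameter values are illustrative assumptions; this is not the paper's HGM or its heterogeneous noise bound.

```python
import numpy as np

def gaussian_mechanism_sigma(sensitivity, epsilon, delta):
    """Classical Gaussian Mechanism noise scale; valid for 0 < epsilon <= 1.
    The paper's HGM relaxes this constraint to (0, inf) with its own bound,
    which is not reproduced here."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def noisy_first_hidden_layer(x, W, b, sensitivity=1.0, epsilon=0.5,
                             delta=1e-5, rng=None):
    """Hypothetical helper (not the paper's code): compute the first hidden
    layer, clip its L2 norm, and add calibrated Gaussian noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    h = np.maximum(W @ x + b, 0.0)          # ReLU activation of the first layer
    norm = np.linalg.norm(h)
    if norm > sensitivity:                  # clip the representation's L2 norm;
        h = h * (sensitivity / norm)        # the calibration sensitivity must be
                                            # derived from this bound in practice
    sigma = gaussian_mechanism_sigma(sensitivity, epsilon, delta)
    return h + rng.normal(0.0, sigma, size=h.shape)   # noised representation

# Toy usage: 4-dimensional input, 8 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(scale=0.1, size=(8, 4))
b = np.zeros(8)
print(noisy_first_hidden_layer(x, W, b, rng=rng))
```

Placing the noise at the first hidden layer, as the abstract states, lets the later layers be trained on an already-private representation; the HGM additionally redistributes the noise across neurons, a capability this uniform-noise sketch does not model.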