

Title: Distributed Online Learning With Adversarial Participants In An Adversarial Environment
Award ID(s):
1939553
PAR ID:
10477508
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
ISBN:
978-1-7281-6327-7
Page Range / eLocation ID:
1 to 5
Format(s):
Medium: X
Location:
Rhodes Island, Greece
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Adversarial training is an effective defense method to protect classification models against adversarial attacks. However, one limitation of this approach is that it can require orders of magnitude additional training time due to the high cost of generating strong adversarial examples during training. In this paper, we first show that there is high transferability between models from neighboring epochs in the same training process, i.e., adversarial examples from one epoch continue to be adversarial in subsequent epochs. Leveraging this property, we propose a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA), that can enhance the robustness of trained models and greatly improve the training efficiency by accumulating adversarial perturbations through epochs. Compared to state-of-the-art adversarial training methods, ATTA enhances adversarial accuracy by up to 7.2% on CIFAR10 and requires 12∼14× less training time on the MNIST and CIFAR10 datasets with comparable model robustness.