

Search for: All records

Creators/Authors contains: "Zhang, Kaibo"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Adversarial training has emerged as a popular approach for training models that are robust to inference-time adversarial attacks. However, our theoretical understanding of why and when it works remains limited. Prior work has offered generalization analyses of adversarial training, but they are either restricted to the Neural Tangent Kernel (NTK) regime or make restrictive data assumptions such as (noisy) linear separability or robust realizability. In this work, we study the stability and generalization of adversarial training for two-layer networks without any data distribution assumptions and beyond the NTK regime. Our findings suggest that for networks with any given initialization and sufficiently large width, the generalization bound can be effectively controlled via early stopping. We further improve the generalization bound by leveraging smoothing using Moreau's envelope.
    (A minimal sketch of such an adversarial-training loop with early stopping appears after this listing.)
    Free, publicly-accessible full text available December 1, 2025
  2. Benign overfitting is the phenomenon wherein none of the predictors in the hypothesis class can achieve perfect accuracy (i.e., the non-realizable or noisy setting), yet a model that interpolates the training data still achieves good generalization. A series of recent works aims to understand this phenomenon for regression and classification tasks using linear predictors as well as two-layer neural networks. In this paper, we study such a benign overfitting phenomenon in an adversarial setting. We show that, under a distributional assumption, interpolating neural networks found using adversarial training generalize well despite inference-time attacks. Specifically, we provide convergence and generalization guarantees for adversarial training of two-layer networks (with smooth as well as non-smooth activation functions), showing that under a moderate ℓ2-norm perturbation budget, the trained model has near-zero robust training loss and near-optimal robust generalization error. We support our theoretical findings with an empirical study on synthetic and real-world data.
    (A minimal sketch of an ℓ2-bounded attack and the robust metrics it defines appears after this listing.)
  3. Optical microcombs underpin a wide range of applications from communication and metrology to sensing. Although extensively explored in recent years, challenges remain in key aspects of microcombs such as complex soliton initialization, low power efficiency, and limited comb reconfigurability. Here we present an on-chip microcomb laser to address these key challenges. Realized through integration of a III-V gain chip with a thin-film lithium niobate (TFLN) photonic integrated circuit (PIC), the laser directly emits a mode-locked microcomb on demand with robust turnkey operation inherently built in, with individual comb linewidths down to 600 Hz, a whole-comb frequency tuning rate exceeding 2.4 × 10^17 Hz/s, and 100% of the optical power contributing to comb generation. The demonstrated approach unifies architectural and operational simplicity, electro-optic reconfigurability, high-speed tunability, and the multifunctional capability enabled by the TFLN PIC, opening a promising avenue toward on-demand generation of mode-locked microcombs for broad applications.
    Free, publicly-accessible full text available December 1, 2025
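
For item 1, the following is a minimal sketch, not the authors' exact procedure: PGD-based adversarial training of a two-layer (one-hidden-layer) network in PyTorch, with validation-based early stopping standing in for the early-stopping rule the paper analyzes. The input size, width, perturbation budget, step sizes, patience, and the train_loader/val_loader data loaders are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

def pgd_linf(model, x, y, eps=0.1, alpha=0.02, steps=10):
    # Inner maximization: projected gradient ascent within an l_inf ball of radius eps.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)          # project back onto the l_inf ball
    return (x + delta).detach()

# Two-layer (one-hidden-layer) network; input size, width, and class count are placeholders.
model = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(200):
    model.train()
    for x, y in train_loader:                # assumed DataLoader yielding (features, labels)
        x_adv = pgd_linf(model, x, y)        # inner maximization
        opt.zero_grad()
        nn.functional.cross_entropy(model(x_adv), y).backward()
        opt.step()                           # outer minimization on the adversarial batch

    # Early stopping on held-out robust loss: the knob the analysis says controls generalization.
    model.eval()
    val_loss = sum(
        nn.functional.cross_entropy(model(pgd_linf(model, x, y)), y).item()
        for x, y in val_loader               # assumed held-out DataLoader
    ) / len(val_loader)
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break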
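
For item 2, a similarly hedged sketch of an ℓ2-bounded PGD attack and the robust training loss and robust 0-1 error it is used to measure. Inputs are assumed to be flattened feature vectors; the budget eps, step size, and the hypothetical train/test loaders are illustrative, not the paper's setup.

import torch
import torch.nn as nn

def pgd_l2(model, x, y, eps=0.5, alpha=0.1, steps=10):
    # Inner maximization under an l2 budget: normalized gradient steps, then
    # projection of the perturbation back onto the l2 ball of radius eps.
    # Inputs x are assumed to be flattened feature vectors of shape (batch, dim).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            g_norm = grad.norm(dim=1, keepdim=True).clamp_min(1e-12)
            delta += alpha * grad / g_norm
            d_norm = delta.norm(dim=1, keepdim=True).clamp_min(1e-12)
            delta *= (eps / d_norm).clamp(max=1.0)
    return (x + delta).detach()

def robust_metrics(model, loader, eps=0.5):
    # Robust empirical loss and robust 0-1 error over a loader of (x, y) batches.
    total_loss, total_err, n = 0.0, 0.0, 0
    for x, y in loader:
        x_adv = pgd_l2(model, x, y, eps=eps)
        with torch.no_grad():
            logits = model(x_adv)
            total_loss += nn.functional.cross_entropy(logits, y, reduction="sum").item()
            total_err += (logits.argmax(dim=1) != y).float().sum().item()
            n += y.numel()
    return total_loss / n, total_err / n

# "Near-zero robust training loss" and "near-optimal robust generalization error"
# would be read off as follows (hypothetical train_loader / test_loader assumed):
# train_robust_loss, _ = robust_metrics(model, train_loader)
# _, test_robust_err = robust_metrics(model, test_loader)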