

Search for: All records

Creators/Authors contains: "Wang, Jiayi"

Note: When clicking on a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available June 30, 2026
  2. Contextual bandits, which leverage baseline features of sequentially arriving individuals to optimize cumulative rewards while balancing exploration and exploitation, are critical for online decision-making. Existing approaches typically assume no interference, i.e., that each individual’s action affects only their own reward. Yet this assumption is violated in many practical scenarios, and overlooking interference leads to short-sighted policies that maximize only each individual’s immediate outcome, resulting in suboptimal decisions and potentially increased regret over time. To address this gap, we introduce the foresighted online policy with interference (FRONT), which accounts for the long-term impact of the current decision on subsequent decisions and rewards. 
    Free, publicly-accessible full text available June 10, 2026
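The abstract above does not spell out FRONT itself, so as background the following is a minimal sketch of the standard contextual-bandit setting it extends: a disjoint LinUCB-style policy that scores each arm by a ridge-regression estimate plus an exploration bonus, with no interference between individuals. The simulated linear rewards and all parameter names (`theta`, `alpha`) are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, T = 5, 3, 2000
theta = rng.normal(size=(n_arms, d))   # hidden per-arm reward weights (simulated)

# Disjoint LinUCB state: one ridge-regression problem per arm.
A = np.stack([np.eye(d) for _ in range(n_arms)])   # Gram matrices
b = np.zeros((n_arms, d))
alpha = 1.0                            # exploration width

total_reward = 0.0
for t in range(T):
    x = rng.normal(size=d)             # context of the arriving individual
    scores = []
    for k in range(n_arms):
        A_inv = np.linalg.inv(A[k])
        theta_hat = A_inv @ b[k]       # per-arm ridge estimate
        scores.append(theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x))  # UCB score
    k = int(np.argmax(scores))
    # Each action affects only this individual's reward -- exactly the
    # no-interference assumption that FRONT relaxes.
    r = float(theta[k] @ x) + rng.normal(scale=0.1)
    A[k] += np.outer(x, x)             # update chosen arm's statistics
    b[k] += r * x
    total_reward += r
```

Under interference, the reward line would also depend on actions taken for other individuals, which is what makes a foresighted policy necessary.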
  3. Free, publicly-accessible full text available December 1, 2025
  4. Free, publicly-accessible full text available December 1, 2025
  5. We propose the Adversarial DEep Learning Transpiler (ADELT), a novel approach to source-to-source transpilation between deep learning frameworks. ADELT uniquely decouples code skeleton transpilation and API keyword mapping. For code skeleton transpilation, it uses few-shot prompting on large language models (LLMs), while for API keyword mapping, it uses contextual embeddings from a code-specific BERT. These embeddings are trained in a domain-adversarial setup to generate a keyword translation dictionary. ADELT is trained on an unlabeled web-crawled deep learning corpus, without relying on any hand-crafted rules or parallel data. It outperforms state-of-the-art transpilers, improving pass@1 rate by 16.2 pts and 15.0 pts for the PyTorch-Keras and PyTorch-MXNet transpilation pairs, respectively. We provide open access to our code at https://github.com/gonglinyuan/adelt 
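As a toy illustration of the keyword-dictionary step described above, the sketch below matches each source-framework API keyword to its cosine-nearest target-framework keyword. The hand-made 2-D vectors and keyword lists are invented for illustration; in ADELT the embeddings come from a domain-adversarially trained, code-specific BERT.

```python
import numpy as np

# Stand-ins for contextual embeddings of API keywords in two frameworks.
src_keywords = ["Linear", "Conv2d", "relu"]   # e.g. PyTorch names
tgt_keywords = ["Dense", "Conv2D", "relu"]    # e.g. Keras names
src_emb = np.array([[1.0, 0.1], [0.1, 1.0], [0.7, 0.7]])
tgt_emb = np.array([[0.9, 0.2], [0.2, 0.9], [0.7, 0.6]])

def build_dictionary(src_emb, tgt_emb):
    """Map each source keyword to its cosine-nearest target keyword."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = s @ t.T                             # cosine similarity matrix
    return {src_keywords[i]: tgt_keywords[int(j)]
            for i, j in enumerate(sim.argmax(axis=1))}

mapping = build_dictionary(src_emb, tgt_emb)
# mapping["Linear"] -> "Dense", mapping["Conv2d"] -> "Conv2D"
```

The resulting dictionary is what the skeleton-transpilation stage would consult when rewriting API calls from one framework to the other.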
  6. Dealing with data heterogeneity is a key challenge in the theoretical analysis of federated learning (FL) algorithms. In the literature, gradient divergence is often used as the sole metric for data heterogeneity. However, we observe that the gradient divergence cannot fully characterize the impact of the data heterogeneity in Federated Averaging (FedAvg) even for the quadratic objective functions. This limitation leads to an overestimate of the communication complexity. Motivated by this observation, we propose a new analysis framework based on the difference between the minima of the global objective function and the minima of the local objective functions. Using the new framework, we derive a tighter convergence upper bound for heterogeneous quadratic objective functions. The theoretical results reveal new insights into the impact of the data heterogeneity on the convergence of FedAvg and provide a deeper understanding of the two-stage learning rates. Experimental results using non-IID data partitions validate the theoretical findings. 
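The gap between the global minimum and the local minima that the abstract's analysis framework is built on can be seen in a 1-D quadratic toy example, sketched below with arbitrarily chosen client curvatures and minima (this is an illustration, not the paper's construction). FedAvg with one local step per round converges to the global minimum, while multiple local steps drive the iterate toward a different fixed point, a heterogeneity effect that gradient divergence alone does not capture.

```python
import numpy as np

# Two clients with 1-D quadratic objectives f_i(x) = 0.5 * a_i * (x - c_i)^2.
a = np.array([1.0, 3.0])   # local curvatures
c = np.array([-1.0, 2.0])  # local minima (the source of heterogeneity)

# The global objective (the average of the f_i) is minimized at the
# curvature-weighted mean of the local minima.
x_global = (a * c).sum() / a.sum()

def fedavg(lr=0.05, local_steps=10, rounds=500):
    """Run FedAvg on the two quadratics, starting from x = 0."""
    x = 0.0
    for _ in range(rounds):
        local_iterates = []
        for ai, ci in zip(a, c):
            xi = x
            for _ in range(local_steps):
                xi -= lr * ai * (xi - ci)     # local gradient step
            local_iterates.append(xi)
        x = float(np.mean(local_iterates))    # server averages local iterates
    return x

x_one = fedavg(local_steps=1)    # behaves like centralized gradient descent
x_many = fedavg(local_steps=10)  # fixed point drifts away from x_global
```

With one local step the averaged update equals a gradient step on the global objective, so `x_one` lands on `x_global`; with ten local steps each client overshoots toward its own minimum and the averaged fixed point is biased, which is the kind of effect the minima-difference framework is designed to measure.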