

Search for: All records

Creators/Authors contains: "Li, Na"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Diffusion policies have achieved superior performance in imitation learning and offline reinforcement learning (RL) due to their rich expressiveness. However, the conventional diffusion training procedure requires samples from the target distribution, which is impossible in online RL since we cannot sample from the optimal policy. Backpropagating the policy gradient through the diffusion process incurs huge computational cost and instability, making it expensive and not scalable. To enable efficient training of diffusion policies in online RL, we generalize conventional denoising score matching by reweighting the loss function. The resulting Reweighted Score Matching (RSM) preserves the optimal solution and low computational cost of denoising score matching, while eliminating the need to sample from the target distribution and allowing training to optimize value functions. We introduce two tractable reweighted loss functions to solve two commonly used policy optimization problems, policy mirror descent and max-entropy policy, resulting in two practical algorithms named Diffusion Policy Mirror Descent (DPMD) and Soft Diffusion Actor-Critic (SDAC). We conducted comprehensive comparisons on MuJoCo benchmarks. The empirical results show that the proposed algorithms outperform recent diffusion-policy online RL algorithms on most tasks, and DPMD improves by more than 120% over soft actor-critic on Humanoid and Ant.
    Free, publicly-accessible full text available July 13, 2026
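The reweighting idea in the abstract above can be sketched as follows: instead of sampling actions from the (unavailable) optimal policy, samples from a behavior buffer are weighted by exponentiated Q-values before the standard denoising-score-matching regression. This is a minimal illustrative sketch, not the paper's algorithm; the softmax weighting, the linear `score_fn` interface, and the single noise level `sigma` are all assumptions made for brevity.

```python
import numpy as np

def reweighted_dsm_loss(actions, q_values, score_fn, sigma=0.1, alpha=1.0, rng=None):
    """Weighted denoising score matching loss (illustrative sketch).

    actions:  (N, d) batch drawn from a behavior policy.
    q_values: (N,) critic values used to reweight each sample.
    score_fn: callable mapping noisy actions (N, d) -> score estimates (N, d).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Self-normalized importance weights: high-Q actions dominate the objective,
    # mimicking samples from an exponentially tilted (soft-optimal) target.
    w = np.exp((q_values - q_values.max()) / alpha)
    w = w / w.sum()
    # Standard DSM step: perturb each action and regress the score toward -eps/sigma.
    eps = rng.standard_normal(actions.shape)
    noisy = actions + sigma * eps
    target = -eps / sigma
    residual = score_fn(noisy) - target
    per_sample = np.sum(residual**2, axis=-1)
    return float(np.sum(w * per_sample))
```

Setting all weights equal recovers ordinary denoising score matching over the behavior data; the per-sample reweighting is what couples the score-matching objective to the value function.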
  2. Ozay, Necmiye; Balzano, Laura; Panagou, Dimitra; Abate, Alessandro (Ed.)
    The pursuit of robustness has recently been a popular topic in reinforcement learning (RL) research, yet existing methods generally suffer from computational issues that obstruct their real-world implementation. In this paper, we consider MDPs with low-rank structure, where the transition kernel can be written as a linear product of a feature map and factors. We introduce *duple perturbation* robustness, i.e., perturbation on both the feature map and the factors, via a novel characterization of (𝜉,𝜂)-ambiguity sets featuring computational efficiency. Our novel low-rank robust MDP formulation is compatible with the low-rank function representation view and is therefore naturally applicable to practical RL problems with large or even continuous state-action spaces. Meanwhile, it also gives rise to a provably efficient and practical algorithm with a theoretical convergence rate guarantee. Lastly, the robustness of our proposed approach is justified by numerical experiments, including classical control tasks with continuous state-action spaces.
    Free, publicly-accessible full text available June 4, 2026
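The "duple perturbation" described above perturbs both factors of the low-rank transition kernel P(s'|s,a) = 𝜙(s,a)ᵀ𝜇(s'). A minimal sketch of the resulting pessimism, assuming Euclidean perturbation balls of radii 𝜉 and 𝜂 (the exact ambiguity-set geometry in the paper may differ): the worst-case one-step backup admits a closed-form lower bound with penalties for each perturbation and their interaction.

```python
import numpy as np

def duple_robust_backup(phi, mu, v, xi=0.1, eta=0.1):
    """Pessimistic one-step backup for a low-rank MDP (illustrative sketch).

    phi: (d,) feature vector for a state-action pair.
    mu:  (d, S) factor matrix over next states.
    v:   (S,) value estimates at next states.
    Returns a lower bound on (phi+dphi)^T (mu+dmu) v over ||dphi|| <= xi
    and operator-norm-bounded ||dmu|| <= eta.
    """
    u = mu @ v                 # nominal factor-weighted next values
    nominal = float(phi @ u)   # nominal expected next value
    # Worst-case degradation: feature perturbation, factor perturbation,
    # and their cross term.
    penalty = (xi * np.linalg.norm(u)
               + eta * np.linalg.norm(v) * np.linalg.norm(phi)
               + xi * eta * np.linalg.norm(v))
    return nominal - penalty
```

With 𝜉 = 𝜂 = 0 this reduces to the nominal low-rank backup, which is why the robust formulation stays compatible with the standard low-rank representation view.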
  3. Li, Yingzhen; Mandt, Stephan; Agrawal, Shipra; Khan, Emtiyaz (Ed.)
    Free, publicly-accessible full text available May 3, 2026
  4. Li, Yingzhen; Mandt, Stephan; Agrawal, Shipra; Khan, Emtiyaz (Ed.)
    Off-policy evaluation (OPE) is one of the most fundamental problems in reinforcement learning (RL): estimating the expected long-term payoff of a given target policy with only experiences from another behavior policy that is potentially unknown. The distribution correction estimation (DICE) family of estimators has advanced the state of the art in OPE by breaking the curse of horizon. However, the major bottleneck in applying DICE estimators lies in the difficulty of solving the saddle-point optimization involved, especially with neural network implementations. In this paper, we tackle this challenge by establishing a linear representation of the value function and the stationary distribution correction ratio, i.e., the primal and dual variables in the DICE framework, using the spectral decomposition of the transition operator. Such a primal-dual representation not only bypasses the non-convex non-concave optimization in vanilla DICE, thereby enabling a computationally efficient algorithm, but also paves the way for more efficient utilization of historical data. We highlight that our algorithm, SpectralDICE, is the first to leverage a linear representation of the primal-dual variables that is both computation- and sample-efficient, the performance of which is supported by a rigorous theoretical sample complexity guarantee and a thorough empirical evaluation on various benchmarks.
    Free, publicly-accessible full text available May 3, 2026
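The key benefit claimed above, that a linear primal-dual representation replaces a non-convex non-concave saddle-point with tractable linear algebra, can be illustrated with an LSTD-style sketch. This is not the SpectralDICE algorithm itself; the feature matrices, the ridge regularizer `reg`, and the initial-state averaging are assumptions made to keep the example self-contained.

```python
import numpy as np

def linear_ope_estimate(phi, phi_next, rewards, phi_init, gamma=0.99, reg=1e-6):
    """OPE with a linear value representation v(s) = phi(s)^T alpha (sketch).

    phi:      (N, d) features of visited states.
    phi_next: (N, d) features of successor states under the target policy.
    rewards:  (N,) observed rewards.
    phi_init: (M, d) features of initial states.
    """
    d = phi.shape[1]
    # Linear Bellman residual equation: Phi^T (Phi - gamma Phi') alpha = Phi^T r.
    # One regularized linear solve replaces the saddle-point optimization.
    A = phi.T @ (phi - gamma * phi_next) + reg * np.eye(d)
    b = phi.T @ rewards
    alpha = np.linalg.solve(A, b)
    # Normalized long-term payoff from the initial-state distribution.
    return float((1.0 - gamma) * phi_init.mean(axis=0) @ alpha)
```

On a one-state chain with reward 1 and any discount, the estimate recovers the normalized payoff of 1, which is a quick sanity check for the linear solve.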
  5. Free, publicly-accessible full text available January 10, 2026
  6. Free, publicly-accessible full text available July 15, 2025
  7. Free, publicly-accessible full text available July 10, 2025
  8. Free, publicly-accessible full text available July 8, 2025
  9. Free, publicly-accessible full text available October 2, 2025
  10. Topological materials are of great interest because they can support metallic edge or surface states that are robust against perturbations, with the potential for technological applications. Here, we experimentally explore the light-induced non-equilibrium properties of two distinct topological phases in NaCd4As3: a topological crystalline insulator (TCI) phase and a topological insulator (TI) phase. This material has surface states that are protected by mirror symmetry in the TCI phase at room temperature, while it undergoes a structural phase transition to a TI phase below 200 K. After exciting the TI phase by an ultrafast laser pulse, we observe a leading band edge shift of >150 meV that slowly builds up, reaches a maximum after ∼0.6 ps, and persists for ∼8 ps. The slow rise time of the excited electron population and electron temperature suggests that the electronic and structural orders are strongly coupled in this TI phase. It also suggests that the directly excited electronic states and the probed electronic states are weakly coupled. Both couplings are likely due to a partial relaxation of the lattice distortion, which is known to be associated with the TI phase. In contrast, no distinct excited state is observed in the TCI phase immediately after photoexcitation, which we attribute to the low density of states and phase space available near the Fermi level. Our results show how ultrafast laser excitation can reveal the distinct excited states and interactions in phase-rich topological materials.
    Free, publicly-accessible full text available January 1, 2026