Search for: All records

Creators/Authors contains: "Fu, H"

  1. Creating engaging cybersecurity education materials typically requires months of development time and specialized expertise. This paper describes how we used generative AI to address this challenge. We used Claude AI to generate a complete interactive platform that teaches students basic microelectronics through IoT hacking. Through iterative prompting, we generated more than 15,000 lines of functional code, including interactive visualizations, Python security tools, and gamified quizzes with real-time leaderboards. The curriculum guides students through the evolution of computing—from vacuum tubes to modern IoT devices—then helps them apply this foundation to discover real vulnerabilities. We implemented this platform at a GenCyber summer camp with 40 participants, where students identified actual security issues in AmpliPi audio systems—open-source network audio devices designed for multi-room audio distribution—including password weaknesses and denial-of-service flaws. The entire development process took only three weeks instead of the typical several months. The AI produced quality educational content, although we reviewed everything for technical accuracy and ethical considerations. During the camp, students remained engaged through competitive elements and hands-on labs, learning both theoretical concepts and practical skills. The students used AI-generated tools, including working implementations of SlowLoris and dictionary attacks, to test real systems. Our experience demonstrates that generative AI can efficiently create effective cybersecurity education materials that remain technically current. All materials are publicly available on GitHub for educational use. This approach could help educators keep pace with rapidly evolving technology despite traditional curriculum-development constraints.
    Free, publicly-accessible full text available November 14, 2026
  2. The Tully-Fisher relation is a vital distance indicator, but its precise inference is challenged by selection bias, statistical bias, and uncertain inclination corrections. This study presents a Bayesian framework that simultaneously addresses these issues. To eliminate the need for individual inclination corrections, inclination is treated as a latent variable with a known probability distribution. To correct for the distance-dependent Malmquist bias arising from sample selection, the model incorporates Gaussian scatter in the dependent variable, the distribution of the independent variable, and the observational selection function into the data likelihood. To mitigate the statistical bias -- termed the "general Eddington bias" -- caused by Gaussian scatter and the non-uniform distribution of the independent variable, two methods are introduced: (1) analytical bias corrections applied to the dependent variable before likelihood computation, and (2) a dual-scatter model that accounts for Gaussian scatter in the independent variable within the likelihood function. The effectiveness of these methods is demonstrated using simulated datasets. By rigorously addressing selection and statistical biases in a latent-variable regression analysis, this work provides a robust approach for unbiased distance estimates from standardizable candles, which is critical for improving the accuracy of Hubble constant determinations.
    Free, publicly-accessible full text available August 27, 2026
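     The entry above folds the inclination prior, the intrinsic scatter, and the survey selection function into a single likelihood. As a rough illustration of that general idea only (the linewidth-magnitude relation, the sin(i) orientation prior, the lower inclination cutoff, and the sharp magnitude limit are simplifying assumptions of this sketch, not the paper's model), a per-galaxy log-likelihood that marginalizes over a latent inclination and divides by the detection probability might look like this in Python:

       import numpy as np
       from scipy.stats import norm
       from scipy.integrate import quad

       # Toy Tully-Fisher-style model (illustrative assumptions, not the paper's):
       #   M(i) = a + b * (log10(W_obs) - log10(sin i) - 2.5)   predicted absolute magnitude
       #   m ~ Normal(M(i) + mu, sigma)                         observed apparent magnitude
       #   p(i) = sin(i) on (i_min, pi/2)                       randomly oriented disks
       #   a galaxy enters the sample only if m < m_lim         sharp selection cut

       def loglike_one_galaxy(m_obs, logW_obs, mu, a, b, sigma, m_lim, i_min=np.radians(20.0)):
           """Log-likelihood of one detected galaxy, marginalized over the latent
           inclination and truncated by the magnitude selection."""
           def density(i, m):
               M = a + b * (logW_obs - np.log10(np.sin(i)) - 2.5)
               return norm.pdf(m, loc=M + mu, scale=sigma) * np.sin(i)

           def detect_prob(i):
               M = a + b * (logW_obs - np.log10(np.sin(i)) - 2.5)
               return norm.cdf(m_lim, loc=M + mu, scale=sigma) * np.sin(i)

           num, _ = quad(density, i_min, np.pi / 2, args=(m_obs,))
           den, _ = quad(detect_prob, i_min, np.pi / 2)
           return np.log(num) - np.log(den)

       # Example call with made-up numbers:
       print(loglike_one_galaxy(m_obs=14.2, logW_obs=2.6, mu=33.0,
                                a=-20.3, b=-7.8, sigma=0.4, m_lim=15.5))

     In this toy setup, dividing by the detection probability is what counters the Malmquist-type bias, and the sin(i) weight plays the role of the latent-inclination prior.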
  3. This software repository provides the Python functions and a Jupyter notebook that implement the latent-variable bias-mitigating inference methods for the Tully-Fisher Relation. The methods are described in Fu (2025), titled "Mitigating Malmquist and Eddington Biases in Latent-Inclination Regression of the Tully-Fisher Relation". Repository DOI: https://doi.org/10.5281/zenodo.16378199 
    Free, publicly-accessible full text available August 7, 2026
  4. Free, publicly-accessible full text available February 1, 2026
  5. We propose a novel model-based reinforcement learning algorithm—Dynamics Learning and predictive control with Parameterized Actions (DLPA)—for Parameterized Action Markov Decision Processes (PAMDPs). The agent learns a parameterized-action-conditioned dynamics model and plans with a modified Model Predictive Path Integral control. We theoretically quantify the difference between the generated trajectory and the optimal trajectory during planning in terms of the values they achieve, through the lens of Lipschitz continuity. Our empirical results on several standard benchmarks show that our algorithm achieves better sample efficiency and asymptotic performance than state-of-the-art PAMDP methods.
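     The entry above plans with a learned, parameterized-action-conditioned dynamics model and a modified Model Predictive Path Integral (MPPI) controller. The sketch below is only a generic, hedged illustration of sampling-based planning over a joint discrete-plus-continuous action space: the stand-in dynamics and reward functions, the horizon, the sample count, and the softmax selection rule are placeholders, not the authors' DLPA implementation.

       import numpy as np

       # Toy parameterized-action planner in the spirit of MPPI / random shooting.
       # Everything here (dynamics, reward, dimensions) is a stand-in, not DLPA itself.

       N_DISCRETE = 3      # number of discrete action types
       PARAM_DIM = 2       # continuous parameter dimension per action
       HORIZON = 10
       N_SAMPLES = 256
       TEMPERATURE = 1.0

       def dynamics(state, action_id, param):
           """Placeholder learned model: next_state = f(state, a, x)."""
           return state + 0.1 * (param.sum() + action_id) * np.ones_like(state)

       def reward(state, action_id, param):
           """Placeholder reward: stay near the origin, small parameter penalty."""
           return -np.sum(state ** 2) - 0.01 * np.sum(param ** 2)

       def plan(state, rng):
           """Sample action sequences, roll them out with the model, and return the
           first (action_id, param) of a softmax-weighted choice over returns."""
           ids = rng.integers(0, N_DISCRETE, size=(N_SAMPLES, HORIZON))
           params = rng.normal(0.0, 1.0, size=(N_SAMPLES, HORIZON, PARAM_DIM))
           returns = np.zeros(N_SAMPLES)
           for k in range(N_SAMPLES):
               s = state.copy()
               for t in range(HORIZON):
                   returns[k] += reward(s, ids[k, t], params[k, t])
                   s = dynamics(s, ids[k, t], params[k, t])
           weights = np.exp((returns - returns.max()) / TEMPERATURE)
           weights /= weights.sum()
           k_star = rng.choice(N_SAMPLES, p=weights)   # MPPI-style soft selection
           return ids[k_star, 0], params[k_star, 0]

       rng = np.random.default_rng(0)
       print(plan(np.zeros(4), rng))

     In a real model-based agent, dynamics and reward would be the learned networks, and the importance weights would update a nominal action sequence rather than select a single sample.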
  6. We present an algorithm for skill discovery from expert demonstrations. The algorithm first utilizes Large Language Models (LLMs) to propose an initial segmentation of the trajectories. Following that, a hierarchical variational inference framework incorporates the LLM-generated segmentation information to discover reusable skills by merging trajectory segments. To further control the trade-off between compression and reusability, we introduce a novel auxiliary objective based on the Minimum Description Length principle that helps guide this skill discovery process. Our results demonstrate that agents equipped with our method are able to discover skills that help accelerate learning and outperform baseline skill learning approaches on new long-horizon tasks in BabyAI, a grid world navigation environment, as well as ALFRED, a household simulation environment. 
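     The entry above trades compression against reusability with a Minimum Description Length style auxiliary objective. Purely as a toy illustration of that trade-off (the two-part uniform coding scheme and greedy longest-match parsing below are assumptions of this sketch, not the paper's variational objective), one can score a candidate skill set by how many bits it takes to describe the skills plus the demonstration rewritten in terms of them:

       import math

       # Toy two-part MDL score for a candidate set of "skills" (subsequences) used to
       # re-encode a demonstration. The coding scheme is an illustrative assumption.

       def mdl_score(demo, skills, n_primitives):
           """Return total description length in bits: model cost + data cost."""
           # Cost of describing the skill dictionary itself (each symbol in a skill
           # is one of the primitive actions).
           model_bits = sum(len(s) * math.log2(n_primitives) for s in skills)

           # Greedily rewrite the demonstration using skills where they match.
           symbols = 0
           i = 0
           while i < len(demo):
               match = max((s for s in skills if demo[i:i + len(s)] == list(s)),
                           key=len, default=None)
               i += len(match) if match else 1
               symbols += 1

           # Each rewritten symbol is drawn from primitives plus skill labels.
           data_bits = symbols * math.log2(n_primitives + len(skills))
           return model_bits + data_bits

       demo = list("abcabcabd")                       # primitive actions as characters
       print(mdl_score(demo, [], 4))                  # no skills: baseline cost
       print(mdl_score(demo, [("a", "b", "c")], 4))   # one reusable skill

     A skill set lowers the score only when the dictionary cost it adds is repaid by a shorter rewritten trajectory, which is the compression-versus-reusability tension the auxiliary objective is meant to control.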
  7. In the Hidden-Parameter MDP (HiP-MDP) framework, a family of reinforcement learning tasks is generated by varying hidden parameters specifying the dynamics and reward function for each individual task. The HiP-MDP is a natural model for families of tasks in which meta- and lifelong-reinforcement learning approaches can succeed. Given a learned context encoder that infers the hidden parameters from previous experience, most existing algorithms fall into two categories: model transfer and policy transfer, depending on which function the hidden parameters are used to parameterize. We characterize the robustness of model and policy transfer algorithms with respect to hidden parameter estimation error. We first show that the value function of HiP-MDPs is Lipschitz continuous under certain conditions. We then derive regret bounds for both settings through the lens of Lipschitz continuity. Finally, we empirically corroborate our theoretical analysis by varying the hyper-parameters governing the Lipschitz constants of two continuous control problems; the resulting performance is consistent with our theoretical results. 
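     The entry above hinges on the value function of a HiP-MDP being Lipschitz continuous in the hidden parameter, so that an estimation error in the hidden parameter translates into a bounded loss in value. The display below states that property in its generic textbook form (the notation and the shape of the bound are a standard definition, not the paper's specific theorem or constants):

       % Generic Lipschitz statement for a value function indexed by a hidden parameter theta.
       \[
         \bigl| V_{\theta}(s) - V_{\theta'}(s) \bigr| \;\le\; L_V \,\lVert \theta - \theta' \rVert
         \qquad \text{for all states } s,
       \]
       % so that an estimation error of at most epsilon costs at most L_V * epsilon in value:
       \[
         \lVert \hat{\theta} - \theta \rVert \le \epsilon
         \;\Longrightarrow\;
         \bigl| V_{\hat{\theta}}(s) - V_{\theta}(s) \bigr| \le L_V\,\epsilon .
       \]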
  8. We propose a model-based lifelong reinforcement-learning approach that estimates a hierarchical Bayesian posterior distilling the common structure shared across different tasks. The learned posterior, combined with a sample-based Bayesian exploration procedure, increases the sample efficiency of learning across a family of related tasks. We first analyze the relationship between the sample complexity and the initialization quality of the posterior in the finite MDP setting. We next scale the approach to continuous-state domains by introducing a Variational Bayesian Lifelong Reinforcement Learning algorithm that can be combined with recent model-based deep RL methods, and that exhibits backward transfer. Experimental results on several challenging domains show that our algorithms achieve better forward- and backward-transfer performance than state-of-the-art lifelong RL methods.
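     The entry above distills structure shared across tasks into a hierarchical Bayesian posterior and explores each new task by posterior sampling. The sketch below is a deliberately small stand-in for that idea (a family of two-armed Gaussian bandit tasks, conjugate Normal updates, and a crude refit of the shared prior are all simplifying assumptions of this sketch, not the paper's algorithm):

       import numpy as np

       # Toy lifelong Thompson-sampling sketch (illustrative, not the paper's method):
       # each task is a 2-armed Gaussian bandit whose arm means are drawn around shared
       # means. After each task we distill the shared structure into a hierarchical
       # prior (here: a running fit over per-task posterior means) and reuse it as the
       # prior for the next task, which is what speeds up posterior-sampling exploration.

       rng = np.random.default_rng(0)
       N_ARMS, SIGMA, STEPS = 2, 1.0, 50
       shared_true = np.array([0.0, 1.0])           # hidden common structure
       prior_mu = np.zeros(N_ARMS)                  # hierarchical prior (learned)
       prior_var = np.full(N_ARMS, 25.0)            # vague to start, tightens over tasks
       posterior_means_history = []

       for task in range(8):
           arm_true = shared_true + rng.normal(0.0, 0.2, N_ARMS)   # this task's arms
           mu, var = prior_mu.copy(), prior_var.copy()              # per-task posterior
           for t in range(STEPS):
               sample = rng.normal(mu, np.sqrt(var))                 # Thompson sample
               a = int(np.argmax(sample))                            # explore/exploit
               r = rng.normal(arm_true[a], SIGMA)
               # Conjugate Normal update for the pulled arm.
               var_new = 1.0 / (1.0 / var[a] + 1.0 / SIGMA ** 2)
               mu[a] = var_new * (mu[a] / var[a] + r / SIGMA ** 2)
               var[a] = var_new
           posterior_means_history.append(mu.copy())
           # "Distill" shared structure: refit the hierarchical prior from past tasks.
           prior_mu = np.mean(posterior_means_history, axis=0)
           prior_var = np.maximum(np.var(posterior_means_history, axis=0), 0.05) + 0.2

       print("learned shared prior mean:", np.round(prior_mu, 2))

     The point of the toy is only the mechanism: because later tasks start from a prior fitted to earlier posteriors, posterior sampling concentrates on the better arm in fewer pulls, which mirrors the forward-transfer effect the abstract describes.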
  9. Guichard, P.; Hamel, V. (Eds.)
    This chapter describes two mechanical expansion microscopy methods with accompanying step-by-step protocols. The first method, mechanically resolved expansion microscopy, uses non-uniform expansion of partially digested samples to provide the imaging contrast that resolves local mechanical properties. Examining the bacterial cell wall with this method, we are able to distinguish bacterial species in mixed populations based on their distinct cell wall rigidity and detect cell wall damage caused by various physiological and chemical perturbations. The second method is mechanically locked expansion microscopy, in which we use a mechanically stable gel network to prevent the original polyacrylate network from shrinking in ionic buffers. This method allows us to use anti-photobleaching buffers in expansion microscopy, enabling detection of novel ultrastructures below the optical diffraction limit through super-resolution single-molecule localization microscopy on bacterial cells and whole-mount immunofluorescence imaging in thick animal tissues. We also discuss potential applications and assess future directions.