

Search for: All records

Creators/Authors contains: "Lu, Y"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available May 1, 2025
  2. Free, publicly-accessible full text available December 1, 2024
  3. Mobile and embedded devices are becoming ubiquitous. Applications such as rescue with autonomous robots and event analysis on traffic cameras rely on devices with limited power supply and computational resources, so the demand for efficient computer vision algorithms is increasing. Since 2015, we have organized the IEEE Low-Power Computer Vision Challenge to advance the state of the art in low-power computer vision. We describe how the competition is organized, including the challenge design, the reference solution, the dataset, the referee system, and the evolution of the solutions from two winning teams. We examine the winning teams' development patterns and design decisions, focusing on their techniques for balancing power consumption and accuracy. We conclude that a successful competition needs a well-designed reference solution and an automated referee system, and that a solution with modularized components is more likely to win. We hope this paper provides guidelines for future organizers and contestants of computer vision competitions.
    Free, publicly-accessible full text available July 1, 2024
  4. Free, publicly-accessible full text available May 1, 2024
  5. Deep Neural Networks (DNNs) are being adopted as components in software systems. Creating and specializing DNNs from scratch has grown increasingly difficult as state-of-the-art architectures grow more complex. Following the path of traditional software engineering, machine learning engineers have begun to reuse large-scale pre-trained models (PTMs) and fine-tune these models for downstream tasks. Prior works have studied reuse practices for traditional software packages to guide software engineers towards better package maintenance and dependency management. We lack a similar foundation of knowledge to guide behaviors in pre-trained model ecosystems. In this work, we present the first empirical investigation of PTM reuse. We interviewed 12 practitioners from the most popular PTM ecosystem, Hugging Face, to learn the practices and challenges of PTM reuse. From this data, we model the decision-making process for PTM reuse. Based on the identified practices, we describe useful attributes for model reuse, including provenance, reproducibility, and portability. Three challenges for PTM reuse are missing attributes, discrepancies between claimed and actual performance, and model risks. We substantiate these identified challenges with systematic measurements in the Hugging Face ecosystem. Our work informs future directions for optimizing deep learning ecosystems by automatically measuring useful attributes and potential attacks, and envisions future research on infrastructure and standardization for model registries.
    (See the metadata-inspection sketch following this listing.)
    Free, publicly-accessible full text available May 1, 2024
  6. Abstract A discrete degree of freedom can be engineered to match the Hamiltonian of particles moving in a real-space lattice potential. Such synthetic dimensions are powerful tools for quantum simulation because of the control they offer and the ability to create configurations difficult to access in real space. Here, in an ultracold 84Sr atom, we demonstrate a synthetic dimension based on Rydberg levels coupled with millimeter waves. Tunneling amplitudes between synthetic lattice sites and on-site potentials are set by the millimeter-wave amplitudes and detunings, respectively. Alternating weak and strong tunneling in a one-dimensional configuration realizes the single-particle Su-Schrieffer-Heeger (SSH) Hamiltonian, a paradigmatic model of topological matter. The band structure is probed through optical excitation from the ground state to Rydberg levels, revealing symmetry-protected topological edge states at zero energy. Edge-state energies are robust to perturbations of the tunneling rates that preserve chiral symmetry, but can be shifted by the introduction of on-site potentials.
    (See the Hamiltonian sketch following this listing.)
  7. Abstract

    In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations and, in turn, by the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation is samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, and yet incomplete, representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem (learning a function-space neural network to reproduce flow map samples under a fixed integration scheme) leads to representations that generalize strongly, both in the field itself and in using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets, we show that our approach improves on a variety of data reduction methods across a range of measures, including the reconstructed vector field, the flow map, and features derived from the flow map.
    (See the code sketch following this listing.)

     
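
For item 5, the following is a minimal sketch of the kind of Hub metadata a reuse decision might draw on (tags, adoption signals, and the files that accompany a model), using the huggingface_hub client. The repository id and the specific attributes printed are illustrative assumptions, not the measurement pipeline used in the paper.

    # Hedged sketch: pull reuse-relevant metadata for a pre-trained model from
    # the Hugging Face Hub. The repo id is an arbitrary public example.
    from huggingface_hub import HfApi

    repo_id = "bert-base-uncased"        # illustrative choice, not from the paper
    api = HfApi()
    info = api.model_info(repo_id)

    print("tags:", info.tags)            # task, framework, and license tags
    print("downloads:", info.downloads)  # rough proxy for adoption
    print("files:", [s.rfilename for s in info.siblings])  # weights, config, model card

Attributes such as these are only a starting point; the provenance, reproducibility, and performance-discrepancy questions raised in the abstract generally require information beyond what the Hub API exposes directly.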
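For item 6, the single-particle SSH model that the synthetic Rydberg lattice realizes can be written as a tight-binding Hamiltonian with alternating tunneling amplitudes and tunable on-site energies. The notation below (J_w and J_s for the weak and strong tunnelings set by the millimeter-wave amplitudes, \Delta_n for the detuning-set on-site potentials) is generic, not taken from the paper.

    % Generic single-particle SSH Hamiltonian on the synthetic lattice:
    % alternating weak/strong tunneling J_w, J_s set by millimeter-wave
    % amplitudes, on-site energies \Delta_n set by the detunings.
    H = \sum_{n} \Big( J_w \,\lvert 2n \rangle\langle 2n+1 \rvert
                     + J_s \,\lvert 2n+1 \rangle\langle 2n+2 \rvert
                     + \mathrm{h.c.} \Big)
      + \sum_{n} \Delta_n \,\lvert n \rangle\langle n \rvert

When all \Delta_n = 0, the chiral symmetry of this model pins the topological edge states to zero energy; nonzero on-site terms shift the edge-state energies, matching the behavior described in the abstract.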
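For item 7, the following is a compact sketch of what an implicit neural representation of a time-varying vector field might look like, with the flow map recovered by a fixed-step integrator and the network fit against flow map samples. The network size, the RK4 step count, and all names are illustrative assumptions, not the authors' architecture.

    # Hedged sketch: a coordinate MLP standing in for an implicit neural
    # representation of a time-varying 2D vector field, plus a fixed-step RK4
    # integrator used both to fit precomputed flow-map samples and to query
    # new flow maps post hoc.
    import torch
    import torch.nn as nn

    class NeuralVectorField(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden), nn.SiLU(),
                nn.Linear(hidden, hidden), nn.SiLU(),
                nn.Linear(hidden, 2),
            )

        def forward(self, x, t):
            # x: (N, 2) positions, t: (N, 1) times -> (N, 2) velocities
            return self.net(torch.cat([x, t], dim=-1))

    def flow_map(field, x0, t0, tau, steps=16):
        # Approximate the flow map F(x0, t0; tau) by integrating the learned
        # field with a fixed-step RK4 scheme (differentiable end to end).
        h = tau / steps
        x, t = x0, t0
        for _ in range(steps):
            k1 = field(x, t)
            k2 = field(x + 0.5 * h * k1, t + 0.5 * h)
            k3 = field(x + 0.5 * h * k2, t + 0.5 * h)
            k4 = field(x + h * k3, t + h)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            t = t + h
        return x

    # Fit the field so integrated trajectories reproduce flow-map samples
    # (x0, t0, tau) -> x_end; the targets below are placeholders, in practice
    # they come from the simulation's in situ output.
    field = NeuralVectorField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    x0, t0 = torch.rand(256, 2), torch.rand(256, 1)
    tau = torch.full((256, 1), 0.1)
    x_end = x0 + 0.1 * torch.randn(256, 2)
    for _ in range(200):
        opt.zero_grad()
        loss = ((flow_map(field, x0, t0, tau) - x_end) ** 2).mean()
        loss.backward()
        opt.step()

Because the integrator is part of the training objective, the optimization targets flow-map accuracy directly rather than pointwise field accuracy, which is the sense in which the abstract describes learning "under a fixed integration scheme."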