

Search for: All records; Creators/Authors contains: "Li, Hai"


  1. Paaßen, Benjamin; Demmans Epp, Carrie (Ed.)
    K-12 Computer Science (CS) education has seen remarkable growth recently, driven by the increasing focus on integrating CS and Computational Thinking (CT) into the curriculum. Despite the abundance of professional development (PD) programs designed to prepare future CS teachers with the required knowledge and skills, there is little research on how teachers' perceptions of and attitudes toward CS and CT evolve before and after participating in these programs. To address this gap, our exploratory study examines the dynamics of pre- and in-service teachers' experiences, attitudes, and perceptions toward CS and CT through their participation in a K-12 CS education micro-credential program. We employed topic modeling to identify topics that emerged from teachers' written pre- and post-program CS autobiographies, conducted statistical analysis to explore how these topics evolve over time, and applied regression analysis to investigate the factors influencing these dynamics (an illustrative sketch of such a pipeline follows this entry). We observed a shift from teachers' initial feelings of fear, intimidation, and stress toward confidence, fun, and a sense of competence in basic CS, reflecting a positive transformation. Regression analysis revealed that features such as experienced-teacher status and CT conceptual understanding correlate with participants' evolving views. These relationships highlight the micro-credential's role in not only enhancing technical competency but also fostering an adaptive, integrative pedagogical mindset, providing new insights for course design.
    Free, publicly-accessible full text available July 14, 2025
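    As a rough illustration of the analysis described in this entry, the sketch below combines a topic model with a regression on topic shifts. It is not the authors' pipeline: the file name, column names (text_pre, text_post, experienced, ct_score), and the choice of LDA with ten topics are illustrative assumptions.

    # Illustrative sketch (not the paper's actual pipeline): fit a topic model to
    # pre/post autobiographies, then regress topic shifts on teacher covariates.
    # File and column names are hypothetical.
    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LinearRegression

    df = pd.read_csv("autobiographies.csv")  # hypothetical data file

    # Fit the vocabulary and topic model on all autobiographies (pre and post).
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    X_all = vectorizer.fit_transform(pd.concat([df["text_pre"], df["text_post"]]))
    lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(X_all)

    # Topic proportions before and after the micro-credential program.
    theta_pre = lda.transform(vectorizer.transform(df["text_pre"]))
    theta_post = lda.transform(vectorizer.transform(df["text_post"]))
    shift = theta_post - theta_pre  # per-teacher change in each topic's weight

    # Regress the shift in one topic (e.g., a "confidence" topic) on covariates.
    covariates = df[["experienced", "ct_score"]].to_numpy()
    reg = LinearRegression().fit(covariates, shift[:, 0])
    print("coefficients:", reg.coef_, "intercept:", reg.intercept_)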
    The unprecedented success of artificial intelligence (AI) has enriched machine learning (ML)-based applications. The availability of big data and compute-intensive algorithms enables versatile and highly accurate ML approaches. However, the data movement and sheer volume of computation burden conventional hardware systems with high power consumption and low performance. Breaking away from traditional hardware design, non-conventional accelerators that exploit emerging device technology have gained significant attention, since these devices enable processing-in-memory (PIM) designs with dramatic improvements in efficiency. This paper summarizes state-of-the-art PIM accelerators over the past decade. PIM accelerators have been implemented for diverse models and advanced algorithmic techniques across neural networks for language processing and image recognition, to expedite both inference and training. We present the implemented designs, methodologies, and results, following their development over the years. We also discuss a promising direction for PIM accelerators: vertical stacking for More than Moore. A toy sketch of the PIM matrix-vector primitive follows this entry.
    Free, publicly-accessible full text available June 12, 2025
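    As a rough illustration of the core PIM primitive the survey covers, the toy model below treats a resistive crossbar as an in-place matrix-vector multiplier: weights are stored as quantized conductances, inputs are applied as voltages, and outputs are read as accumulated column currents. The array size and quantization levels are illustrative assumptions, not figures from the paper.

    # Toy numerical model of a crossbar-style PIM matrix-vector multiply.
    import numpy as np

    def crossbar_mvm(weights, inputs, levels=16):
        """Approximate y = W @ x with weights quantized to a few conductance levels."""
        w_max = np.abs(weights).max()
        step = 2 * w_max / (levels - 1)
        g = np.round((weights + w_max) / step) * step - w_max  # quantized conductances
        return g @ inputs  # column-current summation

    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 128))
    x = rng.standard_normal(128)
    print("max abs error vs. exact MVM:", np.max(np.abs(crossbar_mvm(W, x) - W @ x)))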
  3. Free, publicly-accessible full text available June 17, 2025
    Natural Adversarial Examples (NAEs), images arising naturally from the environment that are capable of deceiving classifiers, are instrumental in robustly evaluating and identifying vulnerabilities in trained models. In this work, unlike prior works that passively collect NAEs from real images, we propose to actively synthesize NAEs using the state-of-the-art Stable Diffusion. Specifically, our method formulates a controlled optimization process in which we perturb the token embedding that corresponds to a specified class to generate NAEs. This generation process is guided by the gradient of the loss from the target classifier, ensuring that the created image closely mimics the ground-truth class yet fools the classifier (an illustrative sketch of this loop follows this entry). Named SD-NAE (Stable Diffusion for Natural Adversarial Examples), our method is effective in producing valid and useful NAEs, as demonstrated through a meticulously designed experiment. Code is available at https://github.com/linyueqian/SD-NAE.
    Free, publicly-accessible full text available May 7, 2025
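    The gradient-guided embedding perturbation described in this entry can be sketched as a short optimization loop. The code below is not the SD-NAE implementation (see the linked repository for that); the generator and classifier are dummy linear stand-ins, and the dimensions and hyperparameters are assumptions, so only the structure of the loop is illustrated.

    # Minimal sketch of gradient-guided token-embedding perturbation.
    # 'generator' and 'classifier' are dummy stand-ins for Stable Diffusion and
    # the target classifier so the loop itself is runnable.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    generator = nn.Linear(768, 3 * 32 * 32)   # stand-in for the image generator
    classifier = nn.Linear(3 * 32 * 32, 10)   # stand-in for the target classifier
    token_embedding = torch.randn(768)        # embedding of the specified class token
    target_class = 3                          # ground-truth class we want the image to keep

    delta = torch.zeros_like(token_embedding, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.05)

    for step in range(100):
        image = generator(token_embedding + delta)   # "generate" an image
        logits = classifier(image).unsqueeze(0)
        # Maximize the classifier's loss on the true class so the image fools it,
        # while the small embedding perturbation keeps generation tied to that class.
        loss = -F.cross_entropy(logits, torch.tensor([target_class]))
        opt.zero_grad()
        loss.backward()
        opt.step()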
    Approximate nearest neighbor search (ANNS) is a key retrieval technique for vector databases and many data center applications, such as person re-identification and recommendation systems. It is also fundamental to retrieval-augmented generation (RAG) for large language models (LLMs). Among ANNS algorithms, graph-traversal-based ANNS achieves the highest recall rate. However, as the dataset grows, the graph may require hundreds of gigabytes of memory, exceeding the main memory capacity of a single workstation node. Although the graph can be partitioned and backed by a solid-state drive (SSD), the limited SSD I/O bandwidth severely degrades system performance. To address this challenge, we present NDSearch, a hardware-software co-designed near-data processing (NDP) solution for ANNS. NDSearch consists of a novel in-storage computing architecture, SearSSD, that supports the ANNS kernels and leverages logic unit (LUN)-level parallelism inside the NAND flash chips. NDSearch also includes a processing model customized for NDP that cooperates with SearSSD. The processing model enables two-level scheduling to improve data locality and exploit the internal bandwidth in NDSearch, and a speculative searching mechanism to further accelerate the ANNS workload (a sketch of the underlying graph traversal follows this entry). Our results show that NDSearch improves throughput by up to 31.7×, 14.6×, 7.4×, and 2.9× over CPU, GPU, a state-of-the-art SmartSSD-only design, and DeepStore, respectively. NDSearch also achieves two orders of magnitude higher energy efficiency than CPU and GPU.
    Free, publicly-accessible full text available June 29, 2025
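    The graph-traversal ANNS that NDSearch accelerates boils down to a greedy walk over a neighborhood graph. The sketch below shows that software algorithm only, over an illustrative brute-force k-NN graph built from random vectors; it does not model the SearSSD hardware, the two-level scheduling, or the speculative searching.

    # Generic greedy graph traversal for ANNS (illustration only).
    import numpy as np

    def greedy_search(vectors, graph, query, entry, max_hops=100):
        """graph[i] is the list of neighbor ids of node i."""
        current = entry
        best_dist = np.linalg.norm(vectors[current] - query)
        for _ in range(max_hops):
            improved = False
            for nbr in graph[current]:
                d = np.linalg.norm(vectors[nbr] - query)
                if d < best_dist:
                    best_dist, current, improved = d, nbr, True
            if not improved:   # local minimum: no neighbor is closer to the query
                break
        return current, best_dist

    rng = np.random.default_rng(0)
    vecs = rng.standard_normal((200, 32))
    # Build a crude k-NN graph by brute force (fine at this toy scale).
    dists = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
    knn_graph = [list(np.argsort(row)[1:9]) for row in dists]
    q = rng.standard_normal(32)
    print(greedy_search(vecs, knn_graph, q, entry=0))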
  6. Free, publicly-accessible full text available May 1, 2025
  7. Free, publicly-accessible full text available June 3, 2025
  8. Free, publicly-accessible full text available March 4, 2025
  9. Free, publicly-accessible full text available January 3, 2025