-
Abstract

Background: Hand hygiene (HH) matters because it decreases pathogen transmission that can cause infection. Automatic alcohol-based hand rub (ABHR) dispensers are widely adopted in healthcare facilities as the preferred means of HH. Traditional automatic dispensers hold a large supply of batteries in the dispenser housing, whereas energy-on-the-refill (EOR) is a newer power supply solution consisting of a relatively small battery attached to a refill bottle. The objective of this study was to assess the impact of dispenser design on missed HH opportunities and facility workflow disruption by mitigating battery maintenance.

Methods: We used data-driven discrete event simulation to evaluate the performance of three leading types of automatic dispensers in four common types of hospitals (Table 1). We analyzed up to 8 years of historical usage data and identified usage patterns, which were used as the input traffic for our simulation model. Dispenser energy performance parameters were inputs to measure the workflow disruption of the different dispenser types over a 6-year period in terms of battery replacements, duration of downtime, and the number of missed HH opportunities.

Table 1: Summary of facility information and dispense event details used to inform modeling. Notes: (1) This total included soap dispensers used for hand washing and ABHR dispensers that were not equipped with automated monitoring (e.g., those provided in office areas without patients). (2) These were the dispensers used in the analysis and modeling results. (3) Total dispenses for all ABHR dispensers during the entire time range. (4) The average number of daily ABHR dispenses for the facility across all dispensers.

Results: The simulation results suggested that dispensers with EOR technology were free of battery failures over the entire 6 years, and thus incurred 0 HH misses due to dead batteries (Figure 1). All other designs had a significant number of HH misses due to battery failures, ranging from 2,514 (± 547) to 40,522 (± 4,506) per facility. However, the majority of HH misses were caused by empty ABHR refills. The maximum number of battery change events was 802 (± 0.60).

Figure 1: Modeling results for each hospital and all dispenser types over the 6-year simulation for the ABHR dispensers that comprise 80% of usage based on prior data. The figure displays (a) total dispenser downtime in hours, (b) the total number of HH misses due to battery failures, (c) the total number of HH misses due to ABHR availability (assuming 12 hours until an empty refill is replaced), and (d) the total number of battery change-out events. 95% CIs were also illustrated but may be unnoticeable, as they are on a much smaller scale than the means. Notes: Dispenser A is a traditional design with 4 "D" cell batteries in the housing and an average dispense energy of 2.02 J/ml. Dispenser B is a traditional design with 3 "D" cell batteries in the housing and an average dispense energy of 1.78 J/ml. Dispenser C is a new design with a "AA" battery on the refill and an average dispense energy of 1.78 J/ml. Dispenser B is modeled at 1 dose and 2 doses because some of its ABHR formulations can require 2 dispenses to meet the Healthcare Personnel Handwash test method antimicrobial efficacy success criteria.

Conclusion: Differences in dispenser design, including the energy management system, and differences in usage profiles have a significant impact on HH performance, which in turn can affect infection risk.
By adopting the EOR system, facilities can effectively eliminate the need for battery maintenance, resulting in labor and workflow efficiencies. The EOR system significantly reduces HH disruptions and may decrease complaints by caregivers, patients, and visitors. Importantly, facilities should carefully study dispenser usage patterns to implement optimized policies and practices for the placement and refill maintenance of ABHR dispensers to minimize overall missed HH opportunities.

Disclosures: Nanshan Chen, n/a, GOJO Industries, Inc.: Grant/Research Support. James W. Arbogast, PhD, GOJO Industries, Inc.: Employee. John J. McNulty, n/a, GOJO Industries, Inc.: Employee. Paul J. Brown, n/a, GOJO Industries, Inc.: Employee. Demetrius Henry, n/a, GOJO Industries, Inc.: Employee. Susan O'Hara, PhD, GOJO Industries, Inc.: Grant/Research Support. Abedallah Al Kader, n/a, GOJO Industries, Inc.: Grant/Research Support. Angela Hu, n/a, GOJO Industries, Inc.: Grant/Research Support. Theodore T. Allen, PhD, GOJO Industries, Inc.: Grant/Research Support. Cathy H. Xia, PhD, GOJO Industries, Inc.: Grant/Research Support.
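The Methods above describe a data-driven discrete event simulation of dispenser traffic, battery depletion, and downtime. As a rough, hypothetical illustration of that style of model (the event rate, battery capacity, dose volume, and replacement delay below are illustrative placeholders rather than the study's measured parameters; only the 2.02 J/ml dispense energy is taken from the figure note), a minimal sketch might look like the following:

```python
import random

# Illustrative placeholders only -- not the study's measured parameters.
DISPENSES_PER_DAY = 300          # mean dispense events per dispenser per day (assumed)
DISPENSE_ENERGY_J_PER_ML = 2.02  # e.g., "Dispenser A" from the figure note
DOSE_ML = 1.1                    # assumed dose volume
BATTERY_CAPACITY_J = 60000.0     # assumed usable energy of the battery pack
REPLACEMENT_DELAY_H = 12.0       # assumed wait until maintenance replaces dead batteries
SIM_YEARS = 6

def simulate(seed=0):
    rng = random.Random(seed)
    t = 0.0                              # simulation clock, in hours
    horizon = SIM_YEARS * 365 * 24.0
    energy_left = BATTERY_CAPACITY_J
    down_until = 0.0                     # dispenser unavailable before this time
    missed = changes = downtime = 0
    mean_gap_h = 24.0 / DISPENSES_PER_DAY
    while t < horizon:
        t += rng.expovariate(1.0 / mean_gap_h)   # next hand-hygiene opportunity
        if t < down_until:
            missed += 1                          # dead battery -> missed HH opportunity
            continue
        energy_left -= DISPENSE_ENERGY_J_PER_ML * DOSE_ML
        if energy_left <= 0.0:                   # battery exhausted by this dispense
            changes += 1
            downtime += REPLACEMENT_DELAY_H
            down_until = t + REPLACEMENT_DELAY_H
            energy_left = BATTERY_CAPACITY_J     # fresh batteries after replacement
    return {"missed_HH": missed, "battery_changes": changes,
            "downtime_hours": round(downtime, 1)}

if __name__ == "__main__":
    print(simulate())
```

Running many such replications with different seeds and per-dispenser parameters is what would yield the kind of mean and confidence-interval estimates reported in the Results.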
-
Researchers have recently repurposed the emerging ray-tracing (RT) cores on GPUs for non-ray-tracing tasks. In this paper, we explore the benefits and effectiveness of executing graph algorithms on RT cores. We redesign breadth-first search and triangle counting on the new hardware as representative graph algorithms. Our implementations focus on how to convert graph operations to bounding volume hierarchy (BVH) construction and ray generation, which are computational paradigms specific to ray tracing. We evaluate our RT-based methods on a wide range of real-world datasets. The results do not show an advantage of the RT-based methods over CUDA-based methods. We extend the experiments to the set intersection workload on synthesized datasets, where the RT-based method shows superior performance when the skew ratio is high. By carefully comparing the RT-based and CUDA-based binary search, we discover that RT cores are more efficient at searching for elements, but this comes with a constant, non-trivial overhead in the execution pipeline. Furthermore, the overhead of BVH construction is substantially higher than that of sorting on CUDA cores for large datasets. Our case studies reveal several rules for adapting graph algorithms to ray-tracing cores that may benefit the future evolution of the emerging hardware toward general-computing tasks.
Free, publicly-accessible full text available May 27, 2026.
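For context, the CUDA baseline mentioned above phrases set intersection as a per-element binary search over sorted lists. A minimal CPU-side sketch of that idea (function names here are illustrative, not from the paper) is:

```python
from bisect import bisect_left

def intersect_count(small_sorted, large_sorted):
    """Count common elements by binary-searching each element of the
    smaller sorted list in the larger sorted list.  This mirrors the
    per-thread work of a CUDA binary-search kernel, where each thread
    would handle one probe element."""
    count = 0
    n = len(large_sorted)
    for x in small_sorted:
        i = bisect_left(large_sorted, x)
        if i < n and large_sorted[i] == x:
            count += 1
    return count

# Triangle counting can be phrased this way: for each edge (u, v), the
# number of triangles through that edge is |N(u) ∩ N(v)| over sorted
# adjacency lists.
print(intersect_count([2, 5, 9, 14], [1, 2, 3, 5, 8, 13, 14, 21]))  # -> 3
```

The cost is roughly |small| · log |large|, which is one reason the skew between the two list sizes matters when comparing this baseline against an RT-based formulation.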
-
The Ray-Tracing (RT) core has become a widely integrated feature in modern GPUs to accelerate ray-tracing rendering. Recent research has shown that RT cores can also be repurposed to accelerate non-rendering workloads. Since the RT core essentially serves as a hardware accelerator for Bounding Volume Hierarchy (BVH) tree traversal, it holds the potential to significantly improve the performance of spatial workloads. However, the specialized RT programming model poses challenges for using RT cores in these scenarios. Inspired by the core functionality of RT cores, we designed and implemented LibRTS, a spatial index library that leverages RT cores to accelerate spatial queries. LibRTS supports both point and range queries and remains mutable to accommodate changing data. Instead of relying on a case-by-case approach, LibRTS provides a general, high-performance spatial indexing framework for spatial data processing. By formulating spatial queries as RT-suitable problems and overcoming load-balancing challenges, LibRTS delivers superior query performance through RT cores without requiring developers to master complex programming on this specialized hardware. Compared to CPU and GPU spatial libraries, LibRTS achieves speedups of up to 85.1x for point queries, 94.0x for range-contains queries, and 11.0x for range-intersects queries. In a real-world application, point-in-polygon testing, LibRTS also surpasses the state-of-the-art RT method by up to 3.8x.
Free, publicly-accessible full text available February 28, 2026.
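The primitive that RT cores natively accelerate, and that a library like LibRTS builds on, is ray-versus-AABB intersection during BVH traversal. A minimal CPU-side sketch of that primitive (the standard slab test; the zero-length-ray encoding of a point query is an illustrative convention, not necessarily LibRTS's internal scheme) is:

```python
def ray_aabb_hit(origin, direction, box_min, box_max, t_max=1e-9):
    """Slab test: does a ray starting at `origin` with direction
    `direction` intersect the box within parameter range [0, t_max]?
    A point query can be encoded as a near-zero-length ray from the
    query point, so a hit means the point lies inside the box."""
    t_enter, t_exit = 0.0, t_max
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-30:                 # ray parallel to this slab
            if o < lo or o > hi:
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_enter, t_exit = max(t_enter, t0), min(t_exit, t1)
        if t_enter > t_exit:
            return False
    return True

# Point-in-rectangle query encoded as a tiny ray in 2D:
print(ray_aabb_hit((2.0, 3.0), (1.0, 0.0), (1.0, 1.0), (4.0, 4.0)))  # True
print(ray_aabb_hit((9.0, 3.0), (1.0, 0.0), (1.0, 1.0), (4.0, 4.0)))  # False
```

In hardware, this test runs against an entire BVH of boxes, so a single traversal can report every indexed box that the tiny ray (i.e., the query point) falls inside.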
-
This guide illuminates the intricate relationship between data management, computer architecture, and system software. It traces the evolution of computing to today's data-centric focus and underscores the importance of hardware-software co-design in achieving efficient data processing systems with high throughput and low latency. The thorough coverage includes topics such as logical data formats, memory architecture, GPU programming, and the innovative use of ray tracing in computational tasks. Special emphasis is placed on minimizing data movement within memory hierarchies and optimizing data storage and retrieval. Tailored for professionals and students in computer science, this book combines theoretical foundations with practical applications, making it an indispensable resource for anyone wanting to master the synergies between data management and computing infrastructure.
Free, publicly-accessible full text available November 21, 2025.
-
Fixed-point decimal operations in databases with arbitrary-precision arithmetic refer to the ability to store and operate on decimal fraction numbers with an arbitrary number of digits. This type of operation has become a requirement for many applications, including scientific databases, financial data processing, geometric data processing, and cryptography. However, state-of-the-art fixed-point decimal technology either provides high performance for low-precision operations or supports arbitrary-precision arithmetic operations at low performance. In this paper, we present the design and implementation of a framework called UltraPrecise, which supports arbitrary-precision arithmetic for databases on GPU, aiming to achieve high performance for arbitrary-precision arithmetic operations. We build our framework on the just-in-time compilation technique and optimize its performance via data representation design, PTX acceleration, and expression scheduling. UltraPrecise achieves performance comparable to other high-performance databases for low-precision arithmetic operations. For high-precision operations, we show that UltraPrecise consistently outperforms existing databases by two orders of magnitude, including on workloads of RSA encryption and trigonometric function approximation.
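To make "arbitrary-precision fixed-point" concrete: such values are typically stored as arrays of fixed-width digit limbs with an implied decimal scale, and arithmetic propagates carries across limbs. The sketch below uses an illustrative base-10^9 limb and is not UltraPrecise's internal representation:

```python
LIMB_BASE = 10**9   # each limb holds 9 decimal digits (illustrative choice)

def to_limbs(decimal_str, scale):
    """Encode a decimal string as (sign, little-endian limbs) with `scale`
    fractional digits, e.g. '3.14' with scale=4 -> integer 31400."""
    sign = -1 if decimal_str.startswith('-') else 1
    s = decimal_str.lstrip('+-')
    intpart, _, frac = s.partition('.')
    frac = (frac + '0' * scale)[:scale]          # pad/truncate the fraction
    value = int(intpart + frac)
    limbs = []
    while value:
        value, limb = divmod(value, LIMB_BASE)
        limbs.append(limb)
    return sign, limbs or [0]

def add_limbs(a, b):
    """Add two little-endian limb arrays with carry propagation
    (signs assumed equal and handled by the caller)."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, limb = divmod(s, LIMB_BASE)
        out.append(limb)
    if carry:
        out.append(carry)
    return out

# 123456789.000000001 + 0.999999999 at scale 9 -> 123456790.000000000
_, a = to_limbs('123456789.000000001', 9)
_, b = to_limbs('0.999999999', 9)
print(add_limbs(a, b))   # [0, 123456790], i.e. 123456790 * 10^9 + 0
```

Higher-level operations (multiplication, comparison, rounding) compose from the same limb-level building blocks, with precision bounded only by the limb array length.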
-
The tree edit distance (TED), which serves as a metric to quantify the dissimilarity between two trees, has found a wide spectrum of applications in artificial intelligence, bioinformatics, and other areas. As applications continue to scale in data size, with a growing demand for fast response times, TED computation has become increasingly data- and compute-intensive. Over the years, researchers have made dedicated efforts to improve sequential TED algorithms by reducing their high complexity. However, achieving efficient parallel TED computation in both algorithm and implementation is challenging due to its dynamic programming nature, which involves non-trivial issues of data dependency, runtime execution pattern changes, and optimal utilization of limited parallel resources. Having comprehensively investigated the bottlenecks in existing parallel TED algorithms, we develop X-TED, a massively parallel computation framework for TED with a GPU implementation. For a given TED computation, X-TED applies a fast preprocessing algorithm to identify dependency relationships among millions of dynamic programming tables. It then adopts a dynamic parallel strategy to handle the various processing stages, aiming to best utilize GPU cores and the limited device memory in an adaptive and automatic way. Our extensive experimental results demonstrate that X-TED surpasses all existing solutions, achieving up to a 42x speedup over the state-of-the-art sequential AP-TED and outperforming the existing multicore parallel MC-TED by an average speedup of 31x.
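As background on what the dynamic programming tables mentioned above contain: the tree edit distance is defined by a recursion over forests with delete, insert, and relabel operations, and sequential algorithms memoize its subproblems. A minimal, unmemoized sketch of that standard recursion for ordered trees (exponential time, purely to show the subproblem structure that algorithms like X-TED organize into tables) is:

```python
def ted(f1, f2):
    """Edit distance between two ordered forests, each a tuple of trees
    of the form (label, children).  Unit cost per delete, insert, relabel."""
    if not f1 and not f2:
        return 0
    if not f1:
        return sum(size(t) for t in f2)          # insert everything in f2
    if not f2:
        return sum(size(t) for t in f1)          # delete everything in f1
    (l1, c1), rest1 = f1[-1], f1[:-1]            # rightmost tree of each forest
    (l2, c2), rest2 = f2[-1], f2[:-1]
    return min(
        ted(rest1 + tuple(c1), f2) + 1,          # delete root of rightmost tree in f1
        ted(f1, rest2 + tuple(c2)) + 1,          # insert root of rightmost tree in f2
        ted(tuple(c1), tuple(c2))                # match the two roots:
        + ted(rest1, rest2)                      #   plus the remaining forests
        + (0 if l1 == l2 else 1),                #   plus the relabel cost
    )

def size(tree):
    label, children = tree
    return 1 + sum(size(c) for c in children)

# Example: f(a, b) vs f(a, c) differ by one relabel.
t1 = ('f', (('a', ()), ('b', ())))
t2 = ('f', (('a', ()), ('c', ())))
print(ted((t1,), (t2,)))   # -> 1
```

Memoizing ted over pairs of subforests yields the dense DP tables whose inter-table dependencies X-TED's preprocessing analyzes before scheduling them onto GPU cores.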
