-
Land-use and land-cover change affects weather and climate. This paper quantifies land–atmosphere interactions over irrigated and nonirrigated land uses during the Great Plains Irrigation Experiment (GRAINEX). Three coupling metrics were used to quantify land–atmosphere interactions as they relate to convection: the convective triggering potential (CTP), the low-level humidity index (HIlow), and the lifting condensation level (LCL) deficit. These metrics were calculated from the rawinsonde data obtained from the Integrated Sounding Systems (ISSs) at Rogers Farm and York Airport, along with soundings launched from the three Doppler on Wheels (DOW) sites. Each metric was categorized by intensive observation period (IOP), cloud cover, and time of day. Results show that with higher CTP, lower HIlow, and lower LCL deficit, conditions were more favorable for convective development over irrigated land use. When metrics were grouped and analyzed by IOP, HIlow was found to be lower for irrigated land use than for nonirrigated land use, suggesting favorable conditions for convective development. Furthermore, when metrics were grouped and analyzed by clear and nonclear days, CTP values were higher over irrigated cropland than over nonirrigated land use. In addition, the LCL deficit during the peak growing season was lower over irrigated land use than over nonirrigated land use, suggesting favorable conditions for convection. With the transition from early summer to the mid/peak summer and increased irrigation, the environment became more favorable for convective development over irrigated land use. Finally, regardless of background atmospheric conditions, irrigated land use provided a favorable environment for convective development.
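The abstract does not spell out how the coupling metrics are computed. Below is a minimal sketch of the low-level humidity index, assuming the standard Findell–Eltahir formulation (the sum of dewpoint depressions at 950 and 850 hPa) and a simple sounding profile; the interpolation scheme and toy sounding values are illustrative assumptions, not GRAINEX data.

```python
import numpy as np

def humidity_index_low(pressure_hpa, temp_c, dewpoint_c):
    """Low-level humidity index (HI_low): sum of the dewpoint depressions
    at 950 hPa and 850 hPa, interpolated from a sounding profile.
    Lower values indicate a moister low-level atmosphere."""
    # np.interp expects increasing x, so sort the profile by pressure first.
    order = np.argsort(pressure_hpa)
    p, t, td = pressure_hpa[order], temp_c[order], dewpoint_c[order]
    hi = 0.0
    for level in (950.0, 850.0):
        t_lvl = np.interp(level, p, t)
        td_lvl = np.interp(level, p, td)
        hi += t_lvl - td_lvl  # dewpoint depression at this level (deg C)
    return hi

# Toy sounding (pressure in hPa, temperatures in deg C), purely illustrative
p = np.array([1000.0, 975.0, 950.0, 900.0, 850.0, 800.0])
t = np.array([30.0, 28.0, 26.5, 23.0, 19.5, 16.0])
td = np.array([22.0, 21.0, 20.0, 16.0, 12.0, 8.0])
print(humidity_index_low(p, t, td))  # 14.0 deg C in this toy case
```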
-
This survey explores the transformative impact of foundation models (FMs) in artificial intelligence, focusing on their integration with federated learning (FL) in biomedical research. Foundation models such as ChatGPT, LLaMa, and CLIP, which are trained on vast datasets through methods including unsupervised pretraining, self-supervised learning, instruction fine-tuning, and reinforcement learning from human feedback, represent significant advancements in machine learning. These models, with their ability to generate coherent text and realistic images, are crucial for biomedical applications that require processing diverse data forms such as clinical reports, diagnostic images, and multimodal patient interactions. Incorporating FL with these sophisticated models is a promising strategy to harness their analytical power while safeguarding the privacy of sensitive medical data. This approach not only enhances the capabilities of FMs in medical diagnostics and personalized treatment but also addresses critical concerns about data privacy and security in healthcare. This survey reviews the current applications of FMs in federated settings, underscores the challenges, and identifies future research directions, including scaling FMs, managing data diversity, and enhancing communication efficiency within FL frameworks. The objective is to encourage further research into the combined potential of FMs and FL, laying the groundwork for healthcare innovations.
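The survey describes federated learning only at a high level; the sketch below shows the basic FedAvg aggregation step as one common FL baseline, not any specific method reviewed in the survey. The client parameter vectors and dataset sizes are made-up toy values.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Minimal FedAvg: a weighted average of client parameter vectors,
    weighted by local dataset size. Clients never share raw data,
    only model parameters/updates."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg

# Toy example: three sites fine-tune a (tiny) shared model locally
clients = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
sizes = [100, 300, 600]
print(fedavg(clients, sizes))
```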
-
Sorting is a fundamental operation in various applications and a traditional research topic in computer science. Improving the performance of sorting operations can have a significant impact on many application domains. Much attention has been paid to hardware-based solutions for high-performance sorting. These are often realized with application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). Recently, in-memory sorting solutions have also been proposed to address the data-movement cost between memory and processing units, also known as the von Neumann bottleneck. Due to the complexity of sorting algorithms, achieving an efficient hardware implementation for sorting data is challenging. A large body of prior solutions is built on compare-and-swap (CAS) units; these are categorized as comparison-based sorting. Some recent solutions offer comparison-free sorting. In this survey, we review the latest works in the area of hardware-based sorting. We also discuss recent hardware solutions for partial and stream sorting. Finally, we discuss some important concerns that need to be considered in future designs of sorting systems.
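As a software illustration of the compare-and-swap primitive that underlies comparison-based hardware sorters, the sketch below simulates an odd-even transposition sorting network; the actual ASIC/FPGA designs covered by the survey differ in network topology and implementation details.

```python
def compare_and_swap(a, b):
    """The CAS primitive: outputs (min, max). In hardware this is a
    comparator plus two multiplexers; here it is simulated in software."""
    return (a, b) if a <= b else (b, a)

def odd_even_transposition_sort(values):
    """A simple sorting network built only from CAS units. The sequence of
    compare positions is fixed and data-independent, which is what makes
    such networks attractive for ASIC/FPGA implementation."""
    v = list(values)
    n = len(v)
    for phase in range(n):
        start = phase % 2  # alternate between odd and even phases
        for i in range(start, n - 1, 2):
            v[i], v[i + 1] = compare_and_swap(v[i], v[i + 1])
    return v

print(odd_even_transposition_sort([7, 3, 9, 1, 4, 8, 2, 6]))
```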
-
Federated Learning (FL) is a promising distributed machine learning framework that allows collaborative learning of a global model across decentralized devices without uploading their local data. However, in real-world FL scenarios, the conventional synchronous FL mechanism suffers from inefficient training caused by slow devices, commonly known as stragglers, especially in heterogeneous communication environments. Although asynchronous FL effectively tackles the efficiency challenge, it induces substantial system overheads and model degradation. Striking a balance, semi-asynchronous FL has gained increasing attention, yet it still suffers from the open challenge of stale models, where newly arrived updates are calculated from outdated weights and can hurt the convergence of the global model. In this paper, we present SEAFL, a novel FL framework designed to mitigate both the straggler and stale-model challenges in semi-asynchronous FL. SEAFL dynamically assigns weights to uploaded models during aggregation based on their staleness and their importance to the current global model. We theoretically analyze the convergence rate of SEAFL and further enhance training efficiency with an extended variant that allows partial training on slower devices, enabling them to contribute to global aggregation while reducing excessive waiting times. We evaluate the effectiveness of SEAFL through extensive experiments on three benchmark datasets. The experimental results demonstrate that SEAFL outperforms its closest counterpart by up to ∼22% in the wall-clock training time required to reach target accuracy.
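The abstract does not give SEAFL's exact weighting rule; the sketch below illustrates a generic staleness-discounted aggregation step, where the polynomial discount factor and the mixing ratio are assumptions borrowed from the broader asynchronous-FL literature rather than the paper's formula.

```python
import numpy as np

def staleness_weighted_aggregate(global_model, client_models, staleness,
                                 alpha=0.6, mix=0.5):
    """Hedged sketch of semi-asynchronous aggregation: each arriving client
    model is down-weighted by its staleness (rounds since the weights it was
    computed from). The (s+1)^-alpha discount and the global/aggregate mixing
    ratio are illustrative choices, not necessarily SEAFL's rule."""
    w = np.array([(s + 1.0) ** (-alpha) for s in staleness])
    w /= w.sum()
    aggregated = sum(wi * m for wi, m in zip(w, client_models))
    # Blend the staleness-weighted aggregate with the current global model.
    return (1.0 - mix) * global_model + mix * aggregated

g = np.array([0.5, 0.5])
clients = [np.array([0.6, 0.4]), np.array([0.9, 0.1])]  # fresh vs. stale update
print(staleness_weighted_aggregate(g, clients, staleness=[0, 5]))
```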
-
Accurate and timely regional weather prediction is vital for sectors dependent on weather-related decisions. Traditional prediction methods, based on atmospheric equations, often struggle with coarse temporal resolutions and inaccuracies. This article presents a novel machine learning (ML) model, called Micro–Macro (MiMa), that integrates both near-surface observational data from Kentucky Mesonet stations (collected every 5 min, known as Micro data) and hourly atmospheric numerical outputs (termed Macro data) for fine-resolution weather forecasting. The MiMa model employs an encoder–decoder transformer structure, with two encoders for processing multivariate data from both datasets and a decoder for forecasting weather variables over short time horizons. Each instance of the MiMa model, called a modelet, predicts the values of a specific weather parameter at an individual mesonet station. The approach is extended with Regional MiMa (Re-MiMa) modelets, which are designed to predict weather variables at ungauged locations by training on multivariate data from a few representative stations in a region, tagged with their elevations. Re-MiMa can provide highly accurate predictions across an entire region, even in areas without observational stations. Experimental results show that MiMa significantly outperforms current models, with Re-MiMa offering precise short-term forecasts for ungauged locations, marking a significant advancement in weather forecasting accuracy and applicability.
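Architectural details of MiMa beyond the two-encoder/one-decoder layout are not given in the abstract. The PyTorch-style sketch below shows one plausible way two encoder streams (Micro and Macro) can feed a single decoder; all dimensions, layer counts, and the learned-query decoding scheme are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DualEncoderForecaster(nn.Module):
    """Hedged sketch of a Micro/Macro dual-encoder transformer: two encoders
    process the two input streams, their outputs are concatenated along the
    sequence axis, and a single decoder attends to the combined memory to
    emit a short-horizon forecast of one weather variable."""
    def __init__(self, micro_dim, macro_dim, d_model=64, horizon=6):
        super().__init__()
        self.micro_proj = nn.Linear(micro_dim, d_model)
        self.macro_proj = nn.Linear(macro_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.micro_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.macro_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        # Learned queries, one per forecast step (an assumption of this sketch).
        self.query = nn.Parameter(torch.randn(horizon, d_model) * 0.02)
        self.head = nn.Linear(d_model, 1)  # one target weather variable

    def forward(self, micro, macro):
        # micro: (batch, T_micro, micro_dim); macro: (batch, T_macro, macro_dim)
        mem = torch.cat([self.micro_enc(self.micro_proj(micro)),
                         self.macro_enc(self.macro_proj(macro))], dim=1)
        tgt = self.query.unsqueeze(0).expand(micro.size(0), -1, -1)
        return self.head(self.decoder(tgt, mem)).squeeze(-1)  # (batch, horizon)

model = DualEncoderForecaster(micro_dim=8, macro_dim=12)
out = model(torch.randn(2, 24, 8), torch.randn(2, 6, 12))
print(out.shape)  # torch.Size([2, 6])
```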
-
Hyperdimensional computing (HDC) has emerged as a promising paradigm offering lightweight yet powerful computing capabilities with inherent learning characteristics. By leveraging binary hyperdimensional vectors, HDC facilitates efficient and robust data processing, surpassing traditional machine learning (ML) approaches in terms of both speed and resilience. This letter addresses key challenges in HDC systems, particularly the conversion of data into the hyperdimensional domain and the integration of HDC with conventional ML frameworks. We propose a novel solution, the hyperdimensional vector-quantized variational autoencoder (HDVQ-VAE), which seamlessly merges binary encodings with codebook representations in ML systems. Our approach significantly reduces memory overhead while enhancing training by replacing traditional codebooks with binary (−1, +1) counterparts. Leveraging this architecture, we demonstrate improved encoding–decoding procedures that produce high-quality images within acceptable peak signal-to-noise ratio (PSNR) ranges. Our work advances HDC by considering efficient deployment of ML systems on embedded devices.
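As a hedged sketch of the core quantization step implied by the abstract, the code below assigns real-valued latents to the nearest codeword in a binary (−1, +1) codebook; the encoder/decoder networks, training losses, and hyperdimensional encoding details are omitted, and the nearest-neighbor assignment rule shown is an assumption of this sketch.

```python
import numpy as np

def binary_codebook_quantize(latents, codebook):
    """Map each latent vector to the closest codeword in a binary (-1,+1)
    codebook. Because every binary codeword has the same norm, minimizing
    Euclidean distance is equivalent to maximizing the dot product."""
    # latents: (N, D) real-valued encoder outputs; codebook: (K, D) in {-1,+1}
    scores = latents @ codebook.T      # (N, K) similarity to each codeword
    idx = scores.argmax(axis=1)        # index of the closest binary codeword
    return codebook[idx], idx

rng = np.random.default_rng(0)
D, K = 64, 16
codebook = rng.choice([-1.0, 1.0], size=(K, D))  # toy binary codebook
latents = rng.normal(size=(5, D))
quantized, idx = binary_codebook_quantize(latents, codebook)
print(idx, quantized.shape)
```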
-
Low-cost and hardware-efficient design of trigonometric functions is challenging. Stochastic computing (SC), an emerging computing model that processes random bit-streams, offers promising solutions for this problem. The existing implementations, however, often overlook the importance of the data converters needed to generate the bit-streams. While recent advancements in SC bit-stream generators focus on basic arithmetic operations such as multiplication and addition, energy-efficient SC design of non-linear functions demands attention to both the computation circuit and the bit-stream generator. This work introduces TriSC, a novel approach for SC-based design of trigonometric functions that leverages state-of-the-art (SOTA) quasi-random bit-streams. Unlike SOTA SC designs of trigonometric functions that rely heavily on delay elements to decorrelate bit-streams, our approach avoids delay elements while improving the accuracy of the results. TriSC yields significant energy savings of up to 92% compared to SOTA designs. As two novel use cases studied for the first time in the SC literature, we employ the proposed design for 2D image transformation and forward kinematics of a robotic arm, two computation-intensive applications demanding low-cost trigonometric designs.
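The full TriSC trigonometric circuits cannot be reconstructed from the abstract alone; the sketch below only illustrates the SC building blocks it relies on: unipolar bit-stream generation from quasi-random (Sobol) numbers and multiplication with a single AND gate. It assumes SciPy's qmc module and uses toy operand values.

```python
import numpy as np
from scipy.stats import qmc

def to_bitstream(value, rand_numbers):
    """Unipolar SC encoding: bit i is 1 iff the i-th (quasi-)random number is
    below the value being encoded, so the mean of the stream ~= value."""
    return (rand_numbers < value).astype(np.uint8)

N = 256                                            # bit-stream length (power of 2)
sobol = qmc.Sobol(d=2, scramble=False).random(N)   # two low-discrepancy sequences

x, y = 0.75, 0.5
sx = to_bitstream(x, sobol[:, 0])   # encode each operand with an independent Sobol dimension
sy = to_bitstream(y, sobol[:, 1])
product = np.bitwise_and(sx, sy)    # SC multiplication is a single AND gate
print(product.mean())               # ~0.375 = x * y
```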
-
Hyperdimensional computing (HDC) is an emerging computing paradigm with significant promise for efficient and robust learning. In HDC, objects are encoded with high-dimensional vector symbolic sequences called hypervectors. The quality of hypervectors, defined by their distribution and independence, directly impacts the performance of HDC systems. Despite a large body of work on the processing parts of HDC systems, little to no attention has been paid to data encoding and the quality of hypervectors. Most prior studies have generated hypervectors using built-in random functions, such as MATLAB's or Python's random function. This work introduces an optimization technique for generating hypervectors by employing quasi-random sequences. These sequences have recently demonstrated their effectiveness in achieving accurate and low-discrepancy data encoding in stochastic computing systems. The study outlines the optimization steps for utilizing Sobol sequences to produce high-quality hypervectors in HDC systems. An optimization algorithm is proposed to select the most suitable Sobol sequences, via their indexes, for generating minimally correlated hypervectors, particularly in applications related to symbol-oriented architectures. The performance of the proposed technique is evaluated against two traditional approaches for generating hypervectors, based on linear-feedback shift registers and MATLAB random functions. The evaluation is conducted for three applications: (i) language, (ii) headline, and (iii) medical image classification. Our experimental results demonstrate accuracy improvements of up to 10.79%, depending on the vector size. Additionally, the proposed encoding hardware exhibits reduced energy consumption and a superior area–delay product.
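A minimal sketch of the general idea, generating bipolar hypervectors by thresholding scrambled Sobol samples and checking their pairwise (near-orthogonal) Hamming distance, is given below; the paper's index-selection optimization is not reproduced, and the use of SciPy's qmc.Sobol and the chosen dimensions are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import qmc

def sobol_hypervectors(num_vectors, dim, seed=1):
    """Hedged sketch: draw one scrambled Sobol dimension per hypervector and
    threshold at 0.5 to obtain bipolar (-1,+1) components. The paper's index
    selection for minimally correlated hypervectors is not shown here."""
    samples = qmc.Sobol(d=num_vectors, scramble=True, seed=seed).random(dim)
    return np.where(samples.T > 0.5, 1, -1)  # shape: (num_vectors, dim)

def normalized_hamming(a, b):
    """Fraction of positions where two bipolar hypervectors disagree;
    values near 0.5 indicate the near-orthogonality HDC encoding needs."""
    return np.mean(a != b)

hvs = sobol_hypervectors(num_vectors=4, dim=4096)
print(normalized_hamming(hvs[0], hvs[1]))  # expected to be close to 0.5
```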
