-
Powdery mildew is an economically important disease caused by c. 1000 different fungal species. Erysiphe vaccinii is an emerging powdery mildew species that is impacting the blueberry industry. Once confined to North America, E. vaccinii is now spreading rapidly across major blueberry-growing regions, including China, Morocco, Mexico, and the USA, threatening millions in losses. This study documents its recent global spread by analyzing both herbarium specimens, some more than 150 years old, and fresh samples collected world-wide. Our findings were integrated into a ‘living phylogeny’ via T-BAS to simplify pathogen identification and enable rapid responses to new outbreaks. We identified 50 haplotypes, two primary introductions world-wide, and revealed a shift from a generalist to a specialist pathogen. This research provides insights into the complexities of host specialization and highlights the need to address this emerging global threat to blueberry production.
Free, publicly-accessible full text available April 1, 2026
-
Xu, H., Liu, M., Bu, Y., Sun, S., Zhang, Y., Zhang, C., Acuna, D. E., Gray, S., Meyer, E., & Ding, Y. (2024). The impact of heterogeneous shared leadership in scientific teams. Information Processing & Management, 61(1), 103542.
Free, publicly-accessible full text available August 20, 2025
-
A coreset is a small set that summarizes a large dataset, such that training solely on the small set achieves performance competitive with training on the full dataset. In rehearsal-based continual learning, the coreset is typically used in the memory replay buffer to stand for representative samples from previous tasks, and the coreset selection procedure is typically formulated as a bilevel problem. However, the typical bilevel formulation for coreset selection explicitly performs optimization over discrete decision variables with greedy search, which is computationally expensive. Several works consider other formulations to address this issue, but they ignore the nested nature of bilevel optimization problems and may not solve the bilevel coreset selection problem accurately. To address these issues, we propose a new bilevel formulation, where the inner problem tries to find a model which minimizes the expected training error sampled from a given probability distribution, and the outer problem aims to learn the probability distribution with approximately $$K$$ (coreset size) nonzero entries such that the model learned in the inner problem minimizes the training error over the whole data. To ensure the learned probability has approximately $$K$$ nonzero entries, we introduce a novel regularizer based on the smoothed top-$$K$$ loss in the outer problem. We design a new optimization algorithm that provably converges to an $$\epsilon$$-stationary point with $$O(1/\epsilon^4)$$ computational complexity. We conduct extensive experiments in various continual learning settings, including balanced data, imbalanced data, and label noise, to show that our proposed formulation and new algorithm significantly outperform competitive baselines. From a bilevel optimization point of view, our algorithm significantly improves on the vanilla greedy coreset selection method in terms of running time on continual learning benchmark datasets. The code is available at \url{https://github.com/MingruiLiu-ML-Lab/Bilevel-Coreset-Selection-via-Regularization}.
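A minimal sketch of the alternating scheme this formulation suggests, on a toy least-squares problem with a one-step-unrolled inner update; `smoothed_topk_penalty`, the step sizes, and the toy data are illustrative assumptions, not the paper's exact algorithm or regularizer:

```python
import torch

torch.manual_seed(0)
n, d, K, lr_inner = 200, 10, 20, 0.05
X = torch.randn(n, d)
y = X @ torch.randn(d) + 0.1 * torch.randn(n)

w = torch.zeros(d, requires_grad=True)  # inner variable: linear model weights
s = torch.zeros(n, requires_grad=True)  # outer variable: logits of the distribution p
opt_s = torch.optim.SGD([s], lr=0.5)

def smoothed_topk_penalty(p, k, tau=0.05):
    # Soft surrogate for "p has ~k nonzero entries": penalize the probability
    # mass falling outside the k largest entries (an assumed form, not the
    # paper's exact smoothed top-K loss).
    return (1.0 - torch.topk(p, k).values.sum()) / tau

for step in range(300):
    p = torch.softmax(s, dim=0)
    per_sample = (X @ w - y) ** 2
    # Inner problem: one gradient step on the p-weighted training error,
    # kept differentiable so the outer gradient can flow back into p.
    g = torch.autograd.grad((p * per_sample).sum(), w, create_graph=True)[0]
    w_new = w - lr_inner * g
    # Outer problem: the p-trained model should fit the whole dataset,
    # while p concentrates on roughly K samples.
    outer = ((X @ w_new - y) ** 2).mean() + smoothed_topk_penalty(p, K)
    opt_s.zero_grad()
    outer.backward()
    opt_s.step()
    with torch.no_grad():
        w.copy_(w_new)
    w.grad = None

coreset_indices = torch.topk(torch.softmax(s, dim=0), K).indices  # selected samples
```

The one-step unrolling stands in for solving the inner problem exactly; it keeps the sketch short while still letting the outer gradient reflect how the distribution shapes the trained model.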
-
Recent research has highlighted the effectiveness of advanced building controls in reducing the energy consumption of heating, ventilation, and air-conditioning (HVAC) systems. Among advanced building control strategies, deep reinforcement learning (DRL) control shows the potential to achieve energy savings for HVAC systems and has emerged as a promising strategy. However, training DRL requires an interactive environment for the agent, which is challenging to provide with real buildings due to time and response-speed constraints. To address this challenge, a simulation environment is needed to serve as the training environment, even though the DRL algorithm does not necessarily need a model. Error between the model and the real building is inevitable in this process and may influence the efficiency of the DRL controller. To investigate the impact of model error, a virtual testbed was established. A high-fidelity Modelica-based model was developed to serve as the virtual building. Three reduced-order models (ROMs), i.e., 3R2C, Light Gradient Boosting Machine (LightGBM), and artificial neural network (ANN) models, were trained with historical data generated from the virtual building and were embedded in the DRL training environments. The sensitivity of the ROMs and the Modelica model to random and periodic actions was tested and compared. Deploying a policy trained in a ROM-based environment (standing for the surrogate model used in practice) into the Modelica-based virtual building testing environment (standing for the real building) is a practical approach to implementing DRL control. The performance of this practical DRL controller was compared with rule-based control (RBC) and with an ideal DRL controller that was trained and deployed entirely in the virtual building environment. In the final episode with the best rewards of the case study, the 3R2C-, LightGBM-, and ANN-based DRL controllers outperform the RBC by 7.4%, 14.4%, and 11.4%, respectively, in terms of the reward, which comprises the weighted sum of energy cost, temperature violations, and the slew rate of the control signal, but they fall short of the ideal Modelica-based DRL controller, which outperforms RBC by 29.5%. The DRL controllers based on data-driven models are highly unstable, with higher maximum rewards but much lower average rewards, which might be caused by significant prediction defects in certain action regions of the data-driven models.
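A minimal sketch of a composite reward of the kind described above, assuming a weighted sum of energy cost, comfort-band violation, and control slew; the function name `hvac_reward`, the weights, and the comfort band are illustrative assumptions, not the study's tuned values:

```python
def hvac_reward(energy_cost, zone_temp, prev_action, action,
                t_low=21.0, t_high=24.0,
                w_cost=1.0, w_comfort=10.0, w_slew=0.1):
    """Negative weighted sum of energy cost, comfort violation, and slew
    (assumed weights and comfort band, for illustration only)."""
    # Temperature violation: distance outside the comfort band, in deg C.
    violation = max(t_low - zone_temp, 0.0) + max(zone_temp - t_high, 0.0)
    # Slew rate: penalize large step-to-step changes in the control signal.
    slew = abs(action - prev_action)
    return -(w_cost * energy_cost + w_comfort * violation + w_slew * slew)

# Example step: slightly too warm, with a moderate setpoint move.
r = hvac_reward(energy_cost=0.35, zone_temp=24.6, prev_action=0.4, action=0.6)
```

Weighting comfort violations far more heavily than slew reflects the usual design choice that occupant comfort dominates smoothness of the control signal; the actual weights would come from the study's case setup.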