Title: Statistical and Machine Learning Approaches for Electrical Energy Forecasting
ABSTRACT As renewable energy is aggressively integrated into the grid, energy supplies become vulnerable to weather and environmental conditions, and are often incapable of meeting demand at large scale unless generation and consumption are accurately predicted for energy planning. Understanding consumers' power demand ahead of time, and the influence of weather on both consumption and generation, helps producers build effective power management plans to meet target demand. In addition to this strong correlation with the environment, consumer behavior introduces non‐stationary characteristics into energy data, which is the main challenge for energy prediction. In this survey, we review the literature on prediction methods in the energy field. Most available research so far covers only one type of generation or consumption; no prior work approaches prediction in the energy sector as a whole together with its correlated features. We propose to address the energy prediction challenge from both the consumption and generation sides, covering techniques ranging from statistical methods to machine learning. We also summarize work related to energy prediction, electricity measurements, challenges in energy consumption and generation, energy forecasting methods, and real‐world energy forecasting resources such as datasets and software solutions for energy prediction. This article is categorized under: Application Areas > Industry Specific Applications; Technologies > Prediction; Technologies > Machine Learning
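As a concrete illustration of the contrast the survey draws between statistical and machine learning forecasters, the following minimal sketch (not taken from the paper) compares a seasonal-naive baseline with a gradient-boosting regressor on a synthetic hourly load series; the series, lag choices, and temperature proxy are illustrative assumptions only.

```python
# Illustrative sketch, not from the survey: a statistical baseline vs. an ML
# regressor for short-term load forecasting on a synthetic hourly series.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                              # 60 days of hourly samples
daily = 10 * np.sin(2 * np.pi * hours / 24)             # daily consumption cycle
weekly = 5 * np.sin(2 * np.pi * hours / (24 * 7))       # weekly cycle
temp = 20 + 8 * np.sin(2 * np.pi * hours / (24 * 365))  # crude temperature proxy
load = 100 + daily + weekly + 0.5 * temp + rng.normal(0, 2, hours.size)

# Statistical baseline: seasonal-naive forecast (repeat the values from 24 h earlier).
seasonal_naive = load[-48:-24]

# ML approach: regress the load on lagged loads plus an exogenous weather feature.
def make_features(series, exog, lags=(1, 2, 24, 168)):
    rows, targets = [], []
    for t in range(max(lags), len(series)):
        rows.append([series[t - lag] for lag in lags] + [exog[t]])
        targets.append(series[t])
    return np.array(rows), np.array(targets)

X_train, y_train = make_features(load[:-24], temp[:-24])   # hold out the last day
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

X_all, _ = make_features(load, temp)
ml_forecast = model.predict(X_all[-24:])                   # one-step-ahead forecasts

mae = lambda truth, pred: float(np.mean(np.abs(truth - pred)))
print("seasonal-naive MAE:   ", round(mae(load[-24:], seasonal_naive), 2))
print("gradient-boosting MAE:", round(mae(load[-24:], ml_forecast), 2))
```

The point of the comparison is the one made in the abstract: the learned model can exploit weather covariates and multiple seasonal lags, which a purely seasonal baseline cannot.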
Award ID(s):
2236579 2302786
PAR ID:
10616082
Author(s) / Creator(s):
Publisher / Repository:
WIREs Data Mining and Knowledge Discovery
Date Published:
Journal Name:
WIREs Data Mining and Knowledge Discovery
Volume:
15
Issue:
3
ISSN:
1942-4787
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Critical minerals are essential for sustaining the supply chain necessary for the transition to a carbon-free energy source for society. Copper, nickel, cobalt, lithium, and rare earth elements are particularly in demand for batteries and high-performance magnets used in low-carbon technologies. Copper, predominantly sourced from porphyry deposits, is critical for electricity generation, storage, and distribution. Nickel, which comes from laterite and magmatic sulfide deposits, and cobalt, often a by-product of nickel or copper mining, are core components of batteries that power electric vehicles. Lithium, sourced from pegmatite deposits and continental brines, is another key battery component. Rare earth elements, primarily obtained from carbonatite- and regolith-hosted ion-adsorption deposits, have unique magnetic properties that are key for motor efficiency. Future demand for these elements is expected to increase significantly over the next decades, potentially outpacing expected mine production. Therefore, to ensure a successful energy transition, efforts must prioritize addressing substantial challenges in the supply of critical minerals, particularly the delays in exploring and mining new resources to meet growing demands.
▪ The energy transition relies on green technologies needing a secure, sustainable supply of critical minerals sourced from ore deposits worldwide.
▪ Copper, nickel, cobalt, lithium, and rare earth elements are geologically restricted in occurrence, posing challenges for extraction and availability.
▪ Future demand is expected to surge in the next decades, requiring unprecedented production rates to make the green energy transition viable.
  2. Abstract The Institute for Foundations of Machine Learning (IFML) focuses on core foundational tools to power the next generation of machine learning models. Its research underpins the algorithms and data sets that make generative artificial intelligence (AI) more accurate and reliable. Headquartered at The University of Texas at Austin, IFML researchers collaborate across an ecosystem that spans the University of Washington, Stanford, UCLA, Microsoft Research, the Santa Fe Institute, and Wichita State University. Over the past year, we have witnessed incredible breakthroughs in AI on topics at the heart of IFML's agenda, such as foundation models, LLMs, fine‐tuning, and diffusion, with game‐changing applications influencing almost every area of science and technology. In this article, we seek to highlight the application of foundational machine learning research on key use‐inspired topics:
▪ Fairness in Imaging with Deep Learning: designing the correct metrics and algorithms to make deep networks less biased.
▪ Deep proteins: using foundational machine learning techniques to advance protein engineering and launch a biomanufacturing revolution.
▪ Sounds and Space for Audio‐Visual Learning: building agents capable of audio‐visual navigation in complex 3D environments via new data augmentations.
▪ Improving Speed and Robustness of Magnetic Resonance Imaging: using deep learning algorithms to develop fast and robust MRI methods for clinical diagnostic imaging.
IFML is also responding to explosive industry demand for an AI‐capable workforce. We have launched an accessible, affordable, and scalable new degree program, the MSAI, that looks to wholly reshape the AI/ML workforce pipeline.
  3. Abstract INTRODUCTION: Identification of individuals with mild cognitive impairment (MCI) who are at risk of developing Alzheimer's disease (AD) is crucial for early intervention and selection of clinical trials. METHODS: We applied natural language processing techniques along with machine learning methods to develop a method for automated prediction of progression to AD within 6 years using speech. The study design was evaluated on the neuropsychological test interviews of n = 166 participants from the Framingham Heart Study, comprising 90 progressive MCI and 76 stable MCI cases. RESULTS: Our best models, which used features generated from speech data as well as age, sex, and education level, achieved an accuracy of 78.5% and a sensitivity of 81.1% in predicting MCI‐to‐AD progression within 6 years. DISCUSSION: The proposed method offers a fully automated procedure, providing an opportunity to develop an inexpensive, broadly accessible, and easy‐to‐administer screening tool for MCI‐to‐AD progression prediction, facilitating development of remote assessment. Highlights:
▪ Voice recordings from neuropsychological exams coupled with basic demographics can lead to strong predictive models of progression to dementia from mild cognitive impairment.
▪ The study leveraged AI methods for speech recognition and processed the resulting text using language models.
▪ The developed AI‐powered pipeline can lead to fully automated assessment that could enable remote and cost‐effective screening and prognosis for Alzheimer's disease.
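For readers unfamiliar with this style of pipeline, the sketch below shows the general pattern of combining text-derived speech features with demographic covariates in a single classifier. It is not the authors' actual pipeline (which applies speech recognition and language models to Framingham Heart Study interviews); the toy transcripts, column names, and model choices are assumptions made purely for illustration.

```python
# Hedged sketch only: generic pattern of text features + demographics for a
# binary progression classifier. Data and column names are invented stand-ins.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for transcribed neuropsychological interviews plus demographics.
df = pd.DataFrame({
    "transcript": [
        "the participant recalled three of the five words after a delay",
        "frequent pauses and word finding difficulty during naming",
        "fluent speech with accurate recall of the short story",
        "repeated the same phrase and lost track of the task",
    ],
    "age": [74, 81, 69, 78],
    "education_years": [16, 12, 18, 10],
    "sex": [0, 1, 1, 0],                 # encoded: 0 = male, 1 = female
    "progressed_to_ad": [0, 1, 0, 1],    # label within the follow-up window
})

# Text column goes through TF-IDF; numeric demographics are standardized.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "transcript"),
    ("demo", StandardScaler(), ["age", "education_years", "sex"]),
])
clf = Pipeline([
    ("features", features),
    ("model", LogisticRegression(max_iter=1000)),
])

clf.fit(df, df["progressed_to_ad"])
print(clf.predict_proba(df.head(2))[:, 1])   # predicted progression risk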
  4. ABSTRACT Fragment‐based quantum chemistry offers a means to circumvent the nonlinear computational scaling of conventional electronic structure calculations by partitioning a large calculation into smaller subsystems and then considering the many‐body interactions between them. Variants of this approach have been used to parameterize classical force fields and machine learning potentials, applications that benefit from interoperability between quantum chemistry codes. However, there is a dearth of software that provides interoperability yet is purpose‐built to handle the combinatorial complexity of fragment‐based calculations. To fill this void we introduce "Fragme∩t", an open‐source software application that provides a tool for community validation of fragment‐based methods, a platform for developing new approximations, and a framework for analyzing many‐body interactions. Fragme∩t includes algorithms for automatic fragment generation and structure modification, and for distance‐ and energy‐based screening of the requisite subsystems. Checkpointing, database management, and parallelization are handled internally, and results are archived in a portable database. Interfaces to various quantum chemistry engines are easy to write and exist already for Q‐Chem, PySCF, xTB, Orca, CP2K, MRCC, Psi4, NWChem, GAMESS, and MOPAC. Applications reported here demonstrate parallel efficiencies around 96% on more than 1000 processors, but also showcase that the code can handle large‐scale protein fragmentation using only workstation hardware, all with a codebase that is designed to be usable by non‐experts. Fragme∩t conforms to modern software engineering best practices and is built upon well-established technologies including Python, SQLite, and Ray. The source code is available under the Apache 2.0 license. This article is categorized under: Electronic Structure Theory > Ab Initio Electronic Structure Methods; Theoretical and Physical Chemistry > Thermochemistry; Software > Quantum Chemistry
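To illustrate the kind of many-body bookkeeping such software automates, here is a minimal, self-contained sketch of a distance-screened two-body fragment expansion. It does not use the Fragme∩t API; the toy fragments, cutoff, and the energy() placeholder are assumptions made only to show the structure of the calculation.

```python
# Illustrative sketch only -- NOT the Fragme∩t API. Total energy is approximated
# as monomer energies plus pairwise corrections, skipping well-separated pairs.
import itertools
import math

def centroid(frag):
    xs, ys, zs = zip(*(atom[1] for atom in frag))
    return (sum(xs) / len(xs), sum(ys) / len(ys), sum(zs) / len(zs))

def distance(a, b):
    return math.dist(centroid(a), centroid(b))

def energy(atoms):
    # Placeholder for a call into a quantum chemistry engine (Psi4, PySCF, ...).
    return -1.0 * len(atoms)

def two_body_energy(fragments, cutoff=6.0):
    """Sum of monomer energies plus distance-screened pairwise corrections."""
    e_mono = [energy(f) for f in fragments]
    total = sum(e_mono)
    for (i, fi), (j, fj) in itertools.combinations(enumerate(fragments), 2):
        if distance(fi, fj) > cutoff:
            continue  # distance-based screening: skip well-separated pairs
        total += energy(fi + fj) - e_mono[i] - e_mono[j]
    return total

# Three toy "fragments", each a list of (element, (x, y, z)) tuples (angstroms).
frags = [
    [("O", (0.0, 0.0, 0.0)), ("H", (0.96, 0.0, 0.0)), ("H", (-0.24, 0.93, 0.0))],
    [("O", (3.0, 0.0, 0.0)), ("H", (3.96, 0.0, 0.0)), ("H", (2.76, 0.93, 0.0))],
    [("O", (20.0, 0.0, 0.0)), ("H", (20.96, 0.0, 0.0)), ("H", (19.76, 0.93, 0.0))],
]
print(two_body_energy(frags))
```

In a real fragment-based code the energy() call would dispatch to one of the quantum chemistry engines listed above, and the screening, checkpointing, and parallel dispatch of the subsystem jobs are exactly the bookkeeping the abstract describes.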
  5. Abstract Image‐based machine learning tools are an ascendant ‘big data' research avenue. Citizen science platforms, like iNaturalist, and museum‐led initiatives provide researchers with an abundance of data and knowledge to extract. These include extraction of metadata, species identification, and phenomic data. Ecological and evolutionary biologists are increasingly using complex, multi‐step processes on data. These processes often include machine learning techniques, often built by others, that are difficult to reuse by other members in a collaboration. We present a conceptual workflow model for machine learning applications using image data to extract biological knowledge in the emerging field of imageomics. We derive an implementation of this conceptual workflow for a specific imageomics application that adheres to FAIR principles as a formal workflow definition that allows fully automated and reproducible execution, and consists of reusable workflow components. We outline technologies and best practices for creating an automated, reusable and modular workflow, and we show how they promote the reuse of machine learning models and their adaptation for new research questions. This conceptual workflow can be adapted: it can be semi‐automated, contain different components than those presented here, or have parallel components for comparative studies. We encourage researchers, both computer scientists and biologists, to build upon this conceptual workflow that combines machine learning tools on image data to answer novel scientific questions in their respective fields.