

This content will become publicly available on December 1, 2025

Title: Knowledge-guided machine learning can improve carbon cycle quantification in agroecosystems
Abstract

Accurate and cost-effective quantification of the carbon cycle for agroecosystems at decision-relevant scales is critical to mitigating climate change and ensuring sustainable food production. However, conventional process-based or data-driven modeling approaches alone have large prediction uncertainties due to the complex biogeochemical processes to model and the lack of observations to constrain many key state and flux variables. Here we propose a Knowledge-Guided Machine Learning (KGML) framework that addresses the above challenges by integrating knowledge embedded in a process-based model, high-resolution remote sensing observations, and machine learning (ML) techniques. Using the U.S. Corn Belt as a testbed, we demonstrate that KGML can outperform conventional process-based and black-box ML models in quantifying carbon cycle dynamics. Our high-resolution approach quantitatively reveals 86% more spatial detail of soil organic carbon changes than conventional coarse-resolution approaches. Moreover, we outline a protocol for improving KGML via various paths, which can be generalized to develop hybrid models to better predict complex earth system dynamics.
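The abstract describes the general KGML recipe (an ML model constrained by process-based-model knowledge, pre-trained on abundant simulations and then adjusted with scarce observations) without implementation detail. A minimal sketch of that two-stage protocol, with a plain linear model standing in for the ML component and all data synthetic, might look like:

```python
import numpy as np

# Minimal sketch of the two-stage KGML protocol described in the abstract:
# (1) pre-train an ML model on abundant process-based-model simulations,
# (2) fine-tune it on scarce real observations. All data here are synthetic.

rng = np.random.default_rng(0)

def fit(X, y, w, lr, steps):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Stage 1: many cheap simulations from a (hypothetical) process-based model.
X_sim = rng.normal(size=(5000, 3))
y_sim = X_sim @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=5000)
w = fit(X_sim, y_sim, np.zeros(3), lr=0.1, steps=200)

# Stage 2: few real observations; the true process differs slightly from
# the simulator, so fine-tuning starts from the pretrained weights.
X_obs = rng.normal(size=(30, 3))
y_obs = X_obs @ np.array([1.2, -1.8, 0.4]) + 0.1 * rng.normal(size=30)
w = fit(X_obs, y_obs, w, lr=0.05, steps=50)

rmse = np.sqrt(np.mean((X_obs @ w - y_obs) ** 2))
```

The pretrained weights give the fine-tuning stage a physically informed starting point, which is what lets the few observations correct, rather than learn from scratch, the simulator's biases.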

 
Award ID(s):
2147195 2239175
NSF-PAR ID:
10503419
Author(s) / Creator(s):
Publisher / Repository:
Nature
Date Published:
Journal Name:
Nature Communications
Volume:
15
Issue:
1
ISSN:
2041-1723
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract. Agricultural nitrous oxide (N2O) emission accounts for a non-trivial fraction of the global greenhouse gas (GHG) budget. To date, estimating N2O fluxes from cropland remains a challenging task because the related microbial processes (e.g., nitrification and denitrification) are controlled by complex interactions among climate, soil, plants and human activities. Existing approaches such as process-based (PB) models have well-known limitations due to insufficient representations of the processes or uncertainties of model parameters; and while leveraging recent advances in machine learning (ML) is appealing, a new method is needed to unlock the ML "black box" and overcome its limitations, such as low interpretability, out-of-sample failure and massive data demand. In this study, we developed a first-of-its-kind knowledge-guided machine learning model for agroecosystems (KGML-ag) by incorporating biogeophysical and chemical domain knowledge from an advanced PB model, ecosys, and tested it by comparing simulated daily N2O fluxes with observed data from mesocosm experiments. The gated recurrent unit (GRU) was used as the basis of the model structure. To optimize model performance, we investigated a range of ideas, including (1) using initial values of intermediate variables (IMVs) instead of time series as model input to reduce data demand; (2) building hierarchical structures to explicitly estimate IMVs for further N2O prediction; (3) using multi-task learning to balance the simultaneous training on multiple variables; and (4) pre-training with millions of synthetic data points generated from ecosys and fine-tuning with mesocosm observations. Six other pure ML models were developed using the same mesocosm data to serve as benchmarks for KGML-ag. Results show that KGML-ag did an excellent job of reproducing the mesocosm N2O fluxes (overall r2 = 0.81 and RMSE = 3.6 mg N m−2 d−1 from cross-validation).
Importantly, KGML-ag consistently outperforms the PB model and the pure ML models in predicting N2O fluxes, especially for complex temporal dynamics and emission peaks. Moreover, KGML-ag goes beyond the pure ML models by providing more interpretable predictions as well as pinpointing the new knowledge and data needed to further empower the current KGML-ag. We believe the KGML-ag development in this study will stimulate a new body of research on interpretable ML for biogeochemistry and other related geoscience processes.
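Since the abstract names the gated recurrent unit as the basic building block of KGML-ag, a minimal numpy sketch of a single GRU cell unrolled over a daily input sequence may help make the mechanics concrete; the weights are random and the dimensions invented, so this only illustrates the cell equations, not the published model:

```python
import numpy as np

# Illustrative single GRU cell, the building block named in the KGML-ag
# abstract. Weights are random and purely illustrative.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, W, U, b):
    """One GRU step: x is the input, h the previous hidden state.
    W, U, b each hold the update (z), reset (r) and candidate (n) gates."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])        # reset gate
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h) + b["n"])  # candidate state
    return (1 - z) * n + z * h                           # new hidden state

rng = np.random.default_rng(1)
d_in, d_h = 4, 8  # e.g. daily environmental drivers -> hidden state
W = {k: rng.normal(scale=0.1, size=(d_h, d_in)) for k in "zrn"}
U = {k: rng.normal(scale=0.1, size=(d_h, d_h)) for k in "zrn"}
b = {k: np.zeros(d_h) for k in "zrn"}

h = np.zeros(d_h)
for t in range(30):  # unroll over a 30-day input sequence
    h = gru_cell(rng.normal(size=d_in), h, W, U, b)
```

The gating is what lets the hidden state carry information across the long lags between management events and emission peaks; the hierarchical KGML-ag structure stacks such cells so that intermediate variables are estimated before the final N2O flux.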
  2. Abstract Background

    Computational drug repurposing is a cost- and time-efficient approach that aims to identify new therapeutic targets or diseases (indications) of existing drugs/compounds. It is especially critical for emerging and/or orphan diseases due to its cheaper investment and shorter research cycle compared with traditional wet-lab drug discovery approaches. However, the underlying mechanisms of action (MOAs) between repurposed drugs and their target diseases remain largely unknown, which is still a main obstacle for computational drug repurposing methods to be widely adopted in clinical settings.

    Results

    In this work, we propose KGML-xDTD: a Knowledge Graph–based Machine Learning framework for explainably predicting Drugs Treating Diseases. It is a 2-module framework that not only predicts the treatment probabilities between drugs/compounds and diseases but also biologically explains them via knowledge graph (KG) path-based, testable MOAs. We leverage knowledge-and-publication–based information to extract biologically meaningful “demonstration paths” as the intermediate guidance in the Graph-based Reinforcement Learning (GRL) path-finding process. Comprehensive experiments and case study analyses show that the proposed framework can achieve state-of-the-art performance in both predictions of drug repurposing and recapitulation of human-curated drug MOA paths.

    Conclusions

    KGML-xDTD is the first model framework that can offer KG path explanations for drug repurposing predictions by leveraging the combination of prediction outcomes and existing biological knowledge and publications. We believe it can effectively reduce “black-box” concerns and increase prediction confidence for drug repurposing based on predicted path-based explanations and further accelerate the process of drug discovery for emerging diseases.
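As a toy illustration of the kind of KG path explanation described above, a graph search over a hand-made drug-gene-disease graph can surface a mechanistic chain from drug to disease; all node names here are invented, and plain breadth-first search stands in for the learned GRL path-finding:

```python
from collections import deque

# Toy illustration of a KG path explanation: find a mechanistic path from a
# drug to a disease in a tiny hand-made knowledge graph. Node names invented;
# BFS is a stand-in for the learned GRL path-finding in KGML-xDTD.
graph = {
    "DrugA":    ["GeneX", "GeneY"],
    "GeneX":    ["PathwayP"],
    "GeneY":    ["DiseaseD"],  # hypothetical direct association
    "PathwayP": ["DiseaseD"],
}

def explain_path(graph, drug, disease):
    """Breadth-first search returning the shortest drug->...->disease path."""
    queue = deque([[drug]])
    seen = {drug}
    while queue:
        path = queue.popleft()
        if path[-1] == disease:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = explain_path(graph, "DrugA", "DiseaseD")
```

The returned node sequence is the "demonstration path" idea in miniature: a testable chain of biological entities, rather than an opaque prediction score.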

     
  3. Abstract

    Hybrid Knowledge‐Guided Machine Learning (KGML) models, which are deep learning models that utilize scientific theory and process‐based model simulations, have shown improved performance over their process‐based counterparts for the simulation of water temperature and hydrodynamics. We highlight the modular compositional learning (MCL) methodology as a novel design choice for the development of hybrid KGML models in which the model is decomposed into modular sub‐components that can be process‐based models and/or deep learning models. We develop a hybrid MCL model that integrates a deep learning model into a modularized, process‐based model. To achieve this, we first train individual deep learning models with the output of the process‐based models. In a second step, we fine‐tune one deep learning model with observed field data. In this study, we replaced process‐based calculations of vertical diffusive transport with deep learning. Finally, this fine‐tuned deep learning model is integrated into the process‐based model, creating the hybrid MCL model with improved overall projections for water temperature dynamics compared to the original process‐based model. We further compare the performance of the hybrid MCL model with the process‐based model and two alternative deep learning models and highlight how the hybrid MCL model has the best performance for projecting water temperature, Schmidt stability, buoyancy frequency, and depths of different isotherms. Modular compositional learning can be applied to existing modularized, process‐based model structures to make the projections more robust and improve model performance by letting deep learning estimate uncertain process calculations.
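A stripped-down sketch of the modular swap described above: a 1-D temperature column whose vertical diffusion sub-step is a plug-in module, so a learned surrogate can replace the process-based calculation. Everything here is hypothetical; the "surrogate" is a hand-written smoother standing in for a trained deep learning model:

```python
import numpy as np

# Sketch of modular compositional learning (MCL): a 1-D lake temperature
# column where the vertical diffusion sub-step is swappable. All functions,
# values and the column setup are invented for illustration.

def diffuse_process(T, kappa=0.1):
    """Explicit finite-difference vertical diffusion (process-based module).
    Interior cells only; the boundary cells are held fixed here."""
    Tn = T.copy()
    Tn[1:-1] += kappa * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

def diffuse_surrogate(T, kappa=0.1):
    """Placeholder for a deep learning surrogate of the same module. Here it
    is a neighbour-smoothing rule; a real surrogate would be a trained net."""
    Tn = T.copy()
    Tn[1:-1] = (1 - 2 * kappa) * T[1:-1] + kappa * (T[2:] + T[:-2])
    return Tn

def step(T, surface_flux, diffusion_module):
    """One model time step: surface forcing, then the swappable module."""
    T = T.copy()
    T[0] += surface_flux
    return diffusion_module(T)

T = np.linspace(20.0, 8.0, 10)            # warm surface, cold bottom
for _ in range(50):
    T = step(T, 0.01, diffuse_surrogate)  # hybrid: surrogate plugged in
```

The point of the design is the `diffusion_module` argument: because the process model is modularized, the deep learning component can be trained, fine-tuned and swapped in without rewriting the rest of the model.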

     
  4. Abstract Why the new findings matter

    The process of teaching and learning is complex, multifaceted and dynamic. This paper contributes a seminal resource to highlight the digitisation of the educational sciences by demonstrating how new machine learning methods can be effectively and reliably used in research, education and practical application.

    Implications for educational researchers and policy makers

    The progressing digitisation of societies around the globe and the impact of the SARS‐CoV‐2 pandemic have highlighted the vulnerabilities and shortcomings of educational systems. These developments have shown the necessity of providing effective educational processes that can support sometimes overwhelmed teachers in digitally imparting knowledge, as planned by many governments and policy makers. Educational scientists, corporate partners and stakeholders can make use of machine learning techniques to develop advanced, scalable educational processes that account for the individual needs of learners and that can complement and support the existing learning infrastructure. The proper use of machine learning methods can contribute essential applications to the educational sciences, such as (semi‐)automated assessments, algorithmic grading, personalised feedback and adaptive learning approaches. However, these promises are strongly tied to an at least basic understanding of the concepts of machine learning and a degree of data literacy, which have to become standard in education and the educational sciences.

    Demonstrating both the promises and the challenges that are inherent to the collection and the analysis of large educational data with machine learning, this paper covers the essential topics that their application requires and provides easy‐to‐follow resources and code to facilitate the process of adoption.
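One of the applications named above, semi-automated assessment, can be illustrated in a few lines: score a free-text answer by its bag-of-words cosine similarity to a reference answer. The texts and threshold are invented, and real grading systems would use far richer text models; this only shows the shape of the idea:

```python
import math
from collections import Counter

# Toy semi-automated assessment: score a student answer by cosine similarity
# of bag-of-words counts against a reference answer. Texts are invented.

def bow_cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

reference = "photosynthesis converts light energy into chemical energy"
good = "photosynthesis converts light energy into chemical energy in plants"
poor = "plants are green"

s_good = bow_cosine(reference, good)
s_poor = bow_cosine(reference, poor)
```

In a semi-automated setting, such a score would route only low-similarity answers to a human grader rather than replace grading outright.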

     
  5. Conventional machine learning (ML) and deep learning (DL) methods use large amounts of data to construct prediction models for recognizing human activities in a central fusion center. However, such centralized model training incurs high communication costs and risks privacy infringement. To address these issues, we employed a widely popular distributed ML technique called Federated Learning (FL), which builds a global model for predicting human activities by combining participating agents' local knowledge. State-of-the-art FL models fail to maintain acceptable accuracy when there is a large number of unreliable agents that can inject false models, or resource-constrained agents that fail to perform an assigned computational task within a given time window. We developed an FL model for predicting human activities that monitors agents' contributions towards model convergence and excludes unreliable and resource-constrained agents from training. We assign a score to each client when it joins the network, and the score is updated based on the agent's activities during training. We consider three mobile robots as FL clients that are heterogeneous in terms of resources such as processing capability, memory, bandwidth, battery life and data volume. These heterogeneous mobile robots let us study the effects of a real-world FL setting in the presence of resource-constrained agents. We consider an agent unreliable if it repeatedly gives slow responses or injects incorrect models during training. By disregarding the unreliable and weak agents, we carry out the local training of the FL process on the selected agents. If a weak agent is nevertheless selected and starts showing straggler issues, we leverage an asynchronous FL mechanism that aggregates the local models whenever it receives a model update from an agent.
Asynchronous FL eliminates the issue of waiting a long time to receive model updates from weak agents. Finally, we simulate how the behavior of agents can be tracked through a reward-punishment scheme and present the influence of unreliable and resource-constrained agents on the FL process. We found that FL performs slightly worse than centralized models when there are no unreliable or resource-constrained agents. However, as the number of malicious and straggler clients increases, our proposed model performs more effectively, identifying and avoiding those agents while recognizing human activities, compared to state-of-the-art FL and ML approaches.
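The score-based client selection described above can be sketched compactly: each client carries a reputation score, clients below a threshold are excluded, and the remaining local models are averaged with score weights. The clients, scores, and scoring rule here are all invented for illustration:

```python
import numpy as np

# Sketch of reputation-scored federated averaging: clients whose score fell
# below a threshold in past rounds (slow responses, bad updates) are excluded
# from aggregation. Clients, scores and weighting rule are invented.

rng = np.random.default_rng(2)

clients = {
    "robot1": {"model": rng.normal(size=4), "score": 1.0},
    "robot2": {"model": rng.normal(size=4), "score": 0.9},
    "robot3": {"model": rng.normal(size=4) + 10.0, "score": 0.1},  # unreliable
}

def aggregate(clients, threshold=0.5):
    """Federated averaging over trusted clients only, weighted by score."""
    trusted = {k: v for k, v in clients.items() if v["score"] >= threshold}
    weights = np.array([v["score"] for v in trusted.values()])
    models = np.stack([v["model"] for v in trusted.values()])
    return (weights[:, None] * models).sum(axis=0) / weights.sum()

global_model = aggregate(clients)  # robot3's poisoned model is excluded
```

An asynchronous variant would call `aggregate` whenever any trusted client's update arrives, instead of waiting for a full round, which is how the straggler issue described above is avoided.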