


Title: A New Approach to Evaluate and Reduce Uncertainty of Model-Based Biodiversity Projections for Conservation Policy Formulation
Abstract: Biodiversity projections with uncertainty estimates under different climate, land-use, and policy scenarios are essential to setting and achieving international targets to mitigate biodiversity loss. Evaluating and improving biodiversity predictions to better inform policy decisions remains a central conservation goal and challenge. A comprehensive strategy for evaluating and reducing the uncertainty of model outputs against observed measurements and multiple models would help produce more robust biodiversity predictions. We propose an approach that integrates biodiversity models with emerging remote sensing and in-situ data streams to evaluate and reduce uncertainty, with the goal of improving policy-relevant biodiversity predictions. In this article, we describe a multivariate approach to directly and indirectly evaluate and constrain model uncertainty, demonstrate a proof of concept of this approach, embed the concept within the broader context of model evaluation and scenario analysis for conservation policy, and highlight lessons from other modeling communities. (A toy sketch of the evaluate-and-constrain idea follows the record fields below.)
Award ID(s):
1639014
PAR ID:
10301493
Author(s) / Creator(s):
Date Published:
Journal Name:
BioScience
ISSN:
0006-3568
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
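
The abstract's central move, evaluating model outputs against observations and using that evaluation to constrain projection uncertainty, can be illustrated generically. Below is a minimal sketch, assuming synthetic data, a known observation error, and simple Gaussian likelihood weighting; none of these choices come from the article itself.

# A minimal sketch (not the authors' method) of constraining an ensemble
# of projections with observations: weight each model by how well its
# hindcast matches observed data, then report the weighted spread.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: projections[i, t] is model i's projected trajectory
# (e.g., species richness); hindcasts[i, t] is model i's output over a
# historical window where observations exist.
n_models, n_hist, n_future = 8, 20, 30
hindcasts = rng.normal(100, 5, (n_models, n_hist))
projections = rng.normal(90, 8, (n_models, n_future))
observed = rng.normal(100, 2, n_hist)   # e.g., in-situ or remote-sensing estimates
obs_sigma = 2.0                         # assumed observation error (std dev)

# Gaussian likelihood weights: models that track observations get more weight.
sq_err = ((hindcasts - observed) ** 2).sum(axis=1)
log_w = -0.5 * sq_err / obs_sigma**2
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# Observation-constrained projection: weighted mean and spread per time step.
mean = weights @ projections
var = weights @ (projections - mean) ** 2
print("constrained mean (first 5 steps):", mean[:5].round(1))
print("constrained std  (first 5 steps):", np.sqrt(var)[:5].round(2))

The weighted spread is typically narrower than the raw ensemble spread, which is the sense in which observations "reduce uncertainty" here.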
More Like this
  1. In response to the COVID-19 pandemic, there have been various attempts to develop realistic models that both predict the spread of the disease and evaluate policy measures aimed at mitigation. Models that operate under different parameters and assumptions produce radically different predictions, creating confusion among policy-makers and the general population and limiting the usefulness of the models. This newsletter article proposes a novel ensemble modeling approach that uses representative clustering to identify where existing model predictions of COVID-19 spread agree and to unify these predictions into a smaller set. The proposed ensemble prediction approach is composed of the following stages: (1) selection of the ensemble components, (2) imputation of missing predictions for each component, and (3) representative clustering of the time-series data to determine the degree of agreement between simulation predictions. The result is a set of ensemble predictions that identify where simulation results converge, so that policy-makers and the general public are informed with more comprehensive predictions and the uncertainty among them. (A toy sketch of these three stages follows this item.)
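A rough sketch of those three stages, under stated assumptions: synthetic trajectories, linear interpolation for imputation, and k-means as the clustering step. The paper's actual representative-clustering algorithm may differ.

# Stage 1: hypothetical component predictions, shape (n_models, n_days),
# with some missing values (NaN) where a model did not report.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
preds = rng.normal(1000, 150, (12, 60)).cumsum(axis=1)
preds[rng.random(preds.shape) < 0.05] = np.nan

# Stage 2: simple linear interpolation per model to fill the gaps.
def impute(row):
    idx = np.arange(row.size)
    ok = ~np.isnan(row)
    return np.interp(idx, idx[ok], row[ok])

filled = np.apply_along_axis(impute, 1, preds)

# Stage 3: cluster whole trajectories; each cluster's mean trajectory serves
# as a representative prediction, and cluster size signals model agreement.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(filled)
for k in range(3):
    members = filled[km.labels_ == k]
    print(f"cluster {k}: {len(members)} models, "
          f"final-day mean = {members[:, -1].mean():.0f}")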
  2. We develop the first spatially integrated economic-hydrological model of the western Lake Erie basin, explicitly linking economic models of farmers' field-level Best Management Practice (BMP) adoption choices with the Soil and Water Assessment Tool (SWAT) model to evaluate the cost-effectiveness of nutrient management policy. We quantify tradeoffs among phosphorus reduction policies and find that a hybrid policy coupling a fertilizer tax with cost-share payments for subsurface placement is the most cost-effective; when implemented with a 200% tax, it can achieve the stated policy goal of a 40% reduction in nutrient loadings. We also find that economic adoption models alone can overstate the potential for BMPs to reduce nutrient loadings by ignoring biophysical complexities. (A toy cost-per-reduction comparison follows this item.) Key Words: integrated assessment model; agricultural land watershed model; water quality; cost-share; conservation practice; nutrient management. JEL Codes: H23, Q51, Q52, Q53.
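The cost-effectiveness comparison at the heart of this abstract reduces to dollars spent per unit of phosphorus load reduced. The sketch below uses entirely hypothetical costs and reduction percentages, not results from the study.

# Rank candidate nutrient-management policies by cost per % P reduction.
policies = {
    # name: (annual cost in $M, P-load reduction in %)
    "fertilizer tax (200%)":            (40.0, 25.0),
    "cost-share, subsurface placement": (30.0, 22.0),
    "hybrid: tax + cost-share":         (55.0, 40.0),
}

for name, (cost, reduction) in sorted(
        policies.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name:34s} ${cost / reduction:5.2f}M per % P reduction")

Note that a policy can be most cost-effective per unit reduced without being the only one that reaches an absolute target such as the 40% goal; the study's hybrid policy satisfies both criteria.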
  3. Biodiversity is rapidly changing due to changes in the climate and human activities; thus, accurate predictions of species composition and diversity are critical to developing conservation actions and management strategies. In this paper, using satellite remote sensing products as covariates, we constructed stacked species distribution models (S-SDMs) under a Bayesian framework to build next-generation biodiversity models. Model performance was assessed using oak assemblages distributed across the continental United States, obtained from the National Ecological Observatory Network (NEON). This study represents an attempt to evaluate the integrated predictions of biodiversity models, including assemblage diversity and composition, obtained by stacking next-generation SDMs. We found that applying constraints to assemblage predictions, such as the probability ranking rule, does not improve biodiversity prediction models. Furthermore, we found that independent of the stacking procedure (bS-SDM versus pS-SDM versus cS-SDM), these kinds of next-generation biodiversity models do not accurately recover the observed species composition at the plot level or ecological-community scales (NEON plots are 400 m²). However, these models do return reasonable predictions at macroecological scales, i.e., moderately to highly correct assignments of species identities at the scale of NEON sites (mean area ~27 km²). Our results provide insights for advancing the accuracy of prediction of assemblage diversity and composition at different spatial scales globally. An important task for future studies is to evaluate the reliability of combining S-SDMs with direct detection of species using image spectroscopy to build a new generation of biodiversity models that accurately predict and monitor ecological assemblages through time and space. (A sketch contrasting the three stacking procedures follows this item.)
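A minimal sketch, reflecting a common reading of these stacking procedures rather than the authors' implementation: bS-SDM thresholds each species' probability, pS-SDM sums probabilities to get expected richness, and cS-SDM applies the probability ranking rule (keep the S most probable species).

import numpy as np

rng = np.random.default_rng(2)
probs = rng.random(15)      # hypothetical per-species occurrence probabilities at one plot
threshold = 0.5

# bS-SDM: binary stacking -> a composition plus an (often inflated) richness.
b_present = probs >= threshold
b_richness = b_present.sum()

# pS-SDM: probabilistic stacking -> expected richness, no fixed composition.
p_richness = probs.sum()

# cS-SDM / probability ranking rule: keep the S most probable species,
# where S is the rounded expected richness from pS-SDM.
S = int(round(p_richness))
c_present = np.zeros_like(b_present)
c_present[np.argsort(probs)[::-1][:S]] = True

print("bS-SDM richness:", b_richness)
print("pS-SDM expected richness:", round(p_richness, 2))
print("cS-SDM composition (species indices):", np.flatnonzero(c_present))

The paper's finding is that such constraints (e.g., the ranking rule in cS-SDM) do not rescue composition accuracy at fine spatial scales, even though richness-style summaries behave reasonably at coarse scales.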
  4. Risks from human intervention in the climate system are raising concerns with respect to individual species and to ecosystem health and resiliency. A dominant approach uses global climate models to predict changes in climate in the coming decades and then downscales this information to assess impacts on plant communities, animal habitats, agricultural and urban ecosystems, and other parts of the Earth's life system. Achieving robust assessments of the threats to these systems under this top-down, outcome-vulnerability approach, however, requires skillful prediction and representation of changes in regional and local climate processes, which has not yet been satisfactorily achieved. Moreover, threats to biodiversity and ecosystem function, such as those from invasive species, are in general not adequately included in the assessments. We discuss a complementary assessment framework that builds on a bottom-up vulnerability concept and requires determining the major human and natural forcings on the environment, including extreme events, and the interactions between these forcings. After these forcings and interactions are identified, the relative risks of each issue can be compared with other risks or forcings in order to adopt optimal mitigation and adaptation strategies. This framework is a more inclusive way of assessing risks, including climate variability and longer-term natural and anthropogenic-driven change, than the outcome-vulnerability approach, which is based mainly on multi-decadal global and regional climate model predictions. We therefore conclude that the top-down approach alone is outmoded, as it is inadequate for robustly assessing risks to biodiversity and ecosystem function. In contrast, the bottom-up, integrative approach is feasible and much more in line with the needs of the assessment and conservation community. A key message of our paper is the need to consider coupled feedbacks, since the Earth is a dynamically interactive system; this should be done not just in the model structure but also in its application and subsequent analyses. We recognize that the community is moving toward that goal, and we urge an accelerated pace.
  5. Uncertainty decomposition refers to the task of decomposing the total uncertainty of a predictive model into aleatoric (data) uncertainty, resulting from inherent randomness in the data-generating process, and epistemic (model) uncertainty, resulting from missing information in the model's training data. In large language models (LLMs) specifically, identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability, but it remains an open research question. In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling, which can be applied to any pre-trained LLM. Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions. We show that, when aleatoric uncertainty arises from ambiguity or under-specification in LLM inputs, this approach makes it possible to factor an (un-clarified) LLM's predictions into separate aleatoric and epistemic terms, using a decomposition similar to the one employed by Bayesian neural networks. Empirical evaluations demonstrate that input clarification ensembling provides accurate and reliable uncertainty quantification on several language processing tasks. (A numerical sketch of the decomposition follows this item.)
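A minimal numerical sketch of the entropy decomposition this abstract alludes to, using assumed toy distributions rather than the paper's code. Each row is a predictive distribution for one clarified input; disagreement across clarifications reflects input ambiguity (aleatoric), while the average within-clarification entropy reflects what the model still cannot resolve (epistemic).

import numpy as np

def entropy(p, axis=-1):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

# Hypothetical: 3 clarifications of an ambiguous question, 4 answer options.
clarified_preds = np.array([
    [0.90, 0.05, 0.03, 0.02],   # clarification A -> confident in answer 0
    [0.05, 0.90, 0.03, 0.02],   # clarification B -> confident in answer 1
    [0.85, 0.10, 0.03, 0.02],   # clarification C -> also answer 0
])

mean_pred = clarified_preds.mean(axis=0)
total = entropy(mean_pred)                   # entropy of the ensembled prediction
epistemic = entropy(clarified_preds).mean()  # avg entropy within each clarification
aleatoric = total - epistemic                # disagreement across clarifications

print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")

This mirrors the Bayesian-neural-network decomposition the abstract cites, with clarifications playing the role that posterior weight samples play there; here the mutual-information term is attributed to input ambiguity rather than to model uncertainty.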