Despite considerable community effort, there is no general set of equations to model long‐term landscape evolution. In order to determine a suitable set of landscape evolution process laws for a site where postglacial erosion has incised valleys up to 50 m deep, we generate a set of alternative models and perform a multimodel analysis. The most basic model we consider includes stream power channel incision, uniform lithology, hillslope transport by linear diffusion, and surface‐water discharge proportional to drainage area. We systematically add one, two, or three elements of complexity to this model from one of four categories: hillslope processes, channel processes, surface hydrology, and representation of geologic materials. We apply methods of formal model analysis to the 37 alternative models. The global Method of Morris sensitivity analysis is used to identify model input parameters that most and least strongly influence model outputs. Only a few parameters are identified as important, and this finding is consistent across two alternative model outputs: one based on a collection of topographic metrics and one that uses an objective function based on a topographic difference. Parameters that control channel erosion are consistently important, while hillslope diffusivity is important for only select model outputs. Uncertainty in initial and boundary conditions is associated with low sensitivity. Sensitivity analysis provides insight into model dynamics and is a critical step in using model analysis for mechanistic hypothesis testing in landscape evolution theory.
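The Method of Morris screening used above can be illustrated with a minimal one‐at‐a‐time elementary‐effects sketch. Everything below is a hypothetical stand‐in, not the study's landscape evolution models: parameters with a large mean absolute elementary effect (mu*) are flagged as influential, while near‐zero mu* marks a parameter that could be fixed.

```python
import numpy as np

def elementary_effects(f, n_params, n_traj=50, delta=0.25, seed=0):
    """One-at-a-time Morris screening on the unit hypercube.

    Returns mu_star (mean |elementary effect|) and sigma (std of the
    elementary effects) for each parameter.
    """
    rng = np.random.default_rng(seed)
    ees = np.empty((n_traj, n_params))
    for t in range(n_traj):
        # Sample a base point so that x + delta stays inside [0, 1].
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        base = f(x)
        for i in range(n_params):
            xp = x.copy()
            xp[i] += delta  # perturb one parameter at a time
            ees[t, i] = (f(xp) - base) / delta
    return np.abs(ees).mean(axis=0), ees.std(axis=0)

# Toy stand-in for a model output (e.g., one topographic metric): strongly
# controlled by x0 (a channel-erodibility analogue), weakly by x1 (a
# diffusivity analogue), and completely insensitive to x2.
def toy_model(x):
    return 10.0 * x[0] ** 2 + 0.5 * x[1] + 0.0 * x[2]

mu_star, sigma = elementary_effects(toy_model, n_params=3)
# mu_star ranks x0 as most influential and x2 as negligible.
```

The nonlinear x0 term also produces a nonzero sigma, which is how Morris distinguishes nonlinear or interacting parameters from purely additive ones.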
- Journal Name: Coastal Sediments 2023
- Sponsoring Org: National Science Foundation
More Like this
Mechanistic photosynthesis models are at the heart of terrestrial biosphere models (TBMs) simulating the daily, monthly, annual and decadal rhythms of carbon assimilation (A). These models are founded on robust mathematical hypotheses that describe how A responds to changes in light and atmospheric CO2 concentration. Two predominant photosynthesis models are in common usage: Farquhar (FvCB) and Collatz (CBGB). However, a detailed quantitative comparison of these two models has never been undertaken. In this study, we unify the FvCB and CBGB models to a common parameter set and use novel multi‐hypothesis methods (that account for both hypothesis and parameter variability) for process‐level sensitivity analysis. These models represent three key biological processes: carboxylation, electron transport and triose phosphate use (TPU), plus an additional model process: limiting‐rate selection. Each of the four processes comprises 1–3 alternative hypotheses, giving 12 possible individual models with a total of 14 parameters. To broaden inference, TBM simulations were run and novel, high‐resolution photosynthesis measurements were made. We show that parameters associated with carboxylation are the most influential parameters, but also reveal the surprising and marked dominance of the limiting‐rate selection process (accounting for 57% of the variation in A vs. 22% for carboxylation). The limiting‐rate selection assumption proposed by CBGB smooths the transition between limiting rates and always reduces A below the minimum of all potentially limiting rates, by up to 25%, effectively imposing a fourth limitation on A. Evaluation of the CBGB smoothing function in three TBMs demonstrated a reduction in global A by 4%–10%, equivalent to 50%–160% of current annual fossil fuel emissions. This analysis reveals a surprising and previously unquantified influence of a process that has been integral to many TBMs for decades, highlighting the value of multi‐hypothesis methods.
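The smoothing described above can be sketched with the standard quadratic smoothing form used in Collatz‐style models: the smoothed rate is the smaller root of theta*M^2 - (A1 + A2)*M + A1*A2 = 0, which always lies at or below min(A1, A2) and recovers the hard minimum as theta approaches 1. The rate values and curvature parameters below are illustrative placeholders, not fitted values from the study.

```python
import math

def smooth_min(a1, a2, theta):
    """Smaller root of theta*M^2 - (a1 + a2)*M + a1*a2 = 0.

    A smoothed minimum of the two potential rates; at theta = 1 it
    reduces exactly to min(a1, a2).
    """
    b = a1 + a2
    return (b - math.sqrt(b * b - 4.0 * theta * a1 * a2)) / (2.0 * theta)

# Hypothetical potential rates (umol m^-2 s^-1) for the three limitations:
wc, wj, wp = 20.0, 18.0, 25.0   # carboxylation, electron transport, TPU
theta, beta = 0.98, 0.95        # illustrative curvature parameters

# CBGB-style nested smoothing vs. an FvCB-style hard minimum.
a_smooth = smooth_min(smooth_min(wc, wj, theta), wp, beta)
a_hard = min(wc, wj, wp)
# a_smooth < a_hard: the smoothing imposes an extra reduction in A even
# when one rate (here TPU) is far from limiting.
```

This makes the abstract's point concrete: the smoothing is not just numerical convenience but acts as an additional co‐limitation that depresses assimilation below every individual limiting rate.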
Spatially distributed hydrological models are commonly employed to optimize the locations of engineering control measures across a watershed. Yet, parameter screening exercises that aim to reduce the dimensionality of the calibration search space are typically completed only for gauged locations, like the watershed outlet, and use screening metrics that are relevant to calibration instead of explicitly describing the engineering decision objectives. Identifying parameters that describe physical processes in ungauged locations that affect decision objectives should lead to a better understanding of control measure effectiveness. This paper provides guidance on evaluating model parameter uncertainty at the spatial scales and flow magnitudes of interest for such decision-making problems. We use global sensitivity analysis to screen parameters for model calibration, and to subsequently evaluate the appropriateness of using multipliers to adjust the values of spatially distributed parameters to further reduce dimensionality. We evaluate six sensitivity metrics, four of which align with decision objectives and two of which consider model residual error that would be considered in spatial optimizations of engineering designs. We compare the resulting parameter selection for the basin outlet and each hillslope. We also compare basin outlet results for four calibration-relevant metrics. These methods were applied to a RHESSys ecohydrological model of an exurban forested watershed near Baltimore, MD, USA.
Results show that (1) the set of parameters selected by calibration-relevant metrics does not include parameters that control decision-relevant high and low streamflows, (2) evaluating sensitivity metrics at the basin outlet misses many parameters that control streamflows in hillslopes, and (3) for some multipliers, calibrating all parameters in the set being adjusted may be preferable to using the multiplier if parameter sensitivities are significantly different, while for others, calibrating a subset of the parameters may be preferable if they are not all influential. Thus, we recommend that parameter screening exercises use decision-relevant metrics that are evaluated at the spatial scales appropriate to decision making. While including more parameters in calibration will exacerbate equifinality, the resulting parametric uncertainty should be important to consider in discovering control measures that are robust to it.
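Finding (2) above, that screening only at the outlet misses hillslope controls, can be illustrated with a toy two‐location experiment. The correlation‐based screen and the synthetic responses below are hypothetical simplifications, not the paper's RHESSys setup or its six metrics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical parameters: p0 dominates the outlet response, p1 only
# affects one hillslope, p2 influences neither (pure noise elsewhere).
P = rng.uniform(0.0, 1.0, size=(n, 3))
outlet = 3.0 * P[:, 0] + 0.1 * rng.normal(size=n)
hillslope = 3.0 * P[:, 1] + 0.1 * rng.normal(size=n)

def screen(y, X, threshold=0.3):
    """Keep parameters whose |correlation| with the output metric
    exceeds a screening threshold."""
    r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                  for j in range(X.shape[1])])
    return set(int(j) for j in np.flatnonzero(r > threshold))

kept_outlet = screen(outlet, P)
kept_both = kept_outlet | screen(hillslope, P)
# Screening only at the outlet keeps {p0} and misses p1, the hillslope
# control; screening at both locations recovers {p0, p1}.
```

Any real screening would use the paper's decision‐relevant flow metrics rather than a raw correlation, but the spatial point is the same: the set of influential parameters depends on where the metric is evaluated.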
Reactive transport models (RTMs) are essential tools that simulate the coupling of advective, diffusive, and reactive processes in the subsurface, but their complexity makes them difficult to understand, develop and improve without accompanying statistical analyses. Although global sensitivity analysis (SA) can address these issues, the computational cost associated with most global SA techniques limits their use with RTMs. In this study, we apply distance‐based generalized sensitivity analysis (DGSA), a novel and computationally efficient method of global SA, to a floodplain‐scale RTM and compare DGSA results to those from local SA. Our test case focuses on the impact of 17 uncertain environmental parameters on spatially and temporally variable redox conditions within a floodplain aquifer. The input parameters considered include flow and diffusion rates, geochemical reaction rates, and the spatial distribution of sediment facies. Sensitivity was evaluated for three distinct components of the model response, encompassing both multidimensional and categorical output. Parameter rankings differ between local SA and DGSA, due to nonlinear effects of individual parameters and interaction effects between parameters. DGSA results show that fluid residence time, which is controlled by aquifer permeability, generally exerts a stronger control on redox conditions than do geochemical reaction rates. Sensitivity indices also demonstrate that sulfate reduction is key for establishing and maintaining reducing conditions throughout the aquifer. These results provide insights into the key drivers of heterogeneous redox processes within floodplain aquifers, as well as the main sources of uncertainty when modeling complex subsurface systems.
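The logic of distance‐based generalized sensitivity analysis can be sketched in a few lines. Real DGSA clusters multivariate model responses with k‐medoids on a distance matrix and normalizes sensitivities by bootstrap resampling; the toy version below bins a scalar response into classes and measures how far each parameter's class‐conditional CDF departs from its prior CDF. The response function and parameter roles are hypothetical, not the floodplain RTM.

```python
import numpy as np

def dgsa_sensitivity(X, y, n_classes=3):
    """Toy distance-based generalized sensitivity.

    Cluster the responses (here by quantile binning of a scalar y), then
    score each parameter by the largest L-infinity distance between its
    class-conditional CDF and its prior CDF. Larger = more sensitive.
    """
    n, d = X.shape
    edges = np.quantile(y, np.linspace(0.0, 1.0, n_classes + 1))
    labels = np.clip(np.searchsorted(edges, y, side="right") - 1,
                     0, n_classes - 1)
    grid = np.linspace(0.0, 1.0, 101)  # parameters sampled on [0, 1]
    sens = np.zeros(d)
    for j in range(d):
        prior_cdf = np.mean(X[:, j, None] <= grid, axis=0)
        for c in range(n_classes):
            cls = X[labels == c, j]
            cls_cdf = np.mean(cls[:, None] <= grid, axis=0)
            sens[j] = max(sens[j], np.abs(cls_cdf - prior_cdf).max())
    return sens

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(3000, 3))
# Toy redox proxy: a permeability-like X0 dominates (fluid residence
# time), X1 is a weak reaction-rate analogue, X2 is inert.
y = 4.0 * X[:, 0] + 0.5 * X[:, 1]
s = dgsa_sensitivity(X, y)
# s[0] stands out; s[2] stays near the sampling-noise floor.
```

Because the sensitivity is read off conditional distributions rather than local derivatives, this style of analysis captures the nonlinear and interaction effects that the abstract notes local SA misses.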
In light of the significant damage observed after earthquakes in Japan and New Zealand, enhanced‐performance seismic force‐resisting systems and energy dissipation devices are increasingly being utilized in buildings. Numerical models are needed to estimate the seismic response of these systems for seismic design or assessment. While there have been studies on modeling uncertainty, selecting the model features most important to response can remain ambiguous, especially if the structure employs less well‐established lateral force‐resisting systems and components. Herein, a global sensitivity analysis was used to address modeling uncertainty in specimens with elastic spines and force‐limiting connections (FLCs) physically tested at full‐scale at the E‐Defense shake table in Japan. Modeling uncertainty was addressed for both model class and model parameter uncertainty by varying primary models to develop several secondary models according to pre‐established uncertainty groups. Numerical estimates of peak story drift ratio and floor acceleration were compared to the results from the experimental testing program using confidence intervals and root‐mean‐square error. Metrics such as the coefficient of variation, variance, linear Pearson correlation coefficient, and Sobol index were used to gain intuition about each model feature's contribution to the dispersion in estimates of the engineering demands. Peak floor acceleration was found to be more sensitive to modeling uncertainty compared to story drift ratio. Assumptions for the spine‐to‐frame connection significantly impacted estimates of peak floor accelerations, which could influence future design methods for spines and FLCs in enhanced lateral force‐resisting systems.
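The Sobol index mentioned above measures the share of output variance explained by one input: the first‐order index is S_i = Var(E[Y | X_i]) / Var(Y). A simple binned estimator of that quantity is sketched below; the response function (a stand‐in for an engineering demand such as peak floor acceleration) and its parameter roles are hypothetical, not the shake‐table models.

```python
import numpy as np

def first_order_sobol(X, y, n_bins=20):
    """Binned estimator of first-order Sobol indices.

    S_i = Var(E[y | x_i]) / Var(y), approximated by averaging y within
    quantile bins of each input and taking the variance of bin means.
    """
    var_y = y.var()
    s = []
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0.0, 1.0, n_bins + 1))
        idx = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, n_bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
        s.append(cond_means.var() / var_y)
    return np.array(s)

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(20000, 3))
# Toy demand model: x0 (e.g., a connection-stiffness analogue) drives the
# response nonlinearly, x1 enters weakly, x2 not at all.
y = X[:, 0] ** 2 + 0.3 * X[:, 1]
S = first_order_sobol(X, y)
# S[0] is large, S[1] small, S[2] near zero; indices sum to ~1 here
# because the toy model has no interaction terms.
```

Production analyses would typically use a Saltelli sampling scheme (and total‐order indices) instead of binning, but the variance‐decomposition idea is the same.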