Abstract This study proposed a framework to evaluate multivariate return periods of hurricanes using event-based frequency analysis techniques. The applicability of the proposed framework was demonstrated through point-based and spatial analyses of a recent catastrophic event, Hurricane Ian. Univariate, bivariate, and trivariate frequency analyses were performed by applying the generalized extreme value distribution and copulas to annual maximum series of flood volume, peak discharge, total rainfall depth, maximum wind speed, wave height, and storm surge. In the point-based analyses, the return periods of Hurricane Ian were estimated with our framework: univariate return periods ranged from 39.2 to 60.2 years, bivariate from 824.1 to 1,592.6 years, and trivariate from 332.1 to 1,722.9 years for the Daytona-St. Augustine Basin. In the Florida Bay-Florida Keys Basin, univariate return periods ranged from 7.5 to 32.9 years, bivariate from 36.5 to 114.9 years, and trivariate from 25.0 to 214.8 years. Using the spatial analyses, we generated a return period map of Hurricane Ian across Florida. Based on the bivariate frequency analyses, 18% of hydrologic unit code 8 (HUC8) basins had an average return period of more than 30 years. Sources of uncertainty arising from the scarcity of analysis data, the stationarity assumption, and the impact of other weather systems, such as strong frontal passages, were also discussed. Despite these limitations, our framework and results will be valuable for disaster response and recovery.
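The bivariate return periods above come from combining GEV marginal fits through a copula. A minimal sketch of that calculation, with made-up GEV parameters and a Gumbel-Hougaard copula standing in for whichever copula family the study actually selected:

```python
import math

def gev_cdf(x, loc, scale, shape):
    """GEV non-exceedance probability, F(x) = exp(-(1 + k*(x-loc)/scale)**(-1/k))."""
    t = 1.0 + shape * (x - loc) / scale
    if t <= 0.0:
        return 0.0 if shape > 0 else 1.0
    return math.exp(-t ** (-1.0 / shape))

def gumbel_copula(u, v, theta):
    """Gumbel-Hougaard copula C(u, v); theta >= 1 controls upper-tail dependence."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def joint_return_periods(u, v, theta):
    """'OR' return period (either variable exceeds its threshold) and
    'AND' return period (both exceed), for annual maxima."""
    c = gumbel_copula(u, v, theta)
    t_or = 1.0 / (1.0 - c)
    t_and = 1.0 / (1.0 - u - v + c)
    return t_or, t_and

# Hypothetical marginal fits: storm surge (m) and total rainfall depth (mm)
u = gev_cdf(3.2, loc=1.5, scale=0.6, shape=0.10)
v = gev_cdf(280.0, loc=120.0, scale=45.0, shape=0.05)
t_or, t_and = joint_return_periods(u, v, theta=2.0)
```

By construction the "AND" return period is at least as long as either univariate return period, which is why the bivariate estimates reported above can far exceed the univariate ones.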
Comparison of Local, Regional, and Scaling Models for Rainfall Intensity–Duration–Frequency Analysis
Abstract Intensity–duration–frequency (IDF) analyses of rainfall extremes provide critical information to mitigate, manage, and adapt to urban flooding. The accuracy and uncertainty of IDF analyses depend on the availability of historical rainfall records, which are more accessible at daily resolution and, quite often, are very sparse in developing countries. In this work, we quantify performances of different IDF models as a function of the number of available high-resolution (Nτ) and daily (N24h) rain gauges. For this aim, we apply a cross-validation framework that is based on Monte Carlo bootstrapping experiments on records of 223 high-resolution gauges in central Arizona. We test five IDF models based on (two) local, (one) regional, and (two) scaling frequency analyses of annual rainfall maxima from 30-min to 24-h durations with the generalized extreme value (GEV) distribution. All models exhibit similar performances in simulating observed quantiles associated with return periods up to 30 years. When Nτ > 10, local and regional models have the best accuracy; bias correcting the GEV shape parameter for record length is recommended to estimate quantiles for large return periods. The uncertainty of all models, evaluated via Monte Carlo experiments, is very large when Nτ ≤ 5; however, if N24h ≥ 10 additional daily gauges are available, the uncertainty is greatly reduced and accuracy is increased by applying simple scaling models, which infer estimates on subdaily rainfall statistics from information at daily scale. For all models, performances depend on the ability to capture the elevation control on their parameters. Although our work is site specific, its results provide insights to conduct future IDF analyses, especially in regions with sparse data.
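The simple scaling models discussed above infer sub-daily intensity quantiles from daily statistics. A hedged sketch with invented annual-maximum data, using a Gumbel (zero-shape GEV) method-of-moments fit as a simplified stand-in for the paper's full GEV procedure:

```python
import math

def gumbel_quantile(annual_maxima, T):
    """Method-of-moments Gumbel (EV1) quantile for return period T;
    a simplified stand-in for the GEV fits used in the paper."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772 * beta
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def scaling_exponent(mean_intensity_by_duration):
    """Least-squares slope of log(mean annual-max intensity) vs log(duration):
    the simple-scaling exponent (negative for rainfall intensity)."""
    pairs = sorted(mean_intensity_by_duration.items())
    xs = [math.log(d) for d, _ in pairs]
    ys = [math.log(m) for _, m in pairs]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

def scaled_quantile(i24, duration_h, eta):
    """Transfer a 24-h intensity quantile to a shorter duration."""
    return i24 * (duration_h / 24.0) ** eta

# Invented 24-h annual-max intensities (mm/h) from a hypothetical daily gauge
ams_24h = [2.1, 3.4, 2.8, 4.0, 2.5, 5.1, 3.0, 2.2, 3.8, 2.9, 4.4, 3.3]
# Invented mean annual-max intensities (mm/h) by duration (h)
means = {0.5: 32.0, 1: 24.0, 6: 8.0, 24: 3.2}
eta = scaling_exponent(means)
i24_100yr = gumbel_quantile(ams_24h, 100)
i1h_100yr = scaled_quantile(i24_100yr, 1.0, eta)
```

This is the appeal of scaling models when only daily gauges are dense: the exponent can be estimated from a few co-located high-resolution records and then applied to many daily-only stations.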
- PAR ID: 10194664
- Date Published:
- Journal Name: Journal of Applied Meteorology and Climatology
- Volume: 59
- Issue: 9
- ISSN: 1558-8424
- Page Range / eLocation ID: 1519 to 1536
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract The urgency of estimating precipitation Intensity-Duration-Frequency (IDF) relationships from the most recent data has grown significantly, as climate change drives intense precipitation and cloudburst events that damage infrastructure. Given the continually available, digitized, up-to-date, long-term, and fine-resolution precipitation datasets from the United States Department of Agriculture Forest Service's (USDAFS) Experimental Forests and Ranges (EF) rain gauge stations, it is both important and relevant to develop precipitation IDFs from onsite datasets (Onsite-IDF) that incorporate the most recent time period, aiding in the design and planning of forest road-stream crossing structures (RSCS) in headwaters to maintain resilient forest ecosystems. Here we developed Onsite-IDFs for hourly and sub-hourly durations and 25-yr, 50-yr, and 100-yr design return intervals (RIs) from annual maximum series (AMS) of precipitation intensities (PIs), modeled by applying generalized extreme value (GEV) analysis and L-moment-based parameter estimation at six USDAFS EFs, and compared them with precipitation IDFs obtained from the National Oceanic and Atmospheric Administration Atlas 14 (NOAA-Atlas14). A regional frequency analysis (RFA) was performed for EFs where data from multiple precipitation gauges are available. NOAA's station-based precipitation IDFs were estimated for comparison using RFA (NOAA-RFA) at one of the EFs where NOAA-Atlas14 precipitation IDFs are unavailable. Onsite-IDFs were then evaluated against the PIs from NOAA-Atlas14 and NOAA-RFA by comparing their relative differences and storm frequencies. Results show considerable relative differences between the Onsite- and NOAA-Atlas14 (or NOAA-RFA) IDFs at these EFs, some of which depend strongly on storm duration and the elevation of the precipitation gauges, particularly at the steep, forested sites of the H. J. Andrews (HJA) and Coweeta Hydrological Laboratory (CHL) EFs.
At the higher elevation gauge of HJA EF, NOAA-RFA based precipitation IDFs underestimate the PIs of 25-yr, 50-yr, and 100-yr RIs by considerable amounts for 12-h and 24-h duration storm events relative to the Onsite-IDFs. At the low-gradient Santee (SAN) EF, the PIs of 3- to 24-h storm events with 100-yr frequency (or RI) from NOAA-Atlas14 gauges are found to be equivalent to the PIs of more frequent storm events (25-50-yr RI) as estimated from the onsite dataset. Our results recommend using the Onsite-IDF estimates to estimate design storm peak discharge rates at the higher elevation catchments of the HJA, CHL, and SAN EF locations, particularly for longer duration events, where NOAA-based precipitation IDFs underestimate the PIs relative to the Onsite-IDFs. This underscores the importance of long-term, high-resolution EF data for new applications, including ecological restoration, and indicates that planning and design teams should use as much local data as possible, or account for potential PI inconsistencies or underestimations if local data are unavailable.
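The GEV/L-moment fitting step described above can be sketched compactly. This follows Hosking's standard probability-weighted-moment estimators and his rational approximation for the shape parameter; the annual-maximum intensities are invented for illustration:

```python
import math

def sample_l_moments(data):
    """First three sample L-moments via probability-weighted moments."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

def gev_lmom_fit(data):
    """GEV parameters from L-moments (Hosking's approximation).
    Note: k uses Hosking's sign convention (k < 0 means a heavy upper tail)."""
    l1, l2, l3 = sample_l_moments(data)
    t3 = l3 / l2  # L-skewness
    z = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    k = 7.8590 * z + 2.9554 * z ** 2
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * math.gamma(1.0 + k))
    xi = l1 + alpha * (math.gamma(1.0 + k) - 1.0) / k
    return xi, alpha, k

def gev_quantile(xi, alpha, k, T):
    """T-year return level, evaluated at F = 1 - 1/T."""
    return xi + alpha / k * (1.0 - (-math.log(1.0 - 1.0 / T)) ** k)

# Invented annual-max 1-h intensities (mm/h) at a hypothetical EF gauge
ams = [31, 38, 45, 52, 44, 60, 75, 41, 36, 55, 48, 90, 39, 47, 66, 35, 58, 43, 50, 72]
xi, alpha, k = gev_lmom_fit(ams)
i25, i50, i100 = (gev_quantile(xi, alpha, k, T) for T in (25, 50, 100))
```

L-moment estimators are preferred over maximum likelihood for short records like these because they are far less sensitive to outliers and small-sample bias.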
-
ABSTRACT Observations of gravitational waves emitted by merging compact binaries have provided tantalizing hints about stellar astrophysics, cosmology, and fundamental physics. However, the physical parameters describing the systems (mass, spin, distance) used to extract these inferences about the Universe are subject to large uncertainties. The most widely used method of performing these analyses requires many Monte Carlo integrals to marginalize over the uncertainty in the properties of the individual binaries and the survey selection bias. These Monte Carlo integrals are subject to fundamental statistical uncertainties. Previous treatments of this statistical uncertainty have focused on ensuring that the precision of the inference is unaffected; however, these works have neglected the question of whether sufficient accuracy can also be achieved. In this work, we provide a practical exploration of the impact of this uncertainty on our analyses and suggest a framework for verifying that astrophysical inferences made with the gravitational-wave transient catalogue are accurate. Applying our framework to models used by the LIGO-Virgo-KAGRA collaboration and in the wider literature, we find that Monte Carlo uncertainty in estimating the survey selection bias is the limiting factor in our ability to probe narrow population models, and this will rapidly grow more problematic as the size of the observed population increases.
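The Monte Carlo uncertainty the authors describe is commonly tracked through the effective sample size of the importance weights used in these marginalization integrals. A schematic toy illustration (synthetic Gaussian weights, not the LVK pipeline): the narrower the target population model relative to the proposal samples, the more uneven the weights and the smaller the effective sample size.

```python
import math
import random

def mc_mean_se_neff(weights):
    """Monte Carlo estimate of a weighted average, its standard error,
    and Kish's effective sample size n_eff = (sum w)^2 / sum w^2."""
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / (n - 1)
    se = math.sqrt(var / n)
    neff = sum(weights) ** 2 / sum(w * w for w in weights)
    return mean, se, neff

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(5000)]  # proposal draws

def importance_weights(sigma):
    """Weights retargeting N(0,1) draws to a narrower N(0, sigma^2)
    population model (normalizing constants ignored)."""
    return [math.exp(-0.5 * x * x / sigma ** 2 + 0.5 * x * x) for x in xs]

_, se_wide, neff_wide = mc_mean_se_neff(importance_weights(0.8))
_, se_narrow, neff_narrow = mc_mean_se_neff(importance_weights(0.3))
```

This mirrors the paper's headline finding: probing narrower population models collapses the effective number of samples behind the selection-bias estimate, so its Monte Carlo error grows even though the nominal sample count is unchanged.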
-
Abstract Computational advances have made atmospheric modeling at convection‐permitting (≤4 km) grid spacings increasingly feasible. These simulations hold great promise in the projection of climate change impacts including rainfall and flood extremes. The relatively short model runs that are currently feasible, however, inhibit the assessment of the upper tail of rainfall and flood quantiles using conventional statistical methods. Stochastic storm transposition (SST) and process‐based flood frequency analysis are two approaches that together can help to mitigate this limitation. SST generates large numbers of extreme rainfall scenarios by temporal resampling and geospatial transposition of rainfall fields from relatively short data sets. Coupling SST with process‐based flood frequency analysis enables exploration of flood behavior at a range of spatial and temporal scales. We apply these approaches with outputs of 13‐year simulations of regional climate to examine changes in extreme rainfall and flood quantiles up to the 500‐year recurrence interval in a medium‐sized watershed in the Midwestern United States. Intensification of extreme precipitation across a range of spatial and temporal scales is identified in future climate; changes in flood magnitudes depend on watershed area, with small watersheds exhibiting the greatest increases due to their limited capacity to attenuate flood peaks. Flood seasonality and snowmelt are predicted to be earlier in the year under projected warming, while the most extreme floods continue to occur in early summer. Findings highlight both the potential and limitations of convection‐resolving climate models to help understand possible changes in rainfall and flood frequency across watershed scales.
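SST itself is conceptually simple: resample how many storms occur in a synthetic year, shift each observed rainfall field randomly over the domain, and record the basin maximum, repeating until the synthetic record is long enough to estimate rare quantiles. A toy sketch with an invented two-storm catalog and a four-cell "basin" (all values hypothetical):

```python
import math
import random

def poisson(lam):
    """Knuth's Poisson sampler, used for the annual storm count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= random.random()
    return k - 1

def shift_field(field, dx, dy):
    """Geospatially transpose a gridded storm by (dx, dy) cells; rain shifted
    off the domain is lost and newly exposed cells get zero."""
    ny, nx = len(field), len(field[0])
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            jj, ii = j + dy, i + dx
            if 0 <= jj < ny and 0 <= ii < nx:
                out[jj][ii] = field[j][i]
    return out

def sst_annual_maxima(catalog, storms_per_year, n_years, max_shift, basin_cells):
    """Temporal resampling + spatial transposition of a short storm catalog,
    returning synthetic annual maxima of basin-average rainfall."""
    maxima = []
    for _ in range(n_years):
        year_max = 0.0
        for _ in range(poisson(storms_per_year)):
            storm = shift_field(random.choice(catalog),
                                random.randint(-max_shift, max_shift),
                                random.randint(-max_shift, max_shift))
            depth = sum(storm[j][i] for j, i in basin_cells) / len(basin_cells)
            year_max = max(year_max, depth)
        maxima.append(year_max)
    return maxima

random.seed(7)
# Two invented 4x4 storm rainfall fields (mm)
storm_a = [[0, 0, 0, 0], [0, 40, 60, 0], [0, 30, 50, 0], [0, 0, 0, 0]]
storm_b = [[10, 20, 0, 0], [20, 80, 20, 0], [0, 20, 10, 0], [0, 0, 0, 0]]
basin = [(1, 1), (1, 2), (2, 1), (2, 2)]
maxima = sorted(sst_annual_maxima([storm_a, storm_b], 3.0, 1000, 2, basin))
q500 = maxima[int(0.998 * len(maxima))]  # ~500-yr level from 1000 synthetic years
```

The key idea is that a 13-year simulation can still constrain a 500-year quantile, because each observed storm is reused many times in new positions relative to the watershed.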
-
Abstract. Antarctic ice shelves are vulnerable to warming ocean temperatures, and some have already begun thinning in response to increased basal melt rates. Sea level is therefore expected to rise due to Antarctic contributions, but uncertainties in its amount and timing remain largely unquantified. In particular, there is substantial uncertainty in future basal melt rates arising from multi-model differences in thermal forcing and how melt rates depend on that thermal forcing. To facilitate uncertainty quantification in sea level rise projections, we build, validate, and demonstrate projections from a computationally efficient statistical emulator of a high-resolution (4 km) Antarctic ice sheet model, the Community Ice Sheet Model version 2.1. The emulator is trained to a large (500-member) ensemble of 200-year-long 4 km resolution transient ice sheet simulations, whereby regional basal melt rates are perturbed by idealized (yet physically informed) trajectories. The main advantage of our emulation approach is that by sampling a wide range of possible basal melt trajectories, the emulator can be used to (1) produce probabilistic sea level rise projections over much larger Monte Carlo ensembles than are possible by direct numerical simulation alone, thereby providing better statistical characterization of uncertainties, and (2) predict the simulated ice sheet response under differing assumptions about basal melt characteristics as new oceanographic studies are published, without having to run additional numerical ice sheet simulations. As a proof of concept, we propagate uncertainties about future basal melt rate trajectories, derived from regional ocean models, to generate probabilistic sea level rise estimates for 100 and 200 years into the future.
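At its core, the emulation strategy replaces expensive ice-sheet runs with a cheap statistical fit from forcing parameters to simulated sea-level response. A deliberately tiny stand-in (quadratic least-squares regression instead of the authors' actual emulator, trained on a fabricated "ensemble"):

```python
import random

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x^2 via the normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    return solve3(A, T)

def emulate(c, melt):
    """Cheap surrogate for the ice-sheet model's sea-level response."""
    return c[0] + c[1] * melt + c[2] * melt ** 2

# Fabricated "ensemble": mean basal melt rate (m/yr) -> 200-yr SLR (cm),
# standing in for the 500-member CISM simulation ensemble
random.seed(3)
melt_train = [random.uniform(0.0, 40.0) for _ in range(500)]
slr_train = [1.0 + 0.30 * m + 0.004 * m * m + random.gauss(0.0, 0.5)
             for m in melt_train]
coef = fit_quadratic(melt_train, slr_train)

# The trained emulator makes very large Monte Carlo ensembles affordable
melt_samples = [random.uniform(0.0, 40.0) for _ in range(100000)]
slr_samples = [emulate(coef, m) for m in melt_samples]
```

The design choice is the same as in the paper: spend the numerical-simulation budget once on a training ensemble, then draw as many forcing trajectories as the probabilistic projection requires through the surrogate.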