Abstract Taylor's law (TL), a commonly observed and applied pattern in ecology, describes variances of population densities as related to mean densities via log(variance) = log(a) + b*log(mean). Variations among datasets in the slope, b, have been associated with multiple factors of central importance in ecology, including strength of competitive interactions and demographic rates. But these associations are not transparent, and the relative importance of these and other factors for TL slope variation is poorly studied. TL is thus a ubiquitously used indicator in ecology, the understanding of which is still opaque. The goal of this study was to provide tools to help fill this gap in understanding by providing proximate determinants of TL slopes, statistical quantities that are correlated to TL slopes but are simpler than the slope itself and are more readily linked to ecological factors. Using numeric simulations and 82 multi‐decadal population datasets, we here propose, test and apply two proximate statistical determinants of TL slopes, which we argue can become key tools for understanding the nature and ecological causes of TL slope variation. We find that measures based on population skewness, coefficient of variation and synchrony are effective proximate determinants. We demonstrate their potential for application by using them to help explain covariation in slopes of spatial and temporal TL (two common types of TL). This study provides tools for understanding TL and demonstrates their usefulness.
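In practice, the slope b is estimated by least-squares regression of log variance on log mean, computed over time for each population (temporal TL) or across populations at each time (spatial TL). A minimal sketch of a temporal-TL fit on hypothetical data (NumPy only; the study's 82 datasets and its skewness, coefficient-of-variation and synchrony measures are not reproduced here):

```python
import numpy as np

# Hypothetical density time series: 10 populations observed for 30 years,
# with mean densities spread across populations.
rng = np.random.default_rng(0)
densities = rng.lognormal(mean=2.0, sigma=0.5, size=(10, 30))
densities *= np.linspace(1.0, 5.0, 10)[:, None]

# Temporal TL: one (mean, variance) pair per population, computed over time.
means = densities.mean(axis=1)
variances = densities.var(axis=1, ddof=1)

# Least squares on the log-log scale gives the TL slope b and intercept log(a).
b, log_a = np.polyfit(np.log10(means), np.log10(variances), deg=1)
print(f"TL slope b = {b:.2f}, log10(a) = {log_a:.2f}")
```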
- NSF-PAR ID: 10058739
- Journal Name: Proceedings of the National Academy of Sciences
- ISSN: 0027-8424
- Page Range / eLocation ID: 201703593
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract For unmagnetized low temperature Ar plasmas with plasma density ranging from 3 × 10⁸ to 10¹⁰ cm⁻³ and an electron temperature of ∼1 eV, the expansion of the ion collecting area of a double-sided planar Langmuir probe with respect to probe bias is experimentally investigated, through a systematic scan of plasma parameters. In accordance with many existing numerical studies, the ion collecting area is found to follow a power law for a sufficiently negative probe bias. Within our experimental conditions, the power law coefficient and exponent have been parameterized as a function of the normalized probe radius and compared with numerical results, where qualitatively comparable features are identified. However, numerical results underestimate the power law coefficient while the exponent is overestimated. Our experimental measurements also confirm that ion–neutral collisions play a role in determining the expanded ion collecting area, thus changing values of the power law coefficient and exponent. This work suggests that a power law fit to the ion collecting area must be performed solely based on experimentally obtained data rather than using empirical formulae from simulation results, since material and cleanness of the probe, type of working gas, and neutral pressure may also affect the expansion of the ion collecting area, factors which are difficult to model in a numerical simulation. A proper scheme of analyzing an I–V characteristic of a Langmuir probe based on a power law fit is also presented.
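Consistent with the abstract's recommendation, the power law coefficient and exponent can be fitted directly to measured data. A minimal sketch on hypothetical measurements, under the assumed form A/A₀ = α·η^β with η the probe bias normalized by electron temperature (the paper's exact parameterization is not reproduced):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: normalized probe bias eta = -eV/(kT_e) and the
# ion collecting area inferred from ion saturation current, normalized by
# the geometric probe area.
rng = np.random.default_rng(1)
eta = np.linspace(5.0, 50.0, 20)
area = 1.2 * eta**0.4 * (1.0 + 0.02 * rng.normal(size=eta.size))

def power_law(eta, alpha, beta):
    # Assumed sheath-expansion form, valid for sufficiently negative bias.
    return alpha * eta**beta

(alpha, beta), _ = curve_fit(power_law, eta, area, p0=(1.0, 0.5))
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")
```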
-
Abstract This study derives simple analytical expressions for the theoretical height profiles of particle number concentrations (Nt) and mean volume diameters (Dm) during the steady-state balance of vapor growth and collision–coalescence with sedimentation. These equations are general for both rain and snow gamma size distributions with size-dependent power-law functions that dictate particle fall speeds and masses. For collision–coalescence only, Nt (Dm) decreases (increases) as an exponential function of the radar reflectivity difference between two height layers. For vapor deposition only, Dm increases as a generalized power law of this reflectivity difference. Simultaneous vapor deposition and collision–coalescence under steady-state conditions with conservation of number, mass, and reflectivity fluxes lead to a coupled set of first-order, nonlinear ordinary differential equations for Nt and Dm. The solutions to these coupled equations are generalized power-law functions of height z for Dm(z) and Nt(z), whereby each variable is related to the other with an exponent that is independent of collision–coalescence efficiency. Compared to observed profiles derived from descending in situ aircraft Lagrangian spiral profiles from the CRYSTAL-FACE field campaign, these analytical solutions can on average capture the height profiles of Nt and Dm within 8% and 4% of observations, respectively. Steady-state model projections of radar retrievals aloft are shown to produce the correct rapid enhancement of surface snowfall compared to the lowest-available radar retrievals from 500 m MSL. Future studies can utilize these equations alongside radar measurements to estimate Nt and Dm below radar tilt elevations and to estimate uncertain microphysical parameters such as collision–coalescence efficiencies.
Significance Statement While complex numerical models are often used to describe weather phenomena, sometimes simple equations can instead provide equally good or comparable results. Thus, these simple equations can be used in place of more complicated models in certain situations, and this replacement can allow for computationally efficient and elegant solutions. This study derives such simple equations in terms of exponential and power-law mathematical functions that describe how the average size and total number of snow or rain particles change at different atmospheric height levels due to growth from the vapor phase and aggregation (the sticking together) of these particles, balanced with their fallout from clouds. We catalog these mathematical equations for different assumptions of particle characteristics and we then test these equations using spirally descending aircraft observations and ground-based measurements. Overall, we show that these mathematical equations, despite their simplicity, are capable of accurately describing the magnitude and shape of observed height and time series profiles of particle sizes and numbers. These equations can be used by researchers and forecasters along with radar measurements to improve the understanding of precipitation and the estimation of its properties.
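Written out schematically, the relations described above take the following forms; the constants k₁, k₂, c, p and Γ are placeholders for illustration, since their values depend on the assumed fall-speed and mass power laws:

```latex
% Collision–coalescence only: exponential in the reflectivity
% difference \Delta Z between two height layers.
N_t(z_2) = N_t(z_1)\, e^{-k_1 \Delta Z}, \qquad
D_m(z_2) = D_m(z_1)\, e^{k_2 \Delta Z}

% Vapor deposition only: a generalized power law in \Delta Z.
D_m(z_2) = \left[ D_m(z_1)^{p} + c\, \Delta Z \right]^{1/p}

% Steady-state balance of both processes: generalized power laws in
% height z, with the two variables linked through an exponent \Gamma
% that is independent of collision–coalescence efficiency.
N_t(z) \propto D_m(z)^{\Gamma}
```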
-
Abstract This project is funded by the US National Science Foundation (NSF) through their NSF RAPID program under the title “Modeling Corona Spread Using Big Data Analytics.” The project is a joint effort between the Department of Computer & Electrical Engineering and Computer Science at FAU and a research group from LexisNexis Risk Solutions. The novel coronavirus Covid-19 originated in China in early December 2019 and has rapidly spread to many countries around the globe, with the number of confirmed cases increasing every day. Covid-19 is officially a pandemic. It is a novel infection with serious clinical manifestations, including death, and it has reached at least 124 countries and territories. Although the ultimate course and impact of Covid-19 are uncertain, it is not merely possible but likely that the disease will produce enough severe illness to overwhelm the worldwide health care infrastructure. Emerging viral pandemics can place extraordinary and sustained demands on public health and health systems and on providers of essential community services. Modeling the Covid-19 pandemic spread is challenging, but there are data that can be used to project resource demands. Estimates of the reproductive number (R) of SARS-CoV-2 show that at the beginning of the epidemic, each infected person spreads the virus to at least two others, on average (Emanuel et al. in N Engl J Med. 2020, Livingston and Bucher in JAMA 323(14):1335, 2020). A conservatively low estimate is that 5 % of the population could become infected within 3 months. Preliminary data from China and Italy regarding the distribution of case severity and fatality vary widely (Wu and McGoogan in JAMA 323(13):1239–42, 2020). A recent large-scale analysis from China suggests that 80 % of those infected either are asymptomatic or have mild symptoms, a finding that implies that demand for advanced medical services might apply to only 20 % of the total infected. Of patients infected with Covid-19, about 15 % have severe illness and 5 % have critical illness (Emanuel et al. in N Engl J Med. 2020). Overall, mortality ranges from 0.25 % to as high as 3.0 % (Emanuel et al. in N Engl J Med. 2020, Wilson et al. in Emerg Infect Dis 26(6):1339, 2020). Case fatality rates are much higher for vulnerable populations, such as persons over the age of 80 years (> 14 %) and those with coexisting conditions (10 % for those with cardiovascular disease and 7 % for those with diabetes) (Emanuel et al. in N Engl J Med. 2020). Overall, Covid-19 is substantially deadlier than seasonal influenza, which has a mortality of roughly 0.1 %. Public health efforts depend heavily on predicting how diseases such as those caused by Covid-19 spread across the globe. During the early days of a new outbreak, when reliable data are still scarce, researchers turn to mathematical models that can predict where people who could be infected are going and how likely they are to bring the disease with them. These computational methods use known statistical equations that calculate the probability of individuals transmitting the illness. Modern computational power allows these models to quickly incorporate multiple inputs, such as a given disease’s ability to pass from person to person and the movement patterns of potentially infected people traveling by air and land. This process sometimes involves making assumptions about unknown factors, such as an individual’s exact travel pattern.
By plugging in different possible versions of each input, however, researchers can update the models as new information becomes available and compare their results to observed patterns for the illness. In this paper we describe the development of a model of Corona spread using innovative big data analytics techniques and tools. We leveraged our experience from research in modeling Ebola spread (Shaw et al., Modeling Ebola Spread and Using HPCC/KEL System. In: Big Data Technologies and Applications 2016 (pp. 347–385). Springer, Cham) to model Corona spread, obtain new results, and help reduce the number of Corona patients. We closely collaborated with LexisNexis, a leading US data analytics company and a member of our NSF I/UCRC for Advanced Knowledge Enablement. The lack of a comprehensive view and informative analysis of the status of the pandemic can also cause panic and instability within society. Our work proposes the HPCC Systems Covid-19 tracker, which provides a multi-level view of the pandemic with informative virus spreading indicators in a timely manner. The system embeds a classical epidemiological model known as SIR and spreading indicators based on a causal model. The data solution of the tracker is built on top of the Big Data processing platform HPCC Systems, from ingesting and tracking of various data sources to fast delivery of the data to the public. The HPCC Systems Covid-19 tracker presents the Covid-19 data on a daily, weekly, and cumulative basis, from the global level down to the county level. It also provides statistical analysis for each level, such as new cases per 100,000 population. The primary analysis, such as Contagion Risk and Infection State, is based on a causal model with a seven-day sliding window. Our work has been released as a publicly available website and has attracted a great volume of traffic. The project is open-sourced and available on GitHub. The system was developed on the LexisNexis HPCC Systems platform, which is briefly described in the paper.
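The epidemiological core mentioned above is the classical SIR compartment model. A minimal sketch of its standard equations with illustrative parameters (beta/gamma = R ≈ 2, matching the early estimates cited above); this is not the tracker's calibrated implementation:

```python
def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the classical SIR equations by one time step (forward Euler).

    s, i, r: susceptible, infected and recovered fractions (summing to 1);
    beta is the transmission rate and gamma the recovery rate, so that
    beta / gamma is the basic reproductive number R.
    """
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

# Illustrative run: R = 2 (beta = 0.4, gamma = 0.2), one case per 100,000.
s, i, r = 1.0 - 1e-5, 1e-5, 0.0
peak = 0.0
for day in range(365):
    s, i, r = sir_step(s, i, r, beta=0.4, gamma=0.2)
    peak = max(peak, i)
print(f"peak infected fraction: {peak:.3f}")
```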
-
Abstract Aim Understanding how spatial scale of study affects observed dispersal patterns can provide insights to spatiotemporal population dynamics, particularly in systems with significant long‐distance dispersal (LDD). We aimed to investigate the dispersal gradients of two rusts of wheat with spores of similar size, mass and shape, over multiple spatial scales. We hypothesized that a single dispersal kernel could fit the dispersal from all spatial scales well, and that it would be possible to obtain similar results in spatiotemporal increase of disease when modelling based on differing scales.
Location Central Oregon and St. Croix Island.
Taxa Puccinia striiformis f. sp. tritici, Puccinia graminis f. sp. tritici, Triticum aestivum.
Methods We compared empirically derived primary disease gradients of cereal rust across three spatial scales: local (inoculum source and sampling unit = 0.0254 m, spatial extent = 1.52 m), field‐wide (inoculum source = 1.52 m, sampling unit = 0.305 m and spatial extent = 91.4 m) and regional (inoculum source and sampling unit = 152 m, spatial extent = 10.5 km). We then examined whether disease spread in spatially explicit simulations depended upon the scale at which data were collected by constructing a compartmental time‐step model.
Results The three data sets could be fit well by a single power law dispersal kernel. Simulating epidemic spread at different spatial resolutions resulted in similar patterns of spatiotemporal spread. Dispersal kernel data obtained at one spatial scale can be used to represent spatiotemporal disease spread at a larger spatial scale.
Main Conclusions Organisms spread by aerially dispersed small propagules that exhibit LDD may follow similar dispersal patterns over a several hundred‐ or thousand‐fold expanse of spatial scale. Given that the primary mechanisms driving aerial dispersal remain constant, it may be possible to extrapolate across scales when empirical data are unavailable at a scale of interest.
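For illustration, the single power-law kernel reported in the Results can be fitted to pooled gradient data by least squares on log-log axes. A minimal sketch on hypothetical distance and intensity values spanning the local-to-regional range (the study's actual gradients are not reproduced):

```python
import numpy as np

# Hypothetical pooled disease gradients: distance from the inoculum
# source (m) and relative disease intensity, spanning local (<2 m),
# field-wide (<100 m) and regional (km) scales.
distance = np.array([0.1, 0.5, 1.0, 10.0, 50.0, 100.0, 1000.0, 5000.0])
intensity = np.array([1.2, 0.35, 0.2, 0.03, 0.009, 0.005, 0.0008, 0.0002])

# A power-law kernel y = a * d**(-b) is linear on log-log axes, so the
# exponent b comes from a least-squares fit to the logged data; a shallow
# slope (small b, fat tail) is the signature of long-distance dispersal.
slope, log_a = np.polyfit(np.log10(distance), np.log10(intensity), deg=1)
print(f"fitted kernel: y = {10**log_a:.3f} * d^({slope:.2f})")  # slope = -b
```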