


Search for: All records

Award ID contains: 1915803


  1. Abstract

    Disease mapping is an important statistical tool used by epidemiologists to assess geographic variation in disease rates and to identify lurking environmental risk factors from spatial patterns. Such maps rely upon spatial models for regionally aggregated data, where neighboring regions tend to exhibit more similar outcomes than those farther apart. We contribute to the literature on multivariate disease mapping, which deals with measurements on multiple (two or more) diseases in each region. We aim to disentangle associations among the multiple diseases from spatial autocorrelation in each disease. We develop multivariate directed acyclic graphical autoregression models to accommodate spatial and inter-disease dependence. The hierarchical construction imparts flexibility and richness, interpretability of spatial autocorrelation and inter-disease relationships, and computational ease, but depends upon the order in which the cancers are modeled. To obviate this, we demonstrate how Bayesian model selection and averaging across orders are easily achieved using bridge sampling. We compare our method with a competitor using simulation studies and present an application to multiple cancer mapping using data from the Surveillance, Epidemiology, and End Results program.

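The order-averaging step above lends itself to a small numerical sketch. Assuming bridge sampling has already produced log marginal likelihood estimates for each ordering (the ordering labels and all numbers below are invented for illustration, not values from the paper), posterior probabilities over orderings and a model-averaged estimate follow from a stable log-sum-exp computation:

```python
import math

# Hypothetical bridge-sampling estimates of log marginal likelihoods,
# one per ordering of the cancers (illustrative numbers only).
log_ml = {"lung-esoph-larynx": -1502.3,
          "esoph-lung-larynx": -1503.1,
          "larynx-lung-esoph": -1505.8}

# Posterior model probabilities under a uniform prior over orderings,
# computed stably via log-sum-exp.
m = max(log_ml.values())
denom = sum(math.exp(v - m) for v in log_ml.values())
post_prob = {k: math.exp(v - m) / denom for k, v in log_ml.items()}

# Model-averaged posterior mean of some parameter of interest, given
# (hypothetical) per-ordering posterior means.
theta_by_order = {"lung-esoph-larynx": 0.41,
                  "esoph-lung-larynx": 0.39,
                  "larynx-lung-esoph": 0.44}
theta_bma = sum(post_prob[k] * theta_by_order[k] for k in log_ml)
```

Working on the log scale avoids underflow: marginal likelihoods this small are far below floating-point range when exponentiated directly.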
  2. Abstract

    The Gaussian process (GP) is a staple in the toolkit of the spatial statistician. Well-documented computing roadblocks in the analysis of large geospatial datasets using GPs have now largely been mitigated via several recent statistical innovations. The nearest neighbor Gaussian process (NNGP) has emerged as one of the leading candidates for such massive-scale geospatial analysis owing to its empirical success. This article reviews the connection of the NNGP to sparse Cholesky factors of the spatial precision (inverse-covariance) matrix. The focus of the review is on these sparse Cholesky matrices, which are versatile and have recently found many diverse applications beyond the primary usage of the NNGP for fast parameter estimation and prediction in spatial (generalized) linear models. In particular, we discuss applications of sparse NNGP Cholesky matrices to address multifaceted computational issues in spatial bootstrapping, simulation of large-scale realizations of Gaussian random fields, and extensions to nonparametric mean function estimation of a GP using random forests. We also review a sparse-Cholesky-based model for areal (geographically aggregated) data that addresses long-established interpretability issues of existing areal models. Finally, we highlight some yet-to-be-addressed issues of such sparse Cholesky approximations that warrant further research.

    This article is categorized under:

    Algorithms and Computational Methods > Algorithms

    Algorithms and Computational Methods > Numerical Methods

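A minimal sketch of the sparse-Cholesky idea the review centers on: with locations in a fixed order, each latent value is regressed on its m nearest preceding neighbors, giving a sparse strictly-lower-triangular factor B and a diagonal F of conditional variances; a realization of the field is then simulated by forward substitution. The unit-variance exponential covariance and all parameter values are illustrative assumptions, not any package's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 10  # number of locations, number of neighbors
s = rng.uniform(0, 1, size=(n, 2))
s = s[np.lexsort((s[:, 1], s[:, 0]))]  # impose an ordering on locations

def expcov(a, b, phi=5.0):
    """Unit-variance exponential covariance between two location sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.exp(-phi * d)

# Build the NNGP factor: w_i | w_{N(i)} ~ N(B_i w_{N(i)}, F_i), where N(i)
# holds the (up to) m nearest locations that precede i in the ordering.
B_rows = [(np.array([], dtype=int), np.array([]))]
F = np.empty(n)
F[0] = 1.0
for i in range(1, n):
    prev = np.arange(i)
    d = np.linalg.norm(s[prev] - s[i], axis=1)
    N = prev[np.argsort(d)[:min(m, i)]]
    C_NN = expcov(s[N], s[N])          # neighbor covariance
    c_iN = expcov(s[i:i + 1], s[N])[0]  # covariance with location i
    b = np.linalg.solve(C_NN, c_iN)
    B_rows.append((N, b))
    F[i] = 1.0 - b @ c_iN              # conditional (Schur complement) variance

# Simulate a realization by forward substitution: O(n m^2) total work,
# versus O(n^3) for a dense Cholesky of the full covariance.
z = rng.standard_normal(n)
w = np.empty(n)
for i in range(n):
    N, b = B_rows[i]
    w[i] = b @ w[N] + np.sqrt(F[i]) * z[i]
```

The same sparse (B, F) pair drives the other applications the review lists (bootstrapping, prediction), since it encodes a valid joint density whose precision matrix has a known sparse Cholesky factor.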
  3. Abstract

    Feature selection to identify spatially variable genes or other biologically informative genes is a key step during analyses of spatially-resolved transcriptomics data. Here, we propose nnSVG, a scalable approach to identify spatially variable genes based on nearest-neighbor Gaussian processes. Our method (i) identifies genes that vary in expression continuously across the entire tissue or within a priori defined spatial domains, (ii) uses gene-specific estimates of length scale parameters within the Gaussian process models, and (iii) scales linearly with the number of spatial locations. We demonstrate the performance of our method using experimental data from several technological platforms and simulations. A software implementation is available at .
    Free, publicly-accessible full text available December 1, 2024
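A toy version of the scoring idea, with an exact GP on a small number of spots standing in for the scalable nearest-neighbor machinery (the kernel form, length-scale grid, nugget value, and simulated genes below are illustrative assumptions, not nnSVG's implementation): each gene is scored by the gain in log-likelihood of a spatial GP, with a gene-specific length scale chosen from a grid, over an iid null.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
coords = rng.uniform(0, 1, size=(n, 2))
D = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

# Simulate two genes: one spatially variable, one pure noise.
K_true = np.exp(-D / 0.2)
spatial_gene = rng.multivariate_normal(np.zeros(n), K_true + 0.05 * np.eye(n))
noise_gene = rng.standard_normal(n)

def gp_loglik(y, K):
    """Log-likelihood of a zero-mean GP with covariance K."""
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))

def spatial_score(y):
    """Likelihood-ratio statistic: best spatial GP fit over a grid of
    gene-specific length scales, versus an iid (non-spatial) null."""
    y = (y - y.mean()) / y.std()
    null = gp_loglik(y, np.eye(n))
    best = max(gp_loglik(y, np.exp(-D / ell) + 0.05 * np.eye(n))
               for ell in (0.05, 0.1, 0.2, 0.4))
    return best - null

scores = {"spatial_gene": spatial_score(spatial_gene),
          "noise_gene": spatial_score(noise_gene)}
```

Ranking genes by this score surfaces those whose expression varies smoothly across the tissue; replacing the dense Cholesky with a nearest-neighbor factorization is what gives the linear scaling the abstract claims.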
  4. Graphical models have witnessed significant growth and usage in spatial data science for modeling data referenced over a massive number of spatial-temporal coordinates. Much of this literature has focused on a single outcome or relatively few spatially dependent outcomes. Recent attention has focused upon addressing modeling and inference for a substantially larger number of outcomes. While spatial factor models and multivariate basis expansions occupy a prominent place in this domain, this article elucidates a recent approach, graphical Gaussian Processes, that exploits the notion of conditional independence among a very large number of spatial processes to build scalable graphical models for fully model-based Bayesian analysis of multivariate spatial data.
    Free, publicly-accessible full text available September 6, 2024
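The conditional-independence notion at the heart of such graphical models can be made concrete with a three-outcome toy example (all numbers are invented for illustration): a zero in the precision matrix between outcomes 1 and 3 means they are conditionally independent given outcome 2, even though they remain marginally correlated.

```python
import numpy as np

# Variable graph: edges 1-2 and 2-3, no edge 1-3. The zero entries in the
# precision matrix Q encode conditional independence of outcomes 1 and 3
# given outcome 2.
Q = np.array([[ 2.0, -0.8,  0.0],
              [-0.8,  2.0, -0.8],
              [ 0.0, -0.8,  2.0]])
Sigma = np.linalg.inv(Q)

# Marginally, outcomes 1 and 3 are still correlated ...
marg_corr_13 = Sigma[0, 2] / np.sqrt(Sigma[0, 0] * Sigma[2, 2])
# ... but their partial correlation given outcome 2 is exactly zero.
partial_corr_13 = -Q[0, 2] / np.sqrt(Q[0, 0] * Q[2, 2])
```

Graphical Gaussian processes lift this idea from scalar outcomes to whole spatial processes: sparsity in the inter-process graph, not zero marginal dependence, is what makes the joint model scalable.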
  5. Low-cost sensors enable finer-scale spatiotemporal measurements within the existing methane (CH4) monitoring infrastructure and could help cities mitigate CH4 emissions to meet their climate goals. While initial studies of low-cost CH4 sensors have shown potential for effective CH4 measurement at ambient concentrations, sensor deployment remains limited due to questions about interferences and calibration across environments and seasons. This study evaluates sensor performance across seasons, with specific attention paid to the sensor's understudied carbon monoxide (CO) interferences and environmental dependencies, through long-term ambient co-location in an urban environment. The sensor was first evaluated in a laboratory using chamber calibration and co-location experiments, and then in the field through two 8-week co-locations with a reference CH4 instrument. In the laboratory, the sensor was sensitive to CH4 concentrations below ambient background concentrations. Different sensor units responded similarly to changing CH4, CO, temperature, and humidity conditions but required individual calibrations to account for differences in sensor response factors. When deployed in the field, co-located with a reference instrument near Baltimore, MD, the sensor captured diurnal trends in hourly CH4 concentration after corrections for temperature, absolute humidity, CO concentration, and hour of day. Variable performance was observed across seasons, with the sensor performing well (R2 = 0.65; percent bias 3.12%; RMSE 0.10 ppm) in the winter validation period and less accurately (R2 = 0.12; percent bias 3.01%; RMSE 0.08 ppm) in the summer validation period, where there was less dynamic range in CH4 concentrations.
The results highlight the utility of sensor deployment in more variable ambient CH4 conditions and demonstrate the importance of accounting for temperature and humidity dependencies as well as co-located CO concentrations with low-cost CH4 measurements. We show this can be addressed via multiple linear regression (MLR) models accounting for key covariates to enable urban measurements in areas with CH4 enhancement. Together with individualized calibration prior to deployment, the sensor shows promise for use in low-cost sensor networks and represents a valuable supplement to existing monitoring strategies to identify CH4 hotspots.
    Free, publicly-accessible full text available April 13, 2024
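The MLR correction described above can be sketched on synthetic data (all coefficients, interference terms, and noise levels below are invented for illustration, not fitted values from the study): regress the reference CH4 concentration on the raw sensor signal together with temperature, absolute humidity, CO, and hour-of-day terms.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 24 * 7 * 8  # hourly data over an illustrative 8-week co-location
hour = np.tile(np.arange(24), n // 24)
temp = 15 + 10 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 2, n)   # deg C
abs_hum = 8 + 0.3 * temp + rng.normal(0, 1, n)                         # g/m^3
co = np.abs(rng.normal(0.3, 0.1, n))                                   # ppm
ch4_ref = 1.9 + 0.2 * rng.gamma(2, 0.5, n)  # reference CH4 (ppm)

# Synthetic raw sensor signal with temperature, humidity, and CO
# interference plus measurement noise (invented response factors).
raw = (0.8 * ch4_ref + 0.01 * temp - 0.02 * abs_hum + 0.5 * co
       + rng.normal(0, 0.02, n))

# MLR calibration: regress the reference on the raw signal and covariates,
# encoding hour of day with a sine/cosine pair.
X = np.column_stack([np.ones(n), raw, temp, abs_hum, co,
                     np.sin(2 * np.pi * hour / 24),
                     np.cos(2 * np.pi * hour / 24)])
beta, *_ = np.linalg.lstsq(X, ch4_ref, rcond=None)
ch4_cal = X @ beta

rmse_raw = np.sqrt(np.mean((raw - ch4_ref) ** 2))
rmse_cal = np.sqrt(np.mean((ch4_cal - ch4_ref) ** 2))
```

Because the covariates that drive the interferences appear in the regression, the calibrated RMSE collapses to roughly the sensor's intrinsic noise level, which is the mechanism the abstract's correction relies on.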
  6. Historically, two primary criticisms that statisticians have had of machine learning and deep neural models are their lack of uncertainty quantification and their inability to do inference (i.e., to explain which inputs are important). Explainable AI has developed over the last few years as a sub-discipline of computer science and machine learning to mitigate these concerns (as well as concerns of fairness and transparency in deep modeling). In this article, our focus is on explaining which inputs are important in models for predicting environmental data. In particular, we focus on three general methods for explainability that are model agnostic and thus applicable across a breadth of models without internal explainability: “feature shuffling”, “interpretable local surrogates”, and “occlusion analysis”. We describe particular implementations of each of these and illustrate their use with a variety of models, all applied to the problem of long-lead forecasting of monthly soil moisture in the North American corn belt given sea surface temperature anomalies in the Pacific Ocean.
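Of the three methods, “feature shuffling” (permutation importance) is the easiest to sketch. Assuming any fitted predictive model (a linear least-squares fit on synthetic data stands in below), each input column is permuted in turn and the resulting increase in prediction error is recorded as that input's importance:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 2000, 5
X = rng.standard_normal((n, p))
# Synthetic response: only features 0 and 2 actually matter.
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.5, n)

# Fit any black-box model; a linear least-squares fit stands in here.
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
predict = lambda A: beta[0] + A @ beta[1:]

def mse(y_true, y_hat):
    return np.mean((y_true - y_hat) ** 2)

base = mse(y, predict(X))

# Feature shuffling: permute one column at a time, breaking its link to the
# response, and record the increase in loss as that feature's importance.
importance = np.empty(p)
for j in range(p):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[j] = mse(y, predict(Xp)) - base
```

The method never looks inside the model, which is what makes it model agnostic: the same loop applies unchanged to a neural network forecasting soil moisture from sea surface temperature fields.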
  7. Abstract. Low-cost sensors are often co-located with reference instruments to assess their performance and establish calibration equations, but limited discussion has focused on whether the duration of this calibration period can be optimized. We placed a multipollutant monitor that contained sensors measuring particulate matter smaller than 2.5 µm (PM2.5), carbon monoxide (CO), nitrogen dioxide (NO2), ozone (O3), and nitric oxide (NO) at a reference field site for 1 year. We developed calibration equations using randomly selected co-location subsets spanning 1 to 180 consecutive days out of the 1-year period and compared the resulting root-mean-square error (RMSE) and Pearson correlation coefficient (r) values. The co-located calibration period required to obtain consistent results varied by sensor type, and several factors increased the co-location duration required for accurate calibration, including the response of a sensor to environmental factors, such as temperature or relative humidity (RH), or cross-sensitivities to other pollutants. Using measurements from Baltimore, MD, where a broad range of environmental conditions may be observed over a given year, we found diminishing improvements in the median RMSE for calibration periods longer than about 6 weeks for all the sensors. The best-performing calibration periods were the ones that contained a range of environmental conditions similar to those encountered during the evaluation period (i.e., all other days of the year not used in the calibration). With optimal, varying conditions it was possible to obtain an accurate calibration in as little as 1 week for all sensors, suggesting that co-location can be minimized if the period is strategically selected and monitored so that the calibration period is representative of the desired measurement setting.
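The study's design can be mimicked on synthetic data (the sensor model, seasonal cycle, and noise levels below are invented, not the study's data): calibrate on randomly placed windows of consecutive days, evaluate on all remaining days, and compare the median RMSE across window lengths. Longer windows tend to win here precisely because they span a wider range of the seasonal temperature cycle, echoing the finding that representative conditions matter more than raw duration.

```python
import numpy as np

rng = np.random.default_rng(4)
days, per_day = 365, 24
n = days * per_day
t = np.arange(n)
temp = 12 + 12 * np.sin(2 * np.pi * t / (365 * 24))   # seasonal temperature
truth = 30 + 10 * rng.standard_normal(n)              # reference pollutant
raw = 0.7 * truth + 0.5 * temp + rng.normal(0, 2, n)  # temp-dependent sensor

def eval_window(start_day, length_days):
    """Calibrate on one consecutive-day window, score on all other days."""
    idx = np.zeros(n, dtype=bool)
    idx[start_day * per_day:(start_day + length_days) * per_day] = True
    Xw = np.column_stack([np.ones(idx.sum()), raw[idx], temp[idx]])
    beta, *_ = np.linalg.lstsq(Xw, truth[idx], rcond=None)
    Xo = np.column_stack([np.ones((~idx).sum()), raw[~idx], temp[~idx]])
    pred = Xo @ beta
    return np.sqrt(np.mean((pred - truth[~idx]) ** 2))

# Median out-of-window RMSE over 20 randomly placed windows per length.
median_rmse = {}
for length in (7, 42, 180):
    starts = rng.integers(0, days - length, size=20)
    median_rmse[length] = np.median([eval_window(s, length) for s in starts])
```

Short windows see an almost constant temperature, so the temperature coefficient is poorly identified and extrapolates badly to the rest of the year; this is the mechanism behind the environmental-dependence effect the abstract describes.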
  8. Spatial probit generalized linear mixed models (spGLMM) with a linear fixed effect and a spatial random effect, endowed with a Gaussian process prior, are widely used for analysis of binary spatial data. However, the canonical Bayesian implementation of this hierarchical mixed model can involve protracted Markov Chain Monte Carlo sampling. Alternate approaches have been proposed that circumvent this by directly representing the marginal likelihood from spGLMM in terms of multivariate normal cumulative distribution functions (cdf). We present a direct and fast rendition of this latter approach for predictions from a spatial probit linear mixed model. We show that the covariance matrix of the cdf characterizing the marginal cdf of binary spatial data from spGLMM is amenable to approximation using Nearest Neighbor Gaussian Processes (NNGP). This facilitates a scalable prediction algorithm for spGLMM using NNGP that only involves sparse or small matrix computations and can be deployed in an embarrassingly parallel manner. We demonstrate the accuracy and scalability of the algorithm via numerous simulation experiments and an analysis of species presence-absence data.
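A simplified sketch of the prediction step (not the paper's full algorithm, which works with multivariate normal cdfs over the observed data): at a single new site, kriging from the m nearest neighbors, here a 3-neighbor toy covariance with invented numbers, yields the latent spatial effect's conditional mean and variance, and the latent Gaussian is then integrated out analytically to give the probit success probability.

```python
import numpy as np
from math import erf, sqrt

def phi_cdf(z):
    """Standard normal cdf."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def krige(c_new_N, C_NN, w_N):
    """Conditional mean and variance of a unit-variance GP at a new site
    given latent values w_N at its nearest neighbors."""
    b = np.linalg.solve(C_NN, c_new_N)
    return b @ w_N, 1.0 - b @ c_new_N

# Toy setup: 3 neighbors with known latent values (illustrative numbers).
C_NN = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])   # covariance among neighbors
c_new_N = np.array([0.7, 0.6, 0.2])  # covariance of new site with neighbors
w_N = np.array([0.9, 0.4, -0.1])     # latent effects at the neighbors
mu_w, var_w = krige(c_new_N, C_NN, w_N)

# Probit prediction: with Y = 1{xb + w + e > 0}, e ~ N(0, 1), the latent
# Gaussian integrates out analytically.
xbeta = -0.2  # fixed-effect contribution at the new site (illustrative)
p_y1 = phi_cdf((xbeta + mu_w) / np.sqrt(1.0 + var_w))
```

Each prediction only touches an m x m neighbor system, which is why the full algorithm involves only small or sparse matrix computations and parallelizes across sites.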