

Title: Graph scan statistics with uncertainty
Scan statistics is one of the most popular approaches for anomaly detection in spatial and network data. In practice, there are numerous sources of uncertainty in the observed data. However, most prior works have overlooked such uncertainty, which can affect the accuracy and inferences of such methods. In this paper, we develop the first systematic approach to incorporating uncertainty in scan statistics. We study two formulations for robust scan statistics, one based on the sample average approximation and the other using a max-min objective. We show that uncertainty significantly increases the computational complexity of these problems. Rigorous algorithms and efficient heuristics for both formulations are developed with justification of theoretical bounds. We evaluate our proposed methods on synthetic and real datasets, and we observe that our methods give significant improvement in the detection power as well as optimization objective, relative to a baseline.
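As a rough, schematic illustration (the notation below is ours, not the paper's): write F(S, w) for the scan-statistic score of a connected subgraph S under node-level data w, and let the uncertainty in w be represented either by samples w^(1), ..., w^(N) or by an uncertainty set U. The two formulations can then be sketched as

    % Sample average approximation (SAA): maximize the average score over sampled realizations of the data
    \max_{S \text{ connected}} \; \frac{1}{N} \sum_{i=1}^{N} F\bigl(S, w^{(i)}\bigr)

    % Max-min (robust) formulation: maximize the worst-case score over the uncertainty set
    \max_{S \text{ connected}} \; \min_{w \in \mathcal{U}} \; F(S, w)

Both sketches are only meant to show where the extra hardness comes from: the inner average or minimization sits inside a combinatorial maximization over connected subgraphs.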
Award ID(s):
1633028
NSF-PAR ID:
10067757
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Proc. Association for the Advancement of Artificial Intelligence (AAAI)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective

    This study investigates a locally low-rank (LLR) denoising algorithm applied to source images from a clinical task-based functional MRI (fMRI) exam before post-processing, with the goal of improving the statistical confidence of task-based activation maps.

    Methods

    Task-based motor and language fMRI was obtained in eleven healthy volunteers under an IRB-approved protocol. LLR denoising was then applied to raw complex-valued image data before fMRI processing. Activation maps generated from conventional non-denoised (control) data were compared with maps derived from LLR-denoised image data. Four board-certified neuroradiologists completed consensus assessment of activation maps; region-specific and aggregate motor and language consensus thresholds were then compared with nonparametric statistical tests. Additional evaluation included retrospective truncation of exam data without and with LLR denoising; an ROI-based analysis tracked t-statistics and temporal SNR (tSNR) as scan durations decreased. A test-retest assessment was performed; retest data were matched with initial test data and compared for one subject.
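    For intuition only, here is a minimal sketch of one common locally low-rank denoising scheme: non-overlapping spatial patches whose patch-by-time (Casorati) matrices are denoised by singular value soft-thresholding. The patch size, threshold rule, and complex-data handling below are illustrative assumptions, not the exact algorithm or parameters used in this study.

        import numpy as np

        def llr_denoise(img, patch=8, rel_thresh=0.05):
            """Locally low-rank denoising sketch. img: complex array of shape (nx, ny, nt)."""
            nx, ny, nt = img.shape
            out = img.copy()
            for x0 in range(0, nx, patch):
                for y0 in range(0, ny, patch):
                    block = img[x0:x0 + patch, y0:y0 + patch, :]
                    casorati = block.reshape(-1, nt)                  # voxels-in-patch x time
                    U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
                    s = np.maximum(s - rel_thresh * s[0], 0.0)        # soft-threshold singular values
                    out[x0:x0 + patch, y0:y0 + patch, :] = ((U * s) @ Vh).reshape(block.shape)
            return out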

    Results

    fMRI activation maps generated from LLR-denoised data predominantly exhibited statistically significant (p = 4.88×10⁻⁴ to p = 0.042; one p = 0.062) increases in consensus t-statistic thresholds for motor and language activation maps. Following data truncation, LLR data showed task-specific increases in t-statistics and tSNR exceeding 20% and 50%, respectively, compared to control. LLR denoising enabled truncation of exam durations while preserving cluster volumes at fixed thresholds. Test-retest assessment showed variable activation, with LLR data thresholded higher when matching initial test data.

    Conclusion

    LLR denoising affords robust increases in t-statistics on fMRI activation maps compared to routine processing, and offers potential for reduced scan duration while preserving map quality.

     
  2. https://youtu.be/79Py8KU4_k0
    We consider statistical methods that invoke a min-max distributionally robust formulation to extract good out-of-sample performance in data-driven optimization and learning problems. Acknowledging the distributional uncertainty in learning from limited samples, the min-max formulations introduce an adversarial inner player to explore unseen covariate data. The resulting distributionally robust optimization (DRO) formulations, which include Wasserstein DRO formulations (our main focus), are specified using optimal transportation phenomena. Upon describing how these infinite-dimensional min-max problems can be approached via a finite-dimensional dual reformulation, this tutorial moves into its main component, namely, explaining a generic recipe for optimally selecting the size of the adversary’s budget. This is achieved by studying the limit behavior of an optimal transport projection formulation arising from an inquiry on the smallest confidence region that includes the unknown population risk minimizer. Incidentally, this systematic prescription coincides with those in specific examples in high-dimensional statistics and results in error bounds that are free from the curse of dimensionality. Equipped with this prescription, we present a central limit theorem for the DRO estimator and provide a recipe for constructing compatible confidence regions that are useful for uncertainty quantification. The rest of the tutorial is devoted to insights into the nature of the optimizers selected by the min-max formulations and additional applications of optimal transport projections.
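    As a schematic of the objects discussed above (our notation, under standard regularity conditions): with empirical distribution P_n on samples X_1, ..., X_n, loss ℓ, transport cost c, and adversary budget δ, the Wasserstein DRO problem and the finite-dimensional dual of its inner maximization take roughly the form

        \min_{\theta} \; \sup_{P:\, W_c(P, P_n) \le \delta} \mathbb{E}_{P}\bigl[\ell(\theta; X)\bigr]
        \;=\;
        \min_{\theta} \, \inf_{\lambda \ge 0} \Bigl\{ \lambda \delta + \frac{1}{n} \sum_{i=1}^{n} \sup_{x} \bigl[ \ell(\theta; x) - \lambda\, c(x, X_i) \bigr] \Bigr\}

    The tutorial's recipe for choosing δ then comes from the limit behavior of an optimal transport projection, rather than from this dual itself.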
  3. Graph scan statistics have become popular for event detection in networks. This methodology involves finding connected subgraphs that maximize a certain anomaly function, but maximizing these functions is computationally hard in general. We develop a novel approach for graph scan statistics with connectivity constraints. Our algorithm Approx-MultilinearScan relies on an algebraic technique called multilinear detection, and it improves over prior methods for large networks. We also develop a Pregel-based parallel version of this algorithm in Giraph, MultilinearScanGiraph, that allows us to solve instances with over 40 million edges, which is more than one order of magnitude larger than existing methods. 
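    Approx-MultilinearScan itself is built on multilinear detection and does not reduce to a few lines; purely for context, the sketch below is a naive greedy-expansion baseline for the same problem (maximize an anomaly score over connected subgraphs). The score function, seed choice, and size cap are illustrative placeholders, and this baseline carries none of the guarantees of the multilinear-detection approach.

        def greedy_scan(adj, score, seed, max_size=20):
            """Naive baseline: grow a connected subgraph greedily from a seed node.
            adj: dict mapping node -> set of neighbors; score: callable on a frozenset of nodes."""
            subgraph = {seed}
            best = score(frozenset(subgraph))
            frontier = set(adj[seed])
            while frontier and len(subgraph) < max_size:
                # try adding each frontier node and keep the one with the largest score
                gains = {v: score(frozenset(subgraph | {v})) for v in frontier}
                v, val = max(gains.items(), key=lambda kv: kv[1])
                if val <= best:
                    break                      # no single addition improves the score
                subgraph.add(v)
                best = val
                frontier |= adj[v]
                frontier -= subgraph
            return subgraph, best

    Running this from every node and keeping the best result gives a simple point of comparison for large-network methods like the one described above.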
  4. Abstract

    The GuLF Long-term Follow-up Study (GuLF STUDY) is investigating potential adverse health effects of workers involved in the Deepwater Horizon (DWH) oil spill response and cleanup (OSRC). Over 93% of the 160 000 personal air measurements taken on OSRC workers were below the limit of detection (LOD), as reported by the analytic labs. At this high level of censoring, our ability to develop exposure estimates was limited. The primary objective here was to reduce the number of measurements below the labs’ reported LODs to reflect the analytic methods’ true LODs, thereby facilitating the use of a relatively unbiased and precise Bayesian method to develop exposure estimates for study exposure groups (EGs). The estimates informed a job-exposure matrix to characterize exposure of study participants. A second objective was to develop descriptive statistics for relevant EGs that did not meet the Bayesian criteria of sample size ≥5 and censoring ≤80% to achieve the aforementioned level of bias and precision. One of the analytic labs recalculated the measurements using the analytic method’s LOD; the second lab provided raw analytical data, allowing us to recalculate the data values that fell between the originally reported LOD and the analytical method’s LOD. We developed rules for developing Bayesian estimates for EGs with >80% censoring. The remaining EGs were 100% censored. An order-based statistical method (OBSM) was developed to estimate exposures that considered the number of measurements, geometric standard deviation, and average LOD of the censored samples for N ≥ 20. For N < 20, substitution of ½ of the LOD was assigned. Recalculation of the measurements lowered overall censoring from 93.2 to 60.5% and of the THC measurements, from 83.1 to 11.2%. A total of 71% of the EGs met the ≤15% relative bias and <65% imprecision goal. Another 15% had censoring >80% but enough non-censored measurements to apply Bayesian methods. We used the OBSM for 3% of the estimates and the simple substitution method for 11%. The methods presented here substantially reduced the degree of censoring in the dataset and increased the number of EGs meeting our Bayesian method’s desired performance goal. The OBSM allowed for a systematic and consistent approach impacting only the lowest of the exposure estimates. This approach should be considered when dealing with highly censored datasets.
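    To make the simple substitution fallback concrete, the sketch below implements only the LOD/2 substitution described for small exposure groups, together with basic geometric summary statistics and the censoring percentage; it is not the Bayesian or order-based (OBSM) method, and the function name and interface are ours.

        import numpy as np

        def group_summary(values, detected, lod):
            """values: measured concentrations (used only where detected is True);
            detected: boolean array, True if above the LOD; lod: per-sample limit of detection."""
            values = np.asarray(values, dtype=float)
            detected = np.asarray(detected, dtype=bool)
            lod = np.asarray(lod, dtype=float)
            x = np.where(detected, values, lod / 2.0)      # substitute LOD/2 for censored samples
            logs = np.log(x)
            gm = np.exp(logs.mean())                       # geometric mean
            gsd = np.exp(logs.std(ddof=1))                 # geometric standard deviation
            pct_censored = 100.0 * (1.0 - detected.mean())
            return gm, gsd, pct_censored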

     
    We propose a new approach, the calibrated nonparametric scan statistic (CNSS), for more accurate detection of anomalous patterns in large-scale, real-world graphs. Scan statistics identify connected subgraphs that are interesting or unexpected through maximization of a likelihood ratio statistic; in particular, nonparametric scan statistics (NPSSs) identify subgraphs with a higher than expected proportion of individually significant nodes. However, we show that recently proposed NPSS methods are miscalibrated, failing to account for the maximization of the statistic over the multiplicity of subgraphs. This results in both reduced detection power for subtle signals and low precision of the detected subgraph even for stronger signals. Thus we develop a new statistical approach to recalibrate NPSSs, correctly adjusting for multiple hypothesis testing and taking the underlying graph structure into account. While the recalibration, based on randomization testing, is computationally expensive, we propose both an efficient (approximate) algorithm and new, closed-form lower bounds (on the expected maximum proportion of significant nodes for subgraphs of a given size, under the null hypothesis of no anomalous patterns). These advances, along with the integration of recent core-tree decomposition methods, enable CNSS to scale to large real-world graphs, with substantial improvement in the accuracy of detected subgraphs. Extensive experiments on both semi-synthetic and real-world datasets validate the effectiveness of our proposed methods in comparison with state-of-the-art counterparts.
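    For reference, one common (uncalibrated) nonparametric scan score for a single candidate node set is a Berk–Jones-style statistic computed over a grid of significance levels, as sketched below; the recalibration over the multiplicity of subgraphs, which is the contribution described above, is not shown, and the grid of levels is an illustrative choice.

        import math

        def npss_score(pvalues, alphas=(0.01, 0.05, 0.10, 0.15)):
            """Berk-Jones-style nonparametric scan score for one candidate node set,
            given the empirical p-values of its nodes."""
            n = len(pvalues)
            best = 0.0
            for alpha in alphas:
                n_alpha = sum(p <= alpha for p in pvalues)   # individually significant nodes at level alpha
                q = n_alpha / n
                if q <= alpha:
                    continue                                 # no excess of significant nodes at this level
                # KL divergence between Bernoulli(q) and Bernoulli(alpha)
                kl = q * math.log(q / alpha)
                if q < 1.0:
                    kl += (1.0 - q) * math.log((1.0 - q) / (1.0 - alpha))
                best = max(best, n * kl)
            return best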