Title: A new ranking scheme for modern data and its application to two-sample hypothesis testing
Rank-based approaches are among the most popular nonparametric methods for univariate data in statistical problems such as hypothesis testing, owing to their robustness and effectiveness. However, they are unsatisfactory for more complex data. In the era of big data, high-dimensional and non-Euclidean data, such as networks and images, are ubiquitous and pose challenges for statistical analysis. Existing multivariate ranks, such as component-wise, spatial, and depth-based ranks, do not apply to non-Euclidean data and have limited performance for high-dimensional data. Instead of ranking the observations directly, we propose two types of ranks applicable to complex data based on a similarity graph constructed on the observations: a graph-induced rank defined by the inductive nature of the graph and an overall rank defined by the weights of edges in the graph. To illustrate their use, both new ranks are employed to construct test statistics for two-sample hypothesis testing; under the permutation null distribution and some mild conditions on the ranks, these statistics converge to the $\chi^2_2$ distribution, enabling easy type-I error control. Simulation studies show that the new method exhibits good power under a wide range of alternatives compared with existing methods. The new test is illustrated on New York City taxi data, comparing travel patterns in consecutive months, and on a brain network dataset, comparing male and female subjects.
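The pipeline described above can be illustrated with a small, hedged sketch. The snippet below is not the paper's rank-based statistic: it is a generic graph-based permutation two-sample test (a k-nearest-neighbor similarity graph on the pooled sample, with the between-sample edge count as the statistic) meant only to show how a similarity graph plus a permutation null yields a distribution-free test whenever a pairwise distance between observations can be computed. All names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph_two_sample_pvalue(x, y, k=5, n_perm=1000, seed=None):
    """Generic graph-based permutation two-sample test (illustrative sketch).

    Builds a k-nearest-neighbor graph on the pooled sample and uses the number
    of edges joining the two samples as the statistic: under a distributional
    difference, points tend to link to their own sample, so the between-sample
    edge count is unusually small.
    """
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    n, m = len(x), len(y)
    labels = np.r_[np.zeros(n, dtype=bool), np.ones(m, dtype=bool)]

    # Similarity graph: connect each point to its k nearest neighbors.
    d = cdist(pooled, pooled)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    edges = [(i, int(j)) for i in range(n + m) for j in nbrs[i]]

    def cross_edges(lab):
        return sum(lab[i] != lab[j] for i, j in edges)

    observed = cross_edges(labels)
    # Permutation null: relabel the pooled points while keeping the graph fixed.
    null = np.array([cross_edges(rng.permutation(labels)) for _ in range(n_perm)])
    return (np.sum(null <= observed) + 1) / (n_perm + 1)
```

For non-Euclidean objects such as networks or images, the Euclidean cdist call would simply be replaced by an appropriate pairwise distance; the graph construction and permutation calibration are unchanged.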
Award ID(s):
1848579
PAR ID:
10518906
Author(s) / Creator(s):
Publisher / Repository:
Proceedings of Machine Learning Research
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The distribution function is essential in statistical inference, and the correspondence theorem in measure theory together with the Glivenko-Cantelli and Donsker properties connects it with samples to form a closed loop. This connection creates a paradigm for statistical inference. However, existing distribution functions are defined in Euclidean spaces and are no longer convenient to use with rapidly evolving data objects of a complex nature. It is imperative to develop the concept of the distribution function in a more general space to meet emerging needs. Linearity allows us to use hypercubes to define the distribution function in a Euclidean space, but without linearity in a metric space, we must work with the metric itself to investigate the probability measure. We introduce a class of metric distribution functions defined through the metric only. We overcome this challenging step by proving the correspondence theorem and the Glivenko-Cantelli theorem for metric distribution functions in metric spaces, laying the foundation for rational statistical inference on metric space-valued data. Then, we develop a homogeneity test and a mutual independence test for non-Euclidean random objects and present comprehensive empirical evidence to support the performance of our proposed methods. Supplementary materials for this article are available online.
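One way to make the construction concrete: a metric distribution function can be estimated from pairwise distances alone. The sketch below takes one simplified reading, estimating, for an anchor point u and a reference point v, the probability that a random observation falls in the closed metric ball centered at u with radius d(u, v). It does not reproduce the article's exact definition or its homogeneity and independence tests; the names are illustrative.

```python
import numpy as np

def empirical_metric_distribution(dist_matrix):
    """Empirical metric-ball probabilities from a pairwise distance matrix.

    F_hat[u, v] estimates P( d(X, u) <= d(v, u) ), i.e., the probability that
    a random observation lies in the closed ball centered at sample point u
    with radius d(u, v).  Only the metric is used, so the construction applies
    in any metric space.
    """
    d = np.asarray(dist_matrix, dtype=float)
    # indicator[i, u, v] = 1{ d(i, u) <= d(v, u) }, then average over i.
    indicator = d[:, :, None] <= d.T[None, :, :]
    return indicator.mean(axis=0)
```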
  2. We investigate the impact of low-rank interference on the problem of distinguishing between two seabed types using ambient sound as an acoustic source. The resulting frequency-domain snapshots follow a zero-mean, circularly-symmetric Gaussian distribution, where each seabed type has a unique covariance matrix. Detecting changes in the seabed type across distinct spatial locations can be formulated as a two-sample hypothesis test for equality of covariance, for which Box's M-test is the classical solution. Interference sources such as passing ships result in additive noise with a low-rank covariance that can reduce the performance of hypothesis testing. We first present a method to construct a worst-case interference field, making hypothesis testing as difficult as possible. We then provide an alternating optimization procedure to recover the interference-free covariance matrix. Experiments on synthetic data show that the optimized interferer can greatly reduce hypothesis testing performance, while our recovery method perfectly eliminates this interference for a sufficiently small interference rank. On real data from the New England Shelf Break Acoustics experiment, we show that our approach successfully mitigates interference, allowing for accurate hypothesis testing and improving bottom loss estimation. 
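For concreteness, here is a hedged sketch of the classical ingredient named above: a Box's M-type statistic for equality of covariance between two groups of zero-mean snapshots, calibrated by permutation rather than the chi-square approximation. The worst-case interference construction and the alternating-optimization recovery are not implemented; names are illustrative.

```python
import numpy as np

def box_m_statistic(x, y):
    """Box's M-type statistic for equality of covariance between two zero-mean
    samples (rows are frequency-domain snapshots, columns are sensors).

    Larger values indicate stronger evidence that the two covariance matrices
    differ.  Requires more snapshots than sensors so the sample covariances
    are nonsingular.
    """
    n, m = len(x), len(y)
    s1 = x.conj().T @ x / n               # sample covariance, mean assumed zero
    s2 = y.conj().T @ y / m
    sp = (n * s1 + m * s2) / (n + m)      # pooled covariance
    ld = lambda s: np.linalg.slogdet(s)[1]
    return (n + m) * ld(sp) - n * ld(s1) - m * ld(s2)

def box_m_permutation_pvalue(x, y, n_perm=500, seed=None):
    """Calibrate the statistic by randomly re-assigning snapshots to groups."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([x, y])
    observed = box_m_statistic(x, y)
    n = len(x)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        null.append(box_m_statistic(pooled[idx[:n]], pooled[idx[n:]]))
    return (np.sum(np.asarray(null) >= observed) + 1) / (n_perm + 1)
```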
  3. Ellis, K.; Ferrell, W.; Knapp, J. (Ed.)
    The mass transportation distance rank histogram (MTDRh) was developed to assess the reliability of any given scenario generation process for a two-stage, risk-neutral stochastic program. Reliability is defined loosely as goodness of fit between the generated scenario sets and corresponding observed values over a collection of historical instances. This graphical tool can diagnose over- or under-dispersion and/or bias in the scenario sets and support hypothesis testing of scenario reliability. If the risk-averse objective is instead to minimize CVaR of cost, the only important, or effective, scenarios are those that produce cost in the upper tail of the distribution at the optimal solution. We describe a procedure to adapt the MTDRh for use in assessing the reliability of scenarios relative to the upper tail of the cost distribution. This adaptation relies on a conditional probability distribution derived in the context of assessing the effectiveness of scenarios. For a risk-averse newsvendor formulation, we conduct simulation studies to systematically explore the ability of the CVaR-adapted MTDRh to diagnose different ways that scenario sets may fail to capture the upper tail of the cost distribution near optimality. We conjecture that, as with the MTDRh and its predecessor minimum spanning tree rank histogram, the nature of the mismatch between scenarios and observations can be observed according to the non-flat shape of the rank histogram. On the other hand, scenario generation methods can be calibrated according to uniform distribution goodness of fit to the distribution of ranks. 
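Since the CVaR adaptation hinges on which scenarios produce costs in the upper tail, a tiny sketch of a standard CVaR estimator may help fix ideas. It is not the MTDRh procedure itself, and the names are illustrative.

```python
import numpy as np

def cvar_and_effective_scenarios(costs, alpha=0.95):
    """Conditional value-at-risk of a cost sample and the 'effective' scenarios.

    CVaR is estimated as the mean cost over the worst (1 - alpha) tail; the
    scenarios whose costs reach that tail are the ones a risk-averse
    (CVaR-minimizing) objective actually cares about.
    """
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)        # value-at-risk threshold
    effective = costs >= var               # scenarios in the upper cost tail
    return costs[effective].mean(), effective
```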
  4. This article proposes a new statistical model to infer interpretable population-level preferences from ordinal comparison data. Such data is ubiquitous, e.g., ranked choice votes, top-10 movie lists, and pairwise sports outcomes. Traditional statistical inference on ordinal comparison data results in an overall ranking of objects, e.g., from best to worst, with each object having a unique rank. However, the ranks of some objects may not be statistically distinguishable. This could happen due to insufficient data or to the true underlying object qualities being equal. Because uncertainty communication in estimates of overall rankings is notoriously difficult, we take a different approach and allow groups of objects to have equal ranks, or be rank-clustered, in our model. Existing models related to rank-clustering are limited by their inability to handle a variety of ordinal data types, to quantify uncertainty, or by the need to pre-specify the number and size of potential rank-clusters. We solve these limitations through our proposed Bayesian Rank-Clustered Bradley–Terry–Luce (BTL) model. We accommodate rank-clustering via parameter fusion by imposing a novel spike-and-slab prior on object-specific worth parameters in the BTL family of distributions for ordinal comparisons. We demonstrate rank-clustering on simulated and real datasets in surveys, elections, and sports analytics.
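As background for the model family named above, here is a minimal sketch of the Bradley–Terry–Luce building block: object-specific worth parameters determine pairwise win probabilities, and the log-likelihood of observed comparison counts follows. The Bayesian rank-clustered extension with the spike-and-slab prior is not reproduced; names are illustrative.

```python
import numpy as np

def btl_log_likelihood(log_worth, wins):
    """Log-likelihood of pairwise comparison counts under the Bradley-Terry-Luce
    model: P(i beats j) = w_i / (w_i + w_j), with worths w = exp(log_worth).

    wins[i, j] counts how many times object i beat object j (diagonal assumed
    zero).  Rank-clustering would correspond to ties among the worth values.
    """
    w = np.exp(np.asarray(log_worth, dtype=float))
    wins = np.asarray(wins, dtype=float)
    p = w[:, None] / (w[:, None] + w[None, :])   # p[i, j] = P(i beats j)
    return float(np.sum(wins * np.log(p)))
```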
  5. Recent technological advances allow for the collection of massive data in the study of complex phenomena over time and/or space in various fields. Many of these data involve sequences of high-dimensional or non-Euclidean measurements, where change-point analysis is a crucial early step in understanding the data. Segmentation, or offline change-point analysis, divides data into homogeneous temporal or spatial segments, making subsequent analysis easier; its online counterpart detects changes in sequentially observed data, allowing for real-time anomaly detection. This article reviews a nonparametric change-point analysis framework that utilizes graphs representing the similarity between observations. The framework applies whenever a reasonable dissimilarity measure among the observations can be defined, and thus covers a wide range of applications, from high-dimensional data to non-Euclidean data such as imaging or network data. In addition, analytic formulas can be derived to control false discoveries, making these methods easy off-the-shelf data analysis tools.
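A simplified sketch of the graph-based scan idea: build a similarity graph on the time-ordered observations and, for each candidate change point, compare the observed number of edges crossing the split with its expectation under the permutation null. The reviewed framework's exact variance standardization and analytic false-discovery controls are not reproduced; names are illustrative.

```python
import numpy as np

def edge_count_scan(dist_matrix, k=5):
    """Simplified graph-based change-point scan on time-ordered observations.

    Builds a k-nearest-neighbor graph from the pairwise distance matrix and,
    for each candidate change point t, measures how far the observed count of
    edges crossing the split {1..t} | {t+1..n} falls below its expectation
    under random reordering; a large deficit flags a likely change.
    """
    d = np.asarray(dist_matrix, dtype=float).copy()
    n = d.shape[0]
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    edges = {(min(i, int(j)), max(i, int(j))) for i in range(n) for j in nbrs[i]}

    t_grid = np.arange(1, n)
    cross = np.array([sum(1 for i, j in edges if i < t <= j) for t in t_grid])
    # Expected cross-edge count at split t under a uniform random permutation
    # of the observation order: |E| * 2 t (n - t) / (n (n - 1)).
    expected = len(edges) * 2.0 * t_grid * (n - t_grid) / (n * (n - 1))
    deficit = expected - cross
    t_hat = int(t_grid[np.argmax(deficit)])
    return t_hat, deficit
```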