

Title: An Estimation and Analysis Framework for the Rasch Model
The Rasch model is widely used for item response analysis in applications ranging from recommender systems to psychology, education, and finance. While a number of estimators have been proposed for the Rasch model over the last few decades, the associated analytical performance guarantees are mostly asymptotic. This paper provides a framework that relies on a novel linear minimum mean-squared error (L-MMSE) estimator which enables an exact, nonasymptotic, and closed-form analysis of the parameter estimation error under the Rasch model. The proposed framework provides guidelines on the number of items and responses required to attain low estimation errors in tests or surveys. We furthermore demonstrate its efficacy on a number of real-world collaborative filtering datasets, which reveals that the proposed L-MMSE estimator performs on par with state-of-the-art nonlinear estimators in terms of predictive performance.
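As a rough, self-contained illustration of the linear-estimation idea (not the paper's closed-form L-MMSE construction), the sketch below simulates Rasch-model responses and forms a linear MMSE-style estimate of the stacked ability/difficulty vector using Monte Carlo moments; the standard-normal priors, problem sizes, and moment estimation are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rasch model: P(user i answers item j correctly) = sigmoid(theta_i - b_j),
# with user abilities theta and item difficulties b.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_users, n_items = 30, 10
x_dim = n_users + n_items                      # stacked parameter vector [theta; b]

def sample_responses(theta, b):
    p = sigmoid(theta[:, None] - b[None, :])
    return (rng.random((n_users, n_items)) < p).astype(float)

# Monte Carlo estimate of the moments needed by a generic linear MMSE estimator
#   x_hat = E[x] + C_xy C_yy^{-1} (y - E[y]),
# under standard-normal priors on abilities and difficulties (an assumption here).
n_mc = 3000
X = np.empty((n_mc, x_dim))
Y = np.empty((n_mc, n_users * n_items))
for m in range(n_mc):
    theta, b = rng.normal(size=n_users), rng.normal(size=n_items)
    X[m], Y[m] = np.concatenate([theta, b]), sample_responses(theta, b).ravel()

x_mean, y_mean = X.mean(0), Y.mean(0)
C_xy = (X - x_mean).T @ (Y - y_mean) / n_mc
C_yy = (Y - y_mean).T @ (Y - y_mean) / n_mc + 1e-3 * np.eye(Y.shape[1])
W = C_xy @ np.linalg.inv(C_yy)                 # linear estimator weights

# Estimate abilities/difficulties from one new binary response matrix.
theta_true, b_true = rng.normal(size=n_users), rng.normal(size=n_items)
y_obs = sample_responses(theta_true, b_true).ravel()
x_hat = x_mean + W @ (y_obs - y_mean)
print("ability MSE:", np.mean((x_hat[:n_users] - theta_true) ** 2))
```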
Award ID(s):
1652065
NSF-PAR ID:
10082581
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 35th International Conference on Machine Learning
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Monitoring of linear diffusive network dynamics subject to a stationary stochastic input is considered from a graph-theoretic perspective. Specifically, the performance of minimum mean square error (MMSE) estimators of the stochastic input and network state, based on remote noisy measurements, is studied. Using a graph-theoretic characterization of frequency responses in the diffusive network model, we show that the performance of an off-line (noncausal) estimator exhibits an exact topological pattern, which is related to vertex cuts and paths in the network's graph. For on-line (causal) estimation, graph-theoretic results are obtained for the case where the measurement noise is small.
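For intuition, the sketch below runs a standard Kalman filter, the causal MMSE estimator for a linear-Gaussian model, on a small path-graph diffusion observed through one remote noisy sensor; the graph, noise levels, and dynamics are illustrative choices, and the sketch does not reproduce the paper's graph-theoretic analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Diffusive dynamics on a 5-node path graph (an illustrative choice):
# x_{k+1} = (I - eps*L) x_k + w_k,  y_k = x_k[4] + v_k  (remote noisy sensor).
n, eps = 5, 0.2
L = np.diag([1, 2, 2, 2, 1]) - (np.eye(n, k=1) + np.eye(n, k=-1))
A = np.eye(n) - eps * L
C = np.zeros((1, n)); C[0, -1] = 1.0
Q, R = 0.1 * np.eye(n), np.array([[0.05]])     # input and measurement noise covariances

# Standard Kalman filter recursion: the causal MMSE state estimator
# for this linear-Gaussian model.
x_hat, P = np.zeros(n), np.eye(n)
x_true = rng.normal(size=n)
for k in range(100):
    # simulate the network and the remote measurement
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(n), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(1), R)
    # predict
    x_hat, P = A @ x_hat, A @ P @ A.T + Q
    # update
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (y - C @ x_hat)).ravel()
    P = (np.eye(n) - K @ C) @ P

print("final state estimation error:", np.linalg.norm(x_hat - x_true))
```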
  2. Partially-observed Boolean dynamical systems (POBDS) are large and complex dynamical systems that can be monitored through various sensors. However, time, storage, and economic constraints may impede the use of all sensors for estimation purposes. Thus, developing a procedure for selecting a subset of sensors is essential. The optimal minimum mean-square error (MMSE) POBDS state estimators are the Boolean Kalman Filter (BKF) and the Boolean Kalman Smoother (BKS). Naturally, the performance of these estimators strongly depends on the choice of sensors. For a POBDS with a finite observation space and a finite collection of candidate sensor subsets, we introduce the optimal procedure for selecting the subset that leads to the smallest expected mean-square error (MSE) of the BKF over a finite horizon. The performance of the proposed sensor selection methodology is demonstrated by numerical experiments with a p53-MDM2 negative-feedback loop gene regulatory network observed through Bernoulli noise.
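A toy version of the selection problem can be brute-forced: the sketch below runs an exhaustive discrete Bayes filter (the enumerated form of the optimal Boolean state estimator) on a 3-variable Boolean system with Bernoulli-noise sensors and picks the 2-sensor subset with the smallest Monte Carlo MSE; the regulatory rules, noise levels, and Monte Carlo evaluation (rather than the paper's exact expected-MSE computation) are assumptions of this sketch.

```python
import numpy as np
from itertools import combinations, product

rng = np.random.default_rng(2)

# Toy POBDS: 3 Boolean state variables, deterministic update plus bit-flip
# noise p_flip, sensors read chosen bits through Bernoulli noise q.
n, p_flip, q = 3, 0.05, 0.2
states = np.array(list(product([0, 1], repeat=n)))        # all 2^n states

def next_state(x):
    # simple Boolean regulatory rules (an assumption of this sketch)
    return np.array([x[1] ^ x[2], x[0], 1 - x[0]])

# Transition matrix of the Markov chain induced by the bit-flip noise.
T = np.zeros((2 ** n, 2 ** n))
for i, x in enumerate(states):
    nx = next_state(x)
    for j, y in enumerate(states):
        d = np.sum(nx != y)
        T[i, j] = (p_flip ** d) * ((1 - p_flip) ** (n - d))

def run_filter(sensors, steps=50):
    """Exhaustive Bayes filter over all Boolean states; returns the
    empirical MSE of the estimated state for a given sensor subset."""
    belief = np.full(2 ** n, 1 / 2 ** n)
    x = states[rng.integers(2 ** n)].copy()
    err = 0.0
    for _ in range(steps):
        x = (next_state(x) + (rng.random(n) < p_flip)) % 2  # true state update
        belief = belief @ T                                  # prediction step
        for s in sensors:                                    # noisy readouts
            y = x[s] ^ (rng.random() < q)
            belief = belief * np.where(states[:, s] == y, 1 - q, q)
            belief /= belief.sum()
        x_hat = (states * belief[:, None]).sum(0)            # posterior mean
        err += np.mean((x_hat - x) ** 2)
    return err / steps

# Brute-force selection of the best 2-sensor subset by Monte Carlo MSE.
best = min(combinations(range(n), 2),
           key=lambda s: np.mean([run_filter(s) for _ in range(20)]))
print("selected sensors:", best)
```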
  3. The minimum mean-square error (MMSE) achievable by optimal estimation of a random variable S given another random variable T is of much interest in a variety of statistical contexts. Motivated by a growing interest in auditing machine learning models for unintended information leakage, we propose a neural network-based estimator of this MMSE. We derive a lower bound for the MMSE based on the proposed estimator and the Barron constant associated with the conditional expectation of S given T. Since the latter is typically unknown in practice, we derive a general bound for the Barron constant that produces order-optimal estimates for canonical distribution models.
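For a concrete sense of the approach, the sketch below (assuming scikit-learn is available) fits a small neural regressor of S on T for a Gaussian toy model whose true MMSE is known in closed form, and compares the held-out MSE against it; the architecture and data model are illustrative, not the estimator or bounds studied in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Toy model where the true MMSE is known: S ~ N(0,1), T = S + N(0, sigma2).
# Then E[S|T] = T / (1 + sigma2) and MMSE = sigma2 / (1 + sigma2).
sigma2 = 0.5
S = rng.normal(size=20000)
T = S + rng.normal(scale=np.sqrt(sigma2), size=S.shape)

# Neural regression of S on T; the held-out MSE of the fitted network is a
# data-driven estimate (an upper bound) of the MMSE.  Sizes are illustrative.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(T[:15000].reshape(-1, 1), S[:15000])
mse_est = np.mean((net.predict(T[15000:].reshape(-1, 1)) - S[15000:]) ** 2)

print("estimated MMSE:", mse_est, " true MMSE:", sigma2 / (1 + sigma2))
```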
  4. This paper deals with linear equalization in massive multi-user multiple-input multiple-output (MU-MIMO) wireless systems. We first provide simple conditions on the antenna configuration for which the well-known linear minimum mean-square error (L-MMSE) equalizer provides near-optimal spectral efficiency, and we analyze its performance in the presence of parameter mismatches in the signal and/or noise powers. We then propose a novel, optimally-tuned NOnParametric Equalizer (NOPE) for massive MU-MIMO systems, which avoids knowledge of the transmit signal and noise powers altogether. We show that NOPE achieves the same performance as that of the L-MMSE equalizer in the large-antenna limit, and we demonstrate its efficacy in realistic, finite-dimensional systems. From a practical perspective, NOPE is computationally efficient and avoids dedicated training that is typically required for parameter estimation. 
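As background, the sketch below implements the classical L-MMSE equalizer for a small uplink model y = Hs + n; it explicitly uses the signal and noise powers Es and N0, which is precisely the knowledge NOPE is designed to avoid. The dimensions and power levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Uplink model with B base-station antennas and U single-antenna users: y = H s + n.
B, U, Es, N0 = 64, 8, 1.0, 0.1
H = (rng.normal(size=(B, U)) + 1j * rng.normal(size=(B, U))) / np.sqrt(2)
s = (np.sign(rng.normal(size=U)) + 1j * np.sign(rng.normal(size=U))) / np.sqrt(2)  # QPSK-like symbols
y = H @ s + np.sqrt(N0 / 2) * (rng.normal(size=B) + 1j * rng.normal(size=B))

# Classical L-MMSE equalizer: W = (H^H H + (N0/Es) I)^{-1} H^H,
# which requires knowledge of Es and N0.
W = np.linalg.solve(H.conj().T @ H + (N0 / Es) * np.eye(U), H.conj().T)
s_hat = W @ y
print("per-user symbol error magnitude:", np.abs(s_hat - s))
```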
  5. A recently proposed SLOPE estimator [6] has been shown to adaptively achieve the minimax $\ell_2$ estimation rate under high-dimensional sparse linear regression models [25]. Such minimax optimality holds in the regime where the sparsity level $k$, sample size $n$, and dimension $p$ satisfy $k/p\rightarrow 0$, $k\log p/n\rightarrow 0$. In this paper, we characterize the estimation error of SLOPE under the complementary regime where both $k$ and $n$ scale linearly with $p$, and provide new insights into the performance of SLOPE estimators. We first derive a concentration inequality for the finite-sample mean square error (MSE) of SLOPE. The quantity that the MSE concentrates around takes a complicated and implicit form. With a delicate analysis of this quantity, we prove that among all SLOPE estimators, LASSO is optimal for estimating $k$-sparse parameter vectors that do not have tied nonzero components in the low-noise scenario. On the other hand, in the large-noise scenario, the family of SLOPE estimators is sub-optimal compared with bridge regression such as the Ridge estimator.
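To make the LASSO connection concrete, the sketch below solves the LASSO (the SLOPE estimator with a constant regularization sequence) by plain proximal-gradient descent (ISTA) on synthetic sparse data; the step size, regularization level, and problem sizes are illustrative rather than the tuned values analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic k-sparse regression problem: y = X beta + noise.
n, p, k, lam = 200, 400, 10, 0.1
X = rng.normal(size=(n, p)) / np.sqrt(n)
beta = np.zeros(p); beta[:k] = 3 * rng.normal(size=k)
y = X @ beta + 0.5 * rng.normal(size=n)

# ISTA: gradient step on the squared loss, then the soft-thresholding prox
# of the (constant-weight) sorted-L1 penalty, i.e. the LASSO prox.
step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1/L for the smooth part
b = np.zeros(p)
for _ in range(500):
    z = b - step * (X.T @ (X @ b - y))
    b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("estimation MSE:", np.mean((b - beta) ** 2))
```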