

Title: Minimax Supervised Clustering in the Anisotropic Gaussian Mixture Model: A new take on Robust Interpolation
We study the supervised clustering problem under the two-component anisotropic Gaussian mixture model in high dimensions in the non-asymptotic setting. We first derive a lower bound and a matching upper bound for the minimax risk of clustering in this framework. We also show that, in the high-dimensional regime, the linear discriminant analysis (LDA) classifier is sub-optimal in the minimax sense. Next, we characterize precisely the risk of regularized supervised least squares classifiers under $\ell_2$ regularization. We deduce that the interpolating solution (the solution with zero training error) may outperform the regularized classifier under mild assumptions on the covariance structure of the noise. Our analysis also shows that interpolation can be robust to corruption in the covariance of the noise when the signal is aligned with the "clean" part of the covariance, for a suitably defined notion of alignment. To the best of our knowledge, this peculiar phenomenon has not yet been investigated in the rapidly growing literature on interpolation. We conclude that interpolation is not only benign but can also be optimal and, in some cases, robust.
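As a rough numerical illustration of the regularized-versus-interpolating comparison, the sketch below simulates the model $x = y\mu + \xi$ with $\xi \sim \mathcal{N}(0,\Sigma)$ for a diagonal anisotropic $\Sigma$, and contrasts a ridge-regularized least squares classifier with the minimum-norm interpolator. The dimensions, noise spectrum, and signal direction are illustrative choices, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 50, 500                       # high-dimensional regime: d >> n
    mu = np.zeros(d); mu[0] = 3.0        # signal aligned with the top noise direction
    spectrum = np.linspace(2.0, 0.1, d)  # anisotropic (diagonal) noise covariance

    def sample(m):
        y = rng.choice([-1.0, 1.0], size=m)
        noise = rng.standard_normal((m, d)) * np.sqrt(spectrum)
        return y[:, None] * mu + noise, y

    Xtr, ytr = sample(n)
    Xte, yte = sample(5000)

    def test_error(w):
        return np.mean(np.sign(Xte @ w) != yte)

    # Ridge-regularized least squares: w = (X'X + lam*n*I)^{-1} X'y
    for lam in [10.0, 1.0, 0.1]:
        w = np.linalg.solve(Xtr.T @ Xtr + lam * n * np.eye(d), Xtr.T @ ytr)
        print(f"ridge lam={lam:5.1f}: test error = {test_error(w):.3f}")

    # Minimum-norm interpolator (zero training error since d > n): w = X^+ y
    w0 = np.linalg.pinv(Xtr) @ ytr
    print(f"interpolator:       test error = {test_error(w0):.3f}")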
Award ID(s):
1908905
NSF-PAR ID:
10293465
Author(s) / Creator(s):
Date Published:
Journal Name:
Technical report
ISSN:
0109-1344
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Summary

    For high-dimensional classification, it is well known that naively applying the Fisher discriminant rule leads to poor results due to diverging spectra and the accumulation of noise. Researchers have therefore proposed independence rules to circumvent the diverging spectra, and sparse independence rules to mitigate the accumulation of noise. However, in biological applications a group of correlated genes is often responsible for clinical outcomes, and using the covariance information can significantly reduce misclassification rates. In theory, the extent of such error-rate reductions is revealed by comparing the misclassification rates of the Fisher discriminant rule and the independence rule. To materialize the gain on the basis of finite samples, a regularized optimal affine discriminant (ROAD) is proposed. The ROAD selects an increasing number of features as the regularization relaxes; further benefits can be achieved by employing a screening method to narrow the feature pool before applying ROAD. An efficient constrained coordinate descent algorithm is also developed to solve the associated optimization problems. Sampling properties of oracle type are established. Simulation studies and real data analysis support our theoretical results and demonstrate the advantages of the new classification procedure under a variety of correlation structures. A delicate result on continuous piecewise-linear solution paths for the ROAD optimization problem at the population level justifies the linear interpolation used in the constrained coordinate descent algorithm.
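    At the population level, ROAD solves $\min_w w^\top \Sigma w$ subject to $w^\top \mu_d = 1$ and $\|w\|_1 \le c$, where $\mu_d$ is half the difference of the class means. The sketch below runs coordinate descent with soft-thresholding on a quadratic-penalty surrogate of this program; the surrogate form, the parameter names (gamma, lam), and the fixed iteration count are our illustrative choices, not the authors' exact algorithm.

        import numpy as np

        def soft_threshold(z, t):
            return np.sign(z) * max(abs(z) - t, 0.0)

        def road_cd(Sigma, mu_d, lam, gamma=10.0, n_iter=200):
            # Coordinate descent on the surrogate
            #   0.5 * w'Sigma w + (gamma/2) * (w'mu_d - 1)^2 + lam * ||w||_1,
            # where gamma enforces w'mu_d = 1 and lam plays the role of the
            # l1 budget c. Each coordinate update is a closed-form soft-threshold.
            d = len(mu_d)
            w = np.zeros(d)
            for _ in range(n_iter):
                for j in range(d):
                    r = w @ mu_d - w[j] * mu_d[j]          # w'mu_d without coordinate j
                    s = Sigma[j] @ w - Sigma[j, j] * w[j]  # (Sigma w)_j without coordinate j
                    a = Sigma[j, j] + gamma * mu_d[j] ** 2
                    b = s + gamma * mu_d[j] * (r - 1.0)
                    w[j] = soft_threshold(-b, lam) / a
            return w

    Shrinking lam lets more coordinates enter the support, matching the statement that ROAD selects an increasing number of features as the regularization relaxes.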

  2. Table of Contents:
     Foreword by the CI 2016 Workshop Chairs (p. vi)
     Foreword by the CI 2016 Steering Committee (p. viii)
     List of Organizing Committee (p. x)
     List of Registered Participants (p. xi)
     Acknowledgement of Sponsors (p. xiv)
     Hackathon and Workshop Agenda (p. xv)
     Hackathon Summary (p. xviii)
     Invited talks: abstracts and links to presentations (p. xxi)
     Proceedings: 34 short research papers (pp. 1-135)

     Papers:
     1. BAYESIAN MODELS FOR CLIMATE RECONSTRUCTION FROM POLLEN RECORDS (p. 1). Lasse Holmström, Liisa Ilvonen, Heikki Seppä, Siim Veski
     2. ON INFORMATION CRITERIA FOR DYNAMIC SPATIO-TEMPORAL CLUSTERING (p. 5). Ethan D. Schaeffer, Jeremy M. Testa, Yulia R. Gel, Vyacheslav Lyubchich
     3. DETECTING MULTIVARIATE BIOSPHERE EXTREMES (p. 9). Yanira Guanche García, Erik Rodner, Milan Flach, Sebastian Sippel, Miguel Mahecha, Joachim Denzler
     4. SPATIO-TEMPORAL GENERATIVE MODELS FOR RAINFALL OVER INDIA (p. 13). Adway Mitra
     5. A NONPARAMETRIC COPULA BASED BIAS CORRECTION METHOD FOR STATISTICAL DOWNSCALING (p. 17). Yi Li, Adam Ding, Jennifer Dy
     6. DETECTING AND PREDICTING BEAUTIFUL SUNSETS USING SOCIAL MEDIA DATA (p. 21). Emma Pierson
     7. OCEANTEA: EXPLORING OCEAN-DERIVED CLIMATE DATA USING MICROSERVICES (p. 25). Arne N. Johanson, Sascha Flögel, Wolf-Christian Dullo, Wilhelm Hasselbring
     8. IMPROVED ANALYSIS OF EARTH SYSTEM MODELS AND OBSERVATIONS USING SIMPLE CLIMATE MODELS (p. 29). Balu Nadiga, Nathan Urban
     9. SYNERGY AND ANALOGY BETWEEN 15 YEARS OF MICROWAVE SST AND ALONG-TRACK SSH (p. 33). Pierre Tandeo, Aitor Atencia, Cristina Gonzalez-Haro
     10. PREDICTING EXECUTION TIME OF CLIMATE-DRIVEN ECOLOGICAL FORECASTING MODELS (p. 37). Scott Farley, John W. Williams
     11. SPATIOTEMPORAL ANALYSIS OF SEASONAL PRECIPITATION OVER US USING CO-CLUSTERING (p. 41). Mohammad Gorji-Sefidmazgi, Clayton T. Morrison
     12. PREDICTION OF EXTREME RAINFALL USING HYBRID CONVOLUTIONAL-LONG SHORT TERM MEMORY NETWORKS (p. 45). Sulagna Gope, Sudeshna Sarkar, Pabitra Mitra
     13. SPATIOTEMPORAL PATTERN EXTRACTION WITH DATA-DRIVEN KOOPMAN OPERATORS FOR CONVECTIVELY COUPLED EQUATORIAL WAVES (p. 49). Joanna Slawinska, Dimitrios Giannakis
     14. COVARIANCE STRUCTURE ANALYSIS OF CLIMATE MODEL OUTPUT (p. 53). Chintan Dalal, Doug Nychka, Claudia Tebaldi
     15. SIMPLE AND EFFICIENT TENSOR REGRESSION FOR SPATIOTEMPORAL FORECASTING (p. 57). Rose Yu, Yan Liu
     16. TRACKING OF TROPICAL INTRASEASONAL CONVECTIVE ANOMALIES (p. 61). Bohar Singh, James L. Kinter
     17. ANALYSIS OF AMAZON DROUGHTS USING SUPERVISED KERNEL PRINCIPAL COMPONENT ANALYSIS (p. 65). Carlos H. R. Lima, Amir AghaKouchak
     18. A BAYESIAN PREDICTIVE ANALYSIS OF DAILY PRECIPITATION DATA (p. 69). Sai K. Popuri, Nagaraj K. Neerchal, Amita Mehta
     19. INCORPORATING PRIOR KNOWLEDGE IN SPATIO-TEMPORAL NEURAL NETWORK FOR CLIMATIC DATA (p. 73). Arthur Pajot, Ali Ziat, Ludovic Denoyer, Patrick Gallinari
     20. DIMENSIONALITY-REDUCTION OF CLIMATE DATA USING DEEP AUTOENCODERS (p. 77). Juan A. Saenz, Nicholas Lubbers, Nathan M. Urban
     21. MAPPING PLANTATION IN INDONESIA (p. 81). Xiaowei Jia, Ankush Khandelwal, James Gerber, Kimberly Carlson, Paul West, Vipin Kumar
     22. FROM CLIMATE DATA TO A WEIGHTED NETWORK BETWEEN FUNCTIONAL DOMAINS (p. 85). Ilias Fountalis, Annalisa Bracco, Bistra Dilkina, Constantine Dovrolis
     23. EMPLOYING SOFTWARE ENGINEERING PRINCIPLES TO ENHANCE MANAGEMENT OF CLIMATOLOGICAL DATASETS FOR CORAL REEF ANALYSIS (p. 89). Mark Jenne, M.M. Dalkilic, Claudia Johnson
     24. PROFILER GUIDED MANUAL OPTIMIZATION FOR ACCELERATING CHOLESKY DECOMPOSITION ON R ENVIRONMENT (p. 93). V.B. Ramakrishnaiah, R.P. Kumar, J. Paige, D. Hammerling, D. Nychka
     25. GLOBAL MONITORING OF SURFACE WATER EXTENT DYNAMICS USING SATELLITE DATA (p. 97). Anuj Karpatne, Ankush Khandelwal, Vipin Kumar
     26. TOWARD QUANTIFYING TROPICAL CYCLONE RISK USING DIAGNOSTIC INDICES (p. 101). Erica M. Staehling, Ryan E. Truchelut
     27. OPTIMAL TROPICAL CYCLONE INTENSITY ESTIMATES WITH UNCERTAINTY FROM BEST TRACK DATA (p. 105). Suz Tolwinski-Ward
     28. EXTREME WEATHER PATTERN DETECTION USING DEEP CONVOLUTIONAL NEURAL NETWORK (p. 109). Yunjie Liu, Evan Racah, Prabhat, Amir Khosrowshahi, David Lavers, Kenneth Kunkel, Michael Wehner, William Collins
     29. INFORMATION TRANSFER ACROSS TEMPORAL SCALES IN ATMOSPHERIC DYNAMICS (p. 113). Nikola Jajcay, Milan Paluš
     30. IDENTIFYING PRECIPITATION REGIMES IN CHINA USING MODEL-BASED CLUSTERING OF SPATIAL FUNCTIONAL DATA (p. 117). Haozhe Zhang, Zhengyuan Zhu, Shuiqing Yin
     31. RELATIONAL RECURRENT NEURAL NETWORKS FOR SPATIOTEMPORAL INTERPOLATION FROM MULTI-RESOLUTION CLIMATE DATA (p. 121). Guangyu Li, Yan Liu
     32. OBJECTIVE SELECTION OF ENSEMBLE BOUNDARY CONDITIONS FOR CLIMATE DOWNSCALING (p. 124). Andrew Rhines, Naomi Goldenson
     33. LONG-LEAD PREDICTION OF EXTREME PRECIPITATION CLUSTER VIA A SPATIO-TEMPORAL CONVOLUTIONAL NEURAL NETWORK (p. 128). Yong Zhuang, Wei Ding
     34. MULTIPLE INSTANCE LEARNING FOR BURNED AREA MAPPING USING MULTI-TEMPORAL REFLECTANCE DATA (p. 132). Guruprasad Nayak, Varun Mithal, Vipin Kumar
  3. Daumé III, Hal; Singh, Aarti (Eds.)
    Learning with noisy labels is a common challenge in supervised learning. Existing approaches often require practitioners to specify noise rates, i.e., parameters controlling the severity of label noise in the problem, and these specifications are either assumed to be given or estimated in additional steps. In this work, we introduce a new family of loss functions, which we call peer loss functions, that enables learning from noisy labels without requiring a priori specification of the noise rates. Peer loss functions work within the standard empirical risk minimization (ERM) framework. We show that, under mild conditions, performing ERM with peer loss functions on the noisy data leads to an optimal or near-optimal classifier, as if ERM had been performed on the clean training data, to which we do not have access. We pair our results with an extensive set of experiments. Peer loss provides a way to simplify model development when facing potentially noisy training labels, and can be promoted as a robust candidate loss function in such situations.
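    Concretely, the peer loss for sample $n$ subtracts the loss evaluated on a randomly mispaired feature/label pair: $\ell_{peer}(f(x_n), \tilde y_n) = \ell(f(x_n), \tilde y_n) - \ell(f(x_{n_1}), \tilde y_{n_2})$, with $n_1, n_2$ drawn independently. A minimal sketch for a binary task with labels in $\{-1, +1\}$; the logistic surrogate is our illustrative choice of base loss.

        import numpy as np

        def logistic_loss(score, y):          # y in {-1, +1}
            return np.log1p(np.exp(-y * score))

        def peer_loss(scores, y_noisy, rng):
            # l(f(x_n), y_n) minus the same loss on a randomly drawn peer score
            # and an independently drawn peer label; the second term offsets the
            # bias introduced by the label noise in expectation.
            n = len(scores)
            i = rng.integers(0, n, size=n)    # peer indices for the features
            j = rng.integers(0, n, size=n)    # independent peer indices for the labels
            return logistic_loss(scores, y_noisy) - logistic_loss(scores[i], y_noisy[j])

    Under the paper's conditions, the expected peer loss on the noisy distribution is an affine transformation of the expected loss on clean labels, which is why no noise-rate estimates are needed.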
  4. A popular assumption for out-of-distribution generalization is that the training data comprises subdatasets, each drawn from a distinct distribution; the goal is then to “interpolate” these distributions and “extrapolate” beyond them—this objective is broadly known as domain generalization. A common belief is that ERM can interpolate but not extrapolate and that the latter task is considerably more difficult, but these claims are vague and lack formal justification. In this work, we recast generalization over sub-groups as an online game between a player minimizing risk and an adversary presenting new test distributions. Under an existing notion of inter- and extrapolation based on reweighting of sub-group likelihoods, we rigorously demonstrate that extrapolation is computationally much harder than interpolation, though their statistical complexity is not significantly different. Furthermore, we show that ERM—or a noisy variant—is provably minimax-optimal for both tasks. Our framework presents a new avenue for the formal analysis of domain generalization algorithms which may be of independent interest. 
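    To make the reweighting notion concrete: in the interpolation regime the adversary may present any convex reweighting of the training sub-groups, so for a fixed classifier the value of the game reduces to its worst sub-group risk. A toy computation (the per-group risks are made-up numbers, and this is our illustration rather than the paper's construction):

        import numpy as np

        # Made-up per-subgroup risks of one fixed classifier.
        group_risks = np.array([0.10, 0.15, 0.30])

        # Interpolation: the test distribution is a mixture q of the training
        # sub-groups, so the adversary's best response against this classifier is
        #   max_{q in simplex} sum_k q_k * risk_k = max_k risk_k.
        print("worst-case interpolated risk:", group_risks.max())

        # Sanity check on random mixtures: none can exceed the max group risk.
        rng = np.random.default_rng(0)
        q = rng.dirichlet(np.ones(3), size=1000)
        assert (q @ group_risks).max() <= group_risks.max() + 1e-12

    Extrapolation, by contrast, lets the adversary reweight sub-group likelihoods beyond this convex hull, which is where the computational hardness gap arises.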
    High-dimensional classification is a fundamentally important research problem in high-dimensional data analysis. In this paper, we derive a nonasymptotic rate for the minimax excess misclassification risk when the feature dimension diverges exponentially with the sample size and the Bayes classifier possesses a complicated modular structure. We also show that classifiers based on deep neural networks can attain this rate and hence are minimax optimal.
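    For reference, the quantity in question is the standard minimax excess misclassification risk: writing $R(f) = \mathbb{P}(f(X) \neq Y)$ and $f^*$ for the Bayes classifier, the excess risk of a data-driven classifier $\hat{f}$ is $\mathcal{E}(\hat{f}) = R(\hat{f}) - R(f^*)$, and the minimax rate is that of $\inf_{\hat{f}} \sup_{P \in \mathcal{P}} \mathbb{E}_P[\mathcal{E}(\hat{f})]$ over the posited class of distributions $\mathcal{P}$.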
