Title: Self-relevance effects and label choice: Strong variations in label-matching performance due to non-self-relevant factors
Merely associating one’s self with a stimulus may be enough to enhance performance in a label-matching paradigm (Sui, He, & Humphreys, 2012), implying prioritized processing of self-relevant stimuli. For instance, labeling a square as SELF and a circle as OTHER yields speeded performance when verifying square-SELF compared with circle-OTHER label matches. The precise causes of such effects are unclear. We propose that prioritized processing of label-matches can occur for reasons other than self-relevance. Here, we employ the label-matching paradigm to show similar benefits for non-self-relevant labels (SNAKE, FROG, and GREG) over a frequently employed, non-self-relevant control label (OTHER). These benefits suggest the possibility that self-relevance effects in the label-matching paradigm may be confounded with other properties of labels that lead to relative performance benefits, such as concreteness. The size of self-relevance effects may be overestimated in prior work employing the label-matching paradigm, which calls for greater care in the choice of control labels to determine the true magnitude of self-relevance effects. Our results additionally indicate the possibility of a powerful effect of concreteness (and related properties) on associative memory performance.
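To make the paradigm concrete, here is a minimal Python sketch of the shape-label verification task structure and the key reaction-time comparison. The shapes, labels, and trial counts are illustrative placeholders, not the study's actual materials or analysis.

```python
import itertools
import random

# Hypothetical shape-label pairings in the style of Sui et al. (2012);
# the specific shapes and trial counts here are illustrative only.
PAIRINGS = {"square": "SELF", "circle": "OTHER"}

def make_trials(n_per_cell=10, seed=0):
    """Build matched and mismatched shape-label verification trials."""
    rng = random.Random(seed)
    shapes, labels = list(PAIRINGS), list(PAIRINGS.values())
    trials = []
    for shape, label in itertools.product(shapes, labels):
        is_match = PAIRINGS[shape] == label
        trials += [(shape, label, is_match)] * n_per_cell
    rng.shuffle(trials)
    return trials

# The key comparison: mean RT for verifying square-SELF matches versus
# circle-OTHER matches, given per-trial RTs collected in the experiment.
def match_advantage(rts):  # rts: list of (shape, label, is_match, rt_ms)
    def mean_rt(shape):
        xs = [rt for s, l, m, rt in rts if s == shape and m]
        return sum(xs) / len(xs)
    return mean_rt("circle") - mean_rt("square")  # positive => SELF benefit
```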
Award ID(s): 1632849
PAR ID: 10025460
Author(s) / Creator(s):
Date Published:
Journal Name: Attention, Perception, & Psychophysics
ISSN: 1943-3921
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We demonstrate a self-folding paper robot driven by capillary fluid flow. When water is sprayed on fluidic channels patterned on paper, the 2-D sheet of paper can be controllably self-folded into various 3-D structures: half-oval, circle, round-edge square, triangle, half-circle, and table. The self-folding paper sheet can be readily fabricated via a double-sided wax printing method, forming a bilayer structure of the fluidic channel and the hydrophobic wax, in which the two layers have different swelling/shrinking properties. The patterned paper folds when wetted and unfolds as the water evaporates, without being mechanically manipulated by external forces or moments. Finally, we create a paper gripper based on this self-folding actuation, capable of conveying a lightweight object. This report demonstrates the potential of paper microfluidics for self-folding actuation and soft robotics.
  2. A classical problem in causal inference is that of matching, where treatment units need to be matched to control units based on covariate information. In this work, we propose a method that computes high-quality almost-exact matches for high-dimensional categorical datasets. This method, called FLAME (Fast Large-scale Almost Matching Exactly), learns a distance metric for matching using a hold-out training dataset. To perform matching efficiently for large datasets, FLAME leverages techniques that are natural for query processing in the area of database management, and two implementations of FLAME are provided: the first uses SQL queries and the second uses bit-vector techniques. The algorithm starts by constructing matches of the highest quality (exact matches on all covariates) and successively eliminates variables in order to match exactly on as many variables as possible, while still maintaining interpretable, high-quality matches and balance between treatment and control groups. We leverage these high-quality matches to estimate conditional average treatment effects (CATEs). Our experiments show that FLAME scales to huge datasets with millions of observations where existing state-of-the-art methods fail, and that it achieves significantly better performance than other matching methods. (A sketch of this elimination loop appears after this list.)
  3. Label smoothing (LS) is an arising learning paradigm that uses the positively weighted average of both the hard training labels and uniformly distributed soft labels. It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model. Later it was reported LS even helps with improving robustness when learning with noisy labels. However, we observed that the advantage of LS vanishes when we operate in a high label noise regime. Intuitively speaking, this is due to the increased entropy of ℙ(noisy label|X) when the noise rate is high, in which case, further applying LS tends to "over-smooth" the estimated posterior. We proceeded to discover that several learning-with-noisy-labels solutions in the literature instead relate more closely to negative/not label smoothing (NLS), which acts counter to LS and defines as using a negative weight to combine the hard and soft labels! We provide understandings for the properties of LS and NLS when learning with noisy labels. Among other established properties, we theoretically show NLS is considered more beneficial when the label noise rates are high. We provide extensive experimental results on multiple benchmarks to support our findings too. 
    more » « less
  4. Learning from label proportions (LLP) is a weakly supervised classification problem where data points are grouped into bags, and the label proportions within each bag are observed instead of the instance-level labels. The task is to learn a classifier to predict the labels of future individual instances. Prior work on LLP for multi-class data has yet to develop a theoretically grounded algorithm. In this work, we propose an approach to LLP based on a reduction to learning with label noise, using the forward correction (FC) loss of Patrini et al. [30]. We establish an excess risk bound and generalization error analysis for our approach, while also extending the theory of the FC loss, which may be of independent interest. Our approach demonstrates improved empirical performance in deep learning scenarios across multiple datasets and architectures, compared to the leading methods. (A sketch of the FC loss appears after this list.)
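For the FLAME paper (item 2), here is a minimal sketch of the successive covariate-elimination idea, assuming a pandas DataFrame with categorical covariate columns and a binary treated column. The real FLAME selects which covariate to drop via a learned hold-out match-quality metric and runs on SQL or bit-vector backends; this toy version simply drops covariates in a fixed order.

```python
import pandas as pd

def almost_exact_match(df: pd.DataFrame, covariates: list):
    """Greedy sketch of FLAME-style almost-exact matching.

    Start with exact matches on all covariates; units that remain
    unmatched are retried on successively smaller covariate sets.
    """
    matched_groups = []          # (covariates used, row indices) pairs
    unmatched = df.copy()
    cols = list(covariates)
    while cols and not unmatched.empty:
        for _, group in unmatched.groupby(cols):
            # A valid matched group must contain both treated and control units.
            if group["treated"].nunique() == 2:
                matched_groups.append((tuple(cols), list(group.index)))
                unmatched = unmatched.drop(group.index)
        cols.pop()               # coarsen the match and retry
    return matched_groups, unmatched
```

Within each returned group, the difference between the mean treated and mean control outcomes gives a CATE estimate for that covariate profile.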
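For the label-smoothing paper (item 3), a minimal NumPy sketch shows that LS and NLS are the same formula with the sign of the smoothing weight flipped. The function name and the choice of cross-entropy are illustrative.

```python
import numpy as np

def smoothed_cross_entropy(logits, labels, smooth_rate):
    """Cross-entropy against smoothed targets.

    smooth_rate > 0: label smoothing (LS) -- targets are pulled toward
                     the uniform distribution.
    smooth_rate = 0: ordinary hard-label cross-entropy.
    smooth_rate < 0: negative label smoothing (NLS) -- the hard label is
                     up-weighted and the uniform component subtracted.
    """
    n, k = logits.shape
    z = logits - logits.max(axis=1, keepdims=True)          # stable log-softmax
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    one_hot = np.eye(k)[labels]
    targets = (1.0 - smooth_rate) * one_hot + smooth_rate / k
    return -(targets * log_p).sum(axis=1).mean()
```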
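For the LLP paper (item 4; a verbatim duplicate of this abstract was listed and is omitted), here is a sketch of the forward-correction cross-entropy of Patrini et al. that the proposed reduction builds on. The transition matrix T is assumed given; how the LLP reduction constructs it from bag label proportions is specific to that work and not shown here.

```python
import numpy as np

def forward_corrected_ce(clean_probs, noisy_labels, T):
    """Forward-correction loss in the style of Patrini et al.

    clean_probs:  (n, k) model posterior over clean labels.
    noisy_labels: (n,) observed noisy labels.
    T:            (k, k) transition matrix, T[i, j] = P(noisy j | clean i).

    The clean posterior is pushed through T to get the predicted noisy
    posterior, and cross-entropy is taken against the observed labels.
    """
    noisy_probs = clean_probs @ T   # row n, col j: sum_i p(i) * T[i, j]
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(np.clip(picked, 1e-12, None)).mean()
```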