
Title: Predicting implicit attitudes with natural language data
Large-scale language datasets and advances in natural language processing offer opportunities for studying people’s cognitions and behaviors. We show how representations derived from language can be combined with laboratory-based word norms to predict implicit attitudes for diverse concepts. Our approach achieves substantially higher correlations than existing methods. We also show that our approach is more predictive of implicit attitudes than are explicit attitudes, and that it captures variance in implicit attitudes that is largely unexplained by explicit attitudes. Overall, our results shed light on how implicit attitudes can be measured by combining standard psychological data with large-scale language data. In doing so, we pave the way for highly accurate computational modeling of what people think and feel about the world around them.
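The general pipeline the abstract describes, fitting a map from word representations to laboratory word norms and then applying it to score new concepts, can be sketched in miniature. Everything below is an illustrative assumption rather than the paper's actual data or model: the "embeddings" are synthetic random vectors, the norms are synthetic valence ratings, and ridge regression stands in for whatever mapping the authors use.

```python
# Toy sketch: learn a mapping from word embeddings to lab-style valence
# norms, then apply it to score a held-out "concept". All data here are
# synthetic stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Synthetic "embeddings": random noise plus a signal direction whose
# strength encodes each word's (synthetic) valence norm.
signal = rng.normal(size=dim)
norms = rng.uniform(-1.0, 1.0, size=200)           # lab-style ratings
emb = rng.normal(size=(200, dim)) * 0.1 + np.outer(norms, signal)

# Ridge regression from embeddings to norms (closed-form solution).
lam = 1.0
W = np.linalg.solve(emb.T @ emb + lam * np.eye(dim), emb.T @ norms)

# Score a held-out "concept" by projecting its embedding through W.
true_norm = 0.8
test_emb = rng.normal(size=dim) * 0.1 + true_norm * signal
predicted = float(test_emb @ W)
```

With real embeddings and real norm ratings, the same fitted map can be evaluated by correlating its predictions with measured implicit attitudes, which is the kind of comparison the abstract reports.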
Award ID(s):
1847794
PAR ID:
10498433
Author(s) / Creator(s):
;
Publisher / Repository:
PNAS
Date Published:
Journal Name:
Proceedings of the National Academy of Sciences
Volume:
120
Issue:
25
ISSN:
0027-8424
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    In this article, we model implicit attitude measures using our network theory of attitudes. The model rests on the assumption that implicit measures limit attitudinal entropy reduction, because implicit measures represent a measurement outcome that is the result of evaluating the attitude object in a quick and effortless manner. Implicit measures therefore assess attitudes in high entropy states (i.e., inconsistent and unstable states). In a simulation, we illustrate the implications of our network theory for implicit measures. The results of this simulation show a paradoxical result: Implicit measures can provide a more accurate assessment of conflicting evaluative reactions to an attitude object (e.g., evaluative reactions not in line with the dominant evaluative reactions) than explicit measures, because they assess these properties in a noisier and less reliable manner. We conclude that our network theory of attitudes increases the connection between substantive theorizing on attitudes and psychometric properties of implicit measures. 
  2.
    Robust learning in expressive languages with real-world data continues to be a challenging task. Numerous conventional methods appeal to heuristics without any assurances of robustness. While probably approximately correct (PAC) semantics offers strong guarantees, learning explicit representations is not tractable, even in propositional logic. However, recent work on so-called “implicit” learning has shown tremendous promise in terms of obtaining polynomial-time results for fragments of first-order logic. In this work, we extend implicit learning in PAC-Semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework keeps the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this hitherto purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
  3. Three hundred and ninety-one children (195 girls; Mage = 9.56 years) attending Grades 1 and 5 completed implicit and explicit measures of math attitudes and math self-concepts. Math grades were obtained. Multilevel analyses showed that first-grade girls held a strong negative implicit attitude about math, despite no gender differences in math grades or self-reported (explicit) positivity about math. The explicit measures significantly predicted math grades, and implicit attitudes accounted for additional variance in boys. The contrast between the implicit (negativity for girls) and explicit (positivity for girls and boys) effects suggests implicit–explicit dissociations in children, which have also been observed in adults. Early-emerging implicit attitudes may be a foundation for the later development of explicit attitudes and beliefs about math.
  4. We study the problem of aligning large language models (LLMs) with human preference data. Contrastive preference optimization has shown promising results in aligning LLMs with available preference data by optimizing the implicit reward associated with the policy. However, the contrastive objective focuses mainly on the relative values of implicit rewards associated with two responses while ignoring their actual values, resulting in suboptimal alignment with human preferences. To address this limitation, we propose calibrated direct preference optimization (Cal-DPO), a simple yet effective algorithm. We show that substantial improvement in alignment with the given preferences can be achieved simply by calibrating the implicit reward to ensure that the learned implicit rewards are comparable in scale to the ground-truth rewards. We demonstrate the theoretical advantages of Cal-DPO over existing approaches. The results of our experiments on a variety of standard benchmarks show that Cal-DPO remarkably improves off-the-shelf methods.
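The calibration idea in the last abstract can be illustrated with a toy objective. This is a hedged sketch, not the paper's actual Cal-DPO loss: the implicit-reward formula follows standard DPO, while the squared-error calibration term and its weight `alpha` are assumptions made purely for illustration.

```python
# Toy illustration of calibrating implicit rewards. The exact Cal-DPO
# objective is defined in the paper; this version only shows why a
# contrastive term alone ignores absolute reward scale.
import math

def implicit_reward(logp_policy, logp_ref, beta=0.1):
    # DPO-style implicit reward: beta * log(pi(y|x) / pi_ref(y|x)).
    return beta * (logp_policy - logp_ref)

def contrastive_loss(r_chosen, r_rejected):
    # Standard DPO contrastive term: -log sigmoid(r_chosen - r_rejected).
    # Depends only on the reward *difference*, not the absolute values.
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

def calibrated_loss(r_chosen, r_rejected, target_chosen, target_rejected,
                    alpha=1.0):
    # Assumed calibration term: anchor each implicit reward to a target
    # on the ground-truth scale, so absolute values are constrained too.
    cal = (r_chosen - target_chosen) ** 2 + (r_rejected - target_rejected) ** 2
    return contrastive_loss(r_chosen, r_rejected) + alpha * cal
```

With this toy objective, two response pairs with the same reward gap but different absolute scale (e.g., rewards 1.0/0.0 versus 10.0/9.0) receive identical contrastive losses but different calibrated losses, which is the kind of scale sensitivity the abstract argues the contrastive term alone cannot provide.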