Search for: All records

Award ID contains: 1827374

  1. Abstract

    We examined the relationship between metaphor comprehension and verbal analogical reasoning in young adults who were either typically developing (TD) or diagnosed with Autism Spectrum Disorder (ASD). The ASD sample was highly educated and high in verbal ability, and closely matched to a subset of TD participants on age, gender, educational background, and verbal ability. Additional TD participants with a broader range of abilities were also tested. Each participant solved sets of verbal analogies and metaphors in verification formats, allowing measurement of both accuracy and reaction times. Measures of individual differences in vocabulary, verbal working memory, and autistic traits were also obtained. Accuracy for both the verbal analogy and the metaphor task was very similar across the ASD and matched TD groups. However, reaction times on both tasks were longer for the ASD group. Additionally, stronger correlations between verbal analogical reasoning and working memory capacity in the ASD group indicated that processing verbal analogies was more effortful for them. In the case of both groups, accuracy on the metaphor and analogy tasks was correlated. A mediation analysis revealed that after controlling for working memory capacity, the inter‐task correlation could be accounted for by the mediating variable of vocabulary knowledge, suggesting that the primary common mechanisms linking the two tasks involve language skills.

     
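To make the statistical logic concrete, here is a minimal sketch of a mediation analysis of the kind described above, run on simulated data with ordinary least-squares regressions. The variable names, sample size, and effect sizes are hypothetical and are not taken from the study.

```python
# A minimal sketch of a mediation analysis (hypothetical variable names and effects,
# not taken from the study above): does vocabulary mediate the link between
# analogy accuracy and metaphor accuracy after controlling for working memory?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
wm = rng.normal(size=n)                                                 # working memory capacity
vocab = 0.6 * wm + rng.normal(scale=0.8, size=n)                        # vocabulary knowledge
analogy = 0.5 * vocab + 0.3 * wm + rng.normal(scale=0.8, size=n)        # analogy accuracy
metaphor = 0.6 * vocab + 0.1 * analogy + rng.normal(scale=0.8, size=n)  # metaphor accuracy
df = pd.DataFrame(dict(wm=wm, vocab=vocab, analogy=analogy, metaphor=metaphor))

# Step 1: total effect of analogy on metaphor, controlling for working memory.
total = smf.ols("metaphor ~ analogy + wm", data=df).fit()
# Step 2: effect of analogy on the candidate mediator (vocabulary).
a_path = smf.ols("vocab ~ analogy + wm", data=df).fit()
# Step 3: direct effect of analogy once the mediator is included.
direct = smf.ols("metaphor ~ analogy + vocab + wm", data=df).fit()

print("total effect:   ", round(total.params["analogy"], 3))
print("direct effect:  ", round(direct.params["analogy"], 3))
print("indirect (a*b): ", round(a_path.params["analogy"] * direct.params["vocab"], 3))
```

The indirect effect (the a*b product) is what a mediation analysis attributes to the mediator; in practice, bootstrapped confidence intervals would be used to test it.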
  2. Abstract

    Analogy is a powerful tool for fostering conceptual understanding and transfer in STEM and other fields. Well‐constructed analogical comparisons focus attention on the causal‐relational structure of STEM concepts, and provide a powerful capability to draw inferences based on a well‐understood source domain that can be applied to a novel target domain. However, analogy must be applied with consideration to students' prior knowledge and cognitive resources. We briefly review theoretical and empirical support for incorporating analogy into education, and recommend five general principles to guide its application so as to maximize the potential benefits. For analogies to be effective, instructors should use well‐understood source analogs and explain correspondences fully; use visuospatial and verbal supports to emphasize shared structure among analogs; discuss the alignment between semantic and formal representations; reduce extraneous cognitive load imposed by analogical comparison; and encourage generation of inferences when students have some proficiency with the material. These principles can be applied flexibly to topics in a wide variety of domains.

  3. Delcea, Camelia (Ed.)
    Despite widespread communication of the health risks associated with the COVID-19 virus, many Americans underestimated its risks and were antagonistic regarding preventative measures. Political partisanship has been linked to diverging attitudes towards the virus, but the cognitive processes underlying this divergence remain unclear. Bayesian models fit to data gathered through two preregistered online surveys, administered before (March 13, 2020, N = 850) and during the first wave (April-May, 2020, N = 1610) of cases in the United States, reveal two preexisting forms of distrust (distrust in Democratic politicians and in medical scientists) that drove initial skepticism about the virus. During the first wave of cases, additional factors came into play, suggesting that skeptical attitudes became more deeply embedded within a complex network of auxiliary beliefs. These findings highlight how mechanisms that enhance cognitive coherence can drive anti-science attitudes.
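The abstract does not spell out the model structure, so the following is only a loose illustration of the general approach: a Bayesian regression, fit with PyMC to simulated survey responses, in which preexisting distrust predicts perceived risk. All variable names, priors, and effects are placeholders.

```python
# Illustrative only: a simple Bayesian regression in the spirit of the study above,
# fit to simulated survey data. Variable names, priors, and effects are hypothetical.
import arviz as az
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 500
distrust_pol = rng.normal(size=n)   # standardized distrust of Democratic politicians
distrust_sci = rng.normal(size=n)   # standardized distrust of medical scientists
# Simulated perceived-risk ratings: lower when distrust is higher.
risk = 5.0 - 0.8 * distrust_pol - 0.6 * distrust_sci + rng.normal(scale=1.0, size=n)

with pm.Model():
    intercept = pm.Normal("intercept", mu=0.0, sigma=5.0)
    b_pol = pm.Normal("b_distrust_politicians", mu=0.0, sigma=1.0)
    b_sci = pm.Normal("b_distrust_scientists", mu=0.0, sigma=1.0)
    sigma = pm.HalfNormal("sigma", sigma=2.0)
    mu = intercept + b_pol * distrust_pol + b_sci * distrust_sci
    pm.Normal("risk", mu=mu, sigma=sigma, observed=risk)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(az.summary(trace, var_names=["b_distrust_politicians", "b_distrust_scientists"]))
```

Negative posterior estimates for the two distrust coefficients would correspond to the pattern the study describes: greater preexisting distrust, lower perceived risk.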
  4. Using poetic metaphors in the Serbian language, we identified systematic variations in the impact of fluid and crystallized intelligence on comprehension of metaphors that varied in rated aptness and familiarity. Overall, comprehension scores were higher for metaphors that were high rather than low in aptness, and high rather than low in familiarity. A measure of crystallized intelligence was a robust predictor of comprehension across the full range of metaphors, but especially for those that were either relatively unfamiliar or more apt. In contrast, individual differences associated with fluid intelligence were clearly found only for metaphors that were low in aptness. Superior verbal knowledge appears to be particularly important when trying to find meaning in novel metaphorical expressions, and also when exploring the rich interpretive potential of apt metaphors. The broad role of crystallized intelligence in metaphor comprehension is consistent with the view that metaphors are largely understood using semantic integration processes continuous with those that operate in understanding literal language.
  5. Public opinion polls have shown that beliefs about climate change have become increasingly polarized in the United States. A popular contemporary form of communication relevant to beliefs about climate change involves digital artifacts known as memes. The present study investigated whether memes can influence the assessment of scientific data about climate change, and whether their impact differs between political liberals and conservatives in the United States. In Study 1, we considered three hypotheses about the potential impact of memes on strongly held politicized beliefs: 1) memes fundamentally serve social functions, and do not actually impact cognitive assessments of objective information; 2) politically incongruent memes will have a “backfire” effect; and 3) memes can indeed change assessments of scientific data about climate change, even for people with strong entering beliefs. We found evidence in support of the hypothesis that memes have the potential to change assessments of scientific information about climate change. Study 2 explored whether different partisan pages that post climate change memes elicit different emotions from their audiences, as well as how climate change is discussed in different ways by those at opposite ends of the political spectrum.
  6. Understanding abstract relations, and reasoning about various instantiations of the same relation, is an important marker in human cognition. Here we focus on the development of understanding of the concept of antonymy. We examined whether four- and five-year-olds (N = 67) are able to complete an analogy task involving antonyms, whether language cues facilitate children’s ability to reason about the antonym relation, and how their performance compares with that of two vector-based computational models. We found that explicit relation labels in the form of a relation phrase (“opposites”) improved performance on the task for five-year-olds but not four-year-olds. Five-year-old (but not four-year-old) children were more accurate for adjective and verb antonyms than for noun antonyms. Two computational models showed substantial variability in performance across different lexical classes, and in some cases fell short of children’s accuracy levels. These results suggest that young children acquire a solid understanding of the abstract relation of opposites, and can generalize it to various instantiations across different lexical classes. These developmental results challenge relation models based on vector semantics, and highlight the importance of examining performance across different parts of speech.
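The two models used in the study are not reproduced here; as a generic illustration of how vector-based models are typically probed with such items, the sketch below scores candidate completions of an antonym analogy (hot : cold :: big : ?) with the classic vector-offset method, using tiny hand-built vectors as stand-ins for real pretrained embeddings.

```python
# A generic sketch of how vector-based models are often tested on analogy items
# like "hot : cold :: big : ?". The toy vectors are placeholders for pretrained
# embeddings; the specific models used in the study are not reproduced here.
import numpy as np

# Toy 3-dimensional "embeddings": [temperature, size, polarity] (illustrative only).
vectors = {
    "hot":   np.array([1.0, 0.0,  1.0]),
    "cold":  np.array([1.0, 0.0, -1.0]),
    "big":   np.array([0.0, 1.0,  1.0]),
    "small": np.array([0.0, 1.0, -1.0]),
    "fast":  np.array([0.0, 0.0,  1.0]),
    "dog":   np.array([0.2, 0.3,  0.5]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, candidates):
    """Rank candidates for a:b :: c:? using the vector-offset method."""
    target = vectors[b] - vectors[a] + vectors[c]
    scored = {w: cosine(target, vectors[w]) for w in candidates if w != c}
    return max(scored, key=scored.get), scored

best, scores = solve_analogy("hot", "cold", "big", ["small", "fast", "dog"])
print(best, scores)   # with these toy vectors, "small" ranks highest
```

With real embeddings the same scoring loop applies; the study's point is that such models vary considerably across lexical classes and do not always match children's accuracy.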
  7. A key property of human cognition is its ability to generate novel predictions about unfamiliar situations by completing a partially-specified relation or an analogy. Here, we present a computational model capable of producing generative inferences from relations and analogs. This model, BART-Gen, operates on explicit representations of relations learned by BART (Bayesian Analogy with Relational Transformations), to achieve two related forms of generative inference: reasoning from a single relation, and reasoning from an analog. In the first form, a reasoner completes a partially-specified instance of a stated relation (e.g., robin is a type of ____). In the second, a reasoner completes a target analog based on a stated source analog (e.g., sedan:car :: robin:____). We compare the performance of BART-Gen with that of BERT, a popular model for Natural Language Processing (NLP) that is trained on sentence completion tasks and that does not rely on explicit representations of relations. Across simulations and human experiments, we show that BART-Gen produces more human-like responses for generative inferences from relations and analogs than does the NLP model. These results demonstrate the essential role of explicit relation representations in human generative reasoning. 
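The comparison model (BERT) is queried through its sentence-completion objective; the sketch below shows what such a query can look like using the Hugging Face transformers fill-mask pipeline. This is not the authors' evaluation code, and the prompt wordings are placeholders; BART-Gen itself relies on explicit relation representations and is not reproduced here.

```python
# Not the authors' code: a simple illustration of querying a masked language model
# (BERT) for the kinds of completions described above, via Hugging Face transformers.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Reasoning from a single relation: "robin is a type of ____"
for pred in fill_mask("A robin is a type of [MASK].", top_k=5):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")

# Reasoning from an analog: "sedan:car :: robin:____", phrased as a sentence.
for pred in fill_mask("A sedan is a kind of car, and a robin is a kind of [MASK].", top_k=5):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```

The model returns a probability-ranked list of fillers for the masked position, which can then be compared against human responses to the same prompts.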
  8. Computational models of verbal analogy and relational similarity judgments can employ different types of vector representations of word meanings (embeddings) generated by machine-learning algorithms. An important question is whether human-like relational processing depends on explicit representations of relations (i.e., representations separable from those of the concepts being related), or whether implicit relation representations suffice. Earlier machine-learning models produced static embeddings for individual words, identical across all contexts. However, more recent Large Language Models (LLMs), which use transformer architectures applied to much larger training corpora, are able to produce contextualized embeddings that have the potential to capture implicit knowledge of semantic relations. Here we compare multiple models based on different types of embeddings to human data concerning judgments of relational similarity and solutions of verbal analogy problems. For two datasets, a model that learns explicit representations of relations, Bayesian Analogy with Relational Transformations (BART), captured human performance more successfully than either a model using static embeddings (Word2vec) or models using contextualized embeddings created by LLMs (BERT, RoBERTa, and GPT-2). These findings support the proposal that human thinking depends on representations that separate relations from the concepts they relate. 
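As a concrete reference point, the following sketch implements the kind of difference-vector baseline that static embeddings afford for relational similarity: the relation between two words is represented as the difference of their vectors, and two pairs are compared by the cosine of their difference vectors. It uses pretrained GloVe vectors via gensim's downloader and is not the BART model.

```python
# A generic difference-vector baseline for relational similarity between word pairs.
# Uses pretrained GloVe vectors via gensim's downloader (fetched on first run);
# this is an illustration, not the BART model or the study's evaluation code.
import numpy as np
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")   # ~130 MB download on first use

def relation_vector(a, b):
    """Represent the relation between two words as the difference of their vectors."""
    return wv[b] - wv[a]

def relational_similarity(pair1, pair2):
    """Cosine similarity between the difference vectors of two word pairs."""
    r1, r2 = relation_vector(*pair1), relation_vector(*pair2)
    return float(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))

# Two category-membership pairs are expected to score higher than a mismatched pair.
print(relational_similarity(("robin", "bird"), ("sedan", "car")))
print(relational_similarity(("robin", "bird"), ("hot", "cold")))
```

Contextualized embeddings from LLMs can be substituted for the static lookup, though the study found that neither approach matched the explicit relation representations learned by BART.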