Search for: All records

Creators/Authors contains: "Holyoak, K. J."

Note: When you click a Digital Object Identifier (DOI) link, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. When a group member commits wrongdoing, people sometimes assign responsibility and blame not only to the wrongdoer but also to other members of the same group. We examined such assignment of collective responsibility in the context of exploitation of one family by another. Participants were recruited from the United States and South Korea, which are known to vary in cultural norms and endorsement of collectivistic values. Participants in both countries rated the degree to which an agent (grandson) should be held responsible for his grandfather’s exploitation of a victimized family, while varying the closeness of familial connection. Participants’ responsibility judgments showed sensitivity to whether the grandson received financial benefit from the wrongdoer and to the perceived closeness between the grandson and the wrongdoer. Korean participants imposed greater responsibility on the agent than did American participants. Implications for understanding the influence of social norms on moral judgments are discussed.
    Free, publicly-accessible full text available January 1, 2023
  2. We see the external world as consisting not only of objects and their parts, but also of relations that hold between them. Visual analogy, which depends on similarities between relations, provides a clear example of how perception supports reasoning. Here we report an experiment in which we quantitatively measured the human ability to find analogical mappings between parts of different objects, where the objects to be compared were drawn either from the same category (e.g., images of two mammals, such as a dog and a horse), or from two dissimilar categories (e.g., a chair image mapped to a cat image). Humans showed systematic mapping patterns, but with greater variability in mapping responses when objects were drawn from dissimilar categories. We simulated human performance on analogical mapping using a computational model of mapping between 3D objects, visiPAM (visual Probabilistic Analogical Mapping). VisiPAM takes point-cloud representations of two 3D objects as inputs, and outputs the mapping between analogous parts of the two objects. VisiPAM consists of a visual module that constructs structural representations of individual objects, and a reasoning module that identifies a probabilistic mapping between parts of the two 3D objects. Model simulations not only capture the qualitative pattern of human mapping performance across conditions, but also approach human-level reliability in solving visual analogy problems.
    Free, publicly-accessible full text available January 1, 2023
  3. Human reasoning goes beyond knowledge about individual entities, extending to inferences based on relations between entities. Here we focus on the use of relations in verbal analogical mapping, sketching a general approach based on assessing similarity between patterns of semantic relations between words. This approach combines research in artificial intelligence with work in psychology and cognitive science, with the aim of minimizing hand coding of text inputs for reasoning tasks. The computational framework takes as inputs vector representations of individual word meanings, coupled with semantic representations of the relations between words, and uses these inputs to form semantic-relation networks for individual analogues. Analogical mapping is operationalized as graph matching under cognitive and computational constraints. The approach highlights the central role of semantics in analogical mapping.
    Free, publicly-accessible full text available January 1, 2023
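The "graph matching" step described in the abstract above can be sketched in miniature: represent each analogue as a set of (word, relation, word) triples and map nodes by the relation roles they share. The sketch below is a toy illustration only; the triples use the classic solar-system/atom example, and the greedy matcher is a crude stand-in for the constrained graph matching the paper actually proposes.

```python
# Toy sketch: analogical mapping as matching over semantic-relation networks.
# All triples and the greedy strategy are illustrative assumptions, not the
# paper's actual algorithm or data.
source = {("sun", "center-of", "solar system"), ("planet", "orbits", "sun")}
target = {("nucleus", "center-of", "atom"), ("electron", "orbits", "nucleus")}

def node_signature(graph, node):
    """Relations a node participates in, tagged by argument role."""
    return ({(rel, "arg1") for a, rel, b in graph if a == node} |
            {(rel, "arg2") for a, rel, b in graph if b == node})

def best_map(source, target):
    """Greedily pair each source node with the unused target node that
    shares the most relation-role signatures (one-to-one mapping)."""
    s_nodes = {n for a, r, b in source for n in (a, b)}
    t_nodes = {n for a, r, b in target for n in (a, b)}
    mapping, used = {}, set()
    for s in sorted(s_nodes):
        scores = {t: len(node_signature(source, s) & node_signature(target, t))
                  for t in t_nodes - used}
        t = max(scores, key=scores.get)
        mapping[s] = t
        used.add(t)
    return mapping

print(best_map(source, target))
# e.g. maps sun -> nucleus, planet -> electron, solar system -> atom
```

A full model would weight matches by graded relation similarity rather than exact label overlap, and would search mappings jointly rather than greedily.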
  4. Analogy problems involving multiple ordered relations of the same type create mapping ambiguity, requiring some mechanism for relational integration to achieve mapping accuracy. We address the question of whether the integration of ordered relations depends on their logical form alone, or on semantic representations that differ across relation types. We developed a triplet mapping task that provides a basic paradigm to investigate analogical reasoning with simple relational structures. Experimental results showed that mapping performance differed across orderings based on category, linear order, and causal relations, providing evidence that each transitive relation has its own semantic representation. Hence, human analogical mapping of ordered relations does not depend solely on their formal property of transitivity. Instead, human ability to solve mapping problems by integrating relations relies on the semantics of relation representations. We also compared human performance to the performance of several vector-based computational models of analogy. These models performed above chance but fell short of human performance for some relations, highlighting the need for further model development.
    Free, publicly-accessible full text available January 1, 2023
  5. Computational models of verbal analogy and relational similarity judgments can employ different types of vector representations of word meanings (embeddings) generated by machine-learning algorithms. An important question is whether human-like relational processing depends on explicit representations of relations (i.e., representations separable from those of the concepts being related), or whether implicit relation representations suffice. Earlier machine-learning models produced static embeddings for individual words, identical across all contexts. However, more recent Large Language Models (LLMs), which use transformer architectures applied to much larger training corpora, are able to produce contextualized embeddings that have the potential to capture implicit knowledge of semantic relations. Here we compare multiple models based on different types of embeddings to human data concerning judgments of relational similarity and solutions of verbal analogy problems. For two datasets, a model that learns explicit representations of relations, Bayesian Analogy with Relational Transformations (BART), captured human performance more successfully than either a model using static embeddings (Word2vec) or models using contextualized embeddings created by LLMs (BERT, RoBERTa, and GPT-2). These findings support the proposal that human thinking depends on representations that separate relations from the concepts they relate.
    Free, publicly-accessible full text available January 1, 2023
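To make the implicit-relation baseline above concrete: with static embeddings, the relation between two words is often approximated as their vector difference, and relational similarity as the cosine between two such differences. The vectors below are invented toy values standing in for real Word2vec embeddings; this is a minimal sketch of the baseline idea, not any of the compared models.

```python
import numpy as np

# Toy static embeddings (hypothetical values, standing in for Word2vec vectors).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.1]),
    "woman": np.array([0.5, 0.2, 0.8]),
    "car":   np.array([0.1, 0.5, 0.5]),
}

def relation_vector(a, b):
    """Implicit relation representation: the embedding difference b - a."""
    return emb[b] - emb[a]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, candidates):
    """Pick the d maximizing similarity of relation(a,b) to relation(c,d)."""
    r_ab = relation_vector(a, b)
    return max(candidates, key=lambda d: cosine(r_ab, relation_vector(c, d)))

print(solve_analogy("king", "queen", "man", ["woman", "car"]))  # -> woman
```

An explicit-relation model such as BART instead learns a separate representation for the relation itself, which is the contrast the abstract tests.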
  6. Using poetic metaphors in the Serbian language, we identified systematic variations in the impact of fluid and crystallized intelligence on comprehension of metaphors that varied in rated aptness and familiarity. Overall, comprehension scores were higher for metaphors that were high rather than low in aptness, and high rather than low in familiarity. A measure of crystallized intelligence was a robust predictor of comprehension across the full range of metaphors, but especially for those that were either relatively unfamiliar or more apt. In contrast, individual differences associated with fluid intelligence were clearly found only for metaphors that were low in aptness. Superior verbal knowledge appears to be particularly important when trying to find meaning in novel metaphorical expressions, and also when exploring the rich interpretive potential of apt metaphors. The broad role of crystallized intelligence in metaphor comprehension is consistent with the view that metaphors are largely understood using semantic integration processes continuous with those that operate in understanding literal language.
    Free, publicly-accessible full text available January 1, 2023
  7. Public opinion polls have shown that beliefs about climate change have become increasingly polarized in the United States. A popular contemporary form of communication relevant to beliefs about climate change involves digital artifacts known as memes. The present study investigated whether memes can influence the assessment of scientific data about climate change, and whether their impact differs between political liberals and conservatives in the United States. In Study 1, we considered three hypotheses about the potential impact of memes on strongly-held politicized beliefs: 1) memes fundamentally serve social functions, and do not actually impact cognitive assessments of objective information; 2) politically incongruent memes will have a “backfire” effect; and 3) memes can indeed change assessments of scientific data about climate change, even for people with strong entering beliefs. We found evidence in support of the hypothesis that memes have the potential to change assessments of scientific information about climate change. Study 2 explored whether different partisan pages that post climate change memes elicit different emotions from their audiences, as well as how climate change is discussed in different ways by those at opposite ends of the political spectrum.
    Free, publicly-accessible full text available January 1, 2023
  8. A key property of human cognition is its ability to generate novel predictions about unfamiliar situations by completing a partially-specified relation or an analogy. Here, we present a computational model capable of producing generative inferences from relations and analogs. This model, BART-Gen, operates on explicit representations of relations learned by BART (Bayesian Analogy with Relational Transformations), to achieve two related forms of generative inference: reasoning from a single relation, and reasoning from an analog. In the first form, a reasoner completes a partially-specified instance of a stated relation (e.g., robin is a type of ____). In the second, a reasoner completes a target analog based on a stated source analog (e.g., sedan:car :: robin:____). We compare the performance of BART-Gen with that of BERT, a popular model for Natural Language Processing (NLP) that is trained on sentence completion tasks and that does not rely on explicit representations of relations. Across simulations and human experiments, we show that BART-Gen produces more human-like responses for generative inferences from relations and analogs than does the NLP model. These results demonstrate the essential role of explicit relation representations in human generative reasoning.
    Free, publicly-accessible full text available January 1, 2023
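The generative-inference task above (complete robin:____ given examples of a "type-of" relation) can be sketched with a crude vector-offset mechanism: estimate the relation as a mean embedding offset from example pairs, then return the vocabulary word nearest to the completed vector. All embeddings below are hypothetical toy values, and the offset mechanism is a simplification; BART-Gen's actual relation representations are learned, not simple differences.

```python
import numpy as np

# Toy embeddings (hypothetical values): first three dims encode semantic
# domain, last dim encodes specific-instance vs. category.
emb = {
    "robin":  np.array([1., 0., 0., 1.]),
    "bird":   np.array([1., 0., 0., 0.]),
    "sedan":  np.array([0., 1., 0., 1.]),
    "car":    np.array([0., 1., 0., 0.]),
    "hammer": np.array([0., 0., 1., 1.]),
    "tool":   np.array([0., 0., 1., 0.]),
}

def relation_offset(example_pairs):
    """Estimate a 'type-of' relation as the mean embedding offset b - a."""
    return np.mean([emb[b] - emb[a] for a, b in example_pairs], axis=0)

def generate(c, rel, vocab):
    """Generative inference: the vocabulary word nearest to emb[c] + rel."""
    target = emb[c] + rel
    return min((w for w in vocab if w != c),
               key=lambda w: float(np.linalg.norm(emb[w] - target)))

rel = relation_offset([("sedan", "car"), ("hammer", "tool")])
print(generate("robin", rel, emb))  # completes robin : ____  ->  bird
```

Note the contrast with the scoring sketch for multiple-choice analogies: here the model must produce a filler by searching the whole vocabulary, not merely rank given candidates.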
  9. Understanding abstract relations, and reasoning about various instantiations of the same relation, is an important marker in human cognition. Here we focus on development of understanding for the concept of antonymy. We examined whether four- and five-year-olds (N = 67) are able to complete an analogy task involving antonyms, whether language cues facilitate children’s ability to reason about the antonym relation, and how their performance compares with that of two vector-based computational models. We found that explicit relation labels in the form of a relation phrase (“opposites”) improved performance on the task for five-year-olds but not four-year-olds. Five-year-old (but not four-year-old) children were more accurate for adjective and verb antonyms than for noun antonyms. Two computational models showed substantial variability in performance across different lexical classes, and in some cases fell short of children’s accuracy levels. These results suggest that young children acquire a solid understanding of the abstract relation of opposites, and can generalize it to various instantiations across different lexical classes. These developmental results challenge relation models based on vector semantics, and highlight the importance of examining performance across different parts of speech.
    Free, publicly-accessible full text available January 1, 2023
  10. Many computational models of reasoning rely on explicit relation representations to account for human cognitive capacities such as analogical reasoning. Relational luring, a phenomenon observed in recognition memory, has been interpreted as evidence that explicit relation representations also impact episodic memory; however, this assumption has not been rigorously assessed by computational modeling. We implemented an established model of recognition memory, the Generalized Context Model (GCM), as a framework for simulating human performance on an old/new recognition task that elicits relational luring. Within this basic theoretical framework, we compared representations based on explicit relations, lexical semantics (i.e., individual word meanings), and a combination of the two. We compared the same alternative representations as predictors of accuracy in solving explicit verbal analogies. In accord with previous work, we found that explicit relation representations are necessary for modeling analogical reasoning. In contrast, preliminary simulations incorporating model parameters optimized to fit human data reproduce relational luring using any of the alternative representations, including one based on non-relational lexical semantics. Further work on model comparisons is needed to examine the contributions of lexical semantics and relations to the luring effect in recognition memory.
    Free, publicly-accessible full text available January 1, 2023
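The GCM mechanism named above is compact enough to sketch: exemplar similarity decays exponentially with distance, and evidence for responding "old" is the summed similarity of a probe to all studied exemplars. The toy pair representations below (invented values, with two dimensions for the words and two for the relation) show how a relational lure, which shares only the relation component with a studied pair, attracts more familiarity than an unrelated foil. This is an illustrative sketch, not the paper's fitted model.

```python
import numpy as np

def similarity(x, y, c=2.0):
    """GCM exemplar similarity: exponential decay with distance (c = sensitivity)."""
    return float(np.exp(-c * np.linalg.norm(x - y)))

def familiarity(probe, memory):
    """Summed similarity of a probe to all studied exemplars (evidence for 'old')."""
    return sum(similarity(probe, m) for m in memory)

# Toy pair representations (hypothetical): dims 1-2 encode the specific words,
# dims 3-4 encode the relation between them.
studied   = np.array([1., 0., 1., 0.])  # e.g. a studied word pair
rel_lure  = np.array([0., 1., 1., 0.])  # new words, same relation
unrelated = np.array([0., 1., 0., 1.])  # new words, different relation

memory = [studied]
print(familiarity(rel_lure, memory) > familiarity(unrelated, memory))  # True
```

Dropping the relation dimensions from the representation makes the lure and the unrelated foil equally (un)familiar, which is exactly the representational contrast the simulations manipulate.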