

Search for: All records

Creators/Authors contains: "Lu, Hongjing"

Note: Clicking a Digital Object Identifier (DOI) will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Creativity is typically defined as the generation of novel and useful ideas or artifacts. This generative capacity is crucial to everyday problem solving, technological innovation, scientific discovery, and the arts. A central concern of cognitive scientists is to understand the processes that underlie human creative thinking. We review evidence that one process contributing to human creativity is the ability to generate novel representations of unfamiliar situations by completing a partially-specified relation or an analogy. In particular, cognitive tasks that trigger generation of relational similarities between dissimilar situations—distant analogies—foster a kind of creative mindset. We discuss possible computational mechanisms that might enable relation-driven generation, and hence may contribute to human creativity, and conclude with suggested directions for future research. 
    Free, publicly-accessible full text available October 1, 2024
  2. Free, publicly-accessible full text available September 1, 2024
  3. Free, publicly-accessible full text available September 1, 2024
  4. Abstract

    A commonplace sight is seeing other people walk. Our visual system specializes in processing such actions. Notably, we are not only quick to recognize actions, but also quick to judge how elegantly (or not) people walk. What movements appear appealing, and why do we have such aesthetic experiences? Do aesthetic preferences for body movements arise simply from perceiving others’ positive emotions? To answer these questions, we showed observers different point-light walkers who expressed neutral, happy, angry, or sad emotions through their movements and measured the observers’ impressions of aesthetic appeal, emotion positivity, and naturalness of these movements. Three experiments were conducted. People showed consensus in aesthetic impressions even after controlling for emotion positivity, finding prototypical walks more aesthetically pleasing than atypical walks. This aesthetic prototype effect could be accounted for by a computational model in which walking actions are treated as a single category (as opposed to multiple emotion categories). The aesthetic impressions were affected both directly by the objective prototypicality of the movements, and indirectly through the mediation of perceived naturalness. These findings extend the boundary of category learning, and hint at possible functions for action aesthetics.

  5. Abstract

    Human reasoning is grounded in an ability to identify highly abstract commonalities governing superficially dissimilar visual inputs. Recent efforts to develop algorithms with this capacity have largely focused on approaches that require extensive direct training on visual reasoning tasks, and yield limited generalization to problems with novel content. In contrast, a long tradition of research in cognitive science has focused on elucidating the computational principles underlying human analogical reasoning; however, this work has generally relied on manually constructed representations. Here we present visiPAM (visual Probabilistic Analogical Mapping), a model of visual reasoning that synthesizes these two approaches. VisiPAM employs learned representations derived directly from naturalistic visual inputs, coupled with a similarity-based mapping operation derived from cognitive theories of human reasoning. We show that without any direct training, visiPAM outperforms a state-of-the-art deep learning model on an analogical mapping task. In addition, visiPAM closely matches the pattern of human performance on a novel task involving mapping of 3D objects across disparate categories.

  6. Abstract

    Advances in artificial intelligence have raised a basic question about human intelligence: Is human reasoning best emulated by applying task‐specific knowledge acquired from a wealth of prior experience, or is it based on the domain‐general manipulation and comparison of mental representations? We address this question for the case of visual analogical reasoning. Using realistic images of familiar three‐dimensional objects (cars and their parts), we systematically manipulated viewpoints, part relations, and entity properties in visual analogy problems. We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) that were directly trained to solve these problems and to apply their task‐specific knowledge to analogical reasoning. We also developed a new model using part‐based comparison (PCM) by applying a domain‐general mapping procedure to learned representations of cars and their component parts. Across four‐term analogies (Experiment 1) and open‐ended analogies (Experiment 2), the domain‐general PCM model, but not the task‐specific deep learning models, generated performance similar in key aspects to that of human reasoners. These findings provide evidence that human‐like analogical reasoning is unlikely to be achieved by applying deep learning with big data to a specific type of analogy problem. Rather, humans do (and machines might) achieve analogical reasoning by learning representations that encode structural information useful for multiple tasks, coupled with efficient computation of relational similarity.
