Title: Person Perception, Meet People Perception: Exploring the Social Vision of Groups
Groups, teams, and collectives (people) are incredibly important to human behavior. People live in families, work in teams, and celebrate and mourn together in groups. Despite the huge variety of human group activity and its fundamental importance to human life, social-psychological research on person perception has overwhelmingly focused on its namesake, the person, rather than expanding to consider people perception. By looking to two unexpected partners, the vision sciences and organizational behavior, we find emerging work that presents a path forward, building a foundation for understanding how people perceive other people. Yet this nascent field is missing critical insights that scholars of social vision might offer: for example, the chance to connect perception to behavior through the mediators of cognition and motivational processes. Here, we review emerging work across the vision and social sciences to extract core principles of people perception: efficiency, capacity, and complexity. We then consider complexity in more detail, focusing on how people perception modifies person-perception processes and enables the perception of group emergent properties as well as group dynamics. Finally, we use these principles to discuss findings and outline areas fruitful for future work. We hope that fellow scholars take up this people-perception call.
Award ID(s):
2017250
PAR ID:
10547580
Publisher / Repository:
SAGE Publications
Journal Name:
Perspectives on Psychological Science
Volume:
17
Issue:
3
ISSN:
1745-6916
Format(s):
Medium: X
Size(s):
p. 768-787
Sponsoring Org:
National Science Foundation
More Like This
  1.
    The robotics community continually strives to create robots that are deployable in real-world environments. Often, robots are expected to interact with human groups. To achieve this goal, we introduce a new method, the Robot-Centric Group Estimation Model (RoboGEM), which enables robots to detect groups of people. Much of the work reported in the literature focuses on dyadic interactions, leaving a gap in our understanding of how to build robots that can effectively team with larger groups of people. Moreover, many current methods rely on exocentric vision, where cameras and sensors are placed externally in the environment rather than onboard the robot. Consequently, these methods are impractical for robots in unstructured, human-centric environments, which are novel and unpredictable. Furthermore, the majority of work on group perception is supervised, which can inhibit performance in real-world settings. RoboGEM addresses these gaps by predicting social groups solely from an egocentric perspective using color and depth (RGB-D) data. To achieve group predictions, RoboGEM leverages joint motion and proximity estimations. We evaluated RoboGEM against a challenging, egocentric, real-world dataset where both pedestrians and the robot are in motion simultaneously, and show that RoboGEM outperformed two state-of-the-art supervised methods in detection accuracy by up to 30%, with a lower miss rate. Our work will be helpful to the robotics community and serves as a milestone toward building unsupervised systems that will enable robots to work with human groups in real-world environments.
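    The abstract does not include an implementation, but the core grouping rule, linking pedestrians who are both near one another (proximity) and moving coherently (joint motion) and then taking connected components, can be sketched in a few lines. Every name and threshold below is an illustrative assumption, not RoboGEM's actual pipeline:

        import numpy as np
        from scipy.spatial.distance import cdist

        def estimate_groups(positions, velocities, dist_thresh=1.5, vel_thresh=0.5):
            """Cluster pedestrians into social groups from egocentric estimates.

            positions:  (N, 2) pedestrian positions in the robot frame, in meters
            velocities: (N, 2) estimated velocities, in m/s
            Pedestrians are linked when they are close together AND moving
            similarly; connected components of the link graph are the groups.
            """
            n = len(positions)
            near = cdist(positions, positions) < dist_thresh        # proximity test
            coherent = cdist(velocities, velocities) < vel_thresh   # joint-motion test
            linked = near & coherent

            # Union-find over the link graph to extract connected components.
            parent = list(range(n))
            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]
                    i = parent[i]
                return i
            for i in range(n):
                for j in range(i + 1, n):
                    if linked[i, j]:
                        parent[find(i)] = find(j)
            return [find(i) for i in range(n)]  # same label = same group

        # Example: the first two pedestrians walk together; the third does not.
        pos = np.array([[0.0, 0.0], [0.5, 0.2], [5.0, 5.0]])
        vel = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
        print(estimate_groups(pos, vel))  # -> [1, 1, 2]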
  2. Abstract: Flipping through social media feeds, viewing exhibitions in a museum, or walking through the botanical gardens, people consistently choose to engage with and disengage from visual content. Yet, in most laboratory settings, the visual stimuli, their presentation duration, and the task at hand are all controlled by the researcher. Such settings largely overlook the spontaneous nature of human visual experience, in which perception takes place independently from specific task constraints and its time course is determined by the observer as a self-governing agent. Currently, much remains unknown about how spontaneous perceptual experiences unfold in the brain. Are all perceptual categories extracted during spontaneous perception? Does spontaneous perception inherently involve volition? Is spontaneous perception segmented into discrete episodes? How do different neural networks interact over time during spontaneous perception? These questions are imperative to understanding our conscious visual experience in daily life. In this article, we propose a framework for spontaneous perception. We first define spontaneous perception as a task-free and self-paced experience. We propose that spontaneous perception is guided by four organizing principles that grant it temporal and spatial structure: coarse-to-fine processing, continuity and segmentation, agency and volition, and associative processing. We provide key suggestions illustrating how these principles may interact with one another in guiding the multifaceted experience of spontaneous perception. We point to testable predictions derived from this framework, including (but not limited to) the roles of the default-mode network and slow cortical potentials in underlying spontaneous perception. We conclude by suggesting several outstanding questions for future research, extending the relevance of this framework to consciousness and spontaneous brain activity. The spontaneous perception framework proposed herein integrates components of human perception and cognition that have traditionally been studied in isolation, and it opens the door to understanding how visual perception unfolds in its most natural context.
  3. Toward enabling next-generation robots capable of socially intelligent interaction with humans, we present a computational model of interactions in a social environment of multiple agents and multiple groups. The Multiagent Group Perception and Interaction (MGpi) network is a deep neural network that predicts the appropriate social action to execute in a group conversation (e.g., speak, listen, respond, leave), taking into account neighbors' observable features (e.g., location, gaze orientation, and distraction). A central component of MGpi is the Kinesic-Proxemic-Message (KPM) gate, which performs social signal gating to extract important information from a group conversation. In particular, the KPM gate filters incoming social cues from nearby agents by observing their body gestures (kinesics) and spatial behavior (proxemics). The MGpi network and its KPM gate are learned via imitation learning, using demonstrations from our designed social interaction simulator. Further, we demonstrate the efficacy of the KPM gate as a social attention mechanism, achieving state-of-the-art performance on the task of group identification without using explicit group annotations, layout assumptions, or manually chosen parameters.
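    As a rough sketch of the gating idea, weighting each neighbor's message by a scalar computed from its kinesic and proxemic features before aggregation, consider the toy module below. The dimensions, names, and architecture are assumptions chosen for illustration, not the published MGpi/KPM design:

        import torch
        import torch.nn as nn

        class SocialSignalGate(nn.Module):
            """Toy analogue of a kinesic-proxemic gate: each neighbor's message
            is scaled by a learned gate value computed from its body-gesture
            (kinesic) and spatial (proxemic) features, then aggregated."""

            def __init__(self, kinesic_dim, proxemic_dim, msg_dim):
                super().__init__()
                self.gate = nn.Sequential(
                    nn.Linear(kinesic_dim + proxemic_dim, 32),
                    nn.ReLU(),
                    nn.Linear(32, 1),
                    nn.Sigmoid(),  # gate in [0, 1]: how much of the message passes
                )

            def forward(self, kinesics, proxemics, messages):
                # kinesics:  (N, kinesic_dim)   neighbor gesture features
                # proxemics: (N, proxemic_dim)  neighbor spatial features
                # messages:  (N, msg_dim)       per-neighbor message vectors
                g = self.gate(torch.cat([kinesics, proxemics], dim=-1))  # (N, 1)
                return (g * messages).sum(dim=0)  # gated, aggregated message

        # Example: gate messages from four neighbors into one summary vector.
        gate = SocialSignalGate(kinesic_dim=6, proxemic_dim=3, msg_dim=16)
        summary = gate(torch.randn(4, 6), torch.randn(4, 3), torch.randn(4, 16))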
  4. Abstract: This paper evaluates perception of complexity in a novel explanatory model that relates product performance and engineering effort. Complexity is an intermediate factor with two facets: it enables desired product performance but also requires effort to achieve. Three causal mechanisms explain how exponential growth bias, excess complexity, and differential perception lead to effort overruns. Secondary data from a human-subjects experiment validates the existence of perception of complexity as a context-dependent factor that influences required design effort. A two-level mixed-effects regression model quantifies differences in perception among 40 design groups. Results summarize how perception of complexity may contribute to effort overruns and outline future work to further validate the explanatory model and causal mechanisms.
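    For readers unfamiliar with the statistical machinery, a two-level mixed-effects model of this kind (observations nested within design groups) can be fit in a few lines. The sketch below uses invented column names and toy data, not the paper's experiment:

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical data: design effort and rated complexity for tasks
        # nested within design groups (the second level of the model).
        df = pd.DataFrame({
            "effort":       [4.1, 6.3, 5.0, 2.8, 7.0, 6.1, 5.2, 3.9, 4.8],
            "complexity":   [2.0, 3.5, 2.8, 1.2, 4.1, 3.3, 2.9, 1.8, 2.5],
            "design_group": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
        })

        # Fixed effect: complexity -> effort. A random intercept per design
        # group captures group-level differences in perceived complexity.
        model = smf.mixedlm("effort ~ complexity", df, groups=df["design_group"])
        print(model.fit().summary())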
  5. A wide range of studies in Human-Robot Interaction (HRI) has shown that robots can influence the social behavior of humans. This phenomenon is commonly explained by the Media Equation. Fundamental to this theory is the idea that when faced with technology (like robots), people perceive it as a social agent with thoughts and intentions similar to those of humans. This perception guides the interaction with the technology and its predicted impact. However, HRI studies have also reported examples in which the Media Equation has been violated, that is, when people treat the influence of robots differently from the influence of humans. To address this gap, we propose a model of Robot Social Influence (RoSI) with two contributing factors. The first factor is a robot's violation of a person's expectations, whether the robot exceeds or fails to meet them. The second factor is a person's social belonging with the robot, whether the person belongs to the same group as the robot or a different group. These factors are primary predictors of robots' social influence and commonly mediate the influence of other factors. We review the HRI literature and show how RoSI can explain robots' social influence in concrete HRI scenarios.