Title: Relative rate of expansion controls speed in one-dimensional pedestrian following
Patterns of crowd behavior are believed to result from local interactions between pedestrians. Many studies have investigated the local rules of interaction, such as steering, avoiding, and aligning, but how pedestrians control their walking speed when following another remains unsettled. Most pedestrian models take the physical speed and distance of others as input. The present study compares such “omniscient” models with “visual” models based on optical variables. We experimentally tested eight speed control models from the pedestrian- and car-following literature. Walking participants were asked to follow a leader (a moving pole) in a virtual environment, while the leader’s speed was perturbed during the trial. In Experiment 1, the leader’s initial distance was varied. Each model was fit to the data and compared. The results showed that visual models based on optical expansion (θ̇) had the smallest root mean square error in speed across conditions, whereas other models exhibited increased error at longer distances. In Experiment 2, the leader’s size (pole diameter) was varied. A model based on the relative rate of expansion (θ̇/θ) performed better than the expansion-rate model (θ̇), because it is less sensitive to leader size. Together, the results imply that pedestrians directly control their walking speed in one-dimensional following using the relative rate of expansion, rather than the distal speed and distance of the leader.
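To make the control law concrete, here is a minimal sketch (assuming a simple proportional law; the gain k and the function names are illustrative, not the authors' fitted model) of how the relative rate of expansion θ̇/θ follows from leader width and distance and could be used to regulate walking speed:

```python
import math

def visual_angle(width, distance):
    """Visual angle (rad) subtended by a leader of physical width `width` (m) at `distance` (m)."""
    return 2.0 * math.atan(width / (2.0 * distance))

def relative_expansion_rate(width, distance, closing_speed):
    """theta_dot / theta, where closing_speed > 0 means the follower is gaining on the leader."""
    theta = visual_angle(width, distance)
    # d(theta)/d(distance), obtained by differentiating 2 * atan(width / (2 * distance))
    dtheta_ddist = -width / (distance ** 2 + (width / 2.0) ** 2)
    theta_dot = dtheta_ddist * (-closing_speed)   # d(distance)/dt = -closing_speed
    return theta_dot / theta

def follower_acceleration(width, distance, follower_speed, leader_speed, k=2.0):
    """Hypothetical control law: accelerate to null the relative expansion rate (gain k is a placeholder)."""
    closing_speed = follower_speed - leader_speed
    return -k * relative_expansion_rate(width, distance, closing_speed)
```

In the small-angle limit θ̇/θ ≈ −ḋ/d, which does not depend on the leader's width; that insensitivity to size is what distinguishes the relative rate of expansion from the raw expansion rate manipulated in Experiment 2.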
Award ID(s):
1849446
PAR ID:
10514716
Author(s) / Creator(s):
Publisher / Repository:
Association for Research in Vision and Ophthalmology
Date Published:
Journal Name:
Journal of Vision
Volume:
23
Issue:
10
ISSN:
1534-7362
Page Range / eLocation ID:
3
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Collective motion in human crowds emerges from local interactions between individual pedestrians. Previously, we found that an individual in a crowd aligns their velocity vector with a weighted average of their neighbors’ velocities, where the weight decays with distance (Rio, Dachner, & Warren, PRSB, 2018; Warren, CDPS, 2018). Here, we explain this “alignment rule” based solely on visual information. When following behind a neighbor, the follower controls speed by canceling the neighbor’s optical expansion (Bai & Warren, VSS, 2018) and heading by canceling the neighbor’s angular velocity. When walking beside a neighbor, these relations reverse: Speed is controlled by canceling angular velocity and heading by canceling optical expansion. These two variables trade off as sinusoidal functions of eccentricity (Dachner & Warren, VSS, 2018). We use this visual model to simulate the trajectories of participants walking in virtual (12 neighbors) and real (20 neighbors) crowds. The model accounts for the data with root mean square errors (.04–.05 m/s, 1.5°–2.0°) comparable to those of our previous velocity-alignment model. Moreover, the model explains the distance decay as a consequence of Euclid’s law of perspective, without an explicit distance term. The visual model thus provides a better explanation of collective motion.
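A rough sketch of the eccentricity tradeoff described above (the sinusoidal weighting and unit gains are illustrative assumptions, not the fitted functions from Dachner & Warren, VSS 2018):

```python
import math

def speed_and_heading_change(expansion_rate, angular_velocity, eccentricity,
                             k_speed=1.0, k_heading=1.0):
    """
    Illustrative tradeoff: directly behind a neighbor (eccentricity ~ 0), speed cancels
    optical expansion and heading cancels angular velocity; directly beside a neighbor
    (eccentricity ~ pi/2), the couplings reverse. Gains are placeholders.
    """
    c, s = math.cos(eccentricity), math.sin(eccentricity)
    speed_change = -k_speed * (c * expansion_rate + s * angular_velocity)
    heading_change = -k_heading * (c * angular_velocity + s * expansion_rate)
    return speed_change, heading_change
```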
  2. We previously reported an experiment in which covert or explicit leaders (confederates) were placed in a group of walking pedestrians in order to test leader influence on human crowd motion. Here we simulate the participant trajectories with variants of an empirical pedestrian model, treating the covert leaders’ motion as input, and test model agreement with the experimental data. We are currently using reconstructed influence networks [2] to modify the model weights in order to simulate the influence of explicit leaders. The results help us to understand how leader influence propagates via local interactions in real human crowds. 
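A minimal sketch of this kind of simulation step, assuming a distance-decaying velocity-alignment rule in which leaders' weights can be rescaled (the decay constant and `leader_boost` are hypothetical placeholders, not the fitted parameters):

```python
import numpy as np

def alignment_velocity(neighbor_velocities, distances, is_leader,
                       leader_boost=1.0, decay=1.0):
    """Weighted average of neighbors' velocities; leaders' weights are rescaled by leader_boost."""
    v = np.asarray(neighbor_velocities, dtype=float)   # shape (n, 2): neighbor velocity vectors
    d = np.asarray(distances, dtype=float)             # shape (n,): distance to each neighbor
    w = np.exp(-decay * d)                             # weight decays with distance
    w = np.where(np.asarray(is_leader, dtype=bool), leader_boost * w, w)
    return (w[:, None] * v).sum(axis=0) / w.sum()
```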
  3. Patterns of collective motion in bird flocks, fish schools and human crowds are believed to emerge from local interactions between individuals. Most ‘flocking' models attribute these local interactions to hypothetical rules or metaphorical forces and assume an omniscient third-person view of the positions and velocities of all individuals in space. We develop a visual model of collective motion in human crowds based on the visual coupling that governs pedestrian interactions from a first-person embedded viewpoint. Specifically, humans control their walking speed and direction by cancelling the average angular velocity and optical expansion/contraction of their neighbours, weighted by visibility (1 − occlusion). We test the model by simulating data from experiments with virtual crowds and real human ‘swarms'. The visual model outperforms our previous omniscient model and explains basic properties of interaction: ‘repulsion' forces reduce to cancelling optical expansion, ‘attraction' forces to cancelling optical contraction and ‘alignment' to cancelling the combination of expansion/contraction and angular velocity. Moreover, the neighbourhood of interaction follows from Euclid's Law of perspective and the geometry of occlusion. We conclude that the local interactions underlying human flocking are a natural consequence of the laws of optics. Similar perceptual principles may apply to collective motion in other species. 
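A schematic of the visibility weighting, assuming each neighbor contributes its optical expansion/contraction and angular velocity in proportion to how visible it is; the gains and normalization are illustrative, not the published parameterization:

```python
import numpy as np

def visual_coupling(expansion_rates, angular_velocities, occlusion,
                    k_speed=1.0, k_heading=1.0):
    """
    Speed and heading adjustments driven by a crowd of neighbors, each weighted by
    visibility (1 - occlusion); fully occluded neighbors contribute nothing.
    """
    vis = 1.0 - np.asarray(occlusion, dtype=float)    # fraction of each neighbor that is visible
    if vis.sum() == 0.0:
        return 0.0, 0.0                               # everyone occluded: no visual drive
    vis = vis / vis.sum()
    speed_change = -k_speed * float(np.dot(vis, expansion_rates))         # cancel expansion/contraction
    heading_change = -k_heading * float(np.dot(vis, angular_velocities))  # cancel angular velocity
    return speed_change, heading_change
```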
  4. The visual control of locomotion has been modeled for individual pedestrian behavior; however, this approach has not been applied to collective human behavior, where spontaneous pattern formation is often observed. We hypothesize that an empirical visual model of human locomotion will reproduce the emergent pattern of lanes and stripes observed in crossing flows, when two groups of pedestrians walk through each other at crosswalks or intersections. Mullick et al. (2022) manipulated the crossing angle between two groups and found an invariant property: stripe orientation is perpendicular to the mean walking direction (i.e., 90° to the bisectrix of the crossing angle). Here we determine the combination of model components required to simulate human-like stripes: (i) steering to a goal (Fajen & Warren, 2003), (ii) collision avoidance with opponents (Bai, 2022; Veprek & Warren, VSS 2023), and (iii) alignment with neighbors (Dachner et al., 2022), together called the SCruM (Self-organized Collective Motion) model. We performed multi-agent simulations of the data from Mullick et al. (2022), using fixed parameters and initial conditions from the dataset. There were two sets of participants (N = 36 and 38), with 18 or 19 per group. Crossing angle varied from 60° to 180° (30° intervals), with ~17 trials per condition. The minimal model necessary to reproduce stripe formation consists of the goal and collision avoidance components. Mean stripe orientation did not differ from 90° to the bisectrix (BF10 < 0.01, decisive). However, the SD of heading during crossing was significantly larger than in the human data (p < 0.001), whereas the SD of speed was significantly smaller (p < 0.001). Thus, the ratio of heading/speed adjustments is lower than previously found, implying the need to reparameterize model components for walking in groups. In sum, steering to a goal and collision avoidance are sufficient to explain stripe formation in crossing flows, while alignment is unnecessary.
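The additive component structure can be sketched as heading-rate terms in the spirit of Fajen & Warren (2003); every constant below is an illustrative placeholder rather than a parameter of the SCruM simulations:

```python
import math

def goal_term(heading, goal_direction, goal_distance, k_g=7.5, c1=0.4, c2=0.4):
    """Steer-to-goal: turn toward the goal, more strongly when it is nearby (constants illustrative)."""
    return -k_g * (heading - goal_direction) * (math.exp(-c1 * goal_distance) + c2)

def avoidance_term(heading, opponent_direction, opponent_distance, k_o=6.0, c3=1.6, c4=0.5):
    """Collision avoidance: turn away from an opponent, decaying with bearing difference and distance."""
    dpsi = heading - opponent_direction
    return k_o * dpsi * math.exp(-c3 * abs(dpsi)) * math.exp(-c4 * opponent_distance)

def heading_acceleration(heading, heading_rate, goal, opponents, alignment=0.0, b=3.25):
    """Damped sum of the goal, avoidance, and (optional) alignment components."""
    total = -b * heading_rate + goal_term(heading, *goal)
    for direction, distance in opponents:
        total += avoidance_term(heading, direction, distance)
    return total + alignment
```

Setting the alignment term to zero corresponds to the minimal goal-plus-avoidance model that was sufficient to reproduce stripe formation.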
  5. Nicolas, A; Bain, N; Douin, A; Ramos, O; Furno, A (Ed.)
    Crossing flows of pedestrians result in collective motions containing self-organized lanes or stripes. Over a wide range of crossing angles, stripe orientation is observed to be perpendicular to the mean walking direction. Here, we test the behavioral components needed to reproduce the lanes and stripes in human data using an empirical, vision-based pedestrian model (Visual SCruM). We examine combinations of (i) steering toward a goal, (ii) collision avoidance, and (iii) alignment (both with and without visual occlusion). The minimal model sufficient to reproduce perpendicular stripes was the combination of a common goal and collision avoidance, although the addition of alignment with occlusion better approximated human heading adjustments. However, the model overestimated the variation in heading and underestimated the variation in speed, suggesting that recalibration of the collision avoidance component is needed. 
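The perpendicular-stripe invariant itself is a geometric relation and easy to state in code (no model assumptions beyond the reported 90° relation; the 120° crossing angle in the example is arbitrary):

```python
import numpy as np

def predicted_stripe_orientation(heading_a_deg, heading_b_deg):
    """Stripe orientation predicted to lie 90 degrees from the bisectrix of the two group headings."""
    a, b = np.deg2rad([heading_a_deg, heading_b_deg])
    bisectrix = np.array([np.cos(a) + np.cos(b), np.sin(a) + np.sin(b)])
    bisectrix_deg = np.degrees(np.arctan2(bisectrix[1], bisectrix[0]))
    return (bisectrix_deg + 90.0) % 180.0

# Example: two groups crossing at 120 degrees (headings 0 and 120);
# the bisectrix points at 60 degrees, so stripes are predicted at 150 degrees.
print(predicted_stripe_orientation(0.0, 120.0))   # 150.0
```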