Title: Relative rate of expansion controls speed in one-dimensional pedestrian following
Patterns of crowd behavior are believed to result from local interactions between pedestrians. Many studies have investigated the local rules of interaction, such as steering, avoiding, and alignment, but how pedestrians control their walking speed when following another remains unsettled. Most pedestrian models take the physical speed and distance of others as input. The present study compares such "omniscient" models with "visual" models based on optical variables. We experimentally tested eight speed control models from the pedestrian- and car-following literature. Walking participants were asked to follow a leader (a moving pole) in a virtual environment, while the leader's speed was perturbed during the trial. In Experiment 1, the leader's initial distance was varied. Each model was fit to the data and compared. The results showed that visual models based on optical expansion (θ̇) had the smallest root mean square error in speed across conditions, whereas other models exhibited increased error at longer distances. In Experiment 2, the leader's size (pole diameter) was varied. A model based on the relative rate of expansion (θ̇/θ) performed better than the expansion rate model (θ̇), because it is less sensitive to leader size. Together, the results imply that pedestrians directly control their walking speed in one-dimensional following using relative rate of expansion, rather than the distal speed and distance of the leader.
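The size argument in Experiment 2 can be checked numerically. The sketch below (illustrative only, not the authors' fitted model; function names and parameter values are assumptions) computes the optical angle θ = 2·atan(s/2d) subtended by a leader of width s at distance d, its expansion rate θ̇, and the relative rate θ̇/θ, for leaders of different widths:

```python
import math

def optical_angle(size, dist):
    # Visual angle (radians) subtended by a leader of physical width `size`
    # at distance `dist`: theta = 2 * atan(size / (2 * dist))
    return 2 * math.atan(size / (2 * dist))

def expansion_rate(size, dist, closing_speed):
    # d(theta)/dt; closing_speed > 0 means the follower-leader gap is shrinking
    return size * closing_speed / (dist**2 + size**2 / 4)

def relative_expansion_rate(size, dist, closing_speed):
    # theta_dot / theta; for small angles this approaches closing_speed / dist,
    # which contains no size term
    return expansion_rate(size, dist, closing_speed) / optical_angle(size, dist)

# Two leaders of different width, same distance (4 m) and closing speed (0.5 m/s)
thin = relative_expansion_rate(0.1, 4.0, 0.5)
wide = relative_expansion_rate(0.4, 4.0, 0.5)
# both are ~0.125 (= closing_speed / dist), while the raw expansion
# rates differ by roughly the ratio of the widths
```

A controller driven by θ̇ alone would respond about four times more strongly to the wide pole than the thin one in the same physical situation, whereas a controller driven by θ̇/θ responds nearly identically, which is the robustness property the abstract attributes to the relative rate of expansion.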
Award ID(s): 1849446
NSF-PAR ID: 10514716
Author(s) / Creator(s):
Publisher / Repository: Association for Research in Vision and Ophthalmology
Date Published:
Journal Name: Journal of Vision
Volume: 23
Issue: 10
ISSN: 1534-7362
Page Range / eLocation ID: 3
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Collective motion in human crowds emerges from local interactions between individual pedestrians. Previously, we found that an individual in a crowd aligns their velocity vector with a weighted average of their neighbors’ velocities, where the weight decays with distance (Rio, Dachner, & Warren, PRSB, 2018; Warren, CDPS, 2018). Here, we explain this “alignment rule” based solely on visual information. When following behind a neighbor, the follower controls speed by canceling the neighbor’s optical expansion (Bai & Warren, VSS, 2018) and heading by canceling the neighbor’s angular velocity. When walking beside a neighbor, these relations reverse: Speed is controlled by canceling angular velocity and heading by canceling optical expansion. These two variables trade off as sinusoidal functions of eccentricity (Dachner & Warren, VSS, 2018). We use this visual model to simulate the trajectories of participants walking in virtual (12 neighbors) and real (20 neighbors) crowds. The model accounts for the data with root mean square errors (.04–.05 m/s, 1.5°–2.0°) comparable to those of our previous velocity-alignment model. Moreover, the model explains the distance decay as a consequence of Euclid’s law of perspective, without an explicit distance term. The visual model thus provides a better explanation of collective motion.
  2. Patterns of collective motion in bird flocks, fish schools and human crowds are believed to emerge from local interactions between individuals. Most ‘flocking' models attribute these local interactions to hypothetical rules or metaphorical forces and assume an omniscient third-person view of the positions and velocities of all individuals in space. We develop a visual model of collective motion in human crowds based on the visual coupling that governs pedestrian interactions from a first-person embedded viewpoint. Specifically, humans control their walking speed and direction by cancelling the average angular velocity and optical expansion/contraction of their neighbours, weighted by visibility (1 − occlusion). We test the model by simulating data from experiments with virtual crowds and real human ‘swarms'. The visual model outperforms our previous omniscient model and explains basic properties of interaction: ‘repulsion' forces reduce to cancelling optical expansion, ‘attraction' forces to cancelling optical contraction and ‘alignment' to cancelling the combination of expansion/contraction and angular velocity. Moreover, the neighbourhood of interaction follows from Euclid's Law of perspective and the geometry of occlusion. We conclude that the local interactions underlying human flocking are a natural consequence of the laws of optics. Similar perceptual principles may apply to collective motion in other species. 
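The visibility-weighted cancellation described in this abstract can be sketched in a few lines. This is an illustrative reading of the control law, not the paper's fitted implementation; the gains, sign conventions, and function names are assumptions:

```python
def control_inputs(neighbors, k_speed=1.0, k_heading=1.0):
    """Nulling controller sketch for one pedestrian.

    neighbors: list of (expansion_rate, angular_velocity, visibility) tuples,
    where visibility = 1 - occlusion, in [0, 1]. Returns (speed change,
    heading change) that oppose the visibility-weighted average optical
    expansion and angular velocity of the neighbors. Gains are illustrative.
    """
    total_w = sum(v for _, _, v in neighbors)
    if total_w == 0:
        return 0.0, 0.0  # no visible neighbors, no change
    avg_expansion = sum(v * e for e, _, v in neighbors) / total_w
    avg_angular = sum(v * a for _, a, v in neighbors) / total_w
    # Cancel expansion: decelerate when neighbors loom, accelerate when they
    # contract. Cancel angular velocity: turn to keep neighbors' bearing fixed.
    return -k_speed * avg_expansion, -k_heading * avg_angular
```

Under this reading, a fully occluded neighbor (visibility 0) contributes nothing, so the neighborhood of interaction falls out of the weighting rather than being imposed as an explicit distance cutoff.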
  3.
    The safety of distracted pedestrians presents a significant public health challenge in the United States and worldwide. An estimated 6,704 American pedestrians died and over 200,000 pedestrians were injured in traffic crashes in 2018, according to the Centers for Disease Control and Prevention (CDC). This number is increasing annually and many researchers posit that distraction by smartphones is a primary reason for the increasing number of pedestrian injuries and deaths. One strategy to prevent pedestrian injuries and death is to use intrusive interruptions that warn distracted pedestrians directly on their smartphones. To this end, we developed StreetBit, a Bluetooth beacon-based mobile application that alerts distracted pedestrians with a visual and/or audio interruption when they are distracted by their smartphones and are approaching a potentially-dangerous traffic intersection. In this paper, we present the background, architecture, and operations of the StreetBit Application. 
  4. This paper presents the Brown Pedestrian Odometry Dataset (BPOD) for benchmarking visual odometry algorithms on data from head-mounted sensors. This dataset was captured with stereo and RGB streams from RealSense cameras with rolling and global shutters in 12 diverse indoor and outdoor locations on Brown University’s campus. Its associated ground-truth trajectories were generated from third-person videos that documented the recorded pedestrians’ positions relative to stick-on markers placed along their paths. We evaluate the performance of canonical approaches representative of direct, feature-based, and learning-based visual odometry methods on BPOD. Our finding is that current methods which are successful on other benchmarks fail on BPOD. The failure modes correspond in part to rapid pedestrian rotation, erratic body movements, etc. We hope this dataset will play a significant role in the identification of these failure modes and in the design, development, and evaluation of pedestrian odometry algorithms.
  5. Agent-based models of “flocking” and “schooling” have shown that a weighted average of neighbor velocities, with weights that decay gradually with distance, yields emergent collective motion. Weighted averaging thus offers a potential mechanism of self-organization that recruits an increasing, but self-limiting, number of individuals into collective motion. Previously, we identified and modeled such a ‘soft metric’ neighborhood of interaction in human crowds that decays exponentially to zero at a distance of 4–5 m. Here we investigate the limits of weighted averaging in humans and find that it is surprisingly robust: pedestrians align with the mean heading direction in their neighborhood, despite high levels of noise and diverging motions in the crowd, as predicted by the model. In three Virtual Reality experiments, participants were immersed in a crowd of virtual humans in a mobile head-mounted display and were instructed to walk with the crowd. By perturbing the heading (walking direction) of virtual neighbors and measuring the participant’s trajectory, we probed the limits of weighted averaging. 1) In the “Noisy Neighbors” experiment, the neighbor headings were randomized (range 0–90°) about the crowd’s mean direction (±10° or ±20°, left or right); 2) in the “Splitting Crowd” experiment, the crowd split into two groups (heading difference = 10–40°) and the proportion of the crowd in one group was varied (50–84%); 3) in the “Coherent Subgroup” experiment, a perturbed subgroup varied in its coherence (heading SD = 0–20°) about a mean direction (±10° or ±20°) within a noisy crowd (heading range = 180°), and the proportion of the crowd in the subgroup was varied. In each scenario, the results were predicted by the weighted averaging model, and attraction strength (turning rate) increased with the participant’s deviation from the mean heading direction, not with group coherence. 
However, the results indicate that humans ignore highly discrepant headings (45–90°). These findings reveal that weighted averaging in humans is highly robust and generates a common heading direction that acts as a positive feedback to recruit more individuals into collective motion, in a self-reinforcing cascade. Therefore, this “soft” metric neighborhood serves as a mechanism of self-organization in human crowds. 
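A minimal sketch of a weighted-averaging heading rule consistent with these findings follows. The exponential decay constant, the discrepancy cutoff, and the function names are illustrative assumptions, not the fitted model parameters:

```python
import math

def weighted_mean_heading(own_heading, neighbors, decay=1.3,
                          cutoff=math.radians(45)):
    """Circular weighted mean of neighbor headings (radians).

    neighbors: list of (heading, distance) pairs. Weights decay exponentially
    with distance (the 'soft metric' neighborhood), and neighbors whose
    heading differs from the pedestrian's own by more than `cutoff` are
    ignored, reflecting the finding that highly discrepant headings
    (45-90 degrees) are discounted. `decay` and `cutoff` are illustrative.
    """
    sx = sy = 0.0
    for heading, dist in neighbors:
        # signed angular difference in (-pi, pi]
        diff = math.atan2(math.sin(heading - own_heading),
                          math.cos(heading - own_heading))
        if abs(diff) > cutoff:
            continue  # discard highly discrepant neighbors
        w = math.exp(-decay * dist)  # weight decays with distance
        sx += w * math.cos(heading)
        sy += w * math.sin(heading)
    if sx == 0.0 and sy == 0.0:
        return own_heading  # no admissible neighbors: keep current heading
    return math.atan2(sy, sx)
```

Because each recruit's heading in turn enters its neighbors' averages, repeated application of a rule like this is what produces the self-reinforcing cascade the abstract describes.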