Abstract: Scene memory has known spatial biases. Boundary extension is a well-known bias whereby observers remember visual information beyond an image’s boundaries. While recent studies demonstrate that boundary contraction also reliably occurs based on intrinsic image properties, the specific properties that drive the effect are unknown. This study assesses the extent to which scene memory might have a fixed capacity for information. We assessed both visual and semantic information in a scene database using techniques from image processing and natural language processing, respectively. We then assessed how both types of information predicted memory errors for scene boundaries using a standard rapid serial visual presentation (RSVP) forced error paradigm. A linear regression model indicated that memories for scene boundaries were significantly predicted by semantic, but not visual, information and that this effect persisted when scene depth was considered. Boundary extension was observed for images with low semantic information, and contraction was observed for images with high semantic information. This suggests a cognitive process that normalizes the amount of semantic information held in memory.
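As a rough illustration of the analysis described above (not the authors' actual pipeline; the variable names and data here are hypothetical placeholders), a regression predicting per-image boundary-memory error from visual and semantic information scores might be sketched as:

```python
import numpy as np

# Hypothetical per-image scores; the paper derives these with image
# processing and NLP, but here they are random placeholders.
rng = np.random.default_rng(0)
n_images = 100
visual_info = rng.normal(size=n_images)    # e.g., low-level image statistics
semantic_info = rng.normal(size=n_images)  # e.g., description-based measures

# Simulated boundary scores: negative = extension, positive = contraction.
# Per the abstract, errors track semantic (not visual) information.
boundary_error = 0.8 * semantic_info + rng.normal(scale=0.5, size=n_images)

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(n_images), visual_info, semantic_info])
beta, *_ = np.linalg.lstsq(X, boundary_error, rcond=None)
intercept, b_visual, b_semantic = beta
print(f"visual coef:   {b_visual:.3f}")
print(f"semantic coef: {b_semantic:.3f}")
```

With this simulated data, the fit recovers a semantic coefficient near the planted 0.8 and a visual coefficient near zero, mirroring the reported pattern of a semantic but not visual effect.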
Perceived distance alters memory for scene boundaries
Memory often fills in what is not there. A striking example of this is boundary extension, whereby observers mistakenly recall a view that extends beyond what was seen. However, not all visual memories extend in this way, which suggests that this process depends on specific scene properties. What factors determine when visual memories will include details that go beyond perceptual experience? Here, seven experiments (N = 1,100 adults) explored whether spatial scale—specifically, perceived viewing distance—drives boundary extension. We created fake miniatures by exploiting tilt shift, a photographic effect that selectively reduces perceived distance while preserving other scene properties (e.g., making a distant railway appear like a model train). Fake miniaturization increased boundary extension for otherwise identical scenes: Participants who performed a scene-memory task misremembered fake-miniaturized views as farther away than they actually were. This effect went beyond low-level image changes and generalized to a completely different distance manipulation. Thus, visual memory is modulated by the spatial scale at which the environment is viewed.
- Award ID(s): 2105228
- PAR ID: 10338429
- Date Published:
- Journal Name: Psychological Science
- Volume: 33
- Issue: 12
- ISSN: 0956-7976
- Page Range / eLocation ID: 2040–2058
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Visual scenes are often remembered as if they were observed from a different viewpoint. Some scenes are remembered as farther than they appeared, and others as closer. These memory distortions—also known as boundary extension and contraction—are strikingly consistent for a given scene, but their cause remains unknown. We tested whether these distortions can be explained by an inferential process that adjusts scene memories toward high-probability views, using viewing depth as a test case. We first carried out a large-scale analysis of depth maps of natural indoor scenes to quantify the statistical probability of views in depth. We then assessed human observers’ memory for these scenes at various depths and found that viewpoint judgments were consistently biased toward the modal depth, even when just a few seconds elapsed between viewing and reporting. Thus, scenes closer than the modal depth showed a boundary-extension bias (remembered as farther away), and scenes farther than the modal depth showed a boundary-contraction bias (remembered as closer). By contrast, scenes at the modal depth did not elicit a consistent bias in either direction. This same pattern of results was observed in a follow-up experiment using tightly controlled stimuli from virtual environments. Together, these findings show that scene memories are biased toward statistically probable views, which may serve to increase the accuracy of noisy or incomplete scene representations.
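The modal-depth prediction described above can be sketched in a minimal, purely illustrative form: estimate the modal viewing depth from a distribution of scene depths, then predict the direction of the memory bias for any viewed depth. The distribution, bin count, and function names here are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

# Hypothetical mean depths (in meters) for a set of indoor scene views,
# standing in for the large-scale depth-map analysis in the study.
rng = np.random.default_rng(1)
scene_depths = rng.lognormal(mean=1.0, sigma=0.4, size=10_000)

# Estimate the modal depth from a histogram of the distribution.
counts, edges = np.histogram(scene_depths, bins=50)
peak = np.argmax(counts)
modal_depth = 0.5 * (edges[peak] + edges[peak + 1])

def predicted_bias(viewed_depth, modal=modal_depth):
    """Sign of the predicted distortion: +1 = boundary extension
    (remembered as farther), -1 = contraction (remembered as closer),
    0 = no consistent bias at the modal depth itself."""
    return int(np.sign(modal - viewed_depth))
```

Under this sketch, a view closer than the modal depth yields +1 (extension) and a view farther than it yields -1 (contraction), matching the qualitative pattern the abstract reports.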
Abstract: Almost all models of visual working memory—the cognitive system that holds visual information in an active state—assume it has a fixed capacity: Some models propose a limit of three to four objects, whereas others propose there is a fixed pool of resources for each basic visual feature. Recent findings, however, suggest that memory performance is improved for real-world objects. What supports these increases in capacity? Here, we test whether the meaningfulness of a stimulus alone influences working memory capacity while controlling for visual complexity and directly assessing the active component of working memory using EEG. Participants remembered ambiguous stimuli that could either be perceived as a face or as meaningless shapes. Participants had higher performance and increased neural delay activity when the memory display consisted of more meaningful stimuli. Critically, by asking participants whether they perceived the stimuli as a face or not, we also show that these increases in visual working memory capacity and recruitment of additional neural resources arise from the subjective perception of the stimulus and thus cannot be driven by physical properties of the stimulus. Broadly, this suggests that the capacity for active storage in visual working memory is not fixed but that more meaningful stimuli recruit additional working memory resources, allowing them to be better remembered.
During sleep, recently acquired episodic memories (i.e., autobiographical memories for specific events) are strengthened and transformed, a process termed consolidation. These memories are contextual in nature, with details of specific features interwoven with more general properties such as the time and place of the event. In this study, we hypothesized that the context in which a memory is embedded would guide the process of consolidation during sleep. To test this idea, we used a spatial memory task and considered changes in memory over a 10-h period including either sleep or wake. In both conditions, participants (N = 62) formed stories that contextually bound four objects together and then encoded the on-screen spatial position of all objects. Results showed that the changes in memory over the sleep period were correlated among contextually linked objects, whereas no such effect was identified for the wake group. These results demonstrate that context-binding plays an important role in memory consolidation during sleep.
While viewing a visual stimulus, we often cannot tell whether it is inherently memorable or forgettable. However, the memorability of a stimulus can be quantified and partially predicted by a collection of conceptual and perceptual factors. Higher-level properties that represent the “meaningfulness” of a visual stimulus to viewers best predict whether it will be remembered or forgotten across a population. Here, we hypothesize that the feelings evoked by an image, operationalized as the valence and arousal dimensions of affect, significantly contribute to the memorability of scene images. We ran two complementary experiments to investigate the influence of affect on scene memorability, in the process creating a new image set (VAMOS) of hundreds of natural scene images for which we obtained valence, arousal, and memorability scores. From our first experiment, we found memorability to be highly reliable for scene images that span a wide range of evoked arousal and valence. From our second experiment, we found that both valence and arousal are significant but weak predictors of image memorability. Scene images were most memorable if they were slightly negatively valenced and highly arousing. Images that were extremely positive or unarousing were most forgettable. Valence and arousal together accounted for less than 8% of the variance in image memorability. These findings suggest that evoked affect contributes to the overall memorability of a scene image but, like other singular predictors, does not fully explain it. Instead, memorability is best explained by an assemblage of visual features that combine, in perhaps unintuitive ways, to predict what is likely to stick in our memory.