

Title: Chunks are not "Content-Free": Hierarchical Representations Preserve Perceptual Detail within Chunks.
Chunks allow us to use long-term knowledge to efficiently represent the world in working memory. Most views of chunking assume that when we use chunks, this results in the loss of specific perceptual details, since it is presumed the contents of chunks are decoded from long-term memory rather than reflecting the exact details of the item that was presented. However, in two experiments, we find that in situations where participants make use of chunks to improve visual working memory, access to instance-specific perceptual detail (that cannot be retrieved from long-term memory) increased, rather than decreased. This supports an alternative view: that chunks facilitate the encoding and retention into memory of perceptual details as part of structured, hierarchical memories, rather than serving as mere “content-free” pointers. It also provides a strong contrast to accounts in which working memory capacity is assumed to be exhaustively described by the number of chunks remembered.
Award ID(s):
1829434
NSF-PAR ID:
10297810
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the Annual Conference of the Cognitive Science Society
Volume:
43
ISSN:
1069-7977
Page Range / eLocation ID:
721-727
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this work, we present a novel approach to real-time tracking of full-chip heatmaps for commercial off-the-shelf microprocessors based on machine learning. The proposed post-silicon approach, named RealMaps, only uses the existing embedded temperature sensors and workload-independent utilization information, which are available in real-time. Moreover, RealMaps does not require any knowledge of the proprietary design details or manufacturing process-specific information of the chip. Consequently, the methods presented in this work can be implemented by either the original chip manufacturer or a third party, and are aimed at supplementing, rather than substituting, the temperature data sensed from the existing embedded sensors. The new approach starts with offline acquisition of accurate spatial and temporal heatmaps using an infrared thermal imaging setup while nominal working conditions are maintained on the chip. To build the dynamic thermal model, a temporal-aware long short-term memory (LSTM) neural network is trained with system-level features such as chip frequency, instruction counts, and other high-level performance metrics as inputs. Instead of a pixel-wise heatmap estimation, we perform a 2D spatial discrete cosine transformation (DCT) on the heatmaps so that they can be expressed with just a few dominant DCT coefficients. This allows the model to estimate just the dominant spatial features of the 2D heatmaps, rather than the entire heatmap images, making it significantly more efficient. Experimental results from two commercial chips show that RealMaps can estimate the full-chip heatmaps with 0.9 °C and 1.2 °C root-mean-square error, respectively, and each inference takes only 0.4 ms, which is well suited to real-time use. Compared to the state-of-the-art pre-silicon approach, RealMaps shows similar accuracy, but with much less computational cost.
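The DCT-truncation idea in the abstract above can be illustrated independently of RealMaps itself: a smooth thermal map is nearly fully described by its low-frequency DCT coefficients, so a model only needs to predict those few numbers. A minimal NumPy sketch (the grid size, hotspot shape, and number of kept coefficients are illustrative assumptions, not values from the paper):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (inverse is its transpose)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def compress_heatmap(h, keep):
    """2D DCT of heatmap h, keeping only the top-left keep x keep coefficients."""
    n, m = h.shape
    cn, cm = dct_matrix(n), dct_matrix(m)
    coeffs = cn @ h @ cm.T                # forward 2D DCT
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0              # dominant (low-frequency) block
    return cn.T @ (coeffs * mask) @ cm    # inverse DCT of truncated coefficients

# A smooth synthetic "heatmap": one hotspot on a 32x32 grid, in degrees C.
x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
heat = 60 + 15 * np.exp(-((x - 0.3) ** 2 + (y - 0.6) ** 2) / 0.05)

recon = compress_heatmap(heat, keep=6)
rmse = np.sqrt(np.mean((heat - recon) ** 2))
print(f"RMSE keeping 36 of 1024 coefficients: {rmse:.4f} C")
```

Because the synthetic map is smooth, 36 coefficients out of 1024 reconstruct it with sub-degree error, which is the property that lets a predictor target a handful of coefficients instead of whole images.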
  2. Abstract

    The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short‐term memory is fundamentally shaped by long‐term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to an artificial language, using a standard statistical learning exposure phase. Afterward, they recall strings of syllables that either follow the statistics of the artificial language or comprise the same syllables presented in a random order. We hypothesized that if individuals had chunked the artificial language into word‐like units, then the statistically structured items would be more accurately recalled relative to the random controls. Our results demonstrate that SICR effectively captures learning in both the auditory and visual modalities, with participants displaying significantly improved recall of the statistically structured items, and even recalling specific trigram chunks from the input. SICR also exhibits greater test–retest reliability in the auditory modality, and greater sensitivity to individual differences in both modalities, than the standard two‐alternative forced‐choice task. These results thereby provide key empirical support to the chunking account of statistical learning and contribute a valuable new tool to the literature.
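The structured-vs-random item contrast at the heart of SICR can be sketched concretely. A minimal Python sketch, assuming a hypothetical artificial language of four trisyllabic "words" (the syllables and list lengths below are illustrative, not the study's actual stimuli):

```python
import random

# Hypothetical artificial language: four trisyllabic "words" (syllable triplets),
# in the style of classic statistical-learning streams.
WORDS = [("tu", "pi", "ro"), ("go", "la", "bu"),
         ("bi", "da", "ku"), ("pa", "do", "ti")]

def structured_item(n_words=3, rng=random):
    """Recall item that respects the language's statistics: intact words in sequence."""
    return [syl for word in rng.sample(WORDS, n_words) for syl in word]

def random_item(n_words=3, rng=random):
    """Control item: the same syllables, shuffled so word boundaries are broken."""
    syllables = structured_item(n_words, rng)
    rng.shuffle(syllables)
    return syllables

rng = random.Random(0)
print("structured:", structured_item(rng=rng))
print("random:    ", random_item(rng=rng))
```

Both item types contain identical syllables, so any recall advantage for structured items must come from learned chunk boundaries rather than item content, which is the logic of the paradigm.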

     
  3. We studied the memory representations that control execution of action sequences by training rhesus monkeys (Macaca mulatta) to touch sets of five images in a predetermined arbitrary order (simultaneous chaining). In Experiment 1, we found that this training resulted in mental representations of ordinal position rather than learning associative chains, replicating the work of others. We conducted novel analyses of performance on probe tests consisting of two images “derived” from the full five-image lists (i.e., test B, D from list A→B→C→D→E). We found a “first item effect” such that monkeys responded most quickly to images that occurred early in the list in which they had been learned, indicating that monkeys covertly execute known lists mentally until an image on the screen matches the one stored in memory. Monkeys also made an ordinal comparison of the two images presented at test based on long-term memory of positional information, resulting in a “symbolic distance effect.” Experiment 2 indicated that ordinal representations were based on absolute, rather than on relative, positional information because subjects did not link two lists into one large list after linking training, unlike what occurs in transitive inference. We further examined the contents of working memory during list execution in Experiments 3 and 4 and found evidence for a prospective, rather than a retrospective, coding of position in the lists. These results indicate that serial expertise in simultaneous chaining results in robust absolute ordinal coding in long-term memory, with rapidly updating prospective coding of position in working memory during list execution.

     
  4. van den Berg, Ronald (Ed.)
    Categorical judgments can systematically bias the perceptual interpretation of stimulus features. However, it remained unclear whether categorical judgments directly modify working memory representations or, alternatively, generate these biases via an inference process downstream from working memory. To address this question, we ran two novel psychophysical experiments in which human subjects had to reverse their categorical judgments about a stimulus feature, if incorrect, before providing an estimate of the feature. If categorical judgments indeed directly altered sensory representations in working memory, subjects’ estimates should reflect some aspects of their initial (incorrect) categorical judgment in those trials. We found no traces of the initial categorical judgment. Rather, subjects seemed to be able to flexibly switch their categorical judgment if needed and use the correct corresponding categorical prior to properly perform feature inference. A cross-validated model comparison also revealed that feedback may lead to selective memory recall, such that only memory samples that are consistent with the categorical judgment are accepted for the inference process. Our results suggest that categorical judgments do not modify sensory information in working memory but rather act as top-down expectations in the subsequent sensory recall and inference process.
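The selective-recall account described above (only memory samples consistent with the categorical judgment enter the inference) amounts to rejection sampling from a truncated memory distribution. A minimal sketch under assumed parameters (the feature scale, boundary, noise level, and sample count below are illustrative, not fitted values from the study):

```python
import numpy as np

rng = np.random.default_rng(7)

def estimate_feature(stimulus, category_boundary=0.0, noise_sd=8.0, n_samples=200):
    """Selective-recall sketch: draw noisy memory samples of a stimulus feature
    (e.g., an orientation in degrees relative to a boundary at 0), keep only
    samples on the same side of the boundary as the correct categorical
    judgment, and average the survivors."""
    samples = stimulus + rng.normal(0.0, noise_sd, n_samples)
    correct_side = np.sign(stimulus - category_boundary)
    accepted = samples[np.sign(samples - category_boundary) == correct_side]
    return accepted.mean()

# A stimulus just right of the boundary: rejecting inconsistent samples
# pushes the estimate away from the boundary (a repulsive categorical bias).
print(estimate_feature(3.0))
```

Because samples on the wrong side of the boundary are discarded, the accepted distribution is truncated and its mean shifts away from the boundary, reproducing the qualitative repulsive bias that conditioning on a category predicts.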
  5.
    Abstract Standard procedures for capture–mark–recapture (CMR) modeling for the study of animal demography include running goodness-of-fit tests on a general starting model. A frequent reason for poor model fit is heterogeneity in local survival between individuals captured for the first time and those already captured or seen on previous occasions. This deviation is technically termed a transience effect. In specific cases, simple, single-state CMR modeling showing transients may allow researchers to assess the role of these transients in population dynamics. Transient individuals nearly always have a lower local survival probability, which may appear for a number of reasons. In most cases, transients arise due to permanent dispersal, higher mortality, or a combination of both. In the case of higher mortality, transients may be symptomatic of a cost of first reproduction. A few studies working at large spatial scales actually show that transients more often correspond to survival costs of first reproduction rather than to permanent dispersal, bolstering the interpretation of transience as a measure of costs of reproduction, since initial detections are often associated with first breeding attempts. Regardless of their cause, the loss of transients from a local population should lower the population growth rate. We review almost 1000 papers using CMR modeling and find that almost 40% of studies fitting the search criteria (N = 115) detected transients. Nevertheless, few researchers have considered the ecological or evolutionary meaning of the transient phenomenon. Only three studies from the reviewed papers considered transients to be a cost of first reproduction. We also analyze a long-term individual monitoring dataset (1988–2012) on a long-lived bird to quantify transients, and we use a life table response experiment (LTRE) to measure the consequences of transients at a population level.
As expected, population growth rate decreased when the environment became harsher while the proportion of transients increased. LTRE analysis showed that population growth can be substantially affected by changes in traits that are variable under environmental stochasticity and deterministic perturbations, such as recruitment, fecundity of experienced individuals, and transient probabilities. This occurred even though sensitivities and elasticities of these parameters were much lower than those for adult survival. The proportion of transients also increased with the strength of density-dependence. These results have implications for ecological and evolutionary studies and may stimulate other researchers to explore the ecological processes behind the occurrence of transients in capture–recapture studies. In population models, the inclusion of a specific state for transients may help to make more reliable predictions for endangered and harvested species. 
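The claim that losing transients lowers population growth can be made concrete with a toy two-stage projection model. A minimal sketch, assuming illustrative vital rates (the survival and fecundity values below are hypothetical, not estimates from the monitored bird population):

```python
import numpy as np

def growth_rate(p_transient, s_adult=0.9, fecundity=0.3):
    """Asymptotic growth rate (dominant eigenvalue) of a toy 2-stage model:
    newly detected breeders vs. experienced residents. A fraction p_transient
    of new breeders is lost (dies or permanently disperses) right after first
    detection; the survivors join the resident stage."""
    s_new = (1.0 - p_transient) * s_adult   # transients contribute no local survival
    A = np.array([[fecundity, fecundity],   # both stages recruit new breeders
                  [s_new,     s_adult]])    # survival into the resident stage
    return np.max(np.abs(np.linalg.eigvals(A)))

for p in (0.0, 0.2, 0.4):
    print(f"p_transient={p:.1f}  lambda={growth_rate(p):.3f}")
```

In this sketch the dominant eigenvalue (lambda) declines monotonically as the transient proportion rises, matching the expectation stated in the abstract; a dedicated transient state like this is also the kind of structure the authors suggest including in predictive population models.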