
Title: Chunks are not "Content-Free": Hierarchical Representations Preserve Perceptual Detail within Chunks.
Chunks allow us to use long-term knowledge to represent the world efficiently in working memory. Most views of chunking assume that using chunks results in the loss of specific perceptual detail, since the contents of a chunk are presumed to be decoded from long-term memory rather than to reflect the exact details of the item that was presented. However, in two experiments, we find that in situations where participants make use of chunks to improve visual working memory, access to instance-specific perceptual detail (which cannot be retrieved from long-term memory) increased rather than decreased. This supports an alternative view: chunks facilitate the encoding and retention of perceptual detail in memory as part of structured, hierarchical memories, rather than serving as mere “content-free” pointers. It also provides a strong contrast to accounts in which working memory capacity is assumed to be exhaustively described by the number of chunks remembered.
Authors:
Award ID(s):
1829434
Publication Date:
NSF-PAR ID:
10297810
Journal Name:
Proceedings of the Annual Conference of the Cognitive Science Society
Volume:
43
Page Range or eLocation-ID:
721-727
ISSN:
1069-7977
Sponsoring Org:
National Science Foundation
More Like this
  1. In this work, we present a novel approach to real-time tracking of full-chip heatmaps for commercial off-the-shelf microprocessors based on machine learning. The proposed post-silicon approach, named RealMaps, uses only the existing embedded temperature sensors and workload-independent utilization information, which are available in real time. Moreover, RealMaps does not require any knowledge of the proprietary design details or manufacturing-process-specific information of the chip. Consequently, the methods presented in this work can be implemented by the original chip manufacturer or a third party alike, and are aimed at supplementing, rather than substituting for, the temperature data sensed by the existing embedded sensors. The new approach starts with offline acquisition of accurate spatial and temporal heatmaps using an infrared thermal imaging setup while nominal working conditions are maintained on the chip. To build the dynamic thermal model, a temporally aware long short-term memory (LSTM) neural network is trained with system-level features such as chip frequency, instruction counts, and other high-level performance metrics as inputs. Instead of a pixel-wise heatmap estimation, we perform a 2D spatial discrete cosine transformation (DCT) on the heatmaps so that they can be expressed with just a few dominant DCT coefficients. This allows the model to be built to estimate just the dominant spatial features of the 2D heatmaps, rather than the entire heatmap images, making it significantly more efficient (an illustrative sketch of this compression step appears after this list). Experimental results from two commercial chips show that RealMaps can estimate the full-chip heatmaps with 0.9°C and 1.2°C root-mean-square error, respectively, and takes only 0.4 ms per inference, which is well suited to real-time use. Compared to the state-of-the-art pre-silicon approach, RealMaps shows similar accuracy, but with much less computational cost.
  2. We studied the memory representations that control execution of action sequences by training rhesus monkeys (Macaca mulatta) to touch sets of five images in a predetermined arbitrary order (simultaneous chaining). In Experiment 1, we found that this training resulted in mental representations of ordinal position rather than learning associative chains, replicating the work of others. We conducted novel analyses of performance on probe tests consisting of two images “derived” from the full five-image lists (i.e., test B, D from list A→B→C→D→E). We found a “first item effect” such that monkeys responded most quickly to images that occurred early in the list in which they had been learned, indicating that monkeys covertly execute known lists mentally until an image on the screen matches the one stored in memory. Monkeys also made an ordinal comparison of the two images presented at test based on long-term memory of positional information, resulting in a “symbolic distance effect.” Experiment 2 indicated that ordinal representations were based on absolute, rather than on relative, positional information because subjects did not link two lists into one large list after linking training, unlike what occurs in transitive inference. We further examined the contents of working memory during list execution in Experiments 3 and 4 and found evidence for a prospective, rather than a retrospective, coding of position in the lists. These results indicate that serial expertise in simultaneous chaining results in robust absolute ordinal coding in long-term memory, with rapidly updating prospective coding of position in working memory during list execution.

  3. Abstract Standard procedures for capture–mark–recapture (CMR) modelling for the study of animal demography include running goodness-of-fit tests on a general starting model. A frequent reason for poor model fit is heterogeneity in local survival between individuals captured for the first time and those already captured or seen on previous occasions. This deviation is technically termed a transience effect. In specific cases, simple, uni-state CMR modelling showing transients may allow researchers to assess the role of these transients in population dynamics. Transient individuals nearly always have a lower local survival probability, which may appear for a number of reasons. In most cases, transients arise due to permanent dispersal, higher mortality, or a combination of both. In the case of higher mortality, transients may be symptomatic of a cost of first reproduction. A few studies working at large spatial scales actually show that transients more often correspond to survival costs of first reproduction than to permanent dispersal, bolstering the interpretation of transience as a measure of costs of reproduction, since initial detections are often associated with first breeding attempts. Regardless of their cause, the loss of transients from a local population should lower the population growth rate. We review almost 1000 papers using CMR modelling and find that almost 40% of the studies fitting the search criteria (N = 115) detected transients. Nevertheless, few researchers have considered the ecological or evolutionary meaning of the transient phenomenon. Only three of the reviewed studies considered transients to be a cost of first reproduction. We also analyze a long-term individual monitoring dataset (1988–2012) on a long-lived bird to quantify transients, and we use a life table response experiment (LTRE) to measure the consequences of transients at a population level. As expected, the population growth rate decreased and the proportion of transients increased as the environment became harsher. LTRE analysis showed that population growth can be substantially affected by changes in traits that are variable under environmental stochasticity and deterministic perturbations, such as recruitment, fecundity of experienced individuals, and transient probabilities. This occurred even though the sensitivities and elasticities of these parameters were much lower than those for adult survival. The proportion of transients also increased with the strength of density dependence. These results have implications for ecological and evolutionary studies and may stimulate other researchers to explore the ecological processes behind the occurrence of transients in capture–recapture studies. In population models, the inclusion of a specific state for transients may help to make more reliable predictions for endangered and harvested species.
  4. van den Berg, Ronald (Ed.)
    Categorical judgments can systematically bias the perceptual interpretation of stimulus features. However, it remained unclear whether categorical judgments directly modify working memory representations or, alternatively, generate these biases via an inference process downstream from working memory. To address this question, we ran two novel psychophysical experiments in which human subjects had to reverse their categorical judgments about a stimulus feature, if incorrect, before providing an estimate of the feature. If categorical judgments indeed directly altered sensory representations in working memory, subjects’ estimates should reflect some aspects of their initial (incorrect) categorical judgment in those trials. We found no traces of the initial categorical judgment. Rather, subjects seemed able to flexibly switch their categorical judgment if needed and use the correct corresponding categorical prior to properly perform feature inference. A cross-validated model comparison also revealed that feedback may lead to selective memory recall, such that only memory samples consistent with the categorical judgment are accepted for the inference process. Our results suggest that categorical judgments do not modify sensory information in working memory but rather act as top-down expectations in the subsequent sensory recall and inference process.
  5. Abstract: Jury notetaking can be controversial despite evidence suggesting benefits for recall and understanding. Research on notetaking has historically focused on the deliberation process. Yet little research explores the notes themselves. We developed a 10-item coding guide to explore what jurors take notes on (e.g., simple vs. complex evidence) and how they take notes (e.g., gist vs. specific representation). In general, jurors made gist representations of simple and complex information in their notes. This finding is consistent with Fuzzy Trace Theory (Reyna & Brainerd, 1995) and suggests notes may serve as a general memory aid rather than a verbatim representation.
Summary: The practice of jury notetaking in the courtroom is often contested. Some states allow it (e.g., Nebraska: State v. Kipf, 1990), while others forbid it (e.g., Louisiana: La. Code of Crim. Proc., Art. 793). Some argue notes may serve as a memory aid, increase juror confidence during deliberation, and help jurors engage in the trial (Hannaford & Munsterman, 2001; Heuer & Penrod, 1988, 1994). Others argue notetaking may distract jurors from listening to evidence, that juror notes may be given undue weight, and that those who took notes may dictate the deliberation process (Dann, Hans, & Kaye, 2005). While research has evaluated the efficacy of juror notes for evidence comprehension, little work has explored the specific content of juror notes. In a similar project on which we build, Dann, Hans, and Kaye (2005) found jurors took on average 270 words of notes each, with 85% including references to jury instructions in their notes. In the present study we use a content analysis approach to examine how jurors take notes about simple and complex evidence. We were particularly interested in how jurors captured gist and specific (verbatim) information in their notes, as these have different implications for information recall during deliberation. According to Fuzzy Trace Theory (Reyna & Brainerd, 1995), people extract “gist” or qualitative meaning from information, as well as exact, verbatim representations. Although both are important for helping people make well-informed judgments, gist-based understandings are purported to be even more important than verbatim understanding (Reyna, 2008; Reyna & Brainerd, 2007). As such, it could be useful to examine how laypeople represent information in their notes during deliberation of evidence.
Methods: Prior to watching a 45-minute mock bank robbery trial, jurors were given a pen and notepad and instructed they were permitted to take notes. The evidence included testimony from the defendant, witnesses, and expert witnesses from prosecution and defense. Expert testimony described complex mitochondrial DNA (mtDNA) evidence. The present analysis consists of pilot data representing 2,733 lines of notes from 52 randomly selected jurors across 41 mock juries. Our final sample for presentation at AP-LS will consist of all 391 juror notes in our dataset. Based on previous research exploring jury notetaking, as well as our specific interest in gist vs. specific encoding of information, we developed a coding guide to quantify juror notetaking behaviors. Four researchers independently coded a subset of notes. Coders achieved acceptable interrater reliability (Cronbach’s alpha = .80–.92 on all variables across 20% of cases). Prior to AP-LS, we will link juror notes with how they discuss scientific and non-scientific evidence during jury deliberation.
Coding: Note length. Before coding for content, coders counted lines of text. Each notepad line with at minimum one complete word was coded as a line of text. Gist information vs. specific information. Any line referencing evidence was coded as gist or specific. We coded gist information as information that did not contain any specific details but summarized the meaning of the evidence (e.g., “bad, not many people excluded”). Specific information was coded as such if it contained a verbatim descriptive (e.g., “<1 of people could be excluded”). We further coded whether this information was related to non-scientific evidence or to the scientific DNA evidence. Mentions of DNA evidence vs. other evidence. We were specifically interested in whether jurors mentioned the DNA evidence and how they captured complex evidence. When DNA evidence was mentioned, we coded the content of the DNA reference. Mentions of the characteristics of mtDNA vs. nDNA, the DNA match process or who could be excluded, heteroplasmy, references to database size, and other references were coded. Reliability. When referencing DNA evidence, we were interested in whether jurors mentioned the evidence’s reliability. Any specific mention of the reliability of DNA evidence was noted (e.g., “MT DNA is not as powerful, more prone to error”). Expert qualification. Finally, we were interested in whether jurors noted an expert’s qualifications. All references were coded (e.g., “Forensic analyst”).
Results: On average, jurors took 53 lines of notes (range: 3–137 lines). Most (83%) mentioned jury instructions before moving on to case-specific information. The majority of references to evidence were gist references (54%), focusing equally (50%) on non-scientific evidence and scientific expert testimony. When jurors encoded information using specific references (46%), they likewise referenced non-scientific evidence and expert testimony equally (50%). Thirty-three percent of lines were devoted to expert testimony, with every juror including at least one such line. References to the DNA evidence usually focused on who could be excluded from the FBI’s database (43%), followed by references to differences between mtDNA and nDNA (30%) and mentions of the size of the database (11%). Less frequently, references to DNA evidence focused on heteroplasmy (5%). Of those references that did not fit into a coding category (11%), most focused on the DNA extraction process, general information about DNA, and the uniqueness of DNA. We further coded references to DNA reliability (15%) as well as references to specific statistical information (14%). Finally, 40% of jurors made reference to an expert’s qualifications.
Conclusion: Jury note content analysis can reveal important information about how jurors capture trial information (e.g., gist vs. verbatim), what evidence they consider important, and what they consider relevant and irrelevant. In our case, it appeared jurors largely created gist representations of information that focused equally on non-scientific evidence and scientific expert testimony. This finding suggests notetaking may serve not only to represent information verbatim, but also, and perhaps mostly, as a general memory aid summarizing the meaning of evidence. Further, jurors’ references to evidence tended to be equally focused on the non-scientific evidence and the scientifically complex DNA evidence. This observation suggests jurors may attend just as much to non-scientific evidence as they do to complex scientific evidence in cases involving complicated evidence, a pattern that might inform future work on understanding how jurors interpret evidence in cases with complex information.
Learning objective: Participants will be able to describe emerging evidence about how jurors take notes during trial.
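
Item 1 above hinges on representing each full-chip heatmap by a handful of dominant 2D DCT coefficients, so that a model only has to estimate a few numbers per frame instead of every pixel. The following is a minimal, illustrative sketch of that compression step alone, assuming NumPy/SciPy; the heatmap size, the number of retained coefficients, and all names in the code are assumptions made for illustration, not details taken from the paper.

    # Illustrative sketch (not from the paper): keep only the low-frequency 2D DCT
    # coefficients of a heatmap and reconstruct an approximation from them.
    import numpy as np
    from scipy.fft import dctn, idctn

    def dominant_dct(heatmap, k=8):
        """Top-left k x k block of the 2D DCT: the dominant low-frequency coefficients."""
        return dctn(heatmap, norm="ortho")[:k, :k]

    def reconstruct(coeffs, shape):
        """Approximate heatmap rebuilt from the retained coefficients via inverse 2D DCT."""
        full = np.zeros(shape)
        k = coeffs.shape[0]
        full[:k, :k] = coeffs
        return idctn(full, norm="ortho")

    # Smooth synthetic 64 x 64 "heatmap" (degrees C) with one hot spot, for illustration only.
    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    heatmap = 45.0 + 10.0 * np.exp(-((x - 0.3) ** 2 + (y - 0.6) ** 2) / 0.05)

    # 8 x 8 = 64 coefficients stand in for 64 x 64 = 4096 pixels.
    approx = reconstruct(dominant_dct(heatmap), heatmap.shape)
    print("RMSE (C):", np.sqrt(np.mean((heatmap - approx) ** 2)))

In the approach the abstract describes, a model would predict these few coefficients from system-level features, and the inverse transform would then recover an approximate full-chip heatmap.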