Title: Age-Related Differences in the Influence of Category Expectations on Episodic Memory in Early Childhood
Previous research evaluating the influence of category knowledge on memory found that children, like adults, rely on category information to facilitate recall (Duffy, Huttenlocher, & Crawford, 2006). A model that combines category and target information (Integrative) provides a superior fit to preschoolers' recall data compared to a category-only (Prototype) and a target-only (Target) model (Macias, Persaud, Hemmer, & Bonawitz, in revision). Utilizing data and computational approaches from Macias et al. (in revision), we explore whether individual and age-related differences persist in the model fits. Results revealed that a greater proportion of preschoolers' recall was best fit by the Prototype model, and that trials where children displayed individuating behaviors, such as spontaneous labeling, were also best fit by the Prototype model. Furthermore, the best-fitting model varied by age. This work demonstrates a rich complexity and variation in recall between developmental groups that can be illuminated by computationally evaluating individual differences.
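The three candidate models can be illustrated with a minimal sketch of per-trial model comparison by log-likelihood. This is an illustration only, not the actual model from Macias et al.; the stimulus values, simulated responses, noise level, and integration weight below are all hypothetical.

```python
import numpy as np

def log_lik(pred, recalled, sigma):
    """Gaussian log-likelihood of recalled values around a model's predictions."""
    return float(np.sum(-0.5 * ((recalled - pred) / sigma) ** 2
                        - np.log(sigma * np.sqrt(2 * np.pi))))

# Hypothetical stimuli on one continuous dimension (e.g., object size);
# every number here is illustrative, not the study's data.
targets = np.array([2.0, 4.0, 6.0, 8.0])    # true studied values
prototype = targets.mean()                  # category mean (5.0)
recalled = np.array([2.8, 4.4, 5.9, 7.4])   # simulated responses, biased toward the mean
sigma = 0.8                                 # assumed recall noise
w = 0.6                                     # assumed weight on the target

models = {
    "Target": targets,                                # target information only
    "Prototype": np.full_like(targets, prototype),    # category information only
    "Integrative": w * targets + (1 - w) * prototype, # weighted blend of both
}
fits = {name: log_lik(pred, recalled, sigma) for name, pred in models.items()}
best = max(fits, key=fits.get)
print(best)
```

Because the simulated responses are pulled partway toward the category mean, the Integrative model fits them best here; fitting each child's trials this way and tallying the winning model is one simple route to the individual-differences comparison the abstract describes.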
Award ID(s): 1911656
NSF-PAR ID: 10142867
Journal Name: 41st Annual Meeting of the Cognitive Science Society
Sponsoring Org: National Science Foundation
More Like this
  1. Drawing, as a skill, is closely tied to many creative fields, and it is a unique practice for every individual. Drawing has been shown to improve cognitive and communicative abilities, such as visual communication, problem-solving skills, students' academic achievement, awareness of and attention to surrounding details, and analytical skills. Drawing also stimulates both sides of the brain and improves peripheral skills of writing, 3-D spatial recognition, critical thinking, and brainstorming. People are often exposed to drawing as children, drawing their families, their houses, animals, and, most notably, their imaginative ideas. These skills develop naturally over time to some extent; however, while drawing is a basic skill, its mastery requires extensive practice and can be significantly impacted by an individual's self-efficacy. Sketchtivity is an AI tool developed by Texas A&M University to facilitate the growth of drawing skills and track performance. Sketching skill development depends in part on students' self-efficacy associated with their drawing abilities. Gauging the drawing self-efficacy of individuals is critical to understanding the impact of practice with this new instrument, especially in contrast to traditional practice methods, and may also be useful for other researchers, educators, and technologists. This study reports the development and initial validation of a new 13-item measure that assesses perceived drawing self-efficacy. The 13 items were developed based on Bandura's guide for constructing self-efficacy scales. The participants in the study consisted of 222 high school students from engineering, art, and pre-calculus classes. Internal consistency of the 13 observed items was found to be very high (Cronbach's alpha: 0.943), indicating high reliability of the scale.
Exploratory Factor Analysis was performed to further investigate the variance among the 13 observed items, to find the underlying latent factors that influenced them, and to see whether the items needed revision. We found that a three-factor model was the best fit for our data, given fit statistics and model interpretability. The factors are: Factor 1, self-efficacy with respect to drawing specific objects; Factor 2, self-efficacy with respect to drawing practically to solve problems, communicate with others, and brainstorm ideas; Factor 3, self-efficacy with respect to drawing to create, express ideas, and use one's imagination. An alternative four-factor model is also discussed. The purpose of our study is to inform interventions that increase self-efficacy. We believe this assessment will be especially valuable for education researchers who implement AI-based tools to measure drawing skills. This initial validity study shows promising results for a new measure of drawing self-efficacy. Further validation with new populations and drawing classes, along with further psychometric testing of item-level performance, is needed to support its use. In the future, this self-efficacy assessment could be used by teachers and researchers to guide instructional interventions meant to increase drawing self-efficacy.
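The internal-consistency statistic reported above can be computed directly from an item-score matrix. A minimal sketch follows; the simulated 222×13 response matrix is hypothetical, standing in for the real survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: simulated responses for a 13-item, 7-point scale,
# with one shared latent trait driving all items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(222, 1))
noise = rng.normal(scale=0.5, size=(222, 13))
scores = np.clip(np.round(4 + 1.5 * latent + noise), 1, 7)
print(round(cronbach_alpha(scores), 3))
```

Because the simulated items share a strong common factor, alpha comes out well above the conventional .70 threshold, mirroring the pattern (though not the exact value) reported for the real scale.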
  2. The purpose of this project is to identify how to measure various types of institutional support as they pertain to underrepresented and underserved populations in colleges of engineering and science. We ground this investigation in the Model of Co-Curricular Support, a conceptual framework that emphasizes the breadth of assistance currently used to support undergraduate students in engineering and science. The results from our study will help prioritize the elements of institutional support that should appear somewhere in a college's suite of support efforts to improve engineering and science learning environments and to design effective programs, activities, and services. Our poster will present: 1) an overview of the instrument development process; 2) evaluation of the prototype for face and content validity by students and experts; and 3) instrument revision and data collection to determine test validity and reliability across varied institutional contexts. In evaluating the initial survey, we included multiple rounds of feedback, receiving input from 46 participants (38 students, 8 administrators). We intentionally sampled for representation across engineering and science colleges; gender identity; race/ethnicity; international student status; and transfer student status. The instrument was deployed for the first time in Spring 2018 to the institutional project partners at three universities. It was completed by 722 students: 598 from University 1, 51 from University 2, and 123 from University 3. We tested the construct validity of these responses using a minimum-residuals exploratory factor analysis and correlation. A preliminary data analysis shows evidence of differences in how college of engineering and college of science students perceive the types of support they experience. The findings of this preliminary analysis were used to revise the instrument further prior to the next round of testing.
Our target sample for the next instrument deployment is 2,000 students, so we will survey approximately 13,000 students based on an anticipated 15% response rate. Following data collection, we will use confirmatory factor analysis to continue establishing construct validity and to report on the stability of constructs emerging from our piloting on a new student sample. We will also investigate differences across these constructs by student subpopulations.
  3. Abstract: Jury notetaking can be controversial despite evidence suggesting benefits for recall and understanding. Research on notetaking has historically focused on the deliberation process, yet little research explores the notes themselves. We developed a 10-item coding guide to explore what jurors take notes on (e.g., simple vs. complex evidence) and how they take notes (e.g., gist vs. specific representation). In general, jurors made gist representations of simple and complex information in their notes. This finding is consistent with Fuzzy Trace Theory (Reyna & Brainerd, 1995) and suggests notes may serve as a general memory aid rather than a verbatim representation.

Summary: The practice of jury notetaking in the courtroom is often contested. Some states allow it (e.g., Nebraska: State v. Kipf, 1990), while others forbid it (e.g., Louisiana: La. Code of Crim. Proc., Art. 793). Some argue notes may serve as a memory aid, increase juror confidence during deliberation, and help jurors engage in the trial (Hannaford & Munsterman, 2001; Heuer & Penrod, 1988, 1994). Others argue notetaking may distract jurors from listening to evidence, that juror notes may be given undue weight, and that those who took notes may dictate the deliberation process (Dann, Hans, & Kaye, 2005). While research has evaluated the efficacy of juror notes on evidence comprehension, little work has explored the specific content of juror notes. In a similar project on which we build, Dann, Hans, and Kaye (2005) found jurors took on average 270 words of notes each, with 85% including references to jury instructions in their notes. In the present study we use a content analysis approach to examine how jurors take notes about simple and complex evidence. We were particularly interested in how jurors captured gist and specific (verbatim) information in their notes, as the two have different implications for information recall during deliberation.
According to Fuzzy Trace Theory (Reyna & Brainerd, 1995), people extract "gist," or qualitative meaning, from information, as well as exact, verbatim representations. Although both are important for helping people make well-informed judgments, gist-based understandings are purported to be even more important than verbatim understanding (Reyna, 2008; Reyna & Brainerd, 2007). As such, it could be useful to examine how laypeople represent information in their notes during deliberation of evidence.

Methods: Prior to watching a 45-minute mock bank robbery trial, jurors were given a pen and notepad and instructed that they were permitted to take notes. The evidence included testimony from the defendant, witnesses, and expert witnesses from the prosecution and defense. Expert testimony described complex mitochondrial DNA (mtDNA) evidence. The present analysis consists of pilot data representing 2,733 lines of notes from 52 randomly selected jurors across 41 mock juries. Our final sample for presentation at AP-LS will consist of all 391 juror notes in our dataset. Based on previous research exploring jury notetaking, as well as our specific interest in gist vs. specific encoding of information, we developed a coding guide to quantify juror note-taking behaviors. Four researchers independently coded a subset of notes. Coders achieved acceptable interrater reliability (Cronbach's alpha = .80-.92 on all variables across 20% of cases). Prior to AP-LS, we will link juror notes with how jurors discuss scientific and non-scientific evidence during jury deliberation.

Coding: Note length. Before coding for content, coders counted lines of text. Each notepad line with at minimum one complete word was coded as a line of text. Gist vs. specific information. Any line referencing evidence was coded as gist or specific. We coded gist information as information that did not contain any specific details but summarized the meaning of the evidence (e.g., "bad, not many people excluded").
Specific information was coded as such if it contained a verbatim descriptive (e.g., "<1 of people could be excluded"). We further coded whether this information related to non-scientific evidence or to the scientific DNA evidence. Mentions of DNA evidence vs. other evidence. We were specifically interested in whether jurors mentioned the DNA evidence and how they captured complex evidence. When DNA evidence was mentioned, we coded the content of the DNA reference: mentions of the characteristics of mtDNA vs. nDNA, the DNA match process or who could be excluded, heteroplasmy, references to database size, and other references. Reliability. When referencing DNA evidence, we were interested in whether jurors mentioned the evidence's reliability. Any specific mention of the reliability of DNA evidence was noted (e.g., "MT DNA is not as powerful, more prone to error"). Expert qualification. Finally, we were interested in whether jurors noted an expert's qualifications. All references were coded (e.g., "Forensic analyst").

Results: On average, jurors took 53 lines of notes (range: 3-137 lines). Most (83%) mentioned jury instructions before moving on to case-specific information. The majority of references to evidence were gist references (54%), focusing on non-scientific evidence and scientific expert testimony equally (50%). When jurors encoded information using specific references (46%), they likewise referenced non-scientific evidence and expert testimony equally (50%). Thirty-three percent of lines were devoted to expert testimony, with every juror including at least one line. References to the DNA evidence usually focused on who could be excluded from the FBI's database (43%), followed by references to differences between mtDNA and nDNA (30%) and mentions of the size of the database (11%). Less frequently, references to DNA evidence focused on heteroplasmy (5%).
Of those references that did not fit into a coding category (11%), most focused on the DNA extraction process, general information about DNA, and the uniqueness of DNA. We further coded references to DNA reliability (15%) as well as references to specific statistical information (14%). Finally, 40% of jurors made reference to an expert's qualifications.

Conclusion: Jury note content analysis can reveal important information about how jurors capture trial information (e.g., gist vs. verbatim), what evidence they consider important, and what they consider relevant and irrelevant. In our case, it appeared jurors largely created gist representations of information that focused equally on non-scientific evidence and scientific expert testimony. This finding suggests notetaking may serve not only to represent information verbatim, but also, and perhaps mostly, as a general memory aid summarizing the meaning of evidence. Further, jurors' references to evidence tended to be equally focused on the non-scientific evidence and the scientifically complex DNA evidence. This observation suggests jurors may attend just as much to non-scientific evidence as they do to complex scientific evidence in cases involving complicated evidence, an observation that might inform future work on understanding how jurors interpret evidence in cases with complex information.

Learning objective: Participants will be able to describe emerging evidence about how jurors take notes during trial.
  4. Abstract: Categorical induction abilities are robust in typically developing (TD) preschoolers, while children with Autism Spectrum Disorders (ASD) frequently perform inconsistently on tasks asking for the transference of traits from a known category member to a new example based on shared category membership. Here, TD five-year-olds and six-year-olds with ASD participated in a categorical induction task; the TD children performed significantly better and more consistently than the children with ASD. Concurrent verbal and nonverbal tests were not significant correlates; however, the TD children's shape-bias performance at two years of age was significantly positively predictive of categorical induction performance at age five. The shape bias, the tendency to extend a novel label to other objects of the same shape during word learning, appears linked with categorical induction ability in TD children, suggesting a common underlying skill and a consistent developmental trajectory. Word learning and categorical induction appear uncoupled in children with ASD.
  5. The ultimate goal of this project is to support preschool children's learning about data in an applied way that allows children to leverage their existing mathematical knowledge (i.e., counting, sorting, classifying, comparing) and apply it to answering authentic, developmentally appropriate research questions with data. To accomplish this goal, a design-based research approach [1] was used to develop and test a classroom-based preschool intervention that includes hands-on, play-based investigations with a digital app that supports and scaffolds the investigation process for teachers and children. This formative study was part of a codesign process with teachers to elicit feedback on the extent to which the series of investigations focused on data collection and analysis (DCA) and the teacher-facing app were (a) developmentally appropriate, (b) aligned with current preschool curricula and routines, (c) feasible to implement, and (d) included design elements and technology affordances teachers felt were useful and anticipated would promote learning. Researchers conducted in-depth interviews (n=10) and an online survey (n=19) with preschool teachers. Findings suggest that teaching preschoolers how to collect and analyze data in a hands-on, play-based, and developmentally appropriate way is feasible and desirable for preschool teachers. Specifically, teachers reported that the initial conceptualization of the investigations was developmentally appropriate, aligned with existing curricular activities and goals, and adaptable to the age and developmental readiness of young children, and that the affordances of the technology are likely to allow preschool children to engage meaningfully in data collection, visualization, and analysis. Findings also suggest that this approach to supporting preschool teachers and children in learning about and conducting DCA merits further study to ensure productive curricular implementation that positively influences preschoolers' learning.
These findings were used to revise the investigations and app, which showed positive outcomes when used in classrooms [2]. This work adds to the scant literature on DCA learning for preschoolers and provides insights into the best ways to integrate technology into the classroom.