Title: Individual differences in the propensity to verbalize: The Internal Representations Questionnaire.
Many people report experiencing their thoughts in the form of natural language, i.e., they experience ‘inner speech’. At present, there exist few ways of quantifying this tendency, making it difficult to investigate whether the propensity to verbalize predicts objective cognitive function or whether it is merely epiphenomenal. We present a new instrument—the Internal Representations Questionnaire (IRQ)—for quantifying the subjective format of internal thoughts. The primary goal of the IRQ is to assess whether people vary in their stated use of visual and verbal strategies in their internal representations. Exploratory analyses revealed four factors: a propensity to form visual images, a propensity to form verbal images, a general mental-manipulation factor, and an orthographic-imagery factor. Here, we describe the properties of the IRQ and report an initial test of its predictive validity by relating it to a speeded picture/word verification task involving pictorial, written, and auditory verbal cues.
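The exploratory analysis described above recovers latent factors from item responses. A minimal sketch of that kind of analysis, using simulated data (the respondent count, item count, and scoring here are made up, not the IRQ's actual design):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 200 respondents answering 36 Likert-style items,
# generated from four underlying latent factors.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 4))                  # four latent factors
loadings = rng.normal(size=(4, 36))                 # item-factor loadings
responses = latent @ loadings + rng.normal(scale=0.5, size=(200, 36))

# Fit a four-factor model with varimax rotation for interpretability.
fa = FactorAnalysis(n_components=4, rotation="varimax")
scores = fa.fit_transform(responses)                # per-respondent factor scores
print(scores.shape)  # (200, 4)
```

In practice the number of factors is not fixed in advance; it is chosen from the data (e.g., via scree plots or parallel analysis), which is what makes the analysis exploratory.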
Award ID(s):
1734260
NSF-PAR ID:
10074359
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018)
Page Range / eLocation ID:
2358–2363
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Denison, S. ; Mack, M. ; Xu, Y. ; Armstrong, B.C. (Ed.)
    Do people perceive shapes to be similar based purely on their physical features? Or is visual similarity influenced by top-down knowledge? In the present studies, we demonstrate that top-down information – in the form of verbal labels that people associate with visual stimuli – predicts visual similarity as measured using subjective (Experiment 1) and objective (Experiment 2) tasks. In Experiment 1, shapes that were previously calibrated to be (putatively) perceptually equidistant were more likely to be grouped together if they shared a name. In Experiment 2, more nameable shapes were easier for participants to discriminate from other images, again controlling for their perceptual distance. We discuss what these results mean for constructing visual stimulus spaces that are perceptually uniform, and consider the theoretical implications of the fact that perceptual similarity is sensitive to top-down information such as the ease with which an object can be named.
  2. This paper presents a systematic review of the empirical literature that uses dual-task interference methods for investigating the on-line involvement of language in various cognitive tasks. In these studies, participants perform some primary task X putatively recruiting linguistic resources while also engaging in a secondary, concurrent task. If performance on the primary task decreases under interference, there is evidence for language involvement in the primary task. We assessed studies (N = 101) reporting at least one experiment with verbal interference and at least one control task (either primary or secondary). We excluded papers with an explicitly clinical, neurological, or developmental focus. The primary tasks identified include categorization, memory, mental arithmetic, motor control, reasoning (verbal and visuospatial), task switching, theory of mind, visual change, and visuospatial integration and wayfinding. Overall, the present review found that internal language is likely to play a facilitative role in memory and categorization when items to be remembered or categorized have readily available labels, when inner speech can act as a form of behavioral self-cuing (inhibitory control, task set reminders, verbal strategy), and when inner speech is plausibly useful as “workspace,” for example, for mental arithmetic. There is less evidence for the role of internal language in cross-modal integration, reasoning relying on a high degree of visual detail or items low on nameability, and theory of mind. We discuss potential pitfalls and suggestions for streamlining and improving the methodology. 
  3. Denison, S. ; Mack, M. ; Xu, Y. ; Armstrong, B.C. (Ed.)
    What affects whether one person represents an item in a similar way to another person? We examined the role of verbal labels in promoting representational alignment. Three groups of participants sorted novel shapes on perceived similarity. Prior to sorting, participants in two of the groups were pre-exposed to the shapes using a simple visual matching task, and in one of these groups, shapes were accompanied by one of two novel category labels. Exposure with labels led people to represent the shapes in a more categorical way and increased alignment between sorters, despite the two categories being visually distinct and participants in both pre-exposure conditions receiving identical visual experience of the shapes. Results hint that labels play a role in aligning people's mental representations, even in the absence of communication.
  4. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic aspects (route planning) of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used in other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With recent growth and advances of augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver’s forward field of view, allowing them to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge acquisition are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function: individuals are able to navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation.
    Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues), to assess the plausibility of and justification for investment in generating larger-FOV AR HUDs with potentially multiple focal planes.
    Methods: This study employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10-15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene-ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time.
    Results: NASA-TLX performance subscale ratings revealed that participants felt they performed better during the world-relative condition, but at a higher rate of perceived workload. However, in terms of perceived workload, results suggest there is no significant difference between interface design conditions. Landmark knowledge results suggest that the mean number of remembered scenes in both conditions is statistically similar, indicating participants using both interface designs remembered the same proportion of on-route scenes. Deviance analyses showed that only maneuver direction had an influence on landmark knowledge testing performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants is similar under both conditions. Finally, participants exhibited poorer performance on the route knowledge task than on the landmark knowledge task (independent of HUD interface design).
    Conclusions: This paper described a driving simulator study that evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have similar impacts on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
  5. Identity, or how people choose to define themselves, is gaining traction as an explanation for who pursues and persists in engineering. A number of quantitative studies have developed scales for predicting engineering identity in undergraduate students. However, the outcome measure of identity is sometimes based on a single item. In this paper, we present the results of a new two-item scale. The scale is adapted from an existing measure of identification with an organization that was developed by Bergami and Bagozzi [1] and refined by Bartel [2]. The measure focuses on the “cognitive (i.e., self-categorization) component of identification” (p. 556), and has been found to have high convergent validity with another, rigorous measure of identification with an organization or other entity created by Mael and Ashforth [3]. This measure utilizes one primarily visual and one verbal item to assess the extent to which an individual cognitively categorizes himself or herself as an engineer. The scale was administered to 1528 engineering undergraduate students during the 2016-2017 academic year. Internal consistency of the new engineering identity scale, as measured by Cronbach’s alpha, is 0.84. This new scale is an important step toward refining quantitative measures of, and the study of, engineering identity development in undergraduate students and other populations. 
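The internal consistency reported above (Cronbach's alpha = 0.84) follows a simple formula: alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores). A minimal sketch of the computation, using made-up data rather than the study's:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Two perfectly correlated items yield alpha = 1.0
demo = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(cronbach_alpha(demo))  # 1.0
```

Note that with only two items, as in the scale described here, alpha is equivalent to applying the Spearman-Brown formula to the inter-item correlation.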