Search for: All records

Award ID contains: 1735225


  1. Abstract

    The present article investigated the composition of different joint gaze components used to operationalize various types of coordinated attention between parents and infants and which types of coordinated attention were associated with future vocabulary size. Twenty‐five 9‐month‐old infants and their parents wore head‐mounted eye trackers as they played with objects together. With high‐density gaze data, a variety of coordinated attention bout types were quantitatively measured by combining different gaze components, such as mutual gaze, joint object looks, face looks, and triadic gaze patterns. The key components of coordinated attention that were associated with vocabulary size at 12 and 15 months included the simultaneous combination of parent triadic gaze and infant object looking. The results from this article are discussed in terms of the importance of parent attentional monitoring and infant sustained attention for language development.
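    The coordinated-attention bouts described above amount to finding time windows where two gaze streams overlap. The sketch below shows one way to compute such overlaps from interval lists; the function names and the 0.5 s minimum-duration threshold are illustrative assumptions, not parameters from the article.

```python
def intersect_intervals(a, b):
    """Return the overlapping sub-intervals of two sorted interval lists.

    Each list holds (start, end) tuples in seconds, e.g. periods when the
    parent shows triadic gaze and periods when the infant looks at an object.
    """
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:                      # the two gaze states co-occur
            out.append((lo, hi))
        # advance whichever interval ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out


def coordinated_bouts(parent_triadic, infant_object, min_dur=0.5):
    """Keep only co-occurrences at least `min_dur` seconds long (assumed cutoff)."""
    return [(s, e) for s, e in intersect_intervals(parent_triadic, infant_object)
            if e - s >= min_dur]
```

    For example, a parent triadic-gaze stream of [(0.0, 2.0), (3.0, 5.0)] and an infant object-look stream of [(1.0, 4.0)] yield two bouts, (1.0, 2.0) and (3.0, 4.0).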

  2. Abstract

    Previous work suggests that auditory–vestibular interactions, which emerge during bodily movement to music, can influence the perception of musical rhythm. In a seminal study on the ontogeny of musical rhythm, Phillips‐Silver and Trainor (2005) found that bouncing infants to an unaccented rhythm influenced infants’ perceptual preferences for accented rhythms that matched the rate of bouncing. In the current study, we ask whether nascent, diffuse coupling between auditory and motor systems is sufficient to bootstrap short‐term Hebbian plasticity in the auditory system and explain infants’ preferences for accented rhythms thought to arise from auditory–vestibular interactions. First, we specify a nonlinear, dynamical system in which two oscillatory neural networks, representing developmentally nascent auditory and motor systems, interact through weak, non‐specific coupling. The auditory network was equipped with short‐term Hebbian plasticity, allowing the auditory network to tune its intrinsic resonant properties. Next, we simulate the effect of vestibular input (e.g., infant bouncing) on infants’ perceptual preferences for accented rhythms. We found that simultaneous auditory–vestibular training shaped the model's response to musical rhythm, enhancing vestibular‐related frequencies in auditory‐network activity. Moreover, simultaneous auditory–vestibular training, relative to auditory‐ or vestibular‐only training, facilitated short‐term auditory plasticity in the model, producing stronger oscillator connections in the auditory network. Finally, when tested on a musical rhythm, models which received simultaneous auditory–vestibular training, but not models that received auditory‐ or vestibular‐only training, resonated strongly at frequencies related to their “bouncing,” a finding qualitatively similar to infants’ preferences for accented rhythms that matched the rate of infant bouncing.
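    The core dynamic — weak, non-specific coupling between oscillatory networks plus Hebbian strengthening when activity aligns — can be illustrated with a drastically reduced toy: a single "auditory" phase oscillator weakly driven by a "vestibular" one, with a plastic coupling weight. All parameters and the one-oscillator-per-network simplification are assumptions for illustration, not the article's model.

```python
import math

def simulate(steps=20000, dt=0.001, f_aud=2.0, f_vest=2.0,
             eps=0.05, lr=0.01):
    """Toy sketch of auditory-vestibular training with Hebbian plasticity.

    th_a / th_v are the phases (radians) of the auditory and vestibular
    oscillators; w is a plastic coupling weight that grows when the two
    oscillate in phase and slowly decays otherwise.
    """
    th_a, th_v, w = 0.0, 0.0, 0.1
    for _ in range(steps):
        # weak, non-specific coupling from vestibular to auditory
        th_a += dt * (2 * math.pi * f_aud + (eps + w) * math.sin(th_v - th_a))
        th_v += dt * (2 * math.pi * f_vest)
        # Hebbian rule: strengthen when phases align, with slow decay
        w += dt * lr * (math.cos(th_v - th_a) - 0.1 * w)
    return w
```

    With matched frequencies (the analogue of simultaneous auditory-vestibular training) the oscillators stay phase-locked and the connection strengthens; with mismatched frequencies the phase difference drifts, the Hebbian term averages out, and the weight stays near baseline — qualitatively mirroring the stronger oscillator connections after simultaneous training.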

  3. Abstract

    This review examines the role of auditory training on speech adaptation for cochlear implant users. A current limitation of the existing evidence base is the failure to adequately account for wide variability in speech perception outcomes following implantation. While many preimplantation factors contribute to the variance observed in outcomes, formal auditory training has been proposed as a way to maximize speech comprehension benefits for cochlear implant users. We adopt an interdisciplinary perspective and focus on integrating the clinical rehabilitation literature with basic research examining perceptual learning of speech. We review findings on the role of auditory training for improving perception of degraded speech signals in normal hearing listeners, with emphasis on how lexically oriented training paradigms may facilitate speech comprehension when the acoustic input is diminished. We conclude with recommendations for future research that could foster translation of principles of speech learning in normal hearing listeners to aural rehabilitation protocols for cochlear implant patients.

  4. Abstract

    Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over‐time sequence of phonemes as output. VOISeR produces a similar pattern of results as in the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, such work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.
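    The two quantities the abstract defines — enemies per letter position and positional entropy — are straightforward to compute. The sketch below is a minimal illustration over a toy lexicon, not the article's analysis pipeline.

```python
import math
from collections import Counter

def enemies_by_position(word, lexicon):
    """Count, at each letter position, the neighbors in `lexicon` that
    mismatch `word` at exactly that position (single-substitution enemies)."""
    counts = []
    for pos in range(len(word)):
        n = sum(
            1 for w in lexicon
            if len(w) == len(word)
            and w != word
            and all(w[i] == word[i] for i in range(len(word)) if i != pos)
        )
        counts.append(n)
    return counts

def positional_entropy(words, pos):
    """Shannon entropy (bits) of the letter distribution at `pos` across
    same-length `words`; low entropy = low prior uncertainty about the letter."""
    freq = Counter(w[pos] for w in words)
    total = sum(freq.values())
    return -sum((c / total) * math.log2(c / total) for c in freq.values())
```

    For the toy lexicon ["cat", "bat", "hat", "cot", "cap"], the target "cat" has enemy counts [2, 1, 1]: "bat" and "hat" mismatch it only at position 0, "cot" only at position 1, and "cap" only at position 2.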

  5. Abstract

    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them to provide direct insights into mechanisms that may support HSR. In this brief article, we report preliminary results from a two‐layer network that borrows one element from ASR, long short‐term memory nodes, which provide dynamic memory for a range of temporal spans. This allows the model to learn to map real speech from multiple talkers to semantic targets with high accuracy, with human‐like timecourse of lexical access and phonological competition. Internal representations emerge that resemble phonetically organized responses in human superior temporal gyrus, suggesting that the model develops a distributed phonological code despite no explicit training on phonetic or phonemic targets. The ability to work with real speech is a major advance for cognitive models of HSR.
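    The "dynamic memory" provided by long short-term memory nodes comes from gated updates to a persistent cell state. The sketch below implements one step of a scalar LSTM cell to show the standard gating equations; it is a textbook illustration of the mechanism, not the article's two-layer network, and the weight layout is an assumption for readability.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a scalar LSTM cell.

    `W` maps each gate name ("i" input, "f" forget, "o" output, "c" candidate)
    to weights (w_x, w_h, b). The gates decide how much old memory to keep and
    how much new input to write, letting the cell retain information over a
    range of temporal spans.
    """
    g = {name: sigmoid(wx * x + wh * h_prev + b)
         for name, (wx, wh, b) in W.items() if name in ("i", "f", "o")}
    c_tilde = math.tanh(W["c"][0] * x + W["c"][1] * h_prev + W["c"][2])
    c = g["f"] * c_prev + g["i"] * c_tilde   # forget old memory, write new
    h = g["o"] * math.tanh(c)                # expose gated memory as output
    return h, c
```

    With the forget gate saturated open and the input gate closed, the cell state passes through a step unchanged — the long-span retention that makes these nodes useful for mapping speech, which unfolds over time, onto semantic targets.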

  6. Abstract

    Neuroimaging and transcranial direct current stimulation (tDCS) research has revealed that generating novel ideas is associated with both reductions and increases in prefrontal cortex (PFC) activity, and engagement of posterior occipital cortex, among other regions. However, there is substantial variability in the robustness of these tDCS‐induced effects due to heterogeneous sample sizes, different creativity measures, and methodological diversity in the application of tDCS across laboratories. To address these shortcomings, we used twelve different montages within a standardized tDCS protocol to investigate how altering activity in frontotemporal and occipital cortex impacts creative thinking. Across four experiments, 246 participants generated either the common or an uncommon use for 60 object pictures while undergoing tDCS. Participants also completed a control short-term memory task. We applied active tDCS for 20 min at 1.5 mA through two 5 cm × 5 cm electrodes over left or right ventrolateral prefrontal (areas F7, F8) or occipital (areas O1, O2) cortex, concurrent bilateral stimulation of these regions across polarities, or sham stimulation. Cathodal stimulation of the left, but not right, ventrolateral PFC improved fluency in creative idea generation, but had no effects on originality, as approximated by measures of semantic distance. No effects were obtained for the control tasks. Concurrent bilateral stimulation of the ventrolateral PFC regardless of polarity direction, and excitatory stimulation of occipital cortex did not alter task performance. Highlighting the importance of cross-experimental methodological consistency, these results extend our past findings and contribute to our understanding of the role of left PFC in creative thinking.