

Search for: All records

Creators/Authors contains: "Magnuson, J."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. We follow up on recent work demonstrating clear advantages of lexical-to-sublexical feedback in the TRACE model of spoken word recognition. The prior work compared accuracy and recognition times in TRACE with feedback on or off as progressively more noise was added to inputs. Recognition times were faster with feedback at every level of noise, and there was an accuracy advantage for feedback with noise added to inputs. However, a recent article claims that those results must be an artifact of converting activations to response probabilities, because feedback could only reinforce the “status quo.” That is, the claim is that given noisy inputs, feedback must reinforce all inputs equally, whether driven by signal or noise. We demonstrate that the feedback advantage replicates with raw activations. We also demonstrate that lexical feedback selectively reinforces lexically-coherent input patterns – that is, signal over noise – and explain how that behavior emerges naturally in interactive activation. 
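The selectivity claim can be illustrated with a toy interactive-activation sketch (a hypothetical three-word lexicon and made-up parameters, not the actual TRACE implementation): lexical feedback adds activation only to phoneme units that participate in word-consistent patterns, so with the same noisy input, raw activation for the signal-driven word is higher with feedback on.

```python
import random

# Toy mini-lexicon (hypothetical): words as 3-phoneme strings.
LEXICON = {"cat": "k@t", "can": "k@n", "mat": "m@t"}
PHONEMES = sorted({p for form in LEXICON.values() for p in form})

def run(feedback, steps=20, noise=0.3, seed=1):
    rng = random.Random(seed)
    target = "k@t"                                         # noisy input spells "cat"
    phon = [{p: 0.0 for p in PHONEMES} for _ in range(3)]  # phoneme layer
    words = {w: 0.0 for w in LEXICON}                      # word layer
    for _ in range(steps):
        # Bottom-up input: signal plus Gaussian noise on every phoneme unit.
        inp = [{p: (1.0 if p == target[i] else 0.0) + rng.gauss(0, noise)
                for p in PHONEMES} for i in range(3)]
        # Words integrate evidence from their constituent phonemes.
        for w, form in LEXICON.items():
            drive = sum(phon[i][form[i]] for i in range(3)) / 3
            words[w] += 0.2 * (drive - words[w])
        # Phonemes integrate input plus (optionally) lexical feedback;
        # feedback flows only to lexically consistent phoneme units.
        for i in range(3):
            for p in PHONEMES:
                fb = sum(a for w, a in words.items()
                         if LEXICON[w][i] == p) if feedback else 0.0
                phon[i][p] += 0.1 * (inp[i][p] + 0.5 * fb - phon[i][p])
                phon[i][p] = max(0.0, min(1.0, phon[i][p]))
    return words

with_fb = run(feedback=True)
without_fb = run(feedback=False)
# Feedback advantage in raw activations (no conversion to probabilities):
print(with_fb["cat"] > without_fb["cat"])
```

Because feedback is routed only through word units, noise-driven phoneme activations that do not form a lexically coherent pattern receive no top-down boost, which is how the signal-over-noise selectivity emerges.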
  2. Pervasive behavioral and neural evidence for predictive processing has led to claims that language processing depends upon predictive coding. In some cases, this may reflect a conflation of terms, but predictive coding formally is a computational mechanism where only deviations from top-down expectations are passed between levels of representation. We evaluate three models’ ability to simulate predictive processing and ask whether they exhibit the putative hallmark of formal predictive coding (reduced signal when input matches expectations). Of crucial interest, TRACE, an interactive activation model that does not explicitly implement prediction, exhibits both predictive processing and model-internal signal reduction. This may indicate that interactive activation is functionally equivalent or approximant to predictive coding, or that caution is warranted in interpreting neural signal reduction as diagnostic of predictive coding. 
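The putative hallmark can be stated in a few lines (a generic sketch of formal predictive coding with made-up vectors, not any of the three models evaluated): only the residual between input and top-down expectation is passed up, so matched input yields a reduced ascending signal.

```python
def prediction_error(input_vec, expected_vec):
    """Residual passed between levels under formal predictive coding."""
    return [i - e for i, e in zip(input_vec, expected_vec)]

def signal_magnitude(err):
    """Total magnitude of the ascending error signal."""
    return sum(abs(x) for x in err)

expectation = [1.0, 0.0, 0.0]   # top-down prediction for the next unit
matching    = [1.0, 0.0, 0.0]   # input matches the expectation
mismatching = [0.0, 1.0, 0.0]   # input violates it

# Hallmark: matched input produces a reduced (here, zero) ascending signal.
print(signal_magnitude(prediction_error(matching, expectation)))     # 0.0
print(signal_magnitude(prediction_error(mismatching, expectation)))  # 2.0
```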
  3. The Time-Invariant String Kernel (TISK) model of spoken word recognition (Hannagan et al., 2013) is an interactive activation model like TRACE (McClelland & Elman, 1986). However, it uses orders of magnitude fewer nodes and connections because it replaces TRACE's time-specific duplicates of phoneme and word nodes with time-invariant nodes based on a string kernel representation (essentially a phoneme-by-phoneme matrix, where a word is encoded by all the ordered open diphones it contains; e.g., cat has /kæ/, /æt/, and /kt/). Hannagan et al. (2013) showed that TISK behaves similarly to TRACE in the time course of phonological competition and even word-specific recognition times. However, the original implementation did not include feedback from words to diphone nodes, precluding simulation of top-down effects. Here, we demonstrate that TISK can easily be adapted to include lexical feedback, affording simulation of top-down effects as well as allowing the model to demonstrate graceful degradation given noisy inputs. 
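The open-diphone coding is easy to make concrete. A minimal sketch (ASCII stand-ins for the phoneme symbols): every ordered pair of phonemes, adjacent or not, is a diphone of the word.

```python
from itertools import combinations

def open_diphones(phonemes):
    """All ordered open diphones of a phoneme sequence: every ordered
    pair of phonemes, whether adjacent or not."""
    return ["".join(pair) for pair in combinations(phonemes, 2)]

# 'cat' as the phoneme sequence /k/, /ae/, /t/
print(open_diphones(["k", "ae", "t"]))  # ['kae', 'kt', 'aet']
```

A word's cell in the string-kernel matrix is thus activated by its diphones regardless of where in time they occur, which is what makes the representation time-invariant.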
  4. In visual word recognition, having more orthographic neighbors (words that differ by a single letter) generally speeds access to a target word. But neighbors can mismatch at any letter position. In light of evidence that information content varies between letter positions, we consider how neighbor effects might vary across letter positions. Results from a word naming task indicate that response latencies are better predicted by the relative number of positional friends and enemies (respectively, neighbors that match the target at a given letter position and those that mismatch) at some letter positions than at others. In particular, benefits from friends are most pronounced at positions associated with low a priori uncertainty (positional entropy). We consider how these results relate to previous accounts of position-specific effects and how such effects might emerge in serial and parallel processing systems. 
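The positional measures are straightforward to compute. A minimal sketch with a hypothetical six-word lexicon (friends and enemies defined as in the abstract; entropy taken over the letter distribution at a position — the study's actual lexicon and counts will differ):

```python
import math
from collections import Counter

# Hypothetical mini-lexicon of 3-letter words, for illustration only.
LEXICON = ["cat", "can", "cap", "cot", "bat", "hat"]

def friends_enemies(target, position, lexicon):
    """Split the target's neighbors (words differing by one letter) into
    friends (match the target at `position`) and enemies (mismatch there)."""
    neighbors = [w for w in lexicon
                 if w != target and len(w) == len(target)
                 and sum(a != b for a, b in zip(w, target)) == 1]
    friends = [w for w in neighbors if w[position] == target[position]]
    enemies = [w for w in neighbors if w[position] != target[position]]
    return friends, enemies

def positional_entropy(lexicon, position):
    """A priori uncertainty (in bits) about the letter at a position."""
    counts = Counter(w[position] for w in lexicon)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

f, e = friends_enemies("cat", 0, LEXICON)
print(f, e)  # friends share 'c' at position 0; enemies differ there
print(round(positional_entropy(LEXICON, 0), 2))
```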
  5. We examined how phonological competition effects in spoken word recognition change with word length. Cohort effects (competition between words that overlap at onset) are strong and easily replicated. Rhyme effects (competition between words that mismatch at onset) are weaker, emerge later in the time course of spoken word recognition, and are more difficult to replicate. We conducted a simple experiment to examine cohort and rhyme competition using monosyllabic vs. bisyllabic words. Degree of competition was predicted by proportion of phonological overlap. Longer rhymes, with greater overlap in both number and proportion of shared phonemes, compete more strongly (e.g., kettle-medal [0.8 overlap] vs. cat-mat [0.67 overlap]). In contrast, long and short cohort pairs constrained to have constant (2-phoneme) overlap vary in proportion of overlap. Longer cohort pairs (e.g., camera-candle) have lower proportion of overlap (in this example, 0.33) than shorter cohorts (e.g., cat-can, with 0.67 overlap) and compete more weakly. This finding has methodological implications (rhyme effects are less likely to be observed with shorter words, while cohort effects are diminished for longer words), but also theoretical implications: degree of competition is not a simple function of overlapping phonemes; degree of competition is conditioned on proportion of overlap. Simulations with TRACE help explicate how this result might emerge. 
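One simple operationalization of proportion of overlap for cohort pairs (shared onset phonemes divided by word length, with ASCII stand-ins for phoneme symbols; the exact metric used in the study may differ) reproduces the contrast in the abstract:

```python
def cohort_overlap(word1, word2):
    """Count shared onset phonemes, then divide by the longer word's
    length -- constant absolute overlap yields lower proportional
    overlap for longer words."""
    shared = 0
    for a, b in zip(word1, word2):
        if a != b:
            break
        shared += 1
    return shared / max(len(word1), len(word2))

# Short cohort pair: cat-can share 2 of 3 phonemes.
print(round(cohort_overlap("kat", "kan"), 2))        # 0.67
# Long cohort pair: camera-candle also share 2 phonemes, but of 6.
print(round(cohort_overlap("kamera", "kandel"), 2))  # 0.33
```

This makes the methodological point concrete: holding absolute overlap constant at two phonemes, longer cohort pairs necessarily have a lower proportion of overlap, and hence weaker predicted competition.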
  6. Despite the lack of invariance problem (the many-to-many mapping between acoustics and percepts), we experience phonetic constancy and typically perceive what a speaker intends. Models of human speech recognition have side-stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, automatic speech recognition powered by deep learning networks has achieved robust, real-world performance. However, the complexities of deep learning architectures and training regimens make it difficult to use them to provide direct insights into mechanisms that may support human speech recognition. We developed a simple network that borrows one element from automatic speech recognition (long short-term memory nodes, which provide dynamic memory over short and long spans). This allows the network to learn to map real speech from multiple talkers to semantic targets with high accuracy. Internal representations emerge that resemble phonetically-organized responses in human superior temporal gyrus, suggesting that the model develops a distributed phonological code despite no explicit training on phonetic or phonemic targets. The ability to work with real speech is a major advance for cognitive models of human speech recognition. 
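The memory mechanism borrowed from automatic speech recognition can be sketched in isolation (a single scalar LSTM cell with hand-set, untrained weights; purely illustrative, not the reported network): gating lets the cell state carry an early event across many later time steps.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM cell (scalar input for clarity).
    W maps each gate to a (input-weight, recurrent-weight, bias) triple.
    The cell state c carries memory; the gates control when the cell
    writes, keeps, and reads it."""
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    c = f * c_prev + i * g   # keep old memory and/or write new content
    h = o * math.tanh(c)     # expose (gated) memory as the output
    return h, c

# Hand-set weights (illustrative): a near-1 forget gate preserves an early
# input over a long span; the input gate writes only when input is large.
W = {"f": (0.0, 0.0, 6.0),   # forget gate ~1: never erase
     "i": (6.0, 0.0, -3.0),  # write only on strong input
     "g": (1.0, 0.0, 0.0),
     "o": (0.0, 0.0, 6.0)}   # output gate ~1: always read

h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0, 0.0, 0.0]:   # a single early event, then silence
    h, c = lstm_step(x, h, c, W)
print(round(c, 2))  # cell state still reflects the early event
```

This dynamic memory over short and long spans is what lets such a network relate early acoustic cues to much later ones in real, variable speech.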