
Title: Behavioral interference or facilitation does not distinguish between competitive and noncompetitive accounts of lexical selection in word production.
One of the major debates in the field of word production is whether lexical selection is competitive or not. For nearly half a century, semantic interference effects in picture naming latencies have been claimed as evidence for competitive (relative threshold) models of lexical selection, while semantic facilitation effects have been claimed as evidence for non-competitive (simple threshold) models instead. In this paper, we use a computational modeling approach to compare the consequences of competitive and noncompetitive selection algorithms for blocked cyclic picture naming latencies, combined with two approaches to representing taxonomic and thematic semantic features. We show that although our simple model can capture both semantic interference and facilitation, the presence or absence of competition in the selection mechanism is unrelated to the polarity of these semantic effects. These results question the validity of prior assumptions and offer new perspectives on the origins of interference and facilitation in language production.
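For concreteness, the contrast at stake can be sketched in a few lines of code. The snippet below is an illustrative toy, not the authors' model: simple_threshold_select picks a word once its own activation clears an absolute bar, while relative_threshold_select requires the leader to beat its strongest competitor by a margin, so an active semantic competitor can delay selection. All function names and parameter values here are hypothetical.

```python
# Toy sketch of the two selection rules contrasted in the abstract
# (hypothetical names and parameters, not the authors' implementation).
import numpy as np

def simple_threshold_select(activations, threshold=1.0):
    """Noncompetitive rule: select a word as soon as its own activation
    clears an absolute threshold, regardless of competitors."""
    winners = np.flatnonzero(activations >= threshold)
    return int(winners[np.argmax(activations[winners])]) if winners.size else None

def relative_threshold_select(activations, margin=0.5):
    """Competitive rule: select the most active word only when it leads
    its strongest competitor by a fixed margin, so an active competitor
    postpones selection."""
    order = np.argsort(activations)[::-1]
    lead = activations[order[0]] - activations[order[1]]
    return int(order[0]) if lead >= margin else None

# Activations for: target, semantic competitor, unrelated word
acts = np.array([1.2, 0.9, 0.3])
print(simple_threshold_select(acts))    # 0: the target clears the absolute bar
print(relative_threshold_select(acts))  # None: the competitor keeps the lead under the margin
```

On these toy activations the two rules already come apart: the absolute rule selects the target immediately, while the relative rule stalls until the competitor decays. That stall is the usual intuition behind reading interference as a signature of competition, which is exactly the inference the abstract calls into question.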
Authors:
Editors:
Fitch, T.
Award ID(s):
1949631
Publication Date:
NSF-PAR ID:
10292550
Journal Name:
Proceedings of the 43rd Annual Conference of the Cognitive Science Society.
Sponsoring Org:
National Science Foundation
More Like this
  1. Naming a picture is more difficult in the context of a taxonomically-related picture. Disagreement exists on whether non-taxonomic relations, e.g., associations, have similar or different effects on picture naming. Past work has reported facilitation, interference, and null results, but with inconsistent methodologies. We paired the same target word (e.g., cow) with unrelated (pen), taxonomically-related (bear), and associatively-related (milk) items in different blocks, as participants repeatedly named one of the two pictures in randomized order. Significant interference was uncovered for the same target item in the taxonomic vs. unrelated and associative blocks. There was no robust evidence of interference in the associative blocks. If anything, evidence suggested that associatively-related items marginally facilitated production. This finding suggests that taxonomic and associative relations have different effects on picture naming and has implications for theoretical models of lexical selection and, more generally, for the computations involved in mapping semantic features to lexical items.
  2. The present study examined the role of script in bilingual speech planning by comparing the performance of same- and different-script bilinguals. Spanish-English bilinguals (Experiment 1) and Japanese-English bilinguals (Experiment 2) performed a picture-word interference task in which they were asked to name a picture of an object in English, their second language, while ignoring a visual distractor word in Spanish or Japanese, their first language. Results replicated the general pattern seen in previous bilingual picture-word interference studies for the same-script, Spanish-English bilinguals but not for the different-script, Japanese-English bilinguals. Both groups showed translation facilitation, whereas only Spanish-English bilinguals demonstrated semantic interference, phonological facilitation, and phono-translation facilitation. These results suggest that when the script of the language not in use is present in the task, bilinguals appear to exploit the perceptual difference as a language cue to direct lexical access to the intended language earlier in the process of speech planning.
  3. Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.
  4. A growing body of research shows that both signed and spoken languages display regular patterns of iconicity in their vocabularies. We compared iconicity in the lexicons of American Sign Language (ASL) and English by combining previously collected ratings of ASL signs (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2017) and English words (Winter, Perlman, Perry, & Lupyan, 2017) with the use of data-driven semantic vectors derived from English. Our analyses show that models of spoken language lexical semantics drawn from large text corpora can be useful for predicting the iconicity of signs as well as words. Compared to English, ASL has a greater number of regions of semantic space with concentrations of highly iconic vocabulary. There was an overall negative relationship between semantic density and the iconicity of both English words and ASL signs. This negative relationship disappeared for highly iconic signs, suggesting that iconic forms may be more easily discriminable in ASL than in English. Our findings contribute to an increasingly detailed picture of how iconicity is distributed across different languages. (A toy sketch of this kind of embedding-based density analysis appears after this list.)
  5. People who grow up speaking a language without lexical tones typically find it difficult to master tonal languages after childhood. Accumulating research suggests that much of the challenge for these second language (L2) speakers has to do not with identification of the tones themselves, but with the bindings between tones and lexical units. The question that remains open is how much of these lexical binding problems are problems of encoding (incomplete knowledge of the tone-to-word relations) vs. retrieval (failure to access those relations in online processing). While recent work using lexical decision tasks suggests that both may play a role, one issue is that failure on a lexical decision task may reflect a lack of learner confidence about what is not a word, rather than non-native representation or processing of known words. Here we provide complementary evidence using a picture-phonology matching paradigm in Mandarin in which participants decide whether or not a spoken target matches a specific image, with concurrent event-related potential (ERP) recording to provide potential insight into differences in L1 and L2 tone processing strategies. As in the lexical decision case, we find that advanced L2 learners show a clear disadvantage in accurately identifying tone mismatched targets relative to vowel mismatched targets. We explore the contribution of incomplete/uncertain lexical knowledge to this performance disadvantage by examining individual data from an explicit tone knowledge post-test. Results suggest that explicit tone word knowledge and confidence explains some but not all of the errors in picture-phonology matching. Analysis of ERPs from correct trials shows some differences in the strength of L1 and L2 responses, but does not provide clear evidence toward differences in processing that could explain the L2 disadvantage for tones. In sum, these results converge with previous evidence from lexical decision tasks in showing that advanced L2 listeners continue to have difficulties with lexical tone recognition, and in suggesting that these difficulties reflect problems both in encoding lexical tone knowledge and in retrieving that knowledge in real time.
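The embedding-based approach in item 4 lends itself to a compact illustration. The sketch below is hypothetical and uses random stand-in data rather than the study's ASL/English norms and vectors; it shows one common way to operationalize "semantic density" (mean cosine similarity to a word's nearest embedding neighbors) and to test for the negative density-iconicity relationship the abstract reports.

```python
# Hypothetical sketch, not the study's pipeline: semantic density from
# word embeddings, correlated with iconicity ratings. Random toy data
# stands in for real semantic vectors and norms.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_words, dim = 200, 50
vecs = rng.normal(size=(n_words, dim))       # stand-in semantic vectors
iconicity = rng.uniform(1, 7, size=n_words)  # stand-in 1-7 iconicity ratings

def semantic_density(i, vecs, k=10):
    """Mean cosine similarity between word i and its k nearest neighbors."""
    v = vecs[i]
    sims = vecs @ v / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(v))
    sims[i] = -np.inf                        # exclude the word itself
    return np.sort(sims)[-k:].mean()

density = np.array([semantic_density(i, vecs) for i in range(n_words)])
rho, p = spearmanr(density, iconicity)
print(f"density vs. iconicity: Spearman rho = {rho:.2f} (p = {p:.3g})")
```

With random data the correlation hovers near zero; with real norms, the abstract's finding would surface as a reliably negative rho that weakens or disappears among the most iconic signs.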