Abstract
Limited language experience in childhood is common among deaf individuals and has been shown to impair later language processing. Although basic structures such as word order have been found to be resilient to sparse language input in early life, whether they are robust to extreme language delay is unknown. We investigated the sentence comprehension strategies of post-childhood, first-language (L1) learners of American Sign Language (ASL) with at least 9 years of language experience, in comparison to two control groups of learners with full access to language from birth (deaf native signers and hearing L2 learners who were native English speakers). The results of a sentence-to-picture matching experiment show that event knowledge overrides word order for post-childhood L1 learners, regardless of the animacy of the subject, whereas both deaf native signers and hearing L2 signers consistently rely on word order to comprehend sentences. Language inaccessibility throughout early childhood thus impedes the acquisition of even basic word order. Much like very young children prior to the development of basic sentence structure, post-childhood L1 learners rely more on context and event knowledge to comprehend sentences. Language experience during childhood is critical to the development of basic sentence structure.
Number Stroop Effects in Arabic Digits and ASL Number Signs: The Impact of Age and Setting of Language Acquisition
Multiple studies have reported mathematics underachievement for students who are deaf, but the onset, scope, and causes of this phenomenon remain understudied. Early language deprivation might be one factor influencing the acquisition of numbers. In this study, we investigated a basic and fundamental mathematical skill, automatic magnitude processing, in two formats (Arabic digits and American Sign Language number signs) and the influence of age of first language exposure on both formats, using two versions of the Number Stroop Test. We compared the performance of individuals born deaf who experienced early language deprivation to that of individuals born deaf who experienced sign language in early life and hearing second language learners of ASL. In both formats of magnitude representation, late first language learners demonstrated overall slower reaction times. They were also less accurate on incongruent trials but performed no differently from early signers and second language learners on other trials. When magnitude was represented by Arabic digits, late first language learners exhibited robust Number Stroop Effects, suggesting automatic magnitude processing, but they also demonstrated a large speed difference between size and number judgments not observed in the other groups. In a task with ASL number signs, the Number Stroop Effect was not found in any group, suggesting that magnitude representation might be format-specific, in line with results from several other languages. Late first language learners also demonstrated an unusual pattern of slower reaction times for neutral than for incongruent stimuli. Together, the results show that early language deprivation affects the ability to automatically judge quantities expressed both linguistically and by Arabic digits, but that this ability can still be acquired later in life when language is available.
Contrary to previous studies that find differences in speed of number processing between deaf and hearing participants, we find that when language is acquired early in life, deaf signers perform identically to hearing participants.
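As a concrete illustration of how a Number Stroop Effect is typically scored (mean reaction time on incongruent trials minus mean reaction time on congruent trials, with a positive difference taken as evidence of automatic magnitude processing), here is a minimal sketch. The reaction times and the `mean_rt` helper are invented for illustration; they are not the study's data.

```python
# Hypothetical sketch of Number Stroop Effect scoring: the effect is the
# slowdown on incongruent trials (e.g., a physically small "9" paired with
# a physically large "2") relative to congruent trials. Values are invented.
import statistics

# Reaction times in milliseconds by trial type (illustrative only)
trials = [
    ("congruent", 520), ("congruent", 540), ("congruent", 510),
    ("incongruent", 610), ("incongruent", 590), ("incongruent", 630),
    ("neutral", 550), ("neutral", 560), ("neutral", 545),
]

def mean_rt(condition):
    """Mean reaction time over all trials of the given condition."""
    return statistics.mean(rt for cond, rt in trials if cond == condition)

# A positive difference indicates interference from the irrelevant
# dimension, i.e., automatic magnitude processing.
stroop_effect = mean_rt("incongruent") - mean_rt("congruent")
print(round(stroop_effect, 1))  # → 86.7
```

The abstract's observation that late first language learners were slower on neutral than incongruent stimuli would correspond here to `mean_rt("neutral")` exceeding `mean_rt("incongruent")`, the reverse of the usual ordering.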
- Award ID(s): 1941456
- PAR ID: 10329400
- Date Published:
- Journal Name: Language Learning and Development
- ISSN: 1547-5441
- Page Range / eLocation ID: 1 to 29
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract: ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0), which contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency (“guessability”) ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/ and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
-
This study traces the development of discrete, combinatorial structure in Zinacantec Family Homesign (‘Z Sign’), a sign language developed since the 1970s by several deaf siblings in Mexico (Haviland 2020b), focusing on the expression of motion. The results reveal that the first signer, who generated a homesign system without access to language models, represents motion events holistically. Later-born signers, who acquired this homesign system from infancy, distribute the components of motion events over sequences of discrete signs. Furthermore, later-born signers exhibit greater regularity of form-meaning mappings and increased articulatory efficiency. Importantly, these changes occur abruptly between the first- and second-born signers, rather than incrementally across signers. This study extends previous findings for Nicaraguan Sign Language (Senghas et al. 2004) to a social group of a much smaller scale, suggesting that the parallel processes of cultural transmission and language acquisition drive language emergence, regardless of community size.
-
Deaf spaces are unique indoor environments designed to optimize visual communication and Deaf cultural expression. However, much of the technological research geared towards the deaf involves the use of video or wearables for American Sign Language (ASL) translation, with little consideration for Deaf perspectives on the privacy and usability of the technology. In contrast to video, RF sensors offer an avenue for ambient ASL recognition while also preserving privacy for Deaf signers. Methods: This paper investigates the RF transmit waveform parameters required for effective measurement of ASL signs and their effect on word-level classification accuracy attained with transfer learning and convolutional autoencoders (CAE). A multi-frequency fusion network is proposed to exploit data from all sensors in an RF sensor network and improve the recognition accuracy of fluent ASL signing. Results: For fluent signers, CAEs yield a 20-sign classification accuracy of 76% at 77 GHz and 73% at 24 GHz, while at X-band (10 GHz) accuracy drops to 67%. For hearing imitation signers, signs are more separable, resulting in a 96% accuracy with CAEs. Further, fluent ASL recognition accuracy is significantly increased with use of the multi-frequency fusion network, which boosts the 20-sign fluent ASL recognition accuracy to 95%, surpassing conventional feature-level fusion by 12%. Implications: Signing involves finer spatiotemporal dynamics than typical hand gestures, and thus requires interrogation with a transmit waveform that has a rapid succession of pulses and high bandwidth. Millimeter-wave RF frequencies also yield greater accuracy due to the increased Doppler spread of the radar backscatter. Comparative analysis of articulation dynamics also shows that imitation signing is not representative of fluent signing and is not effective for pre-training networks for fluent ASL classification.
Deep neural networks employing multi-frequency fusion capture both shared and sensor-specific features, and thus offer significant performance gains in comparison to using a single sensor or feature-level fusion.
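The multi-frequency fusion idea above can be sketched minimally: each sensor's features pass through a sensor-specific branch, a shared branch is applied to every sensor, and the two representations are concatenated before classification. This is an illustrative sketch only; the layer sizes, the random weights, and names such as `fuse_and_classify` and `specific_w` are assumptions, not the paper's actual network.

```python
# Hypothetical sketch of a multi-frequency fusion classifier over features
# from three RF sensors (77 GHz, 24 GHz, 10 GHz). Dimensions are invented.
import numpy as np

rng = np.random.default_rng(0)

N_SIGNS = 20   # 20-sign ASL vocabulary, as in the abstract
FEAT_DIM = 64  # per-sensor feature size (assumed)
HIDDEN = 32    # branch output size (assumed)

def relu(x):
    return np.maximum(x, 0.0)

SENSORS = ("77GHz", "24GHz", "10GHz")
# One weight matrix per sensor: captures sensor-specific features
specific_w = {s: rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.1 for s in SENSORS}
# One weight matrix applied to every sensor: captures shared features
shared_w = rng.standard_normal((FEAT_DIM, HIDDEN)) * 0.1
# Linear classifier over the concatenated [specific branches | shared summary]
clf_w = rng.standard_normal((HIDDEN * (len(SENSORS) + 1), N_SIGNS)) * 0.1

def fuse_and_classify(per_sensor_feats):
    """per_sensor_feats: dict mapping sensor name -> (FEAT_DIM,) feature vector."""
    specific = [relu(per_sensor_feats[s] @ specific_w[s]) for s in SENSORS]
    shared = np.mean([relu(per_sensor_feats[s] @ shared_w) for s in SENSORS], axis=0)
    fused = np.concatenate(specific + [shared])  # fusion of all representations
    return fused @ clf_w                         # unnormalized scores per sign

feats = {s: rng.standard_normal(FEAT_DIM) for s in SENSORS}
scores = fuse_and_classify(feats)
print(scores.shape)  # → (20,)
```

Dropping the `shared` branch and concatenating only the per-sensor features would reduce this to conventional feature-level fusion, which the abstract reports underperforms the multi-frequency network by 12%.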
-
We investigate the roles of linguistic and sensory experience in the early-produced visual, auditory, and abstract words of congenitally blind toddlers, deaf toddlers, and typically sighted/hearing peers. We also assess the role of language access by comparing early word production in children learning English or American Sign Language (ASL) from birth versus at a delay. Using parental report data on child word production from the MacArthur-Bates Communicative Development Inventory, we found evidence that while children produced words referring to imperceptible referents before age 2, such words were less likely to be produced relative to words with perceptible referents. For instance, blind (vs. sighted) children said fewer highly visual words like “blue” or “see”; deaf signing (vs. hearing) children produced fewer auditory signs like HEAR. Additionally, in spoken English and ASL, children who received delayed language access were less likely to produce words overall. These results demonstrate and begin to quantify how linguistic and sensory access may influence which words young children produce.

