Title: Deaf Children’s Engagement with American Sign Language-English Bilingual Storybook Apps

Design features of American Sign Language (ASL)-English bilingual storybook apps on tablet computers, based on learning research, are intended to facilitate independent and interactive learning of English print literacy and ASL skill among young learners. In 2013, the Science of Learning Center on Visual Language and Visual Learning introduced the first in a series of storybook apps for the iPad based on literacy and reading research. The current study, employing a sample of signing deaf children, examined children's self-motivated engagement with the various design features presented in the earliest of the apps, The Baobab, and analyzed the relationships of engagement with ASL skill, age of first exposure to ASL, ASL narrative ability, and grade-appropriate English reading ability. Results indicated a robust level of engagement with the app, and a relationship between engagement with app pages specifically targeting reading and both early exposure to ASL and ASL skill level. No evidence of relationships between narrative and vocabulary skills and app reading engagement was found. Topics for future research and strategies for app improvement are discussed.

Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
The Journal of Deaf Studies and Deaf Education
Medium: X Size: p. 53-67
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Language development is an important facet of early life. Deaf children may have exposure to various languages and communication modalities, including spoken and visual. Previous research has documented the rate of growth of English skills among young deaf children, but no studies have investigated the rate of ASL acquisition. The current paper examines young deaf children’s acquisition of ASL skills, the rate of growth over time, and factors impacting levels and growth rates. Seventy-three children ages birth to 5 were rated three times using the Visual Communication and Sign Language Checklist and given a scaled score at each rating. An average monthly gain score was calculated for each participant. The presence of a deaf parent, use of ASL at home, use of cochlear implant(s), whether the child was born deaf, and age of initial diagnosis were analyzed for their impact on the level of ASL skill and rate of growth. Results indicated that the use of ASL in the home has a significant positive effect on deaf children’s ASL skill level. Additionally, children with lower initial ratings showed higher rates of growth than those with higher initial ratings, especially among school-aged children. The paper discusses implications and directions for future studies.

  2. Abstract

    Dialogic reading, when children are read a storybook and engaged in relevant conversation, is a powerful strategy for fostering language development. With the development of artificial intelligence, conversational agents can engage children in elements of dialogic reading. This study examined whether a conversational agent can improve children's story comprehension and engagement, as compared to an adult reading partner. Using a 2 (dialogic reading or non‐dialogic reading) × 2 (agent or human) factorial design, a total of 117 three‐ to six‐year‐olds (50% Female, 37% White, 31% Asian, 21% multi‐ethnic) were randomly assigned into one of the four conditions. Results revealed that a conversational agent can replicate the benefits of dialogic reading with a human partner by enhancing children's narrative‐relevant vocalizations, reducing irrelevant vocalizations, and improving story comprehension.

  3. While a significant amount of work has been done on the commonly used, tightly-constrained, weather-based German Sign Language (GSL) dataset, little has been done for continuous sign language translation (SLT) in more realistic settings, including American Sign Language (ASL) translation. Also, while CNN-based features have been consistently shown to work well on the GSL dataset, it is not clear whether such features will work as well in more realistic settings with more heterogeneous signers in non-uniform backgrounds. To this end, in this work, we introduce a new, realistic phrase-level ASL dataset (ASLing), and explore the role of different types of visual features (CNN embeddings, human body keypoints, and optical flow vectors) in translating it to spoken American English. We propose a novel Transformer-based, visual feature learning method for ASL translation. We demonstrate the explainability of our proposed learning methods by visualizing activation weights under various input conditions and discover that the body keypoints are consistently the most reliable set of input features. Using our model, we successfully transfer-learn from the larger GSL dataset to ASLing, resulting in significant BLEU score improvements. In summary, this work goes a long way in bringing together the AI resources required for automated ASL translation in unconstrained environments.
  4.
    Purpose: This paper aims to explore what design aspects can support data visualization literacy within science museums. Design/methodology/approach: The qualitative study thematically analyzes video data of 11 visitor groups as they engage with reading and writing of data visualization through a science museum exhibition that features real-time and uncurated data. Findings: Findings present how the design aspects of the exhibit led to identifying single data records, data patterns, mismeasurements, and distribution rate. Research limitations/implications: The findings preface how to study data visualization literacy learning in short museum interactions. Practical implications: Practically, the findings point toward design implications for facilitating data visualization literacy in museum exhibits. Originality/value: The originality of the study lies in the way the exhibit supports engagement with data visualization literacy with uncurated data records.
  5. This work proposes a novel framework for automatically scoring children's oral narrative language abilities. We use audio recordings from 3rd-8th graders of the Atlanta, Georgia area as they take a portion of the Test of Narrative Language. We design a system which extracts linguistic features and fine-tuned BERT-based self-supervised learning representations from state-of-the-art ASR transcripts. We predict manual test scores from the extracted features. This framework significantly outperforms a deterministic method based on the assessment's scoring rubric. Lastly, we evaluate the system performance across students' reading level, dialect, and diagnosed learning/language disabilities to establish fairness across diverse demographics of students. Using this system, we achieve approximately 98% classification accuracy of student scores. We are also able to identify key areas of improvement for this type of system across demographic areas and reading ability.