

Title: ASL Developmental Trends Among Deaf Children, Ages Birth to Five
Abstract

Language development is an important facet of early life. Deaf children may have exposure to various languages and communication modalities, including spoken and visual. Previous research has documented the rate of growth of English skills among young deaf children, but no studies have investigated the rate of ASL acquisition. The current paper examines young deaf children’s acquisition of ASL skills, the rate of growth over time, and factors impacting levels and growth rates. Seventy-three children ages birth to 5 were rated three times using the Visual Communication and Sign Language Checklist and given a scaled score at each rating. An average monthly gain score was calculated for each participant. The presence of a deaf parent, use of ASL at home, use of cochlear implant(s), whether the child was born deaf, and age of initial diagnosis were analyzed for their impact on the level of ASL skill and rate of growth. Results indicated that the use of ASL in the home has a significant positive effect on deaf children’s ASL skill level. Additionally, children with lower initial ratings showed higher rates of growth than those with higher initial ratings, especially among school-aged children. The paper discusses implications and directions for future studies.
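The abstract notes that an average monthly gain score was computed for each child but does not define it here. One natural reading, offered only as an assumption about the computation (the symbols below are illustrative, not the paper's notation), is

    \text{average monthly gain} = \frac{S_\text{last} - S_\text{first}}{m_\text{last} - m_\text{first}}

where S is the child's scaled rating and m is the child's age in months at the first and last ratings.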

 
NSF-PAR ID:
10373745
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Journal of Deaf Studies and Deaf Education
Volume:
28
Issue:
1
ISSN:
1081-4159
Format(s):
Medium: X; Size: p. 7-20
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Design features of American Sign Language (ASL)-English bilingual storybook apps on tablet computers, based on learning research, are intended to facilitate independent and interactive learning of English print literacy and of ASL skill among young learners. In 2013, the Science of Learning Center on Visual Language and Visual Learning introduced the first in a series of storybook apps for the iPad based on literacy and reading research. The current study, employing a sample of signing deaf children, examined children’s self-motivated engagement with the various design features presented in the earliest of the apps, The Baobab, and analyzed the relationships between engagement and ASL skill, age of first exposure to ASL, ASL narrative ability, and grade-appropriate English reading ability. Results indicated a robust level of engagement with the app and a relationship between app pages specifically targeting reading and early exposure to, and skill levels in, ASL. No evidence of relationships between narrative and vocabulary skills and app reading engagement was found. Topics for future research and strategies for app improvement are discussed.

     
  2. Abstract

    Since its publication in 2013, the Visual Communication and Sign Language (VCSL) Checklist has been widely utilized to assess the development of early American Sign Language skills of deaf children from birth to age 5. However, little research has been published using the results of VCSL assessments. Notably, no psychometric analyses have been conducted to verify the validity of the VCSL in a population whose characteristics differ from those of the small sample of native-signing children from whose data the published norms were created. The current paper, using data from the online version of the VCSL (VCSL:O), addresses this shortcoming. Ratings of the 114 VCSL items from 562 evaluations were analyzed using a partial-credit Rasch model. Results indicate that the underlying skill across the age range comprises an adequate single dimension. Within the items’ age groupings, however, the dimensionality is not so clear. Item ordering, as well as item fit, is explored in detail. In addition, the paper reports the benefits of using the resulting Rasch scale scores, which, unlike the published scoring strategy that focuses on basal and ceiling performance, make use of ratings of partial-credit, or emerging, skills. Strategies for revising the VCSL are recommended.
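    For context, the partial-credit Rasch model mentioned above is not spelled out in the abstract. In its standard form (Masters, 1982), the probability that person n receives rating x on item i with ordered categories 0, 1, ..., m_i is

        P(X_{ni} = x) = \frac{\exp\left(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\right)}{\sum_{j=0}^{m_i} \exp\left(\sum_{k=0}^{j} (\theta_n - \delta_{ik})\right)},

    where \theta_n is the person's ability (the Rasch scale score discussed above), \delta_{ik} are the item's step difficulties, and the k = 0 term is defined to be zero. This is the textbook form of the model, not an equation quoted from the paper.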

     
  3. Drawing, as a skill, is closely tied to many creative fields, and it is a unique practice for every individual. Drawing has been shown to improve cognitive and communicative abilities, such as visual communication, problem-solving skills, students’ academic achievement, awareness of and attention to surrounding details, and sharpened analytical skills. Drawing also stimulates both sides of the brain and improves peripheral skills of writing, 3-D spatial recognition, critical thinking, and brainstorming. People are often exposed to drawing as children, drawing their families, their houses, animals, and, most notably, their imaginative ideas. These skills develop naturally over time to some extent; however, although drawing is a basic skill at its core, mastering it requires extensive practice, and progress is often significantly affected by an individual’s self-efficacy. Sketchtivity is an AI tool developed by Texas A&M University to facilitate the growth of drawing skills and track performance. Sketching skill development depends in part on students’ self-efficacy associated with their drawing abilities. Gauging individuals’ drawing self-efficacy is critical to understanding the impact of practicing with this new instrument, especially in contrast to traditional practice methods, and may also be very useful for other researchers, educators, and technologists. This study reports the development and initial validation of a new 13-item measure that assesses perceived drawing self-efficacy. The 13 items were developed based on Bandura’s guide for constructing self-efficacy scales. The participants in the study consisted of 222 high school students from engineering, art, and pre-calculus classes. Internal consistency of the 13 observed items was very high (Cronbach’s alpha = 0.943), indicating a highly reliable scale. Exploratory factor analysis was performed to further investigate the variance among the 13 observed items, to find the underlying latent factors that influenced them, and to see whether the items needed revision. We found that a three-factor model was the best fit for our data, given fit statistics and model interpretability. The factors are: Factor 1, self-efficacy with respect to drawing specific objects; Factor 2, self-efficacy with respect to drawing practically to solve problems, communicate with others, and brainstorm ideas; Factor 3, self-efficacy with respect to drawing to create, express ideas, and use one’s imagination. An alternative four-factor model is also discussed. The purpose of our study is to inform interventions that increase self-efficacy, and we believe the assessment will be especially valuable for education researchers who implement AI-based tools to measure drawing skills. This initial validity study shows promising results for a new measure of drawing self-efficacy; further validation with new populations and drawing classes, along with further psychometric testing of item-level performance, is needed to support its use. In the future, this self-efficacy assessment could be used by teachers and researchers to guide instructional interventions meant to increase drawing self-efficacy.
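    The reliability statistic quoted above (Cronbach’s alpha = 0.943) follows the standard formula and can be computed from any respondents-by-items score matrix. The Python sketch below is a generic illustration, not code from the study; the function name and the simulated responses are invented for the example.

        import numpy as np

        def cronbach_alpha(scores: np.ndarray) -> float:
            """Cronbach's alpha for a respondents-by-items score matrix."""
            k = scores.shape[1]                         # number of items (13 in the study)
            item_vars = scores.var(axis=0, ddof=1)      # variance of each item
            total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total scores
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        # Hypothetical example: 222 respondents rating 13 Likert-type items (1-7).
        rng = np.random.default_rng(0)
        responses = rng.integers(1, 8, size=(222, 13))
        print(round(cronbach_alpha(responses), 3))

    Simulated random responses will not reproduce the reported 0.943; the value depends on the actual inter-item correlations.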
  4. The use of virtual humans (i.e., avatars) holds the potential for interactive, automated interaction in domains such as remote communication, customer service, or public announcements. For signed language users, signing avatars could potentially provide accessible content by sharing information in the signer's preferred or native language. As the development of signing avatars has gained traction in recent years, researchers have come up with many different methods of creating them. The resulting avatars vary widely in their appearance, the naturalness of their movements, and their facial expressions, all of which may potentially impact users' acceptance of the avatars. We designed a study to test the effects of these intrinsic properties of different signing avatars while also examining the extent to which people's own language experiences change their responses to signing avatars. We created video stimuli showing individual signs produced by (1) a live human signer (Human), (2) an avatar made using computer-synthesized animation (CS avatar), and (3) an avatar made using high-fidelity motion capture (Mocap avatar). We surveyed 191 American Sign Language users, including Deaf (N = 83), Hard-of-Hearing (N = 34), and Hearing (N = 67) groups. Participants rated the three signers on multiple dimensions, which were then combined to form ratings of Attitudes, Impressions, Comprehension, and Naturalness. Analyses demonstrated that the Mocap avatar was rated significantly more positively than the CS avatar on all primary variables. Correlations revealed that signers who acquire sign language later in life are more accepting of, and more likely to have positive impressions of, signing avatars. Finally, those who learned ASL earlier were more likely to give lower, more negative ratings to the CS avatar, but we did not see this association for the Mocap avatar or the Human signer. Together, these findings suggest that movement quality and appearance significantly impact users' ratings of signing avatars and show that signed language users with earlier ages of ASL acquisition are the most sensitive to the movement quality issues seen in computer-generated avatars. We suggest that future efforts to develop signing avatars consider retaining the fluid movement qualities integral to signed languages.
  5. The PoseASL dataset consists of color and depth videos collected from ASL signers at the Linguistic and Assistive Technologies Laboratory under the direction of Matt Huenerfauth, as part of a collaborative research project with researchers at the Rochester Institute of Technology, Boston University, and the University of Pennsylvania.

    Access: After becoming an authorized user of Databrary, please contact Matt Huenerfauth if you have difficulty accessing this volume.

    We have collected a new dataset consisting of color and depth videos of fluent American Sign Language signers performing sequences of ASL signs and sentences. Given interest among sign-recognition and other computer-vision researchers in red-green-blue-depth (RGBD) video, we release this dataset for use by the research community. In addition to the video files, we share depth data files from a Kinect v2 sensor, as well as additional motion-tracking files produced through post-processing of this data.

    Organization of the Dataset: The dataset is organized into sub-folders with codenames such as "P01" or "P16". These codenames refer to the specific human signers who were recorded. Please note that there is no participant P11 or P14; those numbers were accidentally skipped during the process of making appointments for video collection.

    Task: During the recording session, the participant was met by a member of our research team who was a native ASL signer. No other individuals were present during the data-collection session. After signing the informed consent and video release document, participants responded to a demographic questionnaire. The data-collection session then consisted of English word stimuli and cartoon videos. The session began with slides displaying English words and photos of items, and participants were asked to produce the sign for each (a PDF is included in the materials subfolder). Next, participants viewed three short animated cartoons, which they were asked to recount in ASL:
    - Canary Row, Warner Brothers Merrie Melodies 1950 (the 7-minute video divided into seven parts)
    - Mr. Koumal Flies Like a Bird, Studio Animovaneho Filmu 1969
    - Mr. Koumal Battles his Conscience, Studio Animovaneho Filmu 1971
    The word list and cartoons were selected because they are identical to the stimuli used in the collection of the Nicaraguan Sign Language video corpora; see Senghas, A. (1995). Children’s Contribution to the Birth of Nicaraguan Sign Language. Doctoral dissertation, Department of Brain and Cognitive Sciences, MIT.

    Demographics: All 14 of our participants were fluent ASL signers. As screening, we asked: Did you use ASL at home growing up, or did you attend a school as a very young child where you used ASL? All participants responded affirmatively. A total of 14 DHH participants were recruited on the Rochester Institute of Technology campus, including 7 men and 7 women, aged 21 to 35 (median = 23.5). All participants reported that they began using ASL at age 5 or younger, with 8 reporting ASL use since birth and 3 others reporting ASL use since age 18 months.

    Filetypes: *.avi, *_dep.bin: The PoseASL dataset was captured using a Kinect 2.0 RGBD camera. The output of this camera system includes multiple channels: RGB, depth, skeleton joints (25 joints for every video frame), and HD face (1,347 points). The video resolution is 1920 x 1080 pixels for the RGB channel and 512 x 424 pixels for the depth channel. Due to limitations in the filetypes accepted for sharing on Databrary, the binary *_dep.bin files produced directly by the Kinect v2 camera system could not be shared on the Databrary platform. If your research requires the original binary *_dep.bin files, please contact Matt Huenerfauth. *_face.txt, *_HDface.txt, *_skl.txt: To make it easier for future researchers to use this dataset, we have also performed some post-processing of the Kinect data. To extract skeleton coordinates from the RGB videos, we used the OpenPose system, which is capable of detecting body, hand, facial, and foot keypoints of multiple people in single images in real time. The output of OpenPose includes estimates of 70 keypoints for the face, including eyes, eyebrows, nose, mouth, and face contour. The software also estimates 21 keypoints for each hand (Simon et al., 2017), including 3 keypoints for each finger, and 25 keypoints for the body pose and feet (Cao et al., 2017; Wei et al., 2016).

    Reporting Bugs or Errors: Please contact Matt Huenerfauth to report any bugs or errors that you identify in the corpus. We appreciate your help in improving the quality of the corpus over time.

    Acknowledgement: This material is based upon work supported by the National Science Foundation under award 1749376: "Collaborative Research: Multimethod Investigation of Articulatory and Perceptual Constraints on Natural Language Evolution."
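    As a minimal illustration of how the folder organization described above might be traversed, the sketch below walks the participant sub-folders (P01, P02, ...) and reports the resolution and frame count of each RGB *.avi file using OpenCV. The root path is a placeholder, and the assumption that OpenCV can decode these particular .avi files is mine, not a claim from the dataset documentation.

        import glob
        import os

        import cv2  # pip install opencv-python

        DATASET_ROOT = "/path/to/PoseASL"  # placeholder path to the downloaded Databrary volume

        # Participant sub-folders use codenames such as "P01" or "P16"; P11 and P14 do not exist.
        for participant_dir in sorted(glob.glob(os.path.join(DATASET_ROOT, "P*"))):
            for avi_path in sorted(glob.glob(os.path.join(participant_dir, "*.avi"))):
                cap = cv2.VideoCapture(avi_path)
                if not cap.isOpened():
                    continue  # skip files OpenCV cannot decode
                frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
                width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))    # 1920 expected for the RGB channel
                height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))  # 1080 expected for the RGB channel
                cap.release()
                print(f"{os.path.basename(participant_dir)}/{os.path.basename(avi_path)}: "
                      f"{width}x{height}, {frames} frames")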