

Search for: All records

Award ID contains: 1462280

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. We are releasing a dataset of videos of both fluent and non-fluent signers using American Sign Language (ASL), collected with a Kinect v2 sensor. The dataset was collected as part of a project to develop and evaluate computer vision algorithms that support new technologies for automatic detection of ASL fluency attributes. A total of 45 fluent and non-fluent participants were asked to perform signing homework assignments similar to those used in introductory- or intermediate-level ASL courses. The data is annotated to identify several aspects of signing, including grammatical features and non-manual markers. Sign language recognition is currently very data-driven, and this dataset can support the design of recognition technologies, especially technologies that can benefit ASL learners. The dataset may also interest ASL education researchers who want to contrast fluent and non-fluent signing.
  2. With the proliferation of voice-based conversational user interfaces (CUIs) come accessibility barriers for Deaf and Hard of Hearing (DHH) users. There has been little prior research on sign-language conversational interactions with technology. In this paper, we motivate research on this topic and identify open questions and challenges in this space, including DHH users' interest in this technology, the types of commands they may use, and the open design questions of how to structure the conversational interaction in this sign-language modality. We also describe our current research methods for addressing these questions, including how we engage with the DHH community.
  3. Antona, M.; Stephanidis, C. (Eds.)
    Environmental sounds can provide important information about surrounding activity, yet recognizing sounds can be challenging for Deaf and Hard-of-Hearing (DHH) individuals. Prior work has examined the preferences of DHH users for various sound-awareness methods, but these preferences have been observed to vary with demographic factors. In this study, we therefore investigate the preferences of a specific group of DHH users: current users of assistive listening devices. Through a survey of 38 participants, we investigated their challenges and requirements for sound-awareness applications, as well as which types of sounds, and which aspects of those sounds, are important to them. We found that users of assistive listening devices still often miss sounds and rely on other people to obtain information about them. Participants indicated that the importance of awareness of different types of sounds varied with the environment and the form factor of the sound-awareness technology. Congruent with prior work, participants reported that the location and urgency of a sound were important, as was the confidence of the technology in its identification of that sound.
  4. To make it easier to add American Sign Language (ASL) to websites, which would increase information accessibility for many Deaf users, we investigate software to semi-automatically produce ASL animation from an easy-to-update script of the message, requiring us to automatically select the speed and timing for the animation. While we can model the speed and timing of human signers from video recordings, prior work has suggested that users prefer animations to be slower than videos of human signers. However, no prior study had systematically examined the multiple parameters of ASL timing, which include: sign duration, transition time, pausing frequency, pausing duration, and differential signing rate. In an experimental study, 16 native ASL signers provided subjective preference judgements during a side-by-side comparison of ASL animations in which each of these five parameters was varied. We empirically identified and report users' preferences for each of these individual timing parameters of ASL animation.
  5. Developing successful sign language recognition, generation, and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture. Despite the need for deep interdisciplinary knowledge, existing research occurs in separate disciplinary silos, and tackles separate portions of the sign language processing pipeline. This leads to three key questions: 1) What does an interdisciplinary view of the current landscape reveal? 2) What are the biggest challenges facing the field? and 3) What are the calls to action for people working in the field? To help answer these questions, we brought together a diverse group of experts for a two-day workshop. This paper presents the results of that interdisciplinary workshop, providing key background that is often overlooked by computer scientists, a review of the state-of-the-art, a set of pressing challenges, and a call to action for the research community. 
  6. Recent research has investigated automatic methods for identifying how important each word in a text is to the overall message, in the context of people who are Deaf and Hard of Hearing (DHH) viewing video with captions. We examine whether DHH users report benefits from visual highlighting of important words in video captions. In formative interview and prototype studies, users indicated a preference for underlining of 5%-15% of the words in a caption text to indicate that they are important, and they expressed interest in such text markup in the context of educational lecture videos. In a subsequent user study, 30 DHH participants viewed lecture videos in two forms: with and without such visual markup. Users indicated that the videos with captions containing highlighted words were easier to read and follow, with lower perceived task-load ratings, compared to the videos without highlighting. This study motivates future research on caption highlighting in online educational videos, and it provides a foundation for how to evaluate the efficacy of such systems with users.