
Creators/Authors contains: "Malaia, Evie"


  1. Sign languages are human communication systems that are equivalent to spoken languages in their capacity for information transfer, but which use a dynamic visual signal for communication. Thus, linguistic metrics of complexity, which are typically developed for linear, symbolic linguistic representations (such as written forms of spoken languages), do not translate easily into sign language analysis. A comparison of physical signal metrics, on the other hand, is complicated by the higher dimensionality (spatial and temporal) of the sign language signal compared to a speech signal (solely temporal). Here, we review a variety of approaches to operationalizing sign language complexity based on linguistic and physical data, and identify the approaches that allow for high-fidelity modeling of the data in the visual domain while capturing linguistically relevant features of the sign language signal.
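As a toy illustration of the physical-signal side of this comparison, the sketch below computes the Shannon entropy of the speed distribution of a single motion-capture marker. This is only one hypothetical way to operationalize dynamic-signal complexity; the sampling rate, bin count, and random-walk stand-in data are illustrative assumptions, not taken from the review.

```python
import numpy as np

def speed_entropy(positions, dt=1/60, n_bins=32):
    """Shannon entropy of the speed distribution of a 3-D marker trajectory.

    positions : (T, 3) array of x, y, z coordinates over time
    dt        : sampling interval in seconds (60 Hz assumed here)
    n_bins    : number of histogram bins for the speed distribution
    """
    # Frame-to-frame speed of the marker (e.g., a dominant-hand wrist marker).
    velocities = np.diff(positions, axis=0) / dt
    speeds = np.linalg.norm(velocities, axis=1)

    # Normalized histogram of speeds -> empirical probability distribution.
    counts, _ = np.histogram(speeds, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined

    # Higher entropy = less predictable speed profile = "richer" dynamics.
    return -np.sum(p * np.log2(p))

# Toy usage with a random walk standing in for real motion-capture data.
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(size=(600, 3)), axis=0)
print(f"speed entropy: {speed_entropy(trajectory):.2f} bits")
```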
  2. The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (Scopus, Web of Science, PubMed, ScienceDirect, and EBSCOhost) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review, and forward citations, to identify all relevant publications. We searched for records based on five criteria: original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing. To reduce the risk of bias, two authors with expertise in sign language processing and a variety of research methods reviewed the results; disagreements were resolved through extensive discussion. In the final review, 7 records were included, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed motor simulation proposals, the neural basis of PP, and the development of PP. All studies used dynamic sign stimuli. Most of the studies focused on semantic prediction. The question of the mechanism for the interaction between one's sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. There is a paucity of evidence for PP in sign languages, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing. Systematic Review Registration: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021238911, identifier CRD42021238911.
  3. Perlman, Marcus (Ed.)
    Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To address this, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). We correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers key in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie the detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
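A minimal sketch of the kind of analysis described above: predicting binary transitivity judgments from per-sign phonological features with a cross-validated logistic regression. The feature names and the simulated data are hypothetical; the published study's actual features, coding scheme, and statistics may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical per-sign phonological features (one row per ASL verb sign):
# [two_handed, path_movement, handshape_change, body_contact]
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(80, 4)).astype(float)

# Majority non-signer judgment per sign: 1 = judged transitive, 0 = not.
# Simulated so that path movement and two-handedness drive the judgments.
logits = 1.5 * X[:, 1] + 0.8 * X[:, 0] - 1.0
y = (rng.random(80) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression()
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")

model.fit(X, y)
for name, w in zip(["two_handed", "path_movement",
                    "handshape_change", "body_contact"], model.coef_[0]):
    print(f"{name:>16}: {w:+.2f}")  # which cues predict 'transitive' judgments
```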
  4.
    RF-sensing-based human activity and hand gesture recognition (HGR) methods have gained enormous popularity with the development of small-package, high-frequency radar systems and powerful machine learning tools. However, most HGR experiments in the literature have been conducted on individual gestures in isolation from preceding and subsequent motions. This paper considers the problem of American Sign Language (ASL) recognition in the context of daily living, which involves sequential classification of a continuous stream of signing mixed with daily activities. In particular, this paper investigates the efficacy of different RF input representations and fusion techniques for ASL and trigger-gesture recognition tasks in a daily living scenario, which can potentially be used for sign-language-sensitive human-computer interfaces (HCI). The proposed approach first detects and segments periods of motion, then applies feature-level fusion of the range-Doppler map, micro-Doppler spectrogram, and envelope for classification with a bidirectional long short-term memory (BiLSTM) recurrent neural network. Results show 93.3% accuracy in the identification of 6 activities and 4 ASL signs, as well as a trigger-sign detection rate of 0.93.
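The classification stage described above can be sketched as follows: each radar representation (range-Doppler map, micro-Doppler spectrogram, and envelope) is embedded per time step, the embeddings are concatenated (feature-level fusion), and a bidirectional LSTM classifies the detected motion segment. All layer sizes and sequence lengths below are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class FusionBiLSTM(nn.Module):
    """Feature-level fusion of three radar representations, then a BiLSTM.

    Dimensions (feature sizes, sequence length, 10 classes for
    6 activities + 4 ASL signs) are illustrative, not from the paper.
    """
    def __init__(self, rd_dim=64, md_dim=64, env_dim=8,
                 hidden=128, n_classes=10):
        super().__init__()
        # Per-time-step embeddings of each representation.
        self.rd_fc = nn.Linear(rd_dim, 32)    # range-Doppler map column
        self.md_fc = nn.Linear(md_dim, 32)    # micro-Doppler spectrogram column
        self.env_fc = nn.Linear(env_dim, 8)   # envelope features
        # Fused features drive a bidirectional LSTM over the motion segment.
        self.lstm = nn.LSTM(32 + 32 + 8, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, rd, md, env):
        # Each input: (batch, time, feature). Fuse at the feature level.
        fused = torch.cat([self.rd_fc(rd), self.md_fc(md),
                           self.env_fc(env)], dim=-1)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])  # classify from the final time step

# Toy forward pass on two detected motion segments of 50 frames each.
model = FusionBiLSTM()
rd, md, env = torch.randn(2, 50, 64), torch.randn(2, 50, 64), torch.randn(2, 50, 8)
print(model(rd, md, env).shape)  # torch.Size([2, 10])
```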
  5.
    Over the years, there has been much research in both wearable and video-based American Sign Language (ASL) recognition systems. However, the restrictive and invasive nature of these sensing modalities remains a significant disadvantage in the context of Deaf-centric smart environments or devices that are responsive to ASL. This paper investigates the efficacy of RF sensors for word-level ASL recognition in support of human-computer interfaces designed for deaf or hard-of-hearing individuals. A principal challenge is the training of deep neural networks, given the difficulty of acquiring native ASL signing data. In this paper, adversarial domain adaptation is exploited to bridge the physical/kinematic differences between the copysigning of hearing individuals (repetition of sign motion after viewing a video) and the native signing of Deaf individuals who are fluent in sign language. Domain adaptation results are compared with those attained by directly synthesizing ASL signs using generative adversarial networks (GANs). Kinematic improvements to the GAN architecture, such as the insertion of micro-Doppler signature envelopes in a secondary branch of the GAN, are utilized to boost performance. A word-level classification accuracy of 91.3% is achieved for 20 ASL words.
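One common realization of adversarial domain adaptation is a DANN-style setup with a gradient-reversal layer: the feature extractor is trained so that a word classifier succeeds while a domain discriminator cannot separate copysigned (source) from native (target) samples. The sketch below assumes pre-extracted 256-dimensional radar features and illustrative layer sizes; the paper's actual network may differ.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DANN(nn.Module):
    """DANN-style adaptation: features that classify 20 ASL words while a
    domain head cannot tell copysigned (source) from native (target) samples."""
    def __init__(self, in_dim=256, n_words=20):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.word_head = nn.Linear(128, n_words)  # sign classifier
        self.domain_head = nn.Linear(128, 2)      # copysign vs. native

    def forward(self, x, lam=1.0):
        f = self.features(x)
        return self.word_head(f), self.domain_head(GradReverse.apply(f, lam))

# One training step: word loss on labeled copysign data, domain loss on both
# domains with reversed gradients flowing into the feature extractor.
model = DANN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xs, ys = torch.randn(8, 256), torch.randint(0, 20, (8,))  # copysign batch
xt = torch.randn(8, 256)                                  # unlabeled native batch

word_logits, dom_s = model(xs)
_, dom_t = model(xt)
loss = (nn.functional.cross_entropy(word_logits, ys)
        + nn.functional.cross_entropy(dom_s, torch.zeros(8, dtype=torch.long))
        + nn.functional.cross_entropy(dom_t, torch.ones(8, dtype=torch.long)))
opt.zero_grad(); loss.backward(); opt.step()
print(f"step loss: {loss.item():.3f}")
```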