- Journal Name: Multimodal Technologies and Interaction
- Sponsoring Org: National Science Foundation
More Like this
The HapBack: Evaluation of Absolute and Relative Distance Encoding to Enhance Spatial Awareness in a Wearable Tactile Device

For the significant global population of individuals who are blind or visually impaired, spatial awareness during navigation remains a challenge. Tactile Electronic Travel Aids have been designed to provide spatiotemporal information, but an intuitive method for mapping this information to patterns on a vibrotactile display remains to be determined. This paper explores two strategies for encoding the distance from a navigator to an object: absolute and relative. A wearable prototype, the HapBack, is presented with two straps of vertically aligned vibrotactile motors; five distances are each mapped to a row on the display. Absolute patterns emit a single vibration at the row corresponding to a distance, while relative patterns emit a sequence of vibrations starting from the bottom row and ending at the row mapped to that distance. These two encoding strategies are comparatively evaluated for identification accuracy and perceived intuitiveness of mapping among ten adult participants who are blind or visually impaired. No significant difference was found between the two encodings on these metrics, with each showing promising results for application during navigation tasks.
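The absolute and relative encodings described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the five-row geometry follows the abstract, but the distance range and the assumption that nearer objects map to lower rows are hypothetical.

```python
ROWS = 5  # five rows of vibrotactile motors; row 0 = bottom, row 4 = top

def distance_to_row(distance_m: float, max_m: float = 5.0) -> int:
    """Bin a distance into one of five rows (assumed mapping: nearer -> lower row)."""
    bin_size = max_m / ROWS
    return min(int(distance_m // bin_size), ROWS - 1)

def absolute_pattern(row: int) -> list[int]:
    """Absolute encoding: a single vibration at the row for that distance."""
    return [row]

def relative_pattern(row: int) -> list[int]:
    """Relative encoding: a sweep of vibrations from the bottom row up to the target row."""
    return list(range(row + 1))
```

For example, an object binned to row 3 would produce the pulse sequence `[3]` under the absolute encoding and `[0, 1, 2, 3]` under the relative encoding.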
Individual differences in spontaneous facial expressions in people with visual impairment and blindness

People convey their spontaneous and voluntary emotions via facial expressions, which play a critical role in social interactions. However, less is known about the mechanisms of spontaneous emotion expression, especially in adults with visual impairment and blindness. Nineteen adults with visual impairment and blindness participated in interviews in which their spontaneous facial expressions were observed and analyzed via the Facial Action Coding System (FACS). We found a set of Action Units primarily engaged in expressing the spontaneous emotions, which were likely to be affected by participants' different characteristics. The results of this study could serve as evidence that adults with visual impairment and blindness show individual differences in spontaneous facial expressions of emotions.
Facial expressions of emotions by people with visual impairment and blindness via video conferencing
Many people, including those with visual impairment and blindness, use video conferencing tools to meet others. Video conferencing tools enable them to share facial expressions, which are considered among the most important aspects of human communication. This study aims to advance knowledge of how those with visual impairment and blindness share their facial expressions of emotions virtually. This study invited a convenience sample of 28 adults with visual impairment and blindness to Zoom video conferencing. The participants were instructed to pose facial expressions of basic human emotions (anger, fear, disgust, happiness, surprise, neutrality, calmness, and sadness), which were video recorded. The facial expressions were analyzed using the Facial Action Coding System (FACS), which encodes the movement of specific facial muscles called Action Units (AUs). This study found that a particular set of AUs was significantly engaged in expressing each emotion, except for sadness. Individual differences were also found in AUs, influenced by the participants' visual acuity levels and emotional characteristics such as valence and arousal levels. The research findings are anticipated to serve as a foundation of knowledge, contributing to developing emotion-sensing technologies for those with visual impairment and blindness.
Automated Pitch Convergence Improves Learning in a Social, Teachable Robot for Middle School Mathematics

Pedagogical agents have the potential to provide not only cognitive support to learners but also socio-emotional support through social behavior. Socio-emotional support can be a critical element of a learner's success, influencing their self-efficacy and motivation. Several social behaviors have been explored with pedagogical agents, including facial expressions, movement, and social dialogue; social dialogue in particular has been shown to positively influence interactions. In this work, we explore the role of paraverbal social behavior, that is, social behavior in the form of paraverbal cues such as tone of voice and intensity. To do this, we focus on the phenomenon of entrainment, where individuals adapt the paraverbal features of their speech to one another. Paraverbal entrainment in human-human studies has been found to be correlated with rapport and learning. In a study with 72 middle school students, we evaluate the effects of entrainment with a teachable robot, a pedagogical agent that learners teach how to solve ratio problems. We explore how a teachable robot that entrains and introduces social dialogue influences rapport and learning; we compare with two baseline conditions: a social condition, in which the robot speaks socially, and a non-social condition, in which the robot neither entrains nor speaks socially. We find that […]
Preferential attachment, homophily, and their consequences, such as scale-free (i.e. power-law) degree distributions, the glass ceiling effect (the unseen, yet unbreakable barrier that keeps minorities and women from rising to the upper rungs of the corporate ladder, regardless of their qualifications or achievements), and perception bias, are well studied in undirected networks. However, such consequences and the factors that lead to their emergence in directed networks (e.g. author-citation graphs, Twitter) are yet to be coherently explained in an intuitive, theoretically tractable manner using a single dynamical model. To this end, we present a theoretical and numerical analysis of the novel Directed Mixed Preferential Attachment model in order to explain the emergence of scale-free degree distributions and the glass ceiling effect in directed networks with two groups (minority and majority). Specifically, we first derive closed-form expressions for the power-law exponents of the in-degree and out-degree distributions of each of the two groups and then compare the derived exponents with each other to obtain useful insights. These insights include answers to questions such as: When does the minority group have an out-degree (or in-degree) distribution with a heavier tail compared to the majority group? What factors cause the tail of the out-degree […]