Title: Inclusion in human–machine interactions
Human–machine interactions (HMIs) describe how humans engage various systems, including those that are smart, autonomous, or both. Most HMIs either allow the human to control the machine (an instrument panel), allow the machine to obtain data (a heart monitor), or even both (a virtual reality setup). HMIs may be placed in three broad classes. In one class, the individual is active in the interaction—that is, the individual is the user or purchaser of a technology such as an automobile. In another class, the user is passive but consenting in the interaction—that is, the interaction occurs with their consent, such as the use of devices for medical diagnosis. There is also a class in which the user is passive and nonconsenting in the interaction, such as the use of facial recognition for law enforcement purposes.
Award ID(s):
1836952
PAR ID:
10344295
Author(s) / Creator(s):
Date Published:
Journal Name:
Science
Volume:
375
Issue:
6577
ISSN:
0036-8075
Page Range / eLocation ID:
149 to 150
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Innovative human–machine interfaces (HMIs) have attracted increasing attention in the field of system control and assistive devices for disabled people. Conventional HMIs designed around physical movement or spoken communication are neither effective nor applicable for severely disabled users. Here, a breath‐driven triboelectric sensor is reported, consisting of a soft fixator and two circular‐shaped triboelectric nanogenerators (TENGs), for self‐powered respiratory monitoring and smart system control. The sensor effectively detects breath variation and generates responsive electrical signals for different breath patterns without affecting normal respiration. A breathing‐driven HMI system is demonstrated that enables severely disabled people to control household electrical appliances, together with an intelligent respiration‐monitoring system for emergency alarms. The new system offers high sensitivity, good stability, low cost, and ease of use. This work not only expands the use of TENGs in self‐powered sensors but also opens a new avenue for developing assistive devices for disabled people through innovative HMIs.

     
  2.
    This work presents a novel prototype autonomous vehicle (AV) human-machine interface (HMI) in virtual reality (VR) that places a human-like visual embodiment in the driver’s seat of an AV to communicate AV intent to pedestrians in a crosswalk scenario. There is currently a gap in understanding the use of virtual humans in AV HMIs for pedestrian crossing, despite the demonstrated efficacy of human-like interfaces in improving human-machine relationships. We conduct a 3x2 within-subjects experiment in VR using our prototype to assess the effects of a virtual-human visual embodiment AV HMI on pedestrian crossing behavior and experience. In the experiment, participants walk across a virtual crosswalk in front of an AV; we collect how long they take to decide to cross and how long it takes them to reach the other side, in addition to their subjective preferences and feelings of safety. Of 26 participants, 25 preferred the condition with the most anthropomorphic features. An intermediate condition, in which a human-like virtual driver was present but exhibited no behaviors, was least preferred and also had a significant effect on time to decide. This work contributes the first empirical work on using human-like visual embodiments for AV HMIs.
  3. Abstract Background

    Human-human (HH) interaction mediated by machines (e.g., robots or passive sensorized devices), which we call human-machine-human (HMH) interaction, has been studied with increasing interest in the last decade. The use of machines allows the implementation of different forms of audiovisual and/or physical interaction in dyadic tasks. HMH interaction between two partners can improve the dyad’s ability to accomplish a joint motor task (task performance) beyond either partner’s ability to perform the task solo. It can also be used to more efficiently train an individual to improve their solo task performance (individual motor learning). We review recent research on the impact of HMH interaction on task performance and individual motor learning in the context of motor control and rehabilitation, and we propose future research directions in this area.

    Methods

    A systematic search was performed on the Scopus, IEEE Xplore, and PubMed databases. The search query was designed to find studies that involve HMH interaction in motor control and rehabilitation settings. Studies that do not investigate the effect of changing the interaction conditions were filtered out. Thirty-one studies met our inclusion criteria and were used in the qualitative synthesis.

    Results

    Studies are analyzed based on their results related to the effects of interaction type (e.g., audiovisual communication and/or physical interaction), interaction mode (collaborative, cooperative, co-active, and competitive), and partner characteristics. Visuo-physical interaction generally results in better dyadic task performance than visual interaction alone. In cases where the physical interaction between humans is described by a spring, there are conflicting results as to the effect of the stiffness of the spring. In terms of partner characteristics, having a more skilled partner improves dyadic task performance more than having a less skilled partner. However, conflicting results were observed in terms of individual motor learning.
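The spring-mediated physical coupling discussed above can be illustrated with a minimal simulation. The sketch below is illustrative only: the function names, the point-mass cursor dynamics, and the Euler integration scheme are assumptions for exposition, not details taken from any specific reviewed study. The stiffness `k` is the parameter for which the review reports conflicting results.

```python
def spring_coupling_force(x_self: float, x_partner: float, k: float) -> float:
    """Force rendered to one partner by a virtual spring of stiffness k
    connecting the two partners' cursor positions (Hooke's law)."""
    return k * (x_partner - x_self)

def step(x_self: float, v_self: float, x_partner: float, u_self: float,
         k: float, m: float = 1.0, dt: float = 0.01):
    """One Euler integration step for a point-mass cursor driven by the
    partner-coupling force plus this partner's own control input u_self."""
    force = spring_coupling_force(x_self, x_partner, k) + u_self
    v_next = v_self + (force / m) * dt
    x_next = x_self + v_next * dt
    return x_next, v_next
```

With `k = 0` the partners move independently (purely visual interaction); increasing `k` pulls each partner toward the other, which is one common way the reviewed dyadic setups realize physical coupling.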

    Conclusions

    Although it is difficult to draw clear conclusions as to which interaction type, mode, or partner characteristic may lead to optimal task performance or individual motor learning, these results show the possibility for improved outcomes through HMH interaction. Future work that focuses on selecting the optimal personalized interaction conditions and exploring their impact on rehabilitation settings may facilitate the transition of HMH training protocols to clinical implementations.

     
  4. An increasingly common use case for machine learning models is augmenting the abilities of human decision makers. For classification tasks where neither the human nor model are perfectly accurate, a key step in obtaining high performance is combining their individual predictions in a manner that leverages their relative strengths. In this work, we develop a set of algorithms that combine the probabilistic output of a model with the class-level output of a human. We show theoretically that the accuracy of our combination model is driven not only by the individual human and model accuracies, but also by the model's confidence. Empirical results on image classification with CIFAR-10 and a subset of ImageNet demonstrate that such human-model combinations consistently have higher accuracies than the model or human alone, and that the parameters of the combination method can be estimated effectively with as few as ten labeled datapoints. 
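One plausible way to realize such a combination is to treat the model's class probabilities as a prior and the human's hard label as an observation whose likelihood comes from a per-human confusion matrix estimated on labeled calibration data, then apply Bayes' rule. This is a hedged sketch of the general idea, not necessarily the exact algorithm of the cited work; the function name and confusion-matrix formulation are assumptions.

```python
def combine(model_probs, human_label, human_confusion):
    """Combine a model's probabilistic output with a human's class-level label.

    model_probs:     list of P(class c) from the model, used here as a prior.
    human_label:     index of the class the human chose.
    human_confusion: hypothetical matrix where entry [c][j] estimates
                     P(human says j | true class is c).

    Returns the normalized posterior over classes via Bayes' rule.
    """
    likelihood = [row[human_label] for row in human_confusion]
    unnormalized = [p * l for p, l in zip(model_probs, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]
```

Under this scheme a confident model (peaked `model_probs`) dominates an unreliable human, while a near-diagonal confusion matrix shifts the posterior toward the human's label, consistent with the abstract's point that combined accuracy depends on model confidence as well as individual accuracies.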