Title: Helping People Through Space and Time: Assistance as a Perspective on Human-Robot Interaction
As assistive robotics has expanded to many task domains, comparing assistive strategies among the varieties of research becomes increasingly difficult. To begin to unify the disparate domains into a more general theory of assistance, we present a definition of assistance, a survey of existing work, and three key design axes that occur in many domains and benefit from the examination of assistance as a whole. We first define an assistance perspective that focuses on understanding a robot that is in control of its actions but subordinate to a user’s goals. Next, we use this perspective to explore design axes that arise from the problem of assistance more generally and explore how these axes have comparable trade-offs across many domains. We investigate how the assistive robot handles other people in the interaction, how the robot design can operate in a variety of action spaces to enact similar goals, and how assistive robots can vary the timing of their actions relative to the user’s behavior. While these axes are by no means comprehensive, we propose them as useful tools for unifying assistance research across domains and as examples of how taking a broader perspective on assistance enables more cross-domain theorizing about assistance.
Award ID(s): 1943072
NSF-PAR ID: 10327642
Author(s) / Creator(s):
Date Published:
Journal Name: Frontiers in Robotics and AI
Volume: 8
ISSN: 2296-9144
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. Many robot-delivered health interventions aim to support people longitudinally at home to complement or replace in-clinic treatments. However, there is little guidance on how robots can support collaborative goal setting (CGS). CGS is the process in which a person works with a clinician to set and modify their goals for care; it can improve treatment adherence and efficacy. However, for home-deployed robots, clinicians will have limited availability to help set and modify goals over time, which necessitates that robots support CGS on their own. In this work, we explore how robots can facilitate CGS in the context of our robot CARMEN (Cognitively Assistive Robot for Motivation and Neurorehabilitation), which delivers neurorehabilitation to people with mild cognitive impairment (PwMCI). We co-designed robot behaviors for supporting CGS with clinical neuropsychologists and PwMCI, and prototyped them on CARMEN. We present feedback on how PwMCI envision these behaviors supporting goal progress and motivation during an intervention. We report insights on how to support this process with home-deployed robots and propose a framework to support HRI researchers interested in exploring this both in the context of cognitively assistive robots and beyond. This work supports designing and implementing CGS on robots, which will ultimately extend the efficacy of robot-delivered health interventions.
  2.
    Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms—explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to its appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants.
Such explanations track the robot’s actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications. 
  3. Much prior work on creating social agents that assist users relies on preconceived assumptions of what it means to be helpful. For example, it is common to assume that a helpful agent simply assists with achieving a user’s objective. However, as assistive agents become more widespread, human-agent interactions may be more ad hoc, providing opportunities for unexpected agent assistance. How would this affect human notions of an agent’s helpfulness? To investigate this question, we conducted an exploratory study (N=186) in which participants interacted with agents displaying unexpected, assistive behaviors in a Space Invaders game, and we studied factors that may influence perceived helpfulness in these interactions. Our results challenge the idea that human perceptions of the helpfulness of unexpected agent assistance can be derived from a universal, objective definition of help. We also find that humans reciprocate unexpected assistance but may not always recognize that they are in fact helping an agent. Based on our findings, we recommend considering personalization and adaptation when designing future assistive behaviors for prosocial agents that may try to help users in unexpected situations.
  4.
    As the field of engineering education continues to evolve, the number of early career scholars who identify as members of the discipline will continue to increase. These engineering education scholars will need to take strategic and intentional actions towards their professional goals and the goals of the engineering education community to be impactful within their positions. In other words, they must exercise agency. Accordingly, the purpose of this study is to investigate how the agency of early career engineering education scholars manifests across different contexts. Our overarching research question is: How do institutional, individual, and disciplinary field and societal features influence early career engineering education faculty members’ agency to impact engineering education in their particular positions? To investigate how faculty agency manifests across different contexts, we adopted a longitudinal research approach to focus on our own experiences as engineering education scholars. Due to the complexity of the phenomenon, more common approaches to qualitative research (e.g., interviews, surveys) were unlikely to illuminate the manifestation of agency, which requires capturing the nuances associated with one’s day-to-day experiences. Thus, to address our research purpose, we required a research design that provided a space to explore one’s acceptance of ambiguity, responses to disappointments, willingness to adapt, process of adapting, and experiences with collaboration. The poster presented will provide a preliminary version of the model along with a detailed description of the methods used to develop it. In short, we integrated collaborative inquiry and collaborative autoethnography as a means for building our model. Autoethnography is a research approach that critically examines personal experience to explore a cultural phenomenon.
Collaborative autoethnography, which leverages collective sense-making of the data, informed the structure of our data collection. Specifically, we documented our individual experiences over the course of six semesters by (1) completing weekly, monthly, pre-semester, and post-semester reflection questions; (2) participating in periodic activities and discussions focused on targeted areas of our theoretical framework and relevant literature; and (3) discussing the outcomes from both (1) and (2) in weekly meetings. Collaborative inquiry, in contrast to collaborative autoethnography, is a research approach where people pair reflection on practice with action through multiple inquiry cycles. Collaborative inquiry guided the topics of discussion within our weekly meetings and how we approached challenges and other aspects of our positions. The combination of these methodologies allowed us to deeply and systematically explore our own experiences, allowing us to develop a model of professional agency towards change in engineering education through collaborative sense-making. By sharing our findings with current and developing engineering education graduate programs, we will enable them to make programmatic changes to benefit current and future engineering education scholars. These findings also will provide a mechanism for divisions within ASEE to develop programming and resources to support the sustained success and impact of their members. 
  5. Shared control systems can make complex robot teleoperation tasks easier for users. These systems predict the user’s goal, determine the motion required for the robot to reach that goal, and combine that motion with the user’s input. Goal prediction is generally based on the user’s control input (e.g., the joystick signal). In this paper, we show that this prediction method is especially effective when users follow standard noisily optimal behavior models. In tasks with input constraints like modal control, however, this effectiveness no longer holds, so additional sources for goal prediction can improve assistance. We implement a novel shared control system that combines natural eye gaze with joystick input to predict people’s goals online, and we evaluate our system in a real-world, COVID-safe user study. We find that modal control reduces the efficiency of assistance according to our model, and when gaze provides a prediction earlier in the task, the system’s performance improves. However, gaze on its own is unreliable and assistance using only gaze performs poorly. We conclude that control input and natural gaze serve different and complementary roles in goal prediction, and using them together leads to improved assistance. 
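The shared-control pipeline described in the last abstract (infer the user’s goal from their input, then arbitrate between the user’s command and an assistive command) can be sketched in a few lines. This is an illustrative assumption-laden sketch, not the paper’s implementation: the function names, the exponential noisily-optimal input model, the Gaussian gaze model, and the linear blending rule are all stand-ins chosen for clarity.

```python
import numpy as np

def goal_posterior(robot_pos, joystick, gaze_point, goals, beta=5.0, sigma=0.3):
    """Hypothetical belief over candidate goals, fusing two cues:
    - joystick: a noisily-optimal input model rewards alignment of the
      joystick direction with the direction toward each goal;
    - gaze: a Gaussian falloff with distance from the gaze point."""
    scores = []
    for g in goals:
        to_goal = g - robot_pos
        norm = np.linalg.norm(to_goal)
        direction = to_goal / norm if norm > 1e-9 else to_goal
        align = float(np.dot(joystick, direction))          # cos-like alignment score
        input_lik = np.exp(beta * align)                    # input likelihood
        gaze_lik = np.exp(-np.linalg.norm(gaze_point - g) ** 2 / (2 * sigma ** 2))
        scores.append(input_lik * gaze_lik)
    scores = np.array(scores)
    return scores / scores.sum()                            # normalize to a belief

def blend(user_cmd, assist_cmd, confidence):
    """Linear arbitration: the more confident the prediction, the more
    the assistive command dominates the user's raw input."""
    return confidence * assist_cmd + (1.0 - confidence) * user_cmd
```

Under modal control, the joystick signal constrains which components of `direction` the alignment term can observe, which is one way to see why an independent cue such as gaze can disambiguate goals earlier in the task.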