-
Robotic telepresence enables users to navigate and experience remote environments. However, effective navigation and situational awareness depend on users’ prior knowledge of the environment, limiting the usefulness of these systems for exploring unfamiliar places. We explore how integrating location-aware, LLM-based narrative capabilities into a mobile robot can support remote exploration. We developed a prototype system, called NarraGuide, that provides narrative guidance for users to explore and learn about a remote place through a dialogue-based interface. We deployed our prototype in a geology museum, where remote participants (n = 20) used the robot to tour the museum. Our findings reveal how users perceived the robot’s role, engaged in dialogue during the tour, and expressed preferences for encountering bystanders. Our work demonstrates the potential of LLM-enabled robotic capabilities to deliver location-aware narrative guidance and enrich the experience of exploring remote environments.
Free, publicly-accessible full text available September 27, 2026.
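To make the location-aware narration idea concrete, here is a minimal sketch of how a dialogue turn could be grounded in the robot's current location before an LLM is queried. The zone map, the Exhibit structure, and the prompt format are illustrative assumptions, not details of the NarraGuide implementation:

```python
# Hypothetical sketch: grounding a tour-guide LLM prompt in the robot's location.
from dataclasses import dataclass

@dataclass
class Exhibit:
    name: str
    description: str

# Assumed mapping from localization zones to nearby exhibits.
ZONE_EXHIBITS = {
    "zone_meteorites": Exhibit("Meteorite Hall", "Iron and stony meteorites from major falls."),
    "zone_minerals": Exhibit("Mineral Gallery", "Fluorescent and crystalline mineral specimens."),
}

def build_prompt(zone_id: str, user_utterance: str, history: list[str]) -> str:
    """Inject the robot's current location into the dialogue context before the LLM call."""
    exhibit = ZONE_EXHIBITS.get(zone_id)
    location = (f"The robot is currently at the {exhibit.name}: {exhibit.description}"
                if exhibit else "The robot's location is unknown.")
    return "\n".join([
        "You are a narrative guide speaking through a telepresence robot in a geology museum.",
        location,
        "Conversation so far:",
        *history,
        f"Visitor: {user_utterance}",
        "Guide:",
    ])

print(build_prompt("zone_minerals", "What am I looking at?", []))
```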
-
Nations around the world are struggling with challenges related to an increasingly aging population coupled with a growing shortage of caregivers. Intelligent, interactive systems such as robots show great promise in helping to address this care crisis. While a wealth of research exists targeting various healthcare needs, the majority of this work focuses on short-term interactions between the care recipient and the technology and does not fully consider how care robots fit into the broader scope of day-to-day life in the facility. For the long-term, sustained use of technology to support care, we need to consider how the technology fits into the broader ecosystem, considering questions such as: Who is managing it? How does it alter existing workflows and routines? What extra resources (especially time) are required? Broadening technology design to encompass these ecological aspects is necessary, but it presents a rich set of challenges for robots and other intelligent systems, such as many stakeholders with different priorities and needs, safety constraints, and highly dynamic environments. Especially considering the critical role of human relationships in care, it is imperative to develop effective ways for intelligent systems to support healthcare practices rather than replace invaluable human contact. The goal of this dissertation is to help integrate robots into senior living facilities by considering how stakeholders such as caregivers and older adults can make use of autonomous robot capabilities to support their needs. To achieve this end, I present a design journey toward understanding how end-user development can support the care ecosystem and facilitate care robot integration. In this dissertation, I first present two design studies to build a case for end-user development and identify key design requirements. Building on this design work, I then present the design and evaluation of the CareAssist system, an exemplar end-user development tool that shows promise in helping to facilitate care robot integration. Overall, I do not suggest that end-user development is the only solution; instead, I show that it is a critical component of the broader vision of safe, effective care robots.
Free, publicly-accessible full text available May 1, 2026.
-
Over 1 billion people worldwide are estimated to experience significant disability, which impacts their ability to independently conduct activities of daily living (ADLs) such as eating, ambulating, and dressing. Physically assistive robots (PARs) have emerged as a promising technology to help people with disabilities conduct ADLs, thereby restoring independence and reducing caregiver burden. However, despite decades of research on PARs, deployments of them in end-users’ homes are still few and far between. This thesis focuses on robot-assisted feeding as a case study for how we can achieve in-home deployments of PARs. Our ultimate goal is to develop a robot-assisted feeding system that enables any user, in any environment, to feed themselves a meal of their choice in a way that aligns with their preferences. We collaborate closely with 2 community researchers with motor impairments to design, implement, and evaluate a robot-assisted feeding system that makes progress towards this ultimate goal. Specifically, this thesis presents the following work: 1. A systematic survey of research on PARs, identifying key themes and trends; 2. A formative study investigating the meal-related needs of people with motor impairments and their priorities regarding the design of robot-assisted feeding systems; 3. An action schema and unsupervised learning pipeline that uses human data to learn representative actions a robot can use to acquire diverse bites of food; and 4. The key system design considerations, both software and hardware, that enabled us to develop a robot-assisted feeding system to deploy in users’ homes. We evaluate the system with two studies: (1) an out-of-lab study where 5 participants and 1 community researcher use the robot to feed themselves a meal of their choice in a cafeteria, conference room, or office; and (2) a 5-day, in-home deployment where 1 community researcher uses the robot to feed himself 10 meals across various spatial, social, and activity contexts. The studies reveal promising results in terms of the usability and functionality of the system, as well as key directions for future work that are necessary to achieve the aforementioned ultimate goal. We present key lessons learned regarding in-home deployments of PARs: (1) spatial contexts are numerous, and customizability lets users adapt to them; (2) off-nominals will arise, and variable autonomy lets users overcome them; (3) assistive robots’ benefits depend on context; and (4) working closely with end-users and stakeholders is essential.
Free, publicly-accessible full text available May 1, 2026.
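As an illustration of the unsupervised pipeline mentioned in item 3, the sketch below clusters human bite-acquisition demonstrations into a small library of representative actions. The three-parameter action schema (approach pitch, fork roll, skewer depth) and the use of k-means are assumptions for illustration; the thesis's actual schema and pipeline may differ:

```python
# Illustrative clustering of human demonstrations into representative actions.
import numpy as np
from sklearn.cluster import KMeans

def learn_action_library(demos: np.ndarray, n_actions: int = 4) -> np.ndarray:
    """demos: (N, 3) array of [approach_pitch_rad, fork_roll_rad, skewer_depth_m]
    recorded from human data; returns n_actions cluster centers that serve as
    the robot's discrete bite-acquisition actions."""
    km = KMeans(n_clusters=n_actions, n_init=10, random_state=0).fit(demos)
    return km.cluster_centers_

# Usage with synthetic demonstrations standing in for real human data:
rng = np.random.default_rng(0)
demos = rng.normal([0.4, 0.0, 0.02], [0.1, 0.3, 0.005], size=(200, 3))
print(learn_action_library(demos))  # each row is one representative action
```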
-
With the introduction of Industry 5.0, there is a growing focus on human-robot collaboration and the empowerment of human workers through the use of robotic technologies. Collaborative robots, or cobots, are well suited for filling the needs of industry. Cobots prioritize safety and collaboration, giving them the unique ability to work in close proximity with people. This proximity can increase task productivity and efficiency while reducing ergonomic strain on human workers, as cobots can collaborate on tasks as teammates and support their human collaborators. However, effectively deploying and using cobots requires multidisciplinary knowledge spanning fields such as human factors and ergonomics, economics, and human-robot interaction. This knowledge barrier represents a growing challenge in industry, as workers lack the skills necessary to effectively leverage and realize the potential of cobots within their applications, resulting in cobots often being used non-collaboratively as a form of cheap automation. This presents several research opportunities for the creation of new cobot systems that support users in the creation of cobot interactions. The goal of this dissertation is to explore the use of abstraction and scaffolding supports within cobot systems to assist users in building human-robot collaborations. Specifically, this research (1) presents updates to the design of systems for planning and programming collaborative tasks, and (2) evaluates each system to understand how it can support user creation of cobot interactions. First, I present the CoFrame cobot programming system, a tool built on prior work, and illustrate how it supports user creation and understanding of cobot programs. Then, I present the evaluation of the system with domain experts, novices, and a real-world deployment to understand in which ways CoFrame does and does not successfully support users. I then describe the Allocobot system for allocating work and planning collaborative interactions, describing how it encodes multiple models of domain knowledge within its representation. Finally, I evaluate the Allocobot system in two real-world scenarios to understand how it produces and optimizes viable interaction plans.
Free, publicly-accessible full text available April 30, 2026.
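To illustrate the kind of allocation problem a tool like Allocobot addresses, the toy sketch below assigns each task to the human or the cobot so as to minimize a combination of makespan and ergonomic strain. The task data, cost model, and exhaustive search are all illustrative assumptions rather than Allocobot's actual representation or optimizer:

```python
# Toy human-robot work allocation with an illustrative cost model.
from itertools import product

TASKS = {             # task: (human_time_s, robot_time_s, human_strain_score)
    "pick":   (4.0,  6.0, 2.0),
    "insert": (3.0, 10.0, 1.0),
    "screw":  (8.0,  7.0, 3.0),
}

def cost(assignment: dict[str, str], strain_weight: float = 0.5) -> float:
    """Makespan of the two agents working in parallel, plus an ergonomic penalty."""
    human_time = sum(TASKS[t][0] for t, who in assignment.items() if who == "human")
    robot_time = sum(TASKS[t][1] for t, who in assignment.items() if who == "robot")
    strain = sum(TASKS[t][2] for t, who in assignment.items() if who == "human")
    return max(human_time, robot_time) + strain_weight * strain

# Exhaustively score every human/robot assignment and keep the cheapest.
best = min((dict(zip(TASKS, combo))
            for combo in product(["human", "robot"], repeat=len(TASKS))),
           key=cost)
print(best, cost(best))
```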
-
Automated planning is traditionally the domain of experts, utilized in fields like manufacturing and healthcare with the aid of expert planning tools. Recent advancements in LLMs have made planning more accessible to everyday users due to their potential to assist users with complex planning tasks. However, LLMs face several application challenges within end-user planning, including consistency, accuracy, and user trust issues. This paper introduces VeriPlan, a system that applies formal verification techniques, specifically model checking, to enhance the reliability and flexibility of LLMs for end-user planning. In addition to the LLM planner, VeriPlan includes three additional core features—a rule translator, flexibility sliders, and a model checker—that engage users in the verification process. Through a user study (n = 12), we evaluate VeriPlan, demonstrating improvements in the perceived quality, usability, and user satisfaction of LLMs. Our work shows the effective integration of formal verification and user-control features with LLMs for end-user planning tasks.
Free, publicly-accessible full text available April 25, 2026.
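A highly simplified sketch of the verify-then-replan loop described above: user constraints become machine-checkable rules, each candidate plan is checked, and violations are returned to the planner as feedback. Real model checking and the LLM call are stubbed out; all names here are assumptions, not VeriPlan's interfaces:

```python
# Simplified verify-then-replan loop; rules stand in for model-checked properties.
from typing import Callable, Optional

Rule = Callable[[list], Optional[str]]  # returns a violation message or None

def never(action: str) -> Rule:
    return lambda plan: (f"plan contains forbidden step '{action}'"
                         if action in plan else None)

def before(a: str, b: str) -> Rule:
    def check(plan):
        if a in plan and b in plan and plan.index(a) > plan.index(b):
            return f"'{a}' must occur before '{b}'"
        return None
    return check

def verify(plan: list, rules: list) -> list:
    return [msg for rule in rules if (msg := rule(plan)) is not None]

def plan_with_verification(llm_plan: Callable, rules: list, max_iters: int = 3) -> list:
    """Call the (stubbed) LLM planner, check the plan, and feed violations back."""
    feedback: list = []
    for _ in range(max_iters):
        plan = llm_plan(feedback)       # stand-in for the LLM planner call
        feedback = verify(plan, rules)
        if not feedback:
            return plan
    raise RuntimeError(f"no verified plan within {max_iters} iterations: {feedback}")

# A stand-in planner that corrects its plan once it receives feedback:
def fake_planner(feedback):
    return ["boil_water", "pour_tea"] if feedback else ["pour_tea", "boil_water"]

print(plan_with_verification(fake_planner,
                             [before("boil_water", "pour_tea"), never("leave_stove_on")]))
```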
-
The widespread adoption of Large Language Models (LLMs) and LLM-powered agents in multi-user settings underscores the need for reliable, usable methods to accommodate diverse preferences and resolve conflicting directives. Drawing on conflict resolution theory, we introduce a user-centered workflow for multi-user personalization comprising three stages: Reflection, Analysis, and Feedback. We then present MAP—a Multi-Agent system for multi-user Personalization—to operationalize this workflow. By delegating subtasks to specialized agents, MAP (1) retrieves and reflects on relevant user information, while enhancing reliability through agent-to-agent interactions, (2) provides detailed analysis for improved transparency and usability, and (3) integrates user feedback to iteratively refine results. Our user study findings (n = 12) highlight MAP’s effectiveness and usability for conflict resolution while emphasizing the importance of user involvement in resolution verification and failure management. This work highlights the potential of multi-agent systems to implement user-centered, multi-user personalization workflows and concludes by offering insights for personalization in multi-user contexts.
Free, publicly-accessible full text available April 25, 2026.
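The sketch below schematizes the three-stage Reflection, Analysis, and Feedback workflow, with one plain function standing in for each specialized agent; keyword matching substitutes for LLM retrieval. The data shapes and names are illustrative assumptions, not MAP's actual interfaces:

```python
# Schematic three-stage multi-user personalization workflow with stub "agents".
from dataclasses import dataclass

@dataclass
class Preference:
    user: str
    statement: str

def reflection_agent(prefs: list[Preference], request: str) -> list[Preference]:
    """Retrieve preferences relevant to the request (keyword overlap stands in for an LLM)."""
    words = set(request.lower().split())
    return [p for p in prefs if words & set(p.statement.lower().split())]

def analysis_agent(relevant: list[Preference]) -> dict:
    """Group preferences by user, flag conflicts, and draft a transparent rationale."""
    by_user: dict[str, list[str]] = {}
    for p in relevant:
        by_user.setdefault(p.user, []).append(p.statement)
    conflict = len(by_user) > 1
    return {"by_user": by_user, "conflict": conflict,
            "rationale": ("multiple users' preferences apply; propose a compromise"
                          if conflict else "a single user's preference applies")}

def feedback_agent(analysis: dict, approved: bool, comment: str) -> dict:
    """Fold user verification back in; a rejection would trigger another analysis round."""
    analysis["user_feedback"] = {"approved": approved, "comment": comment}
    return analysis

prefs = [Preference("Ana", "play jazz music in the evening"),
         Preference("Ben", "keep the evening quiet, no music")]
result = feedback_agent(analysis_agent(reflection_agent(prefs, "music for this evening")),
                        approved=False, comment="prefer Ben's rule on weekdays")
print(result)
```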
-
Intergenerational co-creation using technology between grandparents and grandchildren can be challenging due to differences in technological familiarity. AI has emerged as a promising tool to support co-creative activities, offering flexibility and creative assistance, but its role in facilitating intergenerational connection remains underexplored. In this study, we conducted a user study with 29 grandparent-grandchild groups engaged in AI-supported story creation to examine how AI-assisted co-creation can foster meaningful intergenerational bonds. Our findings show that grandchildren managed the technical aspects, while grandparents contributed creative ideas and guided the storytelling. AI played a key role in structuring the activity, facilitating brainstorming, enhancing storytelling, and balancing the contributions of both generations. The process fostered mutual appreciation, with each generation recognizing the strengths of the other, leading to an engaging and cohesive co-creation process. We offer design implications for integrating AI into intergenerational co-creative activities, emphasizing how AI can enhance connection across skill levels and technological familiarity.
Free, publicly-accessible full text available April 25, 2026.
-
Foundation models are rapidly improving the capability of robots to autonomously perform everyday tasks such as meal preparation, yet robots will still need to be instructed by humans due to model performance, the difficulty of capturing user preferences, and the need for user agency. Robots can be instructed using various methods: natural language conveys immediate instructions but can be abstract or ambiguous, whereas end-user programming supports longer-horizon tasks but interfaces face difficulties in capturing user intent. In this work, we propose using direct manipulation of images as an alternative paradigm to instruct robots, and introduce a specific instantiation called ImageInThat which allows users to perform direct manipulation on images in a timeline-style interface to generate robot instructions. Through a user study, we demonstrate the efficacy of ImageInThat to instruct robots in kitchen manipulation tasks, comparing it to a text-based natural language instruction method. The results show that participants were faster with ImageInThat and preferred to use it over the text-based method. Supplementary material including code can be found at: https://image-in-that.github.io/.
Free, publicly-accessible full text available March 4, 2026.
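A conceptual sketch of the direct-manipulation paradigm: if each timeline frame records object placements, robot instructions can be derived by diffing consecutive frames. The scene representation and the diffing rule are assumptions for illustration, not the ImageInThat implementation:

```python
# Deriving robot instructions by diffing consecutive frames of an edited timeline.
Scene = dict[str, tuple[float, float]]   # object name -> (x, y) in the image

def instructions_from_timeline(timeline: list[Scene]) -> list[str]:
    """Emit one instruction for every object whose placement changed between frames."""
    steps = []
    for before, after in zip(timeline, timeline[1:]):
        for obj, pos in after.items():
            if before.get(obj) != pos:
                steps.append(f"move {obj} to {pos}")
    return steps

timeline = [
    {"mug": (0.2, 0.5), "plate": (0.7, 0.5)},
    {"mug": (0.7, 0.3), "plate": (0.7, 0.5)},   # user dragged the mug
]
print(instructions_from_timeline(timeline))     # ['move mug to (0.7, 0.3)']
```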
-
Robots and other autonomous agents are well-positioned in the research discourse to support the care of people with challenges such as physical and/or cognitive disabilities. However, designing these robots can be complex as it involves considering a wide range of factors (e.g., individual needs, physical environment, technology capabilities, digital literacy), stakeholders (e.g., care recipients, formal and informal caregivers, technology developers), and contexts (e.g., hospitals, nursing homes, outpatient care facilities, private homes). The challenges are in gaining design insights for this unique use case and translating this knowledge into actionable, generalizable guidelines for other designers. This one-day workshop seeks to bring together researchers with diverse expertise and experience across academia, healthcare, and industry, spanning perspectives from multiple disciplines, including design, robotics, and human-computer interaction, with the primary goal being a consensus on best practices for generating and operationalizing design knowledge for robotic systems for care settings.
-
Novel end-user programming (EUP) tools enable on-the-fly (i.e., spontaneous, easy, and rapid) creation of interactions with robotic systems. These tools are expected to empower users in determining system behavior, although very little is understood about how end users perceive, experience, and use these systems. In this paper, we seek to address this gap by investigating end-user experience with on-the-fly robot EUP. We trained 21 end users to use an existing on-the-fly EUP tool, asked them to create robot interactions for four scenarios, and assessed their overall experience. Our findings provide insight into how these systems should be designed to better support end-user experience with on-the-fly EUP, focusing on user interaction with an automatic program synthesizer that resolves imprecise user input, the use of multimodal inputs to express user intent, and the general process of programming a robot.
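As a toy illustration of a synthesizer that resolves imprecise user input, the sketch below matches a vague utterance against program templates and proposes the best-scoring robot program for the user to confirm. The templates and matching heuristic are invented for illustration and do not reflect the specific tool studied in the paper:

```python
# Toy program synthesizer: resolve an imprecise utterance to a robot program.
TEMPLATES = {
    ("person", "greet"): ["wait_for(person_detected)", "say('Hello!')"],
    ("object", "deliver"): ["pick(object)", "navigate(recipient)", "handover()"],
}

def synthesize(utterance: str) -> list[str]:
    """Score each template by keyword overlap and return the best candidate program."""
    words = set(utterance.lower().split())
    scored = [(len(words & set(keys)), prog) for keys, prog in TEMPLATES.items()]
    score, program = max(scored)
    if score == 0:
        raise ValueError("no template matches; ask the user to rephrase")
    return program

print(synthesize("greet a person when they come in"))
# -> ["wait_for(person_detected)", "say('Hello!')"]
```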
