More than 1 billion people in the world are estimated to experience significant disability. These disabilities can impact people's ability to independently conduct activities of daily living, including ambulating, eating, dressing, taking care of personal hygiene, and more. Mobile and manipulator robots, which can move about human environments and physically interact with objects and people, have the potential to assist people with disabilities in activities of daily living. Although the vision of physically assistive robots has motivated research across subfields of robotics for decades, such robots have only recently become feasible in terms of capabilities, safety, and price. More and more research involves end-to-end robotic systems that interact with people with disabilities in real-world settings. In this article, we survey papers about physically assistive robots intended for people with disabilities from top conferences and journals in robotics, human–computer interaction, and accessible technology, to identify general trends and research methodologies. We then dive into three specific research themes—interaction interfaces, levels of autonomy, and adaptation—and present frameworks for how these themes manifest across physically assistive robot research. We conclude with directions for future research.
Robot-Centric Perception of Human Groups
The robotics community continually strives to create robots that are deployable in real-world environments, where robots are often expected to interact with human groups. To support this goal, we introduce a new method, the Robot-Centric Group Estimation Model (RoboGEM), which enables robots to detect groups of people. Much of the work reported in the literature focuses on dyadic interactions, leaving a gap in our understanding of how to build robots that can effectively team with larger groups of people. Moreover, many current methods rely on exocentric vision, where cameras and sensors are placed externally in the environment rather than onboard the robot; consequently, these methods are impractical for robots in unstructured, human-centric environments, which are novel and unpredictable. Furthermore, the majority of work on group perception is supervised, which can inhibit performance in real-world settings. RoboGEM addresses these gaps by predicting social groups solely from an egocentric perspective using color and depth (RGB-D) data. To achieve group predictions, RoboGEM leverages joint motion and proximity estimation. We evaluated RoboGEM on a challenging, egocentric, real-world dataset in which both the pedestrians and the robot are in motion simultaneously, and show that RoboGEM outperformed two state-of-the-art supervised methods in detection accuracy by up to 30%, with a lower miss rate. Our work will be helpful to the robotics community and serves as a milestone toward building unsupervised systems that will enable robots to work with human groups in real-world environments.
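To make the joint motion-and-proximity idea concrete, the sketch below clusters pedestrians that are both close together and moving coherently, assuming their positions and velocities have already been recovered from egocentric RGB-D detections. It is a minimal illustration rather than the published RoboGEM pipeline; the thresholds, the union-find clustering, and the `estimate_groups` helper are hypothetical.

```python
import numpy as np

# Hypothetical thresholds -- RoboGEM's actual parameters and features differ.
MAX_PAIR_DISTANCE = 1.5   # metres: pedestrians farther apart are not grouped
MAX_VELOCITY_DIFF = 0.5   # m/s: pedestrians moving too differently are not grouped

def estimate_groups(positions, velocities):
    """Cluster pedestrians into groups from one egocentric frame.

    positions:  (N, 2) array of pedestrian positions in the robot frame (m),
                e.g. recovered from RGB-D detections.
    velocities: (N, 2) array of pedestrian velocities in the robot frame (m/s),
                e.g. estimated by tracking detections over a few frames.
    Returns a list of groups, each a list of pedestrian indices.
    """
    n = len(positions)
    parent = list(range(n))          # union-find over pedestrians

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Link every pair that is both close together and moving coherently.
    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(positions[i] - positions[j]) < MAX_PAIR_DISTANCE
            coherent = np.linalg.norm(velocities[i] - velocities[j]) < MAX_VELOCITY_DIFF
            if close and coherent:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

if __name__ == "__main__":
    pos = np.array([[1.0, 0.0], [1.3, 0.4], [4.0, -2.0]])
    vel = np.array([[0.8, 0.1], [0.7, 0.2], [-0.9, 0.0]])
    print(estimate_groups(pos, vel))   # -> [[0, 1], [2]]
```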
- Award ID(s): 1734482
- PAR ID: 10221074
- Date Published:
- Journal Name: ACM Transactions on Human-Robot Interaction
- Volume: 9
- Issue: 3
- ISSN: 2573-9522
- Page Range / eLocation ID: 1 to 21
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This study addresses the challenge of integrating social norms into robot navigation, which is essential for ensuring that robots operate safely and efficiently in human-centric environments. Social norms, often unspoken and implicitly understood among people, are difficult to explicitly define and implement in robotic systems. To overcome this, we derive these norms from real human trajectory data, utilizing the comprehensive ATC dataset to identify the minimum social zones humans and robots must respect. These zones are integrated into the robot's navigation system by applying barrier functions, ensuring the robot consistently remains within the designated safety set. Simulation results demonstrate that our system effectively mimics human-like navigation strategies, such as passing on the right side and adjusting speed or pausing in constrained spaces. The proposed framework is versatile, easily comprehensible, and tunable, demonstrating the potential to advance the development of robots designed to navigate effectively in human-centric environments. (A minimal barrier-function sketch appears after this list.)
- Robotics researchers have been focusing on developing autonomous and human-like intelligent robots that are able to plan, navigate, manipulate objects, and interact with humans in both static and dynamic environments. These capabilities, however, are usually developed for direct interactions with people in controlled environments, and evaluated primarily in terms of human safety. Consequently, human-robot interaction (HRI) in scenarios with no intervention of technical personnel is under-explored. In the future, however, robots will be deployed in unstructured environments where they will be expected to work unsupervised on tasks that require direct interaction with humans and may not necessarily be collaborative. Developing such robots requires comparing the effectiveness and efficiency of similar design approaches and techniques. Yet issues regarding the reproducibility of results, comparing different approaches between research groups, and creating challenging milestones to measure performance and development over time make this difficult. Here we discuss the international robotics competition RoboCup as a benchmark for the progress and open challenges in AI and robotics development. The long-term goal of RoboCup is to develop a robot soccer team that can win against the world's best human soccer team by 2050. We selected RoboCup because it requires robots to play with and against humans in unstructured environments, such as uneven fields and natural lighting conditions, and it challenges the accepted dynamics of HRI. Considering the current state of robotics technology, RoboCup's goal opens up several research questions to be addressed by roboticists. In this paper, we (a) summarise the current challenges in robotics by using RoboCup development as an evaluation metric, (b) discuss the state-of-the-art approaches to these challenges and how they currently apply to RoboCup, and (c) present a path for future development in the given areas to meet RoboCup's goal of having robots play soccer against and with humans by 2050.
- Effective human-robot interaction is increasingly vital across various domains, including assistive robotics, emotional communication, entertainment, and industrial automation. Visual feedback, a common feature of current interfaces, may not be suitable for all environments. Audio feedback serves as a critical supplementary communication layer in settings where visibility is low or where robotic operations generate extensive data. Sonification, which transforms a robot's trajectory, motion, and environmental signals into sound, enhances users' comprehension of robot behavior. This improvement in understanding fosters more effective, safe, and reliable Human-Robot Interaction (HRI). The benefits of auditory data sonification are evident in real-world applications such as industrial assembly, robot-assisted rehabilitation, and interactive robotic exhibitions, where it promotes cooperation, boosts performance, and heightens engagement. Beyond conventional HRI environments, auditory data sonification shows substantial potential in managing complex robotic systems and intricate structures, such as hyper-redundant robots and robotic teams. These systems often challenge operators with complex joint monitoring, mathematical kinematic modeling, and visual behavior verification. This dissertation explores the sonification of motion in hyper-redundant robots and teams of industrial robots. It applies the Wave Space Sonification (WSS) framework developed by Hermann to motion datasets of protein molecules modeled as hyper-redundant mechanisms with numerous rigid nano-linkages, developing a sonification methodology for the dihedral-angle folding trajectories of protein molecules. Furthermore, it introduces a novel approach for the systematic sonification of robotic motion across varying configurations: by employing localized wave fields oriented within the robots' configuration space, this methodology generates auditory outputs with specific timbral qualities as robots move through predefined configurations or along certain trajectories. Additionally, the dissertation examines a team of wheeled industrial/service robots whose motion patterns are sonified using sinusoidal vibratory sounds, demonstrating the practical applications and benefits of this approach. (A toy sonification sketch appears after this list.)
- 
            null (Ed.)Mobile robots are increasingly populating homes, hospitals, shopping malls, factory floors, and other human environments. Human society has social norms that people mutually accept; obeying these norms is an essential signal that someone is participating socially with respect to the rest of the population. For robots to be socially compatible with humans, it is crucial for robots to obey these social norms. In prior work, we demonstrated a Socially-Aware Navigation (SAN) planner, based on Pareto Concavity Elimination Transformation (PaCcET), in a hallway scenario, optimizing two objectives so the robot does not invade the personal space of people. This article extends our PaCcET-based SAN planner to multiple scenarios with more than two objectives. We modified the Robot Operating System’s (ROS) navigation stack to include PaCcET in the local planning task. We show that our approach can accommodate multiple Human-Robot Interaction (HRI) scenarios. Using the proposed approach, we achieved successful HRI in multiple scenarios such as hallway interactions, an art gallery, waiting in a queue, and interacting with a group. We implemented our method on a simulated PR2 robot in a 2D simulator (Stage) and a pioneer-3DX mobile robot in the real-world to validate all the scenarios. A comprehensive set of experiments shows that our approach can handle multiple interaction scenarios on both holonomic and non-holonomic robots; hence, it can be a viable option for a Unified Socially-Aware Navigation (USAN).more » « less
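For the social-norm navigation entry above, the barrier-function idea can be sketched as a one-constraint safety filter on the robot's velocity command. This is a minimal sketch under assumed values: the social radius, the gain, and the single static pedestrian are placeholders, not the zones the authors derive from the ATC dataset.

```python
import numpy as np

# Illustrative parameters -- the paper derives its social zones from the ATC
# dataset; the radius and gain here are placeholder values.
SOCIAL_RADIUS = 0.8   # m: minimum social zone around a pedestrian
ALPHA = 1.0           # class-K gain of the barrier condition

def safe_velocity(p_robot, p_human, v_nominal):
    """Filter a nominal robot velocity with a single control barrier function.

    Barrier:   h(p) = ||p_robot - p_human||^2 - SOCIAL_RADIUS^2  >= 0
    Condition: dh/dt + ALPHA * h >= 0, with dh/dt = grad_h . v_robot
    (the pedestrian is treated as static over one control step).
    """
    diff = p_robot - p_human
    h = diff @ diff - SOCIAL_RADIUS ** 2
    grad_h = 2.0 * diff                       # gradient of h w.r.t. robot position

    slack = grad_h @ v_nominal + ALPHA * h    # barrier condition on the nominal command
    if slack >= 0.0:
        return v_nominal                      # nominal command already keeps the zone
    # Closed-form projection onto the single linear constraint grad_h . v + ALPHA*h >= 0
    return v_nominal - (slack / (grad_h @ grad_h)) * grad_h

if __name__ == "__main__":
    p_robot = np.array([0.0, 0.0])
    p_human = np.array([1.0, 0.0])
    v_nom = np.array([1.0, 0.0])              # heading straight at the pedestrian
    print(safe_velocity(p_robot, p_human, v_nom))  # slows so the barrier condition holds
```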
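For the sonification entry above, a toy example (not the Wave Space Sonification framework itself) maps a sampled joint-angle trajectory to a sinusoid whose pitch follows the angle; the base frequency, sweep range, and `sonify_trajectory` helper are arbitrary illustrative choices.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100  # Hz

def sonify_trajectory(angles, duration=2.0, base_freq=220.0, span=220.0):
    """Map a joint-angle trajectory to a sinusoid whose pitch follows the angle.

    angles: 1-D array of joint angles (rad) sampled along the motion.
    Returns int16 audio samples; the pitch sweeps from base_freq to base_freq+span
    as the normalised angle goes from 0 to 1.
    """
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    # Resample the trajectory to audio rate and normalise it to [0, 1].
    traj = np.interp(t, np.linspace(0.0, duration, len(angles)), angles)
    traj = (traj - traj.min()) / max(traj.max() - traj.min(), 1e-9)
    freq = base_freq + span * traj
    # Integrate the instantaneous frequency to get a continuous phase.
    phase = 2.0 * np.pi * np.cumsum(freq) / SAMPLE_RATE
    return (0.5 * np.sin(phase) * 32767).astype(np.int16)

if __name__ == "__main__":
    angles = np.sin(np.linspace(0.0, np.pi, 200))      # a toy joint sweep
    samples = sonify_trajectory(angles)
    with wave.open("trajectory.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(samples.tobytes())
```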
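For the socially-aware navigation entry above, the flavour of a multi-objective local planner can be sketched with a rollout-and-score loop. The sketch replaces PaCcET's Pareto-based transformation with a plain weighted sum and uses made-up parameters, so it illustrates only the competing objectives (goal progress versus personal space), not the published planner.

```python
import numpy as np

# Illustrative parameters; the actual planner uses PaCcET's Pareto-based
# transformation inside the ROS local planner rather than a fixed weighted sum.
PERSONAL_SPACE = 1.2   # m: radius of a person's personal space
HORIZON = 2.0          # s: rollout length for each candidate command
WEIGHT_SOCIAL = 2.0    # trade-off between goal progress and social cost

def score(cmd, pose, goal, people):
    """Score one candidate (vx, wz) command for a differential-drive robot."""
    x, y, th = pose
    vx, wz = cmd
    dt = HORIZON / 20
    social = 0.0
    for _ in range(20):                       # coarse forward rollout
        th += wz * dt
        x += vx * np.cos(th) * dt
        y += vx * np.sin(th) * dt
        for px, py in people:                 # penalise personal-space intrusion
            d = np.hypot(x - px, y - py)
            social += max(0.0, PERSONAL_SPACE - d)
    progress = np.hypot(goal[0] - x, goal[1] - y)   # distance left to the goal
    return progress + WEIGHT_SOCIAL * social

def best_command(pose, goal, people, candidates):
    return min(candidates, key=lambda c: score(c, pose, goal, people))

if __name__ == "__main__":
    candidates = [(v, w) for v in (0.2, 0.5, 0.8) for w in (-0.6, 0.0, 0.6)]
    pose, goal = (0.0, 0.0, 0.0), (4.0, 0.0)
    people = [(2.0, 0.2)]                     # a person standing near the straight path
    print(best_command(pose, goal, people, candidates))
```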