This article presents the design process of a supernumerary wearable robotic forearm (WRF), along with methods for stabilizing the robot's end-effector using human motion prediction. The device acts as a lightweight "third arm" for the user, extending their reach during handovers and manipulation in close-range collaborative activities. It was developed iteratively, following a user-centered design process that included an online survey, a contextual inquiry, and an in-person usability study. Simulations show that the WRF significantly enhances a wearer's reachable workspace volume, while remaining within biomechanical ergonomic load limits during typical usage scenarios. While operating the device in such scenarios, the user introduces disturbances in its pose through their body movements. We present two methods to overcome these disturbances: an autoregressive (AR) time-series model and a recurrent neural network (RNN). These models forecast the wearer's body movements to compensate for disturbances, with prediction horizons determined through linear system identification. The models were trained offline on a subset of the KIT Human Motion Database, and tested in five usage scenarios to keep the 3D pose of the WRF's end-effector static. The addition of the predictive models reduced the end-effector position errors by up to 26% compared to direct feedback control.
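The AR-based disturbance compensation described above can be sketched with a minimal autoregressive forecaster. This is an illustrative stand-in, not the paper's implementation: the AR order, the ordinary-least-squares fit, and the synthetic motion signal are all assumptions.

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients by ordinary least squares.

    `series` is a 1-D array of past observations, e.g. one coordinate
    of the wearer's body trajectory (illustrative stand-in for the
    paper's motion signals).
    """
    # Each row of X holds p consecutive past values (oldest first);
    # y is the value that follows them.
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(series, coeffs, horizon):
    """Roll the fitted AR model forward `horizon` steps ahead,
    feeding each prediction back in as an observation."""
    p = len(coeffs)
    window = list(series[-p:])  # most recent p values, oldest first
    preds = []
    for _ in range(horizon):
        nxt = float(np.dot(coeffs, window))
        preds.append(nxt)
        window = window[1:] + [nxt]
    return preds
```

In a compensation loop, the forecast over the chosen prediction horizon would be fed to the controller so the WRF's joints can counteract the anticipated body motion before it perturbs the end-effector.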
Design and Analysis of a Wearable Robotic Forearm
This paper presents the design of a wearable robotic forearm for close-range human-robot collaboration. The robot's function is to serve as a lightweight supernumerary third arm for shared-workspace activities. We present a functional prototype resulting from an iterative design process that included several user studies. An analysis of the robot's kinematics shows an increase in reachable workspace of 246% compared to natural human reach. The robot's degrees of freedom and range of motion support a variety of usage scenarios with the robot as a collaborative tool, including self-handovers, fetching objects while the human's hands are occupied, assisting human-human collaboration, and stabilizing an object. We analyze the biomechanical loads for these scenarios and find that the design is able to operate within human ergonomic wear limits. We then report on a pilot human-robot interaction study indicating that robot autonomy is more task-time efficient and preferred by users over direct voice control. These results suggest that the design presented here is a promising configuration for a lightweight wearable robotic augmentation device, and can serve as a basis for further research into human-wearable collaboration.
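A reachable-workspace comparison like the one above is commonly done by sampling joint angles, running forward kinematics, and voxelizing the resulting end-effector points. The sketch below shows the method on a generic 3-DOF arm; the link lengths, joint limits, and kinematic structure are made-up assumptions, not the WRF's actual parameters.

```python
import numpy as np

def reachable_volume(l1, l2, n=40000, voxel=0.05, seed=0):
    """Monte Carlo estimate of reachable workspace volume (m^3) for a
    toy 3-DOF arm: base yaw q1, shoulder pitch q2, elbow flexion q3.

    Illustrative only -- joint limits and structure are assumptions.
    """
    rng = np.random.default_rng(seed)
    q1 = rng.uniform(-np.pi, np.pi, n)          # base yaw
    q2 = rng.uniform(-np.pi / 2, np.pi / 2, n)  # shoulder pitch
    q3 = rng.uniform(0.0, 2.5, n)               # elbow flexion

    # Unit direction of the upper link, then of the forearm link.
    d1 = np.stack([np.cos(q1) * np.cos(q2),
                   np.sin(q1) * np.cos(q2),
                   np.sin(q2)], axis=1)
    a = q2 + q3
    d2 = np.stack([np.cos(q1) * np.cos(a),
                   np.sin(q1) * np.cos(a),
                   np.sin(a)], axis=1)
    pts = l1 * d1 + l2 * d2  # end-effector positions

    # Count distinct occupied voxels; volume = count * voxel^3.
    vox = np.unique(np.floor(pts / voxel).astype(int), axis=0)
    return len(vox) * voxel ** 3
```

Comparing the estimate with and without an extended forearm link gives the kind of workspace-gain ratio reported in the paper, though the 246% figure itself comes from the authors' analysis of the actual WRF kinematics.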
- Journal Name: 2018 IEEE International Conference on Robotics and Automation (ICRA)
- Page Range or eLocation-ID: 5489 to 5496
- Sponsoring Org: National Science Foundation
More Like this
This paper presents the design of a wearable robotic forearm that provides the user with an assistive third hand, along with a study of interaction scenarios for the design. Technical advances in sensors, actuators, and materials have made wearable robots feasible for personal use, but interaction with such robots has not been sufficiently studied. We describe the development of a working prototype along with three usability studies. In an online survey, we find that respondents presented with images and descriptions of the device see its use mainly as a functional tool in professional and military contexts. A subsequent contextual inquiry among building construction workers reveals three themes for user needs: extending a worker's reach, enhancing their safety and comfort through bracing and stabilization, and reducing their cognitive load in repetitive tasks. A laboratory study in which participants wear a working prototype of the robot finds that they prioritize lowered weight and enhanced dexterity, seek adjustable autonomy and transparency of the robot's intent, and prefer a robot that looks distinct from a human arm. These studies inform design implications for further development of wearable robotic arms.
Using robots that collaborate with humans to complete physical tasks in unstructured spaces is a rapidly growing approach to work. Particular examples where increased levels of automation can raise productivity include robots used as nursing assistants. In this paper, we present a mobile manipulator designed to assist nurses in patient-walking and patient-sitting tasks in hospital environments. The Adaptive Robotic Nursing Assistant (ARNA) robot consists of an omnidirectional base with an instrumented handlebar and a 7-DOF robotic arm. We describe its components and the novelties in its mechanisms and instrumentation. Experiments with human subjects gauging the usability and ease of use of the ARNA robot in a medical environment indicate that the robot is likely to see significant real-world use, and they serve as a basis for a discussion of how the robot's features facilitate its adaptability to other scenarios and environments.
We describe a physical interactive system for human-robot collaborative design (HRCD) consisting of a tangible user interface (TUI) and a robotic arm that simultaneously manipulates the TUI with the human designer. In an observational study of 12 participants exploring a complex design problem together with the robot, we find that human designers have to negotiate both the physical and the creative space with the machine. They also often ascribe social meaning to the robot's pragmatic behaviors. Based on these findings, we propose four considerations for future HRCD systems: managing the shared workspace, communicating preferences about design goals, respecting different design styles, and taking into account the social meaning of design acts.
In this paper, an adjustable autonomy framework is proposed for Human-Robot Collaboration (HRC) in which a robot uses a reinforcement learning mechanism guided by a human operator's rewards in an initially unknown workspace. Within the proposed framework, the robot can adjust its autonomy level in an HRC setting represented by a Markov Decision Process. A novel Q-learning mechanism with an integrated greedy approach enables the robot to learn correct actions and to recognize its mistakes when adjusting its autonomy level. The proposed HRC framework can adapt to changes in the workspace and can adjust the autonomy level, provided the human operator's rewards are consistent. The developed algorithm is applied to a realistic HRC setting involving a Baxter humanoid robot. The experimental results confirm the capability of the developed framework to successfully adjust the robot's autonomy level in response to changes in the human operator's commands or the workspace.
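The Q-learning core of such a framework can be sketched in tabular form. This is a generic epsilon-greedy Q-learning loop on a toy deterministic MDP, not the paper's algorithm: the states, actions, and reward table here are invented stand-ins, with the reward array playing the role of the human operator's feedback.

```python
import random

def q_learning(transitions, rewards, n_states, n_actions,
               episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy.

    transitions[s][a] -> next state (deterministic toy dynamics)
    rewards[s][a]     -> scalar reward, standing in for the human
                         operator's feedback signal.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0  # each episode starts from state 0
        for _ in range(20):
            # Explore with probability eps, otherwise act greedily.
            if rng.random() < eps:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2 = transitions[s][a]
            r = rewards[s][a]
            # Standard temporal-difference update.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

In an adjustable-autonomy setting, the learned Q-values would then inform when the robot acts on its own versus deferring to the operator, e.g. by treating low-confidence (low-value-gap) states as ones requiring human input.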