Vision-Based Calibration of Dual RCM-Based Robot Arms in Human-Robot Collaborative Minimally Invasive Surgery
- Award ID(s):
- 1637789
- PAR ID:
- 10075847
- Date Published:
- Journal Name:
- IEEE Robotics and Automation Letters
- Volume:
- 3
- Issue:
- 2
- ISSN:
- 2377-3774
- Page Range / eLocation ID:
- 672 to 679
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Human-centered environments provide affordances for and require the use of two-handed, or bimanual, manipulations. Robots designed to function in, and physically interact with, these environments have not been able to meet these requirements because standard bimanual control approaches have not accommodated the diverse, dynamic, and intricate coordinations between two arms to complete bimanual tasks. In this work, we enabled robots to more effectively perform bimanual tasks by introducing a bimanual shared-control method. The control method moves the robot’s arms to mimic the operator’s arm movements but provides on-the-fly assistance to help the user complete tasks more easily. Our method used a bimanual action vocabulary, constructed by analyzing how people perform two-hand manipulations, as the core abstraction level for reasoning about how to assist in bimanual shared autonomy. The method inferred which individual action from the bimanual action vocabulary was occurring using a sequence-to-sequence recurrent neural network architecture and turned on a corresponding assistance mode, signals introduced into the shared-control loop designed to make the performance of a particular bimanual action easier or more efficient. We demonstrate the effectiveness of our method through two user studies that show that novice users could control a robot to complete a range of complex manipulation tasks more successfully using our method compared to alternative approaches. We discuss the implications of our findings for real-world robot control scenarios.
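The inference step in the abstract above — classifying a window of two-arm motion into an action from a bimanual vocabulary, then switching on a matching assistance mode — could be sketched as follows. This is a minimal illustration, not the paper's architecture: the action names, feature sizes, and random (untrained) weights are all assumptions, and a plain RNN stands in for the sequence-to-sequence recurrent model the authors describe.

```python
import numpy as np

# Hypothetical bimanual action vocabulary (names are illustrative, not from the paper).
ACTIONS = ["left_reach", "right_reach", "coordinated_lift", "hold_and_work"]
# Each action maps to an assistance mode injected into the shared-control loop.
ASSISTANCE = {a: f"assist_{a}" for a in ACTIONS}

rng = np.random.default_rng(0)
FEAT, HID = 12, 16  # per-timestep two-arm feature size and hidden size (assumed)

# Random weights stand in for a trained recurrent model.
W_in = rng.normal(0, 0.1, (HID, FEAT))
W_h = rng.normal(0, 0.1, (HID, HID))
W_out = rng.normal(0, 0.1, (len(ACTIONS), HID))

def infer_action(window):
    """Run a plain RNN over a (T, FEAT) window of arm-motion features
    and return the most likely bimanual action label."""
    h = np.zeros(HID)
    for x in window:
        h = np.tanh(W_in @ x + W_h @ h)
    logits = W_out @ h
    return ACTIONS[int(np.argmax(logits))]

def shared_control_step(window):
    """Infer the current bimanual action and select its assistance mode."""
    action = infer_action(window)
    return action, ASSISTANCE[action]

demo = rng.normal(size=(30, FEAT))  # 30 timesteps of synthetic features
action, mode = shared_control_step(demo)
```

In the actual system the classifier would be trained on demonstrations of the vocabulary actions; the point here is only the control-flow shape: features in, action label out, assistance mode switched on.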
-
This paper addresses the visibility-based pursuit-evasion problem where a team of pursuer robots operating in a two-dimensional polygonal space seek to establish visibility of an arbitrarily fast evader. This is a computationally challenging task for which the best known complete algorithm takes time doubly exponential in the number of robots. However, recent advances that utilize sampling-based methods have shown progress in generating feasible solutions. An aspect of this problem that has yet to be explored concerns how to ensure that the robots can recover from catastrophic failures which leave one or more robots unexpectedly incapable of continuing to contribute to the pursuit of the evader. To address this issue, we propose an algorithm that can rapidly recover from catastrophic failures. When such failures occur, a replanning occurs, leveraging both the information retained from the previous iteration and the partial progress of the search completed before the failure to generate a new motion strategy for the reduced team of pursuers. We describe an implementation of this algorithm and provide quantitative results that show that the proposed method is able to recover from robot failures more rapidly than a baseline approach that plans from scratch.
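The recovery idea in this abstract — replan for the surviving pursuers while reusing the progress made before the failure, instead of restarting from scratch — can be sketched schematically. This is a toy illustration under assumed names, not the paper's algorithm: the stand-in planner just records which region has been cleared of the evader, and the recovery function seeds the new search with that retained progress.

```python
def replan_after_failure(team, failed, cleared_so_far, plan_fn):
    """On a catastrophic failure, drop the failed pursuers but keep the
    region already cleared of the evader, replanning only for survivors
    rather than restarting the search from scratch."""
    survivors = [r for r in team if r not in failed]
    # Seed the new search with the partial progress retained from before the failure.
    return plan_fn(survivors, start_cleared=cleared_so_far)

def toy_planner(robots, start_cleared=frozenset()):
    """Stand-in for a sampling-based pursuit planner: each surviving
    robot 'clears' one additional cell beyond the retained progress."""
    cleared = set(start_cleared)
    for i, _robot in enumerate(robots):
        cleared.add(f"cell_{len(start_cleared) + i}")
    return {"robots": robots, "cleared": cleared}

# Example: pursuer p2 fails after the team has cleared cell_0.
team = ["p1", "p2", "p3"]
new_plan = replan_after_failure(team, {"p2"}, {"cell_0"}, toy_planner)
```

The design point is that `plan_fn` accepts prior progress as an input, which is what lets recovery be faster than a from-scratch baseline.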
-
The field of human-robot interaction has been rapidly expanding, but an ever-present obstacle facing this field is developing accessible, reliable, and effective forms of communication. It is often imperative to the efficacy of the robot and the overall human-robot interaction that a robot be capable of expressing information about itself to humans in the environment. Amidst the evolving approaches to this obstacle is the use of light as a communication modality. Light-based communication effectively captures attention, can be seen at a distance, and is commonly utilized in our daily lives. Our team explored the ways light-based signals on robots are being used to improve human understanding of robot operating state. In other words, we sought to determine how light-based signals are being used to help individuals identify the conditions (e.g., capabilities, goals, needs) that comprise and dictate a robot’s current functionality. We identified four operating states (i.e., “Blocked”, “Error”, “Seeking Interaction”, “Not Seeking Interaction”) in which light is utilized to increase individuals’ understanding of the robot’s operations. These operating states are expressed through manipulation of three visual dimensions of the onboard lighting features of robots (i.e., color, pattern of lighting, frequency of pattern). In our work, we outline how these dimensions vary across operating states and the effect they have on human understanding. We also provide potential explanations for the importance of each dimension. Additionally, we discuss the main shortcomings of this technology. The first is the overlapping use of combinations of dimensions across operating states. The remainder relate to the difficulties of leveraging color to convey information.
Finally, we provide considerations on how this technology might be improved going into the future through the standardization of light-based signals and increasing the amount of information provided within interactions between agents.

