Abstract One of the key differentiators between biological and artificial systems is the dynamic plasticity of living tissues, enabling adaptation to different environmental conditions, tasks, or damage by reconfiguring physical structure and behavioral control policies. A lack of dynamic plasticity is a significant limitation for artificial systems that must robustly operate in the natural world. Recently, researchers have begun to leverage insights from regenerating and metamorphosing organisms, designing robots capable of editing their own structure to more efficiently perform tasks under changing demands, and creating new algorithms to control these changing anatomies. Here, an overview of the literature related to robots that change shape to enhance and expand their functionality is presented. Related grand challenges, including shape sensing, finding, and changing, which rely on innovations in multifunctional materials, distributed actuation and sensing, and somatic control to enable next-generation shape-changing robots, are also discussed.
Active Learning in Robotics: A Review of Control Principles
Active learning is a decision-making process. In both abstract and physical settings, active learning demands both analysis and action. This is a review of active learning in robotics, focusing on methods amenable to the demands of embodied learning systems. Robots must be able to learn efficiently and flexibly through continuous online deployment. This poses a distinct set of control-oriented challenges—one must choose suitable measures as objectives, synthesize real-time control, and produce analyses that guarantee performance and safety with limited knowledge of the environment or robot itself. In this work, we survey the fundamental components of robotic active learning systems. We discuss classes of learning tasks that robots typically encounter, measures with which they gauge the information content of observations, and algorithms for generating action plans. Moreover, we provide a variety of examples—from environmental mapping to nonparametric shape estimation—that highlight the qualitative differences between learning tasks, information measures, and control techniques. We conclude with a discussion of control-oriented open challenges, including safety-constrained learning and distributed learning.
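To make the information measures and action-selection ideas above concrete, the following is a minimal sketch of greedy, entropy-driven view selection for environmental mapping, one of the settings the review surveys. The grid model, sensor footprint, and all function names are illustrative assumptions, not code from the review.

import numpy as np

def cell_entropy(p):
    """Shannon entropy of independent Bernoulli occupancy beliefs."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_information(belief, x, y, radius=2):
    """Score a candidate sensing location by the total uncertainty
    (entropy) of the grid cells inside an assumed square sensor footprint."""
    h, w = belief.shape
    xs = slice(max(0, x - radius), min(h, x + radius + 1))
    ys = slice(max(0, y - radius), min(w, y + radius + 1))
    return cell_entropy(belief[xs, ys]).sum()

def select_next_view(belief, candidates):
    """Greedy one-step active-learning rule: go where the next
    measurement is expected to be most informative."""
    scores = [expected_information(belief, x, y) for x, y in candidates]
    return candidates[int(np.argmax(scores))]

# Usage: a 20x20 occupancy belief at maximum uncertainty (p = 0.5),
# except for a corner that has already been observed.
belief = np.full((20, 20), 0.5)
belief[:5, :5] = 0.95
candidates = [(2, 2), (10, 10), (17, 3)]
print(select_next_view(belief, candidates))  # favors an unexplored region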
- Award ID(s): 1837515
- PAR ID: 10283233
- Date Published:
- Journal Name: Mechatronics
- Volume: 77
- ISSN: 0957-4158
- Page Range / eLocation ID: 102576
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Goal-conditioned policies, such as those learned via imitation learning, provide an easy way for humans to influence what tasks robots accomplish. However, these robot policies are not guaranteed to execute safely or to succeed when faced with out-of-distribution goal requests. In this work, we enable robots to know when they can confidently execute a user’s desired goal, and to automatically suggest safe alternatives when they cannot. Our approach is inspired by control-theoretic safety filtering, wherein a safety filter minimally adjusts a robot’s candidate action to be safe. Our key idea is to pose the suggestion of alternatives as a safe control problem in goal space, rather than in action space. Offline, we use reachability analysis to compute a goal-parameterized reach-avoid value network that quantifies the safety and liveness of the robot’s pretrained policy. Online, our robot uses the reach-avoid value network as a safety filter, monitoring the human’s given goal and actively suggesting alternatives that are similar but meet the safety specification. We demonstrate our Safe ALTernatives (SALT) framework in simulation experiments with indoor navigation and Franka Panda tabletop manipulation, and with both discrete and continuous goal representations. We find that SALT is able to learn to predict successful and failed closed-loop executions, is a less pessimistic monitor than open-loop uncertainty quantification, and proposes alternatives that consistently align with those that people find acceptable. (An illustrative sketch of this goal-space filtering idea appears after this list.)
-
As technology advances, Human-Robot Interaction (HRI) is boosting overall system efficiency and productivity. However, allowing robots to operate in close proximity to humans inevitably places higher demands on precise human motion tracking and prediction. Datasets that contain both humans and robots operating in a shared space are receiving growing attention because they may facilitate a variety of robotics and human-systems research. Datasets that capture HRI during daily activities with rich information beyond video are rare. In this paper, we introduce a novel dataset that focuses on social navigation between humans and robots in a future-oriented Wholesale and Retail Trade (WRT) environment (https://uf-retail-cobot-dataset.github.io/). Eight participants performed tasks commonly undertaken by consumers and retail workers. More than 260 minutes of data were collected, including robot and human trajectories, human full-body motion capture, eye gaze directions, and other contextual information. Comprehensive descriptions of each category of data stream, as well as potential use cases, are included. Furthermore, analyses that combine multiple data sources, along with future directions, are discussed. (A hypothetical sketch of aligning such multi-rate streams appears after this list.)
-
Despite the potential of reinforcement learning (RL) for building general-purpose robotic systems, training RL agents to solve robotics tasks remains challenging due to the difficulty of exploration in purely continuous action spaces. Addressing this problem is an active area of research, with the majority of focus on improving RL methods via better optimization or more efficient exploration. An alternative but important component to improve is the interface between the RL algorithm and the robot. In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy. These parameterized primitives are expressive, simple to implement, enable efficient exploration, and can be transferred across robots, tasks, and environments. We perform a thorough empirical study across challenging tasks in three distinct domains with image input and a sparse terminal reward. We find that our simple change to the action interface substantially improves both learning efficiency and task performance irrespective of the underlying RL algorithm, significantly outperforming prior methods that learn skills from offline expert data. Code and videos are available at https://mihdalal.github.io/raps/. (A minimal sketch of a parameterized-primitive interface appears after this list.)
-
Collaborative robots should ideally use low-torque actuators for passive safety reasons. However, some applications require these collaborative robots to reach deep into confined spaces while assisting a human operator in physically demanding tasks. In this paper, we consider the use of in-situ collaborative robots (ISCRs) that balance the conflicting demands of passive safety, which dictates low-torque actuation, and the need to reach into deep confined spaces. We consider the judicious use of bracing as a possible solution to these conflicting demands and present a modeling framework that accounts for the constrained kinematics and the effect of bracing on end-effector compliance. We then define a redundancy resolution framework that minimizes the directional compliance of the end-effector while maximizing end-effector dexterity. Kinematic simulation results show that the redundancy resolution strategy successfully decreases compliance and improves kinematic conditioning while satisfying the constraints imposed by the bracing task. Applications of this modeling framework can support future research on the choice of bracing locations and the formation of an admittance control framework for collaborative control of ISCRs under bracing constraints. Such robots can benefit workers in the future by reducing the physiological burdens that contribute to musculoskeletal injury. (A rough sketch of nullspace redundancy resolution in this spirit appears after this list.)
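Regarding the Safe ALTernatives (SALT) entry above, the following is an illustrative sketch of the goal-space filtering idea: a goal-parameterized reach-avoid value is thresholded to decide whether a requested goal is certified and, if not, to propose the closest certified alternative. The value function here is a hand-coded stub, and the sign convention and names are assumptions; it is not the learned network from that work.

import numpy as np

def reach_avoid_value(state, goal):
    """Stub for a goal-parameterized reach-avoid value network.
    Assumed convention: value >= 0 means the pretrained policy can
    reach `goal` from `state` while avoiding failure."""
    obstacle = np.array([0.5, 0.5])
    margin = np.linalg.norm(goal - obstacle) - 0.3      # keep clear of obstacle
    reachability = 1.0 - np.linalg.norm(goal - state)   # nearby goals reachable
    return min(margin, reachability)

def filter_goal(state, requested_goal, candidate_goals):
    """If the requested goal is certified, keep it; otherwise suggest the
    closest certified alternative (None if no candidate passes)."""
    if reach_avoid_value(state, requested_goal) >= 0:
        return requested_goal
    safe = [g for g in candidate_goals if reach_avoid_value(state, g) >= 0]
    if not safe:
        return None
    dists = [np.linalg.norm(g - requested_goal) for g in safe]
    return safe[int(np.argmin(dists))]

# Usage: the requested goal sits on the obstacle, so an alternative is proposed.
state = np.array([0.0, 0.0])
requested = np.array([0.55, 0.55])
candidates = [np.array([0.9, 0.1]), np.array([0.1, 0.9]), np.array([0.2, 0.2])]
print(filter_goal(state, requested, candidates))  # returns the nearest safe goal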
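For the WRT social-navigation dataset entry above, here is a hypothetical sketch of aligning multi-rate streams (robot trajectory and gaze direction, as examples) onto a common clock. The field names, rates, and layout are assumptions and do not reflect the dataset's actual schema, which is documented at the project page.

import numpy as np

def resample(t_src, values, t_query):
    """Linearly interpolate each column of a (T, D) stream onto t_query."""
    return np.column_stack([np.interp(t_query, t_src, values[:, d])
                            for d in range(values.shape[1])])

# Fake streams at different rates: 10 Hz robot pose, 60 Hz gaze direction.
t_robot = np.linspace(0.0, 10.0, 100, endpoint=False)   # 10 Hz timestamps
robot_xy = np.column_stack([np.cos(t_robot), np.sin(t_robot)])
t_gaze = np.linspace(0.0, 10.0, 600, endpoint=False)    # 60 Hz timestamps
gaze_dir = np.column_stack([np.cos(0.5 * t_gaze), np.sin(0.5 * t_gaze)])

# Align everything to a shared 30 Hz clock for joint analysis.
t_common = np.linspace(0.0, 10.0, 300, endpoint=False)
aligned = np.hstack([resample(t_robot, robot_xy, t_common),
                     resample(t_gaze, gaze_dir, t_common)])
print(aligned.shape)  # (300, 4): robot x, y and gaze x, y per common timestep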
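For the robot action primitives (RAPS) entry above, here is a minimal sketch of a parameterized-primitive interface in that spirit: a hand-specified library of primitives whose discrete choice and continuous arguments come from a policy. The primitive set, the flat argument layout, and the print-based controllers are illustrative assumptions, not the released code at the project page.

from dataclasses import dataclass
from typing import Callable, Dict
import numpy as np

@dataclass
class Primitive:
    n_args: int                        # dimension of the continuous argument
    run: Callable[[np.ndarray], None]  # low-level controller that executes it

def make_library() -> Dict[str, Primitive]:
    """Hand-specified library; each entry wraps a scripted low-level controller."""
    return {
        "reach": Primitive(3, lambda a: print(f"reach offset {a}")),
        "grasp": Primitive(1, lambda a: print(f"close gripper to {a[0]:.2f}")),
        "push":  Primitive(2, lambda a: print(f"push along {a}")),
    }

def execute_policy_output(library, logits, args):
    """The policy emits discrete logits over primitives plus a flat argument
    vector; pick the argmax primitive and slice out its arguments."""
    names = list(library)
    idx = int(np.argmax(logits))
    prim = library[names[idx]]
    start = sum(library[n].n_args for n in names[:idx])
    prim.run(args[start:start + prim.n_args])

# Usage with a fake policy output (would normally come from the RL agent).
lib = make_library()
execute_policy_output(lib, logits=np.array([0.1, 2.0, -1.0]),
                      args=np.random.uniform(-1, 1, size=6))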
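Finally, for the bracing/ISCR entry above, the following is a rough sketch of nullspace redundancy resolution that trades directional end-effector compliance against manipulability for a planar 3R arm. The arm model, joint stiffnesses, weights, and finite-difference gradient are assumptions for illustration; the paper's bracing constraints are not modeled here.

import numpy as np

L = np.array([0.4, 0.3, 0.2])          # link lengths (m), assumed
K_joint = np.diag([80.0, 60.0, 40.0])  # joint stiffnesses (N*m/rad), assumed

def jacobian(q):
    """2x3 positional Jacobian of the planar 3R arm."""
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] =  np.sum(L[i:] * np.cos(c[i:]))
    return J

def cost(q, d=np.array([1.0, 0.0])):
    """Directional compliance along d minus a small dexterity (manipulability) bonus."""
    J = jacobian(q)
    C = J @ np.linalg.inv(K_joint) @ J.T   # task-space compliance matrix
    w = np.sqrt(np.linalg.det(J @ J.T))    # Yoshikawa manipulability
    return float(d @ C @ d) - 0.01 * w

def resolved_rate_step(q, xdot, k=5.0, eps=1e-4):
    """Primary task via the pseudoinverse; secondary cost pushed into the nullspace."""
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    grad = np.array([(cost(q + eps * e) - cost(q - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
    null_proj = np.eye(3) - J_pinv @ J
    return J_pinv @ xdot - k * null_proj @ grad

# Usage: one resolved-rate step that tracks a small end-effector velocity
# while descending the compliance/dexterity cost in the nullspace.
q = np.array([0.4, 0.6, -0.3])
print(resolved_rate_step(q, xdot=np.array([0.02, 0.0])))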