- Award ID(s): 2133091
- PAR ID: 10534286
- Publisher / Repository: IEEE
- Date Published:
- ISBN: 978-1-6654-9190-7
- Page Range / eLocation ID: 2752 to 2759
- Format(s): Medium: X
- Location: Detroit, MI, USA
- Sponsoring Org: National Science Foundation
More Like this
- Legged robots have shown remarkable advantages in navigating uneven terrain. However, realizing effective locomotion and manipulation tasks on quadruped robots is still challenging. In addition, object and terrain parameters are generally unknown to the robot in these problems. Therefore, this paper proposes a hierarchical adaptive control framework that enables legged robots to perform loco-manipulation tasks without prior assumptions on the object's mass, the friction coefficient, or the slope of the terrain. In our approach, we first present an adaptive manipulation control to regulate the contact force needed to manipulate an unknown object on unknown terrain. We then introduce a unified model predictive control (MPC) for loco-manipulation that accounts for the manipulation force in the robot dynamics. The proposed MPC framework can thus effectively regulate the interaction force between the robot and the object while keeping the robot balanced. Experimental validation of our proposed approach was successfully conducted on a Unitree A1 robot, allowing it to manipulate an unknown time-varying load of up to 7 kg (60% of the robot's weight). Moreover, our framework enables fast adaptation to unknown slopes and surfaces with different friction coefficients.
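  As a rough illustration of the adaptive idea in this abstract, the sketch below runs a gradient-style adaptation law that drives a contact-force error to zero for an unknown object mass. The dynamics, gains, and variable names are assumptions for illustration, not the authors' implementation.

  ```python
  # Minimal sketch of an adaptive update for an unknown manipulated-object
  # parameter (here, mass), in the spirit of the paper's adaptive
  # manipulation control. Gains and model are illustrative assumptions.
  import numpy as np

  g = 9.81          # gravity [m/s^2]
  dt = 0.002        # control step [s]
  gamma = 5.0       # adaptation gain (assumed)

  m_hat = 1.0       # initial guess of the unknown object mass [kg]
  m_true = 7.0      # ground truth, unknown to the controller

  for _ in range(5000):
      # Commanded contact force based on the current mass estimate.
      f_cmd = m_hat * g
      # Force-tracking error (simulated: the object needs m_true * g).
      e = m_true * g - f_cmd
      # Gradient-style adaptation law: drive the force error to zero.
      m_hat += gamma * e * dt

  print(f"estimated mass: {m_hat:.2f} kg")  # approaches 7.00
  ```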
- Recent studies on quadruped robots have focused on either locomotion or mobile manipulation using a robotic arm. However, legged robots can manipulate large objects using non-prehensile manipulation primitives, such as planar pushing, to drive the object to a desired location. This paper presents a novel hierarchical model predictive control (MPC) for contact optimization of the manipulation task. Using two cascading MPCs, we split the loco-manipulation problem into two parts: the first optimizes both the contact force and the contact location between the robot and the object, and the second regulates the desired interaction force through the robot's locomotion. Our method is successfully validated in both simulation and hardware experiments. While the baseline locomotion MPC fails to follow the desired trajectory of the object, our proposed approach can effectively control both the object's position and orientation with minimal tracking error. This capability also allows us to perform obstacle avoidance for both the robot and the object during the loco-manipulation task.
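  To make the cascaded structure concrete, here is a hedged sketch of the outer contact-optimization step under a simplified quasi-static planar pushing model. The model, bounds, and all parameters are assumed; the inner force-regulating loop is only noted in a comment.

  ```python
  # Illustrative sketch of the outer "contact optimization" step in a
  # cascaded scheme: pick a pushing contact offset and force so the
  # object's planar acceleration tracks a desired value. The pushing
  # model and parameters are simplifying assumptions, not the paper's.
  import numpy as np
  from scipy.optimize import minimize

  m, I = 5.0, 0.4          # object mass [kg] and yaw inertia [kg m^2] (assumed)
  half_w = 0.25            # half-width of the pushed face [m]
  a_des = np.array([0.3, 0.0, 0.4])  # desired (ax, ay, yaw) acceleration

  def object_accel(z):
      f, s = z             # f: normal push force [N]; s: contact offset [m]
      ax = f / m           # push along the face normal (+x)
      ay = 0.0
      alpha = (-s) * f / I # moment of an offset push about the CoM
      return np.array([ax, ay, alpha])

  def cost(z):
      return np.sum((object_accel(z) - a_des) ** 2)

  res = minimize(cost, x0=[1.0, 0.0],
                 bounds=[(0.0, 50.0), (-half_w, half_w)])
  f_opt, s_opt = res.x
  print(f"push force {f_opt:.2f} N at offset {s_opt:.3f} m")
  # An inner loop (not shown) would then regulate f_opt through the
  # robot's locomotion controller, as in the paper's second MPC.
  ```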
- Agile legged robots have proven to be highly effective in navigating and performing tasks in complex and challenging environments, including disaster zones and industrial settings. However, these applications commonly require the capability of carrying heavy loads while maintaining dynamic motion. Therefore, this article presents a novel methodology for incorporating adaptive control into a force-based control system. Recent advancements in the control of quadruped robots show that force control can effectively realize dynamic locomotion over rough terrain. By integrating adaptive control into the force-based controller, our proposed approach retains the advantages of the baseline framework while adapting to significant model uncertainties and unknown terrain impact models. Experimental validation was successfully conducted on the Unitree A1 robot. With our approach, the robot can carry heavy loads (up to 50% of its weight) while performing dynamic gaits such as fast trotting and bounding across uneven terrain.
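  The following toy sketch shows one way an adaptive term can be layered onto a baseline force-based controller to absorb an unknown payload, loosely in the spirit of this article. The one-dimensional trunk model, gains, and payload value are illustrative assumptions only.

  ```python
  # Toy sketch: a force-based height controller augmented with an
  # adaptive estimate of the force from an unknown payload. Model,
  # gains, and names are assumptions, not the article's controller.
  import numpy as np

  dt, g = 0.002, 9.81
  m_robot = 12.0            # nominal trunk mass [kg]
  m_load = 6.0              # unknown payload (50% of body weight)
  kp, kd, gamma = 400.0, 40.0, 200.0

  z, zd = 0.28, 0.0         # CoM height [m] and velocity [m/s]
  z_des = 0.30
  theta_hat = 0.0           # adaptive estimate of the unmodeled load force [N]

  for _ in range(10000):
      e, ed = z_des - z, -zd
      # Baseline force controller with nominal gravity compensation,
      # plus the adaptive payload term.
      f_z = m_robot * g + kp * e + kd * ed + theta_hat
      # Adaptation law: grow the estimate while tracking error persists.
      theta_hat += gamma * e * dt
      # Simulated vertical trunk dynamics with the unknown load attached.
      zdd = (f_z - (m_robot + m_load) * g) / (m_robot + m_load)
      zd += zdd * dt
      z += zd * dt

  print(f"height error: {abs(z_des - z) * 1000:.1f} mm, "
        f"load estimate: {theta_hat:.1f} N (true {m_load * g:.1f} N)")
  ```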
- During in-hand manipulation, robots must continuously estimate the pose of the object in order to generate appropriate control actions. The performance of pose-estimation algorithms hinges on the robot's sensors being able to detect discriminative geometric object features, but previous sensing modalities are unable to make such measurements robustly. The robot's fingers can occlude the view of environment- or robot-mounted image sensors, and tactile sensors can only measure the local areas of contact. Motivated by fingertip-embedded proximity sensors' robustness to occlusion and ability to measure beyond the local areas of contact, we present the first evaluation of proximity-sensor-based pose estimation for in-hand manipulation. We develop a novel two-fingered hand with fingertip-embedded optical time-of-flight proximity sensors as a testbed for pose estimation during planar in-hand manipulation. Here, the in-hand manipulation task consists of the robot moving a cylindrical object from one end of its workspace to the other. We demonstrate, with statistical significance, that proximity-sensor-based pose estimation via particle filtering during in-hand manipulation: a) exhibits 50% lower average pose error than a tactile-sensor-based baseline; and b) enables a model predictive controller to achieve 30% lower final positioning error compared to using tactile-sensor-based pose estimates.
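  A compact sketch of what particle-filter pose estimation from fingertip range readings might look like is given below. The sensor layout, noise levels, and cylinder geometry are assumptions rather than the paper's actual setup.

  ```python
  # Sketch of a particle filter estimating a cylinder's planar position
  # from two fingertip time-of-flight (range) readings. All numbers are
  # illustrative assumptions.
  import numpy as np

  rng = np.random.default_rng(0)
  N = 500
  radius = 0.03                       # cylinder radius [m]
  sensors = np.array([[-0.05, 0.0],   # fingertip ToF positions (assumed)
                      [ 0.05, 0.0]])
  sigma = 0.002                       # range noise std [m]

  true_pos = np.array([0.01, 0.06])

  def ranges(pos):
      # Distance from each sensor to the cylinder's surface.
      return np.linalg.norm(sensors - pos, axis=1) - radius

  # Particles restricted to one side of the sensor line to avoid the
  # mirror-image ambiguity of two range measurements.
  particles = rng.uniform([-0.05, 0.02], [0.05, 0.10], size=(N, 2))

  for _ in range(20):
      z = ranges(true_pos) + rng.normal(0.0, sigma, size=2)      # measure
      particles += rng.normal(0.0, 0.001, size=particles.shape)  # diffuse
      pred = np.stack([ranges(p) for p in particles])
      # Gaussian range likelihood per particle.
      weights = np.exp(-0.5 * np.sum(((pred - z) / sigma) ** 2, axis=1))
      weights /= weights.sum()
      # Resample back to uniform weights.
      idx = rng.choice(N, size=N, p=weights)
      particles = particles[idx]

  print("estimated center:", particles.mean(axis=0), "true:", true_pos)
  ```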
- Despite the existence of robots that can physically lift heavy loads, robots that can collaborate with people to move heavy objects are not readily available. This article makes progress toward effective human-robot co-manipulation by studying 30 human-human dyads that collaboratively manipulated an object weighing 27 kg without being co-located (i.e., participants were at either end of the extended object). Participants maneuvered the object around different obstacles while exhibiting one of four modi (the manner or objective with which a team moves an object together) at any given time. Using force and motion signals to classify modus or behavior was the primary objective of this work. Our results showed that two of the originally proposed modi were very similar, such that one could effectively be removed while still spanning the space of common behaviors during our co-manipulation tasks. The three modi used in classification were quickly, smoothly, and avoiding obstacles. Using a deep convolutional neural network (CNN), we classified the three modi with up to 89% accuracy on a validation set. The capability to detect or classify modus during co-manipulation has the potential to greatly improve human-robot performance by helping to define appropriate robot behavior or controller parameters depending on the objective or modus of the team.
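  For a sense of what a modus classifier could look like, here is a minimal 1-D CNN over windows of force/motion channels. The architecture, channel count, and window length are assumptions; the paper's exact network may differ.

  ```python
  # Minimal 1-D CNN of the kind one might use to classify the three modi
  # (quickly, smoothly, avoiding obstacles) from windows of force/motion
  # signals. Layer sizes are illustrative assumptions.
  import torch
  import torch.nn as nn

  class ModusCNN(nn.Module):
      def __init__(self, in_channels=12, n_classes=3):
          super().__init__()
          self.features = nn.Sequential(
              nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
              nn.ReLU(),
              nn.MaxPool1d(2),
              nn.Conv1d(32, 64, kernel_size=5, padding=2),
              nn.ReLU(),
              nn.AdaptiveAvgPool1d(1),   # collapse the time axis
          )
          self.classifier = nn.Linear(64, n_classes)

      def forward(self, x):              # x: (batch, channels, time)
          h = self.features(x).squeeze(-1)
          return self.classifier(h)

  # Example: a batch of 8 windows, each 12 force/motion channels x 200 samples.
  model = ModusCNN()
  logits = model(torch.randn(8, 12, 200))
  print(logits.shape)  # torch.Size([8, 3])
  ```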