Title: Feedback Control and 3D Motion of Heterogeneous Janus Particles
This paper presents 2D feedback control and open-loop 3D trajectories of heterogeneous, chemically catalyzing Janus particles. Self-actuated particles have enormous implications for both in vivo and in vitro environments, making them a versatile resource for a variety of medical and assembly applications. Janus particles consisting of cobalt and platinum hemispheres can self-propel in hydrogen peroxide solutions because the platinum hemisphere catalyzes the decomposition of the peroxide. The particles are directionally controlled using static magnetic fields produced by a triaxial approximate Helmholtz coil system. Because the magnetization direction of a Janus particle is often heterogeneous, and therefore not aligned with the propulsion direction, these particles offer a unique opportunity to study their motion under 2D feedback control and open-loop 3D control. Using a modified closed-loop controller, Janus particles whose magnetization was either closely aligned with or greatly misaligned from the propulsion vector were commanded to follow complex trajectories, and the resulting trajectories were compared across trials to measure both consistency and accuracy. The effects of increasing offset between the magnetization and propulsion vectors were also analyzed, and the influence of this heterogeneity on 3D motion is briefly discussed. Going forward, we hope to develop a closed-loop 3D control system that can account for variations in the magnetization vector.
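As a rough illustration of the 2D steering idea described in the abstract, the sketch below computes the in-plane field heading that compensates a known offset between a particle's magnetization and propulsion directions before mapping it to coil currents. All names and numbers (the offset psi, field magnitude, coil constant, waypoints) are hypothetical placeholders, not the paper's calibrated controller.

```python
# Minimal 2D heading sketch for a self-propelled Janus particle steered by a
# uniform magnetic field (hypothetical parameters; offset assumed known).
import numpy as np

def field_heading(particle_xy, target_xy, psi_offset):
    """Return the in-plane field angle that points propulsion at the target."""
    desired = np.arctan2(target_xy[1] - particle_xy[1],
                         target_xy[0] - particle_xy[0])
    # The magnetic moment aligns with the applied field, but propulsion is
    # rotated by psi_offset from the moment, so the commanded field is
    # rotated the opposite way to compensate.
    return desired - psi_offset

def coil_currents(field_angle, b_mag=5e-3, k_coil=1.0):
    """Map a desired planar field (tesla) to x/y Helmholtz coil currents
    via an assumed linear coil constant k_coil (amps per tesla)."""
    bx, by = b_mag * np.cos(field_angle), b_mag * np.sin(field_angle)
    return k_coil * bx, k_coil * by

# Example: command the particle through a square of waypoints.
waypoints = [(50, 0), (50, 50), (0, 50), (0, 0)]   # micrometers
pos, psi = np.array([0.0, 0.0]), np.deg2rad(30)    # 30 deg misalignment
for wp in waypoints:
    theta_b = field_heading(pos, np.array(wp), psi)
    ix, iy = coil_currents(theta_b)
    # ...send ix, iy to the coil drivers; update pos from camera feedback...
```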
Award ID(s):
1619278
NSF-PAR ID:
10130234
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE International Conference on Robotics and Automation (ICRA 2019, Montreal CA)
Page Range / eLocation ID:
1352 to 1357
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ani Hsieh (Ed.)
    Reconfigurable modular robots can dynamically assemble and disassemble to better accomplish a desired task. Magnetic modular cubes are scalable modular subunits with permanent magnets embedded in a 3D-printed cubic body, and they can be wirelessly controlled by an external, uniform, time-varying magnetic field. This paper considers the problem of self-assembling these modules into desired 2D polyomino shapes using such magnetic fields. Although the applied magnetic field is the same for every magnetic modular cube, we use collisions with the workspace boundaries to rearrange the cubes. We present a closed-loop control method for self-assembling the magnetic modular cubes into polyomino shapes, using computer-vision-based feedback with re-planning. Experimental results demonstrate that the proposed closed-loop control improves the success rate of forming 2D user-specified polyominoes compared to an open-loop baseline. We also demonstrate the validity of the approach across length scales, testing with both 10 mm and 2.8 mm edge-length cubes.
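A toy sketch of the boundary-collision idea from the abstract above: under a uniform global command every cube translates one cell, but a cube pinned against a workspace wall (or another blocked cube) stays put, so a sequence of global moves can change the cubes' relative arrangement. This is a simplified grid model of my own, not the paper's vision-based planner.

```python
# Toy grid model: one global command moves all free cubes; walls break symmetry.
def global_step(cubes, direction, width, height):
    """cubes: set of (x, y) cells; direction: (dx, dy) unit step."""
    dx, dy = direction
    moved = set()
    # Process cubes furthest along the command first so followers can
    # occupy cells vacated in the same step.
    for x, y in sorted(cubes, key=lambda c: -(c[0] * dx + c[1] * dy)):
        nx, ny = x + dx, y + dy
        inside = 0 <= nx < width and 0 <= ny < height
        if inside and (nx, ny) not in moved:
            moved.add((nx, ny))      # cube slides with the field
        else:
            moved.add((x, y))        # blocked by a wall or another cube
    return moved

cubes = {(0, 0), (3, 2)}
for d in [(1, 0), (1, 0), (0, 1)]:   # sequence of global commands
    cubes = global_step(cubes, d, width=4, height=4)
print(cubes)  # relative arrangement changes because walls stop some cubes
```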
  2. In this paper we derive a new capability for robots to measure relative direction, or Angle-of-Arrival (AOA), to other robots, while operating in non-line-of-sight and unmapped environments, without requiring external infrastructure. We do so by capturing all of the paths that a WiFi signal traverses as it travels from a transmitting to a receiving robot in the team, which we term an AOA profile. The key intuition behind our approach is to emulate antenna arrays in the air as a robot moves freely in 2D or 3D space. The small differences in the phase and amplitude of WiFi signals are thus processed with knowledge of a robot's local displacements (often provided via inertial sensors) to obtain the profile, via a method akin to Synthetic Aperture Radar (SAR). The main contribution of this work is the development of i) a framework to accommodate arbitrary 2D and 3D trajectories, as well as continuous mobility of both transmitting and receiving robots, while computing AOA profiles between them and ii) an accompanying analysis, based on the Cramér-Rao bound and antenna array theory, that provides a lower bound on the variance of AOA estimation as a function of robot trajectory geometry. This is a critical distinction from previous work on SAR that restricts robot mobility to prescribed motion patterns, does not generalize to the full 3D space, and/or requires transmitting robots to be static during data acquisition periods. In fact, we find that allowing robots to use their full mobility in 3D space while performing SAR results in more accurate AOA profiles and thus better AOA estimation. We formally characterize this observation as the informativeness of the trajectory, a computable quantity for which we derive a closed form. All theoretical developments are substantiated by extensive simulation and hardware experiments on air/ground robot platforms. Our experimental results bolster our theoretical findings, demonstrating that 3D trajectories provide enhanced and consistent accuracy, with an AOA error of less than 10 degrees for 95% of trials. We also show that our formulation can be used with an off-the-shelf trajectory estimation sensor (Intel RealSense T265 tracking camera) for estimating the robots' local displacements, and we provide theoretical as well as empirical results that show the impact of typical trajectory estimation errors on the measured AOA. Finally, we demonstrate the performance of our system on a multi-robot task where a heterogeneous air/ground pair of robots continuously measure AOA profiles over a WiFi link to achieve dynamic rendezvous in an unmapped, 300 square meter environment with occlusions.
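A minimal numerical sketch of the synthetic-aperture idea from the abstract above, under simplifying assumptions I am introducing (single propagation path, far field, 2D azimuth only, perfectly known displacements): each receiver position along the trajectory acts as a virtual antenna element, and correlating the measured phases with candidate steering vectors yields an AOA profile. The wavelength, trajectory, and noise level are illustrative, not taken from the paper.

```python
# Synthetic-aperture AOA profile via Bartlett-style matching (toy example).
import numpy as np

wavelength = 0.06                      # ~5 GHz WiFi carrier, meters
k = 2 * np.pi / wavelength

# Receiver displacements along a short arc (e.g., from onboard odometry).
t = np.linspace(0, np.pi / 2, 60)
positions = 0.2 * np.column_stack([np.cos(t), np.sin(t)])

true_aoa = np.deg2rad(50)              # simulated direction to the transmitter
direction = np.array([np.cos(true_aoa), np.sin(true_aoa)])
phases = k * positions @ direction + 0.1 * np.random.randn(len(t))
signal = np.exp(1j * phases)           # far-field phase at each virtual element

# AOA profile: match the measurements against steering vectors for every
# candidate azimuth and take the squared correlation magnitude.
candidates = np.deg2rad(np.arange(360))
steering = np.exp(1j * k * positions @ np.stack(
    [np.cos(candidates), np.sin(candidates)]))
profile = np.abs(signal.conj() @ steering) ** 2

print("AOA estimate:", np.rad2deg(candidates[np.argmax(profile)]), "deg")
```

In this toy example the arc-shaped aperture avoids the mirror ambiguity a straight-line path would leave, which echoes the paper's point that richer 2D/3D trajectories are more informative for AOA estimation.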
  3.
    An important problem in designing human-robot systems is the integration of human intent and performance into the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method for bimanual coordination modes (i.e., the directions and symmetries of right and left hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path-following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture-pulling gestures on the da Vinci system, with results matching the expected modes. Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, motivating future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
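As a rough, self-contained illustration of geometric bimanual-mode labels (my own simplified rules, not the paper's descriptors or classifier), the sketch below labels the direction (together/parallel/away) and symmetry (mirror/point) of two hands' displacements over a short window, assuming a 2D workspace with the hands separated along the x-axis.

```python
# Simplified geometric labels for instantaneous bimanual coordination modes.
import numpy as np

def direction_label(dl, dr, lpos, rpos, tol=1e-3):
    """dl, dr: left/right hand displacements; lpos, rpos: hand positions."""
    baseline = rpos - lpos                 # vector from left hand to right hand
    closing = np.dot(dr - dl, baseline)    # relative motion along the baseline
    if closing < -tol:
        return "together"
    if closing > tol:
        return "away"
    return "parallel"

def symmetry_label(dl, dr, tol=1e-3):
    mirrored = np.array([-dl[0], dl[1]])   # reflect across the midline (y-axis)
    if np.linalg.norm(mirrored - dr) < tol:
        return "mirror"
    if np.linalg.norm(dl + dr) < tol:      # equal and opposite displacements
        return "point"
    return "asymmetric"

lpos, rpos = np.array([-0.1, 0.0]), np.array([0.1, 0.0])
dl, dr = np.array([0.01, 0.02]), np.array([-0.01, 0.02])  # hands converge upward
print(direction_label(dl, dr, lpos, rpos), symmetry_label(dl, dr))
# -> together mirror
```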
  4. In this paper, we develop the analytical framework for a novel Wireless signal-based Sensing capability for Robotics (WSR) by leveraging a robot's mobility in 3D space. It allows robots to primarily measure relative direction, or Angle-of-Arrival (AOA), to other robots, while operating in non-line-of-sight unmapped environments and without requiring external infrastructure. We do so by capturing all of the paths that a wireless signal traverses as it travels from a transmitting to a receiving robot in the team, which we term an AOA profile. The key intuition behind our approach is to enable a robot to emulate antenna arrays as it moves freely in 2D and 3D space. The small differences in the phase of the wireless signals are thus processed with knowledge of the robots' local displacements to obtain the profile, via a method akin to Synthetic Aperture Radar (SAR). The main contribution of this work is the development of (i) a framework to accommodate arbitrary 2D and 3D motion, as well as continuous mobility of both signal transmitting and receiving robots, while computing AOA profiles between them and (ii) a Cramér-Rao bound analysis, based on antenna array theory, that provides a lower bound on the variance in AOA estimation as a function of the geometry of robot motion. This is a critical distinction from previous work on SAR-based methods that restrict robot mobility to prescribed motion patterns, do not generalize to the full 3D space, and require transmitting robots to be stationary during data acquisition periods. We show that allowing robots to use their full mobility in 3D space while performing SAR results in more accurate AOA profiles and thus better AOA estimation. We formally characterize this observation as the informativeness of the robots' motion, a computable quantity for which we derive a closed form. All analytical developments are substantiated by extensive simulation and hardware experiments on air/ground robot platforms using 5 GHz WiFi. Our experimental results bolster our analytical findings, demonstrating that 3D motion provides enhanced and consistent accuracy, with a total AOA error of less than 10° for 95% of trials. We also analytically characterize the impact of displacement estimation errors on the measured AOA and validate this theory empirically using robot displacements obtained with an off-the-shelf Intel Tracking Camera T265. Finally, we demonstrate the performance of our system on a multi-robot task where a heterogeneous air/ground pair of robots continuously measure AOA profiles over a WiFi link to achieve dynamic rendezvous in an unmapped, 300 m² environment with occlusions.

     
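A toy simulation of the rendezvous use case described in the item above (not the paper's system): the ground robot repeatedly treats the peak of its latest AOA profile as a bearing to its aerial peer and steers a unicycle model toward it. Here the AOA measurement is faked as the true bearing plus noise, and the gains, speeds, and peer motion are made up for illustration.

```python
# Toy bearing-only pursuit using noisy AOA measurements (illustrative only).
import numpy as np

def unicycle_step(pose, bearing_world, v=0.5, k_turn=1.5, dt=0.1):
    x, y, yaw = pose
    err = (bearing_world - yaw + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
    yaw += np.clip(k_turn * err, -1.0, 1.0) * dt                # bounded turn rate
    return np.array([x + v * np.cos(yaw) * dt, y + v * np.sin(yaw) * dt, yaw])

ground = np.array([0.0, 0.0, 0.0])        # x, y, yaw of the ground robot
aerial = np.array([8.0, 6.0])             # planar position of the aerial peer
for _ in range(600):
    # Stand-in for the AOA profile peak: true bearing plus ~5 deg of noise.
    aoa = np.arctan2(aerial[1] - ground[1], aerial[0] - ground[0]) \
          + np.deg2rad(5) * np.random.randn()
    ground = unicycle_step(ground, aoa)
    aerial += np.array([-0.01, 0.005])     # the peer keeps moving
    if np.hypot(*(aerial - ground[:2])) < 0.5:
        print("rendezvous reached")
        break
```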
    The traditional locomotion paradigm for quadruped robots is to use dexterous (multi-degree-of-freedom) legs and dynamically optimized footholds to balance the body and achieve stable locomotion. With the introduction of a robotic tail, a new locomotion paradigm becomes possible: balancing is achieved by the tail, and the legs are responsible only for propulsion. Since the burden on the legs is reduced, leg complexity can also be reduced. This paper explores this new paradigm by tackling the dynamic locomotion control problem of a reduced-complexity quadruped (RCQ) with a pendulum tail. For this control task, a new strategy is proposed in which the legs execute a pre-planned open-loop gait while the tail is controlled in closed loop to keep the quadruped body at the desired orientation. With these two parts working cooperatively, the quadruped achieves dynamic locomotion. A partial feedback linearization (PFL) controller is used for the closed-loop tail control. Pronking, bounding, and maneuvering are tested to evaluate the controller's performance. The results validate the proposed controller and demonstrate the feasibility and potential of the new locomotion paradigm.
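A toy sketch in the spirit of the tail-for-balance paradigm above, not the paper's PFL derivation: if the body pitch is driven mainly by the reaction torque of the tail motor, a PD law on body orientation can be pushed through the tail actuator while the legs run their open-loop gait. The single-DOF planar model, inertias, and gains are all assumptions for illustration.

```python
# Crude reaction-torque model: tail motor torque tau drives the tail, and its
# reaction (-tau) drives the body pitch; a PD law on pitch sets tau.
import numpy as np

I_body, I_tail = 0.5, 0.05        # kg*m^2, illustrative values
kp, kd, dt = 40.0, 8.0, 0.002     # PD gains and Euler time step

pitch, pitch_rate = np.deg2rad(10), 0.0   # body starts tilted
tail, tail_rate = 0.0, 0.0
pitch_des = 0.0                           # keep the body level while the legs
                                          # execute their open-loop gait

for _ in range(2000):
    # Desired body acceleration from the PD law, realized via tail reaction torque.
    acc_des = kp * (pitch_des - pitch) - kd * pitch_rate
    tau = -I_body * acc_des               # motor torque applied to the tail
    pitch_rate += (-tau / I_body) * dt    # reaction torque acts on the body
    tail_rate += (tau / I_tail) * dt
    pitch += pitch_rate * dt
    tail += tail_rate * dt

print("final pitch (deg):", np.rad2deg(pitch))
```

As the crude model suggests, the tail angle winds up while it regulates the body, which is one of the practical issues a full controller such as the paper's PFL scheme has to manage.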