Title: Non-Contacting Two-Dimensional Position Estimation Using an External Magnet and Monocular Computer Vision
Abstract: This paper develops a position estimation system for a robot moving over a two-dimensional plane with three degrees of freedom. The position estimation system is based on an external rotating platform containing a permanent magnet and a monocular camera. The robot is equipped with a two-axis magnetic sensor. The rotation of the external platform is controlled using the monocular camera so that the platform always points at the robot as it moves over the 2D plane. The radial distance to the robot can then be obtained using a one-degree-of-freedom nonlinear magnetic field model and a nonlinear observer. Extensive experimental results on the performance of the developed system are presented. Results show that the position of the robot can be estimated with sub-mm accuracy over a radial distance range of ±60 cm from the magnet.
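As a rough illustration of the distance-estimation idea in the abstract, the sketch below assumes an on-axis dipole-type field model B(r) = c/r^3 and runs a simple gradient-type observer that drives the modeled field magnitude toward the measured one. The field constant, observer gain, and step size are hypothetical; this is not the paper's observer, only a minimal sketch of the concept.

```python
# Minimal sketch (not the authors' implementation): estimating radial distance
# from a magnetic field magnitude measurement, assuming an on-axis dipole-type
# model B(r) = c / r**3. The constant c and the observer gain are illustrative.
import numpy as np

C_DIPOLE = 5e-6      # field constant [T*m^3], hypothetical calibration value
OBS_GAIN = 2.0       # observer gain, chosen for illustration only

def field_model(r):
    """Predicted field magnitude at radial distance r (dipole-type model)."""
    return C_DIPOLE / r**3

def observer_step(r_hat, b_meas, dt):
    """One step of a simple gradient-type observer that drives the predicted
    field magnitude toward the measured value b_meas."""
    err = b_meas - field_model(r_hat)
    # dB/dr = -3*C/r^4; a Gauss-Newton-like step moves r_hat along err/dBdr
    dBdr = -3.0 * C_DIPOLE / r_hat**4
    return r_hat + dt * OBS_GAIN * err * dBdr / (dBdr**2 + 1e-18)

if __name__ == "__main__":
    r_true, r_hat, dt = 0.40, 0.25, 0.01        # meters, seconds
    for _ in range(2000):
        r_hat = observer_step(r_hat, field_model(r_true), dt)
    print(f"true r = {r_true:.3f} m, estimated r = {r_hat:.3f} m")
```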
Award ID(s): 1830958
PAR ID: 10561414
Author(s) / Creator(s):
Publisher / Repository: ASME Letters in Dynamic Systems and Control
Date Published:
Journal Name: ASME Letters in Dynamic Systems and Control
Volume: 3
Issue: 3
ISSN: 2689-6117
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    The Georgia Tech Miniature Autonomous Blimp (GT-MAB) needs localization algorithms to navigate to waypoints in an indoor environment without leveraging an external motion capture system. Indoor aerial robots often require a motion capture system for localization or employ simultaneous localization and mapping (SLAM) algorithms for navigation. The proposed strategy for GT-MAB localization can be accomplished using lightweight sensors on a weight-constrained platform like the GT-MAB. We train an end-to-end convolutional neural network (CNN) that predicts the horizontal position and heading of the GT-MAB using video collected by an onboard monocular RGB camera, while the height of the GT-MAB is estimated from measurements of a time-of-flight (ToF) single-beam laser sensor. The monocular camera and the single-beam laser sensor are sufficient for the localization algorithm to localize the GT-MAB in real time, achieving average 3D positioning errors of less than 20 cm and average heading errors of less than 3 degrees. With the accuracy of the proposed localization method, simple proportional-integral-derivative controllers suffice to control the GT-MAB for waypoint navigation. Experimental results on waypoint following are provided, demonstrating the use of a CNN as the primary localization method for estimating the pose of an indoor robot and successfully enabling navigation to specified waypoints.
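For illustration only, the following sketch shows the general shape of an end-to-end CNN that regresses horizontal position and heading from a single monocular frame, assuming PyTorch is available. The layer sizes and the sin/cos heading encoding are assumptions, not the GT-MAB network.

```python
# Illustrative sketch only: a small CNN regressing horizontal position (x, y)
# and heading from a monocular RGB frame. Not the architecture from the paper.
import torch
import torch.nn as nn

class PoseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Four outputs: x, y, sin(heading), cos(heading)
        self.head = nn.Linear(64, 4)

    def forward(self, img):
        z = self.features(img).flatten(1)
        out = self.head(z)
        xy = out[:, :2]
        heading = torch.atan2(out[:, 2], out[:, 3])  # recover angle from sin/cos
        return xy, heading

if __name__ == "__main__":
    net = PoseCNN()
    frame = torch.rand(1, 3, 120, 160)               # dummy RGB frame
    xy, heading = net(frame)
    print(xy.shape, heading.shape)                   # (1, 2) and (1,)
```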
    A magnetic levitation system consists of a magnet facing groundward to attract a magnetic object against gravity and levitate it at a distance from the face of the magnet. Because of the unstable nature of this system, it must be stabilized by means of feedback control, which adjusts the magnetic force applied to the levitating object depending on its measured position and possibly its velocity. Conventionally, electromagnets have been used for magnetic levitation, as they can be controlled simply via their terminal voltages. This paper, however, studies a levitation system relying on a permanent magnet and a linear servomotor to control the applied magnetic force by changing the distance between the magnet and the levitating object. For the proposed system, which is highly nonlinear, a stabilizing feedback control law is developed using feedback linearization and other control design tools. Then, the closed-loop stability is examined against system parameters such as the size of the levitating object, the viscosity of the medium it moves in, and certain characteristics of the magnet in use. The emphasis here is on understanding the impact of intrinsic servomotor limitations, particularly its finite slew rate (a cap on its maximum velocity), on the ability of feedback control to stabilize the closed-loop system. This limitation appears to be a major concern in utilizing permanent magnets for noncontact actuation and control.
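The sketch below is a toy simulation of this actuation concept under assumed values: an inverse-square attraction model, a feedback-linearizing outer loop, and a slew-rate-limited magnet position. None of the constants come from the paper; it only illustrates how the magnet position can be commanded to realize a desired object acceleration.

```python
# Minimal simulation sketch (not the paper's model): levitating an object under
# a permanent magnet whose height is set by a slew-rate-limited servomotor.
# The attraction model F = K_MAG / gap**2 and all gains/limits are assumed values.
import numpy as np

M, G = 0.05, 9.81          # object mass [kg], gravity [m/s^2]
K_MAG = 2.0e-4             # magnet force constant [N*m^2], hypothetical
SLEW = 0.2                 # servomotor max speed [m/s], hypothetical cap
KP, KD = 40.0, 10.0        # outer-loop gains, illustrative

def step(x, v, z, x_ref, dt):
    """One control + integration step. x: object position, v: its velocity,
    z: magnet position (meters, downward positive; magnet sits above, z < x)."""
    a_des = -KP * (x - x_ref) - KD * v
    # Feedback linearization: pick the gap that makes the net acceleration a_des.
    a_net = np.clip(G - a_des, 0.1, None)            # keep required force positive
    gap_cmd = np.sqrt(K_MAG / (M * a_net))
    z_cmd = x - gap_cmd
    # Servomotor slew-rate limit: magnet can move at most SLEW*dt per step.
    z += np.clip(z_cmd - z, -SLEW * dt, SLEW * dt)
    gap = max(x - z, 1e-4)
    a = G - K_MAG / (M * gap**2)                     # actual object acceleration
    return x + dt * v, v + dt * a, z

if __name__ == "__main__":
    x, v, z = 0.05, 0.0, 0.03                        # start near force balance
    for _ in range(5000):
        x, v, z = step(x, v, z, x_ref=0.04, dt=0.001)
    print(f"object position: {x:.4f} m, gap to magnet: {x - z:.4f} m")
```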
  3. Kyriakopoulos, Kostas J; Polygerinos, Panagiotis (Ed.)
    We demonstrate proprioceptive feedback control of a one-degree-of-freedom soft, pneumatically actuated origami robot and an assembly of two robots into a two-degree-of-freedom system. The base unit of the robot is a 41 mm long, 3D-printed Kresling-inspired structure with six sets of sidewall folds and one degree of freedom. Pneumatic actuation, provided by negative fluidic pressure, causes the robot to contract. Capacitive sensors patterned onto the robot provide position estimation and serve as input to a feedback controller. Using a finite element approach, the electrode shapes are optimized for sensitivity at larger (more obtuse) fold angles to improve control across the actuation range. We demonstrate stable position control through discrete-time proportional-integral-derivative (PID) control on a single-unit Kresling robot via a series of static set points to 17 mm, dynamic set-point stepping, and sinusoidal signal following, with error under 3 mm up to 10 mm contraction. We also demonstrate a two-unit Kresling robot with two-degree-of-freedom extension and rotation control, with errors of 1.7 mm and 6.1°, respectively. This work contributes an optimized capacitive electrode design and demonstrates closed-loop feedback position control without visual tracking as an input. This approach to capacitance sensing and modeling constitutes a major step towards proprioceptive state estimation and feedback control in soft origami robotics.
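A minimal discrete-time PID loop of the kind described above might look like the following. The gains, sample time, output limits, and the read_position() sensor call are hypothetical placeholders, not the paper's controller or calibration.

```python
# Sketch of a discrete-time PID position loop; the capacitance-derived position
# estimate is stubbed out by a hypothetical read_position() function.
class DiscretePID:
    def __init__(self, kp, ki, kd, dt, u_min, u_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(self.u_min, min(self.u_max, u))   # saturate the actuation command

# Usage sketch: drive the unit toward a 10 mm contraction set point.
pid = DiscretePID(kp=2.0, ki=0.5, kd=0.05, dt=0.02, u_min=0.0, u_max=1.0)
# command = pid.update(10.0, read_position())   # read_position() is hypothetical
```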
    This paper presents a method of tracking multiple ground targets from an unmanned aerial vehicle (UAV) in a 3D reference frame. The tracking method uses a monocular camera and makes no assumptions about the shape of the terrain or the target motion. The UAV runs two cascaded estimators. The first is an Extended Kalman Filter (EKF) responsible for tracking the UAV's state, such as position and velocity relative to a fixed frame. The second is an EKF responsible for estimating a fixed number of landmarks within the camera's field of view. Landmarks are parameterized by a quaternion associated with the bearing from the camera's optical axis and an inverse-distance parameter. The bearing quaternion allows for a minimal representation of each landmark's direction and distance, a filter with no singularities, and a fast update rate, since few trigonometric functions are required. Three methods for estimating the ground target positions are demonstrated: the first uses the landmark estimator directly on the targets, the second computes the target depth with a weighted average of converged landmark depths, and the third extends the target's measured bearing vector to intersect a ground plane approximated from the landmark estimates. Simulation results show that the third target estimation method yields the most accurate results.
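The third target-depth method lends itself to a compact geometric sketch: fit a plane to the estimated landmark positions, then intersect the target's measured bearing ray with that plane. The code below is an assumed illustration of that idea, not the authors' filter; the frames and example numbers are made up.

```python
# Illustrative sketch of ground-plane approximation and ray intersection.
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 landmark points; returns (normal, d)
    with the plane defined by normal . p = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance
    return normal, normal @ centroid

def ray_plane_intersection(origin, bearing, normal, d):
    """Point where the ray origin + t*bearing meets the plane."""
    t = (d - normal @ origin) / (normal @ bearing)
    return origin + t * bearing

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Landmarks scattered on a roughly flat ground plane z = 0 (plus noise).
    landmarks = np.c_[rng.uniform(-20, 20, (30, 2)), rng.normal(0, 0.05, 30)]
    normal, d = fit_plane(landmarks)
    cam_pos = np.array([0.0, 0.0, 50.0])              # UAV 50 m above ground
    bearing = np.array([0.3, 0.1, -1.0])
    bearing /= np.linalg.norm(bearing)                # unit bearing to the target
    print(ray_plane_intersection(cam_pos, bearing, normal, d))
```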
    Robotic geo-fencing and surveillance systems require accurate monitoring of objects if/when they violate perimeter restrictions. In this paper, we seek a solution for depth imaging of such objects of interest at high accuracy (a few tens of cm) over extended ranges (up to 300 meters) from a single vantage point, such as a pole-mounted platform. Unfortunately, the rich literature on depth imaging using camera, lidar, and radar in isolation struggles to meet these tight requirements in real-world conditions. This paper proposes Metamoran, a solution that explores long-range depth imaging of objects of interest by fusing the strengths of two complementary technologies: mmWave radar and camera. Unlike cameras, mmWave radars offer excellent cm-scale depth resolution even at very long ranges. However, their angular resolution is at least 10× worse than that of camera systems. Fusing these two modalities is natural, but in scenes with high clutter and at long ranges, radar reflections are weak and experience spurious artifacts. Metamoran's core contribution is to leverage image segmentation and monocular depth estimation on camera images to help declutter the radar data and discover true object reflections. We perform a detailed evaluation of Metamoran's depth imaging capabilities in 400 diverse scenarios. Our evaluation shows that Metamoran estimates the depth of static objects up to 90 m away and moving objects up to 305 m away, with a median error of 28 cm, an improvement of 13× over a naive radar+camera baseline and 23× over monocular depth estimation.
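As a simplified illustration of the camera-guided decluttering idea (not Metamoran's actual pipeline), the sketch below gates a radar range profile around a coarse monocular depth estimate for a segmented object and reports the strongest return inside the gate; all numbers are illustrative.

```python
# Simplified sketch: use a coarse camera depth estimate to declutter a radar
# range profile and refine the object depth. Not the system's real pipeline.
import numpy as np

def refine_depth(radar_ranges, radar_power, coarse_depth, gate_half_width):
    """Pick the strongest radar return within +/- gate_half_width of the
    camera's coarse (monocular) depth estimate."""
    mask = np.abs(radar_ranges - coarse_depth) <= gate_half_width
    if not mask.any():
        return coarse_depth                       # fall back to the camera estimate
    idx = np.argmax(np.where(mask, radar_power, -np.inf))
    return radar_ranges[idx]

if __name__ == "__main__":
    ranges = np.arange(0.0, 300.0, 0.25)                   # radar range bins [m]
    power = np.random.default_rng(1).random(ranges.size)   # background clutter
    power[np.argmin(np.abs(ranges - 87.6))] = 10.0         # true object near 87.6 m
    # Monocular depth is coarse at this range, so gate the radar with a wide window.
    print(refine_depth(ranges, power, coarse_depth=95.0, gate_half_width=20.0))
```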