<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Relative Moving Target Tracking and Circumnavigation</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>07/01/2019</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10135576</idno>
					<idno type="doi">10.23919/ACC.2019.8815178</idno>
					<title level='j'>2019 American Control Conference (ACC)</title>

					<author>Jerel Nielsen</author><author>Randal Beard</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
<abstract><ab><![CDATA[This paper develops observers and controllers for relative estimation and circumnavigation of a moving ground target using bearing-only measurements or range with bearing measurements. A bearing-only observer, a range with bearing observer, a general circumnavigation velocity command for an arbitrary aircraft, and a nonlinear velocity-based multirotor controller are developed. The observers are designed in the body-fixed reference frame, while the velocity command and multirotor controller are developed in the body-level frame, independent of aircraft heading. This enables target circumnavigation in GPS-denied environments when only a camera-IMU estimator is used for state estimation and ensures observable conditions for the estimator. Simulation results demonstrate the effectiveness of the observers, velocity command, and multirotor controller under various target motions.
]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The skew-symmetric operator is defined by (·)∧, such that a∧ b = a × b. We also make use of the basis vectors e_1 = [1 0 0]^⊤ and e_2 = [0 1 0]^⊤, and denote other unit vectors by e_*.</p></div>
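The notation above can be made concrete with a short numerical helper; the function name `skew` and the use of NumPy are our own choices, not part of the paper:

```python
import numpy as np

def skew(a):
    """Skew-symmetric operator (a)^, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

# Basis vectors e_1 and e_2 from the text (e_3 added for completeness).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])
```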
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. INTRODUCTION</head><p>Target tracking and surveillance from an unmanned air vehicle (UAV) has been an area of interest in the research community for many years, primarily aimed at military applications due to the cost of sensors and aerial platforms. In recent years, however, the development and proliferation of increasingly small sensors, such as inertial measurement units (IMUs) and video cameras, has driven the emergence of small unmanned air systems (sUAS). This phenomenon has greatly multiplied the possibilities for aerial target tracking and brought forth many works related to the circumnavigation of targets [1], [2], [3], [4], [5], [6], <ref type="bibr">[7]</ref>.</p><p>In addition to the interest in target circumnavigation brought on by the multiplicity of sUAS, a significant effort has been put forth to fuse IMU measurements with camera measurements. The IMU provides high-rate measurements of linear acceleration and angular rate, while the camera provides direction or full vector measurements to landmarks, depending on the type of camera. These types of measurements are ideal for continuous-discrete Kalman filtering because the mechanization of IMU measurements provides high-rate prediction, while the camera provides low-rate corrections. The fusion of these measurements has been thoroughly demonstrated through Kalman filtering, optimization, and nonlinear techniques in many recent visual-inertial odometry (VIO) and simultaneous localization and mapping (SLAM) works [8], [9], [10], <ref type="bibr">[11], [12]</ref>, <ref type="bibr">[13]</ref>.</p><p>Many of the existing circumnavigation algorithms assume a known inertial state [1], [5]. These will not perform as well when the following agent is equipped with only a camera and IMU for state estimation, because the global position and heading are not observable <ref type="bibr">[14]</ref>. 
Others have developed algorithms for GPS-denied target tracking and following [3], <ref type="bibr">[15]</ref> with successful hardware demonstrations. Some even follow targets using image-based visual servoing (IBVS) techniques <ref type="bibr">[16]</ref>. However, these approaches assume a known object size or appearance and do not address the observability required for the state estimator to estimate IMU biases, which enables accurate attitude estimation.</p><p>This paper develops observers and controllers for relative estimation and circumnavigation of a moving ground target, assuming that the aircraft is equipped with only a camera and IMU for state estimation. While cameras typically provide only bearing information, some modern cameras are capable of providing depth information. When a depth measurement is not available, we can create pseudo-depth measurements by approximating target position relative to static landmark position estimates <ref type="bibr">[17]</ref>. Therefore, we develop two observers in this paper, where the first uses bearing-only information and the second takes advantage of full vector information. Following the target observers, we also use nonlinear system theory to define a commanded velocity vector that drives the aircraft to a circumnavigating path about the target. Finally, a nonlinear controller for a multirotor aircraft is derived in the body-level reference frame to drive the multirotor to a commanded velocity. While we do not develop a multirotor state estimator in this paper, we note that observability of the IMU biases is also guaranteed because of the persistent excitation of landmark bearing measurements <ref type="bibr">[18]</ref> resulting from the circumnavigating motion.</p><p>We begin in Section III by developing the target observers using bearing-only and range with bearing measurements. 
This is followed by Section IV-A, where we derive the commanded body-level velocity needed to bring an arbitrary aircraft to a desired radius and altitude relative to the target. Next in Section IV-B, we design a controller specific to a multirotor aircraft to drive the multirotor's velocity to a commanded body-level velocity. Lastly, Section V demonstrates the effectiveness of these observers and controllers under various target motions and discusses the results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. TARGET ESTIMATION</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Bearing-only Measurements</head><p>The position of the target relative to the UAS and its time derivative are given by p^b_{t/b} = R^b_i (p^i_{t/i} − p^i_{b/i}) and ṗ^b_{t/b} = Ṙ^b_i (p^i_{t/i} − p^i_{b/i}) + R^b_i (ṗ^i_{t/i} − v^i_{b/i}).</p><p>Assuming a stationary target, ṗ^i_{t/i} = 0, and inserting Ṙ^b_i = −(ω^b_{b/i})∧ R^b_i, we obtain ṗ^b_{t/b} = −(ω^b_{b/i})∧ p^b_{t/b} − v^b_{b/i},</p><p>where ω^b_{b/i} and v^b_{b/i} are the angular and linear rates of the UAS. Here, we have assumed that the aircraft's visual-inertial estimator is working well, such that its observable states are known, and we have also assumed a stationary target. However, the estimation error will remain bounded for slowly moving targets, as shown in <ref type="bibr">[5]</ref>.</p><p>Proposition 1: Assuming that the camera only measures the target direction e_t = p^b_{t/b} / ‖p^b_{t/b}‖, an observer for relative target position may be given by</p></div>
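As a rough numerical illustration of the relative dynamics and of a bearing-only observer in this general spirit, the sketch below propagates ṗ^b_{t/b} = −ω∧p − v_b for a stationary target and corrects an estimate only in directions orthogonal to the measured bearing. The gain k, the orbiting trajectory, and this particular correction structure are our assumptions, not the paper's Proposition 1:

```python
import numpy as np

def skew(a):
    # Skew-symmetric operator, so skew(a) @ b == np.cross(a, b).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

dt, k = 0.01, 2.0                      # step size and (assumed) observer gain
omega = np.array([0.0, 0.0, 0.5])      # UAS angular rate, omega^b_{b/i}
v_b = np.array([1.0, 0.0, 0.0])        # UAS linear velocity, v^b_{b/i}

# Target near the center of the resulting orbit, so the bearing rotates
# persistently (the excitation the estimation argument relies on).
p = np.array([0.5, 2.0, 1.5])          # true relative position p^b_{t/b}
p_hat = np.array([2.0, 0.0, 0.0])      # observer state, deliberately wrong
err0 = np.linalg.norm(p - p_hat)

for _ in range(2000):                  # 20 s of simulation
    e_t = p / np.linalg.norm(p)        # measured bearing (unit vector)
    Pi = np.eye(3) - np.outer(e_t, e_t)
    # Observer: copy of the dynamics plus a correction orthogonal to the
    # measured bearing (gain structure assumed for illustration).
    p_hat = p_hat + dt * (-skew(omega) @ p_hat - v_b - k * Pi @ p_hat)
    # True relative dynamics for a stationary target.
    p = p + dt * (-skew(omega) @ p - v_b)

err_final = np.linalg.norm(p - p_hat)
align = p_hat @ p / (np.linalg.norm(p_hat) * np.linalg.norm(p))
```

Note that the correction term k(I − e_t e_t^⊤)p̂ cannot directly remove range error along the bearing; the range only becomes observable because the orbit keeps rotating the bearing, mirroring the persistent-excitation reasoning in the paper.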
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Range with Bearing Measurements</head><p>The position and velocity of the target relative to the agent are given by</p><p>Assuming a constant-velocity target, v̇^i_{t/i} = 0, and inserting</p><p>where the aircraft's acceleration v̇^b_{b/i} is measured by its IMU. This may be written in terms of the accelerometer measurement as</p><p>where g is gravity's magnitude and ā^b_{b/i} is the measured acceleration of the IMU.</p><p>Proposition 2: Assuming that the camera measures the relative target position p^b_{t/b} via an RGB-D camera or pseudo-depth, an observer may be given by</p><p>where k_1 and k_2 are positive gains.</p><p>Proof: With the errors p̃^b_{t/b} = p^b_{t/b} − p̂^b_{t/b} and ṽ^b_{t/b} = v^b_{t/b} − v̂^b_{t/b},</p><p>indicating that this trajectory will leave the set S, except when p̃^b_{t/b} = ṽ^b_{t/b} = 0.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. CIRCUMNAVIGATION</head><p>We derive the following velocity and multirotor controllers in the body-level reference frame to remove any dependence on heading.</p></div>
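The range-with-bearing case admits a simple Luenberger-style sketch: propagate the relative position and velocity dynamics for a constant-velocity target and feed the position innovation into both states. The gains k_1, k_2 and the simulated inputs are assumptions for illustration, not the paper's exact observer:

```python
import numpy as np

def skew(a):
    # Skew-symmetric operator, so skew(a) @ b == np.cross(a, b).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

dt = 0.01
k1, k2 = 3.0, 2.0                       # assumed positive observer gains
omega = np.array([0.0, 0.0, 0.3])       # UAS angular rate

# Truth: relative position/velocity in the body frame, target moving at
# constant inertial velocity.
p = np.array([6.0, 1.0, -2.0])          # p^b_{t/b}
v = np.array([-0.5, 0.2, 0.0])          # v^b_{t/b}
p_hat = np.zeros(3)
v_hat = np.zeros(3)

for i in range(3000):                   # 30 s of simulation
    t = i * dt
    a_b = np.array([0.1 * np.sin(t), 0.05 * np.cos(t), 0.0])  # IMU accel
    innov = p - p_hat                   # full-vector position measurement
    # Observer: copy of the dynamics plus position-innovation feedback.
    p_hat = p_hat + dt * (-skew(omega) @ p_hat + v_hat + k1 * innov)
    v_hat = v_hat + dt * (-skew(omega) @ v_hat - a_b + k2 * innov)
    # Truth propagation with the same inputs.
    p_dot = -skew(omega) @ p + v
    v_dot = -skew(omega) @ v - a_b
    p = p + dt * p_dot
    v = v + dt * v_dot

p_err = np.linalg.norm(p - p_hat)
v_err = np.linalg.norm(v - v_hat)
```

In inertial coordinates the error dynamics reduce to a constant linear system with eigenvalues at −1 and −2 for these gains, so both errors decay without needing any excitation, unlike the bearing-only case.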
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Velocity Control</head><p>In this section, we derive the commanded velocity in the body-level reference frame needed to bring an arbitrary aircraft to a circumnavigating orbit about a target at some constant, desired relative radius r_d and altitude h_d. The relative radius and altitude in the body-level frame are computed by rotating the body-fixed relative target position into the body-level frame and projecting it onto the horizontal plane and vertical axis. These are written in terms of the relative target position by</p><p>Differentiating these w.r.t. time yields</p><p>where e_r is the horizontal target direction in the body-level frame and e_r ⊥ e_3. Writing e_r in terms of e_t, we also have</p><p>Proposition 3: Define the errors r̃ = r − r_d and h̃ = h − h_d, and let the commanded velocity in the body-level frame be given by</p><p>where k_r and k_h are positive gains and v_t is the tangential velocity chosen by the user. Note that for bearing-only target estimation v̂^b_{t/i} = 0, while for range with bearing target estimation v̂^b_{t/i} is provided by the observer.</p><p>Proof: Differentiating the relative radial and altitude errors with respect to time, we obtain</p><p>ensuring that r̃ → 0 and h̃ → 0.</p><p>When the target velocity estimate is fixed at zero, as is the case for bearing-only estimation, the Lyapunov function candidate L = ½ (r̃² + h̃²) has the time derivative</p><p>Defining x = [r̃ h̃]^⊤, (<ref type="formula">33</ref>) becomes</p><p>where</p><p>With σ_min = min(k_r, k_h) and σ_max = max(k_r, k_h), we then have</p><p>which yields the relation</p><p>Therefore, L̇ &lt; 0 if ‖x‖ &gt; b/σ_min, and the solution is ultimately bounded in finite time. 
After convergence to the bounded region, we have</p><p>which depends on the target velocity but can be made arbitrarily small by choosing k_r and k_h to be arbitrarily large.</p><p>We note that in practice, the UAS is constrained by its own maximum velocity, so there exists a limit to the effect of these gains.</p><p>To reduce the likelihood of the target moving out of the camera's field of view, the desired radius and altitude may be chosen according to the direction of the camera optical axis during level flight. For a known optical axis angle from vertical θ, we may choose a desired relative altitude and let the desired radius be chosen to align the level optical axis with the target by r_d = h_d tan θ. (40)</p></div>
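A minimal sketch of a velocity command in the spirit of Proposition 3, assuming an NED body-level frame (z down, so the target sits below the aircraft when the relative z is positive); the gains, signs, and frame convention are our assumptions:

```python
import numpy as np

e3 = np.array([0.0, 0.0, 1.0])  # body-level down axis (NED assumed)

def velocity_command(p_l, r_d, h_d, k_r, k_h, v_t):
    """Commanded body-level velocity for circumnavigation (illustrative).

    p_l : relative target position in the body-level frame; must not be
          directly above/below the aircraft (horizontal part nonzero).
    """
    p_xy = p_l - (e3 @ p_l) * e3          # horizontal part of p_l
    r = np.linalg.norm(p_xy)              # relative radius
    h = e3 @ p_l                          # relative altitude above target
    e_r = p_xy / r                        # horizontal target direction
    e_tan = np.cross(e3, e_r)             # tangential direction, ⊥ e_r and e3
    r_til, h_til = r - r_d, h - h_d
    # Radial term closes the radius error, tangential term orbits at speed
    # v_t, vertical term closes the altitude error.
    return k_r * r_til * e_r + v_t * e_tan + k_h * h_til * e3
```

With r̃ = h̃ = 0 the command is purely tangential with magnitude v_t, which is the steady circumnavigation condition.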
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Multirotor Velocity Control</head><p>Neglecting wind and drag, the velocity dynamics of a multirotor aircraft in the body-level frame are given by</p><p>where g is the gravitational magnitude, T is thrust, m is vehicle mass, and the yaw rate of the body-level frame is</p><p>Assuming a known hover throttle signal s_h, we approximate thrust with T ≈ mg s / s_h, and (41) becomes</p><p>Defining the velocity error ṽ = v^l_c − v^l_{b/i} and holding the command constant, the time derivative is given by</p><p>and (<ref type="formula">44</ref>) becomes</p><p>This allows us to select u as a vector input that drives the velocity error to zero. Define the Lyapunov function candidate L = ½ ṽ^⊤ ṽ and differentiate to obtain</p><p>and (<ref type="formula">47</ref>) reduces to</p><p>which is negative definite for positive definite K_v. Equating (<ref type="formula">45</ref>) and (<ref type="formula">48</ref>) gives</p><p>Using the current aircraft attitude estimate and solving for the commanded thrust signal yields</p><p>Now, using the commanded thrust signal, we solve for the commanded attitude. Substituting s_c for s in (50) and dividing by s_c, we have</p><p>We cannot directly solve for R^l_{b_c}, but this equation tells us that the commanded body-down axis in the body-level frame should point in the direction</p><p>Pointing the body-down axis in this direction relative to the body-level frame will drive the multirotor to the commanded velocity, regardless of its heading. The commanded body-forward and body-right axes in the body-level frame may then be given by</p><p>where k_r is a positive gain. This does not drive the angular error in yaw exactly to zero, but a high gain can reduce the error to near zero.</p></div>
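The thrust-and-attitude extraction described above can be sketched numerically using the hover-throttle thrust model T ≈ mg s/s_h; the function and its argument conventions are our own illustration, not the paper's exact equations:

```python
import numpy as np

G = 9.81                          # gravitational magnitude (assumed value)
e3 = np.array([0.0, 0.0, 1.0])    # body-level down axis (NED assumed)

def thrust_and_body_down(a_des, s_hover):
    """Split a desired body-level acceleration into a throttle command and
    a commanded body-down axis (illustrative sketch).

    With thrust acting opposite the body-down axis k and the model
    T/m = g*s/s_h, the dynamics a = g*e3 - (g*s/s_h)*k give
    (g*s_c/s_h)*k_c = g*e3 - a_des.
    """
    f = G * e3 - a_des                      # required specific-thrust vector
    s_c = s_hover * np.linalg.norm(f) / G   # commanded throttle signal
    k_c = f / np.linalg.norm(f)             # commanded body-down axis (unit)
    return s_c, k_c
```

At hover (zero desired acceleration) this recovers the hover throttle and a level attitude; a forward acceleration tilts the commanded body-down axis backward so the thrust gains a forward component.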
<div xmlns="http://www.tei-c.org/ns/1.0"><head>V. SIMULATION RESULTS</head><p>To demonstrate the performance of the observers and controllers derived in Sections III and IV, we designed a simulation of a multirotor and a nonholonomic ground vehicle. The multirotor is modeled with nonlinear aerodynamic drag and collects measurements of the ground vehicle in the body reference frame at each time step. Measurements of the target are also corrupted with zero-mean Gaussian noise with a standard deviation of magnitude 1/10. The ground vehicle uses a bicycle steering model, has variable elevation, and follows a cycled list of four waypoints that form a square about the inertial origin.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2019" xml:id="foot_0"><p>American Control Conference (ACC) Philadelphia, PA, USA, July 10-12, 2019 978-1-5386-7926-5/$31.00 &#169;2019 AACC</p></note>
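A minimal sketch of the target vehicle described here: a kinematic bicycle model cycling through four square waypoints. All numeric parameters (speed, wheelbase, steering gain, waypoint spacing) are assumptions for illustration, not the paper's simulation values:

```python
import numpy as np

# Waypoints forming a square about the inertial origin (side 10 m, assumed).
waypoints = [np.array([5.0, 5.0]), np.array([-5.0, 5.0]),
             np.array([-5.0, -5.0]), np.array([5.0, -5.0])]

dt, speed, wheelbase = 0.01, 1.0, 0.5    # assumed vehicle parameters
x, y, theta = 0.0, 0.0, 0.0              # pose: position and heading
wp_idx, reached = 0, 0

def wrap(angle):
    # Wrap an angle to (-pi, pi].
    return (angle + np.pi) % (2.0 * np.pi) - np.pi

for _ in range(6000):                    # 60 s of driving
    wx, wy = waypoints[wp_idx]
    if np.hypot(wx - x, wy - y) < 0.5:   # waypoint reached: cycle to next
        wp_idx = (wp_idx + 1) % len(waypoints)
        reached += 1
        continue
    desired = np.arctan2(wy - y, wx - x)
    delta = np.clip(2.0 * wrap(desired - theta), -0.5, 0.5)  # steering angle
    # Kinematic bicycle model.
    x += dt * speed * np.cos(theta)
    y += dt * speed * np.sin(theta)
    theta = wrap(theta + dt * speed / wheelbase * np.tan(delta))
```

At 1 m/s the vehicle covers the 40 m perimeter in well under the simulated minute, so several waypoints are cycled, giving the circumnavigating multirotor a persistently moving target.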
		</body>
		</text>
</TEI>
