<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Relative multiplicative extended Kalman filter for observable GPS-denied navigation</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>08/01/2020</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10316379</idno>
					<idno type="doi">10.1177/0278364920903094</idno>
					<title level='j'>The International Journal of Robotics Research</title>
<idno>0278-3649</idno>
<biblScope unit="volume">39</biblScope>
<biblScope unit="issue">9</biblScope>					

					<author>Daniel P Koch</author><author>David O Wheeler</author><author>Randal W Beard</author><author>Timothy W McLain</author><author>Kevin M Brink</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[This work presents a multiplicative extended Kalman filter (MEKF) for estimating the relative state of a multirotor vehicle operating in a GPS-denied environment. The filter fuses data from an inertial measurement unit and altimeter with relative-pose updates from a keyframe-based visual odometry or laser scan-matching algorithm. Because the global position and heading states of the vehicle are unobservable in the absence of global measurements such as GPS, the filter in this article estimates the state with respect to a local frame that is colocated with the odometry keyframe. As a result, the odometry update provides nearly direct measurements of the relative vehicle pose, making those states observable. Recent publications have rigorously documented the theoretical advantages of such an observable parameterization, including improved consistency, accuracy, and system robustness, and have demonstrated the effectiveness of such an approach during prolonged multirotor flight tests. This article complements this prior work by providing a complete, self-contained, tutorial derivation of the relative MEKF, which has been thoroughly motivated but only briefly described to date. This article presents several improvements and extensions to the filter while clearly defining all quaternion conventions and properties used, including several new useful properties relating to error quaternions and their Euler-angle decomposition. Finally, this article derives the filter both for traditional dynamics defined with respect to an inertial frame, and for robocentric dynamics defined with respect to the vehicle’s body frame, and provides insights into the subtle differences that arise between the two formulations.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>GPS-denied navigation for small unmanned aircraft systems (UASs) is an active and rich field of research with significant practical applications such as infrastructure inspection and security. Most UASs fuse GPS with accelerometer and rate-gyro data to provide accurate global state estimates suitable for feedback control. When GPS is not available, however, additional sensors such as cameras or lidar are required. Owing to the size, weight, and power constraints and fast vehicle dynamics associated with small UASs, many such systems incorporate these additional sensors using filter-based estimation techniques rather than traditional full simultaneous localization and mapping (SLAM) algorithms. Filter-based approaches are computationally efficient and ensure smooth, timely state estimates for control.</p><p>In the absence of GPS updates, many filtering methods utilize incremental odometry measurements from either visual odometry or laser scan matching. These odometry measurements can be computed frame-to-frame, or several measurements can be computed with respect to the same keyframe image or scan. The keyframe image or scan is updated when there is insufficient overlap with current images or scans to compute reliable odometry measurements. Keyframe-based approaches have the advantage of reducing temporal drift in the odometry measurements <ref type="bibr">(Leutenegger et al., 2015)</ref>.</p><p>Despite having only incremental measurements available, the majority of GPS-denied navigation approaches directly estimate the vehicle's global pose with respect to some fixed origin. (Fig. <ref type="figure">1</ref>: Block diagram of the general relative navigation system architecture. Flight-critical estimation and control is performed with respect to a local frame. The framework is described in more detail by <ref type="bibr">Wheeler et al. (2020)</ref>.) 
Without global position measurements, however, the vehicle's global pose and heading are unobservable <ref type="bibr">(Martinelli, 2012;</ref><ref type="bibr">Weiss et al., 2012;</ref><ref type="bibr">Jones et al., 2007)</ref>. As a result, global filters can suffer from inconsistent and unbounded state uncertainties, erratic state jumps when applying relative measurements, and the inability to directly apply intermittent global information without causing large state jumps <ref type="bibr">(Julier and Uhlmann, 2001;</ref><ref type="bibr">Bailey and Durrant-Whyte, 2006;</ref><ref type="bibr">Kottas et al., 2013;</ref><ref type="bibr">Wheeler et al., 2018)</ref>. In contrast, the relative navigation approach estimates only the relative state of the vehicle with respect to the location of the most recent odometry keyframe <ref type="bibr">(Leishman et al., 2014b)</ref>. As a result, the odometry provides direct measurements of the position and heading states, making them observable by construction and thereby ensuring consistent and bounded state estimates. Each time a new keyframe is declared, the current state and covariance estimates are passed to a back-end map that concatenates them as edges in a pose graph to reconstruct the global path of the vehicle. The position and heading states in the filter are then reset to zero and estimation continues. The relative navigation architecture is illustrated graphically in Figure <ref type="figure">1</ref>, and parallels ideas found in the SLAM literature <ref type="bibr">(Chong and Kleeman, 1999;</ref><ref type="bibr">Kim et al., 2010)</ref>.</p><p>Recent simulation results have shown that using the relative navigation framework to ensure observability provides significant advantages in terms of consistency of the estimated relative and global states, as well as some improvement in accuracy and system robustness when incorporating intermittent global information <ref type="bibr">(Wheeler et al., 2018)</ref>. 
Furthermore, recent multirotor hardware flight test results have demonstrated the effectiveness of relative navigation for prolonged GPS-degraded navigation of small UASs <ref type="bibr">(Wheeler et al., 2020)</ref>. This article complements the work in <ref type="bibr">Wheeler et al. (2018)</ref> and <ref type="bibr">Wheeler et al. (2020)</ref> by providing a complete, self-contained, tutorial derivation of the relative state-estimation filter used in those papers to obtain hardware results. While those papers provide thorough theoretical and practical motivation for the relative navigation approach, the core relative estimator component itself has only been briefly described in the literature to date.</p><p>The purpose of this article is to fill that gap. The flight tests presented by <ref type="bibr">Wheeler et al. (2018)</ref> and <ref type="bibr">Wheeler et al. (2020)</ref> successfully leveraged the relative multiplicative extended Kalman filter (RMEKF) presented in this article. While that work demonstrated the effectiveness of the relative navigation framework and gave an overview of the various components, it did not describe the RMEKF in detail. The RMEKF builds upon the multiplicative extended Kalman filter (MEKF), which uses a quaternion to represent attitude and quaternion multiplication to define attitude error. The RMEKF extends the MEKF by defining the UAS state to be with respect to a local coordinate frame associated with the current keyframe image. To accommodate this relative state, the RMEKF introduces an additional keyframe reset step that is applied each time a new keyframe is declared.</p><p>This article contributes to the literature in three ways. First, the article provides a tutorial derivation of the MEKF for UAS state estimation using Hamilton quaternions. 
Second, the article presents a complete derivation of the RMEKF for multirotor UASs, including several important extensions to the original presentation by <ref type="bibr">Leishman and McLain (2014)</ref>. Third, the article provides a thorough derivation of the RMEKF for both inertial and body-fixed (robocentric) state representations, highlighting the subtle but important differences that exist between the two methods. The following paragraphs describe these contributions in further detail and relate how they compare to the existing literature.</p><p>Significant portions of the article are tutorial in nature, clearly motivating why an indirect or error state formulation is necessary when quaternions are used to represent attitude, and providing complete explanations of each step in the derivation of the filter equations. The MEKF was first introduced by <ref type="bibr">Lefferts et al. (1982)</ref>, and several in-depth discussions and derivations of MEKF implementations have been published <ref type="bibr">(Markley, 2003;</ref><ref type="bibr">Sola, 2016;</ref><ref type="bibr">Trawny and Roumeliotis, 2005)</ref>. Some of these valuable publications are of similar scope to the current work, but this article provides several meaningful extensions. First, this article derives an estimator for the full state of a UAS (position, velocity, attitude, accelerometer biases, and gyroscope biases), while most previous MEKF papers of similar scope focus only on the attitude and bias estimation. <ref type="bibr">Sola (2016)</ref> did derive a full-state MEKF, but this article provides the derivation for a unique set of propagation and measurement models. Second, this article derives the MEKF using the Hamilton quaternion convention as opposed to the JPL convention used in some other works. 
While the choice of quaternion convention does not fundamentally change the problem, Hamilton quaternions are commonly used in the robotics literature and subtle but important differences arise. This article provides a contrasting perspective to help deepen understanding of quaternions, and aims to clarify some of the confusion that can arise due to the varying conventions used in different works that are not always defined explicitly. Third, the tutorial nature of this article provides sufficient context for the derivation of several new properties relating to quaternions, their error representations, and their Euler-angle decomposition. These properties play a key role in the derivation of the RMEKF to allow partial attitude updates.</p><p>Another purpose of this article is to provide a thorough derivation of the RMEKF estimator used successfully by <ref type="bibr">Wheeler et al. (2020)</ref> for prolonged UAS navigation in GPS-degraded environments. The RMEKF presented in this article extends the original RMEKF derivation by <ref type="bibr">Leishman and McLain (2014)</ref> in several important ways that have proven necessary for prolonged flight. First, this article presents a new visual odometry measurement model, laser scan-matching measurement model, and keyframe reset operation, that together ensure the state remains observable in GPS-denied environments. Second, several novel properties of error quaternions are derived that enable partial updates to quaternion states and their covariances. Third, new terms are added to the state vector in this article to correctly account for uncertainty in the roll, pitch, and altitude of the vehicle at the time a keyframe is declared. 
Finally, smaller differences include reversing the direction of the odometry measurement model to avoid unnecessarily coupling heading uncertainty into the update, and estimating the global height of the vehicle above ground rather than treating altitude as a relative state.</p><p>Another unique contribution of this article is the derivation of the RMEKF when the state is defined with respect to either an inertial frame or a body-fixed robocentric frame. Vehicle dynamics are traditionally expressed in an inertially fixed, gravity-aligned frame (see <ref type="bibr">Leishman et al. (2014b)</ref> and <ref type="bibr">Leishman and McLain (2014)</ref>). This makes sense especially when the sensors are inertial sensors such as GPS. However, expressing the dynamics in a robocentric frame is often more natural when using robocentric sensors such as cameras and laser range finders, and can help address some of the inconsistency issues of EKF-SLAM <ref type="bibr">(Bloesch et al., 2015;</ref><ref type="bibr">Castellanos et al., 2007)</ref> when only relative states are required. A related contribution of this article is a presentation of the subtle differences that arise between using an inertial and robocentric reference frame. These differences can cause confusion in the literature where a side-by-side comparison does not currently exist. For example, in addition to the change in dynamics, subtle changes appear in the quaternion integration, error state definition, measurement models, and keyframe reset operations. By presenting both formulations side-by-side, these differences are clearly explained.</p><p>The final contribution of this article is a complete, self-contained derivation of the filter and all relevant quaternion properties. The definitions of quaternions and error states used across the current estimation literature differ in subtle ways. 
When these definitions are not documented thoroughly, it becomes difficult to correctly leverage properties from multiple sources. With its tutorial nature and step-by-step explanations, this article is designed to present a complete, self-contained derivation with respect to a consistent, explicitly stated definition. This allows the reader to understand, implement, and potentially modify the RMEKF for new vehicles or applications. Note that while the keyframe reset step and several measurement models are specific to relative navigation, the propagation equations and general filter structure are equally relevant for other applications, such as GPS/inertial navigation system (INS) navigation.</p><p>Section 2 summarizes the notation used throughout the article. Section 3 provides an overview of the quaternion definitions used in the article. Specifically, Section 3.8 derives several relevant, new properties of error quaternions. Section 4 outlines the structure of the MEKF. The keyframe reset step is described in Section 5, and an overview of the complete RMEKF algorithm is given. Section 6 derives the specific filter equations for inertial relative navigation (iRN), and Section 7 derives the equations for body-fixed robocentric relative navigation (bRN). Finally, Sections 8 and 9 respectively present results and conclusions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Nomenclature</head><p>The following variables, operators, and notation are defined and motivated throughout the article and are summarized here for convenience. Let B denote the vehicle's body frame and I denote an inertial frame. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>State variables</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Quaternion properties</head><p>Quaternions are a common method for representing attitude owing to their improved computational efficiency and accuracy compared with alternative approaches <ref type="bibr">(Casey et al., 2013)</ref>. A variety of definitions exist for quaternions and their associated operations, leading to subtle discrepancies and potential confusion. The various approaches, described in more detail by <ref type="bibr">Sola (2016)</ref>, include left-handed versus right-handed quaternion multiplication, active versus passive representations, local-to-global versus global-to-local attitude direction, and quaternion ordering. This section explicitly establishes the definitions and notation used throughout this article and additionally derives several properties required for the filter's derivation. This section is not intended as a complete introduction to quaternions, but rather as a summary of relevant points.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Quaternion conventions</head><p>A quaternion $q \in \mathbb{H}$ is a hyper-complex number of rank four consisting of a scalar and a vector portion, $q = q_0 + q_x i + q_y j + q_z k$.</p><p>We use the Hamilton definition of the quaternion, with</p><p>$$i^2 = j^2 = k^2 = ijk = -1 \quad (1)$$</p><p>For notational convenience we define the vector portion of the quaternion as $\bar{q} = [q_x \; q_y \; q_z]^T$, and write the quaternion as</p><p>$$q = [\bar{q}^T \; q_0]^T \quad (2)$$</p><p>Quaternion multiplication is denoted with the $\otimes$ operator, and is carried out according to the rules in (1) and standard algebraic multiplication. Using the notation in (2), quaternion multiplication can be written as a matrix multiplication according to</p><p>$$p \otimes q = \begin{bmatrix} p_0 I + \lfloor\bar{p}\rfloor &amp; \bar{p} \\ -\bar{p}^T &amp; p_0 \end{bmatrix} \begin{bmatrix} \bar{q} \\ q_0 \end{bmatrix} \quad (3a)$$ $$\phantom{p \otimes q} = \begin{bmatrix} q_0 I - \lfloor\bar{q}\rfloor &amp; \bar{q} \\ -\bar{q}^T &amp; q_0 \end{bmatrix} \begin{bmatrix} \bar{p} \\ p_0 \end{bmatrix} \quad (3b)$$</p><p>where the operator $\lfloor\cdot\rfloor$ is the skew-symmetric operator</p><p>$$\lfloor a \rfloor \triangleq \begin{bmatrix} 0 &amp; -a_z &amp; a_y \\ a_z &amp; 0 &amp; -a_x \\ -a_y &amp; a_x &amp; 0 \end{bmatrix}$$</p><p>The conjugate of a quaternion $q$ is denoted by $q^*$, and is equal to $q$ but with the elements of the vector portion negated. The inverse of a quaternion is given by</p><p>$$q^{-1} = q^* / \|q\|^2 \quad (4)$$</p><p>The quaternions used in this article all represent rotations and so are unit quaternions, meaning that their norm is 1. Therefore, for unit quaternions we have</p><p>$$q^{-1} = q^* \quad (5)$$</p><p>Inverting the product of two quaternions results in the product of the inverse of each quaternion in the opposite order, as</p><p>$$(p \otimes q)^{-1} = q^{-1} \otimes p^{-1}$$</p></div>
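The conventions above can be checked numerically. The following is a minimal sketch, not code from the article: the helper names (`qmult`, `qconj`, `qinv`) are chosen here, and the scalar-first storage order `[q0, qx, qy, qz]` is an implementation choice, whereas the article writes the vector portion first.

```python
import numpy as np

def qmult(p, q):
    """Hamilton product p ⊗ q (scalar-first storage [q0, qx, qy, qz])."""
    p0, pv = p[0], np.asarray(p[1:], float)
    q0, qv = q[0], np.asarray(q[1:], float)
    return np.concatenate((
        [p0 * q0 - pv @ qv],                    # scalar part
        p0 * qv + q0 * pv + np.cross(pv, qv)))  # vector part

def qconj(q):
    """Conjugate q*: the vector portion is negated."""
    return np.array([q[0], -q[1], -q[2], -q[3]], float)

def qinv(q):
    """General inverse q* / ||q||^2; equals q* for a unit quaternion."""
    q = np.asarray(q, float)
    return qconj(q) / (q @ q)

# i ⊗ j = k under the Hamilton convention i^2 = j^2 = k^2 = ijk = -1
i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])
assert np.allclose(qmult(i, j), k)

# (p ⊗ q)^-1 = q^-1 ⊗ p^-1
p = np.array([0.8, 0.6, 0.0, 0.0])
q = np.array([0.5, 0.5, 0.5, 0.5])
assert np.allclose(qinv(qmult(p, q)), qmult(qinv(q), qinv(p)))
```

The `np.cross` term makes the right-handed multiplication rule explicit; swapping its sign would yield the left-handed (JPL-style) product the article contrasts with.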
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Vector rotation</head><p>In this article, quaternions are interpreted passively, meaning that they represent the rotation necessary to express a vector in a different frame. Let quaternion $q_a^b$ represent the rotation from frame $a$ to frame $b$ and let $^a y$ represent a vector expressed in frame $a$. As described by <ref type="bibr">Kuipers (1999)</ref>, $^a y$ can be expressed in frame $b$ using the quaternion conjugation operation as</p><p>$$[\,{}^b y^T \;\; 0\,]^T = (q_a^b)^{-1} \otimes [\,{}^a y^T \;\; 0\,]^T \otimes q_a^b \quad (6)$$</p><p>The term $[\,{}^a y^T \;\; 0\,]^T$ is referred to as the pure quaternion constructed from $^a y$.</p><p>It is convenient to define an equivalent rotation matrix $R(q)$ such that</p><p>$${}^b y = R(q_a^b)\, {}^a y \quad (7)$$</p><p>An expression for $R(q)$ can be derived by expanding the right-hand side of (6) according to (<ref type="formula">5</ref>), (3a), and (3b) as</p><p>$$\begin{aligned} \begin{bmatrix} R(q)\, y \\ 0 \end{bmatrix} &amp;= q^{-1} \otimes \begin{bmatrix} y \\ 0 \end{bmatrix} \otimes q = \begin{bmatrix} q_0 I - \lfloor\bar{q}\rfloor &amp; -\bar{q} \\ \bar{q}^T &amp; q_0 \end{bmatrix} \begin{bmatrix} y \\ 0 \end{bmatrix} \otimes q = \begin{bmatrix} q_0 y - \lfloor\bar{q}\rfloor y \\ \bar{q}^T y \end{bmatrix} \otimes q \\ &amp;= \begin{bmatrix} q_0 I - \lfloor\bar{q}\rfloor &amp; \bar{q} \\ -\bar{q}^T &amp; q_0 \end{bmatrix} \begin{bmatrix} q_0 y - \lfloor\bar{q}\rfloor y \\ \bar{q}^T y \end{bmatrix} = \begin{bmatrix} \left( q_0^2 I - 2 q_0 \lfloor\bar{q}\rfloor + \bar{q}\bar{q}^T + \lfloor\bar{q}\rfloor^2 \right) y \\ 0 \end{bmatrix} \end{aligned} \quad (8)$$</p><p>which implies that $R(q) = q_0^2 I - 2 q_0 \lfloor\bar{q}\rfloor + \bar{q}\bar{q}^T + \lfloor\bar{q}\rfloor^2$. It can be shown, however, that this reduces to</p><p>$$R(q) = (2 q_0^2 - 1)\, I - 2 q_0 \lfloor\bar{q}\rfloor + 2\, \bar{q}\bar{q}^T \quad (9)$$</p><p>Rotation matrices exhibit the following properties:</p><p>$$R(q)^{-1} = R(q)^T = R(q^{-1}), \qquad \det R(q) = 1$$</p><p>The formula for vector rotation in Equation (<ref type="formula">6</ref>) can be used to derive the manner in which two rotations are compounded together. If $q_a^b$ defines the rotation from frame $a$ to frame $b$, and $q_b^c$ the rotation from frame $b$ to frame $c$, then to take a vector expressed in frame $a$ and express it in frame $c$ we have</p><p>$$[\,{}^c y^T \;\; 0\,]^T = (q_b^c)^{-1} \otimes (q_a^b)^{-1} \otimes [\,{}^a y^T \;\; 0\,]^T \otimes q_a^b \otimes q_b^c \quad (10)$$</p><p>We therefore conclude that</p><p>$$q_a^c = q_a^b \otimes q_b^c \quad (11)$$</p><p>Comparing (<ref type="formula">10</ref>) and (<ref type="formula">11</ref>), we see that rotation matrices and quaternions compound in the opposite order:</p><p>$$R(q_a^c) = R(q_b^c)\, R(q_a^b), \qquad q_a^c = q_a^b \otimes q_b^c \quad (12)$$</p><p>When the quaternion conjugation operation is applied to a quaternion $p$ that is not a pure quaternion, the analysis procedure in Equation (<ref type="formula">8</ref>) can be repeated to show that</p><p>$$q^{-1} \otimes p \otimes q = [\,(R(q)\,\bar{p})^T \;\; p_0\,]^T \quad (13)$$</p><p>for the same rotation matrix $R(q)$ defined by (<ref type="formula">9</ref>). 
The result of this operation is that the basis of the vector portion of p is rotated by the rotation defined by q. This property is useful when the quaternion is interpreted according to its axis-angle representation, and it is desirable to express the axis vector in a different coordinate frame. This arises in the derivation of some of the measurement models and Jacobians in this article.</p></div>
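The conjugation operation and its matrix equivalent can be sketched as follows. This is an illustrative snippet under the assumptions already noted (scalar-first storage, helper names chosen here); `R` implements the reduced form of the rotation matrix.

```python
import numpy as np

def qmult(p, q):
    # Hamilton product, scalar-first storage [q0, qx, qy, qz]
    p0, pv = p[0], np.asarray(p[1:], float)
    q0, qv = q[0], np.asarray(q[1:], float)
    return np.concatenate(([p0*q0 - pv @ qv], p0*qv + q0*pv + np.cross(pv, qv)))

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]], float)

def skew(v):
    """⌊v⌋ : skew(a) @ b == np.cross(a, b)."""
    return np.array([[0., -v[2], v[1]], [v[2], 0., -v[0]], [-v[1], v[0], 0.]])

def rotate(q, y):
    """Express y (components in frame a) in frame b via q* ⊗ ỹ ⊗ q."""
    yq = np.concatenate(([0.], np.asarray(y, float)))  # pure quaternion
    return qmult(qmult(qconj(q), yq), q)[1:]

def R(q):
    """Equivalent rotation matrix (2 q0² − 1) I − 2 q0 ⌊q̄⌋ + 2 q̄ q̄ᵀ."""
    q0, qv = q[0], np.asarray(q[1:], float)
    return (2*q0**2 - 1)*np.eye(3) - 2*q0*skew(qv) + 2*np.outer(qv, qv)

# 90° yaw: the frame-a x-axis has components [0, -1, 0] in the rotated frame
c = np.cos(np.pi/4)
qz90 = np.array([c, 0., 0., c])
assert np.allclose(rotate(qz90, [1., 0., 0.]), [0., -1., 0.])
# conjugation and the rotation matrix agree
assert np.allclose(R(qz90) @ np.array([1., 0., 0.]), rotate(qz90, [1., 0., 0.]))
```

The sign of the result ([0, -1, 0] rather than [0, 1, 0]) is exactly the passive, global-to-local behavior described in this section.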
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Unit sphere propagation</head><p>Attitude is represented using quaternions of unit length. Unit quaternions do not form a vector space, but rather form a group on the unit sphere $S^3 \subset \mathbb{H}$. The group operator is quaternion multiplication, and the group of unit quaternions gives a double-cover parameterization of the group of rotations SO(3). Multiplying a unit quaternion by a non-unit quaternion will cause the product to leave the unit sphere. Normalizing the resulting quaternion according to $q \leftarrow q / \|q\|$ returns the quaternion to the unit sphere, but linearization errors are introduced.</p><p>To properly rotate a quaternion along the manifold, it is necessary to represent the rotation in terms of a unit quaternion. A rotation of angle $\theta$ about the unit axis $n$ can be represented using the unit quaternion as</p><p>$$q = [\, \sin(\theta/2)\, n^T \;\; \cos(\theta/2) \,]^T \quad (14)$$</p><p>Rotating a quaternion $q$ along the unit sphere by the rotation vector $\theta \in \mathbb{R}^3$ is accomplished as $q \otimes \theta^{\wedge}$, where</p><p>$$\theta^{\wedge} \triangleq \begin{bmatrix} \sin(\|\theta\|/2)\, \theta / \|\theta\| \\ \cos(\|\theta\|/2) \end{bmatrix} \quad (15)$$</p><p>This is analogous to the notation $q \boxplus \theta$ and $q \otimes \exp(\frac{\theta}{2})$ found in the estimation literature <ref type="bibr">(Hertzberg, et al., 2013)</ref>. The $\vee$ operator is defined as</p><p>$$\theta = q^{\vee} \triangleq 2\,\mathrm{sign}(q_0)\, \mathrm{atan2}\!\left( \|\bar{q}\|, |q_0| \right) \bar{q} / \|\bar{q}\| \quad (16)$$</p><p>Extracting the underlying rotation between $q_a$ and $q_b$ is accomplished as $(q_a \otimes q_b^{-1})^{\vee}$. This is analogous to the $\log(\cdot)$ notation found in the estimation literature <ref type="bibr">(Hertzberg, et al., 2013)</ref>. As inverse mappings, it can be shown that $\theta = \theta^{\wedge\vee}$ and $q = q^{\vee\wedge}$.</p><p>Equation (<ref type="formula">15</ref>) is undefined when $\theta$ equals zero, and in practice becomes numerically unstable as $\theta$ approaches zero. 
There are a number of common approximations of (15) for a small angle $\delta\theta$, such as the second-order Gibbs vector parameterization <ref type="bibr">(Markley, 2003</ref>)</p><p>$$\delta\theta^{\wedge} \approx \frac{1}{\sqrt{4 + \delta\theta^T \delta\theta}} \begin{bmatrix} \delta\theta \\ 2 \end{bmatrix} \quad (17)$$</p><p>The first-order approximation of both (<ref type="formula">15</ref>) and (<ref type="formula">17</ref>) is</p><p>$$\delta\theta^{\wedge} \approx \begin{bmatrix} \delta\theta / 2 \\ 1 \end{bmatrix} \quad (18)$$</p><p>which is useful when deriving first-order Jacobians. Equation (16) can similarly be approximated for a small quaternion $\delta q$ as</p><p>$$\delta q^{\vee} \approx 2\, \mathrm{sign}(\delta q_0)\, \delta\bar{q} \quad (19)$$</p></div>
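The ∧ and ∨ maps and their near-zero fallbacks can be sketched as below. This is an illustrative implementation (function names `qexp`/`qlog` and the scalar-first storage are choices made here; the thresholds are arbitrary): the exact trigonometric form is used away from zero, and the first-order forms take over near it.

```python
import numpy as np

def qexp(theta):
    """θ∧ : rotation vector → unit quaternion (scalar-first storage here)."""
    theta = np.asarray(theta, float)
    a = np.linalg.norm(theta)
    if a < 1e-8:                              # first-order form near zero
        q = np.concatenate(([1.0], theta / 2.0))
        return q / np.linalg.norm(q)
    return np.concatenate(([np.cos(a/2)], np.sin(a/2) * theta / a))

def qlog(q):
    """q∨ : unit quaternion → rotation vector (inverse of qexp)."""
    qv = np.asarray(q[1:], float)
    n = np.linalg.norm(qv)
    if n < 1e-8:                              # first-order form: 2 sign(q0) q̄
        return 2.0 * np.sign(q[0]) * qv
    return 2.0 * np.sign(q[0]) * np.arctan2(n, abs(q[0])) * qv / n

theta = np.array([0.3, -0.2, 0.1])
assert np.allclose(qlog(qexp(theta)), theta)           # θ = θ∧∨
assert np.allclose(np.linalg.norm(qexp(theta)), 1.0)   # result stays on S³
assert np.allclose(qlog(-qexp(theta)), theta)          # ±q encode the same rotation
```

The last assertion illustrates the double cover of SO(3) mentioned in the text: negating a unit quaternion does not change the recovered rotation vector.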
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Time integration</head><p>Several methods can be used to numerically integrate a quaternion that represents the attitude of a rigid body. Numerical integration is carried out over a finite time step $\Delta t$, and is governed by the angular velocity of the body with respect to an inertial frame as expressed in the body frame, $\omega \triangleq {}^B\omega_{B/I}$. Let $\Delta q_t$ be an incremental quaternion and $\omega_0$ be the nominal angular velocity. Zero-order quaternion integration assumes that the angular velocity is constant over the timestep, $\omega_0 = \omega_t$, while first-order integration uses linear interpolation, $\omega_0 = \frac{1}{2}(\omega_t + \omega_{t-1})$. (Fig. <ref type="figure">2</ref>: Quaternion definitions used in the derivation of the quaternion time derivative and integration. The same $\Delta q_t$ is used for both iRN and bRN because it corresponds to the body-fixed angular velocity $\omega$ measured by the rate gyros.) From (15), we write $\Delta q_t$ as</p><p>$$\Delta q_t = (\omega_0 \Delta t)^{\wedge} \quad (20)$$</p><p>The value of the quaternion $q_{t+\Delta t}$ at time $t + \Delta t$ can be expressed as the combination of the quaternion $q_t$ at time $t$ and the incremental quaternion $\Delta q_t$. The manner in which these quaternions are combined depends on whether the attitude quaternion represents the attitude of the body with respect to an inertial frame (iRN) or the attitude of an inertial frame with respect to the body (bRN). As illustrated in Figure <ref type="figure">2</ref>, the attitude at time $t + \Delta t$ for these cases is</p><p>$$\text{iRN:}\quad q_{t+\Delta t} = q_t \otimes \Delta q_t, \qquad \text{bRN:}\quad q_{t+\Delta t} = \Delta q_t^{-1} \otimes q_t \quad (21)$$</p><p>where the order of compounding follows Equation (11). Substituting (20) into (21) gives</p><p>$$\text{iRN:}\quad q_{t+\Delta t} = q_t \otimes (\omega_0 \Delta t)^{\wedge}, \qquad \text{bRN:}\quad q_{t+\Delta t} = (-\omega_0 \Delta t)^{\wedge} \otimes q_t \quad (22)$$</p><p>Integrating according to (22) maintains unit norm, allowing the attitude to propagate on the unit sphere $S^3 \subset \mathbb{H}$. In practice, however, this definition becomes numerically unstable as $\omega_0$ approaches zero. 
As described by <ref type="bibr">Trawny and Roumeliotis (2005)</ref>, applying L'Hôpital's rule to (22a) for iRN shows that</p><p>$$\lim_{\omega_0 \to 0} (\omega_0 \Delta t)^{\wedge} = \begin{bmatrix} \omega_0 \Delta t / 2 \\ 1 \end{bmatrix} \quad (23)$$</p><p>so that in this limit</p><p>$$q_{t+\Delta t} = q_t \otimes \begin{bmatrix} \omega_0 \Delta t / 2 \\ 1 \end{bmatrix} \quad (24)$$</p><p>Comparing (<ref type="formula">24</ref>) to (3b) shows that it can be written as</p><p>$$\text{iRN:}\quad q_{t+\Delta t} \approx \left( I_4 + \frac{\Delta t}{2} \begin{bmatrix} -\lfloor \omega_0 \rfloor &amp; \omega_0 \\ -\omega_0^T &amp; 0 \end{bmatrix} \right) q_t \quad (25a)$$</p><p>For bRN, a similar analysis can be applied to (22b) to show</p><p>$$\text{bRN:}\quad q_{t+\Delta t} \approx \left( I_4 + \frac{\Delta t}{2} \begin{bmatrix} -\lfloor \omega_0 \rfloor &amp; -\omega_0 \\ \omega_0^T &amp; 0 \end{bmatrix} \right) q_t \quad (25b)$$</p><p>Note that for bRN both the order of the quaternion multiplication in (22) and the sign of $\omega_0$ have been reversed. In summary, the attitude quaternion is integrated according to (22) when $\omega_0$ is sufficiently large to avoid numerical issues. When $\omega_0$ is small, the integration is approximated according to (25). Integrating according to (25) causes the quaternion to depart from the unit sphere $S^3$; when this method is used a normalization step therefore follows.</p></div>
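The two integration routes for the iRN case can be sketched as follows; this is an illustrative snippet (helper names and scalar-first storage are choices made here, and the small-ω step is written via the quaternion product rather than the 4×4 matrix, which is algebraically the same thing).

```python
import numpy as np

def qmult(p, q):
    p0, pv = p[0], np.asarray(p[1:], float)
    q0, qv = q[0], np.asarray(q[1:], float)
    return np.concatenate(([p0*q0 - pv @ qv], p0*qv + q0*pv + np.cross(pv, qv)))

def qexp(theta):
    theta = np.asarray(theta, float)
    a = np.linalg.norm(theta)
    if a < 1e-8:
        q = np.concatenate(([1.0], theta / 2.0))
        return q / np.linalg.norm(q)
    return np.concatenate(([np.cos(a/2)], np.sin(a/2) * theta / a))

def step_irn(q, omega, dt):
    """Zeroth-order iRN step q ← q ⊗ (ω Δt)∧ ; stays exactly on the unit sphere."""
    return qmult(q, qexp(np.asarray(omega, float) * dt))

def step_irn_small(q, omega, dt):
    """Small-ω alternative: q ← q + (Δt/2) q ⊗ ω̃, followed by normalization."""
    wtilde = np.concatenate(([0.0], np.asarray(omega, float)))  # pure quaternion
    qn = q + 0.5 * dt * qmult(q, wtilde)
    return qn / np.linalg.norm(qn)

# Constant yaw rate: successive increments share an axis, so 100 small steps
# compose exactly to the single large rotation.
q = np.array([1., 0., 0., 0.])
omega = np.array([0., 0., np.pi / 2])
for _ in range(100):
    q = step_irn(q, omega, 0.01)
assert np.allclose(q, qexp(np.array([0., 0., np.pi / 2])))
```

For time-varying axes the steps no longer commute, which is where the first-order interpolation of ω mentioned in the text earns its keep.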
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Attitude kinematics</head><p>The attitude kinematics of a rigid body are described by the time derivative of the attitude quaternion. Some authors <ref type="bibr">(Bloesch et al., 2016;</ref><ref type="bibr">Hertzberg et al., 2013)</ref> emphasize the manifold structure of attitude dynamics by defining $\dot q \in \mathbb{R}^3$ as a member of the associated Lie algebra, while other authors <ref type="bibr">(Markley, 2003;</ref><ref type="bibr">Trawny and Roumeliotis, 2005)</ref> assume a first-order approximation such that $\dot q \in \mathbb{R}^4$. In this article, while we use (22) to propagate along the manifold $S^3$, we use the first-order approximation of the quaternion dynamics over a finite timestep $\Delta t$ for computing the first-order Jacobian matrices required by the extended Kalman filter (EKF). The first-order Taylor series approximation of (<ref type="formula">22</ref>), which assumes $\frac{1}{2}\omega_0 \Delta t$ is small,¹ yields the same result as (25). Using (25a), the quaternion time derivative for iRN is computed as</p><p>$$\dot q = \lim_{\Delta t \to 0} \frac{q_{t+\Delta t} - q_t}{\Delta t} = \frac{1}{2} \begin{bmatrix} -\lfloor \omega \rfloor &amp; \omega \\ -\omega^T &amp; 0 \end{bmatrix} q = \frac{1}{2}\, q \otimes \begin{bmatrix} \omega \\ 0 \end{bmatrix} \quad (26)$$</p><p>A similar analysis follows for bRN using (25b). In summary, attitude quaternion kinematics are represented by</p><p>$$\text{iRN:}\quad \dot q = \frac{1}{2}\, q \otimes \begin{bmatrix} \omega \\ 0 \end{bmatrix}, \qquad \text{bRN:}\quad \dot q = -\frac{1}{2} \begin{bmatrix} \omega \\ 0 \end{bmatrix} \otimes q \quad (27)$$</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6.">Error state</head><p>Because unit quaternions do not form a vector space, quaternion error cannot be computed using vector subtraction. Rather, a true quaternion state $q$ is represented as the quaternion multiplication of an estimated quaternion $\hat q$ and a quaternion attitude error $\delta q$. By varying the order and direction of the quaternion multiplication, there are four possible methods to define attitude error:</p><p>$$\text{Method 1:}\quad q \triangleq \hat q \otimes \delta q, \quad (28a)$$ $$\text{Method 2:}\quad q \triangleq \delta q \otimes \hat q, \quad (28b)$$ $$\text{Method 3:}\quad q \triangleq \hat q \otimes \delta q^{-1}, \quad (28c)$$ $$\text{Method 4:}\quad q \triangleq \delta q^{-1} \otimes \hat q. \quad (28d)$$</p><p>In this article, we use (28a) for iRN described in Section 6 and use (28b) for bRN described in Section 7:</p><p>$$\text{iRN:}\quad q \triangleq \hat q \otimes \delta q, \quad (29a) \qquad \text{bRN:}\quad q \triangleq \delta q \otimes \hat q. \quad (29b)$$</p><p>While using different definitions requires additional care when deriving the filter, ultimately this minimizes differences between the dynamics, measurement models, and keyframe reset steps of iRN and bRN. Figure <ref type="figure">3</ref> illustrates these two error definitions. Solving (29) for the error quaternion gives</p><p>$$\text{iRN:}\quad \delta q = \hat q^{-1} \otimes q, \quad (30a) \qquad \text{bRN:}\quad \delta q = q \otimes \hat q^{-1}. \quad (30b)$$</p><p>Using (<ref type="formula">12</ref>), we can express (29) as</p><p>$$\text{iRN:}\quad R(q) = R(\delta q)\, R(\hat q), \quad (31a) \qquad \text{bRN:}\quad R(q) = R(\hat q)\, R(\delta q). \quad (31b)$$</p><p>When representing the attitude uncertainty associated with a quaternion error, a minimal representation is required. A quaternion is parameterized with four numbers, but only three are required to fully parameterize an orientation, since orientations are associated with unit quaternions, elements of the three-dimensional group $S^3$. Because the group is three-dimensional, the tangent space at the identity element, or Lie algebra, is isomorphic to $\mathbb{R}^3$, and error covariances can be defined in this three-dimensional vector space. Accordingly, we represent the uncertainty in $\delta q$ as the covariance of the vector $\delta\theta \in \mathbb{R}^3$. 
The $\wedge$ and $\vee$ operators of (<ref type="formula">18</ref>) and (<ref type="formula">19</ref>) define the mapping between the error quaternion $\delta q$ and its minimal representation $\delta\theta$ as</p><p>$$\delta q = \delta\theta^{\wedge}, \qquad \delta\theta = \delta q^{\vee} \quad (32)$$</p><p>By substituting (32) into (<ref type="formula">9</ref>) and ignoring second-order terms, it follows that</p><p>$$R(\delta q) \approx I - \lfloor \delta\theta \rfloor \quad (33)$$</p></div>
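The error-state machinery of this section can be exercised numerically. The sketch below uses the iRN definition q = q̂ ⊗ δq; as before, the helper names and scalar-first storage are choices made here, not the article's.

```python
import numpy as np

def qmult(p, q):
    p0, pv = p[0], np.asarray(p[1:], float)
    q0, qv = q[0], np.asarray(q[1:], float)
    return np.concatenate(([p0*q0 - pv @ qv], p0*qv + q0*pv + np.cross(pv, qv)))

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]], float)

def skew(v):
    return np.array([[0., -v[2], v[1]], [v[2], 0., -v[0]], [-v[1], v[0], 0.]])

def qexp(theta):
    theta = np.asarray(theta, float)
    a = np.linalg.norm(theta)
    if a < 1e-8:
        q = np.concatenate(([1.0], theta / 2.0))
        return q / np.linalg.norm(q)
    return np.concatenate(([np.cos(a/2)], np.sin(a/2) * theta / a))

def qlog(q):
    qv = np.asarray(q[1:], float)
    n = np.linalg.norm(qv)
    if n < 1e-8:
        return 2.0 * np.sign(q[0]) * qv
    return 2.0 * np.sign(q[0]) * np.arctan2(n, abs(q[0])) * qv / n

def R(q):
    q0, qv = q[0], np.asarray(q[1:], float)
    return (2*q0**2 - 1)*np.eye(3) - 2*q0*skew(qv) + 2*np.outer(qv, qv)

# Build a true attitude as q = q̂ ⊗ δq with a small error δθ
qhat = qexp([0.4, -0.1, 0.7])
dtheta = np.array([0.01, -0.02, 0.005])
q_true = qmult(qhat, qexp(dtheta))

# iRN: δq = q̂⁻¹ ⊗ q (q̂ is unit, so q̂⁻¹ = q̂*); δθ = δq∨ recovers the error
dq = qmult(qconj(qhat), q_true)
assert np.allclose(qlog(dq), dtheta)

# First-order property: R(δq) ≈ I − ⌊δθ⌋ (error is second order in δθ)
assert np.allclose(R(dq), np.eye(3) - skew(dtheta), atol=1e-3)
```

The loose tolerance on the last check makes the "ignoring second-order terms" caveat concrete: shrinking δθ by a factor of ten shrinks the residual by roughly a hundred.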
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.7.">Euler decomposition</head><p>Aircraft attitude is commonly represented using three angles: roll $\phi$, pitch $\theta$, and yaw $\psi$. Yaw represents the rotation about the inertial z-axis (down). Pitch represents the rotation about the resulting y-axis. Roll represents the rotation about the x-axis formed after pitching and yawing. This sequence of rotations, known as 3-2-1 Euler angles, relates the vehicle's body frame to an inertial frame, and can be represented as the multiplication of three rotation matrices</p><p>$$R = R(\phi)\, R(\theta)\, R(\psi) \quad (35)$$</p><p>where</p><p>$$R(\phi) = \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; \cos\phi &amp; \sin\phi \\ 0 &amp; -\sin\phi &amp; \cos\phi \end{bmatrix}, \quad R(\theta) = \begin{bmatrix} \cos\theta &amp; 0 &amp; -\sin\theta \\ 0 &amp; 1 &amp; 0 \\ \sin\theta &amp; 0 &amp; \cos\theta \end{bmatrix}, \quad R(\psi) = \begin{bmatrix} \cos\psi &amp; \sin\psi &amp; 0 \\ -\sin\psi &amp; \cos\psi &amp; 0 \\ 0 &amp; 0 &amp; 1 \end{bmatrix} \quad (36)$$</p><p>Because quaternions are generally less intuitive, they are often mapped into roll, pitch, and yaw angles for plotting, analysis, and control. Expanding (35) using (9) and comparing terms, we obtain</p><p>$$\phi = \mathrm{atan2}\!\left( 2 q_0 q_x + 2 q_y q_z,\; q_z^2 - q_x^2 - q_y^2 + q_0^2 \right), \quad \theta = \mathrm{asin}\!\left( 2 q_0 q_y - 2 q_x q_z \right), \quad \psi = \mathrm{atan2}\!\left( 2 q_0 q_z + 2 q_x q_y,\; q_x^2 - q_y^2 - q_z^2 + q_0^2 \right) \quad (37)$$</p><p>To map Euler angles into a quaternion, we compose the attitude quaternion from the roll, pitch, and yaw components using the order derived in (11) such that</p><p>$$q = q_\psi \otimes q_\theta \otimes q_\phi \quad (38)$$</p><p>Note the order of composition is the opposite of (<ref type="formula">35</ref>) as described by (<ref type="formula">12</ref>). From (<ref type="formula">14</ref>), we obtain</p><p>$$q_\phi = \begin{bmatrix} \sin(\phi/2) \\ 0 \\ 0 \\ \cos(\phi/2) \end{bmatrix}, \quad q_\theta = \begin{bmatrix} 0 \\ \sin(\theta/2) \\ 0 \\ \cos(\theta/2) \end{bmatrix}, \quad q_\psi = \begin{bmatrix} 0 \\ 0 \\ \sin(\psi/2) \\ \cos(\psi/2) \end{bmatrix} \quad (39)$$</p><p>To show that (<ref type="formula">38</ref>) and (<ref type="formula">35</ref>) are consistent using the definitions in (39), we need only apply (9) and double-angle trigonometry identities to (39). For example, letting $k \triangleq [0\; 0\; 1]^T$, we see that</p><p>$$R(q_\psi) = \left( 2\cos^2\tfrac{\psi}{2} - 1 \right) I - 2 \cos\tfrac{\psi}{2} \sin\tfrac{\psi}{2} \lfloor k \rfloor + 2 \sin^2\tfrac{\psi}{2}\, k k^T = \cos\psi\, I - \sin\psi \lfloor k \rfloor + (1 - \cos\psi)\, k k^T = R(\psi) \quad (40)$$</p><p>By substituting (<ref type="formula">39</ref>) into (<ref type="formula">38</ref>), a unit quaternion can be constructed from roll, pitch, and yaw angles as</p><p>$$q = \begin{bmatrix} \cos\frac{\psi}{2}\cos\frac{\theta}{2}\sin\frac{\phi}{2} - \sin\frac{\psi}{2}\sin\frac{\theta}{2}\cos\frac{\phi}{2} \\ \cos\frac{\psi}{2}\sin\frac{\theta}{2}\cos\frac{\phi}{2} + \sin\frac{\psi}{2}\cos\frac{\theta}{2}\sin\frac{\phi}{2} \\ \sin\frac{\psi}{2}\cos\frac{\theta}{2}\cos\frac{\phi}{2} - \cos\frac{\psi}{2}\sin\frac{\theta}{2}\sin\frac{\phi}{2} \\ \cos\frac{\psi}{2}\cos\frac{\theta}{2}\cos\frac{\phi}{2} + \sin\frac{\psi}{2}\sin\frac{\theta}{2}\sin\frac{\phi}{2} \end{bmatrix} \quad (41)$$</p><p>Appendix A presents the derivation of the Jacobian that relates errors in the Euler angle decomposition to the attitude error state.</p></div>
Defining</p><p>the covariance of is related to the covariance of &#948;&#952; as</p><p>where N = &#8706; /&#8706;&#948;&#952; is given for the iRN case by (100) in Appendix A.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.8.">Error quaternion properties</head><p>This section presents several properties of error quaternions that are needed in the derivation of the Jacobians used in the RMEKF. The first of these properties relates to the product of two error quaternions. Using (<ref type="formula">32</ref>) and (3a), and by dropping second-order terms, we obtain</p><p>$$\delta\theta_a^{\wedge} \otimes \delta\theta_b^{\wedge} \approx \begin{bmatrix} (\delta\theta_a + \delta\theta_b)/2 \\ 1 \end{bmatrix}$$</p><p>from which we can conclude that</p><p>$$\delta\theta_a^{\wedge} \otimes \delta\theta_b^{\wedge} \approx (\delta\theta_a + \delta\theta_b)^{\wedge} \quad (40)$$</p><p>It can be similarly shown that</p><p>$$\left( \delta\theta_a^{\wedge} \otimes \delta\theta_b^{\wedge} \right)^{\vee} \approx \delta\theta_a + \delta\theta_b$$</p><p>For the next property, we recall from Section 3.2 that quaternion conjugation operating on an arbitrary quaternion rotates the basis of the vector portion of that quaternion. Using Equation (<ref type="formula">13</ref>) with (32), we see that</p><p>$$q^{-1} \otimes \delta\theta^{\wedge} \otimes q = \left( R(q)\, \delta\theta \right)^{\wedge} \quad (41)$$</p><p>In addition, (<ref type="formula">40</ref>) and (<ref type="formula">41</ref>) can be combined to verify that, for example,</p><p>(42)</p></div>
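Both error-quaternion properties can be checked numerically; the sketch below is illustrative (helper names and scalar-first storage chosen here). The product property holds to first order, while the conjugation property holds exactly because rotation preserves the length of the rotation vector.

```python
import numpy as np

def qmult(p, q):
    p0, pv = p[0], np.asarray(p[1:], float)
    q0, qv = q[0], np.asarray(q[1:], float)
    return np.concatenate(([p0*q0 - pv @ qv], p0*qv + q0*pv + np.cross(pv, qv)))

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]], float)

def skew(v):
    return np.array([[0., -v[2], v[1]], [v[2], 0., -v[0]], [-v[1], v[0], 0.]])

def qexp(theta):
    theta = np.asarray(theta, float)
    a = np.linalg.norm(theta)
    if a < 1e-8:
        q = np.concatenate(([1.0], theta / 2.0))
        return q / np.linalg.norm(q)
    return np.concatenate(([np.cos(a/2)], np.sin(a/2) * theta / a))

def qlog(q):
    qv = np.asarray(q[1:], float)
    n = np.linalg.norm(qv)
    if n < 1e-8:
        return 2.0 * np.sign(q[0]) * qv
    return 2.0 * np.sign(q[0]) * np.arctan2(n, abs(q[0])) * qv / n

def R(q):
    q0, qv = q[0], np.asarray(q[1:], float)
    return (2*q0**2 - 1)*np.eye(3) - 2*q0*skew(qv) + 2*np.outer(qv, qv)

# First-order product property: θa∧ ⊗ θb∧ ≈ (θa + θb)∧
ta = np.array([0.010, -0.004, 0.002])
tb = np.array([-0.003, 0.006, 0.005])
assert np.allclose(qlog(qmult(qexp(ta), qexp(tb))), ta + tb, atol=1e-4)

# Conjugation rotates the axis: q⁻¹ ⊗ θ∧ ⊗ q = (R(q) θ)∧
q = qexp([0.9, -0.3, 0.4])
th = np.array([0.02, 0.01, -0.03])
assert np.allclose(qmult(qmult(qconj(q), qexp(th)), q), qexp(R(q) @ th))
```

The residual in the first check is on the order of the cross product of the two small angles, which is exactly the second-order term the derivation drops.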
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">MEKF</head><p>With the quaternion notation and properties established, we are prepared to outline the proposed estimation framework: a continuous-discrete, indirect, multiplicative extended Kalman filter (MEKF). For generality, in this section we derive the MEKF without defining the specific state, input, and measurement variables. This derivation will be made more concrete in Sections 6 and 7, where the actual implementations with specific variable definitions are presented. This section is predominantly a review of techniques found in the literature, but introduces notation that helps to clarify the derivation.</p><p>A Kalman filter provides the optimal, maximum-likelihood state estimate for a linear system under Gaussian noise. It recursively estimates the evolution of the system state $x$ as a function of the current state estimate $\hat x = E[x]$, input $u$, and measurement $z$. A Kalman filter also maintains an estimate of the state uncertainty, represented by the covariance matrix $P$, typically defined as</p><p>$$P \triangleq E\left[ (x - \hat x)(x - \hat x)^T \right] \quad (43)$$</p><p>The EKF is an extension of the Kalman filter for systems with nonlinear dynamics and/or nonlinear measurement models. The EKF linearizes the system about the current maximum-likelihood state estimate. While optimality and convergence are no longer guaranteed, as they are for a linear Kalman filter, EKFs are widely used in practice for their robust performance and straightforward implementation. If the state $x$ includes a quaternion, however, Equation (<ref type="formula">43</ref>) is fundamentally flawed. First, quaternion subtraction is not well defined, as described in Section 3.6, and second, Equation (<ref type="formula">43</ref>) is never full rank because quaternions are not a minimal representation. These issues are addressed using an error-state, or indirect, formulation of the Kalman filter.</p><p>The indirect Kalman filter tracks the error state $\delta x$ and its uncertainty. 
Unlike the state x, the error state is defined as an element of a vector space by using a minimal attitude representation. The error state is a measure of the discrepancy between the true state x and a nominal state x_nom, where x_nom can be defined in a number of different ways, as described by <ref type="bibr">Farrell (2008)</ref>. When the system dynamics are especially well modeled, such as for a spacecraft in orbit, x_nom may be a predetermined feedforward state estimate. More commonly, the nominal state is the maximum-likelihood state estimate x_nom = E[x]. In this case, measurements provide feedback to update the nominal state, forcing the expected value of the error state to zero. Some indirect Kalman filter implementations differentiate between fast and slow measurements, and only update the nominal state for the slow measurements <ref type="bibr">(Farrell, 2008;</ref><ref type="bibr">Maybeck, 1979)</ref>. For such systems, x_nom ≠ E[x] at the fast rate, so that the expected error state is non-zero and must be propagated. For the derivation in this article, the nominal state is updated equivalently for every measurement, ensuring that x_nom = x̂ = E[x] at any given time.</p><p>Let x_v and x_q represent the vector and quaternion portions of the state x, and let δx_v and δx_θ be the corresponding elements of the error state δx. We define these error-state elements as</p><p>where δq is defined by (30) using x_q and x̂_q. This results in the property</p><p>as derived in Appendix B.1. Because the error state is part of a vector space, and as a result of ( <ref type="formula">45</ref>), the indirect Kalman filter represents state uncertainty with the well-defined covariance</p><p>As a note on interpretation, in this formulation the unknown true state is modeled as a random variable centered around the current state estimate. 
More specifically, the vector and quaternion portions of the true state are modeled as random variables using the inverse of (44) as</p><p>where δx_v and δx_θ are elements of the Gaussian random vector δx ∼ N(0, P).</p><p>Kalman filters are decomposed into two steps: the propagation step and the update step, which are described in Sections 4.1 and 4.2. Section 4.3 discusses a method to handle delayed or out-of-order measurements. In general, the MEKF can use either an inertial or body-fixed coordinate frame. Several nuanced differences exist, however, and are highlighted as specific to iRN or bRN.</p></div>
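As a concrete illustration of the error-state definition and its inverse, the sketch below implements a generic boxplus/boxminus pair for a mixed vector-and-quaternion state. The multiplicative convention x_q = x̂_q ⊗ δq(δθ) and the scalar-first Hamilton quaternions are assumptions standing in for the article's (44) and (30), which fix the actual conventions.

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product, scalar-first [w, x, y, z]
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw * qw - pv @ qv],
                           pw * qv + qw * pv + np.cross(pv, qv)))

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def err_quat(dtheta):
    # unit quaternion for the rotation vector dtheta
    a = np.linalg.norm(dtheta)
    if a < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(a / 2)], np.sin(a / 2) * dtheta / a))

def boxplus(xv_hat, q_hat, dxv, dtheta):
    # inverse of the error-state definition: perturb the estimate
    return xv_hat + dxv, quat_mul(q_hat, err_quat(dtheta))

def boxminus(xv, q, xv_hat, q_hat):
    # error state between a true state and the estimate
    dq = quat_mul(quat_conj(q_hat), q)
    dq = dq * np.sign(dq[0])                  # pick the shortest rotation
    a = np.linalg.norm(dq[1:])
    dtheta = 2 * dq[1:] if a < 1e-12 else 2 * np.arctan2(a, dq[0]) * dq[1:] / a
    return xv - xv_hat, dtheta

# round trip: boxminus recovers the error injected by boxplus
xv_hat = np.array([0.5, -0.5, 2.0])
q_hat = err_quat(np.array([0.4, -0.3, 0.2]))
dxv = np.array([1.0, 2.0, -0.5])
dtheta = np.array([0.01, -0.02, 0.03])
xv, q = boxplus(xv_hat, q_hat, dxv, dtheta)
rxv, rdtheta = boxminus(xv, q, xv_hat, q_hat)
```

The round trip confirms that the error state lives in a vector space: the quaternion discrepancy maps to a three-vector and back without loss for small rotations.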
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Propagation</head><p>Consider the continuous-time system</p><p>where &#965; &#8764; N (0, Q u ) and &#951; &#8764; N (0, Q x ) are zero-mean Gaussian random variables. More specifically, we assume that &#951; and &#965; are uncorrelated,</p><p>and that the input and process noise are not correlated in time</p><p>where &#948;( t) is the Dirac delta function.</p><p>In the propagation step, the filter propagates the state estimates forward according to the nominal propagation dynamics as</p><p>When x is part of a vector space, the Jacobians needed for propagating the covariance are obtained from the first-order Taylor series expansion of the error-state dynamics as</p><p>When the state x includes quaternion terms, however, the error-state dynamics cannot be formed by simple subtraction as in (51a). In fact, as described in Section 3.6, the error state &#948;x is of lower dimension than x. Noting that x is a function of x and &#948;x according to (44), the dynamics of the error state can be expressed generally as a single function</p><p>where only &#948;x, &#965;, and &#951; are stochastic variables. 2  The function f then handles the quaternion portions of the state appropriately. We show in Appendix B.2 that</p><p>x, u) = 0. 
The Jacobians for the covariance propagation are computed from the first-order Taylor series expansion of ( <ref type="formula">52</ref>) about</p><p>where ( <ref type="formula">53</ref>) has the form of (51b), but the Jacobian terms differentiate the error-state dynamics with respect to the error state and input noise, rather than with respect to the state and input.</p><p>The error-state covariance propagation is given by differentiating (46) with respect to time and utilizing the linearized error dynamics from (53) as</p><p>To simplify terms, we solve the differential equation in ( <ref type="formula">53</ref>) with initial conditions δx_0 to obtain</p><p>Using ( <ref type="formula">55</ref>) and the properties ( <ref type="formula">48</ref>) and ( <ref type="formula">49</ref>), we see that</p><p>where the factor of 1/2 arises because the bounds of integration capture only half of the area under the delta function. Similarly,</p><p>Because Q_u and Q_x are symmetric, combining ( <ref type="formula">54</ref>), (56), and (57), we have that P evolves between measurements as</p><p>In summary, during the propagation step, x̂ is propagated forward using (50) and the error covariance is propagated forward using (58). In addition, because the evolution of δx is given by ( <ref type="formula">53</ref>) and E[δx](0) = 0, we have that E[δx](t) = 0 over the propagation window.</p></div>
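Between measurements, the covariance evolves according to a Lyapunov-type differential equation of the form Ṗ = F P + P Fᵀ + B Q_u Bᵀ + Q_x (compare (58)). A minimal sketch integrating it with explicit Euler steps on a toy constant-velocity system; the system matrices are illustrative, not the article's.

```python
import numpy as np

def propagate_cov(P, F, B, Qu, Qx, dt):
    # Pdot = F P + P F^T + B Qu B^T + Qx, one explicit Euler step
    Pdot = F @ P + P @ F.T + B @ Qu @ B.T + Qx
    P = P + Pdot * dt
    return 0.5 * (P + P.T)          # re-symmetrize against round-off

# toy constant-velocity system: state [position, velocity],
# white-noise acceleration entering through B
F = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Qu = np.array([[0.1]])              # input-noise covariance
Qx = 1e-4 * np.eye(2)               # additive process noise
P = np.eye(2)
for _ in range(100):                # integrate over 1 s with dt = 0.01
    P = propagate_cov(P, F, B, Qu, Qx, 0.01)
```

The explicit re-symmetrization mirrors the care the article takes later with the Joseph-form update: covariance matrices drift from symmetry under repeated floating-point operations.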
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Measurement update</head><p>For the update step, consider the measurement</p><p>where &#951; z &#8764; N (0, R) represents measurement noise. This measurement noise is usually additive if the measurement is a vector quantity, but if the measured quantity is a quaternion the noise is applied through quaternion multiplication.</p><p>The residual r is the discrepancy between the true measurement and the predicted measurement &#7825; = h(x, 0) . Conventionally, a vector-space measurement is assumed, such that (59) is simplified as</p><p>In this case, the residual is found by subtraction,</p><p>and is modeled as</p><p>Jacobians are then computed using the first-order Taylorseries expansion as</p><p>These measurement and residual models share similar shortcomings with the conventional error-state dynamics presented in (51), namely they do not hold for quaternion measurements and (60) assumes the state x is of the same dimensionality as &#948;x.</p><p>These issues can be addressed by expressing the residual model as a single function, paralleling the approach taken for the propagation step in Section 4.1. Again noting that x is a function of x and &#948;x, the residual is modeled as a function r = h(&#948;x, &#951; z , x) , where only the first two variables are stochastic. For measurements z v of vector values, the residual value is computed by subtraction as</p><p>and is modeled as</p><p>where x can be rewritten in terms of &#948;x and x according to (29) and by rearranging (44a). 
For measurements z_q of quaternion values, the residual value is computed as the three-vector minimal representation of the error between the observed and expected quaternions according to (30) as</p><p>and is modeled as</p><p>where again x can be rewritten in terms of δx and x̂ according to (29) and (44a).</p><p>It follows from ( <ref type="formula">62</ref>) and ( <ref type="formula">64</ref>) that the residual models have the property h̄( E[δx], E[η_z], x̂) = 0. The measurement models in this article are chosen so that the noise is additive in the residual space, implying that ∂r/∂η_z = I. The measurement Jacobians are computed using the first-order Taylor-series expansion of</p><p>where</p><p>and</p><p>Assuming the measurement noise, error state, and input noise are uncorrelated, the residual uncertainty is</p><p>The Kalman gain uses the residual and state uncertainty to determine the extent to which the residual should be trusted and applied. Using the residual covariance, the Kalman gain is</p><p>In fusing the information provided by the measurement, the a posteriori estimate of the error state, denoted with a superscript +, is</p><p>With the additional information provided by the measurement update, the error state is no longer zero mean, violating the property in (45). Let x_v and x_q again be vector and quaternion states within x, and let v and θ be the corresponding portions of the Kalman update Kr. To ensure that the error state remains zero mean, the Kalman update Kr is used to adjust x̂ as</p><p>where ( <ref type="formula">67a</ref>) and ( <ref type="formula">67b</ref>) are specific to the quaternion error-state definition used.</p><p>Finally, the covariance is updated conventionally as</p><p>In practice, we use the Joseph-form Kalman update,</p><p>because it improves numerical stability and ensures that the covariance matrix remains symmetric <ref type="bibr">(Bar-Shalom et al., 2002)</ref>. 
In summary, during the update step the measurement residual (61) or (63) provides additional information, causing a non-zero E[δx]. Using the Kalman gain (66), x̂ is updated according to (67) to ensure that E[δx] remains zero mean. The error-state covariance is updated according to (68).</p></div>
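A minimal sketch of the update step: the residual covariance, the Kalman gain of (66), the error-state correction Kr, and the Joseph-form covariance update. The matrices are toy values for illustration only.

```python
import numpy as np

def joseph_update(P, H, R, r):
    S = H @ P @ H.T + R                  # residual covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain, cf. (66)
    dx = K @ r                           # a posteriori error-state estimate Kr
    A = np.eye(P.shape[0]) - K @ H
    P_new = A @ P @ A.T + K @ R @ K.T    # Joseph form: symmetric by construction
    return dx, K, P_new

# toy 2-state example with a direct measurement of the first state
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
dx, K, P_new = joseph_update(P, H, R, np.array([0.3]))
```

For the optimal gain, the Joseph form agrees with the conventional (I − KH)P update, but it remains symmetric and positive semidefinite even when K is perturbed by round-off, which is why the article prefers it in practice.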
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Delayed out-of-order measurements</head><p>The Kalman filter assumes the state evolves according to a Markov process. As such, state estimates for all previous time steps are marginalized. This makes it difficult to compute a residual for delayed or out-of-order measurements when they finally arrive. In practice, delayed measurements are common. For example, visual odometry algorithms may require hundreds of milliseconds to perform the computer vision operations necessary to compute a measurement. Ideally this measurement is used to constrain the state of the vehicle when the image was taken, not to constrain the state of the vehicle when the measurement arrives.</p><p>To address delayed measurements, we use priority queues to save the last T seconds of inputs, measurements, states, and error-state covariances ordered by time. If a delayed measurement arrives with a time stamp more than T seconds old, we discard the measurement. Otherwise, we discard all saved states and error-state covariances that have a time stamp later than the time stamp of the incoming delayed measurement. At this point, we are left with the state and covariance estimate at the instant the delayed measurement should have arrived. We apply the measurement normally, and then use the input and measurement queues to re-propagate the MEKF to the current time instance. This approach provides the same state estimate as if all measurements had arrived at the correct time. Handling delayed messages in this way may not be practical for all processors. Similar methods are described by <ref type="bibr">Bopardikar et al. (2013)</ref> and <ref type="bibr">Shen et al. (2014)</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Relative navigation</head><p>Section 4 provided a general overview of the indirect MEKF. This section describes how the MEKF is adapted to the relative navigation framework.</p><p>Conventional filtering approaches directly estimate the vehicle's global state with respect to some inertially fixed origin, such as the GPS origin or the vehicle's starting location; however, when only relative position measurements such as those obtained from visual odometry or laser scan matching are available, the vehicle's global position and heading are unobservable <ref type="bibr">(Jones et al., 2007;</ref><ref type="bibr">Martinelli, 2012;</ref><ref type="bibr">Weiss et al., 2012)</ref>. Over time, directly estimating these unobservable states leads to inconsistent and unbounded state uncertainties, which can degrade accuracy and cause irregular state jumps in the filter <ref type="bibr">(Bailey and Durrant-Whyte, 2006;</ref><ref type="bibr">Julier and Uhlmann, 2001;</ref><ref type="bibr">Kottas et al., 2013)</ref>. Methods for mitigating these issues have been proposed <ref type="bibr">(Bailey and Durrant-Whyte, 2006;</ref><ref type="bibr">Castellanos et al., 2007;</ref><ref type="bibr">Kottas et al., 2013)</ref>, but the core underlying issue of unobservable states can be avoided entirely by reformulating the problem in terms of relative states <ref type="bibr">(Wheeler et al., 2018)</ref>.</p><p>The relative navigation approach maintains observability of the filter states by estimating the pose of the vehicle with respect to a local coordinate frame referred to as the node frame. This node frame is positioned at zero altitude directly below the most recent odometry keyframe, but is gravity-aligned (i.e., the heading is aligned with the vehicle's heading when the keyframe was declared, but there is no pitch or roll). 
As a result, the odometry provides nearly direct measurements of the position and heading of the vehicle with respect to the current node frame, making those states observable by construction.</p><p>Owing to the way the node frame is defined, the roll and pitch components of the vehicle's attitude (φ, θ), as well as the vehicle's altitude p_z, are estimated as if they were defined with respect to a global origin. These states are not affected when transitioning from one node frame to another and so are, in effect, independent of the current node frame. On the other hand, the horizontal position and heading states (p_x, p_y, and ψ) define how the vehicle has moved since the last node frame, and are termed relative states. Each time a new keyframe is declared, a new node frame is also declared and the relative states (p_x, p_y, and ψ) are reset to zero. The covariances associated with the relative states are also reset to zero, because the vehicle is, by definition, at the location of the node frame and so there is no uncertainty in these states. This is illustrated in Figure <ref type="figure">4</ref>. The non-relative states (roll, pitch, altitude, body-fixed velocities, and body-fixed inertial measurement unit (IMU) biases) and their covariances are unchanged by the keyframe reset operation. Note that resetting the heading component of the state and covariance is non-trivial when attitude is parameterized with a quaternion. Prior to the reset, the vehicle's current relative pose estimate and covariance are passed to a back-end pose-graph map that concatenates the relative poses into an estimate of the vehicle's global path and current global pose <ref type="bibr">(Wheeler et al., 2018)</ref>.</p><p>We describe the keyframe reset operation mathematically as follows. Let n(x) define the keyframe reset operation. 
The estimated state is reset as</p><p>The error state after the reset, δx⁺, is the difference between x⁺ and x̂⁺ as defined by ( <ref type="formula">44</ref>). Again noting that x is a function of δx and x̂, we can express the error state after the reset as a single function</p><p>This allows the covariance to be updated as</p><p>where N is the Jacobian of the reset error-state function with respect to δx.</p><p>The details of the reset operation n(x) and the Jacobian N are presented in Section 6.3 for iRN and in Section 7.3 for bRN.</p><p>This formulation provides several advantages in terms of estimator performance. One of these advantages is that when the covariance associated with the relative states is reset to zero, uncertainty is in essence removed from the filter and delegated to the back-end map, which helps to maintain filter consistency <ref type="bibr">(Barfoot and Furgale, 2014)</ref>. As a result, the covariance in the filter also remains bounded. In addition, because the distances between keyframes are relatively small, the state error remains small, avoiding significant linearization errors that can cause inconsistency in a global estimator <ref type="bibr">(Castellanos et al., 2007)</ref>.</p><p>While reconstructing the global pose estimate requires implementing a back-end pose-graph map, this architecture has also been shown to improve consistency and accuracy of the final global pose estimate over global filter approaches, even in the absence of additional pose-graph constraints such as loop closures <ref type="bibr">(Wheeler et al., 2018)</ref>. It also has the advantage of avoiding large, potentially destabilizing state jumps when new global information becomes available. 
The requirement to implement this pose-graph map is also not a particularly onerous one, given that some global filter approaches already use a back-end batch-processed map to provide updates to their global filter <ref type="bibr">(Shen et al., 2014;</ref><ref type="bibr">Weiss and Siegwart, 2011)</ref>.</p><p>The various steps for implementing an RMEKF are summarized in Algorithm 1, along with references to the key equations (for example, the state is reset using (69) and the uncertainty using (70)). The specific implementation equations are derived in Section 6 for iRN and in Section 7 for body-fixed robocentric relative navigation (bRN). </p></div>
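The covariance portion of the keyframe reset amounts to P⁺ = N P Nᵀ as in (70). A minimal sketch with a hypothetical 5-dimensional error state ordered [p_x, p_y, p_z, ψ, v]; the diagonal N used here is a simplification that ignores the attitude coupling captured by the article's N_θ block.

```python
import numpy as np

# hypothetical 5-dimensional error state ordered [p_x, p_y, p_z, psi, v]
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
P = A @ A.T + 5.0 * np.eye(5)          # some positive-definite covariance

# reset Jacobian: zero the relative states (p_x, p_y, psi), keep the rest;
# this diagonal N ignores the attitude coupling of the true N_theta block
N = np.diag([0.0, 0.0, 1.0, 0.0, 1.0])
P_reset = N @ P @ N.T                  # P+ = N P N^T, as in (70)
```

After the reset, the rows and columns of the relative states are identically zero (the vehicle is, by definition, at the node frame), while the non-relative variances and their cross-correlations are untouched.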
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Inertial Relative Navigation (iRN)</head><p>iRN estimates the vehicle's position and attitude with respect to the current node frame. While the current node frame changes regularly, each is gravity-aligned and defined inertially. For this reason, typical UAS dynamics from the GPS/INS literature are applicable. Section 6.1 outlines the input, state, and dynamics for the system, including the error-state dynamics. Section 6.2 defines the measurement models and Section 6.3 outlines the keyframe reset step. In general, the derivations in this section improve upon the mathematical rigor of the derivations in prior work <ref type="bibr">(Leishman and McLain, 2014)</ref> and provide some corrections. New contributions that extend that work are noted in their respective sections.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">State dynamics</head><p>This section derives the state propagation model used in the filter. Section 6.1.1 defines the state vector. Section 6.1.2 discusses how measurements from an IMU are incorporated into the propagation model, and Section 6.1.3 defines the state propagation model and derives the associated Jacobians.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.1.">State vector.</head><p>Vectors use a forward-right-down coordinate frame, with axes labeled x, y, and z. In this article, the position vector c p b a denotes the position of frame b with respect to frame a, expressed in frame c. Unless otherwise noted, position vectors are expressed in the originating frame, that is, frames a and c are the same. When this is the case, the prescript is usually omitted for brevity, so that p b a a p b a . However, the prescript is occasionally included for clarity.</p><p>iRN estimates the position and attitude of the vehicle's body frame b with respect to and expressed in the current node frame n, denoted by the pose ( p b n , q b n ). Let frame k represent the vehicle's body frame at the instant in time that the current keyframe image was taken. The estimator tracks the pose of k with respect to and expressed in n, denoted as ( p k n , q k n ), which is an extension to prior work (Leishman and McLain, 2014). As illustrated in Figure <ref type="figure">2</ref>, the keyframe states ( p k n , q k n ) include the altitude, roll, and pitch of the vehicle at the moment a keyframe is declared. As the vehicle state is not a vector when it contains quaternion elements, we define the state as the tuple I define the true body-fixed angular rates and accelerations as measured by an ideal IMU, respectively. 
For brevity, we omit specifying the frames and write ω and a throughout the article.</p><p>An IMU provides measurements ω̃ and ã that are corrupted by unknown biases β_ω and β_a and zero-mean Gaussian noise processes υ_ω and υ_a, such that the measured values can be modeled as</p><p>Recognizing that the actual measured values ω̃ and ã are fixed at a given timestep and that the noise terms are considered to be uncorrelated in time, the unknown true values are modeled as random variables as</p><p>where ω, a, β_ω, β_a, υ_ω, and υ_a are random variables and ω̃ and ã are considered as constants. The nominal value of the angular rate used for propagation is then the expected value of the true angular rate</p><p>and the nominal value of the acceleration used for propagation is the expected value of the true acceleration</p><p>IMU data can be treated either as a system input or as a measurement, in a trade-off based on IMU quality and model accuracy. When a reliable vehicle model is available, the controller's output (e.g., motor commands) can be used as an input to the observer to propagate the state forward. In this case, IMU data are incorporated as measurement updates providing feedback. This approach leverages the most information, but requires careful characterization of the vehicle's dynamics. Another common approach, known as mechanization, treats the IMU measurements directly as inputs to the filter dynamics, which replaces the vehicle-specific dynamics with kinematic equations. This simplifies the propagation dynamics and eliminates sensitivity to modeling errors, but does not use any information about how the vehicle behaves.</p><p>For the filter design in this article, the angular velocity ω̃ measured by the rate gyros is treated as an input to the propagation equations. Following <ref type="bibr">Leishman et al. 
(2014a)</ref>, the z component of the accelerometer measurement is also treated as an input, while the x and y components are used as measurement updates. The following paragraphs explain the derivation and justification for this approach.</p><p>Using Newton's second law, the velocity dynamics can be modeled as</p><p>where m is the mass of the vehicle and ^B F denotes the forces acting on the vehicle expressed in the body frame. As illustrated in Figure <ref type="figure">6</ref>, the principal forces that act on a multirotor are gravity ^B F_G = m R( q^b_n ) g, thrust ^B F_T = T k, and the simplified drag force ^B F_D = −μ_0 v, where k ≜ ( 0 0 1 )^T, T is the total rotor thrust, μ_0 is the nominal drag coefficient, and g = g k with g being the standard acceleration due to gravity. Substituting these forces into (75) yields</p><p>While technically correct, the dynamics in ( <ref type="formula">76</ref>) are challenging to use in practice because the thrust T is difficult to model and so is generally unknown. This difficulty is addressed through mechanization by utilizing the accelerometer measurements directly in the propagation model. As explained by <ref type="bibr">Leishman et al. (2014a)</ref>, accelerometers measure the specific force (not including gravity) expressed in the body frame, so that</p><p>Substituting ( <ref type="formula">77</ref>) into ( <ref type="formula">76</ref>) yields the common mechanization dynamics</p><p>Equation ( <ref type="formula">78</ref>) eliminates the need for modeling complicated vehicle dynamics, but ignores information about the dynamics that might improve estimator performance by building up cross-correlation terms in the covariance matrix P.</p><p>In practice, we have found the most success using a combination of ( <ref type="formula">76</ref>) and (78). 
Noting that the unknown thrust term in (76) appears only in the z component, a hybrid propagation model is obtained by substituting only the z component of ( <ref type="formula">77</ref>) into (76) to obtain</p><p>where a_z = k^T a, μ ≜ μ_0/m is the specific drag coefficient, and Π_k ≜ I − k k^T is a projection matrix that projects onto the plane normal to the k-axis.</p><p>As a result of this hybrid approach, we consider the gyroscope and z-axis accelerometer measurements as system inputs, while using the horizontal accelerometer measurements as feedback in the update step. The system input and input noise are therefore defined as</p><p>where ã_z = k^T ã and υ_{a_z} = k^T υ_a.</p><p>6.1.3. Propagation model. The system dynamics are modeled as</p><p>where ω is given by (72a), a_z is given by the z component of (72b), and η_v, η_{β_ω}, and η_{β_a} are zero-mean Gaussian noise processes for the corresponding states. The state p^b_n is propagated according to a standard kinematic model, q^b_n is propagated according to (26) and the discussion in Section 3.4, and v is propagated according to (79). The dynamics of the bias states β_ω and β_a are modeled as random walks, while the keyframe states p^k_n and q^k_n represent the relative pose of static coordinate frames and therefore do not change. The drag term μ is constant and so has zero dynamics, but uncertainty in this state is accounted for by assigning it a non-zero initial covariance.</p><p>The state estimate is propagated by substituting the expected values of the state, input, and process noise terms into (80) as</p><p>where ω̂ is given by ( <ref type="formula">73</ref>) and â_z is given by the z component of ( <ref type="formula">74</ref>). The error-state dynamics are found by relating ( <ref type="formula">80</ref>) and (81) using the error-state definition (44). 
The first-order approximation of the error-state dynamics is</p><p>and is derived in Appendix C. Differentiating the error-state dynamics with respect to the error state and input noise yields the following propagation Jacobians:</p></div>
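Analytic error-state Jacobians such as those above are easy to get wrong, and a standard sanity check is to compare them against central finite differences of the error dynamics about δx = 0. The sketch below uses a toy pendulum system standing in for the full dynamics; all names are illustrative.

```python
import numpy as np

def numeric_jacobian(fbar, n, eps=1e-6):
    # central finite differences of the error dynamics about dx = 0
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (fbar(e) - fbar(-e)) / (2.0 * eps)
    return J

# toy pendulum dynamics standing in for the full model; for a pure
# vector state the error dynamics reduce to f(x_hat + dx) - f(x_hat)
x_hat = np.array([0.3, -0.1])

def f(x):
    return np.array([x[1], -np.sin(x[0])])

def fbar(dx):
    return f(x_hat + dx) - f(x_hat)

F_numeric = numeric_jacobian(fbar, 2)
F_analytic = np.array([[0.0, 1.0],
                       [-np.cos(x_hat[0]), 0.0]])
```

For quaternion states, the same check applies once perturbations are injected through the error-state retraction rather than plain addition.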
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Measurement models</head><p>The accelerometer, altimeter, and visual odometry or laser scan matching algorithm provide measurements to constrain state estimates. For each sensor, the measurement model, residual model, and residual Jacobians are defined.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.1.">Accelerometer.</head><p>Because the z portion of the accelerometer measurement is used as an input to the propagation, the update step uses only the x and y components of &#227;, such that z acc = I 2&#215;3 &#227;.</p><p>The accelerometer measurement model can be derived from (71a) and (77) as</p><p>where the thrust term T m k drops out because I 2&#215;3 T m k = 0. We can expand this model in terms of &#948;x and x according to (44) and drop second-order terms 3 to obtain h acc ( x, u+&#965;)=I 2&#215;3 -&#956;+&#948;&#956; v+&#948;v + &#946;a +&#948;&#946; a +&#965; a &#8776;I 2&#215;3 -&#956;v-&#956;&#948;v-&#948;&#956;v+ &#946;a +&#948;&#946; a +&#965; a .</p><p>From (82), the estimated measurement is</p><p>For a given acceleration measurement z acc , the residual is r acc = z acch acc ( x, u) , and is modeled as</p><p>The measurement Jacobian is therefore</p><p>The measurement noise is the x and y components of the accelerometer noise,</p><p>6.2.2. Altimeter. The altimeter model is for an ultrasonic range finder, which reports the nearest return in its conical field of view. As a result, the sensor reports height above ground regardless of the current attitude of the vehicle, as long as roll and pitch angles are moderate. The measurement model and its estimate are</p><p>For a given altimeter measurement z alt , the residual is</p><p>which is modeled as</p><p>resulting in the measurement Jacobian H alt = -k T 0 0 0 0 0 0 0 .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.3.">Visual odometry translation.</head><p>Incorporating measurements from visual odometry algorithms is somewhat more involved than the previous measurement models. <ref type="bibr">Figure 7</ref> outlines the relationship between the visual odometry output and the state. The visual odometry output ( p c kc , q c kc ) relates the current camera frame to the keyframe camera frame. The transform ( p c b , q c b ) express the pose of the camera frame with respect to the vehicle's center of mass. Commonly the camera frame used in visual odometry algorithms is derived from the image plane such that the camera frame's z-axis references forward motion (depth). In this case, q c b encodes the mapping from the body frame's forward-right-down coordinate frame to the coordinate frame used by the camera. The transform ( p c b , q c b ) can be found through offline calibration or online as described by kc , q c kc ) is the output of the visual odometry algorithm. The transform ( p c b , q c b ) defines the pose of the camera frame with respect to the vehicle's center of mass, and is assumed to be fixed and known. The frame k represents the pose of the body at the time of the keyframe, and the frame kc represents the pose of the camera at that time. As a result, p kc k = p c b and q kc k = q c b . <ref type="bibr">Leishman and McLain (2014)</ref>. In this article, we assume that the camera is rigidly mounted to the body and ( p c b , q c b ) is static and known. In addition, because ( p kc k , q kc k ) also represents the transform between the body and camera, but at the time of the keyframe, p kc k = p c b and q kc k = q c b . Figure <ref type="figure">7</ref> can be used to informally understand how defining a relative state improves observability. The pose ( p k n , q k n ) encodes the roll, pitch, and altitude of the vehicle when the keyframe is declared, all of which are observable using an altimeter and IMU. 
This fact, in connection with the assumption that ( p^c_b , q^c_b ) is known, ensures that the measurements ( p^c_kc , q^c_kc ) constrain the vehicle's current pose ( p^b_n , q^b_n ). Using Figure <ref type="figure">7</ref>, the relative translation measurement is modeled as</p><p>Dropping prescripts and recalling that p^kc_k = p^c_b and q^kc_k = q^c_b, this becomes</p><p>We expand this model according to ( <ref type="formula">44</ref>) and (31a), then use ( <ref type="formula">33</ref>) and ( <ref type="formula">34</ref>) to obtain</p><p>The measurement Jacobian is</p><p>The derivation of this Jacobian is a new contribution compared with prior work <ref type="bibr">(Leishman and McLain, 2014)</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.5.">Laser scan matcher.</head><p>The laser scan matcher measurement model is a novel contribution of this work. Laser scan matching algorithms, such as the iterative closest point (ICP) algorithm <ref type="bibr">(Censi, 2008)</ref>, compute the planar translation and heading change of a laser scanner between the keyframe and current scans. Because the out-of-plane motion of UAVs (roll and pitch) introduces distortions that are not modeled by the scan-matching algorithm, in practice laser scans are commonly tilt-compensated before they are passed to the scan matcher <ref type="bibr">(Shen et al., 2011;</ref><ref type="bibr">Tomic et al., 2012)</ref>. Tilt compensation orthorectifies the laser scan by projecting it onto a level horizontal plane, using the 2.5D world assumption that all scanned surfaces are vertically uniform. Scan returns from the floor or ceiling are also often removed during this process. The 2.5D assumption is commonly violated in the real world, but in urban environments this approach has been demonstrated to provide acceptable performance <ref type="bibr">(Shen et al., 2011)</ref>.</p><p>To perform tilt compensation on a laser scan, we first compute the quaternion q o c that rotates points expressed in the rolled and pitched laser frame c (including any roll or pitch in the body-to-sensor transform) to the orthorectified frame o as</p><p>where q &#952; b n and q &#966; b n are computed from q b n using Equations ( <ref type="formula">37</ref>) and ( <ref type="formula">39</ref>). 
Then, for a laser scan S = {(r, θ)} represented as a set of range/bearing tuples (r, θ), the tilt-compensated scan is computed as</p><p>For the orthorectification of the keyframe scan, we note that because q^k_n = q_θ ⊗ q_φ (as shown in Section 6.3) and q^kc_k = q^c_b, the rotation q^o_c is equivalent to</p><p>The associated rotation matrix is given by</p><p>In the following discussion, we derive the translation and heading portions of the measurement model independently, but they can be combined into a single update by stacking the two measurement model vectors and Jacobian matrices.</p><p>The translation measurement model is constructed by expressing the visual odometry measurement model of Equation ( <ref type="formula">84</ref>) in the orthorectified frame associated with the keyframe, and then extracting the x and y components:</p><p>We expand using ( <ref type="formula">44</ref>), (31a), and (34), then drop second-order terms to obtain</p><p>The estimated measurement model is</p><p>and the residual is</p><p>which is modeled as</p><p>The measurement Jacobian is</p><p>The derivation of the heading portion of the measurement model follows a similar strategy. Recalling from Section 3.2 that the quaternion conjugation operation can be used to express the vector portion of a quaternion in another coordinate frame, we construct the heading measurement model by first rotating the visual odometry rotation model from Equation ( <ref type="formula">85</ref>) into the orthorectified frame:</p><p>The heading measurement model is then the yaw component of q_lr: h_lr(x) = ψ(q_lr) + η_lr</p><p>The estimated measurement model is</p><p>where q̂_lr = q̂^b_n ⊗ ( q̂^k_n )^{−1}. The residual is given by r_lr = z_lr − ĥ_lr, which is modeled as</p><p>as in Appendix A. We can then compute the Jacobian of the residual using the chain rule as</p><p>where N_lr is given by Equation ( <ref type="formula">100</ref>) in Appendix A. 
We compute &#948;&#952; lr using (30a), ( <ref type="formula">29a</ref>), ( <ref type="formula">40</ref>), and (41) as</p><p>which we take the partial derivatives of and substitute into (86) to obtain</p></div>
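To make the tilt-compensation step concrete, the following sketch orthorectifies a planar scan under the 2.5D world assumption. It is illustrative only: the function name and interface are ours, the rotation used is R_y(pitch) R_x(roll) from a 3-2-1 convention, and the roll/pitch of the body-to-sensor transform (which the paper's q rotation also absorbs) is omitted.

```python
import numpy as np

def tilt_compensate(scan, roll, pitch):
    """Orthorectify a planar laser scan under the 2.5D world assumption.

    scan -- iterable of (range, bearing) tuples in the tilted laser frame
    roll, pitch -- vehicle attitude in radians (yaw is irrelevant here)
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    # Rotation that levels the scan plane: R_y(pitch) @ R_x(roll)
    R = np.array([[cp,  sp*sr,  sp*cr],
                  [0.0,    cr,    -sr],
                  [-sp, cp*sr,  cp*cr]])
    out = []
    for r, theta in scan:
        # Scan point in the tilted scan plane
        p = R @ np.array([r*np.cos(theta), r*np.sin(theta), 0.0])
        # Project onto the level plane and recover range/bearing
        out.append((float(np.hypot(p[0], p[1])),
                    float(np.arctan2(p[1], p[0]))))
    return out
```

With zero roll and pitch the scan is returned unchanged; a pitched scan has its in-plane ranges foreshortened by the projection, as expected.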
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.3.">Keyframe reset</head><p>When a new keyframe is established, the relative part of the state is reset, as described in Section 5. As shown in </p><p>As such, the error state reset is</p><p>The other vector portions of the state, including v, &#946; &#969; , &#946; a , and &#956;, do not change.</p><p>Resetting the yaw portion of the attitude states is slightly more complicated. Each new node frame is established such that the vehicle's yaw is identically zero. Setting &#968; = 0 in (39), we see that</p><p>where &#966; and &#952; are computed from q b n using (37). The covariance update for the attitude error states is governed by</p><p>which is derived in Appendix E. When roll and pitch are approximately zero, we note that</p><p>In summary, the keyframe reset requires updating the state estimate according to (87) and ( <ref type="formula">89</ref>) and updating the covariance according to (70), where the keyframe reset Jacobian is the block-diagonal matrix</p><p>diag( kk T , N &#952; , I, I, I, kk T , N &#952; , 1 )</p><p>in which the kk T blocks zero the horizontal position components while retaining altitude, the N &#952; blocks reset the yaw portion of the attitude error, and the identity blocks leave the remaining states unchanged.</p></div>
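As a concrete sketch of the covariance portion of the keyframe reset, the following assembles a block-diagonal reset Jacobian and applies it as P&#8314; = J P J&#7488;. The 22-state block ordering and the placement of the kk&#7488; and N_theta blocks are our assumptions for illustration, not the paper's exact layout.

```python
import numpy as np

def block_diag(*blocks):
    """Assemble a block-diagonal matrix from square blocks."""
    n = sum(b.shape[0] for b in blocks)
    J = np.zeros((n, n))
    i = 0
    for b in blocks:
        m = b.shape[0]
        J[i:i+m, i:i+m] = b
        i += m
    return J

def keyframe_reset_jacobian(N_theta, N_theta_k):
    """Block-diagonal reset Jacobian for a 22-dimensional error state.
    kk^T keeps only the altitude component of each position block,
    N_theta removes yaw uncertainty, and identity blocks leave the
    velocity and bias states untouched (assumed ordering)."""
    k = np.array([[0.0], [0.0], [1.0]])   # vertical unit vector e3
    kkT = k @ k.T
    I3 = np.eye(3)
    return block_diag(kkT, N_theta, I3, I3, I3, kkT, N_theta_k, np.eye(1))

# Covariance portion of the reset: P_plus = J @ P @ J.T
```

Applying J to the covariance zeroes the rows and columns of the horizontal position states while preserving altitude uncertainty.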
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Body-fixed Relative Navigation (bRN)</head><p>While defining the vehicle's pose with respect to an inertially defined origin uses more conventional dynamic equations and results in more intuitive state estimates, an increasing number of estimators use body-fixed or robocentric state definitions. In the body-fixed case, the position and attitude of an inertially fixed origin are estimated with respect to the vehicle's current pose. Body-fixed state definitions have been shown to improve filter consistency <ref type="bibr">(Castellanos et al., 2007)</ref> and facilitate local guidance and control algorithms <ref type="bibr">(Owen and Montano, 2006;</ref><ref type="bibr">Yu and Beard, 2013)</ref>. For example, the obstacle avoidance and visual-servoing problems require a vehicle to make navigation decisions after estimating the pose of objects with respect to its current pose. When an inertial representation of the state is desired, the body-fixed state and its covariance can be readily transformed into an inertial frame as described in Appendix G.</p><p>The relative estimator in the relative navigation architecture can be formulated using a body-fixed state definition. In this case, bRN estimates the state of the node frame with respect to the current body. As before, when a new keyframe is declared, the horizontal position and heading states are reset. The principal difference is that body-fixed dynamics are used and the attitude error is defined differently.</p><p>Unless otherwise specified, the equations and definitions in Section 6 are also assumed for bRN. This section follows the same outline as Section 6, first describing the input, state, and dynamics for the system in Section 7.1, and then defining the measurement models and keyframe reset in Sections 7.2 and 7.3, respectively. The derivations in this section represent a novel contribution not contained in prior work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.1.">State dynamics</head><p>bRN estimates the pose of the node frame n with respect to and expressed in the current body frame b, denoted as ( b p n b , q n b ). The states b p n b and n p b n represent the same displacement, but are pointed in opposite directions and are expressed in different frames. Specifically,</p><p>The states q n b and q b n are inverses of each other:</p><p>The estimator also tracks the pose of frame n with respect to and expressed in frame k, denoted as ( k p n k , q n k ) and shown in Figure <ref type="figure">8</ref>. For bRN, the vehicle's state is the tuple</p><p>while the input remains unchanged from Section 6.1. The state dynamics are</p><p>where derivations for the position and attitude dynamics are found in Appendix D, and where &#969; is given by (72a) and a z is given by the z component of (72b). Note that (92b) only differs from (80c) by a single transpose.</p><p>For bRN, the error-state vector &#948;x &#8712; R 22 is defined as</p><p>The first-order approximation of the error-state dynamics, also derived in Appendix D, is</p><p>Differentiating the error-state dynamics with respect to the error state and input noise results in the following propagation Jacobians:</p><p>where &#969; is defined in (73).</p></div>
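The relationship between the two position and attitude representations described above amounts to a pose inversion: the same displacement, pointed in the opposite direction and expressed in the other frame. A minimal sketch, assuming Hamilton [w, x, y, z] quaternions where q rotates B-frame vectors into A (the helper names are ours):

```python
import numpy as np

def quat_to_R(q):
    """Hamilton quaternion [w, x, y, z] to rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def invert_pose(p, q):
    """Invert a pose: given the position of frame B w.r.t. frame A
    expressed in A, and the quaternion rotating B-frame vectors into A,
    return the pose of A w.r.t. B expressed in B."""
    R = quat_to_R(np.asarray(q, dtype=float))
    q_inv = np.array([q[0], -q[1], -q[2], -q[3]], dtype=float)
    return -R.T @ np.asarray(p, dtype=float), q_inv
```

Inverting twice recovers the original pose, which makes the round trip between the body-fixed and node-centered representations cheap to compute.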
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2.">Measurement models</head><p>The measurement models differ when using a body-fixed parameterization as outlined in the following.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2.1.">Accelerometer.</head><p>Because the accelerometer model is independent of attitude and position, the model remains unchanged for bRN. See Section 6.2.1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2.2.">Altimeter.</head><p>As described in Section 6.2.2, the sonar altimeter measures height above ground. By relating ( <ref type="formula">90</ref>) and ( <ref type="formula">83</ref>), the vehicle's altitude is</p><p>where the rotation expresses the height in an inertial frame.</p><p>The estimated measurement model is </p><p>For a given altimeter measurement z alt , the residual is</p><p>which is approximated using (31b), (44a), (33), and (4) as</p><p>resulting in the measurement Jacobian</p><p>p n b , 0, 0, 0, 0, 0, 0 .</p></div>
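The altimeter residual and its Jacobian feed a standard scalar EKF update. The following is a generic sketch of that step (not the paper's exact implementation), with P the error-state covariance, H the 1&#215;N measurement Jacobian, and R_noise the measurement variance:

```python
import numpy as np

def scalar_measurement_update(P, H, z, z_hat, R_noise):
    """Generic scalar EKF measurement update.

    P -- NxN error-state covariance
    H -- 1xN measurement Jacobian
    z, z_hat -- measurement and predicted measurement
    R_noise -- scalar measurement variance
    """
    r = z - z_hat                            # residual
    S = (H @ P @ H.T + R_noise).item()       # innovation variance
    K = P @ H.T / S                          # Kalman gain, Nx1
    dx = (K * r).ravel()                     # error-state correction
    P_new = (np.eye(P.shape[0]) - K @ H) @ P
    return dx, P_new
```

The correction dx is then applied to the vector states additively and to the quaternion states multiplicatively, as in any MEKF.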
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2.3.">Visual odometry translation.</head><p>Figure <ref type="figure">8</ref> presents the relative position and attitude measurement model for bRN. Note that the directions of ( p n b , q n b ), ( p n k , q n k ), and ( p kc c , q kc c ) are reversed when compared with Figure <ref type="figure">7</ref>. A relative measurement can generally be expressed in either direction. Defining the measurement to point in the same direction as the state estimate avoids unnecessary coupling of the heading estimate into the translation measurement model, which in practice has been shown to improve performance.</p><p>From Figure <ref type="figure">8</ref>, the relative position update is</p><p>Dropping prescripts and recalling that p kc k = p c b and q kc k = q c b , this becomes</p><p>Expanding according to (44), (31b), (33), and (34) gives</p><p>Expanding and removing higher-order terms,</p><p>The estimated measurement model is</p><p>We then model the residual according to (62) as</p><p>resulting in the residual Jacobian</p><p>where</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2.4.">Visual odometry rotation.</head><p>The relative rotation measurement model also follows from Figure <ref type="figure">8</ref>. Specifically,</p><p>The estimated measurement model is</p><p>For a given relative attitude measurement z vor , the residual is</p><p>which is modeled and then simplified using ( <ref type="formula">41</ref>) and ( <ref type="formula">42</ref>) as</p><p>resulting in the measurement Jacobian</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2.5.">Laser scan matcher.</head><p>As with the iRN case, laser scans are orthorectified before being passed to the scan matcher. The quaternion q o c that describes the rotation from the laser frame c to the orthorectified frame o is given by</p><p>where q b n =(q n b ) -1 , q &#952; b n and q &#966; b n are computed from q b n using Equation (37), and q o b is defined as</p><p>The orthorectified laser scan S is then computed from the original laser scan S = {(r, &#952; )} as</p><p>We note that the rotation matrix associated with q o c can be written as</p><p>The translation portion of the measurement is modeled as the x and y components of the visual odometry translation model in Section 7.2.3, after it has been rotated into the orthorectified frame:</p><p>The estimated measurement model is</p><p>The residual Jacobian is derived in Appendix F, and is given by</p><p>where</p><p>For the rotation portion of the measurement model, we first express the visual odometry rotation model in the orthorectified frame:</p><p>which is estimated as</p><p>The measurement model is then the yaw portion of q lr , h lr (x) = &#968;(q lr ) +&#951; lr , which is estimated as</p><p>The residual Jacobian is derived in Appendix F, and is given by</p><p>where</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.3.">Keyframe reset</head><p>The keyframe reset step for bRN is somewhat less intuitive owing to the body-centric representation of position and orientation states, but can be derived by following the iRN reset step. Following ( <ref type="formula">91</ref>) and ( <ref type="formula">88</ref>),</p><p>, where &#966; and &#952; are computed from q b n =( q n b ) -1 using (37). Note that even though bRN expresses attitude in the body frame, the angles &#966;, &#952; , and &#968; continue to represent conventional roll, pitch, and yaw Euler angles that express the orientation of a body with respect to an inertial frame. The Jacobian for the attitude reset is derived in Appendix E, and happens to be identical to that for the iRN attitude reset:</p><p>The derivation for the position reset is more involved, and is given in Appendix E. The resulting reset operation is</p><p>Because frames b and k are at the same location when the reset occurs, we also have</p><p>The total keyframe reset Jacobian, also derived in Appendix E, is given by </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Results</head><p>The RMEKF was implemented in C++ using the ROS <ref type="bibr">(Quigley et al., 2009)</ref> framework. The following sections present simulation and hardware results for the performance of the RMEKF on a multirotor vehicle. The simulation results in Section 8.1 illustrate the performance of the estimator under ideal conditions, where all noise is Gaussian with known covariance and all biases are known. The hardware results in Section 8.2 demonstrate the performance of the estimator in real-world conditions, where the Gaussian noise assumption is not necessarily met and where covariance and biases are unknown. The results in this section illustrate the typical accuracy of the relative state estimates produced by the RMEKF. Since estimating relative states is a unique approach, direct comparison with related global estimation techniques in the literature cannot readily be accomplished unless the estimator in this article is combined with a simple global back end. For those comparisons, the reader is referred to the companion work by <ref type="bibr">Wheeler et al. (2018)</ref>. The more extensive flight-test results of <ref type="bibr">Wheeler et al. (2020)</ref> additionally demonstrate the practical performance of the RMEKF for several extended flights of a multirotor platform, including closing control loops around the estimates.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.1.">Simulation results</head><p>The simulation provided accelerometer, gyro, altimeter, and visual odometry measurements corrupted by normally distributed noise. Slowly drifting biases were also added to the simulated IMU data. Sensor noise (&#965; &#969; , &#965; a , &#951; vot , &#951; vor , and &#951; alt ) was sampled from normal distributions with the following standard deviations that are typical of low-cost hardware sensors: &#963; &#969; = 0.13 rad/s, &#963; a = 1.15 m/s&#178; , &#963; vot = 0.02 m, &#963; vor = 0.01 rad, and &#963; alt = 0.01 m. New keyframes were established when the vehicle moved more than 0.2 m or yawed more than 20&#176;. The estimator was evaluated during various maneuvers ranging from mild to aggressive, where during the aggressive maneuvers the vehicle's speed exceeded 25 m/s and the bank angle exceeded 45&#176;.</p><p>Figures 9 and 10 show a 3-second snapshot of the performance of the iRN and bRN estimators at tracking the vehicle's pose, where the small time window was selected to make the relative state reset visible. During these 3 seconds, the vehicle was moving forward at nearly constant velocity while maintaining a nominal height above ground of 1.25 m, and while gradually slowing its clockwise yaw motion. The vertical gray lines indicate the time when a new keyframe is declared. While the state is defined using quaternions, Figure <ref type="figure">9</ref>(b) uses ( <ref type="formula">37</ref>) to plot roll, pitch, and yaw angles.</p><p>Figure <ref type="figure">9</ref> highlights several interesting practicalities of relative navigation. As discussed in Section 5, Figure <ref type="figure">9</ref> illustrates how the forward, right, and yaw states are reset to zero with each newly declared keyframe, while the altitude, roll, and pitch states remain continuous. 
It should be noted that while roll and pitch are continuous, discontinuities appear in each of the four quaternion states. Because the discontinuities in the relative states occur at known times, they are easily accounted for and so, in practice, do not cause problems with control stability. Figure <ref type="figure">9</ref> also illustrates that keyframes do not reset at fixed intervals, but rather reset based on how far the vehicle has traveled since the previous keyframe. Certain sensors, such as a laser scanner with a long range and wide field of view, facilitate longer distances between keyframes.</p><p>The RMEKF performed very similarly when using the body-fixed dynamics presented in Section 7. Figure <ref type="figure">10</ref> shows the performance of the bRN estimator for position states over the same window of time as Figure <ref type="figure">9</ref>. The attitude performance of bRN (not shown) is nearly identical to the attitude performance of iRN with the exception of a change in sign for the angles. Note in Figure <ref type="figure">10</ref> that each of the position states experiences slight discontinuities at the keyframe resets and that the position estimates do not reset to zero. This is because the keyframe reset step removes all horizontal translation from the state but continues to track the vehicle's height above ground. As the bRN position state is expressed with respect to the rolled and pitched body frame, some of the height above ground is mapped into the forward and right components. When the bRN state estimates are expressed with respect to the current node frame using ( <ref type="formula">90</ref>) and (91), they are nearly identical to the estimates shown in Figure <ref type="figure">9</ref>.</p></div>
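The simulation setup described above can be sketched as follows. The noise standard deviations and keyframe thresholds are those quoted in the text; the bias random-walk intensity sigma_walk and the function interfaces are our assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_imu(gyro_true, accel_true, bias_w, bias_a, dt,
                 sigma_w=0.13, sigma_a=1.15, sigma_walk=1e-4):
    """Corrupt true body rates and accelerations with Gaussian noise
    plus a slowly drifting random-walk bias (sigma_walk is assumed;
    the paper does not state a value)."""
    bias_w = bias_w + sigma_walk * np.sqrt(dt) * rng.standard_normal(3)
    bias_a = bias_a + sigma_walk * np.sqrt(dt) * rng.standard_normal(3)
    gyro = gyro_true + bias_w + sigma_w * rng.standard_normal(3)
    accel = accel_true + bias_a + sigma_a * rng.standard_normal(3)
    return gyro, accel, bias_w, bias_a

def should_declare_keyframe(rel_pos, rel_yaw,
                            d_thresh=0.2, yaw_thresh=np.deg2rad(20.0)):
    """Keyframe trigger used in the simulation: declare a new keyframe
    once the vehicle has moved more than 0.2 m or yawed more than 20
    degrees relative to the current keyframe."""
    return np.linalg.norm(rel_pos) > d_thresh or abs(rel_yaw) > yaw_thresh
```

The hardware experiments in Section 8.2 use the same trigger with a 0.5 m translation threshold.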
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.2.">Hardware results</head><p>The vehicle used for hardware results, a hexacopter in a Y6 configuration, is shown in Figure <ref type="figure">11</ref>. Attitude control is performed by a Naze32 autopilot running the ROSflight firmware. IMU data is streamed from the low-cost MEMS IMU on the autopilot (an InvenSense MPU-6050) at 500 Hz. Altimeter data is obtained from a MaxBotix LV-MaxSonar-EZ3 ultrasonic range finder. All estimation is performed in real time on the onboard computer with an Intel i7 processor. Visual odometry is also computed on the onboard computer at 15 Hz from a forward-facing Asus Xtion Pro Live RGB-D camera, using the depth-enhanced monocular odometry algorithm <ref type="bibr">(Zhang et al., 2014)</ref> modified to output relative measurements. New keyframes were established when the vehicle moved more than 0.5 m or yawed more than 20&#176;, although more sophisticated keyframe-selection algorithms could be implemented. A constant, typical value was chosen for the visual odometry measurement covariance, but future implementations could take advantage of algorithms that estimate the covariance from frame to frame (e.g., <ref type="bibr">Anderson et al., 2019)</ref>.</p><p>For the comparisons in this article, truth reference was provided by a motion capture system. The positioning accuracy of the motion capture system is on the order of 0.5 cm or better, and the attitude accuracy is on the order of 0.1&#176;. A calibration routine was used to register the vehicle body frame to the motion-capture frame. To compare the relative state estimates with the global motion-capture data, the reference frames were synchronized by applying the same position and orientation reset steps described in Sections 6.3 and 7.3 to the motion-capture data each time a visual-odometry keyframe reset occurred. 
While there are small timing delays and other error sources inherent in this approach, we believe this "truth" reference to be highly representative of the true motion and an appropriate reference source for system evaluation.</p><p>Figure <ref type="figure">12</ref> shows the estimator performance for the RMEKF using inertial dynamics (iRN). A typical 10-second window of the flight is shown to make the relative state reset visible. The estimates track the truth well. Again, note that the forward and right position estimates reset with each new keyframe; the RMS errors for the flight are summarized in Table <ref type="table">1</ref>.</p><p>The performance of the estimator using body-fixed dynamics (bRN) was very similar to the inertial dynamics. Plots are not shown for the bRN results, but the RMS errors for a comparable flight test are summarized in Table <ref type="table">1</ref>.</p><p>Looking more closely at Figures <ref type="figure">12(a</ref>) and (b), it appears as if the relative states do not reset all the way to zero. The reason for this is the delay in the visual odometry processing, and is illustrated in Figure <ref type="figure">13</ref>. On average, it takes approximately 115 ms from the time an image is captured (the dashed vertical line in Figure <ref type="figure">13</ref>) to the time the visual odometry message is published (the solid vertical line). However, the estimator uses the timestamp for when the image was captured when applying the odometry measurement, as described in Section 4.3. When a new keyframe image is captured, during the time window between the image capture and odometry message the estimator continues operating with respect to the previous keyframe. Once the new keyframe odometry message arrives, the estimator rewinds to the time of the image capture, resets the state to zero, then re-propagates the IMU and altimeter messages that were queued up to the visual odometry message time. 
As a result, the first message that the estimator publishes with respect to the new keyframe includes the 115 ms of IMU re-propagation and is non-zero. However, the relative state was actually reset to zero at the time of the image capture.</p></div>
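The rewind-and-replay handling of delayed visual odometry described above can be illustrated on a toy one-dimensional state. The class and method names are ours, and the toy dynamics stand in for the filter's IMU propagation.

```python
from collections import deque

class DelayedUpdateFilter:
    """Minimal sketch of rewind-and-replay measurement handling,
    demonstrated on a 1-D state with dynamics x_dot = u."""

    def __init__(self, x0=0.0, t0=0.0):
        self.x, self.t = x0, t0
        self.log = deque()               # (t_before, x_before, u, t_after)

    def propagate(self, t, u):
        self.log.append((self.t, self.x, u, t))
        self.x += u * (t - self.t)
        self.t = t

    def delayed_update(self, t_meas, z, gain=1.0):
        # Pop the propagation steps that occurred after the measurement
        replay = []
        while self.log and self.log[-1][0] >= t_meas:
            replay.append(self.log.pop())
        if replay:
            # Rewind: restore the state at the measurement timestamp
            # (sketch assumes t_meas coincides with a step boundary)
            t0, x0, _, _ = replay[-1]
            self.t, self.x = t0, x0
        self.x += gain * (z - self.x)    # apply the late measurement
        for _, _, u, t_after in reversed(replay):
            self.propagate(t_after, u)   # re-propagate the queued inputs
```

The filter state after a delayed update therefore reflects both the correction at the image timestamp and the inputs received while the measurement was in flight, mirroring the behavior visible in Figure 12.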
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9.">Conclusions</head><p>While the global state of a UAS is not observable when navigating through GPS-denied environments, it is possible to maintain observability by subtly restructuring the problem using the relative navigation framework. Previous simulation studies show strong theoretical motivations for the relative navigation framework, including improved consistency, bounded covariance, and improved accuracy <ref type="bibr">(Wheeler et al., 2018)</ref>. In addition, significant hardware results have demonstrated the efficacy and practicality of relative navigation for small UASs when using a relative MEKF for state estimation <ref type="bibr">(Wheeler et al., 2020)</ref>. This article rigorously derives the RMEKF used in <ref type="bibr">Wheeler et al. (2018)</ref> and <ref type="bibr">Wheeler et al. (2020)</ref>, presenting the mathematics necessary to apply relative navigation to the UAS state estimation problem. Its tutorial nature and step-by-step derivations make this article a self-contained resource for extending the approach to other applications.</p><p>By defining the state with respect to a local coordinate frame, this article demonstrates how to leverage relative measurements from a visual odometry algorithm to ensure an observable state. New visual odometry and laser scan-matching measurement models are proposed, and a unique keyframe reset step is presented to ensure filter states are fully observable even when global information is not available.</p><p>The RMEKF is demonstrated in simulation and hardware to work effectively for both inertially defined and body-fixed vehicle dynamics to produce accurate state estimates with bounded uncertainty. 
While the inertial and body-fixed definitions yield similar results, this article explicitly outlines the differences that arise in the state estimator, including differences in the error state definition, measurement models, quaternion integration procedure, and keyframe reset step. Additional contributions of this article include a tutorial introduction to the indirect MEKF, an exposition of Hamilton quaternions, and the derivation of several novel properties of error quaternions necessary for partially updating a quaternion and its covariance.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>Appendix A</head><p>We wish to derive an expression for the Jacobian N relating the vector of Euler-angle errors to the attitude error state &#948;&#952; .</p><p>The derivation and expressions that follow show some similarity to those presented by <ref type="bibr">Barfoot et al. (2011)</ref>, but are derived in the context of an error-state formulation. We begin by deriving expressions for the error quaternions for the yaw, pitch, and roll quaternions of Equations ( <ref type="formula">38</ref>) and (39). For example, with the iRN definition of the error quaternion, we can compute &#948;q &#968; from (30a) and ( <ref type="formula">39</ref>) as</p><p>where the final step uses the standard angle-difference trigonometric identities. Following a similar analysis, we see that (94) also holds for bRN using (30b). We likewise see that</p><p>for both the iRN and bRN attitude error definitions. Because these errors are all expressed in different intermediate frames of the Euler rotation sequence, &#948;q &#8800; &#948;q &#968; &#8855; &#948;q &#952; &#8855; &#948;q &#966; in general.</p><p>Assuming small attitude errors, we approximate ( <ref type="formula">94</ref>) and ( <ref type="formula">95</ref>) with the first-order Taylor series as</p><p>These error states are represented minimally according to (19) as</p><p>(96) We then define</p><p>Again, it should be noted that &#948;&#952; is not equal to this stacked vector of Euler-angle errors in general. 
We now wish to solve for &#948;&#952; in terms of &#948;&#952; &#968; , &#948;&#952; &#952; , and &#948;&#952; &#966; . For the iRN case, starting with (30a) and using (38), we see that</p><p>We can expand (97) using ( <ref type="formula">36</ref>) and ( <ref type="formula">96</ref>) to express &#948;&#952; in terms of the Euler-angle errors as</p><p>The determinant of the matrix in (98) equals cos &#952; . Therefore, assuming that &#952; &#8800; &#177;&#960;/2 , we solve (98) for the Euler-angle errors as</p><p>from which we obtain</p><p>A similar procedure could be used to derive the Jacobian N for the bRN case. However, the definition and interpretation of roll, pitch, and yaw are less obvious for the bRN case, so we will omit the derivation of N here and instead derive the relevant expressions in the text as they arise.</p></div>
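The single-axis factor quaternions of the 3-2-1 sequence used throughout this appendix can be checked numerically. A sketch assuming Hamilton [w, x, y, z] quaternions (helper names are ours):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of [w, x, y, z] quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw])

def euler_factors(phi, theta, psi):
    """Single-axis factor quaternions of the 3-2-1 (yaw-pitch-roll)
    sequence, mirroring the roles of Equations (38) and (39)."""
    q_phi   = np.array([np.cos(phi/2),   np.sin(phi/2),   0.0, 0.0])
    q_theta = np.array([np.cos(theta/2), 0.0, np.sin(theta/2), 0.0])
    q_psi   = np.array([np.cos(psi/2),   0.0, 0.0, np.sin(psi/2)])
    return q_phi, q_theta, q_psi

# Full attitude quaternion for the 3-2-1 sequence:
# q = q_psi (x) q_theta (x) q_phi
```

Composing the three factors reproduces the standard Euler-to-quaternion formula, which provides a quick sanity check on sign and ordering conventions before working with the error quaternions.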
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Appendix B. Zero-mean error state</head><p>In this appendix, we show that the expected error state remains zero mean. Specifically, we show that the expected value of the error state is zero, as indicated in (45), and that the error-state dynamics are trivial when the estimation error and input noise are both zero.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B.1. Expected value of error state</head><p>Property (45) stems from the linearity of the expectation operator and of the quaternion multiplication, inverse, and &#8744; operations. For the following derivations, recall that x is modeled as a random variable, while x&#770; is a constant value for a given timestep. Given (44a) and x&#770; = E [x], for vector portions of the state we have</p><p>Using (30a) as the attitude error definition for (44b), for quaternion portions of the state we have</p><p>A similar derivation is also possible for the attitude error definition in (30b).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B.2. Error-state dynamics</head><p>For vector portions of the state, from (44a), (47), and (50) we have</p><p>&#948;&#7819; v = &#7819; v - x&#770;&#775; v = f v (x, u + &#965;) + &#951; v - f v (x&#770;, u) &#8796; f&#771; v (&#948;x, &#965;, x&#770;, u) + &#951; v .</p><p>We note that x can be rewritten in terms of &#948;x and x&#770; using (44a). For quaternion portions of the state, for iRN we start with (29a):</p><p>x q = x&#770; q &#8855;( &#948;x &#952; ) &#8743; =&#8658; &#7819; q = x&#770;&#775; q &#8855;( &#948;x &#952; ) &#8743; + x&#770; q &#8855;( &#948;&#7819; &#952; ) &#8743; .</p><p>Multiplying on the left by (x&#770; q ) -1 gives</p><p>(x&#770; q ) -1 &#8855; &#7819; q = (x&#770; q ) -1 &#8855; x&#770;&#775; q &#8855;( &#948;x &#952; ) &#8743; + ( &#948;&#7819; &#952; ) &#8743; =&#8658; &#948;&#7819; &#952; = ( (x&#770; q ) -1 &#8855; &#7819; q - (x&#770; q ) -1 &#8855; x&#770;&#775; q &#8855;( &#948;x &#952; ) &#8743; ) &#8744; = ( (x&#770; q ) -1 &#8855; f q (x, u + &#965;) - (x&#770; q ) -1 &#8855; f q (x&#770;, u) &#8855;( &#948;x &#952; ) &#8743; ) &#8744; &#8796; f&#771; q (&#948;x, &#965;, x&#770;, u) ,</p><p>where again x can be rewritten in terms of &#948;x and x&#770; using (29a) and (44a). Note that we have omitted the process noise &#951; since the quaternion propagation dynamics in this article are exact kinematic expressions. Evaluating f&#771; q at ( E[&#948;x], E[&#965;], x&#770;, u ) gives</p><p>f&#771; q ( E[&#948;x], E[&#965;], x&#770;, u ) = ( (x&#770; q ) -1 &#8855; f q (x&#770;, u) - (x&#770; q ) -1 &#8855; f q (x&#770;, u) &#8855; 0 &#8743; ) &#8744; = 0.</p><p>A similar analysis can be carried out with the bRN attitude error definition in (29b).</p><p>Multiplying both sides by two, dropping the scalar term, and ignoring higher-order terms yields </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D.3. Velocity</head><p>The only change in the velocity error dynamics is the gravity term, so that</p><p>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Appendix E. Keyframe-reset derivation</head><p>During the keyframe reset step, introduced in Section 5 and detailed in Sections 6.3 and 7.3, the relative states and their associated covariance are reset to zero. For the position states, only the altitude of the vehicle is kept, while for the attitude states the uncertainty associated with yaw is removed from the filter and the uncertainty associated with roll and pitch is maintained. Appendices E.1 and E.2 derive the attitude reset and its Jacobian (104) for both iRN and bRN attitude definitions. Appendix E.3 derives the position reset and its Jacobian for bRN.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>E.1. iRN attitude reset</head><p>Recall from (99) in Appendix A that the Euler angle errors are related to the attitude error state &#948;&#952; as</p><p>The true and estimated attitude states after the keyframe reset step consist of the roll and pitch components of the attitude prior to the reset,</p><p>The resulting attitude error is obtained from (30a) and (29a) as</p><p>&#948;q b n + = ( q&#770; b n + ) -1 &#8855; q b n + = q&#770; -1 &#966; &#8855; q&#770; -1 &#952; &#8855; q &#952; &#8855; q &#966; = q&#770; -1 &#966; &#8855; q&#770; -1 &#952; &#8855; q&#770; &#952; &#8855; &#948;q &#952; &#8855; q&#770; &#966; &#8855; &#948;q &#966; = q&#770; -1 &#966; &#8855; &#948;q &#952; &#8855; q&#770; &#966; &#8855; &#948;q &#966; ,</p><p>which, similar to (42), implies that </p></div>
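The partial attitude reset, which keeps roll and pitch while zeroing yaw, can be sketched as a decompose-and-recompose operation on the quaternion (Hamilton [w, x, y, z] convention; the helper names are ours, and this is an illustration rather than the paper's implementation):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of [w, x, y, z] quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw])

def remove_yaw(q):
    """Partial attitude reset: extract the 3-2-1 Euler angles of q,
    then rebuild the quaternion with yaw zeroed, i.e. the
    q_plus = q_theta (x) q_phi operation of the keyframe reset."""
    w, x, y, z = q
    phi = np.arctan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))      # roll
    theta = np.arcsin(np.clip(2*(w*y - x*z), -1.0, 1.0))    # pitch
    q_phi = np.array([np.cos(phi/2), np.sin(phi/2), 0.0, 0.0])
    q_theta = np.array([np.cos(theta/2), 0.0, np.sin(theta/2), 0.0])
    return qmul(q_theta, q_phi)
```

After the operation, the extracted yaw of the result is zero while roll and pitch are unchanged, which is exactly the state-estimate portion of the attitude reset.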
<div xmlns="http://www.tei-c.org/ns/1.0"><head>E.2. bRN attitude reset</head><p>Continuing the notation presented in Section 7, we describe bRN attitude error with &#948;q n b and &#948;&#952; n b . The typical 3-2-1 Euler angles assume an inertial attitude. To express a body-fixed attitude while still maintaining the intuitive roll, pitch, and yaw rotations, the order and sign of the rotation sequence must be flipped as</p><p>Expanding using the error state definition (30b),</p><p>which is approximated as</p><p>Combining ( <ref type="formula">108</ref>) and ( <ref type="formula">109</ref>) using (30b) gives ( <ref type="formula">113</ref>), which confirms (104) for bRN. Note that the values of &#966; and &#952; used to evaluate N &#952; are obtained from the Euler decomposition of ( q n b ) -1 .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>E.3. bRN position reset</head><p>Following (90) we can write by rearranging (90). We note from ( <ref type="formula">91</ref>) and (38) that</p><p>=&#8658; R(q n b ) = R T (q &#968; ) R T (q &#952; ) R T (q &#966; ) , so (114) becomes</p><p>We next observe that kk T R T (q &#968; ) = = -kk T R T (q &#952; ) R T (q &#966; ) p n b . Noting that The keyframe pose ( p n k , q n k ) is transformed similarly, yielding </p></div></body>
		</text>
</TEI>
