<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Worst-Case Latency Analysis of Message Synchronization in ROS</title></titleStmt>
			<publicationStmt>
				<publisher>IEEE</publisher>
				<date>12/05/2023</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10512374</idno>
					<idno type="doi">10.1109/RTSS59052.2023.00025</idno>
					<title level='j'>2023 IEEE Real-Time Systems Symposium (RTSS)</title>
<idno type="issn">2576-3172</idno>

					<author>Ruoxiang Li</author><author>Xu Jiang</author><author>Zheng Dong</author><author>Jen-Ming Wu</author><author>Chun Jason Xue</author><author>Nan Guan</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>combines it with the message that arrives at time 19 into an output message set, which is sent to the fusion task. The fusion task then sends the message to the actuator task at time 26, and the actuator task finishes its execution at time 28. Note that the messages arriving at times 7 and 13 are discarded and not included in any output message set.</p><p>In this example, the passing latency of the message arriving at time 19 caused by the synchronizer is the time difference between its arrival at time 19 and the publishing of the output message set at time 24, i.e., 24 - 19 = 5. The corresponding end-to-end delay is the duration from the start time of sensor task 1 at time 18 to the completion of the actuator task at time 28, i.e., 28 - 18 = 10, which includes the passing latency from time 19 to 24. The reaction latency is the duration from the arrival of the message at time 1 to the publishing of the output message set at time 24, i.e., 24 - 1 = 23, which includes the passing latency plus the extra latency caused by the discarded messages. The corresponding end-to-end reaction time is the duration from the occurrence of event B at time 0 to the completion of the actuator task at time 28, i.e., 28 - 0 = 28, which includes the reaction latency from time 1 to 24.</p><p>It is worth mentioning that the reaction latency of the synchronizer is defined with respect to the arrival time of the last non-discarded message rather than the occurrence time of the external event, which may seem problematic. For example, if an external event A occurs at time 10, then the end-to-end reaction time regarding this event should be 28 - 10 = 18. However, by our definition, the reaction latency of the synchronizer is 24 - 1 = 23, which is larger than the end-to-end reaction time 28 - 10 = 18. This is not actually a problem, as our interest is in analyzing the worst-case end-to-end reaction time no matter when the event actually occurs. 
The worst-case scenario is that the external event happens right after the sampling time of the last non-discarded message (event B in Fig. <ref type="figure">2-(b)</ref>). Therefore, the worst-case time gap between the occurrence of event B and the generation of the first output message group containing the information of event B (24 - 0 in this example) equals the sum of two parts: (1) the difference between the timestamp of the last non-discarded message and its arrival time at the synchronizer (1 - 0 in this example), and (2) the reaction latency (24 - 1 in this example). The former can be bounded using existing response time analysis techniques, while analyzing the latter is the goal of this paper. In summary, we define the reaction latency of the synchronizer assuming the worst-case scenario, i.e., the external event occurs right after the sampling time of the last non-discarded message. In this way, the definition of the reaction latency is simple yet sufficient to serve the purpose of bounding the worst-case end-to-end reaction time.</p></div>
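The decomposition just described can be written out explicitly. Below is a sketch in LaTeX notation (our own formalization of the argument above, not an equation taken from the paper), where τ(m) denotes the sampling timestamp of the last non-discarded message m, α(m) its arrival time at the synchronizer, and t_pub the publishing time of the first output set containing m:

```latex
% Worst-case gap between the occurrence of the external event (right after
% the sampling time of the last non-discarded message m) and the first
% output message group containing its information:
t_{\mathrm{pub}} - \tau(m)
  = \underbrace{\alpha(m) - \tau(m)}_{\substack{\text{bounded by existing}\\\text{response-time analysis}}}
  \;+\; \underbrace{t_{\mathrm{pub}} - \alpha(m)}_{\text{reaction latency (this paper)}}
% Running example: 24 - 0 = (1 - 0) + (24 - 1).
```

In the running example the two terms are 1 and 23, matching the worst-case gap of 24.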
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. ROS MESSAGE SYNCHRONIZATION POLICY</head><p>There are two synchronization policies in ROS, i.e., the Exact Time policy <ref type="bibr">[28]</ref> and the Approximate Time policy <ref type="bibr">[29]</ref>. The Exact Time policy only combines messages from different input channels with exactly the same timestamp into an output set and discards any message without an exact match. As a result, any output message set published under the Exact Time policy has a time disparity of 0. However, in reality it is too restrictive to require data from different sensors to have exactly the same timestamp, so the Exact Time policy is rarely used in practice. Consequently, we focus on the Approximate Time policy in this paper, which combines messages under a certain tolerance of time disparity. Please note that the model and results of this paper apply to both ROS 1 (the first generation of ROS) and ROS 2. More specifically, the Approximate Time policy is the same for all ROS 1 C++ versions since Diamondback and all ROS 2 C++ versions up to the latest Rolling, as also stated in <ref type="bibr">[3]</ref>. For the sake of brevity, we use the term "ROS" in this paper to cover both ROS 1 and ROS 2. Throughout the remainder of this paper, we use the terms "policy" and "synchronization policy" interchangeably to refer to the Approximate Time policy. In this paper, we adopt the abstract model presented in <ref type="bibr">[3]</ref>, but to keep our paper self-contained, we provide a detailed explanation of this model in its entirety. We first define some concepts, followed by the abstract policy model.</p><p>We use S = {m_1, ..., m_N} to denote a regular set containing N messages, each of which comes from a different queue. The time disparity of a regular set is defined as follows. Definition 3 (Time Disparity). Let S = {m_1, ..., m_N} be a regular set. 
The time disparity of S, denoted by Δ(S), is the maximum difference between the timestamps of the messages in S, i.e., Δ(S) = max_{m ∈ S} τ(m) - min_{m ∈ S} τ(m).</p><p>Each queue Q_i stores not only messages that have already arrived (called arrived messages), but also an artificial predicted message at the end of Q_i. The timestamp of a predicted message is set based on the timestamp of the latest arrived message in Q_i and T^B_i: if there are currently k arrived messages in Q_i, the predicted message at the end of Q_i has timestamp τ(m^k_i) + T^B_i, where m^k_i is the latest arrived message. It is important to note that the selection procedure of the output message set is not solely based on the arrived messages but also considers the predicted messages, which provide auxiliary information for the selection procedure. Nevertheless, a predicted message is never included in an output message set.</p><p>When the system starts at time 0, a predicted message with timestamp 0 is initially put into each queue. Note that sometimes a queue may contain only a predicted message and no arrived message.</p><p>Definition 4 (Pivot). Let S^1 = {m^1_1, ..., m^1_N}, where each m^1_i is the arrived message with the earliest timestamp in Q_i. The pivot m_P is the message with the largest timestamp among all elements in S^1. If several messages in S^1 share this largest timestamp, the message with the maximum queue number is the pivot.</p><p>The queue to which the pivot belongs is called the pivot queue, while the remaining queues are called the non-pivot queues. We use Λ to denote the set of all regular sets corresponding to the pivot m_P. Note that the regular sets in Λ consist of messages currently in the queues (either arrived or predicted) and must include m_P. The selected set is the one with the smallest time disparity among all regular sets in Λ.</p><p>Definition 5 (Selected Set). Let m_P be a pivot and Λ be the corresponding collection of regular sets that include m_P. The selected set is the set with the smallest time disparity among all elements in Λ. 
If multiple elements in Λ all have the smallest time disparity, the selected set S = {m_1, ..., m_N} must additionally satisfy a tie-breaking condition: there does not exist another regular set in Λ with the same time disparity whose messages have earlier timestamps.</p><p>A selected set can include both arrived and predicted messages. We call a selected set containing only arrived messages a published set (denoted by S_PUB). The messages in a published set are called published messages. If the selected set contains any predicted messages, the synchronizer must wait for them to arrive. Intuitively, the predicted message(s) may allow a regular set with a smaller time disparity than the current selected set to be formed. However, since the actual timestamp separation between consecutive messages may exceed T^B_i, a message may arrive with a larger timestamp than predicted. If the difference between the actual and predicted timestamps is significant, the message cannot be included in a selected set as expected. Therefore, the synchronizer may waste time waiting for messages to arrive, further contributing to the passing latency or the reaction latency. The insight here is that if the predicted timestamp is too large (e.g., the difference between the predicted timestamp and the timestamp of the pivot exceeds the worst-case time disparity of the published set), the synchronizer does not need to wait for the predicted message, thereby avoiding wasted time. We will elaborate on this insight, as well as the aspects relevant to the passing latency and the reaction latency, with an illustrative example in Section III-B.</p></div>
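As a concrete illustration of Definitions 3 and 5, the following sketch computes the time disparity of a regular set and picks a minimum-disparity regular set containing the pivot from per-queue candidates. This is our own illustration, not the ROS implementation: messages are reduced to bare timestamps, pivot membership is checked by timestamp rather than by message identity, and the tie-breaking rule of Definition 5 is left out.

```python
import itertools
from typing import List, Sequence


def time_disparity(s: Sequence[float]) -> float:
    """Delta(S): the maximum timestamp difference within a regular set
    (Definition 3) -- equivalently max(S) - min(S)."""
    return max(s) - min(s)


def selected_set(queues: List[List[float]], pivot: float) -> List[float]:
    """Among all regular sets (one timestamp per queue) that contain the
    pivot timestamp, return one with the smallest time disparity
    (Definition 5).  Ties are broken arbitrarily in this sketch."""
    candidates = (list(s) for s in itertools.product(*queues)
                  if pivot in s)
    return min(candidates, key=time_disparity)
```

For example, with queues holding timestamps [3, 10], [0, 9], [10] and pivot 10, the minimum-disparity regular set containing the pivot is [10, 9, 10], with a disparity of 1.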
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Synchronization Policy</head><p>When a new message m_i arrives, the synchronizer invokes Algorithm 1. First, the last message in Q_i (which must be a predicted message) is discarded (Line 1). Then, m_i is appended to Q_i, followed by a new predicted message with timestamp τ(m_i) + T^B_i (Lines 2-3). After that, the pivot is set (Line 5) once there is at least one arrived message in each queue, and a selected set can only be obtained (Line 7) if all predicted messages have timestamps greater than τ(m_P). If the selected set contains only arrived messages, it is published, and all published messages are discarded from the queues. Additionally, the messages earlier than the published messages, which are not included in any published set, are also discarded from the queues (Lines 8-11). Otherwise, if the selected set contains one or more predicted messages, Algorithm 1 exits immediately to wait for the predicted message(s) to arrive. We assume that the time required by Algorithm 1 to identify a selected set is negligible, i.e., it is considered to be 0. This assumption is made to simplify our analysis and to focus solely on the latency caused by waiting for messages to arrive and by the discarded messages. Furthermore, we use Δ to denote the upper bound on the time disparity of any published set, which equals the RHS of <ref type="bibr">(12)</ref> in <ref type="bibr">[3]</ref>.</p></div>
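The bookkeeping of Algorithm 1 can be sketched as a small executable model. This is our own simplified rendering under stated assumptions, not the ROS message_filters API: class and method names are ours, messages are (timestamp, is_predicted) pairs, and the tie-breaking refinement of Definition 5 is omitted.

```python
import itertools
from typing import List, Optional, Tuple

# A message is (timestamp, is_predicted); each queue is kept oldest-first
# and always ends with exactly one artificial predicted message.
Msg = Tuple[float, bool]


class ApproxTimeSync:
    """Simplified model of Algorithm 1 (Approximate Time policy sketch)."""

    def __init__(self, t_b: List[float]):
        self.t_b = t_b                        # T^B_i per input channel
        # at system start, a predicted message with timestamp 0 per queue
        self.queues: List[List[Msg]] = [[(0.0, True)] for _ in t_b]

    def on_arrival(self, i: int, ts: float) -> Optional[List[float]]:
        q = self.queues[i]
        q.pop()                               # Line 1: drop predicted tail
        q.append((ts, False))                 # Line 2: enqueue arrived m_i
        q.append((ts + self.t_b[i], True))    # Line 3: new predicted message
        return self._try_publish()

    def _try_publish(self) -> Optional[List[float]]:
        arrived = [[m for m in q if not m[1]] for q in self.queues]
        if any(not a for a in arrived):       # some queue has no arrived msg
            return None
        heads = [a[0][0] for a in arrived]    # earliest timestamp per queue
        # Definition 4 / Line 5: largest head; ties -> largest queue index
        pq = max(range(len(heads)), key=lambda j: (heads[j], j))
        pivot = heads[pq]
        # Line 6: every predicted message must have a timestamp > tau(m_P)
        if any(q[-1][0] <= pivot for q in self.queues):
            return None
        # Line 7: minimum-disparity regular set that includes the pivot
        feasible = [s for s in itertools.product(*self.queues)
                    if s[pq] == (pivot, False)]
        best = min(feasible,
                   key=lambda s: max(m[0] for m in s) - min(m[0] for m in s))
        if any(m[1] for m in best):           # contains predicted msgs: wait
            return None
        # Lines 8-11: publish; drop published and earlier messages per queue
        for qi, (ts_pub, _) in enumerate(best):
            self.queues[qi] = [m for m in self.queues[qi] if m[0] > ts_pub]
        return [m[0] for m in best]
```

With two channels and T^B = [10, 10], an arrival on channel 0 at timestamp 0 publishes nothing (channel 1 is still empty), and a subsequent arrival on channel 1 at timestamp 1 yields the published set with timestamps [0.0, 1.0], after which both queues hold only their fresh predicted messages.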
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. An Illustrative Example</head><p>We use Fig. <ref type="figure">3</ref> to illustrate Algorithm 1. The x-axis represents the timestamp; the messages' arrival times are not explicitly depicted in the figure. The downward arrows represent the messages buffered in the queues.</p><p>At some time point, a message with timestamp 0 arrives in Q_3 and is set as the pivot, as shown in Fig. <ref type="figure">3-(a)</ref>. The message set {m^1_1, m^1_2, m^1_3} in Fig. <ref type="figure">3-(a)</ref> is the first published set, and the corresponding published messages are discarded from the queues. Then, a message with timestamp 10 arrives at Q_3 and is set as the new pivot, as shown in Fig. <ref type="figure">3-(b)</ref>. Please note that the indexes of messages are automatically updated in Algorithm 1 after discarding messages. For example, from Fig. <ref type="figure">3-(a)</ref> to Fig. <ref type="figure">3-(b)</ref>, the notation of the message with timestamp 3 in Q_1 is updated from m^2_1 to m^1_1 after m^1_1 with timestamp 0 is discarded.</p><p>The regular set {m^3_1, m^2_2, m^1_3} in Fig. <ref type="figure">3-(b)</ref> has the minimum time disparity, so it is the selected set. However, it cannot be published since m^2_2 is a predicted message, so the synchronizer waits for m^2_2 to arrive. At some later point, m^4_1 and m^2_2 arrive successively, as illustrated in Fig. <ref type="figure">3-(c)</ref>. Then the selected set {m^3_1, m^2_2, m^1_3} in Fig. <ref type="figure">3-(c)</ref> is published.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. The Second Upper Bound for Passing Latency</head><p>Below, we derive the second upper bound for the passing latency by dividing the published set S_PUB into three cases<ref type="foot">foot_0</ref>:</p><p>• Case 1: all published messages in S_PUB from non-pivot queues have timestamps later than τ(m_P). • Case 2: all published messages in S_PUB from non-pivot queues have timestamps earlier than τ(m_P). • Case 3: some published messages in S_PUB have timestamps earlier than τ(m_P), while the others have timestamps later than τ(m_P). We begin by demonstrating that in Case 1, the term M in inequality (1) can be simplified to account only for the delay element D^W_j, without the need to consider the minimal timestamp difference element T^W_j. Lemma 5. Let S_PUB be any arbitrary published set and m_P ∈ S_PUB be the pivot. If S_PUB falls into Case 1, the passing latency experienced by m^ρ(i)_i ∈ S_PUB is upper-bounded by (1) with M restricted to the delay element D^W_j.</p><p>Proof. The published message in each non-pivot queue must be the first message with a timestamp later than τ(m_P). When the latest arrived message m_L ∈ S_PUB arrives, a selected set can be obtained from Line 7 in Algorithm 1. This selected set does not contain any predicted messages and must be the published set S_PUB. So we have t_f = α(m_L). Since both m_L and m_P are included in S_PUB, we have τ(m_L) ≤ τ(m_P) + Δ. In the worst case, m^ρ(i)_i can be m_P. So, we have</p><p>The lemma is proved.</p><p>The key insight from Lemma 5 is that in Case 1, the latest arrived message is precisely the last one that Algorithm 1 should wait for, so the published set is obtained at time t_f = α(m_L). In this case, the algorithm only needs to wait for all published messages included in the published set to arrive. 
However, in Case 2 or Case 3, Algorithm 1 may need to wait for additional messages to arrive even after the latest arrived message m_L has arrived (i.e., after all published messages have arrived). More specifically, certain predicted messages have the potential to be included in a selected set with a smaller time disparity. If these messages arrive with the predicted timestamps, the time disparity of the published set could be reduced. However, they may arrive with timestamps later than predicted, disqualifying them from being included in a selected set and thereby prolonging the passing latency. In this case, the publishing time satisfies t_f &gt; α(m_L). For example, in Fig. <ref type="figure">3-(d)</ref>, m^3_1 is a predicted message with a timestamp of 21. The selected set is {m^3_1, m^2_2, m^1_3}. Suppose that m^2_2 and m^3_2 arrive at Q_2, and then m^3_1 arrives with a timestamp of 31, as shown in Fig. <ref type="figure">3-(e)</ref>. Therefore, the published set will be {m^2_1, m^2_2, m^1_3}, and we have t_f &gt; α(m_L). Had the difference between the predicted timestamp of m^3_1 and τ(m_P) been at least Δ, Algorithm 1 would not have waited for m^3_1 to arrive.</p><p>In both Case 2 and Case 3, the challenge is to identify and exclude those predicted messages that will not be waited for (after m_L arrives), so that we can reduce the pessimism in the analysis of the passing latency. Below, we first introduce how to do this by adding constraints on the minimal timestamp difference T^B_j in Lemmas 7 and 8 for Case 2 and Case 3. Then we analyze how to incorporate these constraints into the term M for Case 2 (in Lemma 9) and Case 3 (in Lemma 10). Lemma 6. Let m_P be any pivot, and S be a selected set corresponding to m_P. If m_j ∈ S is a predicted message in Q_j (j ∈ [1, N]), then it satisfies:</p><p>• τ(m_j) &gt; τ(m_P), and • ∄m′_j : τ(m_P) &lt; τ(m′_j) &lt; τ(m_j). Proof. 
By Line 6 of Algorithm 1, S can only be obtained when all predicted messages have timestamps later than τ(m_P). Since m_j is a predicted message, τ(m_j) &gt; τ(m_P) must hold. Assume for contradiction that there exists m′_j in Q_j such that τ(m_P) &lt; τ(m′_j) &lt; τ(m_j). Then m′_j must be an arrived message with τ(m′_j) - τ(m_P) &lt; τ(m_j) - τ(m_P). We can construct a regular set S′ with all messages in S, replacing only m_j with m′_j. Then we have Δ(S′) ≤ Δ(S), which contradicts the fact that S is a selected set. The lemma is proved.</p><p>In the following, we analyze the constraints on T^B_j in the context that, for any pivot m_P, the corresponding published set S_PUB falls into Case 2 or Case 3, m_L ∈ S_PUB is the latest arrived message, and S is a selected set obtained by Algorithm 1 not earlier than α(m_L).</p><p>Lemma 7. If the selected set S includes a predicted message m^ρ(j)_j from a non-pivot queue Q_j, then T^B_j ≤ 2Δ.</p><p>Proof. We prove this by contradiction, assuming T^B_j &gt; 2Δ. By Lemma 6, m^ρ(j)-1_j is an arrived message satisfying:</p><p>Since S_PUB falls into Case 2 or Case 3, the published message (which must be an arrived message) m^ρ(j)-x_j ∈ S_PUB (x ∈ N+) must satisfy:</p><p>The predicted message m^ρ(j)_j has a timestamp of τ(m^ρ(j)-1_j) + T^B_j. Combining this with ( <ref type="formula">4</ref>) and ( <ref type="formula">3</ref>), we have τ(m^ρ(j)_j) - τ(m_P) &gt; Δ. Therefore, any regular set containing m^ρ(j)_j and m_P has a time disparity greater than Δ. Since S is obtained not earlier than α(m_L), there must exist a regular set containing only arrived messages (which is in fact the published set for m_P) with a time disparity of at most Δ. 
So the synchronizer will not wait for m^ρ(j)_j to arrive in any case, i.e., m^ρ(j)_j cannot be included in the selected set S, which contradicts the prerequisite m^ρ(j)_j ∈ S. Therefore, our assumption is incorrect and T^B_j ≤ 2Δ must hold.</p><p>Lemma 7 states that in both Case 2 and Case 3, the selected set obtained not earlier than α(m_L) for a given pivot can only include predicted messages from a non-pivot queue Q_j satisfying the condition T^B_j ≤ 2Δ. These predicted messages are waited for until the publishing time. Note that the above condition is necessary but not sufficient.</p><p>Lemma 8. Let m_P be any pivot and S a selected set that includes a predicted message m^ρ(j)_j from Q_j. The arrived message m^ρ(j)-1_j must satisfy 4 : τ(m^ρ(j)-1_j) ≤ τ(m_P) + Δ - T^B_j ( <ref type="formula">5</ref>).</p><p>Proof. Since the predicted message m^ρ(j)_j is included in S, τ(m^ρ(j)_j) - τ(m_P) = τ(m^ρ(j)-1_j) + T^B_j - τ(m_P) must not be larger than Δ, which yields Eq. (5).</p><p>Lemma 8 reveals that for any pivot, when Δ ≤ T^B_j ≤ 2Δ, the predicted messages in Q_j can be included in a selected set and then be waited for before the selected set for this pivot can be published, but only if the condition specified in Eq. ( <ref type="formula">5</ref>) is satisfied.</p><p>To derive the upper bound for Case 2 and Case 3, we first introduce some auxiliary notation. For any pivot m_P, the time disparity of its published set is upper-bounded by Δ. We define two sets of queue indices j: φ_1 contains the indices with T^B_j &lt; Δ, and φ_2 those with Δ ≤ T^B_j ≤ 2Δ.</p><p>Lemma 9. Let S_PUB be any arbitrary published set and m_P ∈ S_PUB be the pivot. If S_PUB falls into Case 2, the passing latency experienced by m^ρ(i)_i ∈ S_PUB is upper-bounded as follows:</p><p>where</p><p>Proof. Suppose that before publishing S_PUB, m^ρ(j)_j (j ∈ [1, N]) is the first message that arrives in Q_j with a timestamp τ(m^ρ(j)_j) &gt; τ(m_P). By Lemma 7, the synchronizer 4 Of course, there exists a minimal limit as well, i.e., τ(m^ρ(j)-1_j) &gt; τ(m_P) - Δ. However, our focus here lies on the maximum limit. 
waits for m^ρ(j)_j to arrive only if T^B_j ≤ 2Δ. When j ∈ φ_2, by Lemma 8, the message m^ρ(j)-1_j has a maximum timestamp value of τ(m^ρ(j)-1_j) = τ(m_P) + Δ - T^B_j. Therefore, the timestamp of m^ρ(j)_j (when it arrives) must satisfy</p><p>When j ∈ φ_1, the message m^ρ(j)-1_j has a timestamp not later than τ(m_P). In the worst case, we have</p><p>Let m^ρ(h)_h be the first message with a timestamp larger than τ(m_P) in Q_h, and let it be the last one to arrive among all such messages in the queues. Based on Eqs. ( <ref type="formula">7</ref>) and ( <ref type="formula">8</ref>), we can derive</p><p>The lemma is proved.</p><p>Lemma 10. Let S_PUB be any arbitrary published set and m_P ∈ S_PUB be the pivot. If S_PUB falls into Case 3, the passing latency experienced by m^ρ(i)_i ∈ S_PUB is upper-bounded as follows:</p><p>Proof. Let m^ρ(j)_j (j ∈ [1, N]) be the first message that arrives in Q_j before publishing S_PUB with τ(m^ρ(j)_j) &gt; τ(m_P). Let τ(m_P) + σ (0 &lt; σ &lt; Δ) be the timestamp of the published message that has the latest timestamp among all messages in S_PUB. By Lemma 7, T^B_j ≤ 2Δ must hold. Similarly, we have</p><p>Let m^ρ(h)_h be the first message with a timestamp larger than τ(m_P) in Q_h, and let it be the last one to arrive among all such messages in the queues. We have</p><p>In conclusion, the lemma is proved.</p><p>Theorem 2 (Passing Latency Upper Bound 2). Let S_PUB be any arbitrary published set. The passing latency experienced by m^ρ(i)_i ∈ S_PUB is upper-bounded as follows:</p><p>We reach the same evaluation conclusion for all other channels.</p><p>To further demonstrate this, we also illustrate the results of all six channels in Fig. <ref type="figure">6</ref>-(b) as an example, where we use the same setting as Fig. <ref type="figure">6</ref>-(a), except that the number of channels is kept constant at 6. In Fig. 
<ref type="figure">6-(c)</ref>, the messages are no longer generated periodically; instead, the timestamp separation is randomly distributed between T^B_i and T^W_i, and the ratio between T^W_i and T^B_i varies as indicated by the x-axis. In Fig. <ref type="figure">6-(d)</ref>, we use the same setting as in Fig. <ref type="figure">6-(a)</ref>, but set the number of channels to 6 and change the range of the delay experienced by each message as shown by the x-axis.</p><p>For the reaction latency evaluation, we use the same settings (see Fig. <ref type="figure">7-(c)</ref> and (d)). An example with the results of all six channels is illustrated in Fig. <ref type="figure">7-(b)</ref>.</p><p>From the experiment results in Fig. <ref type="figure">6</ref>-(a) to (d), we can see that our upper bounds for the passing latency (Theorems 1 and 2) have good precision. As depicted in Fig. <ref type="figure">6-(c)</ref>, as the ratio between T^W_i and T^B_i increases (particularly at ratios of 1.6 and 1.8), the difference between Upper Bound 1 and Upper Bound 2 becomes negligible. The reason is that a larger ratio exacerbates the difference between T^B_i and T^W_i. Since Δ is calculated based on T^W_i, T^B_i &gt; Δ can then always hold, and the constraints introduced in the second upper bound for the passing latency become invalid. From the experiment results in Fig. <ref type="figure">7</ref>-(a) to (d), we can see that our upper bound for the reaction latency (Theorem 3) has a certain level of pessimism. Further analysis and refinement may be necessary to tighten this upper bound.</p><p>Based on the experiment results, we observe that the synchronization policy can produce considerable latency, as illustrated in Fig. <ref type="figure">6</ref>. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VII. RELATED WORK</head><p>Data fusion algorithms are commonly developed under the assumption that data from multiple sensors are perfectly aligned, although this is rarely the case in reality. To address this issue, various techniques have been proposed to compensate for the temporal inconsistency of input data <ref type="bibr">[30]</ref>- <ref type="bibr">[33]</ref>, which only work when the temporal inconsistency falls within a certain range. Message synchronization before data fusion is thus a crucial component that warrants careful consideration. Previous studies <ref type="bibr">[2]</ref>, <ref type="bibr">[34]</ref>- <ref type="bibr">[36]</ref> have focused on precisely timestamping sensor data in the context of multi-sensor data fusion. In this paper, we assume that sensor data has already been associated with valid timestamps in the same coordinate system using these existing techniques. Our focus is on the problem that arises after timestamping, i.e., the latency incurred when managing the sensor data flows in the computing system based on these timestamps.</p><p>In recent years, some work has been conducted on formal real-time performance analysis of ROS 2, such as response time analysis that models the execution of ROS 2 applications as processing chains or a DAG <ref type="bibr">[16]</ref>, <ref type="bibr">[17]</ref>, <ref type="bibr">[20]</ref>, <ref type="bibr">[22]</ref>, <ref type="bibr">[24]</ref> executing on the ROS 2 default scheduler, i.e., the executor. <ref type="bibr">[18]</ref>, <ref type="bibr">[21]</ref>, <ref type="bibr">[23]</ref>, <ref type="bibr">[24]</ref> proposed to address the limitations of the default scheduling strategy of ROS 2 by enhancing or redesigning the executor. In <ref type="bibr">[19]</ref>, the authors propose an automatic latency manager that applies existing real-time scheduling theory to latency control of critical callback chains in ROS 2 applications. 
<ref type="bibr">[14]</ref> proposed an end-to-end timing analysis for cause-effect chains in ROS 2, considering the maximum end-to-end reaction time and maximum data age metrics. However, all of the research mentioned above focuses solely on the executor component of ROS 2 for end-to-end latency (response time) analysis, without considering the Message Synchronizer. <ref type="bibr">[37]</ref> proposed a synchronization system implemented in a node to harmonize communication between nodes, which works similarly to the message synchronization policy in ROS. Recent work <ref type="bibr">[3]</ref>, <ref type="bibr">[7]</ref> modeled the message synchronization policy in ROS and formally analyzed the worst-case time disparity of the output message set as well as important properties of the policy. However, their analysis focuses only on the time disparity metric and neglects the latency caused by the policy, which is closely tied to end-to-end latency and is a critical factor in the reaction time of the system as a whole.</p><p>Previous research on real-time scheduling and analysis has investigated various real-time performance metrics, including response time <ref type="bibr">[38]</ref>, <ref type="bibr">[39]</ref>, tardiness <ref type="bibr">[40]</ref> and data freshness <ref type="bibr">[10]</ref>- <ref type="bibr">[13]</ref>, <ref type="bibr">[41]</ref>. However, these analysis techniques cannot be directly applied to ROS systems. Furthermore, analyzing the latency associated with the message synchronization policy in ROS remains an open research question.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VIII. CONCLUSION</head><p>In this paper, we explore two latency metrics associated with the ROS message synchronization policy, i.e., the passing latency and the reaction latency, and formally analyze upper bounds for both. We conduct experiments under different settings, including different numbers of channels, varied data sampling periods, and random delays experienced by messages before arriving at the synchronizer, to evaluate the precision of our proposed latency upper bounds against the maximal observed latency in real executions. In the future, we plan to improve the design and implementation of the ROS message synchronization policy, considering both the time disparity and latency aspects, with the ultimate goal of achieving better real-time performance in ROS systems.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0"><p>We omit the cases in which the timestamp of a predicted message exactly equals τ(m_P), to simplify the presentation of the following proofs. This does not compromise the generality of our analysis, since we can add (or subtract) an infinitesimal value to (or from) the timestamp to fit our analysis.</p></note>
		</body>
		</text>
</TEI>
