<?xml-model href='http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/tei_all.rng' schematypens='http://relaxng.org/ns/structure/1.0'?><TEI xmlns="http://www.tei-c.org/ns/1.0">
	<teiHeader>
		<fileDesc>
			<titleStmt><title level='a'>Anti-Aging Scheduling in Single-Server Queues: A Systematic and Comparative Study</title></titleStmt>
			<publicationStmt>
				<publisher></publisher>
				<date>07/06/2020</date>
			</publicationStmt>
			<sourceDesc>
				<bibl> 
					<idno type="par_id">10192357</idno>
					<idno type="doi">10.1109/INFOCOMWKSHPS50562.2020.9162767</idno>
					<title level='j'>IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)</title>
<idno></idno>
<biblScope unit="volume"></biblScope>
<biblScope unit="issue"></biblScope>					

					<author>Zhongdong Liu</author><author>Liang Huang</author><author>Bin Li</author><author>Bo Ji</author>
				</bibl>
			</sourceDesc>
		</fileDesc>
		<profileDesc>
			<abstract><ab><![CDATA[The Age-of-Information (AoI) is a new performance metric recently proposed for measuring the freshness of information in information-update systems. In this work, we conduct a systematic and comparative study to investigate the impact of scheduling policies on the AoI performance in single-server queues and provide useful guidelines for the design of AoI-efficient scheduling policies. Specifically, we first perform extensive simulations to demonstrate that the update-size information can be leveraged for achieving a substantially improved AoI compared to non-size-based (or arrival-time-based) policies. Then, by utilizing both the update-size and arrival-time information, we propose three AoI-based policies. Observing improved AoI performance of policies that allow service preemption and that prioritize informative updates, we further propose preemptive, informative, AoI-based scheduling policies. Our simulation results show that such policies empirically achieve the best AoI performance among all the considered policies. Interestingly, we also prove sample-path equivalence between some size-based policies and AoI-based policies. This provides an intuitive explanation for why some size-based policies, such as Shortest-Remaining-Processing-Time (SRPT), achieve a very good AoI performance.]]></ab></abstract>
		</profileDesc>
	</teiHeader>
	<text><body xmlns="http://www.tei-c.org/ns/1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xlink="http://www.w3.org/1999/xlink">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>Recently, the study of information freshness has received increasing attention, especially for time-sensitive applications that require real-time information/status updates, such as road congestion information, stock quotes, and weather forecasts. In order to measure the freshness of information, a new metric, called the Age-of-Information (AoI), has been proposed. The AoI is defined as the time elapsed since the generation of the freshest update among those that have been received by the destination <ref type="bibr">[1]</ref>. Prior studies reveal that the AoI depends on both the inter-arrival time and the delay of the updates. Due to the dependency between the inter-arrival time and the delay, this new AoI metric exhibits very different characteristics from the traditional delay metric and is generally much harder to analyze (see, e.g., <ref type="bibr">[1]</ref>).</p><p>Although it is well known that scheduling policies play an important role in reducing the delay in single-server queues, it remains largely unknown how exactly scheduling policies impact the AoI performance. To that end, we aim to holistically study the impact of various aspects of scheduling policies on the AoI performance in single-server queues and provide useful guidelines for the design of scheduling policies that can achieve a small AoI. (Fig. <ref type="figure">1</ref> illustrates our position in the design space of AoI-efficient scheduling policies for a G/G/1 queue.) While much research effort has already been exerted on the design and analysis of scheduling policies aiming to reduce the AoI, almost all of these policies are based only on the arrival time of updates, such as First-Come-First-Served (FCFS) and Last-Come-First-Served (LCFS), assuming that the update-size information is unavailable. Here, the size of an update is the amount of time required to serve the update if there were no other updates in the system. 
In some applications, such as smart grid and traffic monitoring, the update-size information can be obtained or fairly well estimated <ref type="bibr">[2]</ref>. It has been shown that scheduling policies that leverage the size information can substantially reduce the delay, especially when the system load is high or when the size variability is large <ref type="bibr">[3]</ref>. This motivates us to investigate the AoI performance of size-based policies in a G/G/1 queue. Note that the update-size information is "orthogonal" to the arrival-time information, both of which could significantly impact the AoI performance. Therefore, it is quite natural to further consider AoI-based policies that use both the update-size and arrival-time information of updates.</p><p>In addition, prior work has revealed that scheduling policies that allow service preemption and that prioritize informative updates (also called effective updates, which are those that lead to a reduced AoI once delivered; see Section VI-A for a formal definition) yield a good AoI performance <ref type="bibr">[4]</ref>- <ref type="bibr">[6]</ref>. Intuitively, preemption prevents fresh updates from being blocked by a large and/or stale update in service; informative policies discard stale updates, which do not bring new information but may block fresh updates. To that end, we also consider AoI-based scheduling designs that both allow service preemption and prioritize informative updates.</p><p>In Fig. <ref type="figure">1</ref>, we position our work in the literature by summarizing various design aspects of scheduling policies for a G/G/1 queue. Existing work mostly explores the design based on the arrival-time information along with considering service preemption and informative updates. We point out that the size-based design is an orthogonal dimension of great importance, which somehow has not received sufficient attention yet. Unsurprisingly, designing AoI-efficient policies requires the consideration of all these dimensions. 
In Table <ref type="table">I</ref>, we summarize several useful guidelines for the design of AoI-efficient policies, which are also labeled in Fig. <ref type="figure">1</ref>. To the best of our knowledge, this is the first work that conducts a systematic and comparative study to investigate the design of AoI-efficient scheduling policies for a G/G/1 queue. In the following, we summarize our key contributions along with an explanation of Fig. <ref type="figure">1</ref> and <ref type="table">Table I</ref>.</p><p>First, we investigate the AoI performance of size-based scheduling policies (i.e., the green arrow in Fig. <ref type="figure">1</ref>), which is an orthogonal approach to the arrival-time-based design studied in most existing work. We conduct extensive simulations to show that size-based policies that prioritize small updates significantly improve the AoI performance. We also explain interesting observations from the simulation results and summarize useful guidelines (i.e., Guidelines 1, 2, and 3 in Table <ref type="table">I</ref>) for the design of AoI-efficient policies.</p><p>Second, leveraging both the update-size and arrival-time information, we introduce Guideline 4 and propose AoI-based scheduling policies (i.e., the blue arrow in Fig. <ref type="figure">1</ref>). These AoI-based policies attempt to optimize the AoI at a specific future time instant from three different perspectives: the AoI-Drop-Earliest (ADE) policy, which makes the AoI drop the earliest; the AoI-Drop-to-Smallest (ADS) policy, which makes the AoI drop to the smallest value; and the AoI-Drop-Most (ADM) policy, which makes the AoI drop the most. The simulation results show that such AoI-based policies indeed have a good AoI performance.</p><p>Third, we observe that informative policies can significantly improve the AoI performance compared to their non-informative counterparts, which leads to Guideline 5. 
Integrating all the guidelines, we propose preemptive, informative, AoI-based policies (i.e., the red arrow in Fig. <ref type="figure">1</ref>). The simulation results show that such policies empirically achieve the best AoI performance among all the considered policies.</p><p>Finally, we prove sample-path equivalence between some size-based policies and AoI-based policies. These results provide an intuitive explanation for why some size-based policies, such as Shortest-Remaining-Processing-Time (SRPT), achieve a very good AoI performance.</p><p>To summarize, our study reveals that among various aspects of scheduling policies we investigated, prioritizing small updates, allowing service preemption, and prioritizing informative updates play the most important role in the design of AoI-efficient scheduling policies. The rest of this paper is organized as follows. We first discuss related work in Section II. Then, we describe our system model in Section III. In Section IV, we evaluate the AoI performance of size-based scheduling policies. We further propose AoI-based scheduling policies in Section V. In addition, we evaluate the AoI performance of preemptive, informative, AoI-based policies in Section VI. Finally, we make concluding remarks in Section VII.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. RELATED WORK</head><p>The traditional queueing literature on single-server queues is largely focused on the delay analysis. In <ref type="bibr">[7]</ref>, the authors prove that all non-preemptive scheduling policies that do not make use of the job-size information have the same distribution of the number of jobs in the system. The work of <ref type="bibr">[8]</ref>, <ref type="bibr">[9]</ref> proves that for a work-conserving queue, the SRPT policy minimizes the number of jobs in the system at any point in time and is therefore delay-optimal. The work of <ref type="bibr">[10]</ref> derives formulas for the average delay under several common scheduling policies (which will be discussed in Section IV).</p><p>On the other hand, although the AoI research is still in a nascent stage, it has already attracted a lot of interest (see <ref type="bibr">[11]</ref>, <ref type="bibr">[12]</ref> for a survey). Here we only discuss the most relevant work, which is focused on the AoI-oriented queueing analysis. Much of the existing work considers scheduling policies that are based on the arrival time (such as FCFS and LCFS). The AoI is introduced in <ref type="bibr">[1]</ref>, where the authors study the average AoI in the M/M/1, M/D/1, and D/M/1 queues under the FCFS policy. In <ref type="bibr">[13]</ref>, the AoI performance of the FCFS policy in the M/M/1/1 and M/M/1/2 queues is studied, where new arrivals are discarded if the buffer is full. The average AoI of the LCFS policy in the M/M/1 queue is also discussed in <ref type="bibr">[13]</ref>.</p><p>There has been some work that aims to reduce the AoI by making use of service preemption. In <ref type="bibr">[14]</ref>, the average AoI of LCFS in the M/M/1 queue with and without service preemption is analyzed. The work of <ref type="bibr">[15]</ref> is quite similar to <ref type="bibr">[14]</ref>, but it considers the average AoI in the M/M/2 queue. 
In <ref type="bibr">[16]</ref>, the average AoI for the M/G/1/1 preemptive system with a multi-stream update source is derived. The age-optimality of the preemptive LCFS (LCFS P) policy is proved in <ref type="bibr">[4]</ref> for the case where the service times are exponentially distributed.</p><p>In addition to taking advantage of service preemption, some of the prior studies also consider the strategy of prioritizing informative updates for reducing the AoI. The work of <ref type="bibr">[5]</ref>, <ref type="bibr">[6]</ref> reveals that the AoI performance can be improved by prioritizing informative updates and discarding non-informative updates when making scheduling decisions. In <ref type="bibr">[17]</ref>, the authors consider a G/G/1 queue with informative updates and derive the stationary distribution of the AoI, expressed in terms of the stationary distributions of the delay and the Peak AoI (PAoI). With the AoI distribution, one can analyze the mean or higher moments of the AoI in GI/GI/1, M/GI/1, and GI/M/1 queues under several scheduling policies (e.g., FCFS and LCFS).</p><p>Recent research effort has also been devoted to understanding the relation between the AoI and the delay. In <ref type="bibr">[18]</ref>, the authors analyze the tradeoff between the AoI and the delay in a single-server M/G/1 system under a specific scheduling policy that does not use the service time of each individual update. In <ref type="bibr">[19]</ref>, the violation probability of the delay and the PAoI is investigated under an additive white Gaussian noise (AWGN) channel, where the update sizes are assumed to be identical.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. SYSTEM MODEL</head><p>In this section, we consider a single-server queueing system and give the definitions of the Age-of-Information (AoI) and the Peak AoI (PAoI).</p><p>We model the information-update system as a G/G/1 queue where a single source generates updates (which contain the current state of a measurement or observation of the source) with rate &#955;. The updates enter the queueing system immediately after they are generated. Hence, the generation time is the same as the arrival time. We use S to denote the size of an update (i.e., the amount of time required for the update to complete service), which has a general distribution with mean E[S] = 1/&#181;. The system load is defined as &#961; &#8796; &#955;/&#181;.</p><p>We use t i and t &#8242; i to denote the time at which the i-th update was generated at the source and the time at which it leaves the server, respectively. The AoI at time t is then defined as &#8710;(t) &#8796; t &#8722; U(t), where U(t) &#8796; max i {t i : t &#8242; i &#8804; t} is the generation time of the freshest update among those that have been processed by the server. An example of the AoI evolution under the FCFS policy is shown in Fig. <ref type="figure">2</ref>. Then, the average AoI can be defined as &#8710;&#772; &#8796; lim T&#8594;&#8734; (1/T) &#8747; 0 T &#8710;(t) dt.</p><p>In general, the analysis of the average AoI is quite difficult since it is determined by two dependent quantities: the inter-arrival time and the delay of updates <ref type="bibr">[1]</ref>. We define the inter-arrival time between the i-th update and the (i &#8722; 1)-st update as X i &#8796; t i &#8722; t i&#8722;1 and define the delay of the i-th update as T i &#8796; t &#8242; i &#8722; t i . Alternatively, the Peak AoI (PAoI) is also proposed as an information freshness metric <ref type="bibr">[5]</ref>, which is defined as the maximum value of the AoI before it drops due to a newly delivered fresh update. Let A i be the i-th PAoI. From Fig. <ref type="figure">2</ref>, we can see A i = t &#8242; i &#8722; t i&#8722;1 . 
This can be rewritten as the sum of the inter-arrival time between the i-th update and the previous update (i.e., X i ) and the delay of the i-th update (i.e., T i ). Therefore, the PAoI of the i-th update can also be expressed as A i = X i + T i , and its expectation is E[A i ] = E[X i ] + E[T i ].</p><p>In this section, we investigate the AoI performance of several common scheduling policies, including size-based policies and non-size-based policies, via extensive simulations. Note that these common scheduling policies may serve non-informative updates (which do not lead to a reduced AoI). This is because in some applications, such as news feeds and social networks, obsolete updates are still useful and need to be served <ref type="bibr">[4]</ref>. In Section VI, we will discuss the case where obsolete updates are discarded.</p><p>Following <ref type="bibr">[3]</ref>, we first give the definitions of several common scheduling policies, which can be divided into four types along two dimensions: whether they are size-based, i.e., whether they use the update-size information (which is available in some applications, such as smart grid <ref type="bibr">[2]</ref>) for making scheduling decisions, and whether they are preemptive. The definition of preemption is given below. In this paper, we do not consider the cost of preemption. Definition 1. A policy is preemptive if an update may be stopped partway through its execution and then restarted at a later time without losing intermediary work.</p><p>The first type consists of policies that are non-preemptive and blind to the update size:</p><p>&#8226; First-Come-First-Served (FCFS): When the server frees up, it chooses to serve the update that arrived first, if any. &#8226; Last-Come-First-Served (LCFS): When the server frees up, it chooses to serve the update that arrived last, if any. 
&#8226; Random-Order-Service (RANDOM): When the server frees up, it randomly chooses one update to serve, if any.</p><p>The second type consists of policies that are non-preemptive and make scheduling decisions based on the update size:</p><p>&#8226; Shortest-Job-First (SJF): When the server frees up, it chooses to serve the update with the smallest size, if any.</p><p>The third type consists of policies that are preemptive and blind to the update size:</p><p>&#8226; Processor-Sharing (PS): All the updates in the system are served simultaneously and equally (i.e., each update receives an equal fraction of the available service capacity). &#8226; Preemptive Last-Come-First-Served (LCFS P): This is the preemptive version of the LCFS policy. Specifically, a preemption happens when a new update arrives. The fourth type consists of policies that are preemptive and make scheduling decisions based on the update size:</p><p>&#8226; Preemptive Shortest-Job-First (SJF P): This is the preemptive version of the SJF policy. Specifically, a preemption happens when a new update arrives that has the smallest size. &#8226; Shortest-Remaining-Processing-Time (SRPT): When the server frees up, it chooses to serve the update with the smallest remaining size. In addition, a preemption happens only when a new update arrives whose size is smaller than the remaining size of the update in service. Previous work (see, e.g., [3, Section VII]) reveals that size-based policies can greatly improve the delay performance. In light of such results, we conjecture that size-based policies also achieve a better AoI performance, given that the AoI is dominantly determined by the delay when the system load is high or when the size variability is large <ref type="bibr">[1]</ref>. As we mentioned earlier, it is in general very difficult to obtain the exact expression of the average AoI except for some special cases (e.g., FCFS and LCFS) <ref type="bibr">[1]</ref>, <ref type="bibr">[17]</ref>. 
Therefore, we attempt to investigate the AoI performance of size-based policies through extensive simulations.</p><p>In Figs. <ref type="figure">3</ref> and <ref type="figure">4</ref>, we present the simulation results of the average AoI and PAoI performance, respectively, under the scheduling policies introduced above. Here we assume that a single source generates updates according to a Poisson process with rate &#955;, and the update sizes are independent and identically distributed (i.i.d.). In Fig. <ref type="figure">3</ref>(a), we assume that the update size follows an exponential distribution with mean 1/&#181; = 1. In Figs. <ref type="figure">3(b</ref>) and 3(c), we assume that the update size follows a Weibull distribution with mean 1/&#181; = 1. We define the squared coefficient of variation of the update size as C 2 &#8796; Var(S)/E[S] 2 , i.e., the variance normalized by the square of the mean <ref type="bibr">[3]</ref>. Hence, a larger C 2 means a larger variability. In Fig. <ref type="figure">3</ref>(b), we fix C 2 = 10 and change the value of the system load &#961;, while in Fig. <ref type="figure">3</ref>(c), we fix the system load &#961; = 0.7 and change the value of C 2 . Note that throughout the paper, these simulation settings are used as default settings unless otherwise specified.</p><p>In the following, we will discuss key observations from the simulation results and propose useful guidelines for the design of AoI-efficient policies. Note that similar observations can also be made for the G/G/1 queue. An additional interesting observation is that the average PAoI could be much smaller than the average AoI when the inter-arrival time has a large variability. More detailed simulation results can be found in our technical report <ref type="bibr">[20]</ref>.</p><p>Observation 1. Size-based policies achieve a better average AoI/PAoI performance than non-size-based policies in both non-preemptive and preemptive cases.</p><p>In Fig. 
<ref type="figure">3</ref>, we can see that for the non-preemptive case, SJF has a better average AoI performance than FCFS, RANDOM, and LCFS in various settings. Similarly, for the preemptive case, SJF P and SRPT have a better average AoI performance than PS and LCFS P. Similar observations can be made for the average PAoI performance in Fig. <ref type="figure">4</ref>.</p><p>Observation 2. Under preemptive, size-based policies, the average AoI/PAoI decreases as the system load increases.</p><p>In Figs. <ref type="figure">3(a</ref>) and 3(b), we can see that under SJF P and SRPT, the average AoI decreases as the system load &#961; increases. There are two reasons. First, when &#961; increases while C 2 is fixed, more updates with small size arrive at the queue. Therefore, size-based policies that prioritize updates with small size lead to more frequent AoI drops.</p><p>Second, preemption prevents fresh updates from being blocked by a large or stale update in service. Similar observations can be made for the average PAoI performance in Figs. <ref type="figure">4</ref>(a) and 4(b).</p></div>
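The simulation setup above can be made concrete with a minimal discrete-event sketch (our own illustration, not the authors' code; all names and parameters are hypothetical). It simulates a non-preemptive M/M/1 queue under a pluggable policy and computes the time-average AoI as the integral of &#8710;(t); for FCFS it can be sanity-checked against the closed-form M/M/1 average AoI (1/&#181;)(1 + 1/&#961; + &#961; 2 /(1 &#8722; &#961;)) from <ref type="bibr">[1]</ref>.

```python
import random

def avg_aoi(policy, lam=0.5, mu=1.0, n=200_000, seed=1):
    """Time-average AoI of a non-preemptive M/M/1 queue under `policy`.

    `policy(queue)` returns the index of the update to serve next, where
    `queue` is a list of (generation_time, size) pairs.
    """
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n):                        # Poisson arrivals, i.i.d. sizes
        t += rng.expovariate(lam)
        arrivals.append((t, rng.expovariate(mu)))
    i, queue = 0, []
    t = u = area = last = 0.0                 # u = U(t); area = integral of AoI
    while i < len(arrivals) or queue:
        if not queue:                         # server idles until next arrival
            t = max(t, arrivals[i][0])
        while i < len(arrivals) and arrivals[i][0] <= t:
            queue.append(arrivals[i]); i += 1
        g, s = queue.pop(policy(queue))
        t += s                                # non-preemptive service
        # Delta(x) = x - u grows linearly between deliveries at `last` and `t`
        area += (t - last) * (last - u) + (t - last) ** 2 / 2
        last, u = t, max(u, g)                # the AoI drops only if g > U(t)
    return area / last

fcfs = lambda q: min(range(len(q)), key=lambda j: q[j][0])
lcfs = lambda q: max(range(len(q)), key=lambda j: q[j][0])
sjf  = lambda q: min(range(len(q)), key=lambda j: q[j][1])
```

With &#955; = 0.5 and &#181; = 1 (&#961; = 0.5), avg_aoi(fcfs) should approach 3.5, the closed-form value, and SJF should do no worse, consistent with Observation 1.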
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Observations 1 and 2 lead to the following guideline:</p><p>Guideline 1. When the update-size information is available, one should prioritize updates with small size.</p><p>However, in certain application scenarios, the update-size information may not be available or is difficult to estimate. Hence, the scheduling decisions have to be made without the update-size information. In such scenarios, we make the following observations from Figs. <ref type="figure">3</ref> and <ref type="figure">4</ref>.</p><p>Observation 3. LCFS and LCFS P achieve the best average AoI performance among non-preemptive, non-size-based policies and preemptive, non-size-based policies, respectively. Observation 4. Under LCFS P, the average AoI/PAoI decreases as the system load increases.</p><p>Observations 3 and 4 have also been made in previous work <ref type="bibr">[4]</ref>, <ref type="bibr">[13]</ref>, <ref type="bibr">[21]</ref>. It is quite intuitive that when the update-size information is unavailable, one should give a higher priority to more recent updates. This is because while all the updates have the same expected service time, the most recent update arrives last and thus leads to the smallest AoI once delivered. Therefore, Observations 3 and 4 lead to the following guideline: Guideline 2. When the update-size information is unavailable, one should prioritize recent updates.</p><p>Note that Observations 2 and 4 also suggest that under preemptive policies, the average AoI/PAoI decreases as the system load &#961; increases. This is because preemptions prevent fresh updates from being blocked by a large or stale update in service. In addition, we have also observed the following nice properties of preemptive policies. Observation 5. 
Not only do preemptive policies achieve a better average AoI/PAoI performance than non-preemptive policies, but they are also less sensitive to changes in the update-size variability, i.e., they are more robust.</p><p>In Figs. <ref type="figure">3(a</ref>) and 3(b), we can see that preemptive policies (e.g., LCFS P, SJF P, and SRPT) generally have a better average AoI performance than non-preemptive ones (e.g., FCFS, RANDOM, LCFS, and SJF), especially when the system load is high. In Fig. <ref type="figure">3</ref>(c), we can see that the advantage of preemptive policies becomes larger as the update-size variability (i.e., C 2 ) increases. Moreover, the AoI performance of preemptive policies is only very slightly impacted when the update-size variability changes, while that of non-preemptive policies varies significantly. Therefore, Observations 2, 4, and 5 lead to the following guideline: Guideline 3. Service preemption should be employed when it is allowed.</p><p>In Section IV, we have demonstrated that size-based policies achieve a better average AoI/PAoI performance than non-size-based policies. However, size-based policies do not utilize the arrival-time information, which also plays an important role in reducing the AoI. In this section, we propose three AoI-based scheduling policies, which leverage both the update-size and arrival-time information to reduce the AoI. Our simulation results show that these AoI-based policies outperform non-AoI-based policies.</p><p>We begin with the definitions of three AoI-based policies that attempt to optimize the AoI at a specific future time instant from three different perspectives:</p><p>&#8226; AoI-Drop-Earliest (ADE): When the server frees up, it chooses to serve an update such that once it is delivered, the AoI drops as soon as possible.</p><p>&#8226; AoI-Drop-to-Smallest (ADS): When the server frees up, it chooses to serve an update such that once it is delivered, the AoI drops to a value as small as possible. 
&#8226; AoI-Drop-Most (ADM): When the server frees up, it chooses to serve an update such that once it is delivered, the AoI has the largest drop. If all updates waiting in the queue are obsolete, then the above policies choose to serve an update with the smallest size.</p><p>Although all of these AoI-based policies are quite intuitive, they behave very differently. In order to explain the differences among these AoI-based policies, we present an example in Fig. <ref type="figure">5</ref> to show how the AoI evolves under these policies. Suppose that when the (i &#8722; 1)-st update is being served, three new updates (i.e., the i-th, (i + 1)-st, and (i + 2)-nd updates) arrive in sequence at times t i , t i+1 , and t i+2 , respectively. The sizes of these updates satisfy S i &lt; S i+1 &lt; S i+2 . When the server frees up after it finishes serving the (i &#8722; 1)-st update at time t &#8242; i&#8722;1 , ADE, ADS, and ADM choose to serve the i-th, (i + 1)-st, and (i + 2)-nd updates, respectively. This is because serving the i-th update leads to the earliest AoI drop at time t &#8242; i (following the red curve), serving the (i + 1)-st update leads to the AoI dropping to the smallest value at time t &#8242; i+1 (following the blue curve), and serving the (i + 2)-nd update leads to the largest AoI drop at time t &#8242; i+2 (following the green curve). Clearly, ADE, ADS, and ADM aim to optimize the AoI at a specific future time instant (i.e., the future delivery time of the chosen update) with different myopic goals. Note that at first glance, ADS and ADM may look the same. Indeed, they would be equivalent if the AoI drops happened at the same time instant. However, these two policies are different because the time instants at which the AoI drops are not necessarily the same (e.g., t &#8242; i+1 vs. t &#8242; i+2 in Fig. <ref type="figure">5</ref>).</p><p>Next, we conduct simulations to investigate the AoI performance of these AoI-based policies. In this paper, we present the simulation results for the AoI performance only due to space limitations. 
The results for the PAoI performance can be found in our technical report <ref type="bibr">[20]</ref>. In Fig. <ref type="figure">6</ref>, we present the simulation results of the average AoI performance of the AoI-based policies compared to a representative arrival-time-based policy (i.e., LCFS) and a representative size-based policy (i.e., SJF). All the policies considered here are non-preemptive; the preemptive cases will be discussed in Section VI.</p><p>In Fig. <ref type="figure">6</ref>(a), we observe that most AoI-based policies are slightly better than non-AoI-based policies, although their performances are very close. Among the AoI-based policies, ADE is the best, ADM is the worst, and ADS is in between. It is not surprising that ADM is the worst: although ADM has the largest AoI drop, this comes at the cost that it may have to wait until the AoI becomes large first. ADE being the best suggests that giving a higher priority to small updates (so that the AoI drops as soon as possible) is a good strategy. In Figs. 6(b) and 6(c), similar observations can be made for update sizes following Weibull distributions.</p><p>The above observations lead to the following guideline:</p><p>Guideline 4. Leveraging both the update-size and arrival-time information can further improve the AoI performance. However, the benefit seems marginal.</p></div>
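For a non-preemptive server that frees up at time &#964; with queue {(t j , S j )} and current age anchor U(t) = u, the three myopic rules reduce to simple selection keys: the chosen update is delivered at &#964; + S j , the AoI right after that delivery is (&#964; + S j ) &#8722; t j , and the drop equals t j &#8722; u. The following sketch (our own illustration with hypothetical names, not the authors' code) spells this out, including the smallest-size fallback when all queued updates are obsolete.

```python
def _pick(queue, u, key):
    """Choose among informative updates (generation time > U(t)); if all
    queued updates are obsolete, fall back to serving the smallest one."""
    informative = [j for j, (g, s) in enumerate(queue) if g > u]
    if not informative:
        return min(range(len(queue)), key=lambda j: queue[j][1])
    return min(informative, key=key)

def ade(queue, now, u):
    # delivery happens at now + S_j, so the AoI drops earliest for min S_j
    return _pick(queue, u, key=lambda j: queue[j][1])

def ads(queue, now, u):
    # the AoI right after delivering update j is (now + S_j) - t_j
    return _pick(queue, u, key=lambda j: now + queue[j][1] - queue[j][0])

def adm(queue, now, u):
    # the drop at delivery is t_j - U(t): maximize the generation time
    return _pick(queue, u, key=lambda j: -queue[j][0])
```

On a Fig. 5-like instance with increasing sizes, e.g. arrivals (1.0, 2.0), (2.0, 2.5), (3.0, 5.0) at now = 4.0 and u = 0.5, the three rules pick the first, second, and third update, respectively.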
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VI. PREEMPTIVE, INFORMATIVE, AOI-BASED POLICIES</head><p>In Section IV, we have observed that preemptive policies have several advantages and perform better than non-preemptive policies. In this section, we first demonstrate that policies that prioritize informative updates (i.e., those that can lead to AoI drops once delivered) perform better than non-informative policies. Then, by integrating the guidelines we have obtained, we consider preemptive, informative, AoI-based policies and evaluate their performance through simulations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Informative Policies</head><p>As far as the AoI is concerned, there are two types of updates: informative updates and non-informative updates <ref type="bibr">[22]</ref>. Informative updates lead to AoI drops once delivered, while non-informative updates do not. In some applications, such as autonomous vehicles and stock quotes, it is reasonable to discard non-informative updates (which do not help reduce the AoI but may block new updates). In this subsection, we introduce the "informative" versions of various policies, which prioritize informative updates and discard non-informative updates. Then, we use simulation results to demonstrate that informative policies generally have a better average AoI/PAoI performance than the original (non-informative) ones. Furthermore, we rigorously prove that in a G/M/1 queue, the informative version of LCFS is stochastically better than the original LCFS policy.</p><p>We use &#960; I to denote the informative version of policy &#960;. All the scheduling policies we consider have their informative versions. In some cases, the informative version is simply the same as the original policy (e.g., FCFS and LCFS P).</p><p>In Fig. <ref type="figure">7</ref>, we show the simulation results of the average AoI performance of several informative policies compared to their non-informative counterparts. In order to evaluate the benefit of informative policies, we plot the informative AoI gain, which is the ratio of the difference between the average AoI of the non-informative version and that of the informative version to the average AoI of the non-informative version. Hence, a larger informative AoI gain means a larger benefit of the informative version. One important observation from Fig. <ref type="figure">7</ref> is as follows. Observation 6. Informative policies achieve a better average AoI performance than their non-informative counterparts. 
The informative AoI gain is larger for non-preemptive policies and increases as the system load increases.</p><p>Intuitively, informative policies are expected to outperform their non-informative counterparts because serving non-informative updates cannot reduce the AoI but may block new updates. The simulation results verify this intuition, as the informative AoI gain is always non-negative. Second, we can see that most non-preemptive policies (e.g., RANDOM, LCFS, and SJF) benefit more from prioritizing informative updates. Third, as the system load &#961; increases, the informative AoI gain increases under most considered policies, especially the non-preemptive ones. This is because as the system load increases, the number of non-informative updates also increases, which has a larger negative impact on the AoI performance of non-preemptive, non-informative policies. Observation 6 leads to the following guideline:</p><p>Guideline 5. The server should prioritize informative updates and discard non-informative updates when it is allowed.</p><p>Based on Observation 6, we conjecture that an informative policy is at least as good as its non-informative counterpart. As a preliminary result, we prove that this conjecture is indeed true for LCFS in a G/M/1 queue. In the following, we introduce the stochastic ordering notion, which will be used in the statement of Proposition 1.</p></div>
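Observation 6 can be reproduced with a small experiment (our own sketch, not the paper's code; names and parameters are hypothetical): simulate non-preemptive LCFS in an M/M/1 queue, optionally discarding non-informative updates (generation time no larger than U(t)) as LCFS I does, and compare the time-average AoI.

```python
import random

def lcfs_aoi(informative, lam=0.8, mu=1.0, n=200_000, seed=7):
    """Time-average AoI of non-preemptive LCFS (or LCFS_I) in an M/M/1 queue."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(lam)
        arrivals.append((t, rng.expovariate(mu)))   # (generation time, size)
    i, queue = 0, []
    t = u = area = last = 0.0                       # u = U(t)
    while i < len(arrivals) or queue:
        if not queue:                               # idle until next arrival
            t = max(t, arrivals[i][0])
        while i < len(arrivals) and arrivals[i][0] <= t:
            queue.append(arrivals[i]); i += 1
        if informative:                             # LCFS_I: drop stale updates
            queue = [(g, s) for (g, s) in queue if g > u]
            if not queue:
                continue
        # LCFS: serve the update with the latest generation time
        g, s = queue.pop(max(range(len(queue)), key=lambda j: queue[j][0]))
        t += s
        area += (t - last) * (last - u) + (t - last) ** 2 / 2
        last, u = t, max(u, g)
    return area / last
```

At &#961; = 0.8, lcfs_aoi(True) should be no larger than lcfs_aoi(False) on the same arrival sample path, consistent with Observation 6 and Proposition 1 below.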
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 2. Stochastic Ordering of Stochastic Processes</head><p>A stochastic process {X(t), t ∈ [0, ∞)} is said to be stochastically smaller than a stochastic process {Y(t), t ∈ [0, ∞)}, denoted by {X(t), t ∈ [0, ∞)} ≤_st {Y(t), t ∈ [0, ∞)}, if P{X ∈ S_U} ≤ P{Y ∈ S_U} (2) holds for all upper sets<ref type="foot" target="#foot_1">2</ref> S_U ⊆ R^n and all choices of n and time instants t_1, . . . , t_n, where X ≜ (X(t_1), X(t_2), . . . , X(t_n)) and Y ≜ (Y(t_1), Y(t_2), . . . , Y(t_n)). Stochastic equality can be defined in a similar manner and is denoted by {X(t), t ∈ [0, ∞)} =_st {Y(t), t ∈ [0, ∞)}.</p><p>Roughly speaking, Eq. (<ref type="formula">2</ref>) implies that X is less likely than Y to take on large values, where "large" means any value in an upper set S_U. We also use Δ_π(t) to denote the AoI process under policy π. Furthermore, we let I = {t_1, t_2, . . . } denote a sample path specified by the arrival times of a sequence of updates. Having these definitions and notations, we are now ready to state Proposition 1.</p><p>Proposition 1. In a G/M/1 queue, for all I, the AoI under LCFS I is stochastically smaller than that under LCFS, i.e., {Δ_{LCFS I}(t), t ∈ [0, ∞)} ≤_st {Δ_{LCFS}(t), t ∈ [0, ∞)}. (3)</p><p>We present a sketch of the proof in the following and provide the detailed proof in our online technical report <ref type="bibr">[20]</ref>.</p><p>Proof sketch. We define the system state at time t under policy π as S_π(t) ≜ U_π(t), where U_π(t) is the largest arrival time among the updates that have been served under policy π by time t. Let {S_π(t), t ∈ [0, ∞)} be the state process of policy π. By the definition of the AoI, Eq. (<ref type="formula">3</ref>) holds if the following holds: {S_{LCFS}(t), t ∈ [0, ∞)} ≤_st {S_{LCFS I}(t), t ∈ [0, ∞)}. (4) Next, we prove Eq. (<ref type="formula">4</ref>) by contradiction through a coupling argument. Suppose that stochastic processes Ŝ_{LCFS I}(t) and Ŝ_{LCFS}(t) have the same stochastic laws as S_{LCFS I}(t) and S_{LCFS}(t), respectively. We couple Ŝ_{LCFS I}(t) and Ŝ_{LCFS}(t) in the following manner: if an update i is delivered at time t_i in Ŝ_{LCFS}(t), then the update j being served at t_i (if any) in Ŝ_{LCFS I}(t) is also delivered at the same time. 
This coupling is reasonable because: (i) the updates served in Ŝ_{LCFS I}(t) are not chosen based on update size; and (ii) the service time of an update in both Ŝ_{LCFS I}(t) and Ŝ_{LCFS}(t) is exponentially distributed and thus memoryless. By Theorem 6.B.30 in <ref type="bibr">[23]</ref>, Eq. (<ref type="formula">4</ref>) holds if the following holds:</p><p>P{Ŝ_{LCFS}(t) ≤ Ŝ_{LCFS I}(t) for all t ∈ [0, ∞)} = 1, which can be proven by contradiction.</p></div>
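<p>The coupling step relies on the classical characterization of ≤_st (Theorem 6.B.30 in [23]): X ≤_st Y if and only if copies X̂ and Ŷ can be constructed on a common probability space with X̂ ≤ Ŷ almost surely. The following toy sketch illustrates this device for exponential service times via the standard inverse-transform construction; it is a textbook illustration of the coupling idea, not the paper's proof.</p>

```python
import math
import random

def coupled_exponentials(rate_x, rate_y, n, seed=0):
    """Inverse-transform coupling: draw one uniform U per sample and set
    X = F_X^{-1}(U) and Y = F_Y^{-1}(U). When rate_x >= rate_y, X <= Y holds
    pointwise on every sample, certifying Exp(rate_x) <=_st Exp(rate_y)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u = rng.random()
        pairs.append((-math.log(1.0 - u) / rate_x,   # X ~ Exp(rate_x)
                      -math.log(1.0 - u) / rate_y))  # Y ~ Exp(rate_y)
    return pairs

pairs = coupled_exponentials(2.0, 1.0, 10_000)
# The coupled copies dominate sample by sample, even though the two
# marginal distributions overlap.
assert all(x <= y for x, y in pairs)
```

<p>The proof of Proposition 1 plays the same game with the state processes: it constructs Ŝ_{LCFS} and Ŝ_{LCFS I} on a common probability space and shows sample-path dominance, which by the theorem yields the stochastic ordering in Eq. (4).</p>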
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Preemptive, Informative, AoI-based Policies</head><p>So far, we have demonstrated the advantages of preemptive policies, AoI-based policies, and informative policies. In this subsection, we integrate these three ideas and propose preemptive, informative, AoI-based policies.</p><p>We first consider the preemptive, informative versions of the three AoI-based policies: ADE PI, ADS PI, and ADM PI. Interestingly, we can show equivalence between ADE PI and SRPT I (i.e., the informative version of SRPT) and between ADE I and SJF I (i.e., the informative versions of ADE and SJF, respectively) in the sample-path sense. These results are stated in Propositions 2 and 3. We prove Propositions 2 and 3 using strong induction. The detailed proofs are provided in our online technical report <ref type="bibr">[20]</ref>. Propositions 2 and 3 imply that although SRPT I and SJF I do not explicitly follow an AoI-based design, they are essentially AoI-based policies. This provides an intuitive explanation for why size-based policies, such as variants of SRPT and SJF, have a good empirical AoI performance.</p><p>In Fig. <ref type="figure">8</ref>, we present the simulation results for the average AoI performance of the preemptive, informative, AoI-based policy ADE PI compared to several other policies. We observe that across the various settings we consider, ADE PI achieves the best AoI performance.</p></div>
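<p>The intuition behind the ADE PI/SRPT I equivalence can be sketched as two selection rules that always agree. ADE's formal definition is not restated in this section; the sketch below assumes, as the equivalence suggests, that ADE serves the update whose delivery would trigger the earliest AoI drop. Among informative updates, that delivery time is the current time plus the remaining processing time, so minimizing it coincides with SRPT restricted to informative updates. The dict-based update representation is ours, purely for illustration.</p>

```python
def srpt_i_choice(updates, last_delivered):
    """SRPT_I: among informative updates (arrival time newer than the
    freshest delivered update), serve the one with the shortest
    remaining processing time."""
    informative = [u for u in updates if u["arrival"] > last_delivered]
    if not informative:
        return None
    return min(informative, key=lambda u: u["remaining"])

def ade_pi_choice(updates, last_delivered, now):
    """ADE_PI (assumed form): serve the informative update whose delivery
    would yield the earliest AoI drop, i.e., minimize now + remaining."""
    informative = [u for u in updates if u["arrival"] > last_delivered]
    if not informative:
        return None
    return min(informative, key=lambda u: now + u["remaining"])

updates = [
    {"arrival": 1.0, "remaining": 0.7},   # stale: older than last delivery
    {"arrival": 2.5, "remaining": 1.4},
    {"arrival": 3.0, "remaining": 0.3},
]
# Since `now` is common to all candidates, both rules rank the informative
# updates identically and pick the same one at every decision epoch.
assert srpt_i_choice(updates, 2.0) is ade_pi_choice(updates, 2.0, now=4.0)
```

<p>This pointwise agreement at each decision epoch is the informal core of the sample-path equivalence; the paper's strong-induction proof in [20] handles preemption and the evolving state formally.</p>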
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VII. CONCLUSION</head><p>In this paper, we systematically studied the impact of various aspects of scheduling policies on the AoI performance and provided several useful guidelines for the design of AoI-efficient scheduling policies. Our study reveals that among the various aspects of scheduling policies we investigated, prioritizing small updates, allowing service preemption, and prioritizing informative updates play the most important roles in the design of AoI-efficient scheduling policies. It turns out that common scheduling policies like SRPT and SJF P and their informative variants can achieve a very good AoI performance, although they do not explicitly make scheduling decisions based on the AoI. This can be partially explained by the equivalence between such size-based policies and some AoI-based policies.</p><p>Our findings also raise several interesting questions that are worth investigating as future work. One important direction is to pursue more theoretical results beyond the simulation results provided in this paper. For example, it would be interesting to see whether one can rigorously prove that any informative policy always outperforms its non-informative counterpart, as consistently observed in our simulation results.</p></div><note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0"><p>For simplicity, we omit the additional " " in the policy name if policy π is a preemptive policy ending with " P". For example, we use LCFS PI to denote the informative version of LCFS P.</p></note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1"><p>A set S_U ⊆ R^n is an upper set if y ∈ S_U whenever y ≥ x and x ∈ S_U, where x = (x_1, . . . , x_n) and y = (y_1, . . . , y_n) are two vectors in R^n and y ≥ x if y_i ≥ x_i for all i = 1, 2, . . . , n.</p></note>
		</body>
		</text>
</TEI>
