This work explores systems that deliver source updates requiring multiple sequential processing steps. We model and analyze the Age of Information (AoI) performance of various system designs under both parallel and series server setups. In parallel setups, multiple processors work side by side and each executes all computation steps; in series setups, each processor performs one specific step in sequence. In practice, faster processing lowers the age but also consumes more power. To address this age-power trade-off, we formulate and solve an optimization problem that determines the optimal service rate for each processing step under a given power budget. Our analysis focuses on the special case in which updates require two computational steps. The results show that the service rate of the second step should generally be higher than that of the first to achieve minimum AoI and reduce wasted power. Furthermore, parallel processing offers a better age-power trade-off than series processing.
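The two-step trade-off can be pictured with a small simulation. The sketch below is a simplified discrete-event model, not the exact analysis in this work: a generate-at-will source feeds a two-stage series pipeline, service times at the two steps are exponential with rates `mu1` and `mu2` (an assumption), a finished first-stage update blocks until the second stage is free, and the power budget is represented, purely for illustration, as a cap on `mu1 + mu2`. Sweeping how the budget is split gives a rough feel for how the two service rates affect the average AoI.

```python
import random

def simulate_two_step_aoi(mu1, mu2, n_deliveries=200_000, seed=1):
    """Average AoI for a generate-at-will source feeding a two-stage series
    pipeline. Stage 1 starts a fresh update whenever it is free; its finished
    output blocks until stage 2 is idle. Service times are exponential."""
    rng = random.Random(seed)
    area = 0.0                    # integral of the age sawtooth
    last_d, last_g = 0.0, 0.0     # delivery / generation time of the freshest delivered update
    g1, d1 = 0.0, rng.expovariate(mu1)   # update in stage 1: (generation, completion)
    busy2, g2, d2 = False, 0.0, 0.0      # stage-2 state
    held = None                          # stage-1 output waiting for stage 2

    delivered = 0
    while delivered < n_deliveries:
        if busy2 and d2 <= d1:
            # next event: stage 2 delivers to the monitor
            t = d2
            area += (t - last_d) * ((last_d - last_g) + (t - last_d) / 2)
            last_d, last_g = t, max(last_g, g2)
            delivered += 1
            busy2 = False
            if held is not None:
                # stage 2 immediately takes the waiting update; stage 1 restarts fresh
                g2, d2, busy2 = held, t + rng.expovariate(mu2), True
                held = None
                g1, d1 = t, t + rng.expovariate(mu1)
        else:
            # next event: stage 1 finishes its update
            t = d1
            if busy2:
                held, d1 = g1, float("inf")   # block until stage 2 frees up
            else:
                g2, d2, busy2 = g1, t + rng.expovariate(mu2), True
                g1, d1 = t, t + rng.expovariate(mu1)
    return area / last_d

# Sweep how a fixed total-rate budget (a stand-in for the power budget) is split.
budget = 2.0
for frac in (0.3, 0.4, 0.5, 0.6, 0.7):
    mu1, mu2 = frac * budget, (1 - frac) * budget
    print(f"mu1={mu1:.2f} mu2={mu2:.2f}  avg AoI ~ {simulate_two_step_aoi(mu1, mu2):.3f}")
```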
Storing, Retrieving, and Processing Updates: A Timeliness Perspective
Time-critical applications, such as virtual reality and cyber-physical systems, require not only low end-to-end latency but also the timely delivery of information. While high-speed Ethernet adoption has reduced interconnect fabric latency, bottlenecks persist in data storage, retrieval, and processing. This work examines status updating systems in which sources generate time-stamped updates that are stored in memory, and readers fulfill client requests by accessing these stored updates. Clients then use the retrieved updates for further computation. The asynchronous interaction between writers and readers presents several challenges: (i) readers may encounter stale updates due to temporal disparities between the writing and reading processes, (ii) writers and readers must be synchronized to prevent race conditions, and (iii) clients must process and deliver updates within strict temporal constraints. In the first part, we study optimal reading policies in both discrete and continuous time to minimize the Age of Information (AoI) of source updates at the client. A key contribution of this part is showing that lazy reading is timely. In the second part, we analyze the impact of synchronization primitives on update timeliness in a packet forwarding scenario, where location updates are written to a shared routing table and application updates read from it to ensure correct delivery. Our theoretical and experimental results show that a lock-based primitive is better suited to timely application update delivery at higher location update rates, while a lock-free mechanism is more effective at lower rates. The final part focuses on optimizing update processing when updates require multiple sequential computational steps. We compare the age performance across a range of pipelined and parallel server models and characterize the age-power trade-off in these models. Additionally, our analysis reveals that synchronous sequential processing is more conducive to timely update processing than asynchronous methods, and that parallel processing outperforms pipelined service in terms of AoI.
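As a rough illustration of the synchronization primitives studied in the second part, the sketch below contrasts a lock-protected routing table with a lock-free-style alternative in which the writer publishes a new copy of the table and readers simply follow the latest reference. This is a minimal Python sketch under assumed interfaces (`write`, `read`, a single writer), not the experimental setup used in the work; in CPython, rebinding a single reference is effectively atomic, which is what the copy-on-write variant relies on.

```python
import threading

class LockedTable:
    """Lock-based primitive: writers and readers share one table guarded by a mutex."""
    def __init__(self):
        self._lock = threading.Lock()
        self._routes = {}                  # destination -> (next_hop, written_at)

    def write(self, dest, next_hop, timestamp):
        with self._lock:
            self._routes[dest] = (next_hop, timestamp)

    def read(self, dest):
        with self._lock:
            return self._routes.get(dest)

class CopyOnWriteTable:
    """Lock-free-style reads: the single writer builds a fresh copy and swaps the
    reference; readers never block (illustrative copy-on-write scheme)."""
    def __init__(self):
        self._routes = {}

    def write(self, dest, next_hop, timestamp):
        new = dict(self._routes)
        new[dest] = (next_hop, timestamp)
        self._routes = new                 # single atomic reference swap

    def read(self, dest):
        return self._routes.get(dest)

table = CopyOnWriteTable()
table.write("10.0.0.7", next_hop="eth1", timestamp=42)
print(table.read("10.0.0.7"))
```

Reads in the copy-on-write table never wait, which matches the intuition that a lock-free mechanism helps when location updates are infrequent; at high write rates the per-write copy cost grows, which is one plausible reason a lock-based design can deliver application updates more promptly in that regime.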
- Award ID(s): 2148104
- PAR ID: 10586392
- Publisher / Repository: ProQuest
- Date Published:
- Subject(s) / Keyword(s): Status updating systems; Time-stamped updates; Age of Information; Lock-based primitive
- Format(s): Medium: X
- Institution: Rutgers, The State University of New Jersey
- Sponsoring Org: National Science Foundation
More Like this
Future real-time applications like smart cities will use complex Machine Learning (ML) models for a variety of tasks. Timely status information is required for these applications to be reliable. Offloading computation to a mobile edge cloud (MEC) can reduce the completion time of these tasks. However, using the MEC may come at a cost, such as the monetary cost of a cloud service or a loss of privacy. In this paper, we consider a source that generates time-stamped status updates for delivery to a monitor after processing by the mobile device or MEC. We study how a scheduler must forward these updates to achieve timely updates at the monitor while also limiting MEC usage. We measure timeliness at the monitor using the age of information (AoI) metric. We formulate this problem as an infinite horizon Markov decision process (MDP) with an average cost criterion. We prove that an optimal scheduling policy has an age-threshold structure that depends on how long an update has been in service.
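The age-threshold structure can be pictured with a toy slotted simulation. Everything below, including the local success probability and the assumption that the MEC delivers within the same slot, is hypothetical and only illustrates the form of such a policy, not the MDP solution derived in that work.

```python
import random

def simulate_threshold_offload(threshold, q_local=0.3, n_slots=500_000, seed=1):
    """Toy slotted model with hypothetical parameters. A fresh update starts
    service as soon as the previous one is delivered. Local processing finishes
    with probability q_local in each slot; once an update has spent more than
    `threshold` slots in service, it is offloaded to the MEC, which (as a
    simplification) delivers it within the same slot.
    Returns (average age in slots, fraction of updates offloaded)."""
    rng = random.Random(seed)
    age = 0          # age at the monitor, in slots
    age_sum = 0
    in_service = 0   # slots the current update has spent in service
    offloaded = delivered = 0
    for _ in range(n_slots):
        age += 1
        in_service += 1
        if in_service > threshold:             # age-threshold rule: give up on local processing
            offloaded += 1
            finished = True
        else:
            finished = rng.random() < q_local  # local processing completes this slot
        if finished:
            age = in_service                   # the delivered update is in_service slots old
            delivered += 1
            in_service = 0                     # a fresh update enters service
        age_sum += age
    return age_sum / n_slots, offloaded / max(delivered, 1)

for threshold in (1, 2, 4, 8, 16):
    avg_age, mec_share = simulate_threshold_offload(threshold)
    print(f"threshold={threshold:2d}  avg age={avg_age:.2f}  MEC share={mec_share:.2f}")
```

Sweeping `threshold` trades average age against the fraction of updates offloaded to the MEC.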
A source submits status update jobs to a service facility for processing and delivery to a monitor. The status updates belong to service classes with different service requirements. We model the service requirements using a hyperexponential service time model. To avoid class-specific bias in the service process, the system implements an M/G/1/1 blocking queue; new arrivals are discarded if the server is busy. Using an age of information (AoI) metric to characterize the timeliness of the updates, a stochastic hybrid system (SHS) approach is employed to derive the overall average AoI and the average AoI for each service class. We observe that both the overall AoI and the class-specific AoI share a common penalty that is a function of the second moment of the service time, and they differ chiefly because of their different arrival rates. We show that each high-probability service class has an associated age-optimal update arrival rate, while low-probability service classes incur an average age that is always decreasing in the update arrival rate.
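A quick Monte Carlo sketch of that blocking model (a simulation stand-in, not the SHS derivation): Poisson arrivals are discarded while the server is busy, the service time is hyperexponential, and the time-average AoI is read off the age sawtooth. The arrival rate and class parameters below are illustrative.

```python
import random

def simulate_mg11_aoi(lam, probs, mus, n_arrivals=500_000, seed=1):
    """M/G/1/1 blocking queue with hyperexponential service.
    lam: Poisson arrival rate; probs[i] / mus[i]: class-i probability and rate.
    Returns the simulated time-average AoI over all classes."""
    rng = random.Random(seed)
    t = 0.0
    busy_until = 0.0
    area = 0.0
    last_d, last_g = 0.0, 0.0   # delivery / generation time of the freshest delivered update
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)                 # next arrival
        if t < busy_until:
            continue                              # server busy: the new update is discarded
        k = rng.choices(range(len(probs)), weights=probs)[0]
        d = t + rng.expovariate(mus[k])           # delivery time of this update
        area += (d - last_d) * ((last_d - last_g) + (d - last_d) / 2)
        last_d, last_g = d, t
        busy_until = d
    return area / last_d

# Example with two service classes (illustrative numbers).
print(simulate_mg11_aoi(lam=1.0, probs=[0.8, 0.2], mus=[2.0, 0.5]))
```

Tracking which class the freshest delivered update belongs to would, in the same way, give the class-specific average ages.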
Consistency in data storage systems requires any read operation to return the most recent written version of the content. In replicated storage systems, consistency comes at the price of delay due to large-scale write and read operations. Many applications with low latency requirements tolerate data staleness in order to provide high availability and low operation latency. Using age of information as the staleness metric, we examine a data updating system in which real-time content updates are replicated and stored in a Dynamo-style quorum-based distributed system. A source sends updates to all the nodes in the system and waits for acknowledgements from the earliest subset of nodes, known as a write quorum. An interested client fetches the update from another set of nodes, defined as a read quorum. We analyze the staleness-delay tradeoff in replicated storage by varying the write quorum size. With a larger write quorum, an instantaneous read is more likely to get the latest update written by the source. However, the content written to the system is more likely to become stale as the write quorum size increases. For shifted-exponentially distributed write delay, we derive the age-optimized write quorum size that balances the likelihood of reading the latest update against the freshness of the latest update written by the source.
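The write-quorum trade-off can be sanity-checked numerically. The snippet below is a simplified calculation rather than the age analysis in that work; it uses two standard facts: for n i.i.d. shifted-exponential write delays c + Exp(lambda), the expected time to collect the w fastest acknowledgements is c + (H_n - H_{n-w})/lambda, and a uniformly random read quorum of size r misses all w written nodes with probability C(n-w, r)/C(n, r). The parameter values are illustrative.

```python
import math

def write_commit_time(n, w, lam, c):
    """Expected time for the source to collect w acks out of n nodes,
    with shifted-exponential write delay c + Exp(lam) at each node."""
    harmonic = lambda m: sum(1.0 / i for i in range(1, m + 1))
    return c + (harmonic(n) - harmonic(n - w)) / lam

def read_hits_latest(n, w, r):
    """Probability that a uniformly random read quorum of size r
    intersects the set of w nodes holding the latest committed update."""
    return 1.0 - math.comb(n - w, r) / math.comb(n, r)

n, r, lam, c = 10, 3, 1.0, 0.1   # illustrative parameters
for w in range(1, n + 1):
    print(f"w={w:2d}  E[commit time]={write_commit_time(n, w, lam, c):.3f}"
          f"  P[read sees latest]={read_hits_latest(n, w, r):.3f}")
```

Larger w makes an instantaneous read more likely to see the freshest committed update, but it also increases the time, and hence the age, of that update at the moment it is committed; the age-optimal w balances the two effects.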
A source generates time-stamped update packets that are sent to a server and then forwarded to a monitor. This occurs in the presence of an adversary that can infer information about the source by observing the output process of the server. The server wishes to release updates in a timely way to the monitor but also wishes to minimize the information leaked to the adversary. We analyze the trade-off between the age of information (AoI) and the maximal leakage for systems in which the source generates updates as a Bernoulli process. For a time-slotted system in which sending an update requires one slot, we consider three server policies: (1) Memoryless with Bernoulli Thinning (MBT): arriving updates are queued with some probability and the head-of-line update is released after a geometric holding time; (2) Deterministic Accumulate-and-Dump (DAD): the most recently generated update (if any) is released after a fixed time; (3) Random Accumulate-and-Dump (RAD): the most recently generated update (if any) is released after a geometric waiting time. We show that for the same maximal leakage rate, the DAD policy achieves lower age compared to the other two policies but is restricted to discrete age-leakage operating points.
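As a rough picture of the DAD policy, the slotted sketch below generates Bernoulli(p) updates, buffers only the freshest one, and releases it every `dump_period` slots, with delivery taking one slot. The timing conventions (when within a slot generation, delivery, and dumping happen) are assumptions rather than the model in that work, and the sketch estimates only the age side of the age-leakage trade-off.

```python
import random

def simulate_dad_age(p, dump_period, n_slots=1_000_000, seed=1):
    """Average AoI (in slots) under a Deterministic Accumulate-and-Dump policy:
    each slot an update is generated with probability p; every dump_period slots
    the freshest buffered update (if any) is sent and arrives one slot later."""
    rng = random.Random(seed)
    age = 0                 # age at the monitor, in slots
    age_sum = 0
    freshest = None         # generation slot of the freshest undelivered update
    in_flight = None        # (arrival slot, generation slot) of the update being sent
    for t in range(1, n_slots + 1):
        if rng.random() < p:
            freshest = t                          # source generates an update this slot
        if in_flight is not None and t == in_flight[0]:
            age = t - in_flight[1]                # update reaches the monitor
            in_flight = None
        else:
            age += 1
        if t % dump_period == 0 and freshest is not None and in_flight is None:
            in_flight = (t + 1, freshest)         # dump the freshest update, arrives next slot
            freshest = None
        age_sum += age
    return age_sum / n_slots

print(simulate_dad_age(p=0.5, dump_period=3))
```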