Title: Timely and Energy-Efficient Multi-Step Update Processing
This work explores systems that deliver source updates requiring multiple sequential processing steps. We model and analyze the Age of Information (AoI) performance of various system designs under both parallel and series server setups. In parallel setups, each processor executes all computation steps with multiple processors working in parallel, while in series setups, each processor performs a specific step in sequence. In practice, processing faster is better in terms of age but it also consumes more power. To address this age-power trade-off, we formulate and solve an optimization problem to determine the optimal service rates for each processing step under a given power budget. Our analysis focuses on a special case where updates require two computational steps. The results show that the service rate of the second step should generally be faster than that of the first step to achieve minimum AoI and reduce power wastage. Furthermore, parallel processing is found to offer a better age-power trade-off compared to series processing.
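As a point of reference for the age-power trade-off, the zero-wait baseline is easy to evaluate in closed form. The Python sketch below is illustrative, not the paper's model: it assumes a single processor runs both exponential steps back-to-back and a fresh update is generated the instant the previous one is delivered, so the standard sawtooth identity AoI = E[S] + E[S^2]/(2E[S]) applies with total service time S = S1 + S2.

```python
def two_step_aoi(mu1, mu2):
    """Average AoI of a zero-wait, single-processor system in which each
    update needs two exponential steps with rates mu1 and mu2, so the
    total service time is S = S1 + S2.  For i.i.d. zero-wait sawtooths,
    average AoI = E[S] + E[S^2] / (2 E[S])."""
    es = 1.0 / mu1 + 1.0 / mu2                             # E[S]
    es2 = 2.0 / mu1**2 + 2.0 / mu2**2 + 2.0 / (mu1 * mu2)  # E[S^2]
    return es + es2 / (2.0 * es)

# Age falls as the total service-rate (power) budget grows.
for budget in (2.0, 4.0, 8.0):
    print(budget, two_step_aoi(budget / 2, budget / 2))
```

Note that this baseline is symmetric in (mu1, mu2); the asymmetry favoring a faster second step reported in the abstract arises in the pipelined and parallel multi-server models the paper analyzes, which this sketch does not capture.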
Award ID(s):
2148104
PAR ID:
10586385
Author(s) / Creator(s):
; ;
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-5405-8
Page Range / eLocation ID:
116 to 120
Format(s):
Medium: X
Location:
Pacific Grove, CA, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Time-critical applications, such as virtual reality and cyber-physical systems, require not only low end-to-end latency, but also the timely delivery of information. While high-speed Ethernet adoption has reduced interconnect fabric latency, bottlenecks persist in data storage, retrieval, and processing. This work examines status updating systems where sources generate time-stamped updates that are stored in memory, and readers fulfill client requests by accessing these stored updates. Clients then utilize the retrieved updates for further computations. The asynchronous interaction between writers and readers presents challenges, including: (i) the potential for readers to encounter stale updates due to temporal disparities between the writing and reading processes, (ii) the necessity to synchronize writers and readers to prevent race conditions, and (iii) the imperative for clients to process and deliver updates within strict temporal constraints. In the first part, we study optimal reading policies in both discrete and continuous time domains to minimize the Age of Information (AoI) of source updates at the client. One of the main contributions of this part is showing that lazy reading is timely. In the second part, we analyze the impact of synchronization primitives on update timeliness in a packet forwarding scenario, where location updates are written to a shared routing table, and application updates read from it to ensure correct delivery. Our theoretical and experimental results show that using a lock-based primitive is suitable for timely application update delivery at higher location update rates, while a lock-free mechanism is more effective at lower rates. The final part focuses on optimizing update processing when updates require multiple sequential computational steps. We compare the age performance across a multitude of pipelined and parallel server models and characterize the age-power trade-off in these models.
Additionally, our analysis reveals that synchronous sequential processing is more conducive to timely update processing than asynchronous methods, and that parallel processing outperforms pipelined services in terms of AoI.
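All of the results in this listing rest on the same sawtooth picture of the Age of Information: the monitor's age grows at unit rate and drops to (delivery time - generation time) whenever an update is delivered. A generic helper (a sketch, not any paper's code) makes the metric concrete:

```python
def time_average_aoi(events, horizon):
    """Time-average age at a monitor whose age starts at 0 at t = 0.
    events: (generation_time, delivery_time) pairs, sorted by delivery
    time.  Between deliveries the age grows at unit rate; at each
    delivery it resets to (delivery_time - generation_time)."""
    area = 0.0          # integral of the age sawtooth
    t_prev, age = 0.0, 0.0
    for gen, deliv in events:
        dt = deliv - t_prev
        area += age * dt + dt * dt / 2.0   # trapezoid up to this delivery
        age, t_prev = deliv - gen, deliv
    dt = horizon - t_prev                  # tail after the last delivery
    area += age * dt + dt * dt / 2.0
    return area / horizon
```

For example, updates generated at times 0 and 1 and each delivered one second later yield a time-average age of 1 over the horizon [0, 2].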
  2. A source submits status update jobs to a service facility for processing and delivery to a monitor. The status updates belong to service classes with different service requirements. We model the service requirements using a hyperexponential service time model. To avoid class-specific bias in the service process, the system implements an M/G/1/1 blocking queue; new arrivals are discarded if the server is busy. Using an age-of-information (AoI) metric to characterize timeliness of the updates, a stochastic hybrid system (SHS) approach is employed to derive the overall average AoI and the average AoI for each service class. We observe that both the overall AoI and class-specific AoI share a common penalty term that is a function of the second moment of the service time, and they differ chiefly because of their different arrival rates. We show that each high-probability service class has an associated age-optimal update arrival rate, while low-probability service classes incur an average age that is always decreasing in the update arrival rate.
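The M/G/1/1 blocking discipline described above is straightforward to simulate. The sketch below (Monte Carlo with illustrative parameter names, not the paper's SHS derivation) discards arrivals that find the server busy and draws each accepted update's service rate from a hyperexponential mixture.

```python
import random

def mg11_blocking_aoi(lam, probs, mus, n=200_000, seed=3):
    """Estimate average AoI for an M/G/1/1 blocking queue: Poisson
    arrivals at rate lam, hyperexponential service (rate mus[i] with
    probability probs[i]), and arrivals during service discarded.
    By memorylessness, the idle gap after each departure is Exp(lam)."""
    rng = random.Random(seed)
    t = t_prev = age = area = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)                 # idle until accepted arrival
        gen = t
        mu = rng.choices(mus, weights=probs)[0]   # sample the service class
        t += rng.expovariate(mu)                  # service; arrivals blocked
        dt = t - t_prev
        area += age * dt + dt * dt / 2.0          # sawtooth area of the cycle
        age, t_prev = t - gen, t                  # age resets at delivery
    return area / t_prev
```

With a single exponential class and lam = mu = 1, a direct renewal calculation gives average AoI 2.5, which the estimate reproduces; the class-specific AoI decomposition in the paper requires the SHS analysis.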
  3. Age of information has been proposed recently to measure information freshness, especially for a class of real-time video applications. These applications often demand timely updates with edge cloud computing to guarantee the user experience. However, the edge cloud is usually equipped with limited computation and network resources and therefore, resource contention among different video streams can contribute to making the updates stale. Aiming to minimize a penalty function of the weighted sum of the average age over multiple end users, this paper presents a greedy traffic scheduling policy for the processor to choose the next processing request with the maximum immediate penalty reduction. In this work, we formulate the service process when requests from multiple users arrive at edge cloud servers asynchronously and show that the proposed greedy scheduling algorithm is the optimal work-conserving policy for a class of age penalty functions.
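At each decision epoch, the greedy policy above reduces to an argmax over pending requests. The selection step alone can be sketched as follows (field names are illustrative, and the linear weighted-age penalty here is only the simplest member of the penalty class the paper treats):

```python
def greedy_next_request(pending, ages, weights, t_now):
    """Return the user whose pending request gives the largest immediate
    drop in weighted age.  pending maps user -> generation time of that
    user's freshest waiting request; serving user u now resets its age
    from ages[u] down to (t_now - pending[u])."""
    return max(pending,
               key=lambda u: weights[u] * (ages[u] - (t_now - pending[u])))
```

The processor calls this each time it frees up, which is exactly the work-conserving behavior the optimality result concerns.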
  4. Future real-time applications like smart cities will use complex Machine Learning (ML) models for a variety of tasks. Timely status information is required for these applications to be reliable. Offloading computation to a mobile edge cloud (MEC) can reduce the completion time of these tasks. However, using the MEC may come at a cost, for example one related to use of a cloud service or to privacy. In this paper, we consider a source that generates time-stamped status updates for delivery to a monitor after processing by the mobile device or MEC. We study how a scheduler must forward these updates to achieve timely updates at the monitor but also limit MEC usage. We measure timeliness at the monitor using the age of information (AoI) metric. We formulate this problem as an infinite horizon Markov decision process (MDP) with an average cost criterion. We prove that an optimal scheduling policy has an age-threshold structure that depends on how long an update has been in service.
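The age-threshold structure in the last item can be illustrated, though not reproduced, with a toy zero-wait simulation: run each update on the device, offload it to the MEC only once its time in service exceeds a threshold tau, and charge a fixed cost per offload. The rates, the cost, and the restart-on-offload assumption are all hypothetical; the paper derives the optimal policy from an average-cost MDP.

```python
import random

def threshold_offload(tau, mu_local=0.5, mu_mec=2.0, mec_cost=1.0,
                      n=100_000, seed=5):
    """Toy age-threshold policy: each zero-wait update starts on the
    mobile device (Exp(mu_local)); if still in service after tau, it is
    restarted on the MEC (Exp(mu_mec)) at a fixed cost per offload.
    Returns (avg AoI + mec_cost * offload fraction, offload fraction),
    using the sawtooth identity AoI = E[S] + E[S^2] / (2 E[S])."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    offloads = 0
    for _ in range(n):
        s = rng.expovariate(mu_local)     # would-be local finish time
        if s > tau:                       # threshold exceeded: offload
            offloads += 1
            s = tau + rng.expovariate(mu_mec)
        total += s
        total_sq += s * s
    es, es2 = total / n, total_sq / n
    frac = offloads / n
    return es + es2 / (2.0 * es) + mec_cost * frac, frac
```

Sweeping tau trades MEC usage against age: tau = 0 offloads every update, while a very large tau never offloads.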