Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- 
Abstract: Recent advances in AI mark the culmination of a shift in science and engineering away from strong reliance on algorithmic and symbolic knowledge towards new data-driven approaches. How does the emerging intelligent data-centric world impact research on real-time and embedded computing? We argue for two effects: (1) new challenges in embedded system contexts, and (2) new opportunities for community expansion beyond the embedded domain. First, on the embedded system side, the shifting nature of computing towards data-centricity affects the types of bottlenecks that arise. At training time, the bottlenecks are generally data-related. Embedded computing relies on scarce sensor data modalities, unlike those commonly addressed in mainstream AI, necessitating solutions for efficient learning from scarce sensor data. At inference time, the bottlenecks are resource-related, calling for improved resource economy and novel scheduling policies. Further ahead, the convergence of AI around large language models (LLMs) introduces additional model-related challenges in embedded contexts. Second, on the domain expansion side, we argue that community expertise in handling resource bottlenecks is becoming increasingly relevant to a new domain: the cloud environment, driven by AI needs. The paper discusses the novel research directions that arise in the data-centric world of AI, covering data-, resource-, and model-related challenges in embedded systems as well as new opportunities in the cloud domain.
Free, publicly accessible full text available June 1, 2026.
- 
Time-critical applications, such as virtual reality and cyber-physical systems, require not only low end-to-end latency, but also the timely delivery of information. While high-speed Ethernet adoption has reduced interconnect fabric latency, bottlenecks persist in data storage, retrieval, and processing. This work examines status updating systems where sources generate time-stamped updates that are stored in memory, and readers fulfill client requests by accessing these stored updates. Clients then utilize the retrieved updates for further computations. The asynchronous interaction between writers and readers presents challenges, including: (i) the potential for readers to encounter stale updates due to temporal disparities between the writing and reading processes, (ii) the necessity to synchronize writers and readers to prevent race conditions, and (iii) the imperative for clients to process and deliver updates within strict temporal constraints. In the first part, we study optimal reading policies in both discrete and continuous time domains to minimize the Age of Information (AoI) of source updates at the client. One of the main contributions of this part is showing that lazy reading is timely. In the second part, we analyze the impact of synchronization primitives on update timeliness in a packet forwarding scenario, where location updates are written to a shared routing table, and application updates read from it to ensure correct delivery. Our theoretical and experimental results show that a lock-based primitive is suitable for timely application update delivery at higher location update rates, while a lock-free mechanism is more effective at lower rates. The final part focuses on optimizing update processing when updates require multiple sequential computational steps. We compare the age performance across a multitude of pipelined and parallel server models and characterize the age-power trade-off in these models. Additionally, our analysis reveals that synchronous sequential processing is more conducive to timely update processing than asynchronous methods, and that parallel processing outperforms pipeline services in terms of AoI.
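As a minimal illustration of the AoI metric used throughout (a toy sketch, not this work's model), the time-average age can be computed from a trace of (generation time, delivery time) pairs: age grows linearly between deliveries and drops at each delivery of a fresher update.

```python
def time_average_aoi(updates, horizon):
    """Time-average Age of Information over [0, horizon].

    updates: list of (gen_time, delivery_time) pairs, sorted by delivery
    time. Age at time t is t minus the generation time of the freshest
    update delivered so far; a fresh update with gen_time 0 is assumed
    already delivered at t = 0.
    """
    area, t, gen = 0.0, 0.0, 0.0
    for g, d in updates:
        if d > horizon:
            break
        # Age grows linearly from (t - gen) to (d - gen) over [t, d].
        area += (d - t) * (t - gen) + 0.5 * (d - t) ** 2
        if g > gen:          # a stale update does not reduce the age
            gen = g
        t = d
    # Tail segment after the last delivery.
    area += (horizon - t) * (t - gen) + 0.5 * (horizon - t) ** 2
    return area / horizon
```

For example, with updates generated at times 0.5 and 1.5 and delivered at 1.0 and 2.0, `time_average_aoi([(0.5, 1.0), (1.5, 2.0)], 3.0)` evaluates to 2.5/3 ≈ 0.83.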
- 
This work explores systems that deliver source updates requiring multiple sequential processing steps. We model and analyze the Age of Information (AoI) performance of various system designs under both parallel and series server setups. In parallel setups, each processor executes all computation steps with multiple processors working in parallel, while in series setups, each processor performs a specific step in sequence. In practice, processing faster is better in terms of age but also consumes more power. To address this age-power trade-off, we formulate and solve an optimization problem to determine the optimal service rates for each processing step under a given power budget. Our analysis focuses on a special case where updates require two computational steps. The results show that the service rate of the second step should generally be faster than that of the first step to achieve minimum AoI and reduce power wastage. Furthermore, parallel processing is found to offer a better age-power trade-off compared to series processing.
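A two-step series setup can be probed numerically with a small simulation. The sketch below is an assumption-laden toy, not the paper's exact model: a zero-wait system where each update is generated the instant the previous one is delivered and then passes through two exponential service steps with rates mu1 and mu2, so the age at each delivery resets to that update's total service time.

```python
import random

def sim_two_step_aoi(mu1, mu2, n=200_000, seed=1):
    """Simulated time-average AoI of a zero-wait two-step series system.

    Each update is generated the moment the previous one is delivered,
    then served by step 1 (exponential rate mu1) and step 2 (rate mu2).
    """
    rng = random.Random(seed)
    area, elapsed, age = 0.0, 0.0, 0.0
    for _ in range(n):
        sys_time = rng.expovariate(mu1) + rng.expovariate(mu2)
        # Age rises linearly by sys_time, then resets to sys_time.
        area += age * sys_time + 0.5 * sys_time ** 2
        elapsed += sys_time
        age = sys_time
    return area / elapsed
```

With mu1 = mu2 = 1, the zero-wait system admits the closed form E[T] + E[T^2]/(2 E[T]) = 3.5 for T the total service time, which the simulation should approach. Note that this toy model is symmetric in (mu1, mu2); the result that the second step's rate should exceed the first's arises in the authors' richer model, not in this simplified sketch.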
- 
We consider a system where the updates from independent sources are disseminated via a publish-subscribe mechanism. The sources are the publishers, and a decision process (DP), acting as a subscriber, derives decision updates from the source data. We derive the stationary expected age of information (AoI) of decision updates delivered to a monitor. We show that a lazy computation policy, in which the DP may sit idle before computing its next decision update, can reduce the average AoI at the monitor even though the DP exerts no control over the generation of source updates. This AoI reduction is shown to occur because lazy computation can offset the negative effect of high variance in the computation time.
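The lazy-computation policy can be sketched in a toy simulator. Every parameter here is an illustrative assumption rather than the paper's model: a stationary Poisson publisher of rate `lam`, a hyperexponential (high-variance) computation time, and a fixed idle period `wait` before each computation (`wait = 0` is the eager policy).

```python
import random

def sim_decision_aoi(wait, lam=1.0, n=100_000, seed=7):
    """Time-average AoI of decision updates at the monitor.

    After delivering a decision, the DP idles for `wait` seconds, reads
    the freshest source update, and computes for a hyperexponential
    (high-variance) time before delivering the next decision.
    """
    rng = random.Random(seed)
    area, elapsed, age = 0.0, 0.0, 0.0
    for _ in range(n):
        # Backward recurrence time of a stationary Poisson process is
        # Exp(lam): the source data read is already this old.
        staleness = rng.expovariate(lam)
        # Hyperexponential computation time: usually fast, rarely slow.
        comp = rng.expovariate(5.0) if rng.random() < 0.9 else rng.expovariate(0.1)
        cycle = wait + comp
        # Monitor age rises linearly over the cycle ...
        area += age * cycle + 0.5 * cycle ** 2
        elapsed += cycle
        # ... then drops to the new decision's age, if that is fresher.
        age = min(age + cycle, staleness + comp)
    return area / elapsed
```

Sweeping `wait` over a grid and comparing against `sim_decision_aoi(0.0)` gives a numerical feel for when idling before computing pays off; the closed-form stationary analysis is in the paper itself.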
Full Text Available