


Search for: All records

Award ID contains: 2233769

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Digital Twins (DTs) have emerged as essential tools for virtualizing and enhancing Cyber-Physical Systems (CPS) by providing synchronized digital counterparts that enable monitoring, control, prediction, and optimization. Initially conceived as passive digital shadows, DTs are increasingly evolving into intelligent and proactive entities, enabled by the integration of Artificial Intelligence (AI). Among these advancements, Opportunistic Digital Twins (ODTs) represent a novel class of DTs: living, AI-aided, and actionable models that opportunistically exploit edge–cloud resources to deliver enriched and adaptive representations of physical entities and processes. However, despite their promise, current research lacks systematic engineering methods to ensure reliable coordination, determinism, and real-time responsiveness of ODTs in distributed and resource-constrained CPS. This article addresses this gap by introducing an engineering approach to build dependable and efficient ODTs by leveraging the deterministic concurrency, explicit timing semantics, and disciplined event handling of LINGUA FRANCA (LF). The approach is exemplified through a Smart Traffic Management case study centered on Emergency Vehicle Preemption (EVP), where the ODT dynamically selects AI models based on runtime conditions while ensuring deterministic coordination across distributed nodes. Experimental results confirm the feasibility and effectiveness of our methodology, underscoring the potential of LF-based ODT engineering to enhance reliability, adaptability, and scalability in intelligent and distributed CPS deployments. 
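The runtime model-selection step this abstract describes can be sketched in a few lines; the model names, latency numbers, and selection policy below are invented for illustration and are not taken from the article.

```python
# Hypothetical sketch of an ODT choosing an AI model at runtime based on
# current conditions. All names and numbers here are made up.
MODELS = [
    # (name, expected_latency_ms, accuracy) -- illustrative values only
    ("yolo_large_cloud", 120.0, 0.93),
    ("yolo_small_edge",   25.0, 0.85),
    ("audio_only_edge",    8.0, 0.72),
]

def select_model(latency_budget_ms: float):
    """Pick the most accurate model whose expected latency fits the budget."""
    feasible = [m for m in MODELS if m[1] <= latency_budget_ms]
    if not feasible:
        return MODELS[-1]  # degrade to the cheapest model rather than stall
    return max(feasible, key=lambda m: m[2])

assert select_model(200.0)[0] == "yolo_large_cloud"  # ample budget: best accuracy
assert select_model(30.0)[0] == "yolo_small_edge"    # tight budget: edge model
```

In the article's setting, such a selection would run inside a deterministically coordinated LF reactor, so that all distributed nodes agree on which model was active at each logical time.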
  2. Discrete-event (DE) systems are concurrent programs where components communicate via tagged events, with tags drawn from a totally ordered set. Distributed DE (DDE) systems are DE systems where the components (reactors) communicate over networks. Most execution platforms require that for DDE systems with cycles, each cycle must contain at least one logical delay, where the tag of events is incremented. Some impose an even stronger constraint: no component may produce outputs with the same timestamp as a triggering input (the “lookahead” for the component must be greater than zero). Such restrictions, however, are not required by the elegant fixed-point semantics of DE. The only fundamental requirement is that the program be constructive, meaning it is free from causality cycles. In this article, we propose a way to coordinate the execution of DDE systems that can execute any constructive program, even one with zero-delay cycles (ZDC), facilitating the elegant programming of strongly consistent distributed real-time systems. The proposed coordination provides a formal model that exposes exactly the information that must be shared across networks for such execution to be possible. Our solution avoids speculative execution and rollback, making it suitable for situations that do not tolerate rollback, such as deployment (vs. simulation) of cyber-physical systems (CPSs). We describe an extension to the coordination mechanisms in Lingua Franca, a recent DE-based coordination language, to support ZDC.
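The totally ordered tags this abstract relies on can be sketched as superdense-time pairs; the `Tag` class and `logical_delay` helper below are an illustrative Python model, not Lingua Franca's actual API.

```python
from dataclasses import dataclass

# Sketch of tags drawn from a totally ordered set, as in LF-style DE systems:
# a (time, microstep) pair compared lexicographically. A positive logical
# delay advances time; a zero delay advances only the microstep.
@dataclass(frozen=True, order=True)
class Tag:
    time: int       # logical time, e.g. in nanoseconds
    microstep: int  # breaks ties among events at the same logical time

def logical_delay(tag: Tag, d: int) -> Tag:
    """Return the tag of an event forwarded through a logical delay of d."""
    if d > 0:
        return Tag(tag.time + d, 0)
    return Tag(tag.time, tag.microstep + 1)

t0 = Tag(0, 0)
t1 = logical_delay(t0, 0)   # zero delay: same time, later microstep
t2 = logical_delay(t1, 5)   # positive delay: time advances, microstep resets
assert t0 < t1 < t2         # tags remain totally ordered
```

The article's contribution is precisely that a constructive program need not insert such a delay in every cycle: its fixed-point semantics handles zero-delay cycles without speculation or rollback.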
  3. To design performant, expressive, and reliable cyber-physical systems (CPSs), researchers extensively perform quasi-static scheduling for concurrent models of computation (MoCs) on multi-core hardware. However, these quasi-static scheduling approaches are developed independently for their corresponding MoCs, despite commonality in the approaches. To help generalize the use of quasi-static scheduling to new and emerging MoCs, this article proposes a unified approach for a class of deterministic timed concurrent models (DTCMs), including prominent models such as synchronous dataflow (SDF), Boolean-controlled dataflow (BDF), scenario-aware dataflow (SADF), and Logical Execution Time (LET). In contrast to scheduling techniques tailored exclusively to specific MoCs, our unified approach leverages a common intermediate formalism called state space finite automata (SSFA), bridging the gap between high-level MoCs and executable schedules. Once identified as DTCMs, new MoCs can directly adopt SSFA-based scheduling, significantly easing adoption. We show that quasi-static schedules facilitated by SSFA are provably free from timing anomalies and enable straightforward worst-case makespan analysis. We demonstrate the approach using the reactor model—an emerging discrete-event MoC—programmed using the Lingua Franca (LF) language. Experiments show that quasi-statically scheduled LF programs exhibit lower runtime overhead compared to dynamically scheduled LF programs, and that the analyzable worst-case makespans enable compile-time deadline checking.
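For SDF, one of the DTCMs named in this abstract, a quasi-static schedule starts from the repetition vector solving the balance equations. The sketch below is the classic balance-equation computation, not the paper's SSFA construction, and assumes a connected graph.

```python
from fractions import Fraction
from math import lcm

# For each channel src -> dst with production rate p and consumption rate c,
# a consistent SDF graph satisfies p * reps(src) == c * reps(dst).
def repetition_vector(edges):
    """edges: list of (src, prod_rate, dst, cons_rate) for a connected graph."""
    reps = {edges[0][0]: Fraction(1)}   # pin one actor, propagate the rest
    changed = True
    while changed:
        changed = False
        for src, p, dst, c in edges:
            if src in reps and dst not in reps:
                reps[dst] = reps[src] * p / c
                changed = True
            elif dst in reps and src not in reps:
                reps[src] = reps[dst] * c / p
                changed = True
    # scale to the smallest positive integer solution
    scale = lcm(*(f.denominator for f in reps.values()))
    return {a: int(f * scale) for a, f in reps.items()}

# A produces 2 tokens per firing, B consumes 3: A fires 3x for every 2 of B.
print(repetition_vector([("A", 2, "B", 3)]))  # {'A': 3, 'B': 2}
```

A quasi-static scheduler then fixes the firing order of these repetitions at compile time, leaving only data-dependent choices (as in BDF or SADF) to runtime.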
  4. Ensuring predictable and deterministic behavior in distributed cyber-physical systems (CPS) is essential for guaranteeing safety, reliability, and real-time behavior. However, achieving this predictability is challenging due to network uncertainties, asynchronous execution, and complex timing interactions. This manuscript is based on a special session at Embedded Systems Week (ESWeek) 2025, which brings together experts to explore in four presentations how this uncertainty can be addressed and how to introduce additional determinism into the system to achieve predictable timing behavior in distributed CPS. We begin by exploring cornerstones of timing analysis techniques to provide end-to-end latency guarantees for distributed systems (Chen and Günzel). Next, we discuss design strategies for meeting timing constraints, focusing on how system parameters influence cause-effect chains and how these parameters can be tuned to ensure predictable behavior in industrial automation settings (Dasari and Becker). We then turn to approaches to achieve more predictable system behavior. To that end, we examine deterministic semantic models for distributed systems that enable the design of robust and fault-tolerant systems (Lee). Finally, we discuss how solving constraints for scheduling cause-effect chains can be used to enforce strict timing guarantees and improve predictability (Bourke).
  5. The nondeterministic ordering of message handling in the original actor model makes it difficult to achieve the consistency across a distributed system that some applications require. This paper explores a number of mitigations, focusing primarily on the use of logical time to define a semantic ordering for messages. A variety of coordination mechanisms can ensure that messages are handled in logical time order, but they all come with costs. A fundamental tradeoff (the CAL theorem) makes it impossible to achieve consistency without paying a price in availability, where the price depends on the latencies introduced by network communication, computation overhead, and clock synchronization error. This paper shows how to use the Lingua Franca coordination language to navigate this tradeoff, and particularly how to ensure eventual consistency while bounding unavailability with manageable risk.
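The core idea of handling messages in logical time order rather than arrival order can be sketched as follows; the function and message layout are illustrative, not Lingua Franca's API.

```python
import heapq

# Buffer incoming messages and handle them in logical-time order, with sender
# id as a tie-breaker, so the result is independent of network arrival order.
def handle_in_order(arrivals):
    """arrivals: messages as (logical_time, sender_id, payload) tuples."""
    heap = list(arrivals)
    heapq.heapify(heap)      # total order: logical time, ties by sender id
    log = []
    while heap:
        _, _, payload = heapq.heappop(heap)
        log.append(payload)
    return log

# Two different network arrival orders yield the same handling order.
a = [(2, 0, "debit"), (1, 1, "credit")]
b = [(1, 1, "credit"), (2, 0, "debit")]
assert handle_in_order(a) == handle_in_order(b) == ["credit", "debit"]
```

The cost the CAL theorem quantifies shows up in the buffering step: a node can only commit to popping the earliest tag once it knows no earlier-tagged message is still in flight, which is where network latency and clock error translate into unavailability.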
  6. We use two actor-based languages, Timed Rebeca and Lingua Franca, to show modeling, model checking, implementation, and timing analysis of an industry-suggested algorithm for role selection in distributed control systems with redundancy. The algorithm prioritizes consistency over availability in trade-off situations. We show scenarios that simulate the environment and possible faults and use the Timed Rebeca model checking tool to investigate whether they may cause a failure. We also show the maximum latency that can be tolerated without causing inconsistency. We then use the coordination language Lingua Franca to implement the model. It can also simulate network switches, allowing you to set up test scenarios that include network degradation, such as switch failures, packet losses, and excessive latency. This can be set up as a hardware-in-the-loop simulation, where the actual node implementations interact with simulated switches and the network.
  7. This paper explores the integration of Edge Intelligence (EI) with the coordination language LINGUA FRANCA (LF), leveraging the Consistency-Availability-Latency (CAL) theorem as a theoretical foundation for optimizing Cyber-Physical Systems (CPS) design and deployment. We propose a distributed EI-based approach for CPS to develop an Emergency Vehicle Detection (EVD) system that dynamically adjusts traffic signals at intersections to prioritize emergency vehicles, improving emergency response times while maintaining traffic efficiency. The system employs multimodal detection techniques, including audio classification and object detection, and utilizes LF’s deterministic coordination to ensure seamless execution across the computing continuum. We analyze two deployment scenarios: cloud-assisted and fully edge-based. The CAL theorem guides tradeoffs between consistency, availability, and latency, informing optimal service placement at design time. Experimental results validate the theoretical analysis, showing that the edge-based deployment achieves 2.8x lower inference-to-actuation latency and 10.26% lower energy consumption compared to the cloud-assisted scenario, while also eliminating bandwidth overhead associated with data transmission to the cloud. 
  8. The rise of intelligent autonomous systems, especially in robotics and autonomous agents, has created a critical need for robust communication middleware that can ensure real-time processing of extensive sensor data. Current robotics middleware like Robot Operating System (ROS) 2 faces challenges with nondeterminism and high communication latency when dealing with large data across multiple subscribers on a multi-core compute platform. To address these issues, we present High-Performance Robotic Middleware (HPRM), built on top of the deterministic coordination language Lingua Franca (LF). HPRM employs optimizations including an in-memory object store for efficient zero-copy transfer of large payloads, adaptive serialization to minimize serialization overhead, and an eager protocol with real-time sockets to reduce handshake latency. Benchmarks show HPRM achieves up to 114x lower latency than ROS 2 when broadcasting large messages to multiple nodes. We then demonstrate the benefits of HPRM by integrating it with the CARLA simulator and running reinforcement learning agents along with object detection workloads. In the CARLA autonomous driving application, HPRM attains 91.1% lower latency than ROS 2. The deterministic coordination semantics of HPRM, combined with its optimized IPC mechanisms, enable efficient and predictable real-time communication for intelligent autonomous systems. Code and videos can be found on our project page: https://hprm-robotics.github.io/HPRM
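The zero-copy idea behind an in-memory object store can be illustrated with Python's standard shared-memory facility; this toy is not HPRM's actual implementation, which uses its own object store and IPC protocol.

```python
from multiprocessing import shared_memory

# A large payload is written once into a named shared-memory segment; each
# subscriber maps the same segment by name instead of receiving a copy over
# a socket. Here both roles run in one process for brevity.
payload = b"\x01" * 1_000_000                        # a large sensor frame
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload                     # publisher writes once

reader = shared_memory.SharedMemory(name=shm.name)   # subscriber attaches
view = reader.buf[:len(payload)]                     # a view, not a copy
assert bytes(view[:4]) == b"\x01\x01\x01\x01"

del view                                             # release the memoryview
reader.close()
shm.close()
shm.unlink()                                         # free the segment
```

With many subscribers, each attach costs only a mapping, so broadcast latency stops scaling with payload size times subscriber count, which is the regime where the reported speedups over ROS 2 are largest.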
  9. Lee, EA; Mousavi, MR; Talcott, C (Ed.)
    Driving progress in science and engineering for centuries, models are powerful tools for understanding systems and building abstractions. However, the goal of models in science is different from that in engineering, and we observe the misuse of models undermining research goals. Specifically in the field of formal methods, we advocate that verification should be performed on engineering models rather than scientific models, to the extent possible. We observe that models under verification are, very often, scientific models rather than engineering models, and we show why verifying scientific models is ineffective in engineering efforts. To guarantee safety in an engineered system, it is the engineering model one should verify. This model can be used to derive a correct-by-construction implementation. To demonstrate our proposed principle, we review lessons learned from verifying programs in a language called Lingua Franca using Timed Rebeca. 
  10. Real-time systems need to be built out of tasks for which the worst-case execution time is known. To enable accurate estimates of worst-case execution time, some researchers propose to build processors that simplify that analysis. These architectures are called precision-timed machines or time-predictable architectures. However, what does this term mean? This paper explores the meaning of time predictability and how it can be quantified. We show that time predictability is hard to quantify. Rather, the worst-case performance of the combination of a processor, a compiler, and a worst-case execution time analysis tool is the important property in the context of real-time systems. Note that the actual software also affects the worst-case performance. We propose to define a standard set of benchmark programs that can be used to evaluate a time-predictable processor, a compiler, and a worst-case execution time analysis tool. We define worst-case performance as the geometric mean of worst-case execution time bounds on a standard set of benchmark programs.
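The definition in this abstract is a one-liner in code; the benchmark names and cycle counts below are made up for illustration.

```python
from math import prod

# Worst-case performance, as defined above: the geometric mean of the WCET
# bounds obtained on a standard set of benchmark programs.
def worst_case_performance(wcet_bounds):
    """wcet_bounds: one WCET bound (e.g., in cycles) per benchmark program."""
    n = len(wcet_bounds)
    return prod(wcet_bounds) ** (1.0 / n)

# Hypothetical bounds for one processor/compiler/analysis-tool combination.
bounds = {"fir": 12_000, "crc": 3_000, "fft": 96_000}
score = worst_case_performance(list(bounds.values()))
assert abs(score - (12_000 * 3_000 * 96_000) ** (1 / 3)) < 1e-6
```

The geometric mean keeps the score scale-free across benchmarks of very different sizes, so no single long-running program dominates the comparison between tool chains.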