

Title: Harm-DoS: Hash Algorithm Replacement for Mitigating Denial-of-Service Vulnerabilities in Binary Executables
Binary analysis of closed-source, low-level, and embedded systems software has become central to the cyber-physical vulnerability assessment of third-party or legacy devices in safety-critical systems. In particular, recovering the semantics of the source algorithmic implementations lets analysts understand the context of a given binary program snippet. However, experimentation and evaluation of binary analysis techniques on real-world embedded cyber-physical systems are limited to domain-specific testbeds with a small number of use cases, which is insufficient to support emerging data-driven techniques. Moreover, the use cases rarely come with the source mathematical expressions, algorithms, and compiled binaries. In this paper, we present AUTOCPS, a framework for generating a large corpus of control systems binaries along with their source algorithmic expressions and source code. AUTOCPS enables researchers to tune control system binary data generation by varying permutations of cyber-physical modules, e.g., the underlying control algorithm, while ensuring that each generated binary is semantically valid. We initially constrain AUTOCPS to the flight software domain and generate over 4000 semantically different control systems source representations, which are then used to generate hundreds of thousands of binaries. We describe current and future use cases of AUTOCPS toward cyber-physical vulnerability assessment of safety-critical systems.
Award ID(s):
2051101
NSF-PAR ID:
10396202
Journal Name:
25th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2022)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
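The abstract describes tuning binary generation by varying permutations of cyber-physical modules while keeping each configuration semantically valid. The snippet below is a minimal sketch of that idea, not AUTOCPS itself: the module catalog, the C template, and the validity constraint are invented for illustration.

```python
# Minimal sketch: enumerate permutations of hypothetical control-system modules
# and emit a C source file per valid configuration. The catalog, template, and
# validity rule are illustrative, not AUTOCPS's actual generation pipeline.
import itertools
from pathlib import Path

MODULE_CATALOG = {
    "controller": ["pid", "lqr"],
    "estimator": ["complementary_filter", "kalman"],
    "actuator_mix": ["quad_x", "quad_plus"],
}

C_TEMPLATE = """/* auto-generated flight-control stub: {name} */
#include <stdio.h>
int main(void) {{
    /* controller={controller}, estimator={estimator}, mixer={actuator_mix} */
    printf("{name}\\n");
    return 0;
}}
"""

def is_semantically_valid(cfg):
    # Placeholder constraint: assume, for illustration, that an LQR controller
    # requires a Kalman estimator to form a meaningful closed loop.
    return not (cfg["controller"] == "lqr" and cfg["estimator"] != "kalman")

def generate(out_dir="autocps_out"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    keys = list(MODULE_CATALOG)
    count = 0
    for values in itertools.product(*(MODULE_CATALOG[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if not is_semantically_valid(cfg):
            continue
        name = "_".join(values)
        (out / f"{name}.c").write_text(C_TEMPLATE.format(name=name, **cfg))
        count += 1
    return count

if __name__ == "__main__":
    print(f"generated {generate()} source variants")
```

Each emitted source file could then be compiled with different compilers and optimization levels, which is one way a handful of source variants can fan out into hundreds of thousands of binaries.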
More Like this
  1. Unmanned aerial vehicles (UAVs) are becoming increasingly pervasive in everyday life, supporting diverse use cases such as aerial photography, delivery of goods, and disaster reconnaissance and management. UAVs are cyber-physical systems (CPS): they integrate computation (embedded software and control systems) with physical components (the UAVs flying in the physical world). UAVs in particular, and CPS in general, require monitoring capabilities to detect and possibly mitigate erroneous and safety-critical behavior at runtime. Existing monitoring approaches mostly do not adequately address UAV CPS characteristics such as the high number of dynamically instantiated components, the tight integration of elements, and the massive amounts of data that need to be processed. In this paper, we report the results of a case study on monitoring in UAVs. We discuss CPS-specific monitoring challenges and present a prototype we implemented by extending REMINDS, a software-monitoring framework so far used mainly in the domain of metallurgical plants. Additionally, we demonstrate the applicability and scalability of our approach by monitoring a real control and management system for UAVs in simulations with up to 30 drones flying in an urban area.
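As a rough illustration of the kind of runtime check such a monitoring framework evaluates, the sketch below flags simple safety violations (altitude ceiling, geofence radius) over telemetry from multiple drones. The telemetry fields and thresholds are assumptions for illustration and are not part of REMINDS.

```python
# Minimal sketch: a runtime monitor checking simple safety properties over
# telemetry from many dynamically instantiated drones. Thresholds and the
# telemetry format are illustrative.
import math
from dataclasses import dataclass

@dataclass
class Telemetry:
    drone_id: str
    x: float          # metres east of launch point
    y: float          # metres north of launch point
    altitude: float   # metres above ground

class SafetyMonitor:
    def __init__(self, max_altitude=120.0, geofence_radius=500.0):
        self.max_altitude = max_altitude
        self.geofence_radius = geofence_radius
        self.violations = []

    def check(self, t: Telemetry):
        if t.altitude > self.max_altitude:
            self.violations.append((t.drone_id, "altitude", t.altitude))
        if math.hypot(t.x, t.y) > self.geofence_radius:
            self.violations.append((t.drone_id, "geofence", math.hypot(t.x, t.y)))

monitor = SafetyMonitor()
for sample in [Telemetry("uav-07", 10.0, 20.0, 80.0),
               Telemetry("uav-12", 480.0, 200.0, 95.0)]:
    monitor.check(sample)
print(monitor.violations)   # [('uav-12', 'geofence', 520.0)]
```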
  2. Ensuring the integrity of embedded programmable logic controllers (PLCs) is critical for the safe operation of industrial control systems. In particular, a cyber-attack could manipulate the control logic running on the PLCs to drive the process of a safety-critical application into unsafe states. Unfortunately, PLCs are typically not equipped with the hardware support that would allow techniques such as remote attestation to verify the integrity of the logic code. In addition, remote attestation so far cannot verify the integrity of the physical process controlled by the PLC. In this work, we present PAtt, a system that combines remote software attestation with control process validation. PAtt leverages operation permutations, subtle changes in the operation sequences based on integrity measurements, which do not affect the physical process but yield unique traces of sensor readings during execution. By encoding integrity measurements of the PLC's memory state (software and data) into its control operation, our system allows the integrity of the control logic to be verified remotely based on the resulting sensor traces. We implement the proposed system on a real PLC controlling a robot arm and demonstrate its feasibility. Our implementation enables the detection of attackers that manipulate the PLC logic to change the process state and/or report spoofed sensor readings (with an accuracy of 97% against the tested attacks).
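The core idea, deriving the order of otherwise commutative control steps from an integrity measurement so that a remote verifier can predict the resulting trace, can be sketched as follows. The hash-seeded shuffle and the robot-arm steps are illustrative assumptions, not PAtt's actual encoding.

```python
# Minimal sketch: make the order of commutative control steps a deterministic
# function of an integrity measurement over the PLC's logic and data, so a
# verifier who knows the expected memory image can predict the operation
# order (and hence the sensor trace). Illustrative only.
import hashlib
import random

def measure(memory_image: bytes) -> bytes:
    # Integrity measurement over the PLC's logic and data memory.
    return hashlib.sha256(memory_image).digest()

def permute_operations(operations, measurement: bytes):
    # Seed a PRNG with the measurement; any tampering changes the order.
    rng = random.Random(measurement)
    ops = list(operations)
    rng.shuffle(ops)
    return ops

# Hypothetical commutative micro-steps of a robot-arm move: their order does
# not change the final position, but it shapes the intermediate sensor trace.
STEPS = [f"joint{j}+1deg" for j in range(1, 9)]

trusted = measure(b"trusted logic + data image")
verifier_view = permute_operations(STEPS, trusted)   # what the verifier expects
plc_view = permute_operations(STEPS, trusted)        # what an untampered PLC runs
print(verifier_view == plc_view)                     # True: fully deterministic

tampered = measure(b"tampered logic + data image")
print(permute_operations(STEPS, tampered) == verifier_view)  # almost certainly False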
  3. In many regulated domains, traceability is established across diverse artifacts such as requirements, design, code, test cases, and hazards, either manually or with the help of supporting tools, and the resulting trace links are used to support activities such as impact analysis, compliance verification, and safety inspections. Automated tracing techniques need to leverage the semantics of the underlying artifacts in order to establish more accurate trace links and to provide explanations of links that have been created either manually or automatically. To support this, we propose an automated technique that leverages source code, project artifacts, and an external domain corpus to generate a domain-specific concept model. We then use the generated concept model to improve traceability results and to provide explanations of those results. Our approach overcomes existing problems with deep-learning traceability algorithms, as it does not require a training set of existing trace links. Finally, as an initial proof of concept, we apply our semantically guided approach to the Dronology project and show that it improves over other tracing techniques that do not use a concept model.
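A minimal sketch of concept-guided trace-link scoring follows. The concept model, term lists, and artifacts are toy examples rather than the authors' actual technique, but they show how shared concepts can both rank a candidate link and serve as its explanation.

```python
# Minimal sketch: score candidate trace links by the domain concepts that
# appear in both artifacts, and report the shared concepts as an explanation.
import re

# A (tiny) domain concept model, e.g. mined from project docs and an external
# UAV corpus; each concept maps to surface terms that evoke it.
CONCEPT_MODEL = {
    "geofence": {"geofence", "boundary", "perimeter"},
    "altitude_control": {"altitude", "climb", "descend"},
    "battery": {"battery", "charge", "voltage"},
}

def concepts_in(text):
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {c for c, terms in CONCEPT_MODEL.items() if tokens & terms}

def score_link(requirement, code_comment):
    shared = concepts_in(requirement) & concepts_in(code_comment)
    return len(shared), sorted(shared)   # score plus human-readable explanation

req = "The UAV shall not descend below the minimum safe altitude."
code = "// clamp climb rate and check altitude floor before motor command"
print(score_link(req, code))   # (1, ['altitude_control'])
```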
  4. Papadopoulos, Alessandro V. (Ed.)
    The correctness of safety-critical systems depends on both their logical and temporal behavior. Control-flow integrity (CFI) is a well-established and well-understood technique to safeguard the logical flow of safety-critical applications. Unfortunately, no established methodologies exist for the complementary problem of detecting violations of control-flow timeliness. Worse yet, this latter dimension, which we term Timely Progress Integrity (TPI), is increasingly jeopardized as the complexity of our embedded systems continues to soar. As key resources of the memory hierarchy become shared by several CPUs and accelerators, they become hard-to-analyze performance bottlenecks, and the precise interplay between software and hardware components becomes hard to predict and reason about. How can control over timely progress integrity be restored? We postulate that the first stepping stone toward TPI is to develop methodologies for Timely Progress Assessment (TPA). TPA refers to the ability of a system to live-monitor the positive or negative slack, with respect to a known reference, at key milestones throughout an application's lifespan. In this paper, we propose one such methodology, which goes under the name Milestone-Based Timely Progress Assessment, or MB-TPA for short. Among the key design principles of MB-TPA are the ability to operate on black-box binary executables with near-zero time overhead and the ability to be implemented on commercial platforms. To prove its feasibility and effectiveness, we propose and evaluate a full-stack implementation called Timely Progress Assessment with 0 Overhead (TPAw0v). We demonstrate its capability to provide live TPA for complex vision applications while introducing less than 0.6% time overhead for the applications under test. Finally, we demonstrate one use case in which TPA information is used to restore TPI in the presence of temporal interference over shared memory resources.
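The sketch below illustrates the milestone-and-slack bookkeeping at the heart of TPA: the elapsed time at each milestone is compared against a reference budget, yielding positive (ahead) or negative (behind) slack. The milestone names and budgets are invented, and this Python sketch makes no attempt to reproduce TPAw0v's black-box, near-zero-overhead operation.

```python
# Minimal sketch: milestone-based timely progress assessment. At each
# milestone we compare elapsed time against a reference budget and report
# positive (ahead) or negative (behind) slack. Budgets are illustrative.
import time

# Reference elapsed time (seconds from job release) by which each milestone
# should have been reached on an interference-free reference run.
REFERENCE_S = {"frame_acquired": 0.010, "features_extracted": 0.025, "pose_estimated": 0.040}

class ProgressAssessor:
    def __init__(self, reference):
        self.reference = reference
        self.release = time.monotonic()

    def milestone(self, name):
        elapsed = time.monotonic() - self.release
        slack = self.reference[name] - elapsed   # > 0 ahead, < 0 behind
        return name, round(slack * 1e3, 3)       # slack in milliseconds

tpa = ProgressAssessor(REFERENCE_S)
time.sleep(0.012)                       # stand-in for the frame-acquisition stage
print(tpa.milestone("frame_acquired"))  # e.g. ('frame_acquired', -2.1): ~2 ms behind
```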
  5. Modern cyber-physical systems (CPS) are often developed in a model-based development (MBD) paradigm. The MBD paradigm involves the construction of different kinds of models: (1) a plant model that encapsulates the physical components of the system (e.g., mechanical, electrical, chemical components) using representations based on differential and algebraic equations, (2) a controller model that encapsulates the embedded software components of the system, and (3) an environment model that encapsulates physical assumptions on the external environment of the CPS application. In order to reason about the correctness of CPS applications, we typically pose the following question: for all possible environment scenarios, does the closed-loop system consisting of the plant and the controller exhibit the desired behavior? Typically, the desired behavior is expressed in terms of properties that specify unsafe behaviors of the closed-loop system. Often, such behaviors are expressed using variants of real-time temporal logics. In this chapter, we will examine formal methods based on bounded-time reachability analysis, simulation-guided reachability analysis, deductive techniques based on safety invariants, and formal, requirement-driven testing techniques. We will review key results in the literature and discuss the scalability and applicability of such techniques in various academic and industrial contexts. We conclude this chapter by discussing the challenges posed to formal verification and testing techniques by newer CPS applications that use AI-based software components.
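As a concrete, if toy, instance of the simulation-based, bounded-time checking described above, the sketch below simulates a simple plant under a proportional controller across a few environment scenarios and reports whether an unsafe speed is reached within the time bound. The dynamics, gains, and unsafe threshold are invented for illustration.

```python
# Minimal sketch: bounded-time, simulation-based check of a closed-loop system.
# Plant: 1-D velocity with drag; controller: proportional cruise control;
# environment: a set of constant disturbance scenarios. All values illustrative.

DT, HORIZON = 0.05, 10.0      # integration step and time bound (seconds)
V_SET, V_UNSAFE = 25.0, 30.0  # setpoint and unsafe speed (m/s)

def controller(v):
    return 2.0 * (V_SET - v)            # proportional throttle command

def plant(v, u, disturbance):
    dv = u - 0.1 * v + disturbance      # simple drag model plus external push
    return v + DT * dv                  # forward-Euler step

def violates_safety(disturbance, v0=0.0):
    v, t = v0, 0.0
    while t <= HORIZON:
        v = plant(v, controller(v), disturbance)
        if v > V_UNSAFE:
            return True                 # unsafe state reached within the bound
        t += DT
    return False

scenarios = [0.0, 5.0, 15.0]            # candidate environment disturbances
print([(d, violates_safety(d)) for d in scenarios])
# [(0.0, False), (5.0, False), (15.0, True)]
```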