Title: Logging to the Danger Zone: Race Condition Attacks and Defenses on System Audit Frameworks
For system logs to aid in security investigations, they must be beyond the reach of the adversary. Unfortunately, attackers who have escalated privilege on a host are typically able to delete and modify log events at will. In response to this threat, a variety of secure logging systems have appeared over the years that attempt to provide tamper-resistance (e.g., write-once read-many drives, remote storage servers) or tamper-evidence (e.g., cryptographic proofs) for system logs. These solutions expose an interface through which events are committed to a secure log, at which point they enjoy protection from future tampering. However, all proposals to date have relied on the assumption that an event's occurrence is concomitant with its commitment to the secured log. In this work, we challenge this assumption by presenting and validating a race condition attack on the integrity of audit frameworks. Our attack exploits the intrinsically asynchronous nature of I/O and IPC activity, demonstrating that an attacker can snatch events about their intrusion out of message buffers after they have occurred but before they are committed to the log, thus bypassing existing protections. We present a first step towards defending against our attack by introducing KennyLoggings, the first kernel-based tamper-evident logging system that satisfies the synchronous integrity property, meaning that it guarantees tamper-evidence of events upon their occurrence. We implement KennyLoggings on top of the Linux kernel and show that it imposes between 8% and 11% overhead on log-intensive application workloads.
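
The synchronous integrity property described above is commonly achieved with a forward-secure keyed MAC chain: each event is tagged under the current key at the instant it is generated, and the key is immediately evolved through a one-way function, so a later compromise cannot forge or silently alter earlier entries. The sketch below illustrates only that general idea; it is not KennyLoggings' kernel implementation, and all names are illustrative.

```python
import hashlib
import hmac

class ForwardSecureLogger:
    """Toy forward-secure tamper-evident log (illustrative only)."""

    def __init__(self, initial_key: bytes):
        self._key = initial_key  # shared out-of-band with a trusted verifier
        self.entries = []        # (event, tag) pairs

    def append(self, event: bytes) -> None:
        # Tag the event under the current key, then evolve the key before
        # returning control: tamper-evidence is established synchronously
        # with the event's occurrence, leaving no buffer window to exploit.
        tag = hmac.new(self._key, event, hashlib.sha256).digest()
        self.entries.append((event, tag))
        self._key = hashlib.sha256(b"evolve" + self._key).digest()

def verify(initial_key: bytes, entries) -> bool:
    """The verifier replays the key chain from the shared initial key."""
    key = initial_key
    for event, tag in entries:
        expected = hmac.new(key, event, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        key = hashlib.sha256(b"evolve" + key).digest()
    return True

log = ForwardSecureLogger(b"\x00" * 32)
log.append(b"execve /usr/bin/ssh")
assert verify(b"\x00" * 32, log.entries)
```
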
Award ID(s):
1750024
NSF-PAR ID:
10225348
Date Published:
Journal Name:
ACM Conference on Computer and Communications Security
Page Range / eLocation ID:
1551 to 1574
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. System auditing is a central concern when investigating and responding to security incidents. Unfortunately, attackers regularly engage in anti-forensic activities after a break-in, covering their tracks in the system logs in order to frustrate the efforts of investigators. While a variety of tamper-evident logging solutions have appeared throughout industry and the literature, these techniques do not meet the operational and scalability requirements of system-layer audit frameworks. In this work, we introduce Custos, a practical framework for the detection of tampering in system logs. Custos consists of a tamper-evident logging layer and a decentralized auditing protocol. The former enables the verification of log integrity with minimal changes to the underlying logging framework, while the latter enables near-real-time detection of log integrity violations within an enterprise-class network. Custos is made practical by the observation that the costs of cryptographic log commitments can be decoupled from the act of creating and storing log events, without trading off security, by leveraging features of off-the-shelf trusted execution environments. Supporting over one million events per second, we show that Custos' tamper-evident logging protocol is three orders of magnitude (1000×) faster than prior solutions and incurs only 2% to 7% runtime overhead over insecure logging on intensive workloads. Further, we show that Custos' auditing protocol can detect violations in near real time, even in the presence of a powerful distributed adversary, with minimal (3%) network overhead. Our case study on a real-world APT attack scenario demonstrates that Custos forces anti-forensic attackers into a "lose-lose" situation: they can either remain covert and leave the logs intact (which can then be used for forensics), or erase the logs and be detected by Custos.
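
A minimal sketch of Custos' central decoupling idea, under simplifying assumptions: a cheap symmetric tag is computed on the hot path for every event, while the expensive checkpoint (standing in here for the enclave-backed signed commitment) is produced only once per batch. The epoch size and all names are illustrative, not Custos' actual protocol.

```python
import hashlib
import hmac

EPOCH = 4  # events per asynchronous checkpoint (illustrative)

class DecoupledCommitLog:
    """Cheap per-event tags now; expensive batch checkpoint later."""

    def __init__(self, key: bytes):
        self._key = key
        self.events, self.tags, self.checkpoints = [], [], []

    def append(self, event: bytes) -> None:
        # Fast path: one HMAC per event, no signing on the hot path.
        self.tags.append(hmac.new(self._key, event, hashlib.sha256).digest())
        self.events.append(event)
        if len(self.events) % EPOCH == 0:
            self._checkpoint()

    def _checkpoint(self) -> None:
        # Slow path, run off the critical path (inlined here for brevity):
        # a trusted component would sign this digest over the last epoch.
        digest = hashlib.sha256(b"".join(self.tags[-EPOCH:])).hexdigest()
        self.checkpoints.append(digest)

log = DecoupledCommitLog(b"k" * 32)
for i in range(8):
    log.append(f"event-{i}".encode())
print(log.checkpoints)  # two epoch checkpoints for eight events
```
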
  2. Recent advances in causality analysis have enabled investigators to trace multi-stage attacks using whole-system provenance graphs. Based on system-layer audit logs (e.g., syscalls), these approaches omit vital sources of application context (e.g., email addresses, HTTP response codes) that can be found in higher layers of the system. Although this information is often essential to understanding attack behaviors, incorporating this evidence into causal analysis engines is difficult due to the semantic gap that exists between system layers. To address this shortcoming, we propose the notion of universal provenance, which encodes all forensically relevant causal dependencies regardless of their layer of origin. To transparently realize this vision on commodity systems, we present ωLOG (“Omega Log”), a provenance tracking mechanism that bridges the semantic gap between system and application logging contexts. ωLOG analyzes program binaries to identify and model application-layer logging behaviors, enabling application events to be accurately reconciled with system-layer accesses. ωLOG then intercepts applications’ runtime logging activities and grafts those events onto the system-layer provenance graph, allowing investigators to reason more precisely about the nature of attacks. We demonstrate that ωLOG is widely applicable to existing software projects and can transparently facilitate execution partitioning of dependency graphs without any training or developer intervention. Evaluation on real-world attack scenarios shows that universal provenance graphs are concise and rich with semantic information compared to the state of the art, with 12% average runtime overhead.
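
The grafting step can be pictured as follows. The hypothetical regex templates stand in for the logging behaviors ωLOG recovers from program binaries (format strings compiled into matchers), and a toy edge list stands in for the provenance graph; none of this is ωLOG's real interface.

```python
import re

# Hypothetical log-message templates, as might be recovered from an
# application binary's format strings (illustrative only).
TEMPLATES = [
    re.compile(r"GET (?P<path>\S+) (?P<status>\d{3})"),
    re.compile(r"login from (?P<addr>\S+)"),
]

provenance = []  # (system_event, app_event_fields) edges; toy graph

def on_log_write(system_event: str, line: str) -> None:
    """Intercept an application's write() to its log file and graft the
    parsed application-layer event onto the system-layer record."""
    for template in TEMPLATES:
        m = template.search(line)
        if m:
            provenance.append((system_event, m.groupdict()))
            return
    # Unmatched lines fall back to plain system-layer provenance.

on_log_write("write(fd=4)", "GET /admin 403")
on_log_write("write(fd=4)", "login from 10.0.0.9")
print(provenance)
```
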
  3. Information flow control is a canonical approach to access control in systems, allowing administrators to assure confidentiality and integrity by restricting the flow of data. Decentralized Information Flow Control (DIFC) harnesses application-layer semantics to allow more precise and accurate mediation of data. Unfortunately, past approaches to DIFC have depended on dedicated instrumentation efforts or developer buy-in. Thus, while DIFC has existed for decades, it has seen little-to-no adoption in commodity systems; the requirement to completely redesign or retrofit programs has proven too high a barrier. In this work, we make the surprising observation that developers have already unwittingly performed the instrumentation effort required for DIFC: application event logging, a software development best practice used for telemetry and debugging, often contains the information needed to identify the application-layer events that DIFC mediates. We present T-difc, a kernel-layer reference monitor framework that leverages the insights of application event logs to perform precise decentralized flow control. T-difc identifies and extracts these application events as they are created, by monitoring application I/O to log files, then references an administrator-specified security policy to assign data labels and mediate the flow of data through the system. To our knowledge, T-difc is the first approach to DIFC that requires no developer support or custom instrumentation. In a survey of 15 popular open-source applications, we demonstrate that T-difc works seamlessly while imposing negligible runtime overhead on realistic policies and workloads. Thus, T-difc demonstrates a transparent and non-invasive path forward for the dissemination of decentralized information flow controls.
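
A toy sketch of the idea, with an invented policy format and label model (not T-difc's actual interface): log lines intercepted by the reference monitor are parsed into application events, and matching events raise the writing process's DIFC label, which then constrains where its data may flow.

```python
import re

# Illustrative administrator policy: an application event pattern maps
# to a DIFC label component attached to the process's data.
POLICY = [
    (re.compile(r"received mail for (?P<user>\S+)"),
     lambda m: {("secrecy", m["user"])}),
    (re.compile(r"serving public asset"), lambda m: set()),
]

process_labels = {}  # pid -> current label set (toy monitor state)

def on_app_log(pid: int, line: str) -> None:
    """Extract the application event from an intercepted log write and
    raise the process's label per the administrator's policy."""
    for pattern, label_of in POLICY:
        m = pattern.search(line)
        if m:
            process_labels.setdefault(pid, set()).update(label_of(m))

def may_flow(src_pid: int, dst_labels: set) -> bool:
    # Data may flow only to sinks whose labels dominate the source's.
    return process_labels.get(src_pid, set()) <= dst_labels

on_app_log(1234, "received mail for alice")
print(process_labels[1234])                    # {('secrecy', 'alice')}
print(may_flow(1234, set()))                   # False: no flow to public sink
print(may_flow(1234, {("secrecy", "alice")}))  # True
```
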
  4. Recent advances in causality analysis have enabled investigators to trace multi-stage attacks using provenance graphs. Based on system-layer audit logs (e.g., syscalls), these approaches omit vital sources of application context (e.g., email addresses, HTTP response codes) that can be found in higher layers of the system. Although such information is often essential to understanding attack behaviors, it is difficult to incorporate this evidence into causal analysis engines because of the semantic gap that exists between system layers. To address that shortcoming, we propose the notion of universal provenance, which encodes all forensically relevant causal dependencies regardless of their layer of origin. To transparently realize that vision on commodity systems, we present OmegaLog, a provenance tracker that bridges the semantic gap between system and application logging contexts. OmegaLog analyzes program binaries to identify and model application-layer logging behaviors, enabling accurate reconciliation of application events with system-layer accesses. OmegaLog then intercepts applications’ runtime logging activities and grafts those events onto the system-layer provenance graph, allowing investigators to reason more precisely about the nature of attacks. We demonstrate that our system is widely applicable to existing software projects and can transparently facilitate execution partitioning of provenance graphs without any training or developer intervention. Evaluation on real-world attack scenarios shows that our technique generates concise provenance graphs with rich semantic information relative to the state of the art, with an average runtime overhead of 4%.
  5. The Internet of Things (IoT) and Storage-as-a-Service (STaaS) continuum permits cost-effective maintenance of security-sensitive information collected by IoT devices on cloud systems. It is necessary to guarantee the security of sensitive data in IoT-STaaS applications. In particular, log entries trace critical events in computer systems and play a vital role in the trustworthiness of IoT-STaaS. An ideal log protection tool must be scalable and lightweight for vast quantities of resource-limited IoT devices while permitting efficient public verification at the STaaS side. However, existing cryptographic logging schemes either incur significant computation/signature overhead on the logger or extreme storage and verification costs on the cloud. There is a critical need for a cryptographic forensic log tool that respects the efficiency requirements of the IoT-STaaS continuum. In this paper, we create novel digital signatures for logs called Optimal Signatures for secure Logging (OSLO), which are the first (to the best of our knowledge) to offer both small-constant signature and public key sizes with near-optimal signing and batch verification at various granularities. We introduce new design features such as one-time randomness management and flexible aggregation, along with various optimizations, to attain these seemingly conflicting properties simultaneously. Our experiments show that OSLO offers 50× faster verification (for 2^35 entries) than the most compact alternative with equal signature sizes, while also being several orders of magnitude more compact than its most logger-efficient counterparts. These properties make OSLO an ideal choice for IoT-STaaS, wherein lightweight logging and efficient batch verification of massive logs are vital for IoT edge devices and cold storage servers, respectively.
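
OSLO's actual construction is considerably more involved; the toy Schnorr-style batch verifier below, with deliberately tiny and insecure parameters, only illustrates why batch verification of many log-entry signatures is cheaper than checking each entry individually: random weights let one aggregate equation replace n independent verifications.

```python
import hashlib
import secrets

# Toy Schnorr group (illustrative, insecure parameters; real systems
# use standardized groups). p = 2q + 1; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(r: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(r.to_bytes(2, "big") + msg).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)  # (private, public)

def sign(x: int, msg: bytes):
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    s = (k + x * H(r, msg)) % q
    return r, s

def batch_verify(pub: int, sigs) -> bool:
    """One aggregate check for the whole batch. Random weights z_i stop
    invalid signatures from cancelling each other out."""
    lhs_exp, rhs = 0, 1
    for msg, (r, s) in sigs:
        z = secrets.randbelow(q - 1) + 1
        lhs_exp = (lhs_exp + z * s) % q
        rhs = rhs * pow(r, z, p) * pow(pub, z * H(r, msg), p) % p
    return pow(g, lhs_exp, p) == rhs  # g^(sum z_i*s_i) == prod (r_i * y^e_i)^z_i

x, y = keygen()
entries = [f"log entry {i}".encode() for i in range(16)]
sigs = [(m, sign(x, m)) for m in entries]
print(batch_verify(y, sigs))  # True
```
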