Serverless computing has quickly emerged as a dominant cloud computing paradigm, allowing developers to rapidly prototype event-driven applications using a composition of small functions that each perform a single logical task. However, many such application workflows are based in part on publicly-available functions developed by third parties, creating the potential for functions to behave in unexpected, or even malicious, ways. At present, developers are not in total control of where and how their data flows, creating significant security and privacy risks in growth markets that have embraced serverless (e.g., IoT). As a practical means of addressing this problem, we present Valve, a serverless platform that enables developers to exert complete fine-grained control of information flows in their applications. Valve enables workflow developers to reason about function behaviors, and specify restrictions, through auditing of network-layer information flows. By proxying network requests and propagating taint labels across network flows, Valve is able to restrict function behavior without code modification. We demonstrate that Valve is able to defend against known serverless attack behaviors, including container reuse-based persistence and data exfiltration over cloud platform APIs, with less than 2.8% runtime overhead, 6.25% deployment overhead, and 2.35% teardown overhead.
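To make the label-propagation idea concrete, below is a minimal sketch in Go of an egress proxy that re-stamps a taint label on outbound requests and refuses flows a developer has not allowed. The header name, labels, and destination hosts are hypothetical illustrations, not Valve's actual interface.

```go
// A minimal sketch (not Valve's implementation): an egress HTTP proxy, set as
// the function container's HTTP_PROXY, that propagates a taint label carried
// in a request header and denies flows the developer's policy does not allow.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
)

// allowed maps a taint label to the destination hosts that label may reach.
// Labels and hosts here are placeholders for a developer-supplied policy.
var allowed = map[string]map[string]bool{
	"public":       {"api.example.com": true, "billing.internal": true},
	"customer-pii": {"billing.internal": true},
}

func main() {
	// ReverseProxy forwards to the absolute URL already present in a proxied
	// request; the Director only re-stamps the taint header so downstream
	// proxies can keep tracking the flow.
	fwd := &httputil.ReverseProxy{Director: func(r *http.Request) {
		if r.Header.Get("X-Taint-Label") == "" {
			r.Header.Set("X-Taint-Label", "public")
		}
	}}

	h := func(w http.ResponseWriter, r *http.Request) {
		label := r.Header.Get("X-Taint-Label")
		if label == "" {
			label = "public"
		}
		if !allowed[label][r.URL.Hostname()] {
			http.Error(w, "information flow denied by policy", http.StatusForbidden)
			return
		}
		fwd.ServeHTTP(w, r)
	}
	log.Fatal(http.ListenAndServe(":8888", http.HandlerFunc(h)))
}
```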
ALASTOR: Reconstructing the Provenance of Serverless Intrusions
Serverless computing has freed developers from the burden of managing their own platform and infrastructure, allowing them to rapidly prototype and deploy applications. Despite its surging popularity, however, serverless raises a number of concerning security implications. Among them is the difficulty of investigating intrusions: by decomposing traditional applications into ephemeral re-entrant functions, serverless has enabled attackers to conceal their activities within legitimate workflows, and even to prevent root cause analysis by abusing warm container reuse policies to break causal paths. Unfortunately, neither traditional approaches to system auditing nor commercial serverless security products provide the transparency needed to accurately track these novel threats. In this work, we propose ALASTOR, a provenance-based auditing framework that enables precise tracing of suspicious events in serverless applications. ALASTOR records function activity at both system and application layers to capture a holistic picture of each function instance's behavior. It then aggregates provenance from different functions at a central repository within the serverless platform, stitching it together to produce a global data provenance graph of complex function workflows. ALASTOR is both function and language-agnostic, and can easily be integrated into existing serverless platforms with minimal modification. We implement ALASTOR for the OpenFaaS platform and evaluate its performance using the well-established Nordstrom Hello, Retail! application, discovering in the process that ALASTOR imposes manageable overheads (13.74%), in exchange for significantly improved forensic capabilities as compared to commercially-available monitoring tools. To our knowledge, ALASTOR is the first auditing framework specifically designed to satisfy the operational requirements of serverless platforms.
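The central stitching step can be illustrated with a small sketch; the record fields and request-ID scheme below are assumptions made for illustration only, not ALASTOR's actual data model.

```go
// A hedged sketch of the stitching idea: per-function provenance records
// tagged with a workflow-wide request ID are merged at a central repository
// into one global graph whose edges link each record to the next hop.
package main

import (
	"fmt"
	"sort"
)

// Record is one provenance event reported by a function instance.
type Record struct {
	RequestID string // identifier propagated between functions in a workflow
	Function  string // reporting function
	Seq       int    // per-request ordering (e.g., a logical timestamp)
	Op        string // system- or application-layer operation observed
}

// Edge links two consecutive events of the same workflow request.
type Edge struct{ From, To Record }

// Stitch groups records by request ID, orders them, and emits causal edges.
func Stitch(records []Record) []Edge {
	byReq := map[string][]Record{}
	for _, r := range records {
		byReq[r.RequestID] = append(byReq[r.RequestID], r)
	}
	var graph []Edge
	for _, rs := range byReq {
		sort.Slice(rs, func(i, j int) bool { return rs[i].Seq < rs[j].Seq })
		for i := 1; i < len(rs); i++ {
			graph = append(graph, Edge{From: rs[i-1], To: rs[i]})
		}
	}
	return graph
}

func main() {
	g := Stitch([]Record{
		{"req-42", "checkout", 1, "recv http"},
		{"req-42", "checkout", 2, "invoke payment"},
		{"req-42", "payment", 3, "write s3://orders"},
	})
	for _, e := range g {
		fmt.Printf("%s:%s -> %s:%s\n", e.From.Function, e.From.Op, e.To.Function, e.To.Op)
	}
}
```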
- PAR ID: 10346422
- Date Published:
- Journal Name: 31st USENIX Security Symposium
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
As serverless computing continues to revolutionize the design and deployment of web services, it has become an increasingly attractive target to attackers. These adversaries are developing novel tactics for circumventing the ephemeral nature of serverless functions, exploiting container reuse optimizations and achieving lateral movement by “living off the land” provided by legitimate serverless workflows. Unfortunately, the traditional security controls currently offered by cloud providers are inadequate to counter these new threats. In this work, we propose will.iam, a workflow-aware access control model and reference monitor that satisfies the functional requirements of the serverless computing paradigm. will.iam encodes the protection state of a serverless application as a permissions graph that describes the permissible transitions of its workflows, associating web requests with a permissions set at the point of ingress according to a graph-based labeling state. By proactively enforcing the permissions requirements of downstream workflow components, will.iam is able to avoid the costs of partially processing unauthorized requests and reduce the attack surface of the application. We implement the will.iam framework in Go and evaluate its performance against the well-established Nordstrom “Hello, Retail!” application, as compared to recent related work. We demonstrate that will.iam imposes minimal burden on requests, averaging 0.51% overhead across representative workflows, but dramatically improves performance when handling unauthorized requests (e.g., DDoS attacks) as compared to past solutions. will.iam thus demonstrates an effective and practical alternative for authorization in the serverless paradigm.
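A minimal sketch of the ingress-time check implied by this design follows; the permissions-graph encoding, function names, and permission strings are hypothetical, not will.iam's reference monitor.

```go
// A minimal sketch, assuming a hypothetical permissions-graph encoding: a
// request is admitted at ingress only if the permissions granted to it cover
// every function reachable from its entry point, so unauthorized requests are
// rejected before any downstream work is performed.
package main

import "fmt"

type workflow struct {
	next     map[string][]string // permissible transitions between functions
	required map[string][]string // permissions each function demands
}

// admissible walks all functions reachable from entry and checks that the
// request's permission set covers each function's requirements.
func (w workflow) admissible(entry string, granted map[string]bool) bool {
	seen := map[string]bool{}
	stack := []string{entry}
	for len(stack) > 0 {
		fn := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if seen[fn] {
			continue
		}
		seen[fn] = true
		for _, p := range w.required[fn] {
			if !granted[p] {
				return false
			}
		}
		stack = append(stack, w.next[fn]...)
	}
	return true
}

func main() {
	w := workflow{
		next:     map[string][]string{"frontend": {"checkout"}, "checkout": {"payment"}},
		required: map[string][]string{"checkout": {"cart:read"}, "payment": {"card:charge"}},
	}
	fmt.Println(w.admissible("frontend", map[string]bool{"cart:read": true}))                      // false: payment not covered
	fmt.Println(w.admissible("frontend", map[string]bool{"cart:read": true, "card:charge": true})) // true
}
```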
Third-party security analytics allow companies to outsource threat monitoring tasks to teams of experts and avoid the costs of in-house security operations centers. By analyzing telemetry data from many clients, these services are able to offer enhanced insights, identifying global trends and spotting threats before they reach most customers. Unfortunately, the aggregation that drives these insights simultaneously risks exposing sensitive client data if it is not properly sanitized and tracked. In this work, we present SCIFFS, an automated information flow monitoring framework for preventing sensitive data exposure in third-party security analytics platforms. SCIFFS performs decentralized information flow control over customer data in a serverless setting, leveraging the innate polyinstantiated nature of serverless functions to assure precise and lightweight tracking of data flows. Evaluating SCIFFS against a proof-of-concept security analytics framework on the widely-used OpenFaaS platform, we demonstrate that our solution supports common analyst workflows (data ingestion, custom dashboards, threat hunting) while imposing just 3.87% runtime overhead on event ingestion, with aggregation query overhead growing linearly with the number of records in the database (e.g., 18.75% for 50,000 records and 104.27% for 500,000 records) as compared to an insecure baseline. Thus, SCIFFS not only establishes a privacy-respecting model for third-party security analytics, but also highlights the opportunities for security-sensitive applications in the serverless computing model.
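The underlying decentralized information flow control rule can be sketched as follows; the label representation and function names are invented for illustration and are not SCIFFS's API.

```go
// A hedged illustration of the DIFC check: each record carries the set of
// customers whose data it derives from, an aggregate's label is the union of
// its inputs' labels, and a result may flow to an analyst only if the analyst
// is cleared for every customer in that label.
package main

import "fmt"

type Label map[string]bool // set of customer identifiers tainting a value

// Union combines the labels of all inputs that fed an aggregate.
func Union(labels ...Label) Label {
	out := Label{}
	for _, l := range labels {
		for c := range l {
			out[c] = true
		}
	}
	return out
}

// CanFlow reports whether data with label l may be released to a reader
// cleared for the given customers.
func CanFlow(l, cleared Label) bool {
	for c := range l {
		if !cleared[c] {
			return false
		}
	}
	return true
}

func main() {
	agg := Union(Label{"acme": true}, Label{"globex": true})
	fmt.Println(CanFlow(agg, Label{"acme": true}))                 // false: globex data would leak
	fmt.Println(CanFlow(agg, Label{"acme": true, "globex": true})) // true
}
```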
The increased use of micro-services to build web applications has spurred the rapid growth of Function-as-a-Service (FaaS) or serverless computing platforms. While FaaS simplifies provisioning and scaling for application developers, it introduces new challenges in resource management that need to be handled by the cloud provider. Our analysis of popular serverless workloads indicates that schedulers need to handle functions that are very short-lived, have unpredictable arrival patterns, and require expensive setup of sandboxes. The challenge of running a large number of such functions in a multi-tenant cluster makes existing scheduling frameworks unsuitable. We present Archipelago, a platform that enables low-latency request execution in a multi-tenant serverless setting. Archipelago views each application as a DAG of functions, and every DAG is associated with a latency deadline. Archipelago achieves its per-DAG request latency goals by: (1) partitioning a given cluster into a number of smaller worker pools and associating each pool with a semi-global scheduler (SGS), (2) using a latency-aware scheduler within each SGS along with proactive sandbox allocation to reduce overheads, and (3) using a load balancing layer to route requests for different DAGs to the appropriate SGS and automatically scale the number of SGSs per DAG. Our testbed results show that Archipelago meets the latency deadline for more than 99% of realistic application request workloads, and reduces tail latencies by up to 36X compared to state-of-the-art serverless platforms.
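A simplified sketch of the load-balancing decision is given below; the pool statistics and routing heuristic are assumptions for illustration, not Archipelago's scheduler.

```go
// A simplified sketch of deadline-aware routing: the load-balancing layer
// tracks per-pool queueing delay and sends a DAG's request to a semi-global
// scheduler whose current estimate still fits the DAG's latency deadline,
// preferring the least-loaded pool.
package main

import (
	"errors"
	"fmt"
	"time"
)

type SGS struct {
	Name      string
	QueueWait time.Duration // current estimated queueing delay in this pool
}

// route picks the least-loaded SGS that can still meet deadline given the
// DAG's expected execution time; it errors if no pool can.
func route(pools []SGS, execTime, deadline time.Duration) (string, error) {
	best := -1
	for i, p := range pools {
		if p.QueueWait+execTime > deadline {
			continue
		}
		if best < 0 || p.QueueWait < pools[best].QueueWait {
			best = i
		}
	}
	if best < 0 {
		return "", errors.New("no pool can meet the deadline; scale out the SGSs for this DAG")
	}
	return pools[best].Name, nil
}

func main() {
	pools := []SGS{{"sgs-a", 40 * time.Millisecond}, {"sgs-b", 5 * time.Millisecond}}
	target, err := route(pools, 20*time.Millisecond, 50*time.Millisecond)
	fmt.Println(target, err) // sgs-b <nil>
}
```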
Identifying the root cause and impact of a system intrusion remains a foundational challenge in computer security. Digital provenance provides a detailed history of the flow of information within a computing system, connecting suspicious events to their root causes. Although existing provenance-based auditing techniques provide value in forensic analysis, they assume that such analysis takes place only retrospectively. Such post-hoc analysis is insufficient for realtime security applications; moreover, even for forensic tasks, prior provenance collection systems exhibited poor performance and scalability, jeopardizing the timeliness of query responses. We present CamQuery, which provides inline, realtime provenance analysis, making it suitable for implementing security applications. CamQuery is a Linux Security Module that offers support for both userspace and in-kernel execution of analysis applications. We demonstrate the applicability of CamQuery to a variety of runtime security applications, including data loss prevention, intrusion detection, and regulatory compliance. In evaluation, we demonstrate that CamQuery reduces the latency of realtime query mechanisms, while imposing minimal overheads on system execution. CamQuery thus enables the further deployment of provenance-based technologies to address central challenges in computer security.
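The flavor of such inline analysis can be sketched in userspace as follows; the edge format and callback are illustrative assumptions rather than CamQuery's LSM interface.

```go
// A userspace-only sketch of inline provenance analysis for data loss
// prevention: a streaming callback inspects each provenance edge as it is
// generated and raises an alert when anything derived from a sensitive
// entity reaches a network socket.
package main

import "fmt"

type Edge struct {
	Src, Dst string // provenance graph node identifiers
	DstKind  string // e.g., "file", "process", "socket"
}

type dlp struct {
	tainted map[string]bool // nodes transitively derived from sensitive data
}

// onEdge is invoked inline for every new provenance edge.
func (d *dlp) onEdge(e Edge) {
	if d.tainted[e.Src] {
		d.tainted[e.Dst] = true
		if e.DstKind == "socket" {
			fmt.Printf("ALERT: sensitive data reached %s\n", e.Dst)
		}
	}
}

func main() {
	d := &dlp{tainted: map[string]bool{"file:/etc/secrets": true}}
	for _, e := range []Edge{
		{"file:/etc/secrets", "proc:worker", "process"},
		{"proc:worker", "socket:10.0.0.5:443", "socket"},
	} {
		d.onEdge(e)
	}
}
```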