Title: Characterizing Static Analysis Alerts for Terraform Manifests: An Experience Report
While Terraform has gained popularity for implementing the practice of infrastructure as code (IaC), static analysis for Terraform manifests remains uncharacterized. This lack of characterization hinders practitioners from assessing how to use static analysis in their Terraform development process, as happened for Company A, an organization that uses Terraform to create automated software deployment pipelines. In this experience report, we investigate 491 static analysis alerts that occur in 10 open-source and one proprietary Terraform repositories. From our analysis we observe: (i) 10 categories of static analysis alerts appear for Terraform manifests, of which five are related to security; (ii) Terraform resources with dependencies have more static analysis alerts than resources with no dependencies; and (iii) practitioner perceptions vary from one alert category to another when deciding whether to act on reported alerts. We conclude the paper by providing a list of lessons for practitioners and toolsmiths on how to improve static analysis for Terraform manifests.
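The abstract does not name individual alert rules, but hard-coded secrets are a typical security-related alert that Terraform linters report. As an illustrative sketch only (the manifest, attribute list, and function are hypothetical, not the paper's tooling), a minimal check of this kind might look like:

```python
import re

# Hypothetical Terraform snippet; the "password" assignment is the kind of
# hard-coded secret a Terraform static analysis tool would typically flag.
MANIFEST = '''
resource "aws_db_instance" "default" {
  engine   = "mysql"
  username = "admin"
  password = "SuperSecret123"
}
'''

# Illustrative list of secret-like attribute names (an assumption, not a
# rule set from the paper).
SECRET_ATTRS = ("password", "secret", "token", "access_key")

def find_hardcoded_secrets(hcl_text):
    """Return (attribute, value) pairs where a secret-like attribute
    is assigned a string literal instead of a variable reference."""
    alerts = []
    for attr, value in re.findall(r'(\w+)\s*=\s*"([^"]*)"', hcl_text):
        if any(s in attr.lower() for s in SECRET_ATTRS) and value:
            alerts.append((attr, value))
    return alerts

print(find_hardcoded_secrets(MANIFEST))
```

Real tools parse HCL properly rather than pattern-matching text; the sketch only conveys what a security-category alert for a Terraform manifest looks like.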
Award ID(s):
2312321 2247141
PAR ID:
10540928
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-3132-5
Page Range / eLocation ID:
7 to 13
Subject(s) / Keyword(s):
configuration as code empirical study experience report infrastructure as code static analysis terraform
Format(s):
Medium: X
Location:
Atlanta, GA, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Sebastian Uchitel (Ed.)
    Despite being beneficial for managing computing infrastructure automatically, Puppet manifests are susceptible to security weaknesses, e.g., hard-coded secrets and use of weak cryptography algorithms. Adequate mitigation of security weaknesses in Puppet manifests is thus necessary to secure computing infrastructure that is managed with Puppet manifests. A characterization of how security weaknesses propagate and affect Puppet-based infrastructure management can inform practitioners of the relevance of detected security weaknesses, and help them take the actions necessary for mitigation. We conduct an empirical study with 17,629 Puppet manifests using Taint Tracker for Puppet Manifests (TaintPup). We observe 2.4 times more precision and 1.8 times more F-measure for TaintPup compared to a state-of-the-art security static analysis tool. From our empirical study, we observe security weaknesses propagating into 4,457 resources, i.e., Puppet-specific code elements used to manage infrastructure. A single instance of a security weakness can propagate into as many as 35 distinct resources. We observe security weaknesses propagating into 7 categories of resources, including resources used to manage continuous integration servers and network controllers. According to our survey with 24 practitioners, propagation of security weaknesses into data storage-related resources is rated as having the most severe impact on Puppet-based infrastructure management.
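The propagation finding — one weakness reaching many resources — is essentially reachability over a dependency graph. A minimal sketch of that idea (the graph, node names, and traversal are illustrative assumptions, not TaintPup's actual analysis):

```python
from collections import deque

# Hypothetical dependency graph: edges point from a code element to the
# elements that consume its value (names are illustrative only).
DEPENDS_ON = {
    "weak_md5_checksum": ["config_template"],
    "config_template": ["file[/etc/app.conf]", "service[app]"],
    "service[app]": ["exec[restart-ci-server]"],
    "unrelated_var": ["file[/tmp/cache]"],
}

def propagate(taint_source, graph):
    """Breadth-first walk from a tainted element to every element it reaches."""
    reached, queue = set(), deque([taint_source])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

print(sorted(propagate("weak_md5_checksum", DEPENDS_ON)))
```

Here a single weak-cryptography weakness reaches four downstream resources, while `file[/tmp/cache]` stays untainted — the same shape of result the study reports at scale.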
  2. Mauro Pezzè (Ed.)
    Context: Kubernetes has emerged as the de-facto tool for automated container orchestration. Business and government organizations are increasingly adopting Kubernetes for automated software deployments. Kubernetes is being used to provision applications in a wide range of domains, such as time series forecasting, edge computing, and high performance computing. Due to such a pervasive presence, Kubernetes-related security misconfigurations can cause large-scale security breaches. Thus, a systematic analysis of security misconfigurations in Kubernetes manifests, i.e., configuration files used for Kubernetes, can help practitioners secure their Kubernetes clusters. Objective: The goal of this paper is to help practitioners secure their Kubernetes clusters by identifying security misconfigurations that occur in Kubernetes manifests. Methodology: We conduct an empirical study with 2,039 Kubernetes manifests mined from 92 open-source software repositories to systematically characterize security misconfigurations in Kubernetes manifests. We also construct a static analysis tool called Security Linter for Kubernetes Manifests (SLI-KUBE) to quantify the frequency of the identified security misconfigurations. Results: In all, we identify 11 categories of security misconfigurations, such as absent resource limit, absent securityContext, and activation of hostIPC. Specifically, we identify 1,051 security misconfigurations in 2,039 manifests. We also observe the identified security misconfigurations affect entities that perform mesh-related load balancing, as well as provision pods and stateful applications. Furthermore, practitioners agreed to fix 60% of the 10 misconfigurations we reported. Conclusion: Our empirical study shows that Kubernetes manifests include security misconfigurations, which necessitates security-focused code reviews and the application of static analysis during Kubernetes manifest development.
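Three of the misconfiguration categories named in the abstract — absent resource limit, absent securityContext, and activation of hostIPC — can be illustrated with a simplified check over a parsed manifest. The dict below stands in for parsed YAML; the field names mirror the real Kubernetes schema, but the checker is a hedged sketch, not SLI-KUBE itself:

```python
# A Kubernetes Pod manifest represented as a plain dict (stand-in for
# parsed YAML). The hostIPC flag and the container's missing fields
# exhibit three misconfiguration categories the study names.
POD = {
    "kind": "Pod",
    "spec": {
        "hostIPC": True,                        # shares the host IPC namespace
        "containers": [
            {"name": "web", "image": "nginx"},  # no securityContext, no limits
        ],
    },
}

def check_pod(manifest):
    """Report the subset of misconfiguration categories this sketch covers."""
    alerts = []
    spec = manifest.get("spec", {})
    if spec.get("hostIPC"):
        alerts.append("activation of hostIPC")
    for c in spec.get("containers", []):
        if "securityContext" not in c:
            alerts.append(f"absent securityContext: {c['name']}")
        if "limits" not in c.get("resources", {}):
            alerts.append(f"absent resource limit: {c['name']}")
    return alerts

print(check_pod(POD))
```

A production linter would cover all 11 categories and walk every workload kind (Deployment, StatefulSet, etc.); the sketch only shows the shape of such checks.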
  3. Generative artificial intelligence (AI) technologies, such as ChatGPT, have shown promise in solving software engineering problems. However, these technologies have also been shown to be susceptible to generating software artifacts that contain quality issues. A systematic characterization of quality issues, such as smells in ChatGPT-generated artifacts, can help in providing recommendations for practitioners who use generative AI for container orchestration. We conduct an empirical study with 98 Kubernetes manifests to quantify smells in manifests generated by ChatGPT. Our empirical study shows: (i) 35.8% of the 98 generated Kubernetes manifests include at least one instance of a smell; (ii) two types of Kubernetes objects, namely Deployment and Service, are impacted by the identified smells; and (iii) the most frequently occurring smell is unset CPU and memory requirements. Based on our findings, we recommend that practitioners apply quality assurance activities to ChatGPT-generated Kubernetes manifests prior to using these manifests for container orchestration.
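The most frequent smell reported — unset CPU and memory requirements — can be detected with a straightforward traversal of a Deployment's container specs. A minimal sketch (the Deployment dict stands in for parsed ChatGPT output; the field paths follow the Kubernetes schema, but the checker is an illustrative assumption):

```python
# Simplified detector for the study's most frequent smell: unset CPU and
# memory requirements on a Deployment's containers.
DEPLOYMENT = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "api", "image": "api:1.0", "resources": {}},
    ]}}},
}

def unset_cpu_memory(manifest):
    """List containers that do not request CPU or memory."""
    smells = []
    containers = (manifest.get("spec", {}).get("template", {})
                  .get("spec", {}).get("containers", []))
    for c in containers:
        requests = c.get("resources", {}).get("requests", {})
        for key in ("cpu", "memory"):
            if key not in requests:
                smells.append(f"{c['name']}: unset {key}")
    return smells

print(unset_cpu_memory(DEPLOYMENT))
```

Running such a check before deploying generated manifests is exactly the kind of quality assurance activity the abstract recommends.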
  4. Cyber intrusion alerts are commonly collected by corporations to analyze network traffic and glean information about attacks perpetrated against the network. However, datasets of true malicious alerts are rare and generally show only one potential attack scenario out of many. Furthermore, it is difficult to expand the analysis of these alerts through artificial means, due to the complexity of feature dependencies within an alert and the lack of rare yet critical samples. This work proposes using a Mutual Information constrained Generative Adversarial Network to synthesize new alerts from historical data. Histogram Intersection and Conditional Entropy are used to show the performance of this model as well as its ability to learn intricate feature dependencies. The proposed models capture a much wider domain of alert feature values than standard Generative Adversarial Networks. Finally, we show that when alerts are viewed from the perspective of attack stages, the proposed models capture critical attacker behavior, giving direct semantic meaning to generated samples.
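One of the evaluation metrics the abstract names, Histogram Intersection, measures how much two normalized feature distributions overlap: the sum of the bin-wise minima, reaching 1.0 for identical histograms. A minimal sketch with toy alert-feature frequencies (the distributions are illustrative, not data from the study):

```python
def histogram_intersection(p, q):
    """Overlap of two normalized histograms: sum of bin-wise minima."""
    return sum(min(pi, qi) for pi, qi in zip(p, q))

# Toy per-bin frequencies of some alert feature, e.g., alert category.
real      = [0.5, 0.3, 0.2]   # distribution in historical alerts
generated = [0.4, 0.4, 0.2]   # distribution in GAN-synthesized alerts

print(histogram_intersection(real, generated))
```

The closer the score is to 1.0, the more faithfully the generator reproduces the historical feature distribution, which is how the metric serves as a performance indicator for the proposed models.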
  5. Static analysis is one of the most important tools for developers in the modern software industry. However, due to the limitations of current tools, many developers opt out of using static analysis in their development process. These limitations include the lack of a concise, coherent overview, missing support for multi-repository applications and multiple languages, and the lack of standardized integration mechanisms for third-party frameworks. We propose an evaluation metric for static analysis tools and offer a comparison of many common static analysis tools. To demonstrate the goal of our metric, we introduce the Fabric8-Analytics Quality Assurance Tool as an example of a tool that passes our evaluation metric. We demonstrate the use of this tool via a case study on the Fabric8-Analytics Framework, a framework for finding vulnerabilities in application dependencies. We issue a challenge to developers of modern static analysis tools to make their tools more usable and appealing to developers.