Microservices have emerged as a strong architecture for large-scale, distributed systems in the context of cloud computing and containerization. However, the size and complexity of microservice systems have strained current access control mechanisms. Intricate dependency structures, such as multi-hop dependency chains, go uncaptured by existing access control mechanisms, leaving microservice deployments open to adversarial actions and influence. This work introduces CloudCover, an access control mechanism and enforcement framework for microservices. CloudCover provides holistic, deployment-wide analysis of microservice operations and behaviors. It implements a verification-in-the-loop access control approach, mitigating multi-hop microservice threats through control-flow integrity checks. We evaluate these domain-relevant multi-hop threats and CloudCover under existing, real-world scenarios such as Istio’s open-source microservice example and under theoretical and synthetic network loads of 10,000 requests per second. Our results show that CloudCover is appropriate for use in real deployments, requiring no microservice code changes by administrators.
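The control-flow integrity idea in this abstract can be pictured as checking a request's propagated call chain against a deployment-wide policy of allowed service-to-service calls. The sketch below is a minimal illustration under assumed names (`ALLOWED_CALLS`, `verify_chain`, and the service identifiers are hypothetical), not CloudCover's actual interface or policy format.

```python
# Illustrative sketch only: a simplified check of a request's service call chain
# against an allowed call graph. Policy format and names are hypothetical.

# Deployment-wide policy: which service may call which (directed edges).
ALLOWED_CALLS = {
    "gateway": {"frontend"},
    "frontend": {"catalog", "cart"},
    "cart": {"payments"},
}

def verify_chain(call_chain):
    """Reject requests whose multi-hop path deviates from the allowed call graph."""
    for caller, callee in zip(call_chain, call_chain[1:]):
        if callee not in ALLOWED_CALLS.get(caller, set()):
            return False  # e.g. a compromised service hopping to an unrelated one
    return True

# A legitimate chain passes; a chain that bypasses the cart service is rejected.
assert verify_chain(["gateway", "frontend", "cart", "payments"])
assert not verify_chain(["gateway", "frontend", "payments"])
```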
Assessing Evolution of Microservices Using Static Analysis
Microservices have gained widespread adoption in enterprise software systems because they encapsulate the expertise of specific organizational subunits, offering valuable insights into internal processes and communication channels. The advantage of microservices lies in their self-contained nature, which streamlines management and deployment. However, this decentralized approach scatters knowledge across microservices, making it challenging to grasp the system holistically. As these systems continually evolve, substantial changes may affect not only individual microservices but also the entire system. This dynamic environment increases the complexity of system maintenance, emphasizing the need for centralized assessment methods to analyze these changes. This paper derives and introduces quantification metrics that serve as indicators for investigating system architecture evolution across different system versions. It focuses on two holistic viewpoints, inter-service interaction and data perspectives, derived through static analysis of the system’s source code. The approach is demonstrated with a case study using established microservice system benchmarks.
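As a rough illustration of version-to-version quantification, the sketch below compares inter-service call graphs (as might be extracted by static analysis) from two system versions and reports a simple churn indicator. The edge sets, the `interaction_delta` helper, and the churn measure are assumptions made for illustration; they are not the metrics defined in the paper.

```python
# Illustrative sketch only: comparing statically extracted inter-service call
# graphs across two versions. Metric names are assumptions, not the paper's.

def interaction_delta(old_edges, new_edges):
    """Quantify architectural drift as call edges added or removed between versions."""
    added = new_edges - old_edges
    removed = old_edges - new_edges
    return {"added": added, "removed": removed, "interaction_churn": len(added) + len(removed)}

v1 = {("orders", "inventory"), ("orders", "billing")}
v2 = {("orders", "inventory"), ("orders", "billing"), ("orders", "notifications")}

print(interaction_delta(v1, v2))  # one new call edge -> churn of 1
```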
- Award ID(s): 2409933
- PAR ID: 10572039
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Applied Sciences
- Volume: 14
- Issue: 22
- ISSN: 2076-3417
- Page Range / eLocation ID: 10725
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Reducing tail latency has become a crucial issue for optimizing the performance of online cloud services and distributed applications. In distributed applications, there are many causes of high end-to-end tail latency, including operating system delays, request re-ordering due to fan-out/fan-in, and network congestion. Although recent research has focused on reducing tail latency for individual application components, such as by replicating requests and scheduling, in this paper we argue for a holistic approach that reduces end-to-end tail latency across application components. We propose TailClipper, a distributed scheduler that tags each arriving request with an arrival timestamp and propagates it across the microservices’ call chain. TailClipper then uses arrival timestamps to implement an oldest-request-first scheduler that combines global first-come, first-served ordering with a limited form of processor sharing to reduce end-to-end tail latency. In doing so, TailClipper can counter the performance degradation caused by request reordering in multi-tiered and microservices-based applications. We implement TailClipper as a userspace Linux scheduler and evaluate it using cloud workload traces and a real-world microservices application. Compared to state-of-the-art schedulers, our experiments reveal that TailClipper improves the 99th percentile response time by up to 81%, while also improving the mean response time and the system throughput by up to 54% and 29%, respectively, under high loads.
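As a rough sketch of the oldest-request-first idea, the queue below orders work by an arrival timestamp that the first tier assigns and downstream tiers reuse, preserving global first-come, first-served ordering despite fan-out/fan-in reordering. The class and field names are assumptions for illustration only, not TailClipper's userspace scheduler implementation.

```python
# Illustrative sketch only: oldest-request-first queue keyed on a propagated
# arrival timestamp. Structure and names are hypothetical.
import heapq
import itertools
import time

class OldestRequestFirstQueue:
    def __init__(self):
        self._heap = []                 # (arrival_ts, seq, request)
        self._seq = itertools.count()   # tie-breaker for equal timestamps

    def enqueue(self, request, arrival_ts=None):
        # Reuse the timestamp propagated from the upstream tier if present;
        # otherwise this tier is the entry point and stamps the request itself.
        ts = arrival_ts if arrival_ts is not None else time.monotonic()
        heapq.heappush(self._heap, (ts, next(self._seq), request))
        return ts  # forward this value to downstream microservices

    def dequeue(self):
        # Always serve the globally oldest outstanding request first.
        return heapq.heappop(self._heap)[2] if self._heap else None
```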
-
Background: Understanding dependencies within microservices is essential for maintaining and evolving scalable and efficient software architectures. Dependencies influence how changes in one microservice might propagate to other microservices. With the decentralized nature of microservices, these dependencies might not be explicit to developers, leading to unique challenges in modern software development environments. Objective: The objective of this study is to synthesize existing literature on microservice dependencies, identify the types of dependencies, and examine the strategies employed to manage and analyze these relationships. This effort aims to elucidate how dependencies affect microservice systems and to provide a comprehensive overview of dependency management within microservices. Method: We conducted a multivocal literature review, starting with an initial dataset of 1,733 papers from academic (white) literature. This corpus was narrowed down through a rigorous filtering process to 45 key publications that address the identification, management, and impacts of dependencies in microservices. Additionally, we incorporated 926 articles from grey literature sources such as Google, Stack Overflow, and Stack Exchange, expanding the scope beyond traditional academic research. After the filtration process, 45 articles were fully synthesized to integrate practical insights and professional experiences into our review. Results: The review identifies several types of dependencies in microservice systems and synthesizes this information into a unified dependency taxonomy. It also highlights a range of approaches to dependency management, revealing a significant gap in systematic approaches for constructing dependency taxonomies and a need for integrated management tools. The findings underscore the fragmented nature of existing dependency management practices and the potential for more holistic approaches. Conclusion: This study provides valuable insights for researchers and practitioners, outlining effective strategies and pointing out areas needing improvement in dependency management. By offering a structured overview of the topic, the study serves as a roadmap for future research and development efforts to enhance the robustness and maintainability of microservices.
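One way to picture why such dependencies matter is to model them as a directed graph and ask which services a change may ripple to. The sketch below is purely illustrative; the edge data and the `change_impact` helper are hypothetical and not drawn from the reviewed literature.

```python
# Illustrative sketch only: transitive change-impact query over a hypothetical
# microservice dependency graph.

DEPENDS_ON = {            # service -> services it calls or consumes data from
    "checkout": {"cart", "payments"},
    "cart": {"catalog"},
    "payments": {"ledger"},
}

def change_impact(changed, depends_on=DEPENDS_ON):
    """Return every service that transitively depends on the changed service."""
    dependents = {}  # invert the edges: who depends on whom
    for svc, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(svc)
    impacted, stack = set(), [changed]
    while stack:
        for svc in dependents.get(stack.pop(), set()):
            if svc not in impacted:
                impacted.add(svc)
                stack.append(svc)
    return impacted

print(change_impact("catalog"))  # {'cart', 'checkout'}
```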
-
Test coverage is a critical aspect of the software development process, aiming for overall confidence in the product. When considering cloud-native systems, testing becomes complex, as it becomes necessary to deal with multiple distributed microservices that are developed by different teams and may change quite rapidly. In such a dynamic environment, it is important to track test coverage. This is especially relevant for end-to-end (E2E) and API testing, as these might be developed by teams distinct from the microservice developers. Moreover, E2E testing adds a layer of indirection: testers may see the user interface but not know how comprehensive the test suites are. To ensure confidence in system health checks, mechanisms and instruments are needed to indicate the level of test coverage. Unfortunately, such mechanisms are lacking for cloud-native systems. This manuscript introduces test coverage metrics for evaluating the extent of E2E and API test suite coverage for microservice endpoints. It elaborates on automating the calculation of these metrics with access to microservice codebases and system testing traces, delves into the process, and offers feedback with a visual perspective, emphasizing test coverage across microservices. To demonstrate the viability of the proposed approach, we implement a proof-of-concept tool and perform a case study on a well-established system benchmark, assessing existing E2E and API test suites with regard to test coverage using the proposed endpoint metrics. The results of endpoint coverage reflect the diverse perspectives of both testing approaches. API testing achieved 91.98% coverage in the benchmark, whereas E2E testing achieved 45.42%. Combining both coverage results yielded a slight increase to approximately 92.36%, attributed to a few endpoints tested exclusively through one testing approach and not covered by the other.
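At its core, endpoint coverage reduces to the number of endpoints exercised by at least one test divided by the number of known endpoints, computed per test suite and for their union. The sketch below is a minimal illustration with placeholder endpoint names and counts; it is an assumed formulation, and the numbers are not the benchmark results reported above.

```python
# Illustrative sketch only: endpoint coverage for API tests, E2E tests, and both.
# Endpoint identifiers and hit sets are placeholders.

def endpoint_coverage(covered, all_endpoints):
    """Fraction of known microservice endpoints exercised by a test suite."""
    return len(covered & all_endpoints) / len(all_endpoints)

eps = sorted(f"svc{i}/ep{j}" for i in range(4) for j in range(5))  # 20 endpoints
all_eps = set(eps)
api_hits = set(eps[:18])    # endpoints reached by API tests
e2e_hits = set(eps[10:19])  # E2E tests reach one endpoint the API tests miss

print(endpoint_coverage(api_hits, all_eps))             # 0.90
print(endpoint_coverage(e2e_hits, all_eps))             # 0.45
print(endpoint_coverage(api_hits | e2e_hits, all_eps))  # 0.95 combined
```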
-
Cloud applications are increasingly shifting from large monolithic services to complex graphs of loosely-coupled microservices. Despite their benefits, microservices are prone to cascading performance issues, which can lead to prolonged periods of degraded performance. We present Sage, a machine learning-driven root cause analysis system for interactive cloud microservices that is both accurate and practical. We show that Sage correctly identifies the root causes of performance issues across a diverse set of microservices and takes action to address them, leading to more predictable, performant, and efficient cloud systems.