
Title: Formal Modelling and Automated Trade-off Analysis of Enforcement Architectures for Cryptographic Access Control in the Cloud
Abstract:
To facilitate the adoption of the cloud by organizations, Cryptographic Access Control (CAC) is the obvious solution to control data sharing among users while preventing partially trusted Cloud Service Providers (CSPs) from accessing sensitive data. Indeed, several CAC schemes have been proposed in the literature. Despite their differences, the available solutions are based on a common set of entities (e.g., a data storage service or a proxy mediating the access of users to encrypted data) that operate in different security domains (e.g., on-premise or at the CSP). However, most of these CAC schemes assume a fixed assignment of entities to domains; this has security and usability implications that are not made explicit and can make a CAC scheme inappropriate for scenarios with specific trust assumptions and requirements. For instance, assuming that the proxy runs at the premises of the organization avoids the vendor lock-in effect but may raise other security concerns (e.g., malicious insiders). To the best of our knowledge, no previous work considers how to select the best possible architecture (i.e., the assignment of entities to domains) for deploying a CAC scheme given the trust assumptions and requirements of a scenario. In this article, we propose a methodology to assist administrators in exploring different architectures for the enforcement of CAC schemes in a given scenario. We do this by identifying the possible architectures underlying the CAC schemes available in the literature and formalizing them in simple set theory. This allows us to reduce the problem of selecting the most suitable architectures, satisfying a heterogeneous set of trust assumptions and requirements arising from the considered scenario, to a decidable Multi-objective Combinatorial Optimization Problem (MOCOP) for which state-of-the-art solvers can be invoked. Finally, we show how we use the capability of solving the MOCOP to build a prototype tool that assists administrators in first performing a "what-if" analysis to explore the trade-offs among the various architectures and then in using available standards and tools (such as TOSCA and Cloudify) for automated deployment in multiple CSPs.
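To give a flavor of how such an architecture-selection problem can be cast as a multi-objective combinatorial optimization, the minimal Python sketch below enumerates assignments of entities to domains, filters them by a trust constraint, and keeps the Pareto-optimal ones. The entity and domain names, the objective values, and the single feasibility constraint are illustrative assumptions and are not taken from the article's formal model, which targets dedicated MOCOP solvers rather than brute-force enumeration.

    from itertools import product

    # Illustrative entities and domains; the names and costs below are assumptions
    # made for this sketch, not the article's actual model.
    ENTITIES = ["proxy", "data_storage", "key_manager"]
    DOMAINS = ["on-premise", "CSP"]

    # Two competing objectives to minimize: exposure to the partially trusted CSP
    # and the maintenance burden of running components on-premise.
    CSP_EXPOSURE = {("proxy", "CSP"): 3, ("data_storage", "CSP"): 1, ("key_manager", "CSP"): 5}
    ONPREM_BURDEN = {("proxy", "on-premise"): 2, ("data_storage", "on-premise"): 4,
                     ("key_manager", "on-premise"): 1}

    def objectives(assignment):
        """Return the (exposure, burden) vector of an entity-to-domain assignment."""
        exposure = sum(CSP_EXPOSURE.get(pair, 0) for pair in assignment.items())
        burden = sum(ONPREM_BURDEN.get(pair, 0) for pair in assignment.items())
        return exposure, burden

    def feasible(assignment):
        """Example trust assumption: the key manager never runs at the CSP."""
        return assignment["key_manager"] == "on-premise"

    def pareto_front(candidates):
        """Keep the assignments whose objective vectors are not dominated."""
        return [(a, va) for a, va in candidates
                if not any(all(x <= y for x, y in zip(vb, va)) and vb != va
                           for _, vb in candidates)]

    candidates = []
    for combo in product(DOMAINS, repeat=len(ENTITIES)):
        assignment = dict(zip(ENTITIES, combo))
        if feasible(assignment):
            candidates.append((assignment, objectives(assignment)))

    for assignment, value in pareto_front(candidates):
        print(assignment, "->", value)

A real deployment would also encode further requirements (e.g., redundancy or regulatory constraints) and hand the resulting model to a MOCOP solver rather than enumerating assignments exhaustively.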
Authors:
Award ID(s): 1704139
Publication Date:
NSF-PAR ID: 10369128
Journal Name: ACM Transactions on Privacy and Security
Volume: 25
Issue: 1
Page Range or eLocation-ID: 1 to 37
ISSN: 2471-2566
Sponsoring Org: National Science Foundation
More Like this
  1. In today's mobile-first, cloud-enabled world, where simulation-enabled training is designed for use anywhere and from multiple types of devices, new paradigms are needed to control access to sensitive data. Large, distributed data sets sourced from a wide variety of sensors require advanced approaches to authorization and access control (AC). Motivated by large-scale, publicized data breaches and data privacy laws, data protection policies and fine-grained AC mechanisms are an imperative in data-intensive simulation systems. Although the public may suffer security-incident fatigue, there are significant impacts to corporations and government organizations in the form of settlement fees and senior-executive dismissals. This paper presents an analysis of the challenges of controlling access to big data sets. Implementation guidelines are provided based upon new attribute-based access control (ABAC) standards. Best practices start with AC for the security of large data sets processed by models and simulations (M&S). The widely supported eXtensible Access Control Markup Language (XACML) is currently the predominant framework for big-data ABAC. The more recently developed Next Generation Access Control (NGAC) standard addresses additional areas in securing distributed, multi-owner big data sets. We present a comparison and evaluation of standards and technologies for different simulation data protection requirements. A concrete example is included to illustrate the differences. The example scenario is based upon synthetically generated, very sensitive health care data combined with less sensitive data. This model data set is accessed by representative groups with a range of trust, from highly trusted roles to general users. The AC security challenges and approaches to mitigate risk are discussed. (A toy sketch of an attribute-based decision appears after this list.)
  2. The healthcare sector is constantly improving patient health record systems. However, these systems face a significant challenge when confronted with patient health record (PHR) data due to its sensitivity. In addition, patients' data is generally stored and spread across various healthcare facilities and providers. This arrangement of distributed data becomes problematic whenever patients want to access their health records and then share them with their care providers, which yields a lack of interoperability among various healthcare systems. Moreover, most patient health record systems adopt a centralized management structure and deploy PHRs to the cloud, which raises privacy concerns when sharing patient information over a network. Therefore, it is vital to design a framework that considers patient privacy and data security when sharing sensitive information with healthcare facilities and providers. This paper proposes a blockchain framework for secure patient health record sharing that allows patients to have full access to and control over their health records. With this novel approach, our framework applies Ethereum blockchain smart contracts, the Inter-Planetary File System (IPFS) as an off-chain storage system, and the NuCypher protocol, which provides key management and blockchain-based proxy re-encryption, to create a secure on-demand patient health record sharing system. Results show that the proposed framework is more secure than other schemes, and the PHRs will not be accessible to unauthorized providers or users. In addition, all encrypted data will only be accessible to and readable by verified entities set by the patient. (A toy walk-through of the sharing steps appears after this list.)
  3. Mobile edge computing (MEC) is an emerging paradigm that integrates computing resources in wireless access networks to process computational tasks in close proximity to mobile users with low latency. In this paper, we propose an online double deep Q-network (DDQN)-based learning scheme for task assignment in dynamic MEC networks, which enables multiple distributed edge nodes and a cloud data center to jointly process user tasks to achieve optimal long-term quality of service (QoS). The proposed scheme captures a wide range of dynamic network parameters, including non-stationary node computing capabilities, network delay statistics, and task arrivals. It learns the optimal task assignment policy with no assumption on the knowledge of the underlying dynamics. In addition, the proposed algorithm accounts for both performance and complexity, and addresses the state and action space explosion problem in conventional Q-learning. The evaluation results show that the proposed DDQN-based task assignment scheme significantly improves the QoS performance, compared to the existing schemes that do not consider the effects of network dynamics on the expected long-term rewards, while scaling reasonably well as the network size increases. (A minimal sketch of the double-DQN target computation appears after this list.)
  4. Zero trust (ZT) is the term for an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. It assumes no implicit trust is granted to assets or user accounts based solely on their physical or network location. We have billions of devices in IoT ecosystems connected to enable smart environments, and these devices are scattered around different locations, sometimes multiple cities or even multiple countries. Moreover, the deployment of resource-constrained devices motivates the integration of IoT and cloud services. This adoption of a plethora of technologies expands the attack surface and positions the IoT ecosystem as a target for many potential security threats. This complexity has outstripped legacy perimeter-based security methods, as there is no single, easily identified perimeter for different use cases in IoT. Hence, we believe that the need arises to incorporate ZT guiding principles in workflows, systems design, and operations that can be used to improve the security posture of IoT applications. This paper motivates the need to implement ZT principles when developing access control models for smart IoT systems. It first provides a structured mapping between the ZT basic tenets and the PEI framework when designing and implementing a ZT authorization system. It proposes the ZT authorization requirements framework (ZT-ARF), which provides a structured approach to authorization policy models in ZT systems. Moreover, it analyzes the requirements of access control models in IoT within the proposed ZT-ARF and presents the vision and need for a ZT score-based authorization framework (ZT-SAF) that is capable of maintaining the access control requirements for ZT IoT connected systems. (A toy score-based authorization check appears after this list.)
  5. The main goal of traceable cryptography is to protect against unauthorized redistribution of cryptographic functionalities. Such schemes provide a way to embed identities (i.e., a "mark") within cryptographic objects (e.g., decryption keys in an encryption scheme, signing keys in a signature scheme). In turn, the tracing guarantee ensures that any "pirate device" that successfully replicates the underlying functionality can be traced back to the set of identities used to build the device. In this work, we study traceable pseudorandom functions (PRFs). As PRFs are the workhorses of symmetric cryptography, traceable PRFs are useful for augmenting symmetric cryptographic primitives with strong traceable security guarantees. However, existing constructions of traceable PRFs either rely on strong notions like indistinguishability obfuscation or satisfy weak security guarantees like single-key security (i.e., tracing only works against adversaries that possess a single marked key). In this work, we show how to use fingerprinting codes to upgrade a single-key traceable PRF into a fully collusion-resistant traceable PRF, where security holds regardless of how many keys the adversary possesses. We additionally introduce a stronger notion of security where tracing security holds even against active adversaries that have oracle access to the tracing algorithm. In conjunction with known constructions of single-key traceable PRFs, we obtain the first fully collusion-resistant traceable PRF from standard lattice assumptions. Our traceable PRFs directly imply new lattice-based secret-key traitor tracing schemes that are CCA-secure and where tracing security holds against active adversaries that have access to the tracing oracle.
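For the ABAC comparison in item 1, the following toy Python sketch shows the attribute-based decision pattern that frameworks such as XACML and NGAC implement in far richer form; the attribute names, the two rules, and the default-deny choice are invented for illustration and do not reproduce either standard's policy language.

    # Toy attribute-based access decision; attribute names and rules are
    # illustrative assumptions, not XACML or NGAC syntax.
    POLICY = [
        ("trusted clinicians may read very sensitive records",
         lambda subj, res, act: subj["role"] == "clinician" and subj["trust"] == "high"
                                and res["sensitivity"] == "very-sensitive" and act == "read"),
        ("any authenticated user may read low-sensitivity records",
         lambda subj, res, act: subj["authenticated"] and res["sensitivity"] == "low"
                                and act == "read"),
    ]

    def decide(subject, resource, action):
        """Permit if any rule matches; deny by default."""
        for _description, condition in POLICY:
            if condition(subject, resource, action):
                return "Permit"
        return "Deny"

    clinician = {"role": "clinician", "trust": "high", "authenticated": True}
    analyst = {"role": "analyst", "trust": "low", "authenticated": True}
    record = {"sensitivity": "very-sensitive"}
    print(decide(clinician, record, "read"))  # Permit
    print(decide(analyst, record, "read"))    # Deny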
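For the record-sharing framework in item 2, this toy simulation shows only the sequence of steps (encrypt, store off-chain, register a pointer on-chain, grant access). Dictionaries stand in for IPFS and the Ethereum ledger, and an insecure XOR placeholder stands in for NuCypher's proxy re-encryption, so none of the real APIs or cryptography are reproduced here.

    import hashlib
    import os

    ledger = {}           # record_id -> content hash (plays the smart-contract role)
    off_chain_store = {}  # content hash -> ciphertext (plays the IPFS role)

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        """Placeholder cipher for the sketch only; NOT secure."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def patient_upload(record_id: str, record: bytes, patient_key: bytes) -> None:
        ciphertext = xor_cipher(record, patient_key)
        digest = hashlib.sha256(ciphertext).hexdigest()
        off_chain_store[digest] = ciphertext   # store the encrypted PHR off-chain
        ledger[record_id] = digest             # register an integrity pointer on-chain

    def provider_fetch(record_id: str, granted_key: bytes) -> bytes:
        # In the real framework the patient authorizes a proxy re-encryption so the
        # provider never learns the patient's key; the granted key models that grant.
        digest = ledger[record_id]
        return xor_cipher(off_chain_store[digest], granted_key)

    key = os.urandom(16)
    patient_upload("phr-001", b"blood pressure 120/80", key)
    print(provider_fetch("phr-001", key))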
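For the task-assignment scheme in item 3, the defining step of double DQN is that the online network selects the next action while the target network evaluates it when building the learning target. The NumPy sketch below computes that target for a single transition; the state and action dimensions, the linear stand-ins for the two networks, and the discount factor are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    N_ACTIONS = 4   # e.g., which edge node (or the cloud) receives the task
    STATE_DIM = 6   # e.g., node loads, delay statistics, task size
    GAMMA = 0.95    # discount factor weighting long-term QoS

    # Toy linear Q-functions standing in for the online and target deep networks.
    online_weights = rng.normal(size=(STATE_DIM, N_ACTIONS))
    target_weights = online_weights.copy()

    def q_values(weights, state):
        return state @ weights

    def ddqn_target(reward, next_state, done):
        """Double-DQN target: action chosen by the online net, valued by the target net."""
        if done:
            return reward
        best_action = int(np.argmax(q_values(online_weights, next_state)))
        return reward + GAMMA * q_values(target_weights, next_state)[best_action]

    next_state = rng.normal(size=STATE_DIM)
    print(ddqn_target(reward=1.0, next_state=next_state, done=False))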
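For the score-based authorization direction sketched in item 4 (ZT-SAF), the toy function below scores each request from current signals instead of trusting network location; the signal names, weights, and per-sensitivity thresholds are illustrative assumptions, not part of the proposed framework.

    # Toy zero-trust, score-based authorization; weights and thresholds are
    # assumptions made for this example.
    WEIGHTS = {
        "device_attested": 0.35,    # device integrity verified
        "mfa_passed": 0.30,         # user recently completed MFA
        "expected_location": 0.20,  # request comes from a usual location
        "patch_level_ok": 0.15,     # firmware/OS is up to date
    }

    def trust_score(signals: dict) -> float:
        return sum(weight for name, weight in WEIGHTS.items() if signals.get(name))

    def authorize(signals: dict, sensitivity: str) -> bool:
        threshold = {"low": 0.3, "medium": 0.6, "high": 0.85}[sensitivity]
        return trust_score(signals) >= threshold

    request = {"device_attested": True, "mfa_passed": True,
               "expected_location": False, "patch_level_ok": True}
    print(authorize(request, "medium"))  # True: score 0.80 >= 0.60
    print(authorize(request, "high"))    # False: score 0.80 < 0.85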