

Title: Role-Based Ecosystem for the Design, Development, and Deployment of Secure Multi-Party Data Analytics Applications
Software applications that employ secure multi-party computation (MPC) can empower individuals and organizations to benefit from privacy-preserving data analyses when data sharing is encumbered by confidentiality concerns, legal constraints, or corporate policies. MPC is already being incorporated into software solutions in some domains; however, individual use cases do not fully convey the variety, extent, and complexity of the opportunities MPC presents. This position paper articulates a role-based perspective that can provide insight into how future research directions, infrastructure development and evaluation approaches, and deployment practices for MPC may evolve. Drawing on our own lessons from existing real-world deployments and on the fundamental characteristics of MPC that make it a compelling technology, we propose a role-based conceptual framework for describing MPC deployment scenarios. Our framework acknowledges and leverages a novel assortment of roles that emerge from the fundamental ways in which MPC protocols support federation of functionalities and responsibilities. Defining these roles in terms of the new opportunities for federation that MPC enables can, in turn, help identify and organize the capabilities, concerns, incentives, and trade-offs that affect the entities (software engineers, government regulators, corporate executives, end-users, and others) that participate in an MPC deployment scenario. This framework can not only guide the development of an ecosystem of modular and composable MPC tools, but can also make explicit some of the opportunities that researchers and software engineers (and any organizations they form) have to differentiate and specialize the artifacts and services they choose to design, develop, and deploy. We demonstrate how this framework can be used to describe existing MPC deployment scenarios, how new opportunities in a scenario can be identified by disentangling the roles inhabited by the involved parties, and how this can motivate the development of MPC libraries and software tools that specialize not by application domain but by role.
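As a concrete illustration of how MPC federates functionality across roles, the following minimal Python sketch (not drawn from the paper) separates the input-party, compute-party, and result-party roles in a toy additive secret-sharing protocol; the party names, the three-share setup, and the salary-aggregation example are illustrative assumptions only.

```python
# A minimal sketch (not from the paper) of MPC role separation:
# input parties secret-share their data, compute parties operate only on
# shares, and a result party reconstructs the agreed output.
import secrets

P = 2**61 - 1  # fixed prime modulus for additive sharing


def share(value: int, n_parties: int) -> list[int]:
    """Input-party role: split a private value into n additive shares."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares


def local_add(shares_a: list[int], shares_b: list[int]) -> list[int]:
    """Compute-party role: each party adds its own shares, never seeing inputs."""
    return [(a + b) % P for a, b in zip(shares_a, shares_b)]


def reconstruct(shares: list[int]) -> int:
    """Result-party role: combine shares to reveal only the agreed output."""
    return sum(shares) % P


# Example: two input parties contribute salaries, three compute parties hold
# shares, and only the aggregate is revealed to the result party.
alice_shares = share(52_000, 3)
bob_shares = share(61_000, 3)
sum_shares = local_add(alice_shares, bob_shares)
print(reconstruct(sum_shares))  # 113000
```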
Award ID(s):
1718135 1739000
NSF-PAR ID:
10165776
Author(s) / Creator(s):
; ; ; ; ; ;
Date Published:
Journal Name:
IEEE SecDev
Volume:
2019
Page Range / eLocation ID:
129 - 140
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Why the new findings matter

    The process of teaching and learning is complex, multifaceted and dynamic. This paper contributes a seminal resource to highlight the digitisation of the educational sciences by demonstrating how new machine learning methods can be effectively and reliably used in research, education and practical application.

    Implications for educational researchers and policy makers

    The progressing digitisation of societies around the globe and the impact of the SARS-CoV-2 pandemic have highlighted the vulnerabilities and shortcomings of educational systems. These developments have shown the necessity of providing effective educational processes that can support sometimes overwhelmed teachers in digitally imparting knowledge, as planned by many governments and policy makers. Educational scientists, corporate partners and stakeholders can make use of machine learning techniques to develop advanced, scalable educational processes that account for individual needs of learners and that can complement and support existing learning infrastructure. The proper use of machine learning methods can contribute essential applications to the educational sciences, such as (semi-)automated assessments, algorithmic grading, personalised feedback and adaptive learning approaches. However, these promises are strongly tied to at least a basic understanding of the concepts of machine learning and a degree of data literacy, which has to become the standard in education and the educational sciences.

    Demonstrating both the promises and the challenges inherent to the collection and analysis of large educational data sets with machine learning, this paper covers the essential topics that the application of these methods requires and provides easy-to-follow resources and code to facilitate the process of adoption.
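    As one illustration of the kind of application the abstract mentions, the sketch below (not taken from the paper) shows a toy semi-automated short-answer grader built with scikit-learn; the example answers, labels, and model choice are assumptions for demonstration only.

```python
# A toy illustration (not from the paper) of semi-automated scoring of short
# free-text answers: TF-IDF features plus a linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: answers labelled correct (1) or incorrect (0).
answers = [
    "photosynthesis converts light energy into chemical energy",
    "plants use sunlight to make glucose from carbon dioxide and water",
    "photosynthesis is when animals breathe oxygen",
    "it is the process of eating food for energy",
]
labels = [1, 1, 0, 0]

# Pipeline: bag-of-words features followed by a linear classifier.
grader = make_pipeline(TfidfVectorizer(), LogisticRegression())
grader.fit(answers, labels)

# New answers are scored automatically; low-confidence cases could be routed
# to a human grader in a semi-automated workflow.
new = ["sunlight is turned into chemical energy by plants"]
print(grader.predict_proba(new))  # predicted probability of each label
```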

     
  2. Multisectoral models of regional bio-physical systems simulate policy responses to climate change and support climate mitigation and adaptation planning at multiple scales. Challenges facing these efforts include sometimes weak understandings of causal relationships, lack of integrated data streams, spatial and temporal incongruities with policy interests, and how to incorporate dynamics associated with human values, governance structures, and vulnerable populations. There are two general approaches to developing integrated models. The first involves stakeholder involvement in model design -- a participatory modeling approach. The second builds on existing models, which can be done in two ways: by directly integrating them or by soft-linking them into a confederation. A benefit of utilizing existing models is the leveraging of validated and familiar models that lend credibility. We report opportunities and challenges manifested in one effort to develop a regional food, energy, and water systems (FEWS) modeling framework using existing bio-physical models. The C-FEWS modeling framework (Climate-induced extremes on the linked food, energy, water system) is intended to identify and evaluate response options to extreme weather in the Midwest and Northeast United States through the year 2100. We interviewed ten modelers associated with development of the C-FEWS framework and ten stakeholders from government agencies, planning agencies, and non-governmental organizations in New England. We inquired about their perspectives on the roles and challenges of regional FEWS modeling frameworks in informing planning, and about the information needed to support planning in integrated food, energy, and water systems. We also analyzed discussions from meetings among modelers and among stakeholders and modelers. These sources reveal many agreements among modelers and stakeholders about the role of modeling frameworks, their benefits for policymakers, and the types of outputs they should produce. They also identify challenges in developing regional modeling frameworks that couple existing models and in balancing model capabilities with stakeholder preferences for information. The results indicate the importance of modelers and stakeholders engaging in dialogue to craft modeling frameworks and scenarios that are credible and relevant for policymakers. We reflect on the implications for how FEWS modeling frameworks comprised of existing bio-physical models can be designed to better inform policy making at the regional scale.
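    A schematic sketch of the soft-linked confederation idea follows (this is not the C-FEWS code): two stand-in models remain separate and exchange outputs through a thin coupler at each time step; the model equations, variable names, and scenario values are placeholders.

```python
# A schematic sketch (not the C-FEWS framework) of soft-linking existing
# models: each model stays separate and a thin coupler passes outputs between
# them year by year. All equations and numbers are placeholders.
def water_model(year: int, rainfall: float) -> float:
    """Stand-in for an existing hydrology model: returns available water (km^3)."""
    return 0.6 * rainfall


def energy_model(year: int, available_water: float) -> float:
    """Stand-in for an existing energy model: hydro output plus thermal cooling limits."""
    hydro = 0.2 * available_water
    thermal = min(50.0, available_water * 0.5)
    return hydro + thermal  # TWh, illustrative units


def coupler(scenario_rainfall: dict[int, float]) -> dict[int, float]:
    """Soft link: run each model in sequence per year, passing data between them."""
    results = {}
    for year, rainfall in scenario_rainfall.items():
        water = water_model(year, rainfall)
        results[year] = energy_model(year, water)
    return results


# Illustrative extreme-weather scenario: a drought year in 2050.
print(coupler({2049: 120.0, 2050: 40.0, 2051: 110.0}))
```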
  3. To facilitate the adoption of cloud by organizations, Cryptographic Access Control (CAC) is the obvious solution to control data sharing among users while preventing partially trusted Cloud Service Providers (CSP) from accessing sensitive data. Indeed, several CAC schemes have been proposed in the literature. Despite their differences, available solutions are based on a common set of entities—e.g., a data storage service or a proxy mediating the access of users to encrypted data—that operate in different (security) domains—e.g., on-premise or the CSP. However, the majority of these CAC schemes assume a fixed assignment of entities to domains; this has security and usability implications that are not made explicit and can make a CAC scheme inappropriate for scenarios with specific trust assumptions and requirements. For instance, assuming that the proxy runs at the premises of the organization avoids the vendor lock-in effect but may give rise to other security concerns (e.g., malicious insiders). To the best of our knowledge, no previous work considers how to select the best possible architecture (i.e., the assignment of entities to domains) to deploy a CAC scheme for the trust assumptions and requirements of a given scenario. In this article, we propose a methodology to assist administrators in exploring different architectures for the enforcement of CAC schemes in a given scenario. We do this by identifying the possible architectures underlying the CAC schemes available in the literature and formalizing them in simple set theory. This allows us to reduce the problem of selecting the most suitable architectures satisfying a heterogeneous set of trust assumptions and requirements arising from the considered scenario to a decidable Multi-objective Combinatorial Optimization Problem (MOCOP) for which state-of-the-art solvers can be invoked. Finally, we show how we use the capability of solving the MOCOP to build a prototype tool that assists administrators in first performing a “What-if” analysis to explore the trade-offs among the various architectures and then in using available standards and tools (such as TOSCA and Cloudify) for automated deployment across multiple CSPs.
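    The architecture-selection step can be illustrated with a toy multi-objective combinatorial optimization in Python (this is not the authors' formalization or tool): candidate entity-to-domain assignments are enumerated, infeasible ones are filtered by an example trust requirement, and the Pareto front over two illustrative objectives is reported. All entity names, scores, and constraints are assumptions.

```python
# A toy sketch (not the authors' tool) of architecture selection as a
# multi-objective combinatorial optimization: enumerate entity-to-domain
# assignments, drop those violating trust requirements, and keep the Pareto
# front over illustrative "trust risk" and "operational cost" scores.
from itertools import product

entities = ["proxy", "key_manager", "data_storage"]
domains = ["on_premise", "csp"]

# Illustrative per-assignment scores: (trust_risk, operational_cost).
scores = {
    ("proxy", "on_premise"): (0, 2), ("proxy", "csp"): (2, 1),
    ("key_manager", "on_premise"): (0, 2), ("key_manager", "csp"): (3, 1),
    ("data_storage", "on_premise"): (1, 3), ("data_storage", "csp"): (1, 1),
}


def feasible(arch: dict[str, str]) -> bool:
    """Example trust requirement: keys must never be managed inside the CSP."""
    return arch["key_manager"] == "on_premise"


def objectives(arch: dict[str, str]) -> tuple[int, int]:
    risk = sum(scores[(e, d)][0] for e, d in arch.items())
    cost = sum(scores[(e, d)][1] for e, d in arch.items())
    return risk, cost


def dominated(a: dict[str, str], b: dict[str, str]) -> bool:
    """True if b is no worse than a on every objective and strictly better on one."""
    oa, ob = objectives(a), objectives(b)
    return all(y <= x for x, y in zip(oa, ob)) and ob != oa


candidates = [dict(zip(entities, combo)) for combo in product(domains, repeat=len(entities))]
candidates = [a for a in candidates if feasible(a)]
pareto = [a for a in candidates if not any(dominated(a, b) for b in candidates)]
for arch in pareto:
    print(arch, objectives(arch))
```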
  4. Enterprise software updates depend on the interaction between user and developer organizations. This interaction becomes especially complex when a single developer organization writes software that services hundreds of different user organizations. Miscommunication during patching and deployment efforts leads to insecure or malfunctioning software installations. While developers oversee the code, the update process starts and ends outside their control. Since developer test suites may fail to capture buggy behavior, finding and fixing these bugs starts with user-generated bug reports and third-party disclosures. The process ends when the fixed code is deployed in production. Any friction between user and developer results in a delay in patching critical bugs. Two common causes of friction are a failure to replicate the user-specific circumstances that cause buggy behavior and incompatible software releases that break critical functionality. Existing test generation techniques are insufficient: they fail to test candidate patches for post-deployment bugs and to test whether a new release adversely affects customer workloads. With existing test generation and deployment techniques, users cannot choose (or validate) compatible portions of new versions and retain their previous version's functionality. We present two new technologies to alleviate this friction. First, Test Generation for Ad Hoc Circumstances transforms buggy executions into test cases. Second, Binary Patch Decomposition allows users to select the compatible pieces of update releases. By sharing specific context around buggy behavior, developers can create specific test cases that demonstrate whether their fixes are appropriate. When fixes are distributed with extra context, users can incorporate only the updates that guarantee compatibility between buggy and fixed versions. We use change analysis in combination with binary rewriting to transform the old executable and buggy execution into a test case that includes the developer's prospective changes, which lets us generate and run targeted tests for the candidate patch. We also provide analogous support to users, who can selectively validate and patch their production environments with only the desired bug fixes from new version releases. This paper presents a new patching workflow that allows developers to validate prospective patches and users to select which updates they would like to apply, along with two new technologies that make it possible. We demonstrate that our technique constructs test cases more effectively and more efficiently than traditional test generation techniques on a collection of real-world bugs, and that it enables flexible updates in real-world scenarios.
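    A simplified sketch of the patch-selection idea follows (this is not the paper's system): a release is decomposed into function-level changes, and a user applies only the bug-fix changes that do not touch functions their validated workload depends on. The function names, change kinds, and selection rule are illustrative.

```python
# A simplified sketch (not the paper's system) of selecting compatible pieces
# of an update: keep bug-fix changes, defer changes that touch functions the
# user's validated workload depends on. Names and sets are illustrative.
release_changes = {
    "parse_header": "bugfix",      # fixes the reported crash
    "alloc_buffer": "bugfix",      # supporting change for the same fix
    "render_report": "feature",    # new behaviour, unrelated to the bug
    "export_csv": "refactor",      # changes output formatting
}

# Functions the user's production workload was validated against.
workload_critical = {"render_report", "export_csv"}


def select_updates(changes: dict[str, str], critical: set[str]) -> set[str]:
    """Keep bug-fix changes; defer changes to workload-critical functions."""
    return {fn for fn, kind in changes.items()
            if kind == "bugfix" and fn not in critical}


selected = select_updates(release_changes, workload_critical)
print(sorted(selected))  # ['alloc_buffer', 'parse_header']
```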
  5. ABSTRACT

    Animal locomotion is the result of complex and multi-layered interactions between the nervous system, the musculo-skeletal system and the environment. Decoding the underlying mechanisms requires an integrative approach. Comparative experimental biology has allowed researchers to study the underlying components and some of their interactions across diverse animals. These studies have shown that locomotor neural circuits are distributed in the spinal cord, the midbrain and higher brain regions in vertebrates. The spinal cord plays a key role in locomotor control because it contains central pattern generators (CPGs) – systems of coupled neuronal oscillators that provide coordinated rhythmic control of muscle activation that can be viewed as feedforward controllers – and multiple reflex loops that provide feedback mechanisms. These circuits are activated and modulated by descending pathways from the brain. The relative contributions of CPGs, feedback loops and descending modulation, and how these vary between species and locomotor conditions, remain poorly understood. Robots and neuromechanical simulations can complement experimental approaches by testing specific hypotheses and performing what-if scenarios. This Review will give an overview of key knowledge gained from comparative vertebrate experiments, and insights obtained from neuromechanical simulations and robotic approaches. We suggest that the roles of CPGs, feedback loops and descending modulation vary among animals depending on body size, intrinsic mechanical stability, time required to reach locomotor maturity and speed effects. We also hypothesize that distal joints rely more on feedback control compared with proximal joints. Finally, we highlight important opportunities to address fundamental biological questions through continued collaboration between experimentalists and engineers.
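    To make the CPG description concrete, the following minimal Python sketch (not from the Review) models a CPG as two antiphase-coupled phase oscillators with a toy sensory-feedback term that slows the rhythm under load; the parameter values and the feedback rule are illustrative assumptions.

```python
# A minimal sketch (not from the Review) of a CPG as coupled phase oscillators:
# two oscillators drive left/right limb muscles, coupling keeps them in
# antiphase, and a toy feedback term slows the rhythm when load is sensed.
import math


def simulate_cpg(steps=2000, dt=0.001, freq=2.0, coupling=5.0, load=0.0):
    """Euler-integrate two antiphase-coupled phase oscillators."""
    theta = [0.0, math.pi]            # left and right limb phases
    omega = 2 * math.pi * freq        # intrinsic drive (descending modulation)
    history = []
    for _ in range(steps):
        d0 = omega * (1 - load) + coupling * math.sin(theta[1] - theta[0] - math.pi)
        d1 = omega * (1 - load) + coupling * math.sin(theta[0] - theta[1] - math.pi)
        theta[0] += d0 * dt
        theta[1] += d1 * dt
        # Muscle activations: rectified sine of each oscillator's phase.
        history.append((max(0.0, math.sin(theta[0])), max(0.0, math.sin(theta[1]))))
    return history


activations = simulate_cpg(load=0.2)   # toy sensory feedback slows the rhythm
print(activations[-1])
```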

     