

Title: Learning Network Design Objectives Using A Program Synthesis Approach
While the networking community has extensively tackled network design problems using optimization and other techniques (e.g., in areas such as traffic engineering and resource allocation), much of this work focuses on efficiently generating designs under the assumption of well-defined objectives. In this paper, we argue that in practice, the objectives of a network design task may not be easy for an architect to specify. We argue for, and present, a structured approach in which the objectives of a network design task are learned through iterative interactions with the architect. Our approach is inspired by the programming-by-examples paradigm, which has seen success in the programming languages community. However, conventional program synthesis techniques do not apply, because in our context a user can only provide a relative comparison between multiple choices, indicating which one is more desirable, rather than provide an exact output for a given input. We propose a novel comparative synthesis approach to tackle these challenges. We sketch the approach, present promising preliminary results, and discuss future research questions.
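The abstract only sketches the approach, but the loop it describes can be illustrated in a few lines: the synthesizer never asks the architect for an exact output, only for pairwise preferences, and uses each answer to shrink its hypothesis space of objectives. Below is a minimal, self-contained Python sketch assuming the hidden objective is a linear utility over two metrics (`throughput` and `fairness`) with an unknown trade-off weight; the metric names, the oracle, and the bisection strategy are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch of a comparative-synthesis loop (illustrative, not the
# paper's algorithm). The architect's hidden objective is modeled as
#   u(design) = w * throughput + (1 - w) * fairness
# with unknown weight w; the synthesizer learns w from pairwise
# preference queries alone.

def architect_prefers(a, b, true_w=0.7):
    """Oracle standing in for the architect: True iff design `a` is
    preferred to design `b` under the hidden utility."""
    u = lambda d: true_w * d["throughput"] + (1 - true_w) * d["fairness"]
    return u(a) > u(b)

def learn_weight(rounds=20):
    """Bisect the hypothesis interval for w using one comparison per round."""
    lo, hi = 0.0, 1.0
    for _ in range(rounds):
        mid = (lo + hi) / 2
        # Two candidate designs whose ranking flips exactly at w = mid:
        # u(a) > u(b)  <=>  w > mid.
        a = {"throughput": 1.0, "fairness": 0.0}
        b = {"throughput": 0.0, "fairness": mid / (1 - mid)}
        if architect_prefers(a, b):
            lo = mid  # the architect weighs throughput more than `mid`
        else:
            hi = mid
    return (lo + hi) / 2

print(f"learned weight ~= {learn_weight():.3f}")  # converges toward 0.7
```

Twenty comparisons pin the weight down to roughly six decimal digits, which hints at why relative comparisons, although weaker than exact input-output examples, can still drive synthesis efficiently.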
Award ID(s):
1837023
NSF-PAR ID:
10128360
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Proceedings of the 18th ACM Workshop on Hot Topics in Networks - HotNets '19
Page Range / eLocation ID:
69 to 76
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Defense mechanisms against network-level attacks are commonly based on the use of cryptographic techniques, such as lengthy message authentication codes (MACs), that provide data integrity guarantees. However, such mechanisms require significant resources (both computational and network bandwidth), which prevents their continuous use in resource-constrained cyber-physical systems (CPS). Recently, it was shown how physical properties of controlled systems can be exploited to relax these stringent requirements for systems where sensor measurements and actuator commands are transmitted over a potentially compromised network; specifically, even intermittent use of data authentication (i.e., at occasional time points during system execution) can still provide strong Quality-of-Control (QoC) guarantees in the presence of false-data injection attacks, such as Man-in-the-Middle (MitM) attacks. Consequently, in this work we focus on integrating security into existing resource-constrained CPS to protect against MitM attacks on a system where a set of control tasks communicates over a real-time network with system sensors and actuators. We introduce a design-time methodology that incorporates requirements for QoC in the presence of attacks into end-to-end timing constraints for real-time control transactions, which include data acquisition and authentication, real-time network messages, and control tasks. This allows us to formulate a mixed integer linear programming (MILP) based method for direct synthesis of schedulable task and message parameters (i.e., deadlines and offsets) that do not violate the timing requirements of the already deployed controllers, while adding a sufficient level of protection against network-based attacks; specifically, the synthesis method also provides suitable intermittent authentication policies that ensure the desired QoC levels under attack. To additionally reduce the security-related bandwidth overhead, we propose the use of cumulative message authentication at time instances when the integrity of messages from subsets of sensors should be ensured. Furthermore, we introduce a method for the opportunistic use of the remaining resources to further improve the overall QoC guarantees while preserving system (i.e., task and message) schedulability. Finally, we demonstrate the applicability and scalability of our methodology on synthetic automotive systems as well as a real-world automotive case study.
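The abstract does not give the MILP itself, so the following is only a toy formulation in its spirit, written with the PuLP modeling library: binary variables choose which message instances carry a MAC, a minimum authentication rate and a bounded unauthenticated gap stand in for the QoC-under-attack requirement, and a slack budget stands in for schedulability. All constants and constraint shapes are illustrative assumptions.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum, PULP_CBC_CMD

# Toy MILP in the spirit of the abstract (all numbers are assumptions):
# choose at which of K periodic message instances to attach a MAC so that
# QoC proxies hold while the added transmission time fits the network slack.
K = 10            # message instances per hyperperiod
mac_cost = 2.0    # extra transmission time per authenticated instance (ms)
slack = 12.0      # network slack available per hyperperiod (ms)
min_rate = 0.4    # QoC proxy: at least 40% of instances authenticated
max_gap = 3       # QoC proxy: at most 3 consecutive unauthenticated instances

prob = LpProblem("intermittent_authentication", LpMinimize)
auth = [LpVariable(f"auth_{k}", cat=LpBinary) for k in range(K)]

# Objective: minimize the bandwidth overhead added by authentication.
prob += lpSum(mac_cost * a for a in auth)

# Minimum authentication rate.
prob += lpSum(auth) >= min_rate * K

# Schedulability proxy: overhead must fit within the available slack.
prob += lpSum(mac_cost * a for a in auth) <= slack

# Every window of max_gap + 1 consecutive instances contains a MAC.
for k in range(K - max_gap):
    prob += lpSum(auth[k:k + max_gap + 1]) >= 1

prob.solve(PULP_CBC_CMD(msg=False))
print("authenticate instances:", [k for k in range(K) if auth[k].value() == 1])
```

A real formulation would also carry per-task and per-message deadline and offset variables; the toy keeps only the authentication-placement decision to stay readable.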
  2. Two programs that provide high-quality long-term ecological data, the Environmental Data Initiative (EDI) and the National Ecological Observatory Network (NEON), have recently teamed up with data users interested in synthesizing biodiversity data, such as ecological synthesis working groups supported by the US Long Term Ecological Research (LTER) Network Office, to make their data more Findable, Accessible, Interoperable, and Reusable (FAIR). To this end, we have developed a flexible intermediate data design pattern for ecological community data (L1 formatted data in Fig. 1; see Fig. 2 for design details) called "ecocomDP" (O'Brien et al. 2021), and we provide tools to work with data packages in which this design pattern has been implemented. The ecocomDP format provides a data pattern commonly used for reporting community-level data, such as repeated observations of species-level measures of biomass, abundance, percent cover, or density across multiple locations. The ecocomDP library for R includes tools to search for data packages, download or import data packages into an R session in a standard format, and visualization tools for the data exploration steps recommended for data users prior to any cross-study synthesis work. To date, EDI has created 70 ecocomDP data packages derived from its holdings, which include data from the US LTER program, the Long Term Research in Environmental Biology (LTREB) program, and other projects; these are now discoverable and accessible using the ecocomDP library. Similarly, NEON data products for 12 taxonomic groups are discoverable using the ecocomDP search tool. Input from data users guided the ecocomDP developers in mapping the NEON data products to the ecocomDP format to facilitate interoperability with the ecocomDP data packages available from the EDI repository. The standardized data design pattern allows common data visualizations across data packages and has the potential to facilitate the development of new tools and workflows for biodiversity synthesis. The broader impacts of this collaboration are intended to lower the barriers for researchers in ecology and the environmental sciences to access and work with long-term biodiversity data, and to provide a hub around which data providers and data users can develop best practices that will build a diverse and inclusive community of practice.
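As a concrete illustration of the design pattern described above, the sketch below builds a small long-format ("one row per observation") community table in pandas and pivots it into a site-by-taxon matrix, a typical pre-synthesis exploration step. The column names are a simplified stand-in for, not the exact, ecocomDP schema.

```python
import pandas as pd

# Illustrative long-format community table: one row per
# (site, date, taxon, variable) observation. Column names are a
# simplified stand-in for the actual ecocomDP schema.
obs = pd.DataFrame({
    "location_id": ["site_A", "site_A", "site_B", "site_B"],
    "observation_datetime": ["2021-06-01"] * 4,
    "taxon_name": ["Poa pratensis", "Carex sp.", "Poa pratensis", "Carex sp."],
    "variable_name": ["percent_cover"] * 4,
    "value": [34.0, 12.5, 21.0, 40.0],
    "unit": ["percent"] * 4,
})

# Typical pre-synthesis exploration: pivot to a site-by-taxon matrix.
wide = obs.pivot_table(index="location_id", columns="taxon_name", values="value")
print(wide)
```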
  3. Gradually typed languages allow programmers to mix statically and dynamically typed code, enabling them to incrementally reap the benefits of static typing as they add type annotations to their code. However, this type migration process is typically a manual effort with limited tool support. This paper examines the problem of automated type migration: given a dynamic program, infer additional or improved type annotations. Existing type migration algorithms prioritize different goals, such as maximizing type precision, maintaining compatibility with unmigrated code, and preserving the semantics of the original program. We argue that the type migration problem involves fundamental compromises: optimizing for a single goal often comes at the expense of others. Ideally, a type migration tool would flexibly accommodate a range of user priorities. We present TypeWhich, a new approach to automated type migration for the gradually typed lambda calculus (GTLC) with some extensions. Unlike prior work, which relies on custom solvers, TypeWhich produces constraints for an off-the-shelf MaxSMT solver. This allows us to easily express objectives, such as minimizing the number of necessary syntactic coercions, and to constrain the type of the migration to be compatible with unmigrated code. We present the first comprehensive evaluation of GTLC type migration algorithms, comparing TypeWhich to four other tools from the literature. Our evaluation uses prior benchmarks and a new set of "challenge problems." Moreover, we design a new evaluation methodology that highlights the subtleties of gradual type migration. In addition, we apply TypeWhich to a suite of benchmarks for Grift, a programming language based on the GTLC. TypeWhich is able to reconstruct all human-written annotations on all but one program.
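To make the MaxSMT framing concrete, here is a toy encoding using Z3's Optimize interface from Python: hard constraints capture soundness and compatibility with unmigrated callers, while soft constraints express the user's preference for precision. The program and the encoding are illustrative; they are not TypeWhich's actual constraint generation.

```python
from z3 import Bool, Not, Optimize, is_true, sat

# Toy MaxSMT encoding for migrating  f = fun x. x + 1  (illustrative only).
# Each annotation site is either precise (Int) or Dyn.
x_is_int = Bool("x_is_int")  # parameter annotation of f
r_is_int = Bool("r_is_int")  # return-type annotation of f

opt = Optimize()

# Soundness (hard): the body x + 1 has type Int exactly when x does;
# otherwise the result must stay Dyn.
opt.add(r_is_int == x_is_int)

# Compatibility (hard): an unmigrated caller may pass a non-Int, so a
# precise parameter type would break it. Drop this constraint to obtain
# the precision-maximizing migration instead.
opt.add(Not(x_is_int))

# User priority (soft): prefer precise annotations, weight 1 each.
opt.add_soft(x_is_int, 1)
opt.add_soft(r_is_int, 1)

assert opt.check() == sat
m = opt.model()
print("x:", "Int" if is_true(m[x_is_int]) else "Dyn",
      "| result:", "Int" if is_true(m[r_is_int]) else "Dyn")
```

Swapping which constraints are hard and which are soft is exactly how a solver-based tool can accommodate different user priorities without a new algorithm.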
  4. Purpose: In response to the evolving COVID-19 pandemic, many universities have transitioned to online instruction. With learning promising to be online, at least in part, for the near future, instructors may be thinking of providing online collaborative learning opportunities to their students, who are increasingly isolated from their peers because of social distancing guidelines. This paper aims to provide design recommendations for online collaborative project-based learning exercises based on this research in a software engineering course at the university level.
    Design/methodology/approach: Through joint work between learning scientists, course instructors and software engineering practitioners, instructional design best practices of alignment between the context of the learners, the learning objectives, the task and the assessment are actualized in the design of collaborative programming projects for supporting learning. The design first segments a short real-time collaborative exercise into tasks, each with a problem-solving phase where students participate in collaborative programming and a reflection phase for reflecting on what they learned in the task. Within these phases, a role-assignment paradigm scaffolds collaboration by assigning groups of four students to four complementary roles that rotate after each task.
    Findings: By aligning each task with granular learning objectives, significant pre- to post-test learning from the exercise as well as from each task is observed.
    Originality/value: The roles used in the paradigm discourage the divide-and-conquer tendencies often associated with collaborative projects. By requiring students to discuss conflicting ideas to arrive at a consensus implementation, their ideas are made explicit, thus providing opportunities for clarifying misconceptions through discussion and learning from the collaboration.
  5. Correlation for radio interferometer array applications, including Very Long Baseline Interferometry (VLBI), is a multidisciplinary field that traditionally involves astronomy, geodesy, signal processing, and electronic design. In recent years, however, high-performance computing has been taking over electronic design, complicating this mix with the addition of network engineering, parallel programming, and resource scheduling, among others. High-performance applications go a step further by using specialized hardware like Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs), challenging engineers to build and maintain high-performance correlators that use the available resources efficiently. Existing literature has generally benchmarked correlators through narrow comparisons on specific scenarios, and the lack of a formal performance characterization prevents systematic comparison. This combination of steadily increasing complexity in software correlation and the lack of performance models in the literature motivates the development of a performance model that allows us not only to characterize existing correlators and predict their performance in different scenarios but, more importantly, to understand the trade-offs inherent in their design decisions. In this paper, we present a model that achieves both objectives. We validate the model against benchmarking results in the literature and provide an example of its application to improving the cost-effectiveness of cloud resource usage.
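The abstract does not reproduce the model itself, but a back-of-the-envelope sketch of the standard FX-correlator cost terms shows the kind of quantities such a model must relate: array size, bandwidth, channel count, and the machine's peak throughput. The constants below (5 log2 N real ops per FFT sample, 8 real ops per complex multiply-accumulate, an assumed accelerator peak) are textbook approximations and assumptions, not the paper's model.

```python
import math

# Back-of-the-envelope FX-correlator cost model (illustrative assumptions,
# not the paper's model): relates array size and bandwidth to the compute
# rate a software correlator must sustain.

def fx_correlator_gflops(n_ant, bandwidth_hz, n_chan=4096, n_pol=2):
    """Approximate sustained real-operation rate (GFLOP/s)."""
    # F-stage: one FFT per antenna-pol stream, ~5*log2(n_chan) real ops
    # per complex sample.
    f_ops = n_ant * n_pol * 5 * math.log2(n_chan) * bandwidth_hz
    # X-stage: one complex multiply-accumulate (~8 real ops) per baseline
    # (including autocorrelations) and polarization product, per sample.
    n_baselines = n_ant * (n_ant + 1) // 2
    x_ops = 8 * n_baselines * n_pol**2 * bandwidth_hz
    return (f_ops + x_ops) / 1e9

# Example: a 64-antenna array digitizing 200 MHz of bandwidth.
need = fx_correlator_gflops(n_ant=64, bandwidth_hz=200e6)
peak = 20_000  # assumed accelerator peak, GFLOP/s
print(f"required: {need:.0f} GFLOP/s -> {need / peak:.1%} of assumed peak")
```

Even this crude sketch exposes the dominant trade-off a formal model must capture: the X-stage grows quadratically with array size, so beyond a few dozen antennas it, rather than the FFTs, dictates resource choices.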