
Title: FABRIC: A National-Scale Programmable Experimental Network Infrastructure
FABRIC is a unique national research infrastructure to enable cutting-edge and exploratory research at scale in networking, cybersecurity, distributed computing and storage systems, machine learning, and science applications. It is an everywhere-programmable nationwide instrument comprised of novel extensible network elements equipped with large amounts of compute and storage, interconnected by high-speed, dedicated optical links. It will connect a number of specialized testbeds for cloud research (NSF Cloud testbeds CloudLab and Chameleon), for research beyond 5G technologies (Platforms for Advanced Wireless Research or PAWR), as well as production high-performance computing facilities and science instruments to create a rich fabric for a wide variety of experimental activities.
Journal Name: IEEE Internet Computing
Sponsoring Org:
National Science Foundation
More Like this
  1. A key dimension of reproducibility in testbeds is stable performance that scales in regular and predictable ways in accordance with declarative specifications for virtual resources. We contend that reproducibility is crucial for elastic performance control in live experiments, in which testbed tenants (slices) provide services for real user traffic that varies over time. This paper gives an overview of ExoPlex, a framework for deploying network service providers (NSPs) as a basis for live inter-domain networking experiments on the ExoGENI testbed. As a motivating example, we show how to use ExoPlex to implement a virtual software-defined exchange (vSDX) as a tenant NSP. The vSDX implements security-managed interconnection of customer IP networks that peer with it via direct L2 links stitched dynamically into its slice. An elastic controller outside of the vSDX slice provisions network links and computing capacity for a scalable monitoring fabric within the tenant vSDX slice. The vSDX checks compliance of traffic flows with customer-specified interconnection policies, and blocks traffic from senders that trigger configured rules for intrusion detection in Bro security monitors. We present initial results showing the effect of resource provisioning on Bro performance within the vSDX. 
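The abstract above describes how the vSDX checks traffic flows against customer-specified interconnection policies and blocks senders flagged by intrusion-detection rules. A minimal sketch of that compliance logic follows; the class and method names (`PolicyTable`, `permits`, `flag_sender`) are illustrative assumptions, not the actual ExoPlex or vSDX interfaces.

```python
from dataclasses import dataclass

# Hypothetical sketch of the vSDX compliance check described in the abstract.
# All names here are invented for illustration; the real controller's API is
# not documented in this record.

@dataclass(frozen=True)
class Flow:
    src_net: str  # customer network originating the traffic
    dst_net: str  # customer network receiving the traffic

class PolicyTable:
    """Customer-specified interconnection policies plus a blocklist of
    senders that have triggered intrusion-detection rules."""

    def __init__(self) -> None:
        self._allowed: set[tuple[str, str]] = set()
        self._blocked_senders: set[str] = set()

    def allow(self, a: str, b: str) -> None:
        # Peering is treated as bidirectional in this sketch.
        self._allowed.add((a, b))
        self._allowed.add((b, a))

    def flag_sender(self, net: str) -> None:
        # Invoked when a security monitor (e.g. Bro) fires a configured rule.
        self._blocked_senders.add(net)

    def permits(self, flow: Flow) -> bool:
        # Blocked senders are rejected first, then the peering policy applies.
        if flow.src_net in self._blocked_senders:
            return False
        return (flow.src_net, flow.dst_net) in self._allowed
```

In this sketch, flagging a sender immediately overrides any standing peering policy, mirroring the abstract's description of blocking traffic from senders that trigger configured intrusion-detection rules.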
  2. Given the highly empirical nature of research in cloud computing, networked systems, and related fields, testbeds play an important role in the research ecosystem. In this paper, we cover one such facility, CloudLab, which supports systems research by providing raw access to programmable hardware, enabling research at large scales, and creating a shared platform for repeatable research. We present our experiences designing CloudLab and operating it for four years, serving nearly 4,000 users who have run over 79,000 experiments on 2,250 servers, switches, and other pieces of datacenter equipment. From this experience, we draw lessons organized around two themes. The first set comes from analysis of data regarding the use of CloudLab: how users interact with it, what they use it for, and the implications for facility design and operation. Our second set of lessons comes from looking at the ways that algorithms used "under the hood," such as resource allocation, have important—and sometimes unexpected—effects on user experience and behavior. These lessons can be of value to the designers and operators of IaaS facilities in general, systems testbeds in particular, and users who have a stake in understanding how these systems are built.
  3. The Chameleon testbed is a case study in adapting the cloud paradigm for computer science research. In this paper, we explain how this adaptation was achieved, evaluate it from the perspective of supporting the most experiments for the most users, and make a case that utilizing mainstream technology in research testbeds can increase efficiency without compromising on functionality. We also highlight the opportunity inherent in the shared digital artifacts generated by testbeds and give an overview of the efforts we've made to develop it to foster reproducibility.
  4. This special issue is devoted to progress in one of the most important challenges facing computing education. The work published here is of relevance to those who teach computing-related topics at all levels, with the greatest implications for undergraduate education. Parallel and distributed computing (PDC) has become ubiquitous to the extent that even casual users feel its impact. This necessitates that every programmer understands how parallelism and a distributed environment affect problem solving. Thus, teaching only traditional, sequential programming is no longer adequate. For this reason, it is essential to impart a range of PDC and high performance computing (HPC) knowledge and skills at various levels within the educational fabric woven by Computer Science (CS), Computer Engineering (CE), and related computational science and engineering curricula. This special issue sought high-quality contributions in the fields of PDC and HPC education. Submissions were on the topics of the EduPar2016, Euro-EduPar2016, and EduHPC2016 workshops, but the submission was open to all. This special issue includes 12 papers spanning pedagogical techniques, tools, and experiences.
  5.
    It has been recognized that jobs across different domains are becoming more data driven, and many aspects of the economy, society, and daily life depend more and more on data. Undergraduate education offers a critical link in providing more data science and engineering (DSE) exposure to students and expanding the supply of DSE talent. The National Academies have identified that effective DSE education requires both appropriate classwork and hands-on experience with real data and real applications. Currently, significant progress has been made in classwork, while progress in hands-on research experience has been lacking. To fill this gap, we have proposed to create data-enabled engineering project (DEEP) modules based on real data and applications, which is currently funded by the National Science Foundation (NSF) under the Improving Undergraduate STEM Education (IUSE) program. To achieve the project goal, we have developed two internet-of-things (IoT) enabled laboratory engineering testbeds (LETs) and generated real data under various application scenarios. In addition, we have designed and developed several sample DEEP modules in interactive Jupyter Notebook using the generated data. These sample DEEP modules will also be ported to other interactive DSE learning environments, including Matlab Live Script and R Markdown, for wide and easy adoption. Finally, we have conducted metacognitive awareness gain (MAG) assessments to establish a baseline for assessing the effectiveness of DEEP modules in enhancing students' reflection and metacognition. The DEEP modules that are currently being developed target students in Chemical Engineering, Electrical Engineering, Computer Science, and the MS program in Data Science at xxx University. The modules will be deployed in the Spring of 2021, and we expect them to have an immediate impact on the targeted classes and students.
We also anticipate that the DEEP modules can be adopted without modification in other engineering disciplines such as Mechanical, Industrial, and Aerospace Engineering. They can also be easily extended to disciplines in other colleges, such as Liberal Arts, by incorporating real data and applications from the respective disciplines. In this work, we will share our ideas, the rationale behind the proposed approach, the planned tasks for the project, a demonstration of the modules developed, and potential dissemination venues.
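The abstract above describes DEEP modules that load testbed-generated IoT data into interactive notebooks for hands-on analysis. A minimal sketch of that pattern follows, using only the Python standard library; the column names and sample data are invented for illustration, since the actual LET data formats are not described in this record.

```python
import csv
import io
import statistics

# Hypothetical sample of IoT sensor data; the real LET-generated datasets
# and their schemas are not documented in this record.
SAMPLE_CSV = """timestamp,temperature_c
2021-01-01T00:00:00,21.4
2021-01-01T00:01:00,21.9
2021-01-01T00:02:00,22.1
"""

def summarize(csv_text: str, column: str) -> dict:
    """Compute simple summary statistics for one numeric column,
    mimicking an introductory DEEP-style notebook exercise."""
    reader = csv.DictReader(io.StringIO(csv_text))
    values = [float(row[column]) for row in reader]
    return {
        "n": len(values),
        "mean": round(statistics.mean(values), 3),
        "stdev": round(statistics.stdev(values), 3),
    }

print(summarize(SAMPLE_CSV, "temperature_c"))
# → {'n': 3, 'mean': 21.8, 'stdev': 0.361}
```

The same exercise ports naturally to the other environments the abstract mentions (Matlab Live Script, R Markdown), since it reduces to reading tabular sensor data and computing descriptive statistics.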