

Title: Revitalizing the public internet by making it extensible
There is now a significant and growing functional gap between the public Internet, whose basic architecture has remained unchanged for several decades, and a new generation of more sophisticated private networks. To address this increasing divergence of functionality and overcome the Internet's architectural stagnation, we argue for the creation of an Extensible Internet (EI) that supports in-network services that go beyond best-effort packet delivery. To gain experience with this approach, we hope to soon deploy both an experimental version (for researchers) and a prototype version (for early adopters) of EI. In the longer term, making the Internet extensible will require a community to initiate and oversee the effort; this paper is the first step in creating such a community.
Award ID(s):
1817115
NSF-PAR ID:
10297962
Date Published:
Journal Name:
ACM SIGCOMM Computer Communication Review
Volume:
51
Issue:
2
ISSN:
0146-4833
Page Range / eLocation ID:
18 to 24
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Background

    Practitioner and family experiences of pediatric re/habilitation can be inequitable. The Young Children’s Participation and Environment Measure (YC-PEM) is an evidence-based and promising electronic patient-reported outcome measure that was designed with and for caregivers for research and practice. This study examined historically minoritized caregivers’ responses to revised YC-PEM content modifications and their perspectives on core intelligent virtual agent functionality needed to improve its reach for equitable service design.

    Methods

    Caregivers were recruited during a routine early intervention (EI) service visit and met five inclusion criteria: (1) were 18+ years old; (2) identified as the parent or legal guardian of a child 0–3 years old enrolled in EI services for 3+ months; (3) read, wrote, and spoke English; (4) had Internet and telephone access; and (5) identified as a parent or legal guardian of a Black, non-Hispanic child or as publicly insured. Three rounds of semi-structured cognitive interviews (55–90 min each) used videoconferencing to gather caregiver feedback on their responses to select content modifications while completing YC-PEM, and their ideas for core intelligent virtual agent functionality. Interviews were transcribed verbatim, cross-checked for accuracy, and deductively and inductively content analyzed by multiple staff in three rounds.

    Results

    Eight Black, non-Hispanic caregivers from a single urban EI catchment and with diverse income levels (Mdn = $15,001–20,000) were enrolled, with children (M = 21.2 months, SD = 7.73) enrolled in EI. Caregivers proposed three ways to improve comprehension (clarify item wording, remove or simplify terms, add item examples). Environmental item edits prompted caregivers to share how they relate and respond to experiences with interpersonal and institutional discrimination impacting participation. Caregivers characterized three core functions of a virtual agent to strengthen YC-PEM navigation (read question aloud, visual and verbal prompts, more examples and/or definitions).

    Conclusions

    Results indicate four ways that YC-PEM content will be modified to strengthen how providers screen for unmet participation needs and determinants to design pediatric re/habilitation services that are responsive to family priorities. Results also motivate the need for user-centered design of an intelligent virtual agent to strengthen user navigation, prior to undertaking a community-based pragmatic trial of its implementation for equitable practice.

     
  2.
    Edge intelligence (EI) has received a lot of interest because it can reduce latency, increase efficiency, and preserve privacy. More significantly, as the Internet of Things (IoT) has proliferated, billions of portable and embedded devices have been interconnected, producing zillions of gigabytes on edge networks. Thus, there is an immediate need to push AI (artificial intelligence) breakthroughs within edge networks to achieve the full promise of edge data analytics. EI solutions have supported digital technology workloads and applications from the infrastructure level to edge networks; however, there are still many challenges with the heterogeneity of computational capabilities and the spread of information sources. We propose a novel event-driven deep-learning framework, EDL-EI (event-driven deep learning for edge intelligence), built around a novel event model that defines events through correlation analysis across multiple sensors in real-world settings and that incorporates multi-sensor fusion techniques, a method for transforming sensor streams into images, and lightweight 2-dimensional convolutional neural network (CNN) models. To demonstrate the feasibility of the EDL-EI framework, we present an IoT-based prototype system developed with multiple sensors and edge devices. To verify the proposed framework, we present a case study of air-quality scenarios based on benchmark data provided by the U.S. Environmental Protection Agency for the most polluted cities in South Korea and China. We obtained outstanding predictive accuracy (97.65% and 97.19%) from two deep-learning models on the cities' air-quality patterns. Furthermore, we analyzed air-quality changes from 2019 to 2020 to assess the effects of the COVID-19 pandemic lockdowns.
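    The abstract above describes the EDL-EI pipeline only in prose. As a rough sketch of the general idea (not the authors' implementation), the snippet below turns a multi-sensor time window into a single-channel 2-D "image" and classifies it with a lightweight CNN; the shapes, layer sizes, and class count are illustrative assumptions.

```python
# Illustrative sketch only: turn a multi-sensor window into a 2-D "image"
# and classify it with a small CNN. Shapes and layer sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn

def window_to_image(window: np.ndarray) -> np.ndarray:
    """Scale a (n_sensors, n_timesteps) window to [0, 1] so it can be
    treated as a single-channel image."""
    lo, hi = window.min(), window.max()
    return (window - lo) / (hi - lo + 1e-8)

class TinyCNN(nn.Module):
    """Lightweight 2-D CNN in the spirit of the edge models the paper describes."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Example: 6 sensors sampled over 32 time steps -> one "image" -> event class scores.
window = np.random.rand(6, 32)
image = torch.tensor(window_to_image(window), dtype=torch.float32)[None, None]
scores = TinyCNN()(image)  # shape: (1, n_classes)
```

    In the paper, the window-to-image step would additionally encode the correlation-based event model and multi-sensor fusion, which are not reproduced here.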
    One foundational justification for regulatory intervention is that there are harms occurring of a character that creates a public interest in mitigating them. This paper is concerned with such harms as they arise in the Internet ecosystem. Looking at news headlines for the last few years, it may seem that the range of such harms is unbounded. Hoping to add some order to the chaos, we undertake an effort to classify harms in the Internet ecosystem, in pursuit of a more or less complete taxonomy of harms. Our goal in structuring this taxonomy is to help mitigate harms in a more systematic way, as opposed to fighting an endless defensive battle against whatever happens next. The background we bring to this paper is, on the one hand, architectural (how the Internet ecosystem is actually structured) and, on the other hand, empirical (how we should measure the Internet to best understand what is happening). If everything were wonderful about the Internet today, the need to measure and understand would not be so compelling. A justification for measurement follows from its ability to shed light on problems and challenges. Sustained measurement or compelled reporting of data, and the analysis of the collected data, generally come at considerable effort and cost, so they must be justified by an argument that they will shed light on something important. This reasoning naturally motivates our taxonomy of things that are wrong, what we call harms. That is where we, the research community generally, and governments should focus attention. We do not intend this paper as a catalog of pessimism, but as help in defining an action agenda for the research community and for governments. The structure of the paper proceeds "up the layers", from technology to society. For harms that are closer to the technology, we can be more specific about the harms, about possible measurements and remedies, and about the actors that could undertake them. One motivation for this paper is that we believe the Internet ecosystem is at an inflection point. The Internet has revolutionized our ability to store, move, and process information, including information about people, and we are only at the beginning of understanding its impact on society and how to manage and mitigate harms resulting from unregulated commercial use of these capabilities. Current events suggest that now is a point of transition from laissez-faire to regulation. However, the path to good regulation is not obvious, and now is the time for the research community to think hard about what advice to give the governments of the world, and what sort of data can back up that advice. Our highest-level goal for this paper is to contribute to a conversation along those lines.
    The NASA-NSF sponsored Space Weather with Quantified Uncertainty (SWQU) project's main objective is to develop a data-driven, time-dependent, open source model of the solar corona and heliosphere. One key component of the SWQU effort is using a data-assimilation flux transport model to generate an ensemble of synchronic radial magnetic field maps as boundary conditions for the coronal field model. To accomplish this goal, we are developing a new Open-source Flux Transport (OFT) software suite. While there are a number of established flux transport models in the community, OFT is distinguished from many of these efforts by three key attributes: (1) it is based on modern computing techniques that allow many realizations to be rapidly computed on multi-core systems and/or GPUs, (2) it is designed to be easily extensible, and (3) OFT will be released as an open source project. OFT consists of three software packages: (1) OFTpy, a Python package for data acquisition, database organization, and Carrington map processing; (2) ConFlow, a Fortran code that generates supergranular convective flows; and (3) High-Performance Flux Transport (HipFT), a modular, GPU-accelerated Fortran code for modeling surface flux transport with data assimilation. Here, we present the current state of the OFT project, key features and methods of OFTpy, ConFlow, and HipFT, and real-world examples of data assimilation and flux transport with HipFT. Validation and performance tests are shown, including generating an ensemble of OFT maps.
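    The OFT abstract stays at the level of software architecture. As a toy illustration of the kind of computation a surface flux transport code performs, the sketch below applies one explicit diffusion step to a synthetic full-Sun flux map and blends in an "observed" map as the simplest possible stand-in for data assimilation; the grid, boundary handling, diffusivities, and blending rule are assumptions and do not reflect HipFT's actual numerics.

```python
# Toy surface flux transport sketch (NOT OFT/HipFT code): one explicit
# diffusion step on a lat-lon map plus a naive data-assimilation blend.
import numpy as np

def diffuse(B, eta=1.0, dt=0.1, dx=1.0):
    """One explicit diffusion step: periodic in longitude (axis 1),
    edge-replicated at the latitude boundaries (axis 0)."""
    east, west = np.roll(B, -1, axis=1), np.roll(B, 1, axis=1)
    padded = np.pad(B, ((1, 1), (0, 0)), mode="edge")
    north, south = padded[:-2, :], padded[2:, :]
    lap = (east + west + north + south - 4.0 * B) / dx**2
    return B + eta * dt * lap

def assimilate(model_map, obs_map, weight=0.8):
    """Blend an observed magnetogram into the model map where data exist
    (NaN marks unobserved pixels); weight -> 1 trusts the observation more."""
    blended = (1.0 - weight) * model_map + weight * obs_map
    return np.where(np.isnan(obs_map), model_map, blended)

# Example: a tiny "ensemble" of maps from different assumed diffusivities.
rng = np.random.default_rng(0)
B0 = rng.standard_normal((90, 180))  # toy lat x lon flux map
ensemble = [diffuse(B0, eta=eta) for eta in (0.5, 1.0, 2.0)]
```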
  5. Doglioni, C.; Kim, D.; Stewart, G.A.; Silvestris, L.; Jackson, P.; Kamleh, W. (Eds.)
    Boost.Histogram, a header-only C++14 library that provides multidimensional histograms and profiles, became available in Boost 1.70. It is extensible, fast, and uses modern C++ features. Using template metaprogramming, the most efficient code path for any given configuration is automatically selected. The library includes key features designed for the particle physics community, such as optional under- and overflow bins, weighted increments, reductions, growing axes, thread-safe filling, and memory-efficient counters with high dynamic range. Python bindings for Boost.Histogram are being developed in the Scikit-HEP project to provide a fast, easy-to-install package as a backend for other Python libraries and for advanced users to manipulate histograms. Versatile and efficient histogram filling, effective manipulation, multithreading support, and other features make this a powerful tool. This library has also driven package distribution efforts in Scikit-HEP, allowing binary packages hosted on PyPI to be available for a very wide variety of platforms. Two other libraries fill out the remainder of the Scikit-HEP Python histogramming effort. Aghast is a library designed to provide conversions between different forms of histograms, enabling interaction between histogram libraries, often without an extra copy in memory. This enables a user to make a histogram in one library and then save it in another form, such as saving a Boost.Histogram in ROOT. Hist is a library providing friendly, analyst-targeted syntax and shortcuts for quick manipulations and fast plotting using these two libraries.
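    As a small usage illustration of the Python bindings described above (the boost-histogram package from Scikit-HEP), the snippet below builds and fills a 2-D histogram with weighted entries and projects it onto one axis; the data and axis ranges here are made up for the example.

```python
# Minimal boost-histogram usage sketch; data and axis ranges are arbitrary.
import numpy as np
import boost_histogram as bh

# Two regular axes -> a 2-D histogram (under/overflow bins are kept by default).
hist = bh.Histogram(
    bh.axis.Regular(50, -3.0, 3.0),
    bh.axis.Regular(50, -3.0, 3.0),
)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = rng.standard_normal(10_000)
w = rng.uniform(0.5, 1.5, size=10_000)

hist.fill(x, y, weight=w)   # vectorized, weighted fill
counts = hist.view()        # NumPy view of the bin contents (50 x 50)
proj_x = hist[:, sum]       # sum over the y axis -> 1-D histogram
```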