

Title: PEDaLS: Persisting Versioned Data Structures
In this paper, we investigate how to automatically persist versioned data structures in distributed settings (e.g. cloud + edge) using append-only storage. By doing so, we facilitate resiliency by enabling program state to survive program activations and terminations, and by enabling program-level data structures and their version information to be accessed programmatically by multiple clients (for replay, provenance tracking, debugging, coordination avoidance, and more). These features are useful in distributed, failure-prone contexts such as heterogeneous and pervasive Internet of Things (IoT) deployments. We prototype our approach within an open-source, distributed operating system for IoT. Our results show that it is possible to achieve algorithmic complexities similar to those of in-memory versioning but in a distributed setting.
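As a rough illustration of the general idea — not the paper's actual implementation — the following Python sketch shows a versioned map whose every update is appended to a log, so that state can be rebuilt after a failure, while a per-key version index (the classic "fat node" technique) keeps version lookups at O(log v). All names (`VersionedMap`, `replay`) are hypothetical.

```python
import bisect

class VersionedMap:
    """Hypothetical sketch: a versioned map backed by an append-only log.
    Per-key version lists give O(log v) reads of any past version."""
    def __init__(self):
        self.log = []            # append-only record of (version, key, value)
        self.index = {}          # key -> ([versions], [values])
        self.version = 0

    def put(self, key, value):
        self.version += 1
        self.log.append((self.version, key, value))   # durable if flushed to storage
        vs, xs = self.index.setdefault(key, ([], []))
        vs.append(self.version)
        xs.append(value)
        return self.version

    def get(self, key, version=None):
        vs, xs = self.index.get(key, ([], []))
        if version is None:
            return xs[-1] if xs else None
        i = bisect.bisect_right(vs, version) - 1      # latest write <= version
        return xs[i] if i >= 0 else None

    @classmethod
    def replay(cls, log):
        """Rebuild in-memory state from the persisted log (e.g. after failure)."""
        m = cls()
        for _, key, value in log:
            m.put(key, value)
        return m
```

Because the log is append-only, multiple clients can independently replay it for provenance tracking or debugging without coordinating with the writer.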
Award ID(s):
2107101 2027977 1703560
NSF-PAR ID:
10334315
Journal Name:
IEEE International Conference on Cloud Engineering
Page Range / eLocation ID:
179 to 190
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Serverless computing has increased in popularity as a programming model for “Internet of Things” (IoT) applications that amalgamate IoT devices, edge-deployed computers and systems, and the cloud to interoperate. In this paper, we present Laminar – a dataflow program representation for distributed IoT application programming – and describe its implementation based on a network-transparent, event-driven, serverless computing infrastructure that uses append-only log storage to store all program state. We describe the initial implementation of Laminar, discuss some useful properties we obtained by leveraging log-based data structures and triggered computations of the underlying serverless runtime, and illustrate its performance and reliability characteristics using a set of benchmark applications. 
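The combination of log-based state and triggered computation can be sketched minimally as follows — a hypothetical Python illustration of the semantics described in the abstract, not Laminar's API. Each operator reads from an append-only input log and appends results to an output log; appending a record triggers the downstream operators, so all program state lives in the logs.

```python
class LogDataflow:
    """Hypothetical sketch: event-driven dataflow where every log append
    triggers the operators subscribed to that log."""
    def __init__(self):
        self.logs = {}        # log name -> list of records (append-only)
        self.triggers = {}    # log name -> list of (fn, output log name)

    def connect(self, src, fn, dst):
        """Run fn on each record appended to src; append the result to dst."""
        self.triggers.setdefault(src, []).append((fn, dst))

    def append(self, name, record):
        self.logs.setdefault(name, []).append(record)
        for fn, dst in self.triggers.get(name, []):
            self.append(dst, fn(record))   # triggered computation cascades
```

Since no operator holds private mutable state, restarting the runtime and replaying the input logs reproduces all downstream logs deterministically.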
  2. The development of communication technologies in edge computing has fostered progress across various applications, particularly those involving vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. Enhanced infrastructure has improved data transmission network availability, promoting better connectivity and data collection from IoT devices. A notable IoT application is the Intelligent Transportation System (ITS). IoT technology integration enables ITS to access a variety of data sources, including those pertaining to weather and road conditions. Real-time data on factors like temperature, humidity, precipitation, and friction contribute to improved decision-making models. Traditionally, these models are trained at the cloud level, which can lead to communication and computational delays. However, substantial advancements in cloud-to-edge computing have decreased communication relays and increased computational distribution, resulting in faster response times. Despite these benefits, the developments still largely depend on central cloud sources for computation due to restrictions in computational and storage capacity at the edge. This reliance leads to duplicated data transfers between edge servers and cloud application servers. Additionally, edge computing is further complicated by data models predominantly based on data heuristics. In this paper, we propose a system that streamlines edge computing by allowing computation at the edge, thus reducing latency in responding to requests across distributed networks. Our system is also designed to facilitate quick updates of predictions, ensuring vehicles receive more pertinent safety-critical model predictions. We will demonstrate the construction of our system for V2V and V2I applications, incorporating cloud-ware, middleware, and vehicle-ware levels. 
  3. Who and by what means do we ensure that engineering education evolves to meet the ever-changing needs of our society? This and other papers presented by our research team at this conference offer our initial set of findings from an NSF sponsored collaborative study on engineering education reform. Organized around the notion of higher education governance and the practice of educational reform, our open-ended study is based on conducting semi-structured interviews at over three dozen universities and engineering professional societies and organizations, along with a handful of scholars engaged in engineering education research. Organized as a multi-site, multi-scale study, our goal is to document differences in perspectives and interests that exist across organizational levels and institutions, and to describe the coordination that occurs (or fails to occur) in engineering education given the distributed structure of the engineering profession. This paper offers for all engineering educators and administrators a qualitative and retrospective analysis of ABET EC 2000 and its implementation. The paper opens with a historical background on the Engineers Council for Professional Development (ECPD) and engineering accreditation; the rise of quantitative standards during the 1950s as a result of the push to implement an engineering science curriculum appropriate to the Cold War era; EC 2000 and its call for greater emphasis on professional skill sets amidst concerns about US manufacturing productivity and national competitiveness; the development of outcomes assessment and its implementation; and the successive negotiations about assessment practice and the training of both program evaluators and assessment coordinators for the degree programs undergoing evaluation. It was these negotiations and the evolving practice of assessment that resulted in the latest set of changes in ABET engineering accreditation criteria (“1-7” versus “a-k”). 
To provide insight into the origins of EC 2000, the paper describes the “Gang of Six,” a group of individuals loyal to ABET who used the pressure exerted by external organizations, along with a shared rhetoric of national competitiveness, to forge a common vision organized around the expanded emphasis on professional skill sets. It was also significant that the Gang of Six was aware of the fact that the regional accreditation agencies were already contemplating a shift towards outcomes assessment; several also had a background in industrial engineering. However, this resulted in an assessment protocol for EC 2000 that remained ambiguous about whether the stated learning outcomes (Criterion 3) were something faculty had to demonstrate for all of their students, or whether EC 2000’s main emphasis was continuous improvement. When it proved difficult to demonstrate learning outcomes on the part of all students, ABET itself began to place greater emphasis on total quality management and continuous process improvement (TQM/CPI). This gave institutions an opening to begin using increasingly limited and proximate measures for the “a-k” student outcomes as evidence of effort and improvement. In what social scientific terms would be described as “tactical” resistance to perceived oppressive structures, this enabled ABET coordinators and the faculty in charge of degree programs, many of whom had their own internal improvement processes, to begin referring to the a-k criteria as “difficult to achieve” and “ambiguous,” which they sometimes were. Inconsistencies in evaluation outcomes enabled those most discontented with the a-k student outcomes to use ABET’s own organizational processes to drive the latest revisions to EAC accreditation criteria, although the organization’s own process for member and stakeholder input ultimately restored much of the professional skill sets found in the original EC 2000 criteria. 
Other refinements were also made to the standard, including a new emphasis on diversity. This said, many within our interview population believe that EC 2000 had already achieved many of the changes it set out to achieve, especially with regard to broader professional skills such as communication, teamwork, and design. Regular faculty review of curricula is now also a more routine part of the engineering education landscape. While programs vary in their engagement with ABET, there are many who are skeptical about whether the new criteria will produce further improvements to their programs, with many arguing that their own internal processes are now the primary drivers for change. 
  4. Multi-sensor IoT devices can gather different types of data by executing different sensing activities or tasks. Therefore, IoT applications are becoming more complex in order to process multiple data types and provide a targeted response to the monitored phenomena. However, IoT devices, which are usually resource-constrained, still face energy challenges since using each of these sensors has an energy cost. Therefore, energy-efficient solutions are needed to extend the device lifetime while balancing the sensing data requirements of the IoT application. Cooperative monitoring is one approach for managing energy and involves reducing the duplication of sensing tasks between neighboring IoT devices. Setting up cooperative monitoring is a scheduling problem and is challenging in a distributed environment with resource-constrained IoT devices. In this work, we present our Distributed Token and Tier-based task Scheduler (DTTS) for a multi-sensor IoT network. Our algorithm divides the monitoring period (5 min epochs) into a set of non-overlapping intervals called tiers and determines the start deadlines for the task at each IoT device. Then, to minimize temporal sensing overlap, DTTS distributes task executions throughout the epoch and uses tokens to share minimal information between IoT devices. Tasks with earlier start deadlines are scheduled in earlier tiers while tasks with later start deadlines are scheduled in later tiers. Evaluating our algorithm against a simple round-robin scheduler shows that the DTTS algorithm always schedules tasks before their start deadline expires. 
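The tier assignment described above can be sketched in a few lines of Python. This is a hypothetical illustration of the deadline-to-tier mapping only (function name, tier count, and task format are assumptions); the actual DTTS algorithm also distributes executions within tiers and coordinates via tokens.

```python
def tier_schedule(tasks, epoch=300, num_tiers=5):
    """Hypothetical sketch: split a 5-minute (300 s) epoch into equal,
    non-overlapping tiers and place each task in the tier containing its
    start deadline, so earlier deadlines run in earlier tiers.

    tasks: list of (task_name, start_deadline_seconds) pairs.
    Returns a list of tiers, each a list of task names."""
    width = epoch / num_tiers
    tiers = [[] for _ in range(num_tiers)]
    for name, deadline in tasks:
        t = min(int(deadline // width), num_tiers - 1)  # clamp to last tier
        tiers[t].append(name)
    return tiers
```

Because every task lands in a tier that ends no later than its start deadline's tier boundary, a device that executes tiers in order never runs a task after its deadline window.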
  5. The P4 language and programmable switch hardware, like the Intel Tofino, have made it possible for network engineers to write new programs that customize operation of computer networks, thereby improving performance, fault-tolerance, energy use, and security. Unfortunately, possible does not mean easy: there are many implicit constraints that programmers must obey if they wish their programs to compile to specialized networking hardware. In particular, all computations on the same switch must access data structures in a consistent order, or it will not be possible to lay that data out along the switch’s packet-processing pipeline. In this paper, we define Lucid 2.0, a new language and type system that guarantees programs access data in a consistent order and hence are pipeline-safe. Lucid 2.0 builds on top of the original Lucid language, which is also pipeline-safe, but lacks the features needed for modular construction of data structure libraries. Hence, Lucid 2.0 adds (1) polymorphism and ordering constraints for code reuse; (2) abstract, hierarchical pipeline locations and data types to support information hiding; (3) compile-time constructors, vectors and loops to allow for construction of flexible data structures; and (4) type inference to lessen the burden of program annotations. We develop the meta-theory of Lucid 2.0, prove soundness, and show how to encode constraint checking as an SMT problem. We demonstrate the utility of Lucid 2.0 by developing a suite of useful networking libraries and applications that exploit our new language features, including Bloom filters, sketches, cuckoo hash tables, distributed firewalls, DNS reflection defenses, network address translators (NATs) and a probabilistic traffic monitoring service. 
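The pipeline-safety condition — every computation accessing data structures in an order consistent with one global layout — can be illustrated with a small Python check (a hypothetical sketch, not Lucid's type system, which enforces this statically and via SMT). Given each handler's sequence of data-structure accesses, a consistent total order exists exactly when the union of the pairwise orderings is acyclic.

```python
def consistent_order(handlers):
    """Hypothetical sketch of the pipeline-safety condition: handlers is a
    list of access sequences (lists of data-structure names). Returns True
    iff one global order is consistent with every sequence, i.e. the union
    of the implied pairwise orderings has no cycle."""
    # collect "a must come before b" edges from adjacent accesses
    graph = {}
    for seq in handlers:
        for a, b in zip(seq, seq[1:]):
            if a != b:
                graph.setdefault(a, set()).add(b)
    # cycle detection by DFS: 1 = on current path, 2 = fully explored
    state = {}
    def dfs(n):
        if state.get(n) == 1:
            return False          # back edge: no consistent order exists
        if state.get(n) == 2:
            return True
        state[n] = 1
        ok = all(dfs(m) for m in graph.get(n, ()))
        state[n] = 2
        return ok
    return all(dfs(n) for n in graph)
```

If the check fails, the accesses cannot be laid out along a single packet-processing pipeline, which is exactly the class of program the Lucid type system rejects at compile time.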