

Title: Scale-out Edge Storage Systems with Embedded Storage Nodes to Get Better Availability and Cost-Efficiency At the Same Time
In the resource-rich environment of data centers, most failures can quickly fail over to redundant resources. In contrast, a failure in edge infrastructures with limited resources might require maintenance personnel to drive to the location to fix the problem. The operational cost of these "truck rolls" to edge locations competes with the operational cost incurred by the extra space and power needed for redundant resources at the edge. Computational storage devices with network interfaces can act as network-attached storage servers and offer a new design point for storage systems at the edge. In this paper we hypothesize that a system consisting of a larger number of such small "embedded" storage nodes provides higher availability due to a larger number of failure domains while also saving operational cost in terms of space and power. As evidence for our hypothesis, we compare the probability of data loss between two different types of storage systems: one constructed with general-purpose servers, and the other constructed with embedded storage nodes. Our results show that the storage system constructed with general-purpose servers has a 7 to 20 times higher risk of losing data than the storage system constructed with embedded storage nodes. We also compare the two alternatives in terms of power and space using the Media-Based Work Unit (MBWU) that we developed in an earlier paper as a reference point.
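To make the MBWU-based comparison mentioned at the end of the abstract concrete, here is a minimal sketch of how power and rack space might be normalized per MBWU for two configurations. The treatment of an MBWU as the workload-specific throughput of one media device, and every number below, are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (illustrative only): normalizing power and space by MBWU.
# Here an MBWU is treated as the workload-specific throughput one media
# device can sustain; a system's MBWU count is its measured throughput
# divided by that baseline. All numbers are made-up placeholders.

def mbwu_count(system_throughput_mbps: float, per_media_throughput_mbps: float) -> float:
    """Number of MBWUs a configuration delivers for a given workload."""
    return system_throughput_mbps / per_media_throughput_mbps

def per_mbwu(metric: float, mbwus: float) -> float:
    """Cost metric (watts, rack units, dollars, ...) normalized per MBWU."""
    return metric / mbwus

# Hypothetical configurations with the same media but different host overhead.
per_media = 250.0  # MB/s one media device sustains for this workload (assumed)

server_based   = {"throughput": 6000.0, "watts": 900.0, "rack_units": 4.0}
embedded_nodes = {"throughput": 5500.0, "watts": 400.0, "rack_units": 2.0}

for name, cfg in [("general-purpose servers", server_based),
                  ("embedded storage nodes", embedded_nodes)]:
    m = mbwu_count(cfg["throughput"], per_media)
    print(f'{name}: {per_mbwu(cfg["watts"], m):.1f} W/MBWU, '
          f'{per_mbwu(cfg["rack_units"], m):.3f} RU/MBWU')
```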
Award ID(s): 1764102, 1705021
NSF-PAR ID: 10275745
Journal Name: HotEdge'20
Sponsoring Org: National Science Foundation
More Like this
  1. Greater penetration of Distributed Generation (DG) in the power system brings new flexibility and opportunities, as well as new challenges due to the generally intermittent nature of DG. When these DG units are installed in medium-voltage distribution systems as components of the smart grid, further support is required to ensure smooth and controllable operation. To complement the uncontrollable output power of these resources, energy storage devices need to be incorporated to absorb excess power and cover power shortages in times of need. They can also provide reactive power to dynamically support the voltage profile. Energy Storage Systems (ESS) can be expensive, and only a limited number of them can practically be installed in distribution systems. In addition to frequency regulation and energy time shifting, ESS can support voltage and angle stability in the power network. This paper applies a Jacobian matrix-based sensitivity analysis to determine the most appropriate node in a grid for collectively improving the voltage magnitude and angle of all nodes by active/reactive power injection. The IEEE 14-, 24-, and 123-bus distribution systems are selected to demonstrate the performance of the proposed method. As opposed to most previous studies, this method requires neither an iteration loop with potential convergence problems nor a complicated network-specific objective function.
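    For illustration, the toy sketch below shows one way a Jacobian-based sensitivity ranking of candidate buses could be computed. The random Jacobian and the scoring rule are stand-ins for the paper's actual formulation, which would start from a solved power flow.

```python
# Minimal sketch (not the paper's exact formulation): ranking candidate buses
# for storage placement by power-flow Jacobian sensitivity. J is the standard
# Newton-Raphson Jacobian relating injected [dP; dQ] to [dtheta; dV]; its
# inverse tells how much every bus's angle and voltage move per unit of
# active/reactive power injected at one bus. The toy Jacobian below is random
# and only stands in for one obtained from an actual power-flow solution.
import numpy as np

rng = np.random.default_rng(0)
n_bus = 6                                   # hypothetical small feeder
J = rng.normal(size=(2 * n_bus, 2 * n_bus)) + 4 * np.eye(2 * n_bus)

S = np.linalg.inv(J)                        # sensitivities: d[theta; V] / d[P; Q]

# Score each candidate bus by the aggregate angle/voltage response that a
# combined unit P and Q injection at that bus produces across all buses.
scores = {}
for bus in range(n_bus):
    dP_col = S[:, bus]                      # response to 1 p.u. of P at `bus`
    dQ_col = S[:, n_bus + bus]              # response to 1 p.u. of Q at `bus`
    scores[bus] = np.abs(dP_col + dQ_col).sum()

print("candidate ranking:", sorted(scores, key=scores.get, reverse=True))
print("best bus for ESS placement (under this toy model):",
      max(scores, key=scores.get))
```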
  2. We aim to preserve a large amount of data generated inside base station-less sensor networks (BSNs) while considering that sensor nodes are selfish. BSNs refer to emerging sensing applications deployed in challenging and inhospitable environments (e.g., underwater exploration); as such, there are no data-collecting base stations in the BSN to collect the data. Consequently, the generated data has to be stored inside the BSN before uploading opportunities become available. Our goal is to preserve the data inside the BSN with minimum energy cost by incentivizing the storage- and energy-constrained sensor nodes to participate in the data preservation process. We refer to the problem as DPP: the data preservation problem in the BSN. Previous research assumes that all sensor nodes are cooperative and have infinite battery power, and designs a minimum-cost flow-based data preservation solution. However, in a distributed setting and under different control, the resource-constrained sensor nodes could behave selfishly to conserve their resources and maximize their benefit.

    In this article, we first solve DPP by designing an integer linear programming (ILP)-based optimal solution without considering selfishness. We then establish a game-theoretical framework that achieves provably truthful and optimal data preservation in BSNs. For a special case of DPP wherein nodes are not energy-constrained, referred to as DPP-W, we design a data preservation game DPG-1 that integrates algorithmic mechanism design (AMD) and a more efficient minimum-cost-flow-based data preservation solution. We show that DPG-1 yields dominant strategies for sensor nodes and delivers truthful and optimal data preservation. For the general case of DPP (wherein nodes are energy-constrained), however, DPG-1 fails to achieve truthful and optimal data preservation. Utilizing packet-level flow observation of sensor node behaviors computed by minimum cost flow and ILP, we uncover the cause of DPG-1's failure: packet dropping by selfish nodes that manipulate the AMD technique. We then design a data preservation game DPG-2 for DPP that traces and punishes manipulative nodes in the BSN. We show that DPG-2 delivers dominant strategies for truth-telling nodes and achieves provably optimal data preservation with cheat-proof guarantees. Via extensive simulations under different network parameters and dynamics, we show that our games achieve system-wide data preservation solutions with optimal energy cost while enforcing truth-telling of sensor nodes about their private cost types. One salient feature of our work is its integrated game theory and network flows approach. With the observation of flow-level sensor node behaviors provided by the network flows, our proposed games can synthesize “microscopic” (i.e., selfish and local) behaviors of sensor nodes and yield targeted “macroscopic” (i.e., optimal and global) network performance of data preservation in the BSN.
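    As a rough illustration of the minimum-cost-flow view of data preservation described above, the sketch below offloads packets from storage-depleted sensors to nodes with free storage at minimum energy cost. The tiny topology, capacities, and energy weights are invented, and networkx merely stands in for whatever solver the paper uses.

```python
# Minimal sketch (illustrative, not the paper's exact model): casting data
# preservation in a base-station-less sensor network as a minimum-cost flow.
# Source sensors must push their packets to nodes with free storage; edge
# weights stand in for per-packet transmission energy.
import networkx as nx

G = nx.DiGraph()

# Node 'demand' follows networkx's convention: negative = supplies flow
# (data packets to offload), positive = absorbs flow (free storage slots).
G.add_node("s1", demand=-4)      # sensor with 4 packets and no storage left
G.add_node("s2", demand=-2)
G.add_node("r1", demand=0)       # pure relay
G.add_node("t1", demand=3)       # node with 3 free storage slots
G.add_node("t2", demand=3)

# weight ~ energy per packet on this hop; capacity ~ link/energy budget
G.add_edge("s1", "r1", weight=2, capacity=4)
G.add_edge("s2", "r1", weight=1, capacity=2)
G.add_edge("r1", "t1", weight=1, capacity=3)
G.add_edge("r1", "t2", weight=3, capacity=3)
G.add_edge("s1", "t2", weight=5, capacity=4)

flow = nx.min_cost_flow(G)       # per-edge packet counts minimizing energy
print(flow)
print("total energy cost:", nx.cost_of_flow(G, flow))
```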

     
  3. Embedded and real-time devices in many domains are increasingly dependent on network connectivity. The ability to offload computations encourages Cost, Size, Weight and Power (C-SWaP) optimizations, while coordination over the network effectively enables systems to sense the environment beyond their own local sensors, and to collaborate globally. The promise is significant: Autonomous Vehicles (AVs) coordinating with each other through infrastructure, factories aggregating data for global optimization, and power-constrained devices leveraging offloaded inference tasks. Low-latency wireless (e.g., 5G) technologies paired with the edge cloud are further enabling these trends. Unfortunately, computation at the edge poses significant challenges due to the combination of limited resources, high performance requirements, security under multi-tenancy, and real-time latency constraints. This paper introduces Edge-RT, a set of OS extensions for the edge designed to meet end-to-end (packet reception to transmission) deadlines across chains of computations. It supports strong security by executing a chain per client device, thus isolating tenant and device computations. Despite a practical focus on deadlines and strong isolation, it maintains high system efficiency. To do so, Edge-RT focuses on per-packet deadlines inherited by the computations that operate on each packet. It introduces mechanisms to avoid per-packet system overheads while trading only bounded impacts on predictable scheduling. Results show that, compared to Linux and EdgeOS, Edge-RT maintains higher throughput and meets significantly more deadlines for systems with bimodal workloads at utilization above 60%, in the presence of malicious tasks, and as the system scales up in clients.
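    A conceptual sketch of per-packet deadline inheritance with earliest-deadline-first dispatch follows. It is not Edge-RT's implementation (which lives in the OS); the chain, deadline budget, and names are hypothetical.

```python
# Minimal sketch (conceptual, not Edge-RT): per-packet deadlines inherited
# along a chain of computations, dispatched earliest-deadline-first.
import heapq, itertools, time

CHAIN_DEADLINE_MS = 5.0            # assumed end-to-end budget per packet
_seq = itertools.count()           # tie-breaker so heap entries stay ordered

ready = []                         # (absolute_deadline, seq, stage, packet)

def on_receive(packet: bytes) -> None:
    """Stamp the packet with its end-to-end deadline at reception time."""
    deadline = time.monotonic() + CHAIN_DEADLINE_MS / 1000.0
    heapq.heappush(ready, (deadline, next(_seq), 0, packet))

def run_once(chain) -> bool:
    """Run the most urgent pending stage; later stages inherit the deadline."""
    if not ready:
        return False
    deadline, _, stage, data = heapq.heappop(ready)
    out = chain[stage](data)
    if stage + 1 < len(chain):     # next stage keeps the packet's deadline
        heapq.heappush(ready, (deadline, next(_seq), stage + 1, out))
    elif time.monotonic() > deadline:
        print("deadline miss at transmission")
    return True

# toy two-stage chain: parse then "transmit"
chain = [lambda p: p.upper(), lambda p: p]
on_receive(b"sensor reading")
while run_once(chain):
    pass
```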
  4. Access libraries such as ROOT[1] and HDF5[2] allow users to interact with datasets using high-level abstractions, like coordinate systems and associated slicing operations. Unfortunately, the implementations of access libraries are based on outdated assumptions about storage system interfaces and are generally unable to fully benefit from modern fast storage devices. For example, access libraries often implement buffering and data layout that assume that large, single-threaded sequential access patterns incur less overall latency than small parallel random accesses: while this is true for spinning media, it is not true for flash media. The situation is getting worse with rapidly evolving storage devices such as non-volatile memory and ever larger datasets. This project explores distributed dataset mapping infrastructures that can integrate and scale out existing access libraries using Ceph’s extensible object model, avoiding re-implementation or even modification of these access libraries as much as possible. These programmable storage extensions coupled with our distributed dataset mapping techniques enable: 1) access library operations to be offloaded to storage system servers, 2) the independent evolution of access libraries and storage systems, and 3) full use of the existing load balancing, elasticity, and failure management of distributed storage systems like Ceph. They also create more opportunities to conduct optimizations local to the storage servers. For example, storage servers might include local key/value stores combined with chunk stores that require different optimizations than a local file system. As storage servers evolve to support new storage devices like non-volatile memory, these server-local optimizations can be implemented while minimizing disruptions to applications. We will report progress on the means by which distributed dataset mapping can be abstracted over particular access libraries, including access libraries for ROOT data, and how we address some of the challenges revolving around data partitioning and composability of access operations.
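    The sketch below illustrates, under invented naming and chunking assumptions, how a coordinate slice issued through an access library could be mapped to the subset of storage objects (and thus storage servers) that hold it; it is not the project's actual object model.

```python
# Minimal sketch (hypothetical): mapping a coordinate slice of a 2-D dataset
# onto fixed-size chunk objects so that a slice request can be routed to only
# the storage servers holding those chunks. Object naming and chunk shape are
# invented for illustration.
from typing import Iterator, Tuple

CHUNK = (256, 256)                                   # assumed chunk shape

def chunks_for_slice(rows: Tuple[int, int],
                     cols: Tuple[int, int],
                     dataset: str = "example.h5/temperature") -> Iterator[str]:
    """Yield object names covering the half-open slice [rows) x [cols)."""
    r0, r1 = rows
    c0, c1 = cols
    for cr in range(r0 // CHUNK[0], (r1 - 1) // CHUNK[0] + 1):
        for cc in range(c0 // CHUNK[1], (c1 - 1) // CHUNK[1] + 1):
            yield f"{dataset}.chunk.{cr}.{cc}"

# A slice spanning two chunk rows and two chunk columns maps to 4 objects,
# each of which could evaluate its part of the selection server-side.
print(list(chunks_for_slice((200, 300), (500, 520))))
```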
  5. As interest in DNA-based information storage grows, the costs of synthesis have been identified as a key bottleneck. A potential direction is to tune synthesis for data. Data strands tend to be composed of a small set of recurring code word sequences, and they contain longer sequences of repeated data. To exploit these properties, we propose a new framework called DINOS. DINOS consists of three key parts: (i) The first is a hierarchical strand assembly algorithm, inspired by gene assembly techniques, that can assemble arbitrary data strands from a small set of primitive blocks. (ii) The assembly algorithm relies on our novel formulation for how to construct primitive blocks, spanning a variety of useful configurations from a set of code words and overhangs. Each primitive block is a code word flanked by a pair of overhangs that are created by a cyclic pairing process that keeps the number of primitive blocks small. Using these primitive blocks, any data strand of arbitrary length can, in theory, be assembled. We show a minimal system for a binary code with as few as six primitive blocks, and we generalize our processes to support an arbitrary set of overhangs and code words. (iii) We exploit our hierarchical assembly approach to identify redundant sequences and coalesce the reactions that create them to make assembly more efficient. We evaluate DINOS and describe its key characteristics. For example, the number of reactions needed to make a strand can be reduced by increasing the number of overhangs or the number of code words, but increasing the number of overhangs offers a small advantage over increasing code words while requiring substantially fewer primitive blocks. However, density is improved more by increasing the number of code words. We also find that a simple redundancy coalescing technique is able to reduce reactions by 90.6% and 41.2% on average for decompressed and compressed data, respectively, even when the smallest data fragments being assembled are 16 bits. With a simple padding heuristic that finds even more redundancy, we can further decrease reactions for the same operating point by up to 91.1% and 59% for decompressed and compressed data, respectively, on average. Our approach offers up to 80% greater density than a prior general-purpose gene assembly technique. Finally, in an analysis of synthesis costs in which we make a 1 GB volume using de novo synthesis versus making only the primitive blocks with de novo synthesis and otherwise assembling using DINOS, we estimate DINOS to be 10^5× cheaper than de novo synthesis.
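    The sketch below illustrates the flavor of hierarchical assembly with redundancy coalescing: identical sub-fragments are cached so repeated sequences cost no extra join reactions. The code words, pairing order, and reaction-count model are simplified placeholders, not the DINOS design.

```python
# Minimal sketch (illustrative, not the DINOS implementation): hierarchical
# pairwise assembly of a strand from code-word blocks, with a cache that
# coalesces identical sub-fragments so repeated sequences are built only once.
from functools import lru_cache
from typing import Tuple

reactions = 0

@lru_cache(maxsize=None)                      # coalescing: reuse identical fragments
def assemble(fragment: Tuple[str, ...]) -> str:
    """Assemble a fragment of code words; each join of two halves = 1 reaction."""
    global reactions
    if len(fragment) == 1:
        return fragment[0]                    # a primitive block, already synthesized
    mid = len(fragment) // 2
    left, right = assemble(fragment[:mid]), assemble(fragment[mid:])
    reactions += 1                            # one ligation joining the two halves
    return left + right

# Toy binary encoding: each 0/1 maps to a short code word (placeholders).
CODE = {"0": "ACGT", "1": "TGCA"}
data_bits = "0110" * 8                        # repetitive data -> lots of reuse
strand = assemble(tuple(CODE[b] for b in data_bits))
print(len(strand), "nt assembled in", reactions, "reactions (with coalescing)")
```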