

Title: The Digital Power Networks: Energy Dissemination Through a Micro-Grid
The Digital Power Network (DPN) is an energy-on-demand approach. In Internet of Things (IoT) terms, it treats the energy itself as a `thing' to be manipulated (in contrast to energy as the `thing's enabler'). The approach is most appropriate for energy-starved micro-grids with limited capacity, such as a home generator while the power grid is down. The process starts with a request for energy from a user (such as an appliance). Each appliance, energy source, or energy storage unit has an address and is able to communicate its status. A network server collects all requests and optimizes the energy dissemination based on priority and availability. Energy is then routed in discrete units to each particular address (say, an air-conditioning, or A/C, unit). Contrary to packets of data over a computer network, whose data bits are characterized by well-behaved voltage and current values at high frequencies, here we deal with energy demands at high voltage, low frequency, and fluctuating current. For example, turning a motor ON requires 8 times more power than the level needed to maintain steady-state operation. Our approach seamlessly integrates all energy resources (including alternative sources), energy storage units, and loads, since they are but addresses in the network. Optimization of energy requests and the analysis of satisfying these requests is the topic of this paper. Under energy constraints, and unlike in the current power grid, some energy requests are queued and granted later. While the ultimate goal is to fuse information and energy together through energy digitization, in its simplest form this micro-grid can be realized by overlaying an auxiliary (communication) network of controllers on top of an energy delivery network and coupling the two through an array of addressable digital power switches.
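The request-queue-grant cycle described in the abstract can be sketched as a toy simulation. All class, method, and parameter names below are our own invention for illustration, not an interface from the paper; the server simply grants addressed requests in priority order until a per-slot capacity budget is exhausted, deferring the rest.

```python
from collections import deque

class EnergyServer:
    """Toy DPN server: grants addressed energy requests up to a capacity
    budget per time slot and queues the remainder for later slots."""

    def __init__(self, capacity_w):
        self.capacity_w = capacity_w  # energy budget per slot (watts)
        self.queue = deque()          # pending (priority, address, watts)

    def request(self, address, watts, priority=0):
        """An appliance at a given address asks for energy."""
        self.queue.append((priority, address, watts))

    def dispatch_slot(self):
        """Grant queued requests, highest priority first, until the
        slot budget is spent; ungranted requests stay queued."""
        budget = self.capacity_w
        granted, deferred = [], deque()
        for prio, addr, watts in sorted(self.queue, reverse=True):
            if watts <= budget:
                budget -= watts
                granted.append((addr, watts))
            else:
                deferred.append((prio, addr, watts))
        self.queue = deferred
        return granted

server = EnergyServer(capacity_w=2000)
server.request("ac-unit-1", 1500, priority=1)
server.request("heater-2", 1000, priority=0)
grants = server.dispatch_slot()  # only the A/C fits in this slot
```

The heater's request is not rejected; it remains queued and would be granted in a later slot, mirroring the deferred-grant behavior the abstract describes.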
Award ID(s):
1641033
NSF-PAR ID:
10124392
Author(s) / Creator(s):
Date Published:
Journal Name:
2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData)
Page Range / eLocation ID:
230 to 235
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In the current grid, power is available at all times, to all users, indiscriminately. This makes the grid vulnerable to sporadic demands, and much effort has been invested in mitigating their effect. We offer here a digital approach to power distribution: an energy-on-demand approach in which the user initiates an energy request to the energy provider's server before receiving the energy. Considering a micro-grid with a mix of generators (sustainable and other sources), the server optimizes the entire power network before granting the energy requests, fully or partially. The energy is packetized and routed to the user's address by an array of switches. For example, in an office building, the energy provider may queue energy requests from some air-conditioning units and grant these requests later. During recovery from a blackout, pockets of instability may be isolated by their unusual energy demands. In its simplest form, this network can be realized by overlaying an auxiliary (control, or data) network on top of an energy delivery network and coupling the two through an array of addressable digital power switches. In assessing this approach, we concentrate in this paper on the management of energy requests using statistical models. An energy network with a limited channel capacity and the optimal path for energy flow in a standard IEEE 39-bus system are considered.
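Choosing an optimal path for energy flow through a bus network, as mentioned above, can be illustrated with a standard shortest-path search. The sketch below is our own toy stand-in, not the paper's method: it runs Dijkstra's algorithm over a 4-bus graph whose edge weights model per-line losses (all names and numbers are made up).

```python
import heapq

def min_loss_path(graph, src, dst):
    """Dijkstra's shortest path over edge weights that represent
    per-line losses; returns (path, total loss)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the destination to recover the path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Toy 4-bus network: generator -> two intermediate buses -> load
buses = {
    "gen":  [("bus1", 0.2), ("bus2", 0.5)],
    "bus1": [("load", 0.4)],
    "bus2": [("load", 0.2)],
}
path, loss = min_loss_path(buses, "gen", "load")  # gen -> bus1 -> load
```

A real study on the IEEE 39-bus system would of course use physical power-flow constraints rather than a single scalar edge weight; the point here is only the routing abstraction.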
  2. We present experiments with combined reactive and resistive loads on a testbed based on the Controlled-Delivery power Grid (CDG) concept. The CDG is a novel data-based paradigm for the distribution of energy in smart cities and smart buildings. This approach to the power grid distributes controlled amounts of power to loads following a request-grant protocol performed through a parallel data network. This network is used as a data plane that notifies the energy supplier of requests and informs loads of the amount of granted power. The energy supplier decides the load, the amount, and the time power is granted. Each load is associated with a network address, which is used when power is requested and granted. In this way, power is only delivered to selected loads. Knowing the amount of power being supplied in the CDG requires knowing the precise power demand before it is requested. While the concept works well for an array of resistive loads, it is unclear how to apply it to reactive loads, such as motors, whose power consumption varies over time. Therefore, in this paper we implement a testbed with multiple loads: two light bulbs as resistive loads and an electric motor as a reactive load. We then propose the use of power profiles to adapt the request-grant protocol of the CDG concept to reactive loads. We use these power profiles to drive the generation of power requests and evaluate the efficiency of the request-grant protocol in terms of the amount of supplied power. In addition, the deviation between delivered power in the data and power planes is evaluated, and results show that the digitized power profile of the reactive loads enables the issuing of power requests for such loads with high accuracy.
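One way to picture profile-driven requests for a reactive load is to slice a digitized power profile into request slots and ask for each slot's peak demand, so the grant always covers the inrush surge. This is a minimal sketch under our own assumptions (the function name, slotting scheme, and sample values are illustrative, not the paper's):

```python
def requests_from_profile(profile_w, slot_len):
    """Turn a digitized power profile (one sample per tick, in watts)
    into per-slot power requests by requesting each slot's peak demand."""
    return [max(profile_w[i:i + slot_len])
            for i in range(0, len(profile_w), slot_len)]

# Motor-like profile: a large inrush surge, then steady-state draw
motor_profile = [800, 750, 400, 120, 100, 100, 100, 100]
slot_requests = requests_from_profile(motor_profile, slot_len=4)  # [800, 100]
```

Requesting the peak over-provisions within a slot, but guarantees the time-varying draw of the reactive load stays within its grant; a finer slot length trades message overhead for tighter grants.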
  3. Obeid, Iyad ; Selesnick, Ivan ; Picone, Joseph (Ed.)
    The goal of this work was to design a low-cost computing facility that can support the development of an open source digital pathology corpus containing 1M images [1]. A single image from a clinical-grade digital pathology scanner can range in size from hundreds of megabytes to five gigabytes. A 1M image database requires over a petabyte (PB) of disk space. Doing meaningful work in this problem space requires a significant allocation of computing resources. The improvements and expansions to our HPC (high-performance computing) cluster, known as Neuronix [2], required to support working with digital pathology fall into two broad categories: computation and storage. To handle the increased computational burden and increase job throughput, we are using Slurm [3] as our scheduler and resource manager. For storage, we have designed and implemented a multi-layer filesystem architecture to distribute a filesystem across multiple machines. These enhancements, which are entirely based on open source software, have extended the capabilities of our cluster and increased its cost-effectiveness. Slurm has numerous features that allow it to generalize to a number of different scenarios. Among the most notable is its support for GPU (graphics processing unit) scheduling. GPUs can offer a tremendous performance increase in machine learning applications [4], and Slurm's built-in mechanisms for handling them were a key factor in making this choice. Slurm has a general resource (GRES) mechanism that can be used to configure and enable support for resources beyond the ones provided by the traditional HPC scheduler (e.g. memory, wall-clock time), and GPUs are among the GRES types that can be supported by Slurm [5]. In addition to being able to track resources, Slurm does strict enforcement of resource allocation.
This becomes very important as the computational demands of the jobs increase, so that they have all the resources they need and do not take resources from other jobs. It is a common practice among GPU-enabled frameworks to query the CUDA runtime library/drivers and iterate over the list of GPUs, attempting to establish a context on all of them. Slurm is able to affect the hardware discovery process of these jobs, which enables a number of these jobs to run alongside each other, even if the GPUs are in exclusive-process mode. To store large quantities of digital pathology slides, we developed a robust, extensible distributed storage solution. We utilized a number of open source tools to create a single filesystem, which can be mounted by any machine on the network. At the lowest layer of abstraction are the hard drives, which are split across four 60-disk chassis using 8TB drives. To support these disks, we have two server units, each equipped with Intel Xeon CPUs and 128GB of RAM. At the filesystem level, we have implemented a multi-layer solution that: (1) connects the disks together into a single filesystem/mountpoint using ZFS (the Zettabyte File System) [6], and (2) connects filesystems on multiple machines together to form a single mountpoint using Gluster [7]. ZFS, initially developed by Sun Microsystems, provides disk-level awareness and a filesystem which takes advantage of that awareness to provide fault tolerance. At the filesystem level, ZFS protects against data corruption and the infamous RAID write-hole bug by implementing a journaling scheme (the ZFS intent log, or ZIL) and copy-on-write functionality. Each machine (1 controller + 2 disk chassis) has its own separate ZFS filesystem. Gluster, essentially a meta-filesystem, takes each of these and provides the means to connect them together over the network, using distributed (similar to RAID 0, but without striping individual files) and mirrored (similar to RAID 1) configurations [8].
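The two-layer ZFS + Gluster design described above is built from standard administration commands. The sketch below is a hedged illustration only: pool names, volume names, hostnames, and disk lists are placeholders (the actual cluster uses 60-disk chassis), and production deployments would add redundancy options appropriate to their hardware.

```
# On each storage node: pool the local disks into one ZFS filesystem
# (pool name and disk list are illustrative, not the cluster's actual layout)
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs create tank/pathology

# Join the per-node ZFS filesystems into one distributed Gluster volume
# (files are distributed across nodes; no striping within a file)
gluster volume create patho-vol transport tcp \
    node1:/tank/pathology/brick node2:/tank/pathology/brick
gluster volume start patho-vol

# Any machine on the network can now mount the single namespace
mount -t glusterfs node1:/patho-vol /mnt/pathology
```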
By implementing these improvements, it has been possible to expand the storage and computational power of the Neuronix cluster arbitrarily to support the most computationally-intensive endeavors by scaling horizontally. We have greatly improved the scalability of the cluster while maintaining its excellent price/performance ratio [1]. 
  4. We present a feasibility analysis of the controlled-delivery power grid (CDG), which uses aggregated power requests by users to reduce communication overhead. The CDG, as an approach to the power grid, uses a data network to communicate requests and grants of power in the distribution of electrical power. These requests and grants let the energy supplier know the power demand in advance and designate the loads, and the times, at which power is supplied. Each load is assigned a power-network address that is used for communicating requests and grants with the energy supplier. With addressed loads, power is only delivered to selected loads. However, issuing a request for power before delivery takes place requires knowing the power demand of the load during the operation interval. It is also a general concern that issuing requests on a time-slot basis may risk request losses and therefore generate intermittent supply. We therefore propose request aggregation to minimize the number of requests issued. We show by simulation that the CDG with request aggregation attains high performance in terms of satisfaction ratio and waiting time for power supply.
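The aggregation idea can be sketched as bundling the pending requests of several addressed loads into a single message to the supplier. The structure below is our own illustration (the message format and load names are invented, not from the paper):

```python
def aggregate(load_requests):
    """Combine pending per-load power requests into one message,
    cutting the number of requests sent to the energy supplier
    from len(load_requests) down to one."""
    total = sum(watts for _, watts in load_requests)
    return {"total_w": total, "breakdown": dict(load_requests)}

# Three loads behind one aggregation point issue one combined request
msg = aggregate([("ac-1", 1500), ("fridge", 150), ("lights", 60)])
```

The supplier can still grant per-address amounts from the breakdown, but only one request traverses the data network per interval instead of one per load.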
  5. In this paper, we present GraphTM, an efficient and scalable framework for processing transactions in a distributed environment. The distributed environment is modeled as a graph where each node is a processing node that issues transactions. The objects that transactions use to execute also reside on the graph nodes (the initial placement may be arbitrary). Transactions execute on the nodes that issue them after collecting all the objects that they need, following the data-flow model of computation. This collection is done by issuing requests for the objects as soon as a transaction starts and waiting until all required objects arrive at the requesting node. The challenge is how to schedule the transactions so that two crucial performance metrics, namely (i) the total execution time to commit all the transactions, and (ii) the total communication cost involved in moving the objects to the requesting nodes, are minimized. We implemented GraphTM in Java and assessed its performance through 3 micro-benchmarks and 5 complex benchmarks from the STAMP benchmark suite on 5 different network topologies, namely clique, line, grid, cluster, and star, which form the underlying communication networks of a representative set of distributed systems commonly used in practice. The results show the efficiency and scalability of our approach.
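The data-flow collection step described above can be pictured with a toy model: a node requests every object its transaction needs, the objects migrate to it, and the transaction body runs only once all objects are local. This is a minimal sketch under our own assumptions; the class and method names are illustrative and are not GraphTM's actual (Java) API, and real scheduling, conflicts, and network delay are omitted.

```python
class ProcessingNode:
    """Toy data-flow transaction node: pulls needed objects to itself,
    then executes the transaction body locally."""

    def __init__(self, name, objects=None):
        self.name = name
        self.objects = dict(objects or {})  # objects currently resident here

    def run_transaction(self, needed, location, body):
        # Issue requests for every missing object up front ...
        for obj in needed:
            if obj not in self.objects:
                owner = location[obj]
                # ... the object migrates to the requesting node
                self.objects[obj] = owner.objects.pop(obj)
                location[obj] = self
        # ... and execute only once every required object is local
        return body(self.objects)

a = ProcessingNode("A", {"x": 1})
b = ProcessingNode("B", {"y": 2})
location = {"x": a, "y": b}   # directory of which node holds each object
result = a.run_transaction(["x", "y"], location,
                           lambda objs: objs["x"] + objs["y"])
# After the transaction, object "y" has migrated from node B to node A
```

The communication cost the paper minimizes corresponds here to the object migrations; a scheduler would try to order transactions so objects travel as little as possible.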