

Title: A Fog Computing Framework for Cognitive Portable Ground Penetrating Radars
With distributed communication, computation, and storage resources close to end users/devices, fog computing (FC) makes it very promising to develop cognitive portable ground penetrating radars (GPRs) that operate intelligently and adaptively under varying sensing conditions. However, both strict performance requirements and trade-offs between communication and computation pose significant challenges. This paper presents a fog computing framework for cognitive portable GPRs. Specifically, the system architecture of an FC-enabled cognitive portable GPR is developed. Based on the identification of the various computation tasks involved, an offloading policy is proposed to determine whether computation tasks should be executed locally or offloaded to the fog server. Experimental results show the efficacy of the proposed methods. The framework also provides insight into the design of cognitive Internet of Things (IoT) systems supported by fog computing.
Award ID(s):
1647095
PAR ID:
10207228
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
IEEE International Conference on Communications (ICC)
Page Range / eLocation ID:
1 to 6
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
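
To make the offloading decision described in the abstract above concrete, the following is a minimal sketch of a latency-driven offloading rule: a task is sent to the fog server only when the estimated transmission-plus-remote-execution time beats local execution and fits the latency budget. This is an illustrative stand-in under assumed linear latency models, not the paper's actual policy; all parameter names and values are hypothetical.

```python
# Minimal sketch of a latency-driven offloading decision for a GPR computation
# task. Assumes simple linear models for transmission and execution times;
# all names and numbers are illustrative placeholders, not the paper's policy.

from dataclasses import dataclass

@dataclass
class Task:
    input_bits: float   # data that would be uploaded to the fog server
    cycles: float       # CPU cycles the task requires
    deadline_s: float   # latency budget for this task

def local_latency(task: Task, local_cps: float) -> float:
    """Estimated latency if the GPR processes the task on-board."""
    return task.cycles / local_cps

def offload_latency(task: Task, uplink_bps: float, fog_cps: float) -> float:
    """Estimated latency if the task is uploaded and run on the fog server."""
    return task.input_bits / uplink_bps + task.cycles / fog_cps

def should_offload(task: Task, local_cps: float,
                   uplink_bps: float, fog_cps: float) -> bool:
    """Offload only if it beats local execution and still meets the deadline."""
    t_local = local_latency(task, local_cps)
    t_fog = offload_latency(task, uplink_bps, fog_cps)
    return t_fog < t_local and t_fog <= task.deadline_s

# Hypothetical imaging task: 1 MB of traces, 2 GCycles of work, 0.5 s budget.
task = Task(input_bits=8e6, cycles=2e9, deadline_s=0.5)
print(should_offload(task, local_cps=1e9, uplink_bps=50e6, fog_cps=20e9))  # True
```

The same comparison carries over to the edge-computing variant in item 1 below by substituting the edge server's uplink and compute parameters for the fog server's.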
More Like this
  1. With distributed communication, computation, and storage resources close to end users, edge computing has great potential to support delay-sensitive industrial applications involving intelligent edge devices. Cognitive portable ground penetrating radars (GPRs) are expected to achieve high-quality sensing performance in a variety of industrial environments by operating intelligently and adaptively under varying sensing conditions. Although edge computing makes it very promising to develop cognitive portable GPRs, both strict performance requirements and trade-offs between communication and computation pose significant challenges. This paper presents an edge computing framework for cognitive portable GPRs. Specifically, the system architecture of an EC-enabled cognitive portable GPR is developed. Based on the identification of the various computation tasks involved, an offloading policy is proposed to determine whether computation tasks should be executed locally or offloaded to the edge server. Experimental results show the efficacy of the proposed methods. The framework also provides insight into the design of cognitive Internet of Things (IoT) supported by edge computing.
  2. The vehicular fog is a relatively new computing paradigm in which fog computing works with the vehicular network. It provides computation, storage, and location-aware services with low latency to vehicles in close proximity. A vehicular fog network can be formed on the fly by adding the underutilized or unused resources of nearby parked or moving vehicles. Interested vehicles can outsource their resources or data by joining the vehicular fog network while maintaining proper security and privacy. Client vehicles can use these resources or services to perform computation-intensive tasks, store data, or obtain crowdsourced reports through a secure and privacy-preserving communication channel. Because most vehicular network applications are latency and location sensitive, fog is more suitable than the cloud thanks to its low-latency computation, location awareness, and support for mobility. Since vehicular fog computing is still in its early stage, its architecture, security, and privacy models are not yet well defined or widely accepted. In this paper, we have analyzed existing studies on vehicular fog to determine the requirements and issues related to the architecture, security, and privacy of vehicular fog computing. We have also identified and highlighted open research problems in this promising area.
  3. The confluence of advanced networking (5G/6G) and distributed cloud technologies (edge/fog computing) is rapidly transforming next-generation networks into highly distributed computation platforms, especially suited to host emerging resource-intensive and latency-sensitive services (e.g., smart transportation/city/factory, real-time computer vision, augmented reality). In this paper, we leverage the recently proposed Cloud Network Flow (CNF) modeling and optimization framework to design a novel two-timescale orchestration system for the joint control of communication and computation resources in cloud-integrated networks. The Long-Term Controller solves a properly constructed CNF optimization problem at a longer timescale that determines (i) the end-to-end CNF routes (defining data paths and processing locations) for each service chain and (ii) the associated allocation of communication and computation resources. The Short-Term Controller uses a local control policy to adjust the allocation of communication and computation resources based on queue state observations at a shorter timescale. Driven by the lack of proper simulation tools, we also develop new ns-3 features that allow modeling and simulation of cloud-integrated networks equipped with both communication and computation resources hosting arbitrary service chains. Finally, we integrate the proposed orchestration system into ns-3 to evaluate and analyze the dynamic orchestration of a set of representative service chains over a hierarchical cloud-integrated network.
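
As a rough illustration of the two-timescale structure in item 3 above, the sketch below separates a long-timescale step that recomputes routes and nominal allocations from short-timescale steps that adjust allocations based on queue observations. The solver and adjustment rule are placeholder stubs and the timescale ratio is assumed; this is not the Cloud Network Flow formulation or the ns-3 implementation from that work.

```python
# Sketch of a two-timescale orchestration loop. The long-timescale step stands
# in for the CNF optimization (here: proportional shares); the short-timescale
# step stands in for the local queue-driven policy. All details are assumed.

import random

def solve_long_term(demands):
    """Placeholder for the long-timescale optimization: choose a route and a
    nominal resource share for each service chain."""
    total = sum(demands.values()) or 1.0
    return {chain: {"route": f"path-{chain}", "share": d / total}
            for chain, d in demands.items()}

def adjust_short_term(share, queue_len, step=0.05, target_queue=10):
    """Placeholder for the short-timescale policy: grow the share when the
    observed queue exceeds a target, shrink it otherwise."""
    return max(0.0, share + step * (1 if queue_len > target_queue else -1))

demands = {"chain_A": 3.0, "chain_B": 1.0}
LONG_PERIOD = 5                      # assumed ratio: one long step per 5 slots
plan = solve_long_term(demands)

for slot in range(20):
    if slot % LONG_PERIOD == 0:
        plan = solve_long_term(demands)          # long-timescale recomputation
    for chain, cfg in plan.items():
        observed_queue = random.randint(0, 20)   # stand-in for queue state
        cfg["share"] = adjust_short_term(cfg["share"], observed_queue)

print(plan)
```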
  4. Task offloading, which refers to processing (computation-intensive) data at facilitating servers, is an exemplary service that greatly benefits from the fog computing paradigm, which brings computation resources to the edge network for reduced application latency. However, the resource-consuming nature of task execution, as well as the sheer scale of IoT systems, raises an open and challenging question: is fog a remedy or a resource drain, considering frequent and massive offloading operations? This question is nontrivial because participants in offloading processes, i.e., fog nodes, may have diversified technical specifications, while task generators, i.e., task nodes, may employ a variety of criteria to select offloading targets, resulting in an unmanageable space for performance evaluation. To overcome these challenges of heterogeneity, we propose a gravity model that characterizes offloading criteria with various gravity functions, in which individual and system resource consumption can be examined via the device-effort and network-effort metrics, respectively. Simulation results show that the proposed gravity model can flexibly describe different offloading schemes in terms of application- and node-level behavior. We find that the expected lifetime and device effort of individual tasks decrease as O(1/N) with the network size N, while the network effort decreases much more slowly, even remaining O(1) when load-balancing measures are employed, indicating a possible resource drain in the edge network.
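
The gravity model in item 4 above can be pictured with a toy target-selection rule in which a task node's attraction to a fog node grows with that node's capacity and decays with squared distance. The gravity function, the node parameters, and the load bookkeeping below are assumptions chosen for illustration, not the metrics defined in that paper.

```python
# Toy gravity-based selection of an offloading target: each task picks the fog
# node with the highest attraction, here capacity / squared distance. The
# gravity function and all parameters are illustrative assumptions.

fog_nodes = {
    "fog1": {"pos": (0.0, 0.0), "capacity": 4.0},
    "fog2": {"pos": (5.0, 0.0), "capacity": 9.0},
}

def gravity(task_pos, node):
    """Attraction of a fog node for a task located at task_pos."""
    dx = task_pos[0] - node["pos"][0]
    dy = task_pos[1] - node["pos"][1]
    dist2 = dx * dx + dy * dy + 1e-9            # avoid division by zero
    return node["capacity"] / dist2

def select_target(task_pos):
    """Return the name of the fog node maximizing the gravity function."""
    return max(fog_nodes, key=lambda name: gravity(task_pos, fog_nodes[name]))

# Crude bookkeeping: count how many tasks each fog node attracts.
load = {name: 0 for name in fog_nodes}
for task_pos in [(1.0, 1.0), (4.0, 0.5), (2.5, 2.5)]:
    target = select_target(task_pos)
    load[target] += 1
    print(task_pos, "->", target)
print("per-node load:", load)
```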
  5. For robots using motion planning algorithms such as RRT and RRT*, the computational load can vary by orders of magnitude as the complexity of the local environment changes. To adaptively provide such computation, we propose Fog Robotics algorithms in which cloud-based serverless lambda computing provides parallel computation on demand. To use this parallelism, we propose novel motion planning algorithms that scale effectively with an increasing number of serverless computers. However, given that the allocation of computing is typically bounded by both monetary and time constraints, we show how prior learning can be used to efficiently allocate resources at runtime. We demonstrate the algorithms and the application of learned parallel allocation both in simulation and with the Fetch commercial mobile manipulator, using Amazon Lambda to complete a sequence of sporadically computationally intensive motion planning tasks.
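
Item 5 above parallelizes sampling-based planning across serverless workers. The sketch below captures only the fan-out pattern: several independent randomized planning attempts run in parallel and the cheapest result is kept. A local thread pool stands in for Amazon Lambda invocations, and the per-worker planner is a random stub rather than RRT or RRT*.

```python
# Sketch of fanning a motion-planning query out to parallel workers and keeping
# the lowest-cost result. A thread pool stands in for serverless invocations;
# the per-worker planner is a random stub, not an actual RRT/RRT* implementation.

import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def plan_attempt(query, seed):
    """Placeholder for one randomized planning attempt (e.g., one RRT run)."""
    rng = random.Random(seed)
    return {"seed": seed, "cost": rng.uniform(1.0, 10.0), "query": query}

def parallel_plan(query, n_workers=8):
    """Launch n_workers independent attempts and return the cheapest plan."""
    best = None
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(plan_attempt, query, seed) for seed in range(n_workers)]
        for fut in as_completed(futures):
            result = fut.result()
            if best is None or result["cost"] < best["cost"]:
                best = result
    return best

print(parallel_plan({"start": (0, 0), "goal": (5, 5)}))
```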