Title: Data-priority Aware Fair Task Scheduling for Stream Processing at the Edge
Real-time data stream processing at the edge is crucial for time-sensitive tasks within large-scale IoT systems. Task scheduling plays a key role in managing Quality of Service (QoS), necessitating a prioritization system that distinguishes between high- and low-priority tasks and thus ensures efficient data processing on edge nodes. Existing scheduling algorithms rigidly prioritize tasks deemed high-priority, often at the expense of fairness and overall system efficiency. In this paper, we propose a Priority-aware Fair Task Scheduling (FTS-Hybrid) algorithm that addresses these challenges by managing priority-based task execution in a controlled manner. Our task scheduling algorithm streamlines resource utilization and enhances system responsiveness, contributing to low latency and high throughput, and outperforms competing techniques including First-Come-First-Serve (FCFS), Round Robin (RR), and Priority Scheduling (PS). We implemented FTS-Hybrid on Apache Storm and evaluated its performance using an open-source real-time IoT benchmark (RIoTBench). Experimental results show that FTS-Hybrid reduces task execution latency by 24%, 31%, and 26% compared with FCFS, RR, and PS, respectively, by strategically mitigating queuing delays under dynamic workload conditions.
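The abstract describes a policy that blends strict prioritization with fairness so that low-priority tasks are not starved. Below is a minimal sketch of one plausible shape of such a hybrid dispatcher, written in Python; the class name, the fairness_interval parameter, and the queueing details are hypothetical illustrations of the general idea, not the FTS-Hybrid implementation evaluated in the paper.

```python
import heapq
import itertools

class HybridPriorityFairScheduler:
    """Illustrative hybrid scheduler: most dispatch slots follow strict priority
    order, but every `fairness_interval`-th slot serves the longest-waiting task
    regardless of priority, so low-priority work is never starved indefinitely."""

    def __init__(self, fairness_interval=4):
        self._heap = []                 # entries: (priority, seq, task); lower priority value = more urgent
        self._seq = itertools.count()   # tie-breaker that preserves FIFO order within a priority level
        self._dispatched = 0
        self._fairness_interval = fairness_interval

    def submit(self, task, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), task))

    def next_task(self):
        if not self._heap:
            return None
        self._dispatched += 1
        if self._dispatched % self._fairness_interval == 0:
            # Fairness slot: pick the task that has waited longest, regardless of priority.
            idx = min(range(len(self._heap)), key=lambda i: self._heap[i][1])
            task = self._heap.pop(idx)[2]
            heapq.heapify(self._heap)   # restore heap invariant after removing an arbitrary entry
            return task
        # Normal slot: strict priority order.
        return heapq.heappop(self._heap)[2]

# Example: a low-priority batch still gets through while high-priority alerts dominate.
sched = HybridPriorityFairScheduler(fairness_interval=3)
sched.submit("low-priority sensor batch", priority=5)
for i in range(5):
    sched.submit(f"high-priority alert {i}", priority=1)
while (t := sched.next_task()) is not None:
    print(t)
```

The design choice illustrated here is the trade-off named in the abstract: a purely strict priority queue minimizes latency for urgent tasks but can starve the rest, while reserving an occasional dispatch slot for the oldest waiting task bounds how long any task can be deferred.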
Award ID(s):
2135439
PAR ID:
10530954
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Arbitrary processor affinities are used in multiprocessor systems to specify the processors on which a task may be scheduled. However, affinity constraints can prevent some high-priority real-time tasks from being scheduled while lower-priority tasks execute. This paper presents an implementation and evaluation of Strong Arbitrary Processor Affinity scheduling on a real-time operating system, an approach that not only respects user-defined affinities but also supports migrating a higher-priority task to allow execution of a task limited by affinity constraints. Results show an improvement in the response and turnaround times of higher-priority tasks.
  2. Cloud computing has become an emerging trend in the software industry, driven by the need for large-scale infrastructure and resources. The future success of cloud computing depends on how effectively the infrastructure is instantiated and available resources are utilized. Load balancing helps meet these conditions by dynamically distributing the workload among the nodes so that no single resource is either overwhelmed with tasks or underutilized. In this paper, we propose a threshold-based load balancing algorithm to ensure an even distribution of the workload among the nodes. The main objective of the algorithm is to prevent any VM in the cloud from being overloaded with tasks or from sitting idle for lack of task allocation while active tasks remain (a minimal illustrative sketch of this threshold idea appears after this list). We have simulated the proposed algorithm in the CloudAnalyst simulator with real-world data scenarios. Simulation results show that the proposed threshold-based algorithm provides better response time for tasks/requests and better data-processing time for the datacenters compared with existing algorithms such as First Come First Serve (FCFS), Round Robin (RR), and the Equally Spread Current Execution Load Balancing algorithm (ESCELB).
  3. In today's era of the Internet of Things (IoT), where massive amounts of data are produced by IoT and other devices, edge computing has emerged as a prominent paradigm for low-latency data processing. However, applications may have diverse latency requirements: certain latency-sensitive operations may need to be performed at the edge, while delay-tolerant operations can run in the cloud without occupying the potentially limited edge computing resources. To achieve this, we envision an environment where computing resources are distributed across edge and cloud offerings. In this paper, we present the design of CLEDGE (CLoud + EDGE), an information-centric hybrid cloud-edge framework that aims to maximize the on-time completion of computational tasks offloaded by applications with diverse latency requirements. The design of CLEDGE is motivated by the networking challenges that mixed reality researchers face. Our evaluation demonstrates that CLEDGE can complete more than 90% of offloaded tasks on time with modest overheads.
  4. Fixed-priority multicore schedulers are often preferable to dynamic-priority ones because they entail less overhead, are easier to implement, and enable certain tasks to be favored over others. Under global fixed-priority (G-FP) scheduling, as applied to the standard sporadic task model, response times for low-priority tasks may be unbounded, even if total task-system utilization is low. In this paper, it is shown that this negative result can be circumvented if different jobs of the same task are allowed to execute in parallel. In particular, a response-time bound is presented for task systems that allow intra-task parallelism. This bound merely requires that total utilization does not exceed the overall processing capacity; individual task utilizations need not be further restricted. This result implies that G-FP is optimal for scheduling soft real-time tasks that require bounded tardiness, if intra-task parallelism is allowed.
  5. Serverless computing is an emerging event-driven programming model that accelerates the development and deployment of scalable web services on cloud computing systems. Though widely integrated with the public cloud, serverless computing is still nascent for edge-based IoT deployments. In this work, we design and develop STOIC (Serverless TeleOperable HybrId Cloud), an IoT application deployment and offloading system that extends the serverless model in three ways. First, STOIC adopts a dynamic feedback control mechanism to precisely predict latency and dispatch workloads uniformly across edge and cloud systems using a distributed serverless framework. Second, STOIC leverages hardware acceleration (e.g., GPU resources) for serverless function execution when available from the underlying cloud system. Third, STOIC can be configured in multiple ways to overcome deployment variability associated with public cloud use. Finally, we empirically evaluate STOIC using real-world machine learning applications and multi-tier IoT deployments (edge and cloud). We show that STOIC can be used for training image processing workloads (for object recognition), once thought too resource intensive for edge deployments. We find that STOIC reduces overall execution time (response latency) and achieves placement accuracy ranging from 92% to 97%.
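The threshold-based balancing described in related work 2 above can be pictured concretely. Below is a minimal sketch, assuming a hypothetical ThresholdLoadBalancer class with per-VM active-task counts and a single upper threshold; it illustrates the general idea of keeping every VM below a load ceiling while never leaving a VM idle when work is pending, and is not the algorithm evaluated in that paper.

```python
class ThresholdLoadBalancer:
    """Illustrative threshold-based balancer: a VM receives new tasks only while
    its active-task count is below `upper`; when every VM is at or above `upper`,
    the least-loaded VM is chosen so no request waits while capacity exists."""

    def __init__(self, vm_ids, upper=10):
        self.upper = upper
        self.load = {vm: 0 for vm in vm_ids}   # active task count per VM

    def assign(self, task_id):
        # Prefer any VM below the upper threshold; otherwise fall back to the least loaded.
        candidates = [vm for vm, n in self.load.items() if n < self.upper]
        pool = candidates if candidates else list(self.load)
        vm = min(pool, key=lambda v: self.load[v])
        self.load[vm] += 1
        return vm

    def complete(self, vm):
        # Called when a task finishes on `vm`, freeing a slot.
        self.load[vm] = max(0, self.load[vm] - 1)

# Example: incoming tasks spread across VMs instead of piling onto one.
lb = ThresholdLoadBalancer(["vm-0", "vm-1", "vm-2"], upper=2)
placements = [lb.assign(i) for i in range(7)]
print(placements)   # ['vm-0', 'vm-1', 'vm-2', 'vm-0', 'vm-1', 'vm-2', 'vm-0']
```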