Title: Efficient and Effective Proactive Scheduling for mmWave WLANs
To cope with growing wireless bandwidth demand, millimeter wave (mmWave) communication has been identified as a promising technology to deliver Gbps throughput. However, because mmWave signals are susceptible to blockage, applications can experience significant performance variability as users move around and channel conditions change rapidly. In this context, proactive schedulers that make use of future data rate predictions have the potential to deliver significant performance improvements over traditional schedulers. In this work, we propose an efficient proactive algorithm that prioritizes the scheduling of scarce resources to achieve better performance than traditional schedulers. The results show that our scheduler can increase the average data rate by up to 20% compared to non-proactive scheduling and achieves 60% to 75% of the performance gain of an optimal proactive scheduler.
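As a rough illustration of the scarcity-prioritizing idea described in the abstract, the following Python sketch schedules each slot greedily from predicted per-user rates, favoring users whose good-channel opportunities are rare. It is a toy example under simplified assumptions (perfect rate prediction, one user per slot); the function and variable names are hypothetical and do not reflect the paper's actual algorithm.

    # Toy sketch of a greedy proactive scheduler (illustrative only; the paper's
    # actual prioritization rule may differ). Assumes a short window of predicted
    # per-user rates is known; all names below are hypothetical.
    def greedy_proactive_schedule(predicted_rates):
        """predicted_rates[u][t]: predicted rate (Mbps) of user u in future slot t.
        Returns a list assigning one user to each slot in the window."""
        num_users = len(predicted_rates)
        num_slots = len(predicted_rates[0])

        # A user's opportunity is "scarce" if few future slots offer it a good rate.
        def scarcity(u):
            peak = max(predicted_rates[u])
            good_slots = sum(1 for r in predicted_rates[u] if r >= 0.5 * peak)
            return 1.0 / max(good_slots, 1)

        served = [0.0] * num_users   # service received so far, used as a mild fairness bias
        schedule = []
        for t in range(num_slots):
            # Prefer users whose scarce opportunities are usable *now*,
            # breaking ties toward users who have received less service.
            best = max(range(num_users),
                       key=lambda u: (predicted_rates[u][t] * scarcity(u), -served[u]))
            schedule.append(best)
            served[best] += predicted_rates[best][t]
        return schedule

    # Example: user 1 is blocked except in slot 2, so that slot is reserved for it.
    rates = [[100, 100, 100, 100],   # user 0: consistently good channel
             [ 10,  10, 120,  10]]   # user 1: blocked except in slot 2
    print(greedy_proactive_schedule(rates))   # -> [0, 0, 1, 0]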
Award ID(s):
2016381
PAR ID:
10556583
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400703669
Page Range / eLocation ID:
57 to 64
Format(s):
Medium: X
Location:
Montreal, Quebec, Canada
Sponsoring Org:
National Science Foundation
More Like this
  1. To cope with growing wireless bandwidth demand, millimeter wave (mmWave) communication has been identified as a promising technology to deliver Gbps throughput. However, because mmWave signals are susceptible to blockage, applications can experience significant performance variability as users move around and channel conditions change rapidly. In this context, proactive schedulers that make use of future data rate predictions have the potential to bring significant performance improvements compared to traditional schedulers. In this work, we explore the possibility of proactive scheduling that uses mobility prediction and some knowledge of the environment to predict future channel conditions. We present both an optimal proactive scheduler, which is based on an integer linear programming formulation and provides an upper bound on proactive scheduling performance, and a greedy heuristic proactive scheduler that is suitable for practical implementation. Extensive simulation results show that proactive scheduling has the potential to increase average user data rate by up to 35% over the classic proportional fair scheduler without any loss of fairness and with only a small increase in jitter. The results also show that the efficient proactive heuristic scheduler achieves 60% to 75% of the performance gains of the optimal proactive scheduler. Finally, the results show that proactive scheduling performance is sensitive to the quality of mobility prediction; thus, state-of-the-art mobility prediction techniques will be necessary to realize its full potential.
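For reference, the classic proportional fair baseline cited in this abstract can be sketched as follows. This is the textbook formulation (serve the user with the largest ratio of instantaneous rate to smoothed throughput), not the paper's implementation, and the names are illustrative.

    # Minimal sketch of the classic proportional fair (PF) rule: each slot serves
    # the user with the largest ratio of instantaneous rate to smoothed average
    # throughput, then updates the smoothed throughput of every user.
    def proportional_fair_step(inst_rates, avg_tput, alpha=0.05):
        """inst_rates[u]: current achievable rate of user u.
        avg_tput[u]: exponentially smoothed throughput of user u.
        Returns the chosen user and the updated throughput averages."""
        chosen = max(range(len(inst_rates)),
                     key=lambda u: inst_rates[u] / max(avg_tput[u], 1e-9))
        new_avg = [(1 - alpha) * t + alpha * (r if u == chosen else 0.0)
                   for u, (r, t) in enumerate(zip(inst_rates, avg_tput))]
        return chosen, new_avg

    # A proactive scheduler replaces inst_rates with predicted future rates and
    # plans several slots ahead instead of deciding one slot at a time.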
  2. Although the millimeter wave (mmWave) band has great potential to address ever-increasing demands for wireless bandwidth, its intrinsically unique propagation characteristics call for different scheduling strategies in order to minimize performance drops caused by blockages. A promising approach to mitigate the blockage problem is proactive scheduling, which uses blockage predictions to schedule users when they are experiencing good channel conditions. In this paper, we formulate an optimal scheduling problem with fairness constraints that allows us to find a schedule with maximum aggregate rate that achieves approximately the same fairness as the classic proportional fair scheduler. The results show that, for the problem settings studied, up to around 30% increase in aggregate rate compared to classic proportional fair scheduling (PFS) is possible with no decrease in fairness when blockages can be accurately predicted 0.5 seconds in advance. Furthermore, aggregate rate could be doubled compared to PFS if blockages can be accurately predicted 5 seconds in advance. While these results demonstrate the very promising potential of proactive scheduling, we also discuss several future research directions that must be pursued to effectively realize the approach. 
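The kind of rate-maximizing schedule with fairness constraints described above can be sketched as a small integer program. The sketch below uses the PuLP modeling library and substitutes a simplified per-user minimum-rate floor for the paper's proportional-fairness constraints; all names and parameters are illustrative assumptions.

    # Sketch of a rate-maximizing integer program with a fairness floor, using
    # the PuLP modeling library. The per-user minimum-rate constraint is a
    # simplification of the paper's fairness constraints.
    from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

    def solve_proactive_schedule(pred_rate, min_rate):
        """pred_rate[u][t]: predicted rate of user u in slot t (from blockage prediction).
        min_rate[u]: minimum total rate user u must receive over the horizon."""
        users = range(len(pred_rate))
        slots = range(len(pred_rate[0]))

        prob = LpProblem("proactive_schedule", LpMaximize)
        x = {(u, t): LpVariable(f"x_{u}_{t}", cat=LpBinary)
             for u in users for t in slots}

        # Objective: maximize aggregate delivered rate over the prediction horizon.
        prob += lpSum(pred_rate[u][t] * x[u, t] for u in users for t in slots)

        # At most one user is scheduled in each slot.
        for t in slots:
            prob += lpSum(x[u, t] for u in users) <= 1

        # Fairness floor: every user receives at least its minimum share.
        for u in users:
            prob += lpSum(pred_rate[u][t] * x[u, t] for t in slots) >= min_rate[u]

        prob.solve()
        return {t: u for u in users for t in slots if x[u, t].value() == 1}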
  3. Kernel task scheduling is important for application performance, adaptability to new hardware, and meeting complex user requirements. However, developing, testing, and debugging new scheduling algorithms in Linux, the most widely used cloud operating system, is slow and difficult. We developed Enoki, a framework for high-velocity development of Linux kernel schedulers. Enoki schedulers are written in safe Rust, and the system supports live upgrade of new scheduling policies into the kernel, userspace debugging, and bidirectional communication with applications. A scheduler implemented with Enoki achieved near-identical performance (within 1% on average) to the default Linux scheduler, CFS, on a wide range of benchmarks. Enoki is also able to support a range of research schedulers, specifically the Shinjuku scheduler, a locality-aware scheduler, and the Arachne core arbiter, with good performance.
  4. Recently, Graph Neural Networks (GNNs) have been applied for scheduling jobs over clusters, achieving better performance than hand-crafted heuristics. Despite their impressive performance, concerns remain over whether these GNN-based job schedulers meet users’ expectations about other important properties, such as strategy-proofness, sharing incentive, and stability. In this work, we consider formal verification of GNN-based job schedulers. We address several domain-specific challenges such as networks that are deeper and specifications that are richer than those encountered when verifying image and NLP classifiers. We develop vegas, the first general framework for verifying both single-step and multi-step properties of these schedulers based on carefully designed algorithms that combine abstractions, refinements, solvers, and proof transfer. Our experimental results show that vegas achieves significant speed-up when verifying important properties of a state-of-the-art GNN-based scheduler compared to previous methods. 
  5. Serverless computing enables a new way of building and scaling cloud applications by allowing developers to write fine-grained serverless or cloud functions. The execution duration of a cloud function is typically short, ranging from a few milliseconds to hundreds of seconds. However, due to resource contention caused by public clouds' deep consolidation, the function execution duration may get significantly prolonged and fail to accurately account for the function's true resource usage. We observe that function duration can be highly unpredictable, with amplification of more than 50× on an open-source FaaS platform (OpenLambda). Our experiments show that the OS scheduling policy of the cloud functions' host server can have a crucial impact on performance. The default Linux scheduler, CFS (Completely Fair Scheduler), being oblivious to workloads, frequently context-switches short functions, causing turnaround times that are much longer than their service times. We propose SFS (Smart Function Scheduler), which works entirely in user space and carefully orchestrates the existing Linux FIFO and CFS schedulers to approximate Shortest Remaining Time First (SRTF). SFS uses two-level scheduling that seamlessly combines a new FILTER policy with Linux CFS, trading increased duration of long functions for significant performance improvements for short functions. We implement SFS in the Linux user space and port it to OpenLambda. Evaluation results show that SFS significantly improves short functions' duration with a small impact on relatively longer functions, compared to CFS.
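The intuition behind approximating SRTF can be seen with a toy turnaround-time comparison. The snippet below is an illustrative simulation (not SFS or its FILTER policy) contrasting round-robin time-slicing, standing in for CFS, with shortest-remaining-time-first when many short functions share a core with one long one; the job mix and names are assumptions.

    # Toy illustration (not SFS itself) of why approximating SRTF helps short
    # functions: compare turnaround times under round-robin time-slicing
    # (standing in for CFS) versus shortest-remaining-time-first.
    def round_robin_turnaround(service_times, quantum=1):
        remaining = list(service_times)
        finish = [0] * len(service_times)
        clock = 0
        while any(r > 0 for r in remaining):
            for i, r in enumerate(remaining):
                if r > 0:
                    run = min(quantum, r)
                    clock += run
                    remaining[i] -= run
                    if remaining[i] == 0:
                        finish[i] = clock
        return finish   # all jobs arrive at t=0, so finish time equals turnaround

    def srtf_turnaround(service_times):
        finish = [0] * len(service_times)
        clock = 0
        for i in sorted(range(len(service_times)), key=lambda j: service_times[j]):
            clock += service_times[i]
            finish[i] = clock
        return finish

    # Nine 5 ms functions share a core with one 100 ms function.
    jobs = [5] * 9 + [100]
    rr, srtf = round_robin_turnaround(jobs), srtf_turnaround(jobs)
    print(sum(rr[:9]) / 9, sum(srtf[:9]) / 9)   # mean short-function turnaround: 45.0 vs 25.0
    print(rr[9], srtf[9])                       # long-function turnaround: 145 vs 145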