Title: Indistinguishability Prevents Scheduler Side Channels in Real-Time Systems
Scheduler side channels can leak critical information in real-time systems, posing serious threats to many safety-critical applications. The main culprit is the inherent determinism in the runtime timing behavior of such systems, e.g., the (expected) periodic behavior of critical tasks. In this paper, we introduce the notion of "schedule indistinguishability", inspired by work in differential privacy, which introduces diversity into the schedules of such systems while offering analyzable security guarantees. We achieve this by adding sufficiently large (controlled) noise to the task schedules in order to break their deterministic execution patterns. An "epsilon-Scheduler" then implements schedule indistinguishability in real-time Linux. We evaluate our system using two real applications: (a) an autonomous rover running on a real hardware platform (Raspberry Pi) and (b) a video streaming application that sends data across large geographic distances. Our results show that the epsilon-Scheduler offers better protection against scheduler side-channel attacks in real-time systems while still maintaining good performance and meeting quality-of-service (QoS) requirements.
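The core idea of the abstract — perturbing otherwise-deterministic task schedules with differential-privacy-style noise — can be illustrated with a minimal sketch. This is an illustrative stand-in, not the paper's epsilon-Scheduler: the function names, the Laplace noise on nominal release times, and the monotonicity clamp are all our own assumptions.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale) using only the stdlib.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_release_times(period, n_jobs, epsilon, sensitivity=1.0):
    """Perturb the nominal periodic release times t_k = k * period with
    Laplace(sensitivity / epsilon) noise, keeping the sequence monotone.
    Smaller epsilon means more noise, so an observer finds it harder to
    distinguish one task's schedule from another's."""
    scale = sensitivity / epsilon
    times, prev = [], 0.0
    for k in range(n_jobs):
        t = k * period + laplace_noise(scale)
        t = max(t, prev)  # release times must stay ordered in time
        times.append(t)
        prev = t
    return times
```

With a small epsilon the jitter dominates the period and the periodic pattern that a side-channel observer relies on disappears; with a large epsilon the schedule stays close to its deterministic baseline.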
Award ID(s):
1718952
NSF-PAR ID:
10313430
Journal Name:
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
Sponsoring Org:
National Science Foundation
More Like this
  1. The large number of antennas in massive MIMO systems allows the base station to communicate with multiple users at the same time and frequency resource with multi-user beamforming. However, highly correlated user channels could drastically impede the spectral efficiency that multi-user beamforming can achieve. As such, it is critical for the base station to schedule a suitable group of users in each time and frequency resource block to achieve maximum spectral efficiency while adhering to fairness constraints among the users. In this paper, we consider the resource scheduling problem for massive MIMO systems with its optimal solution known to be NP-hard. Inspired by recent achievements in deep reinforcement learning (DRL) to solve problems with large action sets, we propose SMART, a dynamic scheduler for massive MIMO based on the state-of-the-art Soft Actor-Critic (SAC) DRL model and the K-Nearest Neighbors (KNN) algorithm. Through comprehensive simulations using realistic massive MIMO channel models as well as real-world datasets from channel measurement experiments, we demonstrate the effectiveness of our proposed model in various channel conditions. Our results show that our proposed model performs very close to the optimal proportionally fair (Opt-PF) scheduler in terms of spectral efficiency and fairness with more than one order of magnitude lower computational complexity in medium network sizes where Opt-PF is computationally feasible. Our results also show the feasibility and high performance of our proposed scheduler in networks with a large number of users and resource blocks. 
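The KNN step described above — mapping a continuous actor output onto a huge discrete set of user groups — can be sketched in a few lines. This is a Wolpertinger-style selection written as a stdlib-only stand-in, not SMART's actual model: the binary group encoding, `score_fn`, and all names here are hypothetical.

```python
import itertools

def knn_discretize(proto_action, candidates, k, score_fn):
    """Pick the k candidate actions nearest (squared Euclidean distance)
    to the actor's continuous proto-action, then keep the one the critic
    scores highest. `score_fn` stands in for a learned Q-network."""
    by_dist = sorted(
        candidates,
        key=lambda a: sum((x - y) ** 2 for x, y in zip(proto_action, a)),
    )
    return max(by_dist[:k], key=score_fn)

# Hypothetical example: schedule 2 of 4 users; an action is a 0/1 user mask.
groups = [g for g in itertools.product((0.0, 1.0), repeat=4) if sum(g) == 2]
proto = (0.9, 0.8, 0.1, 0.2)  # made-up actor output favoring users 0 and 1
best = knn_discretize(proto, groups, k=3, score_fn=sum)
```

Restricting the critic to the k nearest candidates is what keeps per-decision cost low even when the full action set (all feasible user groups) is combinatorially large.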
    The concept of Industry 4.0 unifies the industrial Internet of Things (IoT), cyber-physical systems, and data-driven business modeling to improve the production efficiency of factories. To ensure high production efficiency, Industry 4.0 requires industrial IoT to be adaptable, scalable, real-time, and reliable. Recent successful industrial wireless standards such as WirelessHART have emerged as a feasible approach for such industrial IoT. For reliable and real-time communication in highly unreliable environments, they adopt a high degree of redundancy. While a high degree of redundancy is crucial to real-time control, it causes a huge waste of energy, bandwidth, and time under a centralized approach, and is therefore less suitable for scalability and for handling network dynamics. To address these challenges, we propose DistributedHART, a distributed real-time scheduling system for WirelessHART networks. The essence of our approach is to adopt local (node-level) scheduling through a time-window allocation among the nodes that allows each node to schedule its transmissions locally and online using a real-time scheduling policy. DistributedHART obviates the need to create and disseminate a central global schedule, thereby significantly reducing resource usage and enhancing scalability. To our knowledge, it is the first distributed real-time multi-channel scheduler for WirelessHART. We have implemented DistributedHART and experimented on a 130-node testbed. Our testbed experiments as well as simulations show at least 85% less energy consumption in DistributedHART compared to the existing centralized approach, while ensuring similar schedulability.
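The node-local decision described above can be sketched as a single function: within its allotted time window, a node picks among its pending packets using a real-time policy. The abstract does not name the policy, so earliest-deadline-first (EDF) is assumed here purely for illustration; the signature and names are hypothetical.

```python
def pick_transmission(now, window, pending):
    """Node-local, online scheduling sketch for a time-windowed node.
    During the node's allotted window, transmit the pending packet with
    the earliest absolute deadline (EDF, assumed for illustration).
    `pending` is a list of (packet_id, deadline) pairs. Returns the
    chosen packet_id, or None outside the window or when idle."""
    start, end = window
    if not (start <= now < end) or not pending:
        return None
    return min(pending, key=lambda p: p[1])[0]
```

Because each node decides only from its own queue and its own window, no central schedule ever has to be computed or disseminated, which is the source of the energy and scalability savings the abstract reports.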
  3. This paper presents a new Single Source Shortest Path (SSSP) algorithm for GPUs. Our key advancement is an improved work scheduler, which is central to the performance of SSSP algorithms. Previous GPU solutions for SSSP use simple work schedulers that can be implemented efficiently on GPUs but that produce low quality schedules. Such solutions yield poor work efficiency and can underutilize the hardware due to a lack of parallelism. Our solution introduces a more sophisticated work scheduler---based on a novel highly parallel approximate priority queue---that produces high quality schedules while being efficiently implementable on GPUs. To evaluate our solution, we use 226 graph inputs from the Lonestar 4.0 benchmark suite and the SuiteSparse Matrix Collection, and we find that our solution outperforms the previous state-of-the-art solution by an average of 2.9×, showing that an efficient work scheduling mechanism can be implemented on GPUs without sacrificing schedule quality. While this paper focuses on the SSSP problem, it has broader implications for the use of GPUs, illustrating that seemingly ill-suited data structures, such as priority queues, can be efficiently implemented for GPUs if we use the proper software structure. 
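The idea of an approximate priority queue that trades strict ordering for parallelism can be sketched with a bucketed queue: keys are coarsened into buckets of width delta, and pop() returns an arbitrary element of the lowest nonempty bucket. This is a sequential, stdlib-only stand-in for the concept, not the paper's GPU data structure; the class and parameter names are our own.

```python
from collections import defaultdict

class BucketQueue:
    """Approximate priority queue: elements in the same bucket are
    unordered relative to each other, which is what lets a parallel
    implementation process a whole bucket of work items at once."""

    def __init__(self, delta):
        self.delta = delta
        self.buckets = defaultdict(list)

    def push(self, item, priority):
        self.buckets[int(priority // self.delta)].append((item, priority))

    def pop(self):
        # Return (item, priority) from the lowest nonempty bucket, or None.
        if not self.buckets:
            return None
        b = min(self.buckets)
        item, prio = self.buckets[b].pop()
        if not self.buckets[b]:
            del self.buckets[b]
        return item, prio
```

A larger delta exposes more parallelism (bigger buckets) at the cost of schedule quality; a smaller delta approaches a strict priority queue. Tuning that trade-off is, in spirit, what separates a high-quality scheduler from the simple worklists of earlier GPU SSSP solutions.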
    Slickwater fracturing has become one of the most widely used completion technologies for unlocking hydrocarbons in unconventional reservoirs. In slickwater treatments, proppant transport is a major concern because low-viscosity fluids are inefficient at suspending the particles. Many studies have been devoted to proppant transport, both experimentally and numerically, but only a few have focused on proppant pumping schedules in slickwater fracturing, and the impact of proppant schedules on well production remains unclear. The goal of our work is to simulate proppant transport under real pumping schedules (multisize proppants and varying concentration) at the field scale and to quantitatively evaluate the effects of proppant schedules on well production for slickwater fracturing. The workflow consists of three steps. First, a validated 3D multiphase particle-in-cell (MP-PIC) model is used to simulate proppant transport under real pumping schedules in a field-scale fracture (180-m length, 30-m height). Second, we apply a propped-fracture conductivity model to calculate the distribution of propped fracture width, permeability, and fracture conductivity. In the last step, we incorporate the fracture geometry, the propped fracture conductivity, and the estimated unpropped fracture conductivity into a reservoir simulation model to predict gas production. Based on field designs of pumping schedules in slickwater treatments, we generated four proppant schedules, in which 100-mesh and 40/70-mesh proppants were loaded successively in stair-stepped and incremental stages. The first three schedules were used to study the effects of the mass percentages of the multisize proppants: from Schedule 1 through Schedule 3, the mass percentage of 100-mesh proppants is 30, 50, and 70%, respectively. Schedule 4 has the same proppant percentage as Schedule 2 but adds a flush stage after slurry injection.
The comparison between Schedules 2 and 4 enables us to evaluate the effect of the flush stage on well production. The results indicate that the proppant schedule has a significant influence on treatment performance. The schedule with a higher percentage of 100-mesh proppants has a longer proppant transport distance and a larger propped fracture area, but a lower propped fracture conductivity. The reservoir simulation results then show that neither a small nor a large percentage of 100-mesh proppants maximizes well production, because of the correspondingly small propped area or low propped fracture conductivity. Schedule 2, with a median percentage (50%) of 100-mesh proppants, has the highest 1,000-day cumulative gas production. For Schedule 4, the flush stage increases gas production by 8.2% because it yields a longer and more uniform proppant bed along the fracture. In this paper, for the first time, we provide both a qualitative explanation and a quantitative evaluation of the impact of proppant pumping schedules on the performance of slickwater treatments at the field scale, using an integrated numerical simulation workflow that offers crucial insights for the design of proppant schedules in field slickwater treatments.
  5. The ability to accurately estimate job runtime properties allows a scheduler to schedule jobs effectively. State-of-the-art online cluster job schedulers use history-based learning, which uses past job execution information to estimate the runtime properties of newly arrived jobs. However, with fast-paced development in cluster technology (in both hardware and software) and changing user inputs, job runtime properties can change over time, which leads to inaccurate predictions. In this paper, we explore the potential and limitations of real-time learning of job runtime properties, by proactively sampling and scheduling a small fraction of the tasks of each job. Such a task-sampling-based approach, which we call learning in space (in contrast to history-based learning in time), exploits the similarity among runtime properties of the tasks of the same job and is inherently immune to changing job behavior. Our analytical and experimental analysis of 3 production traces with different skew and job distributions shows that learning in space can be substantially more accurate. Our simulation and testbed evaluation on Azure of the two learning approaches, anchored in a generic job scheduler using 3 production cluster job traces, shows that despite its online overhead, learning in space reduces the average Job Completion Time (JCT) by 1.28×, 1.56×, and 1.32× compared to the prior-art history-based predictor. We further analyze the experimental results to give intuitive explanations of why learning in space outperforms learning in time in these experiments. Finally, we show how sampling-based learning can be extended to schedule DAG jobs and achieve similar speedups over the prior-art history-based predictor.
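The sampling idea above reduces to a simple estimator: run a small random sample of a job's tasks and extrapolate their average runtime to the whole job. This is a simplified stand-in for the abstract's approach (which schedules the sampled tasks within the real cluster scheduler); the sampling fraction, names, and the `task_runner` callback are our own assumptions.

```python
import random

def estimate_job_runtime(task_runner, n_tasks, sample_frac=0.05, min_samples=2):
    """Learning-in-space sketch: execute a random sample of a job's tasks
    and extrapolate the mean task runtime to the whole job. `task_runner`
    maps a task index to its measured runtime (here a stand-in for
    actually running the task on the cluster)."""
    k = max(min_samples, int(n_tasks * sample_frac))
    sample = random.sample(range(n_tasks), min(k, n_tasks))
    avg = sum(task_runner(i) for i in sample) / len(sample)
    return avg * n_tasks  # predicted total work (sum of all task runtimes)
```

Because the estimate comes from the job's own tasks rather than from past jobs, it stays accurate when hardware, software, or inputs drift, which is exactly the failure mode of history-based prediction the abstract identifies.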