Title: Under the Spell of Deadlines
Deadlines are constitutional aspects of research life that the CHI community frequently observes. Despite their importance, deadlines are understudied. Here we bring a mixed art and science perspective on deadlines, which may find broader applications as a starter methodology. In a field study, we monitored four academics at the office, two days before a deadline and two regular days, after the deadline had passed. Based on face video, questionnaire, and interview data we constructed their profiles. We added a dose of fictionalization to these profiles, composing anonymized comic stories that are as humorous as they are enlightening. In the stressful and lonely days towards deadlines, the only common presence in all cases is the researchers’ computer. Accordingly, this work aspires to prompt an effort for a deeper understanding of “deadline users”, in support of designing much needed affective interfaces.
Award ID(s):
1704682
PAR ID:
10284702
Author(s) / Creator(s):
Date Published:
Journal Name:
CHI EA '21: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 to 12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Autonomous vehicles (AVs) must drive across a variety of challenging environments that impose continuously-varying deadlines and runtime-accuracy tradeoffs on their software pipelines. A deadline-driven execution of such AV pipelines requires a new class of systems that enable the computation to maximize accuracy under dynamically-varying deadlines. Designing these systems presents interesting challenges that arise from combining ease-of-development of AV pipelines with deadline specification and enforcement mechanisms. Our work addresses these challenges through D3 (Dynamic Deadline-Driven), a novel execution model that centralizes deadline management and allows applications to adjust their computation by modeling missed deadlines as exceptions. Further, we design and implement ERDOS, an open-source realization of D3 for AV pipelines that exposes fine-grained execution events to applications and provides mechanisms to speculatively execute computation and enforce deadlines between an arbitrary set of events. Finally, we address the crucial lack of AV benchmarks through our state-of-the-art open-source AV pipeline, Pylot, which works seamlessly across simulators and real AVs. We evaluate the efficacy of D3 and ERDOS by driving Pylot across challenging driving scenarios spanning 50 km, and observe a 68% reduction in collisions compared to prior execution models.
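The central mechanism this abstract describes, treating a missed deadline as an exception that the application catches and handles by degrading its computation, can be illustrated with a minimal sketch. This is not the ERDOS or D3 API: DeadlineMissed, run_with_deadline, and the two detector functions below are hypothetical names used only to show the pattern.

```python
# Minimal sketch of the "missed deadline as exception" idea described above.
# Not the ERDOS/D3 API: DeadlineMissed, run_with_deadline, and the detector
# functions are hypothetical names used purely for illustration.
import concurrent.futures
import time


class DeadlineMissed(Exception):
    """Raised when a pipeline stage exceeds its deadline."""


def run_with_deadline(fn, deadline_s, *args):
    """Run fn(*args); raise DeadlineMissed if it does not finish in time."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args)
    try:
        return future.result(timeout=deadline_s)
    except concurrent.futures.TimeoutError:
        raise DeadlineMissed(f"stage missed its {deadline_s * 1000:.0f} ms deadline")
    finally:
        pool.shutdown(wait=False)  # do not block the caller on the slow path


def accurate_detector(frame):
    time.sleep(0.05)   # stand-in for an expensive, accurate model
    return {"objects": ["car", "pedestrian"], "accuracy": "high"}


def fast_detector(frame):
    time.sleep(0.005)  # stand-in for a cheap, lower-accuracy fallback
    return {"objects": ["car"], "accuracy": "low"}


def process_frame(frame, deadline_s):
    # The application adjusts its computation when a deadline is missed,
    # mirroring the exception-based recovery the abstract describes.
    try:
        return run_with_deadline(accurate_detector, deadline_s, frame)
    except DeadlineMissed:
        return fast_detector(frame)


if __name__ == "__main__":
    print(process_frame(frame=None, deadline_s=0.010))  # tight deadline -> fallback result
    print(process_frame(frame=None, deadline_s=0.200))  # loose deadline -> accurate result
```

In a real AV pipeline the runtime, not the application, would detect the miss and could also cancel or speculatively restart the slow computation; the sketch only shows the control flow that the deadline-as-exception idea implies.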
  2. In this paper, we consider the problem of joint offloading and wireless scheduling design for parallel computing applications with hard deadlines. This is motivated by the rapid growth of compute-intensive mobile parallel computing applications (e.g., real-time video analysis, language translation) that must be processed within a hard deadline. While there are many works on joint computing and communication algorithm design, most of them focus on minimizing average computing time and may not be applicable to mobile applications with hard deadlines. In this work, we explicitly take hard deadlines for computing tasks into account and develop a joint offloading and scheduling algorithm based on the stochastic network optimization framework. The proposed algorithm is shown to achieve average energy consumption arbitrarily close to the optimum. However, this algorithm involves a strong coupling between offloading and scheduling decisions, which poses significant challenges for its implementation. Towards this end, we first decouple the offloading and scheduling decisions in the case of a one-slot deadline by exploiting the intrinsic structure of the proposed algorithm. Based on this, we further implement the proposed algorithm in general setups. Simulations are provided to corroborate our findings.
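A toy illustration of the kind of per-slot decision rule that stochastic network optimization (drift-plus-penalty) algorithms produce for tasks with a one-slot hard deadline appears below: every slot, pick offloading or local computation so the task finishes in time at the lowest weighted cost. The energy model, the weight V, and the backlog term are generic textbook ingredients, not the paper's algorithm.

```python
# Toy per-slot offload-or-compute decision for a task with a one-slot hard
# deadline. The energy model, weight V, and backlog term are generic
# drift-plus-penalty ingredients, not the specific algorithm from the paper.
from dataclasses import dataclass


@dataclass
class Task:
    cycles: float  # CPU cycles the task needs
    bits: float    # input bits to transmit if offloaded


def local_energy(task, kappa=1e-27, cpu_hz=2e9):
    # Common dynamic-power model: energy ~= kappa * cycles * f^2.
    return kappa * task.cycles * cpu_hz ** 2


def offload_energy(task, tx_power_w=0.5, rate_bps=5e6):
    # Energy spent transmitting the task's input over the wireless link.
    return tx_power_w * task.bits / rate_bps


def decide(task, queue_backlog, V=1.0, cpu_hz=2e9, rate_bps=5e6, slot_s=0.01):
    """Return 'local' or 'offload' for the current slot.

    Only options that finish within the slot (the hard deadline) are feasible;
    among those, minimize V * energy plus a backlog-weighted congestion term.
    """
    costs = {}
    if task.cycles / cpu_hz <= slot_s:
        costs["local"] = V * local_energy(task, cpu_hz=cpu_hz)
    if task.bits / rate_bps <= slot_s:
        costs["offload"] = V * offload_energy(task, rate_bps=rate_bps) + queue_backlog * task.bits
    if not costs:
        raise ValueError("task cannot meet its one-slot deadline either way")
    return min(costs, key=costs.get)


if __name__ == "__main__":
    t = Task(cycles=5e6, bits=2e4)
    print(decide(t, queue_backlog=0.0))    # empty wireless queue: 'offload' is cheaper
    print(decide(t, queue_backlog=1e-3))   # a backlogged link tips the choice to 'local'
```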
  3. Lingua Franca is a programming paradigm that eases the development of distributed cyber-physical systems and ensures determinism. These systems are subject to stringent timing constraints, generally expressed as task deadlines, and meeting them requires real-time scheduling. This work presents a layered scheduling strategy for Lingua Franca that builds on any priority-based operating system thread scheduler to enhance real-time performance. Application designers need to specify only the application-specific deadlines, and the Lingua Franca runtime automatically converts them into appropriate priority values for the OS scheduler to obtain earliest-deadline-first scheduling.
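The conversion this abstract describes, turning per-task deadlines into priorities for a priority-based OS thread scheduler so that earlier deadlines run first, can be sketched as a simple rank-ordering. This is not the Lingua Franca runtime's actual mechanism; the priority band (1 to 99, as in Linux SCHED_FIFO) and the tie handling are assumptions made for illustration.

```python
# Minimal sketch: rank-order relative deadlines into a finite priority band so
# that a priority-based OS scheduler produces EDF-like behavior. Not the
# Lingua Franca runtime's actual conversion; band and tie handling are assumed.

def deadlines_to_priorities(deadlines_ns, max_priority=99, min_priority=1):
    """Map each task's relative deadline (ns) to an integer priority.

    Shorter deadline -> higher priority. Equal deadlines share a priority
    level, which preserves EDF ordering while using fewer levels.
    """
    distinct = sorted(set(deadlines_ns.values()))  # earliest deadline first
    if len(distinct) > max_priority - min_priority + 1:
        raise ValueError("more distinct deadlines than available priority levels")
    level = {d: max_priority - i for i, d in enumerate(distinct)}
    return {task_id: level[d] for task_id, d in deadlines_ns.items()}


if __name__ == "__main__":
    reaction_deadlines = {"brake": 2_000_000, "plan": 10_000_000, "log": 50_000_000}
    print(deadlines_to_priorities(reaction_deadlines))
    # {'brake': 99, 'plan': 98, 'log': 97}: the tightest deadline gets the
    # highest priority, which the OS thread scheduler then enforces.
```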
  4. Data movement latency when using on-chip accelerators in emerging heterogeneous architectures is a serious performance bottleneck. While hardware/software mechanisms such as peer-to-peer DMA between producer/consumer accelerators allow bypassing main memory and significantly reduce main memory contention, schedulers in both the hardware and software domains remain oblivious to their presence. Instead, most contemporary schedulers tend to be deadline-driven, with improved utilization and/or throughput serving as secondary or co-primary goals. This lack of focus on data communication will only worsen execution times as accelerator latencies decrease. In this paper, we present RELIEF (RElaxing Least-laxIty to Enable Forwarding), an online, least-laxity-driven accelerator scheduling policy that relieves memory pressure in accelerator-rich architectures via data-movement-aware scheduling. RELIEF leverages laxity (time margin to a deadline) to opportunistically utilize available hardware data forwarding mechanisms while minimizing quality-of-service (QoS) degradation and unfairness. RELIEF achieves up to 50% more forwards compared to state-of-the-art policies, reducing main memory traffic and energy consumption by up to 32% and 18%, respectively. At the same time, RELIEF meets 14% more task deadlines on average and reduces the worst-case deadline violation by 14%, highlighting QoS and fairness improvements.
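Laxity, the time margin to a deadline mentioned above, is the slack such a scheduler can spend waiting for a direct producer-to-consumer forwarding opportunity instead of going through main memory. The sketch below only illustrates that trade-off; the AccelTask fields and the decision rule are illustrative assumptions, not the RELIEF policy itself.

```python
# Sketch of laxity-based reasoning: laxity is the slack a task has before it
# must start to still meet its deadline, and that slack can be spent waiting
# for a hardware forwarding opportunity instead of a DRAM fetch. The fields
# and the rule are illustrative assumptions, not the RELIEF policy.
from dataclasses import dataclass


@dataclass
class AccelTask:
    name: str
    deadline: float          # absolute deadline (s)
    remaining_exec: float    # remaining execution time on the accelerator (s)


def laxity(task, now):
    """Time margin to the deadline after accounting for remaining work."""
    return task.deadline - now - task.remaining_exec


def can_wait_for_forwarding(task, now, forward_ready_in, dram_fetch_time):
    """Wait for direct forwarding only if the task's laxity covers the wait
    and forwarding is not slower than simply fetching from main memory."""
    return forward_ready_in <= laxity(task, now) and forward_ready_in < dram_fetch_time


if __name__ == "__main__":
    t = AccelTask("decode", deadline=0.030, remaining_exec=0.010)
    print(laxity(t, now=0.005))                        # 0.015 s of slack
    print(can_wait_for_forwarding(t, now=0.005,
                                  forward_ready_in=0.004,
                                  dram_fetch_time=0.008))  # True: relax, then forward
```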
  5. Modern latency-sensitive and real-time systems often use multi-core platforms; thus, tasks on different cores share certain hardware resources, such as the memory bus and certain cache levels. This has two undesirable consequences: (1) tasks can interfere with each other, causing high latency for the system as a whole, and (2) it becomes difficult to meet deadlines, since the worst-case timing of a given task depends on the worst task it might have to compete with. Static partitioning isolates tasks from each other by allocating a certain fraction of the resources to each; however, many tasks execute in different phases (e.g., memory-intensive and CPU-intensive) that have different requirements. Thus, system designers are left with a choice between overprovisioning, based on the most demanding phase, or suboptimal performance. In this paper, we propose a pair of techniques, called DNA and DADNA, to address the above challenge. DNA increases throughput and decreases latency by building an execution profile of each task to identify the phases and then dynamically allocating resources based on which task can benefit the most; DADNA further adds support for soft real-time workloads by taking deadlines into account. We have built a prototype of both techniques in the Xen hypervisor; our experimental results show that, compared to a state-of-the-art solution, DNA and DADNA can substantially improve schedulability, reduce job deadline miss ratios, and cut latencies by more than a factor of two even in extremely overloaded situations.
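A minimal sketch of the phase-aware allocation idea follows: rather than statically partitioning a shared resource, consult each task's current phase and give the marginal unit to whichever task benefits most right now. The profile numbers, the diminishing-returns model, and the greedy loop are illustrative assumptions, not the DNA/DADNA mechanisms implemented in the Xen prototype.

```python
# Toy sketch of phase-aware dynamic allocation: give each unit of a shared
# resource (here, "bandwidth") to the task with the highest marginal benefit
# in its *current* phase. Profile values, diminishing-returns model, and the
# greedy rule are illustrative assumptions, not the DNA/DADNA mechanisms.

# Per-phase "benefit per extra bandwidth unit", standing in for the execution
# profile that would be built by observing each task.
PROFILE = {
    "taskA": {"memory": 5.0, "cpu": 0.5},
    "taskB": {"memory": 4.0, "cpu": 1.0},
}


def allocate_bandwidth(current_phase, total_units=10):
    """Greedily assign bandwidth units based on each task's current phase."""
    shares = {task: 0 for task in current_phase}

    def marginal(task):
        base = PROFILE[task][current_phase[task]]
        return base / (1 + shares[task])  # diminishing returns per extra unit

    for _ in range(total_units):
        best = max(shares, key=marginal)
        shares[best] += 1
    return shares


if __name__ == "__main__":
    # taskA is in a memory-intensive phase, taskB is CPU-bound: taskA gets
    # the lion's share of the bandwidth for now.
    print(allocate_bandwidth({"taskA": "memory", "taskB": "cpu"}))
    # When the phases flip, so does the allocation; no static split is needed.
    print(allocate_bandwidth({"taskA": "cpu", "taskB": "memory"}))
```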