Title: Ultra-Low-Power Mode for Screenless Mobile Interaction
Smartphones are now a central technology in the daily lives of billions, but they rely on their batteries to perform. Battery optimization is therefore a crucial design constraint in any mobile OS and device. However, even with new low-power methods, the ever-growing touchscreen remains the most power-hungry component. We propose an Ultra-Low-Power Mode (ULPM) for mobile devices that allows touch interaction without visual feedback and exhibits significant power savings of up to 60% while still allowing users to complete interactive tasks. We demonstrate the effectiveness of the screenless ULPM in text-entry tasks, camera usage, and listening to videos, showing only a small decrease in usability for typical users.
Award ID(s):
1717973
PAR ID:
10099662
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
31st Annual ACM Symposium on User Interface Software and Technology
Page Range / eLocation ID:
557--568
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile sequencing technologies, including Oxford Nanopore’s MinION, Mk1C, and SmidgION, are bringing genomics into the palm of a hand, opening unprecedented new opportunities in clinical and ecological research and translational applications. While sequencers now need only a USB outlet and provide on-board preprocessing (e.g., base calling), the main data analysis phases are tied to an available broadband Internet connection and cloud computing. Yet the ubiquity of tablets and smartphones, along with their increase in computational power, makes them a perfect candidate for enabling mobile/edge bioinformatics analytics. Moreover, in on-site experimental settings, tablets and smartphones are preferable to standard computers due to their resilience to humidity or spills and their ease of sterilization. Here we present an experimental study on power dissipation, aiming at reducing the battery consumption that currently impedes the execution of intensive bioinformatics analytics pipelines. In particular, we investigated the effects of assorted data structures (including hash tables, vectors, balanced trees, and tries) employed in some of the most common tasks of a bioinformatics pipeline: k-mer representation and counting. By employing a thermal camera, we show how different k-mer-handling data structures impact the power dissipation on a smartphone, finding that a cache-oblivious data structure reduces power dissipation (up to 26% better than the others). In conclusion, the choice of data structures in mobile bioinformatics must consider not only computing efficiency (e.g., succinct data structures to reduce RAM usage), but also the power consumption of mobile devices that heavily rely on batteries in order to function.
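The task the abstract benchmarks, k-mer counting, can be illustrated with the simplest of the compared structures, a hash table. A minimal sketch in Python (function name and example are ours, not from the paper):

```python
from collections import defaultdict

def count_kmers(seq, k):
    """Count all k-mers of `seq` in a hash table, one of the
    data structures whose power profiles the study compares."""
    counts = defaultdict(int)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return dict(counts)

# 3-mers of a short read
print(count_kmers("ACGTACGT", 3))  # {'ACG': 2, 'CGT': 2, 'GTA': 1, 'TAC': 1}
```

The study's point is that structures with the same asymptotic cost (tries, balanced trees, cache-oblivious layouts) can differ markedly in power draw on a phone.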
  2. Any future mobile electronic device with which a user interacts (smartphone, hand-held game console) should not pollute our planet. Consequently, designers need to rethink how to build mobile devices with fewer components that negatively impact the environment (by replacing batteries with energy harvesting sources) while not compromising the user experience quality. This article addresses the challenges of battery-free mobile interaction and presents the first battery-free, personal mobile gaming device powered by energy harvested from gamer actions and sunlight. Our design implements a power failure resilient Nintendo Game Boy emulator that can run off-the-shelf classic Game Boy games like Tetris or Super Mario Land. Beyond a fun toy, our design represents the first battery-free system design for continuous user attention despite frequent power failures caused by intermittent energy harvesting. 
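The core mechanism behind power-failure resilience is checkpointing program state to non-volatile memory so that play resumes where it stopped after each energy outage. A toy sketch of the idea, assuming a file standing in for non-volatile storage (names are illustrative, not the authors' implementation):

```python
import json
import os

STATE_FILE = "checkpoint.json"  # stands in for non-volatile memory (e.g., FRAM)

def checkpoint(state):
    """Persist emulator state before harvested energy runs out."""
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def restore(default):
    """On reboot after a power failure, resume from the last checkpoint."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return default

checkpoint({"frame": 1042, "score": 300})
print(restore({"frame": 0, "score": 0}))  # {'frame': 1042, 'score': 300}
```

The engineering challenge the paper addresses is doing this fast and often enough that frequent, unpredictable power failures stay invisible to the player.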
  3. Energy-efficient visual sensing is of paramount importance to enable battery-backed low-power IoT and mobile applications. Unfortunately, modern image sensors still consume hundreds of milliwatts of power, mainly due to analog readout. This is because current systems always supply a fixed voltage to the sensor’s analog circuitry, leading to higher power profiles. In this work, we propose to aggressively scale the analog voltage supplied to the camera as a means to significantly reduce sensor power consumption. To that end, we characterize the power and fidelity implications of analog voltage scaling on three off-the-shelf image sensors. Our characterization reveals that analog voltage scaling reduces sensor power but also degrades image quality, and that the degradation in image quality situationally affects the task accuracy of vision applications. We develop a visual streaming pipeline that allows application developers to flexibly adapt sensor voltage on a frame-by-frame basis, together with a voltage controller that programmatically generates the desired sensor voltage based on application requests. We integrate our voltage controller into an existing RPi-based video streaming IoT pipeline and, on top of this, develop runtime support for flexible voltage specification from vision applications. Evaluating the system over a wide range of voltage scaling policies on popular vision tasks reveals that Squint imaging can deliver up to 73% sensor power savings while maintaining reasonable task fidelity. Our artifacts are available at: https://gitlab.com/squint1/squint-ae-public
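A frame-by-frame voltage scaling policy of the kind the pipeline supports can be sketched as a simple feedback rule: lower the analog voltage while the vision task stays accurate enough, and raise it back when accuracy dips below the application's floor. All function names, voltages, and thresholds below are illustrative, not the paper's controller:

```python
def choose_voltage(task_accuracy, accuracy_floor, v, v_min=1.8, v_max=2.8, step=0.1):
    """Toy per-frame policy: trade image fidelity for sensor power by
    scaling the analog supply voltage within [v_min, v_max]."""
    if task_accuracy < accuracy_floor:
        return min(round(v + step, 2), v_max)  # accuracy too low: restore fidelity
    return max(round(v - step, 2), v_min)      # accuracy holds: bank power savings
```

A real controller would also account for scene dynamics, since the paper finds the accuracy impact of voltage scaling is situational.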
  4. Autonomous Mobile Robots (AMRs) rely on rechargeable batteries to execute several objective tasks during navigation. Previous research has focused on minimizing task downtime by coordinating task allocation and/or charge scheduling across multiple AMRs. However, these approaches do not jointly ensure low task downtime and high-quality battery life. In this paper, we present TCM, a Task allocation and Charging Manager for AMR fleets. TCM allocates objective tasks to AMRs and schedules their charging times at the available charging stations for minimized task downtime and maximized quality of life for the AMRs' batteries. We formulate the TCM problem as an MINLP problem and propose a polynomial-time, multi-period TCM greedy algorithm that periodically adapts its decisions for high robustness to energy modeling errors. We experimentally show that, compared to the MINLP implementation in the Gurobi solver, the designed algorithm provides solutions with a performance ratio of 1.15 at a fraction of the execution time. Furthermore, compared to representative baselines that only focus on task downtime, TCM achieves similar task allocation results while providing much higher battery quality of life.
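The shape of one period of such a greedy pass can be sketched as: robots below a charge threshold go to a charging station, and the rest take tasks in order of remaining charge. This is a toy illustration in the spirit of a multi-period greedy algorithm, not TCM's actual objective or constraints (the threshold and tie-breaking are assumptions):

```python
def greedy_step(tasks, charge, threshold=0.3):
    """One toy scheduling period: low-charge robots recharge, the rest
    take tasks greedily by remaining charge. `charge` maps robot -> [0, 1]."""
    charging = sorted(r for r, c in charge.items() if c < threshold)
    idle = sorted((r for r, c in charge.items() if c >= threshold),
                  key=lambda r: -charge[r])
    assignments = dict(zip(tasks, idle))  # unassigned tasks wait for next period
    return assignments, charging
```

Re-running such a step every period is what gives the approach its robustness to energy modeling errors: decisions are continually revised against observed battery state.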
  5. There is an increasing emphasis on securing deep learning (DL) inference pipelines for mobile and IoT applications with privacy-sensitive data. Prior works have shown that privacy-sensitive data can be secured throughout deep learning inference on cloud-offloaded models through trusted execution environments such as Intel SGX. However, prior solutions do not address the fundamental challenges of securing the resource-intensive inference tasks on low-power, low-memory devices (e.g., mobile and IoT devices) while achieving high performance. To tackle these challenges, we propose SecDeep, a low-power DL inference framework demonstrating that both security and performance of deep learning inference on edge devices are well within our reach. Leveraging TEEs with limited resources, SecDeep guarantees full confidentiality for input and intermediate data, as well as the integrity of the deep learning model and framework. By enabling and securing neural accelerators, SecDeep is the first of its kind to provide trusted and performant DL model inferencing on IoT and mobile devices. We implement and validate SecDeep by interfacing the ARM NN DL framework with ARM TrustZone. Our evaluation shows that we can securely run inference tasks 16× to 172× faster than non-accelerated approaches by leveraging edge-available accelerators.