
Title: Towards High-Quality Battery Life for Autonomous Mobile Robot Fleets
Autonomous Mobile Robots (AMRs) rely on rechargeable batteries to execute several objective tasks during navigation. Previous research has focused on minimizing task downtime by coordinating task allocation and/or charge scheduling across multiple AMRs. However, these approaches do not jointly ensure low task downtime and high-quality battery life. In this paper, we present TCM, a Task allocation and Charging Manager for AMR fleets. TCM allocates objective tasks to AMRs and schedules their charging times at the available charging stations for minimized task downtime and maximized quality of life for the AMRs' batteries. We formulate the TCM problem as a mixed-integer nonlinear programming (MINLP) problem and propose a polynomial-time, multi-period TCM greedy algorithm that periodically adapts its decisions for high robustness to energy-modeling errors. We experimentally show that, compared to the MINLP implementation in the Gurobi solver, the designed algorithm provides solutions with a performance ratio of 1.15 at a fraction of the execution time. Furthermore, compared to representative baselines that focus only on task downtime, TCM achieves similar task allocation results while providing much higher battery quality of life.
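The abstract does not include pseudocode; the sketch below is only an illustration of what one decision period of a multi-period greedy task-allocation and charge-scheduling loop could look like. All names (Robot, schedule_period), the state-of-charge threshold, and the scoring rule are assumptions for illustration rather than the authors' actual TCM algorithm; keeping the state of charge above a low band is a common battery-health heuristic, standing in here for the paper's battery quality-of-life objective.

```python
# Illustrative sketch only: the paper's actual TCM greedy algorithm may
# differ. Names, thresholds, and the scoring rule are assumptions.
from dataclasses import dataclass

@dataclass
class Robot:
    soc: float           # state of charge, 0.0-1.0
    busy: bool = False

def schedule_period(robots, tasks, chargers, soc_low=0.2):
    """One decision period: send low-SoC robots to charge (up to the
    number of available charging stations), then greedily assign tasks
    to the remaining idle robots."""
    decisions, free_chargers = {}, chargers
    idle = [r for r in robots if not r.busy]
    # Charge first: most-depleted robots get the available stations.
    for r in sorted(idle, key=lambda r: r.soc):
        if r.soc < soc_low and free_chargers > 0:
            decisions[id(r)] = "charge"
            free_chargers -= 1
    idle = [r for r in idle if id(r) not in decisions]
    # Then assign tasks greedily, cheapest (in energy) first; task
    # energy is expressed as a fraction of battery capacity here.
    for task in sorted(tasks, key=lambda t: t["energy"]):
        cands = [r for r in idle if r.soc - task["energy"] >= soc_low]
        if not cands:
            continue  # defer this task to a later period
        best = max(cands, key=lambda r: r.soc)  # most-charged robot
        decisions[id(best)] = ("task", task["name"])
        idle.remove(best)
    return decisions
```

Because such a loop is re-run every period with fresh state-of-charge readings, earlier decisions are naturally revisited, which is the kind of periodic adaptation the abstract credits for robustness to energy-modeling errors.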
Authors:
Award ID(s):
1948365
Publication Date:
NSF-PAR ID:
10379472
Journal Name:
2022 IEEE International Conference on Autonomic Computing and Self-Organizing Systems (ACSOS)
Page Range or eLocation-ID:
61 to 70
Sponsoring Org:
National Science Foundation
More Like this
  1. Autonomous mobile robots (AMRs) have been widely utilized in industry to execute various on-board computer-vision applications, including autonomous guidance, security patrol, object detection, and face recognition. Most of the applications executed by an AMR involve the analysis of camera images through trained machine learning models. Many research studies on machine learning focus either on performance without considering energy efficiency or on techniques such as pruning and compression to make the model more energy-efficient. However, most previous work does not study the root causes of energy inefficiency in the execution of these applications on AMRs. The computing stack on an AMR accounts for 33% of the total energy consumption and can thus highly impact the battery life of the robot. Because recharging an AMR may disrupt the application execution, it is important to efficiently utilize the available energy for maximized battery life. In this paper, we first analyze the breakdown of power dissipation for the execution of computer-vision applications on AMRs and discover three main root causes of energy inefficiency: uncoordinated access to sensor data, performance-oriented model inference execution, and uncoordinated execution of concurrent jobs. To fix these three inefficiencies, we propose E2M, an energy-efficient middleware software stack for autonomous mobile robots. First, E2M regulates the access of different processes to sensor data, e.g., camera frames, so that the amount of data actually captured by concurrently executing jobs can be minimized. Second, based on a predefined per-process performance metric (e.g., safety, accuracy) and desired target, E2M manipulates the process execution period to find the best energy-performance trade-off. Third, E2M coordinates the execution of the concurrent processes to maximize the total contiguous sleep time of the computing hardware for maximized energy savings. We have implemented a prototype of E2M on a real-world AMR. Our experimental results show that, compared to several baselines, E2M leads to 24% energy savings for the computing platform, which translates into an extra 11.5% of battery time and 14 extra minutes of robot runtime, with a performance degradation lower than 7.9% for safety and 1.84% for accuracy.
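As a concrete illustration of the third mechanism (maximizing contiguous sleep time by coordinating concurrent jobs), the sketch below aligns periodic jobs to harmonics of a base period so that their wake-ups cluster together. This is a common co-scheduling trick and only a stand-in: E2M's actual coordination also honors per-process performance targets, which this toy version omits.

```python
import math

def harmonize_periods(desired_periods_ms, base_ms=100):
    """Round each desired period down to the nearest base_ms * 2**k
    (never below base_ms), so all jobs wake together at multiples of
    base_ms and the hardware idles in long contiguous stretches."""
    harmonized = []
    for p in desired_periods_ms:
        k = max(0, math.floor(math.log2(p / base_ms)))
        harmonized.append(base_ms * 2 ** k)
    return harmonized

print(harmonize_periods([130, 250, 900]))  # -> [100, 200, 800]
```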
  2. Background: Mobile health technology has demonstrated the ability of smartphone apps and sensors to collect data pertaining to patient activity, behavior, and cognition. It also offers the opportunity to understand how everyday passive mobile metrics such as battery life and screen time relate to mental health outcomes through continuous sensing. Impulsivity is an underlying factor in numerous physical and mental health problems. However, few studies have been designed to help us understand how mobile sensors and self-report data can improve our understanding of impulsive behavior. Objective: The objective of this study was to explore the feasibility of using mobile sensor data to detect and monitor self-reported state impulsivity and impulsive behavior passively via a cross-platform mobile sensing application. Methods: We enrolled 26 participants who were part of a larger study of impulsivity to take part in a real-world, continuous mobile sensing study over 21 days on both Apple operating system (iOS) and Android platforms. The mobile sensing system (mPulse) collected data from call logs, battery charging, and screen checking. To validate the model, we used mobile sensing features to predict common self-reported impulsivity traits, objective mobile behavioral and cognitive measures, and ecological momentary assessment (EMA) of state impulsivity and constructs related to impulsive behavior (i.e., risk-taking, attention, and affect). Results: Overall, the findings suggested that passive measures of mobile phone use such as call logs, battery charging, and screen checking can predict different facets of trait and state impulsivity and impulsive behavior. For impulsivity traits, the models significantly explained variance in sensation seeking, planning, and lack of perseverance traits but failed to explain motor, urgency, lack of premeditation, and attention traits. Passive sensing features from call logs, battery charging, and screen checking were particularly useful in explaining and predicting trait-based sensation seeking. On a daily level, the model successfully predicted objective behavioral measures such as present bias in delay discounting tasks, commission and omission errors in a cognitive attention task, and total gains in a risk-taking task. Our models also predicted daily EMA questions on positivity, stress, productivity, healthiness, and emotion and affect. Perhaps most intriguingly, the model failed to predict daily EMA designed to measure previous-day impulsivity using face-valid questions. Conclusions: The study demonstrated the potential for developing trait and state impulsivity phenotypes and detecting impulsive behavior from everyday mobile phone sensors. Limitations of the current research and suggestions for building more precise passive sensing models are discussed. Trial Registration: ClinicalTrials.gov NCT03006653; https://clinicaltrials.gov/ct2/show/NCT03006653
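As an illustration of the kind of model such a study evaluates, the sketch below regresses a trait score on passive sensing features (daily call counts, charging events, screen checks) with cross-validation. The feature set, the ridge model, and the synthetic data are assumptions for illustration; the study's actual mPulse pipeline is not described in this abstract.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 26  # participants, matching the study's sample size
# Synthetic passive features: daily call count, charging events,
# screen checks (per-participant daily averages).
X = rng.poisson(lam=[12.0, 2.0, 60.0], size=(n, 3)).astype(float)
# Synthetic sensation-seeking scores, loosely tied to screen checking.
y = 0.05 * X[:, 2] + rng.normal(0.0, 1.0, n)

scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```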
  3. We consider a large-scale service system where incoming tasks have to be instantaneously dispatched to one out of many parallel server pools. The user-perceived performance degrades with the number of concurrent tasks, and the dispatcher aims at maximizing the overall quality of service by balancing the load through a simple threshold policy. We demonstrate that such a policy is optimal on the fluid and diffusion scales, while only involving a small communication overhead, which is crucial for large-scale deployments. In order to set the threshold optimally, it is important, however, to learn the load of the system, which may be unknown. For that purpose, we design a control rule for tuning the threshold in an online manner. We derive conditions that guarantee that this adaptive threshold settles at the optimal value, along with estimates for the time until this happens. In addition, we provide numerical experiments that support the theoretical results and further indicate that our policy copes effectively with time-varying demand patterns. Summary of Contribution: Data centers and cloud computing platforms are the digital factories of the world, and managing resources and workloads in these systems involves operations research challenges of an unprecedented scale. Due to the massive size, complex dynamics, and wide range of time scales, the design and implementation of optimal resource-allocation strategies is prohibitively demanding from a computation and communication perspective. These resource-allocation strategies are essential for certain interactive applications, for which the available computing resources need to be distributed optimally among users in order to provide the best overall experienced performance. This is the subject of the present article, which considers the problem of distributing tasks among the various server pools of a large-scale service system, with the objective of optimizing the overall quality of service provided to users. A solution to this load-balancing problem cannot rely on maintaining complete state information at the gateway of the system, since this is computationally infeasible due to the magnitude and complexity of modern data centers and cloud computing platforms. Therefore, we examine a computationally light load-balancing algorithm that is nevertheless asymptotically optimal in a regime where the size of the system approaches infinity. The analysis is based on a Markovian stochastic model, which is studied through fluid and diffusion limits in the aforementioned large-scale regime. The article analyzes the load-balancing algorithm theoretically and provides numerical experiments that support and extend the theoretical results.
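A minimal sketch of the two ingredients described above, a threshold dispatching rule plus an online rule that tunes the threshold, might look as follows. The adaptation rule here is a deliberately simplified stand-in for the control rule analyzed in the article.

```python
import random

def dispatch(pools, threshold):
    """Threshold policy: send the task to any pool with fewer than
    `threshold` concurrent tasks; fall back to the least-loaded pool."""
    below = [i for i, load in enumerate(pools) if load < threshold]
    if below:
        return random.choice(below)
    return min(range(len(pools)), key=lambda i: pools[i])

def adapt_threshold(threshold, pools, target=0.8, step=1):
    """Toy online control rule: raise the threshold when most pools sit
    at or above it (load higher than assumed), lower it when most are
    comfortably below (load lower than assumed)."""
    frac_full = sum(load >= threshold for load in pools) / len(pools)
    if frac_full > target:
        return threshold + step
    if frac_full < 1 - target and threshold > step:
        return threshold - step
    return threshold

# Tiny simulation: arrivals, random service completions, online tuning.
pools, threshold = [0] * 10, 3
for _ in range(1000):
    pools[dispatch(pools, threshold)] += 1
    for j in range(len(pools)):
        if pools[j] > 0 and random.random() < 0.25:
            pools[j] -= 1
    threshold = adapt_threshold(threshold, pools)
print("settled threshold:", threshold)
```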
  4. The lithium-ion (Li-ion) battery has become the dominant energy storage solution in many applications, such as hybrid electric and electric vehicles, due to its higher energy density and longer life cycle. For these applications, the battery should perform reliably and pose no safety threats. However, the performance of Li-ion batteries can be affected by abnormal thermal behaviors, defined as faults. It is essential to develop a reliable thermal management system to accurately predict and monitor the thermal behavior of a Li-ion battery. Using first-principle models of batteries, this work presents a stochastic fault detection and diagnosis (FDD) algorithm to identify two particular faults in Li-ion battery cells, using easily measured quantities such as temperatures. In addition, models used for FDD are typically derived from the underlying physical phenomena. To make a model tractable and useful, it is common to make simplifications during the development of the model, which may consequently introduce a mismatch between models and battery cells. Further, FDD algorithms can be affected by uncertainty, which may originate from either intrinsic time-varying phenomena or model calibration with noisy data. A two-step FDD algorithm is developed in this work to correct a model of Li-ion battery cells and to identify faulty operations in a normal operating condition. An iterative optimization problem is proposed to correct the model by incorporating the errors between the measured quantities and model predictions, which is followed by an optimization-based FDD to provide a probabilistic description of the occurrence of possible faults, while taking the uncertainty into account. The two-step stochastic FDD algorithm is shown to be efficient in terms of the fault detection rate for both individual and simultaneous faults in Li-ion batteries, as compared to Monte Carlo (MC) simulations.
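A toy version of the two-step structure, first correcting the model against measurements and then scoring possible faults probabilistically from the remaining residuals, could look like the following. The affine correction and the Gaussian residual test are simplified stand-ins for the paper's iterative optimization and optimization-based FDD.

```python
import numpy as np
from scipy.stats import norm

def correct_model(predicted, measured):
    """Step 1 (stand-in for the paper's iterative optimization): fit an
    affine correction T_meas ~ a*T_pred + b on fault-free data to absorb
    the model-cell mismatch."""
    a, b = np.polyfit(predicted, measured, 1)
    return lambda t: a * t + b

def fault_score(corrected_pred, measured, sigma):
    """Step 2 (stand-in for the optimization-based FDD): residuals that
    sensor noise (std. dev. sigma) cannot plausibly explain yield a
    score near 1; fault-free readings yield a score near 0."""
    residual = np.abs(measured - corrected_pred)
    return 1.0 - 2.0 * (1.0 - norm.cdf(residual / sigma))

t_pred = np.array([25.0, 26.1, 27.3, 28.0])  # model predictions (deg C)
t_meas = np.array([25.6, 26.9, 28.2, 31.5])  # measurements; last is anomalous
fix = correct_model(t_pred[:3], t_meas[:3])  # calibrate on fault-free data
print(fault_score(fix(t_pred), t_meas, sigma=0.3))  # last score ~ 1
```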
  5. Edge computing allows end-user devices to offload heavy computation to nearby edge servers for reduced latency, maximized profit, and/or minimized energy consumption. Data-dependent tasks that analyze locally-acquired sensing data are one of the most common candidates for task offloading in edge computing. As a result, the total latency and network load are affected by the total amount of data transferred from end-user devices to the selected edge servers. Most existing solutions for task allocation in edge computing do not take into consideration that some user tasks may actually operate on the same data items. Making the task allocation algorithm aware of the existing data-sharing characteristics of tasks can help reduce network load at a negligible profit loss by allocating more tasks that share data to the same server. In this paper, we formulate the data sharing-aware task allocation problem, which makes task allocation decisions for maximized profit and minimized network load by taking into account the data-sharing characteristics of tasks. In addition, because the problem is NP-hard, we design the DSTA algorithm, which finds a solution to the problem in polynomial time. We analyze the performance of the proposed algorithm against a state-of-the-art baseline that only maximizes profit. Our extensive analysis shows that DSTA leads to about 8 times lower data load on the network while staying, on average, within 1.03 times of the total profit of the state-of-the-art.
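A minimal sketch of a data sharing-aware greedy allocation in the spirit of DSTA is shown below; the scoring rule, the capacity model, and all names are assumptions for illustration rather than the published algorithm. The key idea it demonstrates is that placement rewards servers already holding a task's data items, so shared data is transferred once instead of repeatedly.

```python
def allocate(tasks, servers, alpha=1.0):
    """Greedy data sharing-aware placement. tasks: dicts with "name",
    "profit", and "data" (a set of data-item ids); servers: dicts with
    "name", "cap" (remaining task slots), and "cached" (data already
    resident). alpha weighs avoided data transfer against raw profit."""
    placement = {}
    for t in sorted(tasks, key=lambda t: -t["profit"]):
        best, best_score = None, float("-inf")
        for s in servers:
            if s["cap"] <= 0:
                continue
            reuse = len(t["data"] & s["cached"])  # items already on server
            score = t["profit"] + alpha * reuse   # sharing-aware score
            if score > best_score:
                best, best_score = s, score
        if best is None:
            continue  # no capacity left anywhere; task stays unallocated
        placement[t["name"]] = best["name"]
        best["cap"] -= 1
        best["cached"] |= t["data"]               # task's data now resident
    return placement

tasks = [{"name": "t1", "profit": 5.0, "data": {"img1", "img2"}},
         {"name": "t2", "profit": 4.0, "data": {"img2"}}]
servers = [{"name": "s1", "cap": 2, "cached": set()},
           {"name": "s2", "cap": 2, "cached": set()}]
print(allocate(tasks, servers))  # t2 follows t1 onto s1 (shares img2)
```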