Title: Demand Characterization of CPS with Conditionally-Enabled Sensors
Characterizing the computational demand of Cyber-Physical Systems (CPS) is critical for guaranteeing that multiple hard real-time tasks can be scheduled on shared resources without missing deadlines. In a CPS involving repetition, such as the industrial automation systems found in chemical process control or robotic manufacturing, the sensors and actuators used as part of the industrial process may be conditionally enabled (and disabled) as a sequence of repeated steps is executed. In robotic manufacturing, for example, these steps may be the movement of a robotic arm through some trajectories followed by activation of end-effector sensors and actuators at the end of each completed motion. The conditional enabling of sensors and actuators produces a sequence of Monotonically Ascending Execution times (MAE), with a lower worst-case execution time (WCET) when the sensors are disabled and a higher WCET when they are enabled. Since these systems may have several predefined steps to follow before the entire sequence repeats, each unique step may result in several consecutive sequences of MAE. The repetition of these unique sequences of MAE results in a repeating WCET sequence. In the absence of an efficient demand characterization technique for repeating WCET sequences composed of subsequences with monotonically increasing execution times, this work proposes a new task model to describe the behavior of real-world systems that generate large repeating WCET sequences with subsequences of monotonically increasing execution times. In comparison to the most applicable existing model, the Generalized Multiframe (GMF) model, an empirically and theoretically faster method for characterizing the demand is provided. The demand characterization algorithm is evaluated through a case study of a robotic arm and a simulation of 10,000 randomly generated tasks where, on average, the proposed approach is 231 and 179 times faster than the state of the art in the case study and the simulation, respectively.
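The abstract does not reproduce the proposed algorithm, but the quantity being characterized can be illustrated. Below is a minimal, brute-force sketch (Python, with made-up frame data) of the maximum execution a cyclic WCET sequence can request in any window of length t, in the spirit of a GMF-style baseline analysis; the paper's contribution is a faster characterization that exploits the monotonically ascending structure, which is not reproduced here.

```python
def max_request(wcets, seps, t):
    """Brute-force maximum execution requested by a cyclic WCET sequence
    in any window of length t (GMF-style baseline; deadlines omitted).

    wcets[i] -- WCET of frame i (illustrative values)
    seps[i]  -- minimum separation between release i and release i+1
    """
    n = len(wcets)
    best = 0
    for start in range(n):              # a worst-case window begins at some release
        demand, elapsed, k = 0, 0, start
        while elapsed <= t:
            demand += wcets[k % n]      # job released inside the window
            best = max(best, demand)
            elapsed += seps[k % n]      # advance to the next release
            k += 1
    return best

# Hypothetical repeating sequence: three steps, each a short run of
# monotonically ascending WCETs that peaks when the sensors are enabled.
wcets = [2, 3, 8,  2, 3, 9,  2, 3, 7]
seps  = [10] * len(wcets)
print(max_request(wcets, seps, 35))     # worst-case demand in a 35-unit window
```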
Award ID(s):
2038609 1724227
NSF-PAR ID:
10314290
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Detection of deception attacks is pivotal to ensure the safe and reliable operation of cyber-physical systems (CPS). Detecting such attacks requires reasoning over time-series sequences and is especially challenging for autonomous vehicles that rely on high-dimensional observations from camera sensors. The paper presents an approach to detect deception attacks in real time utilizing sensor observations, with a special focus on high-dimensional observations. The approach is based on inductive conformal anomaly detection (ICAD) and utilizes a novel generative model, consisting of a variational autoencoder (VAE) and a recurrent neural network (RNN), that learns both spatial and temporal features of the normal dynamic behavior of the system. The model can be used to predict the observations for multiple time steps, and the predictions are then compared with the actual observations to efficiently quantify the nonconformity of a sequence under attack relative to the expected normal behavior, thereby enabling real-time detection of attacks using high-dimensional sequential data. We evaluate the approach empirically using two simulation case studies, an advanced emergency braking system and an autonomous car racing example, as well as a real-world secure water treatment dataset. The experiments show that the proposed method outperforms other detection methods, and in most experiments both false positive and false negative rates are below 10%. Furthermore, execution times measured on both powerful cloud machines and embedded devices are relatively short, thereby enabling real-time detection.

     
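    As a rough illustration of the ICAD machinery this approach builds on, the sketch below turns calibration nonconformity scores (standing in for the VAE+RNN prediction errors) into conformal p-values and accumulates them in a power martingale that raises an alarm when small p-values persist. Parameter values, score distributions, and the martingale form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def icad_alarm(cal_scores, test_scores, eps=0.92, threshold=1e4):
    """Sketch of ICAD-style sequential detection (simplified, illustrative).

    cal_scores  -- nonconformity scores on held-out calibration data,
                   e.g. prediction errors of an observation model
    test_scores -- scores computed online on the monitored sequence
    """
    cal = np.sort(np.asarray(cal_scores))
    log_m = 0.0                                  # log of a power martingale
    for t, alpha in enumerate(test_scores):
        # conformal p-value: how ordinary is this score vs. calibration?
        p = (np.sum(cal >= alpha) + 1.0) / (len(cal) + 1.0)
        log_m += np.log(eps) + (eps - 1.0) * np.log(p)
        if log_m > np.log(threshold):            # persistent small p-values
            return t                             # alarm time step
    return None                                  # no attack detected

# Illustrative run on synthetic scores (not the paper's datasets).
rng = np.random.default_rng(0)
normal = rng.exponential(1.0, 500)
attacked = np.concatenate([rng.exponential(1.0, 50), rng.exponential(5.0, 100)])
print(icad_alarm(normal, attacked))
```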
  2. Creating soft robots with sophisticated, autonomous capabilities requires these systems to possess reliable, on-line proprioception of 3D configuration through integrated soft sensors. We present a framework for predicting a soft robot’s 3D configuration via deep learning using feedback from a soft, proprioceptive sensor skin. Our framework introduces a kirigami-enabled strategy for rapidly sensorizing soft robots using off-the-shelf materials, a general kinematic description for soft robot geometry, and an investigation of neural network designs for predicting soft robot configuration. Even with hysteretic, non-monotonic feedback from the piezoresistive sensors, recurrent neural networks show potential for predicting our new kinematic parameters and, thus, the robot’s configuration. One trained neural network closely predicts steady-state configuration during operation, though complete dynamic behavior is not fully captured. We validate our methods on a trunk-like arm with 12 discrete actuators and 12 proprioceptive sensors. As an essential advance in soft robotic perception, we anticipate our framework will open new avenues towards closed-loop control in soft robotics.
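    A minimal sketch of the kind of recurrent regressor described above, assuming 12 sensor channels in and a small set of per-step kinematic parameters out; the layer sizes, PyTorch stack, and synthetic data are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 12 piezoresistive channels in, a handful of
# kinematic parameters out per time step.
N_SENSORS, N_PARAMS, HIDDEN = 12, 8, 64

class ProprioceptionRNN(nn.Module):
    """Maps a window of raw skin readings to kinematic parameter estimates."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(N_SENSORS, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_PARAMS)

    def forward(self, x):                  # x: (batch, time, N_SENSORS)
        h, _ = self.rnn(x)
        return self.head(h)                # per-step parameter estimates

model = ProprioceptionRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on synthetic data standing in for sensor/ground-truth pairs.
x = torch.randn(32, 100, N_SENSORS)        # hysteretic sensor sequences
y = torch.randn(32, 100, N_PARAMS)         # ground-truth kinematic parameters
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```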
  3.
    Stimuli-responsive hydrogels are candidate building blocks for soft robotic applications due to many of their unique properties, including tunable mechanical properties and biocompatibility. Over the past decade, there has been significant progress in developing soft and biohybrid actuators using naturally occurring and synthetic hydrogels to address the increasing demands for machines capable of interacting with fragile biological systems. Recent advancements in three-dimensional (3D) printing technology, either as a standalone manufacturing process or integrated with traditional fabrication techniques, have enabled the development of hydrogel-based actuators with on-demand geometry and actuation modalities. This mini-review surveys existing research efforts to inspire the development of novel fabrication techniques using hydrogel building blocks and identify potential future directions. In this article, existing 3D fabrication techniques for hydrogel actuators are first examined. Next, existing actuation mechanisms, including pneumatic, hydraulic, ionic, dehydration-rehydration, and cell-powered actuation, are reviewed with their benefits and limitations discussed. Subsequently, the applications of hydrogel-based actuators, including compliant handling of fragile items, micro-swimmers, wearable devices, and origami structures, are described. Finally, challenges in fabricating functional actuators using existing techniques are discussed. 
  4. This paper presents a compliant, underactuated finger for the development of anthropomorphic robotic and prosthetic hands. The finger achieves both flexion/extension and adduction/abduction on the metacarpophalangeal joint, by using two actuators. The design employs moment arm pulleys to drive the tendon laterally and amplify the abduction motion, while also maintaining the flexion motion. Particular emphasis has been given to the analysis of the mechanism. The proposed finger has been fabricated with the hybrid deposition manufacturing technique and the actuation mechanism's efficiency has been validated with experiments that include the computation of the reachable workspace, the assessment of the exerted forces at the fingertip, the demonstration of the feasible motions, and the presentation of the grasping and manipulation capabilities. The proposed mechanism facilitates the collaboration of the two actuators to increase the exerted finger forces. Moreover, the extended workspace allows the execution of dexterous manipulation tasks. 
  5. Hideki Aoyama; Keiichi Shirase (Eds.)
    An integral part of information-centric smart manufacturing is the adaptation of industrial robots to complement human workers in a collaborative manner. While advancement in sensing has enabled real-time monitoring of the workspace, understanding the semantic information in the workspace, such as parts and tools, remains a challenge for seamless robot integration. The resulting lack of adaptivity in dynamic workspaces has limited robots to tasks with pre-defined actions. In this paper, a machine learning-based robotic object detection and grasping method is developed to improve the adaptivity of robots. Specifically, object detection based on the concept of single-shot detection (SSD) and a convolutional neural network (CNN) is investigated to recognize and localize objects in the workspace. Subsequently, the information extracted from object detection, such as the type, position, and orientation of the object, is fed into a multi-layer perceptron (MLP) to generate the desired joint angles of the robotic arm for proper object grasping and handover to the human worker. Network training is guided by the forward kinematics of the robotic arm in a self-supervised manner to mitigate issues such as singularities in the computation. The effectiveness of the developed method is validated on an eDo robotic arm in a human-robot collaborative assembly case study.
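    A minimal sketch of the self-supervised idea described above: a small MLP maps detection features to joint angles, and the loss is computed by pushing the predicted angles through a differentiable forward-kinematics model instead of comparing against joint-angle labels. The planar three-link arm, feature layout, and dimensions here are illustrative assumptions, not the eDo setup from the paper.

```python
import torch
import torch.nn as nn

# Illustrative planar 3-link arm; link lengths are assumptions.
LINKS = torch.tensor([0.21, 0.17, 0.12])

def forward_kinematics(q):
    """End-effector (x, y) of a planar arm for joint angles q: (batch, 3)."""
    angles = torch.cumsum(q, dim=1)                    # absolute link angles
    x = (LINKS * torch.cos(angles)).sum(dim=1)
    y = (LINKS * torch.sin(angles)).sum(dim=1)
    return torch.stack([x, y], dim=1)

# MLP: detection features (object x, y plus orientation/class stand-ins)
# -> joint angles of the arm.
mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

target_xy = forward_kinematics(torch.rand(128, 3))     # reachable grasp points
features = torch.cat([target_xy, torch.randn(128, 4)], dim=1)

# Self-supervised step: no joint-angle labels; the loss closes the loop
# through forward kinematics instead.
opt.zero_grad()
loss = ((forward_kinematics(mlp(features)) - target_xy) ** 2).mean()
loss.backward()
opt.step()
print(float(loss))
```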