
Title: Demand Characterization of CPS with Conditionally-Enabled Sensors
Characterizing the computational demand of Cyber-Physical Systems (CPS) is critical for guaranteeing that multiple hard real-time tasks can be scheduled on shared resources without missing deadlines. In a CPS involving repetition, such as the industrial automation systems found in chemical process control or robotic manufacturing, sensors and actuators used as part of the industrial process may be conditionally enabled (and disabled) as a sequence of repeated steps is executed. In robotic manufacturing, for example, these steps may be the movement of a robotic arm through some trajectories followed by activation of end-effector sensors and actuators at the end of each completed motion. The conditional enabling of sensors and actuators produces a sequence of Monotonically Ascending Execution times (MAE), with lower worst-case execution times (WCETs) when the sensors are disabled and higher WCETs when they are enabled. Since these systems may have several predefined steps to follow before repeating the entire sequence, each unique step may result in several consecutive sequences of MAE. The repetition of these unique sequences of MAE results in a repeating WCET sequence. In the absence of an efficient demand characterization technique for repeating WCET sequences composed of subsequences with monotonically increasing execution times, this work proposes a new task model to describe the behavior of real-world systems which generate large repeating WCET sequences with subsequences of monotonically increasing execution times. In comparison to the most applicable current model, the Generalized Multiframe (GMF) model, an empirically and theoretically faster method for characterizing the demand is provided. The demand characterization algorithm is evaluated through a case study of a robotic arm and a simulation of 10,000 randomly generated tasks where, on average, the proposed approach is 231 and 179 times faster than the state of the art in the case study and simulation, respectively.
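To make the notion of demand over a repeating WCET sequence concrete, the following is a minimal illustrative sketch (not the paper's algorithm, and not the GMF method): given a circularly repeating sequence of frame WCETs, it computes the worst-case total demand over any window of k consecutive frames by brute force. The WCET values and window size are hypothetical examples; the two subsequences in the sample data mimic MAE behavior, where execution time ascends within a step and drops when sensors are disabled at the start of the next step.

```python
def max_window_demand(wcets, k):
    """Worst-case total execution demand over any k consecutive frames,
    treating the WCET sequence as circularly repeating (k <= len(wcets))."""
    n = len(wcets)
    extended = wcets * 2  # unroll one extra period so windows can wrap around
    return max(sum(extended[i:i + k]) for i in range(n))

# Hypothetical repeating WCET sequence with two MAE subsequences:
# [2, 3, 5, 8] (step 1, sensors progressively enabled) then [1, 4, 6, 9] (step 2).
wcet_sequence = [2, 3, 5, 8, 1, 4, 6, 9]

print(max_window_demand(wcet_sequence, 3))  # worst demand over any 3 consecutive frames
```

The brute-force scan over all starting offsets is what makes naive characterization expensive for large sequences; exploiting the monotone structure of the MAE subsequences is precisely what allows the proposed approach to avoid this exhaustive enumeration.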
Award ID(s):
2038609 1724227
Publication Date:
Journal Name:
International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA)
Sponsoring Org:
National Science Foundation
More Like this
  1. Creating soft robots with sophisticated, autonomous capabilities requires these systems to possess reliable, on-line proprioception of 3D configuration through integrated soft sensors. We present a framework for predicting a soft robot’s 3D configuration via deep learning using feedback from a soft, proprioceptive sensor skin. Our framework introduces a kirigami-enabled strategy for rapidly sensorizing soft robots using off-the-shelf materials, a general kinematic description for soft robot geometry, and an investigation of neural network designs for predicting soft robot configuration. Even with hysteretic, non-monotonic feedback from the piezoresistive sensors, recurrent neural networks show potential for predicting our new kinematic parameters and, thus, the robot’s configuration. One trained neural network closely predicts steady-state configuration during operation, though complete dynamic behavior is not fully captured. We validate our methods on a trunk-like arm with 12 discrete actuators and 12 proprioceptive sensors. As an essential advance in soft robotic perception, we anticipate our framework will open new avenues towards closed loop control in soft robotics.
  2. Stimuli-responsive hydrogels are candidate building blocks for soft robotic applications due to many of their unique properties, including tunable mechanical properties and biocompatibility. Over the past decade, there has been significant progress in developing soft and biohybrid actuators using naturally occurring and synthetic hydrogels to address the increasing demands for machines capable of interacting with fragile biological systems. Recent advancements in three-dimensional (3D) printing technology, either as a standalone manufacturing process or integrated with traditional fabrication techniques, have enabled the development of hydrogel-based actuators with on-demand geometry and actuation modalities. This mini-review surveys existing research efforts to inspire the development of novel fabrication techniques using hydrogel building blocks and identify potential future directions. In this article, existing 3D fabrication techniques for hydrogel actuators are first examined. Next, existing actuation mechanisms, including pneumatic, hydraulic, ionic, dehydration-rehydration, and cell-powered actuation, are reviewed with their benefits and limitations discussed. Subsequently, the applications of hydrogel-based actuators, including compliant handling of fragile items, micro-swimmers, wearable devices, and origami structures, are described. Finally, challenges in fabricating functional actuators using existing techniques are discussed.
  3. The value of verification of cyberphysical systems depends on the relationship between the state of the software and the state of the physical system. This relationship can be complex because of the real-time nature and different timelines of the physical plant, the sensors and actuators, and the software that is almost always concurrent and distributed. In this paper, we study different ways to construct a transition system model for the distributed and concurrent software components of a CPS. The purpose of the transition system model is to enable model checking, an established and widely used verification technique. We describe a logical-time-based transition system model, which is commonly used for verifying programs written in synchronous languages, and derive the conditions under which such a model faithfully reflects physical states. When these conditions are not met (a common situation), a finer-grained event-based transition system model may be required. We propose an approach for formal verification of cyberphysical systems using Lingua Franca, a language designed for programming cyberphysical systems, and Rebeca, an actor-based language designed for model checking distributed event-driven systems. We focus on the cyber part and model a faithful interface to the physical part. Our method relies on the assumption that the alignment of different timelines during the execution of the system is the responsibility of the underlying platforms. We make those assumptions explicit and clear.
  4. Hideki Aoyama; Keiichi Shirase (Eds.)
    An integral part of information-centric smart manufacturing is the adaptation of industrial robots to complement human workers in a collaborative manner. While advancement in sensing has enabled real-time monitoring of the workspace, understanding the semantic information in the workspace, such as parts and tools, remains a challenge for seamless robot integration. The resulting lack of adaptivity to perform in a dynamic workspace has limited robots to tasks with pre-defined actions. In this paper, a machine learning-based robotic object detection and grasping method is developed to improve the adaptivity of robots. Specifically, object detection based on the concept of single-shot detection (SSD) and convolutional neural networks (CNNs) is investigated to recognize and localize objects in the workspace. Subsequently, the extracted information from object detection, such as the type, position, and orientation of the object, is fed into a multi-layer perceptron (MLP) to generate the desired joint angles of the robotic arm for proper object grasping and handover to the human worker. Network training is guided by forward kinematics of the robotic arm in a self-supervised manner to mitigate issues such as singularity in computation. The effectiveness of the developed method is validated on an eDo robotic arm in a human-robot collaborative assembly case study.
  5. In modern industrial manufacturing processes, robotic manipulators are routinely used in assembly, packaging, and material handling operations. During production, changing end-of-arm tooling is frequently necessary for process flexibility and reuse of robotic resources. In conventional operation, a tool changer is sometimes employed to load and unload end-effectors; however, the robot must be manually taught to locate the tool changers by operators via a teach pendant. During tool change teaching, the operator takes considerable effort and time to align the master and tool side of the coupler by adjusting the motion speed of the robotic arm and observing the alignment from different viewpoints. In this paper, a custom robotic system, the NeXus, was programmed to locate and change tools automatically via an RGB-D camera. The NeXus was configured as a multi-robot system for multiple tasks including assembly, bonding, and 3D printing of sensor arrays, solar cells, and microrobot prototypes. Thus, different tools are employed by an industrial robotic arm to position grippers, printers, and other types of end-effectors in the workspace. To improve the precision and cycle time of the robotic tool change, we mounted an eye-in-hand RGB-D camera and employed visual servoing to automate the tool change process. We then measured the tool location teaching time with this system and compared the cycle time against those of six human operators in manual mode. We concluded that the tool location time in automated mode is, on average, more than two times lower than that of the expert human operators.