Title: A Real-Time Dynamic Simulator and an Associated Front-End Representation Format for Simulating Complex Robots and Environments
Robot dynamic simulators offer convenient implementation and testing of physical robots, accelerating research and development. While existing simulators support most real-world robots built from serially linked kinematic and dynamic chains, they offer limited or conditional support for complex closed-loop robots. On the other hand, many of the underlying physics-computation libraries that these simulators employ do support closed-loop kinematic chains and redundant mechanisms. Such mechanisms are often used in surgical robots to achieve constrained motions, e.g., a remote center of motion (RCM). To support such robots, we propose a new simulation framework based on a front-end description format and a robust real-time dynamic simulator. Although this study focuses on surgical robots, the proposed format and simulator are applicable to any type of robot. In this manuscript, we describe the philosophy and implementation of the front-end description format and demonstrate its performance and the simulator's capabilities using simulated models of real-world surgical robots.
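The distinction the abstract draws between serial chains and closed-loop mechanisms can be sketched concretely. The snippet below is an illustrative assumption, not the paper's actual description format: it models a four-bar linkage as a flat list of bodies and joints and detects the loop-closing joint that a purely serial (tree-structured) robot format cannot express.

```python
# Hypothetical sketch of a front-end description for a closed-loop
# (four-bar) linkage. Field names are illustrative, not the paper's format.
description = {
    "bodies": ["base", "crank", "coupler", "rocker"],
    "joints": [
        {"name": "j0", "parent": "base",    "child": "crank"},
        {"name": "j1", "parent": "crank",   "child": "coupler"},
        {"name": "j2", "parent": "coupler", "child": "rocker"},
        # The loop-closing joint is what serial-chain formats cannot express:
        {"name": "j3", "parent": "rocker",  "child": "base"},
    ],
}

def is_closed_loop(desc):
    """Return True if the joint graph contains a cycle (a closed chain)."""
    # Union-find over bodies: a joint that links two already-connected
    # bodies closes a kinematic loop.
    parent = {b: b for b in desc["bodies"]}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for j in desc["joints"]:
        a, b = find(j["parent"]), find(j["child"])
        if a == b:
            return True
        parent[a] = b
    return False

print(is_closed_loop(description))  # True: the four-bar linkage closes a loop
```

A tree-structured (serial or branched) description would never trigger the cycle check, which is why formats restricted to parent-child trees cannot represent such mechanisms.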
Award ID(s): 1637759
PAR ID: 10207704
Journal Name: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Page Range / eLocation ID: 1875 to 1882
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Creating soft robots with sophisticated, autonomous capabilities requires these systems to possess reliable, online proprioception of their 3D configuration through integrated soft sensors. We present a framework for predicting a soft robot's 3D configuration via deep learning, using feedback from a soft, proprioceptive sensor skin. Our framework introduces a kirigami-enabled strategy for rapidly sensorizing soft robots using off-the-shelf materials, a general kinematic description of soft robot geometry, and an investigation of neural-network designs for predicting soft robot configuration. Even with hysteretic, non-monotonic feedback from the piezoresistive sensors, recurrent neural networks show potential for predicting our new kinematic parameters and, thus, the robot's configuration. One trained neural network closely predicts steady-state configuration during operation, though complete dynamic behavior is not fully captured. We validate our methods on a trunk-like arm with 12 discrete actuators and 12 proprioceptive sensors. As an essential advance in soft robotic perception, we anticipate our framework will open new avenues toward closed-loop control in soft robotics.
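As a rough illustration of the recurrent approach described above, the sketch below uses an Elman-style cell with random, untrained weights in NumPy (not any particular deep-learning framework, and not the paper's trained network). The point it demonstrates is structural: the hidden state carries sensor history, which is what lets a recurrent model cope with hysteretic, non-monotonic feedback.

```python
import numpy as np

# Illustrative only: random weights, sized for 12 sensors -> 12 kinematic
# parameters as in the trunk-like arm described above.
rng = np.random.default_rng(0)
n_sensors, n_hidden, n_params = 12, 32, 12
W_in = rng.normal(scale=0.1, size=(n_hidden, n_sensors))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_params, n_hidden))

def predict_configuration(sensor_sequence):
    """Run an Elman-style RNN over a (T, 12) sequence of sensor readings."""
    h = np.zeros(n_hidden)
    for x in sensor_sequence:
        h = np.tanh(W_in @ x + W_rec @ h)  # hidden state accumulates history
    return W_out @ h                       # predicted kinematic parameters

config = predict_configuration(rng.normal(size=(50, n_sensors)))
print(config.shape)  # (12,)
```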
  2. With the advancement of modern robotics, autonomous agents can now host sophisticated algorithms that enable them to make intelligent decisions. Developing and testing such algorithms directly on real-world systems, however, is tedious and may waste valuable resources, especially for heterogeneous multi-agent systems in battlefield environments, where communication is critical in determining the system's behavior and usability. Because simulating such scenarios before deployment requires simulators of separate paradigms (co-simulation), synchronization between those simulators is vital. Existing work aimed at resolving this issue falls short of addressing diversity among the deployed agents. In this work, we propose SynchroSim, an integrated co-simulation middleware for simulating heterogeneous multi-robot systems. We propose a velocity-difference-driven adjustable window-size approach to reduce packet-loss probability: it takes into account the respective velocities of the deployed agents to calculate a suitable window size before transmitting data between them. Our algorithm is simulator-agnostic; for the implementation and results reported here, we use Gazebo as the physics simulator and NS-3 as the network simulator. We design the algorithm around the perception-action loop inside a closed communication channel, an essential factor in contested scenarios that demand high fidelity in data transmission. We validate our approach empirically at both the simulation and system levels, for both line-of-sight (LOS) and non-line-of-sight (NLOS) scenarios. Compared to a fixed window-size synchronization approach, our approach achieves a noticeable reduction in packet-loss probability (≈11%) and average packet delay (≈10%).
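The velocity-difference-driven window sizing described above can be sketched as a simple rule: when agents' velocities diverge, state changes quickly, so the simulators must synchronize more often (smaller window); when velocities agree, the window can grow. The linear form and all constants below are assumptions for illustration, not SynchroSim's actual formula.

```python
# Hedged sketch: co-simulation window size as a function of agent speeds.
# w_min/w_max bounds and the gain k are illustrative assumptions.
def window_size(v_a, v_b, w_min=0.01, w_max=0.5, k=0.1):
    """Return a synchronization window (seconds) from two agent speeds (m/s)."""
    dv = abs(v_a - v_b)
    w = w_max / (1.0 + k * dv)  # larger velocity difference -> smaller window
    return max(w_min, min(w_max, w))

print(window_size(1.0, 1.0))   # agents agree: full window, 0.5
print(window_size(0.0, 50.0))  # large divergence: much smaller window
```

A fixed-window baseline corresponds to always returning w_max regardless of dv, which is the comparison point for the packet-loss and delay improvements the abstract reports.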
  3. Open-source kinematic models of the da Vinci Surgical System have previously been developed using serial chains for forward and inverse kinematics. However, these models do not describe the motion of every link in the closed-loop mechanisms of the da Vinci manipulators, and knowing the kinematics of all moving components is essential for modeling the system dynamics and implementing representative simulations. This paper proposes a method for modeling the closed-loop kinematics, building on the existing da Vinci kinematics and an optical motion-capture link-length calibration. The resulting link lengths and DH parameters are presented and used as the basis for ROS-based simulation models. The models were simulated in the RViz visualization environment and the Gazebo dynamics simulator. Additionally, the closed-loop kinematic chain was verified by comparing the remote-center-of-motion location in simulation against the hardware. The dynamic simulation showed satisfactory joint stability and performance. All models and simulations are provided as an open-source package.
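DH parameters like those calibrated above feed into the standard Denavit-Hartenberg homogeneous transform, and chaining one transform per joint gives the forward kinematics. The parameter values in this sketch are placeholders for illustration, not the paper's calibrated results.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain per-joint DH transforms into the end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Placeholder two-joint chain (theta, d, a, alpha per row):
T = forward_kinematics([(0.0, 0.1, 0.0, np.pi / 2),
                        (np.pi / 2, 0.0, 0.2, 0.0)])
print(np.round(T[:3, 3], 3))  # end-effector position
```

Verifying a closed-loop chain, as the paper does with the remote center of motion, amounts to checking that two such chains meeting at the loop closure yield consistent poses.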
  4.
    Surgical robots for laparoscopy consist of several patient-side slave manipulators that are controlled via surgeon-operated master telemanipulators. Commercial surgical robots do not perform any sub-tasks autonomously - even those of a repetitive or noninvasive nature - nor do they provide intelligent assistance. While this is primarily for safety and regulatory reasons, the state of such automation intelligence also lacks the reliability and robustness required for high-risk applications. Recent developments in continuous control using Artificial Intelligence and Reinforcement Learning have prompted growing research interest in automating mundane sub-tasks. To build on this, we present an Asynchronous Framework that incorporates a real-time dynamic simulation - manipulable with the masters of a surgical robot and various other input devices - and interfaces with learning agents to train, and potentially allow the execution of, shared sub-tasks. The framework is generic enough to cater to various surgical (as well as non-surgical) training and control applications. We demonstrate this scope with examples of multi-user and multi-manual applications that allow realistic interactions by incorporating distributed control, shared task allocation, and a well-defined communication pipeline for learning agents. These examples are discussed in conjunction with the design philosophy, specifications, system architecture, and metrics of the Asynchronous Framework and the accompanying Simulator. We show that the Simulator remains stable while achieving real-time dynamic simulation and interfacing with several haptic input devices and a training agent at the same time.
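The asynchronous pattern described above, a simulator stepping at its own rate while input devices and learning agents exchange data through a communication pipeline, can be sketched minimally with thread-safe queues. The queue names, the toy dynamics step, and the agent's rule are all illustrative assumptions, not the framework's actual interfaces.

```python
import queue
import threading

# Illustrative sketch: the simulator publishes states and consumes commands
# asynchronously; an agent reads states and issues commands concurrently.
state_q, command_q = queue.Queue(), queue.Queue()

def simulator(steps):
    x = 0.0
    for t in range(steps):
        try:
            x += command_q.get_nowait()   # apply any pending agent command
        except queue.Empty:
            pass                          # no command this step; keep going
        x += 0.1                          # stand-in for one dynamics step
        state_q.put((t, x))

def agent(steps):
    for _ in range(steps):
        t, x = state_q.get()              # blocks until a state arrives
        if x > 1.0:
            command_q.put(-0.5)           # stand-in for a learned action

sim = threading.Thread(target=simulator, args=(20,))
act = threading.Thread(target=agent, args=(20,))
sim.start(); act.start()
sim.join(); act.join()
```

Because neither side blocks the other's loop (the simulator never waits on the agent), the simulation rate stays decoupled from agent latency, which is the property the framework's real-time claim relies on.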
  5. As robots operate alongside humans in shared spaces, such as homes and offices, it is essential to have an effective mechanism for interacting with them. Natural language offers an intuitive interface for communicating with robots, but most of the recent approaches to grounded language understanding reason only in the context of an instantaneous state of the world. Though this allows for interpreting a variety of utterances in the current context of the world, these models fail to interpret utterances which require the knowledge of past dynamics of the world, thereby hindering effective human-robot collaboration in dynamic environments. Constructing a comprehensive model of the world that tracks the dynamics of all objects in the robot’s workspace is computationally expensive and difficult to scale with increasingly complex environments. To address this challenge, we propose a learned model of language and perception that facilitates the construction of temporally compact models of dynamic worlds through closed-loop grounding and perception. Our experimental results on the task of grounding referring expressions demonstrate more accurate interpretation of robot instructions in cluttered and dynamic table-top environments without a significant increase in runtime as compared to an open-loop baseline. 