Title: Collaborative Robotics Toolkit (CRTK): Open Software Framework for Surgical Robotics Research
Robot-assisted minimally invasive surgery has made a substantial impact in operating rooms over the past few decades with its high dexterity, small tool size, and influence on the adoption of minimally invasive techniques. In recent years, intelligence and varying levels of surgical robot autonomy have emerged thanks to medical robotics efforts at numerous academic institutions and leading surgical robot companies. To accelerate interaction within the research community and prevent repeated development, we propose the Collaborative Robotics Toolkit (CRTK), a common API for the RAVEN-II and the da Vinci Research Kit (dVRK), two open surgical robot platforms installed at more than 40 institutions worldwide. CRTK has since broadened to include other robots and devices, including simulated robotic systems and industrial robots. This common API is a community software infrastructure for research and education in cutting-edge human-robot collaborative areas such as semi-autonomous teleoperation and medical robotics. This paper presents the concepts, design details, and the integration of CRTK with physical robot systems and simulation platforms.
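To make the idea of a common API concrete, the following sketch shows what a minimal CRTK-style joint-space client could look like over plain ROS 1 topics. It is an illustration rather than code from the paper: the arm namespace /PSM1, the use of sensor_msgs/JointState for the measured_js and servo_jp topics, and the overall topic layout are assumptions based on the CRTK naming convention; actual namespaces and message types depend on the robot configuration (dVRK, RAVEN-II, or a simulator).

```python
# Minimal sketch of a CRTK-style joint-space client over ROS 1 topics.
# Assumptions (not taken from the paper): the arm lives under the namespace
# "/PSM1" and its CRTK joint topics (measured_js, servo_jp) carry
# sensor_msgs/JointState messages.
import rospy
from sensor_msgs.msg import JointState


class CrtkJointClient(object):
    def __init__(self, ns="/PSM1"):
        self._measured_js = None
        # CRTK query command: latest measured joint state.
        rospy.Subscriber(ns + "/measured_js", JointState, self._on_measured_js)
        # CRTK motion command: stream joint position setpoints.
        self._servo_jp_pub = rospy.Publisher(ns + "/servo_jp", JointState, queue_size=1)

    def _on_measured_js(self, msg):
        self._measured_js = msg

    def measured_jp(self):
        return None if self._measured_js is None else list(self._measured_js.position)

    def servo_jp(self, positions):
        msg = JointState()
        msg.position = list(positions)
        self._servo_jp_pub.publish(msg)


if __name__ == "__main__":
    rospy.init_node("crtk_joint_client_demo")
    client = CrtkJointClient("/PSM1")
    rospy.sleep(1.0)  # give the subscriber and publisher time to connect
    current = client.measured_jp()
    if current is not None:
        client.servo_jp(current)  # re-command the current position as a trivial, safe setpoint
```

Because CRTK-compliant devices follow the same verb/noun pattern (measured, setpoint, servo, move combined with _js or _cp), a client like this can in principle be pointed at a different robot by changing only the namespace.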
Award ID(s):
1637789
NSF-PAR ID:
10204120
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE International Conference on Robotic Computing (IRC)
Page Range / eLocation ID:
48-55
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    ABSTRACT Introduction: Short response time is critical for future military medical operations in austere settings or remote areas. Effective patient care at the point of injury can greatly benefit from the integration of semi-autonomous robotic systems. To achieve autonomy, robots would require massive libraries of maneuvers collected with the goal of training machine learning algorithms. Although this is attainable in controlled settings, obtaining surgical data in austere settings can be difficult. Hence, in this article, we present the Dexterous Surgical Skill (DESK) database for knowledge transfer between robots. The peg transfer task was selected as it is one of the six main tasks of laparoscopic training. In addition, we provide a machine learning framework to evaluate novel transfer learning methodologies on this database. Methods: A set of surgical gestures was collected for a peg transfer task, composed of seven atomic maneuvers referred to as surgemes. The collected Dexterous Surgical Skill dataset comprises surgical robotic skills recorded on four robotic platforms: Taurus II, simulated Taurus II, YuMi, and the da Vinci Research Kit. We then explored two different learning scenarios: no-transfer and domain-transfer. In the no-transfer scenario, the training and testing data were obtained from the same domain, whereas in the domain-transfer scenario the training data are a blend of simulated and real robot data and the models are tested on a real robot. Results: Using simulation data to train the learning algorithms enhances performance on the real robot when limited or no real data are available. The transfer model showed an accuracy of 81% for the YuMi robot when the ratio of real to simulated data was 22% to 78%. For the Taurus II and the da Vinci, the model showed accuracies of 97.5% and 93%, respectively, when training only with simulation data. Conclusions: The results indicate that simulation can be used to augment training data to enhance the performance of learned models in real scenarios. This shows potential for the future use of surgical data from the operating room in deployable surgical robots in remote areas.
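As a rough illustration of the domain-transfer scenario described above (training on a blend of simulated and real surgeme data, then testing on real robot data only), the sketch below sets up such a split. Everything in it is a placeholder rather than the DESK pipeline itself: the random feature arrays, the roughly 22%/78% real-to-simulated blend, and the random-forest classifier simply stand in for the actual surgeme features and learning models.

```python
# Illustrative domain-transfer split: train a surgeme classifier on a blend of
# simulated and real data, then evaluate on held-out real data only.
# The arrays below are random placeholders, not the DESK dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sim, n_real, n_feat, n_surgemes = 780, 220, 30, 7   # roughly 78% simulated / 22% real
X_sim = rng.normal(size=(n_sim, n_feat))
y_sim = rng.integers(0, n_surgemes, size=n_sim)
X_real = rng.normal(size=(n_real, n_feat))
y_real = rng.integers(0, n_surgemes, size=n_real)

# Hold out half of the real data for testing; blend the rest with simulation.
X_real_tr, X_real_te, y_real_tr, y_real_te = train_test_split(
    X_real, y_real, test_size=0.5, random_state=0)
X_train = np.vstack([X_sim, X_real_tr])
y_train = np.concatenate([y_sim, y_real_tr])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy on real test data:", accuracy_score(y_real_te, clf.predict(X_real_te)))
```

In the no-transfer scenario, the same evaluation would simply drop X_sim and y_sim from the training blend.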
  2. Soft robotics enriches robotic functionalities by engineering soft materials and electronics toward enhanced compliance, adaptivity, and friendly human-machine interaction. This decade has witnessed extraordinary progress in, and benefits from, scaling soft robotics down to small sizes for a wide range of potential and promising applications, including medical and surgical soft robots, wearable and rehabilitation robots, and the exploration of unstructured environments. This perspective highlights recent research efforts in miniature soft robotics in a brief yet comprehensive way in terms of actuation, powering, design, fabrication, control, and applications, organized in four sections. Section 2 discusses the key aspects of materials selection and structural design for small-scale tethered and untethered actuation and powering, including fluidic actuation, stimuli-responsive actuation, and soft living biohybrid materials, as well as structural forms from 1D to 3D. Section 3 discusses the advanced manufacturing techniques used at small scales for fabricating miniature soft robots, including lithography, mechanical self-assembly, additive manufacturing, tissue engineering, and other fabrication methods. Section 4 discusses the control systems used in miniature robots, including off-board/onboard control and artificial intelligence-based control. Section 5 discusses their potential broad applications in healthcare, manipulation and processing of small-scale objects, and environmental monitoring. Finally, outlooks on the challenges and opportunities are discussed.
  3. The realms of commonsense knowledge and reasoning, vehicle automation with full as well as partial autonomy, and human-robot collaboration are growing areas of research, with much of the relevant data disseminated through the Web and IoT (Internet of Things) devices; the data themselves are heterogeneous, including plain text, images, audiovisuals, hypertext, and hypermedia. With the advent of autonomous vehicles, there is a greater need to embody commonsense knowledge in their development in order to simulate the subtle, intuitive aspects of human judgment. The field of robotics often involves collaborative tasks between humans and robots that enhance the respective activities and produce better results than humans or robots would achieve working by themselves. Accordingly, this article outlines and organizes some of the research occurring in these areas along with its Web perspectives and applications. Context related to human-robot collaboration and commonsense knowledge is provided via a survey of the literature. Vehicle automation is significant in the relevant studies; its definition and methods for its improvement are a focus of the article. Some work in this area has an impact on smart manufacturing. There is discussion of how human-robot collaboration is beneficial and how commonsense knowledge helps that collaboration occur in an enhanced manner. This article should be of interest to various communities, e.g., AI professionals, Web developers, robotics engineers, and data scientists.
  4. Cyber-physical systems for robotic surgery have enabled minimally invasive procedures with increased precision and shorter hospitalization. However, with the increasing complexity and connectivity of the software and the major involvement of human operators in the supervision of surgical robots, there remain significant challenges in ensuring patient safety. This paper presents a safety monitoring system that, given knowledge of the surgical task being performed by the surgeon, can detect safety-critical events in real time. Our approach integrates a surgical gesture classifier, which infers the operational context from the time-series kinematics data of the robot, with a library of erroneous-gesture classifiers that, given a surgical gesture, can detect unsafe events. Our experiments using data from two surgical platforms show that the proposed system can detect unsafe events caused by accidental or malicious faults within an average reaction-time window of 1,693 milliseconds with an F1 score of 0.88, and human errors within an average reaction-time window of 57 milliseconds with an F1 score of 0.76.
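The two-stage monitor described above, a gesture classifier that infers the operational context followed by per-gesture detectors for erroneous execution, can be sketched as a small pipeline. The window flattening, the DummyClassifier placeholders, and the gesture label "G1" below are illustrative assumptions, not the models evaluated in the paper.

```python
# Illustrative two-stage safety monitor: classify the current surgical gesture
# from a window of kinematics, then run the matching per-gesture error detector.
# The placeholder models always answer the same thing; they stand in for
# trained gesture and erroneous-gesture classifiers.
import numpy as np
from sklearn.dummy import DummyClassifier


class SafetyMonitor(object):
    def __init__(self, gesture_clf, error_detectors):
        self.gesture_clf = gesture_clf          # maps a kinematics window to a gesture label
        self.error_detectors = error_detectors  # dict: gesture label -> binary unsafe/safe classifier

    def check(self, window):
        """Return (gesture, unsafe) for one kinematics window of shape (samples, channels)."""
        features = window.reshape(1, -1)        # naive flattening as a stand-in feature extractor
        gesture = self.gesture_clf.predict(features)[0]
        unsafe = bool(self.error_detectors[gesture].predict(features)[0])
        return gesture, unsafe


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_dummy = rng.normal(size=(10, 50 * 6))     # 10 windows of 50 samples x 6 kinematic channels
    gesture_clf = DummyClassifier(strategy="constant", constant="G1").fit(X_dummy, ["G1"] * 10)
    detectors = {"G1": DummyClassifier(strategy="constant", constant=0).fit(X_dummy, [0] * 10)}
    monitor = SafetyMonitor(gesture_clf, detectors)
    print(monitor.check(rng.normal(size=(50, 6))))   # e.g. ('G1', False)
```

A real deployment would replace the dummy models with trained context and error classifiers and run check() on each incoming kinematics window within the reaction-time budget.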