Title: JEDAI: A System for Skill-Aligned Explainable Robot Planning
This paper presents JEDAI Explains Decision-Making AI (JEDAI), an AI system designed for outreach and educational efforts aimed at non-AI experts. JEDAI features a novel synthesis of research ideas from integrated task and motion planning and explainable AI. JEDAI helps users create high-level, intuitive plans while ensuring that they will be executable by the robot. It also provides users with customized explanations of errors and helps improve their understanding of AI planning as well as the limits and capabilities of the underlying robot system.
Award ID(s): 1942856
NSF-PAR ID: 10342142
Journal Name: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems
Sponsoring Org: National Science Foundation
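The abstract describes a loop in which a user's high-level plan is checked for executability and failures are explained in the user's own vocabulary. As a rough illustration of that idea (not JEDAI's actual implementation; the action model, domain facts, and explanation wording are all assumptions), a plan can be simulated against a symbolic action model and the first unmet precondition reported back:

```python
# Minimal sketch of a plan-validation-with-explanation loop in the spirit
# of JEDAI. The action model and wording are illustrative assumptions,
# not the paper's actual implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before execution
    add_effects: frozenset    # facts made true by execution
    del_effects: frozenset    # facts made false by execution

def validate(plan, state):
    """Simulate the plan; on failure, explain which precondition broke."""
    state = set(state)
    for step, action in enumerate(plan, start=1):
        missing = action.preconditions - state
        if missing:
            return (f"Step {step} ({action.name}) is not executable: "
                    f"the robot requires {', '.join(sorted(missing))} "
                    f"to hold first.")
        state = (state - action.del_effects) | action.add_effects
    return "Plan is executable by the robot."

# Hypothetical pick-and-place domain.
pick = Action("pick(cup)", frozenset({"hand_empty", "reachable(cup)"}),
              frozenset({"holding(cup)"}), frozenset({"hand_empty"}))
place = Action("place(cup, table)", frozenset({"holding(cup)"}),
               frozenset({"on(cup, table)", "hand_empty"}),
               frozenset({"holding(cup)"}))

# A user plan that places before picking yields an explained failure.
print(validate([place, pick], {"hand_empty", "reachable(cup)"}))
print(validate([pick, place], {"hand_empty", "reachable(cup)"}))
```

JEDAI additionally grounds such symbolic checks in motion-level feasibility through integrated task and motion planning; that layer is omitted from this sketch.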
More Like this
  1. MAARS (Machine learning-based Analytics for Automated Rover Systems) is an ongoing JPL effort to bring the latest self-driving technologies to Mars, the Moon, and beyond. The ongoing AI revolution here on Earth is finally propagating to the red planet as High Performance Spaceflight Computing (HPSC) and commercial off-the-shelf (COTS) systems-on-a-chip (SoC), such as Qualcomm's Snapdragon, become available to rovers. In this three-year project, we are developing, implementing, and benchmarking a wide range of autonomy algorithms that would significantly enhance the productivity and safety of planetary rover missions. This paper provides the latest snapshot of the project, with a broad, high-level description of every capability we are developing, including scientific scene interpretation, vision-based traversability assessment, resource-aware path planning, information-theoretic path planning, on-board strategic path planning, and on-board optimal kinematic settling for accurate collision checking. All of the onboard software capabilities will be integrated into JPL's Athena test rover using ROS (Robot Operating System).
  2. We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (the human, the robot, and the IoT) with a single mobile AR device. Users author tasks with the Augmented Reality (AR) handheld interface; placing the AR device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its embedded simultaneous localization and mapping (SLAM) capability. Through various use cases and preliminary studies, we demonstrate that V.Ra enables instant, robust, and intuitive room-scale navigation and interactive task authoring (see the task-plan sketch after this list).
  3. Pose estimation is a basic module in many robot manipulation pipelines. Estimating the pose of objects in the environment can be useful for grasping, motion planning, or manipulation. However, current state-of-the-art methods for pose estimation rely on either large annotated training sets or simulated data. Further, the long training times of these methods prohibit quick interaction with novel objects. To address these issues, we introduce a novel method for zero-shot object pose estimation in clutter. Our approach uses a hypothesis generation and scoring framework, with a focus on learning a scoring function that generalizes to objects not used for training. We achieve zero-shot generalization by rating hypotheses as a function of unordered point differences (see the scoring sketch after this list). We evaluate our method on challenging datasets with both textured and untextured objects in cluttered scenes and demonstrate that it significantly outperforms previous methods on this task. We also demonstrate how our system can be used by quickly scanning and building a model of a novel object, which can immediately be used by our method for pose estimation. Our work allows users to estimate the pose of novel objects without requiring any retraining.
  4. Robust trajectory execution is an extension of cooperative collision avoidance that takes pre-planned trajectories directly into account. We propose an algorithm for robust trajectory execution that compensates for a variety of dynamic changes, including newly appearing obstacles, robots breaking down, imperfect motion execution, and external disturbances. Robots do not communicate with each other and only sense other robots' positions and the obstacles around them. At the high level, we use a hybrid planning strategy employing both discrete planning and trajectory optimization with a dynamic receding-horizon approach. The discrete planner helps to avoid local minima, adjusts the planning horizon, and provides good initial guesses for the optimization stage. Trajectory optimization uses a quadratic programming formulation in which all safety-critical parts are formulated as hard constraints. At the low level, we use buffered Voronoi cells as a multi-robot collision-avoidance strategy (see the sketch after this list). Compared to ORCA, our approach supports higher-order dynamic limits and avoids deadlocks better. We demonstrate our approach in simulation and on physical robots, showing that it can operate in real time.
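Item 2's what-you-do-is-what-robot-does transfer amounts to serializing the authored plan, a sequence of SLAM-anchored waypoints and IoT commands, on the AR device and replaying it on the robot. The schema, field names, and callbacks below are assumptions made for illustration, not V.Ra's actual format:

```python
# Illustrative sketch of a V.Ra-style task plan: navigation waypoints and
# IoT actions authored on the AR device, then handed to the robot.

import json
from dataclasses import dataclass, asdict

@dataclass
class NavigateTo:
    x: float  # SLAM-map coordinates recorded during AR authoring
    y: float

@dataclass
class IoTAction:
    device_id: str  # stationary IoT device to trigger (hypothetical ID)
    command: str    # e.g. "start", "stop"

def serialize(plan):
    """Encode the authored plan so that placing the device on the robot
    transfers it verbatim."""
    return json.dumps([{**asdict(s), "type": type(s).__name__} for s in plan])

def execute(serialized, drive_to, send_command):
    """Robot side: replay the plan using injected motion/IoT callbacks."""
    for step in json.loads(serialized):
        if step["type"] == "NavigateTo":
            drive_to(step["x"], step["y"])
        elif step["type"] == "IoTAction":
            send_command(step["device_id"], step["command"])

plan = [NavigateTo(1.0, 2.5), IoTAction("vacuum_01", "start"),
        NavigateTo(4.0, 0.5), IoTAction("vacuum_01", "stop")]
execute(serialize(plan),
        drive_to=lambda x, y: print(f"drive to ({x}, {y})"),
        send_command=lambda d, c: print(f"{d} <- {c}"))
```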
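Item 3 rates pose hypotheses as a function of unordered point differences. The sketch below shows the shape of such a hypothesis-scoring pipeline with a hand-coded nearest-neighbor proxy standing in for the paper's learned scoring network; the function names and the scoring heuristic are illustrative assumptions:

```python
# Sketch of hypothesis scoring for zero-shot pose estimation. A learned
# network over unordered point differences is replaced here by a simple
# distance-based proxy, purely to show the structure of the pipeline.

import numpy as np

def transform(points, pose):
    """Apply a 4x4 rigid-body pose to an (N, 3) model point cloud."""
    R, t = pose[:3, :3], pose[:3, 3]
    return points @ R.T + t

def score_hypothesis(model_pts, observed_pts, pose):
    """Rate a pose by unordered point differences: for each transformed
    model point, the offset to its nearest observed point."""
    placed = transform(model_pts, pose)
    diffs = placed[:, None, :] - observed_pts[None, :, :]  # (N, M, 3)
    nearest = np.linalg.norm(diffs, axis=2).min(axis=1)    # (N,)
    return -nearest.mean()  # higher is better; a learned net replaces this

def best_pose(model_pts, observed_pts, hypotheses):
    """Pick the highest-scoring candidate pose."""
    return max(hypotheses,
               key=lambda p: score_hypothesis(model_pts, observed_pts, p))

# Usage on synthetic data: the true pose outscores the identity.
rng = np.random.default_rng(0)
model = rng.normal(size=(100, 3))
true_pose = np.eye(4)
true_pose[:3, 3] = [0.5, 0.0, 0.2]
observed = transform(model, true_pose)
assert np.allclose(best_pose(model, observed, [np.eye(4), true_pose]),
                   true_pose)
```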
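Item 4's low-level collision avoidance keeps each robot inside its buffered Voronoi cell: the plane is split along the perpendicular bisector to each sensed neighbor, retracted by a safety radius, yielding linear half-plane constraints of the kind that slot directly into a QP. A minimal sketch, assuming point robots in the plane and illustrative function names:

```python
# Sketch of buffered Voronoi cell (BVC) constraints for multi-robot
# collision avoidance. Each neighbor induces a half-plane a.x <= b;
# if every robot stays in its own buffered cell, pairwise separation
# follows. Computed only from sensed positions, with no communication.

import numpy as np

def bvc_constraints(positions, i, safety_radius):
    """Half-planes (a, b) with a.x <= b bounding robot i's buffered cell."""
    p_i = positions[i]
    constraints = []
    for j, p_j in enumerate(positions):
        if j == i:
            continue
        d = p_j - p_i
        a = d / np.linalg.norm(d)          # outward unit normal toward j
        midpoint = (p_i + p_j) / 2.0
        b = a @ midpoint - safety_radius   # retract the bisector inward
        constraints.append((a, b))
    return constraints

def inside_cell(x, constraints):
    """Check whether point x satisfies every half-plane constraint."""
    return all(a @ x <= b + 1e-9 for a, b in constraints)

# Two robots 2 m apart with a 0.5 m buffer: robot 0 may only use points
# at least 0.5 m short of the perpendicular bisector at x = 1.
pos = np.array([[0.0, 0.0], [2.0, 0.0]])
cons = bvc_constraints(pos, 0, 0.5)
print(inside_cell(np.array([0.4, 0.0]), cons))  # True  (0.4 <= 0.5)
print(inside_cell(np.array([0.8, 0.0]), cons))  # False (0.8 >  0.5)
```

In the paper's formulation these constraints are hard constraints in the trajectory-optimization QP; the discrete planner above them supplies initial guesses and helps escape local minima.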