JEDAI: A System for Skill-Aligned Explainable Robot Planning
                        
                    
    
This paper presents JEDAI Explains Decision-Making AI (JEDAI), an AI system designed for outreach and educational efforts aimed at non-AI experts. JEDAI features a novel synthesis of research ideas from integrated task and motion planning and explainable AI. JEDAI helps users create high-level, intuitive plans while ensuring that they will be executable by the robot. It also provides users customized explanations about errors and helps improve their understanding of AI planning as well as the limits and capabilities of the underlying robot system.
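JEDAI's implementation is not part of this record, so the following is only a rough sketch of the kind of executability check the abstract describes: simulating a STRIPS-style plan and reporting the first unmet precondition, which is the raw material a system like JEDAI could turn into a user-facing explanation. All action names and predicates here are hypothetical.

```python
# Illustrative sketch only: a STRIPS-style executability check in the spirit of
# what the JEDAI abstract describes. Actions and predicates are hypothetical.

ACTIONS = {
    # action: (preconditions, add effects, delete effects)
    "pick(block)":  ({"clear(block)", "handempty"},
                     {"holding(block)"},
                     {"clear(block)", "handempty"}),
    "place(block)": ({"holding(block)"},
                     {"clear(block)", "handempty"},
                     {"holding(block)"}),
}

def check_plan(state, plan):
    """Simulate a plan; return (ok, failing_step, unmet_preconditions)."""
    for i, action in enumerate(plan):
        pre, add, delete = ACTIONS[action]
        unmet = pre - state
        if unmet:
            # An explainable planner could phrase these unmet facts for the user.
            return False, i, unmet
        state = (state - delete) | add
    return True, None, set()

ok, step, unmet = check_plan({"clear(block)", "handempty"},
                             ["pick(block)", "pick(block)"])
print(ok, step, unmet)  # False 1 {'handempty', 'clear(block)'} (set order may vary)
```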
- Award ID(s): 1942856
- PAR ID: 10342142
- Date Published:
- Journal Name: Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (human, robot, and IoT) with a single mobile AR device. Users author tasks with the handheld Augmented Reality (AR) interface; placing the AR device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its embedded simultaneous localization and mapping (SLAM) capability. We demonstrate through various use cases and preliminary studies that V.Ra enables instant, robust, and intuitive room-scale navigation and interaction task authoring. (A minimal illustrative sketch of the authoring-then-execution hand-off follows this list.)
- MAARS (Machine learning-based Analytics for Automated Rover Systems) is an ongoing JPL effort to bring the latest self-driving technologies to Mars, the Moon, and beyond. The ongoing AI revolution here on Earth is finally propagating to the red planet as High Performance Spaceflight Computing (HPSC) and commercial off-the-shelf (COTS) systems-on-a-chip (SoC), such as Qualcomm's Snapdragon, become available to rovers. In this three-year project, we are developing, implementing, and benchmarking a wide range of autonomy algorithms that would significantly enhance the productivity and safety of planetary rover missions. This paper provides the latest snapshot of the project, with a broad, high-level description of every capability that we are developing, including scientific scene interpretation, vision-based traversability assessment, resource-aware path planning, information-theoretic path planning, on-board strategic path planning, and on-board optimal kinematic settling for accurate collision checking. All of the onboard software capabilities will be integrated into JPL's Athena test rover using ROS (Robot Operating System). (A toy resource-aware path-planning sketch follows this list.)
- While the ultimate goal of natural-language-based Human-Robot Interaction (HRI) may be free-form, mixed-initiative dialogue, social robots deployed in the near future will likely primarily engage in wakeword-driven interaction, in which users' commands are prefaced by a wakeword such as "Hey, Robot." This style of interaction helps to allay user privacy concerns, as the robot's full speech recognition module need not be employed until the target wakeword is used. Unfortunately, there are a number of concerns in the popular media surrounding this style of interaction, with consumers fearing that it is training users (in particular, children) to be rude towards technology and, by extension, rude towards other humans. In this paper, we present a study that demonstrates how an alternate style of wakeword, i.e., "Excuse me, Robot," may allay this concern by priming users to phrase commands as Indirect Speech Acts. (A minimal sketch of wakeword gating follows this list.)
- This paper describes two exemplary projects on physical ROS-compatible robots (i.e., the Turtlebot3 Burger and Waffle Pi) for an undergraduate robotics course, aiming to foster students' problem-solving skills through project-based learning. The context of the study is a senior-level technical elective course in the Department of Computer Engineering Technology at a primarily undergraduate teaching institution. Earlier courses in the CET curriculum have prepared students with programming skills in several commonly used languages, including Python, C/C++, Java, and MATLAB. Students' proficiency in programming and hands-on skills makes it possible to implement advanced robotic control algorithms in this robotics course, which has a 3-hour companion lab session each week. The Robot Operating System (ROS) is an open-source framework that helps developers build and reuse code between robotic applications. Though ROS is mainly used as a research platform, instructors in higher education are bringing it and its recent release, ROS 2, into their classrooms. Our earlier work controlled a simulated robot via ROS in a virtual environment on the MATLAB-ROS-Gazebo platform. This paper describes its counterpart, utilizing physical ROS-compatible autonomous ground robots on the MATLAB-ROS2-Turtlebot3 platform. The two exemplary projects presented in this paper cover sensing, perception, and control, which are essential to any robotic application. Sensing is via the robot's onboard 2D laser sensor. Perception involves pattern classification and recognition. Control is shown via path planning. We believe the physical MATLAB-ROS2-Turtlebot3 platform will help enhance robotics education by exposing students to realistic situations. It will also provide opportunities for educators and students to explore AI-facilitated solutions when tackling everyday problems. (A minimal Python counterpart of the laser-scan sensing step follows this list.)
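The V.Ra entry above hinges on one artifact, a task plan authored in AR and replayed by the robot. As a hypothetical sketch of that WYDWRD hand-off (none of these types or method names come from V.Ra itself):

```python
# Hypothetical sketch of a WYDWRD-style task plan: the record an AR interface
# produces is the same one the robot later replays. All names are invented.
from dataclasses import dataclass, field

@dataclass
class Step:
    kind: str                    # "navigate" or "iot"
    target: tuple = (0.0, 0.0)   # SLAM-frame waypoint, used when kind == "navigate"
    device: str = ""             # IoT device id, used when kind == "iot"
    command: str = ""            # IoT command, used when kind == "iot"

@dataclass
class TaskPlan:
    steps: list = field(default_factory=list)

    # Authoring phase: the handheld AR interface appends steps as the user moves.
    def author_navigate(self, x, y):
        self.steps.append(Step("navigate", target=(x, y)))

    def author_iot(self, device, command):
        self.steps.append(Step("iot", device=device, command=command))

    # Execution phase: the same device, now mounted on the robot, replays steps.
    def execute(self, robot):
        for s in self.steps:
            if s.kind == "navigate":
                robot.go_to(*s.target)           # robot API assumed, not V.Ra's
            else:
                robot.send_iot(s.device, s.command)

plan = TaskPlan()
plan.author_navigate(2.0, 1.5)   # user walks to a lamp while authoring in AR...
plan.author_iot("lamp-1", "on")  # ...and records an IoT action there
# later, on the robot: plan.execute(robot)
```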
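Among the capabilities MAARS lists, resource-aware path planning is the easiest to illustrate in miniature. The toy planner below assumes a per-cell energy map (not MAARS's actual representation) and prefers energy-cheap routes over short ones:

```python
# Toy resource-aware planner, illustrative only: Dijkstra over a grid where
# each cell costs energy to enter. Real rover planners are far more involved.
import heapq

def plan(grid_energy, start, goal):
    """grid_energy[r][c] = energy to drive into (r, c); None = untraversable."""
    rows, cols = len(grid_energy), len(grid_energy[0])
    best = {start: 0.0}
    frontier = [(0.0, start, [start])]
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cost > best[cell]:
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid_energy[nr][nc] is not None:
                ncost = cost + grid_energy[nr][nc]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(frontier, (ncost, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable

# The energetically steep cell (9) is bypassed via a longer but cheaper detour.
energy = [[1, 9,    1],
          [1, 1,    1],
          [1, None, 1]]
print(plan(energy, (0, 0), (0, 2)))  # (4.0, [(0,0), (1,0), (1,1), (1,2), (0,2)])
```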
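The wakeword study above relies on a simple mechanism: the full speech recognizer runs only once the wakeword is heard. A deliberately simplified, text-only sketch of that gating (real systems match audio streams; the names here are invented):

```python
# Text-only sketch of wakeword gating, illustrative only: the heavyweight
# recognizer is invoked solely when a wakeword prefixes the utterance.
WAKEWORDS = ("hey, robot", "excuse me, robot")  # the two styles the study compares

def gate(utterance, recognize):
    """Call `recognize` on the command only if a wakeword prefixes `utterance`."""
    lowered = utterance.lower()
    for w in WAKEWORDS:
        if lowered.startswith(w):
            return recognize(utterance[len(w):].lstrip(" ,"))
    return None  # no wakeword: nothing is sent to the recognizer

print(gate("Excuse me, Robot, could you open the door?", lambda text: text))
# -> could you open the door?
```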
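The course projects above pair MATLAB with ROS 2; as a Python point of comparison, a minimal rclpy node for the laser-scan sensing step might look like the sketch below (/scan is the usual Turtlebot3 topic, but the node itself is illustrative and not from the paper):

```python
# Minimal rclpy sketch of the sensing step, illustrative only; the course
# itself uses MATLAB with ROS 2. Subscribes to the Turtlebot3 2D laser scan.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanListener(Node):
    def __init__(self):
        super().__init__("scan_listener")
        self.create_subscription(LaserScan, "/scan", self.on_scan, 10)

    def on_scan(self, msg):
        # Report the nearest valid return; a course project would instead feed
        # these ranges into pattern classification or path planning.
        valid = [r for r in msg.ranges if msg.range_min <= r <= msg.range_max]
        if valid:
            self.get_logger().info(f"nearest obstacle: {min(valid):.2f} m")

def main():
    rclpy.init()
    node = ScanListener()
    try:
        rclpy.spin(node)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == "__main__":
    main()
```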