

Search for: All records

Award ID contains: 2150394

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. This innovative practice work-in-progress (WIP) paper describes our ongoing development and deployment of an online robotics education platform, which highlighted a gap in providing the interactive, feedback-rich learning environment essential for mastering programming concepts in robotics, a need that the traditional code, simulate, turn-in workflow did not meet. Since teaching resources are limited, students benefit from real-time feedback that helps them find and fix mistakes in their programming assignments. To integrate such automated feedback, this paper focuses on creating a unit-testing system and integrating it into the course workflow. We provide this real-time feedback by building unit tests into the design of programming assignments, so that students can understand and fix their errors on their own, without instructors or TAs serving as a bottleneck. In line with the framework's personalized, student-centered approach, this method makes it easier for students to revise and debug their programming work, encouraging hands-on learning. The updated course workflow, which includes unit tests, strengthens the learning environment and makes it more interactive, so that students can learn to program robots in a self-guided fashion. 
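As a concrete illustration of this kind of automated feedback, an assignment-level unit test might check a student's solution and report a targeted hint on failure. The function name `drive_distance` and its kinematics below are invented for this sketch, not taken from the platform's actual assignment API:

```python
# Illustrative sketch only: `drive_distance` and its expected behavior are
# hypothetical, standing in for a student-written assignment function.

def drive_distance(start_x, speed, seconds):
    """Student solution: final x-position after driving at constant speed."""
    return start_x + speed * seconds

def test_drive_distance():
    """Auto-run unit test giving immediate, self-contained feedback."""
    result = drive_distance(start_x=0.0, speed=0.5, seconds=4.0)
    assert abs(result - 2.0) < 1e-6, (
        f"Expected the robot to end at x=2.0 m, but got x={result:.2f} m. "
        "Check how you combine speed and time."
    )

test_drive_distance()  # passes silently; a wrong solution raises with a hint
```

Run automatically on each submission, a failing assertion replaces the instructor/TA round-trip with an instant, actionable message.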
  2. In this work we deal with the problem of establishing a system architecture to facilitate real-time autonomous volumetric mapping alongside semantic characterization of sagebrush ecosystem landscapes, in order to support the pre-fire modeling and analysis required to plan for wildfire prevention and/or suppression. The world, and more specifically the broader region of Northern Nevada, has faced one of its most challenging periods of uncontrolled wildfires over the course of the last decade. This has led to research initiatives aimed at ecosystem-specific modeling of pre-, during-, and post-fire effects in order to better understand, predict, and address these phenomena. However, satellite photography remains insufficient for collecting the required wide-field information, which must combine centimeter-level volumetric mapping fidelity with semantic details on plant (sub)species; for the common case of sagebrush, these can be identified only through close-up inspection of fine foliage structure. To this end, we propose a perception and mapping architecture for an aerial robotic system that is capable of: a) LiDAR-based centimeter-level reconstruction, b) robust multi-modal sensor fusion Simultaneous Localization and Mapping (SLAM) leveraging LiDAR, IMU, Visual-Inertial Odometry, and Differential GPS in a global optimization mapping framework, and c) gimbal-driven point-zoom camera capture for efficient real-time collection of close-up imagery of foliage on specific target plants, allowing their real-time identification from leaf micro-structure via deep-learned classification deployed on a Neural Processing Unit. We present the associated systems, the overall hardware and software architecture, and a series of field deployment studies validating the proposed aerial robotic capabilities. 
  3. This paper addresses the problem of dynamic allocation of robot resources to tasks with hierarchical representations and multiple types of execution constraints, with the goal of enabling single-robot multitasking capabilities. Although the vast majority of robot platforms are equipped with more than one sensor (cameras, lasers, sonars) and several actuators (wheels/legs, two arms), which would in principle allow the robot to concurrently work on multiple tasks, existing methods are limited to allocating robots in their entirety to only one task at a time. This approach employs only a subset of a robot's sensors and actuators, leaving other robot resources unused. Our aim is to enable a robot to make full use of its capabilities by having an individual robot multitask, distributing its sensors and actuators to multiple concurrent activities. We propose a new architectural framework based on Hierarchical Task Trees that supports multitasking through a new representation of robot behaviors that explicitly encodes the robot resources (sensors and actuators) and the environmental conditions needed for execution. This architecture was validated on a two-arm, mobile, PR2 humanoid robot, performing tasks with multiple types of execution constraints. 
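The key representational idea above, behaviors that explicitly declare the robot resources they need, can be sketched in a few lines. The behavior and resource names are illustrative stand-ins, not the paper's actual PR2 configuration:

```python
# Minimal sketch of resource-aware behaviors: two behaviors may execute
# concurrently only if their declared sensor/actuator sets do not overlap.
# All behavior and resource names here are invented for illustration.

class Behavior:
    def __init__(self, name, resources):
        self.name = name
        self.resources = frozenset(resources)

def can_run_concurrently(a, b):
    """True if the two behaviors use disjoint robot resources."""
    return a.resources.isdisjoint(b.resources)

track = Behavior("track_person", {"head_camera", "pan_tilt_unit"})
carry = Behavior("carry_object", {"left_arm", "right_arm"})
wave  = Behavior("wave_hello",   {"right_arm"})

print(can_run_concurrently(track, carry))  # True: disjoint resource sets
print(can_run_concurrently(carry, wave))   # False: both need the right arm
```

Encoding resources explicitly lets a scheduler allocate a robot's sensors and actuators to several tasks at once instead of dedicating the whole robot to one task.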
  4. The field of automated face verification has become saturated in recent years, with state-of-the-art methods outperforming humans on all benchmarks. Many researchers would say that face verification is close to being a solved problem. We argue that evaluation datasets are not challenging enough, and that there is still significant room for improvement in automated face verification techniques. This paper introduces the DoppelVer dataset, a challenging face verification dataset consisting of doppelganger pairs. Doppelgangers are pairs of individuals that are extremely visually similar, oftentimes mistaken for one another. With this dataset, we introduce two challenging protocols: doppelganger and Visual Similarity from Embeddings (ViSE). The doppelganger protocol utilizes doppelganger pairs as negative verification samples. The ViSE protocol selects negative pairs by isolating image samples that are very close together in a particular embedding space. In order to demonstrate the challenge that the DoppelVer dataset poses, we evaluate a state-of-the-art face verification method on the dataset. Our experiments demonstrate that the DoppelVer dataset is significantly more challenging than its predecessors, indicating that there is still room for improvement in face verification technology. 
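A minimal sketch of the selection idea behind a ViSE-style protocol: choose as negatives the cross-identity pairs that lie closest together in an embedding space. The embeddings below are random stand-ins, not DoppelVer face embeddings:

```python
import numpy as np

def hardest_negative_pairs(embeddings, labels, k=1):
    """Return the k cross-identity index pairs with highest cosine similarity."""
    # L2-normalize rows so the dot product equals cosine similarity.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = e @ e.T
    pairs = []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] != labels[j]:          # different identities only
                pairs.append((sim[i, j], i, j))
    pairs.sort(reverse=True)                    # most similar first
    return [(i, j) for _, i, j in pairs[:k]]

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 128))                 # stand-in embeddings
ids = [0, 0, 1, 1, 2, 2]                        # stand-in identity labels
print(hardest_negative_pairs(emb, ids, k=2))
```

Selecting negatives this way isolates exactly the pairs a verification model is most likely to confuse, which is what makes the protocol hard.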
  5. In an efficient and flexible human-robot collaborative work environment, a robot team member must be able to recognize both explicit requests and implied actions from human users. Identifying "what to do" in such cases requires an agent to construct associations between objects, their actions, and the effects of those actions on the environment. To this end, we introduce semantic memory to relate explicit cues to the available objects and the skills required to make "tea" and a "sandwich". We have extended our previous hierarchical robot control architecture to execute the most appropriate task based on both user feedback and environmental context. To validate this system, two types of skills were implemented in the hierarchical task tree: 1) tea-making skills and 2) sandwich-making skills. During the conversation between the robot and the human, the robot determined the hidden context using an ontology and acted accordingly. For instance, if the person says "I am thirsty" or "It is cold outside", the robot performs the tea-making skill; in contrast, if the person says "I am hungry" or "I need something to eat", the robot makes the sandwich. A humanoid Baxter robot was used for this experiment. We tested three scenarios with objects at different positions on the table for each skill and observed that, in all cases, the robot used only objects relevant to the skill. 
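The cue-to-skill association can be illustrated with a toy lookup. The cues and skill names below are invented stand-ins for the paper's ontology-based semantic memory, which relates utterances to objects and skills:

```python
# Toy stand-in for ontology-based semantic memory: each explicit cue word
# is associated with the skill it implies. Cues/skills are illustrative.

SEMANTIC_MEMORY = {
    "thirsty": "tea_making",
    "cold": "tea_making",
    "hungry": "sandwich_making",
    "eat": "sandwich_making",
}

def select_skill(utterance):
    """Map a natural-language cue to the most appropriate skill, if any."""
    words = utterance.lower().split()
    for cue, skill in SEMANTIC_MEMORY.items():
        if cue in words:
            return skill
    return None  # no hidden context recognized

print(select_skill("I am thirsty"))             # tea_making
print(select_skill("I need something to eat"))  # sandwich_making
```

The actual system reasons over an ontology rather than a flat dictionary, but the mapping from implied context to executable skill follows this shape.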
  6. In this work we address the System-of-Systems reassembling operation of a marsupial team comprising a hybrid Unmanned Aerial Vehicle and a legged locomotion robot, relying solely on vision-based systems and assisted by Deep Learning. The target application domain is that of large-scale field surveying operations under the presence of wireless communication disruptions. While most real-world field deployments of multi-robot systems assume some degree of wireless communication to coordinate key tasks such as multi-agent rendezvous, a desirable feature against unrecoverable communication failures or radio degradation due to jamming cyber-attacks is the ability for autonomous systems to robustly execute their mission with onboard perception. This is especially true for marsupial air/ground teams, wherein landing onboard the ground robot is required. We propose a pipeline that relies on Deep Neural Network-based Vehicle-to-Vehicle detection from aerial views acquired by flying at altitudes typical of Micro Aerial Vehicle-based real-world surveying operations, such as near the border of the 400 ft Above Ground Level window. We present the minimal computing and sensing suite that supports its execution onboard a fully autonomous micro-tiltrotor aircraft which detects, approaches, and lands onboard a Boston Dynamics Spot legged robot. We present extensive experimental studies that validate this marsupial aerial/ground robot team's capacity to safely reassemble during the airborne scouting phase without the need for wireless communication. 
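The detect-approach-land sequence can be sketched as a simple state machine. The thresholds, state names, and transition conditions below are invented for illustration and are not the system's actual parameters:

```python
# Hypothetical sketch: the aerial robot advances from scouting to landing
# as its onboard detector reports the ground robot with enough confidence.
# Threshold values and state names are invented, not the paper's.

DETECT_THRESHOLD = 0.8   # minimum detector confidence to begin the approach
LANDING_ALTITUDE = 3.0   # meters AGL at which the landing phase starts

def next_state(state, detection_confidence, altitude_agl):
    if state == "SCOUT" and detection_confidence >= DETECT_THRESHOLD:
        return "APPROACH"
    if state == "APPROACH" and altitude_agl <= LANDING_ALTITUDE:
        return "LAND"
    return state

state = "SCOUT"
for conf, alt in [(0.3, 120.0), (0.9, 120.0), (0.95, 2.5)]:
    state = next_state(state, conf, alt)
print(state)  # LAND
```

Because every transition depends only on onboard perception (detector confidence and altitude), the sequence needs no wireless link to the ground robot.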
  7. In this work we address the flexible physical docking-and-release as well as the recharging needs of a marsupial system comprising an autonomous tiltrotor hybrid Micro Aerial Vehicle and a high-end legged locomotion robot. Within persistent monitoring and emergency response situations, such aerial/ground robot teams can offer rapid situational awareness by taking off from the mobile ground robot and scouting a wide area from the sky. For this type of operational profile to retain its long-term effectiveness, regrouping via landing and docking of the aerial robot onboard the ground one is a key requirement. Moreover, onboard recharging is a necessity in order to perform systematic missions. We present a framework comprising: a novel landing mechanism with recharging capabilities embedded into its design, an external battery-based recharging extension for our previously developed power-harvesting Micro Aerial Vehicle module, and a strategy for reliable landing and docking-and-release between the two robots. We specifically address the need for this system to be ferried by a quadruped ground system while remaining reliable during aggressive legged locomotion when traversing harsh terrain. We present conclusive experimental validation studies by deploying our solution on a marsupial system comprising the MiniHawk micro tiltrotor and the Boston Dynamics Spot legged robot. 
  8. Mobile robots must navigate efficiently, reliably, and appropriately around people when acting in shared social environments. For robots to be accepted in such environments, we explore robot navigation for the social contexts of each setting. Navigating through dynamic environments while considering only a collision-free path has long been solved. In human-robot environments, the challenge is no longer simply navigating efficiently from one point to another: autonomously detecting the context and adapting to an appropriate social navigation strategy is vital for social robots' long-term applicability in dense human environments. As complex social environments, museums are well suited to studying such behavior, as they contain many different navigation contexts in a small space. Our prior Socially-Aware Navigation model considered context classification, object detection, and pre-defined rules to define navigation behavior in specific contexts, such as a hallway or a queue. This work uses environmental context, object information, and more realistic interaction rules for complex social spaces. In the first part of the project, we convert real-world interactions into algorithmic rules for use in a robot's navigation system. Moreover, we use context recognition, object detection, and scene data for context-appropriate rule selection. We introduce our methodology for studying social behaviors in complex contexts, several analyses of our text corpus for museums, and a presentation of the extracted social norms. Finally, we demonstrate the application of some of the rules in scenarios in the simulation environment. 
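The context-appropriate rule selection described above can be sketched as a lookup from detected context and objects to a navigation rule. The contexts, objects, and rule names below are invented examples of social norms converted into algorithmic rules:

```python
# Illustrative sketch only: context labels, object labels, and rule names
# are invented examples of museum social norms encoded as navigation rules.

NAVIGATION_RULES = {
    ("hallway", None): "keep_right_and_pass",
    ("queue", "person"): "join_end_of_queue",
    ("exhibit", "artwork"): "keep_distance_and_slow_down",
}

def select_rule(context, detected_object=None):
    """Pick a context-appropriate rule, falling back to generic avoidance."""
    return NAVIGATION_RULES.get((context, detected_object),
                                "default_collision_avoidance")

print(select_rule("queue", "person"))   # join_end_of_queue
print(select_rule("atrium"))            # default_collision_avoidance
```

In the full system, the context and object labels would come from the scene-recognition and object-detection modules, with the fallback rule preserving baseline collision-free navigation in unrecognized contexts.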