

Search for: All records

Award ID contains: 2150394

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. This paper addresses the problem of dynamic allocation of robot resources to tasks with hierarchical representations and multiple types of execution constraints, with the goal of enabling single-robot multitasking capabilities. Although the vast majority of robot platforms are equipped with more than one sensor (cameras, lasers, sonars) and several actuators (wheels/legs, two arms), which would in principle allow the robot to concurrently work on multiple tasks, existing methods are limited to allocating robots in their entirety to only one task at a time. This approach employs only a subset of a robot's sensors and actuators, leaving other robot resources unused. Our aim is to enable a robot to make full use of its capabilities by multitasking: distributing its sensors and actuators across multiple concurrent activities. We propose a new architectural framework based on Hierarchical Task Trees that supports multitasking through a new representation of robot behaviors that explicitly encodes the robot resources (sensors and actuators) and the environmental conditions needed for execution. The architecture was validated on a PR2, a two-armed mobile humanoid robot, performing tasks with multiple types of execution constraints.
    Free, publicly-accessible full text available December 12, 2024
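    A minimal sketch (in Python) of the resource-aware allocation idea described above. The names (Behavior, allocate) and the resource and condition sets are illustrative assumptions, not the paper's actual interfaces: each behavior declares the sensors/actuators and environmental conditions it needs, and a greedy allocator runs concurrently any behaviors whose resource sets do not overlap.

        # Hypothetical sketch: behaviors declare required resources and
        # conditions; behaviors with disjoint resource sets run concurrently.
        from dataclasses import dataclass

        @dataclass
        class Behavior:
            name: str
            resources: frozenset                     # sensors/actuators needed
            preconditions: frozenset = frozenset()   # environmental conditions

        def allocate(behaviors, available, world_state):
            """Greedily select behaviors whose resource needs are disjoint."""
            running, free = [], set(available)
            for b in behaviors:                      # e.g. ordered by priority
                if b.preconditions <= world_state and b.resources <= free:
                    running.append(b)
                    free -= b.resources              # these resources are claimed
            return running

        available = {"left_arm", "right_arm", "base", "camera", "laser"}
        world = {"object_visible", "path_clear"}
        tasks = [
            Behavior("pick_object", frozenset({"left_arm", "camera"}),
                     frozenset({"object_visible"})),
            Behavior("navigate", frozenset({"base", "laser"}),
                     frozenset({"path_clear"})),
            Behavior("wave", frozenset({"left_arm"})),
        ]
        print([b.name for b in allocate(tasks, available, world)])
        # -> ['pick_object', 'navigate']  (wave waits for the left arm)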
  2. The field of automated face verification has become saturated in recent years, with state-of-the-art methods outperforming humans on all benchmarks. Many researchers would say that face verification is close to being a solved problem. We argue that evaluation datasets are not challenging enough, and that there is still significant room for improvement in automated face verification techniques. This paper introduces the DoppelVer dataset, a challenging face verification dataset consisting of doppelganger pairs. Doppelgangers are pairs of individuals that are extremely visually similar, oftentimes mistaken for one another. With this dataset, we introduce two challenging protocols: doppelganger and Visual Similarity from Embeddings (ViSE). The doppelganger protocol utilizes doppelganger pairs as negative verification samples. The ViSE protocol selects negative pairs by isolating image samples that are very close together in a particular embedding space. In order to demonstrate the challenge that the DoppelVer dataset poses, we evaluate a state-of-the-art face verification method on the dataset. Our experiments demonstrate that the DoppelVer dataset is significantly more challenging than its predecessors, indicating that there is still room for improvement in face verification technology. 
    Free, publicly-accessible full text available December 1, 2024
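    A minimal sketch (in Python) of the ViSE-style negative-pair selection described above, under stated assumptions: the embeddings here are random placeholders standing in for a real face-embedding model, and the number of pairs kept is arbitrary; the dataset's actual protocol is defined in the paper. Negatives are cross-identity image pairs whose embeddings lie closest together.

        # Hypothetical hard-negative mining by embedding similarity.
        import numpy as np

        rng = np.random.default_rng(0)
        embeddings = rng.normal(size=(6, 128))        # one embedding per image
        embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
        identities = np.array([0, 0, 1, 1, 2, 2])     # identity label per image

        sims = embeddings @ embeddings.T              # cosine similarity matrix
        pairs = []
        for i in range(len(identities)):
            for j in range(i + 1, len(identities)):
                if identities[i] != identities[j]:    # cross-identity pairs only
                    pairs.append((sims[i, j], i, j))

        pairs.sort(reverse=True)                      # most similar first
        for s, i, j in pairs[:3]:                     # keep the closest pairs
            print(f"negative pair: images {i} and {j}, similarity {s:.3f}")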
  3. In an efficient and flexible human-robot collaborative work environment, a robot team member must be able to recognize both explicit requests and implied actions from human users. Identifying “what to do” in such cases requires an agent to construct associations between objects, their actions, and the effects of those actions on the environment. To this end, we introduce semantic memory to interpret explicit cues and relate them to the available objects and the skills required to make “tea” and “sandwich”. We have extended our previous hierarchical robot control architecture with the capability to execute the most appropriate task based on both feedback from the user and the environmental context. To validate this system, two types of skills were implemented in the hierarchical task tree: 1) tea-making skills and 2) sandwich-making skills. During the conversation between the robot and the human, the robot determined the hidden context using an ontology and began to act accordingly. For instance, if the person says “I am thirsty” or “It is cold outside”, the robot starts to perform the tea-making skill; in contrast, if the person says “I am hungry” or “I need something to eat”, the robot makes a sandwich. A Baxter humanoid robot was used for this experiment. We tested three scenarios for each skill, with objects at different positions on the table. We observed that in all cases, the robot used only objects that were relevant to the skill.
    Free, publicly-accessible full text available October 1, 2024
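    An illustrative toy version (in Python) of the cue-to-skill inference described above, assuming a hand-written lookup in place of a real ontology; all names and mappings here are hypothetical. Utterances map to inferred needs, and needs map to skills in the task tree.

        # Hypothetical ontology fragment: utterance -> need -> skill.
        CUE_TO_NEED = {
            "i am thirsty": "drink",
            "it is cold outside": "drink",    # a warm drink addresses the cold
            "i am hungry": "food",
            "i need something to eat": "food",
        }
        NEED_TO_SKILL = {"drink": "make_tea", "food": "make_sandwich"}

        def select_skill(utterance):
            need = CUE_TO_NEED.get(utterance.lower().rstrip("."))
            return NEED_TO_SKILL.get(need, "ask_for_clarification")

        print(select_skill("I am thirsty"))             # -> make_tea
        print(select_skill("I need something to eat"))  # -> make_sandwich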
  4. In this work we address the System-of-Systems reassembling operation of a marsupial team comprising a hybrid Unmanned Aerial Vehicle and a Legged Locomotion robot, relying solely on vision-based systems and assisted by Deep Learning. The target application domain is large-scale field surveying operations in the presence of wireless communication disruptions. While most real-world field deployments of multi-robot systems assume some degree of wireless communication to coordinate key tasks such as multi-agent rendezvous, a desirable safeguard against unrecoverable communication failures or radio degradation due to jamming cyber-attacks is the ability of autonomous systems to robustly execute their mission using onboard perception alone. This is especially true for marsupial air/ground teams, wherein landing onboard the ground robot is required. We propose a pipeline that relies on Deep Neural Network-based Vehicle-to-Vehicle detection from aerial views acquired at typical altitudes for Micro Aerial Vehicle-based real-world surveying operations, such as near the border of the 400 ft Above Ground Level window. We present the minimal computing and sensing suite that supports its execution onboard a fully autonomous micro-Tiltrotor aircraft, which detects, approaches, and lands onboard a Boston Dynamics Spot legged robot. We present extensive experimental studies validating this marsupial aerial/ground team's capacity to safely reassemble during the airborne scouting phase without the need for wireless communication.
    Free, publicly-accessible full text available June 6, 2024
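    One step of such a detect-approach-land pipeline, sketched in Python under simplifying assumptions (downward-facing pinhole camera, flat ground, made-up intrinsics and altitude): converting a detected bounding-box center in the aerial image into a metric ground-plane offset toward the ground robot.

        import math

        def pixel_to_ground_offset(u, v, fx, fy, cx, cy, agl_m):
            """Intersect the ray through pixel (u, v) with flat ground agl_m below."""
            x = (u - cx) / fx             # normalized image coordinates
            y = (v - cy) / fy
            return x * agl_m, y * agl_m   # metric offset on the ground plane

        # Detection centered at (800, 300) in a 1280x720 image while flying
        # ~120 m AGL (roughly the 400 ft window mentioned above).
        dx, dy = pixel_to_ground_offset(800, 300, fx=900.0, fy=900.0,
                                        cx=640.0, cy=360.0, agl_m=120.0)
        print(f"ground offset: ({dx:.1f}, {dy:.1f}) m, "
              f"range {math.hypot(dx, dy):.1f} m")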
  5. Mobile robots must navigate efficiently, reliably, and appropriately around people when acting in shared social environments. For robots to be accepted in such environments, we explore robot navigation for the social contexts of each setting. Navigating through dynamic environments while considering only collision avoidance has long been solved. In human-robot environments, the challenge is no longer simply navigating efficiently from one point to another: autonomously detecting the context and adopting an appropriate social navigation strategy is vital for social robots' long-term applicability in dense human environments. As complex social environments, museums are suitable for studying such behavior, as they contain many different navigation contexts in a small space. Our prior Socially-Aware Navigation model considered context classification, object detection, and pre-defined rules to define navigation behavior in more specific contexts, such as a hallway or queue. This work uses environmental context, object information, and more realistic interaction rules for complex social spaces. In the first part of the project, we convert real-world interactions into algorithmic rules for use in a robot's navigation system, and we use context recognition, object detection, and scene data for context-appropriate rule selection. We introduce our methodology for studying social behaviors in complex contexts, present several analyses of our museum text corpus, and present the extracted social norms. Finally, we demonstrate applying some of the rules in scenarios in the simulation environment.
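    A hypothetical, highly simplified version (in Python) of the context-appropriate rule selection described above: a recognized scene context plus object detections picks a navigation parameter set, and object evidence can override the scene classifier. The contexts, rules, and numeric values are illustrative only.

        # Hypothetical context -> navigation-rule lookup for a social planner.
        NAV_RULES = {
            "hallway": {"keep_side": "right", "max_speed": 0.8, "person_margin_m": 0.6},
            "queue":   {"keep_side": None,    "max_speed": 0.3, "person_margin_m": 0.4},
            "gallery": {"keep_side": None,    "max_speed": 0.5, "person_margin_m": 1.0},
        }

        def select_rules(scene_context, detected_objects):
            # A visible line of people implies queueing behavior even when the
            # scene classifier reports a different context.
            if detected_objects.count("person_in_line") >= 3:
                return "queue", NAV_RULES["queue"]
            return scene_context, NAV_RULES.get(scene_context, NAV_RULES["hallway"])

        ctx, rules = select_rules("gallery", ["painting", "person_in_line"] * 3)
        print(ctx, rules)   # -> queue {...}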