

Search for: All records

Award ID contains: 1757929


  1. In recent years, there has been growing interest in so-called smart cities, which use technology to connect and enhance the lives of their citizens. Smart cities rely on many Internet of Things (IoT) devices, such as sensors and video cameras, that are interconnected to provide constant feedback and up-to-date information on everything that is happening. Despite their benefits, these cities also introduce numerous new vulnerabilities and are now susceptible to cyber-attacks that aim to “alter, disrupt, deceive, degrade, or destroy computer systems.” Through the use of an educational and research-based IoT test-bed with multiple networking layers and heterogeneous devices connected to simultaneously support networking research, anomaly detection, and security principles, we can pinpoint some of these vulnerabilities. This work contributes potential solutions to these vulnerabilities that can hopefully be replicated in smart cities around the world. Specifically, in the transportation section of our educational smart city, several vulnerabilities were discovered in the signal lights, street lights, and the city's train network. To conduct this research, two scenarios were developed: inside-the-network security and network perimeter security. For the latter, we found extensive vulnerabilities that would allow an attacker to map the entire smart city sub-network. Solutions to this problem are outlined that utilize an Intrusion Detection System and port mirroring. However, while we were able to exploit the city's Programmable Logic Controller (PLC) once inside the network, we found that, due to dated Supervisory Control and Data Acquisition (SCADA) systems, there were almost no solutions to these exploits.
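The perimeter scenario above turns on an attacker's ability to map the sub-network, which is exactly the behavior an IDS fed by port mirroring would watch for. Below is a minimal sketch of that idea in Python; the flow-record fields, threshold, and addresses are illustrative assumptions, not the test-bed's actual IDS rules.

```python
from collections import defaultdict

# Illustrative threshold: flag a source that touches many distinct
# destination ports within one observation window (assumed value).
PORT_SCAN_THRESHOLD = 50

def detect_port_scans(flow_records, threshold=PORT_SCAN_THRESHOLD):
    """Flag sources whose distinct (dst_ip, dst_port) fan-out suggests
    network mapping. `flow_records` is assumed to be an iterable of
    dicts with 'src_ip', 'dst_ip', and 'dst_port' keys, e.g. parsed
    from traffic mirrored to the IDS."""
    fanout = defaultdict(set)
    for flow in flow_records:
        fanout[flow["src_ip"]].add((flow["dst_ip"], flow["dst_port"]))
    return {src: len(targets) for src, targets in fanout.items()
            if len(targets) >= threshold}

# Example: a single host probing many ports on one subnet gets flagged.
window = [{"src_ip": "10.0.0.99", "dst_ip": "10.0.1.5", "dst_port": p}
          for p in range(1, 120)]
print(detect_port_scans(window))   # {'10.0.0.99': 119}
```

A real deployment would also track time windows and whitelist known scanners, but the fan-out count captures the core signal.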
  2. We present a context classification pipeline to allow a robot to change its navigation strategy based on the observed social scenario. Socially-Aware Navigation considers social behavior in order to improve navigation around people. Most of the existing research uses different techniques to incorporate social norms into robot path planning for a single context, and methods that work for hallway behavior might not work for approaching people, and so on. We developed a high-level decision-making subsystem, a model-based context classifier, and a multi-objective optimization-based local planner to achieve socially-aware trajectories for autonomously sensed contexts. Using the context classification system, the robot can select social objectives that are then used by a Pareto Concavity Elimination Transformation (PaCcET)-based local planner to generate safe, comfortable, and socially appropriate trajectories for its environment. This was tested and validated in multiple environments on a Pioneer mobile robot platform; results show that the robot could autonomously select and account for social objectives related to navigation.
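As a rough illustration of how a detected context can change which social objectives the local planner optimizes, the sketch below scores candidate trajectories with a simple weighted sum. The contexts, objective names, and weights are invented for the example, and the weighted sum is only a stand-in for the PaCcET-based ranking used in the actual system.

```python
# Hypothetical objective weights per social context (lower cost = better).
CONTEXT_OBJECTIVES = {
    "hallway":  {"path_length": 1.0, "personal_space": 2.0},
    "queue":    {"path_length": 0.5, "queue_order": 3.0, "personal_space": 1.5},
    "approach": {"path_length": 0.5, "approach_angle": 2.5, "personal_space": 1.0},
}

def score_trajectory(objective_values, context):
    """Weighted-sum stand-in for the PaCcET-based ranking: combine the
    per-objective costs of one candidate using the weights selected for
    the detected context."""
    weights = CONTEXT_OBJECTIVES[context]
    return sum(weights[name] * objective_values.get(name, 0.0) for name in weights)

def pick_trajectory(candidates, context):
    """Return the candidate (dict of objective costs) with the lowest score."""
    return min(candidates, key=lambda c: score_trajectory(c, context))

candidates = [
    {"path_length": 3.0, "personal_space": 0.1, "queue_order": 0.0},
    {"path_length": 2.0, "personal_space": 0.9, "queue_order": 0.4},
]
print(pick_trajectory(candidates, "queue"))  # prefers the queue-respecting path
```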
  3. Robots' autonomous navigation in public spaces, and the social awareness suited to the environmental context, are active areas of investigation in HRI. In this paper, we present a methodology to achieve this goal. While most navigation models focus on objects, context, or human presence in the scene, we incorporate all three to perceive the environment more accurately. Beyond scene perception, the other important aspect of socially aware navigation is the set of social norms associated with the context. To capture these norms, we conducted interviews with museum visitors, volunteers, and staff to gather information about museums and convert the text data into social rules. This effort is currently in progress; here we present a framework for future study and analysis of this problem.
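One plausible way to make interview-derived norms usable by a navigation planner is to encode them as per-context rule sets that collapse into planner constraints. The sketch below assumes such a representation; the contexts, rules, and numeric limits are invented, not the ones extracted from the museum interviews.

```python
from dataclasses import dataclass

@dataclass
class SocialRule:
    """A single norm extracted from interview text (fields are assumptions)."""
    description: str       # human-readable norm, e.g. from a coded transcript
    min_distance_m: float  # spacing the rule implies for the planner
    max_speed_mps: float   # speed cap the rule implies

# Hypothetical rule sets keyed by the perceived context.
MUSEUM_RULES = {
    "exhibit_viewing": [SocialRule("Do not pass between a visitor and an exhibit", 1.2, 0.4)],
    "guided_tour":     [SocialRule("Wait for the group to move before crossing", 1.5, 0.3)],
    "open_gallery":    [SocialRule("Keep to the edges of open walking areas", 0.8, 0.7)],
}

def constraints_for(context):
    """Collapse a context's rules into the tightest distance/speed limits."""
    rules = MUSEUM_RULES.get(context, [])
    if not rules:
        return None
    return (max(r.min_distance_m for r in rules), min(r.max_speed_mps for r in rules))

print(constraints_for("guided_tour"))   # (1.5, 0.3)
```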
  4. First impressions make up an integral part of our interactions with other humans by providing an instantaneous judgment of the trustworthiness, dominance, and attractiveness of an individual prior to engaging in any other form of interaction. Unfortunately, this can lead to unintentional bias in situations that have serious consequences, whether in judicial proceedings, career advancement, or politics. The ability to automatically recognize social traits presents a number of highly useful applications, from minimizing bias in social interactions to providing insight into how our own facial attributes are interpreted by others. However, while first impressions are well studied in the field of psychology, automated methods for predicting social traits are largely non-existent. In this work, we demonstrate the feasibility of two automated approaches, multi-label classification (MLC) and multi-output regression (MOR), for first impression recognition from faces. We show that both approaches can predict social traits with better-than-chance accuracy, though there is still significant room for improvement. We evaluate ethical concerns and detail application areas for future work in this direction.
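To make the two formulations concrete, the sketch below trains both a multi-label classifier (MLC) and a multi-output regressor (MOR) on hypothetical face-feature vectors using scikit-learn. The feature dimensionality, trait encoding, and base estimators are assumptions for illustration, not the models evaluated in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.multioutput import MultiOutputClassifier, MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                 # stand-in face feature vectors

# MLC: each trait is a binary label (e.g. "appears trustworthy" yes/no).
Y_labels = rng.integers(0, 2, size=(200, 3))   # trustworthiness, dominance, attractiveness
mlc = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y_labels)

# MOR: each trait is a continuous rating (e.g. mean annotator score).
Y_scores = rng.uniform(1.0, 7.0, size=(200, 3))
mor = MultiOutputRegressor(Ridge()).fit(X, Y_scores)

x_new = rng.normal(size=(1, 64))
print("MLC prediction:", mlc.predict(x_new))   # binary trait decisions
print("MOR prediction:", mor.predict(x_new))   # continuous trait estimates
```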
  5. As Human-Robot Interaction becomes more sophisticated, measuring the performance of a social robot is crucial to gauging the effectiveness of its behavior. However, social behavior does not necessarily have the strict performance metrics that other autonomous behavior can have. Indeed, when considering robot navigation, a socially appropriate action may be one that is sub-optimal, resulting in longer paths and longer times to reach a goal. Instead, we can rely on subjective assessments of the robot's social performance by a participant in a robot interaction or by a bystander. In this paper, we use the newly validated Perceived Social Intelligence (PSI) scale to examine the perception of non-humanoid robots in non-verbal social scenarios. We show that there are significant differences between the perceived social intelligence of robots exhibiting socially-aware navigation (SAN) behavior and that of robots using a traditional navigation planner, in scenarios such as waiting in a queue and group behavior.
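A minimal sketch of how per-condition PSI-style ratings might be compared is shown below; the rating values and the use of an independent-samples t-test are assumptions for illustration and do not reproduce the paper's participants or analysis.

```python
from scipy import stats

# Hypothetical 1-5 PSI-style ratings from two groups of participants:
# one watched the SAN planner, the other the traditional planner.
san_ratings         = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]
traditional_ratings = [3, 2, 3, 4, 3, 2, 3, 3, 2, 3]

t_stat, p_value = stats.ttest_ind(san_ratings, traditional_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```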
  6. As systems that utilize computer vision move into the public domain, methods of calibration need to become easier to use. Though multi-plane LiDAR systems have proven to be useful for vehicles and large robotic platforms, many smaller platforms and low-cost solutions still require 2D LiDAR combined with RGB cameras. Current methods of calibrating these sensors make assumptions about camera and laser placement and/or require complex calibration routines. In this paper we propose a new method of feature correspondence in the two sensors and an optimization method capable of using a calibration target with unknown lengths in its geometry. Our system is designed with an inexperienced layperson as the intended user, which has led us to remove as many assumptions about both the target and laser as possible. We show that our system is capable of calibrating the 2-sensor system from a single sample in configurations other methods are unable to handle.
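The optimization step can be pictured as a nonlinear least-squares fit over the unknown laser-to-camera transform. The sketch below is a drastically simplified stand-in that recovers a planar rigid transform from already-matched laser/camera correspondences; the actual method also handles the unknown target lengths and the full camera geometry.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, laser_pts, cam_pts):
    """Residuals of a planar rigid transform (theta, tx, ty) that maps
    2D laser points onto matched points expressed in the camera frame."""
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    transformed = laser_pts @ R.T + np.array([tx, ty])
    return (transformed - cam_pts).ravel()

# Synthetic matched correspondences (assumed already extracted from the target).
rng = np.random.default_rng(1)
laser_pts = rng.uniform(-2.0, 2.0, size=(20, 2))
true_theta, true_t = 0.3, np.array([0.15, -0.05])
c, s = np.cos(true_theta), np.sin(true_theta)
cam_pts = laser_pts @ np.array([[c, -s], [s, c]]).T + true_t

fit = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(laser_pts, cam_pts))
print("estimated (theta, tx, ty):", fit.x)   # should recover ~(0.3, 0.15, -0.05)
```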
  7. This paper presents a novel approach to robot task learning from language-based instructions, which focuses on increasing the complexity of task representations that can be taught through verbal instruction. The major contribution is the development of a framework for directly mapping a complex verbal instruction to an executable task representation from a single training experience. The method can handle the following types of complexity: 1) instructions that use conjunctions to convey complex execution constraints (such as alternative paths of execution, sequential or non-ordering constraints, and hierarchical representations) and 2) instructions that use prepositions and multiple adjectives to specify action/object parameters relevant to the task. Specific algorithms have been developed for handling conjunctions, adjectives, and prepositions, as well as for translating the parsed instructions into parameterized executable task representations. The paper describes validation experiments with a PR2 humanoid robot learning new tasks from verbal instruction, as well as an additional range of utterances that can be parsed into executable controllers by the proposed system.
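As a toy illustration of mapping conjunctions onto execution constraints, the sketch below splits an instruction into sequential, alternative, and unordered sub-tasks. The keyword rules and the task-tree structure are simplified assumptions, not the parsing algorithms developed in the paper.

```python
import json

def parse_instruction(text):
    """Turn a flat instruction string into a nested task representation:
    'then' introduces sequential constraints, 'or' alternative paths,
    'and' unordered siblings (simplified keyword rules assumed here)."""
    for keyword, node_type in ((" then ", "sequence"),
                               (" or ", "alternative"),
                               (" and ", "unordered")):
        if keyword in text:
            parts = text.split(keyword)
            return {"type": node_type,
                    "children": [parse_instruction(p.strip()) for p in parts]}
    return {"type": "primitive", "action": text}

instruction = ("pick up the red cup and the blue plate "
               "then place them on the left table or the right table")
print(json.dumps(parse_instruction(instruction), indent=2))
```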
  8. Robotic systems typically follow a rigid approach to task execution, performing the necessary steps in a specific order but failing when they have to cope with issues that arise during execution. We propose an approach that handles such cases through dialogue and human-robot collaboration. The proposed approach contributes a hierarchical control architecture that 1) autonomously detects and is cognizant of task execution failures, 2) initiates a dialogue with a human helper to obtain assistance, and 3) enables collaborative human-robot task execution through extended dialogue in order to 4) ensure robust execution of hierarchical tasks with complex constraints, such as sequential, non-ordering, and multiple paths of execution. The architecture ensures that these constraints are adhered to throughout the entire task execution, including during failures. The architecture's recovery from issues during execution is validated by a human-robot team on a building task.
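A bare-bones version of the execute / detect-failure / ask-for-help loop is sketched below; the callbacks, step names, and retry policy are invented for illustration and do not reproduce the paper's hierarchical control architecture.

```python
def execute_task(steps, try_step, ask_human):
    """Run `steps` in order; when a step fails, open a dialogue via
    `ask_human` and retry once the helper reports the issue is resolved.
    `try_step(step) -> bool` and `ask_human(step) -> bool` are assumed
    callbacks standing in for the robot's skills and dialogue system."""
    for step in steps:
        while not try_step(step):
            print(f"Execution failure at '{step}', requesting assistance.")
            if not ask_human(step):
                print("Helper could not resolve the issue; aborting task.")
                return False
    return True

# Toy run: the second step fails once, then succeeds after "help".
attempts = {}
def try_step(step):
    attempts[step] = attempts.get(step, 0) + 1
    return not (step == "attach beam" and attempts[step] == 1)

print(execute_task(["fetch beam", "attach beam", "fasten bolt"],
                   try_step, lambda step: True))   # True
```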
  9. As systems that utilize computer vision move into the public domain, methods of calibration need to become easier to use. Though multi-plane LiDAR systems have proven to be useful for vehicles and large robotic platforms, many smaller platforms and low-cost solutions still require 2D LiDAR combined with RGB cameras. Current methods of calibrating these sensors make assumptions about camera and laser placement and/or require complex calibration routines. In this paper we propose a new method of feature correspondence in the two sensors and an optimization method capable of using a calibration target with unknown lengths in its geometry. Our system is designed with an inexperienced layperson as the intended user, which has led us to remove as many assumptions about both the target and laser as possible. We show that our system is capable of calibrating the 2-sensor system from a single sample in configurations other methods are unable to handle. 
  10. Efficient arrangement of UAVs in a swarm formation is essential to the functioning of the swarm as a temporary communication network. Such a network could assist in search and rescue efforts by providing first responders with a means of communication. We propose a user-friendly and effective system for calculating and visualizing an optimal layout of UAVs. An initial calculation to gather parameter information is followed by the proposed algorithm, which generates an optimal solution. After the proposed iterative genetic algorithm finds an optimal solution, the resulting layout is displayed in an easy-to-comprehend visualization. The system runs iteratively, adding a UAV at each intermediate solution, until a full solution is found. Information is passed between runs of the iterative genetic algorithm to reduce runtime and complexity. Testing shows that the proposed algorithm yields optimal solutions more frequently than the k-means clustering algorithm: the system finds an optimal solution 80% of the time, while k-means clustering is unable to find a solution when presented with a complex problem.
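To ground the genetic-algorithm idea, the sketch below evolves positions for a fixed number of relay UAVs so that as many ground users as possible fall within an assumed radio radius. The fitness function, operators, and parameters are illustrative assumptions; the proposed system additionally adds UAVs between runs and reuses information across iterations.

```python
import random

AREA, RADIUS, N_UAVS = 100.0, 20.0, 3
USERS = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(30)]

def coverage(layout):
    """Fitness: number of users within RADIUS of at least one UAV."""
    return sum(any((ux - x) ** 2 + (uy - y) ** 2 <= RADIUS ** 2
                   for x, y in layout) for ux, uy in USERS)

def random_layout():
    return [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N_UAVS)]

def mutate(layout, sigma=5.0):
    # Jitter each UAV position, clamped to the operating area.
    return [(min(AREA, max(0.0, x + random.gauss(0, sigma))),
             min(AREA, max(0.0, y + random.gauss(0, sigma)))) for x, y in layout]

def crossover(a, b):
    # Child takes each UAV position from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

def run_ga(pop_size=40, generations=60):
    population = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=coverage, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=coverage)

best = run_ga()
print("best layout:", best, "covers", coverage(best), "of", len(USERS), "users")
```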