The pervasive operation of consumer drones, or small-scale unmanned aerial vehicles (UAVs), has raised serious concerns about the privacy threats they pose to the public. In recent years, privacy invasion events caused by consumer drones have been frequently reported, making timely detection of invading drones an emerging task. Existing solutions using active radar, video, or acoustic sensors are usually too costly (especially for individuals) or subject to various constraints (e.g., requiring a visual line of sight). Recent research on drone detection with passive RF signals offers an opportunity for low-cost deployment of drone detectors on commodity wireless devices. However, the state of the art in this direction relies on line-of-sight (LOS) RF signals, which restricts it to very constrained conditions. Support for the more common non-line-of-sight (NLOS) scenario is still missing from low-cost solutions. In this paper, we propose a novel detection system for privacy invasion by consumer drones, featuring accurate NLOS detection with low-cost hardware (under $50). By exploring and validating the relationship between drone motion and RF signals under NLOS conditions, we find that the RF signatures of drones are somewhat “amplified” by multipath in NLOS. Based on this observation, we design a two-step solution that first classifies received RSS measurements into LOS and NLOS categories; deep learning is then used to extract the signatures and ultimately detect the drones. Our experimental results show that LOS and NLOS signals can be identified with accuracies of 98.4% and 96%, respectively. Our drone detection rate under NLOS conditions is above 97% with a system implemented on a Raspberry Pi 3 B+.
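The abstract above outlines a two-step pipeline: classify each received RSS window as LOS or NLOS, then run a learned detector on the NLOS signatures. As a rough illustration of that structure only (the features, window length, labels, and models below are assumptions, not the authors' implementation), a minimal Python sketch might look like this:

```python
# Illustrative two-step pipeline: (1) classify an RSS window as LOS or NLOS,
# (2) run a drone detector on NLOS windows only. All feature choices, labels,
# and models here are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier  # stand-in for the paper's deep model

def rss_features(window):
    """Summary statistics for one window of RSS samples (dBm)."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    return [window.mean(),        # average strength
            window.std(),         # fading severity (typically larger in NLOS)
            np.ptp(window),       # peak-to-peak swing
            spectrum[1:8].sum()]  # low-frequency energy induced by drone motion

# Placeholder training data; a real system would use labeled RSS captures.
rng = np.random.default_rng(0)
windows = rng.normal(-60, 3, size=(200, 128))
nlos_labels = rng.integers(0, 2, 200)    # 1 = NLOS
drone_labels = rng.integers(0, 2, 200)   # 1 = drone present

X = np.array([rss_features(w) for w in windows])
los_nlos_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, nlos_labels)
drone_clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(
    X[nlos_labels == 1], drone_labels[nlos_labels == 1])

def detect(window):
    """Return True if a drone is detected in an NLOS window."""
    x = np.array([rss_features(window)])
    if los_nlos_clf.predict(x)[0] == 1:       # NLOS branch
        return bool(drone_clf.predict(x)[0])
    return False                              # LOS windows would use a separate detector

print(detect(windows[0]))
```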
Aerial Nondestructive Testing and Evaluation (aNDT&E)
Drones are increasingly used during routine inspections of bridges to improve data consistency, work efficiency, inspector safety, and cost effectiveness. Most drones, however, are operated manually within a visual line of sight and are thus unable to inspect long-span bridges that are not completely visible to their operators. In this paper, aerial nondestructive evaluation (aNDE) is envisioned for elevated structures such as bridges, buildings, dams, nuclear power plants, and tunnels. To enable aerial nondestructive testing (aNDT), a human-robot system will be created that integrates haptic sensing and dexterous manipulation into a drone or a structural crawler in augmented/virtual reality (AR/VR) for beyond-visual-line-of-sight (BVLOS) inspection of bridges. Some of the technical challenges and potential solutions associated with aNDT&E will be presented. Example applications of the advanced technologies will be demonstrated on simulated bridge decks with stipulated conditions. The developed human-robot system can transform current on-site inspection into future tele-inspection, minimizing the impact on traffic passing over the bridges. The automated tele-inspection can save as much as 75% in time and 95% in cost.
- Award ID(s): 2312081
- PAR ID: 10483539
- Publisher / Repository: Materials Evaluation
- Date Published:
- Journal Name: Materials Evaluation
- Volume: 81
- Issue: 1
- ISSN: 0025-5327
- Page Range / eLocation ID: 67 to 73
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Autonomous navigation of steel bridge inspection robots is essential for proper maintenance. The majority of existing robotic solutions for bridge inspection require human intervention to assist in control and navigation. In this paper, a control system framework is proposed for the previously designed ARA robot [1] that facilitates autonomous real-time navigation and minimizes human involvement. The mechanical design and control framework of the ARA robot enable two different configurations, namely the mobile and inch-worm transformations. In addition, a switching control was developed that takes 3D point clouds of steel surfaces as input and allows the robot to switch between the mobile and inch-worm transformations. The surface availability algorithm of the switching control, which considers the plane, area, and height of a candidate surface, enables the robot to perform inch-worm jumps autonomously (a minimal sketch of such a check appears after this list). The mobile transformation allows the robot to move on continuous steel surfaces and perform visual inspection of steel bridge structures. Practical experiments on actual steel bridge structures highlight the effective performance of the ARA robot with the proposed control framework for autonomous navigation during visual inspection of steel bridges.
- Ishigami G., Yoshida K. (Eds.) This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments. Robotic tele-operation in remote environments is difficult due to the lack of sufficient situational awareness, mostly caused by the stationary and limited field of view and the lack of depth perception of the robot's onboard camera. The emerging state of the practice is to use two robots, a primary and a secondary that acts as a visual assistant, to overcome the perceptual limitations of the onboard sensors by providing an external viewpoint. However, problems exist when using a tele-operated visual assistant: extra manpower, manually chosen suboptimal viewpoints, and the extra teamwork demanded of the primary and secondary operators. In this work, we use an autonomous tethered aerial visual assistant to replace the secondary robot and operator, reducing the human-robot ratio from 2:2 to 1:2. This visual assistant is able to autonomously navigate through unstructured or confined spaces in a risk-aware manner while continuously maintaining good viewpoint quality to increase the primary operator's situational awareness. With the proposed co-robot team, tele-operation missions in nuclear operations, bomb squads, disaster response, and other domains with novel tasks or highly occluded environments could benefit from reduced manpower and teamwork demand, along with improved visual assistance quality based on trustworthy risk-aware motion in cluttered environments.
- Unmanned aerial vehicles (UAVs) are becoming more common, creating a need for effective human-robot communication strategies that address the unique nature of unmanned aerial flight. Visual communication via drone flight paths, also called gestures, may prove to be an ideal method. However, the effectiveness of visual communication techniques depends on several factors, including an observer's position relative to the UAV. Previous work has studied the maximum line-of-sight distance at which observers can identify a small UAV [1], but it did not consider how changes in distance affect an observer's ability to perceive the shape of a UAV's motion. In this study, we conduct a series of online surveys to evaluate how changes in line-of-sight distance and gesture size affect observers' ability to identify and distinguish between UAV gestures. We first examine observers' ability to accurately identify gestures when adjusting a gesture's size relative to the size of the UAV. We then measure how observers' ability to identify gestures changes with respect to varying line-of-sight distances. Lastly, we consider how altering the size of a UAV gesture may improve an observer's ability to identify drone gestures from varying distances. Our results show that increasing the gesture size across varying UAV-to-gesture size ratios did not have a significant effect on participant response accuracy. We found that between 17 m and 75 m from the observer, the ability to accurately identify a drone gesture was inversely proportional to the distance between the observer and the drone. Finally, we found that maintaining a gesture's apparent size improves participant response accuracy over changing line-of-sight distances (a small geometric sketch of this scaling appears after this list).
- Tele-operated collaborative robots are used by many children for academic learning. However, as child-directed play is important for social-emotional learning, it is also important to understand how robots can facilitate play. In this article, we present findings from an analysis of a national, multi-year case study in which we explore how 53 children in grades K–12 (n = 53) used robots for self-directed play activities. The contributions of this article are as follows. First, we present empirical data on novel play scenarios that remote children created using their tele-operated robots. These play scenarios emerged in five categories of play: physical, verbal, visual, extracurricular, and wished-for play. Second, we identify two unique themes that emerged from the data: robot-mediated play as a foundational support of general friendships and as a foundational support of self-expression and identity. Third, our work found that robot-mediated play provided benefits similar to in-person play. Findings from our work will inform novel robot and HRI designs for tele-operated and social robots that facilitate self-directed play. Findings will also inform future interdisciplinary studies on robot-mediated play.
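The ARA climbing-robot item above mentions a surface availability check that considers the plane, area, and height of a candidate steel surface before an autonomous inch-worm jump. The sketch below shows one plausible way to perform such a check on a 3D point cloud with a least-squares plane fit; the thresholds, frame conventions, and acceptance logic are assumptions, not the published algorithm.

```python
# Hypothetical surface availability check for an inch-worm jump: fit a plane to a
# candidate point cloud, then test flatness, area, and reach against thresholds.
# Thresholds and logic are illustrative assumptions, not the ARA robot's algorithm.
import numpy as np

def surface_available(points, max_rms=0.01, min_area=0.04, max_height=0.30):
    """points: (N, 3) array of candidate-surface points in the robot frame (meters)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Plane normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    rms_dist = np.sqrt(np.mean((centered @ normal) ** 2))   # flatness residual (m)

    # Rough area estimate: extents of the points projected onto the fitted plane.
    in_plane = centered - np.outer(centered @ normal, normal)
    extent = np.sort(in_plane.max(axis=0) - in_plane.min(axis=0))
    area = extent[-1] * extent[-2]                           # two largest extents (m^2)

    height = abs(centroid[2])            # assumed z offset the robot must reach (m)
    return rms_dist < max_rms and area > min_area and height < max_height

# Example: a flat 0.3 m x 0.3 m patch 0.2 m away should be accepted.
rng = np.random.default_rng(1)
patch = np.column_stack([rng.uniform(-0.15, 0.15, 500),
                         rng.uniform(-0.15, 0.15, 500),
                         np.full(500, 0.2) + rng.normal(0, 0.002, 500)])
print(surface_available(patch))   # True under the assumed thresholds
```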
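The UAV gesture item above reports that identification accuracy dropped roughly in inverse proportion to distance between 17 m and 75 m, and that keeping a gesture's apparent size constant helped. The apparent size of a gesture is its visual angle, so holding it constant means scaling the physical gesture linearly with distance; the numbers below are illustrative only and not taken from the study.

```python
# Visual angle subtended by a gesture of extent s (m) at distance d (m), and the
# scaling needed to keep that angle constant as distance grows. Illustrative values.
import math

def angular_size_deg(s, d):
    """Full visual angle (degrees) of an extent s seen from distance d."""
    return math.degrees(2 * math.atan(s / (2 * d)))

s1, d1, d2 = 2.0, 17.0, 75.0            # a 2 m gesture seen at 17 m, then at 75 m
print(angular_size_deg(s1, d1))          # ~6.7 degrees at 17 m
print(angular_size_deg(s1, d2))          # ~1.5 degrees at 75 m: much harder to read
s2 = s1 * d2 / d1                        # scale the gesture linearly with distance...
print(angular_size_deg(s2, d2))          # ...to restore the same ~6.7 degree apparent size
```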