

Title: Development of a Peripheral-Central Vision System for Small UAS Tracking
With the rapid proliferation of small unmanned aircraft systems (UAS), the risk of mid-air collisions is growing, as is the risk associated with the malicious use of these systems. Airborne Detect-and-Avoid (ABDAA) and counter-UAS technologies have similar sensing requirements to detect and track airborne threats, albeit for different purposes: to avoid a collision or to neutralize a threat, respectively. These systems typically include a variety of sensors, such as electro-optical or infrared (EO/IR) cameras, RADAR, or LiDAR, and they fuse the data from these sensors to detect and track a given threat and to predict its trajectory. Camera imagery can be an effective method for detection as well as for pose estimation and threat classification, though a single camera cannot resolve range to a threat without additional information, such as knowledge of the threat geometry. To support ABDAA and counter-UAS applications, we consider a merger of two image-based sensing methods that mimic human vision: (1) a "peripheral vision" camera (i.e., with a fisheye lens) to provide a large field-of-view and (2) a "central vision" camera (i.e., with a perspective lens) to provide high resolution imagery of a specific target. Beyond the complementary ability of the two cameras to support detection and classification, the pair forms a heterogeneous stereo vision system that can support range resolution. This paper describes the initial development and testing of a peripheral-central vision system to detect, localize, and classify an airborne threat and finally to predict its path using knowledge of the threat class.
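As a rough illustration of the range-resolution idea, the sketch below back-projects a target's pixel coordinates from a perspective ("central") camera and an equidistant fisheye ("peripheral") camera into rays and triangulates their closest point. The camera models, intrinsic parameters, baseline, and the assumption that the two cameras are rotationally aligned are all illustrative; this is not the system described in the paper.

```python
# Heterogeneous-stereo range sketch: intersect the back-projected rays from a
# perspective (central) camera and a fisheye (peripheral) camera.
# Assumptions: pinhole and equidistant fisheye models, rotationally aligned
# cameras, made-up intrinsics and a 0.5 m baseline.
import numpy as np

def perspective_ray(u, v, fx, fy, cx, cy):
    """Unit ray in the central (pinhole) camera frame for pixel (u, v)."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def fisheye_ray(u, v, f, cx, cy):
    """Unit ray for an equidistant fisheye model: r = f * theta."""
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    theta = r / f                      # angle from the optical axis
    phi = np.arctan2(y, x)             # azimuth in the image plane
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays (origin, direction)."""
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Example with an assumed 0.5 m baseline along x and made-up pixel detections.
o_central, o_fisheye = np.zeros(3), np.array([0.5, 0.0, 0.0])
d_central = perspective_ray(700, 380, fx=1200, fy=1200, cx=640, cy=360)
d_fisheye = fisheye_ray(560, 350, f=300, cx=640, cy=360)
target = triangulate(o_central, d_central, o_fisheye, d_fisheye)
print("estimated target position (m):", target, "range (m):", np.linalg.norm(target))
```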
Award ID(s):
1650465
NSF-PAR ID:
10086398
Journal Name:
AIAA SciTech
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Wireless security cameras are integral components of security systems used by military installations, corporations, and, due to their increased affordability, many private homes. These cameras commonly employ motion sensors to identify that something is occurring in their fields of vision before starting to record and notifying the property owner of the activity. In this paper, we discover that the motion sensing action can disclose the location of the camera through a novel wireless camera localization technique we call MotionCompass. In short, a user who aims to avoid surveillance can find a hidden camera by creating motion stimuli and sniffing wireless traffic for a response to those stimuli. From the motion trajectories within the motion detection zone, the exact location of the camera can then be computed. We develop an Android app to implement MotionCompass. Our extensive experiments using the developed app and 18 popular wireless security cameras demonstrate that, for cameras with one motion sensor, MotionCompass can attain a mean localization error of around 5 cm in less than 140 seconds. This localization technique builds upon existing work that detects the existence of hidden cameras, extending it to pinpoint their exact location and area of surveillance.
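A minimal sketch of the geometric step such a technique implies: if the camera's motion-detection zone is treated as a sector whose apex is at the camera, the two boundary edges of the sector can be fit from points where motion stimuli start or stop triggering wireless traffic, and their intersection estimates the camera position. The sector assumption, the boundary points, and the line-fitting choice below are illustrative, not the published MotionCompass algorithm.

```python
# Estimate a camera location as the apex of its (assumed) sector-shaped
# motion-detection zone: fit a line to each zone boundary and intersect them.
# All points and the sector assumption are hypothetical illustrations.
import numpy as np

def fit_line(points):
    """Total-least-squares line fit; returns a point on the line and a unit direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]             # principal direction of the points

def intersect_lines(p1, d1, p2, d2):
    """Intersection of two 2-D lines given as point + direction."""
    A = np.column_stack([d1, -d2])     # solve p1 + s*d1 = p2 + t*d2
    s, _ = np.linalg.solve(A, p2 - p1)
    return p1 + s * d1

# Hypothetical boundary-crossing points (metres) observed on the left and right
# edges of the detection sector while sniffing the camera's wireless traffic.
left_edge = [(1.0, 1.2), (2.0, 2.3), (3.0, 3.4)]
right_edge = [(1.0, -1.1), (2.0, -2.2), (3.0, -3.3)]

camera_xy = intersect_lines(*fit_line(left_edge), *fit_line(right_edge))
print("estimated camera location (m):", camera_xy)
```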
  2.
    Phenology is a distinct marker of the impacts of climate change on ecosystems. Accordingly, monitoring the spatiotemporal patterns of vegetation phenology is important to understand the changing Earth system. A wide range of sensors have been used to monitor vegetation phenology, including digital cameras with different viewing geometries mounted on various types of platforms. Sensor perspective, view-angle, and resolution can potentially impact estimates of phenology. We compared three different methods of remotely sensing vegetation phenology—an unoccupied aerial vehicle (UAV)-based, downward-facing RGB camera, a below-canopy, upward-facing hemispherical camera with blue (B), green (G), and near-infrared (NIR) bands, and a tower-based RGB PhenoCam, positioned at an oblique angle to the canopy—to estimate spring phenological transition towards canopy closure in a mixed-species temperate forest in central Virginia, USA. Our study had two objectives: (1) to compare the above- and below-canopy inference of canopy greenness (using green chromatic coordinate and normalized difference vegetation index) and canopy structural attributes (leaf area and gap fraction) by matching below-canopy hemispherical photos with high spatial resolution (0.03 m) UAV imagery, to find the appropriate spatial coverage and resolution for comparison; (2) to compare how UAV, ground-based, and tower-based imagery performed in estimating the timing of the spring phenological transition. We found that a spatial buffer of 20 m radius for UAV imagery is most closely comparable to below-canopy imagery in this system. Sensors and platforms agree within +/− 5 days of when canopy greenness stabilizes from the spring phenophase into the growing season. We show that pairing UAV imagery with tower-based observation platforms and plot-based observations for phenological studies (e.g., long-term monitoring, existing research networks, and permanent plots) has the potential to scale plot-based forest structural measures via UAV imagery, constrain uncertainty estimates around phenophases, and more robustly assess site heterogeneity. 
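For reference, the two canopy-greenness indices named above reduce to simple per-pixel band arithmetic. The sketch below computes the green chromatic coordinate (GCC) from RGB bands and NDVI from an NIR/red pair; the band names, data values, and tiny synthetic arrays are assumptions for illustration and are not tied to any of the sensors compared in the study.

```python
# Per-pixel greenness indices used in canopy phenology monitoring.
# Synthetic 2x2 "images" stand in for camera bands; values are arbitrary.
import numpy as np

def gcc(red, green, blue):
    """Green chromatic coordinate: G / (R + G + B), per pixel."""
    total = red + green + blue
    return np.divide(green, total, out=np.zeros_like(green, dtype=float),
                     where=total > 0)

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - R) / (NIR + R)."""
    total = nir + red
    return np.divide(nir - red, total, out=np.zeros_like(nir, dtype=float),
                     where=total > 0)

# Tiny synthetic bands (digital numbers as floats), for illustration only.
red = np.array([[80.0, 90.0], [70.0, 100.0]])
green = np.array([[120.0, 110.0], [130.0, 105.0]])
blue = np.array([[60.0, 70.0], [55.0, 80.0]])
nir = np.array([[200.0, 180.0], [220.0, 150.0]])

print("GCC:\n", gcc(red, green, blue))
print("NDVI:\n", ndvi(nir, red))
```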
  3.
    The agricultural industry suffers from a significant amount of food waste, some of which originates from an inability to apply site-specific management at the farm level. Snap bean, a broad-acre crop that covers hundreds of thousands of acres across the USA, is not exempt from this need for informed, within-field, and spatially explicit management approaches. This study aimed to assess the utility of machine learning algorithms for growth stage and pod maturity classification of snap bean (cv. Huntington), as well as to detect and discriminate the spectral and biophysical features that lead to accurate classification results. Four major growth stages and six main sieve-size pod maturity levels were evaluated for growth stage and pod maturity classification, respectively. A point-based in situ spectroradiometer in the visible-near-infrared and shortwave-infrared domains (VNIR-SWIR; 400–2500 nm) was used, and the radiance values were converted to reflectance to normalize for any illumination change between samples. After preprocessing the raw data, we approached pod maturity assessment with multi-class classification and growth stage determination with binary and multi-class classification methods. Results from the growth stage assessment via the binary method exhibited accuracies ranging from 90–98%, with the best mathematical enhancement method being the continuum-removal approach. The growth stage multi-class classification method used raw reflectance data and identified a pair of wavelengths, 493 nm and 640 nm, in two basic transforms (ratio and normalized difference), yielding high accuracies (~79%). Pod maturity assessment detected narrow-band wavelengths in the VIS and SWIR regions, separating not-ready-to-harvest from ready-to-harvest scenarios with classification measures at the ~78% level using continuum-removed spectra. Our work represents a best-case scenario, i.e., a stepping-stone to understanding snap bean harvest maturity assessment via hyperspectral sensing at a scalable level (i.e., airborne systems). Future work involves transferring the concepts to unmanned aerial system (UAS) field experiments and validating whether a simple multispectral camera, mounted on a UAS, could incorporate <10 spectral bands to meet the needs of both growth stage and pod maturity classification in snap bean production.
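To make the two-band transforms concrete, the sketch below computes the ratio and normalized-difference features from reflectance at 493 nm and 640 nm and feeds them to a generic classifier. The synthetic spectra, the 1 nm wavelength grid, the labels, and the choice of a random forest are illustrative assumptions, not the study's pipeline or its reported accuracies.

```python
# Two-band transforms (ratio and normalized difference of 493 nm and 640 nm
# reflectance) used as classifier features. Spectra and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

wavelengths = np.arange(400, 2501)               # assumed 1-nm grid, 400-2500 nm
i493, i640 = 493 - 400, 640 - 400                # indices of the two bands

def two_band_features(reflectance):
    """Ratio and normalized difference of reflectance at 493 nm and 640 nm."""
    r493, r640 = reflectance[:, i493], reflectance[:, i640]
    ratio = r493 / r640
    norm_diff = (r493 - r640) / (r493 + r640)
    return np.column_stack([ratio, norm_diff])

# Synthetic spectra (rows = samples) and growth-stage labels, illustration only.
rng = np.random.default_rng(0)
spectra = rng.uniform(0.05, 0.6, size=(40, wavelengths.size))
stages = rng.integers(0, 4, size=40)             # four growth stages

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(two_band_features(spectra), stages)
print("training accuracy:", clf.score(two_band_features(spectra), stages))
```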
  4.

    Understanding interactions between environmental stress and genetic variation is crucial to predict the adaptive capacity of species to climate change. Leaf temperature is both a driver and a responsive indicator of plant physiological response to thermal stress, and methods to monitor it are needed. Foliar temperatures vary across leaf to canopy scales and are influenced by genetic factors, challenging efforts to map and model this critical variable. Thermal imagery collected using unoccupied aerial systems (UAS) offers an innovative way to measure thermal variation in plants across landscapes at leaf-level resolutions. We used a UAS equipped with a thermal camera to assess temperature variation among genetically distinct populations of big sagebrush (Artemisia tridentata), a keystone plant species that is the focus of intensive restoration efforts throughout much of western North America. We completed flights across a growing season in a sagebrush common garden to map leaf temperature relative to subspecies and cytotype, physiological phenotypes of plants, and summer heat stress. Our objectives were to (1) determine whether leaf-level stomatal conductance corresponds with changes in crown temperature; (2) quantify genetic (i.e., subspecies and cytotype) contributions to variation in leaf and crown temperatures; and (3) identify how crown structure, solar radiation, and subspecies-cytotype relate to leaf-level temperature. When considered across the whole season, stomatal conductance was negatively, non-linearly correlated with crown-level temperature derived from UAS. Subspecies identity best explained crown-level temperature, with no difference observed between cytotypes. However, structural phenotypes and microclimate best explained leaf-level temperature. These results show how fine-scale thermal mapping can decouple the contributions of genetic, phenotypic, and microclimate factors to leaf temperature dynamics. As climate-change-induced heat stress becomes prevalent, thermal UAS represents a promising way to track plant phenotypes that emerge from gene-by-environment interactions.
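One low-level step such an analysis requires is summarizing crown-level temperature from a UAS thermal raster within per-plant crown masks so that it can be related to leaf-level stomatal conductance. The sketch below does this for a tiny synthetic scene; the raster values, masks, and choice of summary statistic are assumptions for illustration, not the study's workflow.

```python
# Summarise thermal-raster pixels (deg C) inside per-plant crown masks.
# The thermal scene and hand-drawn masks below are synthetic illustrations.
import numpy as np

def crown_temperature(thermal, crown_masks, stat=np.nanmean):
    """Return one summary temperature per crown from a thermal raster."""
    return {crown_id: float(stat(np.where(mask, thermal, np.nan)))
            for crown_id, mask in crown_masks.items()}

# Tiny synthetic thermal scene (deg C) and two crown masks.
thermal = np.array([[31.0, 32.5, 40.1, 41.0],
                    [30.5, 33.0, 39.8, 40.5],
                    [29.9, 31.8, 38.7, 39.9]])
masks = {
    "plant_A": np.array([[1, 1, 0, 0],
                         [1, 1, 0, 0],
                         [0, 1, 0, 0]], dtype=bool),
    "plant_B": np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [0, 0, 1, 1]], dtype=bool),
}
print(crown_temperature(thermal, masks))
```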

     
  5. Automatically detecting the wet/dry shoreline from remote sensing imagery has many benefits for beach management in coastal areas by enabling managers to take measures to protect wildlife during high-water events. This paper proposes the use of a modified HED (Holistically-Nested Edge Detection) architecture to create a model for automatic feature identification of the wet/dry shoreline and to compute its elevation from the associated DSM (Digital Surface Model). The model is generalizable to several beaches in Texas and Florida. The data from the multiple beaches were collected using UAS (Uncrewed Aircraft Systems). UAS allow for the collection of high-resolution imagery and the creation of the DSMs that are essential for computing the elevations of the wet/dry shorelines. Another advantage of using UAS is the flexibility to choose locations and metocean conditions, allowing the collection of the varied dataset necessary to calibrate a general model. To evaluate the performance and the generalization of the AI model, we trained the model on data from eight flights over four locations, tested it on the data from a ninth flight, and repeated this for all possible combinations. The average precision (AP) and F1-scores obtained show the success of the model's prediction for the majority of cases, but the limitations of a pure computer vision assessment are discussed in the context of this coastal application. The method was also assessed more directly by comparing the average elevations of the labeled and AI-predicted wet/dry shorelines. The absolute differences between the two elevations were, on average, 2.1 cm, while the absolute difference of the elevations' standard deviations for each wet/dry shoreline was 2.2 cm. The proposed method results in a generalizable model able to delineate the wet/dry shoreline in beach imagery for multiple flights at several locations in Texas and Florida and for a range of metocean conditions.
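The elevation comparison described above amounts to sampling the DSM at the labeled and AI-predicted shoreline pixels and comparing summary statistics. The sketch below shows that step on a tiny synthetic DSM; the arrays, pixel coordinates, and statistics are illustrative assumptions, not the study's data or results.

```python
# Compare elevations of a labeled and a predicted wet/dry shoreline by sampling
# a DSM at their pixel coordinates. DSM and shorelines are synthetic examples.
import numpy as np

def shoreline_elevation(dsm, shoreline_rc):
    """Elevations (m) of the DSM at the given (row, col) shoreline pixels."""
    rows, cols = np.asarray(shoreline_rc).T
    return dsm[rows, cols]

# Tiny synthetic DSM (metres) and two shorelines as (row, col) pixel lists.
dsm = np.linspace(0.0, 1.5, 5 * 6).reshape(5, 6)
labeled = [(2, 0), (2, 1), (3, 2), (3, 3), (3, 4)]
predicted = [(2, 0), (3, 1), (3, 2), (3, 3), (2, 4)]

z_lab = shoreline_elevation(dsm, labeled)
z_pred = shoreline_elevation(dsm, predicted)
print("mean elevation difference (m):", abs(z_lab.mean() - z_pred.mean()))
print("std-dev difference (m):", abs(z_lab.std() - z_pred.std()))
```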