

Title: Activity Theory as a Framework for Integrating UAS Into the NAS: A Field Study of Crew Member Activity During UAS Operations Near a Non-Towered Airport
Award ID(s):
1619273
NSF-PAR ID:
10100325
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
62
Issue:
1
ISSN:
1541-9312
Page Range / eLocation ID:
39 to 43
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Automatically detecting the wet/dry shoreline from remote sensing imagery has many benefits for beach management in coastal areas, enabling managers to take measures to protect wildlife during high-water events. This paper proposes the use of a modified HED (Holistically-Nested Edge Detection) architecture to create a model for automatic feature identification of the wet/dry shoreline and to compute its elevation from the associated DSM (Digital Surface Model). The model is generalizable to several beaches in Texas and Florida. The data from the multiple beaches were collected using UAS (Uncrewed Aircraft Systems). UAS allow for the collection of high-resolution imagery and the creation of the DSMs that are essential for computing the elevations of the wet/dry shorelines. Another advantage of using UAS is the flexibility to choose locations and metocean conditions, allowing collection of the varied dataset necessary to calibrate a general model. To evaluate the performance and generalization of the AI model, we trained the model on data from eight flights over four locations, tested it on data from a ninth flight, and repeated this for all possible combinations. The AP and F1-scores obtained show the success of the model's predictions for the majority of cases, but the limitations of a pure computer vision assessment are discussed in the context of this coastal application. The method was also assessed more directly, by comparing the average elevations of the labeled and AI-predicted wet/dry shorelines. The absolute differences between the two elevations were, on average, 2.1 cm, while the absolute difference of the elevations' standard deviations for each wet/dry shoreline was 2.2 cm. The proposed method results in a generalizable model able to delineate the wet/dry shoreline in beach imagery for multiple flights at several locations in Texas and Florida and for a range of metocean conditions.
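A minimal sketch of the elevation-extraction step described above, assuming the HED model's output has already been thresholded into a binary shoreline mask co-registered with the DSM raster; the file names and mask format are hypothetical placeholders, not the paper's actual pipeline.

```python
# Sample DSM elevations along a predicted wet/dry shoreline mask,
# then summarize them as in the paper's mean/std comparison.
import numpy as np
import rasterio

with rasterio.open("flight09_dsm.tif") as src:   # UAS-derived DSM (placeholder path)
    dsm = src.read(1)
    nodata = src.nodata

# Hypothetical binary mask of shoreline pixels, same grid as the DSM.
shoreline_mask = np.load("flight09_shoreline_mask.npy").astype(bool)

elev = dsm[shoreline_mask]
if nodata is not None:
    elev = elev[elev != nodata]

# The paper compares these statistics between labeled and AI-predicted
# shorelines (2.1 cm and 2.2 cm average absolute differences).
print(f"mean shoreline elevation: {elev.mean():.3f} m, std: {elev.std():.3f} m")
```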
  2. Uncrewed aerial systems (UASs) have emerged as powerful ecological observation platforms capable of filling critical spatial and spectral observation gaps in plant physiological and phenological traits that have been difficult to measure from space-borne sensors. Despite recent technological advances, the high cost of drone-borne sensors limits the widespread application of UAS technology across scientific disciplines. Here, we evaluate the tradeoffs between off-the-shelf and sophisticated drone-borne sensors for mapping plant species and plant functional types (PFTs) within a diverse grassland. Specifically, we compared species and PFT mapping accuracies derived from hyperspectral, multispectral, and RGB imagery fused with light detection and ranging (LiDAR)- or structure-from-motion (SfM)-derived canopy height models (CHMs). Sensor–data fusions were evaluated using either a single observation period or near-monthly observation frequencies for integration of phenological information (i.e., phenometrics). Results indicate that overall classification accuracies for plant species and PFTs were highest for hyperspectral and LiDAR–CHM fusions (78 and 89%, respectively), followed by multispectral and phenometric–SfM–CHM fusions (52 and 60%, respectively) and RGB and SfM–CHM fusions (45 and 47%, respectively). Our findings demonstrate clear tradeoffs in mapping accuracies between economical and sophisticated sensor configurations, but highlight that off-the-shelf multispectral sensors may achieve accuracies comparable to those of sophisticated UAS sensors by integrating phenometrics into machine learning image classifiers.
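A minimal sketch of the fusion-and-classification idea under stated assumptions: the reflectance bands, CHM layer, phenometrics, and labels are pre-aligned per-pixel arrays with hypothetical names and shapes, and the random forest is an illustrative choice, since the abstract says only "machine learning image classifiers".

```python
# Stack multispectral reflectance, an SfM-derived CHM, and per-pixel
# phenometrics into one feature array, then classify plant functional types.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

bands = np.load("multispectral.npy")   # (H, W, 5) reflectance bands (assumed)
chm = np.load("sfm_chm.npy")           # (H, W)    canopy height model
pheno = np.load("phenometrics.npy")    # (H, W, 3) e.g., monthly NDVI statistics
labels = np.load("pft_labels.npy")     # (H, W)    field-labeled PFT ids

# Per-pixel feature fusion: 5 bands + 1 height + 3 phenometrics = 9 features.
X = np.dstack([bands, chm[..., None], pheno]).reshape(-1, 9)
y = labels.ravel()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y)
clf = RandomForestClassifier(n_estimators=300).fit(X_tr, y_tr)
print(f"overall accuracy: {clf.score(X_te, y_te):.2f}")
```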
  3. Timely and accurate monitoring has the potential to streamline crop management, harvest planning, and processing in the growing table beet industry of New York state. We used an unmanned aerial system (UAS) equipped with a multispectral imager to monitor table beet (Beta vulgaris ssp. vulgaris) canopies in New York during the 2018 and 2019 growing seasons. We assessed the optimal pairing of a reflectance band or vegetation index with canopy area to predict table beet yield components of small sample plots using leave-one-out cross-validation. The most promising models were for table beet root count and mass, using imagery taken during emergence and canopy closure, respectively. We created augmented plots, composed of random combinations of the study plots, to further exploit the importance of early canopy growth area. We achieved an R² of 0.70 and a root mean squared error (RMSE) of 84 roots (~24%) for root count using 2018 emergence imagery. The same model resulted in an RMSE of 127 roots (~35%) when tested on the unseen 2019 data. Harvested root mass was best modeled with canopy-closing imagery, with an R² of 0.89 and an RMSE of 6700 kg/ha using 2018 data. We applied the model to the 2019 full-field imagery and found an average yield of 41,000 kg/ha (compared with the ~40,000 kg/ha average for upstate New York). This study demonstrates the potential for table beet yield models using a combination of radiometric and canopy structure data obtained at early growth stages. Additional imagery of these early growth stages is vital to develop a robust and generalized model of table beet root yield that can handle imagery captured at slightly different growth stages between seasons.
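A minimal sketch of the leave-one-out evaluation pattern described above, with made-up plot-level values; the linear model and the [NDVI, canopy area] feature pairing are illustrative assumptions, not the study's exact formulation.

```python
# Leave-one-out cross-validation of a plot-level root-count model that pairs
# one vegetation index with canopy area, then report RMSE as in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Per-plot predictors at emergence: [NDVI, canopy area (m^2)] (illustrative).
X = np.array([[0.41, 1.9], [0.38, 1.4], [0.45, 2.3],
              [0.36, 1.1], [0.43, 2.0], [0.40, 1.7]])
y = np.array([310, 250, 390, 210, 350, 300])   # roots per plot (illustrative)

# Each plot is predicted by a model trained on all the others.
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"LOOCV RMSE: {rmse:.0f} roots")
```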
  4. The use of small unmanned aerial system (UAS)-based structure-from-motion (SfM; photogrammetry) and LiDAR point clouds has been widely discussed in the remote sensing community. Here, we compared multiple aspects of the SfM and LiDAR point clouds, collected concurrently in five UAS flights over experimental fields of a short crop (snap bean), in order to explore how well the SfM approach performs compared with LiDAR for crop phenotyping. The main methods include calculating cloud-to-mesh (C2M) distance maps between the preprocessed point clouds, as well as computing multiscale model-to-model cloud comparison (M3C2) distance maps between the derived digital elevation models (DEMs) and crop height models (CHMs). We also evaluated the crop height and row width from the CHMs and compared them with field measurements for one of the data sets. Both SfM and LiDAR point clouds achieved an average RMSE of ~0.02 m for crop height and an average RMSE of ~0.05 m for row width. The qualitative and quantitative analyses showed that the SfM approach is comparable to LiDAR under the same UAS flight settings; however, its altimetric accuracy relied largely on the number and distribution of the ground control points.
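A minimal sketch of the CHM-and-RMSE step under stated assumptions: the DSM and bare-earth DEM are pre-registered rasters derived from the point clouds, the plot windows and field heights are illustrative placeholders, and the 95th-percentile height summary is an assumption (the abstract does not say how plot height was computed).

```python
# Derive a crop height model (CHM = DSM - DEM) from SfM or LiDAR products,
# then score per-plot heights against field measurements with RMSE.
import numpy as np

dsm = np.load("sfm_dsm.npy")      # surface model rasterized from the point cloud
dem = np.load("ground_dem.npy")   # bare-earth model (GCP-corrected, assumed)
chm = dsm - dem                   # per-pixel crop height

# Hypothetical plot windows as (row0, row1, col0, col1) raster indices.
plots = [(0, 60, 0, 60), (0, 60, 60, 120), (60, 120, 0, 60)]
plot_h = np.array([np.percentile(chm[r0:r1, c0:c1], 95)
                   for r0, r1, c0, c1 in plots])

field_h = np.array([0.31, 0.28, 0.33])   # illustrative field-measured heights (m)
rmse = np.sqrt(np.mean((plot_h - field_h) ** 2))
print(f"crop height RMSE: {rmse:.3f} m")  # paper reports ~0.02 m on average
```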