Title: Effect of Illumination on Human Drone Interaction Tasks: An Exploratory Study
With recent changes by the Federal Aviation Administration (FAA) opening more areas for drones to be used, such as delivery, there will be increasingly more interactions between humans and drones soon. Although current human drone interaction (HDI) research investigates what factors are necessary for safe interactions, very few studies have focused on drone illumination. Therefore, in this study, we explored how illumination affects users’ perception of the drone through a distance perception task. Data analysis did not indicate any significant effects of illumination or distance conditions in the normal distance estimation task. However, most participants underestimated the distance in the normal distance estimation task and indicated that the LED drone was closer when it was illuminated during the relative distance estimation task, even though the drones were equidistant. In future studies, factors such as weather conditions, lighting patterns, and height of the drone will be explored.
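The underestimation finding above can be illustrated with a small scoring sketch for a distance-estimation task. All the numbers below are hypothetical; the study's actual data are not reproduced here.

```python
# Sketch: scoring a distance-estimation task (hypothetical data, not the study's).

def signed_error(estimate_m, true_m):
    """Signed estimation error in meters: negative means underestimation."""
    return estimate_m - true_m

def mean_signed_error(estimates, true_m):
    """Average signed error across a set of participant estimates."""
    errors = [signed_error(e, true_m) for e in estimates]
    return sum(errors) / len(errors)

# Hypothetical trial: participants judge a drone hovering 10 m away.
estimates = [8.5, 9.0, 7.5, 9.5, 8.0]
print(mean_signed_error(estimates, 10.0))  # negative => underestimation
```

A negative mean signed error, as in this toy trial, corresponds to the underestimation pattern the abstract reports.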
Award ID(s):
2024656
NSF-PAR ID:
10346241
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Volume:
65
Issue:
1
ISSN:
2169-5067
Page Range / eLocation ID:
1485 to 1489
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    In this work we address the adequacy of two machine learning methods to tackle the problem of wind velocity estimation in the lowermost region of the atmosphere using on-board inertial drone data within an outdoor setting. We fed these data, and accompanying wind tower measurements, into a K-nearest neighbor (KNN) algorithm and a long short-term memory (LSTM) neural network to predict future windspeeds, by exploiting the stabilization response of two hovering drones in a wind field. Of the two approaches, we found that LSTM proved to be the most capable supervised learning model during more capricious wind conditions, and made competent windspeed predictions with an average root mean square error of 0.61 m·s−1 averaged across two drones, when trained on at least 20 min of flight data. During calmer conditions, a linear regression model demonstrated acceptable performance, but under more variable wind regimes the LSTM performed considerably better than the linear model, and generally comparable to more sophisticated methods. Our approach departs from other multi-rotor-based windspeed estimation schemes by circumventing the use of complex and specific dynamic models, to instead directly learn the relationship between drone attitude and fluctuating windspeeds. This exhibits utility in a range of otherwise prohibitive environments, like mountainous terrain or off-shore sites. 
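The abstract above pairs an LSTM with a K-nearest neighbor baseline that maps drone attitude to windspeed. A minimal sketch of that KNN idea is below; the feature pairs and labels are hypothetical stand-ins for the paper's on-board inertial logs and wind-tower measurements.

```python
# Minimal KNN regression sketch: predict windspeed (m/s) from drone attitude
# (roll, pitch in degrees). Training pairs here are hypothetical.
import math

def knn_windspeed(train, query, k=3):
    """train: list of ((roll, pitch), windspeed); query: (roll, pitch)."""
    dists = sorted((math.dist(feat, query), speed) for feat, speed in train)
    nearest = dists[:k]
    # Average the windspeeds of the k closest attitude samples.
    return sum(speed for _, speed in nearest) / k

train = [((2.0, 1.0), 1.5), ((5.0, 4.0), 3.0), ((9.0, 8.0), 6.0),
         ((2.5, 1.5), 1.7), ((8.5, 7.5), 5.8)]
print(knn_windspeed(train, (8.8, 7.8), k=3))
```

The LSTM the authors favor replaces this instance-based lookup with a learned sequence model, which is what handles the "more capricious" wind conditions better.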
  2. The pervasive operation of customer drones, or small-scale unmanned aerial vehicles (UAVs), has raised serious concerns about their privacy threats to the public. In recent years, privacy invasion events caused by customer drones have been frequently reported. Given such a fact, timely detection of invading drones has become an emerging task. Existing solutions using active radar, video or acoustic sensors are usually too costly (especially for individuals) or exhibit various constraints (e.g., requiring visual line of sight). Recent research on drone detection with passive RF signals provides an opportunity for low-cost deployment of drone detectors on commodity wireless devices. However, the state of the art in this direction relies on line-of-sight (LOS) RF signals, which makes these methods work only under very constrained conditions. Support for more common scenarios, i.e., non-line-of-sight (NLOS), is still missing for low-cost solutions. In this paper, we propose a novel detection system for privacy invasion caused by customer drones. Our system features accurate NLOS detection with low-cost hardware (under $50). By exploring and validating the relationship between drone motions and RF signals under the NLOS condition, we find that RF signatures of drones are somewhat “amplified” by multipath in NLOS. Based on this observation, we design a two-step solution which first classifies received RSS measurements into LOS and NLOS categories; deep learning is then used to extract the signatures and ultimately detect the drones. Our experimental results show that LOS and NLOS signals can be identified at accuracy rates of 98.4% and 96%, respectively. Our drone detection rate for the NLOS condition is above 97% with a system implemented using a Raspberry Pi 3 B+.
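The first step of the two-step pipeline above, labeling RSS windows as LOS or NLOS, can be sketched with a simple variance heuristic, since the abstract's key observation is that multipath "amplifies" RF signatures in NLOS. The threshold and sample windows below are hypothetical; the real system uses a learned classifier followed by a deep-learning detector.

```python
# Hedged sketch: label an RSS window as LOS or NLOS by its variance,
# following the observation that NLOS multipath inflates RSS fluctuation.
# Threshold and sample windows are hypothetical.
from statistics import pvariance

def classify_rss_window(rss_dbm, var_threshold=4.0):
    """Return 'NLOS' if RSS variance exceeds the threshold, else 'LOS'."""
    return "NLOS" if pvariance(rss_dbm) > var_threshold else "LOS"

los_window  = [-52.0, -51.5, -52.5, -52.0, -51.8]   # steady direct path
nlos_window = [-60.0, -68.0, -55.0, -71.0, -58.0]   # multipath fading
print(classify_rss_window(los_window), classify_rss_window(nlos_window))
```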
  3. Recent years have seen a rapid increase in drone usage in both commercial and personal settings, due to recent changes in guidelines by the Federal Aviation Administration (FAA). Those guidelines, however, contain very few illumination requirements, apart from the need to use a visible strobing anti-collision light for nighttime operations. Hence, in this study, we reviewed existing LED illumination systems in off-the-shelf drones to determine what type of configurations they have and how the LED illumination system is generally used. We also introduced a customizable LED illumination system and tested it in a human-in-the-loop study. Our preliminary findings revealed that the colors preferred by the participants did not match the colors most used in existing LED illumination systems on most off-the-shelf drones. We also observed a possible relationship between the preferred color and the weather conditions.
  4. A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina and perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance of tissue damage. Towards this end, we propose to automate the tool-navigation task by mimicking the perception-action feedback loop of an expert surgeon. Specifically, a deep network is trained to imitate recorded expert visual-servoing trajectories toward various locations on the retina, given a goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in real-life experiments using a silicone eye phantom. We show that the network can reliably navigate a surgical tool to various desired locations within 137 µm accuracy in phantom experiments and 94 µm in simulation, and generalizes well to unseen situations such as the presence of auxiliary surgical tools, variable eye backgrounds, and brightness conditions.
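The perception-action loop described above can be sketched schematically: observe the tool tip, compute an action toward the goal, apply it, and repeat until within tolerance. Here a proportional step stands in for the trained network's action output; positions are 2-D in arbitrary units, whereas the real system performs image-based servoing at tens-of-microns scale.

```python
# Schematic perception-action loop; a proportional controller stands in
# for the learned policy. Units and dimensionality are simplified.
import math

def navigate(tip, goal, gain=0.5, tol=0.01, max_steps=100):
    """Iteratively move the tool tip toward the goal; return final position."""
    for _ in range(max_steps):
        err = (goal[0] - tip[0], goal[1] - tip[1])
        if math.hypot(*err) < tol:
            break  # within tolerance of the goal
        # Action: step by a fraction of the remaining error.
        tip = (tip[0] + gain * err[0], tip[1] + gain * err[1])
    return tip

final = navigate((0.0, 0.0), (1.0, 1.0))
print(math.hypot(final[0] - 1.0, final[1] - 1.0) < 0.01)  # converged
```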
  5. Abstract

    Compared to conventional fabrication, additive manufacturing (AM) enables production of far more complex geometries with less tooling and increased automation. However, despite the common perception of AM’s “free” geometric complexity, this freedom comes with a literal cost: more complex geometries may be challenging to design, potentially manifesting as increased engineering labor cost. Being able to accurately predict design cost is essential to reliably forecasting large-scale design for additive manufacturing projects, especially those using expensive processes like laser powder bed fusion of metals. However, no studies have quantitatively explored designers’ ability to complete this forecasting. In this study, we address this gap by analyzing the uncertainty of expert design cost estimation. First, we establish a methodology to translate computer-aided design data into descriptive vectors capturing design for additive manufacturing activity parameters. We then present a series of case study designs, with varied functionality and geometric complexity, to experts and measure their estimations of design labor for each case. Summary statistics of the cost estimates and a linear mixed effects model predicting labor responses from participant and design attributes were used to estimate the significance of factors on the responses. A task-based CAD model complexity calculation is then used to infer an estimate of the magnitude and variability of normalized labor cost, in order to understand more generalizable attributes of the observed labor estimates. These two analyses are discussed in the context of the advantages and disadvantages of relying on human cost estimation for additive manufacturing forecasts, as well as future work that can prioritize and mitigate such challenges.

     
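The regression component of the analysis above can be illustrated in simplified form. The study fits a linear mixed-effects model with participant random effects; the sketch below substitutes an ordinary least-squares fit of labor estimates against a complexity score, with hypothetical data chosen to lie exactly on a line.

```python
# Simplified stand-in for the study's regression: OLS fit of labor estimates
# against a design-complexity score (hypothetical data; the study uses a
# linear mixed-effects model with participant random effects).

def ols_fit(x, y):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

complexity = [1.0, 2.0, 3.0, 4.0, 5.0]   # task-based complexity score
labor_hrs  = [3.0, 5.0, 7.0, 9.0, 11.0]  # hypothetical expert labor estimates
a, b = ols_fit(complexity, labor_hrs)
print(a, b)  # data were constructed to satisfy labor = 1 + 2 * complexity
```

The mixed-effects model extends this by letting each participant carry their own random intercept, which is how the study separates participant variability from design-attribute effects.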