- Award ID(s):
- 1650547
- PAR ID:
- 10136667
- Date Published:
- Journal Name:
- International Conference on Unmanned Aircraft Systems
- Page Range / eLocation ID:
- 1161 to 1167
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper demonstrates a feasible method for using a deep neural network as a sensor to estimate the attitude of a flying vehicle using only flight video. A dataset of still images and associated gravity vectors was collected and used for supervised learning. The network builds on a previously trained network and was trained to approximate the attitude of the camera with an average error of about 8 degrees. Flight test video was recorded and processed with a relatively simple visual odometry method. The aircraft attitude is then estimated with an extended Kalman filter, with the visual odometry serving as the state propagation and the network providing the attitude measurement. Results show that having the neural network provide a gravity-vector attitude measurement from the flight imagery reduces the standard deviation of the attitude error by a factor of approximately 12 compared to a baseline approach.
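The fusion described in the abstract above can be illustrated with a short sketch: visual odometry propagates the attitude estimate, and the network's gravity-vector prediction supplies the measurement update of an extended Kalman filter. The two-state (roll, pitch) simplification, function names, and noise matrices below are assumptions for illustration, not the authors' implementation.

```python
# Sketch: visual odometry propagates a [roll, pitch] estimate; the neural
# network's gravity-vector output is the EKF measurement. Illustrative only.
import numpy as np

def ekf_attitude_step(x, P, delta_rp_vo, g_meas_nn, Q, R):
    """One predict/update cycle for a simplified [roll, pitch] state."""
    # Predict: apply the incremental roll/pitch change from visual odometry
    x_pred = x + delta_rp_vo
    P_pred = P + Q

    # Measurement model: unit gravity vector expected in the body frame
    roll, pitch = x_pred
    h = np.array([-np.sin(pitch),
                  np.sin(roll) * np.cos(pitch),
                  np.cos(roll) * np.cos(pitch)])
    # Jacobian of h with respect to [roll, pitch]
    H = np.array([[0.0,                          -np.cos(pitch)],
                  [ np.cos(roll) * np.cos(pitch), -np.sin(roll) * np.sin(pitch)],
                  [-np.sin(roll) * np.cos(pitch), -np.cos(roll) * np.sin(pitch)]])

    # Update with the network's gravity-vector estimate
    y = g_meas_nn - h
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```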
-
In this paper, we introduce a neural network (NN)-based symbol detection scheme for Wi-Fi systems and its associated hardware implementation in software radios. Specifically, reservoir computing (RC), a special type of recurrent neural network (RNN), is adopted to conduct symbol detection for Wi-Fi receivers. Instead of introducing extra training overhead or training sets to facilitate RC-based symbol detection, a new training framework is introduced that takes advantage of the signal structure in existing Wi-Fi protocols (e.g., the IEEE 802.11 standards): the introduced RC-based symbol detector utilizes the inherent long/short training sequences and structured pilots sent by the Wi-Fi transmitter to conduct online learning of the transmitted symbols. In other words, our NN-based symbol detector does not require any additional training sets beyond what existing Wi-Fi systems already provide. The introduced RC-based Wi-Fi symbol detector is implemented on a software-defined radio (SDR) platform to provide realistic and meaningful performance comparisons against the traditional Wi-Fi receiver. Over-the-air experiment results show that the introduced RC-based Wi-Fi symbol detector outperforms conventional Wi-Fi symbol detection methods in various environments, indicating the significance and relevance of our work.
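As a rough illustration of the online-learning idea in the abstract above, the sketch below fits the linear readout of a small echo state network (one common form of reservoir computing) on known preamble symbols and then detects payload symbols. The reservoir size, weight scaling, and ridge-regression readout are assumptions, not the paper's configuration.

```python
# Echo-state-network sketch of RC-based symbol detection: the readout is fit
# on known preamble/pilot symbols, then applied to payload samples.
import numpy as np

rng = np.random.default_rng(0)
N_RES = 64                                             # reservoir size (assumed)
W_in = rng.normal(scale=0.1, size=(N_RES, 2))          # input weights for I/Q
W = rng.normal(scale=1.0, size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # spectral radius < 1

def run_reservoir(iq):
    """Drive the reservoir with received complex I/Q samples; return states."""
    states, x = [], np.zeros(N_RES)
    for sample in iq:
        u = np.array([sample.real, sample.imag])
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

def fit_readout(preamble_rx, preamble_known, lam=1e-2):
    """'Online' training on the known preamble via ridge regression."""
    S = run_reservoir(preamble_rx)
    Y = np.column_stack([preamble_known.real, preamble_known.imag])
    return np.linalg.solve(S.T @ S + lam * np.eye(N_RES), S.T @ Y)

def detect(payload_rx, W_out):
    """Return soft symbol estimates for the payload samples."""
    est = run_reservoir(payload_rx) @ W_out
    return est[:, 0] + 1j * est[:, 1]
```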
-
Abstract: This paper presents a novel application of convolutional neural network (CNN) models for filtering the intraseasonal variability of the tropical atmosphere. In this deep learning filter, two convolutional layers are applied sequentially in a supervised machine learning framework to extract the intraseasonal signal from the total daily anomalies. The CNN-based filter can be tailored for each field, similarly to fast Fourier transform filtering methods. When applied to two different fields (zonal wind stress and outgoing longwave radiation), the index of agreement between the signal obtained using the CNN-based filter and a conventional weight-based filter is between 95% and 99%. The advantage of the CNN-based filter over conventional filters is its applicability to time series whose length is comparable to the period of the signal being extracted.
Significance Statement: This study proposes a new method for discovering hidden connections in data representative of tropical atmosphere variability. The method makes use of an artificial intelligence (AI) algorithm that combines a mathematical operation known as convolution with a mathematical model, built to reflect the behavior of the human brain, known as an artificial neural network. Our results show that the filtered data produced by the AI-based method are consistent with the results obtained using conventional mathematical algorithms. The advantage of the AI-based method is that it can be applied to cases for which the conventional methods have limitations, such as forecast (hindcast) data or real-time monitoring of tropical variability in the 20–100-day range.
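The two-layer convolutional filter described above can be sketched roughly as follows; the kernel sizes, channel count, and the use of conventionally band-pass-filtered anomalies as training targets are assumptions for illustration rather than the paper's exact setup.

```python
# Sketch: a two-layer 1-D CNN trained to map total daily anomalies to an
# intraseasonal (20-100-day) signal, with targets produced offline by a
# conventional band-pass filter. Hyperparameters are illustrative.
import torch
import torch.nn as nn

class IntraseasonalCNNFilter(nn.Module):
    def __init__(self, kernel_size=31, channels=16):
        super().__init__()
        pad = kernel_size // 2
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size, padding=pad),
            nn.Tanh(),
            nn.Conv1d(channels, 1, kernel_size, padding=pad),
        )

    def forward(self, x):            # x: (batch, 1, time) daily anomalies
        return self.net(x)           # same-length filtered series

def train(model, anomalies, targets, epochs=50, lr=1e-3):
    """Supervised fit against conventionally filtered anomalies (targets)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(anomalies), targets)
        loss.backward()
        opt.step()
    return model
```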
-
Spatial ability is the ability to generate, store, retrieve, and transform visual information to mentally represent a space and make sense of it. This ability is a critical facet of human cognition that affects knowledge acquisition, productivity, and workplace safety. Although improved spatial ability is essential for safely navigating and perceiving a space on Earth, it is even more critical in the altered environments of other planets and deep space, which may pose extreme and unfamiliar visuospatial conditions. Such conditions may range from microgravity settings with misalignment of the body and visual axes to a lack of landmark objects that offer spatial cues for perceiving size, distance, and speed. These altered visuospatial conditions may pose challenges to human spatial cognitive processing, which assists humans in locating objects in space, perceiving them visually, and comprehending spatial relationships between the objects and their surroundings. The main goal of this paper is to examine whether eye-tracking data on gaze patterns can indicate that such altered conditions demand more mental effort and attention. The key dimensions of spatial ability (i.e., spatial visualization, spatial relations, and spatial orientation) are examined under three simulated conditions: (1) aligned body and visual axes (control group); (2) statically misaligned body and visual axes (experiment group I); and (3) dynamically misaligned body and visual axes (experiment group II). The three conditions were simulated in Virtual Reality (VR) using the Unity 3D game engine. Participants were recruited from the Texas A&M University student population and wore HTC VIVE Head-Mounted Displays (HMDs) equipped with eye-tracking technology to work on three spatial tests measuring spatial visualization, orientation, and relations. The Purdue Spatial Visualization Test: Rotations (PSVT: R), the Mental Cutting Test (MCT), and the Perspective Taking Ability (PTA) test were used to evaluate the spatial visualization, spatial relations, and spatial orientation of 78 participants, respectively. For each test, gaze data were collected through the Tobii eye-tracker integrated in the HTC VIVE HMDs. Quick eye movements, known as saccades, were identified from the raw eye-tracking data using the rate of change of gaze position over time, and the number of saccades was used as a measure of mental effort. The results showed that the mean number of saccades in the MCT and PSVT: R tests was statistically larger in experiment group II than in the control group or experiment group I. However, the PTA test data did not meet the required assumptions for comparing the mean number of saccades across the three groups. The results suggest that spatial relations and visualization may require more mental effort under dynamically misaligned idiotropic and visual axes than under aligned or statically misaligned axes. However, the data could not reveal whether spatial orientation requires more or less mental effort under aligned, statically misaligned, and dynamically misaligned idiotropic and visual axes. The results of this study are important for understanding how altered visuospatial conditions impact spatial cognition and how simulation- or game-based training tools can be developed to train people to adapt to extreme or altered work environments and to work more productively and safely.
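A minimal sketch of the saccade-identification step described above, flagging samples where the rate of change of gaze position exceeds a velocity threshold; the threshold value and data layout are assumptions, not the study's processing pipeline.

```python
# Sketch: count saccades in raw gaze data using a simple velocity threshold.
import numpy as np

def count_saccades(t, gaze_xy, velocity_threshold=1.5):
    """t: timestamps (s); gaze_xy: (N, 2) gaze positions; returns saccade count."""
    dt = np.diff(t)
    # Rate of change of gaze position between consecutive samples
    speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / dt
    fast = speed > velocity_threshold
    # Count rising edges so a saccade spanning several samples counts once
    onsets = np.flatnonzero(fast & ~np.r_[False, fast[:-1]])
    return len(onsets)
```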
-
Smooth camber morphing aircraft offer increased control authority and improved aerodynamic efficiency. Smart material actuators have become a popular driving force for shape changes, capable of adhering to weight and size constraints and allowing for simplicity in mechanical design. As a step towards creating uncrewed aerial vehicles (UAVs) capable of autonomously responding to flow conditions, this work examines a multifunctional morphing airfoil's ability to follow commands in various flows. We integrated an airfoil with a morphing trailing edge consisting of an antagonistic pair of macro fiber composites (MFCs), serving as both skin and actuator, and internal piezoelectric flex sensors to form a closed-loop composite system. Closed-loop feedback control is necessary to accurately follow deflection commands because of the hysteretic behavior of MFCs. Here we used a deep reinforcement learning algorithm, Proximal Policy Optimization (PPO), to control the morphing airfoil. Two neural controllers were trained in a simulation developed through time-series modeling with long short-term memory (LSTM) recurrent neural networks. The learned controllers were then tested on the composite wing using two state-inference methods, in still air and in a wind tunnel at various flow speeds. We compared the performance of our neural controllers to a controller using traditional position-derivative feedback control. Our experimental results validate that the autonomous neural controllers were faster and more accurate than the traditional method. This research shows that deep learning methods can overcome common obstacles to achieving sufficient modeling and control when implementing smart composite actuators in an autonomous aerospace environment.
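The closed-loop arrangement described above can be sketched roughly as follows: a policy (learned or position-derivative) observes the flex-sensor-inferred deflection and the commanded deflection and adjusts the MFC drive voltage. The policy interface, voltage limits, and helper callables are assumptions for illustration only.

```python
# Sketch: track a trailing-edge deflection command with an interchangeable
# policy (a trained neural controller or a PD baseline). Illustrative only.
import numpy as np

def run_closed_loop(policy, read_deflection, apply_voltage, command, steps=500):
    """Each step: read sensor-inferred deflection, query policy, apply voltage."""
    voltage, history = 0.0, []
    for _ in range(steps):
        measured = read_deflection()                  # from piezoelectric flex sensors
        obs = np.array([measured, command, command - measured])
        voltage += float(policy(obs))                 # policy outputs a voltage increment
        voltage = float(np.clip(voltage, -500.0, 1500.0))  # assumed MFC voltage limits
        apply_voltage(voltage)
        history.append((measured, voltage))
    return history

class PDPolicy:
    """Traditional position-derivative baseline in the same interface."""
    def __init__(self, kp=2.0, kd=0.5):
        self.kp, self.kd, self.prev_error = kp, kd, 0.0

    def __call__(self, obs):
        error = obs[2]
        d_error = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.kd * d_error
```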