
This content will become publicly available on May 29, 2026

Title: Real-time synchronization of flight simulation and physiological data using Lab Streaming Layer: A custom X-Plane approach
Poor synchronization of physiological and flight data hinders human factors research by obscuring cause-effect relationships and requiring laborious manual alignment. We developed a custom X-Plane plugin that uses Lab Streaming Layer (LSL) for real-time synchronization, integrating flight data with simultaneously streamed ECG and eye-tracking signals. Our setup simplifies data collection and ensures precise synchronization without manual intervention, reducing error and preprocessing time. This paper describes the system configuration, the plugin, and the advantages over conventional methods. Our approach enables accurate correlation between physiological responses and in-flight events, offering deeper insights into pilot performance and workload. The result is a practical, reproducible method that simplifies multimodal data collection for accessible and efficient research in aviation human factors.
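Because LSL stamps every sample on a shared clock, downstream analysis reduces to pairing samples by timestamp rather than manually aligning recordings. A minimal sketch of that pairing step, assuming each stream arrives as a sorted list of (timestamp, value) tuples; the function name and tolerance are illustrative, not from the paper:

```python
from bisect import bisect_left

def align_streams(flight, physio, tol=0.02):
    """Pair each flight-data sample with the nearest physiological sample
    on the shared LSL clock. Both streams are lists of (timestamp, value)
    tuples sorted by timestamp (seconds); tol is the pairing tolerance."""
    times = [t for t, _ in physio]
    pairs = []
    for t, v in flight:
        i = bisect_left(times, t)
        # consider the neighbours on either side of the insertion point
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        best = min(candidates, key=lambda j: abs(times[j] - t))
        if abs(times[best] - t) <= tol:
            pairs.append((t, v, physio[best][1]))
    return pairs
```

In an LSL-based pipeline this kind of join replaces hours of manual alignment, since all devices share one clock.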
Award ID(s):
2505815
PAR ID:
10613727
Author(s) / Creator(s):
Publisher / Repository:
International Symposium on Aviation Psychology
Date Published:
Format(s):
Medium: X
Location:
Corvallis, OR
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The development and validation of computational models to detect daily human behaviors (e.g., eating, smoking, brushing) using wearable devices requires labeled data collected from the natural field environment, with tight time synchronization of the micro-behaviors (e.g., start/end times of hand-to-mouth gestures during a smoking puff or an eating gesture) and the associated labels. Video data is increasingly being used for such label collection. Unfortunately, wearable devices and video cameras with independent (and drifting) clocks make tight time synchronization challenging. To address this issue, we present the Window-Induced Shift Estimation method for Synchronization (SyncWISE). We demonstrate the feasibility and effectiveness of our method by synchronizing the timestamps of a wearable camera and wearable accelerometer from 163 videos representing 45.2 hours of data from 21 participants enrolled in a real-world smoking cessation study. Our approach shows significant improvement over the state-of-the-art, even in the presence of high data loss, achieving 90% synchronization accuracy given a synchronization tolerance of 700 milliseconds. Our method also achieves state-of-the-art synchronization performance on the CMU-MMAC dataset.
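The core idea behind this kind of shift estimation, greatly simplified, is to search for the lag that maximizes the cross-correlation between motion signals derived from the two devices. A toy sketch of that search, not the published windowed SyncWISE algorithm; signals are plain lists of motion-magnitude samples:

```python
def estimate_shift(ref, shifted, max_lag):
    """Estimate the clock offset (in samples) between two motion signals
    by maximizing their cross-correlation over candidate lags."""
    def xcorr(lag):
        # sum of products over the overlapping portion at this lag
        return sum(ref[i] * shifted[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(shifted))
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```

Converting the recovered sample lag to seconds via the sampling rate gives the timestamp correction to apply to one stream.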
  2. Fields in the social sciences, such as education research, have started to expand the use of computer-based research methods to supplement traditional research approaches. Natural language processing techniques, such as topic modeling, may support qualitative data analysis by providing early categories that researchers may interpret and refine. This study contributes to this body of research and answers the following research questions: (RQ1) What is the relative coverage of the latent Dirichlet allocation (LDA) topic model and human coding in terms of the breadth of the topics/themes extracted from the text collection? (RQ2) What is the relative depth or level of detail among identified topics using LDA topic models and human coding approaches? A dataset of student reflections was qualitatively analyzed using LDA topic modeling and human coding approaches, and the results were compared. The findings suggest that topic models can provide reliable coverage and depth of themes present in a textual collection comparable to human coding but require manual interpretation of topics. The breadth and depth of human coding output is heavily dependent on the expertise of coders and the size of the collection; these factors are better handled in the topic modeling approach. 
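The LDA workflow described above can be sketched with scikit-learn; the documents and topic count below are illustrative, not from the study, and the extracted topics still require the manual interpretation the abstract notes:

```python
# Minimal sketch of LDA topic extraction; documents are invented examples
# of student reflections, and n_components=2 is an arbitrary choice.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "group work helped me understand the project requirements",
    "the team meeting clarified our project goals",
    "debugging the sensor code took most of the lab session",
    "fixing the code for the sensor was the hardest part",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each row of components_ is a topic's word-weight vector, which a
# researcher would still interpret and label by hand.
doc_topics = lda.transform(X)  # per-document topic mixtures (rows sum to 1)
```

Inspecting the highest-weight words per row of `lda.components_` is the step that corresponds to the human coder's theme labeling.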
  3. Human–robot collaboration has emerged as a prominent research topic in recent years. To enhance collaboration and ensure safety between humans and robots, researchers employ a variety of methods. One such method is physiological computing, which aims to estimate a human’s psycho-physiological state by measuring various physiological signals such as galvanic skin response (GSR), electrocardiogram (ECG), heart rate variability (HRV), and electroencephalogram (EEG). This information is then used to provide feedback to the robot. In this paper, we present the latest state-of-the-art methods in physiological computing for human–robot collaboration. Our goal is to provide a comprehensive guide for new researchers to understand the commonly used physiological signals, data collection methods, and data labeling techniques. Additionally, we have categorized and tabulated relevant research to further aid in understanding this area of study.
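As a concrete example of the signals listed above, a common time-domain HRV feature can be computed directly from ECG-derived RR intervals. This generic sketch (RMSSD) is illustrative and not tied to any particular system surveyed:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a standard time-domain HRV feature derived from the ECG R-peaks."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Lower RMSSD is commonly associated with higher stress or workload, which is the kind of state estimate such systems feed back to the robot.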
  4. ABSTRACT Cochlear hair cell stereocilia bundles are key organelles required for normal hearing. Often, deafness mutations cause aberrant stereocilia heights or morphology that are visually apparent but challenging to quantify. As actin-based structures, stereocilia are easily and most often labeled with phalloidin and then imaged with 3D confocal microscopy. Unfortunately, phalloidin non-specifically labels all the actin in the tissue and cells and therefore results in a challenging segmentation task wherein the stereocilia phalloidin signal must be separated from the rest of the tissue. This can require many hours of manual human effort for each 3D confocal image stack. Currently, there are no existing software pipelines that provide an end-to-end automated solution for 3D stereocilia bundle instance segmentation. Here we introduce VASCilia, a Napari plugin designed to automatically generate 3D instance segmentation and analysis of 3D confocal images of cochlear hair cell stereocilia bundles stained with phalloidin. This plugin combines user-friendly manual controls with advanced deep learning-based features to streamline analyses. With VASCilia, users can begin their analysis by loading image stacks. The software automatically preprocesses these samples and displays them in Napari. At this stage, users can select their desired range of z-slices, adjust their orientation, and initiate 3D instance segmentation. After segmentation, users can remove any undesired regions and obtain measurements including volume, centroids, and surface area. VASCilia introduces unique features that measure bundle heights, determine their orientation with respect to the planar polarity axis, and quantify the fluorescence intensity within each bundle. The plugin is also equipped with trained deep learning models that differentiate between inner hair cells and outer hair cells and predict their tonotopic position within the cochlear spiral.
Additionally, the plugin includes a training section that allows other laboratories to fine-tune our model with their own data, provides responsive mechanisms for manual corrections through event-handlers that check user actions, and allows users to share their analyses by uploading a pickle file containing all intermediate results. We believe this software will become a valuable resource for the cochlea research community, which has traditionally lacked specialized deep learning-based tools for obtaining high-throughput image quantitation. Furthermore, we plan to release our code along with a manually annotated dataset that includes approximately 55 3D stacks featuring instance segmentation. This dataset comprises a total of 1,870 instances of hair cells, distributed between 410 inner hair cells and 1,460 outer hair cells, all annotated in 3D. As the first open-source dataset of its kind, we aim to establish a foundational resource for constructing a comprehensive atlas of cochlea hair cell images. Altogether, this open-source tool will greatly accelerate the analysis of stereocilia bundles and demonstrate the power of deep learning-based algorithms for challenging segmentation tasks in biological imaging research. Ultimately, this initiative will support the development of foundational models adaptable to various species, markers, and imaging scales to advance and accelerate research within the cochlea research community.
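As a simplified illustration of one measurement a tool like this reports, a bundle's height can be approximated from its 3D instance mask as the z-extent of its voxels times the slice spacing. VASCilia's actual orientation-aware height measurement is more sophisticated; the function and the spacing value here are hypothetical:

```python
def bundle_height(voxels, z_step_um=0.2):
    """Approximate a stereocilia bundle's height as the z-extent of its
    segmented voxels. voxels is an iterable of (z, y, x) indices from one
    instance mask; z_step_um is the confocal z-slice spacing in microns
    (an assumed, illustrative value)."""
    zs = [z for z, _, _ in voxels]
    return (max(zs) - min(zs) + 1) * z_step_um
```

The same per-instance voxel lists support the other reported measurements (volume as voxel count times voxel volume, centroids as coordinate means).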
  5. This paper presents an approach for generating linear time-invariant state-space models of a small Unmanned Air System. An instrumentation system using the Robot Operating System with commercial off-the-shelf components is implemented to record flight data and inject automated excitation signals. Offline system identification is conducted using the Observer/Kalman Identification algorithm to produce a discrete-time linear time-invariant state-space model, which is then converted to a continuous-time model for analysis. Challenges concerning data collection and inverted V-tail modelling are discussed, and solutions are presented. Longitudinal, lateral/directional, and combined longitudinal/lateral-directional models of the test vehicle are generated using both manual and automated excitations, and are presented and compared. The generated longitudinal and lateral/directional results are compared to results for a small Unmanned Air System with a standard empennage. Flight test results presented in the paper show reasonable agreement between the decoupled longitudinal and lateral/directional models and the combined longitudinal/lateral-directional model.
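The offline identification step fits a discrete-time model to recorded state and input histories. As a scalar stand-in for the paper's OKID-based multivariable fit (which estimates full state-space matrices), a first-order model x[k+1] = a*x[k] + b*u[k] can be estimated by ordinary least squares:

```python
def identify_first_order(x, u):
    """Least-squares fit of a, b in x[k+1] = a*x[k] + b*u[k].
    x has N+1 recorded states, u has N recorded inputs. This is a scalar
    toy version of discrete-time state-space identification, not OKID."""
    rows = list(zip(x[:-1], u, x[1:]))  # (x[k], u[k], x[k+1]) triples
    # normal equations for the two-parameter regression
    sxx = sum(xk * xk for xk, _, _ in rows)
    suu = sum(uk * uk for _, uk, _ in rows)
    sxu = sum(xk * uk for xk, uk, _ in rows)
    sxy = sum(xk * xn for xk, _, xn in rows)
    suy = sum(uk * xn for _, uk, xn in rows)
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - sxu * suy) / det
    b = (sxx * suy - sxu * sxy) / det
    return a, b
```

With noiseless data and a persistently exciting input the true parameters are recovered exactly, which is why injected excitation signals matter for identification quality.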