This content will become publicly available on December 1, 2025

Title: Preliminary Analysis of Collar Sensors for Guide Dog Training Using Convolutional Long Short-Term Memory, Kernel Principal Component Analysis and Multi-Sensor Data Fusion
Guide dogs play a crucial role in enhancing independence and mobility for people with visual impairment, offering invaluable assistance in navigating daily tasks and environments. However, the extensive training required for these dogs is costly, resulting in an availability that falls well short of the high demand for such skilled working animals. Toward optimizing the training process and better understanding the challenges these guide dogs may experience in the field, we have created a multi-sensor smart collar system. In this study, we developed and compared two supervised machine learning methods to analyze the data acquired from these sensors. We found that the Convolutional Long Short-Term Memory (Conv-LSTM) network worked much more efficiently on subsampled data and Kernel Principal Component Analysis (KPCA) on interpolated data; each attained approximately 40% accuracy on a 10-state system. Because it requires no training, KPCA is much faster, but it scales less well to larger datasets. Among the various sensors on the collar system, we observed that the inertial measurement units account for the vast majority of predictability, and that adding environmental acoustic sensing data slightly improved performance on most datasets. We also created a lexicon of data patterns using an unsupervised autoencoder. We present several regions of relatively higher density in the latent variable space that correspond to more common patterns, along with our attempt to visualize these patterns. In this preliminary effort, we found that several test states could be combined into larger superstates to simplify the testing procedures. Additionally, environmental sensor data did not carry much weight, as air conditioning units maintained the testing room at standard conditions.
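As a rough illustration of the two supervised pipelines the abstract compares, the sketch below trains a Conv-LSTM-style network (read here as a 1-D convolutional front end feeding an LSTM) and a KPCA-plus-linear-classifier pipeline on synthetic stand-ins for windowed collar data. The window shape, layer sizes, and channel count are assumptions for illustration, not the authors' configuration; only the 10-state label set comes from the abstract.

```python
# Minimal sketch of the two pipelines described in the abstract, on synthetic
# stand-ins for windowed collar data. Shapes and layer sizes are illustrative
# assumptions, not the authors' configuration.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from tensorflow import keras

n_windows, timesteps, channels, n_states = 512, 100, 9, 10  # e.g. 9 IMU axes
X = np.random.randn(n_windows, timesteps, channels).astype("float32")
y = np.random.randint(0, n_states, size=n_windows)

# Pipeline A: Conv-LSTM style network (1-D conv front end feeding an LSTM).
conv_lstm = keras.Sequential([
    keras.layers.Input(shape=(timesteps, channels)),
    keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.LSTM(64),
    keras.layers.Dense(n_states, activation="softmax"),
])
conv_lstm.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
conv_lstm.fit(X, y, epochs=3, batch_size=32, verbose=0)

# Pipeline B: Kernel PCA features + a simple linear classifier. KPCA itself
# needs no gradient training, consistent with the abstract's observation
# that it is the faster of the two methods.
kpca_clf = make_pipeline(
    KernelPCA(n_components=20, kernel="rbf"),
    LogisticRegression(max_iter=1000),
)
kpca_clf.fit(X.reshape(n_windows, -1), y)  # flatten each window to a vector
```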
Award ID(s):
1554367 1329738 2319389 1160483
PAR ID:
10561058
Author(s) / Creator(s):
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Animals
Volume:
14
Issue:
23
ISSN:
2076-2615
Page Range / eLocation ID:
3403
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Wearable devices that have low-power sensors, processors, and communication capabilities are gaining wide adoption in several health applications. The machine learning algorithms on these devices assume that data from all sensors are available during runtime. However, data from one or more sensors may be unavailable due to energy or communication challenges. This loss of sensor data can degrade the accuracy of the application. Prior approaches to handling missing data, such as generative models or training multiple classifiers for each combination of missing sensors, are not suitable for low-energy wearable devices due to their high runtime overhead. In contrast to prior approaches, we present an energy-efficient approach, referred to as Sensor-Aware iMputation (SAM), to accurately impute missing data at runtime and recover application accuracy. SAM first uses unsupervised clustering to obtain clusters of similar sensor data patterns. Next, it learns inter-relationships between clusters to obtain imputation patterns for each combination of clusters using a principled sensor-aware search algorithm. Using sensor data for clustering before choosing imputation patterns ensures that the imputation is aware of sensor data observations. Experiments on seven diverse wearable sensor-based time-series datasets demonstrate that SAM maintains accuracy within 5% of the no-missing-data baseline when one sensor is missing. We also compare SAM against generative adversarial imputation networks (GAIN), transformers, and k-nearest neighbor methods. Results show that SAM outperforms all three approaches by more than 25% on average when two sensors are missing, with negligible overhead compared to the baseline.
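The full SAM method involves a principled search over inter-cluster imputation patterns; the minimal sketch below captures only the core idea as the abstract describes it: cluster complete sensor windows offline, then, at runtime, assign a window with missing channels to the nearest cluster using its observed channels and fill the gap from that cluster's centroid. Channel counts, cluster counts, and the data are illustrative assumptions.

```python
# Hedged sketch of cluster-based imputation in the spirit of SAM. The real
# SAM search over imputation patterns is more involved; names and sizes here
# are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 6))        # 6 sensor channels, all present
obs_idx, miss_idx = [0, 1, 2, 3], [4, 5]  # channels 4-5 lost at runtime

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(train)
centroids = km.cluster_centers_

def impute(window_obs):
    """Fill missing channels from the closest centroid in observed space."""
    d = np.linalg.norm(centroids[:, obs_idx] - window_obs, axis=1)
    c = centroids[np.argmin(d)]
    full = np.empty(len(obs_idx) + len(miss_idx))
    full[obs_idx] = window_obs
    full[miss_idx] = c[miss_idx]
    return full

print(impute(rng.normal(size=len(obs_idx))))
```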
  2. Environmental sensors are crucial for monitoring weather conditions and the impacts of climate change. However, it is challenging to place sensors in a way that maximises the informativeness of their measurements, particularly in remote regions like Antarctica. Probabilistic machine learning models can suggest informative sensor placements by finding sites that maximally reduce prediction uncertainty. Gaussian process (GP) models are widely used for this purpose, but they struggle with capturing complex non-stationary behaviour and scaling to large datasets. This paper proposes using a convolutional Gaussian neural process (ConvGNP) to address these issues. A ConvGNP uses neural networks to parameterise a joint Gaussian distribution at arbitrary target locations, enabling flexibility and scalability. Using simulated surface air temperature anomaly over Antarctica as training data, the ConvGNP learns spatial and seasonal non-stationarities, outperforming a non-stationary GP baseline. In a simulated sensor placement experiment, the ConvGNP better predicts the performance boost obtained from new observations than GP baselines, leading to more informative sensor placements. We contrast our approach with physics-based sensor placement methods and propose future steps towards an operational sensor placement recommendation system. Our work could help to realise environmental digital twins that actively direct measurement sampling to improve the digital representation of reality.
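For context, the sketch below implements the kind of GP-based, uncertainty-driven placement loop that papers in this area use as a baseline: greedily pick the candidate site with the largest predictive variance, refit, and repeat. The ConvGNP replaces the GP's fixed kernel with a neural parameterisation; this sketch, on synthetic data with an assumed RBF kernel and made-up candidate sites, shows only the placement loop itself.

```python
# Greedy variance-reduction sensor placement with a plain GP baseline.
# Data, kernel settings, and the number of placements are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 10, size=(200, 2))        # candidate sites (x, y)
truth = lambda p: np.sin(p[:, 0]) + 0.5 * np.cos(p[:, 1])

placed = [0]                                          # start with one sensor
for _ in range(4):                                    # place 4 more
    X = candidates[placed]
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)).fit(X, truth(X))
    _, std = gp.predict(candidates, return_std=True)
    std[placed] = -np.inf                             # don't reuse a site
    placed.append(int(np.argmax(std)))

print("chosen sites:", candidates[placed])
```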
  3. Most attention in K-12 artificial intelligence and machine learning (AI/ML) education has been given to having youths train models, with much less attention paid to the equally important testing of models when creating machine learning applications. Testing ML applications allows for the evaluation of models against predictions and can help creators identify and address failure and edge cases that could negatively impact user experiences. We investigate how testing each other's projects supported youths in taking perspective on the functionality, performance, and potential issues of their own projects. We analyzed testing worksheets and audio and video recordings collected during a two-week workshop in which 11 high school youths created physical computing projects that included (audio, pose, and image) ML classifiers. We found that through peer-testing, youths reflected on the size of their training datasets, the diversity of their training data, the design of their classes, and the contexts in which they produced training data. We discuss future directions for research on peer-testing in AI/ML education and current limitations of these kinds of activities.
  4. In the modern industrial setting, there is an increasing demand for all types of sensors. The demand for both the quantity and quality of sensors is increasing annually. Our research focuses on thin-film nitrate sensors in particular, and it seeks to provide a robust method to monitor the quality of the sensors while reducing the cost of production. We are researching an image-based machine learning method to allow for real-time quality assessment of every sensor in the manufacturing pipeline. This opens up the possibility of real-time production parameter adjustments to enhance sensor performance, with the potential to significantly reduce the cost of quality control and improve sensor quality at the same time. Previous research has shown that the texture of the topical layer (the ion-selective membrane (ISM) layer) of the sensor directly correlates with the performance of the sensor. Our method seeks to use this correlation to train a learning-based system to predict the performance of any given sensor from a still photo of the sensor's active region, i.e., the ISM. This allows for the real-time assessment of every sensor instead of sample testing: random sample testing is costly in both time and labor, and it does not account for every individual sensor. Sensor measurement is a crucial portion of the data collection process; to measure performance, the sensors are taken to a specialized lab. During the measurement process, noise and error are unavoidable; therefore, we generated credibility data based on the performance data to show the reliability of each sensor performance signal at each sample time. In this paper, we propose a machine learning based method to predict sensor performance using image features extracted from non-contact sensor images, guided by the credibility data. This eliminates the need to test every sensor as it is manufactured, which is not practical in a high-speed roll-to-roll setting, thus truly enabling a certify-as-built framework.
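A hedged sketch of the proposed idea appears below: extract texture features from an image of the sensor's active region and fit a regressor whose per-sample weights come from the credibility data. The crude intensity/gradient features and the gradient-boosting regressor are stand-ins of my choosing; the paper's actual features and model are not specified in this abstract.

```python
# Illustrative sketch: predict sensor performance from image texture
# features, weighting each training example by its measurement credibility.
# Features, model, and data are all stand-ins, not the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def texture_features(img):
    """Crude texture descriptors for an ISM-region grayscale image."""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows, columns
    return np.array([img.mean(), img.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

images = rng.integers(0, 255, size=(300, 64, 64))   # stand-in ISM photos
performance = rng.normal(size=300)                  # measured performance
credibility = rng.uniform(0.2, 1.0, size=300)       # per-sensor reliability

X = np.stack([texture_features(im) for im in images])
model = GradientBoostingRegressor().fit(X, performance,
                                        sample_weight=credibility)
print(model.predict(X[:3]))
```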
  5. Video scene analysis is a well-investigated area where researchers have devoted efforts to detecting and classifying people and objects in the scene. However, real-life scenes are more complex: the intrinsic states of the objects (e.g., machine operating states or human vital signs) are often overlooked by vision-based scene analysis. Recent work has proposed a radio frequency (RF) sensing technique, wireless vibrometry, that employs wireless signals to sense subtle vibrations from objects and infer their internal states. We envision that the combination of video scene analysis with wireless vibrometry forms a more comprehensive understanding of the scene, namely "rich scene analysis". However, the RF sensors used in wireless vibrometry only provide time series, and it is challenging to associate these time series with multiple real-world objects. We propose a real-time RF-vision sensor fusion system, Capricorn, that efficiently builds a cross-modal correspondence between visual pixels and RF time series to better understand the complex nature of a scene. The vision sensors in Capricorn model the surrounding environment in 3D and obtain the distances of different objects. In the RF domain, distance is proportional to the signal time-of-flight (ToF), and we can leverage the ToF to separate the RF time series corresponding to each object. The RF-vision sensor fusion in Capricorn brings multiple benefits. The vision sensors provide environmental context to guide the processing of RF data, which helps select the most appropriate algorithms and models. Meanwhile, the RF sensor yields additional information that is invisible to vision sensors, providing insight into objects' intrinsic states. Our extensive evaluations show that Capricorn monitors the operating status of multiple appliances in real time with an accuracy above 97% and recovers vital signs such as respiration from multiple people. A video (https://youtu.be/b-5nav3Fi78) demonstrates the capability of Capricorn.
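The core association step can be illustrated compactly: vision supplies each object's range, the RF time-of-flight gives the range of each return via d = c * tof / 2, and returns are matched to the nearest visual object. The sketch below uses made-up ranges and an assumed 0.5 m gating threshold; Capricorn's actual pipeline is more elaborate.

```python
# Sketch of the distance-based RF/vision association described above.
# All values and the gating threshold are illustrative assumptions.
import numpy as np

C = 3e8                                  # speed of light, m/s
vision_dist = np.array([1.2, 2.7, 4.1])  # object ranges from the 3D vision model
rf_tof = np.array([8.1e-9, 27.5e-9, 18.2e-9])  # per-return time of flight, s

rf_dist = C * rf_tof / 2.0               # convert ToF to range (round trip)
# Assign each RF return to the closest object, if within the gating threshold.
for i, d in enumerate(rf_dist):
    j = int(np.argmin(np.abs(vision_dist - d)))
    if abs(vision_dist[j] - d) < 0.5:    # 0.5 m gate (assumed)
        print(f"RF return {i} (~{d:.2f} m) -> object {j}")
    else:
        print(f"RF return {i} (~{d:.2f} m) unmatched")
```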