Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is essential. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames with convolutional neural networks (CNNs) (the frame pipeline) is resource-limited on constrained aerial edge-robots. Even with added compute, throughput is ultimately capped at the camera's frame rate, and frame-only systems fail to capture the fine temporal dynamics of the environment. Bio-inspired event cameras paired with spiking neural networks (SNNs) provide an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal detail of the scene at high speed but lags in accuracy. In this work, we propose a target localization system that fuses the complementary spatio-temporal strengths of the two pipelines: event-camera and SNN-based high-speed target estimation, and frame-camera and CNN-driven reliable object detection. One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancellation in houseflies: it fuses vestibular sensing with vision to cancel the activity corresponding to the predator's self-motion. We also integrate this neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and …
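The housefly-inspired ego-motion cancellation described above can be pictured as subtracting IMU-predicted activity from the event stream. Below is a minimal frame-based sketch of that idea, assuming a small-angle rotation model over per-pixel event-count windows; the function name, the simple roll-based warp, and all parameters are illustrative assumptions, not the paper's SNN implementation.

```python
import numpy as np

def ego_motion_suppress(events_now, events_prev, gyro, dt, f_px, thresh=2.0):
    """Zero out event activity that is explained by the camera's own rotation.

    events_now / events_prev : HxW per-pixel event counts for consecutive windows
    gyro : (wx, wy) angular rates from the vestibular sensor (IMU), rad/s
    dt   : window duration in seconds
    f_px : focal length in pixels
    """
    # Small-angle prediction: pure rotation shifts the whole image by ~f*w*dt px.
    dx = int(round(f_px * gyro[1] * dt))   # yaw   -> horizontal pixel shift
    dy = int(round(f_px * gyro[0] * dt))   # pitch -> vertical pixel shift

    # Warp the previous window's activity by the predicted ego-motion shift.
    predicted = np.roll(np.roll(events_prev, dy, axis=0), dx, axis=1)

    # Residual activity = what self-motion cannot explain (the moving target).
    residual = events_now - predicted
    residual[residual < thresh] = 0.0
    return residual
```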
A Wearable Multi-modal Bio-sensing System Towards Real-world Applications
Multi-modal bio-sensing has recently been used as an effective research tool in affective computing, autism, clinical disorders, and virtual reality, among other areas. However, none of the existing bio-sensing systems supports multi-modality in a wearable manner outside well-controlled laboratory environments with research-grade measurements. This work attempts to bridge this gap by developing a wearable multi-modal bio-sensing system capable of collecting, synchronizing, recording, and transmitting data from multiple bio-sensors (PPG, EEG, eye-gaze headset, body motion capture, GSR, etc.) while also providing task-modulation features, including visual-stimulus tagging. This study describes the development and integration of the various components of our system. We evaluate the developed sensors by comparing their measurements to those obtained by standard research-grade bio-sensors. We first evaluate the sensor modalities of our headset, namely the earlobe-based PPG module with motion-noise canceling, against ECG for heart-rate calculation. We also compare the steady-state visually evoked potentials (SSVEP) measured by our shielded dry EEG sensors with the potentials obtained by commercially available dry EEG sensors. We further investigate the effect of head movements on the accuracy and precision of our wearable eye-gaze system. Furthermore, we carry out two practical tasks to demonstrate the applications of using multiple sensor modalities for exploring previously …
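Synchronizing bio-sensors that sample at very different rates is a core part of such a system. The toy sketch below shows one common approach, resampling multi-rate (timestamp, value) streams onto a shared clock by linear interpolation; the function name, the 100 Hz common rate, and the example sensor rates are assumptions for illustration, not the system's actual synchronization protocol.

```python
import numpy as np

def synchronize_streams(streams, rate_hz=100.0):
    """Resample multiple (timestamps, values) bio-sensor streams onto one clock.

    streams : dict mapping sensor name -> (t, x) arrays with t in seconds,
              monotonically increasing (e.g. PPG ~125 Hz, EEG ~250 Hz, GSR ~4 Hz)
    Returns a common time base and linearly interpolated values per sensor.
    """
    # The common clock spans only the interval covered by every sensor.
    t0 = max(t[0] for t, _ in streams.values())
    t1 = min(t[-1] for t, _ in streams.values())
    t_common = np.arange(t0, t1, 1.0 / rate_hz)
    return t_common, {name: np.interp(t_common, t, x)
                      for name, (t, x) in streams.items()}
```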
- Award ID(s): 1719130
- Publication Date:
- NSF-PAR ID: 10107953
- Journal Name: IEEE Transactions on Biomedical Engineering
- Volume: 66
- Issue: 4
- Page Range or eLocation-ID: 1137-1147
- ISSN: 0018-9294
- Sponsoring Org: National Science Foundation
More Like this
Smart manufacturing, which integrates a multi-sensing system with physical manufacturing processes, has been widely adopted in industry to support online, real-time decision making that improves manufacturing quality. A multi-sensing system for each specific manufacturing process can efficiently collect in situ process variables from different sensor modalities to reflect process variations in real time. In practice, however, cost considerations rarely allow equipping each manufacturing process with many sensors. Moreover, it is important to interpret the relationship between the sensing modalities and the quality variables through the model. It is therefore necessary to model the quality-process relationship by selecting, from the multi-modal sensing system, the sensor modalities most relevant to a specific quality measurement. In this research, we adopt the concept of best-subset variable selection and propose a new model called Multi-mOdal beSt Subset modeling (MOSS). MOSS effectively selects the important sensor modalities and improves accuracy in quality-process modeling via functional norms that characterize the overall effects of individual modalities. The significance of sensor modalities can then be used to determine the sensor-placement strategy in smart manufacturing. …
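As a rough illustration of group-level best-subset selection (not the MOSS estimator itself, which scores modality effects with functional norms), the sketch below enumerates modality subsets, where every modality's feature block enters or leaves as a whole, and keeps the subset with the smallest least-squares residual; the function name and the plain RSS criterion are assumptions.

```python
import numpy as np
from itertools import combinations

def best_modality_subset(X_by_modality, y, max_size=3):
    """Exhaustive best-subset selection over sensor modalities (column groups).

    X_by_modality : dict name -> (n, p_m) feature matrix for that modality
    y             : (n,) quality measurement
    """
    names = list(X_by_modality)
    best_rss, best_subset = np.inf, None
    for k in range(1, max_size + 1):
        for subset in combinations(names, k):
            # Stack the chosen modalities' features, plus an intercept column.
            X = np.hstack([np.ones((len(y), 1))] +
                          [X_by_modality[m] for m in subset])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            if rss < best_rss:
                best_rss, best_subset = rss, subset
    return best_subset, best_rss
```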
Over the last decade, facial landmark tracking and 3D reconstruction have gained considerable attention due to their numerous applications, such as human-computer interaction, facial expression analysis, and emotion recognition. Traditional approaches require users to be confined to a particular location and to face a camera under constrained recording conditions (e.g., without occlusions and under good lighting). This highly restricted setting prevents them from being deployed in many application scenarios involving human motion. In this paper, we propose the first single-earpiece lightweight biosensing system, BioFace-3D, that can unobtrusively, continuously, and reliably sense facial movements, track 2D facial landmarks, and further render 3D facial animations. Our single-earpiece biosensing system takes advantage of a cross-modal transfer learning model to transfer the knowledge embodied in a high-grade visual facial landmark detection model to the low-grade biosignal domain. After training, BioFace-3D can perform continuous 3D facial reconstruction directly from the biosignals, without any visual input. By removing the need for a camera positioned in front of the user, this paradigm shift from visual sensing to biosensing opens new opportunities in many emerging mobile and IoT applications. Extensive experiments involving 16 participants under various settings demonstrate that BioFace-3D can accurately track 53 major facial …
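The cross-modal transfer described above follows a teacher-student pattern: a visual landmark detector labels camera frames, and a biosignal model learns to regress toward those pseudo-labels. Below is a minimal PyTorch sketch of one such training step, assuming hypothetical shapes (8-channel biosignal windows of 256 samples, 53 landmarks x 2 coordinates) and a caller-supplied teacher; none of these names or dimensions come from the BioFace-3D paper.

```python
import torch
import torch.nn as nn

# Hypothetical student: 8-channel biosignal window -> 53 landmarks x 2 coords.
student = nn.Sequential(
    nn.Flatten(),
    nn.Linear(8 * 256, 512), nn.ReLU(),
    nn.Linear(512, 53 * 2),
)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def distill_step(biosignal, frame, teacher):
    """One cross-modal transfer step: the visual teacher labels the frame,
    and the biosignal student regresses toward those pseudo-landmarks."""
    with torch.no_grad():
        target = teacher(frame)      # (B, 106) landmark estimates from vision
    pred = student(biosignal)        # (B, 106) predicted from biosignals alone
    loss = loss_fn(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

After training, only `student` is needed at inference time, which is what lets the earpiece operate without any camera.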
Optimization of pain assessment and treatment is an active area of research in healthcare. The purpose of this research is to create an objective pain-intensity estimation system based on multi-modal sensing signals through experimental studies. Twenty-eight healthy subjects were recruited at Northeastern University. Nine physiological modalities were utilized in this research: facial expressions (FE), electroencephalography (EEG), eye movement (EM), skin conductance (SC), blood volume pulse (BVP), electromyography (EMG), respiration rate (RR), skin temperature (ST), and blood pressure (BP). Statistical analysis and machine learning algorithms were deployed to analyze the physiological data. FE, EEG, SC, BVP, and BP proved able to detect different pain states in healthy subjects, and combining modalities proved promising for detecting different levels of pain. A decision-level multi-modal fusion also proved efficient and accurate in classifying painful states.
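Decision-level fusion of the kind mentioned above typically trains one classifier per modality and combines their outputs rather than their raw signals. The sketch below shows a simple weighted-average variant; the function name and the use of validation accuracy as weights are assumptions, not the study's exact fusion rule.

```python
import numpy as np

def decision_level_fusion(modality_probs, weights=None):
    """Fuse per-modality pain-state classifiers at the decision level.

    modality_probs : dict name -> (n_samples, n_classes) class probabilities
                     from a classifier trained on that single modality
    weights        : optional dict of per-modality weights (e.g. each
                     modality's validation accuracy); defaults to uniform
    """
    names = list(modality_probs)
    if weights is None:
        weights = {m: 1.0 for m in names}
    total = sum(weights[m] for m in names)
    # Weighted average of class probabilities, then a single fused decision.
    fused = sum(weights[m] * modality_probs[m] for m in names) / total
    return fused.argmax(axis=1)
```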
In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g., motion capture or wearable sensors), it is unclear how the relationship between the sensor modality and the motion granularity of the activities (e.g., gross or fine) impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (up to 28.06%). These results suggest that the two sensor modalities are complementary, and that robots may benefit from systems that use multiple modalities to simultaneously, but independently, detect gross and fine-grained motion (a sketch of that design follows). Our findings will help guide researchers in numerous fields of robotics, including …
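The complementary design suggested by those results can be as simple as routing each modality to the recognizer it serves best. The fragment below is a hypothetical sketch assuming two pre-trained classifiers with a scikit-learn-style `predict` interface; it is an illustration of the proposed architecture, not code from the study.

```python
def dual_granularity_har(mocap_window, wearable_window, gross_clf, fine_clf):
    """Run two independent recognizers in parallel, each on the modality
    that yielded higher accuracy for its granularity: motion capture for
    gross-motion activities, the wearable armband for fine-grained ones."""
    return {
        "gross": gross_clf.predict(mocap_window),      # e.g. Vicon features
        "fine":  fine_clf.predict(wearable_window),    # e.g. Myo EMG/IMU
    }
```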