Title: A Wearable Multi-modal Bio-sensing System Towards Real-world Applications
Multi-modal bio-sensing has recently been used as an effective research tool in affective computing, autism, clinical disorders, and virtual reality, among other areas. However, no existing bio-sensing system supports multi-modality in a wearable manner outside well-controlled laboratory environments with research-grade measurements. This work attempts to bridge this gap by developing a wearable multi-modal bio-sensing system capable of collecting, synchronizing, recording, and transmitting data from multiple bio-sensors (PPG, EEG, eye-gaze headset, body motion capture, GSR, etc.) while also providing task-modulation features, including visual-stimulus tagging. This study describes the development and integration of the various components of our system. We evaluate the developed sensors by comparing their measurements with those obtained by standard research-grade bio-sensors. We first evaluate the sensor modalities of our headset, namely the earlobe-based PPG module with motion-noise canceling, against ECG for heart-rate calculation. We also compare the steady-state visually evoked potentials (SSVEP) measured by our shielded dry EEG sensors with those obtained by commercially available dry EEG sensors. We further investigate the effect of head movements on the accuracy and precision of our wearable eye-gaze system. Finally, we carry out two practical tasks to demonstrate how multiple sensor modalities can be used to explore previously unanswerable questions in bio-sensing. Specifically, we use bio-sensing to show which strategy works best for playing the Where's Waldo? visual-search game, how EEG differs between true and false target fixations in this game, and how loss/draw/win states can be predicted from bio-sensing modalities, along with their limitations, in a Rock-Paper-Scissors game.
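The SSVEP comparison above hinges on measuring spectral power at the flicker frequency. As a minimal illustration of the idea (not the paper's actual analysis pipeline; the function name, windowing, and band width are assumptions), one can score an EEG channel's SSVEP response as the power at the stimulus frequency relative to neighboring bins:

```python
import numpy as np

def ssvep_score(eeg, fs, stim_hz, half_band_hz=1.0):
    """Ratio of windowed FFT power at the stimulus frequency to the
    mean power of neighboring bins within +/- half_band_hz."""
    n = len(eeg)
    power = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    target = int(np.argmin(np.abs(freqs - stim_hz)))
    neighbors = np.abs(freqs - stim_hz) <= half_band_hz
    neighbors[target] = False  # exclude the target bin itself
    return power[target] / power[neighbors].mean()

# Synthetic check: a 12 Hz flicker response buried in noise.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(4 * fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 12.0 * t) + rng.standard_normal(t.size)
```

A genuine response yields a score well above 1; comparing this score between the shielded dry electrodes and a commercial reference gives a simple figure of merit.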
Award ID(s):
1719130
NSF-PAR ID:
10107953
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
IEEE Transactions on Biomedical Engineering
Volume:
66
Issue:
4
ISSN:
0018-9294
Page Range / eLocation ID:
1137 - 1147
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    ROV operations are typically performed via a traditional control kiosk with limited data-feedback methods, such as joysticks and camera-view displays on a surface vessel. This traditional setup requires significant personnel-on-board (POB) time and imposes high requirements for personnel training. This paper proposes a virtual reality (VR) based haptic-visual ROV teleoperation system that can substantially simplify ROV teleoperation and enhance the remote operator's situational awareness.

    This study leverages recent developments in Mixed Reality (MR) technologies, sensory augmentation, sensing technologies, and closed-loop control to visualize and render complex underwater environmental data in an intuitive and immersive way. The raw sensor data are processed with physics-engine systems and rendered as a high-fidelity digital twin model in game engines. Certain features are visualized and displayed via the VR headset, whereas others are manifested as haptic and tactile cues via our haptic feedback systems. We applied a simulation approach to test the developed system.

    With our developed system, a high-fidelity subsea environment is reconstructed from the sensor data collected by an ROV, including bathymetric, hydrodynamic, visual, and vehicle-navigational measurements. Specifically, the vehicle is equipped with a navigation sensor system for real-time state estimation, an acoustic Doppler current profiler for far-field flow measurement, and a bio-inspired artificial lateral-line hydrodynamic sensor system for near-field small-scale hydrodynamics. Optimized game-engine rendering algorithms then visualize key environmental features as augmented user-interface elements in a VR headset, such as color-coded vectors, to indicate the environmental impact on the performance and function of the ROV. In addition, augmented environmental feedback, such as hydrodynamic forces, is translated into patterned haptic stimuli via a haptic suit to indicate drift-inducing flows in the near field. A pilot case study was performed to verify the feasibility and effectiveness of the system design in a series of simulated ROV operation tasks.

    ROVs are widely used in subsea exploration and intervention tasks, playing a critical role in offshore inspection, installation, and maintenance activities. The innovative ROV teleoperation feedback and control system will lower the barrier for ROV pilot jobs.
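The translation of drift-inducing flow into patterned haptic stimuli can be sketched as a simple direction-to-motor mapping. The ring layout, normalization range, and function name below are illustrative assumptions, not the actual system's implementation:

```python
import math

def flow_to_haptic(vx, vy, max_speed=1.5, n_motors=8):
    """Map a near-field flow vector (m/s) onto a ring of vibro-tactile
    motors on a haptic suit: the motor facing the drift direction
    vibrates with an intensity proportional to flow speed, saturating
    at max_speed."""
    speed = math.hypot(vx, vy)
    intensity = min(speed / max_speed, 1.0)           # normalize to [0, 1]
    heading = math.atan2(vy, vx) % (2 * math.pi)      # drift direction
    motor = int(round(heading / (2 * math.pi / n_motors))) % n_motors
    pattern = [0.0] * n_motors
    pattern[motor] = intensity
    return pattern
```

For example, a 0.75 m/s flow straight to port would activate only the port-facing motor at half intensity, giving the pilot an immediate directional cue without looking at a display.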

     
  2. Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is essential. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames using convolutional neural networks (CNNs) (the frame pipeline) is resource-limited on constrained aerial edge-robots. Adding more compute resources still ultimately caps throughput at the camera's frame rate, and frame-only traditional systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras paired with spiking neural networks (SNNs) provide an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal details of the scene at high speed but lags in accuracy. In this work, we propose a target-localization system that combines event-camera and SNN-based high-speed target estimation with frame-based camera and CNN-driven reliable object detection, fusing the complementary spatio-temporal strengths of the event and frame pipelines. One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancellation in houseflies: it fuses vestibular sensing with vision to cancel the activity corresponding to the predator's self-motion. We also integrate the neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments. It is then demonstrated in a real-world multi-drone setup with emulated event data. Subsequently, we use recorded sensory data from a multi-camera and inertial measurement unit (IMU) assembly to show that the system works as desired while tolerating realistic noise in the vision and IMU sensors.
We analyze the design space to identify optimal parameters for the spiking neurons and CNN models and to check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even under constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities, inspired by discoveries in neuroscience, to break fundamental trade-offs in frame-based computer vision.
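The housefly-inspired ego-motion cancellation amounts to predicting the flow induced by the predator's own rotation from the IMU and removing it from the visual signal. A deliberately simplified, non-spiking sketch under a rotation-only camera model (the function name and noise threshold are assumptions, not the paper's SNN filter):

```python
import numpy as np

def cancel_ego_motion(event_flow_px_s, gyro_rad_s, focal_px, noise_frac=0.05):
    """Subtract the IMU-predicted, rotation-induced optic flow from an
    event-based flow field, leaving the component caused by independently
    moving targets. Assumes pure rotation about one axis, so every pixel
    sees roughly the same self-motion flow of omega * focal_length px/s."""
    predicted = gyro_rad_s * focal_px          # self-motion flow, px/s
    residual = event_flow_px_s - predicted
    # Suppress small residuals attributable to sensor noise.
    residual[np.abs(residual) < noise_frac * abs(predicted)] = 0.0
    return residual
```

In the full system this role is played by spiking neurons, but the principle is the same: vestibular input vetoes visual activity consistent with self-motion, so only prey-induced motion survives.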
  3. Gonzalez, D. (Ed.)

    Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the detailed ground truth necessary for thorough performance comparison and validation. Experiments on pre-recorded real-world data sets are also significantly limited in usefulness because they do not allow researchers to test the effectiveness of active robot perception, control, or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed in which real robots and humans in the laboratory experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment, constructing human/robot avatars that not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agent, all in real time.
New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.

     
  4.
    Smart manufacturing, which integrates a multi-sensing system with physical manufacturing processes, has been widely adopted in industry to support online, real-time decision making and improve manufacturing quality. A multi-sensing system for each specific manufacturing process can efficiently collect in situ process variables from different sensor modalities to reflect process variations in real time. In practice, however, cost considerations usually limit how many sensors each manufacturing process can be equipped with. Moreover, it is also important to interpret the relationship between the sensing modalities and the quality variables based on the model. Therefore, it is necessary to model the quality-process relationship by selecting, from the multi-modal sensing system, the sensor modalities most relevant to the specific quality measurement. In this research, we adopt the concept of best-subset variable selection and propose a new model called Multi-mOdal beSt Subset modeling (MOSS). MOSS effectively selects the important sensor modalities and improves accuracy in quality-process modeling via functional norms that characterize the overall effects of individual modalities. The significance of the sensor modalities can be used to determine the sensor-placement strategy in smart manufacturing. Moreover, the selected modalities make the quality-process model more interpretable by identifying the most correlated root causes of quality variations. The merits of the proposed model are illustrated by both simulations and a real case study of an additive manufacturing (i.e., fused deposition modeling) process.
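The idea of scoring whole modalities through norms of their coefficient blocks can be illustrated with a crude least-squares proxy. This sketches the selection principle only; it is not the MOSS estimator, and the function name and synthetic data are assumptions:

```python
import numpy as np

def modality_norms(X_blocks, y):
    """Fit one least-squares model over all modalities stacked together,
    then score each modality by the L2 norm of its coefficient block.
    Modalities with near-zero norm contribute little to the quality model
    and are candidates for removal from the sensor suite."""
    X = np.hstack(X_blocks)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    norms, start = [], 0
    for block in X_blocks:
        width = block.shape[1]
        norms.append(float(np.linalg.norm(coef[start:start + width])))
        start += width
    return norms

# Synthetic check: the quality variable depends on modality 0 only.
rng = np.random.default_rng(1)
X0, X1 = rng.standard_normal((50, 3)), rng.standard_normal((50, 3))
y = X0 @ np.array([1.0, -2.0, 0.5])
```

In this toy setting the irrelevant modality's coefficient block collapses to near zero, mirroring how the functional norms in MOSS expose which modalities drive the quality measurement.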
  5. Objective: We designed and validated a wireless, low-cost, easy-to-use, mobile, dry-electrode headset for scalp electroencephalography (EEG) recordings for closed-loop brain-computer interface (BCI) and internet-of-things (IoT) applications. Approach: The EEG-based BCI headset was designed from commercial off-the-shelf (COTS) components using a multi-pronged approach that balanced interoperability, cost, portability, usability, form factor, reliability, and closed-loop operation. Main Results: The adjustable headset was designed to accommodate 90% of the population. A patent-pending self-positioning dry electrode bracket allows vertical self-positioning while parting the user’s hair to ensure electrode contact with the scalp. In the current prototype, five EEG electrodes are incorporated in the electrode bracket, spanning the sensorimotor cortices bilaterally, and three skin sensors measure eye movements and blinks. An inertial measurement unit (IMU) monitors head movements. The EEG amplifier operates with 24-bit resolution at up to a 500 Hz sampling frequency and communicates with other devices over 802.11 b/g/n WiFi. It has a high signal-to-noise ratio (SNR) and common-mode rejection ratio (CMRR) (121 dB and 110 dB, respectively) and low input noise. In closed-loop BCI mode, the system operates at 40 Hz, including real-time adaptive noise cancellation, with 512 MB of processor memory. It supports LabVIEW as a back-end coding language and JavaScript (JS), Cascading Style Sheets (CSS), and HyperText Markup Language (HTML) as front-end coding languages, and it includes training and optimization of support vector machine (SVM) neural classifiers. Extensive bench testing supports the technical specifications, and human-subject pilot testing of a closed-loop BCI application for upper-limb rehabilitation provides proof-of-concept validation for the device’s use both in the clinic and at home.
Significance: The usability, interoperability, portability, reliability, and programmability of the proposed wireless closed-loop BCI system provide a low-cost solution for BCI, neurorehabilitation research, and IoT applications.
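To put the amplifier figures in perspective, the dB specifications convert to amplitude ratios via the 20·log10 convention, and the 40 Hz closed-loop rate sets a hard per-iteration time budget. A small sanity-check helper (illustrative arithmetic only, not code from the system):

```python
def db_to_amplitude_ratio(db):
    """Decibels to amplitude ratio (20 * log10 convention, as used for
    the SNR and CMRR of instrumentation amplifiers)."""
    return 10 ** (db / 20)

def cycle_budget_ms(rate_hz):
    """Per-iteration time budget (ms) for a closed-loop pipeline."""
    return 1000.0 / rate_hz

# A 121 dB SNR is roughly a 1.12-million-fold amplitude ratio, and
# 40 Hz closed-loop operation leaves 25 ms per iteration for filtering,
# adaptive noise cancellation, and classification.
```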