Title: Head motion classification using thread-based sensor and machine learning algorithm
Abstract: Human-machine interfaces that can track head motion promise advances in physical rehabilitation, improved augmented reality/virtual reality systems, and new tools for the study of human behavior. This paper presents a head-position monitoring and classification system using thin, flexible strain-sensing threads placed on the neck of an individual. A wireless circuit module consisting of impedance readout circuitry and a Bluetooth module records and transmits strain information to a computer. A data processing algorithm for motion recognition provides near real-time quantification of head position. Incoming data are filtered, normalized, and divided into segments. A set of features is extracted from each segment and used as input to nine classifiers, including Support Vector Machine, Naive Bayes, and k-nearest neighbors (KNN), for position prediction. A testing accuracy of around 92% was achieved for a set of nine head orientations. The results indicate that this human-machine interface platform is accurate, flexible, easy to use, and cost-effective.
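As a rough illustration of the processing pipeline described above (filter, normalize, segment, extract features, classify), here is a minimal Python/scikit-learn sketch. The window length, filter settings, feature set, and the three classifiers shown (of the paper's nine) are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def preprocess(raw, fs=50.0, cutoff=5.0, win=50):
    """Low-pass filter, z-score normalize, and split the strain stream into windows."""
    b, a = butter(2, cutoff / (fs / 2))                 # 2nd-order Butterworth low-pass
    x = filtfilt(b, a, raw, axis=0)                     # zero-phase filtering
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-9)   # per-channel normalization
    n = len(x) // win
    return x[: n * win].reshape(n, win, -1)             # (segments, samples, channels)

def extract_features(seg):
    """Per-channel summary statistics as segment features (an assumed feature set)."""
    return np.hstack([seg.mean(0), seg.std(0), seg.min(0), seg.max(0)])

# Placeholder strain data: (samples, channels); real input would arrive over Bluetooth
rng = np.random.default_rng(0)
raw = rng.standard_normal((5000, 2))
segments = preprocess(raw)
X = np.array([extract_features(s) for s in segments])
y = rng.integers(0, 9, len(X))                          # placeholder: 9 head orientations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (SVC(), GaussianNB(), KNeighborsClassifier()):
    print(type(clf).__name__, clf.fit(X_tr, y_tr).score(X_te, y_te))
```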
Award ID(s): 1934553, 1931978
PAR ID: 10285433
Author(s) / Creator(s): ; ; ;
Date Published:
Journal Name: Scientific Reports
Volume: 11
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1. This work presents a prototype of a wireless, flexible, self-powered sensor for analyzing head impact kinematics relevant to concussions, which are frequent in high-contact sports. Two untethered, paper-thin, flexible sensing devices with piezoelectric-like behavior are placed around the neck of a human head substitute and used to monitor stress/strain in this region during an impact. The mechanical energy exerted by an impact force, varied in location and magnitude, is converted to pulses of electric energy that are transmitted wirelessly to a smart device for storage and analysis. The wireless prototype is implemented with a microcontroller that has an integrated Bluetooth Low Energy module. The static and dynamic characteristics of the transmitted signal are then compared to signals from accelerometers embedded in the head substitute, to map the sensor's output to the angular velocity and acceleration during impacts. The results demonstrate that only two sensors are needed to detect impacts from any direction, and that placing multiple external sensors around the neck region could provide accurate information on the dynamics of the head during a collision that other sensors fail to capture. (A hedged sketch of this sensor-to-kinematics calibration appears after this list.)
  2. On-skin electronics have drawn extensive attention as they revolutionize healthcare, motion tracking, rehabilitation, robotics, and human-machine interaction, among other areas. Flexible and stretchable strain sensors are among the most explored devices for on-skin electronics, and many printing techniques have recently emerged that show great promise for manufacturing them. This review provides a timely survey of recent advances in printed strain sensors for on-skin electronics. It starts with an overview of sensing mechanisms for printed strain sensors, followed by a review of the printing techniques employed to fabricate these sensors. The materials, structures, and printing processes of representative strain sensors are discussed in detail for each printing method. Finally, potential applications of printed flexible and stretchable strain sensors are presented, focusing on three areas: healthcare, sports performance monitoring, and human-machine interfaces. The review concludes with a discussion of challenges and opportunities for future research. (A minimal example of the resistive sensing relation appears after this list.)
  3. Human-robot collaboration systems benefit from recognizing people's intentions. This capability is especially useful for collaborative manipulation applications, in which users operate robot arms to manipulate objects. For collaborative manipulation, systems can determine users' intentions by tracking eye gaze and identifying gaze fixations on particular objects in the scene (i.e., semantic gaze labeling). Translating 2D fixation locations (from eye trackers) into 3D fixation locations (in the real world) is a technical challenge. One approach is to assign each fixation to the object closest to it. However, calibration drift, head motion, and the extra dimension required for real-world interactions make this position-matching approach inaccurate. In this work, we introduce velocity features that compare the relative motion between subsequent gaze fixations and a finite set of known points, and assign each fixation to one of those known points. We validate our approach on synthetic data, demonstrating that classifying with velocity features is more robust than position matching. In addition, we show that a classifier using velocity features improves semantic labeling on a real-world dataset of human-robot assistive manipulation interactions. (A minimal sketch of the velocity-feature assignment appears after this list.)
  4. Modern robotics relies heavily on machine learning and has a growing need for training data. Advances in, and commercialization of, virtual reality (VR) present an opportunity to use VR as a tool to gather such data for human-robot interactions. We present the Robot Interaction in VR simulator, which allows human participants to interact with simulated robots and environments in real time. We are particularly interested in spoken interactions between the human and the robot, which can be combined with the robot's sensory data for language grounding. To demonstrate the utility of the simulator, we describe a study that investigates whether a user's head pose can serve as a proxy for gaze in a VR object-selection task. Participants were asked to describe a series of known objects, providing approximate labels for the focus of attention. We demonstrate that a concept of gaze derived from head pose can effectively narrow the set of objects that are the target of participants' attention and linguistic descriptions. (A hedged sketch of such head-pose candidate narrowing appears after this list.)
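For item 1, a minimal sketch of the calibration step that maps peak sensor output to the reference angular acceleration measured by the embedded accelerometers. The linear model, the two-sensor feature choice, and all numbers are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Hypothetical per-impact peak voltages from the two neck sensors
peaks = np.abs(rng.standard_normal((40, 2)))
# Hypothetical reference peak angular acceleration (rad/s^2) from the
# accelerometers embedded in the head substitute
alpha_ref = peaks @ np.array([120.0, 95.0]) + rng.standard_normal(40) * 5.0

# Fit the sensor-output -> kinematics mapping on the calibration impacts
model = LinearRegression().fit(peaks, alpha_ref)
print("calibration R^2:", model.score(peaks, alpha_ref))

# A new impact's sensor peaks can then be mapped to angular acceleration
print("predicted angular acceleration:", model.predict([[0.8, 0.3]])[0])
```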
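For item 2, resistive sensing (one mechanism such reviews cover) typically recovers strain from the relative resistance change via the gauge factor, ε = (ΔR/R₀)/GF. A minimal sketch, with an assumed metallic-gauge factor of 2:

```python
def strain_from_resistance(r, r0, gauge_factor=2.0):
    """Recover strain from a resistive reading: epsilon = (dR / R0) / GF.
    GF = 2.0 is a typical metallic-gauge value, assumed for illustration."""
    return (r - r0) / (r0 * gauge_factor)

# A 100-ohm sensor stretched to 101 ohms reads 0.5% strain at GF = 2
print(strain_from_resistance(101.0, 100.0))  # 0.005
```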
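For item 3, a minimal sketch of the velocity-feature idea: compare the displacement between consecutive gaze fixations with the apparent motion of each known point and assign the fixation to the best-matching point. The Euclidean matching rule and the 2D setup are assumptions for illustration.

```python
import numpy as np

def assign_fixation(fix_prev, fix_curr, points_prev, points_curr):
    """Assign the current fixation to one of a finite set of known points by
    matching gaze displacement to each point's apparent displacement."""
    gaze_vel = fix_curr - fix_prev              # motion between consecutive fixations
    point_vels = points_curr - points_prev      # apparent motion of each candidate
    errors = np.linalg.norm(point_vels - gaze_vel, axis=1)
    return int(np.argmin(errors))               # index of the best-matching point

# Two known points in image coordinates at consecutive fixation times
pts_t0 = np.array([[100.0, 50.0], [300.0, 200.0]])
pts_t1 = np.array([[110.0, 55.0], [300.0, 195.0]])
# The gaze moved (+10, +5), matching point 0's motion -> prints 0
print(assign_fixation(np.array([102.0, 52.0]),
                      np.array([112.0, 57.0]), pts_t0, pts_t1))
```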
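For item 4, a hedged sketch of narrowing the candidate object set from head pose: keep objects whose direction from the head lies within an angular threshold of the head's forward vector. The 15° cone and the scene geometry are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def candidates_from_head_pose(head_pos, head_forward, objects, max_angle_deg=15.0):
    """Return indices of objects whose direction from the head lies within
    max_angle_deg of the head's forward vector (threshold is assumed)."""
    fwd = head_forward / np.linalg.norm(head_forward)
    hits = []
    for i, obj in enumerate(objects):
        d = obj - head_pos
        cos = np.dot(d / np.linalg.norm(d), fwd)
        if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) <= max_angle_deg:
            hits.append(i)
    return hits

# Three objects in the scene; only the one straight ahead falls inside the cone
objs = np.array([[0.0, 0.0, 2.0], [1.5, 0.0, 2.0], [0.0, 2.0, 0.0]])
print(candidates_from_head_pose(np.zeros(3), np.array([0.0, 0.0, 1.0]), objs))  # [0]
```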