Background: Comprehensive exams such as the Dean-Woodcock Neuropsychological Assessment System, the Global Deterioration Scale, and the Boston Diagnostic Aphasia Examination are the gold standard for clinicians in the preliminary assessment and monitoring of neurocognitive function in conditions such as neurodegenerative diseases and acquired brain injuries (ABIs). In recent years, there has been increased focus on implementing these exams on mobile devices to benefit from their configurable built-in sensors as well as their scoring, interpretation, and storage capabilities. As smartphones become more accepted in health care among both users and clinicians, the ability to use device information (e.g., device position, screen interactions, and app usage) for subject monitoring also increases. Sensor-based assessments (e.g., functional gait using a mobile device's accelerometer and/or gyroscope, or collection of speech samples using the device's microphone) offer the potential for richer information when diagnosing neurological conditions; mapping the development of these conditions over time; and monitoring efficient, evidence-based rehabilitation programs.

Objective: This paper provides an overview of neurocognitive conditions and the relevant functions of interest, an analysis of recent results using smartphone and/or tablet built-in sensor information for the assessment of these conditions, and a discussion of how human-device interactions and the assessment and monitoring of neurocognitive functions can be enhanced for both the patient and the health care provider.

Methods: This survey reviews current mobile technological capabilities to enhance the assessment of various neurocognitive conditions, including both neurodegenerative diseases and ABIs. It explores how device features can be configured for assessments, as well as the enhanced capability and data monitoring that arise from adding these features. It also recognizes the challenges that will be apparent in transferring current assessments to mobile devices.

Results: Built-in sensor information on mobile devices is found to provide information that can enhance neurocognitive assessment and monitoring across all functional categories. Configurations of positional sensors (e.g., accelerometer, gyroscope, and GPS), media sensors (e.g., microphone and camera), inherent sensors (e.g., device timer), and participatory user-device interactions (e.g., screen interactions, metadata input, app usage, and device lock and unlock) are all helpful for assessing these functions for the purposes of training, monitoring, diagnosis, or rehabilitation.

Conclusions: This survey discusses some of the many opportunities and challenges of implementing configured built-in sensors on mobile devices to enhance the assessment and monitoring of neurocognitive functions and disease progression across neurodegenerative and acquired neurological conditions.
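As a concrete illustration of the kind of sensor-based gait measure the survey describes, the sketch below estimates step count and cadence from accelerometer magnitudes by threshold crossing. This is a minimal sketch on synthetic data, not any specific clinical assessment; the threshold value, sampling rate, and synthetic walking signal are all assumptions for illustration.

```python
import math

def step_count(samples, threshold=10.5):
    """Count steps as upward crossings of an acceleration-magnitude threshold.

    samples: list of (ax, ay, az) accelerometer readings in m/s^2.
    """
    mags = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
    steps, above = 0, False
    for m in mags:
        if not above and m > threshold:
            steps, above = steps + 1, True
        elif above and m < threshold:
            above = False
    return steps

def cadence_spm(samples, fs_hz, threshold=10.5):
    """Steps per minute over the whole recording."""
    return step_count(samples, threshold) / (len(samples) / fs_hz) * 60.0

# Synthetic walk: 2 steps/s for 5 s, sampled at 50 Hz (gravity plus a sine).
FS = 50
walk = [(0.0, 0.0, 9.81 + 3.0 * math.sin(2 * math.pi * 2.0 * i / FS))
        for i in range(FS * 5)]
print(step_count(walk))        # 10
print(cadence_spm(walk, FS))   # 120.0
```

A clinical tool would add filtering and validated thresholds, but the same crossing logic underlies many pedometer-style gait features.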
Side Channel Attack on Smartphone Sensors to Infer Gender of the User
Smartphones incorporate a plethora of diverse and powerful sensors that enhance the user experience. Two such sensors are the accelerometer and the gyroscope, which measure acceleration in all three spatial dimensions and rotation about the three axes of the smartphone, respectively. These sensors are used primarily for screen rotation and advanced gaming applications, but they can also be employed to gather information about the user's activity and phone position. In this work, we investigate using the accelerometer and gyroscope as a side channel to learn highly sensitive information, such as the user's gender. We present an unobtrusive technique for determining the gender of a user by mining data from smartphone sensors that do not require explicit permissions from the user. A preliminary study conducted on 18 participants shows that we can detect the user's gender with an accuracy of 80%.
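The inference pipeline implied by such an attack can be sketched in miniature: extract simple statistics from windows of sensor magnitudes, then assign a new window to the nearest class centroid. The features, classifier, and toy data below are all illustrative assumptions; the paper's actual feature set and model are not reproduced here.

```python
import math
from statistics import mean, pstdev

def features(mags):
    """Toy per-window features over acceleration magnitudes: (mean, std)."""
    return (mean(mags), pstdev(mags))

def centroid(windows):
    """Average feature vector over a class's training windows."""
    feats = [features(w) for w in windows]
    return tuple(mean(f[i] for f in feats) for i in range(2))

def classify(window, centroids):
    """Assign the window to the nearest class centroid (Euclidean distance)."""
    f = features(window)
    return min(centroids, key=lambda c: math.dist(f, centroids[c]))

# Hypothetical training data: class "A" walks with a larger swing amplitude.
train = {
    "A": [[9.8 + 2.0 * math.sin(i / 3) for i in range(100)] for _ in range(5)],
    "B": [[9.8 + 0.5 * math.sin(i / 3) for i in range(100)] for _ in range(5)],
}
centroids = {label: centroid(ws) for label, ws in train.items()}
test_window = [9.8 + 1.8 * math.sin(i / 3) for i in range(100)]
print(classify(test_window, centroids))  # "A"
```

The point of the sketch is that no permission gate stands between an app and these features: any installed app could compute them silently.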
- PAR ID: 10113742
- Journal Name: 17th ACM Conference on Embedded Networked Sensor Systems (SenSys)
- Sponsoring Org: National Science Foundation
More Like this
Accurate indoor positioning has attracted considerable attention for a variety of indoor location-based applications, driven by the rapid development of mobile devices and their onboard sensors. A hybrid indoor localization method is proposed that uses a single off-the-shelf smartphone, taking advantage of its various onboard sensors, including the camera, gyroscope, and accelerometer. The proposed approach integrates three components: visual-inertial odometry (VIO), point-based area mapping, and plane-based area mapping. A simplified RANSAC strategy is employed in plane matching for the sake of processing time. Since Apple's augmented reality platform ARKit provides many powerful high-level APIs for world tracking, plane detection, and 3D modeling, a practical indoor-localization app was developed for an iPhone that can run ARKit. Experimental results demonstrate that the plane-based method achieves an accuracy of about 0.3 m and, despite using a much more lightweight model, produces more accurate results than the point-based model that directly uses ARKit's area mapping. The plane-based model is less than 2 KB for a closed-loop corridor area of about 45 m × 15 m, compared with about 10 MB for the point-based model.
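The "simplified RANSAC strategy" mentioned above can be illustrated with a minimal plane-fitting loop: repeatedly sample three points, fit the plane through them, and keep the hypothesis with the most inliers. This is a generic RANSAC sketch on synthetic data, not the paper's exact matching procedure; the iteration count, tolerance, and synthetic floor points are assumptions.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane (unit normal n, offset d) through three points, i.e. n·x = d."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Normal is the cross product of two in-plane edge vectors.
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Return the plane hypothesis with the most inliers within distance tol."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol)
        if inliers > best_inliers:
            best, best_inliers = plane, inliers
    return best, best_inliers

# Mostly-planar synthetic scan: a floor at z = 0 plus three outliers.
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
pts += [(0.5, 0.5, 1.0), (0.2, 0.8, -1.3), (0.9, 0.1, 2.1)]
(n, d), inliers = ransac_plane(pts)
print(inliers)  # 100 (all floor points; the outliers are rejected)
```

A "simplified" variant for real-time use would typically cap iterations or exit early once an inlier ratio is reached; the trade-off is exactly the processing-time concern the abstract raises.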
Persons with disabilities often rely on caregivers or family members to assist in their daily living activities. Robotic assistants can provide an alternative solution if intuitive user interfaces are designed for simple operations. Current human-robot interfaces are still far from able to operate intuitively when used for complex activities of daily living (ADL). In this era of smartphones packed with sensors, such as accelerometers, gyroscopes, and a precise touch screen, robot controls can be interfaced with smartphones to capture the user's intended operation of the robot assistant. In this paper, we review currently popular human-robot interfaces, and we present three novel smartphone-based interfaces to operate a robotic arm for assisting persons with disabilities in their ADL tasks. Useful smartphone data, including 3-dimensional orientation and 2-dimensional touch-screen positions, are used as control variables for the robot motion in Cartesian teleoperation. We present the three control interfaces, their implementation on a smartphone to control a robotic arm, and a comparison of the results of using the three interfaces for three different ADL tasks. The developed interfaces provide intuitiveness, low cost, and environmental adaptability.
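One plausible way to turn phone orientation into a Cartesian velocity command, of the general kind described above, is a tilt-to-velocity mapping with a deadband and saturation. This is a hypothetical sketch, not any of the paper's three interfaces; the axis assignments, deadband, and speed limits are invented parameters.

```python
def tilt_to_velocity(roll_deg, pitch_deg, max_speed=0.1,
                     deadband_deg=5.0, max_tilt_deg=45.0):
    """Map phone tilt angles to a Cartesian end-effector velocity (m/s).

    A small deadband ignores hand tremor; tilt beyond max_tilt saturates.
    """
    def axis(angle):
        if abs(angle) < deadband_deg:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        # Scale the remaining tilt range linearly onto [-1, 1].
        mag = min(abs(angle), max_tilt_deg) - deadband_deg
        return sign * mag / (max_tilt_deg - deadband_deg)

    vx = max_speed * axis(pitch_deg)   # tilt forward/back -> x motion
    vy = max_speed * axis(roll_deg)    # tilt left/right   -> y motion
    return vx, vy

print(tilt_to_velocity(0.0, 0.0))    # (0.0, 0.0): inside the deadband
print(tilt_to_velocity(45.0, 22.5))  # roll saturated, pitch about mid-range
```

The deadband is the key usability choice here: without it, sensor noise and tremor produce constant small drift of the arm, which is exactly what makes naive tilt control unintuitive.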
This paper presents iSpyU, a system that shows the feasibility of recognizing natural speech content played on a phone during conference calls (e.g., Skype, Zoom) using a fusion of motion sensors such as the accelerometer and gyroscope. While microphones require permission from the user to be accessible by an app developer, the motion sensors are zero-permission sensors and are thus accessible by a developer without alerting the user. This allows a malicious app to potentially eavesdrop on sensitive speech content played by the user's phone. In designing the attack, iSpyU tackles a number of technical challenges, including: (i) the low sampling rate of motion sensors (500 Hz, compared with 44 kHz for a microphone) and (ii) the lack of large-scale training datasets for Automatic Speech Recognition (ASR) with motion sensors. iSpyU systematically addresses these challenges through a combination of techniques in synthetic training-data generation, ASR modeling, and domain adaptation. Extensive measurement studies on modern smartphones show a word-level accuracy of 53.3-59.9% over a dictionary of 2000-10,000 words, and a character-level accuracy of 70.0-74.8%. We believe such levels of accuracy pose a significant threat when viewed from a privacy perspective.
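The sampling-rate challenge cited above can be made concrete with a small aliasing demonstration: any tone above the motion sensor's Nyquist limit (250 Hz at a 500 Hz rate) folds down to a lower apparent frequency. The sketch below is an illustration of that signal-processing fact, not of iSpyU's recovery pipeline; the tone frequency and crossing-based frequency estimate are assumptions for the demo.

```python
import math

def tone(freq_hz, fs_hz, duration_s=1.0, phase=0.3):
    """A sampled sinusoid; the phase offset avoids exact-zero samples."""
    n = int(fs_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / fs_hz + phase)
            for i in range(n)]

def upward_crossings_per_second(signal, fs_hz):
    """Crude frequency estimate: count negative-to-positive zero crossings."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < 0 <= b)
    return crossings / (len(signal) / fs_hz)

# A 450 Hz tone is captured faithfully at a microphone-like 44.1 kHz rate ...
print(upward_crossings_per_second(tone(450, 44100), 44100))  # 450.0
# ... but at a 500 Hz motion-sensor rate it aliases to |450 - 500| = 50 Hz.
print(upward_crossings_per_second(tone(450, 500), 500))      # 50.0
```

This loss of spectral content is why the paper needs synthetic training data and domain adaptation rather than off-the-shelf ASR models trained on microphone audio.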
In the past few years, smart mobile devices have become ubiquitous. Most of these devices have embedded sensors such as GPS, an accelerometer, and a gyroscope. There is a growing trend to use these sensors for user identification and activity recognition. Most prior work, however, reports results on a small number of classifiers, data sets, or activities. We present a comprehensive evaluation of ten representative classifiers used in identification on two publicly available data sets (thus our work is reproducible). Our results include data obtained from dynamic activities, such as walking and running; static postures, such as sitting and standing; and an aggregate of activities that combines dynamic activities, static postures, and postural transitions, such as sit-to-stand or stand-to-sit. Our identification results on aggregate data include both labeled and unlabeled activities. Our results show that the k-Nearest Neighbors algorithm consistently outperforms the other classifiers. We also show that, by extracting appropriate features and using appropriate classifiers, static and aggregate activities can be used for user identification. We posit that this work will serve as a resource and a benchmark for the selection and evaluation of classification algorithms for activity-based identification on smartphones.
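Since k-Nearest Neighbors is the abstract's winning classifier, a minimal version of it is worth sketching: classify a feature vector by majority vote among its k closest training examples. The toy features and labels below are assumptions for illustration, not the paper's data sets or feature extraction.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.

    train: list of (feature_vector, label) pairs.
    """
    neighbours = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical per-window features, e.g. (mean, std) of accelerometer magnitude.
train = [
    ((9.8, 0.10), "sitting"), ((9.9, 0.15), "sitting"), ((9.7, 0.05), "sitting"),
    ((10.2, 1.8), "walking"), ((10.5, 2.1), "walking"), ((10.1, 1.6), "walking"),
]
print(knn_predict(train, (10.3, 1.9)))   # "walking"
print(knn_predict(train, (9.85, 0.08)))  # "sitting"
```

kNN's appeal in this setting is that it needs no training phase and adapts directly to each user's feature distribution, which fits the per-user identification task the paper benchmarks.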