Title: Side Channel Attack on Smartphone Sensors to Infer Gender of the User
Smartphones incorporate a plethora of diverse and powerful sensors that enhance the user experience. Two such sensors are the accelerometer and the gyroscope, which measure acceleration along and rotation about the three spatial axes of the smartphone, respectively. These sensors are used primarily for screen rotation and advanced gaming applications, but they can also be employed to gather information about the user's activity and the phone's position. In this work, we investigate using the accelerometer and gyroscope as a side channel to learn highly sensitive information, such as the user's gender. We present an unobtrusive technique that determines a user's gender by mining data from smartphone sensors that do not require explicit permissions from the user. A preliminary study conducted on 18 participants shows that we can detect the user's gender with an accuracy of 80%.
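The abstract does not describe its classifier, but the general pipeline for this kind of sensor side channel is well understood: reduce windows of raw motion readings to statistical features, then classify. A minimal sketch of that idea, assuming hypothetical feature values, centroids, and sample data (none of which come from the paper), using a simple nearest-centroid rule:

```python
import math
from statistics import mean, pstdev

def window_features(samples):
    """Reduce one window of sensor magnitudes to a (mean, std) feature vector."""
    return (mean(samples), pstdev(samples))

def nearest_centroid(feature, centroids):
    """Assign the feature vector to the label of the closest class centroid."""
    return min(centroids, key=lambda label: math.dist(feature, centroids[label]))

# Hypothetical class centroids, stand-ins for values learned from labeled
# training windows; the real study would fit these from participant data.
centroids = {
    "female": (9.9, 1.4),
    "male": (10.1, 2.1),
}

# One illustrative window of accelerometer magnitudes (m/s^2).
window = [9.7, 10.4, 10.0, 9.8, 10.3, 10.6, 9.9, 10.5]
print(nearest_centroid(window_features(window), centroids))
```

The key point is that every input here (motion sensor readings) is available to apps without any runtime permission prompt, which is what makes the side channel unobtrusive.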
Award ID(s): 1815494, 1842456, 1563555
NSF-PAR ID: 10113742
Journal Name: 17th ACM Conference on Embedded Networked Sensor Systems (SenSys)
Sponsoring Org: National Science Foundation
More Like this
  1. Background: Comprehensive exams such as the Dean-Woodcock Neuropsychological Assessment System, the Global Deterioration Scale, and the Boston Diagnostic Aphasia Examination are the gold standard for doctors and clinicians in the preliminary assessment and monitoring of neurocognitive function in conditions such as neurodegenerative diseases and acquired brain injuries (ABIs). In recent years, there has been an increased focus on implementing these exams on mobile devices to benefit from their configurable built-in sensors, in addition to scoring, interpretation, and storage capabilities. As smartphones become more accepted in health care among both users and clinicians, the ability to use device information (eg, device position, screen interactions, and app usage) for subject monitoring also increases. Sensor-based assessments (eg, functional gait using a mobile device's accelerometer and/or gyroscope, or collection of speech samples using recordings from the device's microphone) include the potential for enhanced information for diagnoses of neurological conditions; mapping the development of these conditions over time; and monitoring efficient, evidence-based rehabilitation programs.

Objective: This paper provides an overview of neurocognitive conditions and relevant functions of interest, analysis of recent results using smartphone and/or tablet built-in sensor information for the assessment of these different neurocognitive conditions, and how human-device interactions and the assessment and monitoring of these neurocognitive functions can be enhanced for both the patient and health care provider.

Methods: This survey presents a review of current mobile technological capabilities to enhance the assessment of various neurocognitive conditions, including both neurodegenerative diseases and ABIs. It explores how device features can be configured for assessments, as well as the enhanced capability and data monitoring that will arise from the addition of these features. It also recognizes the challenges that will be apparent with the transfer of these current assessments to mobile devices.

Results: Built-in sensor information on mobile devices is found to provide information that can enhance neurocognitive assessment and monitoring across all functional categories. Configurations of positional sensors (eg, accelerometer, gyroscope, and GPS), media sensors (eg, microphone and camera), inherent sensors (eg, device timer), and participatory user-device interactions (eg, screen interactions, metadata input, app usage, and device lock and unlock) are all helpful for assessing these functions for the purposes of training, monitoring, diagnosis, or rehabilitation.

Conclusions: This survey discusses some of the many opportunities and challenges of implementing configured built-in sensors on mobile devices to enhance assessments and monitoring of neurocognitive functions, as well as disease progression, across neurodegenerative and acquired neurological conditions.
  2. Accurate indoor positioning has attracted much attention for a variety of indoor location-based applications, driven by the rapid development of mobile devices and their onboard sensors. A hybrid indoor localization method is proposed based on a single off-the-shelf smartphone, which takes advantage of its various onboard sensors, including the camera, gyroscope, and accelerometer. The proposed approach integrates three components: visual-inertial odometry (VIO), point-based area mapping, and plane-based area mapping. A simplified RANSAC strategy is employed in plane matching for the sake of processing time. Since Apple's augmented reality platform ARKit provides many powerful high-level APIs for world tracking, plane detection, and 3D modeling, a practical smartphone app for indoor localization was developed on an iPhone that can run ARKit. Experimental results demonstrate that the plane-based method can achieve an accuracy of about 0.3 m; although it relies on a much more lightweight model, it achieves more accurate results than the point-based model that directly uses ARKit's area mapping. The size of the plane-based model is less than 2 KB for a closed-loop corridor area of about 45 m × 15 m, compared to about 10 MB for the point-based model.
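The abstract mentions a simplified RANSAC strategy for plane matching but gives no detail. The core RANSAC-for-planes loop is standard, and a minimal sketch of it (hypothetical parameters and synthetic points, not the paper's actual implementation) looks like this:

```python
import math
import random

def plane_from_points(p1, p2, p3):
    """Unit-normal plane (n, d) with n·x + d = 0 through three points, or None if degenerate."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1],   # cross product u × v gives the normal
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    if norm < 1e-9:               # points collinear: no unique plane
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Repeatedly fit a plane to 3 random points; keep the fit with the most inliers."""
    rng = random.Random(seed)
    best_count, best_model = 0, None
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol)
        if inliers > best_count:
            best_count, best_model = inliers, model
    return best_count, best_model

# Synthetic example: a 5x5 grid on the z = 0 plane plus one outlier.
pts = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)] + [(0.2, 0.2, 1.0)]
count, (n, d) = ransac_plane(pts)
```

A "simplified" variant, as the paper calls it, would typically cap the iteration count or coarsen the inlier tolerance to meet the app's real-time budget.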
  3. Multimer is a new technology that aims to provide a data-driven understanding of how humans cognitively and physically experience spatial environments. By multimodally measuring biosensor data to model how the built environment and its uses influence cognitive processes, Multimer aims to help space professionals such as architects, workplace strategists, and urban planners make better design interventions. Multimer is perhaps the first spatial technology that collects biosensor data, such as brainwave and heart-rate data, and analyzes it with both spatiotemporal and neurophysiological tools. The Multimer mobile app can record data from several kinds of commonly available, inexpensive, wearable sensors, including EEG, ECG, pedometer, accelerometer, and gyroscope modules. The app also records user-entered information via its user interface and micro-surveys, and combines all of this data with a user's geolocation using GPS, beacons, and other location tools. Multimer's study platform displays all of this data in real time at the individual and aggregate level. Multimer also validates the data by comparing the collected sensor and sentiment data in spatiotemporal contexts, and then integrates the collected data with other data sets, such as citizen reports, traffic data, and city amenities, to provide actionable insights toward the evaluation and redesign of sites and spaces. This report presents preliminary results from the data validation process for a Multimer study of 101 subjects in New York City from August to October 2017. Ultimately, the aim of this study is to prototype a replicable, scalable model of how the built environment and the movement of traffic influence the neurophysiological state of pedestrians, cyclists, and drivers.
  4. Persons with disabilities often rely on caregivers or family members to assist in their daily living activities. Robotic assistants can provide an alternative solution if intuitive user interfaces are designed for simple operations. Current human-robot interfaces are still far from operating intuitively when used for complex activities of daily living (ADL). In this era of smartphones packed with sensors, such as accelerometers, gyroscopes, and a precise touch screen, robot controls can be interfaced with smartphones to capture the user's intended operation of the robot assistant. In this paper, we review the currently popular human-robot interfaces, and we present three novel smartphone-based human-robot interfaces to operate a robotic arm for assisting persons with disabilities in their ADL tasks. Useful smartphone data, including 3-dimensional orientation and 2-dimensional touchscreen positions, are used as control variables for the robot motion in Cartesian teleoperation. We present the three control interfaces, their implementation on a smartphone to control a robotic arm, and a comparison of the results of using the three interfaces for three different ADL tasks. The developed interfaces provide intuitiveness, low cost, and environmental adaptability.
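The abstract says phone orientation serves as a control variable for Cartesian teleoperation, without specifying the mapping. One common and simple scheme is tilt-to-velocity with a deadband so the arm holds still when the phone is level. A sketch under that assumption (the angle ranges, deadband, and speed limit below are hypothetical, not the paper's values):

```python
import math

def tilt_to_velocity(roll_deg, pitch_deg, max_speed=0.1, deadband_deg=5.0):
    """Map phone tilt (roll/pitch, degrees) to an (vx, vy) end-effector
    velocity command in m/s. Tilts inside the deadband command zero velocity;
    beyond it, speed scales linearly up to max_speed at a 45-degree tilt."""
    def axis(angle):
        if abs(angle) < deadband_deg:
            return 0.0
        scaled = (abs(angle) - deadband_deg) / (45.0 - deadband_deg)
        return math.copysign(min(scaled, 1.0) * max_speed, angle)
    # Pitch (tilting forward/back) drives x; roll (tilting sideways) drives y.
    return axis(pitch_deg), axis(roll_deg)
```

The deadband is the detail that makes such an interface usable in practice: without it, unavoidable hand tremor while holding the phone level would translate into constant small arm motions.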
  5. This research presents PACE (Providing Authentication through Computational Gait Evaluation), a novel methodology for gait-based authentication leveraging the power of deep learning algorithms. The primary objective of PACE is to enhance the security and efficiency of user authentication mechanisms by capitalizing on the unique gait patterns exhibited by individuals. This study delineates the development and implementation of a deep learning model, which was trained on a set of extracted features. These features, including mean, variance, standard deviation, kurtosis, and skewness, were derived from accelerometer and gyroscope data, serving as descriptors of users' gait patterns for the deep learning model. The model's performance was evaluated based on its ability to classify and authenticate users accurately using these features. For the purpose of this study, twelve participants were enlisted, with sensors affixed to their back hip and right ankle to collect the requisite accelerometer and gyroscope data. The experimental results were highly promising, with the model achieving an exceptional accuracy rate of 99% in authenticating users. These findings underscore the potential of PACE as a viable alternative to conventional machine learning methods for gait authentication. The implications of this research are far-reaching, with potential applications spanning a multitude of scenarios where security is of paramount importance. 
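The five features PACE extracts per sensor window (mean, variance, standard deviation, kurtosis, skewness) are standard statistical moments and can be computed directly. A minimal sketch using population moments (the paper does not state whether population or sample moments were used, nor whether its kurtosis is excess kurtosis; this version uses plain population kurtosis):

```python
from statistics import mean, pstdev, pvariance

def gait_features(window):
    """Compute (mean, variance, std, kurtosis, skewness) for one window of
    accelerometer or gyroscope readings, using population moments."""
    m = mean(window)
    var = pvariance(window, m)
    std = pstdev(window, m)
    n = len(window)
    if std == 0:
        # A constant window has undefined higher moments; report zeros.
        skew = kurt = 0.0
    else:
        skew = sum((x - m) ** 3 for x in window) / (n * std ** 3)
        kurt = sum((x - m) ** 4 for x in window) / (n * std ** 4)  # non-excess
    return m, var, std, kurt, skew
```

Each gait window would yield one such 5-tuple per sensor axis, and the concatenated tuples would form the input vector for the deep learning model described above.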