Title: A Smartphone-Based Cursor Position System in Cross-Device Interaction Using Machine Learning Techniques
The use of mobile devices, especially smartphones, has become widespread in recent years, and there is an increasing need for cross-device interaction techniques that seamlessly integrate mobile devices with large display devices. This paper develops a novel cross-device cursor position system that maps a mobile device’s movement on a flat surface to a cursor’s movement on a large display. The system allows a user to directly manipulate objects on a large display device through a mobile device and supports seamless cross-device data sharing without physical distance restrictions. To achieve this, we use sound localization to initialize the mobile device’s position as the starting location of the cursor on the large screen. The mobile device’s movement is then detected through an accelerometer and translated into the cursor’s movement on the large display using machine learning models. In total, 63 features and 10 classifiers were employed to construct the machine learning models for movement detection. The evaluation results demonstrate that three classifiers in particular, gradient boosting, linear discriminant analysis (LDA), and naïve Bayes, are suitable for detecting the movement of a mobile device.
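No implementation details are given in the abstract above; the following is a minimal sketch of how the reported classifier comparison could be prototyped, assuming a precomputed matrix of 63 accelerometer-derived features per movement window and discrete movement labels. The data shapes, labels, and cross-validation setup are illustrative placeholders, not the authors' protocol.

# Illustrative sketch only: the paper's 63 features and labeling scheme are
# not given in the abstract, so X and y below are hypothetical placeholders
# standing in for accelerometer-derived feature windows and movement labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 63))     # placeholder: 500 windows x 63 features
y = rng.integers(0, 4, size=500)   # placeholder: four movement classes

classifiers = {
    "gradient boosting": GradientBoostingClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "naive Bayes": GaussianNB(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")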
Award ID(s):
1722913
NSF-PAR ID:
10253909
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Sensors
Volume:
21
Issue:
5
ISSN:
1424-8220
Page Range / eLocation ID:
1665
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective: We designed and validated a wireless, low-cost, easy-to-use, mobile, dry-electrode headset for scalp electroencephalography (EEG) recordings for closed-loop brain–computer interface (BCI) and internet-of-things (IoT) applications. Approach: The EEG-based BCI headset was designed from commercial off-the-shelf (COTS) components using a multi-pronged approach that balanced interoperability, cost, portability, usability, form factor, reliability, and closed-loop operation. Main Results: The adjustable headset was designed to accommodate 90% of the population. A patent-pending self-positioning dry electrode bracket allowed for vertical self-positioning while parting the user’s hair to ensure contact of the electrode with the scalp. In the current prototype, five EEG electrodes were incorporated in the electrode bracket, spanning the sensorimotor cortices bilaterally, and three skin sensors were included to measure eye movements and blinks. An inertial measurement unit (IMU) provides monitoring of head movements. The EEG amplifier operates with 24-bit resolution at up to a 500 Hz sampling frequency and can communicate with other devices over 802.11 b/g/n WiFi. It has a high signal-to-noise ratio (SNR) and common-mode rejection ratio (CMRR) (121 dB and 110 dB, respectively) and low input noise. In closed-loop BCI mode, the system can operate at 40 Hz with real-time adaptive noise cancellation, and it has 512 MB of processor memory. It supports LabVIEW as a back-end coding language and JavaScript (JS), Cascading Style Sheets (CSS), and HyperText Markup Language (HTML) as front-end coding languages, and it includes training and optimization of support vector machine (SVM) neural classifiers. Extensive bench testing supports the technical specifications, and human-subject pilot testing of a closed-loop BCI application for upper-limb rehabilitation provides proof-of-concept validation for the device’s use both in the clinic and at home. Significance: The usability, interoperability, portability, reliability, and programmability of the proposed wireless closed-loop BCI system provide a low-cost solution for BCI and neurorehabilitation research and IoT applications.
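    The abstract above does not describe the classifier pipeline in detail; the sketch below shows one plausible offline prototype of SVM-based classification of 5-channel EEG windows using band-power features. Only the channel count and the 500 Hz rate come from the abstract; the window length, frequency band, labels, and data are assumptions.

# Minimal offline sketch, not the headset's actual firmware or pipeline:
# one way the abstract's "training and optimization of support vector
# machine (SVM) neural classifiers" could be prototyped, using band power
# of 5-channel EEG windows as features. Channel count and the 500 Hz rate
# come from the abstract; window length, band, and labels are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 500  # Hz, the amplifier's maximum sampling rate

def band_power(eeg_window, fs=FS, band=(8.0, 30.0)):
    """Mean mu/beta-band power per channel for one (channels, samples) window."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

rng = np.random.default_rng(1)
windows = rng.normal(size=(200, 5, FS))   # placeholder: 200 one-second, 5-channel windows
labels = rng.integers(0, 2, size=200)     # placeholder: e.g., rest vs. movement intent

features = np.vstack([band_power(w) for w in windows])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))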
  2. Smart mobile devices have become an integral part of people's lives, and users often input sensitive information on these devices. However, various side-channel attacks against mobile devices pose serious threats to user security and privacy. To mitigate these attacks, we present a novel secure Back-of-Device (BoD) input system, SecTap, for mobile devices. To use SecTap, a user tilts her mobile device to move a cursor on the keyboard and taps the back of the device to secretly input data. We design a tap detection method that processes the stream of accelerometer readings to identify the user's taps in real time. The orientation sensor of the mobile device is used to control the direction and speed of cursor movement. We also propose an obfuscation technique that randomly and effectively accelerates the cursor movement. This technique not only preserves input performance but also keeps an adversary from inferring the tapped keys. Extensive empirical experiments were conducted on different smartphones to demonstrate SecTap's usability and security on both the Android and iOS platforms.
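    SecTap's exact tap-detection algorithm is not given in the abstract above; the following is a hedged sketch of one simple way to flag back-of-device taps in an accelerometer stream, using a magnitude threshold and a refractory period. The threshold, sampling rate, and timing constants are assumed values.

# Illustrative sketch, not SecTap's published algorithm: detecting
# back-of-device taps as brief spikes in accelerometer magnitude using a
# simple threshold plus a refractory period. The threshold, sampling rate,
# and refractory interval are assumed values for illustration.
import math

TAP_THRESHOLD = 2.5   # m/s^2 deviation from gravity that counts as a spike (assumed)
REFRACTORY_S = 0.15   # minimum time between detected taps, in seconds (assumed)
GRAVITY = 9.81        # m/s^2

def detect_taps(samples, fs=100.0):
    """samples: iterable of (ax, ay, az) accelerometer readings at fs Hz.
    Returns the sample indices at which a tap-like spike is detected."""
    taps, last_tap_index = [], None
    for i, (ax, ay, az) in enumerate(samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        spike = abs(magnitude - GRAVITY) > TAP_THRESHOLD
        recovered = last_tap_index is None or (i - last_tap_index) / fs > REFRACTORY_S
        if spike and recovered:
            taps.append(i)
            last_tap_index = i
    return taps

# Example: a mostly-still stream with one sharp spike around sample 50.
stream = [(0.0, 0.0, 9.81)] * 100
stream[50] = (0.0, 0.0, 14.0)
print(detect_taps(stream))  # -> [50]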
  3. Background

    Comprehensive exams such as the Dean-Woodcock Neuropsychological Assessment System, the Global Deterioration Scale, and the Boston Diagnostic Aphasia Examination are the gold standard for doctors and clinicians in the preliminary assessment and monitoring of neurocognitive function in conditions such as neurodegenerative diseases and acquired brain injuries (ABIs). In recent years, there has been an increased focus on implementing these exams on mobile devices to benefit from their configurable built-in sensors, in addition to their scoring, interpretation, and storage capabilities. As smartphones become more accepted in health care among both users and clinicians, the ability to use device information (eg, device position, screen interactions, and app usage) for subject monitoring also increases. Sensor-based assessments (eg, functional gait using a mobile device’s accelerometer and/or gyroscope, or collection of speech samples using recordings from the device’s microphone) have the potential to provide enhanced information for diagnosing neurological conditions, mapping the development of these conditions over time, and monitoring efficient, evidence-based rehabilitation programs.

    Objective

    This paper provides an overview of neurocognitive conditions and relevant functions of interest, an analysis of recent results using smartphone and/or tablet built-in sensor information for the assessment of these different neurocognitive conditions, and a discussion of how human-device interactions and the assessment and monitoring of these neurocognitive functions can be enhanced for both the patient and the health care provider.

    Methods

    This survey reviews current mobile technological capabilities to enhance the assessment of various neurocognitive conditions, including both neurodegenerative diseases and ABIs. It explores how device features can be configured for assessments as well as the enhanced capability and data monitoring that will arise from the addition of these features. It also recognizes the challenges that will be apparent in transferring these current assessments to mobile devices.

    Results

    Built-in sensor information on mobile devices is found to provide information that can enhance neurocognitive assessment and monitoring across all functional categories. Configurations of positional sensors (eg, accelerometer, gyroscope, and GPS), media sensors (eg, microphone and camera), inherent sensors (eg, device timer), and participatory user-device interactions (eg, screen interactions, metadata input, app usage, and device lock and unlock) are all helpful for assessing these functions for the purposes of training, monitoring, diagnosis, or rehabilitation.

    Conclusions

    This survey discusses some of the many opportunities and challenges of implementing configured built-in sensors on mobile devices to enhance the assessment and monitoring of neurocognitive functions, as well as disease progression, across neurodegenerative and acquired neurological conditions.
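    As a hedged illustration of one sensor-based assessment mentioned above (functional gait from a phone's accelerometer), the sketch below derives two simple gait features, step count and cadence, via peak detection on the acceleration magnitude. The sampling rate and peak-detection parameters are assumptions, not values from the survey.

# Hedged example of one sensor-based assessment mentioned above (functional
# gait from a phone's accelerometer): step count and cadence via peak
# detection on the acceleration magnitude. The sampling rate, peak height,
# and minimum step spacing are assumed values, not from the survey.
import numpy as np
from scipy.signal import find_peaks

def gait_features(acc_xyz, fs=50.0):
    """acc_xyz: (n_samples, 3) array of accelerometer readings at fs Hz."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    magnitude -= magnitude.mean()                  # crude removal of the gravity offset
    peaks, _ = find_peaks(magnitude,
                          height=1.0,              # assumed per-step peak height (m/s^2)
                          distance=int(0.4 * fs))  # assume >= 0.4 s between steps
    duration_s = len(acc_xyz) / fs
    return {
        "step_count": int(len(peaks)),
        "cadence_steps_per_min": 60.0 * len(peaks) / duration_s,
    }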
  4. Introduction

    Alzheimer’s disease (AD) causes progressive, irreversible cognitive decline and is the leading cause of dementia. A timely diagnosis is therefore imperative to maximize neurological preservation. However, current treatments are either too costly or limited in availability. In this project, we explored using retinal vasculature as a potential biomarker for early AD diagnosis. This project focuses on stage 3 of a three-stage modular machine learning pipeline consisting of image quality selection, vessel map generation, and classification [1]. The previous model used only a support vector machine (SVM) to classify AD labels, which limited its accuracy to 82%. In this project, random forest and gradient boosting were added and, along with SVM, combined into an ensemble classifier, raising the classification accuracy to 89%.

    Materials and Methods

    Subjects classified as AD were those who were diagnosed with dementia in “Dementia Outcome: Alzheimer’s disease” from the UK Biobank Electronic Health Records. Five control groups were chosen with a 5:1 ratio of control to AD patients, where the control patients had the same age, gender, and eye side image as the AD patient. In total, 122 vessel images from each group (AD and control) were used. The vessel maps were segmented from fundus images through U-net. A t-test feature selection was first performed on the training folds, and the selected features were fed into the classifiers with a p-value threshold of 0.01. Next, 20 repetitions of 5-fold cross validation were performed, with hyperparameters tuned solely on the training data. An ensemble classifier consisting of SVM, gradient boosting tree, and random forests was built, and the final prediction was made through majority voting and evaluated on the test set.

    Results and Discussion

    Through ensemble classification, accuracy increased by 4-12% relative to the individual classifiers, precision by 9-15%, sensitivity by 2-9%, specificity by at least 9-16%, and F1 score by 7-12%.

    Conclusions

    Overall, a relatively high classification accuracy was achieved using machine learning ensemble classification with SVM, random forest, and gradient boosting. Although the results are very promising, a limitation of this study is that the requirement for images of sufficient quality reduced the number of control parameters that could be implemented. However, through retinal vasculature analysis, this project shows machine learning’s high potential to be an efficient, more cost-effective alternative for diagnosing Alzheimer’s disease.

    Clinical Application

    Using machine learning for AD diagnosis through retinal images will make screening available to a broader population by being more accessible and cost-efficient. Mobile device based screening can also be enabled at primary screening in resource-deprived regions. It can provide a pathway for future understanding of the association between biomarkers in the eye and the brain.
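    The authors' code is not included in the abstract above; the sketch below illustrates the described design, with t-test-style univariate feature selection at p < 0.01 applied inside the cross-validation pipeline and hard majority voting over SVM, random forest, and gradient boosting, on placeholder data.

# Illustrative sketch of the ensemble described above, not the authors'
# code: univariate (t-test-style) feature selection at p < 0.01 inside the
# cross-validation pipeline, followed by hard majority voting over SVM,
# random forest, and gradient boosting. The feature matrix is a placeholder.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.feature_selection import SelectFpr, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(244, 100))   # placeholder features from segmented vessel maps
y = np.repeat([0, 1], 122)        # placeholder labels: control vs. AD
X[y == 1, :5] += 1.0              # make a few features weakly informative (placeholder)

ensemble = make_pipeline(
    SelectFpr(f_classif, alpha=0.01),  # keep features whose p-value is below 0.01
    VotingClassifier(
        estimators=[("svm", SVC()),
                    ("rf", RandomForestClassifier()),
                    ("gb", GradientBoostingClassifier())],
        voting="hard"),                # final prediction by majority vote
)
print("mean 5-fold accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())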
  5. Background

    Maternal loneliness is associated with adverse physical and mental health outcomes for both the mother and her child. Detecting maternal loneliness noninvasively through wearable devices and passive sensing provides opportunities to prevent or reduce the impact of loneliness on the health and well-being of the mother and her child.

    Objective

    The aim of this study is to use objective health data collected passively by a wearable device to predict maternal (social) loneliness during pregnancy and the postpartum period and identify the important objective physiological parameters in loneliness detection.

    Methods

    We conducted a longitudinal study using smartwatches to continuously collect physiological data from 31 women during pregnancy and the postpartum period. The participants completed the University of California, Los Angeles (UCLA) loneliness questionnaire in gestational week 36 and again at 12 weeks post partum. Responses to this questionnaire and background information of the participants were collected through our customized cross-platform mobile app. We leveraged participants’ smartwatch data from the 7 days before and the day of their completion of the UCLA questionnaire for loneliness prediction. We categorized the loneliness scores from the UCLA questionnaire as loneliness (scores ≥12) and nonloneliness (scores <12). We developed decision tree and gradient-boosting models to predict loneliness. We evaluated the models by using leave-one-participant-out cross-validation. Moreover, we discuss the importance of the extracted health parameters in our models for loneliness prediction.
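    As a minimal sketch of the evaluation scheme described above (not the study's code), the example below binarizes UCLA scores at 12 and evaluates a gradient-boosting classifier with leave-one-participant-out cross-validation. The feature columns, score values, and number of rows per participant are hypothetical placeholders.

# Minimal sketch of the evaluation scheme described above (not the study's
# code): binarize UCLA scores at 12 and evaluate a gradient-boosting model
# with leave-one-participant-out cross-validation. Feature columns, score
# values, and the number of rows per participant are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_participants = 31
rows_per_participant = 2                      # e.g., week 36 and 12 weeks post partum
n_rows = n_participants * rows_per_participant

X = rng.normal(size=(n_rows, 12))             # placeholder smartwatch-derived features
ucla = rng.integers(6, 20, size=n_rows)       # placeholder UCLA loneliness scores
y = (ucla >= 12).astype(int)                  # 1 = loneliness, 0 = nonloneliness
groups = np.repeat(np.arange(n_participants), rows_per_participant)

scores = cross_val_score(GradientBoostingClassifier(), X, y,
                         groups=groups, cv=LeaveOneGroupOut(),
                         scoring="f1_weighted")
print("mean weighted F1 across held-out participants:", scores.mean())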

    Results

    The gradient boosting and decision tree models predicted maternal social loneliness with weighted F1-scores of 0.897 and 0.872, respectively. Our results also show that loneliness is highly associated with activity intensity and activity distribution during the day. In addition, resting heart rate (HR) and resting HR variability (HRV) were correlated with loneliness.

    Conclusions

    Our results show the potential benefit and feasibility of using passive sensing with a smartwatch to predict maternal loneliness. Our developed machine learning models achieved a high F1-score for loneliness prediction. We also show that intensity of activity, activity pattern, and resting HR and HRV are good predictors of loneliness. These results indicate the intervention opportunities made available by wearable devices and predictive models to improve maternal well-being through early detection of loneliness.

     