Title: Enhancement of Neurocognitive Assessments Using Smartphone Capabilities: Systematic Review
Background: Comprehensive exams such as the Dean-Woodcock Neuropsychological Assessment System, the Global Deterioration Scale, and the Boston Diagnostic Aphasia Examination are the gold standard for doctors and clinicians in the preliminary assessment and monitoring of neurocognitive function in conditions such as neurodegenerative diseases and acquired brain injuries (ABIs). In recent years, there has been an increased focus on implementing these exams on mobile devices to benefit from their configurable built-in sensors, in addition to their scoring, interpretation, and storage capabilities. As smartphones become more accepted in health care among both users and clinicians, the ability to use device information (eg, device position, screen interactions, and app usage) for subject monitoring also increases. Sensor-based assessments (eg, functional gait using a mobile device’s accelerometer and/or gyroscope, or collection of speech samples using recordings from the device’s microphone) offer the potential for richer diagnostic information on neurological conditions; for mapping the development of these conditions over time; and for monitoring efficient, evidence-based rehabilitation programs.
Objective: This paper provides an overview of neurocognitive conditions and relevant functions of interest; an analysis of recent results using smartphone and/or tablet built-in sensor information for the assessment of these conditions; and a discussion of how human-device interactions and the assessment and monitoring of neurocognitive functions can be enhanced for both the patient and the health care provider.
Methods: This survey reviews current mobile technological capabilities to enhance the assessment of various neurocognitive conditions, including both neurodegenerative diseases and ABIs. It explores how device features can be configured for assessments, as well as the enhanced capability and data monitoring that arise from adding these features. It also recognizes the challenges that will be apparent in transferring current assessments to mobile devices.
Results: Built-in sensor information on mobile devices is found to provide information that can enhance neurocognitive assessment and monitoring across all functional categories. Configurations of positional sensors (eg, accelerometer, gyroscope, and GPS), media sensors (eg, microphone and camera), inherent sensors (eg, device timer), and participatory user-device interactions (eg, screen interactions, metadata input, app usage, and device lock and unlock) are all helpful for assessing these functions for the purposes of training, monitoring, diagnosis, or rehabilitation.
Conclusions: This survey discusses some of the many opportunities and challenges of implementing configured built-in sensors on mobile devices to enhance the assessment and monitoring of neurocognitive functions, as well as disease progression, across neurodegenerative and acquired neurological conditions.
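The kind of sensor-based assessment described in the abstract (eg, functional gait via a device's accelerometer) can be sketched in a few lines. The following is a minimal, illustrative example, not code from any of the reviewed studies: the synthetic signal, sampling rate, and 0.5–4 Hz gait band are assumptions standing in for real accelerometer data from a mobile device's sensor API.

```python
import numpy as np

def cadence_spm(accel, fs):
    """Estimate walking cadence (steps/minute) as the dominant
    frequency of the detrended acceleration magnitude, via FFT."""
    x = accel - np.mean(accel)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    # Restrict the search to a plausible gait band (0.5-4 Hz).
    band = (freqs >= 0.5) & (freqs <= 4.0)
    dominant = freqs[band][np.argmax(spectrum[band])]
    return dominant * 60.0

# Synthetic 20 s walk at 50 Hz with ~1.8 steps/s (108 steps/min);
# a real app would read these samples from the accelerometer.
fs = 50
t = np.arange(0, 20, 1 / fs)
accel = 9.81 + 1.2 * np.sin(2 * np.pi * 1.8 * t) \
        + 0.1 * np.random.default_rng(0).normal(size=t.size)

print(round(cadence_spm(accel, fs)))  # ~108 steps/min
```

A longitudinal record of such a feature, stored alongside timestamps, is the sort of objective digital biomarker the surveyed assessments aim to capture.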
Award ID(s):
1908991
NSF-PAR ID:
10164551
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
JMIR mHealth and uHealth
Volume:
8
Issue:
6
ISSN:
2291-5222
Page Range / eLocation ID:
e15517
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Parkinson’s disease (PD) is a progressive neurological movement disorder affecting more than 10 million people globally. PD demands longitudinal assessment of symptoms to monitor disease progression and manage treatment. Existing assessment methods require patients with PD (PwPD) to visit a clinic every 3–6 months for movement assessments conducted by trained clinicians. However, periodic visits pose barriers, as PwPD have limited mobility and healthcare costs increase. Hence, there is a strong demand for telemedicine technologies that assess PwPD in remote settings. In this work, we present an in-home telemedicine kit, named iTex (intelligent Textile), a patient-centered design for carrying out accessible tele-assessments of movement symptoms in people with PD. iTex is composed of a pair of smart textile gloves connected to a customized embedded tablet. The iTex gloves integrate flex sensors on the fingers and an inertial measurement unit (IMU), and have an onboard microcontroller unit with IoT (Internet of Things) capabilities, including data storage and wireless communication. The gloves acquire sensor data wirelessly to monitor various hand movements such as finger tapping, hand opening and closing, and other movement tasks. The gloves connect to a customized tablet computer acting as an IoT device, configured to host a wireless access point, an MQTT broker, and a time-series database server. The tablet also employs a patient-centered interface to guide PwPD through the movement exam protocol. The system was deployed with four PwPD, who used iTex at home independently for a week, performing the test before and after medication intake. We then analyzed the in-home study data and created a feature set. 
The study found that the iTex gloves were capable of collecting movement-related data and distinguishing between pre-medication and post-medication cases in a majority of the participants. The IoT infrastructure demonstrated robust performance in home settings and posed minimal barriers to the assessment exams and to data communication with a remote server. In a post-study survey, all four participants expressed that the system was easy to use and posed minimal barriers to performing the test independently. The present findings indicate that the iTex glove system has potential for periodic, objective assessment of PD motor symptoms in remote settings. 
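One entry in a movement feature set like the one described above might be a finger-tapping rate extracted from a flex-sensor trace. The sketch below is illustrative only (not the authors' iTex code): the synthetic signal shape, sampling rate, and threshold are assumptions.

```python
import numpy as np

def tap_rate(signal, fs, threshold=0.5):
    """Count upward crossings of `threshold` and convert to taps/s."""
    above = signal > threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:])  # rising edges
    return len(onsets) * fs / len(signal)

# Synthetic 10 s flex-sensor trace at 100 Hz: 3 Hz tapping plus
# a little sensor noise (a real trace would come from the glove).
fs = 100
t = np.arange(0, 10, 1 / fs)
trace = 0.5 - 0.5 * np.cos(2 * np.pi * 3 * t) \
        + 0.01 * np.random.default_rng(1).normal(size=t.size)

print(round(tap_rate(trace, fs), 1))  # ~3.0 taps/s
```

Comparing such features across pre- and post-medication sessions is one simple way a system of this kind could quantify motor response to treatment.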
  2.
    Mobile devices are becoming more pervasive in the monitoring of individuals’ health as device functionality and overall prevalence in daily life increase. It is therefore necessary that these devices and their interactions be usable by individuals with diverse abilities and conditions. This paper assesses the usability of a neurocognitive assessment application by individuals with Parkinson’s Disease (PD) and proposes a design that focuses on the user interface, specifically on testing instructions, layouts, and subsequent user interactions. Further, we investigate the potential benefits of cognitive interference (e.g., the addition of outside stimuli that intrude on task-related activity) for a user’s task performance. Understanding the population’s usability requirements and their performance on configured tasks allows for the formation of usable and objective neurocognitive assessments. 
  3.
    Digital health technology is becoming more ubiquitous in monitoring individuals’ health as both device functionality and overall prevalence increase. However, as individuals age, challenges arise with using this technology particularly when it involves neurodegenerative issues (e.g., for individuals with Parkinson’s disease, Alzheimer’s disease, and ALS). Traditionally, neurodegenerative diseases have been assessed in clinical settings using pen-and-paper style assessments; however, digital health systems allow for the collection of far more data than we ever could achieve using traditional methods. The objective of this work is the formation and implementation of a neurocognitive digital health system designed to go beyond what pen-and-paper based solutions can do through the collection of (a) objective, (b) longitudinal, and (c) symptom-specific data, for use in (d) personalized intervention protocols. This system supports the monitoring of all neurocognitive functions (e.g., motor, memory, speech, executive function, sensory, language, behavioral and psychological function, sleep, and autonomic function), while also providing methodologies for personalized intervention protocols. The use of specifically designed tablet-based assessments and wearable devices allows for the collection of objective digital biomarkers that aid in accurate diagnosis and longitudinal monitoring, while patient reported outcomes (e.g., by the diagnosed individual and caregivers) give additional insights for use in the formation of personalized interventions. As many interventions are a one-size-fits-all concept, digital health systems should be used to provide a far more comprehensive understanding of neurodegenerative conditions, to objectively evaluate patients, and form personalized intervention protocols to create a higher quality of life for individuals diagnosed with neurodegenerative diseases. 
  4. BACKGROUND Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms.
    ADVANCES Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam, using multiple elements or a single element under different operational states, and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to reconstruct the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation. Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed “geometric deep optical sensing.” The term “geometric” emphasizes that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces, and that the sensing process can be considered a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks are a natural choice for the encoding and/or decoding processes, from which the term “deep” is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for determining the polarization-dependent photoresponses in some optical sensors. In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam.
    OUTLOOK As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. It will therefore be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of large amounts of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of daily life, especially in the mobile domain and the internet of things.
    Schematic of deep optical sensing: the n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛ^n, ℛ^m, and ℛ^n′. 
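The encoding-decoding scheme of computational spectroscopy described above can be sketched numerically. The following is an illustrative toy model, not code from the Review: the random responsivity matrix, the dimensions, and the use of regularized least squares (in place of a trained neural network) as the decoder are all assumptions made for the sketch.

```python
import numpy as np

# A sensor with m operational states, each with a distinct spectral
# responsivity, encodes an n-point spectrum w into an m-dimensional
# photoresponse vector x = R @ w. The spectrum is then recovered
# computationally from x, shifting complexity from optics to math.
rng = np.random.default_rng(0)
n, m = 64, 96                    # spectral points, sensor states

R = rng.uniform(size=(m, n))     # responsivity matrix (assumed known from calibration)
w = np.exp(-0.5 * ((np.arange(n) - 30) / 4.0) ** 2)  # unknown spectrum (Gaussian peak)
x = R @ w                        # measured photoresponse vector

# Tikhonov-regularized decoder: w' = argmin ||R w - x||^2 + lam ||w||^2
lam = 1e-6
w_rec = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ x)

print(np.allclose(w_rec, w, atol=1e-3))  # True under this tolerance
```

When the mapping from w to x is nonlinear or entangled, this closed-form decoder no longer applies, which is where the deep neural networks discussed in the Review come in.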
  5. Context-based pairing solutions increase the usability of IoT device pairing by eliminating any human involvement in the pairing process. This is possible by utilizing on-board sensors (with same sensing modalities) to capture a common physical context (e.g., ambient sound via each device’s microphone). However, in a smart home scenario, it is impractical to assume that all devices will share a common sensing modality. For example, a motion detector is only equipped with an infrared sensor while Amazon Echo only has microphones. In this paper, we develop a new context-based pairing mechanism called Perceptio that uses time as the common factor across differing sensor types. By focusing on the event timing, rather than the specific event sensor data, Perceptio creates event fingerprints that can be matched across a variety of IoT devices. We propose Perceptio based on the idea that devices co-located within a physically secure boundary (e.g., single family house) can observe more events in common over time, as opposed to devices outside. Devices make use of the observed contextual information to provide entropy for Perceptio’s pairing protocol. We design and implement Perceptio, and evaluate its effectiveness as an autonomous secure pairing solution. Our implementation demonstrates the ability to sufficiently distinguish between legitimate devices (placed within the boundary) and attacker devices (placed outside) by imposing a threshold on fingerprint similarity. Perceptio demonstrates an average fingerprint similarity of 94.9% between legitimate devices while even a hypothetical impossibly well-performing attacker yields only 68.9% between itself and a valid device. 
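The timing-based fingerprinting idea can be illustrated with a small sketch. This is not the published Perceptio implementation: representing a fingerprint as inter-event intervals, the similarity measure, the tolerance, and the sample timestamps are all assumptions chosen to illustrate how differing sensor types can agree on event timing alone.

```python
def fingerprint(timestamps):
    """Inter-event intervals derived from a sorted timestamp list (seconds)."""
    ts = sorted(timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]

def similarity(fp_a, fp_b, tol=0.1):
    """Fraction of paired intervals agreeing within `tol` seconds."""
    n = min(len(fp_a), len(fp_b))
    if n == 0:
        return 0.0
    matches = sum(abs(a - b) <= tol for a, b in zip(fp_a, fp_b))
    return matches / n

# The same household event sequence seen by two co-located devices
# (e.g., a microphone and a motion sensor) with small sensing delays...
inside_a = [0.00, 2.31, 5.10, 9.84, 12.02]
inside_b = [0.04, 2.36, 5.12, 9.90, 12.05]
# ...versus a device outside the boundary observing different events.
outside = [0.50, 7.20, 11.00, 15.75, 19.40]

print(similarity(fingerprint(inside_a), fingerprint(inside_b)))  # high
print(similarity(fingerprint(inside_a), fingerprint(outside)))   # low
```

Thresholding such a similarity score is the same basic decision the paper reports: legitimate co-located devices cluster at high similarity while outside attackers cannot reach it.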