Title: Smart Glasses for Supporting Distributed Care Work: Systematic Review
Background

Over the past 2 decades, various desktop and mobile telemedicine systems have been developed to support communication and care coordination among distributed medical teams. However, in hands-busy care environments, such technologies can become cumbersome because they require medical professionals to operate them manually. Smart glasses have been gaining momentum because of their advantages in enabling hands-free operation and see-what-I-see video-based consultation. Previous research has tested this novel technology in different health care settings.

Objective

The aim of this study was to review how smart glasses were designed, used, and evaluated as a telemedicine tool to support distributed care coordination and communication, as well as to highlight the potential benefits and limitations of medical professionals' use of smart glasses in practice.

Methods

We conducted a literature search in 6 databases that cover research within both the health care and computer science domains. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology to review articles. A total of 5865 articles were retrieved and screened by 3 researchers, with 21 (0.36%) articles included for in-depth analysis.

Results

All of the reviewed articles (21/21, 100%) used off-the-shelf smart glass devices and videoconferencing software, which had a high level of technology readiness for real-world use and deployment in care settings. The common system features used and evaluated in these studies included video and audio streaming, annotation, augmented reality, and hands-free interactions. These studies focused on evaluating the technical feasibility, effectiveness, and user experience of smart glasses. Although the smart glass technology has demonstrated numerous benefits and high levels of user acceptance, the reviewed studies noted a variety of barriers to the successful adoption of this novel technology in actual care settings, including technical limitations, human factors and ergonomics, privacy and security issues, and organizational challenges.

Conclusions

User-centered system design, improved hardware performance, and software reliability are needed to realize the potential of smart glasses. More research is needed to examine and evaluate medical professionals' needs, preferences, and perceptions, as well as to elucidate how smart glasses affect the clinical workflow in complex care environments. Our findings inform the design, implementation, and evaluation of smart glasses to improve organizational and patient outcomes.
Award ID(s):
1948292
NSF-PAR ID:
10463041
Journal Name:
JMIR Medical Informatics
Volume:
11
ISSN:
2291-9694
Page Range / eLocation ID:
e44161
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background

Smart glasses have been gaining momentum as a novel technology because of their advantages in enabling hands-free operation and see-what-I-see remote consultation. Researchers have primarily evaluated this technology in hospital settings; however, limited research has investigated its application in prehospital operations.

Objective

The aim of this study is to understand the potential of smart glasses to support the work practices of prehospital providers, such as emergency medical services (EMS) personnel.

Methods

We conducted semistructured interviews with 13 EMS providers recruited from 4 hospital-based EMS agencies in an urban area on the East Coast of the United States. The interview questions covered EMS workflow, challenges encountered, technology needs, and users' perceptions of smart glasses in supporting daily EMS work. During the interviews, we demonstrated a system prototype to elicit more accurate and comprehensive insights regarding smart glasses. Interviews were transcribed verbatim and analyzed using the open coding technique.

Results

We identified four potential application areas for smart glasses in EMS: enhancing teleconsultation between distributed prehospital and hospital providers, semiautomating patient data collection and documentation in real time, supporting decision-making and situation awareness, and augmenting quality assurance and training. Compared with the built-in touch pad, voice commands and hand gestures were indicated as the most preferred and suitable interaction mechanisms. EMS providers expressed positive attitudes toward using smart glasses during prehospital encounters. However, several potential barriers and user concerns, related to hardware limitations, human factors, reliability, workflow, interoperability, and privacy, need to be considered and addressed before smart glasses can be implemented and deployed in EMS practice.

Conclusions

Smart glasses can be a suitable technological means for supporting EMS work. We conclude this paper by discussing several design considerations for realizing the full potential of this hands-free technology.
  2. Objective

    This study aims to investigate key considerations and critical factors that influence the implementation and adoption of smart glasses in fast-paced medical settings such as emergency medical services (EMS).

    Materials and Methods

    We employed a sociotechnical theoretical framework and conducted a set of participatory design workshops with 15 EMS providers to elicit their opinions and concerns about using smart glasses in real practice.

    Results

    Smart glasses were recognized as a useful tool to improve EMS workflow given their hands-free nature and capability of processing and capturing various patient data. Out of the 8 dimensions of the sociotechnical model, we found that hardware and software, human-computer interface, workflow, and external rules and regulations were cited as the major factors that could influence the adoption of this novel technology. EMS participants highlighted several key requirements for the successful implementation of smart glasses in the EMS context, such as durable devices, easy-to-use and minimal interface design, seamless integration with existing systems and workflow, and secure data management.

    Discussion

    Applications of the sociotechnical model allowed us to identify a range of factors, including not only technical aspects, but also social, organizational, and human factors, that impact the implementation and uptake of smart glasses in EMS. Our work informs design implications for smart glass applications to fulfill EMS providers’ needs.

    Conclusion

    The successful implementation of smart glasses in EMS and other dynamic healthcare settings needs careful consideration of sociotechnical issues and close collaboration between different stakeholders.

     
  3. Background

Home health aides (HHAs) provide necessary hands-on care to older adults and those with chronic conditions in their homes. Despite their integral role, HHAs experience numerous challenges in their work, including difficulty communicating with other health care professionals about patient care while caring for patients and limited access to educational resources. Although technological interventions have the potential to address these challenges, little is known about the technological landscape and existing technology-based interventions designed for and used by this workforce.

Objective

We conducted a scoping review of the scientific literature to identify existing studies that have described, designed, deployed, or tested technology-based tools and apps intended for use by HHAs to care for patients at home. To complement our literature review, we conducted a landscape analysis of existing mobile apps intended for HHAs providing in-home care.

Methods

We searched the following databases from their inception to October 2020: Ovid MEDLINE, Ovid Embase, Cochrane Library, and CINAHL (EBSCO). A total of 3 researchers screened the search yield using prespecified inclusion and exclusion criteria. In addition, 4 researchers independently reviewed the resulting articles, and a fifth researcher arbitrated when needed. Among studies that met the inclusion criteria, data were extracted and summarized narratively. An analysis of mobile health apps designed for HHAs was performed using a predefined set of terms to search the Google Play and Apple App stores. Overall, 2 researchers independently screened the resulting apps, and those that met the inclusion criteria were categorized according to their intended purpose and functionality.

Results

Of the 8643 studies retrieved, 182 (2.11%) underwent full-text review, and 4.9% (9/182) met our inclusion criteria. Approximately half (4/9, 44%) of the studies were descriptive in nature, proposing technology-based systems (eg, web portals and dashboards) or prototypes without a technical or user-based evaluation of the technology. In most (7/9, 78%) papers, HHAs were just one of several users and not the sole or primary intended users of the technology. Our review of mobile apps yielded 166 Android and iOS apps, of which 48 (29%) met the inclusion criteria. These apps provided HHAs with one or more of the following functions: electronic visit verification (29/48, 60%), clocking in and out (23/48, 48%), documentation (22/48, 46%), task checklists (19/48, 40%), communication between HHA and agency (14/48, 29%), patient information (6/48, 13%), resources (5/48, 10%), and communication between HHA and patients (4/48, 8%). Of the 48 apps, 25 (52%) performed monitoring functions, 4 (8%) performed supporting functions, and 19 (40%) performed both.

Conclusions

A limited number of studies and mobile apps have been designed to support HHAs in their work. Further research and rigorous evaluation of technology-based tools are needed to assess their impact on the work HHAs perform in patients' homes.
  4. The PoseASL dataset consists of color and depth videos collected from ASL signers at the Linguistic and Assistive Technologies Laboratory under the direction of Matt Huenerfauth, as part of a collaborative research project with researchers at the Rochester Institute of Technology, Boston University, and the University of Pennsylvania.

Access: After becoming an authorized user of Databrary, please contact Matt Huenerfauth if you have difficulty accessing this volume.

We have collected a new dataset consisting of color and depth videos of fluent American Sign Language signers performing sequences of ASL signs and sentences. Given interest among sign-recognition and other computer-vision researchers in red-green-blue-depth (RGBD) video, we release this dataset for use by the research community. In addition to the video files, we share depth data files from a Kinect v2 sensor, as well as additional motion-tracking files produced through post-processing of these data.

Organization of the Dataset: The dataset is organized into subfolders with codenames such as "P01" or "P16." These codenames refer to the specific human signers who were recorded. Please note that there was no participant P11 or P14; those numbers were accidentally skipped during the process of making appointments to collect video stimuli.

Task: During the recording session, the participant was met by a member of our research team who was a native ASL signer. No other individuals were present during the data collection session. After signing the informed consent and video release document, participants responded to a demographic questionnaire. Next, the data collection session consisted of English word stimuli and cartoon videos. The recording session began with slides that displayed English words and photos of items, and participants were asked to produce the sign for each (PDF included in the materials subfolder). Next, participants viewed three videos of short animated cartoons, which they were asked to recount in ASL:
- Canary Row, Warner Brothers Merrie Melodies 1950 (the 7-minute video divided into seven parts)
- Mr. Koumal Flies Like a Bird, Studio Animovaneho Filmu 1969
- Mr. Koumal Battles his Conscience, Studio Animovaneho Filmu 1971
The word list and cartoons were selected because they are identical to the stimuli used in the collection of the Nicaraguan Sign Language video corpora; see: Senghas, A. (1995). Children's Contribution to the Birth of Nicaraguan Sign Language. Doctoral dissertation, Department of Brain and Cognitive Sciences, MIT.

Demographics: All 14 of our participants were fluent ASL signers. As screening, we asked our participants: Did you use ASL at home growing up, or did you attend a school as a very young child where you used ASL? All of the participants responded affirmatively to this question. A total of 14 DHH participants were recruited on the Rochester Institute of Technology campus. Participants included 7 men and 7 women, aged 21 to 35 (median = 23.5). All of our participants reported that they began using ASL when they were 5 years old or younger, with 8 reporting ASL use since birth and 3 others reporting ASL use since age 18 months.

Filetypes:
*.avi, *_dep.bin: The PoseASL dataset was captured using a Kinect 2.0 RGBD camera. The output of this camera system includes multiple channels: RGB, depth, skeleton joints (25 joints for every video frame), and HD face (1,347 points). The video resolution is 1920 x 1080 pixels for the RGB channel and 512 x 424 pixels for the depth channel. Due to limitations in the acceptable filetypes for sharing on Databrary, we were not permitted to share the binary *_dep.bin files directly produced by the Kinect v2 camera system on the Databrary platform. If your research requires the original binary *_dep.bin files, please contact Matt Huenerfauth.
*_face.txt, *_HDface.txt, *_skl.txt: To make it easier for future researchers to use this dataset, we have also performed some post-processing of the Kinect data. To extract skeleton coordinates from the RGB videos, we used the Openpose system, which is capable of detecting body, hand, facial, and foot keypoints of multiple people in single images in real time. The output of Openpose includes estimates of 70 keypoints for the face, including the eyes, eyebrows, nose, mouth, and face contour. The software also estimates 21 keypoints for each of the hands (Simon et al, 2017), including 3 keypoints for each finger. Additionally, 25 keypoints are estimated for the body pose and feet (Cao et al, 2017; Wei et al, 2016).

Reporting Bugs or Errors: Please contact Matt Huenerfauth to report any bugs or errors that you identify in the corpus. We appreciate your help in improving the quality of the corpus over time.

Acknowledgement: This material is based upon work supported by the National Science Foundation under award 1749376: "Collaborative Research: Multimethod Investigation of Articulatory and Perceptual Constraints on Natural Language Evolution."
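To make the file organization above concrete, here is a minimal Python/NumPy sketch of how such files might be loaded. The description above does not specify the internal layout of the *_dep.bin files or the column format of the post-processed keypoint text files, so the layouts assumed below (headerless 16-bit depth frames stored back to back; one frame per line of x, y, confidence triplets) are illustrative assumptions only, not a specification of the released files.

```python
import numpy as np

DEPTH_W, DEPTH_H = 512, 424  # Kinect v2 depth resolution, per the dataset notes

def load_depth_frames(path):
    # Assumption: *_dep.bin stores raw 16-bit depth frames concatenated
    # back to back with no header. Verify against the actual files.
    raw = np.fromfile(path, dtype=np.uint16)
    n_frames = raw.size // (DEPTH_W * DEPTH_H)
    return raw[: n_frames * DEPTH_W * DEPTH_H].reshape(n_frames, DEPTH_H, DEPTH_W)

def load_keypoints(path, n_points):
    # Assumption: each line of a *_skl.txt / *_face.txt file holds one frame
    # as whitespace-separated (x, y, confidence) triplets, one per keypoint.
    frames = np.atleast_2d(np.loadtxt(path))
    return frames.reshape(-1, n_points, 3)

# Hypothetical file names for illustration only; see the "P01", "P16", etc.
# signer subfolders described above for the actual naming scheme.
# body = load_keypoints("P01/story1_skl.txt", n_points=25)   # 25 body keypoints
# face = load_keypoints("P01/story1_face.txt", n_points=70)  # 70 face keypoints
# depth = load_depth_frames("P01/story1_dep.bin")
```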
  5. Patient-generated health data (PGHD), created and captured from patients via wearable devices and mobile apps, are proliferating outside of clinical settings. Examples include sleep tracking, fitness trackers, continuous glucose monitors, and RFID-enabled implants, with many additional biometric or health surveillance applications in development or envisioned. These data are included in growing stockpiles of personal health data being mined for insight via big data analytics and artificial intelligence/deep learning technologies. Governing these data resources to facilitate patient care and health research while preserving individual privacy and autonomy will be challenging, as PGHD are the least regulated domains of digitalized personal health data (U.S. Department of Health and Human Services, 2018).

When patients themselves collect digitalized PGHD using "apps" provided by technology firms, these data fall outside of conventional health data regulation, such as HIPAA. Instead, PGHD are maintained primarily on the information technology infrastructure of vendors, and the data are governed under the IT firm's own privacy policies and within the firm's intellectual property rights. Dominant narratives position these highly personal data as valuable resources to transform healthcare, stimulate innovation in medical research, and engage individuals in their health and healthcare. However, ensuring the privacy, security, and equity of benefits from PGHD will be challenging. PGHD can be aggregated and, despite putative "deidentification," linked with other health, economic, and social data for predictive analytics. As large tech companies enter the healthcare sector (e.g., Google Health is partnering with Ascension Health to analyze the PHI of millions of people across 21 U.S. states), the lack of harmonization between regulatory regimes may render existing safeguards to preserve patient privacy and control over their PHI ineffective. While healthcare providers are bound to adhere to health privacy laws, Big Tech operates under more relaxed regulatory regimes that will facilitate monetizing PGHD.

We explore three existing data protection regimes relevant to PGHD in the United States that are currently in tension with one another: federal and state health-sector laws, data use and reuse for research and innovation, and industry self-regulation by large tech companies. We then identify three types of structures (organizational, regulatory, and technological/algorithmic) that could synergistically help enact needed regulatory oversight while limiting the friction and economic costs of regulation. This analysis provides a starting point for further discussions and negotiations among stakeholders and regulators toward that end.