

Title: Classification strategies for non‐routine events occurring in high‐risk patient care settings: A scoping review
Abstract

Introduction

Non‐routine events (NREs) are atypical or unusual occurrences in a pre‐defined process. Although some NREs in high‐risk clinical settings have no adverse effects on patient care, others can potentially cause serious patient harm. A unified strategy for identifying and describing NREs in these domains will facilitate the comparison of results between studies.

Methods

We conducted a literature search in PubMed, CINAHL, and EMBASE to identify studies related to NREs in high‐risk domains and evaluated the methods used for event observation and description. We applied the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) taxonomy (cause, impact, domain, type, prevention, and mitigation) to the descriptions of NREs from the literature.

Results

We selected 25 articles that met inclusion criteria for review. Real‐time documentation of NREs was more common than retrospective video review. Thirteen studies used domain experts as observers, and seven studies validated observations with interrater reliability. Using the JCAHO taxonomy, “cause” was the most frequently applied classification method, followed by “impact,” “type,” “domain,” and “prevention and mitigation.”

Conclusions

NREs are frequent in high‐risk medical settings. Strengths identified in several studies included the use of multiple observers with domain expertise and validation of the event ascertainment approach using interrater reliability. By applying the JCAHO taxonomy to the current literature, we provide an example of a structured approach that can be used for future analyses of NREs.
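As an illustration of the structured approach described above, the following is a minimal sketch of how an NRE description could be coded against the six JCAHO taxonomy categories named in the review. The record layout, field names, and example values are assumptions for illustration, not the coding scheme used by the reviewed studies.

```python
# Hypothetical sketch: coding a non-routine event (NRE) against the six
# JCAHO taxonomy categories (cause, impact, domain, type, prevention and
# mitigation). All field names and example values are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class NonRoutineEvent:
    description: str
    cause: List[str] = field(default_factory=list)   # e.g., equipment, communication
    impact: str = "none"                              # e.g., none, near miss, patient harm
    domain: str = ""                                  # care setting where the NRE occurred
    event_type: str = ""                              # "type" in the JCAHO taxonomy
    prevention_mitigation: List[str] = field(default_factory=list)


# Example with illustrative values (not drawn from the reviewed studies):
event = NonRoutineEvent(
    description="Missing instrument discovered after induction",
    cause=["equipment availability"],
    impact="delay, no patient harm",
    domain="operating room",
    event_type="process deviation",
    prevention_mitigation=["pre-procedure checklist"],
)
print(event.impact)
```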

 
Award ID(s):
1763355
NSF-PAR ID:
10453633
Author(s) / Creator(s):
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Journal of Evaluation in Clinical Practice
Volume:
27
Issue:
2
ISSN:
1356-1294
Page Range / eLocation ID:
p. 464-471
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background

    Over the past 2 decades, various desktop and mobile telemedicine systems have been developed to support communication and care coordination among distributed medical teams. However, in the hands-busy care environment, such technologies could become cumbersome because they require medical professionals to operate them manually. Smart glasses have been gaining momentum because of their advantages in enabling hands-free operation and see-what-I-see video-based consultation. Previous research has tested this novel technology in different health care settings.

    Objective

    The aim of this study was to review how smart glasses were designed, used, and evaluated as a telemedicine tool to support distributed care coordination and communication, as well as to highlight the potential benefits and limitations of medical professionals’ use of smart glasses in practice.

    Methods

    We conducted a literature search in 6 databases that cover research within both the health care and computer science domains. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology to review articles. A total of 5865 articles were retrieved and screened by 3 researchers, with 21 (0.36%) articles included for in-depth analysis.

    Results

    All of the reviewed articles (21/21, 100%) used off-the-shelf smart glass devices and videoconferencing software, which had a high level of technology readiness for real-world use and deployment in care settings. The common system features used and evaluated in these studies included video and audio streaming, annotation, augmented reality, and hands-free interactions. These studies focused on evaluating the technical feasibility, effectiveness, and user experience of smart glasses. Although smart glass technology has demonstrated numerous benefits and high levels of user acceptance, the reviewed studies noted a variety of barriers to successful adoption of this novel technology in actual care settings, including technical limitations, human factors and ergonomics, privacy and security issues, and organizational challenges.

    Conclusions

    User-centered system design, improved hardware performance, and software reliability are needed to realize the potential of smart glasses. More research is needed to examine and evaluate medical professionals’ needs, preferences, and perceptions, as well as to elucidate how smart glasses affect the clinical workflow in complex care environments. Our findings inform the design, implementation, and evaluation of smart glasses that will improve organizational and patient outcomes.
  2. Abstract

    Background

    Aminoglycosides are potent bactericidal antibiotics naturally produced by soil microorganisms and are commonly used in agriculture. Exposure to these antibiotics has the potential to cause shifts in the microorganisms that impact plant health. The systematic review described in this protocol will compile and synthesize literature on soil and plant root-associated microbiota, with special attention to aminoglycoside exposure. The systematic review should provide insight into how the soil and plant microbiota are impacted by aminoglycoside exposure, with specific attention to changes in overall species richness and diversity (microbial composition), changes in the resistome (i.e., changes in the quantification of resistance genes), and maintenance of plant health through suppression of pathogenic bacteria. Moreover, the proposed contribution will provide comprehensive information about the data available to guide future primary research studies. This systematic review protocol is based on the question, “What is the impact of aminoglycoside exposure on the soil and plant root-associated microbiota?”

    Methods

    A Boolean search of academic databases and specific websites will be used to identify research articles, conference presentations, and grey literature meeting the search criteria. All search results will be compiled and duplicates removed before title and abstract screening. Two reviewers will screen all the included titles and abstracts using a set of predefined inclusion criteria. Full texts of all titles and abstracts meeting the eligibility criteria will be screened independently by two reviewers. Inclusion criteria will describe the eligible soil and plant root-associated microbiome populations of interest and the eligible aminoglycosides constituting our exposure. Study validity will be assessed using the CEE Critical Appraisal Tool Version 0.2 (Prototype) to evaluate the risk of bias in publications. Data from studies with a low risk of bias will be extracted and compiled into a narrative synthesis and summarized in tables and figures. If sufficient evidence is available, findings will be used to perform a meta-analysis.
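    As a hedged illustration of the kind of Boolean query described above, the sketch below composes an exposure block and a population block. The search terms, syntax, and databases here are placeholder assumptions; the protocol itself defines the actual search strategy.

```python
# Illustrative only: a Boolean query of the kind described in the protocol.
# The terms below are placeholder assumptions, not the protocol's search string.
exposure_terms = ["aminoglycoside*", "gentamicin", "streptomycin", "kanamycin"]
population_terms = ['"soil microbiome"', "rhizosphere", '"root-associated microbiota"']

# Combine exposure and population blocks with OR within blocks and AND between them.
query = "({}) AND ({})".format(" OR ".join(exposure_terms), " OR ".join(population_terms))
print(query)
# (aminoglycoside* OR gentamicin OR streptomycin OR kanamycin) AND ("soil microbiome" OR rhizosphere OR "root-associated microbiota")
```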

     
  3. Abstract

    Erosion along high-latitude coasts has been accelerating in recent decades, resulting in land loss and infrastructure damage, threatening the wellbeing of local communities, and forcing undesired community relocations. This review paper evaluates the state of practice of current coastal stabilization measures across several coastal communities in northern high latitudes. After considering global practices and those in northern high-latitude and Arctic settings, this paper then explores new and potential coastal stabilization measures to address erosion specific to northern high-latitude coastlines. The challenges in constructing the current erosion control measures and the cost of the measures over the last four decades in northern high-latitude regions are presented through case histories. The synthesis shows that among the current erosion controls being used at high latitudes, revetments built with rocks have the fewest reported failures and are the most common measures applied along northern high-latitude coastlines, including permafrost coasts, while riprap is the most common material used. For seawalls, bulkheads, and groin systems, reported failures are common and mostly associated with displacement, deflection, settlement, vandalism, and material ruptures. Revetments have been successfully implemented at sites with a wide range of mean annual erosion rates (0.3–2.4 m/year) and episodic erosion (6.0–22.9 m) due to their low costs and ease of construction, inspection, and decommissioning. No successful case history has been reported for the non-engineered expedient measures that are constructed in the event of an emergency, except for the expedient vegetation measure using root-wads and willows. Soft erosion prevention measures, which include both beach nourishment and dynamically stable beaches, have been considered in this review. The effectiveness of beach nourishment in Utqiaġvik, Alaska, which is affected by permafrost, is inconclusive. Dynamically stable beaches are effective in preventing erosion, and observations show that they experience only minor damage after single storm events. The analysis also shows that more measures have been constructed on spits (relative to bluffs, islands, barrier islands, and river mouths), a landform where many Alaskan coastal communities reside. The emerging erosion control measures that can potentially be adapted to mitigate coastal erosion in high-latitude regions include geosynthetics, the static bay beach concept, refrigerating techniques, and biogeochemical applications. However, this review shows that there is a lack of case studies that evaluate the performance of these new measures in high-latitude environments. This paper identifies research gaps so that these emerging measures can be upscaled for full-scale applications on permafrost coasts.

     
  4. Background

    COVID-19 has severely impacted health in vulnerable demographics. As communities transition back to in-person work, learning, and social activities, pediatric patients who are restricted to their homes due to medical conditions face unprecedented isolation. Prior to the pandemic, it was estimated that each year, over 2.5 million US children remained at home due to medical conditions. Confronting gaps in health and technical resources is central to addressing the challenges faced by children who remain at home. Having children use mobile telemedicine units (telerobots) to interact with their outside environment (e.g., school and play) is increasingly recognized for its potential to support children’s development. Additionally, social telerobots are emerging as a novel form of telehealth. A social telerobot is a tele-operated unit with a mobile base, 2-way audio/video capabilities, and some semiautonomous features.

    Objective

    In this paper, we aimed to provide a critical review of studies focused on the use of social telerobots for pediatric populations.

    Methods

    To examine the evidence on telerobots as a telehealth intervention, we conducted electronic and full-text searches of private and public databases in June 2010. We included studies with the pediatric personal use of interactive telehealth technologies and telerobot studies that explored effects on child development. We excluded telehealth and telerobot studies with adult (aged >18 years) participants.

    Results

    In addition to telehealth and telerobot advantages, evidence from the literature suggests 3 promising robot-mediated supports that contribute to optimal child development—belonging, competence, and autonomy. These robot-mediated supports may be leveraged for improved pediatric patient socioemotional development, well-being, and quality-of-life activities that transfer traditional developmental and behavioral experiences from organic local environments to the remote child.

    Conclusions

    This review contributes to the creation of the first pediatric telehealth taxonomy of care that includes the personal use of telehealth technologies as a compelling form of telehealth care.

     
  5. Obeid, Iyad ; Selesnick, Ivan ; Picone, Joseph (Ed.)
    The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown below in Table 1. The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important we triage TUEG for high yield data. One hour of EEG data requires approximately one hour of human labor to complete annotation using the pipeline described below, so it is important from a financial standpoint that we accurately triage data. A summary of the labels being used to annotate the data is shown in Table 2. Certain standards are put into place to optimize the annotation process while not sacrificing consistency. Due to the nature of EEG recordings, some records start off with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is wasted on it. During the identification of seizure events, a hard “3 second rule” is used to determine whether two events should be combined into a single larger event. This greatly reduces the time that it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3 second gap between seizures, part of our standard dictates that no seizure less than 3 seconds be annotated. Although there is no universally accepted definition for how long a seizure must be, we find that it is difficult to discern with confidence between burst suppression or other morphologically similar impressions when the event is only a couple seconds long. This is due to several reasons, the most notable being the lack of evolution which is oftentimes crucial for the determination of a seizure. After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin data annotation. An example of an annotation is shown in Figure 1. A summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate. Each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on the specific characteristics of each file; however, on average a file containing multiple seizures takes 7 minutes to fully annotate. This includes the time that it takes to read the patient report as well as traverse through the entire file. Once an event has been identified, the start and stop time for the seizure is stored in our annotation tool. This is done on a channel by channel basis resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete. 
Even though there is no annotation being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to solely scrolling through a file from start to finish, a file is often examined through different lenses. Depending on the situation, low pass filters are used, as well as increasing the amplitude of certain channels. These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed. Once each file in a given set has been looked at once, the annotators start the review process. The reviewer checks a file and comments any changes that they recommend. This takes about 3 minutes per seizure containing file, which is significantly less time than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes that the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate. Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator is able to be trained using data that resembles what they would see under normal circumstances. An additional benefit of using released data to train is that it serves as a means of constantly checking our work. If a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to a point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire survive more than one year doing this work. To ensure that the annotators are consistent in their annotations, the team conducts an interrater agreement evaluation periodically to ensure that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4]. An extended discussion of interrater agreement can be found in Shah et al. [5]. The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two upcoming challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes that was made to the annotations was the imposition of a stricter standard for determining the start and stop time of a seizure. Although evolution is still included in the annotations, the start times were altered to start when the spike-wave pattern becomes distinct as opposed to merely when the signal starts to shift from background. This cuts down on background that was mislabeled as a seizure. For seizure end times, all post ictal slowing that was included was removed. The recent release of v1.5.2 did not include any additional data files. Two EEG files had been added because, originally, they were corrupted in v1.5.1 but were able to be retrieved and added for the latest release. The progression from v1.5.0 to v1.5.1 and later to v1.5.2, included the re-annotation of all of the EEG files in order to develop a confident dataset regarding seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions. 
The annotation team is currently working on the next release for TUSZ, v1.6.0, which is expected to occur in August 2020. It will include new data from 2016 to mid-2019. This release will contain 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files that were obtained with our standard triaging process, a part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports. Some of this data contains interesting seizure events collected during long-term EEG sessions or data collected from patients with a history of frequent seizures. It is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020. The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/. 
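    The "3 second rule" described above lends itself to a short worked example. The sketch below is a minimal illustration, assuming seizure events are represented as (start, stop) times in seconds; the function name and event representation are assumptions for illustration, not part of the NEDC annotation tools.

```python
# Illustrative sketch of the "3 second rule" described above: annotated events
# separated by less than 3 seconds are combined into a single larger event, and
# events shorter than 3 seconds are not annotated. The (start, stop) tuple
# representation and the function name are assumptions, not NEDC tooling.
def apply_three_second_rule(events, min_gap=3.0, min_duration=3.0):
    merged = []
    for start, stop in sorted(events):
        if merged and start - merged[-1][1] < min_gap:
            # Gap to the previous event is under 3 s: merge the two events.
            merged[-1] = (merged[-1][0], max(merged[-1][1], stop))
        else:
            merged.append((start, stop))
    # Drop any remaining event shorter than 3 s.
    return [(s, e) for s, e in merged if e - s >= min_duration]


# Example: two bursts 2 s apart are combined into one event; a 2 s event is dropped.
print(apply_three_second_rule([(10.0, 20.0), (22.0, 30.0), (100.0, 102.0)]))
# [(10.0, 30.0)]
```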