
Title: Integrating Datasets on Public Health and Clinical Aspects of Sickle Cell Disease for Effective Community-Based Research and Practice
Sickle cell disease (SCD) is a genetic disease with multiple dimensions, including public health and clinical aspects. The goals of the research study were to (1) understand the public health aspects of sickle cell disease, and (2) understand the overlap between public health and clinical aspects that can inform research and practice beneficial to stakeholders in sickle cell disease management. The approach involved constructing datasets from textual data sources produced by experts on sickle cell disease, including landmark publications from 2020 on sickle cell disease in the United States. Interactive analytics of the integrated datasets that we produced identified that community-based approaches are common to both the public health and clinical aspects of sickle cell disease. An interactive visualization that we produced can aid the understanding of how governmental organizations align with recommendations for addressing sickle cell disease in the United States. From a global perspective, the interactive analytics of the integrated datasets can support the knowledge transfer stage of the SICKLE recommendations (Skills transfer, Increasing self-efficacy, Coordination, Knowledge transfer, Linking to adult services, and Evaluating readiness) for effective pediatric-to-adult transition care for patients with sickle cell disease. Considering the accelerated digital transformations resulting from the COVID-19 pandemic, the constructed datasets of expert recommendations can be integrated within remote digital platforms that expand access to care for individuals living with sickle cell disease. Finally, the interactive analytics of integrated expert recommendations on sickle cell disease management can support individual and team expertise for effective community-based research and practice.
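The dataset-integration step described above lends itself to a simple illustration. The following is a minimal sketch, not the authors' actual pipeline; the file names and column names are hypothetical stand-ins for recommendation datasets tagged by theme.

    import pandas as pd

    # Hypothetical exports of expert recommendations, one per aspect of SCD.
    public_health = pd.read_csv("public_health_recommendations.csv")  # columns: theme, recommendation, source
    clinical = pd.read_csv("clinical_recommendations.csv")            # columns: theme, recommendation, source

    # An inner join on the shared "theme" column surfaces themes present in
    # both datasets, e.g., community-based approaches.
    overlap = public_health.merge(clinical, on="theme",
                                  suffixes=("_public_health", "_clinical"))
    print(overlap["theme"].value_counts())

A join like this is one simple way to expose the public health / clinical overlap that the interactive analytics identified.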
Award ID(s): 2029363
NSF-PAR ID: 10295329
Journal Name: Diseases
Volume: 8
Issue: 4
Page Range or eLocation-ID: 39
ISSN: 2079-9721
Sponsoring Org: National Science Foundation
More Like This
  1. Introduction: Vaso-occlusive crises (VOCs) are a leading cause of morbidity and early mortality in individuals with sickle cell disease (SCD). These crises are triggered by sickle red blood cell (sRBC) aggregation in blood vessels and are influenced by factors such as enhanced sRBC and white blood cell (WBC) adhesion to inflamed endothelium. Advances in microfluidic biomarker assays (i.e., SCD Biochip systems) have led to clinical studies of blood cell adhesion onto endothelial proteins, including fibronectin, laminin, P-selectin, and ICAM-1, functionalized in microchannels. These microfluidic assays mimic the physiological aspects of human microvasculature and help characterize the biomechanical properties of adhered sRBCs under flow. However, analysis of the microfluidic biomarker assay data has so far relied on manual cell counting and exhaustive visual morphological characterization of cells by trained personnel. Integrating deep learning algorithms with microscopic imaging of adhesion-protein-functionalized microfluidic channels can accelerate and standardize accurate classification of blood cells in microfluidic biomarker assays. Here we integrate a deep learning approach into a general-purpose analytical tool covering a wide range of conditions: channels functionalized with different proteins (laminin or P-selectin), with varying degrees of adhesion by both sRBCs and WBCs, and in both normoxic and hypoxic environments. Methods: Our neural networks were trained on a repository of manually labeled SCD Biochip microfluidic biomarker assay whole-channel images. Each channel contained adhered cells from clinical whole blood under a constant shear stress of 0.1 Pa, mimicking physiological levels in post-capillary venules. The machine learning (ML) framework consists of two phases: Phase I segments pixels belonging to blood cells adhered to the microfluidic channel surface, while Phase II associates pixel clusters with specific cell types (sRBCs or WBCs). Phase I is implemented through an ensemble of seven generative fully convolutional neural networks, and Phase II is an ensemble of five neural networks based on a ResNet50 backbone. Each pixel cluster is given a probability of belonging to one of three classes: adhered sRBC, adhered WBC, or non-adhered/other (a code sketch of this two-phase structure follows this abstract). Results and Discussion: We applied our trained ML framework to 107 novel whole-channel images not used during training and compared the results against counts from human experts. As seen in Fig. 1A, there was excellent agreement in counts across all protein and cell types investigated: sRBCs adhered to laminin, sRBCs adhered to P-selectin, and WBCs adhered to P-selectin. Not only was the approach able to handle surfaces functionalized with different proteins, but it also performed well for high-cell-density images (up to 5000 cells per image) in both normoxic and hypoxic conditions (Fig. 1B). The average uncertainty for the ML counts, obtained from accuracy metrics on the test dataset, was 3%. This is a significant improvement on the 20% average uncertainty of the human counts, estimated from the variance in repeated manual analyses of the images. Moreover, manual classification of each image may take up to 2 hours, versus about 6 minutes per image for the ML analysis. Thus, ML provides greater consistency in the classification at a fraction of the processing time. To assess which features the network used to distinguish adhered cells, we generated class activation maps (Fig. 1C-E).
These heat maps indicate the regions of focus for the algorithm in making each classification decision. Intriguingly, the highlighted features were similar to those used by human experts: the dimple in partially sickled RBCs, the sharp endpoints of highly sickled RBCs, and the uniform curvature of the WBCs. Overall, the robust performance of the ML approach in our study sets the stage for generalizing it to other endothelial proteins and experimental conditions, a first step toward a universal microfluidic ML framework targeting blood disorders. Such a framework would not only integrate advanced biophysical characterization into fast, point-of-care diagnostic devices, but also provide a standardized and reliable way of monitoring patients undergoing targeted therapies and curative interventions, including stem cell and gene-based therapies for SCD. Disclosures: Gurkan: Dx Now Inc.: Patents & Royalties; Xatek Inc.: Patents & Royalties; BioChip Labs: Patents & Royalties; Hemex Health, Inc.: Consultancy, Current Employment, Patents & Royalties, Research Funding.
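As a concrete illustration of the two-phase framework described in the Methods above, here is a minimal sketch, not the published implementation: generic torchvision models stand in for the trained ensembles, and the input shapes, crop size, and threshold are illustrative only.

    import torch
    import torchvision

    NUM_CLASSES = 3  # adhered sRBC, adhered WBC, non-adhered/other

    def phase1_segment(image, seg_models, threshold=0.5):
        # Phase I: average per-pixel foreground probabilities over the FCN ensemble.
        with torch.no_grad():
            probs = torch.stack([torch.sigmoid(m(image)["out"]) for m in seg_models]).mean(dim=0)
        return probs > threshold  # binary mask of pixels belonging to adhered cells

    def phase2_classify(crops, cls_models):
        # Phase II: average class posteriors over the ResNet50-based ensemble.
        with torch.no_grad():
            logits = torch.stack([m(crops) for m in cls_models]).mean(dim=0)
        return torch.softmax(logits, dim=1)  # per-crop probabilities over the 3 classes

    # Stand-ins for the trained ensembles (7 segmenters, 5 classifiers, per the abstract).
    seg_models = [torchvision.models.segmentation.fcn_resnet50(num_classes=1).eval()
                  for _ in range(7)]
    cls_models = [torchvision.models.resnet50(num_classes=NUM_CLASSES).eval()
                  for _ in range(5)]

    channel_image = torch.rand(1, 3, 512, 512)  # stand-in for a whole-channel image tile
    mask = phase1_segment(channel_image, seg_models)
    crops = torch.rand(16, 3, 224, 224)         # stand-in for crops around detected pixel clusters
    posteriors = phase2_classify(crops, cls_models)
    counts = torch.bincount(posteriors.argmax(dim=1), minlength=NUM_CLASSES)
    print(counts)  # predicted counts per class for this image

Averaging probabilities across ensemble members, as sketched here, is one standard way to obtain both a consensus prediction and a notion of confidence per pixel cluster.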
  2. Treating disease according to precision health requires the individualization of therapeutic solutions as a cardinal step in a process that typically depends on multiple factors. The starting point is the collection and assembly of data over time to assess the patient's health status and monitor response to therapy. Radiomics is a very important component of this process. Its main goal is implementing a protocol to quantify the informative content of images by first mining and then extracting the most representative features. Further analysis aims to detect potential disease phenotypes through signs and marks of heterogeneity. As multimodal images hinge on various data sources, and these can be integrated with treatment plans and follow-up information, radiomics is naturally centered on dynamically monitoring disease progression and/or the health trajectory of patients. However, radiomics creates critical needs too. A concise list includes: (a) successful harmonization of intra/inter-modality radiomic measurements to facilitate association with other data domains (genetic, clinical, lifestyle aspects, etc.); (b) the ability of data science to revise model strategies and analytics tools to tackle multiple data types and structures (electronic medical records, personal histories, hospitalization data, genomics from various specimens, imaging, etc.) and to offer data-agnostic solutions for predicting patient outcomes; and (c) model validation with independent datasets to ensure generalization of results, clinical value of new risk stratifications, and support for clinical decisions in highly individualized patient management.
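To make the feature-extraction idea concrete, here is a minimal sketch, assuming a 2D image and a binary region-of-interest mask, of a few first-order radiomic features; a full radiomics protocol (for example, via a dedicated library such as pyradiomics) would add shape and texture feature families, inter-modality harmonization, and validation.

    import numpy as np
    from scipy import stats

    def first_order_features(image: np.ndarray, roi: np.ndarray) -> dict:
        # Quantify intensity statistics inside the ROI as candidate radiomic features.
        voxels = image[roi.astype(bool)]
        counts, _ = np.histogram(voxels, bins=64)
        p = counts[counts > 0] / counts.sum()
        return {
            "mean": float(voxels.mean()),
            "variance": float(voxels.var()),
            "skewness": float(stats.skew(voxels)),
            "kurtosis": float(stats.kurtosis(voxels)),
            "entropy": float(-(p * np.log2(p)).sum()),  # an intensity-heterogeneity marker
        }

    image = np.random.rand(128, 128)                 # stand-in for one imaging-modality slice
    roi = np.zeros((128, 128)); roi[40:80, 40:80] = 1
    print(first_order_features(image, roi))

Features like the entropy term above are the kind of heterogeneity marks the abstract refers to when it speaks of detecting potential disease phenotypes.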
  3. Background: Cardiovascular disease (CVD) disparities are a particularly devastating manifestation of health inequity. Despite advancements in prevention and treatment, CVD is still the leading cause of death in the United States. Additionally, research indicates that African American (AA) and other ethnic-minority populations are affected by CVD at earlier ages than white Americans. Given that AAs are the fastest-growing population of smartphone owners and users, mobile health (mHealth) technologies offer unparalleled potential to prevent or improve self-management of chronic disease among this population. Objective: To address the unmet need for culturally tailored, primordial-prevention, CVD-focused mHealth interventions, the MOYO app was cocreated with the involvement of young people from this priority community. The overall project aims to develop and evaluate the effectiveness of a novel smartphone app designed to reduce CVD risk factors among urban AAs, 18-29 years of age. Methods: The theoretical underpinning will combine the principles of community-based participatory research and the agile software development framework. The primary outcome goals of the study will be to determine the usability, acceptability, and functionality of the MOYO app, and to build a cloud-based data collection infrastructure suitable for digital epidemiology in a disparity population. Changes in health-related parameters over a 24-week period, as determined by both passive (eg, physical activity levels, sleep duration, social networking) and active (eg, use of mood measures, surveys, uploading pictures of meals and blood pressure readings) measures, will be the secondary outcome. Participants will be recruited from a majority-AA “large city” school district, 2 historically black colleges or universities, and 1 urban undergraduate college. Following baseline screening for inclusion (administered in person), participants will receive the beta version of the MOYO app and will be monitored during a 24-week pilot period. Analyses of varying data, including social network dynamics, standard metrics of activity, percentage of time away from a given radius of home, circadian rhythm metrics, and proxies for sleep, will be performed. Together with external variables (eg, weather, pollution, and socioeconomic indicators such as food access), these metrics will be used to train machine-learning frameworks to regress them on the self-reported quality-of-life indicators, as sketched below. Results: This 5-year study (2015-2020) is currently in the implementation phase. We believe that MOYO can build upon findings of classical epidemiology and longitudinal studies like the Jackson Heart Study by adding greater granularity to our knowledge of the exposures and behaviors that affect health and disease, and by creating a channel for outreach capable of launching interventions, clinical trials, and enhancements of health literacy. Conclusions: The results of this pilot will provide valuable information about community cocreation of mHealth programs, efficacious design features, and essential infrastructure for digital epidemiology among young AA adults. International Registered Report Identifier (IRRID): DERR1-10.2196/16699
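A minimal sketch of the regression step referenced above, and only a sketch: the file name, feature names, and model choice are hypothetical stand-ins for the study's passive/active measures and external variables.

    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-participant feature export from the pilot.
    data = pd.read_csv("moyo_pilot_features.csv")
    features = ["steps_per_day", "sleep_hours", "time_away_from_home_pct",
                "circadian_regularity", "mood_score", "food_access_index"]
    X, y = data[features], data["quality_of_life"]  # self-reported QoL indicator

    # Regress the self-reported indicator on the collected metrics.
    model = GradientBoostingRegressor(random_state=0)
    print(cross_val_score(model, X, y, cv=5, scoring="r2"))

Cross-validated scores like these are one way to check whether passively collected metrics carry signal about self-reported quality of life before deploying any such model.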
  4. Background: Clinical alarm system safety is a national patient safety goal in the United States. Physiologic monitors are associated with the highest number of device alarms and alarm-related deaths, yet research involving nurses' use of physiologic monitors is rare. Hence, the identification of critical usability issues for monitors, especially those related to patient safety, is a nursing imperative. Objective: This study examined nurses' usability of physiologic monitors in intensive care units with respect to the effectiveness and efficiency of monitor use. Methods: In total, 30 nurses from 4 adult intensive care units completed 40 tasks in a simulation environment. The tasks were common monitoring tasks crucial for appropriate monitoring and safe alarm management, across four categories of competencies: admitting, transferring, and discharging patients using the monitors (7 tasks); managing measurements and monitor settings (23 tasks); performing electrocardiogram (ECG) analysis (7 tasks); and troubleshooting alarm conditions (3 tasks). The nurse-monitor interaction was video-recorded. The principal investigator and two expert intensive care unit nurse educators identified, classified, and validated task success (effectiveness) and time of task completion (efficiency). Results: Among the 40 tasks, only 2 (5%) were successfully completed by all the nurses. For each of the remaining 38 tasks, between 1 and 27 nurses (3%-90%) abandoned the task or did not perform it correctly. The task with the shortest completion time was “take monitor out of standby” (mean 0:02, SD 0:01 min:s), whereas the task “record a 25 mm/s ECG strip of any of the ECG leads” had the longest completion time (mean 1:14, SD 0:32 min:s). The total time to complete 37 navigation-related tasks ranged from a minimum of 3 min 57 s to a maximum of 32 min 42 s. Regression analysis showed that it took 6 s per click or step to successfully complete a task (a toy version of this fit is sketched below). To understand the nurses' thought processes during monitor navigation, the authors analyzed the paths of the 2 tasks with the lowest successful completion rates, where only 13% (4/30) of the nurses completed the tasks correctly. Although 30% (9/30) of the nurses accessed the correct screen first for task 1 and task 2, they could not find their way easily from there to successfully complete the 2 tasks. Conclusions: Usability testing of physiologic monitors revealed major ineffectiveness and inefficiencies in current nurse-monitor interactions. The results indicate the potential for safety and productivity issues in completing routine tasks. Training on monitor use should include the critical monitoring functions necessary for safe, effective, efficient, and appropriate monitoring, including knowledge of the shortest navigation path. It is imperative that vendors' future monitor designs mimic clinicians' thought processes for successful, safe, and efficient monitor navigation.
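A toy version of the per-click regression mentioned above, with invented data chosen so the fitted slope lands near the reported 6 s per click; this is not the study's dataset.

    import numpy as np

    clicks = np.array([3, 5, 8, 12, 20, 33])        # steps/clicks required per task (illustrative)
    seconds = np.array([20, 31, 50, 74, 118, 200])  # illustrative completion times
    slope, intercept = np.polyfit(clicks, seconds, deg=1)
    print(f"{slope:.1f} s per click (intercept {intercept:.1f} s)")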
  5. Obeid, Iyad; Selesnick, Ivan (Eds.)
    Electroencephalography (EEG) is a popular clinical monitoring tool used for diagnosing brain-related disorders such as epilepsy [1]. As monitoring EEGs in a critical-care setting is an expensive and tedious task, there is great interest in developing real-time EEG monitoring tools to improve patient care quality and efficiency [2]. However, clinicians require automatic seizure detection tools that provide decisions with at least 75% sensitivity and fewer than 1 false alarm (FA) per 24 hours [3]. Some commercial tools have recently claimed to reach such performance levels, including the Olympic Brainz Monitor [4] and Persyst 14 [5]. In this abstract, we describe our efforts to transform a high-performance offline seizure detection system [3] into a low-latency real-time, or online, seizure detection system. An overview of the system is shown in Figure 1. The main difference between an online and an offline system is that an online system must always be causal and have minimal latency, which is often defined by domain experts. The offline system, shown in Figure 2, uses two phases of deep learning models with postprocessing [3]. The channel-based long short-term memory (LSTM) model (Phase 1 or P1) processes linear frequency cepstral coefficient (LFCC) [6] features from each EEG channel separately. We use the hypotheses generated by the P1 model to create additional features that carry information about the detected events and their confidence. The P2 model uses these additional features and the LFCC features to learn the temporal and spatial aspects of the EEG signals using a hybrid convolutional neural network (CNN) and LSTM model. Finally, Phase 3 aggregates the results from both P1 and P2 before applying a final postprocessing step. The online system implements Phase 1 by taking advantage of the Linux piping mechanism, multithreading techniques, and multi-core processors. To convert Phase 1 into an online system, we divide it into five major modules: signal preprocessor, feature extractor, event decoder, postprocessor, and visualizer. The system reads 0.1-second frames from each EEG channel and sends them to the feature extractor and the visualizer. The feature extractor generates LFCC features in real time from the streaming EEG signal. Next, the system computes seizure and background probabilities using a channel-based LSTM model and applies a postprocessor to aggregate the detected events across channels. The system then displays the EEG signal and the decisions simultaneously using a visualization module. The online system uses C++, Python, TensorFlow, and PyQtGraph in its implementation. The online system accepts streamed EEG data sampled at 250 Hz as input. The system begins processing the EEG signal by applying a TCP montage [8]. Depending on the type of montage, the EEG signal can have either 22 or 20 channels. To enable online operation, we send 0.1-second (25-sample) frames from each channel of the streamed EEG signal to the feature extractor and the visualizer. Feature extraction is performed sequentially on each channel. The signal preprocessor writes the sample frames into two streams to feed these modules. In the first stream, the feature extractor receives the signals via stdin. In parallel, as a second stream, the visualizer shares a user-defined file with the signal preprocessor. This user-defined file holds raw signal information as a buffer for the visualizer.
The signal preprocessor writes into the file while the visualizer reads from it. Reading from and writing to the same file poses a challenge: the visualizer can start reading while the signal preprocessor is writing. To resolve this issue, we utilize a file-locking mechanism in the signal preprocessor and visualizer. Each process temporarily locks the file, performs its operation, releases the lock, and tries to obtain the lock again after a waiting period. The file-locking mechanism ensures that only one process can access the file by prohibiting other processes from reading or writing while one process is modifying it [9]. The feature extractor uses circular buffers to save 0.3 seconds, or 75 samples, from each channel for extracting 0.2-second, or 50-sample, center-aligned windows. The module generates 8 absolute LFCC features, in which the zeroth cepstral coefficient is replaced by a temporal-domain energy term. For extracting the rest of the features, three pipelines are used. The differential energy feature is calculated in a 0.9-second absolute-feature window with a frame size of 0.1 seconds: the difference between the maximum and minimum temporal energy terms is calculated over this range. Then, the first-derivative, or delta, features are calculated using another 0.9-second window. Finally, the second-derivative, or delta-delta, features are calculated using a 0.3-second window [6]. The differential energy for the delta-delta features is not included. In total, we extract 26 features from the raw sample windows, which add 1.1 seconds of delay to the system. We used the Temple University Hospital Seizure Database (TUSZ) v1.2.1 for developing the online system [10]. The statistics for this dataset are shown in Table 1. A channel-based LSTM model was trained using the features derived from the train set using the online feature extractor module. A window-based normalization technique was applied to those features. In the offline model, we scale features by normalizing using the maximum absolute value of a channel [11] before applying a sliding-window approach. Since the online system has access to a limited amount of data, we normalize based on the observed window. The model uses the feature vectors with a frame size of 1 second and a window size of 7 seconds. We evaluated the model using the offline P1 postprocessor to determine the efficacy of the delayed features and the window-based normalization technique. As shown by the results of experiments 1 and 4 in Table 2, these changes give us a performance comparable to the offline model. The online event decoder module utilizes this trained model for computing probabilities for the seizure and background classes. These posteriors are then postprocessed to remove spurious detections. The online postprocessor receives and saves 8 seconds of class posteriors in a buffer for further processing. It applies multiple heuristic filters (e.g., a probability threshold) to make an overall decision by combining events across the channels. These filters evaluate the average confidence, the duration of a seizure, and the channels where the seizures were observed. The postprocessor delivers the label and confidence to the visualizer. The visualizer starts to display the signal as soon as it gets access to the signal file, as shown in Figure 1 using the “Signal File” and “Visualizer” blocks. Once the visualizer receives the label and confidence for the latest epoch from the postprocessor, it overlays the decision and color codes that epoch. A sketch of the locking and buffering mechanisms appears below.
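The following is a minimal sketch, for a POSIX system, of the two mechanisms just described: an exclusive advisory lock so the signal preprocessor and visualizer never touch the shared signal file simultaneously, and a per-channel circular buffer holding 0.3 s (75 samples at 250 Hz) from which 0.2 s (50-sample) center-aligned windows are taken. This is an illustration, not the project's C++/Python source; the file name is invented.

    import fcntl
    from collections import deque

    SIGNAL_FILE = "signal_buffer.raw"  # stand-in for the user-defined shared file
    FRAME = 25    # 0.1 s at 250 Hz
    BUFFER = 75   # 0.3 s of history kept per channel
    WINDOW = 50   # 0.2 s center-aligned analysis window

    def write_frame_locked(frame_bytes: bytes) -> None:
        # Preprocessor side: take an exclusive lock, append the frame, release.
        # The visualizer wraps its reads the same way, so only one process
        # accesses the file at a time.
        with open(SIGNAL_FILE, "ab") as f:
            fcntl.flock(f, fcntl.LOCK_EX)  # blocks until no other process holds the lock
            try:
                f.write(frame_bytes)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

    class ChannelBuffer:
        # Circular buffer holding the most recent 75 samples of one channel.
        def __init__(self):
            self.samples = deque(maxlen=BUFFER)

        def push(self, frame):
            self.samples.extend(frame)  # oldest samples fall off automatically

        def centered_window(self):
            # Return the 50-sample window centered in the 75-sample buffer,
            # or None until enough history has accumulated.
            if len(self.samples) < BUFFER:
                return None
            data = list(self.samples)
            mid = BUFFER // 2
            return data[mid - WINDOW // 2 : mid + WINDOW // 2]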
The visualizer uses red for seizures, with the label SEIZ, and green for the background class, with the label BCKG. Once the streaming finishes, the system saves three files: a signal file in which the sample frames are saved in the order they were streamed, a time-segmented event (TSE) file with the overall decisions and confidences, and a hypotheses (HYP) file that saves the label and confidence for each epoch. The user can plot the signal and decisions using the signal and HYP files with only the visualizer by enabling the appropriate options. For comparing the performance of different stages of development, we used the test set of the TUSZ v1.2.1 database, which contains 1015 EEG records of varying duration. The any-overlap performance [12] of the overall system shown in Figure 2 is 40.29% sensitivity with 5.77 FAs per 24 hours. For comparison, the previous state-of-the-art model developed on this database performed at 30.71% sensitivity with 6.77 FAs per 24 hours [3]. The individual performances of the deep learning phases are as follows: Phase 1 (P1) performs at 39.46% sensitivity and 11.62 FAs per 24 hours, and Phase 2 detects seizures with 41.16% sensitivity and 11.69 FAs per 24 hours. We trained an LSTM model with the delayed features and the window-based normalization technique for developing the online system. Using the offline decoder and postprocessor, the model performed at 36.23% sensitivity with 9.52 FAs per 24 hours. The trained model was then evaluated with the online modules. The current performance of the overall online system is 45.80% sensitivity with 28.14 FAs per 24 hours. Table 2 summarizes the performances of these systems. The performance of the online system deviates from that of the offline P1 model because the online postprocessor fails to combine events when the seizure probability fluctuates during an event. The modules in the online system add a total of 11.1 seconds of delay for processing each second of data, as shown in Figure 3. In practice, we also count the time for loading the model and starting the visualizer block; with these included, the system takes 15 seconds to display the first hypothesis and detects seizure onsets with an average latency of 15 seconds. Implementing an automatic seizure detection model in real time is not trivial. We used a variety of techniques, such as the file-locking mechanism, multithreading, circular buffers, real-time event decoding, and signal-decision plotting, to realize the system. A video demonstrating the system is available at: https://www.isip.piconepress.com/projects/nsf_pfi_tt/resources/videos/realtime_eeg_analysis/v2.5.1/video_2.5.1.mp4. The final conference submission will include a more detailed analysis of the online performance of each module.
ACKNOWLEDGMENTS: Research reported in this publication was most recently supported by the National Science Foundation Partnership for Innovation award number IIP-1827565 and the Pennsylvania Commonwealth Universal Research Enhancement Program (PA CURE). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the official views of any of these organizations.
REFERENCES
[1] A. Craik, Y. He, and J. L. Contreras-Vidal, “Deep learning for electroencephalogram (EEG) classification tasks: a review,” J. Neural Eng., vol. 16, no. 3, p. 031001, 2019. https://doi.org/10.1088/1741-2552/ab0ab5.
[2] A. C. Bridi, T. Q. Louro, and R. C. L. Da Silva, “Clinical Alarms in intensive care: implications of alarm fatigue for the safety of patients,” Rev. Lat. Am. Enfermagem, vol. 22, no. 6, p. 1034, 2014. https://doi.org/10.1590/0104-1169.3488.2513.
[3] M. Golmohammadi, V. Shah, I. Obeid, and J. Picone, “Deep Learning Approaches for Automatic Seizure Detection from Scalp Electroencephalograms,” in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York, New York, USA: Springer, 2020, pp. 233–274. https://doi.org/10.1007/978-3-030-36844-9_8.
[4] “CFM Olympic Brainz Monitor.” [Online]. Available: https://newborncare.natus.com/products-services/newborn-care-products/newborn-brain-injury/cfm-olympic-brainz-monitor. [Accessed: 17-Jul-2020].
[5] M. L. Scheuer, S. B. Wilson, A. Antony, G. Ghearing, A. Urban, and A. I. Bagic, “Seizure Detection: Interreader Agreement and Detection Algorithm Assessments Using a Large Dataset,” J. Clin. Neurophysiol., 2020. https://doi.org/10.1097/WNP.0000000000000709.
[6] A. Harati, M. Golmohammadi, S. Lopez, I. Obeid, and J. Picone, “Improved EEG Event Classification Using Differential Energy,” in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium, 2015, pp. 1–4. https://doi.org/10.1109/SPMB.2015.7405421.
[7] V. Shah, C. Campbell, I. Obeid, and J. Picone, “Improved Spatio-Temporal Modeling in Automated Seizure Detection using Channel-Dependent Posteriors,” Neurocomputing, 2021.
[8] W. Tatum, A. Husain, S. Benbadis, and P. Kaplan, Handbook of EEG Interpretation. New York City, New York, USA: Demos Medical Publishing, 2007.
[9] D. P. Bovet and C. Marco, Understanding the Linux Kernel, 3rd ed. O’Reilly Media, Inc., 2005. https://www.oreilly.com/library/view/understanding-the-linux/0596005652/.
[10] V. Shah et al., “The Temple University Hospital Seizure Detection Corpus,” Front. Neuroinform., vol. 12, pp. 1–6, 2018. https://doi.org/10.3389/fninf.2018.00083.
[11] F. Pedregosa et al., “Scikit-learn: Machine Learning in Python,” J. Mach. Learn. Res., vol. 12, pp. 2825–2830, 2011. https://dl.acm.org/doi/10.5555/1953048.2078195.
[12] J. Gotman, D. Flanagan, J. Zhang, and B. Rosenblatt, “Automatic seizure detection in the newborn: Methods and initial evaluation,” Electroencephalogr. Clin. Neurophysiol., vol. 103, no. 3, pp. 356–362, 1997. https://doi.org/10.1016/S0013-4694(97)00003-9.