Background: Early-life pain is associated with adverse neurodevelopmental consequences, and current pain assessment practices are discontinuous, inconsistent, and highly dependent on nurses' availability. Furthermore, the facial expressions in commonly used pain assessment tools are not associated with brain-based evidence of pain. Purpose: To develop and validate a machine learning (ML) model to classify pain. Methods: In this retrospective validation study, using a Human-centered Design for Embedded Machine Learning Solutions approach and the Neonatal Facial Coding System (NFCS), 6 experienced neonatal intensive care unit (NICU) nurses labeled data from randomly assigned iCOPEvid (infant Classification Of Pain Expression video) sequences of 49 neonates undergoing heel lance. NFCS is the only observational pain assessment tool associated with brain-based evidence of pain. A standard 70% training and 30% testing split of the data was used to train and test several ML models. NICU nurses' interrater reliability was evaluated, and NICU nurses' area under the receiver operating characteristic curve (AUC) was compared with the ML models' AUC. Results: Nurses' weighted mean interrater reliability was 68% (63%-79%) for NFCS tasks, 77.7% (74%-83%) for pain intensity, 48.6% (15%-59%) for frame pain classification, and 78.4% (64%-100%) for video pain classification, with an AUC of 0.68. The best-performing ML model had 97.7% precision, 98% accuracy, 98.5% recall, and an AUC of 0.98. Implications for Practice and Research: The pain classification ML model's AUC far exceeded that of NICU nurses for identifying neonatal pain. These findings will inform the development of a continuous, unbiased, brain-based, nurse-in-the-loop Pain Recognition Automated Monitoring System (PRAMS) for neonates and infants.
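The interrater reliability percentages reported in the Results are agreement statistics across raters. As a minimal sketch, mean pairwise percent agreement can be computed as follows; note this assumes simple, unweighted agreement over identical-length label sequences, since the study's exact weighting scheme is not given here.

```python
# Sketch: mean pairwise percent agreement across raters.
# Assumes unweighted agreement; the study's exact statistic may differ.
from itertools import combinations

def percent_agreement(ratings):
    """Return the mean pairwise percent agreement (0-100).

    `ratings` is a list of per-rater label sequences, all the same length.
    """
    pairs = list(combinations(ratings, 2))
    total = sum(
        sum(a == b for a, b in zip(r1, r2)) / len(r1)
        for r1, r2 in pairs
    )
    return 100.0 * total / len(pairs)

# Three hypothetical raters labeling four frames as pain (1) / no pain (0).
score = percent_agreement([[1, 0, 1, 1], [1, 0, 0, 1], [1, 1, 1, 1]])
print(f"mean pairwise agreement: {score:.1f}%")
```

Percent agreement is easy to interpret but does not correct for chance; chance-corrected coefficients such as Cohen's kappa are a common alternative when class prevalence is skewed.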
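The evaluation protocol described in the Methods, a 70/30 train/test split with AUC scoring, can be sketched with scikit-learn. The classifier choice and the synthetic stand-in features below are illustrative assumptions, not the study's actual model or iCOPEvid data.

```python
# Sketch of the 70/30 split + AUC evaluation described above.
# Features and labels are synthetic placeholders, not iCOPEvid data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))  # stand-in facial-feature vectors
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # pain / no pain

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=0, stratify=y
)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"AUC={auc:.2f} accuracy={acc:.2f}")
```

Stratifying the split keeps the pain / no-pain class balance the same in both partitions, which matters when one class is rare.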
Application of a human-centered design for embedded machine learning model to develop data labeling software with nurses: Human-to-Artificial Intelligence (H2AI)
Background: Trust is a critical driver of technology usage behaviors and is essential for technology adoption. Thus, nurses' participation in software development is critical for shaping their involvement, competency, and overall perceptions of software quality. Purpose: To engage nurses as subject matter experts in developing a machine learning (ML) Pain Recognition Automated Monitoring System. Method: Using the Human-centered Design for Embedded Machine Learning Solutions (HCDe-MLS) model, nurses informed the development of an intuitive data labeling software solution, Human-to-Artificial Intelligence (H2AI). Findings: H2AI facilitated efficient data labeling, stored labeled data to train ML models, and tracked interrater reliability. OpenCV provided efficient video-to-image data pre-processing for data labeling. MobileFaceNet demonstrated superior results for default landmark placement on neonatal video images. Discussion: Nurses' engagement in clinical decision support software development is critical for ensuring that the end product addresses nurses' priorities, reflects nurses' actual cognitive and decision-making processes, and garners nurses' trust and technology adoption.
- Award ID(s):
- 2205472
- PAR ID:
- 10531492
- Publisher / Repository:
- Elsevier
- Date Published:
- Journal Name:
- International Journal of Medical Informatics
- Volume:
- 183
- Issue:
- C
- ISSN:
- 1386-5056
- Page Range / eLocation ID:
- 105337
- Subject(s) / Keyword(s):
- Clinical decision support software, data labeling, Human-centered Design for Embedded Machine Learning Solutions, machine learning, technology adoption
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Generating labeled training datasets has become a major bottleneck in Machine Learning (ML) pipelines. Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling so that human time is not wasted labeling irrelevant, redundant, or trivial examples. This paper proposes a new approach to active ML with nonparametric or overparameterized models such as kernel methods and neural networks. In the context of binary classification, the new approach is shown to possess a variety of desirable properties that allow active learning algorithms to automatically and efficiently identify decision boundaries and data clusters.
- Millimeter-Wave (mmWave) radar can enable high-resolution human pose estimation with low cost and computational requirements. However, mmWave data point cloud, the primary input to processing algorithms, is highly sparse and carries significantly less information than other alternatives such as video frames. Furthermore, the scarce labeled mmWave data impedes the development of machine learning (ML) models that can generalize to unseen scenarios. We propose a fast and scalable human pose estimation (FUSE) framework that combines multi-frame representation and meta-learning to address these challenges. Experimental evaluations show that FUSE adapts to the unseen scenarios 4× faster than current supervised learning approaches and estimates human joint coordinates with about 7 cm mean absolute error.
- The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of IML (interpretable machine learning) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases such as building trust in models, performing model debugging, and generally informing real human decision-making.
- Improving the performance and explanations of ML algorithms is a priority for adoption by humans in the real world. In critical domains such as healthcare, such technology has significant potential to reduce the burden on humans and considerably reduce manual assessments by providing quality assistance at scale. In today's data-driven world, artificial intelligence (AI) systems still face issues with bias, explainability, and human-like reasoning and interpretability. Causal AI is a technique that can reason and make human-like choices, making it possible to go beyond narrow ML-based techniques and to integrate AI into human decision-making. It also offers intrinsic explainability, adaptability to new domains, bias-free predictions, and works with datasets of all sizes. In this lecture-style tutorial, we detail how a richer representation of causality in AI systems, using a knowledge graph (KG) based approach, is needed for intervention and counterfactual reasoning (Figure 1), how we get to model-based and domain explainability, and how causal representations help in web and healthcare applications.