Title: Deep neural networks learn by using human-selected electrocardiogram features and novel features
Abstract

Aims: We sought to investigate whether artificial intelligence (AI), and specifically deep neural networks (NNs) for electrocardiogram (ECG) signal analysis, can be explained using human-selected features. We also sought to quantify such explainability and to test whether the AI model learns features similar to those of a human expert.

Methods and results: We used a set of 100 000 ECGs annotated with human-explainable features. We applied both linear and non-linear models to predict the output of published ECG AI models for the detection of patients’ age and sex. We further used canonical correlation analysis to quantify the amount of information shared between the NN features and the human-selected features, and we reconstructed single human-selected ECG features from the unexplained NN features using a simple linear model. We found a strong correlation between the simple models and the AI output (R2 of 0.49–0.57 for the linear models and 0.69–0.70 for the non-linear models). The correlation of the human-explainable features with either the 13 strongest age AI features or the 15 strongest sex AI features was above 0.85 (for comparison, the first 14 principal components explain 90% of the human feature variance). We linearly reconstructed single human-selected ECG features from the AI features with R2 up to 0.86.

Conclusion: This work shows that NNs for ECG signals extract features in a manner similar to human experts, and that they also generate additional novel features that help achieve superior performance.
Award ID(s): 1830418
NSF-PAR ID: 10355217
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: European Heart Journal - Digital Health
Volume: 2
Issue: 3
ISSN: 2634-3916
Page Range / eLocation ID: 446 to 455
Format(s): Medium: X
Sponsoring Org: National Science Foundation
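
A minimal sketch of the two analyses described in the abstract, using scikit-learn and synthetic stand-in arrays: canonical correlation analysis between NN-derived and human-selected ECG features, and linear reconstruction of a single human-selected feature from the NN features. The array sizes, feature counts, and random data below are illustrative assumptions, not the study's actual features or code.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_ecgs = 1_000                                   # stand-in for the 100 000-ECG set
nn_feats = rng.normal(size=(n_ecgs, 64))         # hypothetical NN embedding per ECG
human_feats = rng.normal(size=(n_ecgs, 14))      # hypothetical human-selected features

# Canonical correlation analysis: how much information do the two views share?
cca = CCA(n_components=5)
u, v = cca.fit_transform(nn_feats, human_feats)
canon_corrs = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(5)]
print("canonical correlations:", np.round(canon_corrs, 3))

# Reconstruct one human-selected feature (e.g., a single ECG interval) from the
# NN features with a plain linear model and report the out-of-sample R^2.
X_tr, X_te, y_tr, y_te = train_test_split(nn_feats, human_feats[:, 0], random_state=0)
lin = LinearRegression().fit(X_tr, y_tr)
print("R^2, reconstructing human feature 0:", r2_score(y_te, lin.predict(X_te)))
```

With the study's real feature matrices in place of the random arrays, the canonical correlations and reconstruction R2 would correspond to the quantities reported in the abstract.
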
More Like this
  1. Abstract

    Methods of explainable artificial intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of neural networks (NNs), highlighting which features in the input contribute the most to a NN prediction. Here, we discuss our “lesson learned” that the task of attributing a prediction to the input does not have a single solution. Instead, the attribution results depend greatly on the considered baseline that the XAI method utilizes—a fact that has been overlooked in the geoscientific literature. The baseline is a reference point to which the prediction is compared so that the prediction can be understood. This baseline can be chosen by the user or is set by construction in the method’s algorithm—often without the user being aware of that choice. We highlight that different baselines can lead to different insights for different science questions and, thus, should be chosen accordingly. To illustrate the impact of the baseline, we use a large ensemble of historical and future climate simulations forced with the shared socioeconomic pathway 3-7.0 (SSP3-7.0) scenario and train a fully connected NN to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We then use various XAI methods and different baselines to attribute the network predictions to the input. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions. We conclude by discussing important implications and considerations about the use of baselines in XAI research.

    Significance Statement

    In recent years, methods of explainable artificial intelligence (XAI) have been widely adopted in geoscientific applications, because they can be used to attribute the predictions of neural networks (NNs) to the input and to interpret them physically. Here, we highlight that the attributions—and the physical interpretation—depend greatly on the choice of the baseline—a fact that has been overlooked in the geoscientific literature. We illustrate this dependence for a specific climate task, in which a NN is trained to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions.
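
As a minimal illustration of the baseline dependence discussed above, the sketch below computes integrated-gradients attributions for a toy linear model under two different reference points (an all-zero baseline and a mean-state baseline). Everything here is a synthetic stand-in, not the authors' network or XAI pipeline; for a linear model the attribution reduces exactly to (input - baseline) * weights, which makes the role of the baseline explicit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 10                      # think of a flattened annual temperature map
w = rng.normal(size=n_features)      # weights of a toy linear "network" f(x) = w @ x
x = rng.normal(size=n_features)      # one input sample to explain

def integrated_gradients(x, baseline, w, steps=64):
    """Integrated gradients for the linear model f(x) = w @ x. The gradient is
    constant, so the path integral reduces to (x - baseline) * w; the averaging
    over steps is kept explicit to mirror the general recipe."""
    grads = np.tile(w, (steps, 1))               # gradient of w @ x is w everywhere
    return (x - baseline) * grads.mean(axis=0)

attr_zero = integrated_gradients(x, np.zeros(n_features), w)           # zero-map baseline
attr_mean = integrated_gradients(x, np.full(n_features, x.mean()), w)  # mean-state baseline

# Same model, same input, different baselines -> different attributions,
# because each baseline answers a different "relative to what?" question.
print("zero baseline :", np.round(attr_zero, 2))
print("mean baseline :", np.round(attr_mean, 2))
```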

     
  2. The emergence of photorealistic and cinematic non-player character (NPC) animation presents new challenges for video game developers. Game player expectations of cinematic acting styles bring a more sophisticated aesthetic to the representation of social interaction. New methods can streamline workflow by integrating actor-driven character design into the development of game character AI and animation. A workflow that tracks actor performance through to final neural network (NN) design depends on a rigorous method for producing single-actor video corpora from which to train emotion AI NN models. While numerous video corpora have been developed to study facial emotion elicitation, test theoretical models, and train neural networks to recognize emotion, developing single-actor corpora to train NNs for NPCs in video games is uncommon. A class of facial emotion recognition (FER) products has enabled the production of single-actor video corpora that use emotion analysis data. This paper introduces a single-actor game character corpus workflow for game character developers. The proposed method uses a single-actor video corpus and dataset with the intent to train and implement a NN in an off-the-shelf video game engine for facial animation of an NPC. The efficacy of using a NN-driven animation controller has already been demonstrated (Schiffer, 2021; Kozasa et al., 2006). This paper focuses on using a single-actor video corpus to train such a NN-driven animation controller.
  3. Understanding the learning dynamics and inductive bias of neural networks (NNs) is hindered by the opacity of the relationship between NN parameters and the function represented. In part, this is due to symmetries inherent within the NN parameterization, which allow multiple different parameter settings to produce an identical output function, obscuring that relationship and leaving redundant degrees of freedom. The NN parameterization is invariant under two symmetries: permutation of the neurons and a continuous family of transformations of the scale of weight and bias parameters. We propose taking a quotient with respect to the second symmetry group and reparametrizing ReLU NNs as continuous piecewise linear splines. Using this spline lens, we study learning dynamics in shallow univariate ReLU NNs, finding unexpected insights and explanations for several perplexing phenomena. We develop a surprisingly simple and transparent view of the structure of the loss surface, including its critical and fixed points, Hessian, and Hessian spectrum. We also show that standard weight initializations yield very flat initial functions, and that this flatness, together with overparametrization and the initial weight scale, is responsible for the strength and type of implicit regularization, consistent with previous work. Our implicit regularization results are complementary to recent work, showing that initialization scale critically controls implicit regularization via a kernel-based argument. Overall, removing the weight scale symmetry enables us to prove these results more simply, to prove new results, and to gain new insights, while offering a far more transparent and intuitive picture. Looking forward, our quotiented spline-based approach will extend naturally to the multivariate and deep settings, and alongside the kernel-based view, we believe it will play a foundational role in efforts to understand neural networks. Videos of learning dynamics using a spline-based visualization are available at http://shorturl.at/tFWZ2.
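
As a rough sketch of the spline reparameterization (random weights, not the paper's code): for a shallow univariate ReLU network f(x) = sum_i v_i * relu(w_i * x + b_i) + c, each hidden unit contributes one breakpoint at x = -b_i / w_i and a slope change of v_i * |w_i| there, so the network can be read off directly as a continuous piecewise linear spline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_hidden = 5
w = rng.normal(size=n_hidden)        # input weights
b = rng.normal(size=n_hidden)        # hidden biases
v = rng.normal(size=n_hidden)        # output weights
c = 0.0                              # output bias

def f(x):
    """Shallow univariate ReLU network evaluated at the points in x."""
    return np.maximum(w * x[:, None] + b, 0.0) @ v + c

# Spline view: breakpoints and the change in slope when crossing each one
# from left to right (v_i * |w_i| regardless of the sign of w_i).
breakpoints = -b / w
delta_slopes = v * np.abs(w)
order = np.argsort(breakpoints)
print("breakpoints :", np.round(breakpoints[order], 3))
print("delta slopes:", np.round(delta_slopes[order], 3))

# Sanity check: the total slope change across all breakpoints equals the
# difference between the slopes far to the right and far to the left of them.
xs = np.array([breakpoints.min() - 10.0, breakpoints.min() - 9.0,
               breakpoints.max() + 9.0, breakpoints.max() + 10.0])
ys = f(xs)
left_slope = (ys[1] - ys[0]) / (xs[1] - xs[0])
right_slope = (ys[3] - ys[2]) / (xs[3] - xs[2])
print(np.isclose(right_slope - left_slope, delta_slopes.sum()))   # True
```
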
  4. Abstract

    Subgrid processes in global climate models are represented by parameterizations, which are a major source of uncertainty in simulations of climate. In recent years, it has been suggested that machine‐learning (ML) parameterizations based on high‐resolution model output data could be superior to traditional parameterizations. Currently, both traditional and ML parameterizations of subgrid processes in the atmosphere are based on a single‐column approach, which uses information only from individual atmospheric columns. However, single‐column parameterizations might not be ideal, since certain atmospheric phenomena, such as organized convective systems, can cross multiple grid boxes and involve slantwise circulations that are not purely vertical. Here we train neural networks (NNs) using non‐local inputs spanning 3 × 3 columns. We find that including the non‐local inputs improves the offline prediction of a range of subgrid processes. The improvement is especially notable for subgrid momentum transport and for atmospheric conditions associated with mid‐latitude fronts and convective instability. Using an interpretability method, we find that the NN improvements partly rely on using the horizontal wind divergence, and we further show that including the divergence or vertical velocity as a separate input substantially improves offline performance. However, non‐local winds continue to be useful inputs for parameterizing subgrid momentum transport even when the vertical velocity is included as an input. Overall, our results imply that the use of non‐local variables and the vertical velocity as inputs could improve the performance of ML parameterizations, and the use of these inputs should be tested in online simulations in future work.
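
As a rough sketch of what the non-local inputs look like in practice, the snippet below builds, for a synthetic gridded variable, an input vector from the 3 × 3 block of atmospheric columns around a target column instead of from the single column alone. The grid sizes, boundary handling (longitude wraps, latitude is clipped), and the variable itself are illustrative assumptions rather than the configuration used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)
nlat, nlon, nlev = 32, 64, 30
state = rng.normal(size=(nlat, nlon, nlev))     # one gridded input variable

def stencil_inputs(field, i, j):
    """Stack the 3 x 3 block of columns centred on (i, j) into one input vector.
    Longitude wraps around; latitude indices are clipped at the poles."""
    rows = np.clip([i - 1, i, i + 1], 0, field.shape[0] - 1)
    cols = [(j - 1) % field.shape[1], j, (j + 1) % field.shape[1]]
    block = field[np.ix_(rows, cols)]            # shape (3, 3, nlev)
    return block.reshape(-1)                     # length 9 * nlev

x_single = state[10, 20]                     # single-column input: (nlev,)
x_nonlocal = stencil_inputs(state, 10, 20)   # non-local input: (9 * nlev,)
print(x_single.shape, x_nonlocal.shape)      # (30,) (270,)
```

A NN trained on such stencil inputs (built for every column and time step) can see horizontal structure, such as wind divergence across the stencil, that a single-column NN cannot.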

     
  5. Machine learning (ML) methods, such as artificial neural networks (ANN), k-nearest neighbors (kNN), random forests (RF), support vector machines (SVM), and boosted decision trees (DTs), may offer stronger predictive performance than more traditional, parametric methods, such as linear regression, multiple linear regression, and logistic regression (LR), for specific mapping and modeling tasks. However, this increased performance is often accompanied by increased model complexity and decreased interpretability, resulting in critiques of their “black box” nature and highlighting the need for algorithms that can offer both strong predictive performance and interpretability. This is especially true when the global model and the predictions for specific data points need to be explainable in order for the model to be of use. Explainable boosting machines (EBMs), an augmentation and refinement of generalized additive models (GAMs), have been proposed as an empirical modeling method that offers both interpretable results and strong predictive performance. The trained model can be graphically summarized as a set of functions relating each predictor variable to the dependent variable, along with heat maps representing interactions between selected pairs of predictor variables. In this study, we assess EBMs for predicting the likelihood or probability of slope failure occurrence based on digital terrain characteristics in four separate Major Land Resource Areas (MLRAs) in the state of West Virginia, USA, and compare the results to those obtained with LR, kNN, RF, and SVM. EBM provided predictive accuracies comparable to RF and SVM and better than LR and kNN. The generated functions and visualizations for each predictor variable and for the included interactions between pairs of predictor variables, the estimates of variable importance based on mean absolute scores, and the per-predictor scores provided for new predictions add interpretability, but additional work is needed to quantify how these outputs may be affected by variable correlation, the inclusion of interaction terms, and large feature spaces. Further exploration of EBMs is merited for geohazard mapping and modeling in particular, and for spatial predictive mapping and modeling in general, especially when the value or use of the resulting predictions would be greatly enhanced by improved global interpretability and by the availability of prediction explanations at each cell or aggregating unit within the mapped or modeled extent.
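
A minimal sketch of the EBM workflow, assuming the open-source interpret (InterpretML) package and entirely synthetic terrain predictors and slope-failure labels; the feature names, data, and settings below are placeholders rather than the study's dataset or tuning.

```python
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(4)
n = 2_000
X = pd.DataFrame({
    "slope_deg": rng.uniform(0, 45, n),           # hypothetical terrain derivatives
    "plan_curvature": rng.normal(size=n),
    "dist_to_stream_m": rng.uniform(0, 500, n),
})
# Synthetic binary label loosely tied to slope, only so the example runs end to end.
y = (X["slope_deg"] + 5 * rng.normal(size=n) > 30).astype(int)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Probabilistic predictions plus global and local explanations: one additive
# shape function per predictor (with heat maps for learned pairwise interactions),
# and per-term contribution scores for individual predictions.
print(ebm.predict_proba(X.iloc[:5]))
global_expl = ebm.explain_global()
local_expl = ebm.explain_local(X.iloc[:5], y.iloc[:5])
```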