- Publication Date:
- NSF-PAR ID: 10339872
- Journal Name: Frontiers in Neuroscience
- Volume: 16
- ISSN: 1662-453X
- Sponsoring Org: National Science Foundation
More Like this
-
Network embedding has been an effective tool for analyzing heterogeneous networks (HNs) by representing nodes in a low-dimensional space. Although many methods have recently been proposed for representation learning of HNs, there is still much room for improvement. Random-walk-based methods are currently popular for learning network embeddings; however, they are stochastic, limited by the length of the sampled walks, and have difficulty capturing network structural information. Some recent studies have proposed using meta-paths to express sample relationships in HNs. Another popular graph learning model, the graph convolutional network (GCN), is known to exploit network topology more effectively, but the current design of GCN is intended for homogeneous networks. This paper proposes a novel combination of meta-graphs and graph convolution, the meta-graph based graph convolutional network (MGCN). To fully capture complex, long-range semantic information, MGCN utilizes different meta-graphs in HNs. As different meta-graphs express different semantic relationships, MGCN learns the weights of the different meta-graphs to make up for the loss of semantics when applying GCN. In addition, we improve the current convolution design by adding node self-significance. To validate our model in learning feature representations, we present comprehensive experiments on four real-world datasets …
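As a rough illustration of the weighted meta-graph convolution with node self-significance described above, the following PyTorch sketch combines several pre-computed meta-graph adjacency matrices with learned weights; the class name `MetaGraphGCN`, the dense adjacency inputs, and the softmax weighting are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code), assuming each meta-graph has already been
# materialized as a dense [N, N] adjacency matrix over the same node set.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaGraphGCN(nn.Module):
    """Combine several meta-graph adjacencies with learned weights, then convolve."""
    def __init__(self, in_dim, out_dim, num_metagraphs):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # One learnable importance score per meta-graph.
        self.metagraph_logits = nn.Parameter(torch.zeros(num_metagraphs))

    def forward(self, x, adjs):
        # x: [N, in_dim] node features; adjs: list of [N, N] meta-graph adjacencies.
        alpha = F.softmax(self.metagraph_logits, dim=0)
        n = x.size(0)
        # Add self-loops so each node's own features contribute ("self-significance").
        combined = sum(a * (adj + torch.eye(n)) for a, adj in zip(alpha, adjs))
        deg = combined.sum(dim=1, keepdim=True).clamp(min=1e-6)
        # Row-normalized propagation followed by a linear transform and nonlinearity.
        return F.relu((combined / deg) @ self.weight(x))
```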
-
Machine learning (ML) methods for causal inference have gained popularity due to their flexibility in predicting the outcome model and the propensity score. In this article, we provide a within-group approach for ML-based causal inference methods in order to robustly estimate average treatment effects (ATEs) in multilevel studies when there is cluster-level unmeasured confounding. We focus on one particular ML-based causal inference method, targeted maximum likelihood estimation (TMLE) with an ensemble learner called SuperLearner. Through our simulation studies, we observe that training TMLE within groups of similar clusters helps remove bias from cluster-level unmeasured confounders. Also, using within-group propensity scores estimated from fixed-effects logistic regression increases the robustness of the proposed within-group TMLE method. Even if the propensity scores are partially misspecified, the within-group TMLE still produces robust ATE estimates due to double robustness with flexible modeling, unlike parametric inverse propensity weighting methods. We demonstrate our proposed methods and conduct sensitivity analyses with respect to the number of groups and individual-level unmeasured confounding to evaluate the effect of taking an eighth-grade algebra course on math achievement in the Early Childhood Longitudinal Study.
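The within-group idea can be sketched with a simplified doubly robust (AIPW) estimator standing in for TMLE/SuperLearner; the column names, the plain logistic-regression propensity model, and the gradient-boosted outcome models below are illustrative assumptions rather than the paper's estimator.

```python
# A minimal sketch of within-group estimation, using AIPW as a stand-in for TMLE.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

def groupwise_dr_ate(df, group_col, treat_col, outcome_col, covariates):
    """Estimate an ATE separately within each group of similar clusters."""
    estimates = {}
    for g, sub in df.groupby(group_col):
        X = sub[covariates].values
        a = sub[treat_col].values          # binary treatment indicator
        y = sub[outcome_col].values
        # Propensity scores estimated within the group (plain logistic regression here;
        # the paper uses fixed-effects logistic regression).
        ps = LogisticRegression(max_iter=1000).fit(X, a).predict_proba(X)[:, 1]
        ps = np.clip(ps, 0.01, 0.99)
        # Flexible outcome models for the treated and control arms (assumes both arms
        # are observed within the group).
        m1 = GradientBoostingRegressor().fit(X[a == 1], y[a == 1]).predict(X)
        m0 = GradientBoostingRegressor().fit(X[a == 0], y[a == 0]).predict(X)
        # Doubly robust (AIPW) ATE estimate for this group.
        estimates[g] = np.mean(m1 - m0 + a * (y - m1) / ps - (1 - a) * (y - m0) / (1 - ps))
    return estimates
```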
-
Machine learning (ML) methods, such as artificial neural networks (ANNs), k-nearest neighbors (kNN), random forests (RF), support vector machines (SVMs), and boosted decision trees (DTs), may offer stronger predictive performance than more traditional, parametric methods, such as linear regression, multiple linear regression, and logistic regression (LR), for specific mapping and modeling tasks. However, this increased performance is often accompanied by increased model complexity and decreased interpretability, resulting in critiques of their “black box” nature, which highlights the need for algorithms that can offer both strong predictive performance and interpretability. This is especially true when the global model and predictions for specific data points need to be explainable in order for the model to be of use. Explainable boosting machines (EBMs), an augmentation and refinement of generalized additive models (GAMs), have been proposed as an empirical modeling method that offers both interpretable results and strong predictive performance. The trained model can be graphically summarized as a set of functions relating each predictor variable to the dependent variable, along with heat maps representing interactions between selected pairs of predictor variables. In this study, we assess EBMs for predicting the likelihood or probability of slope failure occurrence based on digital terrain characteristics in …
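A minimal sketch of fitting an EBM, here using the InterpretML library rather than the study's own workflow; the terrain features, input file, and interaction count are hypothetical placeholders.

```python
# Fit an EBM classifier and inspect its global explanation (shape functions + interactions).
import pandas as pd
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier

df = pd.read_csv("terrain_samples.csv")   # hypothetical table of terrain metrics per cell
features = ["slope", "plan_curvature", "profile_curvature", "topographic_wetness_index"]
X, y = df[features], df["slope_failure"]  # y: 1 = mapped failure, 0 = no failure

ebm = ExplainableBoostingClassifier(interactions=10)  # allow pairwise interaction terms
ebm.fit(X, y)

# Global explanation: one shape function per predictor plus interaction heat maps.
show(ebm.explain_global())
```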
-
A brain-computer interface (BCI) actively translates brain signals into executable actions by establishing direct communication between the human brain and external devices. Brain activity recorded through electroencephalography (EEG) is generally contaminated with both physiological and non-physiological artifacts, which significantly hinder BCI performance. Artifact subspace reconstruction (ASR) is a well-known statistical technique that automatically removes artifact components by determining a rejection threshold based on an initial reference EEG segment in multichannel EEG recordings. In real-world applications, the fixed threshold may limit the efficacy of the artifact correction, especially when the quality of the reference data is poor. This study proposes an adaptive online ASR technique that integrates Hebbian/anti-Hebbian neural networks into the ASR algorithm, namely principal subspace projection ASR (PSP-ASR) and principal subspace whitening ASR (PSW-ASR), which segment-wise self-organize the artifact subspace by updating the synaptic weights according to Hebbian and anti-Hebbian learning rules. The effectiveness of the proposed algorithm is compared to conventional ASR approaches on a benchmark EEG dataset and three BCI frameworks, including steady-state visual evoked potential (SSVEP), rapid serial visual presentation (RSVP), and motor imagery (MI), by evaluating the root-mean-square error (RMSE), the signal-to-noise ratio (SNR), the Pearson correlation, and classification accuracy. The results …
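The segment-wise Hebbian/anti-Hebbian update can be sketched schematically as an online whitening network; the specific update rule, learning rate, and segment shapes below are illustrative assumptions and not the PSP-ASR/PSW-ASR algorithm itself.

```python
# A schematic NumPy sketch of a Hebbian/anti-Hebbian whitening update applied per EEG segment.
import numpy as np

def hebbian_whitening_update(W, M, segment, lr=1e-3):
    """segment: [channels, samples] EEG chunk; W feedforward, M lateral weight matrices."""
    X = segment - segment.mean(axis=1, keepdims=True)   # remove per-channel offset
    Y = np.linalg.solve(M, W @ X)                        # network output (approx. whitened)
    T = X.shape[1]
    W += lr * (Y @ X.T / T - W)                          # Hebbian update toward input-output correlation
    M += lr * (Y @ Y.T / T - M)                          # anti-Hebbian update toward output decorrelation
    return W, M, Y

# Usage: initialize with identity matrices and feed consecutive segments from the stream.
channels = 8
W, M = np.eye(channels), np.eye(channels)
for segment in np.random.randn(50, channels, 250):       # stand-in for streamed EEG segments
    W, M, cleaned = hebbian_whitening_update(W, M, segment)
```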
-
New time-domain surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Time, will observe millions of transient alerts each night, making standard approaches of visually identifying new and interesting transients infeasible. We present two novel methods of automatically detecting anomalous transient light curves in real time. Both methods are based on the simple idea that if the light curves from a known population of transients can be accurately modelled, any deviations from model predictions are likely anomalies. The first modelling approach is a probabilistic neural network built using Temporal Convolutional Networks (TCNs) and the second is an interpretable Bayesian parametric model of a transient. We demonstrate our methods’ ability to provide anomaly scores as a function of time on light curves from the Zwicky Transient Facility. We show that the flexibility of neural networks, the attribute that makes them such a powerful tool for many regression tasks, is what makes them less suitable for anomaly detection when compared with our parametric model. The parametric model is able to identify anomalies with respect to common supernova classes with high precision and recall scores, achieving areas under the precision-recall curves above 0.79 for most rare classes such as …
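The deviation-based scoring idea can be sketched as follows, assuming a model that returns a predictive mean and standard deviation at each epoch; the chi-square-style running score is an illustrative stand-in for the paper's actual anomaly score.

```python
# Score each light curve by how far observed fluxes deviate from model predictions.
import numpy as np

def anomaly_score(flux, flux_err, pred_mean, pred_std):
    """Running anomaly score as a function of time for one light curve."""
    var = flux_err**2 + pred_std**2            # combine measurement and model uncertainty
    chi2 = (flux - pred_mean)**2 / var         # squared normalized residual per observation
    # Cumulative mean deviation up to each epoch gives a real-time score that grows
    # when the transient departs from the behaviour of the known population.
    return np.cumsum(chi2) / np.arange(1, len(chi2) + 1)
```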