Successful modeling of degradation data is of great importance for both accurate reliability assessment and effective maintenance decision-making. Many existing degradation performance modeling approaches either assume a homogeneous population of units or characterize a heterogeneous population under restrictive assumptions, such as a pre-specified number of sub-populations. This paper proposes a Bayesian heterogeneous degradation performance modeling framework that relaxes these conventional assumptions. Specifically, a Bayesian non-parametric model formulation and learning algorithm are proposed to characterize historical degradation data from a heterogeneous population of units with an unknown number of homogeneous sub-populations, allowing joint model estimation and identification of the number of sub-populations. Based on the off-line population-level model, an on-line individual-level degradation model with sequential model updating is further developed to improve remaining useful life prediction for individual units with sparse data. A real case study using heterogeneous degradation data from deteriorating roads illustrates the proposed approach and demonstrates its validity.
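The abstract does not name the specific non-parametric prior, so the sketch below should be read as one plausible realization rather than the paper's model: a Dirichlet-process (Chinese-restaurant-process) mixture over per-unit degradation rates, fit with a collapsed Gibbs sampler under a conjugate Normal model, so that the number of homogeneous sub-populations is inferred rather than pre-specified. Data, hyperparameters, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: per-unit degradation rates drawn from two latent sub-populations.
rates = np.concatenate([rng.normal(0.5, 0.05, 30), rng.normal(1.2, 0.05, 20)])

# Assumed hyperparameters: DP concentration, prior on sub-population mean, noise variance.
alpha, mu0, tau2, sigma2 = 1.0, 0.0, 1.0, 0.05 ** 2

z = np.zeros(len(rates), dtype=int)          # initial cluster assignments

def predictive(x, members):
    """Posterior predictive density N(x | cluster members) under a Normal-Normal model."""
    n = len(members)
    prec = 1.0 / tau2 + n / sigma2
    mean = (mu0 / tau2 + np.sum(members) / sigma2) / prec
    var = 1.0 / prec + sigma2
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(200):                         # collapsed Gibbs sweeps
    for i, x in enumerate(rates):
        z[i] = -1                            # remove unit i from its current cluster
        labels = [k for k in np.unique(z) if k >= 0]
        probs = [np.sum(z == k) * predictive(x, rates[z == k]) for k in labels]
        probs.append(alpha * predictive(x, np.array([])))   # open a new sub-population
        probs = np.array(probs) / np.sum(probs)
        choice = rng.choice(len(probs), p=probs)
        z[i] = labels[choice] if choice < len(labels) else (max(labels, default=-1) + 1)

print("inferred number of sub-populations:", len(np.unique(z)))
```

Under these assumptions the sampler lets the data decide how many sub-populations are present, which is the property the paper exploits for joint model estimation and sub-population identification.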
- NSF-PAR ID: 10110945
- Date Published:
- Journal Name: IEEE Transactions on Reliability
- ISSN: 0018-9529
- Page Range / eLocation ID: 1 to 11
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Data integration combining a probability sample with another nonprobability sample is an emerging area of research in survey sampling. We consider the case when the study variable of interest is measured only in the nonprobability sample, but comparable auxiliary information is available for both data sources. We consider mass imputation for the probability sample using the nonprobability data as the training set for imputation. The parametric mass imputation is sensitive to parametric model assumptions. To develop improved and robust methods, we consider nonparametric mass imputation for data integration. In particular, we consider kernel smoothing for a low-dimensional covariate and generalized additive models for a relatively high-dimensional covariate for imputation. Asymptotic theories and variance estimation are developed. Simulation studies and real applications show the benefits of our proposed methods over parametric counterparts.
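As a minimal sketch of the kernel-smoothing variant of mass imputation, assuming a single covariate, a Gaussian kernel, and a fixed bandwidth: the regression function is estimated from the nonprobability sample, the study variable is imputed for the probability sample, and the design weights are applied to the imputed values. Sample sizes, bandwidth, and variable names are illustrative, and the paper's refinements (bandwidth selection, generalized additive models, variance estimation) are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonprobability sample: study variable y observed together with covariate x.
x_np = rng.uniform(0, 1, 500)
y_np = np.sin(2 * np.pi * x_np) + rng.normal(0, 0.2, 500)

# Probability sample: only x and design weights are available.
x_p = rng.uniform(0, 1, 200)
w_p = np.full(200, 5.0)                      # illustrative design weights

def nw_smoother(x0, x, y, h=0.05):
    """Nadaraya-Watson kernel regression estimate m(x0) with a Gaussian kernel."""
    k = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (k @ y) / k.sum(axis=1)

# Mass imputation: train the smoother on the nonprobability data,
# impute y for the probability sample, then apply the design weights.
y_imp = nw_smoother(x_p, x_np, y_np)
mu_hat = np.sum(w_p * y_imp) / np.sum(w_p)   # weighted mean of imputed values
print("mass-imputation estimate of the population mean:", round(mu_hat, 3))
```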
The noniterative conditional expectation (NICE) parametric g-formula can be used to estimate the causal effect of sustained treatment strategies. In addition to identifiability conditions, the validity of the NICE parametric g-formula generally requires the correct specification of models for time-varying outcomes, treatments, and confounders at each follow-up time point. An informal approach for evaluating model specification is to compare the observed distributions of the outcome, treatments, and confounders with their parametric g-formula estimates under the “natural course.” In the presence of loss to follow-up, however, the observed and natural-course risks can differ even if the identifiability conditions of the parametric g-formula hold and there is no model misspecification. Here, we describe 2 approaches for evaluating model specification when using the parametric g-formula in the presence of censoring: 1) comparing factual risks estimated by the g-formula with nonparametric Kaplan-Meier estimates and 2) comparing natural-course risks estimated by inverse probability weighting with those estimated by the g-formula. We also describe how to correctly compute natural-course estimates of time-varying covariate means when using a computationally efficient g-formula algorithm. We evaluate the proposed methods via simulation and implement them to estimate the effects of dietary interventions in 2 cohort studies.
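The sketch below only illustrates the first diagnostic in a stripped-down setting with no treatments or time-varying confounders: a discrete-time (pooled logistic) hazard model stands in for the parametric g-formula's outcome model, and its implied factual risk is compared with a nonparametric Kaplan-Meier estimate computed from the same censored data. The data-generating mechanism and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
K, n = 5, 2000
haz = np.array([0.05, 0.08, 0.10, 0.12, 0.15])    # illustrative discrete-time hazards
p_cens = 0.10                                     # illustrative censoring probability

# Simulate observed follow-up: event or censoring in discrete intervals 1..K.
rows_t, rows_y = [], []
time = np.full(n, K)
event = np.zeros(n, dtype=int)
for i in range(n):
    for t in range(1, K + 1):
        y = int(rng.random() < haz[t - 1])
        rows_t.append(t)                          # person-time record for the hazard model
        rows_y.append(y)
        if y:
            time[i], event[i] = t, 1
            break
        if rng.random() < p_cens:                 # censored at the end of interval t
            time[i] = t
            break

# 1) Nonparametric Kaplan-Meier risk at K.
surv = 1.0
for t in range(1, K + 1):
    at_risk = np.sum(time >= t)
    d = np.sum((time == t) & (event == 1))
    surv *= 1 - d / at_risk
km_risk = 1 - surv

# 2) Model-based risk from a pooled logistic (discrete-time hazard) model.
T = np.array(rows_t)
X = (T[:, None] == np.arange(1, K + 1)).astype(float)   # time-indicator design matrix
model = LogisticRegression(fit_intercept=False, C=1e6).fit(X, np.array(rows_y))
h_hat = model.predict_proba(np.eye(K))[:, 1]            # estimated hazard per interval
model_risk = 1 - np.prod(1 - h_hat)

print(f"Kaplan-Meier risk: {km_risk:.3f}  model-based risk: {model_risk:.3f}")
```

When the hazard model is correctly specified the two risks should agree up to sampling error, which is exactly the kind of check the first diagnostic formalizes.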
Graphical models such as Markov random fields (MRFs), which are associated with undirected graphs, and Bayesian networks (BNs), which are associated with directed acyclic graphs, have proven to be a very popular approach for reasoning under uncertainty, prediction problems, and causal inference. Parametric MRF likelihoods are well-studied for Gaussian and categorical data. However, in more complicated parametric and semi-parametric settings, likelihoods specified via clique potential functions are generally not known to be congenial (jointly well-specified) or non-redundant. Congenial and non-redundant DAG likelihoods are far simpler to specify in both parametric and semi-parametric settings by modeling Markov factors in the DAG factorization. However, DAG likelihoods specified in this way are not guaranteed to coincide for distinct DAGs within the same Markov equivalence class. This complicates likelihood-based model selection procedures for DAGs by “sneaking in” potentially unwarranted assumptions about edge orientations. In this paper we link a density function decomposition due to Chen with the clique factorization of MRFs described by Lauritzen to provide a general likelihood for MRF models. The proposed likelihood is composed of variationally independent, non-redundant, closed-form functionals of the observed data distribution and is sufficiently general to apply to arbitrary parametric and semi-parametric models. We use an extension of our developments to give a general likelihood for DAG models that is guaranteed to coincide for all members of a Markov equivalence class. Our results have direct applications for model selection and semi-parametric inference.
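This is not the authors' construction; it only makes concrete the "modeling Markov factors in the DAG factorization" that the comparison starts from, using a three-node linear-Gaussian DAG with illustrative coefficients and checking that the factorized log-likelihood matches the implied joint Gaussian density.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Illustrative linear-Gaussian DAG: X1 -> X2, X1 -> X3, X2 -> X3.
a, b, c = 0.8, 0.3, -0.5          # edge coefficients (illustrative)
s1, s2, s3 = 1.0, 0.7, 0.5        # residual standard deviations

def dag_loglik(x):
    """Log-likelihood via the DAG factorization: sum of Markov factors p(x_i | pa_i)."""
    x1, x2, x3 = x
    return (norm.logpdf(x1, 0.0, s1)
            + norm.logpdf(x2, a * x1, s2)
            + norm.logpdf(x3, b * x1 + c * x2, s3))

# The same model written as a linear SEM x = B x + e implies a joint Gaussian.
B = np.array([[0, 0, 0],
              [a, 0, 0],
              [b, c, 0]], dtype=float)
D = np.diag([s1 ** 2, s2 ** 2, s3 ** 2])
A = np.linalg.inv(np.eye(3) - B)
Sigma = A @ D @ A.T

x = np.array([0.4, -1.1, 0.6])
print(dag_loglik(x))                                                  # factorized density
print(multivariate_normal.logpdf(x, mean=np.zeros(3), cov=Sigma))     # matches the joint
```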
In this study, a new high-latitude empirical model is introduced, named for Auroral energy Spectrum and High-Latitude Electric field variabilitY (ASHLEY). This model improves specifications of soft electron precipitation and electric field variability that are not well represented in existing high-latitude empirical models. ASHLEY consists of three components, ASHLEY-A, ASHLEY-E, and ASHLEY-Evar, which are developed based on the electron precipitation and bulk ion drift measurements from the Defense Meteorological Satellite Program (DMSP) satellites during the most recent solar cycle. On the one hand, unlike most existing high-latitude electron precipitation models, which make assumptions about the energy spectrum of incident electrons, the electron precipitation component of ASHLEY, ASHLEY-A, provides the differential energy fluxes in the 19 DMSP energy channels under different geophysical conditions without making any assumptions about the energy spectrum. Relaxing the spectral assumptions has been found to improve soft electron precipitation specifications with respect to a Maxwellian spectrum by up to several orders of magnitude. On the other hand, ASHLEY provides consistent mean electric field and electric field variability under different geophysical conditions through its ASHLEY-E and ASHLEY-Evar components, respectively. This is different from most existing electric field models, which focus only on the large-scale mean electric field and ignore the electric field variability. Furthermore, the consistency between the electric field and electron precipitation is better taken into account in ASHLEY.
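ASHLEY-A itself is an empirical, channel-by-channel model; as a point of contrast, the sketch below only shows what a Maxwellian spectral assumption looks like when mapped onto a DMSP-like grid of log-spaced energy channels. The channel grid (19 channels spanning roughly 30 eV to 30 keV), the characteristic energy, the total energy flux, and the discrete normalization are all illustrative assumptions, not values from ASHLEY.

```python
import numpy as np

# Illustrative, approximate DMSP-like grid: 19 log-spaced channels, ~30 eV to ~30 keV.
E = np.logspace(np.log10(30.0), np.log10(30e3), 19)   # channel center energies [eV]

Q_total = 1.0e12   # assumed total energy flux (arbitrary units, illustrative)
E0 = 150.0         # assumed characteristic (e-folding) energy [eV], i.e. "soft" precipitation

# Maxwellian differential number flux shape: j(E) proportional to E * exp(-E / E0).
shape = E * np.exp(-E / E0)

# Discrete normalization so the channel-summed energy flux equals Q_total.
k = Q_total / np.sum(E * shape)
diff_energy_flux = k * E * shape             # channel-level differential energy flux

for e, f in zip(E, diff_energy_flux):
    print(f"E = {e:9.1f} eV   Maxwellian differential energy flux = {f:.3e}")
```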