

Title: Hybrid Machine Learning EDFA Model
Abstract: A hybrid machine learning (HML) model combining a-priori and a-posteriori knowledge is implemented and tested, and is shown to reduce prediction error and training complexity compared to a purely analytical model or a neural network model.
Award ID(s):
1650669
NSF-PAR ID:
10133274
Author(s) / Creator(s):
Date Published:
Journal Name:
Optical Fiber Communication Conference
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Hybrid Knowledge‐Guided Machine Learning (KGML) models, which are deep learning models that utilize scientific theory and process‐based model simulations, have shown improved performance over their process‐based counterparts for the simulation of water temperature and hydrodynamics. We highlight the modular compositional learning (MCL) methodology as a novel design choice for the development of hybrid KGML models, in which the model is decomposed into modular sub‐components that can be process‐based models and/or deep learning models. We develop a hybrid MCL model that integrates a deep learning model into a modularized, process‐based model. To achieve this, we first train individual deep learning models on the output of the process‐based models. In a second step, we fine‐tune one deep learning model with observed field data. In this study, we replace process‐based calculations of vertical diffusive transport with deep learning. Finally, this fine‐tuned deep learning model is integrated into the process‐based model, creating the hybrid MCL model with improved overall projections for water temperature dynamics compared to the original process‐based model. We further compare the performance of the hybrid MCL model with the process‐based model and two alternative deep learning models, and highlight how the hybrid MCL model has the best performance for projecting water temperature, Schmidt stability, buoyancy frequency, and depths of different isotherms. Modular compositional learning can be applied to existing modularized, process‐based model structures to make projections more robust and to improve model performance by letting deep learning estimate uncertain process calculations.

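The three-step workflow in this abstract (pretrain learned modules on process-based output, fine-tune one on field observations, then substitute it into the modular model) can be sketched as below. This is a minimal illustrative toy, not the paper's lake model: a least-squares linear map stands in for the deep learning module, a one-dimensional finite-difference step stands in for the process-based vertical diffusive transport, and all shapes and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_process_based(temps, kz=0.05):
    """Process-based sub-component: explicit finite-difference
    vertical diffusion over a 10-layer temperature profile."""
    t = temps.copy()
    t[1:-1] += kz * (temps[:-2] - 2 * temps[1:-1] + temps[2:])
    return t

class LearnedModule:
    """Stand-in for the deep learning sub-component: a linear map fit by
    least squares. fit() is used both to pretrain on simulator output
    (step 1) and to fine-tune on observed profiles (step 2)."""
    def fit(self, X, Y):
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return self

    def __call__(self, x):
        return x @ self.W

# Step 1: pretrain the module on output of the process-based routine.
profiles = rng.normal(15.0, 2.0, size=(200, 10))   # synthetic depth profiles
targets = np.stack([diffusion_process_based(p) for p in profiles])
module = LearnedModule().fit(profiles, targets)

# Step 3: hybrid MCL model = the modular process-based model with the
# learned module swapped in for the vertical-diffusion calculation.
def hybrid_step(temps, surface_flux=0.1):
    t = module(temps)        # learned vertical diffusive transport
    t[0] += surface_flux     # the remaining physics stays process-based
    return t

out = hybrid_step(profiles[0])
```

Because the toy diffusion operator is linear, the least-squares module recovers it essentially exactly; the nonlinear processes the paper targets are why a deep learning module is used there instead.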
  2. A hybrid machine learning (HML) model combining a-priori and a-posteriori knowledge is implemented and tested, and is shown to reduce prediction error and training complexity compared to a purely analytical model or a neural network model.
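The a-priori/a-posteriori combination this abstract describes can be sketched as follows: an analytical gain formula provides the a-priori part, and a data-driven correction fit only to the residual provides the a-posteriori part. Everything below is invented for illustration (the saturable-gain formula, the sinusoidal "ripple", and the polynomial residual fit standing in for the paper's neural network); the actual EDFA model and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def analytical_gain(p_in_dbm, g0=20.0, p_sat=10.0):
    """A-priori knowledge: a simple saturable-gain formula (illustrative)."""
    return g0 / (1.0 + 10 ** ((p_in_dbm - p_sat) / 10.0))

# Synthetic "measured" gains deviate from the analytical curve by an
# unmodelled ripple (invented here to play the role of real EDFA data).
p_in = rng.uniform(-30.0, 5.0, size=300)
measured = analytical_gain(p_in) + 0.3 * np.sin(0.2 * p_in)

# A-posteriori knowledge: fit only the residual between measurement and
# theory. A degree-7 polynomial stands in for the paper's neural network.
z = (p_in + 12.5) / 17.5     # rescale inputs to [-1, 1] for conditioning
coeffs = np.polyfit(z, measured - analytical_gain(p_in), deg=7)

def hybrid_gain(p):
    return analytical_gain(p) + np.polyval(coeffs, (p + 12.5) / 17.5)

# Compare worst-case error of the purely analytical vs. the hybrid model.
grid = np.linspace(-30.0, 5.0, 200)
truth = analytical_gain(grid) + 0.3 * np.sin(0.2 * grid)
err_analytical = np.abs(analytical_gain(grid) - truth).max()
err_hybrid = np.abs(hybrid_gain(grid) - truth).max()
```

Because the data-driven part only has to learn the residual rather than the full input-output map, it needs much less capacity and training data, which is the intuition behind the reduced training complexity claimed in the abstract.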
  3.
    This paper presents a framework to refine identified artificial neural network (ANN) based state-space linear parameter-varying (LPV-SS) models with closed-loop data using online transfer learning. An LPV-SS model is first identified offline using input/output data, and a model predictive controller (MPC) is designed based on this model. Using collected closed-loop batch data, the model is further refined through online transfer learning, improving control performance. Specifically, fine-tuning, a transfer learning technique, is employed to improve the model. Furthermore, the scenario where the offline-identified model and the online controlled system are "similar but not identical" is discussed. The proposed method is verified on an experimentally validated high-fidelity reactivity controlled compression ignition (RCCI) engine model. The verification results show that the new online transfer learning technique, combined with an adaptive MPC law, improves engine control performance, tracking requested engine loads and desired combustion phasing with minimal error.
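A heavily simplified sketch of the refinement idea: an offline-identified state-space model is improved with batch data collected from the real, "similar but not identical" system. Here a plain least-squares re-estimate of a small linear time-invariant model stands in for the paper's neural-network LPV-SS fine-tuning; the plant, dimensions, and mismatch are all invented, and no controller is included.

```python
import numpy as np

rng = np.random.default_rng(2)

# The real plant ("similar but not identical" to the offline model).
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.0],
                   [0.5]])

# Offline-identified model: a slightly perturbed copy of the true dynamics.
A_hat = A_true + 0.05 * rng.normal(size=(2, 2))

def simulate(A, B, u_seq, x0):
    """Roll the linear state-space model x_{k+1} = A x_k + B u_k forward."""
    xs = [x0]
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ u)
    return np.array(xs)

# Batch data collected from the real plant during operation.
u_seq = rng.normal(size=(100, 1))
X = simulate(A_true, B_true, u_seq, np.zeros(2))

# "Fine-tuning": re-estimate (A, B) from the batch by least squares on
# regressors [x_k, u_k] -> x_{k+1}.
Z = np.hstack([X[:-1], u_seq])
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_new, B_new = Theta[:2].T, Theta[2:].T

err_before = np.linalg.norm(A_hat - A_true)   # offline model mismatch
err_after = np.linalg.norm(A_new - A_true)    # refined model mismatch
```

With the refined (A_new, B_new), an MPC's internal predictions would match the plant more closely, which is the mechanism by which model refinement improves tracking performance in the paper.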
  4. Transfer learning, where the goal is to transfer a well-trained deep learning model from a primary source task to a new task, is a crucial learning scheme for on-device machine learning, because IoT/edge devices collect and then process massive data in our daily life. However, due to the tiny memory constraint of IoT/edge devices, such on-device learning requires an ultra-small training memory footprint, bringing new challenges for memory-efficient learning. Many existing works solve this problem by reducing the number of trainable parameters. However, this does not directly translate into memory savings, since the major bottleneck is the activations, not the parameters. To develop memory-efficient on-device transfer learning, in this work we are the first to approach transfer learning from the new perspective of intermediate feature reprogramming of a pre-trained model (i.e., the backbone). To perform this lightweight and memory-efficient reprogramming, we propose to train a tiny Reprogramming Network (Rep-Net) directly on the new task's input data while freezing the backbone model. The proposed Rep-Net interchanges features with the backbone model through an activation connector at regular intervals, so that the backbone and Rep-Net features mutually benefit each other. Through extensive experiments, we validate each design choice of the proposed Rep-Net in achieving highly memory-efficient on-device reprogramming. Our experiments establish the superior performance (i.e., low training memory and high accuracy) of Rep-Net compared to SOTA on-device transfer learning schemes across multiple benchmarks.
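The architecture this abstract describes (a frozen backbone, a tiny trainable Rep-Net, and feature interchange through activation connectors) can be sketched structurally as below. All layer sizes, the linear-layer "backbone", and the projection-based connector are assumptions for illustration, not the paper's actual design; training is omitted, and the point is only that the trainable parameter count stays small relative to the frozen backbone.

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda x: np.maximum(x, 0.0)

# Frozen pre-trained backbone: two fixed layers, weights never updated.
W1 = rng.normal(size=(8, 64)) * 0.1
W2 = rng.normal(size=(64, 64)) * 0.1

class RepNet:
    """Tiny trainable network running alongside the frozen backbone.
    At the activation connector the two feature streams are exchanged
    (added into each other through small trainable projections)."""
    def __init__(self):
        self.R1 = rng.normal(size=(8, 8)) * 0.1      # trainable
        self.R2 = rng.normal(size=(8, 8)) * 0.1      # trainable
        self.P_in = rng.normal(size=(64, 8)) * 0.1   # backbone -> Rep-Net
        self.P_out = rng.normal(size=(8, 64)) * 0.1  # Rep-Net -> backbone
        self.head = rng.normal(size=(72, 1)) * 0.1   # new-task head

    def forward(self, x):
        b = relu(x @ W1)             # backbone stage 1 (frozen)
        r = relu(x @ self.R1)        # Rep-Net stage 1 (trainable)
        b, r = b + r @ self.P_out, r + b @ self.P_in  # activation connector
        b = relu(b @ W2)             # backbone stage 2 (frozen)
        r = relu(r @ self.R2)        # Rep-Net stage 2 (trainable)
        return np.hstack([b, r]) @ self.head          # fused features -> head

net = RepNet()
out = net.forward(rng.normal(size=(5, 8)))

n_frozen = W1.size + W2.size
n_trainable = sum(p.size for p in (net.R1, net.R2, net.P_in,
                                   net.P_out, net.head))
```

Only the Rep-Net parameters would be updated during training, which is the intuition behind the training-memory saving the abstract claims.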
  5. Lai, Yuan (Ed.)
    Mistrust is a major barrier to implementing deep learning in healthcare settings. Trust could be earned by conveying model certainty, the probability that a given model output is accurate, but the use of uncertainty estimation to build trust in deep learning is largely unexplored, and there is no consensus regarding optimal methods for quantifying uncertainty. Our purpose is to critically evaluate methods for quantifying uncertainty in deep learning for healthcare applications and propose a conceptual framework for specifying the certainty of deep learning predictions. We searched the Embase, MEDLINE, and PubMed databases for articles relevant to the study objectives, complying with PRISMA guidelines, rated study quality using validated tools, and extracted data according to modified CHARMS criteria. Among 30 included studies, 24 described medical imaging applications. All imaging model architectures used convolutional neural networks or a variation thereof. The predominant method for quantifying uncertainty was Monte Carlo dropout: producing predictions from multiple networks in which different neurons have been dropped out, and measuring the variance across the distribution of resulting predictions. Conformal prediction offered similarly strong performance in estimating uncertainty, along with ease of interpretation and applicability not only to deep learning but also to other machine learning approaches. Among the six articles describing non-imaging applications, model architectures and uncertainty estimation methods were heterogeneous, but predictive performance was generally strong, and uncertainty estimation was effective for comparing modeling methods. Overall, the use of model learning curves to quantify epistemic uncertainty (uncertainty attributable to model parameters) was sparse. Heterogeneity in reporting methods precluded a meta-analysis.
    Uncertainty estimation methods have the potential to identify rare but important misclassifications made by deep learning models and to compare modeling methods, which could build patient and clinician trust in deep learning applications in healthcare. Efficient maturation of this field will require standardized guidelines for reporting performance and uncertainty metrics.
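The predominant uncertainty method the review identifies, Monte Carlo dropout, can be sketched in a few lines: dropout is kept active at inference time, many stochastic forward passes are run, and the spread of the resulting predictions serves as the uncertainty estimate. The fixed-weight toy network below is an assumption for illustration; in practice this would be applied to a trained clinical model.

```python
import numpy as np

rng = np.random.default_rng(4)
relu = lambda x: np.maximum(x, 0.0)

# Fixed weights stand in for a trained network (invented for illustration).
W1 = rng.normal(size=(4, 32))
W2 = rng.normal(size=(32, 1))

def forward_with_dropout(x, p_drop=0.5):
    """One stochastic pass: dropout stays ACTIVE at inference time."""
    h = relu(x @ W1)
    mask = rng.random(h.shape) > p_drop   # randomly drop hidden units
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=200):
    """Mean of the sampled predictions is the model output; their standard
    deviation is the per-input uncertainty estimate."""
    preds = np.stack([forward_with_dropout(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(3, 4))
mean, std = mc_dropout_predict(x)
```

Inputs with a large `std` are the ones the model is least certain about, which is how this technique can flag the rare but important misclassifications the review discusses.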