

Search for: All records

Creators/Authors contains: "Chen, Haifeng"


  1. The design and fabrication of 3D printed ATESs with in vivo adhesion and application potential, shape design capability, as well as an accessible and convenient fabrication and application process.

     
    Free, publicly-accessible full text available January 14, 2026
  2. Free, publicly-accessible full text available August 24, 2025
  3. Free, publicly-accessible full text available August 24, 2025
  4. Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging and nascent problem. The leading method mainly considers local explanations, i.e., important subgraph structures and node features, to interpret why a GNN model makes a prediction for a single instance, e.g., a node or a graph. As a result, the generated explanation is painstakingly customized at the instance level. An explanation that interprets each instance independently is not sufficient to provide a global understanding of the learned GNN model, which limits generalizability and hinders use in the inductive setting. Besides, training a separate explanation model for each instance is time-consuming for large-scale real-life datasets. In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which makes PGExplainer a natural approach to multi-instance explanations. Compared to existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting without retraining for new instances. Thus, PGExplainer is much more efficient than the leading method, offering a significant speed-up. In addition, the explanation network can also be utilized as a regularizer to improve the generalization power of existing GNNs when jointly trained with downstream tasks. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 24.7% relative improvement in AUC on explaining graph classification over the leading baseline.
    Free, publicly-accessible full text available August 1, 2025
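The core idea in the PGExplainer abstract above is that one shared, parameterized network scores edge importance for any instance, so explanations transfer inductively without per-instance training. The following is a minimal sketch of that idea; the layer sizes, names, and numpy-based architecture are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class EdgeMaskExplainer:
    """Sketch of a PGExplainer-style parameterized explainer.

    A single shared MLP maps each edge's endpoint embeddings to an
    importance probability, so explaining a new graph requires only a
    forward pass, with no per-instance retraining (the inductive
    setting). Dimensions and layers here are illustrative only.
    """

    def __init__(self, emb_dim, hidden=16):
        self.W1 = rng.normal(scale=0.1, size=(2 * emb_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(scale=0.1, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def edge_importance(self, node_emb, edges):
        # Concatenate the endpoint embeddings for every edge (u, v).
        feats = np.concatenate(
            [node_emb[edges[:, 0]], node_emb[edges[:, 1]]], axis=1)
        h = np.maximum(feats @ self.W1 + self.b1, 0.0)   # ReLU
        logits = (h @ self.W2 + self.b2).ravel()
        return 1.0 / (1.0 + np.exp(-logits))             # sigmoid mask

# Toy graph: 5 nodes with 8-dim GNN embeddings, 4 edges.
node_emb = rng.normal(size=(5, 8))
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
explainer = EdgeMaskExplainer(emb_dim=8)
mask = explainer.edge_importance(node_emb, edges)
print(mask)  # one importance probability per edge, in (0, 1)
```

In the paper the mask generator is trained by maximizing mutual information between the masked subgraph and the GNN's prediction; this sketch only shows the shared-parameter scoring step that makes multi-instance explanation possible.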
  5. Free, publicly-accessible full text available May 7, 2025
  6. Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graph data via message passing among the graph nodes. GNNs have emerged as pivotal architectures for analyzing graph-structured data, and their expanding application in sensitive domains requires a comprehensive understanding of their decision-making processes, necessitating a framework for GNN explainability. An explanation function for GNNs takes a pre-trained GNN along with a graph as input and produces a 'sufficient statistic' subgraph with respect to the graph label. A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of these explanation functions. This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics, including Fid+, Fid−, and Fid∆. Specifically, a formal, information-theoretic definition of explainability is introduced, and it is shown that existing metrics often fail to align with this definition across various statistical scenarios. The reason is the potential distribution shift that occurs when subgraphs are removed in computing these fidelity measures. Subsequently, a robust class of fidelity measures is introduced, and it is shown analytically that they are resilient to distribution-shift issues and applicable in a wide range of scenarios. Extensive empirical analysis on both synthetic and real datasets illustrates that the proposed metrics are more coherent with gold-standard metrics. The source code is available at https://trustai4s-lab.github.io/fidelity.
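To make the Fid+/Fid− metrics discussed above concrete, here is a minimal sketch of how they are conventionally computed: Fid+ measures the prediction drop when the explanation subgraph is deleted, and Fid− measures the change when only the explanation is kept. The toy motif-counting "model" stands in for a trained GNN and is purely an assumption for illustration; it also hints at the paper's criticism, since the edge-deleted graphs fed to the model can fall outside its training distribution.

```python
def fid_plus(f, graph_edges, expl_edges):
    # Fid+: prediction change after deleting the explanation edges.
    # A faithful explanation yields a large drop. Note the mutilated
    # graph may be out-of-distribution for f, the shift the paper
    # identifies as the weakness of these metrics.
    remaining = [e for e in graph_edges if e not in expl_edges]
    return f(graph_edges) - f(remaining)

def fid_minus(f, graph_edges, expl_edges):
    # Fid-: prediction change when only the explanation is kept.
    # A faithful explanation yields a value near zero.
    return f(graph_edges) - f(expl_edges)

# Toy stand-in for a trained GNN: predicted probability is the
# fraction of a triangle motif's edges present in the input.
motif = {(0, 1), (1, 2), (0, 2)}
def f(edges):
    return len(motif.intersection(edges)) / len(motif)

graph = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
explanation = [(0, 1), (1, 2), (0, 2)]
print(fid_plus(f, graph, explanation))   # 1.0: removing it destroys the prediction
print(fid_minus(f, graph, explanation))  # 0.0: the explanation alone suffices
```

The paper's proposed robust fidelity measures modify this removal step to avoid the distribution shift; the sketch shows only the baseline metrics being critiqued.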
  7. Modern techniques like contrastive learning have been used effectively in many areas, including computer vision, natural language processing, and graph-structured data. Creating positive examples that help the model learn robust and discriminative representations is a crucial stage in contrastive learning approaches. Usually, preset human intuition directs the selection of relevant data augmentations. Because humans readily recognize the relevant patterns, this rule of thumb works well in the vision and language domains. However, it is impractical to visually inspect the temporal structures in time series. The diversity of time series augmentations at both the dataset and instance levels makes it difficult to choose meaningful augmentations on the fly. In this study, we address this gap by analyzing time series data augmentation using information theory and summarizing the most commonly adopted augmentations in a unified format. We then propose AutoTCL, a contrastive learning framework with parametric augmentation that can be adaptively employed to support time series representation learning. The proposed approach is encoder-agnostic, allowing it to be seamlessly integrated with different backbone encoders. Experiments on univariate forecasting tasks demonstrate the highly competitive results of our method, with an average 6.5% reduction in MSE and 4.7% in MAE over the leading baselines. In classification tasks, AutoTCL achieves a 1.2% increase in average accuracy.
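The two ingredients the AutoTCL abstract combines are a parameterized augmentation family (instead of hand-picked transforms) and a contrastive objective over the resulting views. The sketch below illustrates that pairing with an amplitude-scale-plus-jitter augmentation and an InfoNCE-style loss; the augmentation family, parameter values, and plain-numpy formulation are illustrative assumptions, not AutoTCL's learned augmentation network.

```python
import numpy as np

rng = np.random.default_rng(0)

def parametric_augment(x, scale, jitter_std):
    # A parameterized augmentation family: amplitude scaling plus
    # Gaussian jitter. In AutoTCL the augmentation parameters are
    # produced by a network trained jointly with the encoder; here
    # they are plain variables for illustration.
    return scale * x + rng.normal(scale=jitter_std, size=x.shape)

def info_nce(anchor, positive, negatives, temp=0.5):
    # InfoNCE loss: pull the (anchor, positive) pair together in
    # cosine-similarity space while pushing negatives apart.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(cos(anchor, positive) / temp)
    neg = sum(np.exp(cos(anchor, n) / temp) for n in negatives)
    return -np.log(pos / (pos + neg))

# Toy univariate series and an augmented positive view of it.
series = np.sin(np.linspace(0, 4 * np.pi, 64))
view = parametric_augment(series, scale=1.1, jitter_std=0.05)
negatives = [rng.normal(size=64) for _ in range(4)]
loss = info_nce(series, view, negatives)
print(loss)  # small, since the view stays close to the anchor
```

In the full method this loss (applied to encoder outputs rather than raw series) is backpropagated through the augmentation parameters as well, which is what makes the augmentation adaptive per dataset and instance.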
  8. Abstract

    Adhesive tissue engineering scaffolds (ATESs) have emerged as an innovative alternative to sutures and bioglues for securing implants onto target tissues. Relying on their intrinsic tissue adhesion characteristics, ATES systems enable minimally invasive delivery of various scaffolds. This study investigates the development of the first class of 3D bioprinted ATES constructs using functionalized hydrogel bioinks. Two ATES delivery strategies, in situ printing onto the adherend versus printing and then transferring to the target surface, are tested using two bioprinting methods, embedded versus air printing. Dopamine‐modified methacrylated hyaluronic acid (HAMA‐Dopa) and gelatin methacrylate (GelMA) are used as the main bioink components, enabling fabrication of scaffolds with enhanced adhesion and crosslinking properties. Results demonstrate that dopamine modification improves the adhesive properties of the HAMA‐Dopa/GelMA constructs under various loading conditions while maintaining their structural fidelity, stability, mechanical properties, and biocompatibility. While printing directly onto the adherend yields superior adhesive strength, embedded printing followed by transfer to the target tissue demonstrates greater potential for translational applications. Together, these results demonstrate the potential of bioprinted ATESs as off‐the‐shelf medical devices for diverse biomedical applications.

     