Search for: All records — Creators/Authors contains: "Liu, Jiancheng"


  1. Free, publicly-accessible full text available December 10, 2025
  2. In response to recent data regulation requirements, machine unlearning (MU) has emerged as a critical process to remove the influence of specific examples from a given model. Although exact unlearning can be achieved through complete model retraining using the remaining dataset, the associated computational costs have driven the development of efficient, approximate unlearning techniques. Moving beyond data-centric MU approaches, our study introduces a novel model-based perspective: model sparsification via weight pruning, which is capable of reducing the gap between exact unlearning and approximate unlearning. We show in both theory and practice that model sparsity can boost the multi-criteria unlearning performance of an approximate unlearner, closing the approximation gap, while continuing to be efficient. This leads to a new MU paradigm, termed prune first, then unlearn, which infuses a sparse model prior into the unlearning process. Building on this insight, we also develop a sparsity-aware unlearning method that utilizes sparsity regularization to enhance the training process of approximate unlearning. Extensive experiments show that our proposals consistently benefit MU in various unlearning scenarios. A notable highlight is the 77% unlearning efficacy gain of fine-tuning (one of the simplest unlearning methods) when using sparsity-aware unlearning. Furthermore, we demonstrate the practical impact of our proposed MU methods in addressing other machine learning challenges, such as defending against backdoor attacks and enhancing transfer learning. Codes are available at this https URL. 
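The "prune first, then unlearn" paradigm in the abstract above can be illustrated with a toy sketch: magnitude-prune a trained linear model, then approximately unlearn by fine-tuning on the retain set only, with the sparsity mask held fixed. This is a minimal illustration under simplified assumptions, not the paper's implementation; all function names and data here are hypothetical.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude weights; return pruned weights and mask."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy(), np.ones_like(w)
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    mask = (np.abs(w) > thresh).astype(w.dtype)
    return w * mask, mask

def finetune_unlearn(w, mask, X_retain, y_retain, lr=0.1, steps=100):
    """Approximate unlearning by fine-tuning on the retain set only,
    keeping pruned weights at zero (the sparse model prior)."""
    for _ in range(steps):
        grad = 2 * X_retain.T @ (X_retain @ w - y_retain) / len(y_retain)
        w = (w - lr * grad) * mask  # masked gradient step preserves sparsity
    return w

# Toy linear-regression example (synthetic data, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=200)

w0 = np.linalg.lstsq(X, y, rcond=None)[0]          # model trained on all data
w_sparse, mask = magnitude_prune(w0, sparsity=0.5)  # prune first
X_r, y_r = X[:150], y[:150]                         # retain set (forget last 50)
w_unlearned = finetune_unlearn(w_sparse, mask, X_r, y_r)  # then unlearn
```

The point of the sketch is the ordering: pruning before unlearning restricts fine-tuning to a sparse subspace, which is the prior the paper argues narrows the gap to exact retraining.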
  3. Abstract: We present a precise measurement of the asymptotic normalization coefficient (ANC) for the ¹⁶O ground state (GS) through the ¹²C(¹¹B,⁷Li)¹⁶O transfer reaction using the Quadrupole-3-Dipole (Q3D) magnetic spectrograph. The present work sheds light on the existing discrepancy of more than two orders of magnitude between previously reported GS ANC values. This ANC is believed to have a strong effect on the ¹²C(α,γ)¹⁶O reaction rate by constraining the external capture to the ¹⁶O ground state, which can interfere with the high-energy tail of the 2⁺ subthreshold state. Based on the new ANC, we determine the astrophysical S-factor and the stellar rate of the ¹²C(α,γ)¹⁶O reaction. An increase of up to 21% in the total reaction rate is found within the temperature range of astrophysical relevance, compared with the recommendation of a recent review. Finally, we evaluate the impact of our new rate on the pair-instability mass gap for black holes (BHs) by evolving massive helium-core stars using the MESA stellar evolution code. The updated ¹²C(α,γ)¹⁶O reaction rate decreases the lower and upper edges of the BH gap by about 12% and 5%, respectively.
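For context on the astrophysical S-factor quoted in the abstract above: it is the standard factorization that removes the steep Coulomb-barrier energy dependence from a charged-particle cross section, leaving a slowly varying quantity. This is textbook nuclear astrophysics, not a result of the paper:

```latex
% Cross section in terms of the astrophysical S-factor
\sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta},
\qquad
\eta = \frac{Z_1 Z_2 e^2}{\hbar v}
```

where η is the Sommerfeld parameter and v the relative velocity of the colliding nuclei. Because S(E) varies slowly for non-resonant capture, it is the quantity extrapolated to stellar energies for reactions such as ¹²C(α,γ)¹⁶O.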