Search for: All records

Creators/Authors contains: "Zhang, Fan"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available July 1, 2026
  2. Inspired by the success of Self-Supervised Learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of Continual Learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely Self-Supervised Continual Learning (SSCL). It has been shown that SSCL outperforms Supervised Continual Learning (SCL), as the learned representations are more informative and more robust to catastrophic forgetting. However, building upon the training process of SSL, prior SSCL studies train all the parameters for each task, resulting in prohibitively high training costs. In this work, we first analyze training time and memory consumption and reveal that the backward gradient calculation is the bottleneck. Moreover, by investigating task correlations in SSCL, we discover an interesting phenomenon: with the SSL-learned backbone model, the intermediate features are highly correlated between tasks. Based on these new findings, we propose a new SSCL method with layer-wise freezing, which progressively freezes the partial layers with the highest correlation ratios for each task to improve training computation and memory efficiency. Extensive experiments across multiple datasets show that our proposed method outperforms the SoTA SSCL methods under various SSL frameworks. For example, compared to LUMP, our method achieves 1.18x, 1.15x, and 1.2x GPU training time reduction, 1.65x, 1.61x, and 1.6x memory reduction, 1.46x, 1.44x, and 1.46x backward FLOPs reduction, and 1.31%/1.98%/1.21% forgetting reduction without accuracy degradation on three datasets, respectively. (An illustrative sketch of correlation-guided layer freezing appears after this list.) 
    Free, publicly-accessible full text available April 23, 2026
  3. Free, publicly-accessible full text available April 7, 2026
  4. Free, publicly-accessible full text available April 1, 2026
  5. Abstract: Spin-orbit coupling (SOC) and electron-electron interaction can mutually influence each other and give rise to a plethora of intriguing phenomena in condensed matter systems. In pristine bilayer graphene (BLG), which has weak SOC, intrinsic Lifshitz transitions and concomitant van Hove singularities lead to the emergence of many-body correlated phases. Layer-selective SOC can be proximity induced by adding a layer of tungsten diselenide (WSe2) on one side. By applying an electric displacement field, the system can be tuned across a spectrum wherein electronic correlation, SOC, or a combination of both dominates. Our investigations reveal an intricate phase diagram of proximity-induced, SOC-selective BLG. Not only does this phase diagram include correlated phases reminiscent of SOC-free doped BLG, but it also hosts unique SOC-induced states, allowing a compelling measurement of the valley g-factor and a correlated insulator at charge neutrality, thereby showcasing the remarkable tunability of the interplay between interaction and SOC in WSe2-enriched BLG. 
    Free, publicly-accessible full text available May 28, 2026
  6. Nowadays, parameter-efficient fine-tuning (PEFT) of large pre-trained models (LPMs) for downstream tasks has gained significant popularity, since it greatly reduces the training computational overhead. The representative work, LoRA [1], learns a low-rank adaptor for a new downstream task rather than fine-tuning the whole backbone model. However, for inference, the large size of the learned model remains unchanged, leading to inefficient inference computation. To mitigate this, in this work, we are the first to propose a learning-to-prune methodology specifically designed for fine-tuning downstream tasks based on LPMs with low-rank adaptation. Unlike prior low-rank adaptation approaches that only learn the low-rank adaptors for downstream tasks, our method further leverages the Gumbel-Sigmoid trick to learn a set of trainable binary channel-wise masks that automatically prune the backbone LPM. Therefore, our method retains the benefit of low-rank adaptation, a small number of trainable parameters, while also producing a smaller pruned backbone LPM for efficient inference computation. Extensive experiments show that the Pruned-RoBERTa-base model with our method achieves an average channel-wise structured pruning ratio of 24.5% across the popular GLUE benchmark, coupled with an average 18% inference time speed-up on a real NVIDIA A5000 GPU. The Pruned-DistilBERT model shows an average 13% inference time improvement with 17% sparsity. The Pruned-LLaMA-7B model achieves up to 18.2% inference time improvement with 24.5% sparsity, demonstrating the effectiveness of our learnable pruning approach across different models and tasks. (An illustrative sketch of a Gumbel-Sigmoid channel mask combined with a LoRA adaptor appears after this list.) 
    Free, publicly-accessible full text available January 20, 2026
  7. Free, publicly-accessible full text available December 16, 2025
  8. Free, publicly-accessible full text available November 1, 2025
  9. Free, publicly-accessible full text available March 11, 2026
  10. Free, publicly-accessible full text available December 1, 2025
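
The correlation-guided layer freezing described in item 2 can be pictured with a short PyTorch sketch. This is not the authors' published code: the helper names (layer_correlation, freeze_most_correlated), the use of cosine similarity of mean features as a proxy for the correlation ratio, and the freezing schedule are all assumptions made only to illustrate the general idea that freezing highly correlated layers removes their backward gradient computation.

```python
# Minimal sketch, assuming `blocks` is an ordered list of the backbone's
# layer blocks inside `model`. Helper names and the correlation proxy are
# illustrative assumptions, not the paper's implementation.
import torch


@torch.no_grad()
def layer_correlation(model, blocks, old_loader, new_loader, device="cpu"):
    """Return a per-block score of how similar the block's features are
    between data from the previous task and the current task."""
    scores = []
    for block in blocks:
        feats = {"old": [], "new": []}

        def hook(_module, _inputs, output, key):
            feats[key].append(output.flatten(1).cpu())

        for key, loader in (("old", old_loader), ("new", new_loader)):
            handle = block.register_forward_hook(
                lambda m, i, o, k=key: hook(m, i, o, k))
            for x, *_ in loader:
                model(x.to(device))
            handle.remove()

        f_old = torch.cat(feats["old"]).mean(0)
        f_new = torch.cat(feats["new"]).mean(0)
        # Cosine similarity of mean features as a simple correlation proxy.
        scores.append(torch.cosine_similarity(f_old, f_new, dim=0).item())
    return scores


def freeze_most_correlated(blocks, scores, num_to_freeze):
    """Freeze the blocks whose features correlate most strongly across tasks,
    so their backward gradients (the stated bottleneck) are never computed."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    for i in order[:num_to_freeze]:
        for p in blocks[i].parameters():
            p.requires_grad_(False)
```

In this sketch, the number of blocks frozen per task (num_to_freeze) would be increased progressively as training advances, mirroring the "progressively freezes partial layers" description in the abstract.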
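
The learnable pruning idea in item 6 can likewise be sketched in PyTorch: a frozen backbone weight, a trainable LoRA adaptor, and a Gumbel-Sigmoid channel mask whose near-binary samples decide which output channels to keep. The module name, initialization values, and the straight-through estimator details below are assumptions for illustration, not the paper's code.

```python
# Minimal sketch, assuming a single linear layer stands in for one layer of
# the backbone LPM. Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gumbel_sigmoid(logits, tau=1.0, hard=True):
    """Near-binary, differentiable samples via the Gumbel-Sigmoid trick."""
    u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)            # logistic noise
    y_soft = torch.sigmoid((logits + noise) / tau)
    if not hard:
        return y_soft
    y_hard = (y_soft > 0.5).float()
    return y_hard + (y_soft - y_soft.detach())        # straight-through estimator


class PrunableLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8):
        super().__init__()
        # Frozen backbone weight (stands in for a pre-trained layer).
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)
        # Trainable low-rank adaptor (B initialized to zero, as in LoRA).
        self.lora_a = nn.Parameter(torch.empty(rank, in_features))
        nn.init.normal_(self.lora_a, std=0.02)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        # Trainable channel-mask logits; start positive so all channels are kept.
        self.mask_logits = nn.Parameter(torch.full((out_features,), 2.0))

    def forward(self, x):
        mask = gumbel_sigmoid(self.mask_logits)        # ~binary per output channel
        w = self.weight + self.lora_b @ self.lora_a    # LoRA-adapted weight
        return F.linear(x, w * mask.unsqueeze(1))      # masked (prunable) channels
```

After training, output channels whose mask logits end up negative can be removed from the backbone weight, shrinking the deployed model; in practice a sparsity penalty on the mask (not shown) would be added to the loss to push the pruning ratio toward a target.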