

Search for: All records

Creators/Authors contains: "Wei, Donglai"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from this site's.

  1. Free, publicly accessible full text available December 10, 2025
  2. Free, publicly accessible full text available November 17, 2025
  3. Proteins work together in nanostructures in many physiological contexts and disease states. We recently developed expansion revealing (ExR), which expands proteins away from each other to support better labeling with antibody tags and nanoscale imaging on conventional microscopes. Here, we report multiplexed expansion revealing (multiExR), which enables high-fidelity antibody visualization of >20 proteins in the same specimen over serial rounds of staining and imaging. Across all datasets examined, multiExR exhibits a median round-to-round registration error of 39 nm, falling to a median of 25 nm when the most stringent form of the protocol is used (a sketch of this registration-error metric follows the list below). We precisely map 23 proteins in the brain of 5xFAD Alzheimer’s model mice, finding reductions in synaptic protein cluster volume and co-localization of specific AMPA receptor subunits with amyloid-beta nanoclusters. We also visualize 20 synaptic proteins in specimens of mouse primary somatosensory cortex. multiExR may be of broad use in analyzing how different kinds of proteins are organized amidst normal and pathological processes in biology.
    Free, publicly accessible full text available December 1, 2025
  4. Free, publicly accessible full text available July 21, 2025
  5. Free, publicly accessible full text available May 27, 2025
  6. Gaze-annotated facial data is crucial for training deep neural networks (DNNs) for gaze estimation. However, obtaining such data is labor-intensive and requires specialized equipment because accurately annotating a subject's gaze direction is difficult. In this work, we present a generative framework that creates annotated gaze data by leveraging both labeled and unlabeled data sources. We propose a Gaze-aware Compositional GAN that learns to generate annotated facial images from a limited labeled dataset, and we then transfer this model to an unlabeled data domain to take advantage of the diversity it provides (a minimal two-stage training sketch follows the list below). Experiments demonstrate our approach's effectiveness in generating within-domain image augmentations on the ETH-XGaze dataset and cross-domain augmentations on the CelebAMask-HQ dataset for gaze estimation DNN training. We also show additional applications of our work, including facial image editing and gaze redirection.
    Free, publicly accessible full text available May 17, 2025
  7. Mapping neuronal networks is a central focus in neuroscience. While volume electron microscopy (vEM) can reveal the fine structure of neuronal networks (connectomics), it does not provide the molecular information needed to identify cell types or functions. We developed an approach that uses fluorescent single-chain variable fragments (scFvs) to perform multiplexed, detergent-free immunolabeling and volumetric correlated light and electron microscopy on the same sample. We generated eight fluorescent scFvs targeting brain markers. Six fluorescent probes were imaged in the cerebellum of a female mouse using confocal microscopy with spectral unmixing (a sketch of linear unmixing follows the list below), followed by vEM of the same sample. The results provide excellent ultrastructure superimposed with multiple fluorescence channels. Using this approach, we documented a poorly described cell type, two types of mossy fiber terminals, and the subcellular localization of one type of ion channel. Because scFvs can be derived from existing monoclonal antibodies, hundreds of such probes can be generated to enable molecular overlays for connectomic studies.
    Free, publicly accessible full text available December 1, 2025
  8. Free, publicly accessible full text available June 22, 2025
  9. There has been growing interest in developing multimodal machine translation (MMT) systems that enhance neural machine translation (NMT) with visual knowledge. This problem setup uses images as auxiliary information during training and, more recently, eliminates their use during inference. However, previous work has faced a challenge in training powerful MMT models from scratch due to the scarcity of annotated multilingual vision-language data, especially for low-resource languages. Simultaneously, there has been an influx of multilingual pre-trained models for NMT and multimodal pre-trained models for vision-language tasks, primarily in English, which have shown exceptional generalization ability. However, these are not directly applicable to MMT, since they do not provide aligned multimodal multilingual features for generative tasks. To alleviate this issue, instead of designing complex modules for MMT, we propose CLIPTrans, which simply adapts the independently pre-trained multimodal M-CLIP and the multilingual mBART. To align their embedding spaces, mBART is conditioned on the M-CLIP features via a prefix sequence generated by a lightweight mapping network (sketched after the list below). We train this in a two-stage pipeline that warms up the model with image captioning before the actual translation task. Through experiments, we demonstrate the merits of this framework and consequently push forward the state of the art across standard benchmarks by an average of +2.67 BLEU. The code can be found at www.github.com/devaansh100/CLIPTrans.
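The multiExR entry (item 3 above) reports protocol fidelity as a median round-to-round registration error in nanometers. Below is a minimal sketch of how such a metric can be computed from matched landmarks in two imaging rounds, assuming a least-squares affine registration; the synthetic landmarks and the affine model are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: median registration error between two imaging rounds, given
# matched 3D landmarks. The affine fit and the synthetic data are assumptions
# for illustration, not the multiExR authors' method.
import numpy as np

def median_registration_error(ref_pts, mov_pts):
    """Fit an affine transform mapping mov_pts onto ref_pts (least squares),
    then return the median residual distance over the matched landmarks."""
    n = len(ref_pts)
    mov_h = np.hstack([mov_pts, np.ones((n, 1))])        # homogeneous coords
    A, *_ = np.linalg.lstsq(mov_h, ref_pts, rcond=None)  # 4x3 affine matrix
    residuals = np.linalg.norm(mov_h @ A - ref_pts, axis=1)
    return np.median(residuals)

# Toy example: landmark positions in nm; round 2 is shifted and slightly noisy.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 10_000, size=(200, 3))
mov = ref + np.array([50.0, -30.0, 20.0]) + rng.normal(0, 25, ref.shape)
print(f"median registration error: {median_registration_error(ref, mov):.1f} nm")
```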
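Item 6 trains a gaze-conditioned generator on a limited labeled dataset and then transfers it to an unlabeled domain. The sketch below shows only that two-stage schedule, with stub MLP networks and random tensors standing in for ETH-XGaze and CelebAMask-HQ images; it is not the paper's Gaze-aware Compositional GAN.

```python
# Hedged sketch of a two-stage conditional-GAN schedule: stage 1 on a labeled
# domain, stage 2 transferring to an unlabeled domain. Models and data are toy
# placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=16, gaze_dim=2, img_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + gaze_dim, 128), nn.ReLU(),
                                 nn.Linear(128, img_dim))
    def forward(self, z, gaze):
        # Conditioning on the gaze label is what makes the samples annotated.
        return self.net(torch.cat([z, gaze], dim=1))

class Discriminator(nn.Module):
    def __init__(self, img_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, x):
        return self.net(x)

def train_stage(G, D, real_batch_fn, steps, z_dim=16, gaze_dim=2):
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    for _ in range(steps):
        real = real_batch_fn()
        b = real.size(0)
        z = torch.randn(b, z_dim)
        gaze = torch.rand(b, gaze_dim) * 2 - 1        # pitch/yaw in [-1, 1]
        fake = G(z, gaze)
        # Discriminator step: separate real images from generated ones.
        loss_d = (bce(D(real), torch.ones(b, 1)) +
                  bce(D(fake.detach()), torch.zeros(b, 1)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator step: fool the discriminator.
        loss_g = bce(D(fake), torch.ones(b, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

G, D = Generator(), Discriminator()
labeled = lambda: torch.randn(32, 64)    # stand-in for labeled ETH-XGaze images
unlabeled = lambda: torch.randn(32, 64)  # stand-in for unlabeled CelebAMask-HQ faces
train_stage(G, D, labeled, steps=100)    # stage 1: learn on the labeled domain
train_stage(G, D, unlabeled, steps=100)  # stage 2: transfer to the unlabeled domain
```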
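Item 7 images six fluorescent probes with confocal microscopy and spectral unmixing. A common formulation is linear unmixing: model each pixel's detected spectrum as a non-negative mixture of known reference emission spectra and solve per pixel with non-negative least squares. The Gaussian reference spectra below are synthetic placeholders, and this generic formulation is an assumption about the class of method, not the authors' exact procedure.

```python
# Hedged sketch of linear spectral unmixing via non-negative least squares.
# Reference spectra are synthetic Gaussians, not measured fluorophore spectra.
import numpy as np
from scipy.optimize import nnls

channels = np.linspace(450, 700, 16)            # detector bands (nm)

def emission(peak_nm, width_nm=25.0):
    return np.exp(-0.5 * ((channels - peak_nm) / width_nm) ** 2)

# Columns of A: reference emission spectra of six probes.
A = np.stack([emission(p) for p in (480, 520, 560, 600, 640, 680)], axis=1)

# Synthetic pixel: a mix of probe 0 and probe 3, plus detector noise.
true_abundance = np.array([0.7, 0.0, 0.0, 0.3, 0.0, 0.0])
pixel = A @ true_abundance + np.random.default_rng(1).normal(0, 0.01, len(channels))

abundance, _ = nnls(A, pixel)                   # per-pixel unmixing
print(np.round(abundance, 2))                   # approximately [0.7 0 0 0.3 0 0]
```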
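Item 9's central mechanism is conditioning mBART on M-CLIP features through a prefix generated by a lightweight mapping network. The sketch below illustrates that idea with stand-in tensors; the layer sizes, prefix length, and stub embeddings are assumptions for illustration, not the released CLIPTrans code (see the repository linked in the abstract).

```python
# Hedged sketch of prefix conditioning: project one M-CLIP-style embedding into
# a short sequence of pseudo-token embeddings and prepend it to the mBART input
# embeddings. Dimensions and modules are illustrative stand-ins.
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Map one CLIP vector to `prefix_len` embeddings of the seq2seq width."""
    def __init__(self, clip_dim=512, model_dim=1024, prefix_len=10):
        super().__init__()
        self.prefix_len, self.model_dim = prefix_len, model_dim
        self.proj = nn.Sequential(nn.Linear(clip_dim, model_dim * prefix_len),
                                  nn.Tanh())
    def forward(self, clip_emb):                 # (batch, clip_dim)
        return self.proj(clip_emb).view(-1, self.prefix_len, self.model_dim)

batch, seq_len = 4, 20
clip_emb = torch.randn(batch, 512)             # stand-in for M-CLIP features
token_emb = torch.randn(batch, seq_len, 1024)  # stand-in for mBART token embeddings

mapper = MappingNetwork()
prefix = mapper(clip_emb)                              # (4, 10, 1024)
inputs_embeds = torch.cat([prefix, token_emb], dim=1)  # prefix + source tokens
print(inputs_embeds.shape)                             # torch.Size([4, 30, 1024])
# With a Hugging Face mBART one would pass `inputs_embeds` to the model; the
# two-stage pipeline first trains on captioning, then on translation.
```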