
Search for: All records

Creators/Authors contains: "Xu, P."


  1. Free, publicly-accessible full text available March 1, 2023
  2. Ribonucleoside monophosphate (rNMP) incorporation in DNA is a natural and prominent phenomenon that results in DNA structural change and genome instability. While DNA polymerases have different rNMP incorporation rates, little is known about whether these enzymes incorporate rNMPs following specific sequence patterns. In this study, we analyzed a series of rNMP incorporation datasets, generated with three rNMP mapping techniques and obtained from Saccharomyces cerevisiae cells expressing wild-type or mutant replicative DNA polymerase and ribonuclease H2 genes. We performed computational analyses of rNMP sites around early- and late-firing autonomously replicating sequences (ARSs) of the yeast genome, from which bidirectional leading- and lagging-strand DNA synthesis starts. We find a preference for rNMP incorporation on the leading strand in cells with wild-type DNA polymerases. The leading/lagging-strand ratio of rNMP incorporation changes dramatically within 500 nt of ARSs, highlighting the Pol δ - Pol ε handoff during early leading-strand synthesis. Furthermore, the pattern of rNMP incorporation is markedly distinct between the leading and lagging strands. Overall, our results reveal different counts and patterns of rNMP incorporation during DNA replication starting from ARSs, reflecting the division of labor and the distinct rNMP incorporation patterns of Pol δ and Pol ε.
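The windowed leading/lagging-strand analysis described above can be sketched as follows. This is a minimal illustration of the idea only: the window size, strand labels, and data layout are assumptions for the example, not the authors' actual pipeline.

```python
# Hypothetical sketch: leading/lagging-strand ratios of rNMP counts in
# fixed-size windows downstream of a replication origin (ARS).
# Window size and input format are illustrative assumptions.

def strand_ratio_by_window(sites, ars_pos, window=500, n_windows=4):
    """Count rNMP sites per (window, strand) and return leading/lagging ratios.

    sites: list of (position, strand) tuples, strand in {"leading", "lagging"}
    ars_pos: position of the ARS; windows extend downstream of it
    """
    counts = {}
    for pos, strand in sites:
        offset = pos - ars_pos
        if 0 <= offset < window * n_windows:
            w = offset // window
            counts[(w, strand)] = counts.get((w, strand), 0) + 1
    ratios = []
    for w in range(n_windows):
        lead = counts.get((w, "leading"), 0)
        lag = counts.get((w, "lagging"), 0)
        # Infinite ratio when no lagging-strand sites fall in the window
        ratios.append(lead / lag if lag else float("inf"))
    return ratios
```

A sharp change in these per-window ratios near the origin is the kind of signal the abstract attributes to the Pol δ - Pol ε handoff.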
  3. Simulation-to-real domain adaptation for semantic segmentation has been actively studied for applications such as autonomous driving. Existing methods mainly focus on a single-source setting, which cannot easily handle the more practical scenario of multiple sources with different distributions. In this paper, we propose to investigate multi-source domain adaptation for semantic segmentation. Specifically, we design a novel framework, termed Multi-source Adversarial Domain Aggregation Network (MADAN), which can be trained in an end-to-end manner. First, we generate an adapted domain for each source with dynamic semantic consistency while aligning at the pixel level cycle-consistently towards the target. Second, we propose a sub-domain aggregation discriminator and a cross-domain cycle discriminator to make the different adapted domains more closely aggregated. Finally, feature-level alignment is performed between the aggregated domain and the target domain while training the segmentation network. Extensive experiments adapting from the synthetic GTA and SYNTHIA datasets to the real Cityscapes and BDDS datasets demonstrate that the proposed MADAN model outperforms state-of-the-art approaches. Our source code is released at:
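The aggregation step above relies on a discriminator that tries to tell the adapted source domains apart; driving its confidence down pushes those domains toward one aggregated distribution. The sketch below shows one plausible form of such an objective, a cross-entropy over domain labels. The loss form and function names are illustrative assumptions, not MADAN's exact formulation.

```python
import math

# Hedged sketch of a sub-domain aggregation objective: cross-entropy of a
# discriminator's predicted domain distribution against the true source-domain
# label of each sample. Names and shapes are assumptions for illustration.

def domain_aggregation_loss(domain_probs, domain_labels):
    """Mean cross-entropy the domain discriminator minimizes.

    domain_probs: per-sample probability vectors over the M adapted domains
    domain_labels: index of the true source domain for each sample
    """
    total = 0.0
    for probs, label in zip(domain_probs, domain_labels):
        # Clamp to avoid log(0) for confident wrong predictions
        total += -math.log(max(probs[label], 1e-12))
    return total / len(domain_labels)
```

In an adversarial setup, the discriminator minimizes this loss while the generators producing the adapted domains maximize it, so the adapted domains become indistinguishable.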
  4. Convolutional neural networks (CNNs) have been increasingly deployed to Internet of Things (IoT) devices. Hence, many efforts have been made towards efficient CNN inference on resource-constrained platforms. This paper explores an orthogonal direction: how to conduct more energy-efficient training of CNNs, so as to enable on-device training. We strive to reduce the energy cost of training by dropping unnecessary computations at three complementary levels: stochastic mini-batch dropping at the data level; selective layer update at the model level; and sign prediction for low-cost, low-precision back-propagation at the algorithm level. Extensive simulations and ablation studies, with real energy measurements from an FPGA board, confirm the superiority of our proposed strategies and demonstrate remarkable energy savings for training. Specifically, when training ResNet-74 on CIFAR-10, we achieve aggressive energy savings of >90% and >60%, while incurring an accuracy loss of only about 2% and 1.2%, respectively. When training ResNet-110 on CIFAR-100, an over 84% training energy saving comes at the small accuracy costs of 2% (top-1) and 0.1% (top-5).
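Of the three levels above, the data-level technique is the simplest to picture: each mini-batch is skipped independently with some probability, saving that batch's forward and backward pass. The sketch below illustrates the general idea; the loop structure and parameter names are assumptions for this example, and the paper combines this with selective layer update and sign prediction, which are not shown.

```python
import random

# Illustrative sketch of stochastic mini-batch dropping: skip each mini-batch
# with probability drop_prob, saving roughly that fraction of per-batch
# training compute. The training-loop shape here is an assumption.

def train_with_smd(batches, train_step, drop_prob=0.5, rng=None):
    """Run train_step on each batch, randomly dropping some batches.

    Returns the number of batches actually processed.
    """
    rng = rng or random.Random()
    processed = 0
    for batch in batches:
        if rng.random() < drop_prob:
            continue  # skip this mini-batch entirely, saving its compute
        train_step(batch)
        processed += 1
    return processed
```

Because batches are dropped at random across epochs, every training example is still expected to be seen, just less often, which is why the accuracy cost can stay small.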