


Search for: All records

Creators/Authors contains: "Zhao, Kevin"


  1. In this paper, we consider how to provide fast estimates of flow-level tail latency performance for very large scale data center networks. Network tail latency is often a crucial metric for cloud application performance that can be affected by a wide variety of factors, including network load, inter-rack traffic skew, traffic burstiness, flow size distributions, oversubscription, and topology asymmetry. Network simulators such as ns-3 and OMNeT++ can provide accurate answers, but are very hard to parallelize, taking hours or days to answer what-if questions for a single configuration at even moderate scale. Recent work with MimicNet has shown how to use machine learning to improve simulation performance, but at the cost of a long training step per configuration, and with assumptions about workload and topology uniformity that typically do not hold in practice. We address this gap by developing a set of techniques to provide fast performance estimates for large-scale networks with general traffic matrices and topologies. A key step is to decompose the problem into a large number of parallel, independent single-link simulations; we carefully combine these link-level simulations to produce accurate estimates of end-to-end flow-level performance distributions for the entire network. Like MimicNet, we exploit symmetry where possible to gain additional speedups, but without relying on machine learning, so there is no training delay. On a large-scale network where ns-3 takes 11 to 27 hours to simulate five seconds of network behavior, our techniques run in one to two minutes with accuracy within 9% for tail flow completion times.
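The decomposition described above can be illustrated with a toy Monte Carlo sketch. This is not the paper's actual estimator: the function names, the exponential (M/M/1-style) per-link delay model, and the independence assumption across links are all illustrative stand-ins. The point is only the structure: simulate each link's delay distribution independently (which parallelizes trivially), then combine per-link samples along a flow's path to estimate the end-to-end tail.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_link_delays(load, n=100_000):
    """Stand-in for an independent single-link simulation.

    Returns n delay samples for one link. The exponential model with
    scale 1/(1 - load) is an illustrative assumption: heavier load
    produces a heavier delay tail, which is all the sketch needs.
    """
    return rng.exponential(scale=1.0 / (1.0 - load), size=n)

def path_latency_samples(link_loads, n=100_000):
    """Combine independent per-link samples into end-to-end samples.

    Summing sample vectors element-wise approximates the convolution
    of the per-link delay distributions along the path (assuming the
    links behave independently, which real cross-traffic may violate).
    """
    return sum(simulate_link_delays(load, n) for load in link_loads)

# A hypothetical 3-hop path: edge, aggregation, edge.
samples = path_latency_samples([0.5, 0.7, 0.5])
p50, p99 = np.percentile(samples, [50, 99])
print(f"median latency ~ {p50:.2f}, p99 latency ~ {p99:.2f}")
```

Because each call to `simulate_link_delays` depends only on that link's own parameters, the per-link step is embarrassingly parallel; only the cheap combination step touches whole paths, which is what makes this style of estimate so much faster than a packet-level whole-network simulation.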
  2. CRISPR-Cas–guided base editors convert A•T to G•C, or C•G to T•A, in cellular DNA for precision genome editing. To understand the molecular basis for DNA adenosine deamination by adenine base editors (ABEs), we determined a 3.2-angstrom resolution cryo–electron microscopy structure of ABE8e in a substrate-bound state in which the deaminase domain engages DNA exposed within the CRISPR-Cas9 R-loop complex. Kinetic and structural data suggest that ABE8e catalyzes DNA deamination up to ~1100-fold faster than earlier ABEs because of mutations that stabilize DNA substrates in a constrained, transfer RNA–like conformation. Furthermore, ABE8e's accelerated DNA deamination suggests a previously unobserved transient DNA melting that may occur during double-stranded DNA surveillance by CRISPR-Cas9. These results explain ABE8e-mediated base-editing outcomes and inform the future design of base editors.
