Search for: All records

Creators/Authors contains: "Cheng, Richard"

  1. The thioamide is a naturally occurring single-atom substitution of the canonical amide bond. The exchange of oxygen for sulfur alters the amide's physical and chemical characteristics, thereby expanding its functionality. Incorporation of thioamides into prevalent secondary structures has demonstrated that they can have stabilizing, destabilizing, or neutral effects. We performed a systematic investigation of the structural impact of thioamide incorporation in a β-hairpin scaffold using nuclear magnetic resonance (NMR) spectroscopy. Thioamides acting as hydrogen bond donors did not increase the foldedness of the more stable “YKL” variant of this scaffold. In the less stable “HPT” variant, the thioamide could be stabilizing as a hydrogen bond donor and destabilizing as a hydrogen bond acceptor, but the extent of the perturbation depended on the position of incorporation. To better understand these effects, we performed structural modelling of the macrocyclic folded HPT variants. Finally, we compare the thioamide effects that we observe to previous studies of both side-chain and backbone perturbations to this β-hairpin scaffold to provide context for our observations.
  4. Dealing with high variance is a significant challenge in model-free reinforcement learning (RL). Existing methods are unreliable, exhibiting high variance in performance from run to run with different initializations/seeds. Focusing on problems arising in continuous control, we propose a functional regularization approach to augmenting model-free RL. In particular, we regularize the behavior of the deep policy to be similar to a control prior, i.e., we regularize in function space. We show that functional regularization yields a bias-variance trade-off, and we propose an adaptive tuning strategy to optimize this trade-off. When the prior policy has control-theoretic stability guarantees, we further show that this regularization approximately preserves those stability guarantees throughout learning. We validate our approach empirically on a wide range of settings, demonstrating significantly reduced variance, guaranteed dynamic stability, and more efficient learning than deep RL alone.
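The functional-regularization idea in record 4 can be sketched in a few lines: the executed action is a weighted blend of the learned policy's action and the control prior's action, with the regularization strength trading bias against variance. This is a minimal illustrative sketch only; the function names, the linear state-feedback prior, and the fixed blending weight `lam` are assumptions for exposition, not the authors' implementation (which trains the policy with deep RL and adapts the weight during learning).

```python
import numpy as np

def prior_action(state, K):
    """Control prior: a simple linear state-feedback law u = -K s.
    (The linear form is an illustrative assumption; any stabilizing
    controller could serve as the prior.)"""
    return -K @ state

def regularized_action(policy_u, prior_u, lam):
    """Regularize in function space: blend the policy's action with the
    prior's action. lam = 0 recovers pure model-free RL (high variance);
    large lam pulls behavior toward the prior (higher bias, lower
    variance), which is the bias-variance trade-off the abstract describes."""
    return (policy_u + lam * prior_u) / (1.0 + lam)

# Example: with lam = 1 the executed action is the midpoint of the
# policy's proposal and the prior's proposal.
state = np.array([0.5, -0.2])
K = np.eye(2)
u_policy = np.array([1.0, 0.0])
u = regularized_action(u_policy, prior_action(state, K), lam=1.0)
```

In this blended form, any stability guarantee carried by the prior degrades gracefully as `lam` shrinks rather than vanishing outright, which is the intuition behind the approximate-stability claim in the abstract.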