Microtubules are dynamic cytoskeletal filaments that undergo stochastic switching between phases of polymerization and depolymerization, a behavior known as dynamic instability. Many important cellular processes, including cell motility, chromosome segregation, and intracellular transport, require complex spatiotemporal regulation of microtubule dynamics. This coordinated regulation is achieved through the interactions of numerous microtubule-associated proteins (MAPs) with microtubule ends and lattices. Here, we review recent advances in our understanding of microtubule regulation, focusing on results arising from biochemical in vitro reconstitution approaches using purified multiprotein ensembles. We discuss how combinatorial effects of MAPs shape both the dynamics of individual microtubule ends and the stability and turnover of the microtubule lattice. In addition, we highlight new results demonstrating the roles of protein condensates in microtubule regulation. Our overall intent is to showcase how lessons learned from reconstitution approaches help unravel the regulatory mechanisms at play in complex cellular environments.
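The stochastic switching described above is commonly summarized by four parameters: growth speed, shrinkage speed, catastrophe frequency, and rescue frequency. The Python sketch below is not from the review; it is a minimal Monte Carlo model of a single microtubule end under that standard four-parameter picture, and all rate values are illustrative assumptions.

```python
import random

# Minimal Monte Carlo sketch of the standard four-parameter model of
# dynamic instability (two-state growth/shrinkage model).
# All parameter values are illustrative assumptions, not from the review.

V_GROW = 2.0      # growth speed, micrometers/min (assumed)
V_SHRINK = 30.0   # shrinkage speed, micrometers/min (assumed)
F_CAT = 0.5       # catastrophe frequency, events/min (growth -> shrinkage)
F_RES = 2.0       # rescue frequency, events/min (shrinkage -> growth)
DT = 0.01         # time step, min

def simulate(t_total=10.0, seed=0):
    """Return (time, length) samples for one microtubule end."""
    rng = random.Random(seed)
    length, growing = 0.0, True
    trace = []
    for i in range(int(t_total / DT)):
        if growing:
            length += V_GROW * DT
            if rng.random() < F_CAT * DT:   # stochastic catastrophe
                growing = False
        else:
            length = max(0.0, length - V_SHRINK * DT)
            if length == 0.0 or rng.random() < F_RES * DT:
                growing = True              # rescue (or regrowth from zero)
        trace.append((i * DT, length))
    return trace

if __name__ == "__main__":
    for t, l in simulate()[::100]:
        print(f"t = {t:5.2f} min, length = {l:6.2f} um")
```

In this picture, a MAP acts by tuning one or more of the four parameters (a rescue factor, for instance, would raise F_RES), which is precisely the kind of parameter shift that in vitro reconstitution assays are designed to measure.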
Complexity is complicated and so too is comparing complexity metrics‐A response to Mikula et al. (2018)
- Award ID(s): 1802605
- PAR ID: 10079625
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: Evolution
- Volume: 72
- Issue: 12
- ISSN: 0014-3820
- Format(s): Medium: X
- Size(s): p. 2836-2838
- Sponsoring Org: National Science Foundation
More Like this
This paper investigates a model-free algorithm of broad interest in reinforcement learning, namely, Q-learning. Whereas substantial progress had been made toward understanding the sample efficiency of Q-learning in recent years, it remained largely unclear whether Q-learning is sample-optimal and how to sharpen the sample complexity analysis of Q-learning. In this paper, we settle these questions: (1) When there is only a single action, we show that Q-learning (or, equivalently, TD learning) is provably minimax optimal. (2) When there are at least two actions, our theory unveils the strict suboptimality of Q-learning and rigorizes the negative impact of overestimation in Q-learning. Our theory accommodates both the synchronous case (i.e., the case in which independent samples are drawn) and the asynchronous case (i.e., the case in which one only has access to a single Markovian trajectory).
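To make the update rule concrete, here is a minimal sketch, not from the paper, of synchronous Q-learning on a toy two-state MDP: every state-action pair receives an independent next-state sample at each iteration, and the current estimate is blended with a bootstrapped target. The MDP, step-size schedule, and all constants are illustrative assumptions.

```python
import random

# Sketch of synchronous Q-learning on a toy 2-state, 2-action MDP.
# "Synchronous" means every (state, action) pair gets a fresh independent
# next-state sample each iteration. MDP and constants are assumptions.

N_STATES, N_ACTIONS = 2, 2
GAMMA = 0.9  # discount factor

# P[s][a] = list of (next_state, probability); R[s][a] = deterministic reward
P = [[[(0, 0.7), (1, 0.3)], [(0, 0.2), (1, 0.8)]],
     [[(0, 0.5), (1, 0.5)], [(0, 0.9), (1, 0.1)]]]
R = [[1.0, 0.0], [0.0, 1.0]]

def sample_next(s, a, rng):
    """Draw a next state from the transition distribution P[s][a]."""
    r, acc = rng.random(), 0.0
    for s2, p in P[s][a]:
        acc += p
        if r < acc:
            return s2
    return P[s][a][-1][0]

def q_learning(iters=20000, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for t in range(1, iters + 1):
        eta = 1.0 / (1.0 + (1.0 - GAMMA) * t)  # one common step-size choice
        for s in range(N_STATES):
            for a in range(N_ACTIONS):
                s2 = sample_next(s, a, rng)             # independent sample
                target = R[s][a] + GAMMA * max(Q[s2])   # bootstrapped target
                Q[s][a] += eta * (target - Q[s][a])
    return Q

if __name__ == "__main__":
    for s, row in enumerate(q_learning()):
        print(f"state {s}: Q = {[round(q, 3) for q in row]}")
```

The max over Q[s2] in the target is the source of the overestimation effect the paper quantifies; with a single action the max disappears and the update reduces to TD learning, matching the dichotomy in the abstract.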
We prove Farber's conjecture on the stable topological complexity of configuration spaces of graphs. The conjecture follows from a general lower bound derived from recent insights into the topological complexity of aspherical spaces. Our arguments apply equally to higher topological complexity.
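As background for readers unfamiliar with the invariant, the LaTeX snippet below states the standard unreduced definition of topological complexity (conventions differ by one between authors); it is textbook material, not the paper's argument.

```latex
% Standard (unreduced) definition of topological complexity, given as
% background; reduced conventions differ from this one by one.
Let $PX = X^{[0,1]}$ be the path space of $X$, with the
endpoint-evaluation fibration
\[
  \pi \colon PX \longrightarrow X \times X, \qquad
  \pi(\gamma) = \bigl(\gamma(0), \gamma(1)\bigr).
\]
The \emph{topological complexity} $\mathrm{TC}(X)$ is the least $n$
such that $X \times X$ admits an open cover $U_1, \dots, U_n$ where
each $U_i$ carries a continuous section $s_i \colon U_i \to PX$ of
$\pi$. The \emph{higher} topological complexity $\mathrm{TC}_n(X)$ is
defined analogously, using the fibration that evaluates a path at $n$
equally spaced time points.
```

Since configuration spaces of graphs are aspherical, their topological complexity depends only on the fundamental group, which is why lower bounds formulated for aspherical spaces apply in this setting.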