

Search for: All records

Creators/Authors contains: "Murugan, Arvind"


  1. Abstract

    Inspired by biology’s most sophisticated computer, the brain, neural networks constitute a profound reformulation of computational principles [1–3]. Analogous high-dimensional, highly interconnected computational architectures also arise within information-processing molecular systems inside living cells, such as signal transduction cascades and genetic regulatory networks [4–7]. Might collective modes analogous to neural computation be found more broadly in other physical and chemical processes, even those that ostensibly play non-information-processing roles? Here we examine nucleation during self-assembly of multicomponent structures, showing that high-dimensional patterns of concentrations can be discriminated and classified in a manner similar to neural network computation. Specifically, we design a set of 917 DNA tiles that can self-assemble in three alternative ways such that competitive nucleation depends sensitively on the extent of colocalization of high-concentration tiles within the three structures. The system was trained in silico to classify a set of 18 grayscale 30 × 30 pixel images into three categories. Experimentally, fluorescence and atomic force microscopy measurements during and after a 150-hour anneal established that all trained images were correctly classified, whereas a test set of image variations probed the robustness of the results. Although slow compared to previous biochemical neural networks, our approach is compact, robust and scalable. Our findings suggest that ubiquitous physical phenomena, such as nucleation, may hold powerful information-processing capabilities when they occur within high-dimensional multicomponent systems.

     
    Free, publicly-accessible full text available January 18, 2025
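
Below is a minimal Python sketch of the winner-take-all nucleation scheme described in the abstract above. It is an illustration, not the paper's design pipeline: the random tile layouts, the 3 × 3 "critical nucleus" window, and the geometric-mean rate proxy are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 900        # toy tile count (the paper uses 917 tiles)
SIDE = 30      # tiles laid out on a 30 x 30 grid in each structure
W = 3          # assumed critical-nucleus size: a W x W patch of tiles

# Each alternative structure places the same N tile species at different
# positions; here, three random layouts stand in for the designed ones.
structures = [rng.permutation(N).reshape(SIDE, SIDE) for _ in range(3)]

def nucleation_score(conc, layout):
    """Cartoon nucleation rate: dominated by the best-supplied W x W patch,
    i.e. by colocalized high-concentration tiles (geometric mean as proxy)."""
    grid = conc[layout]  # concentration of the tile sitting at each site
    best = 0.0
    for i in range(SIDE - W + 1):
        for j in range(SIDE - W + 1):
            patch = grid[i:i + W, j:j + W]
            best = max(best, float(np.exp(np.log(patch).mean())))
    return best

def classify(conc):
    """Winner-take-all: the structure that nucleates fastest wins."""
    return int(np.argmax([nucleation_score(conc, s) for s in structures]))

# A concentration pattern whose high tiles are colocalized in structure 0
# is classified as category 0.
conc = np.full(N, 0.05)
conc[structures[0][10:13, 10:13].ravel()] = 1.0
print(classify(conc))  # -> 0
```
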
  2. Our planet is a self-sustaining ecosystem powered by light energy from the sun, but roughly closed to matter. Many ecosystems on Earth are also approximately closed to matter and recycle nutrients by self-organizing stable nutrient cycles, e.g., microbial mats, lakes, open ocean gyres. However, existing ecological models do not exhibit the self-organization and dynamical stability widely observed in such planetary-scale ecosystems. Here, we advance a conceptual model that explains the self-organization, stability, and emergent features of closed microbial ecosystems. Our model incorporates the bioenergetics of metabolism into an ecological framework. By studying this model, we uncover a crucial thermodynamic feedback loop that enables metabolically diverse communities to almost always stabilize nutrient cycles. Surprisingly, highly diverse communities self-organize to extract ~10% of the maximum extractable energy, or ~100-fold more than randomized communities. Further, with increasing diversity, distinct ecosystems show strongly correlated fluxes through nutrient cycles. However, as the driving force from light increases, the fluxes of nutrient cycles become more variable and species-dependent. Our results highlight that self-organization promotes the efficiency and stability of complex ecosystems at extracting energy from the environment, even in the absence of any centralized coordination.

     
    Free, publicly-accessible full text available December 26, 2024
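
As a rough illustration of the kind of model sketched in the abstract above (not the authors' actual equations), the toy below closes a loop of nutrients, assigns one species to each conversion step, and uses reversible mass-action fluxes whose back-reaction term provides the stabilizing thermodynamic feedback; a single light-driven step powers the cycle. The rate constants, uniform energy yield, and maintenance cost are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6                           # nutrients arranged in a closed cycle
K = rng.uniform(0.2, 0.8, M)    # assumed equilibrium constants per step
EPS, COST = 0.3, 0.05           # assumed energy yield per flux, maintenance cost

c = np.full(M, 1.0)             # nutrient concentrations (closed: total is fixed)
x = np.full(M, 0.1)             # biomass of the species catalyzing each step
dt, light = 0.01, 0.5           # light drives step 0 "uphill"

for _ in range(50_000):
    # Reversible flux with a back-reaction term: as a product accumulates,
    # net flux falls -- the stabilizing thermodynamic feedback.
    f = x * (c - K * np.roll(c, -1))
    f[0] += light * c[0]                 # light-powered regeneration step
    c += dt * (np.roll(f, 1) - f)        # nutrient i: made by step i-1, used by step i
    x += dt * x * (EPS * f - COST)       # growth from harvested energy minus upkeep

print("cycle fluxes:", np.round(f, 3))
print("extracted power:", round(float(EPS * np.sum(f)), 3))
```
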
  3. Learning is traditionally studied in biological or computational systems. The power of learning frameworks in solving hard inverse problems provides an appealing case for the development of physical learning in which physical systems adopt desirable properties on their own without computational design. It was recently realized that large classes of physical systems can physically learn through local learning rules, autonomously adapting their parameters in response to observed examples of use. We review recent work in the emerging field of physical learning, describing theoretical and experimental advances in areas ranging from molecular self-assembly to flow networks and mechanical materials. Physical learning machines provide multiple practical advantages over computer-designed ones: in particular, they do not require an accurate model of the system, and they can autonomously adapt to changing needs over time. As theoretical constructs, physical learning machines afford a novel perspective on how physical constraints modify abstract learning theory.
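
One concrete local learning rule from this literature is coupled learning in a linear flow network: each edge updates its own conductance by comparing the pressure drop it experiences in a "free" state with that in a gently "clamped" state. The sketch below is illustrative; the network, target pressure, and learning rates are assumed.

```python
import numpy as np

N = 8
# A small flow (resistor) network: a chain plus a few chords, so it is connected.
edges = [(i, i + 1) for i in range(N - 1)] + [(0, 3), (1, 5), (2, 6), (4, 7)]
G = np.ones(len(edges))          # edge conductances: the physical learning DOF

SRC, GND, OUT = 0, N - 1, 3      # boundary nodes and the output node (assumed)
DESIRED = 0.35                   # desired pressure at OUT (assumed target)

def solve(G, fixed):
    """Node pressures given conductances and pressure-clamped nodes."""
    L = np.zeros((N, N))
    for (i, j), g in zip(edges, G):
        L[i, i] += g; L[j, j] += g; L[i, j] -= g; L[j, i] -= g
    free = [n for n in range(N) if n not in fixed]
    p = np.zeros(N)
    for n, v in fixed.items():
        p[n] = v
    idx = list(fixed)
    b = -L[np.ix_(free, idx)] @ np.array([fixed[n] for n in idx])
    p[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return p

eta, nudge = 0.5, 0.2
for _ in range(200):
    p_f = solve(G, {SRC: 1.0, GND: 0.0})                 # free state
    clamp = p_f[OUT] + nudge * (DESIRED - p_f[OUT])      # gentle nudge toward target
    p_c = solve(G, {SRC: 1.0, GND: 0.0, OUT: clamp})     # clamped state
    for k, (i, j) in enumerate(edges):
        dpf, dpc = p_f[i] - p_f[j], p_c[i] - p_c[j]
        # Local rule: each edge updates from its own two states only.
        G[k] += (eta / nudge) * (dpf ** 2 - dpc ** 2)
    G = np.clip(G, 0.01, None)

print(round(solve(G, {SRC: 1.0, GND: 0.0})[OUT], 3), "target:", DESIRED)
```
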
  4. Evolution in time-varying environments naturally leads to adaptable biological systems that can easily switch functionalities. Advances in the synthesis of environmentally responsive materials therefore open up the possibility of creating a wide range of synthetic materials that can also be trained for adaptability. We consider high-dimensional inverse problems for materials where any particular functionality can be realized by numerous equivalent choices of design parameters. By periodically switching targets in a given design algorithm, we can teach a material to perform incompatible functionalities with minimal changes in design parameters. We exhibit this learning strategy for adaptability in two simulated settings: elastic networks that are designed to switch deformation modes with minimal bond changes and heteropolymers whose folding pathway selections are controlled by a minimal set of monomer affinities. The resulting designs can reveal physical principles, such as nucleation-controlled folding, that enable such adaptability.
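
The mechanism described above can be seen in a stripped-down linear analogue (an illustrative stand-in for the elastic-network and heteropolymer settings): each functionality is an underdetermined set of linear constraints on the design parameters, the two sets are jointly unsatisfiable, and periodically switching the design target amounts to alternating projections, which settle at the closest pair of designs so that switching functionality requires only a minimal parameter change.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 40, 25            # 40 design parameters; each functionality = 25 constraints
A, ya = rng.normal(size=(m, n)), rng.normal(size=m)   # functionality A
B, yb = rng.normal(size=(m, n)), rng.normal(size=m)   # functionality B
# Each functionality alone is satisfiable (25 < 40 parameters), but jointly
# they are incompatible (50 constraints > 40 parameters).

def project(M, y, th):
    """Smallest parameter change realizing functionality (M, y)."""
    return th - M.T @ np.linalg.solve(M @ M.T, M @ th - y)

theta = rng.normal(size=n)
sol_a = project(A, ya, theta)
print("naive switch distance:  ",
      round(float(np.linalg.norm(sol_a - project(B, yb, sol_a))), 3))

# Periodic target switching = alternating projections: the design settles at
# the closest pair of solutions, one per functionality.
for _ in range(500):
    theta = project(A, ya, theta)
    theta = project(B, yb, theta)
print("trained switch distance:",
      round(float(np.linalg.norm(theta - project(A, ya, theta))), 3))
```
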
  5. Continuous attractors have been used to understand recent neuroscience experiments where persistent activity patterns encode internal representations of external attributes like head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space- and time-dependent external sensory or motor signals are not understood. Here, we find fundamental limits on how rapidly internal representations encoded along continuous attractors can be updated by an external signal. We apply these results to place cell networks to derive a velocity-dependent nonequilibrium memory capacity in neural networks.
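
The speed limit described above can be caricatured with a one-variable phase model of a driven bump (a cartoon, not the paper's network model): the bump position s is pulled toward a moving cue u(t) with maximum restoring speed k, so it tracks drives slower than k and slips for faster ones. The parameter values are arbitrary.

```python
import numpy as np

# Cartoon of a driven continuous attractor: bump position s is pulled toward
# a moving external cue u(t) with maximum restoring speed k (assumed value).
k, dt, steps = 1.0, 0.01, 4000

for v in (0.5, 1.5):                    # cue velocity below / above the limit k
    s = 0.0
    for t in range(steps):
        u = v * t * dt                  # cue moves at constant velocity v
        s += dt * k * np.sin(u - s)     # bump dynamics along the attractor
    # Below the limit the lag settles at arcsin(v/k); above it, phase slips
    # accumulate and the internal representation loses the cue.
    print(f"v = {v}: final lag u - s = {u - s:.2f}")
```
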
  6. Mechanical metamaterials are usually designed to show desired responses to prescribed forces. In some applications, the desired force–response relationship is hard to specify exactly, but examples of forces and desired responses are easily available. Here, we propose a framework for supervised learning in thin, creased sheets that learn the desired force–response behavior by physically experiencing training examples and then, crucially, respond correctly (generalize) to previously unseen test forces. During training, we fold the sheet using training forces, prompting local crease stiffnesses to change in proportion to their experienced strain. We find that this learning process reshapes nonlinearities inherent in folding a sheet so as to show the correct response for previously unseen test forces. We show the relationship between training error, test error, and sheet size (model complexity) in learning sheets and compare them to counterparts in machine-learning algorithms. Our framework shows how the rugged energy landscape of disordered mechanical materials can be sculpted to show desired force–response behaviors by a local physical learning process.

     
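
A linearized cartoon of the training rule described above, ignoring the folding nonlinearity the paper exploits: crease stiffnesses soften in proportion to the strain each crease experiences under noisy training folds, concentrating compliance on the trained pattern and improving the response to a held-out test force. The crease count, update rate, and force model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
C = 50                               # number of creases in the sheet (assumed)
k = np.ones(C)                       # crease stiffnesses: the learning DOF
pattern = rng.random(C) < 0.2        # creases the desired response should fold

def respond(force, k):
    theta = force / k                # toy linearized fold response
    return theta / np.linalg.norm(theta)

def training_force():
    """Noisy example of use: load concentrated on the pattern's creases."""
    f = 0.1 * rng.random(C)
    f[pattern] += rng.random(int(pattern.sum()))
    return f

target = pattern / np.linalg.norm(pattern)
test = training_force()              # held-out test force
print("similarity before:", round(float(respond(test, k) @ target), 2))

eta = 0.05
for _ in range(100):
    theta = training_force() / k
    k = np.clip(k - eta * np.abs(theta), 0.05, None)   # strain-proportional softening

print("similarity after: ", round(float(respond(test, k) @ target), 2))
```
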