Title: Learning Without Neurons in Physical Systems
Learning is traditionally studied in biological or computational systems. The power of learning frameworks in solving hard inverse problems provides an appealing case for the development of physical learning, in which physical systems adopt desirable properties on their own without computational design. It was recently realized that large classes of physical systems can physically learn through local learning rules, autonomously adapting their parameters in response to observed examples of use. We review recent work in the emerging field of physical learning, describing theoretical and experimental advances in areas ranging from molecular self-assembly to flow networks and mechanical materials. Physical learning machines provide multiple practical advantages over computer-designed ones, in particular by not requiring an accurate model of the system and by autonomously adapting to changing needs over time. As theoretical constructs, physical learning machines afford a novel perspective on how physical constraints modify abstract learning theory.
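The local learning rules mentioned above can be illustrated with a minimal sketch of "coupled learning" on the simplest possible flow network, a two-resistor voltage divider. All names and parameter values here are illustrative stand-ins, not taken from the review: each edge updates its own conductance using only locally measurable voltage drops, comparing a free state to a state in which the output is gently nudged toward the target.

```python
import numpy as np

# Toy coupled-learning sketch on a two-resistor voltage divider.
# Illustrative values; not the review's own example.
V_in, target = 1.0, 0.3      # applied voltage and desired output voltage
g = np.array([1.0, 1.0])     # edge conductances: the physical learning DOFs
eta, alpha = 0.1, 0.5        # nudge amplitude and learning rate

def v_out(g):
    # Node between edge 1 (to V_in) and edge 2 (to ground):
    # current balance gives V = g1 * V_in / (g1 + g2).
    return g[0] * V_in / (g[0] + g[1])

def drops(v):
    # Voltage drop across each edge -- a locally measurable quantity.
    return np.array([V_in - v, v])

for _ in range(300):
    v_free = v_out(g)                            # free state
    v_clamp = v_free + eta * (target - v_free)   # output nudged toward target
    # Local rule: each edge compares its own dissipation in the two states.
    g += (alpha / eta) * (drops(v_free) ** 2 - drops(v_clamp) ** 2)
    g = np.clip(g, 0.01, None)                   # conductances stay positive
```

No global gradient is ever computed; each conductance sees only the drop across its own edge, yet the output converges to the target.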
Award ID(s):
2011854 2005749
PAR ID:
10418562
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Annual Review of Condensed Matter Physics
Volume:
14
Issue:
1
ISSN:
1947-5454
Page Range / eLocation ID:
417 to 441
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    In the field of soft robotics, harnessing the nonlinear dynamics of soft and compliant bodies as a computational resource to enable embodied intelligence and control is known as morphological computation. Physical reservoir computing (PRC) is a true instance of morphological computation wherein a physical nonlinear dynamic system is used as a fixed reservoir to perform complex computational tasks. These dynamic reservoirs can be used to approximate nonlinear dynamical systems and even perform machine learning tasks. By numerical simulation, this study illustrates that an origami metamaterial can also be used as a dynamic reservoir for pattern generation, output modulation, and input sensing. These results could pave the way for intelligently designed origami-based robots that interact with the environment through a distributed network of sensors and actuators. This embodied intelligence will enable the next generations of soft robots to autonomously coordinate and modulate their activities, such as locomotion gait generation and limb manipulation, while resisting external disturbances.
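The reservoir computing scheme described above can be sketched in software with an echo-state-network stand-in: a fixed random recurrent network plays the role of the physical reservoir, and only a linear readout is trained. The parameters below are illustrative; in PRC the tanh network would be replaced by the real dynamics of, e.g., an origami metamaterial.

```python
import numpy as np

# Echo-state-network sketch of reservoir computing: the reservoir is
# fixed and random; only the linear readout is learned.
rng = np.random.default_rng(0)
N, T, washout = 100, 1000, 100
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius 0.9: fading memory
w_in = rng.uniform(-0.5, 0.5, N)

u = np.sin(0.2 * np.arange(T))                 # input signal
y = 0.5 * np.roll(u, 3)                        # target: delayed, scaled copy

x, states = np.zeros(N), []
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])           # reservoir is never trained
    states.append(x.copy())
X, Y = np.array(states)[washout:], y[washout:]

# Train only the readout, via ridge regression.
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
nmse = np.mean((X @ w_out - Y) ** 2) / np.var(Y)
```

Because all the nonlinear processing happens in the fixed dynamics, training reduces to one linear solve — the property that makes physical substrates attractive as reservoirs.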
  2. Machine learning has been proposed as an alternative to theoretical modeling when dealing with complex problems in biological physics. However, in this perspective, we argue that a more successful approach is a proper combination of these two methodologies. We discuss how ideas from the physical modeling of neuronal processing led to early formulations of computational neural networks, e.g., Hopfield networks. We then show how modern learning approaches like Potts models, Boltzmann machines, and the transformer architecture are related to each other, specifically through a shared energy representation. We summarize recent efforts to establish these connections and provide examples of how each of these formulations, integrating physical modeling and machine learning, has been successful in tackling recent problems in biomolecular structure, dynamics, function, evolution, and design. Instances include protein structure prediction; improvements in the computational complexity and accuracy of molecular dynamics simulations; better inference of the effects of mutations in proteins, leading to improved evolutionary modeling; and, finally, how machine learning is revolutionizing protein engineering and design. Going beyond naturally existing protein sequences, a connection to protein design is discussed where synthetic sequences are able to fold to naturally occurring motifs driven by a model rooted in physical principles. We show that this model is "learnable" and propose its future use in the generation of unique sequences that can fold into a target structure.
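The Hopfield network mentioned above is the canonical bridge between physical modeling and learning: patterns are stored with a Hebbian rule, and recall is relaxation down an energy landscape. A minimal sketch (toy sizes, not from the perspective article):

```python
import numpy as np

# Hopfield network sketch: Hebbian storage, energy-descending recall.
rng = np.random.default_rng(1)
N, P = 50, 2
xi = rng.choice([-1, 1], size=(P, N))       # stored +-1 patterns
W = (xi.T @ xi) / N                         # Hebbian rule
np.fill_diagonal(W, 0.0)                    # no self-coupling

def energy(s):
    # Ising-like energy; asynchronous updates never increase it.
    return -0.5 * s @ W @ s

# Corrupt a stored pattern by flipping 5 spins, then recall it.
s = xi[0].copy()
flip = rng.choice(N, size=5, replace=False)
s[flip] *= -1

e0 = energy(s)
for _ in range(5):                          # asynchronous update sweeps
    for i in range(N):
        s[i] = 1 if W[i] @ s >= 0 else -1
overlap = abs(s @ xi[0]) / N                # 1.0 means perfect recall
```

The same energy bookkeeping, generalized, underlies Boltzmann machines and the energy-based reading of attention discussed in the abstract.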
  3. Kernel methods provide an elegant framework for developing nonlinear learning algorithms from simple linear methods. Though these methods have superior empirical performance in several real data applications, their usefulness is inhibited by the significant computational burden incurred in large sample situations. Various approximation schemes have been proposed in the literature to alleviate these computational issues, and the approximate kernel machines are shown to retain the empirical performance. However, the theoretical properties of these approximate kernel machines are less well understood. In this work, we theoretically study the trade-off between computational complexity and statistical accuracy in Nyström approximate kernel principal component analysis (KPCA), wherein we show that the Nyström approximate KPCA matches the statistical performance of (non-approximate) KPCA while remaining computationally beneficial. Additionally, we show that Nyström approximate KPCA outperforms the statistical behavior of another popular approximation scheme, the random feature approximation, when applied to KPCA.
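The Nyström idea is to reconstruct the full kernel matrix from a few landmark columns. A toy sketch (synthetic data; kernel centering, which full KPCA requires, is omitted for brevity) comparing the leading eigenvalue of the exact and approximate kernel matrices:

```python
import numpy as np

# Nystrom approximation sketch: rebuild an RBF kernel matrix from m
# landmark columns and compare top eigenvalues with the exact matrix.
rng = np.random.default_rng(2)
n, m, gamma = 200, 50, 0.5
X = np.vstack([rng.normal(0, 1, (n // 2, 2)),
               rng.normal(3, 1, (n // 2, 2))])     # two Gaussian clusters

def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)                                 # exact n x n kernel matrix
idx = rng.choice(n, size=m, replace=False)    # uniform landmark sampling
K_nm, K_mm = K[:, idx], K[np.ix_(idx, idx)]
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T   # Nystrom reconstruction

lam = np.linalg.eigvalsh(K)[-1]               # leading KPCA variance
lam_approx = np.linalg.eigvalsh(K_approx)[-1]
rel_err = abs(lam_approx - lam) / lam
```

In practice the point is that the n x n matrices are never formed — eigenvectors are recovered from the m x m block — which is where the computational benefit the abstract refers to comes from; the full reconstruction above is only for checking accuracy.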
  4. Autonomous Vehicles (AVs) are revolutionizing transportation, but their reliance on interconnected cyber-physical systems exposes them to unprecedented cybersecurity risks. This study addresses the critical challenge of detecting real-time cyber intrusions in self-driving vehicles by leveraging a dataset from the Udacity self-driving car project. We simulate four high-impact attack vectors, Denial of Service (DoS), spoofing, replay, and fuzzy attacks, by injecting noise into spatial features (e.g., bounding box coordinates) to replicate adversarial scenarios. We develop and evaluate two lightweight neural network architectures (NN-1 and NN-2) alongside a logistic regression baseline (LG-1) for intrusion detection. The models achieve exceptional performance, with NN-2 attaining an AUC score of 93.15% and 93.15% accuracy, demonstrating their suitability for edge deployment in AV environments. Through explainable AI techniques, we uncover unique forensic fingerprints of each attack type, such as spatial corruption in fuzzy attacks and temporal anomalies in replay attacks, offering actionable insights for feature engineering and proactive defense. Visual analytics, including confusion matrices, ROC curves, and feature importance plots, validate the models' robustness and interpretability. This research sets a new benchmark for AV cybersecurity, delivering a scalable, field-ready toolkit for Original Equipment Manufacturers (OEMs) and policymakers. By aligning intrusion fingerprints with SAE J3061 automotive security standards, we provide a pathway for integrating machine learning into safety-critical AV systems. Our findings underscore the urgent need for security-by-design AI, ensuring that AVs not only drive autonomously but also defend autonomously. This work bridges the gap between theoretical cybersecurity and life-preserving engineering, offering a leap toward safer, more secure autonomous transportation. 
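The attack-injection setup described above can be sketched with synthetic data: a smooth bounding-box track is corrupted with spatial noise (the "fuzzy attack"), and a tiny hand-rolled logistic regression is trained on frame-to-frame coordinate jumps. Everything below is an illustrative assumption, not the paper's dataset, features, or models (NN-1/NN-2/LG-1).

```python
import numpy as np

# Sketch: fuzzy-attack simulation on bounding-box coordinates plus a
# minimal logistic-regression intrusion detector. Synthetic data only.
rng = np.random.default_rng(3)
T = 2000
clean = np.cumsum(rng.normal(0, 0.01, (T, 4)), axis=0)  # smooth box track (x, y, w, h)
attacked = clean + rng.uniform(-0.3, 0.3, (T, 4))       # fuzzy attack: spatial noise

# Feature: absolute frame-to-frame jump per coordinate -- the kind of
# spatial-corruption fingerprint the abstract attributes to fuzzy attacks.
feats = np.vstack([np.abs(np.diff(clean, axis=0)),
                   np.abs(np.diff(attacked, axis=0))])
labels = np.r_[np.zeros(T - 1), np.ones(T - 1)]         # 0 = benign, 1 = attack

# Logistic regression fitted by batch gradient descent.
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    grad = p - labels
    w -= 0.5 * feats.T @ grad / len(labels)
    b -= 0.5 * grad.mean()
acc = (((feats @ w + b) > 0) == labels).mean()
```

Even this linear baseline separates the classes because the attack's spatial corruption dominates the benign jump statistics — consistent with the abstract's point that each attack type leaves a distinct forensic fingerprint in the features.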
  5. There is a growing demand for low-power, autonomously learning artificial intelligence (AI) systems that can be applied at the edge and rapidly adapt to the specific situation at the deployment site. However, current AI models struggle in such scenarios, often requiring extensive fine-tuning, computational resources, and data. In contrast, humans can effortlessly adjust to new tasks by transferring knowledge from related ones. The concept of learning-to-learn (L2L) mimics this process and enables AI models to rapidly adapt with only little computational effort and data. In-memory computing neuromorphic hardware (NMHW) is inspired by the brain's operating principles and mimics its physical co-location of memory and compute. In this work, we pair L2L with in-memory computing NMHW based on phase-change memory devices to build efficient AI models that can rapidly adapt to new tasks. We demonstrate the versatility of our approach in two scenarios: a convolutional neural network performing image classification and a biologically inspired spiking neural network generating motor commands for a real robotic arm. Both models rapidly learn with few parameter updates. Deployed on the NMHW, they perform on par with their software equivalents. Moreover, meta-training of these models can be performed in software with high precision, alleviating the need for accurate hardware models.
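The learning-to-learn idea can be sketched with a Reptile-style meta-update on a toy task family — far simpler than the paper's NMHW-deployed networks, and with hypothetical parameters throughout: meta-training finds an initialization from which a single gradient step adapts to a new related task better than a cold start does.

```python
import numpy as np

# Reptile-style learning-to-learn sketch on a toy family of tasks:
# lines sharing slope 2.0 but with random intercepts. Illustrative only.
rng = np.random.default_rng(4)

def task():
    b = rng.normal(0.0, 1.0)                # each task: y = 2x + b
    x = rng.uniform(-1, 1, 20)
    return x, 2.0 * x + b

def sgd(theta, x, y, steps=5, lr=0.1):
    w, b = theta
    for _ in range(steps):                  # inner-loop adaptation
        err = w * x + b - y
        w -= lr * 2 * (err * x).mean()
        b -= lr * 2 * err.mean()
    return np.array([w, b])

meta = np.zeros(2)
for _ in range(300):                        # Reptile outer loop
    x, y = task()
    adapted = sgd(meta.copy(), x, y)
    meta += 0.1 * (adapted - meta)          # move init toward adapted weights

# A meta-learned init adapts to a brand-new task in one step better
# than a cold start, because the shared structure (the slope) is baked in.
x, y = task()
def loss(theta):
    return ((theta[0] * x + theta[1] - y) ** 2).mean()
fast = sgd(meta.copy(), x, y, steps=1)
cold = sgd(np.zeros(2), x, y, steps=1)
```

This mirrors the abstract's workflow: the expensive meta-training happens offline in high-precision software, while deployment-time adaptation needs only a few cheap parameter updates.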