Title: Computing by Programmable Particles.
This is a chapter in the book "Distributed Computing by Mobile Entities: Current Research in Moving and Computing" (Springer Nature). The vision for programmable matter is to realize a physical substance that is scalable, versatile, instantly reconfigurable, safe to handle, and robust to failures. Programmable matter could be deployed across a variety of domains to address a wide gamut of problems, including applications in construction, environmental science, synthetic biology, and space exploration. However, considerable engineering and computational challenges must be overcome before such a system can be implemented. Towards developing efficient algorithms for novel programmable matter behaviors, the amoebot model for self-organizing particle systems and its variant, hybrid programmable matter, provide formal computational frameworks that facilitate rigorous algorithmic research. In this chapter, we discuss distributed algorithms under these models for shape formation, shape recognition, object coating, compression, shortcut bridging, and separation, in addition to some underlying algorithmic primitives.
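As a rough illustration of the amoebot model's mechanics (a minimal sketch under our own assumptions, not the chapter's formal definitions), each particle occupies one node of the triangular lattice when contracted or two adjacent nodes when expanded, and it moves through expansions and contractions decided only from its local neighborhood:

```python
# A single amoebot particle on the triangular lattice: contracted when
# head == tail, expanded when occupying two adjacent nodes. The class
# names and the axial-coordinate encoding are illustrative assumptions.

# The six neighbor directions of a lattice node in axial coordinates.
DIRECTIONS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

class Particle:
    def __init__(self, node):
        self.head = node  # node occupied by the particle's head
        self.tail = node  # equals head while contracted

    @property
    def is_expanded(self):
        return self.head != self.tail

    def expand(self, direction, occupied):
        """Expand the head into an adjacent unoccupied node."""
        dq, dr = DIRECTIONS[direction]
        target = (self.head[0] + dq, self.head[1] + dr)
        if self.is_expanded or target in occupied:
            return False
        self.tail, self.head = self.head, target
        occupied.add(target)
        return True

    def contract(self, occupied):
        """Contract the tail into the head, completing one move."""
        if not self.is_expanded:
            return False
        occupied.discard(self.tail)
        self.tail = self.head
        return True

# One full movement step of a lone particle.
occupied = {(0, 0)}
p = Particle((0, 0))
p.expand(0, occupied)   # expand east
p.contract(occupied)
assert p.head == (1, 0) and not p.is_expanded
```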
Award ID(s): 1733680, 1637393, 1422603
NSF-PAR ID: 10084073
Published in: Springer LNCS, Volume 11340
ISSN: 0947-5427
Page Range: 615-681
Sponsoring Org: National Science Foundation
More Like this
  1. Bessani, Alysson; Défago, Xavier; Nakamura, Junya; Wada, Koichi; Yamauchi, Yukiko (Eds.)
    Individual modules of programmable matter participate in their system's collective behavior by expending energy to perform actions. However, not all modules may have access to the external energy source powering the system, necessitating a local and distributed strategy for supplying energy to modules. In this work, we present a general energy distribution framework for the canonical amoebot model of programmable matter that transforms energy-agnostic algorithms into energy-constrained ones with equivalent behavior and an 𝒪(n²)-round runtime overhead, even under an unfair adversary, provided the original algorithms satisfy certain conventions. We then prove that existing amoebot algorithms for leader election (ICDCN 2023) and shape formation (Distributed Computing, 2023) are compatible with this framework and show simulations of their energy-constrained counterparts, demonstrating how other unfair algorithms can be generalized to the energy-constrained setting with relatively little effort. Finally, we show that our energy distribution framework can be composed with the concurrency control framework for amoebot algorithms (Distributed Computing, 2023), allowing algorithm designers to focus on the simpler energy-agnostic, sequential setting while gaining the general applicability of energy-constrained, asynchronous correctness.
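    A toy, round-based sketch of the kind of local energy sharing such a framework requires (the harvest/share rule and all constants below are our own illustrative assumptions, not the paper's conventions):

    ```python
    # A chain of n particles in which only particle 0 touches the
    # external energy source; each round it harvests one unit, and every
    # particle passes one unit to a strictly poorer right neighbor.
    CAPACITY = 4  # battery capacity per particle (assumed)

    def round_step(batteries):
        """One synchronous round of harvesting and local sharing."""
        batteries[0] = min(CAPACITY, batteries[0] + 1)  # harvest at source
        for i in range(len(batteries) - 1):
            if batteries[i] >= 2 and batteries[i] > batteries[i + 1]:
                batteries[i] -= 1
                batteries[i + 1] += 1

    batteries = [0] * 6
    for _ in range(40):
        round_step(batteries)
    print(batteries)  # energy has propagated down the whole chain
    ```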
  2. Understanding the algorithmic behaviors that are in principle realizable in a chemical system is necessary for a rigorous understanding of the design principles of biological regulatory networks. Further, advances in synthetic biology herald the time when we will be able to rationally engineer complex chemical systems and when idealized formal models will become blueprints for engineering. Coupled chemical interactions in a well-mixed solution are commonly formalized as chemical reaction networks (CRNs). However, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. Here, we study the following problem: What functions f: ℝᵏ → ℝ can be computed by a CRN that eventually produces the correct amount of the "output" molecule, no matter the rate at which reactions proceed? This captures a previously unexplored but very natural class of computations: for example, the reaction X₁ + X₂ → Y can be thought of as computing the function y = min(x₁, x₂). Such a CRN is robust in the sense that it is correct whether its evolution is governed by the standard model of mass-action kinetics, by alternatives such as Hill-function or Michaelis-Menten kinetics, or by other arbitrary models of chemistry that respect the (fundamentally digital) stoichiometric constraints (what are the reactants and products?). We develop a reachability relation based on a broad notion of "what could happen" if reaction rates can vary arbitrarily over time. Using reachability, we define stable computation analogously to probability-1 computation in distributed computing and connect it with a seemingly stronger notion of rate-independent computation based on convergence in the limit t → ∞ under a wide class of generalized rate laws. Besides the direct mapping of a concentration to a nonnegative analog value, we also consider the "dual-rail representation," which can represent negative values as the difference of two concentrations and allows the composition of CRN modules. We prove that a function is rate-independently computable if and only if it is piecewise linear (with rational coefficients) and continuous (dual-rail representation), or nonnegative with discontinuities occurring only when some inputs switch from zero to positive (direct representation). The many contexts where continuous piecewise linear functions are powerful targets for implementation, combined with the systematic construction we develop for computing these functions, demonstrate the potential of rate-independent chemical computation.
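    A minimal simulation of the min-computing reaction mentioned above, X₁ + X₂ → Y, with rates that vary arbitrarily over time (the discretization and rate range are our own assumptions; only the stoichiometry comes from the abstract):

    ```python
    import random

    # Simulate X1 + X2 -> Y under an adversarially varying rate k(t).
    # Rate independence: the output still converges to min(x1, x2).
    def simulate_min(x1, x2, steps=10_000, dt=1e-3):
        y = 0.0
        for _ in range(steps):
            k = random.uniform(0.1, 10.0)          # arbitrary rate this step
            flux = min(k * x1 * x2 * dt, x1, x2)   # respect stoichiometry
            x1, x2, y = x1 - flux, x2 - flux, y + flux
        return y

    print(simulate_min(3.0, 5.0))  # approaches min(3, 5) = 3 for any rates
    ```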
  3. The computation of Vietoris-Rips persistence barcodes is both execution-intensive and memory-intensive. In this paper, we study the computational structure of Vietoris-Rips persistence barcodes and identify several unique mathematical properties and algorithmic opportunities with connections to the GPU. Mathematically and empirically, we examine the properties of apparent pairs, which are independently identifiable persistence pairs comprising up to 99% of all persistence pairs. We give theoretical upper and lower bounds on the apparent pair rate and model the average case. We also design massively parallel algorithms to take advantage of the very large number of simplices that can be processed independently of one another. Having identified these opportunities, we develop GPU-accelerated software for computing Vietoris-Rips persistence barcodes, called Ripser++. The software achieves up to 30x speedup over the total execution time of the original Ripser and reduces CPU memory usage by up to 2.0x. We believe our GPU-acceleration efforts open a new chapter for the advancement of topological data analysis in the post-Moore's-Law era.
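    A brute-force sketch of the apparent-pair condition: a pair (σ, τ) is apparent when τ is the earliest cofacet of σ and σ is the latest facet of τ in the filtration order. The tie-breaking key below is a simplifying assumption of ours (Ripser orders simplices by diameter with reverse colexicographic tie-breaks), so this only illustrates the idea:

    ```python
    import math
    from itertools import combinations

    def diameter(simplex, dist):
        """Filtration value of a simplex: its largest pairwise distance."""
        return max((dist[i][j] for i, j in combinations(simplex, 2)),
                   default=0.0)

    def key(simplex, dist):
        # Order by diameter, breaking ties lexicographically (assumed).
        return (diameter(simplex, dist), tuple(sorted(simplex)))

    def apparent_pairs(n, dim, dist):
        pairs = []
        vertices = range(n)
        for sigma in combinations(vertices, dim + 1):
            cofacets = [tuple(sorted(set(sigma) | {v}))
                        for v in vertices if v not in sigma]
            tau = min(cofacets, key=lambda t: key(t, dist))  # earliest cofacet
            facets = [tuple(sorted(set(tau) - {v})) for v in tau]
            if max(facets, key=lambda s: key(s, dist)) == sigma:
                pairs.append((sigma, tau))  # sigma is tau's latest facet
        return pairs

    # Four points: a unit square; its diagonals pair with triangles.
    pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
    dist = [[math.dist(p, q) for q in pts] for p in pts]
    print(apparent_pairs(4, 1, dist))  # edge/triangle pairs of equal diameter
    ```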
  4. Parallel and distributed computing (PDC) has become pervasive in all aspects of computing, so it is essential that students incorporate parallelism and distribution into the computational thinking they apply to problem solving, from the very beginning. Computer science education, however, still teaches to a 20th-century model of algorithmic problem solving. Sequence, branch, and loop are taught in our early courses as the only organizing principles needed for algorithms, and we invest considerable time in showing how best to sequentially process large volumes of data. Yet nearly all computing devices that students use today have multiple cores, and many also have a GPU; most of their favorite applications use multiple cores and large numbers of distributed processors, and concurrency often offers simpler solutions than sequential approaches. Industry is desperate for software engineers who think naturally in terms of exploiting these capabilities rather than seeing them as an exotic upper-level topic layered over a sequential solution. Even so, we are still teaching students to solve problems using sequential thinking. In this workshop we overview key PDC concepts and provide examples of how they may be naturally incorporated in early computing classes (one such flavor is sketched below). We introduce plugged and unplugged curriculum modules that have been successfully integrated into existing computing classes at multiple institutions, and we highlight the upcoming summer training workshop, for which we have funding to support attendance, as well as other activities of CDER (the Center for Parallel and Distributed Computing Curriculum Development and Educational Resources).
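    An illustrative example of the kind of early-course exercise alluded to above (our own, not from the workshop materials): a sequential loop over independent work becomes a parallel map with essentially one structural change.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def count_primes(bounds):
        """Count primes in [lo, hi) by trial division (deliberately naive)."""
        lo, hi = bounds
        return sum(n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
                   for n in range(lo, hi))

    if __name__ == "__main__":
        ranges = [(i * 50_000, (i + 1) * 50_000) for i in range(8)]
        with ProcessPoolExecutor() as pool:  # swap in for a plain map/loop
            counts = pool.map(count_primes, ranges)
        print(sum(counts))
    ```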
  5. The accurate characterization of how different brain structures interact in terms of both structural and functional networks is an area of active research in neuroscience. A better understanding of these interactions can potentially lead to targeted treatments and improved therapies for many neurological disorders, such as epilepsy, which alone affects over 65 million people worldwide. The study of functional connectivity networks in epilepsy, which is characterized by abnormalities in brain electrical activity, will help to provide new insights into the onset and progression of this complex neurological disorder. In this chapter, we discuss statistical signal processing techniques and their use in determining functional connectivity among brain regions exhibiting epileptic activity. We also discuss computational challenges associated with deriving functional connectivity measures from neurological Big Data, and we introduce our highly scalable signal processing pipeline for quantifying functional connectivity with the goal of addressing these challenges and potentially advancing understanding of the underlying mechanisms of epilepsy. This pipeline makes use of a novel signal data format that facilitates storing and retrieving data in a distributed computing environment. We conclude the chapter by describing our current activities and proposed plans for improving our computational pipeline, such as the inclusion of biomedical ontologies for semantic annotation in order to facilitate the integration and retrieval of signal data. 
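    A minimal sketch of one common functional-connectivity measure, pairwise Pearson correlation between recorded channels (the chapter's pipeline covers more techniques and a distributed data format; the synthetic data and threshold-free readout below are our own assumptions):

    ```python
    import numpy as np

    # Build a channels x channels connectivity matrix from raw signals.
    rng = np.random.default_rng(0)
    n_channels, n_samples = 8, 1000
    signals = rng.standard_normal((n_channels, n_samples))
    signals[1] = 0.7 * signals[0] + 0.3 * signals[1]  # couple two channels

    connectivity = np.corrcoef(signals)   # Pearson r between channel pairs
    np.fill_diagonal(connectivity, 0.0)   # ignore self-connections
    i, j = np.unravel_index(np.abs(connectivity).argmax(),
                            connectivity.shape)
    print(f"strongest coupling: channels {i} and {j} "
          f"(r = {connectivity[i, j]:.2f})")
    ```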