

Search for: All records

Creators/Authors contains: "Mallmann-Trenn, Frederik"


  1. Deploying nanoscopic particles and robots in the human body promises increasingly selective drug delivery with fewer side effects. We consider the problem of a homogeneous swarm of nanobots locating a single cancerous region and treating it by releasing an onboard payload of drugs once at the site. At the nanoscale, the computation, communication, sensing, and locomotion capabilities of individual agents are extremely limited, noisy, or nonexistent. We present a general model that formally describes the individual and collective behavior of agents in a colloidal environment, such as the bloodstream, for the problem of cancer detection and treatment by nanobots. This includes a feasible and precise model of agent locomotion, inspired by actual nanoscopic vesicles which, in the presence of an external chemical gradient, tend towards areas of higher concentration by means of self-propulsion. The delivered payloads serve a dual purpose: treating the cancer and diffusing throughout the space to form a chemical gradient that other agents can sense and noisily ascend. We present simulation results to analyze the behavior of individual agents under our locomotion model and to investigate the efficacy of this collectively amplified chemical signal in helping the larger swarm efficiently locate the cancer site.
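The locomotion described in item 1 — agents that noisily ascend a chemical gradient — can be illustrated with a toy simulation. This is a minimal sketch, not the paper's actual model: the Gaussian concentration field, step size, noise level, and function names below are all illustrative assumptions.

```python
import math
import random

def concentration(x, y, src=(0.0, 0.0)):
    """Toy chemical field: a Gaussian bump centered on the source."""
    dx, dy = x - src[0], y - src[1]
    return math.exp(-(dx * dx + dy * dy) / 10.0)

def noisy_ascent_step(x, y, step=0.1, noise=0.5):
    """Self-propelled step: estimate the local gradient by finite
    differences, then move a fixed distance in roughly that direction,
    perturbed by Gaussian angular noise."""
    eps = 1e-3
    gx = (concentration(x + eps, y) - concentration(x - eps, y)) / (2 * eps)
    gy = (concentration(x, y + eps) - concentration(x, y - eps)) / (2 * eps)
    angle = math.atan2(gy, gx) + random.gauss(0.0, noise)
    return x + step * math.cos(angle), y + step * math.sin(angle)

random.seed(1)
x, y = 3.0, -2.0                     # agent starts away from the source
for _ in range(500):
    x, y = noisy_ascent_step(x, y)
print(math.hypot(x, y))              # distance to the source after 500 steps
```

Despite the angular noise, the inward drift dominates, so the agent ends up hovering near the source.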
  2. We continue our study, begun in [5], of how concepts with hierarchical structure might be represented in brain-like neural networks, how these representations might be used to recognize the concepts, and how these representations might be learned. In [5], we considered simple tree-structured concepts and feed-forward layered networks. Here we extend the model in two ways: we allow limited overlap between the children of different concepts, and we allow networks to include feedback edges. For these more general cases, we describe and analyze algorithms for recognition and algorithms for learning.
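The tree-structured concepts of item 2 can be pictured with a small bottom-up recognition sketch. This is purely illustrative and hypothetical — it models only feed-forward recognition with overlapping children (the "shared" leaf below belongs to two parents), not the paper's feedback edges or its actual algorithms.

```python
def recognizes(concept, active_leaves, threshold=1.0):
    """Recognize a tree-structured concept bottom-up: a leaf fires if it
    is among the currently presented leaves; an internal concept fires if
    at least `threshold` fraction of its children fire."""
    children = concept.get("children", [])
    if not children:
        return concept["name"] in active_leaves
    fired = sum(recognizes(c, active_leaves, threshold) for c in children)
    return fired >= threshold * len(children)

# A toy two-level concept; "shared" overlaps between the two children.
cat = {"name": "cat", "children": [
    {"name": "head", "children": [{"name": "ears"}, {"name": "shared"}]},
    {"name": "body", "children": [{"name": "legs"}, {"name": "shared"}]},
]}
print(recognizes(cat, {"ears", "shared", "legs"}))  # prints True
```

Lowering `threshold` below 1.0 gives a crude form of the noise tolerance discussed in items 3 and 4, since a concept can then fire even when some children are missing.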
  3. We use a recently developed synchronous Spiking Neural Network (SNN) model to study the problem of learning hierarchically structured concepts. We introduce an abstract data model that describes simple hierarchical concepts. We define a feed-forward layered SNN model, with learning modeled using Oja’s local learning rule, a well-known biologically plausible rule for adjusting synapse weights. We define what it means for such a network to recognize hierarchical concepts; our notion of recognition is robust, in that it tolerates a bounded amount of noise. Then, we present a learning algorithm by which a layered network may learn to recognize hierarchical concepts according to our robust definition. We analyze correctness and performance rigorously; the amount of time required to learn each concept, after learning all of the sub-concepts, is approximately O((1/η)·k·(ℓ_max·log(k) + 1/ε) + b·log(k)), where k is the number of sub-concepts per concept, ℓ_max is the maximum hierarchical depth, η is the learning rate, ε describes the amount of uncertainty allowed in robust recognition, and b describes the amount of weight decrease for "irrelevant" edges. An interesting feature of this algorithm is that it allows the network to learn sub-concepts in a highly interleaved manner. This algorithm assumes that the concepts are presented in a noise-free way; we also extend these results to accommodate noise in the learning process. Finally, we give a simple lower bound saying that, in order to recognize concepts with hierarchical depth two with noise tolerance, a neural network should have at least two layers. The results in this paper represent first steps in the theoretical study of hierarchical concepts using SNNs. The cases studied here are basic, but they suggest many directions for extensions to more elaborate and realistic cases.
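Item 3's learning mechanism is Oja's local rule: with output y = w·x, each weight is updated as Δw = η·y·(x − y·w), where the −y²·w term implicitly normalizes the weight vector. The sketch below shows the rule in isolation on a single unit with one repeated pattern; it is not the paper's layered SNN, and the pattern and learning rate are illustrative assumptions.

```python
import random

def oja_update(w, x, eta=0.05):
    """One step of Oja's local rule: dw = eta * y * (x - y * w), y = w . x.
    The decay term -y^2 * w bounds the weights (implicit normalization)."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [random.uniform(-0.1, 0.1) for _ in range(4)]   # small random start
# Repeatedly present a fixed "concept" pattern: first two inputs active.
pattern = [1.0, 1.0, 0.0, 0.0]
for _ in range(2000):
    w = oja_update(w, pattern)
print([round(wi, 2) for wi in w])
```

The weights converge to the unit vector in the pattern's direction (≈ [0.71, 0.71, 0, 0]): the relevant weights strengthen while the "irrelevant" ones decay toward zero, mirroring the roles of η and b in the time bound above.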
  5. We present a distributed Bayesian algorithm for robot swarms to classify a spatially distributed feature of an environment. This type of "go/no-go" decision appears in applications where a group of robots must collectively choose whether to take action, such as determining whether a farm field should be treated for pests. Previous bio-inspired approaches to decentralized decision-making in robotics lack a statistical foundation, while decentralized Bayesian algorithms typically require a strongly connected network of robots. In contrast, our algorithm allows simple, sparsely distributed robots to quickly reach accurate decisions about a binary feature of their environment. We investigate the speed vs. accuracy tradeoff in decision-making by varying the algorithm's parameters. We show that making fewer, less-correlated observations can improve decision-making accuracy, and that a well-chosen combination of prior and decision threshold allows for fast decisions at a small accuracy cost. Both speed and accuracy also improved with the addition of bio-inspired positive feedback. The algorithm is also adaptable to the difficulty of the environment: compared to a fixed-time benchmark algorithm with accuracy guarantees, our Bayesian approach resulted in equally accurate decisions while adapting its decision time to the difficulty of the environment.
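The go/no-go mechanism in item 5 can be illustrated for a single robot: maintain a posterior over the unknown fraction f of the environment that has the feature, update it with each binary observation, and commit once the posterior probability that f > 0.5 crosses a decision threshold. This is a hypothetical single-agent sketch — the grid posterior, threshold value, and Bernoulli observation model are illustrative assumptions, not the paper's distributed algorithm.

```python
import random

def decide(observe, threshold=0.95, grid=101, max_obs=1000):
    """Sequential Bayesian go/no-go: start from a uniform prior over the
    feature fraction f on a grid, update with each binary observation,
    and decide once P(f > 0.5) exits [1 - threshold, threshold]."""
    fs = [i / (grid - 1) for i in range(grid)]
    post = [1.0] * grid                          # uniform prior (unnormalized)
    for n in range(1, max_obs + 1):
        obs = observe()                          # 1 = feature observed here
        post = [p * (f if obs else 1.0 - f) for p, f in zip(post, fs)]
        total = sum(post)
        p_high = sum(p for p, f in zip(post, fs) if f > 0.5) / total
        if p_high > threshold:
            return "go", n
        if p_high < 1.0 - threshold:
            return "no-go", n
    return "undecided", max_obs

random.seed(42)
decision, steps = decide(lambda: 1 if random.random() < 0.7 else 0)
print(decision, steps)
```

A lower threshold yields faster but less accurate decisions, which is exactly the speed vs. accuracy tradeoff the abstract describes; in an easy environment (f far from 0.5) the posterior concentrates after only a handful of observations.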
  6. One of the most popular existing models for task allocation in ant colonies is the so-called threshold-based task allocation model. Here, each ant has a fixed, and possibly distinct, threshold. Each task has a fixed demand, which represents the number of ants required to perform the task. The stimulus an ant receives for a task is defined as the demand of the task minus the number of ants currently working at the task. An ant joins a task if the stimulus of the task exceeds the ant's threshold. A large body of results has studied this model for over four decades; however, most of the theoretical work focuses on the study of two tasks. Interestingly, no work in this line of research shows that the number of ants working at a task eventually converges towards the demand, nor does any work bound the distance to the demands over time. In this work, we study precisely this convergence. Our results show that while the threshold-based model works fine in the case of two tasks (for certain distributions of thresholds), it no longer works for the case of more than two tasks. In fact, we show that there is no possible setting of thresholds that yields a satisfactory deficit (demand minus number of ants working on the task) for each task. This is in stark contrast to other theoretical results in the same setting [CDLN14] that rely on state machines, i.e., some form of small memory together with probabilistic decisions. Note that the classical threshold model assumes no states or memory (apart from the bare minimum number of states required to encode which task an ant is working on). The task allocation resulting from such state-machine strategies is near-optimal and much better than what is possible using joining thresholds; this remains true even in a noisy environment [DLM+18].
While the deficit is not the only important metric, it is conceivably one of the most important metrics for guaranteeing the survival of a colony: for example, if the number of workers assigned to foraging stays significantly below the demand, starvation may occur. Moreover, our results do not imply that ants do not use thresholds; we merely argue that relying on thresholds yields considerably worse performance.
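The joining rule in item 6 — an ant joins a task when its stimulus (demand minus current workers) exceeds the ant's threshold — can be sketched as a toy simulation. This is a simplified, hypothetical illustration of the rule only: ants never quit or switch tasks here, and the ant count, demands, and threshold distribution are invented parameters, not the paper's analysis.

```python
import random

def simulate(num_ants=100, demands=(40, 35, 25), rounds=200):
    """Threshold-based joining sketch: each round, every idle ant joins
    the first task whose stimulus (demand - current workers) exceeds the
    ant's fixed threshold. Returns the final per-task deficits."""
    random.seed(0)
    thresholds = [random.uniform(0, 10) for _ in range(num_ants)]
    assignment = [None] * num_ants          # task index per ant, or None
    for _ in range(rounds):
        workers = [0] * len(demands)
        for a in assignment:
            if a is not None:
                workers[a] += 1
        for i in range(num_ants):
            if assignment[i] is None:
                for t, d in enumerate(demands):
                    if d - workers[t] > thresholds[i]:   # stimulus > threshold
                        assignment[i] = t
                        workers[t] += 1
                        break
    workers = [sum(1 for a in assignment if a == t) for t in range(len(demands))]
    return [d - w for d, w in zip(demands, workers)]

deficits = simulate()
print(deficits)
```

Once every task's deficit falls below the thresholds of all remaining idle ants, recruitment stalls, so positive deficits can persist — a toy version of the convergence question the abstract raises for more than two tasks.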