
Title: Magnetic bio-hybrid micro actuators
Over the past two decades, there has been a growing body of work on wireless devices that can operate on the length scales of biological cells and even smaller. One class of these devices receiving increasing attention is the bio-hybrid actuator: a tool that integrates biological cells or subcellular parts with synthetic or inorganic components. These devices are commonly controlled through magnetic manipulation, as magnetic fields and gradients can be generated with a high level of control. Recent work has demonstrated that magnetic bio-hybrid actuators can address common challenges in small-scale fabrication, control, and localization. Additionally, it is becoming apparent that these magnetically driven bio-hybrid devices can display high efficiency and, in many cases, have the potential for self-repair and even self-replication. Combining these properties with magnetically driven forces and torques, which can be transmitted over significant distances, can be highly controlled, and are biologically safe, gives magnetic bio-hybrid actuators significant advantages over other classes of small-scale actuators. In this review, we describe the theory and mechanisms required for magnetic actuation, classify bio-hybrid actuators by their diverse organic components, and discuss their current limitations. Insights into the future of coupling cells and cell-derived components with magnetic materials to fabricate multi-functional actuators are also provided.
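
The actuation physics the review summarizes starts from the standard dipole relations; as a brief reference (standard magnetostatics, not text from the abstract), the force and torque on a magnetic moment m in an external field B are:

```latex
% Force and torque on a magnetic dipole moment \mathbf{m} in field \mathbf{B}:
% a field gradient produces a translational force, while a uniform field
% produces only an aligning torque.
\begin{align}
  \mathbf{F} &= \nabla\left(\mathbf{m}\cdot\mathbf{B}\right), \\
  \boldsymbol{\tau} &= \mathbf{m}\times\mathbf{B}.
\end{align}
```

This is why gradient fields are typically used to pull such actuators, while uniform rotating fields drive torque-based propulsion.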
Award ID(s): 2000202, 2000330
Publication Date:
NSF-PAR ID: 10319701
Journal Name: Nanoscale
Volume: 14
Issue: 12
ISSN: 2040-3364
Sponsoring Org: National Science Foundation
More Like this
  1. Shape-memory actuators allow machines ranging from robots to medical implants to hold their form without continuous power, a feature especially advantageous for situations where these devices are untethered and power is limited. Although previous work has demonstrated shape-memory actuators using polymers, alloys, and ceramics, the need for micrometer-scale electro–shape-memory actuators remains largely unmet, especially ones that can be driven by standard electronics (~1 volt). Here, we report on a new class of fast, high-curvature, low-voltage, reconfigurable, micrometer-scale shape-memory actuators. They function by the electrochemical oxidation/reduction of a platinum surface, creating a strain in the oxidized layer that causes bending. They bend to the smallest radius of curvature of any electrically controlled microactuator (~500 nanometers), are fast (<100-millisecond operation), and operate inside the electrochemical window of water, avoiding bubble generation associated with oxygen evolution. We demonstrate that these shape-memory actuators can be used to create basic electrically reconfigurable microscale robot elements including actuating surfaces, origami-based three-dimensional shapes, morphing metamaterials, and mechanical memory elements. Our shape-memory actuators have the potential to enable the realization of adaptive microscale structures, bio-implantable devices, and microscopic robots.
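
As a rough illustration of the bending mechanism this abstract describes (strain in a surface oxide layer curling a thin film), here is a minimal sketch using the classic Timoshenko bilayer-curvature formula; the layer thicknesses, moduli, and strain value below are illustrative assumptions, not values from the paper:

```python
def bilayer_curvature(strain, t_active, t_passive, e_active, e_passive):
    """Timoshenko bimetal-strip curvature (1/m) produced by a
    differential strain between two bonded elastic layers."""
    m = t_active / t_passive          # thickness ratio
    n = e_active / e_passive          # modulus ratio
    h = t_active + t_passive          # total thickness (m)
    num = 6.0 * strain * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2
               + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

# Illustrative numbers only: a few-nanometer oxidized skin on a thin Pt
# film with ~1% actuation strain gives a sub-micrometer bend radius,
# the same scale as the ~500 nm figure quoted in the abstract.
kappa = bilayer_curvature(strain=0.01, t_active=2e-9, t_passive=5e-9,
                          e_active=150e9, e_passive=170e9)
print(f"radius of curvature ≈ {1.0 / kappa * 1e9:.0f} nm")
```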

  2. Hybrid organic–inorganic composites possessing both electronic and magnetic properties are promising materials for a wide range of applications. Controlled and ordered arrangement of the organic and inorganic components is key for synergistic cooperation toward desired functions. In this work, we report the self-assemblies of core–shell composite nanofibers from conjugated block copolymers and magnetic nanoparticles through the cooperation of orthogonal non-covalent interactions. We show that well-defined core–shell conjugated polymer nanofibers can be obtained through solvent induced self-assembly and polymer crystallization, while hydroxy and pyridine functional groups located at the shell of nanofibers can immobilize magnetic nanoparticles via hydrogen bonding and coordination interactions. These precisely arranged nanostructures possess electronic properties intrinsic to the polymers and are simultaneously responsive to external magnetic fields. We applied these composite nanofibers in organic solar cells and found that these non-covalent interactions led to controlled thin film morphologies containing uniformly dispersed nanoparticles, although high loadings of these inorganic components negatively impact device performance. Our methodology is general and can be utilized to control the spatial distribution of functionalized organic/inorganic building blocks, and the magnetic responsiveness and optoelectronic activities of these nanostructures may lead to new opportunities in energy and electronic applications.
  3. Polymer nanocomposites have been sought after for their light weight, high performance (strength-to-mass ratio, renewability, etc.), and multi-functionality (actuation, sensing, protection against lightning strikes, etc.). Nano-/micro-engineering has achieved such advanced properties by controlling crystallinity, phases, and interfaces/interphases; hierarchical structuring, often bio-inspired, has also been implemented. While driven by the advanced properties of nanofillers, the properties of polymer nanocomposites are critically affected by their structuring and interfaces/interphases due to their small size (< ~50 nm) and large surface area per volume. Measured property improvements from nanofiller addition are often smaller than theoretically predicted. Currently, application of these novel engineered materials is limited because they often cannot be made in large sizes without compromising nano-scale organization, and because their multi-scale structure-property relationships are not well understood. In this work, we study precise and fast nanofiller structuring with non-contact and energy-efficient application of oscillating magnetic fields. Magnetic assembly is a promising, scalable method to deliver bulk amounts of nanocomposites while maintaining organized nanofiller structure throughout the composite volume. In the past, we have demonstrated controlled alignment of nanofillers with tunable inter-assembly distances by applying oscillating one-dimensional magnetic fields (~100s of G), taking advantage of both magnetic attraction and repulsion. The low oscillation frequency (< 1 Hz) was tuned to achieve maghemite nanofiller alignment patterns, in an epoxy matrix, with different amounts of inter-nanofiller contacts at the same nanofiller volume fraction (see Figure 1a). This work was recently expanded to three-dimensional assembly using a triaxial Helmholtz coil system (see Figure 1b); the system can apply triaxial magnetic fields of varying magnitude (max. ±300 G, ±250 G, ±180 G (x-y-z)) and frequency (0 to 1 Hz, ~0.1 Hz resolution) with controlled phase delay to a sample size of 1.5” × 2.5” × 3.5” (x-y-z). Two model systems are currently studied: maghemite nanofillers in an elastomer for magnetoactuation, and nickel-coated CNTs in a thermoset for mechanical and transport property reinforcement. The assembled nanofiller structures are characterized by microCT; the microCT scan data (see Figure 1b) are segmented through a machine learning algorithm and will be modeled for their transport properties using a Monte Carlo method. These estimated properties will be compared with the experimentally characterized mechanical, transport, and actuation properties, providing the structure-interphase-property relationships. In the future, we plan to integrate these nanocomposites into CFRPs for interlaminar property reinforcement, possibly with out-of-autoclave composite processing.
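
To make the drive scheme concrete, here is a minimal sketch (not code from this work) of the triaxial oscillating-field waveforms such a Helmholtz system would command: per-axis sinusoids using the amplitude caps and sub-hertz frequency range quoted above. The function name and the phase-delay values are illustrative assumptions:

```python
import numpy as np

def triaxial_field(t, amp=(300.0, 250.0, 180.0), freq=0.5,
                   phase=(0.0, np.pi / 2, np.pi)):
    """Oscillating triaxial field B(t) in gauss.

    amp   -- per-axis amplitude caps (x, y, z), from the quoted
             ±300/±250/±180 G limits
    freq  -- oscillation frequency in Hz (system range: 0 to 1 Hz)
    phase -- per-axis phase delay in radians (illustrative values)
    """
    t = np.asarray(t, dtype=float)
    return np.stack([a * np.sin(2.0 * np.pi * freq * t + p)
                     for a, p in zip(amp, phase)], axis=-1)

# One 2 s oscillation period sampled at 10 Hz: each row is (Bx, By, Bz).
B = triaxial_field(np.linspace(0.0, 2.0, 21))
print(B.shape)  # (21, 3)
```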
  4. Obeid, I.; Selesnik, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process.
    Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; (3) a job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster.
    Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms, since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive.
    These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
The overall impact of all of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since you need to do multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, it adds an element of randomness into the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA’s cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but they are non-deterministic algorithms [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a sense of how our model is performing per experiment and whether the changes we make are effective.
In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment, such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training: we save the data ordering from the last experiment and make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit, since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise, even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation, we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
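
As a concrete sketch of the seeding strategy this abstract describes, assuming the modern TensorFlow 2.x API (the poster itself targets the TensorFlow v1.x era, where the equivalent calls differ): seed the Python, NumPy, and TensorFlow RNGs, request deterministic kernels, and pin 32-bit floats.

```python
import os
import random

import numpy as np
import tensorflow as tf

SEED = 1337  # arbitrary example seed

# Seed every RNG that can influence initialization, shuffling, or dropout.
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)

# Ask TensorFlow to prefer deterministic kernels (TF >= 2.8); on older
# versions the TF_DETERMINISTIC_OPS=1 environment variable plays this role.
tf.config.experimental.enable_op_determinism()

# Pin 32-bit floats explicitly (Keras already defaults to float32); this
# guards against 64-bit values, which vary more across GPU architectures.
tf.keras.backend.set_floatx("float32")
```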
  5. The integration of synthetic biology and soft robotics can fundamentally advance sensory, diagnostic, and therapeutic functionality of bioinspired machines. However, such integration is currently impeded by the lack of soft-matter architectures that interface synthetic cells with electronics and actuators for controlled stimulation and response during robotic operation. Here, we synthesized a soft gripper that uses engineered bacteria for detecting chemicals in the environment, a flexible light-emitting diode (LED) circuit for converting biological to electronic signals, and soft pneu-net actuators for converting the electronic signals to movement of the gripper. We show that the hybrid bio-LED-actuator module enabled the gripper to detect chemical signals by applying pressure and releasing the contents of a chemical-infused hydrogel. The biohybrid gripper used chemical sensing and feedback to make actionable decisions during a pick-and-place operation. This work opens previously unidentified avenues in soft materials, synthetic biology, and integrated interfacial robotic systems.
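
A hypothetical sketch of the sense-decide-act loop this abstract describes; none of this code is from the work, and every hardware-facing function below is an assumed placeholder:

```python
# Hypothetical hardware stubs: a real system would read the bio-LED
# photodetector circuit and drive the pneu-net pressure controller.
def read_led_signal() -> float:
    """Return the LED circuit output in arbitrary units (stub)."""
    return 0.0  # placeholder value

def set_gripper_pressure(kpa: float) -> None:
    """Command the pneumatic actuators (stub)."""
    pass

LED_THRESHOLD = 0.5  # assumed detection threshold (a.u.)
GRIP_KPA = 40.0      # assumed gripping pressure (kPa)

def pick_and_place_step() -> bool:
    """One cycle: grip only when the engineered bacteria report the
    target chemical through the LED circuit; otherwise stay open."""
    if read_led_signal() > LED_THRESHOLD:
        set_gripper_pressure(GRIP_KPA)  # close gripper
        return True
    set_gripper_pressure(0.0)  # stay open / release
    return False
```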