
Title: Mechanical Simplification of Variable-Stiffness Actuators Using Dielectric Elastomer Transducers
Legged and gait-assistance robots can walk more efficiently if their actuators are compliant, and the adjustable compliance of variable-stiffness actuators (VSAs) can enhance this benefit. However, this functionality requires additional mechanical components, usually moving parts and an extra motor, which make VSAs impractical for some uses because of the added weight, volume, and cost. VSAs would be more practical if they could modulate the stiffness of their springs without these components. We therefore designed a VSA that uses dielectric elastomer transducers (DETs) as its springs. Because DETs soften under electrostatic forces, the design needs no mechanical stiffness-adjusting components. This paper presents the details and performance of our design. Our DET VSA demonstrated independent modulation of its equilibrium position and stiffness. Once the weaknesses of DET technology are addressed, our design approach could deliver the benefits of variable-stiffness actuation with less weight, volume, and cost than normally accompany them.
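To make the electrostatic-softening idea concrete, here is a toy lumped-parameter sketch (our own illustration, not the model from the paper): a DET with mechanical stiffness k0 and a displacement-dependent capacitance C(x), held at voltage V, has constant-voltage potential energy U(x) = 0.5*k0*x^2 - 0.5*C(x)*V^2, so its effective stiffness U''(x) = k0 - 0.5*C''(x)*V^2 drops as the voltage rises. The quadratic capacitance law and the numbers below are assumptions chosen only to make the effect visible.

```python
def det_effective_stiffness(k0, C0, t0, V):
    """Toy lumped DET model (illustrative assumption, not the paper's model).

    Constant-voltage potential energy: U(x) = 0.5*k0*x**2 - 0.5*C(x)*V**2,
    with an assumed capacitance law C(x) = C0*(1 + x/t0)**2, so that
    C''(x) = 2*C0/t0**2 and the effective stiffness is
        U''(x) = k0 - (C0/t0**2)*V**2,
    i.e. the DET softens quadratically with applied voltage."""
    return k0 - (C0 / t0**2) * V**2

# Assumed numbers: k0 [N/m], C0 [F], t0 [m]; voltages in volts.
k0, C0, t0 = 500.0, 1e-10, 1e-4
stiffnesses = [det_effective_stiffness(k0, C0, t0, V) for V in (0.0, 100.0, 200.0)]
```

In this toy model the stiffness falls from 500 N/m at 0 V to 400 N/m at 100 V and 100 N/m at 200 V, with no moving parts involved, which is the qualitative behavior the design exploits.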
Award ID(s): 1830360
Publication Date:
NSF-PAR ID: 10095857
Journal Name: Actuators
Volume: 8
Issue: 2
Page Range or eLocation-ID: 44
ISSN: 2076-0825
Sponsoring Org: National Science Foundation
More Like this
  1. Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatoric expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation which increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method for general sparse vectors uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure, sparse block-codes. Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works, but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches similar performance as classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
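The block-local circular convolution binding described above can be sketched in a few lines. This is a minimal illustration under assumptions of ours (one active unit per block with unit activation), not the authors' implementation: for such one-hot blocks, circular convolution within each block reduces to adding the active indices modulo the block size, which is why the operation preserves both dimensionality and sparsity.

```python
import numpy as np

def blockwise_circular_bind(x, y, block):
    """Bind two sparse block-code vectors by circular convolution within
    each block; the result has the same dimensionality as the inputs."""
    X = x.reshape(-1, block)
    Y = y.reshape(-1, block)
    # Circular convolution of each block pair, computed via the FFT.
    Z = np.fft.ifft(np.fft.fft(X, axis=1) * np.fft.fft(Y, axis=1), axis=1).real
    return Z.reshape(-1)

def one_hot_block_code(indices, block):
    """Sparse block-code: one active unit per block at the given indices."""
    v = np.zeros(len(indices) * block)
    for b, i in enumerate(indices):
        v[b * block + i] = 1.0
    return v

# For one-hot blocks, binding adds the active indices modulo the block size.
B = 8
x = one_hot_block_code([1, 3, 6], B)
y = one_hot_block_code([2, 7, 5], B)
z = blockwise_circular_bind(x, y, B)
bound_idx = [int(np.argmax(blk)) for blk in z.reshape(-1, B)]
# bound_idx == [(1+2) % 8, (3+7) % 8, (6+5) % 8] == [3, 2, 3]
```

Unbinding works the same way with one argument's blocks reversed, so the operation is invertible without leaving the sparse block-code format.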
  2. Natural dynamics, nonlinear optimization, and, more recently, convex optimization are available methods for the stiffness design of energy-efficient series elastic actuators. Natural dynamics and general nonlinear optimization only work for a limited set of load kinetics and kinematics, cannot guarantee convergence to a global optimum, or depend on the initial conditions supplied to the numerical solver. Convex programs alleviate these limitations and allow a global solution in polynomial time, which is useful when the space of optimization variables grows (e.g., when designing optimal nonlinear springs or co-designing the spring, controller, and reference trajectories). Our previous work introduced the stiffness design of series elastic actuators via convex optimization when the transmission dynamics are negligible, an assumption that applies mostly in theory or when the actuator uses a direct or quasi-direct drive. In this work, we extend our analysis to include friction at the transmission. Coulomb friction at the transmission results in a non-convex expression for the energy dissipated as heat, but we illustrate a convex approximation for stiffness design. We experimentally validated our framework using a series elastic actuator with specifications similar to the knee joint of the Open Source Leg, an open-source robotic knee-ankle prosthesis.
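As a hedged illustration of why convexity helps here, consider the frictionless special case that this paper goes beyond: with negligible transmission friction and viscous losses dominant, the motor speed is qdot_load + c*taudot for series compliance c = 1/k, so the dissipated energy b*integral((qdot_load + c*taudot)^2) dt is a convex quadratic in c with a closed-form minimizer. The trajectories below are assumed sinusoids, not data from the paper.

```python
import numpy as np

def optimal_series_compliance(qdot_load, taudot):
    """Minimize the viscous loss b * sum((qdot_load + c*taudot)**2) over the
    series compliance c (a convex quadratic in c; b > 0 drops out).
    Simplified sketch: ignores the Coulomb friction and Joule heating terms
    that the full formulation handles."""
    return -np.dot(qdot_load, taudot) / np.dot(taudot, taudot)

# Assumed load: q_l = -Q*sin(w*t), tau = T*sin(w*t) (out of phase on purpose,
# so the optimal compliance comes out positive).
t = np.linspace(0.0, 2 * np.pi, 1000)
Q, T, w = 0.1, 20.0, 1.0
qdot_load = -Q * w * np.cos(w * t)
taudot = T * w * np.cos(w * t)
c_star = optimal_series_compliance(qdot_load, taudot)
```

For these assumed sinusoids the minimizer reduces to c* = Q/T, the classic "natural dynamics" tuning; the paper's contribution is keeping a convex problem even once transmission friction is added.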
  3. Elastic actuation can improve human-robot interaction and energy efficiency for wearable robots. Previous work showed that the energy consumption of series elastic actuators can be a convex function of the series spring compliance. This function is useful for optimally selecting the series spring compliance that reduces the motor energy consumption. However, series springs have limited influence on the motor torque, which is a major source of energy losses due to the associated Joule heating. Springs in parallel to the motor can significantly modify the motor torque and therefore reduce Joule heating, but it is unknown how to design springs that globally minimize energy consumption for a given motion of the load. In this work, we introduce the stiffness design of linear and nonlinear parallel elastic actuators (PEAs) via convex optimization. We show that the energy consumption of parallel elastic actuators is a convex function of the spring stiffness and compare the energy savings with that of optimal series elastic actuators (SEAs). We analyze the robustness of the solution in simulation by adding uncertainty of 20% of the RMS load kinematics and kinetics for the ankle, knee, and hip movements of level-ground human walking. When the winding Joule heating losses are dominant with respect to the viscous losses, our optimal PEA designs outperform SEA designs by further reducing the motor energy consumption by up to 63%. Compared with the linear PEA designs, our nonlinear PEA designs further reduced the motor energy consumption by up to 31%. From our convex formulation, our globally optimal nonlinear parallel elastic actuator designs give two different elongation-torque curves for positive and negative elongation, suggesting a clutching mechanism for the final implementation. In addition, the different torque-elongation profiles for positive and negative elongation for nonlinear parallel elastic actuators can cause sensitivity of the energy consumption to changes in the nominal load trajectory.
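The convexity claim for linear PEAs can be illustrated with a minimal sketch (our simplification, not the paper's full formulation): a parallel spring of stiffness k changes the required motor torque to tau_load - k*q, so winding Joule heating proportional to integral((tau_load - k*q)^2) dt is a convex least-squares problem in k with a closed-form minimizer. The load profile below is assumed.

```python
import numpy as np

def optimal_parallel_stiffness(tau_load, q):
    """Minimize Joule heating ~ sum((tau_load - k*q)**2) over the parallel
    spring stiffness k -- a convex least-squares problem with the closed-form
    solution k* = <tau_load, q> / <q, q>.
    Sketch only: linear spring, winding losses dominant, no clutch."""
    return np.dot(tau_load, q) / np.dot(q, q)

# Assumed load: torque with a position-proportional part (which a parallel
# spring can cancel) plus a velocity-dependent part (which it cannot).
t = np.linspace(0.0, 2 * np.pi, 1000)
q = 0.3 * np.sin(t)
tau_load = 25.0 * q + 1.5 * np.cos(t)
k_star = optimal_parallel_stiffness(tau_load, q)
```

Here the optimizer recovers the position-proportional component of the load (k* near 25 N*m/rad under these assumptions), leaving the motor only the part a spring cannot supply.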
  4. The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition. Here we show how this may be accomplished within the framework of Vector Symbolic Architectures (VSAs) (Plate, 1991; Gayler, 1998; Kanerva, 1996), whereby data structures are encoded by combining high-dimensional vectors with operations that together form an algebra on the space of distributed representations. In particular, we propose an efficient solution to a hard combinatorial search problem that arises when decoding elements of a VSA data structure: the factorization of products of multiple codevectors. Our proposed algorithm, called a resonator network, is a new type of recurrent neural network that interleaves VSA multiplication operations and pattern completion. We show in two examples—parsing of a tree-like data structure and parsing of a visual scene—how the factorization problem arises and how the resonator network can solve it. More broadly, resonator networks open the possibility of applying VSAs to myriad artificial intelligence problems in real-world domains. The companion article in this issue (Kent, Frady, Sommer, & Olshausen, 2020) presents a rigorous analysis and evaluation of the performance of resonator networks, showing it outperforms alternative approaches.
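A minimal two-factor resonator network can be sketched as follows. This is our own toy version under assumptions: bipolar (±1) random codevectors, elementwise (Hadamard) product as the VSA binding, and sign() as pattern completion; the dimension and codebook sizes are arbitrary choices. Each iteration unbinds the composite with the current estimate of the other factor, then cleans up through the codebook.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 2048, 7                             # vector dimension, codevectors per factor
A = rng.choice([-1.0, 1.0], size=(D, M))   # codebook for the first factor
B = rng.choice([-1.0, 1.0], size=(D, M))   # codebook for the second factor

# Composite to factorize: Hadamard binding of one codevector from each book.
ia, ib = 2, 5
s = A[:, ia] * B[:, ib]

def resonator_factorize(s, A, B, iters=20):
    """Alternate between unbinding with the current estimate of the other
    factor and cleaning up through the codebook (sign of the projection)."""
    a_hat = np.sign(A.sum(axis=1))         # superposition initial guesses
    b_hat = np.sign(B.sum(axis=1))         # (M odd, so no zero entries)
    for _ in range(iters):
        a_hat = np.sign(A @ (A.T @ (s * b_hat)))
        b_hat = np.sign(B @ (B.T @ (s * a_hat)))
    # Decode each factor as the nearest codevector by inner product.
    return int(np.argmax(A.T @ a_hat)), int(np.argmax(B.T @ b_hat))

found_a, found_b = resonator_factorize(s, A, B)
```

At this small scale the search space is only M*M combinations, but the point of the dynamics is that the network never enumerates them: the superposition estimates collapse onto the correct pair of codevectors.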
  5. Vector space models for symbolic processing that encode symbols by random vectors have been proposed in the cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA) and, synonymously, Hyperdimensional (HD) computing. In this paper, we generalize VSAs to function spaces by mapping continuous-valued data into a vector space such that the inner product between the representations of any two data points represents a similarity kernel. By analogy to VSA, we call this new function encoding and computing framework Vector Function Architecture (VFA). In VFAs, vectors can represent individual data points as well as elements of a function space (a reproducing kernel Hilbert space). The algebraic vector operations, inherited from VSA, correspond to well-defined operations in function space. Furthermore, we study a previously proposed method for encoding continuous data, fractional power encoding (FPE), which uses exponentiation of a random base vector to produce randomized representations of data points and fulfills the kernel properties for inducing a VFA. We show that the distribution from which elements of the base vector are sampled determines the shape of the FPE kernel, which in turn induces a VFA for computing with band-limited functions. In particular, VFAs provide an algebraic framework for implementing large-scale kernel machines with random features, extending Rahimi and Recht (2007). Finally, we demonstrate several applications of VFA models to problems in image recognition, density estimation, and nonlinear regression. Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems, with myriad applications in artificial intelligence.
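Fractional power encoding is easy to sketch in a phasor representation (an assumption of this illustration; the paper treats FPE more generally): elementwise exponentiation of a random unit-phasor base vector makes the inner product between the encodings of x and y depend only on x - y, and for phases drawn uniformly from (-pi, pi] the resulting similarity kernel approximates sinc(x - y).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000
theta = rng.uniform(-np.pi, np.pi, D)   # phases of the random base vector

def fpe(x):
    """Fractional power encoding: the base phasors exp(i*theta) raised
    elementwise to the (real) power x."""
    return np.exp(1j * x * theta)

def kernel(x, y):
    """Normalized inner product <fpe(x), fpe(y)> / D. It depends only on
    x - y; for uniform phases it approximates sinc(x - y) = sin(pi*d)/(pi*d)."""
    return np.real(np.vdot(fpe(x), fpe(y))) / D

k0 = kernel(2.0, 2.0)   # self-similarity is exactly 1
k1 = kernel(0.0, 0.5)   # approximately sinc(0.5) = 2/pi
```

Changing the distribution of `theta` reshapes the kernel, which is the degree of freedom the paper exploits to induce VFAs for different band-limited function classes.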