
Title: FPGA-Based Velocity Estimation for Control of Robots with Low-Resolution Encoders
Robot control algorithms often rely on measurements of robot joint velocities, which can be estimated by measuring the time between encoder edges. When encoder edges occur infrequently, such as at low velocities or with low-resolution encoders, this measurement delay may affect the stability of closed-loop control. This is evident in both the joint position control and Cartesian impedance control of the da Vinci Research Kit (dVRK), which contains several low-resolution encoders. We present a hardware-based method that gives more frequent velocity updates and is not affected by common encoder imperfections such as non-uniform duty cycles and quadrature phase error. The proposed method measures the time between consecutive edges of the same type but, unlike prior methods, is implemented for the rising and falling edges of both channels. Additionally, it estimates acceleration to enable software compensation of the measurement delay. The method is shown to improve Cartesian impedance control of the dVRK.
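The edge-timing scheme described in the abstract can be modeled in software. The following Python sketch (all names are illustrative; the paper's actual implementation is in FPGA logic) estimates velocity from the period between consecutive edges of the same type, which cancels non-uniform duty cycles and quadrature phase error, and estimates acceleration from the change in period between cycles:

```python
class EdgeVelocityEstimator:
    """Software model of period-based velocity estimation from quadrature
    encoder edges. Velocity is computed from the time between consecutive
    edges of the SAME type (rising/falling on channel A or B), so duty-cycle
    and phase imperfections cancel; tracking all four edge types separately
    gives up to four velocity updates per quadrature cycle."""

    def __init__(self, counts_per_rev):
        self.counts_per_rev = counts_per_rev   # quadrature counts per revolution
        self.last_edge_time = {}               # last timestamp, per edge type
        self.last_period = {}                  # last full-cycle period, per edge type

    def on_edge(self, edge_type, t, direction):
        """edge_type in {'A_rise', 'A_fall', 'B_rise', 'B_fall'};
        t is the edge timestamp in seconds; direction is +1 or -1.
        Returns (velocity in rev/s, acceleration in rev/s^2); either may be
        None until enough edges of this type have been observed."""
        vel = acc = None
        t_prev = self.last_edge_time.get(edge_type)
        if t_prev is not None:
            period = t - t_prev  # one full quadrature cycle = 4 counts
            vel = direction * 4.0 / (period * self.counts_per_rev)
            prev_period = self.last_period.get(edge_type)
            if prev_period is not None:
                # Acceleration from the change in velocity between cycles;
                # the paper uses this to compensate the measurement delay.
                v_prev = direction * 4.0 / (prev_period * self.counts_per_rev)
                acc = (vel - v_prev) / period
            self.last_period[edge_type] = period
        self.last_edge_time[edge_type] = t
        return vel, acc
```

At a constant speed of 1 rev/s with a 400-count encoder, same-type edges arrive every 10 ms, so each of the four edge streams independently reports 1.0 rev/s and zero acceleration once warmed up.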
Journal Name: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Sponsoring Org: National Science Foundation
More Like this
  1. We present a measurement of the Hubble constant (H0) and other cosmological parameters from a joint analysis of six gravitationally lensed quasars with measured time delays. All lenses except the first are analyzed blindly with respect to the cosmological parameters. In a flat ΛCDM cosmology, we find $H_{0} = 73.3_{-1.8}^{+1.7}~\mathrm{km~s^{-1}~Mpc^{-1}}$, a $2.4{{\ \rm per\ cent}}$ precision measurement, in agreement with local measurements of H0 from type Ia supernovae calibrated by the distance ladder, but in 3.1σ tension with Planck observations of the cosmic microwave background (CMB). This method is completely independent of both the supernovae and CMB analyses. A combination of time-delay cosmography and the distance ladder results is in 5.3σ tension with Planck CMB determinations of H0 in flat ΛCDM. We compute Bayes factors to verify that all lenses give statistically consistent results, showing that we are not underestimating our uncertainties and are able to control our systematics. We explore extensions to flat ΛCDM using constraints from time-delay cosmography alone, as well as combinations with other cosmological probes, including CMB observations from Planck, baryon acoustic oscillations, and type Ia supernovae. Time-delay cosmography improves the precision of the other probes, demonstrating the strong complementarity. Allowing for spatial curvature does not resolve the tension with Planck. Using the distance constraints from time-delay cosmography to anchor the type Ia supernova distance scale, we reduce the sensitivity of our H0 inference to cosmological model assumptions. For six different cosmological models, our combined inference on H0 ranges from ∼73–78 km s⁻¹ Mpc⁻¹, which is consistent with the local distance ladder constraints.
  2. Drilling and milling operations are material removal processes involved in everyday conventional production, especially in the high-speed metal cutting industry. Monitoring tool information (wear, dynamic behavior, deformation, etc.) is essential to guarantee the success of product fabrication. Many methods have been applied to monitor cutting tools using information from cutting force, spindle motor current, vibration, and sound acoustic emission. However, those methods are indirect and sensitive to environmental noise. Here, an in-process imaging technique that captures cutting tool information during metal cutting was studied. As machinists judge whether a tool is worn out by the naked eye, a vision system can directly present the performance of the machine tool. We proposed a phase-shifted strobo-stereoscopic method (Figure 1) for three-dimensional (3D) imaging. Stroboscopic instruments are usually applied to the measurement of fast-moving objects. The operating principle is as follows: when the frequency of the light-source illumination is synchronized with the motion of the object, the object appears stationary. The motion frequency of the target is derived from the count information of the encoder signals from the working rotary spindle. If a small difference is added to the frequency, the object appears to move or rotate slowly. This effect serves as the source for the phase shifting; with this phase information, the target can be 3D reconstructed over a full 360-degree view. The stereoscopic technique uses two CCD cameras, located bilaterally symmetric about the target, to capture images. The 3D scene is reconstructed from the locations of the same object points in the left and right images. In the proposed system, an air spindle was used to secure the motion accuracy and drilling/milling speed.
As shown in Figure 2, two CCDs with 10X objective lenses were installed on a linear rail with rotary stages to capture raw pictures of the machine tool bit for 3D reconstruction. The overall measurement process is summarized in the flow chart (Figure 3). As the count number of the encoder signals is related to the rotary speed, the input speed (in RPM) was set as the reference signal to control the frequency (f0) of the LED illumination. When the frequency matched the reference signal, both CCDs started to gather pictures. With the mismatched frequency (Δf) information, a sequence of images was gathered under the phase-shifted process for a whole-view 3D reconstruction. The study in this paper was based on performance monitoring of a 3/8’’ drilling tool. This paper presents the principle of the phase-shifted strobo-stereoscopic 3D imaging process. A hardware set-up is introduced, as well as the 3D imaging algorithm. The reconstructed image analysis under different working speeds is discussed, including the reconstruction resolution. The uncertainty of the imaging process and the built-up system are also analyzed. As the input signal is the working speed, no information from other sources is required. The proposed method can be applied as an on-machine or even in-process metrology. As a direct method, the 3D imaging machine vision system can directly offer machine tool surface and fatigue information. The presented method can fill the gap in determining the performance status of machine tools, which further safeguards the fabrication process.
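The strobe-frequency relations underlying this abstract can be sketched directly. The following Python helpers assume the simple difference-frequency model described above (function names and the example numbers are illustrative, not taken from the paper):

```python
def apparent_rotation_hz(spindle_rpm, led_hz):
    """With the LED strobing at led_hz and the spindle at spindle_rpm,
    the tool appears to rotate at the difference frequency (zero when
    the strobe is exactly synchronized to the rotation)."""
    return spindle_rpm / 60.0 - led_hz

def phase_step_deg(delta_f_hz, frame_rate_hz):
    """Apparent angular (phase) shift of the tool accumulated between
    consecutive camera frames, given a deliberate mismatch delta_f_hz."""
    return 360.0 * delta_f_hz / frame_rate_hz

def frames_for_full_view(delta_f_hz, frame_rate_hz):
    """Number of frames needed to sweep the full 360-degree view used
    for the whole-view 3D reconstruction."""
    return frame_rate_hz / delta_f_hz
```

For example, a 3000 RPM spindle strobed at exactly 50 Hz appears stationary, while a 0.5 Hz mismatch at a 30 fps capture rate yields a 6-degree phase step per frame and a full view in 60 frames.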
  3. Detecting and localizing contacts is essential for robot manipulators to perform contact-rich tasks in unstructured environments. While robot skins can localize contacts on the surface of robot arms, these sensors are not yet robust or easily accessible. As such, prior works have explored using proprioceptive observations, such as joint velocities and torques, to perform contact localization. Many past approaches assume that the robot is static during the contact incident, that a single contact is made at a time, or that accurate dynamics models and joint torque sensing are available. In this work, we relax these assumptions and propose using Domain Randomization to train a neural network to localize contacts of robot arms in motion without joint torque observations. Our method uses a novel cylindrical projection encoding of the robot arm surface, which allows the network to use convolution layers to process input features and transposed convolution layers to predict contacts. The trained network achieves a contact detection accuracy of 91.5% and a mean contact localization error of 3.0 cm. We further demonstrate an application of the contact localization model in an obstacle mapping task, evaluated in both simulation and the real world.
  4. Our goal is to develop a principled and general algorithmic framework for task-driven estimation and control for robotic systems. State-of-the-art approaches for controlling robotic systems typically rely heavily on accurately estimating the full state of the robot (e.g., a running robot might estimate joint angles and velocities, torso state, and position relative to a goal). However, full state representations are often excessively rich for the specific task at hand and can lead to significant computational inefficiency and brittleness to errors in state estimation. In contrast, we present an approach that eschews such rich representations and seeks to create task-driven representations. The key technical insight is to leverage the theory of information bottlenecks to formalize the notion of a "task-driven representation" in terms of information-theoretic quantities that measure the minimality of a representation. We propose novel iterative algorithms for automatically synthesizing (offline) a task-driven representation (given in terms of a set of task-relevant variables (TRVs)) and a performant control policy that is a function of the TRVs. We present online algorithms for estimating the TRVs in order to apply the control policy. We demonstrate that our approach results in significant robustness to unmodeled measurement uncertainty, both theoretically and via thorough simulation experiments, including a spring-loaded inverted pendulum running to a goal location.
  5. This paper presents design and control innovations for wearable robots that tackle two barriers to widespread adoption of powered exoskeletons, namely restriction of human movement and versatile control of wearable co-robot systems. First, the proposed quasi-direct-drive actuation, comprising our customized high-torque-density motors and a low-ratio transmission mechanism, significantly reduces the mass of the robot and produces high backdrivability. Second, we derive a biomechanics-model-based control that generates a biological torque profile for versatile control of both squat and stoop lifting assistance. The control algorithm detects lifting postures using compact inertial measurement unit (IMU) sensors to generate an assistive profile that is proportional to the biological torque produced from our model. Experimental results demonstrate that the robot exhibits low mechanical impedance (1.5 Nm resistive torque) when it is unpowered and 0.5 Nm resistive torque with zero-torque tracking control. The root mean square (RMS) error of torque tracking is less than 0.29 Nm (1.21% of the 24 Nm peak torque). Compared with squatting without the exoskeleton, the controller reduces the activity of the three knee extensor muscles (average peak EMG of 3 healthy subjects) by 87.5%, 80%, and 75% during squat with 50% of biological torque assistance.