Dr. Xiaochuan Pan (Ed.)
Ingestible Sensors and Sensing Systems for Minimally Invasive Diagnosis and Monitoring: The Next Frontier in Minimally Invasive Screening
-
The creation of multiarticulated mechanisms for use with minimally invasive surgical tools is difficult because of fabrication, assembly, and actuation challenges at the millimeter scale of these devices. Nevertheless, such mechanisms are desirable for granting surgeons greater precision and dexterity to manipulate and visualize tissue at the surgical site. Here, we describe the construction of a complex optoelectromechanical device that can be integrated with existing surgical tools to control the position of a fiber-delivered laser. By using modular assembly and a laminate fabrication method, we are able to create a smaller and higher-bandwidth device than the current state of the art while achieving a range of motion similar to that of existing tools. The device we present is 6 millimeters in diameter and 16 millimeters in length and is capable of focusing and steering a fiber-delivered laser beam at high speed (1.2-kilohertz bandwidth) over a large range (more than ±10 degrees in each of two axes) with excellent static repeatability (200 micrometers).
-
Ultra-miniaturized microendoscopes are vital for numerous biomedical applications. Such minimally invasive imagers allow for navigation into hard-to-reach regions and observation of deep brain activity in freely moving animals. Conventional solutions use distal microlenses. However, as lenses become smaller and less invasive, they develop greater aberrations and restricted fields of view. In addition, most of the imagers capable of variable focusing require mechanical actuation of the lens, increasing the distal complexity and weight. Here, we demonstrate a distal lens-free approach to microendoscopy enabled by computational image recovery. Our approach is entirely actuation free and uses a single pseudorandom spatial mask at the distal end of a multicore fiber. Experimentally, this lensless approach increases the space-bandwidth product, i.e., field of view divided by resolution, by threefold over a best-case lens-based system. In addition, the microendoscope demonstrates color-resolved imaging and refocusing to 11 distinct depth planes from a single camera frame without any actuated parts.
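The core idea above — measuring the scene through a pseudorandom mask and recovering the image computationally — can be illustrated with a minimal sketch. This is not the authors' reconstruction algorithm; it is a toy forward model with a random measurement matrix and a Tikhonov-regularized least-squares inversion, with all dimensions and values hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): a 16x16 scene measured through 400 fiber cores.
n_pixels = 16 * 16
n_cores = 400

# Pseudorandom mask: each core collects a random weighted sum of scene pixels.
A = rng.random((n_cores, n_pixels))

# Ground-truth scene: a few bright point sources on a dark background.
x_true = np.zeros(n_pixels)
x_true[rng.choice(n_pixels, size=5, replace=False)] = 1.0

# Simulated noisy measurement at the proximal end of the fiber bundle.
y = A @ x_true + 0.01 * rng.standard_normal(n_cores)

# Tikhonov-regularized least squares: x = argmin ||A x - y||^2 + lam ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)

# The recovered image should correlate strongly with the ground truth.
corr = np.corrcoef(x_hat, x_true)[0, 1]
print(corr > 0.9)
```

Because the mask makes each measurement a broadly mixed projection of the scene, the linear system is well conditioned and invertible in software, which is what removes the need for a distal lens or any mechanical refocusing.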
-
Abstract: Developable mechanisms conform to and emerge from developable, or specially curved, surfaces. The cylindrical developable mechanism can have applications in many industries due to the popularity of cylindrical or tube-based devices. Laparoscopic surgical devices in particular are widely composed of instruments attached at the proximal end of a cylindrical shaft. In this paper, properties of cylindrical developable mechanisms are discussed, including their behaviors, characteristics, and potential functions. One method for designing cylindrical developable mechanisms is discussed. Two example surgical devices that exhibit these behaviors, characteristics, and functions, along with the kinematic mechanisms comprising them, are discussed in detail.
-
Surgical robots have been introduced to operating rooms over the past few decades due to their high sensitivity, small size, and remote controllability. The cable-driven nature of many surgical robots allows the systems to be dexterous and lightweight, with diameters as low as 5 mm. However, due to the slack and stretch of the cables and the backlash of the gears, inevitable uncertainties are brought into the kinematics calculation [1]. Since the reported end-effector position of surgical robots like RAVEN-II [2] is directly calculated from the motor encoder measurements and forward kinematics, it may contain relatively large errors of up to 10 mm, whereas semi-autonomous functions being introduced into abdominal surgeries require a position inaccuracy of at most 1 mm. To resolve this problem, a cost-effective, real-time, and data-driven pipeline for estimating the robot's end-effector position is proposed and tested on RAVEN-II. Analysis shows an improved end-effector position error of around 1 mm RMS across the entire robot workspace, without requiring a high-resolution motion tracker. The open-source code, data sets, videos, and user guide can be found at //github.com/HaonanPeng/RAVEN Neural Network Estimator.
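The pipeline above learns a mapping from encoder-based kinematic estimates to the true end-effector position measured during training. The paper uses a neural network; the sketch below illustrates the same idea with a simple quadratic regression on a synthetic error model. The error model, dimensions, and noise levels are all hypothetical, chosen only to show the before/after effect of a learned correction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: forward kinematics from encoders yields a position with a
# systematic, configuration-dependent error (cable stretch, gear backlash).
n_train = 2000
p_kin = rng.uniform(-50.0, 50.0, size=(n_train, 3))  # mm, kinematic estimate

def true_offset(p):
    # Hypothetical smooth error between kinematic and actual position.
    return 0.002 * p**2 - 0.05 * p + 1.0

# Training labels come from a motion tracker (simulated here, with noise).
p_true = p_kin + true_offset(p_kin) + 0.05 * rng.standard_normal(p_kin.shape)

# Data-driven correction: fit a per-axis quadratic model of the error.
X = np.hstack([np.ones((n_train, 1)), p_kin, p_kin**2])
coef, *_ = np.linalg.lstsq(X, p_true - p_kin, rcond=None)

# Apply the learned correction to held-out kinematic estimates.
p_test = rng.uniform(-50.0, 50.0, size=(500, 3))
X_test = np.hstack([np.ones((500, 1)), p_test, p_test**2])
p_corr = p_test + X_test @ coef
p_test_true = p_test + true_offset(p_test)

rmse_before = np.sqrt(np.mean((p_test - p_test_true) ** 2))
rmse_after = np.sqrt(np.mean((p_corr - p_test_true) ** 2))
print(rmse_after < rmse_before)
```

Once trained against tracker data, the correction runs on encoder readings alone, which is why the deployed estimator needs no high-resolution motion tracker at run time.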

