Title: Towards a Comprehensive and Robust Micromanipulation System with Force-Sensing and VR Capabilities
As technologies grow increasingly complex, especially at the micro- and nanoscale, the field of robotic manipulation has grown tremendously. Microrobots and other complex microscale systems are often too laborious to fabricate using standard microfabrication techniques; there is therefore a trend toward fabricating them in parts and then assembling them together, mainly using micromanipulation tools. Here, a comprehensive and robust micromanipulation platform is presented in which four micromanipulators can be used simultaneously to perform complex tasks, providing the user with an intuitive environment. The system utilizes a vision-based force sensor to aid with manipulation tasks and provides a safe environment for biomanipulation. Lastly, virtual reality (VR) was incorporated into the system, allowing the user to control the probes from a more intuitive standpoint and providing an immersive platform for the future of micromanipulation.
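A common way to realize the vision-based force sensing described above is to track the deflection of a compliant probe tip in the camera image and convert it to force via a calibrated stiffness. The sketch below illustrates that idea only; the stiffness value, pixel calibration, and function names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of vision-based force estimation from probe-tip deflection.
# Assumed (not from the paper): a linear probe stiffness k and a camera
# calibration in pixels per metre.

PIXELS_PER_METER = 2.0e6   # assumed calibration: 2 px per micrometre
PROBE_STIFFNESS = 0.5      # assumed probe stiffness k, in N/m

def estimate_force(rest_px, loaded_px):
    """Estimate contact force (N) from the tip deflection seen in the image."""
    dx = (loaded_px[0] - rest_px[0]) / PIXELS_PER_METER
    dy = (loaded_px[1] - rest_px[1]) / PIXELS_PER_METER
    deflection = (dx ** 2 + dy ** 2) ** 0.5   # metres
    return PROBE_STIFFNESS * deflection        # Hooke's law: F = k * delta

# A 10 px deflection corresponds to 5 µm, i.e. 2.5 µN at the assumed stiffness.
force = estimate_force((100, 200), (110, 200))
```

The linear model holds only for small deflections of the compliant probe; a real implementation would calibrate stiffness against a reference sensor.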
Award ID(s):
1637961
NSF-PAR ID:
10309219
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Micromachines
Volume:
12
Issue:
7
ISSN:
2072-666X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Engineering education aims to create a learning environment capable of developing vital engineering skill sets, preparing students to enter the workforce and succeed as future leaders. With rapid technological advancement, new engineering challenges continuously emerge, outpacing the development of engineering skills. This gap has been reflected in declining student GPAs and in industry reports of graduates' unsatisfactory performance. From a pedagogical perspective, the problem is strongly correlated with traditional learning methods, which, when adopted alone, are inadequate for engaging students and improving their learning experience. Accordingly, educators have incorporated new learning methodologies to address this problem and enhance the students' learning experience. However, many currently adopted teaching methods still fail to expose students to practical examples and are inefficient for engineering students, who tend to be active learners and prefer to use a variety of senses. To address this, our research team proposes integrating virtual reality (VR) technology into the laboratory work of engineering technology courses to improve students' learning experience and engagement. VR, an immersive high-tech medium, was adopted to develop an interactive teaching module on hydraulic gripper designs in a VR construction-like environment. The module aims to expose engineering technology students to real-life applications by providing a more visceral experience than screen-based media, generating fully computer-simulated environments in which everything is digitized. This work presents the development and implementation of the VR construction lab module and the corresponding gripper designs.
The virtual gripper models are developed using the Oculus Virtual Reality (OVR) Metrics Tool for Unity, a SteamVR overlay utility created to make visualizing the desktop in a VR setting simple and intuitive. Execution of the module comprises building the VR environment, designing and importing the gripper models, and creating a user-interface VR environment for visualizing and interacting with the model (gripper assembly/mechanism testing). Beyond visualization, manipulation, and interaction, the developed VR system allows additional features such as displaying technical information, guiding students through the assembly process, and other specialized options. The developed interactive VR module will thus serve as a perpetually mutable platform that can be readily adjusted to accommodate future add-ons addressing future educational opportunities.
  2.
    In this paper, we report on a novel biocompatible micromechanical bioreactor (actuator and sensor) designed for the in situ manipulation and characterization of live microtissues. The purpose of this study was to develop and validate an application-targeted sterile bioreactor that is accessible, inexpensive, adjustable, and easily fabricated. Our method relies on a simple polydimethylsiloxane (PDMS) molding technique for fabrication and is compatible with commonly used laboratory equipment and materials. Our unique design includes a flexible thin membrane that allows for the transfer of an external actuation into the PDMS beam-based actuator and sensor placed inside a conventional 35 mm cell culture Petri dish. Through computational analysis followed by experimental testing, we demonstrated its functionality, accuracy, sensitivity, and tunable operating range. Through time-course testing, the actuator delivered strains of over 20% to biodegradable electrospun poly(D,L-lactide-co-glycolide) (PLGA) 85:15 non-aligned nanofibers (~91 µm thick). At the same time, the sensor was able to characterize time-course changes in Young's modulus (down to 10–150 kPa), induced by an application of isopropyl alcohol (IPA). Furthermore, the actuator delivered strains of up to 4% to PDMS monolayers (~30 µm thick), simultaneously characterizing their elastic modulus up to ~2.2 MPa. The platform repeatedly applied dynamic (0.23 Hz) tensile stimuli to live Human Dermal Fibroblast (HDF) cells for 12 h and recorded the cellular reorientation towards two angle regimes, with averages of −58.85° and +56.02°. The device's biocompatibility with live cells was demonstrated for one week, with no signs of cytotoxicity. We can conclude that our PDMS bioreactor is advantageous for low-cost tissue/cell culture micromanipulation studies involving mechanical actuation and characterization.
Our device eliminates the need for an expensive experimental setup for cell micromanipulation, increasing the ease of live-cell manipulation studies: it provides an affordable way of conducting high-throughput experiments without opening the Petri dish, thereby reducing manual handling, cross-contamination, supplies, and costs. The device design, material, and methods allow the user to define the operational range based on their targeted samples/application.
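The modulus characterization above amounts to a uniaxial tensile relation, E = (F/A)/ε. The sketch below computes it for an assumed sample; only the ~91 µm thickness appears in the abstract, while the width, force, and strain values are illustrative assumptions.

```python
# Minimal sketch of tensile Young's modulus estimation, E = stress / strain.
# Assumed (not from the paper): sample width, applied force, and strain level.

def youngs_modulus(force_n, width_m, thickness_m, strain):
    """E (Pa) for a uniaxially stretched rectangular sample of cross-section
    width * thickness, under force force_n at engineering strain `strain`."""
    area = width_m * thickness_m          # cross-sectional area, m^2
    stress = force_n / area               # Pa
    return stress / strain

# Example with assumed numbers: a 2 mm-wide, 91 µm-thick sample carrying
# 10 mN at 5% strain gives a modulus of roughly 1.1 MPa.
E = youngs_modulus(0.010, 2e-3, 91e-6, 0.05)
```

In practice the force would come from the calibrated deflection of the PDMS sensing beam rather than being known directly.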
  3. Faust, Aleksandra; Hsu, David; Neumann, Gerhard (Eds.)
    Enabling human operators to interact with robotic agents using natural language would allow non-experts to intuitively instruct these agents. Towards this goal, we propose a novel Transformer-based model which enables a user to guide a robot arm through a 3D multi-step manipulation task with natural language commands. Our system maps images and commands to masks over grasp or place locations, grounding the language directly in perceptual space. In a suite of block rearrangement tasks, we show that these masks can be combined with an existing manipulation framework without re-training, greatly improving learning efficiency. Our masking model is several orders of magnitude more sample efficient than typical Transformer models, operating with hundreds, not millions, of examples. Our modular design allows us to leverage supervised and reinforcement learning, providing an easy interface for experimentation with different architectures. Our model completes block manipulation tasks with synthetic commands more often than a UNet-based baseline, and learns to localize actions correctly while creating a mapping of symbols to perceptual input that supports compositional reasoning. We provide a valuable resource for 3D manipulation instruction following research by porting an existing 3D block dataset with crowdsourced language to a simulated environment. Our method’s absolute improvement in identifying the correct block on the ported dataset demonstrates its ability to handle syntactic and lexical variation. 
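Grounding language in masks over grasp or place locations, as described above, reduces action selection to picking a pixel from a predicted heatmap. The following is a minimal sketch of that selection step only; the function name and mask format are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: select a grasp/place pixel from a predicted location mask.
# The mask is assumed to be a 2D array of per-pixel scores output by a
# language-conditioned model (the model itself is not shown here).
import numpy as np

def pick_location(mask):
    """Return the (row, col) of the highest-scoring pixel in the mask."""
    idx = np.argmax(mask)                     # flat index of the maximum
    return np.unravel_index(idx, mask.shape)  # back to 2D coordinates

# Toy example: a 4x4 mask whose peak marks the commanded block's location.
mask = np.zeros((4, 4))
mask[2, 1] = 0.9
row, col = pick_location(mask)
```

A real pipeline would produce separate masks for the grasp and the place step and pass the chosen coordinates to the downstream manipulation framework.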
  4. In this paper, we introduce a novel method to support remote telemanipulation tasks in complex environments by providing operators with an enhanced view of the task environment. Our method features a novel viewpoint adjustment algorithm designed to automatically mitigate occlusions caused by workspace geometry, supports visual exploration to provide operators with situation awareness in the remote environment, and mediates context-specific visual challenges by making viewpoint adjustments based on sparse input from the user. Our method builds on the dynamic camera telemanipulation viewing paradigm, where a user controls a manipulation robot, and a camera-in-hand robot alongside the manipulation robot servos to provide a sufficient view of the remote environment. We discuss the real-time motion optimization formulation used to arbitrate the various objectives in our shared-control-based method, particularly highlighting how our occlusion avoidance and viewpoint adaptation approaches fit within this framework. We present results from an empirical evaluation of our proposed occlusion avoidance approach as well as a user study that compares our telemanipulation shared-control method against alternative telemanipulation approaches. We discuss the implications of our work for future shared-control research and robotics applications. 
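Arbitrating several camera objectives in real time is often done by minimizing a weighted scalarized cost over candidate viewpoints. The sketch below illustrates that general pattern; the weights, cost terms, and candidate representation are illustrative assumptions, not the paper's actual optimization formulation.

```python
# Minimal sketch of shared-control viewpoint arbitration: score each
# candidate camera pose by a weighted sum of competing objectives and
# keep the best one. All weights and terms here are assumed for illustration.

def viewpoint_cost(occlusion, view_error, motion, weights=(10.0, 1.0, 0.1)):
    """Lower is better: penalize occlusion of the manipulation target most,
    then deviation from the preferred view, then camera motion effort."""
    w_occ, w_view, w_move = weights
    return w_occ * occlusion + w_view * view_error + w_move * motion

def best_viewpoint(candidates):
    """Pick the candidate (occlusion, view_error, motion) with lowest cost."""
    return min(candidates, key=lambda c: viewpoint_cost(*c))

# Toy example: an unoccluded but slightly off-axis view beats an occluded one.
candidates = [(0.0, 0.5, 0.2), (0.3, 0.1, 0.0), (0.0, 0.9, 0.0)]
chosen = best_viewpoint(candidates)
```

In a real-time motion optimization the same weighted terms would enter a continuous solver over camera-robot joint trajectories rather than a discrete candidate set.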
  5. Motion tracking interfaces are intuitive for free-form teleoperation tasks. However, efficient manipulation control can be difficult with such interfaces because of issues like the interference of unintended motions and the limited precision of human motion control. The limitation in control efficiency reduces the operator's performance and increases their workload and frustration during robot teleoperation. To improve efficiency, we proposed separating the controlled degrees of freedom (DoFs) and adjusting the motion-scaling ratio of a motion tracking interface. The motion tracking of handheld controllers from a Virtual Reality system was used for the interface. We separated translational and rotational control into: 1) two controllers held in the dominant and non-dominant hands and 2) hand pose tracking and trackpad inputs of a single controller. We scaled the control mapping ratio based on 1) the environmental constraints and 2) the teleoperator's control speed. We further conducted a user study to investigate the effectiveness of the proposed methods in increasing efficiency. Our results show that the separation of position and orientation control into two controllers and the environment-based scaling methods perform better than their alternatives.
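Speed-based scaling of the control mapping ratio is commonly implemented as a monotone map from measured hand speed to a scale factor: slow, deliberate motion maps to fine scaling for precision, and fast motion maps to coarse scaling for large travel. The sketch below is one such mapping under assumed thresholds; none of the constants come from the paper.

```python
# Minimal sketch of speed-based motion scaling for a tracked-controller
# teleoperation interface. Thresholds and scale limits are assumptions.

def speed_based_scale(hand_speed, s_min=0.1, s_max=1.0,
                      v_low=0.02, v_high=0.5):
    """Map operator hand speed (m/s) to a motion-scaling ratio.

    Below v_low the ratio clamps to s_min (fine, precise control);
    above v_high it clamps to s_max (coarse, fast travel);
    in between it interpolates linearly."""
    if hand_speed <= v_low:
        return s_min
    if hand_speed >= v_high:
        return s_max
    t = (hand_speed - v_low) / (v_high - v_low)
    return s_min + t * (s_max - s_min)

# The robot's commanded displacement is the scaled hand displacement:
# delta_robot = speed_based_scale(speed) * delta_hand
```

Environment-based scaling would instead derive the ratio from proximity to obstacles or targets, shrinking the mapping as clearance decreases.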