Title: Towards a Comprehensive and Robust Micromanipulation System with Force-Sensing and VR Capabilities
As many modern technologies grow increasingly complex, especially at the micro- and nanoscale, the field of robotic manipulation has grown tremendously. Microrobots and other complex microscale systems are often too laborious to fabricate using standard microfabrication techniques; therefore, there is a trend towards fabricating them in parts and then assembling them together, mainly using micromanipulation tools. Here, a comprehensive and robust micromanipulation platform is presented, in which four micromanipulators can be used simultaneously to perform complex tasks, providing the user with an intuitive environment. The system utilizes a vision-based force sensor to aid with manipulation tasks and provides a safe environment for biomanipulation. Lastly, virtual reality (VR) was incorporated into the system, allowing the user to control the probes from a more intuitive standpoint and providing an immersive platform for the future of micromanipulation.
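As an illustration of how a vision-based force sensor of this kind can work, the sketch below estimates contact force from the tracked deflection of a compliant probe tip via Hooke's law. The stiffness value, camera calibration, and function names are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of vision-based force estimation, assuming a flexible
# probe of known stiffness whose tip is tracked in the camera image.
# All names and values here are illustrative, not the paper's API.
import numpy as np

PIXELS_PER_UM = 2.0        # assumed camera calibration (pixels per micrometre)
PROBE_STIFFNESS = 0.05     # assumed probe stiffness in N/m

def estimate_force(tip_free_px: np.ndarray, tip_loaded_px: np.ndarray) -> float:
    """Estimate contact force from the probe tip's visual deflection.

    tip_free_px   -- tip pixel position with no contact (x, y)
    tip_loaded_px -- tip pixel position during manipulation (x, y)
    Returns force in newtons via Hooke's law F = k * delta.
    """
    deflection_px = np.linalg.norm(tip_loaded_px - tip_free_px)
    deflection_m = (deflection_px / PIXELS_PER_UM) * 1e-6  # pixels -> metres
    return PROBE_STIFFNESS * deflection_m

# Example: a 10-pixel deflection corresponds to 5 um, i.e. 0.25 uN.
f = estimate_force(np.array([100.0, 200.0]), np.array([106.0, 208.0]))
print(f"estimated contact force: {f*1e6:.3f} uN")
```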
Award ID(s):
1637961
PAR ID:
10309219
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Micromachines
Volume:
12
Issue:
7
ISSN:
2072-666X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. In this paper, we introduce a novel method to support remote telemanipulation tasks in complex environments by providing operators with an enhanced view of the task environment. Our method features a novel viewpoint adjustment algorithm designed to automatically mitigate occlusions caused by workspace geometry, supports visual exploration to provide operators with situation awareness in the remote environment, and mediates context-specific visual challenges by making viewpoint adjustments based on sparse input from the user. Our method builds on the dynamic-camera telemanipulation viewing paradigm, in which the user controls a manipulation robot while a camera-in-hand robot servos alongside it to maintain a sufficient view of the remote environment. We discuss the real-time motion optimization formulation used to arbitrate the various objectives in our shared-control-based method, particularly highlighting how our occlusion avoidance and viewpoint adaptation approaches fit within this framework. We present results from an empirical evaluation of our proposed occlusion avoidance approach as well as a user study that compares our telemanipulation shared-control method against alternative telemanipulation approaches. We discuss the implications of our work for future shared-control research and robotics applications.
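As a rough illustration of arbitrating viewpoint objectives in a single optimization, the sketch below combines an occlusion-avoidance term with a preferred stand-off distance term in a weighted-sum cost. The geometry, weights, and solver choice are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of weighted-sum viewpoint optimization, in the spirit of
# a real-time motion optimization that arbitrates several objectives.
# Scene geometry, weights, and the solver are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

TARGET = np.array([0.5, 0.0, 0.3])      # manipulation point of interest
OBSTACLE = np.array([0.4, 0.1, 0.4])    # workspace geometry causing occlusion
PREFERRED_DIST = 0.6                    # desired camera stand-off distance (m)

def cost(cam_pos: np.ndarray) -> float:
    # Occlusion avoidance: penalize sight lines passing near the obstacle.
    ray = TARGET - cam_pos
    t = np.clip(np.dot(OBSTACLE - cam_pos, ray) / np.dot(ray, ray), 0.0, 1.0)
    closest = cam_pos + t * ray          # point on sight line nearest obstacle
    occlusion = np.exp(-20.0 * np.linalg.norm(OBSTACLE - closest) ** 2)
    # Viewpoint adaptation: keep a comfortable distance to the target.
    distance = (np.linalg.norm(ray) - PREFERRED_DIST) ** 2
    return 5.0 * occlusion + 1.0 * distance   # assumed objective weights

start = np.array([1.0, 0.0, 0.5])
result = minimize(cost, start, method="Nelder-Mead")
print("optimized camera position:", result.x)
```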
  2. In many complex tasks, a remote expert may need to assist a local user or to guide his or her actions in the local user's environment. Existing solutions allow multiple users to collaborate remotely using high-end Augmented Reality (AR) and Virtual Reality (VR) head-mounted displays (HMDs). In this paper, we propose a portable remote collaboration approach, integrating AR and VR devices that both run on mobile platforms, to tackle the challenges of existing approaches. The AR mobile platform processes the live video and measures the 3D geometry of the local user's environment. The 3D scene is then transmitted and rendered on the remote side on a mobile VR device, along with a simple and effective user interface that allows a remote expert to easily manipulate the 3D scene on the VR platform and guide the local user to complete tasks in the local environment.
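To make the capture-to-render pipeline concrete, the sketch below shows one simple way a reconstructed mesh could be serialized on the AR side and rebuilt on the VR side. The message format and function names are hypothetical, not the paper's protocol.

```python
# Minimal sketch of packaging a reconstructed 3D scene for transmission
# from the AR capture side to a remote VR viewer. The payload layout is
# an illustrative assumption, not the paper's wire format.
import json
import numpy as np

def pack_scene(vertices: np.ndarray, faces: np.ndarray, frame_id: int) -> bytes:
    """Serialize a triangle-mesh chunk into a compact JSON payload."""
    msg = {
        "frame": frame_id,
        "vertices": np.round(vertices, 4).tolist(),  # metres, local frame
        "faces": faces.tolist(),                     # vertex-index triples
    }
    return json.dumps(msg).encode("utf-8")

def unpack_scene(payload: bytes):
    """Rebuild numpy arrays on the VR side for rendering."""
    msg = json.loads(payload.decode("utf-8"))
    return np.array(msg["vertices"]), np.array(msg["faces"], dtype=np.int32)

# Example round trip with a single triangle.
v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f = np.array([[0, 1, 2]])
verts, tris = unpack_scene(pack_scene(v, f, frame_id=1))
print(verts.shape, tris.shape)   # (3, 3) (1, 3)
```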
  3. Faust, Aleksandra; Hsu, David; Neumann, Gerhard (Ed.)
    Enabling human operators to interact with robotic agents using natural language would allow non-experts to intuitively instruct these agents. Towards this goal, we propose a novel Transformer-based model which enables a user to guide a robot arm through a 3D multi-step manipulation task with natural language commands. Our system maps images and commands to masks over grasp or place locations, grounding the language directly in perceptual space. In a suite of block rearrangement tasks, we show that these masks can be combined with an existing manipulation framework without re-training, greatly improving learning efficiency. Our masking model is several orders of magnitude more sample efficient than typical Transformer models, operating with hundreds, not millions, of examples. Our modular design allows us to leverage supervised and reinforcement learning, providing an easy interface for experimentation with different architectures. Our model completes block manipulation tasks with synthetic commands more often than a UNet-based baseline, and learns to localize actions correctly while creating a mapping of symbols to perceptual input that supports compositional reasoning. We provide a valuable resource for 3D manipulation instruction following research by porting an existing 3D block dataset with crowdsourced language to a simulated environment. Our method’s absolute improvement in identifying the correct block on the ported dataset demonstrates its ability to handle syntactic and lexical variation. 
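As a rough sketch of the interface such a masking model exposes, the toy module below maps an image and a tokenized command to a heatmap over pixel locations whose argmax is a pick/place target. The architecture and sizes are illustrative assumptions, far smaller than the paper's model.

```python
# Minimal sketch of a language-conditioned mask predictor: image plus
# command in, per-pixel logits over grasp/place locations out. Sizes,
# layer choices, and names are illustrative assumptions.
import torch
import torch.nn as nn

class CommandToMask(nn.Module):
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.image_enc = nn.Sequential(           # tiny conv image encoder
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Conv2d(dim, 1, 1)          # per-pixel mask logit

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        feat = self.image_enc(image)                    # (B, D, H, W)
        text = self.text_enc(self.token_emb(tokens))    # (B, T, D)
        query = text.mean(dim=1)[:, :, None, None]      # pooled command vector
        return self.head(feat * query)                  # (B, 1, H, W) logits

# The argmax of the mask gives the grasp/place pixel handed to the
# downstream manipulation framework.
model = CommandToMask()
logits = model(torch.randn(1, 3, 64, 64), torch.randint(0, 1000, (1, 8)))
flat = logits.flatten(1).argmax(dim=1)
print(divmod(int(flat[0]), 64))   # (row, col) of the most likely location
```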
  4. In this paper, we report on a novel biocompatible micromechanical bioreactor (actuator and sensor) designed for the in situ manipulation and characterization of live microtissues. The purpose of this study was to develop and validate an application-targeted sterile bioreactor that is accessible, inexpensive, adjustable, and easily fabricated. Our method relies on a simple polydimethylsiloxane (PDMS) molding technique for fabrication and is compatible with commonly used laboratory equipment and materials. Our unique design includes a flexible thin membrane that allows for the transfer of an external actuation into the PDMS beam-based actuator and sensor placed inside a conventional 35 mm cell culture Petri dish. Through computational analysis followed by experimental testing, we demonstrated its functionality, accuracy, sensitivity, and tunable operating range. Through time-course testing, the actuator delivered strains of over 20% to biodegradable electrospun poly(D,L-lactide-co-glycolide) (PLGA) 85:15 non-aligned nanofibers (~91 µm thick). At the same time, the sensor was able to characterize time-course changes in Young's modulus (down to 10–150 kPa) induced by an application of isopropyl alcohol (IPA). Furthermore, the actuator delivered strains of up to 4% to PDMS monolayers (~30 µm thick), simultaneously characterizing their elastic modulus up to ~2.2 MPa. The platform repeatedly applied dynamic (0.23 Hz) tensile stimuli to live Human Dermal Fibroblast (HDF) cells for 12 hours and recorded the cellular reorientation towards two angle regimes, with averages of −58.85° and +56.02°. The device's biocompatibility with live cells was demonstrated over one week, with no signs of cytotoxicity. We can conclude that our PDMS bioreactor is advantageous for low-cost tissue/cell culture micromanipulation studies involving mechanical actuation and characterization. Our device eliminates the need for an expensive experimental setup for cell micromanipulation, increasing the ease of live-cell manipulation studies by providing an affordable way of conducting high-throughput experiments without the need to open the Petri dish, reducing manual handling, cross-contamination, supplies, and costs. The device design, material, and methods allow the user to define the operational range based on their targeted samples/application.
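To illustrate the beam mechanics underlying such a PDMS actuator/sensor pair, the sketch below converts a measured cantilever tip deflection into a force and then into a sample Young's modulus. All dimensions and material values are assumptions chosen to land in the reported kPa range, not the device's actual specifications.

```python
# Minimal sketch of the mechanics behind a PDMS beam sensor: the beam's
# measured tip deflection yields a force, and the sample's Young's
# modulus follows from stress over strain. Values are illustrative.
import numpy as np

E_PDMS = 1.8e6          # assumed PDMS modulus, Pa

def beam_stiffness(length, width, thickness, E=E_PDMS):
    """End-loaded cantilever stiffness k = 3*E*I / L^3, with I = w*t^3/12."""
    I = width * thickness**3 / 12.0
    return 3.0 * E * I / length**3

def sample_modulus(force, area, stretch, gauge_length):
    """Young's modulus E = stress / strain = (F/A) / (dL/L)."""
    return (force / area) / (stretch / gauge_length)

# Assumed sensor beam: 5 mm long, 1 mm wide, 0.5 mm thick.
k = beam_stiffness(5e-3, 1e-3, 0.5e-3)
force = k * 50e-6                       # 50 um measured tip deflection
# Assumed nanofiber sample: 1 mm x 91 um cross-section, 2 mm gauge length.
E = sample_modulus(force, 1e-3 * 91e-6, 20e-6, 2e-3)
print(f"beam stiffness {k:.2f} N/m, force {force*1e6:.1f} uN, E {E/1e3:.1f} kPa")
# -> roughly 0.45 N/m, 22.5 uN, ~25 kPa: within the reported 10-150 kPa range.
```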
  5. Hyperion is a 3D visualization platform for optical design. It provides a fully immersive, intuitive, and interactive 3D user experience by leveraging existing AR/VR technologies. It enables the visualization of models of folded freeform optical systems in a dynamic 3D environment. The frontend user experience is supported by the computational ray-tracing engine of Eikonal+, an optical design research software currently under development. We have built a cross-platform, lightweight version of Eikonal+ that can communicate with any user interface or other scientific software. We have also demonstrated a prototype of the Hyperion 3D user experience using a HoloLens AR display.
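As a taste of the kind of computation a sequential ray-tracing engine performs, the sketch below implements the standard vector form of Snell's law for refraction at a surface. This is textbook optics, not Eikonal+'s actual code.

```python
# Minimal sketch of the refraction step at the core of a sequential
# ray tracer, using the standard vector form of Snell's law.
import numpy as np

def refract(direction, normal, n1, n2):
    """Refract a unit ray at a surface with unit normal.

    Returns the refracted unit direction, or None on total internal
    reflection.
    """
    eta = n1 / n2
    cos_i = -np.dot(normal, direction)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * direction + (eta * cos_i - cos_t) * normal

# Example: a ray entering glass (n = 1.5) from air at 30 deg incidence.
d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
n = np.array([0.0, 0.0, 1.0])
t = refract(d, n, 1.0, 1.5)
print(np.degrees(np.arcsin(t[0])))       # ~19.47 deg, as Snell's law predicts
```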