Title: Stereoscopic artificial compound eyes for spatiotemporal perception in three-dimensional space
Arthropod eyes are effective biological vision systems for object tracking over a wide field of view because of their unique compound structure; however, unlike mammalian eyes, they can hardly acquire depth information about a static object because they provide only monocular cues. Most arthropods therefore rely on motion parallax to track objects in three-dimensional (3D) space. Uniquely, the praying mantis (Mantodea) combines compound eyes with a form of stereopsis and is capable of object recognition in 3D space. Here, by mimicking the vision system of the praying mantis with stereoscopically coupled artificial compound eyes, we demonstrated spatiotemporal object sensing and tracking in 3D space with a wide field of view. Furthermore, to achieve a fast response with minimal latency, data storage/transportation, and power consumption, we processed the visual information at the edge of the system using a synaptic device and a federated split learning algorithm. The designed and fabricated stereoscopic artificial compound eye provides energy-efficient and accurate spatiotemporal object sensing and optical flow tracking, exhibiting a root mean square error of 0.3 centimeter while consuming only approximately 4 millijoules for sensing and tracking, more than 400 times less than conventional complementary metal-oxide semiconductor (CMOS)–based imaging systems. Our biomimetic imager shows the potential of integrating nature's unique designs through hardware and software codesign toward edge computing and sensing.
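The article itself does not include code. As a rough illustration of the split learning idea mentioned in the abstract, the sketch below splits a small PyTorch model so that only the first layers run at the sensor edge while a server completes the forward pass, with gradients flowing back across the cut. All module names, layer sizes, and tensors are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of split learning: the edge device
# runs only the first few layers on raw sensor frames, sends the compact
# activations to a server that runs the remaining layers, and gradients flow
# back across the cut. Names and sizes here are illustrative.
import torch
import torch.nn as nn

edge_net = nn.Sequential(          # runs on the sensor/edge side
    nn.Conv2d(2, 8, 3, padding=1), # e.g., left/right compound-eye channels
    nn.ReLU(),
    nn.MaxPool2d(2),
)
server_net = nn.Sequential(        # runs on the server side
    nn.Flatten(),
    nn.Linear(8 * 16 * 16, 3),     # e.g., predicted 3D object position
)

opt = torch.optim.SGD(
    list(edge_net.parameters()) + list(server_net.parameters()), lr=1e-2
)

frames = torch.randn(4, 2, 32, 32)     # placeholder stereo frames
target = torch.randn(4, 3)             # placeholder 3D positions

smashed = edge_net(frames)             # activations sent over the link
pred = server_net(smashed)             # server completes the forward pass
loss = nn.functional.mse_loss(pred, target)

opt.zero_grad()
loss.backward()                        # gradients cross the cut back to the edge
opt.step()
```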
Award ID(s):
2002902 2033671 2143559 1942868
PAR ID:
10538572
Publisher / Repository:
Science Robotics
Date Published:
Journal Name:
Science Robotics
Volume:
9
Issue:
90
ISSN:
2470-9476
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. ABSTRACT Vision is one of the most important senses for humans and animals alike. Diverse elegant specializations have evolved among insects and other arthropods in response to specific visual challenges and ecological needs. These specializations are the subject of this Review, and they are best understood in light of the physical limitations of vision. For example, to achieve high spatial resolution, fine sampling in different directions is necessary, as demonstrated by the well-studied large eyes of dragonflies. However, it has recently been shown that a comparatively tiny robber fly (Holcocephala) has similarly high visual resolution in the frontal visual field, despite its eyes being a fraction of the size of those of dragonflies. Other visual specializations in arthropods include the ability to discern colors, which relies on parallel inputs that are tuned to spectral content. Color vision is important for detection of objects such as mates, flowers and oviposition sites, and is particularly well developed in butterflies, stomatopods and jumping spiders. Analogous to color vision, the visual systems of many arthropods are specialized for the detection of polarized light, which, in addition to communication with conspecifics, can be used for orientation and navigation. For vision in low light, optical superposition compound eyes perform particularly well. Other modifications to maximize photon capture involve large lenses, stout photoreceptors and, as has been suggested for nocturnal bees, the neural pooling of information. Extreme adaptations even allow insects to see colors at very low light levels or to navigate using the Milky Way.
  2. Abstract The vision system of arthropods such as insects and crustaceans is based on the compound-eye architecture, consisting of a dense array of individual imaging elements (ommatidia) pointing along different directions. This arrangement is particularly attractive for imaging applications requiring extreme size miniaturization, wide-angle fields of view, and high sensitivity to motion. However, the implementation of cameras directly mimicking the eyes of common arthropods is complicated by their curved geometry. Here, we describe a lensless planar architecture, where each pixel of a standard image-sensor array is coated with an ensemble of metallic plasmonic nanostructures that transmits only light incident along a small, geometrically tunable distribution of angles. A set of near-infrared devices providing directional photodetection peaked at different angles is designed, fabricated, and tested. Computational imaging techniques are then employed to demonstrate the ability of these devices to reconstruct high-quality images of relatively complex objects.
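This abstract describes reconstructing images computationally from pixels whose transmission depends on the angle of incidence. As a minimal sketch, assuming a linear forward model and a Tikhonov-regularized least-squares solve (not the paper's actual reconstruction pipeline), with all sizes and data synthetic:

```python
# Illustrative sketch: with angle-sensitive pixels, each measurement is a
# weighted sum of scene intensities over angles, so the forward model is
# linear, y = A @ x. A small regularized least-squares solve recovers the
# scene. A, x, and dimensions below are synthetic placeholders.
import numpy as np

n_scene = 64           # unknown scene samples (e.g., angular intensity bins)
n_pixels = 128         # directional photodetector measurements

rng = np.random.default_rng(0)
A = rng.random((n_pixels, n_scene))                     # angular response of each pixel
x_true = rng.random(n_scene)                            # synthetic scene
y = A @ x_true + 0.01 * rng.standard_normal(n_pixels)   # noisy measurements

lam = 1e-2                                              # Tikhonov regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ y)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```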
  3. Abstract With a great variety of shapes and sizes, compound eye morphologies give insight into visual ecology, development, and evolution, and inspire novel engineering. In contrast to our own camera-type eyes, compound eyes reveal their resolution, sensitivity, and field of view externally, provided they have spherical curvature and orthogonal ommatidia. Non-spherical compound eyes with skewed ommatidia require measuring internal structures, such as with MicroCT (µCT). Thus far, there is no efficient tool to characterize compound eye optics, from either 2D or 3D data, automatically. Here we present two open-source programs: (1) the ommatidia detecting algorithm (ODA), which measures ommatidia count and diameter in 2D images, and (2) a µCT pipeline (ODA-3D), which calculates anatomical acuity, sensitivity, and field of view across the eye by applying the ODA to 3D data. We validate these algorithms on images, images of replicas, and µCT eye scans from ants, fruit flies, moths, and a bee. 
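The ODA described above is an open-source program; as a rough illustration of the kind of measurement it automates (not its actual algorithm), the sketch below counts facet-like blobs in a 2D eye image and estimates their diameters with an off-the-shelf Laplacian-of-Gaussian detector from scikit-image. The file name, detector parameters, and pixel calibration are placeholders.

```python
# Illustrative sketch only: count ommatidia and estimate their diameters
# from a 2D eye image using blob detection. Not the ODA implementation.
import numpy as np
from skimage import io, color, feature

image = io.imread("eye_image.png")                 # hypothetical input image
gray = color.rgb2gray(image) if image.ndim == 3 else image

# Ommatidia appear as roughly circular facets of similar size, so a
# Laplacian-of-Gaussian blob detector gives candidate centers and radii.
blobs = feature.blob_log(gray, min_sigma=2, max_sigma=10, threshold=0.02)

count = len(blobs)
diameters_px = 2 * blobs[:, 2] * np.sqrt(2)        # LoG radius ~ sigma * sqrt(2)
um_per_px = 1.5                                    # placeholder calibration

print(f"ommatidia detected: {count}")
print(f"mean diameter: {diameters_px.mean() * um_per_px:.1f} µm")
```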
  4. We introduce a system that exploits the screen and front-facing camera of a mobile device to perform three-dimensional deflectometry-based surface measurements. In contrast to current mobile deflectometry systems, our method can capture surfaces with large normal variation and a wide field of view (FoV). We achieve this by applying automated multi-view panoramic stitching algorithms to produce a large-FoV normal map from a hand-guided capture process, without the need for external tracking systems such as robot arms or fiducials. The presented work enables 3D surface measurements of specular objects "in the wild" with a system accessible to users with little to no technical imaging experience. We demonstrate high-quality 3D surface measurements without the need for a calibration procedure. We provide experimental results with our prototype deflectometry system and discuss applications for computer vision tasks such as object detection and recognition.
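As background to the deflectometry principle mentioned above, the sketch below shows how a surface normal follows from the law of reflection once a surface point, the camera center, and the screen point it reflects are known in a common frame. The coordinates are made up, and this is not the authors' stitching pipeline.

```python
# Illustrative sketch: in screen-camera deflectometry, the surface normal at a
# point is the bisector of the ray toward the camera and the ray toward the
# screen point observed via reflection. All coordinates are placeholders.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

surface_pt = np.array([0.00, 0.00, 0.00])   # point on the specular surface
camera_pos = np.array([0.00, 0.20, 0.50])   # front-facing camera center
screen_pt  = np.array([0.10, -0.15, 0.45])  # screen pixel seen via reflection

to_camera = unit(camera_pos - surface_pt)   # outgoing (reflected) direction
to_screen = unit(screen_pt - surface_pt)    # incoming (incident) direction

normal = unit(to_camera + to_screen)        # law of reflection: normal bisects the rays
print("estimated surface normal:", normal)
```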
  5. Abstract—Object perception plays a fundamental role in Cooperative Driving Automation (CDA), which is regarded as a revolutionary enabler for next-generation transportation systems. However, vehicle-based perception may suffer from limited sensing range, occlusion, and low penetration rates in connectivity. In this paper, we propose Cyber Mobility Mirror (CMM), a next-generation real-world object perception system for 3D object detection, tracking, localization, and reconstruction, to explore the potential of roadside sensors for enabling CDA in the real world. The CMM system consists of six main components: i) the data pre-processor to retrieve and preprocess the raw data; ii) the roadside 3D object detector to generate 3D detection results; iii) the multi-object tracker to identify detected objects; iv) the global locator to generate geo-localization information; v) the mobile-edge-cloud-based communicator to transmit perception information to equipped vehicles; and vi) the onboard advisor to reconstruct and display real-time traffic conditions. An automatic perception evaluation approach is proposed to support the assessment of data-driven models without human labeling, and a CMM field-operational system is deployed at a real-world intersection to assess the performance of the CMM. Results from field tests demonstrate that our CMM prototype system can achieve 96.99% precision and 83.62% recall for detection and 73.55% ID-recall for tracking. High-fidelity real-time traffic conditions (at the object level) can be geo-localized with root-mean-square errors (RMSE) of 0.69 m and 0.33 m in the lateral and longitudinal directions, respectively, and displayed on the GUI of the equipped vehicle at a frequency of 3 to 4 Hz.
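For reference, the lateral and longitudinal RMSE quoted in the abstract above can be computed as in the sketch below; the arrays are synthetic placeholders, not the field-test data.

```python
# Illustrative sketch of the evaluation metric: root-mean-square error of
# geo-localized object positions, split into lateral and longitudinal
# components. Data below is synthetic, not from the CMM field tests.
import numpy as np

rng = np.random.default_rng(1)
gt = rng.uniform(0, 50, size=(100, 2))                 # ground truth (lateral, longitudinal), meters
est = gt + rng.normal(0.0, [0.7, 0.3], size=(100, 2))  # perceived positions with noise

err = est - gt
rmse_lat, rmse_lon = np.sqrt((err ** 2).mean(axis=0))

print(f"lateral RMSE: {rmse_lat:.2f} m, longitudinal RMSE: {rmse_lon:.2f} m")
```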