

Title: Computational imaging with spectral coding increases the spatial resolution of fiber optic bundles

Fiber optic bundles are used in narrow-diameter medical and industrial instruments for acquiring images from confined locations. Images transmitted through these bundles contain only one pixel of information per fiber core and fail to capture information from the cladding region between cores. Both factors limit the spatial resolution attainable with fiber bundles. We show here that computational imaging (CI) can be combined with spectral coding to overcome these two fundamental limitations and improve spatial resolution in fiber bundle imaging. By acquiring multiple images of a scene with a high-resolution mask pattern imposed, up to 17 pixels of information can be recovered from each fiber core. A dispersive element at the distal end of the bundle imparts a wavelength-dependent lateral shift on light from the object. This enables light that would otherwise be lost at the inter-fiber cladding to be transmitted through adjacent fiber cores. We experimentally demonstrate this approach using synthetic and real objects. Using CI with spectral coding, object features 5× smaller than individual fiber cores were resolved, whereas conventional imaging could only resolve features at least 1.5× larger than each core. In summary, CI combined with spectral coding provides an approach for overcoming the two fundamental limitations of fiber optic bundle imaging.
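The recovery step described above can be viewed, per fiber core, as a small linear inverse problem: each coded acquisition integrates the mask-modulated object over the core, and stacking enough acquisitions makes the sub-core intensities identifiable. Below is a minimal 1-D sketch; the mask values, the 17-pixel count per core, and the number of acquisitions are illustrative assumptions, not the paper's actual calibration.

```python
import numpy as np

# Toy 1-D model: one fiber core integrates p sub-core "pixels" of the
# object x. K known mask patterns modulate x before integration, giving
# K measurements per core; stacking them yields a solvable linear system.
rng = np.random.default_rng(0)
p = 17                            # sub-core pixels to recover per core
K = 25                            # coded acquisitions (assumed K >= p)
x = rng.random(p)                 # unknown sub-core intensity profile
M = rng.random((K, p))            # rows = assumed random mask weights
y = M @ x                         # per-core coded measurements (noise-free)
x_hat, *_ = np.linalg.lstsq(M, y, rcond=None)  # least-squares recovery
print(np.allclose(x_hat, x, atol=1e-6))        # True in this noise-free case
```

With measurement noise one would regularize the inversion (e.g. ridge or sparsity priors), which is where computational imaging machinery beyond plain least squares comes in.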

 
NSF-PAR ID: 10397385
Author(s) / Creator(s): ; ; ;
Publisher / Repository: Optical Society of America
Journal Name: Optics Letters
Volume: 48
Issue: 5
ISSN: 0146-9592; OPLEDP
Page Range / eLocation ID: Article No. 1088
Sponsoring Org: National Science Foundation
More Like this
  1. Fiber bundles have become widely adopted for use in endoscopy, live-organism imaging, and other imaging applications. An inherent consequence of imaging with these bundles is the introduction of a honeycomb-like artifact that arises from the inter-fiber spacing, which obscures features of objects in the image. This artifact subsequently limits applicability and can make interpretation of the image-based data difficult. This work presents a method to reduce this artifact by on-axis rotation of the fiber bundle. Fiber bundle images were first low-pass and median filtered to improve image quality. Consecutive filtered images with rotated samples were then co-registered and averaged to generate a final, reconstructed image. The results demonstrate removal of the artifacts, in addition to increased signal contrast and signal-to-noise ratio. This approach combines digital filtering and spatial resampling to reconstruct higher-quality images, enhancing the utility of images acquired using fiber bundles.
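The pipeline described above — filter each rotated acquisition, co-register by undoing the rotation, then average — can be sketched as follows. This is an illustrative reconstruction under assumed inputs; the `reconstruct` helper, filter sizes, and frame format are hypothetical, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def reconstruct(frames, angles_deg):
    """Filter, co-register (by undoing each rotation), and average.

    frames: 2-D arrays acquired with the bundle rotated by the matching
    angle in angles_deg (assumed input format; filter sizes assumed too).
    """
    registered = []
    for frame, angle in zip(frames, angles_deg):
        f = ndimage.median_filter(frame, size=3)      # knock down core pattern
        f = ndimage.gaussian_filter(f, sigma=1.0)     # low-pass filter
        registered.append(ndimage.rotate(f, -angle, reshape=False))  # de-rotate
    return np.mean(registered, axis=0)  # averaging fills inter-fiber gaps
```

Because the honeycomb artifact rotates with the bundle while the scene does not, the de-rotated frames align on the scene and the averaged artifact washes out.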

     
  2. Abstract

    Endoscopes are an important component for the development of minimally invasive surgeries. Their size is one of the most critical aspects, because smaller and less rigid endoscopes enable higher agility, facilitate larger accessibility, and induce less stress on the surrounding tissue. In all existing endoscopes, the size of the optics poses a major limitation in miniaturization of the imaging system. Not only is making small optics difficult, but their performance also degrades with downscaling. Meta-optics have recently emerged as a promising candidate to drastically miniaturize optics while achieving similar functionalities with significantly reduced size. Herein, we report an inverse-designed meta-optic, which combined with a coherent fiber bundle enables a 33% reduction in the rigid tip length over traditional gradient-index (GRIN) lenses. We use the meta-optic fiber endoscope (MOFIE) to demonstrate real-time video capture in full visible color, the spatial resolution of which is primarily limited by the fiber itself. Our work shows the potential of meta-optics for integration and miniaturization of biomedical devices towards minimally invasive surgery.

     
  3. The optic nerve transmits visual information to the brain as trains of discrete events, a low-power, low-bandwidth communication channel also exploited by silicon retina cameras. Extracting high-fidelity visual input from retinal event trains is thus a key challenge for both computational neuroscience and neuromorphic engineering. Here, we investigate whether sparse coding can enable the reconstruction of high-fidelity images and video from retinal event trains. Our approach is analogous to compressive sensing, in which only a random subset of pixels is transmitted and the missing information is estimated via inference. We employed a variant of the Locally Competitive Algorithm to infer sparse representations from retinal event trains, using a dictionary of convolutional features optimized via stochastic gradient descent and trained in an unsupervised manner with a local Hebbian learning rule with momentum. We used an anatomically realistic retinal model with stochastic graded release from cones and bipolar cells to encode thumbnail images as spike trains arising from ON and OFF retinal ganglion cells. The spikes from each model ganglion cell were summed over a 32 msec time window, yielding a noisy rate-coded image. Analogous to how the primary visual cortex is postulated to infer features from noisy spike trains arriving via the optic nerve, we inferred a higher-fidelity sparse reconstruction from the noisy rate-coded image using a convolutional dictionary trained on the original CIFAR10 database. To investigate whether a similar approach works on non-stochastic data, we demonstrate that the same procedure can be used to reconstruct high-frequency video from the asynchronous events produced by a silicon retina camera moving through a laboratory environment.
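The Locally Competitive Algorithm mentioned above can be illustrated with a minimal dense (non-convolutional) version. The paper's variant is convolutional with a learned dictionary; the dictionary `Phi`, threshold `lam`, and step size `eta` here are assumed placeholders, not the authors' settings.

```python
import numpy as np

def lca(y, Phi, lam=0.1, steps=200, eta=0.1):
    """Minimal dense LCA: infer a sparse code a such that y ~= Phi @ a."""
    b = Phi.T @ y                              # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                 # membrane potentials
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(steps):
        a = soft(u)                            # thresholded activations
        u += eta * (b - u - G @ a)             # leaky integrator dynamics
    return soft(u)
```

Neurons whose features overlap inhibit one another through `G`, so only a sparse subset of units stays active at convergence — the property exploited here to denoise rate-coded images.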
  4. Abstract

    Observations of transient objects, such as short gamma-ray bursts and electromagnetic counterparts of gravitational wave sources, require prompt spectroscopy. To carry out prompt spectroscopy, we have developed an optical-fiber integral field unit (IFU) and connected it with an existing optical spectrograph, KOOLS. KOOLS–IFU was mounted on the Okayama Astrophysical Observatory 188 cm telescope. The fiber core and cladding diameters of the fiber bundle are 100 μm and 125 μm, respectively, and 127 fibers are hexagonally close-packed in the sleeve of the two-dimensional fiber array. We conducted test observations to measure the KOOLS–IFU performance and obtained the following conclusions: (1) the spatial sampling is 2″.34 ± 0″.05 per fiber, and the total field of view is 30″.4 ± 0″.65 with 127 fibers; (2) the observable wavelength and the spectral resolving power of the grisms of KOOLS are 4030–7310 Å and 400–600, 5020–8830 Å and 600–900, 4160–6000 Å and 1000–1200, and 6150–7930 Å and 1800–2400, respectively; and (3) the estimated limiting magnitude is 18.2–18.7 AB mag during 30 min exposure under optimal conditions.
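As a quick consistency check on the quoted numbers: 127 hexagonally close-packed fibers form a centered hexagonal array (127 = 3n(n+1) + 1 with n = 6), i.e. 13 fibers across the diameter, and 13 × 2″.34 ≈ 30″.4, matching the stated field of view.

```python
# 127 is the centered hexagonal number 3*n*(n+1) + 1 with n = 6,
# so the hex-packed array spans 2*n + 1 = 13 fibers across its diameter.
n = 6
assert 3 * n * (n + 1) + 1 == 127
fibers_across = 2 * n + 1
fov = fibers_across * 2.34        # arcsec; 13 * 2.34 = 30.42
print(round(fov, 1))              # → 30.4
```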

     
  5. Spectroscopic single-molecule localization microscopy (sSMLM) simultaneously provides the spatial localization and spectral information of individual single-molecule emissions, offering multicolor super-resolution imaging of multiple molecules in a single sample with nanoscopic resolution. However, this technique is limited by the requirement of acquiring a large number of frames to reconstruct a super-resolution image. In addition, multicolor sSMLM imaging suffers from spectral cross-talk when multiple dyes with relatively broad spectral bands are used, producing cross-color contamination. Here, we present a computational strategy to accelerate multicolor sSMLM imaging. Our method uses deep convolutional neural networks to reconstruct high-density multicolor super-resolution images from low-density, contaminated multicolor images rendered from sSMLM datasets with far fewer frames, without compromising spatial resolution. High-quality super-resolution images are reconstructed using up to 8-fold fewer frames than usually needed. Our technique thus generates multicolor super-resolution images in a much shorter time, without any changes to the existing sSMLM hardware system. Two-color and three-color sSMLM experimental results demonstrate superior reconstructions of tubulin/mitochondria, peroxisome/mitochondria, and tubulin/mitochondria/peroxisome in fixed COS-7 and U2-OS cells with a significant reduction in acquisition time.

     