Title: Inverse Error Function Trajectories for Image Reconstruction
*This material is based upon work supported by the National Science Foundation under Grant No. 1662029.
Capturing clear images while a camera is moving fast is integral to the development of mobile robots that can respond quickly and effectively to visual stimuli. This paper proposes to generate camera trajectories, with position and time constraints, that result in higher reconstructed image quality. The degradation of an image captured during motion is known as motion blur. Three main methods exist for mitigating the effects of motion blur: (i) controlling optical parameters, (ii) controlling camera motion, and (iii) image reconstruction. Given control of a camera's motion, trajectories can be generated that result in an expected blur kernel or point-spread function. This work compares the motion blur effects and reconstructed image quality of three trajectories: (i) linear, (ii) polynomial, and (iii) inverse error, where inverse error trajectories result in Gaussian blur kernels. Residence time analysis provides a basis for characterizing the motion blur effects of the trajectories.
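As a concrete illustration of the abstract's inverse error trajectory, the sketch below generates camera position samples whose residence time per image position, and therefore the expected blur kernel, is approximately Gaussian. This is a minimal sketch under assumed conventions: the function name, parameters, and endpoint clipping are illustrative, not the paper's implementation.

```python
# Minimal sketch of an inverse error function trajectory (illustrative only).
# Position x(t) follows a scaled inverse error function of normalized time, so
# the residence time dt/dx -- and hence the expected blur kernel -- is Gaussian.
import numpy as np
from scipy.special import erfinv

def inverse_error_trajectory(total_time, displacement, sigma, n_samples=500, eps=1e-3):
    """Return (t, x) samples with x(0) = 0 and x(total_time) = displacement.

    sigma sets the width of the resulting Gaussian blur kernel; eps trims the
    endpoints where erfinv diverges (an assumption made to keep the sketch
    finite, not a detail taken from the paper).
    """
    t = np.linspace(0.0, total_time, n_samples)
    u = np.clip(2.0 * t / total_time - 1.0, -1.0 + eps, 1.0 - eps)
    x = sigma * np.sqrt(2.0) * erfinv(u)             # centered inverse-error profile
    x = (x - x[0]) / (x[-1] - x[0]) * displacement   # rescale to meet the position constraint
    return t, x

# Example: a 10 mm translation completed in 50 ms.
t, x = inverse_error_trajectory(total_time=0.05, displacement=10.0, sigma=1.0)
```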
Award ID(s):
1662029
PAR ID:
10099747
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Volume:
1
Issue:
1
Page Range / eLocation ID:
7527 to 7532
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Single-photon avalanche diodes (SPADs) are a rapidly developing image sensing technology with extreme low-light sensitivity and picosecond timing resolution. These unique capabilities have enabled SPADs to be used in applications like LiDAR, non-line-of-sight imaging, and fluorescence microscopy that require imaging in photon-starved scenarios. In this work, we harness these capabilities to deal with motion blur in a passive imaging setting under low illumination. Our key insight is that the data captured by a SPAD array camera can be represented as a 3D spatio-temporal tensor of photon detection events, which can be integrated along arbitrary spatio-temporal trajectories with dynamically varying integration windows, depending on scene motion. We propose an algorithm that estimates pixel motion from photon timestamp data and dynamically adapts the integration windows to minimize motion blur. Our simulation results show the applicability of this algorithm to a variety of motion profiles including translation, rotation, and local object motion. We also demonstrate the real-world feasibility of our method on data captured using a 32x32 SPAD camera.
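The adaptive integration described above can be sketched as follows; this is a simplified illustration, not the authors' algorithm, and the photon tensor shape, integer shift estimates, and window length are assumed inputs.

```python
# Simplified sketch: integrate a SPAD photon cube along an estimated pixel-motion
# trajectory by shifting each binary frame back to a common reference before summing.
import numpy as np

def integrate_along_trajectory(photon_cube, shifts, window):
    """photon_cube: (T, H, W) binary photon detections.
    shifts: (T, 2) integer (dy, dx) scene motion relative to frame 0 (assumed given).
    window: number of frames to integrate (longer when motion is slow).
    """
    T, H, W = photon_cube.shape
    acc = np.zeros((H, W), dtype=np.float64)
    n = min(window, T)
    for k in range(n):
        dy, dx = shifts[k]
        # Undo the estimated motion so photons accumulate on frame-0 coordinates.
        acc += np.roll(photon_cube[k], shift=(-dy, -dx), axis=(0, 1))
    return acc / n
```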
  2. The task of automatically assessing and adjusting image quality when capturing face images with any band-specific camera sensor can be achieved by eliminating the influence of a variety of acquisition parameters, such as illumination. One such parameter related to image quality is sharpness. If sharpness is not accurately estimated during data collection, it may degrade the quality of the overall face image dataset and thus face recognition accuracy. While manually focusing each camera on the target (human face) can yield sharp-looking face images, the process is cumbersome for operators and subjects and increases data collection time. In this work, we developed an electromechanical system that automatically assesses face image sharpness prior to capture, rather than relying on post-processing schemes. Various blur quality factors and constraints were empirically evaluated before determining the algorithmic steps of our proposed system. This paper discusses the implementation of this system in a live, operational setting.
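The abstract does not spell out which blur quality factors the system uses; as a hedged stand-in, a common focus proxy is the variance of the image Laplacian, sketched below with a purely hypothetical threshold.

```python
# Illustrative stand-in only: variance-of-Laplacian focus measure, not the
# paper's actual blur quality factors. The threshold value is hypothetical.
import numpy as np
from scipy.ndimage import laplace

def is_sharp_enough(gray_image, threshold=100.0):
    """Return (sharp?, focus_measure) for a grayscale face image."""
    focus_measure = laplace(gray_image.astype(np.float64)).var()
    return focus_measure >= threshold, focus_measure
```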
  3. Abstract: We consider semantic image segmentation. Our method is inspired by Bayesian deep learning, which improves image segmentation accuracy by modeling the uncertainty of the network output. In contrast to modeling uncertainty, our method directly learns to predict the erroneous pixels of a segmentation network, which is modeled as a binary classification problem. It can speed up training compared to the Monte Carlo integration often used in Bayesian deep learning. It also allows us to train a branch to correct the labels of erroneous pixels. Our method consists of three stages: (i) predict the pixel-wise error probability of the initial result, (ii) redetermine new labels for pixels with high error probability, and (iii) fuse the initial result and the redetermined result with respect to the error probability. We formulate the error-pixel prediction problem as a classification task and employ an error-prediction branch in the network to predict pixel-wise error probabilities. We also introduce a detail branch to focus the training process on the erroneous pixels. We have experimentally validated our method on the Cityscapes and ADE20K datasets. Our model can easily be added to various advanced segmentation networks to improve their performance. Taking DeepLabv3+ as an example, our network achieves 82.88% mIoU on the Cityscapes test set and 45.73% on the ADE20K validation set, improving the corresponding DeepLabv3+ results by 0.74% and 0.13%, respectively.
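Stage (iii) above can be read as a per-pixel blend weighted by the predicted error probability; the sketch below shows one such fusion under assumed tensor shapes and names, not the authors' network code.

```python
# Minimal sketch of error-probability-weighted fusion (assumed shapes/names).
import numpy as np

def fuse_predictions(initial_probs, redetermined_probs, error_prob):
    """initial_probs, redetermined_probs: (C, H, W) class probabilities.
    error_prob: (H, W) predicted probability that the initial label is wrong.
    Returns per-pixel labels of shape (H, W).
    """
    w = error_prob[None, ...]                          # broadcast over classes
    fused = (1.0 - w) * initial_probs + w * redetermined_probs
    return fused.argmax(axis=0)
```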
  4. Objective: The purpose of this paper is to demonstrate the ultrasound tracking strategy for the acoustically actuated bubble-based microswimmer. Methods: The ultrasound tracking performance is evaluated by comparing the tracking results with camera tracking. A benchtop experiment is conducted to capture the motion of two types of microswimmers with synchronized ultrasound and camera systems. A laboratory-developed tracking algorithm is used to estimate the trajectory for both tracking methods. Results: The trajectory reconstructed from the ultrasound tracking method compares well with conventional camera tracking, exhibiting high accuracy and robustness for three different types of moving trajectories. Conclusion: Ultrasound tracking is an accurate and reliable approach to tracking the motion of acoustically actuated microswimmers. Significance: Ultrasound imaging is a promising candidate for noninvasively tracking the motion of microswimmers inside the body in biomedical applications and may further enable real-time control strategies for microswimmers.
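The abstract reports that the ultrasound-reconstructed trajectory compares well with camera tracking but does not state the metric; one simple, assumed way to quantify such agreement is a root-mean-square position error between time-aligned trajectories, as sketched below.

```python
# Illustrative only: RMSE between two time-aligned (N, 2) position trajectories.
import numpy as np

def trajectory_rmse(ultrasound_xy, camera_xy):
    """Root-mean-square position error, assuming both trajectories share a frame."""
    diff = np.asarray(ultrasound_xy, dtype=float) - np.asarray(camera_xy, dtype=float)
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))
```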
  5. Synopsis Digital photography and videography provide rich data for the study of animal behavior and are consequently widely used techniques. For fixed, unmoving cameras there is a resolution versus field-of-view tradeoff and motion blur smears the subject on the sensor during exposure. While these fundamental tradeoffs with stationary cameras can be sidestepped by employing multiple cameras and providing additional illumination, this may not always be desirable. An alternative that overcomes these issues of stationary cameras is to direct a high-magnification camera at an animal continually as it moves. Here, we review systems in which automatic tracking is used to maintain an animal in the working volume of a moving optical path. Such methods provide an opportunity to escape the tradeoff between resolution and field of view and also to reduce motion blur while still enabling automated image acquisition. We argue that further development will be useful and outline potential innovations that may improve the technology and lead to more widespread use. 