Search results for: All records, Creators/Authors contains: "Zickler, Todd"

  1. Abstract

    Let $$\textbf{p}$$ be a configuration of $$n$$ points in $$\mathbb{R}^d$$ for some $$n$$ and some $$d \ge 2$$. Each pair of points defines an edge, which has a Euclidean length in the configuration. A path is an ordered sequence of the points, and a loop is a path that begins and ends at the same point. A path or loop, as a sequence of edges, also has a Euclidean length, which is simply the sum of its Euclidean edge lengths. We are interested in reconstructing $$\textbf{p}$$ given a set of edge, path, and loop lengths. In particular, we consider the unlabeled setting, where the lengths are given simply as a set of real numbers and are not labeled with the combinatorial data describing which paths or loops gave rise to them. In this paper, we study the question of when $$\textbf{p}$$ will be uniquely determined (up to an unknowable Euclidean transform) from a given set of path or loop lengths through an exhaustive trilateration process. Such a process has already been used for the simpler problem of reconstruction from unlabeled edge lengths. This paper also provides a complete proof that this process must work in that edge setting when given a sufficiently rich set of edge measurements and assuming that $$\textbf{p}$$ is generic.

     
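    As an illustration of the trilateration step that underlies this process, the sketch below (an assumption of this write-up, not code from the paper) recovers one unknown point from its distances to $$d+1$$ known points in $$\mathbb{R}^d$$ by linearizing the squared-distance equations; the paper's exhaustive, unlabeled version additionally searches over which measured lengths could play these roles.

        # Hypothetical illustration (not from the paper): recover a point from its
        # distances to known anchor points by linearizing |x - a_i|^2 = d_i^2.
        import numpy as np

        def trilaterate(anchors, dists):
            """anchors: (m, d) array of known points; dists: (m,) distances to the unknown point."""
            a0, d0 = anchors[0], dists[0]
            # Subtracting the i-th squared-distance equation from the 0-th gives a linear system:
            #   2 (a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
            A = 2.0 * (anchors[1:] - a0)
            b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x

        # Usage: three generic anchors in R^2 determine a fourth point from its distances.
        rng = np.random.default_rng(0)
        anchors = rng.standard_normal((3, 2))
        hidden = rng.standard_normal(2)
        dists = np.linalg.norm(anchors - hidden, axis=1)
        assert np.allclose(trilaterate(anchors, dists), hidden)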
  2. Jumping spiders (Salticidae) rely on accurate depth perception for predation and navigation. They accomplish depth perception, despite their tiny brains, by using specialized optics. Each principal eye includes a multitiered retina that simultaneously receives multiple images with different amounts of defocus, and from these images, distance is decoded with relatively little computation. We introduce a compact depth sensor that is inspired by the jumping spider. It combines metalens optics, which modify the phase of incident light at a subwavelength scale, with efficient computations to measure depth from image defocus. Instead of using a multitiered retina to transduce multiple simultaneous images, the sensor uses a metalens to split the light that passes through an aperture and concurrently form two differently defocused images at distinct regions of a single planar photosensor. We demonstrate a system that deploys a 3-mm-diameter metalens to measure depth over a 10-cm distance range, using fewer than 700 floating-point operations per output pixel. Compared with previous passive depth sensors, our metalens depth sensor is compact, operates in a single shot, and requires little computation. This integration of nanophotonics and efficient computation brings artificial depth sensing closer to being feasible on millimeter-scale, microwatt platforms such as microrobots and microsensor networks.
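    The following is a schematic sketch of the two-image depth-from-defocus computation described above, written to convey the idea rather than reproduce the sensor's pipeline; the function names, the smoothing scale, and the calibration map ratio_to_depth are assumptions, since the actual system uses a calibrated relation between the defocus difference and metric depth.

        # Hedged sketch: per-pixel depth from two differently defocused images.
        # The calibrated map from the defocus ratio to metric depth is replaced here
        # by a placeholder supplied by the caller.
        import numpy as np
        from scipy.ndimage import gaussian_filter, laplace

        def depth_from_defocus_pair(I_plus, I_minus, ratio_to_depth, smooth=2.0):
            """I_plus, I_minus: two images of the same scene with different amounts of defocus."""
            diff = gaussian_filter(I_plus - I_minus, smooth)                   # defocus difference
            lap = gaussian_filter(laplace(0.5 * (I_plus + I_minus)), smooth)   # image Laplacian
            ratio = diff / (lap + 1e-8)                                        # approximately texture-invariant
            confidence = np.abs(lap)                                           # textureless regions are unreliable
            return ratio_to_depth(ratio), confidence

        # Usage with a placeholder linear calibration (an assumption, not the sensor's):
        # depth, conf = depth_from_defocus_pair(I1, I2, lambda r: 0.10 + 0.05 * r)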
  3. The focal track sensor is a monocular and computationally efficient depth sensor that is based on defocus controlled by a liquid membrane lens. It synchronizes small lens oscillations with a photosensor to produce real-time depth maps by means of differential defocus, and it couples these oscillations with larger lens deformations that adapt the defocus working range to track objects over large axial distances. To create the focal track sensor, we derive a texture-invariant family of equations that relate image derivatives to scene depth when a lens changes its focal length differentially. Based on these equations, we design a feed-forward sequence of computations that robustly incorporates image derivatives at multiple scales, produces confidence maps along with depth, and can be trained end-to-end to mitigate noise, aberrations, and other non-idealities. Our prototype with 1-inch optics produces depth and confidence maps at 100 frames per second over an axial range of more than 75 cm.
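    Below is a minimal sketch of the differential-defocus idea behind this sensor, under the simplifying assumption that a small focal-length change makes the temporal image derivative approximately proportional to the image Laplacian, with a depth-dependent factor; the multi-scale least-squares combination and the calibration map kappa_to_depth are illustrative assumptions, not the paper's trained pipeline.

        # Hedged sketch: decode a depth-dependent factor kappa from two frames taken
        # at slightly different focal lengths, combining derivatives at several scales.
        import numpy as np
        from scipy.ndimage import gaussian_filter, laplace

        def differential_defocus_depth(frame_a, frame_b, kappa_to_depth, scales=(1.0, 2.0, 4.0)):
            """frame_a, frame_b: consecutive frames captured at slightly different focal lengths."""
            num, den = 0.0, 0.0
            for s in scales:                                    # multi-scale, per-pixel least squares
                I_t = gaussian_filter(frame_b - frame_a, s)     # temporal (focal) derivative
                lapI = gaussian_filter(laplace(0.5 * (frame_a + frame_b)), s)
                num, den = num + I_t * lapI, den + lapI**2
            kappa = num / (den + 1e-8)                          # depth-dependent proportionality factor
            confidence = den                                    # low texture -> low confidence
            return kappa_to_depth(kappa), confidence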