Much of sensory neuroscience focuses on sensory features that are chosen by the experimenter because they are thought to be behaviorally relevant to the organism. However, it is not generally known what these features are in complex, natural scenes. This work uses the retinal encoding of natural movies to determine the presumably behaviorally relevant features that the brain represents. It is prohibitive to fully parameterize a natural movie and its respective retinal encoding. We use time within a natural movie as a proxy for the whole suite of features evolving across the scene. We then use a task-agnostic deep architecture, an encoder-decoder, to model the retinal encoding process and characterize its representation of "time in the natural scene" in a compressed latent space. In our end-to-end training, an encoder learns a compressed latent representation from a large population of salamander retinal ganglion cells responding to natural movies, while a decoder samples from this compressed latent space to generate the appropriate movie frame. By comparing latent representations of retinal activity from three movies, we find that the retina performs transfer learning to encode time: the precise, low-dimensional representation of time learned from one movie can be used to represent time in a different movie, with up to 17 ms resolution. We then show that static textures and velocity features of a natural movie are synergistic. The retina simultaneously encodes both to establish a generalizable, low-dimensional representation of time in the natural scene.
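The encoder-decoder shape described above can be sketched minimally in numpy. This is an illustrative stand-in only: the cell count, latent dimensionality, frame size, and the linear maps are all assumptions for the sketch, not the paper's deep architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 100    # hypothetical number of retinal ganglion cells
latent_dim = 3   # hypothetical compressed latent dimensionality
frame_px = 64 * 64

# Illustrative linear encoder/decoder; the actual model is a deep network
# trained end-to-end to reconstruct the movie frame from retinal responses.
W_enc = rng.normal(scale=0.1, size=(n_cells, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, frame_px))

def encode(responses):
    """Map a population response vector to a compressed latent code."""
    return responses @ W_enc

def decode(latent):
    """Map a latent code back to a flattened movie frame."""
    return latent @ W_dec

responses = rng.poisson(2.0, size=n_cells).astype(float)  # toy spike counts
z = encode(responses)
frame_hat = decode(z)
```

The point of the compression is that `z` is far smaller than either the response vector or the frame, so any information preserved through it (here, "time in the scene") must be explicitly represented.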
An End-to-End Learning Architecture for Efficient Image Encoding and Deep Learning
Learning-based image/video codecs typically utilize the well-known auto-encoder structure, where the encoder transforms input data to a low-dimensional latent representation. Efficient latent encoding can reduce bandwidth needs during compression for transmission and storage. In this paper, we examine the effect of assigning high-level coarse grouping labels to each latent vector. Designing coding profiles for each latent group can achieve high compression encoding. We show that such grouping can be learned via end-to-end optimization of the codec and the deep learning (DL) model to optimize rate-accuracy for a given data set. For cloud-based inference, the source encoder can select a coding profile based on its learned grouping and encode the data features accordingly. Our test results on image classification show that significant performance improvement can be achieved with learned grouping over its non-grouping counterpart.
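The grouping-plus-profile idea can be illustrated with a toy numpy sketch. Everything here is assumed for illustration: the latent vectors, the two fixed group centroids (in the paper the grouping is learned end-to-end), and a coding profile reduced to a single per-group quantization step size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent vectors from a hypothetical learned encoder (8 vectors, dim 4).
latents = rng.normal(size=(8, 4))

# Hypothetical coarse group centroids (2 groups), fixed here for clarity.
centroids = np.array([[ 1.0,  1.0,  1.0,  1.0],
                      [-1.0, -1.0, -1.0, -1.0]])

# Assign each latent vector to its nearest centroid: the coarse group label.
dists = np.linalg.norm(latents[:, None, :] - centroids[None, :, :], axis=-1)
groups = dists.argmin(axis=1)

# Per-group "coding profile", reduced here to a quantization step size;
# group 1 is coded more coarsely (larger step -> fewer bits).
step = np.array([0.5, 2.0])

per_vec_step = step[groups][:, None]
quantized = np.round(latents / per_vec_step) * per_vec_step
```

Coarser groups yield larger quantization error but cheaper codes, which is the rate-accuracy trade-off the end-to-end training would optimize.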
- Award ID(s): 1934568
- PAR ID: 10350087
- Date Published:
- Journal Name: 29th European Signal Processing Conference (EUSIPCO)
- Page Range / eLocation ID: 691 to 695
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Generative models based on latent variables, such as generative adversarial networks (GANs) and variational auto-encoders (VAEs), have gained a lot of interest due to their impressive performance in many fields. However, many data such as natural images usually do not populate the ambient Euclidean space but instead reside in a lower-dimensional manifold. Thus an inappropriate choice of the latent dimension fails to uncover the structure of the data, possibly resulting in mismatch of latent representations and poor generative quality. Toward addressing these problems, we propose a novel framework called the latent Wasserstein GAN (LWGAN) that fuses the Wasserstein auto-encoder and the Wasserstein GAN so that the intrinsic dimension of the data manifold can be adaptively learned by a modified informative latent distribution. We prove that there exist an encoder network and a generator network such that the intrinsic dimension of the learned encoding distribution is equal to the dimension of the data manifold. We theoretically establish that our estimated intrinsic dimension is a consistent estimate of the true dimension of the data manifold. Meanwhile, we provide an upper bound on the generalization error of LWGAN, implying that we force the synthetic data distribution to be similar to the real data distribution from a population perspective. Comprehensive empirical experiments verify our framework and show that LWGAN is able to identify the correct intrinsic dimension under several scenarios, and simultaneously generate high-quality synthetic data by sampling from the learned latent distribution. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
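LWGAN learns the intrinsic dimension adaptively through a modified latent distribution; as a much simpler stand-in for that idea, a PCA spectrum threshold recovers the dimension of a linear manifold. The data, noise level, and threshold below are all assumptions for the sketch and are not part of the LWGAN method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data on a 2-D linear manifold embedded in 10-D ambient space,
# plus tiny isotropic noise.
basis = rng.normal(size=(2, 10))
coeffs = rng.normal(size=(500, 2))
X = coeffs @ basis + 1e-6 * rng.normal(size=(500, 10))

# Spectrum-threshold estimate of intrinsic dimension: count the singular
# directions carrying a non-negligible share of the total variance.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / np.sum(s**2)
intrinsic_dim = int(np.sum(explained > 1e-6))
```

For curved (nonlinear) manifolds this linear estimate breaks down, which is the regime where a learned encoding distribution such as LWGAN's is needed.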
Hydrologic signatures are quantitative metrics that describe a streamflow time series. Examples include annual maximum flow, baseflow index and recession shape descriptors. In this paper, we use machine learning (ML) to learn encodings that are optimal ML equivalents of hydrologic signatures, and that are derived directly from the data. We compare the learned signatures to classical signatures, interpret their meaning, and use them to build rainfall‐runoff models in otherwise ungauged watersheds. Our model has an encoder–decoder structure. The encoder is a convolutional neural net mapping historical flow and climate data to a low‐dimensional vector encoding, analogous to hydrological signatures. The decoder structure includes stores and fluxes similar to a classical hydrologic model. For each timestep, the decoder uses current climate data, watershed attributes and the encoding to predict coefficients that distribute precipitation between stores and store outflow coefficients. The model is trained end‐to‐end on the U.S. CAMELS watershed data set to minimize streamflow error. We show that learned signatures can extract new information from streamflow series, because using learned signatures as input to the process‐informed model improves prediction accuracy over benchmark configurations that use classical signatures or no signatures. We interpret learned signatures by correlation with classical signatures, and by using sensitivity analysis to assess their impact on modeled store dynamics. Learned signatures are spatially correlated and relate to streamflow dynamics including seasonality, high and low extremes, baseflow and recessions. We conclude that process‐informed ML models and other applications using hydrologic signatures may benefit from replacing expert‐selected signatures with learned signatures.
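Two of the classical signatures named above can be computed directly. This sketch uses a synthetic flow series and a deliberately crude baseflow estimate (a running-minimum filter rather than the recursive digital filters used in practice); the series, window, and units are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy daily streamflow: 3 "years" of seasonal signal plus stormflow spikes.
t = np.arange(3 * 365)
flow = 5 + 3 * np.sin(2 * np.pi * t / 365) + rng.exponential(1.0, size=t.size)

def annual_max_flow(q, days_per_year=365):
    """Maximum flow in each complete year of the record."""
    n = q.size // days_per_year * days_per_year
    return q[:n].reshape(-1, days_per_year).max(axis=1)

def baseflow_index(q, window=30):
    """Crude baseflow index: ratio of a running-minimum 'baseflow' to
    total flow. Real BFI computations use smoother recursive filters."""
    base = np.array([q[max(0, i - window):i + 1].min() for i in range(q.size)])
    return base.sum() / q.sum()

amax = annual_max_flow(flow)
bfi = baseflow_index(flow)
```

The learned encodings in the paper play the same role as `amax` and `bfi` here: a low-dimensional summary of the flow series, except optimized end-to-end rather than expert-selected.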
Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. Prior works show that structured latent spaces such as visual keypoints often outperform unstructured representations for robotic control. However, most of these representations, whether structured or unstructured, are learned in a 2D space even though the control tasks are usually performed in a 3D environment. In this work, we propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner. The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and a downstream task objective. These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space. The proposed approach outperforms prior state-of-the-art methods across a variety of reinforcement learning benchmarks. Code and videos at https://buoyancy99.github.io/unsup-3d-keypoints/
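A multi-view consistency loss of the kind described can be sketched as follows. The orthographic cameras, keypoint count, and the loss form are all simplifying assumptions; the paper's encoder is trained on such a loss jointly with the task objective.

```python
import numpy as np

rng = np.random.default_rng(4)

def project(points_3d, cam):
    """Orthographic projection of Nx3 points through a 2x3 camera matrix."""
    return points_3d @ cam.T

def multiview_consistency_loss(kp_view1, kp_view2, cam1, cam2, points_3d):
    """MSE between per-view 2-D keypoint predictions and the projections
    of one shared set of 3-D keypoints."""
    err1 = kp_view1 - project(points_3d, cam1)
    err2 = kp_view2 - project(points_3d, cam2)
    return (err1**2).mean() + (err2**2).mean()

points = rng.normal(size=(5, 3))                 # 5 latent 3-D keypoints
cam1 = np.array([[1., 0., 0.], [0., 1., 0.]])    # top-down view
cam2 = np.array([[1., 0., 0.], [0., 0., 1.]])    # side view

# Per-view predictions consistent with one 3-D explanation give zero loss;
# perturbing one view's predictions raises it.
loss_consistent = multiview_consistency_loss(
    project(points, cam1), project(points, cam2), cam1, cam2, points)
loss_perturbed = multiview_consistency_loss(
    project(points, cam1) + 0.1, project(points, cam2), cam1, cam2, points)
```

Minimizing this loss forces the per-view keypoint detections to agree with a single 3-D configuration, which is what makes the learned keypoints geometric rather than merely 2-D.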
Contrastive learning has been applied successfully to learn vector representations of text. Previous research demonstrated that learning high-quality representations benefits from batch-wise contrastive loss with a large number of negatives. In practice, the technique of in-batch negatives is used, where for each example in a batch, the other batch examples' positives are taken as its negatives, avoiding encoding extra negatives. This, however, still conditions each example's loss on all batch examples and requires fitting the entire large batch into GPU memory. This paper introduces a gradient caching technique that decouples backpropagation between the contrastive loss and the encoder, removing the encoder backward pass's data dependency along the batch dimension. As a result, gradients can be computed for one subset of the batch at a time, leading to almost constant memory usage.
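The decoupling idea can be verified on a toy model: with cached representations, the gradient of the in-batch contrastive (InfoNCE) loss with respect to the representations is computed once, and parameter gradients are then accumulated chunk by chunk. The linear "encoder", sizes, and manual derivatives below are illustrative assumptions, not the paper's implementation; the chunked gradient matches the full-batch one exactly.

```python
import numpy as np

rng = np.random.default_rng(5)

n, d_in, d_out = 8, 6, 4
Xq = rng.normal(size=(n, d_in))     # query inputs
Xk = rng.normal(size=(n, d_in))     # key (positive) inputs
W = rng.normal(size=(d_in, d_out))  # toy shared linear "encoder"

def softmax(S):
    e = np.exp(S - S.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Step 1: representation pass over the whole batch (no parameter gradients).
Q, K = Xq @ W, Xk @ W

# Step 2: gradient of the InfoNCE loss w.r.t. the cached representations.
# Loss = mean_i cross-entropy(softmax(Q K^T)[i], i); labels on the diagonal.
P = softmax(Q @ K.T)
dS = (P - np.eye(n)) / n   # d(loss)/d(scores)
dQ, dK = dS @ K, dS.T @ Q

# Step 3: re-encode one chunk at a time, accumulating parameter gradients.
# Peak memory now scales with the chunk size, not the batch size.
dW_cached = np.zeros_like(W)
for c in range(0, n, 2):
    sl = slice(c, c + 2)
    dW_cached += Xq[sl].T @ dQ[sl] + Xk[sl].T @ dK[sl]

# Reference: the full-batch parameter gradient in one pass.
dW_full = Xq.T @ dQ + Xk.T @ dK
```

The equality holds because the parameter gradient is a sum of per-example terms once `dQ` and `dK` are fixed, so splitting the batch into chunks changes only the memory footprint, not the result.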