Much of sensory neuroscience focuses on sensory features that are chosen by the experimenter because they are thought to be behaviorally relevant to the organism. However, it is not generally known what these features are in complex, natural scenes. This work uses the retinal encoding of natural movies to determine the presumably behaviorally relevant features that the brain represents. Because it is prohibitive to fully parameterize a natural movie and its retinal encoding, we use time within a natural movie as a proxy for the whole suite of features evolving across the scene. We then use a task-agnostic deep architecture, an encoder-decoder, to model the retinal encoding process and characterize its representation of "time in the natural scene" in a compressed latent space. In our end-to-end training, an encoder learns a compressed latent representation from a large population of salamander retinal ganglion cells responding to natural movies, while a decoder samples from this compressed latent space to generate the appropriate movie frame. By comparing latent representations of retinal activity from three movies, we find that the retina performs transfer learning to encode time: the precise, low-dimensional representation of time learned from one movie can be used to represent time in a different movie, with up to 17 ms resolution. We then show that static textures and velocity features of a natural movie are synergistic. The retina simultaneously encodes both to establish a generalizable, low-dimensional representation of time in the natural scene.
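As a rough illustration of the encoder-decoder setup described above (not the authors' actual architecture or data), the following Python sketch trains a small network end-to-end on synthetic ganglion cell responses; the cell count, layer widths, latent dimension, and training data are all placeholder assumptions.

```python
# Minimal sketch (not the authors' code): an encoder-decoder trained end-to-end
# on simulated retinal ganglion cell (RGC) responses. All sizes, layer widths,
# and the synthetic data are illustrative assumptions.
import torch
import torch.nn as nn

N_CELLS, LATENT_DIM, FRAME_PIXELS = 200, 8, 64 * 64  # hypothetical sizes

class RetinalEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: population spike counts -> compressed latent representation
        self.encoder = nn.Sequential(
            nn.Linear(N_CELLS, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )
        # Decoder: latent code -> reconstructed movie frame (flattened)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, FRAME_PIXELS),
        )

    def forward(self, spikes):
        z = self.encoder(spikes)   # compressed latent code
        frame = self.decoder(z)    # predicted movie frame
        return frame, z

# Synthetic stand-in data: 1000 time bins of spike counts and matching frames.
spikes = torch.poisson(torch.rand(1000, N_CELLS) * 2.0)
frames = torch.rand(1000, FRAME_PIXELS)

model = RetinalEncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):  # short illustrative training loop
    pred, _ = model(spikes)
    loss = loss_fn(pred, frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the study itself, it is the latent codes learned in this way that are compared across movies to probe how time in the scene is represented.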
Markov chain models of emitter activations in single molecule localization microscopy
A well-reasoned model of the data movie in single molecule localization microscopy (SMLM) is desired. Such a model can be decomposed into a model of the emitter activation process and a model of the data frame. In this paper, we focus on Markov chain modeling and analysis of the emitter activation process for both cycled and continuous illumination. First, a two-phase Markov chain is proposed to model the activation process for a pair of conjugated activator and emitter under cycled illumination. By converting the frame-based Markov chain into several cycle-based Markov chains, the stationary state distribution in the photoactivatable period is derived. Several formulas that characterize the two-phase Markov chain are further obtained. Second, the Markov chain and analytical results are extended to continuous illumination, where an emitter is excited continuously in all frames. Finally, incorporating the model of the emitter activation process with our previous model of the data frame, data movies for both cycled and continuous illumination in 3D and 2D imaging are simulated with custom code. It is shown that the model synthesizes data movies well and that the analytical formulas accurately predict the simulation results. The models can thus be broadly utilized to generate well-reasoned data movies for training neural networks and evaluating localization algorithms.
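As a hedged illustration of the kind of emitter-state Markov chain discussed here (a simplified stand-in, not the paper's two-phase construction), the sketch below simulates a per-frame chain over dark, active, and bleached states under continuous illumination and extracts a quasi-stationary distribution; all transition probabilities are invented placeholders.

```python
# Simplified illustration (not the paper's exact model): a per-frame Markov
# chain over emitter states {dark, active, bleached} under continuous
# illumination. Transition probabilities are made-up placeholders.
import numpy as np

states = ["dark", "active", "bleached"]
# Row i -> column j: P(state j at frame t+1 | state i at frame t)
P = np.array([
    [0.97, 0.02, 0.01],   # dark  -> dark / active / bleached
    [0.30, 0.65, 0.05],   # active stays on for a few frames or bleaches
    [0.00, 0.00, 1.00],   # bleached is absorbing
])

def simulate(n_frames, rng):
    """Simulate one emitter's state sequence across a data movie."""
    traj = np.empty(n_frames, dtype=int)
    s = 0  # start dark
    for t in range(n_frames):
        traj[t] = s
        s = rng.choice(3, p=P[s])
    return traj

rng = np.random.default_rng(0)
traj = simulate(2000, rng)
print("fraction of frames active:", np.mean(traj == 1))

# Quasi-stationary distribution over the non-absorbing states {dark, active},
# from the leading left eigenvector of the substochastic block of P.
Q = P[:2, :2]
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print("quasi-stationary distribution (dark, active):", pi)
```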
- Award ID(s): 2313072
- PAR ID: 10540060
- Publisher / Repository: Optical Society of America
- Date Published:
- Journal Name: Optics Express
- Volume: 32
- Issue: 19
- ISSN: 1094-4087; OPEXFF
- Format(s): Medium: X
- Size(s): Article No. 33779
- Sponsoring Org: National Science Foundation
More Like this
Conditional density estimation seeks to model the distribution of a response variable conditional on covariates. We propose a Bayesian partition model using logistic Gaussian processes to perform conditional density estimation. The partition takes the form of a Voronoi tessellation and is learned from the data using a reversible jump Markov chain Monte Carlo algorithm. The methodology models data in which the density changes sharply throughout the covariate space, and can be used to determine where important changes in the density occur. The Markov chain Monte Carlo algorithm involves a Laplace approximation on the latent variables of the logistic Gaussian process model, which marginalizes the parameters in each partition element, allowing an efficient search of the approximate posterior distribution of the tessellation. The method is consistent when the density is piecewise constant in the covariate space or when the density is Lipschitz continuous with respect to the covariates. In simulation and in application to wind turbine data, the model successfully estimates the partition structure and conditional distribution.
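To make the partition idea concrete, here is a much-simplified sketch that fixes a Voronoi tessellation of a one-dimensional covariate and fits a separate kernel density estimate of the response in each cell; the actual method learns the tessellation by reversible jump MCMC and models each cell with a logistic Gaussian process, so the centers, data, and density estimates below are illustrative only.

```python
# Simplified sketch of the partition idea only (not the paper's method): fix a
# Voronoi tessellation of the covariate space and estimate a separate response
# density within each cell.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Synthetic data: the conditional density of y changes sharply at x = 0.5.
x = rng.uniform(0, 1, size=(500, 1))
y = np.where(x[:, 0] < 0.5, rng.normal(0, 1, 500), rng.normal(3, 0.5, 500))

centers = np.array([[0.25], [0.75]])          # hypothetical Voronoi centers

def voronoi_cell(xq, centers):
    """Index of the nearest center for each covariate point."""
    d = np.linalg.norm(xq[:, None, :] - centers[None, :, :], axis=-1)
    return d.argmin(axis=1)

cells = voronoi_cell(x, centers)
kdes = [gaussian_kde(y[cells == k]) for k in range(len(centers))]

# Conditional density estimate at a new covariate value.
x_new = np.array([[0.8]])
k = voronoi_cell(x_new, centers)[0]
grid = np.linspace(-3, 5, 200)
print("estimated p(y | x=0.8) peaks near:", grid[np.argmax(kdes[k](grid))])
```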
Two recent and seemingly unrelated techniques for proving mixing bounds for Markov chains are: (i) the framework of Spectral Independence, introduced by Anari, Liu, and Oveis Gharan, and its numerous extensions, which have given rise to several breakthroughs in the analysis of mixing times of discrete Markov chains, and (ii) the Stochastic Localization technique, which has proven useful in establishing mixing and expansion bounds both for log-concave measures and for measures on the discrete hypercube. In this paper, we introduce a framework which connects ideas from both techniques. Our framework unifies, simplifies, and extends those two techniques. At its center is the concept of a localization scheme which, to every probability measure, assigns a martingale of probability measures which localize in space as time evolves. As it turns out, to every such scheme corresponds a Markov chain, and many chains of interest appear naturally in this framework. This viewpoint provides tools for deriving mixing bounds for the dynamics through the analysis of the corresponding localization process. Generalizations of the concepts of Spectral Independence and Entropic Independence naturally arise from our definitions, and in particular we recover the main theorems in the spectral and entropic independence frameworks via simple martingale arguments (completely bypassing the need to use the theory of high-dimensional expanders). We demonstrate the strength of our proposed machinery by giving short and (arguably) simpler proofs of many mixing bounds in the recent literature, including the first O(n log n) bound for the mixing time of Glauber dynamics on the hardcore model (of arbitrary degree) in the tree-uniqueness regime.
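For readers unfamiliar with the dynamics named in the final result, this small sketch runs Glauber dynamics for the hardcore model (uniformly random single-site updates of an independent set) on a toy grid graph; the graph, fugacity, and step count are arbitrary choices, and the sketch says nothing about the mixing-time analysis itself.

```python
# Illustrative sketch of the dynamics analyzed in such mixing-time results:
# single-site Glauber updates for the hardcore model (independent sets) on a
# small grid graph with fugacity lam. Graph, size, and lam are placeholders.
import numpy as np

n, lam = 10, 0.5                       # 10x10 grid, hypothetical fugacity
rng = np.random.default_rng(2)
occupied = np.zeros((n, n), dtype=bool)

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < n and 0 <= j + dj < n:
            yield i + di, j + dj

def glauber_step():
    """One update: pick a vertex and resample its spin given its neighbors."""
    i, j = rng.integers(n), rng.integers(n)
    if any(occupied[a, b] for a, b in neighbors(i, j)):
        occupied[i, j] = False          # hardcore constraint forces vacancy
    else:
        occupied[i, j] = rng.random() < lam / (1 + lam)

for _ in range(50_000):
    glauber_step()
print("density of the sampled independent set:", occupied.mean())
```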
Continuous speaker separation aims to separate overlapping speakers in real-world environments like meetings, but it often falls short in isolating speech segments of a single speaker. This leads to split signals that adversely affect downstream applications such as automatic speech recognition and speaker diarization. Existing solutions like speaker counting have limitations. This paper presents a novel multi-channel approach for continuous speaker separation based on multi-input multi-output (MIMO) complex spectral mapping. This MIMO approach enables robust speaker localization by preserving inter-channel phase relations. Speaker localization as a byproduct of the MIMO separation model is then used to identify single-talker frames and reduce speaker splitting. We demonstrate that this approach achieves superior frame-level sound localization. Systematic experiments on the LibriCSS dataset further show that the proposed approach outperforms other methods, advancing state-of-the-art speaker separation performance.
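As a toy illustration of why inter-channel phase relations carry localization information (this is not the MIMO separation network itself), the sketch below delays a signal between two simulated microphones and recovers the delay from the slope of the inter-channel phase difference across frequency; the sample rate, delay, and frame length are arbitrary.

```python
# Toy illustration (not the paper's MIMO network): a pure delay between two
# microphones appears as a linear inter-channel phase difference across
# frequency, from which the time difference of arrival can be read off.
import numpy as np

fs, delay_samples = 16000, 3            # hypothetical sample rate and delay
rng = np.random.default_rng(3)
src = rng.standard_normal(fs)           # 1 s of noise standing in for speech

mic1 = src
mic2 = np.roll(src, delay_samples)      # same signal, delayed at mic 2

# One STFT frame per channel.
window = np.hanning(512)
X1 = np.fft.rfft(mic1[:512] * window)
X2 = np.fft.rfft(mic2[:512] * window)

# Inter-channel phase difference; its slope over frequency gives the delay.
ipd = np.angle(X1 * np.conj(X2))
freqs = np.fft.rfftfreq(512, d=1 / fs)
slope = np.polyfit(2 * np.pi * freqs[1:100], np.unwrap(ipd[1:100]), 1)[0]
print("estimated delay (samples):", slope * fs)
```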
Assortment optimization finds many important applications in both brick-and-mortar and online retailing. Decision makers select a subset of products to offer to customers from a universe of substitutable products, under the assumption that customers purchase according to a Markov chain choice model, a very general choice model that encompasses many popular models. The existing literature predominantly assumes that the customer arrival process and the Markov chain choice model parameters are given as input to the stochastic optimization model. In practice, however, decision makers may not have this information and must learn it while maximizing the total expected revenue on the fly. In "Online Learning for Constrained Assortment Optimization under the Markov Chain Choice Model," S. Li, Q. Luo, Z. Huang, and C. Shi developed a series of online learning algorithms for Markov chain choice-based assortment optimization problems that are efficient and come with provable performance guarantees.
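For context, the sketch below works out purchase probabilities and expected revenue for a fixed assortment under a Markov chain choice model by solving the absorbing-chain linear system; the arrival probabilities, transition matrix, prices, and assortment are made-up numbers, and the paper's online learning algorithms are not shown.

```python
# Sketch (not the paper's learning algorithm) of the underlying Markov chain
# choice model: given arrival probabilities lam, transition matrix rho, and an
# offered assortment S, compute each product's purchase probability, then the
# assortment's expected revenue. All numbers are made-up placeholders.
import numpy as np

lam = np.array([0.3, 0.3, 0.2, 0.2])          # initial-arrival probabilities
rho = np.array([                               # substitution transitions
    [0.0, 0.5, 0.3, 0.1],
    [0.4, 0.0, 0.3, 0.2],
    [0.3, 0.3, 0.0, 0.3],
    [0.2, 0.4, 0.3, 0.0],
])                                             # rows sum to < 1 (no-purchase)
prices = np.array([10.0, 8.0, 6.0, 4.0])

def choice_probabilities(S, lam, rho):
    """Purchase probability of each offered product under the MC choice model."""
    n = len(lam)
    offered = np.zeros(n, dtype=bool)
    offered[list(S)] = True
    N = ~offered
    # Expected visits to non-offered products before absorption at an offered one.
    visits_N = np.linalg.solve(np.eye(N.sum()) - rho[np.ix_(N, N)].T, lam[N])
    probs = np.zeros(n)
    probs[offered] = lam[offered] + visits_N @ rho[np.ix_(N, offered)]
    return probs

S = {0, 2}                                     # hypothetical assortment
p = choice_probabilities(S, lam, rho)
print("purchase probabilities:", p)
print("expected revenue:", p @ prices)
```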