Search for: All records

Creators/Authors contains: "Tegmark, Max"


  1. Background: Determining cell identity in volumetric images of tagged neuronal nuclei is an ongoing challenge in contemporary neuroscience. Frequently, cell identity is determined by aligning and matching tags to an "atlas" of labeled neuronal positions and other identifying characteristics. Previous analyses of such C. elegans datasets have been hampered by the limited accuracy of such atlases, especially for neurons present in the ventral nerve cord, and also by time-consuming manual elements of the alignment process. Results: We present a novel automated alignment method for sparse and incomplete point clouds of the sort resulting from typical C. elegans fluorescence microscopy datasets. This method involves a tunable learning parameter and a kernel that enforces biologically realistic deformation. We also present a pipeline for creating alignment atlases from datasets of the recently developed NeuroPAL transgene. In combination, these advances allow us to label neurons in volumetric images with confidence much higher than that of previous methods. Conclusions: We release, to the best of our knowledge, the most complete full-body C. elegans 3D positional neuron atlas, incorporating positional variability derived from at least 7 animals per neuron, for the purposes of cell-type identity prediction for myriad applications (e.g., imaging neuronal activity, gene expression, and cell fate).
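The alignment method above is described only at a high level here; the sketch below illustrates one common family of kernel-regularized point-set registration (a coherent-point-drift-style loop of soft matching plus a smooth, kernel-constrained deformation), which matches the abstract's description but is not the paper's implementation. The function names, the Gaussian kernel choice, and the parameters beta, lam, and sigma2 are illustrative assumptions.

```python
# Minimal sketch of kernel-regularized non-rigid point-set alignment (CPD-style).
# All names and parameters are illustrative assumptions, not the paper's method.
import numpy as np

def gaussian_kernel(Y, beta):
    """G[i, j] = exp(-||y_i - y_j||^2 / (2 beta^2)); couples nearby atlas points so the deformation stays smooth."""
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * beta ** 2))

def align(X, Y, beta=2.0, lam=3.0, sigma2=1.0, iters=50):
    """Align atlas points Y (M, 3) to observed nuclei X (N, 3) with a smooth displacement field.

    Returns the deformed atlas and the soft-correspondence matrix P (N, M).
    """
    G = gaussian_kernel(Y, beta)
    W = np.zeros_like(Y)                               # deformation weights
    for _ in range(iters):
        T = Y + G @ W                                  # current deformed atlas
        d2 = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1)
        P = np.exp(-d2 / (2 * sigma2))                 # soft correspondences
        P /= P.sum(axis=1, keepdims=True) + 1e-12      # each observed point's assignment over atlas points sums to 1
        Np = P.sum(axis=0)                             # expected number of matches per atlas point
        # Solve (diag(Np) G + lam sigma2 I) W = P^T X - diag(Np) Y for the deformation weights.
        A = np.diag(Np) @ G + lam * sigma2 * np.eye(len(Y))
        B = P.T @ X - np.diag(Np) @ Y
        W = np.linalg.solve(A, B)
    return Y + G @ W, P
```

As a usage sketch, `aligned, P = align(observed_xyz, atlas_xyz)` followed by an argmax over each row of P would give a candidate atlas label per observed nucleus; the actual pipeline, confidence estimation, and atlas construction are described in the paper.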
  2. At the heart of both lossy compression and clustering is a trade-off between the fidelity and size of the learned representation. Our goal is to map out and study the Pareto frontier that quantifies this trade-off. We focus on the optimization of the Deterministic Information Bottleneck (DIB) objective over the space of hard clusterings. To this end, we introduce the primal DIB problem, which we show results in a much richer frontier than its previously studied Lagrangian relaxation when optimized over discrete search spaces. We present an algorithm for mapping out the Pareto frontier of the primal DIB trade-off that is also applicable to other two-objective clustering problems. We study general properties of the Pareto frontier, and we give both analytic and numerical evidence for logarithmic sparsity of the frontier in general. We provide evidence that our algorithm has polynomial scaling despite the super-exponential search space, and additionally, we propose a modification to the algorithm that can be used where sampling noise is expected to be significant. Finally, we use our algorithm to map the DIB frontier of three different tasks: compressing the English alphabet, extracting informative color classes from natural images, and compressing a group theory-inspired dataset, revealing interesting features of the frontier and demonstrating how the structure of the frontier can be used for model selection, with a focus on points previously hidden by the cloak of the convex hull.
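For readers unfamiliar with the objective being traded off, the toy sketch below evaluates the two primal DIB quantities, H(T) (representation cost) and I(T; Y) (preserved relevant information), for hard clusterings of a small joint distribution and brute-forces the Pareto frontier. The toy distribution, the exhaustive enumeration, and all names are assumptions for illustration only; the paper's algorithm is far more scalable than this brute force.

```python
# Toy illustration of the primal DIB trade-off: compute (H(T), I(T;Y)) for every
# hard clustering of a tiny X and keep the Pareto-optimal points.
from itertools import product
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def dib_point(pxy, labels):
    """Return (H(T), I(T;Y)) for the hard clustering x -> labels[x]."""
    pty = np.zeros((max(labels) + 1, pxy.shape[1]))
    for x, t in enumerate(labels):
        pty[t] += pxy[x]                              # aggregate p(x, y) into p(t, y)
    pt, py = pty.sum(1), pty.sum(0)
    Ity = entropy(pt) + entropy(py) - entropy(pty.ravel())
    return entropy(pt), Ity

# Toy joint distribution over 4 x-values and 2 y-values (sums to 1).
pxy = np.array([[0.30, 0.05],
                [0.05, 0.25],
                [0.15, 0.05],
                [0.05, 0.10]])

points = {dib_point(pxy, labels) for labels in product(range(4), repeat=4)}

def dominated(p, q):
    """q dominates p if it costs no more (H) and keeps at least as much info (I), and differs."""
    return q[0] <= p[0] and q[1] >= p[1] and q != p

frontier = sorted(p for p in points if not any(dominated(p, q) for q in points))
```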
  3. We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data in order to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using backpropagation, where the gradient is analytical rather than numerical.
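The two-stage idea in this abstract, training a fast surrogate and then backpropagating to the design variables rather than the weights, can be sketched as follows. The network architecture, the random stand-in training data, and every hyperparameter are illustrative assumptions; in practice the training pairs would come from an electromagnetic solver (e.g., a Mie or transfer-matrix code).

```python
# Sketch of surrogate training + gradient-based inverse design (assumed setup).
import torch
import torch.nn as nn

n_layers, n_wavelengths = 8, 200

surrogate = nn.Sequential(
    nn.Linear(n_layers, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, n_wavelengths),
)

# --- Training phase (random stand-in data; real pairs would come from a solver) ---
thick = torch.rand(4096, n_layers)                 # sampled shell thicknesses (normalized)
spectra = torch.rand(4096, n_wavelengths)          # placeholder "simulated" spectra
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(thick), spectra)
    loss.backward()
    opt.step()

# --- Inverse design: gradients flow to the inputs, not the weights ---
target = torch.rand(n_wavelengths)                 # desired spectrum (placeholder)
x = torch.rand(1, n_layers, requires_grad=True)    # design variables: layer thicknesses
design_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(500):
    design_opt.zero_grad()
    err = nn.functional.mse_loss(surrogate(x), target.unsqueeze(0))
    err.backward()                                 # analytic gradient via backpropagation
    design_opt.step()
    x.data.clamp_(0.0, 1.0)                        # keep thicknesses in the normalized range
```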
  4. Precision calibration poses challenges to experiments probing the redshifted 21-cm signal of neutral hydrogen from the Cosmic Dawn and Epoch of Reionization (z ∼ 30–6). In both interferometric and global signal experiments, systematic calibration is the leading source of error. Though many aspects of calibration have been studied, the overlap between the two types of instruments has received less attention. We investigate the sky-based calibration of total power measurements with a HERA dish and an EDGES-style antenna to understand the role of autocorrelations in the calibration of an interferometer and the role of the sky in calibrating a total power instrument. Using simulations, we study various scenarios, such as time-variable gain, an incomplete sky calibration model, and an inaccurate primary beam model. We find that temporal gain drifts, sky model incompleteness, and beam inaccuracies cause biases in the receiver gain amplitude and the receiver temperature estimates. In some cases, these biases mix spectral structure between the beam and the sky, resulting in spectrally variable gain errors. Applying the calibration method to the HERA and EDGES data, we find good agreement with calibration via the more standard methods. Although the instrumental gains are consistent with beam and sky errors similar in scale to those simulated, the receiver temperatures show significant deviations from the expected values. While we show that it is possible to partially mitigate biases due to model inaccuracies by incorporating a time-dependent gain model in the calibration, the resulting errors on the calibration products are larger and more correlated. Completely addressing these biases will require more accurate sky and primary beam models.
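As a rough illustration of the total-power calibration problem discussed above, the sketch below fits a per-channel receiver gain and receiver temperature from simulated drift-scan data, assuming the simple model T_meas = g (T_sky + T_rcv). The model form, the simulated numbers, and the variable names are assumptions for illustration; the paper's analysis additionally treats beam chromaticity, sky-model incompleteness, and time-dependent gains, which this toy model omits.

```python
# Toy per-channel fit of receiver gain g(nu) and receiver temperature T_rcv(nu)
# from total-power data, assuming T_meas(t, nu) = g(nu) * (T_sky(t, nu) + T_rcv(nu)).
import numpy as np

rng = np.random.default_rng(0)
n_times, n_freqs = 48, 100

T_sky = 200 + 100 * rng.random((n_times, n_freqs))    # sky model drifting with LST (placeholder)
g_true = 1.0 + 0.05 * rng.standard_normal(n_freqs)    # per-channel receiver gain
T_rcv_true = 150 + 10 * rng.standard_normal(n_freqs)  # per-channel receiver temperature [K]
T_meas = g_true * (T_sky + T_rcv_true)
T_meas += 0.5 * rng.standard_normal(T_meas.shape)     # thermal noise

# Per channel, T_meas = g * T_sky + (g * T_rcv) is linear in the pair (g, g * T_rcv).
g_hat = np.empty(n_freqs)
T_rcv_hat = np.empty(n_freqs)
for f in range(n_freqs):
    A = np.column_stack([T_sky[:, f], np.ones(n_times)])
    coef, *_ = np.linalg.lstsq(A, T_meas[:, f], rcond=None)
    g_hat[f] = coef[0]
    T_rcv_hat[f] = coef[1] / coef[0]
```

In this toy model, fitting with a T_sky that differs from the one used to simulate T_meas (e.g., scaled by a few per cent) biases the recovered g and T_rcv, which is qualitatively the kind of sky-model-incompleteness effect the abstract describes.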