
Title: Workflow for modeling of generalized mid-spatial frequency errors in optical systems

We propose a workflow for modeling generalized mid-spatial frequency (MSF) errors in optical imaging systems. The workflow enables classification of MSF distributions, filtering of bandlimited signatures, propagation of MSF errors to the exit pupil, and performance predictions that distinguish the impacts of different MSF distributions. We demonstrate the workflow by modeling the performance impacts of MSF errors for both transmissive and reflective imaging systems with near-diffraction-limited performance.
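The band-limited filtering step can be illustrated with an annular pass band applied in the frequency domain. This is a minimal sketch, not the paper's implementation: the square unit-aperture map, the band edges in cycles per aperture, and the function name `msf_bandpass` are all assumptions.

```python
import numpy as np

def msf_bandpass(surface, f_lo, f_hi):
    """Isolate the mid-spatial frequency band of a square surface
    height map with an annular band-pass filter in frequency space.
    f_lo, f_hi are band edges in cycles per aperture (assumed units)."""
    n = surface.shape[0]
    f = np.fft.fftfreq(n, d=1.0 / n)      # integer cycles per aperture
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)                  # radial spatial frequency
    mask = (fr >= f_lo) & (fr <= f_hi)     # annular pass band
    return np.real(np.fft.ifft2(np.fft.fft2(surface) * mask))

# Example: a 12 cycles/aperture ripple survives a 5-20 band,
# while a 2 cycles/aperture form error is rejected.
x = np.linspace(0, 1, 256, endpoint=False)
X, Y = np.meshgrid(x, x)
surf = np.sin(2 * np.pi * 2 * X) + 0.05 * np.sin(2 * np.pi * 12 * Y)
msf = msf_bandpass(surf, 5, 20)
```

Because both test frequencies fit an integer number of cycles in the window, the filter separates them exactly up to floating-point error.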

Publisher / Repository: Optical Society of America
Journal Name: Optics Express (ISSN 1094-4087; CODEN OPEXFF)
Article No.: 2688
Sponsoring Org: National Science Foundation
More Like This
  1. In this work, we present a methodology for predicting the optical performance impacts of random and structured MSF surface errors using pupil-difference probability distribution (PDPD) moments. In addition, we show that, for random mid-spatial frequency (MSF) surface errors, performance estimates from the PDPD moments converge to performance estimates that assume random statistics. Finally, we apply these methods to several MSF surface errors with different distributions and compare estimated optical performance values to predictions based on earlier methods assuming random error distributions.
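The pupil-difference idea behind the PDPD can be sketched as follows: for a given shear, collect the differences W(x + s) − W(x) over the overlap of the pupil map with its sheared copy, then take raw moments of that difference distribution. This is an illustration of the concept only; the function and argument names are not from the paper.

```python
import numpy as np

def pdpd_moments(wavefront, dy, dx, orders=(1, 2, 3, 4)):
    """Raw moments of the pupil-difference distribution for one shear.
    `wavefront` is a 2D phase/OPD map; (dy, dx) is the shear in pixels.
    A minimal sketch of the PDPD idea; names are illustrative."""
    ny, nx = wavefront.shape
    # differences over the overlap of the map and its sheared copy
    diff = (wavefront[dy:, dx:] - wavefront[:ny - dy, :nx - dx]).ravel()
    return [float(np.mean(diff ** k)) for k in orders]

# A linear phase ramp has constant differences, so the k-th raw moment
# of the PDPD is that constant raised to the k-th power.
ramp = np.outer(np.arange(8.0), np.ones(8))
moments = pdpd_moments(ramp, 1, 0)
```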

  2. Standard surface specifications for mid-spatial frequency (MSF) errors do not capture complex surface topography and often lose critical information by making simplifying assumptions about surface distribution and statistics. As a result, it is challenging to link surface specifications with optical performance. In this work, we present the use of the pupil-difference probability distribution (PDPD) moments to assess general MSF surface errors and show how the PDPD moments relate to the relative modulation.

  3. Abstract

    Estimating phenotypic distributions of populations and communities is central to many questions in ecology and evolution. These distributions can be characterized by their moments (mean, variance, skewness and kurtosis) or diversity metrics (e.g. functional richness). Typically, such moments and metrics are calculated using community‐weighted approaches (e.g. abundance‐weighted mean). We propose an alternative bootstrapping approach that allows flexibility in trait sampling and explicit incorporation of intraspecific variation, and show that this approach significantly improves estimation while allowing us to quantify uncertainty.

    We assess the performance of different approaches for estimating the moments of trait distributions across various sampling scenarios, taxa and datasets by comparing estimates derived from simulated samples with the true values calculated from full datasets. Simulations differ in sampling intensity (individuals per species), sampling biases (abundance, size), trait data source (local vs. global) and estimation method (two types of community‐weighting, two types of bootstrapping).

    We introduce the traitstrap R package, which contains a modular and extensible set of bootstrapping and weighted‐averaging functions that use community composition and trait data to estimate the moments of community trait distributions with their uncertainty. Importantly, the first function in the workflow, trait_fill, allows the user to specify hierarchical structures (e.g. plot within site, experiment vs. control, species within genus) to assign trait values to each taxon in each community sample.

    Across all taxa, simulations and metrics, bootstrapping approaches were more accurate and less biased than community‐weighted approaches. With bootstrapping, a sample size of 9 or more measurements per species per trait generally placed the true mean within the 95% CI, and bootstrapping reduced average percent errors by 26%–74% relative to community‐weighting. Random sampling across all species outperformed both size‐ and abundance‐biased sampling.

    Our results suggest randomly sampling ~9 individuals per sampling unit and species, covering all species in the community and analysing the data using nonparametric bootstrapping generally enable reliable inference on trait distributions, including the central moments, of communities. By providing better estimates of community trait distributions, bootstrapping approaches can improve our ability to link traits to both the processes that generate them and their effects on ecosystems.
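The nonparametric bootstrapping approach can be sketched as below, here in Python rather than R, so this is an illustration of the resampling idea and not the traitstrap implementation; the function name, the default of 9 measurements per draw, and the use of percentile intervals are assumptions for the sketch.

```python
import numpy as np

def bootstrap_trait_moments(traits, n_boot=2000, n_sample=9, seed=0):
    """Nonparametric bootstrap of the first four moments of a trait
    distribution with 95% percentile intervals. Resamples `n_sample`
    measurements with replacement `n_boot` times; a minimal sketch."""
    rng = np.random.default_rng(seed)
    draws = rng.choice(traits, size=(n_boot, n_sample), replace=True)
    mu = draws.mean(axis=1, keepdims=True)
    sd = draws.std(axis=1, keepdims=True)
    stats = {
        "mean": mu.ravel(),
        "variance": draws.var(axis=1),
        "skewness": (((draws - mu) / sd) ** 3).mean(axis=1),
        "kurtosis": (((draws - mu) / sd) ** 4).mean(axis=1),
    }
    # point estimate plus 95% percentile interval for each moment
    return {k: (v.mean(), np.percentile(v, [2.5, 97.5]))
            for k, v in stats.items()}

traits = np.random.default_rng(1).normal(10, 2, 500)
result = bootstrap_trait_moments(traits)
```

With a true mean near the sample mean, the 95% interval from the bootstrap means should bracket it, mirroring the coverage behaviour reported above.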

  4. Abstract

    Long-read sequencing technology enables significant progress in de novo genome assembly, but the high error rate and wide error distribution of raw reads leave a large number of errors in the assembly. Polishing is a procedure that fixes errors in the draft assembly and improves the reliability of genomic analysis. Existing methods, however, treat all regions of the assembly equally, even though the error distributions of these regions differ fundamentally, and achieving very high accuracy in genome assembly remains a challenging problem. Motivated by the uneven errors across regions of the assembly, we propose a novel polishing workflow named BlockPolish. In this method, we divide contigs into low-complexity and high-complexity blocks according to statistics of the aligned nucleotide bases. Multiple sequence alignment is applied to realign raw reads in complex blocks and optimize the alignment result. Because error rates are distributed differently in simple and complex blocks, two multitask bidirectional long short-term memory (LSTM) networks are proposed to predict the consensus sequences. In whole-genome assemblies of NA12878 produced by Wtdbg2 and Flye from Nanopore data, BlockPolish achieves higher polishing accuracy than other state-of-the-art tools, including Racon, Medaka, and MarginPolish & HELEN. In all assemblies, errors are predominantly indels, and BlockPolish performs well in correcting them. Beyond the Nanopore assemblies, we further demonstrate that BlockPolish also reduces errors in PacBio assemblies. The source code of BlockPolish is freely available on GitHub.
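The block-partitioning idea, routing low-agreement regions to a heavier consensus model, can be sketched with a simple window classifier. The window size, the 0.9 match-rate threshold, and the function name are illustrative assumptions, not BlockPolish's actual statistics.

```python
def classify_blocks(match_rates, window=5, threshold=0.9):
    """Split per-position alignment match rates into 'simple' and
    'complex' blocks. Windows whose mean match rate falls below the
    threshold are flagged complex, echoing the idea of dividing contigs
    by statistics of aligned bases (parameters are assumptions)."""
    blocks = []
    for start in range(0, len(match_rates), window):
        chunk = match_rates[start:start + window]
        label = "simple" if sum(chunk) / len(chunk) >= threshold else "complex"
        blocks.append((start, min(start + window, len(match_rates)), label))
    return blocks

# High-agreement positions form a simple block; noisy ones a complex block.
blocks = classify_blocks([1.0] * 5 + [0.5] * 5)
```

In the real pipeline the complex blocks would then be realigned and handed to the heavier consensus network.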
  5. Abstract

    Background: Catheter ablation is associated with limited success rates in patients with persistent atrial fibrillation (AF). Currently, existing mapping systems fail to identify critical target sites for ablation. Recently, we proposed and validated several techniques (multiscale frequency [MSF], Shannon entropy [SE], kurtosis [Kt], and multiscale entropy [MSE]) to identify the pivot point of rotors in ex‐vivo optical mapping animal experiments. However, the performance of these techniques on clinically recorded intracardiac electrograms (EGMs) is unclear, owing to the different nature of the signals.

    Objective: This study aims to evaluate the performance of the MSF, MSE, SE, and Kt techniques in identifying the pivot point of the rotor using unipolar and bipolar EGMs obtained from numerical simulations.

    Methods: Stationary and meandering rotors were simulated in a 2D model of the human atria. The performance of each approach was quantified by comparing the "true" core of the rotor with the core identified by the technique. The performance of all techniques was also evaluated in the presence of noise and scar, and for multielectrode multispline and grid catheters.

    Results: All approaches accurately identify the pivot point of both stationary and meandering rotors from both unipolar and bipolar EGMs. The presence of noise and scar tissue did not significantly affect performance, and the core of the rotors was correctly identified in the multielectrode multispline and grid catheter simulations.

    Conclusion: The core of rotors can be successfully identified from EGMs using these novel techniques, providing motivation for future clinical implementation.
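Of the listed techniques, the Shannon entropy map is the simplest to sketch: bin each channel's amplitudes, compute the entropy of the histogram, and take the most disorganized channel as the pivot estimate. This is an illustrative use of SE only; the array layout, bin count, and function name are assumptions, not the study's implementation.

```python
import numpy as np

def shannon_entropy_map(egms, n_bins=32):
    """Per-channel Shannon entropy (bits) of EGM amplitude histograms.
    egms has shape (n_channels, n_samples); near a rotor core the
    signal is disorganized, so the channel with maximal entropy is
    taken as the pivot estimate (illustrative assumption)."""
    entropy = np.empty(egms.shape[0])
    for i, signal in enumerate(egms):
        counts, _ = np.histogram(signal, bins=n_bins)
        p = counts[counts > 0] / counts.sum()   # drop empty bins
        entropy[i] = -np.sum(p * np.log2(p))
    return entropy

rng = np.random.default_rng(0)
egms = np.vstack([np.tile([-1.0, 1.0], 500),   # organized two-level signal
                  rng.uniform(-1, 1, 1000)])   # disorganized signal
entropy = shannon_entropy_map(egms)
pivot = int(np.argmax(entropy))
```

The two-level channel occupies only two histogram bins (1 bit), while the uniform channel spreads over all bins, so the entropy map points at the disorganized channel.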
