Title: Sub-Nyquist Sampling with Optical Pulses for Photonic Blind Source Separation

We proposed and demonstrated an optical pulse sampling method for photonic blind source separation. It can separate mixed signals of large bandwidth using a low sampling frequency, which reduces the digital signal processing workload.
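No code accompanies the record, but the underlying idea can be illustrated numerically: an instantaneous mixture can be separated from samples taken well below the Nyquist rate, because blind source separation relies on the sample statistics of the observations rather than on a faithful waveform reconstruction. A minimal sketch (all signals and parameters are illustrative; this is not the authors' optical system or algorithm):

```python
# Separate two mixed signals from heavily undersampled observations.
# The mixing matrix A is "unknown" to the separator; FastICA recovers
# the sources from the statistics of the sub-rate samples alone.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 1e6                                        # dense grid for synthesis only
t = np.arange(100_000) / fs
s1 = np.sign(np.sin(2 * np.pi * 55e3 * t))      # wide-band square wave
s2 = rng.laplace(size=t.size)                   # noise-like second source
S = np.column_stack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # unknown mixing matrix
X = S @ A.T                                     # two observed mixtures

X_sub = X[::25]                                 # keep every 25th sample: far below Nyquist

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X_sub)                # separated sources at the low rate
```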

 
Award ID(s):
2128616
NSF-PAR ID:
10437284
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Frontiers in Optics + Laser Science
Page Range / eLocation ID:
FTu6C.5
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Purpose

    Most commercially available treatment planning systems (TPSs) approximate the continuous delivery of volumetric modulated arc therapy (VMAT) plans with a series of discretized static beams for treatment planning, which can make VMAT dose computation extremely inefficient. In this study, we developed a polar‐coordinate‐based pencil beam (PB) algorithm for efficient VMAT dose computation with high‐resolution gantry angle sampling that can improve the computational efficiency and reduce the dose discrepancy due to the angular under‐sampling effect.

    Methods and Materials

    6 MV pencil beams were simulated on a uniform cylindrical phantom in an EGSnrc Monte Carlo (MC) environment. The MC-generated PB kernels were collected in the polar coordinate system for each bixel on a fluence map and subsequently fitted with a series of Gaussians. The fluence was calculated using a detector's-eye view with off-axis and MLC transmission factors corrected. Doses of VMAT arcs on the phantom were computed by summing the convolution results between the corresponding PB kernels and fluence for each bixel in the polar coordinate system. The convolution was performed using the fast Fourier transform to speed up the computation. The calculated doses were converted to the Cartesian coordinate system and compared with the reference dose computed by a collapsed cone convolution (CCC) algorithm of the TPS. A heterogeneous phantom was created to study heterogeneity corrections with the proposed algorithm. Ten VMAT arcs were included to evaluate the algorithm's performance. Gamma analysis and computational complexity theory were used to measure dosimetric accuracy and computational efficiency, respectively.
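    The kernel-times-fluence step described above is an ordinary FFT-accelerated convolution. A minimal sketch of that step follows, with a stand-in Gaussian kernel on a Cartesian grid rather than the study's fitted Monte Carlo kernels in polar coordinates:

```python
# Dose as a 2-D convolution of a pencil-beam kernel with a fluence map,
# computed via FFT. The Gaussian kernel below is a placeholder, not the
# Gaussian-fitted MC kernels of the study.
import numpy as np
from scipy.signal import fftconvolve

ny, nx = 128, 128
y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
kernel = np.exp(-(x**2 + y**2) / (2 * 4.0**2))    # stand-in PB kernel
kernel /= kernel.sum()

fluence = np.zeros((ny, nx))
fluence[48:80, 40:88] = 1.0                       # open bixels of one aperture

dose = fftconvolve(fluence, kernel, mode="same")  # O(N log N) vs O(N^2) direct
```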

    Results

    The dosimetric comparisons on the homogeneous phantom between the proposed PB algorithm and the CCC algorithm for 10 VMAT arcs demonstrate that the proposed algorithm can achieve a dosimetric accuracy comparable to that of the CCC algorithm, with average gamma passing rates of 96% (2%/2 mm) and 98% (3%/3 mm). In addition, the proposed algorithm provides better computational efficiency for VMAT dose computation on a PC equipped with a 4-core processor than the CCC algorithm running on a dual 10-core server. Moreover, computational complexity analysis reveals that the proposed algorithm has a substantial efficiency advantage for VMAT dose computation in a homogeneous medium, especially when a fine angular sampling rate is applied. This supports reducing dose errors from the angular under-sampling effect by using a finer angular sampling rate, while still preserving a practical computing speed. For dose calculation on the heterogeneous phantom, the proposed algorithm with heterogeneity corrections still offers reasonable dosimetric accuracy with computational efficiency comparable to that of the CCC algorithm.

    Conclusions

    We proposed a novel polar-coordinate-based pencil beam algorithm for VMAT dose computation that achieves better computational efficiency while maintaining clinically acceptable dosimetric accuracy and reducing the dose error caused by the angular under-sampling effect. It also provides a flexible VMAT dose computation structure that allows adjustable sampling rates and direct dose computation in regions of interest, which makes the algorithm potentially useful for clinical applications such as independent dose verification for VMAT patient-specific QA.

     
  2. Abstract

    Network analysis of infectious disease in wildlife can reveal traits or individuals critical to pathogen transmission and help inform disease management strategies. However, estimates of contact between animals are notoriously difficult to acquire. Researchers commonly use telemetry technologies to identify animal associations, but such data may have different sampling intervals and often capture only a small subset of the population. The objectives of this study were to outline best practices for telemetry sampling in network studies of infectious disease by determining (a) the consequences of telemetry sampling for our ability to estimate network structure, (b) whether contact networks can be approximated using purely spatial contact definitions and (c) how wildlife spatial configurations may influence telemetry sampling requirements.

    We simulated individual movement trajectories for wildlife populations using a home range‐like movement model, creating full location datasets and corresponding ‘complete’ networks. To mimic telemetry data, we created ‘sample’ networks by subsampling the population (10%–100% of individuals) with a range of sampling intervals (every minute to every 3 days). We varied the definition of contact for sample networks, using either spatiotemporal or spatial overlap, and varied the spatial configuration of populations (random, lattice or clustered). To compare complete and sample networks, we calculated seven network metrics important for disease transmission and assessed mean ranked correlation coefficients and percent error between complete and sample network metrics.
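    The subsampling experiment is easy to mimic in miniature. A toy sketch using a purely spatial contact definition (the population size, contact radius and 20% "collared" fraction are illustrative, not the study's settings):

```python
# Build a "complete" spatial contact network, subsample individuals as a
# telemetry study would, and compare a network-level metric.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n, radius = 100, 0.08
pos = rng.random((n, 2))                      # random spatial configuration

G_full = nx.Graph()
G_full.add_nodes_from(range(n))
for i in range(n):
    for j in range(i + 1, n):
        if np.linalg.norm(pos[i] - pos[j]) < radius:   # spatial-overlap contact
            G_full.add_edge(i, j)

tracked = rng.choice(n, size=n // 5, replace=False)    # "collar" 20% of animals
G_sample = G_full.subgraph(tracked.tolist())

print(nx.density(G_full), nx.density(G_sample))        # complete vs. sampled
```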

    Telemetry sampling severely reduced our ability to calculate global node-level network metrics, but had less impact on local and network-level metrics. Even so, in populations with infrequent associations, high-intensity telemetry sampling may still be necessary. Defining contact in terms of spatial overlap generally resulted in overly connected networks but, in some instances, could compensate for otherwise coarse telemetry data.

    By synthesizing movement and disease ecology with computational approaches, we characterized trade‐offs important for using wildlife telemetry data beyond ecological studies of individual movement, and found that careful use of telemetry data has the potential to inform network models. Thus, with informed application of telemetry data, we can make significant advances in leveraging its use for a better understanding and management of wildlife infectious disease.

     
  3. Abstract

    Estimating phenotypic distributions of populations and communities is central to many questions in ecology and evolution. These distributions can be characterized by their moments (mean, variance, skewness and kurtosis) or diversity metrics (e.g. functional richness). Typically, such moments and metrics are calculated using community‐weighted approaches (e.g. abundance‐weighted mean). We propose an alternative bootstrapping approach that allows flexibility in trait sampling and explicit incorporation of intraspecific variation, and show that this approach significantly improves estimation while allowing us to quantify uncertainty.

    We assess the performance of different approaches for estimating the moments of trait distributions across various sampling scenarios, taxa and datasets by comparing estimates derived from simulated samples with the true values calculated from full datasets. Simulations differ in sampling intensity (individuals per species), sampling biases (abundance, size), trait data source (local vs. global) and estimation method (two types of community‐weighting, two types of bootstrapping).

    We introduce the traitstrap R package, which contains a modular and extensible set of bootstrapping and weighted-averaging functions that use community composition and trait data to estimate the moments of community trait distributions with their uncertainty. Importantly, the first function in the workflow, trait_fill, allows the user to specify hierarchical structures (e.g. plot within site, experiment vs. control, species within genus) to assign trait values to each taxon in each community sample.
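    The package itself is in R; a bare-bones Python analogue of the nonparametric bootstrap of trait moments is sketched below (abundance weighting and hierarchical trait filling are omitted for brevity, and the function is hypothetical, not the traitstrap API):

```python
# Nonparametric bootstrap of community trait moments (mean, variance,
# skewness, kurtosis). Hypothetical sketch; abundance weighting omitted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def bootstrap_moments(trait_samples, n_boot=1000):
    """trait_samples: dict mapping species -> array of measured trait values."""
    boots = np.empty((n_boot, 4))
    for b in range(n_boot):
        # resample measurements within each species, then pool the community
        draw = np.concatenate([rng.choice(v, size=v.size, replace=True)
                               for v in trait_samples.values()])
        boots[b] = [draw.mean(), draw.var(ddof=1),
                    stats.skew(draw), stats.kurtosis(draw)]
    return boots.mean(axis=0), np.percentile(boots, [2.5, 97.5], axis=0)

traits = {"sp1": np.array([3.1, 2.9, 3.4]), "sp2": np.array([5.0, 4.7])}
point, ci = bootstrap_moments(traits)   # point estimates and 95% CIs
```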

    Across all taxa, simulations and metrics, bootstrapping approaches were more accurate and less biased than community-weighted approaches. With bootstrapping, a sample size of 9 or more measurements per species per trait generally included the true mean within the 95% CI, and reduced average percent errors by 26%–74% relative to community-weighting. Random sampling across all species outperformed both size- and abundance-biased sampling.

    Our results suggest that randomly sampling ~9 individuals per sampling unit and species, covering all species in the community, and analysing the data using nonparametric bootstrapping generally enables reliable inference on trait distributions, including the central moments, of communities. By providing better estimates of community trait distributions, bootstrapping approaches can improve our ability to link traits both to the processes that generate them and to their effects on ecosystems.

     
  4. Abstract

    The amount and patterns of phylodiversity in a community are often used to draw inferences about the local and historical factors affecting community assembly and can be used to prioritize communities and locations for conservation. Because measures of phylodiversity are based on the topology and branch lengths of phylogenetic trees, which are affected by the number and diversity of taxa in the tree, these analyses may be sensitive to changes in taxon sampling and tree reconstruction methods.

    To investigate the effects of taxon sampling and tree reconstruction methods on measures of phylodiversity, we examined the community phylogenetics of the Ordway-Swisher Biological Station (Florida), which is home to over 600 species of vascular plants. We studied the effects of (a) the number of taxa included in the regional phylogeny; (b) random versus targeted sampling of species to assemble the regional species pool; (c) including only species from specific clades rather than broad sampling; (d) using trees reconstructed directly for the taxa under study compared to trees pruned from a larger reconstructed tree; and (e) using phylograms compared to chronograms.
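    Point (a) can be sketched with a toy calculation: Faith's phylogenetic diversity (the summed branch lengths spanning a taxon set) computed on complete versus randomly subsampled tips. The tree and branch lengths below are invented for illustration, not drawn from the study:

```python
# Faith's PD: total branch length of the subtree spanning a taxon set.
# Toy tree encoded as child -> (parent, branch_length); root's parent is None.
import random

tree = {"A": ("n1", 1.0), "B": ("n1", 1.5), "C": ("n2", 2.0),
        "D": ("n2", 0.5), "n1": ("root", 1.0), "n2": ("root", 2.5),
        "root": (None, 0.0)}

def faith_pd(taxa):
    edges = set()                      # nodes whose branch (to parent) is used
    for tip in taxa:
        node = tip
        while tree[node][0] is not None:
            edges.add(node)
            node = tree[node][0]
    return sum(tree[node][1] for node in edges)

tips = ["A", "B", "C", "D"]
print(faith_pd(tips))                          # complete taxon sampling
print(faith_pd(random.sample(tips, 2)))        # random subsample of taxa
```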

    We found that including more taxa in a study increases the likelihood of observing significantly nonrandom phylogenetic patterns. However, there were no consistent trends in the phylodiversity patterns based on random taxon sampling compared to targeted sampling, or within individual clades compared to the complete dataset. Using pruned and reconstructed phylogenies resulted in similar patterns of phylodiversity, while chronograms in some cases led to significantly different results from phylograms.

    The methods commonly used in community phylogenetic studies can significantly impact the results, potentially influencing both inferences of community assembly and conservation decisions. We highlight the need for both careful selection of methods in community phylogenetic studies and appropriate interpretation of results, depending on the specific questions to be addressed.

     
  5. Abstract

    Objective-driven adaptive sampling is a widely used tool for the optimization of deterministic black-box functions. However, the optimization of stochastic simulation models, as found in the engineering, biological and social sciences, remains an elusive task. In this work, we propose a scalable adaptive batch sampling scheme for the optimization of stochastic simulation models with input-dependent noise. The developed algorithm has two primary advantages: (i) by recommending sampling batches, the designer can benefit from parallel computing capabilities, and (ii) by replicating previously observed sampling locations, the method scales to higher-dimensional and noisier functions. Replication improves numerical tractability, as the computational cost of Bayesian optimization methods is known to grow cubically with the number of unique sampling locations. Deciding when to replicate and when to explore depends on which alternative most improves the posterior prediction accuracy at and around the spatial locations expected to contain the global optimum. The algorithm explores a new sampling location to reduce the interpolation uncertainty and replicates to improve the accuracy of the mean prediction at a single sampling location. Through the application of the proposed sampling scheme to two numerical test functions and one real engineering problem, we show that we can reliably and efficiently find the global optimum of stochastic simulation models with input-dependent noise.
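    A toy one-dimensional illustration of the replicate-versus-explore decision under a Gaussian-process surrogate follows; the kernel, noise model and decision rule are simplified stand-ins for the paper's criterion, and all numbers are invented:

```python
# The GP's cost grows with the number of *unique* sites, so repeated noisy
# evaluations are folded into per-site means with noise sigma^2 / r_i.
# Replicate at the incumbent if that shrinks its posterior variance more
# than opening one nearby site would (a simplified stand-in criterion).
import numpy as np

def posterior_var(X, r, x_star, sigma2=0.5, ls=0.3):
    """GP posterior variance at x_star; r[i] replicates shrink noise at X[i]."""
    k = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ls ** 2))
    K = k(X, X) + np.diag(sigma2 / r)        # replication reduces effective noise
    ks = k(X, np.atleast_1d(x_star))
    return (1.0 - ks.T @ np.linalg.solve(K, ks)).item()

X = np.array([0.1, 0.5, 0.9])                # unique sites (GP cost ~ len(X)^3)
r = np.array([3.0, 1.0, 2.0])                # replicates already taken at each
x_best = 0.5                                 # site believed to contain the optimum

v_rep = posterior_var(X, r + (X == x_best), x_best)                   # one more replicate
v_exp = posterior_var(np.append(X, 0.55), np.append(r, 1.0), x_best)  # one new site
print("replicate" if v_rep < v_exp else "explore")
```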

     