
Title: A characterization of nested canalyzing functions with maximum average sensitivity
Nested canalyzing functions (NCFs) are a class of Boolean functions which are used to model certain biological phenomena. We derive a complete characterization of NCFs with the largest average sensitivity, expressed in terms of a simple structural property of the NCF. This characterization provides an alternate, but elementary, proof of the tight upper bound on the average sensitivity of any NCF established by Klotz et al. (2013). We also utilize the characterization to derive a closed form expression for the number of NCFs that have the largest average sensitivity.
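The average sensitivity referred to here is the expected number of coordinates whose flip changes the function's value, over a uniformly random input. A minimal brute-force sketch of that definition (illustrative only — this is not the paper's characterization; the example functions are chosen for contrast, since AND is an NCF with bounded average sensitivity while parity is not canalyzing at all):

```python
from itertools import product

def average_sensitivity(f, n):
    """Brute-force average sensitivity of an n-variable Boolean function:
    the expected number of coordinate flips that change f(x), over a
    uniform random input x in {0,1}^n."""
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

# x1 AND x2 is a (nested) canalyzing function
and2 = lambda x: x[0] & x[1]
# XOR (parity) is not canalyzing and has the largest possible
# average sensitivity among 2-variable functions
xor2 = lambda x: x[0] ^ x[1]

print(average_sensitivity(and2, 2))  # 1.0
print(average_sensitivity(xor2, 2))  # 2.0
```

Exhaustive enumeration like this is only feasible for small n, but it makes the quantity being bounded in the abstract concrete.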
Award ID(s):
1633028
Publication Date:
NSF-PAR ID:
10067739
Journal Name:
Discrete Applied Mathematics
ISSN:
1872-6771
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    This paper aims to quantify how the lowest halo mass that can be detected with galaxy-galaxy strong gravitational lensing depends on the quality of the observations and the characteristics of the observed lens systems. Using simulated data, we measure the lowest detectable NFW mass at each location of the lens plane, in the form of detailed sensitivity maps. In summary, we find that: (i) the lowest detectable mass Mlow decreases linearly as the signal-to-noise ratio (SNR) increases and the sensitive area is larger when we decrease the noise; (ii) a moderate increase in angular resolution (0.07″ versus 0.09″) and pixel scale (0.01″ versus 0.04″) improves the sensitivity by on average 0.25 dex in halo mass, with more significant improvement around the most sensitive regions; (iii) the sensitivity to low-mass objects is largest for bright and complex lensed galaxies located inside the caustic curves and lensed into larger Einstein rings (i.e. rE ≥ 1.0″). We find that for the sensitive mock images considered in this work, the minimum mass that we can detect at the redshift of the lens lies between $1.5\times 10^{8}$ and $3\times 10^{9}\, \mathrm{M}_{\odot }$. We derive analytic relations between Mlow, the SNR and resolution and discuss the impact of the lensing configuration and source structure. Our results start to fill the gap between approximate predictions and real data and demonstrate the challenging nature of calculating precise forecasts for gravitational imaging. In light of our findings, we discuss possible strategies for designing strong lensing surveys and the prospects for HST, Keck, ALMA, Euclid and other future observations.

  2. Purpose Product developers using life cycle toxicity characterization models to understand the potential impacts of chemical emissions face serious challenges related to large data demands and high input data uncertainty. This motivates greater focus on model sensitivity toward input parameter variability to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors given high input data uncertainty and using the results to prioritize data collection according to parameter influence on characterization factors (CFs). Proof of concept is illustrated with the UNEP-SETAC scientific consensus model USEtox. Methods Using Monte Carlo analysis, we demonstrate a sensitivity-based approach to prioritize data collection with an illustrative example of aquatic ecotoxicity CFs for the vitamin B derivative niacinamide, which is an antioxidant used in personal care products. We calculate CFs via 10,000 iterations assuming plus-or-minus one order of magnitude variability in fate and exposure-relevant data inputs, while uncertainty in effect factor data is modeled as a central t distribution. Spearman’s rank correlation indices are used for all variable inputs to identify parameters with the largest influence on CFs. Results and discussion For emissions to freshwater, the niacinamide CF is approximately log-normally distributed with a geometric mean of 0.02 PAF m³ day/kg and a geometric standard deviation of 8.5. Results of Spearman’s rank correlation show that degradation rates in air, water, and soil are the most influential parameters in calculating CFs, thus benefiting the most from future data refinement and experimental research. Kow, sediment degradation rate, and vapor pressure were the least influential parameters on CF results.
These results may be very different for other, e.g., more lipophilic chemicals, where Kow is known to drive many fate and exposure aspects in multimedia modeling. Furthermore, non-linearity between input parameters and CF results prevents transferring sensitivity conclusions from one chemical to another. Conclusions A sensitivity-based approach for data refinement and research prioritization can provide guidance to database managers, life cycle assessment practitioners, and experimentalists to concentrate efforts on the few parameters that are most influential on toxicity characterization model results. Researchers can conserve resources and address parameter uncertainty by applying this approach when developing new or refining existing CFs for the inventory items that contribute most to toxicity impacts.
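The Monte Carlo / Spearman workflow described above can be sketched in a few lines. This is a toy stand-in, not USEtox: the two-parameter characterization factor formula, the nominal values, and the parameter names (`k_deg`, `kow`) are invented purely to illustrate how rank correlation separates influential from uninfluential inputs under ±1 order-of-magnitude sampling:

```python
import math, random

def spearman(xs, ys):
    """Spearman rank correlation (no tie handling; inputs are continuous)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

random.seed(0)
N = 2000
k_deg0, kow0 = 0.05, 100.0  # hypothetical nominal inputs
samples, cfs = {"k_deg": [], "kow": []}, []
for _ in range(N):
    # +/- one order of magnitude around the nominal, sampled log-uniformly
    k_deg = k_deg0 * 10 ** random.uniform(-1, 1)
    kow = kow0 * 10 ** random.uniform(-1, 1)
    # Toy CF: persistence-driven, only weakly Kow-dependent (illustrative)
    cf = (1.0 / k_deg) * (1.0 + 0.01 * math.log10(kow))
    samples["k_deg"].append(k_deg)
    samples["kow"].append(kow)
    cfs.append(cf)

# Rank each parameter by the magnitude of its correlation with the CF
influence = {p: spearman(v, cfs) for p, v in samples.items()}
```

In this toy setup the degradation rate dominates (strong negative rank correlation with the CF) while Kow is near zero — the same qualitative pattern the abstract reports for niacinamide.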
  3. Abstract This paper builds upon two key principles behind the Bourgain–Dyatlov quantitative uniqueness theorem for functions with Fourier transform supported in an Ahlfors regular set. We first provide a characterization of when a quantitative uniqueness theorem holds for functions with very quickly decaying Fourier transform, thereby providing an extension of the classical Paneah–Logvinenko–Sereda theorem. Secondly, we derive a transference result which converts a quantitative uniqueness theorem for functions with fast decaying Fourier transform to one for functions with Fourier transform supported on a fractal set. In addition to recovering the result of Bourgain–Dyatlov, we obtain analogous uniqueness results for denser fractals.
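For context, the classical Logvinenko–Sereda-type result extended here says, roughly, that a "thick" set controls the norm of a band-limited function; the statement below is schematic (the constant's precise dependence is not spelled out):

```latex
% E is (\gamma, a)-thick: every cube Q of side a meets E in measure
% at least \gamma |Q|. For band-limited f:
\[
  \operatorname{supp}\hat f \subseteq B(0,R)
  \quad\Longrightarrow\quad
  \|f\|_{L^2(\mathbb{R}^n)} \;\le\; C(\gamma, aR)\,\|f\|_{L^2(E)} .
\]
```

The paper's first result replaces the band-limited (compactly supported $\hat f$) hypothesis with very fast Fourier decay, and the second transfers such inequalities to Fourier supports on fractal sets.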
  4. The human circadian pacemaker entrains to the 24-h day, but interindividual differences in properties of the pacemaker, such as intrinsic period, affect chronotype and mediate responses to challenges to the circadian system, such as shift work and jet lag, as well as the efficacy of therapeutic interventions such as light therapy. Robust characterization of circadian properties requires desynchronization of the circadian system from the rest-activity cycle, and these forced desynchrony protocols are time- and resource-intensive. However, circadian protocols designed to derive the relationship between light intensity and phase shift, which is inherently affected by intrinsic period, may be applied more broadly. To exploit this relationship, we applied a mathematical model of the human circadian pacemaker with a Markov chain Monte Carlo parameter estimation algorithm to estimate the representative group intrinsic period for a group of participants using their collective illuminance-response curve data. We first validated this methodology using simulated illuminance-response curve data in which the intrinsic period was known. Over a physiological range of intrinsic periods, this method accurately estimated the representative intrinsic period of the group. We also applied the method to previously published experimental data describing the illuminance-response curve for a group of healthy adult participants. We estimated the study participants’ representative group intrinsic period to be 24.26 and 24.27 h using uniform and normal priors, respectively, consistent with estimates of the average intrinsic period of healthy adults determined using forced desynchrony protocols.
Our results establish an approach to estimate a population’s representative intrinsic period from illuminance-response curve data, thereby facilitating the characterization of intrinsic period across a broader range of participant populations than could be studied using forced desynchrony protocols. Future applications of this approach may improve the understanding of demographic differences in the intrinsic circadian period.
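The estimation strategy above — fit an intrinsic period to illuminance-response data via MCMC under a uniform prior — can be sketched with a deliberately simplified toy model. The linear phase-shift formula below is invented for illustration; it is not the Kronauer-type pacemaker model the study actually uses, and all parameter values are hypothetical:

```python
import math, random

random.seed(1)

# Toy illuminance-response model (NOT the pacemaker model from the paper):
# daily phase shift = (24 - tau) + gain * log10(lux / 100)
def phase_shift(tau, lux, gain=0.8):
    return (24.0 - tau) + gain * math.log10(lux / 100.0)

# Synthetic "group" data generated with a known intrinsic period
TAU_TRUE = 24.2
lux_levels = [10, 30, 100, 300, 1000, 3000]
data = [(lux, phase_shift(TAU_TRUE, lux) + random.gauss(0, 0.05))
        for lux in lux_levels]

def log_posterior(tau, sigma=0.05):
    # Uniform prior over a physiological range of intrinsic periods
    if not 23.5 <= tau <= 25.0:
        return -math.inf
    # Gaussian likelihood of the observed phase shifts
    return -sum((y - phase_shift(tau, lux)) ** 2
                for lux, y in data) / (2 * sigma ** 2)

# Random-walk Metropolis sampler over tau
tau, chain = 24.0, []
for step in range(20000):
    prop = tau + random.gauss(0, 0.02)
    delta = log_posterior(prop) - log_posterior(tau)
    if random.random() < math.exp(min(0.0, delta)):
        tau = prop
    if step >= 5000:  # discard burn-in
        chain.append(tau)

tau_hat = sum(chain) / len(chain)  # posterior-mean estimate of tau
```

Validating against data simulated with a known period, as done here, mirrors the paper's first step before applying the sampler to real illuminance-response data.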
  5. Aerosol particles negatively affect human health while also having climatic relevance due to, for example, their ability to act as cloud condensation nuclei. Ultrafine particles (diameter Dp < 100 nm) typically comprise the largest fraction of the total number concentration; however, their chemical characterization is difficult because of their low mass. Using an extractive electrospray time-of-flight mass spectrometer (EESI-TOF), we characterize the molecular composition of freshly nucleated particles from naphthalene and β-caryophyllene oxidation products at the CLOUD chamber at CERN. We perform a detailed intercomparison of the organic aerosol chemical composition measured by the EESI-TOF and an iodide adduct chemical ionization mass spectrometer equipped with a filter inlet for gases and aerosols (FIGAERO-I-CIMS). We also use an aerosol growth model based on the condensation of organic vapors to show that the chemical composition measured by the EESI-TOF is consistent with the expected condensed oxidation products. This agreement could be further improved by constraining the EESI-TOF compound-specific sensitivity or considering condensed-phase processes. Our results show that the EESI-TOF can obtain the chemical composition of particles as small as 20 nm in diameter with mass loadings as low as hundreds of ng m⁻³ in real time. This was until now difficult to achieve, as other online instruments are often limited by size cutoffs, ionization/thermal fragmentation and/or semi-continuous sampling. Using real-time simultaneous gas- and particle-phase data, we discuss the condensation of naphthalene oxidation products on a molecular level.