

Title: A characterization of nested canalyzing functions with maximum average sensitivity
Nested canalyzing functions (NCFs) are a class of Boolean functions which are used to model certain biological phenomena. We derive a complete characterization of NCFs with the largest average sensitivity, expressed in terms of a simple structural property of the NCF. This characterization provides an alternate, but elementary, proof of the tight upper bound on the average sensitivity of any NCF established by Klotz et al. (2013). We also utilize the characterization to derive a closed form expression for the number of NCFs that have the largest average sensitivity.
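The quantity at the heart of the abstract, average sensitivity, can be computed by brute-force enumeration for small n. A minimal sketch: the three-variable function below is one example of a nested canalyzing function (each variable in turn can force the output), chosen for illustration only, not the extremal family characterized in the paper.

```python
from itertools import product

def average_sensitivity(f, n):
    """Average sensitivity: the expected number of coordinates whose flip
    changes f's output, over a uniformly random input in {0,1}^n."""
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(tuple(y)) != f(x):
                total += 1
    return total / 2 ** n

# An example NCF: f(x1, x2, x3) = x1 AND (x2 OR x3).
# x1 = 0 forces the output to 0; then x2 = 1 forces 1; then x3 decides.
ncf = lambda x: x[0] & (x[1] | x[2])
print(average_sensitivity(ncf, 3))  # 1.25
```

For comparison, parity (which is not canalyzing) attains the maximum possible average sensitivity n, which is why bounds such as the one by Klotz et al. single out NCFs as a low-sensitivity class.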
Award ID(s): 1633028
NSF-PAR ID: 10067739
Author(s) / Creator(s):
Date Published:
Journal Name: Discrete Applied Mathematics
ISSN: 1872-6771
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    This paper aims to quantify how the lowest halo mass that can be detected with galaxy-galaxy strong gravitational lensing depends on the quality of the observations and the characteristics of the observed lens systems. Using simulated data, we measure the lowest detectable NFW mass at each location of the lens plane, in the form of detailed sensitivity maps. In summary, we find that: (i) the lowest detectable mass Mlow decreases linearly as the signal-to-noise ratio (SNR) increases, and the sensitive area is larger when we decrease the noise; (ii) a moderate increase in angular resolution (0.07″ versus 0.09″) and pixel scale (0.01″ versus 0.04″) improves the sensitivity by on average 0.25 dex in halo mass, with more significant improvement around the most sensitive regions; (iii) the sensitivity to low-mass objects is largest for bright and complex lensed galaxies located inside the caustic curves and lensed into larger Einstein rings (i.e. rE ≥ 1.0″). We find that for the sensitive mock images considered in this work, the minimum mass that we can detect at the redshift of the lens lies between 1.5 × 10⁸ and 3 × 10⁹ M⊙. We derive analytic relations between Mlow, the SNR, and the resolution, and discuss the impact of the lensing configuration and source structure. Our results start to fill the gap between approximate predictions and real data and demonstrate the challenging nature of calculating precise forecasts for gravitational imaging. In light of our findings, we discuss possible strategies for designing strong lensing surveys and the prospects for HST, Keck, ALMA, Euclid, and other future observations.

  2. Abstract

    We derive the bolometric luminosities (Lbol) of 865 field-age and 189 young ultracool dwarfs (spectral types M6–T9, including 40 new discoveries presented here) by directly integrating flux-calibrated optical to mid-infrared (MIR) spectral energy distributions (SEDs). The SEDs consist of low-resolution (R ∼ 150) near-infrared (NIR; 0.8–2.5 μm) spectra (including new spectra for 97 objects), optical photometry from the Pan-STARRS1 survey, and MIR photometry from the CatWISE2020 survey and Spitzer/IRAC. Our Lbol calculations benefit from recent advances in parallaxes from Gaia, Spitzer, and UKIRT, as well as new parallaxes for 19 objects from CFHT and Pan-STARRS1 presented here. Coupling our Lbol measurements with a new uniform age analysis for all objects, we estimate substellar masses, radii, surface gravities, and effective temperatures (Teff) using evolutionary models. We construct empirical relationships for Lbol and Teff as functions of spectral type and absolute magnitude, determine bolometric corrections in optical and infrared bandpasses, and study the correlation between evolutionary model-derived surface gravities and NIR gravity classes. Our sample enables a detailed characterization of BT-Settl and ATMO2020 atmospheric model systematics as a function of spectral type and position in the NIR color–magnitude diagram. We find the greatest discrepancies between atmospheric and evolutionary model-derived Teff (up to 800 K) and radii (up to 2.0 RJup) at the M/L spectral type transition boundary. With 1054 objects, this work constitutes the largest sample to date of ultracool dwarfs with determinations of their fundamental parameters.
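The core Lbol computation described above — integrating a flux-calibrated SED over wavelength and scaling by the parallax distance — can be sketched as follows. This is a minimal illustration assuming SI units and a simple trapezoidal rule; a real pipeline would also extrapolate flux beyond the observed wavelength range and propagate measurement uncertainties.

```python
import math

PC_IN_M = 3.0857e16  # meters per parsec

def trapz(x, y):
    """Trapezoidal-rule integral of y(x) over tabulated points."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def bolometric_luminosity(wavelengths_m, flux_lambda_w_m3, parallax_mas):
    """L_bol = 4*pi*d^2 * integral(F_lambda dlambda), with the distance d
    taken from the parallax (d [pc] = 1000 / parallax [mas])."""
    d_m = (1000.0 / parallax_mas) * PC_IN_M
    f_bol = trapz(wavelengths_m, flux_lambda_w_m3)  # W / m^2
    return 4.0 * math.pi * d_m ** 2 * f_bol         # W

# toy check: constant F_lambda over a unit wavelength interval at 10 pc
L = bolometric_luminosity([1.0e-6, 2.0e-6], [3.0, 3.0], 100.0)
```

The trapezoidal rule is adequate here because the observed SEDs are densely sampled relative to their smooth continuum shape; the dominant error budget in the paper comes from photometric calibration and parallax, not quadrature.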

  3. Abstract

    The North American Newark Canyon Formation (NCF; ∼113–98 Ma) presents an opportunity to examine how terrestrial carbonate facies reflect different aspects of paleoclimate during one of the hottest periods of Earth's history. The lower NCF type section preserves heterogeneous palustrine facies and the upper NCF preserves lacustrine deposits. We combined carbonate facies analysis with δ13C, δ18O, and Δ47 data sets to assess which carbonate facies preserve stable isotope signals that are most representative of climatic conditions. Palustrine facies record the heterogeneity of the original wetland environment in which they formed. Using the pelmicrite facies that formed in deeper wetlands, we interpret a lower temperature zone (35–40°C) to reflect warm season water temperatures. In contrast, a mottled micrite facies that formed in shallower wetlands records hotter temperatures (36–68°C). These hotter temperatures reflect radiatively heated “bare-skin” temperatures that occurred in a shallow depositional setting. The lower lacustrine unit has been secondarily altered by hydrothermal fluids, while the upper lacustrine unit likely preserves primary temperatures and the δ18Owater of catchment-integrated precipitation. Consequently, the palustrine pelmicrite and lacustrine micrite are the facies most likely to reflect ambient climate conditions and, therefore, are the best facies to use for paleoclimate interpretations. Average warm season water temperatures of 41.1 ± 3.6°C and 37.8 ± 2.5°C are preserved by the palustrine pelmicrite (∼113–112 Ma) and lacustrine micrite (∼112–103 Ma), respectively. These data support previous interpretations of the mid-Cretaceous as a hothouse climate and demonstrate the importance of characterizing facies for identifying the data most representative of past climates.

  4. Abstract

    We explore the potential of the adjoint-state tsunami inversion method for rapid and accurate near-field tsunami source characterization using S-net, an array of ocean bottom pressure gauges. Compared to earthquake-based methods, this method can obtain more accurate predictions for the initial water elevation of the tsunami source, including potential secondary sources, leading to accurate water height and wave run-up predictions. Unlike finite-fault tsunami source inversions, the adjoint method achieves high-resolution results without requiring densely gridded Green's functions, reducing computation time. However, optimal results require a dense instrument network with sufficient azimuthal coverage. S-net meets these requirements and reduces data collection time, facilitating the inversion and timely issuance of tsunami warnings. Since the method has not yet been applied to dense, near-field data, we test it on synthetic waveforms of the 2011 Mw 9.0 Tohoku earthquake and tsunami, including triggered secondary sources. The results indicate that with a static source model without noise, using the first 5 min of the waveforms yields a favorable performance with an average accuracy score of 93%, and the largest error of predicted wave amplitudes ranges between −5.6 and 1.9 m. Using the first 20 min, secondary sources were clearly resolved. We also demonstrate the method's applicability using S-net recordings of the 2016 Mw 6.9 Fukushima earthquake. The findings suggest that lower-magnitude events require a longer waveform duration for accurate adjoint inversion. Moreover, the estimated stress drop obtained from inverting our obtained tsunami source, assuming uniform slip, aligns with estimations from recent studies.
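The essence of an adjoint-state inversion is iteratively back-projecting the data residual through the adjoint of the forward operator instead of precomputing Green's functions for every candidate source cell. A toy least-squares sketch under strong simplifying assumptions: the small matrix G below is a hypothetical stand-in for linearized tsunami propagation to a few gauges, not an actual shallow-water solver.

```python
def adjoint_descent(G, d, steps=2000, lr=0.1):
    """Minimize ||G m - d||^2 by steepest descent, back-projecting the
    data residual through the adjoint (here, the transpose) of G.
    m plays the role of the initial water-elevation model."""
    nrows, ncols = len(G), len(G[0])
    m = [0.0] * ncols  # start from a flat sea surface
    for _ in range(steps):
        # forward model and residual at the "gauges"
        r = [sum(G[i][j] * m[j] for j in range(ncols)) - d[i]
             for i in range(nrows)]
        # adjoint step: gradient = G^T r
        grad = [sum(G[i][j] * r[i] for i in range(nrows))
                for j in range(ncols)]
        m = [m[j] - lr * grad[j] for j in range(ncols)]
    return m

# synthetic check: recover a known two-cell "source" from noise-free data
G = [[1.0, 0.2], [0.1, 1.0], [0.5, 0.5]]  # hypothetical 3 gauges x 2 cells
m_true = [1.0, 0.5]
d = [sum(G[i][j] * m_true[j] for j in range(2)) for i in range(3)]
m_est = adjoint_descent(G, d)
```

In the full method each forward evaluation is a tsunami simulation and the adjoint is the adjoint of the wave equation, but the update structure — simulate, form residuals at the gauges, back-project, step — is the same.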

  5. Purpose

     Product developers using life cycle toxicity characterization models to understand the potential impacts of chemical emissions face serious challenges related to large data demands and high input data uncertainty. This motivates greater focus on model sensitivity toward input parameter variability, to guide research efforts in data refinement and design of experiments for existing and emerging chemicals alike. This study presents a sensitivity-based approach for estimating toxicity characterization factors (CFs) given high input data uncertainty, and for using the results to prioritize data collection according to each parameter's influence on the CFs. Proof of concept is illustrated with the UNEP-SETAC scientific consensus model USEtox.

     Methods

     Using Monte Carlo analysis, we demonstrate a sensitivity-based approach to prioritize data collection, with an illustrative example of aquatic ecotoxicity CFs for the vitamin B derivative niacinamide, an antioxidant used in personal care products. We calculate CFs via 10,000 iterations, assuming plus-or-minus one order of magnitude variability in fate- and exposure-relevant data inputs, while uncertainty in effect factor data is modeled as a central t distribution. Spearman's rank correlation coefficients are computed for all variable inputs to identify the parameters with the largest influence on CFs.

     Results and discussion

     For emissions to freshwater, the niacinamide CF is near log-normally distributed with a geometric mean of 0.02 PAF m³ day/kg and a geometric standard deviation of 8.5. Spearman's rank correlation shows that degradation rates in air, water, and soil are the most influential parameters in calculating CFs, and thus would benefit the most from future data refinement and experimental research. Kow, the sediment degradation rate, and vapor pressure were the least influential parameters. These results may be very different for other, e.g., more lipophilic, chemicals, where Kow is known to drive many fate and exposure aspects in multimedia modeling. Furthermore, non-linearity between input parameters and CF results prevents transferring sensitivity conclusions from one chemical to another.

     Conclusions

     A sensitivity-based approach for data refinement and research prioritization can provide guidance to database managers, life cycle assessment practitioners, and experimentalists, concentrating effort on the few parameters that are most influential on toxicity characterization model results. Researchers can conserve resources and address parameter uncertainty by applying this approach when developing new CFs, or refining existing ones, for the inventory items that contribute most to toxicity impacts.
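The Monte Carlo plus rank-correlation workflow described above can be sketched compactly. The CF model below is purely hypothetical (a dominant inverse dependence on a degradation rate plus a weak Kow term), standing in for USEtox only to show how Spearman coefficients separate influential from non-influential inputs.

```python
import random

def rank(values):
    """1-based ranks of a sample (ties ignored; inputs here are continuous)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n = 10000  # Monte Carlo iterations, as in the study
# +/- one order of magnitude variability around nominal values of 1
k_deg = [10 ** random.uniform(-1, 1) for _ in range(n)]  # degradation rate
kow   = [10 ** random.uniform(-1, 1) for _ in range(n)]  # octanol-water coeff.
# hypothetical CF response: dominated by degradation, barely touched by Kow
cf = [1.0 / kd + 0.01 * ko for kd, ko in zip(k_deg, kow)]

print(abs(spearman(k_deg, cf)), abs(spearman(kow, cf)))
```

A large |rho| for the degradation rate and a small one for Kow reproduces, in miniature, the prioritization logic of the abstract: refine the inputs with the highest rank correlation to the CF first.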