Title: The Buffon's needle problem for random planar disk-like Cantor sets
A model of randomness for self-similar Cantor sets is considered. The expectation of the Buffon needle probability (Favard length) of a small neighborhood of such a random set is shown to be logarithmically small in the size of the neighborhood. This estimate is best possible by a classical estimate of P. Mattila.
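For orientation, here is a brief LaTeX sketch of the standard notions behind the abstract; the normalization and the constants are conventional assumptions, not taken from the paper. The Favard length of a planar set E averages the lengths of its orthogonal projections:

    \[
    \operatorname{Fav}(E) \;=\; \frac{1}{\pi} \int_{0}^{\pi} \bigl|\operatorname{proj}_{\theta}(E)\bigr| \, d\theta .
    \]

Writing E_delta for the delta-neighborhood of the random Cantor set E, the abstract's estimate has the shape

    \[
    \mathbb{E}\,\operatorname{Fav}(E_{\delta}) \;\le\; \frac{C}{\log(1/\delta)} ,
    \]

which is best possible because Mattila's classical estimate gives a matching lower bound of the form c / \log(1/\delta) for such sets.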
Award ID(s):
2154402
PAR ID:
10494851
Author(s) / Creator(s):
Editor(s):
Aron, R.
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Journal of Mathematical Analysis and Applications
Volume:
529
Issue:
2
ISSN:
0022-247X
Page Range / eLocation ID:
127622
Subject(s) / Keyword(s):
Favard length, random Cantor sets, Buffon needle probability
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1.
    Two interleaved stimulus sets were identical except for the background. In one, the background of the flow stimuli was the mid-gray of the interstimulus interval (equal background, eqbg), leading to a 9-10% change in the space-average luminance. In the other, the space-average luminance of the entire stimulus field was held constant to within 0.5% (equal luminance, eqlum); i.e., the background was slightly lightened when the dots in the flow were dark and darkened when the dots were bright. Most cortical cells appeared to respond similarly to the two stimulus sets, as if stimulus structure mattered but the background change did not, while the responses of most retinal ganglion cells appeared to differ between the two conditions. Machine learning analyses confirmed this quantitatively. A manifold embedding of neuronal responses to the two stimulus sets was constructed using diffusion maps. In this manifold, the responses of the same cell to eqlum and eqbg stimuli were significantly closer to one another for V1 than for the retina. Geometrically, the median ratio of the distance between each cell's responses to the two stimulus sets to the distance to the closest cell on the manifold was 3.5 for V1 versus 12.7 for retina. Topologically, the fraction of cells whose responses to the two stimulus sets were connected in the diffusion-map datagraph was 53% for V1 but only 9% for retina; when retina and cortex were co-embedded in the manifold, these fractions were 44% and 6%. While retina and cortex differ on average, it will be intriguing to determine whether particular classes of retinal cells behave more like V1 neurons, and vice versa. (A minimal sketch of this diffusion-map analysis appears below.)
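    The following is a minimal Python sketch of the diffusion-map analysis described above; the Gaussian-kernel bandwidth, the toy responses, and the exact form of the distance-ratio statistic are illustrative assumptions, not the study's pipeline.

        import numpy as np

        def diffusion_map(X, n_components=2, eps=None, t=1):
            """Basic diffusion-map embedding of the rows of X (one response vector per cell per condition)."""
            sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
            if eps is None:
                eps = np.median(sq)                   # median-heuristic kernel bandwidth
            K = np.exp(-sq / eps)                     # Gaussian affinities
            P = K / K.sum(axis=1, keepdims=True)      # row-normalize to a Markov matrix
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)            # sort by eigenvalue; first is trivial
            vals, vecs = vals.real[order], vecs.real[:, order]
            return vecs[:, 1:n_components + 1] * vals[1:n_components + 1] ** t

        rng = np.random.default_rng(0)
        resp_eqlum = rng.normal(size=(50, 20))                    # toy responses, 50 cells
        resp_eqbg = resp_eqlum + 0.1 * rng.normal(size=(50, 20))  # similar responses to eqbg

        emb = diffusion_map(np.vstack([resp_eqlum, resp_eqbg]))
        eqlum_pts, eqbg_pts = emb[:50], emb[50:]

        # Distance-ratio statistic: same-cell distance across conditions,
        # relative to the distance to the nearest other cell in the embedding.
        ratios = []
        for i in range(50):
            same = np.linalg.norm(eqlum_pts[i] - eqbg_pts[i])
            others = np.delete(eqlum_pts, i, axis=0)
            nearest = np.linalg.norm(others - eqlum_pts[i], axis=1).min()
            ratios.append(same / nearest)
        print("median distance ratio:", np.median(ratios))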
  2. There are multiple protocols for determining total nitrogen (TN) in water, but most can be grouped into direct approaches (TN-d) that convert N forms to nitrogen oxides (NOx) and combined approaches (TN-c) that combine Kjeldahl N (organic N + NH3) with nitrite + nitrate (NO2 + NO3-N). TN concentrations from these two approaches are routinely treated as equal in studies that use data derived from multiple sources (i.e., integrated data sets), despite the distinct chemistries of the two methods. We used two integrated data sets to determine whether TN-c and TN-d results are interchangeable. Accuracy, determined as the difference between reported concentrations and the most probable value (MPV) of reference samples, was high and similar in magnitude (within 3.5-4.5% of the MPV) for both methods, although the bias was significantly smaller at low concentrations for TN-d. Detection limits and data flagged as below detection suggested greater sensitivity for TN-d in one data set, while patterns in the other data set were ambiguous. TN-c results were more variable (less precise) by many measures, although TN-d data included a small fraction of notably inaccurate results. Precision of TN-c was further compromised by propagated error, which may not be acknowledged or detectable in integrated data sets unless complete metadata are available and inspected. Finally, concurrent measures of TN-c and TN-d in lake samples were extremely similar. Overall, TN-d tended to be slightly more accurate and precise, but similarities in accuracy and the near 1:1 relationship for concurrent TN-d and TN-c measurements support careful interchangeable use of the data in analyses of heterogeneous, integrated data sets. (A small sketch of the accuracy comparison appears below.)
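    Below is a small Python sketch of the accuracy/precision comparison described above; the measurement values are hypothetical, and TN-c error propagation is noted only as a comment.

        import numpy as np

        # Hypothetical paired measurements of reference samples (mg N / L).
        mpv  = np.array([0.50, 1.00, 2.00, 4.00])   # most probable values
        tn_d = np.array([0.51, 0.98, 2.03, 3.95])   # direct (NOx conversion) method
        tn_c = np.array([0.47, 1.05, 1.92, 4.12])   # combined (Kjeldahl + NO2/NO3) method

        def pct_error(measured, reference):
            """Signed percent difference from the most probable value."""
            return 100.0 * (measured - reference) / reference

        for name, vals in [("TN-d", tn_d), ("TN-c", tn_c)]:
            err = pct_error(vals, mpv)
            print(f"{name}: mean |error| = {np.abs(err).mean():.1f}%, sd = {err.std(ddof=1):.1f}%")

        # TN-c combines two assays, so its uncertainty propagates:
        # sigma_tn_c = sqrt(sigma_kjeldahl**2 + sigma_nox**2)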
  3.
    Bleeding frequency and severity within clinical categories of hemophilia A are highly variable, and the origin of this variation is unknown. Solving this mystery in coagulation requires the generation and analysis of large data sets composed of experimental outputs or patient samples, both of which are subject to limited availability. In this review, we describe how a computationally driven approach bypasses such limitations by generating large synthetic patient data sets. These data sets were created with a mechanistic mathematical model by varying the model inputs (clotting factor and inhibitor concentrations) within normal physiological ranges. Specific mathematical metrics were chosen from the model output, used as surrogate measures for bleeding severity, and statistically analyzed for further exploration and hypothesis generation. We highlight results from our recent study that employed this computationally driven approach to identify FV (factor V) as a key modifier of thrombin generation in mild to moderate hemophilia A, which was confirmed with complementary experimental assays. The mathematical model was used further to propose a potential mechanism for these observations, whereby thrombin generation is rescued in FVIII-deficient plasma due to reduced substrate competition between FV and FVIII for FXa. (A toy sketch of the synthetic-cohort idea appears below.)
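    The Python sketch below illustrates the synthetic-cohort workflow; the factor ranges, the surrogate metric, and the toy "competition" term are fabricated stand-ins for the review's mechanistic ODE model, included only to show the pattern of sampling inputs and analyzing a model output.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical input ranges (percent of mean plasma level); FVIII is
        # restricted to 0-5% to mimic severe-to-moderate hemophilia A.
        ranges = {"FV": (50, 150), "FVIII": (0, 5), "FX": (50, 150)}

        def sample_patient():
            return {f: rng.uniform(lo, hi) for f, (lo, hi) in ranges.items()}

        def thrombin_metric(p):
            """Toy surrogate for a thrombin-generation metric -- NOT the paper's
            model. Lower FV eases FV/FVIII competition for FXa and raises the
            output, in the spirit of the hypothesized mechanism."""
            competition = p["FV"] / (p["FV"] + p["FVIII"] + 1e-9)
            return p["FX"] * (p["FVIII"] + 0.1) / (1.0 + 0.5 * competition)

        patients = [sample_patient() for _ in range(5000)]   # synthetic cohort
        metrics = np.array([thrombin_metric(p) for p in patients])
        fv = np.array([p["FV"] for p in patients])
        print("corr(FV, metric) =", round(np.corrcoef(fv, metrics)[0, 1], 2))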
  4. There is increased interest in model-agnostic search strategies for physics beyond the standard model at the Large Hadron Collider. We introduce a Deep Set Variational Autoencoder and present results on the Dark Machines Anomaly Score Challenge. We find that the method attains the best anomaly detection ability when there is no decoding step for the network, and the anomaly score is based solely on the representation within the encoded latent space. This method was one of the top-performing models in the Dark Machines Challenge, for both the open and the blinded data sets. (A minimal encoder-only sketch appears below.)
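    Here is a minimal PyTorch sketch of a permutation-invariant (Deep Sets) encoder scored without a decoder; the layer sizes, input features, and the KL-based latent score are illustrative assumptions, since the paper's exact architecture is not reproduced here.

        import torch
        import torch.nn as nn

        class DeepSetEncoder(nn.Module):
            """phi on each constituent, sum-pool over the set, then rho."""
            def __init__(self, in_dim=4, hidden=64, latent=8):
                super().__init__()
                self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden), nn.ReLU())
                self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                         nn.Linear(hidden, 2 * latent))  # mu, logvar

            def forward(self, x):                 # x: (batch, n_constituents, in_dim)
                pooled = self.phi(x).sum(dim=1)   # permutation-invariant pooling
                mu, logvar = self.rho(pooled).chunk(2, dim=-1)
                return mu, logvar

        def latent_anomaly_score(mu, logvar):
            """KL divergence of q(z|x) from the unit-Gaussian prior: a score
            computed purely in latent space, with no decoding step."""
            return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1)

        events = torch.randn(32, 20, 4)           # toy batch: 20 particles, 4 features
        mu, logvar = DeepSetEncoder()(events)
        print(latent_anomaly_score(mu, logvar)[:5])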
  5. High entropy alloys (HEAs) are an important material class in the development of next-generation structural materials, but their astronomically large composition space cannot be efficiently explored by experiments or first-principles calculations. Machine learning (ML) methods might address this challenge, but ML of HEAs has been hindered by the scarcity of HEA property data. In this work, the EMTO-CPA method was used to generate a large HEA dataset (spanning a composition space of 14 elements) containing 7086 cubic HEA structures with structural properties, 1911 of which have the complete elastic tensor calculated. The elastic property dataset was used to train an ML model with the Deep Sets architecture, which showed better predictive performance and generalizability than other ML models. Association rule mining was applied to the model predictions to describe the compositional dependence of HEA elastic properties and to demonstrate the potential for data-driven alloy design. (A small association-rule sketch appears below.)
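    To illustrate the association-rule step, here is a short Python sketch using mlxtend on fabricated one-hot data; the element columns, the "high bulk modulus" flag, and the thresholds are hypothetical, not taken from the paper's dataset.

        import pandas as pd
        from mlxtend.frequent_patterns import apriori, association_rules

        # Hypothetical one-hot table: which elements each HEA contains, plus a
        # flag marking whether its ML-predicted bulk modulus is in the top quartile.
        data = pd.DataFrame({
            "Mo": [1, 1, 0, 1, 0, 1], "Nb": [1, 0, 1, 1, 0, 0],
            "Cr": [0, 1, 1, 0, 1, 1], "Al": [0, 0, 1, 0, 1, 0],
            "high_B": [1, 1, 0, 1, 0, 1],
        }).astype(bool)

        frequent = apriori(data, min_support=0.3, use_colnames=True)
        rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

        # Keep rules whose consequent is the high-modulus flag.
        mask = rules["consequents"].apply(lambda c: c == frozenset({"high_B"}))
        print(rules[mask][["antecedents", "support", "confidence"]])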