Abstract We investigate the performance of a GPU-accelerated photon propagation tool compared with a single-threaded simulation. We compare the simulations using an improved model of the gaseous time projection chamber. Performance results suggest that the GPU-accelerated tool improves simulation speeds by between $$58.47\pm 0.02$$ and $$181.39\pm 0.28$$ times relative to a CPU-only simulation, and that these factors vary between different types of GPU and CPU. A detailed comparison shows that the number of detected photons, along with their times and wavelengths, is in good agreement between the two simulations.
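As a purely illustrative sketch (not from the paper), speed-up factors of this form and their quoted uncertainties can be derived from repeated wall-clock timings of the same workload on each backend; the timing arrays and helper function below are hypothetical.

```python
import numpy as np

# Hypothetical wall-clock timings (seconds) of the same photon-propagation
# workload, repeated on each backend.  The numbers are illustrative only and
# do not come from the paper.
cpu_times = np.array([1210.4, 1208.9, 1212.1, 1209.7])  # single-threaded CPU
gpu_times = np.array([6.68, 6.66, 6.70, 6.67])          # GPU-accelerated

def speedup_with_uncertainty(t_ref, t_new):
    """Mean speed-up t_ref/t_new and its propagated standard error."""
    m_ref, m_new = t_ref.mean(), t_new.mean()
    s_ref = t_ref.std(ddof=1) / np.sqrt(t_ref.size)  # standard error of the mean
    s_new = t_new.std(ddof=1) / np.sqrt(t_new.size)
    speedup = m_ref / m_new
    # First-order error propagation for a ratio of independent measurements.
    sigma = speedup * np.sqrt((s_ref / m_ref) ** 2 + (s_new / m_new) ** 2)
    return speedup, sigma

s, ds = speedup_with_uncertainty(cpu_times, gpu_times)
print(f"speed-up = {s:.2f} +/- {ds:.2f}")
```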
Investigation of the rate-mediated form-function relationship in biological puncture
Abstract Puncture is a vital mechanism for survival in a wide range of organisms across phyla, serving biological functions such as prey capture, defense, and reproduction. Understanding how the shape of the puncture tool affects its functional performance is crucial to uncovering the mechanics underlying the diversity and evolution of puncture-based systems. However, such form-function relationships are often complicated by the dynamic nature of living systems. Puncture systems in particular operate over a wide range of speeds to penetrate biological tissues. Current studies on puncture biomechanics lack systematic characterization of the complex, rate-mediated interaction between tool and material across this dynamic range. To fill this knowledge gap, we establish a highly controlled experimental framework for dynamic puncture to investigate the relationship between puncture performance (characterized by the depth of puncture) and tool sharpness (characterized by the cusp angle) across a wide range of bio-relevant puncture speeds (from quasi-static to $$\sim 50$$ m/s). Our results show that the sensitivity of puncture performance to variations in tool sharpness decreases at higher puncture speeds. This trend is likely due to rate-dependent viscoelastic and inertial effects arising from how materials respond to dynamic loads. The rate-dependent form-function relationship has important biological implications: while passive or low-speed puncture organisms likely rely heavily on sharp puncture tools to penetrate successfully and maintain their functions, higher-speed puncture systems may tolerate greater variability in puncture tool shape because their puncture performance is relatively insensitive to geometry, allowing greater adaptability to other mechanical factors during evolution.
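A minimal sketch, with entirely synthetic placeholder numbers, of how the sharpness sensitivity reported above could be quantified: fit puncture depth against cusp angle at each test speed and compare the fitted slopes. The data arrays and the `angle_sensitivity` helper are illustrative assumptions, not the study's data or analysis pipeline.

```python
import numpy as np

# Synthetic placeholder data: puncture depth (mm) measured at several cusp
# angles (degrees) for a quasi-static test and a high-speed (~50 m/s) test.
# Values are illustrative only, not measurements from the study.
cusp_angle = np.array([20.0, 40.0, 60.0, 80.0])
depth_quasistatic = np.array([14.1, 10.3, 6.9, 3.8])
depth_highspeed   = np.array([18.9, 17.8, 16.9, 16.1])

def angle_sensitivity(angle_deg, depth_mm):
    """Fitted slope of depth vs. cusp angle (mm per degree); a more negative
    slope means puncture depth is more sensitive to tool sharpness."""
    slope, _intercept = np.polyfit(angle_deg, depth_mm, 1)
    return slope

print("quasi-static slope:", angle_sensitivity(cusp_angle, depth_quasistatic))
print("high-speed slope:  ", angle_sensitivity(cusp_angle, depth_highspeed))
```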
- Award ID(s): 1942906
- PAR ID: 10490601
- Publisher / Repository: Nature Portfolio
- Date Published:
- Journal Name: Scientific Reports
- Volume: 13
- Issue: 1
- ISSN: 2045-2322
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract We present a broad review of $$1/f$$ noise observations in the heliosphere, and discuss and complement the theoretical background of generic $$1/f$$ models as relevant to NASA’s Polarimeter to UNify the Corona and Heliosphere (PUNCH) mission. First observed in the voltage fluctuations of vacuum tubes, the scale-invariant $$1/f$$ spectrum has since been identified across a wide array of natural and artificial systems, including heart rate fluctuations and loudness patterns in musical compositions. In the solar wind the interplanetary magnetic field trace spectrum exhibits $$1/f$$ scaling within the frequency range from around $$\unit[2 \times 10^{-6}]{Hz}$$ to around $$\unit[10^{-3}]{Hz}$$ at 1 au. One compelling mechanism for the generation of $$1/f$$ noise is the superposition principle, where a composite $$1/f$$ spectrum arises from the superposition of a collection of individual power-law spectra characterized by a scale-invariant distribution of correlation times. In the context of the solar wind, such a superposition could originate from scale-invariant reconnection processes in the corona. Further observations have detected $$1/f$$ signatures in the photosphere and corona at frequency ranges compatible with those observed at 1 au, suggesting an even lower-altitude origin of the $$1/f$$ spectrum in the solar dynamo itself. This hypothesis is bolstered by dynamo experiments and simulations that indicate inverse cascade activities, which can be linked to successive flux tube reconnections beneath the corona, and are known to generate $$1/f$$ noise possibly through nonlocal interactions at the largest scales. Conversely, models positing in situ generation of $$1/f$$ signals face causality issues in explaining the low-frequency portion of the $$1/f$$ spectrum. Understanding $$1/f$$ noise in the solar wind may inform central problems in heliospheric physics, such as the solar dynamo, coronal heating, the origin of the solar wind, and the nature of interplanetary turbulence.
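A minimal numerical sketch of the superposition mechanism described above: summing Lorentzian spectra whose correlation times follow a scale-invariant (log-uniform) distribution produces a composite spectrum close to $$1/f$$ over the covered band. All parameters are arbitrary illustrative choices, not values from the review.

```python
import numpy as np

# Superposition principle: each relaxation process with correlation time tau
# contributes a Lorentzian spectrum  S_tau(f) ~ tau / (1 + (2*pi*f*tau)^2).
# A scale-invariant (log-uniform) distribution of tau, i.e. p(tau) ~ 1/tau,
# yields a composite spectrum close to 1/f between 1/tau_max and 1/tau_min.
rng = np.random.default_rng(0)
tau_min, tau_max, n_proc = 1e2, 1e6, 20000            # seconds, arbitrary
tau = np.exp(rng.uniform(np.log(tau_min), np.log(tau_max), n_proc))

f = np.logspace(-6, -1, 400)                          # Hz
composite = (tau[:, None] / (1.0 + (2 * np.pi * f[None, :] * tau[:, None]) ** 2)).sum(axis=0)

# Fit the log-log slope inside the expected 1/f band; it should be close to -1.
band = (f > 10 / tau_max) & (f < 0.1 / tau_min)
slope = np.polyfit(np.log10(f[band]), np.log10(composite[band]), 1)[0]
print(f"fitted spectral slope in band: {slope:.2f}  (1/f corresponds to -1)")
```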
-
Heavy fermion criticality has been a long-standing problem in condensed matter physics. Here we study a one-dimensional Kondo lattice model through numerical simulation and observe signatures of local criticality. We vary the Kondo coupling $$J_K$$ at fixed doping $$x$$. At large positive $$J_K$$, we confirm the expected conventional Luttinger liquid phase with $$2k_F=\frac{1+x}{2}$$ (in units of $$2\pi$$), an analogue of the heavy Fermi liquid (HFL) in higher dimensions. On the $$J_K \le 0$$ side, our simulation finds the existence of a fractional Luttinger liquid (LL$$^\star$$) phase with $$2k_F=\frac{x}{2}$$, accompanied by a gapless spin mode originating from localized spin moments, which serves as an analogue of the fractional Fermi liquid (FL$$^\star$$) phase in higher dimensions. The LL$$^\star$$ phase becomes unstable and transitions to a spin-gapped Luther-Emery (LE) liquid phase at small positive $$J_K$$. We then focus mainly on the “critical regime” between the LE phase and the LL phase. Approaching the critical point from the spin-gapped LE phase, we often find that the spin gap vanishes continuously, while the spin-spin correlation length in real space stays finite and small. For a certain range of doping, at a point (or in a narrow region) of $$J_K$$, the dynamical spin structure factor obtained through the time-evolving block decimation (TEBD) simulation shows dispersionless spin fluctuations in a finite range of momentum space above a small energy scale (around $$0.035J$$) that is limited by the TEBD accuracy. All of these results are unexpected for a regular gapless phase (or critical point) described by conformal field theory (CFT). Instead, they are more consistent with exotic ultra-local criticality with an infinite dynamical exponent $$z=+\infty$$. The numerical discovery here may have important implications for our general theoretical understanding of strange metals in heavy fermion systems. Lastly, we propose to simulate the model in a bilayer optical lattice with a potential difference.
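A schematic sketch of the generic post-processing step behind a dynamical spin structure factor: space-time Fourier transforming a real-space, real-time correlation function $$C(x,t)$$. The correlation data below are synthetic stand-ins; producing $$C(x,t)$$ for the Kondo lattice model would require an actual TEBD (tensor-network) simulation, which is not reproduced here.

```python
import numpy as np

# Generic post-processing: the dynamical spin structure factor S(k, w) is the
# space-time Fourier transform of C(x, t) = <S^z_x(t) S^z_0(0)>.  Here C(x, t)
# is a synthetic stand-in (a single dispersive mode with spatial decay); in
# practice it would come from a tensor-network time evolution such as TEBD.
L, T, dt = 64, 200, 0.1
x = np.arange(L) - L // 2
t = np.arange(T) * dt
k0, v = 0.8 * np.pi, 1.3                              # arbitrary mode parameters
C = np.cos(k0 * x[None, :] - v * k0 * t[:, None]) * np.exp(-np.abs(x[None, :]) / 20.0)

window = np.hanning(T)[:, None]                       # suppress ringing from the finite time window
S_kw = np.fft.fftshift(np.fft.fft2(C * window))       # axes: (frequency, momentum)
k = np.fft.fftshift(np.fft.fftfreq(L, d=1.0)) * 2 * np.pi
w = np.fft.fftshift(np.fft.fftfreq(T, d=dt)) * 2 * np.pi

# Locate the dominant (k, w) component as a quick sanity check.
iw, ik = np.unravel_index(np.argmax(np.abs(S_kw)), S_kw.shape)
print(f"dominant mode at |k| = {abs(k[ik]):.2f}, |w| = {abs(w[iw]):.2f} "
      f"(expected roughly {k0:.2f} and {v * k0:.2f})")
```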
-
Abstract The evolutionary path of massive stars begins at helium burning. Energy production for this phase of stellar evolution is dominated by the reaction path 3$$\alpha \rightarrow {}^{12}$$C$$(\alpha,\gamma)^{16}$$O, which also determines the ratio of $$^{12}$$C/$$^{16}$$O in the stellar core. This ratio then sets the evolutionary trajectory as the star evolves towards a white dwarf, neutron star, or black hole. Although the reaction rate of the 3$$\alpha$$ process is relatively well known, since it proceeds mainly through a single narrow resonance in $$^{12}$$C, that of the $$^{12}$$C$$(\alpha,\gamma)^{16}$$O reaction remains uncertain, since it results from a slowly varying, difficult-to-pin-down portion of the cross section over a strong interference region between the high-energy tails of subthreshold resonances, the low-energy tails of higher-energy broad resonances, and direct capture. Experimental measurements of this cross section require herculean efforts, since even at higher energies the cross section remains small and large background sources are often present, requiring very sensitive experimental methods. Since the $$^{12}$$C$$(\alpha,\gamma)^{16}$$O reaction has such a strong influence on many different stellar objects, it is also interesting to try to back-calculate the rate required to match astrophysical observations. This has become increasingly tempting, as the accuracy and precision of observational data have been steadily improving. Yet the pitfall of this approach lies in the intermediary modeling steps, where other quantities needed to describe a star’s internal behavior remain highly uncertain.
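A minimal sketch of why such low-energy cross sections matter: the thermally averaged rate is dominated by the Gamow window, whose location and width for $$^{12}$$C + $$\alpha$$ at helium-burning temperatures can be estimated from standard textbook approximations. The temperature and constants below are illustrative, not taken from the article.

```python
# Gamow-window estimate for 12C(alpha, gamma)16O at helium-burning temperatures.
# Standard textbook approximations (Clayton-style):
#   E0    = 1.220 (Z1^2 Z2^2 A T6^2)^(1/3)  keV   (peak of the Gamow window)
#   Delta = 0.749 (Z1^2 Z2^2 A T6^5)^(1/6)  keV   (its width)
# with A the reduced mass number and T6 the temperature in units of 10^6 K.
Z1, Z2 = 6, 2                     # charges of 12C and 4He
A = 12.0 * 4.0 / (12.0 + 4.0)     # reduced mass number
T6 = 200.0                        # ~2e8 K, typical core helium burning (illustrative)

E0 = 1.220 * (Z1**2 * Z2**2 * A * T6**2) ** (1.0 / 3.0)
Delta = 0.749 * (Z1**2 * Z2**2 * A * T6**5) ** (1.0 / 6.0)
print(f"Gamow peak ~ {E0:.0f} keV, width ~ {Delta:.0f} keV")
# => roughly 300 keV, far below the energies where the cross section is large
#    enough to measure directly, hence the need for careful extrapolation.
```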
-
Abstract In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularization. Higher-order regularity can be obtained by replacing the Laplacian regulariser with a poly-Laplacian regulariser. The methodology is readily adapted to graphs, and here we consider graph poly-Laplacian regularization in a fully supervised, non-parametric, noise-corrupted regression problem. In particular, given a dataset $$\{x_i\}_{i=1}^n$$ and a set of noisy labels $$\{y_i\}_{i=1}^n\subset \mathbb{R}$$, we let $$u_n:\{x_i\}_{i=1}^n\rightarrow \mathbb{R}$$ be the minimizer of an energy which consists of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When $$y_i = g(x_i)+\xi_i$$, for iid noise $$\xi_i$$, and using the geometric random graph, we identify (with high probability) the rate of convergence of $$u_n$$ to $$g$$ in the large data limit $$n\rightarrow \infty$$. Furthermore, our rate is close to the known rate of convergence in the usual smoothing spline model.
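A minimal sketch of the estimator described above, under simplifying assumptions: an $$\varepsilon$$-ball geometric graph with 0/1 weights, poly-Laplacian order $$m=2$$, and ad hoc scaling constants rather than the scalings analyzed in the paper. The minimization then reduces to a single linear solve.

```python
import numpy as np

# Graph poly-Laplacian regression: minimize over u in R^n
#     (1/n) * ||u - y||^2  +  tau * u^T L^m u,
# where L is the graph Laplacian of a geometric random graph on {x_i}.
# The first-order condition gives the linear system (I + n*tau*L^m) u = y.
rng = np.random.default_rng(1)
n, eps, m, tau = 400, 0.15, 2, 1e-5        # illustrative choices, not the paper's scalings

x = rng.uniform(0.0, 1.0, size=(n, 2))     # data points in [0,1]^2
g = np.sin(2 * np.pi * x[:, 0]) * np.cos(2 * np.pi * x[:, 1])
y = g + 0.3 * rng.standard_normal(n)       # noisy labels y_i = g(x_i) + xi_i

# Geometric random graph: connect points within distance eps (0/1 weights).
dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
W = ((dist < eps) & (dist > 0)).astype(float)
L = np.diag(W.sum(axis=1)) - W             # unnormalized graph Laplacian

u = np.linalg.solve(np.eye(n) + n * tau * np.linalg.matrix_power(L, m), y)
print("RMS error of noisy labels:", np.sqrt(np.mean((y - g) ** 2)))
print("RMS error of estimator u :", np.sqrt(np.mean((u - g) ** 2)))
# With these illustrative choices the regularized u typically tracks g more
# closely than the raw noisy labels, showing the denoising role of the penalty.
```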