ABSTRACT The light curves of radioactive transients, such as supernovae and kilonovae, are powered by the decay of radioisotopes, which release high-energy leptons through $\beta^+$ and $\beta^-$ decays. These leptons deposit energy into the expanding ejecta. As the ejecta density decreases during expansion, the plasma becomes collisionless, with particle motion governed by electromagnetic forces. In such environments, strong or turbulent magnetic fields are thought to confine particles, though the origin of these fields and the confinement mechanism have remained unclear. Using fully kinetic particle-in-cell (PIC) simulations, we demonstrate that plasma instabilities can naturally confine high-energy leptons. These leptons generate magnetic fields through plasma streaming instabilities, even in the absence of pre-existing fields. The self-generated magnetic fields slow lepton diffusion, enabling confinement and transferring energy to thermal electrons and ions. Our results naturally explain the positron trapping inferred from late-time observations of thermonuclear and core-collapse supernovae. Furthermore, they suggest potential implications for electron dynamics in the ejecta of kilonovae. We also estimate the synchrotron radio luminosity produced by positrons in Type Ia supernovae and find that such emission would be detectable only with next-generation radio observatories, and only from a Galactic or Local Group supernova in an environment free of circumstellar material.
Abstract A wide range of astrophysical sources exhibit extreme and rapidly varying electromagnetic emission indicative of efficient nonthermal particle acceleration. Understanding these sources often involves comparing data with a broad range of theoretical scenarios. To this end, it is beneficial to have tools that enable not only fast and efficient parametric investigation of the predictions of a specific scenario but also the flexibility to explore different theoretical ideas. In this paper, we introduce Tleco, a versatile and lightweight toolkit for developing numerical models of relativistic outflows, including their particle acceleration mechanisms and resultant electromagnetic signatures. Built on the Rust programming language and wrapped in a Python library, Tleco offers efficient algorithms for evolving relativistic particle distributions and for computing the resulting emission in a customizable fashion. Tleco uses a fully implicit discretization algorithm to solve the Fokker–Planck equation with user-defined diffusion, advection, cooling, injection, and escape, and it offers prescriptions for radiative emission and cooling, including, but not limited to, synchrotron radiation, inverse-Compton scattering, and synchrotron self-absorption. Tleco is designed to be user friendly and adaptable for modeling particle acceleration and the resulting electromagnetic spectrum and temporal variability in a wide variety of astrophysical scenarios, including, but not limited to, gamma-ray bursts, pulsar wind nebulae, and jets from active galactic nuclei. In this work, we outline the core algorithms and proceed to evaluate and demonstrate their effectiveness. The code is open source and available in the GitHub repository: https://github.com/zkdavis/Tleco.
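The fully implicit Fokker–Planck approach mentioned in this abstract can be illustrated with a minimal sketch. This is not Tleco's actual API (the toolkit is Rust wrapped in Python); it is a generic backward-Euler, upwind step in plain NumPy, keeping only a synchrotron-like cooling term of the Fokker–Planck equation. The grid, coefficient `C`, and time step are illustrative assumptions.

```python
import numpy as np

def implicit_cooling_step(n, p, dt, C):
    """One backward-Euler (fully implicit) step of
        dn/dt = d/dp [ C p^2 n ]   (momentum loss rate pdot = -C p^2),
    upwind-discretized on a uniform momentum grid p. The implicit
    update is unconditionally stable and positivity-preserving."""
    N, dp = len(p), p[1] - p[0]
    A = np.zeros((N, N))
    for i in range(N):
        # loss of density at node i, gain from the node above (flux flows downward in p)
        A[i, i] = 1.0 + dt * C * p[i] ** 2 / dp
        if i + 1 < N:
            A[i, i + 1] = -dt * C * p[i + 1] ** 2 / dp
    return np.linalg.solve(A, n)

p = np.linspace(1.0, 100.0, 400)
n = np.exp(-0.5 * ((p - 50.0) / 5.0) ** 2)  # bump of particles near p = 50
C, dt = 1e-4, 0.5
for _ in range(100):
    n = implicit_cooling_step(n, p, dt, C)
# the bump drifts toward lower p, following 1/p(t) = 1/p0 + C*t
```

A production solver would use a sparse or tridiagonal solve instead of the dense one above, and would add the diffusion, advection, injection, and escape terms the abstract lists; the structure of the implicit update is the same.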
Abstract Kinetic simulations of relativistic turbulence have significantly advanced our understanding of turbulent particle acceleration. Recent progress has highlighted the need for an updated acceleration theory that can account for particle acceleration within the plasma's coherent structures. Here, we investigate how intermittency modeling connects statistical fluctuations in turbulence to regions of high-energy dissipation. This connection is established by employing a generalized She–Leveque model to characterize the scaling exponents ζp of the structure functions Sp(ℓ) ∝ ℓ^ζp. Fitting the scaling exponents provides a measure of the codimension of the dissipative structures, from which we subsequently determine their filling fraction. We perform our analysis for a range of magnetizations σ and relative fluctuation amplitudes δB0/B0. We find that increasing values of σ and δB0/B0 allow the turbulent cascade to break sheetlike structures into smaller regions of dissipation that resemble chains of flux ropes. However, as their dissipation measure increases, the dissipative regions become less volume filling. With this work, we aim to inform future turbulent acceleration theories that incorporate particle energization from interactions with coherent structures within relativistic turbulence.
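The structure-function analysis this abstract describes can be sketched generically. The example below uses a synthetic 1D monofractal signal (spectral synthesis with Hurst exponent H = 1/3, a Kolmogorov-like stand-in), not the authors' PIC data or their generalized She–Leveque fit: it estimates ζp as the log-log slope of Sp(ℓ) = ⟨|u(x+ℓ) − u(x)|^p⟩.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic monofractal signal via spectral synthesis: power spectrum
# ~ k^-(2H+1) gives increments that scale as l^H, i.e. zeta_p = p*H
N, H = 2**16, 1.0 / 3.0
k = np.fft.rfftfreq(N)[1:]
amp = k ** (-(H + 0.5))
phases = np.exp(2j * np.pi * rng.random(len(k)))
spec = np.concatenate(([0.0], amp * phases))
u = np.fft.irfft(spec, n=N)

def zeta(p, lags):
    """Estimate the scaling exponent zeta_p from
    S_p(l) = <|u(x+l) - u(x)|^p> ~ l^zeta_p via a log-log slope fit."""
    S = [np.mean(np.abs(u[l:] - u[:-l]) ** p) for l in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(S), 1)
    return slope

lags = np.unique(np.logspace(0.5, 3, 12).astype(int))
# for this monofractal signal, zeta_2 should come out near 2H = 2/3
```

Intermittent (multifractal) turbulence departs from the linear ζp = pH law at high p; fitting that departure with a She–Leveque-type form is what yields the codimension of the dissipative structures discussed in the abstract.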
Abstract Recent analyses have found waves of neural activity traveling across entire visual cortical areas in awake animals. These traveling waves modulate the excitability of local networks and perceptual sensitivity. The general computational role of these spatiotemporal patterns in the visual system, however, remains unclear. Here, we hypothesize that traveling waves endow the visual system with the capacity to predict complex and naturalistic inputs. We present a network model whose connections can be rapidly and efficiently trained to predict individual natural movies. After training, a few input frames from a movie trigger complex wave patterns that drive accurate predictions many frames into the future, solely from the network's connections. When the recurrent connections that drive waves are randomly shuffled, both the traveling waves and the ability to predict are eliminated. These results suggest that traveling waves may play an essential computational role in the visual system by embedding continuous spatiotemporal structures over spatial maps.
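The core logic of the shuffle control — that recurrent connections alone can carry a wave that anticipates a moving input, and that shuffling those connections destroys both the wave and the prediction — can be caricatured in a few lines. This is a hypothetical hand-built ring network with shift connectivity, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
# recurrent weights: each unit drives its right-hand neighbor, so a bump
# of activity propagates one site per step (a toy traveling wave)
W = np.roll(np.eye(N), 1, axis=0)

def run(W, x0, steps):
    """Iterate the linear recurrence x_{t+1} = W x_t from state x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(W @ xs[-1])
    return np.array(xs)

bump = np.zeros(N)
bump[10] = 1.0
wave = run(W, bump, 20)
# after 20 steps the bump sits at site 30: a forecast of where a
# rightward-moving input would be, generated purely by the connections

# shuffling the recurrent weights abolishes the wave and the prediction
W_shuf = W.flatten()
rng.shuffle(W_shuf)
W_shuf = W_shuf.reshape(N, N)
scrambled = run(W_shuf, bump, 20)
```

The trained model in the abstract is far richer (it stores whole natural movies in its connectivity), but the control has the same structure: identical weights, identical input, with only the spatial arrangement of the recurrent connections destroyed.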
SAE (Ed.) An investigation of the performance and emissions of a Fischer-Tropsch Coal-to-Liquid (CTL) Iso-Paraffinic Kerosene (IPK) was conducted using a CRDI compression-ignition research engine, with ULSD as a reference. Due to the low Derived Cetane Number (DCN) of IPK, an extended Ignition Delay (ID) and Combustion Delay (CD) were found for it through experimentation in a Constant Volume Combustion Chamber (CVCC). Neat IPK was analyzed in the research engine at 4 bar Indicated Mean Effective Pressure (IMEP) at three injection timings: 15°, 20°, and 25° BTDC. Combustion phasing (CA50) was matched with ULSD at 10.8° and 16° BTDC. The IPK DCN was found to be 26, while the ULSD DCN was significantly higher at 47, measured in a PAC CID 510. In the engine, IPK's low DCN, combined with its short physical ignition delay and long chemical ignition delay relative to ULSD, caused an extended duration of Low Temperature Heat Release (LTHR) and cool-flame formation. Analysis of the Apparent Heat Release Rate (AHRR) curve for IPK revealed multiple Negative Temperature Coefficient (NTC) regions before the main combustion event. The High Temperature Heat Release (HTHR) of IPK achieved a greater peak heat release rate than ULSD. The pressure rise rate for IPK was observed to increase significantly as injection timing was advanced. The peak in-cylinder pressure was also greater for IPK when matching CA50 by varying injection timing. Emissions analysis revealed that IPK produced less NOx, soot, and CO2 than ULSD, while CO and UHC emissions for IPK increased.
SAE; Transactions (Ed.) Alternative fuels are sought after because they produce lower emissions and sometimes have feedstock and production advantages over fossil fuels, but their wear effects on engine components are largely unknown. In this study, the lubricity properties of a Fischer-Tropsch Gas-to-Liquid alternative fuel (Synthetic Paraffinic Kerosene, S8) and of Jet-A fuel were investigated and compared to those of Ultra Low Sulphur Diesel (ULSD). A pin-on-disk tribometer was employed to test wear and friction for a material pair of an AISI 316 steel ball on an AISI 1018 steel disk lubricated by each of the fuels. Advanced digital microscopy was used to compare the wear patterns of the disks, and viscosity and density analyses of the tested fluids were also carried out. Tribometry showed that S8 fell between Jet-A and ULSD in calculated friction force but exhibited higher wear over time, and after each test, than Jet-A and ULSD. An initially higher running-in friction force of 0.35 N to 0.38 N was observed for all three tested fluids, followed by lower quasi-steady-state friction forces of 0.310 N for S8, 0.320 N for Jet-A, and 0.295 N for ULSD (the lowest observed). Wear values obtained from the mass loss of the tested AISI 1018 steel disks show that Jet-A and the reference fuel ULSD may yield lower wear (associated with better lubricity) than S8, and the microscopy images are consistent with the wear results.
The cortical column is one of the fundamental computational circuits in the brain. In order to understand the role that neurons in different layers of this circuit play in cortical function, it is necessary to identify the boundaries that separate the laminar compartments. While histological approaches can reveal ground truth, they are not a practical means of identifying cortical layers in vivo. The gold standard for identifying laminar compartments in electrophysiological recordings is current-source density (CSD) analysis. However, laminar CSD analysis requires averaging across reliably evoked responses that target the input layer in cortex, which may be difficult to generate in less well-studied cortical regions. Further, the analysis can be susceptible to noise on individual channels, resulting in errors in assigning laminar boundaries. Here, we have analyzed linear array recordings in multiple cortical areas in both the common marmoset and the rhesus macaque. We describe a pattern of laminar spike–field phase relationships that reliably identifies the transition between input and deep layers in cortical recordings from multiple cortical areas in two different non-human primate species. This measure corresponds well to estimates of the location of the input layer using CSDs, but does not require averaging or specific evoked activity. Laminar identity can be estimated rapidly with as little as a minute of ongoing data and is invariant to many experimental parameters. This method may serve to validate CSD measurements that might otherwise be unreliable, or to estimate laminar boundaries when other methods are not practical.
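The spike–field phase measure underlying this method can be illustrated with synthetic data. The example below is not the authors' pipeline and uses no real laminar recordings: it builds a hypothetical 10 Hz field with trough-locked spikes, extracts the instantaneous phase with an FFT-based Hilbert transform (NumPy only), and summarizes the spike phases with a circular mean.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: suppress negative frequencies to form
    the analytic signal, whose angle is the instantaneous phase."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0
    h[N // 2] = 1.0  # assumes even N
    return np.fft.ifft(X * h)

fs = 1000.0                            # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
lfp = np.sin(2 * np.pi * 10.0 * t)     # 10 Hz "field" on one channel
phase = np.angle(analytic_signal(lfp))

# toy spike train locked to the troughs of the oscillation
spike_idx = np.arange(75, len(t), 100)
spike_phases = phase[spike_idx]

# circular statistics of the spike phases on this channel
mean_vec = np.exp(1j * spike_phases).mean()
locking_strength = np.abs(mean_vec)    # 1 = perfectly phase-locked
preferred_phase = np.angle(mean_vec)   # near +/- pi at the trough
```

In the laminar setting described above, one would compute a preferred phase like this per channel along the array; the method rests on how that phase pattern shifts across the input-to-deep-layer transition.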
The goal of tissue decellularization is to efficiently remove unwanted cellular components, such as DNA and cellular debris, while retaining the complex structural and molecular milieu of the extracellular matrix (ECM). Decellularization protocols to date are centered on customized, tissue-specific, and lab-specific procedures involving consecutive manual steps, which results in variable, protocol-specific ECM material. The resulting differences between decellularized ECMs affect consistency across batches, limit comparisons between results obtained from different laboratories, and could limit the transferability of the material for consistent laboratory or clinical use. The present study is a first proof of concept towards the development of a standardized protocol that can be used to derive multiple ECM biomaterials (powders and hydrogels) via a previously established automated system. The automated decellularization method developed by our group was used due to its short decellularization time (4 hours) and its ability to reduce batch-to-batch variability. The ECM obtained using this first iteration of a unified protocol was able to produce ECM hydrogels from skin, lung, muscle, tendon, cartilage, and laryngeal tissues. All hydrogels formed in this study were cytocompatible and showed gelation and rheological properties consistent with previous ECM hydrogels. The ECMs also showed unique proteomic compositions. The present study represents a first step towards developing standardized protocols that can be used on multiple tissues in a fast, scalable, and reproducible manner.
