

Search for: All records

Creators/Authors contains: "Szalay, Alexander S"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract: Massively multiplexed spectrographs will soon gather large statistical samples of stellar spectra. The accurate estimation of uncertainties on derived parameters, such as the line-of-sight velocity v_los, especially for spectra with low signal-to-noise ratios (S/Ns), is paramount. We generated an ensemble of simulated optical spectra of stars as if they were observed with low- and medium-resolution fiber-fed instruments on an 8 m class telescope, similar to the Subaru Prime Focus Spectrograph, and determined v_los by fitting stellar templates to the simulated spectra. We compared the empirical errors of the derived parameters, calculated from an ensemble of simulations, to the asymptotic errors determined from the Fisher matrix, as well as from Monte Carlo sampling of the posterior probability. We confirm that the uncertainty of v_los scales with the inverse square root of the S/N, but also show how this scaling breaks down at low S/N, and we analyze the error and bias caused by template mismatch. We outline a computationally optimized algorithm to fit multiexposure data and provide a mathematical model of stellar spectrum fitting that maximizes the so-called significance, which allows for calculating the error from the Fisher matrix analytically. We also introduce the effective line count and provide a scaling relation to estimate the errors of v_los measurements based on stellar type. Our analysis covers a range of stellar types with parameters typical of the Galactic outer disk and halo, together with analogs of stars in M31 and in satellite dwarf spheroidal galaxies around the Milky Way.
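The Fisher-matrix error estimate described in this abstract can be sketched numerically. Below is a minimal illustration, assuming a toy single-Gaussian-line template and white Gaussian noise; the line parameters, wavelength grid, and noise model are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def template(wave, v, center=6563.0, depth=0.5, width=1.0):
    """Toy spectrum: unit continuum minus one Gaussian absorption line,
    Doppler-shifted by velocity v (km/s). All parameters are assumptions."""
    shifted = center * (1.0 + v / C)
    return 1.0 - depth * np.exp(-0.5 * ((wave - shifted) / width) ** 2)

def sigma_v_fisher(wave, v, snr):
    """Asymptotic (Cramer-Rao) velocity error from the scalar Fisher
    information I(v) = sum_i (df_i/dv)^2 / sigma_i^2, with per-pixel
    noise sigma_i = continuum / SNR."""
    dv = 0.01  # km/s step for the numerical derivative
    dfdv = (template(wave, v + dv) - template(wave, v - dv)) / (2.0 * dv)
    noise = 1.0 / snr
    fisher = np.sum(dfdv ** 2) / noise ** 2
    return 1.0 / np.sqrt(fisher)

wave = np.linspace(6550.0, 6576.0, 2000)
s10 = sigma_v_fisher(wave, 0.0, snr=10.0)
s20 = sigma_v_fisher(wave, 0.0, snr=20.0)
print(s10, s20)  # the velocity error shrinks as S/N grows
```

In a real pipeline the Fisher matrix would span all fitted parameters (velocity, stellar labels, flux calibration) and all pixels of all exposures; this toy keeps only the velocity dimension to show the mechanics.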
  2. Two common definitions of the spatially local rate of kinetic energy cascade at some scale $$\ell$$ in turbulent flows are (i) the cubic velocity difference term appearing in the 'scale-integrated local Kolmogorov–Hill' equation (structure-function approach), and (ii) the subfilter-scale energy flux term in the transport equation for subgrid-scale kinetic energy (filtering approach). We perform a comparative study of both quantities based on direct numerical simulation data of isotropic turbulence at Taylor-scale Reynolds number 1250. While past observations of negative subfilter-scale energy flux (backscatter) have led to debates regarding the interpretation and relevance of such observations, we argue that the interpretation of the local structure-function-based cascade rate definition is unambiguous, since it arises from a divergence term in scale space. Conditional averaging is used to explore the relationship between the local cascade rate and the local filtered viscous dissipation rate, as well as filtered velocity gradient tensor properties such as its invariants. We find statistically robust evidence of inverse cascade when the large-scale rotation rate is strong and the large-scale strain rate is weak. Even stronger net inverse cascading is observed in the 'vortex compression' quadrant $$R>0$$, $$Q>0$$, where $$R$$ and $$Q$$ are velocity gradient invariants. Qualitatively similar but quantitatively much weaker trends are observed for the conditionally averaged subfilter-scale energy flux. Flow visualizations show consistent trends: spatially, the inverse cascade events appear to be located within large-scale vortices, specifically in subregions where $$R$$ is large.
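The subfilter-scale energy flux of the filtering approach described in this abstract can be sketched as follows. This is a minimal 2-D illustration on a synthetic smooth periodic field with a sharp spectral low-pass filter; the grid size, filter scale, and random field are assumptions, while the paper works with 3-D isotropic-turbulence DNS data.

```python
import numpy as np

N = 64
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers on a unit-spacing grid
KX, KY = np.meshgrid(k, k, indexing="ij")

def lowpass(field, kcut=8):
    """Sharp spectral low-pass filter at cutoff wavenumber kcut."""
    fh = np.fft.fft2(field)
    fh[np.sqrt(KX**2 + KY**2) > kcut] = 0.0
    return np.real(np.fft.ifft2(fh))

def ddx(field, axis):
    """Spectral derivative of a periodic field along one axis."""
    K = KX if axis == 0 else KY
    return np.real(np.fft.ifft2(2j * np.pi / N * K * np.fft.fft2(field)))

rng = np.random.default_rng(0)
# Synthetic smooth random velocity field (a stand-in, not real turbulence)
u = lowpass(rng.standard_normal((N, N)), kcut=16)
v = lowpass(rng.standard_normal((N, N)), kcut=16)

uf, vf = lowpass(u), lowpass(v)
# Subfilter-scale stress tau_ij = filtered(u_i u_j) - filtered(u_i) filtered(u_j)
t11 = lowpass(u * u) - uf * uf
t12 = lowpass(u * v) - uf * vf
t22 = lowpass(v * v) - vf * vf
# Filtered strain-rate tensor S_ij = 0.5 (d_i u_j + d_j u_i)
s11 = ddx(uf, 0)
s22 = ddx(vf, 1)
s12 = 0.5 * (ddx(uf, 1) + ddx(vf, 0))
# Local subfilter-scale energy flux; locally negative values are 'backscatter'
pi = -(t11 * s11 + 2.0 * t12 * s12 + t22 * s22)
print(pi.shape)
```

The structure-function counterpart would instead integrate cubic velocity differences over a sphere of radius $$\ell$$ in separation space; the flux field above is the filtering-approach quantity whose pointwise sign the paper discusses.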
  3. The brain is arguably the most powerful computation system known. It is extremely efficient in processing large amounts of information and can discern signals from noise, adapt, and filter faulty information, all while running on only 20 watts of power. The human brain's processing efficiency, progressive learning, and plasticity are unmatched by any computer system. Recent advances in stem cell technology have elevated the field of cell culture to higher levels of complexity, such as the development of three-dimensional (3D) brain organoids that recapitulate human brain functionality better than traditional monolayer cell systems. Organoid Intelligence (OI) aims to harness the innate biological capabilities of brain organoids for biocomputing and synthetic intelligence by interfacing them with computer technology. With the latest strides in stem cell technology, bioengineering, and machine learning, we can explore the ability of brain organoids to compute and store given information (input), execute a task (output), and study how this affects the structural and functional connections in the organoids themselves. Furthermore, understanding how learning generates and changes patterns of connectivity in organoids can shed light on the early stages of cognition in the human brain. Investigating and understanding these concepts is an enormous, multidisciplinary endeavor that necessitates the engagement of both the scientific community and the public. Thus, on February 22–24, 2022, Johns Hopkins University held the first Organoid Intelligence Workshop to form an OI community and to lay out the groundwork for the establishment of OI as a new scientific discipline. The potential of OI to revolutionize computing, neurological research, and drug development was discussed, along with a vision and roadmap for its development over the coming decade.