
Search results for: Creators/Authors contains "Song, H."


  1. Free, publicly-accessible full text available July 1, 2023
  2. Abstract We present the results of a search for core-collapse supernova neutrinos, using long-term KamLAND data from 2002 March 9 to 2020 April 25. We focus on the electron antineutrinos emitted from supernovae in the energy range of 1.8–111 MeV. A supernova would produce a cluster of neutrino events with a duration of ∼10 s in the KamLAND data. We find no neutrino clusters and set an upper limit on the supernova rate of 0.15 yr⁻¹ at the 90% confidence level. The detectable range, corresponding to a >95% detection probability, is 40–59 kpc for core-collapse supernovae and 65–81 kpc for failed core-collapse supernovae. This paper proposes converting the supernova rate obtained from the neutrino observation into the Galactic star formation rate. Assuming a modified Salpeter-type initial mass function, the upper limit on the Galactic star formation rate is <(17.5–22.7) M⊙ yr⁻¹ at the 90% confidence level.
    Free, publicly-accessible full text available July 1, 2023
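As an aside on the statistics quoted above: with zero observed event clusters, a 90% confidence-level Poisson upper limit on the expected count is −ln(1 − 0.90) ≈ 2.30 events, and dividing by the exposure gives a rate limit. A minimal sketch of that calculation (the 15.3 yr livetime here is an assumed value chosen only so the illustration reproduces the quoted 0.15 yr⁻¹; the paper's actual analysis accounts for detector efficiency and deadtime):

```python
import math

def poisson_upper_limit(n_observed: int, cl: float = 0.90) -> float:
    """Frequentist upper limit on the Poisson mean given n observed events.

    Solves sum_{k=0}^{n} exp(-mu) * mu**k / k! = 1 - cl for mu.
    For n = 0 this reduces to mu = -ln(1 - cl).
    """
    if n_observed == 0:
        return -math.log(1.0 - cl)
    def cdf(mu):  # P(N <= n_observed) for Poisson mean mu
        return sum(math.exp(-mu) * mu**k / math.factorial(k)
                   for k in range(n_observed + 1))
    lo, hi = 0.0, 100.0  # bisection: cdf is decreasing in mu
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Zero clusters observed; livetime is an illustrative assumption.
livetime_yr = 15.3
mu_up = poisson_upper_limit(0, cl=0.90)   # ~2.30 events at 90% CL
rate_limit = mu_up / livetime_yr
print(f"90% CL upper limit: {rate_limit:.2f} / yr")
```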
  3. Abstract We present the results of a time-coincident event search for low-energy electron antineutrinos in the KamLAND detector with gamma-ray bursts (GRBs) from the Gamma-ray Coordinates Network and Fermi Gamma-ray Burst Monitor. Using a variable coincidence time window of ±500 s plus the duration of each GRB, no statistically significant excess above the background is observed. We place the world’s most stringent 90% confidence level upper limit on the electron antineutrino fluence below 17.5 MeV. Assuming a Fermi–Dirac neutrino energy spectrum from the GRB source, we use the available redshift data to constrain the electron antineutrino luminosity and effective temperature.
    Free, publicly-accessible full text available March 1, 2023
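The coincidence search above uses a variable time window of ±500 s plus each burst's duration. That selection can be sketched as a simple predicate; the function name and the exact placement of the window around the trigger are illustrative assumptions, not KamLAND's analysis code:

```python
def in_coincidence_window(event_t: float, grb_trigger_t: float,
                          grb_duration: float, pad: float = 500.0) -> bool:
    """True if an event time (seconds) falls inside the search window
    [trigger - pad, trigger + duration + pad], i.e. +/-500 s plus the
    burst duration as described in the abstract."""
    return grb_trigger_t - pad <= event_t <= grb_trigger_t + grb_duration + pad

# Hypothetical burst at t = 0 s lasting 10 s:
print(in_coincidence_window(-100.0, 0.0, 10.0))  # inside the window
print(in_coincidence_window(600.0, 0.0, 10.0))   # outside the window
```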
  4. A major goal in neuroscience is to understand the relationship between an animal’s behavior and how this is encoded in the brain. Therefore, a typical experiment involves training an animal to perform a task and recording the activity of its neurons – brain cells – while the animal carries out the task. To complement these experimental results, researchers “train” artificial neural networks – simplified mathematical models of the brain that consist of simple neuron-like units – to simulate the same tasks on a computer. Unlike real brains, artificial neural networks provide complete access to the “neural circuits” responsible for a behavior, offering a way to study and manipulate the behavior in the circuit. One open issue about this approach has been the way in which the artificial networks are trained. In a process known as reinforcement learning, animals learn from rewards (such as juice) that they receive when they choose actions that lead to the successful completion of a task. By contrast, the artificial networks are explicitly told the correct action. In addition to differing from how animals learn, this limits the types of behavior that can be studied using artificial neural networks. Recent advances in the field of machine learning that combine reinforcement learning with artificial neural networks have now allowed Song et al. to train artificial networks to perform tasks in a way that mimics the way that animals learn. The networks consisted of two parts: a “decision network” that uses sensory information to select actions that lead to the greatest reward, and a “value network” that predicts how rewarding an action will be. Song et al. found that the resulting artificial “brain activity” closely resembled the activity found in the brains of animals, confirming that this method of training artificial neural networks may be a useful tool for neuroscientists who study the relationship between brains and behavior.
The training method explored by Song et al. represents only one step forward in developing artificial neural networks that resemble the real brain. In particular, neural networks modify connections between units in a vastly different way from the methods used by biological brains to alter the connections between neurons. Future work will be needed to bridge this gap.
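In machine-learning terms, the two-part setup described above — a decision network trained from reward and a value network that predicts reward — is an actor-critic architecture. A minimal sketch on a hypothetical two-choice reward task (the task, the linear "networks", and the learning rates are illustrative assumptions, far simpler than the recurrent networks of Song et al.):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: action 1 pays reward 1 with probability 0.8,
# action 0 with probability 0.2.
reward_probs = np.array([0.2, 0.8])

theta = np.zeros(2)   # "decision network": softmax preferences over actions
v = 0.0               # "value network": scalar prediction of reward
alpha_pi, alpha_v = 0.1, 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(5000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)                # decision network picks an action
    r = float(rng.random() < reward_probs[a])
    delta = r - v                          # reward-prediction error
    grad = -pi.copy(); grad[a] += 1.0      # d log pi(a) / d theta
    theta += alpha_pi * delta * grad       # policy gradient, value as baseline
    v += alpha_v * delta                   # value network tracks mean reward

print(softmax(theta))  # probability mass concentrates on the richer action
```

Training by reward-prediction error rather than by being told the correct action is what distinguishes this from the supervised training criticized in the passage above.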
  5. One means of supporting design-by-analogy (DbA) in practice involves giving designers efficient access to source analogies as inspiration to solve problems. The patent database has been used for many DbA support efforts, as it is a preexisting repository of catalogued technology. Latent Semantic Analysis (LSA) has been shown to be an effective computational text processing method for extracting meaningful similarities between patents for useful functional exploration during DbA. However, this has only been shown to be useful at a small scale (100 patents). Considering the vastness of the patent database and realistic exploration at a large scale, it is important to consider how these computational analyses change with orders of magnitude more data. We present an analysis of 1,000 random mechanical patents, comparing the ability of LSA and Latent Dirichlet Allocation (LDA) to categorize patents into meaningful groups. Resulting implications for large(r)-scale data mining of patents for DbA support are detailed.
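As a sketch of the LSA step described above: truncated SVD of a term-document matrix yields low-dimensional document vectors whose cosine similarity groups functionally related texts. The toy "patent" snippets and the choice of two latent dimensions are illustrative assumptions; the study applied this to real patent text at far larger scale (and compared against LDA, not shown here):

```python
import numpy as np

# Toy stand-ins for patent descriptions; made up for illustration.
docs = [
    "gear transmits rotation torque",
    "gear shaft transmits torque",
    "valve regulates fluid flow",
    "pump moves fluid flow",
]
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# LSA: truncated SVD of the term-document matrix, then compare
# documents by cosine similarity in the reduced latent space.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                                # number of latent dimensions (assumed)
latent = U[:, :k] * S[:k]            # document vectors in k dimensions

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(latent[0], latent[1]))  # two gear patents: high similarity
print(cosine(latent[0], latent[2]))  # gear vs. valve: low similarity
```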