

Search for: All records

Creators/Authors contains: "Perelstein, Maxim"


  1. Abstract: We consider theories in which a dark sector is described by a Conformal Field Theory (CFT) over a broad range of energy scales. A coupling of the dark sector to the Standard Model breaks conformal invariance. While weak at high energies, the breaking grows in the infrared, and at a certain energy scale the theory enters a confined (hadronic) phase. One of the hadronic excitations can play the role of dark matter. We study a “Conformal Freeze-In” (COFI) cosmological scenario, in which the dark sector is populated through its interactions with the SM at temperatures at which it is conformal. In this scenario, the dark matter relic density is determined by the CFT data, such as the dimension of the CFT operator coupled to the Standard Model. We show that this simple and highly predictive model of dark matter is phenomenologically viable. The observed relic density is reproduced for a variety of SM operators (“portals”) coupled to the CFT, and the resulting models are consistent with observational constraints. The mass of the COFI dark matter candidate is predicted to be in the keV-MeV range. (A schematic freeze-in yield estimate is sketched after this list.)
  2. The algorithm for Monte Carlo simulation of parton-level events based on an Artificial Neural Network (ANN) proposed in Ref.~ is used to perform a simulation of the H → 4ℓ decay. Improvements in the training algorithm have been implemented to avoid numerical instabilities. The integrated decay width evaluated by the ANN is within 0.7% of the true value, and an unweighting efficiency of 26% is reached. While the ANN is not automatically bijective between input and output spaces, which can lead to issues with simulation quality, we argue that the training procedure naturally prefers bijective maps, and demonstrate that the trained ANN is bijective to a very good approximation. (An illustrative unweighting sketch follows after this list.)
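
For record 1, the equations below give a generic, textbook-style freeze-in yield estimate, shown only to make concrete how a relic density can be "determined by the CFT data". This is a schematic sketch, not the paper's COFI computation: the production rate density R(T) and its temperature scaling (which in a COFI-type setup would be fixed by the operator dimensions) are placeholders introduced here for illustration.

```latex
% Schematic freeze-in yield in a radiation-dominated era (generic sketch).
% R(T)       : dark-sector production rate per unit volume (model dependent;
%              in a COFI-type setup its T-scaling is set by operator dimensions).
% s(T), H(T) : entropy density and Hubble rate of the SM bath.
\begin{align}
  \frac{dY_{\rm DM}}{dt} &= \frac{R(T)}{s(T)},
  \qquad
  \frac{dT}{dt} \simeq -H(T)\,T ,
  \\[4pt]
  \Rightarrow \quad
  Y_{\rm DM} &\simeq \int_{T_{\rm low}}^{T_{\rm high}}
      \frac{R(T)}{s(T)\,H(T)\,T}\; dT ,
  \qquad
  \Omega_{\rm DM} h^2 \;\propto\; m_{\rm DM}\, Y_{\rm DM} .
\end{align}
```

The point of the sketch is that once the scaling of R(T) with temperature is fixed (by the dimensions of the operators in the portal coupling), the integral, and hence the relic abundance, follows with essentially no further freedom.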
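
For record 2, the short Python sketch below illustrates what the quoted "unweighting efficiency" measures for a weighted event sample: events are kept with probability w/w_max, and the expected surviving fraction is ⟨w⟩/w_max. This is a generic accept-reject illustration with assumed names (`unweight`, the toy weight distributions), not the paper's ANN generator or its training code.

```python
import numpy as np

def unweight(weights, rng=None):
    """Accept-reject unweighting of a weighted Monte Carlo event sample.

    Each event i is kept with probability w_i / w_max, turning a weighted
    sample into an (approximately) unit-weight one.  The expected fraction
    of surviving events, <w> / w_max, is the unweighting efficiency.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    w_max = weights.max()
    keep = rng.random(weights.size) < weights / w_max
    efficiency = weights.mean() / w_max
    return keep, efficiency

# Toy usage: a broad weight distribution gives a low efficiency, while a
# nearly flat one (what a well-trained generator aims for) gives a high one.
rng = np.random.default_rng(0)
broad = rng.exponential(1.0, size=100_000)        # poorly adapted generator
flat = np.clip(1.0 + 0.1 * rng.standard_normal(100_000), 0.0, None)
for name, w in [("broad", broad), ("flat", flat)]:
    kept, eff = unweight(w, rng)
    print(f"{name}: efficiency = {eff:.2%}, kept {kept.sum()} events")
```

A flatter weight distribution means fewer generated events are thrown away, which is why a 26% unweighting efficiency is a meaningful figure of merit for a neural-network event generator.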