
Search for: All records

Creators/Authors contains: "Gao, J."


  1. Recent efforts to obtain high data rates in wireless systems have focused on what can be achieved in systems that have nonlinear or coarsely quantized transceiver architectures. Estimating the channel in such a system is challenging because the nonlinearities distort the channel estimation process. It is therefore of interest to determine how much training is needed to estimate the channel well enough that the estimate can be used during data communication. We provide a way to determine how much training is needed by deriving a lower bound on the achievable rate in a training-based scheme that can be computed and analyzed even when the number of antennas is very large. This lower bound can be tight, especially at high SNR. One conclusion is that the optimal number of training symbols may paradoxically be smaller than the number of transmitters for systems with coarsely quantized transceivers. We show how the training time can depend strongly on the number of receivers, and give an example where doubling the number of receivers reduces the training time by about 37 percent. (See Sketch 1 after the list for an illustrative training-length trade-off.)
  2. Knowledge tracing is an essential and challenging task in intelligent tutoring systems; its goal is to estimate students’ knowledge state from their responses to questions. Although many models for the knowledge tracing task have been developed, most depend on either concepts or items as input and ignore the hierarchical structure of items, which provides valuable information for predicting student learning outcomes. In this paper, we propose a novel deep hierarchical knowledge tracing (DHKT) model that exploits the hierarchical structure of items. In the proposed DHKT model, the hierarchical relations between concepts and items are modeled by a hinge loss on the inner product between the learned concept embeddings and item embeddings. The learned embeddings are then fed into a neural network that models the learning process of students and is used to make predictions. The prediction loss and the hinge loss are minimized simultaneously during training. (See Sketch 2 after the list for an illustrative hinge-loss term.)
  3. The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to computing resources. During LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method to account for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one by one, the inelastic interactions are presampled, independently of the hard scatter, and stored as combined events. Consequently, for each hard-scatter interaction, only one such presampled event needs to be added as part of the simulation chain. For the Run 2 simulation chain, with an average of 35 interactions per bunch crossing, this new method reduces MC production CPU needs by around 20% while reproducing the properties of the reconstructed quantities relevant for physics analyses with good accuracy. (See Sketch 3 after the list for a toy contrast of the two overlay strategies.)
    Free, publicly-accessible full text available December 1, 2023
  4. The ATLAS experiment at the Large Hadron Collider has a broad physics programme, ranging from precision measurements to direct searches for new particles and new interactions, which requires ever larger and ever more accurate datasets of simulated Monte Carlo events. Detector simulation with Geant4 is accurate but requires significant CPU resources. Over the past decade, ATLAS has developed and utilized tools that replace the most CPU-intensive component of the simulation, the calorimeter shower simulation, with faster methods. Here, AtlFast3, the next generation of high-accuracy fast simulation in ATLAS, is introduced. AtlFast3 combines parameterized approaches with machine-learning techniques and is deployed to meet the current and future computing challenges and simulation needs of the ATLAS experiment. With highly accurate performance and significantly improved modelling of substructure within jets, AtlFast3 can simulate large numbers of events for a wide range of physics processes. (See Sketch 4 after the list for a toy illustration of combining parameterized and ML-based shower models.)
    Free, publicly-accessible full text available December 1, 2023
  5. Free, publicly-accessible full text available March 1, 2023
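
Sketch 1 (entry 1). The result above turns on a trade-off: longer training improves the channel estimate but leaves fewer symbols in the coherence interval for data. The snippet below is a minimal, generic sketch of how a training-based rate lower bound of the form R(T) = (1 - T/T_c) * log2(1 + rho_eff(T)) yields an optimal training length T. It is not the paper's bound; the coherence length T_c, the SNR rho, the transmitter count K, the effective-SNR model, and the 2/pi quantization-loss factor are all assumptions chosen only for illustration.

```python
# Hypothetical illustration (not the paper's derivation): a generic
# training-based lower bound R(T) = (1 - T/T_c) * log2(1 + rho_eff(T)),
# where the effective SNR rho_eff improves with the training length T.
# Sweeping T shows how an optimal amount of training emerges.
import numpy as np

T_c = 200   # coherence interval in symbols (assumed)
rho = 10.0  # nominal SNR, linear scale (assumed)
K = 8       # number of transmitters (assumed)

def rate_lower_bound(T):
    # Effective SNR degraded by channel-estimation error that shrinks as T
    # grows; the 2/pi factor mimics the low-SNR loss of one-bit quantization
    # and is included purely for illustration.
    est_gain = T / (T + K / rho)
    rho_eff = (2.0 / np.pi) * rho * est_gain
    return (1.0 - T / T_c) * np.log2(1.0 + rho_eff)

Ts = np.arange(1, T_c)
rates = np.array([rate_lower_bound(T) for T in Ts])
T_opt = Ts[np.argmax(rates)]
print(f"illustrative optimal training length: {T_opt} symbols, "
      f"rate bound {rates.max():.2f} bits per channel use")
```

Plotting rates against Ts shows the characteristic peak; the paper's contribution is a bound that remains computable and analyzable even for very large antenna counts and coarsely quantized front ends.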
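
Sketch 2 (entry 2). One plausible reading of the DHKT hierarchy term is a margin-based hinge loss that pushes the inner product between an item embedding and its parent concept embedding above the inner product with an unrelated concept. The code below is a sketch under that assumption, not the authors' implementation; the margin, the embedding sizes, and the negative-sampling scheme are invented for illustration.

```python
# Minimal sketch (not the DHKT authors' code) of a hinge loss tying item
# embeddings to their parent concept embeddings through inner products.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, n_concepts, dim, margin = 1000, 50, 64, 1.0  # assumed sizes
item_emb = nn.Embedding(n_items, dim)
concept_emb = nn.Embedding(n_concepts, dim)

def hierarchy_hinge_loss(item_ids, pos_concept_ids, neg_concept_ids):
    """Encourage <item, its concept> to exceed <item, another concept> by a margin."""
    e_item = item_emb(item_ids)
    e_pos = concept_emb(pos_concept_ids)
    e_neg = concept_emb(neg_concept_ids)
    pos_score = (e_item * e_pos).sum(dim=-1)
    neg_score = (e_item * e_neg).sum(dim=-1)
    return F.relu(margin - pos_score + neg_score).mean()

# In training, this term would be added to the knowledge-tracing prediction
# loss (e.g. binary cross-entropy on response correctness) and both would be
# minimized jointly, as the abstract states.
item_ids = torch.randint(0, n_items, (32,))
pos_ids = torch.randint(0, n_concepts, (32,))
neg_ids = torch.randint(0, n_concepts, (32,))
print(float(hierarchy_hinge_loss(item_ids, pos_ids, neg_ids)))
```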
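
Sketch 3 (entry 3). To make the presampling idea concrete, the toy code below treats events as plain lists of energy deposits and "simulation" as drawing random numbers; simulate_inelastic, combine, and the Gaussian stand-in for a Poisson draw are hypothetical placeholders, not ATLAS software. The point is structural: the old scheme simulates and overlays roughly 35 inelastic interactions per hard scatter, while the new scheme attaches a single presampled combined event.

```python
# Toy contrast of the two pileup-overlay strategies (illustration only).
import random

MU = 35  # average number of inelastic interactions per bunch crossing (Run 2)

def simulate_inelastic():
    """Stand-in for the expensive simulation of one inelastic interaction."""
    return [random.random() for _ in range(10)]

def combine(*events):
    """Stand-in for merging energy deposits from several interactions."""
    merged = []
    for ev in events:
        merged.extend(ev)
    return merged

def overlay_on_the_fly(hard_scatter):
    """Old scheme: draw and add ~MU inelastic interactions per hard scatter."""
    n = max(0, int(random.gauss(MU, MU ** 0.5)))  # crude stand-in for a Poisson draw
    pileup = [simulate_inelastic() for _ in range(n)]
    return combine(hard_scatter, *pileup)

# New scheme: presample combined pileup events once, independently of any
# hard scatter, then attach a single presampled event per hard-scatter event.
presampled = [overlay_on_the_fly([]) for _ in range(1000)]

def overlay_presampled(hard_scatter):
    return combine(hard_scatter, random.choice(presampled))
```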
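
Sketch 4 (entry 4). The abstract describes AtlFast3 as combining parameterized approaches with machine-learning techniques for the calorimeter shower simulation. The toy dispatcher below only illustrates that general idea; the routing rule, particle category, energy threshold, and both "models" are invented placeholders and do not reflect the actual AtlFast3 configuration.

```python
# Toy illustration (not AtlFast3): route a particle either to a parameterized
# shower model or to an ML-based one, standing in for a fast simulation that
# combines both techniques.
import random

def parameterized_shower(energy):
    """Placeholder for a parameterization of the average calorimeter response."""
    return energy * random.gauss(0.9, 0.05)

def ml_shower(energy):
    """Placeholder for a trained generative model of the shower response."""
    return energy * random.gauss(0.9, 0.05)

def fast_sim_energy(particle, energy_gev):
    # Hypothetical routing rule: use the ML model where shower substructure
    # matters most, and the parameterization elsewhere.
    if particle == "pion" and energy_gev > 20.0:
        return ml_shower(energy_gev)
    return parameterized_shower(energy_gev)

print(fast_sim_energy("pion", 50.0))
```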