Search for: All records

Creators/Authors contains: "Yin, P."


  1. Physical computing toolkits for children expose young minds to the concepts of computing and electronics within a target activity. To this end, these kits usually provide a custom Visual Programming Language (VPL) environment that extends beyond programming alone, often also incorporating representations of the electronics in the interface. These representations function as a scaffold that lets the child focus on programming instead of having to handle both the programming and the details of the electronics at the same time. This paper presents a review of existing physical computing toolkits, looking at the What, How, and Where of electronics representations in their VPL interfaces. We then discuss potential research directions for the design of VPL interfaces for physical computing toolkits for children.
  2. We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of 2 by minimizing the Euclidean distance between full-precision weights and quantized weights during back-propagation (weight learning). We characterize the combinatorial nature of the low bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of N weights can be done by an exact formula in O(N log N) complexity. When the bit-width is 3 or above, we further propose a semi-analytical thresholding scheme with a single free parameter for quantization that is computationally inexpensive. The free parameter is determined by network retraining and object detection tests. LBW-Net has several desirable advantages over full-precision CNNs, including considerable memory savings, energy efficiency, and faster deployment. Our experiments on the PASCAL VOC dataset show that, compared with its 32-bit floating-point counterpart, the 6-bit LBW-Net is nearly lossless in object detection tasks and can even do better in real-world visual scenes, while empirically enjoying more than 4× faster deployment.
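     A rough sketch of the quantization step described above: each weight is projected onto zero or a signed power of two in the squared-Euclidean sense. The NumPy snippet below is an illustrative nearest-level projection only; it is not the paper's exact O(N log N) ternary formula or its semi-analytical thresholding scheme, and the per-layer scale and exponent range are assumptions.

     import numpy as np

     def quantize_powers_of_two(w, bit_width=3, scale=None):
         # Nearest-level projection onto {0, +/- scale * 2^-k} (illustrative only).
         w = np.asarray(w, dtype=np.float64)
         if scale is None:
             scale = float(np.abs(w).max()) or 1.0   # hypothetical per-layer scale
         n_mags = 2 ** (bit_width - 1) - 1           # nonzero magnitudes per sign
         mags = scale * 2.0 ** (-np.arange(n_mags))
         levels = np.concatenate(([0.0], mags, -mags))
         # pick, per weight, the level with the smallest squared distance
         idx = np.argmin((w[..., None] - levels) ** 2, axis=-1)
         return levels[idx]

     print(quantize_powers_of_two([0.9, -0.4, 0.05]))   # -> [ 0.9  -0.45  0.  ]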
  3. Training activation-quantized neural networks involves minimizing a piecewise constant function whose gradient vanishes almost everywhere, which is problematic for the standard back-propagation or chain rule. An empirical way around this issue is to use a straight-through estimator (STE) (Bengio et al., 2013) in the backward pass only, so that the “gradient” through the modified chain rule becomes non-trivial. Since this unusual “gradient” is certainly not the gradient of the loss function, the following question arises: why does searching in its negative direction minimize the training loss? In this paper, we provide a theoretical justification for the concept of STE by answering this question. We consider the problem of learning a two-linear-layer network with binarized ReLU activation and Gaussian input data. We shall refer to the unusual “gradient” given by the STE-modified chain rule as the coarse gradient. The choice of STE is not unique. We prove that if the STE is properly chosen, the expected coarse gradient correlates positively with the population gradient (which is not available for training), and its negation is a descent direction for minimizing the population loss. We further show that the associated coarse gradient descent algorithm converges to a critical point of the population loss minimization problem. Moreover, we show that a poor choice of STE leads to instability of the training algorithm near certain local minima, which is verified with CIFAR-10 experiments.
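     A minimal PyTorch sketch of the straight-through estimator discussed above: the forward pass applies the binarized (piecewise constant) activation, while the backward pass substitutes a surrogate derivative. The clipped-ReLU surrogate used here is one common STE choice; the example is generic and does not reproduce the paper's two-linear-layer analysis.

     import torch

     class BinarizedReLU(torch.autograd.Function):
         # Forward: hard threshold, whose true gradient vanishes almost everywhere.
         @staticmethod
         def forward(ctx, x):
             ctx.save_for_backward(x)
             return (x > 0).to(x.dtype)

         # Backward: straight-through estimator with a clipped-ReLU surrogate derivative.
         @staticmethod
         def backward(ctx, grad_output):
             (x,) = ctx.saved_tensors
             surrogate = ((x > 0) & (x < 1)).to(grad_output.dtype)
             return grad_output * surrogate

     x = torch.randn(4, requires_grad=True)
     BinarizedReLU.apply(x).sum().backward()
     print(x.grad)   # the coarse gradient with respect to x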
  4. Quantized deep neural networks (QDNNs) are attractive due to their much lower memory storage and faster inference speed than their regular full-precision counterparts. To maintain the same performance level, especially at low bit-widths, QDNNs must be retrained. Their training involves piecewise constant activation functions and discrete weights; hence, mathematical challenges arise. We introduce the notion of the coarse gradient and propose the blended coarse gradient descent (BCGD) algorithm for training fully quantized neural networks. The coarse gradient is generally not the gradient of any function but an artificial ascent direction. The BCGD weight update applies a coarse gradient correction to a weighted average of the full-precision weights and their quantization (the so-called blending), which yields sufficient descent in the objective value and thus accelerates the training. Our experiments demonstrate that this simple blending technique is very effective for quantization at extremely low bit-widths such as binarization. In full quantization of ResNet-18 for the ImageNet classification task, BCGD gives 64.36% top-1 accuracy with binary weights across all layers and 4-bit adaptive activation. If the weights in the first and last layers are kept in full precision, this number increases to 65.46%. As theoretical justification, we present a convergence analysis of coarse gradient descent for a two-linear-layer neural network model with Gaussian input data and prove that the expected coarse gradient correlates positively with the underlying true gradient.
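     The blended update described above can be sketched loosely as follows. The parameter names, the sign-based quantizer, and the exact blending form are assumptions made for illustration; the coarse gradient is assumed to come from a backward pass through the network evaluated at the quantized weights.

     import numpy as np

     def binarize(w):
         # Illustrative quantizer: sign binarization with a per-tensor scale.
         return np.sign(w) * np.mean(np.abs(w))

     def bcgd_step(w_float, coarse_grad, lr=0.01, rho=1e-4, quantize=binarize):
         # Blend the float weights with their quantization, then apply the
         # coarse-gradient correction (evaluated at the quantized weights).
         return (1.0 - rho) * w_float + rho * quantize(w_float) - lr * coarse_grad

     w = np.array([0.7, -0.2, 0.1])
     g = np.array([0.3, -0.1, 0.05])   # stand-in for a coarse gradient at binarize(w)
     w = bcgd_step(w, g)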
  5. This report presents a comprehensive collection of searches for new physics performed by the ATLAS Collaboration during the Run 2 period of data taking at the Large Hadron Collider, from 2015 to 2018, corresponding to about 140 fb⁻¹ of √s = 13 TeV proton-proton collision data. These searches cover a variety of beyond-the-Standard-Model topics such as dark matter candidates, new vector bosons, hidden-sector particles, leptoquarks, or vector-like quarks, among others. Searches for supersymmetric particles or extended Higgs sectors are explicitly excluded, as these are the subject of separate reports by the Collaboration. For each topic, the most relevant searches are described, focusing on their importance and sensitivity and, when appropriate, highlighting the experimental techniques employed. In addition to the description of each analysis, complementary searches are compared, and the overall sensitivity of the ATLAS experiment to each type of new physics is discussed. Summary plots and statistical combinations of multiple searches are included whenever possible.
    Free, publicly-accessible full text available April 22, 2026
  6. We propose BinaryRelax, a simple two-phase algorithm for training deep neural networks with quantized weights. The set constraint that characterizes the quantization of the weights is not imposed until the late stage of training; instead, a sequence of pseudo-quantized weights is maintained. Specifically, we relax the hard constraint into a continuous regularizer via the Moreau envelope, which turns out to be the squared Euclidean distance to the set of quantized weights. The pseudo-quantized weights are obtained by linearly interpolating between the float weights and their quantizations. A continuation strategy is adopted to push the weights towards the quantized state by gradually increasing the regularization parameter. In the second phase, an exact quantization scheme with a small learning rate is invoked to guarantee fully quantized weights. We test BinaryRelax on the benchmark CIFAR and ImageNet color image datasets to demonstrate the superiority of the relaxed quantization approach and the improved accuracy over state-of-the-art training methods. Finally, we prove the convergence of BinaryRelax under an approximate orthogonality condition.
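     A small sketch of the phase-one relaxation described above. The interpolation weight lam/(1 + lam) is one consistent reading of the Moreau-envelope (squared-distance) regularizer with parameter lam; the quantizer and the continuation schedule below are illustrative assumptions, not the paper's exact settings.

     import numpy as np

     def binarize(w):
         # Illustrative quantizer serving as the interpolation target.
         return np.sign(w) * np.mean(np.abs(w))

     def pseudo_quantized(w_float, lam, quantize=binarize):
         # Linear interpolation between the float weights and their quantization;
         # as lam grows, the relaxed weights are pushed toward the quantized set.
         t = lam / (1.0 + lam)
         return (1.0 - t) * w_float + t * quantize(w_float)

     w = np.array([0.8, -0.3, 0.05])
     for lam in (0.1, 1.0, 10.0):      # continuation: gradually increase lam
         print(pseudo_quantized(w, lam))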
  7. The ATLAS experiment has developed extensive software and distributed computing systems for Run 3 of the LHC. These systems are described in detail, including software infrastructure and workflows, distributed data and workload management, database infrastructure, and validation. The use of these systems to prepare the data for physics analysis and to assess its quality is described, along with the software tools used for data analysis itself. An outlook for the development of these projects towards Run 4 is also provided.
    Free, publicly-accessible full text available March 6, 2026
  8. A search is performed for dark matter particles produced in association with a resonantly produced pair of b-quarks with 30 < m(bb) < 150 GeV, using 140 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 13 TeV recorded by the ATLAS detector at the LHC. This signature is expected in extensions of the standard model predicting the production of dark matter particles, in particular those containing a dark Higgs boson s that decays into bb̄. The highly boosted s → bb̄ topology is reconstructed using jet reclustering and a new identification algorithm. This search places stringent constraints across regions of the dark Higgs model parameter space that satisfy the observed relic density, excluding dark Higgs bosons with masses between 30 and 150 GeV in benchmark scenarios with Z′ mediator masses up to 4.8 TeV at 95% confidence level.
    Free, publicly-accessible full text available March 1, 2026
  9. Free, publicly-accessible full text available February 1, 2026