Search for: All records
Creators/Authors contains: "Hong, B"

  1. Sparse support vector machine (SVM) is a popular classification technique that can simultaneously learn a small set of the most interpretable features and identify the support vectors. It has achieved great success in many real-world applications. However, for large-scale problems involving a huge number of samples and extremely high-dimensional features, solving sparse SVMs remains challenging. By noting that sparse SVMs induce sparsities in both feature and sample spaces, we propose a novel approach, based on accurate estimations of the primal and dual optima of sparse SVMs, to simultaneously identify the features and samples that are guaranteed to be irrelevant to the outputs. We can thus remove the identified inactive samples and features from the training phase, leading to substantial savings in both memory usage and computational cost without sacrificing accuracy. To the best of our knowledge, the proposed method is the first static feature and sample reduction method for sparse SVMs. Experiments on both synthetic and real datasets (e.g., the kddb dataset with about 20 million samples and 30 million features) demonstrate that our approach significantly outperforms state-of-the-art methods and that the speedup it gains can reach orders of magnitude.
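     The screening idea can be illustrated with a toy sketch. The example below is hypothetical and is not the paper's actual rules: it assumes some procedure has already produced a ball guaranteed to contain the dual optimum and a ball guaranteed to contain the primal optimum (in the paper these come from the accurate optimum estimations mentioned above). The regularization value lam, the ball centers theta_c and w_c, and the radii r and rho are invented placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: n samples, d features, labels in {-1, +1}.
        n, d = 200, 500
        X = rng.standard_normal((n, d))
        y = np.sign(rng.standard_normal(n))

        lam = 1.0  # l1 regularization strength (placeholder value)

        # Placeholder balls B(theta_c, r) and B(w_c, rho) assumed to contain
        # the dual and primal optima, respectively.
        theta_c = rng.uniform(0.0, 1.0, size=n) * 0.1
        r = 0.01
        w_c = rng.standard_normal(d) * 0.1
        rho = 0.01

        # Feature screening (a Lasso-style rule, stated for illustration):
        # feature j is provably inactive if the maximum of |x_j^T (y * theta)|
        # over the dual ball is below lam; that maximum equals
        # |x_j^T (y * theta_c)| + r * ||x_j||_2.
        Z = X * y[:, None]
        upper = np.abs(Z.T @ theta_c) + r * np.linalg.norm(Z, axis=0)
        keep_features = upper >= lam

        # Sample screening: a sample whose margin provably exceeds 1 for every
        # w in the primal ball is a guaranteed non-support vector.
        margin_lower = y * (X @ w_c) - rho * np.linalg.norm(X, axis=1)
        keep_samples = margin_lower <= 1.0

        # Train on the reduced problem only.
        X_small = X[np.ix_(keep_samples, keep_features)]
        print(f"kept {keep_samples.sum()}/{n} samples, {keep_features.sum()}/{d} features")

     Anything ruled out this way can be deleted before training without changing the solution, which is the source of the memory and computational savings described in the abstract.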
  2. Abstract

    A description is presented of the algorithms used to reconstruct energy deposited in the CMS hadron calorimeter during Run 2 (2015–2018) of the LHC. During Run 2, the characteristic bunch-crossing spacing for proton-proton collisions was 25 ns, which resulted in overlapping signals from adjacent crossings. The energy corresponding to a particular bunch crossing of interest is estimated using the known pulse shapes of energy depositions in the calorimeter, which are measured as functions of both energy and time. A variety of algorithms were developed to mitigate these effects on local energy reconstruction, and their performance is compared.

    Free, publicly-accessible full text available November 1, 2024
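     The overlap-mitigation idea can be sketched as a template fit: model the readout window as a non-negative combination of known pulse shapes, one per bunch crossing, and take the coefficient of the in-time template as the reconstructed energy. The sketch below is illustrative only; the pulse shape, window length, bunch-crossing offsets, and energies are invented, and this is not the CMS algorithms themselves, which use shapes measured as functions of energy and time.

        import numpy as np
        from scipy.optimize import nnls

        # Digitized time samples are spaced 25 ns apart (one per bunch
        # crossing), so each readout window mixes pulses from several crossings.
        n_samples = 8

        def pulse_template(shift):
            """Hypothetical normalized pulse shape sampled every 25 ns,
            shifted by `shift` samples. Real shapes are measured in data."""
            t = np.arange(n_samples) - shift
            shape = np.where(t >= 0, t * np.exp(-t / 1.5), 0.0)
            total = shape.sum()
            return shape / total if total > 0 else shape

        # Templates for the in-time crossing and a few neighbors (illustrative).
        shifts = [-1, 0, 1, 2]
        A = np.column_stack([pulse_template(s + 3) for s in shifts])

        # Simulated readout: an in-time pulse of 40 (arbitrary energy units)
        # overlapped by a 15-unit pulse from the previous crossing, plus noise.
        true = {0: 40.0, -1: 15.0}
        b = sum(e * pulse_template(s + 3) for s, e in true.items())
        b = b + 0.2 * np.random.default_rng(1).standard_normal(n_samples)

        # Non-negative least squares attributes energy to each crossing; the
        # coefficient for offset 0 is the estimate for the crossing of interest.
        energies, _ = nnls(A, b)
        for s, e in zip(shifts, energies):
            print(f"BX offset {s:+d}: fitted energy {e:6.2f}")

     With clean templates the fit assigns most of the deposited energy to the correct crossings even when the pulses overlap, which mirrors what the Run 2 algorithms aim to achieve with measured shapes.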