Search for: All records

Creators/Authors contains: "Singh, M"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

  1. In optimal experimental design, the objective is to select a limited set of experiments that maximizes information about unknown model parameters based on factor levels. This work addresses the generalized D-optimal design problem, allowing for nonlinear relationships in factor levels. We develop scalable algorithms suitable for cases where the number of candidate experiments grows exponentially with the factor dimension, focusing on both first- and second-order models under design constraints. In particular, our approach integrates convex relaxation with pricing-based local search techniques, which can provide upper bounds and performance guarantees. Unlike traditional local search methods, such as the "Fedorov exchange" and its variants, our method effectively accommodates arbitrary side constraints in the design space. Furthermore, it yields both a feasible solution and an upper bound on the optimal value derived from the convex relaxation. Numerical results highlight the efficiency and scalability of our algorithms, demonstrating superior performance compared to the state-of-the-art commercial software JMP.
    Free, publicly-accessible full text available August 3, 2026
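The D-optimality criterion underlying item 1 can be sketched in a few lines: for a chosen subset S of candidate experiments, the objective is the log-determinant of the information matrix built from the corresponding rows of the candidate matrix. This is only an illustrative sketch of the criterion, not the paper's algorithm; the candidate matrix and design below are made up.

```python
import numpy as np

def d_criterion(X, S):
    """log-determinant of the information matrix for the experiments in S."""
    A = X[S]                      # design matrix restricted to the chosen runs
    M = A.T @ A                   # Fisher information matrix (up to a constant)
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf   # singular design carries no D-value

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))   # 100 candidate experiments, 4 parameters
S = [3, 17, 42, 58, 71, 90]         # one candidate design of 6 runs
print(d_criterion(X, S))
```

A local search in this spirit would repeatedly swap a run in S for a candidate outside it whenever the swap increases `d_criterion`; the paper's contribution is doing this with pricing and convex-relaxation bounds under side constraints.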
  2. The multiple traveling salesman problem (mTSP) is an important variant of metric TSP where a set of k salespeople together visit a set of n cities while minimizing the total cost of the k routes under a given cost metric. The mTSP has applications to many real-life problems such as vehicle routing. Rothkopf [14] introduced another variant of TSP called many-visits TSP (MV-TSP), where a request r(v) is given for each city v and a single salesperson needs to visit each city v exactly r(v) times and return to their starting point. We note that in MV-TSP the cost of loops is positive, so a TSP solution cannot be trivially extended (without an increase in cost) to an MV-TSP solution by consecutively visiting each vertex to satisfy the visit requirements. A combination of mTSP and MV-TSP, called many-visits multiple TSP (MV-mTSP), was studied by Bérczi, Mnich, and Vincze [3], who give approximation algorithms for various variants of MV-mTSP. In this work, we show a simple linear programming (LP) based reduction that converts an LP-based algorithm for mTSP into an LP-based algorithm for MV-mTSP with the same approximation factor. We apply this reduction to improve or match the current best approximation factors of several variants of MV-mTSP. Our reduction shows that adding visit requests r(v) to mTSP does not make the problem harder to approximate, even when r(v) is exponential in the number of vertices. To apply our reduction, we either use existing LP-based algorithms for mTSP variants or show that several existing combinatorial algorithms for mTSP variants can be interpreted as LP-based algorithms. This allows us to apply our reduction to these combinatorial algorithms while achieving improved guarantees.
    Free, publicly-accessible full text available April 9, 2026
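The abstract's remark that positive loop costs block the naive reduction can be seen on a toy instance: repeating a city via self-loops to meet its visit request strictly increases the walk's cost. The cost matrix and requests below are invented purely for illustration.

```python
import numpy as np

# Toy cost matrix on 3 cities; the diagonal holds the positive loop
# (self-visit) costs that make naive repetition expensive in MV-TSP.
cost = np.array([[2.0, 4.0, 5.0],
                 [4.0, 1.0, 3.0],
                 [5.0, 3.0, 2.0]])

def tour_cost(tour):
    """Total cost of a closed walk given as a sequence of city indices."""
    return sum(cost[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

base = [0, 1, 2]           # a TSP tour visiting each city once (cost 4+3+5 = 12)
naive = [0, 1, 1, 2, 2]    # meeting requests r = (1, 2, 2) via self-loops
print(tour_cost(base), tour_cost(naive))   # the self-loops add cost 1 + 2
```

The paper's point is that despite this, an LP-based reduction recovers MV-mTSP guarantees from mTSP guarantees without paying such loop costs.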
  3. Free, publicly-accessible full text available December 3, 2025
  4. In an instance of the weighted Nash Social Welfare problem, we are given a set of m indivisible items, G, and n agents, A, where each agent i in A has a valuation v_ij ≥ 0 for each item j in G. In addition, every agent i has a non-negative weight w_i such that the weights collectively sum up to 1. The goal is to find an assignment of items to agents that maximizes the weighted geometric mean of the valuations received by the agents. When all the weights are equal, the problem reduces to the classical Nash Social Welfare problem, which has recently received much attention. In this work, we present an approximation algorithm whose approximation factor depends on the KL-divergence between the weight distribution and the uniform distribution. We generalize the convex programming relaxations for the symmetric variant of Nash Social Welfare presented in [CDG+17, AGSS17] to two different mathematical programs. The first program is convex and is necessary for computational efficiency, while the second program is a non-convex relaxation that can be rounded efficiently. The approximation factor derives from the difference in the objective values of the convex and non-convex relaxations.
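The two quantities named in item 4 are easy to state concretely: the objective is the weighted geometric mean of agent valuations, and the approximation factor is governed by the KL-divergence from the weight distribution to uniform. The numbers below are illustrative only.

```python
import math

def weighted_nsw(valuations, weights):
    """Weighted geometric mean objective: prod_i v_i ** w_i, with sum(weights) == 1."""
    return math.prod(v ** w for v, w in zip(valuations, weights))

def kl_to_uniform(weights):
    """KL divergence D(w || uniform) = log n + sum_i w_i log w_i."""
    n = len(weights)
    return math.log(n) + sum(w * math.log(w) for w in weights if w > 0)

w = [0.5, 0.25, 0.25]
print(weighted_nsw([4.0, 2.0, 2.0], w))   # 4^0.5 * 2^0.25 * 2^0.25 = 2*sqrt(2)
print(kl_to_uniform(w))                    # > 0: skewed weights cost in the bound
```

With equal weights `kl_to_uniform` vanishes, recovering the symmetric Nash Social Welfare setting the abstract mentions.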
  5. An investigation of high-transverse-momentum (high-p_T) photon-triggered jets in proton-proton (pp) and ion-ion (AA) collisions at √s_NN = 0.2 and 5.02 TeV is carried out, using the multistage description of in-medium jet evolution. Monte Carlo simulations of hard scattering and energy loss in heavy-ion collisions are performed using parameters tuned in a previous study of the nuclear modification factor (R_AA) for inclusive jets and high-p_T hadrons. We obtain a good reproduction of the experimental data for photon-triggered jet R_AA, as measured by the ATLAS detector, the distribution of the ratio of jet to photon p_T (X_Jγ), measured by both CMS and ATLAS, and the photon-jet azimuthal correlation as measured by CMS. We obtain a moderate description of the photon-triggered jet I_AA, as measured by STAR. A noticeable improvement in the comparison is observed when one goes beyond prompt photons and includes bremsstrahlung and decay photons, revealing their significance in certain kinematic regions, particularly at X_Jγ > 1. Moreover, azimuthal angle correlations demonstrate a notable impact of bremsstrahlung photons on the distribution, emphasizing their role in accurately describing experimental results. This work highlights the success of the multistage model of jet modification to straightforwardly predict (this set of) photon-triggered jet observables. This comparison, along with the role played by bremsstrahlung photons, has important consequences on the inclusion of such observables in a future Bayesian analysis. Published by the American Physical Society, 2025.
    Free, publicly-accessible full text available June 1, 2026
  6. We present a comprehensive photometric and spectroscopic study of the Type IIP supernova (SN) 2018is. The V-band luminosity and the expansion velocity at 50 days post-explosion are −15.1 ± 0.2 mag (corrected for A_V = 1.34 mag) and 1400 km s^−1, classifying it as a low-luminosity SN II. The recombination phase in the V band is shorter, lasting around 110 days, and exhibits a steeper decline (1.0 mag per 100 days) compared to most other low-luminosity SNe II. Additionally, the optical and near-infrared spectra display hydrogen emission lines that are strikingly narrow, even for this class. The Fe II and Sc II line velocities are at the lower end of the typical range for low-luminosity SNe II. Semi-analytical modelling of the bolometric light curve suggests an ejecta mass of ∼8 M_⊙, corresponding to a pre-supernova mass of ∼9.5 M_⊙, and an explosion energy of ∼0.40 × 10^51 erg. Hydrodynamical modelling further indicates that the progenitor had a zero-age main sequence mass of 9 M_⊙, coupled with a low explosion energy of 0.19 × 10^51 erg. The nebular spectrum reveals weak [O I] λλ6300, 6364 lines, consistent with a moderate-mass progenitor, while features typical of Fe core-collapse events, such as He I, [C I], and Fe I, are indiscernible. However, the redder colours and low ratio of Ni to Fe abundance do not support an electron-capture scenario either. As a low-luminosity SN II with an atypically steep decline during the photospheric phase and remarkably narrow emission lines, SN 2018is contributes to the diversity observed within this population.
    Free, publicly-accessible full text available February 1, 2026
  7. We consider the Max-3-Section problem, where we are given an undirected graph G = (V, E) equipped with non-negative edge weights w : E → R+, and the goal is to find a partition of V into three equisized parts while maximizing the total weight of edges crossing between different parts. Max-3-Section is closely related to other well-studied graph partitioning problems, e.g., Max-Cut, Max-3-Cut, and Max-Bisection. We present a polynomial time algorithm achieving an approximation of 0.795, improving upon the previous best known approximation of 0.673. The requirement of multiple parts of equal size makes Max-3-Section much harder to handle than, e.g., Max-Bisection. We give a new algorithm that combines the Lasserre hierarchy with a random cut strategy, which suffices to give our result.
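The Max-3-Section objective in item 7 can be made concrete with a random equisized 3-partition, the naive baseline that the random cut component of the algorithm refines. The graph and seed below are invented for illustration.

```python
import random

def random_3_section(n, edges, seed=0):
    """Sample a uniformly random equisized 3-partition of {0..n-1} (n divisible
    by 3) and return the total weight of edges crossing between parts."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    part = {v: i * 3 // n for i, v in enumerate(perm)}  # thirds of the shuffle
    return sum(w for u, v, w in edges if part[u] != part[v])

# Toy instance: the complete graph K6 with unit weights. Any equisized
# 3-partition keeps exactly 3 of the 15 edges inside parts, cutting 12.
edges = [(u, v, 1.0) for u in range(6) for v in range(u + 1, 6)]
print(random_3_section(6, edges))   # 12.0 regardless of the seed
```

On K6 the value is seed-independent; on general graphs one would compare the sampled value against the semidefinite relaxation bound, which is where the Lasserre hierarchy enters in the paper.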
  8. The Collaboration reports a new determination of the jet transport parameter q̂ in the quark-gluon plasma (QGP) using Bayesian inference, incorporating all available inclusive hadron and jet yield suppression data measured in heavy-ion collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). This multi-observable analysis extends the previously published Bayesian inference determination of q̂, which was based solely on a selection of inclusive hadron suppression data. The framework is modular, incorporating detailed dynamical models of QGP formation and evolution, and of jet propagation and interaction in the QGP. Virtuality-dependent partonic energy loss in the QGP is modeled as a thermalized weakly coupled plasma, with parameters determined from Bayesian calibration using soft-sector observables. This Bayesian calibration of q̂ utilizes active learning, a machine-learning approach, for efficient exploitation of computing resources. The experimental data included in this analysis span a broad range in collision energy, centrality, and transverse momentum. In order to explore the systematic dependence of the extracted parameter posterior distributions, several different calibrations are reported: based on combined jet and hadron data; on jet or hadron data separately; and on restricted kinematic or centrality ranges of the jet and hadron data. Tension is observed in comparison of these variations, providing new insights into the physics of jet transport in the QGP and its theoretical formulation. Published by the American Physical Society, 2025.
    Free, publicly-accessible full text available May 1, 2026
  9. Aims. We investigate the photometric characteristics of a sample of intermediate-luminosity red transients (ILRTs), a class of elusive objects with peak luminosity between that of classical novae and standard supernovae. Our goal is to provide a stepping stone in the path to reveal the physical origin of such events, thanks to the analysis of the datasets collected. Methods. We present the multi-wavelength photometric follow-up of four ILRTs, namely NGC 300 2008OT-1, AT 2019abn, AT 2019ahd, and AT 2019udc. Through the analysis and modelling of their spectral energy distribution and bolometric light curves, we inferred the physical parameters associated with these transients. Results. All four objects display a single-peaked light curve which ends in a linear decline in magnitudes at late phases. A flux excess with respect to a single blackbody emission is detected in the infrared domain for three objects in our sample, a few months after maximum. This feature, commonly found in ILRTs, is interpreted as a sign of dust formation. Mid-infrared monitoring of NGC 300 2008OT-1 761 days after maximum allowed us to infer the presence of ∼10^−3–10^−5 M_⊙ of dust, depending on the chemical composition and the grain size adopted. The late-time decline of the bolometric light curves of the considered ILRTs is shallower than expected for ^56Ni decay, hence requiring an additional powering mechanism. James Webb Space Telescope observations of AT 2019abn prove that the object has faded below its progenitor luminosity in the mid-infrared domain, five years after its peak. Together with the disappearance of NGC 300 2008OT-1 in Spitzer images seven years after its discovery, this supports the terminal explosion scenario for ILRTs. With a simple semi-analytical model we tried to reproduce the observed bolometric light curves in the context of a few solar masses ejected at a few 10^3 km s^−1 and enshrouded in an optically thick circumstellar medium.
    Free, publicly-accessible full text available March 1, 2026