Abstract A new release of the Monte Carlo event generator (version 7.3) is now available. This version includes a number of improvements over its predecessor, version 7.2. Notable upgrades include: a process-independent implementation of electroweak radiation in the angular-ordered parton shower, integrated with QCD and QED radiation; a new recoil scheme for initial-state radiation that improves the behaviour of the angular-ordered parton shower; the use of heavy quark effective theory to refine the hadronization and decay of excited heavy mesons and heavy baryons; a dynamic strategy to regulate the kinematic threshold of cluster splittings within the cluster hadronization model; several structural improvements to the cluster hadronization model allowing for refined models; the possibility to extract event-by-event hadronization corrections in a well-defined way; and the possibility of using the string model, with a dedicated tune. Additionally, a new tuning of the parton shower and hadronization parameters has been performed. This article describes the new features introduced in version 7.3.0.
                            Post-hoc reweighting of hadron production in the Lund string model
                        
                    
    
We present a method for reweighting flavor selection in the Lund string fragmentation model. The method calculates and applies event weights, enabling fast and exact variation of hadronization parameters on pre-generated event samples. The procedure is post hoc, requiring only a small amount of additional information stored per event, and allows for efficient estimation of hadronization uncertainties without repeated simulation. Weight expressions are derived from the hadronization algorithm itself and validated against direct simulation for a wide range of observables and parameter shifts. The hadronization algorithm can be viewed as a hierarchical Markov process with stochastic rejections, a structure common to many complex simulations outside of high-energy physics. This perspective makes the method modular, extensible, and potentially transferable to other domains. We demonstrate the approach in Pythia, showing both numerical stability and timing benefits.
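The core idea can be sketched in a few lines. The example below is not the Pythia implementation; it is a minimal toy in which each string break makes a single Bernoulli flavor choice (strangeness probability `rho`), the choices are stored per event, and a post-hoc weight — a product of per-break likelihood ratios — reweights the sample to a new parameter value. All names and parameter values are illustrative.

```python
import random

def generate_event(rho, n_breaks, rng):
    """Toy hadronization: at each string break, pick a strange quark
    with probability rho, else a light quark. The choices are stored
    so the event can be reweighted later without re-simulation."""
    return ["s" if rng.random() < rho else "ud" for _ in range(n_breaks)]

def event_weight(choices, rho_old, rho_new):
    """Post-hoc event weight: product of per-break likelihood ratios."""
    w = 1.0
    for c in choices:
        w *= rho_new / rho_old if c == "s" else (1 - rho_new) / (1 - rho_old)
    return w

rng = random.Random(42)
rho_old, rho_new, n_breaks = 0.2, 0.3, 10
events = [generate_event(rho_old, n_breaks, rng) for _ in range(20000)]
weights = [event_weight(ev, rho_old, rho_new) for ev in events]

# The weighted strange fraction should match direct simulation at rho_new.
num = sum(w * ev.count("s") for ev, w in zip(events, weights))
frac = num / (sum(weights) * n_breaks)
print(round(frac, 2))  # close to rho_new = 0.3
```

Because the weight is exact (a ratio of the probabilities the generator itself used), the reweighted sample reproduces any observable of a sample generated directly at the new parameter, up to statistical fluctuations.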
- Award ID(s): 2209769
- PAR ID: 10621422
- Editor(s): Assi, Benoit; Bierlich, Christian; Ilten, Phil; Menzo, Tony; Szewc, Manuel; Wilkinson, Michael; Youssef, Ahmed; Zupan, Jure
- Publisher / Repository: arXiv:2505.00142
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We introduce a novel method for extracting a fragmentation model directly from experimental data without requiring an explicit parametric form, called Histories and Observables for Monte-Carlo Event Reweighting (HOMER). The method consists of three steps: the training of a classifier between simulation and data, the inference of single fragmentation weights, and the calculation of the weight for the full hadronization chain. We illustrate the use of HOMER on a simplified hadronization problem, a $q\bar{q}$ string fragmenting into pions, and extract a modified Lund string fragmentation function f(z). We then demonstrate the use of HOMER on three types of experimental data: (i) binned distributions of high-level observables, (ii) unbinned event-by-event distributions of these observables, and (iii) full particle-cloud information. After demonstrating that f(z) can be extracted from data (the inverse of hadronization), we also show that, at least in this limited setup, the fidelity of the extracted f(z) suffers only limited loss when moving from (i) to (ii) to (iii). Public code is available at https://gitlab.com/uchep/mlhad.
- Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to "modest" problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP — discrete Gaussian Markov random fields — and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study. Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov random field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. As illustrations, we solve problems of significantly greater size than any existing algorithm with similar guarantees can handle.
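rGMIA itself relies on Gaussian Markov random fields and tailored linear algebra, but the generic inference-based search loop it accelerates can be sketched with an ordinary GP and an expected-improvement criterion over a small discrete feasible set. Everything below — the RBF kernel, noise level, and toy objective — is an illustrative assumption, not the rGMIA algorithm:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf_kernel(X, Y, length=2.0):
    """Squared-exponential kernel on a 1-D discrete grid."""
    d = X[:, None] - Y[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_all, noise=0.1):
    """Posterior mean and pointwise std dev of the GP at all grid points."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_all, x_obs)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y_obs
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)  # diag of posterior cov
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """EI for minimization: E[max(best - Y, 0)] under the GP posterior."""
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + np.array([erf(v / sqrt(2)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sigma * phi

# Illustrative noisy "simulation output" to minimize over 20 discrete points.
true_f = lambda x: (x - 7.0) ** 2 / 10.0
rng = np.random.default_rng(0)
x_all = np.arange(20, dtype=float)
x_obs = [0.0, 10.0, 19.0]
y_obs = [true_f(x) + 0.1 * rng.standard_normal() for x in x_obs]
for _ in range(15):
    mu, sigma = gp_posterior(np.array(x_obs), np.array(y_obs), x_all)
    ei = expected_improvement(mu, sigma, min(y_obs))
    x_next = float(x_all[int(np.argmax(ei))])
    x_obs.append(x_next)
    y_obs.append(true_f(x_next) + 0.1 * rng.standard_normal())
best_x = x_obs[int(np.argmin(y_obs))]
print(best_x)  # should land near the true minimizer x = 7
```

The paper's contribution addresses exactly the bottleneck visible here: the matrix inversion in the posterior update, which rGMIA sidesteps via the sparse precision structure of a Gaussian Markov random field and infrequent global updates.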
- The increasing availability of real-time data collected from dynamic systems creates opportunities to calibrate simulation models online, improving the accuracy of simulation-based studies. Systematic methods are needed for assimilating real-time measurement data into simulation models. This paper presents a particle filter-based data assimilation method to support online model calibration in discrete event simulation. A joint state-parameter estimation problem is defined, and a particle filter-based data assimilation algorithm is presented. The developed method is applied to a discrete event simulation of a one-way traffic control system. Experimental results demonstrate the effectiveness of the method for calibrating simulation model parameters in real time and for improving data assimilation results.
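A bootstrap particle filter for joint state-parameter estimation can be sketched as follows. The linear-Gaussian toy model, noise levels, and particle count below are illustrative assumptions, not the paper's traffic-system model:

```python
import math
import random

def particle_filter(ys, n_particles=2000, rng=None):
    """Bootstrap particle filter for joint state-parameter estimation in
    the toy model  x_t = x_{t-1} + theta + N(0, 0.1),  y_t = x_t + N(0, 0.5).
    Each particle carries (state, parameter); returns the posterior mean
    of the parameter theta after assimilating all observations."""
    rng = rng or random.Random(1)
    parts = [(0.0, rng.uniform(-2.0, 2.0)) for _ in range(n_particles)]
    for y in ys:
        # Propagate the state; the static parameter rides along unchanged.
        parts = [(x + th + rng.gauss(0, 0.1), th) for x, th in parts]
        # Weight by the observation likelihood, then resample.
        ws = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x, _ in parts]
        parts = rng.choices(parts, weights=ws, k=n_particles)
    return sum(th for _, th in parts) / n_particles

# Synthetic measurement stream with ground-truth theta = 0.8.
rng = random.Random(7)
x, ys = 0.0, []
for _ in range(40):
    x += 0.8 + rng.gauss(0, 0.1)
    ys.append(x + rng.gauss(0, 0.5))
est = particle_filter(ys)
print(round(est, 2))  # estimate should be close to 0.8
```

Particles whose parameter hypothesis drives the state away from the measurements receive negligible weight and are eliminated at resampling, so the surviving cloud concentrates on parameter values consistent with the data — the same mechanism that calibrates the simulation model online in the paper.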
- Heavy-flavor hadrons produced in ultrarelativistic heavy-ion collisions are a sensitive probe for studying hadronization mechanisms of the quark-gluon plasma. In this paper, we survey how different transport models for the simulation of heavy-quark diffusion through a quark-gluon plasma in heavy-ion collisions implement hadronization, and how this affects final-state observables. Utilizing the same input charm-quark distribution in all models at the hadronization transition, we find that the transverse-momentum dependence of the nuclear modification factor of various charm-hadron species has significant sensitivity to the hadronization scheme. In addition, the charm-hadron elliptic flow exhibits a nontrivial dependence on the elliptic flow of the hadronizing partonic medium.