Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Free, publicly-accessible full text available June 3, 2026
- Free, publicly-accessible full text available June 3, 2026
- Achieving precise alignment between textual instructions and generated images in text-to-image generation is a significant challenge, particularly in rendering written text within images. State-of-the-art models like Stable Diffusion 3 (SD3), Flux, and AuraFlow still struggle with accurate text depiction, resulting in misspelled or inconsistent text. We introduce a training-free method with minimal computational overhead that significantly enhances text rendering quality. Specifically, we introduce an overshooting sampler for pretrained rectified flow (RF) models that alternates between over-simulating the learned ordinary differential equation (ODE) and reintroducing noise. Compared to the Euler sampler, the overshooting sampler effectively introduces an extra Langevin dynamics term that can help correct the compounding error from successive Euler steps and therefore improve the text rendering. However, when the overshooting strength is high, we observe over-smoothing artifacts on the generated images. To address this issue, we propose an Attention Modulated Overshooting sampler (AMO), which adaptively controls the strength of overshooting for each image patch according to its attention score with the text content. AMO demonstrates a 32.3% and 35.9% improvement in text rendering accuracy on SD3 and Flux, respectively, without compromising overall image quality or increasing inference cost.
  Free, publicly-accessible full text available May 3, 2026
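  A minimal sketch of the overshoot-then-renoise update described in this abstract, assuming a pretrained rectified-flow model exposed as a `velocity_fn(x, t)` callable and an optional per-patch cross-attention map `text_attn`; the noise scaling and the linear attention-to-overshoot mapping below are illustrative assumptions, not the paper's exact formulas.

  ```python
  import torch

  def amo_step(x, t, dt, velocity_fn, text_attn=None, c_max=2.0):
      """One attention-modulated overshooting update (hedged sketch, not the paper's code).

      x          : latent image, shape (B, C, H, W)
      velocity_fn: assumed callable returning the rectified-flow velocity at (x, t)
      text_attn  : assumed per-patch cross-attention to text tokens, shape (B, 1, H, W), in [0, 1]
      c_max      : maximum overshoot factor; c = 1 recovers a plain Euler step
      """
      v = velocity_fn(x, t)

      # Per-patch overshoot strength: patches that attend strongly to the text
      # are pushed harder, while other patches fall back toward a plain Euler
      # step, which is what limits over-smoothing in non-text regions.
      if text_attn is None:
          c = torch.full_like(x[:, :1], c_max)
      else:
          c = 1.0 + (c_max - 1.0) * text_attn

      # Over-simulate the learned ODE past the Euler target t + dt ...
      x_over = x + v * (c * dt)

      # ... then reintroduce Gaussian noise in proportion to the extra simulated
      # time, adding a Langevin-like correction on top of the Euler step.
      extra = (c - 1.0) * dt
      return x_over + torch.sqrt(2.0 * extra) * torch.randn_like(x)
  ```

  Where the attention map is zero the update collapses to the ordinary Euler step, which is consistent with the abstract's claim that the modulation avoids over-smoothing away from text regions.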
- Free, publicly-accessible full text available April 28, 2026
- Although Large Language Models (LLMs) succeed in human-guided conversations such as instruction following and question answering, the potential of LLM-guided conversations, where LLMs direct the discourse and steer the conversation's objectives, remains under-explored. In this study, we first characterize LLM-guided conversation into three fundamental components: (i) Goal Navigation; (ii) Context Management; (iii) Empathetic Engagement, and propose GuideLLM as an instantiation. We then implement an interviewing environment for the evaluation of LLM-guided conversation. Specifically, various topics are involved in this environment for comprehensive interviewing evaluation, resulting in around 1.4k turns of utterances, 184k tokens, and over 200 events mentioned during the interview for each chatbot evaluation. We compare GuideLLM with 6 state-of-the-art LLMs, such as GPT-4o and Llama-3-70b-Instruct, from the perspective of interviewing quality and autobiography generation quality. For automatic evaluation, we derive user proxies from multiple autobiographies and employ LLM-as-a-judge to score LLM behaviors. We further conduct a human-involved experiment by employing 45 human participants to chat with GuideLLM and the baselines. We then collect human feedback, preferences, and ratings regarding the quality of the conversation and the autobiography. Experimental results indicate that GuideLLM significantly outperforms baseline LLMs in automatic evaluation and achieves consistently leading performance in human ratings.
  Free, publicly-accessible full text available February 10, 2026
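  As a rough illustration of how the three components named in this abstract might fit together in an interview loop, here is a hedged Python sketch; the `llm` callable, the prompt wording, and the `InterviewState` fields are hypothetical placeholders, not GuideLLM's actual implementation.

  ```python
  from dataclasses import dataclass, field

  @dataclass
  class InterviewState:
      """Running state of an LLM-guided interview (hypothetical structure)."""
      goals: list                                     # life events still to cover (goal navigation)
      memory: list = field(default_factory=list)      # condensed facts gathered so far (context management)
      transcript: list = field(default_factory=list)  # full (speaker, utterance) history

  def guided_turn(llm, state: InterviewState, user_reply: str) -> str:
      """One LLM-guided turn: summarize, pick the next goal, respond empathetically."""
      state.transcript.append(("user", user_reply))

      # Context management: fold the new reply into a compact memory.
      state.memory.append(llm(f"Summarize the key facts in one sentence: {user_reply}"))

      # Goal navigation: steer toward the next uncovered topic.
      next_goal = state.goals.pop(0) if state.goals else "wrap up the interview"

      # Empathetic engagement: acknowledge the reply before asking the steering question.
      reply = llm(
          "You are conducting an autobiography interview.\n"
          f"Known facts so far: {state.memory}\n"
          f"Next topic to cover: {next_goal}\n"
          f"The participant just said: {user_reply}\n"
          "Acknowledge their answer empathetically, then ask one question that "
          "moves the interview toward the next topic."
      )
      state.transcript.append(("assistant", reply))
      return reply
  ```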
- Although Large Language Models (LLMs) succeed in human-guided conversations such as instruction following and question answering, the potential of LLM-guided conversations, where LLMs direct the discourse and steer the conversation's objectives, remains largely untapped. In this study, we provide an exploration of the LLM-guided conversation paradigm. Specifically, we first characterize LLM-guided conversation into three fundamental properties: (i) Goal Navigation; (ii) Context Management; (iii) Empathetic Engagement, and propose GuideLLM as a general framework for LLM-guided conversation. We then implement an autobiography interviewing environment as one demonstration of GuideLLM, a common practice in Reminiscence Therapy. In this environment, various techniques are integrated with GuideLLM to enhance the autonomy of LLMs, such as the Verbalized Interview Protocol (VIP) and Memory Graph Extrapolation (MGE) for goal navigation, and therapy strategies for empathetic engagement. We compare GuideLLM with baseline LLMs, such as GPT-4-turbo and GPT-4o, from the perspective of interviewing quality, conversation quality, and autobiography generation quality. Experimental results encompassing both LLM-as-a-judge evaluations and human subject experiments involving 45 participants indicate that GuideLLM significantly outperforms baseline LLMs in the autobiography interviewing task.
  Free, publicly-accessible full text available December 14, 2025
- This paper presents an overview of experimental results of a laser-produced plasma expanding into a background gas, immersed within a large range of highly uniform magnetic fields (up to 3 T) that are transverse to the expanding plasma. We used intensified gated imaging to capture the expansion of the plasma across and along the magnetic field lines and observe the spatiotemporal expansion dynamics for different magnetic field strengths. We observe changes in the perpendicular and parallel dynamics of the laser-produced plasma's expansion at high magnetic field. In addition, our results indicate the presence of electron-ion hybrid instabilities at relatively high pressures (100 mTorr) and relatively high magnetic field strengths (2 T), in accordance with theoretical calculations.
- We have observed the behavior of striations caused by ionization waves propagating in low-pressure helium DC discharges using the non-invasive laser-collision induced fluorescence (LCIF) diagnostic. To achieve this, we developed an analytic fit of collisional radiative model (CRM) predictions to interpret the LCIF data and recover quantitative two-dimensional spatial maps of the electron density, ne, and the ratios of LCIF emission states that can be correlated with Te, using accurate distribution functions at localized positions within striated helium discharges at 500 mTorr, 750 mTorr, and 1 Torr. To our knowledge, these are the first spatiotemporal, laser-based, experimental measurements of ne in DC striations. The ne and 447:588 ratio distributions align closely with striation theory. Constriction of the positive column appears to occur with decreasing gas pressure, as shown by the radial ne distribution. We identify a transition from a slow ionization wave to a fast ionization wave between 750 mTorr and 1 Torr. These experiments validate our analytic fit of ne, allowing the implementation of an LCIF diagnostic in helium without the need to develop a CRM.