-
A parameterized mathematical model of a lithium-ion battery cell is presented in this paper for performance analysis, with a particular focus on discharge behavior and the electrochemical impedance spectroscopy (EIS) profile. The model takes physical properties as input and consists of two complementary sub-models. The first is an adapted Doyle-Fuller-Newman (DFN) framework that simulates electrochemical, thermodynamic, and transport phenomena within the battery. The second is a calibrated solid-electrolyte interphase (SEI) layer formation model that captures the electrical dynamic response in terms of the reaction process, layer growth, and conductance change. The equivalent circuit component values are derived from the outputs of both sub-models, reflecting the battery's changing physical parameters. Simulated discharge curves and EIS profiles are validated against empirical results and show good agreement. This methodology aims to bridge the gap between physical models and equivalent circuit models (ECMs), enabling more accurate battery performance prediction and operating-status tracking.
Free, publicly accessible full text available October 1, 2025.
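The abstract above mentions simulating an EIS profile from equivalent circuit component values. As a rough, generic illustration of that idea (not the paper's model), the Python sketch below computes the complex impedance of a textbook Randles equivalent circuit over a frequency sweep; the circuit topology and all parameter values are assumptions made for illustration only.

```python
import numpy as np

def randles_impedance(freqs_hz, r_s, r_ct, c_dl, sigma_w):
    """Complex impedance of a simple Randles cell: R_s in series with
    (R_ct + Warburg) in parallel with C_dl. Units: ohm, farad, ohm*s^-0.5."""
    omega = 2 * np.pi * np.asarray(freqs_hz)
    z_w = sigma_w / np.sqrt(omega) * (1 - 1j)      # semi-infinite Warburg (diffusion)
    z_faradaic = r_ct + z_w                        # charge-transfer + diffusion branch
    z_cdl = 1 / (1j * omega * c_dl)                # double-layer capacitance branch
    z_parallel = (z_faradaic * z_cdl) / (z_faradaic + z_cdl)
    return r_s + z_parallel

# Hypothetical parameter values, chosen only to produce a plausible spectrum.
freqs = np.logspace(-2, 4, 200)                    # 10 mHz to 10 kHz
z = randles_impedance(freqs, r_s=0.02, r_ct=0.05, c_dl=0.8, sigma_w=0.005)
# Nyquist coordinates: plotting z.real against -z.imag gives the familiar
# kinetic semicircle followed by a 45-degree diffusion tail.
print(z.real[:3], -z.imag[:3])
```

In a physics-based workflow like the one described, the circuit parameters would not be hand-picked as above but derived from the DFN and SEI sub-model outputs.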
-
Bonial, Claire; Bonn, Julia; Hwang, Jena D. (Eds.)
We explore using LLMs, specifically GPT-4, to generate draft sentence-level Chinese Uniform Meaning Representations (UMRs) that human annotators can revise, speeding up the UMR annotation process. In this study, we use few-shot learning and Think-Aloud prompting to guide GPT-4 to generate sentence-level UMR graphs. Our experimental results show that, compared with annotating UMRs from scratch, using LLMs as a preprocessing step reduces annotation time by two-thirds on average. This indicates great potential for integrating LLMs into the pipeline for complicated semantic annotation tasks.
Free, publicly accessible full text available May 20, 2025.
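As a concrete picture of the few-shot setup described above, the sketch below drafts a sentence-level UMR graph with the OpenAI Python SDK. It is a minimal illustration, not the authors' actual prompt, examples, or pipeline; the example sentence, the gold graph, and the prompt wording are all assumptions.

```python
# Minimal sketch, assuming the openai package (>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical few-shot example: a sentence paired with a UMR-style graph in
# PENMAN notation. A real study would use curated gold Chinese examples.
FEW_SHOT = """Sentence: 他 昨天 买 了 一 本 书 。
UMR:
(b / buy-01
   :ARG0 (p / person :refer-person 3rd :refer-number singular)
   :ARG1 (b2 / book :refer-number singular)
   :temporal (y / yesterday)
   :aspect performance)"""

def draft_umr(sentence: str, model: str = "gpt-4") -> str:
    """Ask the model for a draft sentence-level UMR graph for a human to revise."""
    prompt = (
        "You annotate Chinese sentences with sentence-level Uniform Meaning "
        "Representation (UMR) graphs. Think step by step about predicates, "
        "arguments, aspect, and person/number, then output only the graph.\n\n"
        f"{FEW_SHOT}\n\nSentence: {sentence}\nUMR:"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

print(draft_umr("她 明天 要 去 北京 。"))
```

The draft graph would then be handed to a trained annotator for correction, which is where the reported time savings come from.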
-
Jiang, Jing; Reitter, David; Deng, Shumin (Eds.)
This paper explores using Large Language Models (LLMs) to perform Cross-Document Event Coreference Resolution (CDEC) annotation and evaluates how they fare against human annotators with different levels of training. Specifically, we formulate CDEC as a multi-category classification problem over pairs of events represented as decontextualized sentences, and compare the predictions of GPT-4 with the judgments of fully trained annotators and crowdworkers on the same data set. Our study indicates that GPT-4 with zero-shot learning outperforms crowdworkers by a large margin and performs comparably to trained annotators. On closer analysis, GPT-4 also tends to be overly confident, forcing annotation decisions even when the available information does not warrant them. Our results have implications for how to perform complicated annotations such as CDEC in the age of LLMs: the best way to acquire such annotations may be to combine the strengths of LLMs and trained human annotators in the annotation process, and relying on untrained or undertrained crowdworkers is no longer a viable way to obtain high-quality data for such problems.
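To make the pairwise formulation above concrete, here is a minimal zero-shot classification sketch over one decontextualized event pair. The label set, prompt wording, and example sentences are hypothetical and not taken from the paper.

```python
# Minimal sketch, assuming the openai package (>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical label set; the actual study defines its own categories.
LABELS = ["coreferent", "not_coreferent", "unclear"]

def classify_event_pair(event_a: str, event_b: str, model: str = "gpt-4") -> str:
    """Zero-shot decision on whether two decontextualized event mentions corefer."""
    prompt = (
        "Each sentence below describes one event mention from a different document. "
        "Decide whether the two mentions refer to the same real-world event. "
        f"Answer with exactly one label from {LABELS}.\n\n"
        f"Event A: {event_a}\nEvent B: {event_b}\nLabel:"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(classify_event_pair(
    "A magnitude 7.2 earthquake struck Haiti on Saturday morning.",
    "The quake that hit Haiti over the weekend killed hundreds.",
))
```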
-
Momentum-forbidden dark excitons can play a pivotal role in quantum information processing, Bose–Einstein condensation, and light-energy harvesting. Anatase TiO2, with its indirect band gap, is a prototypical platform for studying the transition from bright to momentum-forbidden dark excitons. Here, using GW plus the real-time Bethe–Salpeter equation combined with nonadiabatic molecular dynamics (GW + rtBSE-NAMD), we examine the many-body transition, occurring within 100 fs, from optically excited bright excitons to strongly bound momentum-forbidden dark excitons in anatase TiO2. In contrast to the single-particle picture, in which the exciton transition is considered to occur through electron–phonon scattering, within the GW + rtBSE-NAMD framework the many-body electron–hole Coulomb interaction activates additional exciton relaxation channels that notably accelerate the transition in competition with other radiative and nonradiative processes. The existence of dark excitons and ultrafast bright–dark exciton transitions provides insight into applications of anatase TiO2 in optoelectronic devices and light-energy harvesting, as well as into the formation of dark excitons in semiconductors.
-
Calzolari, Nicoletta; Kan, Min-Yen; Hoste, Veronique; Lenci, Alessandro; Sakti, Sakriani; Xue, Nianwen (Eds.)
This paper reports the first release of the UMR (Uniform Meaning Representation) data set. UMR is a graph-based meaning representation formalism consisting of a sentence-level graph and a document-level graph. The sentence-level graph represents predicate-argument structures, named entities, word senses, aspectuality of events, and person and number information for entities. The document-level graph represents coreferential, temporal, and modal relations that go beyond sentence boundaries. UMR is designed to capture commonalities and variations across languages through a common set of abstract concepts, relations, and attributes, as well as concrete concepts derived from words in individual languages. This UMR release includes annotations for six languages (Arapaho, Chinese, English, Kukama, Navajo, Sanapana) that vary greatly in their linguistic properties and resource availability. We also describe ongoing efforts to enlarge this data set and extend it to other genres and modalities, and we briefly describe the available infrastructure (UMR annotation guidelines and tools) that others can use to create similar data sets.
Free, publicly accessible full text available May 20, 2025.
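As a concrete illustration of the sentence-level graphs described above, the sketch below parses a small UMR-style graph with the penman Python library. The graph is invented for illustration and is not an example from the released data set; the concept and attribute names are assumptions based on the public UMR guidelines.

```python
# Minimal sketch, assuming the penman library is installed (pip install penman).
import penman

# Hypothetical sentence-level graph for "Ada is a teacher."
umr_sentence_graph = """
(h / have-role-91
   :ARG0 (p / person :name (n / name :op1 "Ada") :refer-number singular)
   :ARG1 (t / teacher)
   :aspect state
   :modstr full-affirmative)
"""

graph = penman.decode(umr_sentence_graph)
print(graph.top)                      # top variable of the graph, e.g. 'h'
for source, role, target in graph.triples:
    print(source, role, target)       # instance, argument, and attribute triples
```

Document-level annotation would then add coreference, temporal, and modal links between variables from graphs like this one across sentences.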