Abstract
Background: While most health-care providers now use electronic health records (EHRs) to document clinical care, many still treat them as digital versions of paper records. As a result, documentation often remains unstructured, with free-text entries in progress notes. This limits the potential for secondary use and analysis, because machine-learning and data-analysis algorithms are more effective with structured data.
Objective: This study aims to use advanced artificial intelligence (AI) and natural language processing (NLP) techniques to improve diagnostic information extraction from clinical notes in a periodontal use case. By automating this process, the study seeks to reduce missing data in dental records and minimize the need for extensive manual annotation, a long-standing barrier to widespread NLP deployment in dental data extraction.
Materials and Methods: This research uses large language models (LLMs), specifically Generative Pretrained Transformer 4, to generate synthetic medical notes for fine-tuning a RoBERTa model. The model was trained to better interpret and process dental language, with particular attention to periodontal diagnoses. Model performance was evaluated by manually reviewing 360 clinical notes randomly selected from each participating site's dataset.
Results: The results demonstrated high accuracy in extracting periodontal diagnosis data, with sites 1 and 2 achieving weighted average scores of 0.97-0.98. This performance held across all dimensions of periodontal diagnosis: stage, grade, and extent.
Discussion: Synthetic data effectively reduced manual annotation needs while preserving model quality. Generalizability across institutions suggests viability for broader adoption, though future work is needed to improve contextual understanding.
Conclusion: The study highlights the potentially transformative impact of AI and NLP on health-care research. Most clinical documentation (40%-80%) is free text; scaling our method could enhance clinical data reuse.
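A minimal sketch of the fine-tuning step this abstract describes (synthetic GPT-4-style notes used to train a RoBERTa classifier for one diagnosis dimension) is shown below. The checkpoint, label scheme, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch: fine-tune RoBERTa to classify periodontitis stage from
# synthetic clinical notes. Labels, example notes, and hyperparameters are
# illustrative placeholders, not the study's actual setup.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Synthetic notes (e.g., generated with an LLM) and their stage labels.
notes = ["Generalized periodontitis, stage III, grade B, chart reviewed ...",
         "Localized periodontitis, stage I, grade A, probing depths stable ..."]
stages = [2, 0]  # 0 = stage I, 1 = stage II, 2 = stage III, 3 = stage IV

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=4)

def tokenize(batch):
    # Pad to a fixed length so the default data collator can batch examples.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

ds = Dataset.from_dict({"text": notes, "label": stages}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="stage_model", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=ds,
)
trainer.train()  # separate heads or models would handle grade and extent
```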
Testing and Evaluation of Health Care Applications of Large Language Models: A Systematic Review
Importance: Large language models (LLMs) can assist in various health care activities, but current evaluation approaches may not adequately identify the most useful application areas.
Objective: To summarize existing evaluations of LLMs in health care in terms of 5 components: (1) evaluation data type, (2) health care task, (3) natural language processing (NLP) and natural language understanding (NLU) tasks, (4) dimension of evaluation, and (5) medical specialty.
Data Sources: A systematic search of PubMed and Web of Science was performed for studies published between January 1, 2022, and February 19, 2024.
Study Selection: Studies evaluating 1 or more LLMs in health care.
Data Extraction and Synthesis: Three independent reviewers categorized studies via keyword searches based on the data used, the health care tasks, the NLP and NLU tasks, the dimensions of evaluation, and the medical specialty.
Results: Of 519 studies reviewed, published between January 1, 2022, and February 19, 2024, only 5% used real patient care data for LLM evaluation. The most common health care tasks were assessing medical knowledge such as answering medical licensing examination questions (44.5%) and making diagnoses (19.5%). Administrative tasks such as assigning billing codes (0.2%) and writing prescriptions (0.2%) were less studied. For NLP and NLU tasks, most studies focused on question answering (84.2%), while tasks such as summarization (8.9%) and conversational dialogue (3.3%) were infrequent. Almost all studies (95.4%) used accuracy as the primary dimension of evaluation; fairness, bias, and toxicity (15.8%), deployment considerations (4.6%), and calibration and uncertainty (1.2%) were infrequently measured. Finally, in terms of medical specialty area, most studies were in generic health care applications (25.6%), internal medicine (16.4%), surgery (11.4%), and ophthalmology (6.9%), with nuclear medicine (0.6%), physical medicine (0.4%), and medical genetics (0.2%) being the least represented.
Conclusions and Relevance: Existing evaluations of LLMs mostly focus on accuracy of question answering for medical examinations, without consideration of real patient care data. Dimensions such as fairness, bias, and toxicity and deployment considerations received limited attention. Future evaluations should adopt standardized applications and metrics, use clinical data, and broaden focus to include a wider range of tasks and specialties.
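The keyword-based categorization step described in the review could be approximated as follows; the keyword lists are illustrative only and do not reflect the reviewers' actual coding scheme.

```python
# Illustrative keyword tagging of study abstracts by NLP/NLU task.
# The keyword lists are examples, not the review's actual coding scheme.
TASK_KEYWORDS = {
    "question answering": ["licensing exam", "USMLE", "multiple-choice", "question"],
    "summarization": ["summarize", "summarization", "discharge summary"],
    "conversational dialogue": ["chatbot", "dialogue", "conversation"],
}

def tag_tasks(abstract: str) -> list[str]:
    # Return every task whose keyword list matches the abstract text.
    text = abstract.lower()
    return [task for task, words in TASK_KEYWORDS.items()
            if any(w.lower() in text for w in words)]

print(tag_tasks("We evaluate an LLM on USMLE-style multiple-choice questions."))
# -> ['question answering']
```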
- PAR ID: 10616638
- Publisher / Repository: JAMA
- Date Published:
- Journal Name: JAMA
- Volume: 333
- Issue: 4
- ISSN: 0098-7484
- Page Range / eLocation ID: 319
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Importance: Virtual patient-physician communications have increased since 2020 and negatively impacted primary care physician (PCP) well-being. Generative artificial intelligence (GenAI) drafts of patient messages could potentially reduce health care professional (HCP) workload and improve communication quality, but only if the drafts are considered useful.
Objectives: To assess PCPs' perceptions of GenAI drafts and to examine linguistic characteristics associated with equity and perceived empathy.
Design, Setting, and Participants: This cross-sectional quality improvement study tested the hypothesis that PCPs' ratings of GenAI drafts (created using the electronic health record [EHR] standard prompts) would be equivalent to HCP-generated responses on 3 dimensions. The study was conducted at NYU Langone Health using private patient-HCP communications at 3 internal medicine practices piloting GenAI.
Exposures: Randomly assigned patient messages coupled with either an HCP message or the draft GenAI response.
Main Outcomes and Measures: PCPs rated responses' information content quality (eg, relevance) and communication quality (eg, verbosity) on Likert scales, and indicated whether they would use the draft or start anew (usable vs unusable). Branching logic further probed for empathy, personalization, and professionalism of responses. Computational linguistics methods assessed content differences in HCP vs GenAI responses, focusing on equity and empathy.
Results: A total of 16 PCPs (8 [50.0%] female) reviewed 344 messages (175 GenAI drafted; 169 HCP drafted). Both GenAI and HCP responses were rated favorably. GenAI responses were rated higher for communication style than HCP responses (mean [SD], 3.70 [1.15] vs 3.38 [1.20]; P = .01; U = 12 568.5) but were similar to HCPs on information content (mean [SD], 3.53 [1.26] vs 3.41 [1.27]; P = .37; U = 13 981.0) and usable draft proportion (mean [SD], 0.69 [0.48] vs 0.65 [0.47]; P = .49; t = −0.6842). Usable GenAI responses were considered more empathetic than usable HCP responses (32 of 86 [37.2%] vs 13 of 79 [16.5%]; difference, 125.5%), possibly attributable to more subjective (mean [SD], 0.54 [0.16] vs 0.31 [0.23]; P < .001; difference, 74.2%) and positive (mean [SD] polarity, 0.21 [0.14] vs 0.13 [0.25]; P = .02; difference, 61.5%) language. They were also more linguistically complex (mean [SD] score, 125.2 [47.8] vs 95.4 [58.8]; P = .002; difference, 31.2%) and numerically longer (mean [SD] word count, 90.5 [32.0] vs 65.4 [62.6]; difference, 38.4%), though the length difference was not statistically significant (P = .07).
Conclusions: In this cross-sectional study of PCP perceptions of an EHR-integrated GenAI chatbot, GenAI was found to communicate information better and with more empathy than HCPs, highlighting its potential to enhance patient-HCP communication. However, GenAI drafts were less readable than HCPs', a significant concern for patients with low health or English literacy.
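The linguistic analysis summarized above (polarity and subjectivity scores plus Mann-Whitney U comparisons of ratings) can be sketched roughly as below; TextBlob is used only as an example sentiment tool, since the abstract does not name the software the authors used, and all numbers are placeholders.

```python
# Sketch of the style analysis: sentiment scores per message plus a
# Mann-Whitney U test comparing Likert rating distributions between groups.
from textblob import TextBlob
from scipy.stats import mannwhitneyu

genai_msgs = ["I'm so sorry you're feeling unwell; let's get this sorted quickly."]
hcp_msgs = ["Please schedule a follow-up visit next week."]

def sentiment(messages):
    # Polarity in [-1, 1] and subjectivity in [0, 1] for each message.
    blobs = [TextBlob(m).sentiment for m in messages]
    return [b.polarity for b in blobs], [b.subjectivity for b in blobs]

genai_polarity, genai_subjectivity = sentiment(genai_msgs)
hcp_polarity, hcp_subjectivity = sentiment(hcp_msgs)

# Placeholder communication-style ratings (1-5) for reviewed drafts.
genai_ratings = [4, 5, 3, 4, 4]
hcp_ratings = [3, 4, 3, 3, 4]
stat, p = mannwhitneyu(genai_ratings, hcp_ratings, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p:.3f}")
```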
-
Medical vision-language models (VLMs) combine computer vision (CV) and natural language processing (NLP) to analyze visual and textual medical data. Our paper reviews recent advancements in developing VLMs specialized for healthcare, focusing on publicly available models designed for medical report generation and visual question answering (VQA). We provide background on NLP and CV, explaining how techniques from both fields are integrated into VLMs, with visual and language data often fused using Transformer-based architectures to enable effective learning from multimodal data. Key areas we address include the exploration of 18 public medical vision-language datasets, in-depth analyses of the architectures and pre-training strategies of 16 recent noteworthy medical VLMs, and comprehensive discussion on evaluation metrics for assessing VLMs' performance in medical report generation and VQA. We also highlight current challenges facing medical VLM development, including limited data availability, concerns with data privacy, and lack of proper evaluation metrics, among others, while also proposing future directions to address these obstacles. Overall, our review summarizes the recent progress in developing VLMs to harness multimodal medical data for improved healthcare applications.
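The Transformer-based fusion of visual and language data mentioned above can be illustrated with a minimal cross-attention module; the dimensions and the use of PyTorch's nn.MultiheadAttention are illustrative choices rather than a specific model from the review.

```python
# Minimal cross-attention fusion: text tokens attend to image patch features.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_tokens, image_patches):
        # Queries come from the text; keys/values come from the image encoder.
        fused, _ = self.attn(text_tokens, image_patches, image_patches)
        return self.norm(text_tokens + fused)  # residual connection

fusion = CrossModalFusion()
text = torch.randn(2, 32, 768)    # batch of 2, 32 text tokens
image = torch.randn(2, 196, 768)  # 14 x 14 = 196 image patches
print(fusion(text, image).shape)  # torch.Size([2, 32, 768])
```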
-
Importance: Identifying and tracking new infections during an emerging pandemic is crucial to design and deploy interventions to protect populations and mitigate the pandemic's effects, yet it remains a challenging task.
Objective: To characterize the ability of nonprobability online surveys to longitudinally estimate the number of COVID-19 infections in the population both in the presence and absence of institutionalized testing.
Design, Setting, and Participants: Internet-based online nonprobability surveys were conducted among residents aged 18 years or older across 50 US states and the District of Columbia, using the PureSpectrum survey vendor, approximately every 6 weeks between June 1, 2020, and January 31, 2023, for a multiuniversity consortium, the COVID States Project. Surveys collected information on COVID-19 infections, with representative state-level quotas applied to balance age, sex, race and ethnicity, and geographic distribution.
Main Outcomes and Measures: The main outcomes were (1) survey-weighted estimates of new monthly confirmed COVID-19 cases in the US from January 2020 to January 2023 and (2) estimates of uncounted test-confirmed cases from February 1, 2022, to January 1, 2023. These estimates were compared with institutionally reported COVID-19 infections collected by Johns Hopkins University and wastewater viral concentrations for SARS-CoV-2 from Biobot Analytics.
Results: The survey spanned 17 waves deployed from June 1, 2020, to January 31, 2023, with a total of 408 515 responses from 306 799 respondents (mean [SD] age, 42.8 [13.0] years; 202 416 women [66.0%]). Overall, 64 946 respondents (15.9%) self-reported a test-confirmed COVID-19 infection. National survey-weighted test-confirmed COVID-19 estimates were strongly correlated with institutionally reported COVID-19 infections (Pearson correlation, r = 0.96; P < .001) from April 2020 to January 2022 (50-state correlation mean [SD] value, r = 0.88 [0.07]). This was before the government-led mass distribution of at-home rapid tests. After January 2022, correlation was diminished and no longer statistically significant (r = 0.55; P = .08; 50-state correlation mean [SD] value, r = 0.48 [0.23]). In contrast, survey COVID-19 estimates correlated highly with SARS-CoV-2 viral concentrations in wastewater both before (r = 0.92; P < .001) and after (r = 0.89; P < .001) January 2022. Institutionally reported COVID-19 cases correlated (r = 0.79; P < .001) with wastewater viral concentrations before January 2022, but poorly (r = 0.31; P = .35) after, suggesting that both survey and wastewater estimates may have better captured test-confirmed COVID-19 infections after January 2022. Consistent correlation patterns were observed at the state level. Based on national-level survey estimates, approximately 54 million COVID-19 cases were likely unaccounted for in official records between January 2022 and January 2023.
Conclusions and Relevance: This study suggests that nonprobability survey data can be used to estimate the temporal evolution of test-confirmed infections during an emerging disease outbreak. Self-reporting tools may enable government and health care officials to implement accessible and affordable at-home testing for efficient infection monitoring in the future.
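The core computation behind the survey estimates above (weighted case counts compared with reference series via Pearson correlation) reduces to a few lines; the weights, counts, and series below are placeholders, not the study's data.

```python
# Sketch: survey-weighted monthly case estimates and their correlation with
# an external reference series. All numbers below are placeholders.
import numpy as np
from scipy.stats import pearsonr

# One row per respondent: survey weight and whether they reported a
# test-confirmed infection that month.
weights = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
reported_infection = np.array([1, 0, 0, 1, 0])
adult_population = 258_000_000  # approximate US adult population, illustrative

weighted_rate = np.average(reported_infection, weights=weights)
estimated_cases = weighted_rate * adult_population

# Correlate a monthly series of survey estimates with reported cases
# (or wastewater viral concentrations).
survey_series = np.array([1.2e6, 2.5e6, 4.1e6, 3.0e6])
reference_series = np.array([1.0e6, 2.2e6, 3.8e6, 2.7e6])
r, p = pearsonr(survey_series, reference_series)
print(f"estimated cases = {estimated_cases:,.0f}, r = {r:.2f}, P = {p:.3f}")
```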
-
Importance: Trust in physicians and hospitals has been associated with achieving public health goals, but the increasing politicization of public health policies during the COVID-19 pandemic may have adversely affected such trust.
Objective: To characterize changes in US adults' trust in physicians and hospitals over the course of the COVID-19 pandemic and the association between this trust and health-related behaviors.
Design, Setting, and Participants: This survey study uses data from 24 waves of a nonprobability internet survey conducted between April 1, 2020, and January 31, 2024, among 443 455 unique respondents aged 18 years or older residing in the US, with state-level representative quotas for race and ethnicity, age, and gender.
Main Outcome and Measure: Self-report of trust in physicians and hospitals; self-report of SARS-CoV-2 and influenza vaccination and booster status. Survey-weighted regression models were applied to examine associations between sociodemographic features and trust and between trust and health behaviors.
Results: The combined data included 582 634 responses across 24 survey waves, reflecting 443 455 unique respondents. The unweighted mean (SD) age was 43.3 (16.6) years; 288 186 respondents (65.0%) reported female gender; 21 957 (5.0%) identified as Asian American, 49 428 (11.1%) as Black, 38 423 (8.7%) as Hispanic, 3138 (0.7%) as Native American, 5598 (1.3%) as Pacific Islander, 315 278 (71.1%) as White, and 9633 (2.2%) as other race and ethnicity (those who selected "Other" from a checklist). Overall, the proportion of adults reporting a lot of trust in physicians and hospitals decreased from 71.5% (95% CI, 70.7%-72.2%) in April 2020 to 40.1% (95% CI, 39.4%-40.7%) in January 2024. In regression models, features associated with lower trust as of spring and summer 2023 included being 25 to 64 years of age, female gender, lower educational level, lower income, Black race, and living in a rural setting. These associations persisted even after controlling for partisanship. In turn, greater trust was associated with greater likelihood of vaccination for SARS-CoV-2 (adjusted odds ratio [OR], 4.94; 95% CI, 4.21-5.80) or influenza (adjusted OR, 5.09; 95% CI, 3.93-6.59) and receiving a SARS-CoV-2 booster (adjusted OR, 3.62; 95% CI, 2.99-4.38).
Conclusions and Relevance: This survey study of US adults suggests that trust in physicians and hospitals decreased during the COVID-19 pandemic. As lower levels of trust were associated with lesser likelihood of pursuing vaccination, restoring trust may represent a public health imperative.
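A rough sketch of the kind of weighted logistic regression behind the adjusted odds ratios above is shown below; the data, covariates, and use of statsmodels freq_weights are illustrative simplifications, and proper survey inference would also need design-based standard errors.

```python
# Sketch: weighted logistic regression of vaccination on trust, with
# adjusted odds ratios from the fitted coefficients. Data are made up,
# and freq_weights only approximates survey weighting for point estimates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "vaccinated": [1, 1, 0, 1, 0, 1, 0, 1],
    "high_trust": [1, 1, 0, 1, 0, 0, 1, 0],   # "a lot of trust" indicator
    "age":        [34, 52, 45, 29, 61, 38, 47, 55],
    "weight":     [1.1, 0.9, 1.3, 0.8, 1.0, 1.2, 0.7, 1.0],  # survey weights
})

X = sm.add_constant(df[["high_trust", "age"]])
model = sm.GLM(df["vaccinated"], X,
               family=sm.families.Binomial(),
               freq_weights=df["weight"])
result = model.fit()

odds_ratios = np.exp(result.params)          # adjusted ORs
conf_int = np.exp(result.conf_int())         # 95% CIs on the OR scale
print(odds_ratios["high_trust"], conf_int.loc["high_trust"].tolist())
```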
