-
Importance: Virtual patient-physician communications have increased since 2020 and negatively impacted primary care physician (PCP) well-being. Generative artificial intelligence (GenAI) drafts of patient messages could potentially reduce health care professional (HCP) workload and improve communication quality, but only if the drafts are considered useful.
Objectives: To assess PCPs’ perceptions of GenAI drafts and to examine linguistic characteristics associated with equity and perceived empathy.
Design, Setting, and Participants: This cross-sectional quality improvement study tested the hypothesis that PCPs’ ratings of GenAI drafts (created using the electronic health record [EHR] standard prompts) would be equivalent to HCP-generated responses on 3 dimensions. The study was conducted at NYU Langone Health using private patient-HCP communications at 3 internal medicine practices piloting GenAI.
Exposures: Randomly assigned patient messages coupled with either an HCP message or the draft GenAI response.
Main Outcomes and Measures: PCPs rated each response’s information content quality (eg, relevance) and communication quality (eg, verbosity) on Likert scales, and indicated whether they would use the draft or start anew (usable vs unusable). Branching logic further probed for empathy, personalization, and professionalism of responses. Computational linguistics methods assessed content differences in HCP vs GenAI responses, focusing on equity and empathy.
Results: A total of 16 PCPs (8 [50.0%] female) reviewed 344 messages (175 GenAI drafted; 169 HCP drafted). Both GenAI and HCP responses were rated favorably. GenAI responses were rated higher for communication style than HCP responses (mean [SD], 3.70 [1.15] vs 3.38 [1.20]; P = .01; U = 12 568.5) but were similar to HCP responses on information content (mean [SD], 3.53 [1.26] vs 3.41 [1.27]; P = .37; U = 13 981.0) and usable draft proportion (mean [SD], 0.69 [0.48] vs 0.65 [0.47]; P = .49; t = −0.6842). Usable GenAI responses were considered more empathetic than usable HCP responses (32 of 86 [37.2%] vs 13 of 79 [16.5%]; difference, 125.5%), possibly attributable to more subjective (mean [SD], 0.54 [0.16] vs 0.31 [0.23]; P < .001; difference, 74.2%) and positive (mean [SD] polarity, 0.21 [0.14] vs 0.13 [0.25]; P = .02; difference, 61.5%) language; they were also more linguistically complex (mean [SD] score, 125.2 [47.8] vs 95.4 [58.8]; P = .002; difference, 31.2%) and numerically longer (mean [SD] word count, 90.5 [32.0] vs 65.4 [62.6]; difference, 38.4%), although the length difference was not statistically significant (P = .07).
Conclusions: In this cross-sectional study of PCP perceptions of an EHR-integrated GenAI chatbot, GenAI was found to communicate information better and with more empathy than HCPs, highlighting its potential to enhance patient-HCP communication. However, GenAI drafts were less readable than HCPs’, a significant concern for patients with low health or English literacy.
Free, publicly-accessible full text available July 1, 2025.
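The Likert-scale comparisons above are rank-based (the reported U statistics). A minimal pure-Python sketch of the Mann-Whitney U statistic, with average ranks for ties, using hypothetical ratings rather than the study's data:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x (average ranks for ties).

    This is the rank-sum construction behind U values like those reported
    for the Likert comparisons; significance testing is omitted.
    """
    pooled = list(x) + list(y)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        # extend j over a run of tied values, then assign the average rank
        while j + 1 < len(pooled) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    rank_sum_x = sum(ranks[: len(x)])
    return rank_sum_x - len(x) * (len(x) + 1) / 2


# Hypothetical 1-5 Likert ratings, not the study's data:
genai = [4, 5, 3, 4, 4, 5, 2, 4]
hcp = [3, 4, 3, 2, 4, 3, 3, 5]
u_stat = mann_whitney_u(genai, hcp)
```

A larger U for one group indicates its ratings tend to rank higher in the pooled sample; U values for the two groups always sum to len(x) * len(y).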
-
Abstract
Objectives: To evaluate the proficiency of a HIPAA-compliant version of GPT-4 in identifying actionable, incidental findings from unstructured radiology reports of Emergency Department patients, and to assess the appropriateness of artificial intelligence (AI)-generated, patient-facing summaries of these findings.
Materials and Methods: Radiology reports extracted from the electronic health record of a large academic medical center were manually reviewed to identify non-emergent, incidental findings with a high likelihood of requiring follow-up, further sub-stratified as “definitely actionable” (DA) or “possibly actionable—clinical correlation” (PA-CC). Instruction prompts to GPT-4 were developed and iteratively optimized using a validation set of 50 reports. The optimized prompt was then applied to a test set of 430 unseen reports. GPT-4 performance was graded primarily on accuracy in identifying either DA or PA-CC findings, then secondarily on DA findings alone. Outputs were reviewed for hallucinations. AI-generated patient-facing summaries were assessed for appropriateness via Likert scale.
Results: For the primary outcome (DA or PA-CC), GPT-4 achieved 99.3% recall, 73.6% precision, and 84.5% F1. For the secondary outcome (DA only), GPT-4 demonstrated 95.2% recall, 77.3% precision, and 85.3% F1. No findings were “hallucinated” outright; however, 2.8% of cases included generated text about recommendations that were inferred without specific reference. The majority of true-positive AI-generated summaries required no or minor revision.
Conclusion: GPT-4 demonstrates proficiency in detecting actionable, incidental findings after refined instruction prompting. AI-generated patient instructions were most often appropriate, but rarely included inferred recommendations. While this technology shows promise to augment diagnostics, active clinician oversight via “human-in-the-loop” workflows remains critical for clinical implementation.
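The recall/precision/F1 triples above are linked by a fixed arithmetic relationship: F1 is the harmonic mean of precision and recall. A short sketch of that computation (the count arguments are illustrative, not the study's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard retrieval metrics from true-positive, false-positive,
    and false-negative counts; F1 is the harmonic mean of the first two."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


def f1_from_rates(precision, recall):
    """Harmonic mean, for checking a reported precision/recall/F1 triple."""
    return 2 * precision * recall / (precision + recall)


# The reported primary-outcome rates are mutually consistent:
print(round(100 * f1_from_rates(0.736, 0.993), 1))  # → 84.5
```

The harmonic mean penalizes imbalance, so the high recall (99.3%) cannot mask the lower precision (73.6%) in the combined score.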
-
Hastings, Janna (Ed.)
Background: Healthcare crowdsourcing events (e.g. hackathons) facilitate interdisciplinary collaboration and encourage innovation. Peer-reviewed research has not yet considered a healthcare crowdsourcing event focusing on generative artificial intelligence (GenAI), which generates text in response to detailed prompts and has vast potential for improving the efficiency of healthcare organizations. Our event, the New York University Langone Health (NYULH) Prompt-a-thon, primarily sought to inspire and build AI fluency within our diverse NYULH community, and to foster collaboration and innovation. Secondarily, we sought to analyze how participants’ experience was influenced by their prior GenAI exposure and by whether they received sample prompts during the workshop.
Methods: Executing the event required the assembly of an expert planning committee, who recruited diverse participants, anticipated technological challenges, and prepared the event. The event was composed of didactics and workshop sessions, which educated participants and allowed them to experiment with using GenAI on real healthcare data. Participants were given novel “project cards” associated with each dataset that illuminated the tasks GenAI could perform and, for a random set of teams, sample prompts to help them achieve each task (the public repository of project cards can be found at https://github.com/smallw03/NYULH-Generative-AI-Prompt-a-thon-Project-Cards). Afterwards, participants were asked to fill out a survey with 7-point Likert-style questions.
Results: Our event was successful in educating and inspiring hundreds of enthusiastic in-person and virtual participants across our organization on the responsible use of GenAI in a low-cost and technologically feasible manner. All participants responded positively, on average, to each of the survey questions (e.g., confidence in their ability to use and trust GenAI). Critically, participants reported a self-perceived increase in their likelihood of using and promoting colleagues’ use of GenAI for their daily work. No significant differences were seen in the surveys of those who received sample prompts with their project task descriptions.
Conclusion: The first healthcare Prompt-a-thon was an overwhelming success, with minimal technological failures, positive responses from diverse participants and staff, and evidence of post-event engagement. These findings will be integral to planning future events at our institution, and to others looking to engage their workforce in utilizing GenAI.
Free, publicly-accessible full text available July 23, 2025.
-
Abstract
Health care delivery is undergoing an accelerated period of digital transformation, spurred in part by the COVID-19 pandemic and the use of “virtual-first” care delivery models such as telemedicine. Medical education has responded to this shift with calls for improved digital health training, but there is as yet no universal understanding of the needed competencies, domains, and best practices for teaching these skills. In this paper, we argue that a “digital determinants of health” (DDoH) framework for understanding the intersections of health outcomes, technology, and training is critical to the development of comprehensive digital health competencies in medical education. Much like current social determinants of health models, the DDoH framework can be integrated into undergraduate, graduate, and professional education to guide training interventions as well as competency development and evaluation. We provide possible approaches to integrating this framework into training programs and explore priorities for future research in digitally-competent medical education.
Free, publicly-accessible full text available August 1, 2025.
-
Abstract
The COVID-19 pandemic has boosted digital health utilization, raising concerns about increased physicians’ after-hours clinical work (“work-outside-work”). The surge in patients’ digital messages and the additional time spent on work-outside-work by telemedicine providers underscore the need to evaluate the connection between digital health utilization and physicians’ after-hours commitments. We examined the impact on physicians’ workload from two types of digital demands: patients’ messages requesting medical advice (PMARs) sent to physicians’ inbox (inbasket), and telemedicine. Our study included 1716 ambulatory-care physicians in New York City regularly practicing between November 2022 and March 2023. Regression analyses assessed primary and interaction effects of PMARs and telemedicine on work-outside-work. The study revealed a significant effect of PMARs on physicians’ work-outside-work and that this relationship is moderated by physicians’ specialties. Non-primary care physicians (specialists) experienced a more pronounced effect than their primary care peers. Analysis of their telemedicine load revealed that primary care physicians received fewer PMARs and spent less time in work-outside-work with more telemedicine. Specialists faced increased PMARs and did more work-outside-work as telemedicine visits increased, which could be due to the difference in patient panels. Reducing PMAR volumes and efficient inbasket management strategies are needed to reduce physicians’ work-outside-work. Policymakers need to be cognizant of potential disruptions in physicians’ carefully balanced workload caused by digital health services.
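The moderation analysis described above amounts to a regression with a specialty × PMAR interaction term. A minimal pure-Python sketch, fitting ordinary least squares via the normal equations on synthetic data; the variables and coefficients are hypothetical, not the study's:

```python
def ols(X, y):
    """Ordinary least squares: solve (X'X) beta = X'y by Gaussian
    elimination with partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, p))) / A[r][r]
    return beta


# Hypothetical data: PMAR volume, a specialist indicator, and their
# interaction predicting after-hours minutes (work-outside-work).
rows = [(pmar, spec) for pmar in (0, 1, 2, 3) for spec in (0, 1)]
X = [[1.0, pmar, spec, pmar * spec] for pmar, spec in rows]
y = [10 + 5 * pmar + 8 * spec + 6 * pmar * spec for pmar, spec in rows]
beta = ols(X, y)  # recovers [10, 5, 8, 6]
```

A positive interaction coefficient (here 6) is the algebraic form of the finding that PMAR load raises after-hours work more steeply for specialists than for primary care physicians.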
-
Abstract
Pairwise interactions are critical to the collective dynamics of natural and technological systems. Information theory is the gold standard for studying these interactions, but recent work has identified pitfalls in the way information flow is appraised through classical metrics—time-delayed mutual information and transfer entropy. These pitfalls have prompted the introduction of intrinsic mutual information to precisely measure information flow. However, little is known regarding the potential use of intrinsic mutual information in the inference of directional influences to diagnose interactions from time-series of individual units. We explore this possibility within a minimalistic, mathematically tractable leader–follower model, for which we document an excess of false inferences of intrinsic mutual information compared to transfer entropy. This unexpected finding is linked to a fundamental limitation of intrinsic mutual information, which suffers from the same sins as time-delayed mutual information: a thin tail of the null distribution that favors the rejection of the null hypothesis of independence.
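Time-delayed mutual information, one of the classical metrics discussed above, can be estimated for discrete time series with a simple plug-in (histogram) estimator. A minimal sketch on hypothetical symbol sequences, not the paper's leader–follower model:

```python
from collections import Counter
from math import log2


def delayed_mutual_information(x, y, delay):
    """Plug-in estimate (in bits) of I(X_t ; Y_{t+delay}) for discrete
    series: count joint and marginal frequencies over lagged pairs."""
    pairs = list(zip(x[: len(x) - delay], y[delay:])) if delay else list(zip(x, y))
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), count in pxy.items():
        mi += (count / n) * log2(count * n / (px[a] * py[b]))
    return mi


# Follower copies the leader with a one-step lag (hypothetical series):
leader = [0, 1, 0, 1, 0, 1, 0, 1, 0]
follower = [0] + leader[:-1]
print(delayed_mutual_information(leader, follower, 1))  # → 1.0
```

The abstract's caution applies directly to such estimators: on finite independent series the estimate is biased above zero, so significance must be judged against a null (e.g. surrogate-shuffled) distribution rather than against zero.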
-
Free, publicly-accessible full text available July 1, 2025
-
Oh, Ilkwon; Yeo, Woon-Hong; Porfiri, Maurizio; Kim, Sang-Woo (Ed.)
-
Oh, Ilkwon; Yeo, Woon-Hong; Porfiri, Maurizio; Kim, Sang-Woo (Ed.)