Title: Machine learning-enabled screening for aortic stenosis with handheld ultrasound
Aims: Neural network classifiers can detect aortic stenosis (AS) using limited cardiac ultrasound images. While these networks perform very well on cart-based imaging, they have never been tested or fine-tuned for use with focused cardiac ultrasound (FoCUS) acquisitions obtained on handheld ultrasound devices. Methods and results: Prospective study performed at Tufts Medical Center. All patients ≥65 years of age referred for clinically indicated transthoracic echocardiography (TTE) were eligible for inclusion. Parasternal long-axis and parasternal short-axis imaging was acquired using a commercially available handheld ultrasound device. Our cart-based AS classifier (trained on ∼10 000 images) was tested on FoCUS imaging from 160 patients. The median age was 74 (inter-quartile range 69–80) years, and 50% of patients were women. Thirty patients (18.8%) had some degree of AS. The area under the receiver operating characteristic curve (AUROC) of the cart-based model for detecting AS was 0.87 (95% CI 0.75–0.99) on the FoCUS test set. Last-layer fine-tuning on handheld data established a classifier with an AUROC of 0.94 (0.91–0.97). The AUROC during temporal external validation was 0.97 (95% CI 0.89–1.0). When the performance of the fine-tuned AS classifier was modelled in potential screening environments (2% and 10% AS prevalence), the positive predictive value ranged from 0.72 (0.69–0.76) to 0.88 (0.81–0.97) and the negative predictive value from 0.94 (0.94–0.94) to 0.99 (0.99–0.99), respectively. Conclusion: Our cart-based machine-learning model for AS showed a drop in performance when tested on handheld ultrasound imaging collected by sonographers. Fine-tuning the AS classifier improved performance, demonstrating the potential of automated interpretation of handheld imaging as a novel approach to detecting AS.
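The predictive values reported at the two screening prevalences follow from Bayes' theorem applied to a classifier's sensitivity and specificity. A minimal sketch of that calculation (the sensitivity/specificity operating point below is a hypothetical placeholder, not a figure from the study):

```python
def ppv_npv(sensitivity: float, specificity: float, prevalence: float):
    """Positive and negative predictive value via Bayes' theorem,
    treating the confusion-matrix cells as population fractions."""
    tp = sensitivity * prevalence            # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence      # false negatives
    tn = specificity * (1 - prevalence)      # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical operating point, evaluated at the two screening prevalences
for prev in (0.02, 0.10):
    ppv, npv = ppv_npv(sensitivity=0.90, specificity=0.85, prevalence=prev)
    print(f"prevalence={prev:.0%}: PPV={ppv:.2f}, NPV={npv:.2f}")
```

As in the abstract, PPV rises with prevalence while NPV stays high, which is why both 2% and 10% settings are modelled.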
Award ID(s):
2338962
PAR ID:
10591808
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
European Heart Journal - Imaging Methods and Practice
Volume:
3
Issue:
1
ISSN:
2755-9637
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Aims: Myocardial infarction and heart failure are major cardiovascular diseases that affect millions of people in the USA, with morbidity and mortality being highest among patients who develop cardiogenic shock. Early recognition of cardiogenic shock allows prompt implementation of treatment measures. Our objective is to develop a new dynamic risk score, called CShock, to improve early detection of cardiogenic shock in the cardiac intensive care unit (ICU). Methods and results: We developed and externally validated a deep learning-based risk stratification tool, called CShock, for patients admitted to the cardiac ICU with acute decompensated heart failure and/or myocardial infarction to predict the onset of cardiogenic shock. We prepared a cardiac ICU dataset using the Medical Information Mart for Intensive Care-III database by annotating it with physician-adjudicated outcomes. This dataset, which consisted of 1500 patients, 204 of whom had cardiogenic/mixed shock, was then used to train CShock. The features used to train the model included patient demographics, cardiac ICU admission diagnoses, routinely measured laboratory values and vital signs, and relevant features manually extracted from echocardiogram and left heart catheterization reports. We externally validated the risk model on the New York University (NYU) Langone Health cardiac ICU database, which was also annotated with physician-adjudicated outcomes. The external validation cohort consisted of 131 patients, 25 of whom experienced cardiogenic/mixed shock. CShock achieved an area under the receiver operating characteristic curve (AUROC) of 0.821 (95% CI 0.792–0.850). CShock was externally validated in the more contemporary NYU cohort and achieved an AUROC of 0.800 (95% CI 0.717–0.884), demonstrating its generalizability to other cardiac ICUs. Based on Shapley values, an elevated heart rate was the most predictive feature for cardiogenic shock development. The other top 10 predictors are an admission diagnosis of myocardial infarction with ST-segment elevation, an admission diagnosis of acute decompensated heart failure, Braden Scale, Glasgow Coma Scale, blood urea nitrogen, systolic blood pressure, serum chloride, serum sodium, and arterial blood pH. Conclusion: The novel CShock score has the potential to provide automated detection and early warning for cardiogenic shock and improve outcomes for the millions of patients who suffer from myocardial infarction and heart failure.
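The AUROC figures quoted throughout these abstracts come with 95% confidence intervals, typically obtained by resampling patients. A minimal sketch of a rank-based AUROC with a percentile-bootstrap CI (illustrative only; the pure-Python implementation and any toy inputs are assumptions, not the studies' code):

```python
import random

def auroc(labels, scores):
    """AUROC as the Mann-Whitney U statistic: P(score_pos > score_neg),
    counting ties as 0.5 wins."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample patients with replacement and
    take the alpha/2 and 1 - alpha/2 quantiles of the AUROC distribution."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < len(ys):  # skip resamples missing a class
            stats.append(auroc(ys, [scores[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```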
  2. OBJECTIVE: To determine the benefit of starting continuous glucose monitoring (CGM) in adult-onset type 1 diabetes (T1D) and type 2 diabetes (T2D) with regard to longer-term glucose control and serious clinical events. RESEARCH DESIGN AND METHODS: A retrospective observational cohort study within the Veterans Affairs Health Care System was used to compare glucose control and hypoglycemia- or hyperglycemia-related admission to an emergency room or hospital and all-cause hospitalization between propensity score overlap weighted initiators of CGM and nonusers over 12 months. RESULTS: CGM users receiving insulin (n = 5,015 with T1D and n = 15,706 with T2D) and similar numbers of nonusers were identified from 1 January 2015 to 31 December 2020. Declines in HbA1c were significantly greater in CGM users with T1D (−0.26%; 95% CI −0.33, −0.19%) and T2D (−0.35%; 95% CI −0.40, −0.31%) than in nonusers at 12 months. Percentages of patients achieving HbA1c <8% and <9% after 12 months were greater in CGM users. In T1D, CGM initiation was associated with significantly reduced risk of hypoglycemia (hazard ratio [HR] 0.69; 95% CI 0.48, 0.98) and all-cause hospitalization (HR 0.75; 95% CI 0.63, 0.90). In patients with T2D, there was a reduction in risk of hyperglycemia in CGM users (HR 0.87; 95% CI 0.77, 0.99) and all-cause hospitalization (HR 0.89; 95% CI 0.83, 0.97). Several subgroups (based on baseline age, HbA1c, hypoglycemic risk, or follow-up CGM use) had even greater responses. CONCLUSIONS: In a large national cohort, initiation of CGM was associated with sustained improvement in HbA1c in patients with later-onset T1D and patients with T2D using insulin. This was accompanied by a clear pattern of reduced risk of admission to an emergency room or hospital for hypoglycemia or hyperglycemia and of all-cause hospitalization.
  3. Wiley Periodicals LLC (Ed.)
    Introduction: Studies investigating the relationship between blood pressure (BP) measurements from electronic health records (EHRs) and Alzheimer's disease (AD) rely on summary statistics, like BP variability, and have only been validated at a single institution. We hypothesize that leveraging BP trajectories can accurately estimate AD risk across different populations. Methods: In a retrospective cohort study, EHR data from Veterans Affairs (VA) patients were used to train and internally validate a machine learning model to predict AD onset within 5 years. External validation was conducted on patients from Michigan Medicine (MM). Results: The VA and MM cohorts included 6860 and 1201 patients, respectively. Model performance using BP trajectories was modest but comparable (area under the receiver operating characteristic curve [AUROC] = 0.64 [95% confidence interval (CI) = 0.54–0.73] for VA vs. AUROC = 0.66 [95% CI = 0.55–0.76] for MM). Conclusion: Approaches that directly leverage BP trajectories from EHR data could aid in AD risk stratification across institutions.
  4. Purpose: Few studies have explored concrete methods or approaches to improve model fairness in the radiology domain. Our proposed AI model utilizes supervised contrastive learning to minimize bias in CXR diagnosis. Materials and Methods: In this retrospective study, we evaluated our proposed method on two datasets: the Medical Imaging and Data Resource Center (MIDRC) dataset with 77,887 CXR images from 27,796 patients collected as of April 20, 2023 for COVID-19 diagnosis, and the NIH Chest X-ray (NIH-CXR) dataset with 112,120 CXR images from 30,805 patients collected between 1992 and 2015. In the NIH-CXR dataset, thoracic abnormalities include atelectasis, cardiomegaly, effusion, infiltration, mass, nodule, pneumonia, pneumothorax, consolidation, edema, emphysema, fibrosis, pleural thickening, or hernia. Our proposed method utilizes supervised contrastive learning with carefully selected positive and negative samples to generate fair image embeddings, which are fine-tuned for subsequent tasks to reduce bias in chest X-ray (CXR) diagnosis. We evaluated the methods using the marginal AUC difference (δmAUC). Results: The proposed model showed a significant decrease in bias across all subgroups when compared to the baseline models, as evidenced by a paired t-test (p<0.0001). The δmAUC values obtained by our method were 0.0116 (95% CI, 0.0110–0.0123), 0.2102 (95% CI, 0.2087–0.2118), and 0.1000 (95% CI, 0.0988–0.1011) for sex, race, and age on MIDRC, and 0.0090 (95% CI, 0.0082–0.0097) for sex and 0.0512 (95% CI, 0.0512–0.0532) for age on NIH-CXR, respectively. Conclusion: Employing supervised contrastive learning can mitigate bias in CXR diagnosis, addressing concerns of fairness and reliability in deep learning-based diagnostic methods.
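The marginal AUC difference (δmAUC) summarizes how far subgroup performance deviates from overall performance. A minimal sketch under one common formulation (the paper's exact definition may differ, and the toy data below are assumptions for illustration only):

```python
def auroc(labels, scores):
    """Rank-based AUROC: P(score_pos > score_neg), ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def marginal_auc_difference(labels, scores, groups):
    """delta-mAUC as the largest absolute gap between any subgroup's AUC
    and the overall AUC (one common formulation; definitions vary)."""
    overall = auroc(labels, scores)
    deltas = []
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < len(ys):  # subgroup must contain both classes
            deltas.append(abs(auroc(ys, [scores[i] for i in idx]) - overall))
    return max(deltas)
```

A smaller δmAUC means subgroup AUCs sit closer to the overall AUC, which is the sense in which the proposed model "decreases bias".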
  5. Abstract Background: Previous social determinants of health (SDoH) studies on laryngeal cancer (LC) have assessed individual factors of socioeconomic status and race/ethnicity but seldom investigate a wider breadth of SDoH-factors for their effects in the real world. This study aims to delineate how a wider array of SDoH-vulnerabilities interactively associates with LC-disparities. Methods: This retrospective cohort study assessed 74,495 LC-patients between 1975 and 2017 from the Surveillance, Epidemiology, and End Results (SEER) database using the CDC's Social Vulnerability Index (SVI), which captures total SDoH-vulnerability from 15 SDoH variables across the specific vulnerabilities of socioeconomic status, minority-language status, household composition, and infrastructure/housing and transportation, measured across US counties. Univariate linear and logistic regressions were performed on length of care/follow-up and survival, staging, and treatment across SVI scores. Results: Survival time dropped significantly by 34.37% (from 72.83 to 47.80 months), and surveillance time decreased by 28.09% (from 80.99 to 58.24 months) with increasing overall social vulnerability, alongside advanced staging (OR 1.15; 95% CI 1.13–1.16), increased chemotherapy (OR 1.13; 95% CI 1.11–1.14), decreased surgical resection (OR 0.91; 95% CI 0.90–0.92), and decreased radiotherapy (OR 0.97; 95% CI 0.96–0.99). Discussion: In this SDoH-study of LCs, detrimental care and prognostic trends were observed with increasing overall SDoH-vulnerability.