- Search Results
Search for: All records
Total Resources: 2
Filter by Author / Creator:
- Adibuzzaman, Mohammad (2)
- Adams, William G (1)
- Anderson, Nicholas R (1)
- Badhon, S M Saiful Islam (1)
- Bahroos, Neil (1)
- Bell, Douglas S (1)
- Bian, Jiang (1)
- Bozdag, Serdar (1)
- Bumgardner, Cody (1)
- Campion, Thomas (1)
- Castro, Mario (1)
- Cimino, James J (1)
- Cleveland, Ana D (1)
- Cohen, I Glenn (1)
- Ding, Junhua (1)
- Dorr, David (1)
- Elkin, Peter L (1)
- Fan, Jungwei W (1)
- Ferris, Todd (1)
- Foran, David J (1)
Deep learning models have demonstrated impressive accuracy in predicting acute kidney injury (AKI), a condition affecting up to 20% of ICU patients, yet their black-box nature prevents clinical adoption in high-stakes critical care settings. While existing interpretability methods such as SHAP, LIME, and attention mechanisms can identify important features, they fail to capture the temporal dynamics essential for clinical decision-making and cannot communicate when specific risk factors become critical in a patient's trajectory. This limitation is particularly problematic in the ICU, where the timing of interventions can significantly affect patient outcomes. We present a novel interpretable framework that brings temporal awareness to deep learning predictions for AKI. Our approach introduces three key innovations: (1) a latent convolutional concept bottleneck that learns clinically meaningful patterns from ICU time series without requiring manual concept annotation, leveraging Conv1D layers to capture localized temporal patterns such as sudden physiological changes; (2) Temporal Concept Tracing (TCT), a gradient-based method that identifies not only which risk factors matter but precisely when they become critical, addressing the fundamental question of temporal relevance missing from current XAI techniques; and (3) integration with MedAlpaca to generate structured, time-aware clinical explanations that translate model insights into actionable bedside guidance. We evaluate our framework on MIMIC-IV data, demonstrating that it outperforms the existing explainability frameworks Occlusion and LIME on comprehensiveness score, sufficiency score, and processing time. The proposed method also better captures risk-factor inflection points along patient timelines than conventional concept bottleneck variants, including dense-layer and attention-mechanism baselines. This work represents the first comprehensive solution for interpretable temporal deep learning in critical care that addresses both the what and the when of clinical risk factors. By making AKI predictions transparent and temporally contextualized, our framework bridges the gap between model accuracy and clinical utility, offering a path toward trustworthy AI deployment in time-sensitive healthcare settings.
Free, publicly-accessible full text available November 23, 2026.
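The abstract above names two mechanisms: a Conv1D concept bottleneck and gradient-based temporal tracing. Below is a minimal PyTorch sketch of how such a pipeline could fit together; all class, function, and parameter names are hypothetical, layer sizes are illustrative, and the paper's actual architecture and TCT formulation may differ.

import torch
import torch.nn as nn

class ConceptBottleneck1D(nn.Module):
    # Hypothetical stand-in for innovation (1): a Conv1D encoder that maps
    # an ICU time series (features x time) to a small set of latent concept
    # activations per time step, then predicts AKI risk from the concepts.
    def __init__(self, n_features, n_concepts=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, n_concepts, kernel_size=5, padding=2),
            nn.Sigmoid(),  # concept activations bounded in [0, 1]
        )
        self.head = nn.Linear(n_concepts, 1)  # risk from pooled concepts

    def forward(self, x):                  # x: (batch, n_features, time)
        concepts = self.encoder(x)         # (batch, n_concepts, time)
        risk = self.head(concepts.mean(dim=-1))  # pool over time, predict
        return risk, concepts

def temporal_concept_trace(model, x):
    # Gradient-based tracing in the spirit of TCT: score how strongly each
    # concept at each time step drives the risk output, so peaks along the
    # time axis indicate when a risk factor becomes critical.
    risk, concepts = model(x)
    concepts.retain_grad()                 # keep grads for a non-leaf tensor
    risk.sum().backward()
    return (concepts.grad * concepts).detach()  # (batch, n_concepts, time)

# Usage: locate the peak hour of each learned concept for one patient stay.
model = ConceptBottleneck1D(n_features=12)
x = torch.randn(1, 12, 48)                      # 48 hourly measurements
trace = temporal_concept_trace(model, x)
critical_hours = trace[0].abs().argmax(dim=-1)  # peak time per concept

Here the gradient-times-activation trace plays the role the abstract assigns to TCT; the published method may compute its temporal attributions differently.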
Idnay, Betina; Xu, Zihan; Adams, William G; Adibuzzaman, Mohammad; Anderson, Nicholas R; Bahroos, Neil; Bell, Douglas S; Bumgardner, Cody; Campion, Thomas; Castro, Mario; et al. (npj Health Systems)
Abstract: This study reports a comprehensive environmental scan of the generative AI (GenAI) infrastructure in the national network for clinical and translational science across 36 institutions supported by the CTSA Program led by the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH) in the United States. Key findings indicate a diverse range of institutional strategies, with most organizations in the experimental phase of GenAI deployment. The results underscore the need for a more coordinated approach to GenAI governance, emphasizing collaboration among senior leaders, clinicians, information technology staff, and researchers. Our analysis reveals that 53% of institutions identified data security as a primary concern, followed by lack of clinician trust (50%) and AI bias (44%), which must be addressed to ensure the ethical and effective implementation of GenAI technologies.
Free, publicly-accessible full text available December 1, 2026.