Search Results
Search for: All records
Total Resources: 2
Author / Contributor
- Liu, Zichuan (2)
- Chen, Zhuomin (1)
- Dong, Wenqian (1)
- Goh, Wang Ling (1)
- Jiang, Yu (1)
- Li, Yixing (1)
- Liu, Wenye (1)
- Luo, Dongsheng (1)
- Obeysekera, Jayantha (1)
- Ren, Fengbo (1)
- Shi, Jimeng (1)
- Shirani, Farhad (1)
- Song, Lei (1)
- Wang, Tianchun (1)
- Wang, Yongliang (1)
- Yu, Hao (1)
- Zheng, Xu (1)
-
Explaining deep learning models operating on time series data is crucial in applications that require interpretable and transparent insights from time series signals. In this work, we investigate this problem from an information-theoretic perspective and show that most existing measures of explainability may suffer from trivial solutions and distributional shift issues. To address these issues, we introduce a simple yet practical objective function for time series explainable learning. The design of the objective function builds upon the principle of the information bottleneck (IB) and modifies the IB objective function to avoid trivial solutions and distributional shift issues. We further present TimeX++, a novel explanation framework that leverages a parametric network to produce explanation-embedded instances that are both in-distribution and label-preserving. We evaluate TimeX++ on both synthetic and real-world datasets, comparing its performance against leading baselines, and validate its practical efficacy through case studies in a real-world environmental application. Quantitative and qualitative evaluations show that TimeX++ outperforms baselines across all datasets, demonstrating a substantial improvement in explanation quality for time series data. (An illustrative sketch of the explanation-learning setup described here follows the results list below.)
Free, publicly accessible full text available May 29, 2025.
-
Li, Yixing; Liu, Zichuan; Liu, Wenye; Jiang, Yu; Wang, Yongliang; Goh, Wang Ling; Yu, Hao; Ren, Fengbo (IEEE Transactions on Industrial Electronics)
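As a rough, hypothetical illustration of the setup the first abstract describes, the PyTorch sketch below shows how a parametric explainer could produce a saliency mask over a time series, and how a training loss could reward label preservation on the resulting explanation-embedded instance while keeping the mask sparse. This is not the TimeX++ objective from the paper; MaskExplainer, explanation_loss, predictor, baseline, and sparsity_weight are illustrative names assumed here, not identifiers from the work.

```python
# Minimal sketch, assuming a pretrained classifier `predictor` that maps a
# time series of shape (batch, channels, time) to class logits.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskExplainer(nn.Module):
    """Hypothetical parametric explainer: maps a time series to a
    per-timestep, per-channel saliency mask in [0, 1]."""

    def __init__(self, n_channels: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, n_channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> mask of the same shape in [0, 1]
        return torch.sigmoid(self.net(x))


def explanation_loss(predictor: nn.Module,
                     x: torch.Tensor,
                     mask: torch.Tensor,
                     baseline: torch.Tensor,
                     sparsity_weight: float = 0.1) -> torch.Tensor:
    """Label preservation on the explanation-embedded instance plus a
    sparsity penalty; `baseline` stands in for whatever in-distribution
    reference is used to fill the masked-out regions."""
    # Build the explanation-embedded instance from the mask.
    x_explained = mask * x + (1.0 - mask) * baseline

    # Treat the frozen classifier's prediction on the full input as the target.
    with torch.no_grad():
        target = F.softmax(predictor(x), dim=-1)
    log_pred = F.log_softmax(predictor(x_explained), dim=-1)

    label_preservation = F.kl_div(log_pred, target, reduction="batchmean")
    return label_preservation + sparsity_weight * mask.mean()
```

In this toy version the in-distribution requirement is only crudely approximated by the baseline fill; per the abstract, TimeX++ instead modifies the IB objective itself to avoid trivial solutions and distributional shift.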