Search for: All records
Total resources: 4
Abstract: The rise of Large Language Models (LLMs) and generative visual analytics systems has transformed data-driven insights, yet significant challenges persist in accurately interpreting users' analytical and interaction intents. While language inputs offer flexibility, they often lack precision, making the expression of complex intents inefficient, error-prone, and time-intensive. To address these limitations, we investigate the design space of multimodal interactions for generative visual analytics through a literature review and pilot brainstorming sessions. Building on these insights, we introduce a highly extensible workflow that integrates multiple LLM agents for intent inference and visualization generation. We develop InterChat, a generative visual analytics system that combines direct manipulation of visual elements with natural language inputs. This integration enables precise intent communication and supports progressive, visually driven exploratory data analyses. By employing effective prompt engineering and contextual interaction linking, alongside intuitive visualization and interaction designs, InterChat bridges the gap between user interactions and LLM-driven visualizations, enhancing both interpretability and usability. Extensive evaluations, including two usage scenarios, a user study, and expert feedback, demonstrate the effectiveness of InterChat. Results show significant improvements in the accuracy and efficiency of handling complex visual analytics tasks, highlighting the potential of multimodal interactions to redefine user engagement and analytical depth in generative visual analytics.
- Lee, Sam Yu-Te; Ma, Kwan-Liu (IEEE Transactions on Visualization and Computer Graphics). Free, publicly accessible full text available September 1, 2026.
- Lee, Sam Yu-Te; Bahukhandi, Aryaman; Liu, Dongyu; Ma, Kwan-Liu (IEEE Transactions on Visualization and Computer Graphics). Free, publicly accessible full text available January 1, 2026.
- Li, Haobo; Kam-Kwai, Wong; Luo, Yan; Chen, Juntong; Liu, Chengzhong; Zhang, Yaxuan; Lau, Alexis_Kai Hon; Qu, Huamin; Liu, Dongyu (IEEE Transactions on Visualization and Computer Graphics). Free, publicly accessible full text available January 1, 2026.