Abstract The increasing integration of artificial intelligence (AI) in visual analytics (VA) tools raises vital questions about user behavior, trust, and the potential for induced biases when guidance is provided during data exploration. We present an experiment in which participants engaged in a visual data exploration task while receiving intelligent suggestions supplemented with four different transparency levels. We also modulated the difficulty of the task (easy or hard) to simulate a more tedious scenario for the analyst. Our results indicate that participants were more inclined to accept suggestions when completing the more difficult task, despite the AI's lower suggestion accuracy. Moreover, the levels of transparency tested in this study did not significantly affect suggestion usage or participants' subjective trust ratings. Additionally, we observed that participants who utilized suggestions throughout the task explored a greater quantity and diversity of data points. We discuss these findings and their implications for improving the design and effectiveness of AI‐guided VA tools.
Abstract The visualization community regards visualization literacy as a necessary skill. Yet, despite the recent increase in research into visualization literacy by the education and visualization communities, we lack practical and time‐effective instruments for the widespread measurement of people's comprehension and interpretation of visual designs. We present Mini‐VLAT, a brief but practical visualization literacy test. The Mini‐VLAT is a 12‐item short form of the 53‐item Visualization Literacy Assessment Test (VLAT). The Mini‐VLAT is reliable (coefficient omega = 0.72) and strongly correlates with the VLAT. Five visualization experts validated the Mini‐VLAT items, yielding an average content validity ratio (CVR) of 0.6. We further validate the Mini‐VLAT by demonstrating a strong positive correlation between study participants' Mini‐VLAT scores and their aptitude for learning an unfamiliar visualization using a Parallel Coordinate Plot test. Overall, the Mini‐VLAT items showed a similar pattern of validity and reliability as the 53‐item VLAT. The results show that the Mini‐VLAT is a psychometrically sound and practical short measure of visualization literacy.
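For context on the reported CVR of 0.6: the abstract does not state which formulation was used, but assuming Lawshe's standard definition, an item's CVR is computed from the number of experts who rate the item as essential, $n_e$, out of $N$ experts. As a purely illustrative reading (not a claim about the authors' exact procedure), with $N = 5$ experts an item rated essential by four of the five matches the reported average:

\[
\mathrm{CVR} = \frac{n_e - N/2}{N/2}, \qquad \frac{4 - 5/2}{5/2} = 0.6 .
\]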