The visualization community has seen a rise in the adoption of user studies. Empirical user studies systematically test the assumptions that we make about how visualizations can help or hinder viewers’ performance of tasks. Although the increase in user studies is encouraging, it is vital that research on human reasoning with visualizations be grounded in an understanding of how the mind functions. Until recently, there were no sufficient models illustrating the process of decision-making with visualizations. However, Padilla et al. [41] recently proposed an integrative model for decision-making with visualizations, which expands on modern theories of visualization cognition and decision-making. In this paper, we provide insights into how cognitive models can accelerate innovation, improve validity, and facilitate replication efforts, topics that have yet to be thoroughly discussed in the visualization community. To do this, we offer a compact overview of the cognitive science of decision-making with visualizations for the visualization community, using the Padilla et al. [41] cognitive model as a guiding framework. By detailing examples of visualization research that illustrate each component of the model, this paper offers novel insights into how visualization researchers can utilize a cognitive framework to guide their user studies. We provide practical examples of each component of the model from empirical studies of visualizations, along with the visualization implications of each cognitive process, which have not been directly addressed in prior work. Finally, this work offers a case study in utilizing an understanding of human cognition to generate a novel solution to a visualization reasoning bias in the context of hurricane forecast track visualizations.
Towards Designing Unbiased Replication Studies in Information Visualization
Experimenter bias and expectancy effects have been well studied in the social sciences and even in human-computer interaction. They refer to the non-ideal study-design choices made by experimenters which can unfairly influence the outcomes of their studies. While these biases need to be considered when designing any empirical study, they can be particularly significant in the context of replication studies, which can stray from the studies being replicated in only a few admissible ways. Although there are general guidelines for making valid, unbiased choices in each of the several steps of experimental design, making such choices when conducting replication studies has not been well explored. We reviewed 16 replication studies in information visualization published in four top venues from 2008 to the present to characterize how the study designs of the replication studies differed from those of the studies they replicated. We present our characterization categories, which include the prevalence of crowdsourcing and the commonly found replication types and study-design differences. Based on these categories, we draw guidelines to help researchers make meaningful and unbiased decisions when designing replication studies. Our paper presents the first steps in gaining a larger understanding of this topic and contributes to the ongoing efforts of encouraging researchers to conduct and publish more replication studies in information visualization.
- Award ID(s): 1816620
- PAR ID: 10284849
- Journal Name: 2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV)
- Page Range / eLocation ID: 93 to 101
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Projects focused on movement behaviour and home range are commonplace, but beyond a focus on choosing appropriate research questions, there are no clear guidelines for such studies. Without these guidelines, designing an animal tracking study to produce reliable estimates of space-use and movement properties (necessary to answer basic movement ecology questions) is often done in an ad hoc manner.

We developed ‘movedesign’, a user-friendly Shiny application, which can be utilized to investigate the precision of three estimates regularly reported in movement and spatial ecology studies: home range area, speed and distance travelled. Conceptually similar to statistical power analysis, this application enables users to assess the degree of estimate precision that may be achieved with a given sampling design; that is, the choices regarding data resolution (sampling interval) and battery life (sampling duration).

Leveraging the ‘ctmm’ R package, we utilize two methods proven to handle many common biases in animal movement datasets: autocorrelated kernel density estimators (AKDEs) and continuous-time speed and distance (CTSD) estimators. Longer sampling durations are required to reliably estimate home range areas via the detection of a sufficient number of home range crossings. In contrast, speed and distance estimation requires a sampling interval short enough to ensure that a statistically significant signature of the animal's velocity remains in the data.

This application addresses key challenges faced by researchers when designing tracking studies, including the trade-off between long battery life and high resolution of GPS locations collected by the devices, which may result in a compromise between reliably estimating home range or speed and distance. ‘movedesign’ has broad applications for researchers and decision-makers, supporting them to focus efforts and resources on achieving the optimal sampling design strategy for their research questions, prioritizing the correct deployment decisions for insightful and reliable outputs, while understanding the trade-offs associated with these choices.
The field of human-robot interaction (HRI) research is multidisciplinary and requires researchers to understand diverse fields including computer science, engineering, informatics, philosophy, and psychology, among other disciplines. However, it is hard to be an expert in everything. To help HRI researchers develop methodological skills, especially in areas that are relatively new to them, we conducted a virtual workshop, Workshop Your Study Design (WYSD), at the 2021 International Conference on HRI. In this workshop, we grouped participants with mentors who are experts in areas like real-world studies, empirical lab studies, questionnaire design, interviews, participatory design, and statistics. During and after the workshop, participants discussed their proposed study methods, obtained feedback, and improved their work accordingly. In this paper, we present 1) workshop attendees' feedback about the workshop and 2) lessons that the participants learned during their discussions with mentors. Participants' responses about the workshop were positive, and future scholars who wish to run such a workshop can consider implementing their suggestions. The main contribution of this paper is the lessons learned section, which the workshop participants helped form based on what they discovered during the workshop. We organize the lessons learned into four themes, corresponding to the areas of the papers submitted to the workshop: 1) improving study design for HRI, 2) working with participants, especially children, 3) making the most of the study's and robot's limitations, and 4) collaborating well across fields. These themes include practical tips and guidelines to help researchers learn about fields of HRI research with which they have limited experience. We include specific examples, and researchers can adapt the tips and guidelines to their own areas to avoid some common mistakes and pitfalls in their research.
In this paper, we present the Vis Repligogy framework, which enables conducting replication studies in the classroom. Replication studies are crucial to strengthening the data visualization field and ensuring its foundations are solid and its methods accurate. Although visualization researchers acknowledge the epistemological significance of replications and their necessity for establishing trust and reliability, the field has made little progress in supporting the publication of such studies and, importantly, in providing methods to the community that encourage replications. Therefore, we contribute Vis Repligogy, a novel framework to systematically incorporate replications within visualization course curricula that not only teaches students replication and evaluation methodologies but also results in executed replication studies that validate prior work. To validate the feasibility of the framework, we present case studies of two graduate data visualization courses that implemented it. These courses resulted in a total of five replication studies. Finally, we reflect on our experience implementing the Vis Repligogy framework to provide useful recommendations for future use. We envision this framework will encourage instructors to conduct replications in their courses, help facilitate more replications in visualization pedagogy and research, and support a culture shift towards reproducible research. Supplemental materials for this paper are available at https://osf.io/ncb6d/.
Visualization research and practice that incorporates the arts claims to be more effective in connecting with users on a human level. However, these claims are difficult to measure quantitatively. In this paper, we present a follow-on study that uses close reading, a humanities method from literary studies that we have previously explored for evaluating visualizations, to evaluate visualizations created using artistic processes [Bares 2020]. To use close reading as an evaluation method, we guide participants through a series of steps designed to prompt them to interpret the visualization's formal, informational, and contextual features. Here we elaborate on our motivations for using close reading as a method to evaluate visualizations, and enumerate the procedures we used in the study to evaluate a 2D visualization, including modifications made because of the COVID-19 pandemic. Key findings of this study include that close reading is an effective formative method for eliciting information related to interpretation and critique; user subject position; and suspicion or skepticism. Information gained through close reading is valuable in the visualization design and iteration processes, both for designing features and other formal elements more effectively and for considering larger questions of context and framing.