Graphical perception studies typically measure visualization encoding effectiveness using the error of an “average observer”, leading to canonical rankings of encodings for numerical attributes: e.g., position > area > angle > volume. Yet different people may vary in their ability to read different visualization types, leading to variance in this ranking across individuals that is not captured by population-level metrics based on “average observer” models. One way to bridge this gap is to recast classic visual perception tasks as tools for assessing individual performance, in addition to overall visualization performance. In this article, we replicate and extend Cleveland and McGill's graphical comparison experiment using Bayesian multilevel regression, and use these models to explore individual differences in visualization skill from multiple perspectives. The results of the experiments and modeling indicate that some people show patterns of accuracy that credibly deviate from the canonical rankings of visualization effectiveness. We discuss implications of these findings, such as a need for new ways to communicate visualization effectiveness to designers, how patterns in individuals’ responses may reveal systematic biases and strategies in visualization judgment, and how recasting classic visual perception tasks as tools for assessing individual performance may offer new ways to quantify aspects of visualization literacy. Experiment data, source code, and analysis scripts are available at the following repository: https://osf.io/8ub7t/?view_only=9be4798797404a4397be3c6fc2a68cc0 .
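Cleveland and McGill's original experiment scored each proportion judgment with a log-absolute-error measure, log2(|judged − true| + 1/8), which replications of this paradigm commonly reuse; in an individual-differences analysis, these per-trial errors would be aggregated per participant rather than only per encoding. A minimal sketch of that per-trial score (the example percentages are illustrative, not data from the study):

```python
import math

def cm_log_error(judged_pct, true_pct):
    """Cleveland-McGill log-absolute error for one proportion judgment.

    Both values are percentages in [0, 100]; the 1/8 offset keeps the
    logarithm finite when the judgment is exactly correct.
    """
    return math.log2(abs(judged_pct - true_pct) + 1 / 8)

# A perfect judgment scores log2(1/8) = -3.0; error grows from there.
print(cm_log_error(50.0, 50.0))  # -3.0
print(cm_log_error(62.0, 50.0) > cm_log_error(51.0, 50.0))  # True
```

Because the scale is logarithmic, it compresses large misjudgments, which is one reason an "average observer" summary can mask individuals whose encoding-specific errors diverge from the canonical ranking.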
Investigating Direct Manipulation of Graphical Encodings as a Method for User Interaction
We investigate direct manipulation of graphical encodings as a method for interacting with visualizations. There is an increasing interest in developing visualization tools that enable users to perform operations by directly manipulating graphical encodings rather than external widgets such as checkboxes and sliders. Designers of such tools must decide which direct manipulation operations should be supported, and identify how each operation can be invoked. However, we lack empirical guidelines for how people convey their intended operations using direct manipulation of graphical encodings. We address this issue by conducting a qualitative study that examines how participants perform 15 operations using direct manipulation of standard graphical encodings. From this study, we 1) identify a list of strategies people employ to perform each operation, 2) observe commonalities in strategies across operations, and 3) derive implications to help designers leverage direct manipulation of graphical encodings as a method for user interaction.
- Award ID(s): 1750474
- PAR ID: 10139407
- Date Published:
- Journal Name: IEEE Transactions on Visualization and Computer Graphics
- ISSN: 1077-2626
- Page Range / eLocation ID: 482 - 491
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Enabling the vision of on-demand cyber manufacturing-as-a-service requires a new set of cloud-based computational tools for design manufacturability feedback and process selection to connect designers with manufacturers. In our prior work, we demonstrated a generative modeling approach in voxel space to model the shape transformation capabilities of machining operations using unsupervised deep learning. Combining this with a deep metric learning model enabled quantitative assessment of the manufacturability of a query part. In this paper, we extend our prior work by developing a semantic segmentation approach for machinable volume decomposition using pre-trained generative process capability models, which output per-voxel manufacturability feedback and labels of candidate machining operations for a query 3D part. Using three types of complex parts as case studies, we show that the proposed method accurately identifies machinable and non-machinable volumes with an average intersection-over-union (IoU) of 0.968 for axisymmetric machining operations, and a class-average F1 score of 0.834 for volume segmentation by machining operation.
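The two metrics reported above have standard definitions over voxel masks: IoU is intersection over union of the predicted and ground-truth machinable volumes, and per-class F1 is the Dice-style harmonic mean of precision and recall. A minimal sketch over boolean voxel grids (this is the textbook formulation, not the paper's evaluation code):

```python
import numpy as np

def voxel_iou(pred, target):
    """Intersection-over-union between two boolean voxel masks of equal shape."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, target).sum() / union

def voxel_f1(pred, target):
    """Per-class F1 (Dice) between two boolean voxel masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    tp = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2 * tp / denom if denom else 1.0

# Toy 4x4x4 example: prediction covers 2 slabs, ground truth covers 3.
pred = np.zeros((4, 4, 4), bool); pred[:2] = True
target = np.zeros((4, 4, 4), bool); target[:3] = True
print(round(voxel_iou(pred, target), 4))  # 0.6667
print(voxel_f1(pred, target))             # 0.8
```

A class-average F1 would simply compute `voxel_f1` once per machining-operation label and average the results.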
-
Shifts in policy and consumers’ awareness have raised the importance of sustainability in product design, inspiring the development of tools that support more sustainable design. However, such tools are not adopted as quickly as expected. To understand what tools designers consider useful, we explored how much control designers perceive over existing design strategies, and how much impact they think these strategies have. We used a survey (n = 42) and follow-up interviews (n = 12) to ask hardware product design professionals what areas they see opportunities in, and what functions they look for in tools. The findings reveal that designers perceive impact and control differently in different opportunity areas, so to increase the likelihood of adoption, tools should incorporate features that reflect those differences. Designers report the least control over aspects related to manufacturing, and also rate these as having low impact on sustainability. In contrast, designers attribute high control and impact to aspects related to their design practice and their organizations’ business model, which are tightly linked. To address these issues, designers pointed towards tools that improve information transparency, support decision-making, predict results, share knowledge, and discover user needs. Regardless of how much control designers have, they care about tools and strategies that are highly impactful.
-
Automating operations of objects has made life easier and more convenient for billions of people, especially those with limited motor capabilities. On the other hand, even able-bodied users might not always be able to perform manual operations (e.g., when both hands are occupied), and manual operations might be undesirable for hygiene purposes (e.g., contactless devices). As a result, automation systems like motion-triggered doors, remote-control window shades, and contactless toilet lids have become increasingly popular in private and public environments. Yet, these systems are hampered by complex building wiring or short battery lifetimes, negating their positive benefits for accessibility, energy saving, healthcare, and other domains. In this paper we explore how these types of objects can be powered in perpetuity by the energy generated from a unique source - user interactions, specifically, the manual manipulations of objects by users who can afford them when they can afford them. Our assumption is that users' capabilities for object operations are heterogeneous, that there are desires for both manual and automatic operations in most environments, and that automatic operations are often not needed frequently - for example, an automatic door in a public space is often manually opened many times before a need for automatic operation arises. The energy harvested from those manual operations would be sufficient to power that one automatic operation. We instantiate this idea by upcycling common everyday objects with devices that have various mechanical designs powered by a general-purpose backbone embedded system. We call these devices MiniKers. We built a custom driver circuit that enables motor mechanisms to toggle between generating power (i.e., manual operation) and actuating objects (i.e., automatic operation).
We designed a wide variety of mechanical mechanisms to retrofit existing objects and evaluated our system with a 48-hour deployment study, which demonstrates the efficacy of MiniKers and sheds light on this people-as-power approach as a feasible solution to the energy needs of smart-environment automation.
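The budget argument at the heart of this people-as-power approach - many manual operations bank energy for an occasional automatic one - reduces to simple arithmetic. A sketch of that calculation; the per-operation harvest, actuation cost, and storage efficiency below are hypothetical placeholders, not measurements from the MiniKers deployment:

```python
import math

def manual_ops_per_auto_op(harvest_mj_per_manual, cost_mj_per_auto, storage_eff=0.8):
    """Number of manual operations needed to bank enough energy for one
    automatic actuation, given per-operation harvest (mJ), actuation cost
    (mJ), and round-trip storage efficiency. All values are hypothetical.
    """
    return math.ceil(cost_mj_per_auto / (harvest_mj_per_manual * storage_eff))

# e.g. if each manual push harvests 50 mJ and one actuation costs 400 mJ:
print(manual_ops_per_auto_op(50, 400))  # 10
```

So long as the ratio of manual to automatic operations observed in practice exceeds this break-even count, the object can run in perpetuity without wiring or battery replacement.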
-
Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco
There exists a gap between visualization design guidelines and their application in visualization tools. While empirical studies can provide design guidance, we lack a formal framework for representing design knowledge, integrating results across studies, and applying this knowledge in automated design tools that promote effective encodings and facilitate visual exploration. We propose modeling visualization design knowledge as a collection of constraints, in conjunction with a method to learn weights for soft constraints from experimental data. Using constraints, we can take theoretical design knowledge and express it in a concrete, extensible, and testable form: the resulting models can recommend visualization designs and can easily be augmented with additional constraints or updated weights. We implement our approach in Draco, a constraint-based system based on Answer Set Programming (ASP). We demonstrate how to construct increasingly sophisticated automated visualization design systems, including systems based on weights learned directly from the results of graphical perception experiments.
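Draco's soft constraints are expressed as ASP rules, but the underlying recommendation idea - each candidate design pays a learned weight for every soft-constraint violation, and the solver returns the minimum-cost design - can be sketched outside ASP. The constraint names and weights below are illustrative stand-ins, not Draco's actual rules or learned values:

```python
# Hypothetical soft constraints with example weights (not Draco's real values)
WEIGHTS = {
    "aggregate_count": 0.3,    # mild penalty for aggregating a count field
    "continuous_on_x": 0.1,    # mild preference against continuous x-axis
    "bar_without_zero": 2.0,   # strong penalty for bars not anchored at zero
}

def design_cost(violations, weights=WEIGHTS):
    """Weighted sum of soft-constraint violation counts for one candidate
    design; the recommender prefers the lowest-cost candidate."""
    return sum(weights[name] * count for name, count in violations.items())

candidates = {
    "bar_truncated_axis": {"bar_without_zero": 1},
    "scatter": {"continuous_on_x": 1, "aggregate_count": 1},
}
best = min(candidates, key=lambda name: design_cost(candidates[name]))
print(best)  # scatter
```

Learning the weights from graphical perception data (as the abstract describes) amounts to fitting them so that empirically better-read designs receive lower total cost than worse ones.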