Title: Toward a Bias-Aware Future for Mixed-Initiative Visual Analytics
Mixed-initiative visual analytics systems incorporate well-established design principles that improve users' abilities to solve problems. As these systems consider whether to take initiative toward achieving user goals, many current systems address the potential for cognitive bias in human initiatives statically, relying on fixed initiatives they can take instead of identifying, communicating, and addressing the bias as it occurs. We argue that mixed-initiative design principles can and should incorporate cognitive bias mitigation strategies directly, through the development of mitigation techniques embedded in the system to address cognitive biases in situ. We identify domain experts in machine learning who adopt visual analytics techniques and systems that incorporate existing mixed-initiative principles, and we examine their potential to support bias mitigation strategies. This examination considers the unique perspective these experts bring to visual analytics and is situated in existing user-centered systems that make exemplary use of design principles informed by cognitive theory. We then suggest informed opportunities for domain experts to take initiative toward addressing cognitive biases in light of their existing contributions to the field. Finally, we contribute open questions and research directions for designers seeking to adopt visual analytics techniques that incorporate bias-aware initiatives in future systems.
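The position stops short of an implementation, but one concrete form an in-situ bias check could take is a metric computed over the system's interaction log, in the spirit of interaction-trace bias metrics from the visual analytics literature. The sketch below is a minimal, hypothetical illustration only; the function name, the coverage-based signal, and the threshold are all assumptions, not part of the paper.

```python
from collections import Counter

def coverage_bias_signal(interactions, data_points, threshold=0.5):
    """Toy in-situ bias signal (hypothetical): flag when a user's recent
    interactions concentrate on a small subset of the available data,
    which may indicate anchoring or tunneling.

    interactions: list of item ids the user has examined (most recent last)
    data_points:  set of all item ids available in the current view
    threshold:    coverage fraction below which the flag is raised
    """
    examined = set(interactions)
    coverage = len(examined) / len(data_points)  # fraction of data touched
    most_revisited, revisit_count = Counter(interactions).most_common(1)[0]
    return {
        "coverage": coverage,
        "flag": coverage < threshold,   # candidate moment for system initiative
        "focus_item": most_revisited,   # where attention is concentrating
        "focus_count": revisit_count,
    }

# Example: the analyst has examined only 3 of 10 candidates, repeatedly.
signal = coverage_bias_signal(["a", "b", "a", "c", "a"], set("abcdefghij"))
print(signal)  # coverage 0.3 -> flag=True, focus on item "a"
```

A mixed-initiative system could use such a flag as the trigger for communicating the potential bias to the user, rather than relying on a fixed, static initiative.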
Award ID(s): 1813281
NSF-PAR ID: 10226726
Journal Name: Workshop on Trust and Expertise in Visual Analytics (TREX)
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    In multiple watershed planning and design problems, such as conservation planning, quantitative estimates of the costs and environmental benefits of proposed conservation decisions may not be the only criteria that influence stakeholders' preferences for those decisions. Their preferences may also be influenced by the conservation decision itself: specifically, the type of practice, where it is being proposed, existing biases, and previous experiences with the practice. While human-in-the-loop search techniques, such as Interactive Genetic Algorithms (IGA), provide opportunities for stakeholders to incorporate their preferences in the design of alternatives, examining user-preferred conservation design alternatives for patterns in Decision Space can provide insights into which local decisions have higher or lower agreement among stakeholders. In this paper, we explore and compare spatial patterns in conservation decisions (specifically involving cover crops and filter strips) within design alternatives generated by IGA and noninteractive GA. Methods for comparing patterns include nonvisual as well as visualization approaches, including a novel visual analytics technique. Results for the study site show that user-preferred designs generated by all participants had a strong bias for cover crops in a majority (50%–83%) of the subbasins. Further, exploration with heat map visualizations indicates that IGA-based search yielded very different spatial patterns of user-preferred decisions in subbasins compared with decisions within design alternatives generated without the human in the loop. Finally, the proposed coincident-nodes, multiedge graph visualization was helpful for visualizing disagreement among participants in local subbasin-scale decisions and for visualizing spatial patterns in local subbasin-scale costs and benefits.
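The abstract above describes the IGA only at a high level. The following is a minimal sketch of how a human-in-the-loop genetic search over binary subbasin decisions could be wired together; the watershed model and the stakeholder rating are stand-in lambdas, and all names and parameters are hypothetical rather than taken from the study.

```python
import random

def run_iga(n_subbasins=12, pop_size=8, generations=5,
            model_fitness=None, user_score=None):
    """Toy interactive GA over binary conservation decisions
    (1 = place a practice in that subbasin, 0 = do not).

    model_fitness: design -> float, e.g., simulated benefit minus cost
    user_score:    design -> float in [0, 1], the human-in-the-loop rating
    """
    pop = [[random.randint(0, 1) for _ in range(n_subbasins)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Blend the quantitative score with the stakeholder's preference.
        scored = sorted(pop, key=lambda d: model_fitness(d) + user_score(d),
                        reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_subbasins)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_subbasins)        # point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda d: model_fitness(d) + user_score(d))

# Stand-ins: a fake watershed model and a "user" who prefers sparse designs.
best = run_iga(model_fitness=lambda d: sum(d) * 0.4,
               user_score=lambda d: 1.0 - sum(d) / len(d))
print(best)
```

In the study itself the user score would come from live stakeholder ratings of candidate designs and the fitness from a watershed simulation; comparing the designs this loop converges to against a noninteractive GA's output is what exposes the spatial preference patterns the paper analyzes.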
  2. The success of deep learning (DL) can be attributed to hours of parameter and architecture tuning by human experts. Neural Architecture Search (NAS) techniques aim to solve this problem by automating the search procedure for deep neural network (DNN) architectures, making it possible for non-experts to work with DNNs. In particular, One-Shot NAS techniques have recently gained popularity because they reduce search time. One-Shot NAS works by training a large template network, through parameter sharing, that includes all candidate networks. This is followed by ranking its components by evaluating randomly chosen candidate architectures. However, as these search models become increasingly powerful and diverse, they become harder to understand. Consequently, even though the search results work well, it is hard to identify search biases and control the search progression; hence the need for explainability and human-in-the-loop (HIL) One-Shot NAS. To alleviate these problems, we present NAS-Navigator, a visual analytics (VA) system aiming to address three problems with One-Shot NAS: explainability, HIL design, and performance improvements over existing state-of-the-art (SOTA) techniques. NAS-Navigator puts full control of NAS back in the hands of users while keeping the benefits of automated search, thus assisting non-expert users. Analysts can use their domain knowledge, aided by cues from the interface, to guide the search. Evaluation results confirm that the performance of our improved One-Shot NAS algorithm is comparable to other SOTA techniques, and adding VA through NAS-Navigator yields further improvements in search time and performance. We designed our interface in collaboration with several deep learning researchers and evaluated NAS-Navigator through a controlled experiment and expert interviews.
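To make the mechanics described above concrete (a shared-weight template network, random single-path sampling during training, then ranking candidates under the shared weights), here is a toy single-path one-shot NAS sketch in PyTorch. It illustrates the general technique only; NAS-Navigator's actual search algorithm and interface are not reproduced, and all module and variable names are assumptions.

```python
import random
import torch
import torch.nn as nn

class SuperNet(nn.Module):
    """Toy one-shot template network: each layer holds several candidate
    ops whose weights are shared across all sampled sub-networks."""
    def __init__(self, dim=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.ModuleList([
                nn.Linear(dim, dim),                            # op 0: linear
                nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),  # op 1: linear+ReLU
                nn.Identity(),                                  # op 2: skip
            ])
            for _ in range(n_layers)
        ])

    def forward(self, x, arch):
        # arch is a list of op indices, one per layer: a single path.
        for layer, op_idx in zip(self.layers, arch):
            x = layer[op_idx](x)
        return x

def random_arch(net):
    return [random.randrange(len(layer)) for layer in net.layers]

net = SuperNet()
opt = torch.optim.SGD(net.parameters(), lr=0.01)
x = torch.randn(32, 16)   # toy data standing in for a real task
y = torch.randn(32, 16)

# Phase 1: train the shared weights, sampling one random path per step.
for _ in range(100):
    arch = random_arch(net)
    loss = nn.functional.mse_loss(net(x, arch), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: rank randomly chosen candidates using the shared weights.
with torch.no_grad():
    candidates = [random_arch(net) for _ in range(20)]
    ranked = sorted(candidates,
                    key=lambda a: nn.functional.mse_loss(net(x, a), y).item())
print("best architecture:", ranked[0])
```

A HIL system like NAS-Navigator would sit between the two phases, letting analysts inspect which ops are winning and constrain which paths get sampled, rather than leaving the ranking entirely to random evaluation.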
  3. The use of cognitive heuristics often leads to fast and effective decisions. However, heuristics can also systematically and predictably lead to errors known as cognitive biases. Strategies for minimizing or mitigating these biases, however, remain largely non-technological (e.g., training courses). The growing use of visual analytics (VA) tools for analysis and decision making enables a new class of bias mitigation strategies. In this work, we explore the ways in which the design of visualizations (vis) may be used to mitigate cognitive biases. We derive a design space comprising 8 dimensions that can be manipulated to impact a user's cognitive and analytic processes, and we describe them through an example hiring scenario. This design space can be used to guide and inform future vis systems that integrate cognitive processes more closely.
  4. Over the past several decades, environmental governance has made substantial progress in addressing environmental change, but emerging environmental problems require new innovations in law, policy, and governance. While expansive legal reform is unlikely to occur soon, there is untapped potential in existing laws to address environmental change, both by leveraging adaptive and transformative capacities within the law itself to enhance social-ecological resilience and by using those laws to allow social-ecological systems to adapt and transform. Legal and policy research to date has largely overlooked this potential, even though it offers a more expedient approach to addressing environmental change than waiting for full-scale environmental law reform. We highlight examples from the United States and the European Union of untapped capacity in existing laws for fostering resilience in social-ecological systems. We show that governments and other governance agents can make substantial advances in addressing environmental change in the short term—without major legal reform—by exploiting those untapped capacities, and we offer principles and strategies to guide such initiatives. 
  5. Currently, there is a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research, which aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work we examine the challenges that arise when humans and fair AI interact. Our results show that, due to an apparent conflict between human preferences and fairness, a fair AI algorithm on its own may be insufficient to achieve its intended results in the real world. Using college major recommendation as a case study, we build a fair AI recommender by employing gender-debiasing machine learning techniques. Our offline evaluation showed that the debiased recommender makes fairer career recommendations without sacrificing prediction accuracy. Nevertheless, an online user study of more than 200 college students revealed that participants on average preferred the original biased system over the debiased system. Specifically, we found that perceived gender disparity is a determining factor in the acceptance of a recommendation. In other words, we cannot fully address the gender bias issue in AI recommendations without addressing the gender bias in humans. We conducted a follow-up survey to gain additional insights into the effectiveness of various design options that can help participants overcome their own biases. Our results suggest that making fair AI explainable is crucial for increasing its adoption in the real world.
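The abstract does not name the specific debiasing technique used, so the sketch below illustrates one common option: linearly projecting a learned protected-attribute direction out of user embeddings before recommendations are scored. Everything here (the function name, data shapes, and the choice of group-mean direction) is an assumption for illustration, not the paper's method.

```python
import numpy as np

def debias_embeddings(user_vecs, group_labels):
    """Illustrative linear debiasing: remove the direction that best
    separates two protected groups from every user embedding.

    user_vecs:    (n_users, d) learned user representations
    group_labels: array of 0/1 protected-attribute labels
    """
    g0 = user_vecs[group_labels == 0].mean(axis=0)
    g1 = user_vecs[group_labels == 1].mean(axis=0)
    direction = g1 - g0
    direction /= np.linalg.norm(direction)   # unit protected-attribute axis
    # Subtract each vector's component along that axis.
    projections = user_vecs @ direction
    return user_vecs - np.outer(projections, direction)

rng = np.random.default_rng(0)
vecs = rng.normal(size=(100, 8))             # toy user embeddings
labels = rng.integers(0, 2, size=100)        # toy group labels
clean = debias_embeddings(vecs, labels)

# After projection, no component remains along the removed axis.
d = vecs[labels == 1].mean(0) - vecs[labels == 0].mean(0)
d /= np.linalg.norm(d)
print(abs(clean @ d).mean())   # ~0: residual along the protected axis
```

As the user study in the abstract suggests, such a debiasing step addresses only the algorithmic side; whether users accept the fairer recommendations depends on perceived disparity and on how the system explains itself.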