Title: Keeping Designers in the Loop: Communicating Inherent Algorithmic Trade-offs Across Multiple Objectives
Artificial intelligence algorithms have been used to enhance a wide variety of products and services, including assisting human decision making in high-stakes contexts. However, these algorithms are complex and have trade-offs, notably between prediction accuracy and fairness to population subgroups. This makes it hard for designers to understand algorithms and design products or services in a way that respects users' goals, values, and needs. We proposed a method to help designers and users explore algorithms, visualize their trade-offs, and select algorithms with trade-offs consistent with their goals and needs. We evaluated our method on the problem of predicting criminal defendants' likelihood to re-offend through (i) a large-scale Amazon Mechanical Turk experiment, and (ii) in-depth interviews with domain experts. Our evaluations show that our method can help designers and users of these systems better understand and navigate algorithmic trade-offs. This paper contributes a new way of providing designers the ability to understand and control the outcomes of algorithmic systems they are creating.
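The accuracy-fairness trade-off described in the abstract can be made concrete with a small illustration. The sketch below is not the paper's tool; it simply sweeps the decision threshold of a single hypothetical risk model on synthetic data and reports overall accuracy next to the gap in positive-prediction rates between two subgroups, which is the kind of trade-off a designer would need to inspect before choosing an operating point. All data and thresholds are made up.

```python
# Hedged sketch: threshold sweep showing accuracy vs. a demographic-parity gap.
# Scores, outcomes, and subgroup labels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)                                   # hypothetical risk scores
labels = (scores + rng.normal(0, 0.2, 1000) > 0.5).astype(int)    # synthetic outcomes
group = rng.integers(0, 2, 1000)                                  # synthetic subgroup membership

for threshold in np.linspace(0.2, 0.8, 7):
    preds = (scores >= threshold).astype(int)
    accuracy = (preds == labels).mean()
    # Gap in "high risk" prediction rates between the two subgroups
    parity_gap = abs(preds[group == 0].mean() - preds[group == 1].mean())
    print(f"threshold={threshold:.2f}  accuracy={accuracy:.3f}  parity_gap={parity_gap:.3f}")
```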
Award ID(s):
2001851 2000782
PAR ID:
10178837
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2020 ACM Designing Interactive Systems Conference
Page Range / eLocation ID:
1245 to 1257
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    With the growing industry applications of Artificial Intelligence (AI) systems, pre-trained models and APIs have emerged and greatly lowered the barrier of building AI-powered products. However, novice AI application designers often struggle to recognize the inherent algorithmic trade-offs and evaluate model fairness before making informed design decisions. In this study, we examined the Objective Revision Evaluation System (ORES), a machine learning (ML) API in Wikipedia used by the community to build anti-vandalism tools. We designed an interactive visualization system to communicate model threshold trade-offs and fairness in ORES. We evaluated our system by conducting 10 in-depth interviews with potential ORES application designers. We found that our system helped application designers who have limited ML backgrounds learn about in-context ML knowledge, recognize inherent value trade-offs, and make design decisions that aligned with their goals. By demonstrating our system in a real-world domain, this paper presents a novel visualization approach to facilitate greater accessibility and human agency in AI application design. 
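As a rough illustration of the threshold trade-offs this study visualizes (not ORES itself or its API), the sketch below sweeps a score threshold for a synthetic damaging-edit classifier and shows how recall of vandalism trades against precision, i.e., against how many good-faith edits get flagged. Scores and labels are synthetic stand-ins.

```python
# Hedged sketch of a classifier-threshold trade-off; not ORES output.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
is_damaging = rng.random(n) < 0.1          # roughly 10% synthetic damaging edits
scores = np.clip(rng.normal(0.7, 0.2, n) * is_damaging
                 + rng.normal(0.3, 0.2, n) * ~is_damaging, 0, 1)

for threshold in (0.3, 0.5, 0.7, 0.9):
    flagged = scores >= threshold
    precision = (flagged & is_damaging).sum() / max(flagged.sum(), 1)
    recall = (flagged & is_damaging).sum() / max(is_damaging.sum(), 1)
    print(f"threshold={threshold:.1f}  precision={precision:.2f}  recall={recall:.2f}")
```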
  2. Meeting the United Nations (UN) sustainable development goals efficiently requires designers and engineers to solve multi-objective optimization problems involving trade-offs between social, environmental, and economic impacts. This paper presents an approach for designers and engineers to quantify the social and environmental impacts of a product at a population level and then perform a trade-off analysis between those impacts. In this approach, designers and engineers define the attributes of the product as well as the materials and processes used in the product’s life cycle. Agent-based modeling (ABM) tools that have been developed to model the social impacts of products are combined with life cycle assessment (LCA) tools that have been developed to evaluate the pressures that different processes create on the environment. Designers and engineers then evaluate the trade-offs between impacts by finding non-dominated solutions that minimize environmental impacts while maximizing positive and/or minimizing negative social impacts. Product adoption models generated by ABM allow designers and engineers to approximate population-level environmental impacts and avoid Simpson’s paradox, where the preferred choice reverses when impacts are compared at the population level rather than at the individual product level. This analysis of impacts has the potential to help designers and engineers create more impactful products that aid in reaching the UN sustainable development goals.
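A minimal sketch of the trade-off step described above (not the paper's ABM/LCA pipeline): given candidate designs scored on an environmental-impact objective and a negative-social-impact objective, keep only the non-dominated set. Design names and impact values are illustrative placeholders.

```python
# Pareto-filter sketch: keep designs for which no other design is at least as
# good on both minimized objectives and strictly better on one.
# (environmental impact, negative social impact) values are made up.
candidates = {
    "design_A": (12.0, 3.5),
    "design_B": (9.0, 4.0),
    "design_C": (15.0, 2.0),
    "design_D": (10.0, 5.0),  # dominated by design_B on both objectives
}

def non_dominated(designs):
    keep = {}
    for name, (env, soc) in designs.items():
        dominated = any(
            o_env <= env and o_soc <= soc and (o_env, o_soc) != (env, soc)
            for other, (o_env, o_soc) in designs.items()
            if other != name
        )
        if not dominated:
            keep[name] = (env, soc)
    return keep

print(non_dominated(candidates))  # design_A, design_B, design_C form the Pareto front
```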
  3. Traditional recommender systems help users find the most relevant products or services to match their needs and preferences. However, they overlook the preferences of other sides of the market (aka stakeholders) involved in the system. In this paper, we propose to use contextual bandit algorithms in multi-stakeholder platforms where a multi-sided relevance function with adjusting weights is modeled to consider the preferences of all involved stakeholders. This algorithm sequentially recommends the items based on the contextual features of users along with the priority of the stakeholders and their relevance to the items. Our extensive experimental results on a dataset consisting of MovieLens (1m), IMDB (81k+), and a synthetic dataset show that our proposed approach outperforms the baseline methods and provides a good trade-off between the satisfaction of different stakeholders over time.
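A simplified stand-in for the idea above (the paper uses contextual bandit algorithms; this sketch uses a plain epsilon-greedy policy for brevity): the reward that drives learning is a weighted combination of per-stakeholder relevance scores, so the recommender balances users, providers, and the platform. Stakeholder weights, items, and context features are all illustrative assumptions.

```python
# Simplified multi-stakeholder bandit sketch. The paper's method is a contextual
# bandit; an epsilon-greedy policy stands in here so the focus stays on the
# multi-sided reward. All names and numbers are hypothetical.
import random

STAKEHOLDER_WEIGHTS = {"user": 0.6, "provider": 0.3, "platform": 0.1}
ITEMS = ["item_1", "item_2", "item_3"]
EPSILON = 0.1

value_estimates = {item: 0.0 for item in ITEMS}
pull_counts = {item: 0 for item in ITEMS}

def multi_sided_reward(item, context):
    """Combine per-stakeholder relevance scores into a single scalar reward."""
    relevance = {
        "user": context["user_affinity"][item],
        "provider": context["provider_exposure_need"][item],
        "platform": context["margin"][item],
    }
    return sum(w * relevance[s] for s, w in STAKEHOLDER_WEIGHTS.items())

def recommend():
    """Epsilon-greedy over running value estimates; a full contextual bandit
    would condition these estimates on the user's context features."""
    if random.random() < EPSILON:
        return random.choice(ITEMS)
    return max(ITEMS, key=lambda item: value_estimates[item])

def update(item, reward):
    """Incremental mean update of the chosen item's estimated value."""
    pull_counts[item] += 1
    value_estimates[item] += (reward - value_estimates[item]) / pull_counts[item]

# One simulated interaction with made-up per-stakeholder relevance scores.
context = {
    "user_affinity": {"item_1": 0.9, "item_2": 0.4, "item_3": 0.6},
    "provider_exposure_need": {"item_1": 0.2, "item_2": 0.8, "item_3": 0.5},
    "margin": {"item_1": 0.5, "item_2": 0.5, "item_3": 0.7},
}
chosen = recommend()
update(chosen, multi_sided_reward(chosen, context))
print(chosen, value_estimates)
```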
  4. In the past few years, there has been much work on incorporating fairness requirements into the design of algorithmic rankers, with contributions from the data management, algorithms, information retrieval, and recommender systems communities. In this tutorial, we give a systematic overview of this work, offering a broad perspective that connects formalizations and algorithmic approaches across subfields. During the first part of the tutorial, we present a classification framework for fairness-enhancing interventions, along which we will then relate the technical methods. This framework allows us to unify the presentation of mitigation objectives and of algorithmic techniques to help meet those objectives or identify trade-offs. Next, we discuss fairness in score-based ranking and in supervised learning-to-rank. We conclude with recommendations for practitioners, to help them select a fair ranking method based on the requirements of their specific application domain. 
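One example of the kind of fairness-enhancing intervention such a tutorial classifies is a greedy re-ranking of score-sorted candidates that enforces a minimum protected-group share in every top-k prefix. The sketch below is a generic illustration under that assumption, not a method taken from the tutorial; candidates, scores, and the 0.4 target share are made up.

```python
# Greedy fair re-ranking sketch: every top-k prefix must contain at least
# min_share protected-group items; when the constraint binds, the best
# remaining protected item is promoted, otherwise items follow score order.
# Candidates, scores, and min_share are illustrative placeholders.
def fair_rerank(candidates, min_share=0.4):
    """candidates: list of (item_id, score, is_protected); higher score is better."""
    remaining = sorted(candidates, key=lambda c: c[1], reverse=True)
    ranking, protected_so_far = [], 0
    while remaining:
        k = len(ranking) + 1
        need_protected = protected_so_far < int(min_share * k)
        pool = [c for c in remaining if c[2]] if need_protected else remaining
        pick = pool[0] if pool else remaining[0]  # fall back if no protected items remain
        remaining.remove(pick)
        ranking.append(pick)
        protected_so_far += int(pick[2])
    return ranking

items = [("a", 0.9, False), ("b", 0.8, False), ("c", 0.7, False),
         ("d", 0.6, True), ("e", 0.5, True)]
for rank, (item_id, score, protected) in enumerate(fair_rerank(items), start=1):
    print(rank, item_id, score, protected)
# "d" is promoted above "c" so the top-3 prefix meets the 0.4 protected share.
```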
  5. Recommender systems are usually designed by engineers, researchers, designers, and other members of development teams. These systems are then evaluated based on goals set by the aforementioned teams and other business units of the platforms operating the recommender systems. This design approach emphasizes the designers’ vision for how the system can best serve the interests of users, providers, businesses, and other stakeholders. Although designers may be well-informed about user needs through user experience and market research, they are still the arbiters of the system’s design and evaluation, with other stakeholders’ interests less emphasized in user-centered design and evaluation. When extended to recommender systems for social good, this approach results in systems that reflect the social objectives as envisioned by the designers and evaluated as the designers understand them. Instead, social goals and operationalizations should be developed through participatory and democratic processes that are accountable to their stakeholders. We argue that recommender systems aimed at improving social good should be designed by and with, not just for, the people who will experience their benefits and harms. That is, they should be designed in collaboration with their users, creators, and other stakeholders as full co-designers, not only as user study participants.