Title: Fairkit-learn: A Fairness Evaluation and Comparison Toolkit
Advances in how we build and use software, specifically the integration of machine learning for decision making, have led to widespread concern around model and software fairness. We present fairkit-learn, an interactive Python toolkit designed to support data scientists' ability to reason about and understand model fairness. We outline how fairkit-learn can support model training, evaluation, and comparison, and describe the potential benefits of using fairkit-learn compared to state-of-the-art toolkits. Fairkit-learn is open source at https://go.gmu.edu/fairkit-learn/.
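As an illustration of the train/evaluate/compare workflow described above, the following Python sketch trains two scikit-learn models and compares them on accuracy and a simple group-fairness metric (demographic parity difference). This is a hypothetical sketch, not fairkit-learn's actual API; the synthetic data, model choices, and metric are assumptions made for the example.

    # Hypothetical sketch of the kind of workflow fairkit-learn supports;
    # the toolkit's real API differs (see https://go.gmu.edu/fairkit-learn/).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic data with a binary protected attribute ("group").
    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, n)
    x = rng.normal(size=(n, 3)) + 0.5 * group[:, None]
    y = (x.sum(axis=1) + rng.normal(scale=0.5, size=n) > 0.75).astype(int)
    features = np.column_stack([x, group])

    x_tr, x_te, y_tr, y_te, g_tr, g_te = train_test_split(
        features, y, group, test_size=0.3, random_state=0)

    def demographic_parity_difference(y_pred, g):
        # Gap in positive-prediction rates between the two groups.
        return abs(y_pred[g == 1].mean() - y_pred[g == 0].mean())

    # Evaluate and compare candidate models on accuracy and fairness.
    for model in (LogisticRegression(max_iter=1000),
                  DecisionTreeClassifier(max_depth=4)):
        model.fit(x_tr, y_tr)
        pred = model.predict(x_te)
        print(type(model).__name__,
              "accuracy=%.3f" % accuracy_score(y_te, pred),
              "dem_parity_diff=%.3f" % demographic_parity_difference(pred, g_te))

In the toolkit itself such comparisons are interactive; this sketch only conveys the shape of the task.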
Award ID(s):
1763423
PAR ID:
10334566
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Demonstrations Track at the 44th International Conference on Software Engineering (ICSE)
Page Range / eLocation ID:
70-74
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. A goal of software engineering research is advancing software quality and the success of the software engineering process. However, while recent studies have demonstrated a new kind of defect in software related to its ability to operate in a fair and unbiased manner, software engineering has not yet wholeheartedly tackled these new kinds of defects, thus leaving software vulnerable. This paper outlines a vision for how software engineering research can help reduce fairness defects and represents a call to action to the software engineering research community to reify that vision. Modern software is riddled with examples of biased behavior, from automated translation injecting gender stereotypes, to vision systems failing to see faces of certain races, to the US criminal justice system relying on biased computational assessments of crime recidivism. While systems may learn bias from biased data, bias can also emerge from ambiguous or incomplete requirements specification, poor design, implementation bugs, and unintended component interactions. We argue that software fairness is analogous to software quality, and that numerous software engineering challenges in the areas of requirements, specification, design, testing, and verification need to be tackled to solve this problem. (A minimal sketch of a fairness check written as a unit test appears after this list.)
  2. The development of Artificial Intelligence (AI) systems involves a significant level of judgment and decision making on the part of engineers and designers to ensure the safety, robustness, and ethical design of such systems. However, the kinds of judgments that practitioners employ while developing AI platforms are rarely foregrounded or examined, leaving unexplored the areas where practitioners might need ethical support. In this short paper, we employ the concept of design judgment to foreground and examine the kinds of sensemaking software engineers use to inform their decision making while developing AI systems. Relying on data generated from two exploratory observation studies of student software engineers, we connect the concept of fairness to the foregrounded judgments to examine their potential impacts on algorithmic fairness. Our findings surface some ways in which the design judgment of software engineers could adversely impact the downstream goal of ensuring fairness in AI systems. We discuss the implications of these findings for fostering positive innovation and enhancing fairness in AI systems, drawing attention to the need to provide ethical guidance, support, or intervention to practitioners as they engage in situated and contextual judgments while developing AI systems.
  3. We present four elements we believe are key to providing comprehensive and sustainable support for research software engineering: software development, community, training, and policy. We also show how the wider developer community can learn from, and engage with, these activities.
  4. Graph mining is an essential component of recommender systems and search engines. Outputs of graph mining models typically provide a ranked list sorted by each item's relevance or utility. However, recent research has identified issues of algorithmic bias in such models, and new graph mining algorithms have been proposed to correct for bias. As such, algorithm developers need tools that can help them uncover potential biases in their models while also exploring the impacts of correcting for biases when employing fairness-aware algorithms. In this paper, we present FairRankVis, a visual analytics framework designed to enable the exploration of multi-class bias in graph mining algorithms. We support comparison at both the group and individual fairness levels. Our framework is designed to enable model developers to compare multi-class fairness between algorithms (for example, comparing PageRank with a debiased PageRank algorithm) to assess the impacts of algorithmic debiasing with respect to group and individual fairness. We demonstrate our framework through two usage scenarios inspecting algorithmic fairness. (A minimal sketch of a group-level ranking comparison appears after this list.)
  5. Virtual research environments (VREs) are well positioned to support many aspects of FAIR because they provide a workspace for collaboration and for sharing data, simulations, and workflows. The FAIR for VRE Working Group has developed a checklist for measuring FAIRness in science gateways. The checklist addresses the complexity of both the target group (developers or users) and the granularity involved: VREs as software frameworks, services, APIs, workflows, data, and simulations. We assume that VREs are not only FAIR as software frameworks but are also FAIR-enabling for the digital objects they contain. The objective of this session is to discuss how to recognize and incentivize providers, developers, and users who actively work toward FAIRness of digital objects. The idea is to address this via badges; it likely makes sense to split the badges across the four principles: Findable, Accessible, Interoperable, and Reusable. Many open questions remain beyond this granularity: how are badges created, who awards them, and what rules govern how long a badge remains valid?
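To make concrete the analogy in item 1 between fairness defects and ordinary quality defects, here is a minimal sketch of a fairness property expressed as a unit test. It is an illustration, not from the paper; the stand-in model, data distributions, and 0.1 tolerance are all invented for the example.

    # Sketch: a fairness check written as an ordinary unit test, so a fairness
    # defect surfaces the same way a functional defect would. The model, data,
    # and tolerance are assumptions made for this example.
    import unittest
    import numpy as np

    def loan_model(income):
        # Stand-in decision rule; in practice this would be a trained model.
        return (income > 50_000).astype(int)

    class DemographicParityTest(unittest.TestCase):
        def test_positive_rates_within_tolerance(self):
            rng = np.random.default_rng(1)
            group = rng.integers(0, 2, 10_000)
            # The two groups are drawn from different income distributions,
            # so income acts as a proxy for group membership.
            income = rng.normal(55_000 + 5_000 * group, 10_000)
            decisions = loan_model(income)
            gap = abs(decisions[group == 1].mean() - decisions[group == 0].mean())
            # With this biased stand-in model the assertion fails, i.e., the
            # fairness defect is caught by the test suite like any other bug.
            self.assertLess(gap, 0.1, "demographic parity gap exceeds tolerance")

    if __name__ == "__main__":
        unittest.main()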
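And as a minimal sketch of the group-level ranking comparison described in item 4 (illustrative only, not FairRankVis itself; the graph, the grouping, and the exposure metric are assumptions for the example), the following compares how much rank-based exposure two node groups receive under standard PageRank; a debiased ranking would be compared with the same metric.

    # Sketch of a group-fairness comparison over a ranked graph-mining output,
    # in the spirit of FairRankVis (not its implementation). The graph, group
    # labels, and exposure metric are assumptions made for this example.
    import math
    import networkx as nx

    # Small random directed graph; nodes 0-4 form group A, nodes 5-9 group B.
    g = nx.gnp_random_graph(10, 0.3, seed=42, directed=True)
    group = {node: "A" if node < 5 else "B" for node in g.nodes}

    scores = nx.pagerank(g, alpha=0.85)
    ranking = sorted(scores, key=scores.get, reverse=True)

    def group_exposure(ranking, group, label):
        # Average log-discounted exposure of a group's members: positions
        # near the top of the ranking contribute more.
        discounts = [1.0 / math.log2(pos + 2)
                     for pos, node in enumerate(ranking) if group[node] == label]
        return sum(discounts) / len(discounts)

    for label in ("A", "B"):
        print(label, "exposure=%.3f" % group_exposure(ranking, group, label))

Running the same comparison on a fairness-aware (debiased) ranking and contrasting the two exposure gaps is the kind of group-level analysis such a framework visualizes.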