Title: Separating facts and evaluation: motivation, account, and learnings from a novel approach to evaluating the human impacts of machine learning
In this paper, we outline a new method for evaluating the human impact of machine-learning (ML) applications. In partnership with Underwriters Laboratories Inc., we have developed a framework to evaluate the impacts of a particular use of machine learning that is based on the goals and values of the domain in which that application is deployed. By examining the use of artificial intelligence (AI) in particular domains, such as journalism, criminal justice, or law, we can develop more nuanced and practically relevant understandings of key ethical guidelines for artificial intelligence. By decoupling the extraction of the facts of the matter from the evaluation of the impact of the resulting systems, we create a framework for the process of assessing impact that has two distinctly different phases.
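The two-phase structure the abstract describes (first establish the facts of the matter, then evaluate those facts against the goals and values of the deployment domain) can be sketched in code. This is a minimal illustrative sketch, not the paper's framework: the `Facts` fields, the thresholds, and the `evaluate` function are all hypothetical.

```python
from dataclasses import dataclass

# Phase 1: extract the facts of the matter -- descriptive, value-neutral
# properties of the deployed ML system. Field names are illustrative.
@dataclass(frozen=True)
class Facts:
    domain: str               # e.g. "journalism", "criminal justice", "law"
    decision_automated: bool  # does the system act without human review?
    error_rate: float         # measured error rate on held-out data

# Phase 2: evaluate those facts against the goals and values of the
# domain in which the system is deployed. Thresholds are hypothetical.
def evaluate(facts: Facts, max_error: float, require_human_review: bool) -> list[str]:
    """Return a list of domain-specific concerns raised by the facts."""
    concerns = []
    if facts.error_rate > max_error:
        concerns.append(
            f"error rate {facts.error_rate:.2f} exceeds domain threshold {max_error:.2f}"
        )
    if require_human_review and facts.decision_automated:
        concerns.append("domain norms require human review of automated decisions")
    return concerns

facts = Facts(domain="criminal justice", decision_automated=True, error_rate=0.12)
print(evaluate(facts, max_error=0.05, require_human_review=True))
```

The point of the separation is that Phase 1 can be performed once, while Phase 2 can be re-run with different domain values without re-gathering the facts.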
Award ID(s):
1917707
PAR ID:
10319637
Author(s) / Creator(s):
Date Published:
Journal Name:
AI & SOCIETY
ISSN:
0951-5666
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Pranjol, Zahid (Ed.)
    This perspective article explores and advocates approaches for designing equitable learning experiences around students' use of artificial intelligence (AI), machine learning, and technology through the Universal Design for Learning (UDL) framework, using chemistry examples that can be applied to any STEM course. The use of AI and machine learning is disrupting learning in higher education and casting a spotlight on systemic inequities, particularly those affecting minoritized groups broadly and in STEM fields. In particular, the emergence of AI has drawn attention to inequities facing minoritized students in academic and professional ethics. As the U.S. education system grapples with a nuanced mix of acceptance of and hesitation toward AI, the need for inclusive and equitable education, impactful learning practices, and innovative strategies has become more pronounced. Promoting equitable approaches to the use of AI and technology in STEM learning will be an important milestone in addressing STEM disparities affecting minoritized groups and in ensuring equitable access to evolving technology.
  2. The ecosystem of automated offensive security tools has grown in recent years. As more tools automate offensive security techniques with Artificial Intelligence (AI) and Machine Learning (ML), they may introduce vulnerabilities to adversarial attacks. It is therefore imperative to research the techniques these security tools use. Our work explores the current state of the art in offensive security tools. First, we employ an abstract model that can be used to understand which phases of an Offensive Cyber Operation (OCO) can be automated. We then adopt a generalizable taxonomy and apply it to automation tools (covering both conventional automation and the use of artificial intelligence in automation). We then curated a dataset of tools and research papers and analyzed it quantitatively. Our work resulted in a public dataset that includes analysis of (n=57) papers and OCO tools, mapped to MITRE ATT&CK Framework enterprise techniques, the applicable phases of our OCO model, and the details of the automation technique. The results show a need for a granular expansion of the ATT&CK Exploit Public-Facing Application technique. A critical finding is that most OCO tools employ simple rule-based automation, hinting at a lucrative research opportunity for the use of AI and ML in future OCO tooling.
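The quantitative analysis this abstract describes (mapping each curated tool to an ATT&CK technique and an automation category, then tallying the categories) might look roughly like the following. The record schema, tool names, and category labels are assumptions for illustration, not the paper's actual dataset.

```python
from collections import Counter

# Illustrative records for a curated dataset of OCO tools. T1190 is the
# ATT&CK "Exploit Public-Facing Application" technique named in the
# abstract; the tool names and automation labels are hypothetical.
tools = [
    {"name": "tool-a", "attack_technique": "T1190", "automation": "simple-rule-based"},
    {"name": "tool-b", "attack_technique": "T1595", "automation": "simple-rule-based"},
    {"name": "tool-c", "attack_technique": "T1190", "automation": "ml-assisted"},
]

# Quantitative analysis: tally automation techniques across the dataset,
# mirroring the finding that most tools use simple rule-based automation.
by_automation = Counter(t["automation"] for t in tools)
print(by_automation.most_common(1))  # → [('simple-rule-based', 2)]
```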
  3. Many organizations seek to ensure that machine learning (ML) and artificial intelligence (AI) systems work as intended in production but currently do not have a cohesive methodology in place to do so. To fill this gap, we propose MLTE (Machine Learning Test and Evaluation, colloquially referred to as "melt"), a framework and implementation to evaluate ML models and systems. The framework compiles state-of-the-art evaluation techniques into an organizational process for interdisciplinary teams, including model developers, software engineers, system owners, and other stakeholders. MLTE tooling supports this process by providing a domain-specific language that teams can use to express model requirements, an infrastructure to define, generate, and collect ML evaluation metrics, and the means to communicate results. 
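The express-requirements / collect-metrics / communicate-results cycle that the MLTE process describes can be illustrated with a toy sketch. This is not the MLTE domain-specific language or API; every name and threshold below is hypothetical.

```python
# Teams express model requirements as named checks over metric values.
# In MLTE this is done through a domain-specific language; here a plain
# dict of predicates stands in for it.
requirements = {
    "accuracy": lambda v: v >= 0.90,    # model developer's quality bar
    "latency_ms": lambda v: v <= 50.0,  # system owner's serving budget
}

# Metrics collected from an evaluation run (hypothetical values).
measured = {"accuracy": 0.93, "latency_ms": 41.0}

# Communicate results: which requirements the measured values satisfy.
report = {name: check(measured[name]) for name, check in requirements.items()}
print(report)  # → {'accuracy': True, 'latency_ms': True}
```

The design point is that requirements come from different stakeholders (developers, engineers, system owners) but are evaluated and reported through one shared process.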
  4. In this tutorial, we present our recent work on building trusted, resilient, and interpretable AI models by combining symbolic methods developed for automated reasoning with connectionist learning methods that use deep neural networks. The increasing adoption of artificial intelligence and machine learning in systems, including safety-critical systems, has created a pressing need for scalable techniques to establish trust in their safe behavior, resilience to adversarial attacks, and interpretability for human audits. The tutorial comprises three components: a review of techniques for verifying neural networks, methods for using geometric invariants to defend against adversarial attacks, and techniques for extracting logical symbolic rules by reverse-engineering machine learning models. These techniques form the core of TRINITY, the Trusted, Resilient and Interpretable AI framework being developed at SRI. In this tutorial, we identify the key challenges in building the TRINITY framework and report recent results on each of these three fronts.
  5. Abstract The intersection between engineering design, manufacturing, and artificial intelligence offers countless opportunities for breakthrough improvements in how we develop new technology. However, achieving this synergy between the physical and the computational worlds involves overcoming a core challenge: few specialists educated today are trained in both engineering design and artificial intelligence. This fact, combined with the recency of both fields’ adoption and the antiquated state of many institutional data management systems, results in an industrial landscape that is relatively devoid of high-quality data and individuals who can rapidly use that data for machine learning and artificial intelligence development. In order to advance the fields of engineering design and manufacturing to the next level of preparedness for the development of effective artificially intelligent, data-driven analytical and generative tools, a new design for X principle must be established: design for artificial intelligence (DfAI). In this paper, a conceptual framework for DfAI is presented and discussed in the context of the contemporary field and the personas which drive it. 